325,020
https://en.wikipedia.org/wiki/Antarctic%20Circumpolar%20Wave
The Antarctic Circumpolar Wave (ACW) is a coupled ocean/atmosphere wave that circles the Southern Ocean in approximately eight years. Since it is a wave-2 phenomenon (there are two ridges and two troughs in a latitude circle), a signal with a period of four years is seen at each fixed point in space. The wave moves eastward with the prevailing currents. History of the concept Although the "wave" is seen in temperature, atmospheric pressure, sea ice and ocean height, the variations are hard to see in the raw data and need to be filtered to become apparent. Because the reliable record for the Southern Ocean is short (since the early 1980s) and signal processing is needed to reveal its existence, some climatologists doubt the existence of the wave. Others accept its existence but say that it varies in strength over decades. The wave was reported simultaneously by two independent research groups. Since then, ideas about the wave's structure and maintenance mechanisms have changed and grown; by some accounts it is now considered part of a global ENSO wave. See also Antarctic Circle Antarctic Convergence External links Antarctic Circumpolar Wave Description The Antarctic Circumpolar Wave: A Beta Effect in Ocean–Atmosphere Coupling over the Southern Ocean
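The stated four-year local period follows from simple arithmetic on the figures above; as a worked relation:

$$T_{\text{local}} = \frac{T_{\text{transit}}}{k} = \frac{8\ \text{yr}}{2} = 4\ \text{yr},$$

where $k = 2$ is the zonal wavenumber (two crests per latitude circle), so a fixed observer sees a crest pass once per half circumnavigation.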
Antarctic Circumpolar Wave
[ "Physics" ]
264
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
325,028
https://en.wikipedia.org/wiki/Technological%20escalation
Technological escalation describes the situation where two parties in competition tend to employ continual technological improvements in their attempt to defeat each other. Technology is defined here as a creative invention, either in the form of an object or a methodology. An example is the mutual escalation seen between e-mail spammers and the programmers of spam filters and other anti-spam techniques. Although escalation usually carries a negative connotation, if two companies are in an escalating competition to produce the best widget, the consumer benefits by getting a choice between better and better widgets. See also Arms race Competition Conflict (disambiguation) Industrial Revolution Second Industrial Revolution Technological escalation during World War II War
Technological escalation
[ "Biology" ]
157
[ "Behavior", "Aggression", "Human behavior", "Conflict (process)" ]
325,060
https://en.wikipedia.org/wiki/Tidal%20power
Tidal power or tidal energy is harnessed by converting energy from tides into useful forms of power, mainly electricity, using various methods. Although not yet widely used, tidal energy has the potential for future electricity generation. Tides are more predictable than the wind and the sun. Among sources of renewable energy, tidal energy has traditionally suffered from relatively high cost and limited availability of sites with sufficiently high tidal ranges or flow velocities, thus constraining its total availability. However, many recent technological developments and improvements, both in design (e.g. dynamic tidal power, tidal lagoons) and turbine technology (e.g. new axial turbines, cross-flow turbines), indicate that the total availability of tidal power may be much higher than previously assumed and that economic and environmental costs may be brought down to competitive levels. Historically, tide mills have been used both in Europe and on the Atlantic coast of North America. Incoming water was contained in large storage ponds, and as the tide went out, it turned waterwheels that used the mechanical power to mill grain. The earliest occurrences date from the Middle Ages, or even from Roman times. The process of using falling water and spinning turbines to create electricity was introduced in the U.S. and Europe in the 19th century. Electricity generation from marine technologies increased an estimated 16% in 2018 and an estimated 13% in 2019. Policies promoting R&D are needed to achieve further cost reductions and large-scale development. The world's first large-scale tidal power plant was France's Rance Tidal Power Station, which became operational in 1966. It was the largest tidal power station in terms of output until Sihwa Lake Tidal Power Station opened in South Korea in August 2011. The Sihwa station uses sea wall defense barriers complete with 10 turbines generating 254 MW. Principle Tidal energy is taken from the Earth's oceanic tides. Tidal forces result from periodic variations in gravitational attraction exerted by celestial bodies. These forces create corresponding motions or currents in the world's oceans. This results in periodic changes in sea levels, varying as the Earth rotates. These changes are highly regular and predictable, due to the consistent pattern of the Earth's rotation and the Moon's orbit around the Earth (a minimal prediction sketch appears at the end of this section). The magnitude and variations of this motion reflect the changing positions of the Moon and Sun relative to the Earth, the effects of Earth's rotation, and the local geography of the seafloor and coastlines. Tidal power is the only technology that draws on energy inherent in the orbital characteristics of the Earth–Moon system, and to a lesser extent in the Earth–Sun system. Other natural energies exploited by human technology originate directly or indirectly from the Sun, including fossil fuel, conventional hydroelectric, wind, biofuel, wave and solar energy. Nuclear energy makes use of Earth's mineral deposits of fissionable elements, while geothermal power utilizes the Earth's internal heat, which comes from a combination of residual heat from planetary accretion (about 20%) and heat produced through radioactive decay (80%). A tidal generator converts the energy of tidal flows into electricity. Greater tidal variation and higher tidal current velocities can dramatically increase the potential of a site for tidal electricity generation. At the same time, tidal energy offers high reliability, excellent energy density, and high durability.
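The predictability emphasized above comes from the fact that tides are well modeled as a sum of harmonic constituents with fixed astronomical frequencies. A minimal sketch in Python: the constituent speeds are standard astronomical values, but the amplitudes and phases are invented placeholders (real values come from harmonic analysis of a local tide-gauge record):

```python
import math

# Angular speeds (degrees per hour) of four principal tidal constituents;
# these are standard astronomical values.
SPEEDS = {"M2": 28.984104, "S2": 30.0, "K1": 15.041069, "O1": 13.943035}

# Illustrative amplitudes (m) and phase lags (deg) -- placeholders only;
# real values are site-specific and come from fitting a gauge record.
AMPL = {"M2": 1.20, "S2": 0.40, "K1": 0.15, "O1": 0.10}
PHASE = {"M2": 0.0, "S2": 30.0, "K1": 60.0, "O1": 90.0}

def tide_height(t_hours: float, mean_level: float = 0.0) -> float:
    """Predicted tidal height (m) above the mean level at time t (hours)."""
    return mean_level + sum(
        AMPL[c] * math.cos(math.radians(SPEEDS[c] * t_hours - PHASE[c]))
        for c in SPEEDS
    )

# Example: predictions every six hours for one day.
for h in range(0, 25, 6):
    print(f"t = {h:2d} h  height = {tide_height(h):+.2f} m")
```

Because the constituent frequencies never change, the same fitted model can be projected years ahead, which is the sense in which tidal generation is forecastable, unlike wind or solar.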
Because the Earth's tides are ultimately due to gravitational interaction with the Moon and Sun and the Earth's rotation, tidal power is practically inexhaustible and is thus classified as a renewable energy resource. Movement of tides causes a loss of mechanical energy in the Earth–Moon system: this results from the pumping of water through natural restrictions around coastlines and the consequent viscous dissipation at the seabed and in turbulence. This loss of energy has caused the rotation of the Earth to slow in the 4.5 billion years since its formation. During the last 620 million years the period of rotation of the Earth (the length of a day) has increased from 21.9 hours to 24 hours; in this period the Earth–Moon system has lost 17% of its rotational energy. While tidal power will take additional energy from the system, the effect is negligible and would not be noticeable in the foreseeable future. Methods Tidal power can be classified into four generating methods: Tidal stream generator Tidal stream generators make use of the kinetic energy of moving water to power turbines, much as wind turbines use moving air. Some tidal generators can be built into the structures of existing bridges or are entirely submerged, thus avoiding concerns over aesthetics or visual impact. Land constrictions such as straits or inlets can create high velocities at specific sites, which can be captured using turbines. These turbines can be horizontal, vertical, open, or ducted. Tidal barrage Tidal barrages use the potential energy in the difference in height (or hydraulic head) between high and low tides (a worked energy estimate appears after this section). When using tidal barrages to generate power, the potential energy from a tide is captured through the strategic placement of specialized dams. When the sea level rises and the tide begins to come in, the water is channeled into a large basin behind the dam, holding a large amount of potential energy. With the receding tide, this energy is then converted into mechanical energy as the water is released through large turbines that create electrical power through the use of generators. Barrages are essentially dams across the full width of a tidal estuary. Tidal lagoon A newer tidal energy design option is to construct circular retaining walls embedded with turbines that can capture the potential energy of tides. The created reservoirs are similar to those of tidal barrages, except that the location is artificial and does not contain a pre-existing ecosystem. The lagoons can also be built in double (or triple) basin configurations, with or without pumping, to flatten out the power output. The pumping power could be provided by renewable generation in excess of grid demand, for example from wind turbines or solar photovoltaic arrays; rather than being curtailed, the excess renewable energy could be used and stored for a later period of time. Geographically dispersed tidal lagoons with a time delay between peaks in production would also flatten out peak production, providing near-baseload production, albeit at a higher cost than some alternatives such as district heating renewable energy storage. The cancelled Tidal Lagoon Swansea Bay in Wales, United Kingdom would have been the first tidal power station of this type once built.
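The worked energy estimate referenced in the barrage discussion above: the ideal potential energy of a basin of uniform area emptied through a head h is E = ½ρgAh², so yield scales with the square of the tidal range. A minimal sketch, with all basin numbers illustrative and turbine/transmission losses ignored:

```python
RHO = 1025.0   # seawater density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def barrage_energy_joules(basin_area_m2: float, head_m: float) -> float:
    """Ideal potential energy released by emptying a uniform-area basin
    through a head h: E = 1/2 * rho * g * A * h^2 (no losses)."""
    return 0.5 * RHO * G * basin_area_m2 * head_m ** 2

# Illustrative basin: 10 km^2 with a 6 m tidal range, ebb generation only.
area = 10e6   # m^2
head = 6.0    # m
per_tide = barrage_energy_joules(area, head)
cycles_per_year = 365 * 24 / 12.42   # one semidiurnal tide every ~12.42 h
annual_twh = per_tide * cycles_per_year / 3.6e15   # J -> TWh
print(f"Per-tide energy: {per_tide / 3.6e9:.0f} MWh")
print(f"Ideal annual yield: {annual_twh:.2f} TWh (before losses)")
```

The quadratic dependence on head is why siting is so restrictive: halving the tidal range cuts the ideal yield per tide by a factor of four.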
Dynamic tidal power Dynamic tidal power (or DTP) is a theoretical technology that would exploit an interaction between the potential and kinetic energies in tidal flows. It proposes that very long dams (for example, 30–50 km in length) be built from coasts straight out into the sea or ocean, without enclosing an area. Tidal phase differences are introduced across the dam, leading to a significant water-level differential in shallow coastal seas featuring strong coast-parallel oscillating tidal currents, such as those found in the UK, China, and Korea. US and Canadian studies in the 20th century The first study of large-scale tidal power plants was by the US Federal Power Commission in 1924. If built, the power plants would have been located in the northern border area of the US state of Maine and the southeastern border area of the Canadian province of New Brunswick, with various dams, powerhouses, and ship locks enclosing the Bay of Fundy and Passamaquoddy Bay (note: see map in reference). Nothing came of the study, and it is unknown whether Canada had been approached about it by the US Federal Power Commission. In 1956, the utility Nova Scotia Light and Power of Halifax commissioned a pair of studies into the feasibility of commercial tidal power development on the Nova Scotia side of the Bay of Fundy. The two studies, by Stone & Webster of Boston and by Montreal Engineering Company of Montreal, independently concluded that millions of horsepower (i.e. gigawatts) could be harnessed from Fundy but that development costs would be commercially prohibitive. There was also an international commission report of April 1961, entitled "Investigation of the International Passamaquoddy Tidal Power Project", produced by both the US and Canadian federal governments. According to its benefit-to-cost ratios, the project was beneficial to the US but not to Canada. A study was commissioned by the Canadian, Nova Scotia, and New Brunswick governments (Reassessment of Fundy Tidal Power) to determine the potential for tidal barrages at Chignecto Bay and Minas Basin – at the end of the Fundy Bay estuary. Three sites were determined to be financially feasible: Shepody Bay (1550 MW), Cumberland Basin (1085 MW), and Cobequid Bay (3800 MW). These were never built despite their apparent feasibility in 1977. US studies in the 21st century The Snohomish PUD, a public utility district located primarily in Snohomish County, Washington State, began a tidal energy project in 2007. In April 2009 the PUD selected OpenHydro, a company based in Ireland, to develop turbines and equipment for eventual installation. The project as initially designed was to place generation equipment in areas of high tidal flow and operate that equipment for four to five years. After the trial period the equipment would be removed. The project was initially budgeted at a total cost of $10 million, with half of that funding provided by the PUD out of utility reserve funds and half from grants, primarily from the US federal government. The PUD received a $900,000 grant in 2009 and a $3.5 million grant in 2010, in addition to using reserves to pay an estimated $4 million of costs. In 2010 the budget estimate was increased to $20 million, half to be paid by the utility and half by the federal government. The utility was unable to control costs on this project, and by October 2014 the costs had ballooned to an estimated $38 million and were projected to continue to increase. The PUD proposed that the federal government provide an additional $10 million towards this increased cost, citing a gentlemen's agreement.
When the federal government refused to pay this, the PUD cancelled the project after spending nearly $10 million from reserves and grants. The PUD abandoned all tidal energy exploration after this project was cancelled and does not own or operate any tidal energy sources. Rance tidal power plant in France In 1966, Électricité de France opened the Rance Tidal Power Station, located on the estuary of the Rance River in Brittany. It was the world's first tidal power station. The plant was for 45 years the largest tidal power station in the world by installed capacity: its 24 turbines reach a peak output of 240 megawatts (MW) and average 57 MW, a capacity factor of approximately 24%. Tidal power development in the UK The world's first marine energy test facility was established in 2003 to start the development of the wave and tidal energy industry in the UK. Based in Orkney, Scotland, the European Marine Energy Centre (EMEC) has supported the deployment of more wave and tidal energy devices than any other single site in the world. EMEC provides a variety of test sites in real sea conditions. Its grid-connected tidal test site is located at the Fall of Warness, off the island of Eday, in a narrow channel which concentrates the tide as it flows between the Atlantic Ocean and the North Sea. This area has a very strong tidal current, which runs especially fast during spring tides. Tidal energy developers that have tested at the site include Alstom (formerly Tidal Generation Ltd), ANDRITZ HYDRO Hammerfest, Atlantis Resources Corporation, Nautricity, OpenHydro, Scotrenewables Tidal Power, and Voith. The resource could be 4 TJ per year. Elsewhere in the UK, an annual energy of 50 TWh could be extracted if 25 GW of capacity were installed with pivotable blades. Current and future tidal power schemes The Rance tidal power plant was built over a period of six years, from 1960 to 1966, at La Rance, France; it has 240 MW of installed capacity. The 254 MW Sihwa Lake Tidal Power Plant in South Korea is the largest tidal power installation in the world; construction was completed in 2011. The Jiangxia Tidal Power Station, south of Hangzhou in China, has been operational since 1985, with a current installed capacity of 3.2 MW. More tidal power is planned near the mouth of the Yalu River. The first in-stream tidal current generator in North America (the Race Rocks Tidal Power Demonstration Project) was installed at Race Rocks on southern Vancouver Island in September 2006. The Race Rocks project was shut down after operating for five years (2006–2011) because high operating costs produced electricity at a rate that was not economically feasible. The next phase in the development of this tidal current generator will be in Nova Scotia (Bay of Fundy). A small project was built by the Soviet Union at Kislaya Guba on the Barents Sea; it has 0.4 MW of installed capacity. In 2006 it was upgraded with a 1.2 MW experimental advanced orthogonal turbine. The Jindo Uldolmok Tidal Power Plant in South Korea is a tidal stream generation scheme planned to be expanded progressively to 90 MW of capacity by 2013. The first 1 MW was installed in May 2009. A 1.2 MW SeaGen system became operational in late 2008 on Strangford Lough in Northern Ireland; it was decommissioned and removed in 2016. The contract for an 812 MW tidal barrage near Ganghwa Island (South Korea), north-west of Incheon, was signed by Daewoo. Completion was planned for 2015, but the project was cancelled in 2013.
A 1,320 MW barrage was proposed by the South Korean government in 2009, to be built around islands west of Incheon. The project has been halted since 2012 due to environmental concerns. The Scottish Government has approved plans for a 10 MW Òran na Mara array of tidal stream generators near Islay, Scotland, costing 40 million pounds and consisting of 10 turbines – enough to power over 5,000 homes. The first turbine was expected to be in operation by 2013, and the project was announced once again in 2021, but as of 2023 no turbines existed. The Indian state of Gujarat was planning to host South Asia's first commercial-scale tidal power station. The company Atlantis Resources planned to install a 50 MW tidal farm in the Gulf of Kutch on India's west coast, with construction planned to start in 2012; the project was later withdrawn due to high costs. Ocean Renewable Power Corporation was the first company to deliver tidal power to the US grid, in September 2012, when its pilot TidGen system was successfully deployed in Cobscook Bay, near Eastport. In New York City, Verdant Power successfully deployed and operated three tidal turbines in the East River near Roosevelt Island, on a single triangular base system called a TriFrame. The Roosevelt Island Tidal Energy (RITE) Project delivered over 300 MWh of electricity to the local grid, an American marine energy record. The system's performance was independently confirmed by Scotland's European Marine Energy Centre (EMEC) under the new International Electrotechnical Commission (IEC) international standards; this was the first instance of third-party verification of a tidal energy converter to an international standard. The largest tidal energy project, MeyGen (398 MW), is currently under construction in the Pentland Firth in northern Scotland, with 6 MW operational since 2018. Construction of a 320 MW tidal lagoon power plant outside the city of Swansea in the UK was granted planning permission in June 2015; however, it was later rejected by the UK government in 2018. If built, it would have been the world's first tidal power plant based on a constructed lagoon. Mersey Tidal Power, a proposed tidal range barrage within the channel of the Mersey Estuary with a capacity of up to 1 GW, is undergoing local consultation by the Liverpool City Region Combined Authority. Up to 240 MW of tidal stream generation is proposed at Morlais, Anglesey, from multiple developers, with the first turbines expected to be installed in 2026. A total of 38 MW of capacity has been awarded Contracts for Difference to supply power to the GB grid. Issues and challenges Environmental concerns Tidal power can affect marine life. The turbines' rotating blades can accidentally kill swimming sea life. Projects such as the one in Strangford include a safety mechanism that turns off the turbine when marine animals approach. However, this feature causes a major loss in energy because of the amount of marine life that passes through the turbine area. Some fish may avoid the area if threatened by a constantly rotating or noisy object. Marine life is a major factor when siting tidal power generators, and precautions are taken to ensure that as few marine animals as possible are affected. In terms of global warming potential (i.e. carbon footprint), the impact of tidal power generation technologies ranges between 15 and 37 gCO2-eq/kWhe, with a median value of 23.8 gCO2-eq/kWhe. This is in line with the impact of other renewables like wind and solar power, and significantly better than fossil-based technologies.
The Tethys database provides access to scientific literature and general information on the potential environmental effects of tidal energy. Tidal turbines The main environmental concern with tidal energy is associated with blade strike and entanglement of marine organisms, as high-speed water increases the risk of organisms being pushed near or through these devices. As with all offshore renewable energies, there is also a concern about how the creation of electromagnetic fields and acoustic outputs may affect marine organisms. Because these devices are in the water, the acoustic output can be greater than that created by offshore wind energy. Depending on the frequency and amplitude of the sound generated by tidal energy devices, this acoustic output can have varying effects on marine mammals (particularly those that echolocate to communicate and navigate in the marine environment, such as dolphins and whales). Tidal energy removal can also cause environmental concerns such as degrading far-field water quality and disrupting sediment processes. Depending on the size of the project, these effects can range from small traces of sediment building up near the tidal device to severely affecting nearshore ecosystems and processes. Tidal barrage Installing a barrage may change the shoreline within the bay or estuary, affecting a large ecosystem that depends on tidal flats. Because a barrage inhibits the flow of water in and out of the bay, there may also be less flushing of the bay or estuary, causing additional turbidity (suspended solids) and less saltwater, which may result in the death of fish that act as a vital food source for birds and mammals. Migrating fish may also be unable to access breeding streams, and may attempt to pass through the turbines. The same acoustic concerns apply to tidal barrages. Decreased shipping accessibility can become a socio-economic issue, though locks can be added to allow slow passage. However, the barrage may improve the local economy by increasing land access as a bridge. Calmer waters may also allow better recreation in the bay or estuary. In August 2004, a humpback whale swam through the open sluice gate of the Annapolis Royal Generating Station at slack tide, ending up trapped for several days before eventually finding its way out to the Annapolis Basin. Tidal lagoon Environmentally, the main concerns are blade strike on fish attempting to enter the lagoon, the acoustic output from turbines, and changes in sedimentation processes. However, all these effects are localized and do not affect the entire estuary or bay. Corrosion Saltwater causes corrosion in metal parts. It can be difficult to maintain tidal stream generators due to their size and depth in the water. The use of corrosion-resistant materials such as stainless steels, high-nickel alloys, copper–nickel alloys, and titanium can greatly reduce or eliminate corrosion damage. Composite materials, which do not corrode and could provide lightweight, durable structures, are also being evaluated for tidal power. Mechanical fluids, such as lubricants, can leak out, which may be harmful to the marine life nearby. Proper maintenance can minimize the amount of harmful chemicals that may enter the environment.
Fouling Any structure placed in an area of high tidal currents and high biological productivity in the ocean will become an ideal substrate for the growth of marine organisms. Cost Tidal energy has a high initial cost, which may be one of the reasons why it is not a popular source of renewable energy, although research has shown that the public is willing to pay for and support research and development of tidal energy devices. The methods of generating electricity from tidal energy are relatively new; the technology is still very early in the research process, and it may be possible to reduce costs in the future. Cost-effectiveness varies according to the site of the tidal generators. One indication of cost-effectiveness is the Gibrat ratio, which is the length of the barrage in metres divided by the annual energy production in kilowatt-hours; a lower ratio indicates a more cost-effective site. As tidal energy is reliable, it can reasonably be predicted how long it will take to pay off the high up-front cost of these generators. Due to the success of a greatly simplified design, the orthogonal turbine offers considerable cost savings: the production period of each generating unit is reduced, less metal is consumed, and technical efficiency is greater. A possible risk is rising sea levels due to climate change, which may alter the characteristics of the local tides, reducing future power generation. Structural health monitoring The high load factors resulting from the fact that water is around 800 times denser than air, and the predictable and reliable nature of tides compared with the wind, make tidal energy particularly attractive for electric power generation. Condition monitoring is the key to exploiting it cost-efficiently. See also Hydroelectricity Hydropower List of tidal power stations Run-of-the-river hydroelectricity Structural health monitoring Tidal barrage Tidal farm Tidal power in Canada Tidal power in New Zealand Tidal power in Scotland Tidal stream generator Marine energy Marine current power Wave power Ocean thermal energy conversion Osmotic power World energy consumption Further reading Baker, A. C. 1991, Tidal Power, Peter Peregrinus Ltd., London. Baker, G. C., Wilson, E. M., Miller, H., Gibson, R. A. & Ball, M., 1980, "The Annapolis tidal power pilot project", in Waterpower '79 Proceedings, U.S. Government Printing Office, Washington, pp. 550–559. Hammons, T. J. 1993, "Tidal power", Proceedings of the IEEE, v81, n3, pp. 419–433. Lecomber, R. 1979, "The evaluation of tidal power projects", in Tidal Power and Estuary Management, eds. Severn, R. T., Dineley, D. L. & Hawker, L. E., Henry Ling Ltd., Dorchester, pp. 31–39. Jubilo, A., 2019, "Renewable Tidal Energy Potential: Basis for Technology Development in Eastern Mindanao", 80th PIChE National Convention, Crowne Plaza Galleria, Ortigas Center, Quezon City, Philippines. Could the UK's tides help wean us off fossil fuels?, BBC News, 22 October 2023. Enhancing Electrical Supply by Pumped Storage in Tidal Lagoons, David J. C. MacKay, Cavendish Laboratory, University of Cambridge, UK, 3 May 2007. Turning the Tide: Tidal Power in the UK, report by the Sustainable Development Commission, October 2007. Global Energy Survey report, 2007.
External links Portal and Repository for Information on Marine Renewable Energy – a network of databases providing broad access to marine energy information. Marine Energy Basics: Current Energy – basic information about current energy. Marine Energy Projects Database – a database providing up-to-date information on marine energy deployments in the U.S. and around the world. Tethys Database – a database of information on potential environmental effects of marine energy and offshore wind energy development. Tethys Engineering Database – a database of information on technical design and engineering of marine energy devices. Marine and Hydrokinetic Data Repository – a database for all data collected by marine energy research and development projects funded by the U.S. Department of Energy. Severn Estuary Partnership: Tidal Power Resource Page. University of Strathclyde ESRU – detailed analysis of marine energy resource, current energy capture technology appraisal and environmental impact outline. Coastal Research – Foreland Point Tidal Turbine and warnings on proposed Severn Barrage. European Marine Energy Centre – listing of tidal energy developers (retrieved 1 July 2011; link updated 31 January 2014). Resources on Tidal Energy. Structural Health Monitoring of composite tidal energy converters. Tidal Power: A New Source of Energy (1959). Tidal projects funded by the Australian Renewable Energy Agency.
Tidal power
[ "Engineering" ]
5,084
[ "Construction", "Coastal construction" ]
325,064
https://en.wikipedia.org/wiki/Scenic%20painting%20%28theatre%29
Theatrical scenic painting includes wide-ranging disciplines, encompassing virtually the entire scope of painting and craft techniques. An experienced scenic painter (or scenic artist) will have skills in landscape painting, figurative painting, trompe-l'œil, and faux finishing, and be versatile in different media such as acrylic, oil, and tempera paint. The painter might also be accomplished in three-dimensional skills such as sculpting, plastering, and gilding. To select the optimal materials, scenic painters must also have knowledge of paint composition. The scenic painter takes direction from the theatre designer; in some cases designers paint their own designs. The techniques and specialized knowledge of the scenic painter are used to replicate an image at a larger scale from a designer's maquette, perhaps with accompanying photographs, printouts and original research, and sometimes with paint and style samples. Often, custom tools are made to create the desired effect. History The first written description of scenic painting as an art form is from the Italian Renaissance, when Leon Battista Alberti examined Greek stage painting and decoration in the time of Aeschylus. During and after the Renaissance, the ability to draw in perspective became core to painting for the stage. In the late 19th century, it was not unusual for successful scenic artists to achieve celebrity status, as spectacular backdrops became fashionable. With the emergence of modern stage design in the early 20th century, painted scenery came to be considered "quaint". Since then, the practice of modern stage painting has evolved and continues to flourish today. Although the best scenic painters are rarely credited in theatre programs on the same level as scenic designers, they are highly respected in the theatre profession and critical to the creative process. Scenic paint Scenic paint has traditionally been mixed by the painter using a pigment powder colour, a binder and a medium. The binder adheres the powder to itself and to the surface on which it is applied. The medium is a thinner which allows the paint to be worked more easily, disappearing as the paint dries. Today it is common to use brands of ready-made scenic paint, or pigment suspended in a medium to which a binder will be added. Further reading Crabtree, Susan; Beudert, Peter (2011), Scenic Art for the Theatre, Focal Press.
Scenic painting (theatre)
[ "Engineering" ]
470
[ "Scenic design", "Design" ]
325,077
https://en.wikipedia.org/wiki/Domain%20theory
Domain theory is a branch of mathematics that studies special kinds of partially ordered sets (posets) commonly called domains. Consequently, domain theory can be considered a branch of order theory. The field has major applications in computer science, where it is used to specify denotational semantics, especially for functional programming languages. Domain theory formalizes the intuitive ideas of approximation and convergence in a very general way and is closely related to topology. Motivation and intuition The primary motivation for the study of domains, which was initiated by Dana Scott in the late 1960s, was the search for a denotational semantics of the lambda calculus. In this formalism, one considers "functions" specified by certain terms in the language. In a purely syntactic way, one can go from simple functions to functions that take other functions as their input arguments. Using again just the syntactic transformations available in this formalism, one can obtain so-called fixed-point combinators (the best-known of which is the Y combinator); these, by definition, have the property that f(Y(f)) = Y(f) for all functions f (a concrete sketch of such a combinator appears at the end of this section). To formulate such a denotational semantics, one might first try to construct a model for the lambda calculus, in which a genuine (total) function is associated with each lambda term. Such a model would formalize a link between the lambda calculus as a purely syntactic system and the lambda calculus as a notational system for manipulating concrete mathematical functions. The combinator calculus is such a model. However, the elements of the combinator calculus are functions from functions to functions; in order for the elements of a model of the lambda calculus to be of arbitrary domain and range, they could not be true (total) functions, only partial functions. Scott got around this difficulty by formalizing a notion of "partial" or "incomplete" information to represent computations that have not yet returned a result. This was modeled by considering, for each domain of computation (e.g. the natural numbers), an additional element that represents an undefined output, i.e. the "result" of a computation that never ends. In addition, the domain of computation is equipped with an ordering relation, in which the "undefined result" is the least element. The important step in finding a model for the lambda calculus is to consider only those functions (on such a partially ordered set) that are guaranteed to have least fixed points. The set of these functions, together with an appropriate ordering, is again a "domain" in the sense of the theory. But the restriction to a subset of all available functions has another great benefit: it is possible to obtain domains that contain their own function spaces, i.e. one gets functions that can be applied to themselves. Besides these desirable properties, domain theory also allows for an appealing intuitive interpretation. As mentioned above, the domains of computation are always partially ordered. This ordering represents a hierarchy of information or knowledge. The higher an element is within the order, the more specific it is and the more information it contains. Lower elements represent incomplete knowledge or intermediate results. Computation then is modeled by applying monotone functions repeatedly on elements of the domain in order to refine a result. Reaching a fixed point is equivalent to finishing a calculation.
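The concrete sketch promised above: since Python evaluates eagerly, the strict (call-by-value) Z variant of the Y combinator is used here, and fact_step is an illustrative non-recursive functional whose fixed point is the factorial function:

```python
# Z combinator: a call-by-value fixed-point combinator. For a functional f,
# Z(f) behaves as a fixed point: f(Z(f)) and Z(f) compute the same function.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Illustrative non-recursive "step" functional; its least fixed point
# is the factorial function.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

factorial = Z(fact_step)
print(factorial(5))             # 120
print(fact_step(factorial)(5))  # 120: f(Z(f)) agrees with Z(f)
```

The eta-expansion (lambda v: x(x)(v)) is what keeps the self-application from diverging under strict evaluation; in a lazy language the plain Y combinator would suffice.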
Domains provide a superior setting for these ideas, since fixed points of monotone functions can be guaranteed to exist and, under additional restrictions, can be approximated from below. A guide to the formal definitions In this section, the central concepts and definitions of domain theory will be introduced. The above intuition of domains being information orderings will be emphasized to motivate the mathematical formalization of the theory. The precise formal definitions are to be found in the dedicated articles for each concept. A list of general order-theoretic definitions, which includes domain-theoretic notions as well, can be found in the order theory glossary. The most important concepts of domain theory will nonetheless be introduced below. Directed sets as converging specifications As mentioned before, domain theory deals with partially ordered sets to model a domain of computation. The goal is to interpret the elements of such an order as pieces of information or (partial) results of a computation, where elements that are higher in the order extend the information of the elements below them in a consistent way. From this simple intuition it is already clear that domains often do not have a greatest element, since this would mean that there is an element that contains the information of all other elements—a rather uninteresting situation. A concept that plays an important role in the theory is that of a directed subset of a domain; a directed subset is a non-empty subset of the order in which any two elements have an upper bound that is an element of this subset. In view of our intuition about domains, this means that any two pieces of information within the directed subset are consistently extended by some other element in the subset. Hence we can view directed subsets as consistent specifications, i.e. as sets of partial results in which no two elements are contradictory. This interpretation can be compared with the notion of a convergent sequence in analysis, where each element is more specific than the preceding one. Indeed, in the theory of metric spaces, sequences play a role that is in many respects analogous to the role of directed sets in domain theory. Now, as in the case of sequences, we are interested in the limit of a directed set. According to what was said above, this would be an element that is the most general piece of information that extends the information of all elements of the directed set, i.e. the unique element that contains exactly the information that was present in the directed set, and nothing more. In the formalization of order theory, this is just the least upper bound of the directed set. As in the case of the limit of a sequence, the least upper bound of a directed set does not always exist. Naturally, one has a special interest in those domains of computations in which all consistent specifications converge, i.e. in orders in which all directed sets have a least upper bound. This property defines the class of directed-complete partial orders, or dcpos for short. Indeed, most considerations of domain theory only consider orders that are at least directed complete. From the underlying idea of partially specified results as representing incomplete knowledge, one derives another desirable property: the existence of a least element. Such an element models the state of no information—the place where most computations start. It also can be regarded as the output of a computation that does not return any result at all.
Computations and domains Now that we have some basic formal descriptions of what a domain of computation should be, we can turn to the computations themselves. Clearly, these have to be functions, taking inputs from some computational domain and returning outputs in some (possibly different) domain. However, one would also expect that the output of a function will contain more information when the information content of the input is increased. Formally, this means that we want a function to be monotonic. When dealing with dcpos, one might also want computations to be compatible with the formation of limits of a directed set. Formally, this means that, for some function f, the image f(D) of a directed set D (i.e. the set of the images of each element of D) is again directed and has as a least upper bound the image of the least upper bound of D. One could also say that f preserves directed suprema. Also note that, by considering directed sets of two elements, such a function also has to be monotonic. These properties give rise to the notion of a Scott-continuous function. Since this is usually unambiguous, one may also simply speak of continuous functions. Approximation and finiteness Domain theory is a purely qualitative approach to modeling the structure of information states. One can say that something contains more information, but the amount of additional information is not specified. Yet, there are some situations in which one wants to speak about elements that are in a sense much simpler (or much more incomplete) than a given state of information. For example, in the natural subset-inclusion ordering on some powerset, any infinite element (i.e. set) is much more "informative" than any of its finite subsets. If one wants to model such a relationship, one may first want to consider the induced strict order < of a domain with order ≤. However, while this is a useful notion in the case of total orders, it does not tell us much in the case of partially ordered sets. Considering again inclusion-orders of sets, a set is already strictly smaller than another, possibly infinite, set if it contains just one less element. One would, however, hardly agree that this captures the notion of being "much simpler". Way-below relation A more elaborate approach leads to the definition of the so-called order of approximation, which is more suggestively also called the way-below relation. An element x is way below an element y if, for every directed set D with a supremum such that y ≤ sup D, there is some element d in D such that x ≤ d. Then one also says that x approximates y and writes x ≪ y. This does imply x ≤ y, since the singleton set {y} is directed. For an example, in an ordering of sets, an infinite set is way above any of its finite subsets. On the other hand, consider the directed set (in fact, the chain) of finite sets ∅ ⊆ {0} ⊆ {0, 1} ⊆ {0, 1, 2} ⊆ ⋯. Since the supremum of this chain is the set of all natural numbers N, this shows that no infinite set is way below N. However, being way below some element is a relative notion and does not reveal much about an element alone. For example, one would like to characterize finite sets in an order-theoretic way, but even infinite sets can be way below some other set. The special property of these finite elements x is that they are way below themselves, i.e. x ≪ x. An element with this property is also called compact. Yet, such elements do not have to be "finite" nor "compact" in any other mathematical usage of the terms.
The notation is nonetheless motivated by certain parallels to the respective notions in set theory and topology. The compact elements of a domain have the important special property that they cannot be obtained as a limit of a directed set in which they did not already occur. Many other important results about the way-below relation support the claim that this definition is appropriate to capture many important aspects of a domain. Bases of domains The previous thoughts raise another question: is it possible to guarantee that all elements of a domain can be obtained as a limit of much simpler elements? This is quite relevant in practice, since we cannot compute infinite objects but we may still hope to approximate them arbitrarily closely. More generally, we would like to restrict to a certain subset of elements as being sufficient for getting all other elements as least upper bounds. Hence, one defines a base of a poset P as being a subset B of P, such that, for each x in P, the set of elements in B that are way below x contains a directed set with supremum x. The poset P is a continuous poset if it has some base. In particular, P itself is a base in this situation. In many applications, one restricts to continuous (d)cpos as a main object of study. Finally, an even stronger restriction on a partially ordered set is given by requiring the existence of a base of finite elements. Such a poset is called algebraic. From the viewpoint of denotational semantics, algebraic posets are particularly well-behaved, since they allow for the approximation of all elements even when restricting to finite ones. As remarked before, not every finite element is "finite" in a classical sense, and it may well be that the finite elements constitute an uncountable set. In some cases, however, the base for a poset is countable. In this case, one speaks of an ω-continuous poset. Accordingly, if the countable base consists entirely of finite elements, we obtain an order that is ω-algebraic. Special types of domains A simple special case of a domain is known as an elementary or flat domain. This consists of a set of incomparable elements, such as the integers, along with a single "bottom" element considered smaller than all other elements. One can obtain a number of other interesting special classes of ordered structures that could be suitable as "domains". We already mentioned continuous posets and algebraic posets. More special versions of both are continuous and algebraic cpos. Adding even further completeness properties, one obtains continuous lattices and algebraic lattices, which are just complete lattices with the respective properties. For the algebraic case, one finds broader classes of posets that are still worth studying: historically, the Scott domains were the first structures to be studied in domain theory. Still wider classes of domains are constituted by SFP-domains, L-domains, and bifinite domains. All of these classes of orders can be cast into various categories of dcpos, using functions that are monotone, Scott-continuous, or even more specialized as morphisms. Finally, note that the term domain itself is not exact and thus is only used as an abbreviation when a formal definition has been given before or when the details are irrelevant. Important results A poset D is a dcpo if and only if each chain in D has a supremum. (The 'if' direction relies on the axiom of choice.) If f is a continuous function on a domain D with least element ⊥, then it has a least fixed point, given as the least upper bound of all finite iterations of f on ⊥: fix(f) = ⨆_{n∈ℕ} f^n(⊥), where ⨆ denotes the directed join. This is the Kleene fixed-point theorem.
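As a minimal concrete illustration of the theorem just stated (not from the original article): on the finite domain of subsets of a graph's node set, ordered by inclusion with ⊥ = ∅, iterating a monotone one-step reachability map from ⊥ reaches its least fixed point, the set of reachable nodes. The graph and seed below are illustrative:

```python
# Kleene iteration: least fixed point of a monotone function on a finite
# domain (subsets of graph nodes ordered by inclusion; bottom = empty set).
GRAPH = {"a": {"b"}, "b": {"c"}, "c": set(), "d": {"a"}}  # illustrative edges
SEED = {"a"}

def step(reached: frozenset) -> frozenset:
    """Monotone map: the seed nodes plus everything one edge away
    from anything already reached."""
    out = set(SEED) | set(reached)
    for node in reached:
        out |= GRAPH[node]
    return frozenset(out)

def least_fixed_point(f):
    """Compute fix(f) as the join of f^n(bottom): iterate from the
    empty set until f(x) == x. Terminates because the domain is finite
    and f is monotone, so the iterates form an increasing chain."""
    x = frozenset()          # bottom element of the domain
    while True:
        y = f(x)
        if y == x:           # f(x) = x: the least fixed point is reached
            return x
        x = y

print(sorted(least_fixed_point(step)))   # ['a', 'b', 'c']
```

Each iterate f^n(⊥) refines the previous partial result, matching the informal picture above of computation as repeated refinement until a fixed point is reached.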
Generalizations A continuity space is a generalization of metric spaces and posets that can be used to unify the notions of metric spaces and domains. See also Category theory Denotational semantics Scott domain Scott information system Type theory External links Introduction to Domain Theory by Graham Hutton, University of Nottingham
Domain theory
[ "Mathematics" ]
2,902
[ "Mathematical analysis", "Fixed points (mathematics)", "Topology", "Domain theory", "Order theory", "Dynamical systems" ]
325,156
https://en.wikipedia.org/wiki/Mezzanine
A mezzanine (or, in Italian, a mezzanino) is an intermediate floor in a building which is partly open to the double-height-ceilinged floor below, or which does not extend over the whole floorspace of the building; a loft with non-sloped walls. However, the term is often used loosely for the floor above the ground floor, especially where a very high-ceilinged original ground floor has been split horizontally into two floors. Mezzanines may serve a wide variety of functions. Industrial mezzanines, such as those used in warehouses, may be temporary or semi-permanent structures. In Royal Italian architecture, mezzanino also means a chamber created by partitioning that does not go all the way up to the arch vaulting or ceiling; these were historically common in Italy and France, for example in the palaces for the nobility at the Quirinal Palace. Definition A mezzanine is an intermediate floor (or floors) in a building which is open to the floor below. It is placed halfway up the wall (mezzo means 'half' in Italian) in a space which has a ceiling at least twice as high as a floor of minimum height. A mezzanine does not count as one of the floors in a building, and generally does not count in determining maximum floorspace. The International Building Code permits a mezzanine to have as much as one-third of the floor space of the floor below; local building codes may vary somewhat from this standard. A space may have more than one mezzanine, as long as the sum total of floor space of all the mezzanines is not greater than one-third the floor space of the complete floor below (a worked check of this rule appears below). Mezzanines help to make a high-ceilinged space feel more personal and less vast, and can create additional floor space. Mezzanines, however, may have lower-than-normal ceilings due to their location. The term "mezzanine" does not imply any particular function; mezzanines can be used for a wide array of purposes. Mezzanines are commonly used in modern architecture, which places a heavy emphasis on light and space. Industrial mezzanines In industrial settings, mezzanines may be installed (rather than built as part of the structure) in high-ceilinged spaces such as warehouses. These semi-permanent structures are usually free-standing, can be dismantled and relocated, and are sold commercially. Industrial mezzanine structures can be supported by structural steel columns and elements, or by racks or shelves. Depending on the span and the run of the mezzanine, different materials may be used for the mezzanine's deck, such as fibre cement boards. Some industrial mezzanines may also include enclosed, paneled office space on their upper levels. There are three basic types of industrial mezzanines: custom, standard, or modular. A structural engineer is sometimes hired to help determine whether the floor of the building can support a mezzanine (and how heavy the mezzanine may be), and to design the appropriate mezzanine. Custom mezzanines Custom mezzanines are steel, raised industrial platform structures that are designed specifically to match the space and capacity needs of a given facility. A custom mezzanine will, at a minimum, include a stairway for access. These structures are typically the strongest in terms of support capacity. Standard mezzanines Standard mezzanines are steel, raised industrial platform structures that are completely self-supporting and are sold in predetermined sizes and shapes. These off-the-shelf structures are usually strong (in terms of support capacity) and less expensive than custom mezzanines.
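The worked check referenced in the definition above; a minimal sketch of the IBC one-third rule (the helper name is invented for illustration, and real compliance depends on local amendments and the measured areas a plan examiner accepts):

```python
def mezzanine_allowed(floor_area: float, mezzanine_areas: list[float]) -> bool:
    """IBC rule of thumb: the combined floor space of all mezzanines in a
    space must not exceed one-third of the floor area of the room below.
    Illustrative only -- local building codes may vary from this standard."""
    return sum(mezzanine_areas) <= floor_area / 3.0

# Example: a 900 m^2 warehouse floor with two mezzanine platforms.
print(mezzanine_allowed(900.0, [200.0, 90.0]))   # True:  290 <= 300
print(mezzanine_allowed(900.0, [200.0, 150.0]))  # False: 350 >  300
```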
Safety Employees in material handling and manufacturing are often at risk of falls when they are on the job. Recent figures show approximately 20,000 serious injuries and nearly 100 fatalities a year in industrial facilities. Falls of people and objects from mezzanines are of particular concern. In many industrial operations, openings are cut into the guardrails of mezzanines and elevated work platforms so that palletized material can be loaded onto and unloaded from upper levels, often with a fork truck. The Occupational Safety and Health Administration (OSHA) and the International Building Code (IBC) set out regulations for fall protection, and the American National Standards Institute (ANSI) has published standards for securing pallet drop areas to protect workers who work on elevated platforms and are exposed to openings. In most cases, safety gates are used to secure these openings. OSHA requires openings 48 inches or taller to be secured with a fall protection system. Removable sections of railing, or gates that swing or slide open, can be used to open up the area and allow the transfer of material, and then be closed once the material is removed. However, current ANSI standards require dual-gate safety systems for fall protection. Dual-gate safety systems were created to secure these areas, allowing a barrier to be in place at all times, even while pallets are being loaded or removed. Dual-gate systems create a completely enclosed workstation, providing protection for the worker during loading and off-loading operations. When the rear-side gate opens, the ledge gate automatically closes, ensuring there is always a gate between the operator and the ledge. See also Overhead storage External links Proper safeguarding for elevated work platforms (1:37 min. video) Video showing the main construction of an industrial mezzanine floor (2:46 min. video)
Mezzanine
[ "Technology", "Engineering" ]
1,134
[ "Structural engineering", "Building engineering", "Floors", "Architectural elements", "nan", "Components", "Architecture" ]
325,232
https://en.wikipedia.org/wiki/Greenwashing
Greenwashing (a compound word modeled on "whitewash"), also called green sheen, is a form of advertising or marketing spin that deceptively uses green PR and green marketing to persuade the public that an organization's products, goals, or policies are environmentally friendly. Companies that intentionally adopt greenwashing communication strategies often do so to distance themselves from their environmental lapses or those of their suppliers. Firms engage in greenwashing for two primary reasons: to appear legitimate and to project an image of environmental responsibility to the public. Because there "is no harmonised definition of greenwashing", a determination that it is occurring in a given instance may be subjective. Greenwashing occurs when an organization spends significantly more resources on "green" advertising than on environmentally sound practices. Many corporations use greenwashing to improve public opinion of their brands, and complex corporate structures can further obscure the bigger picture. Corporations attempt to capitalize on consumers' environmental guilt. Critics of the practice suggest that the rise of greenwashing, paired with ineffective regulation, contributes to consumer skepticism of all green claims and diminishes the power of the consumer to drive companies toward greener manufacturing processes and business operations. Greenwashing covers up unsustainable corporate agendas and policies. Highly public accusations of greenwashing have contributed to the term's increasing use. Greenwashing has recently increased in response to consumer demand for environmentally friendly goods and services. New regulations, laws, and guidelines put forward by organizations such as the Committee of Advertising Practice in the UK aim to discourage companies from using greenwashing to deceive consumers. At the same time, activists have been increasingly inclined to accuse companies of greenwashing, with inconsistent standards as to what activities merit such an accusation. Characteristics TerraChoice, an environmental consulting division of UL, described "seven sins of greenwashing" in 2007 to "help consumers identify products that made misleading environmental claims": "Hidden Trade-off": a claim that a product is "green" based on an unreasonably narrow set of attributes, without attention to other critical environmental issues. "No Proof": a claim that cannot be substantiated by easily accessible information or a reliable third-party certification. "Vagueness": a claim so poorly defined or broad that the consumer is likely to misunderstand its meaning; "all-natural", for example, is not necessarily "green". "Worshipping False Labels": a claim that, through words or images, gives the impression of a third-party endorsement where none exists. "Irrelevance": a claim that may be truthful but unimportant or unhelpful to consumers seeking environmentally preferable products. "Lesser of Two Evils": a claim that may be true within the product category, but risks distracting consumers from the more significant environmental impact of the category as a whole. "Fibbing": a claim that is simply false. The organization noted that by 2010, approximately 95% of consumer products in the U.S. claiming to be green were discovered to commit at least one of these sins. According to the United Nations, greenwashing can present itself in many ways: A company can put out claims saying that it is eco-friendly or green while having no plans in place to become so.
Being intentionally vague about operations, or using vague claims that cannot be specifically proven (such as saying a company is "environmentally friendly" or "green"). Claiming that a product does not contain harmful materials or use harmful practices that the company would not have used anyway. Highlighting one thing the company does well regarding the environment while doing nothing else. Promoting products that merely meet minimum requirements rather than making improvements. Activities deemed to be characteristic of greenwashing can vary by time and place, product, and the opinions or expectations of the person making the determination. History The origins of greenwashing can be traced to several different instances. For example, Keep America Beautiful was a campaign founded by beverage manufacturers and others in 1953. The campaign focused on recycling and littering, diverting attention away from corporate responsibility to protect the environment. The objective was to forestall the regulation of disposable containers, such as the regulation established by Vermont. In the mid-1960s, the environmental movement gained momentum, particularly after the publication of Rachel Carson's landmark Silent Spring. The book marked a turning point in public concern about the environment and inspired citizen action. It prompted many companies to seek a new, cleaner or greener image through advertising. Jerry Mander, a former Madison Avenue advertising executive, called this new form of advertising "ecopornography". The first Earth Day was held on 22 April 1970. Most companies did not actively participate in the initial Earth Day events because environmental issues were not a major corporate priority, and there was a sense of skepticism or resistance to the movement's message. Nevertheless, some industries began to advertise themselves as friendly to the environment. For example, public utilities were estimated to have spent around $300 million advertising themselves as clean and green companies, eight times what they spent on pollution-reduction research. The term "greenwashing" was coined by New York environmentalist Jay Westerveld in a 1986 essay about the hotel industry's practice of placing notices in bedrooms promoting the reuse of towels to "save the environment". He noted that these institutions often made little or no effort toward reducing energy waste, although towel reuse saved them laundry costs. He concluded that the fundamental objective was most frequently increased profit, and labeled this and other profitable-but-ineffective "environmentally-conscientious" acts greenwashing. In 1991, a study published in the Journal of Public Policy and Marketing (American Marketing Association) found that 58% of environmental ads had at least one deceptive claim. Another study found that 77% of people said a company's environmental reputation affected whether they would buy its products. One-fourth of all household products marketed around Earth Day advertised themselves as being green and environmentally friendly. In 1998, the Federal Trade Commission created the "Green Guidelines", which defined terms used in environmental marketing. The following year, the FTC found the Nuclear Energy Institute's environmentally clean claims invalid, but did nothing about the ads because they were outside the agency's jurisdiction. This caused the FTC to realize it needed new, clear, enforceable standards. In 1999, the word "greenwashing" was added to the Oxford English Dictionary.
Days before the 1992 Earth Summit in Rio de Janeiro, Greenpeace released the Greenpeace Book on Greenwash, which described the corporate takeover of the UN conference and provided case studies of the contrast between corporate polluters and their rhetoric. Third World Network published an expanded version of that report, "Greenwash: The Reality Behind Corporate Environmentalism." In 2002, during the World Summit on Sustainable Development in Johannesburg, the Greenwash Academy hosted the Greenwash Academy Awards, a ceremony that "awarded" companies like BP and ExxonMobil, and even the U.S. Government, for their elaborate greenwashing ads and support for greenwashing. A European Union study from 2020 found that over 50% of examined environmental claims in the EU were vague, misleading or unfounded, and 40% were unsubstantiated. Many companies have committed to lessening their greenhouse gas emissions to net zero following the establishment of the Paris Agreement in 2015. A net zero emissions level means that any emissions given off by a company would be offset by carbon eliminators in the natural world (otherwise known as carbon sinks). In practice, however, many companies are not actually cutting emissions but are instead publishing infeasible plans and improving other aspects of their operations rather than their emissions; most such companies are therefore not upholding their commitments and ultimately fail to bring about positive change.

Examples

Fashion industry

Kimberly Clark claimed its "Pure and Natural" diapers, sold in green packaging, were environmentally friendly; the product uses organic cotton on the outside but the same petrochemical gel inside as before. Pampers likewise claims that "Dry Max" diapers reduce landfill waste by decreasing the amount of paper fluff in the diaper, though the change was also a way to trim the product and save money in producing Pampers. In January 2020, the Fur Free Alliance noted that the "WelFur" label, which advocated for animal welfare on fur farms, is run by the fur industry and is aimed at European fur farms. Clothing company H&M came under fire for greenwashing its manufacturing practices as a result of a report published by Quartz News.

Food industry

In 2009, McDonald's changed the color of its European logos from yellow-and-red to yellow-and-green; a spokesman explained that the change was "to clarify [their] responsibility for the preservation of natural resources." In October 2021, McDonald's was accused of greenwashing after announcing its pledge to reach net-zero emissions by 2050. In 2018, in response to increased calls to ban plastic straws, Starbucks introduced a lid with a built-in drinking straw that contained more plastic by weight than the old straw and lid together (though it can be recycled, unlike its predecessor). In 2020, Coca-Cola was found to be the world's number one plastic polluter by Break Free From Plastic. The company nevertheless continues to say that it is making headway in lessening plastic waste, maintaining a commitment to "get every bottle back by 2030" despite having been the biggest plastic polluter for several years in a row. It was sued by the Earth Island Institute in 2021 over these claims.

Automobile industry

The UK Advertising Standards Authority upheld complaints against major vehicle manufacturers, including Suzuki, SEAT, Toyota, and Lexus, who made false claims about their vehicles. Volkswagen fitted its cars with a "defeat device" that activated only when a car's emissions were being tested, reducing polluting emissions for the duration of the test.
In normal use, by contrast, the cars were emitting 40 times the allowed rate of nitrogen oxides. Forbes estimates that this scandal cost Volkswagen US$35.4 billion. Other automakers also cheated on emissions systems. In November 2020, Aston Martin, Bosch, and other brands were discovered to have funded a report which downplayed electric vehicles' environmental benefits with misleading information about the emissions produced during the manufacture of electric cars, in response to the UK announcing that it would ban the sale of vehicles with internal combustion engines from 2030. The greenwashing scandal became known as Astongate, given the relationship between the British automotive manufacturer and Clarendon Communications, a shell company posing as a public relations agency which was set up to promote the report, and which was registered to James Michael Stephens, the Director of Global Government & Corporate Affairs at Aston Martin Lagonda Ltd.
Calling the next provisionally approved European emission standards for light and medium vehicles "Euro 7" instead of "Euro 6f" could be considered greenwashing because of unchanged pollutant limits.
Calling start-stop systems "micro", "mild", or "smart" hybrids.
Calling hybrid vehicles "self charging" or "fueled by petrol, driven by electric".
The fleet of PHEVs underperforms on emissions reductions, although the vehicles would have more potential if properly used.
The true environmental footprint of battery electric cars is called into question.
Fuel cell vehicles are promoted as clean even when powered by non-green hydrogen.

Coal

In 2024, Turkey's Minister of Energy and Natural Resources Alparslan Bayraktar said that the government aimed to increase coal mining in an environmentally friendly way.

Oil industry

A 2010 advertising campaign by Chevron was described by the Rainforest Action Network, Amazon Watch, and The Yes Men as greenwash. A spoof campaign was launched to pre-empt Chevron's greenwashing. In 1985, the Chevron Corporation launched one of the most famous greenwashing ad campaigns: Chevron's "People Do" advertisements were aimed at a "hostile audience" of "societally conscious" people. Two years after the campaign's launch, surveys found people in California trusted Chevron more than other oil companies to protect the environment. In the late 1980s, The American Chemistry Council started a program called Responsible Care, which shone a light on the environmental performances and precautions of the group's members. The loose guidelines of Responsible Care led industries to adopt self-regulation over government regulation. BP was also reported to have engaged in such conduct in the 2010s.

Political campaigns

In 2010, environmentalists stated that the Bush Administration's "Clear Skies Initiative" actually weakened air pollution laws. Similar laws were issued under President Macron of France as "simplifying ecology rules"; they were criticized on similar grounds while still being referred to by his government as "ecology laws". "Clean Coal," an initiative adopted by several platforms for the 2008 U.S. presidential election, cited carbon capture and storage as a means of reducing carbon emissions by capturing and injecting carbon dioxide produced by coal power plants into layers of porous rock below the ground. According to Fred Pearce's Greenwash column in The Guardian, clean coal is the "ultimate climate change oxymoron... pure and utter greenwash".
In 2017, Australia's then Treasurer Scott Morrison used "Clean Coal" as the basis to suggest clean energy subsidies be used to build new coal power plants. The renaming of "Tar Sands" to "Oil Sands" (Alberta, Canada) in corporate and political language reflects an ongoing debate between the project's adherents and opponents. This semantic shift can be seen as a case of greenwashing in an attempt to counter growing public concern about the environmental and health impacts of the industry. While advocates claim that the shift is scientifically derived to better reflect the use of the sands as a precursor to oil, environmental groups argue that it is simply a means of cloaking the issue behind friendlier terminology. In 2021, Saudi Arabian Crown Prince Mohammed bin Salman announced a tree planting campaign in the desert as part of the plan to reach carbon neutrality by 2060. The plan was criticized as a greenwashing attempt by some climate scientists. Some environmental activists and critics condemned the 2021 United Nations Climate Change Conference (COP26) as greenwashing. They also condemned COP28, which is purported to have the highest carbon footprint of all COP events. In May 2023, a Wikipedia user who identified themselves as an employee of ADNOC was alleged to have suggested edits to the Wikipedia article of Sultan Al Jaber, president of COP28, which presented Al Jaber as a supporter of the climate movement. In June 2023, Marc Owen Jones of Hamad Bin Khalifa University noted that a large number of apparently fake Twitter profiles were used to defend Al Jaber's COP28 presidency. The construction of the new Indonesian capital Nusantara, despite being described as a smart, green, and clean city, has led many groups to accuse the Indonesian government of greenwashing because of the environmental damage caused by its construction.

Business slogans

"Clean Burning Natural Gas": when compared to the dirtiest fossil fuel, coal, natural gas is only 50% as dirty. Producing natural gas through fracking and distributing it by pipeline can lead to methane emissions into the atmosphere, and methane, the main component of natural gas, is a potent greenhouse gas. Despite this, natural gas is often presented as a cleaner fossil fuel in environmental discourse. In practice, it balances the intermittent nature of solar and wind energy, and it can be considered a useful "transitional technology" towards hydrogen, since hydrogen can already be blended into, and may eventually replace, natural gas inside gas networks initially conceived for natural gas use. First-generation biofuels are said to be better for the environment than fossil fuels, but some, such as palm oil, contribute to deforestation (which contributes to global warming due to the release of CO2). Higher-generation biofuels do not have these particular issues, but have contributed significantly to deforestation and habitat destruction in Canada due to rising corn prices, which make it economically worthwhile to clear-cut existing forests in agricultural areas. An article in Wired magazine highlighted slogans that suggest environmentally benign business activity: the Comcast Ecobill has the motto "PaperLESSisMORE," but Comcast uses large amounts of paper for direct marketing. The Poland Spring (from the American city of Poland, Maine) eco shape bottle is touted as "A little natural does a lot of good," although 80% of beverage containers go to landfills.
The Airbus A380 airliner is described as "A better environment inside and out" even though air travel has a high environmental cost. The multinational oil company formerly known as British Petroleum launched a rebranding campaign in 2000, revising the company's acronym as "Beyond Petroleum." The campaign included a revised green logo, advertisements, a solar-paneled gas station in Los Angeles, and clean energy rhetoric across media, strategically positioning the company as the "greenest" global oil company. The campaign became the center of public controversy due to the company's hypocrisy around lobbying efforts that sought permission to drill in protected areas and its negligent operating practices that led to severe oil spills, most notably the Prudhoe Bay pipeline rupture in 2006 and the Gulf of Mexico rig explosion in 2010.

ESG ratings

In 2021, American financial services company MSCI upgraded the environmental, social, and governance (ESG) rating of McDonald's, a company that produces emissions comparable to an entire mid-size EU country like Portugal, by eliminating from its analysis the significance of greenhouse gas emissions and highlighting a new recycling initiative that had in fact been mandated by regulatory authorities in France and the United Kingdom for all fast-food companies. Volkswagen had an ESG rating higher than its peer average, even though in September 2015 the Environmental Protection Agency (EPA) sanctioned Volkswagen with over $25 billion in fines for using a "defeat device" that caused vehicles produced from 2009 to 2015 to pollute at a much higher rate than advertised. TotalEnergies was sued for claiming it can reach net zero objectives by 2050 while increasing fossil fuel activities; it is rated A- on climate by the CDP.

Consequences

Lack of integrity

Some companies communicate and publicize unsubstantiated ethical claims or social responsibility, and practice greenwashing, which increases consumer cynicism and mistrust. By using greenwashing, companies can present their business as more ecologically sustainable than it is. According to a policy report, greenwashing includes risks such as misleading advertisements and public communications, misleading ESG credentials, and false or deceiving carbon credit claims. A legal analysis of corruption and integrity risks in climate solutions shows that regulations are significantly weaker for misleading ESG credentials than for climate washing and advertising standards. Despite the obligations imposed on companies, ESG rating agencies and ESG auditors are not regulated in any of the reviewed jurisdictions. Factors such as the lack of oversight of third-party environmental service providers, the opacity of internal scoring methodologies, and the lack of alignment and consistency around ESG assessments can create opportunities for misleading or unsubstantiated claims and, in the worst cases, bribery or fraud.

Psychological effects

Greenwashing is a relatively new area of research within psychology, and there is as yet little consensus among studies on how greenwashing affects consumers and stakeholders. Because recently published studies span different countries and regions, discrepancies in observed consumer behavior could be attributed to cultural or geographic differences.

Effect on consumer perception

Researchers have found that consumers significantly favor environmentally friendly products over their greenwashed counterparts.
A survey by LendingTree found that 55% of Americans are willing to spend more money on products they perceive to be more sustainable and eco-friendly. Consumer perceptions of greenwashing are also mediated by the level of greenwashing they are exposed to. Other research suggests that few consumers notice greenwashing, particularly when they perceive the company or brand as reputable. When consumers perceive green advertising as credible, they develop more positive attitudes towards the brand, even when the advertising is greenwashed. Other research suggests that consumers with more green concern are better able to tell the difference between honest green marketing and greenwashed advertising; the greater the green concern, the stronger the intention not to purchase from companies perceived to use greenwashed advertising. When consumers use word-of-mouth to communicate about a product, green concern strengthens the negative relationship between the consumer's intent to purchase and the perception of greenwashing. Research suggests that consumers distrust companies that greenwash because they view the act as deceptive. If consumers perceive that a company would realistically benefit from a green marketing claim being true, then it is more likely that the claim and the company will be seen as genuine. Consumers' willingness to purchase green products decreases when they perceive that green attributes compromise product quality, making greenwashing potentially risky even when the consumer or stakeholder is not skeptical of green messaging. Words and phrases often used in green messaging and greenwashing, such as "gentle," can lead consumers to believe the green product is less effective than a non-green option.

Attributions of greenwashing

Eco-labels can be given to a product both by an external organization and by the company itself. This has raised concerns because companies can label a product as green or environmentally friendly by selectively disclosing positive attributes of the product while not disclosing environmental harms. Consumers expect to see eco-labels from both internal and external sources but perceive labels from external sources to be more trustworthy. Researchers from the University of Twente found that uncertified or greenwashed internal eco-labels may still contribute to consumer perceptions of a responsible company, with consumers attributing internal motivation to a company's internal eco-labeling. Other research connecting attribution theory and greenwashing found that consumers often perceive green advertising as greenwashing when companies use green advertisements, attributing the green messaging to corporate self-interest. Green advertising can backfire, particularly when the advertised environmental claim does not match a company's actual environmental engagement.

Implications for green business

Researchers working on consumer perception, psychology, and greenwashing note that companies should "walk the walk" regarding green advertising and behavior to avoid the negative connotations and perceptions of greenwashing. Green marketing, labeling, and advertising are most effective when they match a company's environmental engagement. This is also mediated by the visibility of those environmental engagements: if consumers are unaware of a company's commitment to sustainability or an environmentally-conscious ethos, they cannot factor greenness into their assessment of the company or product.
Exposure to greenwashing can make consumers indifferent to, or generate negative feelings toward, green marketing. Genuinely green businesses must therefore work harder to differentiate themselves from those who use false claims, and consumers may react negatively to valid sustainability claims because of negative experiences with greenwashing. Conversely, concerns about the perception of genuine efforts to develop more environmentally friendly practices can lead to "greenhushing", where a company avoids publicizing these efforts out of concern that it will be accused of greenwashing anyway.

Deterrence

Companies may pursue environmental certification to avoid greenwashing through independent verification of their green claims. For example, the Carbon Trust Standard launched in 2007 with the stated aim "to end 'greenwash' and highlight firms that are genuine about their commitment to the environment." There have been attempts to reduce the impact of greenwashing by exposing it to the public. The Greenwashing Index, created by the University of Oregon in partnership with EnviroMedia Social Marketing, allowed the public to upload and rate examples of greenwashing, but it was last updated in 2012. Research published in the Journal of Business Ethics in 2011 shows that sustainability ratings might deter greenwashing: higher sustainability ratings led to significantly higher brand reputation than lower ratings, and this held regardless of the company's level of corporate social responsibility (CSR) communications. This finding suggests that consumers pay more attention to sustainability ratings than to CSR communications or greenwashing claims. The World Federation of Advertisers released six new guidelines for advertisers in 2022 to prevent greenwashing. These approaches encourage credible environmental claims and more sustainable outcomes.

Regulation

Worldwide, regulations on misleading environmental claims vary from criminal liability to fines or voluntary guidelines.

Australia

The Australian Trade Practices Act punishes companies that provide misleading environmental claims. Any organization found guilty of such claims could face fines; in addition, the guilty party must pay for all expenses incurred while setting the record straight about its product's or the company's actual environmental impact.

Canada

Canada's Competition Bureau, along with the Canadian Standards Association, discourages companies from making "vague claims" about their products' environmental impact. Any claims must be backed up by "readily available data."

European Union

The European Anti-Fraud Office (OLAF) handles investigations that have an environmental or sustainability element, such as the misspending of EU funds intended for green products and the counterfeiting and smuggling of products with the potential to harm the environment and health. It also handles illegal logging and the smuggling of precious wood and timber into the EU (wood laundering). In January 2021, the European Commission, in cooperation with national consumer protection authorities, published a report on its annual survey of consumer websites investigated for violations of EU consumer protection law. The study examined green claims across a wide range of consumer products, concluding that for 42 percent of the websites examined, the claims were likely false and misleading and could well constitute actionable claims for unfair commercial practices.
In the context of escalating concerns regarding the authenticity of corporate ecological sustainability claims, greenwashing has emerged as a significant issue that exposes real gaps in sustainable finance regulation. ESMA has outlined the correlation between the growth of ESG-related funds and greenwashing: the exponential rise of funds integrating vague ESG-related language in their names began after the Paris Agreement (2015) and has been effective in deceptively attracting more investors. The 2020–2024 agenda of DG FISMA addresses greenwashing by reconciling two objectives: increasing capital for sustainable investments and bolstering trust and investor protection in European financial markets. The European Union struck a provisional agreement to mandate new reporting rules for companies with over 250 staff that exceed a turnover threshold: they must disclose environmental, social, and governance (ESG) information, which will help combat greenwashing. These requirements go into effect in 2024. In 2023, the European Commission introduced a proposed ESG regulation aimed at bolstering transparency and integrity within ESG ratings.

Germany

In June 2024, the Federal Constitutional Court of Germany ruled that companies using "climate neutral" in advertising must define what the term means; otherwise, use of the phrase will no longer be permitted because it is too vague.

Norway

Norway's consumer ombudsman has targeted automakers who claim their cars are "green," "clean," or "environmentally friendly" with some of the world's strictest advertising guidelines. Consumer Ombudsman official Bente Øverli said: "Cars cannot do anything good for the environment except less damage than others." Manufacturers risk fines if they fail to drop misleading advertisements. Øverli said she did not know of other countries going so far in cracking down on cars and the environment.

Thailand

The Green Leaf Certification is an evaluation method created by the Association of Southeast Asian Nations (ASEAN) as a metric that rates hotels' efficiency in environmental protection. In Thailand, this certification is believed to help regulate the greenwashing phenomena associated with green hotels. Eco hotels, or "green hotels", are hotels that have adopted sustainable, environmentally-friendly practices in hospitality business operations. Since the development of the tourism industry in the ASEAN, Thailand has surpassed its neighboring countries in inbound tourism, with 9 percent of Thailand's direct GDP contributions coming from the travel and tourism industry in 2015. Because of this growth and reliance on tourism as an economic pillar, Thailand developed "responsible tourism" in the 1990s to promote the well-being of local communities and the environment affected by the industry. However, studies show that green hotel companies' principles and environmental perceptions contradict the basis of corporate social responsibility in responsible tourism. In this context, the Green Leaf Certification aims to keep the hotel industry and its supply chains accountable for corporate social responsibility regarding sustainability by having an independent international organization evaluate a hotel and rate it from one to five leaves.

United Kingdom

The Competition and Markets Authority is the UK's primary competition and consumer authority. In September 2021, it published a Green Claims Code to protect consumers from misleading environmental claims and businesses from unfair competition.
In May 2024, the Financial Conduct Authority introduced anti-greenwashing rules covering sustainability claims made by regulated firms that market financial products or services.

United States

The Federal Trade Commission (FTC) provides voluntary guidelines for environmental marketing claims. These guidelines give the FTC the right to prosecute false and misleading claims, although the guidelines themselves are not enforceable and were instead intended to be followed voluntarily:
Qualifications and disclosures: The Commission traditionally has held that to be effective, any qualifications or disclosures such as those described in the green guides should be sufficiently clear, prominent, and understandable to prevent deception. Clarity of language, relative type size and proximity to the claim being qualified, and an absence of contrary claims that could undercut effectiveness will maximize the likelihood that the qualifications and disclosures are appropriately clear and prominent.
Distinction between benefits of product, package, and service: An environmental marketing claim should be presented in a way that makes clear whether the environmental attribute or benefit being asserted refers to the product, the product's packaging, a service, or a portion or component of the product, package, or service. If the environmental attribute or benefit applies to all but minor, incidental components of a product or package, the claim need not be qualified to identify that fact. There may be exceptions to this general principle. For example, if an unqualified "recyclable" claim is made and the presence of the incidental component significantly limits the ability to recycle the product, then the claim would be deceptive.
Overstatement of environmental attribute: An environmental marketing claim should not be presented in a manner that overstates the environmental attribute or benefit, expressly or by implication. Marketers should avoid implications of significant environmental benefits if the benefit is negligible.
Comparative claims: Environmental marketing claims that include a comparative statement should be presented in a manner that makes the basis for the comparison sufficiently clear to avoid consumer deception. In addition, the advertiser should be able to substantiate the comparison.
The FTC announced in 2010 that it would update its guidelines for environmental marketing claims in an attempt to reduce greenwashing. The revision of the FTC's Green Guides covers a wide range of public input, including hundreds of consumer and industry comments on previously proposed revisions, offering clear guidance on what constitutes misleading information and demanding clear factual evidence. According to FTC Chairman Jon Leibowitz, "The introduction of environmentally-friendly products into the marketplace is a win for consumers who want to purchase greener products and producers who want to sell them." Leibowitz also said such a win-win can only operate if marketers' claims are straightforward and proven. In 2013, the FTC began enforcing these revisions. It cracked down on six different companies; five of the cases concerned false or misleading advertising surrounding the biodegradability of plastics. The FTC charged ECM Biofilms, American Plastic Manufacturing, CHAMP, Clear Choice Housewares, and Carnie Cap with misrepresenting the biodegradability of their plastics treated with additives.
The FTC charged a sixth company, AJM Packaging Corporation, with violating a commission consent order prohibiting companies from using advertising claims based on a product or packaging being "degradable, biodegradable, or photodegradable" without reliable scientific information. The FTC now requires companies to disclose and provide the information that qualifies their environmental claims to ensure transparency.

China

The issue of green marketing and consumerism in China has gained significant attention as the country faces environmental challenges. According to "Green Marketing and Consumerism in China: Analyzing the Literature" by Qingyun Zhu and Joseph Sarkis, China has implemented environmental protection laws to regulate the business and commercial sector. Regulations such as the Environmental Protection Law and the Circular Economy Promotion Law contain provisions prohibiting false environmental advertising, that is, greenwashing. The Chinese government has issued regulations and standards to regulate green advertising and labeling, including the Guidelines for Green Advertising Certification, the Guidelines for Environmental Labeling and Eco-Product Certification, and the Standards for Environmental Protection Product Declaration. These guidelines promote transparency in green marketing and prevent false or misleading claims. The Guidelines for Green Advertising Certification require that green advertising claims be truthful, accurate, and verifiable. These guidelines and certifications require that eco-labels be based on scientific and technical evidence and contain no false or misleading information. The standards also require that eco-labels be easy to understand and not confuse or deceive consumers. The regulations in place for greenwashing, green advertising, and labeling in China are designed to protect consumers and prevent misleading claims. China's climate crisis, sustainability challenges, and greenwashing remain critical and require ongoing attention. The implementation of regulations and guidelines for green advertising and labeling in China aims to promote transparency and prevent false or misleading claims. In an effort to stop this practice, in November 2016 the General Office of the State Council introduced legislation to promote the development of green products, encourage companies to adopt sustainable practices, and call for a unified standard for what could be labeled green. This was a general plan or opinion on the matter, with no specifics on its implementation; however, together with similarly worded legislation and plans issued at the time, it signaled a push toward a unified green product standard. Until then, green products had various standards and guidelines developed by different government agencies or industry associations, resulting in a lack of consistency and coherence. One example of guidelines from that period came from the Ministry of Environmental Protection of China (now known as the Ministry of Ecology and Environment), which issued specifications in 2000, but these guidelines were limited and not widely recognized by industry or consumers. It was not until 2017, with the launch of GB/T standards (a set of national standards and recommendations), that a widespread guideline was set for what constitutes green manufacturing and a green supply chain.
Expanding on these guidelines, in 2019 the State Administration for Market Regulation (SAMR) created regulations for Green Product Labels, symbols used on products to mark that they meet certain environmentally friendly criteria and that certification agencies have verified their manufacturing process. The standards and coverage for green products have increased over time, with changes and improvements to green product standardization still occurring in 2023. In China, the Greenpeace campaign focuses on the problem of air pollution. The campaign aims to address the severe air pollution prevalent in many Chinese communities. It has been working to raise awareness about air pollution's health and environmental impacts, advocate for more robust government policies and regulations to reduce emissions, and encourage a shift toward clean and renewable energy sources. "From 2011 to 2016, we linked global fast fashion brands to toxic chemical pollution in China through their manufacturers. Many multinational companies and local suppliers have stopped using toxic and harmful chemicals. They included Adidas, Benetton, Burberry, Esprit, H&M, Puma, and Zara, among others." The Greenpeace campaign in China has involved various activities, including scientific research, public education, and advocacy efforts. The campaign has organized public awareness events to engage both consumers and policymakers, urging them to take action to improve air quality. "In recent years, Chinese Communist Party general secretary Xi Jinping has committed to controlling the expansion of coal power plants. He has also pledged to stop building new coal power abroad". The campaign seeks to drive public and government interest toward stricter air pollution control measures, promote clean energy technology, and contribute to health, wellness, and sustainability in China. The health of Chinese citizens is at the forefront of this effort, as air pollution is a critical problem in the nation, and reports emphasize that China has prioritized putting people front and center on environmental issues. China's Greenpeace campaigns, like those in other countries, are part of the organization's global efforts to address environmental challenges and promote sustainability.

Related terms

"Bluewashing" is a similar term; however, instead of falsely advertising environmentally friendly practices, companies falsely advertise corporate social responsibility. For example, a company may claim to be fighting for human rights while engaging in unethical production practices such as paying factory employees next to nothing. Carbon emission trading can be similar to greenwashing in that it gives an environmentally-friendly impression but can be counterproductive if carbon is priced too low or if large emitters are given "free credits." For example, Bank of America subsidiary MBNA offers "Eco-Logique" MasterCards that reward Canadian customers with carbon offsets when they use them. Customers may feel that they are nullifying their carbon footprint by purchasing goods with these cards, but only 0.5% of the purchase price goes to buy carbon offsets; the rest of the interchange fee still goes to the bank.

Greenscamming

Greenscamming describes an organization or product taking on a name that falsely implies environmental friendliness. It is related to both greenwashing and greenspeak, and is analogous to aggressive mimicry in biology.
Greenscamming is used in particular by industrial companies and associations that deploy astroturfing organisations to try to dispute scientific findings that threaten their business model. One example is the denial of man-made global warming by companies in the fossil energy sector, driven in part by specially-founded greenscamming organizations. One reason to establish greenscamming organizations is that openly communicating the benefits of activities that damage the environment is difficult. Sociologist Charles Harper stresses that marketing a group called "Coalition to Trash the Environment for Profit" would be difficult. Anti-environment initiatives must therefore give their front organizations deliberately deceptive names if they want to be successful, as surveys show that environmental protection enjoys a social consensus. However, the danger of being exposed as an anti-environmental initiative entails a considerable risk that the greenscamming activities will backfire and be counterproductive for the initiators. Greenscamming organizations are active in organized climate denial. An important financier of greenscamming organizations was the oil company ExxonMobil, which financially supported more than 100 climate denial organizations and spent about 20 million U.S. dollars on greenscamming groups. James Lawrence Powell identified the "admirable" names of many of these organizations, which for the most part sounded very rational, as their most striking common feature. He quotes a list of climate denial organizations drawn up by the Union of Concerned Scientists, which includes 43 organizations funded by Exxon. None had a name that would lead one to infer that climate change denial was its raison d'être. The list is headed by Africa Fighting Malaria, whose website features articles and commentaries opposing ambitious climate mitigation concepts, even though the dangers of malaria could be exacerbated by global warming.

Examples

Examples of greenscamming organizations include the National Wetlands Coalition, Friends of Eagle Mountain, The Sahara Club, The Alliance for Environment and Resources, The Abundant Wildlife Society of North America, the Global Climate Coalition, the National Wilderness Institute, the Environmental Policy Alliance of the Center for Organizational Research and Education, and the American Council on Science and Health. Behind these ostensible environmental protection organizations lie the interests of business sectors. For example, oil drilling companies and real estate developers support the National Wetlands Coalition, while Friends of Eagle Mountain is backed by a mining company that wants to convert open-cast mines into landfills. The Global Climate Coalition was backed by commercial enterprises that fought against government-imposed climate protection measures. Other greenscamming organizations include the U.S. Council for Energy Awareness, backed by the nuclear industry; the Wilderness Impact Research Foundation, representing the interests of loggers and ranchers; and the American Environmental Foundation, representing the interests of landowners. Another greenscamming organization is Northwesterners for More Fish, which had a budget of $2.6 million in 1998. This group opposed conservation measures for endangered fish that restricted the interests of energy companies, aluminum companies, and the region's timber industry, and it tried to discredit environmentalists who promoted fish habitats.
The Center for the Study of Carbon Dioxide and Global Change, the National Environmental Policy Institute, and the coal-industry-funded Information Council on the Environment are also greenscamming organizations. In Germany, this form of mimicry or deception is used by the "European Institute for Climate and Energy" (EIKE), whose name suggests that it is an important scientific research institution. In fact, EIKE is not a scientific institution at all, but a lobby organization that neither has an office nor employs climate scientists; instead, it disseminates fake news on climate issues on its website.

See also

Astongate
Climate bond (green bond)
Coca-Cola Life
Conspicuous conservation
Dieselgate
Eco-capitalism
Eco-nationalism
Ecodesign
Ecolabel
EMAS
Ethics of philanthropy
False advertising
Farm to fork
Fossil fuels lobby
Gasoline additives
Green brands
Greenlash
Green marketing
Green parking lot
Greenscamming
Green transition
Reputation laundering
Sportswashing
Sunshine unit

References

Further reading

Catherine, P. (n.d.). Eco-friendly labelling? It's a lot of 'greenwash'. Toronto Star (Canada). Retrieved from Newspaper Source database.
Greenscamming. The Encyclopedia of World Problems and Human Potential.
New rules aim to clamp down on corporate greenwashing. Reuters. June 26, 2023.

External links

Roberts Environmental Center – ratings of corporate sustainability claims.
Greenwashing in Popular Culture and Art
What is Greenwashing, and Why is it a Problem?
Understanding Greenwashing: How Do I Spot Misleading Sustainability Claims?
Streaming audio of a 2011 radio program on the subject of Green Marketing/Greenwashing, from CBC Radio.
Green claims, European Commission.

Environmentalism
Green politics
Public relations terminology
Deception
Environmental social science concepts
1980s neologisms
Propaganda
Greenwashing
[ "Environmental_science" ]
8,660
[ "Environmental social science concepts", "Environmental social science" ]
325,260
https://en.wikipedia.org/wiki/Figurate%20number
The term figurate number is used by different writers for members of different sets of numbers, generalizing from triangular numbers to different shapes (polygonal numbers) and different dimensions (polyhedral numbers). The term can mean:
a polygonal number;
a number represented as a discrete $r$-dimensional regular geometric pattern of $r$-dimensional balls, such as a polygonal number (for $r = 2$) or a polyhedral number (for $r = 3$);
a member of the subset of the sets above containing only triangular numbers, pyramidal numbers, and their analogs in other dimensions.

Terminology

Some kinds of figurate number were discussed in the 16th and 17th centuries under the name "figural number". In historical works about Greek mathematics the preferred term used to be figured number. In a use going back to Jacob Bernoulli's Ars Conjectandi, the term figurate number is used for triangular numbers made up of successive integers, tetrahedral numbers made up of successive triangular numbers, etc. These turn out to be the binomial coefficients. In this usage the square numbers (4, 9, 16, 25, ...) would not be considered figurate numbers when viewed as arranged in a square. A number of other sources use the term figurate number as synonymous for the polygonal numbers, either just the usual kind or both those and the centered polygonal numbers.

History

The mathematical study of figurate numbers is said to have originated with Pythagoras, possibly based on Babylonian or Egyptian precursors. Generating whichever class of figurate numbers the Pythagoreans studied using gnomons is also attributed to Pythagoras. Unfortunately, there is no trustworthy source for these claims, because all surviving writings about the Pythagoreans are from centuries later. Speusippus is the earliest source to expose the view that ten, as the fourth triangular number, was in fact the tetractys, supposed to be of great importance for Pythagoreanism. Figurate numbers were a concern of the Pythagorean worldview. It was well understood that some numbers could have many figurations, e.g. 36 is both a square and a triangle and also various rectangles. The modern study of figurate numbers goes back to Pierre de Fermat, specifically the Fermat polygonal number theorem. Later, it became a significant topic for Euler, who gave an explicit formula for all triangular numbers that are also perfect squares, among many other discoveries relating to figurate numbers. Figurate numbers have played a significant role in modern recreational mathematics. In research mathematics, figurate numbers are studied by way of the Ehrhart polynomials, polynomials that count the number of integer points in a polygon or polyhedron when it is expanded by a given factor.

Triangular numbers and their analogs in higher dimensions

The triangular numbers for $n = 1, 2, 3, \ldots$ are the result of the juxtaposition of the linear numbers (linear gnomons) for $n = 1, 2, 3, \ldots$. These are the binomial coefficients $\binom{n+1}{2}$. This is the case $r = 2$ of the fact that the $r$th diagonal of Pascal's triangle for $r \ge 0$ consists of the figurate numbers for the $r$-dimensional analogs of triangles ($r$-dimensional simplices). The simplicial polytopic numbers for $r = 1, 2, 3, 4, \ldots$ are:
$\binom{n}{1}$ (linear numbers),
$\binom{n+1}{2}$ (triangular numbers),
$\binom{n+2}{3}$ (tetrahedral numbers),
$\binom{n+3}{4}$ (pentachoric numbers, pentatopic numbers, 4-simplex numbers),
...
$\binom{n+r-1}{r}$ ($r$-topic numbers, $r$-simplex numbers).
The terms square number and cubic number derive from their geometric representation as a square or cube. The difference of two positive triangular numbers is a trapezoidal number.
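These binomial-coefficient formulas are easy to check numerically. A minimal Python sketch (the function name simplex_number is illustrative, not from any source) generates the $n$th figurate number of the $r$-dimensional simplex using the standard-library comb function:

    from math import comb

    def simplex_number(r, n):
        # n-th figurate number of the r-dimensional simplex: C(n + r - 1, r)
        return comb(n + r - 1, r)

    # Triangular numbers (r = 2): 1, 3, 6, 10, 15
    print([simplex_number(2, n) for n in range(1, 6)])
    # Tetrahedral numbers (r = 3): 1, 4, 10, 20, 35
    print([simplex_number(3, n) for n in range(1, 6)])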
Gnomon

The gnomon is the piece added to a figurate number to transform it to the next larger one. For example, the gnomon of the square number is the odd number, of the general form $2n + 1$, $n = 0, 1, 2, 3, \ldots$. The square of size 8 composed of gnomons looks like this (each gnomon consists of the cells carrying the same label):
1 2 3 4 5 6 7 8
2 2 3 4 5 6 7 8
3 3 3 4 5 6 7 8
4 4 4 4 5 6 7 8
5 5 5 5 5 6 7 8
6 6 6 6 6 6 7 8
7 7 7 7 7 7 7 8
8 8 8 8 8 8 8 8
To transform from the $n$-square (the square of size $n$) to the $(n+1)$-square, one adjoins $2n + 1$ elements: one to the end of each row ($n$ elements), one to the end of each column ($n$ elements), and a single one to the corner. For example, when transforming the 7-square to the 8-square, we add 15 elements; these adjunctions are the 8s in the above figure. This gnomonic technique also provides a mathematical proof that the sum of the first $n$ odd numbers is $n^2$; the figure illustrates $1 + 3 + 5 + 7 + 9 + 11 + 13 + 15 = 64 = 8^2$. There is a similar gnomon with centered hexagonal numbers adding up to make cubes of each integer number.

Notes

References

Integer sequences
Figurate number
[ "Mathematics" ]
962
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Figurate numbers", "Numbers", "Number theory" ]
325,329
https://en.wikipedia.org/wiki/Cold%20War
The Cold War was a period of global geopolitical rivalry between the United States and the Soviet Union (USSR) and their respective allies, the capitalist Western Bloc and communist Eastern Bloc, which lasted from 1947 until the dissolution of the Soviet Union in 1991. The term cold war is used because there was no direct fighting between the two superpowers, though each supported opposing sides in regional conflicts known as proxy wars. In addition to the struggle for ideological dominance and economic influence and an arms race in both conventional and nuclear weapons, the Cold War was expressed through technological competitions such as the Space Race, espionage, propaganda campaigns and psychological warfare, far-reaching trade embargoes, and sports diplomacy. After the end of World War II in 1945, during which the US and USSR had been allies, the USSR installed satellite governments in its occupied territories in Eastern Europe and North Korea by 1949, resulting in the political division of Europe by an "Iron Curtain" (including between the states of East and West Germany); it also allied with the People's Republic of China, founded in 1949. The US declared the Truman Doctrine of "containment" of communism in 1947, launched the Marshall Plan in 1948 to assist in Western Europe's economic recovery, and founded the NATO military alliance in 1949 (which was matched by the Soviet-led Warsaw Pact in 1955). A major proxy war was the Korean War of 1950 to 1953, which ended in stalemate. US involvement in regime change during the Cold War included support for anti-communist and right-wing dictatorships, governments, and uprisings across the world, while Soviet involvement in regime change included the funding of left-wing parties, wars of independence, revolutions and dictatorships. As nearly all the colonial states underwent decolonization and gained independence during the period, many became Third World battlefields of the Cold War. Both powers used economic aid in an attempt to win the loyalty of non-aligned countries. The Cuban Revolution of 1959 installed the first communist regime in the Western Hemisphere, and in 1962, the Cuban Missile Crisis began after deployments of US missiles in Europe and Soviet missiles in Cuba; it is considered the closest the Cold War came to escalating into nuclear war. Another major proxy conflict was the Vietnam War of 1955 to 1975, which ended in defeat for the US. The USSR solidified its domination of Eastern Europe with its crushing of the Hungarian Revolution in 1956 and the Warsaw Pact invasion of Czechoslovakia in 1968. Relations between the USSR and China broke down by 1961, with the Sino-Soviet split bringing the two states to the brink of war amid a border conflict in 1969. In 1972, the US initiated diplomatic contacts with China and the US and USSR signed a series of treaties limiting their nuclear arsenals during a period known as détente. In 1979, the toppling of US-allied governments in Iran and Nicaragua and the outbreak of the Soviet–Afghan War again raised tensions. In 1985, Mikhail Gorbachev became leader of the USSR and expanded political freedoms in the Eastern Bloc, which contributed to the revolutions of 1989 and the dissolution of the Soviet Union in 1991, ending the Cold War.

Terminology

Writer George Orwell used cold war, as a general term, in his essay "You and the Atomic Bomb", published 19 October 1945.
Contemplating a world living in the shadow of the threat of nuclear warfare, Orwell looked at James Burnham's predictions of a polarized world. In The Observer of 10 March 1946, Orwell wrote, "after the Moscow conference last December, Russia began to make a 'cold war' on Britain and the British Empire." The first use of the term to describe the specific post-war geopolitical confrontation between the Soviet Union and the United States came in a speech by Bernard Baruch, an influential advisor to Democratic presidents, on 16 April 1947. The speech, written by journalist Herbert Bayard Swope, proclaimed, "we are today in the midst of a cold war." Newspaper columnist Walter Lippmann gave the term wide currency with his book The Cold War. When asked in 1947 about the source of the term, Lippmann traced it to a French term from the 1930s, la guerre froide.

Background and periodization

The roots of the Cold War can be traced to diplomatic and military tensions preceding World War II. The 1917 Russian Revolution and the subsequent Treaty of Brest-Litovsk, where Soviet Russia ceded vast territories to Germany, deepened distrust among the Western Allies. Allied intervention in the Russian Civil War further complicated relations, and although the Soviet Union later allied with Western powers to defeat Nazi Germany, this cooperation was strained by mutual suspicions. In the immediate aftermath of World War II, disagreements about the future of Europe, particularly Eastern Europe, became central. The Soviet Union's establishment of communist regimes in the countries it had liberated from Nazi control, enforced by the presence of the Red Army, alarmed the US and UK. Western leaders saw this as Soviet expansionism, clashing with their vision of a democratic Europe. Economically, the divide was sharpened with the introduction of the Marshall Plan in 1947, a US initiative to provide financial aid to rebuild Europe and prevent the spread of communism by stabilizing capitalist economies. The Soviet Union rejected the Marshall Plan, seeing it as an effort by the US to impose its influence on Europe. In response, the Soviet Union established Comecon (Council for Mutual Economic Assistance) to foster economic cooperation among communist states. The United States and its Western European allies sought to strengthen their bonds and used the policy of containment against Soviet influence; they accomplished this most notably through the formation in 1949 of NATO, essentially a defensive agreement. The Soviet Union countered with the Warsaw Pact in 1955, which had a similar effect in the Eastern Bloc. As the Soviet Union by that time already had an armed presence and political domination all over its eastern satellite states, the pact has long been considered superfluous. Although nominally a defensive alliance, the Warsaw Pact's primary function was to safeguard Soviet hegemony over its Eastern European satellites, with the pact's only direct military actions having been the invasions of its own member states to keep them from breaking away; in the 1960s, the pact evolved into a multilateral alliance, in which the non-Soviet Warsaw Pact members gained significant scope to pursue their own interests. In 1961, Soviet-allied East Germany constructed the Berlin Wall to prevent the citizens of East Berlin from fleeing to West Berlin, at the time part of United States-allied West Germany.
Major crises of this phase included the Berlin Blockade of 1948–1949, the Chinese Communist Revolution of 1945–1949, the Korean War of 1950–1953, the Hungarian Revolution of 1956 and the Suez Crisis of that same year, the Berlin Crisis of 1961, the Cuban Missile Crisis of 1962, and the Vietnam War of 1955–1975. Both superpowers competed for influence in Latin America and the Middle East, and in the decolonising states of Africa, Asia, and Oceania. Following the Cuban Missile Crisis, this phase of the Cold War saw the Sino-Soviet split complicate relations within the Communist sphere, leading to the Sino-Soviet border conflict, while France, a Western Bloc state, began to demand greater autonomy of action. The Warsaw Pact invasion of Czechoslovakia occurred to suppress the Prague Spring of 1968, while the United States experienced internal turmoil from the civil rights movement and opposition to United States involvement in the Vietnam War. In the 1960s–1970s, an international peace movement took root among citizens around the world. Movements against nuclear weapons testing and for nuclear disarmament took place, with large anti-war protests. By the 1970s, both sides had started making allowances for peace and security, ushering in a period of détente that saw the Strategic Arms Limitation Talks and the 1972 visit by Richard Nixon to China, which opened relations with China as a strategic counterweight to the Soviet Union. A number of self-proclaimed Marxist–Leninist governments were formed in the second half of the 1970s in developing countries, including Angola, Mozambique, Ethiopia, Cambodia, Afghanistan, and Nicaragua. Détente collapsed at the end of the decade with the beginning of the Soviet–Afghan War in 1979. The early 1980s were another period of elevated tension. The Reagan Doctrine led to increased diplomatic, military, and economic pressures on the Soviet Union, which at the time was undergoing the Era of Stagnation. This phase saw the new Soviet leader Mikhail Gorbachev introduce the liberalizing reforms of glasnost ("openness") and perestroika ("reorganization") and end Soviet involvement in Afghanistan in 1989. Pressures for national sovereignty grew stronger in Eastern Europe, and Gorbachev refused to continue supporting the Communist governments militarily. The fall of the Iron Curtain after the Pan-European Picnic and the Revolutions of 1989, which represented a peaceful revolutionary wave with the exception of the Romanian revolution and the Afghan Civil War (1989–1992), overthrew almost all of the Marxist–Leninist regimes of the Eastern Bloc. The Communist Party of the Soviet Union itself lost control in the country and was banned following the Soviet coup attempt of August 1991. This in turn led to the formal dissolution of the Soviet Union in December 1991 and the collapse of Communist governments across much of Africa and Asia. The Russian Federation became the Soviet Union's successor state, while many of the other republics emerged as fully independent post-Soviet states. The United States was left as the world's sole superpower.

Containment, Truman Doctrine, Korean War (1947–1953)

Iron Curtain, Iran, Turkey, Greece, and Poland

In February 1946, George F. Kennan's "Long Telegram" from Moscow to Washington helped to articulate the US government's increasingly hard line against the Soviets, which would become the basis for US strategy toward the Soviet Union.
The telegram galvanized a policy debate that would eventually shape the Truman administration's Soviet policy. Washington's opposition to the Soviets accumulated after broken promises by Stalin and Molotov concerning Europe and Iran. Following the World War II Anglo-Soviet invasion of Iran, the country was occupied by the Red Army in the far north and by the British in the south. Iran was used by the United States and Britain to supply the Soviet Union, and the Allies agreed to withdraw from Iran within six months after the cessation of hostilities. However, when this deadline came, the Soviets remained in Iran under the guise of the Azerbaijan People's Government and the Kurdish Republic of Mahabad. On 5 March, former British prime minister Winston Churchill delivered his famous "Iron Curtain" speech calling for an Anglo-American alliance against the Soviets, whom he accused of establishing an "iron curtain" dividing Europe. A week later, on 13 March, Stalin responded vigorously to the speech, saying Churchill could be compared to Adolf Hitler insofar as he advocated the racial superiority of English-speaking nations so that they could satisfy their hunger for world domination, and that such a declaration was "a call for war on the USSR." The Soviet leader also dismissed the accusation that the USSR was exerting increasing control over the countries lying in its sphere. He argued that there was nothing surprising in "the fact that the Soviet Union, anxious for its future safety, [was] trying to see to it that governments loyal in their attitude to the Soviet Union should exist in these countries." Soviet territorial demands to Turkey regarding the Dardanelles in the Turkish Straits crisis and Black Sea border disputes were also a major factor in increasing tensions. In September, the Soviet side produced the Novikov telegram, sent by the Soviet ambassador to the US but commissioned and "co-authored" by Vyacheslav Molotov; it portrayed the US as being in the grip of monopoly capitalists who were building up military capability "to prepare the conditions for winning world supremacy in a new war". On 6 September 1946, James F. Byrnes delivered a speech in Germany repudiating the Morgenthau Plan (a proposal to partition and de-industrialize post-war Germany) and warning the Soviets that the US intended to maintain a military presence in Europe indefinitely. As Byrnes stated a month later, "The nub of our program was to win the German people ... it was a battle between us and Russia over minds ..." In December, the Soviets agreed to withdraw from Iran after persistent US pressure, an early success of containment policy. By 1947, US president Harry S. Truman was outraged by the perceived resistance of the Soviet Union to American demands in Iran, Turkey, and Greece, as well as by the Soviet rejection of the Baruch Plan on nuclear weapons. In February 1947, the British government announced that it could no longer afford to finance the Kingdom of Greece in its civil war against Communist-led insurgents. In the same month, Stalin conducted the rigged 1947 Polish legislative election, which constituted an open breach of the Yalta Agreement. The US government responded by adopting a policy of containment, with the goal of stopping the spread of communism. Truman delivered a speech calling for the allocation of $400 million to intervene in the war and unveiled the Truman Doctrine, which framed the conflict as a contest between free peoples and totalitarian regimes.
American policymakers accused the Soviet Union of conspiring against the Greek royalists in an effort to expand Soviet influence even though Stalin had told the Communist Party to cooperate with the British-backed government. Enunciation of the Truman Doctrine marked the beginning of a US bipartisan defense and foreign policy consensus between Republicans and Democrats focused on containment and deterrence that weakened during and after the Vietnam War, but ultimately persisted thereafter. Moderate and conservative parties in Europe, as well as social democrats, gave virtually unconditional support to the Western alliance, while European and American Communists, financed by the KGB and involved in its intelligence operations, adhered to Moscow's line, although dissent began to appear after 1956. Other critiques of the consensus policy came from anti-Vietnam War activists, the Campaign for Nuclear Disarmament, and the anti-nuclear movement.
Marshall Plan, Czechoslovak coup and formation of two German states
In early 1947, France, Britain and the United States unsuccessfully attempted to reach an agreement with the Soviet Union for a plan envisioning an economically self-sufficient Germany, including a detailed accounting of the industrial plants, goods and infrastructure already taken by the Soviets. In June 1947, in accordance with the Truman Doctrine, the United States enacted the Marshall Plan, a pledge of economic assistance for all European countries willing to participate. Under the plan, which President Harry S. Truman signed on 3 April 1948, the US government gave Western European countries over $13 billion (equivalent to $189 billion in 2016). Later, the program led to the creation of the OECD. The plan's aim was to rebuild the democratic and economic systems of Europe and to counter perceived threats to the European balance of power, such as communist parties seizing control. The plan also stated that European prosperity was contingent upon German economic recovery. In July 1947, Truman signed the National Security Act of 1947, creating a unified Department of Defense, the Central Intelligence Agency (CIA), and the National Security Council (NSC). These would become the main bureaucracies for US defense policy in the Cold War. Stalin believed economic integration with the West would allow Eastern Bloc countries to escape Soviet control, and that the US was trying to buy a pro-US re-alignment of Europe. Stalin therefore prevented Eastern Bloc nations from receiving Marshall Plan aid. The Soviet Union's alternative to the Marshall Plan, which was purported to involve Soviet subsidies and trade with central and eastern Europe, became known as the Molotov Plan (later institutionalized in January 1949 as the Council for Mutual Economic Assistance). Stalin was also fearful of a reconstituted Germany; his vision of a post-war Germany did not include the ability to rearm or pose any kind of threat to the Soviet Union. In early 1948, Czech Communists executed a coup d'état in Czechoslovakia (resulting in the formation of the Czechoslovak Socialist Republic), the only Eastern Bloc state that the Soviets had permitted to retain democratic structures. The public brutality of the coup shocked Western powers more than any event up to that point and swept away the last vestiges of opposition to the Marshall Plan in the United States Congress.
In the immediate aftermath of the crisis, the London Six-Power Conference was held, resulting in the Soviet boycott of the Allied Control Council and its incapacitation, an event marking the beginning of the full-blown Cold War, ending any hopes at the time for a single German government, and leading to the formation in 1949 of the Federal Republic of Germany and the German Democratic Republic. The twin policies of the Truman Doctrine and the Marshall Plan led to billions in economic and military aid for Western Europe, Greece, and Turkey. With US assistance, the Greek military won its civil war. Under the leadership of Alcide De Gasperi, the Italian Christian Democrats defeated the powerful Communist–Socialist alliance in the elections of 1948. Outside of Europe, the United States also began to express interest in the development of many other countries, so that they would not fall under the sway of Eastern Bloc communism. In his January 1949 inaugural address, Truman declared for the first time in US history that international development would be a key part of US foreign policy. The resulting program later became known as the Point Four Program because it was the fourth point raised in his address.
Espionage
All major powers engaged in espionage, using a great variety of spies, double agents, moles, and new technologies such as the tapping of telephone cables. The Soviet KGB ("Committee for State Security"), the bureau responsible for foreign espionage and internal surveillance, was famous for its effectiveness. The most famous Soviet operation involved its atomic spies, who delivered crucial information from the United States' Manhattan Project, leading the USSR to detonate its first nuclear weapon in 1949, four years after the American detonation and much sooner than expected. A massive network of informants throughout the Soviet Union was used to monitor dissent from official Soviet politics and morals. Although to an extent disinformation had always existed, the term itself was invented, and the strategy formalized, by a black propaganda department of the Soviet KGB. Based on the amount of top-secret Cold War archival information that has been released, historian Raymond L. Garthoff concludes there probably was parity in the quantity and quality of secret information obtained by each side. However, the Soviets probably had an advantage in terms of HUMINT (human intelligence or interpersonal espionage) and "sometimes in its reach into high policy circles." In terms of decisive impact, however, he concludes:
We also can now have high confidence in the judgment that there were no successful "moles" at the political decision-making level on either side. Similarly, there is no evidence, on either side, of any major political or military decision that was prematurely discovered through espionage and thwarted by the other side. There also is no evidence of any major political or military decision that was crucially influenced (much less generated) by an agent of the other side.
According to historian Robert L. Benson, "Washington's forte was 'signals' intelligence - the procurement and analysis of coded foreign messages." This strength led to the Venona project, or Venona intercepts, which monitored the communications of Soviet intelligence agents. Daniel Patrick Moynihan wrote that the Venona project contained "overwhelming proof of the activities of Soviet spy networks in America, complete with names, dates, places, and deeds."
The Venona project was kept highly secret even from policymakers until the Moynihan Commission in 1995. Despite this, the decryption project had already been betrayed and dispatched to the USSR by Kim Philby and Bill Weisband in 1946, as the US discovered by 1950. Nonetheless, the Soviets had to keep their discovery of the program secret as well, and continued leaking their own information, some of which was still useful to the American program. According to Moynihan, even President Truman may not have been fully informed of Venona, which may have left him unaware of the extent of Soviet espionage. Clandestine atomic spies from the Soviet Union, who infiltrated the Manhattan Project during World War II, played a major role in increasing tensions that led to the Cold War. In addition to usual espionage, the Western agencies paid special attention to debriefing Eastern Bloc defectors. Edward Jay Epstein wrote that the CIA understood that the KGB used "provocations", or fake defections, as a trick to embarrass Western intelligence and establish Soviet double agents. As a result, from 1959 to 1973, the CIA required that Eastern Bloc defectors go through a counterintelligence investigation before being recruited as a source of intelligence. During the late 1970s and 1980s, the KGB perfected its use of espionage to sway and distort diplomacy. Active measures were "clandestine operations designed to further Soviet foreign policy goals," consisting of disinformation, forgeries, leaks to foreign media, and the channeling of aid to militant groups. Retired KGB Major General Oleg Kalugin described active measures as "the heart and soul of Soviet intelligence." During the Sino-Soviet split, "spy wars" also occurred between the USSR and the PRC.
Cominform and the Tito–Stalin split
In September 1947, the Soviets created Cominform to impose orthodoxy within the international communist movement and tighten political control over Soviet satellites through coordination of communist parties in the Eastern Bloc. Cominform faced an embarrassing setback the following June, when the Tito–Stalin split obliged its members to expel Yugoslavia, which remained communist but adopted a non-aligned position and began accepting financial aid from the US. Besides Berlin, the status of the city of Trieste was at issue. Until the break between Tito and Stalin, the Western powers and the Eastern Bloc faced each other uncompromisingly. In addition to capitalism and communism, Italians and Slovenes, monarchists and republicans, as well as war winners and losers, often faced each other irreconcilably. The neutral buffer state, the Free Territory of Trieste, founded in 1947 under the United Nations, was split up and dissolved in 1954 and 1975, also because of the détente between the West and Tito.
Berlin Blockade
The US and Britain merged their western German occupation zones into "Bizone" (1 January 1947, later "Trizone" with the addition of France's zone, April 1949). As part of the economic rebuilding of Germany, in early 1948, representatives of a number of Western European governments and the United States announced an agreement for a merger of western German areas into a federal governmental system. In addition, in accordance with the Marshall Plan, they began to re-industrialize and rebuild the West German economy, including the introduction of a new Deutsche Mark currency to replace the old Reichsmark currency that the Soviets had debased.
The US had secretly decided that a unified and neutral Germany was undesirable, with Walter Bedell Smith telling General Eisenhower that "in spite of our announced position, we really do not want nor intend to accept German unification on any terms that the Russians might agree to, even though they seem to meet most of our requirements." Shortly thereafter, Stalin instituted the Berlin Blockade (June 1948 – May 1949), one of the first major crises of the Cold War, preventing Western supplies from reaching West Germany's exclave of West Berlin. The United States (primarily), Britain, France, Canada, Australia, New Zealand, and several other countries began the massive "Berlin airlift", supplying West Berlin with provisions despite Soviet threats. The Soviets mounted a public relations campaign against the policy change. The East Berlin communists again attempted to disrupt the Berlin municipal elections, which were held on 5 December 1948 and produced a turnout of 86% and an overwhelming victory for the non-communist parties. The results effectively divided the city into East and West, the latter comprising US, British and French sectors. Some 300,000 Berliners demonstrated and urged the international airlift to continue, and US Air Force pilot Gail Halvorsen created "Operation Vittles", which supplied candy to German children. The Airlift was as much a logistical as a political and psychological success for the West; it firmly linked West Berlin to the United States. In May 1949, Stalin lifted the blockade. In 1952, Stalin repeatedly proposed a plan to unify East and West Germany under a single government chosen in elections supervised by the United Nations, if the new Germany were to stay out of Western military alliances, but this proposal was turned down by the Western powers. Some sources dispute the sincerity of the proposal.
Beginnings of NATO and Radio Free Europe
Britain, France, the United States, Canada and eight other western European countries signed the North Atlantic Treaty of April 1949, establishing the North Atlantic Treaty Organization (NATO). That August, the first Soviet atomic device was detonated in Semipalatinsk, Kazakh SSR. Following Soviet refusals to participate in a German rebuilding effort set forth by western European countries in 1948, the US, Britain and France spearheaded the establishment of the Federal Republic of Germany from the three Western zones of occupation in April 1949. The Soviet Union proclaimed its zone of occupation in Germany the German Democratic Republic that October. Media in the Eastern Bloc was an organ of the state, completely reliant on and subservient to the communist party. Radio and television organizations were state-owned, while print media was usually owned by political organizations, mostly by the local communist party. Soviet radio broadcasts used Marxist rhetoric to attack capitalism, emphasizing themes of labor exploitation, imperialism and war-mongering. Along with the broadcasts of the BBC and the Voice of America to Central and Eastern Europe, a major propaganda effort begun in 1949 was Radio Free Europe/Radio Liberty, dedicated to bringing about the peaceful demise of the communist system in the Eastern Bloc. Radio Free Europe attempted to achieve these goals by serving as a surrogate home radio station, an alternative to the controlled and party-dominated domestic press in the Soviet Bloc.
Radio Free Europe was a product of some of the most prominent architects of America's early Cold War strategy, especially those who believed that the Cold War would eventually be fought by political rather than military means, such as George F. Kennan. Soviet and Eastern Bloc authorities used various methods to suppress Western broadcasts, including radio jamming. American policymakers, including Kennan and John Foster Dulles, acknowledged that the Cold War was in its essence a war of ideas. The United States, acting through the CIA, funded a long list of projects to counter the communist appeal among intellectuals in Europe and the developing world. The CIA also covertly sponsored a domestic propaganda campaign called Crusade for Freedom.
German rearmament
The rearmament of West Germany was achieved in the early 1950s. Its main promoter was Konrad Adenauer, the chancellor of West Germany, with France the main opponent. Washington had the decisive voice. Rearmament was strongly supported by the Pentagon (the US military leadership) and weakly opposed by President Truman; the State Department was ambivalent. The outbreak of the Korean War in June 1950 changed the calculations, and Washington now gave full support. That also involved naming Dwight D. Eisenhower in charge of NATO forces and sending more American troops to West Germany. There was a strong promise that West Germany would not develop nuclear weapons. Widespread fears of another rise of German militarism required the new military to operate within an alliance framework under NATO command. In 1955, Washington secured full German membership of NATO. In May 1953, Lavrentiy Beria, by then in a government post, made an unsuccessful proposal to allow the reunification of a neutral Germany to prevent West Germany's incorporation into NATO, but his attempts were cut short when he was executed several months later during a Soviet power struggle. The events led to the establishment of the Bundeswehr, the West German military, in 1955.
Chinese Civil War, SEATO, and NSC 68
In 1949, Mao Zedong's People's Liberation Army defeated Chiang Kai-shek's United States-backed Kuomintang (KMT) Nationalist Government in China. The KMT-controlled territory was now restricted to the island of Taiwan, where the Nationalist government remains to this day. The Kremlin promptly created an alliance with the newly formed People's Republic of China. According to Norwegian historian Odd Arne Westad, the communists won the Civil War because they made fewer military mistakes than Chiang Kai-shek did, and because in his search for a powerful centralized government, Chiang antagonized too many interest groups in China. Moreover, his party was weakened during the war against Japan. Meanwhile, the communists told different groups, such as the peasants, exactly what they wanted to hear, and cloaked themselves in Chinese nationalism. Confronted with the communist revolution in China and the end of the American atomic monopoly in 1949, the Truman administration quickly moved to escalate and expand its containment doctrine. In NSC 68, a secret 1950 document, the National Security Council proposed reinforcing pro-Western alliance systems and quadrupling spending on defense. Truman, under the influence of advisor Paul Nitze, saw containment as implying complete rollback of Soviet influence in all its forms.
United States officials moved to expand this version of containment into Asia, Africa, and Latin America, in order to counter revolutionary nationalist movements, often led by communist parties financed by the USSR. In this way, the US would exercise "preponderant power," oppose neutrality, and establish global hegemony. In the early 1950s (a period sometimes known as "Pactomania"), the US formalized a series of alliances with Japan (a former WWII enemy), South Korea, Taiwan, Australia, New Zealand, Thailand and the Philippines (notably ANZUS in 1951 and SEATO in 1954), thereby guaranteeing the United States a number of long-term military bases.
Korean War
One of the more significant examples of the implementation of containment was the US-led United Nations intervention in the Korean War. In June 1950, after years of mutual hostilities, Kim Il Sung's North Korean People's Army invaded South Korea. Stalin had been reluctant to support the invasion but ultimately sent advisers. To Stalin's surprise, the United Nations Security Council backed the defense of South Korea, although the Soviets were then boycotting meetings in protest of the fact that Taiwan (Republic of China), not the People's Republic of China, held a permanent seat on the council. A UN force of sixteen countries faced North Korea, although 40 percent of the troops were South Korean, and about 50 percent were from the United States. The US initially seemed to follow containment, only pushing North Korea back across the 38th parallel and restoring South Korea's sovereignty while allowing North Korea's survival as a state. However, the success of the Inchon landing inspired the US/UN forces to pursue a rollback strategy instead and to overthrow communist North Korea, thereby allowing nationwide elections under UN auspices. General Douglas MacArthur then advanced into North Korea. The Chinese, fearful of a possible US invasion, sent in a large army and pushed the UN forces back below the 38th parallel. The episode was used to support the wisdom of the containment doctrine as opposed to rollback. The Communists were later pushed back to roughly the original border, with minimal changes. Among other effects, the Korean War galvanized NATO to develop a military structure. The Korean Armistice Agreement was signed in July 1953.
Nuclear arms race and escalation (1953–1962)
Khrushchev, Eisenhower, and de-Stalinization
In 1953, changes in political leadership on both sides shifted the dynamic of the Cold War. Dwight D. Eisenhower was inaugurated president that January. During the last 18 months of the Truman administration, the American defense budget had quadrupled, and Eisenhower moved to reduce military spending by a third while continuing to fight the Cold War effectively. Joseph Stalin died in 1953. Nikita Khrushchev won the ensuing power struggle by the mid-1950s. In 1956, he denounced Stalin and proceeded to ease controls over the party and society (de-Stalinization). On 18 November 1956, while addressing Western dignitaries at a reception in Moscow's Polish embassy, Khrushchev infamously declared, "Whether you like it or not, history is on our side. We will bury you", shocking everyone present. He would later claim he had not been referring to nuclear war, but the "historically fated victory of communism over capitalism."
Eisenhower's secretary of state, John Foster Dulles, initiated a "New Look" for the containment strategy, calling for a greater reliance on nuclear weapons against US enemies in wartime. Dulles also enunciated the doctrine of "massive retaliation", threatening a severe US response to any Soviet aggression. Possessing nuclear superiority allowed Eisenhower, for example, to face down Soviet threats to intervene in the Middle East during the 1956 Suez Crisis. The declassified US plans for retaliatory nuclear strikes in the late 1950s included the "systematic destruction" of 1,200 major urban centers in the Soviet Bloc and China, including Moscow, East Berlin and Beijing. In spite of these events, there were substantial hopes for détente when an upswing in diplomacy took place in 1959, including a two-week visit by Khrushchev to the US and plans for a two-power summit in May 1960. The latter was disrupted by the U-2 spy plane scandal, however, in which Eisenhower was caught lying about the intrusion of American surveillance aircraft into Soviet territory.
Warsaw Pact and Hungarian Revolution
While Stalin's death in 1953 slightly relaxed tensions, the situation in Europe remained an uneasy armed truce. The Soviets, who had already created a network of mutual assistance treaties in the Eastern Bloc by 1949, established a formal alliance therein, the Warsaw Pact, in 1955. It stood opposed to NATO. The Hungarian Revolution of 1956 occurred shortly after Khrushchev arranged the removal of Hungary's Stalinist leader Mátyás Rákosi. In response to a popular anti-communist uprising, the new regime formally disbanded the secret police, declared its intention to withdraw from the Warsaw Pact and pledged to re-establish free elections. The Soviet Army invaded. Thousands of Hungarians were killed; many more were arrested, imprisoned or deported to the Soviet Union, and approximately 200,000 Hungarians fled Hungary. Hungarian leader Imre Nagy and others were executed following secret trials. From 1957 through 1961, Khrushchev openly and repeatedly threatened the West with nuclear annihilation. He claimed that Soviet missile capabilities were far superior to those of the United States, capable of wiping out any American or European city. According to John Lewis Gaddis, however, Khrushchev rejected Stalin's "belief in the inevitability of war." The new leader declared his ultimate goal was "peaceful coexistence". In Khrushchev's formulation, peace would allow capitalism to collapse on its own, as well as give the Soviets time to boost their military capabilities; this view held for decades, until Gorbachev's later "new thinking" envisioned peaceful coexistence as an end in itself rather than a form of class struggle. The events in Hungary produced ideological fractures within the communist parties of the world, particularly in Western Europe, with great declines in membership, as many in both western and socialist countries felt disillusioned by the brutal Soviet response. The communist parties in the West would never recover.
Rapacki Plan and Berlin Crisis of 1958–1959
In 1957, Polish foreign minister Adam Rapacki proposed the Rapacki Plan for a nuclear free zone in central Europe. Public opinion tended to be favorable in the West, but the plan was rejected by the leaders of West Germany, Britain, France and the United States. They feared it would leave the powerful conventional armies of the Warsaw Pact dominant over the weaker NATO armies.
In November 1958, Khrushchev made an unsuccessful attempt to turn all of Berlin into an independent, demilitarized "free city". He gave the United States, Great Britain and France a six-month ultimatum to withdraw their troops from the sectors of West Berlin, or he would transfer control of Western access rights to the East Germans. Khrushchev had earlier explained to Mao Zedong that "Berlin is the testicles of the West. Every time I want to make the West scream, I squeeze on Berlin." NATO formally rejected the ultimatum in mid-December, and Khrushchev withdrew it in return for a Geneva conference on the German question.
American military buildup
Like Truman and Eisenhower, John F. Kennedy supported containment. President Eisenhower's New Look policy had emphasized the use of less expensive nuclear weapons to deter Soviet aggression by threatening massive nuclear attacks on all of the Soviet Union. Because nuclear weapons were much cheaper than maintaining a large standing army, Eisenhower cut conventional forces to save money. Kennedy implemented a new strategy known as flexible response, which relied on conventional arms to achieve limited goals. As part of this policy, Kennedy expanded the United States special operations forces, elite military units that could fight unconventionally in various conflicts. Kennedy hoped that the flexible response strategy would allow the US to counter Soviet influence without resorting to nuclear war. To support his new strategy, Kennedy ordered a massive increase in defense spending and a rapid build-up of the nuclear arsenal to restore the lost superiority over the Soviet Union. In his inaugural address, Kennedy promised "to bear any burden" in the defense of liberty, and he repeatedly asked for increases in military spending and authorization of new weapons systems. From 1961 to 1964, the number of nuclear weapons increased by 50 percent, as did the number of B-52 bombers to deliver them. The ICBM force grew from 63 missiles to 424, and Kennedy authorized 23 new Polaris submarines, each of which carried 16 nuclear missiles. He also called on cities to construct fallout shelters.
Competition in the Third World
Nationalist movements in some countries and regions, notably Guatemala, Indonesia and Indochina, were often allied with communist groups or otherwise perceived to be unfriendly to Western interests. In this context, the United States and the Soviet Union increasingly competed for influence by proxy in the Third World as decolonization gained momentum in the 1950s and early 1960s. Both sides were selling armaments to gain influence. The Kremlin saw continuing territorial losses by imperial powers as presaging the eventual victory of its ideology. The United States used the Central Intelligence Agency (CIA) to undermine neutral or hostile Third World governments and to support allied ones. In 1953, President Eisenhower implemented Operation Ajax, a covert coup operation to overthrow the Iranian prime minister, Mohammad Mosaddegh. The popularly elected Mosaddegh had been a Middle Eastern nemesis of Britain since nationalizing the British-owned Anglo-Iranian Oil Company in 1951. Winston Churchill told the United States that Mosaddegh was "increasingly turning towards Communist influence." The pro-Western shah, Mohammad Reza Pahlavi, assumed control as an autocratic monarch.
The shah's policies included banning the communist Tudeh Party of Iran and general suppression of political dissent by SAVAK, the shah's domestic security and intelligence agency. In Guatemala, a banana republic, the 1954 Guatemalan coup d'état ousted the left-wing President Jacobo Árbenz with material CIA support. The post-Árbenz government, a military junta headed by Carlos Castillo Armas, repealed a progressive land reform law, returned nationalized property belonging to the United Fruit Company, set up a National Committee of Defense Against Communism, and decreed a Preventive Penal Law Against Communism at the request of the United States. The non-aligned Indonesian government of Sukarno was faced with a major threat to its legitimacy beginning in 1956, when several regional commanders began to demand autonomy from Jakarta. After mediation failed, Sukarno took action to remove the dissident commanders. In February 1958, dissident military commanders in Central Sumatra (Colonel Ahmad Husein) and North Sulawesi (Colonel Ventje Sumual) declared the Revolutionary Government of the Republic of Indonesia-Permesta Movement, aimed at overthrowing the Sukarno regime. They were joined by many civilian politicians from the Masyumi Party, such as Sjafruddin Prawiranegara, who were opposed to the growing influence of the communist Partai Komunis Indonesia. Due to their anti-communist rhetoric, the rebels received arms, funding, and other covert aid from the CIA until Allen Lawrence Pope, an American pilot, was shot down after a bombing raid on government-held Ambon in April 1958. The central government responded by launching airborne and seaborne military invasions of the rebel strongholds at Padang and Manado. By the end of 1958, the rebels were militarily defeated, and the last remaining rebel guerilla bands surrendered by August 1961. In the Republic of the Congo, also known as Congo-Léopoldville, newly independent from Belgium since June 1960, the Congo Crisis erupted on 5 July, leading to the secession of the regions of Katanga and South Kasai. CIA-backed President Joseph Kasa-Vubu ordered the dismissal of the democratically elected Prime Minister Patrice Lumumba and the Lumumba cabinet in September over massacres by the armed forces during the invasion of South Kasai and for involving the Soviets in the country. Later, the CIA-backed Colonel Mobutu Sese Seko quickly mobilized his forces to seize power through a military coup d'état and worked with Western intelligence agencies to imprison Lumumba and hand him over to the Katangan authorities, who executed him by firing squad. In British Guiana, the leftist People's Progressive Party (PPP) candidate Cheddi Jagan won the position of chief minister in a colonially administered election in 1953 but was quickly forced to resign from power after Britain's suspension of the still-dependent nation's constitution. Embarrassed by the landslide electoral victory of Jagan's allegedly Marxist party, the British imprisoned the PPP's leadership and maneuvered the organization into a divisive rupture in 1955. Jagan again won the colonial elections in 1957 and 1961, even as Britain came to reconsider its view of the left-wing Jagan as a Soviet-style communist. The United States pressured the British to withhold Guyana's independence until an alternative to Jagan could be identified, supported, and brought into office. In Malaya, the British suppressed a communist anti-colonial rebellion.
Worn down by the communist guerrilla war for full Vietnamese independence and handed a watershed defeat by communist Viet Minh rebels at the Battle of Dien Bien Phu, the French accepted a negotiated abandonment of their neo-colonial stake in Vietnam in 1954. On 4 June, France granted full sovereignty to the anti-communist State of Vietnam, an independent country within the French Union. At the Geneva Conference in July, peace accords were signed, leaving Vietnam divided at the 17th parallel north between a pro-Soviet administration in North Vietnam and a pro-Western administration in South Vietnam. Between 1954 and 1961, Eisenhower's United States sent economic aid and military advisers to strengthen South Vietnam's pro-Western government against communist efforts to destabilize it. Many emerging nations of Asia, Africa, and Latin America rejected the pressure to choose sides in the East–West competition. In 1955, at the Bandung Conference in Indonesia, dozens of Third World governments resolved to stay out of the Cold War. The consensus reached at Bandung culminated with the creation of the Belgrade-headquartered Non-Aligned Movement in 1961. Meanwhile, Khrushchev broadened Moscow's policy to establish ties with India and other key neutral states. Independence movements in the Third World transformed the post-war order into a more pluralistic world of decolonized African and Middle Eastern nations and of rising nationalism in Asia and Latin America.
Sino-Soviet split
After 1956, the Sino-Soviet alliance began to break down. Mao had defended Stalin when Khrushchev criticized him in 1956, and he treated the new Soviet leader as a superficial upstart, accusing him of having lost his revolutionary edge. For his part, Khrushchev, disturbed by Mao's glib attitude toward nuclear war, referred to the Chinese leader as a "lunatic on a throne". After this, Khrushchev made many desperate attempts to reconstitute the Sino-Soviet alliance, but Mao considered it useless and rejected every proposal. The Chinese–Soviet animosity spilled out in an intra-communist propaganda war. Thereafter, the Soviets focused on a bitter rivalry with Mao's China for leadership of the global communist movement. Historian Lorenz M. Lüthi argues:
The Sino-Soviet split was one of the key events of the Cold War, equal in importance to the construction of the Berlin Wall, the Cuban Missile Crisis, the Second Vietnam War, and Sino-American rapprochement. The split helped to determine the framework for the Cold War period 1979–1985 in general, and influenced the course of the Second Vietnam War in particular.
Space Race
On the nuclear weapons front, the United States and the Soviet Union pursued nuclear rearmament and developed long-range weapons with which they could strike the territory of the other. In August 1957, the Soviets successfully launched the world's first intercontinental ballistic missile (ICBM), and in October they launched the first Earth satellite, Sputnik 1. This led to what became known as the Sputnik crisis. The Central Intelligence Agency described the orbit of Sputnik 1 as a "stupendous scientific achievement" and concluded that the USSR had likely perfected an intercontinental ballistic missile capable of reaching "any desired target with accuracy". The launch of Sputnik inaugurated the Space Race.
This led to a series of historic space exploration milestones, most notably the Apollo Moon landings from 1969 by the United States, which astronaut Frank Borman later described as "just a battle in the Cold War." The public's reaction in the Soviet Union was mixed. The Soviet government limited the release of information about the lunar landing, which shaped the public's reaction: a portion of the populace paid it no attention, while another portion was angered by it. A major Cold War element of the Space Race was satellite reconnaissance, as well as signals intelligence to gauge which aspects of the space programs had military capabilities. The Soviet Salyut program, conducted in the 1970s and 1980s, put manned space stations in long-term orbit; two of the stations, Salyut 3 and Salyut 5, were covers for secret military Almaz reconnaissance stations. For the duration of the Cold War, the US and the USSR were the world's dominant space powers. Despite their fierce competition, both nations signed international space treaties in the 1960s that limited the militarization of space. The first research into anti-satellite weapon technology also came about during this period. Later, the US and USSR pursued some cooperation in space as part of détente, notably the Apollo–Soyuz orbital rendezvous and docking.
Aftermath of the Cuban Revolution
In Cuba, the 26th of July Movement, led by young revolutionaries Fidel Castro and Che Guevara, seized power in the Cuban Revolution on 1 January 1959. Although Castro at first refused to categorize his new government as socialist and repeatedly denied being a communist, he appointed Marxists to senior government and military positions. Diplomatic relations between Cuba and the United States continued for some time after Batista's fall, but President Eisenhower deliberately left the capital to avoid meeting Castro during the latter's trip to Washington, D.C. in April, leaving Vice President Richard Nixon to conduct the meeting in his place. Cuba began negotiating arms purchases from the Eastern Bloc in March 1960. The same month, Eisenhower approved CIA plans and funding to overthrow Castro. In January 1961, just prior to leaving office, Eisenhower formally severed relations with the Cuban government. That April, the administration of newly elected American President John F. Kennedy mounted the unsuccessful CIA-organized ship-borne invasion of the island by Cuban exiles at Playa Girón and Playa Larga in Santa Clara Province, a failure that publicly humiliated the United States. Castro responded by publicly embracing Marxism–Leninism, and the Soviet Union pledged to provide further support. In December, the US government began a violent campaign of terrorist attacks against civilians in Cuba, as well as covert operations and sabotage against the administration, in an attempt to overthrow the Cuban government.
Berlin Crisis of 1961
The Berlin Crisis of 1961 was the last major incident in the Cold War regarding the status of Berlin and post–World War II Germany. By the early 1950s, the Soviet approach to restricting emigration was emulated by most of the rest of the Eastern Bloc. However, hundreds of thousands of East Germans annually emigrated to free and prosperous West Germany through a "loophole" in the system that existed between East Berlin and West Berlin.
The emigration resulted in a massive "brain drain" from East Germany to West Germany of younger educated professionals, such that nearly 20% of East Germany's population had migrated to West Germany by 1961. That June, the Soviet Union issued a new ultimatum demanding the withdrawal of Allied forces from West Berlin. The request was rebuffed, but the United States now limited its security guarantees to West Berlin. On 13 August, East Germany erected a barbed-wire barrier that would eventually be expanded through construction into the Berlin Wall, effectively closing the loophole and preventing its citizens from fleeing to the West.
Cuban Missile Crisis and Khrushchev's ousting
The Kennedy administration continued seeking ways to oust Castro following the Bay of Pigs invasion, experimenting with various ways of covertly facilitating the overthrow of the Cuban government. Significant hopes were pinned on the program of terrorist attacks and other destabilization operations known as Operation Mongoose, which was devised under the Kennedy administration in 1961. Khrushchev learned of the project in February 1962, and preparations to install Soviet nuclear missiles in Cuba were undertaken in response. Alarmed, Kennedy considered various reactions. He ultimately responded to the installation of nuclear missiles in Cuba with a naval blockade, and he presented an ultimatum to the Soviets. Khrushchev backed down from a confrontation, and the Soviet Union removed the missiles in return for a public American pledge not to invade Cuba again as well as a covert deal to remove US missiles from Turkey. The Cuban Missile Crisis (October–November 1962) brought the world closer to nuclear war than ever before. Its aftermath led to efforts at nuclear disarmament and at improving relations, although the Cold War's first arms control agreement, the Antarctic Treaty, had come into force in 1961. The compromise embarrassed Khrushchev and the Soviet Union because the withdrawal of US missiles from Italy and Turkey was a secret deal between Kennedy and Khrushchev, and the Soviets were seen as retreating from circumstances that they had started. In 1964, Khrushchev's Kremlin colleagues managed to oust him but allowed him a peaceful retirement. He was accused of rudeness and incompetence, and John Lewis Gaddis argues that he was also blamed for ruining Soviet agriculture, bringing the world to the brink of nuclear war, and becoming an "international embarrassment" when he authorized construction of the Berlin Wall. According to Soviet ambassador Anatoly Dobrynin, the top Soviet leadership took the Cuban outcome as "a blow to its prestige bordering on humiliation".
From confrontation to détente (1962–1979)
In the course of the 1960s and 1970s, Cold War participants struggled to adjust to a new, more complicated pattern of international relations in which the world was no longer divided into two clearly opposed blocs. From the beginning of the post-war period, Western Europe and Japan, with American help, rapidly recovered from the destruction of World War II and sustained strong economic growth through the 1950s and 1960s, with per capita GDPs approaching those of the United States, while Eastern Bloc economies stagnated. The Vietnam War descended into a quagmire for the United States, leading to a decline in international prestige and economic stability, derailing arms agreements, and provoking domestic unrest. America's withdrawal from the war led it to embrace a policy of détente with both China and the Soviet Union.
In the 1973 oil crisis, the Organization of Petroleum Exporting Countries (OPEC) cut its petroleum output. This raised oil prices and hurt Western economies, but helped the Soviet Union by generating a huge flow of money from its oil sales. As a result of the oil crisis, combined with the growing influence of Third World alignments such as OPEC and the Non-Aligned Movement, less powerful countries had more room to assert their independence and often showed themselves resistant to pressure from either superpower. Meanwhile, Moscow was forced to turn its attention inward to deal with the Soviet Union's deep-seated domestic economic problems. During this period, Soviet leaders such as Leonid Brezhnev and Alexei Kosygin embraced the notion of détente.
Vietnam War
Under President John F. Kennedy, US troop levels in Vietnam grew from just under a thousand in 1959 to 16,000 in 1963. South Vietnamese President Ngo Dinh Diem's heavy-handed crackdown on Buddhist monks in 1963 led the US to endorse a deadly military coup against Diem. The war escalated further in 1964 following the controversial Gulf of Tonkin incident, in which a US destroyer was alleged to have clashed with North Vietnamese fast attack craft. The Gulf of Tonkin Resolution gave President Lyndon B. Johnson broad authorization to increase the US military presence, deploying ground combat units for the first time and increasing troop levels to 184,000. Soviet leader Leonid Brezhnev responded by reversing Khrushchev's policy of disengagement and increasing aid to the North Vietnamese, hoping to entice the North from its pro-Chinese position. The USSR discouraged further escalation of the war, however, providing just enough military assistance to tie up American forces. From this point, the People's Army of Vietnam (PAVN) engaged in more conventional warfare with US and South Vietnamese forces. The Tet Offensive of 1968 proved to be the turning point of the war. Despite years of American tutelage and aid, the South Vietnamese forces were unable to withstand the communist offensive, and the task fell to US forces instead. At the same time, in 1963–1965, American domestic politics saw the triumph of liberalism. According to historian Joseph Crespino:
It has become a staple of twentieth-century historiography that Cold War concerns were at the root of a number of progressive political accomplishments in the postwar period: a high progressive marginal tax rate that helped fund the arms race and contributed to broad income equality; bipartisan support for far-reaching civil rights legislation that transformed politics and society in the American South, which had long given the lie to America's egalitarian ethos; bipartisan support for overturning an explicitly racist immigration system that had been in place since the 1920s; and free health care for the elderly and the poor, a partial fulfillment of one of the unaccomplished goals of the New Deal era. The list could go on.
Nuclear testing and outer space treaties
The Partial Nuclear Test Ban Treaty was signed on 5 August 1963 by the United States, the Soviet Union, and over 100 other nations. This treaty banned nuclear weapons tests in the atmosphere, outer space, and underwater, restricting such tests to underground environments. The treaty followed heightened concerns over the militarization of space, amplified by the United States' Starfish Prime test in 1962, which involved the detonation of a nuclear device in the upper atmosphere.
To further delineate the peaceful use of outer space, the United Nations facilitated the drafting of the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies, commonly known as the Outer Space Treaty. Signed on 27 January 1967 by the United States, the Soviet Union, and the United Kingdom, it entered into force on 10 October 1967. The treaty established space as a domain to be used exclusively for peaceful purposes, prohibiting the placement of nuclear weapons or any other weapons of mass destruction in orbit or on celestial bodies.
Invasion of Czechoslovakia
In 1968, a period of political liberalization took place in Czechoslovakia called the Prague Spring. An "Action Program" of reforms included increasing freedom of the press, freedom of speech and freedom of movement, along with an economic emphasis on consumer goods, the possibility of a multiparty government, limitations on the power of the secret police, and potential withdrawal from the Warsaw Pact. In answer to the Prague Spring, on 20 August 1968, the Soviet Army, together with most of its Warsaw Pact allies, invaded Czechoslovakia. The invasion was followed by a wave of emigration: an estimated 70,000 Czechs and Slovaks initially fled, with the total eventually reaching 300,000. The invasion sparked intense protests from Yugoslavia, Romania, China, and Western European countries.
Sino-Soviet split and Nixon's visit to China
As a result of the Sino-Soviet split, tensions along the Chinese–Soviet border reached their peak in 1969, when the Soviets planned to launch a large-scale nuclear strike against China. United States President Richard Nixon intervened and decided to use the conflict to shift the balance of power towards the West in the Cold War through a policy of rapprochement with China, which began with his 1972 visit to China and culminated in 1979 with the signing of the Joint Communiqué on the Establishment of Diplomatic Relations by President Carter and Chinese Communist Party leader Deng Xiaoping.
Nixon, Brezhnev, and détente
Although indirect conflict between Cold War powers continued through the late 1960s and early 1970s, tensions were beginning to ease. Following the ousting of Khrushchev, another period of collective leadership ensued, consisting of Leonid Brezhnev as general secretary, Alexei Kosygin as Premier and Nikolai Podgorny as Chairman of the Presidium, lasting until Brezhnev established himself in the early 1970s as the preeminent Soviet leader. Following his visit to China, Nixon met with Soviet leaders in Moscow. The resulting Strategic Arms Limitation Talks produced landmark arms control treaties aimed at limiting the development of costly anti-ballistic missiles and nuclear missiles. Nixon and Brezhnev proclaimed a new era of "peaceful coexistence" and established the groundbreaking new policy of détente (or cooperation) between the superpowers. Meanwhile, Brezhnev attempted to revive the Soviet economy, which was declining in part because of heavy military expenditures. The Soviet Union's military budget in the 1970s was massive, amounting to 40–60% of the federal budget and 15% of GDP. Between 1972 and 1974, the two sides also agreed to strengthen their economic ties, including agreements for increased trade. As a result of their meetings, détente would replace the hostility of the Cold War, and the two countries would coexist.
These developments coincided with Bonn's "Ostpolitik", formulated by West German Chancellor Willy Brandt as an effort to normalize relations between West Germany and Eastern Europe. Other agreements were concluded to stabilize the situation in Europe, culminating in the Helsinki Accords signed at the Conference on Security and Co-operation in Europe in 1975. The Helsinki Accords, in which the Soviets promised to grant free elections in Europe, have been called a major concession by the Soviets to ensure peace. In practice, the Soviet government significantly curbed the rule of law, civil liberties, protection of law and guarantees of property, which were considered examples of "bourgeois morality" by Soviet legal theorists such as Andrey Vyshinsky. The Soviet Union signed legally binding human rights documents, such as the International Covenant on Civil and Political Rights in 1973 and the Helsinki Accords in 1975, but they were neither widely known nor accessible to people living under Communist rule, nor were they taken seriously by the Communist authorities. Human rights activists in the Soviet Union were regularly subjected to harassment, repression and arrest. The pro-Soviet American business magnate Armand Hammer of Occidental Petroleum often mediated trade relations. Author Daniel Yergin, in his book The Prize, writes that Hammer "ended up as a go-between for five Soviet General Secretaries and seven U.S. Presidents." Hammer had extensive business relationships in the Soviet Union stretching back to the 1920s, begun with Lenin's approval. According to the Christian Science Monitor in 1980, "although his business dealings with the Soviet Union were cut short when Stalin came to power, he had more or less single-handedly laid the groundwork for the [1980] state of Western trade with the Soviet Union." Kissinger and Nixon were "realists" who deemphasized idealistic goals like anti-communism or promotion of democracy worldwide, because those goals were too expensive in terms of America's economic capabilities. They rejected such "idealism" as impractical, and neither man showed much sensitivity to the plight of people living under Communism. Kissinger's realism fell out of fashion as idealism returned to American foreign policy with Carter's moralism emphasizing human rights and Reagan's rollback strategy aimed at destroying Communism.
Late 1970s deterioration of relations
In the 1970s, the KGB, led by Yuri Andropov, continued to persecute distinguished Soviet dissidents, such as Aleksandr Solzhenitsyn and Andrei Sakharov, who were criticizing the Soviet leadership in harsh terms. Indirect conflict between the superpowers continued through this period of détente in the Third World, particularly during political crises in the Middle East, Chile, Ethiopia, and Angola. In 1973, Nixon announced that his administration was committed to seeking most-favored-nation trade status with the USSR, which was challenged by Congress in the Jackson-Vanik Amendment. The United States had long linked trade with the Soviet Union to its foreign policy toward the Soviet Union and, especially since the early 1980s, to Soviet human rights policies. The Jackson-Vanik Amendment, which was attached to the 1974 Trade Act, linked the granting of most-favored-nation status to the USSR to the right of persecuted Soviet Jews to emigrate.
Because the Soviet Union refused the right of emigration to Jewish refuseniks, the ability of the President to grant most-favored-nation trade status to the Soviet Union was restricted. Although President Jimmy Carter tried to place another limit on the arms race with a SALT II agreement in 1979, his efforts were undermined by other events that year, including the Iranian Revolution and the Nicaraguan Revolution, which both ousted pro-US governments, and by his retaliation against the Soviet intervention in Afghanistan in December.
Renewal of tensions (1979–1985)
The late 1970s and early 1980s saw an intense reawakening of Cold War tensions and conflicts. Tensions greatly increased between the major powers, with both sides becoming more militant. Diggins says, "Reagan went all out to fight the second cold war, by supporting counterinsurgencies in the third world." Cox says, "The intensity of this 'second' Cold War was as great as its duration was short."
Soviet invasion of Afghanistan and end of détente
In April 1978, the communist People's Democratic Party of Afghanistan (PDPA) seized power in Afghanistan in the Saur Revolution. Within months, opponents of the communist regime launched an uprising in eastern Afghanistan that quickly expanded into a civil war waged by guerrilla mujahideen against government forces countrywide. The Islamic Unity of Afghanistan Mujahideen insurgents received military training and weapons in neighboring Pakistan and China, while the Soviet Union sent thousands of military advisers to support the PDPA government. Meanwhile, increasing friction between the competing factions of the PDPA, the dominant Khalq and the more moderate Parcham, resulted in the dismissal of Parchami cabinet members and the arrest of Parchami military officers under the pretext of a Parchami coup. By mid-1979, the United States had started a covert program to assist the mujahideen. In September 1979, Khalqist President Nur Muhammad Taraki was assassinated in a coup within the PDPA orchestrated by fellow Khalq member Hafizullah Amin, who assumed the presidency. Distrusted by the Soviets, Amin was assassinated by Soviet special forces during Operation Storm-333 in December 1979. Afghan forces suffered losses during the Soviet operation; 30 Afghan palace guards and over 300 army guards were killed, while another 150 were captured. In the aftermath of the operation, a total of 1,700 Afghan soldiers who surrendered to Soviet forces were taken as prisoners, and the Soviets installed Babrak Karmal, the leader of the PDPA's Parcham faction, as Amin's successor. Veterans of the Soviet Union's Alpha Group have stated that Operation Storm-333 was one of the most successful in the unit's history. Documents released following the dissolution of the Soviet Union in the 1990s revealed that the Soviet leadership believed Amin had secret contacts within the American embassy in Kabul and "was capable of reaching an agreement with the United States"; however, allegations of Amin colluding with the Americans have been widely discredited. The PDPA was tasked with filling the vacuum and carried out a purge of Amin supporters. Soviet troops were deployed in more substantial numbers to put Afghanistan under Soviet control with Karmal at the helm, although the Soviet government did not expect to do most of the fighting in Afghanistan. As a result, however, the Soviets were now directly involved in what had been a domestic war in Afghanistan.
Carter responded to the Soviet invasion by withdrawing the SALT II treaty from ratification, imposing embargoes on grain and technology shipments to the USSR, and demanding a significant increase in military spending; he further announced a US boycott of the 1980 Summer Olympics in Moscow, which was joined by 65 other nations. He described the Soviet incursion as "the most serious threat to the peace since the Second World War".
Reagan and Thatcher
In January 1977, four years prior to becoming president, Ronald Reagan bluntly stated, in a conversation with Richard V. Allen, his basic expectation in relation to the Cold War. "My idea of American policy toward the Soviet Union is simple, and some would say simplistic," he said. "It is this: We win and they lose." Reagan won the 1980 presidential election, vowing to increase military spending and confront the Soviets everywhere. Both Reagan and new British Prime Minister Margaret Thatcher denounced the Soviet Union and its ideology. Reagan labeled the Soviet Union an "evil empire" and predicted that Communism would be left on the "ash heap of history," while Thatcher described the Soviets as "bent on world dominance." In 1982, Reagan tried to cut off Moscow's access to hard currency by impeding its proposed gas line to Western Europe. It hurt the Soviet economy, but it also caused ill will among American allies in Europe who counted on that revenue. Reagan retreated on this issue. By early 1985, Reagan's anti-communist position had developed into a stance known as the new Reagan Doctrine, which, in addition to containment, asserted a right to subvert existing communist governments. Besides continuing Carter's policy of supporting the Islamic opponents of the Soviet Union and the Soviet-backed PDPA government in Afghanistan, the CIA also sought to weaken the Soviet Union itself by promoting Islamism in the majority-Muslim Central Asian Soviet Union. Additionally, the CIA encouraged anti-communist Pakistan's ISI to train Muslims from around the world to participate in the jihad against the Soviet Union.
Polish Solidarity movement and martial law
Pope John Paul II provided a moral focus for anti-communism; a visit to his native Poland in 1979 stimulated a religious and nationalist resurgence centered on the Solidarity trade union movement that galvanized opposition and may have led to his attempted assassination two years later. In December 1981, Poland's Wojciech Jaruzelski reacted to the crisis by imposing a period of martial law. Reagan imposed economic sanctions on Poland in response. Mikhail Suslov, the Kremlin's top ideologist, advised Soviet leaders not to intervene if Poland fell under the control of Solidarity, for fear it might lead to heavy economic sanctions, resulting in a catastrophe for the Soviet economy.
US and USSR military and economic issues
The Soviet Union had built up a military that consumed as much as 25 percent of its gross national product at the expense of consumer goods and investment in civilian sectors. Soviet spending on the arms race and other Cold War commitments both caused and exacerbated deep-seated structural problems in the Soviet system, which experienced at least a decade of economic stagnation during the late Brezhnev years. Soviet investment in the defense sector was not driven by military necessity but in large part by the interests of the nomenklatura, which was dependent on the sector for its own power and privileges.
The Soviet Armed Forces became the largest in the world in terms of the numbers and types of weapons they possessed, in the number of troops in their ranks, and in the sheer size of their military–industrial base. However, the quantitative advantages held by the Soviet military often concealed areas where the Eastern Bloc dramatically lagged behind the West. For example, the Persian Gulf War demonstrated how the armor, fire control systems, and firing range of the Soviet Union's most common main battle tank, the T-72, were drastically inferior to those of the American M1 Abrams, yet the USSR fielded almost three times as many T-72s as the US deployed M1s. By the early 1980s, the USSR had built up a military arsenal and army surpassing that of the United States. Soon after the Soviet invasion of Afghanistan, President Carter began massively building up the United States military. This buildup was accelerated by the Reagan administration, which increased military spending from 5.3 percent of GNP in 1981 to 6.5 percent in 1986, the largest peacetime defense buildup in United States history. The American–Soviet tensions of 1983 were described by some as the start of "Cold War II". While this phase of the Cold War is generally regarded in retrospect as a "war of words", the Soviets' "peace offensive" was largely rejected by the West. Tensions continued to intensify as Reagan revived the B-1 Lancer program, which had been canceled by the Carter administration, produced LGM-118 Peacekeeper missiles, installed US cruise missiles in Europe, and announced the experimental Strategic Defense Initiative, dubbed "Star Wars" by the media, a defense program to shoot down missiles in mid-flight. The Soviets deployed RSD-10 Pioneer ballistic missiles targeting Western Europe, and NATO decided, under the impetus of the Carter presidency, to deploy MGM-31 Pershing and cruise missiles in Europe, primarily West Germany. This deployment placed missiles within 10 minutes' striking distance of Moscow. After Reagan's military buildup, the Soviet Union did not respond by further building its military, because the enormous military expenses, along with inefficient planned manufacturing and collectivized agriculture, were already a heavy burden for the Soviet economy. At the same time, Saudi Arabia increased oil production, even as other non-OPEC nations were increasing production. These developments contributed to the 1980s oil glut, which affected the Soviet Union, as oil was the main source of Soviet export revenues. Issues with command economics, oil price decreases and large military expenditures gradually brought the Soviet economy to stagnation. On 1 September 1983, the Soviet Union shot down Korean Air Lines Flight 007, a Boeing 747 with 269 people aboard, including sitting Congressman Larry McDonald, an action which Reagan characterized as a massacre. The airliner was en route from Anchorage to Seoul but, owing to a navigational mistake made by the crew, it flew through Soviet prohibited airspace. The Soviet Air Force treated the unidentified aircraft as an intruding US spy plane and destroyed it with air-to-air missiles. The incident increased support for the military deployment overseen by Reagan, which stood in place until the later accords between Reagan and Mikhail Gorbachev.
During the early hours of 26 September 1983, the 1983 Soviet nuclear false alarm incident occurred: early-warning systems at Serpukhov-15 malfunctioned and reported that several intercontinental ballistic missiles were heading toward the Soviet Union, but duty officer Stanislav Petrov correctly suspected a false alarm, ensuring the Soviets did not respond to the non-existent attack. As such, he has been credited as "the man who saved the world". The Able Archer 83 exercise in November 1983, a realistic simulation of a coordinated NATO nuclear release, was perhaps the most dangerous moment since the Cuban Missile Crisis, as the Soviet leadership feared that a nuclear attack might be imminent. American domestic public concerns about intervening in foreign conflicts persisted from the end of the Vietnam War. The Reagan administration emphasized the use of quick, low-cost counterinsurgency tactics to intervene in foreign conflicts. In 1983, the Reagan administration intervened in the multisided Lebanese Civil War, invaded Grenada, bombed Libya and backed the Central American Contras, anti-communist paramilitaries seeking to overthrow the Soviet-aligned Sandinista government in Nicaragua. While Reagan's interventions against Grenada and Libya were popular in the United States, his backing of the Contra rebels was mired in controversy. The Reagan administration's backing of the military government of Guatemala during the Guatemalan Civil War, in particular the regime of Efraín Ríos Montt, was also controversial. Meanwhile, the Soviets incurred high costs for their own foreign interventions. Although Brezhnev was convinced in 1979 that the Soviet war in Afghanistan would be brief, Muslim guerrillas, aided by the US, China, Britain, Saudi Arabia and Pakistan, waged a fierce resistance against the invasion. The Kremlin sent nearly 100,000 troops to support its puppet regime in Afghanistan, leading many outside observers to dub the war "the Soviets' Vietnam". However, Moscow's quagmire in Afghanistan was far more disastrous for the Soviets than Vietnam had been for the Americans, because the conflict coincided with a period of internal decay and domestic crisis in the Soviet system. A senior US State Department official had predicted such an outcome as early as 1980. Final years (1985–1991) Gorbachev's reforms By the time the comparatively youthful Mikhail Gorbachev became General Secretary in 1985, the Soviet economy was stagnant and faced a sharp fall in foreign currency earnings as a result of the downward slide in oil prices in the 1980s. These issues prompted Gorbachev to investigate measures to revive the ailing state. An ineffectual start led to the conclusion that deeper structural changes were necessary, and in June 1987 Gorbachev announced an agenda of economic reform called perestroika, or restructuring. Perestroika relaxed the production quota system, allowed cooperative ownership of small businesses and paved the way for foreign investment. These measures were intended to redirect the country's resources from costly Cold War military commitments to more productive areas in the civilian sector. Despite initial skepticism in the West, the new Soviet leader proved to be committed to reversing the Soviet Union's deteriorating economic condition instead of continuing the arms race with the West.
Partly as a way to fight off internal opposition from party cliques to his reforms, Gorbachev simultaneously introduced glasnost, or openness, which increased freedom of the press and the transparency of state institutions. Glasnost was intended to reduce the corruption at the top of the Communist Party and moderate the abuse of power in the Central Committee. Glasnost also enabled increased contact between Soviet citizens and the Western world, particularly with the United States, contributing to the accelerating détente between the two nations. Thaw in relations In response to the Kremlin's military and political concessions, Reagan agreed to renew talks on economic issues and the scaling-back of the arms race. The first summit was held in November 1985 in Geneva, Switzerland. A second summit was held in October 1986 in Reykjavík, Iceland. Talks went well until the focus shifted to Reagan's proposed Strategic Defense Initiative (SDI), which Gorbachev wanted to be eliminated. Reagan refused. The negotiations failed, but the third summit (Washington Summit (1987), 8–10 December 1987) led to a breakthrough with the signing of the Intermediate-Range Nuclear Forces Treaty (INF). The INF treaty eliminated all nuclear-armed, ground-launched ballistic and cruise missiles with ranges between 500 and 5,500 kilometres, and their infrastructure. During 1988, it became apparent to the Soviets that oil and gas subsidies, along with the cost of maintaining massive troop levels, represented a substantial economic drain. In addition, the security advantage of a buffer zone was recognised as irrelevant and the Soviets officially declared that they would no longer intervene in the affairs of satellite states in Central and Eastern Europe. Reagan and Gorbachev met again at the Moscow Summit in May 1988, and Gorbachev met Reagan and President-elect George H. W. Bush at the Governors Island Summit in December 1988. In 1989, Soviet forces withdrew from Afghanistan without achieving their objectives. Later that year, the Berlin Wall, the Inner German border and the Iron Curtain fell. On 3 December 1989, Gorbachev and Bush declared the Cold War over at the Malta Summit. In February 1990, Gorbachev agreed to the US-proposed Treaty on the Final Settlement with Respect to Germany and signed it on 12 September 1990, paving the way for German reunification. When the Berlin Wall came down, Gorbachev's "Common European Home" concept began to take shape. The two former adversaries were partners in the Gulf War against Iraq (August 1990 – February 1991). During the final summit in Moscow in July 1991, Gorbachev and Bush signed the START I arms control treaty. Eastern Europe breaks away Two developments dominated the decade that followed: the increasingly apparent crumbling of the Soviet Union's economic and political structures, and the patchwork attempts at reforms to reverse that process. Kenneth S. Deffeyes argued in Beyond Oil that the Reagan administration encouraged Saudi Arabia to lower the price of oil to the point where the Soviets could not make a profit selling their oil, which resulted in the depletion of the country's hard currency reserves. Brezhnev's next two successors, transitional figures with deep roots in his tradition, did not last long. Yuri Andropov was 68 years old and Konstantin Chernenko 72 when they assumed power; both died in less than two years. In an attempt to avoid a third short-lived leader, in 1985, the Soviets turned to the next generation and selected Mikhail Gorbachev. He made significant changes in the economy and party leadership, called perestroika.
His policy of glasnost freed public access to information after decades of heavy government censorship. Gorbachev also moved to end the Cold War. In 1988, the USSR abandoned its war in Afghanistan and began to withdraw its forces. In the following year, Gorbachev refused to interfere in the internal affairs of the Soviet satellite states, which paved the way for the Revolutions of 1989. In particular, the Soviet Union's inaction at the Pan-European Picnic in August 1989 set a peaceful chain reaction in motion, at the end of which the Eastern Bloc had collapsed. With the tearing down of the Berlin Wall and with East and West Germany pursuing re-unification, the Iron Curtain between the West and Soviet-occupied regions came down. By 1989, the Soviet alliance system was on the brink of collapse, and, deprived of Soviet military support, the communist leaders of the Warsaw Pact states were losing power. Grassroots organizations, such as Poland's Solidarity movement, rapidly gained ground with strong popular bases. The Pan-European Picnic in August 1989 in Hungary finally started a peaceful movement that the rulers in the Eastern Bloc could not stop. It was the largest movement of refugees from East Germany since the Berlin Wall was built in 1961 and ultimately brought about the fall of the Iron Curtain. The patrons of the picnic, Otto von Habsburg and the Hungarian Minister of State Imre Pozsgay, saw the planned event as an opportunity to test Mikhail Gorbachev's reaction. The Austrian branch of the Paneuropean Union, which was then headed by Karl von Habsburg, distributed thousands of brochures inviting GDR holidaymakers in Hungary to a picnic near the border at Sopron. The mass exodus at the Pan-European Picnic, the subsequent hesitant behavior of the ruling Socialist Unity Party of East Germany, and the non-interference of the Soviet Union broke the dams. Tens of thousands of media-informed East Germans now made their way to Hungary, which was no longer willing to keep its borders completely closed or to oblige its border troops to use armed force. On the one hand, this caused disagreement among the Eastern European states and, on the other hand, it was clear to the Eastern European population that the governments no longer had absolute power. In 1989, the communist governments in Poland and Hungary became the first to negotiate the organization of competitive elections. In Czechoslovakia and East Germany, mass protests unseated entrenched communist leaders. The communist regimes in Bulgaria and Romania also crumbled, in the latter case as the result of a violent uprising. Attitudes had changed enough that US Secretary of State James Baker suggested that the American government would not be opposed to Soviet intervention in Romania, on behalf of the opposition, to prevent bloodshed. The tidal wave of change culminated with the fall of the Berlin Wall in November 1989, which symbolized the collapse of European communist governments and graphically ended the Iron Curtain divide of Europe. The 1989 revolutionary wave swept across Central and Eastern Europe and peacefully overthrew all of the Soviet-style Marxist–Leninist states: East Germany, Poland, Hungary, Czechoslovakia and Bulgaria; Romania was the only Eastern-bloc country to topple its communist regime violently and execute its head of state.
Soviet dissolution At the same time, the Soviet republics started legal moves towards potentially declaring sovereignty over their territories, citing the freedom to secede in Article 72 of the USSR constitution. On 7 April 1990, a law was passed allowing a republic to secede if more than two-thirds of its residents voted for it in a referendum. Many held their first free elections in the Soviet era for their own national legislatures in 1990. Many of these legislatures proceeded to produce legislation contradicting the Union laws in what was known as the 'War of Laws'. In 1989, the Russian SFSR convened a newly elected Congress of People's Deputies. Boris Yeltsin was elected its chairman. On 12 June 1990, the Congress declared Russia's sovereignty over its territory and proceeded to pass laws that attempted to supersede some of the Soviet laws. After a landslide victory of Sąjūdis in Lithuania, that country declared its independence restored on 11 March 1990, citing the illegality of the Soviet occupation of the Baltic states. Soviet forces attempted to halt the secession by crushing popular demonstrations in Lithuania (Bloody Sunday) and Latvia (The Barricades); as a result, numerous civilians were killed or wounded. However, these actions only bolstered international support for the secessionists. A referendum for the preservation of the USSR was held on 17 March 1991 in nine republics (the remainder having boycotted the vote), with the majority of the population in those republics voting for preservation of the Union in the form of a new federation. The referendum gave Gorbachev a minor boost. In the summer of 1991, the New Union Treaty, which would have turned the country into a much looser Union, was agreed upon by eight republics. The signing of the treaty, however, was interrupted by the August Coup—an attempted coup d'état by hardline members of the government and the KGB who sought to reverse Gorbachev's reforms and reassert the central government's control over the republics. After the coup collapsed, Russian president Yeltsin was seen as a hero for his decisive actions, while Gorbachev's power was effectively ended. The balance of power tipped significantly towards the republics. In August 1991, Latvia and Estonia immediately declared the restoration of their full independence (following Lithuania's 1990 example). Later that month, Gorbachev resigned as general secretary of the Communist Party, Russian President Boris Yeltsin ordered the seizure of Soviet property, and soon afterwards the party's activities were indefinitely suspended—effectively ending its rule. By the fall, Gorbachev could no longer influence events outside Moscow, and he was being challenged even there by Yeltsin, who had been elected President of Russia in July 1991. Gorbachev clung to power as the President of the Soviet Union until 25 December 1991, when the USSR dissolved. Fifteen states emerged from the Soviet Union; by far the largest and most populous one, the Russian Federation (which had also been the founder of the Soviet state with the October Revolution in Petrograd), took full responsibility for all the rights and obligations of the USSR under the Charter of the United Nations, including the financial obligations. As such, Russia assumed the Soviet Union's UN membership and permanent membership on the Security Council, nuclear stockpile and the control over the armed forces.
In his 1992 State of the Union Address, US President George H. W. Bush expressed his emotions: "The biggest thing that has happened in the world in my life, in our lives, is this: By the grace of God, America won the Cold War." Bush and Yeltsin met in February 1992, declaring a new era of "friendship and partnership". In January 1993, Bush and Yeltsin agreed to START II, which provided for further nuclear arms reductions on top of the original START treaty. Aftermath In summing up the international ramifications of these events, Vladislav Zubok stated: "The collapse of the Soviet empire was an event of epochal geopolitical, military, ideological, and economic significance." After the dissolution of the Soviet Union, Russia drastically cut military spending, and restructuring the economy left millions unemployed. According to Western analysis, the neoliberal reforms in Russia culminated in a recession in the early 1990s more severe than the Great Depression as experienced by the United States and Germany. Western analysts suggest that in the 25 years following the end of the Cold War, only five or six of the post-communist states are on a path to joining the rich and capitalist world, while most are falling behind, some to such an extent that it will take several decades to catch up to where they were before the collapse of communism. Decommunization Stephen Holmes of the University of Chicago argued in 1996 that decommunization, after a brief active period, quickly ended in near-universal failure. After the introduction of lustration, demand for scapegoats has become relatively low, and former communists have been elected to high governmental and other administrative positions. Holmes notes that the only real exception was former East Germany, where thousands of former Stasi informers have been fired from public positions. Holmes suggests the following reasons for the failure of decommunization:
After 45–70 years of communist rule, nearly every family has members associated with the state.
After the initial desire "to root out the reds" came a realization that massive punishment is wrong, and finding only some guilty is hardly justice.
The urgency of the current economic problems of postcommunism makes the crimes of the communist past "old news" for many citizens.
Decommunization is believed to be a power game of elites.
The difficulty of dislodging the social elite makes it require a totalitarian state to disenfranchise the "enemies of the people" quickly and efficiently, and a desire for normalcy overcomes the desire for punitive justice.
Very few people have a perfectly clean slate and so are available to fill the positions that require significant expertise.
Compared with the decommunization efforts of the other former constituents of the Eastern Bloc and the Soviet Union, decommunization in Russia has been restricted to half-measures, if conducted at all. Notable anti-communist measures in the Russian Federation include the banning of the Communist Party of the Soviet Union (and the creation of the Communist Party of the Russian Federation) as well as changing the names of some Russian cities back to what they were before the 1917 October Revolution (Leningrad to Saint Petersburg, Sverdlovsk to Yekaterinburg and Gorky to Nizhny Novgorod), though others were maintained, with Ulyanovsk (former Simbirsk), Tolyatti (former Stavropol) and Kirov (former Vyatka) being examples.
Even though Leningrad and Sverdlovsk were renamed, the regions named after them are still officially called Leningrad and Sverdlovsk oblasts. Nostalgia for the Soviet Union is gradually on the rise in Russia. Communist symbols continue to form an important part of the rhetoric used in state-controlled media, and bans on them in other countries are seen by the Russian foreign ministry as "sacrilege" and "a perverse idea of good and evil". The process of decommunization in Ukraine, a neighbouring post-Soviet state, was met with fierce criticism by Russia. The State Anthem of the Russian Federation, adopted in 2000 (the same year Vladimir Putin began his first term as president of Russia), uses the exact same music as the State Anthem of the Soviet Union, but with new lyrics written by Sergey Mikhalkov. Conversely, decommunization in Ukraine started during and after the dissolution of the Soviet Union in 1991. With the success of the Revolution of Dignity in 2014, the Ukrainian government approved laws that outlawed communist symbols. In July 2015, President of Ukraine Petro Poroshenko signed a set of laws that started a six-month period for the removal of communist monuments (excluding World War II monuments) and the renaming of public places named after communist-related themes. At the time, this meant that 22 cities and 44 villages were set to get new names. In 2016, 51,493 streets and 987 cities and villages were renamed, and 1,320 Lenin monuments and 1,069 monuments to other communist figures were removed. Violation of the law carries a penalty of a potential media ban and prison sentences of up to five years. The Ministry of the Interior stripped the Communist Party of Ukraine, the Communist Party of Ukraine (renewed), and the Communist Party of Workers and Peasants of their right to participate in elections and stated it was continuing the court actions that started in July 2014 to end the registration of communist parties in Ukraine. By 16 December 2015, these three parties had been banned in Ukraine; the Communist Party of Ukraine appealed the ban to the European Court of Human Rights. Collapse of Yugoslavia and Balkan conflicts The Cold War had provided external stabilizing pressures. Both the United States and the Soviet Union had a vested interest in Yugoslavia's stability, ensuring it remained a buffer state in the East–West divide. This resulted in financial and political support for its regime. When the Cold War ended, this external support evaporated, leaving Yugoslavia more vulnerable to internal divisions. As Yugoslavia fragmented, the wars began after Slovenia and Croatia declared independence in 1991. Serbia, under Slobodan Milošević, opposed these moves. The Bosnian War (1992–1995) was the most brutal of the Yugoslav Wars, characterized by ethnic cleansing and genocide. International organizations, including the United Nations, struggled to manage the violence. NATO eventually intervened with airstrikes in Bosnia (1995) as part of Operation Deliberate Force and later in Kosovo (1999) as part of Operation Allied Force. These interventions marked NATO's transition from a deterrent against the Soviet Union to an active peacekeeping and conflict-resolution force. Influence The post-Cold War world is considered to be unipolar, with the United States the sole remaining superpower.
The Cold War defined the political role of the United States after World War II—by 1989 the United States had military alliances with 50 countries and 526,000 troops stationed abroad, 326,000 of them in Europe (two-thirds of whom were in West Germany) and 130,000 in Asia (mainly Japan and South Korea). The Cold War also marked the zenith of peacetime military–industrial complexes and large-scale military funding of science. Cumulative US military expenditures throughout the entire Cold War amounted to an estimated $8 trillion. Nearly 100,000 Americans died in the Korean and Vietnam Wars. Although Soviet casualties are difficult to estimate, as a share of gross national product the financial cost for the Soviet Union was much higher than that incurred by the United States. Millions died in the superpowers' proxy wars around the globe, most notably in eastern Asia. Most of the proxy wars and subsidies for local conflicts ended along with the Cold War; interstate wars, ethnic wars, revolutionary wars, as well as refugee and displaced persons crises have declined sharply in the post-Cold War years. However, the aftermath of the Cold War is not considered to be concluded. Many of the economic and social tensions that were exploited to fuel Cold War competition in parts of the Third World remain acute. The breakdown of state control in a number of areas formerly ruled by communist governments produced new civil and ethnic conflicts, particularly in the former Yugoslavia. In Central and Eastern Europe, the end of the Cold War has ushered in an era of economic growth and an increase in the number of liberal democracies, while in other parts of the world, such as Afghanistan, independence was accompanied by state failure. In popular culture The Cold War endures as a popular topic reflected in entertainment media, continuing to the present with post-1991 Cold War-themed feature films, novels, television and web series, and other media. In 2013, The Americans, an action drama series about KGB sleeper agents living next door, set in the early 1980s, was ranked No. 6 on the Metacritic annual Best New TV Shows list; its six-season run concluded in May 2018. Historiography Interpreting the course and origins of the conflict has been a source of heated controversy among historians, political scientists, and journalists. In particular, historians have sharply disagreed as to who was responsible for the breakdown of Soviet–US relations after the Second World War, and whether the conflict between the two superpowers was inevitable or could have been avoided. Historians have also disagreed on what exactly the Cold War was, what the sources of the conflict were, and how to disentangle patterns of action and reaction between the two sides. Although explanations of the origins of the conflict in academic discussions are complex and diverse, several general schools of thought on the subject can be identified. Historians commonly speak of three different approaches to the study of the Cold War: "orthodox" accounts, "revisionism", and "post-revisionism". "Orthodox" accounts place responsibility for the Cold War on the Soviet Union and its expansion further into Europe. "Revisionist" writers place more responsibility for the breakdown of post-war peace on the United States, citing a range of US efforts to isolate and confront the Soviet Union well before the end of World War II.
"Post-revisionists" see the events of the Cold War as more nuanced and attempt to be more balanced in determining what occurred during the Cold War. Much of the historiography on the Cold War weaves together two or even all three of these broad categories. See also :Category:Cold War by period American imperialism Canada in the Cold War Cold peace McCarthyism Outline of the Cold War Red Scare Second Cold War War on terror Notes and quotes References Sources Books Ang, Cheng Guan. Southeast Asia's Cold War: An Interpretive History (University of Hawai'i Press, 2018). online review Reports Journal articles Magazine articles News articles Web Further reading External links Archives The Cold War International History Project (CWIHP) The Cold War Files Select "Communism & Cold War" value to browse Maps from 1933–1982 at the Persuasive Cartography, The PJ Mode Collection, Cornell University Library CONELRAD Cold War Pop Culture Site CBC Digital Archives – Cold War Culture: The Nuclear Fear of the 1950s and 1960s Bibliography Annotated bibliography for the arms race from the Alsos Digital Library Educational resource Electronic Briefing Books at the National Security Archive, George Washington University News Video and audio news reports from during the cold war. Films André Bossuroy, Europe for Citizens Programme of the European Union, 1940s beginnings 1940s neologisms 1990s endings 20th-century conflicts Aftermath of World War II Geopolitical rivalry Historical eras History of international relations History of NATO Nuclear warfare Presidency of Harry S. Truman Presidency of Dwight D. Eisenhower Presidency of John F. Kennedy Presidency of Lyndon B. Johnson Presidency of Richard Nixon Presidency of Gerald Ford Presidency of Jimmy Carter Presidency of Ronald Reagan Presidency of George H. W. Bush Soviet Union–United States military relations Wars involving NATO Wars involving the Soviet Union Wars involving the United States
Cold War
[ "Chemistry" ]
19,829
[ "Radioactivity", "Nuclear warfare" ]
325,496
https://en.wikipedia.org/wiki/Servomechanism
In mechanical and control engineering, a servomechanism (also called servo system, or simply servo) is a control system for the position and its time derivatives, such as velocity, of a mechanical system. It often includes a servomotor, and uses closed-loop control to reduce steady-state error and improve dynamic response. In closed-loop control, error-sensing negative feedback is used to correct the action of the mechanism. In displacement-controlled applications, it usually includes a built-in encoder or other position feedback mechanism to ensure the output is achieving the desired effect. Following a specified motion trajectory is called servoing, where "servo" is used as a verb. The servo prefix originates from the Latin word servus meaning slave. The term correctly applies only to systems where the feedback or error-correction signals help control mechanical position, speed, attitude or any other measurable variables. For example, an automotive power window control is not a servomechanism, as there is no automatic feedback that controls position—the operator does this by observation. By contrast a car's cruise control uses closed-loop feedback, which classifies it as a servomechanism. Applications Position control A common type of servo provides position control. Commonly, servos are electric, hydraulic, or pneumatic. They operate on the principle of negative feedback, where the control input is compared to the actual position of the mechanical system as measured by some type of transducer at the output. Any difference between the actual and wanted values (an "error signal") is amplified (and converted) and used to drive the system in the direction necessary to reduce or eliminate the error. This procedure is one widely used application of control theory. Typical servos can give a rotary (angular) or linear output. Speed control Speed control via a governor is another type of servomechanism. The steam engine uses mechanical governors; another early application was to govern the speed of water wheels. Prior to World War II the constant speed propeller was developed to control engine speed for maneuvering aircraft. Fuel controls for gas turbine engines employ either hydromechanical or electronic governing. Others Positioning servomechanisms were first used in military fire-control and marine navigation equipment. Today servomechanisms are used in automatic machine tools, satellite-tracking antennas, remote control airplanes, automatic navigation systems on boats and planes, and antiaircraft-gun control systems. Other examples are fly-by-wire systems in aircraft which use servos to actuate the aircraft's control surfaces, and radio-controlled models which use RC servos for the same purpose. Many autofocus cameras also use a servomechanism to accurately move the lens. A hard disk drive has a magnetic servo system with sub-micrometer positioning accuracy. In industrial machines, servos are used to perform complex motion, in many applications. Servomotor A servomotor is a specific type of motor that is combined with a rotary encoder or a potentiometer to form a servomechanism. This assembly may in turn form part of another servomechanism. A potentiometer provides a simple analog signal to indicate position, while an encoder provides position and usually speed feedback, which by the use of a PID controller allow more precise control of position and thus faster achievement of a stable position (for a given motor power). 
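As a sketch of the closed-loop law described above (generic control-theory notation, not tied to any particular servo): with reference position r(t) and measured position y(t), the error signal and a PID control output are

e(t) = r(t) - y(t)

u(t) = K_p\, e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d \frac{de(t)}{dt}

The proportional term drives the motor harder the larger the error, the integral term removes steady-state offset, and the derivative term damps overshoot; negative feedback is exactly the process of driving e(t) toward zero.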
Potentiometers are subject to drift when the temperature changes whereas encoders are more stable and accurate. Servomotors are used for both high-end and low-end applications. On the high end are precision industrial components that use a rotary encoder. On the low end are inexpensive radio control servos (RC servos) used in radio-controlled models, which use a free-running motor and a simple potentiometer position sensor with an embedded controller. The term servomotor generally refers to a high-end industrial component, while the term servo is most often used to describe the inexpensive devices that employ a potentiometer. Stepper motors are not considered to be servomotors, although they too are used to construct larger servomechanisms. Stepper motors have inherent angular positioning, owing to their construction, and this is generally used in an open-loop manner without feedback. They are generally used for medium-precision applications. RC servos are used to provide actuation for various mechanical systems such as the steering of a car, the control surfaces on a plane, or the rudder of a boat. Due to their affordability, reliability, and simplicity of control by microprocessors, they are often used in small-scale robotics applications. A standard RC receiver (or a microcontroller) sends pulse-width modulation (PWM) signals to the servo. The electronics inside the servo translate the width of the pulse into a position. When the servo is commanded to rotate, the motor is powered until the potentiometer reaches the value corresponding to the commanded position.
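As an illustration of that translation (the 1–2 ms pulse range is the common hobby-RC convention; the 180° travel is an assumption for a typical unit): a pulse of width t_p microseconds, repeated roughly every 20 ms, maps to a commanded angle

\theta = 180^\circ \cdot \frac{t_p - 1000}{1000}, \qquad 1000\,\mu\text{s} \le t_p \le 2000\,\mu\text{s}

so a 1000 µs pulse commands one end of travel, 1500 µs the center, and 2000 µs the other end.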
History James Watt's steam engine governor is generally considered the first powered feedback system. The windmill fantail is an earlier example of automatic control, but since it does not have an amplifier or gain, it is not usually considered a servomechanism. The first feedback position control device was the ship steering engine, used to position the rudder of large ships based on the position of the ship's wheel. John McFarlane Gray was a pioneer. His patented design was used on the SS Great Eastern in 1866. Joseph Farcot may deserve equal credit for the feedback concept, with several patents between 1862 and 1868. The telemotor was invented around 1872 by Andrew Betts Brown, allowing elaborate mechanisms between the control room and the engine to be greatly simplified. Steam steering engines had the characteristics of a modern servomechanism: an input, an output, an error signal, and a means for amplifying the error signal used for negative feedback to drive the error towards zero. The Ragonnet power reverse mechanism was a general purpose air or steam-powered servo amplifier for linear motion patented in 1909. Electrical servomechanisms were used as early as 1888 in Elisha Gray's Telautograph. Electrical servomechanisms require a power amplifier. World War II saw the development of electrical fire-control servomechanisms, using an amplidyne as the power amplifier. Vacuum tube amplifiers were used in the UNISERVO tape drive for the UNIVAC I computer. The Royal Navy began experimenting with Remote Power Control (RPC) on HMS Champion in 1928 and began using RPC to control searchlights in the early 1930s. During WW2 RPC was used to control gun mounts and gun directors. Modern servomechanisms use solid state power amplifiers, usually built from MOSFET or thyristor devices. Small servos may use power transistors. The origin of the word is believed to come from the French "Le Servomoteur", or the slave motor, first used by J. J. L. Farcot in 1868 to describe hydraulic and steam engines for use in ship steering. The simplest kind of servos use bang–bang control. More complex control systems use proportional control, PID control, and state space control, which are studied in modern control theory. Types of performances Servos can be classified by means of their feedback control systems:
type 0 servos: under steady-state conditions they produce a constant value of the output with a constant error signal;
type 1 servos: under steady-state conditions they produce a constant value of the output with null error signal, but a constant rate of change of the reference implies a constant error in tracking the reference;
type 2 servos: under steady-state conditions they produce a constant value of the output with null error signal. A constant rate of change of the reference implies a null error in tracking the reference. A constant rate of acceleration of the reference implies a constant error in tracking the reference.
The servo bandwidth indicates the capability of the servo to follow rapid changes in the commanded input.
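In standard control-theory terms (a sketch using the final value theorem with unity feedback, not specific to any particular servo design), this classification counts the pure integrators in the open-loop transfer function G(s). For a reference R(s), the steady-state error is

e_{ss} = \lim_{s \to 0} \frac{s\, R(s)}{1 + G(s)}

A type 0 loop (no integrator) leaves a constant error for a constant (step) reference; a type 1 loop tracks a step with zero error but a ramp reference with constant error; a type 2 loop tracks both with zero error and leaves a constant error only for a constant-acceleration reference, matching the three cases listed above.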
See also Further reading Hsue-Shen Tsien (1954) Engineering Cybernetics, McGraw Hill, link from HathiTrust References External links Ontario News "pioneer in servo technology" Rane Pro Audio Reference definition of "servo-loop" Seattle Robotics Society's "What is a Servo?" different types of servo motors Control theory Control devices Mechanical amplifiers
Servomechanism
[ "Mathematics", "Technology", "Engineering" ]
1,755
[ "Mechanical amplifiers", "Control devices", "Applied mathematics", "Control theory", "Control engineering", "Amplifiers", "Dynamical systems" ]
325,521
https://en.wikipedia.org/wiki/Isolation%20%28database%20systems%29
In database systems, isolation is one of the ACID (Atomicity, Consistency, Isolation, Durability) transaction properties. It determines how transaction integrity is visible to other users and systems. A lower isolation level increases the ability of many users to access the same data at the same time, but also increases the number of concurrency effects (such as dirty reads or lost updates) users might encounter. Conversely, a higher isolation level reduces the types of concurrency effects that users may encounter, but requires more system resources and increases the chances that one transaction will block another. DBMS Concurrency Control Concurrency control comprises the underlying mechanisms in a DBMS which handle isolation and guarantee related correctness. It is heavily used by the database and storage engines both to guarantee the correct execution of concurrent transactions, and (via different mechanisms) the correctness of other DBMS processes. The transaction-related mechanisms typically constrain the database data access operations' timing (transaction schedules) to certain orders characterized as the serializability and recoverability schedule properties. Constraining database access operation execution typically means reduced performance (measured by rates of execution), and thus concurrency control mechanisms are typically designed to provide the best performance possible under the constraints. Often, when possible without harming correctness, the serializability property is compromised for better performance. However, recoverability cannot be compromised, since compromising it typically results in a database integrity violation. Two-phase locking is the most common transaction concurrency control method in DBMSs, used to provide both serializability and recoverability for correctness. In order to access a database object a transaction first needs to acquire a lock for this object. Depending on the access operation type (e.g., reading or writing an object) and on the lock type, acquiring the lock may be blocked and postponed, if another transaction is holding a lock for that object. Client Side Isolation Isolation is typically enforced at the database level. However, various client-side systems can also be used. It can be controlled in application frameworks or runtime containers such as J2EE Entity Beans. On older systems, it may be implemented systemically (by the application developers), for example through the use of temporary tables. In two-tier, three-tier, or n-tier web applications, a transaction manager can be used to maintain isolation. A transaction manager is middleware which sits between an app service (back-end application service) and the operating system. A transaction manager can provide global isolation and atomicity. It tracks when new servers join a transaction and coordinates an atomic commit protocol among the servers. The details are abstracted from the app, making transactions simpler and easier to code. A transaction processing monitor (TPM) is a collection of middleware including a transaction manager. A TPM might provide local isolation to an app with a lock manager. Read Phenomena The ANSI/ISO standard SQL 92 refers to three different read phenomena when a transaction retrieves data that another transaction might have updated. In the following examples, two transactions take place. In transaction 1, a query is performed, then in transaction 2, an update is performed, and finally in transaction 1, the same query is performed again.
The examples use the following relation (a minimal users table, reconstructed here to be consistent with the queries below):
users
id | name | age
1  | Joe  | 20
2  | Jill | 25
Dirty reads A dirty read (aka uncommitted dependency) occurs when a transaction retrieves a row that has been updated by another transaction that is not yet committed. In this example, transaction 1 retrieves the row with id 1, then transaction 2 updates the row with id 1, and finally transaction 1 retrieves the row with id 1 again. Now if transaction 2 rolls back its update (already retrieved by transaction 1) or performs other updates, then the view of the row may be wrong in transaction 1. At the READ UNCOMMITTED isolation level, the second SELECT in transaction 1 retrieves the updated row: this is a dirty read. At the READ COMMITTED, REPEATABLE READ, and SERIALIZABLE isolation levels, the second SELECT in transaction 1 retrieves the initial row. Non-repeatable reads A non-repeatable read occurs when a transaction retrieves a row twice and that row is updated by another transaction that is committed in between. In this example, transaction 1 retrieves the row with id 1, then transaction 2 updates the row with id 1 and is committed, and finally transaction 1 retrieves the row with id 1 again. At the READ UNCOMMITTED and READ COMMITTED isolation levels, the second SELECT in transaction 1 retrieves the updated row: this is a non-repeatable read. At the REPEATABLE READ and SERIALIZABLE isolation levels, the second SELECT in transaction 1 retrieves the initial row. Phantom reads A phantom read occurs when a transaction retrieves a set of rows twice and new rows are inserted into or removed from that set by another transaction that is committed in between. In this example, transaction 1 retrieves the set of rows with age greater than 17, then transaction 2 inserts a row with age 26 and is committed, and finally transaction 1 retrieves the set of rows with age greater than 17 again. At the READ UNCOMMITTED, READ COMMITTED, and REPEATABLE READ isolation levels, the second SELECT in transaction 1 retrieves the new set of rows that includes the inserted row: this is a phantom read. At the SERIALIZABLE isolation level, the second SELECT in transaction 1 retrieves the initial set of rows. There are two basic strategies used to prevent non-repeatable reads and phantom reads. In the first strategy, lock-based concurrency control, transaction 2 is committed after transaction 1 is committed or rolled back. It produces the serial schedule T1, T2. In the other strategy, multiversion concurrency control, transaction 2 is committed immediately while transaction 1, which started before transaction 2, continues to operate on an old snapshot of the database taken at the start of transaction 1, and when transaction 1 eventually tries to commit, if the result of committing would be equivalent to the serial schedule T1, T2, then transaction 1 is committed; otherwise, there is a commit conflict and transaction 1 is rolled back with a serialization failure. Under lock-based concurrency control, non-repeatable reads and phantom reads may occur when read locks are not acquired when performing a SELECT, or when the acquired locks on affected rows are released as soon as the SELECT is performed. Under multiversion concurrency control, non-repeatable reads and phantom reads may occur when the requirement that a transaction affected by a commit conflict must be rolled back is relaxed.
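To make these interleavings concrete, here is a minimal sketch in standard SQL (the statement for choosing the level varies slightly between systems, and the rows come from the reconstructed users relation above), showing a dirty read at READ UNCOMMITTED:

-- Session T1
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN;
SELECT age FROM users WHERE id = 1;  -- returns 20

-- Session T2
BEGIN;
UPDATE users SET age = 21 WHERE id = 1;  -- updated but not yet committed

-- Session T1
SELECT age FROM users WHERE id = 1;  -- returns 21: a dirty read

-- Session T2
ROLLBACK;  -- T1 has now acted on a value that was never committed

At READ COMMITTED or stronger, the second SELECT in T1 would still return 20.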
Isolation levels Of the four ACID properties in a DBMS (Database Management System), the isolation property is the one most often relaxed. When attempting to maintain the highest level of isolation, a DBMS usually acquires locks on data, which may result in a loss of concurrency, or implements multiversion concurrency control. This requires adding logic for the application to function correctly. Most DBMSs offer a number of transaction isolation levels, which control the degree of locking that occurs when selecting data. For many database applications, the majority of database transactions can be constructed to avoid requiring high isolation levels (e.g. SERIALIZABLE level), thus reducing the locking overhead for the system. The programmer must carefully analyze database access code to ensure that any relaxation of isolation does not cause software bugs that are difficult to find. Conversely, if higher isolation levels are used, the possibility of deadlock is increased, which also requires careful analysis and programming techniques to avoid. Since each isolation level is stronger than those below, in that no higher isolation level allows an action forbidden by a lower one, the standard permits a DBMS to run a transaction at an isolation level stronger than that requested (e.g., a "Read committed" transaction may actually be performed at a "Repeatable read" isolation level). The isolation levels defined by the ANSI/ISO SQL standard are listed as follows. Serializable This is the highest isolation level. With a lock-based concurrency control DBMS implementation, serializability requires read and write locks (acquired on selected data) to be released at the end of the transaction. Also, range-locks must be acquired when a SELECT query uses a ranged WHERE clause, especially to avoid the phantom reads phenomenon. When using non-lock based concurrency control, no locks are acquired; however, if the system detects a write collision among several concurrent transactions, only one of them is allowed to commit. See snapshot isolation for more details on this topic. From (Second Informal Review Draft) ISO/IEC 9075:1992, Database Language SQL, July 30, 1992: The execution of concurrent SQL-transactions at isolation level SERIALIZABLE is guaranteed to be serializable. A serializable execution is defined to be an execution of the operations of concurrently executing SQL-transactions that produces the same effect as some serial execution of those same SQL-transactions. A serial execution is one in which each SQL-transaction executes to completion before the next SQL-transaction begins. Repeatable reads In this isolation level, a lock-based concurrency control DBMS implementation keeps read and write locks (acquired on selected data) until the end of the transaction. However, range-locks are not managed, so phantom reads can occur. Write skew is possible at this isolation level in some systems. Write skew is a phenomenon where two writes are allowed to the same column(s) in a table by two different writers (who have previously read the columns they are updating), resulting in the column having data that is a mix of the two transactions. Read committed In this isolation level, a lock-based concurrency control DBMS implementation keeps write locks (acquired on selected data) until the end of the transaction, but read locks are released as soon as the SELECT operation is performed (so the non-repeatable reads phenomenon can occur at this isolation level). As in the previous level, range-locks are not managed. Putting it in simpler words, read committed is an isolation level that guarantees that any data read is committed at the moment it is read. It simply restricts the reader from seeing any intermediate, uncommitted, 'dirty' read. It makes no promise whatsoever that if the transaction re-issues the read, it will find the same data; data is free to change after it is read.
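A sketch of how the levels differ in practice, again in standard SQL (exact syntax varies by DBMS; the interleaving as shown assumes a multiversion implementation, since a lock-based one would instead block T2's update until T1 finishes):

-- Session T1
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN;
SELECT age FROM users WHERE id = 1;  -- returns 20

-- Session T2
BEGIN;
UPDATE users SET age = 21 WHERE id = 1;
COMMIT;  -- commits between T1's two reads

-- Session T1
SELECT age FROM users WHERE id = 1;  -- still returns 20: the read is repeatable
COMMIT;
SELECT age FROM users WHERE id = 1;  -- a new transaction now sees 21

At READ COMMITTED, the second SELECT inside T1 would instead return 21, which is exactly the non-repeatable read described earlier.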
Read uncommitted This is the lowest isolation level. In this level, dirty reads are allowed, so one transaction may see not-yet-committed changes made by other transactions. Default isolation level The default isolation level of different DBMSs varies quite widely. Most databases that feature transactions allow the user to set any isolation level. Some DBMSs also require additional syntax when performing a SELECT statement to acquire locks (e.g. SELECT … FOR UPDATE to acquire exclusive write locks on accessed rows). However, the definitions above have been criticized as being ambiguous, and as not accurately reflecting the isolation provided by many databases: This paper shows a number of weaknesses in the anomaly approach to defining isolation levels. The three ANSI phenomena are ambiguous, and even in their loosest interpretations do not exclude some anomalous behavior … This leads to some counter-intuitive results. In particular, lock-based isolation levels have different characteristics than their ANSI equivalents. This is disconcerting because commercial database systems typically use locking implementations. Additionally, the ANSI phenomena do not distinguish between a number of types of isolation level behavior that are popular in commercial systems. There are also other criticisms concerning ANSI SQL's isolation definition, in that it encourages implementors to do "bad things": ... it relies in subtle ways on an assumption that a locking schema is used for concurrency control, as opposed to an optimistic or multi-version concurrency scheme. This implies that the proposed semantics are ill-defined. Isolation levels vs read phenomena The correspondence, as described in the sections above, is:
Isolation level | Dirty reads | Non-repeatable reads | Phantom reads
Read uncommitted | may occur | may occur | may occur
Read committed | don't occur | may occur | may occur
Repeatable read | don't occur | don't occur | may occur
Serializable | don't occur | don't occur | don't occur
Anomaly serializable is not the same as serializable. That is, it is necessary, but not sufficient, that a serializable schedule be free of all three phenomena types. See also Atomicity Consistency Durability Lock (database) Optimistic concurrency control Relational Database Management System Snapshot isolation References External links Oracle® Database Concepts, chapter 13 Data Concurrency and Consistency, Preventable Phenomena and Transaction Isolation Levels Oracle® Database SQL Reference, chapter 19 SQL Statements: SAVEPOINT to UPDATE, SET TRANSACTION in JDBC: Connection constant fields, Connection.getTransactionIsolation(), Connection.setTransactionIsolation(int) in Spring Framework: @Transactional, Isolation P.Bailis. When is "ACID" ACID? Rarely Data management Transaction processing
Isolation (database systems)
[ "Technology" ]
2,565
[ "Data management", "Data" ]
325,542
https://en.wikipedia.org/wiki/Technology%20acceptance%20model
The technology acceptance model (TAM) is an information systems theory that models how users come to accept and use a technology. The actual system use is the end-point where people use the technology. Behavioral intention is a factor that leads people to use the technology. The behavioral intention (BI) is influenced by the attitude (A), which is the general impression of the technology. The model suggests that when users are presented with a new technology, a number of factors influence their decision about how and when they will use it, notably: Perceived usefulness (PU) – This was defined by Fred Davis as "the degree to which a person believes that using a particular system would enhance their job performance". It means whether or not someone perceives that technology to be useful for what they want to do. Perceived ease-of-use (PEOU) – Davis defined this as "the degree to which a person believes that using a particular system would be free from effort". If the technology is easy to use, then the barrier is conquered. If it is not easy to use and the interface is complicated, no one has a positive attitude towards it. External variables, such as social influence, are also important factors in determining attitude. When these factors are in place, people will have the attitude and intention to use the technology. However, the perception may vary with factors such as age and gender, because everyone is different. The TAM has been continuously studied and expanded—the two major upgrades being the TAM 2 and the unified theory of acceptance and use of technology (or UTAUT). A TAM 3 has also been proposed in the context of e-commerce with an inclusion of the effects of trust and perceived risk on system use. Background TAM is one of the most influential extensions of Ajzen and Fishbein's theory of reasoned action (TRA) in the literature. Davis's technology acceptance model (Davis, 1989; Davis, Bagozzi, & Warshaw, 1989) is the most widely applied model of users' acceptance and usage of technology (Venkatesh, 2000). It was developed by Fred Davis and Richard Bagozzi. TAM replaces many of TRA's attitude measures with the two technology acceptance measures—ease of use, and usefulness. TRA and TAM, both of which have strong behavioural elements, assume that when someone forms an intention to act, they will be free to act without limitation; in the real world there will be many constraints, such as limited freedom to act, a point made by Bagozzi, Davis and Warshaw. Earlier research on the diffusion of innovations also suggested a prominent role for perceived ease of use. Tornatzky and Klein analysed the adoption of innovations, finding that compatibility, relative advantage, and complexity had the most significant relationships with adoption across a broad range of innovation types. Eason studied perceived usefulness in terms of a fit between systems, tasks and job profiles, using the term "task fit" to describe the metric. Legris, Ingham and Collerette suggest that TAM must be extended to include variables that account for change processes and that this could be achieved through adoption of the innovation model into TAM.
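Schematically, the TAM causal chain described in the opening section can be written as a set of linear structural equations (a sketch only; the β's are placeholder path coefficients to be estimated, not values from any particular study):

A = \beta_1 \,\mathrm{PEOU} + \beta_2 \,\mathrm{PU}

\mathrm{BI} = \beta_3 \, A + \beta_4 \,\mathrm{PU}

\mathrm{PU} = \beta_5 \,\mathrm{PEOU} + \text{external variables}

Ease of use thus acts on intention both directly, through attitude, and indirectly, by making the system appear more useful.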
Usage Several researchers have replicated Davis's original study to provide empirical evidence on the relationships that exist between usefulness, ease of use and system use. Much attention has focused on testing the robustness and validity of the questionnaire instrument used by Davis. Adams et al. replicated the work of Davis to demonstrate the validity and reliability of his instrument and his measurement scales. They also extended it to different settings and, using two different samples, they demonstrated the internal consistency and replication reliability of the two scales. Hendrickson et al. found high reliability and good test-retest reliability. Szajna found that the instrument had predictive validity for intent to use, self-reported usage and attitude toward use. The sum of this research has confirmed the validity of the Davis instrument and supported its use with different populations of users and different software choices. Segars and Grover re-examined Adams et al.'s replication of the Davis work. They were critical of the measurement model used, and postulated a different model based on three constructs: usefulness, effectiveness, and ease-of-use. These findings do not yet seem to have been replicated. However, some aspects of these findings were tested and supported by Workman by separating the dependent variable into information use versus technology use. Mark Keil and his colleagues have developed (or, perhaps rendered more popularisable) Davis's model into what they call the Usefulness/EOU Grid, which is a 2×2 grid where each quadrant represents a different combination of the two attributes. In the context of software use, this provides a mechanism for discussing the current mix of usefulness and EOU for particular software packages, and for plotting a different course if a different mix is desired, such as the introduction of even more powerful software. The TAM model has been used in most technological and geographic contexts. One of these contexts is health care, which is growing rapidly. Saravanos et al. extended the TAM model to incorporate emotion and the effect that it may play on the behavioral intention to accept a technology. Specifically, they looked at warm-glow. Venkatesh and Davis extended the original TAM model to explain perceived usefulness and usage intentions in terms of social influence (subjective norms, voluntariness, image) and cognitive instrumental processes (job relevance, output quality, result demonstrability, perceived ease of use). The extended model, referred to as TAM2, was tested in both voluntary and mandatory settings. The results strongly supported TAM2. Its constructs are:
Subjective norm – An individual's perception that people who are important to them think they should or should not perform the behavior in question. This was consistent with the theory of reasoned action (TRA).
Voluntariness – This was defined by Venkatesh & Davis as the "extent to which potential adopters perceive the adoption decision to be non-mandatory".
Image – This was defined by Moore & Benbasat as "the degree to which use of an innovation is perceived to enhance one's status in one's social system".
Job relevance – Venkatesh & Davis defined this as an individual's perception of the extent to which the target system is suitable for the job.
Output quality – Venkatesh & Davis defined this as an individual's perception of the system's ability to perform specific tasks.
Result demonstrability – The production of tangible results will directly influence the system's usefulness.
In an attempt to integrate the main competing user acceptance models, Venkatesh et al. formulated the unified theory of acceptance and use of technology (UTAUT). This model was found to outperform each of the individual models (adjusted R squared of 69 percent). UTAUT has been adopted by some recent studies in healthcare.
In addition, Jun et al. argue that the technology acceptance model is essential for analyzing the factors affecting customers' behavior towards online food delivery services. It is also a widely adopted theoretical model for demonstrating the acceptance of new technology fields. The foundation of TAM is a series of concepts that explains and predicts people's behavior on the basis of their beliefs, attitudes, and behavioral intentions. In TAM, perceived ease of use and perceived usefulness, considered general beliefs, play a more vital role than salient beliefs in attitudes toward utilizing a particular technology. Alternative models The MPT model: Independent of TAM, Scherer developed the matching person and technology model in 1986 as part of her National Science Foundation-funded dissertation research. The MPT model is fully described in her 1993 text, "Living in the State of Stuck", now in its 4th edition. The MPT model has accompanying assessment measures used in technology selection and decision-making, as well as outcomes research on differences among technology users, non-users, avoiders, and reluctant users. The HMSAM: TAM has been effective for explaining many kinds of systems use (e.g. e-learning, learning management systems, web portals) (Fathema, Shannon, Ross, 2015; Fathema, Ross, Witte, 2014). However, TAM is not ideally suited to explain adoption of purely intrinsic or hedonic systems (e.g., online games, music, learning for pleasure). Thus, an alternative model to TAM, called the hedonic-motivation system adoption model (HMSAM), was proposed for these kinds of systems by Lowry et al. HMSAM is designed to improve the understanding of hedonic-motivation system (HMS) adoption. HMS are systems used primarily to fulfill users' intrinsic motivations, such as online gaming, virtual worlds, online shopping, learning/education, online dating, digital music repositories, social networking, online pornography, gamified systems, and general gamification. Rather than being a minor TAM extension, HMSAM is an HMS-specific system acceptance model based on an alternative theoretical perspective, which is in turn grounded in flow-based cognitive absorption (CA). HMSAM may be especially useful in understanding gamification elements of systems use. Extended TAM: Several studies have proposed extensions of the original TAM (Davis, 1989) by adding external variables to it, with the aim of exploring the effects of external factors on users' attitude, behavioral intention and actual use of technology. Several factors have been examined so far, for example perceived self-efficacy, facilitating conditions, and system quality (Fathema, Shannon, Ross, 2015; Fathema, Ross, Witte, 2014). This model has also been applied to the acceptance of health care technologies. Criticisms TAM has been widely criticised, despite its frequent use, leading the original proposers to attempt to redefine it several times. Criticisms of TAM as a "theory" include its questionable heuristic value, limited explanatory and predictive power, triviality, and lack of any practical value. Benbasat and Barki suggest that TAM "has diverted researchers' attention away from other important research issues and has created an illusion of progress in knowledge accumulation. Furthermore, the independent attempts by several researchers to expand TAM in order to adapt it to the constantly changing IT environments has led to a state of theoretical chaos and confusion".
In general, TAM focuses on the individual 'user' of a computer, with the concept of 'perceived usefulness', extended to bring in more and more factors to explain how a user 'perceives' 'usefulness'. In doing so it ignores the essentially social processes of IS development and implementation, leaves unquestioned whether more technology is actually better, and neglects the social consequences of IS use. Lunceford argues that the framework of perceived usefulness and ease of use overlooks other issues, such as cost and the structural imperatives that force users into adopting the technology. For a recent analysis and critique of TAM, see Bagozzi. Legris et al. claim that, together, TAM and TAM2 account for only 40% of a technological system's use. Perceived ease of use is less likely to be a determinant of attitude and usage intention according to studies of telemedicine, mobile commerce, and online banking. See also Diffusion (business) Diffusion of innovations Domestication theory Lazy user model List of marketing topics New product development Product life cycle management Research and development Technology adoption lifecycle Technology lifecycle Theory of planned behavior Technology–organization–environment framework Notes References In V. Mahajan & Y. Wind (Eds.), Innovation Diffusion Models of New Product Acceptance. In N. Bjørn-Andersen, K. Eason, & D. Robey (Eds.), Managing computer impact: An international study of management and organizations. Okafor, D. J., Nico, M. & Azman, B. B. (2016). The influence of perceived ease of use and perceived usefulness on the intention to use a suggested online advertising workflow. Canadian International Journal of Science and Technology, 6(14), 162–174. Diffusion Innovation economics Product development Product lifecycle management Product management Science and technology studies Sociology of culture Technological change Technology in society
Technology acceptance model
[ "Physics", "Chemistry", "Technology" ]
2,493
[ "Transport phenomena", "Physical phenomena", "Diffusion", "Science and technology studies" ]
325,634
https://en.wikipedia.org/wiki/Life%20on%20Earth%20%28TV%20series%29
Life on Earth: A Natural History by David Attenborough is a British television natural history series made by the BBC in association with Warner Bros. Television and Reiner Moritz Productions. It was transmitted in the UK from 16 January 1979. During the course of the series presenter David Attenborough, following the format established by Kenneth Clark's Civilisation and Jacob Bronowski's The Ascent of Man (both series which he commissioned as controller of BBC2), travels the globe in order to trace the story of the evolution of life on the planet. Like the earlier series, it was divided into 13 programmes (each of around 55 minutes' duration). The executive producer was Christopher Parsons and the music was composed by Edward Williams. At a cost exceeding £1 million ($1.2 million), it was an immense project that involved filming in over 100 locations around the world and took three years to make, by a team of 30 people with the help of more than 500 scientists. Highly acclaimed as a milestone in the history of British wildlife television, it established Attenborough as not only the foremost television naturalist, but also an iconic figure in British cultural life. It is the first in Attenborough's Life series of programmes and was followed by The Living Planet (1984). Filming techniques Several special filming techniques were devised to obtain some of the footage of rare and elusive animals. One cameraman spent hundreds of hours waiting for the fleeting moment when a Darwin's frog, which incubates its young in its mouth, finally spat them out. Another built a replica of a mole rat burrow in a horizontally mounted wheel, so that as the mole rat ran along the tunnel, the wheel could be spun to keep the animal adjacent to the camera. To illustrate the motion of bats' wings in flight, a slow-motion sequence was filmed in a wind tunnel. The series was also the first to include footage of a live (although dying) coelacanth. The cameramen took advantage of improved film stock to produce some of the sharpest and most colourful wildlife footage that had been seen to date. The programmes also pioneered a style of presentation whereby David Attenborough would begin describing a certain species' behaviour in one location, before cutting to another to complete his illustration. Continuity was maintained, despite such sequences being filmed several months and thousands of miles apart. Gorilla encounter The best-remembered sequence occurs in the twelfth episode, when Attenborough encounters a group of mountain gorillas in Dian Fossey's sanctuary in Rwanda. The primates had become used to humans through years of being studied by researchers. Attenborough originally intended merely to get close enough to narrate a piece about the apes' use of the opposable thumb, but as he advanced on all fours toward the area where they were feeding, he suddenly found himself face to face with an adult female. Discarding his scripted speech, he turned to camera and delivered a whispered ad lib: When Attenborough returned to the site the next day, the female and two young gorillas began to groom and play with him. In his memoirs, Attenborough describes this as "one of the most exciting encounters of my life". He subsequently discovered, to his chagrin, that only a few seconds had been recorded: the cameraman was running low on film and wanted to save it for the planned description of the opposable thumb.
In 1999 viewers of Channel 4 voting for the 100 Greatest TV Moments placed the gorilla sequence at number 12, ranking it ahead of Queen Elizabeth II's coronation and the wedding of Charles and Diana. Critical and commercial reception The series attracted a weighted average of 15 million viewers in the UK, an exceptionally high figure for a BBC documentary in the late 1970s. It was also a major international success, being sold to over 100 territories and watched by an estimated audience of 500 million people worldwide. However, Life on Earth did not generate the same revenue for the BBC as later Attenborough series because the corporation signed away the American and European rights to their co-production partners, Warner Bros. and Reiner Moritz. It was nominated for four BAFTA TV awards and won the Broadcasting Press Guild Award for Best Documentary Series. In a list of the 100 Greatest British Television Programmes drawn up by the British Film Institute in 2000, voted for by industry professionals, Life on Earth was placed 32nd. Episodes 1997 revision A shortened series, using the footage and commentary from the original, was aired in 1997, edited down to three episodes: early life forms, plants, insects, and amphibians in the first; fish, birds and reptiles in the second; and mammals in the third. DVD, Blu-ray and book The series is available in the UK for Regions 2 and 4 as a four-disc DVD set (BBCDVD1233, released 1 September 2003) and as part of The Life Collection. In 2012 the series was released as a four-disc Blu-ray set (released 12 November 2012). A hardback book, Life on Earth by David Attenborough, was published in 1979 and became a worldwide bestseller. Its cover image of a Panamanian red-eyed tree frog, taken by Attenborough himself, became an instantly recognisable emblem of the series. It is currently out of print. A revised and updated edition of the book was published in 2018 to favourable reviews. Most if not all of the images in the 2018 edition are new, but the text remains substantially the same as the original. Music Edward Williams' avant-garde score matched the innovative production techniques of the television series. Williams used a traditional chamber music ensemble (harp, flute, clarinet, strings and percussion) combined with electronic sounds. The pieces were crafted scene-by-scene to synchronise with and complement the imagery on screen: in one sequence examining the flight of birds, the instrumentation mirrors each new creature's appearance. The sounds were processed through an early British synthesiser, the EMS VCS 3, to create an evocative sound. The score was never intended to be released commercially, but Williams had 100 copies pressed as gifts for the musicians involved. One of these LPs found its way into the hands of Jonny Trunk, owner of independent label Trunk Records, who negotiated the licence from the BBC. The soundtrack was finally released on 2 November 2009. References External links Life on Earth on the Eden website British Film Institute Screen Online The Reunion BBC Radio 4 programme about the making of Life on Earth 1970s British documentary television series 1979 British television series debuts 1979 British television series endings BBC television documentaries Evolutionary biology Nature educational television series Documentary television shows about evolution Television series by BBC Studios Television series by Warner Bros. Television Studios 1979 in science
Life on Earth (TV series)
[ "Biology" ]
1,378
[ "Evolutionary biology" ]
325,637
https://en.wikipedia.org/wiki/Successor%20function
In mathematics, the successor function or successor operation sends a natural number to the next one. The successor function is denoted by S, so S(n) = n + 1. For example, S(1) = 2 and S(2) = 3. The successor function is one of the basic components used to build a primitive recursive function. Successor operations are also known as zeration in the context of a zeroth hyperoperation: H0(a, b) = 1 + b. In this context, the extension of zeration is addition, which is defined as repeated succession. Overview The successor function is part of the formal language used to state the Peano axioms, which formalise the structure of the natural numbers. In this formalisation, the successor function is a primitive operation on the natural numbers, in terms of which the standard natural numbers and addition are defined. For example, 1 is defined to be S(0), and addition on natural numbers is defined recursively by: m + 0 = m, m + S(n) = S(m + n). This can be used to compute the addition of any two natural numbers. For example, 5 + 2 = 5 + S(1) = S(5 + 1) = S(5 + S(0)) = S(S(5 + 0)) = S(S(5)) = S(6) = 7. Several constructions of the natural numbers within set theory have been proposed. For example, John von Neumann constructs the number 0 as the empty set {}, and the successor of n, S(n), as the set n ∪ {n}. The axiom of infinity then guarantees the existence of a set that contains 0 and is closed with respect to S. The smallest such set is denoted by N, and its members are called natural numbers. The successor function is the level-0 foundation of the infinite Grzegorczyk hierarchy of hyperoperations, used to build addition, multiplication, exponentiation, tetration, etc. It was studied in 1986 in an investigation involving the generalization of the pattern for hyperoperations. It is also one of the primitive functions used in the characterization of computability by recursive functions. See also Successor ordinal Successor cardinal Increment and decrement operators Sequence References Mathematical logic Arithmetic Logic in computer science
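The recursive definitions above translate directly into code. A minimal Python sketch (an illustration, not part of the article's sources; plain integers stand in for the abstract naturals):

```python
# Successor-based arithmetic, mirroring the Peano-style definitions above.

def S(n: int) -> int:
    """Successor: S(n) = n + 1."""
    return n + 1

def add(m: int, n: int) -> int:
    """Addition defined recursively from S:
    m + 0 = m and m + S(n) = S(m + n)."""
    return m if n == 0 else S(add(m, n - 1))

assert add(5, 2) == 7  # unfolds as S(S(5)) = 7, as in the worked example

def von_neumann(n: int) -> frozenset:
    """Von Neumann construction: 0 is the empty set and S(n) = n ∪ {n}."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

# Each von Neumann numeral n has exactly n elements:
assert len(von_neumann(3)) == 3
```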
Successor function
[ "Mathematics" ]
509
[ "Logic in computer science", "Mathematical logic", "Arithmetic", "Mathematical logic stubs", "Number theory" ]
325,714
https://en.wikipedia.org/wiki/Hopf%20algebra
In mathematics, a Hopf algebra, named after Heinz Hopf, is a structure that is simultaneously a (unital associative) algebra and a (counital coassociative) coalgebra, with these structures' compatibility making it a bialgebra, and that moreover is equipped with an antihomomorphism satisfying a certain property. The representation theory of a Hopf algebra is particularly nice, since the existence of compatible comultiplication, counit, and antipode allows for the construction of tensor products of representations, trivial representations, and dual representations. Hopf algebras occur naturally in algebraic topology, where they originated and are related to the H-space concept, in group scheme theory, in group theory (via the concept of a group ring), and in numerous other places, making them probably the most familiar type of bialgebra. Hopf algebras are also studied in their own right, with much work on specific classes of examples on the one hand and classification problems on the other. They have diverse applications ranging from condensed matter physics and quantum field theory to string theory and LHC phenomenology. Formal definition Formally, a Hopf algebra is an (associative and coassociative) bialgebra H over a field K together with a K-linear map S: H → H (called the antipode) satisfying the antipode condition ∇ ∘ (S ⊗ id) ∘ Δ = η ∘ ε = ∇ ∘ (id ⊗ S) ∘ Δ (usually expressed as the commutativity of a diagram). Here Δ is the comultiplication of the bialgebra, ∇ its multiplication, η its unit and ε its counit. In the sumless Sweedler notation, this property can also be expressed as S(c(1))c(2) = ε(c)1 = c(1)S(c(2)) for all c in H. As for algebras, one can replace the underlying field K with a commutative ring R in the above definition. The definition of Hopf algebra is self-dual (as reflected in the symmetry of the antipode condition), so if one can define a dual of H (which is always possible if H is finite-dimensional), then it is automatically a Hopf algebra. Structure constants Fixing a basis {e_k} for the underlying vector space, one may define the algebra in terms of structure constants, for multiplication: e_i e_j = Σ_k μ^k_ij e_k, for co-multiplication: Δ(e_k) = Σ_{i,j} ν^ij_k e_i ⊗ e_j, and for the antipode: S(e_i) = Σ_j τ^j_i e_j. Associativity then requires that Σ_p μ^p_ij μ^q_pk = Σ_p μ^p_jk μ^q_ip, while co-associativity requires that Σ_p ν^pq_k ν^ij_p = Σ_p ν^ip_k ν^jq_p. The connecting axiom, expressing that Δ is an algebra homomorphism, requires that Σ_k μ^k_ij ν^pq_k = Σ_{a,b,c,d} ν^ab_i ν^cd_j μ^p_ac μ^q_bd. Properties of the antipode The antipode S is sometimes required to have a K-linear inverse, which is automatic in the finite-dimensional case, or if H is commutative or cocommutative (or more generally quasitriangular). In general, S is an antihomomorphism, so S² is a homomorphism, which is therefore an automorphism if S is invertible (as may be required). If S² = idH, then the Hopf algebra is said to be involutive (and the underlying algebra with involution is a *-algebra). If H is finite-dimensional semisimple over a field of characteristic zero, commutative, or cocommutative, then it is involutive. If a bialgebra B admits an antipode S, then S is unique ("a bialgebra admits at most one Hopf algebra structure"). Thus, the antipode is not an extra piece of structure that we can choose: being a Hopf algebra is a property of a bialgebra. The antipode is an analog of the inversion map on a group that sends g to g⁻¹. Hopf subalgebras A subalgebra A of a Hopf algebra H is a Hopf subalgebra if it is a subcoalgebra of H and the antipode S maps A into A. In other words, a Hopf subalgebra A is a Hopf algebra in its own right when the multiplication, comultiplication, counit and antipode of H are restricted to A (and additionally the identity 1 of H is required to be in A).
The Nichols–Zoeller freeness theorem of Warren Nichols and Bettina Zoeller (1989) established that the natural A-module H is free of finite rank if H is finite-dimensional: a generalization of Lagrange's theorem for subgroups. As a corollary of this and integral theory, a Hopf subalgebra of a semisimple finite-dimensional Hopf algebra is automatically semisimple. A Hopf subalgebra A is said to be right normal in a Hopf algebra H if it satisfies the condition of stability, adr(h)(A) ⊆ A for all h in H, where the right adjoint mapping adr is defined by adr(h)(a) = S(h(1))ah(2) for all a in A, h in H. Similarly, a Hopf subalgebra A is left normal in H if it is stable under the left adjoint mapping defined by adl(h)(a) = h(1)aS(h(2)). The two conditions of normality are equivalent if the antipode S is bijective, in which case A is said to be a normal Hopf subalgebra. A normal Hopf subalgebra A in H satisfies the condition (of equality of subsets of H): HA+ = A+H, where A+ denotes the kernel of the counit on A. This normality condition implies that HA+ is a Hopf ideal of H (i.e. an algebra ideal in the kernel of the counit, a coalgebra coideal and stable under the antipode). As a consequence one has a quotient Hopf algebra H/HA+ and an epimorphism H → H/A+H, a theory analogous to that of normal subgroups and quotient groups in group theory. Hopf orders A Hopf order O over an integral domain R with field of fractions K is an order in a Hopf algebra H over K which is closed under the algebra and coalgebra operations: in particular, the comultiplication Δ maps O to O⊗O. Group-like elements A group-like element is a nonzero element x such that Δ(x) = x⊗x. The group-like elements form a group with inverse given by the antipode. A primitive element x satisfies Δ(x) = x⊗1 + 1⊗x. Examples Note that functions on a finite group can be identified with the group ring, though these are more naturally thought of as dual – the group ring consists of finite sums of elements, and thus pairs with functions on the group by evaluating the function on the summed elements. Cohomology of Lie groups The cohomology algebra (over a field k) of a Lie group G is a Hopf algebra: the multiplication is provided by the cup product, and the comultiplication by the group multiplication G × G → G. This observation was actually a source of the notion of Hopf algebra. Using this structure, Hopf proved a structure theorem for the cohomology algebra of Lie groups. Theorem (Hopf) Let A be a finite-dimensional, graded commutative, graded cocommutative Hopf algebra over a field of characteristic 0. Then A (as an algebra) is a free exterior algebra with generators of odd degree. Quantum groups and non-commutative geometry Most examples above are either commutative (i.e. the multiplication is commutative) or co-commutative (i.e. Δ = T ∘ Δ where the twist map T: H ⊗ H → H ⊗ H is defined by T(x ⊗ y) = y ⊗ x). Other interesting Hopf algebras are certain "deformations" or "quantizations" of examples above which are neither commutative nor co-commutative. These Hopf algebras are often called quantum groups, a term that is so far only loosely defined. They are important in noncommutative geometry, the idea being the following: a standard algebraic group is well described by its standard Hopf algebra of regular functions; we can then think of the deformed version of this Hopf algebra as describing a certain "non-standard" or "quantized" algebraic group (which is not an algebraic group at all).
While there does not seem to be a direct way to define or manipulate these non-standard objects, one can still work with their Hopf algebras, and indeed one identifies them with their Hopf algebras. Hence the name "quantum group". Representation theory Let A be a Hopf algebra, and let M and N be A-modules. Then M ⊗ N is also an A-module, with a ⋅ (m ⊗ n) = a1m ⊗ a2n for m ∈ M, n ∈ N and Δ(a) = (a1, a2). Furthermore, we can define the trivial representation as the base field K with a ⋅ m = ε(a)m for m ∈ K. Finally, the dual representation of A can be defined: if M is an A-module and M* is its dual space, then (a ⋅ f)(m) = f(S(a) ⋅ m), where f ∈ M* and m ∈ M. The relationship between Δ, ε, and S ensures that certain natural homomorphisms of vector spaces are indeed homomorphisms of A-modules. For instance, the natural isomorphisms of vector spaces M → M ⊗ K and M → K ⊗ M are also isomorphisms of A-modules. Also, the map of vector spaces M* ⊗ M → K with f ⊗ m ↦ f(m) is a homomorphism of A-modules. However, the map M ⊗ M* → K is not necessarily a homomorphism of A-modules. Related concepts Graded Hopf algebras are often used in algebraic topology: they are the natural algebraic structure on the direct sum of all homology or cohomology groups of an H-space. Locally compact quantum groups generalize Hopf algebras and carry a topology. The algebra of all continuous functions on a Lie group is a locally compact quantum group. Quasi-Hopf algebras are generalizations of Hopf algebras, where coassociativity only holds up to a twist. They have been used in the study of the Knizhnik–Zamolodchikov equations. Multiplier Hopf algebras, introduced by Alfons Van Daele in 1994, are generalizations of Hopf algebras in which the comultiplication maps an algebra (with or without unit) to the multiplier algebra of the tensor product of the algebra with itself. Hopf group-(co)algebras, introduced by V. G. Turaev in 2000, are also generalizations of Hopf algebras. Weak Hopf algebras Weak Hopf algebras, or quantum groupoids, are generalizations of Hopf algebras. Like Hopf algebras, weak Hopf algebras form a self-dual class of algebras; i.e., if H is a (weak) Hopf algebra, so is H*, the dual space of linear forms on H (with respect to the algebra-coalgebra structure obtained from the natural pairing with H and its coalgebra-algebra structure). A weak Hopf algebra H is usually taken to be a finite-dimensional algebra and coalgebra with coproduct Δ: H → H ⊗ H and counit ε: H → k satisfying all the axioms of Hopf algebra except possibly Δ(1) ≠ 1 ⊗ 1 or ε(ab) ≠ ε(a)ε(b) for some a, b in H. Instead one requires certain weakened compatibility conditions to hold for all a, b, and c in H. H has a weakened antipode S: H → H satisfying the axioms: for all a in H (the right-hand side is the interesting projection usually denoted by ΠR(a) or εs(a) with image a separable subalgebra denoted by HR or Hs); for all a in H (another interesting projection usually denoted by ΠL(a) or εt(a) with image a separable algebra HL or Ht, anti-isomorphic to HR via S); for all a in H. Note that if Δ(1) = 1 ⊗ 1, these conditions reduce to the two usual conditions on the antipode of a Hopf algebra. The axioms are partly chosen so that the category of H-modules is a rigid monoidal category. The unit H-module is the separable algebra HL mentioned above. For example, a finite groupoid algebra is a weak Hopf algebra. In particular, the groupoid algebra on [n] with one pair of invertible arrows eij and eji between i and j in [n] is isomorphic to the algebra H of n × n matrices.
The weak Hopf algebra structure on this particular H is given by coproduct Δ(eij) = eij ⊗ eij, counit ε(eij) = 1 and antipode S(eij) = eji. The separable subalgebras HL and HR coincide and are non-central commutative algebras in this particular case (the subalgebra of diagonal matrices). Early theoretical contributions to weak Hopf algebras can be found in the literature cited below. Hopf algebroids See Hopf algebroid. Analogy with groups Groups can be axiomatized by the same diagrams (equivalently, operations) as a Hopf algebra, where G is taken to be a set instead of a module. In this case: the field K is replaced by the 1-point set; there is a natural counit (map to 1 point); there is a natural comultiplication (the diagonal map); the unit is the identity element of the group; the multiplication is the multiplication in the group; the antipode is the inverse. In this philosophy, a group can be thought of as a Hopf algebra over the "field with one element". Hopf algebras in braided monoidal categories The definition of Hopf algebra is naturally extended to arbitrary braided monoidal categories. A Hopf algebra in such a category C is a sextuple (H, μ, η, Δ, ε, S), where H is an object in C, and μ: H ⊗ H → H (multiplication), η: I → H (unit), Δ: H → H ⊗ H (comultiplication), ε: H → I (counit), S: H → H (antipode) are morphisms in C (here I denotes the unit object of C) such that 1) the triple (H, μ, η) is a monoid in the monoidal category C, i.e. the associativity and unit diagrams are commutative; 2) the triple (H, Δ, ε) is a comonoid in the monoidal category C, i.e. the coassociativity and counit diagrams are commutative; 3) the structures of monoid and comonoid on H are compatible: the multiplication μ and the unit η are morphisms of comonoids, and (this is equivalent in this situation) at the same time the comultiplication Δ and the counit ε are morphisms of monoids; this means that the corresponding diagrams must be commutative, formulated with the help of the left unit morphism in C and the natural transformation of functors which is unique in the class of natural transformations composed from the structural transformations (associativity, left and right units, transposition, and their inverses) in the category C. The quintuple (H, μ, η, Δ, ε) with the properties 1), 2), 3) is called a bialgebra in the category C; 4) the antipode diagram, expressing the identity μ ∘ (S ⊗ id) ∘ Δ = η ∘ ε = μ ∘ (id ⊗ S) ∘ Δ, is commutative. The typical examples are the following. Groups. In the monoidal category of sets (with the cartesian product as the tensor product, and an arbitrary singleton, say {∗}, as the unit object) a triple (X, μ, η) is a monoid in the categorical sense if and only if it is a monoid in the usual algebraic sense, i.e. if the operations μ and η behave like usual multiplication and unit in X (but possibly without the invertibility of elements). At the same time, a triple (X, Δ, ε) is a comonoid in the categorical sense iff Δ is the diagonal operation Δ(x) = (x, x) (and the operation ε is then defined uniquely as well: ε(x) = ∗). And any such structure of comonoid is compatible with any structure of monoid in the sense that the diagrams in part 3 of the definition always commute. As a corollary, each monoid in the category of sets can naturally be considered as a bialgebra there, and vice versa. The existence of the antipode for such a bialgebra means exactly that every element x has an inverse with respect to the multiplication μ. Thus, in the category of sets Hopf algebras are exactly groups in the usual algebraic sense. Classical Hopf algebras. In the special case when C is the category of vector spaces over a given field K, the Hopf algebras in C are exactly the classical Hopf algebras described above. Functional algebras on groups.
The standard functional algebras of continuous, smooth, holomorphic and regular functions on groups are Hopf algebras in the category Ste of stereotype spaces. Group algebras. The stereotype group algebras of measures, distributions, analytic functionals and currents on groups are Hopf algebras in the category Ste of stereotype spaces. These Hopf algebras are used in the duality theories for non-commutative groups. See also Quasitriangular Hopf algebra Algebra/set analogy Representation theory of Hopf algebras Ribbon Hopf algebra Superalgebra Supergroup Anyonic Lie algebra Sweedler's Hopf algebra Hopf algebra of permutations Milnor–Moore theorem Notes and references Notes Citations References Heinz Hopf, Über die Topologie der Gruppen-Mannigfaltigkeiten und ihrer Verallgemeinerungen, Annals of Mathematics 42 (1941), 22–52. Reprinted in Selecta Heinz Hopf, pp. 119–151, Springer, Berlin (1964). Monoidal categories Representation theory
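As a concrete illustration of the axioms, the following Python sketch encodes the group algebra of the cyclic group Z₃ on its basis of group elements and checks the antipode condition there. This is a toy verification on basis vectors, not a general-purpose Hopf algebra implementation, and the naming is my own.

```python
# Toy check of the Hopf axioms for the group algebra K[Z_3] on its basis
# {e, g, g^2}; basis element i stands for the group element g^i.
n = 3

def mult(i: int, j: int) -> int:       # multiplication: g^i * g^j = g^(i+j)
    return (i + j) % n

def comult(i: int):                    # Δ(g^i) = g^i ⊗ g^i (group-like)
    return (i, i)

def counit(i: int) -> int:             # ε(g^i) = 1
    return 1

def antipode(i: int) -> int:           # S(g^i) = g^(-i)
    return (-i) % n

# Antipode axiom on basis elements: ∇(S ⊗ id)Δ(x) = ε(x)·1 = ∇(id ⊗ S)Δ(x),
# i.e. S(g^i) g^i = e = g^i S(g^i), since ε(g^i) = 1 and the unit is e = g^0.
for i in range(n):
    a, b = comult(i)
    assert mult(antipode(a), b) == 0 == mult(a, antipode(b))
    assert counit(i) == 1
print("antipode axiom holds on the basis of K[Z_3]")
```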
Hopf algebra
[ "Mathematics" ]
3,758
[ "Mathematical structures", "Monoidal categories", "Fields of abstract algebra", "Category theory", "Representation theory" ]
325,726
https://en.wikipedia.org/wiki/Social%20network%20analysis
Social network analysis (SNA) is the process of investigating social structures through the use of networks and graph theory. It characterizes networked structures in terms of nodes (individual actors, people, or things within the network) and the ties, edges, or links (relationships or interactions) that connect them. Examples of social structures commonly visualized through social network analysis include social media networks, meme proliferation, information circulation, friendship and acquaintance networks, business networks, knowledge networks, difficult working relationships, collaboration graphs, kinship, disease transmission, and sexual relationships. These networks are often visualized through sociograms in which nodes are represented as points and ties are represented as lines. These visualizations provide a means of qualitatively assessing networks by varying the visual representation of their nodes and edges to reflect attributes of interest. Social network analysis has emerged as a key technique in modern sociology. It has also gained significant popularity in the following: anthropology, biology, demography, communication studies, economics, geography, history, information science, organizational studies, physics, political science, public health, social psychology, development studies, sociolinguistics, and computer science, education and distance education research, and is now commonly available as a consumer tool (see the list of SNA software). History Social network analysis has its theoretical roots in the work of early sociologists such as Georg Simmel and Émile Durkheim, who wrote about the importance of studying patterns of relationships that connect social actors. Social scientists have used the concept of "social networks" since early in the 20th century to connote complex sets of relationships between members of social systems at all scales, from interpersonal to international. In the 1930s Jacob Moreno and Helen Jennings introduced basic analytical methods. In 1954, John Arundel Barnes started using the term systematically to denote patterns of ties, encompassing concepts traditionally used by the public and those used by social scientists: bounded groups (e.g., tribes, families) and social categories (e.g., gender, ethnicity). Starting in the 1970s, scholars such as Ronald Burt, Kathleen Carley, Mark Granovetter, David Krackhardt, Edward Laumann, Anatol Rapoport, Barry Wellman, Douglas R. White, and Harrison White expanded the use of systematic social network analysis. Beginning in the late 1990s, social network analysis experienced a further resurgence with work by sociologists, political scientists, economists, computer scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, Mark Newman, Matthew Jackson, Jon Kleinberg, and others, developing and applying new models and methods, prompted in part by the emergence of new data available about online social networks as well as "digital traces" regarding face-to-face networks. Computational SNA has been extensively used in research on study-abroad second language acquisition. Even in the study of literature, network analysis has been applied by Anheier, Gerhards and Romo, Wouter De Nooy, and Burgert Senekal. Indeed, social network analysis has found applications in various academic disciplines as well as practical contexts such as countering money laundering and terrorism. Metrics Size: The number of network members in a given network. 
Connections Homophily: The extent to which actors form ties with similar versus dissimilar others. Similarity can be defined by gender, race, age, occupation, educational achievement, status, values or any other salient characteristic. Homophily is also referred to as assortativity. Multiplexity: The number of content-forms contained in a tie. For example, two people who are friends and also work together would have a multiplexity of 2. Multiplexity has been associated with relationship strength and can also comprise overlap of positive and negative network ties. Mutuality/Reciprocity: The extent to which two actors reciprocate each other's friendship or other interaction. Network Closure: A measure of the completeness of relational triads. An individual's assumption of network closure (i.e. that their friends are also friends) is called transitivity. Transitivity is an outcome of the individual or situational trait of Need for Cognitive Closure. Propinquity: The tendency for actors to have more ties with geographically close others. Distributions Bridge: An individual whose weak ties fill a structural hole, providing the only link between two individuals or clusters. It also includes the shortest route when a longer one is unfeasible due to a high risk of message distortion or delivery failure. Centrality: Centrality refers to a group of metrics that aim to quantify the "importance" or "influence" (in a variety of senses) of a particular node (or group) within a network. Examples of common methods of measuring "centrality" include betweenness centrality, closeness centrality, eigenvector centrality, alpha centrality, and degree centrality. Density: The proportion of direct ties in a network relative to the total number possible. Distance: The minimum number of ties required to connect two particular actors, as popularized by Stanley Milgram's small world experiment and the idea of 'six degrees of separation'. Structural holes: The absence of ties between two parts of a network. Finding and exploiting a structural hole can give an entrepreneur a competitive advantage. This concept was developed by sociologist Ronald Burt, and is sometimes referred to as an alternate conception of social capital. Tie Strength: Defined by the linear combination of time, emotional intensity, intimacy and reciprocity (i.e. mutuality). Strong ties are associated with homophily, propinquity and transitivity, while weak ties are associated with bridges. Segmentation Groups are identified as 'cliques' if every individual is directly tied to every other individual, 'social circles' if there is less stringency of direct contact, which is imprecise, or as structurally cohesive blocks if precision is wanted. Clustering coefficient: A measure of the likelihood that two associates of a node are associates themselves. A higher clustering coefficient indicates a greater 'cliquishness'. Cohesion: The degree to which actors are connected directly to each other by cohesive bonds. Structural cohesion refers to the minimum number of members who, if removed from a group, would disconnect the group. Modelling and visualization of networks Visual representation of social networks is important for understanding the network data and conveying the results of the analysis. Numerous methods of visualization for data produced by social network analysis have been presented. Many analytic software packages have modules for network visualization.
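A minimal sketch of how several of the metrics above are computed in practice, using the Python library networkx (the toy friendship graph below is invented for illustration):

```python
import networkx as nx

# A toy friendship network: two tight clusters bridged by the Cal-Dev tie.
G = nx.Graph()
G.add_edges_from([
    ("Ann", "Bea"), ("Ann", "Cal"), ("Bea", "Cal"),  # a closed (transitive) triad
    ("Cal", "Dev"),                                   # a bridge between clusters
    ("Dev", "Eli"), ("Dev", "Fay"), ("Eli", "Fay"),
])

print(nx.density(G))                 # proportion of possible ties that are present
print(nx.degree_centrality(G))       # normalized degree per node
print(nx.betweenness_centrality(G))  # Cal and Dev score high: they span the hole
print(nx.clustering(G))              # local 'cliquishness' around each node
print(nx.transitivity(G))            # global fraction of closed triads
print(nx.shortest_path_length(G, "Ann", "Fay"))  # distance between two actors
```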
The data is explored by displaying nodes and ties in various layouts and attributing colors, size, and other advanced properties to nodes. Visual representations of networks may be a powerful method for conveying complex information, but care should be taken in interpreting node and graph properties from visual displays alone, as they may misrepresent structural properties better captured through quantitative analyses. Signed graphs can be used to illustrate good and bad relationships between humans. A positive edge between two nodes denotes a positive relationship (friendship, alliance, dating), and a negative edge denotes a negative relationship (hatred, anger). Signed social network graphs can be used to predict the future evolution of the graph. In signed social networks, there is the concept of "balanced" and "unbalanced" cycles. A balanced cycle is defined as a cycle in which the product of all the signs is positive. According to balance theory, balanced graphs represent a group of people who are unlikely to change their opinions of the other people in the group. Unbalanced graphs represent a group of people who are very likely to change their opinions of the people in their group. For example, a group of three people (A, B, and C) where A and B have a positive relationship and B and C have a positive relationship, but C and A have a negative relationship, is an unbalanced cycle. This group is very likely to morph into a balanced cycle, such as one where B only has a good relationship with A, and both A and B have a negative relationship with C. By using the concept of balanced and unbalanced cycles, the evolution of signed social network graphs can be predicted. Different approaches to participatory network mapping have proven useful, especially when using social network analysis as a tool for facilitating change. Here, participants/interviewers provide network data by mapping the network (with pen and paper or digitally) during the data collection session. An example of a pen-and-paper network mapping approach, which also includes the collection of some actor attributes (perceived influence and goals of actors), is the Net-map toolbox. One benefit of this approach is that it allows researchers to collect qualitative data and ask clarifying questions while the network data is collected. Social networking potential Social networking potential (SNP) is a numeric coefficient, derived through algorithms, that represents both the size of an individual's social network and their ability to influence that network. SNP coefficients were first defined and used by Bob Gerstley in 2002. A closely related term is Alpha User, defined as a person with a high SNP. SNP coefficients have two primary functions: the classification of individuals based on their social networking potential, and the weighting of respondents in quantitative marketing research studies. By calculating the SNP of respondents and by targeting high-SNP respondents, the strength and relevance of quantitative marketing research used to drive viral marketing strategies is enhanced. Variables used to calculate an individual's SNP include but are not limited to: participation in social networking activities, group memberships, leadership roles, recognition, publication/editing/contributing to non-electronic media, publication/editing/contributing to electronic media (websites, blogs), and frequency of past distribution of information within their network.
The acronym "SNP" and some of the first algorithms developed to quantify an individual's social networking potential were described in the white paper "Advertising Research is Changing" (Gerstley, 2003) See Viral Marketing. The first book to discuss the commercial use of Alpha Users among mobile telecoms audiences was 3G Marketing by Ahonen, Kasper and Melkko in 2004. The first book to discuss Alpha Users more generally in the context of social marketing intelligence was Communities Dominate Brands by Ahonen & Moore in 2005. In 2012, Nicola Greco (UCL) presents at TEDx the Social Networking Potential as a parallelism to the potential energy that users generate and companies should use, stating that "SNP is the new asset that every company should aim to have". Practical applications Social network analysis is used extensively in a wide range of applications and disciplines. Some common network analysis applications include data aggregation and mining, network propagation modeling, network modeling and sampling, user attribute and behavior analysis, community-maintained resource support, location-based interaction analysis, social sharing and filtering, recommender systems development, and link prediction and entity resolution. In the private sector, businesses use social network analysis to support activities such as customer interaction and analysis, information system development analysis, marketing, and business intelligence needs (see social media analytics). Some public sector uses include development of leader engagement strategies, analysis of individual and group engagement and media use, and community-based problem solving. Longitudinal SNA in schools Large numbers of researchers worldwide examine the social networks of children and adolescents. In questionnaires, they list all classmates, students in the same grade, or schoolmates, asking: "Who are your best friends?". Students may sometimes nominate as many peers as they wish; other times, the number of nominations is limited. Social network researchers have investigated similarities in friendship networks. The similarity between friends was established as far back as classical antiquity. Resemblance is an important basis for the survival of friendships. Similarity in characteristics, attitudes, or behaviors means that friends understand each other more quickly, have common interests to talk about, know better where they stand with each other, and have more trust in each other. As a result, such relationships are more stable and valuable. Moreover, looking more alike makes young people more confident and strengthens them in developing their identity. Similarity in behavior can result from two processes: selection and influence. These two processes can be distinguished using longitudinal social network analysis in the R package SIENA (Simulation Investigation for Empirical Network Analyses), developed by Tom Snijders and colleagues. Longitudinal social network analysis became mainstream after the publication of a special issue of the Journal of Research on Adolescence in 2013, edited by René Veenstra and containing 15 empirical papers. Security applications Social network analysis is also used in intelligence, counter-intelligence and law enforcement activities. This technique allows the analysts to map covert organizations such as an espionage ring, an organized crime family or a street gang. 
The National Security Agency (NSA) uses its electronic surveillance programs to generate the data needed to perform this type of analysis on terrorist cells and other networks deemed relevant to national security. The NSA looks up to three nodes deep during this network analysis. After the initial mapping of the social network is complete, analysis is performed to determine the structure of the network and to identify, for example, the leaders within the network. This allows military or law enforcement assets to launch capture-or-kill decapitation attacks on the high-value targets in leadership positions to disrupt the functioning of the network. The NSA has been performing social network analysis on call detail records (CDRs), also known as metadata, since shortly after the September 11 attacks. Textual analysis applications Large textual corpora can be turned into networks and then analyzed using social network analysis. In these networks, the nodes are social actors, and the links are actions. The extraction of these networks can be automated by using parsers. The resulting networks, which can contain thousands of nodes, are then analyzed using tools from network theory to identify the key actors, the key communities or parties, and general properties such as the robustness or structural stability of the overall network or the centrality of certain nodes. This automates the approach introduced by Quantitative Narrative Analysis, whereby subject-verb-object triplets are identified with pairs of actors linked by an action, or pairs formed by actor-object. In other approaches, textual analysis is carried out considering the network of words co-occurring in a text. In these networks, nodes are words and links among them are weighted based on their frequency of co-occurrence (within a specific maximum range). Internet applications Social network analysis has also been applied to understanding online behavior by individuals, organizations, and between websites. Hyperlink analysis can be used to analyze the connections between websites or webpages to examine how information flows as individuals navigate the web. The connections between organizations have also been analyzed via hyperlink analysis to examine which organizations are linked within an issue community. Netocracy Another concept that has emerged from this connection between social network theory and the Internet is netocracy, with several authors studying the correlation between the extended use of online social networks and changes in social power dynamics. Social media internet applications Social network analysis has been applied to social media as a tool to understand behavior between individuals or organizations through their linkages on social media websites such as Twitter and Facebook. In computer-supported collaborative learning One of the most current applications of SNA is the study of computer-supported collaborative learning (CSCL). When applied to CSCL, SNA is used to help understand how learners collaborate in terms of amount, frequency, and length, as well as the quality, topic, and strategies of communication. Additionally, SNA can focus on specific aspects of the network connection, or the entire network as a whole. It uses graphical representations, written representations, and data representations to help examine the connections within a CSCL network. When applying SNA to a CSCL environment, the interactions of the participants are treated as a social network.
The focus of the analysis is on the "connections" made among the participants – how they interact and communicate – as opposed to how each participant behaved on his or her own. Key terms There are several key terms associated with social network analysis research in computer-supported collaborative learning, such as density, centrality, indegree, outdegree, and sociogram. Density refers to the "connections" between participants. Density is defined as the number of connections a participant has, divided by the total possible connections a participant could have. For example, if there are 20 people participating, each person could potentially connect to 19 other people. A density of 100% (19/19) is the greatest density in the system. A density of 5% indicates there is only 1 of 19 possible connections. Centrality focuses on the behavior of individual participants within a network. It measures the extent to which an individual interacts with other individuals in the network. The more an individual connects to others in a network, the greater their centrality in the network. In-degree and out-degree variables are related to centrality. In-degree centrality concentrates on a specific individual as the point of focus; the centrality of all other individuals is based on their relation to the focal point of the "in-degree" individual. Out-degree is a measure of centrality that still focuses on a single individual, but the analysis is concerned with the outgoing interactions of the individual; the measure of out-degree centrality is how many times the focal individual interacts with others. A sociogram is a visualization with defined boundaries of connections in the network. For example, a sociogram which shows out-degree centrality points for Participant A would illustrate all outgoing connections Participant A made in the studied network. Unique capabilities Researchers employ social network analysis in the study of computer-supported collaborative learning in part due to the unique capabilities it offers. This particular method allows the study of interaction patterns within a networked learning community and can help illustrate the extent of the participants' interactions with the other members of the group. The graphics created using SNA tools provide visualizations of the connections among participants and the strategies used to communicate within the group. Some authors also suggest that SNA provides a method of easily analyzing changes in participatory patterns of members over time. A number of research studies have applied SNA to CSCL across a variety of contexts. The findings include the correlation between a network's density and the teacher's presence, a greater regard for the recommendations of "central" participants, infrequency of cross-gender interaction in a network, and the relatively small role played by an instructor in an asynchronous learning network. Other methods used alongside SNA Although many studies have demonstrated the value of social network analysis within the computer-supported collaborative learning field, researchers have suggested that SNA by itself is not enough for achieving a full understanding of CSCL. The complexity of the interaction processes and the myriad sources of data make it difficult for SNA to provide an in-depth analysis of CSCL. Researchers indicate that SNA needs to be complemented with other methods of analysis to form a more accurate picture of collaborative learning experiences.
A number of research studies have combined other types of analysis with SNA in the study of CSCL. This can be referred to as a multi-method approach or data triangulation, which will lead to an increase in evaluation reliability in CSCL studies. Qualitative method – The principles of qualitative case study research constitute a solid framework for the integration of SNA methods in the study of CSCL experiences. Ethnographic data such as student questionnaires and interviews and classroom non-participant observations Case studies: comprehensively study particular CSCL situations and relate findings to general schemes Content analysis: offers information about the content of the communication among members Quantitative method – This includes simple descriptive statistical analyses on occurrences to identify particular attitudes of group members who have not been able to be tracked via SNA in order to detect general tendencies. Computer log files: provide automatic data on how collaborative tools are used by learners Multidimensional scaling (MDS): charts similarities among actors, so that more similar input data is closer together Software tools: QUEST, SAMSA (System for Adjacency Matrix and Sociogram-based Analysis), and NUD*IST See also Actor-network theory Attention inequality Blockmodeling Community structure Complex network Digital humanities Dynamic network analysis Friendship paradox Individual mobility Influence-for-hire Mathematical sociology Metcalfe's law Netocracy Network-based diffusion analysis Network science Organizational patterns Small world phenomenon Social media analytics Social media intelligence Social media mining Social network Social network analysis software Social networking service Social software Social web Sociomapping Virtual collective consciousness References Further reading Introduction to Stochastic Actor-Based Models for Network Dynamics – Snijders et al. Center for Computational Analysis of Social and Organizational Systems (CASOS) at Carnegie Mellon NetLab at the University of Toronto, studies the intersection of social, communication, information and computing networks Program on Networked Governance, Harvard University Historical Dynamics in a time of Crisis: Late Byzantium, 1204–1453 (a discussion of social network analysis from the point of view of historical studies) Networks, Crowds, and Markets: Reasoning About a Highly Connected World (2010) by D. Easley & J. Kleinberg Introduction to Social Networks Methods (2005) by R. Hanneman & M. Riddle Social Network Analysis with Applications (2013) by I. McCulloh, H. Armstrong & A. Johnson External links International Network for Social Network Analysis Awesome Network Analysis – 200+ links to books, conferences, courses, journals, research groups, software, tutorials and more Netwiki – wiki page devoted to social networks; maintained at University of North Carolina at Chapel Hill Social networks Value (ethics) Systems theory Social systems Self-organization Community building Cultural economics Social information processing Mass media monitoring Surveillance Types of analytics Methods in sociology Internet culture
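The density and in-/out-degree measures from the Key terms section above can be made concrete with a small sketch, again using networkx; the directed "who replied to whom" graph is invented for illustration.

```python
import networkx as nx

# A toy CSCL discussion network: an edge u -> v means u replied to v's post.
D = nx.DiGraph()
D.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Tea"), ("Ben", "Tea"),
    ("Cho", "Tea"), ("Tea", "Ana"),
])

# Density: realized ties divided by all possible directed ties.
print(nx.density(D))

# In-degree: how often a participant is replied to (Tea is the focal point here);
# out-degree: how many replies a participant sends.
print(dict(D.in_degree()))
print(dict(D.out_degree()))
```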
Social network analysis
[ "Mathematics" ]
4,484
[ "Self-organization", "Dynamical systems" ]
325,736
https://en.wikipedia.org/wiki/Phenomenalism
In metaphysics, phenomenalism is the view that physical objects cannot justifiably be said to exist in themselves, but only as perceptual phenomena or sensory stimuli (e.g. redness, hardness, softness, sweetness, etc.) situated in time and in space. In particular, some forms of phenomenalism reduce all talk about physical objects in the external world to talk about bundles of sense data. History Phenomenalism is a radical form of empiricism. Its roots as an ontological view of the nature of existence can be traced back to George Berkeley and his subjective idealism, upon which David Hume further elaborated. John Stuart Mill had a theory of perception which is commonly referred to as classical phenomenalism. This differs from Berkeley's idealism in its account of how objects continue to exist when no one is perceiving them. Berkeley claimed that an omniscient God perceived all objects and that this was what kept them in existence, whereas Mill claimed that permanent possibilities of experience were sufficient for an object's existence. These permanent possibilities could be analysed into counterfactual conditionals, such as "if I were to have y-type sensations, then I would also have x-type sensations". As an epistemological theory about the possibility of knowledge of objects in the external world, however, the most accessible formulation of phenomenalism is perhaps to be found in the transcendental idealism of Immanuel Kant. According to Kant, space and time, which are the a priori forms and preconditions of all sensory experience, "refer to objects only to the extent that these are considered as phenomena, but do not represent the things in themselves". While Kant insisted that knowledge is limited to phenomena, he never denied or excluded the existence of objects which were not knowable by way of experience, the things-in-themselves or noumena, though his proof of noumena had many problems and is one of the most controversial aspects of his Critiques. Kant's "epistemological phenomenalism", as it has been called, is therefore quite distinct from Berkeley's earlier ontological version. In Berkeley's view, the so-called "things-in-themselves" do not exist except as subjectively perceived bundles of sensations which are guaranteed consistency and permanence because they are constantly perceived by the mind of God. Hence, while Berkeley holds that objects are merely bundles of sensations (see bundle theory), Kant holds (unlike other bundle theorists) that objects do not cease to exist when they are no longer perceived by some merely human subject or mind. In the late 19th century, an even more extreme form of phenomenalism was formulated by Ernst Mach, later developed and refined by Russell, Ayer and the logical positivists. Mach rejected the existence of God and also denied that phenomena were data experienced by the mind or consciousness of subjects. Instead, Mach held sensory phenomena to be "pure data" whose existence was to be considered anterior to any arbitrary distinction between mental and physical categories of phenomena. In this way, it was Mach who formulated the key thesis of phenomenalism, which separates it from bundle theories of objects: objects are logical constructions out of sense-data or ideas; whereas according to bundle theories, objects are made up of sets, or bundles, of actual ideas or perceptions. That is, according to bundle theory, to say that the pear before me exists is simply to say that certain properties (greenness, hardness, etc.) 
are being perceived at this moment. When these characteristics are no longer perceived or experienced by anyone, then the object (pear, in this case) no longer exists. Phenomenalism as formulated by Mach, in contrast, is the view that objects are logical constructions out of perceptual properties. On this view, to say there is a table in the other room when there is no one in that room to perceive it, is to say that if there were someone in that room, then that person would perceive the table. It is not the actual perception that counts, but the conditional possibility of perceiving. Logical positivism, a movement begun as a small circle which grew around the philosopher Moritz Schlick in Vienna, inspired many philosophers in the English speaking world from the 1930s through the 1950s. Important influences on their brand of empiricism included Ernst Mach — himself holding the Chair of Inductive Sciences at the University of Vienna, a position Schlick would later hold — and the Cambridge philosopher Bertrand Russell. The idea of some logical positivists, such as A. J. Ayer and Rudolf Carnap, was to apply phenomenalism in linguistic terms, enabling reliable discourse about physical objects, such as tables, in strict terms of either actual or possible sensory experiences. The 20th-century American philosopher Arthur Danto asserted that "a phenomenalist, believ[es] that whatever is finally meaningful can be expressed in terms of our own [sense] experience." He claimed that "The phenomenalist really is committed to the most radical kind of empiricism: For him reference to objects is always finally a reference to sense-experience ... ." To the phenomenalist, objects of any kind must be related to experience. "John Stuart Mill once spoke of physical objects as but the 'permanent possibility of experience' and this, by and large, is what the phenomenalist exploits: All we can mean, in talking about physical objects — or nonphysical objects, if there are any — is what experiences we would have in dealing with them ... ." However, phenomenalism is based on mental operations. These operations, themselves, are not known from sense experience. Such non-empirical, non-sensual operations are the "...nonempirical matters of space, time, and continuity that empiricism in all its forms and despite its strictures seems to require ... ." See for comparison Sensualism, to which phenomenalism is closely related. Criticisms C.I. Lewis had previously suggested that the physical claim "There is a doorknob in front of me" necessarily entails the sensory conditional "If I should seem to see a doorknob and if I should seem to myself to be initiating a grasping motion, then in all probability the sensation of contacting a doorknob should follow". Roderick Firth formulated another objection in 1950, stemming from perceptual relativity: White wallpaper looks white under white light and red under red light, etc. Any possible course of experience resulting from a possible course of action will apparently underdetermine our surroundings: it would determine, for example, that there is either white wallpaper under red light or red wallpaper under white light, and so on. Another criticism of phenomenalism comes from truthmaker theory. Truthmaker theorists hold that the truth depends on reality. In the terms of truthmaker theory: a truthbearer (e.g. a proposition) is true because of the existence of its truthmaker (e.g. a fact). 
Phenomenalists have been accused of violating this principle and thereby engaging in "ontological cheating": of positing truths without being able to account for the truthmakers of these truths. The criticism is usually directed at the phenomenalist account of material objects. The phenomenalist faces the problem of how to account for the existence of unperceived material objects. A well-known solution to this problem comes from John Stuart Mill. He claimed that we can account for unperceived objects in terms of counterfactual conditionals: It is true that valuables locked in a safe remain in existence, despite being unperceived, because if someone were to look inside then this person would have a corresponding sensory impression. Truthmaker theorists may object that this still leaves open what the truthmaker for this counterfactual conditional is, since it is unclear how such a truthmaker could be found within the phenomenalist ontology. Notable proponents Johannes Nikolaus Tetens John Foster See also References Bibliography Fenomenismo in L'Enciclopedia Garzanti di Filosofia (eds.) Gianni Vattimo and Gaetano Chiurazzi. Third Edition. Garzanti. Milan, 2004. Berlin, Isaiah. The Refutation of Phenomenalism. The Isaiah Berlin Virtual Library. 2004. Bolender, John. Factual Phenomenalism: a Supervenience Theory, in SORITES Issue #09. April 1998. pp. 16–31. External links Theory of mind Epistemological theories Analytic philosophy Empiricism Kantianism Idealism Metaphysics of mind Logical positivism
Phenomenalism
[ "Mathematics" ]
1,776
[ "Mathematical logic", "Logical positivism" ]
325,772
https://en.wikipedia.org/wiki/Nova%20%28American%20TV%20program%29
Nova (stylized as NOVΛ) is an American popular science television program produced by WGBH in Boston, Massachusetts, since 1974. It is broadcast on PBS in the United States, and in more than 100 other countries. The program has won many major television awards. Nova often includes interviews with scientists doing research in the subject areas covered and occasionally includes footage of a particular discovery. Some episodes have focused on the history of science. Examples of topics covered include the following: Colditz Castle, the Drake equation, elementary particles, the 1980 eruption of Mount St. Helens, Fermat's Last Theorem, the AIDS epidemic, global warming, moissanite, Project Jennifer, storm chasing, Unterseeboot 869, Vinland, Tarim mummies, and the COVID-19 pandemic. The Nova programs have been praised for their pacing, writing, and editing. Websites that accompany the segments have also won awards. Episodes History Nova was first aired on March 3, 1974. The show was created by Michael Ambrosino, inspired by the BBC 2 television series Horizon, which Ambrosino had seen while working in the UK. In the early years, many Nova episodes were either co-productions with the BBC Horizon team, or other documentaries originating outside of the United States, with the narration re-voiced in American English. Of the first 50 programs, only 19 were original WGBH productions, and the first Nova episode, "The Making of a Natural History Film", was originally an episode of Horizon that premiered in 1972. The practice continues to this day. All the producers and associate producers for the original Nova teams came from either England (with experience on the Horizon series), Los Angeles or New York. Ambrosino was succeeded as executive producer by John Angier, John Mansfield, and Paula S. Apsell, acting as senior executive producer. Reception Rob Owen of Pittsburgh Post-Gazette wrote, "Fascinating and gripping." Alex Strachan of Calgary Herald wrote, "TV for people who don't normally watch TV." Lynn Elber of the Associated Press wrote of the episode "The Fabric of the Cosmos", "Mind-blowing TV." The Futon Critic wrote of the episode "Looking for Life on Mars", "Astounding [and] exhilarating." Awards Nova has been recognized with multiple Peabody Awards and Emmy Awards. The program won a Peabody in 1974, which cited it as "an imaginative series of science adventures," with a "versatility rarely found in television." Subsequent Peabodys went to specific episodes: "The Miracle of Life" (1983) was cited as a "fascinating and informative documentary of the human reproductive process," which used "revolutionary microphotographic techniques." This episode also won an Emmy. "Spy Machines" (1987) was cited for "neatly recount[ing] the key events of the Cold War and look[ing] into the future of American/Soviet SDI competition." "The Elegant Universe" (2003) was lauded for exploring "science's most elaborate and ambitious theory, the string theory" while making "the abstract concrete, the complicated clear, and the improbable understandable" by "blending factual story telling with animation, special effects, and trick photography." The episode also won an Emmy for editing. The National Academy of Television Arts and Sciences (responsible for documentary Emmys) recognized the program with awards in 1978, 1981, 1983, and 1989. Julia Cort won an Emmy in 2001 for writing "Life's Greatest Miracle." 
Emmys were also awarded for the following episodes: 1982 "Here's Looking at You, Kid" 1983 "The Miracle of Life" (also won a Peabody) 1985 "AIDS: Chapter One", "Acid Rain: New Bad News" 1992 "Suicide Mission to Chernobyl", "The Russian Right Stuff" 1994 "Secret of the Wild Child" 1995 "Siamese Twins", "Secret of the Wild Child" 1999 "Decoding Nazi Secrets" 2001 "Bioterror" 2002 "Galileo's Battle for the Heavens", "Mountain of Ice", "Shackleton's Voyage of Endurance", "Why the Towers Fell" 2003 "Battle of the X-planes", "The Elegant Universe" (also won a Peabody) 2005 "Rx for Survival: A Global Health Challenge" In 1998, the National Science Board of the National Science Foundation awarded Nova its first-ever Public Service Award. References External links 1974 American television series debuts 1970s American documentary television series 1980s American documentary television series 1990s American documentary television series 2000s American documentary television series 2010s American documentary television series 2020s American documentary television series American educational television series Emmy Award–winning programs American English-language television shows PBS original programming Peabody Award–winning television programs Science education television series Physics education Television series by WGBH Documentary television shows about evolution
Nova (American TV program)
[ "Physics" ]
1,003
[ "Applied and interdisciplinary physics", "Physics education" ]
325,789
https://en.wikipedia.org/wiki/Architectural%20state
Architectural state is the collection of information in a computer system that defines the state of a program during execution. Architectural state includes main memory, architectural registers, and the program counter. Architectural state is defined by the instruction set architecture and can be manipulated by the programmer using instructions. A core dump is a file recording the architectural state of a computer program at some point in time, such as when it has crashed. Examples of architectural state include: Main Memory (Primary storage) Control registers Instruction flag registers (such as EFLAGS in x86) Interrupt mask registers Memory management unit registers Status registers General purpose registers (such as AX, BX, CX, DX, etc. in x86) Address registers Counter registers Index registers Stack registers String registers Architectural state is not microarchitectural state. Microarchitectural state is hidden machine state used for implementing the microarchitecture. Examples of microarchitectural state include pipeline registers, cache tags, and branch predictor state. While microarchitectural state can change to suit the needs of each processor implementation in a processor family, binary compatibility among processors in a processor family requires a common architectural state. Architectural state naturally does not include stateless elements of a computer such as buses and computation units (e.g., the ALU). References Central processing unit
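As a rough, hypothetical illustration of the distinction drawn above, the sketch below models both kinds of state for a toy machine; the class names, field names, and sizes are invented for the example and do not correspond to any real instruction set.

```python
class ArchitecturalState:
    """State an ISA defines and a core dump would capture (toy machine)."""
    def __init__(self, num_regs=16, mem_size=1024):
        self.pc = 0                                   # program counter
        self.regs = [0] * num_regs                    # general-purpose registers
        self.flags = {"zero": False, "carry": False}  # status flags
        self.memory = bytearray(mem_size)             # main memory


class MicroarchitecturalState:
    """Hidden implementation state; it may differ between chips that must
    nevertheless expose the same architectural state for binary
    compatibility."""
    def __init__(self):
        self.pipeline_latches = []                    # inter-stage pipeline registers
        self.cache_tags = {}                          # cache tag array
        self.branch_history = 0                       # branch predictor state
```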
Architectural state
[ "Technology" ]
279
[ "Computing stubs", "Computer hardware stubs" ]
325,802
https://en.wikipedia.org/wiki/Glossary%20of%20graph%20theory
This is a glossary of graph theory. Graph theory is the study of graphs, systems of nodes or vertices connected in pairs by lines or edges. Symbols A B C D E F G H I J K L M N O P Q R S T U V W See also List of graph theory topics Gallery of named graphs Graph algorithms Glossary of areas of mathematics References Graph theory Graph theory Wikipedia glossaries using description lists
Glossary of graph theory
[ "Mathematics" ]
109
[ "Discrete mathematics", "Mathematical relations", "Graph theory", "Combinatorics" ]
325,806
https://en.wikipedia.org/wiki/Graph%20%28discrete%20mathematics%29
In discrete mathematics, particularly in graph theory, a graph is a structure consisting of a set of objects where some pairs of the objects are in some sense "related". The objects are represented by abstractions called vertices (also called nodes or points) and each of the related pairs of vertices is called an edge (also called link or line). Typically, a graph is depicted in diagrammatic form as a set of dots or circles for the vertices, joined by lines or curves for the edges. The edges may be directed or undirected. For example, if the vertices represent people at a party, and there is an edge between two people if they shake hands, then this graph is undirected because any person A can shake hands with a person B only if B also shakes hands with A. In contrast, if an edge from a person A to a person B means that A owes money to B, then this graph is directed, because owing money is not necessarily reciprocated. Graphs are the basic subject studied by graph theory. The word "graph" was first used in this sense by J. J. Sylvester in 1878 due to a direct relation between mathematics and chemical structure (what he called a chemico-graphical image). Definitions Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures. Graph A graph (sometimes called an undirected graph to distinguish it from a directed graph, or a simple graph to distinguish it from a multigraph) is a pair G = (V, E), where V is a set whose elements are called vertices (singular: vertex), and E is a set of unordered pairs {u, v} of vertices, whose elements are called edges (sometimes links or lines). The vertices u and v of an edge {u, v} are called the edge's endpoints. The edge is said to join u and v and to be incident on them. A vertex may belong to no edge, in which case it is not joined to any other vertex and is called isolated. When an edge {u, v} exists, the vertices u and v are called adjacent. A multigraph is a generalization that allows multiple edges to have the same pair of endpoints. In some texts, multigraphs are simply called graphs. Sometimes, graphs are allowed to contain loops, which are edges that join a vertex to itself. To allow loops, the pairs of vertices in E must be allowed to have the same node twice. Such generalized graphs are called graphs with loops or simply graphs when it is clear from the context that loops are allowed. Generally, the vertex set V is taken to be finite (which implies that the edge set E is also finite). Sometimes infinite graphs are considered, but they are usually viewed as a special kind of binary relation, because most results on finite graphs either do not extend to the infinite case or need a rather different proof. An empty graph is a graph that has an empty set of vertices (and thus an empty set of edges). The order of a graph is its number of vertices, usually denoted by n. The size of a graph is its number of edges, typically denoted by m. However, in some contexts, such as for expressing the computational complexity of algorithms, the term size is used for the quantity n + m (otherwise, a non-empty graph could have size 0). The degree or valency of a vertex is the number of edges that are incident to it; for graphs with loops, a loop is counted twice. In a graph of order n, the maximum degree of each vertex is n − 1 (or n + 1 if loops are allowed, because a loop contributes 2 to the degree), and the maximum number of edges is n(n − 1)/2 (or n(n + 1)/2 if loops are allowed). The edges of a graph define a symmetric relation on the vertices, called the adjacency relation. 
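As a minimal, hedged sketch of these definitions in Python (the vertex labels and helper names are invented for this example, not taken from any library), a simple undirected graph can be written directly as a pair of sets:

```python
# A simple undirected graph G = (V, E): V is a set of vertices and E is a
# set of unordered pairs of vertices, modelled here as frozensets.
V = {1, 2, 3, 4, 5, 6}
E = {frozenset(p) for p in [(1, 2), (1, 5), (2, 3), (2, 5), (3, 4), (4, 5), (4, 6)]}

order, size = len(V), len(E)       # order n = 6 vertices, size m = 7 edges

def degree(v):
    """Number of edges incident to v (no loops in a simple graph)."""
    return sum(1 for e in E if v in e)

# Each edge has two endpoints, so the degrees sum to twice the size.
assert sum(degree(v) for v in V) == 2 * size
```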
Specifically, two vertices x and y are adjacent if {x, y} is an edge. A graph is fully determined by its adjacency matrix A, which is an n × n square matrix, with the entry A(i, j) specifying the number of connections from vertex i to vertex j. For a simple graph, A(i, j) is either 0, indicating disconnection, or 1, indicating connection; moreover A(i, i) = 0 because an edge in a simple graph cannot start and end at the same vertex. Graphs with self-loops will be characterized by some or all A(i, i) being equal to a positive integer, and multigraphs (with multiple edges between vertices) will be characterized by some or all A(i, j) being equal to a positive integer. Undirected graphs will have a symmetric adjacency matrix (meaning A(i, j) = A(j, i)). Directed graph A directed graph or digraph is a graph in which edges have orientations. In one restricted but very common sense of the term, a directed graph is a pair G = (V, E) comprising: V, a set of vertices (also called nodes or points); E, a set of edges (also called directed edges, directed links, directed lines, arrows, or arcs), which are ordered pairs of distinct vertices: E ⊆ {(x, y) | x, y ∈ V and x ≠ y}. To avoid ambiguity, this type of object may be called precisely a directed simple graph. In the edge (x, y) directed from x to y, the vertices x and y are called the endpoints of the edge, x the tail of the edge and y the head of the edge. The edge is said to join x and y and to be incident on x and on y. A vertex may exist in a graph and not belong to an edge. The edge (y, x) is called the inverted edge of (x, y). Multiple edges, not allowed under the definition above, are two or more edges with both the same tail and the same head. In one more general sense of the term allowing multiple edges, a directed graph is sometimes defined to be an ordered triple G = (V, E, φ) comprising: V, a set of vertices (also called nodes or points); E, a set of edges (also called directed edges, directed links, directed lines, arrows or arcs); φ, an incidence function mapping every edge to an ordered pair of vertices (that is, an edge is associated with two distinct vertices): φ: E → {(x, y) | x, y ∈ V and x ≠ y}. To avoid ambiguity, this type of object may be called precisely a directed multigraph. A loop is an edge that joins a vertex to itself. Directed graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x to itself is the edge (x, x) (for a directed simple graph) or is incident on (x, x) (for a directed multigraph), which is not in {(x, y) | x, y ∈ V and x ≠ y}. So to allow loops the definitions must be expanded. For directed simple graphs, the definition of E should be modified to E ⊆ {(x, y) | x, y ∈ V}. For directed multigraphs, the definition of φ should be modified to φ: E → {(x, y) | x, y ∈ V}. To avoid ambiguity, these types of objects may be called precisely a directed simple graph permitting loops and a directed multigraph permitting loops (or a quiver) respectively. The edges of a directed simple graph permitting loops G form a homogeneous relation ~ on the vertices of G that is called the adjacency relation of G. Specifically, for each edge (x, y), its endpoints x and y are said to be adjacent to one another, which is denoted x ~ y. Mixed graph A mixed graph is a graph in which some edges may be directed and some may be undirected. It is an ordered triple G = (V, E, A) for a mixed simple graph and G = (V, E, A, φE, φA) for a mixed multigraph, with V, E (the undirected edges), A (the directed edges), and the incidence functions φE and φA defined as above. Directed and undirected graphs are special cases. Weighted graph A weighted graph or a network is a graph in which a number (the weight) is assigned to each edge. Such weights might represent for example costs, lengths or capacities, depending on the problem at hand. 
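Continuing the sketch from the previous example (again with invented names, and restated so it is self-contained), the adjacency matrix of the same small graph can be built and checked for the symmetry and zero diagonal described above; the final line shows one common way a weighted graph attaches a number to each edge:

```python
# Same example graph as before.
V = {1, 2, 3, 4, 5, 6}
E = {frozenset(p) for p in [(1, 2), (1, 5), (2, 3), (2, 5), (3, 4), (4, 5), (4, 6)]}

verts = sorted(V)                       # fix an ordering of the vertices
index = {v: i for i, v in enumerate(verts)}
n = len(verts)

# A(i, j) = 1 iff {i, j} is an edge; 0 otherwise.
A = [[0] * n for _ in range(n)]
for e in E:
    u, v = tuple(e)
    A[index[u]][index[v]] = A[index[v]][index[u]] = 1   # undirected: symmetric

assert all(A[i][j] == A[j][i] for i in range(n) for j in range(n))
assert all(A[i][i] == 0 for i in range(n))              # simple graph: no loops

# A weighted graph pairs each edge with a number such as a length or cost.
weight = {e: 1.0 for e in E}
```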
Such graphs arise in many contexts, for example in shortest path problems and in routing problems such as the traveling salesman problem. Types of graphs Oriented graph One definition of an oriented graph is that it is a directed graph in which at most one of (x, y) and (y, x) may be edges of the graph. That is, it is a directed graph that can be formed as an orientation of an undirected (simple) graph. Some authors use "oriented graph" to mean the same as "directed graph". Some authors use "oriented graph" to mean any orientation of a given undirected graph or multigraph. Regular graph A regular graph is a graph in which each vertex has the same number of neighbours, i.e., every vertex has the same degree. A regular graph with vertices of degree k is called a k‑regular graph or regular graph of degree k. Complete graph A complete graph is a graph in which each pair of vertices is joined by an edge. A complete graph contains all possible edges. Finite graph A finite graph is a graph in which the vertex set and the edge set are finite sets. Otherwise, it is called an infinite graph. Most commonly in graph theory it is implied that the graphs discussed are finite. If the graphs are infinite, that is usually specifically stated. Connected graph In an undirected graph, an unordered pair of vertices {x, y} is called connected if a path leads from x to y. Otherwise, the unordered pair is called disconnected. A connected graph is an undirected graph in which every unordered pair of vertices in the graph is connected. Otherwise, it is called a disconnected graph. In a directed graph, an ordered pair of vertices (x, y) is called strongly connected if a directed path leads from x to y. Otherwise, the ordered pair is called weakly connected if an undirected path leads from x to y after replacing all of its directed edges with undirected edges. Otherwise, the ordered pair is called disconnected. A strongly connected graph is a directed graph in which every ordered pair of vertices in the graph is strongly connected. Otherwise, it is called a weakly connected graph if every ordered pair of vertices in the graph is weakly connected. Otherwise it is called a disconnected graph. A k-vertex-connected graph or k-edge-connected graph is a graph in which no set of k − 1 vertices (respectively, edges) exists that, when removed, disconnects the graph. A k-vertex-connected graph is often called simply a k-connected graph. Bipartite graph A bipartite graph is a simple graph in which the vertex set can be partitioned into two sets, W and X, so that no two vertices in W share a common edge and no two vertices in X share a common edge. Alternatively, it is a graph with a chromatic number of 2; a 2-colouring test along these lines is sketched below. In a complete bipartite graph, the vertex set is the union of two disjoint sets, W and X, so that every vertex in W is adjacent to every vertex in X but there are no edges within W or X. Path graph A path graph or linear graph of order n ≥ 2 is a graph in which the vertices can be listed in an order v1, v2, …, vn such that the edges are the {vi, vi+1} where i = 1, 2, …, n − 1. Path graphs can be characterized as connected graphs in which the degree of all but two vertices is 2 and the degree of the two remaining vertices is 1. If a path graph occurs as a subgraph of another graph, it is a path in that graph. Planar graph A planar graph is a graph whose vertices and edges can be drawn in a plane such that no two of the edges intersect. 
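Returning to the bipartite graphs defined above: since a bipartite graph is exactly one with chromatic number at most 2, bipartiteness can be tested by attempting a 2-colouring with breadth-first search. A hedged sketch follows; the function and variable names are illustrative, not from any standard library.

```python
from collections import deque

def is_bipartite(vertices, edges):
    """Return True iff the graph 2-colours, i.e. splits into parts W and X
    with every edge running between the parts."""
    nbrs = {v: set() for v in vertices}
    for e in edges:
        u, v = tuple(e)
        nbrs[u].add(v)
        nbrs[v].add(u)
    side = {}                                  # vertex -> 0 or 1
    for start in vertices:                     # cover every component
        if start in side:
            continue
        side[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in nbrs[u]:
                if w not in side:
                    side[w] = 1 - side[u]      # put neighbour on the other side
                    queue.append(w)
                elif side[w] == side[u]:
                    return False               # an odd cycle was found
    return True

# A path on four vertices is bipartite; a triangle is not.
P = {frozenset(p) for p in [(1, 2), (2, 3), (3, 4)]}
T = {frozenset(p) for p in [(1, 2), (2, 3), (1, 3)]}
assert is_bipartite({1, 2, 3, 4}, P) and not is_bipartite({1, 2, 3}, T)
```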
Cycle graph A cycle graph or circular graph of order n ≥ 3 is a graph in which the vertices can be listed in an order v1, v2, …, vn such that the edges are the {vi, vi+1} where i = 1, 2, …, n − 1, plus the edge {vn, v1}. Cycle graphs can be characterized as connected graphs in which the degree of all vertices is 2. If a cycle graph occurs as a subgraph of another graph, it is a cycle or circuit in that graph. Tree A tree is an undirected graph in which any two vertices are connected by exactly one path, or equivalently a connected acyclic undirected graph. A forest is an undirected graph in which any two vertices are connected by at most one path, or equivalently an acyclic undirected graph, or equivalently a disjoint union of trees. Polytree A polytree (or directed tree or oriented tree or singly connected network) is a directed acyclic graph (DAG) whose underlying undirected graph is a tree. A polyforest (or directed forest or oriented forest) is a directed acyclic graph whose underlying undirected graph is a forest. Advanced classes More advanced kinds of graphs are: Petersen graph and its generalizations; perfect graphs; cographs; chordal graphs; other graphs with large automorphism groups: vertex-transitive, arc-transitive, and distance-transitive graphs; strongly regular graphs and their generalizations, distance-regular graphs. Properties of graphs Two edges of a graph are called adjacent if they share a common vertex. Two edges of a directed graph are called consecutive if the head of the first one is the tail of the second one. Similarly, two vertices are called adjacent if they share a common edge (consecutive if the first one is the tail and the second one is the head of an edge), in which case the common edge is said to join the two vertices. An edge and a vertex on that edge are called incident. The graph with only one vertex and no edges is called the trivial graph. A graph with only vertices and no edges is known as an edgeless graph. The graph with no vertices and no edges is sometimes called the null graph or empty graph, but the terminology is not consistent and not all mathematicians allow this object. Normally, the vertices of a graph, by their nature as elements of a set, are distinguishable. This kind of graph may be called vertex-labeled. However, for many questions it is better to treat vertices as indistinguishable. (Of course, the vertices may be still distinguishable by the properties of the graph itself, e.g., by the numbers of incident edges.) The same remarks apply to edges, so graphs with labeled edges are called edge-labeled. Graphs with labels attached to edges or vertices are more generally designated as labeled. Consequently, graphs in which vertices are indistinguishable and edges are indistinguishable are called unlabeled. (In the literature, the term labeled may apply to other kinds of labeling, besides that which serves only to distinguish different vertices or edges.) The category of all graphs is the comma category Set ↓ D where D: Set → Set is the functor taking a set s to s × s. Examples The diagram is a schematic representation of the graph with vertices and edges. In computer science, directed graphs are used to represent knowledge (e.g., conceptual graph), finite-state machines, and many other discrete structures. A binary relation R on a set X defines a directed graph. An element x of X is a direct predecessor of an element y of X if and only if xRy. A directed graph can model information networks such as Twitter, with one user following another. 
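To make the last two examples concrete, the hedged sketch below builds a directed graph from a hypothetical "follows" relation; the user names and helper names are invented for illustration.

```python
# A binary relation R on a set X, read as "x follows y", defines a
# directed graph: vertices are the elements of X, and each related
# pair (x, y) becomes a directed edge from x to y.
X = {"ana", "bo", "cam"}
R = {("ana", "bo"), ("bo", "cam"), ("cam", "ana"), ("ana", "cam")}

successors = {x: {y for (a, y) in R if a == x} for x in X}
predecessors = {x: {a for (a, y) in R if y == x} for x in X}

# Out-degree counts who x follows; in-degree counts x's followers.
out_degree = {x: len(successors[x]) for x in X}
in_degree = {x: len(predecessors[x]) for x in X}

assert out_degree["ana"] == 2          # ana follows bo and cam
assert in_degree["cam"] == 2           # cam is followed by ana and bo
```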
Particularly regular examples of directed graphs are given by the Cayley graphs of finitely-generated groups, as well as Schreier coset graphs In category theory, every small category has an underlying directed multigraph whose vertices are the objects of the category, and whose edges are the arrows of the category. In the language of category theory, one says that there is a forgetful functor from the category of small categories to the category of quivers. Graph operations There are several operations that produce new graphs from initial ones, which might be classified into the following categories: unary operations, which create a new graph from an initial one, such as: edge contraction, line graph, dual graph, complement graph, graph rewriting; binary operations, which create a new graph from two initial ones, such as: disjoint union of graphs, cartesian product of graphs, tensor product of graphs, strong product of graphs, lexicographic product of graphs, series–parallel graphs. Generalizations In a hypergraph, an edge can join any positive number of vertices. An undirected graph can be seen as a simplicial complex consisting of 1-simplices (the edges) and 0-simplices (the vertices). As such, complexes are generalizations of graphs since they allow for higher-dimensional simplices. Every graph gives rise to a matroid. In model theory, a graph is just a structure. But in that case, there is no limitation on the number of edges: it can be any cardinal number, see continuous graph. In computational biology, power graph analysis introduces power graphs as an alternative representation of undirected graphs. In geographic information systems, geometric networks are closely modeled after graphs, and borrow many concepts from graph theory to perform spatial analysis on road networks or utility grids. See also Conceptual graph Graph (abstract data type) Graph database Graph drawing List of graph theory topics List of publications in graph theory Network theory Notes References Further reading External links Graph theory
Graph (discrete mathematics)
[ "Mathematics" ]
3,427
[ "Discrete mathematics", "Mathematical relations", "Graph theory", "Combinatorics" ]
325,819
https://en.wikipedia.org/wiki/Cuban%20solenodon
The Cuban solenodon or almiquí (Atopogale cubana) is a small, furry, shrew-like mammal endemic to mountainous forests in Cuba. It is the only species in the genus Atopogale. An elusive animal, it lives in burrows and is only active at night, when it uses its unusual toxic saliva to feed on insects. The solenodons (family Solenodontidae), native to the Caribbean, are one of only a few mammals that are venomous. The Cuban solenodon is endangered and was once considered extinct due to its rarity. It and the Hispaniolan solenodon (Solenodon paradoxus) are the only surviving solenodon species; the others are extinct. Taxonomy Although formerly classified in the genus Solenodon, phylogenetic evidence supports it being in its own genus, Atopogale. Rediscovery Since its discovery in 1861 by the German naturalist Wilhelm Peters, only 36 have ever been caught. By 1970, some thought the Cuban solenodon had become extinct, since no specimens had been found since 1890. Three were captured in 1974 and 1975, and subsequent surveys showed it still occurred in many places in central and western Oriente Province, at the eastern end of Cuba; however, it is rare everywhere. Prior to 2003, the most recent sighting was in 1999, mainly because it is a nocturnal burrower, living underground, and thus is very rarely seen. The Cuban solenodon found in 2003 was named Alejandrito. It had a mass of and was healthy. It was released back into the wild after two days of scientific study were completed. Appearance With small eyes and dark brown to black hair, the Cuban solenodon is sometimes compared to a shrew, although it most closely resembles members of the family Tenrecidae of Madagascar. It is long from nose to tail-tip and resembles a large brown rat with an extremely elongated snout and a long, naked, scaly tail. Status Willy Ley wrote in 1964 that the Cuban solenodon was, if not extinct, among "the rarest animals on earth". It was declared extinct in 1970, but was rediscovered in 1974. Since 1982, it has been listed as an endangered species, in part because it bears only a single litter of one to three young in a year (leading to a long population recovery time), and because of predation by invasive species, such as small Indian mongooses, black rats, feral cats, and feral dogs. The species is also thought to be threatened by deforestation as well as habitat degradation due to logging and mining. However, there is very little conservation attention given to the species. Distribution and habitat It is endemic to mountainous forests in the Nipe-Sagua-Baracoa mountain range of eastern Cuba, in the provinces of Holguín, Guantánamo, and Santiago de Cuba, though subfossil evidence showed it once occurred throughout the island. It is nocturnal and travels at night along the forest floor, looking for insects and small animals on which to feed. Behavior This species has a varied diet. At night, they search the forest floor litter for insects and other invertebrates, fungi, and roots. They climb well and feed on fruits, berries, and buds, but have more predatory habits, too. With venom from modified salivary glands in the lower jaw, they can kill lizards, frogs, small birds, or even rodents. They seem not to be immune to the venom of their own kind, and cage mates have been reported dying after fights. Mating Cuban solenodons only meet to mate, and the male practices polygyny (i.e. mates with multiple females). The males and females are not found together unless they are mating. The pair will meet up, mate, then separate. 
The males do not participate in raising any of the young. References External links EDGE of Existence (Cuban solenodon): Saving the World's most Evolutionarily Distinct and Globally Endangered (EDGE) species Solenodon Mammals of Cuba EDGE species Mammals of the Caribbean Venomous mammals Mammals described in 1861 Endemic fauna of Cuba Taxa named by Wilhelm Peters
Cuban solenodon
[ "Biology" ]
854
[ "EDGE species", "Biodiversity" ]
325,829
https://en.wikipedia.org/wiki/Green%20Man
The Green Man, also known as a foliate head, is a motif in architecture and art, of a face made of, or completely surrounded by, foliage, which normally spreads out from the centre of the face. Apart from a purely decorative function, the Green Man is primarily interpreted as a symbol of rebirth, representing the cycle of new growth that occurs every spring. The Green Man motif has many variations. Branches or vines may sprout from the mouth, nostrils, or other parts of the face, and these shoots may bear flowers or fruit. Found in many cultures from many ages around the world, the Green Man is often related to natural vegetation deities. Often used as decorative architectural ornaments, where they are a form of mascaron or ornamental head, Green Men are frequently found in architectural sculpture on both secular and ecclesiastical buildings in the Western tradition. In churches in England, the image was used to illustrate a popular sermon describing the mystical origins of the cross of Jesus. "Green Man" type foliate heads first appeared in England during the early 12th century deriving from those of France, and were especially popular in the Gothic architecture of the 13th to 15th centuries. The idea that the Green Man motif represents a pagan mythological figure, as proposed by Lady Raglan in 1939, despite its popularity with the lay public, is not supported by evidence. Types Usually referred to in art history as foliate heads or foliate masks, representations of the Green Man take many forms, but most just show a "mask" or frontal depiction of a face, which in architecture is usually in relief. The simplest depict a man's face peering out of dense foliage. Some may have leaves for hair, perhaps with a leafy beard. Often leaves or leafy shoots are shown growing from his open mouth and sometimes even from the nose and eyes as well. In the most abstract examples, the carving at first glance appears to be merely stylised foliage, with the facial element only becoming apparent on closer examination. The face is almost always male; green women are rare. Lady Raglan coined the term "Green Man" for this type of architectural feature in her 1939 article The Green Man in Church Architecture in The Folklore Journal. It is thought that her interest stemmed from carvings at St. Jerome's Church in Llangwm, Monmouthshire. The Green Man appears in many forms, with the three most common types categorized as: the Foliate Head: completely covered in green leaves the Disgorging Head: spews vegetation from its mouth the Bloodsucker Head: sprouts vegetation from all facial orifices (e.g. tear ducts, nostrils, mouth, and ears) History In terms of formalism, art historians see a connection with the masks in Iron Age Celtic art, where faces emerge from stylized vegetal ornament in the "Plastic style" metalwork of La Tène art. Since there are so few survivals, and almost none in wood, the lack of a continuous series of examples is not a fatal objection to such a continuity. The Oxford Dictionary of English Folklore suggests that they ultimately have their origins in late Roman art from leaf masks used to represent gods and mythological figures. A character superficially similar to the Green Man, in the form of a partly foliate mask surrounded by Bacchic figures, appears at the centre of the 4th-century silver salver in the Mildenhall Treasure, found at a Roman villa site in Suffolk, England; the mask is generally agreed to represent Neptune or Oceanus and the foliation is of seaweed. 
In his lectures at Gresham College, historian and professor Ronald Hutton traces the green man to India, stating "the component parts of Lady Raglan's construct of the Green Man were dismantled. The medieval foliate heads were studied by Kathleen Basford in 1978 and Mercia MacDermott in 2003. They were revealed to have been a motif originally developed in India, which travelled through the medieval Arab empire to Christian Europe. There it became a decoration for monks’ manuscripts, from which it spread to churches." A late 4th-century example of a green man disgorging vegetation from his mouth is at St. Abre, in St. Hilaire-le-grand, France. 11th century Romanesque Templar churches in Jerusalem have Romanesque foliate heads. Harding tentatively suggested that the symbol may have originated in Asia Minor and been brought to Europe by travelling stone carvers. The tradition of the Green Man carved into Christian churches is found across Europe, including examples such as the Seven Green Men of Nicosia carved into the facade of the thirteenth century St Nicholas Church in Cyprus. The motif fitted very easily into the developing use of vegetal architectural sculpture in Romanesque and Gothic architecture in Europe. Later foliate heads in churches may have reflected the legends around Seth, the son of Adam, according to which he plants seeds in his dead father's mouth as he lies in his grave. The tree that grew from them became the tree of the true cross of the crucifixion. This tale was in The Golden Legend of Jacobus de Voragine, a very popular thirteenth century compilation of Christian religious stories, from which the subjects of church sermons were often taken, especially after 1483, when William Caxton printed an English translation of the Golden Legend. According to the Christian author Stephen Miller, author of "The Green Man in Medieval England: Christian Shoots from Pagan Roots" (2022), "It is a Christian/Judaic-derived motif relating to the legends and medieval hagiographies of the Quest of Seth – the three twigs/seeds/kernels planted below the tongue of post-fall Adam by his son Seth (provided by the angel of mercy responsible for guarding Eden) shoot forth, bringing new life to humankind". This notion was first proposed by James Coulter (2006). From the Renaissance onward, elaborate variations on the Green Man theme, often with animal heads rather than human faces, appear in many media other than carvings (including manuscripts, metalwork, bookplates, and stained glass). They seem to have been used for purely decorative effect rather than reflecting any deeply held belief. Modern history In Britain, the image of the Green Man enjoyed a revival in the 19th century, becoming popular with architects during the Gothic revival and the Arts and Crafts era, when it appeared as a decorative motif in and on many buildings, both religious and secular. American architects took up the motif around the same time. Many variations can be found in Neo-gothic Victorian architecture. He was popular amongst Australian stonemasons and can be found on many secular and sacred buildings, including an example on Broadway, Sydney. In 1887 a Swiss engraver, Numa Guyot, created a bookplate depicting a Green Man in exquisite detail. In April 2023, a Green Man's head was depicted on the invitation for the Coronation of Charles III and Camilla, designed by heraldic artist and manuscript illuminator Andrew Jamieson. 
According to the official royal website: "Central to the design is the motif of the Green Man, an ancient figure from British folklore, symbolic of spring and rebirth, to celebrate the new reign. The shape of the Green Man, crowned in natural foliage, is formed of leaves of oak, ivy, and hawthorn, and the emblematic flowers of the United Kingdom." The design alluded to "the nature worshipper in King Charles" but polarized the public. Indeed, as the medieval art historian Cassandra Harrington pointed out, although vegetal figures were abundant throughout the medieval and early modern period, the foliate head motif is not "an ancient figure from British folklore", as the Royal Household has proclaimed, but a European import. In folklore Citations Sources cited Sandars, Nancy K., Prehistoric Art in Europe, Penguin (Pelican, now Yale, History of Art), 1968 (nb 1st edn.) Further reading Amis, Kingsley. The Green Man, Vintage, London (2004) (Novel) Anderson, William. Green Man: The Archetype of our Oneness with the Earth, HarperCollins (1990) Basford, Kathleen. The Green Man, D.S. Brewer (2004) (The first monograph on the subject, now reprinted in paperback) Beer, Robert. The Encyclopedia of Tibetan Symbols and Motifs, Shambhala (1999) Cheetham, Tom. Green Man, Earth Angel: The Prophetic Tradition and the Battle for the Soul of the World, SUNY Press (2004) Doel, Fran and Doel, Geoff. The Green Man in Britain, Tempus Publishing Ltd (May 2001) Harding, Mike. A Little Book of the Green Man, Aurium Press, London (1998) Hicks, Clive. The Green Man: A Field Guide, Compass Books (August 2000) MacDermott, Mercia. Explore Green Men, Explore Books, Heart of Albion Press (September 2003) Matthews, John. The Quest for the Green Man, Godsfield Press Ltd (May 2004) Neasham, Mary. The Spirit of the Green Man, Green Magic (December 2003) Varner, Gary R. The Mythic Forest, the Green Man and the Spirit of Nature, Algora Publishing (March 4, 2006) The name of the Green Man: research paper by Brandon S Centerwall from Folklore magazine External links Greenman Encyclopedia Wiki A site with a comprehensive listing of locations of Green Men in the UK Christmas characters Church architecture Cornish folklore English folklore Fairies Iconography Life-death-rebirth gods Medieval legends Mythological human hybrids Romanesque art Scottish folklore Supernatural legends Visual motifs
Green Man
[ "Mathematics" ]
1,964
[ "Symbols", "Visual motifs" ]
325,831
https://en.wikipedia.org/wiki/Injection%20moulding
Injection moulding (U.S. spelling: injection molding) is a manufacturing process for producing parts by injecting molten material into a mould, or mold. Injection moulding can be performed with a host of materials mainly including metals (for which the process is called die-casting), glasses, elastomers, confections, and most commonly thermoplastic and thermosetting polymers. Material for the part is fed into a heated barrel, mixed (using a helical screw), and injected into a mould cavity, where it cools and hardens to the configuration of the cavity. After a product is designed, usually by an industrial designer or an engineer, moulds are made by a mould-maker (or toolmaker) from metal, usually either steel or aluminium, and precision-machined to form the features of the desired part. Injection moulding is widely used for manufacturing a variety of parts, from the smallest components to entire body panels of cars. Advances in 3D printing technology, using photopolymers that do not melt during the injection moulding of some lower-temperature thermoplastics, can be used for some simple injection moulds. Injection moulding uses a special-purpose machine that has three parts: the injection unit, the mould and the clamp. Parts to be injection-moulded must be very carefully designed to facilitate the moulding process; the material used for the part, the desired shape and features of the part, the material of the mould, and the properties of the moulding machine must all be taken into account. The versatility of injection moulding is facilitated by this breadth of design considerations and possibilities. Applications Injection moulding is used to create many things such as wire spools, packaging, bottle caps, automotive parts and components, toys, pocket combs, some musical instruments (and parts of them), one-piece chairs and small tables, storage containers, mechanical parts (including gears), and most other plastic products available today. Injection moulding is the most common modern method of manufacturing plastic parts; it is ideal for producing high volumes of the same object. Process characteristics Injection moulding uses a ram or screw-type plunger to force molten plastic or rubber material into a mould cavity; this solidifies into a shape that has conformed to the contour of the mould. It is most commonly used to process both thermoplastic and thermosetting polymers, with the volume used of the former being considerably higher. Thermoplastics are prevalent due to characteristics that make them highly suitable for injection moulding, such as ease of recycling, versatility for a wide variety of applications, and ability to soften and flow on heating. Thermoplastics also have an element of safety over thermosets; if a thermosetting polymer is not ejected from the injection barrel in a timely manner, chemical crosslinking may occur causing the screw and check valves to seize and potentially damaging the injection moulding machine. Injection moulding consists of the high pressure injection of the raw material into a mould, which shapes the polymer into the desired form. Moulds can be of a single cavity or multiple cavities. In multiple cavity moulds, each cavity can be identical and form the same parts or can be unique and form multiple different geometries during a single cycle. Moulds are generally made from tool steels, but stainless steels and aluminium moulds are suitable for certain applications. 
Aluminium moulds are typically ill-suited for high volume production or parts with narrow dimensional tolerances, as they have inferior mechanical properties and are more prone to wear, damage, and deformation during the injection and clamping cycles; however, aluminium moulds are cost-effective in low-volume applications, as mould fabrication costs and time are considerably reduced. Many steel moulds are designed to process well over a million parts during their lifetime and can cost hundreds of thousands of dollars to fabricate. When thermoplastics are moulded, typically pelletised raw material is fed through a hopper into a heated barrel with a reciprocating screw. Upon entrance to the barrel, the temperature increases and the Van der Waals forces that resist relative flow of individual chains are weakened as a result of increased space between molecules at higher thermal energy states. This process reduces its viscosity, which enables the polymer to flow with the driving force of the injection unit. The screw delivers the raw material forward, mixes and homogenises the thermal and viscous distributions of the polymer, and reduces the required heating time by mechanically shearing the material and adding a significant amount of frictional heating to the polymer. The material feeds forward through a check valve and collects at the front of the screw into a volume known as a shot. A shot is the volume of material that is used to fill the mould cavity, compensate for shrinkage, and provide a cushion (approximately 10% of the total shot volume, which remains in the barrel and prevents the screw from bottoming out) to transfer pressure from the screw to the mould cavity. When enough material has gathered, the material is forced at high pressure and velocity into the part forming cavity. The exact amount of shrinkage is a function of the resin being used, and can be relatively predictable. To prevent spikes in pressure, the process normally uses a transfer position corresponding to a 95–98% full cavity where the screw shifts from a constant velocity to a constant pressure control. Often injection times are well under 1 second. Once the screw reaches the transfer position the packing pressure is applied, which completes mould filling and compensates for thermal shrinkage, which is quite high for thermoplastics relative to many other materials. The packing pressure is applied until the gate (cavity entrance) solidifies. Due to its small size, the gate is normally the first place to solidify through its entire thickness. Once the gate solidifies, no more material can enter the cavity; accordingly, the screw reciprocates and acquires material for the next cycle while the material within the mould cools so that it can be ejected and be dimensionally stable. This cooling duration is dramatically reduced by the use of cooling lines circulating water or oil from an external temperature controller. Once the required temperature has been achieved, the mould opens and an array of pins, sleeves, strippers, etc. are driven forward to demould the article. Then, the mould closes and the process is repeated. For a two-shot mould, two separate materials are incorporated into one part. This type of injection moulding is used to add a soft touch to knobs, to give a product multiple colours, or to produce a part with multiple performance characteristics. For thermosets, typically two different chemical components are injected into the barrel. 
These components immediately begin irreversible chemical reactions that eventually crosslink the material into a single connected network of molecules. As the chemical reaction occurs, the two fluid components permanently transform into a viscoelastic solid. Solidification in the injection barrel and screw can be problematic and have financial repercussions; therefore, minimising the thermoset curing within the barrel is vital. This typically means that the residence time and temperature of the chemical precursors are minimised in the injection unit. The residence time can be reduced by minimising the barrel's volume capacity and by keeping cycle times short. These factors have led to the use of a thermally isolated, cold injection unit that injects the reacting chemicals into a thermally isolated hot mould, which increases the rate of chemical reactions and results in shorter time required to achieve a solidified thermoset component. After the part has solidified, valves close to isolate the injection system and chemical precursors, and the mould opens to eject the moulded parts. Then, the mould closes and the process repeats. Pre-moulded or machined components can be inserted into the cavity while the mould is open, allowing the material injected in the next cycle to form and solidify around them. This process is known as insert moulding and allows single parts to contain multiple materials. This process is often used to create plastic parts with protruding metal screws so they can be fastened and unfastened repeatedly. This technique can also be used for in-mould labelling, and film lids may also be attached to moulded plastic containers. A parting line, sprue, gate marks, and ejector pin marks are usually present on the final part. None of these features are typically desired, but are unavoidable due to the nature of the process. Gate marks occur at the gate that joins the melt-delivery channels (sprue and runner) to the part forming cavity. Parting line and ejector pin marks result from minute misalignments, wear, gaseous vents, clearances for adjacent parts in relative motion, and/or dimensional differences of the melting surfaces contacting the injected polymer. Dimensional differences can be attributed to non-uniform, pressure-induced deformation during injection, machining tolerances, and non-uniform thermal expansion and contraction of mould components, which experience rapid cycling during the injection, packing, cooling, and ejection phases of the process. Mould components are often designed with materials of various coefficients of thermal expansion. These factors cannot be simultaneously accounted for without astronomical increases in the cost of design, fabrication, processing, and quality monitoring. The skillful mould and part designer positions these aesthetic detriments in hidden areas if feasible. History In 1846 the British inventor Charles Hancock, a relative of Thomas Hancock, patented an injection moulding machine. American inventor John Wesley Hyatt, together with his brother Isaiah, patented one of the first injection moulding machines in 1872. This machine was relatively simple compared to machines in use today: it worked like a large hypodermic needle, using a plunger to inject plastic through a heated cylinder into a mould. The industry progressed slowly over the years, producing products such as collar stays, buttons, and hair combs (generally, though, plastics in the modern sense are a more recent development). 
The German chemists Arthur Eichengrün and Theodore Becker invented the first soluble forms of cellulose acetate in 1903, which was much less flammable than cellulose nitrate. It was eventually made available in a powder form from which it was readily injection moulded. Arthur Eichengrün developed the first injection moulding press in 1919. In 1939, Arthur Eichengrün patented the injection moulding of plasticised cellulose acetate. The industry expanded rapidly in the 1940s because World War II created a huge demand for inexpensive, mass-produced products. In 1946, American inventor James Watson Hendry built the first screw injection machine, which allowed much more precise control over the speed of injection and the quality of articles produced. This machine also allowed material to be mixed before injection, so that coloured or recycled plastic could be added to virgin material and mixed thoroughly before being injected. In the 1970s, Hendry went on to develop the first gas-assisted injection moulding process, which permitted the production of complex, hollow articles that cooled quickly. This greatly improved design flexibility as well as the strength and finish of manufactured parts while reducing production time, cost, weight and waste. By 1979, plastic production had overtaken steel production, and by 1990, aluminium moulds were widely used in injection moulding. Today, screw injection machines account for the vast majority of all injection machines. The plastic injection moulding industry has evolved over the years from producing combs and buttons to producing a vast array of products for many industries including automotive, medical, aerospace, consumer products, toys, plumbing, packaging, and construction. Examples of polymers best suited for the process Most polymers, sometimes referred to as resins, may be used, including all thermoplastics, some thermosets, and some elastomers. Since 1995, the total number of available materials for injection moulding has increased at a rate of 750 per year; there were approximately 18,000 materials available when that trend began. Available materials include alloys or blends of previously developed materials, so product designers can choose the material with the best set of properties from a vast selection. Major criteria for selection of a material are the strength and function required for the final part, as well as the cost, but also each material has different parameters for moulding that must be taken into account. Other considerations when choosing an injection moulding material include flexural modulus of elasticity, or the degree to which a material can be bent without damage, as well as heat deflection and water absorption. Common polymers like epoxy and phenolic are examples of thermosetting plastics while nylon, polyethylene, and polystyrene are thermoplastic. Until comparatively recently, plastic springs were not possible, but advances in polymer properties make them now quite practical. Applications include buckles for anchoring and disconnecting outdoor-equipment webbing. Equipment Injection moulding machines consist of a material hopper, an injection ram or screw-type plunger, and a heating unit. The moulds themselves are held in the machine's clamping plates, known as platens, in which the components are shaped. Presses are rated by tonnage, which expresses the amount of clamping force that the machine can exert. This force keeps the mould closed during the injection process. 
Tonnage can vary from less than 5 tons to over 9,000 tons, with the higher figures used in comparatively few manufacturing operations. The total clamp force needed is determined by the projected area of the part being moulded. This projected area is multiplied by a clamp force of 1.8 to 7.2 tons for each square centimetre of the projected areas. As a rule of thumb, 4 or 5 tons/in² can be used for most products. If the plastic material is very stiff, it requires more injection pressure to fill the mould, and thus more clamp tonnage to hold the mould closed. The required force can also be determined by the material used and the size of the part; larger parts require higher clamping force (a worked example of this sizing rule appears at the end of this article). Mould Mould or die are the common terms used to describe the tool used to produce plastic parts in moulding. Since moulds have been expensive to manufacture, they were usually only used in mass production where thousands of parts were being produced. Typical moulds are constructed from hardened steel, pre-hardened steel, aluminium, and/or beryllium-copper alloy. The choice of material for the mould is not only based on cost considerations, but also has a lot to do with the product life cycle. In general, steel moulds cost more to construct, but their longer lifespan offsets the higher initial cost over a higher number of parts made before wearing out. Pre-hardened steel moulds are less wear-resistant and are used for lower volume requirements or larger components; their typical steel hardness is 38–45 on the Rockwell-C scale. Hardened steel moulds are heat treated after machining; these are by far superior in terms of wear resistance and lifespan. Typical hardness ranges between 50 and 60 Rockwell-C (HRC). Aluminium moulds can cost substantially less, and when designed and machined with modern computerised equipment can be economical for moulding tens or even hundreds of thousands of parts. Beryllium copper is used in areas of the mould that require fast heat removal or areas that see the most shear heat generated. The moulds can be manufactured either by CNC machining or by using electrical discharge machining processes. Mould design The mould consists of two primary components, the injection mould (A plate) and the ejector mould (B plate). These components are also referred to as moulder and mouldmaker. Plastic resin enters the mould through a sprue or gate in the injection mould; the sprue bushing seals tightly against the nozzle of the injection barrel of the moulding machine and allows molten plastic to flow from the barrel into the mould, also known as the cavity. The sprue bushing directs the molten plastic to the cavity images through channels that are machined into the faces of the A and B plates. These channels allow plastic to run along them, so they are referred to as runners. The molten plastic flows through the runner and enters one or more specialised gates and into the cavity geometry to form the desired part. The amount of resin required to fill the sprue, runner and cavities of a mould comprises a "shot". Trapped air in the mould can escape through air vents that are ground into the parting line of the mould, or around ejector pins and slides that are slightly smaller than the holes retaining them. If the trapped air is not allowed to escape, it is compressed by the pressure of the incoming material and squeezed into the corners of the cavity, where it prevents filling and can also cause other defects. 
The air can even become so compressed that it ignites and burns the surrounding plastic material. To allow for removal of the moulded part from the mould, the mould features must not overhang one another in the direction that the mould opens, unless parts of the mould are designed to move from between such overhangs when the mould opens, using components called lifters. Sides of the part that appear parallel with the direction of draw (the axis of the cored position (hole) or insert is parallel to the up and down movement of the mould as it opens and closes) are typically angled slightly, called draft, to ease release of the part from the mould. Insufficient draft can cause deformation or damage. The draft required for mould release is primarily dependent on the depth of the cavity; the deeper the cavity, the more draft necessary. Shrinkage must also be taken into account when determining the draft required. If the skin is too thin, the moulded part tends to shrink onto the cores that form it while cooling and cling to those cores, or the part may warp, twist, blister or crack when the cavity is pulled away. A mould is usually designed so that the moulded part reliably remains on the ejector (B) side of the mould when it opens, and draws the runner and the sprue out of the (A) side along with the parts. The part then falls freely when ejected from the (B) side. Tunnel gates, also known as submarine or mould gates, are located below the parting line or mould surface. An opening is machined into the surface of the mould on the parting line. The moulded part is cut (by the mould) from the runner system on ejection from the mould. Ejector pins, also known as knockout pins, are circular pins placed in either half of the mould (usually the ejector half), which push the finished moulded product, or runner system, out of the mould. The ejection of the article using pins, sleeves, strippers, etc., may cause undesirable impressions or distortion, so care must be taken when designing the mould. The standard method of cooling is passing a coolant (usually water) through a series of holes drilled through the mould plates and connected by hoses to form a continuous pathway. The coolant absorbs heat from the mould (which has absorbed heat from the hot plastic) and keeps the mould at a proper temperature to solidify the plastic at the most efficient rate. To ease maintenance and venting, cavities and cores are divided into pieces, called inserts, and sub-assemblies, also called inserts, blocks, or chase blocks. By substituting interchangeable inserts, one mould may make several variations of the same part. More complex parts are formed using more complex moulds. These may have sections called slides, which move into a cavity perpendicular to the draw direction to form overhanging part features. When the mould is opened, the slides are pulled away from the plastic part by stationary "angle pins" on the stationary mould half. These pins enter a slot in the slides and cause the slides to move backward when the moving half of the mould opens. The part is then ejected and the mould closes. The closing action of the mould causes the slides to move forward along the angle pins. A mould can produce several copies of the same parts in a single "shot". The number of "impressions" in the mould of that part is often incorrectly referred to as cavitation. A tool with one impression is often called a single impression (cavity) mould. 
A mould with two or more cavities of the same parts is usually called a multiple impression (cavity) mould. (This is not to be confused with multi-shot moulding, which is dealt with in the next section.) Some extremely high production volume moulds (like those for bottle caps) can have over 128 cavities. In some cases, multiple cavity tooling moulds a series of different parts in the same tool. Some toolmakers call these moulds family moulds, as all the parts are related, e.g., plastic model kits. Some moulds allow previously moulded parts to be reinserted to allow a new plastic layer to form around the first part. This is often referred to as overmoulding. This system can allow for production of one-piece tires and wheels. Moulds for highly precise and extremely small parts made by micro injection moulding require extra care in the design stage, as material resins react differently compared to their full-sized counterparts: they must quickly fill incredibly small spaces, which puts them under intense shear strain. Multi-shot moulding Two-shot, double-shot or multi-shot moulds are designed to "overmould" within a single moulding cycle and must be processed on specialised injection moulding machines with two or more injection units. This process is actually an injection moulding process performed twice, and therefore allows a much smaller margin of error. In the first step, the base colour material is moulded into a basic shape, which contains spaces for the second shot. Then the second material, a different colour, is injection-moulded into those spaces. Pushbuttons and keys, for instance, made by this process have markings that cannot wear off, and remain legible with heavy use. Mould storage Manufacturers go to great lengths to protect custom moulds due to their high average costs. Consistent temperature and humidity levels are maintained to ensure the longest possible lifespan for each custom mould. Custom moulds, such as those used for rubber injection moulding, are stored in temperature and humidity controlled environments to prevent warping. Tool materials Tool steel is often used. Mild steel, aluminium, nickel or epoxy are suitable only for prototype or very short production runs. With proper mould design and maintenance, modern hard aluminium (7075 and 2024 alloys) can make moulds capable of a life of 100,000 or more parts. Machining Moulds are built through two main methods: standard machining and EDM. Standard machining, in its conventional form, has historically been the method of building injection moulds. With technological developments, CNC machining became the predominant means of making more complex moulds with more accurate mould details in less time than traditional methods. The electrical discharge machining (EDM) or spark erosion process has become widely used in mould making. As well as allowing the formation of shapes that are difficult to machine, the process allows pre-hardened moulds to be shaped so that no heat treatment is required. Changes to a hardened mould by conventional drilling and milling normally require annealing to soften the mould, followed by heat treatment to harden it again. EDM is a simple process in which a shaped electrode, usually made of copper or graphite, is very slowly lowered onto the mould surface, which is immersed in paraffin oil (kerosene), over a period of many hours. A voltage applied between tool and mould causes spark erosion of the mould surface in the inverse shape of the electrode. 
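The press-sizing rule of thumb quoted under Equipment above lends itself to a short worked example. The following Python fragment is a minimal sketch rather than an engineering tool: the function name is invented here, and the clamp-factor range simply restates the 1.8–7.2 t/cm² figure given earlier.

```python
def required_clamp_tonnage(projected_area_cm2: float,
                           clamp_factor: float = 4.0) -> float:
    """Estimate the press clamp force, in tons, needed to keep a mould closed.

    projected_area_cm2: projected area of the moulded part, in cm^2, as seen
        along the direction in which the mould opens.
    clamp_factor: tons of clamp force per cm^2. The text quotes a range of
        roughly 1.8 to 7.2 t/cm^2; stiff, hard-to-fill materials need values
        toward the top of that range.
    """
    if not 1.8 <= clamp_factor <= 7.2:
        raise ValueError("clamp factor outside the 1.8-7.2 t/cm^2 rule of thumb")
    return projected_area_cm2 * clamp_factor

# A part with a 50 cm^2 projected area at a mid-range factor of 4 t/cm^2
# calls for a press of roughly 200 tons.
print(required_clamp_tonnage(50.0))  # 200.0
```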
Cost The number of cavities incorporated into a mould directly correlates with moulding costs. Fewer cavities require far less tooling work, so limiting the number of cavities lowers the initial manufacturing cost of building an injection mould. As the number of cavities plays a vital role in moulding costs, so does the complexity of the part's design. Complexity can be incorporated into many factors such as surface finishing, tolerance requirements, internal or external threads, fine detailing or the number of undercuts that may be incorporated. Further details, such as undercuts, or any feature that needs additional tooling, increase mould cost. Surface finish of the core and cavity of moulds further influences cost. The rubber injection moulding process produces a high yield of durable products, making it the most efficient and cost-effective method of moulding. Consistent vulcanisation processes involving precise temperature control significantly reduce waste material. Injection process Usually, the plastic materials are formed in the shape of pellets or granules and sent from the raw material manufacturers in paper bags. With injection moulding, pre-dried granular plastic is fed by a forced ram from a hopper into a heated barrel. As the granules are slowly moved forward by a screw-type plunger, the plastic is forced into a heated chamber, where it is melted. As the plunger advances, the melted plastic is forced through a nozzle that rests against the mould, allowing it to enter the mould cavity through a gate and runner system. The mould remains cold so the plastic solidifies almost as soon as the mould is filled. Injection moulding cycle The sequence of events during the injection moulding of a plastic part is called the injection moulding cycle. The cycle begins when the mould closes, followed by the injection of the polymer into the mould cavity. Once the cavity is filled, a holding pressure is maintained to compensate for material shrinkage. In the next step, the screw turns, feeding the next shot to the front of the screw. This causes the screw to retract as the next shot is prepared. Once the part is sufficiently cool, the mould opens and the part is ejected. Scientific versus traditional moulding Traditionally, the injection portion of the moulding process was done at one constant pressure to fill and pack the cavity. This method, however, allowed for a large variation in dimensions from cycle to cycle. More commonly used now is scientific or decoupled moulding, a method pioneered by RJG Inc. In this the injection of the plastic is "decoupled" into stages to allow better control of part dimensions and more cycle-to-cycle (commonly called shot-to-shot in the industry) consistency. First the cavity is filled to approximately 98% full using velocity (speed) control. Although the pressure should be sufficient to allow for the desired speed, pressure limitations during this stage are undesirable. Once the cavity is 98% full, the machine switches from velocity control to pressure control, and the cavity is "packed out" at a constant pressure, with sufficient velocity available to reach the desired pressure. This lets workers control part dimensions to within thousandths of an inch or better. 
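The two-stage switchover described above can be made concrete with a toy simulation. This is a minimal sketch under stated assumptions: the constant fill rate, the time step and all numbers are invented for illustration, and it does not represent RJG's actual control logic.

```python
def decoupled_injection(shot_volume_cm3, fill_rate_cm3_s, pack_pressure_bar,
                        gate_freeze_s, fill_fraction=0.98, dt=0.001):
    """Simulate decoupled moulding: fill under velocity control to ~98% of
    the shot volume, then hold a constant pack pressure until gate freeze-off.
    The physics here is deliberately simplistic (constant fill rate)."""
    volume = t = 0.0
    # Stage 1: velocity (speed) control until the cavity is ~98% full.
    while volume < shot_volume_cm3 * fill_fraction:
        volume += fill_rate_cm3_s * dt
        t += dt
    switch_time = t
    # Stage 2: pressure control; pack out at constant pressure until the
    # gate freezes (part weight would stop changing at this point).
    t += gate_freeze_s
    return {"switch_at_s": round(switch_time, 3),
            "pack_pressure_bar": pack_pressure_bar,
            "injection_stage_s": round(t, 3)}

# A 30 cm^3 shot filled at 60 cm^3/s switches to pressure control at ~0.49 s.
print(decoupled_injection(30.0, 60.0, 350.0, gate_freeze_s=2.0))
```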
Different types of injection moulding processes Although most injection moulding processes are covered by the conventional process description above, there are several important moulding variations including, but not limited to: Die casting Metal injection moulding Thin-wall injection moulding Injection moulding of liquid silicone rubber Reaction injection moulding Micro injection moulding Gas-assisted injection moulding Cube mould technology Multi-material injection moulding Process troubleshooting Like all industrial processes, injection moulding can produce flawed parts, even in toys. In the field of injection moulding, troubleshooting is often performed by examining defective parts for specific defects and addressing these defects with the design of the mould or the characteristics of the process itself. Trials are often performed before full production runs in an effort to predict defects and determine the appropriate specifications to use in the injection process. When filling a new or unfamiliar mould for the first time, where the shot size for that mould is unknown, a technician/tool setter may perform a trial run before a full production run. They start with a small shot weight and fill gradually until the mould is 95 to 99% full. Once they achieve this, they apply a small amount of holding pressure and increase the holding time until gate freeze off (solidification time) has occurred. Gate freeze off time can be determined by increasing the hold time and then weighing the part. When the weight of the part does not change, the gate has frozen and no more material is injected into the part. Gate solidification time is important, as it determines cycle time and the quality and consistency of the product, which itself is an important issue in the economics of the production process. Holding pressure is increased until the parts are free of sinks and the part weight has been achieved. Moulding defects Injection moulding is a complex technology with possible production problems. They can be caused either by defects in the moulds, or more often by the moulding process itself. Methods such as industrial CT scanning can help with finding these defects externally as well as internally. Tolerances Tolerance depends on the dimensions of the part. An example of a standard tolerance for a 1-inch dimension of an LDPE part with 0.125 inch wall thickness is +/- 0.008 inch (0.2 mm). Power requirements The power required for this process of injection moulding depends on many things and varies between materials used. Manufacturing Processes Reference Guide states that the power requirements depend on "a material's specific gravity, melting point, thermal conductivity, part size, and molding rate." A table on page 243 of the same reference summarises these characteristics for the most commonly used materials. Robotic moulding Automation means that the smaller size of parts permits a mobile inspection system to examine multiple parts more quickly. In addition to mounting inspection systems on automatic devices, multiple-axis robots can remove parts from the mould and position them for further processes. Specific instances include removal of parts from the mould immediately after the parts are created, as well as applying machine vision systems. A robot grips the part after the ejector pins have been extended to free the part from the mould. 
It then moves them into either a holding location or directly onto an inspection system. The choice depends upon the type of product, as well as the general layout of the manufacturing equipment. Vision systems mounted on robots have greatly enhanced quality control for insert moulded parts. A mobile robot can determine the placement accuracy of the metal component more precisely, and inspect faster, than a human can. See also Craft Design of plastic components Direct injection expanded foam molding Extrusion moulding Fusible core injection moulding Gravimetric blender Hobby injection moulding Injection mould construction Matrix moulding Multi-material injection moulding Rapid Heat Cycle Molding Reaction injection moulding Rotational moulding Urethane casting References Further reading External links Shrinkage and Warpage – Santa Clara University Engineering Design Center Industrial design
Injection moulding
[ "Engineering" ]
6,601
[ "Industrial design", "Design engineering", "Design" ]
325,857
https://en.wikipedia.org/wiki/List%20of%20assets%20owned%20by%20General%20Electric
List of assets owned by General Electric: Primary business units GE Aerospace GE Power GE Renewable Energy Other business units GE Additive GE Capital GE Energy Financial Services GE Digital GE Research GE Licensing See also Lists of corporate assets References Sources http://www.ge.com General Electric
List of assets owned by General Electric
[ "Engineering" ]
62
[ "Electrical engineering", "Electrical-engineering-related lists" ]
325,950
https://en.wikipedia.org/wiki/Extendible%20cardinal
In mathematics, extendible cardinals are large cardinals introduced by William Reinhardt, who was partly motivated by reflection principles. Intuitively, such a cardinal represents a point beyond which initial pieces of the universe of sets start to look similar, in the sense that each is elementarily embeddable into a later one. Definition For every ordinal η, a cardinal κ is called η-extendible if for some ordinal λ there is a nontrivial elementary embedding j of Vκ+η into Vλ, where κ is the critical point of j, and as usual Vα denotes the αth level of the von Neumann hierarchy. A cardinal κ is called an extendible cardinal if it is η-extendible for every nonzero ordinal η (Kanamori 2003). Properties For a cardinal κ, say that a logic L is κ-compact if for every set A of L-sentences, if every subset of A of cardinality less than κ has a model, then A has a model. (The usual compactness theorem shows the ω-compactness of first-order logic.) Let L²κ,κ be the infinitary logic for second-order set theory, permitting infinitary conjunctions and disjunctions of length less than κ. Then κ is extendible if and only if L²κ,κ is κ-compact. Variants and relation to other cardinals A cardinal κ is called η-C(n)-extendible if there is an elementary embedding j witnessing that κ is η-extendible (that is, j is elementary from Vκ+η to some Vλ with critical point κ) such that furthermore, Vj(κ) is Σn-correct in V. That is, for every Σn formula φ, φ holds in Vj(κ) if and only if φ holds in V. A cardinal κ is said to be C(n)-extendible if it is η-C(n)-extendible for every ordinal η. Every extendible cardinal is C(1)-extendible, but for n≥1, the least C(n)-extendible cardinal is never C(n+1)-extendible (Bagaria 2011). Vopěnka's principle implies the existence of extendible cardinals; in fact, Vopěnka's principle (for definable classes) is equivalent to the existence of C(n)-extendible cardinals for all n (Bagaria 2011). All extendible cardinals are supercompact cardinals (Kanamori 2003). See also List of large cardinal properties Reinhardt cardinal References Large cardinals
Extendible cardinal
[ "Mathematics" ]
519
[ "Large cardinals", "Mathematical objects", "Infinity" ]
325,963
https://en.wikipedia.org/wiki/Pollen%20tube
A pollen tube is a tubular structure produced by the male gametophyte of seed plants when it germinates. Pollen tube elongation is an integral stage in the plant life cycle. The pollen tube acts as a conduit to transport the male gamete cells from the pollen grain, either from the stigma (in flowering plants) to the ovules at the base of the pistil, or directly through ovule tissue in some gymnosperms. In maize, this single cell can grow long enough to traverse the length of the pistil. Pollen tubes were first discovered by Giovanni Battista Amici in the 19th century. They are used as a model for understanding plant cell behavior. Research is ongoing to comprehend how the pollen tube responds to extracellular guidance signals to achieve fertilization. Pollen tubes are unique to seed plants and their structures have evolved over their history since the Carboniferous period. Pollen tube formation is complex and the mechanism is not fully understood. Angiosperms The male reproductive organ of the flower, the stamen, produces pollen. The opening of anthers makes pollen available for subsequent pollination (the transfer of pollen grains to the pistil, the female reproductive organ). Each pollen grain contains a vegetative cell, and a generative cell that divides to form two sperm cells. Abiotic vectors such as wind and water, or biotic vectors such as animals, carry out the pollen distribution. Once a pollen grain settles on a compatible pistil, it may germinate in response to a sugary fluid secreted by the mature stigma. Lipids at the surface of the stigma may also stimulate pollen tube growth for compatible pollen. Plants that are self-sterile often inhibit the pollen grains from their own flowers from growing pollen tubes. The presence of multiple grains of pollen has been observed to stimulate quicker pollen tube growth in some plants. The vegetative cell then produces the pollen tube, a tubular protrusion from the pollen grain, which carries the sperm cells within its cytoplasm. The sperm cells are the male gametes that will join with the egg cell and the central cell in double fertilization. The first fertilization event produces a diploid zygote and the second fertilization event produces a triploid endosperm. The germinated pollen tube must drill its way through the nutrient-rich style and curl to the bottom of the ovary to reach an ovule. Once the pollen tube reaches an ovule, it bursts to deliver the two sperm cells. One of the sperm cells fertilizes the egg cell, which develops into an embryo, which will become the future plant. The other one fuses with both polar nuclei of the central cell to form the endosperm, which serves as the embryo's food supply. Finally, the ovary will develop into a fruit and the ovules will develop into seeds. Gymnosperms Gymnosperm pollen is produced in microsporangia borne on the scales of the male cone or microstrobilus. In most species, the plants are wind-pollinated, and the pollen grains of conifers have air bladders that provide buoyancy in air currents. The grains are deposited in the micropyle of the ovule of a female cone or megastrobilus, where they mature for up to a year. In conifers and gnetophytes, the pollen germinates to produce a pollen tube that penetrates the megasporangium or nucellus, carrying with it sperm nuclei that are transferred to the egg cell in the developing archegonia of the female plant. Mechanism of pollen tube growth Recognition The female sporophyte must recognize the pollen stuck to the stigma. 
Often, only pollen of the same species can successfully grow. Outcrossed pollen grows more successfully. With self-incompatibility systems, outcrossed pollen grows and outcompetes self pollen. The interaction between the style and the pollen detects compatibility and influences the growth rate of the pollen tube. This selection process relies on gene-level regulation in which gene loci of the gynoecium cause self-pollen either to grow slowly, stop growing, or burst, while outcrossed pollen grows faster. Self-incompatibility systems maintain genetic diversity. As for gymnosperms, they do not contain a pistil with a stigma. Therefore, pollen must submerge into the pollination droplet, bringing the male gametophyte to the egg of the exposed ovule. However, pollen of a different species will not submerge into the droplet; the pollen is left floating on top, while the droplet retracts back into the micropyle. Initiation Once the pollen grain is recognized and hydrated, the pollen grain germinates to grow a pollen tube. There is competition in this step, as many pollen grains may compete to reach the egg. The stigma plays a role in guiding the sperm to a receptive ovule, in the case of many ovules. Only compatible pollen grains are allowed to grow, as determined by signaling with the stigma. In the pollen grain, the generative cell gives rise to the sperm, whereas the vegetative (tube) cell grows the pollen tube. Some plants have mechanisms in place to prevent self-pollination, such as having the stigma and anther mature at different times or be of different lengths, which significantly contributes to increasing the genetic diversity of the next generation. There is great variation in the rate of growth of pollen tubes, and many studies have focused on signaling. Gene expression in the pollen grain has been identified as that of the gametophyte and not of the parental sporophyte, as it expresses its own unique mRNA and enzymes. In the peach tree, the style environment through which the pollen tube grows provides nutrition for the tube's growth to the ovule. Pollen tubes are tolerant of damage; even pollen damaged by X-rays and gamma rays can still grow pollen tubes. Growth and signaling Pollen tube growth is influenced by the interaction between the stigma-style and the pollen grain. The elongation of the tube is achieved by elongation of the cytoskeleton, and it extends from the tip, which is regulated by high levels of calcium in the cytosol. The calcium levels help the secretory vesicles in the membranes grow and extend at the tip. Polypeptides found in the style also regulate the growth of the tube, and specific peptides that play a role in signaling for growth have been identified. The LURE peptides, which are secreted from the synergids that occupy the space adjacent to the egg cell, act as attractants. In mutant Arabidopsis embryo sacs, specifically those without synergids, the pollen tubes were unable to grow. Pollen tube growth is toward eggs of the same species as the pollen. Intraspecific signaling helps fertilize egg and sperm of the same species. The signaling in the style is important, as pollen tubes can grow without the presence of an embryo sac, with just interaction with the style. Other parts in the ovary include cytoplasmic factors like miRNA and chemical gradients that attract the pollen tube to grow toward the synergids. Calcium and ethylene in Arabidopsis thaliana were involved in termination of the pollen tube when it grows near the ovary. 
The increase in calcium allowed release of the two sperm cells from the tube as well as the degeneration of a synergid cell. The chemical gradient of calcium can also contribute to terminating tube growth early or at the appropriate time. The length of the pollen tube varies by species. It grows in an oscillating fashion until it is ready to release the sperm near the egg for fertilization to take place. Some fast-growing pollen tubes have been observed in lily, tobacco, and Impatiens sultanii. The rate of growth confers an advantage to the organism, but it is not clear whether the variation in growth rate exists in the population or has been selected for over generations due to increased fitness. Evolution Many transitional features have been identified that show a correlation between the evolution of the pollen tube and that of non-motile sperm. Plants of earlier lineages, such as ferns, have spores and motile sperm that swim in a water medium, a mode called zooidogamy. The angiosperm pollen tube is simple, unbranched, and fast growing; however, this is not the case for ancestral plants. In gymnosperms like Ginkgo biloba and the cycads, a haustorial pollen tube forms. The tube simply soaks up nutrients from the female nucellus and grows in two stages. The pollen tube is highly branched and grows on the female sporophyte tissues. First, it grows the main tube, followed by a more spherical tip at the end that allows the sperm to burst out near the archegonia. The binucleated, multiflagellated sperm can then swim to the egg. Cycads have a less branched structure, and the tip end swells in the same way as in the ginkgo. In cycads, however, various enzymes have been identified in the pollen tube that direct growth, and the nucellus tissues are more damaged by the tube's growth. In the other phyla of gymnosperms, the Coniferophyta and Gnetophyta, the sperm is non-motile and the pollen tube delivers the sperm to the egg directly, in a process called siphonogamy. Conifer pollen tubes can be branched or unbranched, and they cause degeneration of the female tissue as they grow through more tissue. In pines, for instance, the cytoplasm of the sperm is discharged, and union with the one sperm occurs as the other sperm degenerates. In the Gnetophyta, however, there are features more similar to angiosperm pollen tubes, where the tube reaches the egg with an early form of double fertilization. However, the endosperm does not form, and the second fertilization is aborted. In angiosperms, the mechanism has been studied more extensively, as pollen tubes in flowering plants grow very fast through long styles to reach the well-protected egg. There is great variation in pollen tubes in angiosperms, and many model plants like petunia, Arabidopsis, lily and tobacco have been studied for intraspecific variation and signaling mechanisms. In flowering plants, polyandry, in which many ovules are fertilized by pollen from different donors, can occur, and the overall fitness of the organism with respect to the rate of pollen tube growth is yet to be studied. Behavior Pollen tubes are an excellent model for the understanding of plant cell behavior. They are easily cultivated in vitro and have a very dynamic cytoskeleton that polymerizes at very high rates, providing the pollen tube with interesting mechanical properties. The pollen tube has an unusual kind of growth; it extends exclusively at its apex. Extending the cell wall only at the tip minimizes friction between the tube and the invaded tissue. This tip growth is performed in a pulsating manner rather than in a steady fashion. 
The pollen tube's journey through the style often results in depth-to-diameter ratios above 100:1, and up to 1000:1 in certain species. In maize, this single cell can grow long enough to traverse the length of the pistil. The internal machinery and the external interactions that govern the dynamics of pollen tube growth are far from being fully understood. Role of actin cytoskeleton The actin cytoskeleton has proven to be critical in assisting pollen tube growth. In terms of spatial distribution, actin filaments are arranged into three different structures within the pollen tube. Each unique arrangement, or pattern, contributes to the maintenance of the polarized cell growth characteristic of the pollen tube. In the apical region (the site of tip-directed growth), actin filaments are less abundant; however, they are highly dynamic. Furthermore, small vesicles accumulate in the apex, indicating that this region is the site of critical vesicle targeting and fusion events. Such events are essential for regulating the velocity and direction of pollen tube growth. In the subapical region, actin filaments are arranged into a collar-like structure. Reverse-fountain cytoplasmic streaming occurs at the subapex; the direction of cytoplasmic streaming is reversed and continues along the axial actin cables comprising the shank. The shank region comprises the central part of the pollen tube. In this region, actin filaments are arranged into axial bundles of uniform polarity, thereby enabling the transport of various organelles and vesicles from the base of the pollen tube to the tip, propelling overall tube growth. Actin filament dynamics Both the spatial distribution and the dynamics of the actin cytoskeleton are regulated by actin-binding proteins (ABPs). In order to experimentally observe distributional changes that take place in the actin cytoskeleton during pollen tube growth, green fluorescent proteins (GFPs) have been put to use. GFPs were mainly selected for dynamic visualization because they provide an efficient means for the non-invasive imaging of actin filaments in plants. Among the various GFPs employed were GFP-mTalin, LIM-GFP and GFP-fimbrin/ABD2-GFP. However, each of these markers either disrupted the natural structure of the actin filaments or unfavorably labeled such filaments. For example, GFP-mTalin resulted in excessive filament bundling, and GFP-fimbrin/ABD2-GFP did not label actin filaments located in the apical or subapical regions of the pollen tube. In light of these drawbacks, Lifeact-mEGFP has been designated as the marker of choice for actin filaments in the pollen tube; it is able to detect all three arrangements of actin filaments and has minimal effects on their natural structure. Lifeact-mEGFP has been used as a marker to study the dynamics of actin filaments in the growing pollen tubes of tobacco, lilies and Arabidopsis. Studies conducted with GFP have confirmed that the dynamic state of actin filaments located in the apical region is essential for pollen tube growth. Experiments on actin filaments stained with GFP-mTalin have yielded results confirming that tip-localized actin filaments are highly dynamic. Such experimentation has made a connection between the dynamics of tip-localized actin filaments and their role in the formation of actin structures in the subapical region. 
Furthermore, experiments on actin filaments located in the apical dome of Arabidopsis indicate that actin filaments are continuously produced from the apical membrane of the pollen tube; the production of these actin filaments is mediated by formins. These findings have provided evidence supporting the theory that actin filaments located in the apical region are highly dynamic and are the site of vesicle targeting and fusion events. Experiments on etiolated hypocotyl cells as well as BY-2 suspension cells show that the highly dynamic actin filaments produced from the apical membrane can either be turned over by filament severing and depolymerizing events, or they can move from the apex to the apical flank, resulting in decreased accumulation of actin filaments in the apical region of the pollen tube. Experiments on actin filament dynamics in the shank region were also conducted with the use of GFP. Findings indicated that the maximum filament length in this region significantly increased and the severing frequency significantly decreased. Such findings indicate that actin filaments located in the shank region are relatively stable compared to actin filaments located in the apical and subapical regions. Regulation ABPs regulate the organization and dynamics of the actin cytoskeleton. As stated previously, actin filaments are continuously synthesized from the apical membrane. This indicates the presence of membrane-anchored actin nucleation factors. Through experimentation, it has been theorized that formins are representative of such actin nucleation factors. For example, the formin AtFH5 has been identified as a major regulator of actin filament nucleation, specifically for actin filaments synthesized from the apical membrane of the pollen tube. Genetic knockouts of AtFH5 resulted in a decreased abundance of actin filaments in both the apical and subapical regions of the pollen tube, thereby providing more evidence to support the theory that AtFH5 nucleates actin filament assembly in the apical and subapical regions of the pollen tube. The class I formin AtFH3 is another actin nucleation factor. AtFH3 nucleates the assembly of the longitudinal actin cables located in the shank region of the pollen tube. More specifically, AtFH3 uses the actin/profilin complex to interact with the end of actin filaments, thereby initiating actin filament nucleation. Guidance Extensive work has been dedicated to comprehending how the pollen tube responds to extracellular guidance signals to achieve fertilization. Pollen tubes react to a combination of chemical, electrical, and mechanical cues during their journey through the pistil. However, it is not clear how these external cues work or how they are processed internally. Moreover, sensory receptors for any external cue have not been identified yet. Nevertheless, several aspects have already been identified as central to the process of pollen tube growth. The actin filaments in the cytoskeleton, the peculiar cell wall, secretory vesicle dynamics, and the flux of ions, to name a few, are some of the fundamental features readily identified as crucial, but whose roles have not yet been completely elucidated. DNA repair During pollen tube growth, DNA damage that arises needs to be repaired in order for the male genomic information to be transmitted intact to the next generation. In the plant Cyrtanthus mackenii, bicellular mature pollen contains a generative cell and a vegetative cell. 
Sperm cells are derived by mitosis of the generative cell during pollen tube elongation. The vegetative cell is responsible for pollen tube development. Double-strand breaks in DNA that arise appear to be efficiently repaired in the generative cell, but not in the vegetative cell, during the transport process to the female gametophyte. RMD Actin filament organization is a contributor to pollen tube growth Overview In order for fertilization to occur, rapid tip growth in pollen tubes delivers the male gametes into the ovules. A pollen tube consists of three different regions: the apex, which is the growth region; the subapex, which is the transition region; and the shank, which acts like a normal plant cell with the usual organelles. The apex region is where tip growth occurs, and this requires the fusion of secretory vesicles. These vesicles mostly contain pectin and homogalacturonans (components of the cell wall at the pollen tube tip). The pectin in the apex region contains methylesters, which allow for flexibility, until the enzyme pectin methylesterase removes the methylester groups, allowing calcium to bind between pectins and give structural support. The homogalacturonans accumulate in the apex region via exocytosis in order to loosen the cell wall. A thicker and softer tip wall with a lower yield stress forms, and this allows cell expansion to occur, which leads to an increase in tip growth. Reverse-fountain cytoplasmic streaming occurs during tip growth and is essential for cellular expansion, because it transports organelles and vesicles between the shank region and the subapex region. The actin cytoskeleton is an important factor in pollen tube growth, because different patterns of the actin cytoskeleton within the different regions of the pollen tube maintain polarized cell growth. For instance, there are longitudinal actin cables in the shank region that regulate the reverse-fountain cytoplasmic streaming. The F-actin controls the accumulation of homogalacturonan-filled vesicles in the subapex region, essentially mediating tip growth. The actin filaments control the interactions between the apical membrane and the cytoplasm while the pollen tube is growing in the apex region. F-actin at the apical membrane is organized by an actin-binding protein called formin, which is essential for pollen tube tip growth. Formins are expressed in tip-growing cells and are divided into two subgroups: type I and type II. The type I formins make actin structures and partake in cytokinesis. The type II formins, on the other hand, contribute to the growth of polarized cells, which is necessary for tip growth. Tip growth is a form of extreme polarized growth, and this polarized process requires actin-binding-protein-mediated organization of the actin cytoskeleton. An essential protein required for this tip growth is the actin-organizing, type II formin protein called Rice Morphology Determinant (RMD). RMD is localized at the tip of the pollen tube and controls pollen tube growth by regulating the polarity and organization of the F-actin array. RMD promotes pollen tube growth RMD promotes pollen germination and pollen tube growth, as shown by numerous experiments. The first experiment compared the features of the pistil and the stigma of the rmd-1 mutant (a rice plant without a functional RMD) and the wild-type rice plant (with a functional RMD). The anther and pistil were shorter in the rmd-1 mutants than in the wild-type. 
This experiment showed that RMD is critical for pollen development. Wild-type rice plants have higher germination rates, while rmd-1 mutants have lower germination rates. This was seen when both were germinated in a liquid germination medium. After the germination rates were tested, the lengths and widths of the pollen tubes of the two plants were compared. The pollen tubes of the wild-type plants were longer than those of the mutants, but the mutant tubes were wider. This greater pollen tube width in the mutants indicates a decrease in the growth of polarized cells and thus a decrease in tip growth. Next, pollen grains from the wild type and mutants were collected to compare their pollination activities. There was decreased activity and minimal penetration in the mutants, whereas the wild types showed increased activity and penetration through the style to the bottom of the pistils. These observations indicated delayed pollen tube growth in the rmd-1 mutants. Additionally, there was no difference in fertilization rates between the wild type and the mutant; this was tested by measuring the seed-setting rates of both, which were found to be similar. Therefore, RMD does not affect fertilization and has an effect only on tip growth. RMD expression in the pollen tube Total RNA was extracted from the whole flower, lemma, palea, lodicule, pistil, anther, and mature pollen grains of the wild-type plants in order to discover where RMD is specifically expressed in the plant as a whole. Using RT-qPCR (reverse transcription quantitative PCR), it was evident that there were different amounts of RMD transcripts within each part of the plant. RT-PCR (reverse transcription PCR), with UBIQUITIN as a control, then showed where RMD was present in each part of the plant. These two methods demonstrated an abundant presence of RMD transcripts in the lemma, pistil, anther, and mature pollen grains. In order to confirm these results, another method was used, employing transgenic plants that had an RMD promoter region fused with a reporter gene encoding GUS. Histochemical staining of the tissues of these transgenic plants then showed high GUS activity within the pistil, anther wall, and mature pollen grains. Therefore, these combined results demonstrated that RMD is expressed in these specific organs of the plant. Detection of GUS signals was employed once again in order to study where RMD is specifically expressed within the pollen tube. First, pollen grains were collected from proRMD::GUS transgenic plants, and a strong GUS signal was noted within these mature pollen grains. These pollen grains were then germinated in vitro, and GUS signals were observed within the growing tips of the pollen tubes. However, the strength of these GUS signals varied at different germination stages: the signals were weak within the pollen tube tip at the early germination stage, but stronger at later germination stages. Therefore, these results support the involvement of RMD in pollen germination and pollen tube growth. RMD localization in the pollen tube RMD, a type II formin, consists of a phosphatase and tensin homolog (PTEN)-like domain (responsible for protein localization) and FH1 and FH2 domains (which promote actin polymerization). 
In order to discover the localization of RMD in the pollen tube, transient assays of growing tobacco pollen tubes were performed using the fluorescent protein GFP. Many confocal images of various pollen tubes under specific conditions were observed: pLat52::eGFP (single eGFP driven by the pollen-specific Lat52 promoter, acting as a control); pLat52::RMD-eGFP (RMD protein fused with eGFP); pLat52::PTEN-eGFP (the PTEN domain fused with eGFP); and pLat52::FH1FH2-eGFP (the FH1 and FH2 domains fused with eGFP). By comparing the images of the control with pLat52::RMD-eGFP, it was observed that the single GFP was spread throughout the entire tube, whereas RMD-eGFP accumulated in the tip region of the tube. This shows that RMD is localized within the tip of the pollen tube. In order to discover whether the PTEN-like domain is responsible for the localization of RMD, the confocal images of GFP fused with the PTEN domain were compared with those of a truncated RMD lacking the PTEN domain (pLat52::FH1FH2-eGFP). The PTEN-eGFP signals were localized in the tip of the pollen tubes like the RMD-eGFP signals, whereas the FH1FH2-eGFP signals were present throughout the pollen tube and not localized in a polar manner. Therefore, these combined results demonstrate that the PTEN-like domain is responsible for the tip localization of RMD in the pollen tubes. RMD controls F-actin distribution and polarity in the pollen tube In order to determine whether RMD controls F-actin organization within the pollen tube, F-actin arrays in wild-type and rmd-1 mature pollen grains were observed using Alexa Fluor 488-phalloidin staining. Strongly bundled actin filaments were present around the apertures of the wild-type pollen grains, whereas there was no accumulation of actin filaments around the apertures in the rmd-1 pollen grains. Additionally, there were weak signals and random organization of the actin filaments within the rmd-1 pollen grains. Therefore, these results support that RMD is essential for controlling pollen germination. Fluorescence intensity was measured and analysed statistically in order to observe the actin filament densities within the pollen tubes. There was greater fluorescence intensity in the shank region of the rmd-mutant tubes, which means there was a higher density of F-actin within this region; however, a lower density of F-actin was observed in the tip region of the rmd-mutant tubes compared to the wild-type tubes. This demonstrates that the F-actin distribution pattern of pollen tubes is altered without a functional RMD. In order to determine the polarity of the actin cables, the angles between the actin cables and the elongation axis of the pollen tube were measured. The angles in the shank region of the wild-type pollen tubes were predominantly less than 20°, whereas the angles for the rmd-mutant pollen tubes were greater than 60°. These results support the fact that RMD is essential for polarized tip growth, because the rmd-mutant pollen tubes (without a functional RMD) exhibited an increased width, and thus a decrease in tip growth. The maximum length of single F-actin cables from the apical to the shank region of elongating pollen tubes was also measured to test the polarity within the pollen tube. The maximum length of the F-actin cables was shorter in the rmd-mutant pollen tubes than in the wild-type tubes. 
Therefore, these combined results support that the proper organization of actin cables, as well as normal F-actin densities within the tip of the tube, can only be achieved if RMD is present. See also Evolutionary history of plants Flowering plant Self-incompatibility in plants Siphonogamy References External links Pollen tube primer Images: Pollen tetrad and Pollen tube Calanthe discolor Lindl. - Flavon's Secret Flower Garden Pollination Plant physiology Botany
Pollen tube
[ "Biology" ]
6,212
[ "Plant physiology", "Plants", "Botany" ]
325,996
https://en.wikipedia.org/wiki/National%20Superconducting%20Cyclotron%20Laboratory
The National Superconducting Cyclotron Laboratory (NSCL), located on the campus of Michigan State University was a rare isotope research facility in the United States. Established in 1963, the cyclotron laboratory has been succeeded by the Facility for Rare Isotope Beams, a linear accelerator providing beam to the same detector halls. NSCL was the nation's largest nuclear science facility on a university campus. Funded primarily by the National Science Foundation and MSU, the NSCL operated two superconducting cyclotrons. The lab's scientists investigated the properties of rare isotopes and nuclear reactions. In nature, these reactions would take place in stars and exploding stellar environments such as novae and supernovae. The K1200 cyclotron was the highest-energy continuous beam accelerator in the world (as compared to synchrotrons such as the Large Hadron Collider which provide beam in "cycles"). The laboratory's primary goal was to understand the properties of atomic nuclei. Atomic nuclei are ten thousand times smaller than the atoms they reside in, but they contain nearly all the atom's mass (more than 99.9 percent). Most of the atomic nuclei found on earth are stable, but there are many unstable and rare isotopes that exist in the universe, sometimes only for a fleeting moment in conditions of high pressure or temperature. The NSCL made and studied atomic nuclei that could not be found on earth. Rare isotope research is essential for understanding how the elements—and ultimately the universe—were formed. The nuclear physics graduate program at MSU was ranked best in America by the 2018 Best Grad Schools index published by U.S. News & World Report. Laboratory upgrades The upgrade plans are in close alignment with a report issued December 2006 by the National Academies, "Scientific Opportunities with a Rare-Isotope Facility in the United States", which defines a scientific agenda for a U.S.-based rare-isotope facility and addresses the need for such a facility in context of international efforts in this area. NSCL is working towards a significant capability upgrade that will keep the laboratory – and nuclear science – at the cutting edge well into the 21st century. The upgrade of NSCL – the $750 million Facility for Rare Isotope Beams (FRIB), under construction as of 2020 – will boost intensities and varieties of rare isotope beams produced at MSU by replacing the K500 and K1200 cyclotrons with a powerful linear accelerator to be built beneath the ground. Such beams will allow researchers and students to continue to address a host of questions at the intellectual frontier of nuclear science: How does the behavior of novel and short-lived nuclei differ from more stable nuclei? What is the nature of nuclear processes in explosive stellar environments? What is the structure of hot nuclear matter at abnormal densities? Beyond basic research, FRIB may lead to cross-disciplinary benefits. Experiments there will help astronomers better interpret data from ground- and space-based observatories. Scientists at the Isotope Science Facility will contribute to research on self-organization and complexity arising from elementary interactions, a topic relevant to the life sciences and quantum computing. Additionally, the facility's capabilities may lead to advances in fields as diverse as biomedicine, materials science, national and international security, and nuclear energy. 
Joint Institute for Nuclear Astrophysics The Joint Institute for Nuclear Astrophysics (JINA) is a collaboration between Michigan State University, the University of Notre Dame, and the University of Chicago to address a broad range of experimental, theoretical, and observational questions in nuclear astrophysics. A portion of the Michigan State collaboration is housed at the National Superconducting Cyclotron Laboratory, directly involving roughly 30 nuclear physicists and astrophysicists. See also CERN Cyclotron Elementary particle FRIB Gesellschaft für Schwerionenforschung Particle physics Particle accelerator References External links Isotope Science Facility at Michigan State University "Scientific Opportunities with a Rare-Isotope Facility in the United States" A report by the National Academies "Nuclear science hits new frontiers" A commentary by NSCL Director C. Konrad Gelbke in the December 2006 CERN Courier “NSCL nets $100 million in NSF funds” MSU News Bulletin, Oct. 26, 2006 The Spartan Podcast – Arden L. Bement, Jr. An audio interview with NSF Director Arden L. Bement Jr. who visited MSU Oct. 26, 2006 to award NSCL more than $100 million to fund operations through 2011, highlighting the lab's status as a world-leading nuclear science facility Online Tour of Isotope Science Facility at Michigan State University Michigan State University Michigan State University campus Nuclear research institutes Research institutes in Michigan
National Superconducting Cyclotron Laboratory
[ "Engineering" ]
959
[ "Nuclear research institutes", "Nuclear organizations" ]
326,120
https://en.wikipedia.org/wiki/IEC%2060320
IEC 60320 Appliance couplers for household and similar general purposes is a set of standards from the International Electrotechnical Commission (IEC) specifying non-locking connectors for connecting power supply cords to electrical appliances of voltage not exceeding 250 V (a.c.) and rated current not exceeding 16 A. Different types of connector (distinguished by shape and size) are specified for different combinations of current, temperature and earthing requirements. Unlike IEC 60309 connectors, they are not coded for voltage; users must ensure that the voltage rating of the equipment is compatible with the mains supply. The standard uses the term coupler to encompass connectors on power cords and power inlets and outlets built into appliances. The first edition of IEC 320 was published in 1970 and was renumbered to 60320 in 1994. Terminology Appliance couplers enable the use of standard inlets and country-specific cord sets which allow manufacturers to produce the same appliance for many markets, where only the cord set has to be changed for a particular market. Interconnection couplers allow a power supply from a piece of equipment or an appliance to be made available to other equipment or appliances. Couplers described under these standards have standardized current and temperature ratings. The parts of the couplers are defined in the standard as follows. Connector: "part of the appliance coupler integral with, or intended to be attached to, one cord connected to the supply". Appliance inlet: "part of the appliance coupler integrated as a part of an appliance or incorporated as a separate part in the appliance or equipment or intended to be fixed to it". Plug connector: "part of the interconnection coupler integral with or intended to be attached to one cord". Appliance outlet: "part of the interconnection coupler which is the part integrated or incorporated in the appliance or equipment or intended to be fixed to it and from which the supply is obtained". Cord set: "assembly consisting of one cable or cord fitted with one non-rewirable plug and one non-rewirable connector, intended for the connection of an electrical appliance or equipment to the electrical supply". Interconnection cord set: "assembly consisting of one cable or cord fitted with one non-rewirable plug connector and one non-rewirable connector, intended for the interconnection between two electrical appliances". Non-rewirable plugs and connectors are typically permanently molded onto cords and cannot be removed or rewired without cutting the cords. The standard uses the terms "male" and "female" only for individual pins and socket contacts, but in general usage they are also applied to the complete plugs and connectors. "Connectors" and "appliance outlets" are fitted with socket contacts, and "appliance inlets" and "plug connectors" are fitted with pin contacts. Each type of coupler is identified by a standard sheet number. For appliance couplers this consists of the letter "C" followed by a number, where the standard sheet for the appliance inlet is 1 higher than the sheet for the corresponding cable connector. Many types of coupler also have common names. The most common ones are "IEC connector" for the common C13 and C14, the "figure-8 connector" for C7 and C8, and "cloverleaf connector" or "Mickey Mouse connector" for the C5/C6. 
"Kettle plug" (often "jug plug" in Australian or New Zealand English) is a colloquial term used for the high-temperature C16 appliance inlet (and sometimes for C15 connector that the plug goes into). “Kettle/jug plug” is also informally used to refer to regular temperature-rated C13 and C14 connectors. (A high-temperature-rated cord with a C15 connector can be used to power a computer with a C14 plug, but a cord with a low-temperature C13 connector will not fit a high-temperature appliance that has a C16 plug.) Application Detachable appliance couplers are used in office equipment, measuring instruments, IT environments, and medical devices, among many types of equipment for worldwide distribution. Each appliance's power system must be adapted to the different plugs used in different regions. An appliance with a permanently-attached plug for use in one country cannot be readily sold in another which uses an incompatible wall socket; this requires keeping track of variations throughout the product's life cycle from assembly and testing to shipping and repairs. Instead, a country-specific power supply cord can be included in the product packaging, so that model variations are minimized and factory testing is simplified. A cord which is fitted with non-rewireable (usually moulded) connectors at both ends is termed a cord set. Appliance manufacturing may be simplified by mounting an appliance coupler directly on the printed circuit board. Assembly and handling of an appliance is easier if the power cord can be removed without much effort. Appliances can be used in another country easily, with a simple change of the power supply cord (including a connector and a country-specific plug). The power supply cord can be replaced easily if damaged, because it is a standardized part that can be unplugged and re-inserted. Safety hazards, maintenance expenditure and repairs are minimized. Standards Parts of the standard IEC 60320 is divided into several parts: IEC 60320-1: General Requirements specifies two-pole and two-pole with earth couplers intended for the connection of a mains supply cord to electrical appliances. Beginning with the IEC 60320-1:2015 edition, this part also specifies interconnection couplers which enable the connection and disconnection of an appliance to a cord leading to another appliance, incorporating IEC 60320-2-2. At the same time, this part of the standard no longer includes standard sheets which were moved to a new part: IEC 60320-3. IEC 60320-2-1: Sewing machine couplers specifies couplers which are not interchangeable with other couplers from IEC 60320, for use with household sewing machines. They are rated no higher than 2.5 A and 250 V AC. IEC 60320-2-2: Interconnection couplers for household and similar equipment. This section was withdrawn in January 2016. The general requirements for these items were incorporated into IEC 60320-1 and the standard sheets were moved to IEC 60320-3. IEC 60320-2-3: Couplers with a degree of protection higher than IPX0 specifies couplers with some degree of liquid ingress protection (IP). In its second edition published in 2018, the standard sheets were moved to IEC 60320-3. IEC 60320-2-4: Couplers dependent on appliance weight for engagement. IEC 60320-3: Standard sheets and gauges. First published October 31, 2014, this part initially included the standard sheets for both appliance couplers and interconnection couplers. In a 2018 amendment, the standard sheets for the IP couplers defined by 60320-2-3 were added. 
For appliance couplers the various coupler outlines are designated using a combination of letters and numbers, e.g., "C14". The connector supplies power to the appliance inlet. The appliance inlet is designated by the even number one greater than the number assigned to the connector, so a C1 connector mates with a C2 inlet, and a C15A mates with a C16A. Interconnection couplers have single letter designators, e.g., "F". They consist of a plug connector and an appliance outlet. The plug connector is the part integral with, or intended to be attached to, the cord, and the appliance outlet is the part integrated with or incorporated into the appliance or equipment or intended to be fixed to it, and from which the supply is obtained. Contents of standards The standards define the mechanical, electrical and thermal requirements and safety goals of power couplers. The standard scope is limited to appliance couplers with a rated voltage not exceeding 250 V (a.c.) at 50 Hz or 60 Hz, and a rated current not exceeding 16 A. Further sub-parts of IEC 60320 focus on special topics such as protection ratings and appliance-specific requirements. Selection of a coupler depends in part on the IEC appliance classes. The shape and dimensions of appliance inlets and connectors are coordinated so that a connector with a lower current rating, temperature rating, or polarization cannot be inserted into an appliance inlet that requires higher ratings (e.g. a Protection Class II connector cannot mate with a Class I inlet, which requires an earth), whereas connecting a Class I connector to a Class II appliance inlet is possible because it creates no safety hazard. Pin temperature is measured where the pin projects from the engagement surface. The maximum permitted pin temperatures for the three temperature classes are 70 °C, 120 °C, and 155 °C, respectively (the higher temperatures are not applicable to interconnection couplers). The pin temperature is determined by the design of the appliance, and its interior temperature, rather than by its ambient temperature. Typical applications with increased pin temperatures include appliances with heating coils such as ovens or electric grills. It is generally possible to use a connector with a higher rated temperature with a lower rated appliance inlet, but the keying feature of the inlet prevents use of a connector with a lower temperature rating. Connectors are also classified according to the method of connecting the cord, either as rewirable connectors or non-rewirable connectors. In addition the standards define further general criteria such as withdrawal forces, testing procedures, the minimum number of insertion cycles, and the number of flexings of cords. IEC 60320-1 defines a cord set as an "assembly consisting of one cable or cord fitted with one plug and one connector, intended for the connection of an electrical appliance or equipment to the electrical supply". It also defines an interconnection cord set as an "assembly consisting of one cable or cord fitted with one plug connector and one connector, intended for the interconnection between two electrical appliances". In addition to the connections within the standards, as mentioned, there are possible combinations between appliance couplers and IEC interconnection couplers. Fitted with a flexible cord, the components become interconnection cords to be used for connecting appliances or for extending other interconnection cords or power supply cords.
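The shape coding described above (current rating, temperature class, and earthing) can be summarized in a small model. The following Python sketch is purely illustrative and is not part of the standard: the Coupler class, the ratings table, and the can_mate helper are hypothetical names, and the 70 °C/120 °C figures are the temperature classes quoted above.

from dataclasses import dataclass

@dataclass(frozen=True)
class Coupler:
    name: str      # standard sheet designation
    amps: int      # rated current in A
    temp: int      # pin temperature class in degrees C
    earthed: bool  # True for Protection Class I

# Illustrative subset of appliance couplers; a connector/inlet pair shares ratings.
C13 = Coupler("C13", 10, 70, True)    # connector
C14 = Coupler("C14", 10, 70, True)    # inlet
C15 = Coupler("C15", 10, 120, True)   # connector
C16 = Coupler("C16", 10, 120, True)   # inlet
C17 = Coupler("C17", 10, 70, False)   # connector
C18 = Coupler("C18", 10, 70, False)   # inlet

def can_mate(connector, inlet):
    # Model of the keying: a connector fits an inlet only if it meets or
    # exceeds the inlet's current, temperature and earthing requirements.
    return (connector.amps >= inlet.amps
            and connector.temp >= inlet.temp
            and (connector.earthed or not inlet.earthed))

assert can_mate(C15, C14)      # hot-rated cord may power an ordinary appliance
assert not can_mate(C13, C16)  # ordinary cord is rejected by a hot inlet
assert can_mate(C13, C18)      # earthed connector fits an unearthed inlet
assert not can_mate(C17, C14)  # unearthed connector is rejected by an earthed inlet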
North American ratings North American rating agencies (CSA, NOM-ANCE, and UL) will certify IEC 60320 connectors for higher currents than are specified in the IEC standard itself. In particular, UL will certify: C5/C6 connectors for up to 13 A, although 10 A is more commonly seen (IEC maximum is 2.5 A) C7/C8 connectors for up to 10 A (IEC maximum is 2.5 A) C13/C14 and C15/C16 connectors for up to 15 A (IEC maximum is 10 A) C19/C20 and C21/C22 connectors for up to 20 A (IEC maximum is 16 A) Given the 120 V (±5%) mains supply used in the United States and Canada, these higher ratings permit devices with C6 and C8 inputs to draw more than 114 V × 2.5 A = 285 W from the mains, and devices with C14 inputs to draw more than 1140 W from the mains. This is exploited by high-powered computer power supplies with outputs of up to 1200 W (and even some particularly efficient 1500 W models), which use the more popular C14 input on products sold worldwide. Although less common, power bricks with C6 and C8 inputs and ratings up to 300 W also exist. Appliance couplers The dimensions and tolerances for connectors, appliance inlets, appliance outlets and plug connectors are given in standard sheets, which are dimensioned drawings showing the features required for safety and interchangeability. Mains appliance couplers The mains appliance couplers are all engineered to connect an appliance via a power cord to mains power. C1/C2 coupler The C1 connector and C2 inlet were commonly used for mains-powered electric shavers. These have largely been supplanted by cordless shavers with rechargeable batteries or corded shavers with an AC adapter. C5/C6 coupler This coupler is sometimes colloquially called a "cloverleaf" or "Mickey Mouse" coupler (because the cross section resembles the silhouette of the Disney character). The C6 inlet is used on laptop power supplies and portable projectors, as well as on some desktop computers and some LCD monitors. C7/C8 coupler Commonly known as a "figure-8", "infinity" or "shotgun" connector due to the shape of its cross-section, or less commonly, a Telefunken connector after its originator. This coupler is often used for small cassette recorders, battery/mains-operated radios, battery chargers, some full-size audio-visual equipment, laptop computer power supplies, video game consoles, and similar double-insulated appliances. A C8B inlet type is defined by the standard for use by dual-voltage appliances; it has three pins and can hold a C7 connector in either of two positions, allowing the user to select voltage by choosing the position in which the connector is inserted. A similar but polarized connector has been made, but is not part of the standard. Sometimes called C7P, it is asymmetrical, with one side squared. Unpolarized C7 connectors can be inserted into the polarized inlets; however, doing so might be a safety risk if the device is designed for polarized power. Although this is not specified by IEC 60320, and it is not clear whether any formal written standard exists, the most common wiring appears to connect the squared side to the hot (live) line, and the rounded side to the neutral. Note: Clause 9.5 was added to IEC 60320-1:2015; it requires that "It shall not be possible to engage a part of a non-standard appliance coupler with a complementary part of an appliance coupler complying with the standard sheets in any part of IEC 60320." Apple uses a modified version of this connector, with the receptacle having a proprietary pin that secures the adapter in place and provides grounding.
With North American plugs, most Apple-supplied cable adapters provide grounding through a slide-in connector, while the angled AC adapter ("duckhead") does not provide grounding. These power supplies do accept the standard C7 connector, and are supported by Apple for non-grounded applications. C13/C14 coupler The C13/C14 connector and inlet combination are used in a wide variety of electronic equipment, ranging from computer components like the power supply, monitors, printers and other peripherals to video game consoles, instrument amplifiers, professional audio equipment and virtually all professional video equipment. An early example of a product that uses this connector is the Apple II. A power cord with a suitable power plug (for the locality where the appliance is being used) on one end and a C13 connector (connecting to the appliance) on the other is commonly called an IEC cord. There are also a variety of splitter blocks, splitter cables, and similar devices available. These are usually un-fused (with the exception of C13 cords attached to BS 1363 plugs, which are always fused). These cables are sometimes informally referred to as a "kettle cord" or "kettle lead", but the C13/C14 connectors are only rated for 70 °C: a device such as a kettle requires the C15/C16 connector, rated for 120 °C. A cable consisting of a C13 connector and an E interconnection plug is commonly mislabeled as an "extension cord"; although that is not its intended purpose, it can be used as such. The E plug connector is also commonly mislabeled as a C14. The C13 connector and C14 inlet are also commonly found on servers, routers, and switches. Power cord sets utilizing a C13 connector and an E interconnection plug are commonplace in data centers to provide power from a PDU (power distribution unit) to a server. These data-center power cables are now offered in many colors. Colored power cables are used to color-code installations. C15/C16 coupler Some electric kettles and similar hot household appliances, like home stills, use a supply cord with a C15 connector and a matching C16 inlet on the appliance; their temperature rating is 120 °C rather than the 70 °C of the similar C13/C14 combination. The official designation in Europe for the C15/C16 coupler is a "hot-condition" coupler. These are similar in form to the C13/C14 coupler, except with a ridge opposite the earth in the C16 inlet (preventing a C13 fitting), and a corresponding valley in the C15 connector (which does not prevent it fitting a C14 inlet). For example, an electric kettle cord can be used to power a computer, but an unmodified computer cord cannot be used to power a kettle. There is some public confusion between C13/C14 and C15/C16 couplers, and it is not uncommon for C13/C14 to be informally referred to as a "kettle plug" or "kettle lead" (or some local equivalent). In European countries the C15/C16 coupler has replaced and made obsolete the formerly common types of national appliance coupler in many applications. C15A/C16A coupler This modification of the C15/C16 coupler has an even higher temperature rating of 155 °C. C17/C18 coupler Similar to the C13/C14 coupler, but unearthed. A C18 inlet accepts a C13 connector, but a C14 inlet does not accept a C17 connector. The IBM Wheelwriter series of electronic typewriters is one common application. Three-wire cords with C13 connectors, which are easier to find, are sometimes used in place of the two-wire cords for replacement. In this case, the ground wire will not be connected.
The C17/C18 coupler is often used in audio applications where a floating ground is maintained to eliminate hum caused by ground loops. Other common applications are the power supplies of Xbox 360 game consoles (replacing the C15/C16 coupler employed initially) and large CRT televisions manufactured by RCA in the early 1990s. C19/C20 coupler Earthed, 16 A, polarized. This coupler is used for supplying power in IT applications where higher currents are required, for instance on high-power workstations and servers, power to uninterruptible power supplies, power to some power distribution units, large network routers, switches, blade enclosures, and similar equipment. This connector can also be found on high-current medical equipment. It is rectangular and has pins parallel to the long axis of the coupler face. Interconnection couplers Interconnection couplers are similar to appliance couplers, but with the gender roles reversed. Specifically, the female appliance outlet is built into a piece of equipment, while the male plug connector is attached to a cord. They are identified by letters, not numbers, with one letter identifying the plug connector, and the alphabetically next letter identifying the mating appliance outlet. For example, an E plug fits into an F outlet. Beginning with an amendment published in 2022, three of the commonly used high-temperature variants have been standardized as types M–R. Cables with a C13 connector at one end and a type E plug connector at the other are commonly available. They have a variety of common uses, including connecting power between older PCs and their monitors, extending existing power cords, connecting to type F outlet strips (commonly used with rack-mount gear to save space and for international standardization) and connecting computer equipment to the output of an uninterruptible power supply (UPS). Type J outlets are used in a similar way. Withdrawn and other standard sheets C3, C4, C11 and C12 standard sheets are no longer listed in the standard. Standard sheet C25 shows retaining device dimensions. Sheet C26 shows detail dimensions for pillar-type terminals, where the end of the screw bears on a wire directly or through a pressure plate. Sheet C27 shows details for screw terminals, where the wire is held by wrapping it around the head of a screw. See also AC power plugs and sockets Power entry module Power Cannon (XLR-LNE), a compact alternative power entry connector. IEC 60309 specifies larger couplers used for higher currents, higher voltages, and polyphase systems. IEC 60906-1 A proposed standard for domestic wall sockets. NEMA connector North American standard for building receptacles and compatible cord connectors. AC power plugs and sockets#BS 1363 three-pin (rectangular) plugs and sockets British standard for building receptacles and compatible cord connectors. CEE 7 standard AC plugs and sockets References External links IEC 60799 edition 2.0 Electrical accessories — Cord sets and interconnection cord sets International Standardized Appliance Connectors (IEC-60320) Reference Chart Includes diagrams of all couplers, their rated current, equipment class, and temperature rating.
Previews (table of contents and introduction) of IEC standard 60320: Appliance couplers for household and similar general purposes: IEC 60320-1 General requirements IEC 60320-2-1 Sewing machine couplers IEC 60320-2-3 Appliance couplers with a degree of protection higher than IPX0 IEC 60320-2-4 Couplers dependent on appliance weight for engagement IEC 60320-3 Standard sheets and gauges Indian national standards equivalent to IEC standards: IS/IEC 60320-1:2001 General requirements IS/IEC 60320-2-2:1998 Interconnection couplers for household and similar equipment IS/IEC 60320-2-3:1998 Appliance couplers with a degree of protection higher than IPX0 60320 Mains power connectors
IEC 60320
[ "Technology" ]
4,734
[ "Computer standards", "IEC standards" ]
326,123
https://en.wikipedia.org/wiki/Windows%207
Windows 7 is a major release of the Windows NT operating system developed by Microsoft. It was released to manufacturing on July 22, 2009, and became generally available on October 22, 2009. It is the successor to Windows Vista, released nearly three years earlier. Windows 7's server counterpart, Windows Server 2008 R2, was released at the same time. It was succeeded by Windows 8 in October 2012. Extended support ended on January 14, 2020, over ten years after the release of Windows 7, after which the operating system ceased receiving further updates. A paid support program was available for enterprises, providing security updates for Windows 7 for up to three years after the official end of life. Windows 7 was intended to be an incremental upgrade to Windows Vista, addressing the previous OS's poor reception while maintaining hardware and software compatibility as well as fixing some of Vista's inconsistencies (such as Vista's aggressive User Account Control). Windows 7 continued improvements on the Windows Aero user interface with the addition of a redesigned taskbar that allows pinned applications, and new window management features. Other new features were added to the operating system, including libraries, the new file-sharing system HomeGroup, and support for multitouch input. A new "Action Center" was also added to provide an overview of system security and maintenance information, and tweaks were made to the User Account Control system to make it less intrusive. Windows 7 also shipped with updated versions of several stock applications, including Internet Explorer 8, Windows Media Player, and Windows Media Center. Unlike Windows Vista, Windows 7 received a warm reception among reviewers and consumers, with critics considering the operating system to be a major improvement over its predecessor because of its improved performance, its more intuitive interface, fewer User Account Control popups, and other improvements made across the platform. Windows 7 was a major success for Microsoft; even before its official release, pre-order sales for the operating system on the online retailer Amazon.com had surpassed previous records. In just six months, over 100 million copies had been sold worldwide, increasing to over 630 million licenses by July 2012. By January 2018, Windows 10 surpassed Windows 7 as the most popular version of Windows worldwide. Windows 11 overtook Windows 7 as the second most popular Windows version on all continents in August 2022. Just 3% of traditional PCs running Windows still run Windows 7, although it remains relatively popular in parts of the world, such as China (where it is tied with Windows 11), and is the second most popular version in some countries. It is the final version of Microsoft Windows that supports processors without SSE2 or NX (although an update released in 2018 dropped support for non-SSE2 processors). Naming Windows 7 is the successor to Windows Vista, and its version name is Windows NT 6.1, compared to Vista's NT 6.0; its naming caused some confusion when it was announced in 2008. Windows president Steven Sinofsky commented that Windows 95 was the fourth version of Windows, but Windows 7 counts up from Windows NT 4.0 as it is a descendant of NT. Development history Originally, a version of Windows codenamed "Blackcomb" was planned as the successor to Windows XP and Windows Server 2003 in 2000.
Major features were planned for Blackcomb, including an emphasis on searching and querying data and an advanced storage system named WinFS to enable such scenarios. However, an interim, minor release, codenamed "Longhorn," was announced for 2003, delaying the development of Blackcomb. By the middle of 2003, however, Longhorn had acquired some of the features originally intended for Blackcomb. After three major malware outbreaks—the Blaster, Nachi, and Sobig worms—exploited flaws in Windows operating systems within a short time period in August 2003, Microsoft changed its development priorities, putting some of Longhorn's major development work on hold while developing new service packs for Windows XP and Windows Server 2003. Development of Longhorn (Windows Vista) was also restarted, and thus delayed, in August 2004. A number of features were cut from Longhorn. Blackcomb was renamed Vienna in early 2006, and was later canceled in 2007 due to the scope of the project. When released, Windows Vista was criticized for its long development time, performance issues, spotty compatibility with existing hardware and software at launch, changes affecting the compatibility of certain PC games, and unclear assurances by Microsoft that certain computers shipping with XP before launch would be "Vista Capable" (which led to a class-action lawsuit), among other critiques. As such, the adoption of Vista in comparison to XP remained somewhat low. In July 2007, following the shelving of the Vienna project and six months after the public release of Vista, it was reported that the next version of Windows would be codenamed Windows 7, with plans for a final release within three years. Bill Gates, in an interview with Newsweek, suggested that Windows 7 would be more "user-centric". Gates later said that Windows 7 would also focus on performance improvements. Steven Sinofsky later expanded on this point, explaining in the Engineering Windows 7 blog that the company was using a variety of new tracing tools to measure the performance of many areas of the operating system on an ongoing basis, to help locate inefficient code paths and to help prevent performance regressions. Senior Vice President Bill Veghte stated that Windows Vista users migrating to Windows 7 would not find the kind of device compatibility issues they encountered migrating from Windows XP. An estimated 1,000 developers worked on Windows 7. These were broadly divided into "core operating system" and "Windows client experience" groups, in turn organized into 25 teams of around 40 developers on average. In October 2008, it was announced that Windows 7 would also be the official name of the operating system. There had been some confusion over naming the product Windows 7 while versioning it as 6.1; the 6.x number indicates its similar build to Windows Vista and increases compatibility with applications that only check major version numbers, much as Windows 2000 and Windows XP both had 5.x version numbers. The first external release to select Microsoft partners came in January 2008 with Milestone 1, build 6519. Speaking about Windows 7 on October 16, 2008, Microsoft CEO Steve Ballmer confirmed compatibility between Windows Vista and Windows 7, indicating that Windows 7 would be a refined version of Windows Vista. At PDC 2008, Microsoft demonstrated Windows 7 with its reworked taskbar. On December 27, 2008, the Windows 7 Beta was leaked onto the Internet via BitTorrent.
According to a performance test by ZDNet, Windows 7 Beta beat both Windows XP and Windows Vista in several key areas, including boot and shutdown time and working with files, such as loading documents. Other areas did not beat XP, including PC Pro benchmarks for typical office activities and video editing, where its results remained identical to Vista's and slower than XP's. On January 7, 2009, the x64 version of the Windows 7 Beta (build 7000) was leaked onto the web, with some torrents being infected with a trojan. At CES 2009, Microsoft CEO Steve Ballmer announced that the Windows 7 Beta, build 7000, had been made available for download to MSDN and TechNet subscribers in the format of an ISO image. The stock wallpaper of the beta version contained a digital image of the Betta fish. The release candidate, build 7100, became available for MSDN and TechNet subscribers, and Connect Program participants on April 30, 2009. On May 5, 2009, it became available to the general public, although it had also been leaked onto the Internet via BitTorrent. The release candidate was available in five languages and expired on June 1, 2010, with shutdowns every two hours starting March 1, 2010. Microsoft stated that Windows 7 would be released to the general public on October 22, 2009, less than three years after the launch of its predecessor. Microsoft released Windows 7 to MSDN and TechNet subscribers on August 6, 2009. Microsoft announced that Windows 7, along with Windows Server 2008 R2, was released to manufacturing in the United States and Canada on July 22, 2009. Windows 7 build 7600.16385.090713-1255, which was compiled on July 13, 2009, was declared the final RTM build after passing all of Microsoft's internal tests. Features New and changed Among Windows 7's new features are advances in touch and handwriting recognition, support for virtual hard disks, improved performance on multi-core processors, improved boot performance, DirectAccess, and kernel improvements. Windows 7 adds support for systems using multiple heterogeneous graphics cards from different vendors (Heterogeneous Multi-adapter), a new version of Windows Media Center, a Gadget for Windows Media Center, improved media features, the inclusion of the XPS Essentials Pack and Windows PowerShell, and a redesigned Calculator with multiline capabilities including Programmer and Statistics modes along with unit conversion for length, weight, temperature, and several others. Many new items have been added to the Control Panel, including ClearType Text Tuner, Display Color Calibration Wizard, Gadgets, Recovery, Troubleshooting, Workspaces Center, Location and Other Sensors, Credential Manager, Biometric Devices, System Icons, and Display. Windows Security Center has been renamed to Action Center (Windows Health Center and Windows Solution Center in earlier builds), which encompasses both security and maintenance of the computer. ReadyBoost on 32-bit editions now supports up to 256 gigabytes of extra allocation. Windows 7 also supports images in RAW image format through the addition of Windows Imaging Component-enabled image decoders, which enables raw image thumbnails, previewing and metadata display in Windows Explorer, plus full-size viewing and slideshows in Windows Photo Viewer and Windows Media Center. Windows 7 also has a native TFTP client with the ability to transfer files to or from a TFTP server. The taskbar has seen the biggest visual changes, where the old Quick Launch toolbar has been replaced with the ability to pin applications to the taskbar.
Buttons for pinned applications are integrated with the task buttons. These buttons also enable Jump Lists to allow easy access to common tasks and files frequently used with specific applications. The revamped taskbar also allows the reordering of taskbar buttons. To the far right of the system clock is a small rectangular button that serves as the Show desktop icon. By default, hovering over this button makes all visible windows transparent for a quick look at the desktop. On touch-enabled displays such as touch screens, tablet PCs, etc., this button is slightly (8 pixels) wider in order to accommodate being pressed by a finger. Clicking this button minimizes all windows, and clicking it a second time restores them. Window management in Windows 7 has several new features: Aero Snap maximizes a window when it is dragged to the top, left, or right of the screen. Dragging windows to the left or right edges of the screen allows users to snap software windows to either side of the screen, such that the windows take up half the screen. When a user moves windows that were snapped or maximized using Snap, the system restores their previous state. Snap functions can also be triggered with keyboard shortcuts. Aero Shake hides all inactive windows when the active window's title bar is dragged back and forth rapidly. Windows 7 includes 13 additional sound schemes, titled Afternoon, Calligraphy, Characters, Cityscape, Delta, Festival, Garden, Heritage, Landscape, Quirky, Raga, Savanna, and Sonata. Internet Spades, Internet Backgammon and Internet Checkers, which were removed in Windows Vista, were restored in Windows 7. Users are able to disable or customize many more Windows components than was possible in Windows Vista. New additions to this list of components include Internet Explorer 8, Windows Media Player 12, Windows Media Center, Windows Search, and Windows Gadget Platform. A new version of Microsoft Virtual PC, newly renamed as Windows Virtual PC, was made available for Windows 7 Professional, Enterprise, and Ultimate editions. It allows multiple Windows environments, including Windows XP Mode, to run on the same machine. Windows XP Mode runs Windows XP in a virtual machine, and displays applications within separate windows on the Windows 7 desktop. Furthermore, Windows 7 supports the mounting of a virtual hard disk (VHD) as normal data storage, and the bootloader delivered with Windows 7 can boot the Windows system from a VHD; however, this ability is only available in the Enterprise and Ultimate editions. The Remote Desktop Protocol (RDP) of Windows 7 is also enhanced to support real-time multimedia applications including video playback and 3D games, thus allowing use of DirectX 10 in remote desktop environments. The three-application limit, previously present in the Windows Vista and Windows XP Starter Editions, has been removed from Windows 7. All editions include some new and improved features that originated within Vista, such as Windows Search and security features, along with some features new to Windows 7. Optional BitLocker Drive Encryption is included with Windows 7 Ultimate and Enterprise. Windows Defender is included; Microsoft Security Essentials antivirus software is a free download. All editions include Shadow Copy, which System Restore uses every day or so to take an automatic "previous version" snapshot of user files that have changed.
Backup and restore have also been improved, and the Windows Recovery Environment—installed by default—replaces the optional Recovery Console of Windows XP. A new system known as "Libraries" was added for file management; users can aggregate files from multiple folders into a "Library." By default, libraries for categories such as Documents, Pictures, Music, and Video are created, consisting of the user's personal folder and the Public folder for each. The system is also used as part of a new home networking system known as HomeGroup; devices are added to the network with a password, and files and folders can be shared with all other devices in the HomeGroup, or with specific users. The default libraries, along with printers, are shared by default, but the personal folder is set to read-only access for other users, and the Public folder can be accessed by anyone. Windows 7 includes improved globalization support through a new Extended Linguistic Services API to provide multilingual support (particularly in Ultimate and Enterprise editions). Microsoft also implemented better support for solid-state drives, including the new TRIM command, and Windows 7 is able to identify a solid-state drive uniquely. Native support for USB 3.0 is not included because of delays in the finalization of the standard. At WinHEC 2008, Microsoft announced that color depths of 30-bit and 48-bit would be supported in Windows 7 along with the wide color gamut scRGB (which for HDMI 1.3 can be converted and output as xvYCC). The video modes supported in Windows 7 are 16-bit sRGB, 24-bit sRGB, 30-bit sRGB, 30-bit with extended color gamut sRGB, and 48-bit scRGB. For developers, Windows 7 includes a new networking API with support for building SOAP-based web services in native code (as opposed to .NET-based WCF web services), and new features to simplify development of installation packages and shorten application install times. Windows 7, by default, generates fewer User Account Control (UAC) prompts because it allows digitally signed Windows components to gain elevated privileges without a prompt. Additionally, users can now adjust the level at which UAC operates using a sliding scale. Removed Certain capabilities and programs that were a part of Windows Vista are no longer present or have been changed, resulting in the removal of certain functionality; these include the classic Start Menu user interface, some taskbar features, Windows Explorer features, Windows Media Player features, Windows Ultimate Extras, the Search button, and InkBall. Four applications bundled with Windows Vista—Windows Photo Gallery, Windows Movie Maker, Windows Calendar and Windows Mail—are not included with Windows 7 and were replaced by Windows Live-branded versions as part of the Windows Live Essentials suite. Editions Windows 7 is available in six different editions, of which the Home Premium, Professional, and Ultimate were available at retail in most countries, and as pre-loaded software on most new computers. Home Premium and Professional were aimed at home users and small businesses respectively, while Ultimate was aimed at enthusiasts. Each edition of Windows 7 includes all of the capabilities and features of the edition below it, and adds additional features oriented towards its market segment; for example, Professional adds additional networking and security features such as Encrypting File System and the ability to join a domain.
Ultimate contained a superset of the features from Home Premium and Professional, along with other advanced features oriented towards power users, such as BitLocker drive encryption; unlike Windows Vista, there were no "Ultimate Extras" add-ons created for Windows 7 Ultimate. Retail copies were available in "upgrade" and higher-cost "full" version licenses; "upgrade" licenses require an existing version of Windows to install, while "full" licenses can be installed on computers with no existing operating system. The remaining three editions were not available at retail, of which two were available exclusively through OEM channels as pre-loaded software. The Starter edition is a stripped-down version of Windows 7 meant for low-cost devices such as netbooks. In comparison to Home Premium, Starter has reduced multimedia functionality, does not allow users to change their desktop wallpaper or theme, disables the "Aero Glass" theme, does not have support for multiple monitors, and can only address 2 GB of RAM. Home Basic was sold only in emerging markets, and was positioned in between Home Premium and Starter. The highest edition, Enterprise, is functionally similar to Ultimate, but is only sold through volume licensing via Microsoft's Software Assurance program. All editions aside from Starter support both IA-32 and x86-64 architectures; Starter only supports 32-bit systems. Retail copies of Windows 7 are distributed on two DVDs: one for the IA-32 version and the other for x86-64. OEM copies include one DVD, depending on the processor architecture licensed. The installation media for consumer versions of Windows 7 are identical; the product key and corresponding license determine the edition that is installed. The Windows Anytime Upgrade service can be used to purchase an upgrade that unlocks the functionality of a higher edition, such as going from Starter to Home Premium, or from Home Premium to Ultimate. Most copies of Windows 7 only contained one license; in certain markets, a "Family Pack" version of Windows 7 Home Premium was also released for a limited time, which allowed upgrades on up to three computers. In certain regions, copies of Windows 7 were only sold in, and could only be activated in, a designated region. Support lifecycle Support for the original release of Windows 7 (without a service pack) ended on April 9, 2013, requiring users to update to Windows 7 Service Pack 1 in order to continue receiving updates and support. Microsoft ended the sale of new retail copies of Windows 7 in October 2014, and the sale of new OEM licenses for Windows 7 Home Basic, Home Premium, and Ultimate ended on October 31, 2014. OEM sales of PCs with Windows 7 Professional pre-installed ended on October 31, 2016. Mainstream support for Windows 7 ended on January 13, 2015. Extended support for Windows 7 ended on January 14, 2020. Variants of Windows 7 for embedded systems and thin clients have different support policies: Windows Embedded Standard 7 support ended in October 2020. Windows Thin PC and Windows Embedded POSReady 7 had support until October 2021. In March 2019, Microsoft announced that it would display notifications informing users of the upcoming end of support, and direct them to a website urging them to purchase a Windows 10 upgrade or a new computer.
In August 2019, researchers reported that "all modern versions of Microsoft Windows" may be at risk for "critical" system compromise because of design flaws in hardware device drivers from multiple providers. In the same month, computer experts reported that the BlueKeep security vulnerability (CVE-2019-0708), which potentially affects older unpatched Microsoft Windows versions via the program's Remote Desktop Protocol, allowing for the possibility of remote code execution, may now include related flaws, collectively named DejaBlue, affecting newer Windows versions (i.e., Windows 7 and all recent versions) as well. In addition, experts reported a Microsoft security vulnerability based on legacy code involving Microsoft CTF and ctfmon (ctfmon.exe) that affects all Windows versions from the older Windows XP version to the most recent Windows 10 versions; a patch to correct the flaw is currently available. In September 2019, Microsoft announced that it would provide free security updates for Windows 7 on federally-certified voting machines through the 2020 United States elections. Extended Security Updates On September 7, 2018, Microsoft announced a paid "Extended Security Updates" (ESU) service that would offer additional updates for Windows 7 Professional and Enterprise for up to three years after the end of extended support, available via specific volume licensing programs in yearly installments. Windows 7 Professional for Embedded Systems, Windows Embedded Standard 7, and Windows Embedded POSReady 7 also get Extended Security Updates for up to three years after their end of extended support date, via OEMs. The Extended Security Updates program for Windows Embedded POSReady 7 ended on October 8, 2024, marking the final end of IA-32 updates on the Windows NT 6.1 product line after more than 15 years. In August 2019, Microsoft announced it would offer a year of 'free' extended security updates to some business users. Third-party support In January 2023, version 109 of the Chromium-based Microsoft Edge became the last version of Edge to support Windows 7, Windows 8/8.1, Windows Server 2012, and Windows Server 2012 R2. Alongside this, several other web browsers based on the Chromium codebase also dropped support for these operating systems after version 109, including Google Chrome and Opera. A fork of Chromium named Supermium is maintained for versions of Windows older than Windows 10, including Windows 7. Mozilla maintains Firefox 115 Extended Support Release (ESR) to support Windows 7, 8 and 8.1, and has committed to supporting it until at least March 2025. Steam ended support for Windows 7, 8, and 8.1 on January 1, 2024. Upgradability Several Windows 7 components are upgradable to later versions, including versions introduced in later releases of Windows, and later versions of other major Microsoft applications are also available. These latest versions for Windows 7 include: DirectX 11 Internet Explorer 11 Microsoft Edge (Chromium, version 109) Windows Virtual PC .NET Framework 4.8 Visual Studio 2019 Office 2016 was the last version of Microsoft Office to be compatible with Windows 7. System requirements The basic requirements are a 1 GHz IA-32 or x86-64 processor, 1 GB of RAM (2 GB for x86-64), 16 GB of available disk space (20 GB for x86-64), and a DirectX 9 graphics device with a WDDM 1.0 or higher driver. Additional requirements to use certain features: Windows XP Mode (Professional, Ultimate and Enterprise): Requires an additional 1 GB of RAM and an additional 15 GB of available hard disk space. As of March 18, 2010, the requirement for a processor capable of hardware virtualization has been lifted.
Windows Media Center (included in Home Premium, Professional, Ultimate and Enterprise) requires a TV tuner to receive and record TV. Physical memory The maximum amount of RAM that Windows 7 supports varies depending on the product edition and on the processor architecture: the IA-32 editions support at most 4 GB (2 GB for Starter), while the x64 editions support 8 GB (Home Basic), 16 GB (Home Premium), and 192 GB (Professional, Enterprise, and Ultimate). Processor limits Windows 7 Professional and up support up to 2 physical processors (CPU sockets), whereas Windows 7 Starter, Home Basic, and Home Premium editions support only 1. Physical processors with either multiple cores, or hyper-threading, or both, implement more than one logical processor per physical processor. The x86 editions of Windows 7 support up to 32 logical processors; x64 editions support up to 256 (four groups of 64). Extent of hardware support In January 2016, Microsoft announced that it would no longer support Windows platforms older than Windows 10 on any future Intel-compatible processor lines, citing difficulties in reliably allowing the operating system to operate on newer hardware. Microsoft stated that effective July 17, 2017, devices with Intel Skylake CPUs were only to receive the "most critical" updates for Windows 7 and 8.1, and only if they had been judged not to affect the reliability of Windows 7 on older hardware. For enterprise customers, Microsoft issued a list of Skylake-based devices "certified" for Windows 7 and 8.1 in addition to Windows 10, to assist them in migrating to newer hardware that can eventually be upgraded to 10 once they are ready to transition. Microsoft and their hardware partners provided special testing and support for these devices on 7 and 8.1 until the July 2017 date. On March 18, 2016, in response to criticism from enterprise customers, Microsoft delayed the end of support and non-critical updates for Skylake systems to July 17, 2018, but stated that they would also continue to receive security updates through the end of extended support. In August 2016, citing a "strong partnership with our OEM partners and Intel", Microsoft retracted the decision and stated that it would continue to support Windows 7 and 8.1 on Skylake hardware through the end of their extended support lifecycle. However, the restrictions on newer CPU microarchitectures remain in force. In March 2017, a Microsoft knowledge base article announced that devices using Intel Kaby Lake, AMD Bristol Ridge, or AMD Ryzen would be blocked from using Windows Update entirely. In addition, official Windows 7 device drivers are not available for the Kaby Lake and Ryzen platforms. Security updates released since March 2018 contained bugs that affect processors that do not support SSE2 extensions, including all Pentium III, Athlon XP, and prior processors. Microsoft initially stated that it would attempt to resolve this issue, and prevented installation of the affected patches on these systems. However, Microsoft retroactively modified its support documents on June 15, 2018 to remove the promise that this bug would be resolved, replacing it with a statement suggesting that users obtain a newer processor. This effectively ended further patch support for Windows 7 on these older systems. Updates Service Pack 1 Windows 7 Service Pack 1 (SP1) was announced on March 18, 2010. A beta was released on July 12, 2010. The final version was released to the public on February 22, 2011. At the time of release, it was not made mandatory. It was available via Windows Update, direct download, or by ordering the Windows 7 SP1 DVD.
The service pack is on a much smaller scale than those released for previous versions of Windows, particularly Windows Vista. Windows 7 Service Pack 1 adds support for Advanced Vector Extensions (AVX), a 256-bit instruction set extension for processors, and improves IKEv2 by adding additional identification fields such as E-mail ID to it. In addition, it adds support for Advanced Format 512e as well as additional Identity Federation Services. Windows 7 Service Pack 1 also resolves a bug related to HDMI audio and another related to printing XPS documents. In Europe, the automatic nature of the BrowserChoice.eu feature was dropped in Windows 7 Service Pack 1 in February 2011 and remained absent for 14 months despite Microsoft reporting that it was still present, subsequently described by Microsoft as a "technical error." As a result, in March 2013, the European Commission fined Microsoft €561 million to deter companies from reneging on settlement promises. Platform Update The Platform Update for Windows 7 SP1 and Windows Server 2008 R2 SP1 was released on February 26, 2013, after a pre-release version had been released on November 5, 2012. It is also included with Internet Explorer 10 for Windows 7. It includes enhancements to Direct2D, DirectWrite, Direct3D, Windows Imaging Component (WIC), Windows Advanced Rasterization Platform (WARP), Windows Animation Manager (WAM), the XPS Document API, the H.264 video decoder and the JPEG XR decoder. However, support for Direct3D 11.1 is limited, as the update does not include DXGI/WDDM 1.2 from Windows 8, making many related APIs and significant features unavailable, such as the stereoscopic frame buffer, feature level 11_1 and optional features for levels 10_0, 10_1 and 11_0. Disk Cleanup update In October 2013, a Disk Cleanup Wizard addon was released that lets users delete outdated Windows updates on Windows 7 SP1, thus reducing the size of the WinSxS directory. This update backports some features found in Windows 8. Windows Management Framework 5.0 Windows Management Framework 5.0 includes updates to Windows PowerShell 5.0, Windows PowerShell Desired State Configuration (DSC), Windows Remote Management (WinRM), and Windows Management Instrumentation (WMI). It was released on February 24, 2016, and was eventually superseded by Windows Management Framework 5.1. Convenience rollup In May 2016, Microsoft released a "Convenience rollup update for Windows 7 SP1 and Windows Server 2008 R2 SP1," which contains all patches released between the release of SP1 and April 2016. The rollup is not available via Windows Update, and must be downloaded manually. This package can also be integrated into a Windows 7 installation image. Since October 2016, all security and reliability updates are cumulative. Downloading and installing updates that address individual problems is no longer possible, but the number of updates that must be downloaded to fully update the OS is significantly reduced. Monthly update rollups (July 2016 – January 2020) In June 2018, Microsoft announced that Windows 7 would be moved to a monthly update model beginning with updates released in September 2018, two years after Microsoft switched the rest of their supported operating systems to that model. With the new update model, instead of updates being released as they became available, only two update packages were released on the second Tuesday of every month until Windows 7 reached its end of life: one package containing security and quality updates, and a smaller package that contained only the security updates.
Users could choose which package they wanted to install each month. Later in the month, another package would be released which was a preview of the next month's security and quality update rollup. Microsoft announced in July 2019 that the Microsoft Internet Games services on Windows XP and Windows Me would end on July 31, 2019 (and for Windows 7 on January 22, 2020). The last non-extended security update rollup packages were released on January 14, 2020, the last day that Windows 7 had extended support. End of support (after January 14, 2020) On January 14, 2020, Windows 7 support ended, with Microsoft no longer providing security updates or fixes after that date, except for subscribers of the Windows 7 Extended Security Updates (ESU), who were able to receive Windows 7 security updates through January 10, 2023. However, two updates have been issued to non-ESU subscribers: In February 2020, Microsoft released an update via Windows Update to fix a black wallpaper issue caused by the January 2020 update for Windows 7. In June 2020, Microsoft released an update via Windows Update to roll out the new Chromium-based Microsoft Edge to Windows 7 and 8.1 machines that are not connected to Active Directory. Users, e.g. those on Active Directory, can download Edge from Microsoft's website. In a support document, Microsoft stated that a full-screen upgrade warning notification would be displayed on Windows 7 PCs on all editions except the Enterprise edition after January 15, 2020. The notification does not appear on machines connected to Active Directory, machines in kiosk mode, or machines subscribed for Extended Security Updates. ESU rollups As part of the September 2022 Extended Security Updates (ESU) rollup, Microsoft quietly added Secure Boot support, along with partial UEFI support. Reception Critical reception Windows 7 received critical acclaim, with critics noting the increased usability and functionality when compared with its predecessor, Windows Vista. CNET gave Windows 7 Home Premium a rating of 4.5 out of 5 stars, stating that it "is more than what Vista should have been, [and] it's where Microsoft needed to go". PC Magazine rated it a 4 out of 5, saying that Windows 7 is a "big improvement" over Windows Vista, with fewer compatibility problems, a retooled taskbar, simpler home networking and faster start-up. Maximum PC gave Windows 7 a rating of 9 out of 10 and called Windows 7 a "massive leap forward" in usability and security, and praised the new Taskbar as "worth the price of admission alone." PC World called Windows 7 a "worthy successor" to Windows XP and said that speed benchmarks showed Windows 7 to be slightly faster than Windows Vista. PC World also named Windows 7 one of the best products of the year. In its review of Windows 7, Engadget said that Microsoft had taken a "strong step forward" with Windows 7 and reported that speed is one of Windows 7's major selling points—particularly for the netbook sets. Laptop Magazine gave Windows 7 a rating of 4 out of 5 stars and said that Windows 7 made computing more intuitive and offered better overall performance, including a "modest to dramatic" increase in battery life on laptop computers. TechRadar gave Windows 7 a rating of 5 out of 5 stars, concluding that "it combines the security and architectural improvements of Windows Vista with better performance than XP can deliver on today's hardware. No version of Windows is ever perfect, but Windows 7 really is the best release of Windows yet."
USA Today and The Telegraph also gave Windows 7 favorable reviews. Nick Wingfield of The Wall Street Journal wrote, "Visually arresting," and "A pleasure." Mary Branscombe of Financial Times wrote, "A clear leap forward." Jesus Diaz of Gizmodo wrote, "Windows 7 Kills Snow Leopard." Don Reisinger of CNET wrote, "Delightful." David Pogue of The New York Times wrote, "Faster." J. Peter Bruzzese and Richi Jennings of Computerworld wrote, "Ready." Some Windows Vista Ultimate users have expressed concerns over Windows 7 pricing and upgrade options. Windows Vista Ultimate users wanting to upgrade from Windows Vista to Windows 7 had to either pay $219.99 to upgrade to Windows 7 Ultimate or perform a clean install, which required them to reinstall all of their programs. The changes to User Account Control on Windows 7 were criticized for being potentially insecure, as an exploit was discovered allowing untrusted software to be launched with elevated privileges by exploiting a trusted component. Peter Bright of Ars Technica argued that "the way that the Windows 7 UAC 'improvements' have been made completely exempts Microsoft's developers from having to do that work themselves. With Windows 7, it's one rule for Redmond, another one for everyone else." Microsoft's Windows kernel engineer Mark Russinovich acknowledged the problem, but noted that malware can also compromise a system when users agree to a prompt. Sales In July 2009, in only eight hours, pre-orders of Windows 7 at amazon.co.uk surpassed the demand which Windows Vista had in its first 17 weeks. It became the highest-grossing pre-order in Amazon's history, surpassing sales of the previous record holder, the seventh Harry Potter book. After 36 hours, 64-bit versions of Windows 7 Professional and Ultimate editions sold out in Japan. Two weeks after its release, its market share had surpassed that of Snow Leopard, released two months previously as the most recent update to Apple's Mac OS X operating system. According to Net Applications, Windows 7 reached a 4% market share in less than three weeks; in comparison, it took Windows Vista seven months to reach the same mark. As of February 2014, Windows 7 had a market share of 47.49% according to Net Applications; in comparison, Windows XP had a market share of 29.23%. On March 4, 2010, Microsoft announced that it had sold more than 90 million licenses. By April 23, 2010, more than 100 million copies had been sold in six months, which made it Microsoft's fastest-selling operating system. As of June 23, 2010, Windows 7 had sold 150 million copies, which made it the fastest-selling operating system in history, with seven copies sold every second. Based on worldwide data taken during June 2010 from Windows Update, 46% of Windows 7 PCs ran the 64-bit edition of Windows 7. According to Stephen Baker of the NPD Group, during April 2010, 77% of PCs sold at retail in the United States were pre-installed with the 64-bit edition of Windows 7. As of July 22, 2010, Windows 7 had sold 175 million copies. On October 21, 2010, Microsoft announced that more than 240 million copies of Windows 7 had been sold. Three months later, on January 27, 2011, Microsoft announced total sales of 300 million copies of Windows 7. On July 12, 2011, the sales figure was refined to over 400 million end-user licenses and business installations. As of July 9, 2012, over 630 million licenses have been sold; this number includes licenses sold to OEMs for new PCs.
Antitrust concerns As with other Microsoft operating systems, Windows 7 was studied by United States federal regulators who oversee the company's operations following the 2001 United States v. Microsoft Corp. settlement. According to status reports filed, the three-member panel began assessing prototypes of the new operating system in February 2008. Michael Gartenberg, an analyst at Jupiter Research, said, "[Microsoft's] challenge for Windows 7 will be how can they continue to add features that consumers will want that also don't run afoul of regulators." In order to comply with European antitrust regulations, Microsoft proposed the use of a "ballot" screen containing download links to competing web browsers, thus removing the need for a version of Windows completely without Internet Explorer, as previously planned. Microsoft announced that it would discard the separate version for Europe and ship the standard upgrade and full packages worldwide, in response to criticism involving Windows 7 E and concerns from manufacturers about possible consumer confusion if a version of Windows 7 with Internet Explorer were shipped later, after one without Internet Explorer. As with the previous version of Windows, an N version, which does not come with Windows Media Player, has been released in Europe, but only for sale directly from Microsoft sales websites and selected others. See also BlueKeep, a security vulnerability discovered in May 2019 that affected most Windows NT–based computers up to Windows 7 References Further reading External links Windows 7 Service Pack 1 (SP1) Windows 7 SP1 update history 2009 software IA-32 operating systems 7 X86-64 operating systems Products and services discontinued in 2020 Microsoft Windows
Windows 7
[ "Technology" ]
7,966
[ "Computing platforms", "Microsoft Windows" ]
326,182
https://en.wikipedia.org/wiki/Isoperimetric%20inequality
In mathematics, the isoperimetric inequality is a geometric inequality involving the square of the circumference of a closed curve in the plane and the area of a plane region it encloses, as well as its various generalizations. Isoperimetric literally means "having the same perimeter". Specifically, the isoperimetric inequality states, for the length L of a closed curve and the area A of the planar region that it encloses, that $L^2 \geq 4\pi A$, and that equality holds if and only if the curve is a circle. The isoperimetric problem is to determine a plane figure of the largest possible area whose boundary has a specified length. The closely related Dido's problem asks for a region of the maximal area bounded by a straight line and a curvilinear arc whose endpoints belong to that line. It is named after Dido, the legendary founder and first queen of Carthage. The solution to the isoperimetric problem is given by a circle and was known already in Ancient Greece. However, the first mathematically rigorous proof of this fact was obtained only in the 19th century. Since then, many other proofs have been found. The isoperimetric problem has been extended in multiple ways, for example, to curves on surfaces and to regions in higher-dimensional spaces. Perhaps the most familiar physical manifestation of the 3-dimensional isoperimetric inequality is the shape of a drop of water. Namely, a drop will typically assume a symmetric round shape. Since the amount of water in a drop is fixed, surface tension forces the drop into a shape which minimizes the surface area of the drop, namely a round sphere. The isoperimetric problem in the plane The classical isoperimetric problem dates back to antiquity. The problem can be stated as follows: Among all closed curves in the plane of fixed perimeter, which curve (if any) maximizes the area of its enclosed region? This question can be shown to be equivalent to the following problem: Among all closed curves in the plane enclosing a fixed area, which curve (if any) minimizes the perimeter? This problem is conceptually related to the principle of least action in physics, in that it can be restated: what is the principle of action which encloses the greatest area, with the greatest economy of effort? The 15th-century philosopher and scientist, Cardinal Nicholas of Cusa, considered rotational action, the process by which a circle is generated, to be the most direct reflection, in the realm of sensory impressions, of the process by which the universe is created. German astronomer and astrologer Johannes Kepler invoked the isoperimetric principle in discussing the morphology of the solar system, in Mysterium Cosmographicum (The Sacred Mystery of the Cosmos, 1596). Although the circle appears to be an obvious solution to the problem, proving this fact is rather difficult. The first progress toward the solution was made by Swiss geometer Jakob Steiner in 1838, using a geometric method later named Steiner symmetrisation. Steiner showed that if a solution existed, then it must be the circle. Steiner's proof was completed later by several other mathematicians. Steiner begins with some geometric constructions which are easily understood; for example, it can be shown that any closed curve enclosing a region that is not fully convex can be modified to enclose more area, by "flipping" the concave areas so that they become convex. It can further be shown that any closed curve which is not fully symmetrical can be "tilted" so that it encloses more area.
The one shape that is perfectly convex and symmetrical is the circle, although this, in itself, does not represent a rigorous proof of the isoperimetric theorem (see external links). On a plane The solution to the isoperimetric problem is usually expressed in the form of an inequality that relates the length L of a closed curve and the area A of the planar region that it encloses. The isoperimetric inequality states that L² ≥ 4πA, and that the equality holds if and only if the curve is a circle. The area of a disk of radius R is πR² and the circumference of the circle is 2πR, so both sides of the inequality are equal to 4π²R² in this case. Dozens of proofs of the isoperimetric inequality have been found. In 1902, Hurwitz published a short proof using the Fourier series that applies to arbitrary rectifiable curves (not assumed to be smooth). An elegant direct proof based on comparison of a smooth simple closed curve with an appropriate circle was given by E. Schmidt in 1938. It uses only the arc length formula, the expression for the area of a plane region from Green's theorem, and the Cauchy–Schwarz inequality. For a given closed curve, the isoperimetric quotient is defined as the ratio of its area and that of the circle having the same perimeter. This is equal to Q = 4πA/L², and the isoperimetric inequality says that Q ≤ 1. Equivalently, the isoperimetric ratio L²/A is at least 4π for every curve. The isoperimetric quotient of a regular n-gon is Q_n = π/(n·tan(π/n)). Let γ be a smooth regular convex closed curve. Then the improved isoperimetric inequality states the following: L² ≥ 4πA + 8π|Ã|, where L, A and Ã denote the length of γ, the area of the region bounded by γ and the oriented area of the Wigner caustic of γ, respectively, and the equality holds if and only if γ is a curve of constant width. On a sphere Let C be a simple closed curve on a sphere of radius 1. Denote by L the length of C and by A the area enclosed by C. The spherical isoperimetric inequality states that L² ≥ A(4π − A), and that the equality holds if and only if the curve is a circle. There are, in fact, two ways to measure the spherical area enclosed by a simple closed curve, but the inequality is symmetric with respect to taking the complement. This inequality was discovered by Paul Lévy (1919), who also extended it to higher dimensions and general surfaces. In the more general case of arbitrary radius R, it is known that L² ≥ 4πA − A²/R². In Euclidean space The isoperimetric inequality states that a sphere has the smallest surface area per given volume. Given a bounded open set S ⊂ ℝⁿ with C¹ boundary, having surface area per(S) and volume vol(S), the isoperimetric inequality states that per(S) ≥ n·vol(S)^((n−1)/n)·vol(B₁)^(1/n), where B₁ ⊂ ℝⁿ is a unit ball. The equality holds when S is a ball in ℝⁿ. Under additional restrictions on the set (such as convexity, regularity, smooth boundary), the equality holds for a ball only. But in full generality the situation is more complicated. The relevant result (for which simpler proofs were later found) can be clarified as follows. An extremal set consists of a ball and a "corona" that contributes neither to the volume nor to the surface area. That is, the equality holds for a compact set S if and only if S contains a closed ball B such that vol(B) = vol(S) and per(B) = per(S). For example, the "corona" may be a curve. The proof of the inequality follows directly from the Brunn–Minkowski inequality between a set S and a ball with radius ε, i.e. vol(S + εB₁)^(1/n) ≥ vol(S)^(1/n) + ε·vol(B₁)^(1/n). By raising the Brunn–Minkowski inequality to the power n, subtracting vol(S) from both sides, dividing by ε, and taking the limit as ε → 0⁺, the isoperimetric inequality follows.
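The n-dimensional statement just given can be sanity-checked for simple bodies. The sketch below is illustrative only; "surface" stands for the surface area per(S), and the bound is n·vol(S)^((n−1)/n)·vol(B₁)^(1/n) as above. Balls attain equality while cubes satisfy the strict inequality:

```python
import math

def unit_ball_volume(n: int) -> float:
    """Volume of the unit ball in R^n: pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def check(n: int, volume: float, surface: float, label: str) -> None:
    bound = n * volume ** ((n - 1) / n) * unit_ball_volume(n) ** (1 / n)
    print(f"n={n} {label:4s} surface = {surface:9.4f}  bound = {bound:9.4f}")

for n in (2, 3, 4):
    w = unit_ball_volume(n)
    # Ball of radius 2: volume w*2^n, surface area n*w*2^(n-1) -> equality.
    check(n, w * 2 ** n, n * w * 2 ** (n - 1), "ball")
    # Unit cube: volume 1, surface area 2n -> strict inequality.
    check(n, 1.0, 2 * n, "cube")
```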
In full generality, the isoperimetric inequality states that for any set S ⊂ ℝⁿ whose closure has finite Lebesgue measure, n·ω_n^(1/n)·Lⁿ(S̄)^((n−1)/n) ≤ M(∂S), where M is the (n−1)-dimensional Minkowski content, Lⁿ is the n-dimensional Lebesgue measure, and ω_n is the volume of the unit ball in ℝⁿ. If the boundary of S is rectifiable, then the Minkowski content is the (n−1)-dimensional Hausdorff measure. The n-dimensional isoperimetric inequality is equivalent (for sufficiently smooth domains) to the Sobolev inequality on ℝⁿ with optimal constant: (∫|u|^(n/(n−1)) dx)^((n−1)/n) ≤ n⁻¹·ω_n^(−1/n)·∫|∇u| dx for all u ∈ W^(1,1)(ℝⁿ). In Hadamard manifolds Hadamard manifolds are complete simply connected manifolds with nonpositive curvature. Thus they generalize the Euclidean space ℝⁿ, which is a Hadamard manifold with curvature zero. In the 1970s and early 1980s, Thierry Aubin, Misha Gromov, Yuri Burago, and Viktor Zalgaller conjectured that the Euclidean isoperimetric inequality holds for bounded sets in Hadamard manifolds, which has become known as the Cartan–Hadamard conjecture. In dimension 2 this had already been established in 1926 by André Weil, who was a student of Hadamard at the time. In dimensions 3 and 4 the conjecture was proved by Bruce Kleiner in 1992, and Chris Croke in 1984, respectively. In a metric measure space Most of the work on the isoperimetric problem has been done in the context of smooth regions in Euclidean spaces, or more generally, in Riemannian manifolds. However, the isoperimetric problem can be formulated in much greater generality, using the notion of Minkowski content. Let (X, d, μ) be a metric measure space: X is a metric space with metric d, and μ is a Borel measure on X. The boundary measure, or Minkowski content, of a measurable subset A of X is defined as the lim inf μ⁺(A) = liminf_(ε→0⁺) (μ(A_ε) − μ(A))/ε, where A_ε = {x ∈ X : d(x, A) ≤ ε} is the ε-extension of A. The isoperimetric problem in X asks how small μ⁺(A) can be for a given μ(A). If X is the Euclidean plane with the usual distance and the Lebesgue measure then this question generalizes the classical isoperimetric problem to planar regions whose boundary is not necessarily smooth, although the answer turns out to be the same. The function I(a) = inf{μ⁺(A) : μ(A) = a} is called the isoperimetric profile of the metric measure space (X, d, μ). Isoperimetric profiles have been studied for Cayley graphs of discrete groups and for special classes of Riemannian manifolds (where usually only regions A with regular boundary are considered). For graphs In graph theory, isoperimetric inequalities are at the heart of the study of expander graphs, which are sparse graphs that have strong connectivity properties. Expander constructions have spawned research in pure and applied mathematics, with several applications to complexity theory, design of robust computer networks, and the theory of error-correcting codes. Isoperimetric inequalities for graphs relate the size of vertex subsets to the size of their boundary, which is usually measured by the number of edges leaving the subset (edge expansion) or by the number of neighbouring vertices (vertex expansion). For a graph G = (V, E) and a number k, the following are two standard isoperimetric parameters for graphs. The edge isoperimetric parameter: Φ_E(G, k) = min over S ⊆ V with |S| = k of |∂(S)|. The vertex isoperimetric parameter: Φ_V(G, k) = min over S ⊆ V with |S| = k of |∂_out(S)|. Here ∂(S) denotes the set of edges leaving S and ∂_out(S) denotes the set of vertices outside S that have a neighbour in S. The isoperimetric problem consists of understanding how the parameters Φ_E and Φ_V behave for natural families of graphs. Example: Isoperimetric inequalities for hypercubes The d-dimensional hypercube Q_d is the graph whose vertices are all Boolean vectors of length d, that is, the set {0, 1}^d. Two such vectors are connected by an edge in Q_d if they are equal up to a single bit flip, that is, their Hamming distance is exactly one.
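For intuition, the two parameters Φ_E and Φ_V defined above can be computed by brute force on a small hypercube. The sketch below is an added illustration (encoding vertices as integer bitmasks is a convenience of the sketch, not standard notation); it enumerates every k-subset of the 3-cube:

```python
from itertools import combinations

def neighbors(v: int, d: int):
    """The d vertices obtained from v by flipping a single bit."""
    return [v ^ (1 << i) for i in range(d)]

def edge_boundary(S: set, d: int) -> int:
    """Number of edges with exactly one endpoint in S."""
    return sum(1 for v in S for u in neighbors(v, d) if u not in S)

def vertex_boundary(S: set, d: int) -> int:
    """Number of vertices outside S adjacent to some vertex of S."""
    return len({u for v in S for u in neighbors(v, d) if u not in S})

def phi(d: int, k: int, boundary) -> int:
    """Minimum boundary size over all k-subsets of the d-cube's vertices."""
    return min(boundary(set(S), d) for S in combinations(range(2 ** d), k))

d = 3
for k in range(1, 2 ** d):
    print(f"k = {k}: Phi_E = {phi(d, k, edge_boundary)}, "
          f"Phi_V = {phi(d, k, vertex_boundary)}")
# For d = 3, k = 4: a 2-dimensional subcube attains Phi_E = 4, while the
# radius-1 Hamming ball attains Phi_V = 3, matching the bounds stated below.
```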
The following are the isoperimetric inequalities for the Boolean hypercube. Edge isoperimetric inequality The edge isoperimetric inequality of the hypercube is Φ_E(Q_d, k) ≥ k·(d − log₂ k). This bound is tight, as is witnessed by each set S that is the set of vertices of any subcube of Q_d. Vertex isoperimetric inequality Harper's theorem says that Hamming balls have the smallest vertex boundary among all sets of a given size. Hamming balls are sets that contain all points of Hamming weight at most r and no points of Hamming weight larger than r + 1 for some integer r. This theorem implies that any set S ⊆ {0, 1}^d with |S| ≥ Σ_(i=0)^(r) (d choose i) satisfies |S ∪ ∂_out(S)| ≥ Σ_(i=0)^(r+1) (d choose i). As a special case, consider set sizes of the form k = Σ_(i=0)^(r) (d choose i) for some integer r. Then the above implies that the exact vertex isoperimetric parameter is Φ_V(Q_d, k) = (d choose r+1). Isoperimetric inequality for triangles The isoperimetric inequality for triangles in terms of perimeter p and area T states that p² ≥ 12√3·T, with equality for the equilateral triangle. This is implied, via the AM–GM inequality, by a stronger inequality which has also been called the isoperimetric inequality for triangles: T ≤ (√3/4)·(abc)^(2/3), where a, b and c denote the side lengths of the triangle. See also Blaschke–Lebesgue theorem Chaplygin problem: the isoperimetric problem is a zero-wind-speed case of the Chaplygin problem Curve-shortening flow Expander graph Gaussian isoperimetric inequality Isoperimetric dimension Isoperimetric point List of triangle inequalities Planar separator theorem Mixed volume Notes References Blaschke and Leichtweiß, Elementare Differentialgeometrie (in German), 5th edition, completely revised by K. Leichtweiß. Die Grundlehren der mathematischen Wissenschaften, Band 1. Springer-Verlag, New York Heidelberg Berlin, 1973. Gromov, M.: "Paul Levy's isoperimetric inequality". Appendix C in Metric structures for Riemannian and non-Riemannian spaces. Based on the 1981 French original. With appendices by M. Katz, P. Pansu and S. Semmes. Translated from the French by Sean Michael Bates. Progress in Mathematics, 152. Birkhäuser Boston, Inc., Boston, Massachusetts, 1999. External links History of the Isoperimetric Problem at Convergence Treiberg: Several proofs of the isoperimetric inequality Isoperimetric Theorem at cut-the-knot Analytic geometry Calculus of variations Geometric inequalities Multivariable calculus Theorems in measure theory
Isoperimetric inequality
[ "Mathematics" ]
2,804
[ "Theorems in mathematical analysis", "Theorems in measure theory", "Calculus", "Geometric inequalities", "Inequalities (mathematics)", "Theorems in geometry", "Multivariable calculus" ]
326,213
https://en.wikipedia.org/wiki/Technology%20during%20World%20War%20I
Technology during World War I (1914–1918) reflected a trend toward industrialism and the application of mass-production methods to weapons and to the technology of warfare in general. This trend began at least fifty years prior to World War I during the American Civil War of 1861–1865, and continued through many smaller conflicts in which soldiers and strategists tested new weapons. World War I weapons included types standardised and improved over the preceding period, together with some newly developed types using innovative technology and a number of improvised weapons used in trench warfare. Military technology of the time included important innovations in machine guns, grenades, and artillery, along with essentially new weapons such as submarines, poison gas, warplanes and tanks. The earlier years of the First World War could be characterized as a clash of 20th-century technology with 19th-century military science, creating ineffective battles with huge numbers of casualties on both sides. On land, the quick descent into trench warfare came as a surprise, and only in the final year of the war did the major armies make effective steps in revolutionizing matters of command and control and tactics to adapt to the modern battlefield and start to harness the myriad new technologies to effective military purposes. Tactical reorganizations (such as shifting the focus of command from the 100+ man company to the 10+ man squad) went hand-in-hand with armoured cars, the first submachine guns, and automatic rifles that a single individual soldier could carry and use. Trench warfare Much of the combat involved trench warfare, in which hundreds often died for each metre gained. Many of the deadliest battles in history occurred during World War I. Such battles include Ypres, the Marne, Cambrai, the Somme, Verdun, and Gallipoli. The Germans employed the Haber process of nitrogen fixation to provide their forces with a constant supply of gunpowder despite the British naval blockade. Artillery was responsible for the largest number of casualties and consumed vast quantities of explosives. Trench warfare led to the development of the concrete pill box, a small, hardened blockhouse that could be used to deliver machine gun fire. Pillboxes could be placed across a battlefield with interlocking fields of fire. Because attacking an entrenched enemy was so difficult, tunnel warfare became a major effort during the war. Once enemy positions were undermined, huge amounts of explosives would be planted and detonated to prepare for an overland charge. Sensitive listening devices that could detect the sounds of digging were crucial for defense against these underground incursions. The British proved especially adept at these tactics, thanks to the skill of their tunnel-digging "sappers" and the sophistication of their listening devices. During the war, the immobility of trench warfare and a need for protection from snipers created a requirement for loopholes both for discharging firearms and for observation. Often a steel plate was used with a "key hole", which had a rotating piece to cover the loophole when not in use. Clothing The British and German armies had already replaced the red coat (British army, 1902) and Prussian blue (1910) field uniforms with less conspicuous khaki or field gray. Adolphe Messimy, Joseph Gallieni and other French leaders had proposed following suit, but the French army marched to war in their traditional red trousers, and only began receiving the new "horizon blue" ones in 1915.
A type of raincoat for British officers, introduced long before the war, gained fame as the trench coat. The principal armies entered the war under cloth caps or leather helmets. They hastened to develop new steel helmets, in designs that became icons of their respective countries. Observation trees Observing the enemy in trench warfare was difficult, prompting the invention of technology such as the camouflage tree, a man-made observation tower that enabled forces to observe the enemy discreetly. Artillery Artillery also underwent a revolution. In 1914, cannons were positioned in the front line and fired directly at their targets. By 1917, indirect fire with guns (as well as mortars and even machine guns) was commonplace, using new techniques for spotting and ranging, notably aircraft and the often overlooked field telephone. At the beginning of the war, artillery was often sited in the front line to fire over open sights at enemy infantry. During the war, the following improvements were made: Indirect counter-battery fire was developed for the first time Forward observers were used to direct artillery positioned out of direct line of sight from the targets, and sophisticated communications and fire plans were developed Artillery sound ranging and flash spotting, for the location and eventual destruction of enemy batteries Factors such as weather, air temperature, and barrel wear could for the first time be accurately measured and taken into account for indirect fire The first "box barrage" in history was fired in the Battle of Neuve Chapelle in 1915; this was the use of a three- or four-sided curtain of shell-fire to prevent the movement of enemy infantry The creeping barrage was perfected The wire-cutting No. 106 fuze was developed, specifically designed to explode on contact with barbed wire, or the ground before the shell buried itself in mud, and equally effective as an anti-personnel weapon The first anti-aircraft guns were devised out of necessity Germany was far ahead of the Allies in using heavy indirect fire. The German Army employed 150 mm and 210 mm howitzers in 1914, when typical French and British guns were only 75 mm and 105 mm. The British had a 6-inch (152 mm) howitzer, but it was so heavy it had to be hauled to the field in pieces and assembled. The Germans also fielded Austrian 305 mm and 420 mm guns and, even at the beginning of the war, had inventories of various calibres of Minenwerfer, which were ideally suited for trench warfare. Field artillery entered the war with the idea that each gun should be accompanied by hundreds of shells, and armouries ought to have about a thousand on hand for resupply. This proved utterly inadequate when it became commonplace for a gun to sit in one place and fire a hundred shells or more per day for weeks or months on end. To meet the resulting Shell Crisis of 1915, factories were hastily converted from other purposes to make more ammunition. Railways to the front were expanded or built, leaving the question of the last mile. Horses in World War I were the main answer, and their high death rate seriously weakened the Central Powers late in the war. In many places the newly invented trench railways helped. The new motor trucks as yet lacked pneumatic tires, versatile suspension, and other improvements that in later decades would allow them to perform well. Poison gas The widespread use of chemical warfare was a distinguishing feature of the conflict. Gases used included chlorine, mustard gas and phosgene.
Relatively few war casualties were caused by gas, as effective countermeasures to gas attacks were quickly created, such as gas masks. The use of chemical warfare and small-scale strategic bombing (as opposed to tactical bombing) were both outlawed by the Hague Conventions of 1899 and 1907, and both proved to be of limited effectiveness, though they captured the public imagination. At the beginning of the war, Germany had the most advanced chemical industry in the world, accounting for more than 80% of the world's dye and chemical production. Although the use of poison gas had been banned by the Hague Conventions of 1899 and 1907, Germany turned to this industry for what it hoped would be a decisive weapon to break the deadlock of trench warfare. Chlorine gas was first used on the battlefield in April 1915 at the Second Battle of Ypres in Belgium. The unknown gas appeared to be a simple smoke screen, used to hide attacking soldiers, and Allied troops were ordered to the front trenches to repel the expected attack. The gas had a devastating effect, killing many defenders or, when the wind direction changed and blew the gas back, many attackers. The wind being unreliable, another way had to be found to deliver the gas, and it began to be carried in artillery shells. Later, mustard gas, phosgene and other gases were used. Britain and France soon followed suit with their own gas weapons. The first defenses against gas were makeshift, mainly rags soaked in water or urine. Later, relatively effective gas masks were developed, and these greatly reduced the effectiveness of gas as a weapon. Although it sometimes resulted in brief tactical advantages and probably caused over 1,000,000 casualties, gas seemed to have had no significant effect on the course of the war. Chemical weapons were cheap and easily obtained. Gas was especially effective against troops in trenches and bunkers that protected them from other weapons. Most chemical weapons attacked an individual's respiratory system. The prospect of choking easily caused fear in soldiers, and the resulting terror affected them psychologically. Because there was such a great fear of chemical weapons, it was not uncommon for a soldier to panic and misinterpret the symptoms of a common cold as the effects of a poisonous gas. Command and control The introduction of radio telegraphy was a significant step in communication during World War I. The stations utilized at that time were spark-gap transmitters. As an example, news of the start of World War I was transmitted to German South West Africa on 2 August 1914 via radio telegraphy from the Nauen transmitter station via a relay station in Kamina and Lomé in Togo to the radio station in Windhoek. In the early days of the war, generals tried to direct tactics from headquarters many miles from the front, with messages being carried back and forth by runners or motorcycle couriers. It was soon realized that more immediate methods of communication were needed. Radio sets of the period were too heavy to carry into battle, and field telephone lines laid were quickly broken. Either one was subject to eavesdropping, and trench codes were not very satisfactory. Runners, flashing lights, and mirrors were often used instead; dogs were also used, but only occasionally, as troops tended to adopt them as pets and men would volunteer to go as runners in the dog's place.
There were also aircraft (called "contact patrols") that carried messages between headquarters and forward positions, sometimes dropping their messages without landing. Technical advances in radio, however, continued during the war, and radio telephony was perfected, being most useful for airborne artillery spotters. The new long-range artillery developed just before the war now had to fire at positions it could not see. Typical tactics were to pound the enemy front lines and then stop to let infantry move forward, hoping that the enemy line was broken, though it rarely was. The lifting and then the creeping barrage were developed to keep artillery fire landing directly in front of the infantry as it advanced. Communications being impossible, the danger was that the barrage would move too fast, losing the protection, or too slowly, holding up the advance. There were also countermeasures to these artillery tactics: by aiming a counter-barrage directly behind an enemy's creeping barrage, one could target the infantry that was following it. Microphones (sound ranging) were used to triangulate the position of enemy guns and engage in counter-battery fire. Muzzle flashes of guns could also be spotted and used to target enemy artillery. The impressive spread of telecommunications in the armed forces during World War I, which extended commanders' command and control over distant forces and ships, also led to the intelligence branch assuming an ever greater importance. The growth of military telephone and radiotelegraphic communications encouraged all intelligence services to study how to extract the greatest amount of intelligence from the enemy's communication systems, by exploiting some inherent weaknesses of those media. Radio interception was particularly easy, creating the need for World War I cryptography. Even the earliest episodes of the war showed, often surprisingly, the impact that eavesdropping on and interpretation of the enemy's transmissions could have on military operations. As such, this period witnessed significant developments in the intelligence category today commonly known as SIGINT, or "signals intelligence". Exploitation of intercepted Russian radio signals contributed to the German victory at Tannenberg in August 1914. Even when messages could not be decoded, radio direction finding was used to track the motion of enemy units.
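Sound ranging and radio direction finding both amount to locating a hidden source from measurements taken at known stations. The toy sketch below uses invented microphone positions, timings and a nominal speed of sound; it is a modern illustration of the geometric principle only, not a reconstruction of any period method. It recovers a gun position from time-of-arrival differences by a simple grid search:

```python
import itertools
import math

SPEED = 0.34                                  # nominal speed of sound, km/s
mics = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]   # microphone line along the front (km)
gun = (1.4, 3.0)                              # hidden "true" battery position (km)
times = [math.dist(gun, m) / SPEED for m in mics]   # simulated arrival times (s)

def mismatch(x: float, y: float) -> float:
    """How badly a candidate position explains the arrival-time differences.
    Only differences between stations matter: the firing time is unknown."""
    pred = [math.dist((x, y), m) / SPEED for m in mics]
    return sum(((pred[i] - pred[0]) - (times[i] - times[0])) ** 2
               for i in range(1, len(mics)))

# Coarse 100 m grid search over the ground in front of the microphones.
candidates = ((x / 10, y / 10)
              for x, y in itertools.product(range(-20, 41), range(1, 61)))
best = min(candidates, key=lambda p: mismatch(*p))
print("estimated battery position:", best)    # close to the true (1.4, 3.0)
```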
Railways Railways dominated in this war as in no other. The German strategy was known beforehand by the Allies simply because of the vast marshaling yards on the Belgian border that had no other purpose than to deliver the mobilized German army to its start point. The German mobilization plan was little more than a vast, detailed railway timetable. Men and materiel could get to the front at an unprecedented rate by rail, but trains were vulnerable at the front itself. Thus, armies could only advance at the pace at which they could build or rebuild a railway, e.g. the British advance across Sinai. Motorized transport was only extensively used in the last two years of World War I. After the railhead, troops moved the last mile on foot, and guns and supplies were drawn by horses and trench railways. Railways lacked the flexibility of motor transport, and this lack of flexibility percolated through into the conduct of the war. War of attrition The countries involved in the war applied the full force of industrial mass-production to the manufacture of weapons and ammunition, especially artillery shells. Women on the home front played a crucial role in this by working in munitions factories. This complete mobilization of a nation's resources, or "total war", meant that not only the armies, but also the economies of the warring nations were in competition. For a time, in 1914–1915, some hoped that the war could be won through an attrition of materiel: the enemy's supply of artillery shells might be exhausted in futile exchanges. But production was ramped up on both sides and such hopes proved futile. In Britain the Shell Crisis of 1915 brought down the British government and led to the building of HM Factory, Gretna, a huge munitions factory on the English-Scottish border. The war of attrition then focused on another resource: human lives. In the Battle of Verdun in particular, German Chief of Staff Erich von Falkenhayn hoped to "bleed France white" through repeated attacks on this French city. In the end, the war ended through a combination of attrition (of men and materiel), advances on the battlefield, the arrival of American troops in large numbers, and a breakdown of morale and production on the German home front due to an effective naval blockade of her seaports. Air warfare Aviation in World War I started with primitive aircraft, primitively used. Technological progress was swift, leading to ground attack, tactical bombing, and highly publicized, deadly dogfights among aircraft equipped with forward-firing, synchronized machine guns from July 1915 onwards. However, these uses made a lesser impact on the war than more mundane roles in intelligence, sea patrol and especially artillery spotting. Anti-aircraft warfare also had its beginnings in this war. Fixed-wing aircraft were first used militarily by the Italians in Libya on 23 October 1911 during the Italo-Turkish War for reconnaissance, soon followed by the dropping of grenades and aerial photography the next year. By 1914, their military utility was obvious. They were initially used for reconnaissance and ground attack. To shoot down enemy planes, anti-aircraft guns and fighter aircraft were developed. Strategic bombers were created, principally by the Germans and British, though the former used Zeppelins as well. Perhaps the most famous fighter plane of World War I was the Fokker Eindecker, as it was the first to include a synchronized machine gun. Towards the end of the conflict, aircraft carriers were used for the first time, with HMS Furious launching Sopwith Camels in a raid to destroy the Zeppelin hangars at Tønder in 1918. Manned observation balloons, floating high above the trenches, were used as stationary reconnaissance platforms, reporting enemy movements and directing artillery. Balloons commonly had a crew of two, equipped with parachutes, so that if there was an enemy air attack the crew could parachute to safety. At the time, parachutes were too heavy to be used by pilots of aircraft (with their marginal power output), and smaller versions were not developed until the end of the war; they were also opposed by the British leadership, who feared they might promote cowardice. Recognised for their value as observation platforms, balloons were important targets for enemy aircraft. To defend them against air attack, they were heavily protected by anti-aircraft guns and patrolled by friendly aircraft; to attack them, unusual weapons such as air-to-air rockets were tried.
Thus, the reconnaissance value of blimps and balloons contributed to the development of air-to-air combat between all types of aircraft, and to the trench stalemate, because it was impossible to move large numbers of troops undetected. The Germans conducted air raids on England during 1915 and 1916 with airships, hoping to damage British morale and cause aircraft to be diverted from the front lines, and indeed the resulting panic led to the diversion of several squadrons of fighters from France. While early air spotters were unarmed, they soon began firing at each other with handheld weapons. An arms race commenced, quickly leading to increasingly agile planes equipped with machine guns. A key innovation was the interrupter gear, a Dutch invention that allowed a machine gun to be mounted behind the propeller so the pilot could fire directly ahead, along the plane's flight path. German strategic bombing during World War I struck Warsaw, Paris, London and other cities. Germany led the world in Zeppelins, and used these airships to make occasional bombing raids on military targets, London and other British cities, without great effect. Later in the war, Germany introduced long-range strategic bombers. Damage was again minor, but they forced the British air forces to maintain squadrons of fighters in England to defend against air attack, depriving the British Expeditionary Force of planes, equipment, and personnel badly needed on the Western Front. Mobility In the early days of the war, armoured cars armed with machine guns were organized into combat units, along with cyclist infantry and machine guns mounted on motorcycle sidecars. Though not able to assault entrenched positions, they provided mobile fire support to infantry, and performed scouting, reconnaissance, and other roles similar to cavalry. After trench warfare took hold of the major battle lines, opportunities for such vehicles greatly diminished, though they continued to see use in the more open campaigns in Russia and the Middle East. Between late 1914 and early 1918, the Western Front hardly moved. When the Russian Empire surrendered after the October Revolution in 1917, Germany was able to move many troops to the Western Front. With new stormtrooper infantry trained in infiltration tactics to exploit enemy weak points and penetrate into rear areas, the Germans launched a series of offensives in the spring of 1918. In the largest of these, Operation Michael, General Oskar von Hutier pushed forward 60 kilometers, gaining in a couple of weeks what France and Britain had spent years trying to achieve. Although initially successful tactically, these offensives stalled after outrunning their horse-drawn supply, artillery, and reserves, leaving German forces weakened and exhausted. The mobile personnel shield was a less successful attempt at restoring mobility. Several kinds of bullet-proof body armor were tested in use, but they impaired movement more than they protected the body. In the Battle of Amiens of August 1918, the Triple Entente forces began a counterattack that would be called the "Hundred Days Offensive". The Australian and Canadian divisions that spearheaded the attack managed to advance 13 kilometers on the first day alone. These battles marked the end of trench warfare on the Western Front and a return to mobile warfare. After the war, the defeated Germans would seek to combine their infantry-based mobile warfare of 1918 with vehicles, eventually leading to blitzkrieg, or "lightning warfare".
Tanks Although the concept of the tank had been suggested as early as the 1890s, authorities showed little more than a passing interest in them until the trench stalemate of World War I caused reconsideration. In early 1915, the British Royal Navy and French industrialists both started dedicated development of tanks. Basic tank design combined several existing technologies. It included armour plating thick enough to be proof against all standard infantry arms, caterpillar track for mobility over the shell-torn battlefield, the four-stroke gasoline-powered internal combustion engine (refined in the 1870s), and heavy firepower, provided by the same machine guns which had recently become so dominant in warfare, or even light artillery guns. In Britain, a committee was formed to work out a practical tank design. The outcome was large tanks with a rhomboidal shape, to allow the crossing of a wide trench: the Mark I tank, with the "male" versions mounting small naval guns and machine guns, and the "female" carrying only machine guns. In France, several competing arms industry organizations each proposed radically different designs. Smaller tanks became favored, leading to the Renault FT tank, in part by being able to leverage the engines and manufacturing techniques of commercial tractors and automobiles. Although the tanks' initial appearance on the battlefield in 1916 terrified some German troops, such engagements provided more opportunities for development than battle successes. Early tanks were unreliable, breaking down often. The Germans learned they were vulnerable to direct hits from field artillery and heavy mortars, their trenches were widened and other obstacles devised to halt them, and special anti-tank rifles were rapidly developed. Also, both Britain and France found that new tactics and training were required to make effective use of their tanks, such as larger coordinated formations of tanks and close support with infantry. Once tanks could be organized in the hundreds, as in the opening assault of the Battle of Cambrai in November 1917, they began to have a notable impact. Throughout the remainder of the war, new tank designs often revealed flaws in battle, to be addressed in later designs, but reliability remained the primary weakness of tanks. In the Battle of Amiens, a major Entente counteroffensive near the end of the war, British forces took the field with 532 tanks; after several days, only a few were still in commission, with those that suffered mechanical difficulties outnumbering those disabled by enemy fire. Germany utilized many captured enemy tanks, and made a few of their own late in the war. In the last year of the war, despite rapidly increasing production (especially by France) and improving designs, tank technology struggled to make more than a modest impact on the war's overall progress. Plan 1919 proposed the future use of massive tank formations in great offensives combined with ground attack aircraft. Even without achieving the decisive results hoped for during World War I, tank technology and mechanized warfare had been launched and would grow increasingly sophisticated in the years following the war. By World War II, the tank would evolve into a fearsome weapon critical to restoring mobility to land warfare. At sea The years leading up to the war saw the use of improved metallurgical and mechanical techniques to produce larger ships with larger guns and, in reaction, more armour.
The launching of HMS Dreadnought (1906) revolutionized battleship construction, leaving many ships obsolete before they were completed. German ambitions brought an Anglo-German naval arms race in which the Imperial German Navy was built up from a small force to the world's most modern and second most powerful. However, even this high-technology navy entered the war with a mix of newer ships and obsolete older ones. The advantage was in long-range gunnery, and naval battles took place at far greater distances than before. The 1916 Battle of Jutland demonstrated the excellence of German ships and crews, but also showed that the High Seas Fleet was not big enough to challenge openly the British blockade of Germany. It was the only full-scale battle between fleets in the war. Having the largest surface fleet, the United Kingdom sought to press its advantage. British ships blockaded German ports, hunted down German and Austro-Hungarian ships wherever they might be on the high seas, and supported actions against German colonies. The German surface fleet was largely kept in the North Sea. This situation pushed Germany, in particular, to direct its resources to a new form of naval power: submarines. Naval mines were deployed in hundreds of thousands, in far greater numbers than in previous wars. Submarines proved surprisingly effective for this purpose. Influence mines were a new development, but moored contact mines were the most numerous. They resembled those of the late 19th century, improved so they less often exploded while being laid. The Allies produced enough mines to build the North Sea Mine Barrage to help bottle the Germans into the North Sea, but it was too late to make much difference. Submarines Germany deployed U-boats (submarines) after the war began. Alternating between restricted and unrestricted submarine warfare in the Atlantic, the Imperial German Navy employed them to deprive the British Isles of vital supplies. The deaths of British merchant sailors and the seeming invulnerability of U-boats led to the development of depth charges (1916), hydrophones (sonar, 1917), blimps, hunter-killer submarines (HMS R-1, 1917), forward-throwing anti-submarine weapons, and dipping hydrophones (the latter two both abandoned in 1918). To extend their operations, the Germans proposed supply submarines (1916). Most of these would be forgotten in the interwar period until World War II revived the need. The United Kingdom relied heavily on imports to feed its population and supply its war industry, and the German Navy hoped to blockade and starve Britain using U-boats to attack merchant ships. Lieutenant Otto Weddigen famously remarked on the success of the second submarine attack of the Great War. Submarines soon came under attack from submarine chasers and other small warships using hastily devised anti-submarine weapons. They could not impose an effective blockade while acting under the restrictions of the prize rules and international law of the sea. They resorted to unrestricted submarine warfare, which cost Germany public sympathy in neutral countries and was a factor contributing to the American entry into World War I. This struggle between German submarines and British countermeasures became known as the "First Battle of the Atlantic". As German submarines became more numerous and effective, the British sought ways to protect their merchant ships. "Q-ships", attack vessels disguised as civilian ships, were one early strategy.
Consolidating merchant ships into convoys protected by one or more armed navy vessels was adopted later in the war. There was initially a great deal of debate about this approach, out of fear that it would provide German U-boats with a wealth of convenient targets. Thanks to the development of active and passive sonar devices, coupled with increasingly deadly anti-submarine weapons, the convoy system reduced British losses to U-boats to a small fraction of their former level. Holland 602 type submarines and other Allied types were fewer, being unnecessary for the blockade of Germany. Small arms Infantry weapons for the major powers were mainly bolt-action rifles, capable of firing ten or more rounds per minute. German soldiers carried the Gewehr 98 rifle in 8mm Mauser, the British carried the Short Magazine Lee–Enfield rifle, and the US military employed the M1903 Springfield and M1917 Enfield. Rifles with telescopic sights were used by snipers, and were first used by the Germans. Machine guns were also used by the great powers; both sides used the Maxim gun, a fully automatic belt-fed weapon capable of long-term sustained use provided it was supplied with adequate amounts of ammunition and cooling water, and its French counterpart, the Hotchkiss M1914 machine gun. Their use in defense, combined with barbed wire obstacles, converted the expected mobile battlefield to a static one. The machine gun was useful in stationary battle but could not move easily through a battlefield, and therefore forced soldiers to face enemy machine guns without machine guns of their own. Before the war, the French Army studied the question of a light machine gun but had made none for use. At the start of hostilities, France quickly turned an existing prototype (the "CS", for Chauchat and Sutter) into the lightweight Chauchat M1915 automatic rifle with a high rate of fire. Besides its use by the French, the first American units to arrive in France used it in 1917 and 1918. Hastily mass-manufactured under desperate wartime pressures, the weapon developed a reputation for unreliability. Seeing the potential of such a weapon, the British Army adopted the American-designed Lewis gun chambered in .303 British. The Lewis gun was the first true light machine gun that could in theory be operated by one man, though in practice the bulky ammo pans required an entire section of men to keep the gun operating. The Lewis gun was also used for marching fire, notably by the Australian Corps in the July 1918 Battle of Hamel. To serve the same purpose, the German Army adopted the MG08/15, which was impractically heavy once the cooling water and a single 100-round belt of ammunition were counted. In 1918 the M1918 Browning Automatic Rifle (BAR) was introduced in the US military. The weapon was an "automatic rifle" and, like the Chauchat, was designed with the concept of walking fire in mind. The tactic was to be employed under conditions of limited field of fire and poor visibility, such as advancing through woods. Early submachine guns, such as the MP 18, saw much use near the end of the war. The US military deployed combat shotguns, commonly known as trench guns. American troops used Winchester Model 1897 and Model 1912 short-barreled pump-action shotguns loaded with 6 rounds containing antimony-hardened 00 buckshot to clear enemy trenches.
Pump-action shotguns can be fired rapidly, simply by working the slide while the trigger is held down, and when fighting within a trench, the shorter shotgun could be rapidly turned and fired in the opposite direction along the trench axis. The shotguns prompted a diplomatic protest from Germany, claiming that the shotguns caused excessive injury and that any U.S. combatants found in possession of them would be subject to execution. The U.S. rejected the claims and threatened reprisals in kind if any of its troops were executed for possession of a shotgun. Grenades Grenades proved to be effective weapons in the trenches. When the war started, grenades were few and poor. Hand grenades were used and improved throughout the war. Contact fuzes became less common, replaced by time fuzes. The British entered the war with the long-handled impact-detonating "Grenade, Hand No 1". This was replaced by the No. 15 "Ball Grenade" to partially overcome some of its inadequacies. An improvised hand grenade, developed in Australia for use by ANZAC troops and called the double-cylinder "jam tin", consisted of a tin filled with dynamite or guncotton, packed round with scrap metal or stones. At the top of the tin, a Bickford safety fuse connected to the detonator; it was lit by either the user or a second person. The "Mills bomb" (Grenade, Hand No. 5) was introduced in 1915 and would serve in its basic form in the British Army until the 1970s. Its improved fusing system relied on the soldier removing a pin while holding down a lever on the side of the grenade. When the grenade was thrown, the safety lever would automatically release, igniting the grenade's internal fuse, which would burn down until the grenade detonated. The French would use the F1 defensive grenade. The major grenades used at the beginning by the German Army were the impact-detonating "discus" or "oyster shell" bomb and the Mod 1913 black powder Kugelhandgranate with a friction-ignited time fuse. In 1915 Germany developed the much more effective Stielhandgranate, nicknamed "potato masher" for its shape, whose variants remained in use for decades; it used a timed fuse system similar to the Mills bomb's. Hand grenades were not the only attempt at projectile explosives for infantry. The rifle grenade was brought into the trenches to attack the enemy from a greater distance. The Hales rifle grenade got little attention from the British Army before the war began, but during the war Germany showed great interest in this weapon. The resulting casualties for the Allies caused Britain to search for a new defense. The Stokes mortar, a lightweight and very portable trench mortar with a short tube, capable of indirect fire, was rapidly developed and widely imitated. Mechanical bomb throwers of lesser range were used in a similar fashion to fire upon the enemy from a safe distance within the trench. The Sauterelle was a grenade-launching crossbow used by French and British troops before the Stokes mortar. Flamethrowers The Imperial German Army deployed flamethrowers (Flammenwerfer) on the Western Front attempting to flush French or British soldiers out of their trenches. Introduced in 1915, the weapon was used with greatest effect during the Hooge battle on the Western Front on 30 July 1915. The German Army had two main types of flamethrowers during the Great War: a small single-person version called the Kleinflammenwerfer and a larger crew-served configuration called the Grossflammenwerfer.
In the latter, one soldier carried the fuel tank while another aimed the nozzle. Both the large and small versions of the flamethrower were of limited use because their short range left the operators exposed to small-arms fire. See also List of German weapons of World War I Romanian military equipment of World War I References Bibliography External links Johnson, Jeffrey: Science and Technology, in: 1914-1918-online. International Encyclopedia of the First World War. Historical film documents on technology during World War I at www.europeanfilmgateway.eu. Zabecki, David T.: Military Developments of World War I, in: 1914-1918-online. International Encyclopedia of the First World War. Audoin-Rouzeau, Stéphane: Weapons, in: 1914-1918-online. International Encyclopedia of the First World War. Pöhlmann, Markus: Close Combat Weapons, in: 1914-1918-online. International Encyclopedia of the First World War. Watanabe, Nathan: Hand Grenade, in: 1914-1918-online. International Encyclopedia of the First World War. Storz, Dieter: Rifles, in: 1914-1918-online. International Encyclopedia of the First World War. Cornish, Paul: Flamethrower, in: 1914-1918-online. International Encyclopedia of the First World War. Storz, Dieter: Artillery, in: 1914-1918-online. International Encyclopedia of the First World War. Cornish, Paul: Machine Gun, in: 1914-1918-online. International Encyclopedia of the First World War. Military equipment of World War I Science and technology during World War I 20th century in science 20th century in technology
Technology during World War I
[ "Technology" ]
7,067
[ "Science and technology during World War I", "Science and technology by war" ]
326,225
https://en.wikipedia.org/wiki/Technology%20during%20World%20War%20II
Technology played a significant role in World War II. Some of the technologies used during the war were developed during the interwar years of the 1920s and 1930s, much was developed in response to needs and lessons learned during the war, while others were beginning to be developed as the war ended. Many wars have had major effects on the technologies that we use in our daily lives, but World War II had the greatest effect on the technology and devices that are used today. Technology also played a greater role in the conduct of World War II than in any other war in history, and had a critical role in its outcome. Many types of technology were customized for military use, and major developments occurred across several fields, including: Weaponry: ships, vehicles, submarines, aircraft, tanks, artillery, small arms; and biological, chemical, and atomic weapons Logistical support: vehicles necessary for transporting soldiers and supplies, such as trains, trucks, tanks, ships, and aircraft Communications and intelligence: devices used for remote sensing, navigation, communication, cryptography and espionage Medicine: surgical innovations, chemical medicines, and techniques Rocketry: guided missiles, medium-range ballistic missiles, and automatic aircraft Military weapons technology experienced rapid advances during World War II, and over six years there was a disorientating rate of change in combat in everything from aircraft to small arms. Indeed, the war began with most armies utilizing technology that had changed little from that of World War I, and in some cases had remained unchanged since the 19th century. For instance, cavalry, trenches, and World War I-era battleships were normal in 1940, but six years later, armies around the world had developed jet aircraft, ballistic missiles, and even atomic weapons in the case of the United States. World War II was the first war where military operations often targeted the research efforts of the enemy. This included the exfiltration of Niels Bohr from German-occupied Denmark to Britain in 1943; the sabotage of Norwegian heavy water production; and the bombing of Peenemünde. Military operations were also conducted to obtain intelligence on the enemy's technology; for example, the Bruneval Raid for German radar and Operation Most III for the German V-2. Between the wars In August 1919 the British Ten Year Rule declared that the government should not expect another war within ten years. Consequently, the British conducted very little military R & D. In contrast, Germany and the Soviet Union were dissatisfied powers that, for different reasons, cooperated with each other on military R & D. The Soviets offered Weimar Germany facilities deep inside the USSR for building and testing arms and for military training, well away from Treaty inspectors' eyes. In return, they asked for access to German technical developments, and for assistance in creating the Red Army General Staff. The great artillery manufacturer Krupp was soon active in the south of the USSR, near Rostov-on-Don. In 1925, the Lipetsk fighter-pilot school was established to train the first pilots for the future Luftwaffe. From 1926, the Reichswehr used the Kama tank school in Kazan and tested chemical weapons at the Tomka gas test site in Saratov Oblast. In turn, the Red Army gained access to these training facilities, as well as military technology and theory from Weimar Germany.
In the late 1920s, Germany helped Soviet industry begin to modernize and assisted in the establishment of tank production facilities at the Leningrad Bolshevik Factory and the Kharkiv Locomotive Factory. This cooperation would break down when Hitler rose to power in 1933. The failure of the World Disarmament Conference marked the beginning of the arms race leading to war. In France the lesson of World War I was translated into the Maginot Line, which was supposed to hold a line at the border with Germany. The Maginot Line did achieve its political objective of ensuring that any German invasion had to go through Belgium, ensuring that France would have Britain as a military ally. France and Russia had more, and much better, tanks than Germany at the outbreak of their hostilities in 1940. As in World War I, the French generals expected that armour would mostly serve to help infantry break the static trench lines and storm machine gun nests. They thus spread the armour among their infantry divisions, ignoring the new German doctrine of blitzkrieg based on fast, coordinated movement using concentrated armour attacks, against which the only effective defense was mobile anti-tank guns, as the old infantry anti-tank rifles were ineffective against the new medium and heavy tanks. Air power was a major concern of Germany and Britain between the wars. The trade in aircraft engines continued, with Britain selling hundreds of its best to German firms, which used them in their first generation of aircraft and much improved them for use in later German aircraft. These new inventions led the way to major success for the Germans in World War II. As always, Germany was at the forefront of internal combustion engine development. The laboratory of Ludwig Prandtl at the University of Göttingen was the world center of aerodynamics and fluid dynamics in general, until its dispersal after the Allied victory. This contributed to the German development of jet aircraft and of submarines with improved underwater performance. Meanwhile, the RAF secretly developed the Chain Home radar and the Dowding system for defending against enemy planes. Induced nuclear fission was discovered in Germany in 1939 by Otto Hahn, with the theoretical interpretation supplied by the expatriates Lise Meitner and Otto Frisch in Sweden, but many of the scientists needed to develop nuclear power had already been lost, due to Nazi anti-Jewish and anti-intellectual policies. Scientists have been at the heart of warfare and their contributions have often been decisive. As Ian Jacob, the wartime military secretary of Winston Churchill, famously remarked on the influx of refugee scientists (including 19 Nobel laureates), "the Allies won the [Second World] War because our German scientists were better than their German scientists". Allied cooperation The Allies of World War II cooperated extensively in the development and manufacture of existing and new technologies to support military operations and intelligence gathering during the Second World War. There were various ways in which the Allies cooperated, including the American Lend-Lease scheme and hybrid weapons such as the Sherman Firefly, as well as the British Tube Alloys nuclear weapons research project, which was absorbed into the American-led Manhattan Project. Several technologies invented in Britain proved critical to the military and were widely manufactured by the Allies during the Second World War. The origin of the cooperation stemmed from a 1940 visit by the Aeronautical Research Committee chairman Henry Tizard, who arranged to transfer U.K.
military technology to the U.S. in the event of the successful invasion of the U.K. that Hitler was planning as Operation Sea Lion. Tizard led a British technical mission, known as the Tizard Mission, containing details and examples of British technological developments in fields such as radar, jet propulsion and also the early British research into the atomic bomb. One of the devices brought to the U.S. by the Mission, the resonant cavity magnetron, was later described as "the most valuable cargo ever brought to our shores". Vehicles The best jet fighters at the end of the war easily outflew any of the leading aircraft of 1939, such as the Spitfire Mark I. The early-war bombers that caused such carnage would almost all have been shot down in 1945, many by radar-aimed, proximity-fuse-detonated anti-aircraft fire, just as the 1941 "invincible fighter", the Zero, had by 1944 become the "turkey" of the "Marianas Turkey Shoot". The best late-war tanks, such as the Soviet JS-3 heavy tank or the German Panther medium tank, handily outclassed the best tanks of 1939 such as Panzer IIIs. In the navy, the battleship, long seen as the dominant element of sea power, was displaced by the greater range and striking power of the aircraft carrier. The critical importance of amphibious landings stimulated the Western Allies to develop the Higgins boat, a primary troop landing craft; the DUKW, a six-wheel-drive amphibious truck; amphibious tanks to enable beach landing attacks; and Landing Ship, Tanks to land tanks on beaches. Increased organization and coordination of amphibious assaults, coupled with the resources necessary to sustain them, caused the complexity of planning to increase by orders of magnitude, thus requiring formal systematization and giving rise to what has become the modern management methodology of project management, by which almost all modern engineering, construction and software developments are organized. Aircraft In the Western European Theatre of World War II, air power became crucial throughout the war, both in tactical and strategic operations (respectively, battlefield and long-range). Superior German aircraft, aided by the ongoing introduction of design and technology innovations, allowed the German armies to overrun Western Europe with great speed in 1940, assisted by a lack of Allied aircraft, which in any case lagged in design and technical development during the slump in research investment after the Great Depression. Since the end of World War I, the French Air Force had been badly neglected, as military leaders preferred to spend money on ground armies and static fortifications to fight another World War I-style war. As a result, by 1940, the French Air Force had only 1,562 planes, which, together with 1,070 RAF aircraft, faced 5,638 Luftwaffe fighters and fighter-bombers. Most French airfields were located in north-east France, and were quickly overrun in the early stages of the campaign. The Royal Air Force of the United Kingdom possessed some very advanced fighter planes, such as Spitfires and Hurricanes, but these were not useful for attacking ground troops on a battlefield, and the small number of planes dispatched to France with the British Expeditionary Force were destroyed fairly quickly. Subsequently, the Luftwaffe was able to achieve air superiority over France in 1940, giving the German military an immense advantage in terms of reconnaissance and intelligence.
German aircraft rapidly achieved air superiority over France in early 1940, allowing the Luftwaffe to begin a campaign of strategic bombing against British cities. Utilizing France's airfields near the English Channel, the Germans were able to launch raids on London and other cities during the Blitz, with varying degrees of success. After World War I, the concept of massed aerial bombing—"The bomber will always get through"—had become very popular with politicians and military leaders seeking an alternative to the carnage of trench warfare, and as a result, the air forces of Britain, France, and Germany had developed fleets of bomber planes to enable this (France's bomber wing was severely neglected, whilst Germany's bombers were developed in secret, as they were explicitly forbidden by the Treaty of Versailles). Air warfare of World War II began with the bombing of Shanghai by the Imperial Japanese Navy on January 28, 1932, and again in August 1937. The bombings during the Spanish Civil War (1936–1939) further demonstrated the power of strategic bombing, and so air forces in Europe and the United States came to view bomber aircraft as extremely powerful weapons which, in theory, could bomb an enemy nation into submission on their own. The resulting fear of bombers triggered major developments in aircraft technology. The Spanish Civil War had proved that tactical dive-bombing using Stukas was a very efficient way of destroying enemy troop concentrations, and so resources and money had been devoted to the development of smaller bomber craft. As a result, the Luftwaffe was forced to attack London in 1940 with heavily overloaded Heinkel and Dornier medium bombers, and even with the unsuitable Junkers Ju 87. These bombers were painfully slow—engineers had been unable to develop sufficiently large piston aircraft engines (those that were produced tended to explode through extreme overheating), and so the bombers used for the Battle of Britain were woefully undersized. As German bombers had not been designed for long-range strategic missions, they lacked sufficient defenses. The Messerschmitt Bf 109 fighter escorts had not been equipped to carry enough fuel to guard the bombers on both the outbound and return journeys, and the longer-range Bf 110s could be outmaneuvered by the short-range British fighters. (A bizarre feature of the war was how long it took to conceive of the drop tank.) The air defense was well organized and equipped with effective radar that survived the bombing. As a result, German bombers were shot down in large numbers, and were unable to inflict enough damage on cities and military-industrial targets to force Britain out of the war in 1940 or to prepare for the planned invasion. Nazi Germany put into production only one large, long-range strategic bomber (the Heinkel He 177 Greif, with many delays and problems), while the Amerikabomber concept resulted only in prototypes. British long-range bomber planes such as the Short Stirling had been designed before 1939 for strategic flights and given a large armament, but their technology still suffered from numerous flaws. The smaller and shorter-ranged Bristol Blenheim, the RAF's most-used bomber, was defended by only one hydraulically operated machine-gun turret, which was soon revealed to be incapable of defending against squadrons of German fighter planes.
American bomber planes such as the B-17 Flying Fortress had been built before the war as the only adequate long-range bombers in the world, designed to patrol the long American coastlines. Even with six machine-gun turrets providing 360° cover, the B-17s were still vulnerable without fighter protection when used in large formations. Despite the abilities of Allied bombers, Germany was not quickly crippled by Allied strategic bombing during World War II. Accuracy was poor and Allied airmen frequently could not find their targets at night. The bombs used by the Allies were technologically advanced devices, but mass production meant that these precision weapons were often made sloppily and so failed to explode. German industrial production actually rose continuously into 1944. Significantly, the bomber offensive kept the revolutionary Type XXI U-boat from entering service during the war. Moreover, Allied air raids had a serious propaganda impact on the German government, prompting Germany to begin serious development of air defense technology—in the form of fighter planes. The practical jet aircraft age began just before the start of the war with the development of the Heinkel He 178, the first aircraft to fly solely under turbojet power. Late in the war, the Germans brought in the first operational jet fighter, the Messerschmitt Me 262. However, despite their seeming technological edge, German jets were often hampered by technical problems, such as short engine lives, with the Me 262's engines having an estimated operating life of just ten hours before failing. German jets were also overwhelmed by Allied air superiority, frequently being destroyed on or near the airstrip. The first and only operational Allied jet fighter of the war, the British Gloster Meteor, saw combat against German V-1 flying bombs but was not significantly superior to top-line, late-war piston-driven aircraft. Aircraft saw rapid and broad development during the war to meet the demands of aerial combat and address lessons learned from combat experience. From the open cockpit airplane to the sleek jet fighter, many different types were employed, often designed for very specific missions. Aircraft were used in anti-submarine warfare against German U-boats, by the Germans to mine shipping lanes, and by the Japanese against previously formidable Royal Navy battleships such as HMS Prince of Wales. During the war the Germans produced various glide bombs, which were the first "smart" weapons; the V-1 flying bomb, which was the first cruise missile weapon; and the V-2 rocket, the first ballistic missile weapon. The latter of these was the first step into the space age as its trajectory took it through the stratosphere, higher and faster than any aircraft. This later led to the development of the intercontinental ballistic missile (ICBM). Wernher von Braun led the V-2 development team and later emigrated to the United States, where he contributed to the development of the Saturn V rocket, which took men to the moon in 1969. Fuel The Axis countries had serious shortages of petroleum from which to make liquid fuel. The Allies had much more petroleum production. Germany, long before the war, developed a process to make synthetic fuel from coal. Synthesis factories were principal targets of the Oil Campaign of World War II. The USA added tetraethyl lead to its aviation fuel, with which it supplied Britain and other Allies. This octane-enhancing additive allowed higher compression ratios and thus higher efficiency, giving more speed and range to Allied airplanes and reducing the cooling load. 
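The gain from higher compression can be made concrete with the ideal Otto-cycle efficiency formula. The figures below are a minimal illustrative calculation under textbook assumptions (an idealized air-standard cycle with heat-capacity ratio γ ≈ 1.4), not measured values for any particular wartime engine:

\[
\eta = 1 - \frac{1}{r^{\gamma - 1}}
\]

where \(r\) is the compression ratio. Raising \(r\) from 6 to 8, for example, lifts the ideal efficiency from \(1 - 6^{-0.4} \approx 0.51\) to \(1 - 8^{-0.4} \approx 0.56\), roughly a ten percent relative gain in fuel economy before real-world losses. Higher-octane fuel mattered because it resists detonation (knock), which is what limits the usable compression ratio in the first place.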
Land vehicles The Treaty of Versailles had imposed severe restrictions upon Germany constructing vehicles for military purposes, and so throughout the 1920s and 1930s, German arms manufacturers and the Wehrmacht had begun secretly developing tanks. As these vehicles were produced in secret, their technical specifications and battlefield potentials were largely unknown to the European Allies until the war actually began. French and British generals believed that a future war with Germany would be fought under very similar conditions as those of 1914–1918. Both invested in thickly armoured, heavily armed vehicles designed to cross shell-damaged ground and trenches under fire. At the same time the British also developed faster but lightly armoured cruiser tanks to range behind the enemy lines. Only a handful of French tanks had radios, and these often broke as the tank lurched over uneven ground. German tanks were, by contrast, all equipped with radios, allowing them to communicate with one another throughout battles, whilst French tank commanders could rarely contact other vehicles. The Matilda Mk I tanks of the British Army were also designed for infantry support and were protected by thick armour. This suited trench warfare, but made the tanks painfully slow in open battles. Their light armament was usually unable to inflict serious damage on German vehicles. The exposed caterpillar tracks were easily broken by gunfire, and the Matilda tanks had a tendency to incinerate their crews if hit, as the petrol tanks were located on the top of the hull. By contrast, the Matilda II infantry tank, fielded in smaller numbers, was largely invulnerable to German gunfire and its gun was able to punch through the German tanks. However, French and British tanks were at a disadvantage against the air-supported German armoured assaults, and a lack of armoured support contributed significantly to the rapid Allied collapse in 1940. World War II marked the first full-scale war where mechanization played a significant role. Most nations did not begin the war equipped for this. Even the vaunted German Panzer forces relied heavily on non-motorised support and flank units in large operations. While Germany recognized and demonstrated the value of concentrated use of mechanized forces, it never had these units in enough quantity to supplant traditional units. The British also saw the value in mechanization: for them it was a way to enhance an otherwise limited manpower reserve. America likewise sought to create a mechanized army; for the United States, it was not so much a matter of limited troops, but of a strong industrial base that could afford such equipment on a grand scale. The most visible vehicles were the tanks of World War II, forming the armored spearhead of mechanized warfare. Their impressive firepower and armor made them the premier fighting machine of ground warfare. However, producing the large number of trucks and lighter vehicles that kept the infantry, artillery, and others moving was also a massive undertaking. Ships Naval warfare changed dramatically during World War II, with the ascent of the aircraft carrier to the premier vessel of the fleet, and the impact of increasingly capable submarines on the course of the war. 
The development of new ships during the war was somewhat limited due to the protracted time period needed for production, but important developments were often retrofitted to older vessels. Advanced German submarine types came into service too late and after nearly all the experienced crews had been lost. Alongside the aircraft carriers, their escorting destroyers were advanced as well. From the Imperial Japanese Navy, the Fubuki-class destroyer was introduced. The Fubuki class set a new standard not only for Japanese vessels, but for destroyers around the world. At a time when British and American destroyers had changed little from their un-turreted, single-gun mounts and light weaponry, the Japanese destroyers were bigger, more powerfully armed, and faster than any similar class of vessel in the other fleets. The Japanese destroyers of World War II are said to have been the world's first modern destroyers. The German U-boats were used primarily to stop or destroy the resources from the United States and Canada coming across the Atlantic. Submarines were critical in the Pacific Ocean as well as in the Atlantic Ocean. Advances in submarine technology included the snorkel. Japanese defenses against Allied submarines were ineffective. Much of the merchant fleet of the Empire of Japan, needed to supply its scattered forces and bring supplies such as petroleum and food back to the Japanese Archipelago, was sunk. Among the warships sunk by submarines was the war's largest aircraft carrier, the Shinano. The Kriegsmarine introduced the pocket battleship to get around constraints imposed by the Treaty of Versailles. Innovations included the use of diesel engines and welded rather than riveted hulls. The most important shipboard advances were in the field of anti-submarine warfare. Driven by the desperate necessity of keeping Britain supplied, technologies for the detection and destruction of submarines were advanced at high priority. The use of ASDIC (sonar) became widespread, as did the installation of shipboard and airborne radar. The Allies' Ultra code-breaking allowed convoys to be steered around German U-boat wolfpacks. Weapons The actual weapons (guns, mortars, artillery, bombs, and other devices) were as diverse as the participants and objectives. A large array were developed during the war to meet specific needs that arose, but many traced their early development to prior to World War II. Torpedoes began to use magnetic detonators; compass-directed, programmed and even acoustic guidance systems; and improved propulsion. Fire-control systems continued to develop for ships' guns and came into use for torpedoes and anti-aircraft fire. Human torpedoes and the Hedgehog were also developed. Armoured vehicles: the tank destroyer; specialist tanks for combat engineering, including mine-clearing flail tanks, flame tanks, and amphibious designs. Aircraft: glide bombs – the first "smart bombs", such as the Fritz X anti-shipping missile, had wire or radio remote control; the world's first jet fighter (Messerschmitt Me 262) and jet bomber (Arado 234); the world's first operational military helicopters (Flettner Fl 282); the world's first rocket-powered fighter (Messerschmitt Me 163). Missiles: the pulse-jet-powered V-1 flying bomb was the world's first cruise missile. Rockets progressed enormously: the V-2 rocket, Katyusha rocket artillery, and air-launched rockets. Specialised bombs: cluster bombs, blockbuster bombs, bouncing bombs, and bunker busters. 
Specialised warheads: high-explosive anti-tank (HEAT) and high-explosive squash head (HESH) for anti-armour and anti-fortification use. Proximity fuze for shells, bombs and rockets. This fuze is designed to detonate an explosive automatically when close enough to the target to destroy it, so a direct hit is not required and the time and place of closest approach does not need to be estimated. Some torpedoes and mines used a magnetic pistol, a sort of proximity fuze that was not perfected during the war. Guided weapons (by radio or trailing wires): glide bombs, crawling bombs and rockets – the precursors of today's precision-guided munitions existed between 1942 and 1945 in the German Fritz X and Henschel Hs 293 anti-ship ordnance designs, which, along with the American Azon, were all MCLOS radio-guided ordnance designs in World War II service. Self-guiding weapons: torpedoes (sound-seeking, compass-guided and looping), the V-1 missile (compass- and timer-guided), and the U.S. Navy's Bat air-launched anti-ship glide ordnance, using active radar homing for the first time anywhere. Aiming devices for bombs, torpedoes, artillery and machine guns, using special-purpose mechanical and electronic analog and (perhaps) digital "computers". The mechanical analog Norden bomb sight is a well-known example. The first generation of nerve agents was invented and produced in Germany, but was not used as a weapon. Napalm was developed, but did not see wide use until the Korean War. Plastic explosives like Nobel 808, Hexoplast 75, and Compositions C and C2 were introduced. Small arms development New production methods for weapons such as stamping, riveting, and welding came into being to produce the number of arms needed. Design and production methods had advanced enough to manufacture weapons of reasonable reliability such as the PPSh-41, PPS-42, Sten, Beretta Model 38, MP 40, M3 Grease Gun, Gewehr 43, Thompson submachine gun and the M1 Garand rifle. Other weapons commonly found during World War II included the American Browning Automatic Rifle (BAR), the M1 carbine, and the Colt M1911A1; the Japanese Type 11 and Type 96 machine guns and the Arisaka bolt-action rifles were all significant weapons used during the war. World War II saw the establishment of the reliable semi-automatic rifle, such as the American M1 Garand, and, more importantly, of the first widely used assault rifles, named after the German sturmgewehrs of the late war. Earlier hints of this idea included the employment of the Browning Automatic Rifle and the 1916 Fedorov Avtomat in a walking-fire tactic in which men would advance on the enemy position showering it with a hail of lead. The Germans first developed the FG 42 for their paratroopers in the assault and later the Sturmgewehr 44 (StG 44), the world's first assault rifle, firing an intermediate cartridge; the FG 42's use of a full-powered rifle cartridge made it difficult to control. Developments in machine gun technology culminated in the Maschinengewehr 42 (MG42), which was of an advanced design unmatched at the time. It spurred post-war development on both sides of the upcoming Cold War and is still used by some armies to this day, including the German Bundeswehr's MG 3. The Heckler & Koch G3, and many other Heckler & Koch designs, came from its system of operation. The United States military meshed the operating system of the FG 42 with the belt feed system of the MG42 to create the M60 machine gun used in the Vietnam War. 
Despite being overshadowed by self-loading/automatic rifles and sub-machine guns, bolt-action rifles remained the mainstay infantry weapon of many nations during World War II. When the United States entered World War II, there were not enough M1 Garand rifles available to American forces, which forced the US to produce more M1903 rifles as a "stop-gap" measure until sufficient quantities of M1 Garands were available. During the conflict, many new models of bolt-action rifle were produced as a result of lessons learned from the First World War, with the designs of a number of bolt-action infantry rifles being modified in order to speed production and to make the rifles more compact and easier to handle. Examples include the German Mauser Kar98k, the British Lee–Enfield No.4, and the Springfield M1903A3. During the course of World War II, bolt-action rifles and carbines were modified even further to meet the new forms of warfare the armies of certain nations faced, e.g. urban warfare and jungle warfare. Examples include the Soviet Mosin–Nagant M1944 carbine, developed as a result of the Red Army's experiences with urban warfare such as the Battle of Stalingrad, and the British Lee–Enfield No.5 carbine, developed for British and Commonwealth forces fighting the Japanese in South-East Asia and the Pacific. When World War II ended in 1945, the small arms that were used in the conflict still saw action in the hands of the armed forces of various nations and guerrilla movements during and after the Cold War era. Nations like the Soviet Union and the United States provided many surplus, World War II-era small arms to a number of nations and political movements during the Cold War era as a prelude to providing more modern infantry weapons. Atomic bomb The discovery of nuclear fission by German chemists Otto Hahn and Fritz Strassmann in 1938, and its theoretical explanation by Lise Meitner and Otto Frisch, made the development of an atomic bomb a theoretical possibility. The prospect that a German atomic bomb project would develop one first alarmed scientists who were refugees from Nazi Germany and other fascist countries. In Britain, Frisch and Rudolf Peierls, working under Mark Oliphant at the University of Birmingham, made a breakthrough investigating the critical mass of uranium-235 in June 1939. Their calculations indicated that it was within an order of magnitude of 10 kilograms, which was small enough to be carried by a bomber of the day. Their March 1940 Frisch–Peierls memorandum prompted the creation of the MAUD Committee to investigate. A directorate known as Tube Alloys was established in the Department of Scientific and Industrial Research under Wallace Akers to pursue the development of an atomic bomb. In July 1940, Britain offered to give the United States access to its scientific research, and the Tizard Mission's John Cockcroft briefed American scientists on British developments. He discovered that although an American atomic bomb project already existed, it was smaller than the British one, and not as far advanced. Oliphant flew to the United States in late August 1941 and spoke persuasively to Ernest O. Lawrence and other key American physicists about the feasibility and potential power of an atomic bomb. Between 1942 and 1946, the American project was under the direction of Brigadier General Leslie R. Groves Jr. of the United States Army Corps of Engineers. 
The Army component of the project was designated the "Manhattan District" as its first headquarters were in Manhattan; this name gradually superseded the official codename, Development of Substitute Materials, for the entire project. The British and American projects were merged with the Quebec Agreement in August 1943, and a British mission joined the Manhattan Project's sites in the United States. The Manhattan Project began modestly, but grew to employ nearly 130,000 people at its peak. Due to high turnover, over 500,000 people worked on the project. Three entire secret cities were built at Oak Ridge, Tennessee; Richland, Washington; and Los Alamos, New Mexico. The Manhattan Project cost nearly US$2 billion. Over 90 percent of the cost was for building factories and producing fissile material, with less than 10 percent for development and production of the weapons. It was the second most expensive weapons project undertaken by the United States in World War II, behind only the Boeing B-29 Superfortress bomber. The fissile uranium-235 isotope makes up only 0.7 percent of natural uranium. Because it is chemically identical to the most common isotope, uranium-238, and has almost the same mass, separating the two proved challenging. Three methods were employed for uranium enrichment: electromagnetic, gaseous and thermal. This work was carried out at the Clinton Engineer Works at Oak Ridge, Tennessee. In parallel was an effort to produce plutonium, which was theorised to also be fissile and could be produced by the nuclear transmutation of uranium in a nuclear reactor. The feasibility of a nuclear reactor was demonstrated in 1942 at the Manhattan Project's Metallurgical Laboratory at the University of Chicago with the start-up of Chicago Pile-1. A pilot reactor, the X-10 Graphite Reactor, was constructed at the Clinton Engineer Works, and three production reactors were built at the Hanford Engineer Works in Washington state. Work on weapon design was carried out by Project Y at Los Alamos under the direction of Robert Oppenheimer. The Manhattan Project pursued the development of two types of atomic bombs concurrently: a relatively simple gun-type fission weapon known as Thin Man and a more complex implosion-type nuclear weapon known as Fat Man. The gun-type design proved impractical to use with plutonium, so effort was concentrated on the implosion design. A simpler gun-type bomb called Little Boy was then developed that used highly enriched uranium. Atomic bombs were then employed against the Japanese cities of Hiroshima and Nagasaki in August 1945. The German nuclear weapon project failed for a variety of reasons, most notably insufficient resources, time, and a lack of official interest in a project unlikely to yield results before the war ended. The leading nuclear physicist in Germany was Werner Heisenberg. Other key figures in the German project included Manfred von Ardenne, Walther Bothe, Kurt Diebner and Otto Hahn. The Japanese nuclear weapon program also foundered due to a lack of resources despite gaining interest from the government. Electronics, communications and intelligence Electronics rose to prominence quickly. Blitzkrieg was highly effective early in the war, with all German tanks having a radio. Enemy forces quickly learned from their defeats, discarded their obsolete tactics, and installed radios. Combat Information Centers on ships and aircraft established networked computing, later essential to civilian life. 
While prior to the war few electronic devices were seen as important to war, by the middle of the war instruments such as radar and ASDIC (sonar) had become invaluable. Germany started the war ahead in some aspects of radar, but lost ground to research and development of the cavity magnetron in Britain and to later work at the "Radiation Laboratory" of the Massachusetts Institute of Technology. Half of the German theoretical physicists were Jewish and had emigrated or otherwise been lost to Germany long before World War II started. Equipment designed for communications and the interception of communications became critical. Cryptography became an important field during World War II, and the newly developed machine ciphers, mostly rotor machines, were widespread. By the end of 1940, the Germans had broken most American and all British military ciphers except the Enigma-based Typex. The Germans in turn relied widely on their own variants of the Enigma coding machine for encrypting operational communications, and on the Lorenz cipher for strategic messages. The British developed a new method for decoding Enigma, benefiting from information given to Britain by the Polish Cipher Bureau, which had been decoding early versions of Enigma before the war. Later, they also accomplished the cryptanalysis of the Lorenz cipher. The meticulous work of code breakers based at Britain's Bletchley Park played a crucial role in the final defeat of Germany. German radio intelligence operations during World War II were extensive. The intercept part of signals intelligence was for the most part successful, but success in cryptanalysis depended in large part on loose discipline in enemy radio operations. The Americans also used electronic computers for calculations such as ballistics. The Electronic Numerical Integrator and Computer (ENIAC), built in 1945, was the first general-purpose electronic computer. Previously, human computers would spend hours solving such equations, and there were not enough mathematicians to handle the many ballistic equations that needed to be solved. The resulting von Neumann architecture later became the basis of general-purpose computers. Rocketry Rocketry saw great use and many advances in World War II, such as the following. The V-1, also known as the buzz bomb, was an automatic aircraft that would today be known as a "cruise missile". The V-1 was developed at the Peenemünde Army Research Center by the Nazi German Luftwaffe during the Second World War. During initial development it was known by the codename "Cherry Stone". The first of the so-called Vergeltungswaffen series designed for terror bombing of London, the V-1 was fired from launch facilities along the French (Pas-de-Calais) and Dutch coasts. The first V-1 was launched at London on 13 June 1944, one week after (and prompted by) the successful Allied landings in Europe. At its peak, more than one hundred V-1s a day were fired at south-east England, 9,521 in total, decreasing in number as sites were overrun until October 1944, when the last V-1 site in range of Britain was overrun by Allied forces. After this, the V-1s were directed at the port of Antwerp and other targets in Belgium, with 2,448 V-1s being launched. The attacks stopped when the last launch site was overrun on 29 March 1945. The V-2 (German: Vergeltungswaffe 2, "Retribution Weapon 2"), technical name Aggregat-4 (A-4), was the world's first long-range guided ballistic missile. 
The missile, with its liquid-propellant rocket engine, was developed during the Second World War in Germany as a "vengeance weapon", designed to attack Allied cities as retaliation for the Allied bombings of German cities. The V-2 rocket was also the first artificial object to cross the boundary of space. These two rocketry advances took the lives of many civilians in London during 1944 and 1945. Medicine Penicillin was first developed as a practical drug, mass-produced and used during the war. The widespread use of mepacrine (Atabrine) for the prevention of malaria, along with sulfanilamide, blood plasma, and morphine, was also among the chief wartime medical advancements. Advances in the treatment of burns, including the use of skin grafts, mass immunization for tetanus, and improvements in gas masks also took place during the war. The use of metal plates to help heal fractures began during the war. See also Military invention Military funding of science Military production during World War II List of equipment used in World War II List of ships of the Second World War List of aircraft of World War II Notes References Further reading Ford, Brian J. (1969). German Secret Weapons: Blueprint for Mars (Ballantine's Illustrated History of World War II / the Violent Century: Weapons Book #5) Ford, Brian J. (1970). Allied Secret Weapons: The War of Science (Ballantine's Illustrated History of World War II / the Violent Century: Weapons Book #19) Military equipment of World War II Technological races
Technology during World War II
[ "Technology" ]
7,796
[ "Science and technology during World War II", "Science and technology by war" ]
326,285
https://en.wikipedia.org/wiki/Paul%20Elvstr%C3%B8m
Paul Bert Elvstrøm (25 February 1928 – 7 December 2016) was a Danish yachtsman and the founder of Elvstrøm Sails. He won four Olympic gold medals and twenty world titles in a range of classes including Snipe, Soling, Star, Flying Dutchman, Finn, 505, and 5.5 Metre. For his achievements, Elvstrøm was chosen as "Danish Sportsman of the Century." Early life Paul Elvstrøm was born in Hellerup, north of Copenhagen, in a house overlooking the sound between Denmark and Sweden. His father was a sea captain but died when Elvstrøm was young, and he was brought up by his mother along with a brother and sister. A second brother drowned at the age of 5 when he fell off a seawall near the family home. Growing up along the Øresund, Elvstrøm quickly became consumed by sailing, which began with crewing in a club fleet of small clinker keelboats. He was soon given an Oslo dinghy by a neighbour who realised Elvstrøm's mother was too poor to be able to buy one. In his book Elvstrøm Speaks on Yacht Racing he claimed to be ‘word blind’ and unable to read or write when he was at school, which may have been due to dyslexia. It is clear that Elvstrøm considered schooling a distraction from sailing: "I was very bad in school," he said, "The only interest I had was in sailing fast…The teacher knew that if I was not at school, I was sailing." After leaving school he became a member of the Hellerup Sailing Club, where he gained a reputation as an excellent sailor. He funded himself during this period as a bricklayer, but in 1954 also started cutting sails for club members in his basement. Innovation Elvstrøm was noted as a developer of sails and sailing equipment, and later founded Elvstrøm Sails. One of his most successful innovations was a new type of self-bailer. The new features were a wedge-shaped venturi that closes automatically if the boat grounds or hits an obstruction, and a flap that acts as a non-return valve to minimise water coming in if the boat is stationary or moving too slowly for the device to work. Previous automatic bailers would be damaged or destroyed if they met an obstruction, and would let considerable amounts of water in if the boat was moving too slowly. The Elvstrøm self-bailer is still in production under the Andersen brand and has been widely copied; it is still found on Olympic boats, and other grand prix boats at the leading edge of the sport. In 2016, Dan Ibsen, the executive director of the Royal Danish Yacht Club said, “Today the Elvstrøm Bailer is still the only functional bailer on Olympic dinghies and boats around the world.” Other innovations include the Elvstrøm Lifejacket, which was the first specifically designed and produced for active sailors. He also popularised the kicking strap, or boom vang (US). This may take the form of a block and tackle linking a low point on the mast (or an equivalent point on the hull) and the boom close to the mast, which allows the boom to be let out when reaching or running without lifting. This controls the twist of the mainsail from its foot to its head, increasing the sail's power and the boat's speed and controllability. Elvstrøm did not advertise his new invention, leaving his competitors mystified at his superior boat-speed. Investigation of his dinghy revealed nothing as he used to remove the kicking strap before coming ashore. Among the innovations he brought to sailboat racing was the concept of gates instead of a single windward or leeward mark in large regattas. The leeward gate on a windward-leeward course is commonly used. 
The windward gate is less often used due to the difficulties in managing right-of-way around the right gate, the subtleties of which are understood mostly by match racers. He was also instrumental in developing several international yacht racing rules. Training Elvstrøm was a very early innovator in training techniques. For example, he used the technique of 'sitting out' or hiking using toe-straps to a greater degree than previously, getting all his body weight from the knees upwards outside the boat, thus providing extra leverage to enable the boat to remain level in stronger winds and hence go faster than his competitors. This technique required great strength and fitness, and so after the 1948 Olympics, in order to improve his physical conditioning in readiness for the 1952 games, Elvstrøm built a training bench with toe-straps in his garage to replicate the sitting-out position in his dinghy. He then proceeded to spend many training hours on dry land sitting out on the bench at home. “He did take sailing to a level that you had to call it a sport,” said Jesper Bank, a principal at Elvstrøm Sails and a two-time Olympic gold medalist for Denmark. “Before Paul, you would see competitors with pipes in their mouths and wearing skippers’ caps. At that time, they certainly thought he was superhuman.” According to an obituary by the International Finn Association, "He was a sportsman and the first real sailing athlete. He trained harder and longer than anyone else so that when the day of the race came he was better prepared than anyone else. He was famous for his physical strength and fitness, able to out-hike anyone on the race course.” Business In 1954, Elvstrøm established a manufacturing company, Elvstrøm Sails, whose products included masts, booms, and sails. Displaying a keen marketing mind to go along with his engineering nous, Elvstrøm grew the business rapidly, and by the 1970s Elvstrøm products were seen on boats all around the world. Today, Elvstrøm Sails is among the world's leading sailmakers, employing around 300 people worldwide. Elvstrøm founded his business in the family villa just north of Copenhagen. It grew out of its premises multiple times, and today, Elvstrøm Sails is based in Aabenraa in the south of Denmark. Personal life Elvstrøm was married to Anne, who predeceased him by three years; together they had four daughters: Pia, Stine, Gitte and Trine. Elvstrøm continued to sail in his later years until Parkinson's disease began to afflict him. In 2009 he sailed his Dragonfly trimaran — solo — to visit his daughter Gitte and her family on the east coast of Sweden, 600 miles from his home. Elvstrøm's success and celebrity brought personal stress. At the 1972 Games in Munich, under the pressures of competition and the challenges facing his sail-making business, he suffered a nervous breakdown. He died on 7 December 2016 at the age of 88, after battling Alzheimer's disease for a few years. Legacy As well as being remembered as arguably the greatest sailing racer ever, Elvstrøm was also known as a model of sportsmanship. He is famous for his philosophy that, "If you, by winning, are losing your friends, you are not winning." 
Achievements Elvstrøm competed in eight Olympic Games from 1948 to 1988, being one of only nine persons ever (the others are sailor Ben Ainslie, swimmers Michael Phelps and Katie Ledecky, wrestlers Kaori Icho and Mijaín López, speed skater Ireen Wüst and athletes Carl Lewis in the long jump and Al Oerter in the discus) to win four consecutive individual gold medals (1948–60), the first in a Firefly and subsequently in Finns. In his last two Olympic games he sailed the Tornado catamaran class, which, in those days, was normally sailed by two young men, with his daughter Trine Elvstrøm as forward hand. He is one of only five athletes who have competed in the Olympics over a span of 40 years, along with fencer Ivan Joseph Martin Osiier, sailors Magnus Konow and Durward Knowles and showjumper Ian Millar. Elvstrøm won medals at the world championships in the Finn, 505, Snipe, Flying Dutchman, 5.5 Metre, Star, Soling, Tornado, and Half Ton classes. In 1996, Elvstrøm was chosen as "Danish Sportsman of the Century." In 2007, Elvstrøm was among the first six inductees into the ISAF Sailing Hall of Fame. Bibliography Elvstrom, Paul. Expert Dinghy and Keelboat Racing, 1967, Times Books. Elvstrom, Paul. Elvström Speaks on Yacht Racing, 1970, One-Design & Offshore Yachtsman Magazine. Elvstrom, Paul. Elvström Speaks -- to His Sailing Friends on His Life and Racing Career, 1970, Nautical Publishing Company. Paul Elvström Explains the Yacht Racing Rules, first edition 1969; title updated to Paul Elvstrom Explains the Racing Rules of Sailing: 2005–2008 Rules. Updated four-yearly in accordance with racing rules revisions, various authors and publishers. See also Elvstrøm 717 List of athletes with the most appearances at Olympic Games List of multiple Olympic gold medalists in one event Multiple World champion in sailing References External links Paul Elvström, Sailing's Greatest at Sail-World.com 1928 births 2016 deaths Marine engineers Sailmakers Danish male sailors (sport) Hellerup Sejlklub sailors Danish yacht designers Olympic sailors for Denmark Olympic gold medalists for Denmark Olympic medalists in sailing Medalists at the 1960 Summer Olympics Medalists at the 1956 Summer Olympics Medalists at the 1952 Summer Olympics Medalists at the 1948 Summer Olympics Sailors at the 1948 Summer Olympics – Firefly Sailors at the 1952 Summer Olympics – Finn Sailors at the 1956 Summer Olympics – Finn Sailors at the 1960 Summer Olympics – Finn Sailors at the 1968 Summer Olympics – Star Sailors at the 1972 Summer Olympics – Soling Sailors at the 1984 Summer Olympics – Tornado Sailors at the 1988 Summer Olympics – Tornado World champions in sailing for Denmark Finn class world champions Flying Dutchman class world champions Open Snipe class world champions Star class world champions Soling class sailors Sportspeople from Copenhagen Soling class world champions Sportspeople with dyslexia 20th-century Danish sportsmen
Paul Elvstrøm
[ "Engineering" ]
2,161
[ "Marine engineers", "Marine engineering" ]
326,298
https://en.wikipedia.org/wiki/Power%20center%20%28geometry%29
In geometry, the power center of three circles, also called the radical center, is the intersection point of the three radical axes of the pairs of circles. If the radical center lies outside of all three circles, then it is the center of the unique circle (the radical circle) that intersects the three given circles orthogonally; the construction of this orthogonal circle corresponds to Monge's problem. This is a special case of the three conics theorem. The three radical axes meet in a single point, the radical center, for the following reason. The radical axis of a pair of circles is defined as the set of points that have equal power with respect to both circles. For example, for every point on the radical axis of circles 1 and 2, the powers to each circle are equal: h1 = h2. Similarly, for every point on the radical axis of circles 2 and 3, the powers must be equal, h2 = h3. Therefore, at the intersection point of these two lines, all three powers must be equal, h1 = h2 = h3. Since this implies that h1 = h3, this point must also lie on the radical axis of circles 1 and 3. Hence, all three radical axes pass through the same point, the radical center. The radical center has several applications in geometry. It has an important role in a solution to Apollonius' problem published by Joseph Diaz Gergonne in 1814. In the power diagram of a system of circles, all of the vertices of the diagram are located at radical centers of triples of circles. The Spieker center of a triangle is the radical center of its excircles. Several types of radical circles have been defined as well, such as the radical circle of the Lucas circles. Notes Further reading External links Radical Center at Cut-the-Knot Radical Axis and Center at Cut-the-Knot Elementary geometry Geometric centers
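The argument above can be made fully explicit with coordinates, which also yields the radical center directly. The following is a minimal derivation sketch in standard power-of-a-point notation; the symbols p, c_i and r_i are chosen here for illustration and are not taken from the article. The power of a point p with respect to circle i is

\[
h_i(\mathbf{p}) = \lVert \mathbf{p} - \mathbf{c}_i \rVert^2 - r_i^2, \qquad i = 1, 2, 3,
\]

for circles with centers \(\mathbf{c}_i\) and radii \(r_i\). Setting \(h_i(\mathbf{p}) = h_j(\mathbf{p})\), the quadratic term \(\lVert \mathbf{p} \rVert^2\) cancels, leaving the radical axis of circles i and j as a linear equation:

\[
2\,(\mathbf{c}_j - \mathbf{c}_i) \cdot \mathbf{p} = \bigl(\lVert \mathbf{c}_j \rVert^2 - r_j^2\bigr) - \bigl(\lVert \mathbf{c}_i \rVert^2 - r_i^2\bigr),
\]

a line perpendicular to the segment joining the two centers. The radical center is the simultaneous solution of this equation for the pairs (1,2) and (2,3); when the three centers are not collinear, the normal vectors \(\mathbf{c}_2 - \mathbf{c}_1\) and \(\mathbf{c}_3 - \mathbf{c}_2\) are linearly independent, so the resulting 2×2 linear system has a unique solution, and the third equation \(h_1 = h_3\) then holds automatically. If the common power h at that point is positive (the radical center lies outside all three circles), then \(\sqrt{h}\) is the radius of the radical circle that cuts all three given circles orthogonally.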
Power center (geometry)
[ "Physics", "Mathematics" ]
366
[ "Point (geometry)", "Geometric centers", "Elementary mathematics", "Elementary geometry", "Symmetry" ]
326,356
https://en.wikipedia.org/wiki/Covering%20lemma
In the foundations of mathematics, a covering lemma is used to prove that the non-existence of certain large cardinals leads to the existence of a canonical inner model, called the core model, that is, in a sense, maximal and approximates the structure of the von Neumann universe V. A covering lemma asserts that under some particular anti-large cardinal assumption, the core model exists and is maximal in a sense that depends on the chosen large cardinal. The first such result was proved by Ronald Jensen for the constructible universe assuming 0# does not exist, which is now known as Jensen's covering theorem. Example For example, if there is no inner model for a measurable cardinal, then the Dodd–Jensen core model, KDJ, is the core model and satisfies the covering property, that is, for every uncountable set x of ordinals, there is y such that y ⊃ x, y has the same cardinality as x, and y ∈ KDJ. (If 0# does not exist, then KDJ = L.) Versions If the core model K exists (and has no Woodin cardinals), then If K has no ω1-Erdős cardinals, then for a particular countable (in K) and definable in K sequence of functions from ordinals to ordinals, every set of ordinals closed under these functions is a union of a countable number of sets in K. If L=K, these are simply the primitive recursive functions. If K has no measurable cardinals, then for every uncountable set x of ordinals, there is y ∈ K such that x ⊂ y and |x| = |y|. If K has only one measurable cardinal κ, then for every uncountable set x of ordinals, there is y ∈ K[C] such that x ⊂ y and |x| = |y|. Here C is either empty or Prikry generic over K (so it has order type ω and is cofinal in κ) and unique except up to a finite initial segment. If K has no inaccessible limit of measurable cardinals and no proper class of measurable cardinals, then there is a maximal and unique (except for a finite set of ordinals) set C (called a system of indiscernibles) for K such that for every sequence S in K of measure one sets consisting of one set for each measurable cardinal, C minus ∪S is finite. Note that every κ \ C is either finite or Prikry generic for K at κ except for members of C below a measurable cardinal below κ. For every uncountable set x of ordinals, there is y ∈ K[C] such that x ⊂ y and |x| = |y|. For every uncountable set x of ordinals, there is a set C of indiscernibles for total extenders on K such that there is y ∈ K[C] and x ⊂ y and |x| = |y|. K computes the successors of singular and weakly compact cardinals correctly (Weak Covering Property). Moreover, if |κ| > ω1, then cofinality((κ+)K) ≥ |κ|. Extenders and indiscernibles For core models without overlapping total extenders, the systems of indiscernibles are well understood. Although the system may depend on the set to be covered (if K has an inaccessible limit of measurable cardinals), it is well-determined and unique in a weaker sense. One application of the covering is counting the number of (sequences of) indiscernibles, which gives optimal lower bounds for various failures of the singular cardinals hypothesis. For example, if K does not have overlapping total extenders, and κ is singular strong limit, and 2κ = κ++, then κ has Mitchell order at least κ++ in K. Conversely, a failure of the singular cardinal hypothesis can be obtained (in a generic extension) from κ with o(κ) = κ++. 
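For comparison with these refined statements, the prototypical result underlying them, Jensen's covering theorem for L mentioned in the introduction, can be written out formally. This is a standard formulation; the notation is chosen here rather than taken from the article:

\[
\neg \exists\, 0^{\#} \;\Longrightarrow\; \forall x \subseteq \mathrm{Ord}\ \bigl( |x| \ge \aleph_1 \rightarrow \exists y \in L\ ( x \subseteq y \wedge |y| = |x| ) \bigr).
\]

The restriction to uncountable x cannot simply be dropped: for a countable set of ordinals the theorem only yields a covering set y ∈ L of size at most ℵ1 (by covering x ∪ ω1 instead), and this bound can be optimal.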
For core models with overlapping total extenders (that is, with a cardinal strong up to a measurable one), the systems of indiscernibles are poorly understood, and applications (such as the weak covering) tend to avoid rather than analyze the indiscernibles. Additional properties If K exists, then every regular Jónsson cardinal is Ramsey in K. Every singular cardinal that is regular in K is measurable in K. Also, if the core model K(X) exists above a set X of ordinals, then it has the above-discussed covering properties above X. References Inner model theory Lemmas Covering lemmas
Covering lemma
[ "Mathematics" ]
983
[ "Covering lemmas", "Mathematical theorems", "Mathematical problems", "Lemmas" ]
326,357
https://en.wikipedia.org/wiki/Neurotoxin
Neurotoxins are toxins that are destructive to nerve tissue (causing neurotoxicity). Neurotoxins are an extensive class of exogenous chemical neurological insults that can adversely affect function in both developing and mature nervous tissue. The term can also be used to classify endogenous compounds, which, when abnormally contacted, can prove neurologically toxic. Though neurotoxins are often neurologically destructive, their ability to specifically target neural components is important in the study of nervous systems. Common examples of neurotoxins include lead, ethanol (drinking alcohol), glutamate, nitric oxide, botulinum toxin (e.g. Botox), tetanus toxin, and tetrodotoxin. Some substances such as nitric oxide and glutamate are in fact essential for proper function of the body and only exert neurotoxic effects at excessive concentrations. Neurotoxins inhibit neuron control over ion concentrations across the cell membrane, or communication between neurons across a synapse. Local pathology of neurotoxin exposure often includes neuron excitotoxicity or apoptosis but can also include glial cell damage. Macroscopic manifestations of neurotoxin exposure can include widespread central nervous system damage such as intellectual disability, persistent memory impairments, epilepsy, and dementia. Additionally, neurotoxin-mediated peripheral nervous system damage such as neuropathy or myopathy is common. Support has been shown for a number of treatments aimed at attenuating neurotoxin-mediated injury, such as antioxidant and antitoxin administration. Background Exposure to neurotoxins in society is not new, as civilizations have been exposed to neurologically destructive compounds for thousands of years. One notable example is the possible significant lead exposure during the Roman Empire resulting from the development of extensive plumbing networks and the habit of boiling vinegared wine in lead pans to sweeten it, the process generating lead acetate, known as "sugar of lead". In part, neurotoxins have been part of human history because of the fragile and susceptible nature of the nervous system, making it highly prone to disruption. The nervous tissue found in the brain, spinal cord, and periphery comprises an extraordinarily complex biological system that largely defines many of the unique traits of individuals. As with any highly complex system, however, even small perturbations to its environment can lead to significant functional disruptions. Properties leading to the susceptibility of nervous tissue include a high surface area of neurons, a high lipid content which retains lipophilic toxins, high blood flow to the brain inducing increased effective toxin exposure, and the persistence of neurons through an individual's lifetime, leading to compounding of damages. As a result, the nervous system has a number of mechanisms designed to protect it from internal and external assaults, including the blood brain barrier. The blood–brain barrier (BBB) is one critical example of protection which prevents toxins and other adverse compounds from reaching the brain. As the brain requires nutrient entry and waste removal, it is perfused by blood flow. Blood can carry a number of ingested toxins, however, which would induce significant neuron death if they reach nervous tissue. 
Thus, protective cells termed astrocytes surround the capillaries in the brain and absorb nutrients from the blood and subsequently transport them to the neurons, effectively isolating the brain from a number of potential chemical insults. This barrier creates a tight hydrophobic layer around the capillaries in the brain, inhibiting the transport of large or hydrophilic compounds. In addition to the BBB, the choroid plexus provides a layer of protection against toxin absorption in the brain. The choroid plexuses are vascularized layers of tissue found in the third, fourth, and lateral ventricles of the brain, which, through the function of their ependymal cells, are responsible for the synthesis of cerebrospinal fluid (CSF). Importantly, through the selective passage of ions and nutrients and the trapping of heavy metals such as lead, the choroid plexuses maintain a strictly regulated environment for the brain and spinal cord. By being hydrophobic and small, or by inhibiting astrocyte function, some compounds including certain neurotoxins are able to penetrate into the brain and induce significant damage. In modern times, scientists and physicians have been presented with the challenge of identifying and treating neurotoxins, which has resulted in a growing interest in both neurotoxicology research and clinical studies. Though clinical neurotoxicology is largely a burgeoning field, extensive inroads have been made in the identification of many environmental neurotoxins, leading to the classification of 750 to 1000 known potentially neurotoxic compounds. Due to the critical importance of finding neurotoxins in common environments, specific protocols have been developed by the United States Environmental Protection Agency (EPA) for testing and determining neurotoxic effects of compounds (USEPA 1998). Additionally, in vitro systems have increased in use as they provide significant improvements over the more common in vivo systems of the past. Examples of improvements include tractable, uniform environments, and the elimination of contaminating effects of systemic metabolism. In vitro systems, however, have presented problems as it has been difficult to properly replicate the complexities of the nervous system, such as the interactions between supporting astrocytes and neurons in creating the BBB. Further complicating the determination of neurotoxins when testing in vitro, neurotoxicity and cytotoxicity may be difficult to distinguish, as exposing neurons directly to compounds may not be possible in vivo, as it is in vitro. Additionally, the response of cells to chemicals may not accurately convey a distinction between neurotoxins and cytotoxins, as symptoms like oxidative stress or skeletal modifications may occur in response to either. In an effort to address this complication, neurite outgrowths (either axonal or dendritic) in response to applied compounds have recently been proposed as a more accurate distinction between true neurotoxins and cytotoxins in an in vitro testing environment. Due to the significant inaccuracies associated with this process, however, it has been slow in gaining widespread support. Additionally, biochemical mechanisms have become more widely used in neurotoxin testing, such that compounds can be screened for their ability to interfere with cell mechanisms, like the inhibition of acetylcholinesterase by organophosphates (which include parathion and sarin gas). 
Though methods of determining neurotoxicity still require significant development, the identification of deleterious compounds and of toxin exposure symptoms has undergone significant improvement. Applications in neuroscience Though diverse in chemical properties and functions, neurotoxins share the common property that they act by some mechanism leading to either the disruption or destruction of necessary components within the nervous system. Neurotoxins, however, by their very design can be very useful in the field of neuroscience. As the nervous system in most organisms is both highly complex and necessary for survival, it has naturally become a target for attack by both predators and prey. As venomous organisms often use their neurotoxins to subdue a predator or prey very rapidly, toxins have evolved to become highly specific to their target channels such that the toxin does not readily bind other targets (see ion channel toxins). As such, neurotoxins provide an effective means by which certain elements of the nervous system may be accurately and efficiently targeted. An early example of neurotoxin-based targeting used radiolabeled tetrodotoxin to assay sodium channels and obtain precise measurements about their concentration along nerve membranes. Likewise, through isolation of certain channel activities, neurotoxins have provided the ability to improve the original Hodgkin–Huxley model of the neuron, in which it was theorized that single generic sodium and potassium channels could account for most nervous tissue function. From this basic understanding, the use of common compounds such as tetrodotoxin, tetraethylammonium, and bungarotoxins has led to a much deeper understanding of the distinct ways in which individual neurons may behave. Mechanisms of activity As neurotoxins are compounds which adversely affect the nervous system, many of the mechanisms through which they function involve the inhibition of neuron cellular processes. These inhibited processes can range from membrane depolarization mechanisms to inter-neuron communication. By inhibiting the ability of neurons to perform their expected intracellular functions, or to pass a signal to a neighboring cell, neurotoxins can induce systemic nervous system arrest, as in the case of botulinum toxin, or even nervous tissue death. The time required for the onset of symptoms upon neurotoxin exposure can vary between different toxins, being on the order of hours for botulinum toxin and years for lead. Inhibitors Sodium channel Tetrodotoxin Tetrodotoxin (TTX) is a poison produced by organisms belonging to the Tetraodontiformes order, which includes the puffer fish, ocean sunfish, and porcupine fish. Within the puffer fish, TTX is found in the liver, gonads, intestines, and skin. TTX can be fatal if consumed, and has become a common form of poisoning in many countries. Common symptoms of TTX consumption include paraesthesia (often restricted to the mouth and limbs), muscle weakness, nausea, and vomiting, and often manifest within 30 minutes of ingestion. The primary mechanism by which TTX is toxic is through the inhibition of sodium channel function, which reduces the functional capacity of neuron communication. This inhibition largely affects a susceptible subset of sodium channels known as TTX-sensitive (TTX-s), which also happens to be largely responsible for the sodium current that drives the depolarization phase of neuron action potentials. 
TTX-resistant (TTX-r) is another form of sodium channel which has limited sensitivity to TTX, and is largely found in small-diameter axons such as those found in nociception neurons. When a significant level of TTX is ingested, it will bind sodium channels on neurons and reduce their membrane permeability to sodium. This results in an increased effective threshold of excitatory signals required to induce an action potential in a postsynaptic neuron. The effect of this increased signaling threshold is a reduced excitability of postsynaptic neurons, and subsequent loss of motor and sensory function, which can result in paralysis and death. Though assisted ventilation may increase the chance of survival after TTX exposure, there is currently no antitoxin. The use of the acetylcholinesterase inhibitor neostigmine or the muscarinic acetylcholine antagonist atropine (which will inhibit parasympathetic activity), however, can increase sympathetic nerve activity enough to improve the chance of survival after TTX exposure. Potassium channel Tetraethylammonium Tetraethylammonium (TEA) is a compound that, like a number of neurotoxins, was first identified through its damaging effects on the nervous system and shown to have the capacity to inhibit the function of motor nerves and thus the contraction of the musculature in a manner similar to that of curare. Additionally, chronic TEA administration would induce muscular atrophy. It was later determined that TEA functions in vivo primarily through its ability to inhibit both the potassium channels responsible for the delayed rectifier seen in an action potential and some population of calcium-dependent potassium channels. It is this capability to inhibit potassium flux in neurons that has made TEA one of the most important tools in neuroscience. It has been hypothesized that the ability of TEA to inhibit potassium channels is derived from its space-filling structure, similar to that of potassium ions. What makes TEA very useful for neuroscientists is its specific ability to eliminate potassium channel activity, thereby allowing the study of the neuron response contributions of other ion channels such as voltage-gated sodium channels. In addition to its many uses in neuroscience research, TEA has been shown to perform as an effective treatment of Parkinson's disease through its ability to limit the progression of the disease. Chloride channel Chlorotoxin Chlorotoxin (Cltx) is the active compound found in scorpion venom, and is primarily toxic because of its ability to inhibit the conductance of chloride channels. Ingestion of lethal volumes of Cltx results in paralysis through this ion channel disruption. Similar to botulinum toxin, Cltx has been shown to possess significant therapeutic value. Evidence has shown that Cltx can inhibit the ability of gliomas to infiltrate healthy nervous tissue in the brain, significantly reducing the potential invasive harm caused by tumors. Calcium channel Conotoxin Conotoxins represent a category of poisons produced by the marine cone snail, and are capable of inhibiting the activity of a number of ion channels such as calcium, sodium, or potassium channels. In many cases, the toxins released by the different types of cone snails include a range of different types of conotoxins, which may be specific for different ion channels, thus creating a venom capable of widespread nerve function interruption. 
One of the unique forms of conotoxins, ω-conotoxin (ω-CgTx), is highly specific for calcium channels and has shown usefulness in isolating them from a system. As calcium flux is necessary for proper excitability of a cell, any significant inhibition could prevent a large amount of functionality. Significantly, ω-CgTx is capable of long-term binding to and inhibition of voltage-dependent calcium channels located in the membranes of neurons but not those of muscle cells. Synaptic vesicle release Botulinum toxin Botulinum toxin (BTX) is a group of neurotoxins consisting of eight distinct compounds, referred to as BTX-A, B, C, D, E, F, G, and H, which are produced by the bacterium Clostridium botulinum and lead to muscular paralysis. A notably unique feature of BTX is its relatively common therapeutic use in treating dystonia and spasticity disorders, as well as in inducing muscular atrophy, despite being the most poisonous substance known. BTX functions peripherally to inhibit acetylcholine (ACh) release at the neuromuscular junction through degradation of the SNARE proteins required for ACh vesicle-membrane fusion. As the toxin is highly biologically active, an estimated dose of 1 μg/kg body weight is sufficient to induce an insufficient tidal volume and resultant death by asphyxiation. Due to its high toxicity, BTX antitoxins have been an active area of research. It has been shown that capsaicin (the active compound responsible for the heat in chili peppers) can bind the TRPV1 receptor expressed on cholinergic neurons and inhibit the toxic effects of BTX. Tetanus toxin Tetanus neurotoxin (TeNT) is a compound that functionally reduces inhibitory transmissions in the nervous system, resulting in muscular tetany. TeNT is similar to BTX, and is in fact highly similar in structure and origin; both belong to the same category of clostridial neurotoxins. Like BTX, TeNT inhibits inter-neuron communication by blocking vesicular neurotransmitter (NT) release. One notable difference between the two compounds is that while BTX inhibits muscular contractions, TeNT induces them. Though both toxins inhibit vesicle release at neuron synapses, the reason for this different manifestation is that BTX functions mainly in the peripheral nervous system (PNS) while TeNT is largely active in the central nervous system (CNS). This is a result of TeNT migration through motor neurons to the inhibitory neurons of the spinal cord after entering through endocytosis. This results in a loss of function in inhibitory neurons within the CNS, resulting in systemic muscular contractions. Similar to the prognosis of a lethal dose of BTX, TeNT leads to paralysis and subsequent suffocation. Blood brain barrier Aluminium Neurotoxic behavior of aluminium is known to occur upon entry into the circulatory system, where it can migrate to the brain and inhibit some of the crucial functions of the blood–brain barrier (BBB). A loss of function in the BBB can produce significant damage to the neurons in the CNS, as the barrier protecting the brain from other toxins found in the blood will no longer be capable of such action. Though the metal is known to be neurotoxic, effects are usually restricted to patients incapable of removing excess ions from the blood, such as those experiencing renal failure. Patients experiencing aluminium toxicity can exhibit symptoms such as impaired learning and reduced motor coordination. 
Additionally, systemic aluminium levels are known to increase with age and have been shown to correlate with Alzheimer's disease, leading to its implication as a possible neurotoxic contributor to the disease. Despite its known toxicity in its ionic form, studies are divided on the potential toxicity of using aluminium in packaging and cooking appliances. Mercury Mercury is capable of inducing CNS damage by migrating into the brain by crossing the BBB. Mercury exists in a number of different compounds, though methylmercury (MeHg), dimethylmercury and diethylmercury are the only significantly neurotoxic forms. Diethylmercury and dimethylmercury are considered some of the most potent neurotoxins ever discovered. MeHg is usually acquired through consumption of seafood, as it tends to concentrate in organisms high on the food chain. It is known that the mercuric ion inhibits amino acid (AA) and glutamate (Glu) transport, potentially leading to excitotoxic effects. Receptor agonists and antagonists Anatoxin-a Investigations into anatoxin-a, also known as "Very Fast Death Factor", began in 1961 following the deaths of cows that drank from a lake containing an algal bloom in Saskatchewan, Canada. It is a cyanotoxin produced by at least four different genera of cyanobacteria, and has been reported in North America, Europe, Africa, Asia, and New Zealand. Toxic effects from anatoxin-a progress very rapidly because it acts directly on the nerve cells (neurons). The progressive symptoms of anatoxin-a exposure are loss of coordination, twitching, convulsions and rapid death by respiratory paralysis. The nerve tissues which communicate with muscles contain a receptor called the nicotinic acetylcholine receptor. Stimulation of these receptors causes a muscular contraction. The anatoxin-a molecule is shaped so that it fits this receptor, and in this way it mimics the natural neurotransmitter normally used by the receptor, acetylcholine. Once it has triggered a contraction, anatoxin-a does not allow the neurons to return to their resting state, because it is not degraded by cholinesterase, which normally performs this function. As a result, the muscle cells contract permanently, the communication between the brain and the muscles is disrupted and breathing stops. When it was first discovered, the toxin was called the Very Fast Death Factor (VFDF) because when it was injected into the body cavity of mice it induced tremors, paralysis and death within a few minutes. In 1977, the structure of VFDF was determined as a secondary, bicyclic amine alkaloid, and it was renamed anatoxin-a. Structurally, it is similar to cocaine. There is continued interest in anatoxin-a because of the dangers it presents to recreational and drinking waters, and because it is a particularly useful molecule for investigating acetylcholine receptors in the nervous system. The deadliness of the toxin means that it has a high military potential as a toxin weapon. Bungarotoxin Bungarotoxin is a compound with known interaction with nicotinic acetylcholine receptors (nAChRs), which constitute a family of ion channels whose activity is triggered by neurotransmitter binding. Bungarotoxin is produced in a number of different forms, though one of the commonly used forms is the long chain alpha form, α-bungarotoxin, which is isolated from the many-banded krait snake. Though extremely toxic if ingested, α-bungarotoxin has shown extensive usefulness in neuroscience as it is particularly adept at isolating nAChRs due to its high affinity for the receptors. 
As there are multiple forms of bungarotoxin, there are different forms of nAChRs to which they will bind, and α-bungarotoxin is particularly specific for α7-nAChR. This α7-nAChR functions to allow calcium ion influx into cells, and thus when blocked by ingested bungarotoxin it produces damaging effects, as ACh signaling is inhibited. Likewise, the use of α-bungarotoxin can be very useful in neuroscience when it is desirable to block calcium flux in order to isolate the effects of other channels. Additionally, different forms of bungarotoxin may be useful for studying inhibited nAChRs and their resultant calcium ion flow in different systems of the body. For example, α-bungarotoxin is specific for nAChRs found in the musculature and κ-bungarotoxin is specific for nAChRs found in neurons. Caramboxin Caramboxin (CBX) is a toxin found in star fruit (Averrhoa carambola). Individuals with some types of kidney disease are susceptible to adverse neurological effects including intoxication, seizures and even death after eating star fruit or drinking juice made of this fruit. Caramboxin is a nonpeptide amino acid toxin that stimulates the glutamate receptors in neurons. Caramboxin is an agonist of both NMDA and AMPA glutamatergic ionotropic receptors with potent excitatory, convulsant, and neurodegenerative properties. Curare The term "curare" is ambiguous because it has been used to describe a number of poisons which, at the time of naming, were understood differently than they are today. In the past the term referred to poisons used by South American tribes on arrows or darts, though it has since come to denote a specific category of poisons that act on the neuromuscular junction to inhibit signaling and thus induce muscle relaxation. The neurotoxin category contains a number of distinct poisons, though all were originally purified from plants originating in South America. The effect with which injected curare poison is usually associated is muscle paralysis and resultant death. Curare notably functions to inhibit nicotinic acetylcholine receptors at the neuromuscular junction. Normally, these receptor channels allow sodium ions into muscle cells to initiate an action potential that leads to muscle contraction. By blocking the receptors, the neurotoxin is capable of significantly reducing neuromuscular junction signaling, an effect which has resulted in its use by anesthesiologists to produce muscular relaxation. Cytoskeleton interference Ammonia Ammonia toxicity is often seen through two routes of administration, either through consumption or through endogenous ailments such as liver failure. One notable case in which ammonia toxicity is common is in response to cirrhosis of the liver, which results in hepatic encephalopathy and can result in cerebral edema (Haussinger 2006). This cerebral edema can be the result of nervous cell remodeling. As a consequence of increased concentrations, ammonia activity in vivo has been shown to induce swelling of astrocytes in the brain through increased production of cGMP (cyclic guanosine monophosphate) within the cells, which leads to protein kinase G (PKG)-mediated cytoskeletal modifications. The resultant effect of this toxicity can be reduced brain energy metabolism and function. Importantly, the toxic effects of ammonia on astrocyte remodeling can be reduced through administration of L-carnitine. This astrocyte remodeling appears to be mediated through ammonia-induced mitochondrial permeability transition. 
This mitochondrial transition is a direct result of the activity of glutamine, a compound which forms from ammonia in vivo. Administration of antioxidants or a glutaminase inhibitor can reduce this mitochondrial transition, and potentially also astrocyte remodeling. Arsenic Arsenic is a neurotoxin commonly found concentrated in areas exposed to agricultural runoff, mining, and smelting sites (Martinez-Finley 2011). One of the effects of arsenic ingestion during the development of the nervous system is the inhibition of neurite growth, which can occur both in the PNS and the CNS. This neurite growth inhibition can lead to defects in neural migration and significant morphological changes of neurons during development, often leading to neural tube defects in neonates. As a metabolite of arsenic, arsenite is formed after ingestion of arsenic and has shown significant toxicity to neurons within about 24 hours of exposure. The mechanism of this cytotoxicity functions through arsenite-induced increases in intracellular calcium ion levels within neurons, which may subsequently reduce the mitochondrial transmembrane potential and activate caspases, triggering cell death. Another known function of arsenite is its destructive nature towards the cytoskeleton through inhibition of neurofilament transport. This is particularly destructive as neurofilaments are used in basic cell structure and support. Lithium administration has shown promise, however, in restoring some of the lost neurofilament motility. Additionally, similar to other neurotoxin treatments, the administration of certain antioxidants has shown some promise in reducing the neurotoxicity of ingested arsenic. Calcium-mediated cytotoxicity Lead Lead is a potent neurotoxin whose toxicity has been recognized for thousands of years. Though neurotoxic effects of lead are found in both adults and young children, the developing brain is particularly susceptible to lead-induced harm, effects which can include apoptosis and excitotoxicity. An underlying mechanism by which lead is able to cause harm is its ability to be transported by calcium ATPase pumps across the BBB, allowing for direct contact with the fragile cells within the central nervous system. Neurotoxicity results from lead's ability to act in a similar manner to calcium ions, as elevated lead concentrations drive cellular uptake of calcium, which disrupts cellular homeostasis and induces apoptosis. It is this intracellular calcium increase that activates protein kinase C (PKC), which manifests as learning deficits in children as a result of early lead exposure. In addition to inducing apoptosis, lead inhibits interneuron signaling through the disruption of calcium-mediated neurotransmitter release. Neurotoxins with multiple effects Ethanol As a neurotoxin, ethanol has been shown to induce nervous system damage and affect the body in a variety of ways. Among the known effects of ethanol exposure are both transient and lasting consequences. Some of the lasting effects include long-term reduced neurogenesis in the hippocampus, widespread brain atrophy, and induced inflammation in the brain. Of note, chronic ethanol ingestion has additionally been shown to induce reorganization of cellular membrane constituents, leading to a lipid bilayer marked by increased membrane concentrations of cholesterol and saturated fat. This is important as neurotransmitter transport can be impaired through vesicular transport inhibition, resulting in diminished neural network function. 
One significant example of reduced inter-neuron communication is the ability of ethanol to inhibit NMDA receptors in the hippocampus, resulting in reduced long-term potentiation (LTP) and memory acquisition. NMDA has been shown to play an important role in LTP and consequently memory formation. With chronic ethanol intake, however, the susceptibility of these NMDA receptors to induce LTP increases in the mesolimbic dopamine neurons in an inositol 1,4,5-trisphosphate (IP3)-dependent manner. This reorganization may lead to neuronal cytotoxicity both through hyperactivation of postsynaptic neurons and through induced addiction to continuous ethanol consumption. It has, additionally, been shown that ethanol directly reduces intracellular calcium ion accumulation through inhibited NMDA receptor activity, and thus reduces the capacity for the occurrence of LTP. In addition to the neurotoxic effects of ethanol in mature organisms, chronic ingestion is capable of inducing severe developmental defects. Evidence was first shown in 1973 of a connection between chronic ethanol intake by mothers and defects in their offspring. This work was responsible for creating the classification of fetal alcohol syndrome, a disease characterized by common morphogenesis aberrations such as defects in craniofacial formation, limb development, and cardiovascular formation. The magnitude of ethanol neurotoxicity in fetuses leading to fetal alcohol syndrome has been shown to depend on antioxidant levels in the brain, such as vitamin E. As the fetal brain is relatively fragile and susceptible to induced stresses, severe deleterious effects of alcohol exposure can be seen in important areas such as the hippocampus and cerebellum. The severity of these effects is directly dependent upon the amount and frequency of ethanol consumption by the mother, and the stage in development of the fetus. It is known that ethanol exposure results in reduced antioxidant levels, mitochondrial dysfunction (Chu 2007), and subsequent neuronal death, seemingly as a result of increased generation of reactive oxygen species (ROS). This is a plausible mechanism, as there is a reduced presence in the fetal brain of antioxidant enzymes such as catalase and peroxidase. In support of this mechanism, administration of high levels of dietary vitamin E results in reduced or eliminated ethanol-induced neurotoxic effects in fetuses. n-Hexane n-Hexane is a neurotoxin which has been responsible for the poisoning of several workers in Chinese electronics factories in recent years. Receptor-selective neurotoxins MPP+ MPP+, the toxic metabolite of MPTP, is a selective neurotoxin which interferes with oxidative phosphorylation in mitochondria by inhibiting complex I, leading to the depletion of ATP and subsequent cell death. This occurs almost exclusively in dopaminergic neurons of the substantia nigra, resulting in the presentation of permanent parkinsonism in exposed subjects 2–3 days after administration. Endogenous neurotoxin sources Unlike most common sources of neurotoxins, which are acquired by the body through ingestion, endogenous neurotoxins both originate from and exert their effects in vivo. Additionally, whereas most venoms and exogenous neurotoxins rarely possess useful in-vivo capabilities, endogenous neurotoxins are commonly used by the body in useful and healthy ways, such as nitric oxide, which is used in cell communication. 
It is often only when these endogenous compounds become highly concentrated that they lead to dangerous effects. Nitric oxide Though nitric oxide (NO) is commonly used by the nervous system in inter-neuron communication and signaling, it can be active in mechanisms leading to ischemic damage in the cerebrum (Iadecola 1998). The neurotoxicity of NO is based on its importance in glutamate excitotoxicity, as NO is generated in a calcium-dependent manner in response to glutamate-mediated NMDA activation, which occurs at an elevated rate in glutamate excitotoxicity. Though NO facilitates increased blood flow to potentially ischemic regions of the brain, it is also capable of increasing oxidative stress, inducing DNA damage and apoptosis. Thus an increased presence of NO in an ischemic area of the CNS can produce significantly toxic effects. Glutamate Glutamate, like nitric oxide, is an endogenously produced compound that neurons use for normal function, being present in small concentrations throughout the gray matter of the CNS. One of the most notable uses of endogenous glutamate is its functionality as an excitatory neurotransmitter. When concentrated, however, glutamate becomes toxic to surrounding neurons. This toxicity can be both a result of direct lethality of glutamate on neurons and a result of induced calcium flux into neurons leading to swelling and necrosis. Support has been shown for these mechanisms playing significant roles in diseases and complications such as Huntington's disease, epilepsy, and stroke. See also Babycurus toxin 1 Cangitoxin Chronic solvent-induced encephalopathy Fertilizer Herbicide Pesticides Solvent Toxic encephalopathy Notes References Chan, H. M. (2011) "Mercury in Fish: Human Health Risks." Encyclopedia of Environmental Health: 697–704. Costa, Lucio G., Gennaro Giordano, and Marina Guizzetti (2011) In Vitro Neurotoxicology: Methods and Protocols. New York: Humana. Debin, John A., John E. Maggio, and Gary R. Strichartz (1993) "Purification and Characterization of Chlorotoxin, a Chloride Channel Ligand from the Venom of the Scorpion." The American Physiological Society, pp. 361–69. Defuria, Jason (2006) "The Environmental Neurotoxin Arsenic Impairs Neurofilament Dynamics by Overactivation of C-JUN Terminal Kinase: Potential Role for Amyotrophic Lateral Sclerosis." UMI, pp. 1–16. Dobbs, Michael R (2009) Clinical Neurotoxicology. Philadelphia: Saunders-Elsevier. Herbert, M. R. (2006) "Autism and Environmental Genomics." NeuroToxicology, pp. 671–84. Hodge, A. Trevor (2002) Roman Aqueducts and Water Supply. London: Duckworth. Lotti, Marcello, and Angelo Moretto (2005) "Organophosphate-Induced Delayed Polyneuropathy." Toxicological Reviews, 24 (1): 37–49. Martini, Frederic, Michael J. Timmons, and Robert B. Tallitsch (2009) Human Anatomy. San Francisco: Pearson/Benjamin Cummings. Morris, Stephanie A., David W. Eaves, Aleksander R. Smith, and Kimberly Nixon (2009) "Alcohol Inhibition of Neurogenesis: A Mechanism of Hippocampal Neurodegeneration in an Adolescent Alcohol Abuse Model." Hippocampus. National Center for Environmental Assessment (2006) "Toxicological Reviews of Cyanobacterial Toxins: Anatoxin-a." NCEA-C-1743. Pirazzini, Marco, Ornella Rossetto, Paolo Bolognese, Clifford C. Shone, and Cesare Montecucco (2011) "Double Anchorage to the Membrane and Intact Inter-chain Disulfide Bond Are Required for the Low pH Induced Entry of Tetanus and Botulinum Neurotoxins into Neurons." Cellular Microbiology. 
Spencer PS, Schaumburg HH, Ludolph AC (Eds) (2000) Experimental and Clinical Neurotoxicology. Oxford University Press, Oxford, 1310 pp. USEPA (United States Environmental Protection Agency) (1998) Health Effects Test Guidelines. OPPTS 870.6200. Neurotoxicity screening battery. Washington DC, USEPA. Vahidnia, A., G.B. Van Der Voet, and F.A. De Wolff (2007) "Arsenic Neurotoxicity: A Review." Human & Experimental Toxicology, 26 (10): 823–32. Widmaier, Eric P., Hershel Raff, Kevin T. Strang, and Arthur J. Vander (2008) Vander's Human Physiology: the Mechanisms of Body Function. Boston: McGraw-Hill Higher Education. Yang, X (2007) Occurrence of the cyanobacterial neurotoxin, anatoxin-a, in New York State waters. ProQuest. Further reading Brain Facts Book at The Society for Neuroscience Neuroscience Texts at University of Texas Medical School In Vitro Neurotoxicology: An Introduction at Springerlink Biology of the NMDA Receptor at NCBI Advances in the Neuroscience of Addiction, 2nd edition at NCBI External links Environmental Protection Agency at United States Environmental Protection Agency Alcohol and Alcoholism at Oxford Medical Journals Neurotoxicology at Elsevier Journals Neurotoxin Institute at Neurotoxin Institute Neurotoxins at Toxipedia
Neurotoxin
[ "Chemistry" ]
7,717
[ "Neurochemistry", "Neurotoxins" ]
326,365
https://en.wikipedia.org/wiki/Reverse%20mathematics
Reverse mathematics is a program in mathematical logic that seeks to determine which axioms are required to prove theorems of mathematics. Its defining method can briefly be described as "going backwards from the theorems to the axioms", in contrast to the ordinary mathematical practice of deriving theorems from axioms. It can be conceptualized as sculpting out necessary conditions from sufficient ones. The reverse mathematics program was foreshadowed by results in set theory such as the classical theorem that the axiom of choice and Zorn's lemma are equivalent over ZF set theory. The goal of reverse mathematics, however, is to study possible axioms of ordinary theorems of mathematics rather than possible axioms for set theory. Reverse mathematics is usually carried out using subsystems of second-order arithmetic, where many of its definitions and methods are inspired by previous work in constructive analysis and proof theory. The use of second-order arithmetic also allows many techniques from recursion theory to be employed; many results in reverse mathematics have corresponding results in computable analysis. In higher-order reverse mathematics, the focus is on subsystems of higher-order arithmetic, and the associated richer language. The program was founded by Harvey Friedman and brought forward by Steve Simpson. A standard reference for the subject is Simpson (2009), while an introduction for non-specialists is Stillwell (2018). An introduction to higher-order reverse mathematics, and also the founding paper, is Kohlenbach (2005). A comprehensive introduction, covering major results and methods, is Dzhafarov and Mummert (2022). General principles In reverse mathematics, one starts with a framework language and a base theory—a core axiom system—that is too weak to prove most of the theorems one might be interested in, but still powerful enough to develop the definitions necessary to state these theorems. For example, to study the theorem “Every bounded sequence of real numbers has a supremum” it is necessary to use a base system that can speak of real numbers and sequences of real numbers. For each theorem that can be stated in the base system but is not provable in the base system, the goal is to determine the particular axiom system (stronger than the base system) that is necessary to prove that theorem. To show that a system S is required to prove a theorem T, two proofs are required. The first proof shows T is provable from S; this is an ordinary mathematical proof along with a justification that it can be carried out in the system S. The second proof, known as a reversal, shows that T itself implies S; this proof is carried out in the base system. The reversal establishes that no axiom system S′ that extends the base system can be weaker than S while still proving T. Use of second-order arithmetic Most reverse mathematics research focuses on subsystems of second-order arithmetic. The body of research in reverse mathematics has established that weak subsystems of second-order arithmetic suffice to formalize almost all undergraduate-level mathematics. In second-order arithmetic, all objects can be represented as either natural numbers or sets of natural numbers. For example, in order to prove theorems about real numbers, the real numbers can be represented as Cauchy sequences of rational numbers, each of which can in turn be represented as a set of natural numbers. The axiom systems most often considered in reverse mathematics are defined using axiom schemes called comprehension schemes. 
Such a scheme states that any set of natural numbers definable by a formula of a given complexity exists; formally, for each formula φ(n) of the given complexity there is an axiom ∃X ∀n (n ∈ X ↔ φ(n)). In this context, the complexity of formulas is measured using the arithmetical hierarchy and analytical hierarchy. The reason that reverse mathematics is not carried out using set theory as a base system is that the language of set theory is too expressive. Extremely complex sets of natural numbers can be defined by simple formulas in the language of set theory (which can quantify over arbitrary sets). In the context of second-order arithmetic, results such as Post's theorem establish a close link between the complexity of a formula and the (non)computability of the set it defines. Another effect of using second-order arithmetic is the need to restrict general mathematical theorems to forms that can be expressed within arithmetic. For example, second-order arithmetic can express the principle "Every countable vector space has a basis" but it cannot express the principle "Every vector space has a basis". In practical terms, this means that theorems of algebra and combinatorics are restricted to countable structures, while theorems of analysis and topology are restricted to separable spaces. Many principles that imply the axiom of choice in their general form (such as "Every vector space has a basis") become provable in weak subsystems of second-order arithmetic when they are restricted. For example, "every field has an algebraic closure" is not provable in ZF set theory, but the restricted form "every countable field has an algebraic closure" is provable in RCA0, the weakest system typically employed in reverse mathematics. Use of higher-order arithmetic A recent strand of higher-order reverse mathematics research, initiated by Ulrich Kohlenbach in 2005, focuses on subsystems of higher-order arithmetic. Due to the richer language of higher-order arithmetic, the use of representations (also known as 'codes'), common in second-order arithmetic, is greatly reduced. For example, a continuous function on the Cantor space is just a function that maps binary sequences to binary sequences, and that also satisfies the usual 'epsilon-delta' definition of continuity. Higher-order reverse mathematics includes higher-order versions of (second-order) comprehension schemes. Such a higher-order axiom states the existence of a functional that decides the truth or falsity of formulas of a given complexity. In this context, the complexity of formulas is also measured using the arithmetical hierarchy and analytical hierarchy. The higher-order counterparts of the major subsystems of second-order arithmetic generally prove the same second-order sentences (or a large subset) as the original second-order systems. For instance, the base theory of higher-order reverse mathematics, called RCAω0, proves the same sentences as RCA0, up to language. As noted in the previous paragraph, second-order comprehension axioms easily generalize to the higher-order framework. However, theorems expressing the compactness of basic spaces behave quite differently in second- and higher-order arithmetic: on one hand, when restricted to countable covers/the language of second-order arithmetic, the compactness of the unit interval is provable in WKL0 from the next section. On the other hand, given uncountable covers/the language of higher-order arithmetic, the compactness of the unit interval is only provable from (full) second-order arithmetic. Other covering lemmas (e.g. due to Lindelöf, Vitali, Besicovitch, etc.) 
exhibit the same behavior, and many basic properties of the gauge integral are equivalent to the compactness of the underlying space. The big five subsystems of second-order arithmetic Second-order arithmetic is a formal theory of the natural numbers and sets of natural numbers. Many mathematical objects, such as countable rings, groups, and fields, as well as points in effective Polish spaces, can be represented as sets of natural numbers, and modulo this representation can be studied in second-order arithmetic. Reverse mathematics makes use of several subsystems of second-order arithmetic. A typical reverse mathematics theorem shows that a particular mathematical theorem T is equivalent to a particular subsystem S of second-order arithmetic over a weaker subsystem B. This weaker system B is known as the base system for the result; in order for the reverse mathematics result to have meaning, this system must not itself be able to prove the mathematical theorem T. Steve Simpson describes five particular subsystems of second-order arithmetic, which he calls the Big Five, that occur frequently in reverse mathematics. In order of increasing strength, these systems are named by the initialisms RCA0, WKL0, ACA0, ATR0, and Π^1_1-CA0. Each of these systems has a counterpart in higher-order arithmetic (RCAω0 and so on); the higher-order counterparts generally prove the same second-order sentences (or a large subset) as the original second-order systems. The subscript 0 in these names means that the induction scheme has been restricted from the full second-order induction scheme. For example, ACA0 includes the induction axiom (0 ∈ X ∧ ∀n (n ∈ X → n + 1 ∈ X)) → ∀n (n ∈ X). This together with the full comprehension axiom of second-order arithmetic implies the full second-order induction scheme given by the universal closure of (φ(0) ∧ ∀n (φ(n) → φ(n + 1))) → ∀n φ(n) for any second-order formula φ. However ACA0 does not have the full comprehension axiom, and the subscript 0 is a reminder that it does not have the full second-order induction scheme either. This restriction is important: systems with restricted induction have significantly lower proof-theoretical ordinals than systems with the full second-order induction scheme. The base system RCA0 RCA0 is the fragment of second-order arithmetic whose axioms are the axioms of Robinson arithmetic, induction for Σ^0_1 formulas, and comprehension for Δ^0_1 formulas. The subsystem RCA0 is the one most commonly used as a base system for reverse mathematics. The initials "RCA" stand for "recursive comprehension axiom", where "recursive" means "computable", as in recursive function. This name is used because RCA0 corresponds informally to "computable mathematics". In particular, any set of natural numbers that can be proven to exist in RCA0 is computable, and thus any theorem that implies that noncomputable sets exist is not provable in RCA0. To this extent, RCA0 is a constructive system, although it does not meet the requirements of the program of constructivism because it is a theory in classical logic including the law of excluded middle. Despite its seeming weakness (of not proving any non-computable sets exist), RCA0 is sufficient to prove a number of classical theorems which, therefore, require only minimal logical strength. These theorems are, in a sense, below the reach of the reverse mathematics enterprise because they are already provable in the base system. The classical theorems provable in RCA0 include: Basic properties of the natural numbers, integers, and rational numbers (for example, that the latter form an ordered field). 
Basic properties of the real numbers (the real numbers are an Archimedean ordered field; any nested sequence of closed intervals whose lengths tend to zero has a single point in its intersection; the real numbers are not countable) (Section II.4). The Baire category theorem for a complete separable metric space (the separability condition is necessary to even state the theorem in the language of second-order arithmetic) (Theorem II.5.8). The intermediate value theorem on continuous real functions (Theorem II.6.6). The Banach–Steinhaus theorem for a sequence of continuous linear operators on separable Banach spaces (Theorem II.10.8). A weak version of Gödel's completeness theorem (for a set of sentences, in a countable language, that is already closed under consequence). The existence of an algebraic closure for a countable field, but not its uniqueness (II.9.4–II.9.8). The existence and uniqueness of the real closure of a countable ordered field (II.9.5, II.9.7). The first-order part of RCA0 (the theorems of the system that do not involve any set variables) is the set of theorems of first-order Peano arithmetic with induction limited to Σ^0_1 formulas. It is provably consistent, as is RCA0, in full first-order Peano arithmetic. Weak Kőnig's lemma WKL0 The subsystem WKL0 consists of RCA0 plus a weak form of Kőnig's lemma, namely the statement that every infinite subtree of the full binary tree (the tree of all finite sequences of 0's and 1's) has an infinite path. This proposition, which is known as weak Kőnig's lemma, is easy to state in the language of second-order arithmetic. WKL0 can also be defined as RCA0 plus the principle of Σ^0_1 separation (given two Σ^0_1 formulas of a free variable n that are exclusive, there is a set containing all n satisfying the one and no n satisfying the other). A similar distinction between particular axioms on the one hand, and subsystems including the basic axioms and induction on the other hand, is made for the stronger subsystems described below. In a sense, weak Kőnig's lemma is a form of the axiom of choice (although, as stated, it can be proven in classical Zermelo–Fraenkel set theory without the axiom of choice). It is not constructively valid in some senses of the word "constructive". To show that WKL0 is actually stronger than (not provable in) RCA0, it is sufficient to exhibit a theorem of WKL0 that implies that noncomputable sets exist. This is not difficult; WKL0 implies the existence of separating sets for effectively inseparable recursively enumerable sets. It turns out that RCA0 and WKL0 have the same first-order part, meaning that they prove the same first-order sentences. WKL0 can prove a good number of classical mathematical results that do not follow from RCA0, however. These results are not expressible as first-order statements but can be expressed as second-order statements. The following results are equivalent to weak Kőnig's lemma and thus to WKL0 over RCA0: The Heine–Borel theorem for the closed unit real interval, in the following sense: every covering by a sequence of open intervals has a finite subcovering. The Heine–Borel theorem for complete totally bounded separable metric spaces (where covering is by a sequence of open balls). A continuous real function on the closed unit interval (or on any compact separable metric space, as above) is bounded (or: bounded and reaches its bounds). 
A continuous real function on the closed unit interval can be uniformly approximated by polynomials (with rational coefficients). A continuous real function on the closed unit interval is uniformly continuous. A continuous real function on the closed unit interval is Riemann integrable. The Brouwer fixed point theorem for continuous functions on an n-simplex (Theorem IV.7.7). The separable Hahn–Banach theorem in the form: a bounded linear form on a subspace of a separable Banach space extends to a bounded linear form on the whole space. The Jordan curve theorem. Gödel's completeness theorem (for a countable language). Determinacy for open (or even clopen) games on {0,1} of length ω. Every countable commutative ring has a prime ideal. Every countable formally real field is orderable. Uniqueness of algebraic closure (for a countable field). Arithmetical comprehension ACA0 ACA0 is RCA0 plus the comprehension scheme for arithmetical formulas (which is sometimes called the "arithmetical comprehension axiom"). That is, ACA0 allows us to form the set of natural numbers satisfying an arbitrary arithmetical formula (one with no bound set variables, although possibly containing set parameters) (pp. 6–7). Actually, it suffices to add to RCA0 the comprehension scheme for Σ^0_1 formulas (also including second-order free variables) in order to obtain full arithmetical comprehension (Lemma III.1.3). The first-order part of ACA0 is exactly first-order Peano arithmetic; ACA0 is a conservative extension of first-order Peano arithmetic (Corollary IX.1.6). The two systems are provably (in a weak system) equiconsistent. ACA0 can be thought of as a framework of predicative mathematics, although there are predicatively provable theorems that are not provable in ACA0. Most of the fundamental results about the natural numbers, and many other mathematical theorems, can be proven in this system. One way of seeing that ACA0 is stronger than WKL0 is to exhibit a model of WKL0 that doesn't contain all arithmetical sets. In fact, it is possible to build a model of WKL0 consisting entirely of low sets using the low basis theorem, since low sets relative to low sets are low. The following assertions are equivalent to ACA0 over RCA0: The sequential completeness of the real numbers (every bounded increasing sequence of real numbers has a limit) (Theorem III.2.2). The Bolzano–Weierstrass theorem (Theorem III.2.2). Ascoli's theorem: every bounded equicontinuous sequence of real functions on the unit interval has a uniformly convergent subsequence. Every countable field embeds isomorphically into its algebraic closure (Theorem III.3.2). Every countable commutative ring has a maximal ideal (Theorem III.5.5). Every countable vector space over the rationals (or over any countable field) has a basis (Theorem III.4.3). For any countable fields K ⊆ L, there is a transcendence basis for L over K (Theorem III.4.6). Kőnig's lemma (for arbitrary finitely branching trees, as opposed to the weak version described above) (Theorem III.7.2). For any countable group G and any subgroups H and I of G, the subgroup generated by H and I exists (p. 40). Any partial function can be extended to a total function. 
Various theorems in combinatorics, such as certain forms of Ramsey's theorem (Theorem III.7.2). Arithmetical transfinite recursion ATR0 The system ATR0 adds to ACA0 an axiom that states, informally, that any arithmetical functional (meaning any arithmetical formula with a free number variable n and a free set variable X, seen as the operator taking X to the set of n satisfying the formula) can be iterated transfinitely along any countable well ordering starting with any set. ATR0 is equivalent over ACA0 to the principle of Σ^1_1 separation. ATR0 is impredicative, and has the proof-theoretic ordinal Γ0, the supremum of the proof-theoretic ordinals of predicative systems. ATR0 proves the consistency of ACA0, and thus by Gödel's theorem it is strictly stronger. The following assertions are equivalent to ATR0 over RCA0: Any two countable well orderings are comparable. That is, they are isomorphic or one is isomorphic to a proper initial segment of the other (Theorem V.6.8). Ulm's theorem for countable reduced Abelian groups. The perfect set theorem, which states that every uncountable closed subset of a complete separable metric space contains a perfect closed set. Lusin's separation theorem (essentially Σ^1_1 separation) (Theorem V.5.1). Determinacy for open sets in the Baire space. Π^1_1 comprehension Π^1_1-CA0 Π^1_1-CA0 is stronger than arithmetical transfinite recursion and is fully impredicative. It consists of RCA0 plus the comprehension scheme for Π^1_1 formulas. In a sense, Π^1_1-CA0 comprehension is to arithmetical transfinite recursion (Σ^1_1 separation) as ACA0 is to weak Kőnig's lemma (Σ^0_1 separation). It is equivalent to several statements of descriptive set theory whose proofs make use of strongly impredicative arguments; this equivalence shows that these impredicative arguments cannot be removed. The following theorems are equivalent to Π^1_1-CA0 over RCA0: The Cantor–Bendixson theorem (every closed set of reals is the union of a perfect set and a countable set) (Exercise VI.1.7). Silver's dichotomy (every coanalytic equivalence relation has either countably many equivalence classes or a perfect set of incomparables) (Theorem VI.3.6). Every countable abelian group is the direct sum of a divisible group and a reduced group (Theorem VI.4.1). Determinacy for Σ^0_1 ∧ Π^0_1 games (Theorem VI.5.4). Additional systems Weaker systems than recursive comprehension can be defined. The weak system RCA*0 consists of elementary function arithmetic EFA (the basic axioms plus Δ^0_1 induction in the enriched language with an exponential operation) plus Δ^0_1 comprehension. Over RCA*0, recursive comprehension as defined earlier (that is, with Σ^0_1 induction) is equivalent to the statement that a polynomial (over a countable field) has only finitely many roots and to the classification theorem for finitely generated Abelian groups. The system RCA*0 has the same proof theoretic ordinal ω^3 as EFA and is conservative over EFA for Π^0_2 sentences. Weak Weak Kőnig's Lemma is the statement that a subtree of the infinite binary tree having no infinite paths has an asymptotically vanishing proportion of the leaves at length n (with a uniform estimate as to how many leaves of length n exist). An equivalent formulation is that any subset of Cantor space that has positive measure is nonempty (this is not provable in RCA0). WWKL0 is obtained by adjoining this axiom to RCA0. It is equivalent to the statement that if the unit real interval is covered by a sequence of intervals then the sum of their lengths is at least one. The model theory of WWKL0 is closely connected to the theory of algorithmically random sequences. 
In particular, an ω-model of RCA0 satisfies weak weak Kőnig's lemma if and only if for every set X there is a set Y that is 1-random relative to X. DNR (short for "diagonally non-recursive") adds to RCA0 an axiom asserting the existence of a diagonally non-recursive function relative to every set. That is, DNR states that, for any set A, there exists a total function f such that for all e the eth partial recursive function with oracle A is not equal to f. DNR is strictly weaker than WWKL (Lempp et al., 2004). Δ^1_1-comprehension is in certain ways analogous to arithmetical transfinite recursion as recursive comprehension is to weak Kőnig's lemma. It has the hyperarithmetical sets as minimal ω-model. Arithmetical transfinite recursion proves Δ^1_1-comprehension but not the other way around. Σ^1_1-choice is the statement that if η(n,X) is a Σ^1_1 formula such that for each n there exists an X satisfying η then there is a sequence of sets Xn such that η(n,Xn) holds for each n. Σ^1_1-choice also has the hyperarithmetical sets as minimal ω-model. Arithmetical transfinite recursion proves Σ^1_1-choice but not the other way around. HBU (short for "uncountable Heine-Borel") expresses the (open-cover) compactness of the unit interval, involving uncountable covers. The latter aspect of HBU makes it only expressible in the language of third-order arithmetic. Cousin's theorem (1895) implies HBU, and these theorems use the same notion of cover due to Cousin and Lindelöf. HBU is hard to prove: in terms of the usual hierarchy of comprehension axioms, a proof of HBU requires full second-order arithmetic. Ramsey's theorem for infinite graphs does not fall into one of the big five subsystems, and there are many other weaker variants with varying proof strengths. Stronger Systems Over RCA0, Π^1_1 transfinite recursion, Δ^0_2 determinacy, and the Δ^1_1 Ramsey theorem are all equivalent to each other. Over RCA0, Σ^1_1 monotonic induction, Σ^0_2 determinacy, and the Σ^1_1 Ramsey theorem are all equivalent to each other. The following are equivalent: (schema) Π^1_3 consequences of Π^1_2-CA0 RCA0 + (schema over finite n) determinacy in the nth level of the difference hierarchy of Σ^0_2 sets RCA0 + {τ: τ is a true S2S sentence} The set of Π^1_3 consequences of second-order arithmetic Z2 has the same theory as RCA0 + (schema over finite n) determinacy in the nth level of the difference hierarchy of Σ^0_3 sets. For a poset P, let MF(P) denote the topological space consisting of the maximal filters on P, whose open sets are the sets of the form {F ∈ MF(P) : p ∈ F} for some p ∈ P. The following statement is equivalent to Π^1_2-CA0 over Π^1_1-CA0: for any countable poset P, the topological space MF(P) is completely metrizable iff it is regular. ω-models and β-models The ω in ω-model stands for the set of non-negative integers (or finite ordinals). An ω-model is a model for a fragment of second-order arithmetic whose first-order part is the standard model of Peano arithmetic, but whose second-order part may be non-standard. More precisely, an ω-model is given by a choice S of subsets of ω. The first-order variables are interpreted in the usual way as elements of ω, and + and × have their usual meanings, while second-order variables are interpreted as elements of S. There is a standard ω-model where one just takes S to consist of all subsets of the integers. However, there are also other ω-models; for example, RCA0 has a minimal ω-model where S consists of the recursive subsets of ω. A β-model is an ω-model that agrees with the standard ω-model on truth of Σ^1_1 and Π^1_1 sentences (with parameters). Non-ω models are also useful, especially in the proofs of conservation theorems. 
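The comprehension and induction schemes that define the systems discussed above can be stated compactly. The following is a minimal LaTeX sketch of their standard forms, following the usual presentation (as in Simpson's monograph) rather than any formula numbering in this article; here φ and ψ range over formulas of the indicated classes and may contain parameters.

```latex
% Delta^0_1 comprehension (part of RCA_0):
% for \varphi in \Sigma^0_1 and \psi in \Pi^0_1,
\forall n\,(\varphi(n) \leftrightarrow \psi(n))
  \rightarrow \exists X\,\forall n\,(n \in X \leftrightarrow \varphi(n))

% Sigma^0_1 induction (part of RCA_0): for \varphi in \Sigma^0_1,
(\varphi(0) \land \forall n\,(\varphi(n) \rightarrow \varphi(n+1)))
  \rightarrow \forall n\,\varphi(n)

% Comprehension for a class \Gamma
% (arithmetical formulas for ACA_0, \Pi^1_1 formulas for \Pi^1_1-CA_0):
\exists X\,\forall n\,(n \in X \leftrightarrow \varphi(n)),
  \qquad \varphi \in \Gamma
```

Weak Kőnig's lemma, arithmetical transfinite recursion, and the separation and choice principles above are then additional axioms layered on RCA0 rather than instances of this comprehension template.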
See also Closed-form expression § Conversion from numerical forms Induction, bounding and least number principles Ordinal analysis Notes References External links Stephen G. Simpson's home page Reverse Mathematics Zoo Computability theory Mathematical logic Proof theory
Reverse mathematics
[ "Mathematics" ]
5,432
[ "Computability theory", "Mathematical logic", "Proof theory" ]
326,386
https://en.wikipedia.org/wiki/Ion%20source
An ion source is a device that creates atomic and molecular ions. Ion sources are used to form ions for mass spectrometers, optical emission spectrometers, particle accelerators, ion implanters and ion engines. Electron ionization Electron ionization is widely used in mass spectrometry, particularly for organic molecules. The gas phase reaction producing electron ionization is M + e− → M+• + 2e− where M is the atom or molecule being ionized, e− is the electron, and M+• is the resulting ion. The electrons may be created by an arc discharge between a cathode and an anode. An electron beam ion source (EBIS) is used in atomic physics to produce highly charged ions by bombarding atoms with a powerful electron beam. Its principle of operation is shared by the electron beam ion trap. Electron capture ionization Electron capture ionization (ECI) is the ionization of a gas phase atom or molecule by attachment of an electron to create an ion of the form A−•. The reaction is A + e− →[M] A−• where the M over the arrow denotes that to conserve energy and momentum a third body is required (the molecularity of the reaction is three). Electron capture can be used in conjunction with chemical ionization. An electron capture detector is used in some gas chromatography systems. Chemical ionization Chemical ionization (CI) is a lower energy process than electron ionization because it involves ion/molecule reactions rather than electron removal. The lower energy yields less fragmentation, and usually a simpler spectrum. A typical CI spectrum has an easily identifiable molecular ion. In a CI experiment, ions are produced through the collision of the analyte with ions of a reagent gas in the ion source. Some common reagent gases include methane, ammonia, and isobutane. Inside the ion source, the reagent gas is present in large excess compared to the analyte. Electrons entering the source will preferentially ionize the reagent gas. The resultant collisions with other reagent gas molecules will create an ionization plasma. Positive and negative ions of the analyte are formed by reactions with this plasma. For example, protonation occurs by CH4 + e− → CH4+ + 2e− (primary ion formation), CH4 + CH4+ → CH5+ + CH3 (reagent ion formation), M + CH5+ → CH4 + [M + H]+ (product ion formation, e.g. protonation). Charge exchange ionization Charge-exchange ionization (also known as charge-transfer ionization) is a gas phase reaction between an ion and an atom or molecule in which the charge of the ion is transferred to the neutral species. A+ + B → A + B+ Chemi-ionization Chemi-ionization is the formation of an ion through the reaction of a gas phase atom or molecule with an atom or molecule in an excited state. Chemi-ionization can be represented by G* + M → G + M+• + e− where G is the excited state species (indicated by the superscripted asterisk), and M is the species that is ionized by the loss of an electron to form the radical cation (indicated by the superscripted "plus-dot"). Associative ionization Associative ionization is a gas phase reaction in which two atoms or molecules interact to form a single product ion. One or both of the interacting species may have excess internal energy. For example, A* + B → AB+• + e− where species A with excess internal energy (indicated by the asterisk) interacts with B to form the ion AB+•. Penning ionization Penning ionization is a form of chemi-ionization involving reactions between neutral atoms or molecules. 
The process is named after the Dutch physicist Frans Michel Penning, who first reported it in 1927. Penning ionization involves a reaction between a gas-phase excited-state atom or molecule G* and a target molecule M resulting in the formation of a radical molecular cation M+•, an electron e−, and a neutral gas molecule G: G* + M → G + M+• + e− Penning ionization occurs when the target molecule has an ionization potential lower than the internal energy of the excited-state atom or molecule. Associative Penning ionization can proceed via G* + M → MG+• + e− Surface Penning ionization (also known as Auger deexcitation) refers to the interaction of the excited-state gas with a bulk surface S, resulting in the release of an electron according to G* + S → G + S + e−. Ion attachment Ion-attachment ionization is similar to chemical ionization except that a cation is attached to the analyte molecule in a reactive collision: M + X+ + A → MX+ + A where M is the analyte molecule, X+ is the cation and A is a non-reacting collision partner. In a radioactive ion source, a small piece of radioactive material, for instance 63Ni or 241Am, is used to ionize a gas. This is used in ionization smoke detectors and ion mobility spectrometers. Gas-discharge ion sources These ion sources use a plasma source or electric discharge to create ions. Inductively-coupled plasma Ions can be created in an inductively coupled plasma, which is a plasma source in which the energy is supplied by electrical currents which are produced by electromagnetic induction, that is, by time-varying magnetic fields. Microwave-induced plasma Microwave induced plasma ion sources are capable of exciting electrodeless gas discharges to create ions for trace element mass spectrometry. A microwave plasma is sustained by high frequency electromagnetic radiation in the GHz range. If applied in surface-wave-sustained mode, such sources are especially well suited to generate large-area plasmas of high plasma density. If operated in both surface-wave and resonator modes, they can exhibit a high degree of spatial localization. This makes it possible to spatially separate the location of plasma generation from the location of surface processing. Such a separation (together with an appropriate gas-flow scheme) may help reduce the negative effect that particles released from a processed substrate may have on the plasma chemistry of the gas phase. ECR ion source The ECR ion source makes use of the electron cyclotron resonance to ionize a plasma. Microwaves are injected into a volume at the frequency corresponding to the electron cyclotron resonance, defined by the magnetic field applied to a region inside the volume (a worked numerical example of this resonance condition is given at the end of this section). The volume contains a low pressure gas. Glow discharge Ions can be created in an electric glow discharge. A glow discharge is a plasma formed by the passage of electric current through a low-pressure gas. It is created by applying a voltage between two metal electrodes in an evacuated chamber containing gas. When the voltage exceeds a certain value, called the striking voltage, the gas forms a plasma. A duoplasmatron is a type of glow discharge ion source that consists of a hot or cold cathode producing a plasma that is used to ionize a gas. Duoplasmatrons can produce positive or negative ions. They are used for secondary ion mass spectrometry, ion beam etching, and high-energy physics. 
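Returning to the ECR resonance condition mentioned above: the microwave frequency must match the electron cyclotron frequency f = eB/(2π m_e) set by the applied magnetic field. The following short Python calculation is an illustrative sketch, not part of the original article; the 0.0875 T field value is an assumed example, chosen because it corresponds to the common 2.45 GHz magnetron frequency.

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def cyclotron_frequency(b_field_tesla: float) -> float:
    """Electron cyclotron frequency f = e*B / (2*pi*m_e), in Hz."""
    return E_CHARGE * b_field_tesla / (2 * math.pi * M_ELECTRON)

if __name__ == "__main__":
    # An ECR source is tuned so the injected microwave frequency equals the
    # cyclotron frequency; e.g. B = 0.0875 T resonates at about 2.45 GHz.
    b = 0.0875  # tesla (illustrative value)
    print(f"f_ce at B = {b} T: {cyclotron_frequency(b) / 1e9:.2f} GHz")
```

Running this prints roughly 2.45 GHz, which is why ECR sources driven by standard 2.45 GHz magnetrons use magnetic fields near 875 gauss in the resonance zone.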
Flowing afterglow In a flowing plasma afterglow, ions are formed in a flow of inert gas, typically helium or argon. Reagents are added downstream to create ion products and study reaction rates. Flowing-afterglow mass spectrometry is used for trace gas analysis of organic compounds. Spark ionization Electric spark ionization is used to produce gas phase ions from a solid sample. When incorporated with a mass spectrometer the complete instrument is referred to as a spark ionization mass spectrometer or as a spark source mass spectrometer (SSMS). A closed drift ion source uses a radial magnetic field in an annular cavity in order to confine electrons for ionizing a gas. They are used for ion implantation and for space propulsion (Hall-effect thrusters). Photoionization Photoionization is the ionization process in which an ion is formed from the interaction of a photon with an atom or molecule. Multi-photon ionization In multi-photon ionization (MPI), several photons of energy below the ionization threshold may combine their energies to ionize an atom. Resonance-enhanced multiphoton ionization (REMPI) is a form of MPI in which one or more of the photons accesses a bound-bound transition that is resonant in the atom or molecule being ionized. Atmospheric pressure photoionization Atmospheric pressure photoionization (APPI) uses a source of photons, usually a vacuum UV (VUV) lamp, to ionize the analyte with a single photon ionization process. Analogous to other atmospheric pressure ion sources, a spray of solvent is heated to relatively high temperatures (above 400 degrees Celsius) and sprayed with high flow rates of nitrogen for desolvation. The resulting aerosol is subjected to UV radiation to create ions. Atmospheric-pressure laser ionization uses UV laser light sources to ionize the analyte via MPI. Desorption ionization Field desorption Field desorption refers to an ion source in which a high-potential electric field is applied to an emitter with a sharp surface, such as a razor blade, or more commonly, a filament from which tiny "whiskers" have formed. This results in a very high electric field which can result in ionization of gaseous molecules of the analyte. Mass spectra produced by FI have little or no fragmentation. They are dominated by molecular radical cations and, less often, protonated molecules. Particle bombardment Fast atom bombardment Particle bombardment with atoms is called fast atom bombardment (FAB) and bombardment with atomic or molecular ions is called secondary ion mass spectrometry (SIMS). Fission fragment ionization uses ionic or neutral atoms formed as a result of the nuclear fission of a suitable nuclide, for example the californium isotope 252Cf. In FAB the analyte is mixed with a non-volatile chemical protection environment called a matrix and is bombarded under vacuum with a high energy (4000 to 10,000 electron volts) beam of atoms. The atoms are typically from an inert gas such as argon or xenon. Common matrices include glycerol, thioglycerol, 3-nitrobenzyl alcohol (3-NBA), 18-crown-6 ether, 2-nitrophenyloctyl ether, sulfolane, diethanolamine, and triethanolamine. This technique is similar to secondary ion mass spectrometry and plasma desorption mass spectrometry. Secondary ionization Secondary ion mass spectrometry (SIMS) is used to analyze the composition of solid surfaces and thin films by sputtering the surface of the specimen with a focused primary ion beam and collecting and analyzing ejected secondary ions. 
The mass/charge ratios of these secondary ions are measured with a mass spectrometer to determine the elemental, isotopic, or molecular composition of the surface to a depth of 1 to 2 nm. In a liquid metal ion source (LMIS), a metal (typically gallium) is heated to the liquid state and provided at the end of a capillary or a needle. A Taylor cone is then formed under the application of a strong electric field. As the cone's tip gets sharper, the electric field becomes stronger, until ions are produced by field evaporation. These ion sources are particularly used in ion implantation or in focused ion beam instruments. Plasma desorption ionization Plasma desorption ionization mass spectrometry (PDMS), also called fission fragment ionization, is a mass spectrometry technique in which ionization of material in a solid sample is accomplished by bombarding it with ionic or neutral atoms formed as a result of the nuclear fission of a suitable nuclide, typically the californium isotope 252Cf. Laser desorption ionization Matrix-assisted laser desorption/ionization (MALDI) is a soft ionization technique. The sample is mixed with a matrix material. Upon receiving a laser pulse, the matrix absorbs the laser energy and it is thought that primarily the matrix is desorbed and ionized (by addition of a proton) by this event. The analyte molecules are also desorbed. The matrix is then thought to transfer a proton to the analyte molecules (e.g., protein molecules), thus charging the analyte. Surface-assisted laser desorption/ionization Surface-assisted laser desorption/ionization (SALDI) is a soft laser desorption technique used for analyzing biomolecules by mass spectrometry. In its first embodiment, it used a graphite matrix. At present, laser desorption/ionization methods using other inorganic matrices, such as nanomaterials, are often regarded as SALDI variants. A related method named "ambient SALDI" - which is a combination of conventional SALDI with ambient mass spectrometry incorporating the DART ion source - has also been demonstrated. Surface-enhanced laser desorption/ionization Surface-enhanced laser desorption/ionization (SELDI) is a variant of MALDI that is used for the analysis of protein mixtures and that uses a target modified to achieve biochemical affinity with the analyte compound. Desorption ionization on silicon Desorption ionization on silicon (DIOS) refers to laser desorption/ionization of a sample deposited on a porous silicon surface. Smalley source A laser vaporization cluster source produces ions using a combination of laser desorption ionization and supersonic expansion. The Smalley source (or Smalley cluster source) was developed by Richard Smalley at Rice University in the 1980s and was central to the discovery of fullerenes in 1985. Aerosol ionization In aerosol mass spectrometry with time-of-flight analysis, micrometer-sized solid aerosol particles extracted from the atmosphere are simultaneously desorbed and ionized by a precisely timed laser pulse as they pass through the center of a time-of-flight ion extractor. Spray ionization Spray ionization methods involve the formation of aerosol particles from a liquid solution and the formation of bare ions after solvent evaporation. Solvent-assisted ionization (SAI) is a method in which charged droplets are produced by introducing a solution containing analyte into a heated inlet tube of an atmospheric pressure ionization mass spectrometer. 
Just as in electrospray ionization (ESI), desolvation of the charged droplets produces multiply charged analyte ions. Volatile and nonvolatile compounds are analyzed by SAI, and high voltage is not required to achieve sensitivity comparable to ESI. Application of a voltage to the solution entering the hot inlet through a zero dead volume fitting connected to fused silica tubing produces ESI-like mass spectra, but with higher sensitivity. The inlet tube to the mass spectrometer becomes the ion source. Matrix-Assisted Ionization Matrix-assisted ionization (MAI) is similar to MALDI in sample preparation, but a laser is not required to convert analyte molecules included in a matrix compound into gas-phase ions. In MAI, analyte ions have charge states similar to electrospray ionization but obtained from a solid matrix rather than a solvent. No voltage or laser is required, but a laser can be used to obtain spatial resolution for imaging. Matrix-analyte samples are ionized in the vacuum of a mass spectrometer and can be inserted into the vacuum through an atmospheric pressure inlet. Less volatile matrices such as 2,5-dihydroxybenzoic acid require a hot inlet tube to produce analyte ions by MAI, but more volatile matrices such as 3-nitrobenzonitrile require no heat, voltage, or laser. Simply introducing the matrix-analyte sample to the inlet aperture of an atmospheric pressure ionization mass spectrometer produces abundant ions. Compounds at least as large as bovine serum albumin (66 kDa) can be ionized with this method. In this method, the inlet to the mass spectrometer can be considered the ion source. Atmospheric-pressure chemical ionization Atmospheric-pressure chemical ionization (APCI) uses a solvent spray at atmospheric pressure. A spray of solvent is heated to relatively high temperatures (above 400 degrees Celsius), sprayed with high flow rates of nitrogen, and the entire aerosol cloud is subjected to a corona discharge that creates ions with the evaporated solvent acting as the chemical ionization reagent gas. APCI is not as "soft" (low fragmentation) an ionization technique as ESI. Note that atmospheric pressure ionization (API) should not be used as a synonym for APCI. Thermospray ionization Thermospray ionization is a form of atmospheric pressure ionization in mass spectrometry. It transfers ions from the liquid phase to the gas phase for analysis. It is particularly useful in liquid chromatography-mass spectrometry. Electrospray ionization In electrospray ionization, a liquid is pushed through a very small, charged, and usually metal capillary. This liquid contains the substance to be studied, the analyte, dissolved in a large amount of solvent, which is usually much more volatile than the analyte. Volatile acids, bases or buffers are often added to this solution as well. The analyte exists as an ion in solution either in its anion or cation form. Because like charges repel, the liquid pushes itself out of the capillary and forms an aerosol, a mist of small droplets about 10 μm across. The aerosol is at least partially produced by a process involving the formation of a Taylor cone and a jet from the tip of this cone. An uncharged carrier gas such as nitrogen is sometimes used to help nebulize the liquid and to help evaporate the neutral solvent in the droplets. As the solvent evaporates, the analyte molecules are forced closer together, repel each other and break up the droplets.
This process is called Coulombic fission because it is driven by repulsive Coulombic forces between charged molecules. The process repeats until the analyte is free of solvent and is a bare ion. The ions observed are created by the addition of a proton (a hydrogen ion), denoted [M + H]+, or of another cation such as a sodium ion, [M + Na]+, or by the removal of a proton, [M − H]−. Multiply charged ions such as [M + nH]n+ are often observed. For macromolecules, there can be many charge states, occurring with different frequencies; the observed charge can be quite high. Probe electrospray ionization Probe electrospray ionization (PESI) is a modified version of electrospray, where the capillary for transferring the sample solution is replaced by a sharp-tipped solid needle with periodic motion. Contactless atmospheric pressure ionization Contactless atmospheric pressure ionization is a technique used for analysis of liquid and solid samples by mass spectrometry. Contactless API can be operated without an additional electric power supply (supplying voltage to the source emitter), gas supply, or syringe pump. Thus, the technique provides a facile means for analyzing chemical compounds by mass spectrometry at atmospheric pressure. Sonic spray ionization Sonic spray ionization is a method for creating ions from a liquid solution, for example, a mixture of methanol and water. A pneumatic nebulizer is used to turn the solution into a supersonic spray of small droplets. Ions are formed when the solvent evaporates: the statistically unbalanced charge distribution on the droplets leads to a net charge, and complete desolvation results in bare ions. Sonic spray ionization is used to analyze small organic molecules and drugs and can analyze large molecules when an electric field is applied to the capillary to help increase the charge density and generate multiply charged ions of proteins. Sonic spray ionization has been coupled with high-performance liquid chromatography for the analysis of drugs. Oligonucleotides have been studied with this method. SSI has been used in a manner similar to desorption electrospray ionization for ambient ionization and has been coupled with thin-layer chromatography in this manner. Ultrasonication-assisted spray ionization Ultrasonication-assisted spray ionization (UASI) is similar to the above techniques but uses an ultrasonic transducer to achieve atomization of the material and generate ions. Thermal ionization Thermal ionization (also known as surface ionization or contact ionization) involves spraying vaporized, neutral atoms onto a hot surface, from which the atoms re-evaporate in ionic form. To generate positive ions, the atomic species should have a low ionization energy, and the surface should have a high work function. This technique is most suitable for alkali atoms (Li, Na, K, Rb, Cs) which have low ionization energies and are easily evaporated. To generate negative ions, the atomic species should have a high electron affinity, and the surface should have a low work function. This second approach is most suited for the halogen atoms Cl, Br, I, At. Ambient ionization In ambient ionization, ions are formed outside the mass spectrometer without sample preparation or separation. Ions can be formed by extraction into charged electrospray droplets, thermally desorbed and ionized by chemical ionization, or laser desorbed or ablated and post-ionized before they enter the mass spectrometer.
Solid-liquid extraction-based ambient ionization uses a charged spray to create a liquid film on the sample surface. Molecules on the surface are extracted into the solvent. The action of the primary droplets hitting the surface produces secondary droplets that are the source of ions for the mass spectrometer. Desorption electrospray ionization (DESI) creates charged droplets that are directed at a solid sample a few millimeters to a few centimeters away. The charged droplets pick up the sample through interaction with the surface and then form highly charged ions that can be sampled into a mass spectrometer. Plasma-based ambient ionization is based on an electrical discharge in a flowing gas that produces metastable atoms and molecules and reactive ions. Heat is often used to assist in the desorption of volatile species from the sample. Ions are formed by chemical ionization in the gas phase. A direct analysis in real time (DART) source operates by exposing the sample to a dry gas stream (typically helium or nitrogen) that contains long-lived electronically or vibronically excited neutral atoms or molecules (or "metastables"). Excited states are typically formed in the DART source by creating a glow discharge in a chamber through which the gas flows. A similar method called atmospheric solids analysis probe (ASAP) uses the heated gas from ESI or APCI probes to vaporize a sample placed on a melting point tube inserted into an ESI/APCI source. Ionization is by APCI. Laser-based ambient ionization is a two-step process in which a pulsed laser is used to desorb or ablate material from a sample and the plume of material interacts with an electrospray or plasma to create ions. Electrospray-assisted laser desorption/ionization (ELDI) uses a 337 nm UV laser or 3 μm infrared laser to desorb material into an electrospray source. Matrix-assisted laser desorption electrospray ionization (MALDESI) is an atmospheric pressure ionization source for generation of multiply charged ions. An ultraviolet or infrared laser is directed onto a solid or liquid sample containing the analyte of interest and matrix, desorbing neutral analyte molecules that are ionized by interaction with electrosprayed solvent droplets, generating multiply charged ions. Laser ablation electrospray ionization (LAESI) is an ambient ionization method for mass spectrometry that combines laser ablation from a mid-infrared (mid-IR) laser with a secondary electrospray ionization (ESI) process. Applications Mass spectrometry In a mass spectrometer a sample is ionized in an ion source and the resulting ions are separated by their mass-to-charge ratio. The ions are detected and the results are displayed as spectra of the relative abundance of detected ions as a function of the mass-to-charge ratio. The atoms or molecules in the sample can be identified by correlating known masses to the identified masses or through a characteristic fragmentation pattern. Particle accelerators In particle accelerators an ion source creates the particle beam at the beginning of the machine. The technology to create ion sources for particle accelerators depends strongly on the type of particle that needs to be generated: electrons, protons, H− ions, or heavy ions. Electrons are generated with an electron gun, of which there are many varieties. Protons are generated with a plasma-based device, like a duoplasmatron or a magnetron. H− ions are generated with a magnetron or a Penning source.
A magnetron consists of a central cylindrical cathode surrounded by an anode. The discharge voltage is typically greater than 150 V and the current drain is around 40 A. A magnetic field of about 0.2 tesla is parallel to the cathode axis. Hydrogen gas is introduced by a pulsed gas valve. Caesium is often used to lower the work function of the cathode, enhancing the number of ions that are produced. Large caesiated sources are also used for plasma heating in nuclear fusion devices. For a Penning source, a strong magnetic field parallel to the electric field of the sheath guides electrons and ions on cyclotron spirals from cathode to cathode. Fast H− ions are generated at the cathodes as in the magnetron. They are slowed down due to the charge exchange reaction as they migrate to the plasma aperture. This makes for a beam of ions that is colder than the ions obtained from a magnetron. Heavy ions can be generated with an electron cyclotron resonance ion source. The use of electron cyclotron resonance (ECR) ion sources for the production of intense beams of highly charged ions has grown immensely over the last decade. ECR ion sources are used as injectors into linear accelerators, Van de Graaff generators or cyclotrons in nuclear and elementary particle physics. In atomic and surface physics ECR ion sources deliver intense beams of highly charged ions for collision experiments or for the investigation of surfaces. For the highest charge states, however, electron beam ion sources (EBIS) are needed. They can generate even bare ions of mid-heavy elements. The electron beam ion trap (EBIT), based on the same principle, can produce up to bare uranium ions and can be used as an ion source as well. Heavy ions can also be generated with an ion gun which typically uses the thermionic emission of electrons to ionize a substance in its gaseous state. Such instruments are typically used for surface analysis. Gas flows through the ion source between the anode and the cathode. A positive voltage is applied to the anode. This voltage, combined with the high magnetic field between the tips of the internal and external cathodes, allows a plasma to start. Ions from the plasma are repelled by the anode's electric field. This creates an ion beam. Surface modification Surface cleaning and pretreatment for large area deposition Thin film deposition Deposition of thick diamond-like carbon (DLC) films Surface roughening of polymers for improved adhesion and/or biocompatibility See also Ion beam RF antenna ion source On-Line Isotope Mass Separator References Ions Accelerator physics
Ion source
[ "Physics", "Chemistry" ]
5,902
[ "Matter", "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Ion source", "Experimental physics", "Mass spectrometry", "Accelerator physics", "Ions" ]
326,395
https://en.wikipedia.org/wiki/Lagrange%27s%20four-square%20theorem
Lagrange's four-square theorem, also known as Bachet's conjecture, states that every nonnegative integer can be represented as a sum of four nonnegative integer squares. That is, the squares form an additive basis of order four: n = a0^2 + a1^2 + a2^2 + a3^2, where the four numbers a0, a1, a2, a3 are integers. For illustration, 3, 31, and 310 can be represented as the sum of four squares as follows: 3 = 1^2 + 1^2 + 1^2 + 0^2, 31 = 5^2 + 2^2 + 1^2 + 1^2, and 310 = 17^2 + 4^2 + 2^2 + 1^2. This theorem was proven by Joseph Louis Lagrange in 1770. It is a special case of the Fermat polygonal number theorem. Historical development From examples given in the Arithmetica, it is clear that Diophantus was aware of the theorem. This book was translated in 1621 into Latin by Bachet (Claude Gaspard Bachet de Méziriac), who stated the theorem in the notes of his translation. But the theorem was not proved until 1770 by Lagrange. Adrien-Marie Legendre extended the theorem in 1797–8 with his three-square theorem, by proving that a positive integer can be expressed as the sum of three squares if and only if it is not of the form 4^k(8m + 7) for integers k and m. Later, in 1834, Carl Gustav Jakob Jacobi discovered a simple formula for the number of representations of an integer as the sum of four squares with his own four-square theorem. The formula is also linked to Descartes' theorem of four "kissing circles", which involves the sum of the squares of the curvatures of four circles. This is also linked to Apollonian gaskets, which were more recently related to the Ramanujan–Petersson conjecture. Proofs The classical proof Several very similar modern versions of Lagrange's proof exist. The proof below is a slightly simplified version, in which the cases for which m is even or odd do not require separate arguments. Proof using the Hurwitz integers Another way to prove the theorem relies on Hurwitz quaternions, which are the analog of integers for quaternions. Generalizations Lagrange's four-square theorem is a special case of the Fermat polygonal number theorem and Waring's problem. Another possible generalization is the following problem: Given natural numbers a, b, c, and d, can we solve n = a·x1^2 + b·x2^2 + c·x3^2 + d·x4^2 for all positive integers n in integers x1, x2, x3, x4? The case a = b = c = d = 1 is answered in the positive by Lagrange's four-square theorem. The general solution was given by Ramanujan. He proved that if we assume, without loss of generality, that a ≤ b ≤ c ≤ d, then there are exactly 54 possible choices for a, b, c, and d such that the problem is solvable in integers x1, x2, x3, x4 for all n. (Ramanujan listed a 55th possibility a = 1, b = 2, c = 5, d = 5, but in this case the problem is not solvable if n = 33.) Algorithms In 1986, Michael O. Rabin and Jeffrey Shallit proposed randomized polynomial-time algorithms for computing a single representation n = x1^2 + x2^2 + x3^2 + x4^2 for a given integer n; the expected running time was further improved by Paul Pollack and Enrique Treviño in 2018. Number of representations The number of representations of a natural number n as the sum of four squares of integers is denoted by r4(n). Jacobi's four-square theorem states that this is eight times the sum of the divisors of n if n is odd and 24 times the sum of the odd divisors of n if n is even (see divisor function). Equivalently, it is eight times the sum of all its divisors which are not divisible by 4. We may also write this as r4(n) = 8σ(n) − 32σ(n/4), where the second term is to be taken as zero if n is not divisible by 4. In particular, for a prime number p we have the explicit formula r4(p) = 8(p + 1). Some values of r4(n) occur infinitely often, since r4(n) = r4(4n) whenever n is even. The values of r4(n)/n can be arbitrarily large: indeed, r4(n)/n is infinitely often larger than 8.
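To make Jacobi's count concrete, the following is a minimal sketch in Python (illustrative only, not part of the original article) that compares a brute-force count of ordered representations with the divisor-sum formula:

# Brute-force r4(n): count ordered 4-tuples of integers whose squares
# sum to n, and compare with Jacobi's formula, 8 times the sum of the
# divisors of n that are not divisible by 4.
from math import isqrt

def r4_brute(n):
    count = 0
    s = isqrt(n)
    for a in range(-s, s + 1):
        for b in range(-s, s + 1):
            for c in range(-s, s + 1):
                d2 = n - a*a - b*b - c*c
                if d2 < 0:
                    continue
                d = isqrt(d2)
                if d * d == d2:
                    count += 2 if d > 0 else 1  # counts both +d and -d
    return count

def r4_jacobi(n):
    return 8 * sum(m for m in range(1, n + 1) if n % m == 0 and m % 4 != 0)

for n in range(1, 21):
    assert r4_brute(n) == r4_jacobi(n)
print(r4_jacobi(1), r4_jacobi(2), r4_jacobi(3))  # 8 24 32

For example, r4(1) = 8 counts the eight vectors with a single coordinate equal to ±1 and the rest zero.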
Uniqueness The sequence of positive integers which have only one representation as a sum of four squares of non-negative integers (up to order) is: 1, 2, 3, 5, 6, 7, 8, 11, 14, 15, 23, 24, 32, 56, 96, 128, 224, 384, 512, 896, ... . These integers consist of the seven odd numbers 1, 3, 5, 7, 11, 15, 23 and all numbers of the form 2(4^k), 6(4^k) or 14(4^k). The sequence of positive integers which cannot be represented as a sum of four non-zero squares is: 1, 2, 3, 5, 6, 8, 9, 11, 14, 17, 24, 29, 32, 41, 56, 96, 128, 224, 384, 512, 896, ... . These integers consist of the eight odd numbers 1, 3, 5, 9, 11, 17, 29, 41 and all numbers of the form 2(4^k), 6(4^k) or 14(4^k). Further refinements Lagrange's four-square theorem can be refined in various ways. For example, Zhi-Wei Sun proved that each natural number can be written as a sum of four squares with some requirements on the choice of these four numbers. One may also wonder whether it is necessary to use the entire set of square integers to write each natural number as the sum of four squares. Eduard Wirsing proved that there exists a relatively small set S of squares such that every positive integer smaller than or equal to N can be written as a sum of at most 4 elements of S. See also Fermat's theorem on sums of two squares Fermat's polygonal number theorem Waring's problem Legendre's three-square theorem Sum of two squares theorem Sum of squares function 15 and 290 theorems Notes References External links Proof at PlanetMath.org Another proof An applet decomposing numbers as sums of four squares OEIS index to sequences related to sums of squares and sums of cubes Additive number theory Articles containing proofs Squares in number theory Theorems in number theory
Lagrange's four-square theorem
[ "Mathematics" ]
1,223
[ "Mathematical theorems", "Theorems in number theory", "Articles containing proofs", "Mathematical problems", "Squares in number theory", "Number theory" ]
326,430
https://en.wikipedia.org/wiki/Finitely%20generated%20module
In mathematics, a finitely generated module is a module that has a finite generating set. A finitely generated module over a ring R may also be called a finite R-module, finite over R, or a module of finite type. Related concepts include finitely cogenerated modules, finitely presented modules, finitely related modules and coherent modules, all of which are defined below. Over a Noetherian ring the concepts of finitely generated, finitely presented and coherent modules coincide. A finitely generated module over a field is simply a finite-dimensional vector space, and a finitely generated module over the integers is simply a finitely generated abelian group. Definition The left R-module M is finitely generated if there exist a1, a2, ..., an in M such that for any x in M, there exist r1, r2, ..., rn in R with x = r1a1 + r2a2 + ... + rnan. The set {a1, a2, ..., an} is referred to as a generating set of M in this case. A finite generating set need not be a basis, since it need not be linearly independent over R. What is true is: M is finitely generated if and only if there is a surjective R-linear map Rn → M for some n (M is a quotient of a free module of finite rank). If a set S generates a module that is finitely generated, then there is a finite generating set that is included in S, since only finitely many elements in S are needed to express the generators in any finite generating set, and these finitely many elements form a generating set. However, it may occur that S does not contain any finite generating set of minimal cardinality. For example, the set of the prime numbers is a generating set of Z viewed as a Z-module, and a generating set formed from prime numbers has at least two elements, while the singleton {1} is also a generating set. In the case where the module M is a vector space over a field R, and the generating set is linearly independent, n is well-defined and is referred to as the dimension of M (well-defined means that any linearly independent generating set has n elements: this is the dimension theorem for vector spaces). Any module is the union of the directed set of its finitely generated submodules. A module M is finitely generated if and only if any increasing chain Mi of submodules with union M stabilizes: i.e., there is some i such that Mi = M. This fact with Zorn's lemma implies that every nonzero finitely generated module admits maximal submodules. If any increasing chain of submodules stabilizes (i.e., any submodule is finitely generated), then the module M is called a Noetherian module. Examples If a module is generated by one element, it is called a cyclic module. Let R be an integral domain with K its field of fractions. Then every finitely generated R-submodule I of K is a fractional ideal: that is, there is some nonzero r in R such that rI is contained in R. Indeed, one can take r to be the product of the denominators of the generators of I. If R is Noetherian, then every fractional ideal arises in this way. Finitely generated modules over the ring of integers Z coincide with the finitely generated abelian groups. These are completely classified by the structure theorem, taking Z as the principal ideal domain. Finitely generated (say left) modules over a division ring are precisely finite-dimensional vector spaces (over the division ring). Some facts Every homomorphic image of a finitely generated module is finitely generated. In general, submodules of finitely generated modules need not be finitely generated. As an example, consider the ring R = Z[X1, X2, ...]
of all polynomials in countably many variables. R itself is a finitely generated R-module (with {1} as generating set). Consider the submodule K consisting of all those polynomials with zero constant term. Since every polynomial contains only finitely many terms whose coefficients are non-zero, the R-module K is not finitely generated. In general, a module is said to be Noetherian if every submodule is finitely generated. A finitely generated module over a Noetherian ring is a Noetherian module (and indeed this property characterizes Noetherian rings): A module over a Noetherian ring is finitely generated if and only if it is a Noetherian module. This resembles, but is not exactly, Hilbert's basis theorem, which states that the polynomial ring R[X] over a Noetherian ring R is Noetherian. Both facts imply that a finitely generated commutative algebra over a Noetherian ring is again a Noetherian ring. More generally, an algebra (e.g., ring) that is a finitely generated module is a finitely generated algebra. Conversely, if a finitely generated algebra is integral (over the coefficient ring), then it is a finitely generated module. (See integral element for more.) Let 0 → M′ → M → M′′ → 0 be an exact sequence of modules. Then M is finitely generated if M′, M′′ are finitely generated. There are some partial converses to this. If M is finitely generated and M′′ is finitely presented (which is stronger than finitely generated; see below), then M′ is finitely generated. Also, M is Noetherian (resp. Artinian) if and only if M′, M′′ are Noetherian (resp. Artinian). Let B be a ring and A its subring such that B is a faithfully flat right A-module. Then a left A-module F is finitely generated (resp. finitely presented) if and only if the B-module B ⊗A F is finitely generated (resp. finitely presented). The Forster–Swan theorem gives an upper bound for the minimal number of generators of a finitely generated module M over a commutative Noetherian ring. Finitely generated modules over a commutative ring For finitely generated modules over a commutative ring R, Nakayama's lemma is fundamental. Sometimes, the lemma allows one to prove finite-dimensional vector space phenomena for finitely generated modules. For example, if f : M → M is a surjective R-endomorphism of a finitely generated module M, then f is also injective, and hence is an automorphism of M. This says simply that M is a Hopfian module. Similarly, an Artinian module M is coHopfian: any injective endomorphism f is also a surjective endomorphism. Any R-module is an inductive limit of finitely generated R-submodules. This is useful for weakening an assumption to the finite case (e.g., the characterization of flatness with the Tor functor). An example of a link between finite generation and integral elements can be found in commutative algebras. To say that a commutative algebra A is a finitely generated ring over R means that there exists a finite set G of elements of A such that the smallest subring of A containing G and R is A itself. Because the ring product may be used to combine elements, more than just R-linear combinations of elements of G are generated. For example, a polynomial ring R[x] is finitely generated by {1, x} as a ring, but not as a module. If A is a commutative algebra (with unity) over R, then the following two statements are equivalent: A is a finitely generated R-module. A is both a finitely generated ring over R and an integral extension of R. Generic rank Let M be a finitely generated module over an integral domain A with the field of fractions K.
Then the dimension dimK(M ⊗A K) is called the generic rank of M over A. This number is the same as the number of maximal A-linearly independent vectors in M or equivalently the rank of a maximal free submodule F of M (cf. Rank of an abelian group). Since (M/F) ⊗A K = 0, the quotient M/F is a torsion module. When A is Noetherian, by generic freeness, there is an element f (depending on M) such that the localization of M at f is a free module over the localization of A at f. Then the rank of this free module is the generic rank of M. Now suppose the integral domain A is generated as algebra over a field k by finitely many homogeneous elements of degrees d1, ..., dm. Suppose M is graded as well and let P(t) be the Poincaré series of M. By the Hilbert–Serre theorem, there is a polynomial F such that P(t) = F(t) ∏ (1 − t^di)^−1. Then F(1) is the generic rank of M. A finitely generated module over a principal ideal domain is torsion-free if and only if it is free. This is a consequence of the structure theorem for finitely generated modules over a principal ideal domain, the basic form of which says a finitely generated module over a PID is a direct sum of a torsion module and a free module. But it can also be shown directly as follows: let M be a torsion-free finitely generated module over a PID A and F a maximal free submodule. Let f be a nonzero element of A such that fM ⊆ F. Then fM is free since it is a submodule of a free module and A is a PID. But now the map M → fM, x ↦ fx, is an isomorphism since M is torsion-free. By the same argument as above, a finitely generated module over a Dedekind domain A (or more generally a semi-hereditary ring) is torsion-free if and only if it is projective; consequently, a finitely generated module over A is a direct sum of a torsion module and a projective module. A finitely generated projective module over a Noetherian integral domain has constant rank and so the generic rank of a finitely generated module over A is the rank of its projective part. Equivalent definitions and finitely cogenerated modules The following conditions are equivalent to M being finitely generated (f.g.): For any family of submodules {Ni | i ∈ I} in M, if ∑i∈I Ni = M, then ∑i∈F Ni = M for some finite subset F of I. For any chain of submodules {Ni | i ∈ I} in M, if ⋃i∈I Ni = M, then Ni = M for some i in I. If φ : ⊕i∈I R → M is an epimorphism, then the restriction φ : ⊕i∈F R → M is an epimorphism for some finite subset F of I. From these conditions it is easy to see that being finitely generated is a property preserved by Morita equivalence. The conditions are also convenient to define a dual notion of a finitely cogenerated module M. The following conditions are equivalent to a module being finitely cogenerated (f.cog.): For any family of submodules {Ni | i ∈ I} in M, if ⋂i∈I Ni = 0, then ⋂i∈F Ni = 0 for some finite subset F of I. For any chain of submodules {Ni | i ∈ I} in M, if ⋂i∈I Ni = 0, then Ni = 0 for some i in I. If φ : M → ∏i∈I Mi is a monomorphism, where each Mi is an R-module, then the composition M → ∏i∈F Mi is a monomorphism for some finite subset F of I. Both f.g. modules and f.cog. modules have interesting relationships to Noetherian and Artinian modules, and the Jacobson radical J(M) and socle soc(M) of a module. The following facts illustrate the duality between the two conditions. For a module M: M is Noetherian if and only if every submodule N of M is f.g. M is Artinian if and only if every quotient module M/N is f.cog. M is f.g. if and only if J(M) is a superfluous submodule of M, and M/J(M) is f.g. M is f.cog. if and only if soc(M) is an essential submodule of M, and soc(M) is f.g. If M is a semisimple module (such as soc(N) for any module N), it is f.g. if and only if f.cog. If M is f.g. and nonzero, then M has a maximal submodule and any quotient module M/N is f.g. If M is f.cog.
and nonzero, then M has a minimal submodule, and any submodule N of M is f.cog. If N and M/N are f.g. then so is M. The same is true if "f.g." is replaced with "f.cog." Finitely cogenerated modules must have finite uniform dimension. This is easily seen by applying the characterization using the finitely generated essential socle. Somewhat asymmetrically, finitely generated modules do not necessarily have finite uniform dimension. For example, an infinite direct product of nonzero rings is a finitely generated (cyclic!) module over itself, however it clearly contains an infinite direct sum of nonzero submodules. Finitely generated modules do not necessarily have finite co-uniform dimension either: any ring R with unity such that R/J(R) is not a semisimple ring is a counterexample. Finitely presented, finitely related, and coherent modules Another formulation is this: a finitely generated module M is one for which there is an epimorphism mapping Rk onto M: f : Rk → M. Suppose now there is an epimorphism φ : F → M for a module M and free module F. If the kernel of φ is finitely generated, then M is called a finitely related module. Since M is isomorphic to F/ker(φ), this basically expresses that M is obtained by taking a free module and introducing finitely many relations within F (the generators of ker(φ)). If the kernel of φ is finitely generated and F has finite rank (i.e., F ≅ Rk), then M is said to be a finitely presented module. Here, M is specified using finitely many generators (the images of the k generators of Rk) and finitely many relations (the generators of ker(φ)). See also: free presentation. Finitely presented modules can be characterized by an abstract property within the category of R-modules: they are precisely the compact objects in this category. A coherent module M is a finitely generated module whose finitely generated submodules are finitely presented. Over any ring R, coherent modules are finitely presented, and finitely presented modules are both finitely generated and finitely related. For a Noetherian ring R, finitely generated, finitely presented, and coherent are equivalent conditions on a module. Some crossover occurs for projective or flat modules. A finitely generated projective module is finitely presented, and a finitely related flat module is projective. It is true also that the following conditions are equivalent for a ring R: R is a right coherent ring. The module RR is a coherent module. Every finitely presented right R-module is coherent. Although coherence seems like a more cumbersome condition than finitely generated or finitely presented, it is nicer than them since the category of coherent modules is an abelian category, while, in general, neither finitely generated nor finitely presented modules form an abelian category. See also Integral element Artin–Rees lemma Countably generated module Finite algebra References Textbooks Module theory
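As a concrete illustration of the structure theorem over a PID discussed above, the Smith normal form of an integer relation matrix exhibits the cyclic decomposition of a finitely generated Z-module. A short sketch using SymPy (the matrix is a standard textbook example; smith_normal_form is assumed to be available, as in recent SymPy releases):

# The Z-module Z^3 / A*Z^3 decomposes according to the Smith normal
# form of A: diagonal entries d1 | d2 | d3 give the cyclic summands
# Z/d1 (+) Z/d2 (+) Z/d3; a zero entry would contribute a free
# summand Z, i.e. contribute to the generic rank.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, -4, -16]])
print(smith_normal_form(A, domain=ZZ))
# expected diag(2, 6, 12), so Z^3/A*Z^3 = Z/2 (+) Z/6 (+) Z/12,
# a pure torsion module (generic rank 0)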
Finitely generated module
[ "Mathematics" ]
3,310
[ "Fields of abstract algebra", "Module theory" ]
326,454
https://en.wikipedia.org/wiki/Free%20module
In mathematics, a free module is a module that has a basis, that is, a generating set that is linearly independent. Every vector space is a free module, but, if the ring of the coefficients is not a division ring (not a field in the commutative case), then there exist non-free modules. Given any set E and ring R, there is a free R-module with basis E, which is called the free module on E or module of formal R-linear combinations of the elements of E. A free abelian group is precisely a free module over the ring of integers. Definition For a ring R and an R-module M, the set E ⊆ M is a basis for M if: E is a generating set for M; that is to say, every element of M is a finite sum of elements of E multiplied by coefficients in R; and E is linearly independent: for every set {e1, ..., en} of distinct elements of E, r1e1 + r2e2 + ... + rnen = 0M implies that r1 = r2 = ... = rn = 0R (where 0M is the zero element of M and 0R is the zero element of R). A free module is a module with a basis. An immediate consequence of the second half of the definition is that the coefficients in the first half are unique for each element of M. If R has invariant basis number, then by definition any two bases have the same cardinality. For example, nonzero commutative rings have invariant basis number. The cardinality of any (and therefore every) basis is called the rank of the free module M. If this cardinality is finite, the free module is said to be free of finite rank, or free of rank n if the rank is known to be n. Examples Let R be a ring. R is a free module of rank one over itself (either as a left or right module); any unit element is a basis. More generally, if R is commutative, a nonzero ideal I of R is free if and only if it is a principal ideal generated by a nonzerodivisor, with a generator being a basis. Over a principal ideal domain (e.g., Z), a submodule of a free module is free. If R is commutative, the polynomial ring R[X] in indeterminate X is a free module with a possible basis 1, X, X2, .... Let A[t] be a polynomial ring over a commutative ring A, f a monic polynomial of degree d there, B = A[t]/(fA[t]), and ξ the image of t in B. Then B contains A as a subring and is free as an A-module with a basis 1, ξ, ..., ξd−1. For any non-negative integer n, Rn, the Cartesian product of n copies of R as a left R-module, is free. If R has invariant basis number, then its rank is n. A direct sum of free modules is free, while an infinite Cartesian product of free modules is generally not free (cf. the Baer–Specker group). A finitely generated module over a commutative local ring is free if and only if it is faithfully flat. Also, Kaplansky's theorem states a projective module over a (possibly non-commutative) local ring is free. Sometimes, whether a module is free or not is undecidable in the set-theoretic sense. A famous example is the Whitehead problem, which asks whether a Whitehead group is free or not. As it turns out, the problem is independent of ZFC. Formal linear combinations Given a set E and ring R, there is a free R-module that has E as a basis: namely, the direct sum of copies of R indexed by E, denoted R(E). Explicitly, it is the submodule of the Cartesian product RE (R is viewed as say a left module) that consists of the elements that have only finitely many nonzero components. One can embed E into R(E) as a subset by identifying an element e with that element of R(E) whose e-th component is 1 (the unity of R) and all the other components are zero. Then each element of R(E) can be written uniquely as a sum ∑ ce e over e in E, where only finitely many of the coefficients ce are nonzero. It is called a formal linear combination of elements of E. A similar argument shows that every free left (resp.
right) R-module is isomorphic to a direct sum of copies of R as left (resp. right) module. Another construction The free module R(E) may also be constructed in the following equivalent way. Given a ring R and a set E, first as a set we let C(E, R) be the set of all functions f : E → R such that f(x) = 0 for all but finitely many x in E. We equip it with a structure of a left module such that the addition is defined by: (f + g)(x) = f(x) + g(x) for x in E, and the scalar multiplication by: (rf)(x) = r f(x) for r in R and x in E. Now, as an R-valued function on E, each f in C(E, R) can be written uniquely as a finite sum f = ∑ rx δx over x in E, where the rx are in R and only finitely many of them are nonzero, and δx is given as δx(y) = 1 if y = x and δx(y) = 0 otherwise (this is a variant of the Kronecker delta). The above means that the subset {δx | x in E} of C(E, R) is a basis of C(E, R). The mapping e ↦ δe is a bijection between E and this basis. Through this bijection, C(E, R) is a free module with the basis E. Universal property The inclusion mapping ι : E → R(E) defined above is universal in the following sense. Given an arbitrary function f : E → N from a set E to a left R-module N, there exists a unique module homomorphism f̄ : R(E) → N such that f̄ ∘ ι = f; namely, f̄ is defined by the formula: f̄(∑ re e) = ∑ re f(e), and f̄ is said to be obtained by extending f by linearity. The uniqueness means that each R-linear map R(E) → N is uniquely determined by its restriction to E. As usual for universal properties, this defines R(E) up to a canonical isomorphism. Also the formation of R(E) for each set E determines a functor E ↦ R(E), from the category of sets to the category of left R-modules. It is called the free functor and satisfies the natural relation Hom(R(E), N) ≅ Hom(E, U(N)) for each set E and a left module N, where U is the forgetful functor, meaning the free functor is a left adjoint of the forgetful functor. Generalizations Many statements true for free modules extend to certain larger classes of modules. Projective modules are direct summands of free modules. Flat modules are defined by the property that tensoring with them preserves exact sequences. Torsion-free modules form an even broader class. For a finitely generated module over a PID (such as Z), the properties free, projective, flat, and torsion-free are equivalent. See local ring, perfect ring and Dedekind ring. See also Free object Projective object free presentation free resolution Quillen–Suslin theorem stably free module generic freeness Notes References Module theory Free algebraic structures
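The function-with-finite-support construction above translates directly into code. The following is an illustrative Python sketch (the helper names are my own, not standard) of the free Z-module on a set E, together with the extension-by-linearity of the universal property:

# Elements of the free Z-module on a set E are finitely supported
# dicts mapping basis elements to nonzero integer coefficients.
def add(f, g):
    # pointwise addition, dropping zero coefficients
    h = dict(f)
    for e, c in g.items():
        h[e] = h.get(e, 0) + c
        if h[e] == 0:
            del h[e]
    return h

def scale(r, f):
    # scalar multiplication by the integer r
    return {} if r == 0 else {e: r * c for e, c in f.items()}

def delta(e):
    # the basis element delta_e (Kronecker delta at e)
    return {e: 1}

def extend_by_linearity(phi):
    # universal property: a set map phi: E -> N extends uniquely to a
    # module homomorphism on the free module (here N = Z for simplicity)
    return lambda f: sum(c * phi(e) for e, c in f.items())

x = add(scale(3, delta("a")), scale(-2, delta("b")))      # 3a - 2b
fbar = extend_by_linearity(lambda e: {"a": 5, "b": 1}[e])
print(x, fbar(x))   # {'a': 3, 'b': -2} 13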
Free module
[ "Mathematics" ]
1,315
[ "Mathematical structures", "Fields of abstract algebra", "Category theory", "Module theory", "Algebraic structures", "Free algebraic structures" ]
326,462
https://en.wikipedia.org/wiki/Undersampling
In signal processing, undersampling or bandpass sampling is a technique where one samples a bandpass-filtered signal at a sample rate below its Nyquist rate (twice the upper cutoff frequency), but is still able to reconstruct the signal. When one undersamples a bandpass signal, the samples are indistinguishable from the samples of a low-frequency alias of the high-frequency signal. Such sampling is also known as bandpass sampling, harmonic sampling, IF sampling, and direct IF-to-digital conversion. Description The Fourier transforms of real-valued functions are symmetrical around the 0 Hz axis. After sampling, only a periodic summation of the Fourier transform (called discrete-time Fourier transform) is still available. The individual frequency-shifted copies of the original transform are called aliases. The frequency offset between adjacent aliases is the sampling rate, denoted by fs. When the aliases are mutually exclusive (spectrally), the original transform and the original continuous function, or a frequency-shifted version of it (if desired), can be recovered from the samples. The first and third graphs of Figure 1 depict a baseband spectrum before and after being sampled at a rate that completely separates the aliases. The second graph of Figure 1 depicts the frequency profile of a bandpass function occupying the band (A, A+B) (shaded blue) and its mirror image (shaded beige). The condition for a non-destructive sample rate is that the aliases of both bands do not overlap when shifted by all integer multiples of fs. The fourth graph depicts the spectral result of sampling at the same rate as the baseband function. The rate was chosen by finding the lowest rate that is an integer sub-multiple of A and also satisfies the baseband Nyquist criterion: fs > 2B. Consequently, the bandpass function has effectively been converted to baseband. All the other rates that avoid overlap are given by these more general criteria, where A and A+B are replaced by fL and fH, respectively: 2fH/n ≤ fs ≤ 2fL/(n − 1), for any integer n satisfying: 1 ≤ n ≤ fH/(fH − fL). The highest n for which the condition is satisfied leads to the lowest possible sampling rates. Important signals of this sort include a radio's intermediate-frequency (IF), radio-frequency (RF) signal, and the individual channels of a filter bank. If n > 1, then the conditions result in what is sometimes referred to as undersampling, bandpass sampling, or using a sampling rate less than the Nyquist rate (2fH). For the case of a given sampling frequency, simpler formulae for the constraints on the signal's spectral band are given below. Example: Consider FM radio to illustrate the idea of undersampling. In the US, FM radio operates on the frequency band from fL = 88 MHz to fH = 108 MHz. The bandwidth is given by B = fH − fL = 108 MHz − 88 MHz = 20 MHz. The sampling conditions are satisfied for 1 ≤ n ≤ fH/(fH − fL) = 108/20 = 5.4. Therefore, n can be 1, 2, 3, 4, or 5. The value n = 5 gives the lowest sampling frequency interval, 43.2 MHz ≤ fs ≤ 44 MHz, and this is a scenario of undersampling. In this case, the signal spectrum fits between 2 and 2.5 times the sampling rate (higher than 86.4–88 MHz but lower than 108–110 MHz). A lower value of n will also lead to a useful sampling rate. For example, using n = 4, the FM band spectrum fits easily between 1.5 and 2.0 times the sampling rate, for a sampling rate near 56 MHz (multiples of the Nyquist frequency being 28, 56, 84, 112, etc.). When undersampling a real-world signal, the sampling circuit must be fast enough to capture the highest signal frequency of interest.
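The rate-selection arithmetic above is easy to check programmatically; here is a brief illustrative Python sketch enumerating the permissible rate intervals for the FM example (the practical front-end requirements are discussed next):

# Enumerate the non-overlapping sampling-rate ranges
# 2*fH/n <= fs <= 2*fL/(n-1), for integers 1 <= n <= fH/(fH - fL).
def undersampling_ranges(f_low, f_high):
    n_max = int(f_high // (f_high - f_low))
    for n in range(1, n_max + 1):
        lo = 2 * f_high / n
        hi = 2 * f_low / (n - 1) if n > 1 else float("inf")
        yield n, lo, hi

for n, lo, hi in undersampling_ranges(88e6, 108e6):   # US FM band
    print(f"n={n}: {lo/1e6:.2f} MHz <= fs <= {hi/1e6:.2f} MHz")
# n=5 gives 43.20 MHz <= fs <= 44.00 MHz, the lowest usable interval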
Theoretically, each sample should be taken during an infinitesimally short interval, but this is not practically feasible. Instead, the sampling of the signal should be made in a short enough interval that it can represent the instantaneous value of the signal with the highest frequency. This means that in the FM radio example above, the sampling circuit must be able to capture a signal with a frequency of 108 MHz, not 43.2 MHz. Thus, the sampling frequency may be only a little bit greater than 43.2 MHz, but the input bandwidth of the system must be at least 108 MHz. Similarly, the accuracy of the sampling timing, or aperture uncertainty of the sampler, frequently the analog-to-digital converter, must be appropriate for the frequencies being sampled, 108 MHz, not the lower sample rate. If the sampling theorem is interpreted as requiring twice the highest frequency, then the required sampling rate would be assumed to be greater than the Nyquist rate of 216 MHz. While this does satisfy the last condition on the sampling rate, the signal is grossly oversampled. Note that if a band is sampled with n > 1, then a band-pass filter is required for the anti-aliasing filter, instead of a lowpass filter. As we have seen, the normal baseband condition for reversible sampling is that X(f) = 0 outside the open interval (−fs/2, fs/2), and the reconstructive interpolation function, or lowpass filter impulse response, is sinc(t/T), where T = 1/fs and sinc(x) = sin(πx)/(πx). To accommodate undersampling, the bandpass condition is that X(f) = 0 outside the union of open positive and negative frequency bands ((n − 1)·fs/2, n·fs/2) and (−n·fs/2, −(n − 1)·fs/2) for some positive integer n, which includes the normal baseband condition as case n = 1 (except that where the intervals come together at 0 frequency, they can be closed). The corresponding interpolation function is the bandpass filter given by this difference of lowpass impulse responses: n·sinc(n·t/T) − (n − 1)·sinc((n − 1)·t/T). On the other hand, reconstruction is not usually the goal with sampled IF or RF signals. Rather, the sample sequence can be treated as ordinary samples of the signal frequency-shifted to near baseband, and digital demodulation can proceed on that basis, recognizing the spectrum mirroring when n is even. Further generalizations of undersampling to signals with multiple bands and to signals over multidimensional domains (space or space-time) are possible, and have been worked out in detail by Igor Kluvánek. See also Drizzle (image processing) References Signal processing
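A quick numerical check of the folding arithmetic (illustrative Python with NumPy): a 100 MHz tone sampled at 43.5 MHz, a rate inside the n = 5 interval of the FM example, should appear at the predicted alias frequency.

# Sample a bandpass tone below its Nyquist rate and verify that the
# spectral peak lands at the alias frequency |f0 - round(f0/fs)*fs|.
import numpy as np

fs, f0, N = 43.5e6, 100.0e6, 4096
t = np.arange(N) / fs
x = np.cos(2 * np.pi * f0 * t)

spectrum = np.abs(np.fft.rfft(x))
f_peak = np.fft.rfftfreq(N, 1 / fs)[np.argmax(spectrum)]
f_pred = abs(f0 - round(f0 / fs) * fs)
print(f"peak at {f_peak/1e6:.2f} MHz, predicted {f_pred/1e6:.2f} MHz")  # ~13 MHz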
Undersampling
[ "Technology", "Engineering" ]
1,277
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
984,061
https://en.wikipedia.org/wiki/Orbit%20phasing
In astrodynamics, orbit phasing is the adjustment of the time-position of a spacecraft along its orbit, usually described as adjusting the orbiting spacecraft's true anomaly. Orbital phasing is primarily used in scenarios where a spacecraft in a given orbit must be moved to a different location within the same orbit. The change in position within the orbit is usually defined as the phase angle, φ, and is the change in true anomaly required between the spacecraft's current position and the final position. The phase angle can be converted into time using Kepler's equation: t = (T1 / 2π)(E − e1 sin E), where t is defined as time elapsed to cover phase angle in original orbit T1 is defined as period of original orbit E is defined as change of eccentric anomaly between spacecraft and final position e1 is defined as orbital eccentricity of original orbit φ is defined as change in true anomaly between spacecraft and final position This time derived from the phase angle is the required time the spacecraft must gain or lose to be located at the final position within the orbit. To gain or lose this time, the spacecraft must be subjected to a simple two-impulse Hohmann transfer which takes the spacecraft away from, and then back to, its original orbit. The first impulse to change the spacecraft's orbit is performed at a specific point in the original orbit (point of impulse, POI), usually performed in the original orbit's periapsis or apoapsis. The impulse creates a new orbit called the "phasing orbit" and is larger or smaller than the original orbit resulting in a different period time than the original orbit. The difference in period time between the original and phasing orbits will be equal to the time converted from the phase angle. Once one period of the phasing orbit is complete, the spacecraft will return to the POI and the spacecraft will once again be subjected to a second impulse, equal and opposite to the first impulse, to return it to the original orbit. When complete, the spacecraft will be in the targeted final position within the original orbit. To find some of the phasing orbital parameters, first one must find the required period time of the phasing orbit using the following equation: T2 = T1 − t (taking T2 = T1 + t instead when the spacecraft must lose time rather than gain it),
where T1 is defined as period of original orbit T2 is defined as period of phasing orbit t is defined as time elapsed to cover phase angle in original orbit Once the phasing orbit period is determined, the phasing orbit semimajor axis can be derived from the period formula: a2 = (μ (T2 / 2π)^2)^(1/3) where a2 is defined as semimajor axis of phasing orbit T2 is defined as period of phasing orbit μ is defined as Standard gravitational parameter From the semimajor axis, the phase orbit apogee and perigee can be calculated: ra + rp = 2a2, with the radius at the POI serving as one of the two apsides, so that, for example, rp = 2a2 − ra where a2 is defined as semimajor axis of phasing orbit ra is defined as apogee of phasing orbit rp is defined as perigee of phasing orbit Finally, the phasing orbit's angular momentum can be found from the equation: h2 = √(2μ ra rp / (ra + rp)) where h2 is defined as angular momentum of phasing orbit ra is defined as apogee of phasing orbit rp is defined as perigee of phasing orbit μ is defined as Standard gravitational parameter To find the impulse required to change the spacecraft from its original orbit to the phasing orbit, the change of spacecraft velocity, ∆V, at the POI must be calculated from the angular momentum formula: v1 = h1 / r and v2 = h2 / r (the velocity is perpendicular to the radius vector at the POI, an apsis of both orbits), so ∆V = v2 − v1 = (h2 − h1) / r where ∆V is change in velocity between phasing and original orbits at POI v1 is defined as the spacecraft velocity at POI in original orbit v2 is defined as the spacecraft velocity at POI in phasing orbit r is defined as radius of spacecraft from the orbit's focal point to POI h1 is defined as specific angular momentum of the original orbit h2 is defined as specific angular momentum of the phasing orbit Remember that this change in velocity, ∆V, is only the amount required to change the spacecraft from its original orbit to the phasing orbit. A second change in velocity, equal in magnitude but opposite in direction to the first, must be done after the spacecraft travels one phase orbit period to return the spacecraft from the phasing orbit to the original orbit. The total change of velocity required for the phasing maneuver is equal to two times ∆V. Orbit phasing can also be referred to as co-orbital rendezvous, as in a successful approach to a space station in a docking maneuver. Here, two spacecraft on the same orbit but at different true anomalies rendezvous by either one or both of the spacecraft entering phasing orbits which cause them to return to their original orbit at the same true anomaly at the same time. Phasing maneuvers are also commonly employed by geosynchronous satellites, either to conduct station-keeping maneuvers to maintain their orbit above a specific longitude, or to change longitude altogether. See also Orbital maneuver Hohmann transfer orbit Clohessy-Wiltshire equations for co-orbit analysis Space rendezvous References General Phasing Maneuver Astrodynamics
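Pulling the steps above together, here is a small numerical sketch in Python (values are illustrative; a circular original orbit is assumed, so e1 = 0 and the phase angle converts directly to time):

# One-revolution phasing maneuver from a circular orbit of radius r.
# The chaser must gain a phase angle phi (target leads by phi), so it
# enters a smaller, faster phasing orbit whose period is shorter by
# t = T1*phi/(2*pi); after one phasing revolution it meets the target
# at the POI.
from math import pi, sqrt

mu  = 3.986004418e14          # Earth's GM, m^3/s^2
r   = 7.0e6                   # original circular orbit radius, m
phi = pi / 18                 # phase angle to gain, rad (10 degrees)

T1 = 2 * pi * sqrt(r**3 / mu)             # original period
t  = T1 * phi / (2 * pi)                  # time to be gained
T2 = T1 - t                               # phasing-orbit period
a2 = (mu * (T2 / (2 * pi))**2) ** (1/3)   # phasing semimajor axis
rp = 2 * a2 - r                           # POI is the phasing apoapsis
h1 = sqrt(mu * r)                         # original angular momentum
h2 = sqrt(2 * mu * r * rp / (r + rp))     # phasing angular momentum
dv = abs(h2 - h1) / r                     # impulse at POI, applied twice

print(f"T1 = {T1:.0f} s, T2 = {T2:.0f} s, total dV = {2*dv:.1f} m/s")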
Orbit phasing
[ "Engineering" ]
1,035
[ "Astrodynamics", "Aerospace engineering" ]
984,070
https://en.wikipedia.org/wiki/Relief%20valve
A relief valve or pressure relief valve (PRV) is a type of safety valve used to control or limit the pressure in a system; excessive pressure might otherwise build up and create a process upset, instrument or equipment failure, explosion, or fire. Pressure relief Excess pressure is relieved by allowing the pressurized fluid to flow from an auxiliary passage out of the system. The relief valve is designed or set to open at a predetermined set pressure to protect pressure vessels and other equipment from being subjected to pressures that exceed their design limits. When the set pressure is exceeded, the relief valve becomes the "path of least resistance" as the valve is forced open and a portion of the fluid is diverted through the auxiliary route. In systems containing flammable fluids, the diverted fluid (liquid, gas or liquid-gas mixture) is either recaptured by a low pressure, high-flow vapor recovery system or is routed through a piping system known as a flare header or relief header to a central, elevated gas flare where it is burned, releasing naked combustion gases into the atmosphere. In non-hazardous systems, the fluid is often discharged to the atmosphere by suitable discharge pipework designed to prevent rainwater ingress, which can affect the set lift pressure, and positioned so as not to cause a hazard to personnel. As the fluid is diverted, the pressure inside the vessel will stop rising. Once it reaches the valve's reseating pressure, the valve will close. The blowdown is usually stated as a percentage of set pressure and refers to how much the pressure needs to drop before the valve reseats. The blowdown typically varies from roughly 2% to 20% of the set pressure, and some valves have adjustable blowdowns. In high-pressure gas systems, it is recommended that the outlet of the relief valve be in the open air. In systems where the outlet is connected to piping, the opening of a relief valve will cause a pressure build-up in the piping system downstream of the relief valve. This often means that the relief valve will not re-seat once the set pressure is reached. For these systems, so-called "differential" relief valves are often used. This means that the pressure is only working on an area that is much smaller than the area of the opening of the valve. If the valve is opened, the pressure has to decrease substantially before the valve closes, and the outlet pressure of the valve can easily keep the valve open. Another consideration is that if other relief valves are connected to the outlet pipe system, they may open as the pressure in the exhaust pipe system increases. This may cause undesired operation. In some cases, a so-called bypass valve acts as a relief valve by being used to return all or part of the fluid discharged by a pump or gas compressor back to either a storage reservoir or the inlet of the pump or gas compressor. This is done to protect the pump or gas compressor and any associated equipment from excessive pressure. The bypass valve and bypass path can be internal (an integral part of the pump or compressor) or external (installed as a component in the fluid path). Many fire engines have such relief valves to prevent the overpressurization of fire hoses. In other cases, equipment must be protected against being subjected to an internal vacuum (i.e., low pressure) that is lower than the equipment can withstand. In such cases, vacuum relief valves are used to open at a predetermined low-pressure limit and to admit air or an inert gas into the equipment to control the amount of vacuum.
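The set-pressure/reseat behaviour described above is a hysteresis loop: the valve opens at the set pressure and does not reseat until the pressure has fallen by the blowdown margin. A toy sketch in Python (numbers purely illustrative):

# Toy relief-valve hysteresis: opens at the set pressure, reseats
# only after the pressure falls to set_pressure*(1 - blowdown).
class ReliefValve:
    def __init__(self, set_pressure, blowdown=0.07):     # 7% blowdown
        self.set_pressure = set_pressure
        self.reseat_pressure = set_pressure * (1.0 - blowdown)
        self.is_open = False

    def update(self, pressure):
        if not self.is_open and pressure >= self.set_pressure:
            self.is_open = True      # valve pops: path of least resistance
        elif self.is_open and pressure <= self.reseat_pressure:
            self.is_open = False     # valve reseats
        return self.is_open

valve = ReliefValve(set_pressure=10.0)                   # bar, illustrative
for p in [9.0, 9.9, 10.1, 9.8, 9.5, 9.2, 9.3]:
    print(p, valve.update(p))
# stays open at 9.8 and 9.5 bar; reseats at 9.2 bar (below the
# 9.3 bar reseat point) and then remains closed at 9.3 bar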
Technical terms In the petroleum refining, petrochemical and chemical manufacturing, natural gas processing and power generation industries, the term relief valve is associated with the terms pressure relief valve (PRV), pressure safety valve (PSV) and safety valve: Pressure relief valve (PRV) or pressure release valve (PRV) or pressure safety valve (PSV): The difference is that PSVs have a manual lever to activate the valve in case of emergency. Most PRVs are spring operated. At lower pressures some use a diaphragm in place of a spring. The oldest PRV designs use a weight to seal the valve. Set pressure: When the system pressure increases to this value, the PRV opens. The accuracy of the set pressure may follow guidelines set by the American Society of Mechanical Engineers (ASME). Relief valve (RV): A valve used on liquid service, which opens proportionally as the increasing pressure overcomes the spring pressure. Safety valve (SV): Used in gas service. Most SVs are full lift or snap-acting, in that they pop completely open. Safety relief valve (SRV): A relief valve that can be used for gas or liquid service. However, the set pressure will usually only be accurate for one type of fluid at a time. Pilot-operated relief valve (POSRV, PORV, POPRV): A device that relieves by remote command from a pilot valve which is connected to the upstream system pressure. Low-pressure safety valve (LPSV): An automatic system that relieves by the static pressure of a gas. The relieving pressure is small and near the atmospheric pressure. Vacuum pressure safety valve (VPSV): An automatic system that relieves by the static pressure of a gas. The relieving pressure is small, negative, and near the atmospheric pressure. Low and vacuum pressure safety valve (LVPSV): An automatic system that relieves by the static pressure of a gas. The relieving pressure is small, negative, or positive, and near the atmospheric pressure. Pressure vacuum release valve (PVRV): A combination of vacuum pressure and a relief valve in one housing. Used on storage tanks for liquids to prevent implosion or overpressure. Snap acting: The opposite of modulating, refers to a valve that "pops" open. It snaps into a full lift in milliseconds. Usually accomplished with a skirt on the disc so that the fluid passing the seat suddenly affects a larger area and creates more lifting force. Modulating: Opens in proportion to the overpressure. Legal and code requirements in industry In most countries, industries are legally required to protect pressure vessels and other equipment by using relief valves. Also in most countries, equipment design codes such as those provided by the American Society of Mechanical Engineers (ASME), American Petroleum Institute (API) and other organizations like ISO (ISO 4126) must be complied with, and those codes include design standards for relief valves. The main standards, laws, or directives are: AD Merkblatt (German) American Petroleum Institute (API); Standards 520, 521, 526, and 2000 American Society of Mechanical Engineers (ASME); Boiler & Pressure Vessel Code, Section VIII Division 1 and Section I American Water Works Association (AWWA), storage tanks EN 764-7; European Standard based on the Pressure Equipment Directive 97/23/EC Eurocode EN 1993-4-2, storage tanks.
International Organization for Standardization; ISO 4126 Pressure Systems Safety Regulations 2000 (PSSR); UK Design Institute for Emergency Relief Systems (DIERS) Formed in 1977, the Design Institute for Emergency Relief Systems was a consortium of 29 companies under the auspices of the American Institute of Chemical Engineers (AIChE) that developed methods for the design of emergency relief systems to handle runaway reactions. Its purpose was to develop the technology and methods needed for sizing pressure relief systems for chemical reactors, particularly those in which exothermic reactions are carried out. Such reactions include many classes of industrially important processes including polymerizations, nitrations, diazotizations, sulphonations, epoxidations, aminations, esterifications, neutralizations, and many others. Pressure relief systems can be difficult to design, not least because what is expelled can be gas/vapor, liquid, or a mixture of the two – just as with a can of carbonated drink when it is suddenly opened. For chemical reactions, it requires extensive knowledge of both chemical reaction hazards and fluid flow. DIERS has investigated the two-phase vapor-liquid onset/disengagement dynamics and the hydrodynamics of emergency relief systems with extensive experimental and analysis work. Of particular interest to DIERS were the prediction of two-phase flow venting and the applicability of various sizing methods for two-phase vapor-liquid flashing flow. DIERS became a user's group in 1985. European DIERS Users' Group (EDUG) is a group of mainly European industrialists, consultants and academics who use the DIERS technology. The EDUG started in the late 1980s and has an annual meeting. A summary of many of the key aspects of the DIERS technology has been published in the UK by the HSE. See also Blowoff valve Rupture disc Safety valve Surge control References External links PED 97/23/EC; Pressure Equipment Directive – European Union. Pressure vessels Safety valves
Relief valve
[ "Physics", "Chemistry", "Engineering" ]
1,862
[ "Structural engineering", "Chemical equipment", "Physical systems", "Hydraulics", "Industrial safety devices", "Pressure vessels", "Safety valves" ]
984,081
https://en.wikipedia.org/wiki/Space%20rendezvous
A space rendezvous is a set of orbital maneuvers during which two spacecraft, one of which is often a space station, arrive at the same orbit and approach to a very close distance (e.g. within visual contact). Rendezvous requires a precise match of the orbital velocities and position vectors of the two spacecraft, allowing them to remain at a constant distance through orbital station-keeping. Rendezvous may or may not be followed by docking or berthing, procedures which bring the spacecraft into physical contact and create a link between them. The same rendezvous technique can be used for spacecraft "landing" on natural objects with a weak gravitational field, e.g. landing on one of the Martian moons would require the same matching of orbital velocities, followed by a "descent" that shares some similarities with docking. History In its first human spaceflight program Vostok, the Soviet Union launched pairs of spacecraft from the same launch pad, one or two days apart (Vostok 3 and 4 in 1962, and Vostok 5 and 6 in 1963). In each case, the launch vehicles' guidance systems inserted the two craft into nearly identical orbits; however, this was not nearly precise enough to achieve rendezvous, as the Vostok lacked maneuvering thrusters to adjust its orbit to match that of its twin. The initial separation distances were small, but slowly diverged to thousands of kilometers (over a thousand miles) over the course of the missions. In early 1964 the Soviet Union was able to guide two unmanned satellites, designated Polyot 1 and Polyot 2, to within 5 km of each other, and the craft were able to establish radio communication. In 1963 Buzz Aldrin submitted his doctoral thesis titled Line-Of-Sight Guidance Techniques For Manned Orbital Rendezvous. As a NASA astronaut, Aldrin worked to "translate complex orbital mechanics into relatively simple flight plans for my colleagues." First attempt failed NASA's first attempt at rendezvous was made on June 3, 1965, when US astronaut Jim McDivitt tried to maneuver his Gemini 4 craft to meet its spent Titan II launch vehicle's upper stage. McDivitt was unable to get close enough to achieve station-keeping, due to depth-perception problems and stage propellant venting which kept moving the stage around. More fundamentally, the Gemini 4 attempts at rendezvous were unsuccessful because NASA engineers had yet to learn the orbital mechanics involved in the process. Simply pointing the active vehicle's nose at the target and thrusting was unsuccessful. If the target is ahead in the orbit and the tracking vehicle increases speed, its altitude also increases, actually moving it away from the target. The higher altitude then increases the orbital period due to Kepler's third law, putting the tracker not only above, but also behind the target. The proper technique requires changing the tracking vehicle's orbit to allow the rendezvous target to either catch up or be caught up with, and then at the correct moment changing to the same orbit as the target with no relative motion between the vehicles (for example, putting the tracker into a lower orbit, which has a shorter orbital period allowing it to catch up, then executing a Hohmann transfer back to the original orbital height). First successful rendezvous Rendezvous was first successfully accomplished by US astronaut Wally Schirra on December 15, 1965. Schirra maneuvered the Gemini 6 spacecraft to within close range of its sister craft Gemini 7. 
The spacecraft were not equipped to dock with each other, but maintained station-keeping for more than 20 minutes. Schirra later used metaphors to describe the difference between the two nations' achievements. First docking The first docking of two spacecraft was achieved on March 16, 1966, when Gemini 8, under the command of Neil Armstrong, rendezvoused and docked with an uncrewed Agena Target Vehicle. Gemini 6 was to have been the first docking mission, but the docking had to be cancelled when that mission's Agena vehicle was destroyed during launch. The Soviets carried out the first automated, uncrewed docking, between Cosmos 186 and Cosmos 188, on October 30, 1967. The first Soviet cosmonaut to attempt a manual docking was Georgy Beregovoy, who unsuccessfully tried to dock his Soyuz 3 craft with the uncrewed Soyuz 2 in October 1968. Automated systems brought the craft to within close range, and Beregovoy closed the remaining distance under manual control, but the docking was not achieved. The first successful crewed docking occurred on January 16, 1969, when Soyuz 4 and Soyuz 5 docked, collecting the two crew members of Soyuz 5, who had to perform an extravehicular activity to reach Soyuz 4. In March 1969 Apollo 9 achieved the first internal transfer of crew members between two docked spacecraft. The first rendezvous of two spacecraft from different countries took place in 1975, when an Apollo spacecraft docked with a Soyuz spacecraft as part of the Apollo–Soyuz mission. The first multiple space docking took place when both Soyuz 26 and Soyuz 27 were docked to the Salyut 6 space station during January 1978. Uses (Image: Damaged solar arrays on Mir's Spektr module following a collision with an uncrewed Progress resupply spacecraft in 1997, photographed during the Shuttle-Mir program. In this space rendezvous gone wrong, the Progress collided with Mir, beginning a depressurization that was halted by closing the hatch to Spektr.) A rendezvous takes place each time a spacecraft brings crew members or supplies to an orbiting space station. The first spacecraft to do this was Soyuz 11, which successfully docked with the Salyut 1 station on June 7, 1971. Human spaceflight missions have successfully made rendezvous with six Salyut stations, with Skylab, with Mir and with the International Space Station (ISS). Currently Soyuz spacecraft are used at approximately six-month intervals to transport crew members to and from the ISS. With the introduction of NASA's Commercial Crew Program, the US can also transport crews with its own vehicle alongside the Soyuz: SpaceX's Crew Dragon, an updated version of the Cargo Dragon. Robotic spacecraft are also used to rendezvous with and resupply space stations. Soyuz and Progress spacecraft have automatically docked with both Mir and the ISS using the Kurs docking system; Europe's Automated Transfer Vehicle also used this system to dock with the Russian segment of the ISS. Several uncrewed spacecraft use NASA's berthing mechanism rather than a docking port. The Japanese H-II Transfer Vehicle (HTV), SpaceX Dragon, and Orbital Sciences' Cygnus spacecraft all maneuver to a close rendezvous and maintain station-keeping, allowing the ISS Canadarm2 to grapple and move the spacecraft to a berthing port on the US segment. 
However, the updated version of the Cargo Dragon no longer needs to berth; instead it docks autonomously and directly to the space station. The Russian segment only uses docking ports, so it is not possible for HTV, Dragon and Cygnus to berth there. Space rendezvous has been used for a variety of other purposes, including recent service missions to the Hubble Space Telescope. Historically, for the missions of Project Apollo that landed astronauts on the Moon, the ascent stage of the Apollo Lunar Module would rendezvous and dock with the Apollo Command/Service Module in lunar orbit rendezvous maneuvers. Also, the STS-49 crew rendezvoused with and attached a rocket motor to the Intelsat VI F-3 communications satellite to allow it to make an orbital maneuver. Possible future rendezvous may be made by a yet-to-be-developed automated Hubble Robotic Vehicle (HRV), and by the CX-OLEV, which is being developed for rendezvous with a geosynchronous satellite that has run out of fuel. The CX-OLEV would take over orbital stationkeeping and/or finally bring the satellite to a graveyard orbit, after which the CX-OLEV could possibly be reused for another satellite. Gradual transfer from the geostationary transfer orbit to the geosynchronous orbit would take a number of months, using Hall effect thrusters. Alternatively, the two spacecraft may already be joined, and simply undock and re-dock in a different way: Soyuz spacecraft have moved from one docking point to another on the ISS or Salyut, and in the Apollo spacecraft a maneuver known as transposition, docking, and extraction was performed an hour or so after trans-lunar injection. At that point the stack consisted, in order from bottom to top at launch (also the order from back to front with respect to the current motion), of the third stage of the Saturn V rocket, the LM inside the LM adapter, and the CSM, with the CSM crewed and the LM at this stage uncrewed. The CSM separated while the four upper panels of the LM adapter were jettisoned; the CSM turned 180 degrees (from engine backward, toward the LM, to engine forward); the CSM docked with the LM while the LM was still attached to the third stage; and the CSM/LM combination then separated from the third stage. NASA sometimes refers to "Rendezvous, Proximity-Operations, Docking, and Undocking" (RPODU) for the set of all spaceflight procedures that are typically needed around spacecraft operations where two spacecraft work in proximity to one another with intent to connect to one another. Phases and methods The standard technique for rendezvous and docking is to dock an active vehicle, the "chaser", with a passive "target". This technique has been used successfully for the Gemini, Apollo, Apollo/Soyuz, Salyut, Skylab, Mir, ISS, and Tiangong programs. To properly understand spacecraft rendezvous it is essential to understand the relation between spacecraft velocity and orbit. A spacecraft in a certain orbit cannot arbitrarily alter its velocity. Each orbit correlates to a certain orbital velocity. If the spacecraft fires thrusters and increases (or decreases) its velocity it will obtain a different orbit, one with a higher or lower altitude. In circular orbits, higher orbits have a lower orbital velocity and lower orbits a higher one. For orbital rendezvous to occur, both spacecraft must be in the same orbital plane, and the phase of the orbit (the position of the spacecraft in the orbit) must be matched. For docking, the speed of the two vehicles must also be matched. The "chaser" is placed in a slightly lower orbit than the target. 
The lower the orbit, the higher the orbital velocity. The difference in orbital velocities of chaser and target is therefore such that the chaser is faster than the target, and catches up with it (a numerical sketch of this phasing arithmetic appears at the end of this article). Once the two spacecraft are sufficiently close, the chaser's orbit is synchronized with the target's orbit. That is, the chaser is accelerated. This increase in velocity carries the chaser to a higher orbit. The increase in velocity is chosen such that the chaser approximately assumes the orbit of the target. Stepwise, the chaser closes in on the target, until proximity operations (see below) can be started. In the very final phase, the closure rate is reduced by use of the active vehicle's reaction control system, and docking occurs at a very low closing rate. Rendezvous phases Space rendezvous of an active, or "chaser", spacecraft with an (assumed) passive spacecraft may be divided into several phases, and typically starts with the two spacecraft in separate orbits, separated by a large distance. A variety of techniques may be used to effect the translational and rotational maneuvers necessary for proximity operations and docking. Methods of approach The two most common methods of approach for proximity operations are in-line with the flight path of the spacecraft (called V-bar, as it is along the velocity vector of the target) and perpendicular to the flight path along the line of the radius of the orbit (called R-bar, as it is along the radial vector, with respect to Earth, of the target). The chosen method of approach depends on safety, spacecraft/thruster design, mission timeline, and, especially for docking with the ISS, on the location of the assigned docking port. V-bar approach The V-bar approach is an approach of the "chaser" horizontally along the passive spacecraft's velocity vector, that is, from behind or from ahead, and in the same direction as the orbital motion of the passive target. The motion is parallel to the target's orbital velocity. In the V-bar approach from behind, the chaser fires small thrusters to increase its velocity in the direction of the target. This, of course, also drives the chaser to a higher orbit. To keep the chaser on the V-vector, other thrusters are fired in the radial direction. If this is omitted (for example due to a thruster failure), the chaser will be carried to a higher orbit, which is associated with an orbital velocity lower than the target's. Consequently, the target moves faster than the chaser and the distance between them increases. This is called a natural braking effect, and is a natural safeguard in case of a thruster failure. STS-104 was the third Space Shuttle mission to conduct a V-bar arrival at the International Space Station. The V-bar, or velocity vector, extends along a line directly ahead of the station. Shuttles approached the ISS along the V-bar when docking at the PMA-2 docking port. R-bar approach The R-bar approach consists of the chaser moving below or above the target spacecraft, along its radial vector. The motion is orthogonal to the orbital velocity of the passive spacecraft. When below the target, the chaser fires radial thrusters to close in on the target, thereby raising its altitude. However, the orbital velocity of the chaser remains unchanged (thruster firings in the radial direction have no effect on the orbital velocity). Now in a slightly higher position, but with an orbital velocity that does not correspond to the local circular velocity, the chaser slightly falls behind the target. 
Small rocket pulses in the orbital velocity direction are necessary to keep the chaser along the radial vector of the target. If these rocket pulses are not executed (for example due to a thruster failure), the chaser will move away from the target. This is a natural braking effect. For the R-bar approach, this effect is stronger than for the V-bar approach, making the R-bar approach the safer one of the two. Generally, the R-bar approach from below is preferable, as the chaser is in a lower (faster) orbit than the target, and thus "catches up" with it. For the R-bar approach from above, the chaser is in a higher (slower) orbit than the target, and thus has to wait for the target to approach it. Astrotech proposed meeting ISS cargo needs with a vehicle which would approach the station "using a traditional nadir R-bar approach." The nadir R-bar approach is also used for flights to the ISS of H-II Transfer Vehicles, and of SpaceX Dragon vehicles. Z-bar approach An approach of the active, or "chaser", spacecraft horizontally from the side and orthogonal to the orbital plane of the passive spacecraft—that is, from the side and out-of-plane of the orbit of the passive spacecraft—is called a Z-bar approach. Surface rendezvous Apollo 12, the second crewed lunar landing, performed the first-ever rendezvous outside of Low Earth Orbit by landing close to Surveyor 3 and taking parts of it back to Earth. See also Androgynous Peripheral Attach System Clohessy-Wiltshire equations for co-orbit analysis Common Berthing Mechanism Deliberate crash landings on extraterrestrial bodies Flyby (spaceflight) Lunar orbit rendezvous Mars orbit rendezvous Nodal precession of orbits around the Earth's axis Path-constrained rendezvous – the process of moving an orbiting object from its current position to a desired position, in such a way that no orbiting obstacles are contacted along the way Soyuz Kontakt Notes References External links Analysis of a New Nonlinear Solution of Relative Orbital Motion by T. Alan Lovell The Visitors (rendezvous) Handbook Automated Rendezvous and Docking of Spacecraft by Wigbert Fehse Docking system agreement key to global space policy – October 20, 2010 Astrodynamics Orbital maneuvers 1965 introductions Projects established in 1965
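The catch-up effect described under "Phases and methods" above — a chaser in a lower circular orbit is faster and has a shorter period, so it gains on its target — follows directly from Kepler's third law. The sketch below works the arithmetic for two illustrative ISS-like altitudes; it assumes ideal two-body circular orbits, ignores all perturbations, and the 400 km / 380 km altitudes are chosen only for the example.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m

def circular_period(altitude_m):
    """Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_m
    return 2.0 * math.pi * math.sqrt(a ** 3 / MU_EARTH)

def circular_speed(altitude_m):
    """Circular orbital speed: v = sqrt(mu/a)."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

target_alt, chaser_alt = 400_000.0, 380_000.0   # illustrative altitudes, m
T_t, T_c = circular_period(target_alt), circular_period(chaser_alt)

print(f"target: T = {T_t:6.1f} s, v = {circular_speed(target_alt):6.1f} m/s")
print(f"chaser: T = {T_c:6.1f} s, v = {circular_speed(chaser_alt):6.1f} m/s")
# The lower orbit is both faster and shorter-period, so each chaser
# revolution gains this much phase angle on the target:
print(f"phase gained per revolution: {360.0 * (T_t - T_c) / T_t:.2f} deg")
```

The same relation explains the Gemini 4 experience recounted earlier: thrusting toward a target ahead raises the orbit, lengthens the period, and leaves the pursuer higher and further behind.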
Space rendezvous
[ "Engineering" ]
3,372
[ "Astrodynamics", "Aerospace engineering" ]
984,130
https://en.wikipedia.org/wiki/Book%20music
Book music is a medium for storing the music played on mechanical organs, mainly of European manufacture. Book music is made from thick cardboard containing perforated holes that specify the musical notes to be played, with the book folded zig-zag style. Unlike the heavy pinned barrels used on earlier instruments, which could only contain a few tunes of fixed length, book music enabled large repertoires to be built up. The length of each tune was no longer determined by the physical dimensions of the instrument. In 1892, organ maker Anselmo Gavioli patented the "book organ", played from a series of folded sheets of cardboard. Holes punched in the pages of the folded book allowed keys to rise in the key frame, playing the required note. The keys would cause considerable wear to the music books over time. A solution to this was the keyless frame, in which the holes instead allowed air to pass through. The development marked a turning point in the history of the mechanical organ, and made Gavioli, until the firm's demise in 1910, one of the most famous and prolific fair-organ builders. Book music was the most commonly used medium for large instruments. Used extensively by fairground and street organ makers, book music was also used by Henri Fourneaux in 1863 in his Pianista. One of the advantages of book music is that it can be read mechanically. Keys, small levers which rock upwards when a hole passes by, run underneath the book. This motion then mechanically opens the valves of the organ. Paper rolls, on the other hand, are "key-less" and are generally only read by pneumatic pressure or suction. Some mechanical organs, particularly those of German manufacture by firms such as Gebr. Bruder and Ruth, play keyless cardboard book music, operating pneumatically. The disadvantage of book music, compared to paper rolls, is the increased size and weight needed to store an equivalent amount of music. The major advantage of book music, however, is that it is sturdy and not subject to expansion and contraction with humidity. In addition, it is not necessary to rewind a book after playing; a musical performance may therefore continue almost immediately without the prolonged break an instrument needs while rewinding a roll. This allows large books and sets of books to be manufactured, giving instruments musically versatile capabilities. In Europe the book format, rather than the roll, is the preferred method of operating all but the smallest instruments designed for outdoor use. See also References Organs (music) Audio storage Mechanical musical instruments French inventions French musical instruments
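The medium described above is, in effect, a mechanical data format: each track across the book's width corresponds to one key in the key frame, and a hole at a given position along the book means "sound that note while the hole passes". The sketch below models a short stretch of book music as a grid of punched holes and decodes it into timed note events; the four-key scale, the hole pattern and the transport speed are all invented for the illustration.

```python
# Each row is one position along the book (time); each column is one
# key in the key frame.  True means a hole is punched at that spot.
KEY_PITCHES = ["C4", "E4", "G4", "C5"]   # an invented 4-key organ scale

book = [
    [True,  False, False, False],
    [True,  False, False, False],   # two consecutive holes: a held C4
    [False, True,  False, False],
    [False, False, True,  False],
    [True,  False, True,  True ],   # a final chord
]

def read_book(pages, step_seconds=0.25):
    """Decode a punched book into (start, duration, pitch) note events.

    A key stays risen while a hole passes over it, so consecutive holes
    in the same column merge into one sustained note.
    """
    events = []
    padded = pages + [[False] * len(KEY_PITCHES)]   # sentinel row closes notes
    for col, pitch in enumerate(KEY_PITCHES):
        start = None
        for row, line in enumerate(padded):
            if line[col] and start is None:
                start = row                          # hole begins: key rises
            elif not line[col] and start is not None:
                events.append((start * step_seconds,
                               (row - start) * step_seconds, pitch))
                start = None                         # hole ends: key drops
    return sorted(events)

for start, duration, pitch in read_book(book):
    print(f"t={start:4.2f}s  dur={duration:4.2f}s  {pitch}")
```

The merging of consecutive holes into one sustained note mirrors why the physical length of the book, not the instrument, sets the length of the tune.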
Book music
[ "Physics", "Technology" ]
521
[ "Physical systems", "Mechanical musical instruments", "Machines" ]
984,232
https://en.wikipedia.org/wiki/Methylcholanthrene
Methylcholanthrene is a highly carcinogenic polycyclic aromatic hydrocarbon produced by burning organic compounds at very high temperatures. Methylcholanthrene is also known as 3-methylcholanthrene, 20-methylcholanthrene or by the IUPAC name 3-methyl-1,2-dihydrobenzo[j]aceanthrylene. The short notation often used is 3-MC or MCA. The compound forms pale yellow solid crystals when crystallized from benzene and ether. It has a melting point around 180 °C and its boiling point is around 280 °C at a pressure of 80 mmHg. Methylcholanthrene is used in laboratory studies of chemical carcinogenesis. It is an alkylated derivative of benz[a]anthracene and has a similar UV spectrum. The most common isomer is 3-methylcholanthrene, although the methyl group can occur in other places. 3-Methylcholanthrene, a known carcinogen which builds up in the prostate due to cholesterol breakdown, is implicated in prostate cancer. It "readily produces" primary sarcomas in mice. History The first article about methylcholanthrene, describing its synthesis, was published in 1933. Within a few years it became clear that the compound was toxic to humans and animals, and it therefore attracted considerable interest and was often used in toxicological research. Methylcholanthrene is often tested on mice and rats to derive information for the development of cancer medicines; because the compound affects the central nervous system, its responses and changes in response are compared. The compound is also known to cause cancer cells to develop through genetic mutations. The last article on the synthesis of methylcholanthrene appeared in 1982. A yield of 93% had been reached, and no further adjustments were made to the synthesis scheme. Synthesis 3-MC was first synthesized by an earlier literature method; the synthesis was later improved. The synthesis of 3-MC consists of a few steps, visualized in figure 1; the first step is the key to the success of the synthesis. 4-Methylindanone (1) reacts in a condensation with the lithium salt of N,N-diethyl-1-naphthamide (2). At −60 °C the reaction of 1 and 2 afforded the lactone (3), the carbonyl addition product, which underwent conversion on treatment with acid. The free acid (4) was obtained when the latter was cleaved reductively with zinc and alkali. Cyclization of the product occurred on treatment with ZnCl2 in acetic anhydride and gave the compound 6-acetoxy-3-MC (5). Reducing this product with hydriodic acid in propionic acid resulted in 3-MC. Mechanism 3-MC has an inhibitory function in a dimethylnitrosamine demethylase process in rat livers. Inhibition could occur by interfering with demethylase conformations or by interfering with the synthesis and/or degradation of demethylase. Experiments showed that the Km does not change after 3-MC treatment. This strongly indicates that enzyme affinity is not influenced by 3-MC. Instead, incubation with 3-MC leads to a decrease in the amount of enzyme activity. These results point towards inhibition of demethylase synthesis and/or induction of demethylase degradation. Unpublished observations of Venkatesan, Argus and Arcos suggest that demethylase synthesis inhibition is most plausible. A possible mechanism for this reaction is depicted in figure 2. 3-MC is metabolized, via epoxidation, hydrolysis and another epoxidation, to a very reactive epoxide. The epoxidations are carried out by the enzyme cytochrome P450. 
The second epoxide is not hydrolysed immediately because it is localized next to a bay region, which shields the epoxide. In this way the metabolite is able to travel to and bind DNA, as shown in figure 2. The mechanism is derived from the binding mechanism of benzo[a]pyrene to DNA; this analogy is plausible because two polycyclic aromatic hydrocarbons are likely to be metabolized via the same pathway. Deoxyguanosine is used in the figure, since that base appears to be bound by benzo[a]pyrene far more often than the other bases. There appears to be an equilibrium between 3-MC-free and 3-MC-bound DNA. It is hard to determine how and when the equilibrium is formed, owing to difficulties with radioactive measurements. A probable saturating dose is thought to be around 40 mg 3-MC/kg. Research on the effect of 3-MC in rat uteri concludes that 3-MC acts as an estrogen antagonist. The sex hormone, like 3-MC, is a polycyclic compound. 3-MC and estrogen bind competitively to estrogen receptors, reducing estrogen activity. Metabolism MC is metabolized by rat liver microsomes into oxygenated forms which alkylate DNA. These oxygenated metabolites bind to both double-stranded and single-stranded DNA. Empirical data show that MC tends to bind mostly to G-bases. When MC is injected into lung, kidney or liver tissue of rats, it appears that the liver is able to reverse the binding of MC to DNA. Lung and kidney tissue are not capable of doing this, which may explain why MC is more carcinogenic in lungs and kidneys than in the liver. To be carcinogenic, the MC metabolite has to be covalently bound to DNA. Therefore, it is necessary for MC to be oxygenated in order to be carcinogenic. Injected MC does not move away from the injection site. In a rat body MC has a half-life of about 4 weeks, and after 8 weeks about 80% of the MC has been metabolized into water-soluble metabolites (a short kinetic illustration of these figures appears at the end of this article). MC and its metabolites mainly exit the body via feces (roughly ninefold more than via urine). Three months after injection with MC, 85% of the rats are reported to have tumors; 82% of the tumors are a form of spindle-cell sarcoma. Efficacy Methylcholanthrene is often used to induce tumors in rodents for carcinogenesis and mutagenesis research. In a study from 1991, lung precancerous and cancerous lesions were induced in Wistar rats by one intrabronchial injection of 3-MC. After 30 days, atypical hyperplasia of bronchiolar epithelium, adenoid hyperplasia or adenomas, and squamous cell carcinoma occurred in 15 (88.2%), 12 (70.6%) and 3 (17.7%) out of 17 rats respectively. After 60 days, the incidences were 15/18 (83.3%), 4/17 (23.5%) and 7/18 (38.9%). All of the precancerous lesions and carcinomas showed positive expression of gamma-glutamyltranspeptidase (GGT). Jin et al. (2013) found that the cellular redox balance is altered by acute exposure to 3-MC. This causes the nuclear factor erythroid 2-related factor 2 (Nrf2)-regulated response pathway to induce antioxidant responses. Toxicity 3-MC is a ligand of the aryl hydrocarbon receptor (AhR), which stimulates transcription directed by xenobiotic response elements. AhR ligands can induce formation of an AhR-estrogen receptor (ER) complex. 3-MC was found to elicit estrogenic activity by this mechanism, and by stimulation of the expression of some endogenous ER target genes. 3-MC may cause respiratory tract irritation, skin irritation or eye irritation. Effects on humans 3-MC is mutagenic to human cells. Curren et al. (1978) were the first to report successfully induced mutations in human cells with 3-MC. 
Skin epithelial cells are thought to metabolize the compound to mutagenic products. The ability to metabolize mutagens may reflect genetically regulated differences within a species such as man or mouse, causing environmental chemicals to show different levels of mutagenicity and carcinogenicity in specific individuals. Non-human toxicity studies The administration of 3-MC to pregnant mice results in the formation of lung tumors in the offspring. Miller et al. (1990) compared the effects of fetal versus adult exposure to 3-MC on both the induction of aryl hydrocarbon hydroxylase (AHH) activity in lung and the dependence of lung tumorigenesis on the Ah genotype. A single ip injection of 100 mg/kg of 3-MC to the mothers resulted in a maximal 50-fold induction of AHH activity (in inducible fetal lung supernatants) by 8 hr, which persisted for 48 hr. The same injections to adult F1 mice revealed only a 4- to 7-fold increase in lung AHH activity, compared to the large fetal induction ratio. References External links National Pollutant Inventory - Polycyclic Aromatic Hydrocarbon Fact Sheet Carcinogens Hepatotoxins Polycyclic aromatic hydrocarbons
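The rat elimination figures quoted above — a half-life of about 4 weeks, with roughly 80% converted to water-soluble metabolites after 8 weeks — sit close to what simple first-order kinetics would predict, since two half-lives remove 75% of a dose. The sketch below just works that arithmetic; it is an illustration of first-order decay, not a validated pharmacokinetic model of 3-MC.

```python
import math

def fraction_remaining(t, half_life):
    """First-order elimination: N(t)/N0 = exp(-ln(2) * t / half_life)."""
    return math.exp(-math.log(2.0) * t / half_life)

HALF_LIFE_WEEKS = 4.0   # reported half-life of 3-MC in the rat
for weeks in (0, 4, 8, 12):
    remaining = fraction_remaining(weeks, HALF_LIFE_WEEKS)
    print(f"week {weeks:2d}: {remaining:6.1%} remaining, "
          f"{1.0 - remaining:6.1%} metabolized")
# Two half-lives (week 8) leave 25% remaining, i.e. 75% metabolized,
# close to the ~80% reported for water-soluble metabolites.
```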
Methylcholanthrene
[ "Chemistry", "Environmental_science" ]
1,977
[ "Carcinogens", "Toxicology" ]
984,289
https://en.wikipedia.org/wiki/Jiuzhaigou
Jiuzhaigou is a nature reserve and national park located in the north of Sichuan Province in southwestern China. A long valley running north to south, Jiuzhaigou was inscribed by UNESCO as a World Heritage Site in 1992 and a World Biosphere Reserve in 1997. It belongs to category V (Protected Landscape) in the IUCN system of protected area categorization. The Jiuzhaigou valley is part of the Min Mountains on the edge of the Tibetan Plateau and is composed of a series of diverse forest ecosystems, with peaks rising to over 4,800 meters. It is known for its many multi-level waterfalls, colorful lakes, and snow-capped peaks. The Jiuzhaigou area borders the Minshan Garna Peak and the Huanglong Scenic Area to its south. It originates from the Baishui River area, one of the headwaters of the Jialing River and a part of the Yangtze River system. History Jiuzhaigou (literally "Nine Settlement Valley") takes its name from the nine Tibetan settlements along its length. The remote region was inhabited by various Tibetan and Qiang peoples for centuries. Until 1975 this inaccessible area was little known. Extensive logging took place until 1979, when the Chinese government banned such activity and made the area a national park in 1982. An Administration Bureau was established and the site officially opened to tourism in 1984; the layout of facilities and regulations was completed in 1987. The site was inscribed by UNESCO as a World Heritage Site in 1992 and a World Biosphere Reserve in 1997. The tourism area is classified as a AAAAA scenic area by the China National Tourism Administration. Since opening, tourist activity has grown rapidly: from 5,000 visits in 1984 to 170,000 in 1991, 160,000 in 1995, and 200,000 in 1997, including about 3,000 foreigners. Visitors numbered 1,190,000 in 2002. The site now averages 7,000 visits per day, with a quota of 12,000 reportedly enforced during high season. The town of Zhangzha at the exit of the valley and the nearby Songpan County feature an ever-increasing number of hotels, including several luxury five-stars, such as a Sheraton. Developments related to mass tourism in the region have caused concerns about the impact on the environment around the park. 2017 earthquake In August 2017, a magnitude 7.0 earthquake struck Jiuzhaigou County, causing significant structural damage and having a significant impact on the scenic area. The authorities closed the valley to tourists until March 3, 2018, before reopening the park with limited access. The earthquake also damaged and breached two natural dams, the Nuorilang Waterfall dam and the Huohua Lake dam. The Nuorilang Waterfall dam suffered damage because of its initially low stability and topographic effects: it was composed of poor material with low mechanical strength, making it prone to rockfalls even during non-earthquake periods, and its nearly vertical structure amplified the seismic shaking at its upper part, increasing the likelihood of deformation and collapse. The breach of the Huohua Lake dam increased the water discharge from the normal level of 9.3 m3/s to a maximum of 21.5 m3/s; as a result, the water level rapidly fell, leading to collapses along the dam. Population Seven of the nine Tibetan villages are still populated today. 
The main settlements readily accessible to tourists are Heye, Shuzheng and Zechawa, which lie along the main paths and cater to tourists, selling various handicrafts, souvenirs and snacks. There is also Rexi in the smaller Zaru Valley, and behind Heye village are Jianpan, Panya and Yana villages. Guodu and Hejiao villages are no longer populated. Penbu, Panxing and Yongzhu villages lie along the road that passes through the town of Jiuzhaigou/Zhangza outside the valley. In 2003, the permanent population of the valley was about 1,000, comprising 112 families, and due to the protected nature of the park, agriculture is no longer permitted, so the locals now rely on tourism and local government subsidies to make a living. Geography and climate Jiuzhaigou lies at the southern end of the Minshan mountain range, north of the provincial capital of Chengdu. It is part of Jiuzhaigou County (formerly Nanping County) in the Aba Tibetan Qiang Autonomous Prefecture of northwestern Sichuan province, near the Gansu border. The valley covers a large area, with additional buffer zones around it. Its elevation, depending on the area considered, ranges from 1,998 to 2,140 m (at the mouth of Shuzheng Gully) to 4,558-4,764 m (on Mount Ganzigonggai at the top of Zechawa Gully). The climate is subtropical to temperate monsoon, with a mean annual temperature of 7.8 °C and means of −3.7 °C in January and 16.8 °C in July. Total annual rainfall is 761 mm, but in the cloud forest it is at least 1,000 mm; 80% of the rainfall occurs between May and October. With the monsoon moving toward the valley, summer is mild, cloudy, and moderately humid. Above an altitude of 3,500 meters, the climate is colder and drier. Ecology Jiuzhaigou's ecosystem is classified as temperate broad-leaf forest and woodlands, with mixed mountain and highland systems. Much of the core scenic area is covered by virgin mixed forests. Those forests take on vibrant yellow, orange and red hues in the autumn, making that season a popular one for visitors. They are home to a number of plant species of interest, such as endemic varieties of rhododendron and bamboo. Local fauna includes the endangered giant panda and golden snub-nosed monkey. Both populations are very small (fewer than 20 individuals for the pandas) and isolated. Their survival is in question in a valley subject to increasing tourism. It is one of only three known locations for the threatened Duke of Bedford's vole. Jiuzhaigou is also home to approximately 140 bird species. The region is a natural museum of mountain karst hydrology and research. It preserves a series of important forest ecosystems, including ancient forests that provide important habitats for many endangered animal and plant species such as giant pandas and antelopes. The Jiuzhaigou Valley Scenic and Historic Interest Area also contains a large number of well-preserved Quaternary glacial relics, which are of great scenic value. Geology and hydrology Jiuzhaigou's landscape is made up of high-altitude karsts shaped by glacial, hydrological and tectonic activity. It lies on major faults on the diverging belt between the Qinghai-Tibet Plate and the Yangtze Plate, and earthquakes have also shaped the landscape. The rock strata are mostly made up of carbonate rocks such as dolomite and tufa, as well as some sandstone and shales. The region contains a large amount of tufa, a type of limestone formed by the rapid precipitation of calcium carbonate in freshwater. 
It is deposited on rocks, lake beds, and even fallen trees in the water, sometimes accumulating into terraces, shoals, and dam barriers in the lakes. The valley includes the catchment area of three gullies (which due to their large size are often called valleys themselves), and is one of the sources of the Jialing River via the Bailong River, part of the Yangtze River system. Jiuzhaigou's best-known feature is its dozens of blue, green and turquoise-colored lakes. The local Tibetan people call them Haizi in Chinese, meaning "son of the sea". Originating in glacial activity, they were dammed by rockfalls and other natural phenomena, then solidified by processes of carbonate deposition. Some lakes have a high concentration of calcium carbonate, and their water is so clear that the bottom is often visible even at great depths. The lakes vary in color and aspect according to their depths, residues, and surroundings. Some of the less stable dams and formations have been artificially reinforced, and direct contact with the lakes or other features is forbidden to tourists. Notable features Jiuzhaigou is composed of three valleys arranged in a Y shape. The Rize and Zechawa valleys flow from the south and meet at the centre of the site, where they form the Shuzheng valley, flowing north to the mouth of the valley. The mountainous watersheds of these gullies are lined with roads for shuttle buses, as well as wooden boardwalks and small pavilions. The boardwalks are typically located on the opposite side of the lakes from the road, shielding them from disturbance by passing buses. Most visitors first take the shuttle bus to the end of Rize and/or Shuzheng gully, then make their way back downhill on foot along the boardwalks, taking the bus instead when the next site is too distant. Here is a summary of the sites found in each of the gullies: Rize Valley The Rize Valley (日则沟, pinyin: Rìzé Gōu) is the south-western branch of Jiuzhaigou. It contains the largest variety of sites and is typically visited first. Going downhill from its highest point, one passes the following sites: The Primeval Forest (原始森林 Yuánshǐ Sēnlín) is a preserved ancient woodland. It is fronted by views of the surrounding mountains and cliffs, including the 500-metre-high, blade-shaped Sword Rock (剑岩 Jiàn Yán). Swan Lake (天鹅海, Tiān'é Hǎi) is a 2,250-metre-long, 125-metre-wide lake named for its visiting swans and ducks. Grass Lake (草海, Cǎo Hǎi) is a shallow lake covered in intricate vegetation patterns. Arrow Bamboo Lake (箭竹海, Jiànzhú Hǎi), covering an area of 170,000 m2, is a shallow lake with a depth of 6 m. It lies at an elevation of 2,618 m, and was a main feature site for the 2002 Chinese film Hero. Panda Lake (熊猫海, Xióngmāo Hǎi) features curious color patterns of blue and green. Giant pandas were said to have come to this lake to drink, though there have been no sightings for many years. The lake empties into the multi-stream, multi-level Panda Waterfalls, dropping 78 m in three steps. Five Flower Lake (五花海, Wǔhuā Hǎi) is a shallow multi-colored lake whose bottom is criss-crossed by ancient fallen tree trunks. Pearl Shoal (珍珠滩, Zhēnzhū Tān) is a wide, gently sloping area of active calcareous tufa deposition covered in a thin sheet of flowing water. It empties into the famous Pearl Waterfalls, where the shoal drops 28 m in a 310-metre-wide broad curtain of water. A scene of the television adaptation of Journey to the West was filmed there. 
Mirror Lake (镜海, Jìng Hǎi) is another quiet lake, casting beautiful reflections of the surroundings when the water is calm. Zechawa Valley The Zechawa Gully (则查洼沟, Zécháwā Gōu) is the south-eastern branch of Jiuzhaigou. It is approximately the same length as Rize gully (18 km) but climbs to a higher altitude (3,150 m at the Long Lake). Going downhill from its highest point, it features the following sites: Long Lake (长海, Cháng Hǎi) is crescent-shaped and is the highest, largest and deepest lake in Jiuzhaigou, reaching up to 103 m in depth. It reportedly has no outgoing waterways, getting its water from snowmelt and losing it to seepage. Local folklore features a monster in its depths. Five-Color Pond (五彩池, Wǔcǎi Chí) is one of the smallest bodies of water in Jiuzhaigou. Despite its very modest dimensions and depth, it has a richly colored underwater landscape with some of the brightest and clearest waters in the area. According to legend, the pond was where Goddess Semo washed her hair and God Dage came daily to bring her water. The Seasonal Lakes (季节海, Jìjié Hǎi) are a series of three lakes (Lower, Middle and Upper) along the main road that fill and empty over the course of each year. Shuzheng Valley The Shuzheng Valley (树正沟, Shùzhèng Gōu) is the northern (main) branch of Jiuzhaigou. It ends at the Y-shaped intersection of the three gullies. Going downhill from the intersection to the mouth of the valley, visitors encounter the following: Nuorilang Falls (诺日朗瀑布, Nuòrìlǎng Pùbù), near the junction of the valleys, are 20 m high and 320 m wide. They are reportedly the widest highland waterfall in China, the widest travertine-topped waterfall in the world, and one of the symbols of Jiuzhaigou. Nuorilang Lakes (诺日朗群海, Nuòrìlǎng Qúnhǎi) and Shuzheng Lakes (树正群海 Shùzhèng Qúnhǎi) are stepped series of respectively 18 and 19 ribbon lakes formed by the passage of glaciers, then naturally dammed. Some of them have their own folkloric names, such as the Rhinoceros, Unknown, and Tiger lakes. Sleeping Dragon Lake (卧龙海, Wòlóng Hǎi) is one of the lower lakes in the area. With a depth of 20 m, it is notable for the clearly visible calcareous dyke running through it, whose shape has been compared to a dragon lying on the bottom. Reed Lake (芦苇海, Lúwěi Hǎi) is a 1,375-metre-long, reed-covered marsh with a clear turquoise brook (known as the "Jade Ribbon") zigzagging through it. The contrast is particularly striking in the autumn, when the reeds turn golden yellow. Others The Fairy Pool (神仙池, Shénxiān Chí) lies west of Jiuzhaigou and features travertine pools very similar to those of the nearby Huanglong Scenic and Historic Interest Area. Tourism The Zharu Valley (扎如沟, Zhārú Gōu) runs southeast from the main Shuzheng gully and is rarely visited by tourists. The valley begins at the Zharu Buddhist monastery and ends at the Red, Black, and Daling lakes. The Zharu Valley is the home of ecotourism in Jiuzhaigou. The valley has recently been opened to a small number of tourists wishing to go hiking and camping off the beaten track. Visitors can choose from day walks and multiple-day hikes, depending on their time availability. Knowledgeable guides accompany tourists through the valley, sharing their knowledge about the unique biodiversity and local culture of the national park. The Zharu Valley has 40% of all the plant species that exist in China and is the best place to spot wildlife inside the national park. 
The main hike follows the pilgrimage route of the local Benbo Buddhists, circumnavigating the sacred 4,528 m Zha Yi Zha Ga Mountain. Access Jiuzhaigou, compared with other high-traffic scenic spots in China, can be difficult to reach by land. The majority of tourists reach the valley by a ten-hour bus ride from Chengdu along the Min River canyon, which is prone to occasional minor rock-slides and, in the rainy season, mudslides that can add several hours to the trip. The new highway constructed along this route was badly damaged during the 2008 Sichuan earthquake, but has since been repaired and the road is open to public buses and private vehicles. Since 2003, it has been possible to fly from Chengdu or Chongqing to Jiuzhai Huanglong Airport on a mountainside in Songpan County, and then take an hour-long bus ride to Huanglong, or a 90-minute bus ride to Jiuzhaigou. Since 2006, a daily flight to Xi'an has operated in the peak season. In October 2009, new direct flights were added from Beijing, Shanghai, and Hangzhou. Jiuzhaigou is served by the Huanglongjiuzhai railway station, opened on 30 August 2024. The station forms part of the Sichuan–Qinghai Railway. Protection As a national park and national nature reserve, the Jiuzhaigou Valley Scenic and Historic Interest Area is protected by national and provincial laws and regulations, ensuring the long-term management and protection of the heritage. In 2004, the Sichuan Provincial Regulations on the Protection of World Heritage and the Implementation of the Sichuan Provincial Regulations on the Protection of World Heritage in Aba Autonomous Prefecture became law, providing a stricter basis for the protection of the heritage. The Sichuan Provincial Construction Commission is fully responsible for the protection and management of the site. The Jiuzhaigou Valley Scenic and Historic Interest Area Administrative Bureau (ABJ) consists of several departments, including the Protection Section, the Construction Section and the police station, which are responsible for the field administration. In addition to national legislation, there are many relevant local government laws and regulations. The revised management plan of 2001 is based on these laws and contains specific regulations and recommendations: it prohibits the logging of trees and forests as well as activities that cause pollution, and fully considers the needs of local Tibetan residents. See also Related places Pearl Waterfall Huanglong Scenic and Historic Interest Area, south of Jiuzhaigou Related lists List of World Heritage Sites in China List of Biosphere Reserves in China References Further reading External links Jiuzhaigou National Park official website Jiuzhaigou at the World Heritage Sites of UNESCO Jiuzhaigou at the MAB Biosphere Reserves of UNESCO Jiuzhaigou at the Terrestrial Ecosystem Monitoring Sites (TEMS) of FAO World Heritage Sites in China Biosphere reserves of China Valleys of China National parks of China Geography of Sichuan Landforms of Sichuan Old-growth forests Tourist attractions in Sichuan AAAAA-rated tourist attractions
Jiuzhaigou
[ "Biology" ]
3,781
[ "Old-growth forests", "Ecosystems" ]
984,337
https://en.wikipedia.org/wiki/Dujiangyan
The Dujiangyan is an ancient irrigation system in Dujiangyan City, Sichuan, China. Originally constructed around 256 BC by the State of Qin as an irrigation and flood control project, it is still in use today. The system's infrastructure is built on the Min River (Minjiang), the longest tributary of the Yangtze. The area is in the west part of the Chengdu Plain, between the Sichuan Basin and the Tibetan Plateau. Originally, the Min would rush down from the Min Mountains and slow down abruptly after reaching the Chengdu Plain, filling the watercourse with silt and making the nearby areas extremely prone to floods. King Zhao of Qin commissioned the project, and the construction of the Dujiangyan harnessed the river using a new method of channeling and dividing the water rather than simply damming it. The water management scheme is still in use today to irrigate farmland across the region and has produced comprehensive benefits in flood control, irrigation, water transport and general water consumption. Begun over 2,250 years ago, it now irrigates 668,700 hectares of farmland. The Dujiangyan, the Zhengguo Canal in Shaanxi and the Lingqu Canal in Guangxi are collectively known as the "three great hydraulic engineering projects of the Qin." The Dujiangyan irrigation system was inscribed on the World Heritage List in 2000. It has also been declared a State Priority Protected Site, among the first batch of National Scenic Areas and Historical Sites, and a National ISO14000 Demonstration Area. History Planning During the Warring States period, people who lived in the area of the Min River were plagued by annual flooding. Qin hydrologist Li Bing investigated the problem and discovered that the river was swollen by fast-flowing spring meltwater from the local mountains, which burst the banks when it reached the slow-moving and heavily silted stretch below. One solution would have been to build a dam, but the Qin wanted to keep the waterway open for military vessels to supply troops on the frontier, so instead an artificial levee was constructed to redirect a portion of the river's flow, and a channel was then cut through Mount Yulei to discharge the excess water upon the dry Chengdu Plain beyond. Construction King Zhao of Qin allocated 100,000 taels of silver for the project and sent a team said to number tens of thousands. The levee was constructed from long sausage-shaped baskets of woven bamboo filled with stones, known as Zhulong, held in place by wooden tripods known as Macha. The construction of a water-diversion levee resembling a fish's mouth took four years to complete. Cutting the channel proved to be a far greater problem, as the hand tools available at the time, prior to the invention of gunpowder, would have taken decades to cut through the mountain. Li Bing devised an ingenious method of using fire and water to rapidly heat and cool the rocks, causing them to crack and allowing them to be easily removed. After eight years of work, a channel had been gouged through the mountain. Legacy After the system was finished, no more floods occurred. The irrigation made Sichuan the most productive agricultural region in China for a time. The construction is also credited with giving the people of the region a laid-back attitude to life; by eliminating disaster and ensuring a regular and bountiful harvest, it left them with plenty of free time. 
Turmoil surrounding the conquest of Chengdu by peasant rebel leader Zhang Xianzhong in 1644, and the Ming-Qing transition more generally, led to depopulation and the deterioration of the Dujiangyan irrigation system, to the point where rice cultivation was set back for decades. The original Dujiangyan irrigation system was destroyed by the 1933 Diexi earthquake; the current system was rebuilt after that earthquake by Zhang Yuan (张沅) and his sons, including Zhang Shiling (张世龄). In 2000, Dujiangyan became a UNESCO World Heritage Site. Today it has become a major tourist attraction. 2008 Sichuan earthquake On May 12, 2008, a massive earthquake struck a vast portion of west Sichuan, including the Dujiangyan area. Initial reports indicated that the Yuzui Levee was cracked but not severely damaged. Diversion of flow could still be seen as the river turned. Engineering constructions Irrigation head The irrigation system consists of three main constructions that work in harmony with one another to ensure against flooding and keep the fields well supplied with water: The Yuzui or Fish Mouth Levee (Chinese: 鱼嘴), named for its conical head that is said to resemble the mouth of a fish, is the key part of the construction. It is an artificial levee that divides the water into inner and outer streams. The inner stream is deep and narrow, while the outer stream is relatively shallow but wide. This special structure ensures that the inner stream carries approximately 60% of the river's flow into the irrigation system during the dry season, while during floods this proportion decreases to 40%, protecting the people downstream (a back-of-the-envelope sketch of this flow split appears at the end of this article). The outer stream drains away the rest, flushing out much of the silt and sediment. The Feishayan or Flying Sand Weir (Chinese: 飞沙堰) has a wide opening that connects the inner and outer streams. This ensures against flooding by allowing the natural swirling flow of the water to drain excess water from the inner to the outer stream. The swirl also drains out silt and sediment that failed to go into the outer stream. A modern reinforced concrete weir has replaced the original weighted bamboo baskets. The Baopingkou or Bottle-Neck Channel (Chinese: 宝瓶口), which was gouged through the mountain, is the final part of the system. The channel distributes the water to the farmlands in the Chengdu Plain, whilst the narrow entrance that gives it its name works as a check gate, creating the whirlpool flow that carries excess water back over the Flying Sand Weir, to ensure against flooding. Anlan Suspension Bridge Anlan or Couple's Bridge spans the full width of the river, connecting the artificial island to both banks, and is known as one of the Five Ancient Bridges of China. The original Zhupu Bridge only spanned the inner stream, connecting the levee to the foot of Mount Yulei. This was replaced in the Song dynasty by the Pingshi Bridge, which burned down during the wars that marked the end of the Ming dynasty. In 1803, during the Qing dynasty, a local man named He Xiande and his wife proposed the construction of a replacement, made of wooden plates and bamboo handrails, to span both streams; this was nicknamed Couple's Bridge in their honour. This was demolished in the 1970s and replaced by a modern bridge. Geography Location The Dujiangyan irrigation system is located in the western portion of the Chengdu flatlands, at the junction between the Sichuan basin and the Qinghai-Tibet plateau. 
Geology The Dujiangyan irrigation system is located at the turning point of the two topographic steps of the western plateau mountains and the Chengdu Plain. It is the southwest extension of the Longmen Mountains and the area through which the Longmen Mountain Fault Zone passes. Topography and geomorphology The Dujiangyan irrigation system is higher in the northwest and lower in the southeast. The west belongs to the southern section of the Longmen Mountains, with mountain elevations below 3,000 meters. The east is the Chengdu Plain, with an altitude of 720 meters. Hydrology The Dujiangyan irrigation system was built at the entrance of the Minjiang River, with an average annual inflow of 15.082 billion cubic meters. There are two hydrological stations in the upper reaches of the Minjiang River. One is at the Zipingpu Dam at the mouth of the main stream; its controlled catchment area is 22,664 square kilometers, accounting for 98.38% of the total catchment area of the upper reaches of the Minjiang River. The other is at the Yangliuping Dam at the outlet of the Baisha River, with a controlled catchment area of 363 square kilometers, accounting for 1.58% of the total. A further catchment area of 10 square kilometers lies between the mouth of the Baisha River and the Dujiangyan irrigation system, accounting for 0.04% of the total. Temple sites Two Kings Temple Erwang or Two Kings Temple is on the bank of the river at the foot of Mount Yulei. The original Wangdi Temple, built in memory of an ancient Shu king, was moved, so locals renamed the temple here. The 10,072 m2 Qing dynasty wooden complex conforms to the traditional standard of temple design except that it does not follow a north-south axis. The main hall, which contains a modern statue of Li Bing, opens onto a courtyard facing an opera stage. On Li Bing's traditional birthday, the 24th day of the 7th month of the lunar calendar, local operas were performed for the public, and on Tomb Sweeping Day a Water Throwing Festival is held. The rear hall contains a modern statue of the god Erlang Shen. Because it would have been a problem in Chinese feudal society for a family to have no offspring, locals regarded this Erlang as Li Bing's son. The Guanlantin Pavilion stands above the complex and is inscribed with wise words from Li Bing, such as: "When the river flows in zigzags, cut a straight channel; when the riverbed is wide and shallow, dig it deeper." Dragon-Taming Temple Fulongguan or Dragon-Taming Temple in Lidui Park was founded in the third century in honour of Fan Changsheng. Following Li Bing's death a hall was established here in his honour, and the temple was renamed to commemorate the dragon-fighting legends that surrounded him. It is here that Erlang Shen, the legendary son of Li Bing, is said to have chained the dragon that he and his seven sworn brothers had captured in an ambush at the River God Temple when it came to collect a human sacrifice. This action is said to have protected the region from floods ever since. During the Eastern Han dynasty a statue of Li Bing was placed in the river to monitor the water flow, with the level rising above his shoulders to indicate flood and falling beneath his calves to indicate drought. Recovered from the river in 1974 and placed on display in the main hall, this is the oldest known stone statue of a human in China. 
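The Fish Mouth split described under "Irrigation head" above — roughly 60% of the flow into the inner (irrigation) stream in the dry season, falling to about 40% in flood — lends itself to simple arithmetic. In the sketch below, only the 60/40 fractions and the 15.082 billion m³ annual inflow come from the text; the two example discharges are invented for the illustration.

```python
ANNUAL_INFLOW_BCM = 15.082   # average annual inflow of the Min River, billion m^3

def fish_mouth_split(discharge_m3s, flood=False):
    """Split a discharge between the inner and outer streams at the Fish Mouth.

    The levee geometry sends ~60% of the water into the inner
    (irrigation) stream in the dry season, dropping to ~40% during
    floods so that excess water is shed down the outer channel.
    """
    inner_fraction = 0.40 if flood else 0.60
    inner = discharge_m3s * inner_fraction
    return inner, discharge_m3s - inner

for label, q, flood in (("dry season", 400.0, False), ("flood", 2500.0, True)):
    inner, outer = fish_mouth_split(q, flood)   # q in m^3/s, invented examples
    print(f"{label:10s}: inner {inner:7.1f} m^3/s, outer {outer:7.1f} m^3/s")

# Long-term mean discharge implied by the quoted annual inflow:
mean_q = ANNUAL_INFLOW_BCM * 1e9 / (365.25 * 24 * 3600)
print(f"mean inflow: {mean_q:.0f} m^3/s")
```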
See also Turfan water system Grand Canal of China References AAAAA-rated tourist attractions Canals in China Chinese architectural history Chinese inventions Irrigation in China Irrigation projects Major National Historical and Cultural Sites in Sichuan National parks of China Qin (state) Tourist attractions in Chengdu World Heritage Sites in China Qin dynasty architecture
Dujiangyan
[ "Engineering" ]
2,239
[ "Irrigation projects" ]
984,355
https://en.wikipedia.org/wiki/Uranium%20glass
Uranium glass is glass which has had uranium, usually in oxide diuranate form, added to a glass mix before melting for colouration. The proportion usually varies from trace levels to about 2% uranium by weight, although some 20th-century pieces were made with up to 25% uranium. First identified in 1789 by German chemist Martin Heinrich Klaproth, uranium was soon being added to decorative glass for its fluorescent effect. James Powell's Whitefriars Glass company in London, England, was one of the first to market the glowing glass, but other manufacturers soon realised its sales potential and uranium glass was produced across Europe and later in Ohio. Uranium glass was made into tableware and household items, but fell out of widespread use when the availability of uranium to most industries was sharply curtailed during the Cold War in the 1940s to 1990s, with the vast majority of the world's uranium supply being utilised as a strategic material for use in nuclear weapons or nuclear power. Most uranium glass is now considered to be antiques or retro-era collectables, although there has been a minor revival in art glassware. Otherwise, modern uranium glass is now mainly limited to small objects like beads or marbles as scientific or decorative novelties. Appearance The normal colour of uranium glass ranges from yellow to green depending on the oxidation state and concentration of the metal ions, although this may be altered by the addition of other elements as glass colorants. Uranium glass also fluoresces bright green under ultraviolet light. Vaseline glass The most common color of uranium glass is pale yellowish-green, which in the 1930s led to the nickname "Vaseline glass", based on a perceived resemblance to the appearance of Vaseline-brand petroleum jelly as formulated at that time. Specialized collectors still define Vaseline glass as transparent or semi-transparent uranium glass in this specific color. Vaseline glass is sometimes used as a synonym for any uranium glass, especially in the United States, but this usage is frowned upon, since Vaseline-brand petroleum jelly was only yellow, not other colors. The term is sometimes applied to other types of glass based on certain aspects of their superficial appearance in normal light, regardless of actual uranium content which requires a blacklight test to verify the characteristic green fluorescence. In the United Kingdom and Australia, the term Vaseline glass can be used to refer to any type of translucent glass. Other colors Several other common subtypes of uranium glass have their own nicknames: Custard glass (opaque or semiopaque pale yellow) Jadite glass (opaque or semi-opaque pale green; initially, the name was trademarked as "Jadite", although this is sometimes over-corrected in modern usage to "jadeite") Depression glass (transparent or semitransparent pale green). Burmese glass (opaque glass that shades from pink to yellow) Like "Vaseline", the terms "custard" and "jad(e)ite" are often applied on the basis of superficial appearance rather than uranium content. Conversely, "Depression glass" is a general description for any piece of glassware manufactured during the Great Depression regardless of appearance or formula. Fabrication Uranium glass is used as one of several intermediate glasses in what is known to scientific glass blowers as a 'graded seal'. 
This is typically used in glass-to-metal seals with metals such as tungsten and molybdenum, or nickel-based alloys such as Kovar, as an intermediary glass between the metal sealing glass and lower-expansion borosilicate glass. Usage Ancient usage The use of uranium glass dates back to at least 79 AD, the date of a mosaic containing yellow glass with 1% uranium oxide, which was found in a Roman villa on Cape Posillipo in the Bay of Naples, Italy, in 1912. Medieval usage Starting in the late Middle Ages, pitchblende was extracted from the Habsburg silver mines in Joachimsthal, Bohemia (now Jáchymov in the Czech Republic), and was used as a coloring agent in the local glassmaking industry. Modern usage In 1789, Martin Klaproth discovered uranium. He later experimented with the use of the element as a glass colourant. Uranium glass became popular in the late-19th and early-20th centuries, with its period of greatest popularity being from the 1880s to the 1920s. The first major producer of items made of uranium glass is commonly recognized as the Austrian Franz Xaver Riedel, who named the yellow (German: Gelb) and yellow-green (German: Gelb-Grün) varieties of the glass "annagelb" and "annagrün", respectively, in honor of his daughter Anna Maria. Riedel was a prolific blower of uranium glass in Unter-Polaun (today Dolni Polubny), Bohemia, from 1830 to 1848. By the 1840s, many other European glassworks began to produce uranium glass items and developed new varieties of uranium glass. The Baccarat glassworks in France created an opaque green uranium glass which they named chrysoprase from its similarity to that green form of chalcedony. At the end of the 19th century, glassmakers discovered that uranium glass with certain mineral additions could be tempered at high temperatures, inducing varying degrees of micro-crystallization. This produced a range of increasingly opaque glasses, from the traditional transparent yellow or yellow-green to an opaque white. During the Depression years, more iron oxide was added to the mixture to match popular preferences for a greener glass. This material, technically a glass-ceramic, acquired the name "vaseline glass" because of its supposedly similar appearance to petroleum jelly. A few manufacturers continue the vaseline glass tradition: Fenton Glass, Mosser Glass, Gibson Glass and Jack Loranger. U.S. production of uranium glasses ceased in the middle years of World War II because of the government's confiscation of uranium supplies for the Manhattan Project; restrictions on uranium remained in place from 1942 to 1958. After the restrictions in the United States were eased, several firms resumed production of uranium glass, including Fenton and Mosser, though uranium was still regulated as a strategic material. Following the Cold War, restrictions on uranium glass were completely lifted. During this time many older pieces entered the free market and new pieces continued to be produced in small quantities into the 2000s. Riihimäki Glass produced uranium glass designer pieces after World War II. Health concerns Uranium glass can register above background radiation on a sufficiently sensitive Geiger counter, although most pieces of uranium glass are considered to be harmless and only negligibly radioactive. A study conducted on uranium glass in a private collection found that the dose rates of beta and gamma radiation emitted from the glass posed no danger to the public or to conservators.
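To put such measurements in rough perspective, the activity of a piece of uranium glass can be estimated from the half-life of uranium-238. The object mass and uranium fraction in the sketch below are hypothetical, and the estimate deliberately ignores U-235, U-234, and decay daughters, so it somewhat understates the true activity.

```python
import math

AVOGADRO = 6.022e23                      # atoms per mole
HALF_LIFE_U238_S = 4.468e9 * 3.156e7     # half-life of U-238 in seconds
MOLAR_MASS_U238 = 238.0                  # g/mol

# Hypothetical piece: a 100 g item of typical uranium glass, 2% uranium by weight.
glass_mass_g = 100.0
uranium_fraction = 0.02

uranium_g = glass_mass_g * uranium_fraction
atoms = uranium_g / MOLAR_MASS_U238 * AVOGADRO
decay_constant = math.log(2) / HALF_LIFE_U238_S   # per second
activity_bq = decay_constant * atoms              # decays per second

print(f"{uranium_g:.1f} g of U-238 gives roughly {activity_bq / 1000:.0f} kBq")
# ~25 kBq: readily measurable with a Geiger counter, yet a tiny external
# dose, consistent with studies finding such pieces essentially harmless.
```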
See also Carnival glass Depression glass Fiestaware Sievert Uranium tile References Further reading These People Love to Collect Radioactive Glass, Collectors Weekly External links Uranium Glass – The Glass Association Davidson English Pressed Glass at the Glass Museum Vaseline and Uranium Glass at the Health Physics Historical Instrumentation Museum Collection Collecting Glass compositions Uranium
Uranium glass
[ "Chemistry" ]
1,431
[ "Glass compositions", "Glass chemistry" ]
984,380
https://en.wikipedia.org/wiki/Metalloproteinase
A metalloproteinase, or metalloprotease, is any protease enzyme whose catalytic mechanism involves a metal. An example is ADAM12, which plays a significant role in the fusion of muscle cells during embryo development, in a process known as myogenesis. Most metalloproteases require zinc, but some use cobalt. The metal ion is coordinated to the protein via three ligands. The ligands coordinating the metal ion can be histidine, glutamate, aspartate, lysine, or arginine. The fourth coordination position is taken up by a labile water molecule. Treatment with chelating agents such as EDTA leads to complete inactivation; EDTA removes the zinc that is essential for activity. Metalloproteases are also inhibited by the chelator orthophenanthroline. Classification There are two subgroups of metalloproteinases: Exopeptidases, metalloexopeptidases (EC number: 3.4.17). Endopeptidases, metalloendopeptidases (3.4.24). Well-known metalloendopeptidases include ADAM proteins and matrix metalloproteinases, and M16 metalloproteinases such as Insulin Degrading Enzyme and Presequence Protease. In the MEROPS database, peptidase families are grouped by catalytic type, with the first character of the family name representing that type: A, aspartic; C, cysteine; G, glutamic acid; M, metallo; S, serine; T, threonine; and U, unknown. The serine, threonine and cysteine peptidases utilise the amino acid as a nucleophile and form an acyl intermediate; these peptidases can also readily act as transferases. In the case of aspartic, glutamic and metallopeptidases, the nucleophile is an activated water molecule. In many instances, the structural protein fold that characterises the clan or family may have lost its catalytic activity, yet retained its function in protein recognition and binding. Metalloproteases are the most diverse of the four main protease types, with more than 50 families classified to date. In these enzymes, a divalent cation, usually zinc, activates the water molecule. The metal ion is held in place by amino acid ligands, usually three in number. The known metal ligands are histidine, glutamate, aspartate or lysine, and at least one other residue is required for catalysis, which may play an electrophilic role. Of the known metalloproteases, around half contain an HEXXH motif, which has been shown in crystallographic studies to form part of the metal-binding site. The HEXXH motif is relatively common, but can be more stringently defined for metalloproteases as 'abXHEbbHbc', where 'a' is most often valine or threonine and forms part of the S1' subsite in thermolysin and neprilysin, 'b' is an uncharged residue, and 'c' a hydrophobic residue. Proline is never found in this site, possibly because it would break the helical structure adopted by this motif in metalloproteases. Metallopeptidases from family M48 are integral membrane proteins associated with the endoplasmic reticulum and Golgi, binding one zinc ion per subunit. These endopeptidases include CAAX prenyl protease 1, which proteolytically removes the C-terminal three residues of farnesylated proteins. Metalloproteinase inhibitors are found in numerous marine organisms, including fish, cephalopods, mollusks, algae and bacteria. Members of the M50 metallopeptidase family include: mammalian sterol-regulatory element binding protein (SREBP) site 2 protease and Escherichia coli protease EcfE, stage IV sporulation protein FB.
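The HEXXH motif lends itself to a simple sequence scan. In the sketch below the protein sequence is made up, and the stricter pattern only approximates the 'abXHEbbHbc' definition, with simplified residue classes standing in for 'a', 'b', and 'c'.

```python
import re

# Loose zinc-binding motif: His-Glu-any-any-His.
HEXXH = re.compile(r"HE..H")

# Rough sketch of the stricter 'abXHEbbHbc' pattern:
#   a = Val or Thr; b = an uncharged residue (here: not D/E/K/R and not P,
#   a simplification); c = a hydrophobic residue (also simplified).
UNCHARGED = "[ACFGHILMNQSTVWY]"
STRICT = re.compile(rf"[VT]{UNCHARGED}.HE{UNCHARGED}{{2}}H{UNCHARGED}[AVLIFMW]")

seq = "MKWQVATHEAVHGLSERT"  # hypothetical protein sequence

for m in HEXXH.finditer(seq):
    print("HEXXH match:", m.group(), "at residue", m.start() + 1)
print("strict 'abXHEbbHbc'-style match:", bool(STRICT.search(seq)))
```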
See also Matrix metalloproteinase The Proteolysis Map References External links The MEROPS online database for peptidases and their inhibitors: Metallo Peptidases Proteopedia: Metalloproteases Protein families Proteases Fibrinolytic enzymes
Metalloproteinase
[ "Biology" ]
943
[ "Protein families", "Protein classification" ]
984,528
https://en.wikipedia.org/wiki/Slipway
A slipway, also known as boat ramp or launch or boat deployer, is a ramp on the shore by which ships or boats can be moved to and from the water. They are used for building and repairing ships and boats, and for launching and retrieving small boats on trailers towed by automobiles and flying boats on their undercarriage. The nautical terms ways and skids are alternative names for slipway. A ship undergoing construction in a shipyard is said to be on the ways. If a ship is scrapped there, she is said to be broken up in the ways. As the word "slip" implies, the ships or boats are moved over the ramp, by way of crane or fork lift. Prior to the move the vessel's hull is coated with grease, which then allows the ship or boat to "slip" off the ramp and progress safely into the water. Slipways are used to launch (newly built) large ships, but can only dry-dock or repair smaller ships. Pulling large ships against the greased ramp would require too much force. Therefore, for dry-docking large ships, one must use carriages supported by wheels or by roller-pallets. These types of dry-docking installations are called "marine railways". Nevertheless the words "slip" and "slipway" are also used for all dry-docking installations that use a ramp. Simple slipways In its simplest form, a slipway is a plain ramp, typically made of concrete, steel, stone or even wood. The height of the tide can limit the usability of a slip: unless the ramp continues well below the low water level it may not be usable at low tide. Normally there is a flat paved area on the landward end. When engaged in building or repairing boats or small ships (i.e. ships of no more than about 300 tons), slipways can use a wheeled carriage, or "cradle", which is run down the ramp until the vessel can float on or off the carriage. Such slipways are used for repair as well as for putting newly built vessels in the water. When used for launching and retrieving small boats, the trailer is placed in the water. The boat may be either floated on and off the trailer or pulled off. When recovering the boat from the water, it is winched back up the trailer. From 1925 onwards, modern whaling factory ships have usually been equipped by their designers with a slipway at the stern to haul harpooned whales on deck to be processed by flensers. To achieve a safe launch of some types of land-based lifeboats in bad weather and difficult sea conditions, the lifeboat and slipway are designed so that the lifeboat slides down a relatively steep steel slip under gravity.
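The claim above that hauling a large ship up even a greased ramp demands impractical force can be illustrated with a basic inclined-plane estimate. The displacement, slope, and friction coefficient below are illustrative assumptions, not engineering figures.

```python
import math

G = 9.81  # m/s^2

def haul_force_newtons(mass_kg: float, slope: float, mu: float) -> float:
    """Force to drag a hull up a ramp at constant speed: the gravity
    component along the slope plus sliding friction on the ways."""
    theta = math.atan(slope)  # slope given as rise over run, e.g. 1/20
    return mass_kg * G * (math.sin(theta) + mu * math.cos(theta))

# Illustrative numbers: a 20,000-tonne ship on a 1-in-20 slip with
# greased ways (kinetic friction coefficient mu ~ 0.02, an assumption).
force = haul_force_newtons(mass_kg=20_000e3, slope=1 / 20, mu=0.02)
print(f"roughly {force / 1e6:.0f} MN")  # ~14 MN, far beyond practical winches
```

With a wheeled or rollered carriage the friction term drops by roughly an order of magnitude, which is the rationale for marine railways.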
Slipways in ship construction For large ships, slipways are only used in construction of the vessel. They may be arranged parallel or perpendicular to the shore line (or as nearly so as the water and maximum length of vessel allows). On launching, the vessel slides down the slipway on the ways until it floats by itself. The process of transferring the vessel to the water is known as launching and is normally a ceremonial and celebratory occasion. It is the point where the vessel is formally named. At this point the hull is complete and the propellers and associated shafting are in place, but dependent on the depth of water, stability and weight the engines might have not been fitted or the superstructure may not be completed. In a perpendicular slipway, the ship is normally built with its stern facing the water. Modern slipways take the form of a reinforced concrete mat of sufficient strength to support the vessel, with two "barricades" that extend to well below the water level taking into account tidal variations. The barricades support the two launch ways. The vessel is built upon temporary cribbing that is arranged to give access to the hull's outer bottom, and to allow the launchways to be erected under the complete hull. When it is time to prepare for launching a pair of standing ways are erected under the hull and out onto the barricades. The surface of these ways are greased (Tallow and whale oil were used as grease in sailing ship days). A pair of sliding ways is placed on top, under the hull, and a launch cradle with bow and stern poppets is erected on these sliding ways. The weight of the hull is then transferred from the build cribbing onto the launch cradle. Provision is made to hold the vessel in place and then release it at the appropriate moment in the launching ceremony, these are either a weak link designed to be cut at a signal or a mechanical trigger controlled by a switch from the ceremonial platform. Some slipways are built so that the vessel is side on to the water and is launched sideways. This is done where the limitations of the water channel would not allow lengthwise launching, but occupies a much greater length of shore. The Great Eastern'' built by Brunel was built this way as were many landing craft during World War II. This method requires many more sets of ways to support the weight of the ship. In both cases heavy chains are attached to the ship and the drag effect is used to slow the vessel once afloat until tugboats can move the hull to a jetty for fitting out. The practice of building on a slipway is dying out with the increasing size of vessels from about the 1970s. Part of the reason is the space requirement for slowing and maneuvering the vessel immediately after it has left the slipway, but the sheer size of the vessel causes design problems, since the hull is basically supported only at its end points during the launch process and this imposes stresses not met during normal operation. See also Boat lift Canoe launch Dry dock Ferry slip Harbor Hoverport Patent slip (marine railway) Seaport Ship cradle Shiplift Travel lift References Shipbuilding Coastal construction
Slipway
[ "Engineering" ]
1,492
[ "Construction", "Coastal construction", "Shipbuilding", "Marine engineering" ]
984,629
https://en.wikipedia.org/wiki/Social%20complexity
In sociology, social complexity is a conceptual framework used in the analysis of society. In the sciences, contemporary definitions of complexity are found in systems theory, wherein the phenomenon being studied has many parts and many possible arrangements of the parts; simultaneously, what is complex and what is simple are relative and change in time. Contemporary usage of the term complexity specifically refers to sociological theories of society as a complex adaptive system; however, social complexity and its emergent properties are recurring subjects throughout the historical development of social philosophy and the study of social change. Early theoreticians of sociology, such as Ferdinand Tönnies, Émile Durkheim, Max Weber, Vilfredo Pareto, and Georg Simmel, examined the exponential growth and interrelatedness of social encounters and social exchanges. The emphasis on the interconnectivity of social relationships, and on the emergence of new properties within society, is found in the social theory produced in the subfields of sociology. Social complexity is a basis for the connection of the phenomena reported in microsociology and macrosociology, and thus provides an intellectual middle range for sociologists to formulate and develop hypotheses. Methodologically, social complexity is theory-neutral, and includes the phenomena studied in microsociology and the phenomena studied in macrosociology. Theoretic background In 1937, the sociologist Talcott Parsons continued the work of the early theoreticians of sociology with his work on action theory; by 1951, Parsons had developed action theory into formal systems theory in The Social System (1951). In the following decades, the synergy between general systems thinking and the development of social system theories was carried forward by Robert K. Merton in discussions of theories of the middle range and of social structure and agency. From the late 1970s until the early 1990s, sociological investigation concerned the properties of systems in which the strong correlation of sub-parts leads to the observation of autopoietic, self-organizing, dynamical, turbulent, and chaotic behaviours that arise from mathematical complexity, as in the work of Niklas Luhmann. One of the earliest usages of the term "complexity" in the social and behavioral sciences to refer specifically to a complex system is found in the study of modern organizations and management studies. However, particularly in management studies, the term has often been used in a metaphorical rather than in a qualitative or quantitative theoretical manner. By the mid-1990s, the "complexity turn" in the social sciences began as some of the same tools generally used in complexity science were incorporated into the social sciences. By 1998, the international electronic periodical Journal of Artificial Societies and Social Simulation had been created. In the last several years, many publications have presented overviews of complexity theory within the field of sociology. Within this body of work, connections are also drawn to yet other theoretical traditions, including constructivist epistemology and the philosophical positions of phenomenology, postmodernism and critical realism. Methodologies Methodologically, social complexity is theory-neutral, meaning that it accommodates both local and global approaches to sociological research.
The very idea of social complexity arises out of the historical-comparative methods of early sociologists, and this method remains important in developing, defining, and refining the theoretical construct of social complexity. Because complex social systems have many parts, and many possible relationships between those parts, the appropriate methodology is typically determined by the researcher's level of analysis, that is, by the level of description or explanation demanded by the research hypotheses. At the most localized level of analysis, ethnographic, participant or non-participant observation, content analysis, and other qualitative research methods may be appropriate. More recently, highly sophisticated quantitative research methodologies are being developed and used in sociology at both local and global levels of analysis. Such methods include (but are not limited to) bifurcation diagrams, network analysis, non-linear modeling, and computational models including cellular automata programming, sociocybernetics, and other methods of social simulation. Complex social network analysis Complex social network analysis is used to study the dynamics of large, complex social networks. Dynamic network analysis brings together traditional social network analysis, link analysis, and multi-agent systems within network science and network theory. Through the use of key concepts and methods in social network analysis, agent-based modeling, theoretical physics, and modern mathematics (particularly graph theory and fractal geometry), this method of inquiry has brought insights into the dynamics and structure of social systems. New computational methods of localized social network analysis are coming out of the work of Duncan Watts, Albert-László Barabási, Nicholas A. Christakis, Kathleen Carley, and others. New methods of global network analysis are emerging from the work of John Urry and the sociological study of globalization, linked to the work of Manuel Castells and the later work of Immanuel Wallerstein. Since the late 1990s, Wallerstein has increasingly made use of complexity theory, particularly the work of Ilya Prigogine. Dynamic social network analysis is linked to a variety of methodological traditions, above and beyond systems thinking, including graph theory, traditional social network analysis in sociology, and mathematical sociology. It also links to mathematical chaos and complex dynamics through the work of Duncan Watts and Steven Strogatz, as well as fractal geometry through Albert-László Barabási and his work on scale-free networks. Computational sociology The development of computational sociology involves such scholars as Nigel Gilbert, Klaus G. Troitzsch, Joshua M. Epstein, and others. The foci of methods in this field include social simulation and data-mining, both of which are sub-areas of computational sociology. Social simulation uses computers to create an artificial laboratory for the study of complex social systems; data-mining uses machine intelligence to search for non-trivial patterns of relations in large, complex, real-world databases. The emerging methods of socionics are a variant of computational sociology. Computational sociology is influenced by a number of micro-sociological areas as well as the macro-level traditions of systems science and systems thinking.
The micro-level influences of symbolic interaction, exchange, and rational choice, along with the micro-level focus of computational political scientists, such as Robert Axelrod, helped to develop computational sociology's bottom-up, agent-based approach to modeling complex systems. This is what Joshua M. Epstein calls generative science. Other important areas of influence include statistics, mathematical modeling and computer simulation. Sociocybernetics Sociocybernetics integrates sociology with second-order cybernetics and the work of Niklas Luhmann, along with the latest advances in complexity science. In terms of scholarly work, the focus of sociocybernetics has been primarily conceptual and only slightly methodological or empirical. Sociocybernetics is directly tied to systems thought inside and outside of sociology, specifically in the area of second-order cybernetics. Areas of application In the first decade of the 21st century, the diversity of areas of application has grown as more sophisticated methods have developed. Social complexity theory is applied in studies of social cooperation and public goods; altruism; education; global civil society collective action and social movements; social inequality; workforce and unemployment; policy analysis; health care systems; and innovation and social change, to name a few. A current international scientific research project, the Seshat: Global History Databank, was explicitly designed to analyze changes in social complexity from the Neolithic Revolution until the Industrial Revolution. As a middle-range theoretical platform, social complexity can be applied to any research in which social interaction or the outcomes of such interactions can be observed, but particularly where they can be measured and expressed as continuous or discrete data points. One common criticism often cited regarding the usefulness of complexity science in sociology is the difficulty of obtaining adequate data. Nonetheless, application of the concept of social complexity and the analysis of such complexity has begun and continues to be an ongoing field of inquiry in sociology. From childhood friendships and teen pregnancy to criminology and counter-terrorism, theories of social complexity are being applied in almost all areas of sociological research. In the area of communications research and informetrics, the concept of self-organizing systems appears in mid-1990s research related to scientific communications. Scientometrics and bibliometrics are areas of research in which discrete data are available, as are several other areas of social communications research such as sociolinguistics. Social complexity is also a concept used in semiotics. See also Social science Complex society Complexity economics Complexity theory and organizations Differentiation (sociology) Econophysics Engaged theory Network Analysis and Ethnographic Problems Personal information management General Aggregate data Artificial neural network Cognitive complexity Computational complexity theory Dual-phase evolution Evolutionary programming Game theory Generic-case complexity Multi-agent system Systemography References Further reading Byrne, David (1998). Complexity Theory and the Social Sciences. London: Routledge. Byrne, D., & Callaghan, G. (2013). Complexity theory and the social sciences: The state of the art. Routledge. Castellani, Brian and Frederic William Hafferty (2009). Sociology and Complexity Science: A New Area of Inquiry (Series: Understanding Complex Systems XV). 
Berlin, Heidelberg: Springer-Verlag. Eve, Raymond, Sara Horsfall and Mary E. Lee (1997). Chaos, Complexity and Sociology: Myths, Models, and Theories. Thousand Oaks, CA: Sage Publications. Jenks, Chris and John Smith (2006). Qualitative Complexity: Ecology, Cognitive Processes and the Re-Emergence of Structures in Post-Humanist Social Theory. New York, NY: Routledge. Kiel, L. Douglas (ed.) (2008). Knowledge Management, Organizational Intelligence, Learning and Complexity. UNESCO (EOLSS): Paris, France. Kiel, L. Douglas and Euel Elliott (eds.) (1997). Chaos Theory in the Social Sciences: Foundations and Applications. The University of Michigan Press: Ann Arbor, MI. Leydesdorff, Loet (2001). A Sociological Theory of Communication: The Self-Organization of the Knowledge-Based Society. Parkland, FL: Universal Publishers. Urry, John (2005). "The Complexity Turn." Theory, Culture and Society, 22(5): 1–14. Complex systems theory Self-organization Nonlinear systems Sociological theories Sociological terminology
Social complexity
[ "Mathematics" ]
2,131
[ "Self-organization", "Nonlinear systems", "Dynamical systems" ]
984,692
https://en.wikipedia.org/wiki/Computational%20sociology
Computational sociology is a branch of sociology that uses computationally intensive methods to analyze and model social phenomena. Using computer simulations, artificial intelligence, complex statistical methods, and analytic approaches like social network analysis, computational sociology develops and tests theories of complex social processes through bottom-up modeling of social interactions. It involves the understanding of social agents, the interaction among these agents, and the effect of these interactions on the social aggregate. Although the subject matter and methodologies in social science differ from those in natural science or computer science, several of the approaches used in contemporary social simulation originated from fields such as physics and artificial intelligence. Some of the approaches that originated in this field have been imported into the natural sciences, such as measures of network centrality from the fields of social network analysis and network science. In relevant literature, computational sociology is often related to the study of social complexity. Social complexity concepts such as complex systems, non-linear interconnection among macro and micro processes, and emergence have entered the vocabulary of computational sociology. A practical and well-known example is the construction of a computational model in the form of an "artificial society", by which researchers can analyze the structure of a social system. History Background Computational sociology was introduced over the past four decades and has been gaining popularity. It has been used primarily for modeling or building explanations of social processes, and depends on the emergence of complex behavior from simple activities. The idea behind emergence is that the properties of a bigger system do not always have to be properties of the components that the system is made of. Alexander, Morgan, and Broad, classical emergentists, introduced the idea of emergence in the early 20th century. The aim of this method was to find a good enough accommodation between two different and extreme ontologies: reductionist materialism and dualism. While emergence has had a valuable and important role in the foundation of computational sociology, there are those who do not necessarily agree. One major leader in the field, Epstein, doubted its use because some aspects remain unexplainable. Epstein put up a claim against emergentism, in which he says it "is precisely the generative sufficiency of the parts that constitutes the whole's explanation". Agent-based models have had a historical influence on computational sociology. These models first appeared in the 1960s, and were used to simulate control and feedback processes in organizations, cities, etc. During the 1970s, their application introduced the use of individuals as the main units for the analyses and used bottom-up strategies for modeling behaviors. The last wave occurred in the 1980s. At this time, the models were still bottom-up; the only difference was that the agents interacted interdependently. Systems theory and structural functionalism In the post-war era, Vannevar Bush's differential analyser, John von Neumann's cellular automata, Norbert Wiener's cybernetics, and Claude Shannon's information theory became influential paradigms for modeling and understanding complexity in technical systems.
In response, scientists in disciplines such as physics, biology, electronics, and economics began to articulate a general theory of systems in which all natural and physical phenomena are manifestations of interrelated elements in a system that has common patterns and properties. Following Émile Durkheim's call to analyze complex modern society sui generis, post-war structural functionalist sociologists such as Talcott Parsons seized upon these theories of systematic and hierarchical interaction among constituent components to attempt to generate grand unified sociological theories, such as the AGIL paradigm. Sociologists such as George Homans argued that sociological theories should be formalized into hierarchical structures of propositions and precise terminology from which other propositions and hypotheses could be derived and operationalized into empirical studies. Because computer algorithms and programs had been used as early as 1956 to test and validate mathematical theorems, such as the four color theorem, some scholars anticipated that similar computational approaches could "solve" and "prove" analogously formalized problems and theorems of social structures and dynamics. Macrosimulation and microsimulation By the late 1960s and early 1970s, social scientists used increasingly available computing technology to perform macro-simulations of control and feedback processes in organizations, industries, cities, and global populations. These models used differential equations to predict population distributions as holistic functions of other systematic factors such as inventory control, urban traffic, migration, and disease transmission. Although simulations of social systems received substantial attention in the mid-1970s after the Club of Rome published reports predicting that policies promoting exponential economic growth would eventually bring global environmental catastrophe, the inconvenient conclusions led many authors to seek to discredit the models, attempting to make the researchers themselves appear unscientific. Hoping to avoid the same fate, many social scientists turned their attention toward micro-simulation models to make forecasts and study policy effects by modeling aggregate changes in state of individual-level entities rather than the changes in distribution at the population level. However, these micro-simulation models did not permit individuals to interact or adapt and were not intended for basic theoretical research. Cellular automata and agent-based modeling The 1970s and 1980s were also a time when physicists and mathematicians were attempting to model and analyze how simple component units, such as atoms, give rise to global properties, such as complex material properties at low temperatures, in magnetic materials, and within turbulent flows. Using cellular automata, scientists were able to specify systems consisting of a grid of cells in which each cell only occupied some finite states and changes between states were solely governed by the states of immediate neighbors. Along with advances in artificial intelligence and microcomputer power, these methods contributed to the development of "chaos theory" and "complexity theory" which, in turn, renewed interest in understanding complex physical and social systems across disciplinary boundaries. 
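The grid-of-cells idea just described translates into very little code. The sketch below is a minimal, generic one-dimensional two-state automaton with a nearest-neighbor majority rule; it is an illustrative toy, not a reconstruction of any specific model from this literature.

```python
import random

def step(cells):
    """One synchronous update: each cell adopts the majority state
    of itself and its two immediate neighbors (wrap-around edges)."""
    n = len(cells)
    return [
        1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

random.seed(1)
cells = [random.randint(0, 1) for _ in range(60)]
for _ in range(10):  # local rules produce a global pattern: clusters of agreement
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```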
Research organizations explicitly dedicated to the interdisciplinary study of complexity were also founded in this era: the Santa Fe Institute was established in 1984 by scientists based at Los Alamos National Laboratory, and the BACH group at the University of Michigan likewise started in the mid-1980s. This cellular automata paradigm gave rise to a third wave of social simulation emphasizing agent-based modeling. Like micro-simulations, these models emphasized bottom-up designs but adopted four key assumptions that diverged from microsimulation: autonomy, interdependency, simple rules, and adaptive behavior. Agent-based models are less concerned with predictive accuracy and instead emphasize theoretical development. In 1981, mathematician and political scientist Robert Axelrod and evolutionary biologist W.D. Hamilton published a major paper in Science titled "The Evolution of Cooperation" which used an agent-based modeling approach to demonstrate how social cooperation based upon reciprocity can be established and stabilized in a prisoner's dilemma game when agents followed simple rules of self-interest. Axelrod and Hamilton demonstrated that individual agents following a simple rule set of (1) cooperate on the first turn and (2) thereafter replicate the partner's previous action were able to develop "norms" of cooperation and sanctioning in the absence of canonical sociological constructs such as demographics, values, religion, and culture as preconditions or mediators of cooperation. Throughout the 1990s, scholars like William Sims Bainbridge, Kathleen Carley, Michael Macy, and John Skvoretz developed multi-agent-based models of generalized reciprocity, prejudice, social influence, and organizational information processing. In 1999, Nigel Gilbert published the first textbook on social simulation, Simulation for the Social Scientist, and established its most relevant journal: the Journal of Artificial Societies and Social Simulation. Data mining and social network analysis Independent from developments in computational models of social systems, social network analysis emerged in the 1970s and 1980s from advances in graph theory, statistics, and studies of social structure as a distinct analytical method and was articulated and employed by sociologists like James S. Coleman, Harrison White, Linton Freeman, J. Clyde Mitchell, Mark Granovetter, Ronald Burt, and Barry Wellman. The increasing pervasiveness of computing and telecommunication technologies throughout the 1980s and 1990s demanded analytical techniques, such as network analysis and multilevel modeling, that could scale to increasingly complex and large data sets. The most recent wave of computational sociology, rather than employing simulations, uses network analysis and advanced statistical techniques to analyze large-scale computer databases of electronic proxies for behavioral data. Electronic records such as email and instant message records, hyperlinks on the World Wide Web, mobile phone usage, and discussion on Usenet allow social scientists to directly observe and analyze social behavior at multiple points in time and multiple levels of analysis without the constraints of traditional empirical methods such as interviews, participant observation, or survey instruments. Continued improvements in machine learning algorithms likewise have permitted social scientists and entrepreneurs to use novel techniques to identify latent and meaningful patterns of social interaction and evolution in large electronic datasets.
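As a toy version of the network analysis described above, electronic interaction records can be recast as a graph and summarized with simple measures such as degree centrality. The email log below is entirely made up, and the measure shown is only one of many used in the field.

```python
# Hypothetical email log: (sender, recipient) pairs.
emails = [
    ("ana", "ben"), ("ana", "carol"), ("ben", "carol"),
    ("carol", "ana"), ("dave", "ana"), ("dave", "ben"),
]

# Treat each exchange as an undirected tie and count distinct neighbors.
neighbors = {}
for a, b in emails:
    neighbors.setdefault(a, set()).add(b)
    neighbors.setdefault(b, set()).add(a)

n = len(neighbors)
for person, ties in sorted(neighbors.items(), key=lambda kv: -len(kv[1])):
    # Normalized degree centrality: ties divided by the n - 1 possible ties.
    print(f"{person}: degree {len(ties)}, centrality {len(ties) / (n - 1):.2f}")
```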
The automatic parsing of textual corpora has enabled the extraction of actors and their relational networks on a vast scale, turning textual data into network data. The resulting networks, which can contain thousands of nodes, are then analysed using tools from network theory to identify the key actors, the key communities or parties, and general properties such as robustness or structural stability of the overall network, or centrality of certain nodes. This automates the approach introduced by quantitative narrative analysis, whereby subject-verb-object triplets are identified with pairs of actors linked by an action, or pairs formed by actor-object. Computational content analysis Content analysis has been a traditional part of social sciences and media studies for a long time. The automation of content analysis has allowed a "big data" revolution to take place in that field, with studies in social media and newspaper content that include millions of news items. Gender bias, readability, content similarity, reader preferences, and even mood have been analyzed based on text mining methods over millions of documents. The analysis of readability, gender bias and topic bias was demonstrated in Flaounas et al., showing how different topics have different gender biases and levels of readability; the possibility of detecting mood shifts in a vast population by analysing Twitter content was demonstrated as well. The analysis of vast quantities of historical newspaper content was pioneered by Dzogang et al., who showed how periodic structures can be automatically discovered in historical newspapers. A similar analysis was performed on social media, again revealing strongly periodic structures. Challenges Computational sociology, as with any field of study, faces a set of challenges. These challenges need to be handled meaningfully so as to make the maximum impact on society. Levels and their interactions Every society that forms tends to exist at one level or another, and there are tendencies of interaction between and across these levels. Levels need not only be micro-level or macro-level in nature. There can be intermediate levels at which a society exists, say groups, networks, or communities. The questions then arise: how does one identify these levels, and how do they come into existence? And once they exist, how do they interact within themselves and with other levels? If we view entities (agents) as nodes and the connections between them as edges, we see the formation of networks. The connections in these networks do not come about based on objective relationships alone; rather, they are decided upon by factors chosen by the participating entities. The challenge with this process is that it is difficult to identify when a set of entities will form a network. These may be trust networks, co-operation networks, dependence networks, and so on. There have been cases where heterogeneous sets of entities have been shown to form strong and meaningful networks among themselves. As discussed previously, societies fall into levels, and at one such level, the individual level, a micro-macro link refers to the interactions which create higher levels. A set of questions needs to be answered regarding these micro-macro links. How are they formed? When do they converge? What feedback is pushed to the lower levels, and how is it pushed? Another major challenge in this category concerns the validity of information and its sources.
In recent years there has been a boom in information gathering and processing. However, little attention has been paid to the spread of false information between societies. Tracing such information back to its sources and establishing ownership is difficult. Culture modeling The evolution of the networks and levels in society brings about cultural diversity. A question that arises, however, is this: if people tend to interact and become more accepting of other cultures and beliefs, how is it that diversity still persists? Why is there no convergence? A major challenge is how to model these diversities. Are there external factors, like mass media or the locality of societies, which influence the evolution or persistence of cultural diversities? Experimentation and evaluation Any study or modelling, when combined with experimentation, needs to be able to address the questions being asked. Computational social science deals with large-scale data, and the challenge becomes much more evident as the scale grows. How would one design informative simulations on a large scale? And even if a large-scale simulation is built, how is the evaluation supposed to be performed? Model choice and model complexities Another challenge is identifying the models that would best fit the data, and the complexities of these models. These models would help us predict how societies might evolve over time and provide possible explanations of how things work. Generative models Generative models help us to perform extensive qualitative analysis in a controlled fashion. A model proposed by Epstein is agent-based simulation, which involves identifying an initial set of heterogeneous entities (agents) and observing their evolution and growth based on simple local rules. But what are these local rules? How does one identify them for a set of heterogeneous agents? Evaluating these rules and their impact poses a whole new set of difficulties. Heterogeneous or ensemble models Integrating simple models that perform better on individual tasks to form a hybrid model is an approach worth investigating. These models can offer better performance and understanding of the data. However, the trade-off of identifying and deeply understanding the interactions between these simple models arises when one needs to come up with one combined, well-performing model. In addition, coming up with tools and applications to help analyse and visualize the data based on these hybrid models is a further challenge. Impact Computational sociology can have an impact on science, technology, and society. Impact on science In order for the study of computational sociology to be effective, there have to be valuable innovations. These innovations can take the form of new data analytics tools, better models, and algorithms. The advent of such innovation will be a boon for the scientific community at large. Impact on society One of the major challenges of computational sociology is the modelling of social processes. Various lawmakers and policy makers would be able to see efficient and effective paths to issue new guidelines, and the general public would be able to evaluate and gain a fair understanding of the options presented before them, enabling an open and well-balanced decision process.
See also Journal of Artificial Societies and Social Simulation Artificial society Simulated reality Social simulation Agent-based social simulation Social complexity Computational economics Computational epidemiology Cliodynamics Predictive analytics References External links On-line book "Simulation for the Social Scientist" by Nigel Gilbert and Klaus G. Troitzsch, 1999, second edition 2005 Journal of Artificial Societies and Social Simulation Agent based models for social networks, interactive java applets Sociology and Complexity Science Website Journals and academic publications Complexity Research Journal List, from UIUC, IL Related Research Groups, from UIUC, IL Associations, conferences and workshops North American Association for Computational Social and Organization Sciences ESSA: European Social Simulation Association Academic programs, departments and degrees University of Bristol "Mediapatterns" project Carnegie Mellon University , PhD program in Computation, Organizations and Society (COS) University of Chicago Certificate and MA in Computational Social Science George Mason University PhD program in CSS (Computational Social Sciences) MA program in Master's of Interdisciplinary Studies, CSS emphasis Portland State, PhD program in Systems Science Portland State, MS program in Systems Science University College Dublin, PhD Program in Complex Systems and Computational Social Science MSc in Social Data Analytics BSc in Computational Social Science UCLA, Minor in Human Complex Systems UCLA, Major in Computational & Systems Biology (including behavioral sciences) Univ. of Michigan, Minor in Complex Systems Systems Sciences Programs List, Portland State. List of other worldwide related programs. Centers and institutes North America Center for Complex Networks and Systems Research, Indiana University, Bloomington, IN, USA. Center for Complex Systems Research, University of Illinois at Urbana-Champaign, IL, USA. Center for Social Complexity, George Mason University, Fairfax, VA, USA. Center for Social Dynamics and Complexity, Arizona State University, Tempe, AZ, USA. Center of the Study of Complex Systems, University of Michigan, Ann Arbor, MI, USA. Human Complex Systems, University of California Los Angeles, Los Angeles, CA, USA. Institute for Quantitative Social Science, Harvard University, Boston, MA, USA. Northwestern Institute on Complex Systems (NICO), Northwestern University, Evanston, IL USA. Santa Fe Institute, Santa Fe, NM, USA. Duke Network Analysis Center, Duke University, Durham, NC, USA South America Modelagem de Sistemas Complexos, University of São Paulo - EACH, São Paulo, SP, Brazil Instituto Nacional de Ciência e Tecnologia de Sistemas Complexos, Centro Brasileiro de Pesquisas Físicas, Rio de Janeiro, RJ, Brazil Asia Bandung Fe Institute, Centre for Complexity in Surya University, Bandung, Indonesia. Europe Centre for Policy Modelling, Manchester, UK. Centre for Research in Social Simulation, University of Surrey, UK. UCD Dynamics Lab- Centre for Computational Social Science, Geary Institute for Public Policy, University College Dublin, Ireland. Groningen Center for Social Complexity Studies (GCSCS), Groningen, NL. Chair of Sociology, in particular of Modeling and Simulation (SOMS), Zürich, Switzerland. Research Group on Experimental and Computational Sociology (GECS), Brescia, Italy Subfields of sociology Complex systems theory Methods in sociology Computational fields of study
Computational sociology
[ "Technology" ]
3,698
[ "Computational fields of study", "Computing and society" ]
984,726
https://en.wikipedia.org/wiki/Neuropeptide
Neuropeptides are chemical messengers made up of small chains of amino acids that are synthesized and released by neurons. Neuropeptides typically bind to G protein-coupled receptors (GPCRs) to modulate neural activity, as well as the activity of other tissues such as the gut, muscles, and heart. Neuropeptides are synthesized from large precursor proteins which are cleaved and post-translationally processed, then packaged into large dense core vesicles. Neuropeptides are often co-released with other neuropeptides and neurotransmitters in a single neuron, yielding a multitude of effects. Once released, neuropeptides can diffuse widely to affect a broad range of targets. Neuropeptides are extremely ancient and highly diverse chemical messengers. Placozoans such as Trichoplax, extremely basal animals which do not possess neurons, use peptides for cell-to-cell communication in a way similar to the neuropeptides of higher animals. Examples Peptide signals play a role in information processing that is different from that of conventional neurotransmitters, and many appear to be particularly associated with specific behaviours. For example, oxytocin and vasopressin have striking and specific effects on social behaviours, including maternal behaviour and pair bonding. Crustacean cardioactive peptide (CCAP) has several functions including regulating heart rate, allatostatin and proctolin regulate food intake and growth, bursicon controls tanning of the cuticle, and corazonin has a role in cuticle pigmentation and moulting. Synthesis Neuropeptides are synthesized from inactive precursor proteins called prepropeptides. Prepropeptides contain sequences for a family of distinct peptides and often contain duplicated copies of the same peptides, depending on the organism. In addition to the precursor peptide sequences, prepropeptides also contain a signal peptide, spacer peptides, and cleavage sites. The signal peptide sequence guides the protein to the secretory pathway, starting at the endoplasmic reticulum. The signal peptide sequence is removed in the endoplasmic reticulum, yielding a propeptide. The propeptide travels to the Golgi apparatus, where it is proteolytically cleaved and processed into multiple peptides. Peptides are packaged into dense core vesicles, where further cleaving and processing, such as C-terminal amidation, can occur. Dense core vesicles are transported throughout the neuron and can release peptides at the synaptic cleft, at the cell body, and along the axon. Mechanism Neuropeptides are released by dense core vesicles after depolarization of the cell. Compared to classical neurotransmitter signaling, neuropeptide signaling is more sensitive. Neuropeptide receptor affinity is in the nanomolar to micromolar range while neurotransmitter affinity is in the micromolar to millimolar range. Additionally, dense core vesicles contain a small amount of neuropeptide (3–10 mM) compared to synaptic vesicles containing neurotransmitters (e.g. 100 mM for acetylcholine). Evidence shows that neuropeptides are released after high-frequency firing or bursts, distinguishing dense core vesicle release from synaptic vesicle release. Neuropeptides utilize volume transmission and are not rapidly taken back up, allowing diffusion across broad areas (nm to mm) to reach targets. Almost all neuropeptides bind to G protein-coupled receptors (GPCRs), inducing second messenger cascades to modulate neural activity on long time-scales. Expression of neuropeptides in the nervous system is diverse.
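The precursor processing described under Synthesis above can be caricatured in a few lines of code. Prohormone convertases commonly cleave at dibasic residues (e.g. Lys-Arg or Arg-Arg); the sketch below applies only that simplified rule to a made-up prepropeptide, so both the sequence and the fixed signal-peptide boundary are illustrative assumptions.

```python
import re

def cleave_at_dibasic(precursor: str) -> list[str]:
    """Split a precursor after dibasic motifs (KR, RR, KK, RK) and drop
    the basic residues themselves, mimicking convertase plus
    carboxypeptidase action in a highly simplified way."""
    fragments = re.split(r"(?:KR|RR|KK|RK)", precursor)
    return [f for f in fragments if f]  # discard empty strings

# Hypothetical prepropeptide: signal peptide + two peptide copies + spacer.
prepro = "MWLLVLAFS" + "KR" + "FDAFTGA" + "KR" + "QSPWG" + "RR" + "FDAFTGA"
signal_removed = prepro[9:]  # signal peptide cleaved in the ER (toy boundary)
print(cleave_at_dibasic(signal_removed))
# -> ['FDAFTGA', 'QSPWG', 'FDAFTGA']: duplicated peptide copies, as in
#    many real precursors.
```

Real processing involves additional enzymes and context-dependent cleavage that this single rule ignores.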
Neuropeptides are often co-released with other neuropeptides and neurotransmitters, yielding a diversity of effects depending on the combination of release. For example, vasoactive intestinal peptide is typically co-released with acetylcholine. Neuropeptide release can also be specific. In Drosophila larvae, for example, eclosion hormone is expressed in just two neurons. Receptor targets Most neuropeptides act on G-protein coupled receptors (GPCRs). Neuropeptide-GPCRs fall into two families: rhodopsin-like and the secretin class. Most peptides activate a single GPCR, while some activate multiple GPCRs (e.g. AstA, AstC, DTK). Peptide-GPCR binding relationships are highly conserved across animals. Aside from conserved structural relationships, some peptide-GPCR functions are also conserved across the animal kingdom. For example, neuropeptide F/neuropeptide Y signaling is structurally and functionally conserved between insects and mammals. Although peptides mostly target metabotropic receptors, there is some evidence that neuropeptides bind to other receptor targets. Peptide-gated ion channels (FMRFamide-gated sodium channels) have been found in snails and Hydra. Other examples of non-GPCR targets include: insulin-like peptides and tyrosine-kinase receptors in Drosophila, and atrial natriuretic peptide and eclosion hormone with membrane-bound guanylyl cyclase receptors in mammals and insects. Actions Due to their modulatory and diffusive nature, neuropeptides can act on multiple temporal and spatial scales. Below are some examples of neuropeptide actions: Co-release Neuropeptides are often co-released with other neurotransmitters and neuropeptides to modulate synaptic activity. Synaptic vesicles and dense core vesicles can have differential activation properties for release, resulting in context-dependent co-release combinations. For example, insect motor neurons are glutamatergic and some contain dense core vesicles with proctolin. At low-frequency activation, only glutamate is released, yielding fast excitation of the muscle. At high-frequency activation, however, dense core vesicles release proctolin, inducing prolonged contractions. Thus, neuropeptide release can be fine-tuned to modulate synaptic activity in certain contexts. Some regions of the nervous system are specialized to release distinctive sets of peptides. For example, the hypothalamus and the pituitary gland release peptides (e.g. TRH, GnRH, CRH, SST) that act as hormones. In one subpopulation of the arcuate nucleus of the hypothalamus, three anorectic peptides are co-expressed: α-melanocyte-stimulating hormone (α-MSH), galanin-like peptide, and cocaine-and-amphetamine-regulated transcript (CART), and in another subpopulation two orexigenic peptides are co-expressed, neuropeptide Y and agouti-related peptide (AGRP). These peptides are all released in different combinations to signal hunger and satiation cues. The following is a list of neuroactive peptides co-released with other neurotransmitters, grouped by transmitter. Norepinephrine (noradrenaline).
In neurons of the A2 cell group (in the nucleus of the solitary tract), norepinephrine co-exists with: Galanin Enkephalin Neuropeptide Y GABA Somatostatin (in the hippocampus) Cholecystokinin Neuropeptide Y (in the arcuate nucleus) Acetylcholine VIP Substance P Dopamine Cholecystokinin Neurotensin Glucagon-like peptide-1 (in the nucleus accumbens) Epinephrine (adrenaline) Neuropeptide Y Neurotensin Serotonin (5-HT) Substance P TRH Enkephalin Some neurons make several different peptides. For instance, vasopressin co-exists with dynorphin and galanin in magnocellular neurons of the supraoptic nucleus and paraventricular nucleus, and with CRF (in parvocellular neurons of the paraventricular nucleus). Oxytocin in the supraoptic nucleus co-exists with enkephalin, dynorphin, cocaine-and amphetamine regulated transcript (CART) and cholecystokinin. Evolution of Neuropeptide Signaling Peptides are ancient signaling systems that are found in almost all animals on Earth. Genome sequencing reveals evidence of neuropeptide genes in Cnidaria, Ctenophora, and Placozoa, some of the oldest living animals with nervous systems or neural-like tissues. Recent studies also show genomic evidence of neuropeptide processing machinery in metazoans and choanoflagellates, suggesting that neuropeptide signaling may predate the development of nervous tissues. Additionally, ctenophore and placozoan neural signaling is entirely peptidergic and lacks the major amine neurotransmitters such as acetylcholine, dopamine, and serotonin. This also suggests that neuropeptide signaling developed before amine neurotransmitters. Research history In the early 1900s, chemical messengers were crudely extracted from whole animal brains and tissues and studied for their physiological effects. In 1931, von Euler and Gaddum used a similar method to try to isolate acetylcholine but instead discovered a peptide substance that induced physiological changes, including muscle contractions and depressed blood pressure. These effects were not abolished using atropine, ruling out the substance as acetylcholine. In insects, proctolin was the first neuropeptide to be isolated and sequenced. In 1975, Alvin Starratt and Brian Brown extracted the peptide from hindgut muscles of the cockroach and found that its application enhanced muscle contractions. While Starratt and Brown initially thought of proctolin as an excitatory neurotransmitter, proctolin was later confirmed as a neuromodulatory peptide. David de Wied first used the term "neuropeptide" in the 1970s to delineate peptides derived from the nervous system. References External links Neuropeptides Journal Neuropeptides reference website (a comprehensive neuropeptide database) Neuropeptides eBook series Neuropeptide chapter in the C. elegans Wormbook excellent, and very accessible, discussion of neuropeptide biology in C. elegans Molecular biology
Neuropeptide
[ "Chemistry", "Biology" ]
2,244
[ "Biochemistry", "Molecular biology" ]
984,752
https://en.wikipedia.org/wiki/Lyapunov%20equation
The Lyapunov equation, named after the Russian mathematician Aleksandr Lyapunov, is a matrix equation used in the stability analysis of linear dynamical systems. In particular, the discrete-time Lyapunov equation (also known as the Stein equation) for $X$ is $A X A^{H} - X + Q = 0$, where $Q$ is a Hermitian matrix and $A^{H}$ is the conjugate transpose of $A$, while the continuous-time Lyapunov equation is $A X + X A^{H} + Q = 0$. Application to stability In the following theorems $A, P, Q \in \mathbb{R}^{n \times n}$, and $P$ and $Q$ are symmetric. The notation $P > 0$ means that the matrix $P$ is positive definite. Theorem (continuous time version). Given any $Q > 0$, there exists a unique $P > 0$ satisfying $A^{T} P + P A + Q = 0$ if and only if the linear system $\dot{x} = A x$ is globally asymptotically stable. The quadratic function $V(x) = x^{T} P x$ is a Lyapunov function that can be used to verify stability. Theorem (discrete time version). Given any $Q > 0$, there exists a unique $P > 0$ satisfying $A^{T} P A - P + Q = 0$ if and only if the linear system $x_{t+1} = A x_{t}$ is globally asymptotically stable. As before, $x^{T} P x$ is a Lyapunov function. Computational aspects of solution The Lyapunov equation is linear; therefore, if $X$ contains $n^{2}$ entries, the equation can be solved in $O(n^{6})$ time using standard matrix factorization methods. However, specialized algorithms are available which can yield solutions much quicker owing to the specific structure of the Lyapunov equation. For the discrete case, the Schur method of Kitagawa is often used. For the continuous Lyapunov equation the Bartels–Stewart algorithm can be used. Analytic solution Defining the vectorization operator $\operatorname{vec}(A)$ as stacking the columns of a matrix $A$, and $A \otimes B$ as the Kronecker product of $A$ and $B$, the continuous time and discrete time Lyapunov equations can be expressed as solutions of a matrix equation. Furthermore, if the matrix $A$ is "stable", the solution can also be expressed as an integral (continuous time case) or as an infinite sum (discrete time case). Discrete time Using the result that $\operatorname{vec}(ABC) = (C^{T} \otimes A)\operatorname{vec}(B)$, one has $(I_{n^{2}} - \bar{A} \otimes A)\operatorname{vec}(X) = \operatorname{vec}(Q)$, where $I_{n^{2}}$ is a conformable identity matrix and $\bar{A}$ is the element-wise complex conjugate of $A$. One may then solve for $\operatorname{vec}(X)$ by inverting or solving the linear equations. To get $X$, one must just reshape $\operatorname{vec}(X)$ appropriately. Moreover, if $A$ is stable (in the sense of Schur stability, i.e., having eigenvalues with magnitude less than 1), the solution $X$ can also be written as $X = \sum_{k=0}^{\infty} A^{k} Q (A^{H})^{k}$. For comparison, consider the one-dimensional case, where this just says that the solution of $(1 - a^{2}) x = q$ is $x = \frac{q}{1 - a^{2}} = \sum_{k=0}^{\infty} q a^{2k}$. Continuous time Using again the Kronecker product notation and the vectorization operator, one has the matrix equation $(I_{n} \otimes A + \bar{A} \otimes I_{n})\operatorname{vec}(X) = -\operatorname{vec}(Q)$, where $\bar{A}$ denotes the matrix obtained by complex conjugating the entries of $A$. Similar to the discrete-time case, if $A$ is stable (in the sense of Hurwitz stability, i.e., having eigenvalues with negative real parts), the solution $X$ can also be written as $X = \int_{0}^{\infty} e^{A \tau} Q e^{A^{H} \tau} \, d\tau$, which holds because $\frac{d}{d\tau}\left(e^{A \tau} Q e^{A^{H} \tau}\right) = A\, e^{A \tau} Q e^{A^{H} \tau} + e^{A \tau} Q e^{A^{H} \tau} A^{H}$, so integrating both sides from $0$ to $\infty$ and using $e^{A \tau} \to 0$ gives $-Q = A X + X A^{H}$. For comparison, consider the one-dimensional case, where this just says that the solution of $2 a x + q = 0$ is $x = \frac{-q}{2a} = \int_{0}^{\infty} q\, e^{2 a \tau} \, d\tau$.
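The vectorized forms above translate directly into code. The sketch below (plain NumPy, written for clarity rather than efficiency) solves the continuous-time equation via the Kronecker product and verifies the residual; for large problems a Bartels–Stewart implementation would be used instead.

```python
import numpy as np

def solve_continuous_lyapunov(A: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """Solve A X + X A^H + Q = 0 by vectorization:
    (I_n (x) A + conj(A) (x) I_n) vec(X) = -vec(Q).
    Costs O(n^6) and is purely illustrative; use a Bartels-Stewart
    style solver for anything large."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(I, A) + np.kron(A.conj(), I)
    # vec() stacks columns, which is Fortran ("F") order in NumPy.
    vecX = np.linalg.solve(K, -Q.flatten(order="F"))
    return vecX.reshape((n, n), order="F")

A = np.array([[-1.0, 2.0], [0.0, -3.0]])  # Hurwitz stable (eigenvalues -1, -3)
Q = np.eye(2)                              # any symmetric positive definite Q
X = solve_continuous_lyapunov(A, Q)
print(np.allclose(A @ X + X @ A.conj().T + Q, 0))  # True: residual vanishes
```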
Dividing through by on both sides, and then letting we find that: which is the continuous-time Lyapunov equation, as desired. See also Sylvester equation, which generalizes the Lyapunov equation Algebraic Riccati equation Kalman filter References Control theory
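As a concrete illustration of the solution methods above, here is a minimal NumPy/SciPy sketch (not part of the original article). The matrices A, Q, Ac and the step size delta are arbitrary illustrative choices; note that SciPy's continuous-time solver uses the sign convention A X + X A^H = Q, hence the sign flip below.

```python
import numpy as np
from scipy import linalg

# --- Discrete-time equation  A X A^H - X + Q = 0 -----------------------
# A Schur-stable A (eigenvalues inside the unit circle) and symmetric Q.
A = np.array([[0.5, 0.2],
              [0.0, 0.3]])
Q = np.array([[1.0, 0.0],
              [0.0, 2.0]])
n = A.shape[0]

# Kronecker form from the text: (I - kron(conj(A), A)) vec(X) = vec(Q).
# vec() stacks columns, so ravel/reshape use Fortran (column-major) order.
vec_X = np.linalg.solve(np.eye(n * n) - np.kron(A.conj(), A),
                        Q.ravel(order='F'))
X = vec_X.reshape((n, n), order='F')
assert np.allclose(A @ X @ A.conj().T - X + Q, 0)            # defining equation
assert np.allclose(X, linalg.solve_discrete_lyapunov(A, Q))  # SciPy cross-check

# Truncated infinite-sum form, valid because A is Schur stable.
X_sum = sum(np.linalg.matrix_power(A, k) @ Q @ np.linalg.matrix_power(A.T, k)
            for k in range(200))
assert np.allclose(X, X_sum)

# --- Continuous-time equation  Ac X + X Ac^H + Q = 0 -------------------
Ac = np.array([[-1.0, 0.5],
               [ 0.0, -2.0]])  # Hurwitz stable: eigenvalues in left half-plane
# SciPy's solver uses the convention  A X + X A^H = Q, so pass -Q.
Xc = linalg.solve_continuous_lyapunov(Ac, -Q)
assert np.allclose(Ac @ Xc + Xc @ Ac.conj().T + Q, 0)

# --- Discrete -> continuous limit from the last section ----------------
# With B = I + delta*Ac and the scaled right-hand side delta*Q, the
# discrete solution M of  B^T M B - M = -delta*Q  should approach the
# continuous solution as delta -> 0.
delta = 1e-4
B = np.eye(n) + delta * Ac
M = linalg.solve_discrete_lyapunov(B.T, delta * Q)
assert np.allclose(M, Xc, atol=1e-3)
```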
Lyapunov equation
[ "Mathematics" ]
795
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
984,871
https://en.wikipedia.org/wiki/ONElist
ONElist was a free mailing list service created by Mark Fletcher in August 1997. In November 1999 ONElist merged with eGroups, which was later purchased by Yahoo! in June 2000. External links Electronic mailing lists Discontinued Yahoo! services Products introduced in 1997
ONElist
[ "Technology" ]
55
[ "Computing stubs", "World Wide Web stubs" ]
984,882
https://en.wikipedia.org/wiki/Windows%20NT%203.1
Windows NT 3.1 is the first major release of the Windows NT operating system developed by Microsoft, released on July 27, 1993. It marked the company's entry into the corporate computing environment: it was designed to support large networks and to be portable, and was compiled for Intel x86, DEC Alpha and MIPS based workstations and servers. It was Microsoft's first 32-bit operating system, providing advantages over the constrictive 16-bit architecture of previous versions of Windows that relied on DOS, while retaining a desktop environment familiar to Windows 3.1 users. Windows NT began as a rewrite of the OS/2 operating system, which Microsoft had co-developed with IBM but which had failed to gain much traction against Unix, with Unix vendor Sun Microsystems dominating the market for powerful desktop workstations. For several reasons, including the market success of Windows 3.0 in 1990, Microsoft decided to advance Windows rather than OS/2 and relinquished its OS/2 development responsibilities. By extending the Windows brand and beginning NT at version 3.1, like Windows 3.1 which had established brand recognition and market share, Microsoft implied that consumers should expect a familiar user experience. The name Windows NT ("New Technology") advertised that this was a re-engineered version of Windows. First publicly demonstrated at Comdex 1991, NT 3.1 was released in 1993 in two editions: Workstation and Advanced Server. When Windows NT premiered, its sales were limited by high system requirements and a general lack of 32-bit applications to take advantage of the OS's data processing capabilities. It sold about 300,000 copies before it was succeeded by Windows NT 3.5 in 1994. On December 31, 2000, Microsoft declared Windows NT 3.1 obsolete and stopped providing support and updates for the system. Development history The origins of Windows NT date back to 1988, when Microsoft had a major foothold in the personal computer market due to the use of its MS-DOS as the operating system of IBM PC compatibles. Nathan Myhrvold, who had joined Microsoft after its acquisition of Dynamical Systems Research, identified two major threats to Microsoft's monopoly: RISC architectures, which proved to be more powerful than the equivalent Intel processors that MS-DOS ran on, and Unix, a family of cross-platform multitasking operating systems with support for multiprocessing and networking. While the widespread use of Unix was hindered by the need to adapt programs for each individual variant, Bill Gates believed that the combination of a Unix-like operating system with RISC processors could be a market threat, prompting the need for Microsoft to develop a "Unix killer" that could run on multiple architectures. Myhrvold wanted to develop a new system that would run on RISC workstations, Intel chips, and multiprocessing computers. Gates had also hired Dave Cutler from Digital Equipment Corporation to assist in developing the new operating system; Cutler left DEC after the cancellation of the PRISM architecture and its MICA operating system, and agreed to join Microsoft on the condition that he be able to bring a number of staff members from his team at DEC with him. Cutler arrived at Microsoft in October 1988, and began working on the development of the operating system in November. The operating system was first developed as a revised version of OS/2, an operating system Microsoft had jointly developed with IBM. While OS/2 was originally intended to succeed MS-DOS, it had yet to be commercially successful.
The OS was to be designed so it could be ported to different processor platforms and support multiprocessor systems, which few operating systems did at that time. To target the enterprise market, the OS was also to support networking, the POSIX standard, and a security platform compliant with the "Orange Book" standards, which would require the OS to be a multi-user system with a permission framework and the ability to audit security-related events. Both Microsoft and IBM wanted to market an operating system that appealed to corporate "enterprise software" customers. That meant greater security, reliability, processing power, and computer networking features. However, since Microsoft also wanted to capture market share from Unix on other computing platforms, they needed a system design that was more portable than that of OS/2. To this end, Microsoft began by developing and testing their new operating system for a non-x86 processor: an emulated version of the Intel i860. Alluding to the chip's codename, "N10", Microsoft codenamed their operating system NT OS/2. DEC preemptively sued Microsoft, alleging that they stole code from MICA for use in the new operating system. In an out-of-court settlement, Microsoft agreed to make NT OS/2 compatible with DEC's Alpha processor. The development team originally estimated that development would be complete within 18 months. By April 1989, the NT OS/2 kernel could run inside the i860 emulator. However, the development team later determined that the i860 was unsuitable for the project. By December they had begun porting NT OS/2 to the MIPS R3000 processor instead, and completed the task in three months. Senior Microsoft executive Paul Maritz was targeting a release date in 1992, but the development schedule was uncertain. The company was eager to silence naysayers who speculated that NT would not be on the market until 1994, and had planned to present the new OS at COMDEX in 1990. As Windows NT In May 1990, Microsoft released Windows 3.0, a new version of its MS-DOS-based Windows desktop environment. Windows 3.0 sold well, and the resulting shift in Microsoft's marketing strategy eroded their partnership with IBM, which wanted Microsoft to concentrate solely on developing OS/2 as its primary platform as opposed to building their future business around Windows. Users and developers were unsure of whether to adopt Windows or OS/2 due to these uncertainties (a situation magnified by the fact that the operating systems were incompatible with each other at the API level), while Microsoft's resources were also being drained by the simultaneous development of multiple operating systems. In August 1990, as a response to the popularity of Windows 3.0, the NT OS/2 team decided to re-work the operating system to use an extended 32-bit port of the Windows API known as Win32. Win32 maintained the familiar structure of the 16-bit APIs used by Windows, which would allow developers to easily adapt their software for the new platform while maintaining a level of compatibility with existing software for Windows. With the shift to a Windows-like architecture, the operating system's shell was also changed from OS/2's Presentation Manager to Windows' Program Manager. Due to these changes, NT was not presented at COMDEX 1990 as was originally planned. Neither the general public nor IBM knew about the transformation of NT OS/2 into Windows NT at the time.
Although the companies did agree to a revised partnership in which IBM and Microsoft would alternate developing major versions of OS/2 instead of collaborating on each version, IBM eventually learned of Microsoft's Windows NT plans in January 1991, and immediately ended the OS/2 partnership. IBM would solely develop OS/2 2.0 (as planned under the amended agreement) and all future versions, without any further involvement from Microsoft. In October 1991, Windows NT received its first public demonstration at COMDEX. In an effort to ensure software taking advantage of Windows NT was available upon its release (scheduled for late 1992), Microsoft also distributed a 32-bit software development kit to selected developers in attendance. The demonstration was positively received; PC Magazine called Windows NT "the modern reinvention of the operating system", but at the same time claimed that it was unlikely that the promised backward compatibility would be kept for the final release. In March 1992, Microsoft also released Win32s, which would allow Windows 3.1 to have partial compatibility with Windows NT programs for the purposes of developing software optimized for the platform. At Microsoft's Win32 Professional Developers Conference in June 1992, Windows NT was demonstrated running on x86 and MIPS processors, while a beta version of Windows NT and an updated development kit were also made available. Concurrently, Microsoft announced a new version of its SQL Server product for Windows NT; Unix vendors feared that the software could be a killer app that would affect the market share of Unix systems. Concerns were also raised over NT's memory usage; while most computers of the era shipped with 4 megabytes of RAM, 16 MB was recommended for NT. Due to the high cost of RAM at the time, critics thought that its high system requirements could affect the sales and adoption of Windows NT. Steps were taken to reduce its memory usage through methods such as paging. Microsoft began releasing public beta builds of NT in October 1992, and a month later at COMDEX, a presentation focusing on third-party software for Windows NT was held. The final pre-release version of NT was released in March 1993, alongside the unveiling of the server version, LAN Manager for Windows NT. Although its stability and performance had improved, there were still fears that the OS could be released in an unfinished state or delayed further into 1993. Release Windows NT 3.1 and Windows NT 3.1 Advanced Server (so numbered to associate them with Windows 3.1) were released on July 27, 1993. At first, only the x86 and MIPS versions shipped; the DEC Alpha version followed in September. Microsoft sold the workstation version for , and the server version for . Ostensibly, the server price was meant to be a promotional discount offered only during the first six months of sale, but Microsoft never raised the retail price to the listed one. 250 programmers wrote 5.6 million lines of code; the development cost . In the last year of development, the team fixed more than 30,000 bugs. During the product's lifecycle, Microsoft published three service packs: Service Pack 1 was released on October 8, 1993; Service Pack 2 followed on January 24, 1994; and Service Pack 3's release date was October 29, 1994.
The service packs were distributed on CD-ROM and floppy disk, and also through bulletin board systems, CompuServe, and the Internet. Microsoft terminated support for the operating system on December 31, 2000. Support for Windows NT 3.1 RTM (without a service pack) ended on January 8, 1994. Service Pack 1 support ended on April 24, 1994, and finally, Service Pack 2 support ended on January 29, 1995, only a year after that service pack became available. Windows NT 3.1 was localized into various languages. Besides English, it was available in Dutch, French, German, Japanese, Spanish and Swedish. The version for workstations, but not Windows NT 3.1 Advanced Server, was additionally available in Danish, Finnish, Italian, Norwegian and Portuguese. Operating system goals Cutler set three main goals for Windows NT. The first goal was portability: in contrast to previous operating systems, which were strongly tied to one architecture, Windows NT should be able to operate on multiple architectures. To meet this goal, most of the operating system, including its core, had to be written in the C programming language. During the planning phase it was clear that this would cause Windows NT to have higher memory consumption than all previous operating systems. Besides the graphics system and parts of the networking system, which were written in C++, only the parts of the operating system which required direct hardware access and performance-critical functions were written in assembly language. These parts were isolated so that they could easily be rewritten when porting the operating system to a new architecture. The second goal was reliability: the system should no longer crash due to a faulty application or faulty hardware. This would make the operating system attractive for critical applications. To meet this goal, the architecture of Windows NT was designed so that the operating system core was isolated and applications could not access it directly. The kernel was designed as a microkernel and components of the core were to run atop the kernel in a modular fashion; Cutler knew this principle from his work at Digital. Reliability also includes security, and the operating system should be able to resist external attacks. Mainframes already had a system where every user had their own account, which was assigned specific rights by the administrator; this way, users could be denied access to confidential documents. Virtual memory management was designed to thwart attacks by malware and prevent users from accessing foreign areas of memory. The third goal was called personality: the operating system should be able to run applications designed for various operating systems, such as Windows, MS-DOS and OS/2 applications. The Mach kernel followed a similar concept by moving the APIs to components which operated in user mode as applications; these could be changed, and new ones could be added. This principle was applied to Windows NT. While pursuing these goals, the performance of the operating system was optimized where possible by tuning critical sections of the code for fast execution. To improve networking performance, large parts of the networking system were moved to the operating system core. Windows NT was designed as a networking operating system. In this market segment, Novell had a lead with its product NetWare, mostly because of a lack of competition, and Microsoft had failed to develop a product which could challenge NetWare's lead.
Cutler hoped to gain additional customers with a reliable networking operating system. Bill Gates already dominated the market of desktop operating systems with MS-DOS and Windows and hoped to do the same in the networking market with Windows NT. He especially hoped to find a market in the growing number of servers, while he did not expect success in the desktop market until 1995. Accordingly, Windows NT was positioned as a high-end operating system by product manager David Thacher in an interview. It was not designed to replace Windows 3.1 completely, but rather to supplement Microsoft's product range with an operating system for critical applications. Expectations were that NT would account for 10% to 20% of all Windows sales and reach a market share of 10% in the high-end market, which amounted to one million copies. Features Architecture While Windows NT 3.1 uses the same graphical user interface as Windows 3.1, it was developed anew. The operating system is not DOS-based, but an independent 32-bit operating system; many concepts were taken from Cutler's previous operating system, VMS. The architecture of Windows NT takes some ideas from the client–server model, such as its modular structure and the communication between modules. System resources like memory, files or devices are viewed as objects by the operating system; they are all accessed in the same way, through handles, and can in this way be secured against unauthorized access. The operating system was designed for multiprocessor systems; it supports preemptive multitasking and can make use of threads to run multiple processes in parallel. Using symmetric multiprocessing, the processing load is evenly distributed among all available processors. The inter-process communication in Windows NT 3.1 is designed around networks; two newly introduced functions, Remote Procedure Call (RPC) and Network DDE, an extension of Dynamic Data Exchange (DDE), facilitate access and data exchange between processes running on different computers inside a network. The operating system is designed to combine certain elements of a monolithic kernel and a microkernel; nowadays this is most often referred to as a hybrid kernel. The hardware abstraction layer represents the lowermost layer and isolates the operating system from the underlying hardware to make it easy to port the operating system to other platforms. The kernel running atop it has only very basic functions, such as interrupt management and processor synchronization. All other functions of the operating system core are handled by modules which operate independently from one another and can be swapped without affecting the rest of the operating system. Positioned above the operating system core are the subsystems. There are two types of subsystems: the first are the integral subsystems, which perform important operating system functions. One such subsystem is the security subsystem, which handles the logon process and monitors the security of the system. The other type is the environment subsystems, which expose the operating system functions to applications via application programming interfaces. The base subsystem is the 32-bit subsystem which runs 32-bit applications written for Windows NT. Windows NT applications can only run on the platform they were compiled for, and must be recompiled for every other platform.
The 32-bit subsystem also contains all output functions, including the Graphics Device Interface (GDI), so all other subsystems have to call the 32-bit subsystem to be able to output text or graphics. Other subsystems contained in Windows NT 3.1 are the POSIX subsystem, which supports POSIX-compatible applications built for Windows NT, and, in the x86 version only, the OS/2 subsystem, which allows command-line based OS/2 1.x applications to run. The Virtual DOS Machine (VDM) is sometimes also viewed as a subsystem, but is, strictly speaking, a normal 32-bit Windows application. It manages applications originally built for DOS. Built on top is Windows on Windows (WoW), which allows applications built for 16-bit Windows operating systems like Windows 3.1 to run. On x86 computers, the virtual DOS machine uses the virtual 8086 mode to run DOS applications directly; on RISC computers, an emulator licensed from Insignia Solutions, which emulates an 80286 processor, is used. However, not all DOS and 16-bit Windows applications can be run on Windows NT 3.1 due to various limitations, one of them being the inability of applications to directly access the hardware. In addition, VxD files, which some applications require, cannot be used with Windows NT 3.1. While pure DOS applications are run in separate memory spaces, 16-bit Windows applications have to share one memory space. This is done for compatibility with applications which depend on this ability, like Schedule+ and Microsoft Mail, but it also means that 16-bit Windows applications run only under cooperative multitasking. A faulty 16-bit Windows application can in this way cause all other 16-bit Windows applications (but not Windows NT itself) to crash. System Windows NT 3.1 provides a boot manager called NTLDR which is loaded during the startup process of the operating system on x86-based computers. It allows a multiboot setup of multiple instances of Windows NT 3.1, as well as MS-DOS and OS/2 1.x. NTLDR is not used for the RISC versions because the RISC computers' firmware provides its own boot manager. Every user has to log on to the computer after Windows NT 3.1 is booted up, by pressing the key combination Ctrl+Alt+Del and entering a user name and password. All users have their own user account, and user-specific settings like the Program Manager groups are stored separately for every user. Users can be assigned specific rights, like the right to change the system time or the right to shut down the computer. To facilitate management of user accounts, it is also possible to group multiple user accounts and assign rights to groups of users. Windows NT 3.1 introduced the new NTFS file system. This new file system is more robust against hardware failures and allows assignment of read and write rights to users or groups on the file system level. NTFS supports long file names and has features, such as hard links, to accommodate POSIX applications. For compatibility reasons, Windows NT 3.1 also supports FAT16 as well as OS/2's file system HPFS, but does not support long file names on the FAT file system (VFAT); that support was added in Windows NT 3.5. Designed as a networking operating system, Windows NT 3.1 supports multiple network protocols. Besides IPX/SPX and NetBEUI, the TCP/IP protocol is supported, allowing access to the Internet. Similar to Windows for Workgroups, files and printers can be shared, and the access rights and configuration of these resources can be edited over the network.
When a network printer is installed, the required drivers are automatically transferred over the network, removing the need to manually install the drivers on every computer. The Remote Access Service (RAS) allows a client from outside the network to connect to the network using a modem, ISDN or X.25 and access its resources. While the workstation allows one RAS connection at a time, the server supports 64. Windows NT 3.1 supports the then-new Unicode standard, a character set which allows multiple languages to be displayed. This facilitates localization of the operating system. All strings, as well as file and folder names, are internally processed in Unicode, but the included programs, like the File Manager, are not Unicode aware, so folders containing Unicode characters cannot be accessed. For demonstration purposes, a Unicode typeface called Lucida Sans Unicode is shipped with Windows NT 3.1, even though it is not installed by default. The previous code pages are still supported for compatibility purposes. The Windows registry, introduced with Windows 3.1, is a central, hierarchical configuration database designed to allow configuration of computers over the network and to replace the commonly used text-based configuration files, like INI files, AUTOEXEC.BAT and CONFIG.SYS. Using the undocumented registry editor, the Windows registry can be viewed and edited by the user. The Advanced Server is designed to manage the workstation computers. It can function as a domain controller, on which all users and groups, as well as their rights, are stored. This way, a user can log on from any computer in the network, and users can be managed centrally on the server. Trust relationships can be established with other domains to exchange data across domains. Using the replication service, files like logon scripts can be synchronized across all computers on the network. The Advanced Server supports the AppleTalk protocol to allow connections to Macintosh computers. Hard drives can be combined into RAID arrays in Windows NT 3.1 Advanced Server; the supported configurations are RAID 0, RAID 1 and RAID 5. Included programs Windows NT 3.1, for the most part, comes with 32-bit versions of the components featured in Windows 3.1 and Windows for Workgroups. However, it also included applications specifically aimed at the needs of Windows NT, like the User Manager, the Performance Monitor, the Disk Administrator, the Event Viewer and the Backup application. The Advanced Server contained further server-specific administration tools. Because Windows NT 3.1 is not DOS-based, a new 32-bit command-line processor called CMD.EXE was included, which was compatible with MS-DOS 5.0. For compatibility reasons, Windows NT 3.1 shipped with a few 16-bit applications, like Microsoft Write or EDLIN. Windows NT 3.1, being an all-new operating system for which no previous drivers could be used, includes a wealth of drivers for various common components and peripherals. This includes common SCSI devices like hard drives, CD-ROM drives, tape drives and image scanners, as well as ISA devices like graphics cards, sound cards and network cards. The PCI bus, however, is expressly not supported. Windows NT 3.1 supports uninterruptible power supplies. Windows NT 3.1 could be installed either by using the CD-ROM and a provided boot disk, or by utilizing a set of twenty-two 3.5" floppies (twenty-three floppies for Advanced Server). Windows NT 3.1 could also be installed over the network.
A coupon was included that made it possible to order a set of twenty-seven 5.25" floppies (or twenty-eight floppies for Advanced Server). Compared to the floppies, the CD-ROM contained additional drivers and applications. System requirements Windows NT 3.1 supports multiple platforms: aside from the x86 architecture, it runs on computers with DEC Alpha or MIPS (R4000 and R4400) processors. Minimum system requirements on x86 systems include a 25 MHz 80386 processor, at least 12 megabytes of memory, 75 megabytes of hard drive space, and a VGA graphics card. RISC systems require 16 megabytes of memory, 92 megabytes of hard drive space, and a CD-ROM drive. The Advanced Server edition requires an 80386 processor with 16 megabytes of memory and 90 megabytes of hard drive space. On RISC systems, 110 megabytes of hard drive space is needed. Windows NT 3.1 supports dual processor systems, while the Advanced Server edition supports up to four processors. Due to an error in the processor detection routine, Windows NT 3.1 cannot be installed on Pentium II or newer processors. Microsoft never fixed the problem, but unofficial patches are available. Reception Windows NT 3.1 sold about 300,000 copies in its first year. The hardware requirements were deemed to be very high at that time; the recommended system requirements of a 486 processor with 16 megabytes of memory were well above the average computer's configuration, and the operating system turned out to be too slow to use on typical hardware. 32-bit applications which could have used the capabilities of Windows NT 3.1 were scarce, so users had to resort to the old 16-bit applications; however, these ran slower than on Windows 3.1. Estimates in November 1993 counted only 150 Windows NT applications. Common types of software, like office suites, were not available for Windows NT 3.1. During the development of the operating system, the API calls were changed, so 32-bit applications built on the 1992 pre-release version of Windows NT 3.1 could not be run on the final version. This affected software such as Microsoft Visual C++ 1.0 and Microsoft Fortran PowerStation. RISC systems with Windows NT 3.1 had an even bigger disadvantage: even though they were more powerful than x86 systems, almost no 32-bit applications or drivers were ported to these platforms. Because of the 80286 emulation, 16-bit applications ran much slower on RISC systems than on x86 systems, which could run them natively, and DOS and 16-bit applications which depended on 386 calls could not be run at all on RISC systems. However, not all reception was negative; the multitasking capabilities of the operating system were rated positively, especially compared to Windows 3.1. Given the size of the operating system, the installation turned out to be very easy, even though installing from floppies was a very time-consuming task. The Advanced Server, intended to be the successor to the unsuccessful LAN Manager product, was technically much superior to its predecessor, and only failed to gain success because it shared the same problems as its workstation counterpart, such as the low performance running 16-bit applications. The Advanced Server provided a financial advantage for large networks because its price was not dependent on the number of clients, unlike its competitor Novell NetWare. With Windows NT, Microsoft entered a market it could not previously address and which was mostly dominated by Unix, Novell NetWare and OS/2.
A test performed by InfoWorld magazine in November 1993, in which the networking capabilities of several operating systems were compared, showed that Windows NT 3.1 was seriously lacking in inter-client communication: it could only connect to its own server via NetBEUI; attempts to connect to Unix, NetWare and OS/2 all failed because no client software was available. For the Advanced Server, only Microsoft's own client, the Macintosh and, with limitations, OS/2 were able to connect to the server. Even though the operating system's actual success was only moderate, it had a huge lasting impact. Developers of Unix derivatives strove for the first time to standardize their operating systems, and Novell was so concerned about its market share that it bought a Unix vendor. Manufacturers of microprocessors hoped to use the portability of the new operating system to increase their own sales, and thus ports of Windows NT were announced for various platforms, like the Sun SPARC architecture and the Clipper architecture. It was recognized that Windows NT would dominate the desktop market as soon as the hardware became powerful enough to run the operating system at an acceptable speed. Eight years later, Microsoft would unify the consumer-oriented Windows line (which had remained MS-DOS based) with the NT line with the October 2001 release of Windows XP, the first consumer-oriented version of Windows to use the NT architecture. References External links Guidebook: Windows NT 3.1 Gallery – Gallery of UI screenshots of Windows NT 3.1 1993 software Products and services discontinued in 2000 3.1 IA-32 operating systems MIPS operating systems History of Microsoft History of software Products introduced in 1993 Microsoft Windows
Windows NT 3.1
[ "Technology" ]
5,996
[ "Computing platforms", "Microsoft Windows", "History of computing", "History of software" ]
984,884
https://en.wikipedia.org/wiki/Windows%20NT%203.5
Windows NT 3.5 is a major release of the Windows NT operating system developed by Microsoft and oriented towards businesses. It was released on September 21, 1994, as the successor to Windows NT 3.1. One of the primary goals during its development was to improve the operating system's performance. As a result, the project was codenamed "Daytona", after the Daytona International Speedway in Daytona Beach, Florida. Windows NT 3.5 was succeeded by Windows NT 3.51, released in 1995. Support and updates for Windows NT 3.5 were ended by Microsoft on December 31, 2001. Features Windows NT 3.5 comes in two editions: NT Workstation and NT Server. They respectively replace the NT and NT Advanced Server editions of Windows NT 3.1. The Workstation edition allows only 10 concurrent clients to access the file server and does not support Mac clients. Windows NT 3.5 includes integrated Winsock and TCP/IP support. (Its predecessor, Windows NT 3.1, only includes an incomplete implementation of TCP/IP based on the AT&T UNIX System V "STREAMS" API.) The TCP/IP and IPX/SPX stacks in Windows NT 3.5 were rewritten. NetBIOS over TCP/IP (NetBT) support was introduced as a compatibility layer for TCP/IP, as were the Microsoft DHCP and WINS clients and the DHCP and WINS servers. Windows NT 3.5 can share files via the File Transfer Protocol, and printers through the Line Printer Daemon protocol. It can act as a Gopher, HTTP, or WAIS server, and includes Remote Access Service for remote dial-up modem access to LAN services using either the SLIP or PPP protocols. The Windows NT 3.5 Resource Kit includes the first implementation of Microsoft DNS. Other new features in Windows NT 3.5 include support for the VFAT file system, Object Linking and Embedding (OLE) version 2.0 and support for input/output completion ports. Microsoft updated the graphical user interface to be consistent with that of Windows for Workgroups 3.11. NT 3.5 shows performance improvements over NT 3.1, and requires less memory. Limitations A lack of drivers for PCMCIA cards limited NT 3.5's suitability for notebook computers. To install Windows NT 3.5 on a computer that has a sixth-generation or later x86 processor, one has to modify files on the installation CD-ROM. Reception In July 1995, Windows NT 3.5 with Service Pack 3 was rated by the National Security Agency as complying with the Trusted Computer System Evaluation Criteria (TCSEC) C2 criteria. Source code In May 2020, the full source code for the second release candidate build (build 782.1) of Windows NT 3.5, along with source code for the original Xbox, leaked onto the Internet. References External links Guidebook: Windows NT 3.51 Gallery – A website dedicated to preserving and showcasing Graphical User Interfaces 1994 software Products and services discontinued in 2001 3.5 IA-32 operating systems MIPS operating systems Microsoft Windows
Windows NT 3.5
[ "Technology" ]
633
[ "Computing platforms", "Microsoft Windows" ]
984,885
https://en.wikipedia.org/wiki/Windows%20NT%203.51
Windows NT 3.51 is a major release of the Windows NT operating system developed by Microsoft and oriented towards businesses. It is the third version of Windows NT and was released on May 30, 1995, eight months following the release of Windows NT 3.5. The most significant enhancement offered in this release was client/server support for inter-operating with Windows 95, which was released almost three months after NT 3.51. Windows NT 4.0 became its successor a year later. Mainstream support for Windows NT 3.51 Workstation ended on December 31, 2000, and extended support ended on December 31, 2001, while Windows NT 3.51 Server mainstream support ended on September 30, 2000, followed by extended support on September 30, 2002. Both editions were succeeded by Windows NT 4.0 Workstation and Windows NT 4.0 Server, respectively. Overview The release of Windows NT 3.51 was dubbed "the PowerPC release" at Microsoft. The original intention was to release a PowerPC edition of NT 3.5, but according to Microsoft's David Thompson, "we basically sat around for 9 months fixing bugs while we waited for IBM to finish the Power PC hardware". Editions of NT 3.51 were also released for the x86, MIPS, and Alpha architectures. New features introduced in Windows NT 3.51 include PCMCIA support, NTFS file compression, a replaceable WinLogon (GINA), 3D support in OpenGL, persistent IP routes when using TCP/IP, automatic display of textual descriptions when the mouse pointer was placed on toolbar buttons ("tooltips") and support for Windows 95 common controls. Despite the significant difference in the kernel base, Windows NT 3.51 is readily able to run a large number of Win32 applications designed for Windows 95. More recent 32-bit applications will not work, as their developers prevented their applications from working with any Windows version earlier than Windows 98, and also because some applications do not work properly with the older Windows NT 3.51 interface. Despite this, Microsoft in their application releases muddied the issue, releasing 32-bit versions of Microsoft Office right up to Office 97 (the last version of Microsoft Office supported on NT 3.51), but relying upon 16-bit versions of Internet Explorer technology from versions 3.0 to 5.0. Web browsers based on and including Firefox were operable up to version 2.0.0.22, released in April 2009; they required a few manual file updates to work without compromising browsing security. Windows NT 3.51 is the last of the series to be compatible with the Intel 80386 processor. NewShell On May 26, 1995, Microsoft released a test version of a shell refresh, named the Shell Technology Preview, and often referred to informally as "NewShell". This was the first incarnation of the modern Windows GUI with the Taskbar and Start menu. It was designed to replace the Windows 3.x Program Manager/File Manager based shell with a Windows Explorer-based graphical user interface. The release provided capabilities quite similar to those of the Windows "Chicago" (codename for Windows 95) shell during its late beta phases; however, it was intended to be nothing more than a test release. There was a second public release of the Shell Technology Preview, called Shell Technology Preview Update, made available to MSDN and CompuServe users on August 8, 1995. Both releases held Windows Explorer builds of 3.51.1053.1.
The preview program provided early feedback for the Shell Update Release, the next major Windows NT version with the new interface built-in, which was released in July 1996 as Windows NT 4.0. Updates Five Service Packs were released for NT 3.51, introducing both bug fixes and new features. Service Pack 5, for example, fixed issues related to the Year 2000 problem. Hardware requirements Supported EIDE addressing schemes include logical block addressing (LBA), ONTrack Disk Manager, EZDrive, and extended cylinder-head-sector (ECHS). References External links HPC:Factor Windows NT 3.51 Patches & Updates Guide More Information Shell Update Release (file dates: 08/09/95) 1995 software Products and services discontinued in 2001 3.51 IA-32 operating systems MIPS operating systems PowerPC operating systems Microsoft Windows
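The LBA and ECHS schemes mentioned under hardware requirements are alternative ways of addressing the same disk sectors. As a minimal sketch (not taken from this article), the standard conversion from a cylinder/head/sector address to a logical block address can be written as follows; the 16-head, 63-sectors-per-track geometry is a common BIOS default chosen purely for illustration:

```python
def chs_to_lba(c: int, h: int, s: int,
               heads_per_cylinder: int = 16,
               sectors_per_track: int = 63) -> int:
    """Convert a cylinder/head/sector (CHS) address to a logical block address.

    Standard formula: LBA = (C * HPC + H) * SPT + (S - 1).
    Sectors are 1-based; cylinders and heads are 0-based.
    """
    return (c * heads_per_cylinder + h) * sectors_per_track + (s - 1)

print(chs_to_lba(0, 0, 1))   # 0: the first sector on the disk
print(chs_to_lba(1, 0, 1))   # 1008 with the assumed 16/63 geometry
```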
Windows NT 3.51
[ "Technology" ]
897
[ "Computing platforms", "Microsoft Windows" ]
984,944
https://en.wikipedia.org/wiki/Silver%20halide
A silver halide (or silver salt) is one of the chemical compounds that can form between the element silver (Ag) and one of the halogens. In particular, bromine (Br), chlorine (Cl), iodine (I) and fluorine (F) may each combine with silver to produce silver bromide (AgBr), silver chloride (AgCl), silver iodide (AgI), and four forms of silver fluoride, respectively. As a group, they are often referred to as the silver halides, and are often given the pseudo-chemical notation AgX. Although most silver halides involve silver atoms with oxidation states of +1 (Ag+), silver halides in which the silver atoms have oxidation states of +2 (Ag2+) are known, of which silver(II) fluoride is the only known stable one. Silver halides are light-sensitive chemicals, and are commonly used in photographic film and paper. Applications Light sensitivity Silver halides are used in photographic film and photographic paper, including graphic art film and paper, where silver halide crystals in gelatin are coated on to a film base, glass or paper substrate. The gelatin is a vital part of the emulsion, serving as a protective colloid with appropriate physical and chemical properties. The gelatin may also contain trace elements (such as sulfur) which increase the light sensitivity of the emulsion, although modern practice uses gelatin without such components. When a silver halide crystal is exposed to light, a sensitivity speck on the surface of the crystal is turned into a speck of metallic silver (these specks comprise the invisible or latent image). If the speck of silver contains approximately four or more atoms, it is rendered developable, meaning that it can undergo development, which turns the entire crystal into metallic silver. Areas of the emulsion receiving larger amounts of light (reflected from a subject being photographed, for example) undergo the greatest development and therefore result in the highest optical density. Silver bromide and silver chloride may be used separately or combined, depending on the sensitivity and tonal qualities desired in the product. Silver iodide is always combined with silver bromide or silver chloride, except in the case of some historical processes such as the collodion wet plate and daguerreotype, in which the iodide is sometimes used alone (generally regarded as necessary if a daguerreotype is to be developed by the Becquerel method, in which exposure to strong red light, which affects only the crystals bearing latent image specks, is substituted for exposure to mercury fumes). Silver fluoride is not used in photography. When an AgX crystal absorbs photons, electrons are promoted to a conduction band (a de-localized electron orbital with higher energy than a valence band), where they can be attracted by a sensitivity speck, a shallow electron trap that may be a crystalline defect or a cluster of silver sulfide, gold, other trace elements (dopants), or a combination thereof, and then combine with an interstitial silver ion to form a silver metal speck. Silver halides are also used to make corrective lenses darken when exposed to ultraviolet light (see photochromism). Chemistry Silver halides, except for silver fluoride, are very insoluble in water. Silver nitrate can be used to precipitate halides; this application is useful in quantitative analysis of halides. The three main silver halide compounds have distinctive colours that can be used to quickly identify halide ions in a solution.
The silver chloride compound forms a white precipitate, silver bromide a creamy-coloured precipitate, and silver iodide a yellow-coloured precipitate. Some compounds can considerably increase or decrease the solubility of AgX. Examples of compounds that increase the solubility include: cyanide, thiocyanate, thiosulfate, thiourea, amines, ammonia, sulfite, thioether, crown ether. Examples of compounds that reduce the solubility include many organic thiols and nitrogen compounds that do not possess a solubilizing group other than the mercapto group or the nitrogen site, such as mercaptooxazoles, mercaptotetrazoles, especially 1-phenyl-5-mercaptotetrazole, benzimidazoles, especially 2-mercaptobenzimidazole, benzotriazole, and these compounds further substituted by hydrophobic groups. Compounds such as thiocyanate and thiosulfate enhance solubility when they are present in a sufficiently large quantity, due to the formation of highly soluble complex ions, but they also significantly depress solubility when present in a very small quantity, due to the formation of sparingly soluble complex ions. Archival use Silver halide can be used to deposit fine details of metallic silver on surfaces, such as film. Because of the chemical stability of metallic silver, this film can be used for archival purposes. For example, the Arctic World Archive uses film developed with silver halides to store data of historical and cultural interest, such as a snapshot of the Open Source code in all active GitHub repositories. References Silver compounds Metal halides Photographic chemicals Optical devices
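As a rough numerical illustration of how insoluble these salts are (not part of the original article; the solubility products below are approximate 25 °C literature values, and exact figures vary by source), the molar solubility of a 1:1 salt follows directly from its solubility product:

```python
import math

# Approximate solubility products (Ksp) at 25 °C; illustrative values only.
ksp = {
    "AgCl": 1.8e-10,
    "AgBr": 5.0e-13,
    "AgI":  8.5e-17,
}

for salt, k in ksp.items():
    # For a 1:1 salt AgX <-> Ag+ + X-, Ksp = s * s, so s = sqrt(Ksp).
    s = math.sqrt(k)
    print(f"{salt}: molar solubility = {s:.1e} mol/L")
```

Running this gives solubilities of roughly 1e-5, 7e-7 and 9e-9 mol/L for AgCl, AgBr and AgI respectively, matching the qualitative trend behind the precipitation tests described above.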
Silver halide
[ "Chemistry", "Materials_science", "Engineering" ]
1,111
[ "Glass engineering and science", "Optical devices", "Inorganic compounds", "Salts", "Metal halides" ]
985,045
https://en.wikipedia.org/wiki/Spic%20and%20Span
Spic and Span is a brand of all-purpose household cleaner marketed by KIK Custom Products Inc. for home consumer use and by Procter & Gamble for professional (non-home-consumer) use. History On June 15, 1926, Whistle Bottling Company of Johnsonburg, Pennsylvania, registered "Spic and Span" trademark No. 214,076 — washing and cleaning compound in crystal form with incidental water-softening properties. The modern cleaner was invented by housewives Elizabeth "Bet" MacDonald and Naomi Stenglein in Saginaw, Michigan in 1933. Their formula included equal parts of ground-up glue, sodium carbonate, and trisodium phosphate; though trisodium phosphate is no longer part of the modern formula out of a concern for environmental damage from phosphates making their way into waterways. Stenglein observed that testing in her house made it spotless, or "spick and span". They took the k off "spick" and started selling the product to local markets. From 1933 to 1944, both families helped run their "Spic and Span Products Company". On January 29, 1945, Procter & Gamble, a major international manufacturer of household and personal products based in Cincinnati, Ohio, bought Spic and Span for $1.9 million. On August 30, 1949, Procter & Gamble registered the "Spic and Span" trademark (soluble cleaner, cleanser, and detergent). The product was advertised in many soap operas, serving as the main sponsor of Search for Tomorrow for two decades. The brand, along with Comet, was acquired by Prestige Brands in 2001. In 2018, Prestige Brands sold the brand to KIK Custom Products Inc. Procter & Gamble retained the rights to market the brand to the professional (non-home-consumer) market in the United States. Usage The powdered form must be mixed in water to use. A liquid version is also available. Although considered all-purpose, it is "not recommended for carpets, upholstery, aluminum, glass, laundry, or mixing with bleach or ammonia" as written on product label. Etymology The product was named from the older phrase "spick and span". The phrase "span-new" meant as new as a freshly cut wood chip, such as those once used to make spoons. In a metaphor dating from at least 1300, something span-new was neat and unstained. Spic was added in the 16th century, as a "spick" (a spike or nail) was another metaphor for something neat and trim. The British phrase may have evolved from the Dutch spiksplinter nieuw, "spike-splinter new". In 1665, Samuel Pepys used "spicke and span" in his famous diary. The "clean" sense appears to have arisen only recently. The term is completely unrelated to the modern epithet spic. References External links Cleaning products Prestige Brands brands Former Procter & Gamble brands Products introduced in 1933
Spic and Span
[ "Chemistry" ]
615
[ "Cleaning products", "Products of chemical industry" ]
985,101
https://en.wikipedia.org/wiki/List%20of%20defunct%20hard%20disk%20manufacturers
At least 218 companies have manufactured hard disk drives (HDDs) since 1956. Most of that industry has vanished through bankruptcy or mergers and acquisitions. None of the first several entrants (including IBM, who invented the HDD) continue in the industry today. Only three manufacturers have survived—Seagate, Toshiba and Western Digital (WD)—all of which grew at least in part through mergers and acquisitions. Manufacturers The following is a partial list of defunct hard disk manufacturers. There are currently manufacturers in this incomplete list. See also History of hard disk drives List of computer hardware manufacturers List of hard disk manufacturers List of solid-state drive manufacturers References General references Specific references Further reading Hard-disk manufacturers Defunct hard-disk manufacturers Hard disk
List of defunct hard disk manufacturers
[ "Technology" ]
159
[ "Computing-related lists" ]
985,124
https://en.wikipedia.org/wiki/Time%20sink
A time sink (also timesink), time drain or time-waster is an activity that consumes a significant amount of time, especially one which is seen as a wasteful way of spending it. Although it is unknown when the term was coined, it makes an analogy with heat sink. In video games In massively multiplayer online role-playing games (MMORPGs), time sinks are a method of increasing the time needed by players to do certain tasks, in the hope of causing them to subscribe for longer periods of time. Players may use the term disparagingly to describe a simplistic and time-consuming aspect of gameplay, possibly designed to keep players playing longer without significant benefit. Time sinks can also be used for other gameplay reasons, such as to help regenerate resources or monsters in the game world. Negative connotations Many players consider time sinks to be an inherently poor design decision, only included so that game companies can increase profits. For example, one Slashdot article describes time sinks as "gameplay traps intended to waste your time and keep you playing longer". In most games, boring and lengthy parts of gameplay are merely an annoyance, but when used in subscription-based MMORPGs, where players are paying recurring fees for access to the game, they become a much more inflammatory issue. Game designers must be prudent in balancing efforts to produce both involving gameplay and the length of content that players expect. Time sinks are often associated with hardcore games, though whether this is a positive or negative association depends on the context. Trade-offs Implementing time sinks in a video game is a delicate balancing act. Excessive use of time sinks may cause players to stop playing. However, if not enough time sinks are implemented, players may feel the game is too short or too easy, causing them to abandon the game much sooner out of boredom. A number of criteria can be used to evaluate the use of time sinks, such as frequency, length, and variety (both of the nature of the time sink and the actions taken to overcome it). What is considered a good balance depends in part on the type of game in question. Casual games are often expected to have less in the way of time sinks, and hardcore games to have more, though this is not a hard and fast rule. General term A time sink is an enjoyable but time-wasting activity. Some parents call video games a waste of time, while some introverts call parties a waste of time, making the term highly subjective; even sleeping could be considered a time sink. Some time sinks become popular and are therefore not as commonly referred to as time sinks. More examples of time sinks include watching a sports game, spending time at a bar, spending a day at the beach, day-long spa treatments, and camping in the woods. A time sink generally has a negative connotation, but it can be a more neutral term. MMORPGs are known for time-wasting activity. However, the genre of incremental games uses waiting as a core feature. Players of such games can take pleasure in repetitive and easy tasks. See also References External links The Jargon File - time sink Wiki: "Timesink" Massively multiplayer online role-playing games Neologisms Video game terminology
Time sink
[ "Technology" ]
666
[ "Computing terminology", "Video game terminology" ]
985,187
https://en.wikipedia.org/wiki/Driving%20simulator
Driving simulators are used for entertainment as well as for training in driver's education courses taught in educational institutions and private businesses. They are also used for research purposes in the area of human factors and medical research, to monitor driver behavior, performance, and attention, and in the car industry to design and evaluate new vehicles or new advanced driver assistance systems. Training Driving simulators are being increasingly used for training drivers. Versions exist for cars, trucks, buses, etc. Uses Novice driver training and testing Professional driver training and testing Training in critical driving conditions Testing the effects of impairment on driver performance Analysis of driver behaviour Analysis of driver responses Evaluating user performance in different conditions (handling of controls) Assessing fitness to drive for aging drivers Testing future in-vehicle technologies on drivers or passengers (Human-Machine Interface) Entertainment and fun Types Ambulance simulator: Used to train and assess ambulance drivers in basic and advanced vehicle control skills as well as how to respond to emergencies and interact with other emergency responders. Car simulator: Used to train and test novice drivers in all the skills required to pass a driver's license road test as well as hazard perception and crash risk mitigation. Modular-design simulator: Interchangeable vehicle cabins or cockpits can be configured for use as tractor/trailer trucks, dump trucks and other construction vehicles, airport-operated vehicles, emergency response and police pursuit vehicles, buses, subway trains, passenger vehicles, and heavy equipment such as cranes. Multi-station driving simulator: This type of simulator enables one instructor to train several drivers at the same time, thus saving time and reducing costs. These systems are equipped with instructor stations connected to control several driving simulators. Truck simulator: Used to train and assess novice and experienced truck drivers in skills ranging from basic control maneuvers, e.g. shifting and backing, to advanced skills, e.g. fuel efficiency, rollover prevention, defensive driving. Bus simulator: Used to train bus drivers in route familiarisation, safe driving techniques, and fuel efficiency techniques. It can be used for training drivers on a variety of bus models and on different types of gear transmissions. Physical simulator: Large-scale simulators employ Stewart platforms and xy tables to physically move the driver around in 6-axis space, simulating acceleration, braking and centripetal forces, similar to physical flight simulators. Entertainment In the 1980s, it became a trend for arcade racing games to use hydraulic motion simulator arcade cabinets. The trend was sparked by Sega's "taikan" games, with "taikan" meaning "body sensation" in Japanese. The "taikan" trend began when Yu Suzuki's team at Sega (later known as Sega AM2) developed Hang-On (1985), a racing video game where the player sits on and moves a motorbike replica to control the in-game actions. Suzuki's team at Sega followed it with hydraulic motion simulator cockpit cabinets for later racing games such as Out Run (1986). Sega have since continued to manufacture motion simulator cabinets for arcade racing games through to the 2010s. In 1991, Namco released the arcade game Mitsubishi Driving Simulator, co-developed with Mitsubishi.
It was a serious educational street driving simulator that used 3D polygon technology and a sit-down arcade cabinet to simulate realistic driving, including basics such as ensuring the car is in neutral or parking position, starting the engine, placing the car into gear, releasing the hand-brake, and then driving. The player can choose from three routes while following instructions, avoiding collisions with other vehicles or pedestrians, and waiting at traffic lights; the brakes are accurately simulated, with the car creeping forward after taking the foot off the brake until the hand-brake is applied. Leisure Line magazine considered it the "hit of the show" upon its debut at the 1991 JAMMA show. It was designed for use by Japanese driving schools, at a very high cost of AU$150,000 per unit. Advances in processing power have led to more realistic simulators known as sim racing games on home systems, beginning with Papyrus Design Group's groundbreaking IndyCar Racing (1993) and Grand Prix Legends (1998) for PC and Gran Turismo (1997) for home consoles. Occasionally, a racing game or driving simulator will also include an attachable steering wheel that can be used to play the game in place of a controller. The wheel, which is usually plastic, may also include pedals to add to the game's realism. These wheels are usually used only for arcade and computer games. In addition to the myriad commercial releases, there is a bustling community of amateur coders working on closed and open source free simulators. Some of the major features popular with fans of the genre are online racing, realism and diversity of cars and tracks. Research Driving simulators are used at research facilities for many purposes. Many vehicle manufacturers operate driving simulators, e.g. BMW, Ford, Renault. Many universities also operate simulators for research. Driving simulators allow researchers to study driver training issues and driver behavior under conditions in which it would be illegal and/or unethical to place drivers. For instance, studies of driver distraction would be dangerous and unethical to conduct on the road (because of the inability to obtain informed consent from other drivers). With the increasing use of various in-vehicle information systems (IVIS) such as satellite navigation systems, cell phones, DVD players and e-mail systems, simulators are playing an important role in assessing the safety and utility of such devices. Fidelity There exist a number of types of research driving simulators, with a wide range of capabilities. The most complex, like the National Advanced Driving Simulator, have a full-sized vehicle body, with six-axis movement and 360-degree visual displays. On the other end of the range are simple desktop simulators that are often implemented using a computer monitor for the visual display and a videogame-type steering wheel and pedal input devices. These low-cost simulators are used readily in the evaluation of basic and clinically oriented scientific questions. The issue is complicated by political and economic factors, as facilities with low-fidelity simulators claim their systems are "good enough" for the job, while the high-fidelity simulator groups insist that their (considerably more expensive) systems are necessary. Research into motion fidelity indicates that, while some motion is necessary in a research driving simulator, it does not need to have enough range to match real-world forces.
Recent research has also considered the use of real-time photo-realistic video content that reacts dynamically to driver behaviour in the environment. Validity There is a question of validity—whether results obtained in the simulator are applicable to real-world driving. One review of research studies found that driver behavior on a driving simulator approximates (relative validity) but does not exactly replicate (absolute validity) on-road driving behavior. Another study found absolute validity for the types and number of driver errors committed on a simulator and on the road. Yet another study found that drivers who reported impaired performance on a low-fidelity driving simulator were significantly more likely to be involved in an accident in which the driver was at least partially at fault within five years after the simulator session. Some research teams are using automated vehicles to recreate simulator studies on a test track, enabling a more direct comparison between the simulator study and the real world. As computers have grown faster and simulation has become more widespread in the automotive industry, commercial vehicle math models that have been validated by manufacturers are seeing use in simulators. See also Full motion racing simulator Virtual reality simulator Sim racing, collective term for auto racing games which aim to be realistic, but do not necessarily include motion simulation output Flight simulator Full flight simulator Simulator ride References Educational software Automotive software Simulation software
Driving simulator
[ "Technology" ]
1,570
[ "Driving simulators", "Real-time simulation" ]
985,295
https://en.wikipedia.org/wiki/Walther%20Ritz
Walther Heinrich Wilhelm Ritz (22 February 1878 – 7 July 1909) was a Swiss theoretical physicist. He is most famous for his work with Johannes Rydberg on the Rydberg–Ritz combination principle. Ritz is also known for the variational method named after him, the Ritz method. Life Walther Ritz's father, Raphael Ritz, was born in Valais and was a well-known painter. His mother, born Nördlinger, was the daughter of an engineer from Tübingen. Ritz was a particularly gifted student and attended the municipal lyceum in Sion. In 1897, he entered the polytechnic school in Zürich, where he studied engineering. He soon found that he could not live with the approximations and compromises associated with engineering, so he switched to the more mathematically exact physical sciences. In 1900, Ritz contracted tuberculosis, possibly compounded by pleurisy, from which he would later die. In 1901 he moved to Göttingen for health reasons. There he was influenced by Woldemar Voigt and David Hilbert. Ritz wrote a dissertation on the spectral lines of atoms and received his doctorate summa cum laude. The subject later led to the Ritz combination principle and, in 1913, to the atomic model of Ernest Rutherford and Niels Bohr. In the spring of 1903, he heard lectures by Hendrik Antoon Lorentz in Leiden on electrodynamic problems and his new electron theory. In June 1903 he was in Bonn at the Heinrich Kayser Institute, where he found in the spectrum of potassium a line that he had predicted in his dissertation. In November 1903, he was in Paris at the Ecole Normale Supérieure, where he worked on infrared photographic plates. In July 1904 his illness worsened and he moved back to Zürich. The disease prevented him from publishing further scientific work until 1906. In September 1907 he moved to Tübingen, his mother's place of origin, and in 1908 again to Göttingen, where he became a private lecturer at the university. There he published his work Recherches critiques sur l'Electrodynamique Générale, see below. As a student, friend or colleague, Ritz had contact with many contemporary scholars such as Hilbert, Andreas Heinrich Voigt, Hermann Minkowski, Lorentz, Aimé Cotton, Friedrich Paschen, Henri Poincaré and Albert Einstein; he and Einstein were fellow students in Zürich. Ritz was an opponent of Einstein's theory of relativity. Ritz died in Göttingen and was buried in the Nordheim cemetery in Zürich. The family grave was cleared on 15 November 1999; his tombstone is in section 17 with the grave number 84457. Works Criticism of Maxwell–Lorentz electromagnetic theory Not so well known is the fact that in 1908 Ritz produced a lengthy criticism of Maxwell–Lorentz electromagnetic theory, in which he contended that the theory's connection with the luminiferous ether (see Lorentz ether theory) made it "essentially inappropriate to express the comprehensive laws for the propagation of electrodynamic actions." Ritz pointed out seven problems with the Maxwell–Lorentz electromagnetic field equations: Electric and magnetic forces really express relations about space and time and should be replaced with non-instantaneous elementary actions. Advanced potentials don't exist (and their erroneous use led to the Rayleigh–Jeans ultraviolet catastrophe). Localization of energy in the ether is vague. It is impossible to reduce gravity to the same notions. The unacceptable inequality of action and reaction is brought about by the concept of absolute motion with respect to the ether.
Apparent relativistic mass increase is amenable to different interpretations. The use of absolute coordinates, if independent of all motions of matter, requires throwing away the time-honoured use of Galilean relativity and our notions of rigid ponderable bodies. Instead, he proposed that light is not propagated (in a medium) but is projected. This emission theory, however, is now considered refuted. Ritz's method In 1909 Ritz developed a direct method for finding approximate solutions of boundary value problems. It converts a differential equation, often insoluble in closed form, into the solution of a matrix equation. It was theoretical preparatory work for the finite element method (FEM). The method is also known as Ritz's variational principle and the Rayleigh–Ritz principle. Ritz's combination principle In 1908, Ritz empirically discovered the combination principle named after him. According to it, the sum or difference of the frequencies of two spectral lines of an element is often itself the frequency of another of its lines; in hydrogen, for example, the difference between the frequencies of the Lyman-beta and Lyman-alpha lines equals the frequency of the Balmer H-alpha line. Which of these calculated frequencies is actually observed was only explained later by selection rules, which follow from quantum mechanical calculations. The basis for this was the spectral line research (Balmer series) by Johann Jakob Balmer. Honours The lunar crater Ritz is named after him. References Jean-Claude Pont (ed.), Le Destin Douloureux de Walther Ritz, physicien théoricien de génie, Sion: Archives de l'Etat de Valais, 2012 (= Proceedings of the International Conference in Honor of Walther Ritz's 100th Anniversary). External links Abbreviated Biographical Sketch of Walter Ritz Critical Researches on General Electrodynamics, Walter Ritz, 1908, English translation Ritz, Einstein and the Emission Hypothesis 1878 births 1909 deaths 20th-century deaths from tuberculosis People from Valais Relativity critics Swiss physicists Tuberculosis deaths in Germany University of Göttingen alumni
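To make the Ritz method described above concrete, here is a minimal sketch in Python for the model problem −u″(x) = f(x) on [0, 1] with u(0) = u(1) = 0; the sine basis, the load f(x) = 1, and the midpoint quadrature are illustrative choices of mine, not details of Ritz's 1909 paper.

```python
import numpy as np

# Rayleigh–Ritz sketch: seek u(x) ≈ sum_k c_k sin(k*pi*x).  The weak form
# (integral of u'v' equals integral of f*v for every basis function v)
# turns the boundary value problem into the linear system A c = b.

f = lambda x: np.ones_like(x)            # illustrative load, f(x) = 1
n = 5                                    # number of basis functions
m = 2000                                 # quadrature points (midpoint rule)
dx = 1.0 / m
x = (np.arange(m) + 0.5) * dx

phi  = np.array([np.sin((k + 1) * np.pi * x) for k in range(n)])
dphi = np.array([(k + 1) * np.pi * np.cos((k + 1) * np.pi * x)
                 for k in range(n)])

A = dphi @ dphi.T * dx                   # "stiffness" matrix, A_ij = ∫ φi'φj'
b = phi @ f(x) * dx                      # load vector, b_k = ∫ f φk

c = np.linalg.solve(A, b)                # Ritz coefficients
u = c @ phi                              # approximate solution on the grid

# For f = 1 the exact solution is u(x) = x(1 - x)/2.
print(np.max(np.abs(u - x * (1 - x) / 2)))   # small truncation error
```

With this particular basis the matrix A happens to be diagonal, so the coefficients reduce to a truncated Fourier sine expansion; for a general basis, as in the finite element method that grew out of this idea, A is sparse but not diagonal.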
Walther Ritz
[ "Physics" ]
1,117
[ "Relativity critics", "Theory of relativity" ]
985,583
https://en.wikipedia.org/wiki/Sothic%20cycle
The Sothic cycle or Canicular period is a period of 1,461 Egyptian civil years of 365 days each or 1,460 Julian years averaging 365¼ days each. During a Sothic cycle, the 365-day year loses enough time that the start of its year once again coincides with the heliacal rising of the star Sirius (Ancient Egyptian Spdt or Sopdet, 'Triangle'; Greek Sothis) on 19 July in the Julian calendar. It is an important aspect of Egyptology, particularly with regard to reconstructions of the Egyptian calendar and its history. Astronomical records of this displacement may have been responsible for the later establishment of the more accurate Julian and Alexandrian calendars. Mechanics The ancient Egyptian civil year, its holidays, and religious records reflect its apparent establishment at a point when the return of the bright star Sirius to the night sky was considered to herald the annual flooding of the Nile. However, because the civil calendar was exactly 365 days long and did not incorporate leap years until 22 BCE, its months "wandered" backwards through the solar year at the rate of about one day in every four years. This almost exactly corresponded to its displacement against the Sothic year as well. (The Sothic year is about a minute longer than a Julian year.) The sidereal year of 365.25636 days is valid only for stars on the ecliptic (the apparent path of the Sun across the sky) and having no proper motion, whereas Sirius's displacement of about 40° below the ecliptic, its proper motion, and the wobbling of the celestial equator cause the period between its heliacal risings to be almost exactly 365.25 days long instead. This steady loss of one relative day every four years over the course of the 365-day calendar meant that the "wandering" day would return to its original place relative to the solar and Sothic year after precisely 1461 Egyptian civil years or 1460 Julian years. Discovery This calendar cycle was well known in antiquity. Censorinus described it in his book De Die Natali, in CE 238, and stated that the cycle had renewed 100 years earlier on the 12th of August. In the ninth century, Syncellus epitomized the Sothic Cycle in the "Old Egyptian Chronicle." Isaac Cullimore, an early Egyptologist and member of the Royal Society, published a discourse on it in 1833 in which he was the first to suggest that Censorinus had fudged the terminus date, and that it was more likely to fall in CE 136. He also computed the likely date of its invention as being around 1600 BCE. In 1904, seven decades after Cullimore, Eduard Meyer carefully combed known Egyptian inscriptions and written materials to find any mention of the calendar dates when Sirius rose at dawn. He found six of them, on which the dates of much of conventional Egyptian chronology are based. A heliacal rise of Sirius was recorded by Censorinus as having happened on the Egyptian New Year's Day between 139 CE and 142 CE. The record itself actually refers to 21 July 140 CE, but astronomical calculation definitely dates the heliacal rising to 20 July 139 CE, Julian. This correlates the Egyptian calendar with the Julian calendar. A Julian leap day occurs in 140 CE, and so the new year on 1 Thoth is 20 July in 139 CE but 19 July for 140–142 CE. Thus Meyer was able to compare the Egyptian civil calendar date on which Sirius was observed rising heliacally to the Julian calendar date on which Sirius ought to have risen, count the number of intercalary days needed, and determine how many years lay between the beginning of a cycle and the observation.
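The arithmetic behind Meyer's comparison is simple, and the following sketch makes it explicit; the function name and the simplification are mine, and real reconstructions must also correct for the latitude of the observing site, as discussed next.

```python
def years_into_cycle(drift_days: int) -> int:
    """Civil years elapsed since the start of a Sothic cycle, given how
    many days the recorded civil date of the rising has drifted from the
    cycle's starting point: the 365-day calendar slips one day against
    the Sothic year every four years."""
    return drift_days * 4

# Example matching the Twelfth Dynasty discussion below: a drift of
# 30 days implies roughly 120 years into the current cycle.
print(years_into_cycle(30))   # 120
```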
To calculate a date astronomically, one also needs to know the place of observation, since the latitude of the observation changes the day on which the heliacal rising of Sirius can be seen, and mislocating an observation can potentially throw off the resulting chronology by several decades. Official observations are known to have been made at Heliopolis (or Memphis, near Cairo), Thebes, and Elephantine (near Aswan), with the rising of Sirius observed at Cairo about 8 days after it is seen at Aswan. Meyer concluded that the Egyptian civil calendar was created in 4241 BCE. Recent scholarship, however, has discredited that claim. Most scholars either move the observation upon which he based this forward by one cycle of Sirius, to 19 July 2781 BCE, or reject the assumption that the document on which Meyer relied indicates a rise of Sirius at all. Chronological interpretation Three specific observations of the heliacal rise of Sirius are extremely important for Egyptian chronology. The first is an ivory tablet from the reign of Djer which supposedly indicates the beginning of a Sothic cycle, the rising of Sirius on the same day as the new year. If this does indicate the beginning of a Sothic cycle, it must date to about 17 July 2773 BCE. However, this date is too late for Djer's reign, so many scholars believe that it indicates a correlation between the rising of Sirius and the Egyptian lunar calendar, instead of the solar Egyptian civil calendar, which would render the tablet essentially devoid of chronological value. Gautschy et al. (2017) claimed that a newly discovered Sothis date from the Old Kingdom and a subsequent astronomic study confirm the Sothic cycle model. The second observation is clearly a reference to a heliacal rising, and is believed to date to the seventh year of Senusret III. This observation was almost certainly made at Itj-Tawy, the Twelfth Dynasty capital, which would date the Twelfth Dynasty from 1963 to 1786 BCE. The Ramses or Turin Papyrus Canon says 213 years (1991–1778 BCE); Parker reduces this to 206 years (1991–1785 BCE), based on 17 July 1872 BCE as the Sothic date (the 120th year of the 12th Dynasty, a drift of 30 leap days). Prior to Parker's investigation of lunar dates, the 12th Dynasty was placed at 213 years, 2007–1794 BCE, interpreting the date 21 July 1888 BCE as the 120th year, and then at 2003–1790 BCE, interpreting the date 20 July 1884 BCE as the 120th year. The third observation was in the reign of Amenhotep I, and, assuming it was made in Thebes, dates his reign between 1525 and 1504 BCE. If it was made in Memphis, Heliopolis, or some other Delta site instead, as a minority of scholars still argue, the entire chronology of the 18th Dynasty needs to be extended by some 20 years. Observational procedure and precession The Sothic cycle is a specific example of two cycles of differing length interacting to cycle together, here called a tertiary cycle. This is mathematically defined by the formula 1/A + 1/B = 1/T, in which the tertiary period T is half the harmonic mean of the two periods A and B. In the case of the Sothic cycle the two cycles are the Egyptian civil year and the Sothic year. The Sothic year is the length of time for the star Sirius to visually return to the same position in relation to the sun. Star years measured in this way vary due to axial precession, the movement of the Earth's axis in relation to the sun. The length of time for a star to make a yearly path can be marked when it rises to a defined altitude above a local horizon at the time of sunrise.
This altitude does not have to be the altitude of first possible visibility, nor the exact position observed. Throughout the year the star will rise to whatever altitude was chosen near the horizon approximately four minutes earlier each successive sunrise. Eventually the star will return to the same relative location at sunrise, regardless of the altitude chosen. This length of time can be called an observational year. Stars that reside close to the ecliptic or the ecliptic meridian will – on average – exhibit observational years close to the sidereal year of 365.2564 days. The ecliptic and the meridian cut the sky into four quadrants. The axis of the earth wobbles slowly, moving the observer and changing the observation of the event. If the axis swings the observer closer to the event, its observational year will be shortened. Likewise, the observational year can be lengthened when the axis swings away from the observer. This depends upon the quadrant of the sky in which the phenomenon is observed. The Sothic year is remarkable because its average duration happened to have been nearly exactly 365.25 days in the early period, before the unification of Egypt. The slow rate of change from this value is also of note. If observations and records could have been maintained during predynastic times, the Sothic rise would optimally return to the same calendar day after 1461 calendar years (365 × 365.25 ÷ 0.25 = 533,265 days, i.e. 1,461 civil years of 365 days or 1,460 Sothic years of 365.25 days). This value would drop to about 1456 calendar years by the Middle Kingdom. The value 1461 could also be maintained if the date of the Sothic rise were artificially maintained by moving the feast in celebration of this event one day every fourth year instead of rarely adjusting it according to observation. It has been noticed, and the Sothic cycle confirms, that Sirius does not drift relative to the equinoxes in the way other stars do under the phenomenon widely known as the precession of the equinoxes: Sirius remains about the same distance from the equinoxes – and so from the solstices – throughout these many centuries, despite precession. — J.Z. Buchwald (2003) For the same reason, the heliacal rising or zenith of Sirius does not slip through the calendar at the precession rate of about one day per 71.6 years as other stars do, but much more slowly. This remarkable stability within the solar year may be one reason that the Egyptians used it as a basis for their calendar. The coincidence of a heliacal rising of Sirius and the New Year reported by Censorinus occurred about 20 July, that is, a month after the summer solstice. Problems and criticisms Determining the date of a heliacal rise of Sirius has been shown to be difficult, especially considering the need to know the exact latitude of the observation. Another problem is that because the Egyptian calendar loses one day every four years, a heliacal rise will take place on the same day for four years in a row, and any observation of that rise can date to any of those four years, making the observation imprecise. A number of criticisms have been levelled against the reliability of dating by the Sothic cycle. Some are serious enough to be considered problematic. Firstly, none of the astronomical observations have dates that mention the specific pharaoh in whose reign they were observed, forcing Egyptologists to supply that information on the basis of a certain amount of informed speculation.
Secondly, there is no information regarding the nature of the civil calendar throughout the course of Egyptian history, forcing Egyptologists to assume that it existed unchanged for thousands of years; the Egyptians would only have needed to carry out one calendar reform in a few thousand years for these calculations to be worthless. Other criticisms are not considered as problematic; e.g., there is no extant mention of the Sothic cycle in ancient Egyptian writing, which may simply be a result either of it being so obvious to Egyptians that it didn't merit mention, or of relevant texts being destroyed over time or still awaiting discovery. Marc Van de Mieroop, in his discussion of chronology and dating, does not mention the Sothic cycle at all, and asserts that the bulk of historians nowadays would consider that it is not possible to put forward exact dates earlier than the 8th century BCE. Some have recently claimed that the Theran eruption marks the beginning of the Eighteenth Dynasty, due to Theran ash and pumice discovered in the ruins of Avaris, in layers that mark the end of the Hyksos era. Because the evidence of dendrochronologists indicates the eruption took place in 1626 BCE, this has been taken to indicate that dating by the Sothic cycle is off by 50–80 years at the outset of the 18th Dynasty. Claims that the Thera eruption is described on the Tempest Stele of Ahmose I have been disputed by writers such as Peter James. See also Chronology of the ancient Near East Notes References External links Egyptian calendar Chronology Egyptology Units of time
Sothic cycle
[ "Physics", "Mathematics" ]
2,477
[ "Chronology", "Physical quantities", "Time", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]
985,616
https://en.wikipedia.org/wiki/Ceric%20ammonium%20nitrate
Ceric ammonium nitrate (CAN) is the inorganic compound with the formula (NH4)2Ce(NO3)6. This orange-red, water-soluble cerium salt is a specialised oxidizing agent in organic synthesis and a standard oxidant in quantitative analysis. Preparation, properties, and structure The [Ce(NO3)6]2− anion is generated by dissolving cerium oxide in hot and concentrated nitric acid (HNO3). The salt consists of the hexanitratocerate(IV) anion [Ce(NO3)6]2− and a pair of ammonium cations NH4+. The ammonium ions are not involved in the oxidising reactions of this salt. In the anion each nitrate group chelates the cerium atom in a bidentate manner, so that the anion has Th (idealized Oh) molecular symmetry. The CeO12 core defines an icosahedron. [Ce(NO3)6]2− is a strong one-electron oxidizing agent. In terms of its redox potential (about +1.61 V vs. N.H.E.) it is an even stronger oxidizing agent than chlorine (+1.36 V). Few shelf-stable reagents are stronger oxidants. In the redox process Ce(IV) is converted to Ce(III), a one-electron change, signaled by the fading of the solution color from orange to a pale yellow (provided that the substrate and product are not strongly colored). Applications in organic chemistry In organic synthesis, CAN is useful as an oxidant for many functional groups (alcohols, phenols, and ethers) as well as C–H bonds, especially those that are benzylic. Alkenes undergo dinitroxylation, although the outcome is solvent-dependent. Quinones are produced from catechols and hydroquinones, and even nitroalkanes are oxidized. CAN provides an alternative to the Nef reaction; for example, for ketomacrolide synthesis, where complicating side reactions are usually encountered with other reagents. Oxidative halogenation can be promoted by CAN as an in situ oxidant for benzylic bromination and for the iodination of ketones and uracil derivatives. For the synthesis of heterocycles Catalytic amounts of aqueous CAN allow the efficient synthesis of quinoxaline derivatives. Quinoxalines are known for their applications as dyes, organic semiconductors, and DNA-cleaving agents. These derivatives are also components in antibiotics such as echinomycin and actinomycin. The CAN-catalyzed three-component reaction between anilines and alkyl vinyl ethers provides an efficient entry into 2-methyl-1,2,3,4-tetrahydroquinolines and the corresponding quinolines obtained by their aromatization. As a deprotection reagent CAN is traditionally used to release organic ligands from metal carbonyls. In the process, the metal is oxidised, CO is evolved, and the organic ligand is released for further manipulation. For example, in the Wulff–Dötz reaction an alkyne, carbon monoxide, and a chromium carbene are combined to form a chromium half-sandwich complex, and the phenol ligand can be isolated by mild CAN oxidation. CAN is used to cleave para-methoxybenzyl (PMB) and 3,4-dimethoxybenzyl ethers, which are protecting groups for alcohols. Two equivalents of CAN are required for each equivalent of para-methoxybenzyl ether. The alcohol is released, and the para-methoxybenzyl ether converts to para-methoxybenzaldehyde. In ionic form, the balanced equation is as follows: R-OCH2C6H4OCH3 + 2 Ce4+ + H2O → R-OH + 4-CH3OC6H4CHO + 2 Ce3+ + 2 H+ Other applications CAN is also a component of chrome etchant, a material that is used in the production of photomasks and liquid crystal displays. It is also an effective nitration reagent, especially for the nitration of aromatic ring systems. In acetonitrile, CAN reacts with anisole to give ortho-nitration products. References External links Oxidizing Agents: Cerium Ammonium Nitrate Ammonium compounds Cerium(IV) compounds Nitrates Coordination complexes Oxidizing agents
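As a rough quantitative illustration of the two-equivalent stoichiometry described above, here is a small helper; the function and the example amounts are mine, and the only external fact used is the approximate molar mass of CAN, about 548.2 g/mol.

```python
CAN_MOLAR_MASS = 548.2  # g/mol for (NH4)2Ce(NO3)6 (approximate)

def can_mass_grams(substrate_mmol: float, equivalents: float = 2.0) -> float:
    """Grams of CAN needed to deprotect a given amount of PMB ether,
    using the two equivalents of Ce(IV) per equivalent of substrate."""
    return substrate_mmol / 1000.0 * equivalents * CAN_MOLAR_MASS

print(round(can_mass_grams(1.0), 2))  # ~1.1 g of CAN per 1.0 mmol of ether
```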
Ceric ammonium nitrate
[ "Chemistry" ]
863
[ "Redox", "Coordination complexes", "Coordination chemistry", "Nitrates", "Oxidizing agents", "Salts", "Ammonium compounds" ]
985,631
https://en.wikipedia.org/wiki/Theory%20X%20and%20Theory%20Y
Theory X and Theory Y are theories of human work motivation and management. They were created by Douglas McGregor while he was working at the MIT Sloan School of Management in the 1950s, and developed further in the 1960s. McGregor's work was rooted in motivation theory alongside the works of Abraham Maslow, who created the hierarchy of needs. The two theories proposed by McGregor describe contrasting models of workforce motivation applied by managers in human resource management, organizational behavior, organizational communication and organizational development. Theory X explains the importance of heightened supervision, external rewards, and penalties, while Theory Y highlights the motivating role of job satisfaction and encourages workers to approach tasks without direct supervision. Management use of Theory X and Theory Y can affect employee motivation and productivity in different ways, and managers may choose to implement strategies from both theories into their practices. McGregor and Maslow McGregor's Theory X and Theory Y and Maslow's hierarchy of needs are both rooted in motivation theory. Maslow's hierarchy of needs consists of physiological needs (lowest level), safety needs, love needs, esteem needs, and self-actualization (highest level). According to Maslow, a human is motivated by the level they have not yet reached, and self-actualization cannot be met until each of the lower levels has been fulfilled. Assumptions of Theory Y, in relation to Maslow's hierarchy, put an emphasis on employee higher-level needs, such as esteem needs and self-actualization. McGregor also believed that self-actualization was the highest level of reward for employees. He theorized that the motivation employees use to reach self-actualization allows them to reach their full potential. This led companies to focus on how their employees were motivated, managed, and led, creating a Theory Y management style which focuses on the drive for individual self-fulfillment. McGregor's perspective places the responsibility for performance on managers as well as subordinates. Theory X Theory X is based on negative assumptions regarding the typical worker. This management style assumes that the typical worker has little ambition, avoids responsibility, and is individual-goal oriented. In general, Theory X style managers believe their employees are less intelligent, lazier, and work solely for a sustainable income. Management believes employees' work is based on their own self-interest. Managers who believe employees operate in this manner are more likely to use rewards or punishments as motivation. Due to these assumptions, Theory X concludes the typical workforce operates more efficiently under a hands-on approach to management. Theory X managers believe all actions should be traceable to the individual responsible. This allows the individual to receive either a direct reward or a reprimand, depending on the outcome's positive or negative nature. This managerial style is more effective when used in a workforce that is not intrinsically motivated to perform. According to McGregor, there are two opposing approaches to implementing Theory X: the hard approach and the soft approach. The hard approach depends on close supervision, intimidation, and immediate punishment. This approach can potentially yield a hostile, minimally cooperative workforce and resentment towards management. Managers are always looking for mistakes from employees, because they do not trust their work.
Theory X is a "we versus they" approach, meaning it is the management versus the employees. The soft approach is characterized by leniency and less strict rules in hopes of creating high workplace morale and cooperative employees. Implementing a system that is too soft could result in an entitled, low-output workforce. McGregor believes both ends of the spectrum are too extreme for efficient real-world application. Instead, McGregor feels that an approach located in the middle would be the most effective implementation of Theory X. Because managers and supervisors are in almost complete control of the work, this produces a more systematic and uniform product or work flow. Theory X can benefit a workplace that utilizes an assembly line or manual labor. Using this theory in these types of work conditions allows employees to specialize in particular work areas, which in turn allows the company to mass-produce a higher quantity and quality of work. Theory Y Theory Y is based on positive assumptions regarding the typical worker. Theory Y managers assume employees are internally motivated, enjoy their job, and work to better themselves without a direct reward in return. These managers view their employees as one of the most valuable assets to the company, driving the internal workings of the corporation. Employees additionally tend to take full responsibility for their work and do not need close supervision to create a quality product. It is important to note, however, that before an employee carries out their task, they must first obtain the manager's approval. This ensures work stays efficient, productive, and in line with company standards. Theory Y managers gravitate towards relating to the worker on a more personal level, as opposed to a more directive, teaching-based relationship. As a result, Theory Y followers may have a better relationship with their boss, creating a healthier atmosphere in the workplace. In comparison to Theory X, Theory Y incorporates a pseudo-democratic environment into the workforce. This allows the employee to design, construct, and publish their work in a timely manner in accordance with their workload and projects. Although Theory Y encompasses creativity and discussion, it does have limitations. While there is a more personal and individualistic feel, this leaves room for error in terms of consistency and uniformity. The workplace lacks unvarying rules and practices, which could potentially be detrimental to the quality standards of the product and the strict guidelines of a given company. Theory Z Humanistic psychologist Abraham Maslow, upon whose work McGregor drew for Theories X and Y, went on to propose his own model of workplace motivation, Theory Z. Unlike Theories X and Y, Theory Z recognizes a transcendent dimension to work and worker motivation. An optimal managerial style would help cultivate worker creativity, insight, meaning and moral excellence. Choosing a management style For McGregor, Theory X and Theory Y are not opposite ends of the same continuum, but rather two different continua in themselves. In order to achieve the most efficient production, a combination of both theories may be appropriate. This approach is derived from Fred Fiedler's research on various leadership styles, known as contingency theory. This theory states that managers evaluate the workplace and choose their leadership style based upon both internal and external conditions presented. Managers who choose the Theory X approach have an authoritarian style of management.
An organization with this style of management is made up of several levels of supervisors and managers who actively intervene and micromanage the employees. In contrast, managers who choose the Theory Y approach have a hands-off style of management. An organization with this style of management encourages participation and values individuals' thoughts and goals. However, because there is no optimal way for a manager to choose between adopting either Theory X or Theory Y, it is likely that a manager will need to adopt both approaches depending on the evolving circumstances and levels of internal and external locus of control throughout the workplace. Military command and control Theory X and Theory Y also have implications in military command and control (C2). Older, strictly hierarchical conceptions of C2, with narrow centralization of decision rights, highly constrained patterns of interaction, and limited information distribution, tend to arise from cultural and organizational assumptions compatible with Theory X. On the other hand, more modern, network-centric, and decentralized concepts of C2, which rely on individual initiative and self-synchronization, tend to arise more from a "Theory Y" philosophy. Mission Command, for example, is a command philosophy to which many modern military establishments aspire, and which involves individual judgment and action within the overall framework of the commander's intent. Its assumptions about the value of individual initiative make it more a Theory Y than a Theory X philosophy. See also Outline of management Scientific management Type A and Type B personality theory References External links A diagram representing Theory X and Theory Y, Alan Chapman, 2002. Another diagram representing Theory X and Theory Y Organizational behavior Motivational theories Human resource management
Theory X and Theory Y
[ "Biology" ]
1,616
[ "Behavior", "Organizational behavior", "Human behavior" ]
985,793
https://en.wikipedia.org/wiki/Vacuum%20extraction
Vacuum extraction (VE), also known as ventouse, is a method to assist the delivery of a baby using a vacuum device. It is used in the second stage of labor if it has not progressed adequately. It may be an alternative to a forceps delivery and caesarean section. It cannot be used when the baby is in the breech position or for premature births. The use of VE is generally safe, but it can occasionally have negative effects on either the mother or the child. The term ventouse comes from the French word for "suction cup". Medical uses There are several indications to use a vacuum extraction to aid delivery: Maternal exhaustion Prolonged second stage of labor Foetal distress in the second stage of labor, generally indicated by changes in the foetal heart-rate (usually measured on a CTG) Maternal illness where prolonged "bearing down" or pushing efforts would be risky (e.g. cardiac conditions, high blood pressure, aneurysm, glaucoma). If these conditions are known about before the birth, or are severe, then an elective caesarean section may be performed. Technique The woman is placed in the lithotomy position and assists throughout the process by pushing. A suction cup is placed onto the head of the baby and the suction draws the skin from the scalp into the cup. Correct placement of the cup directly over the flexion point, about 3 cm anterior to the occipital (posterior) fontanelle, is critical to the success of a vacuum extraction. Ventouse devices have handles to allow for traction. When the baby's head is delivered, the device is detached, allowing the birthing attendant and the mother to complete the delivery of the baby. For proper use of the ventouse, the maternal cervix has to be fully dilated, the head engaged in the birth canal, and the head position known. The operator of the vacuum extractor should be experienced in order to perform the procedure safely. The baby should not be preterm, and the procedure should not be attempted after foetal scalp blood sampling or a failed forceps delivery. If the ventouse attempt fails, it may be necessary to deliver the infant by forceps or caesarean section. History In 1849 the Edinburgh professor of obstetrics James Young Simpson, subsequently known for pioneering the use of chloroform in childbirth, designed the Air Tractor, which consisted of a metal syringe attached to a soft rubber cup. This was the earliest known vacuum extractor to assist childbirth, but it did not become popular. The Swedish professor Tage Malmstrom developed the ventouse, or Malmstrom extractor, in the 1950s. Originally made with a metal cup, the device has since been improved with new materials such as plastics and siliconised rubber, so that it is now used more than forceps. Vacuum deliveries as a percentage of vaginal births vary depending on location. In the USA they comprise about 10% to 15% of vaginal births, while in Italy 4.8% of vaginal births were delivered via vacuum in 2013. Comparisons to other forms of assisted delivery Positive aspects An episiotomy may not be required. The mother still takes an active role in the birth. No special anesthesia is required. There is less potential for maternal trauma compared to forceps and caesarean section. Negative aspects The baby will be left with a temporary lump on its head, known as a chignon. There is a possibility of cephalohematoma formation, or of subgaleal hemorrhage, which can be life-threatening. There is a higher risk of failure to deliver the baby than with forceps, and an increased likelihood of perineal trauma.
See also Odón device References Childbirth Medical equipment Obstetrical procedures
Vacuum extraction
[ "Biology" ]
759
[ "Medical equipment", "Medical technology" ]
985,897
https://en.wikipedia.org/wiki/Derived%20category
In mathematics, the derived category D(A) of an abelian category A is a construction of homological algebra introduced to refine and in a certain sense to simplify the theory of derived functors defined on A. The construction proceeds on the basis that the objects of D(A) should be chain complexes in A, with two such chain complexes considered isomorphic when there is a chain map that induces an isomorphism on the level of homology of the chain complexes. Derived functors can then be defined for chain complexes, refining the concept of hypercohomology. The definitions lead to a significant simplification of formulas otherwise described (not completely faithfully) by complicated spectral sequences. The development of the derived category, by Alexander Grothendieck and his student Jean-Louis Verdier shortly after 1960, now appears as one terminal point in the explosive development of homological algebra in the 1950s, a decade in which it had made remarkable strides. The basic theory of Verdier was written down in his dissertation, published finally in 1996 in Astérisque (a summary had earlier appeared in SGA 4½). The axiomatics required an innovation, the concept of triangulated category, and the construction is based on localization of a category, a generalization of localization of a ring. The original impulse to develop the "derived" formalism came from the need to find a suitable formulation of Grothendieck's coherent duality theory. Derived categories have since become indispensable also outside of algebraic geometry, for example in the formulation of the theory of D-modules and microlocal analysis. Recently derived categories have also become important in areas nearer to physics, such as D-branes and mirror symmetry. Unbounded derived categories were introduced by Spaltenstein in 1988. Motivations In coherent sheaf theory, pushing to the limit of what could be done with Serre duality without the assumption of a non-singular scheme, the need to take a whole complex of sheaves in place of a single dualizing sheaf became apparent. In fact the Cohen–Macaulay ring condition, a weakening of non-singularity, corresponds to the existence of a single dualizing sheaf; and this is far from the general case. From the top-down intellectual position, always assumed by Grothendieck, this signified a need to reformulate. With it came the idea that the 'real' tensor product and Hom functors would be those existing on the derived level; with respect to those, Tor and Ext become more like computational devices. Despite the level of abstraction, derived categories became accepted over the following decades, especially as a convenient setting for sheaf cohomology. Perhaps the biggest advance was the formulation of the Riemann–Hilbert correspondence in dimensions greater than 1 in derived terms, around 1980. The Sato school adopted the language of derived categories, and the subsequent history of D-modules was of a theory expressed in those terms. A parallel development was the category of spectra in homotopy theory. The homotopy category of spectra and the derived category of a ring are both examples of triangulated categories. Definition Let A be an abelian category. (Examples include the category of modules over a ring and the category of sheaves of abelian groups on a topological space.) The derived category D(A) is defined by a universal property with respect to the category Kom(A) of cochain complexes with terms in A. The objects of Kom(A) are of the form ⋯ → X^(i−1) → X^i → X^(i+1) → ⋯, where each X^i is an object of A, the maps d^i : X^i → X^(i+1) are morphisms in A, and each of the composites d^(i+1) ∘ d^i is zero.
The i-th cohomology group of the complex is H^i(X) = ker d^i / im d^(i−1). If X and Y are two objects in this category, then a morphism f : X → Y is defined to be a family of morphisms f^i : X^i → Y^i such that f^(i+1) ∘ d_X^i = d_Y^i ∘ f^i. Such a morphism induces morphisms on cohomology groups H^i(f) : H^i(X) → H^i(Y), and f is called a quasi-isomorphism if each of these morphisms is an isomorphism in A. The universal property of the derived category is that it is a localization of the category of complexes with respect to quasi-isomorphisms. Specifically, the derived category D(A) is a category, together with a functor Q : Kom(A) → D(A), having the following universal property: Suppose C is another category (not necessarily abelian) and F : Kom(A) → C is a functor such that, whenever f is a quasi-isomorphism in Kom(A), its image F(f) is an isomorphism in C; then F factors through Q. Any two categories having this universal property are equivalent. Relation to the homotopy category If f and g are two morphisms X → Y in Kom(A), then a chain homotopy, or simply homotopy, is a collection of morphisms h^i : X^i → Y^(i−1) such that f^i − g^i = d_Y^(i−1) ∘ h^i + h^(i+1) ∘ d_X^i for every i. It is straightforward to show that two homotopic morphisms induce identical morphisms on cohomology groups. We say that f : X → Y is a chain homotopy equivalence if there exists g : Y → X such that g ∘ f and f ∘ g are chain homotopic to the identity morphisms on X and Y, respectively. The homotopy category of cochain complexes K(A) is the category with the same objects as Kom(A) but whose morphisms are equivalence classes of morphisms of complexes with respect to the relation of chain homotopy. There is a natural functor Kom(A) → K(A) which is the identity on objects and which sends each morphism to its chain homotopy equivalence class. Since every chain homotopy equivalence is a quasi-isomorphism, Q factors through this functor. Consequently D(A) can be equally well viewed as a localization of the homotopy category. From the point of view of model categories, the derived category D(A) is the true 'homotopy category' of the category of complexes, whereas K(A) might be called the 'naive homotopy category'. Constructing the derived category There are several possible constructions of the derived category. When A is a small category, then there is a direct construction of the derived category by formally adjoining inverses of quasi-isomorphisms. This is an instance of the general construction of a category by generators and relations. When A is a large category, this construction does not work for set-theoretic reasons. This construction builds morphisms as equivalence classes of paths. If Kom(A) has a proper class of objects, all of which are isomorphic, then there is a proper class of paths between any two of these objects. The generators and relations construction therefore only guarantees that the morphisms between two objects form a proper class. However, the morphisms between two objects in a category are usually required to be sets, and so this construction fails to produce an actual category. Even when A is small, however, the construction by generators and relations generally results in a category whose structure is opaque, where morphisms are arbitrarily long paths subject to a mysterious equivalence relation. For this reason, it is conventional to construct the derived category more concretely even when set theory is not at issue. These other constructions go through the homotopy category. The collection of quasi-isomorphisms in K(A) forms a multiplicative system. This is a collection of conditions that allow complicated paths to be rewritten as simpler ones. The Gabriel–Zisman theorem implies that localization at a multiplicative system has a simple description in terms of roofs.
A morphism X → Y in D(A) may be described as a pair (s, f), where for some complex X′, s : X′ → X is a quasi-isomorphism and f : X′ → Y is a chain homotopy equivalence class of morphisms. Conceptually, this represents f ∘ s^(−1). Two roofs are equivalent if they have a common overroof. Replacing chains of morphisms with roofs also enables the resolution of the set-theoretic issues involved in derived categories of large categories. Fix a complex X and consider the category I_X whose objects are quasi-isomorphisms in K(A) with codomain X and whose morphisms are commutative diagrams. Equivalently, this is the category of objects over X whose structure maps are quasi-isomorphisms. Then the multiplicative system condition implies that the morphisms in D(A) from X to Y are given by the colimit colim_(X′ → X) Hom_K(A)(X′, Y), assuming that this colimit is in fact a set. While I_X is potentially a large category, in some cases it is controlled by a small category. This is the case, for example, if A is a Grothendieck abelian category (meaning that it satisfies AB5 and has a set of generators), with the essential point being that only objects of bounded cardinality are relevant. In these cases, the limit may be calculated over a small subcategory, and this ensures that the result is a set. Then D(A) may be defined to have these sets as its hom-sets. There is a different approach based on replacing morphisms in the derived category by morphisms in the homotopy category. A morphism in the derived category with codomain being a bounded-below complex of injective objects is the same as a morphism to this complex in the homotopy category; this follows from termwise injectivity. By replacing termwise injectivity by a stronger condition, one gets a similar property that applies even to unbounded complexes. A complex I is K-injective if, for every acyclic complex X, we have Hom_K(A)(X, I) = 0. A straightforward consequence of this is that, for every complex X, morphisms X → I in K(A) are the same as such morphisms in D(A). A theorem of Serpé, generalizing work of Grothendieck and of Spaltenstein, asserts that in a Grothendieck abelian category, every complex is quasi-isomorphic to a K-injective complex with injective terms, and moreover, this is functorial. In particular, we may define morphisms in the derived category by passing to K-injective resolutions and computing morphisms in the homotopy category. The functoriality of Serpé's construction ensures that composition of morphisms is well-defined. Like the construction using roofs, this construction also ensures suitable set-theoretic properties for the derived category, this time because these properties are already satisfied by the homotopy category. Derived Hom-sets As noted before, in the derived category the hom sets are expressed through roofs X ← X′ → Y, or valleys X → Y′ ← Y, in which one leg (X′ → X, respectively Y → Y′) is a quasi-isomorphism. To get a better picture of what elements look like, consider an exact sequence 0 → A → B → C → 0 in A. We can use this to construct a morphism C → A[1] by truncating the complex above, shifting it, and using the obvious morphisms above. In particular, we have the picture of a roof C ← [A → B] → A[1], where [A → B] is the complex with A in degree −1 and B in degree 0; the only non-trivial arrow to A[1] is the equality morphism on A, and the only non-trivial arrow to C is B → C, which is a quasi-isomorphism by exactness. This diagram of complexes defines a morphism in the derived category. One application of this observation is the construction of the Atiyah class. Remarks For certain purposes (see below) one uses bounded-below (X^n = 0 for n ≪ 0), bounded-above (X^n = 0 for n ≫ 0) or bounded (X^n = 0 for |n| ≫ 0) complexes instead of unbounded ones. The corresponding derived categories are usually denoted D+(A), D−(A) and Db(A), respectively.
If one adopts the classical point of view on categories, that there is a set of morphisms from one object to another (not just a class), then one has to give an additional argument to prove this. If, for example, the abelian category A is small, i.e. has only a set of objects, then this issue will be no problem. Also, if A is a Grothendieck abelian category, then the derived category D(A) is equivalent to a full subcategory of the homotopy category K(A), and hence has only a set of morphisms from one object to another. Grothendieck abelian categories include the category of modules over a ring, the category of sheaves of abelian groups on a topological space, and many other examples. Composition of morphisms, i.e. roofs, in the derived category is accomplished by finding a third roof on top of the two roofs to be composed. It may be checked that this is possible and gives a well-defined, associative composition. Since K(A) is a triangulated category, its localization D(A) is also triangulated. For an integer n and a complex X, define the complex X[n] to be X shifted down by n, so that X[n]^i = X^(i+n), with differential d_X[n] = (−1)^n d_X. By definition, a distinguished triangle in D(A) is a triangle that is isomorphic in D(A) to the triangle X → Y → Cone(f) → X[1] for some map of complexes f: X → Y. Here Cone(f) denotes the mapping cone of f. In particular, for a short exact sequence 0 → X → Y → Z → 0 in A, the triangle X → Y → Z → X[1] is distinguished in D(A). Verdier explained that the definition of the shift X[1] is forced by requiring X[1] to be the cone of the morphism X → 0. By viewing an object of A as a complex concentrated in degree zero, the derived category D(A) contains A as a full subcategory. Morphisms in the derived category include information about all Ext groups: for any objects X and Y in A and any integer j, Hom_D(A)(X, Y[j]) = Ext^j_A(X, Y). Projective and injective resolutions One can easily show that a homotopy equivalence is a quasi-isomorphism, so the second step in the above construction may be omitted. The definition is usually given in this way because it reveals the existence of a canonical functor K(A) → D(A). In concrete situations, it is very difficult or impossible to handle morphisms in the derived category directly. Therefore, one looks for a more manageable category which is equivalent to the derived category. Classically, there are two (dual) approaches to this: projective and injective resolutions. In both cases, the restriction of the above canonical functor to an appropriate subcategory will be an equivalence of categories. In the following we will describe the role of injective resolutions in the context of the derived category, which is the basis for defining right derived functors, which in turn have important applications in cohomology of sheaves on topological spaces or more advanced cohomology theories like étale cohomology or group cohomology. In order to apply this technique, one has to assume that the abelian category in question has enough injectives, which means that every object X of the category admits a monomorphism to an injective object I. (Neither the map nor the injective object has to be uniquely specified.) For example, every Grothendieck abelian category has enough injectives. Embedding X into some injective object I^0, embedding the cokernel of this map into some injective I^1, etc., one constructs an injective resolution of X, i.e. an exact (in general infinite) sequence 0 → X → I^0 → I^1 → ⋯, where the I^* are injective objects. This idea generalizes to give resolutions of bounded-below complexes X, i.e. X^n = 0 for sufficiently small n.
As remarked above, injective resolutions are not uniquely defined, but it is a fact that any two resolutions are homotopy equivalent to each other, i.e. isomorphic in the homotopy category. Moreover, morphisms of complexes extend uniquely to a morphism of two given injective resolutions. This is the point where the homotopy category comes into play again: mapping an object X of A to (any) injective resolution I^* of X extends to a functor from the bounded-below derived category to the bounded-below homotopy category of complexes whose terms are injective objects in A. It is not difficult to see that this functor is actually inverse to the restriction of the canonical localization functor mentioned in the beginning. In other words, morphisms Hom(X,Y) in the derived category may be computed by resolving both X and Y and computing the morphisms in the homotopy category, which is at least theoretically easier. In fact, it is enough to resolve Y: for any complex X and any bounded-below complex Y of injectives, Hom_D(A)(X, Y) = Hom_K(A)(X, Y). Dually, assuming that A has enough projectives, i.e. for every object X there is an epimorphism from a projective object P to X, one can use projective resolutions instead of injective ones. In 1988 Spaltenstein defined an unbounded derived category, which immediately proved useful in the study of singular spaces; see, for example, the book by Kashiwara and Schapira (Categories and Sheaves) on various applications of the unbounded derived category. Spaltenstein used so-called K-injective and K-projective resolutions. Keller (1994) and May (2006) describe the derived category of modules over DG-algebras. Keller also gives applications to Koszul duality, Lie algebra cohomology, and Hochschild homology. More generally, carefully adapting the definitions, it is possible to define the derived category of an exact category. The relation to derived functors The derived category is a natural framework to define and study derived functors. In the following, let F: A → B be a functor of abelian categories. There are two dual concepts: right derived functors come from left exact functors and are calculated via injective resolutions left derived functors come from right exact functors and are calculated via projective resolutions In the following we will describe right derived functors. So, assume that F is left exact. Typical examples are F: A → Ab given by X ↦ Hom(X, A) or X ↦ Hom(A, X) for some fixed object A, or the global sections functor on sheaves or the direct image functor. Their right derived functors are Extn(–,A), Extn(A,–), Hn(X, F) or Rnf∗ (F), respectively. The derived category allows us to encapsulate all derived functors RnF in one functor, namely the so-called total derived functor RF: D+(A) → D+(B). It is the following composition: D+(A) ≅ K+(Inj(A)) → K+(B) → D+(B), where the first equivalence of categories is described above. The classical derived functors are related to the total one via RnF(X) = Hn(RF(X)). One might say that the RnF forget the chain complex and keep only the cohomologies, whereas RF does keep track of the complexes. Derived categories are, in a sense, the "right" place to study these functors. For example, the Grothendieck spectral sequence of a composition G∘F of two functors, such that F maps injective objects in A to G-acyclics (i.e. RiG(F(I)) = 0 for all i > 0 and injective I), is an expression of the following identity of total derived functors: R(G∘F) ≅ RG∘RF. J.-L.
Verdier showed how derived functors associated with an abelian category A can be viewed as Kan extensions along embeddings of A into suitable derived categories [Mac Lane]. Derived equivalence It may happen that two abelian categories A and B are not equivalent, but their derived categories D(A) and D(B) are. Often this is an interesting relation between A and B. Such equivalences are related to the theory of t-structures in triangulated categories. Here are some examples. Let Coh(P1) be the abelian category of coherent sheaves on the projective line P1 over a field k. Let K2-Rep be the abelian category of representations of the Kronecker quiver with two vertices. They are very different abelian categories, but their (bounded) derived categories are equivalent. Let Q be any quiver and P be a quiver obtained from Q by reversing some arrows. In general, the categories of representations of Q and P are different, but Db(Q-Rep) is always equivalent to Db(P-Rep). Let X be an abelian variety, Y its dual abelian variety. Then Db(Coh(X)) is equivalent to Db(Coh(Y)) by the theory of Fourier–Mukai transforms. Varieties with equivalent derived categories of coherent sheaves are sometimes called Fourier–Mukai partners. See also Homotopy category of chain complexes Derived noncommutative algebraic geometry Coherent sheaf cohomology Coherent duality Derived algebraic geometry Notes References Homological algebra Categories in category theory
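As a small worked example of the identity Hom_D(A)(X, Y[j]) = Ext^j_A(X, Y) computed through an injective resolution, take A = Ab, X = Z/2 and Y = Z; the example and notation are mine, not drawn from the sources above.

```latex
% An injective resolution of Z in Ab is  0 -> Z -> Q -> Q/Z -> 0,  so
\[
\operatorname{Hom}_{D(\mathbf{Ab})}\!\left(\mathbb{Z}/2,\ \mathbb{Z}[1]\right)
   \;=\; \operatorname{Ext}^{1}_{\mathbf{Ab}}(\mathbb{Z}/2,\ \mathbb{Z}).
\]
% Applying Hom(Z/2, -) to the injective part Q -> Q/Z gives the complex
\[
0 \;\to\; \operatorname{Hom}(\mathbb{Z}/2,\ \mathbb{Q})
  \;\to\; \operatorname{Hom}(\mathbb{Z}/2,\ \mathbb{Q}/\mathbb{Z})
  \;\to\; 0
  \;=\; 0 \to 0 \to \mathbb{Z}/2 \to 0,
\]
% whose first cohomology is Z/2, so Ext^1(Z/2, Z) = Z/2: in the derived
% category there is exactly one nonzero morphism Z/2 -> Z[1], namely the
% roof coming from the short exact sequence 0 -> Z --2--> Z -> Z/2 -> 0.
```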
Derived category
[ "Mathematics" ]
4,282
[ "Mathematical structures", "Fields of abstract algebra", "Category theory", "Categories in category theory", "Homological algebra" ]
985,959
https://en.wikipedia.org/wiki/Lifelong%20learning
Lifelong learning is the "ongoing, voluntary, and self-motivated" pursuit of learning for either personal or professional reasons. Lifelong learning is important for an individual's competitiveness and employability, but it also enhances social inclusion, active citizenship, and personal development. Professions typically recognize the importance of developing practitioners into lifelong learners. Many licensed professions mandate that their members continue learning to maintain a license. Lifelong learning institutes are educational organisations specifically for lifelong learning purposes. Informal lifelong learning communities also exist around the world. History In some contexts, the term "lifelong learning" evolved from the term "life-long learners", created by Leslie Watkins and used by Clint Taylor, professor at CSULA and Superintendent of the Temple City Unified School District, in the district's mission statement in 1993. The term recognizes that learning is not confined to childhood or the classroom but takes place throughout life and in a range of situations. In other contexts, the term "lifelong learning" evolved organically. The first lifelong learning institute began at The New School for Social Research (now The New School) in 1962 as an experiment in "learning in retirement". Later, after similar groups formed across the United States, many chose the name "lifelong learning institute" to be inclusive of nonretired persons in the same age range. Traditional colleges and universities are beginning to recognize the value of lifelong learning outside of the credit and degree attainment model. Lifelong learners, including persons with academic or professional credentials, tend to find higher-paying occupations, leaving monetary, cultural, and entrepreneurial impressions on communities, according to educator Cassandra B. Whyte. Libraries in the United States In the United States, librarians have understood lifelong learning as an essential service of libraries since the early part of the 20th century. In 1924, William S. Learned wrote of the potential of the American public library as an agency for adult education in The American Public Library and the Diffusion of Knowledge. Two decades later, in 1942, the American Library Association Adult Education Board established a new responsibility to the adult reader. The Adult Education Act of 1966 linked literacy education and adult basic education programs. This occurred at the same time that the Library Services and Construction Act was being passed. Twenty-five years after the U.S. Adult Education Act was passed, the U.S. Office of Education published Partners for Lifelong Learning, Public Libraries and Adult Education. The Institute of Museum and Library Services (IMLS) was established in 1996 and incorporated responsibilities from the U.S. Office of Education's library programs, including those focused on lifelong learning. "Championing Lifelong Learning" through libraries and museums is the first goal listed in the organization's strategic plan for 2022–2026. Definition Lifelong learning has been defined as "all learning activity undertaken throughout life, with the aim of improving knowledge, skills and competences within a personal, civic, social and/or employment-related perspective". It is often considered learning that occurs after the formal education years of childhood and into adulthood. It is sought out naturally through life experiences as the learner seeks to gain knowledge for professional or personal reasons.
These natural experiences can come about on purpose or accidentally. Lifelong learning has been described as a process that includes people learning in different contexts. These environments include not only schools but also homes, workplaces, and locations where people pursue leisure activities. However, while the learning process can be applied to learners of all ages, there is a focus on adults who are returning to organized learning. Programs based on this framework address the differing needs of learners, such as the United Nations' Sustainable Development Goal 4 and the UNESCO Institute for Lifelong Learning, which caters to the needs of disadvantaged and marginalized learners. Lifelong learning is distinguished from the concept of continuing education in the sense that it has a broader scope. Unlike the latter, which is oriented towards adult education developed for the needs of schools and industries, this type of learning is concerned with the development of human potential in individuals generally. Pedagogy Lifelong learning focuses on holistic education and has two dimensions, namely lifelong learning and broad options for learning. These indicate learning that integrates traditional education proposals and modern learning opportunities. It also entails an emphasis on encouraging people to learn how to learn and to select content, process, and methodologies that pursue autodidacticism. Some authors highlight that lifelong learning is founded on a different conceptualization of knowledge and its acquisition. It is explained not only as the possession of discrete pieces of information or factual knowledge but also as a generalized scheme for making sense of new events, including the use of tactics to deal with them effectively. Reflective learning and critical thinking can help a learner to become more self-reliant through learning how to learn, thus making them better able to direct, manage, and control their own learning process. Sipe experimentally studied "open" teachers and found that they valued self-directed learning, collaboration, reflection, and challenge; risk taking in their learning was seen as an opportunity, not a threat. Dunlap and Grabinger say that for higher education students to be lifelong learners, they must develop a capacity for self-direction, metacognitive awareness, and a disposition toward learning. The Delors Report proposed an integrated vision of education based on two key paradigms: lifelong learning and the four pillars of learning. It argued that formal education tends to emphasize the acquisition of knowledge to the detriment of other types of learning essential to sustaining human development, stressing the need to think of learning over the lifespan, and to address how everyone can develop relevant skills, knowledge and attitudes for work, citizenship and personal fulfillment. The four pillars of learning are: Learning to know Learning to do Learning to be Learning to live together The four pillars of learning were envisaged against the backdrop of the notion of 'lifelong learning', itself an adaptation of the concept of 'lifelong education' as initially conceptualized in the 1972 Faure publication Learning to Be. Educational technology The emergence of internet technologies has great potential to support lifelong learning endeavors, allowing for informal day-to-day learning. 
Application In India and elsewhere, the "University of the Third Age" (U3A) is an almost spontaneous movement comprising autonomous learning groups accessing the expertise of their own members in the pursuit of knowledge and shared experience. In Sweden, the concept of study circles, an idea launched almost a century ago, still represents a large portion of the adult education provision. The concept has since spread and is, for instance, a common practice in Finland as well. Formal administrative units devoted to lifelong learning exist in a number of universities. For example, the 'Academy of Lifelong Learning' is an administrative unit at the University of Delaware. Another example is the Jagiellonian University Extension (Wszechnica Uniwersytetu Jagiellonskiego), which is one of the most comprehensive Polish centers for lifelong learning (open learning, organizational learning, community learning). In recent years, 'lifelong learning' has been adopted in the UK as an umbrella term for post-compulsory education that falls outside of the UK higher education system: further education, community education, work-based learning and similar voluntary, public sector and commercial settings. In Canada, the federal government's Lifelong Learning Plan allows Canadian residents to withdraw funds from their Registered Retirement Savings Plan to help pay for lifelong learning, but the funds can only be used for formal learning programs at designated educational institutions. Lifelong and lifewide learning have different priorities in different countries, some placing more emphasis on economic development and some on social development. For example, the policies of China, the Republic of Korea, Singapore and Malaysia promote lifelong learning from a human resource development perspective. The governments of these countries have done much to foster training and development whilst encouraging entrepreneurship. Aging In a 2012 New York Times article, Arthur Toga, a professor of neurology and director of the laboratory of neuroimaging at the University of California, Los Angeles, stated that "Exercising the brain may preserve it, forestalling mental decline." Some research has shown that people with higher cognitive reserves, attained through lifelong learning, were better able to avoid the cognitive decline that often accompanies age-related neurodegenerative diseases. Even when subjects had dementia, some studies showed that they were able to persist in a normal mental state for a longer period than subjects who were not involved in some type of lifelong learning. Studies so far have lacked large, randomized controlled trials. In "Education and Alzheimer's Disease: A Review of Recent International Epidemiological Studies", published in 1997 in the journal Aging and Mental Health, C. J. Gilleard finds fault with other studies linking education to cognitive decline. Among other factors, he suggests that variations in lifestyles could be responsible for an increase in vascular dementia, as blue-collar workers may be less inclined to work in industries that provide mentally challenging situations. See also Folk high school Folkbildning, an approach to community education in Scandinavia Part-time student Autodidacticism TVET (Technical and Vocational Education and Training) Vocational education Workers' Educational Association References Sources Grady, D. (2012, March 7). Exercising an aging brain. New York Times. 
Retrieved from https://www.nytimes.com/2012/03/08/business/retirementspecial/retirees-are-using-education-to-exercise-an-aging-brain.html . US Department of Health and Human Services. (2007). Growing older in America: the health and retirement study. Retrieved from http://hrsonline.isr.umich.edu/sitedocs/databook-2006/inc/pdf/HRS-Growing-Older-in-America.pdf Yilmaz, K. (2008). Constructivism: Its theoretical underpinnings, variations, and implications for classroom instruction. Educational Horizons, Spring. Further reading John Field, Lifelong Learning and the New Educational Order (Trentham Books, 2006) Charles D. Hayes, The Rapture of Maturity: A Legacy of Lifelong Learning (2004) Charles D. Hayes, Beyond the American Dream: Lifelong Learning and the Search for Meaning in a Postmodern World (1998) Pastore G., Un'altra chance. Il futuro progettato tra formazione e flessibilità, in Mario Aldo Toscano, Homo instabilis. Sociologia della precarietà, Grandevetro/Jaca Book, Milano 2007 William A. Draves and Julie Coates, Nine Shift: Work, Life, and Education in the 21st Century (2004) Alternative education Philosophy of education Educational psychology Personal development Educational stages Education reform
Lifelong learning
[ "Biology" ]
2,200
[ "Personal development", "Behavior", "Human behavior" ]
985,963
https://en.wikipedia.org/wiki/Lambda-CDM%20model
The Lambda-CDM, Lambda cold dark matter, or ΛCDM model is a mathematical model of the Big Bang theory with three major components: a cosmological constant, denoted by lambda (Λ), associated with dark energy; the postulated cold dark matter, denoted by CDM; and ordinary matter. It is the current standard model of Big Bang cosmology, as it is the simplest model that provides a reasonably good account of: the existence and structure of the cosmic microwave background; the large-scale structure in the distribution of galaxies; the observed abundances of hydrogen (including deuterium), helium, and lithium; the accelerating expansion of the universe observed in the light from distant galaxies and supernovae. The model assumes that general relativity is the correct theory of gravity on cosmological scales. It emerged in the late 1990s as a concordance cosmology, after a period of time when disparate observed properties of the universe appeared mutually inconsistent, and there was no consensus on the makeup of the energy density of the universe. The ΛCDM model has been successful in modeling a broad collection of astronomical observations over decades. Remaining issues have led to many alternative models that challenge the assumptions of the ΛCDM model. Overview The ΛCDM model is based on three postulates on the structure of spacetime: the cosmological principle, that the universe is the same everywhere and in all directions, and that it is expanding; a postulate by Hermann Weyl that the lines of spacetime (geodesics) intersect at only one point, where time along each line can be synchronized, so that the behavior resembles an expanding fluid; and general relativity, which relates the geometry of spacetime to the distribution of matter and energy. This combination greatly simplifies the equations of general relativity into a form called the Friedmann equations. These equations specify the evolution of the scale factor of the universe in terms of the pressure and density of a perfect fluid. The evolving density is composed of different kinds of energy and matter, each with its own role in affecting the scale factor. For example, a model might include baryons, photons, neutrinos, and dark matter. These component densities become parameters extracted when the model is constrained to match astrophysical observations. The most accurate observations which are sensitive to the component densities are consequences of statistical inhomogeneities called "perturbations" in the early universe. Since the Friedmann equations assume homogeneity, additional theory must be added before comparison to experiments. Inflation is a simple model producing perturbations by postulating an extremely rapid expansion early in the universe that separates quantum fluctuations before they can equilibrate. The perturbations are characterized by additional parameters, also determined by matching observations. Finally, the light which will become astronomical observations must pass through the universe. The latter part of that journey passes through ionized space, where electrons can scatter the light, altering the anisotropies. This effect is characterized by one additional parameter. The ΛCDM model includes an expansion of metric space that is well documented, both as the redshift of prominent spectral absorption or emission lines in the light from distant galaxies, and as the time dilation in the light decay of supernova luminosity curves. 
Both effects are attributed to a Doppler shift in electromagnetic radiation as it travels across expanding space. Although this expansion increases the distance between objects that are not under shared gravitational influence, it does not increase the size of the objects (e.g. galaxies) in space. Also, since it originates from ordinary general relativity, it, like general relativity, allows for distant galaxies to recede from each other at speeds greater than the speed of light; local expansion is less than the speed of light, but expansion summed across great distances can collectively exceed the speed of light. The letter Λ (lambda) represents the cosmological constant, which is associated with a vacuum energy or dark energy in empty space that is used to explain the contemporary accelerating expansion of space against the attractive effects of gravity. A cosmological constant has negative pressure, $p = -\rho c^2$, which contributes to the stress–energy tensor that, according to the general theory of relativity, causes accelerating expansion. The fraction of the total energy density of our (flat or almost flat) universe that is dark energy, $\Omega_\Lambda$, is estimated to be 0.669 ± 0.038 based on the 2018 Dark Energy Survey results using Type Ia supernovae, or more than 68.3% of the mass–energy density of the universe (2018 estimate) based on the 2018 release of Planck satellite data. Dark matter is postulated in order to account for gravitational effects observed in very large-scale structures (the "non-keplerian" rotation curves of galaxies; the gravitational lensing of light by galaxy clusters; and the enhanced clustering of galaxies) that cannot be accounted for by the quantity of observed matter. The ΛCDM model proposes specifically cold dark matter, hypothesized as: Non-baryonic: Consists of matter other than protons and neutrons (and electrons, by convention, although electrons are not baryons) Cold: Its velocity is far less than the speed of light at the epoch of radiation–matter equality (thus neutrinos are excluded, being non-baryonic but not cold) Dissipationless: Cannot cool by radiating photons Collisionless: Dark matter particles interact with each other and other particles only through gravity and possibly the weak force Dark matter constitutes about 26.5% of the mass–energy density of the universe. The remaining 4.9% comprises all ordinary matter observed as atoms, chemical elements, gas and plasma, the stuff of which visible planets, stars and galaxies are made. The great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10% of the ordinary matter contribution to the mass–energy density of the universe. The model includes a single originating event, the "Big Bang", which was not an explosion but the abrupt appearance of expanding spacetime containing radiation at temperatures of around 10^15 K. This was immediately (within 10^−29 seconds) followed by an exponential expansion of space by a scale multiplier of 10^27 or more, known as cosmic inflation. The early universe remained hot (above 10 000 K) for several hundred thousand years, a state that is detectable as a residual cosmic microwave background, or CMB, a very low-energy radiation emanating from all parts of the sky. 
The "Big Bang" scenario, with cosmic inflation and standard particle physics, is the only cosmological model consistent with the observed continuing expansion of space, the observed distribution of lighter elements in the universe (hydrogen, helium, and lithium), and the spatial texture of minute irregularities (anisotropies) in the CMB radiation. Cosmic inflation also addresses the "horizon problem" in the CMB; indeed, it seems likely that the universe is larger than the observable particle horizon. The model uses the Friedmann–Lemaître–Robertson–Walker metric, the Friedmann equations, and the cosmological equations of state to describe the observable universe from approximately 0.1 s to the present. Cosmic expansion history The expansion of the universe is parameterized by a dimensionless scale factor (with time counted from the birth of the universe), defined relative to the present time, so ; the usual convention in cosmology is that subscript 0 denotes present-day values, so denotes the age of the universe. The scale factor is related to the observed redshift of the light emitted at time by The expansion rate is described by the time-dependent Hubble parameter, , defined as where is the time-derivative of the scale factor. The first Friedmann equation gives the expansion rate in terms of the matter+radiation density the curvature and the cosmological constant where, as usual is the speed of light and is the gravitational constant. A critical density is the present-day density, which gives zero curvature , assuming the cosmological constant is zero, regardless of its actual value. Substituting these conditions to the Friedmann equation gives where is the reduced Hubble constant. If the cosmological constant were actually zero, the critical density would also mark the dividing line between eventual recollapse of the universe to a Big Crunch, or unlimited expansion. For the Lambda-CDM model with a positive cosmological constant (as observed), the universe is predicted to expand forever regardless of whether the total density is slightly above or below the critical density; though other outcomes are possible in extended models where the dark energy is not constant but actually time-dependent. The present-day density parameter for various species is defined as the dimensionless ratio where the subscript is one of for baryons, for cold dark matter, for radiation (photons plus relativistic neutrinos), and for dark energy. Since the densities of various species scale as different powers of , e.g. for matter etc., the Friedmann equation can be conveniently rewritten in terms of the various density parameters as where is the equation of state parameter of dark energy, and assuming negligible neutrino mass (significant neutrino mass requires a more complex equation). The various parameters add up to by construction. In the general case this is integrated by computer to give the expansion history and also observable distance–redshift relations for any chosen values of the cosmological parameters, which can then be compared with observations such as supernovae and baryon acoustic oscillations. In the minimal 6-parameter Lambda-CDM model, it is assumed that curvature is zero and , so this simplifies to Observations show that the radiation density is very small today, ; if this term is neglected the above has an analytic solution where this is fairly accurate for or million years. Solving for gives the present age of the universe in terms of the other parameters. 
It follows that the transition from decelerating to accelerating expansion (the second derivative $\ddot a$ crossing zero) occurred when $a = \left(\frac{\Omega_m}{2\,\Omega_\Lambda}\right)^{1/3}$, which evaluates to $a \approx 0.6$, or redshift $z \approx 0.66$, for the best-fit parameters estimated from the Planck spacecraft. Parameters Multiple variants of the ΛCDM model are used, with some differences in parameters. One such set is outlined in the table below. The Planck collaboration version of the ΛCDM model is based on six parameters: the baryon density parameter; the dark matter density parameter; the scalar spectral index; two parameters related to the curvature fluctuation amplitude; and the probability that photons from the early universe will be scattered once en route (called the reionization optical depth). Six is the smallest number of parameters needed to give an acceptable fit to the observations; other possible parameters are fixed at "natural" values, e.g. total density parameter = 1.00, dark energy equation of state = −1. The parameter values, and uncertainties, are estimated using computer searches to locate the region of parameter space providing an acceptable match to cosmological observations. From these six parameters, the other model values, such as the Hubble constant and the dark energy density, can be calculated. Historical development The discovery of the cosmic microwave background (CMB) in 1964 confirmed a key prediction of the Big Bang cosmology. From that point on, it was generally accepted that the universe started in a hot, dense state and has been expanding over time. The rate of expansion depends on the types of matter and energy present in the universe, and in particular, whether the total density is above or below the so-called critical density. During the 1970s, most attention focused on pure-baryonic models, but there were serious challenges explaining the formation of galaxies, given the small anisotropies in the CMB (upper limits at that time). In the early 1980s, it was realized that this could be resolved if cold dark matter dominated over the baryons, and the theory of cosmic inflation motivated models with critical density. During the 1980s, most research focused on cold dark matter with critical density in matter, around 95% CDM and 5% baryons: these showed success at forming galaxies and clusters of galaxies, but problems remained; notably, the model required a Hubble constant lower than preferred by observations, and observations around 1988–1990 showed more large-scale galaxy clustering than predicted. These difficulties sharpened with the discovery of CMB anisotropy by the Cosmic Background Explorer in 1992, and several modified CDM models, including ΛCDM and mixed cold and hot dark matter, came under active consideration through the mid-1990s. The ΛCDM model then became the leading model following the observations of accelerating expansion in 1998, and was quickly supported by other observations: in 2000, the BOOMERanG microwave background experiment measured the total (matter–energy) density to be close to 100% of critical, whereas in 2001 the 2dFGRS galaxy redshift survey measured the matter density to be near 25%; the large difference between these values supports a positive Λ or dark energy. Much more precise spacecraft measurements of the microwave background from WMAP in 2003–2010 and Planck in 2013–2015 have continued to support the model and pin down the parameter values, most of which are constrained to below 1 percent uncertainty. 
Successes Among all cosmological models, the ΛCDM model has been the most successful; it describes a wide range of astronomical observations with remarkable accuracy. The notable successes include: Accurate modeling of the high-precision CMB angular distribution measured by the Planck mission and the Atacama Cosmology Telescope. Accurate description of the linear E-mode polarization of the CMB radiation due to fluctuations on the surface of last scattering events. Prediction of the observed B-mode polarization of the CMB light due to primordial gravitational waves. Observations of H2O emission spectra from a galaxy 12.8 billion light years away that are consistent with molecules excited by a cosmic background radiation much warmer (16–20 K) than the CMB we observe now (about 3 K). Predictions of the primordial abundance of deuterium as a result of Big Bang nucleosynthesis: the observed abundance matches the one derived from the nucleosynthesis model with the value for baryon density derived from CMB measurements. In addition to explaining many pre-2000 observations, the model has made a number of successful predictions: notably the existence of the baryon acoustic oscillation feature, discovered in 2005 in the predicted location; and the statistics of weak gravitational lensing, first observed in 2000 by several teams. The polarization of the CMB, discovered in 2002 by DASI, has been successfully predicted by the model: in the 2015 Planck data release, there are seven observed peaks in the temperature (TT) power spectrum, six peaks in the temperature–polarization (TE) cross spectrum, and five peaks in the polarization (EE) spectrum. The six free parameters can be well constrained by the TT spectrum alone, and then the TE and EE spectra can be predicted theoretically to few-percent precision with no further adjustments allowed. Challenges Despite the widespread success of ΛCDM in matching observations of our universe, cosmologists believe that the model may be an approximation to a more fundamental model. Lack of detection Extensive searches for dark matter particles have so far shown no well-agreed detection, while dark energy may be almost impossible to detect in a laboratory, and its value is extremely small compared to theoretical predictions of the vacuum energy. Violations of the cosmological principle The ΛCDM model, like all models built on the Friedmann–Lemaître–Robertson–Walker metric, assumes that the universe looks the same in all directions (isotropy) and from every location (homogeneity) if you look at a large enough scale: "the universe looks the same whoever and wherever you are." This cosmological principle allows a metric, the Friedmann–Lemaître–Robertson–Walker metric, to be derived and developed into a theory to compare to experiments. Without the principle, a metric would need to be extracted from astronomical data, which may not be possible. These assumptions were carried over into the ΛCDM model. However, some findings have suggested violations of the cosmological principle. Violations of isotropy Evidence from galaxy clusters, quasars, and type Ia supernovae suggests that isotropy is violated on large scales. Data from the Planck Mission show hemispheric bias in the cosmic microwave background in two respects: one with respect to average temperature (i.e. temperature fluctuations), the second with respect to larger variations in the degree of perturbations (i.e. densities). 
The European Space Agency (the governing body of the Planck Mission) has concluded that these anisotropies in the CMB are, in fact, statistically significant and can no longer be ignored. Already in 1967, Dennis Sciama predicted that the cosmic microwave background has a significant dipole anisotropy. In recent years, the CMB dipole has been tested, and the results suggest that our motion with respect to distant radio galaxies and quasars differs from our motion with respect to the cosmic microwave background. The same conclusion has been reached in recent studies of the Hubble diagram of Type Ia supernovae and quasars. This contradicts the cosmological principle. The CMB dipole is hinted at through a number of other observations. First, even within the cosmic microwave background, there are curious directional alignments and an anomalous parity asymmetry that may have an origin in the CMB dipole. Separately, the CMB dipole direction has emerged as a preferred direction in studies of alignments in quasar polarizations, scaling relations in galaxy clusters, strong lensing time delay, Type Ia supernovae, and quasars and gamma-ray bursts as standard candles. The fact that all these independent observables, based on different physics, are tracking the CMB dipole direction suggests that the Universe is anisotropic in the direction of the CMB dipole. Nevertheless, some authors have stated that the universe around Earth is isotropic to high significance, based on studies of cosmic microwave background temperature maps. Violations of homogeneity Based on N-body simulations in ΛCDM, Yadav and his colleagues showed that the spatial distribution of galaxies is statistically homogeneous if averaged over scales of 260/h Mpc or more. However, many large-scale structures have been discovered, and some authors have reported some of the structures to be in conflict with the predicted scale of homogeneity for ΛCDM, including The Clowes–Campusano LQG, discovered in 1991, which has a length of 580 Mpc The Sloan Great Wall, discovered in 2003, which has a length of 423 Mpc U1.11, a large quasar group discovered in 2011, which has a length of 780 Mpc The Huge-LQG, discovered in 2012, which is three times longer than and twice as wide as is predicted possible according to ΛCDM The Hercules–Corona Borealis Great Wall, discovered in November 2013, which has a length of 2000–3000 Mpc (more than seven times that of the SGW) The Giant Arc, discovered in June 2021, which has a length of 1000 Mpc The Big Ring, reported in 2024, which has a diameter of 399 Mpc and is shaped like a ring Other authors claim that the existence of structures larger than the scale of homogeneity in the ΛCDM model does not necessarily violate the cosmological principle in the ΛCDM model. El Gordo galaxy cluster collision El Gordo is a massive interacting galaxy cluster in the early Universe. The extreme properties of El Gordo in terms of its redshift, mass, and the collision velocity lead to strong tension with the ΛCDM model. The properties of El Gordo are, however, consistent with cosmological simulations in the framework of MOND, due to more rapid structure formation. KBC void The KBC void is an immense, comparatively empty region of space containing the Milky Way, approximately 2 billion light-years (600 megaparsecs, Mpc) in diameter. 
Some authors have said the existence of the KBC void violates the assumption that the CMB reflects baryonic density fluctuations at recombination, or Einstein's theory of general relativity, either of which would violate the ΛCDM model, while other authors have claimed that supervoids as large as the KBC void are consistent with the ΛCDM model. Hubble tension Statistically significant differences remain between values of the Hubble constant derived by matching the ΛCDM model to data from the "early universe", like the cosmic microwave background radiation, and values derived from stellar distance measurements, called the "late universe". While systematic error in the measurements remains a possibility, many different kinds of observations agree with one of these two values of the constant. This difference, called the Hubble tension, is widely acknowledged to be a major problem for the ΛCDM model. Dozens of proposals for modifications of ΛCDM or completely new models have been published to explain the Hubble tension. Among these models are many that modify the properties of dark energy or of dark matter over time, interactions between dark energy and dark matter, unified dark energy and matter, other forms of dark radiation like sterile neutrinos, modifications to the properties of gravity, modifications of the effects of inflation, and changes to the properties of elementary particles in the early universe, among others. None of these models can simultaneously explain the breadth of other cosmological data as well as ΛCDM. S8 tension The tension in the parameter $S_8$ is another major problem for the ΛCDM model. The parameter quantifies the amplitude of matter fluctuations in the late universe and is defined as $S_8 \equiv \sigma_8 \sqrt{\Omega_m / 0.3}$, where $\sigma_8$ is the amplitude of matter fluctuations on a scale of 8 $h^{-1}$ Mpc and $\Omega_m$ is the matter density parameter. Early-time measurements (e.g. from CMB data collected using the Planck observatory) and late-time measurements (e.g. of weak gravitational lensing events) facilitate increasingly precise values of $S_8$. However, these two categories of measurement differ by more standard deviations than their uncertainties would suggest. This discrepancy is called the $S_8$ tension. The name "tension" reflects that the disagreement is not merely between two data sets: the many sets of early- and late-time measurements agree well within their own categories, but there is an unexplained difference between values obtained from different points in the evolution of the universe. Such a tension indicates that the ΛCDM model may be incomplete or in need of correction. Values of $S_8$ have been reported by Planck (2020), KIDS (2021), DES (2022), DES+KIDS (2023), HSC-SSP (2023), and EROSITA (2024); values have also been obtained using peculiar velocities (2020), among other methods. Axis of evil Cosmological lithium problem The actual observable amount of lithium in the universe is less than the calculated amount from the ΛCDM model by a factor of 3–4. If every calculation is correct, then solutions beyond the existing ΛCDM model might be needed. Shape of the universe The ΛCDM model assumes that the shape of the universe is of zero curvature (is flat) and has an undetermined topology. In 2019, interpretation of Planck data suggested that the curvature of the universe might be positive (often called "closed"), which would contradict the ΛCDM model. Some authors have suggested that the Planck data detecting a positive curvature could be evidence of a local inhomogeneity in the curvature of the universe rather than the universe actually being globally a 3-manifold of positive curvature. 
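Returning to the $S_8$ tension defined above, a minimal sketch (illustrative numbers only, not survey results) shows how modest differences in $\sigma_8$ and $\Omega_m$ propagate into an $S_8$ discrepancy:

```python
# Sketch of the S8 definition: S8 = sigma_8 * sqrt(Omega_m / 0.3).
# The inputs below are hypothetical, chosen only to mimic the pattern of
# an early-time value sitting above a late-time (lensing-style) value.
import math

def S8(sigma8, omega_m):
    return sigma8 * math.sqrt(omega_m / 0.3)

print(S8(sigma8=0.81, omega_m=0.315))  # early-universe-style input
print(S8(sigma8=0.76, omega_m=0.300))  # weak-lensing-style input
```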
Violations of the strong equivalence principle The ΛCDM model assumes that the strong equivalence principle is true. However, in 2020 a group of astronomers analyzed data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample, together with estimates of the large-scale external gravitational field from an all-sky galaxy catalog. They concluded that there was highly statistically significant evidence of violations of the strong equivalence principle in weak gravitational fields in the vicinity of rotationally supported galaxies. They observed an effect inconsistent with tidal effects in the ΛCDM model. These results have been challenged as failing to consider inaccuracies in the rotation curves and correlations between galaxy properties and clustering strength, and as inconsistent with similar analysis of other galaxies. Cold dark matter discrepancies Several discrepancies between the predictions of cold dark matter in the ΛCDM model and observations of galaxies and their clustering have arisen. Some of these problems have proposed solutions, but it remains unclear whether they can be solved without abandoning the ΛCDM model. Milgrom, McGaugh, and Kroupa have criticized the dark matter portions of the theory from the perspective of galaxy formation models, supporting the alternative modified Newtonian dynamics (MOND) theory, which requires a modification of the Einstein field equations and the Friedmann equations as seen in proposals such as modified gravity theory (MOG theory) or tensor–vector–scalar gravity theory (TeVeS theory). Other proposals by theoretical astrophysicists of cosmological alternatives to Einstein's general relativity that attempt to account for dark energy or dark matter include f(R) gravity, scalar–tensor theories such as Galileon theories (see Galilean invariance), brane cosmologies, the DGP model, and massive gravity and its extensions such as bimetric gravity. Cuspy halo problem The density distributions of dark matter halos in cold dark matter simulations (at least those that do not include the impact of baryonic feedback) are much more peaked than what is observed in galaxies by investigating their rotation curves. Dwarf galaxy problem Cold dark matter simulations predict large numbers of small dark matter halos, more numerous than the number of small dwarf galaxies that are observed around galaxies like the Milky Way. Satellite disk problem Dwarf galaxies around the Milky Way and Andromeda galaxies are observed to be orbiting in thin, planar structures, whereas the simulations predict that they should be distributed randomly about their parent galaxies. However, recent research suggests this seemingly bizarre alignment is just a quirk which will dissolve over time. High-velocity galaxy problem Galaxies in the NGC 3109 association are moving away too rapidly to be consistent with expectations in the ΛCDM model. In this framework, NGC 3109 is too massive and distant from the Local Group for it to have been flung out in a three-body interaction involving the Milky Way or Andromeda Galaxy. Galaxy morphology problem If galaxies grew hierarchically, then massive galaxies required many mergers. Major mergers inevitably create a classical bulge. By contrast, about 80% of observed galaxies give evidence of no such bulges, and giant pure-disc galaxies are commonplace. 
The tension can be quantified by comparing the observed distribution of galaxy shapes today with predictions from high-resolution hydrodynamical cosmological simulations in the ΛCDM framework, revealing a highly significant problem that is unlikely to be solved by improving the resolution of the simulations. The high bulgeless fraction was nearly constant for 8 billion years. Fast galaxy bar problem If galaxies were embedded within massive halos of cold dark matter, then the bars that often develop in their central regions would be slowed down by dynamical friction with the halo. This is in serious tension with the fact that observed galaxy bars are typically fast. Small scale crisis Comparison of the model with observations may have some problems on sub-galaxy scales, possibly predicting too many dwarf galaxies and too much dark matter in the innermost regions of galaxies. This problem is called the "small scale crisis". These small scales are harder to resolve in computer simulations, so it is not yet clear whether the problem is the simulations, non-standard properties of dark matter, or a more radical error in the model. High redshift galaxies Observations from the James Webb Space Telescope have resulted in various galaxies confirmed by spectroscopy at high redshift, such as JADES-GS-z13-0 at cosmological redshift of 13.2. Other candidate galaxies which have not been confirmed by spectroscopy include CEERS-93316 at cosmological redshift of 16.4. The existence of surprisingly massive galaxies in the early universe challenges the preferred models describing how dark matter halos drive galaxy formation. It remains to be seen whether a revision of the Lambda-CDM model with parameters given by the Planck Collaboration is necessary to resolve this issue. The discrepancies could also be explained by particular properties (stellar masses or effective volume) of the candidate galaxies, a yet-unknown force or particle outside of the Standard Model through which dark matter interacts, more efficient baryonic matter accumulation by the dark matter halos, early dark energy models, or the hypothesized long-sought Population III stars. Missing baryon problem Massimo Persic and Paolo Salucci first estimated the baryonic density present today in ellipticals, spirals, and groups and clusters of galaxies. They performed an integration of the baryonic mass-to-light ratio over luminosity, weighted with the luminosity function of the previously mentioned classes of astrophysical objects. The resulting baryon density is much lower than the prediction of standard cosmic nucleosynthesis, so that stars and gas in galaxies and in galaxy groups and clusters account for less than 10% of the primordially synthesized baryons. This issue is known as the problem of the "missing baryons". The missing baryon problem is claimed to be resolved. Using observations of the kinematic Sunyaev–Zel'dovich effect spanning more than 90% of the lifetime of the Universe, in 2021 astrophysicists found that approximately 50% of all baryonic matter is outside dark matter haloes, filling the space between galaxies. Together with the amount of baryons inside galaxies and surrounding them, the total amount of baryons in the late time Universe is compatible with early Universe measurements. Unfalsifiability It has been argued that the ΛCDM model is built upon a foundation of conventionalist stratagems, rendering it unfalsifiable in the sense defined by Karl Popper. 
Extended models Extended models allow one or more of the "fixed" parameters above to vary, in addition to the basic six; so these models join smoothly to the basic six-parameter model in the limit that the additional parameter(s) approach the default values. For example, possible extensions of the simplest ΛCDM model allow for spatial curvature (the total density parameter may be different from 1); or quintessence rather than a cosmological constant, where the equation of state of dark energy is allowed to differ from −1. Cosmic inflation predicts tensor fluctuations (gravitational waves). Their amplitude is parameterized by the tensor-to-scalar ratio (denoted $r$), which is determined by the unknown energy scale of inflation. Other modifications allow hot dark matter in the form of neutrinos more massive than the minimal value, or a running spectral index; the latter is generally not favoured by simple cosmic inflation models. Allowing additional variable parameter(s) will generally increase the uncertainties in the standard six parameters quoted above, and may also shift the central values slightly. The table below shows results for each of the possible "6+1" scenarios with one additional variable parameter; this indicates that, as of 2015, there is no convincing evidence that any additional parameter is different from its default value. Some researchers have suggested that there is a running spectral index, but no statistically significant study has revealed one. Theoretical expectations suggest that the tensor-to-scalar ratio $r$ should be between 0 and 0.3, and the latest results are within those limits. See also Bolshoi cosmological simulation Galaxy formation and evolution Illustris project List of cosmological computation software Millennium Run Weakly interacting massive particles (WIMPs) The ΛCDM model is also known as the standard model of cosmology, but is not related to the Standard Model of particle physics. Inhomogeneous cosmology References Further reading External links Cosmology tutorial/NedWright Millennium Simulation WMAP estimated cosmological parameters/Latest Summary Dark matter Dark energy Concepts in astronomy Scientific models
Lambda-CDM model
[ "Physics", "Astronomy" ]
6,638
[ "Dark matter", "Unsolved problems in astronomy", "Physical quantities", "Concepts in astronomy", "Unsolved problems in physics", "Energy (physics)", "Exotic matter", "Dark energy", "Wikipedia categories named after physical quantities", "Physics beyond the Standard Model", "Matter" ]
986,020
https://en.wikipedia.org/wiki/Cytolysis
Cytolysis, or osmotic lysis, occurs when a cell bursts due to an osmotic imbalance that has caused excess water to diffuse into the cell. Water can enter the cell by diffusion through the cell membrane or through selective membrane channels called aquaporins, which greatly facilitate the flow of water. Cytolysis occurs in a hypotonic environment, where water moves into the cell by osmosis and causes its volume to increase to the point where it exceeds the membrane's capacity, so that the cell bursts. The presence of a cell wall prevents the membrane from bursting, so cytolysis only occurs in animal and protozoan cells, which do not have cell walls. The reverse process is plasmolysis. In bacteria Osmotic lysis would be expected to occur when bacterial cells are treated with a hypotonic solution with added lysozyme, which destroys the bacteria's cell walls. Prevention Different cells and organisms have adapted different ways of preventing cytolysis from occurring. For example, the paramecium uses a contractile vacuole, which rapidly pumps out excess water to prevent its build-up and the otherwise subsequent lysis. See also Cell disruption Crenation Lysis Osmotic pressure Plasmolysis Water intoxication References Cell biology Membrane biology Articles containing video clips
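The osmotic driving force behind lysis can be estimated with the van 't Hoff relation Π = iMRT. The following is a minimal illustrative sketch, not part of the article: the osmolarity, temperature, and van 't Hoff factor are assumed order-of-magnitude values for an animal cell placed in pure water.

```python
# Sketch: osmotic pressure difference across a cell membrane via Pi = i*M*R*T.
# All numeric inputs are assumptions chosen for illustration.
R = 8.314          # gas constant, J/(mol*K)
T = 310.0          # body temperature, K
i = 1.0            # assumed van 't Hoff factor (non-dissociating solutes)
M_inside = 300.0   # assumed intracellular osmolarity, mol/m^3 (~300 mOsm/L)
M_outside = 0.0    # pure water outside (hypotonic extreme)

delta_pi = i * R * T * (M_inside - M_outside)    # pressure difference, Pa
print(f"Osmotic pressure difference: {delta_pi / 1e5:.1f} bar")  # ~7.7 bar
```

A pressure difference of several bar is far more than an unsupported lipid membrane can withstand, which is why water influx in a hypotonic environment bursts wall-less cells.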
Cytolysis
[ "Chemistry", "Biology" ]
274
[ "Membrane biology", "Cell biology", "Molecular biology" ]
986,029
https://en.wikipedia.org/wiki/Strong%20CP%20problem
The strong CP problem is a question in particle physics, which brings up the following quandary: why does quantum chromodynamics (QCD) seem to preserve CP-symmetry? In particle physics, CP stands for the combination of C-symmetry (charge conjugation symmetry) and P-symmetry (parity symmetry). According to the current mathematical formulation of quantum chromodynamics, a violation of CP-symmetry in strong interactions could occur. However, no violation of the CP-symmetry has ever been seen in any experiment involving only the strong interaction. As there is no known reason in QCD for it to necessarily be conserved, this is a "fine tuning" problem known as the strong CP problem. The strong CP problem is sometimes regarded as an unsolved problem in physics, and has been referred to as "the most underrated puzzle in all of physics." There are several proposed solutions to solve the strong CP problem. The most well-known is Peccei–Quinn theory, involving new pseudoscalar particles called axions. Theory CP-symmetry states that physics should be unchanged if particles were swapped with their antiparticles and then left-handed and right-handed particles were also interchanged. This corresponds to performing a charge conjugation transformation and then a parity transformation. The symmetry is known to be broken in the Standard Model through weak interactions, but it is also expected to be broken through strong interactions which govern quantum chromodynamics (QCD), something that has not yet been observed. To illustrate how the CP violation can come about in QCD, consider a Yang–Mills theory with a single massive quark. The most general mass term possible for the quark is a complex mass written as $m e^{i\theta'\gamma_5}$ for some arbitrary phase $\theta'$. In that case the Lagrangian describing the theory consists of four terms: $\mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} + \frac{\theta g^2}{32\pi^2} F_{\mu\nu} \tilde F^{\mu\nu} + \bar\psi\, i\gamma^\mu D_\mu \psi - m\, \bar\psi\, e^{i\theta'\gamma_5} \psi.$ The first and third terms are the CP-symmetric kinetic terms of the gauge and quark fields. The fourth term is the quark mass term, which is CP violating for non-zero phases $\theta'$, while the second term is the so-called θ-term or "vacuum angle", which also violates CP-symmetry. Quark fields can always be redefined by performing a chiral transformation by some angle $\alpha$ as $\psi \rightarrow e^{i\alpha\gamma_5/2}\,\psi$, which changes the complex mass phase by $\theta' \rightarrow \theta' - \alpha$ while leaving the kinetic terms unchanged. The transformation also changes the θ-term as $\theta \rightarrow \theta + \alpha$, due to a change in the path integral measure, an effect closely connected to the chiral anomaly. The theory would be CP invariant if one could eliminate both sources of CP violation through such a field redefinition. But this cannot be done unless $\theta + \theta' = 0$. This is because even under such field redefinitions, the combination $\bar\theta \equiv \theta + \theta'$ remains unchanged. For example, the CP violation due to the mass term can be eliminated by picking $\alpha = \theta'$, but then all the CP violation goes to the θ-term, which is now proportional to $\bar\theta$. If instead the θ-term is eliminated through a chiral transformation, then there will be a CP violating complex mass with a phase $\bar\theta$. Practically, it is usually useful to put all the CP violation into the θ-term and thus only deal with real masses. In the Standard Model where one deals with six quarks whose masses are described by the Yukawa matrices $Y_u$ and $Y_d$, the physical CP violating angle is $\bar\theta = \theta + \arg\det(Y_u Y_d)$. Since the θ-term has no contributions to perturbation theory, all effects from strong CP violation are entirely non-perturbative. Notably, it gives rise to a neutron electric dipole moment of order $d_n \sim 10^{-16}\,\bar\theta\ e\cdot$cm. Current experimental upper bounds on the dipole moment, at roughly $10^{-26}\ e\cdot$cm, require $\bar\theta \lesssim 10^{-10}$. 
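The last step above is a one-line estimate. Here is a rough illustrative sketch, not part of the article: both the proportionality constant between $d_n$ and $\bar\theta$ and the experimental bound are assumed order-of-magnitude values rather than precise figures.

```python
# Sketch: translating a neutron EDM bound into a bound on theta-bar.
# d_n ~ (assumed constant) * theta_bar, so theta_bar < bound / constant.
d_n_limit = 1.0e-26       # assumed experimental EDM upper bound, in e*cm
d_n_per_theta = 1.0e-16   # assumed theory estimate |d_n| ~ 1e-16 * theta_bar, e*cm

theta_bar_max = d_n_limit / d_n_per_theta
print(f"theta_bar < {theta_bar_max:.0e}")   # ~1e-10
```

The result shows why the problem is phrased as fine tuning: a dimensionless angle that could be anything up to 2π must be smaller than about one part in ten billion.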
The angle $\bar\theta$ can take any value between zero and $2\pi$, so its taking on such a particularly small value is a fine-tuning problem called the strong CP problem. Proposed solutions The strong CP problem is solved automatically if one of the quarks is massless. In that case one can perform a set of chiral transformations on all the massive quark fields to get rid of their complex mass phases and then perform another chiral transformation on the massless quark field to eliminate the residual θ-term without also introducing a complex mass term for that field. This then gets rid of all CP violating terms in the theory. The problem with this solution is that all quarks are known to be massive from experimental matching with lattice calculations. Even if one of the quarks were essentially massless to solve the problem, this would in itself just be another fine-tuning problem, since there is nothing requiring a quark mass to take on such a small value. The most popular solution to the problem is through the Peccei–Quinn mechanism. This introduces a new global anomalous symmetry which is then spontaneously broken at low energies, giving rise to a pseudo-Goldstone boson called an axion. The axion ground state dynamically forces the theory to be CP-symmetric by setting $\bar\theta = 0$. Axions are also considered viable candidates for dark matter, and axion-like particles are also predicted by string theory. Other less popular proposed solutions exist, such as Nelson–Barr models. These set $\bar\theta = 0$ at some high energy scale, where CP-symmetry is exact, but the symmetry is then spontaneously broken. The Nelson–Barr mechanism is a way of explaining why $\bar\theta$ remains small at low energies while the CP breaking phase in the CKM matrix is large. See also Axion CP violation References Particle physics Unsolved problems in physics
Strong CP problem
[ "Physics" ]
1,091
[ "Unsolved problems in physics", "Particle physics" ]
986,033
https://en.wikipedia.org/wiki/Zinc%20oxide%20eugenol
Zinc oxide eugenol (ZOE) is a material created by the combination of zinc oxide and eugenol contained in clove oil. An acid–base reaction takes place with the formation of zinc eugenolate chelate. The reaction is catalysed by water and is accelerated by the presence of metal salts. ZOE can be used as a dental filling material or dental cement in dentistry. It is often used in dentistry when the decay is very deep or very close to the nerve or pulp chamber. Because the tissue inside the tooth, i.e. the pulp, reacts badly to the drilling stimulus (heat and vibration), it frequently becomes severely inflamed and precipitates a condition called acute or chronic pulpitis. This condition usually leads to severe chronic tooth sensitivity or actual toothache and can then only be treated with the removal of the nerve (pulp), called root canal therapy. For persons with a dry socket as a complication of tooth extraction, packing the dry socket with a eugenol-zinc oxide paste on iodoform gauze is effective for reducing acute pain. The placement of a ZOE "temporary" for a few to several days prior to the placement of the final filling can help to sedate the pulp. However, ZOE shows in vitro cytotoxicity, mainly due to the release of Zn ions rather than eugenol. In spite of this severe in vitro cytotoxicity, ZOE showed relatively good biocompatibility in animal studies when applied to dentin. When ZOE is used as a dentin-protective base material, placing dental composite resin directly on ZOE is strongly discouraged, because eugenol inhibits resin polymerization through a radical-scavenging effect. ZOE is classified as an intermediate restorative material and has anaesthetic and antibacterial properties. The exact mechanism of ZOE's anaesthetic effect has not been fully established, but it may act through an anti-inflammatory effect, modulating immune cells toward a less inflamed state. It is sometimes used in the management of dental caries as a "temporary filling". ZOE cements were introduced in the 1890s. Zinc oxide eugenol is also used as an impression material during construction of complete dentures and is used in the mucostatic technique of taking impressions, usually in a special (acrylic) tray produced after primary alginate impressions. However, ZOE is not usually used if the patient has large undercuts or tuberosities, for which silicone impression materials would be better suited. Zinc oxide eugenol is also used as an antimicrobial additive in paint. Types ZOE formulations are classified according to ANSI/ADA Specification No. 30 (ISO 3107), depending on intended use and the individual formulation designed for each specific purpose. Composition The chemical composition of ZOE is typically: Zinc oxide, ~69.0% White (bleached) rosin, ~29.3% Zinc acetate, ~1.0% (improves strength) Zinc stearate, ~0.7% (acts as accelerator) Liquid (eugenol, ~85%; olive oil, ~15%) ZOE impression pastes are sold in two separate tubes. The first tube contains zinc oxide and vegetable or mineral oil, while the second tube contains eugenol and rosin. The vegetable or mineral oil acts as a plasticizer and helps to counteract the irritant action of eugenol. Clove oil, which contains 70% to 85% eugenol, is sometimes used instead of eugenol because it causes less burning sensation in patients when it comes into contact with soft tissues. Rosin added to the paste in the second tube speeds up the reaction and produces a smoother, more homogeneous product. Canada balsam and Balsam of Peru are often used to increase flow and improve mixing properties. 
If the mixed paste is too thin or lacks body before it sets, a filler (such as a wax) or an inert powder (such as kaolin, talc, or diatomaceous earth) may be added to one or both of the original pastes. References A. D. Wilson and J. W. Nicholson, Acid-Base Cements, 1993, Chapter 9 Dental materials Zinc oxide eugenol Impression material
Zinc oxide eugenol
[ "Physics" ]
850
[ "Materials", "Dental materials", "Matter" ]
986,051
https://en.wikipedia.org/wiki/Fine-tuning%20%28physics%29
In theoretical physics, fine-tuning is the process in which parameters of a model must be adjusted very precisely in order to fit with certain observations. Theories requiring fine-tuning are regarded as problematic in the absence of a known mechanism to explain why the parameters happen to take precisely the observed values. The heuristic rule that parameters in a fundamental physical theory should not be too fine-tuned is called naturalness. Background The idea that naturalness will explain fine tuning was brought into question by Nima Arkani-Hamed, a theoretical physicist, in his talk "Why is there a Macroscopic Universe?", a lecture from the mini-series "Multiverse & Fine Tuning" from the "Philosophy of Cosmology" project, a 2013 collaboration between the University of Oxford and the University of Cambridge. In it he describes how naturalness has usually provided a solution to problems in physics, and that it had usually done so earlier than expected. However, in addressing the problem of the cosmological constant, naturalness has failed to provide an explanation, though it would have been expected to have done so a long time ago. The necessity of fine-tuning leads to various problems that do not show that the theories are incorrect, in the sense of being falsified by observations, but nevertheless suggest that a piece of the story is missing. Examples include the cosmological constant problem (why is the cosmological constant so small?), the hierarchy problem, and the strong CP problem, among others. Example An example of a fine-tuning problem considered by the scientific community to have a plausible "natural" solution is the cosmological flatness problem, which is solved if inflationary theory is correct: inflation forces the universe to become very flat, answering the question of why the universe is today observed to be flat to such a high degree. Measurement Although fine-tuning was traditionally measured by ad hoc fine-tuning measures, such as the Barbieri–Giudice–Ellis measure, over the past decade many scientists have recognized that fine-tuning arguments are a specific application of Bayesian statistics. See also Anthropic principle Fine-tuned universe Hierarchy problem Strong CP problem References External links Chaos theory Theoretical physics
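As an illustration of the measurement idea above, here is a minimal sketch of a Barbieri–Giudice-style sensitivity measure, Δ = |d ln O / d ln p|. The toy observable below is hypothetical and exists only to show the characteristic signature of a tuned theory; it is not a real physics calculation.

```python
# Sketch: logarithmic sensitivity of an observable O(p) to a parameter p,
# estimated by a central finite difference. Large Delta = fine-tuned.
def fine_tuning(O, p, eps=1e-6):
    dlnO = (O(p * (1 + eps)) - O(p * (1 - eps))) / O(p)
    dlnp = 2 * eps
    return abs(dlnO / dlnp)

# Toy observable: a small output produced by the near-cancellation of two
# large contributions -- the classic fine-tuning situation.
def observable(p):
    return p - 0.999999   # tuned against an assumed fixed contribution

print(fine_tuning(observable, p=1.0))   # ~1e6: severely fine-tuned
```

A Δ of order one means the observable responds proportionately to the parameter; a Δ of a million means a fractional change of one part in a million in the parameter doubles the observable, which is the quantitative sense in which such a theory is called fine-tuned.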
Fine-tuning (physics)
[ "Physics" ]
444
[ "Theoretical physics" ]
986,096
https://en.wikipedia.org/wiki/Fermi%20surface
In condensed matter physics, the Fermi surface is the surface in reciprocal space which separates occupied electron states from unoccupied electron states at zero temperature. The shape of the Fermi surface is derived from the periodicity and symmetry of the crystalline lattice and from the occupation of electronic energy bands. The existence of a Fermi surface is a direct consequence of the Pauli exclusion principle, which allows a maximum of one electron per quantum state. The study of the Fermi surfaces of materials is called fermiology. Theory Consider a spin-less ideal Fermi gas of $N$ particles. According to Fermi–Dirac statistics, the mean occupation number of a state with energy $\epsilon_i$ is given by $\langle n_i \rangle = \frac{1}{e^{(\epsilon_i - \mu)/k_B T} + 1}$, where $\langle n_i \rangle$ is the mean occupation number of the $i$th state, $\epsilon_i$ is the kinetic energy of the $i$th state, $\mu$ is the chemical potential (at zero temperature, this is the maximum kinetic energy the particle can have, i.e. the Fermi energy $E_F$), $T$ is the absolute temperature, and $k_B$ is the Boltzmann constant. Suppose we consider the limit $T \to 0$. Then we have $\langle n_i \rangle \to 1$ for $\epsilon_i < \mu$ and $\langle n_i \rangle \to 0$ for $\epsilon_i > \mu$. By the Pauli exclusion principle, no two fermions can be in the same state. Additionally, at zero temperature the enthalpy of the electrons must be minimal, meaning that they cannot change state. If, for a particle in some state, there existed an unoccupied lower state that it could occupy, then the energy difference between those states would give the electron an additional enthalpy. Hence, the enthalpy of the electron would not be minimal. Therefore, at zero temperature all the lowest energy states must be saturated. For a large ensemble the Fermi level will be approximately equal to the chemical potential of the system, and hence every state below this energy must be occupied. Thus, particles fill up all energy levels below the Fermi level at absolute zero, which is equivalent to saying that $E_F$ is the energy level below which there are exactly $N$ states. In momentum space, these particles fill up a ball of radius $p_F$, the surface of which is called the Fermi surface. The linear response of a metal to an electric, magnetic, or thermal gradient is determined by the shape of the Fermi surface, because currents are due to changes in the occupancy of states near the Fermi energy. In reciprocal space, the Fermi surface of an ideal Fermi gas is a sphere of radius $k_F = \frac{p_F}{\hbar} = \frac{\sqrt{2 m E_F}}{\hbar}$, determined by the valence electron concentration, where $\hbar$ is the reduced Planck constant. A material whose Fermi level falls in a gap between bands is an insulator or semiconductor depending on the size of the bandgap. When a material's Fermi level falls in a bandgap, there is no Fermi surface. Materials with complex crystal structures can have quite intricate Fermi surfaces. Figure 2 illustrates the anisotropic Fermi surface of graphite, which has both electron and hole pockets in its Fermi surface due to multiple bands crossing the Fermi energy along the $k_z$ direction. Often in a metal, the Fermi surface radius $k_F$ is larger than the size of the first Brillouin zone, which results in a portion of the Fermi surface lying in the second (or higher) zones. As with the band structure itself, the Fermi surface can be displayed in an extended-zone scheme where $k$ is allowed to have arbitrarily large values or a reduced-zone scheme where wavevectors are shown modulo $2\pi/a$ (in the 1-dimensional case), where $a$ is the lattice constant. In the three-dimensional case the reduced zone scheme means that from any wavevector $\mathbf{k}$ there is an appropriate number of reciprocal lattice vectors $\mathbf{K}$ subtracted so that the new $\mathbf{k}$ is now closer to the origin in $\mathbf{k}$-space than to any $\mathbf{K}$. 
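The ideal-Fermi-gas formulas above are easy to evaluate numerically. The following is a small illustrative sketch, not part of the article; the conduction-electron density used for copper is an assumed textbook-style value.

```python
# Sketch: Fermi wavevector and Fermi energy of a free-electron metal
# from the electron density, using k_F = (3*pi^2*n)^(1/3) and
# E_F = hbar^2 * k_F^2 / (2*m_e).
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
eV   = 1.602176634e-19   # joules per electronvolt

n = 8.5e28               # assumed conduction-electron density of copper, m^-3

k_F = (3 * math.pi**2 * n) ** (1.0 / 3.0)   # radius of the Fermi sphere in k-space
E_F = hbar**2 * k_F**2 / (2 * m_e)          # corresponding Fermi energy

print(f"k_F = {k_F:.3e} 1/m")       # ~1.36e10 1/m
print(f"E_F = {E_F / eV:.2f} eV")   # ~7 eV, the familiar scale for copper
```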
Solids with a large density of states at the Fermi level become unstable at low temperatures and tend to form ground states where the condensation energy comes from opening a gap at the Fermi surface. Examples of such ground states are superconductors, ferromagnets, Jahn–Teller distortions and spin density waves. The state occupancy of fermions like electrons is governed by Fermi–Dirac statistics, so at finite temperatures the Fermi surface is accordingly broadened. In principle all fermion energy level populations are bound by a Fermi surface, although the term is not generally used outside of condensed-matter physics.
Experimental determination
Electronic Fermi surfaces have been measured through observation of the oscillation of transport properties in magnetic fields $H$, for example the de Haas–van Alphen effect (dHvA) and the Shubnikov–de Haas effect (SdH). The former is an oscillation in magnetic susceptibility and the latter in resistivity. The oscillations are periodic versus $1/H$ and occur because of the quantization of energy levels in the plane perpendicular to a magnetic field, a phenomenon first predicted by Lev Landau. The new states are called Landau levels and are separated by an energy $\hbar\omega_c$, where $\omega_c = eH/m^*c$ is called the cyclotron frequency, $e$ is the electronic charge, $m^*$ is the electron effective mass and $c$ is the speed of light. In a famous result, Lars Onsager proved that the period of oscillation $\Delta H$ is related to the cross-section of the Fermi surface (typically given in Å$^{-2}$) perpendicular to the magnetic field direction, $A_\perp$, by the equation

$\Delta\!\left(\frac{1}{H}\right) = \frac{2\pi e}{\hbar c\, A_\perp}$.

Thus the determination of the periods of oscillation for various applied field directions allows mapping of the Fermi surface. Observation of the dHvA and SdH oscillations requires magnetic fields large enough that the circumference of the cyclotron orbit is smaller than a mean free path. Therefore, dHvA and SdH experiments are usually performed at high-field facilities like the High Field Magnet Laboratory in the Netherlands, the Grenoble High Magnetic Field Laboratory in France, the Tsukuba Magnet Laboratory in Japan or the National High Magnetic Field Laboratory in the United States.

The most direct experimental technique to resolve the electronic structure of crystals in the momentum-energy space (see reciprocal lattice), and, consequently, the Fermi surface, is angle-resolved photoemission spectroscopy (ARPES). An example of the Fermi surface of superconducting cuprates measured by ARPES is shown in Figure 3.

With positron annihilation it is also possible to determine the Fermi surface, as the annihilation process conserves the momentum of the initial particle. Since a positron in a solid will thermalize prior to annihilation, the annihilation radiation carries the information about the electron momentum. The corresponding experimental technique is called angular correlation of electron positron annihilation radiation (ACAR), as it measures the angular deviation of the two annihilation quanta from exact anti-collinearity (180°). In this way it is possible to probe the electron momentum density of a solid and determine the Fermi surface. Furthermore, using spin polarized positrons, the momentum distribution for the two spin states in magnetized materials can be obtained. ACAR has many advantages and disadvantages compared to other experimental techniques: it does not rely on UHV conditions, cryogenic temperatures, high magnetic fields or fully ordered alloys. However, ACAR needs samples with a low vacancy concentration, as vacancies act as effective traps for positrons.
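As a hedged numerical companion to the Onsager relation above (written here in SI units, where the factor of $c$ drops out), the sketch below converts an assumed extremal cross-section into a dHvA frequency. The spherical free-electron surface with $k_F \approx 1.36\times 10^{10}\,\mathrm{m^{-1}}$, roughly appropriate for copper, is an illustrative assumption rather than a measured value from this article.

```python
import math

HBAR = 1.054571817e-34     # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19 # elementary charge, C

def dhva_frequency(area_k):
    """dHvA frequency F (tesla) for an extremal Fermi-surface cross-section
    area_k (m^-2), from the SI Onsager relation Delta(1/H) = 2*pi*e/(hbar*A),
    i.e. F = hbar * A / (2 * pi * e)."""
    return HBAR * area_k / (2.0 * math.pi * E_CHARGE)

# Assumed spherical Fermi surface: extremal cross-section A = pi * k_F^2.
k_f = 1.36e10                       # 1/m, illustrative free-electron value
area = math.pi * k_f**2
f_dhva = dhva_frequency(area)
print(f"A = {area:.2e} m^-2, F = {f_dhva:.2e} T")       # F ~ 6e4 T
print(f"Oscillation period in 1/H: {1.0 / f_dhva:.2e} 1/T")
```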
With ACAR, the first determination of a smeared Fermi surface in a 30% alloy was obtained in 1978. See also Fermi energy Brillouin zone Fermi surface of superconducting cuprates Kelvin probe force microscope Luttinger's theorem References External links Experimental Fermi surfaces of some superconducting cuprates and strontium ruthenates in "Angle-resolved photoemission spectroscopy of the cuprate superconductors (Review Article)" (2002) Experimental Fermi surfaces of some cuprates, transition metal dichalcogenides, ruthenates, and iron-based superconductors in "ARPES experiment in fermiology of quasi-2D metals (Review Article)" (2014) Condensed matter physics Electric and magnetic fields in matter Fermi–Dirac statistics
Fermi surface
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,617
[ "Phases of matter", "Electric and magnetic fields in matter", "Materials science", "Condensed matter physics", "Matter" ]
986,135
https://en.wikipedia.org/wiki/Peccei%E2%80%93Quinn%20theory
In particle physics, the Peccei–Quinn theory is a well-known, long-standing proposal for the resolution of the strong CP problem formulated by Roberto Peccei and Helen Quinn in 1977. The theory introduces a new anomalous symmetry to the Standard Model along with a new scalar field which spontaneously breaks the symmetry at low energies, giving rise to an axion that suppresses the problematic CP violation. This model has long since been ruled out by experiments and has instead been replaced by similar invisible axion models which utilize the same mechanism to solve the strong CP problem.
Overview
Quantum chromodynamics (QCD) has a complicated vacuum structure which gives rise to a CP violating θ-term in the Lagrangian. Such a term can have a number of non-perturbative effects, one of which is to give the neutron an electric dipole moment. The absence of this dipole moment in experiments requires the fine-tuning of the θ-term to be very small, something known as the strong CP problem.

Motivated as a solution to this problem, Peccei–Quinn (PQ) theory introduces a new complex scalar field $\phi$ in addition to the standard Higgs doublet. This scalar field couples to d-type quarks through Yukawa terms, while the Higgs now only couples to the up-type quarks. Additionally, a new global chiral anomalous U(1) symmetry is introduced, the Peccei–Quinn symmetry, under which $\phi$ is charged, requiring that some of the fermions also have a PQ charge. The scalar field also has a potential

$V(\phi) = \lambda\left(|\phi|^2 - \tfrac{f_a^2}{2}\right)^2$,

where $\lambda$ is a dimensionless parameter and $f_a$ is known as the decay constant. The potential results in $\phi$ having the vacuum expectation value $\langle\phi\rangle = f_a/\sqrt{2}$ at the electroweak phase transition.

Spontaneous symmetry breaking of the Peccei–Quinn symmetry below the electroweak scale gives rise to a pseudo-Goldstone boson known as the axion $a$, with the resulting Lagrangian taking the form

$\mathcal{L} = \mathcal{L}_{\mathrm{SM},a} + \theta\,\frac{g^2}{32\pi^2}\,G_{\mu\nu}\tilde{G}^{\mu\nu} + \xi\,\frac{a}{f_a}\,\frac{g^2}{32\pi^2}\,G_{\mu\nu}\tilde{G}^{\mu\nu}$,

where the first term is the Standard Model (SM) and axion Lagrangian, which includes axion-fermion interactions arising from the Yukawa terms. The second term is the CP violating θ-term, with $g$ the strong coupling constant, $G_{\mu\nu}$ the gluon field strength tensor, and $\tilde{G}^{\mu\nu}$ the dual field strength tensor. The third term is known as the color anomaly, a consequence of the Peccei–Quinn symmetry being anomalous, with $\xi$ determined by the choice of PQ charges for the quarks. If the symmetry is also anomalous in the electromagnetic sector, there will additionally be an anomaly term coupling the axion to photons. Due to the presence of the color anomaly, the effective angle is modified to $\bar\theta = \theta + \xi a/f_a$, giving rise to an effective potential through instanton effects, which can be approximated in the dilute gas approximation as

$V_{\mathrm{eff}}(a) \propto 1 - \cos\!\left(\theta + \xi\,\tfrac{a}{f_a}\right)$.

To minimize the ground state energy, the axion field picks the vacuum expectation value $\langle a\rangle = -f_a\theta/\xi$, with axions now being excitations around this vacuum. This prompts the field redefinition $a \to a - \langle a\rangle$, which leads to the cancellation of the $\bar\theta$ angle, dynamically solving the strong CP problem. It is important to point out that the axion is massive, since the Peccei–Quinn symmetry is explicitly broken by the chiral anomaly, with the axion mass roughly given in terms of the pion mass and pion decay constant as $m_a \approx m_\pi f_\pi / f_a$.
Invisible axion models
For the Peccei–Quinn model to work, the decay constant must be set at the electroweak scale, leading to a heavy axion. Such an axion has long been ruled out by experiments, for example through bounds on rare kaon decays such as $K^+ \to \pi^+ + a$ (a rough numerical sketch of this mass scaling is given below).
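The following minimal Python sketch, an illustration rather than anything from the original paper, evaluates the order-of-magnitude mass relation $m_a \approx m_\pi f_\pi / f_a$ quoted above; the two decay-constant values (250 GeV for the electroweak scale and $10^{12}$ GeV for an invisible-axion scale) are assumptions chosen for illustration.

```python
# Constants in GeV
M_PION = 0.135   # neutral pion mass
F_PION = 0.093   # pion decay constant (one common convention)

def axion_mass_gev(f_a_gev):
    """Order-of-magnitude axion mass m_a ~ m_pi * f_pi / f_a (all in GeV)."""
    return M_PION * F_PION / f_a_gev

for f_a, label in [(250.0, "electroweak-scale f_a (original PQ model)"),
                   (1e12, "invisible-axion f_a")]:
    m_a = axion_mass_gev(f_a)
    print(f"{label}: m_a ~ {m_a:.2e} GeV ({m_a * 1e9:.2e} eV)")
# ~5e-5 GeV (tens of keV) for the weak-scale axion -> experimentally excluded;
# ~1e-14 GeV (tens of micro-eV) for f_a = 1e12 GeV -> an "invisible" axion.
```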
Instead, there are a variety of modified models called invisible axion models which introduce the new scalar field independently of the electroweak scale, enabling much larger vacuum expectation values, hence very light axions. The most popular such models are the Kim–Shifman–Vainshtein–Zakharov (KSVZ) and the Dine–Fischler–Srednicki–Zhitnitsky (DFSZ) models. The KSVZ model introduces a new heavy quark doublet with PQ charge, acquiring its mass through a Yukawa term involving $\phi$. Since in this model the only fermions that carry a PQ charge are the heavy quarks, there are no tree-level couplings between the SM fermions and the axion. Meanwhile, the DFSZ model replaces the usual Higgs with two PQ charged Higgs doublets, $H_u$ and $H_d$, that give mass to the SM fermions through the usual Yukawa terms, while the new scalar $\phi$ only interacts with the Standard Model through a quartic coupling of the form $\phi^2 H_u H_d$. Since the two Higgs doublets carry PQ charge, the resulting axion couples to SM fermions at tree-level. See also Axion QCD vacuum Strong CP problem References Further reading Physics beyond the Standard Model Quantum chromodynamics Anomalies (physics)
Peccei–Quinn theory
[ "Physics" ]
1,023
[ "Unsolved problems in physics", "Particle physics", "Physics beyond the Standard Model" ]
986,182
https://en.wikipedia.org/wiki/N-gram
An n-gram is a sequence of n adjacent symbols in a particular order. The symbols may be n adjacent letters (including punctuation marks and blanks), syllables, or, rarely, whole words found in a language dataset; or adjacent phonemes extracted from a speech-recording dataset, or adjacent base pairs extracted from a genome. They are collected from a text corpus or speech corpus. If Latin numerical prefixes are used, then an n-gram of size 1 is called a "unigram", size 2 a "bigram" (or, less commonly, a "digram"), etc. If, instead of the Latin prefixes, English cardinal numbers are used, then they are called "four-gram", "five-gram", etc. Similarly, Greek numerical prefixes such as "monomer", "dimer", "trimer", "tetramer", "pentamer", etc., or English cardinal numbers, "one-mer", "two-mer", "three-mer", etc., are used in computational biology for polymers or oligomers of a known size, called k-mers. When the items are words, n-grams may also be called shingles. In the context of natural language processing (NLP), the use of n-grams allows bag-of-words models to capture information such as word order, which would not be possible in the traditional bag-of-words setting.
Examples
(Shannon 1951) discussed n-gram models of English. For example:
3-gram character model (random draw based on the probabilities of each trigram): in no ist lat whey cratict froure birs grocid pondenome of demonstures of the retagin is regiactiona of cre
2-gram word model (random draw of words taking into account their transition probabilities): the head and in frontal attack on an english writer that the character of this point is therefore another method for the letters that the time of who ever told the problem for an unexpected
Figure 1 shows several example sequences and the corresponding 1-gram, 2-gram and 3-gram sequences. Here are further examples; these are word-level 3-grams and 4-grams (and counts of the number of times they appeared) from the Google n-gram corpus.
3-grams
ceramics collectables collectibles (55)
ceramics collectables fine (130)
ceramics collected by (52)
ceramics collectible pottery (50)
ceramics collectibles cooking (45)
4-grams
serve as the incoming (92)
serve as the incubator (99)
serve as the independent (794)
serve as the index (223)
serve as the indication (72)
serve as the indicator (120)
References Further reading Manning, Christopher D.; Schütze, Hinrich; Foundations of Statistical Natural Language Processing, MIT Press: 1999. Damerau, Frederick J.; Markov Models and Linguistic Theory, Mouton, The Hague, 1971. See also Google Books Ngram Viewer External links Ngram Extractor: Gives weight of n-gram based on their frequency. Google's Google Books n-gram viewer and Web n-grams database (September 2006) STATOPERATOR N-grams Project Weighted n-gram viewer for every domain in Alexa Top 1M 1,000,000 most frequent 2,3,4,5-grams from the 425 million word Corpus of Contemporary American English Peachnote's music ngram viewer Stochastic Language Models (n-Gram) Specification (W3C) Michael Collins's notes on n-Gram Language Models OpenRefine: Clustering In Depth Natural language processing Computational linguistics Language modeling Speech recognition Corpus linguistics Probabilistic models
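A minimal sketch of n-gram extraction as described above, at both the character and word level; the sample sentences are arbitrary illustrations, not drawn from any particular corpus.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) over a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Character-level trigrams (blanks count as symbols):
print(ngrams(list("to be"), 3))
# [('t','o',' '), ('o',' ','b'), (' ','b','e')]

# Word-level bigrams with counts, in the spirit of the Google n-gram
# corpus excerpts above:
words = "to be or not to be".split()
print(Counter(ngrams(words, 2)))
# Counter({('to','be'): 2, ('be','or'): 1, ('or','not'): 1, ('not','to'): 1})
```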
N-gram
[ "Technology" ]
786
[ "Natural language processing", "Natural language and computing", "Computational linguistics" ]
986,353
https://en.wikipedia.org/wiki/Information%20leakage
Information leakage happens whenever a system that is designed to be closed to an eavesdropper reveals some information to unauthorized parties nonetheless. In other words: information leakage occurs when secret information correlates with, or can be correlated with, observable information. For example, when designing an encrypted instant messaging network, a network engineer without the capacity to crack encryption codes could see when messages are transmitted, even if he could not read them.
Risk vectors
A modern example of information leakage is the leakage of secret information via data compression, by using variations in data compression ratio to reveal correlations between known (or deliberately injected) plaintext and secret data combined in a single compressed stream. Another example is the key leakage that can occur when using some public-key systems when cryptographic nonce values used in signing operations are insufficiently random. Bad randomness undermines the proper functioning of a cryptographic system: even in otherwise benign circumstances, it can easily produce crackable keys that cause key leakage.

Information leakage can sometimes be deliberate: for example, an algorithmic converter may be shipped that intentionally leaks small amounts of information, in order to provide its creator with the ability to intercept the users' messages, while still allowing the user to maintain an illusion that the system is secure. This sort of deliberate leakage is sometimes known as a subliminal channel.

Generally, only very advanced systems employ defenses against information leakage. The following countermeasures are commonly implemented:
Use steganography to hide the fact that a message is transmitted at all.
Use chaffing to make it unclear to whom messages are transmitted (but this does not hide from others the fact that messages are transmitted).
For busy re-transmitting proxies, such as a Mixmaster node: randomly delay and shuffle the order of outbound packets – this will assist in disguising a given message's path, especially if there are multiple, popular forwarding nodes, such as are employed with Mixmaster mail forwarding.
When a data value is no longer going to be used, erase it from memory.
See also Kleptographic attack Side-channel attack Traffic analysis References Cryptography
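A toy demonstration of the compression-ratio leak described above, using Python's zlib: an eavesdropper who can only observe the size of a compressed stream combining attacker-chosen text with a secret can still learn about the secret, because a correct guess is deduplicated by DEFLATE and compresses shorter. The secret string and candidate guesses are hypothetical placeholders.

```python
import zlib

SECRET = b"sessionid=7f3a9c"  # hypothetical secret appended to every message

def observed_length(guess: bytes) -> int:
    # The eavesdropper never sees the plaintext, only the compressed size.
    return len(zlib.compress(guess + SECRET))

candidates = [b"sessionid=7f3a9c", b"sessionid=000000", b"random-junk-here"]
for guess in candidates:
    print(guess, observed_length(guess))
# The guess equal to the secret yields the shortest output: the repeated
# substring is replaced by a back-reference, so the compressed length alone
# leaks whether the guess matched.
```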
Information leakage
[ "Mathematics", "Engineering" ]
448
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
986,413
https://en.wikipedia.org/wiki/Hair%20dryer
A hair dryer (the handheld type also referred to as a blow dryer) is an electromechanical device that blows ambient air in hot or warm settings for styling or drying hair. Hair dryers enable better control over the shape and style of hair, by accelerating and controlling the formation of temporary hydrogen bonds within each strand. These bonds are powerful, but are temporary and extremely vulnerable to humidity. They disappear with a single washing of the hair. Hairstyles using hair dryers usually have volume and discipline, which can be further improved with styling products, hairbrushes, and combs during drying to add tension, hold and lift. Hair dryers were invented in the late 19th century; the first US patent for one was granted to Gabriel Kazanjian in 1911. Handheld, household hair dryers first appeared in 1920. Hair dryers are used in beauty salons by professional stylists, as well as by consumers at home.
History
In 1888 the first hair dryer was invented by French stylist Alexandre Godefroy. His invention was a large, seated version that consisted of a bonnet that attached to the chimney pipe of a gas stove. Godefroy invented it for use in his hair salon in France, and it was not portable or handheld. It could only be used by having the person sit underneath it. Armenian American inventor Gabriel Kazanjian was the first to patent a hair dryer in the United States, in 1911. Around 1920, hair dryers began to go on the market in handheld form. This was due to innovations by National Stamping and Electricworks under the White Cross brand, and later U.S. Racine Universal Motor Company and the Hamilton Beach Co., which allowed the dryer to be small enough to be held by hand. Even in the 1920s, the new dryers were often heavy, weighing in at approximately , and were difficult to use. They also had many instances of overheating and electrocution. Hair dryers were only capable of using 100 watts, which increased the amount of time needed to dry hair (the average dryer today can use up to 2000 watts of heat). Since the 1920s, development of the hair dryer has mainly focused on improving the wattage and superficial exterior and material changes. In fact, the mechanism of the dryer has not had any significant changes since its inception. One of the more important changes was the switch to plastic housings, which made dryers more lightweight; this caught on in the 1960s with the introduction of better electrical motors and the improvement of plastics. Another important change happened in 1954 when GEC changed the design of the dryer to move the motor inside the casing. The bonnet dryer was introduced to consumers in 1951. This type worked by having the dryer, usually in a small portable box, connected to a tube that went into a bonnet with holes in it that could be placed on top of a person's head. This worked by giving an even amount of heat to the whole head at once. The 1950s also saw the introduction of the rigid-hood hair dryer, which is the type most frequently seen in salons. It has a hard plastic helmet that wraps around the person's head. This dryer works similarly to the bonnet dryer of the 1950s but at a much higher wattage. In the 1970s, the U.S. Consumer Product Safety Commission set up guidelines that hair dryers had to meet to be considered safe to manufacture. Since 1991 the CPSC has mandated that all dryers must use a ground fault circuit interrupter so that a dryer cannot electrocute a person if it gets wet.
By 2000, deaths by blowdryers had dropped to fewer than four people a year, a stark difference to the hundreds of cases of electrocution accidents during the mid-20th century. Function Most hair dryers consist of electric heating coils and a fan that blows the air (usually powered by a universal motor). The heating element in most dryers is a bare, coiled nichrome wire that is wrapped around mica insulators. Nichrome is used due to its high resistivity, and low tendency to corrode when heated. A survey of stores in 2007 showed that most hair dryers had ceramic heating elements (like ceramic heaters) because of their "instant heat" capability. This means that it takes less time for the dryers to heat up and for the hair to dry. Many of these dryers have "normal mode" buttons that turn off the heater and blow room-temperature air while the button is pressed. This function helps to maintain the hairstyle by setting it. The colder air reduces frizz and can help to promote shine in the hair. Many feature "ionic" operation, to reduce the build-up of static electricity in the hair, though the efficacy of ionic technology is of some debate. Manufacturers claim this makes the hair "smoother". Hair dryers are available with attachments, such as diffusers, airflow concentrators, and comb nozzles. A diffuser is an attachment that is used on hair that is fine, colored, permed or naturally curly. It diffuses the jet of air, so that the hair is not blown around while it dries. The hair dries more slowly, at a cooler temperature, and with less physical disturbance. This makes it so that the hair is less likely to frizz and it gives the hair more volume. An airflow concentrator does the opposite of a diffuser. It makes the end of the hair dryer narrower and thus helps to concentrate the heat into one spot to make it dry rapidly. The comb nozzle attachment is the same as the airflow concentrator, but it ends with comb-like teeth so that the user can dry the hair using the dryer without a brush or comb. Hair dryers have been cited as an effective treatment for head lice. Types Today there are two major types of hair dryers: the handheld and the rigid-hood dryer. A hood dryer has a hard plastic dome that fits over a person's head to dry their hair. Hot air is blown out through tiny openings around the inside of the dome so the hair is dried evenly. Hood dryers are mainly found in hair salons. Hair dryer brush A hair dryer brush (also called "hot air brush" and "round brush hair dryer" and "hair styler") has the shape of a brush and it is used as a volumizer too. There are two types of round brush hair dryers – rotating and static. Rotating round brush hair dryers have barrels that rotate automatically while static round brush hair dryers don't. Cultural references The British historical drama television series Downton Abbey made note of the invention of the portable hair dryer when a character purchased one in Series 6 Episode 9, set in the year 1925. Gallery See also Curling iron Heat gun Notes References External links Hairdressing Home appliances Products introduced in 1920 19th-century inventions French inventions American inventions
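As a back-of-the-envelope illustration of the heating function described above, the Python sketch below estimates the temperature rise of the airstream from the heater power and airflow, using the energy balance ΔT = P / (ṁ·c_p). The wattage and volumetric flow are assumptions chosen for illustration, not specifications from this article.

```python
# Rough energy balance for a handheld dryer: the air temperature rise is
# dT = P / (m_dot * c_p), with mass flow m_dot = rho * volumetric flow.
P_HEATER = 1800.0   # W      -- assumed heater power, typical of modern dryers
FLOW = 0.02         # m^3/s  -- assumed volumetric airflow
RHO_AIR = 1.2       # kg/m^3 -- air density at room temperature
CP_AIR = 1005.0     # J/(kg*K), specific heat of air

m_dot = RHO_AIR * FLOW
delta_t = P_HEATER / (m_dot * CP_AIR)
print(f"Mass flow: {m_dot * 1000:.1f} g/s, temperature rise: ~{delta_t:.0f} K")
# ~75 K above ambient, i.e. outlet air near 95 C from a 20 C room --
# hot, but consistent with the warm (not scorching) air a dryer delivers.
```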
Hair dryer
[ "Physics", "Chemistry", "Technology", "Engineering" ]
1,444
[ "Machines", "Chemical equipment", "Dryers", "Physical systems", "Home appliances" ]
986,434
https://en.wikipedia.org/wiki/Krang
Krang (also spelled Kraang) is a supervillain appearing in Teenage Mutant Ninja Turtles-related media, most frequently in the 1987 animated series and its associated merchandise, such as the Teenage Mutant Ninja Turtles Adventures comic book and many TMNT video games. The character has endured as one of the franchise's most prominent antagonists and a major foe of the Ninja Turtles. Krang's first comics appearance was in Teenage Mutant Ninja Turtles Adventures vol. 1, #1, published by Archie Comics in August 1988. In the 1987 TV series, Krang was voiced by Pat Fraley. He also appeared as General Krang in the 2012 IDW comic publication. Krang made his first live-action appearance in Teenage Mutant Ninja Turtles: Out of the Shadows, which was a sequel to the 2014 film, with his voice provided by Brad Garrett. Krang was created by David Wise, with inspirations from the Utroms, to supply the Shredder with extraterrestrial technology. In the 2012 series, Krang is referred to as Kraang Prime, and is a deranged Utrom who had mind-controlled most of the Utrom populace into becoming a subservient, rogue hive mind faction known as "the Kraang". In Rise of the TMNT: The Movie, Krang is referred to as Krang Leader (credited as Krang One), who leads his siblings, Krang Sister (credited as Krang Two) and Krang Brother (credited as Krang Three).
Abilities
In the final season of the 1987 animated series, Krang showed signs of psychic powers when he hypnotized one of Lord Dregg's soldiers into obeying his and Shredder's commands, saying it would only work on weak-willed people. Throughout the rest of the show, as well as most other appearances, Krang's most notable combat ability is weaponry that he can swap in for his android body's hands – his most commonly seen weapons are swords, maces, and blasters.
Relating to the Utroms
Krang's physical appearance was inspired by the Utroms from the original TMNT comic book. In several subsequent series, such as the 2012 IDW comic series, he is himself a member of the Utrom species.
1987 series
Prior to the start of the 1987 cartoon, Krang was a reptilian creature in command of an army of Rock Soldiers under the leadership of General Traag, and took the completed station called the Technodrome, a powerful mobile battle fortress, and banished Von Drakus, who helped Krang build it, to Earth. When he was banished from Dimension X, Krang was stripped of his body and reduced to a brain-like form forced to use small android walkers and/or small platforms to move. While on Earth, Krang allied himself with the Shredder, who, along with his robotic Foot Soldier army, moved into the Technodrome. In exchange, the Shredder had to design and build a new body for Krang, a human-shaped exo-suit referred to as his "android body", which he eventually turns giant and uses to attack the Turtles. Shredder lived up to his part of the bargain in the season 1 episode "Shredder & Splintered", in no small part because he was unable to deal with the Turtles and needed Krang's help. In the season 3 episode "Shredderville", the Turtles have a dream of a parallel world in which they never lived, and Shredder had no problem taking over the world; in that world, Shredder abandoned Krang after his conquest was complete, leaving him with no body and a heavily-damaged Technodrome. Krang's ultimate goal is to take over the Earth; it probably only became his objective after he was exiled on the Earth, but this point is never made clear.
Every plan Krang conceives is either aimed at that goal or at the short-term objective of powering up the Technodrome. He does not share Shredder's obsession with the Turtles and Splinter; while Shredder sees them as mortal enemies, Krang seems to regard them more as annoyances to be destroyed when they interfere in his plans. He does have his own "version" of the Turtles, however: the Neutrinos, a rebellious group of teens from Dimension X, have a relationship to Krang much like the one Shredder has to the TMNT. Counting from the first meeting between the Turtles and Shredder and Krang, Krang spent seven seasons in the Technodrome, either somewhere on Earth or in Dimension X, scheming to power up his battle fortress and take over the Earth. Eventually the Turtles managed to banish the Technodrome back to Dimension X without Krang and Shredder. At that point they began operating out of an old science building. Krang and Shredder eventually returned to the Technodrome in the season 8 episode "Turtle Trek", but the Turtles destroy the engines of the Technodrome, trapping it and its inhabitants in Dimension X and putting an end to Krang's plans. Krang spent the next two years in Dimension X, until he was contacted by Dregg. Dregg arranged for him and Shredder to come back to Earth, to help him fight the Turtles. However, Dregg betrays them and drains Krang's intelligence. Shredder escapes and restores Krang, but Dregg captures them again. Finally, the Turtles spoil his plan and transport Shredder and Krang back to Dimension X. In the series finale, "Divide and Conquer", the Turtles return to the Technodrome to take Krang's android body, which they need to fight Dregg. Krang is nowhere to be seen, but it is assumed that he is still somewhere in Dimension X.
IDW Comics
In the IDW Comics, Krang is both an Utrom and a denizen of Dimension X. He is the heir of Quanin, the former Prime Minister of the Utroms' ruling council who appointed himself Emperor and aggressively expanded the Utrom domain into an empire. However, his megalomaniacal expansion drive both deprived his home planet of its most essential natural resource, the Ooze, and incited rebellion among the subjugated people of Dimension X, eventually leading to the destruction of Utrominon. Krang, who was as brutal as his father but opposed his incautious politics, fled with a few survivors of his people through an interdimensional portal to Burnow Island on Earth, where he established a base from which he intended to terraform this world into a new home for his people, which he calls "new Utrominon". In order to augment his troops, Krang, initially disguised as a despotic human warlord, forms a business relationship with Baxter Stockman, head of the genetics research institute Stock Gen, and supplies him with Ooze, which could be used as a natural mutagen on Earth's organisms. Krang seeks this mutagen to use in healing the surviving Utroms he took with him from Utrominon. It is through Stockman's experiments that the Teenage Mutant Ninja Turtles and Splinter evolve into intelligent, humanoid mutants.
When the Turtles learn of Krang's genocidal plans thanks to their human friend April O'Neil, a former intern at Stock Gen, they, together with their ally the Fugitoid (a former Neutrino scientist whose mind is trapped in a robot body and who was forcibly conscripted by Krang to complete his terraforming machine, the Technodrome) and the Foot Clan, stop Krang from destroying the Earth, and the Utrom warlord is surrendered to the Neutrinos for trial for his numerous war crimes. While imprisoned on Neutrino, Krang hires the bounty hunter Hakk-R to eliminate several material witnesses in order to get the trial cancelled, but Hakk-R fails thanks to the efforts of the Turtles. Eventually, Krang is found guilty and sentenced to permanent exile from Dimension X on Earth. However, Leatherhead, one of his former victims and a key witness in the trial, refuses to accept the mild verdict and kills Krang by devouring him. However, as the Fugitoid belatedly realizes, the Utroms possess a natural parasitic physiology, enabling Krang to regenerate himself and take possession of Leatherhead's body. He later joins Baxter Stockman and Madame Null in their alliance with the Rat King to bring about the demigod's "Armageddon Game", and receives a restored Metalhead as a new exobody. He still continues to work on his own schemes, but his leadership of the Utroms is usurped by his former subordinate Ch'rell, and he is executed by King Zenter before he can destroy the Earth out of spite.
2003 series
Krang makes a small cameo appearance in the episode "Secret Origins Part 3" of the 2003 series. As the Utroms are all walking to the transmat to go back home, one of them complains, "I hate walking on my tentacles," to which another Utrom replies, "Oh, shut up, Krang!". This Krang was voiced by Wayne Grayson. The evil Utrom Ch'rell serves as that series' sole, heartless version of both Krang and Oroku Saki. Krang also appears in the 2009 crossover film Turtles Forever, in which he, Shredder and the Turtles from the 1987 show end up in the 2003 universe. Although Shredder was able to find his 2003 counterpart, he was unable to find Krang's, even though he exists in this universe (albeit as a regular, non-evil Utrom). Krang is voiced here by Bradford Cameron.
2012 series
An alien species based on both Krang and the Utroms appears in the 2012 Nickelodeon show, named "the Kraang". Kraang Prime is the leader of the hive mind; he was a normal Utrom scientist until he created the mutagen, which he used to mutate himself into Kraang Prime. He then used his powers to enslave most of the Utroms into becoming hive-mind slaves. In the 40th Anniversary Comics Celebration, it is revealed that after his final defeat, Kraang Prime's cells survived and began to absorb every living being they came across, eventually developing a consciousness and taking a humanoid form, renaming itself Kraang Primordius with the intent to consume everything. Given that the series introduces the 1987 show as an alternate universe, the original Krang makes an appearance, still voiced by Pat Fraley, said to be a cousin of Kraang Sub-Prime who wound up exiled to that dimension because he was a screw-up. He attempted to destroy the Mirage, 1987, and 2012 universes, the latter of which the Kraang had especially been trying to conquer, using Sub-Prime's desire to "wipe out the Turtles at any cost" as leverage.
Sub-Prime banishes him back to the 1987 universe once this is revealed, as this incompetence was why Krang was banished in the first place (the fact about the 1987 Krang being a cousin, and his exile, is non-canon to the 1987 series). The Kraang are voiced by Nolan North, who had previously voiced Raphael in the 2007 TMNT film, and Kraang Prime was initially voiced by Roseanne Barr and later by Rachael Butera. Kraang Sub-Prime is voiced by Gilbert Gottfried, and Pat Fraley reprises his role of Krang from the 1987 series.
2018 series
In the series Rise of the Teenage Mutant Ninja Turtles and its Netflix film sequel Rise of the Teenage Mutant Ninja Turtles: The Movie, the Krang are an alien species that landed on ancient Earth, bringing with them a mutagen known as Empyrean, which created the Yōkai race. During feudal times in Japan, the Krang gifted Oroku Saki, leader of the Foot Clan, with the dark armor Kuroi Yōroi, which allowed Saki to defeat the Foot's enemies but ended up possessing and transforming him into the evil Shredder and leading the Foot Clan into worshiping them. Eventually a group of warriors created the mystic weapon key and used it to banish the Krang into another realm for a thousand years. During the series finale, Shredder unearths the remains of a Krang inside a buried ship while looking for Empyrean to fulfill his goals. By the time the Foot opens the portal to set them free, only three of them have survived their exile. They then possess the members of the Foot Clan and turn them into monstrous minions (with the same fate later befalling Raph, until Leo snaps him out of it) and proceed to take over the highest building of the city in order to open a portal big enough for their ship, the Technodrome, to cross over. Unlike previous versions of the Krang, who mostly relied on their intellect, these Krang are more powerful and deadly, capable of fighting without the use of any kind of tech, and virtually unstoppable in their suits. Their method of mutation also greatly differs from prior incarnations in that they utilise a form of bio-growth that usually takes over or otherwise transmutes anything it touches, to the point that it can puppeteer inorganic matter. Their members include the mastermind behind their plan, who leads the other two; a female Krang, who leads the possessed slaves into battle and has a temper; and a silent one, who is in charge of spreading the bio-growth, creating the portal and piloting the Technodrome (which in this series is techno-organic). The female one lost her right eye at the hands of April and was defeated by her, Splinter and Casey Jones and later captured by humans; the silent one was restrained by Donatello when he seized control of the Technodrome and was presumably destroyed with the ship; and the leader was exiled again at the hands of Leonardo. Krang Leader is voiced by Jim Pirri and Krang Two is voiced by Toks Olagundoye.
Video games
In Teenage Mutant Ninja Turtles: Smash-Up, one of the playable characters is a Utrominator drone, an Utrom enslaved by Shredder in the 2003 series; despite not actually being Krang, it acts as a stand-in for him. The Kraang are one of the enemies in Teenage Mutant Ninja Turtles: Out of the Shadows, where the Turtles infiltrate the TCRI building in search of the Shredder, who has been stealing Kraang technology so that Baxter Stockman can invent him a telepathic helmet as a way to defeat the Turtles. General Krang is the secondary antagonist in Teenage Mutant Ninja Turtles: Mutants in Manhattan, where he teams up with Shredder to distract the Turtles so his Foot Soldiers and mutant allies can collect alien parts to construct a giant portal to Dimension X, through which Krang will launch an invasion of Earth. In Teenage Mutant Ninja Turtles: Shredder's Revenge, the parts of Krang's android body are scattered for the villains to recover and repair; however, this is actually a distraction, as he is in fact turning the Statue of Liberty into a new body called the "Statue of Tyranny".
References External links Krang's profile at the Nickelodeon website Krang's Android Body on X-Entertainment. Krang - A Tribute on The Rubber Chicken. Teenage Mutant Ninja Turtles characters Villains in animated television series Comics characters introduced in 1988 Fiction about cyborgs Extraterrestrial supervillains Fictional dictators Fictional reptilians Fictional warlords Male characters in animation Male characters in comics Television characters introduced in 1987 Television supervillains Video game bosses Film supervillains Animated characters introduced in 1987
Krang
[ "Biology" ]
3,388
[ "Fiction about cyborgs", "Cyborgs" ]
986,580
https://en.wikipedia.org/wiki/Tar%20paper
Tar paper, roofing paper, felt paper, underlayment, or roofing tar paper is a heavy-duty paper used in construction. Tar paper is made by impregnating paper with tar, producing a waterproof material useful for roof construction. Tar paper is similar to roofing felt, historically a felt-like fabric made from recycled rags impregnated with melted asphalt, and today evolving into a more complex underlayment of synthetic mesh or fiberglass strands waterproofed by synthetically enhanced asphalt. Description Tar paper has been in use for centuries. It is defined as a Grade D building paper—a designation derived from a federal specification in the United States. Sometimes anachronistically referred to as "building paper", tar paper is manufactured from virgin kraft paper (as opposed to the fabric-based or synthetic mesh substrates of roofing felt) impregnated with asphalt. The result is a lighter-weight but less durable product with similar properties to felt. Grade papers are rated in minutes: the amount of time it takes for a moisture-sensitive chemical indicator to change color when a small boat-like sample is floated on water. Common grades include 10-, 20-, 30-, and 60-minute. The higher the rating, the heavier and more moisture-resistant the paper. A typical 20-minute paper will weigh about per square, a 30-minute paper per square, and a 60-minute paper about per square. The smaller volume of material, however, does tend to make these papers less resistant to moisture than heavier felts. Uses Tar paper is used as a roofing underlayment with asphalt, wood, shake, and other roof shingles as a form of intermediate bituminous waterproofing. It is sold in rolls of various widths, lengths, and thicknesses – rolls, long and "15 lb" () and "30 lb" () weights are common in the U.S. – often marked with chalk lines at certain intervals to aid in laying it out straight on roofs with the proper overlap (more overlap for flatter roofs). It is typically stapled in place, or held with roofing nails, and is sometimes applied in several layers with hot asphalt, cold asphalt (adhesive), or non-asphaltic adhesives. Older construction typically used a lighter-weight tar paper, stapled up with some overlap, as a water- and wind-proofing material on walls, largely displaced in recent decades by breathable plastic housewrap, commonly in widths. In the 19th and early 20th centuries, shacks of wooden frames covered with tar paper were a common form of temporary structure or very low-cost permanent housing in the rural United States and Canada, particularly in the temperate American South. References Building materials Paper Roofs Moisture protection Roofing materials
Tar paper
[ "Physics", "Technology", "Engineering" ]
578
[ "Structural engineering", "Building engineering", "Architecture", "Structural system", "Construction", "Materials", "Roofs", "Matter", "Building materials" ]
986,871
https://en.wikipedia.org/wiki/Drug%20test
A drug test (also often called a toxicology screen or tox screen) is a technical analysis of a biological specimen (for example urine, hair, blood, breath, sweat, or oral fluid/saliva) to determine the presence or absence of specified parent drugs or their metabolites. Major applications of drug testing include detection of the presence of performance enhancing steroids in sport, employers and parole/probation officers screening for drugs prohibited by law (such as cocaine, methamphetamine, and heroin), and police officers testing for the presence and concentration of alcohol (ethanol) in the blood, commonly referred to as BAC (blood alcohol content). BAC tests are typically administered via a breathalyzer, while urinalysis is used for the vast majority of drug testing in sports and the workplace. Numerous other methods with varying degrees of accuracy, sensitivity (detection threshold/cutoff), and detection periods exist. A drug test may also refer to a test that provides quantitative chemical analysis of an illegal drug, typically intended to help with responsible drug use.
Detection periods
The detection windows depend upon multiple factors: drug class, amount and frequency of use, metabolic rate, body mass, age, overall health, and urine pH. For ease of use, the detection times of metabolites have been incorporated into each parent drug. For example, heroin and cocaine can only be detected for a few hours after use, but their metabolites can be detected for several days in urine. The chart depicts the longer detection times of the metabolites. In the case of hair testing, the metabolites are permanently embedded into hair, and the detection time is determined by the length of the hair sample used in the analysis. The standard length of head hair used in the test is 1.5", which corresponds to about 3 months. Body/pubic hair grows slower, and the same 1.5" would result in a longer detection time. Oral fluid or saliva testing results for the most part mimic those of blood. The only exceptions are THC (tetrahydrocannabinol) and benzodiazepines. Oral fluid will likely detect THC from ingestion up to a maximum period of 6–12 hours. This continues to cause difficulty in oral fluid detection of THC and benzodiazepines. Breath testing for the most part mimics blood tests as well. Due to the very low levels of substances in breath air, liquid chromatography–mass spectrometry has to be used to analyze the sample, according to a recent publication wherein 12 analytes were investigated. Rapid oral fluid products are not approved for use in workplace drug testing programs and are not FDA cleared. Using rapid oral fluid drug tests in the workplace is prohibited only in: California, Kansas, Maine, Minnesota, New York, and Vermont. The following chart gives approximate detection periods for each substance by test type.
Types
Urine drug screen
Urine analysis is primarily used because of its low cost. Urine drug testing is one of the most common testing methods used. The enzyme-multiplied immunoassay technique is the most frequently used urinalysis. Complaints have been made about the relatively high rates of false positives using this test. Urine drug tests screen the urine for the presence of a parent drug or its metabolites. The level of drug or its metabolites is not predictive of when the drug was taken or how much the patient used. Urine drug testing is an immunoassay based on the principle of competitive binding.
Drugs which may be present in the urine specimen compete against their respective drug conjugate for binding sites on their specific antibody. During testing, a urine specimen migrates upward by capillary action. A drug, if present in the urine specimen below its cut-off concentration, will not saturate the binding sites of its specific antibody. The antibody will then react with the drug-protein conjugate and a visible colored line will show up in the test line region of the specific drug strip. A common misconception is that a drug test that is testing for a class of drugs, for example opioids, will detect all drugs of that class. However, most opioid tests will not reliably detect oxycodone, oxymorphone, meperidine, or fentanyl. Likewise, most benzodiazepine drug tests will not reliably detect lorazepam. However, urine drug screens that test for a specific drug, rather than an entire class, are often available.

When an employer requests a drug test from an employee, or a physician requests a drug test from a patient, the employee or patient is typically instructed to go to a collection site or their home. The urine sample goes through a specified "chain of custody" to ensure that it is not tampered with or invalidated through lab or employee error. The patient or employee's urine is collected at a remote location in a specially designed secure cup, sealed with tamper-resistant tape, and sent to a testing laboratory to be screened for drugs (typically the Substance Abuse and Mental Health Services Administration 5-panel). The first step at the testing site is to split the urine into two aliquots. One aliquot is first screened for drugs using an analyzer that performs an immunoassay as the initial screen. To ensure specimen integrity and to detect possible adulterants, additional parameters are tested for. Some test the properties of normal urine, such as urine creatinine, pH, and specific gravity. Others are intended to catch substances added to the urine to alter the test result, such as oxidants (including bleach), nitrites, and glutaraldehyde. If the urine screen is positive, then another aliquot of the sample is used to confirm the findings by gas chromatography–mass spectrometry (GC-MS) or liquid chromatography–mass spectrometry methodology. If requested by the physician or employer, certain drugs are screened for individually; these are generally drugs that are part of a chemical class considered, for one of many reasons, more habit-forming or of concern. For instance, oxycodone and diamorphine, both sedative analgesics, may be tested. If such a test is not requested specifically, the more general test (in the preceding case, the test for opioids) will detect most of the drugs of a class, but the employer or physician will not have the benefit of the identity of the drug.

Employment-related test results are relayed to a medical review office (MRO), where a medical physician reviews the results. If the result of the screen is negative, the MRO informs the employer that the employee has no detectable drug in the urine, typically within 24 hours. However, if the test results of the immunoassay and GC-MS are non-negative and show a concentration level of parent drug or metabolite above the established limit, the MRO contacts the employee to determine if there is any legitimate reason—such as a medical treatment or prescription.
On-site instant drug testing is a more cost-efficient method of effectively detecting substance use amongst employees, as well as in rehabilitation programs to monitor patient progress. These instant tests can be used for both urine and saliva testing. Although the accuracy of such tests varies with the manufacturer, some kits have rates of accuracy correlating closely with laboratory test results.
Breath test
The breath test is a widespread method for quickly determining alcohol intoxication. A breath test measures the alcohol concentration in the body from a deep-lung breath. There are different instruments used for measuring the alcohol content of an individual through their breath. The Breathalyzer, developed in 1954, is a widely known instrument that, unlike other breath-testing instruments, relied on chemical reactions. More modern instruments are infrared light-absorption devices and fuel cell detectors; these two testers are microprocessor-controlled, meaning the operator only has to press the start button. To get accurate readings on a breath-testing device, the individual must blow for approximately 6 seconds and provide roughly 1.1 to 1.5 liters of breath. For a breath test to give an accurate and true result, the operator must take steps such as avoiding measuring "mouth alcohol", which results from regurgitation, belching, or recent intake of an alcoholic beverage. To avoid measuring "mouth alcohol", the operator must not allow the individual taking the test to consume anything for at least fifteen minutes before the breath test. In the United States, if an individual pulled over for a driving violation refuses to take a breath test, that individual's driver's license can be suspended for a period of 6 to 12 months.
Hair testing
Hair analysis to detect addictive substances has been used by court systems in the United States, United Kingdom, Canada, and other countries worldwide. In the United States, hair testing has been accepted in court cases as forensic evidence following the Frye Rule, the Federal Rules of Evidence, and the Daubert Rule. As such, hair testing results are legally and scientifically recognized as admissible evidence. Hair testing is commonly used in the USA as a pre-employment drug test. The detection time for this test is roughly 3 months, which is the time it takes head hair to grow the ca. 1.5 inches that are collected as a specimen. Longer detection times are possible with longer hair samples. A 2014 collaborative US study of 359 adults with moderate-risk drug use found that a large number of participants who reported drug use in the last 3 months had negative hair tests. The tests were done using an immunoassay followed by a confirmatory GC-MS. For marijuana, only about half of self-disclosed users had a positive hair test. Under-identification of drug use by hair testing (or over-reporting) was also widespread for cocaine, amphetamines, and opioids. Because such under-identification was more common among participants who self-reported infrequent use, the authors suggested that the immunoassay did not have the sensitivity required for such infrequent use. It is worth noting that most earlier studies reported hair tests finding a ca. 50-fold higher prevalence of illicit drug use than self-reports. In late 2022 the US Federal Motor Carrier Safety Administration denied a petition to recognize hair samples as an alternative drug-testing method for truckers (urine samples are currently used).
The agency did not comment on the test validity, but rather stated that it lacks the statutory authority to adopt new analytical methods. Although some lower courts may have accepted hair test evidence, there is no controlling judicial ruling in either the federal or any state system declaring any type of hair test as reliable. Hair testing is now recognized in both the UK and US judicial systems. There are guidelines for hair testing that have been published by the Society of Hair Testing (a private company in France) that specify the markers to be tested for and the cutoff concentrations to be used. Addictive substances that can be detected include cannabis, cocaine, amphetamines and drugs new to the UK such as mephedrone.
Alcohol
In contrast to other drugs consumed, alcohol is not deposited directly in the hair. For this reason the investigation procedure looks for direct products of ethanol metabolism. The main part of alcohol is oxidized in the human body, meaning it is released as water and carbon dioxide. One part of the alcohol reacts with fatty acids to produce esters. The sum of the concentrations of four of these fatty acid ethyl esters (FAEEs: ethyl myristate, ethyl palmitate, ethyl oleate and ethyl stearate) is used as an indicator of alcohol consumption. The amounts found in hair are measured in nanograms (one nanogram equals only one billionth of a gram); however, with the benefit of modern technology, it is possible to detect such small amounts. In the detection of ethyl glucuronide, or EtG, testing can detect amounts in picograms (one picogram equals 0.001 nanograms). However, there is one major difference between most drugs and alcohol metabolites in the way in which they enter into the hair: on the one hand, like other drugs, FAEEs enter into the hair via the keratinocytes, the cells responsible for hair growth. These cells form the hair in the root and then grow through the skin surface taking any substances with them. On the other hand, the sebaceous glands produce FAEEs in the scalp, and these migrate together with the sebum along the hair shaft (Auwärter et al., 2001, Pragst et al., 2004). So these glands lubricate not only the part of the hair that is just growing at 0.3 mm per day on the skin surface, but also the more mature hair growth, providing it with a protective layer of fat. FAEEs appear in hair at concentrations almost one order of magnitude lower (nanogram = one billionth of a gram) than the relevant concentrations of EtG (picogram = one trillionth of a gram). It has been technically possible to measure FAEEs since 1993, and the first study reporting the detection of EtG in hair was done by Sachs in 1993. In practice, most hair which is sent for analysis has been cosmetically treated in some way (bleached, permed etc.). It has been proven that FAEEs are not significantly affected by such treatments (Hartwig et al., 2003a). FAEE concentrations in hair from other body sites can be interpreted in a similar fashion as scalp hair (Hartwig et al., 2003b).
Presumptive substance testing
Presumptive substance tests attempt to identify a suspicious substance, material or surface where traces of drugs are thought to be, instead of testing individuals through biological methods such as urine or hair testing. The test involves mixing the suspicious material with a chemical in order to trigger a color change to indicate if a drug is present. Most are now available over-the-counter for consumer use, and do not require a lab to read results.
Benefits to this method include that the person who is suspected of drug use does not need to be confronted or aware of testing. Only a very small amount of material is needed to obtain results, and the method can be used to test powder, pills, capsules, crystals, or organic material. There is also the ability to detect illicit material when mixed with other non-illicit materials. The tests are used for general screening purposes, offering a generic result for the presence of a wide range of drugs, including heroin, cocaine, methamphetamine, amphetamine, ecstasy/MDMA, methadone, ketamine, PCP, PMA, DMT, and MDPV, and may detect rapidly evolving synthetic designer drugs. Separate tests for marijuana/hashish are also available. There are five primary color-test reagents used for general screening purposes. The Marquis reagent turns a variety of colors in the presence of different substances. The Dille-Koppanyi reagent uses two chemical solutions which turn a violet-blue color in the presence of barbiturates. The Duquenois-Levine reagent is a series of chemical solutions that turn purple when marijuana vegetation is added. The Van Urk reagent turns blue-purple in the presence of LSD. The Scott test's chemical solution turns a faint blue for cocaine base. In recent years, the use of presumptive test kits in the criminal justice system has come under great scrutiny due to the lack of forensic studies, questioned reliability, false positives rendered by legal substances, and wrongful arrests.
Saliva drug screen / oral fluid-based drug screen
Saliva/oral fluid-based drug tests can generally detect use during the previous few days. This type is better at detecting very recent use of a substance. THC may only be detectable for 2–24 hours in most cases. On-site drug tests are allowed per the Department of Labor. Detection in saliva tests begins almost immediately upon use of the following substances, and lasts for approximately the following times:
Alcohol: 6–12 hours
Marijuana: 1–24 hours
A disadvantage of saliva-based drug testing is that it is not approved by the FDA or SAMHSA for use with DOT/federally mandated drug testing. Oral fluid is not considered a biohazard unless there is visible blood; however, it should be treated with care.
Sweat drug screen
Sweat patches are attached to the skin to collect sweat over a long period of time (up to 14 days). These are used by child protective services, parole departments, and other government institutions concerned with drug use over long periods, when urine testing is not practical. There are also surface drug tests that test for the metabolite of parent drug groups in the residue of drugs left in sweat. An example of a rapid, non-invasive, sweat-based drug test is fingerprint drug screening. This 10-minute fingerprint test is in use by a variety of organisations in the UK and beyond, including within workplaces, drug treatment and family safeguarding services, at airport border control (to detect drug mules), and in mortuaries to assist in investigations into cause of death.
Blood
Drug-testing a blood sample measures whether or not a drug or a metabolite is in the body at a particular time. These types of tests are considered to be the most accurate way of telling if a person is intoxicated. Blood drug tests are not used very often because they need specialized equipment and medically trained administrators. Depending on how much marijuana was consumed, it can usually be detected in blood tests within six hours of consumption.
After six hours have passed, the concentration of marijuana in the blood decreases significantly; it generally disappears completely within 30 days. Random drug testing Testing can occur at any time, usually when the investigator has reason to believe, based on the subject's behavior, that a substance is possibly being used, or immediately after an employee-related incident occurs during work hours. Testing protocol typically conforms to the national medical standard: candidates are given up to 120 minutes from the time of commencement to reasonably produce a urine sample (in some instances this time frame may be extended at the examiner's discretion). Diagnostic screening In the case of life-threatening symptoms, unconsciousness, or bizarre behavior in an emergency situation, screening for common drugs and toxins may help find the cause; such a screening is called a toxicology test or tox screen, denoting the broader range of possible substances covered beyond just self-administered drugs. These tests can also be done post-mortem during an autopsy in cases where a death was not expected. The test is usually done within 96 hours (4 days) after the need for it arises. Both a urine sample and a blood sample may be tested. A blood sample is routinely used to detect ethanol/methanol and ASA/paracetamol intoxication. Various panels are used for screening urine samples for common substances, e.g. the Triage 8 panel, which detects amphetamines, benzodiazepines, cocaine, methadone, opiates, cannabis, barbiturates and tricyclic antidepressants. Results are given in 10–15 min. Similar screenings may be used to evaluate the possible use of date rape drugs; this is usually done on a urine sample. Optional harm reduction scheme Drug checks/tests (also known as pill testing) are provided at some events such as concerts and music festivals. Attendees can voluntarily hand over a sample of any drug or drugs in their possession to be tested to check what the drug is and its purity. The scheme is used as a harm reduction technique so people are more aware of what they are taking and the potential risks. Occupational harm reduction strategies Drug and alcohol impairment while at work increases the risk of workplace accidents and decreases productivity. Employers such as the commercial driving and airline industries may conduct random drug tests on employees with the goal of deterring use to improve safety. There is some evidence that increasing the use of random drug testing in the airline industry reduces the percentage of people who test positive; however, it is unclear whether this decrease is associated with a corresponding decrease in fatal or non-fatal injuries, other accidents, or the number of days absent from work. It is also not clear whether there are other unwanted side effects that may result from random drug and alcohol testing in the workplace. Commonly tested substances Anabolic steroids Anabolic steroids are used to enhance performance in sports, and as they are prohibited in most high-level competitions, drug testing is used extensively in order to enforce this prohibition. This is particularly so in individual (rather than team) sports such as athletics and cycling. Methodologies Before testing samples, the tamper-evident seal is checked for integrity. If it appears to have been tampered with or damaged, the laboratory rejects the sample and does not test it. Next, the sample must be made testable. Urine and oral fluid can be used "as is" for some tests, but other tests require the drugs to be extracted from urine.
Strands of hair, patches, and blood must be prepared before testing. Hair is washed in order to eliminate second-hand sources of drugs on the surface of the hair, and then the keratin is broken down using enzymes. Blood plasma may need to be separated by centrifuge from blood cells prior to testing. Sweat patches are opened, and the sweat collection component is removed and soaked in a solvent to dissolve any drugs present. Laboratory-based drug testing is done in two steps. The first step is the screening test, an immunoassay-based test applied to all samples. The second step, known as the confirmation test, is usually undertaken by a laboratory using highly specific chromatographic techniques and is only applied to samples that test positive during the screening test. Screening tests are usually done by immunoassay (EMIT, ELISA, and RIA are the most common). A "dipstick" drug-testing method which could provide screening-test capabilities to field investigators has been developed at the University of Illinois. After a suspected positive sample is detected during screening, the sample is tested using a confirmation test. Samples that are negative on the screening test are discarded and reported as negative. The confirmation test in most laboratories (and all SAMHSA-certified labs) is performed using mass spectrometry, and is precise but expensive. False positive samples from the screening test will almost always be negative on the confirmation test. Samples testing positive during both screening and confirmation tests are reported as positive to the entity that ordered the test. Most laboratories save positive samples for some period of months or years in the event of a disputed result or lawsuit. For workplace drug testing, a positive result is generally not confirmed without a review by a Medical Review Officer, who will normally interview the subject of the drug test. Urine drug testing Urine drug test kits are available as on-site tests or laboratory analysis. Urinalysis is the most common test type; it is used by federally mandated drug-testing programs and is considered the gold standard of drug testing. Urine-based tests have been upheld in most courts for more than 30 years. However, urinalysis conducted by the Department of Defense has been challenged over the reliability of testing for the metabolites of cocaine. There are two associated metabolites of cocaine, benzoylecgonine (BZ) and ecgonine methyl ester (EME): the first (BZ) is created by the presence of cocaine in an aqueous solution with a pH greater than 7.0, while the second (EME) results from the actual human metabolic process. The presence of EME confirms actual ingestion of cocaine by a human being, while the presence of BZ is indicative only. BZ without EME is evidence of sample contamination; however, the US Department of Defense has chosen not to test for EME in its urinalysis program. A number of different analytes (the unknown substances being tested for) are available on urine drug screens. Spray drug testing Spray (sweat) drug test kits are non-invasive. It is a simple process to collect the required specimen: no bathroom is needed, no laboratory is required for analysis, and the tests themselves are difficult to manipulate and relatively tamper-resistant. The tests can detect recent drug use that occurred within the previous several hours. There are also some disadvantages to spray or sweat testing.
There is not much variety in these drug tests; only a limited number of drugs can be detected, prices tend to be higher, and inconclusive results can be produced by variations in sweat production rates in donors. They also have a relatively long specimen collection period and are more vulnerable to contamination than other common forms of testing. Hair drug testing Hair drug testing is a method that can detect drug use over a much longer period of time than saliva, sweat or urine tests, and it is also more robust with respect to tampering. Thus, hair sampling is preferred by the US military and by many large corporations, which are subject to the Drug-Free Workplace Act of 1988. Head hair normally grows at the rate of 0.5 inches per month; thus, the most common hair sample length of 1.5" from the scalp would detect drug use within the last 90–100 days. 80–120 strands of hair are sufficient for the test. In the absence of hair on the head, body hair can be used as an acceptable substitute, including facial hair, the underarms, arms, and legs, or even pubic hair. Because body hair usually grows more slowly than head hair, drugs can often be detected in body hair for longer periods, e.g. up to 12 months. Currently, most entities that use hair testing have prescribed consequences for individuals removing hair to avoid a hair drug test. Most drugs are analysed in hair samples not as the original psychoactive molecules, but rather as their metabolites. For example, ethanol is determined as ethyl glucuronide, while cocaine use is confirmed using ecgonine. Testing for metabolites reduces the likelihood of false positive results due to contamination. One disadvantage of hair testing is that it cannot detect recent drug use, because it takes at least a week after a drug intake for the metabolites to show up in growing hair above the skin. Urine tests are better suited for detecting recent (within a week) drug use. In a practical test, a hair sample is usually washed with a low-polarity solvent (such as dichloromethane) to remove surface contamination. Then the sample is pulverized and extracted with a more polar solvent, such as methanol. Although thousands of different substances can be determined in a single gas chromatography–mass spectrometry or liquid chromatography–mass spectrometry experiment, due to the low concentration of analytes, practical measurements (see selective ion monitoring) are limited to a smaller number (10–20) of analytes. Designer drugs are usually missed in such measurements, because the analyst must know in advance what chemicals to look for. Most hair-testing laboratories use the aforementioned chromatography–mass spectrometry methods for confirmation or for rarely tested drugs only; mass screening (preliminary or final) is usually done with immunoassays, because of their lower cost. Legality, ethics and politics The results of federally mandated drug testing were similar to the effects of simply extending to the trucking industry the right to perform drug tests, and it has been argued that the latter approach would have been as effective at lower cost. Psychologist Tony Buon has criticized the use of workplace drug testing on a number of grounds, including: Flawed technology: The real-world performance of testing is much lower than that claimed by its promoters. Buon suggests that tests are probably adequate for rehabilitation and treatment situations, possibly adequate for pre-employment situations, but not for dismissing employees.
Ethical issues: Because of the fairly simple ways that an employee can invalidate the test, drug testing must be strictly monitored. This means that the specimen must be observed leaving the body. Many legal objections currently being raised in the courts about drug testing point to legal requirements of prior notice, consent, due process, and cause. Wrong focus: As has been shown with Employee Assistance Programs, the focus of management concern should be on work performance decline. Buon suggests effective management practices are an infinitely better approach to managing workplace alcohol and other drug issues. Tony Buon has also been reported by the CIPD as stating that "drug testing captures the stupid—experienced drug users know how to beat the tests". From a penological standpoint, one purpose of drug testing is to help classify the people taking the drug test within risk groups, so that those who pose more of a danger to the public can be incapacitated through incarceration or other restrictions on liberty. Thus, drug testing serves a crime-control purpose even if there is no expectation of rehabilitating the drug user through treatment, deterring drug use through sanctions, or sending a message that drug use is a deviant behavior that will not be tolerated. United Kingdom A study in 2004 by the Independent Inquiry into Drug Testing at Work found that attempts by employers to force employees to take drug tests could potentially be challenged as a violation of privacy under the Human Rights Act 1998 and Article 8 of the European Convention on Human Rights. However, this does not apply to industries where drug testing is a matter of personal and public safety or security rather than productivity. United States In consultation with Dr. Carlton Turner, President Ronald Reagan issued Executive Order 12564. In doing so, he instituted mandatory drug testing for all safety-sensitive executive-level and civil-service federal employees. This was challenged in the courts by the National Treasury Employees Union. In 1988, this challenge was considered by the US Supreme Court. A similar challenge resulted in the Court extending the drug-free workplace concept to the private sector. These decisions were then incorporated into the White House Drug Control Strategy directive issued by President George H.W. Bush in 1989. All defendants serving on federal probation or federal supervised release are required to submit to at least three drug tests. Failing a drug test can be construed as possession of a controlled substance, resulting in mandatory revocation and imprisonment. There have been inconsistent evaluation results as to whether continued pretrial drug testing has beneficial effects. Testing positive can lead to bail not being granted, or, if bail has already been granted, to bail revocation or other sanctions. Arizona also adopted a law in 1987 authorizing mandatory drug testing of felony arrestees for the purpose of informing the pretrial release decision, and the District of Columbia has had a similar law since the 1970s. It has been argued that one of the problems with such testing is that there is often not enough time between the arrest and the bail decision to confirm positive results using GC/MS technology.
It has also been argued that such testing potentially implicates the Fifth Amendment privilege against self-incrimination, the right to due process (including the prohibition against gathering evidence in a manner that shocks the conscience or constitutes outrageous government conduct), and the prohibition against unreasonable searches and seizures contained in the Fourth Amendment. According to Henriksson, the anti-drug appeals of the Reagan administration "created an environment in which many employers felt compelled to implement drug testing programs because failure to do so might be perceived as condoning drug use. This fear was easily exploited by aggressive marketing and sales forces, who often overstated the value of testing and painted a bleak picture of the consequences of failing to use the drug testing product or service being offered." On March 10, 1986, the Commission on Organized Crime asked all U.S. companies to test employees for drug use. By 1987, nearly 25% of the Fortune 500 companies used drug tests. According to an uncontrolled self-report study done by DATIA and the Society for Human Resource Management in 2012 (a sample of 6,000 randomly selected human resource professionals), human resource professionals reported the following results after implementing a drug testing program: 19% of companies reported a subjective increase in employee productivity, 16% reported a decrease in employee turnover (8% reported an increase), and unspecified percentages reported decreases in absenteeism and improvements in workers' compensation incidence rates. According to the US Chamber of Commerce, 70% of all illicit drug users are employed. Some industries have high rates of employee drug use, such as construction (12.8%), repair (11.1%), and hospitality (7.9–16.3%). Australia A person conducting a business or undertaking (PCBU, the new term that includes employers) has duties under work health and safety (WHS) legislation to ensure a worker affected by alcohol or other drugs does not place themselves or other persons at risk of injury while at work. Workplace policies and prevention programs can help change the norms and culture around substance use. All organisations, large and small, can benefit from an agreed policy on alcohol and drug misuse that applies to all workers. Such a policy should form part of an organisation's overall health and safety management system. PCBUs are encouraged to establish a policy and procedure, in consultation with workers, to constructively manage alcohol and other drug-related hazards in their workplace. A comprehensive workplace alcohol and other drug policy should apply to everyone in the workplace and include prevention, education, counselling and rehabilitation arrangements. In addition, the roles and responsibilities of managers and supervisors should be clearly outlined. All Australian workplace drug testing must comply with Australian standard AS/NZS 4308:2008. In Victoria, roadside saliva tests detect drugs that contain: THC (delta-9-tetrahydrocannabinol), the active component in cannabis; methamphetamine, also known as "ice", "crystal" and "crank"; and MDMA (methylenedioxymethamphetamine), which is known as ecstasy. In February 2016 a New South Wales magistrate "acquitted a man who tested positive for cannabis". He had been arrested and charged after testing positive during a roadside drug test, despite not having smoked for nine days. He had been relying on advice previously given to him by police.
Refusal In the United States federal criminal system, refusing to take a drug test triggers an automatic revocation of probation or supervised release. In Victoria, Australia, the driver of a car has the option to refuse a drug test, but refusal carries penalties: refusing to undergo a drug test, or refusing to undergo a secondary drug test after the first one, triggers an automatic suspension and disqualification for a period of two years and a fine of AUD$1000. A second refusal triggers an automatic suspension and disqualification for a period of four years and an even larger fine. Historical cases In 2000, an Australian mining company, South Blackwater Coal Ltd, with 400 employees, imposed drug-testing procedures, and the trade unions advised their members to refuse to take the tests, partly because a positive result does not necessarily indicate present impairment; the workers were stood down by the company without pay for a week. In 2003, sixteen members of the Chicago White Sox considered refusing to take a drug test, in hopes of making steroid testing mandatory. In 2006, Levy County, Florida, volunteer librarians resigned en masse rather than take drug tests. In 2010, Iranian super-heavyweight class weightlifters refused to submit to a drug test authorized by the Iran Weightlifting League. See also Cannabis drug tests Detoxification Drug checking Drug testing welfare recipients Drug-Free Workplace Act of 1988 Equine drug testing Forensic toxicology Lacing (drugs) Poppy seed defence Presumptive and confirmatory tests Reagent testing School district drug policies Urinalysis Occupational health concerns of cannabis use References External links National Institute on Drug Abuse Drug control law Doping in sport Workplace programs Medical tests Addiction medicine
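As a computational footnote to the two-step methodology described earlier (an immunoassay screen on every sample, with chromatographic/mass-spectrometric confirmation only of presumptive positives), the following Python sketch models that decision flow; the class, field and function names are illustrative assumptions, not a real laboratory system.

```python
# Sketch of the screen-then-confirm flow described under "Methodologies":
# every sample gets an immunoassay screen; only presumptive positives
# proceed to the (more expensive) confirmation step. Illustrative only.

from dataclasses import dataclass

@dataclass
class Sample:
    sample_id: str
    seal_intact: bool
    screen_positive: bool    # immunoassay result (e.g. EMIT/ELISA/RIA)
    confirm_positive: bool   # chromatography/mass-spectrometry result, if run

def process(sample: Sample) -> str:
    if not sample.seal_intact:
        return "rejected: tamper-evident seal damaged; sample not tested"
    if not sample.screen_positive:
        return "negative: reported as such, sample discarded"
    if sample.confirm_positive:
        # Positive samples are retained for months or years, and in
        # workplace testing a Medical Review Officer reviews the result.
        return "positive: reported after confirmation; sample retained"
    return "negative: screening result not confirmed"

if __name__ == "__main__":
    print(process(Sample("A1", seal_intact=True,
                         screen_positive=True, confirm_positive=False)))
```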
Drug test
[ "Chemistry" ]
7,377
[ "Drug control law", "Regulation of chemicals" ]
986,932
https://en.wikipedia.org/wiki/Symbolic%20method%20%28combinatorics%29
In combinatorics, the symbolic method is a technique for counting combinatorial objects. It uses the internal structure of the objects to derive formulas for their generating functions. The method is mostly associated with Philippe Flajolet and is detailed in Part A of his book with Robert Sedgewick, Analytic Combinatorics, while the rest of the book explains how to use complex analysis in order to get asymptotic and probabilistic results on the corresponding generating functions. For two centuries, generating functions kept appearing via the corresponding recurrences on their coefficients (as can be seen in the seminal works of Bernoulli, Euler, Arthur Cayley, Schröder, Ramanujan, Riordan, Knuth, etc.). It was then slowly realized that the generating functions were capturing many other facets of the initial discrete combinatorial objects, and that this could be done in a more direct formal way: the recursive nature of some combinatorial structures translates, via some isomorphisms, into noteworthy identities on the corresponding generating functions. Following the works of Pólya, further advances were thus made in this spirit in the 1970s with generic uses of languages for specifying combinatorial classes and their generating functions, as found in works by Foata and Schützenberger on permutations, Bender and Goldman on prefabs, and Joyal on combinatorial species. Note that this symbolic method in enumeration is unrelated to "Blissard's symbolic method", which is just another old name for umbral calculus. The symbolic method in combinatorics constitutes the first step of many analyses of combinatorial structures, which can then lead to fast computation schemes, to asymptotic properties and limit laws, and to random generation, all of them being amenable to automation via computer algebra. Classes of combinatorial structures Consider the problem of distributing objects given by a generating function into a set of n slots, where a permutation group G of degree n acts on the slots to create an equivalence relation of filled slot configurations, and asking about the generating function of the configurations, by weight, with respect to this equivalence relation, where the weight of a configuration is the sum of the weights of the objects in the slots. We will first explain how to solve this problem in the labelled and the unlabelled case and use the solution to motivate the creation of classes of combinatorial structures. The Pólya enumeration theorem solves this problem in the unlabelled case. Let f(z) be the ordinary generating function (OGF) of the objects; then the OGF of the configurations is given by the substituted cycle index $Z(G)(f(z), f(z^2), \ldots, f(z^n))$. In the labelled case we use an exponential generating function (EGF) g(z) of the objects and apply the labelled enumeration theorem, which says that the EGF of the configurations is given by $\frac{g(z)^n}{|G|}$. We are able to enumerate filled slot configurations using either PET in the unlabelled case or the labelled enumeration theorem in the labelled case. We now ask about the generating function of configurations obtained when there is more than one set of slots, with a permutation group acting on each. Clearly the orbits do not intersect and we may add the respective generating functions. Suppose, for example, that we want to enumerate unlabelled sequences of length two or three of some objects contained in a set X. There are two sets of slots, the first one containing two slots, and the second one, three slots.
The group acting on the first set is $E_2$ (the trivial group on two slots, since order matters in a sequence), and on the second set, $E_3$. We represent this by the following formal power series in X: $\frac{X^2}{E_2} + \frac{X^3}{E_3}$, where the term $X^n/G$ is used to denote the set of orbits under G and $X^n = X \times \cdots \times X$, which denotes in the obvious way the process of distributing the objects from X with repetition into the n slots. Similarly, consider the labelled problem of creating cycles of arbitrary length from a set of labelled objects X. This yields the following series of actions of cyclic groups: $\frac{X}{C_1} + \frac{X^2}{C_2} + \frac{X^3}{C_3} + \cdots$. Clearly we can assign meaning to any such power series of quotients (orbits) with respect to permutation groups, where we restrict the groups of degree n to the conjugacy classes $\operatorname{Cl}(S_n)$ of the symmetric group $S_n$, which form a unique factorization domain. (The orbits with respect to two groups from the same conjugacy class are isomorphic.) This motivates the following definition. A class $\mathcal{C} \in \mathbb{N}[\mathfrak{A}]$ of combinatorial structures is a formal series $\mathcal{C} = \sum_{n \ge 1} \sum_{G \in \operatorname{Cl}(S_n)} c_G \frac{X^n}{G}$, where $\mathfrak{A}$ (the "A" is for "atoms") is the set of primes of the UFD $\{\operatorname{Cl}(S_n)\}_{n \ge 1}$ and $c_G \in \mathbb{N}$. In the following we will simplify our notation a bit and write e.g. $E_2 + E_3$ and $C_1 + C_2 + C_3 + \cdots$ for the classes mentioned above. The Flajolet–Sedgewick fundamental theorem A theorem in the Flajolet–Sedgewick theory of symbolic combinatorics treats the enumeration problem of labelled and unlabelled combinatorial classes by means of the creation of symbolic operators that make it possible to translate equations involving combinatorial structures directly (and automatically) into equations in the generating functions of these structures. Let $\mathcal{C}$ be a class of combinatorial structures. The OGF $F(z)$ of $\mathcal{C}(X)$, where X has OGF $f(z)$, and the EGF $G(z)$ of $\mathcal{C}(X)$, where X is labelled with EGF $g(z)$, are given by $F(z) = \sum_{n \ge 1} \sum_{G \in \operatorname{Cl}(S_n)} c_G\, Z(G)(f(z), f(z^2), \ldots, f(z^n))$ and $G(z) = \sum_{n \ge 1} \left( \sum_{G \in \operatorname{Cl}(S_n)} \frac{c_G}{|G|} \right) g(z)^n$. In the labelled case we have the additional requirement that X not contain elements of size zero. It will sometimes prove convenient to add one to $F(z)$ or $G(z)$ to indicate the presence of one copy of the empty set. It is possible to assign meaning to both $\mathcal{C} \in \mathbb{Z}[\mathfrak{A}]$ (the most common example is the case of unlabelled sets) and $\mathcal{C} \in \mathbb{Q}[\mathfrak{A}]$. To prove the theorem simply apply PET (Pólya enumeration theorem) and the labelled enumeration theorem. The power of this theorem lies in the fact that it makes it possible to construct operators on generating functions that represent combinatorial classes. A structural equation between combinatorial classes thus translates directly into an equation in the corresponding generating functions. Moreover, in the labelled case it is evident from the formula that we may replace $g(z)$ by the atom z and compute the resulting operator, which may then be applied to EGFs. We now proceed to construct the most important operators. The reader may wish to compare with the data on the cycle index page. The sequence operator This operator corresponds to the class $1 + E_1 + E_2 + E_3 + \cdots$ and represents sequences, i.e. the slots are not being permuted and there is exactly one empty sequence. We have $F(z) = \frac{1}{1 - f(z)}$ and $G(z) = \frac{1}{1 - g(z)}$. The cycle operator This operator corresponds to the class $C_1 + C_2 + C_3 + \cdots$, i.e., cycles containing at least one object. We have $F(z) = \sum_{n \ge 1} \frac{1}{n} \sum_{d \mid n} \varphi(d)\, f(z^d)^{n/d}$ or $F(z) = \sum_{k \ge 1} \frac{\varphi(k)}{k} \log \frac{1}{1 - f(z^k)}$ and $G(z) = \log \frac{1}{1 - g(z)}$. This operator, together with the set operator SET, and their restrictions to specific degrees are used to compute random permutation statistics. There are two useful restrictions of this operator, namely to even and odd cycles. The labelled even cycle operator is $C_2 + C_4 + C_6 + \cdots$, which yields $G(z) = \frac{1}{2} \log \frac{1}{1 - g(z)^2}$. This implies that the labelled odd cycle operator is given by $G(z) = \log \frac{1}{1 - g(z)} - \frac{1}{2} \log \frac{1}{1 - g(z)^2} = \frac{1}{2} \log \frac{1 + g(z)}{1 - g(z)}$. The multiset/set operator The series is $1 + S_1 + S_2 + S_3 + \cdots$, i.e., the symmetric group is applied to the slots.
This creates multisets in the unlabelled case and sets in the labelled case (there are no multisets in the labelled case because the labels distinguish multiple instances of the same object from the set being put into different slots). We include the empty set in both the labelled and the unlabelled case. The unlabelled case is done using the function $M(f(z)) = \sum_{n \ge 0} Z(S_n)(f(z), f(z^2), \ldots, f(z^n))$, so that the multiset operator applied to $f(z)$ equals $M(f(z))$. Evaluating $M(f(z))$ we obtain $F(z) = \exp\left( \sum_{\ell \ge 1} \frac{f(z^\ell)}{\ell} \right)$. For the labelled case we have $G(z) = \sum_{n \ge 0} \frac{g(z)^n}{n!} = \exp(g(z))$. In the labelled case we denote the operator by $\operatorname{SET}$, and in the unlabelled case, by $\operatorname{MSET}$. This is because in the labelled case there are no multisets (the labels distinguish the constituents of a compound combinatorial class) whereas in the unlabelled case there are multisets and sets, with the latter being given by $F(z) = \exp\left( \sum_{\ell \ge 1} (-1)^{\ell - 1} \frac{f(z^\ell)}{\ell} \right)$. Procedure Typically, one starts with the neutral class $\mathcal{E}$, containing a single object of size 0 (the neutral object, often denoted by $\epsilon$), and one or more atomic classes $\mathcal{Z}$, each containing a single object of size 1. Next, set-theoretic relations involving various simple operations, such as disjoint unions, products, sets, sequences, and multisets define more complex classes in terms of the already defined classes. These relations may be recursive. The elegance of symbolic combinatorics lies in that the set-theoretic, or symbolic, relations translate directly into algebraic relations involving the generating functions. In this article, we will follow the convention of using script uppercase letters to denote combinatorial classes and the corresponding plain letters for the generating functions (so the class $\mathcal{A}$ has generating function $A(z)$). There are two types of generating functions commonly used in symbolic combinatorics: ordinary generating functions, used for combinatorial classes of unlabelled objects, and exponential generating functions, used for classes of labelled objects. It is trivial to show that the generating functions (either ordinary or exponential) for $\mathcal{E}$ and $\mathcal{Z}$ are $E(z) = 1$ and $Z(z) = z$, respectively. The disjoint union is also simple: for disjoint sets $\mathcal{B}$ and $\mathcal{C}$, $\mathcal{A} = \mathcal{B} \cup \mathcal{C}$ implies $A(z) = B(z) + C(z)$. The relations corresponding to other operations depend on whether we are talking about labelled or unlabelled structures (and ordinary or exponential generating functions). Combinatorial sum The restriction of unions to disjoint unions is an important one; however, in the formal specification of symbolic combinatorics, it is too much trouble to keep track of which sets are disjoint. Instead, we make use of a construction that guarantees there is no intersection (be careful, however; this affects the semantics of the operation as well). In defining the combinatorial sum of two sets $\mathcal{A}$ and $\mathcal{B}$, we mark members of each set with a distinct marker, for example $\circ$ for members of $\mathcal{A}$ and $\bullet$ for members of $\mathcal{B}$. The combinatorial sum is then: $\mathcal{A} + \mathcal{B} = (\mathcal{A} \times \{\circ\}) \cup (\mathcal{B} \times \{\bullet\})$. This is the operation that formally corresponds to addition. Unlabelled structures With unlabelled structures, an ordinary generating function (OGF) is used. The OGF of a sequence $A_0, A_1, A_2, \ldots$ is defined as $A(z) = \sum_{n \ge 0} A_n z^n$. Product The product of two combinatorial classes $\mathcal{A}$ and $\mathcal{B}$ is specified by defining the size of an ordered pair as the sum of the sizes of the elements in the pair. Thus we have for $a \in \mathcal{A}$ and $b \in \mathcal{B}$, $|(a, b)| = |a| + |b|$. This should be a fairly intuitive definition. We now note that the number of elements in $\mathcal{A} \times \mathcal{B}$ of size n is $\sum_{k=0}^{n} a_k b_{n-k}$. Using the definition of the OGF and some elementary algebra, we can show that $\mathcal{A} = \mathcal{B} \times \mathcal{C}$ implies $A(z) = B(z) \cdot C(z)$. Sequence The sequence construction, denoted by $\mathcal{A} = \operatorname{SEQ}(\mathcal{B})$, is defined as $\operatorname{SEQ}(\mathcal{B}) = \mathcal{E} + \mathcal{B} + \mathcal{B} \times \mathcal{B} + \mathcal{B} \times \mathcal{B} \times \mathcal{B} + \cdots$. In other words, a sequence is the neutral element, or an element of $\mathcal{B}$, or an ordered pair, ordered triple, etc.
This leads to the relation $A(z) = \sum_{n \ge 0} B(z)^n = \frac{1}{1 - B(z)}$. Set The set (or powerset) construction, denoted by $\mathcal{A} = \operatorname{SET}(\mathcal{B})$, is defined as $\operatorname{SET}(\mathcal{B}) = \prod_{\beta \in \mathcal{B}} (\mathcal{E} + \{\beta\})$, which leads to the relation $A(z) = \prod_{\beta \in \mathcal{B}} (1 + z^{|\beta|}) = \prod_{n \ge 1} (1 + z^n)^{B_n} = \exp\left( \sum_{n \ge 1} B_n \ln(1 + z^n) \right) = \exp\left( \sum_{k \ge 1} \frac{(-1)^{k-1} B(z^k)}{k} \right)$, where the expansion $\ln(1 + u) = \sum_{k \ge 1} \frac{(-1)^{k-1} u^k}{k}$ was used to pass to the final form. Multiset The multiset construction, denoted $\mathcal{A} = \operatorname{MSET}(\mathcal{B})$, is a generalization of the set construction. In the set construction, each element can occur zero or one times. In a multiset, each element can appear an arbitrary number of times. Therefore, $\operatorname{MSET}(\mathcal{B}) = \prod_{\beta \in \mathcal{B}} \operatorname{SEQ}(\{\beta\})$. This leads to the relation $A(z) = \prod_{\beta \in \mathcal{B}} \frac{1}{1 - z^{|\beta|}} = \prod_{n \ge 1} (1 - z^n)^{-B_n} = \exp\left( \sum_{k \ge 1} \frac{B(z^k)}{k} \right)$, where, similar to the above set construction, we expand $\ln \frac{1}{1 - z^n}$, swap the sums, and substitute for the OGF of $\mathcal{B}$. Other elementary constructions Other important elementary constructions are: the cycle construction ($\operatorname{CYC}(\mathcal{B})$), like sequences except that cyclic rotations are not considered distinct; pointing ($\Theta\mathcal{B}$), in which each member of $\mathcal{B}$ is augmented by a neutral (zero size) pointer to one of its atoms; substitution ($\mathcal{B} \circ \mathcal{C}$), in which each atom in a member of $\mathcal{B}$ is replaced by a member of $\mathcal{C}$. The derivations for these constructions are too complicated to show here. Here are the results: for $\mathcal{A} = \operatorname{CYC}(\mathcal{B})$, $A(z) = \sum_{k \ge 1} \frac{\varphi(k)}{k} \ln \frac{1}{1 - B(z^k)}$; for $\mathcal{A} = \Theta\mathcal{B}$, $A(z) = z B'(z)$; and for $\mathcal{A} = \mathcal{B} \circ \mathcal{C}$, $A(z) = B(C(z))$. Examples Many combinatorial classes can be built using these elementary constructions. For example, the class of plane trees (that is, trees embedded in the plane, so that the order of the subtrees matters) is specified by the recursive relation $\mathcal{G} = \mathcal{Z} \times \operatorname{SEQ}(\mathcal{G})$. In other words, a tree is a root node of size 1 and a sequence of subtrees. This gives $G(z) = \frac{z}{1 - G(z)}$; we solve for G(z) by multiplying by $1 - G(z)$ to get $G(z) - G(z)^2 = z$; subtracting z and solving for G(z) using the quadratic formula gives $G(z) = \frac{1 - \sqrt{1 - 4z}}{2}$. Another example (and a classic combinatorics problem) is integer partitions. First, define the class of positive integers $\mathcal{I}$, where the size of each integer is its value: $\mathcal{I} = \mathcal{Z} \times \operatorname{SEQ}(\mathcal{Z})$. The OGF of $\mathcal{I}$ is then $I(z) = \frac{z}{1 - z}$. Now, define the set of partitions $\mathcal{P}$ as $\mathcal{P} = \operatorname{MSET}(\mathcal{I})$. The OGF of $\mathcal{P}$ is $P(z) = \exp\left( I(z) + \frac{1}{2} I(z^2) + \frac{1}{3} I(z^3) + \cdots \right)$. Unfortunately, there is no closed form for $P(z)$; however, the OGF can be used to derive a recurrence relation, or, using more advanced methods of analytic combinatorics, to calculate the asymptotic behavior of the counting sequence. Specification and specifiable classes The elementary constructions mentioned above allow us to define the notion of specification. This specification allows us to use a set of recursive equations, with multiple combinatorial classes. Formally, a specification for a set of combinatorial classes $(\mathcal{A}_1, \ldots, \mathcal{A}_r)$ is a set of equations $\mathcal{A}_i = \Phi_i(\mathcal{A}_1, \ldots, \mathcal{A}_r)$, where $\Phi_i$ is an expression whose atoms are $\mathcal{E}$, $\mathcal{Z}$ and the $\mathcal{A}_j$'s, and whose operators are the elementary constructions listed above. A class of combinatorial structures is said to be constructible or specifiable when it admits a specification. For example, the set of trees whose leaves' depth is even (respectively, odd) can be defined using the specification with two classes $\mathcal{A}_{\text{even}}$ and $\mathcal{A}_{\text{odd}}$. Those classes should satisfy the equations $\mathcal{A}_{\text{even}} = \mathcal{Z} \times \operatorname{SEQ}(\mathcal{A}_{\text{odd}})$ and $\mathcal{A}_{\text{odd}} = \mathcal{Z} \times \operatorname{SEQ}(\mathcal{A}_{\text{even}})$. Labelled structures An object is weakly labelled if each of its atoms has a nonnegative integer label, and each of these labels is distinct. An object is (strongly or well) labelled, if furthermore, these labels comprise the consecutive integers $1, \ldots, n$. Note: some combinatorial classes are best specified as labelled structures or unlabelled structures, but some readily admit both specifications. A good example of labelled structures is the class of labelled graphs. With labelled structures, an exponential generating function (EGF) is used. The EGF of a sequence $A_0, A_1, A_2, \ldots$ is defined as $A(z) = \sum_{n \ge 0} A_n \frac{z^n}{n!}$. Product For labelled structures, we must use a different definition for product than for unlabelled structures. In fact, if we simply used the cartesian product, the resulting structures would not even be well labelled.
Instead, we use the so-called labelled product, denoted $\mathcal{B} \star \mathcal{C}$. For a pair $\beta \in \mathcal{B}$ and $\gamma \in \mathcal{C}$, we wish to combine the two structures into a single structure. In order for the result to be well labelled, this requires some relabelling of the atoms in $\beta$ and $\gamma$. We will restrict our attention to relabellings that are consistent with the order of the original labels. Note that there are still multiple ways to do the relabelling; thus, each pair of members determines not a single member in the product, but a set of new members. The details of this construction are found on the page of the labelled enumeration theorem. To aid this development, let us define a function, $\rho$, that takes as its argument a (possibly weakly) labelled object $\alpha$ and relabels its atoms in an order-consistent way so that $\rho(\alpha)$ is well labelled. We then define the labelled product for two objects $\beta$ and $\gamma$ as $\beta \star \gamma = \{ (\beta', \gamma') : (\beta', \gamma') \text{ is well labelled, } \rho(\beta') = \beta, \rho(\gamma') = \gamma \}$. Finally, the labelled product of two classes $\mathcal{B}$ and $\mathcal{C}$ is $\mathcal{B} \star \mathcal{C} = \bigcup_{\beta \in \mathcal{B},\, \gamma \in \mathcal{C}} (\beta \star \gamma)$. The EGF can be derived by noting that for objects of size $k$ and $n - k$, there are $\binom{n}{k}$ ways to do the relabelling. Therefore, the total number of objects of size $n$ is $\sum_{k=0}^{n} \binom{n}{k} B_k C_{n-k}$. This binomial convolution relation for the terms is equivalent to multiplying the EGFs, $A(z) = B(z) \cdot C(z)$. Sequence The sequence construction $\mathcal{A} = \operatorname{SEQ}(\mathcal{B})$ is defined similarly to the unlabelled case: $\operatorname{SEQ}(\mathcal{B}) = \mathcal{E} + \mathcal{B} + \mathcal{B} \star \mathcal{B} + \mathcal{B} \star \mathcal{B} \star \mathcal{B} + \cdots$, and again, as above, $A(z) = \frac{1}{1 - B(z)}$. Set In labelled structures, a set of $k$ elements corresponds to exactly $k!$ sequences. This is different from the unlabelled case, where some of the permutations may coincide. Thus for $\mathcal{A} = \operatorname{SET}(\mathcal{B})$, we have $A(z) = \sum_{k \ge 0} \frac{B(z)^k}{k!} = \exp(B(z))$. Cycle Cycles are also easier than in the unlabelled case. A cycle of length $k$ corresponds to $k$ distinct sequences. Thus for $\mathcal{A} = \operatorname{CYC}(\mathcal{B})$, we have $A(z) = \sum_{k \ge 1} \frac{B(z)^k}{k} = \ln \frac{1}{1 - B(z)}$. Boxed product In labelled structures, the min-boxed product $\mathcal{A} = \mathcal{B}^{\square} \star \mathcal{C}$ is a variation of the original product which requires that the element of $\mathcal{B}$ in the product carry the minimal label. Similarly, we can also define a max-boxed product, denoted by $\mathcal{B}^{\blacksquare} \star \mathcal{C}$, in the same manner. Then we have $A'(z) = B'(z) \cdot C(z)$, or equivalently, $A(z) = \int_0^z B'(t)\, C(t)\, dt$. Example An increasing Cayley tree is a labelled non-plane and rooted tree whose labels along any branch stemming from the root form an increasing sequence. Then, let $\mathcal{L}$ be the class of such trees. The recursive specification is now $\mathcal{L} = \mathcal{Z}^{\square} \star \operatorname{SET}(\mathcal{L})$. Other elementary constructions The operators $\operatorname{CYC}_{\text{even}}$, $\operatorname{CYC}_{\text{odd}}$, $\operatorname{SET}_{\text{even}}$, and $\operatorname{SET}_{\text{odd}}$ represent cycles of even and odd length, and sets of even and odd cardinality. Example Stirling numbers of the second kind may be derived and analyzed using the structural decomposition $\operatorname{SET}(\operatorname{SET}_{\ge 1}(\mathcal{Z}))$. The decomposition $\operatorname{SET}(\operatorname{CYC}(\mathcal{Z}))$ is used to study unsigned Stirling numbers of the first kind, and in the derivation of the statistics of random permutations. A detailed examination of the exponential generating functions associated to Stirling numbers within symbolic combinatorics may be found on the page on Stirling numbers and exponential generating functions in symbolic combinatorics. See also Combinatorial species References François Bergeron, Gilbert Labelle, Pierre Leroux, Théorie des espèces et combinatoire des structures arborescentes, LaCIM, Montréal (1994). English version: Combinatorial Species and Tree-like Structures, Cambridge University Press (1998). Philippe Flajolet and Robert Sedgewick, Analytic Combinatorics, Cambridge University Press (2009). (available online: http://algo.inria.fr/flajolet/Publications/book.pdf) Micha Hofri, Analysis of Algorithms: Computational Methods and Mathematical Tools, Oxford University Press (1995). Combinatorics
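As a computational check on the worked examples above, the following sketch (assuming Python with the SymPy library installed) expands the plane-tree OGF $G(z) = (1 - \sqrt{1 - 4z})/2$ derived earlier and reads off Stirling numbers of the second kind from the bivariate EGF $\exp(u(e^z - 1))$ obtained by marking each inner set of the $\operatorname{SET}(\operatorname{SET}_{\ge 1}(\mathcal{Z}))$ decomposition with $u$; the variable names are our own.

```python
# Sketch (assumes the sympy library): coefficient extraction for two
# generating functions derived in the article.
import sympy as sp

z, u = sp.symbols("z u")

# Plane trees: G(z) = (1 - sqrt(1 - 4z))/2, whose coefficients are the
# Catalan numbers 1, 1, 2, 5, 14, ... (shifted by one in the exponent).
G = (1 - sp.sqrt(1 - 4 * z)) / 2
print(sp.series(G, z, 0, 7))
# -> z + z**2 + 2*z**3 + 5*z**4 + 14*z**5 + 42*z**6 + O(z**7)

# Stirling numbers of the second kind from SET(SET_{>=1}(Z)): marking
# each inner set with u gives the bivariate EGF exp(u*(exp(z) - 1)),
# and S(n, k) = n! * [z**n u**k].
F = sp.exp(u * (sp.exp(z) - 1))
n = 4
taylor = sp.expand(sp.series(F, z, 0, n + 1).removeO())
row = sp.expand(taylor.coeff(z, n) * sp.factorial(n))
print(row)  # -> u**4 + 6*u**3 + 7*u**2 + u, i.e. S(4, k) = 1, 7, 6, 1
```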
Symbolic method (combinatorics)
[ "Mathematics" ]
3,506
[ "Discrete mathematics", "Combinatorics" ]
987,014
https://en.wikipedia.org/wiki/Combinatorial%20class
In mathematics, a combinatorial class is a countable set of mathematical objects, together with a size function mapping each object to a non-negative integer, such that there are finitely many objects of each size. Counting sequences and isomorphism The counting sequence of a combinatorial class is the sequence of the numbers of elements of size i for i = 0, 1, 2, ...; it may also be described as a generating function that has these numbers as its coefficients. The counting sequences of combinatorial classes are the main subject of study of enumerative combinatorics. Two combinatorial classes are said to be isomorphic if they have the same numbers of objects of each size, or equivalently, if their counting sequences are the same. Frequently, once two combinatorial classes are known to be isomorphic, a bijective proof of this equivalence is sought; such a proof may be interpreted as showing that the objects in the two isomorphic classes are cryptomorphic to each other. For instance, the triangulations of regular polygons (with size given by the number of sides of the polygon, and a fixed choice of polygon to triangulate for each size) and the set of unrooted binary plane trees (up to graph isomorphism, with a fixed ordering of the leaves, and with size given by the number of leaves) are both counted by the Catalan numbers, so they form isomorphic combinatorial classes. A bijective isomorphism in this case is given by planar graph duality: a triangulation can be transformed bijectively into a tree with a leaf for each polygon edge, an internal node for each triangle, and an edge for every two polygon edges or triangles that are adjacent to each other. Analytic combinatorics The theory of combinatorial species and its extension to analytic combinatorics provide a language for describing many important combinatorial classes, constructing new classes from combinations of previously defined ones, and automatically deriving their counting sequences. For example, two combinatorial classes may be combined by disjoint union, or by a Cartesian product construction in which the objects are ordered pairs of one object from each of two classes, and the size function is the sum of the sizes of each object in the pair. These operations respectively form the addition and multiplication operations of a semiring on the family of (isomorphism equivalence classes of) combinatorial classes, in which the zero object is the empty combinatorial class, and the unit is the class whose only object is the empty set. Permutation patterns In the study of permutation patterns, a combinatorial class of permutation classes, enumerated by permutation length, is called a Wilf class. The study of enumerations of specific permutation classes has turned up unexpected equivalences in counting sequences of seemingly unrelated permutation classes. References Combinatorics
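To make the isomorphism-by-counting idea concrete, here is a small illustrative Python sketch (the function names are our own): it generates the shared counting sequence of the two classes discussed above via the standard Catalan convolution recurrence and compares counting sequences term by term, which is exactly the notion of isomorphism used here.

```python
# Sketch: two combinatorial classes are isomorphic iff their counting
# sequences agree. The Catalan numbers, computed below with the
# convolution recurrence C(0) = 1, C(n+1) = sum_k C(k) * C(n-k),
# count both the triangulations and the binary trees discussed above.

def catalan(terms: int) -> list[int]:
    c = [1]
    for n in range(terms - 1):
        c.append(sum(c[k] * c[n - k] for k in range(n + 1)))
    return c

def isomorphic_by_counting(seq_a: list[int], seq_b: list[int]) -> bool:
    # Equality of counting sequences is the definition of isomorphism
    # for combinatorial classes (a bijective proof is a separate step).
    return seq_a == seq_b

if __name__ == "__main__":
    print(catalan(8))  # [1, 1, 2, 5, 14, 42, 132, 429]
    print(isomorphic_by_counting(catalan(8), catalan(8)))  # True
```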
Combinatorial class
[ "Mathematics" ]
607
[ "Discrete mathematics", "Combinatorics" ]
987,101
https://en.wikipedia.org/wiki/Comparison%20of%20cross-platform%20instant%20messaging%20clients
The landscape for instant messaging involves cross-platform instant messaging clients that can handle one or multiple protocols. Clients that use the same protocol can typically federate and talk to one another. The following table compares general and technical information for cross-platform instant messaging clients in active development, each of which has its own article providing further information. General Operating system support Connectivity Privacy Some messaging services that are not designed for privacy require a unique phone number for sign-up, as a form of identity verification and to prevent users from creating multiple accounts. Some messaging services that do not solely focus on a mobile-first experience, or enforce SMS authentication, may allow email addresses to be used for sign-up instead. Some messaging services offer greater flexibility and privacy, by allowing users to create more than one account to compartmentalize personal and work purposes, or by not requiring personally identifiable information for sign-up. To find out if the software has end-to-end encryption, see the "media" table below. 1: Apple iOS doesn't allow screenshot protection. Screenshot security Message handling Media Backup and restore messages Official status indicates guaranteed support for backing up and restoring messages. Miscellaneous Messaging services can operate under different models, based on security and accessibility considerations. A mobile-focused, phone number-based model operates on the concept of primary and secondary devices. Examples of such messaging services include: WhatsApp, Viber, Line, WeChat, Signal, etc. The primary device is a mobile phone and is required to log in and send/receive messages. Only one mobile phone is allowed to be the primary device, as attempting to log in to the messaging app on another mobile phone would trigger the previous phone to be logged out. The secondary device is a computer running a desktop operating system, which serves as a companion for the primary device. Desktop messaging clients on secondary devices do not function independently, as they are reliant on the mobile phone maintaining an active network connection for login authentication and syncing messages. A multi-device, device-agnostic model is designed for accessibility on multiple devices, regardless of desktop or mobile. Examples of such messaging services include: Skype, Facebook Messenger, Google Hangouts (subsequently Google Chat), Telegram, ICQ, Element, Slack, Discord, etc. Users have more options, as usernames or email addresses can be used as user identifiers, besides phone numbers. Unlike the phone-based model, user accounts on a multi-device model are not tied to a single device, and logins are allowed on multiple devices. Messaging services with a multi-device model are able to eliminate feature disparity and provide identical functionality on both mobile and desktop clients. Desktop clients can function independently, without relying on the mobile phone to log in and sync messages. See also Comparison of instant messaging protocols Comparison of Internet Relay Chat clients Comparison of VoIP software List of SIP software Comparison of LAN messengers List of video telecommunication services and product brands Comparison of user features of messaging platforms Notes References Instant messaging clients
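The two models described above differ chiefly in how sessions are authorized. The following Python sketch is purely illustrative (no real service exposes such an API, and the class and method names are our own); it contrasts the primary/secondary constraint of the phone-number-based model with the independent sessions of the multi-device model.

```python
# Illustrative sketch of the two session models described above.
# Class and method names are invented; real services differ in detail.

class PhoneNumberModel:
    """Mobile-first model: one primary phone; desktops are companions."""

    def __init__(self) -> None:
        self.primary: str | None = None
        self.companions: set[str] = set()

    def login_phone(self, device: str) -> None:
        # A new phone displaces the old primary, which gets logged out.
        if self.primary is not None and self.primary != device:
            print(f"logging out previous primary: {self.primary}")
        self.primary = device

    def login_desktop(self, device: str) -> None:
        # Companion sessions are only valid while a primary phone exists
        # (and, in practice, stays online to sync messages).
        if self.primary is None:
            raise RuntimeError("companion login requires an active primary phone")
        self.companions.add(device)

class MultiDeviceModel:
    """Device-agnostic model: any number of independent sessions."""

    def __init__(self) -> None:
        self.sessions: set[str] = set()

    def login(self, device: str) -> None:
        self.sessions.add(device)  # no primary/secondary distinction

if __name__ == "__main__":
    account = PhoneNumberModel()
    account.login_phone("phone-A")
    account.login_desktop("desktop-1")
    account.login_phone("phone-B")  # displaces phone-A
```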
Comparison of cross-platform instant messaging clients
[ "Technology" ]
620
[ "Social software", "Mobile content", "Online services comparisons", "Computing comparisons", "Instant messaging clients", "Instant messaging" ]
987,156
https://en.wikipedia.org/wiki/Bonnacon
The bonnacon (also called bonasus or bonacho) is a legendary creature described as a bull with inward-curving horns and a horse-like mane. Medieval bestiaries usually depict its fur as reddish-brown or black. Because its horns were useless for self-defense, the bonnacon was said to expel large amounts of caustic feces from its anus at its pursuers, burning them and thereby ensuring its escape. Term The term is derived from Greek βόνᾱσος (bonasos), meaning "bison". Strabo, when describing the zebu at festivals in India, used the term bonasus. Textual history The first known description of the bonnacon comes from Pliny the Elder's Naturalis Historia. The popularity of the Naturalis Historia in the Middle Ages led to the bonnacon's inclusion in medieval bestiaries. In the tradition of the Physiologus, bestiaries often ascribed moral and scriptural lessons to the descriptions of animals, but the bonnacon gained no such symbolic meaning. Manuscript illustrations of the creature may have served as a source of humor, deriving as much from the reaction of the hunters as from the act of defecation. The Aberdeen Bestiary describes the creature using similar language to Pliny, though the beast's location is moved from Paeonia to Asia. The bonnacon is also mentioned in the life of Saint Martha in the Golden Legend, a 13th-century hagiographical work by Jacobus de Voragine. In the story, Saint Martha encounters and tames the Tarasque, a dragon-like legendary creature said to be the offspring of the biblical Leviathan and the bonnacon. In this account, the bonnacon (here: bonacho or onacho) is said to originate in Galatia. References External links Bonnacon at The Medieval Bestiary Image of the Bonnacon in the fifteenth-century English bestiary Copenhagen, GKS 1633 4º, f. 10r Greek legendary creatures Legendary mammals Medieval European legendary creatures Defecation
Bonnacon
[ "Biology" ]
440
[ "Excretion", "Defecation" ]
987,423
https://en.wikipedia.org/wiki/Reducing%20sugar
A reducing sugar is any sugar that is capable of acting as a reducing agent. In an alkaline solution, a reducing sugar forms some aldehyde or ketone, which allows it to act as a reducing agent, for example in Benedict's reagent. In such a reaction, the sugar becomes a carboxylic acid. All monosaccharides are reducing sugars, along with some disaccharides, some oligosaccharides, and some polysaccharides. The monosaccharides can be divided into two groups: the aldoses, which have an aldehyde group, and the ketoses, which have a ketone group. Ketoses must first tautomerize to aldoses before they can act as reducing sugars. The common dietary monosaccharides galactose, glucose and fructose are all reducing sugars. Disaccharides are formed from two monosaccharides and can be classified as either reducing or nonreducing. Nonreducing disaccharides like sucrose and trehalose have glycosidic bonds between their anomeric carbons and thus cannot convert to an open-chain form with an aldehyde group; they are stuck in the cyclic form. Reducing disaccharides like lactose and maltose have only one of their two anomeric carbons involved in the glycosidic bond, while the other is free and can convert to an open-chain form with an aldehyde group. The aldehyde functional group allows the sugar to act as a reducing agent, for example, in the Tollens' test or Benedict's test. The cyclic hemiacetal forms of aldoses can open to reveal an aldehyde, and certain ketoses can undergo tautomerization to become aldoses. However, acetals, including those found in polysaccharide linkages, cannot easily become free aldehydes. Reducing sugars react with amino acids in the Maillard reaction, a series of reactions that occurs while cooking food at high temperatures and that is important in determining the flavor of food. Also, the levels of reducing sugars in wine, juice, and sugarcane are indicative of the quality of these food products. Terminology Oxidation-reduction A reducing sugar is one that reduces another compound and is itself oxidized; that is, the carbonyl carbon of the sugar is oxidized to a carboxyl group. A sugar is classified as a reducing sugar only if it has an open-chain form with an aldehyde group or a free hemiacetal group. Aldoses and ketoses Monosaccharides which contain an aldehyde group are known as aldoses, and those with a ketone group are known as ketoses. The aldehyde can be oxidized via a redox reaction in which another compound is reduced. Thus, aldoses are reducing sugars. Sugars with ketone groups in their open-chain form are capable of isomerizing via a series of tautomeric shifts to produce an aldehyde group in solution. Therefore, ketoses such as fructose are considered reducing sugars, but it is the isomer containing an aldehyde group which is reducing, since ketones cannot be oxidized without decomposition of the sugar. This type of isomerization is catalyzed by the base present in solutions which test for the presence of reducing sugars. Reducing end Disaccharides consist of two monosaccharides and may be either reducing or nonreducing. Even a reducing disaccharide will only have one reducing end, as disaccharides are held together by glycosidic bonds, which consist of at least one anomeric carbon. With one anomeric carbon unable to convert to the open-chain form, only the free anomeric carbon is available to reduce another compound, and it is called the reducing end of the disaccharide. A nonreducing disaccharide is one which has both anomeric carbons tied up in the glycosidic bond.
Similarly, most polysaccharides have only one reducing end. Examples All monosaccharides are reducing sugars because they either have an aldehyde group (if they are aldoses) or can tautomerize in solution to form an aldehyde group (if they are ketoses). This includes common monosaccharides like galactose, glucose, glyceraldehyde, fructose, ribose, and xylose. Many disaccharides, like cellobiose, lactose, and maltose, also have a reducing form, as one of the two units may have an open-chain form with an aldehyde group. However, sucrose and trehalose, in which the anomeric carbon atoms of the two units are linked together, are nonreducing disaccharides since neither of the rings is capable of opening. In glucose polymers such as starch and starch derivatives like glucose syrup, maltodextrin and dextrin, the macromolecule begins with a reducing sugar, a free aldehyde. When starch has been partially hydrolyzed, the chains have been split, and hence it contains more reducing sugars per gram. The percentage of reducing sugars present in these starch derivatives is called dextrose equivalent (DE). Glycogen is a highly branched polymer of glucose that serves as the main form of carbohydrate storage in animals. It is a reducing sugar with only one reducing end, no matter how large the glycogen molecule is or how many branches it has (note, however, that the unique reducing end is usually covalently linked to glycogenin and will therefore not be reducing). Each branch ends in a nonreducing sugar residue. When glycogen is broken down to be used as an energy source, glucose units are removed one at a time from the nonreducing ends by enzymes. Characterization Several qualitative tests are used to detect the presence of reducing sugars. Two of them use solutions of copper(II) ions: Benedict's reagent (Cu2+ in aqueous sodium citrate) and Fehling's solution (Cu2+ in aqueous sodium tartrate). The reducing sugar reduces the copper(II) ions in these test solutions to copper(I), which then forms a brick-red copper(I) oxide precipitate. Reducing sugars can also be detected with the addition of Tollens' reagent, which consists of silver ions (Ag+) in aqueous ammonia. When Tollens' reagent is added to an aldehyde, it precipitates silver metal, often forming a silver mirror on clean glassware. 3,5-Dinitrosalicylic acid is another test reagent, one that allows quantitative detection. It reacts with a reducing sugar to form 3-amino-5-nitrosalicylic acid, which can be measured by spectrophotometry to determine the amount of reducing sugar that was present. Some sugars, such as sucrose, do not react with any of the reducing-sugar test solutions. However, a non-reducing sugar can be hydrolyzed using dilute hydrochloric acid. After hydrolysis and neutralization of the acid, the product may be a reducing sugar that gives normal reactions with the test solutions. All carbohydrates are converted to aldehydes and respond positively in Molisch's test, though the test proceeds more quickly with monosaccharides. Importance in medicine Fehling's solution was used for many years as a diagnostic test for diabetes, a disease in which blood glucose levels are dangerously elevated by a failure to produce enough insulin (type 1 diabetes) or by an inability to respond to insulin (type 2 diabetes). Measuring the amount of oxidizing agent (in this case, Fehling's solution) reduced by glucose makes it possible to determine the concentration of glucose in the blood or urine.
This then enables the right amount of insulin to be injected to bring blood glucose levels back into the normal range. Importance in food chemistry Maillard reaction The carbonyl groups of reducing sugars react with the amino groups of amino acids in the Maillard reaction, a complex series of reactions that occurs when cooking food. Maillard reaction products (MRPs) are diverse; some are beneficial to human health, while others are toxic. However, the overall effect of the Maillard reaction is to decrease the nutritional value of food. One example of a toxic product of the Maillard reaction is acrylamide, a neurotoxin and possible carcinogen that is formed from free asparagine and reducing sugars when cooking starchy foods at high temperatures (above 120 °C). However, evidence from epidemiological studies suggests that dietary acrylamide is unlikely to raise the risk of people developing cancer. Food quality The levels of reducing sugars in wine, juice, and sugarcane are indicative of the quality of these food products, and monitoring the levels of reducing sugars during food production has improved market quality. The conventional method for doing so is the Lane-Eynon method, which involves titrating the reducing sugar with copper(II) in Fehling's solution in the presence of methylene blue, a common redox indicator. However, it is inaccurate, expensive, and sensitive to impurities. References Carbohydrate chemistry Biomolecules Carbohydrates
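Since the article defines dextrose equivalent (DE) as the percentage of reducing sugars present in a starch derivative, the arithmetic can be made explicit in a short Python sketch; the dry-weight basis and the sample figures below are illustrative assumptions.

```python
# Sketch of the dextrose-equivalent (DE) arithmetic described above.
# DE is the percentage of reducing sugars, expressed as dextrose;
# the dry-weight basis and sample numbers are illustrative assumptions.

def dextrose_equivalent(reducing_sugars_as_dextrose_g: float,
                        dry_substance_g: float) -> float:
    return 100.0 * reducing_sugars_as_dextrose_g / dry_substance_g

if __name__ == "__main__":
    # Pure glucose would score DE = 100; unhydrolysed starch is near 0.
    print(dextrose_equivalent(100.0, 100.0))  # 100.0
    # A hypothetical partially hydrolysed starch: 15 g reducing sugar
    # (as dextrose) per 100 g of dry solids gives DE = 15.
    print(dextrose_equivalent(15.0, 100.0))   # 15.0
```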
Reducing sugar
[ "Chemistry", "Biology" ]
2,029
[ "Biomolecules by chemical classification", "Carbohydrates", "Natural products", "Biochemistry", "Organic compounds", "Carbohydrate chemistry", "Chemical synthesis", "Biomolecules", "Structural biology", "Glycobiology", "nan", "Molecular biology" ]
987,490
https://en.wikipedia.org/wiki/Crash%20bar
A crash bar (also known as a panic exit device, panic bar, or bump bar) is a type of door opening mechanism which allows users to open a door by pushing a bar. While originally conceived as a way to prevent crowd crushing in an emergency, crash bars are now used as the primary door opening mechanism in many commercial buildings. The device consists of a spring-loaded metal bar which is fixed horizontally to a door that swings in the direction of an exit. Depressing the bar unlatches the door, allowing occupants to quickly leave the building. Modern fire standards often mandate that doors be fitted with crash bars in commercial and other occupancies where mass evacuation may be slowed by other types of door openers. They are sometimes intended solely for emergency use and may be fitted with alarms. However, in many buildings the crash bar functions as the primary mechanism for opening a door in normal circumstances as well. They may even be used when not required by code because they are quicker and easier for users compared with a knob or lever handle. Background History Following the events of the Victoria Hall disaster in Sunderland, England, in 1883, in which 183 children died because a door had been bolted at the bottom of a stairwell, the British government began legal moves to enforce minimum standards for building safety. This slowly led to the legal requirement that venues must have a minimum number of outward-opening doors as well as locks which could be opened from the inside. Motivated by the Sunderland disaster, Robert Alexander Briggs (1868–1963) invented the panic bolt, which was granted a UK patent on 13 August 1892. However, these moves were not globally copied. For example, in the United States, at least 602 people died in the Iroquois Theatre fire in Chicago in December 1903 because of door latch designs that were difficult for fleeing patrons to open. Five years later, 174 people in Ohio died in the Collinwood school fire, which led to a national outcry in the United States for greater fire safety in buildings. On 31 December 1929, some 37 years after the panic bolt was patented, 71 children died during the Glen Cinema disaster at a Hogmanay film screening in Paisley, Renfrewshire, Scotland, when a smoking cellulose nitrate film canister sparked panic. Children rushing to escape the cinema became crushed against the padlocked exit doors. Even after a police officer broke a padlock, the inward-opening doors were held shut by the mass of bodies behind them. Justification for use By the end of the 20th century, most countries had building codes (or regulations) which require all public buildings to have a minimum number of fire and emergency exits. Crash bars are fitted to these types of doors because they are proven to save lives in the event of human crushes. Panic can often occur during mass building evacuations caused by fires or explosions. When emergency exits are needed, the crash bar works efficiently to allow people to pass through security doors without a reduction in speed. A crash bar's fast-acting mechanism reduces the risk that a rushing crowd might suddenly become a logjam at the exits. Such a human crush, which has many historical precedents, can cause falls, crushing, and other injuries, because the rear of a crowd has no idea that the people at the front have come across a door. Crash bars are typically found on doors which are required emergency exits serving a particular type or quantity of occupants.
Common locations include doors which provide egress from assembly areas, doors which serve many occupants, or doors serving hazardous areas. For buildings subject to the International Building Code, or a locally adopted variation, they are required for certain healthcare, education, or assembly spaces, generally related to the number of occupants exiting through a given door.

Latching

Mechanical types
Crash bars offer numerous configurations for latching to the door frame. Vertical rods can be affixed to crash bars, allowing both doors to be opened with no center clearance obstruction. When the bar is depressed, a cord within the vertical rod is pulled, which lowers a latch at the top and/or bottom and allows the door to open. The Pullman latch, which attaches to a Pullman keeper, is the locking mechanism usually used at the ends of the vertical rods. More expensive products may feature vertical rods and latches concealed within the door. Some jurisdictions permit doors to latch to each other. For security, additional latching points may be added; for example, upper and lower vertical rods may be added to one door, with the leaf that has no rods connected via a mortice latch. A double door coordinator is used to ensure the active leaf does not close before the inactive leaf. This configuration is not recommended for high traffic locations. Center posts are an alternative to vertical rods at double door exits. This offers less clearance because the post remains in the middle when both doors are open. That said, the post can often be removed with a key for occasions when items larger than a single door need to pass through. Center posts may be preferred over vertical rods because they have fewer moving parts, and thus fewer components that can break. Push bars themselves are some of the most reliable door opening mechanisms. To pass CE certification, bars must function for between 100,000 and 500,000 opening cycles, depending on the rating the manufacturer is seeking.

Unlocking and latch hold
In some applications, such as storefront entrances, panic bars may be dogged during business hours. Dogging is a common feature on panic bars in which the bar is retracted with a key, freeing the door to swing without latching. This allows customers to apply force to any portion of the door, not just the bar, in order to open it. Dogging is distinct from simple unlocking, which permits the user to open the door from both ends but still requires performing an action to release the latches. However, in applications where the exterior side contains an immovable dummy handle, as opposed to a knob or lever handle, it is usually impossible to unlock the crash bar without also dogging it. Dogging can extend the life of the bar. Some bars can be unlocked/dogged electronically, while others take a cylinder lock, hex key, or contain no key functionality at all. Dogging should be avoided in high wind areas where the door is susceptible to blowing open.

Electric strike
Unlike a traditional crash bar, this type contains a horizontal touch sensor and no moving parts. When the sensor is pressed, it releases an electromagnetic lock. It may be used in tandem with a motion sensor which unlocks for anyone who stands in front of the door. This type of release must still unlock in the event of a power failure, and in some jurisdictions, the door must automatically unlock upon a fire alarm activation.
Because these will not function in a long-term power outage, they are most commonly used on secondary doors between a vestibule and the secure part of a building.

On automatic doors
In some jurisdictions, when automatic doors are used on the primary exit route, these doors are equipped with crash bars. If the automatic door does not function, it becomes an outward-swinging rather than sliding door. Crash bars are one of many emergency release mechanisms that can be used with automatic doors. Depending on available space, a common alternative is to install an emergency exit door beside the automatic doors. Installation of a secondary crash bar equipped exit is often required in large buildings with revolving doors, since these are too slow for a crowd to move through.

Around the world

European Union
In the European Union, panic bars are governed by the standard EN 1125, Panic exit devices operated by a horizontal bar. As with other EN family standards, the English language version is produced by the British Standards Institution and utilises the call sign BS EN 1125. Panic bars are required to conform to this standard in order to carry CE marking and thus be sold in the European Economic Area. In 2008, the standard was updated to include an alpha-numeric labelling scheme. In this system, products are tested to various benchmarks and assigned a letter or number accordingly. Products must achieve minimum quality scores in order to receive general CE approval. The nine rating categories are:
Category of Use
Number of Test Cycles
Test Door Mass
Fire Resistance
Safety
Corrosion Resistance
Security
Projection of device
Type of device
EN 1125 is one of two standards which govern exit devices in the EU. The other standard, EN 179, governs door handles, push pads, and other exit devices with emergency release functionality. However, EN 179 devices shall only be used at locations where people "are familiar with the emergency exit and its hardware and therefore a panic situation is unlikely to appear". Examples of places where EN 179 hardware may be used in place of EN 1125 panic bars include small apartment buildings and offices.

United States
The first panic bar, made by Von Duprin, was available by 1908 in many models and configurations. In the US, building exit requirements are generally controlled by model codes such as the International Building Code and the NFPA Life Safety Code. Adoption of regulations varies by location and may occur at the city, county, or state level. Model codes are usually supplemented with amendments adopted locally. Additional requirements may be imposed on a site by an Authority Having Jurisdiction such as a local fire marshal. Factors considered when mandating exit devices include the number of occupants who would need to leave in an emergency, the availability of other nearby exits, and proximity to any hazardous equipment or chemicals.

Differences between Europe and North America and emerging trends
In Europe, most panic bars are of the cross bar type, which are called Type A in the EN 1125 standard. This contrasts strongly with North American architectural design, which years ago switched to using predominantly touch bars (EN 1125 Type B) in new construction. In Europe, the use of panic bars is generally confined to code required applications. In US and Canadian commercial buildings, they are frequently used even where not required by code, because bars are seen as being easier to use than knobs or lever handles.
For example, when used on the rear service door of a business, a worker whose hands are occupied carrying bulky items can lean against a bar to release the lock. While the public generally prefers automatic doors, these can be costly to install and maintain.

Some manufacturers offer crash bars designed to resist microbial growth. This can include coating the bar with silver ions in order to create a chemical environment hostile to unwanted microbes. Antimicrobial surfaces have been shown to be effective at inhibiting bacterial, mold, and mildew growth, but may not be effective at stopping viruses such as SARS-CoV-2.

See also

References

Further reading
United Kingdom
British Standards relating to Panic Hardware
United States
Occupational Safety and Health Standards, 29 CFR 1910.36
National Fire Protection Association 101, Life Safety Code, 2012
2011 National Electrical Code (NEC)

External links

Door furniture
Fire protection
Emergency management
Crash bar
[ "Engineering" ]
2,200
[ "Building engineering", "Fire protection" ]
987,492
https://en.wikipedia.org/wiki/Food%20browning
Browning is the process of food turning brown due to the chemical reactions that take place within it. Browning is one of the chemical processes that occur in food chemistry and represents an interesting research topic regarding health, nutrition, and food technology. Though there are many different ways food chemically changes over time, browning in particular falls into two main categories: enzymatic and non-enzymatic browning processes. Browning has many important implications for the food industry relating to nutrition, technology, and economic cost. Researchers are especially interested in studying the control (inhibition) of browning and the different methods that can be employed to maximize this inhibition and ultimately prolong the shelf life of food.

Enzymatic browning
Enzymatic browning is one of the most important reactions that take place in most fruits and vegetables as well as in seafood. These processes affect the taste, color, and value of such foods. Generally, it is a chemical reaction involving polyphenol oxidase (PPO), catechol oxidase, and other enzymes that create melanins and benzoquinone from natural phenols. Enzymatic browning (also called oxidation of foods) requires exposure to oxygen. It begins with the oxidation of phenols by polyphenol oxidase into quinones, whose strong electrophilic state causes high susceptibility to nucleophilic attack from other proteins. These quinones are then polymerized in a series of reactions, eventually resulting in the formation of brown pigments (melanosis) on the surface of the food. The rate of enzymatic browning reflects the amount of active polyphenol oxidase present in the food. Hence, most research into methods of preventing enzymatic browning has been directed towards inhibiting polyphenol oxidase activity. However, not all browning of food produces negative effects.

Examples of beneficial enzymatic browning:
Developing color and flavor in coffee, cocoa beans, and tea.
Developing color and flavor in dried fruit such as figs and raisins.

Examples of non-beneficial enzymatic browning:
Fresh fruit and vegetables, including apples, potatoes, bananas and avocados.
Oxidation of polyphenols, the major cause of melanosis in crustaceans such as shrimp.

Control of enzymatic browning
The control of enzymatic browning has always been a challenge for the food industry. A variety of approaches are used to prevent or slow down enzymatic browning of foods, each method aimed at targeting specific steps of the chemical reaction. The different types of enzymatic browning control can be classified into two large groups: physical and chemical. Usually, multiple methods are used. The use of sulfites (powerful anti-browning chemicals) has been reconsidered due to the potential health hazards they pose. Much research has been conducted on the exact mechanisms by which such treatments control the enzymatic process. Besides prevention, control over browning also includes measures intended to recover the food color after browning has occurred. For instance, ion exchange filtration or ultrafiltration can be used in winemaking to remove the brown color sediments in the solution.

Physical methods
Heat treatment − Treating food with heat, such as blanching or roasting, denatures enzymes and destroys the reactants responsible for browning. Blanching is used, for example, in winemaking, tea processing, storing nuts and bacon, and preparing vegetables for freezing preservation.
Meat is often partially browned under high heat before being incorporated into a larger preparation to be cooked at a lower temperature, which produces less browning.
Cold treatment − Refrigeration and freezing are the most common ways of storing food and preventing decay. The activity of browning enzymes, i.e., the rate of reaction, drops at low temperatures. Thus, refrigeration helps to keep the initial look, color, and flavor of fresh vegetables and fruits. Refrigeration is also used during distribution and retailing of fruits and vegetables.
Oxygen elimination − The presence of oxygen is crucial for enzymatic browning, so eliminating oxygen from the environment helps to slow down the browning reaction. Withdrawing air or replacing it with other gases (e.g., N2 or CO2) during preservation, such as in vacuum packaging or modified atmosphere packaging, wine or juice bottling, using impermeable films or edible coatings, or dipping into salt or sugar solutions, keeps the food away from direct contact with oxygen. Impermeable films made of plastic or other materials prevent food from being exposed to oxygen in the air and avoid moisture loss. There is increasing activity in developing packaging materials impregnated with antioxidant, antimicrobial, and antifungal substances, such as butylated hydroxytoluene (BHT) and butylated hydroxyanisole (BHA), tocopherols, hinokitiol, lysozyme, nisin, natamycin, chitosan, and ε-polylysine. Edible coatings can be made of polysaccharides, proteins, lipids, vegetable skins, plants or other natural products.
Irradiation − Food irradiation using UV-C, gamma rays, x-rays, and electron beams is another method to extend food shelf life. Ionizing radiation inhibits the microorganisms responsible for food spoilage and delays the maturation and sprouting of stored vegetables and fruits.

Chemical methods
Acidification − Browning enzymes, like other enzymes, are active within a specific range of pH. For example, PPO shows optimal activity at pH 5-7 and is inhibited below pH 3. Acidifying agents and acidity regulators are widely used as food additives to maintain a desired pH in food products. Acidulants, such as citric acid, ascorbic acid, and glutathione, are used as anti-browning agents. Many of these agents also show other anti-browning effects, such as chelating and antioxidant activities.
Antioxidants − Many antioxidants are used in the food industry as food additives. These compounds react with oxygen and suppress the initiation of the browning process. They also interfere with intermediate products of the subsequent reactions and inhibit melanin formation. Ascorbic acid, N-acetylcysteine, L-cysteine, 4-hexylresorcinol, erythorbic acid, cysteine hydrochloride, and glutathione are examples of antioxidants that have been studied for their anti-browning properties.
Chelating agents − Polyphenol oxidase requires copper as a cofactor, so copper-chelating agents inhibit the activity of this enzyme. Many agents possessing chelating activity have been studied and used in different fields of the food industry, such as citric acid, sorbic acid, polyphosphates, hinokitiol, kojic acid, EDTA, porphyrins, polycarboxylic acids, and various proteins. Some of these compounds also have other anti-browning effects, such as acidifying or antioxidant activity. Hinokitiol is used in coating materials for food packaging.
Other methods
Natural agents − Different natural products and their extracts, such as onion, pineapple, lemon, and white wine, are known to inhibit or slow the browning of some products. Onion and its extract exhibit potent anti-browning properties by inhibiting PPO activity. Pineapple juice has been shown to have an anti-browning effect on apples and bananas. Lemon juice is used in making doughs to make pastry products look brighter; this effect is possibly explained by the anti-browning properties of the citric and ascorbic acids in lemon juice.
Genetic modification − Arctic apples have been genetically modified to silence the expression of PPO, thereby delaying the browning effect and improving apple eating quality.

Non-enzymatic browning
The second type of browning, non-enzymatic browning, is a process that also produces brown pigmentation in foods, but without the activity of enzymes. The two main forms of non-enzymatic browning are caramelization and the Maillard reaction. The rate of both varies as a function of water activity (in food chemistry, the standard state of water activity is most often defined as the partial vapor pressure of pure water at the same temperature).

Caramelization is a process involving the pyrolysis of sugar. It is used extensively in cooking for the desired nutty flavor and brown color. As the process occurs, volatile chemicals are released, producing the characteristic caramel flavor.

The other non-enzymatic reaction is the Maillard reaction. This reaction is responsible for the production of flavor when foods are cooked. Examples of foods that undergo the Maillard reaction include breads, steaks, and potatoes. It is a chemical reaction that takes place between the amine group of a free amino acid and the carbonyl group of a reducing sugar, usually with the addition of heat. The sugar interacts with the amino acid, producing a variety of odors and flavors. The Maillard reaction is the basis for producing artificial flavors for processed foods in the flavoring industry, since the type of amino acid involved determines the resulting flavor.

Melanoidins are brown, high molecular weight heterogeneous polymers that are formed when sugars and amino acids combine through the Maillard reaction at high temperatures and low water activity. Melanoidins are commonly present in foods that have undergone some form of non-enzymatic browning, such as barley malts (Vienna and Munich), bread crust, bakery products and coffee. They are also present in the wastewater of sugar refineries, necessitating treatment in order to avoid contamination around the outflow of these refineries.

Browning of grapes during winemaking
Like most fruit, grapes vary in the amount of phenolic compounds they contain. This characteristic is used as a parameter in judging the quality of the wine. The browning process in winemaking is initiated by the enzymatic oxidation of phenolic compounds by polyphenol oxidases. Contact between the phenolic compounds in the vacuole of the grape cell and the polyphenol oxidase enzyme (located in the cytoplasm) triggers the oxidation of the grape. Thus, the initial browning of grapes occurs as a result of "compartmentalization modification" in the cells of the grape.

Implications in food industry and technology
Enzymatic browning affects the color, flavor, and nutritional value of foods, causing huge economic losses when produce is not sold to consumers on time. It is estimated that more than 50% of produce is lost as a result of enzymatic browning.
The increase in human population and the consequent depletion of natural resources have prompted many biochemists and food engineers alike to find new or improved techniques to preserve food for longer, using methods that inhibit the browning reaction. This effectively increases the shelf life of foods, solving this part of the waste problem. A better understanding of enzymatic browning mechanisms, specifically the properties of the enzymes and substrates involved in the reaction, may help food technologists to control certain stages in the mechanism and ultimately apply that knowledge to inhibit browning.

Apples are fruits commonly studied by researchers due to their high phenolic content, which makes them highly susceptible to enzymatic browning. In accordance with other findings regarding apples and browning activity, a correlation has been found between higher phenolic quantities and increased enzymatic activity in apples. This provides a potential target, and thus hope, for food industries wishing to genetically modify foods to decrease polyphenol oxidase activity and thus decrease browning. An example of such accomplishments in food engineering is the production of Arctic apples. These apples, engineered by Okanagan Specialty Fruits Inc., are the result of gene splicing, a laboratory technique that has allowed for the reduction of polyphenol oxidase.

Another closely studied issue is the browning of seafood. Seafood, in particular shrimp, is a staple consumed by people all over the world. The browning of shrimp, which is actually referred to as melanosis, creates great concern for food handlers and consumers. Melanosis mainly occurs during postmortem handling and refrigerated storage. Recent studies have found that a plant extract which acts as a polyphenol oxidase inhibitor serves the same anti-melanosis function as sulfites, but without the health risks.

See also
Browning (partial cooking)
Decomposition
Gravy
Water activity

References

Food science
Biochemistry
Food browning
[ "Chemistry", "Biology" ]
2,620
[ "Biochemistry", "nan" ]
987,507
https://en.wikipedia.org/wiki/Fire%20safety
Fire safety is the set of practices intended to reduce destruction caused by fire. Fire safety measures include those that are intended to prevent the ignition of an uncontrolled fire and those that are used to limit the spread and impact of a fire. Fire safety measures include those that are planned during the construction of a building or implemented in structures that are already standing, and those that are taught or provided to occupants of the building.

Threats to fire safety are commonly referred to as fire hazards. A fire hazard may include a situation that increases the likelihood of a fire or may impede escape in the event a fire occurs.

Fire safety is often a component of building safety. Those who inspect buildings for violations of the fire code and go into schools to educate children on fire safety topics are fire department members known as Fire Prevention Officers. The Chief Fire Prevention Officer or Chief of Fire Prevention will normally train newcomers to the Fire Prevention Division and may also conduct inspections or make presentations.

Elements of a fire safety policy
Fire safety policies apply at the construction of a building and throughout its operating life. Building codes are enacted by local, sub-national, or national governments to ensure such features as adequate fire exits, signage, and construction details such as fire stops and fire-rated doors, windows, and walls. Fire safety is also an objective of electrical codes, which aim to prevent overheating of wiring or equipment and to protect from ignition by electrical faults.

Fire codes regulate such requirements as the maximum occupancy for buildings such as theatres or restaurants. Fire codes may require portable fire extinguishers within a building, or may require permanently installed fire detection and suppression equipment such as a fire sprinkler system and a fire alarm system.

Local authorities charged with fire safety may conduct regular inspections for such items as usable fire exits and proper exit signage, functional fire extinguishers of the correct type in accessible places, and proper storage and handling of flammable materials. Depending on local regulations, a fire inspection may result in a notice of required action, or closing of a building until it can be put into compliance with fire code requirements.

Owners and managers of a building may implement additional fire policies. For example, an industrial site may designate and train particular employees as a firefighting force. Managers must ensure buildings comply with fire evacuation regulations and that building features such as spray fireproofing remain undamaged. Fire policies may be in place to dictate training and awareness of occupants and users of the building to avoid obvious mistakes, such as the propping open of fire doors. Buildings, especially institutions such as schools, may conduct fire drills at regular intervals throughout the year.

Beyond individual buildings, other elements of fire safety policies may include technologies such as wood coatings, education and prevention, preparedness measures, wildfire detection and suppression, and ensuring sufficient local fire-extinguishing capacity across geographic areas.
Common fire hazards
Some common fire hazards are:
Kitchen fires from unattended cooking, grease fires/chip pan fires
Electrical systems that are overloaded, poorly maintained or defective
Combustible storage areas with insufficient protection
Combustibles near equipment that generates heat, flame, or sparks
Candles and other open flames
Smoking (cigarettes, cigars, pipes, lighters, etc.)
Equipment that generates heat and utilizes combustible materials
Flammable liquids and aerosols
Flammable solvents (and rags soaked with solvent) placed in enclosed trash cans
Fireplace chimneys not properly or regularly cleaned
Cooking appliances - stoves, ovens
Heating appliances - fireplaces, wood-burning stoves, furnaces, boilers, portable heaters, solid fuels
Household appliances - clothes dryers, curling irons, hair dryers, refrigerators, freezers, boilers
Chimneys that concentrate creosote
Electrical wiring in poor condition
Leaking/defective batteries
Personal ignition sources - matches, lighters
Electronic and electrical equipment
Exterior cooking equipment - barbecues

Fire code
In the United States, the fire code (also fire prevention code or fire safety code) is a model code adopted by the state or local jurisdiction and enforced by fire prevention officers within municipal fire departments. It is a set of rules prescribing minimum requirements to prevent fire and explosion hazards arising from storage, handling, or use of dangerous materials, or from other specific hazardous conditions. It complements the building code. The fire code is aimed primarily at preventing fires, ensuring that necessary training and equipment will be on hand, and ensuring that the original design basis of the building, including the basic plan set out by the architect, is not compromised. The fire code also addresses inspection and maintenance requirements of various fire protection equipment in order to maintain optimal active fire protection and passive fire protection measures.

A typical fire safety code includes administrative sections about the rule-making and enforcement process, and substantive sections dealing with fire suppression equipment, particular hazards such as containers and transportation of combustible materials, and specific rules for hazardous occupancies, industrial processes, and exhibitions.

Sections may establish the requirements for obtaining permits and the specific precautions required to remain in compliance with a permit. For example, a fireworks exhibition may require an application to be filed by a licensed pyrotechnician, providing the information necessary for the issuing authority to determine whether safety requirements can be met. Once a permit is issued, the same authority (or another delegated authority) may inspect the site and monitor safety during the exhibition, with the power to halt operations when unapproved practices are seen or when unforeseen hazards arise.
List of some typical fire and explosion issues in a fire code
Fireworks, explosives, mortars and cannons, model rockets (licenses for manufacture, storage, transportation, sale, use)
Certification for servicing, placement, and inspecting fire extinguishing equipment
General storage and handling of flammable liquids, solids, and gases (tanks, personnel training, markings, equipment)
Limitations on locations and quantities of flammables (e.g., 10 liters of gasoline inside a residential dwelling)
Specific uses and specific flammables (e.g., dry cleaning, gasoline distribution, explosive dusts, pesticides, space heaters, plastics manufacturing)
Permits and limitations in various building occupancies (assembly hall, hospital, school, theater, elderly care, child care centers) that require a smoke detector, sprinkler system, fire extinguisher, or other specific equipment or procedures
Removal of interior and exterior obstructions to emergency exits or firefighters, and removal of hazardous materials
Permits and limitations in special outdoor applications (tents, asphalt kettles, bonfires, etc.)
Other hazards (flammable decorations, welding, smoking, bulk matches, tire yards)
Electrical safety codes such as the National Electrical Code (by the National Fire Protection Association) for the U.S. and some other places in the Americas
Fuel gas code

Car fire

Public fire safety education
Most U.S. fire departments have fire safety education programs. Fire prevention programs may include distribution of smoke detectors, visiting schools to review key topics with the students, and implementing nationally recognized programs such as NFPA's "Risk Watch" and "Learn Not to Burn".

Other programs or props can be purchased by fire departments or community organizations. These are usually entertaining and designed to capture children's attention and relay important messages. Props include those that are mostly auditory, such as puppets and robots; the prop is visually stimulating, but the safety message is transmitted only orally. Other props are more elaborate, engaging more senses and increasing the learning factor. They mix audio messages and visual cues with hands-on interaction. Examples of these include mobile trailer safety houses and tabletop hazard house simulators. Some fire prevention software is also being developed to identify hazards in a home. All programs tend to mix messages of general injury prevention, safety, fire prevention, and escape in case of fire. In most cases the fire department representative is regarded as the expert and is expected to present information in a manner that is appropriate for each age group.

Fire educator qualifications
The US industry standard that outlines the recommended qualifications for fire safety educators is NFPA 1035: Standard for Professional Qualifications for Public Fire and Life Safety Educator, which includes the requirements for Fire and Life Safety Educator Levels I, II, and III; Public Information Officer; and Juvenile Firesetter Intervention Specialist Levels I and II.

Target audiences
According to the United States Fire Administration, the very young and the elderly are considered to be "at risk" populations. These groups represent approximately 33% of the population.

Global perspectives
Fire safety has been highlighted in relation to global supply chain management.
Sedex, the Supplier Ethical Data Exchange, a collaborative platform for sharing ethical supply chain data, and Verité, Inc., a Massachusetts-based supply chain investigatory NGO, issued a briefing in August 2013 which highlighted the significance of this issue. The briefing referred to several major factory fires, including the 2012 Dhaka garment factory fire in the Tazreen Fashion factory and other examples of fires in Bangladesh, Pakistan and elsewhere, compared the incidence of fire safety issues in a manufacturing context, and highlighted the need for buyers, suppliers and local fire safety enforcement agencies all to take action to improve fire safety within the supply chains for ready-made garments and other products. The briefing recommended that buyers seek greater visibility of fire safety and other risks across the supply chain and identify opportunities to improve standards: "buyers can encourage change through more responsible and consistent practices".

Fire safety plan
A fire safety plan is required by all North American national, state and provincial fire codes, based on building use or occupancy type. Generally, the owner of the building is responsible for the preparation of a fire safety plan. Buildings with elaborate emergency systems may require the assistance of a fire protection consultant. After the plan has been prepared, it must be submitted to the Chief Fire Official or authority having jurisdiction for approval. Once approved, the owner is responsible for implementing the fire safety plan and training all staff in their duties. It is also the owner's responsibility to ensure that all visitors and staff are informed of what to do in case of fire. During a fire emergency, a copy of the approved fire safety plan must be available for the responding fire department's use.

In the United Kingdom, a fire safety plan is called a fire risk assessment.

Fire safety plan structure
Key contact information
Utility services (including shut-off valves for water, gas and electric)
Access issues
Dangerous stored materials
Location of people with special needs
Connections to sprinkler system
Layout, drawing, and site plan of building
Maintenance schedules for life safety systems
Personnel training and fire drill procedures
Designated assembly point/safe zone

Use of fire safety plans
Fire safety plans are a useful tool for firefighters because they provide critical information about a building that firefighters may have to enter. Using this information, firefighters can locate and avoid potential dangers such as hazardous material (hazmat) storage areas and flammable chemicals. Fire safety plans can also provide specialized information: in the case of a hospital fire, for example, they can give the location of facilities such as the nuclear medicine ward.

Fire safety plans also greatly improve the safety of firefighters. According to FEMA, 16 percent of all firefighter deaths in 2002 occurred due to a structural collapse or because the firefighter got lost. Fire safety plans can outline any possible structural hazards, as well as help firefighters know where they are in the building.

Fire safety plans in the fire code
In North America alone, there are around 8 million buildings that legally require a fire safety plan due to provincial or state law.
Buildings which fit the fire code occupancy types, such as commercial, industrial, and assembly buildings, are required to have a fire safety plan, and not having one can result in a fine.

Advances in fire safety planning
As previously stated, a copy of the approved fire safety plan shall be available for the responding fire department. This, however, is not always the case. Until recently, fire plans were stored in paper form at the fire department. The problem with this is that sorting and storing these plans is a challenge, and it is difficult for building owners to update their fire plans. As a result, only half of the required buildings have fire plans, and of those, only around 10 percent are up to date. This problem has been addressed through the introduction of digital fire plans. These fire plans are stored in a database, can be accessed wirelessly on site by firefighters, and are much simpler for building owners to update.

Insurance companies
Fire is one of the biggest threats to property, with losses adding up to billions of dollars in damages every year. In 2019 alone, the total amount of property damage resulting from fire was $14.8 billion in the United States. Insurance companies in the United States are not only responsible for financially covering fire loss but are also responsible for managing the risk associated with it. Most commercial insurance companies hire a risk control specialist whose primary job is to survey property to ensure compliance with NFPA standards, assess the current risk level of the property, and make recommendations to reduce the probability of fire loss. Careers in property risk management continue to grow and have been projected to grow 4 to 8% from 2018 to 2028 in the United States.

See also

References

External links
Sample Fire Code Table of Contents from International Code Council

Fire prevention
Fire protection
Legal codes
Safety practices
Fire safety
[ "Engineering" ]
2,720
[ "Building engineering", "Fire protection" ]
987,546
https://en.wikipedia.org/wiki/Harmattan
The Harmattan is a season in West Africa that occurs between the end of November and the middle of March. It is characterized by the dry and dusty northeasterly trade wind, of the same name, which blows from the Sahara over West Africa into the Gulf of Guinea. The name is related to a word in the Twi language. The temperature is mostly cold at night in some places but can be very hot during the daytime in others; temperature differences depend largely on local circumstances.

The Harmattan blows during the dry season, which occurs during the months with the lowest sun. In this season, the subtropical ridge of high pressure stays over the central Sahara and the low-pressure Intertropical Convergence Zone (ITCZ) stays over the Gulf of Guinea. On its passage over the Sahara, the Harmattan picks up fine dust and sand particles (between 0.5 and 10 microns). It is also known as the "doctor wind", because of its invigorating dryness compared with humid tropical air.

Effects
This season differs from winter because it is characterized by a cold, dry, dust-laden wind, as well as wide fluctuations in ambient temperature between day and night. Temperatures can stay low all day, but in the afternoon they can also soar, while the relative humidity drops under 5%. It can also be hot in some regions, such as the Sahara. The air is particularly dry and desiccating when the Harmattan blows over the region. The Harmattan brings desert-like weather conditions: it lowers the humidity, dissipates cloud cover, prevents rainfall formation and sometimes creates big clouds of dust which can result in dust storms or sandstorms. The wind can increase fire risk and cause severe crop damage. The interaction of the Harmattan with monsoon winds can cause tornadoes.

Harmattan haze
In some countries in West Africa, the heavy amount of dust in the air can severely limit visibility and block the sun for several days, comparable to a heavy fog. This effect is known as the Harmattan haze. It costs airlines millions of dollars in cancelled and diverted flights each year. When the haze is weak, the skies are clear. The extreme dryness of the air may cause branches of trees to die.

Health
A 2024 study found that dust carried by the Harmattan increases infant and child mortality and has persistent adverse health impacts on surviving children. Humidity can drop lower than 15%, which can result in spontaneous nosebleeds for some people. Other health effects on humans may include dry skin, dried or chapped lips, irritated eyes, and respiratory problems, including aggravation of asthma.

See also
Khamsin

References

External links

Cold
Geography of Ghana
Geography of Nigeria
Geography of West Africa
Seasons
Winds
Harmattan
[ "Physics" ]
583
[ "Physical phenomena", "Earth phenomena", "Seasons" ]
987,604
https://en.wikipedia.org/wiki/Tobin%27s%20q
Tobin's q (or the q ratio, and Kaldor's v) is the ratio between a physical asset's market value and its replacement value. It was first introduced by Nicholas Kaldor in 1966 in his paper "Marginal Productivity and the Macro-Economic Theories of Distribution: Comment on Samuelson and Modigliani". It was popularised a decade later by James Tobin, who in 1970 described its two quantities as the market valuation of existing assets (the numerator) and their replacement or reproduction cost (the denominator).

Measurement

Single company
Although it is not the direct equivalent of Tobin's q, it has become common practice in the finance literature to calculate the ratio by comparing the market value of a company's equity and liabilities with its corresponding book values, as the replacement values of a company's assets are hard to estimate:

Tobin's q = (equity market value + liabilities market value) / (equity book value + liabilities book value)

It is also common practice to assume equivalence of the liabilities' market and book values, yielding:

Tobin's q = (equity market value + liabilities book value) / (equity book value + liabilities book value)

Even if the market and book values of liabilities are assumed to be equal, this is not equal to the "Market to Book Ratio" or "Price to Book Ratio" used in financial analysis, as the latter ratio is calculated for equity values only:

Market to Book Ratio = equity market value / equity book value

Financial analysis also often uses the inverse of this ratio, the "Book to Market Ratio":

Book to Market Ratio = equity book value / equity market value

For stock-listed companies, the market value of equity or market capitalization is often quoted in financial databases. For a specific point in time, it can be calculated as the share price multiplied by the number of shares outstanding.

Aggregate corporations
Another use for q is to determine the valuation of the whole market in ratio to the aggregate corporate assets. The formula for this is:

q = value of the stock market / replacement cost of aggregate corporate assets

The following graph is an example of Tobin's q for all U.S. corporations. The line shows the ratio of the US stock market value to US net assets at replacement cost since 1900.

Effect on capital investment
If the market value reflected solely the recorded assets of a company, Tobin's q would be 1.0. If Tobin's q is greater than 1.0, then the market value is greater than the value of the company's recorded assets. This suggests that the market value reflects some unmeasured or unrecorded assets of the company. High Tobin's q values encourage companies to invest more in capital because they are "worth" more than the price they paid for them. If a company's stock price (which is a measure of the company's capital market value) is $2 and the price of the capital in the current market is $1, so that q > 1, the company can issue shares and with the proceeds invest in capital, thus obtaining economic profit. On the other hand, if Tobin's q is less than 1, the market value is less than the recorded value of the assets of the company. This suggests that the market may be undervaluing the company, or that the company could increase profit by getting rid of some capital stock, either by selling it or by declining to replace it as it wears out.
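To make the arithmetic concrete, the following is a minimal sketch (not from the original article) computing the book-value proxy for q described in the Measurement section above. All balance-sheet figures are hypothetical, and the thresholds at the end simply restate the q > 1 / q < 1 reasoning:

```python
def tobins_q(equity_market_value: float,
             equity_book_value: float,
             liabilities_book_value: float) -> float:
    """Common finance-literature proxy for Tobin's q.

    Assumes the market value of liabilities equals their book value,
    which the article notes is standard practice.
    """
    return (equity_market_value + liabilities_book_value) / \
           (equity_book_value + liabilities_book_value)

# Hypothetical balance-sheet figures (in millions)
equity_market = 2_000.0   # market capitalization: share price x shares outstanding
equity_book = 1_200.0
liabilities_book = 800.0

q = tobins_q(equity_market, equity_book, liabilities_book)
mb_ratio = equity_market / equity_book  # market-to-book ratio, not the same as q

print(f"Tobin's q: {q:.2f}")                    # 1.40 for these numbers
print(f"Market-to-book ratio: {mb_ratio:.2f}")  # 1.67 -- illustrates q != M/B

# q > 1: the market values the firm above its recorded assets,
#        which encourages new capital investment.
# q < 1: the market values the firm below its recorded assets.
if q > 1:
    print("q > 1: issuing shares to invest in new capital is attractive")
else:
    print("q < 1: replacing capital at market prices is unattractive")
```

Note how the example also shows that the liabilities terms in the numerator and denominator keep q distinct from the simple market-to-book ratio, as the text above stresses.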
John Mihaljevic points out that "no straightforward balancing mechanism exists in the case of low Q ratios, i.e., when the market is valuing an asset below its replacement cost (Q<1). When Q is less than parity, the market seems to be saying that the deployed real assets will not earn a sufficient rate of return and that, therefore, the owners of such assets must accept a discount to the replacement value if they desire to sell their assets in the market. If the real assets can be sold off at replacement cost, for example via an asset liquidation, such an action would be beneficial to shareholders because it would drive the Q ratio back up toward parity (Q->1). In the case of the stock market as a whole, rather than a single firm, the conclusion that assets should be liquidated does not typically apply. A low Q ratio for the entire market does not mean that blanket redeployment of resources across the economy will create value. Instead, when market-wide Q is less than parity, investors are probably being overly pessimistic about future asset returns."

Lang and Stulz found that diversified companies have a lower q-ratio than focused firms, because the market penalizes the value of the firm's assets.

Tobin's insights show that movements in stock prices will be reflected in changes in consumption and investment, although empirical evidence shows that the relationship is not as tight as one would have thought. This is largely because firms do not blindly base fixed investment decisions on movements in the stock price; rather, they examine future interest rates and the present value of expected profits.

Other influences on q
Tobin's q measures two variables - the current price of capital assets as measured by accountants or statisticians and the market value of equity and bonds - but there are other elements that may affect the value of q, namely:
Market hype and speculation, reflecting, for example, analysts' views of the prospects for companies, or speculation such as bid rumors.
The "intellectual capital" of corporations, that is, the unmeasured contribution of knowledge, goodwill, technology and other intangible assets that a company may have but which aren't recorded by accountants. Some companies seek to develop ways to measure intangible assets such as intellectual capital; see balanced scorecard.
Tobin's q is said to be influenced by market hype and intangible assets, so that we see swings in q around the value of 1.

Kaldor's v and Tobin's q
In his 1966 paper "Marginal Productivity and the Macro-Economic Theory of Distribution: Comment on Samuelson and Modigliani", co-authored with Luigi Pasinetti, Nicholas Kaldor introduced this relationship as part of his broader theory of distribution, which was non-marginalist. This theory is today known as the 'Cambridge Growth Model' after the location (University of Cambridge, UK) where the theory was devised. In the paper Kaldor writes: "The 'valuation ratio' (v) [is] the relation of the market value of shares to the capital employed by corporations." Kaldor then goes on to explore the properties of v at a properly macroeconomic level. He ends up deriving an equation in which c is net consumption out of capital, sw is the savings of workers, g is the growth rate, Y is income, K is capital, sc is savings out of capital, and i is the fraction of new securities issued by firms. Kaldor then supplements this with an equation for the price, p, of securities. He then goes on to lay out his interpretation of these equations:

The interpretation of these equations is as follows. Given the savings-coefficients and the capital-gains-coefficient, there will be a certain valuation ratio which will secure just enough savings by the personal sector to take up the new securities issued by corporations. Hence the net savings of the personal sector (available for investment by the business sector) will depend, not only on the savings propensities of individuals, but on the policies of corporations towards new issues.
In the absence of new issues the level of security prices will be established at the point at which the purchases of securities by the savers will be balanced by the sale of securities of the dis-savers, making the net savings of the personal sector zero. The issue of new securities by corporations will depress security prices (i.e. the valuation ratio v) just enough to reduce the sale of securities by the dis-savers sufficiently to induce the net savings required to take up the new issues. If i were negative and the corporations were net purchasers of securities from the personal sector (which they could be through the redemption of past securities, or purchasing shares from the personal sector for the acquisition of subsidiaries) the valuation ratio v would be driven up to the point at which net personal savings would be negative to the extent necessary to match the sale of securities by the personal sector.

Kaldor is clearly laying out an equilibrium condition by which, ceteris paribus, the stock of savings in existence at any given time is matched to the total number of securities outstanding in the market. He goes on to state:

In a state of Golden Age equilibrium (given a constant g and a constant K/Y, however determined), v will be constant, with a value that can be greater or less than 1, depending on the values of sc, sw, c, and i.

In this sentence Kaldor is laying out the determination of the v ratio in equilibrium (a constant g and a constant K/Y) by: the savings out of capital, the savings of workers, net consumption out of capital, and the issuance of new shares by firms. Kaldor goes further still. Prior to this he had asserted that "the share of investment in total income is higher than the share of savings in wages, or in total personal income" is a "matter of fact" (i.e. a matter of empirical investigation that Kaldor thought would likely hold true). This is the so-called "Pasinetti inequality", and if we allow for it we can say something more concrete about the determination of v:

[One] can assert that, given the Pasinetti inequality, gK > sw·Y, v < 1 when c = (1 - sw) and i = 0; with i > 0 this will be true a fortiori.

This fits nicely with the fact that Kaldor's v and Tobin's q tend on average to be below 1, thus suggesting that Pasinetti's inequality likely does hold in empirical reality. Finally, Kaldor considers whether this exercise gives us any clue to the future development of income distribution in the capitalist system. The neoclassicals tended to argue that capitalism would eventually liquidate the capitalists and lead to a more homogeneous income distribution. Kaldor lays out a case whereby this might take place in his framework:

Has this "neo-Pasinetti theorem" any very-long-run "Pasinetti" or "anti-Pasinetti" solution? So far we have not taken any account of the change in distribution of assets between "workers" (i.e. pension funds) and "capitalists" - indeed we assumed it to be constant. However since the capitalists are selling shares (if c>0) and the pension funds are buying them, one could suppose that the share of total assets in the hands of the capitalists would diminish continually, whereas the share of assets in the hands of the workers' funds would increase continuously until, at some distant day, the capitalists have no shares left; the pension funds and insurance companies would own them all!
While this is a possible interpretation of the analysis, Kaldor warns against it and lays out an alternative interpretation of the results:

But this view ignores that the ranks of the capitalist class are constantly renewed by the sons and daughters of the new Captains of Industry, replacing the grandsons and granddaughters of the older Captains who gradually dissipate their inheritance through living beyond their dividend income. It is reasonable to assume that the value of the shares of the newly formed and growing companies grows at a higher rate than the average, whilst those of older companies (which decline in relative importance) grow at a lower rate. This means that the rate of capital appreciation of the shares in the hands of the capitalist group as a whole, for the reasons given above, is greater than the rate of appreciation of the assets in the hands of pension funds, etc. Given the difference in the rates of appreciation of the two funds of securities - and this depends on the rate at which new corporations emerge and replace older ones - I think it can be shown that there will be, for any given constellation of the value of the parameters, a long run equilibrium distribution of the assets between capitalists and pension funds which will remain constant.

Kaldor's theory of v is comprehensive and provides an equilibrium determination of the variable based on macroeconomic theory, something that was missing in most other discussions. But today it is largely neglected, and the focus is placed on Tobin's later contribution - hence the fact that the variable is known as Tobin's q and not Kaldor's v.

Cassel's q
In September 1996, at a lunch at the European Bank for Reconstruction and Development (EBRD), attended by Tobin, Mark Cutis of the EBRD, and Brian Reading and Gabriel Stein of Lombard Street Research Ltd, Tobin mentioned that "in common with most American economists, [he] did not read anything in a foreign language" and that "in common with most post-War economists [he] did not read anything published before World War II". He was therefore greatly embarrassed when he discovered that in the 1920s the Swedish economist Gustav Cassel had introduced a ratio between a physical asset's market value and its replacement value, which he called 'q'. Cassel's q thus antedates both Kaldor's and Tobin's by a number of decades.

Tobin's marginal q
Tobin's marginal q is the ratio of the market value of an additional unit of capital to its replacement cost.

Price-to-book ratio (P/B)
In inflationary times, q will be lower than the price-to-book ratio. During periods of very high inflation, the book value would understate the cost of replacing a firm's assets, since the inflated prices of its assets would not be reflected on its balance sheet.

Criticism
Olivier Blanchard, Changyong Rhee and Lawrence Summers, using data on the US economy from the 1920s to the 1990s, found that "fundamentals" predict investment much better than Tobin's q. What these authors call fundamentals is, however, the rate of profit, which connects these empirical findings with older ideas of authors such as Wesley Mitchell, or even Karl Marx, that profits are the basic engine of the market economy.

Doug Henwood, in his book Wall Street, argues that the q ratio fails to accurately predict investment, as Tobin claims. "The data for Tobin and Brainard's 1977 paper covers 1960 to 1974, a period for which q seemed to explain investment pretty well," he writes.
"But as the chart [see right] shows, things started going away even before the paper was published. While q and investment seemed to move together for the first half of the chart, they part ways almost at the middle; q collapsed during the bearish stock markets of the 1970s, yet investment rose." (p. 145) See also P/E ratio Dividend yield Replacement value Notes References Further reading Investment Valuation: Tools and Techniques for Determining the Value of Any Asset External links Tobin's Q Moderately Bullish on U.S. Equities (as of March 2009) The Manual of Ideas Launches Tobin's Q Research Service Based on James Tobin's Q Indicator Robert Huebscher on "The Market Valuation Q-uestion" Andrew Smithers' Q-Ratio FAQ Q-Ratio Graphs and Data Financial ratios Valuation (finance)
Tobin's q
[ "Mathematics" ]
3,088
[ "Financial ratios", "Quantity", "Metrics" ]
987,792
https://en.wikipedia.org/wiki/Barlow%27s%20wheel
Barlow's wheel was an early demonstration of a homopolar motor, designed and built by the English mathematician and physicist Peter Barlow in 1822. It consists of a star-shaped wheel, free to turn, suspended over a trough of liquid mercury, with its points dipping into the mercury, between the poles of a horseshoe magnet. A DC electric current passes from the hub of the wheel, through the wheel into the mercury, and out through an electrical contact dipping into the mercury. The Lorentz force of the magnetic field on the moving charges in the wheel causes the wheel to rotate. The serrations on the wheel are unnecessary, and the apparatus will work with a round metal disk, usually made of copper.

"The points of the wheel, R, dip into mercury contained in a groove hollowed in the stand. A more rapid revolution will be obtained if a small electro-magnet be substituted for a steel magnet, as is shown in the cut. The electro-magnet is fixed to the stand, and included in the circuit with the spur-wheel, so that the current flows through them in succession. Hence, the direction of the rotation will not be changed by reversing that of the current; since the polarity of the electromagnet will also be reversed." (Excerpt taken from the 1842 edition of the Manual of Magnetism, page 94)

It is used as a demonstration of electromagnetism in physics education. Because mercury is toxic, brine is sometimes used in place of mercury in modern recreations of the experiment.

How it works
The Barlow's wheel experiment shows the action of a magnet on a current and how this produces rotational motion. The apparatus consists of a star-shaped copper wheel capable of rotating freely in a vertical plane about a horizontal axis. The point of each spoke of the star just dips into a pool of mercury kept in a small groove on the wooden base of the apparatus. The pool of mercury is kept between the two opposite poles of a strong magnet. The wheel rotates with its plane perpendicular to the direction of the magnetic field, and during its rotation only one point of the star dips into the pool of mercury at a time. When the axis of the wheel and the mercury are connected to an electric cell, the circuit is completed through the axis of the wheel (when a point dips in the mercury) and the mercury. On passing current through the circuit, the wheel begins to rotate due to the action of the magnet on the current. The direction of rotation of the wheel can be determined by applying Fleming's left-hand rule. While the wheel rotates, the circuit breaks whenever a spoke leaves the mercury, but due to inertia the wheel continues its motion and brings the next spoke into contact with the mercury, thereby restoring the electrical contact. In this way the rotation of the wheel continues. On reversing the direction of the current or that of the magnetic field, the wheel rotates in the opposite direction. The speed of rotation depends upon the strength of the magnetic field and the strength of the current. Here mechanical energy is obtained from electrical energy.
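As a worked illustration of the Lorentz-force argument above, the following sketch (not from the source) estimates the torque on an idealized disk carrying a radial current in a uniform field. The uniform-field assumption overstates what a real horseshoe magnet provides, so this is an upper-bound estimate, and all numerical values are assumptions chosen for illustration:

```python
# Rough torque estimate for an idealized Barlow's wheel / homopolar disk.
# Assumes a uniform axial field B over the whole radial current path,
# which a real horseshoe magnet does not provide; values are illustrative
# assumptions, not measurements from Barlow's apparatus.

B = 0.05   # magnetic flux density in tesla (typical horseshoe magnet, assumed)
I = 2.0    # current through the wheel in amperes (assumed)
R = 0.05   # wheel radius in metres (assumed)

# Each element dr of the radial current path feels a Lorentz force
# dF = B * I * dr, acting at lever arm r from the axis. Integrating
# torque = B * I * r dr from 0 to R gives B * I * R**2 / 2.
torque = B * I * R**2 / 2

print(f"Estimated torque: {torque:.2e} N*m")  # 1.25e-04 N*m for these values
```

Even this tiny torque is enough to spin a light, nearly frictionless wheel, which is why the demonstration works with modest magnets and currents.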
References

External links
Barlow's wheel designed by Eckling 1840 (German page)
Barlow's Wheel - Interactive Java Tutorial, National High Magnetic Field Laboratory
5H40.50 BARLOW'S WHEEL

Electric motors

Barlow's wheel
[ "Technology", "Engineering" ]
675
[ "Electrical engineering", "Engines", "Electric motors" ]
987,847
https://en.wikipedia.org/wiki/Salomon%20Bochner
Salomon Bochner (20 August 1899 – 2 May 1982) was a Galician-born mathematician, known for work in mathematical analysis, probability theory and differential geometry.

Life
He was born into a Jewish family in Podgórze (near Kraków), then Austria-Hungary, now Poland. Fearful of a Russian invasion of Galicia at the beginning of World War I in 1914, his family moved to Germany, seeking greater security. Bochner was educated at a Berlin gymnasium (secondary school), and then at the University of Berlin. There, he was a student of Erhard Schmidt, writing a dissertation involving what would later be called the Bergman kernel. Shortly after this, he left the academy to help his family during the escalating inflation. After returning to mathematical research, he lectured at the University of Munich from 1924 to 1933. His academic career in Germany ended after the Nazis came to power in 1933, and he left for a position at Princeton University. He was a visiting scholar at the Institute for Advanced Study from 1945 to 1948. He was appointed as Henry Burchard Fine Professor in 1959, retiring in 1968. Although he was seventy years old when he retired from Princeton, Bochner was appointed as Edgar Odell Lovett Professor of Mathematics at Rice University and went on to hold this chair until his death in 1982. He became Head of Department at Rice in 1969 and held this position until 1976. He died in Houston, Texas. He was an Orthodox Jew.

Mathematical work
In 1925 he started work in the area of almost periodic functions, simplifying the approach of Harald Bohr by use of compactness and approximate identity arguments. In 1933 he defined the Bochner integral, as it is now called, for vector-valued functions. Bochner's theorem on Fourier transforms appeared in a 1932 book. His techniques came into their own as Pontryagin duality and then the representation theory of locally compact groups developed in the following years. Subsequently, he worked on multiple Fourier series, posing the question of the Bochner–Riesz means. This led to results on how the Fourier transform on Euclidean space behaves under rotations.

In differential geometry, Bochner published his formula on curvature in 1946. Joint work with Kentaro Yano (1912–1993) led to the 1953 book Curvature and Betti Numbers. It had consequences for the Kodaira vanishing theory, representation theory, and spin manifolds. Bochner also worked on several complex variables (the Bochner–Martinelli formula and the book Several Complex Variables from 1948 with W. T. Martin).

Selected publications

See also
Bochner almost periodic functions
Bochner–Kodaira–Nakano identity
Bochner Laplacian
Bochner measurable function

References

External links
National Academy of Sciences Biographical Memoir

1899 births
1982 deaths
20th-century Austrian mathematicians
20th-century American mathematicians
American Orthodox Jews
Austrian Orthodox Jews
Complex analysts
Differential geometers
German Orthodox Jews
Institute for Advanced Study visiting scholars
Jewish American scientists
Jewish emigrants from Nazi Germany to the United States
Jews from Galicia (Eastern Europe)
Mathematical analysts
Measure theorists
PDE theorists
Princeton University faculty
Probability theorists
Rice University faculty
Scientists from Berlin
Salomon Bochner
[ "Mathematics" ]
644
[ "Mathematical analysis", "Mathematical analysts" ]
987,902
https://en.wikipedia.org/wiki/The%20Food%20of%20the%20Gods%20and%20How%20It%20Came%20to%20Earth
The Food of the Gods and How It Came to Earth is a science fiction novel by H. G. Wells that was first published in 1904. Wells called it "a fantasia on the change of scale in human affairs. ... I had hit upon [the idea] while working out the possibilities of the near future in a book of speculations called Anticipations (1901)". The novel, which has had various B-movie adaptations, is about a group of scientists who invent a food that accelerates the growth of children and turns them into giants when they become adults. Plot summary Book I: The Discovery of the Food Research chemist Mr. Bensington specialises in "the More Toxic Alkaloids", and Professor Redwood studies reaction times and takes an interest in "Growth". Redwood's suggestion "that the process of growth probably demanded the presence of a considerable quantity of some necessary substance in the blood that was only formed very slowly" causes Bensington to begin searching for such a substance. After a year of research and experiment, he finds a way to make what he calls in his initial enthusiasm "the Food of the Gods" but later more soberly dubs Herakleophorbia IV. Their first experimental success is with chickens that grow to about six times their normal size on an experimental farm at Hickleybrow, near Urshot, Kent. Mr. and Mrs. Skinner, the couple hired to feed and monitor the chickens, eventually allow Herakleophorbia IV to enter the local food chain, and the other creatures that get the food grow to six or seven times their normal size: not only plants, but also wasps, earwigs, and rats. The chickens escape and overrun a nearby town. Bensington and Redwood do nothing until a decisive and efficient "well-known civil engineer" of their acquaintance, Cossar, arrives to organize a party of eight ("Obviously!") to destroy the wasps' nest, hunt down the monstrous vermin and burn the experimental farm to the ground. As debate ensues about the substance, popularly known as "Boomfood", children are being given the substance and grow to enormous size: Redwood's son ("pioneer of the new race"), Cossar's three sons, and Mrs. Skinner's grandson, Caddles. Dr. Winkles makes the substance available to a princess, and there are other giants as well. The massive offspring reach about 40 feet in height. At first, the giants are tolerated, but as they grow further, restrictions are imposed. In time, most of the English population comes to resent the young giants, as well as changes to flora, fauna and the organisation of society that become more extensive with each passing year. Bensington is nearly lynched by an angry mob and subsequently retires from active life to Mount Glory Hydrotherapeutic Hotel. Book II: The Food in the Village Mrs. Skinner's grandson, Albert Edward Caddles, becomes an epitome of "the coming of Bigness in the world". Book III: The Harvest of the Food A man released from prison after having been incarcerated for 20 years is shocked by how much everything has changed. British society has learned to cope with occasional outbreaks of giant pests (mosquitoes, spiders, rats, etc.), but the coming to maturity of the giant children brings a rabble-rousing politician, Caterham, nicknamed "Jack the Giant Killer", into power. Caterham has been promoting a program to destroy the Food of the Gods, hints that he will suppress the giants and now begins to execute his plan. By coincidence, it is at that moment that Caddles rebels against spending his life working in a chalk pit and sets out to see the world.
In London, he is surrounded by thousands of tiny people and confused by everything that he sees. He demands to know what it is all for and where he fits in, but no one can answer his questions. After refusing to return to his chalk pit, Caddles is shot and killed by the police. Meanwhile, a romance between the young giant Redwood and the princess blossoms just as Caterham, who has at last attained a position of power, launches an effort to suppress the giants. However, after two days of fighting, the giants, who have taken refuge in an enormous pit, have held their own. Their bombardment of London with shells containing large quantities of Herakleophorbia IV forces Caterham to call a truce. Caterham employs Redwood père as an envoy to send a proposed settlement, whose terms would demand that the giants live apart somewhere and forgo the right to reproduce. The offer is rejected at a meeting of the giants, and one of Cossar's sons expresses a belief in growth as part of the law of life: "We fight not for ourselves but for growth, growth that goes on for ever. Tomorrow, whether we live or die, growth will conquer through us. That is the law of the spirit for evermore. To grow according to the will of God!" The world ends up on the verge of a long struggle between the "little people" and the Children of the Food. Film, television and theatrical adaptations Bert I. Gordon adapted the work to the movies twice. He first co-wrote, produced and directed Village of the Giants (1965) for Embassy Pictures. In this film, the substance, called simply "Goo", is developed by an 11-year-old (Ron Howard) and is consumed by a gang of teenaged troublemakers (led by Beau Bridges), who become giants, take over the town, and turn the tables on the knee-high adults. They are eventually defeated by other teens (led by Tommy Kirk). The Food of the Gods was released by American International Pictures in 1976 and was again written, produced, and directed by Gordon. Based on a portion of the book, it reduces the tale to an "ecology strikes back" scenario, then common in science fiction films. The movie was not very successful and received a Golden Turkey Award for Worst Rodent Movie of All Time, "beating" such competitors as The Killer Shrews (1959), The Mole People (1956), The Nasty Rabbit (1965) and Night of the Lepus (1972). In 1989, Gnaw: Food of the Gods, Part 2 was released, written by Richard Bennett and directed by Damian Lee. Dealing with a pack of giant lab rats wreaking havoc on a college campus, it is even further removed from the book than Gordon's attempts. Comic book adaptations The Food of the Gods was first adapted for the comics in January 1961 for Classics Illustrated #160 with a painted cover by Gerald McCann, script by Alfred Sundel and interior artwork by Tony Tallarico. The giant wasps are depicted in only two panels, and the giant rats do not appear at all. A more dynamic and dramatic version, "told in the mighty Marvel manner", features in Marvel Classics Comics #22 (Marvel Comics 1977), written by Doug Moench with art by Sonny Trinidad. "Deadly Muffins" in Secrets of Sinister House #13 (DC Comics 1973) is an uncredited version of the story written by John Albano and drawn by Alfredo Alcala. Notes See also Dr. Ox's Experiment by Jules Verne References External links 1904 British novels 1904 science fiction novels Novels by H. G. Wells
British science fiction novels Fiction about size change Macmillan Publishers books Fictional food and drink Giants in popular culture British novels adapted into films Novels adapted into comics Science fiction novels adapted into films Experimental medical treatments in fiction
The Food of the Gods and How It Came to Earth
[ "Physics", "Mathematics" ]
1,562
[ "Fiction about size change", "Quantity", "Physical quantities", "Size" ]
987,959
https://en.wikipedia.org/wiki/Gram%20matrix
In linear algebra, the Gram matrix (or Gramian matrix, Gramian) of a set of vectors $v_1, \dots, v_n$ in an inner product space is the Hermitian matrix of inner products, whose entries are given by the inner product $G_{ij} = \langle v_i, v_j \rangle$. If the vectors $v_1, \dots, v_n$ are the columns of matrix $X$ then the Gram matrix is $X^\dagger X$ in the general case that the vector coordinates are complex numbers, which simplifies to $X^\top X$ for the case that the vector coordinates are real numbers. An important application is to compute linear independence: a set of vectors are linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero. It is named after Jørgen Pedersen Gram. Examples For finite-dimensional real vectors in $\mathbb{R}^n$ with the usual Euclidean dot product, the Gram matrix is $G = V^\top V$, where $V$ is a matrix whose columns are the vectors $v_k$ and $V^\top$ is its transpose whose rows are the vectors $v_k^\top$. For complex vectors in $\mathbb{C}^n$, $G = V^\dagger V$, where $V^\dagger$ is the conjugate transpose of $V$. Given square-integrable functions $\ell_1, \dots, \ell_n$ on the interval $[t_0, t_f]$, the Gram matrix is: $G_{ij} = \int_{t_0}^{t_f} \ell_i(\tau) \overline{\ell_j(\tau)} \, d\tau$, where $\overline{\ell_j(\tau)}$ is the complex conjugate of $\ell_j(\tau)$. For any bilinear form $B$ on a finite-dimensional vector space over any field we can define a Gram matrix $G$ attached to a set of vectors $b_1, \dots, b_n$ by $G_{ij} = B(b_i, b_j)$. The matrix will be symmetric if the bilinear form $B$ is symmetric. Applications In Riemannian geometry, given an embedded $k$-dimensional Riemannian manifold $M \subset \mathbb{R}^n$ and a parametrization $\phi \colon U \to M$ for $(x_1, \dots, x_k) \in U$, the volume form $\omega$ on $M$ induced by the embedding may be computed using the Gramian of the coordinate tangent vectors: $\omega = \sqrt{\det G} \, dx_1 \cdots dx_k$, where $G_{ij} = \langle \partial_i \phi, \partial_j \phi \rangle$. This generalizes the classical surface integral of a parametrized surface $\phi \colon U \to S \subset \mathbb{R}^3$ for $(x, y) \in U$: $\int_S f \, dA = \iint_U f(\phi(x, y)) \left\| \tfrac{\partial \phi}{\partial x} \times \tfrac{\partial \phi}{\partial y} \right\| dx \, dy$. If the vectors are centered random variables, the Gramian is approximately proportional to the covariance matrix, with the scaling determined by the number of elements in the vector. In quantum chemistry, the Gram matrix of a set of basis vectors is the overlap matrix. In control theory (or more generally systems theory), the controllability Gramian and observability Gramian determine properties of a linear system. Gramian matrices arise in covariance structure model fitting (see e.g., Jamshidian and Bentler, 1993, Applied Psychological Measurement, Volume 18, pp. 79–94). In the finite element method, the Gram matrix arises from approximating a function from a finite dimensional space; the Gram matrix entries are then the inner products of the basis functions of the finite dimensional subspace. In machine learning, kernel functions are often represented as Gram matrices. (Also see kernel PCA) Since the Gram matrix over the reals is a symmetric matrix, it is diagonalizable and its eigenvalues are non-negative. The diagonalization of the Gram matrix is the singular value decomposition. Properties Positive-semidefiniteness The Gram matrix is symmetric in the case the inner product is real-valued; it is Hermitian in the general, complex case by definition of an inner product. The Gram matrix is positive semidefinite, and every positive semidefinite matrix is the Gramian matrix for some set of vectors. The fact that the Gramian matrix is positive-semidefinite can be seen from the following simple derivation: $x^\dagger G x = \sum_{i,j} \overline{x_i} x_j \langle v_i, v_j \rangle = \sum_{i,j} \langle x_i v_i, x_j v_j \rangle = \Big\langle \sum_i x_i v_i, \sum_j x_j v_j \Big\rangle = \Big\| \sum_i x_i v_i \Big\|^2 \geq 0$. The first equality follows from the definition of matrix multiplication, the second and third from the bi-linearity of the inner-product, and the last from the positive definiteness of the inner product. Note that this also shows that the Gramian matrix is positive definite if and only if the vectors $v_i$ are linearly independent (that is, $\sum_i x_i v_i \neq 0$ for all $x \neq 0$).
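As a concrete illustration of the definitions above, the following minimal sketch (assuming NumPy is available; the example vectors are chosen arbitrarily and do not come from the text) builds the Gram matrix of two real vectors, uses the Gram determinant to test linear independence, and checks positive semidefiniteness numerically:

```python
import numpy as np

# Columns of X are the vectors v1 = (1, 0, 1) and v2 = (1, 1, 0) in R^3.
X = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])

# Gram matrix: X^dagger X in general; the entries here are real, so X^T X.
G = X.conj().T @ X
print(G)                                  # [[2. 1.] [1. 2.]]

# Nonzero Gram determinant <=> v1, v2 are linearly independent.
print(np.linalg.det(G) > 1e-12)           # True (the determinant is 3)

# Positive semidefiniteness: all eigenvalues >= 0 (up to round-off).
print(np.all(np.linalg.eigvalsh(G) >= -1e-12))   # True (eigenvalues 1 and 3)
```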
Finding a vector realization Given any positive semidefinite matrix $M$, one can decompose it as: $M = B^\dagger B$, where $B^\dagger$ is the conjugate transpose of $B$ (or $M = B^\top B$ in the real case). Here $B$ is a $k \times n$ matrix, where $k$ is the rank of $M$. Various ways to obtain such a decomposition include computing the Cholesky decomposition or taking the non-negative square root of $M$. The columns $b^{(1)}, \dots, b^{(n)}$ of $B$ can be seen as n vectors in $\mathbb{C}^k$ (or k-dimensional Euclidean space $\mathbb{R}^k$, in the real case). Then $M_{ij} = b^{(i)} \cdot b^{(j)}$, where the dot product $a \cdot b = \sum_{\ell=1}^{k} \overline{a_\ell} b_\ell$ is the usual inner product on $\mathbb{C}^k$. Thus a Hermitian matrix $M$ is positive semidefinite if and only if it is the Gram matrix of some vectors $b^{(1)}, \dots, b^{(n)}$. Such vectors are called a vector realization of $M$. The infinite-dimensional analog of this statement is Mercer's theorem. Uniqueness of vector realizations If $M$ is the Gram matrix of vectors $v_1, \dots, v_n$ in $\mathbb{R}^k$ then applying any rotation or reflection of $\mathbb{R}^k$ (any orthogonal transformation, that is, any Euclidean isometry preserving 0) to the sequence of vectors results in the same Gram matrix. That is, for any $k \times k$ orthogonal matrix $Q$, the Gram matrix of $Q v_1, \dots, Q v_n$ is also $M$. This is the only way in which two real vector realizations of $M$ can differ: the vectors $v_1, \dots, v_n$ are unique up to orthogonal transformations. In other words, the dot products $v_i \cdot v_j$ and $w_i \cdot w_j$ are equal if and only if some rigid transformation of $\mathbb{R}^k$ transforms the vectors $v_1, \dots, v_n$ to $w_1, \dots, w_n$ and 0 to 0. The same holds in the complex case, with unitary transformations in place of orthogonal ones. That is, if the Gram matrix of vectors $v_1, \dots, v_n$ is equal to the Gram matrix of vectors $w_1, \dots, w_n$ in $\mathbb{C}^k$ then there is a unitary $k \times k$ matrix $U$ (meaning $U^\dagger U = I$) such that $v_i = U w_i$ for $i = 1, \dots, n$. Other properties Because $G = G^\dagger$, it is necessarily the case that $G$ and $G^\dagger$ commute. That is, a real or complex Gram matrix $G$ is also a normal matrix. The Gram matrix of any orthonormal basis is the identity matrix. Equivalently, the Gram matrix of the rows or the columns of a real rotation matrix is the identity matrix. Likewise, the Gram matrix of the rows or columns of a unitary matrix is the identity matrix. The rank of the Gram matrix of vectors in $\mathbb{R}^k$ or $\mathbb{C}^k$ equals the dimension of the space spanned by these vectors. Gram determinant The Gram determinant or Gramian is the determinant of the Gram matrix: $|G(v_1, \dots, v_n)| = \det \big[ \langle v_i, v_j \rangle \big]_{i,j=1}^{n}$. If $v_1, \dots, v_n$ are vectors in $\mathbb{R}^m$ then it is the square of the n-dimensional volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the parallelotope has nonzero n-dimensional volume, if and only if the Gram determinant is nonzero, if and only if the Gram matrix is nonsingular. When $n > m$ the determinant and volume are zero. When $n = m$, this reduces to the standard theorem that the absolute value of the determinant of n n-dimensional vectors is the n-dimensional volume. The Gram determinant is also useful for computing the volume of the simplex formed by the vectors; its volume is $\sqrt{|G(v_1, \dots, v_n)|} \,/\, n!$. The Gram determinant can also be expressed in terms of the exterior product of vectors by $|G(v_1, \dots, v_n)| = \| v_1 \wedge \cdots \wedge v_n \|^2$. The Gram determinant therefore supplies an inner product for the space $\bigwedge^n(V)$. If an orthonormal basis $e_i$ on $V$ is given, the vectors $e_{i_1} \wedge \cdots \wedge e_{i_n}$ (with $i_1 < \cdots < i_n$) will constitute an orthonormal basis of n-dimensional volumes on the space $\bigwedge^n(V)$. Then the Gram determinant amounts to an n-dimensional Pythagorean Theorem for the volume of the parallelotope formed by the vectors (represented by $v_1 \wedge \cdots \wedge v_n$) in terms of its projections onto the basis volumes $e_{i_1} \wedge \cdots \wedge e_{i_n}$. When the vectors $v_1, \dots, v_n \in \mathbb{R}^m$ are defined from the positions of points $p_1, \dots, p_n$ relative to some reference point $p_{n+1}$, so that $v_i = p_i - p_{n+1}$, then the Gram determinant can be written as the difference of two Gram determinants, $|G(v_1, \dots, v_n)| = |G((p_1, 1), \dots, (p_{n+1}, 1))| - |G(p_1, \dots, p_{n+1})|$, where each $(p_j, 1)$ is the corresponding point $p_j$ supplemented with the coordinate value of 1 for an $(m+1)$-st dimension.
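A short sketch of a vector realization (assuming NumPy is available; the matrix M is an arbitrary positive definite example, not one from the text): Cholesky decomposition yields M = B^T B, so the columns of B realize M, and applying any rotation Q to the realization leaves the Gram matrix unchanged, as the uniqueness discussion above states:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # positive definite, hence a Gram matrix

# np.linalg.cholesky returns a lower-triangular C with M = C C^T,
# so B = C^T gives the factorization M = B^T B.
B = np.linalg.cholesky(M).T
print(np.allclose(B.T @ B, M))    # True: columns of B are a vector realization

# An orthogonal transformation of the realization gives the same Gram matrix.
theta = 0.7                       # arbitrary rotation angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose((Q @ B).T @ (Q @ B), M))   # True: Gram matrix is unchanged
```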
Note that in the common case that $n = m$, the second term on the right-hand side will be zero. Constructing an orthonormal basis Given a set of linearly independent vectors $\{v_i\}$ with Gram matrix $G$ defined by $G_{ij} = \langle v_i, v_j \rangle$, one can construct an orthonormal basis $u_i = \sum_j \big(G^{-1/2}\big)_{ji} v_j$. In matrix notation, $U = V G^{-1/2}$, where $U$ has the orthonormal basis vectors $\{u_i\}$ as its columns and the matrix $V$ is composed of the given column vectors $\{v_i\}$. The matrix $G^{-1/2}$ is guaranteed to exist. Indeed, $G$ is Hermitian, and so can be decomposed as $G = Q D Q^\dagger$ with $Q$ a unitary matrix and $D$ a real diagonal matrix. Additionally, the $v_i$ are linearly independent if and only if $G$ is positive definite, which implies that the diagonal entries of $D$ are positive. $G^{-1/2}$ is therefore uniquely defined by $G^{-1/2} = Q D^{-1/2} Q^\dagger$. One can check that these new vectors are orthonormal: $\langle u_i, u_j \rangle = \sum_{i'} \sum_{j'} \big\langle (G^{-1/2})_{i'i} v_{i'}, (G^{-1/2})_{j'j} v_{j'} \big\rangle = \sum_{i'} \sum_{j'} (G^{-1/2})_{ii'} G_{i'j'} (G^{-1/2})_{j'j} = \big(G^{-1/2} G G^{-1/2}\big)_{ij} = \delta_{ij}$, where we used $\big(G^{-1/2}\big)^\dagger = G^{-1/2}$. See also Controllability Gramian Observability Gramian References External links Volumes of parallelograms by Frank Jones Systems theory Matrices Determinants Analytic geometry Kernel methods for machine learning
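The $G^{-1/2}$ construction above can be checked numerically. A minimal sketch (assuming NumPy; the example vectors are chosen arbitrarily) diagonalizes G, forms $G^{-1/2}$, and verifies that the columns of $U = V G^{-1/2}$ are orthonormal:

```python
import numpy as np

# Columns of V: linearly independent vectors v1, v2 in R^3.
V = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])
G = V.T @ V                              # Gram matrix (real case)

# Diagonalize G = Q diag(w) Q^T; w > 0 because the v_i are independent.
w, Q = np.linalg.eigh(G)
G_inv_sqrt = Q @ np.diag(w ** -0.5) @ Q.T   # G^{-1/2} = Q D^{-1/2} Q^T

U = V @ G_inv_sqrt                       # columns u_i should be orthonormal
print(np.allclose(U.T @ U, np.eye(2)))   # True: the Gram matrix of U is I
```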
Gram matrix
[ "Mathematics" ]
1,665
[ "Matrices (mathematics)", "Mathematical objects" ]