Dataset fields: id (int64, 39 to 79M), url (string, lengths 31 to 227), text (string, lengths 6 to 334k), source (string, lengths 1 to 150), categories (list, lengths 1 to 6), token_count (int64, 3 to 71.8k), subcategories (list, lengths 0 to 30)
50,482
https://en.wikipedia.org/wiki/Flood
A flood is an overflow of water (or rarely other fluids) that submerges land that is usually dry. In the sense of "flowing water", the word may also be applied to the inflow of the tide. Floods are of significant concern in agriculture, civil engineering and public health. Human changes to the environment often increase the intensity and frequency of flooding. Examples of such changes include land use changes such as deforestation and removal of wetlands, changes in waterway course, or flood controls such as levees. Global environmental issues also influence the causes of floods, namely climate change, which causes an intensification of the water cycle and sea level rise. For example, climate change makes extreme weather events more frequent and stronger. This leads to more intense floods and increased flood risk. Natural types of floods include river flooding, groundwater flooding, coastal flooding and urban flooding, sometimes known as flash flooding. Tidal flooding may include elements of both river and coastal flooding processes in estuary areas. There is also the intentional flooding of land that would otherwise remain dry. This may take place for agricultural, military, or river-management purposes. For example, agricultural flooding may occur in preparing paddy fields for the growing of semi-aquatic rice in many countries. Flooding may occur as an overflow of water from water bodies, such as a river, lake, sea or ocean. In these cases, the water overtops or breaks levees, resulting in some of that water escaping its usual boundaries. Flooding may also occur due to an accumulation of rainwater on saturated ground. This is called an areal flood. The size of a lake or other body of water naturally varies with seasonal changes in precipitation and snow melt. Those changes in size are, however, not considered a flood unless they inundate property or drown domestic animals. Floods can also occur in rivers when the flow rate exceeds the capacity of the river channel, particularly at bends or meanders in the waterway. Floods often cause damage to homes and businesses if these buildings are in the natural flood plains of rivers. People could avoid riverine flood damage by moving away from rivers. However, people in many countries have traditionally lived and worked by rivers because the land is usually flat and fertile. Also, the rivers provide easy travel and access to commerce and industry. Flooding can damage property and also lead to secondary impacts. These include, in the short term, an increased spread of waterborne and vector-borne diseases, for example those transmitted by mosquitoes. Flooding can also lead to long-term displacement of residents. Floods are an area of study in hydrology and hydraulic engineering. A large share of the world's population lives in close proximity to major coastlines, while many major cities and agricultural areas are located near floodplains. There is significant risk of increased coastal and fluvial flooding due to changing climatic conditions. Types Areal flooding Floods can happen on flat or low-lying areas when water is supplied by rainfall or snowmelt more rapidly than it can either infiltrate or run off. The excess accumulates in place, sometimes to hazardous depths. Surface soil can become saturated, which effectively stops infiltration, where the water table is shallow, such as on a floodplain, or after intense rain from one or a series of storms. 
Infiltration also is slow to negligible through frozen ground, rock, concrete, paving, or roofs. Areal flooding begins in flat areas like floodplains and in local depressions not connected to a stream channel, because the velocity of overland flow depends on the surface slope. Endorheic basins may experience areal flooding during periods when precipitation exceeds evaporation. River flooding Floods occur in all types of river and stream channels, from the smallest ephemeral streams in humid zones to normally-dry channels in arid climates to the world's largest rivers. When overland flow occurs on tilled fields, it can result in a muddy flood where sediments are picked up by runoff and carried as suspended matter or bed load. Localized flooding may be caused or exacerbated by drainage obstructions such as landslides, ice, debris, or beaver dams. Slow-rising floods most commonly occur in large rivers with large catchment areas. The increase in flow may be the result of sustained rainfall, rapid snow melt, monsoons, or tropical cyclones. However, large rivers may have rapid flooding events in areas with dry climates, since they may have large basins but small river channels, and rainfall can be very intense in smaller areas of those basins. In extremely flat areas, such as the Red River Valley of the North in Minnesota, North Dakota, and Manitoba, a type of hybrid river/areal flooding can occur, known locally as "overland flooding". This is distinct from "overland flow", defined as "surface runoff". The Red River Valley is a former glacial lakebed, created by Lake Agassiz, and over its length the river course drops very little, for an average slope of about 5 inches per mile (or 8.2 cm per kilometer). In this very large area, spring snowmelt happens at different rates in different places, and if winter snowfall was heavy, a fast snowmelt can push water out of the banks of a tributary river so that it moves overland, to a point further downstream in the river or completely to another streambed. Overland flooding can be devastating because it is unpredictable, it can occur very suddenly with surprising speed, and in such flat land it can run for miles. It is these qualities that set it apart from simple "overland flow". Rapid flooding events, including flash floods, more often occur on smaller rivers, rivers with steep valleys, rivers that flow for much of their length over impermeable terrain, or normally-dry channels. The cause may be localized convective precipitation (intense thunderstorms) or sudden release from an upstream impoundment created behind a dam, landslide, or glacier. In one instance, a flash flood killed eight people enjoying the water on a Sunday afternoon at a popular waterfall in a narrow canyon. Without any observed rainfall, the flow rate increased dramatically in just one minute. Two larger floods occurred at the same site within a week, but no one was at the waterfall on those days. The deadly flood resulted from a thunderstorm over part of the drainage basin, where steep, bare rock slopes are common and the thin soil was already saturated. Flash floods are the most common flood type in normally-dry channels in arid zones, known as arroyos in the southwestern United States and by many other names elsewhere. In that setting, the first flood water to arrive is depleted as it wets the sandy stream bed. The leading edge of the flood thus advances more slowly than later and higher flows. 
As a result, the rising limb of the hydrograph becomes ever quicker as the flood moves downstream, until the flow rate is so great that the depletion by wetting soil becomes insignificant. Coastal flooding Coastal areas may be flooded by storm surges combining with high tides and large wave events at sea, resulting in waves over-topping flood defenses, or in severe cases by tsunamis or tropical cyclones. A storm surge, from either a tropical cyclone or an extratropical cyclone, falls within this category. A storm surge is "an additional rise of water generated by a storm, over and above the predicted astronomical tides". Due to the effects of climate change (e.g. sea level rise and an increase in extreme weather events) and an increase in the population living in coastal areas, the damage caused by coastal flood events has intensified and more people are being affected. Flooding in estuaries is commonly caused by a combination of storm surges, caused by winds and low barometric pressure, and large waves meeting high upstream river flows. Urban flooding Intentional floods The intentional flooding of land that would otherwise remain dry may take place for agricultural, military or river-management purposes. This is a form of hydraulic engineering. Agricultural flooding may occur in preparing paddy fields for the growing of semi-aquatic rice in many countries. Flooding for river management may occur in the form of diverting flood waters in a river at flood stage upstream from areas that are considered more valuable than the areas that are sacrificed in this way. This may be done ad hoc, or permanently, as in the so-called overlaten (literally "let-overs"), an intentionally lowered segment in Dutch riparian levees, like the Beerse Overlaat in the left levee of the Meuse between the villages of Gassel and Linden, North Brabant. Military inundation creates an obstacle in the field that is intended to impede the movement of the enemy. This may be done for both offensive and defensive purposes. Furthermore, in so far as the methods used are a form of hydraulic engineering, it may be useful to differentiate between controlled inundations and uncontrolled ones. Examples of controlled inundations include those in the Netherlands under the Dutch Republic and its successor states, exemplified in the two Hollandic Water Lines, the Stelling van Amsterdam, the Frisian Water Line, the IJssel Line, the Peel-Raam Line, and the Grebbe Line. To count as controlled, a military inundation has to take the interests of the civilian population into account, by allowing them a timely evacuation, by making the inundation reversible, and by making an attempt to minimize the adverse ecological impact of the inundation. That impact may also be adverse in a hydrogeological sense if the inundation lasts a long time. Examples of uncontrolled inundations are the second Siege of Leiden during the first part of the Eighty Years' War, the flooding of the Yser plain during the First World War, and the Inundation of Walcheren and the Inundation of the Wieringermeer during the Second World War. Causes Floods are caused by many factors or a combination of any of these: generally prolonged heavy rainfall (locally concentrated or throughout a catchment area), highly accelerated snowmelt, severe winds over water, unusually high tides, tsunamis, or failure of dams, levees, retention ponds, or other structures that retained the water. 
Flooding can be exacerbated by increased amounts of impervious surface or by other natural hazards such as wildfires, which reduce the supply of vegetation that can absorb rainfall. During times of rain, some of the water is retained in ponds or soil, some is absorbed by grass and vegetation, some evaporates, and the rest travels over the land as surface runoff. Floods occur when ponds, lakes, riverbeds, soil, and vegetation cannot absorb all the water. This has been exacerbated by human activities such as draining wetlands that naturally store large amounts of water and building paved surfaces that do not absorb any water. Water then runs off the land in quantities that cannot be carried within stream channels or retained in natural ponds, lakes, and human-made reservoirs. About 30 percent of all precipitation becomes runoff, and that amount may be increased by water from melting snow. Upslope factors River flooding is often caused by heavy rain, sometimes increased by melting snow. A flood that rises rapidly, with little or no warning, is called a flash flood. Flash floods usually result from intense rainfall over a relatively small area, or if the area was already saturated from previous precipitation. The amount, location, and timing of water reaching a drainage channel from natural precipitation and controlled or uncontrolled reservoir releases determine the flow at downstream locations. Some precipitation evaporates, some slowly percolates through soil, some may be temporarily sequestered as snow or ice, and some may produce rapid runoff from surfaces including rock, pavement, roofs, and saturated or frozen ground. The fraction of incident precipitation promptly reaching a drainage channel has been observed to range from nil for light rain on dry, level ground to as high as 170 percent for warm rain on accumulated snow. Most precipitation records are based on a measured depth of water received within a fixed time interval. The frequency of a precipitation threshold of interest may be determined from the number of measurements exceeding that threshold value within the total time period for which observations are available. Individual data points are converted to intensity by dividing each measured depth by the period of time between observations. This intensity will be less than the actual peak intensity if the duration of the rainfall event was less than the fixed time interval for which measurements are reported. Convective precipitation events (thunderstorms) tend to produce shorter duration storm events than orographic precipitation. Duration, intensity, and frequency of rainfall events are important to flood prediction. Short duration precipitation is more significant to flooding within small drainage basins. The most important upslope factor in determining flood magnitude is the land area of the watershed upstream of the area of interest. Rainfall intensity is the second most important factor for smaller watersheds, whereas the main channel slope is the second most important factor for larger watersheds. Channel slope and rainfall intensity become the third most important factors for small and large watersheds, respectively. The time of concentration is the time required for runoff from the most distant point of the upstream drainage area to reach the point of the drainage channel controlling flooding of the area of interest. It defines the critical duration of peak rainfall for the area of interest. 
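The depth-to-intensity conversion and threshold-frequency counting described above can be sketched in a few lines of code. The following is a minimal, illustrative sketch only: the reporting interval, depth values, and threshold are invented placeholders, not observed data.

```python
# Convert fixed-interval precipitation depths to intensities and count
# exceedances of an intensity threshold of interest. All values are illustrative.

interval_hours = 0.25  # assumed fixed reporting interval (15 minutes)
depths_mm = [0.0, 1.2, 4.8, 7.5, 2.0, 0.3]  # hypothetical measured depth per interval

# Intensity = measured depth divided by the period between observations.
# Note: this understates the true peak intensity if the rain fell in
# less time than one reporting interval.
intensities = [d / interval_hours for d in depths_mm]

threshold = 20.0  # mm/h, threshold of interest (assumption)
exceedances = sum(1 for i in intensities if i > threshold)

print("intensities (mm/h):", intensities)
print(f"{exceedances} of {len(intensities)} intervals exceed {threshold} mm/h")
```

Applied to a long observational record, the same count divided by the record length gives the empirical frequency of the chosen threshold.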
The critical duration of intense rainfall might be only a few minutes for roof and parking lot drainage structures, while cumulative rainfall over several days would be critical for river basins. Downslope factors Water flowing downhill ultimately encounters downstream conditions slowing movement. The final limitation in coastal lands is often the ocean, or coastal bars which form natural lakes. In low-lying lands, elevation changes such as tidal fluctuations are significant determinants of coastal and estuarine flooding. Less predictable events like tsunamis and storm surges may also cause elevation changes in large bodies of water. The elevation of flowing water is controlled by the geometry of the flow channel and, especially, by the depth of the channel, the speed of flow and the amount of sediment in it. Flow channel restrictions like bridges and canyons tend to control water elevation above the restriction. The actual control point for any given reach of the drainage may change with changing water elevation, so a closer point may control for lower water levels until a more distant point controls at higher water levels. Effective flood channel geometry may be changed by growth of vegetation, accumulation of ice or debris, or construction of bridges, buildings, or levees within the flood channel. Periodic floods occur on many rivers, forming a surrounding region known as the flood plain. Even when rainfall is relatively light, the shorelines of lakes and bays can be flooded by severe winds—such as during hurricanes—that blow water into the shore areas. Climate change Coincidence Extreme flood events often result from coincidences, such as unusually intense, warm rainfall melting heavy snow pack, producing channel obstructions from floating ice, and releasing small impoundments like beaver dams. Coincident events may cause extensive flooding to be more frequent than anticipated from simplistic statistical prediction models considering only precipitation runoff flowing within unobstructed drainage channels. Debris modification of channel geometry is common when heavy flows move uprooted woody vegetation and flood-damaged structures and vehicles, including boats and railway equipment. Recent field measurements during the 2010–11 Queensland floods showed that any criterion solely based upon the flow velocity, water depth or specific momentum cannot account for the hazards caused by velocity and water depth fluctuations. Such criteria also ignore the risks associated with large debris entrained by the flow. Negative impacts Floods can have enormous destructive power. Flowing water can demolish all kinds of buildings and objects, such as bridges, structures, houses, trees, and cars. Economic, social and environmental damage is common in flooding events, and the impacts can be catastrophic. Impacts on infrastructure and societies There have been numerous flood incidents around the world which have caused devastating damage to infrastructure, the natural environment and human life. Floods can have devastating impacts on human societies. Flooding events worldwide are increasing in frequency and severity, leading to increasing costs to societies. Catastrophic riverine flooding can result from major infrastructure failures, often the collapse of a dam. It can also be caused by drainage channel modification from a landslide, earthquake or volcanic eruption. 
Examples include outburst floods and lahars. Tsunamis can cause catastrophic coastal flooding, most commonly resulting from undersea earthquakes. Economic impacts The primary effects of flooding include loss of life and damage to buildings and other structures, including bridges, sewerage systems, roadways, and canals. The economic impacts caused by flooding can be severe. Every year flooding causes billions of dollars' worth of damage that threatens the livelihoods of individuals. As a result, flooding also poses significant socio-economic threats to vulnerable populations around the world. For example, in Bangladesh in 2007, a flood was responsible for the destruction of more than one million houses. In the United States, floods cause over $7 billion in damage each year. Flood waters typically inundate farmland, making the land unworkable and preventing crops from being planted or harvested, which can lead to shortages of food both for humans and farm animals. Entire harvests for a country can be lost in extreme flood circumstances. Some tree species may not survive prolonged flooding of their root systems. Flooding in areas where people live also has significant economic implications for affected neighborhoods. In the United States, industry experts estimate that wet basements can lower property values by 10–25 percent and are cited among the top reasons for not purchasing a home. According to the U.S. Federal Emergency Management Agency (FEMA), almost 40 percent of small businesses never reopen their doors following a flooding disaster. In the United States, insurance is available against flood damage to both homes and businesses. Economic hardship due to a temporary decline in tourism, rebuilding costs, or food shortages leading to price increases is a common after-effect of severe flooding. Severe flooding may also cause psychological damage to those affected, in particular where deaths, serious injuries and loss of property occur. Health impacts Fatalities connected directly to floods are usually caused by drowning; the waters in a flood are very deep and have strong currents. Deaths do not occur only from drowning; they are also connected with dehydration, heat stroke, heart attack and any other illness that requires medical supplies that cannot be delivered. Injuries can lead to considerable morbidity when a flood occurs. They are not limited to those who were directly in the flood; rescue teams and even people delivering supplies can sustain injuries. Injuries can occur at any time during the flood process: before, during and after. During floods, accidents occur with falling debris or any of the many fast-moving objects in the water. After the flood, rescue attempts are where large numbers of injuries can occur. Communicable diseases increase because many pathogens and bacteria are transported by the water. There are many waterborne diseases, such as cholera, hepatitis A, hepatitis E and diarrheal diseases, to mention a few. Gastrointestinal and diarrheal diseases are very common due to a lack of clean water during a flood; most clean water supplies are contaminated when flooding occurs. Hepatitis A and E are common because of the lack of sanitation in the water and in living quarters, depending on where the flood is and how prepared the community is for a flood. When floods hit, people may lose nearly all their crops, livestock, and food reserves and face starvation. 
Floods also frequently damage power transmission and sometimes power generation, which then has knock-on effects caused by the loss of power. This includes loss of drinking water treatment and water supply, which may result in loss of drinking water or severe water contamination. It may also cause the loss of sewage disposal facilities. Lack of clean water combined with human sewage in the flood waters raises the risk of waterborne diseases, which can include typhoid, giardia, cryptosporidium, cholera and many other diseases depending upon the location of the flood. Damage to roads and transport infrastructure may make it difficult to mobilize aid to those affected or to provide emergency health treatment. Flooding can cause chronically wet houses, leading to the growth of indoor mold and resulting in adverse health effects, particularly respiratory symptoms. Respiratory diseases are common after the disaster has occurred, depending on the amount of water damage and mold that grows after an incident. Research suggests that there will be an increase of 30–50% in adverse respiratory health outcomes caused by dampness and mold exposure for those living in coastal and wetland areas. Fungal contamination in homes is associated with increased allergic rhinitis and asthma. Vector-borne diseases increase as well due to the increase in standing water after the floods have settled; they include malaria, dengue, West Nile fever, and yellow fever. Floods have a huge impact on victims' psychosocial integrity. People suffer from a wide variety of losses and stress. One of the most commonly treated long-term health problems is depression caused by the flood and the tragedies that accompany it. Loss of life Below is a list of the deadliest floods worldwide, showing events with death tolls at or above 100,000 individuals. Positive impacts (benefits) Floods (in particular more frequent or smaller floods) can also bring many benefits, such as recharging ground water, making soil more fertile and increasing nutrients in some soils. Flood waters provide much-needed water resources in arid and semi-arid regions where precipitation can be very unevenly distributed throughout the year, and they kill pests in farmland. Freshwater floods particularly play an important role in maintaining ecosystems in river corridors and are a key factor in maintaining floodplain biodiversity. Flooding can spread nutrients to lakes and rivers, which can lead to increased biomass and improved fisheries for a few years. For some fish species, an inundated floodplain may form a highly suitable location for spawning with few predators and enhanced levels of nutrients or food. Fish, such as the weather fish, make use of floods in order to reach new habitats. Bird populations may also profit from the boost in food production caused by flooding. Because periodic flooding made soil more fertile and provided it with nutrients, it was essential to the well-being of ancient communities along the Tigris-Euphrates Rivers, the Nile River, the Indus River, the Ganges and the Yellow River, among others. The viability of hydropower, a renewable source of energy, is also higher in flood-prone regions. Protections against floods and associated hazards Flood management Flood management examples In many countries around the world, waterways prone to floods are often carefully managed. 
Defenses such as detention basins, levees, bunds, reservoirs, and weirs are used to prevent waterways from overflowing their banks. When these defenses fail, emergency measures such as sandbags or portable inflatable tubes are often used to try to stem flooding. Coastal flooding has been addressed in portions of Europe and the Americas with coastal defenses, such as sea walls, beach nourishment, and barrier islands. In the riparian zone near rivers and streams, erosion control measures can be taken to try to slow down or reverse the natural forces that cause many waterways to meander over long periods of time. Flood controls, such as dams, can be built and maintained over time to try to reduce the occurrence and severity of floods as well. In the United States, the U.S. Army Corps of Engineers maintains a network of such flood control dams. In areas prone to urban flooding, one solution is the repair and expansion of human-made sewer systems and stormwater infrastructure. Another strategy is to reduce impervious surfaces in streets, parking lots and buildings through natural drainage channels, porous paving, and wetlands (collectively known as green infrastructure or sustainable urban drainage systems (SUDS)). Areas identified as flood-prone can be converted into parks and playgrounds that can tolerate occasional flooding. Ordinances can be adopted to require developers to retain stormwater on site and require buildings to be elevated, protected by floodwalls and levees, or designed to withstand temporary inundation. Property owners can also invest in solutions themselves, such as re-landscaping their property to take the flow of water away from their building and installing rain barrels, sump pumps, and check valves. Flood safety planning In the United States, the National Weather Service gives out the advice "Turn Around, Don't Drown" for floods; that is, it recommends that people get out of the area of a flood, rather than trying to cross it. At the most basic level, the best defense against floods is to seek higher ground for high-value uses while balancing the foreseeable risks with the benefits of occupying flood hazard zones. Critical community-safety facilities, such as hospitals, emergency-operations centers, and police, fire, and rescue services, should be built in areas least at risk of flooding. Structures, such as bridges, that must unavoidably be in flood hazard areas should be designed to withstand flooding. Areas most at risk for flooding could be put to valuable uses that could be abandoned temporarily as people retreat to safer areas when a flood is imminent. Planning for flood safety involves many aspects of analysis and engineering, including: observation of previous and present flood heights and inundated areas, statistical, hydrologic, and hydraulic model analyses, mapping inundated areas and flood heights for future flood scenarios, long-term land use planning and regulation, engineering design and construction of structures to control or withstand flooding, intermediate-term monitoring, forecasting, and emergency-response planning, and short-term monitoring, warning, and response operations. Each topic presents distinct yet related questions with varying scope and scale in time, space, and the people involved. Attempts to understand and manage the mechanisms at work in floodplains have been made for at least six millennia. 
In the United States, the Association of State Floodplain Managers works to promote education, policies, and activities that mitigate current and future losses, costs, and human suffering caused by flooding and to protect the natural and beneficial functions of floodplains – all without causing adverse impacts. A portfolio of best practice examples for disaster mitigation in the United States is available from the Federal Emergency Management Agency. Flood clean-up safety Clean-up activities following floods often pose hazards to workers and volunteers involved in the effort. Potential dangers include electrical hazards, carbon monoxide exposure, musculoskeletal hazards, heat or cold stress, motor vehicle-related dangers, fire, drowning, and exposure to hazardous materials. Because flooded disaster sites are unstable, clean-up workers might encounter sharp jagged debris, biological hazards in the flood water, exposed electrical lines, blood or other body fluids, and animal and human remains. In planning for and reacting to flood disasters, managers provide workers with hard hats, goggles, heavy work gloves, life jackets, and watertight boots with steel toes and insoles. Flood predictions Mathematical models and computer tools A series of annual maximum flow rates in a stream reach can be analyzed statistically to estimate the 100-year flood and floods of other recurrence intervals there. Similar estimates from many sites in a hydrologically similar region can be related to measurable characteristics of each drainage basin to allow indirect estimation of flood recurrence intervals for stream reaches without sufficient data for direct analysis. Physical process models of channel reaches are generally well understood and will calculate the depth and area of inundation for given channel conditions and a specified flow rate, such as for use in floodplain mapping and flood insurance. Conversely, given the observed inundation area of a recent flood and the channel conditions, a model can calculate the flow rate. Applied to various potential channel configurations and flow rates, a reach model can contribute to selecting an optimum design for a modified channel. Various reach models are available as of 2015, either 1D models (flood levels measured in the channel) or 2D models (variable flood depths measured across the extent of a floodplain). HEC-RAS, the Hydrologic Engineering Center's River Analysis System, is among the most popular software packages, if only because it is available free of charge. Other models such as TUFLOW combine 1D and 2D components to derive flood depths across both river channels and the entire floodplain. Physical process models of complete drainage basins are even more complex. Although many processes are well understood at a point or for a small area, others are poorly understood at all scales, and process interactions under normal or extreme climatic conditions may be unknown. Basin models typically combine land-surface process components (to estimate how much rainfall or snowmelt reaches a channel) with a series of reach models. For example, a basin model can calculate the runoff hydrograph that might result from a 100-year storm, although the recurrence interval of a storm is rarely equal to that of the associated flood. Basin models are commonly used in flood forecasting and warning, as well as in analysis of the effects of land use change and climate change. In the United States, an integrated approach to real-time hydrologic computer modelling uses observed data from the U.S. 
Geological Survey (USGS), various cooperative observing networks, various automated weather sensors, the NOAA National Operational Hydrologic Remote Sensing Center (NOHRSC), various hydroelectric companies, etc. combined with quantitative precipitation forecasts (QPF) of expected rainfall and/or snow melt to generate daily or as-needed hydrologic forecasts. The NWS also cooperates with Environment Canada on hydrologic forecasts that affect both the US and Canada, such as in the area of the Saint Lawrence Seaway. The Global Flood Monitoring System, "GFMS", a computer tool which maps flood conditions worldwide, is available online. Users anywhere in the world can use GFMS to determine when floods may occur in their area. GFMS uses precipitation data from NASA's Earth observing satellites and the Global Precipitation Measurement satellite, "GPM". Rainfall data from GPM is combined with a land surface model that incorporates vegetation cover, soil type, and terrain to determine how much water is soaking into the ground and how much water is running off into streams. Users can view statistics for rainfall, streamflow, water depth, and flooding every 3 hours, at each 12-kilometer gridpoint on a global map. Forecasts for these parameters extend 5 days into the future. Users can zoom in to see inundation maps (areas estimated to be covered with water) in 1-kilometer resolution. Flood forecasts and warnings Anticipating floods before they occur allows for precautions to be taken and people to be warned so that they can be prepared in advance for flooding conditions. For example, farmers can remove animals from low-lying areas and utility services can put in place emergency provisions to re-route services if needed. Emergency services can also make provisions to have enough resources available ahead of time to respond to emergencies as they occur. People can evacuate areas to be flooded. In order to make the most accurate flood forecasts for waterways, it is best to have a long time-series of historical data that relates stream flows to measured past rainfall events. Coupling this historical information with real-time knowledge about volumetric capacity in catchment areas, such as spare capacity in reservoirs, ground-water levels, and the degree of saturation of area aquifers, further improves forecast accuracy. Radar estimates of rainfall and general weather forecasting techniques are also important components of good flood forecasting. In areas where good quality data is available, the intensity and height of a flood can be predicted with fairly good accuracy and plenty of lead time. The output of a flood forecast is typically a maximum expected water level and the likely time of its arrival at key locations along a waterway, and it also may allow for the computation of the likely statistical return period of a flood. In many developed countries, urban areas at risk of flooding are protected against a 100-year flood – that is, a flood that has a probability of around 63% of occurring in any 100-year period of time. According to the U.S. National Weather Service (NWS) Northeast River Forecast Center (RFC) in Taunton, Massachusetts, a rule of thumb for flood forecasting in urban areas is that intense rainfall concentrated within around an hour's time is enough to start significant ponding of water on impermeable surfaces. 
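The recurrence-interval arithmetic mentioned above (a 100-year flood having roughly a 63% chance of occurring in any 100-year period) and the statistical use of annual maximum flows can be illustrated with a short sketch. The flow values and the Weibull plotting-position convention used here are assumptions for illustration, not data or methods taken from this article.

```python
# Probability that a flood with annual exceedance probability p occurs
# at least once in n years: 1 - (1 - p)**n.
p_annual = 0.01   # a 100-year flood has a 1% chance in any given year
n_years = 100
print(round(1 - (1 - p_annual) ** n_years, 3))  # ~0.634, i.e. around 63%

# Rough return-period estimates from a short series of annual maximum
# flows (hypothetical values, m^3/s), using the Weibull plotting
# position T = (N + 1) / rank for each ranked annual maximum.
annual_maxima = [310, 270, 455, 390, 220, 510, 340, 295, 430, 375]
ranked = sorted(annual_maxima, reverse=True)
N = len(ranked)
for rank, flow in enumerate(ranked, start=1):
    print(f"flow {flow} m^3/s: estimated return period ~{(N + 1) / rank:.1f} years")
```

Estimating the 100-year flood itself from such a short record requires fitting a probability distribution to the annual maxima, as in the statistical analyses described above, since the record spans far fewer than 100 years.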
Many NWS RFCs routinely issue Flash Flood Guidance and Headwater Guidance, which indicate the general amount of rainfall that would need to fall in a short period of time in order to cause flash flooding or flooding on larger water basins. Flood risk assessment Flood risks can be defined as the risk that floods pose to individuals, property and the natural landscape based on specific hazards and vulnerability. The extent of flood risks can impact the types of mitigation strategies required and implemented. A large share of the world's population lives in close proximity to major coastlines, while many major cities and agricultural areas are located near floodplains. There is significant risk of increased coastal and fluvial flooding due to changing climatic conditions. Examples by country or region
Worldwide: List of floods
Africa: List of floods#Africa
Asia: List of floods#Asia
Europe: List of floods in Europe
North Sea: Storm tides of the North Sea
The Netherlands: Floods in the Netherlands, Flood control in the Netherlands
Oceania: List of floods#Oceania
Australia: Floods in Australia
United States: Lists of floods in the United States
Society and culture Myths and religion Etymology The word "flood" comes from an Old English word common to Germanic languages, from the same root as is seen in flow and float, meaning "a flowing of water, tide, an overflowing of land by water, a deluge, Noah's Flood; mass of water, river, sea, wave". The Old English word derives from the Proto-Germanic floduz, with cognates in Old Frisian, Old Norse, Middle Dutch, Dutch, German, and Gothic.
Flood
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
6,967
[ "Physical phenomena", "Earth phenomena", "Hydrology", "Weather hazards", "Weather", "Natural disasters", "Flood", "Meteorological phenomena", "Environmental engineering", "Water" ]
50,513
https://en.wikipedia.org/wiki/Melanin
Melanin is a family of biomolecules organized as oligomers or polymers, which among other functions provide the pigments of many organisms. Melanin pigments are produced in a specialized group of cells known as melanocytes. There are five basic types of melanin: eumelanin, pheomelanin, neuromelanin, allomelanin and pyomelanin. Melanin is produced through a multistage chemical process known as melanogenesis, where the oxidation of the amino acid tyrosine is followed by polymerization. Pheomelanin is a cysteinated form containing polybenzothiazine portions that are largely responsible for the red or yellow tint given to some skin or hair colors. Neuromelanin is found in the brain. Research has been undertaken to investigate its efficacy in treating neurodegenerative disorders such as Parkinson's. Allomelanin and pyomelanin are two types of nitrogen-free melanin. The phenotypic color variation observed in the epidermis and hair of mammals is primarily determined by the levels of eumelanin and pheomelanin in the examined tissue. In an average human individual, eumelanin is more abundant in tissues requiring photoprotection, such as the epidermis and the retinal pigment epithelium. In healthy subjects, epidermal melanin is correlated with UV exposure, while retinal melanin has been found to correlate with age, with levels diminishing 2.5-fold between the first and ninth decades of life, which has been attributed to oxidative degradation mediated by reactive oxygen species generated via lipofuscin-dependent pathways. In the absence of albinism or hyperpigmentation, the human epidermis contains approximately 74% eumelanin and 26% pheomelanin, largely irrespective of skin tone, with eumelanin content ranging from 71.8% to 78.9% and pheomelanin varying from 21.1% to 28.2%. Total melanin content in the epidermis ranges from around 0 μg/mg in albino epidermal tissue to >10 μg/mg in darker tissue. In the human skin, melanogenesis is initiated by exposure to UV radiation, causing the skin to darken. Eumelanin is an effective absorbent of light; the pigment is able to dissipate over 99.9% of absorbed UV radiation. Because of this property, eumelanin is thought to protect skin cells from UVA and UVB radiation damage, reducing the risk of folate depletion and dermal degradation. Exposure to UV radiation is associated with increased risk of malignant melanoma, a cancer of melanocytes (melanin cells). Studies have shown a lower incidence of skin cancer in individuals with more concentrated melanin, i.e. darker skin tone. Melanin types Eumelanin Eumelanin has two forms, linked to 5,6-dihydroxyindole (DHI) and 5,6-dihydroxyindole-2-carboxylic acid (DHICA). DHI-derived eumelanin is dark brown or black and insoluble, while DHICA-derived eumelanin is lighter and soluble in alkali. Both eumelanins arise from the oxidation of tyrosine in specialized organelles called melanosomes. This reaction is catalyzed by the enzyme tyrosinase. The initial product, dopaquinone, can transform into either 5,6-dihydroxyindole (DHI) or 5,6-dihydroxyindole-2-carboxylic acid (DHICA). DHI and DHICA are oxidized and then polymerize to form the two eumelanins. In natural conditions, DHI and DHICA often co-polymerize, resulting in a range of eumelanin polymers. These polymers contribute to the variety of melanin components in human skin and hair, ranging from light yellow/red pheomelanin to light brown DHICA-enriched eumelanin and dark brown or black DHI-enriched eumelanin. These final polymers differ in solubility and color. 
Analysis of highly pigmented (Fitzpatrick type V and VI) skin finds that DHI-eumelanin comprises the largest portion, approximately 60–70%, followed by DHICA-eumelanin at 25–35%, and pheomelanin at only 2–8%. Notably, while an enrichment of DHI-eumelanin occurs during sun tanning, it is accompanied by a decrease in DHICA-eumelanin and pheomelanin. A small amount of black eumelanin in the absence of other pigments causes grey hair. A small amount of eumelanin in the absence of other pigments causes blond hair. Eumelanin is present in the skin, hair, and other tissues. Pheomelanin Pheomelanins (or phaeomelanins, from Greek φαιός phaios, "grey") impart a range of yellowish to reddish colors. Pheomelanins are particularly concentrated in the lips, nipples, glans of the penis, and vagina. When a small amount of eumelanin in hair (which would otherwise cause blond hair) is mixed with pheomelanin, the result is orange hair, which is typically called "red" or "ginger" hair. Pheomelanin is also present in the skin, and redheads consequently often have a more pinkish hue to their skin as well. Exposure of the skin to ultraviolet light increases pheomelanin content, as it does for eumelanin; but rather than absorbing light, pheomelanin within the hair and skin reflects yellow to red light, which may increase damage from UV radiation exposure. Pheomelanin production is highly dependent on the availability of cysteine, which is transported into the melanosome and reacts with dopaquinone to form cys-dopa. Cys-dopa then undergoes several transformations before forming pheomelanin. In chemical terms, pheomelanins differ from eumelanins in that the oligomer structure incorporates benzothiazine and benzothiazole units that are produced, instead of DHI and DHICA, when the amino acid L-cysteine is present. Pheomelanins, unlike eumelanins, are rare in lower organisms, with claims that they are an "evolutionary innovation in the tetrapod lineage", but recent research finds them also in some fish. Neuromelanin Neuromelanin (NM) is an insoluble polymer pigment produced in specific populations of catecholaminergic neurons in the brain. Humans have the largest amount of NM, which is present in lesser amounts in other primates and totally absent in many other species. The biological function remains unknown, although human NM has been shown to efficiently bind transition metals such as iron, as well as other potentially toxic molecules. Therefore, it may play crucial roles in apoptosis and the related Parkinson's disease. Other forms of melanins Up until the 1960s, melanin was classified into eumelanin and pheomelanin. However, in 1955, a melanin associated with nerve cells, neuromelanin, was discovered. In 1972, a water-soluble form, pyomelanin, was discovered. In 1976, allomelanin, the fifth form of the melanins, was found in nature. Peptidomelanin Peptidomelanin is another water-soluble form of melanin. It was found to be secreted into the surrounding medium by germinating Aspergillus niger (strain: melanoliber) spores. Peptidomelanin is formed as a copolymer between L-DOPA eumelanin and short peptides that form a 'corona', which is responsible for the substance's solubility. The peptide chains are linked to the L-DOPA core polymer via peptide bonds. This led to a proposed biosynthetic process involving the hydroxylation of tyrosinylated peptides formed via proteases during sporogenesis, which are then incorporated autoxidatively into a growing L-DOPA core polymer. 
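As a small worked illustration of the composition figures given above (the roughly constant 74%/26% eumelanin-to-pheomelanin split of total epidermal melanin, and the DHI/DHICA/pheomelanin proportions reported for Fitzpatrick type V and VI skin), here is a minimal sketch; the sample total-melanin value and the range midpoints are illustrative assumptions, not measurements.

```python
# Split a total epidermal melanin content into approximate eumelanin and
# pheomelanin amounts using the roughly constant 74% / 26% ratio described
# above. The sample value below is hypothetical.

total_melanin_ug_per_mg = 6.0  # assumed sample, within the ~0 to >10 ug/mg range
eumelanin = 0.74 * total_melanin_ug_per_mg
pheomelanin = 0.26 * total_melanin_ug_per_mg
print(f"eumelanin ~{eumelanin:.2f} ug/mg, pheomelanin ~{pheomelanin:.2f} ug/mg")

# Approximate breakdown for highly pigmented (Fitzpatrick V-VI) skin,
# using midpoints of the ranges quoted above (illustrative only).
fractions = {"DHI-eumelanin": 0.65, "DHICA-eumelanin": 0.30, "pheomelanin": 0.05}
for name, frac in fractions.items():
    print(f"{name}: ~{frac * total_melanin_ug_per_mg:.2f} ug/mg")
```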
Selenomelanin It is possible to enrich melanin with selenium instead of sulphur. This selenium analogue of pheomelanin has been successfully synthesized through chemical and biosynthetic routes using selenocystine as a feedstock. Due to selenium's higher atomic number, the obtained selenomelanin can be expected to provide better protection against ionising radiation as compared to the other known forms of melanin. This protection has been demonstrated with radiation experiments on human cells and bacteria, opening up the possibility of applications in space travel. Trichochromes Trichochromes (formerly called trichosiderins) are pigments produced from the same metabolic pathway as the eumelanins and pheomelanins, but unlike those molecules they have low molecular weight. They occur in some red human hair. Humans In humans, melanin is the primary determinant of skin color. It is also found in hair, the pigmented tissue underlying the iris of the eye, and the stria vascularis of the inner ear. In the brain, tissues with melanin include the medulla and pigment-bearing neurons within areas of the brainstem, such as the locus coeruleus. It also occurs in the zona reticularis of the adrenal gland. The melanin in the skin is produced by melanocytes, which are found in the basal layer of the epidermis. Although, in general, human beings possess a similar concentration of melanocytes in their skin, the melanocytes in some individuals and ethnic groups produce variable amounts of melanin. The ratio of eumelanin (74%) and pheomelanin (26%) in the epidermis is constant regardless of the degree of pigmentation. Some humans have very little or no melanin synthesis in their bodies, a condition known as albinism. Because melanin is an aggregate of smaller component molecules, there are many different types of melanin with different proportions and bonding patterns of these component molecules. Both pheomelanin and eumelanin are found in human skin and hair, but eumelanin is the most abundant melanin in humans, as well as the form most likely to be deficient in albinism. Other organisms Melanins have very diverse roles and functions in various organisms. A form of melanin makes up the ink used by many cephalopods (see cephalopod ink) as a defense mechanism against predators. Melanins also protect microorganisms, such as bacteria and fungi, against stresses that involve cell damage such as UV radiation from the sun and reactive oxygen species. Melanin also protects against damage from high temperatures, chemical stresses (such as heavy metals and oxidizing agents), and biochemical threats (such as host defenses against invading microbes). Therefore, in many pathogenic microbes (for example, in Cryptococcus neoformans, a fungus) melanins appear to play important roles in virulence and pathogenicity by protecting the microbe against immune responses of its host. In invertebrates, a major aspect of the innate immune defense system against invading pathogens involves melanin. Within minutes after infection, the microbe is encapsulated within melanin (melanization), and the generation of free radical byproducts during the formation of this capsule is thought to aid in killing the invading microbes. Some types of fungi, called radiotrophic fungi, appear to be able to use melanin as a photosynthetic pigment that enables them to capture gamma rays and harness this energy for growth. In fish, melanin occurs not only in the skin but also in internal organs such as the eyes. 
Most fish species use eumelanin, but Stegastes apicalis and Cyprinus carpio use pheomelanin instead. The darker feathers of birds owe their color to melanin and are less readily degraded by bacteria than unpigmented ones or those containing carotenoid pigments. Feathers that contain melanin are also 39% more resistant to abrasion than those that do not, because melanin granules help fill the space between the keratin strands that form feathers. Pheomelanin synthesis in birds implies the consumption of cysteine, a semi-essential amino acid that is necessary for the synthesis of the antioxidant glutathione (GSH) but that may be toxic if in excess in the diet. Indeed, many carnivorous birds, which have a high protein content in their diet, exhibit pheomelanin-based coloration. Melanin is also important in mammalian pigmentation. The coat pattern of mammals is determined by the agouti gene, which regulates the distribution of melanin. The mechanisms of the gene have been extensively studied in mice to provide an insight into the diversity of mammalian coat patterns. Melanin in arthropods has been observed to be deposited in layers, thus producing a Bragg reflector of alternating refractive index. When the scale of this pattern matches the wavelength of visible light, structural coloration arises, giving a number of species an iridescent color. Arachnids are one of the few groups in which melanin has not been easily detected, though researchers found data suggesting spiders do in fact produce melanin. Some moth species, including the wood tiger moth, convert resources to melanin to enhance their thermoregulation. As the wood tiger moth has populations over a large range of latitudes, it has been observed that more northern populations showed higher rates of melanization. In both yellow and white male phenotypes of the wood tiger moth, individuals with more melanin had a heightened ability to trap heat but an increased predation rate due to a weaker and less effective aposematic signal. Melanin may protect Drosophila flies and mice against DNA damage from non-UV radiation. Plants Melanins produced by plants are sometimes referred to as 'catechol melanins', as they can yield catechol on alkali fusion. They are commonly seen in the enzymatic browning of fruits such as bananas. Chestnut shell melanin can be used as an antioxidant and coloring agent. Biosynthesis involves the oxidation of indole-5,6-quinone by the tyrosinase type polyphenol oxidase from tyrosine and catecholamines, leading to the formation of catechol melanin. Despite this, many plants contain compounds which inhibit the production of melanins. Interpretation as a single monomer It is now understood that melanins do not have a single structure or stoichiometry. Nonetheless, chemical databases such as PubChem include structural and empirical formulae; typically 3,8-Dimethyl-2,7-dihydrobenzo[1,2,3-cd:4,5,6-c′d′]diindole-4,5,9,10-tetrone. This can be thought of as a single monomer that accounts for the measured elemental composition and some properties of melanin, but is unlikely to be found in nature. Solano claims that this misleading trend stems from a report of an empirical formula in 1948, but provides no other historical detail. Biosynthetic pathways The first step of the biosynthetic pathway for both eumelanins and pheomelanins is catalysed by tyrosinase. 
Tyrosine → DOPA → dopaquinone
Dopaquinone can combine with cysteine by two pathways to benzothiazines and pheomelanins:
dopaquinone + cysteine → 5-S-cysteinyldopa → benzothiazine intermediate → pheomelanin
dopaquinone + cysteine → 2-S-cysteinyldopa → benzothiazine intermediate → pheomelanin
Also, dopaquinone can be converted to leucodopachrome and follow two more pathways to the eumelanins:
dopaquinone → leucodopachrome → dopachrome → 5,6-dihydroxyindole-2-carboxylic acid → quinone → eumelanin
dopaquinone → leucodopachrome → dopachrome → 5,6-dihydroxyindole → quinone → eumelanin
Detailed metabolic pathways can be found in the KEGG database (see External links). Microscopic appearance Melanin is brown, non-refractile, and finely granular, with individual granules having a diameter of less than 800 nanometers. This differentiates melanin from common blood breakdown pigments, which are larger, chunky, and refractile, and range in color from green to yellow or red-brown. In heavily pigmented lesions, dense aggregates of melanin can obscure histologic detail. A dilute solution of potassium permanganate is an effective melanin bleach. Genetic disorders and disease states There are approximately nine types of oculocutaneous albinism, which is mostly an autosomal recessive disorder. Certain ethnicities have higher incidences of different forms. For example, the most common type, called oculocutaneous albinism type 2 (OCA2), is especially frequent among people of black African descent and white Europeans. People with OCA2 usually have fair skin, but are often not as pale as people with OCA1. They have pale blonde to golden, strawberry blonde, or even brown hair, and most commonly blue eyes. 98.7–100% of modern Europeans are carriers of the derived allele SLC24A5, a known cause of nonsyndromic oculocutaneous albinism. It is an autosomal recessive disorder characterized by a congenital reduction or absence of melanin pigment in the skin, hair, and eyes. The estimated frequency of OCA2 among African-Americans is 1 in 10,000, which contrasts with a frequency of 1 in 36,000 in white Americans. In some African nations, the frequency of the disorder is even higher, ranging from 1 in 2,000 to 1 in 5,000. Another form of albinism, the "yellow oculocutaneous albinism", appears to be more prevalent among the Amish, who are of primarily Swiss and German ancestry. People with this IB variant of the disorder commonly have white hair and skin at birth, but rapidly develop normal skin pigmentation in infancy. Ocular albinism affects not only eye pigmentation but also visual acuity. People with albinism typically test poorly, within the 20/60 to 20/400 range. In addition, two forms of albinism, with approximately 1 in 2,700 most prevalent among people of Puerto Rican origin, are associated with mortality beyond melanoma-related deaths. The connection between albinism and deafness is well known, though poorly understood. In his 1859 treatise On the Origin of Species, Charles Darwin observed that "cats which are entirely white and have blue eyes are generally deaf". In humans, hypopigmentation and deafness occur together in the rare Waardenburg's syndrome, predominantly observed among the Hopi in North America. The incidence of albinism in Hopi Indians has been estimated as approximately 1 in 200 individuals. Similar patterns of albinism and deafness have been found in other mammals, including dogs and rodents. 
However, a lack of melanin per se does not appear to be directly responsible for deafness associated with hypopigmentation, as most individuals lacking the enzymes required to synthesize melanin have normal auditory function. Instead, the absence of melanocytes in the stria vascularis of the inner ear results in cochlear impairment, though the reasons for this are not fully understood. In Parkinson's disease, a disorder that affects neuromotor functioning, there is decreased neuromelanin in the substantia nigra and locus coeruleus as a consequence of the specific loss of dopaminergic and noradrenergic pigmented neurons. This results in diminished dopamine and norepinephrine synthesis. While no correlation between race and the level of neuromelanin in the substantia nigra has been reported, the significantly lower incidence of Parkinson's in blacks than in whites has "prompt[ed] some to suggest that cutaneous melanin might somehow serve to protect the neuromelanin in substantia nigra from external toxins." In addition to melanin deficiency, the molecular weight of the melanin polymer may be decreased by various factors such as oxidative stress, exposure to light, perturbation in its association with melanosomal matrix proteins, changes in pH, or changes in local concentrations of metal ions. A decreased molecular weight or a decrease in the degree of polymerization of ocular melanin has been proposed to turn the normally anti-oxidant polymer into a pro-oxidant. In its pro-oxidant state, melanin has been suggested to be involved in the causation and progression of macular degeneration and melanoma. Rasagiline, an important monotherapy drug in Parkinson's disease, has melanin-binding and melanoma tumor-reducing properties. Higher eumelanin levels can also be a disadvantage, however, beyond a higher predisposition toward vitamin D deficiency. Dark skin is a complicating factor in the laser removal of port-wine stains. While generally effective in treating white skin, lasers are less successful at removing port-wine stains in people of Asian or African descent. Higher concentrations of melanin in darker-skinned individuals simply diffuse and absorb the laser radiation, inhibiting light absorption by the targeted tissue. In a similar manner, melanin can complicate laser treatment of other dermatological conditions in people with darker skin. Freckles and moles are formed where there is a localized concentration of melanin in the skin. They are highly associated with pale skin. Nicotine has an affinity for melanin-containing tissues because of its precursor function in melanin synthesis or its irreversible binding of melanin. This has been suggested to underlie the increased nicotine dependence and lower smoking cessation rates in darker pigmented individuals. Human adaptations Physiology Melanocytes insert granules of melanin into specialized cellular vesicles called melanosomes. These are then transferred into the keratinocyte cells of the human epidermis. The melanosomes in each recipient cell accumulate atop the cell nucleus, where they protect the nuclear DNA from mutations caused by the ionizing radiation of the sun's ultraviolet rays. In general, people whose ancestors lived for long periods in the regions of the globe near the equator have larger quantities of eumelanin in their skins. This makes their skins brown or black and protects them against high levels of exposure to the sun, which more frequently result in melanomas in lighter-skinned people. 
Not all the effects of pigmentation are advantageous. Pigmentation increases the heat load in hot climates, and dark-skinned people absorb 30% more heat from sunlight than do very light-skinned people, although this factor may be offset by more profuse sweating. In cold climates dark skin entails more heat loss by radiation. Pigmentation also hinders synthesis of vitamin D. Since pigmentation appears to be not entirely advantageous to life in the tropics, other hypotheses about its biological significance have been advanced; for example, as a secondary phenomenon induced by adaptation to parasites and tropical diseases. Evolutionary origins Early humans evolved dark skin color as an adaptation to a loss of body hair that increased the effects of UV radiation. Before the development of hairlessness, early humans might have had light skin underneath their fur, similar to that found in other primates. Anatomically modern humans evolved in Africa between 200,000 and 100,000 years ago, and then populated the rest of the world through migration between 80,000 and 50,000 years ago, in some areas interbreeding with certain archaic human species (Neanderthals, Denisovans, and possibly others). The first modern humans had dark skin, as do the indigenous people of Africa today. Following migration and settlement in Asia and Europe, the selective pressure for dark, UV-protective skin decreased where radiation from the sun was less intense. This resulted in the current range of human skin color. Of the two common gene variants known to be associated with pale human skin, MC1R does not appear to have undergone positive selection, while SLC24A5 has undergone positive selection. Effects Just as peoples who migrated northward adapted to weaker sunlight, those with light skin who migrate toward the equator acclimatize to the much stronger solar radiation. Nature selects for less melanin when ultraviolet radiation is weak. Most people's skin darkens when exposed to UV light, giving them more protection when it is needed. This is the physiological purpose of sun tanning. Dark-skinned people, who produce more skin-protecting eumelanin, have a greater protection against sunburn and the development of melanoma, a potentially deadly form of skin cancer, as well as other health problems related to exposure to strong solar radiation, including the photodegradation of certain vitamins such as riboflavin, carotenoids, tocopherol, and folate. Melanin in the eyes, in the iris and choroid, helps protect from ultraviolet and high-frequency visible light; people with blue, green, and grey eyes are more at risk of sun-related eye problems. Furthermore, the ocular lens yellows with age, providing added protection. However, the lens also becomes more rigid with age, losing most of its accommodation—the ability to change shape to focus from far to near—a detriment probably due to protein crosslinking caused by UV exposure. Recent research suggests that melanin may serve a protective role other than photoprotection. Melanin is able to effectively chelate metal ions through its carboxylate and phenolic hydroxyl groups, often much more efficiently than the powerful chelating ligand ethylenediaminetetraacetate (EDTA). Thus, it may serve to sequester potentially toxic metal ions, protecting the rest of the cell. This hypothesis is supported by the fact that the loss of neuromelanin, observed in Parkinson's disease, is accompanied by an increase in iron levels in the brain. 
Physical properties and technological applications Evidence exists for a highly cross-linked heteropolymer bound covalently to matrix scaffolding melanoproteins. It has been proposed that the ability of melanin to act as an antioxidant is directly proportional to its degree of polymerization or molecular weight. Suboptimal conditions for the effective polymerization of melanin monomers may lead to formation of pro-oxidant melanin with lower-molecular-weight, implicated in the causation and progression of macular degeneration and melanoma. Signaling pathways that upregulate melanization in the retinal pigment epithelium (RPE) also may be implicated in the downregulation of rod outer segment phagocytosis by the RPE. This phenomenon has been attributed in part to foveal sparing in macular degeneration. Role in melanoma metastasis Heavily pigmented melanoma cells have a Young's modulus of about 4.93 kPa, compared to non-pigmented cells, with a value of 0.98 kPa. The elasticity of melanoma cells is crucial to metastasis and growth; non-pigmented tumors were larger than pigmented tumors, and spread far more easily. Pigmented and non-pigmented cells are both present in melanoma tumors, so that they can both be drug-resistant and metastatic. See also Albino Albinism in biology Griscelli syndrome, a syndrome characterised by hypopigmentation Human skin color Melanin theory Melanism Melanogenesis, melanin production Risks and benefits of sun exposure Skin whitening Vitamin D References External links Human skin color Skin anatomy Hair color Fungal pigments
Melanin
[ "Biology" ]
6,063
[ "Human skin color", "Pigmentation" ]
50,530
https://en.wikipedia.org/wiki/Mole%20%28animal%29
Moles are small, subterranean mammals. They have cylindrical bodies, velvety fur, very small, inconspicuous eyes and ears, reduced hindlimbs, and short, powerful forelimbs with large paws adapted for digging. The word "mole" most commonly refers to many species in the family Talpidae (which are named after the Latin word for mole, talpa). True moles are found in most parts of North America, Europe and Asia. Other mammals referred to as moles include the African golden moles and the Australian marsupial moles, which have a similar ecology and lifestyle to true moles, but are unrelated. Moles may be viewed as pests to gardeners, but they provide positive contributions to soil, gardens, and ecosystems, including soil aeration, feeding on slugs and small creatures that eat plant roots, and providing prey for other wildlife. They eat earthworms and other small invertebrates in the soil. Terminology In Middle English, moles were known as moldwarps. The expression "don't make a mountain out of a molehill" (which means "exaggerating problems") was first recorded in Tudor times. By the era of Early Modern English, the mole was also known in English as mouldywarp or mouldiwarp, a word having cognates in other Germanic languages such as German (Maulwurf), and Danish, Norwegian, Swedish and Icelandic (muldvarp, moldvarp, mullvad, moldvarpa), where muld/mull/mold refers to soil and varp/vad/varpa refers to throwing, hence "one who throws soil" or "dirt-tosser". Male moles are called "boars"; females are called "sows". Characteristics Underground breathing Moles have been found to tolerate higher levels of carbon dioxide than other mammals, because their blood cells have a special form of hemoglobin that has a higher affinity to oxygen than other forms. In addition, moles use oxygen more effectively by reusing the exhaled air, and can survive in low-oxygen environments such as burrows. Extra thumbs Moles have polydactyl forepaws: each has an extra thumb (also known as a prepollex) next to the regular thumb. While the mole's other digits have multiple joints, the prepollex has a single, sickle-shaped bone that develops later and differently from the other fingers during embryogenesis from a transformed sesamoid bone in the wrist, independently evolved but similar to the giant panda thumb. This supernumerary digit is species-specific, as it is not present in shrews, the mole's closest relatives. Androgenic steroids are known to affect the growth and formation of bones, and a connection is possible between this species-specific trait and the male genital apparatus in female moles of many mole species (gonads with testicular and ovary tissues). Diet Moles are omnivores, but their diet primarily consists of earthworms and other small invertebrates found in the soil. The mole runs are in reality "worm traps", the mole sensing when a worm falls into the tunnel and quickly running along to kill and eat it. Because their saliva contains a toxin that can paralyze earthworms, moles are able to store their still-living prey for later consumption. They construct special underground "larders" for just this purpose; researchers have discovered such larders with over a thousand earthworms in them. Before eating earthworms, moles pull them between their squeezed paws to force the collected earth and dirt out of the worm's gut. The star-nosed mole can detect, catch and eat food faster than the human eye can follow. Breeding Breeding season for a mole depends on species, but is generally from February through May. 
Males search for females by letting out high-pitched squeals and tunneling through foreign areas. The gestation period of the Eastern (North America) mole (Scalopus aquaticus) is approximately 42 days. Three to five young are born, mainly in March and early April. Townsend's moles mate in February and March, and the 2–4 young are born in March and April after a gestation period of about 1 month. Social structure Moles are allegedly solitary creatures, coming together only to mate. Territories may overlap, but moles avoid each other and males may fight fiercely if they meet. Classification The family Talpidae contains all the true moles and some of their close relatives. Those species called "shrew moles" represent an intermediate form between the moles and their shrew ancestors, and as such may not be fully described by the article. Moles were traditionally classified in the order Insectivora, but that order has since been abandoned because it has been shown to not be monophyletic. Moles are now classified with shrews and hedgehogs, in the more narrowly defined order Eulipotyphla. Subfamily Scalopinae: New World moles Tribe Condylurini: Star-nosed mole (North America) Genus Condylura: Star-nosed mole (the sole species) Tribe Scalopini: New World moles Genus Alpiscaptulus: Medog mole (China) Genus Parascalops: Hairy-tailed mole (northeastern North America) Genus Scalopus: Eastern mole (North America) Genus Scapanulus: Gansu mole (China) Genus Scapanus: Western North American moles (five species) Subfamily Talpinae: Old World moles, desmans, and shrew moles Tribe Desmanini Genus Desmana: Russian desman Genus Galemys: Pyrenean desman Tribe Talpini: Old World moles Genus Euroscaptor: Ten Asian species Genus Mogera: Nine species from Japan, Korea, and eastern China Genus Parascaptor: White-tailed mole, southern Asia Genus Scaptochirus: Short-faced mole, China Genus Talpa: Thirteen species, Europe and western Asia Tribe Scaptonychini: Long-tailed mole Genus Scaptonyx: Long-tailed mole (China and Myanmar (Burma)) Tribe Urotrichini: Japanese shrew moles Genus Dymecodon: True's shrew mole Genus Urotrichus: Japanese shrew mole Tribe Neurotrichini: New World shrew moles Genus Neurotrichus: American shrew mole (US Pacific Northwest, southwest British Columbia) Subfamily Uropsilinae: Asian shrew moles Genus Uropsilus: Five species in China, Bhutan, and Myanmar (Burma) Other "moles" Many groups of burrowing animals (pink fairy armadillos, tuco-tucos, mole rats, mole crickets, pygmy mole crickets, and mole crabs) have independently developed close physical similarities with moles due to convergent evolution; two of these are so similar to true moles, they are commonly called and thought of as "moles" in common English, although they are completely unrelated to true moles or to each other. These are the golden moles of southern Africa and the marsupial moles of Australia. While difficult to distinguish from each other, they are most easily distinguished from true moles by shovel-like patches on their noses, which they use in tandem with their abbreviated forepaws to swim through sandy soils. Golden moles The golden moles belong to the same branch on the phylogenetic tree as the tenrecs, called Tenrecomorpha, which, in turn, stem from a main branch of placental mammals called the Afrosoricida. This means that they share a closer common ancestor with such existing afrosoricids as elephants, manatees and aardvarks than they do with other placental mammals, such as true Talpidae moles. 
ORDER AFROSORICIDA Suborder Tenrecomorpha Family Tenrecidae: tenrecs, 34 species in 10 genera Suborder Chrysochloridea Family Chrysochloridae Subfamily Chrysochlorinae Genus Carpitalpa Arends' golden mole (Carpitalpa arendsi) Genus Chlorotalpa Duthie's golden mole (Chlorotalpa duthieae) Sclater's golden mole (Chlorotalpa sclateri) Genus Chrysochloris Subgenus Chrysochloris Cape golden mole (Chrysochloris asiatica) Visagie's golden mole (Chrysochloris visagiei) Subgenus Kilimatalpa Stuhlmann's golden mole (Chrysochloris stuhlmanni) Genus Chrysospalax Giant golden mole (Chrysospalax trevelyani) Rough-haired golden mole (Chrysospalax villosus) Genus Cryptochloris De Winton's golden mole (Cryptochloris wintoni) Van Zyl's golden mole (Cryptochloris zyli) Genus Eremitalpa Grant's golden mole (Eremitalpa granti) Subfamily Amblysominae Genus Amblysomus Fynbos golden mole (Amblysomus corriae) Hottentot golden mole (Amblysomus hottentotus) Marley's golden mole (Amblysomus marleyi) Robust golden mole (Amblysomus robustus) Highveld golden mole (Amblysomus septentrionalis) Genus Calcochloris Subgenus Huetia Congo golden mole (Calcochloris leucorhinus) Subgenus Calcochloris Yellow golden mole (Calcochloris obtusirostris) Subgenus incertae sedis Somali golden mole (Calcochloris tytonis) Genus Neamblysomus Juliana's golden mole (Neamblysomus julianae) Gunning's golden mole (Neamblysomus gunningi) Marsupial moles As marsupials, these moles are even more distantly related to true talpid moles than golden moles are, both of which belong to the Eutheria, or placental mammals. This means that they are more closely related to such existing Australian marsupials as kangaroos or koalas, and even to a lesser extent to American marsupials, such as opossums, than they are to placental mammals, such as golden or Talpidae moles. Class Mammalia Subclass Prototheria: monotremes (echidnas and the platypus) Subclass Theriiformes: live-bearing mammals and their prehistoric relatives Infraclass Holotheria: modern live-bearing mammals and their prehistoric relatives Supercohort Theria: live-bearing mammals Cohort Marsupialia: marsupials Magnorder Ameridelphia: New World marsupials Order Didelphimorphia (opossums) Order Paucituberculata (shrew opossums) Superorder Australidelphia Australian marsupials Order Dasyuromorphia (the Tasmanian devil, the numbat, thylacines, quolls, dunnarts and others) Order Peramelemorphia (bilbies, bandicoots and rainforest bandicoots) Order Diprotodontia (koalas, wombats, diprotodonts, possums, cuscuses, sugar gliders, kangaroos and others) Order Notoryctemorphia (marsupial moles and closely related extinct families of marsupials) Family Notoryctidae (living and extinct marsupial mole genera) Genus Notoryctes (only genus of marsupial moles with living species) Species Notoryctes typhlops (southern marsupial mole) Species Notoryctes caurinus (northern marsupial mole) Interaction with humans Pelts Moles' pelts have a velvety texture not found in surface animals. Surface-dwelling animals tend to have longer fur with a natural tendency for the nap to lie in a particular direction, but to facilitate their burrowing lifestyle, mole pelts are short and very dense and have no particular direction to the nap. This makes it easy for moles to move backwards underground, as their fur is not "brushed the wrong way". The leather is extremely soft and supple. 
Queen Alexandra, the wife of Edward VII of the United Kingdom, ordered a mole-fur garment to start a fashion that would create a demand for mole fur, thereby turning what had been a serious pest problem in Scotland into a lucrative industry for the country. Hundreds of pelts are cut into rectangles and sewn together to make a coat. The natural color is taupe (derived from the French noun taupe, meaning mole), but it is readily dyed any color. The term "moleskin" for a tough cotton fabric is in common use today. Pest status: extermination and humane options Moles are considered agricultural pests in some countries, while in others, such as Germany, they are a protected species, but may be killed with a permit. Problems cited as caused by moles include contamination of silage with soil particles, making it unpalatable to livestock, the covering of pasture with fresh soil reducing its size and yield, damage to agricultural machinery by the exposure of stones, damage to young plants through disturbance of the soil, weed invasion of pasture through exposure of freshly tilled soil, and damage to drainage systems and watercourses. Other species such as weasels and voles may use mole tunnels to gain access to enclosed areas or plant roots. Moles burrow and raise molehills, killing parts of lawns. They can undermine plant roots, indirectly causing damage or death. Moles do not eat plant roots. Moles are controlled with traps such as mole-catchers, smoke bombs, and poisons such as calcium carbide, which produces acetylene gas to drive moles away. Strychnine was also used for this purpose in the past. The most common method now is Phostoxin or Talunex tablets. They contain aluminium phosphide and are inserted in the mole tunnels, where they turn into phosphine gas (not to be confused with phosgene gas). More recently, high-grade nitrogen gas has proven effective at killing moles, with the added advantage of having no polluting effect on the environment. Other common defensive measures include cat litter and blood meal, to repel the mole, or smoking its burrow. Devices are also sold to trap the mole in its burrow; when one sees the "mole hill" moving, and therefore knows where the animal is, the mole can then be stabbed. Other humane options are also possible, including humane traps that capture the mole alive so it may be transported elsewhere. In many contexts, including ordinary gardens, the damage caused by moles to lawns is mostly visual, and it is possible, instead of extermination, to simply remove the earth of the molehills as they appear, leaving their permanent galleries for the moles to continue their existence underground. However, when the tunnels are near the surface in soft ground or after heavy rain, they may collapse, leaving (small) unsightly furrows in the lawn. Meat William Buckland, known for eating every animal he could, opined that mole meat tasted vile. Archaeology Moles can inadvertently help archaeologists by bringing small artifacts to the surface through their digging. By examining molehills for sherds and other small objects, archaeologists can find evidence of human habitation. See also Molecatcher Notes References External links UK Government DEFRA paper on control of the European mole British Traditional Molecatchers Register Agricultural pests Body plans Mammal common names
Mole (animal)
[ "Biology" ]
3,283
[ "Pests (organism)", "Agricultural pests" ]
50,555
https://en.wikipedia.org/wiki/Domoic%20acid
Domoic acid (DA) is a kainic acid-type neurotoxin that causes amnesic shellfish poisoning (ASP). It is produced by algae and accumulates in shellfish, sardines, and anchovies. When sea lions, otters, cetaceans, humans, and other predators eat contaminated animals, poisoning may result. Exposure to this compound affects the brain, causing seizures, and possibly death. History There has been little use of domoic acid throughout history except for in Japan, where it has been used as an anthelmintic for centuries. Domoic acid was first isolated in 1959 from a species of red algae, Chondria armata, in Japan, which is commonly referred to as dōmoi (ドウモイ) in the Tokunoshima dialect, or hanayanagi. Poisonings in history have been rare, or undocumented; however, it is thought that the increase in human activities is resulting in an increasing frequency of harmful algal blooms along coastlines in recent years. In 2015, the North American Pacific coast was heavily impacted by an algal bloom, consisting predominantly of the domoic acid-producing pennate diatom, Pseudo-nitzschia. Consequently, elevated levels of domoic acid were measured in stranded marine mammals, prompting the closure of beaches and damaging razor clam, rock crab and Dungeness crab fisheries. In 1961, seabirds attacked the Capitola area in California, and though it was never confirmed, it was later hypothesized that they were under the influence of domoic acid. In 1987, in Prince Edward Island, Canada, there was a shellfish poisoning resulting in three deaths. Blue mussels (Mytilus edulis) contaminated with domoic acid were blamed. Domoic acid has been suggested to have been involved in an incident which took place on June 22, 2006, when a California brown pelican flew through the windshield of a car on the Pacific Coast Highway. On Friday, June 14, 2019, a teenager was attacked and injured by a sea lion that was alleged to be under the influence of domoic acid in Pismo Beach on the Central California coast. Chemistry General Domoic acid is a structural analog of kainic acid, proline, and the endogenous excitatory neurotransmitter glutamate. Ohfune and Tomita, who wanted to investigate its absolute stereochemistry, were the first (and so far only) researchers to synthesize domoic acid, in 1982. Biosynthesis In 1999, using 13C- and 14C-labelled precursors, the biosynthesis of domoic acid in the diatom genus Pseudo-nitzschia was examined. After addition of [1,2-13C2]-acetate, NMR spectroscopy showed enrichment of every carbon in domoic acid, indicating incorporation of the carbon isotopes. This enrichment was consistent with two biosynthetic pathways. The labeling pattern determined that domoic acid can be biosynthesized by an isoprenoid intermediate in combination with a tricarboxylic acid (TCA) cycle intermediate. In 2018, using growth conditions known to induce domoic acid production in Pseudo-nitzschia multiseries, transcriptome sequencing successfully identified candidate domoic acid biosynthesis genes responsible for the pyrrolidine core. These domoic acid biosynthesis genes, or ‘Dab’ enzymes, were heterologously expressed, characterized, and annotated as dabA (terpene cyclase), dabB (hypothetical protein), dabC (α-ketoglutarate–dependent dioxygenase), and dabD (CYP450). Domoic acid biosynthesis begins with the DabA-catalyzed geranylation of L-glutamic acid (L-Glu) with geranyl pyrophosphate (GPP) to form N-geranyl-L-glutamic acid (L-NGG). 
DabD then performs three successive oxidation reactions at the 7′-methyl of L-NGG to produce 7′-carboxy-L-NGG, which is then cyclized by DabC to generate the naturally occurring isodomoic acid A. Finally, an uncharacterized isomerase could convert isodomoic acid A to domoic acid. Further investigation is needed to resolve the final isomerization reaction to complete the pathway to domoic acid. Synthesis Using intermediates 5 and 6, a Diels-Alder reaction produced a bicyclic compound (7). 7 then underwent ozonolysis to open the six-membered ring leading to selenide (8). 8 was then deselenated to form 9 (E-9 and Z-9), lastly leading to the formation of (-)-domoic acid. Mechanism of action The effects of domoic acid have been attributed to several mechanisms, but the one of concern is through glutamate receptors. Domoic acid is an excitatory amino acid analogue of glutamate, a neurotransmitter in the brain that activates glutamate receptors. Domoic acid has a very strong affinity for these receptors, which results in excitotoxicity initiated by an integrative action on ionotropic glutamate receptors at both sides of the synapse, coupled with the effect of blocking the channel from rapid desensitization. In addition, there is a synergistic effect with endogenous glutamate and N-Methyl-D-aspartate receptor agonists that contribute to the excitotoxicity. In the brain, domoic acid especially damages the hippocampus and amygdaloid nucleus. It damages the neurons by activating AMPA and kainate receptors, causing an influx of calcium. Although calcium flowing into cells is normal, the uncontrolled increase of calcium causes the cells to degenerate. Because the hippocampus may be severely damaged, short-term memory loss occurs. According to a study in mice, it may also cause kidney damage, even at levels considered safe for human consumption; the kidney is affected at concentrations a hundred times lower than those allowed under FDA regulations. Toxicology Domoic acid-producing algal blooms are associated with the phenomenon of amnesic shellfish poisoning (ASP). Domoic acid can bioaccumulate in marine organisms such as shellfish, anchovies, and sardines that feed on the phytoplankton known to produce this toxin. It can accumulate in high concentrations in the tissues of these plankton feeders when the toxic phytoplankton are high in concentration in the surrounding waters. Domoic acid is a neurotoxin that inhibits neurochemical processes, causing short-term memory loss, brain damage, and, in severe cases, death in humans. In marine mammals, domoic acid typically causes seizures and tremors. Studies have shown that there are no symptomatic effects in humans at levels of 0.5 mg/kg of body weight. In the 1987 domoic acid poisoning on Prince Edward Island, concentrations ranging from 0.31 to 1.28 mg/kg of muscle tissue were noted in people who became ill (three of whom died). Dangerous levels of domoic acid have been calculated based on cases such as the one on Prince Edward Island. The exact lethal dose (LD50) for humans is unknown; for mice, the LD50 is 3.6 mg/kg. New research has found that domoic acid is a heat-resistant and very stable toxin, which can damage kidneys at concentrations that are 100 times lower than what causes neurological effects. Diagnosis and prevention In order to be diagnosed and treated if poisoned, domoic acid must first be detected. Methods such as ELISA or probe development with polymerase chain reaction (PCR) may be used to detect the toxin or the organism producing this toxin. 
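Once a tissue concentration has been measured, it can be related to the body-weight thresholds quoted above with a back-of-the-envelope exposure estimate. The sketch below is a minimal illustration, not an established risk-assessment procedure: the 0.5 mg/kg no-symptom level comes from the text, while the body weight and tissue concentration are hypothetical example values.

# Back-of-the-envelope domoic acid exposure estimate.
# From the text: no symptomatic effects in humans below ~0.5 mg per kg body weight.
# Assumed for illustration (not from the text): a 70 kg adult and a hypothetical
# measured concentration of 20 mg domoic acid per kg of shellfish tissue.

def tissue_mass_to_reach_dose(dose_mg_per_kg_bw, body_weight_kg, tissue_conc_mg_per_kg):
    """Kilograms of tissue needed to reach a given dose (mg per kg body weight)."""
    total_dose_mg = dose_mg_per_kg_bw * body_weight_kg
    return total_dose_mg / tissue_conc_mg_per_kg

if __name__ == "__main__":
    kg_needed = tissue_mass_to_reach_dose(0.5, 70.0, 20.0)
    print(f"~{kg_needed:.2f} kg of tissue to reach 0.5 mg/kg body weight")  # ~1.75 kg

Under these assumed values, roughly 1.75 kg of contaminated tissue would be needed to reach the no-symptom threshold; real assessments depend on measured concentrations and individual susceptibility.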
There is no known antidote available for domoic acid. Therefore, if poisoning occurs, it is advised to go quickly to a hospital. Cooking or freezing fish or shellfish tissue contaminated with domoic acid does not lessen the toxicity. As a public health concern, the concentration of domoic acid in shellfish and shellfish parts at point of sale should not exceed the current permissible limit of 20 mg/kg tissue. In addition, during shellfish processing, it is important to pay attention to environmental conditions. In popular culture On August 18, 1961, in Capitola and Santa Cruz, California, there was an invasion of what people described as chaotic seabirds. These birds were believed to be under the influence of domoic acid, and the incident inspired a scene in Alfred Hitchcock's feature film The Birds. In the Elementary Season 1 Episode 13, "The Red Team", domoic acid was used as a poison to mimic Alzheimer's. See also Canadian Reference Materials Pseudo-nitzschia Quisqualic acid Brevetoxin Ciguatoxin Okadaic acid Saxitoxin Maitotoxin References External links Marine neurotoxins Phycotoxins Secondary amino acids Pyrrolidines AMPA receptor agonists Kainate receptor agonists Chelating agents Tricarboxylic acids Conjugated dienes Toxic amino acids Excitotoxins
Domoic acid
[ "Chemistry" ]
1,945
[ "Chelating agents", "Process chemicals" ]
50,557
https://en.wikipedia.org/wiki/Phytoplankton
Phytoplankton are the autotrophic (self-feeding) components of the plankton community and a key part of ocean and freshwater ecosystems. The name comes from the Greek words φυτόν (phyton), meaning 'plant', and πλαγκτός (planktos), meaning 'wanderer' or 'drifter'. Phytoplankton obtain their energy through photosynthesis, as trees and other plants do on land. This means phytoplankton must have light from the sun, so they live in the well-lit surface layers (euphotic zone) of oceans and lakes. In comparison with terrestrial plants, phytoplankton are distributed over a larger surface area, are exposed to less seasonal variation and have markedly faster turnover rates than trees (days versus decades). As a result, phytoplankton respond rapidly on a global scale to climate variations. Phytoplankton form the base of marine and freshwater food webs and are key players in the global carbon cycle. They account for about half of global photosynthetic activity and at least half of the oxygen production, despite amounting to only about 1% of the global plant biomass. Phytoplankton are very diverse, comprising photosynthesizing bacteria (cyanobacteria) and various unicellular protist groups (notably the diatoms). Most phytoplankton are too small to be individually seen with the unaided eye. However, when present in high enough numbers, some varieties may be noticeable as colored patches on the water surface due to the presence of chlorophyll within their cells and accessory pigments (such as phycobiliproteins or xanthophylls) in some species. Types Phytoplankton are photosynthesizing microscopic protists and bacteria that inhabit the upper sunlit layer of marine and fresh water bodies on Earth. Paralleling plants on land, phytoplankton undertake primary production in water, creating organic compounds from carbon dioxide dissolved in the water. Phytoplankton form the base of — and sustain — the aquatic food web, and are crucial players in the Earth's carbon cycle. Phytoplankton are very diverse, comprising photosynthesizing bacteria (cyanobacteria) and various unicellular protist groups (notably the diatoms). Many other organism groups formerly regarded as phytoplankton, including coccolithophores and dinoflagellates, are now no longer included, as they are not only phototrophic but can also eat. These organisms are now more correctly termed mixoplankton. This recognition has important consequences for how we view the functioning of the planktonic food web. Ecology Phytoplankton obtain energy through the process of photosynthesis and must therefore live in the well-lit surface layer (termed the euphotic zone) of an ocean, sea, lake, or other body of water. Phytoplankton account for about half of all photosynthetic activity on Earth. Their cumulative energy fixation in carbon compounds (primary production) is the basis for the vast majority of oceanic and also many freshwater food webs (chemosynthesis is a notable exception). While almost all phytoplankton species are obligate photoautotrophs, there are some that are mixotrophic and other, non-pigmented species that are actually heterotrophic (the latter are often viewed as zooplankton). Of these, the best known are dinoflagellate genera such as Noctiluca and Dinophysis, which obtain organic carbon by ingesting other organisms or detrital material. Phytoplankton live in the photic zone of the ocean, where photosynthesis is possible. During photosynthesis, they assimilate carbon dioxide and release oxygen. 
If solar radiation is too high, phytoplankton may fall victim to photodegradation. Phytoplankton species feature a large variety of photosynthetic pigments, which enable each species to absorb different wavelengths of the variable underwater light. This implies that different species can use different wavelengths of light with different efficiency, and that light is not a single ecological resource but a multitude of resources depending on its spectral composition. Accordingly, it has been found that changes in the spectrum of light alone can alter natural phytoplankton communities even when the same overall intensity is available. For growth, phytoplankton cells additionally depend on nutrients, which enter the ocean by rivers, continental weathering, and glacial ice meltwater at the poles. Phytoplankton release dissolved organic carbon (DOC) into the ocean. Since phytoplankton are the basis of marine food webs, they serve as prey for zooplankton, fish larvae and other heterotrophic organisms. They can also be degraded by bacteria or by viral lysis. Although some phytoplankton cells, such as dinoflagellates, are able to migrate vertically, they are still incapable of actively moving against currents, so they slowly sink and ultimately fertilize the seafloor with dead cells and detritus. Phytoplankton are crucially dependent on a number of nutrients. These are primarily macronutrients such as nitrate, phosphate or silicic acid, which are required in relatively large quantities for growth. Their availability in the surface ocean is governed by the balance between the so-called biological pump and upwelling of deep, nutrient-rich waters. The stoichiometric nutrient composition of phytoplankton drives — and is driven by — the Redfield ratio of macronutrients generally available throughout the surface oceans. Phytoplankton also rely on trace metals such as iron (Fe), manganese (Mn), zinc (Zn), cobalt (Co), cadmium (Cd) and copper (Cu) as essential micronutrients, influencing their growth and community composition. Limitations in these metals can lead to co-limitations and shifts in phytoplankton community structure. Across large areas of the oceans such as the Southern Ocean, phytoplankton are often limited by the lack of the micronutrient iron. This has led some scientists to advocate iron fertilization as a means to counteract the accumulation of human-produced carbon dioxide (CO2) in the atmosphere. Large-scale experiments have added iron (usually as salts such as ferrous sulfate) to the oceans to promote phytoplankton growth and draw atmospheric CO2 into the ocean. Controversy about manipulating the ecosystem and the efficiency of iron fertilization has slowed such experiments. The ocean science community still has a divided attitude toward the study of iron fertilization as a potential marine carbon dioxide removal (mCDR) approach. Phytoplankton depend on B vitamins for survival. Areas in the ocean have been identified as having a major lack of some B vitamins, and correspondingly, phytoplankton. The effects of anthropogenic warming on the global population of phytoplankton are an area of active research. Changes in the vertical stratification of the water column, the rate of temperature-dependent biological reactions, and the atmospheric supply of nutrients are expected to have important effects on future phytoplankton productivity. The effects of anthropogenic ocean acidification on phytoplankton growth and community structure have also received considerable attention. 
The cells of coccolithophore phytoplankton are typically covered in a calcium carbonate shell called a coccosphere that is sensitive to ocean acidification. Evidence suggests that, because of their short generation times, some phytoplankton can adapt to changes in pH induced by increased carbon dioxide on rapid time-scales (months to years). Phytoplankton serve as the base of the aquatic food web, providing an essential ecological function for all aquatic life. Under future conditions of anthropogenic warming and ocean acidification, changes in phytoplankton mortality due to changes in rates of zooplankton grazing may be significant. One of the many food chains in the ocean – remarkable due to the small number of links – is that of phytoplankton sustaining krill (a crustacean similar to a tiny shrimp), which in turn sustain baleen whales. The El Niño-Southern Oscillation (ENSO) cycles in the Equatorial Pacific area can affect phytoplankton. Biochemical and physical changes during ENSO cycles modify the phytoplankton community structure. Changes in the structure of the phytoplankton community, such as significant reductions in biomass and density, can also occur, particularly during El Niño phases. The sensitivity of phytoplankton to environmental changes is why they are often used as indicators of estuarine and coastal ecological condition and health. Satellite ocean color observations are used to study these changes, and satellite images help provide a better view of the global distribution of phytoplankton. Diversity The term phytoplankton encompasses all photoautotrophic microorganisms in aquatic food webs. However, unlike terrestrial communities, where most autotrophs are plants, phytoplankton are a diverse group, incorporating protistan eukaryotes and both eubacterial and archaebacterial prokaryotes. There are about 5,000 known species of marine phytoplankton. How such diversity evolved despite scarce resources (restricting niche differentiation) is unclear. In terms of numbers, the most important groups of phytoplankton include the diatoms, cyanobacteria and dinoflagellates, although many other groups of algae are represented. One group, the coccolithophorids, is responsible (in part) for the release of significant amounts of dimethyl sulfide (DMS) into the atmosphere. DMS is oxidized to form sulfate which, in areas where ambient aerosol particle concentrations are low, can contribute to the population of cloud condensation nuclei, mostly leading to increased cloud cover and cloud albedo according to the so-called CLAW hypothesis. Different types of phytoplankton support different trophic levels within varying ecosystems. In oligotrophic oceanic regions such as the Sargasso Sea or the South Pacific Gyre, phytoplankton is dominated by small-sized cells, called picoplankton and nanoplankton (also referred to as picoflagellates and nanoflagellates), mostly composed of cyanobacteria (Prochlorococcus, Synechococcus) and picoeucaryotes such as Micromonas. Within more productive ecosystems, dominated by upwelling or high terrestrial inputs, larger dinoflagellates are the more dominant phytoplankton and reflect a larger portion of the biomass. Growth strategies In the early twentieth century, Alfred C. Redfield noted the similarity between the elemental composition of phytoplankton and that of the major dissolved nutrients in the deep ocean. 
Redfield proposed that the ratio of carbon to nitrogen to phosphorus (106:16:1) in the ocean was controlled by the phytoplankton's requirements, as phytoplankton subsequently release nitrogen and phosphorus as they are remineralized. This so-called "Redfield ratio", describing the stoichiometry of phytoplankton and seawater, has become a fundamental principle for understanding marine ecology, biogeochemistry and phytoplankton evolution. However, the Redfield ratio is not a universal value and it may diverge due to the changes in exogenous nutrient delivery and microbial metabolisms in the ocean, such as nitrogen fixation, denitrification and anammox. The dynamic stoichiometry shown in unicellular algae reflects their capability to store nutrients in an internal pool, shift between enzymes with various nutrient requirements and alter osmolyte composition. Different cellular components have their own unique stoichiometry characteristics; for instance, resource-acquisition machinery (for light or nutrients), such as proteins and chlorophyll, contains a high concentration of nitrogen but is low in phosphorus. Meanwhile, growth machinery such as ribosomal RNA contains high nitrogen and phosphorus concentrations. Based on allocation of resources, phytoplankton is classified into three different growth strategies, namely survivalist, bloomer and generalist. Survivalist phytoplankton has a high ratio of N:P (>30) and contains an abundance of resource-acquisition machinery to sustain growth under scarce resources. Bloomer phytoplankton has a low N:P ratio (<10), contains a high proportion of growth machinery, and is adapted to exponential growth. Generalist phytoplankton has similar N:P to the Redfield ratio and contains relatively equal resource-acquisition and growth machinery. Factors affecting abundance The NAAMES study was a five-year scientific research program conducted between 2015 and 2019 by scientists from Oregon State University and NASA to investigate aspects of phytoplankton dynamics in ocean ecosystems, and how such dynamics influence atmospheric aerosols, clouds, and climate (NAAMES stands for the North Atlantic Aerosols and Marine Ecosystems Study). The study focused on the sub-arctic region of the North Atlantic Ocean, which is the site of one of Earth's largest recurring phytoplankton blooms. The long history of research in this location, as well as relative ease of accessibility, made the North Atlantic an ideal location to test prevailing scientific hypotheses in an effort to better understand the role of phytoplankton aerosol emissions on Earth's energy budget. NAAMES was designed to target specific phases of the annual phytoplankton cycle: minimum, climax and the intermediary decreasing and increasing biomass, in order to resolve debates on the timing of bloom formations and the patterns driving annual bloom re-creation. The NAAMES project also investigated the quantity, size, and composition of aerosols generated by primary production in order to understand how phytoplankton bloom cycles affect cloud formations and climate. Factors affecting productivity Phytoplankton are the key mediators of the biological pump. Understanding the response of phytoplankton to changing environmental conditions is a prerequisite for predicting future atmospheric concentrations of CO2. Temperature, irradiance and nutrient concentrations, along with CO2, are the chief environmental factors that influence the physiology and stoichiometry of phytoplankton. 
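As a minimal illustration of the N:P-based growth-strategy classification described earlier in this section, the sketch below simply encodes the thresholds quoted in the text (survivalist N:P > 30, bloomer N:P < 10, generalist otherwise, i.e. near the Redfield N:P of 16); the example ratios are arbitrary values chosen for demonstration.

# Growth-strategy classification from a cellular N:P ratio, using the thresholds
# quoted in the text: survivalist > 30, bloomer < 10, generalist otherwise
# (close to the Redfield N:P of 16 from the canonical 106:16:1 C:N:P ratio).

REDFIELD_N_TO_P = 16

def growth_strategy(n_to_p):
    if n_to_p > 30:
        return "survivalist"  # dominated by resource-acquisition machinery
    if n_to_p < 10:
        return "bloomer"      # dominated by growth machinery, adapted to exponential growth
    return "generalist"       # near-Redfield composition

if __name__ == "__main__":
    for ratio in (35.0, 8.0, float(REDFIELD_N_TO_P)):
        print(ratio, "->", growth_strategy(ratio))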
The stoichiometry or elemental composition of phytoplankton is of utmost importance to secondary producers such as copepods, fish and shrimp, because it determines the nutritional quality and influences energy flow through the marine food chains. Climate change may greatly restructure phytoplankton communities leading to cascading consequences for marine food webs, thereby altering the amount of carbon transported to the ocean interior. The figure gives an overview of the various environmental factors that together affect phytoplankton productivity. All of these factors are expected to undergo significant changes in the future ocean due to global change. Global warming simulations predict oceanic temperature increase; dramatic changes in oceanic stratification, circulation and changes in cloud cover and sea ice, resulting in an increased light supply to the ocean surface. Also, reduced nutrient supply is predicted to co-occur with ocean acidification and warming, due to increased stratification of the water column and reduced mixing of nutrients from the deep water to the surface. Role of phytoplankton The compartments influenced by phytoplankton include the atmospheric gas composition, inorganic nutrients, and trace element fluxes as well as the transfer and cycling of organic matter via biological processes (see figure). The photosynthetically fixed carbon is rapidly recycled and reused in the surface ocean, while a certain fraction of this biomass is exported as sinking particles to the deep ocean, where it is subject to ongoing transformation processes, e.g., remineralization. Phytoplankton contribute to not only a basic pelagic marine food web but also to the microbial loop. Phytoplankton are the base of the marine food web and because they do not rely on other organisms for food, they make up the first trophic level. Organisms such as zooplankton feed on these phytoplankton which are in turn fed on by other organisms and so forth until the fourth trophic level is reached with apex predators. Approximately 90% of total carbon is lost between trophic levels due to respiration, detritus, and dissolved organic matter. This makes the remineralization process and nutrient cycling performed by phytoplankton and bacteria important in maintaining efficiency. Phytoplankton blooms in which a species increases rapidly under conditions favorable to growth can produce harmful algal blooms (HABs). Aquaculture Phytoplankton are a key food item in both aquaculture and mariculture. Both utilize phytoplankton as food for the animals being farmed. In mariculture, the phytoplankton is naturally occurring and is introduced into enclosures with the normal circulation of seawater. In aquaculture, phytoplankton must be obtained and introduced directly. The plankton can either be collected from a body of water or cultured, though the former method is seldom used. Phytoplankton is used as a foodstock for the production of rotifers, which are in turn used to feed other organisms. Phytoplankton is also used to feed many varieties of aquacultured molluscs, including pearl oysters and giant clams. A 2018 study estimated the nutritional value of natural phytoplankton in terms of carbohydrate, protein and lipid across the world ocean using ocean-colour data from satellites, and found the calorific value of phytoplankton to vary considerably across different oceanic regions and between different time of the year. The production of phytoplankton under artificial conditions is itself a form of aquaculture. 
Phytoplankton is cultured for a variety of purposes, including foodstock for other aquacultured organisms, a nutritional supplement for captive invertebrates in aquaria. Culture sizes range from small-scale laboratory cultures of less than 1L to several tens of thousands of litres for commercial aquaculture. Regardless of the size of the culture, certain conditions must be provided for efficient growth of plankton. The majority of cultured plankton is marine, and seawater of a specific gravity of 1.010 to 1.026 may be used as a culture medium. This water must be sterilized, usually by either high temperatures in an autoclave or by exposure to ultraviolet radiation, to prevent biological contamination of the culture. Various fertilizers are added to the culture medium to facilitate the growth of plankton. A culture must be aerated or agitated in some way to keep plankton suspended, as well as to provide dissolved carbon dioxide for photosynthesis. In addition to constant aeration, most cultures are manually mixed or stirred on a regular basis. Light must be provided for the growth of phytoplankton. The colour temperature of illumination should be approximately 6,500 K, but values from 4,000 K to upwards of 20,000 K have been used successfully. The duration of light exposure should be approximately 16 hours daily; this is the most efficient artificial day length. Anthropogenic changes Marine phytoplankton perform half of the global photosynthetic CO2 fixation (net global primary production of ~50 Pg C per year) and half of the oxygen production despite amounting to only ~1% of global plant biomass. In comparison with terrestrial plants, marine phytoplankton are distributed over a larger surface area, are exposed to less seasonal variation and have markedly faster turnover rates than trees (days versus decades). Therefore, phytoplankton respond rapidly on a global scale to climate variations. These characteristics are important when one is evaluating the contributions of phytoplankton to carbon fixation and forecasting how this production may change in response to perturbations. Predicting the effects of climate change on primary productivity is complicated by phytoplankton bloom cycles that are affected by both bottom-up control (for example, availability of essential nutrients and vertical mixing) and top-down control (for example, grazing and viruses). Increases in solar radiation, temperature and freshwater inputs to surface waters strengthen ocean stratification and consequently reduce transport of nutrients from deep water to surface waters, which reduces primary productivity. Conversely, rising CO2 levels can increase phytoplankton primary production, but only when nutrients are not limiting. Some studies indicate that overall global oceanic phytoplankton density has decreased in the past century, but these conclusions have been questioned because of the limited availability of long-term phytoplankton data, methodological differences in data generation and the large annual and decadal variability in phytoplankton production. Moreover, other studies suggest a global increase in oceanic phytoplankton production and changes in specific regions or specific phytoplankton groups. The global Sea Ice Index is declining, leading to higher light penetration and potentially more primary production; however, there are conflicting predictions for the effects of variable mixing patterns and changes in nutrient supply and for productivity trends in polar zones. 
The effect of human-caused climate change on phytoplankton biodiversity is not well understood. Should greenhouse gas emissions continue rising to high levels by 2100, some phytoplankton models predict an increase in species richness, or the number of different species within a given area. This increase in plankton diversity is traced to warming ocean temperatures. In addition to species richness changes, the locations where phytoplankton are distributed are expected to shift towards the Earth's poles. Such movement may disrupt ecosystems, because phytoplankton are consumed by zooplankton, which in turn sustain fisheries. This shift in phytoplankton location may also diminish the ability of phytoplankton to store carbon that was emitted by human activities. Human (anthropogenic) changes to phytoplankton impact both natural and economic processes. See also Critical depth Deep chlorophyll maximum (microalgae) NAAMES References Further reading External links Secchi Disk and Secchi app, a citizen science project to study the phytoplankton Ocean Drifters, a short film narrated by David Attenborough about the varied roles of plankton Plankton Chronicles, a short documentary films & photos DMS and Climate, NOAA Plankton*Net, images of planktonic species Aquatic ecology Biological oceanography Planktology
Phytoplankton
[ "Biology" ]
4,821
[ "Aquatic ecology", "Ecosystems" ]
50,563
https://en.wikipedia.org/wiki/Sucrose
Sucrose, a disaccharide, is a sugar composed of glucose and fructose subunits. It is produced naturally in plants and is the main constituent of white sugar. It has the molecular formula C12H22O11. For human consumption, sucrose is extracted and refined from either sugarcane or sugar beet. Sugar mills – typically located in tropical regions near where sugarcane is grown – crush the cane and produce raw sugar which is shipped to other factories for refining into pure sucrose. Sugar beet factories are located in temperate climates where the beet is grown, and process the beets directly into refined sugar. The sugar-refining process involves washing the raw sugar crystals before dissolving them into a sugar syrup which is filtered and then passed over carbon to remove any residual colour. The sugar syrup is then concentrated by boiling under a vacuum and crystallized as the final purification process to produce crystals of pure sucrose that are clear, odorless, and sweet. Sugar is often an added ingredient in food production and recipes. About 185 million tonnes of sugar were produced worldwide in 2017. Sucrose is particularly dangerous as a risk factor for tooth decay because Streptococcus mutans bacteria convert it into a sticky, extracellular, dextran-based polysaccharide that allows them to cohere, forming plaque. Sucrose is the only sugar that bacteria can use to form this sticky polysaccharide. Etymology The word sucrose was coined in 1857 by the English chemist William Miller, from the French sucre ("sugar") and the generic chemical suffix for sugars -ose. The abbreviated term Suc is often used for sucrose in scientific literature. The name saccharose was coined in 1860 by the French chemist Marcellin Berthelot. Saccharose is an obsolete name for sugars in general, especially sucrose. Physical and chemical properties Structural O-α-D-glucopyranosyl-(1→2)-β-D-fructofuranoside In sucrose, the monomers glucose and fructose are linked via an ether bond between C1 on the glucosyl subunit and C2 on the fructosyl unit. The bond is called a glycosidic linkage. Glucose exists predominantly as a mixture of α and β "pyranose" anomers, but sucrose has only the α form. Fructose exists as a mixture of five tautomers but sucrose has only the β-D-fructofuranose form. Unlike most disaccharides, the glycosidic bond in sucrose is formed between the reducing ends of both glucose and fructose, and not between the reducing end of one and the non-reducing end of the other. This linkage inhibits further bonding to other saccharide units, and prevents sucrose from spontaneously reacting with cellular and circulatory macromolecules in the manner that glucose and other reducing sugars do. Since sucrose contains no anomeric hydroxyl groups, it is classified as a non-reducing sugar. Sucrose crystallizes in the monoclinic space group P21 with room-temperature lattice parameters a = 1.08631 nm, b = 0.87044 nm, c = 0.77624 nm, β = 102.938°. The purity of sucrose is measured by polarimetry, through the rotation of plane-polarized light by a sugar solution. The specific rotation at 20 °C using yellow "sodium-D" light (589 nm) is +66.47°. Commercial samples of sugar are assayed using this parameter. Sucrose does not deteriorate at ambient conditions. Thermal and oxidative degradation Sucrose does not melt at high temperatures. Instead, it decomposes at around 186 °C to form caramel. 
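Since commercial sucrose is assayed by polarimetry, the specific rotation quoted above can be turned into a quick concentration estimate using Biot's law (observed rotation = specific rotation × path length in decimetres × concentration in g/mL). The sketch below is a minimal illustration: only the +66.47° value comes from the text, while the observed rotation and tube length are hypothetical example values and the solution is assumed to contain pure sucrose.

# Sucrose concentration estimate from an observed optical rotation via Biot's law:
# alpha_observed = [alpha] * path_length_dm * concentration_g_per_mL.
# The specific rotation of +66.47 degrees (sodium-D line, 589 nm) is from the text;
# the observed rotation and tube length below are hypothetical example values.

SPECIFIC_ROTATION_SUCROSE = 66.47  # degrees

def concentration_from_rotation(alpha_obs_deg, path_length_dm):
    """Concentration in g/mL, assuming the solute is pure sucrose."""
    return alpha_obs_deg / (SPECIFIC_ROTATION_SUCROSE * path_length_dm)

if __name__ == "__main__":
    c = concentration_from_rotation(13.29, 2.0)  # hypothetical reading, 2 dm tube
    print(f"estimated concentration: {c:.3f} g/mL")  # ~0.100 g/mL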
Like other carbohydrates, it combusts to carbon dioxide and water by the simplified equation: C12H22O11 + 12 O2 → 12 CO2 + 11 H2O. Mixing sucrose with the oxidizer potassium nitrate produces the fuel known as rocket candy that is used to propel amateur rocket motors. This reaction is somewhat simplified though. Some of the carbon does get fully oxidized to carbon dioxide, and other reactions, such as the water-gas shift reaction, also take place. A more accurate theoretical equation accounts for these additional reactions and products. Sucrose burns with chloric acid, formed by the reaction of hydrochloric acid and potassium chlorate. Sucrose can be dehydrated with sulfuric acid to form a black, carbon-rich solid, as indicated in the idealized equation C12H22O11 → 12 C + 11 H2O. The formula for sucrose's decomposition can be represented as a two-step reaction: the first simplified reaction is dehydration of sucrose to pure carbon and water, and then the carbon is oxidised to CO2 by O2 from the air. Hydrolysis Hydrolysis breaks the glycosidic bond, converting sucrose into glucose and fructose. Hydrolysis is, however, so slow that solutions of sucrose can sit for years with negligible change. If the enzyme sucrase is added, however, the reaction will proceed rapidly. Hydrolysis can also be accelerated with acids, such as cream of tartar or lemon juice, both weak acids. Likewise, gastric acidity converts sucrose to glucose and fructose during digestion, the bond between them being an acetal bond which can be broken by an acid. Given (higher) heats of combustion of 1349.6 kcal/mol for sucrose, 673.0 for glucose, and 675.6 for fructose, hydrolysis releases about 1.0 kcal (4.2 kJ) per mole of sucrose, or about 3 small calories per gram of product. Synthesis and biosynthesis of sucrose The biosynthesis of sucrose proceeds via the precursors UDP-glucose and fructose 6-phosphate, catalyzed by the enzyme sucrose-6-phosphate synthase. The energy for the reaction is gained by the cleavage of uridine diphosphate (UDP). Sucrose is formed by plants, algae and cyanobacteria but not by other organisms. Sucrose is the end product of photosynthesis and is found naturally in many food plants along with the monosaccharide fructose. In many fruits, such as pineapple and apricot, sucrose is the main sugar. In others, such as grapes and pears, fructose is the main sugar. Chemical synthesis After numerous unsuccessful attempts by others, Raymond Lemieux and George Huber succeeded in synthesizing sucrose from acetylated glucose and fructose in 1953. Sources In nature, sucrose is present in many plants, and in particular their roots, fruits and nectars, because it serves as a way to store energy, primarily from photosynthesis. Many mammals, birds, insects and bacteria accumulate and feed on the sucrose in plants, and for some it is their main food source. Although honeybees consume sucrose, the honey they produce consists primarily of fructose and glucose, with only trace amounts of sucrose. As fruits ripen, their sucrose content usually rises sharply, but some fruits contain almost no sucrose at all. This includes grapes, cherries, blueberries, blackberries, figs, pomegranates, tomatoes, avocados, lemons and limes. Sucrose is a naturally occurring sugar, but with the advent of industrialization, it has been increasingly refined and consumed in all kinds of processed foods. Production History of sucrose refinement The production of table sugar has a long history. Some scholars claim Indians discovered how to crystallize sugar during the Gupta dynasty, around CE 350. 
Other scholars point to the ancient manuscripts of China, dated to the 8th century BCE, where one of the earliest historical mentions of sugar cane is included along with the fact that their knowledge of sugar cane was derived from India. By about 500 BCE, residents of modern-day India began making sugar syrup, cooling it in large flat bowls to produce raw sugar crystals that were easier to store and transport. In the local Indian language, these crystals were called khanda, which is the source of the word candy. The army of Alexander the Great was halted on the banks of the river Indus by the refusal of his troops to go further east. They saw people in the Indian subcontinent growing sugarcane and making "granulated, salt-like sweet powder", locally known by a name that was rendered in Greek as sakcharon. On their return journey, the Greek soldiers carried back some of the "honey-bearing reeds". Sugarcane remained a limited crop for over a millennium. Sugar was a rare commodity and traders of sugar became wealthy. Venice, at the height of its financial power, was the chief sugar-distributing center of Europe. The Moors started producing it in Sicily and Spain. Only after the Crusades did it begin to rival honey as a sweetener in Europe. The Spanish began cultivating sugarcane in the West Indies in 1506 (Cuba in 1523). The Portuguese first cultivated sugarcane in Brazil in 1532. Sugar remained a luxury in much of the world until the 18th century. Only the wealthy could afford it. In the 18th century, the demand for table sugar boomed in Europe and by the 19th century it had become regarded as a human necessity. The use of sugar grew from tea to cakes, confectionery and chocolates. Suppliers marketed sugar in novel forms, such as solid cones, which required consumers to use a sugar nip, a pliers-like tool, in order to break off pieces. The demand for cheaper table sugar drove, in part, colonization of tropical islands and nations where labor-intensive sugarcane plantations and table sugar manufacturing could thrive. Growing sugar cane in hot, humid climates and producing table sugar in high-temperature sugar mills was harsh, inhumane work. The demand for cheap labor for this work, in part, first drove the slave trade from Africa (in particular West Africa), followed by the indentured labor trade from South Asia (in particular India). Millions of slaves, followed by millions of indentured laborers, were brought into the Caribbean, Indian Ocean, Pacific Islands, East Africa, Natal, north and eastern parts of South America, and southeast Asia. The modern ethnic mix of many nations, settled in the last two centuries, has been influenced by table sugar. Beginning in the late 18th century, the production of sugar became increasingly mechanized. The steam engine first powered a sugar mill in Jamaica in 1768, and, soon after, steam replaced direct firing as the source of process heat. During the same century, Europeans began experimenting with sugar production from other crops. Andreas Marggraf identified sucrose in beet root and his student Franz Achard built a sugar beet processing factory in Silesia (Prussia). The beet-sugar industry took off during the Napoleonic Wars, when France and the continent were cut off from Caribbean sugar. In 2009, about 20 percent of the world's sugar was produced from beets. Today, a large beet refinery producing around 1,500 tonnes of sugar a day needs a permanent workforce of about 150 for 24-hour production. Trends Table sugar (sucrose) comes from plant sources. 
Two important sugar crops predominate: sugarcane (Saccharum spp.) and sugar beets (Beta vulgaris), in which sugar can account for 12% to 20% of the plant's dry weight. Minor commercial sugar crops include the date palm (Phoenix dactylifera), sorghum (Sorghum vulgare), and the sugar maple (Acer saccharum). Sucrose is obtained by extraction of these crops with hot water; concentration of the extract gives syrups, from which solid sucrose can be crystallized. In 2017, worldwide production of table sugar amounted to 185 million tonnes. Most cane sugar comes from countries with warm climates, because sugarcane does not tolerate frost. Sugar beets, on the other hand, grow only in cooler temperate regions and do not tolerate extreme heat. About 80 percent of sucrose is derived from sugarcane, the rest almost all from sugar beets. In mid-2018, India and Brazil had about the same production of sugar – 34 million tonnes – followed by the European Union, Thailand, and China as the major producers. India, the European Union, and China were the leading domestic consumers of sugar in 2018. Beet sugar comes from regions with cooler climates: northwest and eastern Europe, northern Japan, plus some areas in the United States (including California). In the northern hemisphere, the beet-growing season ends with the start of harvesting around September. Harvesting and processing continues until March in some cases. The availability of processing plant capacity and the weather both influence the duration of harvesting and processing – the industry can store harvested beets until processed, but a frost-damaged beet becomes effectively unprocessable. The United States sets high sugar prices to support its producers, with the effect that many former purchasers of sugar have switched to corn syrup (beverage manufacturers) or moved out of the country (candy manufacturers). The low prices of glucose syrups produced from wheat and corn (maize) threaten the traditional sugar market. Used in combination with artificial sweeteners, they can allow drink manufacturers to produce very low-cost goods. Types Cane Since the 6th century BCE, cane sugar producers have crushed the harvested vegetable material from sugarcane in order to collect and filter the juice. They then treat the liquid, often with lime (calcium oxide), to remove impurities and then neutralize it. Boiling the juice then allows the sediment to settle to the bottom for dredging out, while the scum rises to the surface for skimming off. In cooling, the liquid crystallizes, usually in the process of stirring, to produce sugar crystals. Centrifuges usually remove the uncrystallized syrup. The producers can then either sell the sugar product for use as is, or process it further to produce lighter grades. The later processing may take place in another factory in another country. Sugarcane is a major component of Brazilian agriculture; the country is the world's largest producer of sugarcane and its derivative products, such as crystallized sugar and ethanol (ethanol fuel). Beet Beet sugar producers slice the washed beets, then extract the sugar with hot water in a "diffuser". An alkaline solution ("milk of lime" and carbon dioxide from the lime kiln) then serves to precipitate impurities (see carbonatation). After filtration, evaporation concentrates the juice to a content of about 70% solids, and controlled crystallisation extracts the sugar. A centrifuge removes the sugar crystals from the liquid, which gets recycled in the crystalliser stages. 
When economic constraints prevent the removal of more sugar, the manufacturer discards the remaining liquid, now known as molasses, or sells it on to producers of animal feed. Sieving the resultant white sugar produces different grades for selling. Cane versus beet It is difficult to distinguish between fully refined sugar produced from beet and cane. One way is by isotope analysis of carbon. Cane uses C4 carbon fixation, and beet uses C3 carbon fixation, resulting in a different ratio of 13C and 12C isotopes in the sucrose. Tests are used to detect fraudulent abuse of European Union subsidies or to aid in the detection of adulterated fruit juice. Sugar cane tolerates hot climates better, but the production of sugar cane needs approximately four times as much water as the production of sugar beet. As a result, some countries that traditionally produced cane sugar (such as Egypt) have built new beet sugar factories since about 2008. Some sugar factories process both sugar cane and sugar beets and extend their processing period in that way. The production of sugar leaves residues that differ substantially depending on the raw materials used and on the place of production. While cane molasses is often used in food preparation, humans find molasses from sugar beets unpalatable, and it consequently ends up mostly as industrial fermentation feedstock (for example in alcohol distilleries), or as animal feed. Once dried, either type of molasses can serve as fuel for burning. Pure beet sugar is difficult to find, so labelled, in the marketplace. Although some makers label their product clearly as "pure cane sugar", beet sugar is almost always labeled simply as sugar or pure sugar. Interviews with the five major beet sugar-producing companies revealed that many store brands or "private label" sugar products are pure beet sugar. The lot code can be used to identify the company and the plant from which the sugar came, enabling beet sugar to be identified if the codes are known. Culinary sugars Mill white Mill white, also called plantation white, crystal sugar or superior sugar is produced from raw sugar. It is exposed to sulfur dioxide during the production to reduce the concentration of color compounds and helps prevent further color development during the crystallization process. Although common to sugarcane-growing areas, this product does not store or ship well. After a few weeks, its impurities tend to promote discoloration and clumping; therefore this type of sugar is generally limited to local consumption. Blanco directo Blanco directo, a white sugar common in India and other south Asian countries, is produced by precipitating many impurities out of cane juice using phosphoric acid and calcium hydroxide, similar to the carbonatation technique used in beet sugar refining. Blanco directo is more pure than mill white sugar, but less pure than white refined. White refined White refined is the most common form of sugar in North America and Europe. Refined sugar is made by dissolving and purifying raw sugar using phosphoric acid similar to the method used for blanco directo, a carbonatation process involving calcium hydroxide and carbon dioxide, or by various filtration strategies. It is then further purified by filtration through a bed of activated carbon or bone char. Beet sugar refineries produce refined white sugar directly without an intermediate raw stage. 
White refined sugar is typically sold as granulated sugar, which has been dried to prevent clumping and comes in various crystal sizes for home and industrial use: Coarse-grain, such as sanding sugar (also called "pearl sugar", "decorating sugar", nibbed sugar or sugar nibs) is a coarse grain sugar used to add sparkle and flavor atop baked goods and candies. Its large reflective crystals will not dissolve when subjected to heat. Granulated, familiar as table sugar, with a grain size about 0.5 mm across. "Sugar cubes" are lumps for convenient consumption produced by mixing granulated sugar with sugar syrup. Caster (0.35 mm), a very fine sugar in Britain and other Commonwealth countries, so-named because the grains are small enough to fit through a sugar caster which is a small vessel with a perforated top, from which to sprinkle sugar at table. Commonly used in baking and mixed drinks, it is sold as "superfine" sugar in the United States. Because of its fineness, it dissolves faster than regular white sugar and is especially useful in meringues and cold liquids. Caster sugar can be prepared at home by grinding granulated sugar for a couple of minutes in a mortar or food processor. Powdered, 10X sugar, confectioner's sugar (0.060 mm), or icing sugar (0.024 mm), produced by grinding sugar to a fine powder. The manufacturer may add a small amount of anticaking agent to prevent clumping — either corn starch (1% to 3%) or tri-calcium phosphate. Brown sugar comes either from the late stages of cane sugar refining, when sugar forms fine crystals with significant molasses content, or from coating white refined sugar with a cane molasses syrup (blackstrap molasses). Brown sugar's color and taste become stronger with increasing molasses content, as do its moisture-retaining properties. Brown sugars also tend to harden if exposed to the atmosphere, although proper handling can reverse this. Measurement Dissolved sugar content Scientists and the sugar industry use degrees Brix (symbol °Bx), introduced by Adolf Brix, as units of measurement of the mass ratio of dissolved substance to water in a liquid. A 25 °Bx sucrose solution has 25 grams of sucrose per 100 grams of liquid; or, to put it another way, 25 grams of sucrose sugar and 75 grams of water exist in the 100 grams of solution. The Brix degrees are measured using an infrared sensor. This measurement does not equate to Brix degrees from a density or refractive index measurement, because it will specifically measure dissolved sugar concentration instead of all dissolved solids. When using a refractometer, one should report the result as "refractometric dried substance" (RDS). One might speak of a liquid as having 20 °Bx RDS. This refers to a measure of percent by weight of total dried solids and, although not technically the same as Brix degrees determined through an infrared method, renders an accurate measurement of sucrose content, since sucrose in fact forms the majority of dried solids. The advent of in-line infrared Brix measurement sensors has made measuring the amount of dissolved sugar in products economical using a direct measurement. Consumption Refined sugar was a luxury before the 18th century. It became widely popular in the 18th century, then graduated to becoming a necessary food in the 19th century. This evolution of taste and demand for sugar as an essential food ingredient unleashed major economic and social changes. 
Eventually, table sugar became sufficiently cheap and common enough to influence standard cuisine and flavored drinks. Sucrose forms a major element in confectionery and desserts. Cooks use it for sweetening. It can also act as a food preservative when used in sufficient concentrations, and thus is an important ingredient in the production of fruit preserves. Sucrose is important to the structure of many foods, including biscuits and cookies, cakes and pies, candy, and ice cream and sorbets. It is a common ingredient in many processed and so-called "junk foods". Nutritional information Fully refined sugar is 99.9% sucrose, thus providing only carbohydrate as dietary nutrient and 390 kilocalories per 100 g serving (table). There are no micronutrients of significance in fully refined sugar (table). Metabolism of sucrose In humans and other mammals, sucrose is broken down into its constituent monosaccharides, glucose and fructose, by sucrase or isomaltase glycoside hydrolases, which are located in the membrane of the microvilli lining the duodenum. The resulting glucose and fructose molecules are then rapidly absorbed into the bloodstream. In bacteria and some animals, sucrose is digested by the enzyme invertase. Sucrose is an easily assimilated macronutrient that provides a quick source of energy, provoking a rapid rise in blood glucose upon ingestion. Sucrose, as a pure carbohydrate, has an energy content of 3.94 kilocalories per gram (or 17 kilojoules per gram). If consumed excessively, sucrose may contribute to the development of metabolic syndrome, including increased risk for type 2 diabetes, insulin resistance, weight gain and obesity in adults and children. Tooth decay Tooth decay (dental caries) has become a pronounced health hazard associated with the consumption of sugars, especially sucrose. Oral bacteria such as Streptococcus mutans live in dental plaque and metabolize any free sugars (not just sucrose, but also glucose, lactose, fructose, and cooked starches) into lactic acid. The resultant lactic acid lowers the pH of the tooth's surface, stripping it of minerals in the process known as tooth decay. All 6-carbon sugars and disaccharides based on 6-carbon sugars can be converted by dental plaque bacteria into acid that demineralizes teeth, but sucrose may be uniquely useful to Streptococcus sanguinis (formerly Streptococcus sanguis) and Streptococcus mutans. Sucrose is the only dietary sugar that can be converted to sticky glucans (dextran-like polysaccharides) by extracellular enzymes. These glucans allow the bacteria to adhere to the tooth surface and to build up thick layers of plaque. The anaerobic conditions deep in the plaque encourage the formation of acids, which leads to carious lesions. Thus, sucrose could enable S. mutans, S. sanguinis and many other species of bacteria to adhere strongly and resist natural removal, e.g. by flow of saliva, although they are easily removed by brushing. The glucans and levans (fructose polysaccharides) produced by the plaque bacteria also act as a reserve food supply for the bacteria. Such a special role of sucrose in the formation of tooth decay is much more significant in light of the almost universal use of sucrose as the most desirable sweetening agent. Widespread replacement of sucrose by high-fructose corn syrup (HFCS) has not diminished the danger from sucrose. 
If smaller amounts of sucrose are present in the diet, they will still be sufficient for the development of thick, anaerobic plaque and plaque bacteria will metabolise other sugars in the diet, such as the glucose and fructose in HFCS. Glycemic index Sucrose is a disaccharide made up of 50% glucose and 50% fructose and has a glycemic index of 65. Sucrose is digested rapidly, but has a relatively low glycemic index due to its content of fructose, which has a minimal effect on blood glucose. As with other sugars, sucrose is digested into its components via the enzyme sucrase to glucose (blood sugar). The glucose component is transported into the blood where it serves immediate metabolic demands, or is converted and reserved in the liver as glycogen. Gout The occurrence of gout is connected with an excess production of uric acid. A diet rich in sucrose may lead to gout as it raises the level of insulin, which prevents excretion of uric acid from the body. As the concentration of uric acid in the body increases, so does the concentration of uric acid in the joint liquid and beyond a critical concentration, the uric acid begins to precipitate into crystals. Researchers have implicated sugary drinks high in fructose in a surge in cases of gout. Sucrose intolerance UN dietary recommendation In 2015, the World Health Organization published a new guideline on sugars intake for adults and children, as a result of an extensive review of the available scientific evidence by a multidisciplinary group of experts. The guideline recommends that both adults and children ensure their intake of free sugars (monosaccharides and disaccharides added to foods and beverages by the manufacturer, cook or consumer, and sugars naturally present in honey, syrups, fruit juices and fruit juice concentrates) is less than 10% of total energy intake. A level below 5% of total energy intake brings additional health benefits, especially with regards to dental caries. Religious concerns The sugar refining industry often uses bone char (calcinated animal bones) for decolorizing. About 25% of sugar produced in the U.S. is processed using bone char as a filter, the remainder being processed with activated carbon. As bone char does not seem to remain in finished sugar, Jewish religious leaders consider sugar filtered through it to be pareve, meaning that it is neither meat nor dairy and may be used with either type of food. However, the bone char must source to a kosher animal (e.g. cow, sheep) for the sugar to be kosher. Trade and economics One of the most widely traded commodities in the world throughout history, sugar accounts for around 2% of the global dry cargo market. International sugar prices show great volatility, ranging from around 3 cents to over 60 cents per pound in the 50 years. About 100 of the world's 180 countries produce sugar from beet or cane, a few more refine raw sugar to produce white sugar, and all countries consume sugar. Consumption of sugar ranges from around per person per annum in Ethiopia to around in Belgium. Consumption per capita rises with income per capita until it reaches a plateau of around per person per year in middle income countries. Many countries subsidize sugar production heavily. The European Union, the United States, Japan, and many developing countries subsidize domestic production and maintain high tariffs on imports. 
Sugar prices in these countries have often been up to triple the prices on the international market; in recent years, with world market sugar futures prices strong, such prices were typically double world prices. Within international trade bodies, especially in the World Trade Organization (WTO), the "G20" countries led by Brazil have long argued that, because these sugar markets in essence exclude cane sugar imports, the G20 sugar producers receive lower prices than they would under free trade. While both the European Union and United States maintain trade agreements whereby certain developing and least developed countries (LDCs) can sell certain quantities of sugar into their markets, free of the usual import tariffs, countries outside these preferred trade régimes have complained that these arrangements violate the "most favoured nation" principle of international trade. This has led to numerous tariffs and levies in the past. In 2004, the WTO sided with a group of cane sugar exporting nations (led by Brazil and Australia) and ruled illegal the EU sugar-régime and the accompanying ACP-EU Sugar Protocol, under which a group of African, Caribbean, and Pacific countries received preferential access to the European sugar market. In response to this and to other rulings of the WTO, and owing to internal pressures against the EU sugar-régime, the European Commission proposed on 22 June 2005 a radical reform of the EU sugar-régime that cut prices by 39% and eliminated all EU sugar exports. In 2007, it seemed that the U.S. Sugar Program could become the next target for reform. However, some commentators expected heavy lobbying from the U.S. sugar industry, which donated $2.7 million to U.S. House and Senate incumbents in the 2006 U.S. election, more than any other group of U.S. food-growers. Especially prominent among sugar lobbyists were the Fanjul Brothers, so-called "sugar barons", who made the largest single individual contributions of soft money to both the Democratic and Republican parties in the U.S. political system. Small quantities of sugar, especially specialty grades of sugar, reach the market as 'fair trade' commodities; the fair trade system produces and sells these products with the understanding that a larger-than-usual fraction of the revenue will support small farmers in the developing world. However, whilst the Fairtrade Foundation offers a premium of $60.00 per tonne to small farmers for sugar branded as "Fairtrade", government schemes such as the U.S. Sugar Program and the ACP-EU Sugar Protocol offer premiums of around $400.00 per tonne above world market prices. The EU announced on 14 September 2007 that it had offered "to eliminate all duties and quotas on the import of sugar into the EU".
Sucrose
[ "Chemistry" ]
6,654
[ "Glycobiology", "Carbohydrates", "Glycosides", "Biomolecules" ]
50,564
https://en.wikipedia.org/wiki/Gray%20code
The reflected binary code (RBC), also known as reflected binary (RB) or Gray code after Frank Gray, is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit). For example, the representation of the decimal value "1" in binary would normally be "001" and "2" would be "010". In Gray code, these values are represented as "001" and "011". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two. Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice. Function Many devices indicate position by closing and opening switches. If that device uses natural binary codes, positions 3 and 4 are next to each other but all three bits of the binary representation differ: {| class="wikitable" style="text-align:center;" |- ! Decimal !! Binary |- | ... || ... |- | 3 || 011 |- | 4 || 100 |- | ... || ... |} The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce, the transition might look like 011 — 001 — 101 — 100. When the switches appear to be in position 001, the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic, then the sequential system may store a false value. This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position, resulting in codes assigning to each of a contiguous set of integers, or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance, single-distance, single-step, monostrophic or syncopic codes, in reference to the Hamming distance of 1 between adjacent codes. Invention In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code, or BRGC. Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943. Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name". He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process". In the standard encoding of the Gray code the least significant bit follows a repetitive pattern of 2 on, 2 off; the next digit a pattern of 4 on, 4 off; the i-th least significant bit a pattern of 2^i on, 2^i off. The most significant digit is an exception to this: for an n-bit Gray code, the most significant digit follows the pattern 2^(n-1) on, 2^(n-1) off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards 2^(n-2) places. The four-bit version of this is shown below: {| class="wikitable sortable" style="text-align:center;" |- ! Decimal !! Binary !! 
Gray |- | 0 || 0000 || 0000 |- | 1 || 0001 || 0001 |- | 2 || 0010 || 0011 |- | 3 || 0011 || 0010 |- | 4 || 0100 || 0110 |- | 5 || 0101 || 0111 |- | 6 || 0110 || 0101 |- | 7 || 0111 || 0100 |- | 8 || 1000 || 1100 |- | 9 || 1001 || 1101 |- | 10 || 1010 || 1111 |- | 11 || 1011 || 1110 |- | 12 || 1100 || 1010 |- | 13 || 1101 || 1011 |- | 14 || 1110 || 1001 |- | 15 || 1111 || 1000 |} For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code. In modern digital communications, Gray codes play an important role in error correction. For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise. Despite the fact that Stibitz described this code before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code"; one of those also lists "minimum error code" and "cyclic permutation code" among the names. A 1954 patent application refers to "the Bell Telephone Gray code". Other names include "cyclic binary code", "cyclic progression code", "cyclic permuting binary" or "cyclic permuted binary" (CPB). The Gray code is sometimes misattributed to 19th century electrical device inventor Elisha Gray. History and practical application Mathematical puzzles Reflected binary codes were applied to mathematical puzzles before they became known to engineers. The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle, a sequential mechanical puzzle mechanism described by the French Louis Gros in 1872. It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the French Édouard Lucas in 1883. Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes. Martin Gardner wrote a popular account of the Gray code in his August 1972 Mathematical Games column in Scientific American. The code also forms a Hamiltonian cycle on a hypercube, where each bit is seen as one dimension. Telegraphy codes When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to a 5-unit code for his printing telegraph system, in 1875 or 1876, he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order, and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code. This code became known as Baudot code and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932. About the same time, the German-Austrian Otto Schäffler demonstrated another printing telegraph in Vienna using a 5-bit reflected binary code for the same purpose, in 1874. Analog-to-digital signal conversion Frank Gray, who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube-based apparatus. 
Filed in 1947, the method and apparatus were granted a patent in 1953, and the name of Gray stuck to the codes. The "PCM tube" apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code. Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose. Position encoders Gray codes are used in linear and rotary position encoders (absolute encoders and quadrature encoders) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others. For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals. Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking. In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position. Genetic algorithms Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms. They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties. Boolean circuit minimization Gray codes are also used in labelling the axes of Karnaugh maps since 1953 as well as in Händler circle graphs since 1958, both graphical methods for logic circuit minimization. Error correction In modern digital communications, 1D- and 2D-Gray codes play an important role in error prevention before applying an error correction. For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise. 
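The single-bit-change guarantee that the position-encoder and constellation-mapping applications above rely on can be checked mechanically. The following C sketch is illustrative only (the function and variable names are ours, not from the article or any library); it verifies that every pair of consecutive binary-reflected Gray codes, including the wrap-around pair, differs in exactly one bit:

#include <stdio.h>

/* Illustrative check: consecutive binary-reflected Gray codes, including the
   wrap-around from the last code back to the first, differ in exactly one bit. */
static unsigned gray(unsigned n) { return n ^ (n >> 1); }

static int differ_in_one_bit(unsigned a, unsigned b) {
    unsigned d = a ^ b;
    return d != 0 && (d & (d - 1)) == 0;  /* true when d has exactly one bit set */
}

int main(void) {
    const unsigned bits = 4;
    const unsigned count = 1u << bits;
    int ok = 1;
    for (unsigned i = 0; i < count; i++)
        ok &= differ_in_one_bit(gray(i), gray((i + 1) % count));  /* i = count-1 tests the wrap-around */
    printf("unit-distance and cyclic property for %u bits: %s\n", bits, ok ? "holds" : "violated");
    return 0;
}

Running the same loop with the identity mapping in place of gray() fails immediately, for example at the step from 3 (011) to 4 (100), where three bits change at once.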
Communication between clock domains Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered operating in different "clock domains". It is fundamental to the design of large chips that operate with many different clocking frequencies. Cycling through states with minimal effort If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves. A balanced Gray code can be constructed, that flips every bit equally often. Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit. Gray code counters and arithmetic George R. Stibitz utilized a reflected binary code in a binary pulse counting device in 1941 already. A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains. The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used. Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders, however the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code, it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous. 
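As a minimal sketch of the sampling hazard described above (identifiers are illustrative, not from the article): each bit of a pointer crossing a clock domain may be captured at either its old or its new state, so the receiving domain can observe any combination of old and new bits. With a Gray-coded pointer only one bit differs between successive counts, so the only observable values are the old and the new count; with a plain binary pointer, values that never existed can appear.

#include <stdio.h>

static unsigned to_gray(unsigned x) { return x ^ (x >> 1); }

/* List every value a sampler could capture while `from` changes to `to`,
   assuming each differing bit independently shows its old or its new state. */
static void possible_samples(const char *label, unsigned from, unsigned to) {
    unsigned diff = from ^ to;
    printf("%s:", label);
    for (unsigned sub = diff; ; sub = (sub - 1) & diff) {  /* enumerate all subsets of diff */
        printf(" %u", from ^ sub);
        if (sub == 0)
            break;
    }
    printf("\n");
}

int main(void) {
    unsigned old_count = 7, new_count = 8;  /* 0111 -> 1000: all four bits change in binary */
    possible_samples("binary pointer readings", old_count, new_count);
    possible_samples("Gray pointer readings  ", to_gray(old_count), to_gray(new_count));
    return 0;
}

The binary case prints sixteen different readings, most of them counts that never occurred; the Gray case prints only 4 and 12, the codes for 7 and 8.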
To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code, add one to it with a standard binary adder, and then convert the result back to Gray code. Other methods of counting in Gray code are discussed in a report by Robert W. Doran, including taking the output from the first latches of the master-slave flip flops in a binary ripple counter. Gray code addressing As the execution of program code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs. Constructing an n-bit Gray code The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0, prefixing the entries in the reflected list with a binary 1, and then concatenating the original list with the reversed list. For example, generating the n = 3 list from the n = 2 list: {| cellpadding="5" border="0" style="margin: 1em;" |- | 2-bit list: | 00, 01, 11, 10 |   |- | Reflected: |   | 10, 11, 01, 00 |- | Prefix old entries with 0: | 000, 001, 011, 010 |   |- | Prefix new entries with 1: |   | 110, 111, 101, 100 |- | Concatenated: | 000, 001, 011, 010, | 110, 111, 101, 100 |} The one-bit Gray code is G1 = (0, 1). This can be thought of as built recursively as above from a zero-bit Gray code G0 = ( Λ ) consisting of a single entry of zero length. This iterative process of generating Gn+1 from Gn makes the following properties of the standard reflecting code clear: Gn is a permutation of the numbers 0, ..., 2^n − 1. (Each number appears exactly once in the list.) Gn is embedded as the first half of Gn+1. Therefore, the coding is stable, in the sense that once a binary number appears in Gn it appears in the same position in all longer lists; so it makes sense to talk about the reflective Gray code value of a number: G(m) = the mth reflecting Gray code, counting from 0. Each entry in Gn differs by only one bit from the previous entry. (The Hamming distance is 1.) The last entry in Gn differs by only one bit from the first entry. (The code is cyclic.) These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the nth Gray code is obtained by computing n ^ (n >> 1), that is, n XOR (n shifted right by one bit). Prepending a 0 bit leaves the order of the code words unchanged, prepending a 1 bit reverses the order of the code words. If the bits at position i of codewords are inverted, the order of neighbouring blocks of 2^i codewords is reversed. 
For example, if bit 0 is inverted in a 3 bit codeword sequence, the order of two neighbouring codewords is reversed: 000,001,010,011,100,101,110,111 → 001,000,011,010,101,100,111,110 (invert bit 0) If bit 1 is inverted, blocks of 2 codewords change order: 000,001,010,011,100,101,110,111 → 010,011,000,001,110,111,100,101 (invert bit 1) If bit 2 is inverted, blocks of 4 codewords reverse order: 000,001,010,011,100,101,110,111 → 100,101,110,111,000,001,010,011 (invert bit 2) Thus, performing an exclusive or on the bit at position i with the bit at position i + 1 leaves the order of the codewords intact if the bit at position i + 1 is 0, and reverses the order of neighbouring blocks of 2^i codewords if it is 1. Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code. A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit so it cannot be performed in parallel. Assuming g_i is the ith Gray-coded bit (g_0 being the most significant bit), and b_i is the ith binary-coded bit (b_0 being the most significant bit), the reverse translation can be given recursively: b_0 = g_0, and b_i = g_i XOR b_(i−1) for i > 0. Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two. To construct the binary-reflected Gray code iteratively, at step 0 start with the code 0, and at step k > 0 find the bit position of the least significant 1 in the binary representation of k and flip the bit at that position in the previous code G(k − 1) to get the next code G(k). The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, .... See find first set for efficient algorithms to compute these values. Converting to and from Gray code The following functions in C convert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist. typedef unsigned int uint; // This function converts an unsigned binary number to reflected binary Gray code. uint BinaryToGray(uint num) { return num ^ (num >> 1); // The operator >> is shift right. The operator ^ is exclusive or. } // This function converts a reflected binary Gray code number to a binary number. uint GrayToBinary(uint num) { uint mask = num; while (mask) { // Each Gray code bit is exclusive-ored with all more significant bits. mask >>= 1; num ^= mask; } return num; } // A more efficient version for Gray codes 32 bits or fewer through the use of SWAR (SIMD within a register) techniques. // It implements a parallel prefix XOR function. The assignment statements can be in any order. // // This function can be adapted for longer Gray codes by adding steps. uint GrayToBinary32(uint num) { num ^= num >> 16; num ^= num >> 8; num ^= num >> 4; num ^= num >> 2; num ^= num >> 1; return num; } // A four-bit-at-once variant changes a binary number abcd to abcd ^ 00ab, then to abcd ^ 00ab ^ 0abc ^ 000a. On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set. If MASK is the constant binary string of ones ended with a single zero digit, then carryless multiplication of MASK with the Gray encoding of x will always give either x or its bitwise negation. Special types of Gray codes In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. 
Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word). Gray codes with n bits and of length less than 2n It is possible to construct binary Gray codes with n bits with a length of less than , if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values at either the beginning and the end, or in the middle. OEIS sequence A290772 gives the number of possible Gray sequences of length that include zero and use the minimum number of bits. n-ary Gray code There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n-ary Gray code, also known as a non-Boolean Gray code. As the name implies, this type of Gray code uses non-Boolean values in its encodings. For example, a 3-ary (ternary) Gray code would use the values 0,1,2. The (n, k)-Gray code is the n-ary Gray code with k digits. The sequence of elements in the (3, 2)-Gray code is: 00,01,02,12,11,10,20,21,22. The (n, k)-Gray code may be constructed recursively, as the BRGC, or may be constructed iteratively. An algorithm to iteratively generate the (N, k)-Gray code is presented (in C): // inputs: base, digits, value // output: Gray // Convert a value to a Gray code with the given base and digits. // Iterating through a sequence of values would result in a sequence // of Gray codes in which only one digit changes at a time. void toGray(unsigned base, unsigned digits, unsigned value, unsigned gray[digits]) { unsigned baseN[digits]; // Stores the ordinary base-N number, one digit per entry unsigned i; // The loop variable // Put the normal baseN number into the baseN array. For base 10, 109 // would be stored as [9,0,1] for (i = 0; i < digits; i++) { baseN[i] = value % base; value = value / base; } // Convert the normal baseN number into the Gray code equivalent. Note that // the loop starts at the most significant digit and goes down. unsigned shift = 0; while (i--) { // The Gray digit gets shifted down by the sum of the higher // digits. gray[i] = (baseN[i] + shift) % base; shift = shift + base - gray[i]; // Subtract from base so shift is positive } } // EXAMPLES // input: value = 1899, base = 10, digits = 4 // output: baseN[] = [9,9,8,1], gray[] = [0,1,7,1] // input: value = 1900, base = 10, digits = 4 // output: baseN[] = [0,0,9,1], gray[] = [0,1,8,1] There are other Gray code algorithms for (n,k)-Gray codes. The (n,k)-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan, lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one. Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods. See also Skew binary number system, a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation. Balanced Gray code Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity". 
In balanced Gray codes, the number of changes in different coordinate positions are as close as possible. To make this more precise, let G be an R-ary complete Gray cycle having transition sequence ; the transition counts (spectrum) of G are the collection of integers defined by A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have for all k. Clearly, when , such codes exist only if n is a power of 2. If n is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either or . Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two. For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced: whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions. In this example, four positions have six transitions each, and one has eight: We will now show a construction and implementation for well-balanced binary Gray codes which allows us to generate an n-digit balanced Gray code for every n. The main principle is to inductively construct an (n + 2)-digit Gray code given an n-digit Gray code G in such a way that the balanced property is preserved. To do this, we consider partitions of into an even number L of non-empty blocks of the form where , , and ). This partition induces an -digit Gray code given by If we define the transition multiplicities to be the number of times the digit in position i changes between consecutive blocks in a partition, then for the (n + 2)-digit Gray code induced by this partition the transition spectrum is The delicate part of this construction is to find an adequate partitioning of a balanced n-digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digit transition and splitting another block at another digit transition produces a different Gray code with exactly the same transition spectrum , so one may for example designate the first transitions at digit as those that fall between two blocks. Uniform codes can be found when and , and this construction can be extended to the R-ary case as well. Long run Gray codes Long run (or maximum gap) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible. Monotonic Gray codes Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors. If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one. We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube into levels of vertices that have equal weight, i.e. for . These levels satisfy . Let be the subgraph of induced by , and let be the edges in . A monotonic Gray code is then a Hamiltonian path in such that whenever comes before in the path, then . 
An elegant construction of monotonic n-digit Gray codes for any n is based on the idea of recursively building subpaths of length having edges in . We define , whenever or , and otherwise. Here, is a suitably defined permutation and refers to the path P with its coordinates permuted by . These paths give rise to two monotonic n-digit Gray codes and given by The choice of which ensures that these codes are indeed Gray codes turns out to be . The first few values of are shown in the table below. {| class="wikitable floatright" style="text-align: center;" |+ Subpaths in the Savage–Winkler algorithm |- ! scope="col" | ! scope="col" | j = 0 ! scope="col" | j = 1 ! scope="col" | j = 2 ! scope="col" | j = 3 |- ! scope="row" | n = 1 | || || || |- ! scope="row" | n = 2 | || || || |- ! scope="row" | n = 3 | || || || |- ! scope="row" | n = 4 | || || || |} These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O(n) time. The algorithm is most easily described using coroutines. Monotonic codes have an interesting connection to the Lovász conjecture, which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for , and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839N, where N is the number of vertices in the middle-level subgraph. Beckett–Gray code Another type of Gray code, the Beckett–Gray code, is named for Irish playwright Samuel Beckett, who was interested in symmetry. His play "Quad" features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once. Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue, so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first. Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth's Art of Computer Programming. According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than solutions for the case n = 7 have been found. Snake-in-the-box codes Snake-in-the-box codes, or snakes, are the sequences of nodes of induced paths in an n-dimensional hypercube graph, and coil-in-the-box codes, or coils, are the sequences of nodes of induced cycles in a hypercube. 
Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s; since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension. Single-track Gray code Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding and refined by Hiltgen, Paterson and Brandestini in Single-track Gray Codes (1996). The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix, each column is a cyclic shift of the first column. The name comes from their use with rotary encoders, where a number of tracks are being sensed by contacts, resulting for each in an output of 0 or 1. To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts. If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC. For many years, Torsten Sillke and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders. Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible. Although it is not possible to distinguish 2^n positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most 2^n − 2n positions and that for prime n the limit is 2^n − 2 positions. The authors went on to generate a 504-position single track code of length 9 which they believe is optimal. Since this number is larger than 2^8 = 256, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors. An STGC for P = 30 and n = 5 is reproduced here: {|class="wikitable" style="text-align:center; background:#FFFFFF; border-width:0;" |+ Single-track Gray code for 30 positions ! 
Angle || Code |rowspan="7" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="7" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="7" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="7" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |- | 0° || || 72° || || 144° || || 216° || || 288° || |- | 12° || || 84° || || 156° || || 228° || || 300° || |- | 24° || || 96° || || 168° || || 240° || || 312° || |- | 36° || || 108° || || 180° || || 252° || || 324° || |- | 48° || || 120° || || 192° || || 264° || || 336° || |- | 60° || || 132° || || 204° || || 276° || || 348° || |} Each column is a cyclic shift of the first column, and from any row to the next row only one bit changes. The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size. The Gray code nature is useful (compared to chain codes, also called De Bruijn sequences), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving. Since this 30 degree example was added, there has been a lot of interest in examples with higher angular resolution. In 2008, Gary Williams, based on previous work discovered a 9-bit Single Track Gray Code that gives a 1 degree resolution. This gray code was used to design an actual device which was published on the site Thingiverse. This device was designed by etzenseep (Florian Bauer) in September, 2022. An STGC for P = 360 and n = 9 is reproduced here: {|class="wikitable" style="text-align:center; background:#FFFFFF; border-width:0;" |+ Single-track Gray code for 360 positions ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! Angle || Code |rowspan="42" style="text-align:center; background:#FFFFFF; border-width:0;"| ! 
(Table: Single-track Gray code for 360 positions — one 9-bit codeword per 1° step from 0° to 359°; the individual codeword values are omitted.)
{| class="wikitable"
|+ Starting and ending angles for the 20 tracks for a single-track Gray code with 9 sensors separated by 40°
! Starting angle !! Ending angle !! Length
|-
| 3 || 4 || 2
|-
| 23 || 28 || 6
|-
| 31 || 37 || 7
|-
| 44 || 48 || 5
|-
| 56 || 60 || 5
|-
| 64 || 71 || 8
|-
| 74 || 76 || 3
|-
| 88 || 91 || 4
|-
| 94 || 96 || 3
|-
| 99 || 104 || 6
|-
| 110 || 115 || 6
|-
| 131 || 134 || 4
|-
| 138 || 154 || 17
|-
| 173 || 181 || 9
|-
| 186 || 187 || 2
|-
| 220 || 238 || 19
|-
| 242 || 246 || 5
|-
| 273 || 279 || 7
|-
| 286 || 289 || 4
|-
| 307 || 360 || 54
|}
Two-dimensional Gray code Two-dimensional Gray codes are used in communication to minimize the number of bit errors between adjacent points in a quadrature amplitude modulation (QAM) constellation. In a typical encoding, horizontally and vertically adjacent constellation points differ by a single bit, and diagonally adjacent points differ by 2 bits. Two-dimensional Gray codes also have uses in location identification schemes, where the code would be applied to area maps such as a Mercator projection of the Earth's surface and an appropriate cyclic two-dimensional distance function such as the Mannheim metric would be used to calculate the distance between two encoded locations, thereby combining the characteristics of the Hamming distance with the cyclic continuation of a Mercator projection.
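To illustrate the QAM case, the sketch below builds one common Gray labelling of a 16-QAM constellation by applying a 2-bit reflected Gray code independently to the in-phase and quadrature axes, then checks that horizontally and vertically adjacent points differ in exactly one bit. The particular level-to-bit assignment is an assumption chosen for the example rather than a labelling prescribed above.

```python
import itertools

# 2-bit reflected Gray code applied independently to the I and Q amplitude levels.
gray2 = {-3: (0, 0), -1: (0, 1), 1: (1, 1), 3: (1, 0)}

def qam16_label(i_level, q_level):
    """Return the 4-bit label (I bits followed by Q bits) of a 16-QAM point."""
    return gray2[i_level] + gray2[q_level]

levels = [-3, -1, 1, 3]
constellation = {(i, q): qam16_label(i, q) for i, q in itertools.product(levels, levels)}

# Any two points one grid step apart (horizontally or vertically) differ in one label bit.
for (i, q), bits in constellation.items():
    for di, dq in [(2, 0), (0, 2)]:
        neighbour = (i + di, q + dq)
        if neighbour in constellation:
            assert sum(a != b for a, b in zip(bits, constellation[neighbour])) == 1
```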
Excess-Gray-code If a subsection of a specific code value is extracted from that value, for example the last 3 bits of a 4-bit Gray code, the resulting code will be an "excess Gray code". This code shows the property of counting backwards in those extracted bits if the original value is increased further. The reason is that Gray-encoded values do not exhibit the overflow behaviour, known from classic binary encoding, when increasing past the "highest" value. Example: the highest 3-bit Gray code value, 7, is encoded as (0)100. Adding 1 results in the number 8, encoded in Gray as 1100. The last 3 bits do not overflow but count backwards if the original 4-bit code is increased further. When working with sensors that output multiple Gray-encoded values in a serial fashion, one should therefore pay attention to whether the sensor produces those multiple values encoded in one single Gray code or as separate ones, as otherwise the values might appear to be counting backwards when an "overflow" is expected. Gray isometry The bijective mapping { 0 ↔ 00, 1 ↔ 01, 2 ↔ 11, 3 ↔ 10 } establishes an isometry between the metric space over the finite field Z2^2 with the metric given by the Hamming distance and the metric space over the finite ring Z4 (the usual modular arithmetic) with the metric given by the Lee distance. The mapping is suitably extended to an isometry of the Hamming spaces Z2^(2n) and Z4^n. Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in Z2^(2n) of ring-linear codes from Z4^n.
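A quick way to see the isometry is to verify exhaustively that the Lee distance between vectors over Z4 equals the Hamming distance between their Gray-map images. The sketch below does this for all pairs of length-2 vectors; the helper names are illustrative.

```python
from itertools import product

# Standard Gray map from Z4 symbols to bit pairs, matching the mapping given above.
GRAY_MAP = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def lee_distance(a, b):
    """Lee distance between two vectors over Z4."""
    return sum(min((x - y) % 4, (y - x) % 4) for x, y in zip(a, b))

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def gray_image(v):
    """Concatenate the 2-bit Gray images of each Z4 symbol."""
    return tuple(bit for symbol in v for bit in GRAY_MAP[symbol])

# Exhaustive check over all pairs of length-2 vectors in Z4^2.
for u, v in product(product(range(4), repeat=2), repeat=2):
    assert lee_distance(u, v) == hamming_distance(gray_image(u), gray_image(v))
```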
Related codes There are a number of binary codes similar to Gray codes, including: Datex codes aka Giannini codes (1954), as described by Carl P. Spaulding, use a variant of O'Brien code II. Codes used by Varec (ca. 1954), use a variant of O'Brien code I as well as base-12 and base-16 Gray code variants. Lucal code (1959) aka modified reflected binary code (MRB) Gillham code (1961/1962), uses a variant of Datex code and O'Brien code II. Leslie and Russell code (1964) Royal Radar Establishment code Hoklas code (1988) The following binary-coded decimal (BCD) codes are Gray code variants as well: Petherick code (1953), also known as Royal Aircraft Establishment (RAE) code. O'Brien codes I and II (1955) (An O'Brien type-I code was already described by Frederic A. Foss of IBM and used by Varec in 1954. Later, it was also known as Watts code or Watts reflected decimal (WRD) code and is sometimes ambiguously referred to as reflected binary modified Gray code. An O'Brien type-II code was already used by Datex in 1954.) Excess-3 Gray code (1956) (aka Gray excess-3 code, Gray 3-excess code, reflex excess-3 code, excess Gray code, Gray excess code, 10-excess-3 Gray code or Gray–Stibitz code), described by Frank P. Turvey Jr. of ITT. Tompkins codes I and II (1956) Glixon code (1957), sometimes ambiguously also called modified Gray code See also Linear-feedback shift register De Bruijn sequence Steinhaus–Johnson–Trotter algorithm – an algorithm that generates Gray codes for the factorial number system Minimum distance code Prouhet–Thue–Morse sequence – related to inverse Gray code Ryser formula Hilbert curve Notes References Further reading External links "Gray Code" demonstration by Michael Schreiber, Wolfram Demonstrations Project (with Mathematica implementation). 2007. NIST Dictionary of Algorithms and Data Structures: Gray code. Hitch Hiker's Guide to Evolutionary Computation, Q21: What are Gray codes, and why are they used?, including C code to convert between binary and BRGC. Dragos A. Harabor uses Gray codes in a 3D digitizer. Single-track gray codes, binary chain codes (Lancaster 1994), and linear-feedback shift registers are all useful in finding one's absolute position on a single-track rotary encoder (or other position sensor). AMS Column: Gray codes Optical Encoder Wheel Generator ProtoTalk.net – Understanding Quadrature Encoding – Covers quadrature encoding in more detail with a focus on robotic applications Data transmission Numeral systems Binary arithmetic Non-standard positional numeral systems Articles with example C code
Gray code
[ "Mathematics" ]
12,155
[ "Mathematical objects", "Numeral systems", "Arithmetic", "Binary arithmetic", "Numbers" ]
50,571
https://en.wikipedia.org/wiki/Transportation%20engineering
Transportation engineering or transport engineering is the application of technology and scientific principles to the planning, functional design, operation and management of facilities for any mode of transportation in order to provide for the safe, efficient, rapid, comfortable, convenient, economical, and environmentally compatible movement of people and goods. Theory The planning aspects of transportation engineering relate to elements of urban planning, and involve technical forecasting decisions and political factors. Technical forecasting of passenger travel usually involves an urban transportation planning model, requiring the estimation of trip generation, trip distribution, mode choice, and route assignment. More sophisticated forecasting can include other aspects of traveler decisions, including auto ownership, trip chaining (the decision to link individual trips together in a tour) and the choice of residential or business location (known as land use forecasting). Passenger trips are the focus of transportation engineering because they often represent the peak of demand on any transportation system. A review of descriptions of the scope of various committees indicates that while facility planning and design continue to be the core of the transportation engineering field, such areas as operations planning, logistics, network analysis, financing, and policy analysis are also important, particularly to those working in highway and urban transportation. The National Council of Examiners for Engineering and Surveying (NCEES) lists online the safety protocols, geometric design requirements, and signal timing. Transportation engineering primarily involves planning, design, construction, maintenance, and operation of transportation facilities. The facilities support air, highway, railroad, pipeline, water, and even space transportation. The design aspects of transportation engineering include the sizing of transportation facilities (how many lanes or how much capacity the facility has), determining the materials and thickness used in pavement design, and designing the geometry (vertical and horizontal alignment) of the roadway (or track). Before any planning occurs, an engineer must take what is known as an inventory of the area or, if it is appropriate, of the previous system in place. This inventory or database must include information on population, land use, economic activity, transportation facilities and services, travel patterns and volumes, laws and ordinances, regional financial resources, and community values and expectations. These inventories help the engineer create business models to complete accurate forecasts of the future conditions of the system. Operations and management involve traffic engineering, so that vehicles move smoothly on the road or track. Older techniques include signs, signals, markings, and tolling. Newer technologies involve intelligent transportation systems, including advanced traveler information systems (such as variable message signs), advanced traffic control systems (such as ramp meters), and vehicle infrastructure integration. Human factors are an aspect of transportation engineering, particularly concerning driver-vehicle interface and the user interface of road signs, signals, and markings.
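The trip-distribution step of the urban transportation planning model mentioned earlier in this section is often illustrated with a gravity model. The sketch below is a minimal example of a singly constrained gravity model; the zone names, productions, attractions, travel costs and decay parameter are all made-up illustrative inputs, not values from any real study.

```python
import math

# Example inputs: trips produced and attracted per zone, and inter-zonal travel costs.
productions = {"A": 1000, "B": 500}
attractions = {"A": 600, "B": 900}
cost = {("A", "A"): 5, ("A", "B"): 10, ("B", "A"): 10, ("B", "B"): 4}

def impedance(c, beta=0.1):
    """Deterrence function: attractiveness decays exponentially with travel cost."""
    return math.exp(-beta * c)

def gravity_trips(origin):
    """Distribute the trips produced by `origin` among destinations (singly constrained)."""
    weights = {dest: attractions[dest] * impedance(cost[(origin, dest)])
               for dest in attractions}
    total = sum(weights.values())
    return {dest: productions[origin] * w / total for dest, w in weights.items()}

for zone in productions:
    print(zone, gravity_trips(zone))
```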
Specializations Highway engineering Engineers in this specialization: Handle the planning, design, construction, and operation of highways, roads, and other vehicular facilities as well as their related bicycle and pedestrian realms Estimate the transportation needs of the public and then secure the funding for projects Analyze locations with high traffic volumes and high collision rates for safety and capacity Use engineering principles to improve the transportation system Utilize the three design controls, which are the drivers, the vehicles, and the roadways themselves Railroad engineering Railway engineers handle the design, construction, and operation of railroads and mass transit systems that use a fixed guideway (such as light rail or monorails). Typical tasks include: Determine horizontal and vertical alignment of the railways Determine station location Design functional segments of stations like lines, platforms, etc. Estimate construction cost Railway engineers work to build a cleaner and safer transportation network by reinvesting in and revitalizing the rail system to meet future demands. In the United States, railway engineers work with elected officials in Washington, D.C., on rail transportation issues to make sure that the rail system meets the country's transportation needs. Railroad engineers can also move into the specialized field of train dispatching, which focuses on train movement control. Port and harbor engineering Port and harbor engineers handle the design, construction, and operation of ports, harbors, canals, and other maritime facilities. Airport engineering Airport engineers design and construct airports. Airport engineers must account for the impacts and demands of aircraft in their design of airport facilities. These engineers must use the analysis of predominant wind direction to determine runway orientation, determine the size of runway borders and safety areas, set the different wing-tip-to-wing-tip clearances for all gates, and designate the clear zones for the entire port. The Civil Engineering Department, consisting of civil and structural engineers, undertakes the structural design of passenger and cargo terminals, aircraft hangars (for parking commercial, private and government aircraft), runways and other pavements, and technical buildings for the installation of airport ground aids, etc., for the airport's in-house requirements and consultancy projects. They are also responsible for the master plan for airports they are authorized to work with. See also Bicycle transportation engineering Highway engineering List of BIM software Pavement engineering Traffic engineering References External links Institute of Transportation Engineers, a professional society for transportation engineers ITS America ASCE Engineering disciplines Civil engineering
Transportation engineering
[ "Engineering" ]
1,023
[ "Industrial engineering", "Construction", "Transportation engineering", "Civil engineering", "nan" ]
50,578
https://en.wikipedia.org/wiki/Queueing%20theory
Queueing theory is the mathematical study of waiting lines, or queues. A queueing model is constructed so that queue lengths and waiting time can be predicted. Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service. Queueing theory has its origins in research by Agner Krarup Erlang, who created models to describe the system of incoming calls at the Copenhagen Telephone Exchange Company. These ideas were seminal to the field of teletraffic engineering and have since seen applications in telecommunications, traffic engineering, computing, project management, and particularly industrial engineering, where they are applied in the design of factories, shops, offices, and hospitals. Spelling The spelling "queueing" over "queuing" is typically encountered in the academic research field. In fact, one of the flagship journals of the field is Queueing Systems. Description Queueing theory is one of the major areas of study in the discipline of management science. Through management science, businesses are able to solve a variety of problems using different scientific and mathematical approaches. Queueing analysis is the probabilistic analysis of waiting lines, and thus the results, also referred to as the operating characteristics, are probabilistic rather than deterministic. The probability that n customers are in the queueing system, the average number of customers in the queueing system, the average number of customers in the waiting line, the average time spent by a customer in the total queuing system, the average time spent by a customer in the waiting line, and finally the probability that the server is busy or idle are all of the different operating characteristics that these queueing models compute. The overall goal of queueing analysis is to compute these characteristics for the current system and then test several alternatives that could lead to improvement. Computing the operating characteristics for the current system and comparing the values to the characteristics of the alternative systems allows managers to see the pros and cons of each potential option. These systems help in the final decision making process by showing ways to increase savings, reduce waiting time, improve efficiency, etc. The main queueing models that can be used are the single-server waiting line system and the multiple-server waiting line system, which are discussed further below. These models can be further differentiated depending on whether service times are constant or undefined, the queue length is finite, the calling population is finite, etc. Single queueing nodes A queue or queueing node can be thought of as nearly a black box. Jobs (also called customers or requests, depending on the field) arrive to the queue, possibly wait some time, take some time being processed, and then depart from the queue. However, the queueing node is not quite a pure black box since some information is needed about the inside of the queueing node. The queue has one or more servers which can each be paired with an arriving job. When the job is completed and departs, that server will again be free to be paired with another arriving job. An analogy often used is that of the cashier at a supermarket. Customers arrive, are processed by the cashier, and depart. Each cashier processes one customer at a time, and hence this is a queueing node with only one server. 
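The cashier analogy can be made concrete with a tiny Monte Carlo sketch of a single-server queueing node with Poisson arrivals and exponential service times. The rates, the customer count and the function name are arbitrary choices for illustration.

```python
import random

def simulate_single_server(arrival_rate=0.8, service_rate=1.0, n_customers=100_000, seed=1):
    """Estimate the mean waiting time at a single-server FIFO node."""
    random.seed(seed)
    clock = 0.0            # arrival time of the current customer
    server_free_at = 0.0   # time at which the server finishes its current job
    total_wait = 0.0
    for _ in range(n_customers):
        clock += random.expovariate(arrival_rate)    # next arrival
        start = max(clock, server_free_at)           # wait if the server is still busy
        total_wait += start - clock
        server_free_at = start + random.expovariate(service_rate)
    return total_wait / n_customers

print(simulate_single_server())   # roughly 4 time units for these example rates
```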
A setting where a customer will leave immediately if the cashier is busy when the customer arrives is referred to as a queue with no buffer (or no waiting area). A setting with a waiting zone for up to n customers is called a queue with a buffer of size n. Birth-death process The behaviour of a single queue (also called a queueing node) can be described by a birth–death process, which describes the arrivals and departures from the queue, along with the number of jobs currently in the system. If k denotes the number of jobs in the system (either being serviced or waiting if the queue has a buffer of waiting jobs), then an arrival increases k by 1 and a departure decreases k by 1. The system transitions between values of k by "births" and "deaths", which occur at the arrival rates λ_i and the departure rates μ_i for each job i. For a queue, these rates are generally considered not to vary with the number of jobs in the queue, so a single average rate of arrivals/departures per unit time is assumed. Under this assumption, this process has an arrival rate of λ = λ_1 = λ_2 = ... = λ_k and a departure rate of μ = μ_1 = μ_2 = ... = μ_k. Balance equations The steady state equations for the birth-and-death process, known as the balance equations, are as follows (here P_n denotes the steady state probability to be in state n): μ_1 P_1 = λ_0 P_0; (λ_1 + μ_1) P_1 = λ_0 P_0 + μ_2 P_2; and, in general, (λ_n + μ_n) P_n = λ_{n−1} P_{n−1} + μ_{n+1} P_{n+1} for n ≥ 1. The first two equations imply P_1 = (λ_0/μ_1) P_0 and P_2 = (λ_1/μ_2) P_1 = (λ_0 λ_1)/(μ_1 μ_2) P_0. By mathematical induction, P_n = P_0 ∏_{i=0}^{n−1} (λ_i/μ_{i+1}). The condition Σ_{n=0}^{∞} P_n = 1 leads to P_0 = 1 / (1 + Σ_{n=1}^{∞} ∏_{i=0}^{n−1} λ_i/μ_{i+1}), which, together with the equation for P_n, fully describes the required steady state probabilities. Kendall's notation Single queueing nodes are usually described using Kendall's notation in the form A/S/c where A describes the distribution of durations between each arrival to the queue, S the distribution of service times for jobs, and c the number of servers at the node. For an example of the notation, the M/M/1 queue is a simple model where a single server serves jobs that arrive according to a Poisson process (where inter-arrival durations are exponentially distributed) and have exponentially distributed service times (the M denotes a Markov process). In an M/G/1 queue, the G stands for "general" and indicates an arbitrary probability distribution for service times. Example analysis of an M/M/1 queue Consider a queue with one server and the following characteristics: λ: the arrival rate (the reciprocal of the expected time between each customer arriving, e.g. 10 customers per second); μ: the reciprocal of the mean service time (the expected number of consecutive service completions per the same unit time, e.g. per 30 seconds); n: the parameter characterizing the number of customers in the system; P_n: the probability of there being n customers in the system in steady state. Further, let E_n represent the number of times the system enters state n, and L_n represent the number of times the system leaves state n. Then |E_n − L_n| ≤ 1 for all n. That is, the number of times the system leaves a state differs by at most 1 from the number of times it enters that state, since it will either return into that state at some time in the future (E_n = L_n) or not (|E_n − L_n| = 1). When the system arrives at a steady state, the arrival rate should be equal to the departure rate. Thus the balance equations μ P_1 = λ P_0, λ P_0 + μ P_2 = (λ + μ) P_1 and, in general, λ P_{n−1} + μ P_{n+1} = (λ + μ) P_n imply P_n = (λ/μ)^n P_0 for n = 0, 1, 2, .... The fact that P_0 + P_1 + P_2 + ... = 1 leads to the geometric distribution formula P_n = (1 − ρ) ρ^n where ρ = λ/μ < 1.
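As a sanity check on the geometric distribution just derived, the helper below evaluates the usual M/M/1 steady-state quantities (utilisation, mean numbers in system and in queue, and mean times via Little's law). The function name and the example rates are arbitrary.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Standard M/M/1 steady-state quantities, valid only when arrival_rate < service_rate."""
    rho = arrival_rate / service_rate          # server utilisation
    assert rho < 1, "queue is unstable unless arrival rate < service rate"
    L = rho / (1 - rho)                        # mean number in system
    W = L / arrival_rate                       # mean time in system (Little's law)
    Lq = rho ** 2 / (1 - rho)                  # mean number waiting in the queue
    Wq = Lq / arrival_rate                     # mean waiting time in the queue
    p = lambda n: (1 - rho) * rho ** n         # P(n customers in system)
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq, "P0": p(0)}

print(mm1_metrics(arrival_rate=10, service_rate=12))
```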
Simple two-equation queue A common basic queueing system is attributed to Erlang and is a modification of Little's Law. Given an arrival rate λ, a dropout rate σ, and a departure rate μ, the length of the queue L is defined as L = (λ − μ)/σ. Assuming an exponential distribution for the rates, the waiting time W can be defined as the proportion of arrivals that are served. This is equal to the exponential survival rate of those who do not drop out over the waiting period, giving μ/λ = e^(−σW). The second equation is commonly rewritten as W = (1/σ) ln(λ/μ). The two-stage one-box model is common in epidemiology. History In 1909, Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on what would now be called queueing theory. He modeled the number of telephone calls arriving at an exchange by a Poisson process and solved the M/D/1 queue in 1917 and the M/D/k queueing model in 1920. In Kendall's notation: M stands for "Markov" or "memoryless", and means arrivals occur according to a Poisson process D stands for "deterministic", and means jobs arriving at the queue require a fixed amount of service k describes the number of servers at the queueing node (k = 1, 2, 3, ...) If the node has more jobs than servers, then jobs will queue and wait for service. The M/G/1 queue was solved by Felix Pollaczek in 1930, a solution later recast in probabilistic terms by Aleksandr Khinchin and now known as the Pollaczek–Khinchine formula. After the 1940s, queueing theory became an area of research interest to mathematicians. In 1953, David George Kendall solved the GI/M/k queue and introduced the modern notation for queues, now known as Kendall's notation. In 1957, Pollaczek studied the GI/G/1 using an integral equation. John Kingman gave a formula for the mean waiting time in a G/G/1 queue, now known as Kingman's formula. Leonard Kleinrock worked on the application of queueing theory to message switching in the early 1960s and packet switching in the early 1970s. His initial contribution to this field was his doctoral thesis at the Massachusetts Institute of Technology in 1962, published in book form in 1964. His theoretical work published in the early 1970s underpinned the use of packet switching in the ARPANET, a forerunner to the Internet. The matrix geometric method and matrix analytic methods have allowed queues with phase-type distributed inter-arrival and service time distributions to be considered. Systems with coupled orbits are an important part of queueing theory in applications to wireless networks and signal processing. Modern-day applications of queueing theory concern, among other things, product development, where (material) products have a spatiotemporal existence in the sense that they have a certain volume and a certain duration. Problems such as the derivation of performance metrics for the M/G/k queue remain open. Service disciplines Various scheduling policies can be used at queueing nodes: First in, first out Also called first-come, first-served (FCFS), this principle states that customers are served one at a time and that the customer that has been waiting the longest is served first. Last in, first out This principle also serves customers one at a time, but the customer with the shortest waiting time will be served first. Also known as a stack. Processor sharing Service capacity is shared equally between customers. Priority Customers with high priority are served first. Priority queues can be of two types: non-preemptive (where a job in service cannot be interrupted) and preemptive (where a job in service can be interrupted by a higher-priority job). No work is lost in either model. Shortest job first The next job to be served is the one with the smallest size.
Preemptive shortest job first The next job to be served is the one with the smallest original size. Shortest remaining processing time The next job to serve is the one with the smallest remaining processing requirement. Service facility Single server: customers line up and there is only one server Several parallel servers (single queue): customers line up and there are several servers Several parallel servers (several queues): there are many counters and customers can decide for which to queue Unreliable server Server failures occur according to a stochastic (random) process (usually Poisson) and are followed by setup periods during which the server is unavailable. The interrupted customer remains in the service area until server is fixed. Customer waiting behavior Balking: customers decide not to join the queue if it is too long Jockeying: customers switch between queues if they think they will get served faster by doing so Reneging: customers leave the queue if they have waited too long for service Arriving customers not served (either due to the queue having no buffer, or due to balking or reneging by the customer) are also known as dropouts. The average rate of dropouts is a significant parameter describing a queue. Queueing networks Queue networks are systems in which multiple queues are connected by customer routing. When a customer is serviced at one node, it can join another node and queue for service, or leave the network. For networks of m nodes, the state of the system can be described by an m–dimensional vector (x1, x2, ..., xm) where xi represents the number of customers at each node. The simplest non-trivial networks of queues are called tandem queues. The first significant results in this area were Jackson networks, for which an efficient product-form stationary distribution exists and the mean value analysis (which allows average metrics such as throughput and sojourn times) can be computed. If the total number of customers in the network remains constant, the network is called a closed network and has been shown to also have a product–form stationary distribution by the Gordon–Newell theorem. This result was extended to the BCMP network, where a network with very general service time, regimes, and customer routing is shown to also exhibit a product–form stationary distribution. The normalizing constant can be calculated with the Buzen's algorithm, proposed in 1973. Networks of customers have also been investigated, such as Kelly networks, where customers of different classes experience different priority levels at different service nodes. Another type of network are G-networks, first proposed by Erol Gelenbe in 1993: these networks do not assume exponential time distributions like the classic Jackson network. Routing algorithms In discrete-time networks where there is a constraint on which service nodes can be active at any time, the max-weight scheduling algorithm chooses a service policy to give optimal throughput in the case that each job visits only a single-person service node. In the more general case where jobs can visit more than one node, backpressure routing gives optimal throughput. A network scheduler must choose a queueing algorithm, which affects the characteristics of the larger network. Mean-field limits Mean-field models consider the limiting behaviour of the empirical measure (proportion of queues in different states) as the number of queues m approaches infinity. 
The impact of other queues on any given queue in the network is approximated by a differential equation. The deterministic model converges to the same stationary distribution as the original model. Heavy traffic/diffusion approximations In a system with high occupancy rates (utilisation near 1), a heavy traffic approximation can be used to approximate the queueing length process by a reflected Brownian motion, Ornstein–Uhlenbeck process, or more general diffusion process. The number of dimensions of the Brownian process is equal to the number of queueing nodes, with the diffusion restricted to the non-negative orthant. Fluid limits Fluid models are continuous deterministic analogs of queueing networks obtained by taking the limit when the process is scaled in time and space, allowing heterogeneous objects. This scaled trajectory converges to a deterministic equation which allows the stability of the system to be proven. It is known that a queueing network can be stable but have an unstable fluid limit. Queueing Applications Queueing theory finds widespread application in computer science and information technology. In networking, for instance, queues are integral to routers and switches, where packets queue up for transmission. By applying queueing theory principles, designers can optimize these systems, ensuring responsive performance and efficient resource utilization. Beyond the technological realm, queueing theory is relevant to everyday experiences. Whether waiting in line at a supermarket or for public transportation, understanding the principles of queueing theory provides valuable insights into optimizing these systems for enhanced user satisfaction. At some point, everyone will be involved in an aspect of queuing. What some may view to be an inconvenience could possibly be the most effective method. Queueing theory, a discipline rooted in applied mathematics and computer science, is a field dedicated to the study and analysis of queues, or waiting lines, and their implications across a diverse range of applications. This theoretical framework has proven instrumental in understanding and optimizing the efficiency of systems characterized by the presence of queues. The study of queues is essential in contexts such as traffic systems, computer networks, telecommunications, and service operations. Queueing theory delves into various foundational concepts, with the arrival process and service process being central. The arrival process describes the manner in which entities join the queue over time, often modeled using stochastic processes like Poisson processes. The efficiency of queueing systems is gauged through key performance metrics. These include the average queue length, average wait time, and system throughput. These metrics provide insights into the system's functionality, guiding decisions aimed at enhancing performance and reducing wait times. See also Ehrenfest model Erlang unit Line management Network simulation Project production management Queue area Queueing delay Queue management system Queuing Rule of Thumb Random early detection Renewal theory Throughput Scheduling (computing) Traffic jam Traffic generation model Flow network References Further reading Online chap.15, pp. 380–412 Leonard Kleinrock, Information Flow in Large Communication Nets, (MIT, Cambridge, May 31, 1961) Proposal for a Ph.D. Thesis Leonard Kleinrock. Information Flow in Large Communication Nets (RLE Quarterly Progress Report, July 1961) Leonard Kleinrock. 
Communication Nets: Stochastic Message Flow and Delay (McGraw-Hill, New York, 1964) External links Teknomo's Queueing theory tutorial and calculators Virtamo's Queueing Theory Course Myron Hlynka's Queueing Theory Page LINE: a general-purpose engine to solve queueing models Production planning Customer experience Operations research Formal sciences Rationing Network performance Markov models
Queueing theory
[ "Mathematics" ]
3,551
[ "Applied mathematics", "Operations research" ]
50,582
https://en.wikipedia.org/wiki/Zircon
Zircon is a mineral belonging to the group of nesosilicates and is a source of the metal zirconium. Its chemical name is zirconium(IV) silicate, and its corresponding chemical formula is ZrSiO4. An empirical formula showing some of the range of substitution in zircon is (Zr1–y, REEy)(SiO4)1–x(OH)4x–y. Zircon precipitates from silicate melts and has relatively high concentrations of high field strength incompatible elements. For example, hafnium is almost always present in quantities ranging from 1 to 4%. Zircon crystallizes in the tetragonal crystal system. The natural color of zircon varies between colorless, yellow-golden, red, brown, blue, and green. The name derives from the Persian zargun, meaning "gold-hued". This word was later altered into "jargoon", a term applied to light-colored zircons. The English word "zircon" is derived from Zirkon, which is the German adaptation of this word. Yellow, orange, and red zircon is also known as "hyacinth", from the flower hyacinthus, whose name is of Ancient Greek origin. Properties Zircon is common in the crust of Earth. It occurs as a common accessory mineral in igneous rocks (as primary crystallization products), in metamorphic rocks and as detrital grains in sedimentary rocks. Large zircon crystals are rare. Their average size in granite rocks is a fraction of a millimetre, but they can also grow to sizes of several centimetres, especially in mafic pegmatites and carbonatites. Zircon is fairly hard (with a Mohs hardness of 7.5) and chemically stable, and so is highly resistant to weathering. It also is resistant to heat, so that detrital zircon grains are sometimes preserved in igneous rocks formed from melted sediments. Its resistance to weathering, together with its relatively high specific gravity (4.68), makes it an important component of the heavy mineral fraction of sandstones. Because of their uranium and thorium content, some zircons undergo metamictization. Connected to internal radiation damage, these processes partially disrupt the crystal structure and partly explain the highly variable properties of zircon. As zircon becomes more and more modified by internal radiation damage, the density decreases, the crystal structure is compromised, and the color changes. Zircon occurs in many colors, including reddish brown, yellow, green, blue, gray, and colorless. The color of zircons can sometimes be changed by heat treatment. Common brown zircons can be transformed into colorless and blue zircons by heating to sufficiently high temperatures. In geological settings, the development of pink, red, and purple zircon occurs after hundreds of millions of years, if the crystal has sufficient trace elements to produce color centers. Color in this red or pink series is annealed in geological conditions above a certain threshold temperature. Structurally, zircon consists of parallel chains of alternating silica tetrahedra (silicon ions in fourfold coordination with oxygen ions) and zirconium ions, with the large zirconium ions in eightfold coordination with oxygen ions. Applications Zircon is mainly consumed as an opacifier, and has been known to be used in the decorative ceramics industry. It is also the principal precursor not only to metallic zirconium, although this application is small, but also to all compounds of zirconium including zirconium dioxide (ZrO2), an important refractory oxide with a very high melting point.
Other applications include use in refractories and foundry casting and a growing array of specialty applications as zirconia and zirconium chemicals, including in nuclear fuel rods, catalytic fuel converters and in water and air purification systems. Zircon is one of the key minerals used by geologists for geochronology. Zircon is a part of the ZTR index to classify highly-weathered sediments. Gemstone Transparent zircon is a well-known form of semi-precious gemstone, favored for its high specific gravity (between 4.2 and 4.86) and adamantine luster. Because of its high refractive index (1.92) it has sometimes been used as a substitute for diamond, though it does not display quite the same play of color as a diamond. Zircon is one of the heaviest types of gemstone. Its Mohs hardness is between that of quartz and topaz, at 7.5 on the 10 point scale, though below that of the similar manmade stone cubic zirconia (8-8.5). Zircons may sometimes lose their inherent color after long exposure to bright sunlight, which is unusual in a gemstone. It is immune to acid attack except by sulfuric acid and then only when ground into a fine powder. Most gem-grade zircons show a high degree of birefringence which, on stones cut with a table and pavilion cuts (i.e., nearly all cut stones), can be seen as the apparent doubling-up of the latter when viewed through the former, and this characteristic can be used to distinguish them from diamonds and cubic zirconias (CZ) as well as soda-lime glass, none of which show this characteristic. However, some zircons from Sri Lanka display only weak or no birefringence at all, and some other Sri Lanka stones may show clear birefringence in one place and little or none in another part of the same cut stone. Other gemstones also display birefringence, so while the presence of this characteristic may help distinguish a given zircon from a diamond or a CZ, it will not help distinguish it from, for example, a topaz gemstone. The high specific gravity of zircon, however, can usually separate it from any other gem and is simple to test. Also, birefringence depends on the cut of the stone in relation to its optical axis. If a zircon is cut with this axis perpendicular to its table, birefringence may be reduced to undetectable levels unless viewed with a jeweler's loupe or other magnifying optics. The highest grade zircons are cut to minimize birefringence. The value of a zircon gem depends largely on its color, clarity, and size. Prior to World War II, blue zircons (the most valuable color) were available from many gemstone suppliers in sizes between 15 and 25 carats; since then, stones even as large as 10 carats have become very scarce, especially in the most desirable color varieties. Synthetic zircons have been created in laboratories. They are occasionally used in jewellery such as earrings. Zircons are sometimes imitated by spinel and synthetic sapphire, but are not difficult to distinguish from them with simple tools. Zircon from Ratanakiri province in Cambodia is heat treated to produce blue zircon gemstones, sometimes referred to by the trade name cambolite. Occurrence Zircon is a common accessory to trace mineral constituent of all kinds of igneous rocks, but particularly granite and felsic igneous rocks. Due to its hardness, durability and chemical inertness, zircon persists in sedimentary deposits and is a common constituent of most sands. 
Zircon can occasionally be found as a trace mineral in ultrapotassic igneous rocks such as kimberlites, carbonatites, and lamprophyre, owing to the unusual magma genesis of these rocks. Zircon forms economic concentrations within heavy mineral sands ore deposits, within certain pegmatites, and within some rare alkaline volcanic rocks, for example the Toongi Trachyte, Dubbo, New South Wales Australia in association with the zirconium-hafnium minerals eudialyte and armstrongite. Australia leads the world in zircon mining, producing 37% of the world total and accounting for 40% of world EDR (economic demonstrated resources) for the mineral. South Africa is Africa's main producer, with 30% of world production, second after Australia. Radiometric dating Zircon has played an important role during the evolution of radiometric dating. Zircons contain trace amounts of uranium and thorium (from 10 ppm up to 1 wt%) and can be dated using several modern analytical techniques. Because zircons can survive geologic processes like erosion, transport, even high-grade metamorphism, they contain a rich and varied record of geological processes. Currently, zircons are typically dated by uranium-lead (U-Pb), fission-track, and U+Th/He techniques. Imaging the cathodoluminescence emission from fast electrons can be used as a prescreening tool for high-resolution secondary-ion-mass spectrometry (SIMS) to image the zonation pattern and identify regions of interest for isotope analysis. This is done using an integrated cathodoluminescence and scanning electron microscope. Zircons in sedimentary rock can identify the sediment source. Zircons from Jack Hills in the Narryer Gneiss Terrane, Yilgarn Craton, Western Australia, have yielded U-Pb ages up to 4.404 billion years, interpreted to be the age of crystallization, making them the oldest minerals so far dated on Earth. In addition, the oxygen isotopic compositions of some of these zircons have been interpreted to indicate that more than 4.3 billion years ago there was already liquid water on the surface of the Earth. This interpretation is supported by additional trace element data, but is also the subject of debate. In 2015, "remains of biotic life" were found in 4.1-billion-year-old rocks in the Jack Hills of Western Australia. According to one of the researchers, "If life arose relatively quickly on Earth ... then it could be common in the universe." Similar minerals Hafnon (), xenotime (), béhierite, schiavinatoite (), thorite (), and coffinite () all share the same crystal structure (IVX IVY O4, IIIX VY O4 in the case of xenotime) as zircon. Gallery See also Baddeleyite, Cathodoluminescence microscope Cool Early Earth Earliest known life forms Hadean zircon Heavy mineral sands ore deposits History of Earth Ilmenite Cerium anomaly References Further reading External links Geochemistry of old zircons. . Mineral galleries (archived 7 April 2005) GIA Gem Encyclopedia – Zircon Online articles and information on zircon history, lore, and research Zircon Industry Association Zirconium minerals Nesosilicates Refractory materials Radioactive gemstones Gemstones Tetragonal minerals Minerals in space group 141 Luminescent minerals
Zircon
[ "Physics", "Chemistry" ]
2,315
[ "Luminescence", "Refractory materials", "Luminescent minerals", "Materials", "Gemstones", "Matter" ]
50,589
https://en.wikipedia.org/wiki/Spanking
Spanking is a form of corporal punishment involving the act of striking, with either the palm of the hand or an implement, the buttocks of a person to cause physical pain. The term spanking broadly encompasses the use of either the hand or implement, though the use of certain implements can also be characterized as other, more specific types of corporal punishment such as belting, caning, paddling and slippering. Some parents spank children in response to undesired behavior. Adults more commonly spank boys than girls both at home and in school. Some countries have outlawed the spanking of children in every setting, including homes, schools, and penal institutions, while others permit it when done by a parent or guardian. Terminology In American English, dictionaries define spanking as being administered with either the open hand or an implement such as a paddle. Thus, the standard form of corporal punishment in US schools (use of a paddle) is often referred to as a spanking. In North America, the word "spanking" has often been used as a synonym for an official paddling in school, and sometimes even as a euphemism for the formal corporal punishment of adults in an institution. In British English, most dictionaries define "spanking" as being given only with the open hand. In the United Kingdom, Ireland, Australia, and New Zealand, the word "smacking" is generally used in preference to "spanking" when describing striking with an open hand, rather than with an implement. Whereas a spanking is invariably administered to the bottom, a "smacking" is less specific and may refer to slapping the child's hands, arms, or legs as well as its bottom. In the home Parents commonly spank their children as a form of corporal punishment in the United States; however, support for this practice appears to be declining amongst U.S. parents. Spanking is typically done with one or more slaps on the child's buttocks with a bare hand, although, not uncommonly, various objects are used to spank children, such as a hairbrush or wooden spoon. Historically, adults have spanked boys more than girls. In the United States, adults commonly spank toddlers the most. The main reasons parents give for spanking their children are to make children more compliant and to promote better behavior, especially to put a stop to their children's apparent aggressive behaviors. However, research has shown that spanking (or any other form of corporal punishment) is associated with the opposite effect. When adults physically punish children, the children tend to obey parents less with time and develop more aggressive behaviors, including toward other children. This increase in aggressive behavior appears to reflect the child's perception that hitting is the way to deal with anger and frustration. There are also many adverse physical, mental, and emotional effects correlated with spanking and other forms of corporal punishment, including various physical injuries, increased anxiety, depression, and antisocial behavior. Adults who were spanked during their childhood are more likely to abuse their children and spouse. The American Academy of Pediatrics (AAP), Royal College of Paediatrics and Child Health (RCPCH), and the Royal Australasian College of Physicians (RACP) all recommend that no child should be spanked and instead favor the use of effective, healthy forms of discipline. 
Additionally, the AAP recommends that primary care providers (e.g., pediatricians and family medicine physicians) begin to discuss parents' discipline methods no later than nine months of age and consider initiating such discussions by age 3–4 months. By eight months of age, 5% of parents report spanking and 5% report starting to spank by age three months. The AAP also recommends that pediatricians discuss effective discipline strategies and counsel parents about the ineffectiveness of spanking and the risks of harmful effects associated with the practice to minimize harm to children and guide parents. Although parents and other advocates of spanking often claim that spanking is necessary to promote child discipline, studies have shown that parents tend to apply physical punishment inconsistently and tend to spank more often when they are angry or under stress. The use of corporal punishment by parents increases the likelihood that children will suffer physical abuse, and most documented cases of physical abuse in Canada and the United States begin as disciplinary spankings. If a child is frequently spanked, this form of corporal punishment tends to become less effective at modifying behavior over time (also known as extinction). In response to the decreased effectiveness of spanking, some parents increase the frequency or severity of spanking or use an object. Alternatives to spanking Parents may spank less – or not at all – if they have learned effective discipline techniques since many view spanking as a last resort to discipline their children. There are many alternatives to spanking and other forms of corporal punishment: Time-in, increasing praise, and special time to promote desired behaviors Time outs to take a break from escalating misbehavior Positive reinforcement of rewarding desirable behavior with a star, sticker, or treat Implementing non-physical punishment (psychology) in which an unpleasant consequence follows misbehavior, such as taking away a privilege Ignoring low-level misbehaviors and prioritizing attention on more significant forms of misbehavior Avoiding the opportunity for misbehavior and thus the need for corrective discipline. In schools Corporal punishment, usually delivered with an implement (such as a paddle or cane) rather than with the open hand, used to be a common form of school discipline in many countries, but it is now banned in most of the Western World. Corporal punishment, such as caning, remains a common form of discipline in schools in several Asian and African countries, even in countries in which this practice has been deemed illegal such as India and South Africa. In these cultures it is referred to as "caning" and not "spanking." The Supreme Court of the United States in 1977 held that the paddling of school students was not per se unlawful. However, 33 states have now banned paddling in public schools. It is still common in some schools in the South, and more than 167,000 students were paddled in the 2011–2012 school year in American public schools. Students can be physically punished from kindergarten to the end of high school, meaning that even adults who have reached the age of majority are sometimes spanked by school officials. Several medical, pediatric, or psychological societies have issued statements opposing all forms of corporal punishment in schools, citing such outcomes as poorer academic achievements, increases in antisocial behaviors, injuries to students, and an unwelcoming learning environment. 
They include the American Medical Association, the American Academy of Child and Adolescent Psychiatry, the American Psychoanalytic Association, the American Academy of Pediatrics (AAP), the Society for Adolescent Medicine, the American Psychological Association, the Royal College of Paediatrics and Child Health, the Royal College of Psychiatrists, the Canadian Paediatric Society and the Australian Psychological Society, as well as the United States' National Association of School Psychologists and National Association of Secondary School Principals. Adult spanking Most spanking performed between adults in the 21st century within the Western world is erotic spanking. Within the early 20th century, American men spanking their wives and girlfriends was often seen as an acceptable form of domestic discipline. It was a common trope in American films, from the earliest days up through the 1960s, and was often used to allude to romance between the man and woman. In the early 21st century, adherents of a small subculture known as Christian domestic discipline have on a literalist interpretation of the Bible justified spanking as a form of acceptable punishment of women by their husbands. Critics describe such practices as a form of domestic abuse. A few countries have a judicial corporal punishment for adults. Ritual spanking traditions Asia On the first day of the lunar Chinese New Year holidays, a week-long 'Spring Festival', the most important festival for Chinese people all over the world, thousands of Chinese visit the Taoist Dong Lung Gong temple in Tungkang to go through the century-old ritual to get rid of bad luck. Men traditionally receive spankings and women get whipped, with the number of strokes to be administered (always lightly) by the temple staff being decided in either case by the god Wang Ye and by burning incense and tossing two pieces of wood, after which all go home happily, believing their luck will improve. Europe On Easter Monday, there is a Slavic tradition of spanking girls and young ladies with woven willow switches (Czech: pomlázka; Slovak: korbáč) and dousing them with water. In Slovenia, there is a jocular tradition that anyone who succeeds in climbing to the top of Mount Triglav receives a spanking or birching. In Poland there is a tradition named Pasowanie, which is celebrated on the 18th birthday. The birthday person receives eighteen smacks with the belt from the guests at the birthday party. North America Birthday spanking is a tradition within some parts of the United States. Within the tradition an individual (commonly, though not exclusively, a child) upon their birthday receives, typically corresponding to their age, a number of spanks. Characteristically these spankings are playful and are administered in such a fashion so the recipient receives no or only minor discomfort. See also UN Convention on the Rights of the Child Corporal punishment Erotic spanking Caning in Singapore Easter whip References Notes External links American Academy of Pediatrics What's The Best Way to Discipline My Child? The California Evidence-Based Clearinghouse for Child Welfare Healthy Steps Help me Grow Triple P – Positive Parenting Program (archived 30 March 2017) Corporal punishments Traditions Pain infliction methods Youth rights Children's rights Parenting Harassment and bullying
Spanking
[ "Biology" ]
2,008
[ "Harassment and bullying", "Behavior", "Aggression" ]
50,604
https://en.wikipedia.org/wiki/Interacting%20boson%20model
The interacting boson model (IBM) is a model in nuclear physics in which nucleons (protons or neutrons) pair up, essentially acting as a single particle with boson properties, with integral spin of either 2 (d-boson) or 0 (s-boson). They correspond to a quintuplet and a singlet, i.e. 6 states in total. It is sometimes known as the interacting boson approximation (IBA). The IBM1/IBM-I model treats both types of nucleons the same and considers only pairs of nucleons coupled to total angular momentum 0 and 2, called, respectively, s and d bosons. The IBM2/IBM-II model treats protons and neutrons separately. Both models are restricted to nuclei with even numbers of protons and neutrons. The model can be used to predict vibrational and rotational modes of non-spherical nuclei. History This model was invented by Akito Arima and Francesco Iachello in 1974, while they were working at the Kernfysisch Versneller Instituut (KVI) in Groningen, Netherlands. KVI is now property of the Universitair Medisch Centrum Groningen (https://umcgresearch.org/). See also Liquid drop model Nuclear shell model References Further reading Evolution of shapes in even–even nuclei using the standard interacting boson model Nuclear physics
Interacting boson model
[ "Physics" ]
291
[ "Nuclear and atomic physics stubs", "Nuclear physics" ]
50,609
https://en.wikipedia.org/wiki/Nuclear%20shell%20model
In nuclear physics, atomic physics, and nuclear chemistry, the nuclear shell model utilizes the Pauli exclusion principle to model the structure of atomic nuclei in terms of energy levels. The first shell model was proposed by Dmitri Ivanenko (together with E. Gapon) in 1932. The model was developed in 1949 following independent work by several physicists, most notably Maria Goeppert Mayer and J. Hans D. Jensen, who received the 1963 Nobel Prize in Physics for their contributions to this model, and Eugene Wigner, who received the Nobel Prize alongside them for his earlier groundlaying work on the atomic nuclei. The nuclear shell model is partly analogous to the atomic shell model, which describes the arrangement of electrons in an atom, in that a filled shell results in better stability. When adding nucleons (protons and neutrons) to a nucleus, there are certain points where the binding energy of the next nucleon is significantly less than the last one. This observation that there are specific magic quantum numbers of nucleons (2, 8, 20, 28, 50, 82, and 126) that are more tightly bound than the following higher number is the origin of the shell model. The shells for protons and neutrons are independent of each other. Therefore, there can exist both "magic nuclei", in which one nucleon type or the other is at a magic number, and "doubly magic quantum nuclei", where both are. Due to variations in orbital filling, the upper magic numbers are 126 and, speculatively, 184 for neutrons, but only 114 for protons, playing a role in the search for the so-called island of stability. Some semi-magic numbers have been found, notably Z = 40, which gives the nuclear shell filling for the various elements; 16 may also be a magic number. To get these numbers, the nuclear shell model starts with an average potential with a shape somewhere between the square well and the harmonic oscillator. To this potential, a spin-orbit term is added. Even so, the total perturbation does not coincide with the experiment, and an empirical spin-orbit coupling must be added with at least two or three different values of its coupling constant, depending on the nuclei being studied. The magic numbers of nuclei, as well as other properties, can be arrived at by approximating the model with a three-dimensional harmonic oscillator plus a spin–orbit interaction. A more realistic but complicated potential is known as the Woods–Saxon potential. Modified harmonic oscillator model Consider a three-dimensional harmonic oscillator. This would give, for example, in the first three levels ("ℓ" is the angular momentum quantum number): Nuclei are built by adding protons and neutrons. These will always fill the lowest available level, with the first two protons filling level zero, the next six protons filling level one, and so on. As with electrons in the periodic table, protons in the outermost shell will be relatively loosely bound to the nucleus if there are only a few protons in that shell because they are farthest from the center of the nucleus. Therefore, nuclei with a full outer proton shell will have a higher nuclear binding energy than other nuclei with a similar total number of protons. The same is true for neutrons. This means that the magic numbers are expected to be those in which all occupied shells are full. In accordance with the experiment, we get 2 (level 0 full) and 8 (levels 0 and 1 full) for the first two numbers. However, the full set of magic numbers does not turn out correctly. 
These can be computed as follows: In a three-dimensional harmonic oscillator the total degeneracy of states at level n is (n + 1)(n + 2)/2. Due to the spin, the degeneracy is doubled and is (n + 1)(n + 2). Thus, the magic numbers would be the sum of (n + 1)(n + 2) over n = 0, ..., k, which equals (k + 1)(k + 2)(k + 3)/3, for all integer k. This gives the following magic numbers: 2, 8, 20, 40, 70, 112, ..., which agree with experiment only in the first three entries. These numbers are twice the tetrahedral numbers (1, 4, 10, 20, 35, 56, ...) from the Pascal Triangle. In particular, the first six shells are: level 0: 2 states (ℓ = 0) = 2. level 1: 6 states (ℓ = 1) = 6. level 2: 2 states (ℓ = 0) + 10 states (ℓ = 2) = 12. level 3: 6 states (ℓ = 1) + 14 states (ℓ = 3) = 20. level 4: 2 states (ℓ = 0) + 10 states (ℓ = 2) + 18 states (ℓ = 4) = 30. level 5: 6 states (ℓ = 1) + 14 states (ℓ = 3) + 22 states (ℓ = 5) = 42. where for every ℓ there are 2ℓ+1 different values of ml and 2 values of ms, giving a total of 4ℓ+2 states for every specific level. These numbers are twice the values of triangular numbers from the Pascal Triangle: 1, 3, 6, 10, 15, 21, .... Including a spin-orbit interaction We next include a spin–orbit interaction. First, we have to describe the system by the quantum numbers j, mj and parity instead of ℓ, ml and ms, as in the hydrogen–like atom. Since every even level includes only even values of ℓ, it includes only states of even (positive) parity. Similarly, every odd level includes only states of odd (negative) parity. Thus we can ignore parity in counting states. The first six shells, described by the new quantum numbers, are level 0 (n = 0): 2 states (j = 1/2). Even parity. level 1 (n = 1): 2 states (j = 1/2) + 4 states (j = 3/2) = 6. Odd parity. level 2 (n = 2): 2 states (j = 1/2) + 4 states (j = 3/2) + 6 states (j = 5/2) = 12. Even parity. level 3 (n = 3): 2 states (j = 1/2) + 4 states (j = 3/2) + 6 states (j = 5/2) + 8 states (j = 7/2) = 20. Odd parity. level 4 (n = 4): 2 states (j = 1/2) + 4 states (j = 3/2) + 6 states (j = 5/2) + 8 states (j = 7/2) + 10 states (j = 9/2) = 30. Even parity. level 5 (n = 5): 2 states (j = 1/2) + 4 states (j = 3/2) + 6 states (j = 5/2) + 8 states (j = 7/2) + 10 states (j = 9/2) + 12 states (j = 11/2) = 42. Odd parity. where for every j there are 2j + 1 different states from different values of mj. Due to the spin–orbit interaction, the energies of states of the same level but with different j will no longer be identical. This is because in the original quantum numbers, when s is parallel to ℓ, the interaction energy is positive, and in this case j = ℓ + s = ℓ + 1/2. When s is anti-parallel to ℓ (i.e. aligned oppositely), the interaction energy is negative, and in this case j = ℓ − s = ℓ − 1/2. Furthermore, the strength of the interaction is roughly proportional to ℓ. For example, consider the states at level 4: The 10 states with j = 9/2 come from ℓ = 4 and s parallel to ℓ. Thus they have a positive spin–orbit interaction energy. The 8 states with j = 7/2 came from ℓ = 4 and s anti-parallel to ℓ. Thus they have a negative spin–orbit interaction energy. The 6 states with j = 5/2 came from ℓ = 2 and s parallel to ℓ. Thus they have a positive spin–orbit interaction energy. However, its magnitude is half compared to the states with j = 9/2. The 4 states with j = 3/2 came from ℓ = 2 and s anti-parallel to ℓ. Thus they have a negative spin–orbit interaction energy. However, its magnitude is half compared to the states with j = 7/2. The 2 states with j = 1/2 came from ℓ = 0 and thus have zero spin–orbit interaction energy. Changing the profile of the potential The harmonic oscillator potential grows infinitely as the distance from the center r goes to infinity.
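The counting above can be checked mechanically. The following short Python sketch (an illustrative addition, not part of the original article) sums the spin-doubled degeneracies (n + 1)(n + 2) of the three-dimensional harmonic oscillator levels and reproduces the naive magic numbers quoted above:

def doubled_degeneracy(n):
    # (n + 1)(n + 2): oscillator degeneracy at level n, doubled for the two spin states
    return (n + 1) * (n + 2)

magic_numbers, running_total = [], 0
for n in range(6):
    running_total += doubled_degeneracy(n)
    magic_numbers.append(running_total)
print(magic_numbers)  # [2, 8, 20, 40, 70, 112]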
A more realistic potential, such as the Woods–Saxon potential, would approach a constant at this limit. One main consequence is that the average radius of nucleons' orbits would be larger in a realistic potential. This leads to a reduced ℏ²ℓ(ℓ + 1)/2mr² term in the Laplace operator of the Hamiltonian operator. Another main difference is that orbits with high average radii, such as those with high n or high ℓ, will have a lower energy than in a harmonic oscillator potential. Both effects lead to a reduction in the energy levels of high ℓ orbits. Predicted magic numbers Together with the spin–orbit interaction, and for appropriate magnitudes of both effects, one is led to the following qualitative picture: at all levels, the highest j states have their energies shifted downwards, especially for high n (where the highest j is high). This is both due to the negative spin–orbit interaction energy and to the reduction in energy resulting from deforming the potential into a more realistic one. The second-to-highest j states, on the contrary, have their energy shifted up by the first effect and down by the second effect, leading to a small overall shift. The shifts in the energy of the highest j states can thus bring the energy of states of one level closer to the energy of states of a lower level. The "shells" of the shell model are then no longer identical to the levels denoted by n, and the magic numbers are changed. We may then suppose that the highest j states for n = 3 have an intermediate energy between the average energies of n = 2 and n = 3, and suppose that the highest j states for larger n (at least up to n = 7) have an energy closer to the average energy of n − 1. Then we get the following shells (see the figure) 1st shell: 2 states (n = 0, j = 1/2). 2nd shell: 6 states (n = 1, j = 1/2 or 3/2). 3rd shell: 12 states (n = 2, j = 1/2, 3/2 or 5/2). 4th shell: 8 states (n = 3, j = 7/2). 5th shell: 22 states (n = 3, j = 1/2, 3/2 or 5/2; n = 4, j = 9/2). 6th shell: 32 states (n = 4, j = 1/2, 3/2, 5/2 or 7/2; n = 5, j = 11/2). 7th shell: 44 states (n = 5, j = 1/2, 3/2, 5/2, 7/2 or 9/2; n = 6, j = 13/2). 8th shell: 58 states (n = 6, j = 1/2, 3/2, 5/2, 7/2, 9/2 or 11/2; n = 7, j = 15/2). and so on. Note that the numbers of states after the 4th shell are doubled triangular numbers plus 2. Spin–orbit coupling causes so-called 'intruder levels' to drop down from the next higher shell into the structure of the previous shell. The sizes of the intruders are such that the resulting shell sizes are themselves increased to the next higher doubled triangular numbers from those of the harmonic oscillator. For example, 1f2p has 20 nucleons, and spin–orbit coupling adds 1g9/2 (10 nucleons), leading to a new shell with 30 nucleons. 1g2d3s has 30 nucleons, and adding intruder 1h11/2 (12 nucleons) yields a new shell size of 42, and so on. The magic numbers are then the running totals of the shell sizes: 2, 8 = 2 + 6, 20 = 8 + 12, 28 = 20 + 8, 50 = 28 + 22, 82 = 50 + 32, 126 = 82 + 44, 184 = 126 + 58, and so on. This gives all the observed magic numbers and also predicts a new one (the so-called island of stability) at the value of 184 (for protons, the magic number 126 has not been observed yet, and more complicated theoretical considerations predict the magic number to be 114 instead). Another way to predict magic (and semi-magic) numbers is by laying out the idealized filling order (with spin–orbit splitting but energy levels not overlapping). For consistency, s is split into j = 1/2 and j = −1/2 components with 2 and 0 members respectively. Taking the leftmost and rightmost total counts within sequences bounded by / here gives the magic and semi-magic numbers.
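As a quick numerical check of the shell sizes listed above (again an illustrative sketch, not part of the original article), the running totals of 2, 6, 12, 8, 22, 32, 44 and 58 reproduce the observed magic numbers:

shell_sizes = [2, 6, 12, 8, 22, 32, 44, 58]  # shell sizes after the intruder levels are included
magic_numbers, running_total = [], 0
for size in shell_sizes:
    running_total += size
    magic_numbers.append(running_total)
print(magic_numbers)  # [2, 8, 20, 28, 50, 82, 126, 184]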
s(2,0)/p(4,2) > 2,2/6,8, so (semi)magic numbers 2,2/6,8 d(6,4):s(2,0)/f(8,6):p(4,2) > 14,18:20,20/28,34:38,40, so 14,20/28,40 g(10,8):d(6,4):s(2,0)/h(12,10):f(8,6):p(4,2) > 50,58,64,68,70,70/82,92,100,106,110,112, so 50,70/82,112 i(14,12):g(10,8):d(6,4):s(2,0)/j(16,14):h(12,10):f(8,6):p(4,2) > 126,138,148,156,162,166,168,168/184,198,210,220,228,234,238,240, so 126,168/184,240 The rightmost predicted magic numbers of each pair within the quartets bisected by / are double tetrahedral numbers from the Pascal Triangle: 2, 8, 20, 40, 70, 112, 168, 240 are 2x 1, 4, 10, 20, 35, 56, 84, 120, ..., and the leftmost members of the pairs differ from the rightmost by double triangular numbers: 2 − 2 = 0, 8 − 6 = 2, 20 − 14 = 6, 40 − 28 = 12, 70 − 50 = 20, 112 − 82 = 30, 168 − 126 = 42, 240 − 184 = 56, where 0, 2, 6, 12, 20, 30, 42, 56, ... are 2 × 0, 1, 3, 6, 10, 15, 21, 28, ... . Other properties of nuclei This model also predicts or explains with some success other properties of nuclei, in particular spin and parity of nuclei ground states, and to some extent their excited nuclear states as well. Take (oxygen-17) as an example: Its nucleus has eight protons filling the first three proton "shells", eight neutrons filling the first three neutron "shells", and one extra neutron. All protons in a complete proton shell have zero total angular momentum, since their angular momenta cancel each other. The same is true for neutrons. All protons in the same level (n) have the same parity (either +1 or −1), and since the parity of a pair of particles is the product of their parities, an even number of protons from the same level (n) will have +1 parity. Thus, the total angular momentum of the eight protons and the first eight neutrons is zero, and their total parity is +1. This means that the spin (i.e. angular momentum) of the nucleus, as well as its parity, are fully determined by that of the ninth neutron. This one is in the first (i.e. lowest energy) state of the 4th shell, which is a d-shell (ℓ = 2), and since p = (−1), this gives the nucleus an overall parity of +1. This 4th d-shell has a j = , thus the nucleus of is expected to have positive parity and total angular momentum , which indeed it has. The rules for the ordering of the nucleus shells are similar to Hund's Rules of the atomic shells, however, unlike its use in atomic physics, the completion of a shell is not signified by reaching the next n, as such the shell model cannot accurately predict the order of excited nuclei states, though it is very successful in predicting the ground states. The order of the first few terms are listed as follows: 1s, 1p, 1p, 1d, 2s, 1d... For further clarification on the notation refer to the article on the RussellSaunders term symbol. For nuclei farther from the magic quantum numbers one must add the assumption that due to the relation between the strong nuclear force and total angular momentum, protons or neutrons with the same n tend to form pairs of opposite angular momentum. Therefore, a nucleus with an even number of protons and an even number of neutrons has 0 spin and positive parity. A nucleus with an even number of protons and an odd number of neutrons (or vice versa) has the parity of the last neutron (or proton), and the spin equal to the total angular momentum of this neutron (or proton). By "last" we mean the properties coming from the highest energy level. 
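To restate the oxygen-17 example as a compact worked calculation (added here for illustration): the ninth neutron sits in the 1d5/2 orbital, so ℓ = 2 and j = 5/2, and the parity of the odd neutron is (−1)^ℓ = (−1)² = +1. Since the eight protons and the first eight neutrons couple to zero total angular momentum and positive parity, the predicted ground state of oxygen-17 is Jπ = 5/2+, in agreement with the measured value.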
In the case of a nucleus with an odd number of protons and an odd number of neutrons, one must consider the total angular momentum and parity of both the last neutron and the last proton. The nucleus parity will be a product of theirs, while the nucleus spin will be one of the possible results of the sum of their angular momenta (with other possible results being excited states of the nucleus). The ordering of angular momentum levels within each shell is according to the principles described above – due to spin–orbit interaction, with high angular momentum states having their energies shifted downwards due to the deformation of the potential (i.e. moving from a harmonic oscillator potential to a more realistic one). For nucleon pairs, however, it is often energetically favourable to be at high angular momentum, even if its energy level for a single nucleon would be higher. This is due to the relation between angular momentum and the strong nuclear force. The nuclear magnetic moment of neutrons and protons is partly predicted by this simple version of the shell model. The magnetic moment is calculated through j, ℓ and s of the "last" nucleon, but nuclei are not in states of well-defined ℓ and s. Furthermore, for odd-odd nuclei, one has to consider the two "last" nucleons, as in deuterium. Therefore, one gets several possible answers for the nuclear magnetic moment, one for each possible combined ℓ and s state, and the real state of the nucleus is a superposition of them. Thus the real (measured) nuclear magnetic moment is somewhere in between the possible answers. The electric dipole of a nucleus is always zero, because its ground state has a definite parity. The matter density (ψ, where ψ is the wavefunction) is always invariant under parity. This is usually the situation with the atomic electric dipole. Higher electric and magnetic multipole moments cannot be predicted by this simple version of the shell model for reasons similar to those in the case of deuterium. Including residual interactions For nuclei having two or more valence nucleons (i.e. nucleons outside a closed shell), a residual two-body interaction must be added. This residual term comes from the part of the inter-nucleon interaction not included in the approximative average potential. Through this inclusion, different shell configurations are mixed, and the energy degeneracy of states corresponding to the same configuration is broken. These residual interactions are incorporated through shell model calculations in a truncated model space (or valence space). This space is spanned by a basis of many-particle states where only single-particle states in the model space are active. The Schrödinger equation is solved on this basis, using an effective Hamiltonian specifically suited for the model space. This Hamiltonian is different from the one of free nucleons as, among other things, it has to compensate for excluded configurations. One can do away with the average potential approximation entirely by extending the model space to the previously inert core and treating all single-particle states up to the model space truncation as active. This forms the basis of the no-core shell model, which is an ab initio method. It is necessary to include a three-body interaction in such calculations to achieve agreement with experiments. 
Collective rotation and the deformed potential In 1953 the first experimental examples were found of rotational bands in nuclei, with their energy levels following the same J(J+1) pattern of energies as in rotating molecules. Quantum mechanically, it is impossible to have a collective rotation of a sphere, so this implied that the shape of these nuclei was non-spherical. In principle, these rotational states could have been described as coherent superpositions of particle-hole excitations in the basis consisting of single-particle states of the spherical potential. But in reality, the description of these states in this manner is intractable, due to a large number of valence particles—and this intractability was even greater in the 1950s when computing power was extremely rudimentary. For these reasons, Aage Bohr, Ben Mottelson, and Sven Gösta Nilsson constructed models in which the potential was deformed into an ellipsoidal shape. The first successful model of this type is now known as the Nilsson model. It is essentially the harmonic oscillator model described in this article, but with anisotropy added, so the oscillator frequencies along the three Cartesian axes are not all the same. Typically the shape is a prolate ellipsoid, with the axis of symmetry taken to be z. Because the potential is not spherically symmetric, the single-particle states are not states of good angular momentum J. However, a Lagrange multiplier , known as a "cranking" term, can be added to the Hamiltonian. Usually the angular frequency vector ω is taken to be perpendicular to the symmetry axis, although tilted-axis cranking can also be considered. Filling the single-particle states up to the Fermi level produces states whose expected angular momentum along the cranking axis is the desired value. Related models Igal Talmi developed a method to obtain the information from experimental data and use it to calculate and predict energies which have not been measured. This method has been successfully used by many nuclear physicists and has led to a deeper understanding of nuclear structure. The theory which gives a good description of these properties was developed. This description turned out to furnish the shell model basis of the elegant and successful interacting boson model. A model derived from the nuclear shell model is the alpha particle model developed by Henry Margenau, Edward Teller, J. K. Pering, T. H. Skyrme, also sometimes called the Skyrme model. Note, however, that the Skyrme model is usually taken to be a model of the nucleon itself, as a "cloud" of mesons (pions), rather than as a model of the nucleus as a "cloud" of alpha particles. See also Nuclear structure Table of nuclides Liquid drop model Isomeric shift Interacting boson model References Further reading External links Nuclear physics German inventions
Nuclear shell model
[ "Physics" ]
4,842
[ "Nuclear physics" ]
50,623
https://en.wikipedia.org/wiki/Chmod
In Unix and Unix-like operating systems, chmod is the command and system call used to change the access permissions and the special mode flags (the setuid, setgid, and sticky flags) of file system objects (files and directories). Collectively these were originally called its modes, and the name chmod was chosen as an abbreviation of change mode. History A chmod command first appeared in AT&T Unix version 1, along with the chmod system call. As systems grew in number and types of users, access-control lists were added to many file systems in addition to these most basic modes to increase flexibility. The version of chmod bundled in GNU coreutils was written by David MacKenzie and Jim Meyering. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. The command has also been ported to the IBM i operating system. Command syntax Throughout this section, user refers to the owner of the file, as a reminder that the symbolic form of the command uses "u", to avoid confusion with "other". Note that only the file's owner or the superuser (root) is able to change file permissions. chmod [options] mode[,mode] file1 [file2 ...] Usually implemented options include: -R: recursive, i.e. include objects in subdirectories. -v: verbose, show objects changed (unchanged objects are not shown). If a symbolic link is specified, the target object is affected. File modes directly associated with symbolic links themselves are typically not used. To view the file mode, the ls or stat commands may be used: $ ls -l findPhoneNumbers.sh -rwxr-xr-- 1 dgerman staff 823 Dec 16 15:03 findPhoneNumbers.sh $ stat -c %a findPhoneNumbers.sh 754 The r, w, and x specify the read, write, and execute access respectively. The first character of the ls display denotes the object type; a hyphen represents a plain file. The script can be read, written to, and executed by the user dgerman; read and executed by members of the staff group; and only read by any other users. Each group of three characters defines permissions for one class: the three leftmost characters (here rwx) define permissions for the User class (i.e. the file owner), the middle three characters (here r-x) define permissions for the Group class (i.e. the group owning the file), and the rightmost three characters (here r--) define permissions for the Others class. In a mode such as -rwxr-x---, users who are not the owner of the file and who are not members of the Group (and, thus, are in the Others class) have no permission to access the file. Numerical permissions The numerical format accepts up to four digits. The three rightmost digits define permissions for the file user, the group, and others. The optional leading digit, when 4 digits are given, specifies the special setuid, setgid, and sticky flags. Each of the three rightmost digits is the sum of three binary bits, which control the "read" (4), "write" (2) and "execute" (1) permissions respectively. A bit value of 1 means a class is allowed that action, while a 0 means it is disallowed. For example, 754 would allow: "read" (4), "write" (2), and "execute" (1) for the User class; i.e., 7 (4 + 2 + 1). "read" (4) and "execute" (1) for the Group class; i.e., 5 (4 + 1). Only "read" (4) for the Others class. A numerical code permits execution if and only if it is odd (i.e. 1, 3, 5, or 7). A numerical code permits "read" if and only if it is greater than or equal to 4 (i.e. 4, 5, 6, or 7). A numerical code permits "write" if and only if it is 2, 3, 6, or 7.
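The octal notation can be unpacked programmatically. The following minimal Python sketch (an illustration added here, not part of the original article; the function name is made up) expands a three-digit octal mode such as 754 into the familiar rwx string:

def mode_to_symbolic(mode):
    # Each octal digit is tested against the read (4), write (2) and execute (1) bits.
    bits = ((4, 'r'), (2, 'w'), (1, 'x'))
    return ''.join(
        ''.join(ch if int(digit) & bit else '-' for bit, ch in bits)
        for digit in format(mode, '03o')
    )

print(mode_to_symbolic(0o754))  # rwxr-xr--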
Numeric example Change permissions to permit the update of a file: $ ls -l File -rw-r--r-- 1 jsmith programmers 57 Jul 3 10:13 File $ chmod 664 File $ ls -l File -rw-rw-r-- 1 jsmith programmers 57 Jul 3 10:13 File Since the setuid, setgid and sticky bits are not specified, this is equivalent to: $ chmod 0664 File Symbolic modes The chmod command also accepts a finer-grained symbolic notation, which allows modifying specific modes while leaving other modes untouched. The symbolic mode is composed of three components, which are combined to form a single string of text: $ chmod [references][operator][modes] file ... Classes of users are used to distinguish to whom the permissions apply. If no classes are specified "all" is implied. The classes are represented by one or more of the following letters: u (the file owner), g (the owning group), o (others) and a (all three). The program uses an operator to specify how the modes of a file should be adjusted. The following operators are accepted: + (adds the specified modes), - (removes the specified modes) and = (sets exactly the specified modes). The modes indicate which permissions are to be granted or removed from the specified classes. There are three basic modes which correspond to the basic permissions: r (read), w (write) and x (execute). Multiple changes can be specified by separating multiple symbolic modes with commas (without spaces). If a user is not specified, chmod will check the umask and the effect will be as if "a" was specified except bits that are set in the umask are not affected. Symbolic examples Add write permission (w) to the Group's (g) access modes of a directory, allowing users in the same group to add files: $ ls -ld shared_dir # show access modes before chmod drwxr-xr-x 2 jsmitt northregion 96 Apr 8 12:53 shared_dir $ chmod g+w shared_dir $ ls -ld shared_dir # show access modes after chmod drwxrwxr-x 2 jsmitt northregion 96 Apr 8 12:53 shared_dir Remove write permissions (w) for all classes (a), preventing anyone from writing to the file: $ ls -l ourBestReferenceFile -rw-rw-r-- 2 tmiller northregion 96 Apr 8 12:53 ourBestReferenceFile $ chmod a-w ourBestReferenceFile $ ls -l ourBestReferenceFile -r--r--r-- 2 tmiller northregion 96 Apr 8 12:53 ourBestReferenceFile Set the permissions for the User (u) and the Group (g) to read and execute (rx) only (no write permission) on referenceLib, preventing anyone from adding files: $ ls -ld referenceLib drwxr----- 2 ebowman northregion 96 Apr 8 12:53 referenceLib $ chmod ug=rx referenceLib $ ls -ld referenceLib dr-xr-x--- 2 ebowman northregion 96 Apr 8 12:53 referenceLib Add the read and write permissions to the user and group classes of a file or directory named sample: $ chmod ug+rw sample $ ls -ld sample drw-rw---- 2 rsanchez budget 96 Dec 8 12:53 sample Remove all permissions, allowing no one to read, write, or execute the file named sample: $ chmod a-rwx sample $ ls -l sample ---------- 2 rswven planning 96 Dec 8 12:53 sample Change the permissions for the user and the group to read and execute only (no write permission) on sample: $ # Sample file permissions before command $ ls -ld sample drw-rw---- 2 oschultz warehousing 96 Dec 8 12:53 sample $ chmod ug=rx sample $ ls -ld sample dr-xr-x--- 2 oschultz warehousing 96 Dec 8 12:53 sample Set the item writable for the user while making it read-only for anyone else with only one command: $ chmod u=rw,go=r sample $ ls -ld sample drw-r--r-- 2 oschultz warehousing 96 Dec 8 12:53 sample Special modes The chmod command is also capable of changing the additional permissions or special modes of a file or directory. The symbolic modes use s to represent the setuid and setgid modes, and t to represent the sticky mode.
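As a companion to the symbolic examples, the same permission and special-mode bits can also be set from Python through the standard os and stat modules. This is a minimal sketch added for illustration; the directory names are hypothetical:

import os
import stat

# rwxrwxr-x plus the setgid bit, so new files inherit the directory's group (hypothetical path)
os.chmod("shared_dir", 0o775 | stat.S_ISGID)
# rwxrwxrwx plus the sticky bit, so only a file's owner may delete it (hypothetical path)
os.chmod("scratch_dir", 0o777 | stat.S_ISVTX)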
The modes are only applied to the appropriate classes, regardless of whether or not other classes are specified. Most operating systems support the specification of special modes numerically, particularly in octal, but some do not. On these systems, only the symbolic modes can be used. Command line examples See also File-system permissions chattr, the command used to change the attributes of a file or directory on Linux systems chown, the command used to change the owner of a file or directory on Unix-like systems chgrp, the command used to change the group of a file or directory on Unix-like systems cacls, a command used on Windows NT and its derivatives to modify the access control lists associated with a file or directory attrib umask, restricts mode (permissions) at file or directory creation on Unix-like systems User identifier Group identifier List of Unix commands References External links chmod — manual page from GNU coreutils. GNU "Setting Permissions" manual CHMOD-Win 3.0 — Freeware Windows' ACL ↔ CHMOD converter. Beginners tutorial with on-line "live" example File system permissions Operating system security Standard Unix programs Unix file system-related software Unix SUS2008 utilities Plan 9 commands Inferno (operating system) commands IBM i Qshell commands
Chmod
[ "Technology" ]
2,098
[ "IBM i Qshell commands", "Standard Unix programs", "Computing commands", "Plan 9 commands", "Inferno (operating system) commands" ]
50,627
https://en.wikipedia.org/wiki/Conformal%20map
In mathematics, a conformal map is a function that locally preserves angles, but not necessarily lengths. More formally, let and be open subsets of . A function is called conformal (or angle-preserving) at a point if it preserves angles between directed curves through , as well as preserving orientation. Conformal maps preserve both angles and the shapes of infinitesimally small figures, but not necessarily their size or curvature. The conformal property may be described in terms of the Jacobian derivative matrix of a coordinate transformation. The transformation is conformal whenever the Jacobian at each point is a positive scalar times a rotation matrix (orthogonal with determinant one). Some authors define conformality to include orientation-reversing mappings whose Jacobians can be written as any scalar times any orthogonal matrix. For mappings in two dimensions, the (orientation-preserving) conformal mappings are precisely the locally invertible complex analytic functions. In three and higher dimensions, Liouville's theorem sharply limits the conformal mappings to a few types. The notion of conformality generalizes in a natural way to maps between Riemannian or semi-Riemannian manifolds. In two dimensions If is an open subset of the complex plane , then a function is conformal if and only if it is holomorphic and its derivative is everywhere non-zero on . If is antiholomorphic (conjugate to a holomorphic function), it preserves angles but reverses their orientation. In the literature, there is another definition of conformal: a mapping which is one-to-one and holomorphic on an open set in the plane. The open mapping theorem forces the inverse function (defined on the image of ) to be holomorphic. Thus, under this definition, a map is conformal if and only if it is biholomorphic. The two definitions for conformal maps are not equivalent. Being one-to-one and holomorphic implies having a non-zero derivative. In fact, we have the following relation, the inverse function theorem: where . However, the exponential function is a holomorphic function with a nonzero derivative, but is not one-to-one since it is periodic. The Riemann mapping theorem, one of the profound results of complex analysis, states that any non-empty open simply connected proper subset of admits a bijective conformal map to the open unit disk in . Informally, this means that any blob can be transformed into a perfect circle by some conformal map. Global conformal maps on the Riemann sphere A map of the Riemann sphere onto itself is conformal if and only if it is a Möbius transformation. The complex conjugate of a Möbius transformation preserves angles, but reverses the orientation. For example, circle inversions. Conformality with respect to three types of angles In plane geometry there are three types of angles that may be preserved in a conformal map. Each is hosted by its own real algebra, ordinary complex numbers, split-complex numbers, and dual numbers. The conformal maps are described by linear fractional transformations in each case. In three or more dimensions Riemannian geometry In Riemannian geometry, two Riemannian metrics and on a smooth manifold are called conformally equivalent if for some positive function on . The function is called the conformal factor. A diffeomorphism between two Riemannian manifolds is called a conformal map if the pulled back metric is conformally equivalent to the original one. 
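A standard worked example of conformally equivalent metrics, added here for illustration: on the upper half-plane {(x, y) : y > 0}, the hyperbolic metric (dx² + dy²)/y² is conformally equivalent to the Euclidean metric dx² + dy², with conformal factor 1/y². The identity map is therefore a conformal (angle-preserving) diffeomorphism between the two Riemannian structures, even though it distorts lengths.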
For example, stereographic projection of a sphere onto the plane augmented with a point at infinity is a conformal map. One can also define a conformal structure on a smooth manifold, as a class of conformally equivalent Riemannian metrics. Euclidean space A classical theorem of Joseph Liouville shows that there are far fewer conformal maps in higher dimensions than in two dimensions. Any conformal map from an open subset of Euclidean space into the same Euclidean space of dimension three or greater can be composed from three types of transformations: a homothety, an isometry, and a special conformal transformation. For linear transformations, a conformal map may only be composed of homothety and isometry, and is called a conformal linear transformation. Applications Applications of conformal mapping exist in aerospace engineering, in biomedical sciences (including brain mapping and genetic mapping), in applied math (for geodesics and in geometry), in earth sciences (including geophysics, geography, and cartography), in engineering, and in electronics. Cartography In cartography, several named map projections, including the Mercator projection and the stereographic projection are conformal. The preservation of compass directions makes them useful in marine navigation. Physics and engineering Conformal mappings are invaluable for solving problems in engineering and physics that can be expressed in terms of functions of a complex variable yet exhibit inconvenient geometries. By choosing an appropriate mapping, the analyst can transform the inconvenient geometry into a much more convenient one. For example, one may wish to calculate the electric field, , arising from a point charge located near the corner of two conducting planes separated by a certain angle (where is the complex coordinate of a point in 2-space). This problem per se is quite clumsy to solve in closed form. However, by employing a very simple conformal mapping, the inconvenient angle is mapped to one of precisely radians, meaning that the corner of two planes is transformed to a straight line. In this new domain, the problem (that of calculating the electric field impressed by a point charge located near a conducting wall) is quite easy to solve. The solution is obtained in this domain, , and then mapped back to the original domain by noting that was obtained as a function (viz., the composition of and ) of , whence can be viewed as , which is a function of , the original coordinate basis. Note that this application is not a contradiction to the fact that conformal mappings preserve angles, they do so only for points in the interior of their domain, and not at the boundary. Another example is the application of conformal mapping technique for solving the boundary value problem of liquid sloshing in tanks. If a function is harmonic (that is, it satisfies Laplace's equation ) over a plane domain (which is two-dimensional), and is transformed via a conformal map to another plane domain, the transformation is also harmonic. For this reason, any function which is defined by a potential can be transformed by a conformal map and still remain governed by a potential. Examples in physics of equations defined by a potential include the electromagnetic field, the gravitational field, and, in fluid dynamics, potential flow, which is an approximation to fluid flow assuming constant density, zero viscosity, and irrotational flow. 
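A concrete instance of the technique described above, a standard textbook example added here for illustration: the power map w = z^(π/α) sends the wedge 0 < arg z < α onto the upper half-plane 0 < arg w < π. Because this map is holomorphic with non-vanishing derivative away from the origin, a harmonic potential found in the half-plane, composed with the map, is again harmonic in the wedge; this is exactly how the awkward corner geometry is traded for a straight boundary.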
One example of a fluid dynamic application of a conformal map is the Joukowsky transform that can be used to examine the field of flow around a Joukowsky airfoil. Conformal maps are also valuable in solving nonlinear partial differential equations in some specific geometries. Such analytic solutions provide a useful check on the accuracy of numerical simulations of the governing equation. For example, in the case of very viscous free-surface flow around a semi-infinite wall, the domain can be mapped to a half-plane in which the solution is one-dimensional and straightforward to calculate. For discrete systems, Noury and Yang presented a way to convert discrete systems root locus into continuous root locus through a well-know conformal mapping in geometry (aka inversion mapping). Maxwell's equations Maxwell's equations are preserved by Lorentz transformations which form a group including circular and hyperbolic rotations. The latter are sometimes called Lorentz boosts to distinguish them from circular rotations. All these transformations are conformal since hyperbolic rotations preserve hyperbolic angle, (called rapidity) and the other rotations preserve circular angle. The introduction of translations in the Poincaré group again preserves angles. A larger group of conformal maps for relating solutions of Maxwell's equations was identified by Ebenezer Cunningham (1908) and Harry Bateman (1910). Their training at Cambridge University had given them facility with the method of image charges and associated methods of images for spheres and inversion. As recounted by Andrew Warwick (2003) Masters of Theory: Each four-dimensional solution could be inverted in a four-dimensional hyper-sphere of pseudo-radius in order to produce a new solution. Warwick highlights this "new theorem of relativity" as a Cambridge response to Einstein, and as founded on exercises using the method of inversion, such as found in James Hopwood Jeans textbook Mathematical Theory of Electricity and Magnetism. General relativity In general relativity, conformal maps are the simplest and thus most common type of causal transformations. Physically, these describe different universes in which all the same events and interactions are still (causally) possible, but a new additional force is necessary to affect this (that is, replication of all the same trajectories would necessitate departures from geodesic motion because the metric tensor is different). It is often used to try to make models amenable to extension beyond curvature singularities, for example to permit description of the universe even before the Big Bang. See also Biholomorphic map Carathéodory's theorem – A conformal map extends continuously to the boundary Penrose diagram Schwarz–Christoffel mapping – a conformal transformation of the upper half-plane onto the interior of a simple polygon Special linear group – transformations that preserve volume (as opposed to angles) and orientation References Further reading Constantin Carathéodory (1932) Conformal Representation, Cambridge Tracts in Mathematics and Physics External links Interactive visualizations of many conformal maps Conformal Maps by Michael Trott, Wolfram Demonstrations Project. Conformal Mapping images of current flow in different geometries without and with magnetic field by Gerhard Brunthaler. Conformal Transformation: from Circle to Square. Online Conformal Map Grapher. Joukowski Transform Interactive WebApp Riemannian geometry Map projections Angle
Conformal map
[ "Physics", "Mathematics" ]
2,089
[ "Geometric measurement", "Scalar physical quantities", "Physical quantities", "Map projections", "Coordinate systems", "Wikipedia categories named after physical quantities", "Angle" ]
50,650
https://en.wikipedia.org/wiki/Astronomy
Astronomy is a natural science that studies celestial objects and the phenomena that occur in the cosmos. It uses mathematics, physics, and chemistry in order to explain their origin and their overall evolution. Objects of interest include planets, moons, stars, nebulae, galaxies, meteoroids, asteroids, and comets. Relevant phenomena include supernova explosions, gamma ray bursts, quasars, blazars, pulsars, and cosmic microwave background radiation. More generally, astronomy studies everything that originates beyond Earth's atmosphere. Cosmology is a branch of astronomy that studies the universe as a whole. Astronomy is one of the oldest natural sciences. The early civilizations in recorded history made methodical observations of the night sky. These include the Egyptians, Babylonians, Greeks, Indians, Chinese, Maya, and many ancient indigenous peoples of the Americas. In the past, astronomy included disciplines as diverse as astrometry, celestial navigation, observational astronomy, and the making of calendars. Professional astronomy is split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects. This data is then analyzed using basic principles of physics. Theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. These two fields complement each other. Theoretical astronomy seeks to explain observational results and observations are used to confirm theoretical results. Astronomy is one of the few sciences in which amateurs play an active role. This is especially true for the discovery and observation of transient events. Amateur astronomers have helped with many important discoveries, such as finding new comets. Etymology Astronomy (from the Greek ἀστρονομία from ἄστρον astron, "star" and -νομία -nomia from νόμος nomos, "law" or "culture") means "law of the stars" (or "culture of the stars" depending on the translation). Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects. Although the two fields share a common origin, they are now entirely distinct. Use of terms "astronomy" and "astrophysics" "Astronomy" and "astrophysics" are synonyms. Based on strict dictionary definitions, "astronomy" refers to "the study of objects and matter outside the Earth's atmosphere and of their physical and chemical properties", while "astrophysics" refers to the branch of astronomy dealing with "the behavior, physical properties, and dynamic processes of celestial objects and phenomena". In some cases, as in the introduction of the introductory textbook The Physical Universe by Frank Shu, "astronomy" may be used to describe the qualitative study of the subject, whereas "astrophysics" is used to describe the physics-oriented version of the subject. However, since most modern astronomical research deals with subjects related to physics, modern astronomy could actually be called astrophysics. Some fields, such as astrometry, are purely astronomy rather than also astrophysics. Various departments in which scientists carry out research on this subject may use "astronomy" and "astrophysics", partly depending on whether the department is historically affiliated with a physics department, and many professional astronomers have physics rather than astronomy degrees. 
Some titles of the leading scientific journals in this field include The Astronomical Journal, The Astrophysical Journal, and Astronomy & Astrophysics. History Pre-historic astronomy In early historic times, astronomy only consisted of the observation and predictions of the motions of objects visible to the naked eye. In some locations, early cultures assembled massive artifacts that may have had some astronomical purpose. In addition to their ceremonial uses, these observatories could be employed to determine the seasons, an important factor in knowing when to plant crops and in understanding the length of the year. Classical astronomy As civilizations developed, most notably in Egypt, Mesopotamia, Greece, Persia, India, China, and Central America, astronomical observatories were assembled and ideas on the nature of the Universe began to develop. Most early astronomy consisted of mapping the positions of the stars and planets, a science now referred to as astrometry. From these observations, early ideas about the motions of the planets were formed, and the nature of the Sun, Moon and the Earth in the Universe were explored philosophically. The Earth was believed to be the center of the Universe with the Sun, the Moon and the stars rotating around it. This is known as the geocentric model of the Universe, or the Ptolemaic system, named after Ptolemy. A particularly important early development was the beginning of mathematical and scientific astronomy, which began among the Babylonians, who laid the foundations for the later astronomical traditions that developed in many other civilizations. The Babylonians discovered that lunar eclipses recurred in a repeating cycle known as a saros. Following the Babylonians, significant advances in astronomy were made in ancient Greece and the Hellenistic world. Greek astronomy is characterized from the start by seeking a rational, physical explanation for celestial phenomena. In the 3rd century BC, Aristarchus of Samos estimated the size and distance of the Moon and Sun, and he proposed a model of the Solar System where the Earth and planets rotated around the Sun, now called the heliocentric model. In the 2nd century BC, Hipparchus discovered precession, calculated the size and distance of the Moon and invented the earliest known astronomical devices such as the astrolabe. Hipparchus also created a comprehensive catalog of 1020 stars, and most of the constellations of the northern hemisphere derive from Greek astronomy. The Antikythera mechanism (–80 BC) was an early analog computer designed to calculate the location of the Sun, Moon, and planets for a given date. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe. Post-classical astronomy Astronomy flourished in the Islamic world and other parts of the world. This led to the emergence of the first astronomical observatories in the Muslim world by the early 9th century. In 964, the Andromeda Galaxy, the largest galaxy in the Local Group, was described by the Persian Muslim astronomer Abd al-Rahman al-Sufi in his Book of Fixed Stars. The SN 1006 supernova, the brightest apparent magnitude stellar event in recorded history, was observed by the Egyptian Arabic astronomer Ali ibn Ridwan and Chinese astronomers in 1006. Iranian scholar Al-Biruni observed that, contrary to Ptolemy, the Sun's apogee (highest point in the heavens) was mobile, not fixed. 
Some of the prominent Islamic (mostly Persian and Arab) astronomers who made significant contributions to the science include Al-Battani, Thebit, Abd al-Rahman al-Sufi, Biruni, Abū Ishāq Ibrāhīm al-Zarqālī, Al-Birjandi, and the astronomers of the Maragheh and Samarkand observatories. Astronomers during that time introduced many Arabic names now used for individual stars. It is also believed that the ruins at Great Zimbabwe and Timbuktu may have housed astronomical observatories. In Post-classical West Africa, Astronomers studied the movement of stars and relation to seasons, crafting charts of the heavens as well as precise diagrams of orbits of the other planets based on complex mathematical calculations. Songhai historian Mahmud Kati documented a meteor shower in August 1583. Europeans had previously believed that there had been no astronomical observation in sub-Saharan Africa during the pre-colonial Middle Ages, but modern discoveries show otherwise. For over six centuries (from the recovery of ancient learning during the late Middle Ages into the Enlightenment), the Roman Catholic Church gave more financial and social support to the study of astronomy than probably all other institutions. Among the Church's motives was finding the date for Easter. Medieval Europe housed a number of important astronomers. Richard of Wallingford (1292–1336) made major contributions to astronomy and horology, including the invention of the first astronomical clock, the Rectangulus which allowed for the measurement of angles between planets and other astronomical bodies, as well as an equatorium called the Albion which could be used for astronomical calculations such as lunar, solar and planetary longitudes and could predict eclipses. Nicole Oresme (1320–1382) and Jean Buridan (1300–1361) first discussed evidence for the rotation of the Earth, furthermore, Buridan also developed the theory of impetus (predecessor of the modern scientific theory of inertia) which was able to show planets were capable of motion without the intervention of angels. Georg von Peuerbach (1423–1461) and Regiomontanus (1436–1476) helped make astronomical progress instrumental to Copernicus's development of the heliocentric model decades later. Early telescopic astronomy During the Renaissance, Nicolaus Copernicus proposed a heliocentric model of the solar system. His work was defended by Galileo Galilei and expanded upon by Johannes Kepler. Kepler was the first to devise a system that correctly described the details of the motion of the planets around the Sun. However, Kepler did not succeed in formulating a theory behind the laws he wrote down. It was Isaac Newton, with his invention of celestial dynamics and his law of gravitation, who finally explained the motions of the planets. Newton also developed the reflecting telescope. Improvements in the size and quality of the telescope led to further discoveries. The English astronomer John Flamsteed catalogued over 3000 stars. More extensive star catalogues were produced by Nicolas Louis de Lacaille. The astronomer William Herschel made a detailed catalog of nebulosity and clusters, and in 1781 discovered the planet Uranus, the first new planet found. During the 18–19th centuries, the study of the three-body problem by Leonhard Euler, Alexis Claude Clairaut, and Jean le Rond d'Alembert led to more accurate predictions about the motions of the Moon and planets. 
This work was further refined by Joseph-Louis Lagrange and Pierre Simon Laplace, allowing the masses of the planets and moons to be estimated from their perturbations. Significant advances in astronomy came about with the introduction of new technology, including the spectroscope and photography. Joseph von Fraunhofer discovered about 600 bands in the spectrum of the Sun in 1814–15, which, in 1859, Gustav Kirchhoff ascribed to the presence of different elements. Stars were proven to be similar to the Earth's own Sun, but with a wide range of temperatures, masses, and sizes. Deep space astronomy The existence of the Earth's galaxy, the Milky Way, as its own group of stars was only proven in the 20th century, along with the existence of "external" galaxies. The observed recession of those galaxies led to the discovery of the expansion of the Universe. In 1919, when the Hooker Telescope was completed, the prevailing view was that the universe consisted entirely of the Milky Way Galaxy. Using the Hooker Telescope, Edwin Hubble identified Cepheid variables in several spiral nebulae and in 1922–1923 proved conclusively that Andromeda Nebula and Triangulum among others, were entire galaxies outside our own, thus proving that the universe consists of a multitude of galaxies. With this Hubble formulated the Hubble constant, which allowed for the first time a calculation of the age of the Universe and size of the Observable Universe, which became increasingly precise with better meassurements, starting at 2 billion years and 280 million light-years, until 2006 when data of the Hubble Space Telescope allowed a very accurate calculation of the age of the Universe and size of the Observable Universe. Theoretical astronomy led to speculations on the existence of objects such as black holes and neutron stars, which have been used to explain such observed phenomena as quasars, pulsars, blazars, and radio galaxies. Physical cosmology made huge advances during the 20th century. In the early 1900s the model of the Big Bang theory was formulated, heavily evidenced by cosmic microwave background radiation, Hubble's law, and the cosmological abundances of elements. Space telescopes have enabled measurements in parts of the electromagnetic spectrum normally blocked or blurred by the atmosphere. In February 2016, it was revealed that the LIGO project had detected evidence of gravitational waves in the previous September. Observational astronomy The main source of information about celestial bodies and other objects is visible light, or more generally electromagnetic radiation. Observational astronomy may be categorized according to the corresponding region of the electromagnetic spectrum on which the observations are made. Some parts of the spectrum can be observed from the Earth's surface, while other parts are only observable from either high altitudes or outside the Earth's atmosphere. Specific information on these subfields is given below. Radio astronomy Radio astronomy uses radiation with wavelengths greater than approximately one millimeter, outside the visible range. Radio astronomy is different from most other forms of observational astronomy in that the observed radio waves can be treated as waves rather than as discrete photons. Hence, it is relatively easier to measure both the amplitude and phase of radio waves, whereas this is not as easily done at shorter wavelengths. 
Although some radio waves are emitted directly by astronomical objects, a product of thermal emission, most of the radio emission that is observed is the result of synchrotron radiation, which is produced when electrons orbit magnetic fields. Additionally, a number of spectral lines produced by interstellar gas, notably the hydrogen spectral line at 21 cm, are observable at radio wavelengths. A wide variety of other objects are observable at radio wavelengths, including supernovae, interstellar gas, pulsars, and active galactic nuclei. Infrared astronomy Infrared astronomy is founded on the detection and analysis of infrared radiation, wavelengths longer than red light and outside the range of our vision. The infrared spectrum is useful for studying objects that are too cold to radiate visible light, such as planets, circumstellar disks or nebulae whose light is blocked by dust. The longer wavelengths of infrared can penetrate clouds of dust that block visible light, allowing the observation of young stars embedded in molecular clouds and the cores of galaxies. Observations from the Wide-field Infrared Survey Explorer (WISE) have been particularly effective at unveiling numerous galactic protostars and their host star clusters. With the exception of infrared wavelengths close to visible light, such radiation is heavily absorbed by the atmosphere, or masked, as the atmosphere itself produces significant infrared emission. Consequently, infrared observatories have to be located in high, dry places on Earth or in space. Some molecules radiate strongly in the infrared. This allows the study of the chemistry of space; more specifically it can detect water in comets. Optical astronomy Historically, optical astronomy, which has been also called visible light astronomy, is the oldest form of astronomy. Images of observations were originally drawn by hand. In the late 19th century and most of the 20th century, images were made using photographic equipment. Modern images are made using digital detectors, particularly using charge-coupled devices (CCDs) and recorded on modern medium. Although visible light itself extends from approximately 4000 Å to 7000 Å (400 nm to 700 nm), that same equipment can be used to observe some near-ultraviolet and near-infrared radiation. Ultraviolet astronomy Ultraviolet astronomy employs ultraviolet wavelengths between approximately 100 and 3200 Å (10 to 320 nm). Light at those wavelengths is absorbed by the Earth's atmosphere, requiring observations at these wavelengths to be performed from the upper atmosphere or from space. Ultraviolet astronomy is best suited to the study of thermal radiation and spectral emission lines from hot blue stars (OB stars) that are very bright in this wave band. This includes the blue stars in other galaxies, which have been the targets of several ultraviolet surveys. Other objects commonly observed in ultraviolet light include planetary nebulae, supernova remnants, and active galactic nuclei. However, as ultraviolet light is easily absorbed by interstellar dust, an adjustment of ultraviolet measurements is necessary. X-ray astronomy X-ray astronomy uses X-ray wavelengths. Typically, X-ray radiation is produced by synchrotron emission (the result of electrons orbiting magnetic field lines), thermal emission from thin gases above 107 (10 million) kelvins, and thermal emission from thick gases above 107 Kelvin. 
Since X-rays are absorbed by the Earth's atmosphere, all X-ray observations must be performed from high-altitude balloons, rockets, or X-ray astronomy satellites. Notable X-ray sources include X-ray binaries, pulsars, supernova remnants, elliptical galaxies, clusters of galaxies, and active galactic nuclei. Gamma-ray astronomy Gamma ray astronomy observes astronomical objects at the shortest wavelengths of the electromagnetic spectrum. Gamma rays may be observed directly by satellites such as the Compton Gamma Ray Observatory or by specialized telescopes called atmospheric Cherenkov telescopes. The Cherenkov telescopes do not detect the gamma rays directly but instead detect the flashes of visible light produced when gamma rays are absorbed by the Earth's atmosphere. Most gamma-ray emitting sources are actually gamma-ray bursts, objects which only produce gamma radiation for a few milliseconds to thousands of seconds before fading away. Only 10% of gamma-ray sources are non-transient sources. These steady gamma-ray emitters include pulsars, neutron stars, and black hole candidates such as active galactic nuclei. Fields not based on the electromagnetic spectrum In addition to electromagnetic radiation, a few other events originating from great distances may be observed from the Earth. In neutrino astronomy, astronomers use heavily shielded underground facilities such as SAGE, GALLEX, and Kamioka II/III for the detection of neutrinos. The vast majority of the neutrinos streaming through the Earth originate from the Sun, but 24 neutrinos were also detected from supernova 1987A. Cosmic rays, which consist of very high energy particles (atomic nuclei) that can decay or be absorbed when they enter the Earth's atmosphere, result in a cascade of secondary particles which can be detected by current observatories. Some future neutrino detectors may also be sensitive to the particles produced when cosmic rays hit the Earth's atmosphere. Gravitational-wave astronomy is an emerging field of astronomy that employs gravitational-wave detectors to collect observational data about distant massive objects. A few observatories have been constructed, such as the Laser Interferometer Gravitational Observatory LIGO. LIGO made its first detection on 14 September 2015, observing gravitational waves from a binary black hole. A second gravitational wave was detected on 26 December 2015 and additional observations should continue but gravitational waves require extremely sensitive instruments. The combination of observations made using electromagnetic radiation, neutrinos or gravitational waves and other complementary information, is known as multi-messenger astronomy. Astrometry and celestial mechanics One of the oldest fields in astronomy, and in all of science, is the measurement of the positions of celestial objects. Historically, accurate knowledge of the positions of the Sun, Moon, planets and stars has been essential in celestial navigation (the use of celestial objects to guide navigation) and in the making of calendars. Careful measurement of the positions of the planets has led to a solid understanding of gravitational perturbations, and an ability to determine past and future positions of the planets with great accuracy, a field known as celestial mechanics. More recently the tracking of near-Earth objects will allow for predictions of close encounters or potential collisions of the Earth with those objects. 
The measurement of stellar parallax of nearby stars provides a fundamental baseline in the cosmic distance ladder that is used to measure the scale of the Universe. Parallax measurements of nearby stars provide an absolute baseline for the properties of more distant stars, as their properties can be compared. Measurements of the radial velocity and proper motion of stars allow astronomers to plot the movement of these systems through the Milky Way galaxy. Astrometric results are the basis used to calculate the distribution of speculated dark matter in the galaxy. During the 1990s, the measurement of the stellar wobble of nearby stars was used to detect large extrasolar planets orbiting those stars. Theoretical astronomy Theoretical astronomers use several tools including analytical models and computational numerical simulations; each has its particular advantages. Analytical models of a process are better for giving broader insight into the heart of what is going on. Numerical models reveal the existence of phenomena and effects otherwise unobserved. Theorists in astronomy endeavor to create theoretical models that are based on existing observations and known physics, and to predict observational consequences of those models. The observation of phenomena predicted by a model allows astronomers to select between several alternative or conflicting models. Theorists also modify existing models to take into account new observations. In some cases, a large amount of observational data that is inconsistent with a model may lead to abandoning it largely or completely, as for geocentric theory, the existence of luminiferous aether, and the steady-state model of cosmic evolution. Phenomena modeled by theoretical astronomers include: stellar dynamics and evolution galaxy formation large-scale distribution of matter in the Universe the origin of cosmic rays general relativity and physical cosmology, including string cosmology and astroparticle physics. Modern theoretical astronomy reflects dramatic advances in observation since the 1990s, including studies of the cosmic microwave background, distant supernovae and galaxy redshifts, which have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations. Specific subfields Astrophysics Astrophysics is the branch of astronomy that employs the principles of physics and chemistry "to ascertain the nature of the astronomical objects, rather than their positions or motions in space". Among the objects studied are the Sun, other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background. Their emissions are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics. In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. 
Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, and black holes; whether or not time travel is possible, wormholes can form, or the multiverse exists; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Astrochemistry Astrochemistry is the study of the abundance and reactions of molecules in the Universe, and their interaction with radiation. The discipline is an overlap of astronomy and chemistry. The word "astrochemistry" may be applied to both the Solar System and the interstellar medium. The study of the abundance of elements and isotope ratios in Solar System objects, such as meteorites, is also called cosmochemistry, while the study of interstellar atoms and molecules and their interaction with radiation is sometimes called molecular astrophysics. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds is of special interest, because it is from these clouds that solar systems form. Studies in this field contribute to the understanding of the formation of the Solar System, Earth's origin and geology, abiogenesis, and the origin of climate and oceans. Astrobiology Astrobiology is an interdisciplinary scientific field concerned with the origins, early evolution, distribution, and future of life in the universe. Astrobiology considers the question of whether extraterrestrial life exists, and how humans can detect it if it does. The term exobiology is similar. Astrobiology makes use of molecular biology, biophysics, biochemistry, chemistry, astronomy, physical cosmology, exoplanetology and geology to investigate the possibility of life on other worlds and help recognize biospheres that might be different from that on Earth. The origin and early evolution of life is an inseparable part of the discipline of astrobiology. Astrobiology concerns itself with interpretation of existing scientific data, and although speculation is entertained to give context, astrobiology concerns itself primarily with hypotheses that fit firmly into existing scientific theories. This interdisciplinary field encompasses research on the origin of planetary systems, origins of organic compounds in space, rock-water-carbon interactions, abiogenesis on Earth, planetary habitability, research on biosignatures for life detection, and studies on the potential for life to adapt to challenges on Earth and in outer space. Physical cosmology Cosmology (from the Greek () "world, universe" and () "word, study" or literally "logic") could be considered the study of the Universe as a whole. Observations of the large-scale structure of the Universe, a branch known as physical cosmology, have provided a deep understanding of the formation and evolution of the cosmos. Fundamental to modern cosmology is the well-accepted theory of the Big Bang, wherein our Universe began at a single point in time, and thereafter expanded over the course of 13.8 billion years to its present condition. The concept of the Big Bang can be traced back to the discovery of the microwave background radiation in 1965. In the course of this expansion, the Universe underwent several evolutionary stages. 
In the very early moments, it is theorized that the Universe experienced a very rapid cosmic inflation, which homogenized the starting conditions. Thereafter, nucleosynthesis produced the elemental abundance of the early Universe. (See also nucleocosmochronology.) When the first neutral atoms formed from a sea of primordial ions, space became transparent to radiation, releasing the energy viewed today as the microwave background radiation. The expanding Universe then underwent a Dark Age due to the lack of stellar energy sources. A hierarchical structure of matter began to form from minute variations in the mass density of space. Matter accumulated in the densest regions, forming clouds of gas and the earliest stars, the Population III stars. These massive stars triggered the reionization process and are believed to have created many of the heavy elements in the early Universe, which, through nuclear decay, create lighter elements, allowing the cycle of nucleosynthesis to continue longer. Gravitational aggregations clustered into filaments, leaving voids in the gaps. Gradually, organizations of gas and dust merged to form the first primitive galaxies. Over time, these pulled in more matter, and were often organized into groups and clusters of galaxies, then into larger-scale superclusters. Fundamental to the structure of the Universe is the existence of dark matter and dark energy. These are now thought to be its dominant components, forming 96% of the mass of the Universe. For this reason, much effort is expended in trying to understand the physics of these components. Extragalactic astronomy The study of objects outside our galaxy is a branch of astronomy concerned with the formation and evolution of galaxies, their morphology (description) and classification, the observation of active galaxies, and at a larger scale, the groups and clusters of galaxies. Finally, the latter is important for the understanding of the large-scale structure of the cosmos. Most galaxies are organized into distinct shapes that allow for classification schemes. They are commonly divided into spiral, elliptical and Irregular galaxies. As the name suggests, an elliptical galaxy has the cross-sectional shape of an ellipse. The stars move along random orbits with no preferred direction. These galaxies contain little or no interstellar dust, few star-forming regions, and older stars. Elliptical galaxies may have been formed by other galaxies merging. A spiral galaxy is organized into a flat, rotating disk, usually with a prominent bulge or bar at the center, and trailing bright arms that spiral outward. The arms are dusty regions of star formation within which massive young stars produce a blue tint. Spiral galaxies are typically surrounded by a halo of older stars. Both the Milky Way and one of our nearest galaxy neighbors, the Andromeda Galaxy, are spiral galaxies. Irregular galaxies are chaotic in appearance, and are neither spiral nor elliptical. About a quarter of all galaxies are irregular, and the peculiar shapes of such galaxies may be the result of gravitational interaction. An active galaxy is a formation that emits a significant amount of its energy from a source other than its stars, dust and gas. It is powered by a compact region at the core, thought to be a supermassive black hole that is emitting radiation from in-falling material. A radio galaxy is an active galaxy that is very luminous in the radio portion of the spectrum, and is emitting immense plumes or lobes of gas. 
Active galaxies that emit shorter frequency, high-energy radiation include Seyfert galaxies, quasars, and blazars. Quasars are believed to be the most consistently luminous objects in the known universe. The large-scale structure of the cosmos is represented by groups and clusters of galaxies. This structure is organized into a hierarchy of groupings, with the largest being the superclusters. The collective matter is formed into filaments and walls, leaving large voids between. Galactic astronomy The Solar System orbits within the Milky Way, a barred spiral galaxy that is a prominent member of the Local Group of galaxies. It is a rotating mass of gas, dust, stars and other objects, held together by mutual gravitational attraction. As the Earth is located within the dusty outer arms, there are large portions of the Milky Way that are obscured from view. In the center of the Milky Way is the core, a bar-shaped bulge with what is believed to be a supermassive black hole at its center. This is surrounded by four primary arms that spiral from the core. This is a region of active star formation that contains many younger, population I stars. The disk is surrounded by a spheroid halo of older, population II stars, as well as relatively dense concentrations of stars known as globular clusters. Between the stars lies the interstellar medium, a region of sparse matter. In the densest regions, molecular clouds of molecular hydrogen and other elements create star-forming regions. These begin as a compact pre-stellar core or dark nebulae, which concentrate and collapse (in volumes determined by the Jeans length) to form compact protostars. As the more massive stars appear, they transform the cloud into an H II region (ionized atomic hydrogen) of glowing gas and plasma. The stellar wind and supernova explosions from these stars eventually cause the cloud to disperse, often leaving behind one or more young open clusters of stars. These clusters gradually disperse, and the stars join the population of the Milky Way. Kinematic studies of matter in the Milky Way and other galaxies have demonstrated that there is more mass than can be accounted for by visible matter. A dark matter halo appears to dominate the mass, although the nature of this dark matter remains undetermined. Stellar astronomy The study of stars and stellar evolution is fundamental to our understanding of the Universe. The astrophysics of stars has been determined through observation and theoretical understanding; and from computer simulations of the interior. Star formation occurs in dense regions of dust and gas, known as giant molecular clouds. When destabilized, cloud fragments can collapse under the influence of gravity, to form a protostar. A sufficiently dense, and hot, core region will trigger nuclear fusion, thus creating a main-sequence star. Almost all elements heavier than hydrogen and helium were created inside the cores of stars. The characteristics of the resulting star depend primarily upon its starting mass. The more massive the star, the greater its luminosity, and the more rapidly it fuses its hydrogen fuel into helium in its core. Over time, this hydrogen fuel is completely converted into helium, and the star begins to evolve. The fusion of helium requires a higher core temperature. A star with a high enough core temperature will push its outer layers outward while increasing its core density. 
The resulting red giant formed by the expanding outer layers enjoys a brief life span, before the helium fuel in the core is in turn consumed. Very massive stars can also undergo a series of evolutionary phases, as they fuse increasingly heavier elements. The final fate of the star depends on its mass, with stars of mass greater than about eight times the Sun becoming core collapse supernovae; while smaller stars blow off their outer layers and leave behind the inert core in the form of a white dwarf. The ejection of the outer layers forms a planetary nebula. The remnant of a supernova is a dense neutron star, or, if the stellar mass was at least three times that of the Sun, a black hole. Closely orbiting binary stars can follow more complex evolutionary paths, such as mass transfer onto a white dwarf companion that can potentially cause a supernova. Planetary nebulae and supernovae distribute the "metals" produced in the star by fusion to the interstellar medium; without them, all new stars (and their planetary systems) would be formed from hydrogen and helium alone. Solar astronomy At a distance of about eight light-minutes, the most frequently studied star is the Sun, a typical main-sequence dwarf star of stellar class G2 V, and about 4.6 billion years (Gyr) old. The Sun is not considered a variable star, but it does undergo periodic changes in activity known as the sunspot cycle. This is an 11-year oscillation in sunspot number. Sunspots are regions of lower-than-average temperatures that are associated with intense magnetic activity. The Sun has steadily increased in luminosity by 40% since it first became a main-sequence star. The Sun has also undergone periodic changes in luminosity that can have a significant impact on the Earth. The Maunder minimum, for example, is believed to have caused the Little Ice Age phenomenon during the Middle Ages. At the center of the Sun is the core region, a volume of sufficient temperature and pressure for nuclear fusion to occur. Above the core is the radiation zone, where the plasma conveys the energy flux by means of radiation. Above that is the convection zone where the gas material transports energy primarily through physical displacement of the gas known as convection. It is believed that the movement of mass within the convection zone creates the magnetic activity that generates sunspots. The visible outer surface of the Sun is called the photosphere. Above this layer is a thin region known as the chromosphere. This is surrounded by a transition region of rapidly increasing temperatures, and finally by the super-heated corona. A solar wind of plasma particles constantly streams outward from the Sun until, at the outermost limit of the Solar System, it reaches the heliopause. As the solar wind passes the Earth, it interacts with the Earth's magnetic field (magnetosphere) and deflects the solar wind, but traps some creating the Van Allen radiation belts that envelop the Earth. The aurora are created when solar wind particles are guided by the magnetic flux lines into the Earth's polar regions where the lines then descend into the atmosphere. Planetary science Planetary science is the study of the assemblage of planets, moons, dwarf planets, comets, asteroids, and other bodies orbiting the Sun, as well as extrasolar planets. The Solar System has been relatively well-studied, initially through telescopes and then later by spacecraft. 
This has provided a good overall understanding of the formation and evolution of the Sun's planetary system, although many new discoveries are still being made. The Solar System is divided into the inner Solar System (subdivided into the inner planets and the asteroid belt), the outer Solar System (subdivided into the outer planets and centaurs), comets, the trans-Neptunian region (subdivided into the Kuiper belt, and the scattered disc) and the farthest regions (e.g., boundaries of the heliosphere, and the Oort Cloud, which may extend as far as a light-year). The inner terrestrial planets consist of Mercury, Venus, Earth, and Mars. The outer giant planets are the gas giants (Jupiter and Saturn) and the ice giants (Uranus and Neptune). The planets were formed 4.6 billion years ago in the protoplanetary disk that surrounded the early Sun. Through a process that included gravitational attraction, collision, and accretion, the disk formed clumps of matter that, with time, became protoplanets. The radiation pressure of the solar wind then expelled most of the unaccreted matter, and only those planets with sufficient mass retained their gaseous atmosphere. The planets continued to sweep up, or eject, the remaining matter during a period of intense bombardment, evidenced by the many impact craters on the Moon. During this period, some of the protoplanets may have collided and one such collision may have formed the Moon. Once a planet reaches sufficient mass, the materials of different densities segregate within, during planetary differentiation. This process can form a stony or metallic core, surrounded by a mantle and an outer crust. The core may include solid and liquid regions, and some planetary cores generate their own magnetic field, which can protect their atmospheres from solar wind stripping. A planet or moon's interior heat is produced from the collisions that created the body, by the decay of radioactive materials (e.g. uranium, thorium, and 26Al), or tidal heating caused by interactions with other bodies. Some planets and moons accumulate enough heat to drive geologic processes such as volcanism and tectonics. Those that accumulate or retain an atmosphere can also undergo surface erosion from wind or water. Smaller bodies, without tidal heating, cool more quickly; and their geological activity ceases with the exception of impact cratering. Interdisciplinary studies Astronomy and astrophysics have developed significant interdisciplinary links with other major scientific fields. Archaeoastronomy is the study of ancient or traditional astronomies in their cultural context, utilizing archaeological and anthropological evidence. Astrobiology is the study of the advent and evolution of biological systems in the Universe, with particular emphasis on the possibility of non-terrestrial life. Astrostatistics is the application of statistics to astrophysics to the analysis of a vast amount of observational astrophysical data. The study of chemicals found in space, including their formation, interaction and destruction, is called astrochemistry. These substances are usually found in molecular clouds, although they may also appear in low-temperature stars, brown dwarfs and planets. Cosmochemistry is the study of the chemicals found within the Solar System, including the origins of the elements and variations in the isotope ratios. Both of these fields represent an overlap of the disciplines of astronomy and chemistry. 
As "forensic astronomy", finally, methods from astronomy have been used to solve problems of art history and occasionally of law. Amateur astronomy Astronomy is one of the sciences to which amateurs can contribute the most. Collectively, amateur astronomers observe a variety of celestial objects and phenomena sometimes with consumer-level equipment or equipment that they build themselves. Common targets of amateur astronomers include the Sun, the Moon, planets, stars, comets, meteor showers, and a variety of deep-sky objects such as star clusters, galaxies, and nebulae. Astronomy clubs are located throughout the world and many have programs to help their members set up and complete observational programs including those to observe all the objects in the Messier (110 objects) or Herschel 400 catalogues of points of interest in the night sky. One branch of amateur astronomy, astrophotography, involves the taking of photos of the night sky. Many amateurs like to specialize in the observation of particular objects, types of objects, or types of events that interest them. Most amateurs work at visible wavelengths, but many experiment with wavelengths outside the visible spectrum. This includes the use of infrared filters on conventional telescopes, and also the use of radio telescopes. The pioneer of amateur radio astronomy was Karl Jansky, who started observing the sky at radio wavelengths in the 1930s. A number of amateur astronomers use either homemade telescopes or use radio telescopes which were originally built for astronomy research but which are now available to amateurs (e.g. the One-Mile Telescope). Amateur astronomers continue to make scientific contributions to the field of astronomy and it is one of the few scientific disciplines where amateurs can still make significant contributions. Amateurs can make occultation measurements that are used to refine the orbits of minor planets. They can also discover comets, and perform regular observations of variable stars. Improvements in digital technology have allowed amateurs to make impressive advances in the field of astrophotography. Unsolved problems in astronomy In the 21st century there remain important unanswered questions in astronomy. Some are cosmic in scope: for example, what are dark matter and dark energy? These dominate the evolution and fate of the cosmos, yet their true nature remains unknown. What will be the ultimate fate of the universe? Why is the abundance of lithium in the cosmos four times lower than predicted by the standard Big Bang model? Others pertain to more specific classes of phenomena. For example, is the Solar System normal or atypical? What is the origin of the stellar mass spectrum? That is, why do astronomers observe the same distribution of stellar masses—the initial mass function—apparently regardless of the initial conditions? Likewise, questions remain about the formation of the first galaxies, the origin of supermassive black holes, the source of ultra-high-energy cosmic rays, and more. Is there other life in the Universe? Especially, is there other intelligent life? If so, what is the explanation for the Fermi paradox? The existence of life elsewhere has important scientific and philosophical implications. See also Lists References Bibliography External links NASA/IPAC Extragalactic Database (NED) (NED-Distances) Core books and Core journals in Astronomy, from the Smithsonian/NASA Astrophysics Data System Solar System
Astronomy
[ "Astronomy" ]
8,636
[ "nan", "Outer space", "Solar System" ]
50,652
https://en.wikipedia.org/wiki/Uniform%20convergence
In the mathematical field of analysis, uniform convergence is a mode of convergence of functions stronger than pointwise convergence. A sequence of functions converges uniformly to a limiting function on a set as the function domain if, given any arbitrarily small positive number , a number can be found such that each of the functions differs from by no more than at every point in . Described in an informal way, if converges to uniformly, then how quickly the functions approach is "uniform" throughout in the following sense: in order to guarantee that differs from by less than a chosen distance , we only need to make sure that is larger than or equal to a certain , which we can find without knowing the value of in advance. In other words, there exists a number that could depend on but is independent of , such that choosing will ensure that for all . In contrast, pointwise convergence of to merely guarantees that for any given in advance, we can find (i.e., could depend on the values of both and ) such that, for that particular , falls within of whenever (and a different may require a different, larger for to guarantee that ). The difference between uniform convergence and pointwise convergence was not fully appreciated early in the history of calculus, leading to instances of faulty reasoning. The concept, which was first formalized by Karl Weierstrass, is important because several properties of the functions , such as continuity, Riemann integrability, and, with additional hypotheses, differentiability, are transferred to the limit if the convergence is uniform, but not necessarily if the convergence is not uniform. History In 1821 Augustin-Louis Cauchy published a proof that a convergent sum of continuous functions is always continuous, to which Niels Henrik Abel in 1826 found purported counterexamples in the context of Fourier series, arguing that Cauchy's proof had to be incorrect. Completely standard notions of convergence did not exist at the time, and Cauchy handled convergence using infinitesimal methods. When put into the modern language, what Cauchy proved is that a uniformly convergent sequence of continuous functions has a continuous limit. The failure of a merely pointwise-convergent limit of continuous functions to converge to a continuous function illustrates the importance of distinguishing between different types of convergence when handling sequences of functions. The term uniform convergence was probably first used by Christoph Gudermann, in an 1838 paper on elliptic functions, where he employed the phrase "convergence in a uniform way" when the "mode of convergence" of a series is independent of the variables and While he thought it a "remarkable fact" when a series converged in this way, he did not give a formal definition, nor use the property in any of his proofs. Later Gudermann's pupil Karl Weierstrass, who attended his course on elliptic functions in 1839–1840, coined the term gleichmäßig konvergent () which he used in his 1841 paper Zur Theorie der Potenzreihen, published in 1894. Independently, similar concepts were articulated by Philipp Ludwig von Seidel and George Gabriel Stokes. G. H. Hardy compares the three definitions in his paper "Sir George Stokes and the concept of uniform convergence" and remarks: "Weierstrass's discovery was the earliest, and he alone fully realized its far-reaching importance as one of the fundamental ideas of analysis." 
Under the influence of Weierstrass and Bernhard Riemann this concept and related questions were intensely studied at the end of the 19th century by Hermann Hankel, Paul du Bois-Reymond, Ulisse Dini, Cesare Arzelà and others. Definition We first define uniform convergence for real-valued functions, although the concept is readily generalized to functions mapping to metric spaces and, more generally, uniform spaces (see below). Suppose E is a set and (f_n) is a sequence of real-valued functions on it. We say the sequence (f_n) is uniformly convergent on E with limit f if for every ε > 0 there exists a natural number N such that |f_n(x) − f(x)| < ε for all n ≥ N and for all x ∈ E. The notation for uniform convergence of f_n to f is not quite standardized and different authors have used a variety of symbols. Frequently, no special symbol is used, and authors simply write f_n → f uniformly to indicate that convergence is uniform. (In contrast, the expression f_n → f on E without an adverb is taken to mean pointwise convergence on E: for all x ∈ E, f_n(x) → f(x) as n → ∞.) Since the real numbers form a complete metric space, the Cauchy criterion can be used to give an equivalent alternative formulation for uniform convergence: (f_n) converges uniformly on E (in the previous sense) if and only if for every ε > 0, there exists a natural number N such that |f_m(x) − f_n(x)| < ε for all m, n ≥ N and all x ∈ E. In yet another equivalent formulation, if we define a_n = sup_{x ∈ E} |f_n(x) − f(x)|, then f_n converges to f uniformly if and only if a_n → 0 as n → ∞. Thus, we can characterize uniform convergence of (f_n) on E as (simple) convergence of (f_n) in the function space with respect to the uniform metric (also called the supremum metric), defined by d(f, g) = sup_{x ∈ E} |f(x) − g(x)|. Symbolically, f_n → f uniformly if and only if d(f_n, f) → 0. The sequence (f_n) is said to be locally uniformly convergent with limit f if E is a metric space and for every x ∈ E, there exists an r > 0 such that (f_n) converges uniformly on B(x, r) ∩ E. It is clear that uniform convergence implies local uniform convergence, which implies pointwise convergence. Notes Intuitively, a sequence of functions f_n converges uniformly to f if, given an arbitrarily small ε > 0, we can find an N so that the functions f_n with n > N all fall within a "tube" of width 2ε centered around f (i.e., between f − ε and f + ε) for the entire domain of the function. Note that interchanging the order of quantifiers in the definition of uniform convergence by moving "for all x" in front of "there exists a natural number N" results in a definition of pointwise convergence of the sequence. To make this difference explicit, in the case of uniform convergence, N can only depend on ε, and the choice of N has to work for all x, for a specific value of ε that is given. In contrast, in the case of pointwise convergence, N may depend on both ε and x, and the choice of N only has to work for the specific values of ε and x that are given. Thus uniform convergence implies pointwise convergence, however the converse is not true, as the example in the section below illustrates. Generalizations One may straightforwardly extend the concept to functions E → M, where (M, d) is a metric space, by replacing |f_n(x) − f(x)| < ε with d(f_n(x), f(x)) < ε. The most general setting is the uniform convergence of nets of functions E → X, where X is a uniform space. We say that the net (f_α) converges uniformly with limit f : E → X if and only if for every entourage V in X, there exists an α_0, such that for every x in E and every α ≥ α_0, the pair (f_α(x), f(x)) is in V. In this situation, the uniform limit of continuous functions remains continuous. Definition in a hyperreal setting Uniform convergence admits a simplified definition in a hyperreal setting. 
Thus, a sequence converges to f uniformly if for all hyperreal x in the domain of and all infinite n, is infinitely close to (see microcontinuity for a similar definition of uniform continuity). In contrast, pointwise continuity requires this only for real x. Examples For , a basic example of uniform convergence can be illustrated as follows: the sequence converges uniformly, while does not. Specifically, assume . Each function is less than or equal to when , regardless of the value of . On the other hand, is only less than or equal to at ever increasing values of when values of are selected closer and closer to 1 (explained more in depth further below). Given a topological space X, we can equip the space of bounded real or complex-valued functions over X with the uniform norm topology, with the uniform metric defined by Then uniform convergence simply means convergence in the uniform norm topology: . The sequence of functions is a classic example of a sequence of functions that converges to a function pointwise but not uniformly. To show this, we first observe that the pointwise limit of as is the function , given by Pointwise convergence: Convergence is trivial for and , since and , for all . For and given , we can ensure that whenever by choosing , which is the minimum integer exponent of that allows it to reach or dip below (here the upper square brackets indicate rounding up, see ceiling function). Hence, pointwise for all . Note that the choice of depends on the value of and . Moreover, for a fixed choice of , (which cannot be defined to be smaller) grows without bound as approaches 1. These observations preclude the possibility of uniform convergence. Non-uniformity of convergence: The convergence is not uniform, because we can find an so that no matter how large we choose there will be values of and such that To see this, first observe that regardless of how large becomes, there is always an such that Thus, if we choose we can never find an such that for all and . Explicitly, whatever candidate we choose for , consider the value of at . Since the candidate fails because we have found an example of an that "escaped" our attempt to "confine" each to within of for all . In fact, it is easy to see that contrary to the requirement that if . In this example one can easily see that pointwise convergence does not preserve differentiability or continuity. While each function of the sequence is smooth, that is to say that for all n, , the limit is not even continuous. Exponential function The series expansion of the exponential function can be shown to be uniformly convergent on any bounded subset using the Weierstrass M-test. Theorem (Weierstrass M-test). Let be a sequence of functions and let be a sequence of positive real numbers such that for all and If converges, then converges absolutely and uniformly on . The complex exponential function can be expressed as the series: Any bounded subset is a subset of some disc of radius centered on the origin in the complex plane. The Weierstrass M-test requires us to find an upper bound on the terms of the series, with independent of the position in the disc: To do this, we notice and take If is convergent, then the M-test asserts that the original series is uniformly convergent. The ratio test can be used here: which means the series over is convergent. Thus the original series converges uniformly for all and since , the series is also uniformly convergent on Properties Every uniformly convergent sequence is locally uniformly convergent. 
Every locally uniformly convergent sequence is compactly convergent. For locally compact spaces local uniform convergence and compact convergence coincide. A sequence of continuous functions on metric spaces, with the image metric space being complete, is uniformly convergent if and only if it is uniformly Cauchy. If is a compact interval (or in general a compact topological space), and is a monotone increasing sequence (meaning for all n and x) of continuous functions with a pointwise limit which is also continuous, then the convergence is necessarily uniform (Dini's theorem). Uniform convergence is also guaranteed if is a compact interval and is an equicontinuous sequence that converges pointwise. Applications To continuity If and are topological spaces, then it makes sense to talk about the continuity of the functions . If we further assume that is a metric space, then (uniform) convergence of the to is also well defined. The following result states that continuity is preserved by uniform convergence: This theorem is proved by the " trick", and is the archetypal example of this trick: to prove a given inequality (), one uses the definitions of continuity and uniform convergence to produce 3 inequalities (), and then combines them via the triangle inequality to produce the desired inequality. This theorem is an important one in the history of real and Fourier analysis, since many 18th century mathematicians had the intuitive understanding that a sequence of continuous functions always converges to a continuous function. The image above shows a counterexample, and many discontinuous functions could, in fact, be written as a Fourier series of continuous functions. The erroneous claim that the pointwise limit of a sequence of continuous functions is continuous (originally stated in terms of convergent series of continuous functions) is infamously known as "Cauchy's wrong theorem". The uniform limit theorem shows that a stronger form of convergence, uniform convergence, is needed to ensure the preservation of continuity in the limit function. More precisely, this theorem states that the uniform limit of uniformly continuous functions is uniformly continuous; for a locally compact space, continuity is equivalent to local uniform continuity, and thus the uniform limit of continuous functions is continuous. To differentiability If is an interval and all the functions are differentiable and converge to a limit , it is often desirable to determine the derivative function by taking the limit of the sequence . This is however in general not possible: even if the convergence is uniform, the limit function need not be differentiable (not even if the sequence consists of everywhere-analytic functions, see Weierstrass function), and even if it is differentiable, the derivative of the limit function need not be equal to the limit of the derivatives. Consider for instance with uniform limit . Clearly, is also identically zero. However, the derivatives of the sequence of functions are given by and the sequence does not converge to or even to any function at all. 
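A standard explicit sequence exhibiting exactly this behaviour — the particular choice below is supplied here as an illustration, with f_n and its derivative written in the usual notation — is the following:

```latex
% Uniform convergence does not control the derivatives:
f_n(x) = \frac{\sin(nx)}{\sqrt{n}} \longrightarrow 0 \quad\text{uniformly on } \mathbb{R},
\qquad\text{since } \sup_x |f_n(x)| = \tfrac{1}{\sqrt{n}} \to 0,
% yet the derivatives
f_n'(x) = \sqrt{n}\,\cos(nx)
% do not converge to the derivative of the limit (which is identically 0);
% for example f_n'(0) = \sqrt{n} \to \infty.
```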
In order to ensure a connection between the limit of a sequence of differentiable functions and the limit of the sequence of derivatives, the uniform convergence of the sequence of derivatives plus the convergence of the sequence of functions at at least one point is required: If is a sequence of differentiable functions on such that exists (and is finite) for some and the sequence converges uniformly on , then converges uniformly to a function on , and for . To integrability Similarly, one often wants to exchange integrals and limit processes. For the Riemann integral, this can be done if uniform convergence is assumed: If is a sequence of Riemann integrable functions defined on a compact interval which uniformly converge with limit , then is Riemann integrable and its integral can be computed as the limit of the integrals of the : In fact, for a uniformly convergent family of bounded functions on an interval, the upper and lower Riemann integrals converge to the upper and lower Riemann integrals of the limit function. This follows because, for n sufficiently large, the graph of is within of the graph of f, and so the upper sum and lower sum of are each within of the value of the upper and lower sums of , respectively. Much stronger theorems in this respect, which require not much more than pointwise convergence, can be obtained if one abandons the Riemann integral and uses the Lebesgue integral instead. To analyticity Using Morera's Theorem, one can show that if a sequence of analytic functions converges uniformly in a region S of the complex plane, then the limit is analytic in S. This example demonstrates that complex functions are more well-behaved than real functions, since the uniform limit of analytic functions on a real interval need not even be differentiable (see Weierstrass function). To series We say that converges: With this definition comes the following result: Let x0 be contained in the set E and each fn be continuous at x0. If converges uniformly on E then f is continuous at x0 in E. Suppose that and each fn is integrable on E. If converges uniformly on E then f is integrable on E and the series of integrals of fn is equal to integral of the series of fn. Almost uniform convergence If the domain of the functions is a measure space E then the related notion of almost uniform convergence can be defined. We say a sequence of functions converges almost uniformly on E if for every there exists a measurable set with measure less than such that the sequence of functions converges uniformly on . In other words, almost uniform convergence means there are sets of arbitrarily small measure for which the sequence of functions converges uniformly on their complement. Note that almost uniform convergence of a sequence does not mean that the sequence converges uniformly almost everywhere as might be inferred from the name. However, Egorov's theorem does guarantee that on a finite measure space, a sequence of functions that converges almost everywhere also converges almost uniformly on the same set. Almost uniform convergence implies almost everywhere convergence and convergence in measure. See also Uniform convergence in probability Modes of convergence (annotated index) Dini's theorem Arzelà–Ascoli theorem Notes References Konrad Knopp, Theory and Application of Infinite Series; Blackie and Son, London, 1954, reprinted by Dover Publications, . G. H. Hardy, Sir George Stokes and the concept of uniform convergence; Proceedings of the Cambridge Philosophical Society, 19, pp. 
148–156 (1918) Bourbaki; Elements of Mathematics: General Topology. Chapters 5–10 (paperback); Walter Rudin, Principles of Mathematical Analysis, 3rd ed., McGraw–Hill, 1976. Gerald Folland, Real Analysis: Modern Techniques and Their Applications, Second Edition, John Wiley & Sons, Inc., 1999, . William Wade, An Introduction to Analysis, 3rd ed., Pearson, 2005 External links Graphic examples of uniform convergence of Fourier series from the University of Colorado Calculus Mathematical series Topology of function spaces Convergence (mathematics)
Uniform convergence
[ "Mathematics" ]
3,553
[ "Sequences and series", "Functions and mappings", "Convergence (mathematics)", "Mathematical structures", "Series (mathematics)", "Calculus", "Mathematical objects", "Mathematical relations" ]
50,678
https://en.wikipedia.org/wiki/Matthias%20Jakob%20Schleiden
Matthias Jakob Schleiden (; 5 April 1804 – 23 June 1881) was a German botanist and co-founder of cell theory, along with Theodor Schwann and Rudolf Virchow. He published some poems and non-scientific work under the pseudonym Ernst. Career Matthias Jakob Schleiden was born in Hamburg on 5 April 1804. His father was the municipal physician of Hamburg. Schleiden pursued legal studies, graduating in 1827. He then established a legal practice, but after a period of emotional depression and attempted suicide, he changed professions. The suicide attempt left a prominent scar across his forehead. He studied natural science at the University of Göttingen in Göttingen, Germany, but transferred to the University of Berlin in 1835 to study plants. Johann Horkel, Schleiden's uncle, encouraged him to study plant embryology. He soon developed his love for botany into a full-time pursuit. Schleiden preferred to study plant structure under the microscope. As a professor of botany at the University of Jena, he wrote Contributions to our Knowledge of Phytogenesis (1838), in which he stated that all plants are composed of cells. Thus, Schleiden and Schwann became the first to formulate what was then an informal belief as a principle of biology equal in importance to the atomic theory of chemistry. He also recognized the importance of the cell nucleus, discovered in 1831 by the Scottish botanist Robert Brown, and sensed its connection with cell division. In 1838, the two scientists M. J. Schleiden and Theodor Schwann formulated a theory of cellular structure which stated that all living organisms are made up of cells and that the cell is the fundamental unit of living organisms. In 1855 Rudolf Virchow stated that all cells are formed from pre-existing cells. Although Schleiden was neither Jewish nor a historian by profession, he was noted for his defense of Judaism and his opposition to antisemitism, and wrote two works, Die Bedeutung der Juden für die Erhaltung und Wiederbelebung der Wissenschaften im Mittelalter (1877) and Die Romantik des Martyriums bei den Juden im Mittelalter (1878), published in English as The Sciences among the Jews Before and During the Middle Ages and The Importance of the Jews for the Preservation and Revival of Learning during the Middle Ages. He became a professor of botany at the University of Dorpat in 1863. He concluded that all plant parts are made of cells and that an embryonic plant organism arises from one cell. He died in Frankfurt am Main on 23 June 1881. Evolution Schleiden was an early advocate of evolution. A lecture on the "History of the Vegetable World", published in his book Die Pflanze und ihr Leben ("The Plant: A Biography", 1848), contained a passage that embraced the transmutation of species. He was one of the first German biologists to accept Charles Darwin's theory of evolution. He has been described as a leading proponent of Darwinism in Germany. With Die Pflanze und ihr Leben, reprinted six times by 1864, and his Studien: Populäre Vorträge ("Studies: Popular Lectures"), both written in a way that was accessible to lay readers, Schleiden contributed to creating momentum for popularizing science in Germany. Schleiden's popular writings included two volumes of poetry which appeared under the pseudonym "Ernst" in 1858 and 1873. American composer Harriet P. 
Sawyer set one of his poems to music with her song “Die ersten Tropfen fallen.” Selected publications On the Development of the Organization in Phaenogamous Plants (1838) The Plant, a Biography (1848) [translated by Arthur Henfrey] References External links Short biography and bibliography in the Virtual Laboratory of the Max Planck Institute for the History of Science Schwann, Theodor and Schleyden, M. J., Microscopical researches into the accordance in the structure and growth of animals and plants. London: Printed for the Sydenham Society, 1847. 1804 births 1881 deaths Burials at Frankfurt Main Cemetery 19th-century German botanists Heidelberg University alumni Scientists from Hamburg Proto-evolutionary biologists Academic staff of the University of Jena Academic staff of the University of Tartu
Matthias Jakob Schleiden
[ "Biology" ]
895
[ "Non-Darwinian evolution", "Biology theories", "Proto-evolutionary biologists" ]
50,680
https://en.wikipedia.org/wiki/Funicular
A funicular is a type of cable railway system that connects points along a railway track laid on a steep slope. The system is characterized by two counterbalanced carriages (also called cars or trains) permanently attached to opposite ends of a haulage cable, which is looped over a pulley at the upper end of the track. The result of such a configuration is that the two carriages move synchronously: as one ascends, the other descends at an equal speed. This feature distinguishes funiculars from inclined elevators, which have a single car that is hauled uphill. The term funicular derives from the Latin word funiculus, the diminutive of funis, meaning 'rope'. Operation In a funicular, both cars are permanently connected to the opposite ends of the same cable, known as a haul rope; this haul rope runs through a system of pulleys at the upper end of the line. If the railway track is not perfectly straight, the cable is guided along the track using sheaves – unpowered pulleys that simply allow the cable to change direction. While one car is pulled upwards by one end of the haul rope, the other car descends the slope at the other end. Since the weight of the two cars is counterbalanced (except for the weight of passengers), no lifting force is required to move them; the engine only has to lift the cable itself and the excess passengers, and supply the energy lost to friction by the cars' wheels and the pulleys. For passenger comfort, funicular carriages are often (although not always) constructed so that the floor of the passenger deck is horizontal, and not necessarily parallel to the sloped track. In some installations, the cars are also attached to a second cable – a bottom towrope – which runs through a pulley at the bottom of the incline. In these designs, one of the pulleys must be designed as a tensioning wheel to avoid slack in the ropes. One advantage of such an installation is that the weight of the rope is balanced between the carriages; therefore, the engine no longer needs to use any power to lift the cable itself. This practice is used on funiculars with slopes below 6%, funiculars using sledges instead of carriages, or any other case where it is not ensured that the descending car is always able to pull out the cable from the pulley in the station at the top of the incline. It is also used in systems where the engine room is located at the lower end of the track (such as the upper half of the Great Orme Tramway) – in such systems, the cable that runs through the top of the incline is still necessary to prevent the carriages from coasting down the incline. Types of power systems Cable drive In most modern funiculars, neither of the two carriages is equipped with an engine of its own. Instead, the propulsion is provided by an electric motor in the engine room (typically at the upper end of the track); the motor is linked via a speed-reducing gearbox to a large pulley – a drive bullwheel – which then controls the movement of the haul rope using friction. Some early funiculars were powered in the same way, but using steam engines or other types of motor. The bullwheel has two grooves: after the first half turn around it the cable returns via an auxiliary pulley. This arrangement has the advantage of doubling the contact area between the cable and the grooves, and of returning the downward-moving cable in the same plane as the upward-moving one. Modern installations also use high-friction liners to enhance the grip between the bullwheel grooves and the cable. 
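To make the counterbalancing argument above concrete, a simplified force balance (friction and cable weight ignored; the symbols m_1, m_2, θ and the passenger figures are assumptions chosen purely for illustration) shows how small the net load on the drive can be:

```latex
% Net force the haul rope must supply for two counterbalanced cars
% of masses m_1 (ascending) and m_2 (descending) on an incline of angle \theta:
F \approx (m_1 - m_2)\, g \sin\theta
% Example: if the ascending car carries 30 more passengers at roughly 75 kg each
% (m_1 - m_2 \approx 2250\ \mathrm{kg}) on a 30-degree incline,
% F \approx 2250 \times 9.81 \times 0.5 \approx 11\ \mathrm{kN},
% a small fraction of the weight of either car. The same imbalance calculation
% sets how much ballast a water-counterbalanced system must load at the top.
```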
For emergency and service purposes two sets of brakes are used at the engine room: the emergency brake directly grips the bullwheel, and the service brake is mounted at the high speed shaft of the gear. In case of an emergency the cars are also equipped with spring-applied, hydraulically opened rail brakes. The first funicular caliper brakes which clamp each side of the crown of the rail were invented by the Swiss entrepreneurs Franz Josef Bucher and Josef Durrer and implemented at the , opened in 1893. The Abt rack and pinion system was also used on some funiculars for speed control or emergency braking. Water counterbalancing Many early funiculars were built using water tanks under the floor of each car, which were filled or emptied until just sufficient imbalance was achieved to allow movement, and a few such funiculars still exist and operate in the same way. The car at the top of the hill is loaded with water until it is heavier than the car at the bottom, causing it to descend the hill and pull up the other car. The water is drained at the bottom, and the process repeats with the cars exchanging roles. The movement is controlled by a brakeman using the brake handle of the rack and pinion system engaged with the rack mounted between the rails. The Bom Jesus funicular built in 1882 near Braga, Portugal is one of the extant systems of this type. Another example, the Fribourg funicular in Fribourg, Switzerland built in 1899, is of particular interest as it utilizes waste water, coming from a sewage plant at the upper part of the city. Some funiculars of this type were later converted to electrical power. For example, the Giessbachbahn in the Swiss canton of Bern, opened in 1879, was originally powered by water ballast. In 1912 its energy provision was replaced by a hydraulic engine powered by a Pelton turbine. In 1948 this in turn was replaced by an electric motor. Track layout There are three main rail layouts used on funiculars; depending on the system, the track bed can consist of four, three, or two rails. Early funiculars were built to the four-rail layout, with two separate parallel tracks and separate station platforms at both ends for each vehicle. The two tracks are laid with sufficient space between them for the two carriages to pass at the midpoint. While this layout requires the most land area, it is also the only layout that allows both tracks to be perfectly straight, requiring no sheaves on the tracks to keep the cable in place. Examples of four-rail funiculars are the Duquesne Incline in Pittsburgh, Pennsylvania, and most cliff railways in the United Kingdom. In three-rail layouts, the middle rail is shared by both carriages, while each car runs on a different outer rail. To allow the two cars to pass at the halfway point, the middle rail must briefly split into two, forming a passing loop. Such systems are narrower and require less rail to construct than four-rail systems; however, they still require separate station platforms for each vehicle. In a two-rail layout, both cars share the entire track except at the passing loop in the middle. This layout is the narrowest of all and needs only a single platform at each station (though sometimes two platforms are built: one for boarding, one for alighting). However, the required passing loop is more complex and costly to build, since special turnout systems must be in place to ensure that each car always enters the correct track at the loop. 
Furthermore, if a rack for braking is used, that rack can be mounted higher in three-rail and four-rail layouts, making it less sensitive to choking in snowy conditions compared to the two-rail layout. Some funicular systems use a mix of different track layouts. An example of this arrangement is the lower half of the Great Orme Tramway, where the section "above" the passing loop has a three-rail layout (with each pair of adjacent rails having its own conduit which the cable runs through), while the section "below" the passing loop has a two-rail layout (with a single conduit shared by both cars). Another example is the Peak Tram in Hong Kong, which is mostly of a two-rail layout except for a short three-rail section immediately uphill of the passing loop. Some four-rail funiculars have their tracks interlaced above and below the passing loop; this allows the system to be nearly as narrow as a two-rail system, with a single platform at each station, while also eliminating the need for the costly junctions either side of the passing loop. The Hill Train at the Legoland Windsor Resort is an example of this configuration. Turnout systems for two-rail funiculars In the case of two-rail funiculars, various solutions exist for ensuring that a carriage always enters the same track at the passing loop. One such solution involves installing switches at each end of the passing loop. These switches are moved into their desired position by the carriage's wheels during trailing movements (i.e. away from the passing loop); this procedure also sets the route for the next trip in the opposite direction. The Great Orme Tramway is an example of a funicular that utilizes this system. Another turnout system, known as the Abt switch, involves no moving parts on the track at all. Instead, the carriages are built with an unconventional wheelset design: the outboard wheels have flanges on both sides, whereas the inboard wheels are unflanged (and usually wider to allow them to roll over the turnouts more easily). The double-flanged wheels keep the carriages bound to one specific rail at all times. One car has the flanged wheels on the left-hand side, so it follows the leftmost rail, forcing it to run via the left branch of the passing loop; similarly, the other car has them on the right-hand side, meaning it follows the rightmost rail and runs on the right branch of the loop. This system was invented by Carl Roman Abt and first implemented on the Lugano Città–Stazione funicular in Switzerland in 1886; since then, the Abt turnout has gained popularity, becoming a standard for modern funiculars. The lack of moving parts on the track makes this system cost-effective and reliable compared to other systems. Stations The majority of funiculars have two stations, one at each end of the track. However, some systems have been built with additional intermediate stations. Because of the nature of a funicular system, intermediate stations are usually built symmetrically about the mid-point; this allows both cars to call simultaneously at a station. Examples of funiculars with more than two stations include the Wellington Cable Car in New Zealand (five stations, including one at the passing loop) and the Carmelit in Haifa, Israel (six stations, three on each side of the passing loop). A few funiculars with asymmetrically placed stations also exist. For example, the Petřín funicular in Prague has three stations: one at each end, and a third (Nebozízek) a short way up from the passing loop. 
Because of this arrangement, carriages are forced to make a technical stop a short distance down from the passing loop as well, for the sole purpose of allowing the other car to call at Nebozízek. History A number of cable railway systems which pull their cars on inclined slopes were built since the 1820s. In the second half of the 19th century the design of a funicular as a transit system emerged. It was especially attractive in comparison with the other systems of the time as counterbalancing of the cars was deemed to be a cost-cutting solution. The first line of the Funiculars of Lyon () opened in 1862, followed by other lines in 1878, 1891 and 1900. The Budapest Castle Hill Funicular was built in 1868–69, with the first test run on 23 October 1869. The oldest funicular railway operating in Britain dates from 1875 and is in Scarborough, North Yorkshire. In Istanbul, Turkey, the Tünel has been in continuous operation since 1875 and is both the first underground funicular and the second-oldest underground railway. It remained powered by a steam engine up until it was taken for renovation in 1968. Until the end of the 1870s, the four-rail parallel-track funicular was the normal configuration. Carl Roman Abt developed the Abt Switch allowing the two-rail layout, which was used for the first time in 1879 when the Giessbach Funicular opened in Switzerland. In the United States, the first funicular to use a two-rail layout was the Telegraph Hill Railroad in San Francisco, which was in operation from 1884 until 1886. The Mount Lowe Railway in Altadena, California, was the first mountain railway in the United States to use the three-rail layout. Three- and two-rail layouts considerably reduced the space required for building a funicular, reducing grading costs on mountain slopes and property costs for urban funiculars. These layouts enabled a funicular boom in the latter half of the 19th century. Currently, the United States' oldest and steepest funicular in continuous use is the Monongahela Incline located in Pittsburgh, Pennsylvania. Construction began in 1869 and officially opened 28 May 1870 for passenger use. The Monongahela incline also has the distinction of being the first funicular in the United States for strictly passenger use and not freight. In 1880 the funicular of Mount Vesuvius inspired the Italian popular song Funiculì, Funiculà. This funicular was destroyed repeatedly by volcanic eruptions and abandoned after the eruption of 1944. Exceptional examples According to the Guinness World Records, the smallest public funicular in the world is the Fisherman's Walk Cliff Railway in Bournemouth, England, which is long. Stoosbahn in Switzerland, with a maximum slope of 110% (47.7°), is the steepest funicular in the world. The Lynton and Lynmouth Cliff Railway, built in 1888, is the steepest and longest water-powered funicular in the world. It climbs vertically on a 58% gradient. The city of Valparaíso in Chile used to have up to 30 funicular elevators (). The oldest of them dates from 1883. 15 remain with almost half in operation, and others in various stages of restoration. The Carmelit in Haifa, Israel, with six stations and a tunnel 1.8 km (1.1 mi) long, is claimed by the Guinness World Records as the "least extensive metro" in the world. Technically, it is an underground funicular. The Dresden Suspension Railway (), which hangs from an elevated rail, is the only suspended funicular in the world. The Fribourg funicular is the only funicular in the world powered by wastewater. 
Standseilbahn Linth-Limmern, capable of moving 215 t, is said to have the highest capacity. Comparison with inclined elevators Some inclined elevators are incorrectly called funiculars. On an inclined elevator the cars operate independently rather than in interconnected pairs, and are lifted uphill. A notable example is Paris' Montmartre Funicular. Its formal title is a relic of its original configuration, when its two cars operated as a counterbalanced, interconnected pair, always moving in opposite directions, thus meeting the definition of a funicular. However, the system has since been redesigned, and now uses two independently-operating cars that can each ascend or descend on demand, qualifying as a double inclined elevator; the term "funicular" in its title is retained as a historical reference. See also Cable car (railway) Aerial lift Counterweight Gravity railroad Inclined elevator List of funicular railways Steep grade railway "Funiculì, Funiculà", a Neapolitan song celebrating funiculars References External links French inventions Rail technologies Railways by type Vertical transport devices
Funicular
[ "Technology" ]
3,202
[ "Vertical transport devices", "Transport systems" ]
50,702
https://en.wikipedia.org/wiki/Environmental%20engineering
Environmental engineering is a professional engineering discipline related to environmental science. It draws on broad scientific topics such as chemistry, biology, ecology, geology, hydraulics, hydrology, microbiology, and mathematics to create solutions that protect the health of living organisms and improve the quality of the environment. Environmental engineering is a sub-discipline of civil engineering and chemical engineering. Within civil engineering, environmental engineering has traditionally focused mainly on sanitary engineering. Environmental engineering applies scientific and engineering principles to improve and maintain the environment in order to protect human health, protect nature's beneficial ecosystems, and improve the environment-related quality of human life. Environmental engineers devise solutions for wastewater management, water and air pollution control, recycling, waste disposal, and public health. They design municipal water supply and industrial wastewater treatment systems, and design plans to prevent waterborne diseases and improve sanitation in urban, rural and recreational areas. They evaluate hazardous-waste management systems to assess the severity of such hazards, advise on treatment and containment, and develop regulations to prevent mishaps. They implement environmental engineering law, for example by assessing the environmental impact of proposed construction projects. Environmental engineers study the effect of technological advances on the environment, addressing local and worldwide environmental issues such as acid rain, global warming, ozone depletion, water pollution and air pollution from automobile exhausts and industrial sources. Most jurisdictions impose licensing and registration requirements for qualified environmental engineers. Etymology The word environmental has its root in the French word environ (verb), meaning to encircle or to encompass. The word environment was used by Carlyle in 1827 to refer to the aggregate of conditions in which a person or thing lives. The meaning shifted again in 1956 when it was used in the ecological sense, ecology being the branch of science dealing with the relationship of living things to their environment. The second part of the phrase environmental engineer originates from Latin roots and was used in 14th-century French as engignour, meaning a constructor of military engines such as trebuchets, harquebuses, longbows, cannons, catapults, ballistas, stirrups and armour, as well as other deadly or bellicose contraptions. The word engineer was not used to refer to public works until the 16th century, and it likely entered the popular vernacular as meaning a contriver of public works during John Smeaton's time. History Ancient civilizations Environmental engineering is a name for work that has been done since early civilizations, as people learned to modify and control environmental conditions to meet their needs. As people recognized that their health was related to the quality of their environment, they built systems to improve it. The ancient Indus Valley Civilization (3300 B.C.E. to 1300 B.C.E.) had advanced control over their water resources. The public work structures found at various sites in the area include wells, public baths, water storage tanks, a drinking water system, and a city-wide sewage collection system. They also had an early canal irrigation system enabling large-scale agriculture. 
From 4000 to 2000 B.C.E., many civilizations had drainage systems and some had sanitation facilities, including the Mesopotamian Empire, Mohenjo-Daro, Egypt, Crete, and the Orkney Islands in Scotland. The Greeks also had aqueducts and sewer systems that used rain and wastewater to irrigate and fertilize fields. The first aqueduct in Rome was constructed in 312 B.C.E., and the Romans continued to construct aqueducts for irrigation and safe urban water supply during droughts. They also built an underground sewer system as early as the 7th century B.C.E. that fed into the Tiber River, draining marshes to create farmland as well as removing sewage from the city. Modern era Very little changed from the decline of the Roman Empire until the 19th century, when growing concern for public health drove renewed improvements. Modern environmental engineering began in London in the mid-19th century, when Joseph Bazalgette designed the first major sewerage system following the Great Stink. At the time, the city's sewers conveyed raw sewage to the River Thames, which also supplied the majority of the city's drinking water, leading to an outbreak of cholera. The introduction of drinking water treatment and sewage treatment in industrialized countries reduced waterborne diseases from leading causes of death to rarities. The field emerged as a separate academic discipline during the middle of the 20th century in response to widespread public concern about water and air pollution and other forms of environmental degradation. As society and technology grew more complex, they increasingly produced unintended effects on the natural environment. One example is the widespread application of the pesticide DDT to control agricultural pests in the years following World War II. The story of DDT, as vividly told in Rachel Carson's Silent Spring (1962), is considered to mark the birth of the modern environmental movement, which led to the modern field of "environmental engineering." Education Many universities offer environmental engineering programs through either the department of civil engineering or the department of chemical engineering. Environmental engineers in a civil engineering program often focus on hydrology, water resources management, bioremediation, and water and wastewater treatment plant design. Environmental engineers in a chemical engineering program tend to focus on environmental chemistry, advanced air and water treatment technologies, and separation processes. Some subdivisions of environmental engineering include natural resources engineering and agricultural engineering. Courses for students fall into a few broad classes: Mechanical engineering courses oriented towards designing machines and mechanical systems for environmental use, such as water and wastewater treatment facilities, pumping stations, garbage segregation plants, and other mechanical facilities. Environmental engineering or environmental systems courses oriented towards a civil engineering approach in which structures and the landscape are constructed to blend with or protect the environment. Environmental chemistry, sustainable chemistry or environmental chemical engineering courses oriented towards understanding the effects of chemicals in the environment, including mining processes, pollutants, and biochemical processes. 
Environmental technology courses oriented towards producing electronic or electrical graduates capable of developing devices and artifacts able to monitor, measure, model and control environmental impact, including monitoring and managing energy generation from renewable sources. Curriculum The following topics make up a typical curriculum in environmental engineering: Mass and Energy transfer Environmental chemistry Inorganic chemistry Organic Chemistry Nuclear Chemistry Growth models Resource consumption Population growth Economic growth Risk assessment Hazard identification Dose-response Assessment Exposure assessment Risk characterization Comparative risk analysis Water pollution Water resources and pollutants Oxygen demand Pollutant transport Water and waste water treatment Air pollution Industry, transportation, commercial and residential emissions Criteria and toxic air pollutants Pollution modelling (e.g. Atmospheric dispersion modeling) Pollution control Air pollution and meteorology Global change Greenhouse effect and global temperature Carbon, nitrogen, and oxygen cycle IPCC emissions scenarios Oceanic changes (ocean acidification, other effects of global warming on oceans) and changes in the stratosphere (see Physical impacts of climate change) Solid waste management and resource recovery Life cycle assessment Source reduction Collection and transfer operations Recycling Waste-to-energy conversion Landfill Applications Water supply and treatment Environmental engineers evaluate the water balance within a watershed and determine the available water supply, the water needed for various needs in that watershed, the seasonal cycles of water movement through the watershed and they develop systems to store, treat, and convey water for various uses. Water is treated to achieve water quality objectives for the end uses. In the case of a potable water supply, water is treated to minimize the risk of infectious disease transmission, the risk of non-infectious illness, and to create a palatable water flavor. Water distribution systems are designed and built to provide adequate water pressure and flow rates to meet various end-user needs such as domestic use, fire suppression, and irrigation. Wastewater treatment There are numerous wastewater treatment technologies. A wastewater treatment train can consist of a primary clarifier system to remove solid and floating materials, a secondary treatment system consisting of an aeration basin followed by flocculation and sedimentation or an activated sludge system and a secondary clarifier, a tertiary biological nitrogen removal system, and a final disinfection process. The aeration basin/activated sludge system removes organic material by growing bacteria (activated sludge). The secondary clarifier removes the activated sludge from the water. The tertiary system, although not always included due to costs, is becoming more prevalent to remove nitrogen and phosphorus and to disinfect the water before discharge to a surface water stream or ocean outfall. Air pollution management Scientists have developed air pollution dispersion models to evaluate the concentration of a pollutant at a receptor or the impact on overall air quality from vehicle exhausts and industrial flue gas stack emissions. To some extent, this field overlaps the desire to decrease carbon dioxide and other greenhouse gas emissions from combustion processes. 
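To make the dispersion-modelling idea above concrete, here is a minimal sketch of the classic steady-state Gaussian plume formula often used for a single continuous point source. The emission rate, wind speed, stack height and dispersion coefficients below are illustrative assumptions, not values drawn from any particular model, site or regulation.

import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    # Steady-state Gaussian plume concentration (g/m^3) at a receptor located
    # y metres off the plume centreline and z metres above ground, for a source
    # of strength Q (g/s), wind speed u (m/s) and effective stack height H (m).
    # sigma_y and sigma_z are the dispersion coefficients (m), already evaluated
    # at the receptor's downwind distance; the second vertical term models
    # total reflection of the plume at the ground.
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative inputs only: a 100 g/s source, 4 m/s wind, 50 m stack, and
# dispersion coefficients assumed to correspond to roughly 1 km downwind.
c = gaussian_plume(Q=100.0, u=4.0, y=0.0, z=1.5, H=50.0, sigma_y=80.0, sigma_z=40.0)
print(f"ground-level centreline concentration ~ {c * 1e6:.0f} micrograms per m^3")

In practice the dispersion coefficients are taken from stability-class curves or from more detailed regulatory models; the sketch only shows the structure of the calculation.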
Environmental impact assessment and mitigation Environmental engineers apply scientific and engineering principles to evaluate if there are likely to be any adverse impacts to water quality, air quality, habitat quality, flora and fauna, agricultural capacity, traffic, ecology, and noise. If impacts are expected, they then develop mitigation measures to limit or prevent such impacts. An example of a mitigation measure would be the creation of wetlands in a nearby location to mitigate the filling in of wetlands necessary for a road development if it is not possible to reroute the road. In the United States, the practice of environmental assessment was formally initiated on January 1, 1970, the effective date of the National Environmental Policy Act (NEPA). Since that time, more than 100 developing and developed nations either have planned specific analogous laws or have adopted procedure used elsewhere. NEPA is applicable to all federal agencies in the United States. Regulatory agencies Environmental Protection Agency The U.S. Environmental Protection Agency (EPA) is one of the many agencies that work with environmental engineers to solve critical issues. An essential component of EPA's mission is to protect and improve air, water, and overall environmental quality to avoid or mitigate the consequences of harmful effects. See also Associations References Further reading Davis, M. L. and D. A. Cornwell, (2006) Introduction to environmental engineering (4th ed.) McGraw-Hill Chemical engineering Civil engineering Environmental science Engineering disciplines Environmental terminology
Environmental engineering
[ "Chemistry", "Engineering", "Environmental_science" ]
2,085
[ "Chemical engineering", "Construction", "Civil engineering", "nan", "Environmental engineering" ]
50,705
https://en.wikipedia.org/wiki/Construction%20engineering
Construction engineering, also known as construction operations, is a professional subdiscipline of civil engineering that deals with the designing, planning, construction, and operations management of infrastructure such as roadways, tunnels, bridges, airports, railroads, facilities, buildings, dams, utilities and other projects. Construction engineers learn some of the same design aspects as civil engineers, as well as project management aspects. At the educational level, civil engineering students concentrate primarily on design work, which is more analytical, gearing them toward a career as a design professional. This essentially requires them to take a multitude of challenging engineering science and design courses as part of obtaining a 4-year accredited degree. Education for construction engineers is primarily focused on construction procedures, methods, costs, schedules and personnel management. Their primary concern is to deliver a project on time, within budget, and of the desired quality. Regarding educational requirements, construction engineering students take basic design courses in civil engineering as well as construction management courses. Work activities Working in a sub-discipline of civil engineering, construction engineers apply the business, technical and management skills obtained from their undergraduate degree to oversee projects that include bridges, buildings and housing projects. Construction engineers are heavily involved in the design and management/allocation of funds in these projects. They are charged with risk analysis, costing and planning. A career in design work does require a professional engineer license (PE). Individuals who pursue this career path are strongly advised to sit for the Engineer in Training exam (EIT), also referred to as the Fundamentals of Engineering exam (FE), while in college, as it takes five years (four years in the USA) of post-graduate experience to obtain the PE license. Some states have recently relaxed the prerequisite of four years of post-graduation work experience for licensure as a Professional Engineer, making an EIT eligible to take the PE exam as little as six months after passing the FE exam. Entry-level positions for construction engineers are typically project engineer or assistant project engineer. These engineers are responsible for preparing purchasing requisitions, processing change orders, preparing monthly budgeting reports and handling meeting minutes. The construction management position does not necessarily require a PE license; however, possessing one does make the individual more marketable, as the PE license allows the individual to sign off on temporary structure designs. Abilities Construction engineers are problem solvers. They contribute to the creation of infrastructure that best meets the unique demands of its environment. They must be able to understand infrastructure life cycles. Compared with design engineers, construction engineers bring their own perspectives to solving technical challenges with clarity and imagination. While individuals considering this career path should certainly have a strong understanding of mathematics and science, many other skills are also highly desirable, including critical and analytical thinking, time management, people management and good communication skills. 
Educational requirements Individuals looking to obtain a construction engineering degree must first ensure that the program is accredited by the Accreditation Board for Engineering and Technology (ABET). ABET accreditation is assurance that a college or university program meets the quality standards established by the profession for which it prepares its students. In the US there are currently twenty-five programs that exist in the entire country so careful college consideration is advised. A typical construction engineering curriculum is a mixture of engineering mechanics, engineering design, construction management and general science and mathematics. This usually leads to a Bachelor of Science degree. The B.S. degree along with some design or construction experience is sufficient for most entry-level positions. Graduate schools may be an option for those who want to go further in depth of the construction and engineering subjects taught at the undergraduate level. In most cases construction engineering graduates look to either civil engineering, engineering management or business administration as a possible graduate degree. Job prospects Job prospects for construction engineers generally have a strong cyclical variation. For example, starting in 2008 and continuing until at least 2011, job prospects have been poor due to the collapse of housing bubbles in many parts of the world. This sharply reduced demand for construction, forced construction professionals towards infrastructure construction and therefore increased the competition faced by established and new construction engineers. This increased competition and a core reduction in quantity demand is in parallel with a possible shift in the demand for construction engineers due to the automation of many engineering tasks, overall resulting in reduced prospects for construction engineers. In early 2010, the United States construction industry had a 27% unemployment rate, this is nearly three times higher than the 9.7% national average unemployment rate. The construction unemployment rate (including tradesmen) is comparable to the United States 1933 unemployment rate—the lowest point of the Great Depression—of 25%. Remuneration The average salary for a civil engineer in the UK depends on the sector and more specifically the level of experience of the individual. A 2010 survey of the remuneration and benefits of those occupying jobs in construction and the built environment industry showed that the average salary of a civil engineer in the UK is £29,582. In the United States, as of May 2013, the average was $85,640. The average salary varies depending on experience, for example the average annual salary for a civil engineer with between 3 and 6 years' experience is £23,813. For those with between 14 and 20 years' experience the average is £38,214. See also Architectural engineering Building officials Civil engineering Constructability Construction communication Construction estimating software Construction law Construction management Cost engineering Cost overrun Earthquake engineering Engineering, procurement and construction (EPC) Engineering, procurement, construction and installation, (EPCI) Index of construction articles International Building Code List of BIM software Military engineering Quantity surveyor Structural engineering Work breakdown structure References Construction and extraction occupations Engineering disciplines Civil engineering Building engineering Construction management Transportation engineering
Construction engineering
[ "Engineering" ]
1,154
[ "Building engineering", "Industrial engineering", "Construction", "Transportation engineering", "Civil engineering", "nan", "Construction management", "Architecture" ]
50,719
https://en.wikipedia.org/wiki/Quantum%20harmonic%20oscillator
The quantum harmonic oscillator is the quantum-mechanical analog of the classical harmonic oscillator. Because an arbitrary smooth potential can usually be approximated as a harmonic potential at the vicinity of a stable equilibrium point, it is one of the most important model systems in quantum mechanics. Furthermore, it is one of the few quantum-mechanical systems for which an exact, analytical solution is known. One-dimensional harmonic oscillator Hamiltonian and energy eigenstates The Hamiltonian of the particle is: where is the particle's mass, is the force constant, is the angular frequency of the oscillator, is the position operator (given by in the coordinate basis), and is the momentum operator (given by in the coordinate basis). The first term in the Hamiltonian represents the kinetic energy of the particle, and the second term represents its potential energy, as in Hooke's law. The time-independent Schrödinger equation (TISE) is, where denotes a real number (which needs to be determined) that will specify a time-independent energy level, or eigenvalue, and the solution denotes that level's energy eigenstate. Then solve the differential equation representing this eigenvalue problem in the coordinate basis, for the wave function , using a spectral method. It turns out that there is a family of solutions. In this basis, they amount to Hermite functions, The functions Hn are the physicists' Hermite polynomials, The corresponding energy levels are The expectation values of position and momentum combined with variance of each variable can be derived from the wavefunction to understand the behavior of the energy eigenkets. They are shown to be and owing to the symmetry of the problem, whereas: The variance in both position and momentum are observed to increase for higher energy levels. The lowest energy level has value of which is its minimum value due to uncertainty relation and also corresponds to a gaussian wavefunction. This energy spectrum is noteworthy for three reasons. First, the energies are quantized, meaning that only discrete energy values (integer-plus-half multiples of ) are possible; this is a general feature of quantum-mechanical systems when a particle is confined. Second, these discrete energy levels are equally spaced, unlike in the Bohr model of the atom, or the particle in a box. Third, the lowest achievable energy (the energy of the state, called the ground state) is not equal to the minimum of the potential well, but above it; this is called zero-point energy. Because of the zero-point energy, the position and momentum of the oscillator in the ground state are not fixed (as they would be in a classical oscillator), but have a small range of variance, in accordance with the Heisenberg uncertainty principle. The ground state probability density is concentrated at the origin, which means the particle spends most of its time at the bottom of the potential well, as one would expect for a state with little energy. As the energy increases, the probability density peaks at the classical "turning points", where the state's energy coincides with the potential energy. (See the discussion below of the highly excited states.) This is consistent with the classical harmonic oscillator, in which the particle spends more of its time (and is therefore more likely to be found) near the turning points, where it is moving the slowest. The correspondence principle is thus satisfied. 
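For reference, the standard textbook forms of the Hamiltonian, the normalized energy eigenfunctions and the energy levels discussed above can be written as follows (standard notation; a sketch for orientation rather than a derivation):

\[
\hat H = \frac{\hat p^2}{2m} + \frac{1}{2} m \omega^2 \hat x^2, \qquad
\hat x = x, \quad \hat p = -i\hbar \frac{\partial}{\partial x},
\]
\[
\psi_n(x) = \frac{1}{\sqrt{2^n\, n!}} \left( \frac{m\omega}{\pi\hbar} \right)^{1/4}
e^{-\frac{m\omega x^2}{2\hbar}}\, H_n\!\left( \sqrt{\frac{m\omega}{\hbar}}\, x \right),
\qquad n = 0, 1, 2, \ldots
\]
\[
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad
\Delta x\, \Delta p = \frac{\hbar}{2} \ \text{in the ground state}.
\]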
Moreover, special nondispersive wave packets, with minimum uncertainty, called coherent states oscillate very much like classical objects, as illustrated in the figure; they are not eigenstates of the Hamiltonian. Ladder operator method The "ladder operator" method, developed by Paul Dirac, allows extraction of the energy eigenvalues without directly solving the differential equation. It is generalizable to more complicated problems, notably in quantum field theory. Following this approach, we define the operators and its adjoint , Note these operators classically are exactly the generators of normalized rotation in the phase space of and , i.e they describe the forwards and backwards evolution in time of a classical harmonic oscillator. These operators lead to the following representation of and , The operator is not Hermitian, since itself and its adjoint are not equal. The energy eigenstates , when operated on by these ladder operators, give From the relations above, we can also define a number operator , which has the following property: The following commutators can be easily obtained by substituting the canonical commutation relation, and the Hamilton operator can be expressed as so the eigenstates of are also the eigenstates of energy. To see that, we can apply to a number state : Using the property of the number operator : we get: Thus, since solves the TISE for the Hamiltonian operator , is also one of its eigenstates with the corresponding eigenvalue: QED. The commutation property yields and similarly, This means that acts on to produce, up to a multiplicative constant, , and acts on to produce . For this reason, is called an annihilation operator ("lowering operator"), and a creation operator ("raising operator"). The two operators together are called ladder operators. Given any energy eigenstate, we can act on it with the lowering operator, , to produce another eigenstate with less energy. By repeated application of the lowering operator, it seems that we can produce energy eigenstates down to . However, since the smallest eigenvalue of the number operator is 0, and In this case, subsequent applications of the lowering operator will just produce zero, instead of additional energy eigenstates. Furthermore, we have shown above that Finally, by acting on |0⟩ with the raising operator and multiplying by suitable normalization factors, we can produce an infinite set of energy eigenstates such that which matches the energy spectrum given in the preceding section. Arbitrary eigenstates can be expressed in terms of |0⟩, Analytical questions The preceding analysis is algebraic, using only the commutation relations between the raising and lowering operators. Once the algebraic analysis is complete, one should turn to analytical questions. First, one should find the ground state, that is, the solution of the equation . In the position representation, this is the first-order differential equation whose solution is easily found to be the Gaussian Conceptually, it is important that there is only one solution of this equation; if there were, say, two linearly independent ground states, we would get two independent chains of eigenvectors for the harmonic oscillator. Once the ground state is computed, one can show inductively that the excited states are Hermite polynomials times the Gaussian ground state, using the explicit form of the raising operator in the position representation. 
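A quick numerical sanity check of the ladder-operator algebra can be made by truncating the number basis at a finite dimension; the truncation size and the use of natural units (ħ = ω = 1) are assumptions of this sketch, not part of the formalism.

import numpy as np

N = 12                       # truncated number-basis dimension (the true space is infinite)
n = np.arange(N)

a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator: <m|a|m+1> = sqrt(m+1)
adag = a.T                                   # creation operator (real matrix, so transpose = adjoint)

H = adag @ a + 0.5 * np.eye(N)               # H = a†a + 1/2 in natural units

comm = a @ adag - adag @ a                   # [a, a†] should be the identity
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))   # True away from the truncation edge
print(np.allclose(np.linalg.eigvalsh(H), n + 0.5))  # energies are n + 1/2, i.e. E_n = ħω(n + 1/2)

The deviation of the commutator in the last row and column is purely an artifact of truncating the basis.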
One can also prove that, as expected from the uniqueness of the ground state, the Hermite functions energy eigenstates constructed by the ladder method form a complete orthonormal set of functions. Explicitly connecting with the previous section, the ground state |0⟩ in the position representation is determined by , hence so that , and so on. Natural length and energy scales The quantum harmonic oscillator possesses natural scales for length and energy, which can be used to simplify the problem. These can be found by nondimensionalization. The result is that, if energy is measured in units of and distance in units of , then the Hamiltonian simplifies to while the energy eigenfunctions and eigenvalues simplify to Hermite functions and integers offset by a half, where are the Hermite polynomials. To avoid confusion, these "natural units" will mostly not be adopted in this article. However, they frequently come in handy when performing calculations, by bypassing clutter. For example, the fundamental solution (propagator) of , the time-dependent Schrödinger operator for this oscillator, simply boils down to the Mehler kernel, where . The most general solution for a given initial configuration then is simply Coherent states The coherent states (also known as Glauber states) of the harmonic oscillator are special nondispersive wave packets, with minimum uncertainty , whose observables' expectation values evolve like a classical system. They are eigenvectors of the annihilation operator, not the Hamiltonian, and form an overcomplete basis which consequentially lacks orthogonality. The coherent states are indexed by and expressed in the basis as Since coherent states are not energy eigenstates, their time evolution is not a simple shift in wavefunction phase. The time-evolved states are, however, also coherent states but with phase-shifting parameter instead: . Because and via the Kermack-McCrae identity, the last form is equivalent to a unitary displacement operator acting on the ground state: . Calculating the expectation values: where is the phase contributed by complex . These equations confirm the oscillating behavior of the particle. The uncertainties calculated using the numeric method are: which gives . Since the only wavefunction that can have lowest position-momentum uncertainty, , is a gaussian wavefunction, and since the coherent state wavefunction has minimum position-momentum uncertainty, we note that the general gaussian wavefunction in quantum mechanics has the form:Substituting the expectation values as a function of time, gives the required time varying wavefunction. The probability of each energy eigenstates can be calculated to find the energy distribution of the wavefunction: which corresponds to Poisson distribution. Highly excited states When is large, the eigenstates are localized into the classical allowed region, that is, the region in which a classical particle with energy can move. The eigenstates are peaked near the turning points: the points at the ends of the classically allowed region where the classical particle changes direction. This phenomenon can be verified through asymptotics of the Hermite polynomials, and also through the WKB approximation. The frequency of oscillation at is proportional to the momentum of a classical particle of energy and position . Furthermore, the square of the amplitude (determining the probability density) is inversely proportional to , reflecting the length of time the classical particle spends near . 
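The statement that the probability density of a highly excited state peaks near the classical turning points is easy to check numerically. The sketch below uses natural units, the choice n = 20 and a simple grid, all of which are illustrative assumptions.

import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def psi(n, x):
    # ψ_n(x) = (2^n n! √π)^(-1/2) e^{-x²/2} H_n(x) in natural units (ħ = m = ω = 1)
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    norm = 1.0 / sqrt(2.0**n * factorial(n) * sqrt(pi))
    return norm * np.exp(-x**2 / 2.0) * hermval(x, coeffs)

n = 20
x = np.linspace(-10.0, 10.0, 4001)
density = psi(n, x)**2

dx = x[1] - x[0]
print(abs(float(density.sum() * dx) - 1.0) < 1e-3)   # the state is normalized (to grid accuracy)

x_turn = sqrt(2 * n + 1)                              # classical turning point, about 6.4
inner = density[np.abs(x) < 0.5].max()                # tallest lobe near the centre
outer = density[(np.abs(x) > x_turn - 1.0) & (np.abs(x) < x_turn)].max()
print(outer > inner)                                  # the tallest peak sits near the turning point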
The system behavior in a small neighborhood of the turning point does not have a simple classical explanation, but can be modeled using an Airy function. Using properties of the Airy function, one may estimate the probability of finding the particle outside the classically allowed region, to be approximately This is also given, asymptotically, by the integral Phase space solutions In the phase space formulation of quantum mechanics, eigenstates of the quantum harmonic oscillator in several different representations of the quasiprobability distribution can be written in closed form. The most widely used of these is for the Wigner quasiprobability distribution. The Wigner quasiprobability distribution for the energy eigenstate is, in the natural units described above, where Ln are the Laguerre polynomials. This example illustrates how the Hermite and Laguerre polynomials are linked through the Wigner map. Meanwhile, the Husimi Q function of the harmonic oscillator eigenstates have an even simpler form. If we work in the natural units described above, we have This claim can be verified using the Segal–Bargmann transform. Specifically, since the raising operator in the Segal–Bargmann representation is simply multiplication by and the ground state is the constant function 1, the normalized harmonic oscillator states in this representation are simply . At this point, we can appeal to the formula for the Husimi Q function in terms of the Segal–Bargmann transform. N-dimensional isotropic harmonic oscillator The one-dimensional harmonic oscillator is readily generalizable to dimensions, where . In one dimension, the position of the particle was specified by a single coordinate, . In dimensions, this is replaced by position coordinates, which we label . Corresponding to each position coordinate is a momentum; we label these . The canonical commutation relations between these operators are The Hamiltonian for this system is As the form of this Hamiltonian makes clear, the -dimensional harmonic oscillator is exactly analogous to independent one-dimensional harmonic oscillators with the same mass and spring constant. In this case, the quantities would refer to the positions of each of the particles. This is a convenient property of the potential, which allows the potential energy to be separated into terms depending on one coordinate each. This observation makes the solution straightforward. For a particular set of quantum numbers the energy eigenfunctions for the -dimensional oscillator are expressed in terms of the 1-dimensional eigenfunctions as: In the ladder operator method, we define sets of ladder operators, By an analogous procedure to the one-dimensional case, we can then show that each of the and operators lower and raise the energy by respectively. The Hamiltonian is This Hamiltonian is invariant under the dynamic symmetry group (the unitary group in dimensions), defined by where is an element in the defining matrix representation of . The energy levels of the system are As in the one-dimensional case, the energy is quantized. The ground state energy is times the one-dimensional ground energy, as we would expect using the analogy to independent one-dimensional oscillators. There is one further difference: in the one-dimensional case, each energy level corresponds to a unique quantum state. In -dimensions, except for the ground state, the energy levels are degenerate, meaning there are several states with the same energy. The degeneracy can be calculated relatively easily. 
As an example, consider the 3-dimensional case: Define . All states with the same will have the same energy. For a given , we choose a particular . Then . There are possible pairs . can take on the values to , and for each the value of is fixed. The degree of degeneracy therefore is: Formula for general and [ being the dimension of the symmetric irreducible -th power representation of the unitary group ]: The special case = 3, given above, follows directly from this general equation. This is however, only true for distinguishable particles, or one particle in dimensions (as dimensions are distinguishable). For the case of bosons in a one-dimension harmonic trap, the degeneracy scales as the number of ways to partition an integer using integers less than or equal to . This arises due to the constraint of putting quanta into a state ket where and , which are the same constraints as in integer partition. Example: 3D isotropic harmonic oscillator The Schrödinger equation for a particle in a spherically-symmetric three-dimensional harmonic oscillator can be solved explicitly by separation of variables. This procedure is analogous to the separation performed in the hydrogen-like atom problem, but with a different spherically symmetric potential where is the mass of the particle. Because will be used below for the magnetic quantum number, mass is indicated by , instead of , as earlier in this article. The solution to the equation is: where is a normalization constant; ; are generalized Laguerre polynomials; The order of the polynomial is a non-negative integer; is a spherical harmonic function; is the reduced Planck constant: The energy eigenvalue is The energy is usually described by the single quantum number Because is a non-negative integer, for every even we have and for every odd we have . The magnetic quantum number is an integer satisfying , so for every and ℓ there are 2ℓ + 1 different quantum states, labeled by . Thus, the degeneracy at level is where the sum starts from 0 or 1, according to whether is even or odd. This result is in accordance with the dimension formula above, and amounts to the dimensionality of a symmetric representation of , the relevant degeneracy group. Applications Harmonic oscillators lattice: phonons The notation of a harmonic oscillator can be extended to a one-dimensional lattice of many particles. Consider a one-dimensional quantum mechanical harmonic chain of N identical atoms. This is the simplest quantum mechanical model of a lattice, and we will see how phonons arise from it. The formalism that we will develop for this model is readily generalizable to two and three dimensions. As in the previous section, we denote the positions of the masses by , as measured from their equilibrium positions (i.e. if the particle is at its equilibrium position). In two or more dimensions, the are vector quantities. The Hamiltonian for this system is where is the (assumed uniform) mass of each atom, and and are the position and momentum operators for the i th atom and the sum is made over the nearest neighbors (nn). However, it is customary to rewrite the Hamiltonian in terms of the normal modes of the wavevector rather than in terms of the particle coordinates so that one can work in the more convenient Fourier space. We introduce, then, a set of "normal coordinates" , defined as the discrete Fourier transforms of the s, and "conjugate momenta" defined as the Fourier transforms of the s, The quantity will turn out to be the wave number of the phonon, i.e. 
2π divided by the wavelength. It takes on quantized values, because the number of atoms is finite. This preserves the desired commutation relations in either real space or wave vector space From the general result it is easy to show, through elementary trigonometry, that the potential energy term is where The Hamiltonian may be written in wave vector space as Note that the couplings between the position variables have been transformed away; if the s and s were hermitian (which they are not), the transformed Hamiltonian would describe uncoupled harmonic oscillators. The form of the quantization depends on the choice of boundary conditions; for simplicity, we impose periodic boundary conditions, defining the -th atom as equivalent to the first atom. Physically, this corresponds to joining the chain at its ends. The resulting quantization is The upper bound to comes from the minimum wavelength, which is twice the lattice spacing , as discussed above. The harmonic oscillator eigenvalues or energy levels for the mode are If we ignore the zero-point energy then the levels are evenly spaced at So an exact amount of energy , must be supplied to the harmonic oscillator lattice to push it to the next energy level. In analogy to the photon case when the electromagnetic field is quantised, the quantum of vibrational energy is called a phonon. All quantum systems show wave-like and particle-like properties. The particle-like properties of the phonon are best understood using the methods of second quantization and operator techniques described elsewhere. In the continuum limit, , , while is held fixed. The canonical coordinates devolve to the decoupled momentum modes of a scalar field, , whilst the location index (not the displacement dynamical variable) becomes the parameter argument of the scalar field, . Molecular vibrations The vibrations of a diatomic molecule are an example of a two-body version of the quantum harmonic oscillator. In this case, the angular frequency is given by where is the reduced mass and and are the masses of the two atoms. The Hooke's atom is a simple model of the helium atom using the quantum harmonic oscillator. Modelling phonons, as discussed above. A charge with mass in a uniform magnetic field is an example of a one-dimensional quantum harmonic oscillator: Landau quantization. See also Notes References Bibliography External links Quantum Harmonic Oscillator Rationale for choosing the ladder operators Live 3D intensity plots of quantum harmonic oscillator Driven and damped quantum harmonic oscillator (lecture notes of course "quantum optics in electric circuits") Quantum models Oscillators
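As a rough numerical illustration of the diatomic-molecule result quoted above, ω = √(k/μ) with μ the reduced mass, the sketch below uses approximate isotope masses for carbon monoxide and an assumed, textbook-scale force constant; all numbers are illustrative rather than authoritative.

import numpy as np

amu = 1.66054e-27      # atomic mass unit, kg
hbar = 1.05457e-34     # reduced Planck constant, J*s
c_cm = 2.99792e10      # speed of light, cm/s

m1, m2 = 12.000 * amu, 15.995 * amu   # approximate masses of 12C and 16O
mu = m1 * m2 / (m1 + m2)              # reduced mass of the two-body problem
k = 1.86e3                            # assumed force constant, N/m (illustrative value)

omega = np.sqrt(k / mu)                     # angular frequency, omega = sqrt(k / mu)
wavenumber = omega / (2.0 * np.pi * c_cm)   # spectroscopic wavenumber, cm^-1

print(f"reduced mass = {mu / amu:.3f} amu")
print(f"hbar*omega = {hbar * omega / 1.602e-19:.3f} eV, fundamental ~ {wavenumber:.0f} cm^-1")

With these assumed inputs the predicted fundamental comes out close to the observed CO stretching band near 2143 cm^-1, which is why a harmonic model is a reasonable first approximation for molecular vibrations.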
Quantum harmonic oscillator
[ "Physics" ]
4,165
[ "Quantum models", "Quantum mechanics" ]
50,723
https://en.wikipedia.org/wiki/Convergence%20of%20random%20variables
In probability theory, there exist several different notions of convergence of sequences of random variables, including convergence in probability, convergence in distribution, and almost sure convergence. The different notions of convergence capture different properties about the sequence, with some notions of convergence being stronger than others. For example, convergence in distribution tells us about the limit distribution of a sequence of random variables. This is a weaker notion than convergence in probability, which tells us about the value a random variable will take, rather than just the distribution. The concept is important in probability theory, and its applications to statistics and stochastic processes. The same concepts are known in more general mathematics as stochastic convergence and they formalize the idea that certain properties of a sequence of essentially random or unpredictable events can sometimes be expected to settle down into a behavior that is essentially unchanging when items far enough into the sequence are studied. The different possible notions of convergence relate to how such a behavior can be characterized: two readily understood behaviors are that the sequence eventually takes a constant value, and that values in the sequence continue to change but can be described by an unchanging probability distribution. Background "Stochastic convergence" formalizes the idea that a sequence of essentially random or unpredictable events can sometimes be expected to settle into a pattern. The pattern may for instance be Convergence in the classical sense to a fixed value, perhaps itself coming from a random event An increasing similarity of outcomes to what a purely deterministic function would produce An increasing preference towards a certain outcome An increasing "aversion" against straying far away from a certain outcome That the probability distribution describing the next outcome may grow increasingly similar to a certain distribution Some less obvious, more theoretical patterns could be That the series formed by calculating the expected value of the outcome's distance from a particular value may converge to 0 That the variance of the random variable describing the next event grows smaller and smaller. These other types of patterns that may arise are reflected in the different types of stochastic convergence that have been studied. While the above discussion has related to the convergence of a single series to a limiting value, the notion of the convergence of two series towards each other is also important, but this is easily handled by studying the sequence defined as either the difference or the ratio of the two series. For example, if the average of n independent random variables , all having the same finite mean and variance, is given by then as tends to infinity, converges in probability (see below) to the common mean, , of the random variables . This result is known as the weak law of large numbers. Other forms of convergence are important in other useful theorems, including the central limit theorem. Throughout the following, we assume that is a sequence of random variables, and is a random variable, and all of them are defined on the same probability space . Convergence in distribution Loosely, with this mode of convergence, we increasingly expect to see the next outcome in a sequence of random experiments becoming better and better modeled by a given probability distribution. 
More precisely, the distribution of the associated random variable in the sequence becomes arbitrarily close to a specified fixed distribution. Convergence in distribution is the weakest form of convergence typically discussed, since it is implied by all other types of convergence mentioned in this article. However, convergence in distribution is very frequently used in practice; most often it arises from application of the central limit theorem. Definition A sequence of real-valued random variables, with cumulative distribution functions , is said to converge in distribution, or converge weakly, or converge in law to a random variable with cumulative distribution function if for every number at which is continuous. The requirement that only the continuity points of should be considered is essential. For example, if are distributed uniformly on intervals , then this sequence converges in distribution to the degenerate random variable . Indeed, for all when , and for all when . However, for this limiting random variable , even though for all . Thus the convergence of cdfs fails at the point where is discontinuous. Convergence in distribution may be denoted as where is the law (probability distribution) of . For example, if is standard normal we can write . For random vectors the convergence in distribution is defined similarly. We say that this sequence converges in distribution to a random -vector if for every which is a continuity set of . The definition of convergence in distribution may be extended from random vectors to more general random elements in arbitrary metric spaces, and even to the “random variables” which are not measurable — a situation which occurs for example in the study of empirical processes. This is the “weak convergence of laws without laws being defined” — except asymptotically. In this case the term weak convergence is preferable (see weak convergence of measures), and we say that a sequence of random elements converges weakly to (denoted as ) if for all continuous bounded functions . Here E* denotes the outer expectation, that is the expectation of a “smallest measurable function that dominates ”. Properties Since , the convergence in distribution means that the probability for to be in a given range is approximately equal to the probability that the value of is in that range, provided is sufficiently large. In general, convergence in distribution does not imply that the sequence of corresponding probability density functions will also converge. As an example one may consider random variables with densities . These random variables converge in distribution to a uniform U(0, 1), whereas their densities do not converge at all. However, according to Scheffé’s theorem, convergence of the probability density functions implies convergence in distribution. The portmanteau lemma provides several equivalent definitions of convergence in distribution. Although these definitions are less intuitive, they are used to prove a number of statistical theorems. The lemma states that converges in distribution to if and only if any of the following statements are true: for all continuity points of ; for all bounded, continuous functions (where denotes the expected value operator); for all bounded, Lipschitz functions ; for all nonnegative, continuous functions ; for every open set ; for every closed set ; for all continuity sets of random variable ; for every upper semi-continuous function bounded above; for every lower semi-continuous function bounded below. 
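The uniform-interval example above can be checked directly from the cumulative distribution functions. The sketch below takes the intervals to be (0, 1/n) and the limit to be the constant 0, which is the usual form of this example; those specific choices are assumptions made here for illustration.

def F_n(x, n):
    # cdf of X_n ~ Uniform(0, 1/n):  F_n(x) = min(max(n*x, 0), 1)
    return min(max(n * x, 0.0), 1.0)

def F(x):
    # cdf of the degenerate limit X = 0:  F(x) = 1 for x >= 0, else 0
    return 1.0 if x >= 0 else 0.0

for x in (-0.1, 0.0, 0.05, 0.5):
    approx = [F_n(x, n) for n in (1, 10, 100, 10_000)]
    print(f"x = {x:5.2f}: F_n(x) for n = 1, 10, 100, 10000 -> {approx},  F(x) = {F(x)}")
# F_n(x) -> F(x) at every x where F is continuous, but F_n(0) = 0 while F(0) = 1,
# which is exactly why the definition only requires convergence at continuity points.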
The continuous mapping theorem states that for a continuous function , if the sequence converges in distribution to , then converges in distribution to . Note however that convergence in distribution of to and to does in general not imply convergence in distribution of to or of to . Lévy’s continuity theorem: The sequence converges in distribution to if and only if the sequence of corresponding characteristic functions converges pointwise to the characteristic function of . Convergence in distribution is metrizable by the Lévy–Prokhorov metric. A natural link to convergence in distribution is the Skorokhod's representation theorem. Convergence in probability The basic idea behind this type of convergence is that the probability of an “unusual” outcome becomes smaller and smaller as the sequence progresses. The concept of convergence in probability is used very often in statistics. For example, an estimator is called consistent if it converges in probability to the quantity being estimated. Convergence in probability is also the type of convergence established by the weak law of large numbers. Definition A sequence {Xn} of random variables converges in probability towards the random variable X if for all ε > 0 More explicitly, let Pn(ε) be the probability that Xn is outside the ball of radius ε centered at X. Then is said to converge in probability to X if for any and any δ > 0 there exists a number N (which may depend on ε and δ) such that for all n ≥ N, Pn(ε) < δ (the definition of limit). Notice that for the condition to be satisfied, it is not possible that for each n the random variables X and Xn are independent (and thus convergence in probability is a condition on the joint cdf's, as opposed to convergence in distribution, which is a condition on the individual cdf's), unless X is deterministic like for the weak law of large numbers. At the same time, the case of a deterministic X cannot, whenever the deterministic value is a discontinuity point (not isolated), be handled by convergence in distribution, where discontinuity points have to be explicitly excluded. Convergence in probability is denoted by adding the letter p over an arrow indicating convergence, or using the "plim" probability limit operator: For random elements {Xn} on a separable metric space , convergence in probability is defined similarly by Properties Convergence in probability implies convergence in distribution.[proof] In the opposite direction, convergence in distribution implies convergence in probability when the limiting random variable X is a constant.[proof] Convergence in probability does not imply almost sure convergence.[proof] The continuous mapping theorem states that for every continuous function , if , then also  Convergence in probability defines a topology on the space of random variables over a fixed probability space. This topology is metrizable by the Ky Fan metric: or alternately by this metric Counterexamples Not every sequence of random variables which converges to another random variable in distribution also converges in probability to that random variable. As an example, consider a sequence of standard normal random variables and a second sequence . Notice that the distribution of is equal to the distribution of for all , but: which does not converge to . So we do not have convergence in probability. Almost sure convergence This is the type of stochastic convergence that is most similar to pointwise convergence known from elementary real analysis. 
Definition To say that the sequence converges almost surely or almost everywhere or with probability 1 or strongly towards X means that This means that the values of approach the value of X, in the sense that events for which does not converge to X have probability 0 (see Almost surely). Using the probability space and the concept of the random variable as a function from Ω to R, this is equivalent to the statement Using the notion of the limit superior of a sequence of sets, almost sure convergence can also be defined as follows: Almost sure convergence is often denoted by adding the letters a.s. over an arrow indicating convergence: For generic random elements {Xn} on a metric space , convergence almost surely is defined similarly: Properties Almost sure convergence implies convergence in probability (by Fatou's lemma), and hence implies convergence in distribution. It is the notion of convergence used in the strong law of large numbers. The concept of almost sure convergence does not come from a topology on the space of random variables. This means there is no topology on the space of random variables such that the almost surely convergent sequences are exactly the converging sequences with respect to that topology. In particular, there is no metric of almost sure convergence. Counterexamples Consider a sequence of independent random variables such that and . For we have which converges to hence in probability. Since and the events are independent, second Borel Cantelli Lemma ensures that hence the sequence does not converge to almost everywhere (in fact the set on which this sequence does not converge to has probability ). Sure convergence or pointwise convergence To say that the sequence of random variables (Xn) defined over the same probability space (i.e., a random process) converges surely or everywhere or pointwise towards X means where Ω is the sample space of the underlying probability space over which the random variables are defined. This is the notion of pointwise convergence of a sequence of functions extended to a sequence of random variables. (Note that random variables themselves are functions). Sure convergence of a random variable implies all the other kinds of convergence stated above, but there is no payoff in probability theory by using sure convergence compared to using almost sure convergence. The difference between the two only exists on sets with probability zero. This is why the concept of sure convergence of random variables is very rarely used. Convergence in mean Given a real number , we say that the sequence converges in the r-th mean (or in the Lr-norm) towards the random variable X, if the -th absolute moments (|Xn|r ) and (|X|r ) of and X exist, and where the operator E denotes the expected value. Convergence in -th mean tells us that the expectation of the -th power of the difference between and converges to zero. This type of convergence is often denoted by adding the letter Lr over an arrow indicating convergence: The most important cases of convergence in r-th mean are: When converges in r-th mean to X for r = 1, we say that converges in mean to X. When converges in r-th mean to X for r = 2, we say that converges in mean square (or in quadratic mean) to X. Convergence in the r-th mean, for r ≥ 1, implies convergence in probability (by Markov's inequality). Furthermore, if r > s ≥ 1, convergence in r-th mean implies convergence in s-th mean. Hence, convergence in mean square implies convergence in mean. 
Additionally, The converse is not necessarily true, however it is true if (by a more general version of Scheffé's lemma). Properties Provided the probability space is complete: If and , then almost surely. If and , then almost surely. If and , then almost surely. If and , then (for any real numbers and ) and . If and , then (for any real numbers and ) and . If and , then (for any real numbers and ). None of the above statements are true for convergence in distribution. The chain of implications between the various notions of convergence are noted in their respective sections. They are, using the arrow notation: These properties, together with a number of other special cases, are summarized in the following list: Almost sure convergence implies convergence in probability:[proof] Convergence in probability implies there exists a sub-sequence which almost surely converges: Convergence in probability implies convergence in distribution:[proof] Convergence in r-th order mean implies convergence in probability: Convergence in r-th order mean implies convergence in lower order mean, assuming that both orders are greater than or equal to one: provided r ≥ s ≥ 1. If Xn converges in distribution to a constant c, then Xn converges in probability to c:[proof] provided c is a constant. If converges in distribution to X and the difference between Xn and Yn converges in probability to zero, then Yn also converges in distribution to X:[proof] If converges in distribution to X and Yn converges in distribution to a constant c, then the joint vector converges in distribution to :[proof] provided c is a constant. Note that the condition that converges to a constant is important, if it were to converge to a random variable Y then we wouldn't be able to conclude that converges to . If Xn converges in probability to X and Yn converges in probability to Y, then the joint vector converges in probability to :[proof] If converges in probability to X, and if for all n and some b, then converges in rth mean to X for all . In other words, if converges in probability to X and all random variables are almost surely bounded above and below, then converges to X also in any rth mean. Almost sure representation. Usually, convergence in distribution does not imply convergence almost surely. However, for a given sequence {Xn} which converges in distribution to X0 it is always possible to find a new probability space (Ω, F, P) and random variables {Yn, n = 0, 1, ...} defined on it such that Yn is equal in distribution to for each , and Yn converges to Y0 almost surely. If for all ε > 0, then we say that converges almost completely, or almost in probability towards X. When converges almost completely towards X then it also converges almost surely to X. In other words, if converges in probability to X sufficiently quickly (i.e. the above sequence of tail probabilities is summable for all ), then also converges almost surely to X. This is a direct implication from the Borel–Cantelli lemma. If is a sum of n real independent random variables: then converges almost surely if and only if converges in probability. The proof can be found in Page 126 (Theorem 5.3.4) of the book by Kai Lai Chung. However, for a sequence of mutually independent random variables, convergence in probability does not imply almost sure convergence. 
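A classical counterexample of the kind referred to above takes independent X_n with P(X_n = 1) = 1/n and P(X_n = 0) = 1 − 1/n; this concrete choice, the sample size and the random seed below are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)        # arbitrary seed
N = 200_000

n = np.arange(1, N + 1)
X = rng.random(N) < 1.0 / n           # independent X_n with P(X_n = 1) = 1/n

# Convergence in probability to 0 is immediate: P(|X_n| >= eps) = 1/n -> 0 for any 0 < eps < 1.
# But sum(1/n) diverges, so by the second Borel–Cantelli lemma the event {X_n = 1} occurs for
# infinitely many n on almost every sample path, which rules out almost sure convergence.
hits = np.nonzero(X)[0] + 1           # the indices n at which X_n = 1 on this simulated path
print("number of n <=", N, "with X_n = 1 on this path:", hits.size)
print("largest such n observed:", hits[-3:] if hits.size >= 3 else hits)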
The dominated convergence theorem gives sufficient conditions for almost sure convergence to imply L1-convergence: if $X_n \xrightarrow{\text{a.s.}} X$ and $|X_n| \le Y$ for all n, where $\mathbb{E}(Y) < \infty$, then $X_n \xrightarrow{L^1} X$. A necessary and sufficient condition for L1 convergence is $X_n \xrightarrow{p} X$ and the sequence (Xn) is uniformly integrable. If $X_n \xrightarrow{p} X$, the following are equivalent: $X_n \xrightarrow{L^1} X$; $\mathbb{E}(|X_n|) \to \mathbb{E}(|X|) < \infty$; the sequence $(X_n)$ is uniformly integrable. See also Proofs of convergence of random variables Convergence of measures Convergence in measure Continuous stochastic process: the question of continuity of a stochastic process is essentially a question of convergence, and many of the same concepts and relationships used above apply to the continuity question. Asymptotic distribution Big O in probability notation Skorokhod's representation theorem The Tweedie convergence theorem Slutsky's theorem Continuous mapping theorem Notes References Stochastic processes Random variables, Convergence of
Convergence of random variables
[ "Mathematics" ]
3,566
[ "Sequences and series", "Functions and mappings", "Convergence (mathematics)", "Mathematical structures", "Mathematical objects", "Mathematical relations" ]
50,732
https://en.wikipedia.org/wiki/Extreme%20value%20theory
Extreme value theory or extreme value analysis (EVA) is the study of extremes in statistical distributions. It is widely used in many disciplines, such as structural engineering, finance, economics, earth sciences, traffic prediction, and geological engineering. For example, EVA might be used in the field of hydrology to estimate the probability of an unusually large flooding event, such as the 100-year flood. Similarly, for the design of a breakwater, a coastal engineer would seek to estimate the 50 year wave and design the structure accordingly. Data analysis Two main approaches exist for practical extreme value analysis. The first method relies on deriving block maxima (minima) series as a preliminary step. In many situations it is customary and convenient to extract the annual maxima (minima), generating an annual maxima series (AMS). The second method relies on extracting, from a continuous record, the peak values reached for any period during which values exceed a certain threshold (falls below a certain threshold). This method is generally referred to as the peak over threshold method (POT). For AMS data, the analysis may partly rely on the results of the Fisher–Tippett–Gnedenko theorem, leading to the generalized extreme value distribution being selected for fitting. However, in practice, various procedures are applied to select between a wider range of distributions. The theorem here relates to the limiting distributions for the minimum or the maximum of a very large collection of independent random variables from the same distribution. Given that the number of relevant random events within a year may be rather limited, it is unsurprising that analyses of observed AMS data often lead to distributions other than the generalized extreme value distribution (GEVD) being selected. For POT data, the analysis may involve fitting two distributions: One for the number of events in a time period considered and a second for the size of the exceedances. A common assumption for the first is the Poisson distribution, with the generalized Pareto distribution being used for the exceedances. A tail-fitting can be based on the Pickands–Balkema–de Haan theorem. Novak (2011) reserves the term "POT method" to the case where the threshold is non-random, and distinguishes it from the case where one deals with exceedances of a random threshold. Applications Applications of extreme value theory include predicting the probability distribution of: Extreme floods; the size of freak waves Tornado outbreaks Maximum sizes of ecological populations Side effects of drugs (e.g., ximelagatran) The magnitudes of large insurance losses Equity risks; day-to-day market risk Mutation events during evolution Large wildfires Environmental loads on structures Time the fastest humans could ever run the 100 metres sprint and performances in other athletic disciplines Pipeline failures due to pitting corrosion Anomalous IT network traffic, prevent attackers from reaching important data Road safety analysis Wireless communications Epidemics Neurobiology Solar energy Extreme Space weather History The field of extreme value theory was pioneered by L. Tippett (1902–1985). Tippett was employed by the British Cotton Industry Research Association, where he worked to make cotton thread stronger. In his studies, he realized that the strength of a thread was controlled by the strength of its weakest fibres. With the help of R.A. 
Fisher, Tippet obtained three asymptotic limits describing the distributions of extremes assuming independent variables. E.J. Gumbel (1958) codified this theory. These results can be extended to allow for slight correlations between variables, but the classical theory does not extend to strong correlations of the order of the variance. One universality class of particular interest is that of log-correlated fields, where the correlations decay logarithmically with the distance. Univariate theory The theory for extreme values of a single variable is governed by the extreme value theorem, also called the Fisher–Tippett–Gnedenko theorem, which describes which of the three possible distributions for extreme values applies for a particular statistical variable which is summarized in this section. Multivariate theory Extreme value theory in more than one variable introduces additional issues that have to be addressed. One problem that arises is that one must specify what constitutes an extreme event. Although this is straightforward in the univariate case, there is no unambiguous way to do this in the multivariate case. The fundamental problem is that although it is possible to order a set of real-valued numbers, there is no natural way to order a set of vectors. As an example, in the univariate case, given a set of observations it is straightforward to find the most extreme event simply by taking the maximum (or minimum) of the observations. However, in the bivariate case, given a set of observations , it is not immediately clear how to find the most extreme event. Suppose that one has measured the values at a specific time and the values at a later time. Which of these events would be considered more extreme? There is no universal answer to this question. Another issue in the multivariate case is that the limiting model is not as fully prescribed as in the univariate case. In the univariate case, the model (GEV distribution) contains three parameters whose values are not predicted by the theory and must be obtained by fitting the distribution to the data. In the multivariate case, the model not only contains unknown parameters, but also a function whose exact form is not prescribed by the theory. However, this function must obey certain constraints. It is not straightforward to devise estimators that obey such constraints though some have been recently constructed. As an example of an application, bivariate extreme value theory has been applied to ocean research. Non-stationary extremes Statistical modeling for nonstationary time series was developed in the 1990s. Methods for nonstationary multivariate extremes have been introduced more recently. The latter can be used for tracking how the dependence between extreme values changes over time, or over another covariate. See also Extreme risk Extreme weather Fisher–Tippett–Gnedenko theorem Generalized extreme value distribution Large deviation theory Outlier Pareto distribution Pickands–Balkema–de Haan theorem Rare events Redundancy principle Extreme value distributions Fréchet distribution Gumbel distribution Weibull distribution References Sources Software — Package for extreme value statistics in R. — Package for extreme value statistics in Julia. External links — Easy non-mathematical introduction. — Full-text access to conferences held by in 1933–1934. Actuarial science Statistical theory Extreme value data Tails of probability distributions Financial risk modeling
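As a concrete illustration of the block-maxima approach described in the Data analysis section above, the sketch below fits a generalized extreme value distribution to an annual maxima series and reads off estimated return levels, such as the 100-year event (the level exceeded in any given year with probability 1/100). It uses SciPy's genextreme distribution; the synthetic data, parameter values, and variable names are illustrative assumptions rather than material from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-in for an observed annual maxima (AMS) series, e.g. yearly peak river
# discharge in m^3/s.  Here we simply draw 60 "years" from a GEV with made-up
# parameters (note SciPy's sign convention for the shape parameter).
true_shape, true_loc, true_scale = -0.1, 500.0, 120.0
annual_maxima = stats.genextreme.rvs(true_shape, loc=true_loc, scale=true_scale,
                                     size=60, random_state=rng)

# Fit a GEV to the annual maxima by maximum likelihood.
shape, loc, scale = stats.genextreme.fit(annual_maxima)

# The T-year return level is the (1 - 1/T) quantile of the fitted distribution:
# it is exceeded in any given year with probability 1/T.
for T in (10, 50, 100):
    level = stats.genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T:>3}-year return level: {level:.0f}")
```

In practice one would also quantify the uncertainty of the fitted return levels (for example with profile likelihood or bootstrap intervals) and check the fit with diagnostic plots, since the short records typical of AMS data leave the tail estimate sensitive to the shape parameter.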
Extreme value theory
[ "Mathematics" ]
1,342
[ "Applied mathematics", "Actuarial science" ]
50,798
https://en.wikipedia.org/wiki/Insomnia
Insomnia, also known as sleeplessness, is a sleep disorder where people have trouble sleeping. They may have difficulty falling asleep, or staying asleep for as long as desired. Insomnia is typically followed by daytime sleepiness, low energy, irritability, and a depressed mood. It may result in an increased risk of accidents of all kinds as well as problems focusing and learning. Insomnia can be short term, lasting for days or weeks, or long term, lasting more than a month. The concept of the word insomnia has two distinct possibilities: insomnia disorder (ID) or insomnia symptoms, and many abstracts of randomized controlled trials and systematic reviews often underreport on which of these two possibilities the word refers to. Insomnia can occur independently or as a result of another problem. Conditions that can result in insomnia include psychological stress, chronic pain, heart failure, hyperthyroidism, heartburn, restless leg syndrome, menopause, certain medications, and drugs such as caffeine, nicotine, and alcohol. Insomnia is also common in people with ADHD, and children with autism. Other risk factors include working night shifts and sleep apnea. Diagnosis is based on sleep habits and an examination to look for underlying causes. A sleep study may be done to look for underlying sleep disorders. Screening may be done with questions like "Do you experience difficulty sleeping?" or "Do you have difficulty falling or staying asleep?" Although their efficacy as first line treatments is not unequivocally established, sleep hygiene and lifestyle changes are typically the first treatment for insomnia. Sleep hygiene includes a consistent bedtime, a quiet and dark room, exposure to sunlight during the day and regular exercise. Cognitive behavioral therapy may be added to this. While sleeping pills may help, they are sometimes associated with injuries, dementia, and addiction. These medications are not recommended for more than four or five weeks. The effectiveness and safety of alternative medicine is unclear. Between 10% and 30% of adults have insomnia at any given point in time and up to half of people have insomnia in a given year. About 6% of people have insomnia that is not due to another problem and lasts for more than a month. People over the age of 65 are affected more often than younger people. Women are more often affected than men. Descriptions of insomnia occur at least as far back as ancient Greece. Signs and symptoms Symptoms of insomnia: Difficulty falling asleep, including difficulty finding a comfortable sleeping position Waking during the night, being unable to return to sleep and waking up early Not able to focus on daily tasks, difficulty in remembering Daytime sleepiness, irritability, depression or anxiety Feeling tired or having low energy during the day Trouble concentrating Being irritable, acting aggressive or impulsive Sleep onset insomnia is difficulty falling asleep at the beginning of the night, often a symptom of anxiety disorders. Delayed sleep phase disorder can be misdiagnosed as insomnia, as sleep onset is delayed to much later than normal while awakening spills over into daylight hours. It is common for patients who have difficulty falling asleep to also have nocturnal awakenings with difficulty returning to sleep. Two-thirds of these patients wake up in the middle of the night, with more than half having trouble falling back to sleep after a middle-of-the-night awakening. 
Early morning awakening is an awakening occurring earlier (more than 30 minutes) than desired with an inability to go back to sleep, and before total sleep time reaches 6.5 hours. Early morning awakening is often a characteristic of depression. Anxiety symptoms may well lead to insomnia. Some of these symptoms include tension, compulsive worrying about the future, feeling overstimulated, and overanalyzing past events. Poor sleep quality Poor sleep quality can occur as a result of, for example, restless legs, sleep apnea or major depression. Poor sleep quality is defined as the individual not reaching stage 3 or delta sleep which has restorative properties. Major depression leads to alterations in the function of the hypothalamic–pituitary–adrenal axis, causing excessive release of cortisol which can lead to poor sleep quality. Nocturnal polyuria, excessive night-time urination, can also result in a poor quality of sleep. Subjectivity Some cases of insomnia are not really insomnia in the traditional sense because people experiencing sleep state misperception often sleep for a normal amount of time. The problem is that, despite sleeping for multiple hours each night and typically not experiencing significant daytime sleepiness or other symptoms of sleep loss, they do not feel like they have slept very much, if at all. Because their perception of their sleep is incomplete, they incorrectly believe it takes them an abnormally long time to fall asleep, and they underestimate how long they stay asleep. Problematic digital media use Causes While insomnia can be caused by a number of conditions, it can also occur without any identifiable cause. This is known as Primary Insomnia. Primary Insomnia may also have an initial identifiable cause but continues after the cause is no longer present. For example, a bout of insomnia may be triggered by a stressful work or life event. However, the condition may continue after the stressful event has been resolved. In such cases, the insomnia is usually perpetuated by the anxiety or fear caused by the sleeplessness itself, rather than any external factors. Symptoms of insomnia can be caused by or be associated with: Sleep breathing disorders, such as sleep apnea or upper airway resistance syndrome Use of psychoactive drugs (such as stimulants), including certain medications, herbs, caffeine, nicotine, cocaine, amphetamines, methylphenidate, aripiprazole, MDMA, modafinil, or excessive alcohol intake Use of or withdrawal from alcohol and other sedatives, such as anti-anxiety and sleep drugs like benzodiazepines Use of or withdrawal from pain-relievers such as opioids Heart disease Restless legs syndrome, which can cause sleep onset insomnia due to the discomforting sensations felt and the need to move the legs or other body parts to relieve these sensations Periodic limb movement disorder (PLMD), which occurs during sleep and can cause arousals of which the sleeper is unaware Pain: an injury or condition that causes pain can preclude an individual from finding a comfortable position in which to fall asleep, and can also cause awakening. 
Hormone shifts such as those that precede menstruation and those during menopause Life events such as fear, stress, anxiety, emotional or mental tension, work problems, financial stress, birth of a child, and bereavement Gastrointestinal issues such as heartburn or constipation Mental, neurobehavioral, or neurodevelopmental disorders such as bipolar disorder, clinical depression, generalized anxiety disorder, post traumatic stress disorder, schizophrenia, obsessive compulsive disorder, autism, dementia, ADHD, and FASD Disturbances of the circadian rhythm, such as shift work and jet lag, can cause an inability to sleep at some times of the day and excessive sleepiness at other times of the day. Chronic circadian rhythm disorders are characterized by similar symptoms. Certain neurological disorders such as brain lesions, or a history of traumatic brain injury Medical conditions such as hyperthyroidism Abuse of over-the-counter or prescription sleep aids (sedative or depressant drugs) can produce rebound insomnia Poor sleep hygiene, e.g., noise or over-consumption of caffeine A rare genetic condition can cause a prion-based, permanent and eventually fatal form of insomnia called fatal familial insomnia Physical exercise: exercise-induced insomnia is common in athletes in the form of prolonged sleep onset latency Increased exposure to the blue light from artificial sources, such as phones or computers Chronic pain Lower back pain Asthma Sleep studies using polysomnography have suggested that people who have sleep disruption have elevated night-time levels of circulating cortisol and adrenocorticotropic hormone. They also have an elevated metabolic rate, which does not occur in people who do not have insomnia but whose sleep is intentionally disrupted during a sleep study. Studies of brain metabolism using positron emission tomography (PET) scans indicate that people with insomnia have higher metabolic rates by night and by day. The question remains whether these changes are the causes or consequences of long-term insomnia. Genetics Heritability estimates of insomnia vary between 38% in males to 59% in females. A genome-wide association study (GWAS) identified 3 genomic loci and 7 genes that influence the risk of insomnia, and showed that insomnia is highly polygenic. In particular, a strong positive association was observed for the MEIS1 gene in both males and females. This study showed that the genetic architecture of insomnia strongly overlaps with psychiatric disorders and metabolic traits. It has been hypothesized that epigenetics might also influence insomnia through a controlling process of both sleep regulation and brain-stress response having an impact as well on the brain plasticity. Substance-induced Alcohol-induced Alcohol is often used as a form of self-treatment of insomnia to induce sleep. However, alcohol use to induce sleep can be a cause of insomnia. Long-term use of alcohol is associated with a decrease in NREM stage 3 and 4 sleep as well as suppression of REM sleep and REM sleep fragmentation. Frequent moving between sleep stages occurs with; awakenings due to headaches, the need to urinate, dehydration, and excessive sweating. Glutamine rebound also plays a role as when someone is drinking; alcohol inhibits glutamine, one of the body's natural stimulants. When the person stops drinking, the body tries to make up for lost time by producing more glutamine than it needs. 
The increase in glutamine levels stimulates the brain while the drinker is trying to sleep, keeping them from reaching the deepest levels of sleep. Stopping chronic alcohol use can also lead to severe insomnia with vivid dreams. During withdrawal, REM sleep is typically exaggerated as part of a rebound effect. Caffeine Some people experience sleep disruption or anxiety if they consume caffeine. Doses as low as 100 mg/day, such as a cup of coffee or two to three servings of caffeinated soft-drink, may continue to cause sleep disruption, among other intolerances. Non-regular caffeine users have the least caffeine tolerance for sleep disruption. Some coffee drinkers develop tolerance to its undesired sleep-disrupting effects, but others apparently do not. Benzodiazepine-induced Like alcohol, benzodiazepines, such as alprazolam, clonazepam, lorazepam, and diazepam, are commonly used to treat insomnia in the short-term (both prescribed and self-medicated), but worsen sleep in the long-term. While benzodiazepines can put people to sleep (i.e., inhibit NREM stage 1 and 2 sleep), while asleep, the drugs disrupt sleep architecture: decreasing sleep time, delaying time to REM sleep, and decreasing deep slow-wave sleep (the most restorative part of sleep for both energy and mood). Opioid-induced Opioid medications such as hydrocodone, oxycodone, and morphine are used for insomnia that is associated with pain due to their analgesic properties and hypnotic effects. Opioids can fragment sleep and decrease REM and stage 2 sleep. By producing analgesia and sedation, opioids may be appropriate in carefully selected patients with pain-associated insomnia. However, dependence on opioids can lead to long-term sleep disturbances. Risk factors Insomnia affects people of all age groups, but people in the following groups have a higher chance of acquiring insomnia: Individuals older than 60 History of mental health disorder including depression, etc. Emotional stress Working late night shifts Traveling through different time zones Having chronic diseases such as diabetes, kidney disease, lung disease, Alzheimer's, or heart disease Alcohol or drug use disorders Gastrointestinal reflux disease Heavy smoking Work stress Individuals of low socioeconomic status Urban Neighborhoods Household stress Mechanism Two main models exist as to the mechanism of insomnia, cognitive and physiological. The cognitive model suggests rumination and hyperarousal contribute to preventing a person from falling asleep and might lead to an episode of insomnia. The physiological model is based upon three major findings in people with insomnia; firstly, increased urinary cortisol and catecholamines have been found suggesting increased activity of the HPA axis and arousal; second, increased global cerebral glucose utilization during wakefulness and NREM sleep in people with insomnia; and lastly, increased full body metabolism and heart rate in those with insomnia. All these findings taken together suggest a deregulation of the arousal system, cognitive system, and HPA axis all contributing to insomnia. However, it is unknown if the hyperarousal is a result of, or cause of insomnia. Altered levels of the inhibitory neurotransmitter GABA have been found, but the results have been inconsistent, and the implications of altered levels of such a ubiquitous neurotransmitter are unknown. 
Studies on whether insomnia is driven by circadian control over sleep or a wake dependent process have shown inconsistent results, but some literature suggests a deregulation of the circadian rhythm based on core temperature. Increased beta activity and decreased delta wave activity has been observed on electroencephalograms; however, the implication of this is unknown. Around half of post-menopausal women experience sleep disturbances, and generally sleep disturbance is about twice as common in women as men; this appears to be due in part, but not completely, to changes in hormone levels, especially in and post-menopause. Changes in sex hormones in both men and women as they age may account in part for increased prevalence of sleep disorders in older people. Diagnosis In medicine, insomnia is measured using the Athens insomnia scale. It measures eight parameters related to sleep, represented as an overall scale which assesses an individual's sleep quality. Medical history and a physical examination can identify other conditions that could be the cause of insomnia. A comprehensive sleep history should include sleep habits and sleep environment, medications (prescription and non-prescription including supplements), alcohol, nicotine, and caffeine intake, co-morbid illnesses. A sleep diary can be used track time to bed, total sleep time, time to sleep onset, number of awakenings, use of medications, time of awakening, and subjective feelings in the morning. The sleep diary can be replaced or validated by the use of out-patient actigraphy for a week or more, using a non-invasive device that measures movement. Not everyone who suffers from insomnia should routinely have a polysomnography study to screen for sleep disorders, but it may be indicated for those with risk factors for sleep apnea, including obesity, a thick neck diameter, or fullness of the flesh in the oropharynx. For most people, the test is not needed to make a diagnosis, and insomnia can often be treated by changing their schedule to make time for sufficient sleep and by improving sleep hygiene. Some patients may need to do an overnight sleep study in a sleep lab. Such a study will commonly involve assessment tools including a polysomnogram and the multiple sleep latency test. Specialists in sleep medicine are qualified to diagnose disorders within the, according to the ICSD, 81 major sleep disorder diagnostic categories. Patients with some disorders, including delayed sleep phase disorder, are often mis-diagnosed with primary insomnia; when a person has trouble getting to sleep and awakening at desired times, but has a normal sleep pattern once asleep, a circadian rhythm disorder is a likely cause. In many cases, insomnia is co-morbid with another disease, side-effects from medications, or a psychological problem. Approximately half of all diagnosed insomnia is related to psychiatric disorders. For those who have depression, "insomnia should be regarded as a co-morbid condition, rather than as a secondary one;" insomnia typically predates psychiatric symptoms. "In fact, it is possible that insomnia represents a significant risk for the development of a subsequent psychiatric disorder." Insomnia occurs in between 60% and 80% of people with depression, and can be a side effect from medications that treat depression. Determination of causation is not necessary for a diagnosis. 
DSM-5 criteria The DSM-5 criteria for insomnia include the following: "Predominant complaint of dissatisfaction with sleep quantity or quality, associated with one (or more) of the following symptoms: Difficulty initiating sleep. (In children, this may manifest as difficulty initiating sleep without caregiver intervention.) Difficulty maintaining sleep, characterized by frequent awakenings or problems returning to sleep after awakenings. (In children, this may manifest as difficulty returning to sleep without caregiver intervention.) Early-morning awakening with inability to return to sleep. In addition: The sleep disturbance causes clinically significant distress or impairment in social, occupational, educational, academic, behavioral, or other important areas of functioning. The sleep difficulty occurs at least three nights per week. The sleep difficulty is present for at least three months. The sleep difficulty occurs despite adequate opportunity for sleep. The insomnia is not better explained by and does not occur exclusively during the course of another sleep-wake disorder (e.g., narcolepsy, a breathing-related sleep disorder, a circadian rhythm sleep-wake disorder, a parasomnia). The insomnia is not attributable to the physiological effects of a substance (e.g., a drug of abuse, a medication)." The DSM-IV TR includes insomnia but does not fully elaborate on the symptoms compared to the DSM-5. Instead of early-morning waking as a symptom, the DSM-IV-TR listed “nonrestorative sleep” as a primary symptom. The duration of the experience was also vague in the DSM-IV-TR. The DSM-IV-TR stated that symptoms had to be present for a month, whereas in the DSM-5, it stated symptoms had to be present for three months and occur at least 3 nights a week (Gillette). Types Insomnia can be classified as transient, acute, or chronic. Transient insomnia lasts for less than a week. It can be caused by another disorder, by changes in the sleep environment, by the timing of sleep, severe depression, or by stress. Its consequences – sleepiness and impaired psychomotor performance – are similar to those of sleep deprivation. Acute insomnia is the inability to consistently sleep well for a period of less than a month. Insomnia is present when there is difficulty initiating or maintaining sleep or when the sleep that is obtained is non-refreshing or of poor quality. These problems occur despite adequate opportunity and circumstances for sleep and they must result in problems with daytime function. Hyperarousal can be linked to acute insomnia since it activates the body's fight-or-flight response. When we encounter stress or danger, our bodies naturally become more alert, which can interfere with our capacity to both fall asleep and remain asleep. This heightened state of arousal can be useful in the short term during threatening situations, but if it continues over an extended period, it can result in acute insomnia. Acute insomnia is also known as short term insomnia or stress related insomnia. Chronic insomnia lasts for longer than a month. It can be caused by another disorder, or it can be a primary disorder. Common causes of chronic insomnia include persistent stress, trauma, work schedules, poor sleep habits, medications, and other mental health disorders. 
When an individual consistently engages in behaviors that disrupt their sleep, such as irregular sleep schedules, spending excessive time awake in bed, or engaging in stimulating activities close to bedtime, it can lead to conditioned wakefulness contributing to chronic insomnia. People with high levels of stress hormones or shifts in the levels of cytokines are more likely than others to have chronic insomnia. Its effects can vary according to its causes. They might include muscular weariness, hallucinations, and/or mental fatigue. Prevention Prevention and treatment of insomnia may require a combination of cognitive behavioral therapy, medications, and lifestyle changes. Among lifestyle practices, going to sleep and waking up at the same time each day can create a steady pattern which may help to prevent insomnia. Avoidance of vigorous exercise and caffeinated drinks a few hours before going to sleep is recommended, while exercise earlier in the day may be beneficial. Other practices to improve sleep hygiene may include: Avoiding or limiting naps Treating pain at bedtime Avoiding large meals, beverages, alcohol, and nicotine before bedtime Finding soothing ways to relax into sleep, including use of white noise Making the bedroom suitable for sleep by keeping it dark, cool, and free of devices, such as clocks, cell phones, or televisions Maintain regular exercise Try relaxing activities before sleeping Management It is recommended to rule out medical and psychological causes before deciding on the treatment for insomnia. Cognitive behavioral therapy has been found to be effective for chronic insomnia. The beneficial effects, in contrast to those produced by medications, may last well beyond the stopping of therapy. Medications have been used mainly to reduce symptoms in insomnia of short duration; their role in the management of chronic insomnia remains unclear. Several different types of medications may be used. Many doctors do not recommend relying on prescription sleeping pills for long-term use. It is also important to identify and treat other medical conditions that may be contributing to insomnia, such as depression, breathing problems, and chronic pain. As of 2022, many people with insomnia were reported as not receiving overall sufficient sleep or treatment for insomnia. Non-medication based Non-medication based strategies have comparable efficacy to hypnotic medication for insomnia and they may have longer lasting effects. Hypnotic medication is only recommended for short-term use because dependence with rebound withdrawal effects upon discontinuation or tolerance can develop. Non medication based strategies provide long lasting improvements to insomnia and are recommended as a first line and long-term strategy of management. Behavioral sleep medicine offers non-medication strategies to address chronic insomnia including sleep hygiene, stimulus control, behavioral interventions, sleep-restriction therapy, paradoxical intention, patient education, and relaxation therapy. Some examples are keeping a journal, restricting the time spent awake in bed, practicing relaxation techniques, and maintaining a regular sleep schedule and a wake-up time. Behavioral therapy can assist a patient in developing new sleep behaviors to improve sleep quality and consolidation. Behavioral therapy may include, learning healthy sleep habits to promote sleep relaxation, undergoing light therapy to help with worry-reduction strategies and regulating the circadian clock. 
Music may improve insomnia in adults (see music and sleep). EEG biofeedback has demonstrated effectiveness in the treatment of insomnia with improvements in duration as well as quality of sleep. Self-help therapy (defined as a psychological therapy that can be worked through on one's own) may improve sleep quality for adults with insomnia to a small or moderate degree. Stimulus control therapy is a treatment for patients who have conditioned themselves to associate the bed, or sleep in general, with a negative response. As stimulus control therapy involves taking steps to control the sleep environment, it is sometimes referred interchangeably with the concept of sleep hygiene. Examples of such environmental modifications include using the bed for sleep and sex only, not for activities such as reading or watching television; waking up at the same time every morning, including on weekends; going to bed only when sleepy and when there is a high likelihood that sleep will occur; leaving the bed and beginning an activity in another location if sleep does not occur in a reasonably brief period of time after getting into bed (commonly ~20 min); reducing the subjective effort and energy expended trying to fall asleep; avoiding exposure to bright light during night-time hours, and eliminating daytime naps. A component of stimulus control therapy is sleep restriction, a technique that aims to match the time spent in bed with actual time spent asleep. This technique involves maintaining a strict sleep-wake schedule, sleeping only at certain times of the day and for specific amounts of time to induce mild sleep deprivation. Complete treatment usually lasts up to 3 weeks and involves making oneself sleep for only a minimum amount of time that they are actually capable of on average, and then, if capable (i.e. when sleep efficiency improves), slowly increasing this amount (~15 min) by going to bed earlier as the body attempts to reset its internal sleep clock. Bright light therapy may be effective for insomnia. Paradoxical intention is a cognitive reframing technique where the insomniac, instead of attempting to fall asleep at night, makes every effort to stay awake (i.e. essentially stops trying to fall asleep). One theory that may explain the effectiveness of this method is that by not voluntarily making oneself go to sleep, it relieves the performance anxiety that arises from the need or requirement to fall asleep, which is meant to be a passive act. This technique has been shown to reduce sleep effort and performance anxiety and also lower subjective assessment of sleep-onset latency and overestimation of the sleep deficit (a quality found in many insomniacs). Sleep hygiene Sleep hygiene is a common term for all of the behaviors which relate to the promotion of good sleep. They include habits which provide a good foundation for sleep and help to prevent insomnia. However, sleep hygiene alone may not be adequate to address chronic insomnia. Sleep hygiene recommendations are typically included as one component of cognitive behavioral therapy for insomnia (CBT-I). Recommendations include reducing caffeine, nicotine, and alcohol consumption, maximizing the regularity and efficiency of sleep episodes, minimizing medication usage and daytime napping, the promotion of regular exercise, and the facilitation of a positive sleep environment. The creation of a positive sleep environment may also be helpful in reducing the symptoms of insomnia. 
On the other hand, a systematic review by the AASM concluded that clinicians should not prescribe sleep hygiene for insomnia due to the evidence of absence of its efficacy and potential delaying of adequate treatment, recommending instead that effective therapies such as CBT-i should be preferred. Cognitive behavioral therapy There is some evidence that cognitive behavioral therapy for insomnia (CBT-I) is superior in the long-term to benzodiazepines and the nonbenzodiazepines in the treatment and management of insomnia. In this therapy, patients are taught improved sleep habits and relieved of counter-productive assumptions about sleep. Common misconceptions and expectations that can be modified include: Unrealistic sleep expectations. Misconceptions about insomnia causes. Amplifying the consequences of insomnia. Performance anxiety after trying for so long to have a good night's sleep by controlling the sleep process. Numerous studies have reported positive outcomes of combining cognitive behavioral therapy for insomnia treatment with treatments such as stimulus control and the relaxation therapies. Hypnotic medications are equally effective in the short-term treatment of insomnia, but their effects wear off over time due to tolerance. The effects of CBT-I have sustained and lasting effects on treating insomnia long after therapy has been discontinued. The addition of hypnotic medications with CBT-I adds no benefit in insomnia. The long lasting benefits of a course of CBT-I shows superiority over pharmacological hypnotic drugs. Even in the short term when compared to short-term hypnotic medication such as zolpidem, CBT-I still shows significant superiority. Thus CBT-I is recommended as a first line treatment for insomnia. Common forms of CBT-I treatments include stimulus control therapy, sleep restriction, sleep hygiene, improved sleeping environments, relaxation training, paradoxical intention, and biofeedback. CBT is the well-accepted form of therapy for insomnia since it has no known adverse effects, whereas taking medications to alleviate insomnia symptoms have been shown to have adverse side effects. Nevertheless, the downside of CBT is that it may take a lot of time and motivation. Acceptance and commitment therapy Treatments based on the principles of acceptance and commitment therapy (ACT) and metacognition have emerged as alternative approaches to treating insomnia. ACT rejects the idea that behavioral changes can help insomniacs achieve better sleep, since they require "sleep efforts" - actions which create more "struggle" and arouse the nervous system, leading to hyperarousal. The ACT approach posits that acceptance of the negative feelings associated with insomnia can, in time, create the right conditions for sleep. Mindfulness practice is a key feature of this approach, although mindfulness is not practised to induce sleep (this in itself is a sleep effort to be avoided) but rather as a longer-term activity to help calm the nervous system and create the internal conditions from which sleep can emerge. A key distinction between CBT-i and ACT lies in the divergent approaches to time spent awake in bed. Proponents of CBT-i advocate minimizing time spent awake in bed, on the basis that this creates cognitive association between being in bed and wakefulness. The ACT approach proposes that avoiding time in bed may increase the pressure to sleep and arouse the nervous system further. 
Research has shown that "ACT has a significant effect on primary and comorbid insomnia and sleep quality, and ... can be used as an appropriate treatment method to control and improve insomnia". Internet interventions Despite the therapeutic effectiveness and proven success of CBT, treatment availability is significantly limited by a lack of trained clinicians, poor geographical distribution of knowledgeable professionals, and expense. One way to potentially overcome these barriers is to use the Internet to deliver treatment, making this effective intervention more accessible and less costly. The Internet has already become a critical source of health-care and medical information. Although the vast majority of health websites provide general information, there is growing research literature on the development and evaluation of Internet interventions. These online programs are typically behaviorally-based treatments that have been operationalized and transformed for delivery via the Internet. They are usually highly structured; automated or human supported; based on effective face-to-face treatment; personalized to the user; interactive; enhanced by graphics, animations, audio, and possibly video; and tailored to provide follow-up and feedback. There is good evidence for the use of computer based CBT for insomnia. Medications Many people with insomnia use sleeping tablets and other sedatives. In some places medications are prescribed in over 95% of cases. They, however, are a second line treatment. In 2019, the US Food and Drug Administration stated it is going to require warnings for eszopiclone, zaleplon, and zolpidem, due to concerns about serious injuries resulting from abnormal sleep behaviors, including sleepwalking or driving a vehicle while asleep. The percentage of adults using a prescription sleep aid increases with age. During 2005–2010, about 4% of U.S. adults aged 20 and over reported that they took prescription sleep aids in the past 30 days. Rates of use were lowest among the youngest age group (those aged 20–39) at about 2%, increased to 6% among those aged 50–59, and reached 7% among those aged 80 and over. More adult women (5%) reported using prescription sleep aids than adult men (3%). Non-Hispanic white adults reported higher use of sleep aids (5%) than non-Hispanic black (3%) and Mexican-American (2%) adults. No difference was shown between non-Hispanic black adults and Mexican-American adults in use of prescription sleep aids. Antihistamines As an alternative to taking prescription drugs, some evidence shows that an average person seeking short-term help may find relief by taking over-the-counter antihistamines such as diphenhydramine or doxylamine. Diphenhydramine and doxylamine are widely used in nonprescription sleep aids. They are the most effective over-the-counter sedatives currently available, at least in much of Europe, Canada, Australia, and the United States, and are more sedating than some prescription hypnotics. Antihistamine effectiveness for sleep may decrease over time, and anticholinergic side-effects (such as dry mouth) may also be a drawback with these particular drugs. While addiction does not seem to be an issue with this class of drugs, they can induce dependence and rebound effects upon abrupt cessation of use. However, people whose insomnia is caused by restless legs syndrome may have worsened symptoms with antihistamines. 
Antidepressants While insomnia is a common symptom of depression, antidepressants are effective for treating sleep problems whether or not they are associated with depression. While all antidepressants help regulate sleep, some antidepressants, such as amitriptyline, doxepin, mirtazapine, trazodone, and trimipramine, can have an immediate sedative effect, and are prescribed to treat insomnia. Trazodone was at the beginning of the 2020s the biggest prescribed drug for sleep in the United States despite not being indicated for sleep. Amitriptyline, doxepin, and trimipramine all have antihistaminergic, anticholinergic, antiadrenergic, and antiserotonergic properties, which contribute to both their therapeutic effects and side effect profiles, while mirtazapine's actions are primarily antihistaminergic and antiserotonergic and trazodone's effects are primarily antiadrenergic and antiserotonergic. Mirtazapine is known to decrease sleep latency (i.e., the time it takes to fall asleep), promoting sleep efficiency and increasing the total amount of sleeping time in people with both depression and insomnia. Agomelatine, a melatonergic antidepressant with claimed sleep-improving qualities that does not cause daytime drowsiness, is approved for the treatment of depression though not sleep conditions in the European Union and Australia. After trials in the United States, its development for use there was discontinued in October 2011 by Novartis, who had bought the rights to market it there from the European pharmaceutical company Servier. A 2018 Cochrane review found the safety of taking antidepressants for insomnia to be uncertain with no evidence supporting long term use. Melatonin agonists Melatonin receptor agonists such as melatonin and ramelteon are used in the treatment of insomnia. The evidence for melatonin in treating insomnia is generally poor. There is low-quality evidence that it may speed the onset of sleep by 6minutes. Ramelteon does not appear to speed the onset of sleep or the amount of sleep a person gets. Usage of melatonin as a treatment for insomnia in adults has increased from 0.4% between 1999 and 2000 to nearly 2.1% between 2017 and 2018. While the use of melatonin in the short-term has been proven to be generally safe and it is shown to not be a dependent medication, side effects can still occur. Most common side effects of melatonin include: Headache Dizziness Nausea Daytime drowsiness Prolonged-release melatonin may improve quality of sleep in older people with minimal side effects. Studies have also shown that children who are on the autism spectrum or have learning disabilities, attention-deficit hyperactivity disorder (ADHD) or related neurological diseases can benefit from the use of melatonin. This is because they often have trouble sleeping due to their disorders. For example, children with ADHD tend to have trouble falling asleep because of their hyperactivity and, as a result, tend to be tired during most of the day. Another cause of insomnia in children with ADHD is the use of stimulants used to treat their disorder. Children who have ADHD then, as well as the other disorders mentioned, may be given melatonin before bedtime in order to help them sleep. Benzodiazepines The most commonly used class of hypnotics for insomnia are the benzodiazepines. Benzodiazepines are not significantly better for insomnia than antidepressants. Chronic users of hypnotic medications for insomnia do not have better sleep than chronic insomniacs not taking medications. 
In fact, chronic users of hypnotic medications have more regular night-time awakenings than insomniacs not taking hypnotic medications. Many have concluded that these drugs cause an unjustifiable risk to the individual and to public health and lack evidence of long-term effectiveness. It is preferred that hypnotics be prescribed for only a few days at the lowest effective dose and avoided altogether wherever possible, especially in the elderly. Between 1993 and 2010, the prescribing of benzodiazepines to individuals with sleep disorders has decreased from 24% to 11% in the US, coinciding with the first release of nonbenzodiazepines. The benzodiazepine and nonbenzodiazepine hypnotic medications also have a number of side-effects such as day time fatigue, motor vehicle crashes and other accidents, cognitive impairments, and falls and fractures. Elderly people are more sensitive to these side-effects. Some benzodiazepines have demonstrated effectiveness in sleep maintenance in the short term but in the longer term benzodiazepines can lead to tolerance, physical dependence, benzodiazepine withdrawal syndrome upon discontinuation, and long-term worsening of sleep, especially after consistent usage over long periods of time. Benzodiazepines, while inducing unconsciousness, actually worsen sleep as – like alcohol – they promote light sleep while decreasing time spent in deep sleep. A further problem is, with regular use of short-acting sleep aids for insomnia, daytime rebound anxiety can emerge. Although there is little evidence for benefit of benzodiazepines in insomnia compared to other treatments and evidence of major harm, prescriptions have continued to increase. This is likely due to their addictive nature, both due to misuse and because – through their rapid action, tolerance and withdrawal they can "trick" insomniacs into thinking they are helping with sleep. There is a general awareness that long-term use of benzodiazepines for insomnia in most people is inappropriate and that a gradual withdrawal is usually beneficial due to the adverse effects associated with the long-term use of benzodiazepines and is recommended whenever possible. Benzodiazepines all bind unselectively to the GABAA receptor. Some theorize that certain benzodiazepines (hypnotic benzodiazepines) have significantly higher activity at the α1 subunit of the GABAA receptor compared to other benzodiazepines (for example, triazolam and temazepam have significantly higher activity at the α1 subunit compared to alprazolam and diazepam, making them superior sedative-hypnotics – alprazolam and diazepam, in turn, have higher activity at the α2 subunit compared to triazolam and temazepam, making them superior anxiolytic agents). Modulation of the α1 subunit is associated with sedation, motor impairment, respiratory depression, amnesia, ataxia, and reinforcing behavior (drug-seeking behavior). Modulation of the α2 subunit is associated with anxiolytic activity and disinhibition. For this reason, certain benzodiazepines may be better suited to treat insomnia than others. Z-Drugs Nonbenzodiazepine or Z-drug sedative–hypnotic drugs, such as zolpidem, zaleplon, zopiclone, and eszopiclone, are a class of hypnotic medications that are similar to benzodiazepines in their mechanism of action, and indicated for mild to moderate insomnia. Their effectiveness at improving time to sleeping is slight, and they have similar—though potentially less severe—side effect profiles compared to benzodiazepines. 
Prescribing of nonbenzodiazepines has seen a general increase since their initial release on the US market in 1992, from 2.3% in 1993 among individuals with sleep disorders to 13.7% in 2010. Orexin antagonists Orexin receptor antagonists are a more recently introduced class of sleep medications and include suvorexant, lemborexant, and daridorexant, all of which are FDA-approved for treatment of insomnia characterized by difficulties with sleep onset and/or sleep maintenance. They are oriented towards blocking signals in the brain that stimulate wakefulness, therefore claiming to address insomnia without creating dependence. There are three dual orexin receptor (DORA) drugs on the market: Belsomra (Merck), Dayvigo (Eisai) and Quviviq (Idorsia). Antipsychotics Certain atypical antipsychotics, particularly quetiapine, olanzapine, and risperidone, are used in the treatment of insomnia. However, while common, use of antipsychotics for this indication is not recommended as the evidence does not demonstrate a benefit, and the risk of adverse effects are significant. A major 2022 systematic review and network meta-analysis of medications for insomnia in adults found that quetiapine did not demonstrate any short-term benefits for insomnia. Some of the more serious adverse effects may also occur at the low doses used, such as dyslipidemia and neutropenia. Such concerns of risks at low doses are supported by Danish observational studies that showed an association of use of low-dose quetiapine (excluding prescriptions filled for tablet strengths >50 mg) with an increased risk of major cardiovascular events as compared to use of Z-drugs, with most of the risk being driven by cardiovascular death. Laboratory data from an unpublished analysis of the same cohort also support the lack of dose-dependency of metabolic side effects, as new use of low-dose quetiapine was associated with a risk of increased fasting triglycerides at one-year follow-up. Concerns regarding side effects are greater in the elderly. Other sedatives Gabapentinoids like gabapentin and pregabalin have sleep-promoting effects but are not commonly used for treatment of insomnia. Gabapentin is not effective in helping alcohol related insomnia. Barbiturates, while once used, are no longer recommended for insomnia due to the risk of addiction and other side effects. Comparative effectiveness Medications for the treatment of insomnia have a wide range of effect sizes. When comparing drugs such as benzodiazepines, Z-drugs, sedative antidepressants and antihistamines, quetiapine, orexin receptor antagonists, and melatonin receptor agonists, the orexin antagonist lemborexant and the Z-drug eszopiclone had the best profiles overall in terms of efficacy, tolerability, and acceptability. Alternative medicine Herbal products, such as valerian, kava, chamomile, and lavender, have been used to treat insomnia. However, there is no quality evidence that they are effective and safe. The same is true for cannabis and cannabinoids. It is likewise unclear whether acupuncture is useful in the treatment of insomnia. Prognosis A survey of 1.1 million residents in the United States found that those that reported sleeping about 7 hours per night had the lowest rates of mortality, whereas those that slept for fewer than 6 hours or more than 8 hours had higher mortality rates. 
Severe insomnia – sleeping less than 3.5 hours in women and 4.5 hours in men – is associated with a 15% increase in mortality, while getting 8.5 or more hours of sleep per night was associated with a 15% higher mortality rate. With this technique, it is difficult to distinguish lack of sleep caused by a disorder which is also a cause of premature death, versus a disorder which causes a lack of sleep, and the lack of sleep causing premature death. Most of the increase in mortality from severe insomnia was discounted after controlling for associated disorders. After controlling for sleep duration and insomnia, use of sleeping pills was also found to be associated with an increased mortality rate. The lowest mortality was seen in individuals who slept between six and a half and seven and a half hours per night. Even sleeping only 4.5 hours per night is associated with very little increase in mortality. Thus, mild to moderate insomnia for most people is associated with increased longevity and severe insomnia is associated only with a very small effect on mortality. It is unclear why sleeping longer than 7.5 hours is associated with excess mortality. Epidemiology Between 10% and 30% of adults have insomnia at any given point in time and up to half of people have insomnia in a given year, making it the most common sleep disorder. About 6% of people have insomnia that is not due to another problem and lasts for more than a month. People over the age of 65 are affected more often than younger people. Females are more often affected than males. Insomnia is 40% more common in women than in men. There are higher rates of insomnia reported among university students compared to the general population. Society and culture The word insomnia is from + "without sleep" and -ia as a nominalizing suffix. The popular press have published stories about people who supposedly never sleep, such as that of Thái Ngọc and Al Herpin. Horne writes "everybody sleeps and needs to do so", and generally this appears true. However, he also relates from contemporary accounts the case of Paul Kern, who was shot in 1915 fighting in World War I and then "never slept again" until his death in 1955. Kern appears to be a completely isolated, unique case. References External links Sleep disorders Sleeplessness and sleep deprivation Wikipedia medicine articles ready to translate Wikipedia neurology articles ready to translate
Insomnia
[ "Biology" ]
9,775
[ "Sleeplessness and sleep deprivation", "Behavior", "Sleep", "Sleep disorders" ]
50,863
https://en.wikipedia.org/wiki/Stored%20energy%20printer
A stored energy printer is a computer printer that uses the energy stored in a spring or magnetic field to push a hammer through a ribbon to print a dot. As compared to dot matrix printers that print a single column of dots at a time, this printer generally creates an entire line of dots at a time. Therefore, it is also known as a line matrix printer. This technology produces premium impact printers that print millions to billions of dots per hammer. The advantage of this technology is that it has the lowest known cost of ownership: ink is transferred by conventional typewriter-style ribbons. Technology and use The most common printer to use this technology was the line-matrix printer made by Printronix and its licensees. In this type, the hammers are arranged as a "hammerbank", a type of comb that oscillates horizontally to produce a line of dots. A character matrix printer has also been produced. In this printer, the hammers are machined from an oval of magnetically permeable stainless steel, and the hammer-tips form vertical rows. The original technology, patented by Printronix in 1974, has the top of a stiff leaf spring held back by a magnetic pole-piece. A tungsten carbide hammer is brazed to the center-top of the leaf spring. To produce a dot, a coil (electromagnet) wrapped around the pole-piece neutralizes the magnetic field. The leaf spring snaps the hammer away from the pole-piece, pushing the hammer out against a ribbon and placing an image of a dot onto the paper. Recent designs have performed complex optimizations in the magnetic circuit and eliminated unwanted resonances in the spring. The result was a near-doubling of speed. Other improvements include the use of electrical discharge machining to produce complex, three-dimensional hammers that trade off the magnetic circuit, mechanical resonances, and printing speed. Normal wear usually occurs when the spring rubs against the pole-piece as it returns. This causes the pole-piece to wear, eventually requiring the pole-pieces to be reground and recertified. Hexavalent chrome plating on the pole-piece, combined with careful design, more than doubles speed and improves life-span, yielding approximately a billion impressions per hammer. References Computer peripherals Computer printers Impact printers
Stored energy printer
[ "Technology" ]
476
[ "Computer peripherals", "Computing stubs", "Components" ]
50,896
https://en.wikipedia.org/wiki/Space%20station
A space station (or orbital station) is a spacecraft which remains in orbit and hosts humans for extended periods of time. It therefore is an artificial satellite featuring habitation facilities. The purpose of maintaining a space station varies depending on the program. Most often space stations have been research stations, but they have also served military or commercial uses, such as hosting space tourists. Space stations have been hosting the only continuous presence of humans in space. The first space station was Salyut 1 (1971), hosting the first crew, of the ill-fated Soyuz 11. Consecutively space stations have been operated since Skylab (1973) and occupied since 1987 with the Salyut successor Mir. Uninterrupted occupation has been sustained since the operational transition from the Mir to the International Space Station (ISS), with its first occupation in 2000. Currently there are two fully operational space stations – the ISS and China's Tiangong Space Station (TSS), which have been occupied since October 2000 with Expedition 1 and since June 2022 with Shenzhou 14. The highest number of people at the same time on one space station has been 13, first achieved with the eleven day docking to the ISS of the 127th Space Shuttle mission in 2009. The record for most people on all space stations at the same time has been 17, first on May 30, 2023, with 11 people on the ISS and 6 on the TSS. Space stations are often modular, featuring docking ports, through which they are built and maintained, allowing the joining or movement of modules and the docking of other spacecrafts for the exchange of people, supplies and tools. While space stations generally do not leave their orbit, they do feature thrusters for station keeping. History Early concepts The first mention of anything resembling a space station occurred in Edward Everett Hale's 1868 "The Brick Moon". The first to give serious, scientifically grounded consideration to space stations were Konstantin Tsiolkovsky and Hermann Oberth about two decades apart in the early 20th century. In 1929, Herman Potočnik's The Problem of Space Travel was published, the first to envision a "rotating wheel" space station to create artificial gravity. Conceptualized during the Second World War, the "sun gun" was a theoretical orbital weapon orbiting Earth at a height of . No further research was ever conducted. In 1951, Wernher von Braun published a concept for a rotating wheel space station in Collier's Weekly, referencing Potočnik's idea. However, development of a rotating station was never begun in the 20th century. First advances and precursors The first human flew to space and concluded the first orbit on April 12, 1961, with Vostok 1. The Apollo program had in its early planning instead of a lunar landing a crewed lunar orbital flight and an orbital laboratory station in orbit of Earth, at times called Project Olympus, as two different possible program goals, until the Kennedy administration sped ahead and made the Apollo program focus on what was originally planned to come after it, the lunar landing. The Project Olympus space station, or orbiting laboratory of the Apollo program, was proposed as an in-space unfolded structure with the Apollo command and service module docking. While never realized, the Apollo command and service module would perform docking maneuvers and eventually become a lunar orbiting module which was used for station-like purposes. 
But before that the Gemini program paved the way and achieved the first space rendezvous (undocked) with Gemini 6 and Gemini 7 in 1965. Subsequently in 1966 Neil Armstrong performed on Gemini 8 the first ever space docking, while in 1967 Kosmos 186 and Kosmos 188 were the first spacecrafts that docked automatically. In January 1969, Soyuz 4 and Soyuz 5 performed the first docked, but not internal, crew transfer, and in March, Apollo 9 performed the first ever internal transfer of astronauts between two docked spaceships. Salyut, Almaz and Skylab In 1971, the Soviet Union developed and launched the world's first space station, Salyut 1. The Almaz and Salyut series were eventually joined by Skylab, Mir, and Tiangong-1 and Tiangong-2. The hardware developed during the initial Soviet efforts remains in use, with evolved variants comprising a considerable part of the ISS, orbiting today. Each crew member stays aboard the station for weeks or months but rarely more than a year. Early stations were monolithic designs that were constructed and launched in one piece, generally containing all their supplies and experimental equipment. A crew would then be launched to join the station and perform research. After the supplies had been consumed, the station was abandoned. The first space station was Salyut 1, which was launched by the Soviet Union on April 19, 1971. The early Soviet stations were all designated "Salyut", but among these, there were two distinct types: civilian and military. The military stations, Salyut 2, Salyut 3, and Salyut 5, were also known as Almaz stations. The civilian stations Salyut 6 and Salyut 7 were built with two docking ports, which allowed a second crew to visit, bringing a new spacecraft with them; the Soyuz ferry could spend 90 days in space, at which point it needed to be replaced by a fresh Soyuz spacecraft. This allowed for a crew to man the station continually. The American Skylab (1973–1979) was also equipped with two docking ports, like second-generation stations, but the extra port was never used. The presence of a second port on the new stations allowed Progress supply vehicles to be docked to the station, meaning that fresh supplies could be brought to aid long-duration missions. This concept was expanded on Salyut 7, which "hard docked" with a TKS tug shortly before it was abandoned; this served as a proof of concept for the use of modular space stations. The later Salyuts may reasonably be seen as a transition between the two groups. Mir Unlike previous stations, the Soviet space station Mir had a modular design; a core unit was launched, and additional modules, generally with a specific role, were later added. This method allows for greater flexibility in operation, as well as removing the need for a single immensely powerful launch vehicle. Modular stations are also designed from the outset to have their supplies provided by logistical support craft, which allows for a longer lifetime at the cost of requiring regular support launches. International Space Station The ISS is divided into two main sections, the Russian Orbital Segment (ROS) and the US Orbital Segment (USOS). The first module of the ISS, Zarya, was launched in 1998. The Russian Orbital Segment's "second-generation" modules were able to launch on Proton, fly to the correct orbit, and dock themselves without human intervention. Connections are automatically made for power, data, gases, and propellants. 
The Russian autonomous approach allows the assembly of space stations prior to the launch of crew. The Russian "second-generation" modules are able to be reconfigured to suit changing needs. As of 2009, RKK Energia was considering the removal and reuse of some modules of the ROS on the Orbital Piloted Assembly and Experiment Complex after the end of mission is reached for the ISS. However, in September 2017, the head of Roscosmos said that the technical feasibility of separating the station to form OPSEK had been studied, and there were now no plans to separate the Russian segment from the ISS. In contrast, the main US modules launched on the Space Shuttle and were attached to the ISS by crews during EVAs. Connections for electrical power, data, propulsion, and cooling fluids are also made at this time, resulting in an integrated block of modules that is not designed for disassembly and must be deorbited as one mass. Axiom Station is a planned commercial space station that will begin as a single module docked to the ISS. Axiom Space gained NASA approval for the venture in January 2020. The first module, the Payload Power Transfer Module (PPTM), is expected to be launched to the ISS no earlier than 2027. PPTM will remain at the ISS until the launch of Axiom's Habitat One (Hab-1) module about one year later, after which it will detach from the ISS to join with Hab-1. Tiangong program China's first space laboratory, Tiangong-1 was launched in September 2011. The uncrewed Shenzhou 8 then successfully performed an automatic rendezvous and docking in November 2011. The crewed Shenzhou 9 then docked with Tiangong-1 in June 2012, followed by the crewed Shenzhou 10 in 2013. According to the China Manned Space Engineering Office, Tiangong-1 reentered over the South Pacific Ocean, northwest of Tahiti, on 2 April 2018 at 00:15 UTC. A second space laboratory Tiangong-2 was launched in September 2016, while a plan for Tiangong-3 was merged with Tiangong-2. The station made a controlled reentry on 19 July 2019 and burned up over the South Pacific Ocean. The Tiangong Space Station (), the first module of which was launched on 29 April 2021, is in low Earth orbit, 340 to 450 kilometres above the Earth at an orbital inclination of 42° to 43°. Its planned construction via 11 total launches across 2021–2022 was intended to extend the core module with two laboratory modules, capable of hosting up to six crew. Planned projects Architecture Two types of space stations have been flown: monolithic and modular. Monolithic stations consist of a single vehicle and are launched by one rocket. Modular stations consist of two or more separate vehicles that are launched independently and docked on orbit. Modular stations are currently preferred due to lower costs and greater flexibility. A space station is a complex vehicle that must incorporate many interrelated subsystems, including structure, electrical power, thermal control, attitude determination and control, orbital navigation and propulsion, automation and robotics, computing and communications, environmental and life support, crew facilities, and crew and cargo transportation. Stations must serve a useful role, which drives the capabilities required. Orbit and purpose Materials Space stations are made from durable materials that have to weather space radiation, internal pressure, micrometeoroids, thermal effects of the sun and cold temperatures for long periods of time. 
They are typically made from stainless steel, titanium and high-quality aluminum alloys, with layers of insulation such as Kevlar serving as ballistic shielding. The International Space Station (ISS) has a single inflatable module, the Bigelow Expandable Activity Module, which was installed in April 2016 after being delivered to the ISS on the SpaceX CRS-8 resupply mission. This module, based on NASA research in the 1990s, was transported in a compressed state before being attached to the ISS by the space station arm and inflated. Whilst it was initially designed for a two-year lifetime, it was still attached and being used for storage in August 2022. Construction Salyut 1 – first space station, launched in 1971 Skylab – launched in a single launch in May 1973 Mir – first modular space station assembled in orbit International Space Station – modular space station assembled in orbit Tiangong space station – Chinese space station Habitability The space station environment presents a variety of challenges to human habitability, including short-term problems such as the limited supplies of air, water, and food and the need to manage waste heat, and long-term ones such as weightlessness and relatively high levels of ionizing radiation. These conditions can create long-term health problems for space-station inhabitants, including muscle atrophy, bone deterioration, balance disorders, eyesight disorders, and elevated risk of cancer. Future space habitats may attempt to address these issues, and could be designed for occupation beyond the weeks or months that current missions typically last. Possible solutions include the creation of artificial gravity by a rotating structure, the inclusion of radiation shielding, and the development of on-site agricultural ecosystems. Some designs might even accommodate large numbers of people, becoming essentially "cities in space" where people would reside semi-permanently. Molds that develop aboard space stations can produce acids that degrade metal, glass, and rubber. Despite an expanding array of molecular approaches for detecting microorganisms, rapid and robust means of assessing the differential viability of the microbial cells, as a function of phylogenetic lineage, remain elusive. Power Like uncrewed spacecraft close to the Sun, space stations in the inner Solar System generally rely on solar panels to obtain power. Life support Space station air and water are brought up in spacecraft from Earth before being recycled. Supplemental oxygen can be supplied by a solid fuel oxygen generator. Communications Military The last military-use space station was the Soviet Salyut 5, which was launched under the Almaz program and orbited between 1976 and 1977. Occupation Space stations have so far harboured the only long-duration direct human presence in space. After the first station, Salyut 1 (1971), and its tragic Soyuz 11 crew, space stations have been operated consecutively since Skylab (1973–1974), allowing a progression of long-duration direct human presence in space. Long-duration resident crews have been joined by visiting crews since 1977 (Salyut 6), and stations have been occupied by consecutive crews since 1987 with the Salyut successor Mir. Uninterrupted occupation of stations has been achieved since the operational transition from the Mir to the ISS, with its first occupation in 2000.
The ISS has hosted the highest number of people in orbit at the same time, reaching 13 for the first time during the eleven-day docking of STS-127 in 2009. The duration record for a single spaceflight is 437.75 days, set by Valeri Polyakov aboard Mir from 1994 to 1995. Four cosmonauts have completed single missions of over a year, all aboard Mir. Operations Resupply and crew vehicles Many spacecraft are used to dock with the space stations. Soyuz flight T-15 in March to July 1986 was the first and, as of 2016, the only spacecraft to visit two different space stations, Mir and Salyut 7. International Space Station The International Space Station has been supported by many different spacecraft. Future Sierra Nevada Corporation Dream Chaser New Space-Station Resupply Vehicle (HTV-X) Roscosmos Orel Current Northrop Grumman Cygnus (2013–present) Roscosmos Progress (multiple variants) (2000–present) Energia Soyuz (multiple variants) (2001–present) SpaceX Dragon 2 (2020–present) Retired Automated Transfer Vehicle (ATV) (2008–2015) H-II Transfer Vehicle (HTV) (2009–2020) Space Shuttle (1998–2011) SpaceX Dragon 1 (2012–2020) Tiangong space station The Tiangong space station is supported by the following spacecraft: Shenzhou (2021–present) Tianzhou (2021–present) Tiangong program The Tiangong program relied on the following spacecraft. Shenzhou program (2011–2016) Mir The Mir space station was in orbit from 1986 to 2001 and was supported and visited by the following spacecraft: Roscosmos Progress (multiple variants) (1986–2000) – An additional Progress spacecraft was used in 2001 to deorbit Mir. Energia Soyuz (multiple variants) (1986–2000) Space Shuttle (1995–1998) Skylab Apollo command and service module (1973–1974) Salyut programme Energia Soyuz (multiple variants) (1971–1986) Docking and berthing Maintenance Research Research conducted on the Mir included the first long-term space-based ESA research project, EUROMIR 95, which lasted 179 days and included 35 scientific experiments. During the first 20 years of operation of the International Space Station, there were around 3,000 scientific experiments in the areas of biology and biotech, technology development, educational activities, human research, physical science, and Earth and space science. Materials research Space stations provide a useful platform to test the performance, stability, and survivability of materials in space. This research follows on from previous experiments such as the Long Duration Exposure Facility, a free-flying experimental platform which flew from April 1984 until January 1990. Mir Environmental Effects Payload (1996–1997) Materials International Space Station Experiment (2001–present) Human research Botany Space tourism On the International Space Station, guests sometimes pay $50 million to spend the week living as an astronaut. Space tourism is slated to expand once launch costs are lowered sufficiently, and space hotels may become relatively common by the end of the 2020s. Finance As it currently costs on average $10,000 to $25,000 per kilogram to launch anything into orbit, space stations remain the exclusive province of government space agencies, which are primarily funded by taxation. In the case of the International Space Station, space tourism makes up a small portion of the money needed to run it.
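The launch-cost figure quoted above translates directly into the order-of-magnitude price of placing station hardware in orbit. The Python sketch below is a back-of-the-envelope illustration: only the $10,000 to $25,000 per kilogram range comes from the text, while the 20-tonne module mass is a hypothetical example value, roughly the size of a large ISS-class module.

```python
# Back-of-the-envelope cost to launch one station module, using the
# $10,000-$25,000 per kilogram range quoted above. Module mass is an
# assumed example value, not a figure for any specific module.

cost_per_kg_low = 10_000    # USD per kg (from the text)
cost_per_kg_high = 25_000   # USD per kg (from the text)
module_mass_kg = 20_000     # assumed: a large, ISS-class module

low = module_mass_kg * cost_per_kg_low
high = module_mass_kg * cost_per_kg_high
print(f"Launch cost for a {module_mass_kg / 1000:.0f} t module: "
      f"${low / 1e6:.0f}M to ${high / 1e6:.0f}M")
```

At those rates a single large module costs several hundred million dollars simply to place in orbit, which is why station programs have so far been dominated by government agencies.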
Legacy Technology spinoffs International cooperation and economy Cultural impact Space settlement See also Apollo–Soyuz Spacelab Shuttle–Mir program List of space stations References Bibliography Haeuplik-Meusburger: Architecture for Astronauts – An Activity based Approach. Springer Praxis Books, 2011, . External links Read Congressional Research Service (CRS) Reports regarding Space Stations ISS – on Russian News Agency TASS, Official Infographic The star named ISS – on Roscosmos TV "Giant Doughnut Purposed as Space Station", Popular Science, October 1951, pp. 120–121; article on the subject of space exploration and a space station orbiting earth Further reading 1971 introductions Human habitats Soviet inventions Solar System
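The Tiangong orbital altitude of roughly 340 to 450 km quoted earlier fixes the station's orbital period and speed through Kepler's third law. The Python sketch below works the numbers for a circular orbit; Earth's gravitational parameter and mean radius are standard physical constants rather than values taken from the article.

```python
import math

# Orbital period for a circular low Earth orbit: T = 2*pi*sqrt(a^3 / mu).
MU_EARTH = 3.986004418e14   # m^3/s^2, standard gravitational parameter of Earth
R_EARTH = 6_371_000.0       # m, mean Earth radius

for altitude_km in (340, 450):   # altitude range quoted for the Tiangong station
    a = R_EARTH + altitude_km * 1000.0           # semi-major axis of the circular orbit
    period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
    speed_ms = math.sqrt(MU_EARTH / a)           # circular orbital speed
    print(f"{altitude_km} km: period about {period_s / 60:.1f} min, "
          f"speed about {speed_ms / 1000:.2f} km/s")
```

The result, a little over 90 minutes per orbit at roughly 7.7 km/s, is typical of all crewed stations flown to date, since they all occupy similar low Earth orbits.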
Space station
[ "Astronomy" ]
3,598
[ "Outer space", "Solar System" ]
50,903
https://en.wikipedia.org/wiki/Wavelet
A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases or decreases, and then returns to zero one or more times. Wavelets are termed a "brief oscillation". A taxonomy of wavelets has been established, based on the number and direction of its pulses. Wavelets are imbued with specific properties that make them useful for signal processing. For example, a wavelet could be created to have a frequency of middle C and a short duration of roughly one tenth of a second. If this wavelet were to be convolved with a signal created from the recording of a melody, then the resulting signal would be useful for determining when the middle C note appeared in the song. Mathematically, a wavelet correlates with a signal if a portion of the signal is similar. Correlation is at the core of many practical wavelet applications. As a mathematical tool, wavelets can be used to extract information from many kinds of data, including audio signals and images. Sets of wavelets are needed to analyze data fully. "Complementary" wavelets decompose a signal without gaps or overlaps so that the decomposition process is mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet-based compression/decompression algorithms, where it is desirable to recover the original information with minimal loss. In formal terms, this representation is a wavelet series representation of a square-integrable function with respect to either a complete, orthonormal set of basis functions, or an overcomplete set or frame of a vector space, for the Hilbert space of square-integrable functions. This is accomplished through coherent states. In classical physics, the diffraction phenomenon is described by the Huygens–Fresnel principle that treats each point in a propagating wavefront as a collection of individual spherical wavelets. The characteristic bending pattern is most pronounced when a wave from a coherent source (such as a laser) encounters a slit/aperture that is comparable in size to its wavelength. This is due to the addition, or interference, of different points on the wavefront (or, equivalently, each wavelet) that travel by paths of different lengths to the registering surface. Multiple, closely spaced openings (e.g., a diffraction grating), can result in a complex pattern of varying intensity. Etymology The word wavelet has been used for decades in digital signal processing and exploration geophysics. The equivalent French word ondelette meaning "small wave" was used by Jean Morlet and Alex Grossmann in the early 1980s. Wavelet theory Wavelet theory is applicable to several subjects. All wavelet transforms may be considered forms of time-frequency representation for continuous-time (analog) signals and so are related to harmonic analysis. Discrete wavelet transform (continuous in time) of a discrete-time (sampled) signal by using discrete-time filterbanks of dyadic (octave band) configuration is a wavelet approximation to that signal. The coefficients of such a filter bank are called the shift and scaling coefficients in wavelets nomenclature. These filterbanks may contain either finite impulse response (FIR) or infinite impulse response (IIR) filters. The wavelets forming a continuous wavelet transform (CWT) are subject to the uncertainty principle of Fourier analysis respective sampling theory: given a signal with some event in it, one cannot assign simultaneously an exact time and frequency response scale to that event. 
The product of the uncertainties of time and frequency response scale has a lower bound. Thus, in the scaleogram of a continuous wavelet transform of this signal, such an event marks an entire region in the time-scale plane, instead of just one point. Also, discrete wavelet bases may be considered in the context of other forms of the uncertainty principle. Wavelet transforms are broadly divided into three classes: continuous, discrete and multiresolution-based. Continuous wavelet transforms (continuous shift and scale parameters) In continuous wavelet transforms, a given signal of finite energy is projected on a continuous family of frequency bands (or similar subspaces of the Lp function space L2(R) ). For instance the signal may be represented on every frequency band of the form [f, 2f] for all positive frequencies f > 0. Then, the original signal can be reconstructed by a suitable integration over all the resulting frequency components. The frequency bands or subspaces (sub-bands) are scaled versions of a subspace at scale 1. This subspace in turn is in most situations generated by the shifts of one generating function ψ in L2(R), the mother wavelet. For the example of the scale one frequency band [1, 2] this function is with the (normalized) sinc function. That, Meyer's, and two other examples of mother wavelets are: The subspace of scale a or frequency band [1/a, 2/a] is generated by the functions (sometimes called child wavelets) where a is positive and defines the scale and b is any real number and defines the shift. The pair (a, b) defines a point in the right halfplane R+ × R. The projection of a function x onto the subspace of scale a then has the form with wavelet coefficients For the analysis of the signal x, one can assemble the wavelet coefficients into a scaleogram of the signal. See a list of some Continuous wavelets. Discrete wavelet transforms (discrete shift and scale parameters, continuous in time) It is computationally impossible to analyze a signal using all wavelet coefficients, so one may wonder if it is sufficient to pick a discrete subset of the upper halfplane to be able to reconstruct a signal from the corresponding wavelet coefficients. One such system is the affine system for some real parameters a > 1, b > 0. The corresponding discrete subset of the halfplane consists of all the points (am, nb am) with m, n in Z. The corresponding child wavelets are now given as A sufficient condition for the reconstruction of any signal x of finite energy by the formula is that the functions form an orthonormal basis of L2(R). Multiresolution based discrete wavelet transforms (continuous in time) In any discretised wavelet transform, there are only a finite number of wavelet coefficients for each bounded rectangular region in the upper halfplane. Still, each coefficient requires the evaluation of an integral. In special situations this numerical complexity can be avoided if the scaled and shifted wavelets form a multiresolution analysis. This means that there has to exist an auxiliary function, the father wavelet φ in L2(R), and that a is an integer. A typical choice is a = 2 and b = 1. The most famous pair of father and mother wavelets is the Daubechies 4-tap wavelet. Note that not every orthonormal discrete wavelet basis can be associated to a multiresolution analysis; for example, the Journe wavelet admits no multiresolution analysis. 
From the mother and father wavelets one constructs the subspaces The father wavelet keeps the time domain properties, while the mother wavelets keeps the frequency domain properties. From these it is required that the sequence forms a multiresolution analysis of L2 and that the subspaces are the orthogonal "differences" of the above sequence, that is, Wm is the orthogonal complement of Vm inside the subspace Vm−1, In analogy to the sampling theorem one may conclude that the space Vm with sampling distance 2m more or less covers the frequency baseband from 0 to 1/2m-1. As orthogonal complement, Wm roughly covers the band [1/2m−1, 1/2m]. From those inclusions and orthogonality relations, especially , follows the existence of sequences and that satisfy the identities so that and so that The second identity of the first pair is a refinement equation for the father wavelet φ. Both pairs of identities form the basis for the algorithm of the fast wavelet transform. From the multiresolution analysis derives the orthogonal decomposition of the space L2 as For any signal or function this gives a representation in basis functions of the corresponding subspaces as where the coefficients are and Time-causal wavelets For processing temporal signals in real time, it is essential that the wavelet filters do not access signal values from the future as well as that minimal temporal latencies can be obtained. Time-causal wavelets representations have been developed by Szu et al and Lindeberg, with the latter method also involving a memory-efficient time-recursive implementation. Mother wavelet For practical applications, and for efficiency reasons, one prefers continuously differentiable functions with compact support as mother (prototype) wavelet (functions). However, to satisfy analytical requirements (in the continuous WT) and in general for theoretical reasons, one chooses the wavelet functions from a subspace of the space This is the space of Lebesgue measurable functions that are both absolutely integrable and square integrable in the sense that and Being in this space ensures that one can formulate the conditions of zero mean and square norm one: is the condition for zero mean, and is the condition for square norm one. For ψ to be a wavelet for the continuous wavelet transform (see there for exact statement), the mother wavelet must satisfy an admissibility criterion (loosely speaking, a kind of half-differentiability) in order to get a stably invertible transform. For the discrete wavelet transform, one needs at least the condition that the wavelet series is a representation of the identity in the space L2(R). Most constructions of discrete WT make use of the multiresolution analysis, which defines the wavelet by a scaling function. This scaling function itself is a solution to a functional equation. In most situations it is useful to restrict ψ to be a continuous function with a higher number M of vanishing moments, i.e. for all integer m < M The mother wavelet is scaled (or dilated) by a factor of a and translated (or shifted) by a factor of b to give (under Morlet's original formulation): For the continuous WT, the pair (a,b) varies over the full half-plane R+ × R; for the discrete WT this pair varies over a discrete subset of it, which is also called affine group. These functions are often incorrectly referred to as the basis functions of the (continuous) transform. In fact, as in the continuous Fourier transform, there is no basis in the continuous wavelet transform. 
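The scaling-and-translation recipe just described is easy to make concrete numerically. The Python sketch below builds a Mexican hat (Ricker) mother wavelet, checks the zero-mean and unit-norm conditions stated above, and computes a small grid of wavelet coefficients for a test signal by direct inner products, using the usual child-wavelet form with a 1/sqrt(a) normalization. It is a minimal illustration of the definitions, not an efficient CWT implementation, and the test signal and scale grid are arbitrary choices.

```python
import numpy as np

# Sampled time axis and a test signal: a short 60 Hz burst centred at t = 0.3 s.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
x = np.exp(-((t - 0.3) / 0.02) ** 2) * np.sin(2 * np.pi * 60 * t)

def mexican_hat(u):
    """Mexican hat (Ricker) mother wavelet, normalized to unit L2 norm."""
    return (1 - u**2) * np.exp(-u**2 / 2) * (2 / (np.sqrt(3) * np.pi**0.25))

# Check the zero-mean and square-norm-one conditions numerically on a wide grid.
tau = np.arange(-8.0, 8.0, dt)
psi = mexican_hat(tau)
print("zero mean ?", abs(np.sum(psi) * dt) < 1e-6)
print("unit norm ?", abs(np.sum(psi**2) * dt - 1.0) < 1e-3)

def child(a, b, time):
    """Scaled and shifted child wavelet psi_{a,b}(t) = a**(-1/2) * psi((t - b)/a)."""
    return mexican_hat((time - b) / a) / np.sqrt(a)

# Wavelet coefficients <x, psi_{a,b}> on a coarse (a, b) grid: a small scaleogram.
scales = [0.005, 0.01, 0.02, 0.05]
shifts = np.arange(0.0, 1.0, 0.1)
coeffs = np.array([[np.sum(x * child(a, b, t)) * dt for b in shifts] for a in scales])
print("scaleogram shape:", coeffs.shape)   # (number of scales, number of shifts)
```

The largest coefficients appear at the shift nearest 0.3 s and at the scale whose oscillation best matches the 60 Hz burst, which is exactly the localisation behaviour the text describes.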
Time-frequency interpretation uses a subtly different formulation (after Delprat). Restriction: when and , has a finite time interval Comparisons with Fourier transform (continuous-time) The wavelet transform is often compared with the Fourier transform, in which signals are represented as a sum of sinusoids. In fact, the Fourier transform can be viewed as a special case of the continuous wavelet transform with the choice of the mother wavelet . The main difference in general is that wavelets are localized in both time and frequency whereas the standard Fourier transform is only localized in frequency. The short-time Fourier transform (STFT) is similar to the wavelet transform, in that it is also time and frequency localized, but there are issues with the frequency/time resolution trade-off. In particular, assuming a rectangular window region, one may think of the STFT as a transform with a slightly different kernel where can often be written as , where and u respectively denote the length and temporal offset of the windowing function. Using Parseval's theorem, one may define the wavelet's energy as From this, the square of the temporal support of the window offset by time u is given by and the square of the spectral support of the window acting on a frequency Multiplication with a rectangular window in the time domain corresponds to convolution with a function in the frequency domain, resulting in spurious ringing artifacts for short/localized temporal windows. With the continuous-time Fourier transform, and this convolution is with a delta function in Fourier space, resulting in the true Fourier transform of the signal . The window function may be some other apodizing filter, such as a Gaussian. The choice of windowing function will affect the approximation error relative to the true Fourier transform. A given resolution cell's time-bandwidth product may not be exceeded with the STFT. All STFT basis elements maintain a uniform spectral and temporal support for all temporal shifts or offsets, thereby attaining an equal resolution in time for lower and higher frequencies. The resolution is purely determined by the sampling width. In contrast, the wavelet transform's multiresolutional properties enables large temporal supports for lower frequencies while maintaining short temporal widths for higher frequencies by the scaling properties of the wavelet transform. This property extends conventional time-frequency analysis into time-scale analysis. The discrete wavelet transform is less computationally complex, taking O(N) time as compared to O(N log N) for the fast Fourier transform (FFT). This computational advantage is not inherent to the transform, but reflects the choice of a logarithmic division of frequency, in contrast to the equally spaced frequency divisions of the FFT which uses the same basis functions as the discrete Fourier transform (DFT). This complexity only applies when the filter size has no relation to the signal size. A wavelet without compact support such as the Shannon wavelet would require O(N2). (For instance, a logarithmic Fourier Transform also exists with O(N) complexity, but the original signal must be sampled logarithmically in time, which is only useful for certain types of signals.) Definition of a wavelet A wavelet (or a wavelet family) can be defined in various ways: Scaling filter An orthogonal wavelet is entirely defined by the scaling filter – a low-pass finite impulse response (FIR) filter of length 2N and sum 1. 
In biorthogonal wavelets, separate decomposition and reconstruction filters are defined. For analysis with orthogonal wavelets the high pass filter is calculated as the quadrature mirror filter of the low pass, and reconstruction filters are the time reverse of the decomposition filters. Daubechies and Symlet wavelets can be defined by the scaling filter. Scaling function Wavelets are defined by the wavelet function ψ(t) (i.e. the mother wavelet) and scaling function φ(t) (also called father wavelet) in the time domain. The wavelet function is in effect a band-pass filter and scaling that for each level halves its bandwidth. This creates the problem that in order to cover the entire spectrum, an infinite number of levels would be required. The scaling function filters the lowest level of the transform and ensures all the spectrum is covered. See for a detailed explanation. For a wavelet with compact support, φ(t) can be considered finite in length and is equivalent to the scaling filter g. Meyer wavelets can be defined by scaling functions Wavelet function The wavelet only has a time domain representation as the wavelet function ψ(t). For instance, Mexican hat wavelets can be defined by a wavelet function. See a list of a few continuous wavelets. History The development of wavelets can be linked to several separate trains of thought, starting with Alfréd Haar's work in the early 20th century. Later work by Dennis Gabor yielded Gabor atoms (1946), which are constructed similarly to wavelets, and applied to similar purposes. Notable contributions to wavelet theory since then can be attributed to George Zweig’s discovery of the continuous wavelet transform (CWT) in 1975 (originally called the cochlear transform and discovered while studying the reaction of the ear to sound), Pierre Goupillaud, Alex Grossmann and Jean Morlet's formulation of what is now known as the CWT (1982), Jan-Olov Strömberg's early work on discrete wavelets (1983), the Le Gall–Tabatabai (LGT) 5/3-taps non-orthogonal filter bank with linear phase (1988), Ingrid Daubechies' orthogonal wavelets with compact support (1988), Stéphane Mallat's non-orthogonal multiresolution framework (1989), Ali Akansu's binomial QMF (1990), Nathalie Delprat's time-frequency interpretation of the CWT (1991), Newland's harmonic wavelet transform (1993), and set partitioning in hierarchical trees (SPIHT) developed by Amir Said with William A. Pearlman in 1996. The JPEG 2000 standard was developed from 1997 to 2000 by a Joint Photographic Experts Group (JPEG) committee chaired by Touradj Ebrahimi (later the JPEG president). In contrast to the DCT algorithm used by the original JPEG format, JPEG 2000 instead uses discrete wavelet transform (DWT) algorithms. It uses the CDF 9/7 wavelet transform (developed by Ingrid Daubechies in 1992) for its lossy compression algorithm, and the Le Gall–Tabatabai (LGT) 5/3 discrete-time filter bank (developed by Didier Le Gall and Ali J. Tabatabai in 1988) for its lossless compression algorithm. JPEG 2000 technology, which includes the Motion JPEG 2000 extension, was selected as the video coding standard for digital cinema in 2004. Timeline First wavelet (Haar's wavelet) by Alfréd Haar (1909) Since the 1970s: George Zweig, Jean Morlet, Alex Grossmann Since the 1980s: Yves Meyer, Didier Le Gall, Ali J. Tabatabai, Stéphane Mallat, Ingrid Daubechies, Ronald Coifman, Ali Akansu, Victor Wickerhauser Since the 1990s: Nathalie Delprat, Newland, Amir Said, William A. 
Pearlman, Touradj Ebrahimi, JPEG 2000 Wavelet transforms A wavelet is a mathematical function used to divide a given function or continuous-time signal into different scale components. Usually one can assign a frequency range to each scale component. Each scale component can then be studied with a resolution that matches its scale. A wavelet transform is the representation of a function by wavelets. The wavelets are scaled and translated copies (known as "daughter wavelets") of a finite-length or fast-decaying oscillating waveform (known as the "mother wavelet"). Wavelet transforms have advantages over traditional Fourier transforms for representing functions that have discontinuities and sharp peaks, and for accurately deconstructing and reconstructing finite, non-periodic and/or non-stationary signals. Wavelet transforms are classified into discrete wavelet transforms (DWTs) and continuous wavelet transforms (CWTs). Note that both DWT and CWT are continuous-time (analog) transforms. They can be used to represent continuous-time (analog) signals. CWTs operate over every possible scale and translation whereas DWTs use a specific subset of scale and translation values or representation grid. There are a large number of wavelet transforms each suitable for different applications. For a full list see list of wavelet-related transforms but the common ones are listed below: Continuous wavelet transform (CWT) Discrete wavelet transform (DWT) Fast wavelet transform (FWT) Lifting scheme and generalized lifting scheme Wavelet packet decomposition (WPD) Stationary wavelet transform (SWT) Fractional Fourier transform (FRFT) Fractional wavelet transform (FRWT) Generalized transforms There are a number of generalized transforms of which the wavelet transform is a special case. For example, Yosef Joseph introduced scale into the Heisenberg group, giving rise to a continuous transform space that is a function of time, scale, and frequency. The CWT is a two-dimensional slice through the resulting 3d time-scale-frequency volume. Another example of a generalized transform is the chirplet transform in which the CWT is also a two dimensional slice through the chirplet transform. An important application area for generalized transforms involves systems in which high frequency resolution is crucial. For example, darkfield electron optical transforms intermediate between direct and reciprocal space have been widely used in the harmonic analysis of atom clustering, i.e. in the study of crystals and crystal defects. Now that transmission electron microscopes are capable of providing digital images with picometer-scale information on atomic periodicity in nanostructure of all sorts, the range of pattern recognition and strain/metrology applications for intermediate transforms with high frequency resolution (like brushlets and ridgelets) is growing rapidly. Fractional wavelet transform (FRWT) is a generalization of the classical wavelet transform in the fractional Fourier transform domains. This transform is capable of providing the time- and fractional-domain information simultaneously and representing signals in the time-fractional-frequency plane. Applications Generally, an approximation to DWT is used for data compression if a signal is already sampled, and the CWT for signal analysis. Thus, DWT approximation is commonly used in engineering and computer science, and the CWT in scientific research. 
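The "scaling filter" definition given earlier and the discrete wavelet transform just described can be made concrete in a few lines. The Python sketch below takes the standard 4-tap Daubechies (db2) scaling coefficients, derives the high-pass wavelet filter as its quadrature mirror filter, and performs one analysis level of a DWT by filtering and downsampling. It is a bare-bones illustration under the orthonormal normalization (coefficients summing to sqrt(2) rather than 1), with a periodic boundary treatment and an even-length signal assumed.

```python
import numpy as np

# 4-tap Daubechies (db2) scaling (low-pass) filter, orthonormal convention:
# the coefficients sum to sqrt(2); dividing by sqrt(2) gives the "sum 1" convention.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))

# Quadrature mirror filter: reverse the scaling filter and alternate the signs.
g = np.array([(-1) ** k * h[len(h) - 1 - k] for k in range(len(h))])

print("low-pass sum :", h.sum())        # close to sqrt(2)
print("orthogonality:", np.dot(h, g))   # close to 0

def dwt_level(x, h, g):
    """One analysis level: periodic convolution with h and g, then downsample by 2."""
    n = len(x)
    approx = np.array([sum(h[k] * x[(2 * i + k) % n] for k in range(len(h)))
                       for i in range(n // 2)])
    detail = np.array([sum(g[k] * x[(2 * i + k) % n] for k in range(len(g)))
                       for i in range(n // 2)])
    return approx, detail

x = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1 * np.random.default_rng(0).standard_normal(64)
a1, d1 = dwt_level(x, h, g)

# Orthonormal filters conserve energy: ||x||^2 == ||a1||^2 + ||d1||^2 (up to rounding).
print(np.allclose(np.sum(x**2), np.sum(a1**2) + np.sum(d1**2)))
```

Because the filters are short and each output sample touches only a fixed number of inputs, repeating this step on the approximation band gives the O(N) fast wavelet transform mentioned above.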
Like some other transforms, wavelet transforms can be used to transform data, then encode the transformed data, resulting in effective compression. For example, JPEG 2000 is an image compression standard that uses biorthogonal wavelets. This means that although the frame is overcomplete, it is a tight frame (see types of frames of a vector space), and the same frame functions (except for conjugation in the case of complex wavelets) are used for both analysis and synthesis, i.e., in both the forward and inverse transform. For details see wavelet compression. A related use is for smoothing/denoising data based on wavelet coefficient thresholding, also called wavelet shrinkage. By adaptively thresholding the wavelet coefficients that correspond to undesired frequency components smoothing and/or denoising operations can be performed. Wavelet transforms are also starting to be used for communication applications. Wavelet OFDM is the basic modulation scheme used in HD-PLC (a power line communications technology developed by Panasonic), and in one of the optional modes included in the IEEE 1901 standard. Wavelet OFDM can achieve deeper notches than traditional FFT OFDM, and wavelet OFDM does not require a guard interval (which usually represents significant overhead in FFT OFDM systems). As a representation of a signal Often, signals can be represented well as a sum of sinusoids. However, consider a non-continuous signal with an abrupt discontinuity; this signal can still be represented as a sum of sinusoids, but requires an infinite number, which is an observation known as Gibbs phenomenon. This, then, requires an infinite number of Fourier coefficients, which is not practical for many applications, such as compression. Wavelets are more useful for describing these signals with discontinuities because of their time-localized behavior (both Fourier and wavelet transforms are frequency-localized, but wavelets have an additional time-localization property). Because of this, many types of signals in practice may be non-sparse in the Fourier domain, but very sparse in the wavelet domain. This is particularly useful in signal reconstruction, especially in the recently popular field of compressed sensing. (Note that the short-time Fourier transform (STFT) is also localized in time and frequency, but there are often problems with the frequency-time resolution trade-off. Wavelets are better signal representations because of multiresolution analysis.) This motivates why wavelet transforms are now being adopted for a vast number of applications, often replacing the conventional Fourier transform. Many areas of physics have seen this paradigm shift, including molecular dynamics, chaos theory, ab initio calculations, astrophysics, gravitational wave transient data analysis, density-matrix localisation, seismology, optics, turbulence and quantum mechanics. This change has also occurred in image processing, EEG, EMG, ECG analyses, brain rhythms, DNA analysis, protein analysis, climatology, human sexual response analysis, general signal processing, speech recognition, acoustics, vibration signals, computer graphics, multifractal analysis, and sparse coding. In computer vision and image processing, the notion of scale space representation and Gaussian derivative operators is regarded as a canonical multi-scale representation. Wavelet denoising Suppose we measure a noisy signal , where represents the signal and represents the noise. 
Assume has a sparse representation in a certain wavelet basis, and Let the wavelet transform of be , where is the wavelet transform of the signal component and is the wavelet transform of the noise component. Most elements in are 0 or close to 0, and Since is orthogonal, the estimation problem amounts to recovery of a signal in iid Gaussian noise. As is sparse, one method is to apply a Gaussian mixture model for . Assume a prior , where is the variance of "significant" coefficients and is the variance of "insignificant" coefficients. Then , is called the shrinkage factor, which depends on the prior variances and . By setting coefficients that fall below a shrinkage threshold to zero, once the inverse transform is applied, an expectedly small amount of signal is lost due to the sparsity assumption. The larger coefficients are expected to primarily represent signal due to sparsity, and statistically very little of the signal, albeit the majority of the noise, is expected to be represented in such lower magnitude coefficients... therefore the zeroing-out operation is expected to remove most of the noise and not much signal. Typically, the above-threshold coefficients are not modified during this process. Some algorithms for wavelet-based denoising may attenuate larger coefficients as well, based on a statistical estimate of the amount of noise expected to be removed by such an attenuation. At last, apply the inverse wavelet transform to obtain Multiscale climate network Agarwal et al. proposed wavelet based advanced linear and nonlinear methods to construct and investigate Climate as complex networks at different timescales. Climate networks constructed using SST datasets at different timescale averred that wavelet based multi-scale analysis of climatic processes holds the promise of better understanding the system dynamics that may be missed when processes are analyzed at one timescale only List of wavelets Discrete wavelets Beylkin (18) Moore Wavelet Morlet wavelet Biorthogonal nearly coiflet (BNC) wavelets Coiflet (6, 12, 18, 24, 30) Cohen-Daubechies-Feauveau wavelet (Sometimes referred to as CDF N/P or Daubechies biorthogonal wavelets) Daubechies wavelet (2, 4, 6, 8, 10, 12, 14, 16, 18, 20, etc.) Binomial QMF (Also referred to as Daubechies wavelet) Haar wavelet Mathieu wavelet Legendre wavelet Villasenor wavelet Symlet Continuous wavelets Real-valued Beta wavelet Hermitian wavelet Meyer wavelet Mexican hat wavelet Poisson wavelet Shannon wavelet Spline wavelet Strömberg wavelet Complex-valued Complex Mexican hat wavelet fbsp wavelet Morlet wavelet Shannon wavelet Modified Morlet wavelet See also Chirplet transform Curvelet Digital cinema Dimension reduction Filter banks Fourier-related transforms Fractal compression Fractional Fourier transform Huygens–Fresnel principle (physical wavelets) JPEG 2000 Least-squares spectral analysis for computing periodicity in any including unevenly spaced data Morlet wavelet Multiresolution analysis Noiselet Non-separable wavelet Scale space Scaled correlation Shearlet Short-time Fourier transform Spectrogram Ultra wideband radio – transmits wavelets Wavelet for multidimensional signals analysis References Further reading External links 1st NJIT Symposium on Wavelets (April 30, 1990) (First Wavelets Conference in USA) Binomial-QMF Daubechies Wavelets Wavelets by Gilbert Strang, American Scientist 82 (1994) 250–255. 
(A very short and excellent introduction) Course on Wavelets given at UC Santa Barbara, 2004 Wavelets for Kids (PDF file) (Introductory (for very smart kids!)) WITS: Where Is The Starlet? A dictionary of tens of wavelets and wavelet-related terms ending in -let, from activelets to x-lets through bandlets, contourlets, curvelets, noiselets, wedgelets. The Fractional Spline Wavelet Transform describes a fractional wavelet transform based on fractional b-Splines. A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity provides a tutorial on two-dimensional oriented wavelets and related geometric multiscale transforms. Concise Introduction to Wavelets by René Puschinger A Really Friendly Guide To Wavelets by Clemens Valens Time–frequency analysis Signal processing
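The wavelet denoising procedure described above can be demonstrated end to end with the simplest orthogonal wavelet, the Haar wavelet. The Python example below is a minimal illustration rather than a production denoiser: it applies a multilevel orthonormal Haar transform, soft-thresholds the detail coefficients with the universal threshold sigma * sqrt(2 ln N), and inverts the transform. The Haar basis, the universal threshold, and the test signal are standard textbook defaults chosen here for brevity, not prescriptions from the article.

```python
import numpy as np

def haar_forward(x, levels):
    """Multilevel orthonormal Haar DWT. Returns final approximation plus detail bands."""
    a, details = x.astype(float), []
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(d)
    return a, details

def haar_inverse(a, details):
    """Invert haar_forward, finest level last."""
    for d in reversed(details):
        x = np.empty(2 * len(a))
        x[0::2], x[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        a = x
    return a

rng = np.random.default_rng(1)
n = 1024
t = np.linspace(0, 1, n, endpoint=False)
clean = np.where(t < 0.5, 1.0, -0.5) + 0.3 * np.sin(2 * np.pi * 5 * t)   # step plus smooth part
sigma = 0.2
noisy = clean + sigma * rng.standard_normal(n)

a, details = haar_forward(noisy, levels=6)
lam = sigma * np.sqrt(2 * np.log(n))                                        # universal threshold
details = [np.sign(d) * np.maximum(np.abs(d) - lam, 0.0) for d in details]  # soft thresholding
denoised = haar_inverse(a, details)

print("noisy RMSE   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))
```

The small detail coefficients, which are dominated by noise under the sparsity assumption, are zeroed or shrunk, while the few large coefficients carrying the step edge and the slow oscillation pass through, which is the behaviour the shrinkage discussion above describes.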
Wavelet
[ "Physics", "Technology", "Engineering" ]
6,050
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Spectrum (physical sciences)", "Time–frequency analysis", "Frequency-domain analysis" ]
50,909
https://en.wikipedia.org/wiki/Basis%20function
In mathematics, a basis function is an element of a particular basis for a function space. Every function in the function space can be represented as a linear combination of basis functions, just as every vector in a vector space can be represented as a linear combination of basis vectors. In numerical analysis and approximation theory, basis functions are also called blending functions, because of their use in interpolation: In this application, a mixture of the basis functions provides an interpolating function (with the "blend" depending on the evaluation of the basis functions at the data points). Examples Monomial basis for Cω The monomial basis for the vector space of analytic functions is given by This basis is used in Taylor series, amongst others. Monomial basis for polynomials The monomial basis also forms a basis for the vector space of polynomials. After all, every polynomial can be written as for some , which is a linear combination of monomials. Fourier basis for L2[0,1] Sines and cosines form an (orthonormal) Schauder basis for square-integrable functions on a bounded domain. As a particular example, the collection forms a basis for L2[0,1]. See also Basis (linear algebra) (Hamel basis) Schauder basis (in a Banach space) Dual basis Biorthogonal system (Markushevich basis) Orthonormal basis in an inner-product space Orthogonal polynomials Fourier analysis and Fourier series Harmonic analysis Orthogonal wavelet Biorthogonal wavelet Radial basis function Finite-elements (bases) Functional analysis Approximation theory Numerical analysis References Numerical analysis Fourier analysis Linear algebra Numerical linear algebra Types of functions
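A concrete way to see basis functions at work is to expand a function numerically in a Fourier basis for L2[0,1]. The Python sketch below uses the common orthonormal choice 1, sqrt(2)cos(2*pi*n*x), sqrt(2)sin(2*pi*n*x), computes the expansion coefficients by numerical integration, and rebuilds a truncated approximation; the target function and the truncation order are arbitrary illustrative choices, and the exact basis collection is one standard convention rather than a quotation from the article.

```python
import numpy as np

# Orthonormal Fourier basis for L2[0,1]: 1, sqrt(2)cos(2*pi*n*x), sqrt(2)sin(2*pi*n*x).
def fourier_basis(n_max):
    funcs = [lambda x: np.ones_like(x)]
    for n in range(1, n_max + 1):
        funcs.append(lambda x, n=n: np.sqrt(2.0) * np.cos(2 * np.pi * n * x))
        funcs.append(lambda x, n=n: np.sqrt(2.0) * np.sin(2 * np.pi * n * x))
    return funcs

num_points = 4000
x = np.linspace(0.0, 1.0, num_points, endpoint=False)
dx = 1.0 / num_points
f = x * (1.0 - x)                      # arbitrary target function on [0, 1]

basis = fourier_basis(n_max=5)
# Expansion coefficients c_k = <f, e_k>, approximated here by a Riemann sum.
coeffs = [np.sum(f * e(x)) * dx for e in basis]

# Truncated expansion: a linear combination of basis functions approximating f.
approx = sum(c * e(x) for c, e in zip(coeffs, basis))
print("max abs error of the truncated expansion:", float(np.max(np.abs(f - approx))))
```

Adding more basis functions (raising n_max) drives the error down, which is the sense in which every function in the space is represented as a linear combination of the basis functions.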
Basis function
[ "Mathematics" ]
348
[ "Functions and mappings", "Mathematical objects", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Linear algebra", "Types of functions", "Approximations", "Algebra" ]
50,958
https://en.wikipedia.org/wiki/Sulfur%20dioxide
Sulfur dioxide (IUPAC-recommended spelling) or sulphur dioxide (traditional Commonwealth English) is the chemical compound with the formula SO2. It is a colorless gas with a pungent smell that is responsible for the odor of burnt matches. It is released naturally by volcanic activity and is produced as a by-product of copper extraction and the burning of sulfur-bearing fossil fuels. Sulfur dioxide is somewhat toxic to humans, although only when inhaled in relatively large quantities for a period of several minutes or more. It was known to medieval alchemists as "volatile spirit of sulfur". Structure and bonding SO2 is a bent molecule with C2v point group symmetry. A valence bond theory approach considering just s and p orbitals would describe the bonding in terms of resonance between two resonance structures. The sulfur–oxygen bond has a bond order of 1.5. There is support for this simple approach that does not invoke d orbital participation. In terms of electron-counting formalism, the sulfur atom has an oxidation state of +4 and a formal charge of +1. Occurrence Sulfur dioxide is found on Earth and exists in very small concentrations in the atmosphere at about 15 ppb. On other planets, sulfur dioxide can be found in various concentrations, the most significant being the atmosphere of Venus, where it is the third-most abundant atmospheric gas at 150 ppm. There, it reacts with water to form clouds of sulfurous acid (SO2 + H2O ⇌ H2SO3), is a key component of the planet's global atmospheric sulfur cycle and contributes to global warming. It has been implicated as a key agent in the warming of early Mars, with estimates of concentrations in the lower atmosphere as high as 100 ppm, though it only exists in trace amounts. On both Venus and Mars, as on Earth, its primary source is thought to be volcanic. The atmosphere of Io, a natural satellite of Jupiter, is 90% sulfur dioxide and trace amounts are thought to also exist in the atmosphere of Jupiter. The James Webb Space Telescope has observed the presence of sulfur dioxide on the exoplanet WASP-39b, where it is formed through photochemistry in the planet's atmosphere. As an ice, it is thought to exist in abundance on the Galilean moons—as subliming ice or frost on the trailing hemisphere of Io, and in the crust and mantle of Europa, Ganymede, and Callisto, possibly also in liquid form and readily reacting with water. Production Sulfur dioxide is primarily produced for sulfuric acid manufacture (see contact process, though other processes predated it, dating back to at least the 16th century). In the United States in 1979, 23.6 million metric tons (26 million U.S. short tons) of sulfur dioxide were used in this way, compared with 150,000 metric tons (165,347 U.S. short tons) used for other purposes. Most sulfur dioxide is produced by the combustion of elemental sulfur. Some sulfur dioxide is also produced by roasting pyrite and other sulfide ores in air. Combustion routes Sulfur dioxide is the product of the burning of sulfur or of burning materials that contain sulfur: S8 + 8 O2 → 8 SO2, ΔH = −297 kJ/mol To aid combustion, liquified sulfur is sprayed through an atomizing nozzle to generate fine drops of sulfur with a large surface area. The reaction is exothermic, and the combustion produces very high temperatures. The significant amount of heat produced is recovered by steam generation that can subsequently be converted to electricity. The combustion of hydrogen sulfide and organosulfur compounds proceeds similarly. 
For example: 2 H2S + 3 O2 → 2 SO2 + 2 H2O The roasting of sulfide ores such as pyrite, sphalerite, and cinnabar (mercury sulfide) also releases SO2: 4 FeS2 + 11 O2 → 2 Fe2O3 + 8 SO2 2 ZnS + 3 O2 → 2 ZnO + 2 SO2 4 FeS + 7 O2 → 2 Fe2O3 + 4 SO2 A combination of these reactions is responsible for the largest source of sulfur dioxide, volcanic eruptions. These events can release millions of tons of SO2. Reduction of higher oxides Sulfur dioxide can also be a byproduct in the manufacture of calcium silicate cement; CaSO4 is heated with coke and sand in this process: 2 CaSO4 + 2 SiO2 + C → 2 CaSiO3 + 2 SO2 + CO2 Until the 1970s, commercial quantities of sulfuric acid and cement were produced by this process in Whitehaven, England. Upon being mixed with shale or marl, and roasted, the sulfate liberated sulfur dioxide gas, used in sulfuric acid production; the reaction also produced calcium silicate, a precursor in cement production. On a laboratory scale, the action of hot concentrated sulfuric acid on copper turnings produces sulfur dioxide: Cu + 2 H2SO4 → CuSO4 + SO2 + 2 H2O Tin also reacts with concentrated sulfuric acid, but it produces tin(II) sulfate, which can later be pyrolyzed at 360 °C into tin dioxide and dry sulfur dioxide: Sn + 2 H2SO4 → SnSO4 + SO2 + 2 H2O SnSO4 → SnO2 + SO2 From sulfites The reverse reaction occurs upon acidification: Na2SO3 + 2 HCl → 2 NaCl + SO2 + H2O Reactions Sulfites result by the action of aqueous base on sulfur dioxide: SO2 + 2 NaOH → Na2SO3 + H2O Sulfur dioxide is a mild but useful reducing agent. It is oxidized by halogens to give the sulfuryl halides, such as sulfuryl chloride: SO2 + Cl2 → SO2Cl2 Sulfur dioxide is the oxidising agent in the Claus process, which is conducted on a large scale in oil refineries. Here, sulfur dioxide is reduced by hydrogen sulfide to give elemental sulfur: SO2 + 2 H2S → 3 S + 2 H2O The sequential oxidation of sulfur dioxide followed by its hydration is used in the production of sulfuric acid: SO2 + ½ O2 + H2O → H2SO4 Sulfur dioxide dissolves in water to give "sulfurous acid", which cannot be isolated and is instead an acidic solution of bisulfite, and possibly sulfite, ions: SO2 + H2O ⇌ HSO3− + H+, Ka = 1.54 × 10−2; pKa = 1.81 Laboratory reactions Sulfur dioxide is one of the few common acidic yet reducing gases. It turns moist litmus pink (being acidic), then white (due to its bleaching effect). It may be identified by bubbling it through a dichromate solution, turning the solution from orange to green (Cr3+ (aq)). It can also reduce ferric ions to ferrous. Sulfur dioxide can react with certain 1,3-dienes in a cheletropic reaction to form cyclic sulfones. This reaction is exploited on an industrial scale for the synthesis of sulfolane, which is an important solvent in the petrochemical industry. Sulfur dioxide can bind to metal ions as a ligand to form metal sulfur dioxide complexes, typically where the transition metal is in oxidation state 0 or +1. Many different bonding modes (geometries) are recognized, but in most cases, the ligand is monodentate, attached to the metal through sulfur, which can be either planar or pyramidal η1. As a η1-SO2 (S-bonded planar) ligand, sulfur dioxide functions as a Lewis base using the lone pair on S. SO2 functions as a Lewis acid in its η1-SO2 (S-bonded pyramidal) bonding mode with metals and in its 1:1 adducts with Lewis bases such as dimethylacetamide and trimethyl amine. When bonding to Lewis bases, the acid parameters of SO2 are EA = 0.51 and CA = 1.56. Uses The overarching, dominant use of sulfur dioxide is in the production of sulfuric acid. Precursor to sulfuric acid Sulfur dioxide is an intermediate in the production of sulfuric acid, being converted to sulfur trioxide, and then to oleum, which is made into sulfuric acid. Sulfur dioxide for this purpose is made when sulfur combines with oxygen. 
The method of converting sulfur dioxide to sulfuric acid is called the contact process. Several million tons are produced annually for this purpose. Food preservative Sulfur dioxide is sometimes used as a preservative for dried apricots, dried figs, and other dried fruits, owing to its antimicrobial properties and ability to prevent oxidation, and is called E220 when used in this way in Europe. As a preservative, it maintains the colorful appearance of the fruit and prevents rotting. Historically, molasses was "sulfured" as a preservative and also to lighten its color. Treatment of dried fruit was usually done outdoors, by igniting sublimed sulfur and burning in an enclosed space with the fruits. Fruits may be sulfured by dipping them into an either sodium bisulfite, sodium sulfite or sodium metabisulfite. Winemaking Sulfur dioxide was first used in winemaking by the Romans, when they discovered that burning sulfur candles inside empty wine vessels keeps them fresh and free from vinegar smell. It is still an important compound in winemaking, and is measured in parts per million (ppm) in wine. It is present even in so-called unsulfurated wine at concentrations of up to 10 mg/L. It serves as an antibiotic and antioxidant, protecting wine from spoilage by bacteria and oxidation – a phenomenon that leads to the browning of the wine and a loss of cultivar specific flavors. Its antimicrobial action also helps minimize volatile acidity. Wines containing sulfur dioxide are typically labeled with "containing sulfites". Sulfur dioxide exists in wine in free and bound forms, and the combinations are referred to as total SO2. Binding, for instance to the carbonyl group of acetaldehyde, varies with the wine in question. The free form exists in equilibrium between molecular SO2 (as a dissolved gas) and bisulfite ion, which is in turn in equilibrium with sulfite ion. These equilibria depend on the pH of the wine. Lower pH shifts the equilibrium towards molecular (gaseous) SO2, which is the active form, while at higher pH more SO2 is found in the inactive sulfite and bisulfite forms. The molecular SO2 is active as an antimicrobial and antioxidant, and this is also the form which may be perceived as a pungent odor at high levels. Wines with total SO2 concentrations below 10 ppm do not require "contains sulfites" on the label by US and EU laws. The upper limit of total SO2 allowed in wine in the US is 350 ppm; in the EU it is 160 ppm for red wines and 210 ppm for white and rosé wines. In low concentrations, SO2 is mostly undetectable in wine, but at free SO2 concentrations over 50 ppm, SO2 becomes evident in the smell and taste of wine. SO2 is also a very important compound in winery sanitation. Wineries and equipment must be kept clean, and because bleach cannot be used in a winery due to the risk of cork taint, a mixture of SO2, water, and citric acid is commonly used to clean and sanitize equipment. Ozone (O3) is now used extensively for sanitizing in wineries due to its efficacy, and because it does not affect the wine or most equipment. As a reducing agent Sulfur dioxide is also a good reductant. In the presence of water, sulfur dioxide is able to decolorize substances. Specifically, it is a useful reducing bleach for papers and delicate materials such as clothes. This bleaching effect normally does not last very long. Oxygen in the atmosphere reoxidizes the reduced dyes, restoring the color. In municipal wastewater treatment, sulfur dioxide is used to treat chlorinated wastewater prior to release. 
In municipal wastewater treatment, sulfur dioxide is used to treat chlorinated wastewater prior to release. Sulfur dioxide reduces free and combined chlorine to chloride. Sulfur dioxide is fairly soluble in water; by both IR and Raman spectroscopy, the hypothetical sulfurous acid, H2SO3, is not present to any extent. However, such solutions do show spectra of the hydrogen sulfite ion, HSO3−, formed by reaction with water, and it is in fact the actual reducing agent present: SO2 + H2O ⇌ HSO3− + H+ As a fumigant At the beginning of the 20th century, sulfur dioxide was used in Buenos Aires as a fumigant to kill rats that carried the Yersinia pestis bacterium, which causes bubonic plague. The application was successful, and the method was extended to other areas in South America. In Buenos Aires, where these apparatuses were known as the Sulfurozador, and later also in Rio de Janeiro, New Orleans and San Francisco, the sulfur dioxide treatment machines were brought into the streets to enable extensive disinfection campaigns, with effective results. Biochemical and biomedical roles Sulfur dioxide or its conjugate base bisulfite is produced biologically as an intermediate in both sulfate-reducing organisms and sulfur-oxidizing bacteria. The role of sulfur dioxide in mammalian biology is not yet well understood. Sulfur dioxide blocks nerve signals from the pulmonary stretch receptors and abolishes the Hering–Breuer inflation reflex. It is considered that endogenous sulfur dioxide plays a significant physiological role in regulating cardiac and blood vessel function, and aberrant or deficient sulfur dioxide metabolism can contribute to several different cardiovascular diseases, such as arterial hypertension, atherosclerosis, pulmonary arterial hypertension, and stenocardia. It was shown that in children with pulmonary arterial hypertension due to congenital heart diseases the level of homocysteine is higher and the level of endogenous sulfur dioxide is lower than in normal control children. Moreover, these biochemical parameters strongly correlated with the severity of pulmonary arterial hypertension. The authors considered homocysteine to be one of the useful biochemical markers of disease severity and sulfur dioxide metabolism to be one of the potential therapeutic targets in those patients. Endogenous sulfur dioxide also has been shown to lower the proliferation rate of endothelial smooth muscle cells in blood vessels, via lowering the MAPK activity and activating adenylyl cyclase and protein kinase A. Smooth muscle cell proliferation is one of the important mechanisms of hypertensive remodeling of blood vessels and their stenosis, so it is an important pathogenetic mechanism in arterial hypertension and atherosclerosis. Endogenous sulfur dioxide in low concentrations causes endothelium-dependent vasodilation. In higher concentrations it causes endothelium-independent vasodilation and has a negative inotropic effect on cardiac output function, thus effectively lowering blood pressure and myocardial oxygen consumption. The vasodilating and bronchodilating effects of sulfur dioxide are mediated via ATP-dependent calcium channels and L-type ("dihydropyridine") calcium channels. Endogenous sulfur dioxide is also a potent anti-inflammatory, antioxidant and cytoprotective agent. It lowers blood pressure and slows hypertensive remodeling of blood vessels, especially thickening of their intima. It also regulates lipid metabolism. Endogenous sulfur dioxide also diminishes myocardial damage caused by isoproterenol adrenergic hyperstimulation, and strengthens the myocardial antioxidant defense reserve.
As a reagent and solvent in the laboratory Sulfur dioxide is a versatile inert solvent widely used for dissolving highly oxidizing salts. It is also used occasionally as a source of the sulfonyl group in organic synthesis. Treatment of aryl diazonium salts with sulfur dioxide and cuprous chloride yields the corresponding aryl sulfonyl chloride. As a result of its very low Lewis basicity, it is often used as a low-temperature solvent/diluent for superacids like magic acid (FSO3H/SbF5), allowing for highly reactive species like tert-butyl cation to be observed spectroscopically at low temperature (though tertiary carbocations do react with SO2 above about −30 °C, and even less reactive solvents like SO2ClF must be used at these higher temperatures). As a refrigerant Being easily condensed and possessing a high heat of evaporation, sulfur dioxide is a candidate material for refrigerants. Before the development of chlorofluorocarbons, sulfur dioxide was used as a refrigerant in home refrigerators. As an indicator of volcanic activity Sulfur dioxide content in naturally released geothermal gases is measured by the Icelandic Meteorological Office as an indicator of possible volcanic activity. Safety Ingestion In the United States, the Center for Science in the Public Interest lists the two food preservatives, sulfur dioxide and sodium bisulfite, as being safe for human consumption except for certain asthmatic individuals who may be sensitive to them, especially in large amounts. Symptoms of sensitivity to sulfiting agents, including sulfur dioxide, manifest as potentially life-threatening trouble breathing within minutes of ingestion. Sulfites may also cause symptoms in non-asthmatic individuals, namely dermatitis, urticaria, flushing, hypotension, abdominal pain and diarrhea, and even life-threatening anaphylaxis. Inhalation Incidental exposure to sulfur dioxide is routine, e.g. the smoke from matches, coal, and sulfur-containing fuels like bunker fuel. Relative to other chemicals, it is only mildly toxic and requires high concentrations to be actively hazardous. However, its ubiquity makes it a major air pollutant with significant impacts on human health. In 2008, the American Conference of Governmental Industrial Hygienists reduced the short-term exposure limit to 0.25 parts per million (ppm). In the US, the OSHA set the PEL at 5 ppm (13 mg/m3) time-weighted average. Also in the US, NIOSH set the IDLH at 100 ppm. In 2010, the EPA "revised the primary SO2 NAAQS by establishing a new one-hour standard at a level of 75 parts per billion (ppb). EPA revoked the two existing primary standards because they would not provide additional public health protection given a one-hour standard at 75 ppb."
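A small unit-conversion sketch (not part of the original article) relating the ppm and mg/m3 figures quoted above: for a gas at roughly 25 °C and 1 atm, concentration in mg/m3 ≈ ppm × molar mass / 24.45, where 24.45 L/mol is the molar volume under those conditions. Applied to SO2 (molar mass about 64.07 g/mol), this reproduces the ~13 mg/m3 quoted alongside the 5 ppm OSHA PEL.

```python
# Convert a gas-phase concentration between ppm (by volume) and mg/m^3,
# assuming ideal-gas behaviour at ~25 degC and 1 atm (molar volume 24.45 L/mol).
MOLAR_VOLUME_L = 24.45
M_SO2 = 64.07  # g/mol

def ppm_to_mg_per_m3(ppm: float, molar_mass: float = M_SO2) -> float:
    return ppm * molar_mass / MOLAR_VOLUME_L

def mg_per_m3_to_ppm(mg_m3: float, molar_mass: float = M_SO2) -> float:
    return mg_m3 * MOLAR_VOLUME_L / molar_mass

print(ppm_to_mg_per_m3(5.0))    # OSHA PEL, 5 ppm        -> ~13.1 mg/m^3
print(ppm_to_mg_per_m3(0.25))   # ACGIH STEL, 0.25 ppm   -> ~0.66 mg/m^3
print(ppm_to_mg_per_m3(0.075))  # EPA one-hour NAAQS, 75 ppb -> ~0.20 mg/m^3
```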
Environmental role Air pollution Major volcanic eruptions have an overwhelming effect on sulfate aerosol concentrations in the years when they occur: eruptions ranking 4 or greater on the Volcanic Explosivity Index inject SO2 and water vapor directly into the stratosphere, where they react to create sulfate aerosol plumes. Volcanic emissions vary significantly in composition, and have complex chemistry due to the presence of ash particulates and a wide variety of other elements in the plume. Only stratovolcanoes containing primarily felsic magmas are responsible for these fluxes, as mafic magma erupted in shield volcanoes doesn't result in plumes which reach the stratosphere. However, before the Industrial Revolution, the dimethyl sulfide pathway was the largest contributor to sulfate aerosol concentrations in a more average year with no major volcanic activity. According to the IPCC First Assessment Report, published in 1990, volcanic emissions usually amounted to around 10 million tons in the 1980s, while dimethyl sulfide amounted to 40 million tons. Yet, by that point, the global human-caused emissions of sulfur into the atmosphere became "at least as large" as all natural emissions of sulfur-containing compounds combined: they were at less than 3 million tons per year in 1860, and then they increased to 15 million tons in 1900, 40 million tons in 1940 and about 80 million tons in 1980. The same report noted that "in the industrialized regions of Europe and North America, anthropogenic emissions dominate over natural emissions by about a factor of ten or even more". In the eastern United States, sulfate particles were estimated to account for 25% or more of all air pollution. Exposure to sulfur dioxide emissions by coal power plants (coal PM2.5) in the US was associated with 2.1 times greater mortality risk than exposure to PM2.5 from all sources. Meanwhile, the Southern Hemisphere had much lower concentrations due to being much less densely populated, with an estimated 90% of the human population in the north. In the early 1990s, anthropogenic sulfur dominated in the Northern Hemisphere, where only 16% of annual sulfur emissions were natural, yet accounted for less than half of the emissions in the Southern Hemisphere. Such an increase in sulfate aerosol emissions had a variety of effects. At the time, the most visible one was acid rain, caused by precipitation from clouds carrying high concentrations of sulfate aerosols in the troposphere. At its peak, acid rain eliminated brook trout and some other fish species and insect life from lakes and streams in geographically sensitive areas, such as the Adirondack Mountains in the United States. Acid rain worsens soil function as some of its microbiota is lost and heavy metals like aluminium are mobilized (spread more easily), while essential nutrients and minerals such as magnesium can leach away for the same reason. Ultimately, plants unable to tolerate lowered pH are killed, with montane forests being some of the worst-affected ecosystems due to their regular exposure to sulfate-carrying fog at high altitudes. While acid rain was too dilute to affect human health directly, breathing smog or even any air with elevated sulfate concentrations is known to contribute to heart and lung conditions, including asthma and bronchitis. Further, this form of pollution is linked to preterm birth and low birth weight, with a study of 74,671 pregnant women in Beijing finding that every additional 100 μg/m3 of SO2 in the air reduced infants' weight by 7.3 g, making it and other forms of air pollution the largest attributable risk factor for low birth weight ever observed. Control measures Due largely to the US EPA's Acid Rain Program, the U.S. had a 33% decrease in emissions between 1983 and 2002. This improvement resulted in part from flue-gas desulfurization, a technology that enables SO2 to be chemically bound in power plants burning sulfur-containing coal or petroleum. In particular, calcium oxide (lime) reacts with sulfur dioxide to form calcium sulfite: CaO + SO2 → CaSO3 Aerobic oxidation of the CaSO3 gives CaSO4, anhydrite. Most gypsum sold in Europe comes from flue-gas desulfurization.
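As a rough illustration of the scrubbing chemistry just described (not from the original article): from CaO + SO2 → CaSO3 and the molar masses, each tonne of SO2 captured consumes a bit under a tonne of lime and, after aerobic oxidation of the sulfite, yields roughly two tonnes of anhydrite (or closer to 2.7 tonnes once the water of hydration in gypsum is included).

```python
# Approximate molar masses (g/mol) for the flue-gas desulfurization chemistry
# CaO + SO2 -> CaSO3, followed by aerobic oxidation CaSO3 + 1/2 O2 -> CaSO4.
M_SO2, M_CaO, M_CaSO4, M_GYPSUM = 64.07, 56.08, 136.14, 172.17  # gypsum = CaSO4.2H2O

def lime_needed(so2_tonnes: float) -> float:
    """Tonnes of CaO consumed to capture the given tonnage of SO2 (1:1 molar)."""
    return so2_tonnes * M_CaO / M_SO2

def anhydrite_formed(so2_tonnes: float) -> float:
    """Tonnes of CaSO4 formed after aerobic oxidation of the sulfite."""
    return so2_tonnes * M_CaSO4 / M_SO2

def gypsum_formed(so2_tonnes: float) -> float:
    """Tonnes of CaSO4.2H2O if the product is fully hydrated to gypsum."""
    return so2_tonnes * M_GYPSUM / M_SO2

captured = 1.0  # tonne of SO2
print(lime_needed(captured), anhydrite_formed(captured), gypsum_formed(captured))
# ~0.88 t CaO, ~2.12 t anhydrite, ~2.69 t gypsum per tonne of SO2 captured
```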
To control sulfur emissions, dozens of methods with relatively high efficiencies have been developed for fitting to coal-fired power plants. Sulfur can be removed from coal during burning by using limestone as a bed material in fluidized bed combustion. Sulfur can also be removed from fuels before burning, preventing formation of SO2 when the fuel is burnt. The Claus process is used in refineries to produce sulfur as a byproduct. The Stretford process has also been used to remove sulfur from fuel. Redox processes using iron oxides can also be used, for example, Lo-Cat or Sulferox. Fuel additives such as calcium additives and magnesium carboxylate may be used in marine engines to lower the emission of sulfur dioxide gases into the atmosphere. Effects on ozone layer Sulfur dioxide aerosols in the stratosphere can contribute to ozone depletion in the presence of chlorofluorocarbons and other halogenated ozone-depleting substances. The effects of volcanic eruptions containing sulfur dioxide aerosols on the ozone layer are complex, however. In the absence of anthropogenic or biogenic halogenated compounds in the lower stratosphere, depletion of dinitrogen pentoxide in the middle stratosphere associated with its reactivity to the aerosols can promote ozone formation. Injection of sulfur dioxide and large amounts of water vapor into the stratosphere following the 2022 eruption of Hunga Tonga-Hunga Haʻapai resulted in altered atmospheric circulation that promoted a decrease in ozone in the southern latitudes but an increase in the tropics. The additional presence of hydrochloric acid in eruptions can result in net ozone depletion. Impact on climate change Projected impacts Solar geoengineering Properties Table of thermal and physical properties of saturated liquid sulfur dioxide See also Bunker fuel National Ambient Air Quality Standards Sulfur trioxide Sulfur–iodine cycle References External links Global map of sulfur dioxide distribution United States Environmental Protection Agency Sulfur Dioxide page International Chemical Safety Card 0074 IARC Monographs. "Sulfur Dioxide and some Sulfites, Bisulfites and Metabisulfites". vol. 54. 1992. p. 131. NIOSH Pocket Guide to Chemical Hazards CDC – Sulfur Dioxide – NIOSH Workplace Safety and Health Topic Sulfur Dioxide, Molecule of the Month Acidic oxides IARC Group 3 carcinogens Industrial gases Interchalcogens Preservatives Refrigerants Airborne pollutants Sulfur oxides Gaseous signaling molecules Trace gases Triatomic molecules Reducing agents Inorganic solvents Hypervalent molecules E-number additives Sulfur(IV) compounds
Sulfur dioxide
[ "Physics", "Chemistry" ]
5,127
[ "Redox", "Molecules", "Reducing agents", "Signal transduction", "Hypervalent molecules", "Triatomic molecules", "Gaseous signaling molecules", "Industrial gases", "Chemical process engineering", "Matter" ]
50,964
https://en.wikipedia.org/wiki/Liouville%20number
In number theory, a Liouville number is a real number with the property that, for every positive integer , there exists a pair of integers with such that The inequality implies that Liouville numbers possess an excellent sequence of rational number approximations. In 1844, Joseph Liouville proved a bound showing that there is a limit to how well algebraic numbers can be approximated by rational numbers, and he defined Liouville numbers specifically so that they would have rational approximations better than the ones allowed by this bound. Liouville also exhibited examples of Liouville numbers thereby establishing the existence of transcendental numbers for the first time. One of these examples is Liouville's constant in which the nth digit after the decimal point is 1 if is the factorial of a positive integer and 0 otherwise. It is known that and , although transcendental, are not Liouville numbers. The existence of Liouville numbers (Liouville's constant) Liouville numbers can be shown to exist by an explicit construction. For any integer and any sequence of integers such that for all and for infinitely many , define the number In the special case when , and for all , the resulting number is called Liouville's constant: It follows from the definition of that its base- representation is where the th term is in the th place. Since this base- representation is non-repeating it follows that is not a rational number. Therefore, for any rational number , . Now, for any integer , and can be defined as follows: Then, Therefore, any such is a Liouville number. Notes on the proof The inequality follows since ak ∈ {0, 1, 2, ..., b−1} for all k, so at most ak = b−1. The largest possible sum would occur if the sequence of integers (a1, a2, ...) were (b−1, b−1, ...), i.e. ak = b−1, for all k. will thus be less than or equal to this largest possible sum. The strong inequality follows from the motivation to eliminate the series by way of reducing it to a series for which a formula is known. In the proof so far, the purpose for introducing the inequality in #1 comes from intuition that (the geometric series formula); therefore, if an inequality can be found from that introduces a series with (b−1) in the numerator, and if the denominator term can be further reduced from to , as well as shifting the series indices from 0 to , then both series and (b−1) terms will be eliminated, getting closer to a fraction of the form , which is the end-goal of the proof. This motivation is increased here by selecting now from the sum a partial sum. Observe that, for any term in , since b ≥ 2, then , for all k (except for when n=1). Therefore, (since, even if n=1, all subsequent terms are smaller). In order to manipulate the indices so that k starts at 0, partial sum will be selected from within (also less than the total value since it is a partial sum from a series whose terms are all positive). Choose the partial sum formed by starting at k = (n+1)! which follows from the motivation to write a new series with k=0, namely by noticing that . For the final inequality , this particular inequality has been chosen (true because b ≥ 2, where equality follows if and only if n=1) because of the wish to manipulate into something of the form . This particular inequality allows the elimination of (n+1)! and the numerator, using the property that (n+1)! – n! = (n!)n, thus putting the denominator in ideal form for the substitution . 
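As a concrete check of the construction above (not part of the original article), the defining property can be verified numerically for the first few n with exact rational arithmetic. The sketch assumes the standard form of the definition, 0 < |x − p/q| < 1/q^n with q = 10^(n!) for Liouville's constant, and truncates the constant after seven terms, which is more than enough precision for the small n tested.

```python
from fractions import Fraction
from math import factorial

def liouville_partial(m, b=10):
    """Exact partial sum  sum_{k=1}^{m} b**(-k!)  as a Fraction."""
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, m + 1))

# Stand-in for Liouville's constant: seven terms, far more precision
# than needed to test the inequality for n = 1..4.
x = liouville_partial(7)

for n in range(1, 5):
    q = 10 ** factorial(n)                  # q = b^(n!)
    p = int(liouville_partial(n) * q)       # p/q is the n-term truncation of x
    error = abs(x - Fraction(p, q))
    assert 0 < error < Fraction(1, q ** n)  # the defining inequality of a Liouville number
    print(f"n={n}: |x - p/q| ~ {float(error):.3e} < q^-n ~ {float(Fraction(1, q ** n)):.3e}")
```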
Irrationality Here the proof will show that the number where and are integers and cannot satisfy the inequalities that define a Liouville number. Since every rational number can be represented as such the proof will show that no Liouville number can be rational. More specifically, this proof shows that for any positive integer large enough that [equivalently, for any positive integer )], no pair of integers exists that simultaneously satisfies the pair of bracketing inequalities If the claim is true, then the desired conclusion follows. Let and be any integers with Then, If then meaning that such pair of integers would violate the first inequality in the definition of a Liouville number, irrespective of any choice of  . If, on the other hand, since then, since is an integer, we can assert the sharper inequality From this it follows that Now for any integer the last inequality above implies Therefore, in the case such pair of integers would violate the second inequality in the definition of a Liouville number, for some positive integer . Therefore, to conclude, there is no pair of integers with that would qualify such an as a Liouville number. Hence a Liouville number cannot be rational. Liouville numbers and transcendence No Liouville number is algebraic. The proof of this assertion proceeds by first establishing a property of irrational algebraic numbers. This property essentially says that irrational algebraic numbers cannot be well approximated by rational numbers, where the condition for "well approximated" becomes more stringent for larger denominators. A Liouville number is irrational but does not have this property, so it cannot be algebraic and must be transcendental. The following lemma is usually known as Liouville's theorem (on diophantine approximation), there being several results known as Liouville's theorem. Lemma: If is an irrational root of an irreducible polynomial of degree with integer coefficients, then there exists a real number such that for all integers with , Proof of Lemma: Let be a minimal polynomial with integer coefficients, such that . By the fundamental theorem of algebra, has at most distinct roots. Therefore, there exists such that for all we get . Since is a minimal polynomial of we get , and also is continuous. Therefore, by the extreme value theorem there exists and such that for all we get . Both conditions are satisfied for . Now let be a rational number. Without loss of generality we may assume that . By the mean value theorem, there exists such that Since and , both sides of that equality are nonzero. In particular and we can rearrange: Proof of assertion: As a consequence of this lemma, let x be a Liouville number; as noted in the article text, x is then irrational. If x is algebraic, then by the lemma, there exists some integer n and some positive real A such that for all p, q Let r be a positive integer such that 1/(2r) ≤ A and define m = r + n. Since x is a Liouville number, there exist integers a, b with b > 1 such that which contradicts the lemma. Hence a Liouville number cannot be algebraic, and therefore must be transcendental. Establishing that a given number is a Liouville number proves that it is transcendental. However, not every transcendental number is a Liouville number. The terms in the continued fraction expansion of every Liouville number are unbounded; using a counting argument, one can then show that there must be uncountably many transcendental numbers which are not Liouville. 
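To complement the lemma, here is a small numerical illustration (not part of the original article) of how an algebraic number resists the kind of approximation that defines Liouville numbers. For √2, which is algebraic of degree 2, the lemma guarantees a constant A > 0 with |√2 − p/q| > A/q² for all rationals p/q. The sketch walks down the continued-fraction convergents of √2, its best rational approximations, and shows that q²·|√2 − p/q| stays bounded away from zero rather than shrinking towards it.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
SQRT2 = Decimal(2).sqrt()

# Convergents of sqrt(2) = [1; 2, 2, 2, ...]: 1/1, 3/2, 7/5, 17/12, 41/29, ...
p_prev, q_prev = 1, 0   # conventional starting values p_{-1} = 1, q_{-1} = 0
p, q = 1, 1             # first convergent 1/1
for _ in range(12):
    err = abs(SQRT2 - Decimal(p) / Decimal(q))
    # For a degree-2 algebraic number, q^2 * |x - p/q| is bounded below by a
    # positive constant; for sqrt(2) it hovers near 1/(2*sqrt(2)) ~ 0.354.
    print(f"p/q = {p}/{q}:  q^2 * |sqrt(2) - p/q| = {float(q * q * err):.4f}")
    p, p_prev = 2 * p + p_prev, p
    q, q_prev = 2 * q + q_prev, q
```

No matter how far one goes down this list, the scaled error never approaches zero, which is exactly the behaviour the lemma predicts and exactly what a Liouville number violates.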
Using the explicit continued fraction expansion of e, one can show that e is an example of a transcendental number that is not Liouville. Mahler proved in 1953 that is another such example. Uncountability Consider the number 3.1400010000000000000000050000.... 3.14(3 zeros)1(17 zeros)5(95 zeros)9(599 zeros)2(4319 zeros)6... where the digits are zero except in positions n! where the digit equals the nth digit following the decimal point in the decimal expansion of . As shown in the section on the existence of Liouville numbers, this number, as well as any other non-terminating decimal with its non-zero digits similarly situated, satisfies the definition of a Liouville number. Since the set of all sequences of non-null digits has the cardinality of the continuum, the same is true of the set of all Liouville numbers. Moreover, the Liouville numbers form a dense subset of the set of real numbers. Liouville numbers and measure From the point of view of measure theory, the set of all Liouville numbers is small. More precisely, its Lebesgue measure, , is zero. The proof given follows some ideas by John C. Oxtoby. For positive integers and set: then Observe that for each positive integer and , then Since and then Now and it follows that for each positive integer , has Lebesgue measure zero. Consequently, so has . In contrast, the Lebesgue measure of the set of all real transcendental numbers is infinite (since the set of algebraic numbers is a null set). One could show even more - the set of Liouville numbers has Hausdorff dimension 0 (a property strictly stronger than having Lebesgue measure 0). Structure of the set of Liouville numbers For each positive integer , set The set of all Liouville numbers can thus be written as Each is an open set; as its closure contains all rationals (the from each punctured interval), it is also a dense subset of real line. Since it is the intersection of countably many such open dense sets, is comeagre, that is to say, it is a dense Gδ set. Irrationality measure The Liouville–Roth irrationality measure (irrationality exponent, approximation exponent, or Liouville–Roth constant) of a real number is a measure of how "closely" it can be approximated by rationals. It is defined by adapting the definition of Liouville numbers: instead of requiring the existence of a sequence of pairs that make the inequality hold for each —a sequence which necessarily contains infinitely many distinct pairs—the irrationality exponent is defined to be the supremum of the set of for which such an infinite sequence exists, that is, the set of such that is satisfied by an infinite number of integer pairs with . For any value , the infinite set of all rationals satisfying the above inequality yields good approximations of . Conversely, if , then there are at most finitely many with that satisfy the inequality. If is a Liouville number then . See also Brjuno number Markov constant Diophantine approximation References External links The Beginning of Transcendental Numbers Diophantine approximation Mathematical constants Articles containing proofs Real transcendental numbers Irrational numbers
Liouville number
[ "Mathematics" ]
2,253
[ "Approximations", "Irrational numbers", "Mathematical objects", "Mathematical relations", "nan", "Articles containing proofs", "Diophantine approximation", "Mathematical constants", "Numbers", "Number theory" ]
50,969
https://en.wikipedia.org/wiki/List%20of%20computing%20and%20IT%20abbreviations
This is a list of computing and IT acronyms, initialisms and abbreviations. 0–9 1GL—first-generation programming language 1NF—first normal form 10B2—10BASE-2 10B5—10BASE-5 10B-F—10BASE-F 10B-FB—10BASE-FB 10B-FL—10BASE-FL 10B-FP—10BASE-FP 10B-T—10BASE-T 100B-FX—100BASE-FX 100B-TX—100BASE-TX 100BVG—100BASE-VG 286—Intel 80286 processor 2B1Q—2 binary 1 quaternary 2FA—Two-factor authentication 2GL—second-generation programming language 2NF—second normal form 3GL—third-generation programming language 3GPP—3rd Generation Partnership Project – 3G comms 3GPP2—3rd Generation Partnership Project 2 3NF—third normal form 386—Intel 80386 processor 486—Intel 80486 processor 4B5BLF—4-bit 5-bit local fiber 4GL—fourth-generation programming language 4NF—fourth normal form 5GL—fifth-generation programming language 5NF—fifth normal form 6NF—sixth normal form 8B10BLF—8-bit 10-bit local fiber 802.11—wireless LAN A A11Y—Accessibility AAA—Authentication Authorization, Accounting AABB—Axis Aligned Bounding Box AAC—Advanced Audio Coding AAL—ATM Adaptation Layer AALC—ATM Adaptation Layer Connection AARP—AppleTalk Address Resolution Protocol ABAC—Attribute-Based Access Control ABCL—Actor-Based Concurrent Language ABI—Application Binary Interface ABM—Asynchronous Balanced Mode ABR—Area Border Router ABR—Auto Baud-Rate detection ABR—Available Bitrate ABR—Average Bitrate ABR—Adaptive Bitrate (Streaming) AC—Acoustic Coupler AC—Alternating Current ACD—Automatic Call Distributor ACE—Advanced Computing Environment ACID—Atomicity Consistency Isolation Durability ACK—ACKnowledgement ACK—Amsterdam Compiler Kit ACL—Access Control List ACL—Active Current Loop ACM—Association for Computing Machinery ACME—Automated Classification of Medical Entities ACP—Airline Control Program ACPI—Advanced Configuration and Power Interface ACR—Allowed Cell Rate ACR—Attenuation to Crosstalk Ratio AD—Active Directory AD—Administrative Domain ADC—Analog-to-Digital Converter ADC—Apple Display Connector ADB—Apple Desktop Bus ADCCP—Advanced Data Communications Control Procedures ADO—ActiveX Data Objects ADSL—Asymmetric Digital Subscriber Line ADT—Abstract Data Type AE—Adaptive Equalizer AES—Advanced Encryption Standard AF—Anisotropic Filtering AFP—Apple Filing Protocol AGI—Artificial General Intelligence AGP—Accelerated Graphics Port AH—Active Hub AI—Artificial Intelligence AIX—Advanced Interactive eXecutive Ajax—Asynchronous JavaScript and XML AL—Active Link AL—Access List ALAC—Apple Lossless Audio Codec ALGOL—Algorithmic Language ALSA—Advanced Linux Sound Architecture ALU—Arithmetic and Logical Unit AM—Access Method AM—Active Matrix AMOLED—Active-Matrix Organic Light-Emitting Diode AM—Active Monitor AM—Allied Mastercomputer AM—Amplitude Modulation AMD—Advanced Micro Devices AMQP—Advanced Message Queuing Protocol AMR—Audio Modem Riser ANN—Artificial Neural Network ANSI—American National Standards Institute ANT—Another Neat Tool AoE—ATA over Ethernet AOP—Aspect-Oriented Programming AOT—Ahead-Of-Time APCI—Application-Layer Protocol Control Information API—Application Programming Interface APIC—Advanced Programmable Interrupt Controller APIPA—Automatic Private IP Addressing APL—A Programming Language APR—Apache Portable Runtime ARC—Adaptive Replacement Cache ARC—Advanced RISC Computing ARIN—American Registry for Internet Numbers ARM—Advanced RISC Machines AROS—AROS Research Operating System ARP—Address Resolution Protocol ARPA—Address and Routing Parameter Area ARPA—Advanced Research Projects Agency ARPANET—Advanced Research Projects Agency Network 
AS—Access Server ASCII—American Standard Code for Information Interchange AuthIP—Authenticated Internet Protocol ASG—Abstract Semantic Graph ASIC—Application-Specific Integrated Circuit ASIMO—Advanced Step in Innovative Mobility ASLR—Address Space Layout Randomization ASM—Algorithmic State Machine ASMP—Asymmetric Multiprocessing ASN.1—Abstract Syntax Notation 1 ASP—Active Server Pages ASP—Application Service Provider ASR—Asynchronous Signal Routine AST—Abstract Syntax Tree AT—Advanced Technology AT—Access Time AT—Active Terminator ATA—Advanced Technology Attachment ATAG—Authoring Tool Accessibility Guidelines ATAPI—Advanced Technology Attachment Packet Interface ATM—Asynchronous Transfer Mode AuthN—Authentication AuthZ—Authorization AV—Antivirus AVC—Advanced Video Coding AVI—Audio Video Interleaved AWK—Aho Weinberger Kernighan AWS—Amazon Web Services AWT—Abstract Window Toolkit B B2B—Business-to-Business B2C—Business-to-Consumer B2E—Business-to-Employee BAL—Basic Assembly Language BAM—Block Availability Map Bash—Bourne-again shell BASIC—Beginner's All-Purpose Symbolic Instruction Code BBP—Baseband Processor BBS—Bulletin Board System BC—Business Continuity BCC—Blind Carbon Copy BCD—Binary Coded Decimal BCD—Boot Configuration Data BCNF—Boyce–Codd normal form BCP—Business Continuity Planning BCP—Best Current Practice BE—Backend BEEP—Blocks Extensible Exchange Protocol BER—Bit Error Rate BFD—Bidirectional Forwarding Detection BFD—Binary File Descriptor BFS—Breadth-First Search BFT—Byzantine Fault Tolerant BGP—Border Gateway Protocol BI—Business Intelligence BiDi—Bi-Directional bin—binary BINAC—Binary Automatic Computer BIND—Berkeley Internet Name Domain BIOS—Basic Input Output System BJT—Bipolar Junction Transistor bit—binary digit Blob—Binary large object Blog—Web Log BMP—Basic Multilingual Plane BNC—Baby Neill Constant BOINC—Berkeley Open Infrastructure for Network Computing BOM—Byte Order Mark BOOTP—Bootstrap Protocol BPDU—Bridge Protocol Data Unit BPEL—Business Process Execution Language BPL—Broadband over Power Lines BPM—Business Process Management BPM—Business Process Modeling bps—bits per second BRM—Business Reference Model BRMS—Business Rule Management System BRR—Business Readiness Rating BRS—Broadband Radio Service BSA—Business Software Alliance BSB—Backside Bus BSD—Berkeley Software Distribution BSoD—Blue Screen of Death BSS—Block Started by Symbol BT—BitTorrent BT—Bluetooth B TAM—Basic Telecommunications Access Method BW—Bandwidth BYOD—Bring Your Own Device Byte—By eight (group of 8 bits) C CA—Certificate authority CAD—Computer-aided design CAE—Computer-aided engineering CAID—Computer-aided industrial design CAI—Computer-aided instruction CAM—Computer-aided manufacturing CAP—Consistency availability partition tolerance (theorem) CAPTCHA—Completely automated public Turing test to tell computers and humans apart CAT—Computer-aided translation CAQ—Computer-aided quality assurance CASE—Computer-aided software engineering cc—C compiler CC—Carbon copy CD—Compact Disc CDE—Common Desktop Environment CDMA—Code-division multiple access CDN—Content delivery network CDP—Cisco Discovery Protocol CDP—Continuous data protection CD-R—CD-Recordable CD-ROM—CD Read-Only Memory CD-RW—CD-Rewritable CDSA—Common Data Security Architecture CERT—Computer emergency response team CES—Consumer Electronics Show CF—Compact Flash CFD—Computational fluid dynamics CFG—Context-free grammar CFG—Control-flow graph CG—Computer graphics CGA—Color graphics array CGI—Common Gateway Interface CGI—Computer-generated imagery 
CGT—Computational Graph Theory CHAP—Challenge–handshake authentication protocol CHS—Cylinder–head–sector CIDR—Classless inter-domain routing CIFS—Common Internet Filesystem CIM—Common Information Model CIO—Chief information officer CIR—Committed information rate CISC—Complex-instruction-set computer CIT—Computer information technology CJK—Chinese, Japanese, and Korean CJKV—Chinese, Japanese, Korean, and Vietnamese CLI—Command line interface CLR—Common Language Runtime CM—Configuration management CM—Content management CMDB—Configuration management database CMMI—Capability Maturity Model Integration CMOS—Complementary metal–oxide–semiconductor CMS—Content management system CN—Canonical Name CN—Common Name CNC—Computerized numerical control CNG—Cryptographic Next Generation CNR—Communications and Networking Riser COBOL—Common Business-Oriented Language COM—Component Object Model or communication CORBA—Common Object Request Broker Architecture CORS—Cross-origin resource sharing COTS—Commercial off-the-shelf CPA—Cell processor architecture CPAN—Comprehensive Perl Archive Network CP/M—Control Program/Monitor CPRI—Common Public Radio Interface CPS—Characters per second CPU—Central processing unit CQS—Command–query separation CQRS—Command Query Responsibility Segregation CR—Carriage return CRAN—Comprehensive R Archive Network CRC—Cyclic redundancy check CRLF—Carriage return line feed CRM—Customer Relationship Management CRS—Computer Reservations System CRT—Cathode-ray tube CRUD—Create, read, update and delete CS—Cable Select CS—Computer Science CSE—Computer science and engineering CSI—Common System Interface CSM—Compatibility support module CSMA/CD—Carrier-sense multiple access with collision detection CSP—Cloud service provider CSP—Communicating sequential processes CSRF—Cross-site request forgery CSS—Cascading style sheets CSS—Content-scrambling system CSS—Closed-source software CSS—Cross-site scripting CSV—Comma-separated values CT—Computerized tomography CTAN—Comprehensive TeX Archive Network CTCP—Client-to-client protocol CTI—Computer telephony integration CTFE—Compile-time function execution CTL—Computation tree logic CTM—Close To Metal CTS—Clear to send CTSS—Compatible Time-Sharing System CUA—Common User Access CVE—Common Vulnerabilities and Exposures CVS—Concurrent Versions System CX—Customer experience D DAC—Digital-To-Analog Converter DAC—Discretionary Access Control DAL—Database Abstraction Layer DAO—Data Access Object DAO—Data Access Objects DAO—Disk-At-Once DAP—Directory Access Protocol DARPA—Defense Advanced Research Projects Agency DAS—Direct Attached Storage DAT—Digital Audio Tape DB—Database DSKT—Desktop DBA—Database Administrator DBCS—Double Byte Character Set DBMS—Database Management System DCC—Direct Client-to-Client DCCP—Datagram Congestion Control Protocol DCCA—Debian Common Core Alliance DCL—Data Control Language DCS—Distributed Control System DCMI—Dublin Core Metadata Initiative DCOM—Distributed Component Object Model DD—Double Density DDE—Dynamic Data Exchange DDL—Data Definition Language DDoS—Distributed Denial of Service DDR—Double Data Rate DEC—Digital Equipment Corporation DES—Data Encryption Standard dev—development DFA—Deterministic Finite Automaton DFD—Data Flow Diagram DFS—Depth-First Search DFS—Distributed File System DGD—Dworkin's Game Driver DHCP—Dynamic Host Configuration Protocol DHTML—Dynamic Hypertext Markup Language DIF—Data Integrity Field DIMM—Dual Inline Memory Module DIN—Deutsches Institut für Normung DIP—Dual In-line Package DISM—Deployment Image and 
Service Management Tool DIVX—Digital Video Express DKIM—Domain Keys Identified Mail DL—Download DLL—Dynamic Link Library DLNA—Digital Living Network Alliance DMA—Direct Memory Access DMCA—Digital Millennium Copyright Act DMI—Direct Media Interface DML—Data Manipulation Language DML—Definitive Media Library DMR—Dennis M. Ritchie DMZ—Demilitarized Zone DN—Distinguished Name DND—Drag-and-Drop DNS—Domain Name System DOA—Dead on Arrival DOCSIS—Data Over Cable Service Interface Specification DOM—Document Object Model DORA—Discover, Offer, Request, Acknowledge DoS—Denial of Service DOS—Disk Operating System DP—Dot Pitch DPC—Deferred Procedure Call DPI—Deep packet inspection DPI—Dots per inch DPMI—DOS Protected Mode Interface DPMS—Display Power Management Signaling DR—Disaster Recovery DRAM—Dynamic Random-Access Memory DR-DOS—Digital Research – Disk Operating System DRI—Direct Rendering Infrastructure DRM—Digital rights management DRM—Direct rendering manager DSA—Digital Signature Algorithm DSDL—Document Schema Definition Languages DSDM—Dynamic Systems Development Method DSL—Digital Subscriber Line DSL—Domain-Specific Language DSLAM—Digital Subscriber Line Access Multiplexer DSN—Database Source Name DSN—Data Set Name DSP—Digital Signal Processor DSSSL—Document Style Semantics and Specification Language DTD—Document Type Definition DTE—Data Terminal Equipment or data transfer rate DTO—Data Transfer Object DTP—Desktop Publishing DTR—Data Terminal Ready or Data transfer rate DVD—Digital Versatile Disc or Digital Video Disc DVD-R—DVD-Recordable DVD-ROM—DVD-Read-Only Memory DVD-RW—DVD-Rewritable DVI—Digital Visual Interface DVR—Digital Video Recorder DW—Data Warehouse E EAI—Enterprise Application Integration EAP—Extensible Authentication Protocol EAS—Exchange ActiveSync EBCDIC—Extended Binary Coded Decimal Interchange Code EBML—Extensible Binary Meta Language ECC—Elliptic Curve Cryptography ECMA—European Computer Manufacturers Association ECN—Explicit Congestion Notification ECOS—Embedded Configurable Operating System ECRS—Expense and Cost Recovery System ECS—Entity-Component-System EDA—Electronic Design Automation EDGE—Enhanced Data rates for GSM Evolution EDI—Electronic Data Interchange EDO—Extended Data Out EDSAC—Electronic Delay Storage Automatic Calculator EDVAC—Electronic Discrete Variable Automatic Computer EEPROM—Electronically Erasable Programmable Read-Only Memory EFF—Electronic Frontier Foundation EFI—Extensible Firmware Interface EFM—Eight-to-Fourteen Modulation EFM—Ethernet in the First Mile EFS—Encrypting File System EGA—Enhanced Graphics Array E-mail—Electronic mail EGP—Exterior Gateway Protocol eID—electronic ID card EIDE—Enhanced IDE EIGRP—Enhanced Interior Gateway Routing Protocol EISA—Extended Industry Standard Architecture ELF—Extremely Low Frequency ELF—Executable and Linkable Format ELM—ELectronic Mail EMACS—Editor MACroS EMS—Expanded Memory Specification ENIAC—Electronic Numerical Integrator And Computer EOF—End of File EOL—End of Life EOL—End of Line EOM—End of Message EOS—End of Support EPIC—Explicitly Parallel Instruction Computing EPROM—Erasable Programmable Read-Only Memory ERD—Entity–Relationship Diagram ERM—Entity–Relationship Model ERP—Enterprise Resource Planning eSATA—external SATA ESB—Enterprise service bus ESCON—Enterprise Systems Connection ESD—Electrostatic Discharge ESI—Electronically Stored Information ESR—Eric Steven Raymond ETL—Extract, Transform, Load ETW—Event Tracing for Windows EUC—Extended Unix Code EULA—End User License Agreement EWMH—Extended Window 
Manager Hints EXT—EXTended file system ETA—Estimated Time of Arrival F FAP—FORTRAN Assembly Program FASM—Flat ASseMbler FAT—File Allocation Table FAQ—Frequently Asked Questions FBDIMM—Fully Buffered Dual Inline Memory Module FC-AL—Fibre Channel Arbitrated Loop FCB—File Control Block FCS—Frame Check Sequence FDC—Floppy-Disk Controller FDS—Fedora Directory Server FDD—Frequency-Division Duplexing FDD—Floppy Disk Drive FDDI—Fiber Distributed Data Interface FDM—Frequency-Division Multiplexing FDMA—Frequency-Division Multiple Access FE—Frontend FEC—Forward Error Correction FEMB—Front-End Motherboard FET—Field Effect Transistor FHS—Filesystem Hierarchy Standard FICON—FIber CONnectivity FIFO—First In First Out FIPS—Federal Information Processing Standards FL—Function Level FLAC—Free Lossless Audio Codec FLOPS—FLoating-Point Operations Per Second FLOSS—Free/Libre/Open-Source Software FMC—Fixed Mobile Convergence "Mobile UC or Unified Communications over Wireless" FOLDOC—Free On-line Dictionary of Computing FORTRAN—Formula Translation FOSDEM—Free and Open-source Software Developers' European Meeting FOSI—Formatted Output Specification Instance FOSS—Free and Open-Source Software FP—Function Programming FP—Functional Programming FPGA—Field Programmable Gate Array FPS—Floating Point Systems FPU—Floating-Point Unit FRU—Field-Replaceable Unit FS—File System FSB—Front-Side Bus fsck—File System Check FSF—Free Software Foundation FSM—Finite State Machine FTTC—Fiber To The Curb FTTH—Fiber To The Home FTTP—Fiber To The Premises FTP—File Transfer Protocol FQDN—Fully Qualified Domain Name FUD—Fear Uncertainty Doubt FWS—Folding White Space FXP—File eXchange Protocol FYI—For Your Information G G11N—Globalization Gas—GNU Assembler Gb—Gigabit GB—Gigabyte Gbps—Gigabits per second GCC—GNU Compiler Collection GCJ—GNU Compiler for Java GCP—Google Cloud Platform GCR—Group Coded Recording GDB—GNU Debugger GDI—Graphics Device Interface GFDL—GNU Free Documentation License GIF—Graphics Interchange Format GIGO—Garbage In, Garbage Out GIMP—GNU Image Manipulation Program GIMPS—Great Internet Mersenne Prime Search GIS—Geographic Information System GLUT—OpenGL Utility Toolkit GML—Geography Markup Language GNOME—GNU Network Object Model Environment GNU—GNU's Not Unix GOMS—Goals, Operators, Methods, and Selection rules GPASM—GNU PIC ASseMbler GPFS—General Parallel File System GPG—GNU Privacy Guard GPGPU—General-Purpose Computing on Graphics Processing Units GPIB—General-Purpose Instrumentation Bus GPL—General Public License GPL—General-Purpose Language GPRS—General Packet Radio Service GPT—GUID Partition Table GPU—Graphics Processing Unit GRUB—Grand Unified Boot-Loader GERAN—GSM EDGE Radio Access Network GSM—Global System for Mobile Communications GTK/GTK+—GIMP Toolkit GUI—Graphical user interface GUID—Globally Unique IDentifier GWT—Google Web Toolkit H HA—High availability HAL—Hardware Abstraction Layer HASP—Houston Automatic Spooling Priority HBA—Host Bus Adapter HCI—Human—Computer Interaction HD—High Density HDD—Hard Disk Drive HCL—Hardware Compatibility List HD DVD—High Definition DVD HDL—Hardware Description Language HDMI—High-Definition Multimedia Interface HECI—Host Embedded Controller Interface HF—High Frequency HFS—Hierarchical File System HHD—Hybrid Hard Drive HID—Human Interface Device HIG—Human Interface Guidelines HIRD—Hurd of Interfaces Representing Depth HLASM—High Level ASseMbler HLS—HTTP Live Streaming HMA—High Memory Area HP—Hewlett-Packard HPC—High-Performance Computing HPFS—High Performance File System 
HSDPA—High-Speed Downlink Packet Access HTC—High-Throughput Computing HSM—Hierarchical Storage Management HT—Hyper Threading HTM—Hierarchical Temporal Memory HTML—Hypertext Markup Language HTTP—Hypertext Transfer Protocol HTTPd—Hypertext Transport Protocol Daemon HTTPS—HTTP Secure HTX—HyperTransport eXpansion HURD—Hird of Unix-Replacing Daemons HVD—Holographic Versatile Disc Hz—Hertz I I²C—Inter-Integrated Circuit I²S—Integrated Interchip Sound I18N—Internationalization IANA—Internet Assigned Numbers Authority IaaS—Infrastructure as a Service IaC—Infrastructure as Code iBCS—Intel Binary Compatibility Standard IBM—International Business Machines IC—Integrated Circuit ICANN—Internet Corporation for Assigned Names and Numbers ICE—In-Circuit Emulator ICE—Intrusion Countermeasure Electronics ICH—I/O Controller Hub ICMP—Internet Control Message Protocol ICP—Internet Cache Protocol ICS—Internet Connection Sharing ICT—Information and Communication Technology IDE—Integrated Development Environment IDE—Integrated Drive Electronics IDF—Intermediate Distribution Frame IDF—Intermediate Data Format IDL—Interactive Data Language IDL—Interface Definition Language IdP—Identity Provider (cybersecurity) IDS—Intrusion Detection System IE—Internet Explorer IEC—International Electrotechnical Commission IEEE—Institute of Electrical and Electronics Engineers IETF—Internet Engineering Task Force IFL—Integrated Facility for Linux IGMP—Internet Group Management Protocol IGRP—Interior Gateway Routing Protocol IHV—Independent Hardware Vendor IIOP—Internet Inter-Orb Protocol IIS—Internet Information Services IKE—Internet Key Exchange IL—Intermediate Language IM—Instant Message or Instant Messaging IMAP—Internet Message Access Protocol IME—Input Method Editor INFOSEC—Information Systems Security I/O—Input/output IoT—Internet of Things IP—Intellectual Property IP—Internet Protocol IPAM—IP Address Management IPC—Inter-Process Communication IPL—Initial Program Load IPMI—Intelligent Platform Management Interface IPO—Inter Procedural Optimization IPP—Internet Printing Protocol IPS—In-Plane Switching IPS—Instructions Per Second IPS—Intrusion Prevention System IPsec—Internet Protocol security IPTV—Internet Protocol Television IPv4—Internet Protocol version 4 IPv6—Internet Protocol version 6 IPX—Internetwork Packet Exchange IR—Intermediate Representation IRC—Internet Relay Chat IrDA—Infrared Data Association IRI—Internationalized Resource Identifier IRP—I/O Request Packet IRQ—Interrupt Request IS—Information Systems IS-IS—Intermediate System to Intermediate System ISA—Industry Standard Architecture ISA—Instruction Set Architecture ISAM—Indexed Sequential Access Method ISATAP—Intra-Site Automatic Tunnel Addressing Protocol ISC—Internet Storm Center iSCSI—Internet Small Computer System Interface ISDN—Integrated Services Digital Network ISO—International Organization for Standardization iSNS—Internet Storage Name Service ISP—Internet Service Provider ISPF—Interactive System Productivity Facility ISR—Interrupt Service Routine ISV—Independent Software Vendor IT—Information Technology ITIL—Information Technology Infrastructure Library ITL—Interval Temporal Logic ITU—International Telecommunication Union IVR(S)—Interactive Voice Response (System) J J2EE—Java 2 Enterprise Edition J2ME—Java 2 Micro Edition J2SE—Java 2 Standard Edition JAXB—Java Architecture for XML Binding JAX-RPC—Jakarta XML (formerly Java XML) for Remote Procedure Calls JAXP—Java API for XML Processing JBOD—Just a Bunch of Disks JCE Java Cryptography Extension JCL—Job 
Control Language JCP—Java Community Process JDBC—Java Database Connectivity JDK—Java Development Kit JEE—Java Enterprise Edition JES—Job Entry Subsystem JDS—Java Desktop System JFC—Java Foundation Classes JFET—Junction Field-Effect Transistor JFS—IBM Journaling File System JINI—Jini Is Not Initials JIT—Just-In-Time JME—Java Micro Edition JMX—Java Management Extensions JMS—Java Message Service JNDI—Java Naming and Directory Interface JNI—Java Native Interface JNZ—Jump non-zero JPEG—Joint Photographic Experts Group JRE—Java Runtime Environment JS—JavaScript JSE—Java Standard Edition JSON—JavaScript Object Notation JSP—Jackson Structured Programming JSP—JavaServer Pages JTAG—Joint Test Action Group JVM—Java Virtual Machine K K&R—Kernighan and Ritchie K8s—Kubernetes KB—Keyboard Kb—Kilobit KB—Kilobyte KB—Knowledge Base Kbps—Kilobits per second KiB—Kibibyte KDE—K Desktop Environment kHz—Kilohertz KRL—Knowledge Representation Language KVM—Keyboard, Video, Mouse L L10N—Localization L2TP—Layer two Tunneling Protocol LACP—Link Aggregation Control Protocol LAMP—Linux Apache MySQL Perl LAMP—Linux Apache MySQL PHP LAMP—Linux Apache MySQL Python LAN—Local Area Network LBA—Logical Block Addressing LB—Load Balancer LCD—Liquid Crystal Display LCR—Least Cost Routing LCOS—Liquid Crystal On Silicon LDAP—Lightweight Directory Access Protocol LE—Logical Extents LED—Light-Emitting Diode LF—Line Feed LF—Low Frequency LFS—Linux From Scratch LGA—Land Grid Array LGPL—Lesser General Public License LIB—LIBrary LIF—Low Insertion Force LIFO—Last In First Out LILO—Linux Loader LIP—Loop Initialization Primitive LISP—LISt Processing LKML—Linux Kernel Mailing List LM—Lan Manager LOC—Lines of Code LPC—Lars Pensjö C LPI—Linux Professional Institute LPT Line Print Terminal LRU—Least Recently Used LSB—Least Significant Bit LSB—Linux Standard Base LSI—Large-Scale Integration LTE—Long Term Evolution LTL—Linear Temporal Logic LTR—Left-to-Right LUG—Linux User Group LUN—Logical Unit Number LV—Logical Volume LVD—Low Voltage Differential LVM—Logical Volume Management LZW—Lempel-Ziv-Welch M MAC—Mandatory Access Control MAC—Media Access Control MAC—Message authentication code MANET—Mobile Ad-Hoc Network MAN—Metropolitan Area Network MAPI—Messaging Application Programming Interface MBCS—Multi Byte Character Set MBD—Model-Based Design MBR—Master Boot Record Mb—Megabit MB—Megabyte Mbps—Megabits per second MCAD—Microsoft Certified Application Developer MCAS—Microsoft Certified Application Specialist MCA—Micro Channel Architecture MCA—Microsoft Certified Architect MCDBA—Microsoft Certified DataBase Administrator MCDST—Microsoft Certified Desktop Support Technician MCITP—Microsoft Certified Information Technology Professional MCM—Microsoft Certified Master MCPD—Microsoft Certified Professional Developer MCP—Microsoft Certified Professional MCSA—Microsoft Certified Systems Administrator MCSD—Microsoft Certified Solution Developer MCSE—Microsoft Certified Systems Engineer MCTS—Microsoft Certified Technology Specialist MCT—Microsoft Certified Trainer MDA—Monochrome Display Adapter MDA—Mail Delivery Agent MDA—Model-Driven Architecture MDD/MDSD—Model-Driven (Software) Development MDF—Main Distribution Frame MDI—Multiple-Document Interface MDM—Master Data Management ME—Microsoft Edge ME—[Windows] Millennium Edition MFA—Multi-factor authentication MFC—Microsoft Foundation Classes MFT—Master File Table MFM—Modified Frequency Modulation MF—Medium Frequency MGCP—Media Gateway Control Protocol MHz—Megahertz MIB—Management Information Base MICR—Magnetic 
Ink Character Recognition or Magnetic Ink Character Reader MIDI—Musical Instrument Digital Interface MIMD—Multiple Instruction, Multiple Data MIME—Multipurpose Internet Mail Extensions MIMO—Multiple-Input Multiple-Output MINIX—MIni-uNIX MIPS—Microprocessor without Interlocked Pipeline Stages MIPS—Million Instructions Per Second MISD—Multiple Instruction, Single Data MIS—Management Information Systems MIT—Massachusetts Institute of Technology ML—Machine Learning MMC—Microsoft Management Console MMDS—Mortality Medical Data System MMDS—Multichannel Multipoint Distribution Service MMF—Multi-Mode (optical) Fiber MMIO—Memory-Mapped I/O MMI—Man Machine Interface. MMORPG—Massively Multiplayer Online Role-Playing Game MMS—Multimedia Message Service MMU—Memory Management Unit MMX—Multi-Media Extensions MNG—Multiple-image Network Graphics MoBo—Motherboard MOM—Message-Oriented Middleware MOO—MUD Object Oriented MOP—Meta-Object Protocol MOSFET—Metal-Oxide Semiconductor Field Effect Transistor MOS—Microsoft Office Specialist MOTD—Message Of The Day MOUS—Microsoft Office User Specialist MOV—Apple QuickTime Multimedia File MPAA—Motion Picture Association of America MPEG—Motion Pictures Experts Group MPLS—Multiprotocol Label Switching MPL—Mozilla Public License MPU—Microprocessor Unit MS-DOS—Microsoft DOS MSA—Mail Submission Agent MSB—Most Significant Bit MSDN—Microsoft Developer Network MSI—Medium-Scale Integration MSI—Message Signaled Interrupt MSI—Microsoft Installer MSN—Microsoft Network MS—Microsoft MS—Memory Stick MTA—Mail Transfer Agent MTA—Microsoft Technology Associate MTBF—Mean Time Between Failures MTU—Maximum Transmission Unit MT—Machine Translation MUA—Mail User Agent MUD—Multi-User Dungeon MU—Memory Unit MVC—Model-View-Controller MVP—Most Valuable Professional MVS—Multiple Virtual Storage MWC—Mobile World Congress MXF—Material Exchange Format MX—Mail exchange N NAC—Network Access Control NACK—Negative ACKnowledgement NAK—Negative AcKnowledge Character NaN—Not a Number NAP—Network Access Protection NAS—Network-Attached Storage NASM—Netwide ASseMbler NAT—Network Address Translation NCP—NetWare Core Protocol NCQ—Native Command Queuing NCSA—National Center for Supercomputing Applications NDIS—Network Driver Interface Specification NDPS—Novell Distributed Print Services NDS—Novell Directory Services NEP—Network Equipment Provider NetBIOS—Network Basic Input/Output System NetBT—NetBIOS over TCP/IP NEXT—Near-End CrossTalk NFA—Nondeterministic Finite Automaton NFC—Near-field communication NFS—Network File System NGL—aNGeL NGSCB—Next-Generation Secure Computing Base NI—National Instruments NIC—Network Interface Controller or Network Interface Card NIM—No Internal Message NIO—Non-blocking I/O NIST—National Institute of Standards and Technology NLE—Non-Linear Editing system NLP—Natural Language Processing NLS—Native Language Support NMI—Non-Maskable Interrupt NNTP—Network News Transfer Protocol NOC—Network Operations Center NOP—No OPeration NOS—Network Operating System NP—Nondeterministic Polynomial time NPL—Netscape Public License NPTL—Native POSIX Thread Library NPU—Network Processing Unit NS—Netscape NSIS—Nullsoft Scriptable Install System NSPR—Netscape Portable Runtime NSS—Novell Storage Service NSS—Network Security Services NSS—Name Service Switch NT—New Technology NTFS—NT Filesystem NTLM—NT Lan Manager NTP—Network Time Protocol NUMA—Non-Uniform Memory Access NURBS—Non-Uniform Rational B-Spline NVR—Network Video Recorder NVRAM—Non-Volatile Random-Access Memory O OASIS—Organization for the 
Advancement of Structured Information Standards OAT—Operational Acceptance Testing OBSAI—Open Base Station Architecture Initiative OCR—Optical Character Recognition ODBC—Open Database Connectivity OEM—Original Equipment Manufacturer OES—Open Enterprise Server OFDM—Orthogonal Frequency-Division Multiplexing OFTC—Open and Free Technology Community OID—Object Identifier OLAP—Online Analytical Processing OLE—Object Linking and Embedding OLED—Organic Light Emitting Diode OLPC—One Laptop per Child OLTP—Online Transaction Processing OMF—Object Module Format OMG—Object Management Group OMR—Optical Mark Reader ooRexx—Open Object Rexx OO—Object-Oriented OO—OpenOffice OOE—Out-of-Order Execution OOM—Out Of Memory OOo—OpenOffice.org OoOE—Out-of-Order Execution OOP—Object-Oriented Programming OOTB—Out of the box OPML—Outline Processor Markup Language ORB—Object Request Broker ORM—Object–Relational Mapping OS—Open Source OS—Operating System OSCON—O'Reilly Open Source CONvention OSDN—Open Source Development Network OSI—Open Source Initiative OSI—Open Systems Interconnection OSPF—Open Shortest Path First OSS—Open Sound System OSS—Open-Source Software OSS—Operations Support System OSTG—Open Source Technology Group OTP—One-time password OUI—Organizationally Unique Identifier P P2P—Peer-To-Peer PaaS—Platform as a Service PAM—Privileged Access Management PAN—Personal Area Network PAP—Password Authentication Protocol PARC—Palo Alto Research Center PATA—Parallel ATA PBS—Portable Batch System PC—Personal Computer PCB—Printed Circuit Board PCB—Process Control Block PC DOS—Personal Computer Disc Operating System PCI—Peripheral Component Interconnect PCIe—PCI Express PCI-X—PCI Extended PCL—Printer Command Language PCMCIA—Personal Computer Memory Card International Association PCM—Pulse-Code Modulation PCRE—Perl Compatible Regular Expressions PD—Public Domain PDA—Personal Digital Assistant PDF—Portable Document Format PDH—Plesiochronous Digital Hierarchy PDP—Programmed Data Processor PE—Physical Extents PE—Portable Executable PERL—Practical Extraction and Reporting Language PFA—Please Find Attachment PG—Peripheral Gateway PGA—Pin Grid Array PGA—Programmable Gate Array PGO—Profile-Guided Optimization PGP—Pretty Good Privacy PHP—Hypertext Preprocessor PIC—Peripheral Interface Controller PIC—Programmable Interrupt Controller PID—Proportional-Integral-Derivative PID—Process ID PIM—Personal Information Manager PINE—Program for Internet News and Email PING—Packet Internet Gopher PIO—Programmed Input/Output Pixel—Picture element PKCS—Public Key Cryptography Standards PKI—Public Key Infrastructure PLC—Power Line Communication PLC—Programmable logic controller PLD—Programmable logic device PL/I—Programming Language One PL/M—Programming Language for Microcomputers PL/P—Programming Language for Prime PLT—Power Line Telecommunications PMM—POST Memory Manager PNG—Portable Network Graphics PnP—Plug-and-Play PNRP—Peer Name Resolution Protocol PoE—Power over Ethernet PoS—Point of Sale POCO—Plain Old Class Object POID—Persistent Object Identifier POJO—Plain Old Java Object POP—Point of Presence POP3—Post Office Protocol v3 POSIX—Portable Operating System Interface, formerly IEEE-IX POST—Power-On Self Test PPC—PowerPC PPI—Pixels Per Inch PPM—Pages Per Minute PPP—Point-to-Point Protocol PPPoA—PPP over ATM PPPoE—PPP over Ethernet PPTP—Point-to-Point Tunneling Protocol PR—Pull Request PROM—Programmable Read-Only Memory PS—PostScript PS/2—Personal System/2 PSA—Professional Services Automation PSM—Platform Specific Model PSTN—Public 
Switched Telephone Network PSU—Power Supply Unit PSVI—Post-Schema-Validation Infoset PTS-DOS—PhysTechSoft – Disk Operating System PV—Physical Volume PVG—Physical Volume Group PVR—Personal Video Recorder PXE—Preboot Execution Environment PXI—PCI eXtensions for Instrumentation PRC—Procedure Remote Call Q QDR—Quad Data Rate QA—Quality Assurance QFP—Quad Flat Package QoS—Quality of Service QOTD—Quote of the Day Qt—Quasar Toolkit QTAM—Queued Teleprocessing Access Method QSOP—Quarter Small Outline Package qWave—Quality Windows Audio/Video Experience R RACF—Resource Access Control Facility RAD—Rapid Application Development RADIUS—Remote Authentication Dial In User Service RAID—Redundant Array of Independent Disks RAII—Resource Acquisition Is Initialization RAIT—Redundant Array of Inexpensive Tapes RAM—Random-Access Memory RARP—Reverse Address Resolution Protocol RAS—Reliability, Availability and Serviceability RAS—Remote access service RC—Region Code RC—Release Candidate RC—Run Commands RCA—Root Cause Analysis RCP—Reality Coprocesser RCS—Revision Control System RD—Remote Desktop rd—remove directory RDBMS—Relational Database Management System RDC—Remote Desktop Connection RDF—Resource Description Framework RDM—Relational Data Model RDOS—Real-time Disk Operating System RDP—Remote Desktop Protocol RDS—Remote Data Services REFAL—Recursive Functions Algorithmic Language REP—RAID Error Propagation REST—Representational State Transfer RESV—Reservation Message regex—Regular Expression regexp—Regular Expression RF—Radio Frequency RFC—Request For Comments RFI—Radio Frequency Interference RFID—Radio Frequency Identification RGB—Red, Green, Blue RGBA—Red, Green, Blue, Alpha RHL—Red Hat Linux RHEL—Red Hat Enterprise Linux REXX—Restructured Extended Executor Language RIA—Rich Internet Application RIAA—Recording Industry Association of America RIP—Raster Image Processor RIP—Routing Information Protocol RIR—Regional Internet registry RISC—Reduced Instruction Set Computer RISC OS—Reduced Instruction Set Computer Operating System RJE—Remote Job Entry RLE—Run-Length Encoding RLL—Run-Length Limited rmdir—remove directory RMI—Remote Method Invocation RMS—Richard Matthew Stallman ROM—Read-Only Memory ROMB—Read-Out Motherboard ROM-DOS—Read-Only Memory – Disk Operating System RPA—Robotic Process Automation RPC—Remote Procedure Call RPG—Report Program Generator RPM—RPM Package Manager RRAS—Routing and Remote Access Service RSA—Rivest Shamir Adleman RSI—Repetitive Strain Injury RSS—Radio Service Software RSS—Rich Site Summary, RDF Site Summary, or Really Simple Syndication RSVP—Resource Reservation Protocol RTAI—Real-Time Application Interface RTC—Real-Time Clock RTE—Real-Time Enterprise RTEMS—Real-Time Executive for Multiprocessor Systems RTF—Rich Text Format RTL—Right-to-Left RTMP—Real Time Messaging Protocol RTOS—Real-Time Operating System RTP—Real-time Transport Protocol RTS—Ready To Send RTSP—Real Time Streaming Protocol RTTI—Run-time Type Information RTU—Remote Terminal Unit RWD—Responsive Web Design S SaaS—Software as a Service SASS—Syntactically Awesome Style Sheets SAM—Security Account Manager SAN—Storage Area Network SAS—Serial attached SCSI SATA—Serial ATA SAX—Simple API for XML SBOD—Spinning Beachball of Death SBP-2—Serial Bus Protocol 2 sbin—superuser binary sbs—Small Business Server SBU—Standard Build Unit SCADA—Supervisory Control And Data Acquisition SCID—Source Code in Database SCM—Software Configuration Management SCM—Source Code Management SCP—Secure Copy SCPC—Single Channel Per Carrier SCPI—Standard 
Commands for Programmable Instrumentation SCSA—Secure Content Storage Association SCSI—Small Computer System Interface SCTP—Stream Control Transmission Protocol SD—Secure Digital SDDL—Security Descriptor Definition Language SDH—Synchronous Digital Hierarchy SDI—Single-Document Interface SEC—Single Edge Contact SDIO—Secure Digital Input Output SDK—Software Development Kit SDL—Simple DirectMedia Layer SDN—Service Delivery Network SDP—Session Description Protocol SDR—Software-Defined Radio SDRAM—Synchronous Dynamic Random-Access Memory SDSL—Symmetric DSL SE—Single Ended SEI—Software Engineering Institute SEO—Search Engine Optimization SFTP—Secure FTP SFTP—Simple File Transfer Protocol SFTP—SSH File Transfer Protocol SGI—Silicon Graphics, Incorporated SGML—Standard Generalized Markup Language SGR—Select Graphic Rendition SHA—Secure Hash Algorithm SHDSL—Single-pair High-speed Digital Subscriber Line SIEM—Security information and event management SIGCAT—Special Interest Group on CD-ROM Applications and Technology SIGGRAPH—Special Interest Group on Graphics SIMD—Single Instruction, Multiple Data SIM—Subscriber Identification Module SIMM—Single Inline Memory Module SIP—Session Initiation Protocol SIP—Supplementary Ideographic Plane SISD—Single Instruction, Single Data SISO—Single-Input and Single-Output SLA—Service Level Agreement SLED—SUSE Linux Enterprise Desktop SLES—SUSE Linux Enterprise Server SLI—Scalable Link Interface SLIP—Serial Line Internet Protocol SLM—Service Level Management SLOC—Source Lines of Code SME—Subject Matter Expert SMF—Single-Mode (optical) Fiber SPM—Software project management SPMD—Single Program, Multiple Data SPOF—Single point of failure SMA—SubMiniature version A SMB—Server Message Block SMBIOS—System Management BIOS SMIL—Synchronized Multimedia Integration Language S/MIME—Secure/Multipurpose Internet Mail Extensions SMP—Supplementary Multilingual Plane SMP—Symmetric Multi-Processing SMPS—Switch Mode Power Supply SMS—Short Message Service SMS—System Management Server SMT—Simultaneous Multithreading SMTP—Simple Mail Transfer Protocol SNA—Systems Network Architecture SNMP—Simple Network Management Protocol SNTP—Simple Network Time Protocol SOA—Service-Oriented Architecture SOAP—Simple Object Access Protocol SOAP—Symbolic Optimal Assembly Program SOPA—Stop Online Piracy Act SoC—System-on-a-Chip SO-DIMM—Small Outline DIMM SOE—Standard Operating Environment SOHO—Small Office/Home Office SOI—Silicon On Insulator SOLID—Single-responsibility, Open-closed, Liskov substitution, Interface segregation, Dependency Inversion SP—Service Pack SPA—Single Page Application SPF—Sender Policy Framework SPI—Serial Peripheral Interface SPI—Stateful Packet Inspection SPARC—Scalable Processor Architecture SQL—Structured Query Language SRAM—Static Random-Access Memory SSA—Static Single Assignment SSD—Software Specification Document SSD—Solid-State Drive SSDP—Simple Service Discovery Protocol SSE—Streaming SIMD Extensions SSH—Secure Shell SSI—Server Side Includes SSI—Single-System Image SSI—Small-Scale Integration SSID—Service Set Identifier SSL—Secure Socket Layer SSO—Single Sign On SSP—Supplementary Special-purpose Plane SSR—Server Side Rendering SSSE—Supplementary Streaming SIMD Extensions SSSP—Single Source Shortest Path SSTP—Secure Socket Tunneling Protocol su—superuser SUS—Single UNIX Specification SUSE—Software und System-Entwicklung SVC—Scalable Video Coding SVG—Scalable Vector Graphics SVGA—Super Video Graphics Array SVD—Structured VLSI Design SWF—Shock Wave Flash SWT—Standard Widget 
Toolkit Sysop—System operator T TAO—Track-At-Once TAPI—Telephony Application Programming Interface TASM—Turbo ASseMbler TB—TeraByte Tcl—Tool Command Language TCP—Transmission Control Protocol TCP/IP—Transmission Control Protocol/Internet Protocol TCU—Telecommunication Control Unit TDMA—Time-Division Multiple Access TDP—Thermal Design Power TFT—Thin-Film Transistor TFTP—Trivial File Transfer Protocol TI—Texas Instruments TIFF—Tagged Image File Format TLA—Three-Letter Acronym TLD—Top-Level Domain TLS—Thread-Local Storage TLS—Transport Layer Security TLV—Type—length—value tmp—temporary TNC—Terminal Node Controller TNC—Threaded Neill-Concelman connector TPF—Transaction Processing Facility TPM—Trusted Platform Module TROFF—Trace Off TRON—Trace On TRON—The Real-time Operating system Nucleus TRSDOS—Tandy Radio Shack – Disk Operating System TSO—Time Sharing Option TSP—Traveling Salesman Problem TSR—Terminate and Stay Resident TTA—True Tap Audio TTF—TrueType Font TTL—Transistor—Transistor Logic TTL—Time To Live TTS—Text-to-Speech TTY—Teletype TUCOWS—The Ultimate Collection of Winsock Software TUG—TeX Users Group TWAIN—Technology Without An Interesting Name U UAAG—User Agent Accessibility Guidelines UAC—User Account Control UART—Universal Asynchronous Receiver/Transmitter UAT—User Acceptance Testing UB—Undefined Behavior UCS—Universal Character Set UDDI—Universal Description, Discovery, and Integration UDMA—Ultra DMA UDP—User Datagram Protocol UEFI—Unified Extensible Firmware Interface UHF—Ultra High Frequency UI—User Interface UL—Upload ULA—Uncommitted Logic Array ULSI—Ultra Large Scale Integration UMA—Upper Memory Area UMB—Upper Memory Block UML—Unified Modeling Language UML—User-Mode Linux UMPC—Ultra-Mobile Personal Computer UMTS—Universal Mobile Telecommunications System UNC—Universal Naming Convention UNIVAC—Universal Automatic Computer (By MKS) UPS—Uninterruptible Power Supply or Uninterrupted Power Supply URI—Uniform Resource Identifier URL—Uniform Resource Locator URN—Uniform Resource Name USB—Universal Serial Bus usr—User System Resources USR—U.S. 
Robotics UTC—Coordinated Universal Time UTF—Unicode Transformation Format UTP—Unshielded Twisted Pair UTRAN—Universal Terrestrial Radio Access Network UUCP—Unix to Unix Copy UUID—Universally Unique Identifier UUN—Universal User Name UVC—Universal Virtual Computer UWP—Universal Windows Platform UX—User Experience V var—variable VoLTE—Voice Over Long Term Evolution VAX—Virtual Address eXtension VCPI—Virtual Control Program Interface VB—Visual Basic VBA—Visual Basic for Applications VBS—Visual Basic Script VDI—Virtual Desktop Infrastructure VDU—Visual Display Unit VDM—Virtual DOS machine VDSL—Very High Bitrate Digital Subscriber Line VESA—Video Electronics Standards Association VFAT—Virtual FAT VHD—Virtual Hard Disk VFS—Virtual File System VG—Volume Group VGA—Video Graphics Array VHF—Very High Frequency VIRUS—Vital Information Resource Under Seize VLAN—Virtual Local Area Network VLSM—Variable-length subnet masking VLB—Vesa Local Bus VLF—Very Low Frequency VLIW—Very Long Instruction Word VLSI—Very-Large-Scale Integration VM—Virtual Machine VM—Virtual Memory VMM—Virtual Machine Monitor VNC—Virtual Network Computing VOD—Video On Demand VoIP—Voice over Internet Protocol VPN—Virtual Private Network VPS—Virtual Private Server VPU—Visual Processing Unit VR—Virtual Reality VRML—Virtual Reality Modeling Language VSAM—Virtual Storage-Access Method VSAT—Very Small Aperture Terminal VT—Video Terminal VTL—Virtual Tape Library VTAM—Virtual Telecommunications Access Method VRAM—Video Random-Access Memory W W3C—World Wide Web Consortium WWDC—Apple World Wide Developer Conference WAFS—Wide Area File Services WAI—Web Accessibility Initiative WAIS—Wide Area Information Server WAN—Wide Area Network WAP—Wireless Access Point WAP—Wireless Application Protocol WASM—Watcom ASseMbler WBEM—Web-Based Enterprise Management WCAG—Web Content Accessibility Guidelines WCF—Windows Communication Foundation WDM—Wavelength-Division Multiplexing WebDAV—WWW Distributed Authoring and Versioning WEP—Wired Equivalent Privacy WFI—Wait For Interrupt WiMAX—Worldwide Interoperability for Microwave Access WinFS—Windows Future Storage WinRT—Windows RunTime WINS—Windows Internet Name Service WLAN—Wireless Local Area Network WMA—Windows Media Audio WMI—Windows Management Instrumentation WMV—Windows Media Video WNS—Windows Push Notification Service WOL—Wake-on-LAN WOR—Wake-on-Ring WORA—Write once, run anywhere WORE—Write once, run everywhere WORM—Write Once Read Many WPA—Wi-Fi Protected Access WPAD—Web Proxy Autodiscovery Protocol WPAN—Wireless Personal Area Network WPF—Windows Presentation Foundation WS-D—Web Services-Discovery WSDL—Web Services Description Language WSFL—Web Services Flow Language WUSB—Wireless Universal Serial Bus WWAN—Wireless Wide Area Network WWID—World Wide Identifier WWN—World Wide Name WWW—World Wide Web WYSIWYG—What You See Is What You Get WZC—Wireless Zero Configuration X XAML—eXtensible Application Markup Language XDM—X Window Display Manager XDMCP—X Display Manager Control Protocol XCBL—XML Common Business Library XHTML—eXtensible Hypertext Markup Language XILP—X Interactive ListProc XML—eXtensible Markup Language XMMS—X Multimedia System XMPP—eXtensible Messaging and Presence Protocol XMS—Extended Memory Specification XNS—Xerox Network Systems XP—Cross-Platform XP—Extreme Programming XPCOM—Cross Platform Component Object Model XPI—XPInstall XPIDL—Cross-Platform IDL XPS—XML Paper Specification XSD—XML Schema Definition XSL—eXtensible Stylesheet Language XSL-FO—eXtensible Stylesheet Language Formatting Objects 
XSLT—eXtensible Stylesheet Language Transformations XSS—Cross-Site Scripting XTF—eXtensible Tag Framework XTF—eXtended Triton Format XUL—XML User Interface Language XVGA—Extended Video Graphics Adapter Y Y2K—Year Two Thousand Y2K38—Year Two Thousand Thirty Eight YAAF—Yet Another Application Framework YACC—Yet Another Compiler Compiler YAGNI—You Aren't Gonna Need It YAML—YAML Ain't Markup Language YARN—Yet Another Resource Negotiator YaST—Yet another Setup Tool Z ZCAV—Zone Constant Angular Velocity ZCS—Zero Code Suppression ZIF—Zero Insertion Force ZIFS—Zero Insertion Force Socket ZIP—ZIP file archive ZISC—Zero Instruction Set Computer ZOI—Zero One Infinity ZOPE—Z Object Publishing Environment ZMA—Zone Multicast Address ZPL—Z-level Programming Language See also Acronym Internet slang List of file formats List of information technology initialisms Professional certification References External links The UNIX Acronym List Lists of abbreviations Lists of computer terms Computer jargon
List of computing and IT abbreviations
[ "Technology" ]
10,499
[ "Computing terminology", "Computing-related lists", "Computer jargon", "Lists of computer terms", "Natural language and computing" ]
51,012
https://en.wikipedia.org/wiki/Charles%20K.%20Kao
Sir Charles Kao Kuen (November 4, 1933 – September 23, 2018) was a Chinese physicist and Nobel laureate who contributed to the development and use of fibre optics in telecommunications. In the 1960s, Kao created various methods to combine glass fibres with lasers in order to transmit digital data, which laid the groundwork for the evolution of the Internet and the eventual creation of the World Wide Web. Kao was born in Shanghai. His family settled in Hong Kong in 1949. He graduated from St. Joseph's College in Hong Kong in 1952 and went to London to study electrical engineering. In the 1960s, Kao worked at Standard Telecommunication Laboratories, the research center of Standard Telephones and Cables (STC) in Harlow, and it was here in 1966 that he laid the groundwork for fibre optics in communication. Known as the "godfather of broadband", the "father of fibre optics", and the "father of fibre optic communications", he continued his work in Hong Kong at the Chinese University of Hong Kong, and in the United States at ITT (the parent corporation of STC) and Yale University. Kao was awarded the Nobel Prize in Physics for "groundbreaking achievements concerning the transmission of light in fibres for optical communication". In 2010, he was knighted by Queen Elizabeth II for "services to fibre optic communications". Kao was a permanent resident of Hong Kong and a citizen of the United Kingdom and the United States. Early life and education Charles Kao was born in Shanghai in 1933 and lived with his parents in the Shanghai French Concession. He studied Chinese classics at home with his brother, under a tutor. He also studied English and French at the Shanghai World School, which was founded by a number of progressive Chinese educators, including Cai Yuanpei. After the Communist revolution, Kao's family settled in Hong Kong in 1949. Many of his mother's siblings had moved to Hong Kong in the late 1930s; among them, his mother's youngest brother took good care of him. Kao's family lived on Lau Sin Street, at the edge of North Point, a neighbourhood of Shanghai immigrants. In Hong Kong, Kao studied at St. Joseph's College for five years and graduated in 1952. Kao obtained a high score in the Hong Kong School Certificate Examination, which at the time was the territory's matriculation examination, qualifying him for admission to the University of Hong Kong. However, electrical engineering was not then offered at the University of Hong Kong, the territory's only tertiary education institution at the time. In 1953, Kao therefore went to London to continue his secondary studies and obtained his A-Levels in 1955. He was later admitted to Woolwich Polytechnic (now the University of Greenwich) and obtained his Bachelor of Electrical Engineering degree. He then pursued research and received his PhD in electrical engineering in 1965 from the University of London, under Professor Harold Barlow of University College London, as an external student while working at Standard Telecommunication Laboratories (STL) in Harlow, England, the research center of Standard Telephones and Cables. Ancestry and family Kao's father, originally from Jinshan (now a district of Shanghai), obtained his Juris Doctor from the University of Michigan Law School in 1925. He was a judge in the Shanghai Concession and later a professor at the Comparative Law School of China at Soochow University (then in Shanghai). 
His grandfather Kao Hsieh was a scholar, poet and artist. Several other writers, including Kao Hsü, were also close relatives of Kao. His father's cousin was the astronomer Kao Ping-tse (the Kao crater is named after him). Kao's younger brother, Timothy Wu Kao, is a civil engineer and Professor Emeritus at the Catholic University of America; his research is in hydrodynamics. Kao met his future wife, Gwen May-Wan Kao (née Wong), in London after graduation, when they worked together as engineers at Standard Telephones and Cables. She was British Chinese. They were married in 1959 in London and had a son and a daughter, both of whom reside and work in Silicon Valley, California. According to Kao's autobiography, Kao was a Catholic who attended a Catholic church, while his wife attended an Anglican church. Academic career Fibre optics and communications In the 1960s at Standard Telecommunication Laboratories (STL), based in Harlow, Essex, England, Kao and his coworkers did their pioneering work in creating fibre optics as a telecommunications medium, by demonstrating that the high loss of existing optical fibre arose from impurities in the glass, rather than from an underlying problem with the technology itself. In 1963, when Kao first joined the optical communications research team, he made notes summarising the background situation and available technology at the time, and identifying the key individuals involved. Initially Kao worked in the team of Antoni E. Karbowiak (Toni Karbowiak), who was working under Alec Reeves to study optical waveguides for communications. Kao's task was to investigate fibre attenuation, for which he collected samples from different fibre manufacturers and also investigated the properties of bulk glasses carefully. This study convinced Kao that impurities in the material caused the high light losses of those fibres. Later that year, Kao was appointed head of the electro-optics research group at STL. He took over the optical communication program of STL in December 1964, because his supervisor, Karbowiak, left to take the chair in Communications in the School of Electrical Engineering at the University of New South Wales (UNSW), Sydney, Australia. Although Kao succeeded Karbowiak as manager of optical communications research, he immediately decided to abandon Karbowiak's plan (a thin-film waveguide) and to change the overall research direction together with his colleague George Hockham. They considered not only the optical physics but also the material properties. The results were first presented by Kao to the IEE in London in January 1966 and published in July of that year with George Hockham, who worked with Kao from 1964 to 1965. This study proposed the use of glass fibres for optical communication. The concepts described, especially the electromagnetic theory and performance parameters, are the basis of today's optical fibre communications. In 1965, Kao and Hockham concluded that glass fibres would need a light attenuation below 20 dB/km (decibels per kilometre, a measure of how strongly a signal is attenuated over distance), a key threshold value for optical communications. However, at the time of this determination, optical fibres commonly exhibited light loss as high as 1,000 dB/km and even more. This conclusion opened an intense race to find low-loss materials and suitable fibres capable of reaching such a threshold. Kao, together with his new team (members including T. W. Davies, M. W. Jones and C. R. Wright), pursued this goal by testing various materials. 
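To make the dB/km figures above concrete, here is a minimal illustrative calculation (not from the source): optical power falls by a factor of ten for every 10 dB of loss, so the fraction of light surviving a fibre run follows directly from the attenuation and the length. The 0.2 dB/km value used below is an assumed round figure for a modern fibre, chosen only for illustration.

```python
# Illustrative sketch: fraction of optical power remaining after a fibre run,
# given an attenuation figure in dB/km (the unit discussed above).
def remaining_fraction(attenuation_db_per_km: float, length_km: float) -> float:
    total_loss_db = attenuation_db_per_km * length_km
    return 10 ** (-total_loss_db / 10)

# ~1,000 dB/km (typical early-1960s fibre): only 1% of the light survives 20 metres.
print(remaining_fraction(1000, 0.02))   # ~0.01
# 20 dB/km (Kao and Hockham's threshold): 1% of the light survives a full kilometre.
print(remaining_fraction(20, 1))        # ~0.01
# 0.2 dB/km (assumed modern figure): 1% of the light still survives 100 km.
print(remaining_fraction(0.2, 100))     # ~0.01
```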
They precisely measured the attenuation of light with different wavelengths in glasses and other materials. During this period, Kao pointed out that the high purity of fused silica (SiO2) made it an ideal candidate for optical communication. Kao also stated that the impurity of glass material is the main cause for the dramatic decay of light transmission inside glass fibre, rather than fundamental physical effects such as scattering as many physicists thought at that time, and such impurity could be removed. This led to a worldwide study and production of high-purity glass fibres. When Kao first proposed that such glass fibre could be used for long-distance information transfer and could replace copper wires which were used for telecommunication during that era, his ideas were widely disbelieved; later people realized that Kao's ideas revolutionized the whole communication technology and industry. He also played a leading role in the early stage of engineering and commercial realization of optical communication. In spring 1966, Kao traveled to the U.S. but failed to interest Bell Labs, which was a competitor of STL in communication technology at that time. He subsequently traveled to Japan and gained support. Kao visited many glass and polymer factories, discussed with various people including engineers, scientists, businessmen about the techniques and improvement of glass fibre manufacture. In 1969, Kao with M. W. Jones measured the intrinsic loss of bulk-fused silica at 4 dB/km, which is the first evidence of ultra-transparent glass. Bell Labs started considering fibre optics seriously. As of 2017, fibre optic losses (from both bulk and intrinsic sources) are as low as 0.1419 dB/km at the 1.56 μm wavelength. Kao developed important techniques and configurations for glass fibre waveguides, and contributed to the development of different fibre types and system devices which met both civil and military application requirements, and peripheral supporting systems for optical fibre communication. In mid-1970s, he did seminal work on glass fibre fatigue strength. When named the first ITT Executive Scientist, Kao launched the "Terabit Technology" program in addressing the high frequency limits of signal processing, so Kao is also known as the "father of the terabit technology concept". Kao has published more than 100 papers and was granted over 30 patents, including the water-resistant high-strength fibres (with M. S. Maklad). At an early stage of developing optic fibres, Kao already strongly preferred single-mode for long-distance optical communication, instead of using multi-mode systems. His vision later was followed and now is applied almost exclusively. Kao was also a visionary of modern submarine communications cables and largely promoted this idea. He predicted in 1983 that world's seas would be littered with fibre optics, five years ahead of the time that such a trans-oceanic fibre-optic cable first became serviceable. Ali Javan's introduction of a steady helium–neon laser and Kao's discovery of fibre light-loss properties now are recognized as the two essential milestones for the development of fibre-optic communications. Later work Kao joined the Chinese University of Hong Kong (CUHK) in 1970 to found the Department of Electronics, which later became the Department of Electronic Engineering. 
During this period, Kao was the reader and then the chair Professor of Electronics at CUHK; he built up both undergraduate and graduate study programs of electronics and oversaw the graduation of his first students. Under his leadership, the School of Education and other new research institutes were established. He returned to ITT Corporation in 1974 (the parent corporation of STC at that time) in the United States and worked in Roanoke, Virginia, first as Chief Scientist and later as Director of Engineering. In 1982, he became the first ITT Executive Scientist and was stationed mainly at the Advanced Technology Center in Connecticut. While there, he served as an adjunct professor and Fellow of Trumbull College at Yale University. In 1985, Kao spent one year in West Germany, at the SEL Research Center. In 1986, Kao was the Corporate Director of Research at ITT. He was one of the earliest to study the environmental effects of land reclamation in Hong Kong, and presented one of his first related studies at the conference of the Association of Commonwealth Universities (ACU) in Edinburgh in 1972. Kao was the vice-chancellor of the Chinese University of Hong Kong from 1987 to 1996. From 1991, Kao was an Independent Non-Executive Director and a member of the Audit Committee of the Varitronix International Limited in Hong Kong. From 1993 to 1994, he was the President of the Association of Southeast Asian Institutions of Higher Learning (ASAIHL). In 1996, Kao donated to Yale University, and the Charles Kao Fund Research Grants was established to support Yale's studies, research and creative projects in Asia. The fund currently is managed by Yale University Councils on East Asian and Southeast Asian Studies. After his retirement from CUHK in 1996, Kao spent his six-month sabbatical leave at the Department of Electrical and Electronic Engineering of Imperial College London; from 1997 to 2002, he also served as visiting professor in the same department. Kao was chairman and member of the Energy Advisory Committee (EAC) of Hong Kong for two years, and retired from the position on July 15, 2000. Kao was a member of the Council of Advisors on Innovation and Technology of Hong Kong, appointed on April 20, 2000. In 2000, Kao co-founded the Independent Schools Foundation Academy, which is located in Cyberport, Hong Kong. He was its founding chairman in 2000, and stepped down from the board of the ISF in December 2008. Kao was the keynote speaker at IEEE GLOBECOM 2002 in Taipei, Taiwan. In 2003, Kao was named a Chair Professor by special appointment at the Electronics Institute of the College of Electrical Engineering and Computer Science, National Taiwan University. Kao then worked as the chairman and CEO of Transtech Services Ltd., a telecommunication consultancy in Hong Kong. He was the founder, chairman and CEO of ITX Services Limited. From 2003 to January 30, 2009, Kao was an independent non-executive director and member of the audit committee of Next Media. Awards Kao received numerous awards such as the Nobel Prize in Physics, Grand Bauhinia Medal, Marconi Prize, Prince Philip Medal, Charles Stark Draper Prize, Bell Award, SPIE Gold Medal, Japan International Award, Faraday Medal, and the James C. McGroddy Prize for New Materials. 
Honours 1993: A Commander of the Most Excellent Order of the British Empire (CBE) 2010: A Knight Commander of the Most Excellent Order of the British Empire (KBE) 2010: The Grand Bauhinia Medal (GBM), Hong Kong Society and academy recognition Honorary degrees Awards Kao donated most of his prize medals to the Chinese University of Hong Kong. Namesakes The minor planet 3463 Kaokuen, discovered in 1981, was named after Kao in 1996. 1996 (November 7): The north wing of the Chinese University of Hong Kong Science Center was named the Charles Kuen Kao Building. 2009 (December 30): The landmark auditorium in the Hong Kong Science Park was named after Kao – the Charles K. Kao Auditorium. 2010 (March 18): Professor Charles Kao Square, a square of the Independent Schools Foundation Academy 2014 (September): Sir Charles Kao UTC (now known as BMAT STEM Academy) was opened. 2014: Kao Data, a data center operator based on the former site of Sir Charles Kao's work on fibre optics cables, was founded. Others Featured in Science Museum London Hong Kong Affairs Adviser (May 1994 – June 30, 1997) Advisor of the Macao Science and Technology Council 1999: Asian of the Century, Science and Technology 2002: Leader of the Year – Innovation Technology Category, Sing Tao, Hong Kong October 21, 2002: Inducted into the Engineering Hall of Fame, the 50th Anniversary Issue, Electronic Design January 3, 2008: Inducted into the Celebration 60, British Council's 60th anniversary in Hong Kong November 4, 2009: Honorary citizenship, and the "Dr. Charles Kao Day" in Mountain View, California, U.S.A. 2009: Hong Kong's Person of the Year The Top 10 Asian Achievements of 2009 – No. 7 2010 (February): Honoree, Committee of 100, U.S.A. The 2010 OFC/NFOEC Conferences were dedicated to Kao, March 23–25, San Diego, California, U.S.A. May 14–15, 2010: Two sessions were dedicated to Kao at the 19th Annual Wireless and Optical Communications Conference (WOCC 2010), Shanghai, P.R. China. May 22, 2010: Inducted into the memento archive of the 2010 Shanghai World Expo Mid-2010: Hong Kong Definitive Stamp Sheetlet (No. 1), Hong Kong March 25, 2011: Blue plaque unveiled in Harlow, Essex, U.K. November 4, 2014: Gimme Fibre Day on Kao's birthday, FTTH Councils Global Alliance November 4, 2021, Google celebrated Kao's birthday with a Google Doodle. The binary output in the graphic spells out 'KAO' when converted to ASCII. Later life and death Kao's international travels led him to opine that he belonged to the world instead of any country. An open letter published by Kao and his wife in 2010 later clarified that "Charles studied in Hong Kong for his high schooling, he has taught here, he was the Vice-Chancellor of CUHK and retired here too. So he is a Hong Kong belonger." Pottery making was a hobby of Kao's. Kao also enjoyed reading Wuxia (Chinese martial fantasy) novels. Kao suffered from Alzheimer's disease from early 2004 and had speech difficulty, but had no problem recognising people or addresses. His father suffered from the same disease. Beginning in 2008, he resided in Mountain View, California, United States, where he moved from Hong Kong in order to live near his children and grandchild. On October 6, 2009, when Kao was awarded the Nobel Prize in Physics for his contributions to the study of the transmission of light in optical fibres and for fibre communication, he said, "I am absolutely speechless and never expected such an honor." Kao's wife Gwen told the press that the prize will primarily be used for Charles's medical expenses. 
In 2010 Charles and Gwen Kao founded the Charles K. Kao Foundation for Alzheimer's Disease to raise public awareness about the disease and provide support for the patients. In 2016, Kao lost the ability to maintain his balance. At the end-stage of his dementia he was cared for by his wife and intended not to be kept alive with life support or have CPR performed on him. Kao passed away at Bradbury Hospice in Hong Kong on September 23, 2018, at the age of 84. Works Optical Fiber Technology; by Charles K. Kao. IEEE Press, New York, U.S.A.; 1981. Optical Fiber Technology, II; by Charles K. Kao. IEEE Press, New York, U.S.A.; 1981, 343 pages. . Optical Fiber Systems: Technology, Design, and Applications; by Charles K. Kao. McGraw-Hill, U.S.A.; 1982; 204 pages. . Optical Fibre (IEE materials & devices series, Volume 6); by Charles K. Kao. Palgrave Macmillan on behalf of IEEE; 1988; University of Michigan; 158 pages. A Choice Fulfilled: the Business of High Technology; by Charles K. Kao. The Chinese University Press/ Palgrave Macmillan; 1991, 203 pages. Tackling the Millennium Bug Together: Public Conferences; by Charles K. Kao. Central Policy Unit, Hong Kong; 48 pages, 1998. Technology Road Maps for Hong Kong: a Preliminary Study; by Charles K. Kao. Office of Industrial and Business Development, The Chinese University of Hong Kong; 126 pages, 1990. Nonlinear Photonics: Nonlinearities in Optics, Optoelectronics and fibre Communications; by Yili Guo, Kin S. Chiang, E. Herbert Li, and Charles K. Kao. The Chinese University Press, Hong Kong; 2002, 600 pages. Notes References Further reading K. C. Kao (June 1986), "1012 bit/s Optoelectronics Technology", IEE Proceedings 133, Pt.J, No 3, 230–236. External links Optical Fibre History at STL including the Nobel Lecture 8 December 2009 Sand from centuries past; Send future voices fast BBC: Lighting the way to a revolution Mountain View Voice: The legacy of Charles Kao Man who lit up the world – Professor Charles Kao CBE FREng Ingenia, Issue 43, June 2010 1933 births 2018 deaths Academics of Imperial College London Academics of Queen Mary University of London Alumni of University of London Worldwide Alumni of the University of London Alumni of the University of Greenwich Alumni of University College London American electrical engineers American Nobel laureates American people of Hong Kong descent American physicists American people of Chinese descent British electrical engineers British emigrants to the United States British Nobel laureates British physicists Chinese University of Hong Kong people Draper Prize winners Fellows of the Royal Society Fellows of the Royal Academy of Engineering Fellows of the IEEE Fellows of the Institution of Engineering and Technology Fiber-optic communications Hong Kong Affairs Advisors Hong Kong electrical engineers Hong Kong emigrants to England Hong Kong Nobel laureates Hong Kong physicists Knights Commander of the Order of the British Empire Members of Academia Sinica Members of the European Academy of Sciences and Arts Members of the Royal Swedish Academy of Engineering Sciences Members of the United States National Academy of Engineering Foreign members of the Chinese Academy of Sciences Naturalised citizens of the United Kingdom Nobel laureates in Physics People with Alzheimer's disease Recipients of the Grand Bauhinia Medal Physicists from Shanghai Vice-chancellors of the Chinese University of Hong Kong Yale University faculty Yale University fellows Educators from Shanghai SPIE ITT Inc. 
people Chinese emigrants to Hong Kong Chinese Roman Catholics Hong Kong Roman Catholics American Roman Catholics Optical engineers Optical physicists
Charles K. Kao
[ "Engineering" ]
4,379
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
51,013
https://en.wikipedia.org/wiki/Slurry%20pipeline
A slurry pipeline is a specially engineered pipeline used to move ores, such as coal or iron, or mining waste, called tailings, over long distances. A mixture of the ore concentrate and water, called slurry, is pumped to its destination and the water is filtered out. Due to the abrasive properties of slurry, the pipelines can be lined with high-density polyethylene (HDPE), or manufactured completely from HDPE pipe, although this requires a very thick pipe wall. Slurry pipelines are used as an alternative to railroad transportation when mines are located in remote, inaccessible areas. Canadian researchers at the University of Alberta are investigating the use of slurry pipelines to move agricultural and forestry wastes from dispersed sources to centralized biofuel plants. Over distances of 100 kilometres, pipeline transport of biomass can be viable, provided it is used in processes that can accept very wet feedstocks such as hydrothermal liquefaction or ethanol fermentation. Compared to an equivalently sized oil pipeline, a biomass slurry pipeline would carry around 8% of the energy. Process The concentrate of the ore is mixed with water and then pumped over a long distance to a port where it can be shipped for further processing. At the end of the pipeline, the material is separated from the slurry in a filter press to remove the water. This water is usually subjected to a waste treatment process before disposal or return to the mine. Slurry pipelines offer an economic advantage over railroad transport and much less noise disturbance to the environment, particularly when mines are in extremely remote areas. Pipelines must be suitably engineered to resist abrasion from the solids as well as corrosion from the soil. Some of these pipelines are lined with high-density polyethylene (HDPE). Typical materials that are transferred using slurry pipelines include coal, copper, iron, and phosphate concentrates, limestone, lead, zinc, nickel, bauxite and oil sands. Slurry pipelines are also used to transport tailings from a mineral processing plant after the ore has been processed in order to dispose of the remaining rocks or clays. For oil sand plants, a mixture of oil sand and water may be pumped over a long distance to release the bitumen by ablation. These pipelines are also called hydrotransport pipelines. History Early modern slurry pipelines include the Ohio 'Consolidation' coal slurry pipeline (1957) and the Kensworth to Rugby limestone slurry pipeline (1965). The 85 km Savage River slurry pipeline in Tasmania, Australia, was possibly the world's first slurry pipeline to transport iron ore when it was built in 1967. It includes a 366 m bridge span 167 m above the Savage River. It carries iron ore slurry from the Savage River open cut mine owned by Australian Bulk Minerals and was still operational as of 2011. Planned projects One of the longest slurry pipelines was to be the proposed ETSI pipeline, to transport coal from Wyoming to Louisiana over a distance of 1,036 miles (1,675 km). It was never commissioned. It is anticipated that in the next few years some long-distance slurry pipelines will be constructed in Australia and South America, where mineral deposits are often a few hundred kilometers away from shipping ports. A 525 km slurry pipeline is planned for the Minas-Rio iron ore mine in Brazil. Slurry pipelines are also being considered to desilt or remove silts from deposits behind dams in man-made lakes. 
After the Hurricane Katrina disaster, there were proposals to remedy the environment by pumping silt to the shore. Proposals have also been made to de-silt Lake Nubia-Nasser in Egypt and Sudan by slurry pipelines, as Egypt is now deprived of 95% of its alluvium, which used to arrive every year. These projects to remedy the environment might alleviate one of the major problems associated with large dams and man-made lakes. ESSAR Steel India Limited owns two slurry pipelines longer than 250 km in India: the Kirandul–Vishakhapatnam pipeline and the Dabuna–Paradeep pipeline. See also Coal slurry pipeline Edward J. Wasp Notes References Miedema, S.A., Slurry Transport: Fundamentals, a Historical Overview and The Delft Head Loss & Limit Deposit Velocity Framework. http://www.dredging.org/media/ceda/org/documents/resources/othersonline/miedema-2016-slurry-transport.pdf External links Baha Abulnaga, Slurry Systems Handbook - McGraw-Hill 2002. Bonapace, A.C. A General Theory of the Hydraulic Transport of Solids in Full Suspension Ravelet, F., Bakir, F., Khelladi, S., Rey, R. (2012). Experimental study of hydraulic transport of large particles in horizontal pipes. Experimental thermal and fluid science. Ming, G., Ruixiang, L., Fusheng, N., Liqun, X. (2007). Hydraulic Transport of Coarse Gravel — A Laboratory Investigation Into Flow Resistance. Pipeline transport Mining equipment
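As a rough numerical companion to the Process section above, the sketch below shows the arithmetic of mixing concentrate and water into a slurry: the mixture density follows from the solids mass fraction, and the dry-solids throughput from the volumetric flow. All of the input values (65% solids by weight, the concentrate density, the flow rate) are hypothetical and chosen only to illustrate the calculation, not figures from the article.

```python
# Minimal sketch (hypothetical numbers): density of an ore-water slurry and
# the dry-solids throughput for an assumed volumetric flow rate.
def slurry_density(cw: float, rho_solid: float, rho_water: float = 1000.0) -> float:
    """Mixture density in kg/m^3 for a solids mass fraction cw (0-1)."""
    return 1.0 / (cw / rho_solid + (1.0 - cw) / rho_water)

cw = 0.65            # assumed 65% solids by weight
rho_solid = 4900.0   # assumed density of iron-ore concentrate, kg/m^3
flow = 0.5           # assumed pipeline flow, m^3/s

rho_mix = slurry_density(cw, rho_solid)
solids_per_second = flow * rho_mix * cw             # kg/s of dry concentrate delivered
print(round(rho_mix), round(solids_per_second))     # ~2072 kg/m^3, ~673 kg/s
```

Real pipelines are designed around further constraints that this sketch ignores, such as the minimum velocity needed to keep particles in suspension and the abrasion and corrosion allowances mentioned above.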
Slurry pipeline
[ "Engineering" ]
1,086
[ "Mining equipment" ]
51,018
https://en.wikipedia.org/wiki/Civil%20defense%20siren
A civil defense siren is a siren used to provide an emergency population warning to the general population of approaching danger. Initially designed to warn city dwellers of air raids (air-raid sirens) during World War II, they were later used to warn of nuclear attack and natural disasters, such as tornadoes (tornado sirens). The generalized nature of sirens led to many of them being replaced with more specific warnings, such as the broadcast-based Emergency Alert System and the Cell Broadcast-based Wireless Emergency Alerts and EU-Alert mobile technologies. By use of varying tones or binary patterns of sound, different alert conditions can be called. Electronic sirens can transmit voice announcements in addition to alert tone signals. Siren systems may be electronically controlled and integrated into other warning systems. Sirens in integrated public warning systems Sirens are sometimes integrated into a warning system that links sirens with other warning media, such as the radio and TV Emergency Alert System, NOAA Weather Radio, telephone alerting systems, Reverse 911, Cable Override, and wireless alerting systems in the United States and the National Public Alerting System, Alert Ready, in Canada. This fluid approach enhances the credibility of warnings and reduces the risk of assumed false alarms by corroborating warning messages through multiple forms of media. The Common Alerting Protocol is a technical standard for this sort of multi-system integration. Siren installations have many ways of being activated. Commonly used methods are dual-tone multi-frequency signaling (DTMF) or public switched telephone network (PSTN) using telephone lines, but activation can also be done via radio broadcast. This method opens up vulnerability for exploitation, but there are protections from false alarms. These sirens can also be tied into other networks such as a fire department's volunteer notification/paging system. The basics of this type of installation would consist of a device (possibly the same pager the firefighters have) connected to the controller/timer system of the siren. When a page is received, the siren is activated. Types of sirens Mechanical sirens Basic design A mechanical siren uses a rotor and stator to chop an air stream, which is forced through the siren by radial vanes in the spinning rotor. An example of this type of siren is the Federal Signal 2T22, which was originally developed during the Cold War and produced from the early 1950s to the late 1980s. This particular design employs dual rotors and stators to sound each pitch. Because the sound power output of this type of siren is the same in every direction at all times, it is described as omnidirectional. The Federal 2T22 was also marketed in a 3-signal configuration known as the Federal Signal 3T22, with the capability for a "hi-lo" (High-Low) signal. While some mechanical sirens produce sound in all directions simultaneously, other designs produce sound in only one direction, while employing a rotator mechanism to turn the siren head through 360 degrees of rotation. One rare type of mechanical siren, the Federal Signal RSH-10 ("Thunderbeam"), does not rotate or produce equal sound output in all directions. It instead uses a slowly rotating angled disc below the siren which directs the siren's output throughout 360 degrees. Supercharged A variation of the electromechanical siren is a "supercharged" siren. 
A supercharged siren uses a separate source, usually a supercharger-like blower, which forces air into the rotor assembly of the siren. This increases the air pressure in the rotor assembly, causing the sound output of the siren to increase heavily, which in turn greatly increases the siren's range. The blower is generally driven by an electric motor, but in rare cases can be driven by an engine. Federal Signal took advantage of this design and created their Thunderbolt Siren Series, utilizing Sutorbilt Roots blowers of different varieties and outputs. A very early model was called the Thunderbolt 2000. The difference between the Thunderbolt 2000 and later editions is that its blower is driven by an Onan two-cylinder gas engine. Another example of a siren that has a separate blower is the Alerting Communicators of America (ACA) Hurricane. One more example of a siren with a blower is the SoCal Edison Model 120, utilizing a centrifugal-style blower, built specifically for the San Onofre Nuclear Generating Station. The SoCal Edison Model 120 no longer stands in public; only one exists, and it is privately owned. Pneumatic Another variation on the electromechanical siren is the pneumatic siren. Similar to supercharged sirens, pneumatic sirens also force air into the rotor assembly of the siren. However, these sirens use a pressurized air reservoir instead of a motor-driven blower. An example is the German Hochleistungssirene (HLS, "high-performance siren"), first produced by the German firm Pintsch-Bamag and later by the German firm Hörmann. Soon afterward, Hörmann improved on the design to create the HLS 273, which did away with the massive siren head of the original in favor of a more compact head and cast aluminum exponential-profile horns. These sirens stored a reservoir of compressed air, recharged periodically by a diesel engine-driven compressor in a vault in the base of the massive siren unit. The later HLS 273 placed the large (6,000 liter) air tank underground beside the machinery vault, instead of in the mast itself as in the earlier HLS units. Electronic sirens Electronic sirens consist of an electronic tone generator, a high-power amplifier, and a horn loudspeaker. Typically, the loudspeaker unit incorporates horn loading, causing it to be similar in appearance to some electromechanical sirens. Many of these loudspeakers incorporate a vertical array of horns to achieve pattern control in the vertical plane. Each cell of the loudspeaker horn is driven by one or more compression drivers. One type of compression driver for this type of loudspeaker uses two doughnut-shaped permanent magnet slugs to provide magnetic flux. For siren applications, high-fidelity sound is a secondary concern to high output, and siren drivers typically produce large amounts of distortion which would not be tolerable in an audio system where fidelity is important. Most newer (and some older) electronic sirens have the ability to store digital audio files. These audio files can be custom sounds or emergency messages and, depending on the situation, a stored sound file can be broadcast through the siren. These sirens may also include a microphone jack for broadcasting live messages. As with electromechanical sirens, there are both omnidirectional, directional, and rotating categories. Whelen Engineering produces sirens which oscillate through 360 degrees, rotating in one direction and then in the other to allow a hard-wired connection between the amplifiers and the siren drivers. 
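Since an electronic siren is essentially a tone generator driving amplifiers and horn loudspeakers, its characteristic rising-and-falling "wail" can be sketched in a few lines of signal generation. The following is a minimal illustration only, not any manufacturer's firmware; the sample rate, sweep limits and sweep period are assumed values.

```python
import math, struct, wave

# Minimal sketch of an electronic siren's tone generator: a sinusoidal "wail"
# sweeping between two assumed frequencies, written out as a mono WAV file.
RATE = 8000                 # samples per second (assumed)
LOW, HIGH = 400.0, 800.0    # assumed sweep limits in Hz
SWEEP_PERIOD = 6.0          # assumed seconds for one rise-and-fall cycle
DURATION = 12.0             # seconds of audio to generate

samples = []
phase = 0.0
for n in range(int(RATE * DURATION)):
    t = n / RATE
    # Triangle-shaped sweep between LOW and HIGH.
    cycle_pos = (t % SWEEP_PERIOD) / SWEEP_PERIOD
    sweep = 1.0 - abs(2.0 * cycle_pos - 1.0)
    freq = LOW + (HIGH - LOW) * sweep
    phase += 2.0 * math.pi * freq / RATE     # accumulate phase to avoid clicks
    samples.append(int(32767 * 0.8 * math.sin(phase)))

with wave.open("wail.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(struct.pack("<" + "h" * len(samples), *samples))
```

A real siren controller would add the stored voice messages, amplifier supervision and rotation control described above; the point here is only the swept-tone generation itself.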
These sirens can also be set to rotate any amount from 0 to 360 degrees, allowing them to broadcast only in certain directions. Electronic sirens may be mechanically rotated to cover a wide area, or may have transducers facing in all directions to make an omnidirectional pattern. A directional siren may be applied where notification is only required for a defined area in one direction. Civil defense sirens around the world Middle East Iran, Kuwait, and Iraq Kuwait has a nationwide siren system of Federal Signal Modulators. The country once had an entire system of Thunderbolt System 7000s until they were all replaced; one System 7000 remains on display. Israel Israel has more than 3,100 warning sirens. Most of the sirens in urban areas are German-made HLS sirens, models F71 and ECN3000. All the other sirens are HPSS32 models made by Acoustic Technologies (ATI). During the early 2010s, mechanical sirens were gradually phased out and replaced by electronic ones, although the mechanical ones were generally left standing. The air-raid siren signal is called azaka ('alarm') and consists of a continuously ascending and descending tone. The "all clear" signal, called tzfirat harga'a, is a constant single-pitch sound. In recent conflicts, use of the "all clear" signal has been discontinued, as it was seen as causing unnecessary confusion and alarm. In certain regions in the south of Israel, which regularly undergo rocket attacks from Gaza, a specialized system called Red Color is used. Israel also has an earthquake warning system that uses the sirens, with a signal called "תרועה" (trua). The "all clear" signal is used three times per year to denote a moment of silence (of one or two minutes): once on Israel's Holocaust Remembrance Day and twice on the Day of Remembrance. Saudi Arabia In Saudi Arabia, most minaret speakers can be used as sirens. The purpose of the warnings is to notify the population of a danger that threatens their lives: individuals must go to shelters or their homes, lock doors and windows, take appropriate protective actions, and listen to radio and television for civil defense instructions. The siren system consists mainly of ASC I-Force and Whelen WPS-2900 series units, as well as some Federal Signal Modulators at air force bases and a Federal Signal EOWS 1212 in Dammam. Türkiye Türkiye has a nationwide siren system of mechanical sirens and loudspeakers, though minaret speakers are also sometimes used as sirens. The mechanical sirens used across the country are Selay SL-1170s. While the main models of the warning loudspeakers are not documented, Federal Signal, Whelen, Hörmann and Sonnenburg units are fairly common, with rare cases of Elektror or Siemens-Schuckert models remaining. The sirens sound during the memorial of Atatürk's death on November 10 at 9:05 a.m. and are also sounded for emergencies such as earthquakes, tsunamis, tornadoes, chemical plant incidents or incoming enemy attack. The sirens are controlled by AFAD and can be mounted on roofs or, sometimes, on 30 m towers. United Arab Emirates The United Arab Emirates uses an integrated national Early Warning System (EWS) which was developed in 2017 and utilizes a network of cell broadcast, variable-message signs on roads, radio alerts, television alerts, and loudspeakers in mosque minarets as sirens. The warning system is managed by the National Emergency Crisis and Disaster Management Authority (NCEMA) and is used to warn the public during times of an emergency or a disaster. 
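Integrated, multi-channel systems like the one described for the United Arab Emirates are commonly tied together with the Common Alerting Protocol (CAP) mentioned near the start of this article. The sketch below assembles a minimal CAP 1.2 alert in Python; the element names follow the CAP 1.2 standard, while the identifier, sender, timestamp and event text are purely hypothetical placeholders.

```python
import xml.etree.ElementTree as ET

# Minimal sketch of a Common Alerting Protocol (CAP) 1.2 message of the kind an
# integrated warning system might distribute to sirens, broadcasters and phones.
CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"
alert = ET.Element("alert", xmlns=CAP_NS)
for tag, text in [
    ("identifier", "example-2024-0001"),          # hypothetical
    ("sender", "warning-authority@example.org"),  # hypothetical
    ("sent", "2024-01-01T12:00:00+00:00"),
    ("status", "Exercise"),                       # a drill, not a real emergency
    ("msgType", "Alert"),
    ("scope", "Public"),
]:
    ET.SubElement(alert, tag).text = text

info = ET.SubElement(alert, "info")
for tag, text in [
    ("category", "Safety"),
    ("event", "Outdoor warning siren test"),
    ("urgency", "Future"),
    ("severity", "Minor"),
    ("certainty", "Observed"),
    ("headline", "Monthly siren test at noon"),
]:
    ET.SubElement(info, tag).text = text

print(ET.tostring(alert, encoding="unicode"))
```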
Asia The People's Republic of China China has sirens located in most cities and towns, particularly those located in or near disputed territories. If the state declares a state of emergency due to attacks or invasion, or when there is a very high risk of military conflict, sirens will warn the public of possible attacks or invasion. The sirens are controlled by the People's Liberation Army. There are annual or semi-annual test runs, often occurring on commemorative dates that are associated with the Second Sino-Japanese War. For example, Nanjing annually tests air raid sirens at 10 a.m. on 13 December, followed by a moment of silence to commemorate the Nanking Massacre. There have also been some commemorative tests during the memorial periods of major disasters, such as on 19 May 2008 in memory of victims of the 2008 Sichuan Earthquake. The air raid warning comes in 3 types: Pre-raid warning: a 36-second high-tone followed by a 24-second low-tone, with three cycles per period. This warning signifies an air raid is likely about to take place. Raid warning: a 6-second high-tone followed by a 6-second low tone, with 15 cycles per period. This signifies that an air raid is imminent. Post-raid warning: a single 3-minute high-tone. When sounded, it signifies an end to the raid or a cool-down of the wartime situation. Taiwan Taiwanese civil defense sirens are erected on police stations and commanded by the nation's Civil Defense Office (). The government issues air raid warnings, as well as tsunami warnings, through the sirens in conjunction with their own Public Warning System that utilizes 4G LTE cell signals. The Taiwanese government also holds annual air-raid drills called Wan-an drills () so the populace can be familiar with what to do in an air raid, given the high risk of war with neighboring mainland China. India Mumbai has around 200–250 functional sirens. The government is planning to change the system by incorporating modern wireless and digital technology in place of the present landline switching system. In Mumbai civil defence, sirens were used during the Indo-Pakistan wars of 1965 and 1971, warning civilians about air raids by the Pakistan Air Force. At night, sirens were also used to indicate blackouts, when all lights in Mumbai were switched off. Daily tests of the sirens at 9 a.m. were recently reduced to once per month. They are controlled by the Regional Civil Defence Control Center, Mumbai, with input from Indian Defence Services. Sirens are also used to denote a minute's worth of silence on special occasions. Japan Japan utilizes a system of mainly Electronic "Bosai Musen" electronic sirens to warn of impending missile strikes or natural disasters such as Tsunamis. The system was installed by J-Alert in 2007 as one of their many other methods of warning the public of incoming dangers. Some of these sirens are also used to play music at certain times of day, presumably as an "end of day" signal. Japan also had a large mechanical siren system not used to warn residents, made by Yamaha, called the "Yamaha Music Siren", sounded at certain times of day presumably as an "end of day" signal. These sirens had many mechanical choppers with different port ratios to match a certain musical note. These choppers would have rotating stators in order to let each note play individually, in order to play music. 
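The three Chinese air raid signals described above are fixed on/off timing patterns, so they can be written down directly as data. The sketch below encodes the published timings and expands each signal into an on/off schedule; it is an illustration only, not actual siren-controller code.

```python
# Minimal sketch: the three PRC air defence signals described above, expressed
# as (seconds_on, seconds_off, cycles) and expanded into an on/off schedule.
SIGNALS = {
    "pre-raid warning":  (36, 24, 3),    # 36 s tone, 24 s silence, three cycles
    "raid warning":      (6, 6, 15),     # 6 s tone, 6 s silence, fifteen cycles
    "post-raid warning": (180, 0, 1),    # single 3-minute tone
}

def schedule(name: str):
    """Return a list of (state, seconds) segments for one signal."""
    on, off, cycles = SIGNALS[name]
    segments = []
    for _ in range(cycles):
        segments.append(("on", on))
        if off:
            segments.append(("off", off))
    return segments

for name in SIGNALS:
    segs = schedule(name)
    total = sum(seconds for _, seconds in segs)
    print(f"{name}: {segs} (total {total} s)")
```

Note that each of the three signals works out to a three-minute period, matching the durations quoted above.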
Singapore Singapore currently has a network of 284 stationary sirens named the Public Warning System which warns the entire country of air raids, as well as human-made and natural disasters (except earth tremors). Singapore's sirens are tested at noon on the first day of every month. During the test, the sirens sound a light cheerful chime instead of any of the three signals. The sirens look very similar to the Hörmann ECN3000 (See picture). South Korea Nearly all towns and cities are equipped with civil defense sirens in case of natural disasters or missile attacks from North Korea. South Korea holds civil defense drills every month to prepare for such scenarios. Europe Austria Austria is fully covered with an operational air-raid siren system consisting of 8,203 devices as of 2012. They are tested weekly at noon on Saturdays (except in Vienna) with the signal, a 15-second continuous tone. Every year on the first Saturday of October, the whole range of alarm signals (with the exception of the fire alert) is sounded as a system test () and to familiarize the population with the signals. Warning: a 3-minute continuous tone. People are warned of an incoming danger and advised to tune into the appropriate Ö2 regional radio station or ORF 2 for further instructions. General alarm: a 1-minute ascending and descending tone. Danger is imminent; people should seek shelter immediately and listen to radio or TV. End of danger: a 1-minute continuous tone. The danger is over, and shelters can be left. Any hazards potentially encountered during normal life are announced in the media. Fire alert: three 15-second continuous tones separated by 7-second intervals. All firemen of volunteer fire brigades should report to their fire station immediately. This signal is being used less and less, as many fire brigades have begun to alert their members by radio. Belgium Belgium used to test its air raid sirens every first Thursday of the trimester (three-month period). Sirens are Sonnenburg Electronic sirens. When the air-raid sirens are tested, the message or is broadcast every time the sirens sound. There are 540 sirens all across the country. A non-audible test was performed every day, and the last test occurred on 4 October 2018. Afterwards, the network was decommissioned. The sirens remain around nuclear facilities, but no tests are performed. The official recommendation is that people subscribe to BE-Alert, a system where information is provided via SMS, e-mail or phone. Bulgaria Bulgaria has over 4000 sirens deployed around in the country, especially Sofia. These are likely used to warn people of earthquakes . Sirens were first installed during WW2 to warn people of incoming attacks by Nazi Germany; these were replaced by electronic ones during the 1990s. The sirens are tested on the first workday of April and November every year to see if they are functioning properly. On 2 June every year, the day of Hristo Botev and demised ones for the liberation and independence of Bulgaria is commemorated with a 2-minute signal, different from the usual tone. The public is informed about the test beforehand. The sirens are mainly electronic ones (German made type ECN units or other high powered speaker systems on top of public buildings). The signals may include - attack (a 1-minute wailing tone used to indicate an incoming danger) and alert (sometimes referred to as the "all clear" signal which is a continuous single tone). 
Strong tremors in Bulgaria are quite rare, so a special earthquake warning system is not needed. Czech Republic The Czech Republic has around 6,000 sirens, including both mechanical and electronic sirens, which are tested on the first Wednesday of every month. There are three warning signals, which are accompanied by a verbal message in Czech and, on electronic sirens, usually with an English and German translation. There is also an emergency broadcast on TV channels, maintained by Czech Television, and radio channels, maintained by Czech Radio. General alert: a 140-second fluctuating tone. The alert can be repeated up to three times in three-minute intervals with four possible verbal messages: an unspecified danger, for which people should go inside a building, close the doors and windows (the alert usually means a dangerous substance may be present in the air) and tune the TV or radio to an appropriate channel to find out more; a flooding alert (this siren signal is no longer used for flooding), for which people should turn on the radio and get away from the source of danger (usually a river); a chemical accident, for which people should behave the same way as during the general warning; and a radioactive accident, for which people should likewise behave the same way as during the general warning. Testing of sirens: a 140-second steady tone, used to test the functionality of the sirens; a verbal message is broadcast before the test and another after it. Fire alert: a one-minute steady tone with a pause, used to alert firefighters to a fire and accompanied by a verbal message. Denmark A total of 1,078 electronic warning sirens have been installed in Denmark by HSS Engineering; they are license-built Whelen WPS 2800 sirens. The sirens are placed on the tops of buildings or on masts. This warning system makes it possible to warn the populations of all urban areas with more than 1,000 inhabitants. This means that about 80% of Denmark's population can be warned using stationary sirens. The remaining 20% are warned by mobile sirens mounted on police cars. The function of the sirens is tested every night, but these tests do not produce any sound. Once every year, on the first Wednesday of May at 12:00, the sirens are tested with sound. Finland A general alarm consists of a repeating 1-minute sound, made up of tones that ascend for 7 seconds and descend for 7 seconds. The end of danger is signaled by a 1-minute continuous tone. Warning sirens are tested on the first Monday of every month at noon. The testing alarm is shorter than the general alarm (only lasting for 7 or 14 seconds) and may be a flat tone. The system commonly uses Teho-Ulvo sirens, which are electronic and driven through horn loudspeaker arrays. They are known for their very distinctive wind-ups and wind-downs. France In France, the emergency population warning network is called the "Réseau national d'alerte" (RNA). The system is inherited from the air raid siren network developed before World War II. It consists of about 4,500 electronic or electromechanical sirens placed all over France. The system is tested each month at noon on the first Wednesday. The most common siren type is the electromechanical KM Europ 8-port single-tone siren. These sirens have a very characteristic sound: a very fast wind-up and a lower pitch than most sirens (the pitch is comparable to a Federal Signal STL-10 running at a lower frequency, resulting in a lower pitch). 
A recording of these sirens was used in the movie Silent Hill. Germany In Germany, the warning authorities were closed in the 1990s after the threat of the Cold War was over, since the ability to alert the public was then considered unnecessary. As the civil defense sirens were also frequently used to alert volunteer firefighters, many sirens were sold to municipalities for a symbolic price; others were dismantled. In the 2000s, it was realized that the ability to warn the public is not only necessary in cases of war, but also in events like natural disasters, chemical or nuclear accidents, or terrorist attacks. As a result, some cities like Düsseldorf and Dresden began to rebuild their warning sirens. In Hamburg, the sirens are still operational and also warn the public during storm surges; all towns in the Moselle Valley likewise continue to operate and test their warning sirens. The majority of operational sirens in Germany are either electro-mechanical E57 units or electronic sirens. There are no precise official numbers as to how many sirens there used to be, as most of the documents regarding the system's construction and upkeep were disposed of after the Cold War ended. However, estimates place the number of operational sirens during the system's peak at around 80,000 in West Germany alone. Accounts as to how many of those sirens are still in operation vary significantly depending on the source. The most prominent German siren manufacturer and maintainer, Hörmann Industries, states on its website that it maintains over 60,000 sirens. That figure includes mobile sirens that can be mounted on vehicles, but one can deduce from it that at least around 50,000 stationary sirens may still be in operation today, many of the original electro-mechanical sirens having been replaced with newer electronic models. During World War II, Berlin's air raid sirens became known by the city's residents as "Meier's trumpets" or "Meier's hunting horns" due to Luftwaffe chief Hermann Goering's boast that "If a single bomb ever falls on Berlin, you can call me Meier!". Hungary The installation of sirens across the country was requested by the Minister of National Defense in 1938. Various models from several companies were used, and some were manufactured locally in Hungary under license from the companies owning the designs. The "FM Si 41" model originates from Germany, the "TESLA/Braun-Bovery" siren originates from a Swiss company, and the "EKA" series and "DINAMO" sirens were manufactured locally in Hungary. There were 25 air raid warning zones in Hungary. Only the larger towns and cities were afforded electric sirens; the rest of the country used hand-crank sirens, in addition to warnings broadcast by radio. The electric sirens were controlled via phone lines and were operated from a central location, such as a city hall, fire station, post office, or local military base or outpost. Initially, there were only two siren signals, similar to the signals used in Britain at the time: the "Red Alert/Air Raid Warning" signal, indicating an imminent threat, and "All Clear", indicating the danger had passed. In 1944, the signal system was updated to four total signals: "Air Warning" - 30 seconds of continuous siren sounding. 
Used when enemy combatants entered within 150 km of the area "Red Alert/Air Raid Warning" - issued when enemy combatants were estimated to arrive within 15 minutes "Air Warning Over" - issued when enemy combatants were leaving the immediate area and the attack was considered over "All Clear" - issued when the enemy combatants moved back past the 150 km range After the conclusion of World War II, siren installation continued throughout the country. Previously unequipped towns received the same types of sirens that had been installed during the war, as those models were still kept in production. In the 1960s, as an ally of the Soviet Union, Hungary planned to replace the aging siren systems installed during the war with sirens produced in other communist countries like Czechoslovakia and East Germany. In the early 1970s, a massive siren replacement program began. Nearly all sirens were replaced with the DDR DS977 and MEZ models. Some of the siren models that were used during the war were still in production, so some areas did receive "WW2-type" sirens. The sirens installed during the 1970s were, and remain, the property of the National Civil Defense. These more modern installations were not controlled via telephone lines, but by sound-activated receivers. Each installation had a remote control unit at the local police station, fire station or Civil Defense Office that would transmit the specific frequencies that would activate the sirens in the surrounding areas. Testing and maintaining this system was problematic, as the remote control unit was not allowed to be used for tests and could only be operated in legitimate cases of war or emergency. Civil Defense workers had to travel to and manually test each individual siren in the system that was registered to their station. With only a few Civil Defense stations per county, any given station would be responsible for several cities, towns and villages, making maintenance take significant amounts of time. A rare few of these Cold War-era siren systems were equipped with phone-line control systems; these were mostly located around nuclear shelters. In the late 1990s, the remote control system was deemed unstable and unreliable. With the fall of the Soviet Union in 1991, the likelihood of war was low, and the National Warning System was abandoned. In the mid-2000s, the MoLaRi system was built around factories that work with hazardous materials, to warn surrounding areas in case of emergency or risk to public health or safety. These systems continue to be installed near facilities that work with hazardous materials. In 2011, a National Civil Defense Drill was held to see how many of the sirens abandoned for decades still functioned. Many of them did not work, and while some counties decided to repair the sirens and start doing yearly growl tests again, others decided to leave the abandoned systems as they were. A few cities and counties kept their sirens active and in good condition, and still perform repairs, maintenance, and yearly tests, such as Győr-Moson-Sopron, Heves, Hajdú-Bihar, and Jász-Nagykun-Szolnok counties. In 2014, regular testing of the MoLaRi sirens near hazardous materials facilities began. Testing is performed on the first Monday of each month at 11:00 and is most easily heard in and around Budapest, where a large number of these facilities are located. The remaining siren systems that were found to be non-functional have been dismantled or left in place. 
Emergency plan documents for some towns still state that citizens will be warned with the use of sirens, despite these systems being defunct. The government has attempted to destroy all old siren systems that were dismantled, out of fear that intact systems could be activated and cause panic. The Civil Defense Office currently relies on radio, TV, and phone alerts to warn the population in case of emergency, and in some emergency plans, church bells are included as a potential warning system. There are four current siren signals, used by the MoLaRi system, HÖRMANN sirens, and the old siren systems where they are still maintained: "Air-Raid Alert" - a 30-second continuous sounding with pitch alternating between 280 Hz and 400 Hz, repeated three times, with 30-second pauses between soundings "Emergency Alarm" - a 120-second continuous sounding with pitch alternating between 133 Hz and 400 Hz "Alarm Passed/All Clear" - a 30-second continuous sounding with a constant pitch of 400 Hz, repeated twice, with 30-second pauses between the signals "Test Alarm" - a 6-second sounding ramping up to 400 Hz over its duration Italy The Italian War Ministry began installing air raid sirens and issuing air defence regulations in 1938. Production was entrusted to La Sonora, founded in 1911 and still active today. During World War II, every town had a siren, and several were present in each large city. Even after the danger of bombings had ended, they were kept to provide warning in case of any threat (e.g. high water in Venice). As of 2015, some of them still survive. For instance, as many as 34 have been located in Rome using crowdsourcing. Up until the 1980s, they underwent routine maintenance and sounded at noon. Additionally, the Protezione Civile (Civil Protection) operates sirens to warn the public in case of a threat to the citizen population. Protezione Civile also provides transport needs and military defence for the Government of Italy. These defence systems were put in place in the 1990s and are occasionally still used today. Urbania, Italy, has a British Secomak GP3 air raid siren which is activated annually in commemoration of the bombing of Urbania during World War II. Ferrara, Italy, has a system of Whelen (remodeled HSS-TWS series) sirens that warn of industrial risks in the area. Lithuania Lithuania operates sirens from the Cold War, mainly in suburban areas of big cities such as Vilnius, Klaipeda and Kaunas. The most common models are Elektror, Siemens and Hörmann, though Federal Signal units are also not uncommon. These sirens can be found at fire stations, factories, police stations and city halls. The attack and all-clear signals are used for tests, although tests are less frequent than elsewhere in Eastern Europe. The sirens sound off in the event of fires, chemical incidents, floods, incoming enemy attack, and similar emergencies. In most areas, testing appears to take place roughly once a year. Netherlands The Netherlands tests its air-raid sirens once per month, on the first Monday at noon, to keep the public aware of the system. There are about 4,200 sirens placed all across the country. In March 2015 it was announced that, due to high maintenance costs, the sirens would be taken out of service by the end of 2020. The government has implemented a Cell Broadcast system called NL-Alert, compliant with the mandatory European regulation EU-Alert, to replace the sirens by 2021. However, as of early 2022, the sirens will continue to be heard until another decision has been made. 
Norway Norway has about 1,250 operational sirens (mostly Kockums air horn units rather than motorized sirens), primarily located in cities. Three different signals are used: Critical message, listen to radio: three periods of three signals, separated by one minute between the periods. The "critical message" signal is followed by a radio broadcast. It is used in peacetime to warn the population about major accidents, large fires and gas leaks. Air raid, take cover: an intermittent signal lasting for about a minute. All clear: a continuous signal sounded for about 30 seconds. The sirens are tested twice each year, at noon on the second Wednesday of January and June. As of 2014, only the "critical message" signal is used during tests. Previously, the June test also used the "air raid" and "all clear" signals; the latter two are no longer used in peacetime. There are also sirens in the Storfjorden area in Møre og Romsdal county to warn about an avalanche from the mountain Åkerneset. These sirens are not operated by the Norwegian Civil Defense; instead, they are operated by Åksnes/Tafjord Beredskap. These sirens can be found in the villages of Stranda, Tafjord, Geiranger, Hellesylt, Linge, and Valldal. Poland There are numerous civil defense sirens employed throughout Poland. Though the testing schedule is unclear, sirens are commonly sounded on the anniversary of the Warsaw Uprising. As in other parts of the world, many of the country's volunteer fire stations utilize civil defense sirens for fire calls. Many of these stations use a two-siren setup: one high-pitched unit and one low-pitched unit. Portugal Portugal has hundreds of sirens placed across the country. Urban areas typically have a few electronic sirens, though warning sirens are more common in suburban and rural areas, where fire stations use them for fire calls. These sirens are dual-tone and mechanical and are the most common kind of warning siren in Portugal. Between the 2000s and the early 2010s, a few fire sirens were decommissioned because of the cost of upkeep, although most of them remain active. Romania In Romania, civil defense sirens have been used since the early 1930s. Originally, each street had a small siren on top of a high-rise building, which could be powered mechanically. During World War II, the sirens had a single continuous tone to warn of an air strike. Throughout the Cold War, larger sirens were manufactured locally and installed on various public buildings and residences. The sirens were able to transmit a comprehensive variety of tones, each with a different meaning such as a chemical disaster, an earthquake, a flood, or an imminent air or nuclear strike; each of these tones required the population to move either to higher ground or to an ABC shelter. An "all clear" signal was played after the area had been deemed safe for the general public. Since the 1990s, the older civil defense sirens have been replaced by electronic sirens and the procedure has been simplified. As of 2013, there are four playable tones: a natural disaster warning, an upcoming air/nuclear strike, an imminent air/nuclear strike, and an "all clear" signal. Taking shelter is no longer a legal requirement, although ABC shelters are still operational. In August 2017, Romanian authorities started to perform monthly defence siren tests. The first such test took place on 2 August 2017, and the tests are scheduled to be repeated on the first Wednesday of each month, between 10:00 and 11:00 am local time. 
Such tests have been stopped in the wake of the COVID-19 pandemic. Russia During the siege of Leningrad, the radio network carried information for the population about raids and air alerts. The famous "metronome" went down in the history of the siege as a cultural monument to the population's resistance. At that time, there were more than 1,000 loudspeakers and 400,000 wired radio receivers operating in the city. If there were no broadcast programs, the metronome was broadcast with a slow rhythm of 50-55 beats per minute. The network was switched on around the clock, which allowed the population and services to be confident that the network was operating. By order of the MPVO headquarters, the duty officer of the central station of the radio network interrupted the broadcast of the program and turned on an electric record player with a recording of the alarm text. This recording was supplemented by 400 electric sirens. At the end of the recording, the metronome was switched on with a rapid rhythm of 160-180 beats per minute. When the danger was over, the record player was switched on again by order of the headquarters, and the all-clear was announced in the streets and houses, accompanied by the sound of fanfares. Slovenia Slovenia has 1,563 operable civil defence sirens. Most of them are electronic sirens, although there are some mechanical ones. Civil defence sirens are mounted on fire stations, town halls, or other structures. Three siren tones are used in the country: Warning (Opozorilo na nevarnost): a 2-minute-long steady tone. Serves as a warning of the impending danger of a fire, a natural or other type of disaster, or a high water level. Immediate danger (Neposredna nevarnost): a 1-minute wailing tone. Used in case of dangers such as major fires, floods, radiological or chemical danger and air raids. All clear - end of danger (Konec nevarnosti): a 30-second steady tone. It is heard during the siren tests held on the first Saturday of each month at noon, and at the end of an emergency for which the immediate danger signal was sounded. Since 1 September 1998, there have been two additional siren tones, which are used in certain Slovenian municipalities. The municipalities of Hrastnik and Trbovlje use a special signal (called Neposredna nevarnost nesreče s klorom) for the immediate danger of an accident involving chlorine, when there is a danger of chlorine leaking into the environment. The 100-second-long signal consists of a 30-second wailing tone immediately followed by a 40-second steady tone and then another 30-second wailing tone. The municipalities of Muta, Vuzenica, Podvelka, Radlje ob Dravi, Brežice, Krško and Sevnica use a 100-second-long wailing signal (named Neposredna nevarnost poplavnega vala), consisting of 4-second bursts separated by 4 seconds of silence, for the immediate danger of flash floods, used in case of overflow or collapse of a hydroelectric dam. When emergencies impact multiple regions at the same time, or the whole country, people are advised to listen to the first channel of the public Radio Slovenia, Val 202, or watch the first or second channel of the public broadcaster RTV Slovenija. Emergencies of smaller extent are announced via regional radio and TV stations. Until 1 January 1998, air raid sirens were tested each Saturday at noon. 
The formerly used warning signals were: General public mobilisation (Splošna javna mobilizacija): a 3-minute-long ascending and descending tone, consisting of alternating 10-second bursts and 5 seconds of silence Immediate danger of air raid (Zračna nevarnost): a 1-minute wailing tone ABC (chemical, biological, nuclear disaster) alarm (Radiacijsko-biološko-kemična nevarnost): a 90-second tone, consisting of three 20-second ascending and descending bursts, separated by 15 seconds of silence Fire alarm (Požarna nevarnost): a 90-second tone, consisting of three 20-second steady bursts, separated by 15 seconds of silence (Natural) disaster (Nevarnost naravnih nesreč): a 60-second signal with alternating 20 seconds of steady tone, 20 seconds of ascending and descending tone and another 20-second wailing tone All clear (Konec nevarnosti): a 60-second steady tone Spain Only a few of the sirens used for civil defence against bombings during the Spanish Civil War are preserved. The Guernica siren has a highly symbolic value because of the impact of the Bombing of Guernica. The Barcelona City History Museum preserves one related to the Bombing of Barcelona, and another siren from the civil war years is preserved in Valencia. Sweden The Swedish alarm system uses outdoor sirens in addition to information transmitted through radio and television and sent by text messages and mobile apps. Special radio receivers are handed out to residents living near nuclear power plants. The outdoor system, known to the Swedish population as 'Hoarse Fredrik' (alluding to the sound of the siren), consists of 4,600 sirens. These sirens were first tested in 1931, when they were mounted on cars and bikes sent out by the government. The sirens can be found on tall buildings all around Sweden. They are driven by compressed air in giant tanks, but are successively being replaced by modern electronic sirens which use speakers. The emergency services are also able to send out spoken messages through the new sirens. The outdoor signals used are as follows: Readiness alarm: a 5-minute pattern of 30-second tones separated by 15-second gaps. Used when an imminent danger of war is present. Air raid alarm: a 1-minute pattern of 2-second tones separated by 2-second gaps. Sent when the threat of an air attack is imminent. Important public announcement, general alarm: a pattern of 7-second-long tones and 14-second-long gaps. People should go inside, close windows and doors, close ventilation, and listen to radio channel P4. Information may also be given via television and text-television. All clear: a 30-second-long single tone. Used for all of the above signals when the danger is over. The outdoor sirens are tested four times a year on the first non-holiday Monday of March, June, September, and December at 15:00 local time. The test consists of the general alarm for 2 minutes, followed by a 90-second gap before the "all clear" is sounded. There are usually around 15 to 20 general alarms, occurring locally, per year. The most common cause of general alarms is fire, especially in situations that involve industries, landfills, and other facilities containing dangerous substances which can create hazardous smoke. The 2018 peak in alarms (54 that year) is attributed to the 2018 Sweden wildfires, which alone caused over 20 general alarms. Another possible contributing factor is increased public safety awareness after the 2017 Stockholm truck attack. 
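The Swedish outdoor signals listed above are defined entirely by their on/off cadences. The following Python sketch encodes those cadences as data and expands one of them into a simple per-second on/off timeline; the data structure and helper function are illustrative assumptions, not part of any official specification.

```python
# Swedish outdoor warning signal cadences as described above, encoded as
# simple on/off patterns. The representation itself is illustrative only.

SIGNALS = {
    "readiness_alarm":        {"on": 30, "off": 15, "total_s": 5 * 60},
    "air_raid_alarm":         {"on": 2,  "off": 2,  "total_s": 60},
    "important_announcement": {"on": 7,  "off": 14, "total_s": None},  # repeated as needed
    "all_clear":              {"on": 30, "off": 0,  "total_s": 30},
}

def timeline(signal, duration_s):
    """Return a per-second schedule (True = siren sounding) for a pattern."""
    on, off = signal["on"], signal["off"]
    period = on + off if off else on
    return [(t % period) < on for t in range(duration_s)]

# First 60 seconds of the readiness alarm: 30 s tone, 15 s gap, 15 s tone.
schedule = timeline(SIGNALS["readiness_alarm"], 60)
print("".join("#" if sounding else "." for sounding in schedule))
```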
Switzerland Switzerland currently has 8,500 mobile and stationary civil defense sirens, which can alert 99% of the population. There are also 700 sirens located near dams. Every year on the first Wednesday of February, Switzerland's sirens are tested to see if they are functioning properly. During this test, general alert sirens as well as sirens near dams are checked. The population is informed of the test in the days leading up to it by radio, television, teletext, and newspapers, and the siren tests do not require the population to take any special measures. The tones of the different sirens are provided on the last page of all phone books as well as on the Internet. General alert: a 1-minute regularly ascending and descending tone, followed by a 2-minute interval of silence before repeating. The 'general alert' siren goes off when there is a possible threat to the population. The population is instructed to inform those around them to proceed inside. Once inside, people are instructed to listen to emergency broadcasts made by the broadcasting networks SRF, RTS, RSI and RSR. Flood alert: 12 continuous low-pitched tones, each lasting 20 seconds. The flood alert is activated after the general alert siren has been sounded. If heard by the population in danger zones (such as near dams), they must leave the dangerous area immediately or find shelter. Ukraine Ukraine has employed civil defense sirens to warn citizens of danger during the Russian invasion of Ukraine. The observed use of sirens in Ukraine has been the following: Danger threat: repeated long bursts. All clear: a final short burst. United Kingdom During World War II, Britain had two warning tones: Red warning: attack in progress or imminent (attack/wail) All clear: attack over (steady alert) These tones would be initiated by the Royal Observer Corps after spotting Luftwaffe aircraft coming toward Britain, with the help of coastal radar stations. The "red warning" would be sounded when the Royal Observer Corps spotted enemy aircraft in the immediate area. The sirens were tested periodically by emitting the tones in reverse order, with the "all clear" tone followed by the "red warning" tone. This ensured the public would not confuse a test with a real warning. Every village, town, and city in the United Kingdom used to have a network of dual-tone sirens to warn of incoming air raids during World War II. The operation of the sirens was coordinated by a wire broadcast system via police stations. In towns and cities with a population of over 3,000, powered sirens were used, whereas in rural areas hand-operated sirens were used (which were later put to use as warnings for nuclear attack during the Cold War). At the end of the Cold War in 1992, the siren network was decommissioned, and very few sirens remain. These sirens, mostly built by Carter, Gents, Castle Castings, and Secomak (now Klaxon Signal Co.), have 10 and 12 ports to create a minor third interval (B and D notes) and produce what is probably the world's most recognised World War II air raid siren sound. Recordings of British sirens are often dubbed into movies set in countries which never used this type of siren. Around 1,200 sirens remain, mostly used to warn the public of severe flooding. They are also used for public warning near gas or nuclear power plants, nuclear submarine bases, oil refineries and chemical plants. 
The remaining sirens are a mix of older motor-driven models (usually from World War II), such as the Carter siren manufactured by Carter's of Nelson or the "syren" manufactured by Gent's of Leicester, Cold War-era models such as Castle Castings and Secomak (now Klaxon Signal Co.), and newer electronic sirens like the Hörmann ECN, Whelen, Federal Signal Modulator, ATI HPSS and COOPER WAVES. Most of them are tested annually between August and September if they are not part of a larger siren system. They are also used to start and finish silences on Armistice Day and Remembrance Sunday. Norfolk had a siren system, consisting mainly of Castle Castings units and two Secomak GP3s, for its flood warning sector; when the MoD was decommissioning these systems across the country, Norfolk kept its sector back for flood warning. With the advent of digital services and mobile technology, many local authorities are now retiring their siren networks in favour of contacting people by telephone. In January 2007, proposals to retire a network of sirens in Norfolk were made by the Norfolk Resilience Forum. In November 2007, residents were angered after the sirens had not sounded following a tidal surge in Walcott. In 2008, a review of the current and future role of flood warning sirens was undertaken by Norfolk County Council, after plans to retire them were halted following concerns from nearby residents. Although some of the sirens were initially withdrawn, 40 out of the 57 were eventually temporarily reinstated. Despite this, in July 2010 the flood warning sirens were finally retired in favour of alerting people by telephone, SMS or e-mail. After three years of consultations, the council had failed to demonstrate that refurbishing the sirens would be a worthwhile investment. The whole siren system was gone by late 2013, with only two sirens remaining, both inactive: one has been refurbished and put on display on a beach not far from its original location, while the other, at Mundesley Fire Station, still stands but is extremely unlikely to ever sound again. Lincolnshire, which had one of the largest siren systems in the country, consisting of Carter sirens, had 46 sirens based in North Somercotes, Mablethorpe, Boston, Skegness, Spalding and Sutton Bridge, as well as inland at Louth, Horncastle, Middle Rasen and Gainsborough, the areas most at risk of being hit by floods. Following serious flooding in the summer of 2007, investigations took place into how the flood warning system could be improved. The Environment Agency admitted that the warning system in Louth had not sounded early enough. In April 2008, Lincolnshire County Council began to investigate the possibility of replacing the flood warning sirens with mobile phone alerts. A council report in November 2009 described the sirens as being "outdated, in the wrong places and difficult to repair". The sirens were eventually decommissioned in November 2011 and replaced with Floodline. In January 2010, 13 public warning sirens on the island of Guernsey that had first been installed in 1937 were due to be retired and replaced by text messages. This followed claims by the Home Department that the sirens had "reached the end of their useful working life". The sirens had previously been used to warn of major incidents. From 1950 to 2010, the Civil Defence Committee took responsibility for the sirens, and had tested them annually since 9 May 1979. 
Members of the public had criticised the decision, and Deputy Janine Le Sauvage claimed that sirens were the only way everyone knew there was an emergency. In February 2010, 40 islanders formed a protest march opposing the proposal to retire the sirens. The campaigners accused the government of not listening to them, as an online petition calling for the sirens to be saved was signed by more than 2,000 people. In April 2010, it was decided to dismantle the public warning system. Emergency planners had proposed to use a new warning system that would contact residents by telephone; however, this was abandoned due to technical limitations, and local media and other communication methods are used instead. Only two sirens remain in Guernsey: one at Victoria Tower, which sounded once around 2017, and another that remains active for a quarry. Following severe flooding in the Upper Calder Valley in June 2000, the Environment Agency replaced its network of sirens, with eight being placed around Walsden, Todmorden, Hebden Bridge and Mytholmroyd. The network was designed to complement the agency's Floodline service. These sirens became what is now known as the Todmorden Flood Warning System. The system comprises nine sirens: five Secomaks, three Klaxons and one Carter (which was recently confirmed to be inactive by the local environment agency). In November 2010, 36 flood warning sirens in Essex, including nine on Canvey, were retired following concerns from the county council that the system was "no longer fit for purpose". The sirens were due to become obsolete in 2014. Only five sirens from the entire system remain, two of them on Canvey Island. In September 2012, new flood warning sirens were installed in the Dunhills Estate in Leeds, as part of flood defence work at Wyke Beck. In January 2014, flood sirens sounded for the first time in 30 years on the Isle of Portland. Broadmoor Hospital used 13 sirens installed in 1952, which were tested weekly. These consisted of Secomak CS8s, which were similar to the Secomak GP8 except that the CS8 had coded shutters that could produce an alternating hi-lo signal and, if configured to do so, a pulse signal. For emergencies they sounded the hi-lo, and for the all clear they sounded a steady tone. In tests, they would sound the all-clear. In July 2014, plans were put forward to retire 7 of the 13 alarms, which had last been properly activated in 1993. The alarms are located in areas such as Sandhurst, Wokingham, Bracknell, Camberley and Bagshot. In June 2016, the West London Mental Health Trust, which manages the hospital, proposed decommissioning the sirens altogether and replacing them with social media alerts through websites such as Twitter. In December 2019, this entire system was decommissioned in favour of a new electronic siren located at the hospital. This siren is tested silently; on occasion (with prior notice from the hospital) it is tested audibly, but not at full volume. A similar siren system in Carstairs, Scotland, called the Carstairs Hospital Siren System, uses nine sirens: seven Secomak CS8s, one Klaxon GP8 and one Secomak GP12. The hi-lo signal is rarely used: during emergencies the sirens sound a continuous tone for 8 minutes, and for the all clear they sound a long wail, consisting of a 30-second start-up and alert followed by a 30-second wind-down, repeated three times. Tests take place on the third Thursday of every month at 1 pm using the all-clear signal. 
There are several sirens in use around Avonmouth near Bristol to warn of chemical incidents from industry in the area. These are known as the Severnside Sirens. They consist of Federal Signal Modulators and two DSAs, which were installed in 1997 for the public. The system is tested on the third day of every month at 3 pm. The test consists of rising tones and a steady pulsing tone, followed by a steady tone for all clear. In emergencies, the sirens will run for as long as their batteries allow, since they are powered independently of the grid. North America Canada In Canada, a nationwide network of Canadian Line Materials sirens was established in the 1950s to warn urban populations of a possible Soviet nuclear attack. This system was tested nationwide twice in 1961, under the codenames Exercise Tocsin and "Tocsin B". The system was maintained until the 1970s, when advancements in military technology reduced the Soviet nuclear missile strike time from 3–5 hours to less than 15 minutes. Sirens can still be found in many Canadian cities, all in various states of repair. In Toronto, for instance, the network has been abandoned to the point where no level of government will take responsibility for its ownership. A handful of sirens still remain in Toronto in older established neighbourhoods, such as Dundas Street West and Shaw Street, and York Quay at Harbourfront. Sirens have recently been built within 3 kilometers (2 miles) of the Darlington and Pickering nuclear power plants in the province of Ontario. (Both plants are within 30 kilometers (20 miles) of each other.) These sirens will sound in the event of a nuclear emergency that could result in a release of radioactivity. Sirens have also been placed (and are tested weekly) in Sarnia, Ontario, due to the large number of chemical plants in the vicinity. These consist mainly of ATI HPSS32 sirens, as well as a Federal Signal Modulator in the rail yards and three Thunderbolt 1003s located at the Suncor plant. Sirens have also been installed in and around the Grey Bruce Nuclear Generating Station, both at the plant and in surrounding communities such as Tiverton, Ontario. One notable siren is a Federal Signal Modulator at the Bruce Nuclear Visitor's Centre. The Public Siren network, as it is called, consists mostly of Whelens, Modulators, and Model 2s. One of the sirens in this network (a Model 2) is at Tiverton, which is about 10 km (6 miles) from the plant. Many warning sirens in Ontario, Manitoba, Saskatchewan and Alberta are now used as tornado warning instruments. Smithers, British Columbia, uses an old air raid siren as a noon-day whistle. New Waterford, Nova Scotia, uses a siren to signal the daily curfew. One of the warning sirens was even used as a goal horn for the Quebec Nordiques between the mid-1980s and 1991. Caledonia, Ontario, routinely uses an air raid siren to call its local volunteer firefighters to the fire hall. NOAA Weather Radios in Canada are often used for advance warnings about severe storms when people are at home, at a business or in a car. United States In the United States, several sets of warning tones have been used, varying with age, government structure, and manufacturer. The initial alerts used during World War II were the alert signal (a 3–5-minute steady, continuous siren tone) and the attack signal (a 3–5-minute warbling tone, or series of short tone bursts on devices incapable of warbling, such as whistles). 
The Victory Siren manual stated that when manual generation of the warbling tone was required, it could be achieved by holding the "Signal" switch on for 8 seconds and off for 4 seconds. In 1950, the Federal Civil Defense Administration revised the signals, naming the alert signal "red alert" and adding an "all clear" signal, characterized by three 1-minute steady blasts with 2 minutes of silence between the blasts. Beginning in 1952, the Bell and Lights Air Raid Warning System, developed by AT&T, was made available to provide automated transmission of an expanded set of alert signals: Red alert: attack imminent; Yellow alert: attack likely; White alert: all clear; Blue alert: hi-lo (high-low), a different warning such as a local warning. The "yellow alert" and "red alert" signals correspond to the earlier "alert" signal and "attack" signal, respectively, and the early Federal Signal AR timer siren control units featured the "take cover" button labeled with a red background and the "alert" button labeled with a yellow background. Later AF timers changed the color-coding, coloring the "alert" button blue, the "take cover" button yellow, and the "fire" button red (used to call out volunteer firefighters), thus confusing the color-coding of the alerts. In 1955, the Federal Civil Defense Administration again revised the warning signals, altering them to deal with concern over nuclear fallout. The new set of signals consisted of the "alert" signal (unchanged) and the "take cover" signal (previously the "attack" signal). The "all clear" signal was removed because leaving a shelter while fallout was present would prove hazardous. Sirens began to replace bells for municipal warning in the early 1900s, but became commonplace following America's entry into World War II. Most siren models of this time were single-tone models which often sounded almost an octave higher in pitch than their European counterparts. Dual-tone sirens became more common in the 1950s, but had been used in some areas since about 1915. During the Cold War, standard signals were used throughout the country for civil defense purposes, referred to as "alert" and "attack." Volunteer fire departments generally used a different siren signal. Many towns, especially in California and New England, used coded air horns or diaphones for fire calls and reserved sirens for civil defense use. Today, signals are determined by state and local authorities, and can vary from one region to another. The most common tones produced by sirens in the United States are "alert" (steady) and "attack" (wail). Other tones include Westminster Chimes (commonly used for the testing of electronic sirens), hi-lo (high-low), whoop, pulse (pulsing), air horn, and fast wail. The U.S. federal standard regarding emergency warning signals is defined in FEMA's Outdoor Warning Systems Guide, CPG 1–17, published on March 1, 1980, which describes the Civil Defense Warning System (CDWS) and its warning signals. The language was slightly revised by FEMA's National Warning System Operations Manual, Manual 1550.2, published March 30, 2001: Attack warning: a 3 to 5-minute wavering tone on sirens or a series of short blasts on horns or other devices. The "attack warning" signal means an actual attack or accidental missile launch was detected, and people should take protective action immediately. 
The signal will be repeated as often as deemed necessary by local government authorities to get the required response from the population, including taking protective action against the arrival of fallout. This signal will have no other meaning and will be used for no other purpose. (However, the "attack" signal is sometimes used for tornado warnings.) Attention or alert warning: a 3 to 5-minute steady signal from sirens, horns, or other devices. Local government officials may authorize use of this signal to alert the public of peacetime emergencies, normally tornadoes, flash floods, and tsunamis. With the exception of any other meaning or requirement for action as determined by local governments, the "attention" or "alert" signal will indicate that all persons in the United States should "turn on [their] radio or television and listen for essential emergency information". A third distinctive signal may be used for other purposes, such as a local fire signal. All clear: no all clear signal is defined by either document. The most common tone, "alert", is widely used by municipalities to warn citizens of impending severe weather, particularly tornadoes, which has led to the sirens commonly being called "tornado sirens". This practice is nearly universal in the Midwest and parts of the Deep South, where intense and fast-moving thunderstorms that can produce tornadoes occur frequently. The "alert" sound is a steady, continuous note. In seaside towns, "alert" may also be used to warn of a tsunami. Sirens that rotate will have a rising-and-falling tone as the direction of the horn changes. The "attack" tone is the rising and falling sound of an air raid or nuclear attack, frequently heard in war movies. It was once reserved for imminent enemy attack, but is today sometimes used to warn of severe weather, tsunamis, or even fire calls, depending on local ordinance. Criteria for sounding sirens during severe weather events are established by regional National Weather Service offices, and there is no "all clear" signal. There is no standard "fire" signal in the United States, and while the use of sirens by volunteer fire departments is still common, it is diminishing. In the dry areas of the American West, residents may be required to shut off outdoor water systems to ensure adequate pressure at fire hydrants upon hearing the signal. The "fire" signal can vary from one community to another. Three long blasts on a siren is one common signal, similar to the signal used by volunteer brigades in Germany and other countries, while other locales use the hi-lo (high-low) signal described above. Some communities, particularly in New England and northern California, make use of coded blasts over a diaphone or air horn for fire signals, reserving the use of sirens for more serious situations. Still others use the "attack" tone as their fire call. Some communities make use of an "all clear" signal, or sound separate signals for fire calls and ambulance runs. Some fire signals in the U.S. are blasted at least once a day, usually at noon, to test the system, and are often referred to as "noon sirens" or "noon whistles". These also function as a time tick for setting clocks. CPG 1-17 recommends that a monthly test be conducted, consisting of the steady "attention" signal for no more than one minute, one minute of silence, and the "attack" signal for no more than one minute. A "growl test" signal is also described by CPG 1–17, for when a siren must be tested more than once a month. 
This is typically a 1-second burst of sound to verify the proper operation of the siren without causing a significant number of people to interpret the test as an actual alert. Many cities in the U.S. periodically sound their sirens as a test, either weekly, monthly, or yearly, at a day and hour set by each individual city. In the United States, there is no national-level alert system. Normally, sirens are controlled on a county or local level, but some are controlled on a state level, such as in Hawaii. Sirens are usually used to warn of impending natural disaster; while they are also used to warn of threats of military attacks, these rarely occur in the United States. Throughout the Great Plains, Midwest, and South, they are typically used to warn the public to take cover when a tornado warning is issued, sometimes even for severe thunderstorm warnings, and very rarely used for anything else. They are generally required in areas within a ten-mile radius of nuclear power plants. In the South and on the East Coast (except for Texas, Maine, Florida and New Hampshire), sirens are used to inform people about approaching hurricanes. In Pierce County, Washington, there is a system of sirens set up along the Puyallup and Carbon River valleys to warn residents of volcanic eruptions and lahars (giant mudslides) from Mt. Rainier. Coastal communities, especially those in northern California, Oregon, Washington, Alaska, and Hawaii, use siren systems to warn of incoming tsunamis. In 2011, the city of Honolulu created an "Adopt-A-Siren" website for its tsunami sirens. The site is modeled after Code for America's "Adopt-a-Hydrant", which helps volunteers in Boston sign up to shovel out fire hydrants after storms. Some U.S. volunteer fire departments, particularly in rural areas, use sirens to call volunteers to assemble at the firehouse. This method is being used less frequently as technology advances and as local residents within earshot file complaints with their town boards. Some areas use their sirens only as a last resort, relying more on cellular and paging technology; however, a decreasing number of rural departments are still outside the range of wireless communications and rely on sirens to activate the local volunteer departments. Many college campuses in the U.S., especially in the wake of the Virginia Tech shooting, have begun installing sirens to warn students in the event of dangerous incidents. In many areas, sirens are supplemented by NOAA Weather Radios, which provide advance warning of severe storms to people inside cars or buildings. South America Argentina Warning sirens can be found mainly in the suburban areas of big cities like Bahia Blanca, Mar del Plata, Rosario, Cordoba and Comodoro Rivadavia, at police stations, fire stations, factories, weather stations and city halls, and in ordinary residential neighbourhoods. The most common model has not been conclusively identified; it resembles a vertical Klaxon GP6/10, a Mechtric MS22, or a vertically installed mechanical or electro-mechanical 8-, 9-, 11- or 12-port single-tone siren, usually with six rectangular horns, and is most likely a Kingvox. The sirens sound for terrorist attacks, bushfires, dam leaks, chemical plant incidents, imminent life-threatening or extremely severe weather, incoming enemy attack, and other natural disasters. 
Other models present in the country's warning system include Federal Signal, Elektror/Siemens, Whelen, Telegrafia, Klaxon and Hörmann. In heavily urbanised areas like Buenos Aires, most of the once-numerous mechanical sirens have been decommissioned and replaced with a smaller number of electronic sirens, SMS alerts to phones and, in some cases, EAS-style alerts on TV. In some suburban areas of the big cities, sirens have been decommissioned since the 2010s because of the cost of upkeep, although most remain active. Two signals are commonly used: General warning: a 13-second wind-up and alert with a 6-second wind-down, repeated three times. This signal is used if a public emergency is imminent. All clear: a 100-second alert used after a public emergency has been dealt with. The all clear is mainly used during the nationwide siren test on Argentine Volunteer Firefighters' Day, 2 June; on some sirens the general warning is also tested after the all clear. Oceania Australia A series of 98 electronic sirens, making up a large-scale public-address system (the "Sydney CBD Emergency Warning System") and including 13 variable-message signs, are installed in the Sydney central business district. Although installed in the months preceding the 2007 APEC conference, they are designed to be a permanent fixture and are tested on a monthly basis. Some large-scale sirens are also deployed, like the Grifco Model 888, Grifco Model 777, or Klaxon SO4, which are used at fire stations for call-outs and at Sydney's beaches for shark alarms. Alarms are also used around prisons for breakouts and at many factories and schools to announce start and finish times. A siren is located at the Kwinana BP plant south of Perth, which is tested every Monday. It is used to evacuate the plant in case of an emergency and can be heard in Kwinana and certain parts of Rockingham. It can also be used to warn of severe weather and potentially dangerous emergencies on the Kwinana Industrial Strip. In South Australia, a number of Country Fire Service stations have a siren on or near the station. These are only activated when the brigade is responding to bushfire or grassfire events, and for testing. They are not activated for every call, only as a public alert for the presence of bushfires. Electro-mechanical and electronic sirens have been used to alert communities, towns and cities across Australia to an imminent emergency. Emergency broadcast speakers are turned on and relay important information, preceded by the standard emergency warning signal, to the affected communities within 24 hours. Electronic siren models in use include Whelen, Telegrafia, SiRcom, Klaxon and Grifco. In Victoria, many Country Fire Authority stations have a siren installed on or near the station that is used to summon volunteers to an emergency callout, which consequently also alerts the local community to brigade activity. Due to the variety of siren models in use across the state, two signals are used, differentiated by length: Emergency callout: the siren sounds for no longer than 90 seconds. Community alert: the siren sounds for no shorter than 5 minutes. In Melbourne's CBD, there is a set of sirens installed to warn of attack and extreme flooding. These became necessary after the Bourke and Flinders St. 
attacks, in which people were killed when a vehicle was deliberately driven into pedestrians. In Queensland, Whelen Vortex-R4 sirens have been installed as part of the Somerset Regional Council Flood Warning System. At nearby Grantham, Queensland, a Whelen WPS-2906, which features both warning tones and pre-recorded messages, provides early warning in the event of flooding. Cairns Regional Council has also installed nine Whelen WPS-2900 series sirens to warn of a breach of the nearby Copperlode Falls Dam. Other Whelen WPS-2900 series sirens can be found in a few towns around Queensland as well. New Zealand Lower Hutt, Napier, Whanganui, and the former Waitakere City area of Auckland each have a network of civil defense sirens. The networks in Lower Hutt and Napier are bolstered by fire sirens that also function as civil defense sirens. Lower Hutt's network is further bolstered by selected industrial sirens that double as civil defence sirens. In the western Bay of Plenty Region, several fire sirens also serve as civil defense sirens, and there are dedicated civil defense sirens at the Bay Park Raceway in Mount Maunganui, Tokoroa, and Whangamatā (which has two). Additionally, Tokoroa, Putāruru, Tīrau, and Whangamatā have fire sirens serving double duty as civil defense sirens. In the years following the 2004 Indian Ocean earthquake and tsunami, Meerkat electronic sirens were installed in all populated areas of the west coast lower than 10 metres above sea level. Warning sounds vary from area to area, including rising and falling notes and Morse code sirens. Communities with volunteer fire brigades use a continuous note on all sirens for civil defense, and a warbling siren on the fire station siren only for fire callouts. Civil defense uses a distinctive "sting" tone that is broadcast by all radio stations nationwide, but it is currently used on civil defense sirens only in Wanganui. Africa Morocco Morocco, like many other countries, has civil defense sirens installed in several cities and towns such as Casablanca, Oujda, Asilah, M'diq, Chefchaouen, and Qalaat Sraghna. However, not all cities are equipped with this system. These sirens are typically located at fire stations and city halls. They were first introduced during the French and Spanish colonial era, and while some have been dismantled, many continue to operate to this day. These sirens are strategically positioned in areas with high population densities like Casablanca, or in regions that are susceptible to natural disasters like floods, such as the Eureka siren. Additionally, they can also be found in tourist areas such as M'Diq and Chefchaouen, as well as in areas where dangerous industries are located. Although these sirens are seldom used due to the rarity of imminent danger, they were recently utilized during the quarantine period of the COVID-19 pandemic, when sirens sounded in various cities, including Oujda, to alert the population to the nightly curfew. During the holy month of Ramadan, these sirens are employed to signal the arrival of the fotour (iftar) time. In the past, these sirens were used in small towns to call for volunteer paramedics in emergencies such as fires or drowning incidents. In cities without sirens, the state uses alternative means to alert the public, such as broadcasting warning messages on television or radio, or sending SMS messages to citizens' phone numbers. 
In dangerous situations, police patrol car horns are also used to warn the public. The state does not allocate any specific days to test the sirens, and the reasons behind this remain unknown. See also List of civil defense sirens Emergency Alert System Fire alarm system Long Range Acoustic Device Rotary woofer External links Air Raid Sirens – outdoor warning siren website Civil Defense Museum – Overview of sirens since their inception Los Angeles air raid sirens – Pictures of unused nuclear-era civil defense sirens still extant in Los Angeles, California The Siren Archive – Over 1,000 siren photographs coupled with a few recordings from around the world The world's loudest and largest sirens ever "Tocsin B", Canada's dry run of its Nuclear Warning System from 1961 (CBC Radio News Special, 13 Nov 1961) Toronto Star – The mystery of the air raid sirens Michigan Civil Defense Museum Old German World War II Air Raid Siren YouTube British World War II Air Raid Siren with all-clear YouTube Federal Thunderbolt 1000T air raid siren in alert signal YouTube
https://en.wikipedia.org/wiki/Electroplating
Electroplating, also known as electrochemical deposition or electrodeposition, is a process for producing a metal coating on a solid substrate through the reduction of cations of that metal by means of a direct electric current. The part to be coated acts as the cathode (negative electrode) of an electrolytic cell; the electrolyte is a solution of a salt whose cation is the metal to be coated, and the anode (positive electrode) is usually either a block of that metal, or of some inert conductive material. The current is provided by an external power supply. Electroplating is widely used in industry and decorative arts to improve the surface qualities of objects—such as resistance to abrasion and corrosion, lubricity, reflectivity, electrical conductivity, or appearance. It is used to build up thickness on undersized or worn-out parts and to manufacture metal plates with complex shape, a process called electroforming. It is used to deposit copper and other conductors in forming printed circuit boards and copper interconnects in integrated circuits. It is also used to purify metals such as copper. The aforementioned electroplating of metals uses an electroreduction process (that is, a negative or cathodic current is on the working electrode). The term "electroplating" is also used occasionally for processes that occur under electro-oxidation (i.e positive or anodic current on the working electrode), although such processes are more commonly referred to as anodizing rather than electroplating. One such example is the formation of silver chloride on silver wire in chloride solutions to make silver/silver-chloride (AgCl) electrodes. Electropolishing, a process that uses an electric current to selectively remove the outermost layer from the surface of a metal object, is the reverse of the process of electroplating. Throwing power is an important parameter that provides a measure of the uniformity of electroplating current, and consequently the uniformity of the electroplated metal thickness, on regions of the part that are near to the anode compared to regions that are far from it. It depends mostly on the composition and temperature of the electroplating solution, as well as on the operating current density. A higher throwing power of the plating bath results in a more uniform coating. Process The electrolyte in the electrolytic plating cell should contain positive ions (cations) of the metal to be deposited. These cations are reduced at the cathode to the metal in the zero valence state. For example, the electrolyte for copper electroplating can be a solution of copper(II) sulfate, which dissociates into Cu2+ cations and anions. At the cathode, the Cu2+ is reduced to metallic copper by gaining two electrons. When the anode is made of the metal that is intended for coating onto the cathode, the opposite reaction may occur at the anode, turning it into dissolved cations. For example, copper would be oxidized at the anode to Cu2+ by losing two electrons. In this case, the rate at which the anode is dissolved will equal the rate at which the cathode is plated, and thus the ions in the electrolyte bath are continuously replenished by the anode. The net result is the effective transfer of metal from the anode to the cathode. The anode may instead be made of a material that resists electrochemical oxidation, such as lead or carbon. Oxygen, hydrogen peroxide, and some other byproducts are then produced at the anode instead. 
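As a rough illustration of the charge-to-metal relationship in the copper example above, the following Python sketch applies Faraday's law of electrolysis. It assumes 100% current efficiency (no hydrogen evolution or other side reactions) and uses arbitrary example figures, so it does not describe any particular plating bath.

```python
# Rough illustration of Faraday's law for the copper example above.
# Assumes 100% current efficiency; the current and time are arbitrary.

F = 96485.0  # Faraday constant, C per mole of electrons

def deposited_mass_g(current_a, time_s, molar_mass_g_mol, electrons_per_ion):
    """Mass of metal (g) plated by a steady current, per Faraday's law."""
    moles_of_electrons = (current_a * time_s) / F
    moles_of_metal = moles_of_electrons / electrons_per_ion
    return moles_of_metal * molar_mass_g_mol

# Copper: Cu2+ + 2e- -> Cu, so two electrons per ion; 2 A for 30 minutes.
mass = deposited_mass_g(current_a=2.0, time_s=30 * 60,
                        molar_mass_g_mol=63.55, electrons_per_ion=2)
print(f"Copper deposited: {mass:.2f} g")  # roughly 1.2 g
```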
When such an inert anode is used, ions of the metal to be plated must be replenished (continuously or periodically) in the bath as they are drawn out of the solution. The plating is most commonly a single metallic element, not an alloy. However, some alloys can be electrodeposited, notably brass and solder. Plated "alloys" are not "true alloys" (solid solutions), but rather they are tiny crystals of the elemental metals being plated. In the case of plated solder, it is sometimes deemed necessary to have a true alloy, and the plated solder is melted to allow the tin and lead to combine into a true alloy. The true alloy is more corrosion-resistant than the as-plated mixture. Many plating baths include cyanides of other metals (such as potassium cyanide) in addition to cyanides of the metal to be deposited. These free cyanides facilitate anode corrosion, help to maintain a constant metal ion level, and contribute to conductivity. Additionally, non-metal chemicals such as carbonates and phosphates may be added to increase conductivity. When plating is not desired on certain areas of the substrate, stop-offs are applied to prevent the bath from coming in contact with the substrate. Typical stop-offs include tape, foil, lacquers, and waxes. Strike Initially, a special plating deposit called a strike or flash may be used to form a very thin (typically less than 0.1 μm thick) plating with high quality and good adherence to the substrate. This serves as a foundation for subsequent plating processes. A strike uses a high current density and a bath with a low ion concentration. The process is slow, so more efficient plating processes are used once the desired strike thickness is obtained. The striking method is also used in combination with the plating of different metals. If it is desirable to plate one type of deposit onto a metal to improve corrosion resistance but this metal has inherently poor adhesion to the substrate, then a strike can first be deposited that is compatible with both. One example of this situation is the poor adhesion of electrolytic nickel on zinc alloys, in which case a copper strike is used, which has good adherence to both. Pulse electroplating The pulse electroplating or pulse electrodeposition (PED) process involves the swift alternating of the electrical potential or current between two different values, resulting in a series of pulses of equal amplitude, duration, and polarity, separated by zero current. By changing the pulse amplitude and width, it is possible to change the deposited film's composition and thickness. The experimental parameters of pulse electroplating usually consist of peak current/potential, duty cycle, frequency, and effective current/potential. Peak current/potential is the maximum setting of the electroplating current or potential. Duty cycle is the fraction of each electroplating period during which the current or potential is applied. The effective current/potential is calculated by multiplying the duty cycle by the peak value of the current or potential. Pulse electroplating can help to improve the quality of the electroplated film and relieve the internal stress built up during fast deposition. A combination of a short duty cycle and a high frequency can decrease surface cracks. However, in order to maintain a constant effective current or potential, a high-performance power supply may be required to provide a high current/potential and fast switching. 
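The pulse parameters described above (peak value, duty cycle, frequency, and effective value) are related by simple arithmetic: the effective current or potential is the peak value multiplied by the duty cycle. A minimal Python sketch with arbitrary example values:

```python
# Relationship between the pulse-plating parameters described above.
# The example values (5 A peak, 2 ms on, 8 ms off) are arbitrary.

def pulse_parameters(peak_current_a, on_time_s, off_time_s):
    """Return (frequency in Hz, duty cycle, effective current in A)."""
    period_s = on_time_s + off_time_s
    duty_cycle = on_time_s / period_s
    effective_current_a = peak_current_a * duty_cycle
    return 1.0 / period_s, duty_cycle, effective_current_a

freq, duty, i_eff = pulse_parameters(peak_current_a=5.0,
                                     on_time_s=2e-3, off_time_s=8e-3)
print(f"{freq:.0f} Hz, duty cycle {duty:.0%}, effective current {i_eff:.1f} A")
# -> 100 Hz, duty cycle 20%, effective current 1.0 A
```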
Another common problem of pulse electroplating is that the anode material could get plated and contaminated during the reverse electroplating, especially for a high-cost, inert electrode such as platinum. Other factors that affect pulse electroplating include temperature, anode-to-cathode gap, and stirring. Sometimes, pulse electroplating can be performed in a heated electroplating bath to increase the deposition rate, since the rate of most chemical reactions increases exponentially with temperature per the Arrhenius law. The anode-to-cathode gap is related to the current distribution between anode and cathode. A small gap-to-sample-area ratio may cause uneven distribution of current and affect the surface topology of the plated sample. Stirring may increase the transfer/diffusion rate of metal ions from the bulk solution to the electrode surface. The ideal stirring setting varies for different metal electroplating processes. Brush electroplating A closely related process is brush electroplating, in which localized areas or entire items are plated using a brush saturated with plating solution. The brush, typically a graphite body wrapped with an absorbent cloth material that both holds the plating solution and prevents direct contact with the item being plated, is connected as the anode to a low-voltage, 3–4 ampere direct-current power source, and the item to be plated (the cathode) is grounded. The operator dips the brush in plating solution and then applies it to the item, moving the brush continually to get an even distribution of the plating material. Brush electroplating has several advantages over tank plating, including portability, the ability to plate items that for some reason cannot be tank plated (one application was the plating of portions of very large decorative support columns in a building restoration), low or no masking requirements, and comparatively low plating solution volume requirements. It is mainly used industrially for part repair, for example giving worn bearing surfaces a nickel or silver deposit. With technological advancement, deposits up to 0.025 in thick have been achieved while retaining uniformity. Disadvantages compared to tank plating can include greater operator involvement (tank plating can frequently be done with minimal attention, although the solutions used are often toxic) and less consistency in achieving as great a plate thickness. Barrel plating This technique of electroplating is one of the most commonly used in the industry for large numbers of small objects. The objects are placed in a barrel-shaped non-conductive cage and then immersed in a chemical bath containing dissolved ions of the metal that is to be plated onto them. The barrel is then rotated, and electrical currents are run through the various pieces in the barrel, which complete circuits as they touch one another. The result is a very uniform and efficient plating process, though the finish on the end products will likely suffer from abrasion during the plating process. It is unsuitable for highly ornamental or precisely engineered items. Cleanliness Cleanliness is essential to successful electroplating, since molecular layers of oil can prevent adhesion of the coating. ASTM B322 is a standard guide for cleaning metals prior to electroplating. Cleaning includes solvent cleaning, hot alkaline detergent cleaning, electrocleaning, ultrasonic cleaning and acid treatment. The most common industrial test for cleanliness is the waterbreak test, in which the surface is thoroughly rinsed and held vertical. 
Hydrophobic contaminants such as oils cause the water to bead and break up, allowing the water to drain rapidly. Perfectly clean metal surfaces are hydrophilic and will retain an unbroken sheet of water that does not bead up or drain off. ASTM F22 describes a version of this test. This test does not detect hydrophilic contaminants, but electroplating can displace these easily, since the solutions are water-based. Surfactants such as soap reduce the sensitivity of the test and must be thoroughly rinsed off. Test cells and characterization Throwing power Throwing power (or macro throwing power) is an important parameter that provides a measure of the uniformity of electroplating current, and consequently the uniformity of the electroplated metal thickness, on regions of the part that are near the anode compared to regions that are far from it. It depends mostly on the composition and temperature of the electroplating solution. Micro throwing power refers to the extent to which a process can fill or coat small recesses such as through-holes. Throwing power can be characterized by the dimensionless Wagner number, Wa = RTκ / (αFL|i|), where R is the universal gas constant, T is the operating temperature, κ is the ionic conductivity of the plating solution, F is the Faraday constant, L is the equivalent size of the plated object, α is the transfer coefficient, and i the surface-averaged total (including hydrogen evolution) current density. The Wagner number quantifies the ratio of kinetic to ohmic resistances. A higher Wagner number produces a more uniform deposition. This can be achieved in practice by decreasing the size (L) of the plated object, reducing the current density |i|, adding chemicals that lower α (make the electric current less sensitive to voltage), and raising the solution conductivity (e.g. by adding acid). Concurrent hydrogen evolution usually improves the uniformity of electroplating by increasing |i|; however, this effect can be offset by blockage due to hydrogen bubbles and hydroxide deposits. The Wagner number is rather difficult to measure accurately; therefore, other related parameters that are easier to obtain experimentally with standard cells are usually used instead. These parameters are derived from two ratios: the ratio of the plating thickness at a specified region of the cathode "close" to the anode to the thickness at a region "far" from the anode, and the ratio of the distances of these regions through the electrolyte to the anode. In a Haring–Blum cell, for example, the distance ratio is fixed at 5 by its two independent cathodes, and a cell yielding a plating thickness ratio of M = 6 has a negative Haring–Blum throwing power, since the near cathode is plated even more thickly than the purely ohmic (primary) current distribution would predict. Other conventions, which combine the same two ratios in slightly different ways, include the Heatley, Field, and Luke throwing powers. A more uniform thickness is obtained by making the throwing power larger (less negative) according to any of these definitions; Luke's throwing power has the advantage of having a minimum of 0 and a maximum of 100. Parameters that describe cell performance such as throwing power are measured in small test cells of various designs that aim to reproduce conditions similar to those found in the production plating bath. Haring–Blum cell The Haring–Blum cell is used to determine the macro throwing power of a plating bath. The cell consists of two parallel cathodes with a fixed anode in the middle. The cathodes are at distances from the anode in the ratio of 1:5. 
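The sketch below turns these figures of merit into numbers. It evaluates the Wagner number from the expression given above and converts a measured near-to-far thickness ratio from a Haring–Blum-style cell into a throwing-power percentage. Field's formula is used for the percentage purely as one example of the conventions mentioned and should be treated as an assumption here, as should every numerical input (conductivity, transfer coefficient, current density, and thickness ratio).

```python
# Two simple uniformity figures of merit for an electroplating bath.
# The formula conventions and all inputs below are illustrative assumptions.

R = 8.314      # J/(mol K), universal gas constant
F = 96485.0    # C/mol, Faraday constant

def wagner_number(kappa_s_per_m, temp_k, alpha, length_m, current_density_a_m2):
    """Wa = R*T*kappa / (alpha*F*L*|i|): ratio of kinetic to ohmic resistance."""
    return (R * temp_k * kappa_s_per_m) / (
        alpha * F * length_m * abs(current_density_a_m2)
    )

def field_throwing_power(distance_ratio, thickness_ratio):
    """Field's convention: TP% = 100 * (K - M) / (K + M - 2).

    K is the far-to-near distance ratio (5 in a standard Haring-Blum cell) and
    M is the near-to-far plating thickness ratio measured on the two cathodes.
    M = 1 gives 100 % (perfectly uniform); M = K gives 0 %; M > K is negative.
    """
    K, M = distance_ratio, thickness_ratio
    return 100.0 * (K - M) / (K + M - 2.0)

if __name__ == "__main__":
    # Hypothetical acid-copper-like bath: 50 S/m, 298 K, alpha = 0.5,
    # 0.1 m characteristic part size, 300 A/m^2 current density.
    print(f"Wa ~ {wagner_number(50.0, 298.0, 0.5, 0.1, 300.0):.2f}")
    # Haring-Blum geometry (distance ratio 5) with a measured thickness ratio of 3.
    print(f"TP ~ {field_throwing_power(5.0, 3.0):.0f} %")
```

Raising the bath conductivity or lowering the operating current density increases the Wagner number, which is the quantitative form of the practical advice given above.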
The macro throwing power is calculated from the thickness of plating at the two cathodes when a direct current is passed for a specific period of time. The cell is fabricated out of perspex or glass. Hull cell The Hull cell is a type of test cell used to semi-quantitatively check the condition of an electroplating bath. It measures the usable current density range, optimization of additive concentration, recognition of impurity effects, and indication of macro throwing power capability. The Hull cell replicates the plating bath on a lab scale. It is filled with a sample of the plating solution and an appropriate anode which is connected to a rectifier. The "work" is replaced with a Hull cell test panel that will be plated to show the "health" of the bath. The Hull cell is a trapezoidal container that holds 267 milliliters of a plating bath solution. This shape allows one to place the test panel at an angle to the anode. As a result, the deposit is plated at a range of current densities along its length, which can be measured with a Hull cell ruler. The solution volume allows for a semi-quantitative measurement of additive concentration: a 1 gram addition to 267 mL is equivalent to 0.5 oz/gal in the plating tank. Effects Electroplating changes the chemical, physical, and mechanical properties of the workpiece. An example of a chemical change is when nickel plating improves corrosion resistance. An example of a physical change is a change in the outward appearance. An example of a mechanical change is a change in tensile strength or surface hardness, which is a required attribute in the tooling industry. Electroplating of acid gold on underlying copper- or nickel-plated circuits reduces contact resistance as well as surface hardness. Copper-plated areas of mild steel act as a mask if case-hardening of such areas is not desired. Tin-plated steel is chromium-plated to prevent dulling of the surface due to oxidation of tin. Specific metals Alternatives to electroplating There are a number of alternative processes to produce metallic coatings on solid substrates that do not involve electrolytic reduction: Electroless deposition uses a bath containing metal ions and chemicals that will reduce them to the metal by redox reactions. The reaction should be autocatalytic, so that new metal will be deposited over the growing coating, rather than precipitated as a powder through the whole bath at once. Electroless processes are widely used to deposit nickel-phosphorus or nickel-boron alloys for wear and corrosion resistance, silver for mirror-making, copper for printed circuit boards, and many more. A major advantage of these processes over electroplating is that they can produce coatings of uniform thickness over surfaces of arbitrary shape, even inside holes, and the substrate need not be electrically conducting. Another major benefit is that they do not need power sources or specially-shaped anodes. Disadvantages include lower deposition speed, consumption of relatively expensive chemicals, and a limited choice of coating metals. Immersion coating processes exploit displacement reactions in which the substrate metal is oxidized to soluble ions while ions of the coating metal get reduced and deposited in its place. This process is limited to very thin coatings, since the reaction stops after the substrate has been completely covered. 
Nevertheless, it has some important applications, such as the electroless nickel immersion gold (ENIG) process used to obtain gold-plated electrical contacts on printed circuit boards. Sputtering uses an electron beam or a plasma to eject microscopic particles of the metal onto the substrate in a vacuum. Physical vapor deposition transfers the metal onto the substrate by evaporating it. Chemical vapor deposition uses a gas containing a volatile compound of the metal, which gets deposited onto the substrate as a result of a chemical reaction. Gilding is a traditional way to attach a gold layer onto metals by applying a very thin sheet of gold held in place by an adhesive. History Electroplating was invented by Italian chemist Luigi Valentino Brugnatelli in 1805. Brugnatelli used his colleague Alessandro Volta's invention of five years earlier, the voltaic pile, to facilitate the first electrodeposition. Brugnatelli's inventions were suppressed by the French Academy of Sciences and did not become used in general industry for the following thirty years. By 1839, scientists in Britain and Russia had independently devised metal-deposition processes similar to Brugnatelli's for the copper electroplating of printing press plates. Research from the 1930s had theorized that electroplating might have been performed in the Parthian Empire using a device resembling a Baghdad Battery, but this has since been refuted; the items were fire-gilded using mercury. Boris Jacobi in Russia not only rediscovered galvanoplastics, but developed electrotyping and galvanoplastic sculpture. Galvanoplastics quickly came into fashion in Russia, with such people as inventor Peter Bagration, scientist Heinrich Lenz, and science-fiction author Vladimir Odoyevsky all contributing to further development of the technology. Among the most notable cases of electroplating usage in mid-19th century Russia were the gigantic galvanoplastic sculptures of St. Isaac's Cathedral in Saint Petersburg and the gold-electroplated dome of the Cathedral of Christ the Saviour in Moscow, the third tallest Orthodox church in the world. Soon after, John Wright of Birmingham, England, discovered that potassium cyanide was a suitable electrolyte for gold and silver electroplating. Wright's associates, George Elkington and Henry Elkington, were awarded the first patents for electroplating in 1840. These two then founded the electroplating industry in Birmingham, from where it spread around the world. The Woolrich Electrical Generator of 1844, now in Thinktank, Birmingham Science Museum, is the earliest electrical generator used in industry. It was used by the Elkingtons. The Norddeutsche Affinerie in Hamburg was the first modern electroplating plant, starting its production in 1876. As the science of electrochemistry grew, its relationship to electroplating became understood and other types of non-decorative metal electroplating were developed. Commercial electroplating of nickel, brass, tin, and zinc was developed by the 1850s. Electroplating baths and equipment based on the patents of the Elkingtons were scaled up to accommodate the plating of numerous large-scale objects and for specific manufacturing and engineering applications. The plating industry received a big boost with the development of electric generators in the late 19th century. 
With the higher currents available, metal machine components, hardware, and automotive parts requiring corrosion protection and enhanced wear properties, along with better appearance, could be processed in bulk. The two World Wars and the growing aviation industry gave impetus to further developments and refinements, including such processes as hard chromium plating, bronze alloy plating, sulfamate nickel plating, and numerous other plating processes. Plating equipment evolved from manually-operated tar-lined wooden tanks to automated equipment capable of processing thousands of kilograms per hour of parts. One of the American physicist Richard Feynman's first projects was to develop technology for electroplating metal onto plastic. Feynman developed the original idea of his friend into a successful invention, allowing his employer (and friend) to keep commercial promises he had made but could not have fulfilled otherwise. See also Anodizing Electrochemical engineering Electrogalvanization Electropolishing Nanolamination Electroforming Electrowinning References Bibliography Metal plating Italian inventions
Electroplating
[ "Chemistry" ]
4,435
[ "Metallurgical processes", "Coatings", "Metal plating" ]
51,036
https://en.wikipedia.org/wiki/Drug%20prohibition
The prohibition of drugs through sumptuary legislation or religious law is a common means of attempting to prevent the recreational use of certain intoxicating substances. An area has a prohibition of drugs when its government uses the force of law to punish the use or possession of drugs which have been classified as controlled. A government may simultaneously have systems in place to regulate both controlled and non controlled drugs. Regulation controls the manufacture, distribution, marketing, sale, and use of certain drugs, for instance through a prescription system. For example, in some states, the possession or sale of amphetamines is a crime unless a patient has a physician's prescription for the drug; having a prescription authorizes a pharmacy to sell and a patient to use a drug that would otherwise be prohibited. Although prohibition mostly concerns psychoactive drugs (which affect mental processes such as perception, cognition, and mood), prohibition can also apply to non-psychoactive drugs, such as anabolic steroids. Many governments do not criminalize the possession of a limited quantity of certain drugs for personal use, while still prohibiting their sale or manufacture, or possession in large quantities. Some laws (or judicial practice) set a specific volume of a particular drug, above which is considered ipso jure to be evidence of trafficking or sale of the drug. Some Islamic countries prohibit the use of alcohol (see list of countries with alcohol prohibition). Many governments levy a tax on alcohol and tobacco products, and restrict alcohol and tobacco from being sold or gifted to a minor. Other common restrictions include bans on outdoor drinking and indoor smoking. In the early 20th century, many countries had alcohol prohibition. These include the United States (1920–1933), Finland (1919–1932), Norway (1916–1927), Canada (1901–1948), Iceland (1915–1922) and the Russian Empire/USSR (1914–1925). In fact, the first international treaty to control a psychoactive substance adopted in 1890 actually concerned alcoholic beverages (Brussels Conference). The first treaty on opium only arrived two decades later, in 1912. Definitions Drugs, in the context of prohibition, are any of a number of psychoactive substances whose use a government or religious body seeks to control. What constitutes a drug varies by century and belief system. What is a psychoactive substance is relatively well known to modern science. Examples include a range from caffeine found in coffee, tea, and chocolate, nicotine in tobacco products; botanical extracts morphine and heroin, and synthetic compounds MDMA and fentanyl. Almost without exception, these substances also have a medical use, in which case they are called pharmaceutical drugs or just pharmaceuticals. The use of medicine to save or extend life or to alleviate suffering is uncontroversial in most cultures. Prohibition applies to certain conditions of possession or use. Recreational use refers to the use of substances primarily for their psychoactive effect outside of a clinical situation or doctor's care. In the twenty-first century, caffeine has pharmaceutical uses. Caffeine is used to treat bronchopulmonary dysplasia. In most cultures, caffeine in the form of coffee or tea is unregulated. Over 2.25 billion cups of coffee are consumed in the world every day. Some religions, including the Church of Jesus Christ of Latter-day Saints, prohibit coffee. They believe that it is both physically and spiritually unhealthy to consume coffee. 
A government's interest in controlling a drug may be based on its negative effects on its users, or it may simply have a revenue interest. The British parliament prohibited the possession of untaxed tea with the imposition of the Tea Act of 1773. In this case, as in many others, it is not a substance that is prohibited, but the conditions under which it is possessed or consumed. Those conditions include matters of intent, which makes the enforcement of laws difficult. In Colorado, possession of "blenders, bowls, containers, spoons, and mixing devices" is illegal if there was intent to use them with drugs. Many drugs, beyond their pharmaceutical and recreational uses, have industrial uses. Nitrous oxide, or laughing gas, is a dental anesthetic that is also used to prepare whipped cream, fuel rocket engines, and enhance the performance of race cars. Ethanol, or drinking alcohol, is also used as a fuel, industrial solvent and disinfectant. History The cultivation, use, and trade of psychoactive and other drugs have occurred since ancient times. Concurrently, authorities have often restricted drug possession and trade for a variety of political and religious reasons. In the 20th century, the United States led a major renewed surge in drug prohibition called the "War on Drugs". Early drug laws The prohibition on alcohol under Islamic Sharia law, which is usually attributed to passages in the Qur'an, dates back to the early seventh century. Although Islamic law is often interpreted as prohibiting all intoxicants (not only alcohol), the ancient practice of hashish smoking has continued throughout the history of Islam, against varying degrees of resistance. A major campaign against hashish-eating Sufis was conducted in Egypt in the 11th and 12th centuries, resulting, among other things, in the burning of fields of cannabis. Though the prohibition of illegal drugs was established under Sharia law, particularly against the use of hashish as a recreational drug, classical jurists of medieval Islamic jurisprudence accepted the use of hashish for medicinal and therapeutic purposes, and agreed that its "medical use, even if it leads to mental derangement, should remain exempt [from punishment]". In the 14th century, the Islamic scholar Az-Zarkashi spoke of "the permissibility of its use for medical purposes if it is established that it is beneficial". In the Ottoman Empire, Murad IV attempted to prohibit coffee drinking by Muslims as haraam, arguing that it was an intoxicant, but this ruling was overturned soon after he died in 1640. The introduction of coffee in Europe from Muslim Turkey prompted calls for it to be banned as the devil's work, although Pope Clement VIII sanctioned its use in 1600, declaring that it was "so delicious that it would be a pity to let the infidels have exclusive use of it". Bach's Coffee Cantata, from the 1730s, presents a vigorous debate between a girl and her father over her desire to consume coffee. The early association between coffeehouses and seditious political activities in England led to the banning of such establishments in the mid-17th century. A number of Asian rulers had similarly enacted early prohibitions, many of which were later forcefully overturned by Western colonial powers during the 18th and 19th centuries. In 1360, for example, King Ramathibodi I of the Ayutthaya Kingdom (now Thailand) prohibited opium consumption and trade. The prohibition lasted nearly 500 years until 1851, when King Rama IV allowed Chinese migrants to consume opium. 
The Konbaung Dynasty prohibited all intoxicants and stimulants during the reign of King Bodawpaya (1781–1819). After Burma became a British colony, the restrictions on opium were abolished and the colonial government established monopolies selling Indian-produced opium. In late Qing China, opium imported by foreign traders, such as those employed by Jardine Matheson and the East India Company, was consumed by all social classes in Southern China. Between 1821 and 1837, imports of the drug increased fivefold. The wealth drain and widespread social problems that resulted from this consumption prompted the Chinese government to attempt to end the trade. This effort was initially successful, with Lin Zexu ordering the destruction of opium at Humen in June 1839. However, the opium traders lobbied the British government to declare war on China, resulting in the First Opium War. The Qing government was defeated and the war ended with the Treaty of Nanking, which legalized opium trading under Chinese law. First modern drug regulations The first modern law in Europe for the regulation of drugs was the Pharmacy Act 1868 in the United Kingdom. There had been previous moves to establish the medical and pharmaceutical professions as separate, self-regulating bodies, but the General Medical Council, established in 1863, unsuccessfully attempted to assert control over drug distribution. The Act set controls on the distribution of poisons and drugs. Poisons could only be sold if the purchaser was known to the seller or to an intermediary known to both, and drugs, including opium and all preparations of opium or of poppies, had to be sold in containers with the seller's name and address. Despite the reservation of opium to professional control, general sales did continue to a limited extent, with mixtures with less than 1 percent opium being unregulated. After the legislation passed, the death rate caused by opium immediately fell from 6.4 per million population in 1868 to 4.5 in 1869. Deaths among children under five dropped from 20.5 per million population between 1863 and 1867 to 12.7 per million in 1871 and further declined to between 6 and 7 per million in the 1880s. In the United States, the first drug law was passed in San Francisco in 1875, banning the smoking of opium in opium dens. The reason cited was "many women and young girls, as well as young men of a respectable family, were being induced to visit the Chinese opium-smoking dens, where they were ruined morally and otherwise." This was followed by other laws throughout the country, and by federal laws that barred Chinese people from trafficking in opium. Though the laws affected the use and distribution of opium by Chinese immigrants, no action was taken against the producers of such products as laudanum, a tincture of opium and alcohol, commonly taken as a panacea by white Americans. The distinction between its use by white Americans and Chinese immigrants was thus a form of racial discrimination, as it was based on the form in which it was ingested: Chinese immigrants tended to smoke it, while it was often included in various kinds of generally liquid medicines that were often (but not exclusively) used by Americans of European descent. The laws targeted opium smoking, but not other methods of ingestion. Britain passed the All-India Opium Act of 1878, which limited recreational opium sales to registered Indian opium-eaters and Chinese opium-smokers and prohibited its sale to emigrant workers from British Burma. 
Following the passage of a regional law in 1895, Australia's 1897 Aboriginals Protection and Restriction of the Sale of Opium Act addressed opium addiction among Aborigines, though it soon became a general vehicle for depriving them of basic rights by administrative regulation. Opium sale was prohibited to the general population in 1905, and smoking and possession were prohibited in 1908. Despite these laws, the late 19th century saw an increase in opiate consumption. This was due to the prescribing and dispensing of legal opiates by physicians and pharmacists to relieve menstruation pain. It is estimated that between 150,000 and 200,000 opiate addicts lived in the United States at the time, and a majority of these addicts were women. Changing attitudes and the drug prohibition campaign Foreign traders, including those employed by Jardine Matheson and the East India Company, smuggled opium into China in order to balance high trade deficits. Chinese attempts to outlaw the trade led to the First Opium War and the subsequent legalization of the trade at the Treaty of Nanking. Attitudes towards the opium trade were initially ambivalent, but in 1874 the Society for the Suppression of the Opium Trade was formed in England by Quakers led by the Rev. Frederick Storrs-Turner. By the 1890s, increasingly strident campaigns were waged by Protestant missionaries in China for its abolition. The first such society was established at the 1890 Shanghai Missionary Conference, where British and American representatives, including John Glasgow Kerr, Arthur E. Moule, Arthur Gostick Shorrock and Griffith John, agreed to establish the Permanent Committee for the Promotion of Anti-Opium Societies. Due to increasing pressure in the British parliament, the Liberal government under William Ewart Gladstone approved the appointment of a Royal Commission on Opium to India in 1893. The commission was tasked with ascertaining the impact of Indian opium exports to the Far East, and to advise whether the trade should be banned and opium consumption itself banned in India. After an extended inquiry, the Royal Commission rejected the claims made by the anti-opium campaigners regarding the supposed societal harm caused by the trade and the issue was finalized for another 15 years. The missionary organizations were outraged over the Royal Commission on Opium's conclusions and set up the Anti-Opium League in China; the league gathered data from every Western-trained medical doctor in China and published Opinions of Over 100 Physicians on the Use of Opium in China. This was the first anti-drug campaign to be based on scientific principles, and it had a tremendous impact on the state of educated opinion in the West. In England, the home director of the China Inland Mission, Benjamin Broomhall, was an active opponent of the opium trade, writing two books to promote the banning of opium smoking: The Truth about Opium Smoking and The Chinese Opium Smoker. In 1888, Broomhall formed and became secretary of the Christian Union for the Severance of the British Empire with the Opium Traffic and editor of its periodical, National Righteousness. He lobbied the British parliament to ban the opium trade. Broomhall and James Laidlaw Maxwell appealed to the London Missionary Conference of 1888 and the Edinburgh Missionary Conference of 1910 to condemn the continuation of the trade. 
As Broomhall lay dying, an article from The Times was read to him with the welcome news that an international agreement had been signed ensuring the end of the opium trade within two years. In 1906, a motion to 'declare the opium trade "morally indefensible" and remove Government support for it', initially unsuccessfully proposed by Arthur Pease in 1891, was put before the House of Commons. This time the motion passed. The Qing government banned opium soon afterward. These changing attitudes led to the founding of the International Opium Commission in 1909. An International Opium Convention was signed by 13 nations at The Hague on January 23, 1912, during the First International Opium Conference. This was the first international drug control treaty, and it was registered in the League of Nations Treaty Series on January 23, 1922. The Convention provided that "The contracting Powers shall use their best endeavors to control or to cause to be controlled, all persons manufacturing, importing, selling, distributing, and exporting morphine, cocaine, and their respective salts, as well as the buildings in which these persons carry on such an industry or trade." The treaty became international law in 1919 when it was incorporated into the Treaty of Versailles. The role of the commission was passed to the League of Nations, and all signatory nations agreed to prohibit the import, sale, distribution, export, and use of all narcotic drugs, except for medical and scientific purposes. Prohibition In the UK, the Defence of the Realm Act 1914, passed at the onset of the First World War, gave the government wide-ranging powers to requisition property and to criminalize specific activities. A moral panic was whipped up by the press in 1916 over the alleged sale of drugs to the troops of the British Indian Army. With the temporary powers of DORA, the Army Council quickly banned the sale of all psychoactive drugs to troops, unless required for medical reasons. However, shifts in the public attitude towards drugs – they were beginning to be associated with prostitution, vice and immorality – led the government to pass further unprecedented laws, banning and criminalising the possession and dispensation of all narcotics, including opium and cocaine. After the war, this legislation was maintained and strengthened with the passing of the Dangerous Drugs Act 1920 (10 & 11 Geo. 5. c. 46). Home Office control was extended to include raw opium, morphine, cocaine, ecgonine and heroin. Hardening of Canadian attitudes toward Chinese-Canadian opium users and fear of a spread of the drug into the white population led to the effective criminalization of opium for nonmedical use in Canada between 1908 and the mid-1920s. The Mao Zedong government nearly eradicated both consumption and production of opium during the 1950s using social control and isolation. Ten million addicts were forced into compulsory treatment, dealers were executed, and opium-producing regions were planted with new crops. Remaining opium production shifted south of the Chinese border into the Golden Triangle region. The remnant opium trade primarily served Southeast Asia, but spread to American soldiers during the Vietnam War, with 20 percent of soldiers regarding themselves as addicted during the peak of the epidemic in 1971. In 2003, China was estimated to have four million regular drug users and one million registered drug addicts. In the US, the Harrison Act was passed in 1914 and required sellers of opiates and cocaine to get a license. 
While originally intended to regulate the trade, it soon became a prohibitive law, eventually becoming legal precedent that any prescription for a narcotic given by a physician or pharmacist – even in the course of medical treatment for addiction – constituted conspiracy to violate the Harrison Act. In 1919, the Supreme Court ruled in Doremus that the Harrison Act was constitutional and in Webb that physicians could not prescribe narcotics solely for maintenance. In Jin Fuey Moy v. United States, the court upheld that it was a violation of the Harrison Act even if a physician provided prescription of a narcotic for an addict, and thus subject to criminal prosecution. This is also true of the later Marijuana Tax Act in 1937. Soon, however, licensing bodies did not issue licenses, effectively banning the drugs. The American judicial system did not initially accept drug prohibition. Prosecutors argued that possessing drugs was a tax violation, as no legal licenses to sell drugs were in existence; hence, a person possessing drugs must have purchased them from an unlicensed source. After some wrangling, this was accepted as federal jurisdiction under the interstate commerce clause of the U.S. Constitution. Alcohol prohibition The prohibition of alcohol commenced in Finland in 1919 and in the United States in 1920. Because alcohol was the most popular recreational drug in these countries, reactions to its prohibition were far more negative than to the prohibition of other drugs, which were commonly associated with ethnic minorities, prostitution, and vice. Public pressure led to the repeal of alcohol prohibition in Finland in 1932, and in the United States in 1933. Residents of many provinces of Canada also experienced alcohol prohibition for similar periods in the first half of the 20th century. In Sweden, a referendum in 1922 decided against an alcohol prohibition law (with 51% of the votes against and 49% for prohibition), but starting in 1914 (nationwide from 1917) and until 1955 Sweden employed an alcohol rationing system with personal liquor ration books ("motbok"). War on Drugs In response to rising drug use among young people and the counterculture movement, government efforts to enforce prohibition were strengthened in many countries from the 1960s onward. Support at an international level for the prohibition of psychoactive drug use became a consistent feature of United States policy during both Republican and Democratic administrations, to such an extent that US support for foreign governments has often been contingent on their adherence to US drug policy. Major milestones in this campaign include the introduction of the Single Convention on Narcotic Drugs in 1961, the Convention on Psychotropic Substances in 1971 and the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances in 1988. A few developing countries where consumption of the prohibited substances has enjoyed longstanding cultural support, long resisted such outside pressure to pass legislation adhering to these conventions. Nepal only did so in 1976. In 1972, United States President Richard Nixon announced the commencement of the so-called "War on Drugs". Later, President Reagan added the position of drug czar to the President's Executive Office. 
In 1973, New York introduced mandatory minimum sentences of 15 years to life imprisonment for possession of more than a specified quantity of a so-called hard drug; these were called the Rockefeller drug laws after New York Governor and later Vice President Nelson Rockefeller. Similar laws were introduced across the United States. California's broader 'three strikes and you're out' policy adopted in 1994 was the first mandatory sentencing policy to gain widespread publicity and was subsequently adopted in most United States jurisdictions. This policy mandates life imprisonment for a third criminal conviction of any felony offense. A similar 'three strikes' policy was introduced to the United Kingdom by the Conservative government in 1997. This legislation enacted a mandatory minimum sentence of seven years for those convicted for a third time of a drug trafficking offense involving a class A drug. Calls for legalization, relegalization or decriminalization The terms relegalization, legalization, legal regulation, or decriminalization are used with very different meanings by different authors, something that can be confusing when the claims are not specified. Here are some variants: Sales of one or more drugs (e.g., marijuana) for personal use become legal, at least if sold in a certain way. Sales of extracts containing a specific substance become legal if sold in a certain way, for example on prescription. Use or possession of small amounts for personal use does not lead to incarceration if it is the only crime, but it is still illegal; the court or the prosecutor can impose a fine. (In that sense, Sweden both legalized and supported drug prohibition simultaneously.) Use or possession of small amounts for personal use does not lead to incarceration; the case is not treated in an ordinary court, but by a commission that may recommend treatment or sanctions including fines. (In that sense, Portugal both legalized and supported drug prohibition.) There are efforts around the world to promote the relegalization and decriminalization of drugs. These policies are often supported by proponents of liberalism and libertarianism on the grounds of individual freedom, as well as by leftists who believe prohibition to be a method of suppression of the working class by the ruling class. Prohibition of drugs is supported by proponents of conservatism as well as by various NGOs. A number of NGOs are aligned in support of drug prohibition as members of the World Federation Against Drugs. In the WFAD constitution, the "Declaration of the World Forum Against Drugs" (2008) advocates for "no other goal than a drug-free world", states that a balanced policy of drug abuse prevention, education, treatment, law enforcement, research, and supply reduction provides the most effective platform to reduce drug abuse and its associated harms, and calls on governments to consider demand reduction as one of their first priorities. It supports the UN drug conventions, the inclusion of cannabis as one of the "hard drugs", and the use of criminal sanctions "when appropriate" to deter drug use. It opposes legalization in any form, and harm reduction in general. According to some critics, drug prohibition is responsible for enriching "organised criminal networks", while the hypothesis that the prohibition of drugs generates violence is consistent with research over long time series and with cross-country comparisons. 
In the United Kingdom, where the principal piece of drug prohibition legislation is the Misuse of Drugs Act 1971, criticism includes: Drug classification: making a hash of it?, Fifth Report of Session 2005–06, House of Commons Science and Technology Committee, which said that the present system of drug classification is based on historical assumptions, not scientific assessment Development of a rational scale to assess the harm of drugs of potential misuse, David Nutt, Leslie A. King, William Saulsbury, Colin Blakemore, The Lancet, 24 March 2007, said the act is "not fit for purpose" and "the exclusion of alcohol and tobacco from the Misuse of Drugs Act is, from a scientific perspective, arbitrary" The Drug Equality Alliance (DEA) argue that the Government is administering the Act arbitrarily, contrary to its purpose, contrary to the original wishes of Parliament and therefore illegally. They are currently assisting and supporting several legal challenges to this alleged maladministration. In February 2008 the then-president of Honduras, Manuel Zelaya, called on the world to legalize drugs, in order, he said, to prevent the majority of violent murders occurring in Honduras. Honduras is used by cocaine smugglers as a transiting point between Colombia and the US. Honduras, with a population of 7 million, suffers an average of 8–10 murders a day, with an estimated 70% being a result of this international drug trade. The same problem is occurring in Guatemala, El Salvador, Costa Rica and Mexico, according to Zelaya. In January 2012 Colombian President Juan Manuel Santos made a plea to the United States and Europe to start a global debate about legalizing drugs. This call was echoed by the Guatemalan President Otto Pérez Molina, who announced his desire to legalize drugs, saying "What I have done is put the issue back on the table." In a report dealing with HIV in June 2014, the World Health Organization (WHO) of the UN called for the decriminalization of drugs particularly including injected ones. This conclusion put WHO at odds with broader long-standing UN policy favoring criminalization. Eight states of the United States (Alaska, California, Colorado, Maine, Massachusetts, Nevada, Oregon, and Washington), as well as the District of Columbia, have legalized the sale of marijuana for personal recreational use as of 2017, although recreational use remains illegal under U.S. federal law. The conflict between state and federal law is, as of 2018, unresolved. Since Uruguay in 2014 and Canada in 2018 legalized cannabis, the debate has known a new turn internationally. Drug prohibition laws The following individual drugs, listed under their respective family groups (e.g., barbiturates, benzodiazepines, opiates), are the most frequently sought after by drug users and as such are prohibited or otherwise heavily regulated for use in many countries: Among the barbiturates, pentobarbital (Nembutal), secobarbital (Seconal), and amobarbital (Amytal) Among the benzodiazepines, temazepam (Restoril; Normison; Euhypnos), flunitrazepam (Rohypnol; Hypnor; Flunipam), and alprazolam (Xanax) Cannabis products, e.g., marijuana, hashish, and hashish oil Among the dissociatives, phencyclidine (PCP), and ketamine are the most sought after. 
hallucinogens such as LSD, mescaline, peyote, and psilocybin Empathogen-entactogen drugs like MDMA ("ecstasy") Among the narcotics, it is opiates such as morphine and codeine, and opioids such as diacetylmorphine (Heroin), hydrocodone (Vicodin; Hycodan), oxycodone (Percocet; Oxycontin), hydromorphone (Dilaudid), and oxymorphone (Opana). Sedatives such as GHB and methaqualone (Quaalude) Stimulants such as cocaine, amphetamine (Adderall), dextroamphetamine (Dexedrine), methamphetamine (Desoxyn), methcathinone, and methylphenidate (Ritalin) The regulation of the above drugs varies in many countries. Alcohol possession and consumption by adults is today widely banned only in Islamic countries and certain states of India. Although alcohol prohibition was eventually repealed in the countries that enacted it, there are, for example, still parts of the United States that do not allow alcohol sales, though alcohol possession may be legal (see dry counties). New Zealand has banned the importation of chewing tobacco as part of the Smoke-free Environments Act 1990. In some parts of the world, provisions are made for the use of traditional sacraments like ayahuasca, iboga, and peyote. In Gabon, iboga (tabernanthe iboga) has been declared a national treasure and is used in rites of the Bwiti religion. The active ingredient, ibogaine, is proposed as a treatment of opioid withdrawal and various substance use disorders. In countries where alcohol and tobacco are legal, certain measures are frequently undertaken to discourage use of these drugs. For example, packages of alcohol and tobacco sometimes communicate warnings directed towards the consumer, communicating the potential risks of partaking in the use of the substance. These drugs also frequently have special sin taxes associated with the purchase thereof, in order to recoup the losses associated with public funding for the health problems the use causes in long-term users. Restrictions on advertising also exist in many countries, and often a state holds a monopoly on manufacture, distribution, marketing, and/or the sale of these drugs. List of principal drug prohibition laws by jurisdiction (non-exhaustive) Australia: Standard for the Uniform Scheduling of Medicines and Poisons Bangladesh: Narcotics Substances Control Act, 2018 Belize: Misuse of Drugs Act (Belize) Canada: Controlled Drugs and Substances Act Estonia: Narcotic Drugs and Psychotropic Substances Act (Estonia) Germany: Narcotic Drugs Act India: Narcotic Drugs and Psychotropic Substances Act (India) Netherlands: Opium Law New Zealand: Misuse of Drugs Act 1975 Pakistan: Control of Narcotic Substances Act 1997 Philippines: Comprehensive Dangerous Drugs Act of 2002 Poland: Drug Abuse Prevention Act 2005 Portugal: Decree-Law 15/93 Ireland: Misuse of Drugs Act (Ireland) South Africa: Drugs and Drug Trafficking Act 1992 Singapore: Misuse of Drugs Act (Singapore) Sweden: Lag om kontroll av narkotika (SFS 1992:860) Thailand: Psychotropic Substances Act (Thailand) and Narcotics Act United Kingdom: Misuse of Drugs Act 1971 and Drugs Act 2005 United States: Controlled Substances Act International: Single Convention on Narcotic Drugs Legal dilemmas The sentencing statutes in the United States Code that cover controlled substances are complicated. For example, a first-time offender convicted in a single proceeding for selling marijuana three times, and found to have carried a gun on him all three times (even if it were not used) is subject to a minimum sentence of 55 years in federal prison. 
In Hallucinations: Behavior, Experience, and Theory (1975), senior US government researchers Louis Jolyon West and Ronald K. Siegel explain how drug prohibition can be used for selective social control. Linguist Noam Chomsky argues that drug laws are currently, and have historically been, used by the state to oppress sections of society it opposes. Legal highs and prohibition In 2013 the European Monitoring Centre for Drugs and Drug Addiction reported that there were 280 new legal drugs, known as "legal highs", available in Europe. One of the best known, mephedrone, was banned in the United Kingdom in 2010. On November 24, 2010, the U.S. Drug Enforcement Administration announced it would use emergency powers to ban many synthetic cannabinoids within a month. An estimated 73 new psychoactive synthetic drugs appeared on the UK market in 2012. The response of the Home Office has been to create a temporary class drug order, which bans the manufacture, import, and supply (but not the possession) of named substances. Corruption In certain countries, there is concern that campaigns against drugs and organized crime are a cover for corrupt officials tied to drug trafficking themselves. In the United States, Federal Bureau of Narcotics chief Harry Anslinger's opponents accused him of taking bribes from the Mafia to enact prohibition and create a black market for alcohol. More recently in the Philippines, one death squad hitman told author Niko Vorobyov that he was being paid by military officers to eliminate those drug dealers who failed to pay a 'tax'. Under President Rodrigo Duterte, the Philippines has waged a bloody war against drugs that may have resulted in up to 29,000 extrajudicial killings. When it comes to social control of cannabis, there are different aspects to consider: not only how legislative leaders vote on cannabis, but also the federal regulations and taxation that contribute to social control. For instance, according to a report on U.S. customs and border protection, American industry, although the main use of marijuana was banned, still made use of related products such as hemp seeds and oils, which led to the previously discussed Marijuana Tax Act. The Act's provisions required importers to register, pay an annual tax of $24, and receive an official stamp. Stamps for products were then affixed to each original order form and recorded by the state revenue collector. A customs collector was to maintain custody of imported marijuana at ports of entry until the required documents were received, reviewed and approved. Shipments were subject to searches, seizures and forfeitures if any provisions of the law were not met. Violations would result in fines of no more than $2,000 or imprisonment for up to 5 years. This often created opportunities for corruption and for stolen imports that later fed smuggling, at times involving state officials and well-connected elites. Penalties United States Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. 
In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of "soft drugs", such as cannabis, illegal, though some local governments have laws contradicting federal laws. In the U.S., the War on Drugs is thought to be contributing to a prison overcrowding problem. In 1996, 59.6% of prisoners were drug-related criminals. The U.S. population grew by about +25% from 1980 to 2000. In that same 20 year time period, the U.S. prison population tripled, making the U.S. the world leader in both percentage and absolute number of citizens incarcerated. The United States has 5% of the world's population, but 25% of the prisoners. About 90% of United States prisoners are incarcerated in state jails. In 2016, about 572,000, over 44%, of the 1.3 million people in these state jails, were serving time for drug offenses. 728,000 were incarcerated for violent offenses. The data from Federal Bureau of Prisons online statistics page states that 45.9% of prisoners were incarcerated for drug offenses, as of December 2021. European Union In 2004, the Council of the European Union adopted a framework decision harmonizing the minimum penal provisions for illicit drug-related activities. In particular, article 2(9) stipulates that activities may be exempt from the minimum provisions "when it is committed by its perpetrators exclusively for their own personal consumption as defined by national law." This was made, in particular, to accommodate more liberal national systems such as the Dutch coffee shops (see below) or the Spanish Cannabis Social Clubs. The Netherlands In the Netherlands, cannabis and other "soft" drugs are decriminalised in small quantities. The Dutch government treats the problem as more of a public health issue than a criminal issue. Contrary to popular belief, cannabis is still technically illegal. Coffee shops that sell cannabis to people 18 or above are tolerated, and pay taxes like any other business for their cannabis and hashish sales, although distribution is a grey area that the authorities would rather not go into as it is not decriminalised. Many "coffee shops" are found in Amsterdam and cater mainly to the large tourist trade; the local consumption rate is far lower than in the US. The administrative bodies responsible for enforcing the drug policies include the Ministry of Health, Welfare and Sport, the Ministry of Justice, the Ministry of the Interior and Kingdom Relations, and the Ministry of Finance. Local authorities also shape local policy, within the national framework. When compared to other countries, Dutch drug consumption falls in the European average at six per cent regular use (twenty-one per cent at some point in life) and considerably lower than the Anglo-Saxon countries headed by the United States with an eight per cent recurring use (thirty-four at some point in life). Australia A Nielsen poll in 2012 found that only 27% of voters favoured decriminalisation. Australia has steep penalties for growing and using drugs even for personal use. with Western Australia having the toughest laws. There is an associated anti-drug culture amongst a significant number of Australians. Law enforcement targets drugs, particularly in the party scene. 
In 2012, crime statistics in Victoria revealed that police were increasingly arresting users rather than dealers, and the Liberal government banned the sale of bongs that year. Indonesia Indonesia carries a maximum penalty of death for drug dealing, and a maximum of 15 years in prison for drug use. In 2004, Australian citizen Schapelle Corby was convicted of smuggling 4.4 kilograms of cannabis into Bali, a crime that carried a maximum penalty of death. Her trial ended in a guilty verdict and a sentence of 20 years' imprisonment. Corby claimed to be an unwitting drug mule. Australian citizens known as the "Bali Nine" were caught smuggling heroin. Two of the nine, Andrew Chan and Myuran Sukumaran, were executed on April 29, 2015, along with six other foreign nationals. In August 2005, Australian model Michelle Leslie was arrested with two ecstasy pills. She pleaded guilty to possession and in November 2005 was sentenced to 3 months' imprisonment, which she was deemed to have already served, and was released from prison immediately upon her admission of guilt on the charge of possession. At the 1961 Single Convention on Narcotic Drugs, Indonesia, along with India, Turkey, Pakistan and some South American countries, opposed the criminalisation of drugs. Republic of China (Taiwan) Taiwan carries a maximum penalty of death for drug trafficking, while tobacco smoking and wine are classified as legal recreational drugs. The Department of Health is in charge of drug prohibition. Cost In 2020, the direct cost of drug prohibition to United States taxpayers was estimated at over $40 billion annually. Prohibition can increase organized crime, government corruption, and mass incarceration via the trade in illegal drugs, while racial and gender disparities in enforcement are evident. Although drug prohibition is often portrayed by proponents as a measure to improve public health, evidence is lacking. In 2016, the Johns Hopkins–Lancet Commission concluded that the "harms of prohibition far outweigh the benefits", citing increased risk of overdoses and HIV infection and detrimental effects on the social determinants of health. Some proponents argue that drug prohibition's effect on suppressing usage rates (although the magnitude of this effect is unknown) outweighs the negative effects of prohibition. Alternative approaches to prohibition include drug legalization, drug decriminalization, and government monopoly. See also Alcohol law Arguments for and against drug prohibition Chasing the Scream Drug liberalization Demand reduction Drug policy of the Soviet Union Harm reduction List of anti-cannabis organizations Medellín Cartel Mexican drug war Puerto Rican drug war Prohibitionism Tobacco control War on Drugs US specific: Allegations of CIA drug trafficking School district drug policies Drug Free America Foundation Drug Policy Alliance DrugWarRant Gary Webb Marijuana Policy Project National Organization for the Reform of Marijuana Laws Students for Sensible Drug Policy Woman's Christian Temperance Union References Further reading Outsiders: Studies in the Sociology of Deviance, New York: The Free Press, 1963, Eva Bertram, ed., Drug War Politics: The Price of Denial, University of California Press, 1996, . James P. Gray, Why Our Drug Laws Have Failed and What We Can Do About It: A Judicial Indictment of the War on Drugs Temple University Press, 2001, . Richard Lawrence Miller, Drug Warriors and Their Prey, Praeger, 1996, . 
Dan Baum, Smoke and Mirrors: The War on Drugs and the Politics of Failure, Little Brown & Co., 1996, . Alfred W. McCoy, The Politics of Heroin: CIA Complicity in the Global Drug Trade, Lawrence Hill Books, 1991, . Clarence Lusane and Dennis Desmond, Pipe Dream Blues: Racism and the War on Drugs, South End Press, 1991, . United Nations Office on Drugs and Crime, Colombian Survey, June 2005. Office of National Drug Control Policy, National Drug Control Strategy, March 2004. David F. Musto, The American Disease, The Origins of Narcotics Controls, New Haven: Yale University Press, 1973, p. 43 Why Is Drug Use Forbidden? by François-Xavier Dudouet Re-thinking drug control policy – Historical perspectives and conceptual tools by Peter Cohen Policy from a harm reduction perspective (journal article) Global drug prohibition: its uses and crises (journal article) Self-administration behavior is maintained by the psychoactive ingredient of marijuana in squirrel monkeys (journal article) Should cannabis be taxed and regulated? (journal article) Learning from history: a review of David Bewley-Taylor's The United States and International Drug Control, 1909–1997 (journal article) Shifting the main purposes of drug control: from suppression to regulation of use Setting goals for drug policy: harm or use reduction? Prohibition, pragmatism and drug policy repatriation Challenging the UN drug control conventions: problems and possibilities The Economics of Drug Legalization Britain on drugs (journal article) Laws and the Construction of Drug- and Gender-Related Violence in Central America by Peter Peetz External links Making Contact: The Mission to End Prohibition. Radio piece featuring LEAP founder and former narcotics officer Jack Cole, and Drug Policy Alliance founder Ethan Nadelmann EMCDDA – Decriminalisation in Europe? Recent developments in legal approaches to drug use. 10 Downing Street's Strategy Unit Drugs Report War on drugs Part I: Winners, documentary (50 min) explaining 'War on Drugs' by Tegenlicht of VPRO Dutch television. After short introduction in Dutch (1 min), English spoken. Broadband internet needed. War on drugs Part II: Losers, documentary (50 min) showing downside of the 'War on Drugs' by Tegenlicht of VPRO Dutch television. After short introduction in Dutch (1 min), English spoken. Broadband internet needed. After the War on Drugs: Options for Control (Report) The Drug War as a Socialist Enterprise by Milton Friedman Free from the Nightmare of Prohibition by Harry Browne Prohibition news page – Alcohol and Drugs History Society Drugs and conservatives should go together Social conflict Counterculture Drug control law Drug policy History of drug control Prohibitionism
Drug prohibition
[ "Chemistry" ]
8,821
[ "Drug control law", "Regulation of chemicals" ]
51,038
https://en.wikipedia.org/wiki/Technological%20applications%20of%20superconductivity
Technological applications of superconductivity include: the production of sensitive magnetometers based on SQUIDs (superconducting quantum interference devices); fast digital circuits (including those based on Josephson junctions and rapid single flux quantum technology); powerful superconducting electromagnets used in maglev trains, magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR) machines, magnetic confinement fusion reactors (e.g. tokamaks), and the beam-steering and focusing magnets used in particle accelerators; low-loss power cables; RF and microwave filters (e.g., for mobile phone base stations, as well as military ultra-sensitive/selective receivers); fast fault current limiters; high sensitivity particle detectors, including the transition edge sensor, the superconducting bolometer, the superconducting tunnel junction detector, the kinetic inductance detector, and the superconducting nanowire single-photon detector; railgun and coilgun magnets; and electric motors and generators. Low-temperature superconductivity Magnetic resonance imaging and nuclear magnetic resonance The biggest application for superconductivity is in producing the large-volume, stable, and high-intensity magnetic fields required for magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR). This represents a multi-billion-US$ market for companies such as Oxford Instruments and Siemens. The magnets typically use low-temperature superconductors (LTS) because high-temperature superconductors are not yet cheap enough to cost-effectively deliver the high, stable, and large-volume fields required, notwithstanding the need to cool LTS instruments to liquid helium temperatures. Superconductors are also used in high field scientific magnets. Particle accelerators and magnetic fusion devices Particle accelerators such as the Large Hadron Collider can include many high field electromagnets requiring large quantities of LTS. To construct the LHC magnets required more than 28 percent of the world's niobium-titanium wire production for five years, with large quantities of NbTi also used in the magnets for the LHC's huge experiment detectors. Conventional fusion machines (JET, ST-40, NSTX-U and MAST) use blocks of copper. This limits their fields to 1-3 tesla. Several superconducting fusion machines are planned for the 2024-2026 timeframe. These include ITER, ARC and the next version of ST-40. The addition of high-temperature superconductors should yield an order of magnitude improvement in fields (10-13 tesla) for a new generation of tokamaks. High-temperature superconductivity The commercial applications so far for high-temperature superconductors (HTS) have been limited by other properties of the materials discovered thus far. HTS require only liquid nitrogen, not liquid helium, to cool to superconducting temperatures. However, currently known high-temperature superconductors are brittle ceramics that are expensive to manufacture and not easily formed into wires or other useful shapes. Therefore, the applications for HTS have been where it has some other intrinsic advantage, e.g.
in: low thermal loss current leads for LTS devices (low thermal conductivity), RF and microwave filters (low resistance to RF), and increasingly in specialist scientific magnets, particularly where size and electricity consumption are critical (while HTS wire is much more expensive than LTS in these applications, this can be offset by the relative cost and convenience of cooling); the ability to ramp field is desired (the higher and wider range of HTS's operating temperature means faster changes in field can be managed); or cryogen free operation is desired (LTS generally requires liquid helium, which is becoming more scarce and expensive). HTS-based systems HTS has application in scientific and industrial magnets, including use in NMR and MRI systems. Commercial systems are now available in each category. Also one intrinsic attribute of HTS is that it can withstand much higher magnetic fields than LTS, so HTS at liquid helium temperatures are being explored for very high-field inserts inside LTS magnets. Promising future industrial and commercial HTS applications include Induction heaters, transformers, fault current limiters, power storage, motors and generators, fusion reactors (see ITER) and magnetic levitation devices. Early applications will be where the benefit of smaller size, lower weight or the ability to rapidly switch current (fault current limiters) outweighs the added cost. Longer-term as conductor price falls HTS systems should be competitive in a much wider range of applications on energy efficiency grounds alone. (For a relatively technical and US-centric view of state of play of HTS technology in power systems and the development status of Generation 2 conductor see Superconductivity for Electric Systems 2008 US DOE Annual Peer Review.) Electric power transmission The Holbrook Superconductor Project, also known as the LIPA project, was a project to design and build the world's first production superconducting transmission power cable. The cable was commissioned in late June 2008 by the Long Island Power Authority (LIPA) and was in operation for two years. The suburban Long Island electrical substation is fed by a underground cable system which consists of about of high-temperature superconductor wire manufactured by American Superconductor chilled to with liquid nitrogen, greatly reducing the cost required to deliver additional power. In addition, the installation of the cable bypassed strict regulations for overhead power lines, and offered a solution for the public's concerns on overhead power lines. The Tres Amigas Project was proposed in 2009 as an electrical HVDC interconnector between the Eastern Interconnection, the Western Interconnection and Texas Interconnection. It was proposed to be a multi-mile, triangular pathway of superconducting electric cables, capable of transferring five gigawatts of power between the three U.S. power grids. The project lapsed in 2015 when the Eastern Interconnect withdrew from the project. Construction was never begun. Essen, Germany has the world's longest superconducting power cable in production at 1 kilometer. It is a 10 kV liquid nitrogen cooled cable. The cable is smaller than an equivalent 110 kV regular cable and the lower voltage has the additional benefit of smaller transformers. In 2020, an aluminium plant in Voerde, Germany, announced plans to use superconductors for cables carrying 200 kA, citing lower volume and material demand as advantages. 
Magnesium diboride Magnesium diboride is a much cheaper superconductor than either BSCCO or YBCO in terms of cost per current-carrying capacity per length (cost/(kA*m)), in the same ballpark as LTS, and on this basis many manufactured wires are already cheaper than copper. Furthermore, MgB2 superconducts at temperatures higher than LTS (its critical temperature is 39 K, compared with less than 10 K for NbTi and 18.3 K for Nb3Sn), introducing the possibility of using it at 10-20 K in cryogen-free magnets or perhaps eventually in liquid hydrogen. However MgB2 is limited in the magnetic field it can tolerate at these higher temperatures, so further research is required to demonstrate its competitiveness in higher field applications. Trapped field magnets Exposing superconducting materials to a brief magnetic field can trap the field for use in machines such as generators. In some applications they could replace traditional permanent magnets. Notes Superconductivity
Technological applications of superconductivity
[ "Physics", "Materials_science", "Engineering" ]
1,560
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
51,045
https://en.wikipedia.org/wiki/Guided%20rat
A remotely guided rat, popularly called a ratbot or robo-rat, is a rat with electrodes implanted in the medial forebrain bundle (MFB) and sensorimotor cortex of its brain. They were developed in 2002 by Sanjiv Talwar and John Chapin at the State University of New York Downstate Medical Center. The rats wear a small electronics backpack containing a radio receiver and electrical stimulator. The rat receives remote stimulation in the sensorimotor cortex via its backpack that causes the rat to feel a sensation in its left or right whiskers, and stimulation in the MFB that is interpreted as a reward or pleasure. After a period of training and conditioning using MFB stimulation as a reward, the rats can be remotely directed to move left, right, and forward in response to whisker stimulation signals. It is possible to roughly guide the animal along an obstacle course, jumping small gaps and scaling obstacles. Ethics Concerns have been raised by animal rights groups about the use of animals in this context, particularly due to a concern about the removal of autonomy from an independent creature. For example, a spokesman of the Dr Hadwen Trust, a group funding alternatives to animal research in medicine, has said that the experiments are an "appalling example of how the human species instrumentalizes other species." Researchers tend to liken the training mechanism of the robo-rat to standard operant conditioning techniques. Talwar himself has acknowledged the ethical issues apparent in the development of the robo-rat, but points out that the research meets standards for animal treatment laid down by the National Institute of Health. Moreover, the researchers emphasize that the animals are trained, not coerced, into particular behaviors. Because the rats are encouraged to act via the reward of pleasure, not muscularly compelled to behave in a particular manner, their behavior under MFB stimulation is likened to a carrot-and-stick model of encouraged behavior versus a system of mind control. It seems unlikely that the rats could be persuaded to knowingly risk their lives even with this stimulation. "Our animals were completely happy and treated well," Talwar stated. The technology is reminiscent of experiments performed in 1965 by Dr. Jose Delgado, a controversial scientist who was able to pacify a charging bull via electrodes fitted in its brain. He was also said to control cats and monkeys like "electronic toys." Doctor Robert Galbraith Heath also placed electrodes deep into the brains of patients and wrote hundreds of medical papers on his work. See also Remote control animal References External links News announcement: Nature article: Biocybernetics Cyborgs Rats
Guided rat
[ "Biology" ]
536
[ "Cyborgs" ]
51,067
https://en.wikipedia.org/wiki/Anomie
In sociology, anomie or anomy () is a social condition defined by an uprooting or breakdown of any moral values, standards or guidance for individuals to follow. Anomie is believed to possibly evolve from conflict of belief systems and causes breakdown of social bonds between an individual and the community (both economic and primary socialization). The term, commonly understood to mean normlessness, is believed to have been popularized by French sociologist Émile Durkheim in his influential book Suicide (1897). Émile Durkheim suggested that Protestants exhibited a greater degree of anomie than Catholics. However, Durkheim first introduced the concept of anomie in his 1893 work The Division of Labour in Society. Durkheim never used the term normlessness; rather, he described anomie as "derangement", and "an insatiable will." Durkheim used the term "the malady of the infinite" because desire without limit can never be fulfilled; it only becomes more intense. For Durkheim, anomie arises more generally from a mismatch between personal or group standards and wider social standards; or from the lack of a social ethic, which produces moral deregulation and an absence of legitimate aspirations, i.e.: History In 1893, Durkheim introduced the concept of anomie to describe the mismatch of collective guild labour to evolving societal needs when the guild was homogeneous in its constituency. He equated homogeneous (redundant) skills to mechanical solidarity whose inertia hindered adaptation. He contrasted this with the self-regulating behaviour of a division of labour based on differences in constituency, equated to organic solidarity, whose lack of inertia made it sensitive to needed changes. Durkheim observed that the conflict between the evolved organic division of labour and the homogeneous mechanical type was such that one could not exist in the presence of the other. When solidarity is organic, anomie is impossible, as sensitivity to mutual needs promotes evolution in the division of labour:Durkheim contrasted the condition of anomie as being the result of a malfunction of organic solidarity after the transition to mechanical solidarity: Durkheim's use of anomie was in regards to the phenomenon of industrialization—mass-regimentation that could not adapt due to its own inertia. More specifically, its resistance to change causes disruptive cycles of collective behavior (e.g. economics) due to the necessity of a prolonged buildup of sufficient force or momentum to overcome the inertia. Later in 1897, in his studies of suicide, Durkheim associated anomie to the influence of a lack of norms or norms that were too rigid. However, such normlessness or norm-rigidity was a symptom of anomie, caused by the lack of differential adaptation that would enable norms to evolve naturally due to self-regulation, either to develop norms where none existed or to change norms that had become rigid and obsolete. Durkheim found that Protestant communities have noticeably higher suicide rates than Catholic ones, and justified it with individualism and lack of social cohesion prevalent amongst Protestants, creating poorly integrated society and making Protestants less likely to develop close communal ties that would be crucial in times of hardship. Conversely, he states that the Catholic faith binds individuals together and builds strong social ties, decreasing the risk of suicide and alienation. In this, Durkheim argued that religion is much more important than culture in regards to anomic suicide. 
This allowed Durkheim to successfully tie social cohesion to suicide rates: In 1938, Robert K. Merton linked anomie with deviance, arguing that the discontinuity between culture and structure have the dysfunctional consequence of leading to deviance within society. He described 5 types of deviance in terms of the acceptance or rejection of social goals and the institutionalized means of achieving them. Etymology The term anomie—"a reborrowing with French spelling of anomy"—comes from (), namely the privative alpha prefix (a-, 'without'), and nomos (). The Greeks distinguished between nomos, and arché (). For example, a monarch is a single ruler but he may still be subject to, and not exempt from, the prevailing laws, i.e. nomos. In the original city state democracy, the majority rule was an aspect of arché because it was a rule-based, customary system, which may or may not make laws, i.e. nomos. Thus, the original meaning of anomie defined anything or anyone against or outside the law, or a condition where the current laws were not applied resulting in a state of illegitimacy or lawlessness. The contemporary English understanding of the word anomie can accept greater flexibility in the word "norm", and some have used the idea of normlessness to reflect a similar situation to the idea of anarchy. However, as used by Émile Durkheim and later theorists, anomie is a reaction against or a retreat from the regulatory social controls of society, and is a completely separate concept from anarchy, which consists of the absence of the roles of rulers and submitted. Social disorder Nineteenth-century French sociologist Émile Durkheim borrowed the term anomie from French philosopher Jean-Marie Guyau. Durkheim used it in his influential book Suicide (1897) in order to outline the social (and not individual) causes of suicide, characterized by a rapid change of the standards or values of societies (often erroneously referred to as normlessness), and an associated feeling of alienation and purposelessness. He believed that anomie is common when the surrounding society has undergone significant changes in its economic fortunes, whether for better or for worse and, more generally, when there is a significant discrepancy between the ideological theories and values commonly professed and what was actually achievable in everyday life. This was contrary to previous theories on suicide which generally maintained that suicide was precipitated by negative events in a person's life and their subsequent depression. In Durkheim's view, traditional religions often provided the basis for the shared values which the anomic individual lacks. Furthermore, he argued that the division of labor that had been prevalent in economic life since the Industrial Revolution led individuals to pursue egoistic ends rather than seeking the good of a larger community. Robert King Merton also adopted the idea of anomie to develop strain theory, defining it as the discrepancy between common social goals and the legitimate means to attain those goals. In other words, an individual suffering from anomie would strive to attain the common goals of a specific society yet would not be able to reach these goals legitimately because of the structural limitations in society. As a result, the individual would exhibit deviant behavior. Friedrich Hayek notably uses the word anomie with this meaning. 
According to one academic survey, psychometric testing confirmed a link between anomie and academic dishonesty among university students, suggesting that universities needed to foster codes of ethics among students in order to curb it. In another study, anomie was seen as a "push factor" in tourism. As an older variant, the 1913 Webster's Dictionary reports use of the word anomie as meaning "disregard or violation of the law." However, anomie as a social disorder is not to be confused with anarchy: proponents of anarchism claim that anarchy does not necessarily lead to anomie and that hierarchical command actually increases lawlessness. Some anarcho-primitivists argue that complex societies, particularly industrial and post-industrial societies, directly cause conditions such as anomie by depriving the individual of self-determination and a relatively small reference group to relate to, such as the band, clan or tribe. In 2003, José Soltero and Romeo Saravia analyzed the concept of anomie in regards to Protestantism and Catholicism in El Salvador. Massive displacement of population in the 1970s, economic and political crises as well as cycles of violence are credited with radically changing the religious composition of the country, rendering it one of the most Protestant countries in Latin America. According to Soltero and Saravia, the rise of Protestantism is conversationally claimed to be caused by a Catholic failure to "address the spiritual needs of the poor" and the Protestant "deeper quest for salvation, liberation, and eternal life". However, their research does not support these claims, and showed that Protestantism is not more popular amongst the poor. Their findings do confirm the assumptions of anomie, with Catholic communities of El Salvador enjoying high social cohesion, while the Protestant communities have been associated with poorer social integration, internal migration and tend to be places deeply affected by the Salvadoran Civil War. Additionally, Soltero and Saravia found that Salvadoran Catholicism is tied to social activism, liberation theology and the political left, as opposed to the "right wing political orientation, or at least a passive, personally inward orientation, expressed by some Protestant churches". They conclude that their research contradicts the theory that Protestantism responds to the spiritual needs of the poor more adequately than Catholicism, while also disproving the claim that Protestantism appeals more to women: The study by Soltero and Saravia has also found a link between Protestantism and no access to healthcare: Synnomie Freda Adler coined synnomie as the opposite of anomie. Using Émile Durkheim's concept of social solidarity and collective consciousness, Adler defined synnomie as "a congruence of norms to the point of harmonious accommodation". Adler described societies in a synnomie state as "characterized by norm conformity, cohesion, intact social controls and norm integration". Social institutions such as the family, religion and communities, largely serve as sources of norms and social control to maintain a synnomic society. In culture In Albert Camus's existentialist novel The Stranger, Meursault—the bored, alienated protagonist—struggles to construct an individual system of values as he responds to the disappearance of the old. He exists largely in a state of anomie, as seen from the apathy evinced in the opening lines: "" ("Today mum died. Or maybe yesterday, I don't know"). 
Fyodor Dostoyevsky expresses a similar concern about anomie in his novel The Brothers Karamazov. The Grand Inquisitor remarks that in the absence of God and immortal life, everything would be lawful. In other words, that any act becomes thinkable, that there is no moral compass, which leads to apathy and detachment. In The Ink Black Heart of the Cormoran Strike novels, written by J. K. Rowling under the pseudonym Robert Galbraith, the main antagonist goes by the online handle of "Anomie". See also References Sources Durkheim, Émile. 1893. The Division of Labour in Society. Marra, Realino. 1987. Suicidio, diritto e anomia. Immagini della morte volontaria nella civiltà occidentale. Napoli: Edizioni Scientifiche Italiane. —— 1989. "Geschichte und aktuelle Problematik des Anomiebegriffs." Zeitschrift für Rechtssoziologie 11(1):67–80. Orru, Marco. 1983. "The Ethics of Anomie: Jean Marie Guyau and Émile Durkheim." British Journal of Sociology 34(4):499–518. Riba, Jordi. 1999. La Morale Anomique de Jean-Marie Guyau. L'Harmattan. . External links Deflem, Mathieu. 2015. "Anomie: History of the Concept." pp. 718–721 in International Encyclopedia of Social and Behavioral Sciences, Second Edition (Volume 1), edited by James D. Wright. Oxford, UK: Elsevier. "Anomie" discussed at the Émile Durkheim Archive. Featherstone, Richard, and Mathieu Deflem. 2003. "Anomie and Strain: Context and Consequences of Merton's Two Theories." Sociological Inquiry 73(4):471–489, 2003. Deviance (sociology) Émile Durkheim Social philosophy Sociological terminology Sociological theories
Anomie
[ "Biology" ]
2,546
[ "Deviance (sociology)", "Behavior", "Human behavior" ]
51,070
https://en.wikipedia.org/wiki/Conventional%20superconductor
Conventional superconductors are materials that display superconductivity as described by BCS theory or its extensions. This is in contrast to unconventional superconductors, which do not. Conventional superconductors can be either type-I or type-II. Most elemental superconductors are conventional. Niobium and vanadium are type-II, while most other elemental superconductors are type-I. Critical temperatures of some elemental superconductors: Most compound and alloy superconductors are type-II materials. The most commonly used conventional superconductor in applications is a niobium-titanium alloy - this is a type-II superconductor with a superconducting critical temperature of 11 K. The highest critical temperature so far achieved in a conventional superconductor was 39 K (-234 °C) in magnesium diboride. BKBO Ba0.6K0.4BiO3 is an unusual superconductor (a non-cuprate oxide) - but considered 'conventional' in the sense that the BCS theory applies. See also Matthias rules References Superconductors
Conventional superconductor
[ "Chemistry", "Materials_science" ]
237
[ "Superconductivity", "Superconductors" ]
51,072
https://en.wikipedia.org/wiki/Natural%20deduction
In logic and proof theory, natural deduction is a kind of proof calculus in which logical reasoning is expressed by inference rules closely related to the "natural" way of reasoning. This contrasts with Hilbert-style systems, which instead use axioms as much as possible to express the logical laws of deductive reasoning. History Natural deduction grew out of a context of dissatisfaction with the axiomatizations of deductive reasoning common to the systems of Hilbert, Frege, and Russell (see, e.g., Hilbert system). Such axiomatizations were most famously used by Russell and Whitehead in their mathematical treatise Principia Mathematica. Spurred on by a series of seminars in Poland in 1926 by Łukasiewicz that advocated a more natural treatment of logic, Jaśkowski made the earliest attempts at defining a more natural deduction, first in 1929 using a diagrammatic notation, and later updating his proposal in a sequence of papers in 1934 and 1935. His proposals led to different notations such as Fitch-style calculus (or Fitch's diagrams) or Suppes' method, for which Lemmon gave a variant now known as Suppes–Lemmon notation. Natural deduction in its modern form was independently proposed by the German mathematician Gerhard Gentzen in 1933, in a dissertation delivered to the faculty of mathematical sciences of the University of Göttingen. The term natural deduction (or rather, its German equivalent natürliches Schließen) was coined in that paper: Gentzen was motivated by a desire to establish the consistency of number theory. He was unable to prove the main result required for the consistency result, the cut elimination theorem—the Hauptsatz—directly for natural deduction. For this reason he introduced his alternative system, the sequent calculus, for which he proved the Hauptsatz both for classical and intuitionistic logic. In a series of seminars in 1961 and 1962, Prawitz gave a comprehensive summary of natural deduction calculi, and transported much of Gentzen's work with sequent calculi into the natural deduction framework. His 1965 monograph Natural deduction: a proof-theoretical study was to become a reference work on natural deduction, and included applications for modal and second-order logic. In natural deduction, a proposition is deduced from a collection of premises by applying inference rules repeatedly. The system presented in this article is a minor variation of Gentzen's or Prawitz's formulation, but with a closer adherence to Martin-Löf's description of logical judgments and connectives. History of notation styles Natural deduction has had a large variety of notation styles, which can make it difficult to recognize a proof for readers unfamiliar with a particular style. To help with this, a later section explains how to read all the notation that the article actually uses; the present section only traces the historical evolution of the notation styles, most of which cannot be shown here because no illustrations are available under a public copyright license – the reader is pointed to the SEP and IEP for pictures. Gentzen invented natural deduction using tree-shaped proofs – see below for details. Jaśkowski changed this to a notation that used various nested boxes. Fitch changed Jaśkowski's method of drawing the boxes, creating Fitch notation. 1940: In a textbook, Quine indicated antecedent dependencies by line numbers in square brackets, anticipating Suppes' 1957 line-number notation.
1950: In a textbook, Quine demonstrated a method of using one or more asterisks to the left of each line of proof to indicate dependencies. This is equivalent to Kleene's vertical bars. (It is not totally clear if Quine's asterisk notation appeared in the original 1950 edition or was added in a later edition.) 1957: Suppes gave an introduction to practical logic theorem proving in a textbook. This indicated dependencies (i.e. antecedent propositions) by line numbers at the left of each line. 1963: Another textbook uses sets of line numbers to indicate antecedent dependencies of the lines of sequential logical arguments based on natural deduction inference rules. 1965: Lemmon's entire textbook is an introduction to logic proofs using a method based on that of Suppes, now known as Suppes–Lemmon notation. 1967: In a textbook, Kleene briefly demonstrated two kinds of practical logic proofs, one system using explicit quotations of antecedent propositions on the left of each line, the other system using vertical bar-lines on the left to indicate dependencies. Notation Here is a table with the most common notational variants for logical connectives. Gentzen's tree notation Gentzen, who invented natural deduction, had his own notation style for arguments. This will be exemplified by a simple argument below. Consider a simple example argument in propositional logic, such as "if it's raining then it's cloudy; it is raining; therefore it's cloudy". (This is an instance of modus ponens.) Writing P for "it's raining" and Q for "it's cloudy" and representing the argument as a list of propositions, as is common, we would have the premises P → Q and P and the conclusion Q. In Gentzen's notation, this would be written as a small proof tree (a sketch is given below). The premises are shown above a line, called the inference line, separated by a comma, which indicates combination of premises. The conclusion is written below the inference line. The inference line represents syntactic consequence, sometimes called deductive consequence, which is also symbolized with ⊢. So the above can also be written in one line as P → Q, P ⊢ Q. (The turnstile, for syntactic consequence, is of lower precedence than the comma, which represents premise combination, which in turn is of lower precedence than the arrow, used for material implication; so no parentheses are needed to interpret this formula.) Syntactic consequence is contrasted with semantic consequence, which is symbolized with ⊧. In this case, the conclusion follows syntactically because natural deduction is a syntactic proof system, which assumes inference rules as primitives. Gentzen's style will be used in much of this article. Gentzen's discharging annotations used to internalise hypothetical judgments can be avoided by representing proofs as a tree of sequents Γ ⊢ A instead of a tree of judgments that A (is true). Suppes–Lemmon notation Many textbooks use Suppes–Lemmon notation, so this article will also give that – although as of now, this is only included for propositional logic, and the rest of the coverage is given only in Gentzen style. A proof, laid out in accordance with the Suppes–Lemmon notation style, is a sequence of lines containing sentences, where each sentence is either an assumption, or the result of applying a rule of proof to earlier sentences in the sequence. Each line of proof is made up of a sentence of proof, together with its annotation, its assumption set, and the current line number. The assumption set lists the assumptions on which the given sentence of proof depends, which are referenced by the line numbers. The annotation specifies which rule of proof was applied, and to which earlier lines, to yield the current sentence.
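The proof tree and the one-line sequent referred to above were rendered as images or templates in the original article and did not survive the plain-text extraction. As an illustrative reconstruction only (using the letters P and Q introduced above, and approximating the tree with a simple fraction rather than a dedicated proof-tree package), the Gentzen-style rendering of the modus ponens example can be typeset in LaTeX roughly as:

\[
  \frac{P \to Q, \quad P}{Q}
  \qquad \text{or, in one line,} \qquad
  P \to Q,\; P \;\vdash\; Q
\]

The horizontal bar is the inference line: the two premises, separated by a comma, sit above it, and the conclusion Q sits below it, exactly as described in the surrounding text.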
Here's an example proof: This proof will become clearer when the inference rules and their appropriate annotations are specified – see the Suppes–Lemmon-style inference rules below. Propositional language syntax This section defines the formal syntax for a propositional logic language, contrasting the common ways of doing so with a Gentzen-style way of doing so. Common definition styles The formal language of a propositional calculus is usually defined by a recursive definition, such as this one, from Bostock: Each sentence-letter is a formula. "⊤" and "⊥" are formulae. If φ is a formula, so is ¬φ. If φ and ψ are formulae, so are (φ ∧ ψ), (φ ∨ ψ), (φ → ψ), and (φ ↔ ψ). Nothing else is a formula. There are other ways of doing it, such as the BNF grammar . Gentzen-style definition A syntax definition can also be given in Gentzen style, by writing well-formed formulas below the inference line and any schematic variables used by those formulas above it. For instance, the equivalent of rules 3 and 4, from Bostock's definition above, is written as follows: A different notational convention sees the language's syntax as a categorial grammar with the single category "formula", denoted by the symbol . So any elements of the syntax are introduced by categorizations, for which the notation is , meaning " is an expression for an object in the category ." The sentence-letters, then, are introduced by categorizations such as , , , and so on; the connectives, in turn, are defined by statements similar to the above, but using categorization notation, as seen below: In the rest of this article, the categorization notation will be used for any Gentzen-notation statements defining the language's grammar; any other statements in Gentzen notation will be inferences, asserting that a sequent follows rather than that an expression is a well-formed formula. Gentzen-style propositional logic Gentzen-style inference rules The following is a complete list of primitive inference rules for natural deduction in classical propositional logic: This table follows the custom of using Greek letters as schemata, which may range over any formulas, rather than only over atomic propositions. The name of a rule is given to the right of its formula tree. For instance, the first introduction rule is named ∧I, which is short for "conjunction introduction". Gentzen-style example proofs As an example of the use of inference rules, consider commutativity of conjunction. If A ∧ B, then B ∧ A; this derivation can be drawn by composing inference rules in such a fashion that premises of a lower inference match the conclusion of the next higher inference. As a second example, consider the derivation of "A ⊃ (B ⊃ (A ∧ B))": This full derivation has no unsatisfied premises; however, sub-derivations are hypothetical. For instance, the derivation of "B ⊃ (A ∧ B)" is hypothetical with antecedent "A" (named u). Suppes–Lemmon-style propositional logic Suppes–Lemmon-style inference rules Natural deduction inference rules, due ultimately to Gentzen, are given below. There are ten primitive rules of proof, which are the rule of assumption, plus four pairs of introduction and elimination rules for the binary connectives, and the rule of reductio ad absurdum. Disjunctive Syllogism can be used as an easier alternative to the proper ∨-elimination, and MTT and DN are commonly given rules, although they are not primitive. Suppes–Lemmon-style example proof Recall that an example proof was already given when introducing the Suppes–Lemmon notation above. This is a second example.
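The example tables for this section were likewise lost in extraction. The following is a small illustrative proof in the Suppes–Lemmon layout described above. It is a reconstruction for orientation, not the article's own example, and the rule abbreviations (A for assumption, MPP for modus ponendo ponens, CP for conditional proof) follow common Suppes–Lemmon usage; the labels in the article's own rule table may differ. It derives P → R from the premises P → Q and Q → R:

1     (1) P → Q     A
2     (2) Q → R     A
3     (3) P         A
1,3   (4) Q         1,3 MPP
1,2,3 (5) R         2,4 MPP
1,2   (6) P → R     3,5 CP

Each line shows, from left to right, the assumption set, the line number in parentheses, the sentence, and the annotation naming the rule and the earlier lines it was applied to; note how assumption 3 is discharged from the assumption set at line 6 by the conditional proof step.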
Consistency, completeness, and normal forms A theory is said to be consistent if falsehood is not provable (from no assumptions) and is complete if every theorem or its negation is provable using the inference rules of the logic. These are statements about the entire logic, and are usually tied to some notion of a model. However, there are local notions of consistency and completeness that are purely syntactic checks on the inference rules, and require no appeals to models. The first of these is local consistency, also known as local reducibility, which says that any derivation containing an introduction of a connective followed immediately by its elimination can be turned into an equivalent derivation without this detour. It is a check on the strength of elimination rules: they must not be so strong that they include knowledge not already contained in their premises. As an example, consider conjunctions. Dually, local completeness says that the elimination rules are strong enough to decompose a connective into the forms suitable for its introduction rule. Again for conjunctions: These notions correspond exactly to β-reduction (beta reduction) and η-conversion (eta conversion) in the lambda calculus, using the Curry–Howard isomorphism. By local completeness, we see that every derivation can be converted to an equivalent derivation where the principal connective is introduced. In fact, if the entire derivation obeys this ordering of eliminations followed by introductions, then it is said to be normal. In a normal derivation all eliminations happen above introductions. In most logics, every derivation has an equivalent normal derivation, called a normal form. The existence of normal forms is generally hard to prove using natural deduction alone, though such accounts do exist in the literature, most notably by Dag Prawitz in 1961. It is much easier to show this indirectly by means of a cut-free sequent calculus presentation. First and higher-order extensions The logic of the earlier section is an example of a single-sorted logic, i.e., a logic with a single kind of object: propositions. Many extensions of this simple framework have been proposed; in this section we will extend it with a second sort of individuals or terms. More precisely, we will add a new category, "term", denoted . We shall fix a countable set of variables, another countable set of function symbols, and construct terms with the following formation rules: and For propositions, we consider a third countable set P of predicates, and define atomic predicates over terms with the following formation rule: The first two rules of formation provide a definition of a term that is effectively the same as that defined in term algebra and model theory, although the focus of those fields of study is quite different from natural deduction. The third rule of formation effectively defines an atomic formula, as in first-order logic, and again in model theory. To these are added a pair of formation rules, defining the notation for quantified propositions; one for universal (∀) and existential (∃) quantification: The universal quantifier has the introduction and elimination rules: The existential quantifier has the introduction and elimination rules: In these rules, the notation [t/x] A stands for the substitution of t for every (visible) instance of x in A, avoiding capture. 
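Since the side condition "avoiding capture" is easy to get wrong, it may help to see the substitution [t/x]A spelled out as a function on syntax trees. The following Python sketch uses a deliberately minimal, invented representation of first-order terms and formulas (it is not drawn from the article, and it covers only atomic predicates, implication, and the universal quantifier); when a quantifier's binder would capture a free variable of t, the bound variable is renamed first.

from dataclasses import dataclass
from typing import Tuple
import itertools

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Fn:
    name: str
    args: Tuple = ()

@dataclass(frozen=True)
class Pred:
    name: str
    args: Tuple = ()

@dataclass(frozen=True)
class Imp:
    left: object
    right: object

@dataclass(frozen=True)
class Forall:
    var: str
    body: object

def free_in_term(t):
    # free variables of a term
    if isinstance(t, Var):
        return {t.name}
    out = set()
    for a in t.args:
        out |= free_in_term(a)
    return out

def free_vars(a):
    # free variables of a formula
    if isinstance(a, Pred):
        out = set()
        for arg in a.args:
            out |= free_in_term(arg)
        return out
    if isinstance(a, Imp):
        return free_vars(a.left) | free_vars(a.right)
    if isinstance(a, Forall):
        return free_vars(a.body) - {a.var}
    raise TypeError(a)

def subst_term(term, x, t):
    # replace the variable x by the term t inside a term
    if isinstance(term, Var):
        return t if term.name == x else term
    return Fn(term.name, tuple(subst_term(a, x, t) for a in term.args))

def fresh(avoid):
    # pick a variable name not occurring in the set `avoid`
    for i in itertools.count():
        if f"v{i}" not in avoid:
            return f"v{i}"

def subst(a, x, t):
    # compute [t/x]a: replace free occurrences of x in formula a by term t, avoiding capture
    if isinstance(a, Pred):
        return Pred(a.name, tuple(subst_term(arg, x, t) for arg in a.args))
    if isinstance(a, Imp):
        return Imp(subst(a.left, x, t), subst(a.right, x, t))
    if isinstance(a, Forall):
        if a.var == x:                       # x is bound here, so nothing below is free
            return a
        if a.var in free_in_term(t):         # the binder would capture a free variable of t
            new = fresh(free_in_term(t) | free_vars(a.body) | {x})
            renamed = subst(a.body, a.var, Var(new))   # rename the bound variable first
            return Forall(new, subst(renamed, x, t))
        return Forall(a.var, subst(a.body, x, t))
    raise TypeError(a)

# [y/x] (forall y. P(x, y)): the bound y is renamed so that the substituted y stays free
print(subst(Forall("y", Pred("P", (Var("x"), Var("y")))), "x", Var("y")))

The final line substitutes y for x in ∀y. P(x, y); a naive textual replacement would wrongly bind the incoming y, whereas the sketch renames the bound variable before substituting.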
As before the superscripts on the name stand for the components that are discharged: the term a cannot occur in the conclusion of ∀I (such terms are known as eigenvariables or parameters), and the hypotheses named u and v in ∃E are localised to the second premise in a hypothetical derivation. Although the propositional logic of earlier sections was decidable, adding the quantifiers makes the logic undecidable. So far, the quantified extensions are first-order: they distinguish propositions from the kinds of objects quantified over. Higher-order logic takes a different approach and has only a single sort of propositions. The quantifiers have as the domain of quantification the very same sort of propositions, as reflected in the formation rules: A discussion of the introduction and elimination forms for higher-order logic is beyond the scope of this article. It is possible to be in-between first-order and higher-order logics. For example, second-order logic has two kinds of propositions, one kind quantifying over terms, and the second kind quantifying over propositions of the first kind. Proofs and type theory The presentation of natural deduction so far has concentrated on the nature of propositions without giving a formal definition of a proof. To formalise the notion of proof, we alter the presentation of hypothetical derivations slightly. We label the antecedents with proof variables (from some countable set V of variables), and decorate the succedent with the actual proof. The antecedents or hypotheses are separated from the succedent by means of a turnstile (⊢). This modification sometimes goes under the name of localised hypotheses. The following diagram summarises the change. The collection of hypotheses will be written as Γ when their exact composition is not relevant. To make proofs explicit, we move from the proof-less judgment "A" to a judgment: "π is a proof of (A)", which is written symbolically as "π : A". Following the standard approach, proofs are specified with their own formation rules for the judgment "π proof". The simplest possible proof is the use of a labelled hypothesis; in this case the evidence is the label itself. Let us re-examine some of the connectives with explicit proofs. For conjunction, we look at the introduction rule ∧I to discover the form of proofs of conjunction: they must be a pair of proofs of the two conjuncts. Thus: The elimination rules ∧E1 and ∧E2 select either the left or the right conjunct; thus the proofs are a pair of projections—first (fst) and second (snd). For implication, the introduction form localises or binds the hypothesis, written using a λ; this corresponds to the discharged label. In the rule, "Γ, u:A" stands for the collection of hypotheses Γ, together with the additional hypothesis u. With proofs available explicitly, one can manipulate and reason about proofs. The key operation on proofs is the substitution of one proof for an assumption used in another proof. This is commonly known as a substitution theorem, and can be proved by induction on the depth (or structure) of the second judgment. Substitution theorem If Γ ⊢ π1 : A and Γ, u:A ⊢ π2 : B, then Γ ⊢ [π1/u] π2 : B. So far the judgment "Γ ⊢ π : A" has had a purely logical interpretation. In type theory, the logical view is exchanged for a more computational view of objects. Propositions in the logical interpretation are now viewed as types, and proofs as programs in the lambda calculus. Thus the interpretation of "π : A" is "the program π has type A". 
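Read this way, the article's earlier example derivation of "A ⊃ (B ⊃ (A ∧ B))" is simply the program that takes evidence for A, then evidence for B, and pairs them, while the two conjunction eliminations are the projections. A minimal Python sketch follows; the names and type variables are illustrative and not from the article.

from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# Proof of A -> (B -> (A and B)): two implication introductions, then conjunction introduction.
def pair_proof(a: A) -> Callable[[B], Tuple[A, B]]:
    def inner(b: B) -> Tuple[A, B]:
        return (a, b)          # conjunction introduction corresponds to building a pair
    return inner               # implication introduction corresponds to lambda abstraction

# Conjunction elimination corresponds to the projections fst and snd:
def fst(p: Tuple[A, B]) -> A:
    return p[0]

def snd(p: Tuple[A, B]) -> B:
    return p[1]

# Local soundness (beta reduction): eliminating right after introducing returns the component.
assert fst(pair_proof(1)("x")) == 1
assert snd(pair_proof(1)("x")) == "x"

The two assertions at the end mirror the local-consistency check described earlier: introducing a pair and immediately projecting from it gives back the original component, which is exactly β-reduction under the Curry–Howard reading.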
The logical connectives are also given a different reading: conjunction is viewed as product (×), implication as the function arrow (→), etc. The differences are only cosmetic, however. Type theory has a natural deduction presentation in terms of formation, introduction and elimination rules; in fact, the reader can easily reconstruct what is known as simple type theory from the previous sections. The difference between logic and type theory is primarily a shift of focus from the types (propositions) to the programs (proofs). Type theory is chiefly interested in the convertibility or reducibility of programs. For every type, there are canonical programs of that type which are irreducible; these are known as canonical forms or values. If every program can be reduced to a canonical form, then the type theory is said to be normalising (or weakly normalising). If the canonical form is unique, then the theory is said to be strongly normalising. Normalisability is a rare feature of most non-trivial type theories, which is a big departure from the logical world. (Recall that almost every logical derivation has an equivalent normal derivation.) To sketch the reason: in type theories that admit recursive definitions, it is possible to write programs that never reduce to a value; such looping programs can generally be given any type. In particular, the looping program has type ⊥, although there is no logical proof of "⊥". For this reason, the propositions as types; proofs as programs paradigm only works in one direction, if at all: interpreting a type theory as a logic generally gives an inconsistent logic. Example: Dependent Type Theory Like logic, type theory has many extensions and variants, including first-order and higher-order versions. One branch, known as dependent type theory, is used in a number of computer-assisted proof systems. Dependent type theory allows quantifiers to range over programs themselves. These quantified types are written as Π and Σ instead of ∀ and ∃, and have the following formation rules: These types are generalisations of the arrow and product types, respectively, as witnessed by their introduction and elimination rules. Dependent type theory in full generality is very powerful: it is able to express almost any conceivable property of programs directly in the types of the program. This generality comes at a steep price — either typechecking is undecidable (extensional type theory), or extensional reasoning is more difficult (intensional type theory). For this reason, some dependent type theories do not allow quantification over arbitrary programs, but rather restrict to programs of a given decidable index domain, for example integers, strings, or linear programs. Since dependent type theories allow types to depend on programs, a natural question to ask is whether it is possible for programs to depend on types, or any other combination. There are many kinds of answers to such questions. A popular approach in type theory is to allow programs to be quantified over types, also known as parametric polymorphism; of this there are two main kinds: if types and programs are kept separate, then one obtains a somewhat more well-behaved system called predicative polymorphism; if the distinction between program and type is blurred, one obtains the type-theoretic analogue of higher-order logic, also known as impredicative polymorphism. 
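The remark above that simple type theory can be reconstructed from the natural-deduction rules can be made concrete with a toy checker for the judgment "Γ ⊢ π : A": a context Γ maps proof variables to types, and each term constructor corresponds to an inference rule. The Python sketch below covers only the simply typed (non-dependent) fragment, and all datatype and function names are invented for this illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Base:            # a base (atomic) type
    name: str

@dataclass(frozen=True)
class Arrow:           # function type A -> B (implication)
    dom: object
    cod: object

@dataclass(frozen=True)
class Prod:            # product type A x B (conjunction)
    left: object
    right: object

@dataclass(frozen=True)
class Var:             # proof variable (hypothesis)
    name: str

@dataclass(frozen=True)
class Lam:             # lambda abstraction (implication introduction)
    var: str
    var_type: object
    body: object

@dataclass(frozen=True)
class App:             # application (implication elimination)
    fn: object
    arg: object

@dataclass(frozen=True)
class Pair:            # pairing (conjunction introduction)
    fst: object
    snd: object

def typecheck(ctx, term):
    # return the type A such that ctx |- term : A, or raise TypeError
    if isinstance(term, Var):
        return ctx[term.name]                                 # hypothesis rule
    if isinstance(term, Lam):
        body_t = typecheck({**ctx, term.var: term.var_type}, term.body)
        return Arrow(term.var_type, body_t)                   # implication introduction
    if isinstance(term, App):
        fn_t = typecheck(ctx, term.fn)
        arg_t = typecheck(ctx, term.arg)
        if isinstance(fn_t, Arrow) and fn_t.dom == arg_t:
            return fn_t.cod                                   # implication elimination
        raise TypeError("ill-typed application")
    if isinstance(term, Pair):
        return Prod(typecheck(ctx, term.fst), typecheck(ctx, term.snd))  # conjunction introduction
    raise TypeError("unknown term")

# lambda a:A. lambda b:B. (a, b)  has type  A -> (B -> (A x B))
A, B = Base("A"), Base("B")
prog = Lam("a", A, Lam("b", B, Pair(Var("a"), Var("b"))))
print(typecheck({}, prog))

Running it on the term λa:A. λb:B. (a, b) synthesizes the type A → (B → (A × B)), matching the proposition proved in the earlier Gentzen-style example.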
Various combinations of dependency and polymorphism have been considered in the literature, the most famous being the lambda cube of Henk Barendregt. The intersection of logic and type theory is a vast and active research area. New logics are usually formalised in a general type theoretic setting, known as a logical framework. Popular modern logical frameworks such as the calculus of constructions and LF are based on higher-order dependent type theory, with various trade-offs in terms of decidability and expressive power. These logical frameworks are themselves always specified as natural deduction systems, which is a testament to the versatility of the natural deduction approach. Classical and modal logics For simplicity, the logics presented so far have been intuitionistic. Classical logic extends intuitionistic logic with an additional axiom or principle of excluded middle: For any proposition p, the proposition p ∨ ¬p is true. This statement is not obviously either an introduction or an elimination; indeed, it involves two distinct connectives. Gentzen's original treatment of excluded middle prescribed one of the following three (equivalent) formulations, which were already present in analogous forms in the systems of Hilbert and Heyting: (XM3 is merely XM2 expressed in terms of E.) This treatment of excluded middle, in addition to being objectionable from a purist's standpoint, introduces additional complications in the definition of normal forms. A comparatively more satisfactory treatment of classical natural deduction in terms of introduction and elimination rules alone was first proposed by Parigot in 1992 in the form of a classical lambda calculus called λμ. The key insight of his approach was to replace a truth-centric judgment A with a more classical notion, reminiscent of the sequent calculus: in localised form, instead of Γ ⊢ A, he used Γ ⊢ Δ, with Δ a collection of propositions similar to Γ. Γ was treated as a conjunction, and Δ as a disjunction. This structure is essentially lifted directly from classical sequent calculi, but the innovation in λμ was to give a computational meaning to classical natural deduction proofs in terms of a callcc or a throw/catch mechanism seen in LISP and its descendants. (See also: first class control.) Another important extension was for modal and other logics that need more than just the basic judgment of truth. These were first described, for the alethic modal logics S4 and S5, in a natural deduction style by Prawitz in 1965, and have since accumulated a large body of related work. To give a simple example, the modal logic S4 requires one new judgment, "A valid", that is categorical with respect to truth: If "A" (is true) under no assumption that "B" (is true), then "A valid". This categorical judgment is internalised as a unary connective ◻A (read "necessarily A") with the following introduction and elimination rules: Note that the premise "A valid" has no defining rules; instead, the categorical definition of validity is used in its place. This mode becomes clearer in the localised form when the hypotheses are explicit. We write "Ω;Γ ⊢ A" where Γ contains the true hypotheses as before, and Ω contains valid hypotheses. On the right there is just a single judgment "A"; validity is not needed here since "Ω ⊢ A valid" is by definition the same as "Ω;⋅ ⊢ A". The introduction and elimination forms are then: The modal hypotheses have their own version of the hypothesis rule and substitution theorem. 
Modal substitution theorem If Ω;⋅ ⊢ π1 : A and Ω, u: (A valid) ; Γ ⊢ π2 : C, then Ω;Γ ⊢ [π1/u] π2 : C. This framework of separating judgments into distinct collections of hypotheses, also known as multi-zoned or polyadic contexts, is very powerful and extensible; it has been applied for many different modal logics, and also for linear and other substructural logics, to give a few examples. However, relatively few systems of modal logic can be formalised directly in natural deduction. To give proof-theoretic characterisations of these systems, extensions such as labelling or systems of deep inference. The addition of labels to formulae permits much finer control of the conditions under which rules apply, allowing the more flexible techniques of analytic tableaux to be applied, as has been done in the case of labelled deduction. Labels also allow the naming of worlds in Kripke semantics; presents an influential technique for converting frame conditions of modal logics in Kripke semantics into inference rules in a natural deduction formalisation of hybrid logic. surveys the application of many proof theories, such as Avron and Pottinger's hypersequents and Belnap's display logic to such modal logics as S5 and B. Comparison with sequent calculus The sequent calculus is the chief alternative to natural deduction as a foundation of mathematical logic. In natural deduction the flow of information is bi-directional: elimination rules flow information downwards by deconstruction, and introduction rules flow information upwards by assembly. Thus, a natural deduction proof does not have a purely bottom-up or top-down reading, making it unsuitable for automation in proof search. To address this fact, Gentzen in 1935 proposed his sequent calculus, though he initially intended it as a technical device for clarifying the consistency of predicate logic. Kleene, in his seminal 1952 book Introduction to Metamathematics, gave the first formulation of the sequent calculus in the modern style. In the sequent calculus all inference rules have a purely bottom-up reading. Inference rules can apply to elements on both sides of the turnstile. (To differentiate from natural deduction, this article uses a double arrow ⇒ instead of the right tack ⊢ for sequents.) The introduction rules of natural deduction are viewed as right rules in the sequent calculus, and are structurally very similar. The elimination rules on the other hand turn into left rules in the sequent calculus. To give an example, consider disjunction; the right rules are familiar: On the left: Recall the ∨E rule of natural deduction in localised form: The proposition A ∨ B, which is the succedent of a premise in ∨E, turns into a hypothesis of the conclusion in the left rule ∨L. Thus, left rules can be seen as a sort of inverted elimination rule. This observation can be illustrated as follows: In the sequent calculus, the left and right rules are performed in lock-step until one reaches the initial sequent, which corresponds to the meeting point of elimination and introduction rules in natural deduction. These initial rules are superficially similar to the hypothesis rule of natural deduction, but in the sequent calculus they describe a transposition or a handshake of a left and a right proposition: The correspondence between the sequent calculus and natural deduction is a pair of soundness and completeness theorems, which are both provable by means of an inductive argument. Soundness of ⇒ wrt. ⊢ If Γ ⇒ A, then Γ ⊢ A. Completeness of ⇒ wrt. 
⊢ If Γ ⊢ A, then Γ ⇒ A. It is clear by these theorems that the sequent calculus does not change the notion of truth, because the same collection of propositions remain true. Thus, one can use the same proof objects as before in sequent calculus derivations. As an example, consider the conjunctions. The right rule is virtually identical to the introduction rule The left rule, however, performs some additional substitutions that are not performed in the corresponding elimination rules. The kinds of proofs generated in the sequent calculus are therefore rather different from those of natural deduction. The sequent calculus produces proofs in what is known as the β-normal η-long form, which corresponds to a canonical representation of the normal form of the natural deduction proof. If one attempts to describe these proofs using natural deduction itself, one obtains what is called the intercalation calculus (first described by John Byrnes), which can be used to formally define the notion of a normal form for natural deduction. The substitution theorem of natural deduction takes the form of a structural rule or structural theorem known as cut in the sequent calculus. Cut (substitution) If Γ ⇒ π1 : A and Γ, u:A ⇒ π2 : C, then Γ ⇒ [π1/u] π2 : C. In most well behaved logics, cut is unnecessary as an inference rule, though it remains provable as a meta-theorem; the superfluousness of the cut rule is usually presented as a computational process, known as cut elimination. This has an interesting application for natural deduction; usually it is extremely tedious to prove certain properties directly in natural deduction because of an unbounded number of cases. For example, consider showing that a given proposition is not provable in natural deduction. A simple inductive argument fails because of rules like ∨E or E which can introduce arbitrary propositions. However, we know that the sequent calculus is complete with respect to natural deduction, so it is enough to show this unprovability in the sequent calculus. Now, if cut is not available as an inference rule, then all sequent rules either introduce a connective on the right or the left, so the depth of a sequent derivation is fully bounded by the connectives in the final conclusion. Thus, showing unprovability is much easier, because there are only a finite number of cases to consider, and each case is composed entirely of sub-propositions of the conclusion. A simple instance of this is the global consistency theorem: "⋅ ⊢ ⊥" is not provable. In the sequent calculus version, this is manifestly true because there is no rule that can have "⋅ ⇒ ⊥" as a conclusion! Proof theorists often prefer to work on cut-free sequent calculus formulations because of such properties. See also Notes References General references (English translation Investigations into Logical Deduction in M. E. Szabo. The Collected Works of Gerhard Gentzen. North-Holland Publishing Company, 1969.) Translated and with appendices by Paul Taylor and Yves Lafont. Reprinted in Polish logic 1920–39, ed. Storrs McCall. Lecture notes to a short course at Università degli Studi di Siena, April 1983. PhD thesis. MSc thesis. Inline citations External links Logical calculi Deductive reasoning Proof theory Methods of proof
Natural deduction
[ "Mathematics" ]
6,679
[ "Methods of proof", "Mathematical logic", "Logical calculi", "Proof theory" ]
51,078
https://en.wikipedia.org/wiki/Thalidomide
Thalidomide, sold under the brand names Contergan and Thalomid among others, is an oral medication used to treat a number of cancers (e.g., multiple myeloma), graft-versus-host disease, and many skin disorders (e.g., complications of leprosy such as skin lesions). Thalidomide has been used to treat conditions associated with HIV: aphthous ulcers, HIV-associated wasting syndrome, diarrhea, and Kaposi's sarcoma, but increases in HIV viral load have been reported. Common side effects include sleepiness, rash, and dizziness. Severe side effects include tumor lysis syndrome, blood clots, and peripheral neuropathy. Thalidomide is a known human teratogen and carries an extremely high risk of severe, life-threatening birth defects if administered or taken during pregnancy. It causes skeletal deformities such as amelia (absence of legs and/or arms), absence of bones, and phocomelia (malformation of the limbs). A single dose of thalidomide, regardless of dosage, is enough to cause teratogenic effects. Thalidomide was first marketed in 1957 in West Germany, where it was available over-the-counter. When first released, thalidomide was promoted for anxiety, trouble sleeping, "tension", and morning sickness. While it was initially thought to be safe in pregnancy, concerns regarding birth defects arose, resulting in its removal from the market in Europe in 1961. The total number of infants severely harmed by thalidomide use during pregnancy is estimated at over 10,000, possibly 20,000, of whom about 40% died around the time of birth. Those who survived had limb, eye, urinary tract, and heart problems. Its initial entry into the US market was prevented by Frances Kelsey, a reviewer at the FDA. The birth defects caused by thalidomide led to the development of greater drug regulation and monitoring in many countries. It was approved in the United States in 1998 for use as a treatment for cancer. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. Medical uses Thalidomide is used as a first-line treatment for multiple myeloma in combination with dexamethasone or with melphalan and prednisone to treat acute episodes of erythema nodosum leprosum, as well as for maintenance therapy. The bacterium that causes tuberculosis (TB) is related to leprosy. Thalidomide may be helpful in some cases where standard TB drugs and corticosteroids are not sufficient to resolve severe inflammation in the brain. It is used as a second-line treatment to manage graft-versus-host disease and aphthous stomatitis in children and has been prescribed for other conditions in children, including actinic prurigo and epidermolysis bullosa; the evidence for these uses is weak. It is recommended only as a third line treatment in graft-versus-host-disease in adults because of lack of efficacy and side effects observed in clinical trials. Contraindications Prescriptions of thalidomide are accompanied by strict measures to avoid any possibility of use during pregnancy, and thalidomide should be avoided in women wanting to conceive. In the United States, the prescribing doctor is required to ensure that contraception is being used and that regular pregnancy tests are taken. Adverse effects Thalidomide causes birth defects. The U.S. 
Food and Drug Administration (FDA) and other regulatory agencies have approved marketing of the drug only with an auditable risk evaluation and mitigation strategy that ensures that people using the drug are aware of the risks and avoid pregnancy; this applies to both men and women, as the drug can be transmitted in semen. There is a high risk that thalidomide can cause excessive blood clots. There is also a high risk that thalidomide can interfere with the production of several types of new blood cells, creating a risk of infection via neutropenia, leukopenia, and lymphopenia, and risks that blood will not clot via thrombocytopenia. There is also a risk of anemia via lack of red blood cells. The drug can also damage nerves, causing potentially irreversible peripheral neuropathy. Thalidomide has several adverse cardiovascular effects, including risk of heart attacks, pulmonary hypertension, and changes in heart rhythm, such as syncope, bradycardia, and atrioventricular block. Thalidomide can cause liver damage and severe skin reactions like Stevens–Johnson syndrome. It tends to make people sleepy, which creates risk when driving and operating other machinery. As it kills cancer cells, it can cause tumor lysis syndrome. Thalidomide can prevent menstruation. In addition, very common (reported in more than 10% of people) adverse effects include tremor, dizziness, tingling, numbness, constipation, and peripheral edema. Common adverse effects (reported by 1–10% of people) include confusion, depressed mood, reduced coordination, heart failure, difficulty breathing, interstitial lung disease, lung inflammation, vomiting, dry mouth, rashes, dry skin, fever, weakness, and a sense of unwellness. Interactions There are no expected pharmacokinetic interactions between thalidomide and other medicines due to its neutral effects on P-glycoprotein and the cytochrome P450 family. It may interact with sedatives due to its sedative action and bradycardic agents, like beta-blockers, due to its bradycardia-inducing effects. The risk of peripheral neuropathy may be increased by concomitant treatment of thalidomide with other agents known to cause peripheral neuropathy. The risk of venous thromboembolisms with thalidomide seems to be increased when patients are treated with oral contraceptives or other cytotoxic agents (including doxorubicin and melphalan) concurrently. Thalidomide may interfere with various contraceptives, and hence it is advised that women of reproductive age use at least two different means of contraception to ensure that no child will be conceived while they are taking thalidomide. Overdose As of 2013, eighteen cases of overdoses had been reported with doses of up to 14.4 grams, none of them fatal. No specific antidote for overdose exists and treatment is purely supportive. Pharmacology The precise mechanism of action for thalidomide was not known until the twenty-first century, although efforts to identify thalidomide's teratogenic action generated more than 2,000 research papers and the proposal of 15 or 16 plausible mechanisms by 2000. The primary mechanism of action of thalidomide and its analogs in both their anti-cancer and teratogenic effects is now known to be as cereblon E3 ligase modulators. Thalidomide also binds to and acts as an antagonist of the androgen receptor and hence is a nonsteroidal antiandrogen of some capacity. In accordance, it can produce gynecomastia and sexual dysfunction as side effects in men. 
Chirality and biological activity Thalidomide is provided as a racemic mixture of two enantiomers; while there are reports that only one of the enantiomers may cause birth defects, the body converts each enantiomer into the other through mechanisms that are not well understood. The (R)-enantiomer has the desired sedative effect, while the (S)-enantiomer harbors embryo-toxic and teratogenic effects. Attempting to extract solely R-thalidomide does not remove the risk of birth defects, as it was demonstrated that the "safe" R-thalidomide undergoes an in vivo chiral inversion to the "teratogenic" S-thalidomide. Under biological conditions, the enantiomers interconvert (bidirectional chiral inversion – (R)- to (S)- and vice versa). Chemistry Thalidomide is racemic; while S-thalidomide is the bioactive form of the molecule, the individual enantiomers can racemize to each other due to the acidic hydrogen at the chiral centre, which is the carbon of the glutarimide ring bonded to the phthalimide substituent. The racemization process can occur in vivo. The process of conversion of one enantiomer to its mirror-image version with no other change in the molecule is called chiral inversion. Celgene Corporation originally synthesized thalidomide using a three-step sequence starting with L-glutamic acid treatment, but this route has since been superseded by one using L-glutamine. In that route, N-carbethoxyphthalimide (1) can react with L-glutamine to yield N-phthaloyl-L-glutamine (2). Cyclization of N-phthaloyl-L-glutamine occurs using carbonyldiimidazole, which then yields thalidomide (3). Celgene Corporation's original method resulted in a 31% yield of S-thalidomide, whereas the two-step synthesis yields 85–93% product that is 99% pure. In 2023, it was reported that phthalic anhydride and L-glutamine can react directly under suitable conditions to form thalidomide. In this procedure, phthalic anhydride and L-glutamine are ground together and added to toluene as the solvent. The mixture, along with triethylamine and acetic anhydride, is refluxed at ~110°C for 9 hours; the product is then obtained by simple vacuum filtration. History In 1952, thalidomide was synthesised by Chemical Industry Basel, but was found "to have no effect on animals" and was discarded on that basis. In 1957, it was acquired by Chemie Grünenthal in Germany. The German company had been established as a soap maker after World War II ended, to address the urgent market need for antibiotics. Heinrich Mückter was appointed to head the discovery program based on his experience working with the German army's antiviral research. While preparing reagents for the work, Mückter's assistant Wilhelm Kunz isolated a by-product that was recognized by pharmacologist Herbert Keller as an analog of glutethimide, a sedative. The medicinal chemistry work turned to improving the lead compound into a suitable drug: the result was thalidomide. The toxicity was examined in several animals, and the drug was introduced in 1956 as a sedative, but it was never tested on pregnant women. Researchers at Chemie Grünenthal found that thalidomide was a particularly effective antiemetic that had an inhibitory effect on morning sickness. On 1 October 1957, the company launched thalidomide and began marketing it under the trade name Contergan. It was proclaimed a "wonder drug" for insomnia, coughs, colds and headaches.
During that period, the use of medications during pregnancy was not strictly controlled, and drugs were not thoroughly tested for potential harm to the fetus. Thousands of pregnant women took the drug to relieve their symptoms. At the time of the drug's development, scientists did not believe any drug taken by a pregnant woman could pass across the placental barrier and harm the developing fetus. There soon appeared reports of abnormalities in children being born to mothers using thalidomide. In late 1959, it was noticed that peripheral neuritis developed in patients who took the drug over a period of time, and it was only after this point that thalidomide ceased to be provided over the counter. While initially considered safe, the drug was responsible for teratogenic deformities in children born after their mothers used it during pregnancies, prior to the third trimester. In November 1961, thalidomide was taken off the market due to massive pressure from the press and public. Experts estimate that thalidomide led to the death of approximately 2,000 children and serious birth defects in more than 10,000 children, with over half of them in West Germany. The regulatory authorities in East Germany never approved thalidomide. One reason for the initially unobserved side effects of the drug and the subsequent approval in West Germany was that at that time drugs did not have to be tested for teratogenic effects. They were tested for toxicity on rodents only, as was usual at the time. In the UK, the British pharmaceutical company The Distillers Company (Biochemicals) Ltd, a subsidiary of Distillers Co. Ltd (now part of Diageo plc), marketed thalidomide throughout the UK, Australia, and New Zealand, under the brand name Distaval, as a remedy for morning sickness. Their advertisement claimed that "Distaval can be given with complete safety to pregnant women and nursing mothers without adverse effect on mother or child ... Outstandingly safe Distaval has been prescribed for nearly three years in this country." Globally, more pharmaceutical companies started to produce and market the drug under license from Chemie Grünenthal. By the mid-1950s, 14 pharmaceutical companies were marketing thalidomide in 46 countries under at least 37 different trade names. In the US, representatives from Chemie Grünenthal approached Smith, Kline & French (SKF), now GlaxoSmithKline, with a request to market and distribute the drug in North America. A memorandum, rediscovered in 2010 in the archives of the FDA, shows that in 1956–57, as part of its in-licensing approach, Smith, Kline and French conducted animal tests and ran a clinical trial of the drug in the US involving 875 people, including pregnant women. In 1956, researchers involved in clinical trials at SKF noted that, even when used in very high doses, thalidomide could not induce sleep in mice. When administered at doses 50 to 650 times larger than that claimed by Chemie Grünenthal to be "sleep-inducing", the researchers could still not achieve the hypnotic effect in animals that it had on humans. After completion of the trial, and based on reasons kept hidden for decades, SKF declined to commercialize the drug. In 1958, Chemie Grünenthal reached an agreement with the William S. Merrell Company in Cincinnati, Ohio (later Richardson-Merrell, now part of Sanofi), to market and distribute thalidomide throughout the US. The US FDA refused to approve thalidomide for marketing and distribution. 
However, the drug was distributed in large quantities for testing purposes, after the American distributor and manufacturer Richardson-Merrell had applied for its approval in September 1960. The official in charge of the FDA review, Frances Oldham Kelsey, did not rely on information from the company, which did not include any test results. Richardson-Merrell was called on to perform tests and report the results. The company demanded approval six times and was refused each time. The distribution for "testing" resulted in 17 children born in the US with thalidomide-induced malformations. Oldham Kelsey was awarded the President's Award for Distinguished Federal Civilian Service by President Kennedy in 1962 for not allowing thalidomide to be approved for sale in the US. She was also inducted into the National Women's Hall of Fame in 2000. Canada's Food and Drug Directorate approved the sale of thalidomide by prescription in November 1960. There were many different forms sold: Kevadon, produced by the William S. Merrell Company seeking approval for its thalidomide product, was released on the market in April 1961, and the most common variant (Horner's Talimol) was put on the market on October 23 of the same year. Two months after Talimol went on sale, pharmaceutical companies sent physicians letters warning about the risk of birth defects. It was not until March 1962 that both drugs were banned from the Canadian market by the directorate, and soon afterward physicians were warned to destroy their supplies. Leprosy treatment In 1964, Israeli physician Jacob Sheskin administered thalidomide to a patient critically ill with leprosy. The patient exhibited erythema nodosum leprosum (ENL), a painful skin condition, one of the complications of leprosy. The treatment was attempted despite the ban on thalidomide's use, and the results were favourable: the patient slept for hours and was able to get out of bed without aid upon awakening. A clinical trial studying the use of thalidomide in leprosy soon followed. Thalidomide has been used by Brazilian physicians as the drug of choice for the treatment of severe ENL since 1965, and by 1996, at least 33 cases of thalidomide embryopathy were recorded in people born in Brazil after 1965. Since 1994, the production, dispensing, and prescription of thalidomide have been strictly controlled, requiring women to use two forms of birth control and submit to regular pregnancy tests. Despite this, cases of thalidomide embryopathy continue, with at least 100 cases identified in Brazil between 2005 and 2010. 5.8 million thalidomide pills were distributed throughout Brazil in this time period, largely to poor Brazilians in areas with little access to healthcare, and these cases have occurred despite the controls. In 1998, the FDA approved the drug's use in the treatment of ENL. Because of thalidomide's potential for causing birth defects, the drug may be distributed only under tightly controlled conditions. The FDA required that Celgene Corporation, which planned to market thalidomide under the brand name Thalomid, establish a system for thalidomide education and prescribing safety (STEPS) oversight program. The conditions required under the program include limiting prescription and dispensing rights to authorized prescribers and pharmacies only, keeping a registry of all patients prescribed thalidomide, providing extensive patient education about the risks associated with the drug, and providing periodic pregnancy tests for women who take the drug. 
In 2010, the World Health Organization stated that it did not recommend thalidomide for leprosy due to the difficulty of adequately controlling its use, and due to the availability of clofazimine. Cancer treatment Shortly after the teratogenic properties of thalidomide were recognized in the mid-1960s, its anti-cancer potential was explored and two clinical trials were conducted in people with advanced cancer, including some people with multiple myeloma; the trials were inconclusive. Little further work was done with thalidomide in cancer until the 1990s. Judah Folkman pioneered studies into the role of angiogenesis (the proliferation and growth of blood vessels) in the development of cancer, and in the early 1970s had shown that solid tumors could not expand without it. In 1993 he surprised the scientific world by hypothesizing the same was true of blood cancers, and the next year he published work showing that a biomarker of angiogenesis was higher in all people with cancer, but especially high in people with blood cancers, and other evidence emerged as well. Meanwhile, a member of his lab, Robert D'Amato, who was looking for angiogenesis inhibitors, discovered in 1994 that thalidomide inhibited angiogenesis and was effective in suppressing tumor growth in rabbits. Around that time, the wife of a man who was dying of multiple myeloma and whom standard treatments had failed, called Folkman asking him about his anti-angiogenesis ideas. Folkman persuaded the patient's doctor to try thalidomide, and that doctor conducted a clinical trial of thalidomide for people with multiple myeloma in which about a third of the subjects responded to the treatment. The results of that trial were published in the New England Journal of Medicine in 1999. After further work was done by Celgene and others, in 2006 the US Food and Drug Administration granted accelerated approval for thalidomide in combination with dexamethasone for the treatment of newly diagnosed multiple myeloma patients. It was also evaluated whether thalidomide can be combined with melphalan and prednisone for patients with multiple myeloma. This combination of drugs probably increases the overall survival. Society and culture Birth defect crisis In the late 1950s and early 1960s, more than 10,000 children in 46 countries were born with deformities, such as phocomelia, as a consequence of thalidomide use. The severity and location of the deformities depended on how many days into the pregnancy the mother was before beginning treatment, with the time-sensitive window occurring approximately between day 20 and day 36 post-fertilisation. Thalidomide taken on the 20th day of pregnancy caused central brain damage, day 21 would damage the eyes, day 22 the ears and face, day 24 the arms, and leg damage would occur if taken up to day 28. It is not known exactly how many worldwide victims of the drug there have been, although estimates range from 10,000 to 20,000. Despite the side effects, thalidomide was sold in pharmacies in Canada until 1962. Notable cases Lorraine Mercer MBE of the United Kingdom, born with phocomelia of both arms and legs, is the only thalidomide survivor to carry the Olympic Torch. Thomas Quasthoff, an internationally acclaimed bass-baritone, describes himself: "1.34 meters tall, short arms, seven fingers — four right, three left — large, relatively well-formed head, brown eyes, distinctive lips; profession: singer". 
Niko von Glasow produced a documentary called NoBody's Perfect, based on the lives of 12 people affected by the drug, which was released in 2008. Mercédes Benegbi, born with phocomelia of both arms, drove the successful campaign for compensation from her government for Canadians who were affected by thalidomide. Mat Fraser, born with phocomelia of both arms, is an English rock musician, actor, writer and performance artist. He produced a 2002 television documentary, "Born Freak", which looked at the historical freak-show tradition and its relevance to modern disabled performers. This work has become the subject of academic analysis in the field of disability studies. Sue Kent, born in 1963 with phocomelia of both arms, which are eight inches long, with no thumbs and seven fingers (three on one hand, four on the other), has appeared as a presenter on the BBC TV show Gardeners' World since 2020, demonstrating her ability to garden using her feet and toes where others would use their hands. Christian Lohr, born in 1962 with phocomelia of both arms and both legs, is a Swiss politician who served for 14 years in the legislature of the Canton of Thurgau, including two years as its president, and has been a member of the national legislature since 2011. Change in drug regulations The disaster prompted many countries to introduce tougher rules for the testing and licensing of drugs, such as the 1962 Kefauver Harris Amendment (US), the 1965 Directive 65/65/EEC (EU), and the Medicines Act 1968 (UK). In the United States, the new regulations strengthened the FDA, among other ways, by requiring applicants to prove efficacy and to disclose all side effects encountered in testing. The FDA subsequently initiated the Drug Efficacy Study Implementation to reclassify drugs already on the market. Impact on research involving women In 1977 the US Food and Drug Administration published a clinical trial guideline that excluded women of "childbearing potential" from the early phases of most clinical trials, which in practice led to their exclusion from later trial phases as well. This 1977 FDA guideline was implemented in response to a protectionist climate caused by the thalidomide tragedy. In the 1980s, a US task force on women's health concluded that a lack of women's health research (in part due to the FDA guideline) had compromised the amount and quality of information available about diseases and treatments affecting women. This led to the National Institutes of Health policy that women should, when beneficial, be included in clinical trials. Quality of life In the late 1950s and early 1960s, thalidomide was successfully marketed as a safer alternative to barbiturates. Due to a successful marketing campaign, thalidomide was widely used by pregnant women during the first trimester of pregnancy. However, thalidomide is a teratogenic substance, and a proportion of children born during that period had thalidomide embryopathy (TE). Of these babies born with TE, "about 40% of them died before their first birthday". The surviving individuals are now middle-aged and they report experiencing challenges (physical, psychological, and socioeconomic) related to TE. Individuals born with TE frequently experience a wide variety of health problems secondary to their TE. These health conditions include both physical and psychological conditions. When compared to individuals of similar demographic profiles, those born with TE report less satisfaction with their quality of life and their overall health.
Access to healthcare services can also be a challenge for these people, and women, in particular, have experienced difficulty in locating healthcare professionals who can understand and embrace their needs. Brand names Brand names include Contergan, Thalomid, Talidex, Talizer, Neurosedyn, Distaval, and many others. Research Research efforts have been focused on determining how thalidomide causes birth defects and its other activities in the human body, efforts to develop safer analogs, and efforts to find further uses for thalidomide. Thalidomide analogs The exploration of the antiangiogenic and immunomodulatory activities of thalidomide has led to the study and creation of thalidomide analogs. Celgene has sponsored numerous clinical trials with analogues of thalidomide, such as lenalidomide, that are substantially more powerful and have fewer side effects, except for greater myelosuppression. In 2005, Celgene received FDA approval for lenalidomide (Revlimid) as the first commercially useful derivative. Revlimid is available only in a restricted distribution setting to avoid its use during pregnancy. Further studies are being conducted to find safer compounds with useful qualities. Another more potent analog, pomalidomide, is now FDA-approved. Additionally, apremilast was approved by the FDA in March 2014. These thalidomide analogs can be used to treat different diseases, or used in a regimen to fight two conditions. Interest turned to pomalidomide, a derivative of thalidomide marketed by Celgene. It is a very active anti-angiogenic agent and also acts as an immunomodulator. Pomalidomide was approved in February 2013 by the FDA as a treatment for relapsed and refractory multiple myeloma. It received a similar approval from the European Commission in August 2013, and is expected to be marketed in Europe under the brand name Imnovid. Clinical research There is no conclusive evidence that thalidomide or lenalidomide is useful to bring about or maintain remission in Crohn's disease. Thalidomide was studied in a Phase II trial for Kaposi's sarcoma, a rare soft-tissue cancer most commonly seen in the immunocompromised that is caused by the Kaposi's sarcoma-associated herpesvirus (KSHV). Other conditions in which thalidomide has been studied include AIDS wasting syndrome and associated diarrhea, renal cell carcinoma (RCC), glioblastoma multiforme, prostate cancer, melanoma, colorectal cancer, Crohn's disease, rheumatoid arthritis, Behçet's syndrome, breast cancer, head and neck cancer, ovarian cancer, chronic heart failure, graft-versus-host disease, and tuberculous meningitis.
Thalidomide
[ "Physics", "Chemistry", "Biology" ]
5,999
[ "Pharmacology", "Racemic mixtures", "Withdrawn drugs", "Origin of life", "Biochemistry", "Stereochemistry", "Drug safety", "Chirality", "Chemical mixtures", "Asymmetry", "Biological hypotheses", "Teratogens", "Symmetry" ]
51,079
https://en.wikipedia.org/wiki/Magnet
A magnet is a material or object that produces a magnetic field. This magnetic field is invisible but is responsible for the most notable property of a magnet: a force that pulls on other ferromagnetic materials, such as iron, steel, nickel, cobalt, etc. and attracts or repels other magnets. A permanent magnet is an object made from a material that is magnetized and creates its own persistent magnetic field. An everyday example is a refrigerator magnet used to hold notes on a refrigerator door. Materials that can be magnetized, which are also the ones that are strongly attracted to a magnet, are called ferromagnetic (or ferrimagnetic). These include the elements iron, nickel and cobalt and their alloys, some alloys of rare-earth metals, and some naturally occurring minerals such as lodestone. Although ferromagnetic (and ferrimagnetic) materials are the only ones attracted to a magnet strongly enough to be commonly considered magnetic, all other substances respond weakly to a magnetic field, by one of several other types of magnetism. Ferromagnetic materials can be divided into magnetically "soft" materials like annealed iron, which can be magnetized but do not tend to stay magnetized, and magnetically "hard" materials, which do. Permanent magnets are made from "hard" ferromagnetic materials such as alnico and ferrite that are subjected to special processing in a strong magnetic field during manufacture to align their internal microcrystalline structure, making them very hard to demagnetize. To demagnetize a saturated magnet, a certain magnetic field must be applied, and this threshold depends on coercivity of the respective material. "Hard" materials have high coercivity, whereas "soft" materials have low coercivity. The overall strength of a magnet is measured by its magnetic moment or, alternatively, the total magnetic flux it produces. The local strength of magnetism in a material is measured by its magnetization. An electromagnet is made from a coil of wire that acts as a magnet when an electric current passes through it but stops being a magnet when the current stops. Often, the coil is wrapped around a core of "soft" ferromagnetic material such as mild steel, which greatly enhances the magnetic field produced by the coil. Discovery and development Ancient people learned about magnetism from lodestones (or magnetite) which are naturally magnetized pieces of iron ore. The word magnet was adopted in Middle English from Latin magnetum "lodestone", ultimately from Greek (magnētis [lithos]) meaning "[stone] from Magnesia", a place in Anatolia where lodestones were found (today Manisa in modern-day Turkey). Lodestones, suspended so they could turn, were the first magnetic compasses. The earliest known surviving descriptions of magnets and their properties are from Anatolia, India, and China around 2,500 years ago. The properties of lodestones and their affinity for iron were written of by Pliny the Elder in his encyclopedia Naturalis Historia in the 1st century AD. In 11th century China, it was discovered that quenching red hot iron in the Earth's magnetic field would leave the iron permanently magnetized. This led to the development of the navigational compass, as described in Dream Pool Essays in 1088. By the 12th to 13th centuries AD, magnetic compasses were used in navigation in China, Europe, the Arabian Peninsula and elsewhere. A straight iron magnet tends to demagnetize itself by its own magnetic field. To overcome this, the horseshoe magnet was invented by Daniel Bernoulli in 1743. 
A horseshoe magnet avoids demagnetization by returning the magnetic field lines to the opposite pole. In 1820, Hans Christian Ørsted discovered that a compass needle is deflected by a nearby electric current. In the same year André-Marie Ampère showed that iron can be magnetized by inserting it in an electrically fed solenoid. This led William Sturgeon to develop an iron-cored electromagnet in 1824. Joseph Henry further developed the electromagnet into a commercial product in 1830–1831, giving people access to strong magnetic fields for the first time. In 1831 he built an ore separator with an electromagnet capable of lifting . Physics Magnetic field The magnetic flux density (also called magnetic B field or just magnetic field, usually denoted by B) is a vector field. The magnetic B field vector at a given point in space is specified by two properties: Its direction, which is along the orientation of a compass needle. Its magnitude (also called strength), which is proportional to how strongly the compass needle orients along that direction. In SI units, the strength of the magnetic B field is given in teslas. Magnetic moment A magnet's magnetic moment (also called magnetic dipole moment and usually denoted μ) is a vector that characterizes the magnet's overall magnetic properties. For a bar magnet, the direction of the magnetic moment points from the magnet's south pole to its north pole, and the magnitude relates to how strong and how far apart these poles are. In SI units, the magnetic moment is specified in terms of A·m2 (amperes times meters squared). A magnet both produces its own magnetic field and responds to magnetic fields. The strength of the magnetic field it produces is at any given point proportional to the magnitude of its magnetic moment. In addition, when the magnet is put into an external magnetic field, produced by a different source, it is subject to a torque tending to orient the magnetic moment parallel to the field. The amount of this torque is proportional both to the magnetic moment and the external field. A magnet may also be subject to a force driving it in one direction or another, according to the positions and orientations of the magnet and source. If the field is uniform in space, the magnet is subject to no net force, although it is subject to a torque. A wire in the shape of a circle with area A and carrying current I has a magnetic moment of magnitude equal to IA. Magnetization The magnetization of a magnetized material is the local value of its magnetic moment per unit volume, usually denoted M, with units A/m. It is a vector field, rather than just a vector (like the magnetic moment), because different areas in a magnet can be magnetized with different directions and strengths (for example, because of domains, see below). A good bar magnet may have a magnetic moment of magnitude 0.1 A·m2 and a volume of 1 cm3, or 1×10−6 m3, and therefore an average magnetization magnitude is 100,000 A/m. Iron can have a magnetization of around a million amperes per meter. Such a large value explains why iron magnets are so effective at producing magnetic fields. Modelling magnets Two different models exist for magnets: magnetic poles and atomic currents. Although for many purposes it is convenient to think of a magnet as having distinct north and south magnetic poles, the concept of poles should not be taken literally: it is merely a way of referring to the two different ends of a magnet. The magnet does not have distinct north or south particles on opposing sides. 
If a bar magnet is broken into two pieces, in an attempt to separate the north and south poles, the result will be two bar magnets, each of which has both a north and south pole. However, a version of the magnetic-pole approach is used by professional magneticians to design permanent magnets. In this approach, the divergence of the magnetization ∇·M inside a magnet is treated as a distribution of magnetic monopoles. This is a mathematical convenience and does not imply that there are actually monopoles in the magnet. If the magnetic-pole distribution is known, then the pole model gives the magnetic field H. Outside the magnet, the field B is proportional to H, while inside the magnetization must be added to H. An extension of this method that allows for internal magnetic charges is used in theories of ferromagnetism. Another model is the Ampère model, where all magnetization is due to the effect of microscopic, or atomic, circular bound currents, also called Ampèrian currents, throughout the material. For a uniformly magnetized cylindrical bar magnet, the net effect of the microscopic bound currents is to make the magnet behave as if there is a macroscopic sheet of electric current flowing around the surface, with local flow direction normal to the cylinder axis. Microscopic currents in atoms inside the material are generally canceled by currents in neighboring atoms, so only the surface makes a net contribution; shaving off the outer layer of a magnet will not destroy its magnetic field, but will leave a new surface of uncancelled currents from the circular currents throughout the material. The right-hand rule tells which direction positively-charged current flows. However, current due to negatively-charged electricity is far more prevalent in practice. Polarity The north pole of a magnet is defined as the pole that, when the magnet is freely suspended, points towards the Earth's North Magnetic Pole in the Arctic (the magnetic and geographic poles do not coincide, see magnetic declination). Since opposite poles (north and south) attract, the North Magnetic Pole is actually the south pole of the Earth's magnetic field. As a practical matter, to tell which pole of a magnet is north and which is south, it is not necessary to use the Earth's magnetic field at all. For example, one method would be to compare it to an electromagnet, whose poles can be identified by the right-hand rule. The magnetic field lines of a magnet are considered by convention to emerge from the magnet's north pole and reenter at the south pole. Magnetic materials The term magnet is typically reserved for objects that produce their own persistent magnetic field even in the absence of an applied magnetic field. Only certain classes of materials can do this. Most materials, however, produce a magnetic field in response to an applied magnetic field – a phenomenon known as magnetism. There are several types of magnetism, and all materials exhibit at least one of them. The overall magnetic behavior of a material can vary widely, depending on the structure of the material, particularly on its electron configuration. Several forms of magnetic behavior have been observed in different materials, including: Ferromagnetic and ferrimagnetic materials are the ones normally thought of as magnetic; they are attracted to a magnet strongly enough that the attraction can be felt. These materials are the only ones that can retain magnetization and become magnets; a common example is a traditional refrigerator magnet. 
Ferrimagnetic materials, which include ferrites and the longest-used naturally occurring magnetic materials, magnetite and lodestone, are similar to but weaker than ferromagnetics. The difference between ferro- and ferrimagnetic materials is related to their microscopic structure, as explained in Magnetism. Paramagnetic substances, such as platinum, aluminum, and oxygen, are weakly attracted to either pole of a magnet. This attraction is hundreds of thousands of times weaker than that of ferromagnetic materials, so it can only be detected by using sensitive instruments or using extremely strong magnets. Magnetic ferrofluids, although they are made of tiny ferromagnetic particles suspended in liquid, are sometimes considered paramagnetic since they cannot be permanently magnetized. Diamagnetic means repelled by both poles. Compared to paramagnetic and ferromagnetic substances, diamagnetic substances, such as carbon, copper, water, and plastic, are even more weakly repelled by a magnet. The permeability of diamagnetic materials is less than the permeability of a vacuum. All substances not possessing one of the other types of magnetism are diamagnetic; this includes most substances. Although force on a diamagnetic object from an ordinary magnet is far too weak to be felt, using extremely strong superconducting magnets, diamagnetic objects such as pieces of lead and even mice can be levitated, so they float in mid-air. Superconductors repel magnetic fields from their interior and are strongly diamagnetic. There are various other types of magnetism, such as spin glass, superparamagnetism, superdiamagnetism, and metamagnetism. Shape The shape of a permanent magnet has a large influence on its magnetic properties. When a magnet is magnetized, a demagnetizing field will be created inside it. As the name suggests, the demagnetizing field will work to demagnetize the magnet, decreasing its magnetic properties. The strength of the demagnetizing field is proportional to the magnet's magnetization and depends on the magnet's shape, according to Hd = −Nd M. Here, Nd is called the demagnetizing factor, and has a different value depending on the magnet's shape. For example, if the magnet is a sphere, then Nd = 1/3. The value of the demagnetizing factor also depends on the direction of the magnetization in relation to the magnet's shape. Since a sphere is symmetrical from all angles, the demagnetizing factor only has one value. But a magnet that is shaped like a long cylinder will yield two different demagnetizing factors, depending on whether it is magnetized parallel or perpendicular to its length. Common uses Magnetic recording media: VHS tapes contain a reel of magnetic tape. The information that makes up the video and sound is encoded on the magnetic coating on the tape. Common audio cassettes also rely on magnetic tape. Similarly, in computers, floppy disks and hard disks record data on a thin magnetic coating. Credit, debit, and automatic teller machine cards: All of these cards have a magnetic strip on one side. This strip encodes the information to contact an individual's financial institution and connect with their account(s). Older types of televisions (non flat screen) and older large computer monitors: TV and computer screens containing a cathode-ray tube employ an electromagnet to guide electrons to the screen. Speakers and microphones: Most speakers employ a permanent magnet and a current-carrying coil to convert electric energy (the signal) into mechanical energy (movement that creates the sound).
The coil is wrapped around a bobbin attached to the speaker cone and carries the signal as changing current that interacts with the field of the permanent magnet. The voice coil feels a magnetic force and in response, moves the cone and pressurizes the neighboring air, thus generating sound. Dynamic microphones employ the same concept, but in reverse. A microphone has a diaphragm or membrane attached to a coil of wire. The coil rests inside a specially shaped magnet. When sound vibrates the membrane, the coil is vibrated as well. As the coil moves through the magnetic field, a voltage is induced across the coil. This voltage drives a current in the wire that is characteristic of the original sound. Electric guitars use magnetic pickups to transduce the vibration of guitar strings into electric current that can then be amplified. This is different from the principle behind the speaker and dynamic microphone because the vibrations are sensed directly by the magnet, and a diaphragm is not employed. The Hammond organ used a similar principle, with rotating tonewheels instead of strings. Electric motors and generators: Some electric motors rely upon a combination of an electromagnet and a permanent magnet, and, much like loudspeakers, they convert electric energy into mechanical energy. A generator is the reverse: it converts mechanical energy into electric energy by moving a conductor through a magnetic field. Medicine: Hospitals use magnetic resonance imaging to spot problems in a patient's organs without invasive surgery. Chemistry: Chemists use nuclear magnetic resonance to characterize synthesized compounds. Chucks are used in the metalworking field to hold objects. Magnets are also used in other types of fastening devices, such as the magnetic base, the magnetic clamp and the refrigerator magnet. Compasses: A compass (or mariner's compass) is a magnetized pointer free to align itself with a magnetic field, most commonly Earth's magnetic field. Art: Vinyl magnet sheets may be attached to paintings, photographs, and other ornamental articles, allowing them to be attached to refrigerators and other metal surfaces. Objects and paint can be applied directly to the magnet surface to create collage pieces of art. Metal magnetic boards, strips, doors, microwave ovens, dishwashers, cars, metal I beams, and any metal surface can be used magnetic vinyl art. Science projects: Many topic questions are based on magnets, including the repulsion of current-carrying wires, the effect of temperature, and motors involving magnets. Toys: Given their ability to counteract the force of gravity at close range, magnets are often employed in children's toys, such as the Magnet Space Wheel and Levitron, to amusing effect. Refrigerator magnets are used to adorn kitchens, as a souvenir, or simply to hold a note or photo to the refrigerator door. Magnets can be used to make jewelry. Necklaces and bracelets can have a magnetic clasp, or may be constructed entirely from a linked series of magnets and ferrous beads. Magnets can pick up magnetic items (iron nails, staples, tacks, paper clips) that are either too small, too hard to reach, or too thin for fingers to hold. Some screwdrivers are magnetized for this purpose. Magnets can be used in scrap and salvage operations to separate magnetic metals (iron, cobalt, and nickel) from non-magnetic metals (aluminum, non-ferrous alloys, etc.). 
The same idea can be used in the so-called "magnet test", in which a car chassis is inspected with a magnet to detect areas repaired using fiberglass or plastic putty. Magnets are found in process industries, food manufacturing especially, in order to remove metal foreign bodies from materials entering the process (raw materials) or to detect a possible contamination at the end of the process and prior to packaging. They constitute an important layer of protection for the process equipment and for the final consumer. Magnetic levitation transport, or maglev, is a form of transportation that suspends, guides and propels vehicles (especially trains) through electromagnetic force. Eliminating rolling resistance increases efficiency. The maximum recorded speed of a maglev train is . Magnets may be used to serve as a fail-safe device for some cable connections. For example, the power cords of some laptops are magnetic to prevent accidental damage to the port when tripped over. The MagSafe power connection to the Apple MacBook is one such example. Medical issues and safety Because human tissues have a very low level of susceptibility to static magnetic fields, there is little mainstream scientific evidence showing a health effect associated with exposure to static fields. Dynamic magnetic fields may be a different issue, however; correlations between electromagnetic radiation and cancer rates have been postulated due to demographic correlations (see Electromagnetic radiation and health). If a ferromagnetic foreign body is present in human tissue, an external magnetic field interacting with it can pose a serious safety risk. A different type of indirect magnetic health risk exists involving pacemakers. If a pacemaker has been embedded in a patient's chest (usually for the purpose of monitoring and regulating the heart for steady electrically induced beats), care should be taken to keep it away from magnetic fields. It is for this reason that a patient with the device installed cannot be tested with the use of a magnetic resonance imaging device. Children sometimes swallow small magnets from toys, and this can be hazardous if two or more magnets are swallowed, as the magnets can pinch or puncture internal tissues. Magnetic imaging devices (e.g. MRIs) generate enormous magnetic fields, and therefore rooms intended to hold them exclude ferrous metals. Bringing objects made of ferrous metals (such as oxygen canisters) into such a room creates a severe safety risk, as those objects may be powerfully thrown about by the intense magnetic fields. Magnetizing ferromagnets Ferromagnetic materials can be magnetized in the following ways: Heating the object higher than its Curie temperature, allowing it to cool in a magnetic field and hammering it as it cools. This is the most effective method and is similar to the industrial processes used to create permanent magnets. Placing the item in an external magnetic field will result in the item retaining some of the magnetism on removal. Vibration has been shown to increase the effect. Ferrous materials aligned with the Earth's magnetic field that are subject to vibration (e.g., frame of a conveyor) have been shown to acquire significant residual magnetism. Likewise, striking a steel nail held by fingers in a N-S direction with a hammer will temporarily magnetize the nail. 
Stroking: An existing magnet is moved from one end of the item to the other repeatedly in the same direction (single touch method) or two magnets are moved outwards from the center of a third (double touch method). Electric Current: The magnetic field produced by passing an electric current through a coil can get domains to line up. Once all of the domains are lined up, increasing the current will not increase the magnetization. Demagnetizing ferromagnets Magnetized ferromagnetic materials can be demagnetized (or degaussed) in the following ways: Heating a magnet past its Curie temperature; the molecular motion destroys the alignment of the magnetic domains, completely demagnetizing it Placing the magnet in an alternating magnetic field with intensity above the material's coercivity and then either slowly drawing the magnet out or slowly decreasing the magnetic field to zero. This is the principle used in commercial demagnetizers to demagnetize tools, erase credit cards, hard disks, and degaussing coils used to demagnetize CRTs. Some demagnetization or reverse magnetization will occur if any part of the magnet is subjected to a reverse field above the magnetic material's coercivity. Demagnetization progressively occurs if the magnet is subjected to cyclic fields sufficient to move the magnet away from the linear part on the second quadrant of the B–H curve of the magnetic material (the demagnetization curve). Hammering or jarring: mechanical disturbance tends to randomize the magnetic domains and reduce magnetization of an object, but may cause unacceptable damage. Types of permanent magnets Magnetic metallic elements Many materials have unpaired electron spins, and the majority of these materials are paramagnetic. When the spins interact with each other in such a way that the spins align spontaneously, the materials are called ferromagnetic (what is often loosely termed as magnetic). Because of the way their regular crystalline atomic structure causes their spins to interact, some metals are ferromagnetic when found in their natural states, as ores. These include iron ore (magnetite or lodestone), cobalt and nickel, as well as the rare earth metals gadolinium and dysprosium (when at a very low temperature). Such naturally occurring ferromagnets were used in the first experiments with magnetism. Technology has since expanded the availability of magnetic materials to include various man-made products, all based, however, on naturally magnetic elements. Composites Ceramic, or ferrite, magnets are made of a sintered composite of powdered iron oxide and barium/strontium carbonate ceramic. Given the low cost of the materials and manufacturing methods, inexpensive magnets (or non-magnetized ferromagnetic cores, for use in electronic components such as portable AM radio antennas) of various shapes can be easily mass-produced. The resulting magnets are non-corroding but brittle and must be treated like other ceramics. Alnico magnets are made by casting or sintering a combination of aluminium, nickel and cobalt with iron and small amounts of other elements added to enhance the properties of the magnet. Sintering offers superior mechanical characteristics, whereas casting delivers higher magnetic fields and allows for the design of intricate shapes. Alnico magnets resist corrosion and have physical properties more forgiving than ferrite, but not quite as desirable as a metal. Trade names for alloys in this family include: Alni, Alcomax, Hycomax, Columax, and Ticonal. 
Injection-molded magnets are a composite of various types of resin and magnetic powders, allowing parts of complex shapes to be manufactured by injection molding. The physical and magnetic properties of the product depend on the raw materials, but are generally lower in magnetic strength and resemble plastics in their physical properties. Flexible magnet Flexible magnets are composed of a high-coercivity ferromagnetic compound (usually ferric oxide) mixed with a resinous polymer binder. This is extruded as a sheet and passed over a line of powerful cylindrical permanent magnets. These magnets are arranged in a stack with alternating magnetic poles facing up (N, S, N, S...) on a rotating shaft. This impresses the plastic sheet with the magnetic poles in an alternating line format. No electromagnetism is used to generate the magnets. The pole-to-pole distance is on the order of 5 mm, but varies with manufacturer. These magnets are lower in magnetic strength but can be very flexible, depending on the binder used. For magnetic compounds (e.g. Nd2Fe14B) that are vulnerable to a grain boundary corrosion problem it gives additional protection. Rare-earth magnets Rare earth (lanthanoid) elements have a partially occupied f electron shell (which can accommodate up to 14 electrons). The spin of these electrons can be aligned, resulting in very strong magnetic fields, and therefore, these elements are used in compact high-strength magnets where their higher price is not a concern. The most common types of rare-earth magnets are samarium–cobalt and neodymium–iron–boron (NIB) magnets. Single-molecule magnets (SMMs) and single-chain magnets (SCMs) In the 1990s, it was discovered that certain molecules containing paramagnetic metal ions are capable of storing a magnetic moment at very low temperatures. These are very different from conventional magnets that store information at a magnetic domain level and theoretically could provide a far denser storage medium than conventional magnets. In this direction, research on monolayers of SMMs is currently under way. Very briefly, the two main attributes of an SMM are: a large ground state spin value (S), which is provided by ferromagnetic or ferrimagnetic coupling between the paramagnetic metal centres a negative value of the anisotropy of the zero field splitting (D) Most SMMs contain manganese but can also be found with vanadium, iron, nickel and cobalt clusters. More recently, it has been found that some chain systems can also display a magnetization that persists for long times at higher temperatures. These systems have been called single-chain magnets. Nano-structured magnets Some nano-structured materials exhibit energy waves, called magnons, that coalesce into a common ground state in the manner of a Bose–Einstein condensate. Rare-earth-free permanent magnets The United States Department of Energy has identified a need to find substitutes for rare-earth metals in permanent-magnet technology, and has begun funding such research. The Advanced Research Projects Agency-Energy (ARPA-E) has sponsored a Rare Earth Alternatives in Critical Technologies (REACT) program to develop alternative materials. In 2011, ARPA-E awarded 31.6 million dollars to fund Rare-Earth Substitute projects. Iron nitrides are promising materials for rare-earth free magnets. Costs The cheapest permanent magnets, allowing for field strengths, are flexible and ceramic magnets, but these are also among the weakest types. 
The ferrite magnets are mainly low-cost magnets since they are made from cheap raw materials: iron oxide and Ba- or Sr-carbonate. However, a new low-cost magnet, Mn–Al alloy, has been developed and is now dominating the low-cost magnets field. It has a higher saturation magnetization than the ferrite magnets. It also has more favorable temperature coefficients, although it can be thermally unstable. Neodymium–iron–boron (NIB) magnets are among the strongest. These cost more per kilogram than most other magnetic materials but, owing to their intense field, are smaller and cheaper in many applications. Temperature Temperature sensitivity varies, but when a magnet is heated to a temperature known as the Curie point, it loses all of its magnetism, even after cooling below that temperature. The magnets can often be remagnetized, however. Additionally, some magnets are brittle and can fracture at high temperatures. The maximum usable temperature is highest for alnico magnets at over , around for ferrite and SmCo, about for NIB and lower for flexible ceramics, but the exact numbers depend on the grade of material. Electromagnets An electromagnet, in its simplest form, is a wire that has been coiled into one or more loops, known as a solenoid. When electric current flows through the wire, a magnetic field is generated. It is concentrated near (and especially inside) the coil, and its field lines are very similar to those of a magnet. The orientation of this effective magnet is determined by the right-hand rule. The magnetic moment and the magnetic field of the electromagnet are proportional to the number of loops of wire, to the cross-section of each loop, and to the current passing through the wire. If the coil of wire is wrapped around a material with no special magnetic properties (e.g., cardboard), it will tend to generate a very weak field. However, if it is wrapped around a soft ferromagnetic material, such as an iron nail, then the net field produced can result in a several hundred- to thousandfold increase of field strength. Uses for electromagnets include particle accelerators, electric motors, junkyard cranes, and magnetic resonance imaging machines. Some applications involve configurations more complex than a simple magnetic dipole; for example, quadrupole and sextupole magnets are used to focus particle beams. Units and calculations For most engineering applications, MKS (rationalized) or SI (Système International) units are commonly used. Two other sets of units, Gaussian and CGS-EMU, are the same for magnetic properties and are commonly used in physics. In all units, it is convenient to employ two types of magnetic field, B and H, as well as the magnetization M, defined as the magnetic moment per unit volume. The magnetic induction field B is given in SI units of teslas (T). B is the magnetic field whose time variation produces, by Faraday's Law, circulating electric fields (which the power companies sell). B also produces a deflection force on moving charged particles (as in TV tubes). The tesla is equivalent to the magnetic flux (in webers) per unit area (in meters squared), thus giving B the unit of a flux density. In CGS, the unit of B is the gauss (G). One tesla equals 10⁴ G (10,000 gauss). The magnetic field H is given in SI units of ampere-turns per meter (A-turn/m). The turns appear because when H is produced by a current-carrying wire, its value is proportional to the number of turns of that wire. In CGS, the unit of H is the oersted (Oe). One A-turn/m equals 4π×10⁻³ Oe.
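As a quick numerical check of the conversions just quoted, the following short sketch (illustrative only; the example field values are assumed, not taken from the article) converts a flux density from teslas to gauss and a magnetizing field from A/m to oersteds.

```python
import math

# Conversions quoted above: 1 T = 10^4 G and 1 A-turn/m = 4*pi*10^-3 Oe.
def tesla_to_gauss(b_tesla: float) -> float:
    return b_tesla * 1.0e4

def ampere_turn_per_metre_to_oersted(h_am: float) -> float:
    return h_am * 4.0 * math.pi * 1.0e-3

# Example (assumed) values: a 1.2 T flux density and a 1000 A/m magnetizing field.
print(tesla_to_gauss(1.2))                        # 12000.0 G
print(ampere_turn_per_metre_to_oersted(1000.0))   # ~12.57 Oe
```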
The magnetization M is given in SI units of amperes per meter (A/m). In CGS, the unit of M is the oersted (Oe). One A/m equals 10⁻³ emu/cm³. A good permanent magnet can have a magnetization as large as a million amperes per meter. In SI units, the relation B = μ0(H + M) holds, where μ0 is the permeability of space, which equals 4π×10⁻⁷ T•m/A. In CGS, it is written as B = H + 4πM. (The pole approach gives μ0H in SI units. A μ0M term in SI must then supplement this μ0H to give the correct field, B, within the magnet. It will agree with the field B calculated using Ampèrian currents). Materials that are not permanent magnets usually satisfy the relation M = χH in SI, where χ is the (dimensionless) magnetic susceptibility. Most non-magnetic materials have a relatively small χ (on the order of a millionth), but soft magnets can have χ on the order of hundreds or thousands. For materials satisfying M = χH, we can also write B = μ0(1 + χ)H = μ0μrH = μH, where μr = 1 + χ is the (dimensionless) relative permeability and μ = μ0μr is the magnetic permeability. Both hard and soft magnets have a more complex, history-dependent, behavior described by what are called hysteresis loops, which give either B vs. H or M vs. H. In CGS, M = χH, but χSI = 4πχCGS, and μ = μr. Caution: in part because there are not enough Roman and Greek symbols, there is no commonly agreed-upon symbol for magnetic pole strength and magnetic moment. The symbol m has been used for both pole strength (unit A•m, where here the upright m is for meter) and for magnetic moment (unit A•m²). The symbol μ has been used in some texts for magnetic permeability and in other texts for magnetic moment. We will use μ for magnetic permeability and m for magnetic moment. For pole strength, we will employ qm. For a bar magnet of cross-section A with uniform magnetization M along its axis, the pole strength is given by qm = MA, so that M can be thought of as a pole strength per unit area. Fields of a magnet Far away from a magnet, the magnetic field created by that magnet is almost always described (to a good approximation) by a dipole field characterized by its total magnetic moment. This is true regardless of the shape of the magnet, so long as the magnetic moment is non-zero. One characteristic of a dipole field is that the strength of the field falls off inversely with the cube of the distance from the magnet's center. Closer to the magnet, the magnetic field becomes more complicated and more dependent on the detailed shape and magnetization of the magnet. Formally, the field can be expressed as a multipole expansion: A dipole field, plus a quadrupole field, plus an octupole field, etc. At close range, many different fields are possible. For example, for a long, skinny bar magnet with its north pole at one end and south pole at the other, the magnetic field near either end falls off inversely with the square of the distance from that pole. Calculating the magnetic force Pull force of a single magnet The strength of a given magnet is sometimes given in terms of its pull force — its ability to pull ferromagnetic objects. The pull force exerted by either an electromagnet or a permanent magnet with no air gap (i.e., the ferromagnetic object is in direct contact with the pole of the magnet) is given by the Maxwell equation: F = B²A/(2μ0), where: F is force (SI unit: newton), A is the cross-sectional area of the pole (in square meters), and B is the magnetic induction exerted by the magnet.
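A minimal numerical sketch of this pull-force formula follows (illustrative only; the 1 T field and 1 cm² pole face are assumed values, not figures from the article).

```python
import math

MU0 = 4.0 * math.pi * 1.0e-7   # permeability of free space, in T*m/A

def maxwell_pull_force(b_tesla: float, pole_area_m2: float) -> float:
    """Pull force F = B^2 * A / (2 * mu0) for a magnet in direct contact
    with a ferromagnetic object (no air gap), as in the equation above."""
    return b_tesla ** 2 * pole_area_m2 / (2.0 * MU0)

# Assumed example: a pole face of 1 cm^2 (1e-4 m^2) at a flux density of 1 T.
force_newtons = maxwell_pull_force(1.0, 1.0e-4)
print(round(force_newtons, 1))          # ~39.8 N
print(round(force_newtons / 9.81, 1))   # ~4.1 kg of liftable mass (F / g)
```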
This pull-force result can be derived using the Gilbert model, which assumes that the pole of the magnet is charged with magnetic monopoles that induce the same in the ferromagnetic object. If a magnet is acting vertically, it can lift a mass m in kilograms given by the simple equation m = B²A/(2μ0g), where g is the gravitational acceleration. Force between two magnetic poles Classically, the force between two magnetic poles is given by F = μ qm1 qm2/(4π r²), where F is force (SI unit: newton) qm1 and qm2 are the magnitudes of magnetic poles (SI unit: ampere-meter) μ is the permeability of the intervening medium (SI unit: tesla meter per ampere, henry per meter or newton per ampere squared) r is the separation (SI unit: meter). The pole description is useful to the engineers designing real-world magnets, but real magnets have a pole distribution more complex than a single north and south. Therefore, implementation of the pole idea is not simple. In some cases, one of the more complex formulae given below will be more useful. Force between two nearby magnetized surfaces of area A The mechanical force between two nearby magnetized surfaces can be calculated with the following equation. The equation is valid only for cases in which the effect of fringing is negligible and the volume of the air gap is much smaller than that of the magnetized material: F = μ0H²A/2 = B²A/(2μ0), where: A is the area of each surface, in m² H is their magnetizing field, in A/m μ0 is the permeability of space, which equals 4π×10⁻⁷ T•m/A B is the flux density, in T. Force between two bar magnets The force between two identical cylindrical bar magnets placed end to end at large distance is approximately F ≈ [B0²A²(L² + R²)]/(πμ0L²) × [1/z² + 1/(z + 2L)² − 2/(z + L)²], where: B0 is the magnetic flux density very close to each pole, in T, A is the area of each pole, in m², L is the length of each magnet, in m, R is the radius of each magnet, in m, and z is the separation between the two magnets, in m. Here B0 = (μ0/2)M relates the flux density at the pole to the magnetization of the magnet. Note that all these formulations are based on the Gilbert model, which is usable only at relatively large distances. In other models (e.g., Ampère's model), a more complicated formulation is used that sometimes cannot be solved analytically. In these cases, numerical methods must be used. Force between two cylindrical magnets For two cylindrical magnets with radius R and length L, with their magnetic dipoles aligned, the force can be asymptotically approximated at large distance by F(x) ≈ (πμ0/4)M²R⁴[1/x² + 1/(x + 2L)² − 2/(x + L)²], where M is the magnetization of the magnets and x is the gap between the magnets. A measurement of the magnetic flux density B0 very close to the magnet is related to M approximately by the formula B0 = μ0M. The effective magnetic dipole moment can be written as m = MV, where V is the volume of the magnet. For a cylinder, this is V = πR²L. When x ≫ L, the point-dipole approximation is obtained, F(x) ≈ 3πμ0M²R⁴L²/(2x⁴) = 3μ0m1m2/(2πx⁴), which matches the expression for the force between two magnetic dipoles. See also Dipole magnet Earnshaw's theorem Electret Electromagnetic field Electromagnetism Halbach array Magnetic nanoparticles Magnetic switch Magneto Magnetochemistry Molecule-based magnets Single-molecule magnet Supermagnet Notes References "The Early History of the Permanent Magnet". Edward Neville Da Costa Andrade, Endeavour, Volume 17, Number 65, January 1958. Contains an excellent description of early methods of producing permanent magnets. "positive pole n". The Concise Oxford English Dictionary. Catherine Soanes and Angus Stevenson. Oxford University Press, 2004. Oxford Reference Online. Oxford University Press. Wayne M. Saslow, Electricity, Magnetism, and Light, Academic (2002).
Chapter 9 discusses magnets and their magnetic fields using the concept of magnetic poles, but it also gives evidence that magnetic poles do not really exist in ordinary matter. Chapters 10 and 11, following what appears to be a 19th-century approach, use the pole concept to obtain the laws describing the magnetism of electric currents. Edward P. Furlani, Permanent Magnet and Electromechanical Devices: Materials, Analysis and Applications, Academic Press Series in Electromagnetism (2001). External links How magnets are made (video) Floating Ring Magnets, Bulletin of the IAPT, Volume 4, No. 6, 145 (June 2012). (Publication of the Indian Association of Physics Teachers). A brief history of electricity and magnetism Types of magnets Magnetism Metallic objects
Magnet
[ "Physics" ]
8,386
[ "Metallic objects", "Physical objects", "Matter" ]
51,084
https://en.wikipedia.org/wiki/Unconventional%20superconductor
Unconventional superconductors are materials that display superconductivity which is not explained by the usual BCS theory or its extension, the Eliashberg theory. The pairing in unconventional superconductors may originate from some other mechanism than the electron–phonon interaction. Alternatively, a superconductor is unconventional if the superconducting order parameter transforms according to a non-trivial irreducible representation of the point group or space group of the system. By definition, superconductors that break additional symmetries beyond U(1) symmetry are known as unconventional superconductors. History The superconducting properties of CeCu2Si2, a type of heavy fermion material, were reported in 1979 by Frank Steglich. For a long time it was believed that CeCu2Si2 was a singlet d-wave superconductor, but since the mid-2010s, this notion has been strongly contested. In the early eighties, many more unconventional heavy fermion superconductors were discovered, including UBe13, UPt3 and URu2Si2. In each of these materials, the anisotropic nature of the pairing was implicated by the power-law dependence of the nuclear magnetic resonance (NMR) relaxation rate and specific heat capacity on temperature. The presence of nodes in the superconducting gap of UPt3 was confirmed in 1986 from the polarization dependence of the ultrasound attenuation. The first unconventional triplet superconductor, organic material (TMTSF)2PF6, was discovered by Denis Jerome, Klaus Bechgaard and coworkers in 1980 (TMTSF = Tetramethyltetraselenafulvalenium, see Fulvalene). Experimental works by Paul Chaikin's and Michael Naughton's groups as well as theoretical analysis of their data by Andrei Lebed have firmly confirmed the unconventional nature of superconducting pairing in (TMTSF)2X (X = PF6, ClO4, etc.) organic materials. High-temperature singlet d-wave superconductivity was discovered by J.G. Bednorz and K.A. Müller in 1986, who found that the lanthanum-based cuprate perovskite material LaBaCuO4 develops superconductivity at a critical temperature (Tc) of approximately 35 K (−238 degrees Celsius). This was well above the highest critical temperature known at the time (Tc = 23 K), and thus the new family of materials was called high-temperature superconductors. Bednorz and Müller received the Nobel Prize in Physics for this discovery in 1987. Since then, many other high-temperature superconductors have been synthesized. LSCO (La2−xSrxCuO4) was discovered the same year (1986). Soon after, in January 1987, yttrium barium copper oxide (YBCO) was discovered to have a Tc of 90 K, the first material to achieve superconductivity above the boiling point of liquid nitrogen (77 K). This was highly significant from the point of view of the technological applications of superconductivity because liquid nitrogen is far less expensive than liquid helium, which is required to cool conventional superconductors down to their critical temperature. In 1988 bismuth strontium calcium copper oxide (BSCCO) with Tc up to 107 K, and thallium barium calcium copper oxide (TBCCO) (T = thallium) with Tc of 125 K were discovered. The current record critical temperature is about Tc = 133 K (−140 °C) at standard pressure, and somewhat higher critical temperatures can be achieved at high pressure. Nevertheless, at present it is considered unlikely that cuprate perovskite materials will achieve room-temperature superconductivity. On the other hand, other unconventional superconductors have been discovered.
These include some that do not superconduct at high temperatures, such as strontium ruthenate Sr2RuO4, but that, like high-temperature superconductors, are unconventional in other ways. (For example, the origin of the attractive force leading to the formation of Cooper pairs may be different from the one postulated in BCS theory.) In addition to this, superconductors that have unusually high values of Tc but that are not cuprate perovskites have been discovered. Some of them may be extreme examples of conventional superconductors (this is suspected of magnesium diboride, MgB2, with Tc = 39 K). Others could display more unconventional features. In 2008, a new class that does not include copper (layered oxypnictide superconductors), for example LaOFeAs, was discovered. An oxypnictide of samarium seemed to have a Tc of about 43 K, which was higher than predicted by BCS theory. Tests at up to 45 T suggested the upper critical field of LaFeAsO0.89F0.11 to be around 64 T. Some other iron-based superconductors do not contain oxygen. The highest-temperature superconductor at ambient pressure is a cuprate-perovskite material, mercury barium calcium copper oxide (HgBa2Ca2Cu3Ox), with a Tc of 138 K, and possibly 164 K under high pressure. Other unconventional superconductors not based on the cuprate structure have also been found. Some have unusually high values of the critical temperature, Tc, and hence they are sometimes also called high-temperature superconductors. Graphene In 2017, scanning tunneling microscopy and spectroscopy experiments on graphene proximitized to the electron-doped (non-chiral) d-wave superconductor Pr2−xCexCuO4 (PCCO) revealed evidence for an unconventional superconducting density of states induced in graphene. Publications in March 2018 provided evidence for unconventional superconducting properties of a graphene bilayer where one layer was offset by a "magic angle" of 1.1° relative to the other. Ongoing research While the mechanism responsible for conventional superconductivity is well described by the BCS theory, the mechanism for unconventional superconductivity is still unknown. After more than twenty years of intense research, the origin of high-temperature superconductivity is still not clear, being one of the major unsolved problems of theoretical condensed matter physics. It appears that, unlike conventional superconductivity driven by electron–phonon attraction, genuine electronic mechanisms (such as antiferromagnetic correlations) are at play. Moreover, d-wave pairing, rather than s-wave, is significant. One goal of much research is room-temperature superconductivity. Despite intensive research and many promising leads, an explanation has so far eluded scientists. One reason for this is that the materials in question are generally very complex, multi-layered crystals (for example, BSCCO), making theoretical modeling difficult. Possible mechanisms The most controversial topic in condensed matter physics has been the mechanism for high-Tc superconductivity (HTS). There have been two representative theories on the HTS (see also Resonating valence bond theory): Weak-coupling theory Firstly, it has been suggested that the HTS emerges from antiferromagnetic spin fluctuations in a doped system. According to this weak-coupling theory, the pairing wave function of the HTS should have dx2−y2 symmetry. Thus, determining whether the symmetry of the pairing wave function is the d symmetry or not is essential for testing the spin-fluctuation mechanism of the HTS.
That is, if the HTS order parameter (pairing wave function) does not have d symmetry, then a pairing mechanism related to spin fluctuation can be ruled out. The tunnel experiment (see below) seems to detect d symmetry in some HTS. Interlayer coupling model Secondly, there is the interlayer coupling model, according to which a layered structure consisting of BCS-type (s symmetry) superconductors can enhance the superconductivity by itself. By introducing an additional tunneling interaction between each layer, this model successfully explained the anisotropic symmetry of the order parameter in the HTS as well as the emergence of the HTS. In order to solve this unsettled problem, there have been numerous experiments such as photoelectron spectroscopy, NMR, specific heat measurement, etc. Unfortunately, the results were ambiguous, with some reports supporting the d symmetry for the HTS while others supported the s symmetry. This muddy situation possibly originated from the indirect nature of the experimental evidence, as well as experimental issues such as sample quality, impurity scattering, twinning, etc. Superexchange Promising experimental results from various researchers in September 2022, including Weijiong Chen, J.C. Séamus Davis and H. Eisaki, revealed that superexchange of electrons is possibly the most probable reason for high-temperature superconductivity. Previous studies on the symmetry of the HTS order parameter The symmetry of the HTS order parameter has been studied in nuclear magnetic resonance measurements and, more recently, by angle-resolved photoemission and measurements of the microwave penetration depth in an HTS crystal. NMR measurements probe the local magnetic field around an atom and hence reflect the susceptibility of the material. They have been of special interest for the HTS materials because many researchers have wondered whether spin correlations might play a role in the mechanism of the HTS. NMR measurements of the resonance frequency on YBCO indicated that electrons in the copper oxide superconductors are paired in spin-singlet states. This indication came from the behavior of the Knight shift, the frequency shift that occurs when the internal field is different from the applied field: In a normal metal, the magnetic moments of the conduction electrons in the neighborhood of the ion being probed align with the applied field and create a larger internal field. As these metals go superconducting, electrons with oppositely directed spins couple to form singlet states. In the anisotropic HTS, NMR measurements have found that the relaxation rate for copper depends on the direction of the applied static magnetic field, with the rate being higher when the static field is parallel to one of the axes in the copper oxide plane. While this observation by some groups supported the d symmetry of the HTS, other groups could not observe it. Also, by measuring the penetration depth, the symmetry of the HTS order parameter can be studied. The microwave penetration depth is determined by the superfluid density responsible for screening the external field. In the s wave BCS theory, because pairs can be thermally excited across the gap Δ, the change in superfluid density per unit change in temperature varies as exp(−Δ/kBT). In that case, the penetration depth also varies exponentially with temperature T.
If there are nodes in the energy gap, as in the d symmetry HTS, electron pairs can more easily be broken, the superfluid density should have a stronger temperature dependence, and the penetration depth is expected to increase as a power of T at low temperatures. If the symmetry is specifically dx2−y2, then the penetration depth should vary linearly with T at low temperatures. This technique is increasingly being used to study superconductors and is limited in application largely by the quality of available single crystals. Photoemission spectroscopy could also provide information on the HTS symmetry. By scattering photons off electrons in the crystal, one can sample the energy spectra of the electrons. Because the technique is sensitive to the angle of the emitted electrons, one can determine the spectrum for different wave vectors on the Fermi surface. However, within the resolution of angle-resolved photoemission spectroscopy (ARPES), researchers could not tell whether the gap goes to zero or just gets very small. Also, ARPES is sensitive only to the magnitude and not to the sign of the gap, so it cannot tell if the gap goes negative at some point. This means that ARPES cannot determine whether the HTS order parameter has the d symmetry or not. Junction experiment supporting the d-wave symmetry There was a clever experimental design to overcome the muddy situation. An experiment based on pair tunneling and flux quantization in a three-grain ring of YBa2Cu3O7 (YBCO) was designed to test the symmetry of the order parameter in YBCO. Such a ring consists of three YBCO crystals with specific orientations consistent with the d-wave pairing symmetry to give rise to a spontaneously generated half-integer quantum vortex at the tricrystal meeting point. Furthermore, the possibility that junction interfaces can be in the clean limit (no defects) or with maximum zig-zag disorder was taken into account in this tricrystal experiment. A proposal for studying vortices with half magnetic flux quanta in heavy-fermion superconductors in three polycrystalline configurations was reported by V. B. Geshkenbein, A. Larkin and A. Barone in 1987. In the first tricrystal pairing symmetry experiment, the spontaneous magnetization of a half flux quantum was clearly observed in YBCO, which convincingly supported the d-wave symmetry of the order parameter in YBCO. Because YBCO is orthorhombic, it might inherently have an admixture of s-wave symmetry. So, by tuning their technique further, it was found that there was an admixture of s-wave symmetry in YBCO of about 3%. Also, it was demonstrated by Tsuei, Kirtley et al. that there was a pure dx2−y2 order parameter symmetry in the tetragonal Tl2Ba2CuO6. References Superconductors
Unconventional superconductor
[ "Chemistry", "Materials_science" ]
2,908
[ "Superconductivity", "Superconductors" ]
51,108
https://en.wikipedia.org/wiki/Poison
A poison is any chemical substance that is harmful or lethal to living organisms. The term is used in a wide range of scientific fields and industries, where it is often specifically defined. It may also be applied colloquially or figuratively, with a broad sense. Whether something is considered a poison or not may depend on the amount, the circumstances, and what living things are present. Poisoning could be accidental or deliberate, and if the cause can be identified there may be ways to neutralise the effects or minimise the symptoms. In biology, a poison is a chemical substance causing death, injury or harm to organisms or their parts. In medicine, poisons are a kind of toxin that are delivered passively, not actively. In industry the term may be negative, something to be removed to make a thing safe, or positive, an agent to limit unwanted pests. In ecological terms, poisons introduced into the environment can later cause unwanted effects elsewhere, or in other parts of the food chain. Modern definitions In broad metaphorical (colloquial) usage of the term, "poison" may refer to anything deemed harmful. In biology, poisons are substances that can cause death, injury, or harm to organs, tissues, cells, and DNA usually by chemical reactions or other activity on the molecular scale, when an organism is exposed to a sufficient quantity. Medicinal fields (particularly veterinary medicine) and zoology often distinguish poisons from toxins and venoms. Both poisons and venoms are toxins, which are toxicants produced by organisms in nature. The difference between venom and poison is the delivery method of the toxin. Venoms are toxins that are actively delivered by being injected via a bite or sting through a venom apparatus, such as fangs or a stinger, in a process called envenomation, whereas poisons are toxins that are passively delivered by being swallowed, inhaled, or absorbed through the skin. Unantidoteable refers to toxins that cannot be neutralized by modern medical technology, regardless of their type. Uses Industry, agriculture, and other sectors employ many poisonous substances, usually for reasons other than their toxicity to humans. Examples include medicines (e.g. anthelmintics used on chickens), solvents (e.g. rubbing alcohol, turpentine), cleaners (e.g. bleach, ammonia), coatings (e.g. arsenic wallpaper), and feedstocks. The toxicity itself sometimes has economic value, when it serves agricultural purposes of weed control and pest control. Most poisonous industrial compounds have associated material safety data sheets and are classified as hazardous substances. Hazardous substances are subject to extensive regulation on production, procurement, and use in overlapping domains of occupational safety and health, public health, drinking water quality standards, air pollution, and environmental protection. Due to the mechanics of molecular diffusion, many poisonous compounds rapidly diffuse into biological tissues, air, water, or soil on a molecular scale. By the principle of entropy, chemical contamination is typically costly or infeasible to reverse, unless specific chelating agents or micro-filtration processes are available. Chelating agents are often broader in scope than the acute target, and therefore their ingestion necessitates careful medical or veterinarian supervision. Pesticides are one group of substances whose prime purpose is their toxicity to various insects and other animals deemed to be pests (e.g., rats and cockroaches). 
Natural pesticides have been used for this purpose for thousands of years (e.g. concentrated table salt is toxic to many slugs and snails). Bioaccumulation of chemically-prepared agricultural insecticides is a matter of concern for the many species, especially birds, which consume insects as a primary food source. Selective toxicity, controlled application, and controlled biodegradation are major challenges in herbicide and pesticide development and in chemical engineering generally, as all lifeforms on earth share an underlying biochemistry; organisms exceptional in their environmental resilience are classified as extremophiles, these for the most part exhibiting radically different susceptibilities. Ecological lifetime A poison which enters the food chain—whether of industrial, agricultural, or natural origin—might not be immediately toxic to the first organism that ingests the toxin, but can become further concentrated in predatory organisms further up the food chain, particularly carnivores and omnivores, especially concerning fat soluble poisons which tend to become stored in biological tissue rather than excreted in urine or other water-based effluents. Apart from food, many poisons readily enter the body through the skin and lungs. Hydrofluoric acid is a notorious contact poison, in addition to its corrosive damage. Naturally occurring sour gas is a fast-acting atmospheric poison, which can be released by volcanic activity or drilling rigs. Plant-based contact irritants, such as that possessed by poison ivy, are often classed as allergens rather than poisons; the effect of an allergen being not a poison as such, but to turn the body's natural defenses against itself. Poison can also enter the body through faulty medical implants, or by injection (which is the basis of lethal injection in the context of capital punishment). In 2013, 3.3 million cases of unintentional human poisonings occurred. This resulted in 98,000 deaths worldwide, down from 120,000 deaths in 1990. In modern society, cases of suspicious death elicit the attention of the Coroner's office and forensic investigators. Of increasing concern since the isolation of natural radium by Marie and Pierre Curie in 1898—and the subsequent advent of nuclear physics and nuclear technologies—are radiological poisons. These are associated with ionizing radiation, a mode of toxicity quite distinct from chemically active poisons. In mammals, chemical poisons are often passed from mother to offspring through the placenta during gestation, or through breast milk during nursing. In contrast, radiological damage can be passed from mother or father to offspring through genetic mutation, which—if not fatal in miscarriage or childhood, or a direct cause of infertility—can then be passed along again to a subsequent generation. Atmospheric radon is a natural radiological poison of increasing impact since humans moved from hunter-gatherer lifestyles and cave dwelling to increasingly enclosed structures able to contain radon in dangerous concentrations. The 2006 poisoning of Alexander Litvinenko was a notable use of radiological assassination, presumably meant to evade the normal investigation of chemical poisons. Poisons widely dispersed into the environment are known as pollution. 
These are often of human origin, but pollution can also include unwanted biological processes such as toxic red tide, or acute changes to the natural chemical environment attributed to invasive species, which are toxic or detrimental to the prior ecology (especially if the prior ecology was associated with human economic value or an established industry such as shellfish harvesting). The scientific disciplines of ecology and environmental resource management study the environmental life cycle of toxic compounds and their complex, diffuse, and highly interrelated effects. Etymology The word "poison" was first used in 1200 to mean "a deadly potion or substance"; the English term comes from the "...Old French poison, puison (12c., Modern French poison) "a drink", especially a medical drink, later "a (magic) potion, poisonous drink" (14c.), from Latin potionem (nominative potio) "a drinking, a drink", also "poisonous drink" (Cicero), from potare "to drink". The use of "poison" as an adjective ("poisonous") dates from the 1520s. Using the word "poison" with plant names dates from the 18th century. The term "poison ivy", for example, was first used in 1784 and the term "poison oak" was first used in 1743. The term "poison gas" was first used in 1915. Terminology The term "poison" is often used colloquially to describe any harmful substance—particularly corrosive substances, carcinogens, mutagens, teratogens and harmful pollutants, and to exaggerate the dangers of chemicals. Paracelsus (1493–1541), the father of toxicology, once wrote: "Everything is poison, there is poison in everything. Only the dose makes a thing not a poison" (see median lethal dose). The term "poison" is also used in a figurative sense: "His brother's presence poisoned the atmosphere at the party". The law defines "poison" more strictly. Substances not legally required to carry the label "poison" can also cause a medical condition of poisoning. Some poisons are also toxins, which is any poison produced by an organism, such as the bacterial proteins that cause tetanus and botulism. A distinction between the two terms is not always observed, even among scientists. The derivative forms "toxic" and "poisonous" are synonymous. Animal poisons delivered subcutaneously (e.g., by sting or bite) are also called venom. In normal usage, a poisonous organism is one that is harmful to consume, but a venomous organism uses venom to kill its prey or defend itself while still alive. A single organism can be both poisonous and venomous, but it is rare. All living things produce substances to protect them from getting eaten, so the term "poison" is usually only used for substances which are poisonous to humans, while substances that mainly are poisonous to a common pathogen to the organism and humans are considered antibiotics. Bacteria are for example a common adversary for Penicillium chrysogenum mold and humans, and since the mold's poison only targets bacteria, humans use it for getting rid of it in their bodies. Human antimicrobial peptides which are toxic to viruses, fungi, bacteria, and cancerous cells are considered a part of the immune system. In nuclear physics, a poison is a substance that obstructs or inhibits a nuclear reaction. Environmentally hazardous substances are not necessarily poisons, and vice versa. 
For example, food-industry wastewater—which may contain potato juice or milk—can be hazardous to the ecosystems of streams and rivers by consuming oxygen and causing eutrophication, but is nonhazardous to humans and not classified as a poison. Biologically speaking, any substance, if given in large enough amounts, is poisonous and can cause death. For instance, several kilograms' worth of water would constitute a lethal dose. Many substances used as medications—such as fentanyl—have a median lethal dose (LD50) only one order of magnitude greater than the ED50. An alternative classification distinguishes between lethal substances that provide a therapeutic value and those that do not. Poisoning Poisoning can be either acute or chronic, and caused by a variety of natural or synthetic substances. Substances that destroy tissue but are not absorbed, such as lye, are classified as corrosives rather than poisons. Acute Acute poisoning is exposure to a poison on one occasion or during a short period of time. Symptoms develop in close relation to the exposure. Absorption of a poison is necessary for systemic poisoning. Furthermore, many common household medications are not labeled with skull and crossbones, although they can cause severe illness or even death. Poisoning can be caused by excessive consumption of generally safe substances, as in the case of water intoxication. Agents that act on the nervous system can paralyze in seconds or less, and include both biologically derived neurotoxins and so-called nerve gases, which may be synthesized for warfare or industry. Inhaled or ingested cyanide, used as a method of execution in gas chambers, or as a suicide method, almost instantly starves the body of energy by inhibiting the enzymes in mitochondria that make ATP. Intravenous injection of an unnaturally high concentration of potassium chloride, such as in the execution of prisoners in parts of the United States, quickly stops the heart by eliminating the cell potential necessary for muscle contraction. Most biocides, including pesticides, are created to act as acute poisons to target organisms, although acute or less observable chronic poisoning can also occur in non-target organisms (secondary poisoning), including the humans who apply the biocides and other beneficial organisms. For example, the herbicide 2,4-D imitates the action of a plant hormone, which makes its lethal toxicity specific to plants. Indeed, 2,4-D is not a poison, but classified as "harmful" (EU). Many substances regarded as poisons are toxic only indirectly, by toxication. An example is "wood alcohol" or methanol, which is not poisonous itself, but is chemically converted to toxic formaldehyde and formic acid in the liver. Many drug molecules are made toxic in the liver, and the genetic variability of certain liver enzymes makes the toxicity of many compounds differ between individuals. Exposure to radioactive substances can produce radiation poisoning, an unrelated phenomenon. Two common cases of acute natural poisoning are theobromine poisoning of dogs and cats, and mushroom poisoning in humans. Dogs and cats are not natural herbivores, but a chemical defense developed by Theobroma cacao can be incidentally fatal nevertheless. Many omnivores, including humans, readily consume edible fungi, and thus many fungi have evolved to become decisively inedible, in this case as a direct defense. Chronic Chronic poisoning is long-term repeated or continuous exposure to a poison where symptoms do not occur immediately or after each exposure.
The person gradually becomes ill, or becomes ill after a long latent period. Chronic poisoning most commonly occurs following exposure to poisons that bioaccumulate, or are biomagnified, such as mercury, gadolinium, and lead. Management Initial management for all poisonings includes ensuring adequate cardiopulmonary function and providing treatment for any symptoms such as seizures, shock, and pain. Injected poisons (e.g., from the sting of animals) can be treated by binding the affected body part with a pressure bandage and placing the affected body part in hot water (with a temperature of 50 °C). The pressure bandage prevents the poison being pumped throughout the body, and the hot water breaks it down. This treatment, however, only works with poisons composed of protein-molecules. In the majority of poisonings the mainstay of management is providing supportive care for the patient, i.e., treating the symptoms rather than the poison. Decontamination Treatment of a recently ingested poison may involve gastric decontamination to decrease absorption. Gastric decontamination can involve activated charcoal, gastric lavage, whole bowel irrigation, or nasogastric aspiration. Routine use of emetics (syrup of Ipecac), cathartics or laxatives are no longer recommended. Activated charcoal is the treatment of choice to prevent poison absorption. It is usually administered when the patient is in the emergency room or by a trained emergency healthcare provider such as a Paramedic or EMT. However, charcoal is ineffective against metals such as sodium, potassium, and lithium, and alcohols and glycols; it is also not recommended for ingestion of corrosive chemicals such as acids and alkalis. Cathartics were postulated to decrease absorption by increasing the expulsion of the poison from the gastrointestinal tract. There are two types of cathartics used in poisoned patients; saline cathartics (sodium sulfate, magnesium citrate, magnesium sulfate) and saccharide cathartics (sorbitol). They do not appear to improve patient outcome and are no longer recommended. Emesis (i.e. induced by ipecac) is no longer recommended in poisoning situations, because vomiting is ineffective at removing poisons. Gastric lavage, commonly known as a stomach pump, is the insertion of a tube into the stomach, followed by administration of water or saline down the tube. The liquid is then removed along with the contents of the stomach. Lavage has been used for many years as a common treatment for poisoned patients. However, a recent review of the procedure in poisonings suggests no benefit. It is still sometimes used if it can be performed within 1 hour of ingestion and the exposure is potentially life-threatening. Nasogastric aspiration involves the placement of a tube via the nose down into the stomach, the stomach contents are then removed by suction. This procedure is mainly used for liquid ingestions where activated charcoal is ineffective, e.g. ethylene glycol poisoning. Whole bowel irrigation cleanses the bowel. This is achieved by giving the patient large amounts of a polyethylene glycol solution. The osmotically balanced polyethylene glycol solution is not absorbed into the body, having the effect of flushing out the entire gastrointestinal tract. Its major uses are to treat ingestion of sustained release drugs, toxins not absorbed by activated charcoal (e.g., lithium, iron), and for removal of ingested drug packets (body packing/smuggling). 
Enhanced excretion In some situations elimination of the poison can be enhanced using diuresis, hemodialysis, hemoperfusion, hyperbaric medicine, peritoneal dialysis, exchange transfusion or chelation. However, this may actually worsen the poisoning in some cases, so it should always be verified based on what substances are involved. Epidemiology In 2010, poisoning resulted in about 180,000 deaths down from 200,000 in 1990. There were approximately 727,500 emergency department visits in the United States involving poisonings—3.3% of all injury-related encounters. Applications Poisonous compounds may be useful either for their toxicity, or, more often, because of another chemical property, such as specific chemical reactivity. Poisons are widely used in industry and agriculture, as chemical reagents, solvents or complexing reagents, e.g. carbon monoxide, methanol and sodium cyanide, respectively. They are less common in household use, with occasional exceptions such as ammonia and methanol. For instance, phosgene is a highly reactive nucleophile acceptor, which makes it an excellent reagent for polymerizing diols and diamines to produce polycarbonate and polyurethane plastics. For this use, millions of tons are produced annually. However, the same reactivity makes it also highly reactive towards proteins in human tissue and thus highly toxic. In fact, phosgene has been used as a chemical weapon. It can be contrasted with mustard gas, which has only been produced for chemical weapons uses, as it has no particular industrial use. Biocides need not be poisonous to humans, because they can target metabolic pathways absent in humans, leaving only incidental toxicity. For instance, the herbicide 2,4-dichlorophenoxyacetic acid is a mimic of a plant growth hormone, which causes uncontrollable growth leading to the death of the plant. Humans and animals, lacking this hormone and its receptor, are unaffected by this, and need to ingest relatively large doses before any toxicity appears. Human toxicity is, however, hard to avoid with pesticides targeting mammals, such as rodenticides. The risk from toxicity is also distinct from toxicity itself. For instance, the preservative thiomersal used in vaccines is toxic, but the quantity administered in a single shot is negligible. History Throughout human history, intentional application of poison has been used as a method of murder, pest-control, suicide, and execution. As a method of execution, poison has been ingested, as the ancient Athenians did (see Socrates), inhaled, as with carbon monoxide or hydrogen cyanide (see gas chamber), injected (see lethal injection), or even as an enema. Poison's lethal effect can be combined with its allegedly magical powers; an example is the Chinese gu poison. Poison was also employed in gunpowder warfare. For example, the 14th-century Chinese text of the Huolongjing written by Jiao Yu outlined the use of a poisonous gunpowder mixture to fill cast iron grenade bombs. While arsenic is a naturally occurring environmental poison, its artificial concentrate was once nicknamed inheritance powder. In Medieval Europe, it was common for monarchs to employ personal food tasters to thwart royal assassination, in the dawning age of the Apothecary. Figurative use The term poison is also used in a figurative sense. The slang sense of alcoholic drink is first attested 1805, American English (e.g., a bartender might ask a customer "what's your poison?" or "Pick your poison"). 
Figurative use of the term dates from the late 15th century. Figuratively referring to persons as poison dates from 1910. The figurative term poison-pen letter became well known in 1913 by a notorious criminal case in Pennsylvania, U.S.; the phrase dates to 1898. See also References External links National Capital Poison Center webPOISONCONTROL(R) Agency for Toxic Substances and Disease Registry American Association of Poison Control Centers American College of Medical Toxicology Clinical Toxicology Teaching Wiki Find Your Local Poison Control Centre Here (Worldwide) Poison Prevention and Education Website Cochrane Injuries Group , Systematic reviews on the prevention, treatment and rehabilitation of traumatic injury (including poisoning) Pick Your Poison—12 Toxic Tales by Cathy Newman Execution equipment Execution methods Suicide by poison
Poison
[ "Environmental_science" ]
4,430
[ "Poisons", "Toxicology" ]
51,111
https://en.wikipedia.org/wiki/Pipeline
A pipeline is a system of pipes for long-distance transportation of a liquid or gas, typically to a market area for consumption. The latest data from 2014 gives a total of slightly less than of pipeline in 120 countries around the world. The United States had 65%, Russia had 8%, and Canada had 3%, thus 76% of all pipeline were in these three countries. The main attribute to pollution from pipelines is caused by corrosion and leakage. Pipeline and Gas Journals worldwide survey figures indicate that of pipelines are planned and under construction. Of these, represent projects in the planning and design phase; reflect pipelines in various stages of construction. Liquids and gases are transported in pipelines, and any chemically stable substance can be sent through a pipeline. Pipelines exist for the transport of crude and refined petroleum, fuels – such as oil, natural gas and biofuels – and other fluids including sewage, slurry, water, beer, hot water or steam for shorter distances and even pneumatic systems which allow for the generation of suction pressure for useful work and in transporting solid objects. Pipelines are useful for transporting water for drinking or irrigation over long distances when it needs to move over hills, or where canals or channels are poor choices due to considerations of evaporation, pollution, or environmental impact. Oil pipelines are made from steel or plastic tubes which are usually buried. The oil is moved through the pipelines by pump stations along the pipeline. Natural gas (and similar gaseous fuels) are pressurized into liquids known as natural gas liquids (NGLs). Natural gas pipelines are constructed of carbon steel. Hydrogen pipeline transport is the transportation of hydrogen through a pipe. Pipelines are one of the safest ways of transporting materials as compared to road or rail, and hence in war, pipelines are often the target of military attacks. Oil and natural gas It is well documented when the first crude oil pipeline was built. Credit for the development of pipeline transport belongs indisputably to the Oil Transport Association, which first constructed a wrought iron pipeline over a track from an oil field in Pennsylvania to a railroad station in Oil Creek, in the 1860s. Pipelines are generally the most economical way to transport large quantities of oil, refined oil products or natural gas over land. For example, in 2014, pipeline transport of crude oil cost about $5 per barrel, while rail transport cost about $10 to $15 per barrel. Trucking has even higher costs due to the additional labor required; employment on completed pipelines represents only "1% of that of the trucking industry.". In the United States, 70% of crude oil and petroleum products are shipped by pipeline. (23% are by ship, 4% by truck, and 3% by rail) In Canada for natural gas and petroleum products, 97% are shipped by pipeline. Natural gas (and similar gaseous fuels) are lightly pressurized into liquids known as Natural Gas Liquids (NGLs). Small NGL processing facilities can be located in oil fields so the butane and propane liquid under light pressure of , can be shipped by rail, truck or pipeline. Propane can be used as a fuel in oil fields to heat various facilities used by the oil drillers or equipment and trucks used in the oil patch. EG: Propane will convert from a gas to a liquid under light pressure, 100 psi, give or take depending on temperature, and is pumped into cars and trucks at less than at retail stations. 
Pipelines and rail cars use about double that pressure to pump at . The distance to ship propane to markets is much shorter, as thousands of natural-gas processing plants are located in or near oil fields. Many Bakken Basin oil companies in North Dakota, Montana, Manitoba and Saskatchewan gas fields separate the NGLs in the field, allowing the drillers to sell propane directly to small wholesalers, eliminating the large refinery control of product and prices for propane or butane. The most recent major pipeline to start operating in North America is a TransCanada natural gas line going north across the Niagara region bridges. This gas line carries Marcellus shale gas from Pennsylvania and other tied in methane or natural gas sources into the Canadian province of Ontario. It began operations in the fall of 2012, supplying 16 percent of all the natural gas used in Ontario. This new US-supplied natural gas displaces the natural gas formerly shipped to Ontario from western Canada in Alberta and Manitoba, thus dropping the government regulated pipeline shipping charges because of the significantly shorter distance from gas source to consumer. To avoid delays and US government regulation, many small, medium and large oil producers in North Dakota have decided to run an oil pipeline north to Canada to meet up with a Canadian oil pipeline shipping oil from west to east. This allows the Bakken Basin and Three Forks oil producers to get higher negotiated prices for their oil because they will not be restricted to just one wholesale market in the US. The distance from the biggest oil patch in North Dakota, in Williston, North Dakota, is only about 85 miles or 137 kilometers to the Canada–US border and Manitoba. Mutual funds and joint ventures are the largest investors in new oil and gas pipelines. In the fall of 2012, the US began exporting propane to Europe, known as LPG, as wholesale prices there are much higher than in North America. Additionally, a pipeline is currently being constructed from North Dakota to Illinois, commonly known as the Dakota Access Pipeline. As more North American pipelines are built, even more exports of LNG, propane, butane, and other natural gas products occur on all three US coasts. To give insight, North Dakota Bakken region's oil production has grown by 600% from 2007 to 2015. North Dakota oil companies are shipping huge amounts of oil by tanker rail car as they can direct the oil to the market that gives the best price, and rail cars can be used to avoid a congested oil pipeline to get the oil to a different pipeline in order to get the oil to market faster or to a different less busy oil refinery. However, pipelines provide a cheaper means to transport by volume. Enbridge in Canada is applying to reverse an oil pipeline going from east-to-west (Line 9) and expanding it and using it to ship western Canadian bitumen oil eastward. From a presently rated 250,000 barrels equivalent per day pipeline, it will be expanded to between 1.0 and 1.3 million barrels per day. It will bring western oil to refineries in Ontario, Michigan, Ohio, Pennsylvania, Quebec and New York by early 2014. New Brunswick will also refine some of this western Canadian crude and export some crude and refined oil to Europe from its deep water oil ULCC loading port. Although pipelines can be built under the sea, that process is economically and technically demanding, so the majority of oil at sea is transported by tanker ships. 
Similarly, it is often more economically feasible to transport natural gas in the form of LNG, however the break-even point between LNG and pipelines would depend on the volume of natural gas and the distance it travels. Growth of market The market size for oil and gas pipeline construction experienced tremendous growth prior to the economic downturn in 2008. After faltering in 2009, demand for pipeline expansion and updating increased the following year as energy production grew. By 2012, almost 32,000 miles (51500 km) of North American pipeline were being planned or under construction. When pipelines are constrained, additional pipeline product transportation options may include the use of drag reducing agents, or by transporting product via truck or rail. Construction and operation Oil pipelines are made from steel or plastic tubes with inner diameter typically from . Most pipelines are typically buried at a depth of about . To protect pipes from impact, abrasion, and corrosion, a variety of methods are used. These can include wood lagging (wood slats), concrete coating, rockshield, high-density polyethylene, imported sand padding, sacrificial cathodes and padding machines. Crude oil contains varying amounts of paraffin wax and in colder climates wax buildup may occur within a pipeline. Often these pipelines are inspected and cleaned using pigging, the practice of using devices known as "pigs" to perform various maintenance operations on a pipeline. The devices are also known as "scrapers" or "Go-devils". "Smart pigs" (also known as "intelligent" or "intelligence" pigs) are used to detect anomalies in the pipe such as dents, metal loss caused by corrosion, cracking or other mechanical damage. These devices are launched from pig-launcher stations and travel through the pipeline to be received at any other station down-stream, either cleaning wax deposits and material that may have accumulated inside the line or inspecting and recording the condition of the line. For natural gas, pipelines are constructed of carbon steel and vary in size from in diameter, depending on the type of pipeline. The gas is pressurized by compressor stations and is odorless unless mixed with a mercaptan odorant where required by a regulating authority. Ammonia A major ammonia pipeline is the Ukrainian Transammiak line connecting the TogliattiAzot facility in Russia to the exporting Black Sea-port of Odesa. Alcohol fuels Pipelines have been used for transportation of ethanol in Brazil, and there are several ethanol pipeline projects in Brazil and the United States. The main problems related to the transport of ethanol by pipeline are its corrosive nature and tendency to absorb water and impurities in pipelines, which are not problems with oil and natural gas. Insufficient volumes and cost-effectiveness are other considerations limiting construction of ethanol pipelines. In the US minimal amounts of ethanol are transported by pipeline. Most ethanol is shipped by rail, the main alternatives being truck and barge. Delivering ethanol by pipeline is the most desirable option, but ethanol's affinity for water and solvent properties require the use of a dedicated pipeline, or significant cleanup of existing pipelines. Coal and ore Slurry pipelines are sometimes used to transport coal or ore from mines. The material to be transported is closely mixed with water before being introduced to the pipeline; at the far end, the material must be dried. 
One example is a slurry pipeline which is planned to transport iron ore from the Minas-Rio mine (producing 26.5 million tonnes per year) to the Port of Açu in Brazil. An existing example is the Savage River Slurry pipeline in Tasmania, Australia, possibly the world's first when it was built in 1967. It includes a bridge span at above the Savage River. Hydrogen Hydrogen pipeline transport is a transportation of hydrogen through a pipe as part of the hydrogen infrastructure. Hydrogen pipeline transport is used to connect the point of hydrogen production or delivery of hydrogen with the point of demand, with transport costs similar to CNG, the technology is proven. Most hydrogen is produced at the place of demand with every 50 to an industrial production facility. The 1938 Rhine-Ruhr hydrogen pipeline is still in operation. , there are of low pressure hydrogen pipelines in the US and in Europe. Water Two millennia ago, the ancient Romans made use of large aqueducts to transport water from higher elevations by building the aqueducts in graduated segments that allowed gravity to push the water along until it reached its destination. Hundreds of these were built throughout Europe and elsewhere, and along with flour mills were considered the lifeline of the Roman Empire. The ancient Chinese also made use of channels and pipe systems for public works. The famous Han dynasty court eunuch Zhang Rang (d. 189 AD) once ordered the engineer Bi Lan to construct a series of square-pallet chain pumps outside the capital city of Luoyang. These chain pumps serviced the imperial palaces and living quarters of the capital city as the water lifted by the chain pumps was brought in by a stoneware pipe system. Pipelines are useful for transporting water for drinking or irrigation over long distances when it needs to move over hills, or where canals or channels are poor choices due to considerations of evaporation, pollution, or environmental impact. The Goldfields Water Supply Scheme in Western Australia using 750 mm (30 inch) pipe and completed in 1903 was the largest water supply scheme of its time. Examples of significant water pipelines in South Australia are the Morgan-Whyalla pipeline (completed 1944) and Mannum-Adelaide pipeline (completed 1955) pipelines, both part of the larger Snowy Mountains scheme. Two Los Angeles, California aqueducts, the Owens Valley aqueduct (completed 1913) and the Second Los Angeles Aqueduct (completed 1970), include extensive use of pipelines. The Great Manmade River of Libya supplies of water each day to Tripoli, Benghazi, Sirte, and several other cities in Libya. The pipeline is over long, and is connected to wells tapping an aquifer over underground. Other systems District heating District heating or teleheating systems consist of a network of insulated feed and return pipes which transport heated water, pressurized hot water, or sometimes steam to the customer. While steam is hottest and may be used in industrial processes due to its higher temperature, it is less efficient to produce and transport due to greater heat losses. Heat transfer oils are generally not used for economic and ecological reasons. The typical annual loss of thermal energy through distribution is around 10%, as seen in Norway's district heating network. District heating pipelines are normally installed underground, with some exceptions. Within the system, heat storage may be installed to even out peak load demands. 
Heat is transferred into the central heating of the dwellings through heat exchangers at heat substations, without mixing of the fluids in either system. Beer Bars in the Veltins-Arena, a major football ground in Gelsenkirchen, Germany, are interconnected by a long beer pipeline. In Randers city in Denmark, the so-called Thor Beer pipeline was operated. Originally, copper pipes ran directly from the brewery, but when the brewery moved out of the city in the 1990s, Thor Beer replaced it with a giant tank. A three-kilometer beer pipeline was completed in Bruges, Belgium in September 2016 to reduce truck traffic on the city streets. Brine The village of Hallstatt in Austria, which is known for its long history of salt mining, claims to contain "the oldest industrial pipeline in the world", dating back to 1595. It was constructed from 13,000 hollowed-out tree trunks to transport brine from Hallstatt to Ebensee. Milk Between 1978 and 1994, a 15 km milk pipeline ran between the Dutch island of Ameland and Holwerd on the mainland, of which 8 km was beneath the Wadden Sea. Every day, 30,000 litres of milk produced on the island were transported to be processed on the mainland. In 1994, the pipeline was abandoned. Pneumatic transport Rather than transporting fluids, pneumatic tubes are usually used to transport solids in a cylindrical container by compressed air or by partial vacuum. They were most popular in the late 19th and early 20th centuries, and were used to transport small solid objects within a building, e.g. documents in an office or money in a bank. By the 21st century, pneumatic tube transport had been mostly superseded by digital solutions for transporting information, but is still used in cases where convenience and speed in a local environment are important. Hospitals, for example, use them to deliver drugs and specimens. Marine pipelines In places, a pipeline may have to cross water expanses, such as small seas, straits and rivers. In many instances, they lie entirely on the seabed. These pipelines are referred to as "marine" pipelines (also, "submarine" or "offshore" pipelines). They are used primarily to carry oil or gas, but transportation of water is also important. In offshore projects, a distinction is made between a "flowline" and a pipeline. The former is an intrafield pipeline, in the sense that it is used to connect subsea wellheads, manifolds and the platform within a particular development field. The latter, sometimes referred to as an "export pipeline", is used to bring the resource to shore. The construction and maintenance of marine pipelines imply logistical challenges that are different from those onland, mainly because of wave and current dynamics, along with other geohazards. Functions In general, pipelines can be classified in three categories depending on purpose: Gathering pipelines Group of smaller interconnected pipelines forming complex networks with the purpose of bringing crude oil or natural gas from several nearby wells to a treatment plant or processing facility. In this group, pipelines are usually short- a couple hundred metres- and with small diameters. Sub-sea pipelines for collecting product from deep water production platforms are also considered gathering systems. Transportation pipelines Mainly long pipes with large diameters, moving products (oil, gas, refined products) between cities, countries and even continents. 
These transportation networks include several compressor stations in gas lines or pump stations for crude and multi-products pipelines. Distribution pipelines Composed of several interconnected pipelines with small diameters, used to take the products to the final consumer. Feeder lines to distribute gas to homes and businesses downstream. Pipelines at terminals for distributing products to tanks and storage facilities are included in this groups. Development and planning When a pipeline is built, the construction project not only covers the civil engineering work to lay the pipeline and build the pump/compressor stations, it also has to cover all the work related to the installation of the field devices that will support remote operation. The pipeline is routed along what is known as a "right of way". Pipelines are generally developed and built using the following stages: Open season to determine market interest: Potential customers are given the chance to sign up for part of the new pipeline's capacity rights. Route (right of way) selection including land acquisition (eminent domain) Pipeline design: The pipeline project may take a number of forms, including the construction of a new pipeline, conversion of existing pipeline from one fuel type to another, or improvements to facilities on a current pipeline route. Obtaining approval: Once the design is finalized and the first pipeline customers have purchased their share of capacity, the project must be approved by the relevant regulatory agencies. Surveying the route Clearing the route Trenching – Main Route and Crossings (roads, rail, other pipes, etc.) Installing the pipe Installing valves, intersections, etc. Covering the pipe and trench Testing: Once construction is completed, the new pipeline is subjected to tests to ensure its structural integrity. These may include hydrostatic testing and line packing. Russia has "Pipeline Troops" as part of the Rear Services, who are trained to build and repair pipelines. Russia is the only country to have Pipeline Troops. The U.S. government, mainly through the EPA, the FERC and others, reviews proposed pipeline projects in order to comply with the Clean Water Act, the National Environmental Policy Act, other laws and, in some cases, municipal laws. The Biden administration has sought to permit the respective states and tribal groups to appraise and potentially block the proposed projects. Operation Field devices are instrumentation, data gathering units and communication systems. The field instrumentation includes flow, pressure, and temperature gauges/transmitters, and other devices to measure the relevant data required. These instruments are installed along the pipeline on some specific locations, such as injection or delivery stations, pump stations (liquid pipelines) or compressor stations (gas pipelines), and block valve stations. The information measured by these field instruments is then gathered in local remote terminal units (RTU) that transfer the field data to a central location in real time using communication systems, such as satellite channels, microwave links, or cellular phone connections. Pipelines are controlled and operated remotely, from what is usually known as the "Main Control Room". In this center, all the data related to field measurement is consolidated in one central database. The data is received from multiple RTUs along the pipeline. It is common to find RTUs installed at every station along the pipeline. 
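As a purely illustrative sketch of the data flow just described (the class and field names below are hypothetical and do not come from any real SCADA or RTU product), each RTU can be thought of as reporting a small record of flow, pressure and temperature that the control room consolidates and checks, for example with a simple flow-balance comparison of the kind a leak-detection application might run:

```python
from dataclasses import dataclass

@dataclass
class RtuReading:
    """One field record gathered by a remote terminal unit (hypothetical schema)."""
    station_id: str
    flow_m3_per_h: float
    pressure_kpa: float
    temperature_c: float

def flow_imbalance(inlet: RtuReading, outlet: RtuReading) -> float:
    """Fractional flow imbalance between two stations on the same segment.
    A persistent positive imbalance is one simple indicator a leak-detection
    application could flag for the operator."""
    if inlet.flow_m3_per_h == 0:
        return 0.0
    return (inlet.flow_m3_per_h - outlet.flow_m3_per_h) / inlet.flow_m3_per_h

# Example (made-up numbers): 2% more volume enters the segment than leaves it.
upstream = RtuReading("PS-01", flow_m3_per_h=1000.0, pressure_kpa=6200.0, temperature_c=18.5)
downstream = RtuReading("PS-02", flow_m3_per_h=980.0, pressure_kpa=5900.0, temperature_c=18.1)
print(f"{flow_imbalance(upstream, downstream):.1%}")  # prints 2.0%
```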
The SCADA system at the Main Control Room receives all the field data and presents it to the pipeline operator through a set of screens or a Human Machine Interface, showing the operational conditions of the pipeline. The operator can monitor the hydraulic conditions of the line, as well as send operational commands (open/close valves, turn on/off compressors or pumps, change setpoints, etc.) through the SCADA system to the field. To optimize and secure the operation of these assets, some pipeline companies use what are called "Advanced Pipeline Applications": software tools installed on top of the SCADA system that provide extended functionality for leak detection, leak location, batch tracking (liquid lines), pig tracking, composition tracking, predictive modeling, look-ahead modeling, and operator training. Technology Components Pipeline networks are composed of several pieces of equipment that operate together to move products from location to location. The main elements of a pipeline system are: Initial injection station Known also as the "supply" or "inlet" station, this is the beginning of the system, where the product is injected into the line. Storage facilities, pumps or compressors are usually located at these locations. Compressor/pump stations Pumps for liquid pipelines and compressors for gas pipelines are located along the line to move the product through the pipeline. The location of these stations is defined by the topography of the terrain, the type of product being transported, or the operational conditions of the network. Partial delivery station Known also as "intermediate stations", these facilities allow the pipeline operator to deliver part of the product being transported. Block valve station These are the first line of protection for pipelines. With these valves the operator can isolate any segment of the line for maintenance work or isolate a rupture or leak. Block valve stations are usually located at regular intervals along the line, depending on the type of pipeline. Although this spacing is not fixed by a design rule, regular placement is very common practice in liquid pipelines. The location of these stations depends exclusively on the nature of the product being transported, the trajectory of the pipeline and/or the operational conditions of the line. Regulator station This is a special type of valve station, where the operator can release some of the pressure from the line. Regulators are usually located on the downhill side of a peak. Final delivery station Known also as "outlet" stations or terminals, this is where the product is distributed to the consumer. It could be a tank terminal for liquid pipelines or a connection to a distribution network for gas pipelines. Leak detection systems Since oil and gas pipelines are important assets for the economic development of almost any country, government regulations and internal company policies require that the safety of these assets, and of the population and environment where the pipelines run, be ensured. Pipeline companies face government regulation, environmental constraints and social situations. Government regulations may define the minimum staff needed to run the operation, operator training requirements, and the pipeline facilities, technology and applications required to ensure operational safety. For example, in the State of Washington it is mandatory for pipeline operators to be able to detect and locate leaks of 8 percent of maximum flow within fifteen minutes or less. Social factors also affect the operation of pipelines.
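As a toy illustration of how such a detection requirement can be checked, the sketch below compares metered volume into and out of a segment over a 15-minute window and raises an alarm when the imbalance exceeds a set fraction of maximum flow. The flow figures are invented, and production systems use far more sophisticated hydraulic models, as described in the computational monitoring discussion that follows.

```python
# Toy volume-balance check behind requirements like the Washington State rule
# cited above: alarm when the in/out imbalance over a 15-minute window
# exceeds a fraction of maximum flow. All flow figures are invented; real
# systems also model linepack, temperature and pressure transients.
def leak_alarm(inflow_m3: float, outflow_m3: float,
               window_max_flow_m3: float, threshold_fraction: float = 0.08) -> bool:
    """Return True when the in/out imbalance exceeds the detection threshold."""
    imbalance = inflow_m3 - outflow_m3
    return imbalance > threshold_fraction * window_max_flow_m3

# A segment that can move at most 2,000 m3 per 15-minute window:
print(leak_alarm(1980.0, 1975.0, window_max_flow_m3=2000.0))  # False - within tolerance
print(leak_alarm(1980.0, 1790.0, window_max_flow_m3=2000.0))  # True - possible leak
```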
Product theft is sometimes also a problem for pipeline companies. In this case, the detection levels should be under two percent of maximum flow, with a high expectation for location accuracy. Various technologies and strategies have been implemented for monitoring pipelines, from physically walking the lines to satellite surveillance. The most common technology to protect pipelines from occasional leaks is Computational Pipeline Monitoring (CPM). CPM takes information from the field related to pressures, flows, and temperatures to estimate the hydraulic behavior of the product being transported. Once the estimation is completed, the results are compared to other field references to detect the presence of an anomaly or unexpected situation, which may be related to a leak. The American Petroleum Institute (API) has published several documents related to the performance of CPM in liquids pipelines. The API publications are: API RP 1130 – Computational pipeline monitoring for liquids pipelines API 1149 – Pipeline variable uncertainties & their effects on leak detectability Where a pipeline passes under a road or railway, it is usually enclosed in a protective casing. This casing is vented to the atmosphere to prevent the build-up of flammable gases or corrosive substances, and to allow the air inside the casing to be sampled to detect leaks. The casing vent, a pipe protruding from the ground, often doubles as a warning marker called a casing vent marker. Implementation Pipelines are generally laid underground because temperatures there are less variable. Because pipelines are usually made of metal, this helps to reduce the expansion and contraction that can occur with weather changes. However, in some cases it is necessary to cross a valley or a river on a pipeline bridge. Pipelines for centralized heating systems are often laid on the ground or overhead. Pipelines for petroleum running through permafrost areas, such as the Trans-Alaska Pipeline, are often run overhead in order to avoid melting the frozen ground with hot petroleum, which would cause the pipeline to sink into the ground. Maintenance Maintenance of pipelines includes checking cathodic protection levels for the proper range, surveillance for construction, erosion, or leaks by foot, land vehicle, boat, or air, and running cleaning pigs when there is anything carried in the pipeline that is corrosive. US pipeline maintenance rules are covered in Code of Federal Regulations (CFR) sections 49 CFR 192 for natural gas pipelines and 49 CFR 195 for petroleum liquid pipelines. Regulation In the US, onshore and offshore pipelines used to transport oil and gas are regulated by the Pipeline and Hazardous Materials Safety Administration (PHMSA). Certain offshore pipelines used to produce oil and gas are regulated by the Minerals Management Service (MMS). In Canada, pipelines are regulated by either the provincial regulators or, if they cross provincial boundaries or the Canada–US border, by the National Energy Board (NEB). Government regulations in Canada and the United States require that buried fuel pipelines must be protected from corrosion. Often, the most economical method of corrosion control is the use of pipeline coating in conjunction with cathodic protection and technology to monitor the pipeline. Above ground, cathodic protection is not an option; the coating is the only external protection. Pipelines and geopolitics Pipelines for major energy resources (petroleum and natural gas) are not merely an element of trade.
They connect to issues of geopolitics and international security as well, and the construction, placement, and control of oil and gas pipelines often figure prominently in state interests and actions. A notable example of pipeline politics occurred at the beginning of the year 2009, wherein a dispute between Russia and Ukraine ostensibly over pricing led to a major political crisis. Russian state-owned gas company Gazprom cut off natural gas supplies to Ukraine after talks between it and the Ukrainian government fell through. In addition to cutting off supplies to Ukraine, Russian gas flowing through Ukraine—which included nearly all supplies to Southeastern Europe and some supplies to Central and Western Europe—was cut off, creating a major crisis in several countries heavily dependent on Russian gas as fuel. Russia was accused of using the dispute as leverage in its attempt to keep other powers, and particularly the European Union, from interfering in its "near abroad". Oil and gas pipelines also figure prominently in the politics of Central Asia and the Caucasus. Hazard identification Because the solvent fraction of dilbit typically comprises volatile aromatics such as naptha and benzene, reasonably rapid carrier vaporization can be expected to follow an above-ground spill—ostensibly enabling timely intervention by leaving only a viscous residue that is slow to migrate. Effective protocols to minimize exposure to petrochemical vapours are well-established, and oil spilled from the pipeline would be unlikely to reach the aquifer unless incomplete remediation were followed by the introduction of another carrier (e.g. a series of torrential downpours). The introduction of benzene and other volatile organic compounds (collectively BTEX) to the subterranean environment compounds the threat posed by a pipeline leak. Particularly if followed by rain, a pipeline breach would result in BTEX dissolution and equilibration of benzene in water, followed by percolation of the admixture into the aquifer. Benzene can cause many health problems and is carcinogenic with EPA Maximum Contaminant Level (MCL) set at 5 μg/L for potable water. Although it is not well studied, single benzene exposure events have been linked to acute carcinogenesis. Additionally, the exposure of livestock, mainly cattle, to benzene has been shown to cause many health issues, such as neurotoxicity, fetal damage and fatal poisoning. The entire surface of an above-ground pipeline can be directly examined for material breach. Pooled petroleum is unambiguous, readily spotted, and indicates the location of required repairs. Because the effectiveness of remote inspection is limited by the cost of monitoring equipment, gaps between sensors, and data that requires interpretation, small leaks in buried pipe can sometimes go undetected. Pipeline developers do not always prioritize effective surveillance against leaks. Buried pipes draw fewer complaints. They are insulated from extremes in ambient temperature, they are shielded from ultraviolet rays, and they are less exposed to photodegradation. Buried pipes are isolated from airborne debris, electrical storms, tornadoes, hurricanes, hail, and acid rain. They are protected from nesting birds, rutting mammals, and stray buckshot. Buried pipe is less vulnerable to accident damage (e.g. automobile collisions) and less accessible to vandals, saboteurs, and terrorists. Exposure Previous work has shown that a 'worst-case exposure scenario' can be limited to a specific set of conditions. 
Based on the advanced detection methods and pipeline shut-off SOP developed by TransCanada, the risk of a substantive or large release over a short period of time contaminating groundwater with benzene is unlikely. Detection, shutoff, and remediation procedures would limit the dissolution and transport of benzene. Therefore, the exposure of benzene would be limited to leaks that are below the limit of detection and go unnoticed for extended periods of time. Leak detection is monitored through a SCADA system that assesses pressure and volume flow every 5 seconds. A pinhole leak that releases small quantities that cannot be detected by the SCADA system (<1.5% flow) could accumulate into a substantive spill. Detection of pinhole leaks would come from a visual or olfactory inspection, aerial surveying, or mass-balance inconsistencies. It is assumed that pinhole leaks are discovered within the 14-day inspection interval, however snow cover and location (e.g. remote, deep) could delay detection. Benzene typically makes up 0.1 – 1.0% of oil and will have varying degrees of volatility and dissolution based on environmental factors. Even with pipeline leak volumes within SCADA detection limits, sometimes pipeline leaks are misinterpreted by pipeline operators to be pump malfunctions, or other problems. The Enbridge Line 6B crude oil pipeline failure in Marshall, Michigan, on July 25, 2010, was thought by operators in Edmonton to be from column separation of the dilbit in that pipeline. The leak in wetlands along the Kalamazoo River was only confirmed 17 hours after it happened by a local gas company employee. Spill frequency-volume Although the Pipeline and Hazardous Materials Safety Administration (PHMSA) has standard baseline incident frequencies to estimate the number of spills, TransCanada altered these assumptions based on improved pipeline design, operation, and safety. Whether these adjustments are justified is debatable as these assumptions resulted in a nearly 10-fold decrease in spill estimates. Given that the pipeline crosses 247 miles of the Ogallala Aquifer, or 14.5% of the entire pipeline length, and the 50-year life of the entire pipeline is expected to have between 11 – 91 spills, approximately 1.6 – 13.2 spills can be expected to occur over the aquifer. An estimate of 13.2 spills over the aquifer, each lasting 14 days, results in 184 days of potential exposure over the 50 year lifetime of the pipeline. In the reduced-scope worst-case exposure scenario, the volume of a pinhole leak at 1.5% of max flow-rate for 14 days has been estimated at 189,000 barrels or 7.9 million gallons of oil. According to PHMSA's incident database, only 0.5% of all spills in the last 10 years were >10,000 barrels. Benzene fate and transport Benzene is considered a light aromatic hydrocarbon with high solubility and high volatility. It is unclear how temperature and depth would impact the volatility of benzene, so assumptions have been made that benzene in oil (1% weight by volume) would not volatilize before equilibrating with water. Using the octanol-water partition coefficient and a 100-year precipitation event for the area, a worst-case estimate of 75 mg/L of benzene is anticipated to flow toward the aquifer. The actual movement of the plume through groundwater systems is not well described, although one estimate is that up to 4.9 billion gallons of water in the Ogallala Aquifer could become contaminated with benzene at concentrations above the MCL. 
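The spill-frequency and leak-volume arithmetic quoted above can be reproduced with a few lines of Python. The 11–91 spill range, the 14.5 percent aquifer fraction, the 14-day inspection interval and the 1.5 percent detection limit are taken from the text; the 900,000 barrel-per-day maximum flow rate is an assumption, chosen only because it reproduces the quoted 189,000-barrel pinhole-leak volume.

```python
# Back-of-the-envelope reproduction of the figures quoted above. The spill
# range, aquifer fraction, 14-day window and 1.5% detection limit come from
# the text; the 900,000 bbl/day maximum flow is an assumption chosen only
# because it reproduces the quoted 189,000-barrel pinhole-leak volume.
low_spills, high_spills = 11, 91      # expected spills over the 50-year life
aquifer_fraction = 0.145              # 247 mi of the route crosses the aquifer

spills_low = low_spills * aquifer_fraction     # ~1.6
spills_high = high_spills * aquifer_fraction   # ~13.2
exposure_days = spills_high * 14               # ~185 days (text rounds to 184)

max_flow_bpd = 900_000                # assumed maximum flow rate, bbl/day
leak_fraction = 0.015                 # just below the SCADA detection limit
pinhole_barrels = leak_fraction * max_flow_bpd * 14
pinhole_gallons = pinhole_barrels * 42         # 42 US gallons per barrel

print(f"spills over aquifer: {spills_low:.1f} to {spills_high:.1f}")
print(f"potential exposure: about {exposure_days:.0f} days")
print(f"worst-case pinhole leak: {pinhole_barrels:,.0f} bbl "
      f"({pinhole_gallons/1e6:.1f} million US gallons)")
```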
The Final Environmental Impact Statement from the State Department does not include a quantitative analysis because it assumed that most benzene will volatilize. Previous dilbit spill remediation difficulties One of the major concerns over dilbit is the difficulty in cleaning it up. When the aforementioned Enbridge Line 6B crude oil pipeline ruptured in Marshall, Michigan in 2010, at least 843,000 gallons of dilbit were spilled. After detection of the leak, booms and vacuum trucks were deployed. Heavy rains caused the river to overtop existing dams, and carried dilbit 30 miles downstream before the spill was contained. Remediation work collected over 1.1 million gallons of oil and almost 200,000 cubic yards of oil-contaminated sediment and debris from the Kalamazoo River system. However, oil was still being found in affected waters in October 2012. Accidents and dangers Pipelines can help ensure a country's economic well-being and as such present a likely target of terrorists or wartime adversaries. Fossil fuels can be transported by pipeline, rail, truck or ship, though natural gas requires compression or liquefaction to make vehicle transport economical. For transport of crude oil via these four modes, various reports rank pipelines as proportionately causing less human death and property damage than rail and truck and spilling less oil than truck. Accidents Pipelines conveying flammable or explosive material, such as natural gas or oil, pose special safety concerns. While corrosion, pressure, and equipment failure are common causes, excavation damage is also a leading accident type that can be avoided by calling 811 before digging near pipelines. 1965 – A 32-inch gas transmission pipeline, north of Natchitoches, Louisiana, belonging to the Tennessee Gas Pipeline exploded and burned from stress corrosion cracking failure on March 4, killing 17 people. At least 9 others were injured, and 7 homes 450 feet from the rupture were destroyed. This accident, and others of the era, led then-President Lyndon B. Johnson to call for the formation of a national pipeline safety agency in 1967. The same pipeline had also had an explosion on May 9, 1955, just 930 feet (280 m) from the 1965 failure. June 16, 1976 – A gasoline pipeline was ruptured by a road construction crew in Los Angeles, California. Gasoline sprayed across the area, and soon ignited, killing 9, and injuring at least 14 others. Confusion over the depth of the pipeline in the construction area seemed to be a factor in the accident. June 4, 1989 – The Ufa train disaster: Sparks from two passing trains detonated gas leaking from a LPG pipeline near Ufa, Russia. At least 575 people were reported killed. October 17, 1998 – 1998 Jesse pipeline explosion: A petroleum pipeline exploded at Jesse on the Niger Delta in Nigeria, killing about 1,200 villagers, some of whom were scavenging gasoline. June 10, 1999 – A pipeline rupture in a Bellingham, Washington park led to the release of 277,200 gallons of gasoline. The gasoline was ignited, causing an explosion that killed two children and one adult. Misoperation of the pipeline and a previously damaged section of the pipe that was not detected before were identified as causing the failure. August 19, 2000 – A natural gas pipeline rupture and fire near Carlsbad, New Mexico; this explosion and fire killed 12 members of an extended family. The cause was due to severe internal corrosion of the pipeline. 
July 30, 2004 – A major natural gas pipeline exploded in Ghislenghien, Belgium near Ath (thirty kilometres southwest of Brussels), killing at least 24 people and leaving 132 wounded, some critically. May 12, 2006 – An oil pipeline ruptured outside Lagos, Nigeria. Up to 200 people may have been killed. See Nigeria oil blast. November 1, 2007 – A propane pipeline exploded near Carmichael, Mississippi, south of Meridian, Mississippi. Two people were killed instantly and an additional four were injured. Several homes were destroyed and sixty families were displaced. The pipeline is owned by Enterprise Products Partners LP, and runs from Mont Belvieu, Texas, to Apex, North Carolina. The inability to find flaws in pre-1971 ERW seam-welded pipe was a contributing factor to the accident. September 9, 2010 – 2010 San Bruno pipeline explosion: A 30-inch diameter high pressure natural gas pipeline owned by the Pacific Gas and Electric Company exploded in the Crestmoor residential neighborhood 2 mi (3.2 km) west of San Francisco International Airport, killing 8, injuring 58, and destroying 38 homes. Poor quality control of the pipe used and of the construction were cited as factors in the accident. June 27, 2014 – An explosion occurred after a natural gas pipeline ruptured in Nagaram village, East Godavari district, Andhra Pradesh, India, causing 16 deaths and destroying "scores of homes". July 31, 2014 – On the night of July 31, a series of explosions originating in underground gas pipelines occurred in the city of Kaohsiung, Taiwan. Leaking gas filled the sewers along several major thoroughfares and the resulting explosions turned several kilometers of road surface into deep trenches, sending vehicles and debris high into the air and igniting fires over a large area. At least 32 people were killed and 321 injured. As targets Pipelines can be the target of vandalism, sabotage, or even terrorist attacks. For example, between early 2011 and July 2012, a natural gas pipeline connecting Egypt to Israel and Jordan was attacked 15 times. In 2019, a fuel pipeline north of Mexico City exploded after fuel thieves tapped into the line. At least sixty-six people were reported to have been killed. In war, pipelines are often the target of military attacks, as destruction of pipelines can seriously disrupt enemy logistics. On 26 September 2022, a series of explosions and subsequent major gas leaks occurred on the Nord Stream 1 and Nord Stream 2 pipelines, which run to Europe from Russia under the Baltic Sea. The leaks are believed to have been caused by an act of sabotage. Fluid control in pipeline transport In pipeline transport systems, the efficient and safe movement of fluids—whether gas, oil, water, or chemicals—relies on effective fluid control mechanisms. These mechanisms help regulate the flow, pressure, and direction of the fluids within the pipeline, preventing blockages and backflows and ensuring smooth transportation over long distances. Valves Industrial valves are integral in controlling fluid flow in pipelines. They are used for starting, stopping, or regulating fluid movement. Different types of valves play distinct roles in pipeline systems, including: Ball valves allow for quick on/off flow control and are often used in pipeline systems where rapid changes in flow are required. Gate valves provide full flow control, making them suitable for pipelines that require complete shutoff or consistent flow.
Check valves are used to prevent backflow, protecting the pipeline from potential damage caused by reverse fluid movement. Flow meters Flow meters are critical for monitoring and adjusting the flow of liquids and gases in pipelines. These devices ensure that the correct flow rate is maintained, preventing both under and over-supply of fluid in the pipeline. Pressure regulators Pressure regulators are designed to maintain stable pressure in pipelines by automatically adjusting the flow of fluid. This helps to prevent damage caused by pressure surges or drops, ensuring that the pipeline system operates within safe and efficient parameters. Actuators Actuators are used in conjunction with valves to control their opening and closing. Powered by electric, pneumatic, or hydraulic sources, actuators allow for automated fluid control, making them essential for pipelines that require continuous or remote monitoring. See also Lists of pipelines Black powder in gas pipelines Central gas system Coal pipeline District heating Geomagnetically induced current (GIC) HCNG dispenser HDPE pipe Hot tapping Hydraulically activated pipeline pigging Hydrogen pipeline transport Hydrostatic test Inland Petroleum Distribution System List of countries by total length of pipelines List of natural gas pipelines List of pipeline accidents Natural gas pipeline system in the United States Gas networks simulation Operation Pluto Petroleum transport Pigging Pipeline bridge Pneumatic tube, a method for sending documents and other solid materials in capsules through a tube Plastic pipework Reinforced thermoplastic pipe Russia–Ukraine gas disputes Slurry pipeline Sprayed in place pipe Trans-Alaska Pipeline Authorization Act References External links Pipeline news and industry magazine , US historical summary Pipeline Politics in Asia: The Intersection of Demand, Energy Markets, and Supply Routes, by Mikkal E. Herberg et al. (National Bureau of Asian Research, 2010) The Dolphin Project: The Development of a Gulf Gas Initiative, by Justin Dargin, Oxford Institute for Energy Studies Jan 2008 Working Paper NG #22 UK – Linewatch – a joint awareness initiative between 14 oil and gas pipeline operators Article about first undersea gas pipeline constructed in the US and the problems encountered Construction and delivery of compressor stations for a gas pipeline in the Soviet Union by AEG (company video from the 1970s with subtitles) Gas Pipeline Safety: Guidance and More Information Needed before Using Risk-Based Reassessment Intervals: Report to Congressional Committees Government Accountability Office Freight transport Gas technologies Infrastructure Piping
Pipeline
[ "Chemistry", "Engineering" ]
8,730
[ "Building engineering", "Chemical engineering", "Construction", "Mechanical engineering", "Piping", "Infrastructure" ]
51,117
https://en.wikipedia.org/wiki/Meissner%20effect
In condensed-matter physics, the Meissner effect (or Meißner–Ochsenfeld effect) is the expulsion of a magnetic field from a superconductor during its transition to the superconducting state when it is cooled below the critical temperature. This expulsion will repel a nearby magnet. The German physicists Walther Meißner (anglicized Meissner) and Robert Ochsenfeld discovered this phenomenon in 1933 by measuring the magnetic field distribution outside superconducting tin and lead samples. The samples, in the presence of an applied magnetic field, were cooled below their superconducting transition temperature, whereupon the samples cancelled nearly all interior magnetic fields. They detected this effect only indirectly because the magnetic flux is conserved by a superconductor: when the interior field decreases, the exterior field increases. The experiment demonstrated for the first time that superconductors were more than just perfect conductors and provided a uniquely defining property of the superconducting state. The ability to sustain the expulsion effect is determined by the nature of the equilibrium formed by the neutralization within the unit cell of a superconductor. A superconductor with little or no magnetic field within it is said to be in the Meissner state. The Meissner state breaks down when the applied magnetic field is too strong. Superconductors can be divided into two classes according to how this breakdown occurs. In type-I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state consisting of a baroque pattern of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In type-II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the electric current as long as the current is not too large. Some type-II superconductors exhibit a small but finite resistance in the mixed state due to motion of the flux vortices induced by the Lorentz forces from the current. As the cores of the vortices are normal electrons, their motion will have dissipation. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are type I, while almost all impure and compound superconductors are type II. Explanation The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided ∇²H = H/λ², where H is the magnetic field and λ is the London penetration depth. This equation, known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface. This exclusion of the magnetic field is a manifestation of the superdiamagnetism that emerges during the phase transition from conductor to superconductor, for example by reducing the temperature below the critical temperature.
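For reference, the form of the London equation usually quoted in textbooks, together with its one-dimensional solution showing the exponential decay just described, can be written as follows (a standard result, not specific to the original 1935 papers):

```latex
% Standard form of the London equation referred to above, together with its
% one-dimensional solution (x measured inward from the surface), showing the
% exponential decay of the field over the penetration depth \lambda.
\[
  \nabla^{2}\mathbf{H} = \frac{\mathbf{H}}{\lambda^{2}}
  \qquad \Longrightarrow \qquad
  H(x) = H(0)\, e^{-x/\lambda},
\]
% so the field is essentially zero a few penetration depths (typically tens
% to hundreds of nanometres) below the surface.
```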
In a weak applied field (less than the critical field that breaks down the superconducting phase), a superconductor expels nearly all magnetic flux by setting up electric currents near its surface, as the magnetic field H induces magnetization M within the London penetration depth from the surface. These surface currents shield the internal bulk of the superconductor from the external applied field. As the field expulsion, or cancellation, does not change with time, the currents producing this effect (called persistent currents or screening currents) do not decay with time. Near the surface, within the London penetration depth, the magnetic field is not completely cancelled. Each superconducting material has its own characteristic penetration depth. Any perfect conductor will prevent any change to magnetic flux passing through its surface due to ordinary electromagnetic induction at zero resistance. However, the Meissner effect is distinct from this: when an ordinary conductor is cooled so that it makes the transition to a superconducting state in the presence of a constant applied magnetic field, the magnetic flux is expelled during the transition. This effect cannot be explained by infinite conductivity, but only by the London equation. The placement and subsequent levitation of a magnet above an already superconducting material does not demonstrate the Meissner effect, while an initially stationary magnet later being repelled by a superconductor as it is cooled below its critical temperature does. The persistent currents that exist in the superconductor to expel the magnetic field are commonly misconceived as a result of Lenz's law or Faraday's law. A reason this is not the case is that no change in flux was made to induce the current. Another explanation is that since the superconductor experiences zero resistance, there cannot be an induced emf in the superconductor. The persistent current therefore is not a result of Faraday's law. Perfect diamagnetism Superconductors in the Meissner state exhibit perfect diamagnetism, or superdiamagnetism, meaning that the total magnetic field is very close to zero deep inside them (many penetration depths from the surface). This means that their volume magnetic susceptibility is χv = −1. Diamagnetics are defined by the generation of a spontaneous magnetization of a material which directly opposes the direction of an applied field. However, the fundamental origins of diamagnetism in superconductors and normal materials are very different. In normal materials diamagnetism arises as a direct result of the orbital motion of electrons about the nuclei of an atom, induced electromagnetically by the application of an applied field. In superconductors the illusion of perfect diamagnetism arises from persistent screening currents which flow to oppose the applied field (the Meissner effect), not solely from the orbital motion. Consequences The discovery of the Meissner effect led to the phenomenological theory of superconductivity by Fritz and Heinz London in 1935. This theory explained resistanceless transport and the Meissner effect, and allowed the first theoretical predictions for superconductivity to be made. However, this theory only explained experimental observations—it did not allow the microscopic origins of the superconducting properties to be identified. This was done successfully by the BCS theory in 1957, from which the penetration depth and the Meissner effect result. However, some physicists argue that BCS theory does not explain the Meissner effect.
Paradigm for the Higgs mechanism The Meissner superconductivity effect serves as an important paradigm for the generation mechanism of a mass M (i.e., a reciprocal range λM = h/(Mc), where h is the Planck constant and c is the speed of light) for a gauge field. In fact, this analogy is an abelian example for the Higgs mechanism, which generates the masses of the electroweak W± and Z gauge particles in high-energy physics. The length λM is identical with the London penetration depth in the theory of superconductivity. See also Flux pinning Silsbee effect Superfluid References Further reading By the man who explained the Meissner effect. pp. 34–37 gives a technical discussion of the Meissner effect for a superconducting sphere. pp. 486–489 gives a simple mathematical discussion of the surface currents responsible for the Meissner effect, in the case of a long magnet levitated above a superconducting plane. A good technical reference. External links The Meissner effect - The Feynman Lectures on Physics Meissner Effect (Science from scratch) Short video from Imperial College London about the Meissner effect and levitating trains of the future. Introduction to superconductivity Video about Type 1 Superconductors: R = 0/Transition temperatures/B is a state variable/Meissner effect/Energy gap (Giaever)/BCS model. Meissner Effect (Hyperphysics) Historical Background of the Meissner Effect Magnetic levitation Quantum magnetism Superconductivity
Meissner effect
[ "Physics", "Materials_science", "Engineering" ]
1,729
[ "Physical quantities", "Superconductivity", "Quantum mechanics", "Materials science", "Quantum magnetism", "Condensed matter physics", "Electrical resistance and conductance" ]
51,129
https://en.wikipedia.org/wiki/Nash%20embedding%20theorems
The Nash embedding theorems (or imbedding theorems), named after John Forbes Nash Jr., state that every Riemannian manifold can be isometrically embedded into some Euclidean space. Isometric means preserving the length of every path. For instance, bending but neither stretching nor tearing a page of paper gives an isometric embedding of the page into Euclidean space because curves drawn on the page retain the same arclength however the page is bent. The first theorem is for continuously differentiable (C1) embeddings and the second for embeddings that are analytic or smooth of class Ck, 3 ≤ k ≤ ∞. These two theorems are very different from each other. The first theorem has a very simple proof but leads to some counterintuitive conclusions, while the second theorem has a technical and counterintuitive proof but leads to a less surprising result. The C1 theorem was published in 1954, the Ck-theorem in 1956. The real analytic theorem was first treated by Nash in 1966; his argument was simplified considerably by . (A local version of this result was proved by Élie Cartan and Maurice Janet in the 1920s.) In the real analytic case, the smoothing operators (see below) in the Nash inverse function argument can be replaced by Cauchy estimates. Nash's proof of the Ck- case was later extrapolated into the h-principle and Nash–Moser implicit function theorem. A simpler proof of the second Nash embedding theorem was obtained by who reduced the set of nonlinear partial differential equations to an elliptic system, to which the contraction mapping theorem could be applied. Nash–Kuiper theorem (C1 embedding theorem) Given an m-dimensional Riemannian manifold (M, g), an isometric embedding is a continuously differentiable topological embedding f: M → R^n such that the pullback of the Euclidean metric equals g. In analytical terms, this may be viewed (relative to a smooth coordinate chart) as a system of m(m + 1)/2 first-order partial differential equations for the n unknown (real-valued) functions f^1, ..., f^n: g_ij(x) = Σ_α (∂f^α/∂x_i)(∂f^α/∂x_j). If n is less than m(m + 1)/2, then there are more equations than unknowns. From this perspective, the existence of isometric embeddings given by the following theorem is considered surprising. Nash–Kuiper theorem. Let (M, g) be an m-dimensional Riemannian manifold and f a short smooth embedding (or immersion) into Euclidean space R^n, where n ≥ m + 1. This map is not required to be isometric. Then there is a sequence of continuously differentiable isometric embeddings (or immersions) of (M, g) which converge uniformly to f. The theorem was originally proved by John Nash with the stronger assumption n ≥ m + 2. His method was modified by Nicolaas Kuiper to obtain the theorem above. The isometric embeddings produced by the Nash–Kuiper theorem are often considered counterintuitive and pathological. They often fail to be smoothly differentiable. For example, a well-known theorem of David Hilbert asserts that the hyperbolic plane cannot be smoothly isometrically immersed into R^3. Any Einstein manifold of negative scalar curvature cannot be smoothly isometrically immersed as a hypersurface, and a theorem of Shiing-Shen Chern and Kuiper even says that any closed m-dimensional manifold of nonpositive sectional curvature cannot be smoothly isometrically immersed in R^(2m−1). Furthermore, some smooth isometric embeddings exhibit rigidity phenomena which are violated by the largely unrestricted choice of the short map f in the Nash–Kuiper theorem. For example, the image of any smooth isometric hypersurface immersion of the round sphere must itself be a round sphere.
By contrast, the Nash–Kuiper theorem ensures the existence of continuously differentiable isometric hypersurface immersions of the round sphere which are arbitrarily close to (for instance) a topological embedding of the sphere as a small ellipsoid. Any closed and oriented two-dimensional manifold can be smoothly embedded in R^3. Any such embedding can be scaled by an arbitrarily small constant so as to become short, relative to any given Riemannian metric on the surface. It follows from the Nash–Kuiper theorem that there are continuously differentiable isometric embeddings of any such Riemannian surface where the radius of a circumscribed ball is arbitrarily small. By contrast, no negatively curved closed surface can even be smoothly isometrically embedded in R^3. Moreover, for any smooth (or even C2) isometric embedding of an arbitrary closed Riemannian surface, there is a quantitative (positive) lower bound on the radius of a circumscribed ball in terms of the surface area and curvature of the embedded metric. In higher dimensions, as follows from the Whitney embedding theorem, the Nash–Kuiper theorem shows that any closed m-dimensional Riemannian manifold admits a continuously differentiable isometric embedding into an arbitrarily small neighborhood in 2m-dimensional Euclidean space. Although Whitney's theorem also applies to noncompact manifolds, such embeddings cannot simply be scaled by a small constant so as to become short. Nash proved that every m-dimensional Riemannian manifold admits a continuously differentiable isometric embedding into R^(2m+1). At the time of Nash's work, his theorem was considered to be something of a mathematical curiosity. The result itself has not found major applications. However, Nash's method of proof was adapted by Camillo De Lellis and László Székelyhidi to construct low-regularity solutions, with prescribed kinetic energy, of the Euler equations from the mathematical study of fluid mechanics. In analytical terms, the Euler equations have a formal similarity to the isometric embedding equations, via the quadratic nonlinearity in the first derivatives of the unknown function. The ideas of Nash's proof were abstracted by Mikhael Gromov to the principle of convex integration, with a corresponding h-principle. This was applied by Stefan Müller and Vladimír Šverák to Hilbert's nineteenth problem, constructing minimizers of minimal differentiability in the calculus of variations. Ck embedding theorem The technical statement appearing in Nash's original paper is as follows: if M is a given m-dimensional Riemannian manifold (analytic or of class Ck, 3 ≤ k ≤ ∞), then there exists a number n (with n ≤ m(3m+11)/2 if M is a compact manifold, and with n ≤ m(m+1)(3m+11)/2 if M is a non-compact manifold) and an isometric embedding ƒ: M → Rn (also analytic or of class Ck). That is, ƒ is an embedding of Ck manifolds and for every point p of M, the derivative dƒp is a linear map from the tangent space TpM to Rn which is compatible with the given inner product on TpM and the standard dot product of Rn in the following sense: ⟨u, v⟩ = dƒp(u) · dƒp(v) for all vectors u, v in TpM. When n is larger than m(m + 1)/2, this is an underdetermined system of partial differential equations (PDEs). The Nash embedding theorem is a global theorem in the sense that the whole manifold is embedded into Rn. A local embedding theorem is much simpler and can be proved using the implicit function theorem of advanced calculus in a coordinate neighborhood of the manifold.
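As a purely arithmetical check of the dimension bounds in the statement above, the case of a surface (m = 2) works out as follows; the bounds are as quoted and are known not to be sharp.

```latex
% Dimension bounds from the C^k statement above, evaluated for a surface (m = 2).
\[
  n \le \frac{m(3m+11)}{2} = \frac{2\,(6+11)}{2} = 17 \qquad \text{(compact case)},
\]
\[
  n \le \frac{m(m+1)(3m+11)}{2} = \frac{2 \cdot 3 \cdot 17}{2} = 51 \qquad \text{(non-compact case)},
\]
% so every compact C^k Riemannian surface admits a C^k isometric embedding
% into R^17 under this bound; later refinements give smaller ambient dimensions.
```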
The proof of the global embedding theorem relies on Nash's implicit function theorem for isometric embeddings. This theorem has been generalized by a number of other authors to abstract contexts, where it is known as Nash–Moser theorem. The basic idea in the proof of Nash's implicit function theorem is the use of Newton's method to construct solutions. The standard Newton's method fails to converge when applied to the system; Nash uses smoothing operators defined by convolution to make the Newton iteration converge: this is Newton's method with postconditioning. The fact that this technique furnishes a solution is in itself an existence theorem and of independent interest. In other contexts, the convergence of the standard Newton's method had earlier been proved by Leonid Kantorovitch. Citations General and cited references Riemannian geometry Riemannian manifolds Theorems in Riemannian geometry
Nash embedding theorems
[ "Mathematics" ]
1,697
[ "Riemannian manifolds", "Space (mathematics)", "Metric spaces" ]
51,138
https://en.wikipedia.org/wiki/Mail
The mail or post is a system for physically transporting postcards, letters, and parcels. A postal service can be private or public, though many governments place restrictions on private systems. Since the mid-19th century, national postal systems have generally been established as a government monopoly, with a fee on the article prepaid. Proof of payment is usually in the form of an adhesive postage stamp, but a postage meter is also used for bulk mailing. Postal authorities often have functions aside from transporting letters. In some countries, a postal, telegraph and telephone (PTT) service oversees the postal system, in addition to telephone and telegraph systems. Some countries' postal systems allow for savings accounts and handle applications for passports. The Universal Postal Union (UPU), established in 1874, includes 192 member countries and sets the rules for international mail exchanges as a Specialized Agency of the United Nations. Etymology The word mail comes from the Middle English word male, referring to a travelling bag or pack. It was spelled in that manner until the 17th century and is distinct from the unrelated adjective male. The French have a similar word, malle, for a trunk or large box, and mála is the Irish term for a bag. In the 17th century, the word mail began to appear as a reference for a bag that contained letters: "bag full of letter" (1654). Over the next hundred years the word mail began to be applied strictly to the letters themselves and the sack as the mailbag. In the 19th century, the British typically used mail to refer to letters being sent abroad (i.e. on a ship) and post to refer to letters for domestic delivery. The word post is derived from Old French poste, which ultimately stems from the past participle of the Latin verb ponere, 'to lay down or place'. So in the U.K., the Royal Mail delivers the post, while in North America both the U.S. Postal Service and Canada Post deliver the mail. The term email, short for "electronic mail", first appeared in the 1970s. The term snail mail is a retronym to distinguish it from the quicker email. Various dates have been given for its first use. History The practice of communication by written documents carried by an intermediary from one person or place to another almost certainly dates back nearly to the invention of writing. However, the development of formal postal systems occurred much later. The first documented use of an organized courier service for the dissemination of written documents is in Egypt, where Pharaohs used couriers to send out decrees throughout the territory of the state (2400 BCE). The earliest surviving piece of mail is also Egyptian, dating to 255 BCE. Persia (Iran) The first credible claim for the development of a real postal system comes from Ancient Persia. The best-documented claim (Xenophon) attributes the invention to the Persian King Cyrus the Great (550 BCE), who mandated that every province in his kingdom would organize reception and delivery of post to each of its citizens. Other writers credit his successor Darius I of Persia (521 BCE). Other sources claim much earlier dates for an Assyrian postal system, with credit given to Hammurabi (1700 BCE) and Sargon II (722 BCE). Mail may not have been the primary mission of this postal service, however. The role of the system as an intelligence gathering apparatus is well documented, and the service was (later) called angariae, a term that in time came to indicate a tax system.
The Old Testament (Esther, VIII) makes mention of this system: Ahasuerus, king of Medes, used couriers for communicating his decisions. The Persian system worked using stations (called Chapar-Khaneh), whence the message carrier (called Chapar) would ride to the next post, whereupon he would swap his horse with a fresh one for maximum performance and delivery speed. Herodotus described the system in this way: "It is said that as many days as there are in the whole journey, so many are the men and horses that stand along the road, each horse and man at the interval of a day's journey; and these are stayed neither by snow nor rain nor heat nor darkness from accomplishing their appointed course with all speed". The verse prominently features on New York's James Farley Post Office, although it uses the translation "Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds". India The economic growth and political stability under the Mauryan Empire (322–185 BCE) stimulated sustained development of civil infrastructure in ancient India. The Mauryans developed early Indian mail service as well as public wells, rest houses, and other facilities for the public. Common chariots called Dagana were sometimes used as mail chariots in ancient India. Couriers were used militarily by kings and local rulers to deliver information through runners and other carriers. The postmaster, the head of the intelligence service, was responsible for ensuring the maintenance of the courier system. Couriers were also used to deliver personal letters. In South India, the Wodeyar dynasty (1399–1947) of the Kingdom of Mysore used mail service for espionage purposes thereby acquiring knowledge related to matters that took place at great distances. By the end of the 18th century, a postal system in India was in operation. Later this system underwent complete modernization when the British Raj established its control over most of India. The Post Office Act XVII of 1837 provided that the Governor-General of India in Council had the exclusive right of conveying letters by post for hire within the territories of the East India Company. The mails were available to certain officials without charge, which became a controversial privilege as the years passed. On this basis the Indian Post Office was established on October 1, 1837. Rome The first well-documented postal service was that of Rome. Organized at the time of Augustus Caesar (62 BCE – 14 CE), the service was called cursus publicus and was provided with light carriages (rhedæ) pulled by fast horses. By the time of Diocletian, a parallel service was established with two-wheeled carts (birotæ) pulled by oxen. This service was reserved for government correspondence. Yet another service for citizens was later added. Vietnam In 1802, the first Vietnamese postal service was established under the Nguyen dynasty, under the Ministry of Rites. During the Nguyen dynasty, official documents were transported by horse and other primitive means to stations built about 25-30 kilometers apart. In 1904, three wireless communication offices were established, and in early 1906 they were merged with the postal service to form the Post and Wireless Office. In 1945, after the August Revolution, the Post and Wireless Office was renamed the Post Office under the Ministry of Transportation. In 1955, the Post Office was upgraded to the Ministry of Post. 
China Some Chinese sources claim mail or postal systems dating back to the Xia or Shang dynasties, which would be the oldest mailing service in the world. The earliest credible system of couriers was initiated by the Han dynasty (206 BCE – 220 CE), who had relay stations every 30 li (about 15 km) along major routes. The Tang dynasty (618 to 907 AD) operated a recorded 1,639 posthouses, including maritime offices, employing around 20,000 people. The system was administered by the Ministry of War and private correspondence was forbidden from the network. The Ming dynasty (1368 to 1644) sought a postal system to deliver mail quickly, securely, and cheaply. Adequate speed was always a problem, because of the slow overland transportation system, and underfunding. Its network had 1,936 posthouses every 60 li along major routes, with fresh horses available every 10 li between them. The Qing operated 1,785 posthouses throughout their lands. More efficient, however, was the system linking the international settlements, centered around Shanghai and the Treaty ports. It was the main communication system for China's international trade. Mongol Empire Genghis Khan installed an empire-wide messenger and postal station system named Örtöö within the Mongol Empire. During the Yuan dynasty under Kublai Khan, this system also covered the territory of China. Postal stations were used not only for the transmission and delivery of official mail but were also available for travelling officials, military men, and foreign dignitaries. These stations aided and facilitated the transport of foreign and domestic tribute specifically and the conduct of trade in general. By the end of Kublai Khan's rule, there were more than 1,400 postal stations in China alone, which in turn had at their disposal about 50,000 horses, 1,400 oxen, 6,700 mules, 400 carts, 6,000 boats, more than 200 dogs, and 1,150 sheep. The stations were spaced at regular intervals and had reliable attendants working for the mail service. Foreign observers, such as Marco Polo, have attested to the efficiency of this early postal system. Each station was maintained by up to twenty-five families. Work for the postal service counted as military service. The system was still operational in the 18th century, when 64 stations were required for a message to cross Mongolia from the Altai Mountains to China. Japan The modern Japanese system was developed in the mid-19th century, closely copying European models. Japan was highly innovative in developing the world's largest and most successful postal savings system and later a postal life insurance system as well. Postmasters play a key role in linking the Japanese political system to local politics. The postmaster positions carry high prestige, and are often hereditary. To a large extent, the postal system generated the enormous funding necessary to rapidly industrialize Japan in the late 19th century. Korea The postal service was one of Korea's first attempts at modernization. The Joseon Post Office was established in 1884. Other systems Another important postal service was created in the Islamic world by the caliph Mu'awiyya; the service was called barid, for the name of the towers built to protect the roads by which couriers travelled. By 3000 BC, Egypt was using homing pigeons for pigeon post, taking advantage of a singular quality of this bird, which when taken far from its nest is able to find its way home due to a particularly developed sense of orientation.
Messages were then tied around the legs of the pigeon, which was freed and could reach its original nest. By the 19th century, homing pigeons were used extensively for military communications. Charlemagne extended to the whole territory of his empire the system used by Franks in northern Gaul and connected this service with that of missi dominici. In the mid-11th century, flax traders known as the Cairo Geniza Merchants from Fustat, Egypt wrote about using a postal service known as the kutubi. The kutubi system managed routes between the cities of Jerusalem, Ramla, Tyre, Ascalon, Damascus, Aleppo, and Fustat with year-round, regular mail delivery. Many religious orders had a private mail service. Notably, the Cistercians had one which connected more than 6,000 abbeys, monasteries, and churches. The best organization, however, was created by the Knights Templar. In 1716, Correos y Telégrafos was established in Spain as public mail service, available to all citizens. Delivery postmen were first employed in 1756 and post boxes were installed firstly in 1762. Thurn und Taxis In 1505, Holy Roman Emperor Maximilian I established a postal system in the Empire, appointing Franz von Taxis to run it. This system, originally the Kaiserliche Reichspost, is often considered the first modern postal service in the world, which initiated a revolution in communication in Europe. The system combined contemporary technical and organization means to create a stable transcontinental service which was also the first to offer (fee-based) public access. The Thurn und Taxis family, then known as Tassis, had operated postal services between Italian city-states from 1290 onward. For 500 years the postal business based in Brussels and in Frankfurt was passed from one generation to another. Following the abolition of the Empire in 1806, the Thurn-und-Taxis Post system continued as a private organization into the postage stamp era before being absorbed into the postal system of the new German Empire after 1871. 1867 July 1 the State of Prussia had to make a compensation payment of 3.000.000 Thalers reinvested by Helene von Thurn & Taxis, daughter-in-law of the last postmaster, Maximilian Karl, 6th Prince of Thurn and Taxis, into real estate, most of it continuing to exist today. The Phone Book of the World has its roots in the long history of the avant-garde telecommunications family Thurn & Taxis. The directory is the result of Johannes, 11th Prince of Thurn & Taxis transmitting PTT culture to a student and helping with the opening of a small Telephone Boutique next to a historic Postal mansion his ancestors used to go to centuries earlier. Several European Post Carriers like Deutsche Post or Austrian Post continue to use the Thurn & Taxis Post Horn in their company logo just like the global Phone Book of the World based in the old Postal mansion of King Louis XIV in Paris. Postal reforms In the United Kingdom, prior to 1840 letters were paid for by the recipient and the cost was determined by the distance from sender to recipient and the number of sheets of paper rather than by a countrywide flat rate with weight restrictions. Sir Rowland Hill reformed the postal system based on the concepts of penny postage and prepayment. In his proposal, Hill also called for official pre-printed envelopes and adhesive postage stamps as alternative ways of getting the sender to pay for the postage, at a time when prepayment was optional, which led to the invention of the postage stamp, the Penny Black. 
Modern transport and technology The postal system was important in the development of modern transportation. Railways carried railway post offices. During the 20th century, air mail became the transport of choice for inter-continental mail. Postmen started to use mail trucks. The handling of mail became increasingly automated. The Internet came to change the conditions for physical mail. Email (and in recent years social networking sites) became a fierce competitor to physical mail systems, but online auctions and Internet shopping opened new business opportunities as people often get items bought online through the mail. Modern mail Modern mail is organized by national and privatized services, which are reciprocally connected by international regulations, organizations and international agreements. Paper letters and parcels can be sent to almost any country in the world relatively easily and cheaply. The Internet has made the process of sending letter-like messages nearly instantaneous, and in many cases and situations correspondents use email where they previously would have used letters. The volume of paper mail sent through the U.S. Postal Service has declined by more than 15% since its peak at 213 billion pieces per annum in 2006. Organization Some countries have organized their mail services as public limited liability corporations without a legal monopoly. The worldwide postal system constituting the individual national postal systems of the world's self-governing states is coordinated by the Universal Postal Union, which among other things sets international postage rates, defines standards for postage stamps and operates the system of international reply coupons. In most countries a system of codes has been created (referred to as ZIP codes in the United States, postcodes in the United Kingdom and Australia, eircodes in Ireland and postal codes in most other countries) in order to facilitate the automation of operations. This also includes placing additional marks on the address portion of the letter or mailed object, called "bar coding". Bar coding of mail for delivery is usually expressed either by a series of vertical bars, usually called POSTNET coding or a block of dots as a two-dimensional barcode. The "block of dots" method allows for the encoding of proof of payment of postage, exact routing for delivery, and other features. The ordinary mail service was improved in the 20th century with the use of planes for a quicker delivery. The world's first scheduled airmail post service took place in the United Kingdom between the London suburbs of Hendon and Windsor, Berkshire, on 9 September 1911. Some methods of airmail proved ineffective, however, including the United States Postal Service's experiment with rocket mail. Receipt services were made available in order to grant the sender a confirmation of effective delivery. Payment Before about the mid-nineteenth century, in regions where postal systems existed, the payment models varied, but most mail was sent unpaid requiring the recipient to pay the postage fee. In some regions a partial payment was made by the sender. Today, worldwide, the most common method of prepaying postage is by buying an adhesive postage stamp to be applied to the envelope before mailing; a much less common method is to use a postage-prepaid envelope. Franking is a method of creating postage-prepaid envelopes under licence using a special machine. They are used by companies with large mail programs, such as banks and direct mail companies. 
In 1998, the U.S. Postal Service authorised the first tests of a secure system of sending digital franks via the Internet to be printed out on a PC printer, obviating the necessity to license a dedicated franking machine and allowing companies with smaller mail programs to make use of the option; this was later expanded to test the use of personalized postage. The service provided by the U.S. Postal Service in 2003 allows the franks to be printed out on special adhesive-backed labels. In 2004 the Royal Mail in the United Kingdom introduced its SmartStamp Internet-based system, allowing printing on ordinary adhesive labels or envelopes. Similar systems are being considered by postal administrations around the world. When the pre-paid envelope or package is accepted into the mail by an agent of the postal service, the agent usually indicates by means of a cancellation that it is no longer valid for pre-payment of postage. The exceptions are when the agent forgets or neglects to cancel the mailpiece, for stamps that are pre-cancelled and thus do not require cancellation and for, in most cases, metered mail. (The "personalized stamps" authorized by the USPS and manufactured by Zazzle and other companies are in fact a form of meter label and thus do not need to be cancelled.) Privacy and censorship Documents should generally not be read by anyone other than the addressee; for example, in the United States of America it is a violation of federal law for anyone other than the addressee and the government to open mail. There are exceptions however: executives often assign secretaries or assistants the task of handling their mail; and postcards do not require opening and can be read by anyone. For mail contained within an envelope, there are legal provisions in some jurisdictions allowing the recording of identities of sender and recipient. The privacy of correspondence is guaranteed by the constitutions of Mexico, Colombia, Brazil and Venezuela, and is alluded to in the European Convention on Human Rights and the Universal Declaration of Human Rights. The control of the contents inside private citizens' mail is censorship and concerns social, political, and legal aspects of civil rights. International mail and packages are subject to customs control, with the mail and packages often surveyed and their contents sometimes edited out (or even in). There have been cases over the millennia of governments opening and copying or photographing the contents of private mail. Subject to the laws in the relevant jurisdiction, correspondence may be openly or covertly opened, or the contents determined via some other method, by the police or other authorities in some cases relating to a suspected criminal conspiracy, although black chambers (largely in the past, though there is apparently some continuance of their use today) opened extralegally. The mail service may be allowed to open the mail if neither addressee nor sender can be located, in order to attempt to locate either. Mail service may also open the mail to inspect if it contains materials that are hazardous to transport or violate local laws. While in most cases mail censorship is exceptional, military mail to and from soldiers is often subject to surveillance. The mail is censored to prevent leaking tactical secrets, such as troop movements or weather conditions. Depending on the country, civilian mail containing military secrets can also be monitored and censored. 
Mail sent to and from inmates in jails or prisons within the United States is subject to opening and review by jail or prison staff to determine whether the mail contains instructions for criminal activity or provides means for an escape. The only mail that may not be read is attorney-client mail, which is covered under attorney-client confidentiality laws in the United States. Rise of electronic correspondence Modern alternatives, such as the telegraph, telephone, telex, facsimile, and email, have reduced the attractiveness of paper mail for many applications. These modern alternatives have some advantages: in addition to their speed, they may be more secure, e.g., because the general public cannot learn the address of the sender or recipient from the envelope; moreover, traditional items of mail occasionally fail to arrive, e.g. due to vandalism to mailboxes, unfriendly pets, and adverse weather conditions. Mail carriers, due to perceived hazards or inconveniences, may refuse, officially or otherwise, to deliver mail to a particular address (for instance, if there is no clear path to the door or mailbox). On the other hand, traditional mail avoids the possibility of computer malfunctions and malware, and the recipient does not need to print it out if they wish to have a paper copy, though scanning is required to make a digital copy. Physical mail is still widely used in business and personal communications for such reasons as legal requirements for signatures, requirements of etiquette, and the requirement to enclose small physical objects. Since the advent of email, which is almost always much faster, the postal system has come to be referred to in Internet slang by the retronym "snail mail". Occasionally, the term "white mail" has also been used as a neutral term for postal mail. Mainly during the 20th century, experimentation with hybrid mail has combined electronic and paper delivery. Electronic mechanisms include telegram, telex, facsimile (fax), email, and short message service (SMS). There have been services that combined mail with some of these newer methods, such as temporary emails that combined facsimile transmission with overnight delivery. These media commonly use mechanical or electro-mechanical standardised writing (typing), which on the one hand makes for more efficient communication, but on the other hand rules out characteristics and practices traditionally found in conventional mail, such as calligraphy. This era is dominated by mechanical writing, generally using no more than half a dozen standard typographic fonts from standard keyboards. However, the increased use of typewritten or computer-printed letters for personal communication and the advent of email have sparked renewed interest in calligraphy, as a letter has become more of a "special event". Long before email and computer-printed letters, however, decorated envelopes, rubber stamps and artistamps formed part of the medium of mail art. In the 2000s, with the advent of eBay and other online auction sites and online stores, postal services in industrialized nations have seen a major shift to item shipping. This has been seen as a boost to the system's usage in the wake of lower paper mail volume due to the accessibility of email. Online post offices have emerged to give recipients a means of receiving traditional correspondence mail in a scanned electronic format. Collecting Postage stamps are also the object of a particular form of collecting. 
Stamp collecting has been a very popular hobby. In some cases, when demand greatly exceeds supply, their commercial value on this specific market may become enormously greater than face value, even after use. For some postal services the sale of stamps to collectors who will never use them is a significant source of revenue; for example, stamps from Tokelau, South Georgia & South Sandwich Islands, Tristan da Cunha, Niuafoʻou and many others. Stamp collecting is commonly known as philately, although strictly the latter term refers to the study of stamps. Another form of collecting regards postcards, a document written on a single robust sheet of paper, usually decorated with photographic pictures or artistic drawings on one of the sides, and short messages on a small part of the other side, that also contained the space for the address. In strict philatelic usage, the postcard is to be distinguished from the postal card, which has a pre-printed postage on the card. The fact that this communication is visible by other than the receiver often causes the messages to be written in jargon. Letters are often studied as an example of literature, and also in biography in the case of a famous person. A portion of the New Testament of the Bible is composed of the Apostle Paul's epistles to Christian congregations in various parts of the Roman Empire. See below for a list of famous letters. A style of writing, called epistolary, tells a fictional story in the form of the correspondence between two or more characters. A makeshift mail method after stranding on a deserted island is a message in a bottle. Deregulation Numerous countries, including Sweden (1 January 1993), New Zealand (1998 and 2003), Germany (2005 and 2007), Argentina and Chile opened up the postal services market to new entrants. In the case of New Zealand Post Limited, this included (from 2003) its right to be the sole New Zealand postal administration member of the Universal Postal Union, thus the ending of its monopoly on stamps bearing the name New Zealand. Types Letters Letter-sized mail constitutes the bulk of the contents sent through most postal services. These are usually documents printed on A4 (210×297 mm), Letter-sized (8.5×11 inches), or smaller paper and placed in envelopes. Handwritten correspondence, while once a major means of communications between distant people, is now used less frequently due to the advent of more immediate forms of communication, such as the telephone or email. Traditional letters, however, are often considered to hark back to a "simpler time" and are still used when someone wishes to be deliberate and thoughtful about their communication. An example would be a letter of sympathy to a bereaved person. Bills and invoices are often sent through the mail, like regular billing correspondence from utility companies and other service providers. These letters often contain a self-addressed envelope that allows the receiver to remit payment back to the company easily. While still very common, many people now opt to use online bill payment services, which eliminate the need to receive bills through the mail. Paperwork for the confirmation of large financial transactions is often sent through the mail. Many tax documents are as well. New credit cards and their corresponding personal identification numbers are sent to their owners through the mail. The card and number are usually mailed separately several days or weeks apart for security reasons. 
Bulk mail is mail that is prepared for bulk mailing, often by presorting, and processed at reduced rates. It is often used in direct marketing and other advertising mail, although it has other uses as well. The senders of these messages sometimes purchase lists of addresses (which are sometimes targeted towards certain demographics) and then send letters advertising their product or service to all recipients. Other times, commercial solicitations are sent by local companies advertising local products, like a restaurant delivery service advertising to its delivery area or a retail store sending its weekly advertising circular to a general area. Bulk mail is also often sent to companies' existing subscriber bases, advertising new products or services. First-Class First-Class Mail in the U.S. includes postcards, letters, large envelopes (flats), and small packages, provided each piece weighs 13 ounces or less. Delivery is given priority over second-class (newspapers and magazines), third-class (bulk advertisements), and fourth-class mail (books and media packages). First-Class Mail prices are based on both the shape and weight of the item being mailed. Pieces over 13 ounces can be sent as Priority Mail. As of 2011, 42% of First-Class Mail arrived the next day, 27% in two days, and 31% in three. The USPS expected that changes to the service in 2012 would cause about 51% to arrive in two days and most of the rest in three. The Canada Post counterpart is Lettermail. The British Royal Mail's 1st Class, as it is styled, is simply a priority option over 2nd Class, at a slightly higher cost. Royal Mail aims (but does not guarantee) to deliver all 1st Class letters the day after posting. In Austria priority delivery mail is called Prio, in Switzerland A-Post. Registered and recorded mail Registered mail allows the location and in particular the correct delivery of a letter to be tracked. It is usually considerably more expensive than regular mail, and is typically used for valuable items. Registered mail is constantly tracked through the system. Recorded mail is handled just like ordinary mail with the exception that it has to be signed for on receipt. This is useful for legal documents where proof of delivery is required. In the United Kingdom, recorded delivery mail (branded as Signed For by the Royal Mail) is covered by the Recorded Delivery Services Act 1962. Under this legislation, any document which the relevant law requires to be served by registered post can also be lawfully served by recorded delivery. Repositionable notes The United States Postal Service introduced a test allowing "repositionable notes" (for example, 3M's Post-it notes) to be attached to the outside of envelopes and bulk mailings, afterwards extending the test for an unspecified period. The repositionable note may be fixed directly to the address side of First-Class Mail and Standard Mail letter-size mailpieces. These mailpieces must meet the standards in 7.2 through 7.6. The note is included as an integral part of the mailpiece for weight and postage rate and must be accounted for in pricing. 
Postal cards and postcards Postal cards and postcards are small message cards that are sent by mail unenveloped; the distinction often, though not invariably and reliably, drawn between them is that "postal cards" are issued by the postal authority or entity with the "postal indicia" (or "stamp") preprinted on them, while postcards are privately issued and require affixing an adhesive stamp (though there have been some cases of a postal authority's issuing non-stamped postcards). Postcards are often printed to promote tourism, with pictures of resorts, tourist attractions or humorous messages on the front and allowing for a short message from the sender to be written on the back. The postage required for postcards is generally less than postage required for standard letters; however, certain technicalities such as their being oversized or having cut-outs, may result in payment of the first-class rate being required. Postcards are also used by magazines for new subscriptions. Inside many magazines are postage-paid subscription cards that a reader can fill out and mail back to the publishing company to be billed for a subscription to the magazine. In this fashion, magazines also use postcards for other purposes, including reader surveys, contests or information requests. Postcards are sometimes sent by charities to their members with a message to be signed and sent to a politician (e.g. to promote fair trade or third world debt cancellation). Other mail services Small packets are usually less than 2 kg (4 lb). Larger envelopes are also sent through the mail. These are often composed of a stronger material than standard envelopes and are often used by businesses to transport documents that may not be folded or damaged, such as legal documents and contracts. Due to their size, larger envelopes are sometimes charged additional postage. Packages are often sent through some postal services, usually requiring additional postage than an average letter or postcard. Many postal services have limitations as to what a package may or may not contain, usually placing limits or bans on perishable, hazardous or flammable materials. Some hazardous materials in limited quantities may be shipped with appropriate markings and packaging, like an ORM-D label. Additionally, as a result of terrorism concerns, the U.S. Postal Service subjects their packages to numerous security tests, often scanning or x-raying packages for materials that might be found in biological materials or mail bombs. Newspapers and magazines are also sent through postal services. Many magazines are simply deposited in the mail like any other mailpiece. In the U.S., they are printed with a special Intelligent Mail barcode that acts as prepaid postage. Other magazines are now shipped in shrinkwrap to protect loose contents such as blow-in cards. During the second half of the 19th century and the first half of the 20th century, newspapers and magazines were normally posted using wrappers with a stamp imprint. Hybrid mail, sometimes referred to as L-mail, is the electronic lodgement of mail from the mail generator's computer directly to a Postal Service provider. The Postal Service provider is then able to use electronic means to have the mail piece sorted, routed and physically produced at a site closest to the delivery point. It is a type of mail growing in popularity with some Post Office operations and individual businesses venturing into this market. 
In some countries, these services are available to print and deliver emails to those who are unable to receive email, such as the elderly or infirm. Services provided by Hybrid mail providers are closely related to that of mail forwarding service providers. Business model The business model of modern postal operators can be broken down to four stages: (1) collection, (2) sorting, (3) transportation, and (4) delivery. Collection is the gathering of mailpieces from various locations such as customer premises, post boxes, and post offices. Newly collected mail is normally not sorted immediately upon receipt and is instead taken directly in its unsorted state to sorting centers. Sorting is the process of segregating mailpieces into groups based on their type and destination, so that they can be loaded onto an appropriate mode of transportation headed in the general direction of their final destinations. Traditionally, mail was manually sorted by hand, but it is increasingly sorted by automatic sorting machines. The main dilemma faced by postal operators when organizing the sorting stage is whether to have a smaller number of large, centralized sorting centers (a spoke–hub distribution paradigm) or a larger number of smaller sorting centers along with a larger number of direct connections between all of them (point-to-point transit). Transportation is the process of carrying mail from one place to another. A mailpiece usually has to be transported from one sorting center to another sorting center, where it is often sorted to another transportation segment headed towards its destination address, until it reaches the sorting center that directly serves that address. Delivery is the process of carrying mail to final destinations such as letter boxes. Sorting centers sort mailpieces destined for addresses in their immediate vicinity to carriers serving those addresses. Transporting mail to final destinations is the most labor-intensive stage and accounts for up to 50% of postal operators' expenses. Depending upon the final destination, carriers often use vehicles, their own feet, or a combination of both. Postal operators try to control costs by presorting mail for carriers, so that they receive mail already arranged in the correct sequence for their designated routes; reducing the frequency of deliveries; or retiming deliveries so that they are spread throughout the day. See also See Category Postal organizations, examples being: Deutsche Post DHL Group, Germany La Poste (France) Poste Italiane, Italy Royal Mail, British Post Office Limited, British United States Postal Service Express mail EPPML Parcel (package) Shipping insurance Universal Postal Union List of postal entities Components of a postal system: Fire sign (address) Letter box Mail carrier Mail bag Mail train Packstation Post box Post office Post-office box Postage rate Postal code Notes Further reading Daunton, M. J. Royal Mail: The Post Office Since 1840 (Athlone, 1985), Great Britain. Hemmeon, Joseph Clarence. The history of the British post office (Harvard University Press, 1912) online. John, Richard R. Spreading the News: The American Postal System from Franklin to Morse (1995) excerpt Le Roux, Muriel, et al. eds. A Concise History of the French Post Office: From Its Origins to the Present Time (2018) External links A Hundred Years by Post by J. Wilson Hyde Potts, Albert, " (First U.S. street mailbox patent)". US patent office. 
1858 The British Postal Museum & Archive From Thurn & Taxis to Phone Book of the World - 7 centuries of Telecom History Royal Engineers Museum British Army Postal Services History James Meek, London Review of Books, 28 April 2011, In the Sorting Office, 33(9) U.S. National Postal Museum, a part of the Smithsonian Institution Universal Postal Union, a part of the United Nations
Mail
[ "Technology" ]
7,583
[ "Transport systems", "Postal systems" ]
51,143
https://en.wikipedia.org/wiki/Giant-impact%20hypothesis
The giant-impact hypothesis, sometimes called the Theia Impact, is an astrogeology hypothesis for the formation of the Moon first proposed in 1946 by Canadian geologist Reginald Daly. The hypothesis suggests that the Early Earth collided with a Mars-sized protoplanet in roughly the same orbit approximately 4.5 billion years ago in the early Hadean eon (about 20 to 100 million years after the Solar System coalesced), and the ejecta of the impact event later accreted to form the Moon. The impactor planet is sometimes called Theia, named after the mythical Greek Titan who was the mother of Selene, the goddess of the Moon. Analysis of lunar rocks published in a 2016 report suggests that the impact might have been a direct hit, causing a fragmentation and thorough mixing of both parent bodies. The giant-impact hypothesis is currently the favored hypothesis for lunar formation among astronomers. Evidence that supports this hypothesis includes: The Moon's orbit has a similar orientation to Earth's rotation, both of which are at a similar angle to the ecliptic plane of the Solar System. The stable isotope ratios of lunar and terrestrial rock are identical, implying a common origin. The Earth–Moon system contains an anomalously high angular momentum, meaning the momentum contained in Earth's rotation, the Moon's rotation and the Moon revolving around Earth is significantly higher than that of the other terrestrial planets. A giant impact might have supplied this excess momentum. Moon samples indicate that the Moon was once molten to a substantial, but unknown, depth. This might have required much more energy than predicted to be available from the accretion of a celestial body of the Moon's size and mass. An extremely energetic process, such as a giant impact, could provide this energy. The Moon has a relatively small iron core, which gives it a much lower density than Earth. Computer models of a giant impact of a Mars-sized body with Earth indicate the impactor's core would likely penetrate deep into Earth and fuse with Earth's core. This would leave the Moon, which was formed from the ejecta of lighter crust and mantle fragments that went beyond the Roche limit and were not pulled back by gravity to re-fuse with Earth, with less remaining metallic iron than other planetary bodies. The Moon is depleted in volatile elements compared to Earth. Vaporizing at comparatively lower temperatures, they could be lost in a high-energy event, with the Moon's smaller gravity unable to recapture them while Earth did. There is evidence in other star systems of similar collisions, resulting in debris discs. Giant collisions are consistent with the leading theory of the formation of the Solar System. However, several questions remain concerning the best current models of the giant-impact hypothesis. The energy of such a giant impact is predicted to have heated Earth to produce a global magma ocean, and evidence of the resultant planetary differentiation of the heavier material sinking into Earth's mantle has been documented. However, there is no self-consistent model that starts with the giant-impact event and follows the evolution of the debris into a single moon. Other remaining questions include when the Moon lost its share of volatile elements and why Venus, which experienced giant impacts during its formation, does not host a similar moon. History In 1898, George Darwin made the suggestion that Earth and the Moon were once a single body. 
Darwin's hypothesis was that a molten Moon had been spun from Earth because of centrifugal forces, and this became the dominant academic explanation. Using Newtonian mechanics, he calculated that the Moon had orbited much more closely in the past and was drifting away from Earth. This drifting was later confirmed by American and Soviet experiments, using laser ranging targets placed on the Moon. Nonetheless, Darwin's calculations could not resolve the mechanics required to trace the Moon back to the surface of Earth. In 1946, Reginald Aldworth Daly of Harvard University challenged Darwin's explanation, adjusting it to postulate that the creation of the Moon was caused by an impact rather than centrifugal forces. Little attention was paid to Professor Daly's challenge until a conference on satellites in 1974, during which the idea was reintroduced and later published and discussed in Icarus in 1975 by William K. Hartmann and Donald R. Davis. Their models suggested that, at the end of the planet formation period, several satellite-sized bodies had formed that could collide with the planets or be captured. They proposed that one of these objects might have collided with Earth, ejecting refractory, volatile-poor dust that could coalesce to form the Moon. This collision could potentially explain the unique geological and geochemical properties of the Moon. A similar approach was taken by Canadian astronomer Alastair G. W. Cameron and American astronomer William R. Ward, who suggested that the Moon was formed by the tangential impact upon Earth of a body the size of Mars. It is hypothesized that most of the outer silicates of the colliding body would be vaporized, whereas a metallic core would not. Hence, most of the collisional material sent into orbit would consist of silicates, leaving the coalescing Moon deficient in iron. The more volatile materials that were emitted during the collision probably would escape the Solar System, whereas silicates would tend to coalesce. Eighteen months prior to an October 1984 conference on lunar origins, Bill Hartmann, Roger Phillips, and Jeff Taylor challenged fellow lunar scientists: "You have eighteen months. Go back to your Apollo data, go back to your computer, and do whatever you have to, but make up your mind. Don't come to our conference unless you have something to say about the Moon's birth." At the 1984 conference at Kona, Hawaii, the giant-impact hypothesis emerged as the most favored hypothesis. Theia The name of the hypothesised protoplanet is derived from the mythical Greek titan Theia, who gave birth to the Moon goddess Selene. This designation was proposed initially by the English geochemist Alex N. Halliday in 2000 and has become accepted in the scientific community. According to modern theories of planet formation, Theia was part of a population of Mars-sized bodies that existed in the Solar System 4.5 billion years ago. One of the attractive features of the giant-impact hypothesis is that the formation of the Moon and Earth align; during the course of its formation, Earth is thought to have experienced dozens of collisions with planet-sized bodies. The Moon-forming collision would have been only one such "giant impact" but certainly the last significant impactor event. The Late Heavy Bombardment by much smaller asteroids may have occurred later, approximately 3.9 billion years ago. 
Basic model Astronomers think the collision between Earth and Theia happened at about 4.4 to 4.45 billion years ago (bya); about 0.1 billion years after the Solar System began to form. In astronomical terms, the impact would have been of moderate velocity. Theia is thought to have struck Earth at an oblique angle when Earth was nearly fully formed. Computer simulations of this "late-impact" scenario suggest an initial impactor velocity below at "infinity" (far enough that gravitational attraction is not a factor), increasing as it approached to over at impact, and an impact angle of about 45°. However, oxygen isotope abundance in lunar rock suggests "vigorous mixing" of Theia and Earth, indicating a steep impact angle. Theia's iron core would have sunk into the young Earth's core, and most of Theia's mantle accreted onto Earth's mantle. However, a significant portion of the mantle material from both Theia and Earth would have been ejected into orbit around Earth (if ejected with velocities between orbital velocity and escape velocity) or into individual orbits around the Sun (if ejected at higher velocities). Modelling has hypothesised that material in orbit around Earth may have accreted to form the Moon in three consecutive phases; accreting first from the bodies initially present outside Earth's Roche limit, which acted to confine the inner disk material within the Roche limit. The inner disk slowly and viscously spread back out to Earth's Roche limit, pushing along outer bodies via resonant interactions. After several tens of years, the disk spread beyond the Roche limit, and started producing new objects that continued the growth of the Moon, until the inner disk was depleted in mass after several hundreds of years. Material in stable Kepler orbits was thus likely to hit the Earth–Moon system sometime later (because the Earth–Moon system's Kepler orbit around the Sun also remains stable). Estimates based on computer simulations of such an event suggest that some twenty percent of the original mass of Theia would have ended up as an orbiting ring of debris around Earth, and about half of this matter coalesced into the Moon. Earth would have gained significant amounts of angular momentum and mass from such a collision. Regardless of the speed and tilt of Earth's rotation before the impact, it would have experienced a day some five hours long after the impact, and Earth's equator and the Moon's orbit would have become coplanar. Not all of the ring material need have been swept up right away: the thickened crust of the Moon's far side suggests the possibility that a second moon about in diameter formed in a Lagrange point of the Moon. The smaller moon may have remained in orbit for tens of millions of years. As the two moons migrated outward from Earth, solar tidal effects would have made the Lagrange orbit unstable, resulting in a slow-velocity collision that "pancaked" the smaller moon onto what is now the far side of the Moon, adding material to its crust. Lunar magma cannot pierce through the thick crust of the far side, causing fewer lunar maria, while the near side has a thin crust displaying the large maria visible from Earth. Above a high resolution threshold for simulations, a study published in 2022 finds that giant impacts can immediately place a satellite with similar mass and iron content to the Moon into orbit far outside Earth's Roche limit. 
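For orientation, the Roche limit that recurs in this section can be estimated with the standard fluid-body approximation, d ≈ 2.44 R_primary (ρ_primary/ρ_satellite)^(1/3). The Python sketch below uses round present-day values for Earth and the Moon and is only an order-of-magnitude illustration, not a figure taken from the simulations cited here.

# Back-of-the-envelope fluid Roche limit for the Earth-Moon pair.
R_EARTH_KM = 6371.0   # mean radius of Earth, km
RHO_EARTH = 5514.0    # mean density of Earth, kg/m^3
RHO_MOON = 3344.0     # mean density of the Moon, kg/m^3

def fluid_roche_limit_km(r_primary_km, rho_primary, rho_satellite):
    # d ~= 2.44 * R_primary * (rho_primary / rho_satellite)**(1/3)
    # for a satellite held together only by its own gravity.
    return 2.44 * r_primary_km * (rho_primary / rho_satellite) ** (1.0 / 3.0)

d = fluid_roche_limit_km(R_EARTH_KM, RHO_EARTH, RHO_MOON)
print(f"{d:.0f} km, about {d / R_EARTH_KM:.1f} Earth radii")  # roughly 18,400 km (~2.9 Earth radii)

Debris that ends up beyond this distance can accrete into a satellite, which is why the distinction between material inside and outside the Roche limit matters in the accretion phases described above.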
Even satellites that initially pass within the Roche limit can reliably and predictably survive, by being partially stripped and then torqued onto wider, stable orbits. Furthermore, the outer layers of these directly formed satellites are molten over cooler interiors and are composed of around 60% proto-Earth material. This could alleviate the tension between the Moon's Earth-like isotopic composition and the different signature expected for the impactor. Immediate formation opens up new options for the Moon's early orbit and evolution, including the possibility of a highly tilted orbit to explain the lunar inclination, and offers a simpler, single-stage scenario for the origin of the Moon. Composition In 2001, a team at the Carnegie Institution of Washington reported that the rocks from the Apollo program carried an isotopic signature that was identical with rocks from Earth, and were different from almost all other bodies in the Solar System. In 2014, a team in Germany reported that the Apollo samples had a slightly different isotopic signature from Earth rocks. The difference was slight, but statistically significant. One possible explanation is that Theia formed near Earth. This empirical data showing close similarity of composition can be explained only by the standard giant-impact hypothesis, as it is extremely unlikely that two bodies prior to collision had such similar composition. Equilibration hypothesis In 2007, researchers from the California Institute of Technology showed that the likelihood of Theia having an identical isotopic signature as Earth was very small (less than 1 percent). They proposed that in the aftermath of the giant impact, while Earth and the proto-lunar disc were molten and vaporised, the two reservoirs were connected by a common silicate vapor atmosphere and that the Earth–Moon system became homogenised by convective stirring while the system existed in the form of a continuous fluid. Such an "equilibration" between the post-impact Earth and the proto-lunar disc is the only proposed scenario that explains the isotopic similarities of the Apollo rocks with rocks from Earth's interior. For this scenario to be viable, however, the proto-lunar disc would have to endure for about 100 years. Work is ongoing to determine whether or not this is possible. Direct collision hypothesis According to research (2012) to explain similar compositions of the Earth and the Moon based on simulations at the University of Bern by physicist Andreas Reufer and his colleagues, Theia collided directly with Earth instead of barely swiping it. The collision speed may have been higher than originally assumed, and this higher velocity may have totally destroyed Theia. According to this modification, the composition of Theia is not so restricted, making a composition of up to 50% water ice possible. Synestia hypothesis One effort, in 2018, to homogenise the products of the collision was to energise the primary body by way of a greater pre-collision rotational speed. This way, more material from the primary body would be spun off to form the Moon. Further computer modelling determined that the observed result could be obtained by having the pre-Earth body spinning very rapidly, so much so that it formed a new celestial object which was given the name 'synestia'. This is an unstable state that could have been generated by yet another collision to get the rotation spinning fast enough. 
Further modelling of this transient structure has shown that the primary body, spinning as a doughnut-shaped object (the synestia), existed for about a century (a very short time) before it cooled down and gave birth to Earth and the Moon. Terrestrial magma ocean hypothesis Another model, in 2019, to explain the similarity of Earth and the Moon's compositions posits that shortly after Earth formed, it was covered by a sea of hot magma, while the impacting object was likely made of solid material. Modelling suggests that this would lead to the impact heating the magma much more than solids from the impacting object, leading to more material being ejected from the proto-Earth, so that about 80% of the Moon-forming debris originated from the proto-Earth. Many prior models had suggested that 80% of the Moon came from the impactor. Evidence Indirect evidence for the giant impact scenario comes from rocks collected during the Apollo Moon landings, which show oxygen isotope ratios nearly identical to those of Earth. The highly anorthositic composition of the lunar crust, as well as the existence of KREEP-rich samples, suggest that a large portion of the Moon once was molten; and a giant impact scenario could easily have supplied the energy needed to form such a magma ocean. Several lines of evidence show that if the Moon has an iron-rich core, it must be a small one. In particular, the mean density, moment of inertia, rotational signature, and magnetic induction response of the Moon all suggest that the radius of its core is less than about 25% the radius of the Moon, in contrast to about 50% for most of the other terrestrial bodies. Appropriate impact conditions satisfying the angular momentum constraints of the Earth–Moon system yield a Moon formed mostly from the mantles of Earth and the impactor, while the core of the impactor accretes to Earth. Earth has the highest density of all the planets in the Solar System; the absorption of the core of the impactor body explains this observation, given the proposed properties of the early Earth and Theia. Comparison of the zinc isotopic composition of lunar samples with that of Earth and Mars rocks provides further evidence for the impact hypothesis. Zinc is strongly fractionated when volatilised in planetary rocks, but not during normal igneous processes, so zinc abundance and isotopic composition can distinguish the two geological processes. Moon rocks contain more heavy isotopes of zinc, and overall less zinc, than corresponding igneous Earth or Mars rocks, which is consistent with zinc being depleted from the Moon through evaporation, as expected for the giant impact origin. Collisions between ejecta escaping Earth's gravity and asteroids would have left impact heating signatures in stony meteorites; analysis based on assuming the existence of this effect has been used to date the impact event to 4.47 billion years ago, in agreement with the date obtained by other means. Warm silica-rich dust and abundant SiO gas, products of high-velocity impacts between rocky bodies, have been detected by the Spitzer Space Telescope around the nearby (29 pc distant) young (~12 My old) star HD 172555 in the Beta Pictoris moving group. A belt of warm dust in a zone between 0.25 AU and 2 AU from the young star HD 23514 in the Pleiades cluster appears similar to the predicted results of Theia's collision with the embryonic Earth, and has been interpreted as the result of planet-sized objects colliding with each other. 
A similar belt of warm dust was detected around the star BD+20°307 (HIP 8920, SAO 75016). On 1 November 2023, scientists reported that, according to computer simulations, remnants of Theia could be still visible inside the Earth as two giant anomalies of the Earth's mantle. Difficulties This lunar origin hypothesis has some difficulties that have yet to be resolved. For example, the giant-impact hypothesis implies that a surface magma ocean would have formed following the impact. Yet there is no evidence that Earth ever had such a magma ocean and it is likely there exists material that has never been processed in a magma ocean. Composition A number of compositional inconsistencies need to be addressed. The ratios of the Moon's volatile elements are not explained by the giant-impact hypothesis. If the giant-impact hypothesis is correct, these ratios must be due to some other cause. The presence of volatiles such as water trapped in lunar basalts and carbon emissions from the lunar surface is more difficult to explain if the Moon was caused by a high-temperature impact. The iron oxide (FeO) content (13%) of the Moon, intermediate between that of Mars (18%) and the terrestrial mantle (8%), rules out most of the source of the proto-lunar material from Earth's mantle. If the bulk of the proto-lunar material had come from an impactor, the Moon should be enriched in siderophilic elements, when, in fact, it is deficient in them. The Moon's oxygen isotopic ratios are essentially identical to those of Earth. Oxygen isotopic ratios, which may be measured very precisely, yield a unique and distinct signature for each Solar System body. If a separate proto-planet Theia had existed, it probably would have had a different oxygen isotopic signature than Earth, as would the ejected mixed material. The Moon's titanium isotope ratio (50Ti/47Ti) appears so close to Earth's (within 4 ppm), that little if any of the colliding body's mass could likely have been part of the Moon. Lack of a Venusian moon If the Moon was formed by such an impact, it is possible that other inner planets also may have been subjected to comparable impacts. A moon that formed around Venus by this process would have been unlikely to escape. If such a moon-forming event had occurred there, a possible explanation of why the planet does not have such a moon might be that a second collision occurred that countered the angular momentum from the first impact. Another possibility is that the strong tidal forces from the Sun would tend to destabilise the orbits of moons around close-in planets. For this reason, if Venus's slow rotation rate began early in its history, any satellites larger than a few kilometers in diameter would likely have spiraled inwards and collided with Venus. Simulations of the chaotic period of terrestrial planet formation suggest that impacts like those hypothesised to have formed the Moon were common. For typical terrestrial planets with a mass of 0.5 to 1 Earth masses, such an impact typically results in a single moon containing 4% of the host planet's mass. The inclination of the resulting moon's orbit is random, but this tilt affects the subsequent dynamic evolution of the system. For example, some orbits may cause the moon to spiral back into the planet. Likewise, the proximity of the planet to the star will also affect the orbital evolution. The net effect is that it is more likely for impact-generated moons to survive when they orbit more distant terrestrial planets and are aligned with the planetary orbit. 
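The FeO difficulty listed above is, at heart, a two-endmember mixing argument. The Python sketch below treats the Moon as a simple linear mix of terrestrial-mantle-like material (about 8% FeO) and Mars-like impactor material (about 18% FeO); this is an illustration of the reasoning only, not a model used in any of the cited studies.

# What impactor-derived fraction reproduces the lunar FeO content (~13 wt%)
# under simple linear mixing of the two endmembers quoted in the text?
FEO_EARTH_MANTLE = 8.0   # wt%
FEO_MARS_LIKE = 18.0     # wt%
FEO_MOON = 13.0          # wt%

impactor_fraction = (FEO_MOON - FEO_EARTH_MANTLE) / (FEO_MARS_LIKE - FEO_EARTH_MANTLE)
print(f"impactor-like fraction: {impactor_fraction:.0%}")  # 50%

A roughly half-and-half mixture is exactly the kind of outcome that sits awkwardly beside the Moon's Earth-like oxygen and titanium isotope ratios discussed in the same list.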
Possible origin of Theia In 2004, Princeton University mathematician Edward Belbruno and astrophysicist J. Richard Gott III proposed that Theia coalesced at the L4 or L5 Lagrangian point relative to Earth (in about the same orbit and about 60° ahead or behind), similar to a trojan asteroid. Two-dimensional computer models suggest that the stability of Theia's proposed trojan orbit would have been affected when its growing mass exceeded a threshold of approximately 10% of Earth's mass (the mass of Mars). In this scenario, gravitational perturbations by planetesimals caused Theia to depart from its stable Lagrangian location, and subsequent interactions with proto-Earth led to a collision between the two bodies. In 2008, evidence was presented that suggests that the collision might have occurred later than the accepted value of 4.53 Gya, at approximately 4.48 Gya. A 2014 comparison of computer simulations with elemental abundance measurements in Earth's mantle indicated that the collision occurred approximately 95 My after the formation of the Solar System. It has been suggested that other significant objects might have been created by the impact, which could have remained in orbit between Earth and the Moon, stuck in Lagrangian points. Such objects might have stayed within the Earth–Moon system for as long as 100 million years, until the gravitational tugs of other planets destabilised the system enough to free the objects. A study published in 2011 suggested that a subsequent collision between the Moon and one of these smaller bodies caused the notable differences in physical characteristics between the two hemispheres of the Moon. This collision, simulations suggest, would have occurred at a low enough velocity not to form a crater; instead, the material from the smaller body would have spread out across the Moon (in what would become its far side), adding a thick layer of highlands crust. The resulting mass irregularities would subsequently produce a gravity gradient that resulted in tidal locking of the Moon so that today, only the near side remains visible from Earth. However, mapping by the GRAIL mission has ruled out this scenario. In 2019, a team at the University of Münster reported that the molybdenum isotopic composition in Earth's primitive mantle originates from the outer Solar System, hinting at the source of water on Earth. One possible explanation is that Theia originated in the outer Solar System. Alternative hypotheses Other mechanisms that have been suggested at various times for the Moon's origin are that the Moon was spun off from Earth's molten surface by centrifugal force; that it was formed elsewhere and was subsequently captured by Earth's gravitational field; or that Earth and the Moon formed at the same time and place from the same accretion disk. None of these hypotheses can account for the high angular momentum of the Earth–Moon system. Another hypothesis attributes the formation of the Moon to the impact of a large asteroid with Earth much later than previously thought, creating the satellite primarily from debris from Earth. In this hypothesis, the formation of the Moon occurs 60–140 million years after the formation of the Solar System (as compared to the hypothesized Theia impact at 4.527 ± 0.010 billion years ago). The asteroid impact in this scenario would have created a magma ocean on Earth and the proto-Moon with both bodies sharing a common plasma metal vapor atmosphere. 
The shared metal vapor bridge would have allowed material from Earth and the proto-Moon to exchange and equilibrate into a more common composition. Yet another hypothesis proposes that the Moon and Earth formed together, not from the collision of once-distant bodies. This model, published in 2012 by Robin M. Canup, suggests that the Moon and Earth formed from a massive collision of two planetary bodies, each larger than Mars, which then re-collided to form what is now called Earth. After the re-collision, Earth was surrounded by a disk of material which accreted to form the Moon. See also References Notes Further reading Academic articles Non-academic books External links Planetary Science Institute: Giant Impact Hypothesis Origin of the Moon by Prof. AGW Cameron Klemperer rosette and Lagrangian point simulations using JavaScript SwRI giant impact hypothesis simulation (.wmv and .mov) Origin of the Moon – computer model of accretion Moon Archive – Including articles about the giant impact hypothesis Planet Smash-Up Sends Vaporized Rock, Hot Lava Flying (2009-08-10 JPL News) How common are Earth–Moon planetary systems? : 23 May 2011 The Surprising State of the Earth after the Moon-Forming Giant Impact – Sarah Stewart (SETI Talks), Jan 28, 2015 Lunar science Hypothetical impact events Earth sciences Solar System dynamic theories
Giant-impact hypothesis
[ "Astronomy", "Biology" ]
5,167
[ "Biological hypotheses", "Astronomical hypotheses", "Hypothetical impact events" ]
51,160
https://en.wikipedia.org/wiki/Photophone
The photophone is a telecommunications device that allows transmission of speech on a beam of light. It was invented jointly by Alexander Graham Bell and his assistant Charles Sumner Tainter on February 19, 1880, at Bell's laboratory at 1325 L Street in Washington, D.C. Both were later to become full associates in the Volta Laboratory Association, created and financed by Bell. On June 3, 1880, Bell's assistant transmitted a wireless voice telephone message from the roof of the Franklin School to the window of Bell's laboratory, some 213 meters (about 700 ft.) away. Bell believed the photophone was his most important invention. Of the 18 patents granted in Bell's name alone, and the 12 he shared with his collaborators, four were for the photophone, which Bell referred to as his "greatest achievement", telling a reporter shortly before his death that the photophone was "the greatest invention [I have] ever made, greater than the telephone". The photophone was a precursor to the fiber-optic communication systems that achieved worldwide popular usage starting in the 1980s. The master patent for the photophone ( Apparatus for Signalling and Communicating, called Photophone) was issued in December 1880, many decades before its principles came to have practical applications. Design The photophone was similar to a contemporary telephone, except that it used modulated light as a means of wireless transmission while the telephone relied on modulated electricity carried over a conductive wire circuit. Bell's own description of the light modulator: The brightness of a reflected beam of light, as observed from the location of the receiver, therefore varied in accordance with the audio-frequency variations in air pressure—the sound waves—which acted upon the mirror. In its initial form, the photophone receiver was also non-electronic, using the photoacoustic effect. Bell found that many substances could be used as direct light-to-sound transducers. Lampblack proved to be outstanding. Using a fully modulated beam of sunlight as a test signal, one experimental receiver design, employing only a deposit of lampblack, produced a tone that Bell described as "painfully loud" to an ear pressed close to the device. In its ultimate electronic form, the photophone receiver used a simple selenium cell photodetector at the focus of a parabolic mirror. The cell's electrical resistance (between about 100 and 300 ohms) varied inversely with the light falling upon it, i.e., its resistance was higher when dimly lit, lower when brightly lit. The selenium cell took the place of a carbon microphone—also a variable-resistance device—in the circuit of what was otherwise essentially an ordinary telephone, consisting of a battery, an electromagnetic earphone, and the variable resistance, all connected in series. The selenium modulated the current flowing through the circuit, and the current was converted back into variations of air pressure—sound—by the earphone. In his speech to the American Association for the Advancement of Science in August 1880, Bell gave credit for the first demonstration of speech transmission by light to Mr. A.C. Brown of London in the Fall of 1878. Because the device used radiant energy, the French scientist Ernest Mercadier suggested that the invention should not be named 'photophone', but 'radiophone', as its mirrors reflected the Sun's radiant energy in multiple bands including the invisible infrared band. 
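To illustrate the electronic form of the receiver described above (battery, earphone and selenium cell in series), Ohm's law is enough to show how the earphone current, and hence the reproduced sound, tracks the light falling on the cell. In the Python sketch below, the battery voltage and earphone resistance are arbitrary stand-ins rather than figures recorded for Bell and Tainter's apparatus; only the 100 to 300 ohm selenium swing comes from the text.

# Series receiver circuit: battery, earphone and light-sensitive selenium cell.
BATTERY_VOLTS = 6.0      # assumed battery voltage (illustrative)
EARPHONE_OHMS = 75.0     # assumed earphone resistance (illustrative)

def circuit_current_ma(selenium_ohms):
    # Ohm's law for the series loop, expressed in milliamps.
    return 1000.0 * BATTERY_VOLTS / (selenium_ohms + EARPHONE_OHMS)

for r in (100.0, 200.0, 300.0):   # the bright-to-dim resistance range quoted above
    print(f"selenium at {r:.0f} ohms -> {circuit_current_ma(r):.1f} mA")

Brighter light means lower selenium resistance and therefore more current through the earphone, which is how the varying light beam is turned back into sound.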
Bell used the name for a while but it should not be confused with the later invention "radiophone" which used radio waves. First successful wireless voice communications While honeymooning in Europe with his bride Mabel Hubbard, Bell likely read of the newly discovered property of selenium having a variable resistance when acted upon by light, in a paper by Robert Sabine as published in Nature on 25 April 1878. In his experiments, Sabine used a meter to see the effects of light acting on selenium connected in a circuit to a battery. However, Bell reasoned that by adding a telephone receiver to the same circuit he would be able to hear what Sabine could only see. As Bell's former associate, Thomas Watson, was fully occupied as the superintendent of manufacturing for the nascent Bell Telephone Company back in Boston, Massachusetts, Bell hired Charles Sumner Tainter, an instrument maker who had previously been assigned to the U.S. 1874 Transit of Venus Commission, for his new 'L' Street laboratory in Washington, at the rate of $15 per week. On February 19, 1880, the pair had managed to make a functional photophone in their new laboratory by attaching a set of metallic gratings to a diaphragm, with a beam of light being interrupted by the gratings' movement in response to spoken sounds. When the modulated light beam fell upon their selenium receiver, Bell, on his headphones, was able to clearly hear Tainter singing Auld Lang Syne. In an April 1, 1880, Washington, D.C., experiment, Bell and Tainter communicated some distance along an alleyway to the laboratory's rear window. Then, a few months later, on June 21, they succeeded in communicating clearly over a distance of some 213 meters (about 700 ft.), using plain sunlight as their light source, practical electrical lighting having only just been introduced to the U.S. by Edison. The transmitter in their latter experiments had sunlight reflected off the surface of a very thin mirror positioned at the end of a speaking tube; as words were spoken, they caused the mirror to oscillate between convex and concave, altering the amount of light reflected from its surface to the receiver. Tainter, who was on the roof of the Franklin School, spoke to Bell, who was in his laboratory listening and who signaled back to Tainter by waving his hat vigorously from the window, as had been requested. The receiver was a parabolic mirror with selenium cells at its focal point. Conducted from the roof of the Franklin School to Bell's laboratory at 1325 'L' Street, this was the world's first formal wireless telephone communication (away from their laboratory), thus making the photophone the world's earliest known voice wireless telephone system, at least 19 years ahead of the first spoken radio wave transmissions. Before Bell and Tainter had concluded their research in order to move on to the development of the Graphophone, they had devised some 50 different methods of modulating and demodulating light beams for optical telephony. Reception and adoption The telephone itself was still something of a novelty, and radio was decades away from commercialization. The social resistance to the photophone's futuristic form of communications could be seen in an August 1880 New York Times commentary: However, at the time of their February 1880 breakthrough, Bell was immensely proud of the achievement, to the point that he wanted to name his new second daughter "Photophone", which was subtly discouraged by his wife Mabel Bell (they instead chose "Marian", with "Daisy" as her nickname). 
He wrote somewhat enthusiastically: Bell transferred the photophone's intellectual property rights to the American Bell Telephone Company in May 1880. While Bell had hoped his new photophone could be used by ships at sea and also to displace the plethora of telephone lines that were blooming along busy city boulevards, his design failed to protect its transmissions from outdoor interference such as clouds, fog, rain and snow, which could easily disrupt the transmission of light. Factors such as the weather and the lack of light inhibited the use of Bell's invention. Not long after its invention, laboratories within the Bell System continued to improve the photophone in the hope that it could supplement or replace expensive conventional telephone lines. Its earliest non-experimental use came with military communication systems during World Wars I and II, its key advantage being that its light-based transmissions could not be intercepted by the enemy. Bell pondered the photophone's possible scientific use in the spectral analysis of artificial light sources, stars and sunspots. He later also speculated on its possible future applications, though he did not anticipate either the laser or fiber-optic telecommunications: Further development Although Bell Telephone researchers made several modest incremental improvements on Bell and Tainter's design, Marconi's radio transmissions started to far surpass the maximum range of the photophone as early as 1897 and further development of the photophone was largely arrested until German-Austrian experiments began at the turn of the 20th century. The German physicist Ernst Ruhmer believed that the increased sensitivity of his improved selenium cells, combined with the superior receiving capabilities of Professor H. T. Simon's "speaking arc", would make the photophone practical over longer signalling distances. Ruhmer carried out a series of experimental transmissions along the Havel river and on Lake Wannsee from 1901 to 1902. He reported achieving sending distances of 15 kilometers (9 miles) under good conditions, with equal success during the day and at night. He continued his experiments around Berlin through 1904, in conjunction with the German Navy, which supplied high-powered searchlights for use in the transmissions. The German Siemens & Halske Company boosted the photophone's range by utilizing current-modulated carbon arc lamps which provided a useful range of approximately . They produced units commercially for the German Navy, which were further adapted to increase their range to using voice-modulated ship searchlights. British Admiralty research during WWI resulted in the development of a vibrating mirror modulator in 1916. More sensitive molybdenite receiver cells, which also had greater sensitivity to infra-red radiation, replaced the older selenium cells in 1917. The United States and German governments also worked on technical improvements to Bell's system. By 1935 the German Carl Zeiss Company had started producing infra-red photophones for the German Army's tank battalions, employing tungsten lamps with infra-red filters which were modulated by vibrating mirrors or prisms. These also used receivers which employed lead sulfide detector cells and amplifiers, boosting their range to under optimal conditions. The Japanese and Italian armies also attempted similar development of lightwave telecommunications before 1945. 
Several military laboratories, including those in the United States, continued R&D efforts on the photophone into the 1950s, experimenting with high-pressure vapour and mercury arc lamps of between 500 and 2,000 watts power. Commemorations On March 3, 1947, the centenary of Alexander Graham Bell's birth, the Telephone Pioneers of America dedicated a historical marker on the side of one of the buildings, the Franklin School, which Bell and Sumner Tainter used for their first formal trial involving a considerable distance. Tainter had originally stood on the roof of the school building and transmitted to Bell at the window of his laboratory. The marker did not acknowledge Tainter's scientific and engineering contributions. On February 19, 1980, exactly 100 years to the day after Bell and Tainter's first photophone transmission in their laboratory, staff from the Smithsonian Institution, the National Geographic Society and AT&T's Bell Labs gathered at the location of Bell's former 1325 'L' Street Volta Laboratory in Washington, D.C., for a commemoration of the event. The Photophone Centenary commemoration had first been proposed by electronics researcher and writer Forrest M. Mims, who suggested it to Dr. Melville Bell Grosvenor, the inventor's grandson, during a visit to his office at the National Geographic Society. The historic grouping later observed the centennial of the photophone's first successful laboratory transmission by using Mims's hand-made demonstration photophone, which functioned similarly to Bell and Tainter's model. Mims also built and provided a pair of modern hand-held battery-powered LED transceivers connected by a length of optical fiber. Bell Labs' Richard Gundlach and the Smithsonian's Elliot Sivowitch used the device at the commemoration to demonstrate one of the photophone's modern-day descendants. The National Geographic Society also mounted a special educational exhibit in its Explorer's Hall, highlighting the photophone's invention with original items borrowed from the Smithsonian Institution. See also Atomic line filter Free-space optical communication History of telecommunication Laser microphone Mie scattering Modulating retro-reflector Optical sound Optical window Photoacoustic effect Radio window Rayleigh scattering Semaphore line Visible light communication Volta Laboratory and Bureau References Footnotes Citations Bibliography Bell, A. G.: "On the Production and Reproduction of Sound by Light", American Journal of Science, Third Series, Vol. XX, #118, October 1880, pp. 305–324; also published as "Selenium and the Photophone" in Nature, September 1880. Bruce, Robert V. Bell: Alexander Graham Bell and the Conquest of Solitude, Ithaca, New York: Cornell University Press, 1990. Mims III, Forrest M. The First Century of Lightwave Communications, Fiber Optics Weekly Update, Information Gatekeepers, February 10–26, 1982, pp. 6–23. Grosvenor, Edwin S. and Morgan Wesson. Alexander Graham Bell: The Life and Times of the Man Who Invented the Telephone. New York: Harry N. Abrams, Inc., 1997. Further reading Chris Long and Mike Groth's optical audio telecommunications webpage Ackroyd, William. "The Photophone" in "Science for All", Vol. 2 (R. Brown, ed.), Cassell & Co., London, circa 1884, pp. 307–312. A popular account, profusely illustrated with steel engravings. Armengaud, J. " Le photophone de M.Graham Bell". Soc. Ing. civ. Mem., year 1880, Vol 2. pp. 513–522. AT&T Company. "The Radiophone", pamphlet distributed at Louisiana Purchase Exhibition, St Louis, Missouri, 1904. 
Describes the photophone work of Hammond V Hayes at the Bell Labs (patented 1897) and the German engineer H T Simon in the same year. Bell, Alexander Graham. "On the Production and Reproduction of Sound by Light: the Photophone". Am. Ass. for the Advancement of Sci., Proc., Vol 29., October 1880, pp. 115–136. Also in American Journal of Science, Series 3. No. 20, 1880, pp. 305–324; Eng. L., 30. 1880, pp. 240–242; Electrician, Vol 5. 1880, pp. 214–215, 220–221, 237; Journal of the Society of Telegraph Engineers, No. 9, 1880, pp. 404–426; Nat. L., Vol 22. 1880, pp. 500–503; Ann. Chim. Phys., Serie 5. Vol 21. 1880, pp. 399–430; E.T.Z., Vol. 1. 1880, pp. 391–396. Discussed at length in Eng. L., Vol 30. 1880, pp. 253–254, 407–409. In these papers, Bell accords the credit for the first demonstrations of the transmission of speech by light to a Mr A C Brown of London "in September or October 1878". Bell, Alexander Graham. "Sur l'application du photophone a l'etude des bruits qui ont lieu a la surface solaire". C. R., Vol. 91. 1880, pp. 726–727. Bell, Alexander Graham. "Professor A G Bell on Selenium and the Photophone". Pharm. J. and Trans., Series 3. Vol. 11., 1880–1881, pp. 272–276; The Electrician No 5, 18 September 1880, pp 220–221 and 2 October 1880 pp 237; Nature (London) Vol 22, 23 September 1880, pp. 500–503; Engineering Vol 30, pp 240–242, 253, 254, 407–409; and Journal of the Society of Telegraph Engineers Vol 9, pp 375–387. Bell, Alexander Graham. "Other papers on the photophone" E.T.Z. No. 1, 1880, pp 391–396; Journal of the Society for the Arts 1880, No. 28, pp 847–848 & No. 29 pp 60–62; C.R. No. 91, 1880–1881, pp 595–598, 726, 727, 929–931, 982, 1882 pp 409–412, 450, 451, 1224–1227. Bell, Alexander Graham. "Le Photophone De La Production Et De La Lumiere". Gauthier-Villars, Imprimeur-Libraire, Paris. 1880. (Note: this is item #26, Folder #4, as noted in "Finding Aid for the Alexander Graham Bell Collection, 1880–1925", Collection number: 308, UCLA Library, Department of Special Collections Manuscripts Division, as viewable at the Online Archive of California) "Bell's Photophone". Nature Vol 24, 4 November 1880; The Electrician, Vol. 6, 1881, pp. 136–138. Appleton's Journal. "The Photophone". Appleton's Journal, Vol. 10 No. 56, New York, February 1881, pp. 181–182. Bidwell, Shelford. "The Photophone". Nature., 23. 1881, pp. 58–59. Bidwell, Shelford. "Selenium and Its Applications to the Photophone and Telephotography". Proceedings of the Royal Institution (G.B.), Vol 9. 1881, pp. 524–535; The English Mechanic and World Of Science, Vol. 33, 22 April 1881, pp. 158–159 and 29 April 1881 pp. 180–181. Also in Chem. News, Vol. 44, 1881, pp. 1–3, 18–21. (From a lecture at the Royal Institution on 11 March 1881). Breguet, A. "Les recepteurs photophoniques de selenium". Ann. Chim. Phys., Series 5. Vol 21. 1880, pp. 560–563. Breguet, A. "Sur les experiences photophonique du Professeur Alexander Graham Bell et de M. Sumner Tainter": C.R.; Vol 91., 1880, pp 595–598. Electrician. "Bell's Photophone", Electrician, Vol. 6, February 5, 1881, pp. 136–138,183. Jamieson, Andrew. Nat. L., Vol. 10, 1881, p. 11. This Glasgow scientist seems to have been the first to suggest the usage of a manometric gas flame for optical transmission, demonstrated at a meeting of the Glasgow Philosophical Society; "The History of selenium and its action in the Bell Photophone, with description of recently designed form", Proceedings of the Philosophical Society of Glasgow No. 13, 1881, ***Moser, J. 
"The Microphonic Action of Selenium Cells". Phys. Soc., Proc., Vol. 4, 1881, pp. 348–360. Also in Phil. Mag., Series 5, Vol.12, 1881, pp. 212–223. Kalischer, S. "Photophon Ohne Batterie". Rep. f. Phys., Vol. 17., 1881, pp. 563–570. MacKenzie, Catherine "Alexander Graham Bell", Houghton Mifflin Company, Boston, p. 226, 1928. Mercadier, E. "La radiophonie indirecte". Lumiere Electrique, Vol. 4, 1881, pp. 295–299. Mercadier, E. "Sur la radiophonie produite a l'aide du selenium". C. R., Vol. 92,1881, pp. 705–707. Mercadier, E. "Sur la construction de recepteurs photophoniques a selenium". C. R., Vol. 92, 1881, pp. 789–790. Mercadier, E. "Sur l'influence de la temperature sur les recepteurs radiophoniques a selenium". C. R., Vol. 92, 1881, pp. 1407–1408. Molera & Cebrian. "The Photophone". Eng. L., Vol. 31, 1881, p. 358. Preece, Sir William H. "Radiophony", Engineering Vol. 32, 8 July 1881, pp. 29–33; Journal of the Society of Telegraph Engineers, Vol 10, 1881, pp. 212–228. On the photophone. Rankine, A.O. "Talking over a Sunbeam". El. Exp. (N. Y.), Vol. 7, 1920, pp. 1265–1316. Sternberg, J.M. The Volta Prize of the French Academy Awarded to Prof. Alexander Graham Bell: A Talk With Dr. J.M. Sternberg, The Evening Traveler, September 1, 1880, The Alexander Graham Bell Papers at the Library of Congress Thompson, Silvanus P. "Notes on the Construction of the Photophone". Phys. Soc.Proc., Vol. 4, 1881, pp. 184–190. Also in Phil. Mag., Vol. 11, 1881, pp. 286–291. Abstracted in Chem. News, Vol. 43, 1881, p. 43; Eng. L., Vol. 31, 1881, p. 96. Tomlinson, H. "The Photophone". Nat. L., Vol. 23, 1881, pp. 457–458. U.S. Radio and Television Corp. "Ultra-violet rays used in Television", New York Times, 29 May 1929, p. 5: Demonstration of transmission of a low definition (mechanically scanned) video signal over a modulated light beam. Terminal stations 50 feet apart. Public demonstration at Bamberger and Company's Store, Newark, New Jersey. Earliest known usage of modulated light comms for conveying video signals. See also report "Invisible Ray Transmits Pictures" in Science and Invention, November 1929, Vol. 17, p. 629. White, R.H. "Photophone". Harmsworth's Wireless Encyclopaedia, Vol. 3, pp. 1541–1544. Weinhold, A. "Herstellung von Selenwiderstanden fur Photophonzwecke". E.T.Z., Vol. 1, 1880, p. 423. External links Bell's speech before the American Association for the Advancement of Science in Boston on August 27, 1880, in which he presented his paper "On the Production and Reproduction of Sound by Light: the Photophone". Long-distance Atmospheric Optical Communications, by Chris Long and Mike Groth (VK7MJ) Téléphone et photophone: les contributions indirectes de Graham Bell à l'idée de la vision à distance par l'électricité (1880–1895 Alexander Graham Bell History of telecommunications History of the telephone Optical communications Photonics
Photophone
[ "Engineering" ]
4,820
[ "Optical communications", "Telecommunications engineering" ]
51,172
https://en.wikipedia.org/wiki/DNIX
DNIX (original spelling: D-Nix) is a discontinued Unix-like real-time operating system from the Swedish company Dataindustrier AB (DIAB). A version named ABCenix was developed for the ABC 1600 computer from Luxor. Daisy Systems also had a system named Daisy DNIX on some of their computer-aided design (CAD) workstations. It was unrelated to DIAB's product. History Inception at DIAB in Sweden Dataindustrier AB (literal translation: computer industries shareholding company) was started in 1970 by Lars Karlsson as a single-board computer manufacturer in Sundsvall, Sweden, producing a Zilog Z80-based computer named Data Board 4680. In 1978, DIAB started to work with the Swedish television company Luxor AB to produce the home and office computer series ABC 80 and ABC 800. In 1983, DIAB independently developed the first Unix-compatible machine, the DIAB DS90, based on the Motorola 68000 CPU. D-NIX here made its appearance, based on a UNIX System V license from AT&T Corporation. DIAB was, however, an industrial control system (automation) company, and needed a real-time operating system, so the company replaced the AT&T-supplied UNIX kernel with their own in-house-developed, yet compatible, real-time variant. This kernel was inspired by a Z80 kernel named OS.8 created for the Monroe Systems division of Litton Industries. Over time, the company also replaced several of the UNIX standard userspace tools with their own implementations, to the point where no code was derived from UNIX, and their machines could be deployed independently of any AT&T UNIX license. Two years later, and in cooperation with Luxor, a computer called the ABC 1600 was developed for the office market, while in parallel, DIAB continued to produce enhanced versions of the DS90 computer using newer versions of the Motorola CPUs such as the Motorola 68010, 68020, 68030 and eventually 68040. In 1990, DIAB was acquired by Groupe Bull, which continued to produce and support the DS machines under the brand name DIAB, with names such as DIAB 2320, DIAB 2340 etc., still running DIAB's version of DNIX. Derivative at ISC Systems Corporation ISC Systems Corporation (ISC) purchased the right to use DNIX in the late 1980s for use in its line of Motorola 68k-based banking computers. (ISC was later bought by Olivetti, and was in turn resold to Wang, which was then bought by Getronics. This corporate entity, most often referred to as 'ISC', has answered to a bewildering array of names over the years.) This code branch was the SVR2-compatible version, and received extensive modification and development at their hands. Notable features of this operating system were its support of demand paging, diskless workstations, multiprocessing, asynchronous input/output (I/O), the ability to mount processes (handlers) on directories in the file system, and message passing. Its real-time support consisted largely of internal event-driven queues rather than list search mechanisms (no 'thundering herd'), static process priorities in two classes (run to completion and timesliced), support for contiguous files (to avoid fragmentation of critical resources), and memory locking. The quality of the orthogonal asynchronous event implementation has yet to be equalled in current commercial operating systems, though some approach it. (The concept that has yet to be adopted is that the synchronous marshalling point of all the asynchronous activity could also be asynchronous, ad infinitum. DNIX handled this with aplomb.) 
The asynchronous I/O facility obviated the need for Berkeley sockets select or SVR4's STREAMS poll mechanism, though there was a socket emulation library that preserved the socket semantics for backward compatibility. Another feature of DNIX was that none of the standard utilities (such as ps, a frequent offender) rummaged around in the kernel's memory to do their job. System calls were used instead, and this meant the kernel's internal architecture was free to change as required. The handler concept allowed network protocol stacks to be outside the kernel, which greatly eased development and improved overall reliability, though at a performance cost. It also allowed for foreign file systems to be user-level processes, again for improved reliability. The main file system, though it could have been (and once was) an external process, was pulled into the kernel for performance reasons. Were it not for this DNIX could well have been considered a microkernel, though it was not formally developed as such. Handlers could appear as any type of 'native' Unix file, directory structure, or device, and file I/O requests that the handler could not process could be passed off to other handlers, including the underlying one on which the handler was mounted. Handler connections could also exist and be passed around independent of the file system, much like a pipe. One effect of this is that text terminal (TTY) like devices could be emulated without needing a kernel-based pseudo terminal facility. An example of where a handler saved the day was in ISC's diskless workstation support, where a bug in the implementation meant that using named pipes on the workstation could induce undesirable resource locking on the fileserver. A handler was created on the workstation to field accesses to the afflicted named pipes until the appropriate kernel fixes could be developed. This handler required approximately 5 kilobytes of code to implement, an indication that a non-trivial handler did not need to be large. ISC also received the right to manufacture DIAB's DS90-10 and DS90-20 machines as its file servers. The multiprocessor DS90-20's, however, were too expensive for the target market and ISC designed its own servers and ported DNIX to them. ISC designed its own GUI-based diskless workstations for use with these file servers, and ported DNIX again. (Though ISC used Daisy workstations running Daisy DNIX to design the machines that would run DIAB's DNIX, there was negligible confusion internally as the drafting and layout staff rarely talked to the software staff. Moreover, the hardware design staff didn't use either system! The running joke went something like: "At ISC we build computers, we don't use them.") The asynchronous I/O support of DNIX allowed for easy event-driven programming in the workstations, which performed well even though they had relatively limited resources. (The GUI diskless workstation had a 7 MHz 68010 processor and was usable with only 512K of memory, of which the kernel consumed approximately half. Most workstations had 1 MB of memory, though there were later 2 MB and 4 MB versions, along with 10 MHz processors.) A full-blown installation could consist of one server (16 MHz 68020, 8 MB of RAM, and a 200 MB hard disk) and up to 64 workstations. Though slow to boot up, such an array would perform acceptably in a bank teller application. Besides the innate efficiency of DNIX, the associated DIAB C compiler was key to high performance. 
It generated particularly good code for the 68010, especially after ISC got done with it. (ISC also retargeted it to the Texas Instruments TMS34010 graphics coprocessor used in its last workstation.) The DIAB C compiler was, of course, used to build DNIX, which was one of the factors contributing to its efficiency, and is still available, in some form, through Wind River Systems. These systems are still in use as of this writing in 2006, in former Seattle-First National Bank branches now branded Bank of America. There may be, and probably are, other ISC customers still using DNIX in some capacity. Through ISC there was a considerable DNIX presence in Central and South America. Asynchronous events DNIX's native system call was the dnix(2) library function, analogous to the standard Unix unix(2) or syscall(2) function. It took multiple arguments, the first of which was a function code. Semantically this single call provided all appropriate Unix functionality, though it was syntactically different from Unix and had, of course, numerous DNIX-only extensions. DNIX function codes were organized into two classes: Type 1 and Type 2. Type 1 commands were those that were associated with I/O activity, or anything that could potentially cause the issuing process to block. Major examples were F_OPEN, F_CLOSE, F_READ, F_WRITE, F_IOCR, F_IOCW, F_WAIT, and F_NAP. Type 2 were the remainder, such as F_GETPID, F_GETTIME, etc. They could be satisfied by the kernel immediately. To invoke asynchronicity, a special file descriptor called a trap queue had to have been created via the Type 2 opcode F_OTQ. A Type 1 call would have the F_NOWAIT bit OR-ed with its function value, and one of the additional parameters to dnix(2) was the trap queue file descriptor. The return value from an asynchronous call was not the normal value but a kernel-assigned identifier. At such time as the asynchronous request completed, a read(2) (or F_READ) of the trap queue file descriptor would return a small kernel-defined structure containing the identifier and result status. The F_CANCEL operation was available to cancel any asynchronous operation that hadn't yet been completed; one of its arguments was the kernel-assigned identifier. (A process could only cancel requests that were currently self-owned. The exact semantics of cancelling was up to each request's handler; fundamentally it only meant that any waiting was to be terminated. A partially completed operation could be returned.) In addition to the kernel-assigned identifier, one of the arguments given to any asynchronous operation was a 32-bit user-assigned identifier. This most often referenced a function pointer to the appropriate subroutine that would handle the I/O completion method, but this was merely convention. It was the entity that read the trap queue elements that was responsible for interpreting this value. struct itrq { /* Structure of data read from trap queue. */ short it_stat; /* Status */ short it_rn; /* Request number */ long it_oid; /* Owner ID given on request */ long it_rpar; /* Returned parameter */ }; Of note is that the asynchronous events were gathered via normal file descriptor read operations, and that such reading was also able to be asynchronous. This had implications for semi-autonomous asynchronous event handling packages that could exist within one process. (DNIX 5.2 did not have lightweight processes or threads.) 
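To make the calling pattern concrete, here is a minimal, hypothetical C sketch of the sequence just described: create a trap queue with F_OTQ, issue a Type 1 request with F_NOWAIT, then collect the completion record defined above. Only the function-code names and struct itrq come from the text; the dnix() prototype, the argument order and the handle_read callback are illustrative assumptions rather than actual DNIX signatures.

/* Hypothetical sketch of DNIX-style asynchronous I/O; argument order and
   the callback convention are assumptions, not taken from DNIX manuals. */
static void handle_read(short status, long bytes);  /* user-chosen completion routine */

static void example(int fd)
{
    char buf[512];
    struct itrq ev;                        /* completion record, as defined above */
    int tq = dnix(F_OTQ);                  /* Type 2 call: create the trap queue */

    /* Type 1 call made asynchronous by OR-ing in F_NOWAIT; the trap queue
       descriptor and a 32-bit user tag travel along with the request. */
    long id = dnix(F_READ | F_NOWAIT, fd, buf, sizeof buf, tq, (long)handle_read);

    /* ... do other work; completions are gathered by ordinary reads of the
       trap queue, and this read could itself have been issued asynchronously. */
    read(tq, &ev, sizeof ev);
    if (ev.it_oid == (long)handle_read)    /* the user-assigned tag chosen above */
        handle_read(ev.it_stat, ev.it_rpar);

    /* dnix(F_CANCEL, id) could abort the request before it completes. */
}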
Also of note is that any potentially blocking operation could be issued asynchronously, so DNIX was well equipped to handle many clients with a single server process. A process was not restricted to having only one trap queue, so I/O requests could be grossly prioritized in this way. Compatibility In addition to the native dnix(2) call, a complete set of 'standard' libc interface calls was available. open(2), close(2), read(2), write(2), etc. Besides being useful for backwards compatibility, these were implemented in a binary-compatible manner with the NCR Tower computer, so that binaries compiled for it would run unchanged under DNIX. The DNIX kernel had two trap dispatchers internally, one for the DNIX method and one for the Unix method. Choice of dispatcher was up to the programmer, and using both interchangeably was acceptable. Semantically they were identical wherever functionality overlapped. (In these machines the 68000 trap #0 instruction was used for the unix(2) calls, and the trap #4 instruction for dnix(2). The two trap handlers were very similar, though the [usually hidden] unix(2) call held the function code in the processor's D0 register, whereas dnix(2) held it on the stack with the rest of the parameters.) DNIX 5.2 had no networking protocol stacks internally (except for the thin X.25-based Ethernet protocol stack added by ISC for use by its diskless workstation support package), all networking was conducted by reading and writing to Handlers. Thus, there was no socket mechanism, but a libsocket(3) existed that used asynchronous I/O to talk to the TCP/IP handler. The typical Berkeley-derived networking program could be compiled and run unchanged (modulo the usual Unix porting problems), though it might not be as efficient as an equivalent program that used native asynchronous I/O. Handlers Under DNIX, a process could be used to handle I/O requests and to extend the file system. Such a process was called a Handler, and was a major feature of the operating system. A handler was defined as a process that owned at least one request queue, a special file descriptor that was procured in one of two ways: with a F_ORQ or a F_MOUNT call. The former invented an isolated request queue, one end of which was then typically handed down to a child process. (The network remote execution programs, of which there were many, used this method to provide standard I/O paths to their children.) The latter hooked into the file system so that file I/O requests could be adopted by handlers. (The network login programs, of which there were even more, used this method to provide standard I/O paths to their children, as the semantics of logging in under Unix requires a way for multiple perhaps-unrelated processes to horn in on the standard I/O path to the operator.) Once mounted on a directory in the file system, the handler then received all I/O calls to that point. A handler would then read small kernel-assigned request data structures from the request queue. (Such reading could be done synchronously or asynchronously as the handler's author desired.) The handler would then do whatever each request required to be satisfied, often using the DNIX F_UREAD and F_UWRITE calls to read and write into the request's data space, and then would terminate the request appropriately using F_TERMIN. 
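A rough sketch may help to visualize the handler loop just described. The call names (F_MOUNT, F_UREAD, F_TERMIN) and the request-record fields are those given in this and the following paragraphs; the mount path, argument order and error convention are invented for the illustration and are not DNIX documentation.

/* Hypothetical sketch of a minimal DNIX handler; the request record used
   here (struct ireq) is shown a little further below. */
static void serve(void)
{
    char data[1024];
    struct ireq req;
    int rq = dnix(F_MOUNT, "/mnt/example");      /* adopt file I/O under this directory */

    for (;;) {
        read(rq, &req, sizeof req);              /* could equally be read asynchronously */
        switch (req.ir_fc) {                     /* function code of the client's call */
        case F_WRITE:
            /* copy the client's buffer into our own address space and act on it */
            dnix(F_UREAD, rq, req.ir_rn, data, req.ir_bc);
            dnix(F_TERMIN, rq, req.ir_rn, req.ir_bc);   /* report bytes handled */
            break;
        default:
            dnix(F_TERMIN, rq, req.ir_rn, -1);          /* decline anything else */
        }
    }
}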
A privileged handler could adopt the permissions of its client for individual requests to subordinate handlers (such as the file system) via the F_T1REQ call, so it didn't need to reproduce the subordinate's permission scheme. If a handler was unable to complete a request alone, the F_PASSRQ function could be used to pass I/O requests from one handler to another. A handler could perform part of the work requested before passing the rest on to another handler. It was very common for a handler to be state-machine oriented so that requests it was fielding from a client were all done asynchronously. This allowed for a single handler to field requests from multiple clients simultaneously without them blocking each other unnecessarily. Part of the request structure was the process ID and its priority so that a handler could choose what to work on first based upon this information, there was no requirement that work be performed in the order it was requested. To aid in this, it was possible to poll both request and trap queues to see if there was more work to be considered before buckling down to actually do it. struct ireq { /* Structure of incoming request */ short ir_fc; /* Function code */ short ir_rn; /* Request number */ long ir_opid; /* Owner ID that you gave on open or mount */ long ir_bc; /* Byte count */ long ir_upar; /* User parameter */ long ir_rad; /* Random address */ ushort ir_uid; /* User ID */ ushort ir_gid; /* User group */ time_t ir_time; /* Request time */ ulong ir_nph; ulong ir_npl; /* Node and process ID */ }; There was no particular restriction on the number of request queues a process could have. This was used to provide networking facilities to chroot jails, for example. Examples To give some appreciation of the utility of handlers, at ISC handlers existed for: foreign file systems FAT CD-ROM/ISO9660 disk image files RAM disk (for use with write-protected boot disks) networking protocols DNET (essentially X.25 over Ethernet, with multicast capability) X.25 TCP/IP DEC LAT AppleTalk remote file systems DNET's /net/machine/path/from/its/root... NFS remote login ncu (DNET) telnet rlogin wcu (DNET GUI) X.25 PAD DEC LAT remote execution rx (DNET) remsh rexec system extension windowman (GUI) vterm (xterm-like) document (passbook) printer dmap (ruptime analog) windowmac (GUI gateway to Macintosh) system patches named pipe handler ISC's extensions ISC purchased both 5.2 (SVR2 compatible) and 5.3 (SVR3 compatible) versions of DNIX. At the time of purchase, DNIX 5.3 was still undergoing development at DIAB so DNIX 5.2 was what was deployed. Over time, ISC's engineers incorporated most of their 5.3 kernel's features into 5.2, primarily shared memory and IPC, so there was some divergence of features between DIAB and ISC's versions of DNIX. DIAB's 5.3 likely went on to contain more SVR3 features than ISC's 5.2 ended up with. Also, DIAB went on to DNIX 5.4, a SVR4 compatible OS. At ISC, developers considerably extended their version of DNIX 5.2 (only listed are features involving the kernel) based upon both their needs and the general trends of the Unix industry: Diskless workstation support. The workstation's kernel file system was removed, and replaced with an X.25-based Ethernet communications stub. The file server's kernel was also extended with a mating component that received the remote requests and handed them to a pool of kernel processes for service, though a standard handler could have been written to do this. 
(Later in its product lifecycle, ISC deployed standard SVR4-based Unix servers in place of the DNIX servers. These used X.25 STREAMS and a custom-written file server program. Despite the less efficient structuring, the raw horsepower of the platforms used made for a much faster server. It is unfortunate that this file server program did not support all of the functionality of the native DNIX server. Tricky things, like named pipes, never worked at all. This was another justification for the named pipe handler process.) gdb watchpoint support using the features of ISC's MMU. Asynchronous I/O to the file system was made real. (Originally it blocked anyway.) Kernel processes (kprocs, or threads) were used to do this. Support for a truss- or strace-like program. In addition to some repairs to bugs in the standard Unix ptrace single-stepping mechanism, this required adding a temporary process adoption facility so that the tracer could use the standard single-stepping mechanism on existing processes. SVR4 signal mechanism extensions. Primarily for the new STOP and CONT signals, but encompassing the new signal control calls as well. Due to ISC's lack of source code for the adb and sdb debuggers the u-page could not be modified, so the new signals could only be blocked or receive default handling, they could not be caught. Support for network sniffing. This required extending the Ethernet driver so that a single event could satisfy more than one I/O request, and conditionally implementing the hardware filtering in software to support promiscuous mode. Disk mirroring. This was done in the file system and not the device driver, so that slightly (or even completely) different devices could still be mirrored together. Mirroring a small hard disk to the floppy was a popular way to test mirroring as ejecting the floppy was an easy way to induce disk errors. 32-bit inode, 30-character filename, symbolic link, and sticky directory extensions to the file system. Added /dev/zero, /dev/noise, /dev/stdXXX, and /dev/fd/X devices. Process group id lists (from SVR4). #! direct script execution. Serial port multiplication using ISC's Z-80 based VMEbus communications boards. Movable swap partition. Core 'dump' snapshots of running processes. Support for fuser command. Process renice function. Associated timesharing reprioritizer program to implement floating priorities. A way to 'mug' a process, instantly depriving it of all memory resources. Very useful for determining what the current working set is, as opposed to what is still available to it but not necessarily being used. This was associated with a GUI utility showing the status of all 1024 pages of a process's memory map. (This being the number of memory pages supported by ISC's MMU.) In use you would 'mug' the target process periodically through its life and then watch to see how much memory was swapped back in. This was useful as ISC's production environment used only a few long-lived processes, controlling their memory utilization and growth was key to maintaining performance. Features that were never added When DNIX development at ISC effectively ceased in 1997, a number of planned OS features were left on the table: Shared objects – There were two dynamically loaded libraries in existence, an encryptor for DNET and the GUI's imaging library, but the facility was never generalized. ISC's machines were characterized by a general lack of virtual address space, so extensive use of memory-mapped entities would not have been possible. 
Lightweight processes – The kernel already had multiple threads that shared a single MMU context, extending this to user processes should have been straightforward. The API implications would have been the most difficult part of this. Access-control lists (ACL) – Trivial to implement using an ACL handler mounted over the stock file system. Multiple swap partitions – DNIX already used free space on the selected volume for swapping, it would have been easy to give it a list of volumes to try in turn, potentially with associated space limits to keep it from consuming all free space on a volume before moving on to the next one. Remote kernel debugging via gdb – All the pieces were there to do it either through the customary serial port or over Ethernet using the kernel's embedded X.25 link software, but they were never assembled. 68030 support – ISC's prototypes were never completed. Two processor piggyback plug-in cards were built, but were never used as more than faster 68020's. They were not reliable, nor were they as fast as they could have been due to having to fit into a 68020 socket. The fast context switching ISC MMU would be left disabled (and left out altogether in proposed production units), and the embedded one of the 68030 was to have been used instead, using a derivative of the DS90-20's MMU code. While the ISC MMU was very efficient and supported instant switching among 32 resident processes, it was very limited in addressability. The 68030 MMU would have allowed for much more than 8 MB of virtual space in a process, which was the limit of the ISC MMU. Though this MMU would be slower, the overall faster speed of the 68030 should have more than made up for it, so that a 68030 machine was expected to be in all ways faster, and support much larger processes. See also Comparison of real-time operating systems Timeline of operating systems Cromemco Cromix References UNIX System V Real-time operating systems Science and technology in Sweden
DNIX
[ "Technology" ]
5,238
[ "Real-time computing", "Real-time operating systems" ]
51,203
https://en.wikipedia.org/wiki/Coefficient
In mathematics, a coefficient is a multiplicative factor involved in some term of a polynomial, a series, or any other type of expression. It may be a number without units, in which case it is known as a numerical factor. It may also be a constant with units of measurement, in which it is known as a constant multiplier. In general, coefficients may be any expression (including variables such as , and ). When the combination of variables and constants is not necessarily involved in a product, it may be called a parameter. For example, the polynomial has coefficients 2, −1, and 3, and the powers of the variable in the polynomial have coefficient parameters , , and . A , also known as constant term or simply constant, is a quantity either implicitly attached to the zeroth power of a variable or not attached to other variables in an expression; for example, the constant coefficients of the expressions above are the number 3 and the parameter c, involved in 3cx0. The coefficient attached to the highest degree of the variable in a polynomial of one variable is referred to as the leading coefficient; for example, in the example expressions above, the leading coefficients are 2 and a, respectively. In the context of differential equations, these equations can often be written in terms of polynomials in one or more unknown functions and their derivatives. In such cases, the coefficients of the differential equation are the coefficients of this polynomial, and these may be non-constant functions. A coefficient is a constant coefficient when it is a constant function. For avoiding confusion, in this context a coefficient that is not attached to unknown functions or their derivatives is generally called a constant term rather than a constant coefficient. In particular, in a linear differential equation with constant coefficient, the constant coefficient term is generally not assumed to be a constant function. Terminology and definition In mathematics, a coefficient is a multiplicative factor in some term of a polynomial, a series, or any expression. For example, in the polynomial with variables and , the first two terms have the coefficients 7 and −3. The third term 1.5 is the constant coefficient. In the final term, the coefficient is 1 and is not explicitly written. In many scenarios, coefficients are numbers (as is the case for each term of the previous example), although they could be parameters of the problem—or any expression in these parameters. In such a case, one must clearly distinguish between symbols representing variables and symbols representing parameters. Following René Descartes, the variables are often denoted by , , ..., and the parameters by , , , ..., but this is not always the case. For example, if is considered a parameter in the above expression, then the coefficient of would be , and the constant coefficient (with respect to ) would be . When one writes it is generally assumed that is the only variable, and that , and are parameters; thus the constant coefficient is in this case. Any polynomial in a single variable can be written as for some nonnegative integer , where are the coefficients. This includes the possibility that some terms have coefficient 0; for example, in , the coefficient of is 0, and the term does not appear explicitly. For the largest such that (if any), is called the leading coefficient of the polynomial. For example, the leading coefficient of the polynomial is 4. This can be generalised to multivariate polynomials with respect to a monomial order, see . 
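Several inline formulas in this article were lost in conversion (hence the gaps above). As a stand-in, here is a short worked example in the same spirit; the polynomial itself is chosen for illustration and is not the one from the original text.

\[
  p(x) \;=\; 4x^{5} - 2x^{3} + x - 7
       \;=\; a_{5}x^{5} + a_{4}x^{4} + a_{3}x^{3} + a_{2}x^{2} + a_{1}x + a_{0},
\]
so that \(a_{5} = 4\) is the leading coefficient, \(a_{3} = -2\), \(a_{1} = 1\), \(a_{0} = -7\) is the constant coefficient, and \(a_{4} = a_{2} = 0\) correspond to terms that are simply not written out.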
Linear algebra In linear algebra, a system of linear equations is frequently represented by its coefficient matrix. For example, the system of equations the associated coefficient matrix is Coefficient matrices are used in algorithms such as Gaussian elimination and Cramer's rule to find solutions to the system. The leading entry (sometimes leading coefficient) of a row in a matrix is the first nonzero entry in that row. So, for example, in the matrix the leading coefficient of the first row is 1; that of the second row is 2; that of the third row is 4, while the last row does not have a leading coefficient. Though coefficients are frequently viewed as constants in elementary algebra, they can also be viewed as variables as the context broadens. For example, the coordinates of a vector in a vector space with basis are the coefficients of the basis vectors in the expression See also Correlation coefficient Degree of a polynomial Monic polynomial Binomial coefficient References Further reading Sabah Al-hadad and C.H. Scott (1979) College Algebra with Applications, page 42, Winthrop Publishers, Cambridge Massachusetts . Gordon Fuller, Walter L Wilson, Henry C Miller, (1982) College Algebra, 5th edition, page 24, Brooks/Cole Publishing, Monterey California . Polynomials Mathematical terminology Algebra Numbers Variables (mathematics)
Coefficient
[ "Mathematics" ]
965
[ "Variables (mathematics)", "Polynomials", "Mathematical objects", "Arithmetic", "nan", "Numbers", "Algebra" ]
51,210
https://en.wikipedia.org/wiki/Slow-scan%20television
Slow-scan television (SSTV) is a picture transmission method, used mainly by amateur radio operators, to transmit and receive static pictures via radio in monochrome or color. A literal term for SSTV is narrowband television. Analog broadcast television requires at least 6 MHz wide channels, because it transmits 25 or 30 picture frames per second (see ITU analog broadcast standards), but SSTV usually only takes up to a maximum of 3 kHz of bandwidth. It is a much slower method of still picture transmission, usually taking from about eight seconds to a couple of minutes, depending on the mode used, to transmit one image frame. Since SSTV systems operate on voice frequencies, amateurs use it on shortwave (also known as HF by amateur radio operators), VHF and UHF radio. History Concept The concept of SSTV was introduced by Copthorne Macdonald in 1957–58. He developed the first SSTV system using an electrostatic monitor and a vidicon tube. It was deemed sufficient to use 120 lines and about 120 pixels per line to transmit a black-and-white still picture within a 3 kHz telephone channel. First live tests were performed on the 11-meter ham band which was later given to the CB service in the US. In the 1970s, two forms of paper printout receivers were invented by hams. Early usage in space exploration SSTV was used to transmit images of the far side of the Moon from Luna 3. The first space television system was called Seliger-Tral-D and was used aboard Vostok. Vostok was based on an earlier videophone project which used two cameras, with persistent LI-23 iconoscope tubes. Its output was 10 frames per second at 100 lines per frame video signal. The Seliger system was tested during the 1960 launches of the Vostok capsule, including Sputnik 5, containing the space dogs Belka and Strelka, whose images are often mistaken for the dog Laika, and the 1961 flight of Yuri Gagarin, the first man in space on Vostok 1. Vostok 2 and thereafter used an improved 400-line television system referred to as Topaz. A second generation system (Krechet, incorporating docking views, overlay of docking data, etc.) was introduced after 1975. A similar concept, also named SSTV, was used on Faith 7, as well as on the early years of the NASA Apollo program. The Faith 7 camera transmitted one frame every two seconds, with a resolution of 320 lines. The Apollo TV cameras used SSTV to transmit images from inside Apollo 7, Apollo 8, and Apollo 9, as well as the Apollo 11 Lunar Module television from the Moon. NASA had taken all the original tapes and erased them for use on subsequent missions; however, the Apollo 11 Tape Search and Restoration Team formed in 2003 tracked down the highest-quality films among the converted recordings of the first broadcast, pieced together the best parts, then contracted a specialist film restoration company to enhance the degraded black-and-white film and convert it into digital format for archival records. The SSTV system used in NASA's early Apollo missions transferred 10 frames per second with a resolution of 320 frame lines in order to use less bandwidth than a normal TV transmission. The early SSTV systems used by NASA differ significantly from the SSTV systems currently in use by amateur radio enthusiasts today. Progression Commercial systems started appearing in the United States in 1970, after the FCC had legalized the use of SSTV for advanced level amateur radio operators in 1968. SSTV originally required quite a bit of specialized equipment. 
Usually there was a scanner or camera, a modem to create and receive the characteristic audio howl, and a cathode-ray tube from a surplus radar set. The special cathode-ray tube would have "long persistence" phosphors that would keep a picture visible for about ten seconds. The modem would generate audio tones between 1,200 and 2,300 Hz from picture signals, and picture signals from received audio tones. The audio would be attached to a radio receiver and transmitter. Current systems A modern system, having gained ground since the early 1990s, uses a personal computer and special software in place of much of the custom equipment. The sound card of a PC, with special processing software, acts as a modem. The computer screen provides the output. A small digital camera or digital photos provide the input. Modulation Like the similar radiofax mode, SSTV is an analog signal. SSTV uses frequency modulation, in which every different value of brightness in the image gets a different audio frequency. In other words, the signal frequency shifts up or down to designate brighter or darker pixels, respectively. Color is achieved by sending the brightness of each color component (usually red, green and blue) separately. This signal can be fed into an SSB transmitter, which in part modulates the carrier signal. There are a number of different modes of transmission, but the most common ones are Martin M1 (popular in Europe) and Scottie S1 (used mostly in the USA). Using one of these, an image transfer takes 114 (M1) or 110 (S1) seconds. Some black and white modes take only 8 seconds to transfer an image. Header A calibration header is sent before the image. It consists of a 300-millisecond leader tone at 1,900 Hz, a 10 ms break at 1,200 Hz, another 300-millisecond leader tone at 1,900 Hz, followed by a digital VIS (vertical interval signaling) code, identifying the transmission mode used. The VIS consists of bits of 30 milliseconds in length. The code starts with a start bit at 1,200 Hz, followed by 7 data bits (LSB first; 1,100 Hz for 1, 1,300 Hz for 0). An even parity bit follows, then a stop bit at 1,200 Hz. For example, the bits corresponding the decimal numbers 44 or 32 imply that the mode is Martin M1, whereas the number 60 represents Scottie S1. Scanlines A transmission consists of horizontal lines, scanned from left to right. The color components are sent separately one line after another. The color encoding and order of transmission can vary between modes. Most modes use an RGB color model; some modes are black-and-white, with only one channel being sent; other modes use a YC color model, which consists of luminance (Y) and chrominance (R–Y and B–Y). The modulating frequency changes between 1,500 and 2,300 Hz, corresponding to the intensity (brightness) of the color component. The modulation is analog, so even though the horizontal resolution is often defined as 256 or 320 pixels, they can be sampled using any rate. The image aspect ratio is conventionally 4:3. Lines usually end in a 1,200 Hz horizontal synchronization pulse of 5 milliseconds (after all color components of the line have been sent); in some modes, the synchronization pulse lies in the middle of the line. Modes Below is a table of some of the most common SSTV modes and their differences. These modes share many properties, such as synchronization and/or frequencies and grey/color level correspondence. 
Their main difference is the image quality, which is proportional to the time taken to transfer the image and in the case of the AVT modes, related to synchronous data transmission methods and noise resistance conferred by the use of interlace. ¹ Martin and Scottie modes actually send 256 scanlines, but the first 16 are usually grayscale. The mode family called AVT (for Amiga Video Transceiver) was originally designed by Ben Blish-Williams (N4EJI, then AA7AS) for a custom modem attached to an Amiga computer, which was eventually marketed by AEA corporation. The Scottie and Martin modes were originally implemented as ROM enhancements for the Robot Research Corporation SSTV unit. The exact line timings for the Martin M1 mode are given in this reference. The Robot SSTV modes were designed by Robot Research Corporation for their own SSTV units. All four sets of SSTV modes are now available in various PC-resident SSTV systems and no longer depend upon the original hardware. AVT AVT is an abbreviation of "Amiga Video Transceiver", software and hardware modem originally developed by "Black Belt Systems" (USA) around 1990 for the Amiga home computer popular all over the world before the IBM PC family gained sufficient audio quality with the help of special sound cards. These AVT modes differ radically from the other modes mentioned above, in that they are synchronous, that is, they have no per-line horizontal synchronization pulse but instead use the standard VIS vertical signal to identify the mode, followed by a frame-leading digital pulse train which pre-aligns the frame timing by counting first one way and then the other, allowing the pulse train to be locked in time at any single point out of 32 where it can be resolved or demodulated successfully, after which they send the actual image data, in a fully synchronous and typically interlaced mode. Interlace, no dependence upon sync, and interline reconstruction gives the AVT modes a better noise resistance than any of the other SSTV modes. Full frame images can be reconstructed with reduced resolution even if as much as 1/2 of the received signal was lost in a solid block of interference or fade because of the interlace feature. For instance, first the odd lines are sent, then the even lines. If a block of odd lines are lost, the even lines remain, and a reasonable reconstruction of the odd lines can be created by a simple vertical interpolation, resulting in a full frame of lines where the even lines are unaffected, the good odd lines are present, and the bad odd lines have been replaced with an interpolation. This is a significant visual improvement over losing a non-recoverable contiguous block of lines in a non-interlaced transmission mode. Interlace is an optional mode variation, however without it, much of the noise resistance is sacrificed, although the synchronous character of the transmission ensures that intermittent signal loss does not cause loss of the entire image. The AVT modes are mainly used in Japan and the United States. There is a full set of them in terms of black and white, color, and scan line counts of 128 and 256. Color bars and greyscale bars may be optionally overlaid top and/or bottom, but the full frame is available for image data unless the operator chooses otherwise. For receiving systems where timing was not aligned with the incoming image's timing, the AVT system provided for post-receive re-timing and alignment. 
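As a concrete illustration of the Modulation and Header sections above, the sketch below maps an 8-bit luminance value onto its tone frequency and spells out the VIS bit sequence. It is not taken from any real SSTV program; the tone() callback is a hypothetical stand-in for the actual audio synthesis, which is omitted, and per-pixel timing depends on the mode in use.

/* Illustrative sketch only: luminance-to-frequency mapping and the VIS
   sequence as described above; tone generation is left to the caller. */

/* 0 (black) -> 1500 Hz, 255 (white) -> 2300 Hz */
static double pixel_to_freq(unsigned char value)
{
    return 1500.0 + (value / 255.0) * (2300.0 - 1500.0);
}

/* Emit the calibration header and VIS code for a mode number
   (e.g. 44 for Martin M1): leader/break/leader, start bit, 7 data
   bits LSB first (1100 Hz = 1, 1300 Hz = 0), even parity, stop bit. */
static void emit_vis(unsigned mode, void (*tone)(double hz, double ms))
{
    unsigned parity = 0;
    tone(1900, 300); tone(1200, 10); tone(1900, 300);   /* calibration header */
    tone(1200, 30);                                     /* start bit */
    for (int i = 0; i < 7; i++) {
        unsigned bit = (mode >> i) & 1;
        parity ^= bit;
        tone(bit ? 1100 : 1300, 30);                    /* data bit, LSB first */
    }
    tone(parity ? 1100 : 1300, 30);                     /* even parity bit */
    tone(1200, 30);                                     /* stop bit */
}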
Other modes Frequencies Using a receiver capable of demodulating single-sideband modulation, SSTV transmissions can be heard on the following frequencies: Media In popular culture In Valve's 2007 video game Portal, there was an internet update of the program files on 3 March 2010. This update gave a challenge to find hidden radios in each test chamber and bring them to certain spots to receive hidden signals. The hidden signals became part of an ARG-style analysis by fans of the game hinting at a sequel of the game some sounds were of Morse code strings that implied the restarting of a computer system, while others could be decoded as purposefully low-quality SSTV images. When some of these decoded images were put together in the correct order, it revealed a decodable MD5 hash for a bulletin-board system phone number (425)822-5251. It provides multiple ASCII art images relating to the game and its potential sequel. The sequel, Portal 2, was later confirmed. According to a hidden commentary node SSTV image from Portal 2, the BBS is running from a Linux-based computer and is linked to a 2,400 bit/s modem from 1987. It is hooked up in an unspecified Valve developer's kitchen. They kept spare modems in case one failed, and one did. The BBS only sends about 20 megabytes of data in total. In the aforementioned sequel, Portal 2, there are four SSTV images. One is broadcast in a Rattman den. When decoded, this image is a very subtle hint towards the game's ending. The image is of a Weighted Companion Cube on the Moon. The other three images are decoded from a commentary node in another Rattman den. These 3 images are slides with bullet points on how the ARG was done, and what the outcome was, such as how long it took the combined internet to solve the puzzle (the average completion time was 7 hours). In another video game, Kerbal Space Program, there is a small hill in the southern hemisphere on the planet "Duna", which transmits a color SSTV image in Robot 24 format. It depicts four astronauts standing next to what is either the Lunar Lander from the Apollo missions, or an unfinished pyramid. Above them is the game's logo and three circles. It emits sound if an object is near the hill. As of the latest version of the game (1.12), the hill no longer transmits the signal. Caparezza, an Italian songwriter, inserted an image on the ghost track of his album Prisoner 709. The Aphex Twin release 2 Remixes by AFX contains a track that displays an SSTV image that has text about the programs used to make the release as well as a picture of Richard sitting on a couch. See also Amateur television Hellschreiber Narrow-bandwidth television Radiofax Radioteletype Shortwave SSTV repeater Videotelephony References Glidden, Ramon (September 1997). "Getting Started With Slow Scan Television." QST. Accessed on April 28, 2005. "Slow scan definition." On-line Medical Dictionary. Accessed on April 28, 2005. Turner, Jeremy (December 2003). "07: Interview With Tav Falco About Early Telematic Art at Televista in Memphis, New Center for Art Activities in New York and Open Space Gallery in Victoria, Canada." Outer Space: The Past, Present and Future of Telematic Art. Accessed on April 28, 2005. Sarkissian, John. Television from the Moon . The Parkes Observatory's Support of the Apollo 11 Mission. Latest Update: 21 October 2005. 
Notes External links World Ham/11mtr SSTV cams - https://worldsstv.com/ eng075 - UK Norfolk 11 mtr SSTV station, live SSTV signal reports and live SSTV from around the world SSTV from the International Space Station lists images received from the International Space Station via SSTV Image Communication on Short Waves – an online free ham radio handbook for SSTV, WEFAX and digital SSTV Modem software: MMSSTV for Microsoft Windows Ham Radio Deluxe for Microsoft Windows RX-SSTV for Microsoft Windows QSSTV for Linux MultiMode Cocoa for Mac OS X MultiScan for Mac OS X Robot36 for Android (operating system) (only decoding) SSTV Encoder for Android (operating system) (only encoding) SSTV Encoder/Decoder for iPhone/iPad Amateur radio Radio modulation modes Television technology
Slow-scan television
[ "Technology" ]
3,167
[ "Information and communications technology", "Television technology" ]
51,260
https://en.wikipedia.org/wiki/Bulb
In botany, a bulb is a short underground stem with fleshy leaves or leaf bases that function as food storage organs during dormancy. In gardening, plants with other kinds of storage organ are also called ornamental bulbous plants or just bulbs. Description The bulb's leaf bases, also known as scales, generally do not support leaves, but contain food reserves to enable the plant to survive adverse conditions. At the center of the bulb is a vegetative growing point or an unexpanded flowering shoot. The base is formed by a reduced stem, and plant growth occurs from this basal plate. Roots emerge from the underside of the base, and new stems and leaves from the upper side. Tunicate bulbs have dry, membranous outer scales that protect the continuous lamina of fleshy scales. Species in the genera Allium, Hippeastrum, Narcissus, and Tulipa all have tunicate bulbs. Non-tunicate bulbs, such as Lilium and Fritillaria species, lack the protective tunic and have looser scales. Bulbous plant species cycle through vegetative and reproductive growth stages; the bulb grows to flowering size during the vegetative stage and the plant flowers during the reproductive stage. Certain environmental conditions are needed to trigger the transition from one stage to the next, such as the shift from a cold winter to spring. Once the flowering period is over, the plant enters a foliage period of about six weeks during which time the plant absorbs nutrients from the soil and energy from the sun for setting flowers for the next year. Bulbs dug up before the foliage period is completed will not bloom the following year but then should flower normally in subsequent years. Plants that form bulbs Plants that form underground storage organs, including bulbs as well as tubers and corms, are called geophytes. Some epiphytic orchids (family Orchidaceae) form above-ground storage organs called pseudobulbs, that superficially resemble bulbs. Nearly all plants that form true bulbs are monocotyledons, and include: Amaryllis, Crinum, Hippeastrum, Narcissus, and several other members of the amaryllis family Amaryllidaceae. This includes onion, garlic, and other alliums, members of the Amaryllid subfamily Allioideae. Lily, tulip, and many other members of the lily family Liliaceae. Two groups of Iris species, family Iridaceae: subgenus Xiphium (the "Dutch" irises) and subgenus Hermodactyloides (the miniature "rock garden" irises). The only eudicot plants that produce true bulbs are just a few species in the genus Oxalis, such as Oxalis latifolia. Bulbil A bulbil is a small bulb, and may also be called a bulblet, bulbet, or bulbel. Small bulbs can develop or propagate a large bulb. If one or several moderate-sized bulbs form to replace the original bulb, they are called renewal bulbs. Increase bulbs are small bulbs that develop either on each of the leaves inside a bulb, or else on the end of small underground stems connected to the original bulb. Some lilies, such as the tiger lily Lilium lancifolium, form small bulbs, called bulbils, in their leaf axils. Several members of the onion family, Alliaceae, including Allium sativum (garlic), form bulbils in their flower heads, sometimes as the flowers fade, or even instead of the flowers (which is a form of apomixis). The so-called tree onion (Allium × proliferum) forms small onions which are large enough for pickling. Some ferns, such as the hen-and-chicken fern, produce new plants at the tips of the fronds' pinnae that are sometimes referred to as bulbils. 
See also List of flower bulbs References Further reading Coccoris, Patricia (2012) The Curious History of the Bulb Vase. Published by Cortex Design. Plant morphology Garden plants Plant reproduction
Bulb
[ "Biology" ]
835
[ "Behavior", "Plant reproduction", "Plants", "Reproduction", "Plant morphology" ]
51,288
https://en.wikipedia.org/wiki/Translation%20memory
A translation memory (TM) is a database that stores "segments", which can be sentences, paragraphs or sentence-like units (headings, titles or elements in a list) that have previously been translated, in order to aid human translators. The translation memory stores the source text and its corresponding translation in language pairs called “translation units”. Individual words are handled by terminology bases and are not within the domain of TM. Software programs that use translation memories are sometimes known as translation memory managers (TMM) or translation memory systems (TM systems, not to be confused with a translation management system (TMS), which is another type of software focused on managing the process of translation). Translation memories are typically used in conjunction with a dedicated computer-assisted translation (CAT) tool, word processing program, terminology management systems, multilingual dictionary, or even raw machine translation output. Research indicates that many companies producing multilingual documentation are using translation memory systems. In a survey of language professionals in 2006, 82.5% out of 874 replies confirmed the use of a TM. Usage of TM correlated with text type characterised by technical terms and simple sentence structure (technical, to a lesser degree marketing and financial), computing skills, and repetitiveness of content. Using TMs The program breaks the source text (the text to be translated) into segments, looks for matches between segments and the source half of previously translated source-target pairs stored in a translation memory, and presents such matching pairs as translation full and partial matches. The translator can accept a match, replace it with a fresh translation, or modify it to match the source. In the last two cases, the new or modified translation goes into the database. Some translation memory systems search for 100% matches only, i.e. they can only retrieve segments of text that match entries in the database exactly, while others employ fuzzy matching algorithms to retrieve similar segments, which are presented to the translator with differences flagged. Typical translation memory systems only search for text in the source segment. The flexibility and robustness of the matching algorithm largely determine the performance of the translation memory, although for some applications the recall rate of exact matches can be high enough to justify the 100%-match approach. Segments where no match is found will have to be translated by the translator manually. These newly translated segments are stored in the database where they can be used for future translations as well as repetitions of that segment in the current text. Translation memories work best on texts which are highly repetitive, such as technical manuals. They are also helpful for translating incremental changes in a previously translated document, corresponding, for example, to minor changes in a new version of a user manual. Traditionally, translation memories have not been considered appropriate for literary or creative texts, for the simple reason that there is so little repetition in the language used. 
However, others find them of value even for non-repetitive texts, because the database resources created have value for concordance searches to determine appropriate usage of terms, for quality assurance (no empty segments), and the simplification of the review process (source and target segment are always displayed together while translators have to work with two documents in a traditional review environment). Main benefits Translation memory managers are most suitable for translating technical documentation and documents containing specialized vocabularies. Their benefits include: Ensuring that the document is completely translated (translation memories do not accept empty target segments) Ensuring that the translated documents are consistent, including common definitions, phrasings and terminology. This is important when different translators are working on a single project. Enabling translators to translate documents in a wide variety of formats without having to own the software typically required to process these formats. Accelerating the overall translation process; since translation memories "remember" previously translated material, translators have to translate it only once. Reducing costs of long-term translation projects; for example the text of manuals, warning messages or series of documents needs to be translated only once and can be used several times. For large documentation projects, savings (in time or money) thanks to the use of a TM package may already be apparent even for the first translation of a new project, but normally such savings are only apparent when translating subsequent versions of a project that was translated before using translation memory. Main obstacles The main problems hindering wider use of translation memory managers include: The concept of "translation memories" is based on the premise that sentences used in previous translations can be "recycled". However, a guiding principle of translation is that the translator must translate the message of the text, and not its component sentences. Translation memory managers do not easily fit into existing translation or localization processes. In order to take advantage of TM technology, the translation processes must be redesigned. Translation memory managers do not presently support all documentation formats, and filters may not exist to support all file types. There is a learning curve associated with using translation memory managers, and the programs must be customized for greatest effectiveness. In cases where all or part of the translation process is outsourced or handled by freelance translators working off-site, the off-site workers require special tools to be able to work with the texts generated by the translation memory manager. Full versions of many translation memory managers can cost from US$500 to US$2,500 per seat, which can represent a considerable investment (although lower cost programs are also available). However, some developers produce free or low-cost versions of their tools with reduced feature sets that individual translators can use to work on projects set up with full versions of those tools. (Note that there are freeware and shareware TM packages available, but none of these has yet gained a large market share.) The costs involved in importing the user's past translations into the translation memory database, training, as well as any add-on products may also represent a considerable investment. 
Maintenance of translation memory databases still tends to be a manual process in most cases, and failure to maintain them can result in significantly decreased usability and quality of TM matches. As stated previously, translation memory managers may not be suitable for text that lacks internal repetition or which does not contain unchanged portions between revisions. Technical text is generally best suited for translation memory, while marketing or creative texts will be less suitable. Effects on quality The use of TM systems might have an effect on the quality of the texts translated. Its main effect is clearly related to the so-called "error propagation": if the translation for a particular segment is incorrect, it is in fact more likely that the incorrect translation will be reused the next time the same source text, or a similar source text, is translated, thereby perpetuating the error. Traditionally, two main effects on the quality of translated texts have been described: the "sentence-salad" effect (Bédard 2000; cited in O'Hagan 2009: 50) and the "peep-hole" effect (Heyn 1998). The first refers to a lack of coherence at the text level when a text is translated using sentences from a TM which have been translated by different translators with different styles. According to the latter, translators may adapt their style to the use of the TM system so that segments do not contain intratextual references and can be better reused in future texts, thus affecting cohesion and readability (O'Hagan 2009). There is also a potential, and probably unconscious, effect on the translated text. Different languages use different sequences for the logical elements within a sentence, and a translator presented with a multiple-clause sentence that is half translated is less likely to completely rebuild the sentence. Consistent empirical evidence (Martín-Mor 2011) shows that translators are more likely to modify the structure of a multiple-clause sentence when working with a text processor than with a TM system. There is also a potential for the translator to deal with the text mechanically sentence-by-sentence, instead of focusing on how each sentence relates to those around it and to the text as a whole. Researchers (Dragsted 2004) have identified this effect, which relates to the automatic segmentation feature of these programs, but it does not necessarily have a negative effect on the quality of translations. These effects are closely related to training rather than being inherent to the tool. According to Martín-Mor (2011), the use of TM systems does have an effect on the quality of the translated texts, especially on novices, but experienced translators are able to avoid it. Pym (2013) points out that "translators using TM/MT tend to revise each segment as they go along, allowing little time for a final revision of the whole text at the end", which might be the ultimate cause of some of the effects described here. Types of TM systems Desktop: Desktop translation memory tools are typically what individual translators use to complete translations. They are programs that a freelance translator downloads and installs on a desktop computer. Server-based or centralised: Centralized translation memory systems store TM on a central server. They work together with desktop TM and can increase TM match rates by 30–60% over the leverage attained by desktop TM alone. Functions The following is a summary of the main functions of a translation memory.
Offline functions Import This function is used to transfer a text and its translation from a text file to the TM. Import can be done from a raw format, in which an external source text is available for importing into a TM along with its translation. Sometimes the texts have to be reprocessed by the user. There is another format that can be used for import: the native format, the format that the TM itself uses to save translation memories in a file. Analysis The process of analysis involves the following steps: Textual parsing It is very important to recognize punctuation correctly in order to distinguish between, for example, a full stop at the end of a sentence and a full stop in an abbreviation. Thus, mark-up is a kind of pre-editing. Usually, materials which have been processed through translators' aid programs contain mark-up, as the translation stage is embedded in a multilingual document production line. Other special text elements may be set off by mark-up. There are special elements which do not need to be translated, such as proper names and codes, while others may need to be converted to native format. Linguistic parsing The base form reduction is used to prepare lists of words and a text for automatic retrieval of terms from a term bank. On the other hand, syntactic parsing may be used to extract multi-word terms or phraseology from a source text. Parsing is thus also used to normalise word-order variation in phraseology, that is, to determine which words can form a phrase. Segmentation Its purpose is to choose the most useful translation units. Segmentation is like a type of parsing. It is done monolingually using superficial parsing, and alignment is based on segmentation. If translators correct the segmentations manually, later versions of the document will not find matches against the TM based on the corrected segmentation, because the program will repeat its own errors. Translators usually proceed sentence by sentence, although the translation of one sentence may depend on the translation of the surrounding ones. Alignment It is the task of defining translation correspondences between source and target texts. There should be feedback from alignment to segmentation, and a good alignment algorithm should be able to correct initial segmentation. Term extraction It can have as input a previous dictionary. Moreover, when extracting unknown terms, it can use parsing based on text statistics. These statistics are also used to estimate the amount of work involved in a translation job, which is very useful for planning and scheduling the work; translation statistics usually count the words and estimate the amount of repetition in the text. Export Export transfers the text from the TM into an external text file. Import and export should be inverses. Online functions When translating, one of the main purposes of the TM is to retrieve the most useful matches in the memory so that the translator can choose the best one. The TM must show both the source and target text, pointing out identities and differences. Retrieval Several different types of matches can be retrieved from a TM. Exact match Exact matches appear when the match between the current source segment and the stored one is a character-by-character match. When translating a sentence, an exact match means the same sentence has been translated before. Exact matches are also called "100% matches". In-Context Exact (ICE) match or Guaranteed Match An ICE match is an exact match that occurs in exactly the same context, that is, the same location in a paragraph.
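A minimal sketch of the difference between an exact match and an in-context exact (ICE) match follows, assuming for illustration that context is represented only by the preceding source segment (real tools use richer context, as described next); all data and names here are hypothetical.

```python
# Toy TM where each unit stores its source, target, and the source segment
# that preceded it in the original document (a very simplified notion of
# "context"; real tools also use following segments and document metadata).
tm_units = [
    {"prev": "Safety instructions.", "src": "Unplug the device.",
     "tgt": "Débranchez l'appareil."},
    {"prev": "Cleaning.", "src": "Unplug the device.",
     "tgt": "Débranchez l'appareil avant le nettoyage."},
]

def exact_matches(segment):
    return [u for u in tm_units if u["src"] == segment]

def ice_matches(segment, previous_segment):
    # An ICE match must agree on both the segment and its context.
    return [u for u in exact_matches(segment) if u["prev"] == previous_segment]

print(len(exact_matches("Unplug the device.")))                        # 2 exact matches
print(len(ice_matches("Unplug the device.", "Safety instructions.")))  # 1 ICE match
```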
Context is often defined by the surrounding sentences and attributes such as document file name, date, and permissions. Fuzzy match When the match is not exact, it is a "fuzzy" match. Some systems assign percentages to these kinds of matches, in which case a fuzzy match is greater than 0% and less than 100%. Those figures are not comparable across systems unless the method of scoring is specified. Concordance When the translator selects one or more words in the source segment, the system retrieves segment pairs that match the search criteria. This feature is helpful for finding translations of terms and idioms in the absence of a terminology database. Updating A TM is updated with a new translation when it has been accepted by the translator. As always in updating a database, there is the question what to do with the previous contents of the database. A TM can be modified by changing or deleting entries in the TM. Some systems allow translators to save multiple translations of the same source segment. Automatic translation Translation memory tools often provide automatic retrieval and substitution. Automatic retrieval TM systems are searched and their results displayed automatically as a translator moves through a document. Automatic substitution With automatic substitution, if an exact match comes up in translating a new version of a document, the software will repeat the old translation. If the translator does not check the translation against the source, a mistake in the previous translation will be repeated. Networking Networking enables a group of translators to translate a text together faster than if each was working in isolation, because sentences and phrases translated by one translator are available to the others. Moreover, if translation memories are shared before the final translation, there is an opportunity for mistakes by one translator to be corrected by other team members. Text memory "Text memory" is the basis of the proposed Lisa OSCAR xml:tm standard. Text memory comprises author memory and translation memory. Translation memory The unique identifiers are remembered during translation so that the target language document is 'exactly' aligned at the text unit level. If the source document is subsequently modified, then those text units that have not changed can be directly transferred to the new target version of the document without the need for any translator interaction. This is the concept of 'exact' or 'perfect' matching to the translation memory. xml:tm can also provide mechanisms for in-document leveraged and fuzzy matching. History 1970s is the infancy stage for TM systems in which scholars carried on a preliminary round of exploratory discussions. The original idea for TM systems is often attributed to Martin Kay's "Proper Place" paper, but the details of it are not fully given. In this paper, it has shown the basic concept of the storing system: "The translator might start by issuing a command causing the system to display anything in the store that might be relevant to .... Before going on, he can examine past and future fragments of text that contain similar material". This observation from Kay was actually influenced by the suggestion of Peter Arthern that translators can use similar, already translated documents online. 
In his 1978 article he gave a full description of what we call TM systems today: Any new text would be typed into a word processing station, and as it was being typed, the system would check this text against the earlier texts stored in its memory, together with its translation into all the other official languages [of the European Community]. ... One advantage over machine translation proper would be that all the passages so retrieved would be grammatically correct. In effect, we should be operating an electronic 'cut and stick' process which would, according to my calculations, save at least 15 per cent of the time which translators now employ in effectively producing translations. The idea was later incorporated into the ALPS (Automated Language Processing Systems) tools first developed by researchers from Brigham Young University, and at that time the idea of TM systems was mixed with a tool called "Repetitions Processing", which only aimed to find matched strings. Only after a long time did the concept of so-called translation memory come into being. The real exploratory stage of TM systems was the 1980s. One of the first implementations of a TM system appeared in Sadler and Vendelmans' Bilingual Knowledge Bank. A Bilingual Knowledge Bank is a syntactically and referentially structured pair of corpora, one being a translation of the other, in which translation units are cross-coded between the corpora. The aim of the Bilingual Knowledge Bank was to develop a corpus-based general-purpose knowledge source for applications in machine translation and computer-aided translation (Sadler & Vendelman, 1987). Another important step was made by Brian Harris with his "bi-text". He defined the bi-text as "a single text in two dimensions" (1988), the source and target texts related by the activity of the translator through translation units, a notion which echoes Sadler's Bilingual Knowledge Bank. In his work, Harris proposed something like a TM system without using that name: a database of paired translations, searchable either by individual word or by "whole translation unit", in the latter case the search being allowed to retrieve similar rather than identical units. TM technology only became commercially available on a wide scale in the late 1990s, through the efforts made by several engineers and translators. Of note is the first TM tool, called Trados (SDL Trados nowadays). In this tool, the user opens the source file and applies the translation memory so that any "100% matches" (identical matches) or "fuzzy matches" (similar, but not identical matches) within the text are instantly extracted and placed within the target file. Then, the "matches" suggested by the translation memory can be either accepted or overridden with new alternatives. If a translation unit is manually updated, it is stored within the translation memory for future use as well as for repetition in the current text. In a similar way, all segments in the target file without a "match" are translated manually and then automatically added to the translation memory. In the 2000s, online translation services began incorporating TM. Machine translation services like Google Translate, as well as professional and "hybrid" translation services provided by sites like Gengo and Ackuna, incorporate databases of TM data supplied by translators and volunteers to make more efficient connections between languages and provide faster translation services to end-users.
Recent trends One recent development is the concept of 'text memory' in contrast to translation memory. This is also the basis of the proposed LISA OSCAR standard. Text memory within xml:tm comprises 'author memory' and 'translation memory'. Author memory is used to keep track of changes during the authoring cycle. Translation memory uses the information from author memory to implement translation memory matching. Although primarily targeted at XML documents, xml:tm can be used on any document that can be converted to XLIFF format. Second-generation translation memories Much more powerful than first-generation TM systems, they include a linguistic analysis engine, use chunk technology to break down segments into intelligent terminological groups, and automatically generate specific glossaries. Related standards TMX Translation Memory eXchange (TMX) is a standard that enables the interchange of translation memories between translation suppliers. TMX has been adopted by the translation community as the best way of importing and exporting translation memories. The current version is 1.4b - it allows for the recreation of the original source and target documents from the TMX data. TBX TermBase eXchange. This LISA standard, which was revised and republished as ISO 30042, allows for the interchange of terminology data including detailed lexical information. The framework for TBX is provided by three ISO standards: ISO 12620, ISO 12200 and ISO 16642. ISO 12620 provides an inventory of well-defined “data categories” with standardized names that function as data element types or as predefined values. ISO 12200 (also known as MARTIF) provides the basis for the core structure of TBX. ISO 16642 (also known as Terminological Markup Framework) includes a structural meta-model for Terminology Markup Languages in general. UTX Universal Terminology eXchange (UTX) format is a standard specifically designed to be used for user dictionaries of machine translation, but it can be used for general, human-readable glossaries. The purpose of UTX is to accelerate dictionary sharing and reuse by its extremely simple and practical specification. SRX Segmentation Rules eXchange (SRX) is intended to enhance the TMX standard so that translation memory data that is exchanged between applications can be used more effectively. The ability to specify the segmentation rules that were used in the previous translation may increase the leveraging that can be achieved. GMX GILT Metrics. GILT stands for (Globalization, Internationalization, Localization, and Translation). The GILT Metrics standard comprises three parts: GMX-V for volume metrics, GMX-C for complexity metrics and GMX-Q for quality metrics. The proposed GILT Metrics standard is tasked with quantifying the workload and quality requirements for any given GILT task. OLIF Open Lexicon Interchange Format. OLIF is an open, XML-compliant standard for the exchange of terminological and lexical data. Although originally intended as a means for the exchange of lexical data between proprietary machine translation lexicons, it has evolved into a more general standard for terminology exchange. XLIFF XML Localisation Interchange File Format (XLIFF) is intended to provide a single interchange file format that can be understood by any localization provider. XLIFF is the preferred way of exchanging data in XML format in the translation industry. TransWS Translation Web Services. 
TransWS specifies the calls needed to use Web services for the submission and retrieval of files and messages relating to localization projects. It is intended as a detailed framework for the automation of much of the current localization process by the use of Web Services. xml:tm The xml:tm (XML-based Text Memory) approach to translation memory is based on the concept of text memory which comprises author and translation memory. xml:tm has been donated to Lisa OSCAR by XML-INTL. PO Gettext Portable Object format. Though often not regarded as a translation memory format, Gettext PO files are bilingual files that are also used in translation memory processes in the same way translation memories are used. Typically, a PO translation memory system will consist of various separate files in a directory tree structure. Common tools that work with PO files include the GNU Gettext Tools and the Translate Toolkit. Several tools and programs also exist that edit PO files as if they are mere source text files. See also Comparison of computer-assisted translation tools List of translation software Translation Text corpus Computer-assisted reviewing Translation software#Applications Parallel text Online bilingual concordance References Further reading Dragsted, Barbara. (2004). Segmentation in translation and translation memory systems: An empirical investigation of cognitive segmentation and effects of integrating a TM system into the translation process. Copenhagen: Samfundslitteratur. 369 p. Heyn, Matthias. (1998). “Translation memories: Insights and prospects”. In: Lynne Bowker; et al. (eds.), Unity in diversity? Current trends in translation studies. Manchester: St. Jerome. P. 123–136. Martín-Mor, Adrià (2011), La interferència lingüística en entorns de Traducció Assistida per Ordinador: Recerca empíricoexperimental. Bellaterra: Universitat Autònoma de Barcelona. URL: http://www.tdx.cat/handle/10803/83987. O’Hagan, Minako. (2009). “Computer-aided translation (CAT)”. In: Mona Baker & Gabriela Saldanha (eds.), Routledge encyclopedia of translation studies. London: Routledge. P. 48–51. Pym, Anthony (2013). Translation Skill-Sets in a Machine-Translation Age. Meta: Translators' Journal, 58 (3), p. 487-503. URL: http://id.erudit.org/iderudit/1025047ar External links Translation memories Benchmarking translation memories Ecolore survey of TM use by freelance translators (Word document) Power Shifts in Web-Based Translation Memory Computer-assisted translation Translation databases
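As a rough illustration of the TMX structure described above, the following Python sketch builds a minimal TMX-style document with a single translation unit using only the standard library; the element names follow the general tmx/header/body/tu/tuv/seg layout of the standard, but the header attributes shown are a reduced, illustrative subset and the snippet omits details a fully conformant file would need.

```python
import xml.etree.ElementTree as ET

# Build a minimal TMX-style document with one translation unit (en -> fr).
# Header attributes are reduced to a plausible subset for illustration.
tmx = ET.Element("tmx", version="1.4")
ET.SubElement(tmx, "header", {
    "creationtool": "example", "creationtoolversion": "0.1",
    "segtype": "sentence", "o-tmf": "none", "adminlang": "en",
    "srclang": "en", "datatype": "plaintext",
})
body = ET.SubElement(tmx, "body")
tu = ET.SubElement(body, "tu")
for lang, text in [("en", "Close the valve."), ("fr", "Fermez la vanne.")]:
    tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
    ET.SubElement(tuv, "seg").text = text

print(ET.tostring(tmx, encoding="unicode"))
```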
Translation memory
[ "Technology" ]
5,153
[ "Natural language and computing", "Computer-assisted translation" ]
51,331
https://en.wikipedia.org/wiki/Dimensionless%20quantity
Dimensionless quantities, or quantities of dimension one, are quantities implicitly defined in a manner that prevents their aggregation into units of measurement. Typically expressed as ratios that align with another system, these quantities do not necessitate explicitly defined units. For instance, alcohol by volume (ABV) represents a volumetric ratio; its value remains independent of the specific units of volume used, such as in milliliters per milliliter (mL/mL). The number one is recognized as a dimensionless base quantity. Radians serve as dimensionless units for angular measurements, derived from the universal ratio of 2π times the radius of a circle being equal to its circumference. Dimensionless quantities play a crucial role, serving as parameters in differential equations in various technical disciplines. In calculus, concepts like the unitless ratios in limits or derivatives often involve dimensionless quantities. In differential geometry, the use of dimensionless parameters is evident in geometric relationships and transformations. Physics relies on dimensionless numbers like the Reynolds number in fluid dynamics, the fine-structure constant in quantum mechanics, and the Lorentz factor in relativity. In chemistry, state properties and ratios such as mole fractions and concentration ratios are dimensionless. History Quantities having dimension one, dimensionless quantities, regularly occur in the sciences, and are formally treated within the field of dimensional analysis. In the 19th century, French mathematician Joseph Fourier and Scottish physicist James Clerk Maxwell led significant developments in the modern concepts of dimension and unit. Later work by British physicists Osborne Reynolds and Lord Rayleigh contributed to the understanding of dimensionless numbers in physics. Building on Rayleigh's method of dimensional analysis, Edgar Buckingham proved the π theorem (independently of French mathematician Joseph Bertrand's previous work) to formalize the nature of these quantities. Numerous dimensionless numbers, mostly ratios, were coined in the early 1900s, particularly in the areas of fluid mechanics and heat transfer. Measuring the logarithm of ratios as levels in the (derived) unit decibel (dB) finds widespread use nowadays. There have been periodic proposals to "patch" the SI system to reduce confusion regarding physical dimensions. For example, a 2017 op-ed in Nature argued for formalizing the radian as a physical unit. The idea was rebutted on the grounds that such a change would raise inconsistencies both for established dimensionless groups, like the Strouhal number, and for mathematically distinct entities that happen to have the same units, like torque (a vector product) versus energy (a scalar product). In another instance in the early 2000s, the International Committee for Weights and Measures discussed naming the unit of 1 the "uno", but the idea of just introducing a new SI name for 1 was dropped. Buckingham π theorem The Buckingham π theorem indicates that the validity of the laws of physics does not depend on a specific unit system. A statement of this theorem is that any physical law can be expressed as an identity involving only dimensionless combinations (ratios or products) of the variables linked by the law (e.g., pressure and volume are linked by Boyle's law – they are inversely proportional). If the dimensionless combinations' values changed with the systems of units, then the equation would not be an identity, and Buckingham's theorem would not hold.
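As a standard textbook illustration of the theorem (not drawn from the sources cited in this article), consider the period t of a simple pendulum of length ℓ and bob mass m in a gravitational field g. The dimensions are [t] = T, [ℓ] = L, [g] = L⋅T⁻², and [m] = M, so from these four variables and three independent dimensions only one independent dimensionless combination can be formed, Π = t⋅√(g/ℓ); the mass cannot appear in it, since no other variable carries the dimension M. Any physical law relating these variables must therefore reduce to Π = constant, i.e. t is proportional to √(ℓ/g), in every system of units.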
Another consequence of the theorem is that the functional dependence between a certain number (say, n) of variables can be reduced by the number (say, k) of independent dimensions occurring in those variables to give a set of p = n − k independent, dimensionless quantities. For the purposes of the experimenter, different systems that share the same description by dimensionless quantity are equivalent. Integers Integer numbers may represent dimensionless quantities. They can represent discrete quantities, which can also be dimensionless. More specifically, counting numbers can be used to express countable quantities. The concept is formalized as the quantity number of entities (symbol N) in ISO 80000-1. Examples include number of particles and population size. In mathematics, the "number of elements" in a set is termed cardinality. Countable nouns are a related concept in linguistics. Counting numbers, such as number of bits, can be compounded with units of frequency (inverse second) to derive units of count rate, such as bits per second. Count data is a related concept in statistics. The concept may be generalized by allowing non-integer numbers to account for fractions of a full item, e.g., number of turns equal to one half. Ratios, proportions, and angles Dimensionless quantities can be obtained as ratios of quantities that are not dimensionless, but whose dimensions cancel out in the mathematical operation. Examples of quotients of dimension one include calculating slopes or some unit conversion factors. Another set of examples is mass fractions or mole fractions, often written using parts-per notation such as ppm (= 10⁻⁶), ppb (= 10⁻⁹), and ppt (= 10⁻¹²), or perhaps confusingly as ratios of two identical units (kg/kg or mol/mol). For example, alcohol by volume, which characterizes the concentration of ethanol in an alcoholic beverage, could be written as . Other common proportions are the percent, % (= 0.01), and the per mille, ‰ (= 0.001). Some angle units such as turn, radian, and steradian are defined as ratios of quantities of the same kind. In statistics, the coefficient of variation is the ratio of the standard deviation to the mean and is used to measure the dispersion in the data. It has been argued that quantities defined as ratios having equal dimensions in numerator and denominator are actually only unitless quantities and still have physical dimension defined as . For example, moisture content may be defined as a ratio of volumes (volumetric moisture, m³⋅m⁻³, dimension L³⋅L⁻³) or as a ratio of masses (gravimetric moisture, units kg⋅kg⁻¹, dimension M⋅M⁻¹); both would be unitless quantities, but of different dimension. Dimensionless physical constants Certain universal dimensioned physical constants, such as the speed of light in vacuum, the universal gravitational constant, the Planck constant, the Coulomb constant, and the Boltzmann constant can be normalized to 1 if appropriate units for time, length, mass, charge, and temperature are chosen. The resulting system of units is known as natural units, specifically regarding these five constants, as Planck units. However, not all physical constants can be normalized in this fashion. For example, the values of the following constants are independent of the system of units, cannot be defined, and can only be determined experimentally: engineering strain, a measure of physical deformation defined as a change in length divided by the initial length.
fine-structure constant, α ≈ 1/137 which characterizes the magnitude of the electromagnetic interaction between electrons. β (or μ) ≈ 1836, the proton-to-electron mass ratio. This ratio is the rest mass of the proton divided by that of the electron. An analogous ratio can be defined for any elementary particle. Strong force coupling strength αs ≈ 1. The tensor-to-scalar ratio , a ratio between the contributions of tensor and scalar modes to the primordial power spectrum observed in the CMB. The Immirzi-Barbero parameter , which characterizes the area gap in loop quantum gravity. emissivity, which is the ratio of actual emitted radiation from a surface to that of an idealized surface at the same temperature List Physics and engineering Lorentz factor – parameter used in the context of special relativity for time dilation, length contraction, and relativistic effects between observers moving at different velocities Fresnel number – wavenumber (spatial frequency) over distance Beta (plasma physics) – ratio of plasma pressure to magnetic pressure, used in magnetospheric physics as well as fusion plasma physics. Thiele modulus – describes the relationship between diffusion and reaction rate in porous catalyst pellets with no mass transfer limitations. Numerical aperture – characterizes the range of angles over which the system can accept or emit light. Zukoski number, usually noted , is the ratio of the heat release rate of a fire to the enthalpy of the gas flow rate circulating through the fire. Accidental and natural fires usually have a . Flat spread fires such as forest fires have . Fires originating from pressured vessels or pipes, with additional momentum caused by pressure, have . Fluid mechanics Chemistry Relative density – density relative to water Relative atomic mass, Standard atomic weight Equilibrium constant (which is sometimes dimensionless) Other fields Cost of transport is the efficiency in moving from one place to another Elasticity is the measurement of the proportional change of an economic variable in response to a change in another Basic reproduction number is a dimensionless ratio used in epidemiology to quantify the transmissibility of an infection. See also List of dimensionless quantities Arbitrary unit Dimensional analysis Normalization (statistics) and standardized moment, the analogous concepts in statistics Orders of magnitude (numbers) Similitude (model) References Further reading (15 pages) External links
Dimensionless quantity
[ "Physics", "Mathematics" ]
1,869
[ "Quantity", "Physical quantities", "Dimensionless quantities" ]
51,332
https://en.wikipedia.org/wiki/Power%20number
For Newton number, see also Kissing number in the sphere packing problem. The power number Np (also known as the Newton number) is a commonly used dimensionless number relating the resistance force to the inertia force. The power number has different specifications according to the field of application. For stirrers, for example, the power number is defined as Np = P / (ρ n³ D⁵), where P is the power, ρ is the fluid density, n is the rotational speed in revolutions per second, and D is the diameter of the stirrer. References Dimensionless numbers of fluid mechanics Fluid dynamics
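For illustration only, a small Python helper computing the stirrer power number from the quantities defined above; the numerical values in the example are arbitrary and not taken from the article.

```python
def power_number(P, rho, n, D):
    """Np = P / (rho * n**3 * D**5) for a stirred vessel.

    P   -- power drawn by the impeller, W
    rho -- fluid density, kg/m^3
    n   -- rotational speed, rev/s
    D   -- impeller diameter, m
    """
    return P / (rho * n**3 * D**5)

# Illustrative numbers only: a 0.5 m impeller at 2 rev/s drawing 800 W in water.
print(round(power_number(800.0, 1000.0, 2.0, 0.5), 3))  # -> 3.2
```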
Power number
[ "Chemistry", "Engineering" ]
106
[ "Piping", "Chemical engineering", "Fluid dynamics stubs", "Fluid dynamics" ]
51,346
https://en.wikipedia.org/wiki/Coconut
The coconut tree (Cocos nucifera) is a member of the palm tree family (Arecaceae) and the only living species of the genus Cocos. The term "coconut" (or the archaic "cocoanut") can refer to the whole coconut palm, the seed, or the fruit, which botanically is a drupe, not a nut. They are ubiquitous in coastal tropical regions and are a cultural icon of the tropics. The coconut tree provides food, fuel, cosmetics, folk medicine and building materials, among many other uses. The inner flesh of the mature seed, as well as the coconut milk extracted from it, forms a regular part of the diets of many people in the tropics and subtropics. Coconuts are distinct from other fruits because their endosperm contains a large quantity of an almost clear liquid, called "coconut water" or "coconut juice". Mature, ripe coconuts can be used as edible seeds, or processed for oil and plant milk from the flesh, charcoal from the hard shell, and coir from the fibrous husk. Dried coconut flesh is called copra, and the oil and milk derived from it are commonly used in cooking (frying in particular) as well as in soaps and cosmetics. Sweet coconut sap can be made into drinks or fermented into palm wine or coconut vinegar. The hard shells, fibrous husks and long pinnate leaves can be used as material to make a variety of products for furnishing and decoration. The coconut has cultural and religious significance in certain societies, particularly in the Austronesian cultures of the Western Pacific where it is featured in their mythologies, songs, and oral traditions. The fall of its mature fruit has led to a preoccupation with death by coconut. It also had ceremonial importance in pre-colonial animistic religions. It has also acquired religious significance in South Asian cultures, where it forms the basis of wedding and worship rituals in Hinduism. It also plays a central role in the Coconut Religion founded in 1963 in Vietnam. Coconuts were first domesticated by the Austronesian peoples in Island Southeast Asia and were spread during the Neolithic via their seaborne migrations as far east as the Pacific Islands, and as far west as Madagascar and the Comoros. They played a critical role in the long sea voyages of Austronesians by providing a portable source of food and water, as well as providing building materials for Austronesian outrigger boats. Coconuts were also later spread in historic times along the coasts of the Indian and Atlantic Oceans by South Asian, Arab, and European sailors. Based on these separate introductions, coconut populations can still be divided into Pacific coconuts and Indo-Atlantic coconuts, respectively. Coconuts were introduced by Europeans to the Americas during the colonial era in the Columbian exchange, but there is evidence of a possible pre-Columbian introduction of Pacific coconuts to Panama by Austronesian sailors. The evolutionary origin of the coconut is under dispute, with theories stating that it may have evolved in Asia, South America, or Pacific islands. Trees can grow up to tall and can yield up to 75 fruits per year, though fewer than 30 is more typical. Plants are intolerant to cold and prefer copious precipitation and full sunlight. Many insect pests and diseases affect the species and are a nuisance for commercial production. In 2022, about 73% of the world's supply of coconuts was produced by Indonesia, India, and the Philippines.
Description Cocos nucifera is a large palm, growing up to tall, with pinnate leaves long, and pinnae long; old leaves break away cleanly, leaving the trunk smooth. On fertile soil, a tall coconut palm tree can yield up to 75 fruits per year, but more often yields less than 30. Given proper care and growing conditions, coconut palms produce their first fruit in six to ten years, taking 15 to 20 years to reach peak production. True-to-type dwarf varieties of Pacific coconuts have been cultivated by the Austronesian peoples since ancient times. These varieties were selected for slower growth, sweeter coconut water, and often brightly colored fruits. Many modern varieties are also grown, including the Maypan, King, and Macapuno. These vary by the taste of the coconut water and color of the fruit, as well as other genetic factors. Fruit Botanically, the coconut fruit is a drupe, not a true nut. Like other fruits, it has three layers: the exocarp, mesocarp, and endocarp. The exocarp is the glossy outer skin, usually yellow-green to yellow-brown in color. The mesocarp is composed of a fiber, called coir, which has many traditional and commercial uses. Both the exocarp and the mesocarp make up the "husk" of the coconut, while the endocarp makes up the hard coconut "shell". The endocarp is around thick and has three distinctive germination pores (micropyles) on the distal end. Two of the pores are plugged (the "eyes"), while one is functional. The interior of the endocarp is hollow and is lined with a thin brown seed coat around thick. The endocarp is initially filled with a multinucleate liquid endosperm (the coconut water). As development continues, cellular layers of endosperm deposit along the walls of the endocarp up to thick, starting at the distal end. They eventually form the edible solid endosperm (the "coconut meat" or "coconut flesh") which hardens over time. The small cylindrical embryo is embedded in the solid endosperm directly below the functional pore of the endosperm. During germination, the embryo pushes out of the functional pore and forms a haustorium (the coconut sprout) inside the central cavity. The haustorium absorbs the solid endosperm to nourish the seedling. Coconut fruits have two distinctive forms depending on . Wild coconuts feature an elongated triangular fruit with a thicker husk and a smaller amount of endosperm. These allow the fruits to be more buoyant and make it easier for them to lodge into sandy shorelines, making their shape ideal for ocean dispersal. Domesticated Pacific coconuts, on the other hand, are rounded in shape with a thinner husk and a larger amount of endosperm. Domesticated coconuts also contain more coconut water. These two forms are referred to by the Samoan terms niu kafa for the elongated wild coconuts, and niu vai for the rounded domesticated Pacific coconuts. A full-sized coconut fruit weighs about . Coconuts sold domestically in coconut-producing countries are typically not de-husked. Especially immature coconuts (6 to 8 months from flowering) are sold for coconut water and softer jelly-like coconut meat (known as "green coconuts", "young coconuts", or "water coconuts"), where the original coloration of the fruit is more aesthetically pleasing. Whole mature coconuts (11 to 13 months from flowering) sold for export, however, typically have the husk removed to reduce weight and volume for transport. This results in the naked coconut "shell" with three pores more familiar in countries where coconuts are not grown locally. 
De-husked coconuts typically weigh around . coconuts are also easier for consumers to open, but have a shorter postharvest storage life of around two to three weeks at temperatures of or up to 2 months at . In comparison, mature coconuts with the husk intact can be stored for three to five months at normal room temperature . Roots Unlike some other plants, the palm tree has neither a taproot nor root hairs, but has a fibrous root system. The root system consists of an abundance of thin roots that grow outward from the plant near the surface. Only a few of the roots penetrate deep into the soil for stability. This type of root system is known as fibrous or adventitious, and is a characteristic of grass species. Other types of large trees produce a single downward-growing tap root with a number of feeder roots growing from it. 2,000–4,000 adventitious roots may grow, each about large. Decayed roots are replaced regularly as the tree grows new ones. Inflorescence The palm produces both the female and male flowers on the same inflorescence; thus, the palm is monoecious. However, there is some evidence that it may be polygamomonoecious and may occasionally have bisexual flowers. The female flower is much larger than the male flower. Flowering occurs continuously. Coconut palms are believed to be largely cross-pollinated, although most dwarf varieties are self-pollinating. Taxonomy Phylogeny The evolutionary history and fossil distribution of Cocos nucifera and other members of the tribe Cocoseae is more ambiguous than modern-day dispersal and distribution, with its ultimate origin and pre-human dispersal still unclear. There are currently two major viewpoints on the origins of the genus Cocos, one in the Indo-Pacific, and another in South America. The vast majority of Cocos-like fossils have been recovered generally from only two regions in the world: New Zealand and west-central India. However, like most palm fossils, Cocos-like fossils are still putative, as they are usually difficult to identify. The earliest Cocos-like fossil to be found was Cocos zeylandica, a fossil species described as small fruits, around × in size, recovered from the Miocene (~23 to 5.3 million years ago) of New Zealand in 1926. Since then, numerous other fossils of similar fruits were recovered throughout New Zealand from the Eocene, Oligocene, and possibly the Holocene. But research on them is still ongoing to determine their phylogenetic affinities. Endt & Hayward (1997) have noted their resemblance to members of the South American genus Parajubaea, rather than Cocos, and propose a South American origin. Conran et al. (2015), however, suggests that their diversity in New Zealand indicate that they evolved endemically, rather than being introduced to the islands by long-distance dispersal. In west-central India, numerous fossils of Cocos-like fruits, leaves, and stems have been recovered from the Deccan Traps. They include morphotaxa like Palmoxylon sundaran, Palmoxylon insignae, and Palmocarpon cocoides. Cocos-like fossils of fruits include Cocos intertrappeansis, Cocos pantii, and Cocos sahnii. They also include fossil fruits that have been tentatively identified as modern Cocos nucifera. These include two specimens named Cocos palaeonucifera and Cocos binoriensis, both dated by their authors to the Maastrichtian–Danian of the early Tertiary (70 to 62 million years ago). C. binoriensis has been claimed by their authors to be the earliest known fossil of Cocos nucifera. 
Outside of New Zealand and India, only two other regions have reported Cocos-like fossils, namely Australia and Colombia. In Australia, a Cocos-like fossil fruit, measuring , were recovered from the Chinchilla Sand Formation dated to the latest Pliocene or basal Pleistocene. Rigby (1995) assigned them to modern Cocos nucifera based on its size. In Colombia, a single Cocos-like fruit was recovered from the middle to late Paleocene Cerrejón Formation. The fruit, however, was compacted in the fossilization process and it was not possible to determine if it had the diagnostic three pores that characterize members of the tribe Cocoseae. Nevertheless, Gomez-Navarro et al. (2009), assigned it to Cocos based on the size and the ridged shape of the fruit. Further complicating measures to determine the evolutionary history of Cocos is the genetic diversity present within C. nucifera as well as its relatedness to other palms. Phylogenetic evidence supports the closest relatives of Cocos being either Syagrus or Attalea, both of which are found in South America. However, Cocos is not thought to be indigenous to South America, and the highest genetic diversity is present in Asian Cocos, indicating that at least the modern species Cocos nucifera is native to there. In addition, fossils of potential Cocos ancestors have been recovered from both Colombia and India. In order to resolve this enigma, a 2014 study proposed that the ancestors of Cocos had likely originated on the Caribbean coast of what is now Colombia, and during the Eocene the ancestral Cocos performed a long-distance dispersal across the Atlantic Ocean to North Africa. From here, island-hopping via coral atolls lining the Tethys Sea, potentially boosted by ocean currents at the time, would have proved crucial to dispersal, eventually allowing ancestral coconuts to reach India. The study contended that an adaptation to coral atolls would explain the prehistoric and modern distributions of Cocos, would have provided the necessary evolutionary pressures, and would account for morphological factors such as a thick husk to protect against ocean degradation and provide a moist medium in which to germinate on sparse atolls. Etymology The name coconut is derived from the 16th-century Portuguese word coco, meaning 'head' or 'skull' after the three indentations on the coconut shell that resemble facial features. Coco and coconut apparently came from 1521 encounters by Portuguese and Spanish explorers with Pacific Islanders, with the coconut shell reminding them of a ghost or witch in Portuguese folklore called coco (also côca). In the West it was originally called nux indica, a name used by Marco Polo in 1280 while in Sumatra. He took the term from the Arabs, who called it جوز هندي jawz hindī, translating to 'Indian nut'. Thenga, its Tamil/Malayalam name, was used in the detailed description of coconut found in Itinerario by Ludovico di Varthema published in 1510 and also in the later Hortus Indicus Malabaricus. Carl Linnaeus first wanted to name the coconut genus Coccus from latinizing the Portuguese word coco, because he saw works by other botanists in middle of the 17th century use the name as well. He consulted the catalogue Herbarium Amboinense by Georg Eberhard Rumphius where Rumphius said that coccus was a homonym of coccum and coccus from Greek kokkos meaning "grain" or "berry", but Romans identified coccus with "kermes insects"; Rumphius preferred the word cocus as a replacement. 
However, the word cocus could also mean "cook" like coquus in Latin, so Linnaeus chose Cocos directly from the Portuguese word coco instead. The specific name nucifera is derived from the Latin words nux (nut) and fera (bearing), for 'nut-bearing'. Distribution and habitat Coconuts have a nearly cosmopolitan distribution due to human cultivation and dispersal. However, their original distribution was in the Central Indo-Pacific, in the regions of Maritime Southeast Asia and Melanesia. Origin Modern genetic studies have identified the center of origin of coconuts as being the Central Indo-Pacific, the region between western Southeast Asia and Melanesia, where it shows greatest genetic diversity. Their cultivation and spread was closely tied to the early migrations of the Austronesian peoples who carried coconuts as canoe plants to the islands they settled. The similarities of the local names in the Austronesian region is also cited as evidence that the plant originated in the region. For example, the Polynesian and Melanesian term niu; Tagalog and Chamorro term niyog; and the Malay word nyiur or nyior. Other evidence for a Central Indo-Pacific origin is the native range of the coconut crab; and the higher amounts of C. nucifera-specific insect pests in the region (90%) in comparison to the Americas (20%), and Africa (4%). A study in 2011 identified two highly genetically differentiated subpopulations of coconuts, one originating from Island Southeast Asia (the Pacific group) and the other from the southern margins of the Indian subcontinent (the Indo-Atlantic group). The Pacific group is the only one to display clear genetic and phenotypic indications that they were domesticated; including dwarf habit, self-pollination, and the round "niu vai" fruit morphology with larger endosperm-to-husk ratios. The distribution of the Pacific coconuts correspond to regions settled by Austronesian voyagers indicating that its spread was largely the result of human introductions. It is most strikingly displayed in Madagascar, an island settled by Austronesian sailors at around 2000 to 1500 BP. The coconut populations on the island show genetic admixture between the two subpopulations indicating that Pacific coconuts were first brought by the Austronesian settlers, which then interbred with the later Indo-Atlantic coconuts brought by Europeans from India. Genetic studies of coconuts have also confirmed pre-Columbian populations of coconuts in Panama. However, it is not native and displays a genetic bottleneck resulting from a founder effect. A study in 2008 showed that the coconuts in the Americas are genetically closest related to the coconuts in the Philippines, and not to any other nearby coconut populations (including Polynesia). Such an origin indicates that the coconuts were not introduced naturally, such as by sea currents. The researchers concluded that it was brought by early Austronesian sailors to the Americas from at least 2,250 BP, and may be proof of pre-Columbian contact between Austronesian cultures and South American cultures. It is further strengthened by other similar botanical evidence of contact, like the pre-colonial presence of sweet potato in Oceanian cultures. During the colonial era, Pacific coconuts were further introduced to Mexico from the Spanish East Indies via the Manila galleons. In contrast to the Pacific coconuts, Indo-Atlantic coconuts were largely spread by Arab and Persian traders into the East African coast. 
Indo-Atlantic coconuts were also introduced into the Atlantic Ocean by Portuguese ships from their colonies in coastal India and Sri Lanka; first introduced to coastal West Africa, then onwards into the Caribbean and the east coast of Brazil. All of these introductions are within the last few centuries, relatively recent in comparison to the spread of Pacific coconuts. Natural habitat The coconut palm thrives on sandy soils and is highly tolerant of salinity. It prefers areas with abundant sunlight and regular rainfall ( annually), which makes colonizing shorelines of the tropics relatively straightforward. Coconuts also need high humidity (at least 70–80%) for optimum growth, which is why they are rarely seen in areas with low humidity. However, they can be found in humid areas with low annual precipitation such as in Karachi, Pakistan, which receives only about of rainfall per year, but is consistently warm and humid. Coconut palms require warm conditions for successful growth, and are intolerant of cold weather. Some seasonal variation is tolerated, with good growth where mean summer temperatures are between , and survival as long as winter temperatures are above ; they will survive brief drops to . Severe frost is usually fatal, although they have been known to recover from temperatures of . Due to this, there are not many coconut palms in California. They may grow but not fruit properly in areas with insufficient warmth or sunlight, such as Bermuda. The conditions required for coconut trees to grow without any care are: Mean daily temperature above every day of the year Mean annual rainfall above No or very little overhead canopy, since even small trees require direct sun The main limiting factor for most locations which satisfy the rainfall and temperature requirements is canopy growth, except those locations near coastlines, where the sandy soil and salt spray limit the growth of most other trees. Domestication Wild coconuts are naturally restricted to coastal areas in sandy, saline soils. The fruit is adapted for ocean dispersal. Coconuts could not reach inland locations without human intervention (to carry seednuts, plant seedlings, etc.) and early germination on the palm (vivipary) was important. Coconuts today can be grouped into two highly genetically distinct subpopulations: the Indo-Atlantic group originating from southern India and nearby regions (including Sri Lanka, the Laccadives, and the Maldives); and the Pacific group originating from the region between maritime Southeast Asia and Melanesia. Linguistic, archaeological, and genetic evidence all point to the early domestication of Pacific coconuts by the Austronesian peoples in maritime Southeast Asia during the Austronesian expansion (c. 3000 to 1500 BCE). Although archaeological remains dating to 1000 to 500 BCE also suggest that the Indo-Atlantic coconuts were also later independently cultivated by the Dravidian peoples, only Pacific coconuts show clear signs of domestication traits like dwarf habits, self-pollination, and rounded fruits. Indo-Atlantic coconuts, in contrast, all have the ancestral traits of tall habits and elongated triangular fruits. The coconut played a critical role in the migrations of the Austronesian peoples. They provided a portable source of both food and water, allowing Austronesians to survive long sea voyages to colonize new islands as well as establish long-range trade routes. 
Based on linguistic evidence, the absence of words for coconut in the Taiwanese Austronesian languages makes it likely that the Austronesian coconut culture developed only after Austronesians started colonizing the Philippine islands. The importance of the coconut in Austronesian cultures is evidenced by shared terminology of even very specific parts and uses of coconuts, which were carried outwards from the Philippines during the Austronesian migrations. Indo-Atlantic type coconuts were also later spread by Arab and South Asian traders along the Indian Ocean basin, resulting in limited admixture with Pacific coconuts introduced earlier to Madagascar and the Comoros via the ancient Austronesian maritime trade network. Coconuts can be broadly divided into two fruit typesthe ancestral niu kafa form with a thick-husked, angular fruit, and the niu vai form with a thin-husked, spherical fruit with a higher proportion of endosperm. The terms are derived from the Samoan language and was adopted into scientific usage by Harries (1978). The niu kafa form is the wild ancestral type, with thick husks to protect the seed, an angular, highly ridged shape to promote buoyancy during ocean dispersal, and a pointed base that allowed fruits to dig into the sand, preventing them from being washed away during germination on a new island. It is the dominant form in the Indo-Atlantic coconuts. However, they may have also been partially selected for thicker husks for coir production, which was also important in Austronesian material culture as a source for cordage in building houses and boats. The niu vai form is the domesticated form dominant in Pacific coconuts. They were selected for by the Austronesian peoples for their larger endosperm-to-husk ratio as well as higher coconut water content, making them more useful as food and water reserves for sea voyages. The decreased buoyancy and increased fragility of this spherical, thin-husked fruit would not matter for a species that had started to be dispersed by humans and grown in plantations. Niu vai endocarp fragments have been recovered in archaeological sites in the St. Matthias Islands of the Bismarck Archipelago. The fragments are dated to approximately 1000 BCE, suggesting that cultivation and artificial selection of coconuts were already practiced by the Austronesian Lapita people. Coconuts can also be broadly divided into two general types based on habit: the "Tall" (var. typica) and "Dwarf" (var. nana) varieties. The two groups are genetically distinct, with the dwarf variety showing a greater degree of artificial selection for ornamental traits and for early germination and fruiting. The tall variety is outcrossing while dwarf palms are self-pollinating, which has led to a much greater degree of genetic diversity within the tall group. The dwarf coconut cultivars are fully domesticated, in contrast to tall cultivars which display greater diversity in terms of domestication (and lack thereof). The fact that all dwarf coconuts share three genetic markers out of thirteen (which are only present at low frequencies in tall cultivars) makes it likely that they all originate from a single domesticated population. Philippine and Malayan dwarf coconuts diverged early into two distinct types. They usually remain genetically isolated when introduced to new regions, making it possible to trace their origins. Numerous other dwarf cultivars also developed as the initial dwarf cultivar was introduced to other regions and hybridized with various tall cultivars. 
The dwarf varieties originated in Southeast Asia, which contains the tall cultivars that are genetically closest to dwarf coconuts. Sequencing of the genomes of the tall and dwarf varieties revealed that they diverged 2 to 8 million years ago and that the dwarf variety arose through alterations in genes involved in the metabolism of the plant hormone gibberellin. Another ancestral variety is the niu leka of Polynesia (sometimes called the "Compact Dwarfs"). Although it shares similar characteristics with dwarf coconuts (including slow growth), it is genetically distinct and is thus believed to have been independently domesticated, likely in Tonga. Other cultivars of niu leka may also exist in other islands of the Pacific, and some are probably descendants of advanced crosses between Compact Dwarfs and Southeast Asian Dwarf types. Dispersal Coconut fruits in the wild are light, buoyant, and highly water resistant. It is claimed that they evolved to disperse significant distances via marine currents. However, it can also be argued that the placement of the vulnerable eye of the nut (down when floating) and the site of the coir cushion are better positioned to ensure that the water-filled nut does not fracture when dropping on rocky ground, rather than for flotation. It is also often stated that coconuts can travel 110 days, or , by sea and still be able to germinate. This figure has been questioned based on the extremely small sample size that forms the basis of the paper that makes this claim. Thor Heyerdahl provides an alternative, and much shorter, estimate based on his first-hand experience crossing the Pacific Ocean on the raft Kon-Tiki: "The nuts we had in baskets on deck remained edible and capable of germinating the whole way to Polynesia. But we had laid about half among the special provisions below deck, with the waves washing around them. Every single one of these was ruined by the sea water. And no coconut can float over the sea faster than a balsa raft moves with the wind behind it." He also notes that several of the nuts began to germinate by the time they had been ten weeks at sea, precluding an unassisted journey of 100 days or more. Drift models based on wind and ocean currents have shown that coconuts could not have drifted across the Pacific unaided. If they were naturally distributed and had been in the Pacific for a thousand years or so, then we would expect the eastern shore of Australia, with its own islands sheltered by the Great Barrier Reef, to have been thick with coconut palms: the currents were directly into, and down along, this coast. However, both James Cook and William Bligh (the latter put adrift after the Bounty mutiny and in need of water for his crew) found no sign of the nuts along this stretch of coast. Nor were there coconuts on the east side of the African coast until the arrival of Vasco da Gama, nor in the Caribbean when it was first visited by Christopher Columbus. They were commonly carried by Spanish ships as a source of fresh water. These observations provide substantial circumstantial evidence that deliberate Austronesian voyagers were involved in carrying coconuts across the Pacific Ocean and that they could not have dispersed worldwide without human agency. More recently, genomic analysis of cultivated coconut (C. nucifera L.) has shed light on these movements. However, admixture, the transfer of genetic material, evidently occurred between the two populations. Given that coconuts are ideally suited for inter-island group ocean dispersal, obviously some natural distribution did take place.
However, the locations of the admixture events are limited to Madagascar and coastal east Africa, and exclude the Seychelles. This pattern coincides with the known trade routes of Austronesian sailors. Additionally, a genetically distinct subpopulation of coconut on the Pacific coast of Latin America has undergone a genetic bottleneck resulting from a founder effect; however, its ancestral population is the Pacific coconut from the Philippines. This, together with their use of the South American sweet potato, suggests that Austronesian peoples may have sailed as far east as the Americas. In the Hawaiian Islands, the coconut is regarded as a Polynesian introduction, first brought to the islands by early Polynesian voyagers (also Austronesians) from their homelands in the southern islands of Polynesia. Specimens have been collected from the sea as far north as Norway (but it is not known where they entered the water). They have been found in the Caribbean and the Atlantic coasts of Africa and South America for less than 500 years (the Caribbean native inhabitants do not have a dialect term for them, but use the Portuguese name), but evidence of their presence on the Pacific coast of South America antedates Columbus's arrival in the Americas. They are now almost ubiquitous between 26° N and 26° S except for the interiors of Africa and South America. The 2014 coral atoll origin hypothesis proposed that the coconut had dispersed in an island hopping fashion using the small, sometimes transient, coral atolls. It noted that by using these small atolls, the species could easily island-hop. Over the course of evolutionary time-scales the shifting atolls would have shortened the paths of colonization, meaning that any one coconut would not have to travel very far to find new land. Ecology Coconuts are susceptible to the phytoplasma disease, lethal yellowing. One recently selected cultivar, the 'Maypan', has been bred for resistance to this disease. Yellowing diseases affect plantations in Africa, India, Mexico, the Caribbean and the Pacific Region. Konan et al., 2007 explains much resistance with a few alleles at a few microsatellites. They find that 'Vanuatu Tall' and 'Sri-Lanka Green Dwarf' are the most resistant while 'West African Tall' breeds are especially susceptible. The coconut palm is damaged by the larvae of many Lepidoptera (butterfly and moth) species which feed on it, including the African armyworm (Spodoptera exempta) and Batrachedra spp.: B. arenosella, B. atriloqua (feeds exclusively on C. nucifera), B. mathesoni (feeds exclusively on C. nucifera), and B. nuciferae. Brontispa longissima (coconut leaf beetle) feeds on young leaves, and damages both seedlings and mature coconut palms. In 2007, the Philippines imposed a quarantine in Metro Manila and 26 provinces to stop the spread of the pest and protect the Philippine coconut industry managed by some 3.5 million farmers. The fruit may also be damaged by eriophyid coconut mites (Eriophyes guerreronis). This mite infests coconut plantations, and is devastating; it can destroy up to 90% of coconut production. The immature seeds are infested and desapped by larvae staying in the portion covered by the perianth of the immature seed; the seeds then drop off or survive deformed. Spraying with wettable sulfur 0.4% or with Neem-based pesticides can give some relief, but is cumbersome and labor-intensive. In Kerala, India, the main coconut pests are the coconut mite, the rhinoceros beetle, the red palm weevil, and the coconut leaf caterpillar. 
Research into countermeasures to these pests has yet to yield results; researchers from the Kerala Agricultural University and the Central Plantation Crops Research Institute, Kasaragod, continue to work on countermeasures. The Krishi Vigyan Kendra at Kannur, under Kerala Agricultural University, has developed an innovative extension approach called the compact area group approach to combat coconut mites. Cultivation Coconut palms are normally cultivated in hot and wet tropical climates. They need year-round warmth and moisture to grow well and fruit. Coconut palms are hard to establish in dry climates and cannot grow there without frequent irrigation; in drought conditions, the new leaves do not open well, and older leaves may become desiccated; fruit also tends to be shed. The extent of cultivation in the tropics is threatening a number of habitats, such as mangroves; an example of such damage to an ecoregion is in the Petenes mangroves of the Yucatán. Unusually among cultivated plants, coconut trees can be irrigated with sea water, although this is recommended only for plants that are over 2 years old. Cultivars Coconut has a number of commercial and traditional cultivars. They can be sorted mainly into tall cultivars, dwarf cultivars, and hybrid cultivars (hybrids between talls and dwarfs). Some of the dwarf cultivars such as 'Malayan dwarf' have shown some promising resistance to lethal yellowing, while other cultivars such as 'Jamaican tall' are highly affected by the same plant disease. Some cultivars, such as 'West coast tall' (India), are more drought resistant, while others, such as 'Hainan Tall' (China), are more cold tolerant. Other aspects such as seed size, shape and weight, and copra thickness are also important factors in the selection of new cultivars. Some cultivars such as 'Fiji dwarf' form a large bulb at the lower stem, and others are cultivated to produce very sweet coconut water with orange-colored husks (king coconut), sold in fruit stalls almost entirely for drinking (Sri Lanka, India). Harvesting The two most common harvesting methods are the climbing method and the pole method. Climbing is the most widespread, but it is also more dangerous and requires skilled workers. Manually climbing trees is traditional in most countries and requires a specific posture that exerts pressure on the trunk with the feet. Climbers employed on coconut plantations often develop musculoskeletal disorders and risk severe injury or death from falling. To avoid this, coconut workers in the Philippines and Guam traditionally use bolos tied with a rope to the waist to cut grooves at regular intervals on the coconut trunks. This effectively turns the trunk of the tree into a ladder, though it reduces the value of coconut timber recovered from the trees and can be an entry point for infection. Other manual methods to make climbing easier include using a system of pulleys and ropes; using pieces of vine, rope, or cloth tied to both hands or feet; using spikes attached to the feet or legs; or attaching coconut husks to the trunk with ropes. Modern methods use hydraulic elevators mounted on tractors or ladders. Mechanical coconut climbing devices and even automated robots have also been developed recently in countries like India, Sri Lanka, and Malaysia. The pole method uses a long pole with a cutting device at the end. In the Philippines, the traditional tool is known as the halabas and is made from a long bamboo pole with a sickle-like blade mounted at the tip.
Though the pole method is safer and faster than the climbing method, its main disadvantage is that it does not allow workers to examine and clean the crown of coconuts for pests and diseases. Determining whether to harvest is also important. Gatchalian et al. (1994) developed a sonometry technique for precisely determining the stage of ripeness of young coconuts. A system of bamboo bridges and ladders directly connecting the tree canopies is also utilized in the Philippines for coconut plantations that harvest coconut sap (not fruits) for coconut vinegar and palm wine production. In other areas, such as Papua New Guinea, coconuts are simply collected when they fall to the ground. A more controversial method, employed by a small number of coconut farmers in Thailand and Malaysia, uses trained pig-tailed macaques to harvest coconuts. Thailand has been raising and training pig-tailed macaques to pick coconuts for around 400 years. Training schools for pig-tailed macaques still exist both in southern Thailand and in the Malaysian state of Kelantan. The practice of using macaques to harvest coconuts was exposed in Thailand by People for the Ethical Treatment of Animals (PETA) in 2019, resulting in calls for boycotts of coconut products. PETA later clarified that the use of macaques is not practiced in the Philippines, India, Brazil, Colombia, Hawaii, and other major coconut-producing regions. Substitutes for cooler climates In cooler climates (but not less than USDA Zone 9), a similar palm, the queen palm (Syagrus romanzoffiana), is used in landscaping. Its fruits are similar to coconut, but smaller. The queen palm was originally classified in the genus Cocos along with the coconut, but was later reclassified in Syagrus. A recently discovered palm, Beccariophoenix alfredii from Madagascar, is nearly identical to the coconut, more so than the queen palm, and can also be grown in slightly cooler climates than the coconut palm. Coconuts can only be grown in temperatures above and need a daily temperature above to produce fruit. Production In 2022, world production of coconuts was 62 million tonnes, led by Indonesia, India, and the Philippines, which together accounted for 73% of the total. Indonesia Indonesia is the world's largest producer of coconuts, with a gross production of 15 million tonnes. Philippines The Philippines is the world's second-largest producer of coconuts. It was the world's largest producer for decades until a decline in production due to aging trees as well as typhoon damage; Indonesia overtook it in 2010. It is still the largest producer of coconut oil and copra, accounting for 64% of global production. The production of coconuts plays an important role in the economy, with 25% of cultivated land (around 3.56 million hectares) used for coconut plantations and approximately 25 to 33% of the population reliant on coconuts for their livelihood. Two important coconut products were first developed in the Philippines: macapuno and nata de coco. Macapuno is a coconut variety with a jelly-like coconut meat. Its meat is sweetened, cut into strands, and sold in glass jars as coconut strings, sometimes labeled as "coconut sport". Nata de coco, also called coconut gel, is another jelly-like coconut product, made from fermented coconut water. India Traditional areas of coconut cultivation in India are the states of Kerala, Tamil Nadu, Karnataka, Puducherry, Andhra Pradesh, Goa, Maharashtra, Odisha, West Bengal, and Gujarat, and the islands of Lakshadweep and Andaman and Nicobar.
According to 2014–15 statistics from the Coconut Development Board of the Government of India, four southern states combined account for almost 90% of the total production in the country: Tamil Nadu (33.8%), Karnataka (25.2%), Kerala (24.0%), and Andhra Pradesh (7.2%). Other states, such as Goa, Maharashtra, Odisha, West Bengal, and those in the northeast (Tripura and Assam), account for the remaining production. Though Kerala has the largest number of coconut trees, in terms of production per hectare, Tamil Nadu leads all other states. In Tamil Nadu, the Coimbatore and Tirupur regions top the production list. The coconut tree is the official state tree of Kerala, India. In Goa, the coconut tree has been reclassified by the government as a palm (rather than a tree), enabling farmers and developers to clear land with fewer restrictions and without needing permission from the forest department before cutting a coconut tree. Middle East The main coconut-producing area in the Middle East is the Dhofar region of Oman, but coconut palms can be grown all along the Persian Gulf, Arabian Sea, and Red Sea coasts, because these seas are tropical and provide enough humidity (through seawater evaporation) for coconut trees to grow. The young coconut plants need to be nursed and irrigated with drip pipes until they are old enough (stem bulb development) to be irrigated with brackish water or seawater alone, after which they can be replanted on the beaches. In particular, the area around Salalah maintains large coconut plantations similar to those found across the Arabian Sea in Kerala. The reasons why coconuts are cultivated only in Yemen's Al Mahrah and Hadramaut governorates and in the Sultanate of Oman, but not in other suitable areas in the Arabian Peninsula, may originate from the fact that Oman and Hadramaut had long-standing dhow trade relations with Burma, Malaysia, Indonesia, East Africa, and Zanzibar, as well as southern India and China. Omani people needed the coir rope from the coconut fiber to stitch together their traditional seagoing dhow vessels, in which nails were never used. The know-how of coconut cultivation and the necessary soil fixation and irrigation may have found its way into Omani, Hadrami and Al-Mahra culture through people who returned from those overseas areas. The ancient coconut groves of Dhofar were mentioned by the medieval Moroccan traveller Ibn Battuta in his writings, known as Al Rihla. The annual rainy season, known locally as khareef or monsoon, makes coconut cultivation easy on the Arabian east coast. Coconut trees are also increasingly grown for decorative purposes along the coasts of the United Arab Emirates and Saudi Arabia with the help of irrigation. The UAE has, however, imposed strict laws on mature coconut tree imports from other countries to reduce the spread of pests to other native palm trees, as the mixing of date and coconut trees poses a risk of cross-species palm pests, such as rhinoceros beetles and red palm weevils. The artificial landscaping may have been the cause of lethal yellowing, a phytoplasma disease of coconut palms that leads to the death of the tree. It is spread by host insects that thrive on heavy turf grasses. Therefore, heavy turf grass environments (beach resorts and golf courses) also pose a major threat to local coconut trees. Traditionally, dessert banana plants and local wild beach flora such as Scaevola taccada and Ipomoea pes-caprae were used as humidity-supplying green undergrowth for coconut trees, mixed with sea almond and sea hibiscus.
Due to growing sedentary lifestyles and heavy-handed landscaping, a decline in these traditional farming and soil-fixing techniques has occurred. Sri Lanka Sri Lanka is the world's fourth-largest producer of coconuts and the second-largest producer of coconut oil and copra, accounting for 15% of global production. Coconut production is a mainstay of the Sri Lankan economy, with 12% of the country's cultivated land (409,244 hectares as of 2017) used for coconut growing. Sri Lanka established its Coconut Development Authority, Coconut Cultivation Board, and Coconut Research Institute in the early British Ceylon period. United States In the United States, coconut palms can be grown and reproduced outdoors without irrigation in Hawaii, southern and central Florida, and the territories of Puerto Rico, Guam, American Samoa, the U.S. Virgin Islands, and the Northern Mariana Islands. Coconut palms are also periodically successful in the Lower Rio Grande Valley region of southern Texas and in other microclimates in the southwest. In Florida, wild populations of coconut palms extend up the East Coast from Key West to Jupiter Inlet, and up the West Coast from Marco Island to Sarasota. Many of the smallest coral islands in the Florida Keys are known to have abundant coconut palms sprouting from coconuts that have drifted or been deposited by ocean currents. Coconut palms are cultivated north of South Florida to roughly Cocoa Beach on the East Coast and Clearwater on the West Coast. Australia Coconuts are commonly grown around the northern coast of Australia, and in some warmer parts of New South Wales. However, they are mainly present as decoration, and the Australian coconut industry is small; Australia is a net importer of coconut products. Australian cities put much effort into de-fruiting decorative coconut trees to ensure that mature coconuts do not fall and injure people. Allergens Food Coconut oil is increasingly used in the food industry. Proteins from coconut may cause allergic reactions, including anaphylaxis, in some people. In the United States, the Food and Drug Administration declared that coconut must be disclosed as an ingredient on package labels as a "tree nut" with potential allergenicity. Topical Cocamidopropyl betaine (CAPB) is a surfactant manufactured from coconut oil that is increasingly used as an ingredient in personal hygiene products and cosmetics, such as shampoos, liquid soaps, cleansers and antiseptics, among others. CAPB may cause mild skin irritation, but allergic reactions to CAPB are rare and are probably related to impurities generated during the manufacturing process (which include amidoamine and dimethylaminopropylamine) rather than to CAPB itself. Uses The coconut palm is grown throughout the tropics for decoration, as well as for its many culinary and nonculinary uses; virtually every part of the coconut palm can be used by humans in some manner and has significant economic value. The coconut's versatility is sometimes noted in its naming. In Sanskrit, it is kalpa vriksha ("the tree which provides all the necessities of life"). In the Malay language, it is pokok seribu guna ("the tree of a thousand uses"). In the Philippines, the coconut is commonly called the "tree of life". It is one of the most useful trees in the world. Culinary Nutrition A reference serving of raw coconut flesh supplies of food energy and a high amount of total fat (33 grams), especially saturated fat (89% of total fat), along with a moderate quantity of carbohydrates (15 g) and protein (3 g).
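As a rough cross-check of the macronutrient figures just quoted (an illustrative calculation only, not taken from the source, and assuming the reference serving is 100 g, which is not stated above), standard Atwater factors can be applied:

```python
# Illustrative only: estimate the food energy of an assumed 100 g serving of raw
# coconut flesh from the macronutrient figures quoted above, using the standard
# Atwater factors of 9 kcal/g for fat and 4 kcal/g for carbohydrate and protein.
fat_g, carb_g, protein_g = 33, 15, 3
kcal = fat_g * 9 + carb_g * 4 + protein_g * 4
print(f"approx. {kcal} kcal per serving")   # ~369 kcal
# Published values are somewhat lower, since part of the carbohydrate is fiber,
# which contributes less usable energy than the 4 kcal/g factor assumes.
```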
Micronutrients in significant content (more than 10% of the Daily Value) include the dietary minerals manganese, copper, iron, phosphorus, selenium, and zinc. Coconut meat The edible white, fleshy part of the seed (the endosperm) is known as the "coconut meat", "coconut flesh", or "coconut kernel". In the coconut industry, coconut meat can be classified loosely into three different types depending on maturity: namely "Malauhog", "Malakanin", and "Malakatad". The terminology is derived from the Tagalog language. Malauhog (literally "mucus-like") refers to very young coconut meat (around 6 to 7 months old) which has a translucent appearance and a gooey texture that disintegrates easily. Malakanin (literally "cooked rice-like") refers to young coconut meat (around 7–8 months old) which has a more opaque white appearance, a soft texture similar to cooked rice, and can still be easily scraped off the coconut shell. Malakatad (literally "leather-like") refers to fully mature coconut meat (around 8 to 9 months old) which has an opaque white appearance and a tough rubbery to leathery texture, and is difficult to separate from the shell. Maturity is difficult to assess on an unopened coconut, and there is no technically proven method for determining it. Based on color and size, younger coconuts tend to be smaller and have brighter colors, while more mature coconuts are larger and have browner colors. Maturity can also be judged traditionally by tapping on the coconut fruit: Malauhog has a "solid" sound when tapped, while Malakanin and Malakatad produce a "hollow" sound. Another method is to shake the coconut. Immature coconuts produce a sloshing sound when shaken (the sharper the sound, the younger it is), while fully mature coconuts do not. Both "Malauhog" and "Malakanin" meats of immature coconuts can be eaten as is or used in salads, drinks, desserts, and pastries such as buko pie and es kelapa muda. Because of their soft textures, they are unsuitable for grating. Mature Malakatad coconut meat has a tough texture and thus is processed before consumption or made into copra. Freshly shredded mature coconut meat, known as "grated coconut", "shredded coconut", or "coconut flakes", is used in the extraction of coconut milk. It is also used as a garnish for various dishes, as in klepon and puto bumbong. It can also be cooked in sugar and eaten as a dessert known in the Philippines as bukayo. Grated coconut that is dehydrated by drying or baking is known as "desiccated coconut". It contains less than 3% of the original moisture content of coconut meat. It is predominantly used in the bakery and confectionery industries (especially in non-coconut-producing countries) because of its longer shelf life compared to freshly grated coconut. Desiccated coconut is used in confections and desserts such as macaroons. Dried coconut is also used as the filling for many chocolate bars. Some dried coconut is purely coconut, but others are manufactured with other ingredients, such as sugar, propylene glycol, salt, and sodium metabisulfite. Coconut meat can also be cut into larger pieces or strips, dried, and salted to make "coconut chips" or "coco chips". These can be toasted or baked to make bacon-like fixings. Macapuno A special cultivar of coconut known as macapuno produces a large amount of jelly-like coconut meat. Its meat fills the entire interior of the coconut shell, rather than just the inner surfaces.
It was first developed for commercial cultivation in the Philippines and is used widely in Philippine cuisine for desserts, drinks, and pastries. It is also popular in Indonesia (where it is known as kopyor) for making beverages. Coconut milk Coconut milk, not to be confused with coconut water, is obtained by pressing the grated coconut meat, usually with hot water added which extracts the coconut oil, proteins, and aromatic compounds. It is used for cooking various dishes. Coconut milk contains 5% to 20% fat, while coconut cream contains around 20% to 50% fat. Most of the fat is saturated (89%), with lauric acid being the major fatty acid. Coconut milk can be diluted to create coconut milk beverages. These have a much lower fat content and are suitable as milk substitutes. Coconut milk powder, a protein-rich powder, can be processed from coconut milk following centrifugation, separation, and spray drying. Coconut milk and coconut cream extracted from grated coconut is often added to various desserts and savory dishes, as well as in curries and stews. It can also be diluted into a beverage. Various other products made from thickened coconut milk with sugar and/or eggs like coconut jam and coconut custard are also widespread in Southeast Asia. In the Philippines, sweetened reduced coconut milk is marketed as coconut syrup and is used for various desserts. Coconut oil extracted from coconut milk or copra is also used for frying, cooking, and making margarine, among other uses. Coconut water Coconut water serves as a suspension for the endosperm of the coconut during its nuclear phase of development. Later, the endosperm matures and deposits onto the coconut rind during the cellular phase. The water is consumed throughout the humid tropics, and has been introduced into the retail market as a processed sports drink. Mature fruits have significantly less liquid than young, immature coconuts, barring spoilage. Coconut water can be fermented to produce coconut vinegar. Per 100-gram serving, coconut water contains 19 calories and no significant content of essential nutrients. Coconut water can be drunk fresh or used in cooking as in binakol. It can also be fermented to produce a jelly-like dessert known as nata de coco. Coconut flour Coconut flour has also been developed for use in baking, to combat malnutrition. Sprouted coconut Newly germinated coconuts contain a spherical edible mass known as the sprouted coconut or coconut sprout. It has a crunchy watery texture and a slightly sweet taste. It is eaten as is or used as an ingredient in various dishes. It is produced as the endosperm nourishes the developing embryo. It is a haustorium, a spongy absorbent tissue formed from the distal part of embryo during coconut germination, which facilitates absorption of nutrients for the growing shoot and root. Heart of palm Apical buds of adult plants are edible, and are known as "palm cabbage" or heart of palm. They are considered a rare delicacy, as harvesting the buds kills the palms. Hearts of palm are eaten in salads, sometimes called "millionaire's salad". Toddy and sap The sap derived from incising the flower clusters of the coconut is drunk as toddy, also known as tubâ in the Philippines (both fermented and fresh), tuak (Indonesia and Malaysia), karewe (fresh and not fermented, collected twice a day, for breakfast and dinner) in Kiribati, and neera in South Asia. When left to ferment on its own, it becomes palm wine. Palm wine is distilled to produce arrack. 
In the Philippines, this alcoholic drink is called lambanog (historically also called vino de coco in Spanish) or "coconut vodka". The sap can be reduced by boiling to create a sweet syrup or candy such as te kamamai in Kiribati or dhiyaa hakuru and addu bondi in the Maldives. It can be reduced further to yield coconut sugar, also referred to as palm sugar or jaggery. A young, well-maintained tree can produce around of toddy per year, while a 40-year-old tree may yield around . Coconut sap, usually extracted from cut inflorescence stalks, is sweet when fresh and can be drunk as is, as in the tuba fresca of Mexico (derived from the Philippine tubâ). It can also be processed to extract palm sugar. When fermented, the sap can also be made into coconut vinegar or various palm wines (which can be further distilled to make arrack). Coconut vinegar Coconut vinegar, made from fermented coconut water or sap, is used extensively in Southeast Asian cuisine (notably the Philippines, where it is known as sukang tuba), as well as in some cuisines of India and Sri Lanka, especially Goan cuisine. A cloudy white liquid, it has a particularly sharp, acidic taste with a slightly yeasty note. Coconut oil Coconut oil is commonly used in cooking, especially for frying. It can be used in liquid form as would other vegetable oils, or in solid form similar to butter or lard. Long-term consumption of coconut oil may have negative health effects similar to those from consuming other sources of saturated fats, including butter, beef fat, and palm oil. Its chronic consumption may increase the risk of cardiovascular diseases by raising total blood cholesterol levels through elevated blood levels of LDL cholesterol and lauric acid. Coconut butter The term coconut butter is often used to describe solidified coconut oil, but it has also been adopted as an alternative name for creamed coconut, a specialty product made of coconut milk solids or puréed coconut meat and oil. It has a creamy, spreadable consistency reminiscent of peanut butter, albeit a little richer. Copra Copra is the dried meat of the seed, which after processing yields coconut oil and coconut meal. Coconut oil, aside from being used in cooking as an ingredient and for frying, is used in soaps, cosmetics, hair oil, and massage oil. Coconut oil is also a main ingredient in Ayurvedic oils. In Vanuatu, coconut palms for copra production are generally spaced apart, allowing a tree density of . It takes around 6,000 full-grown coconuts to produce one tonne of copra. Husks and shells The husk and shells can be used for fuel and are a source of charcoal. Activated carbon manufactured from coconut shell is considered extremely effective for the removal of impurities. The coconut's obscure origin in foreign lands led to the notion of using cups made from the shell to neutralise poisoned drinks. Coconut cups were frequently carved with scenes in relief and mounted with precious metals. The husks can be used as flotation devices. As an abrasive, a dried half coconut shell with husk can be used to buff floors. It is known as a bunot in the Philippines and simply a "coconut brush" in Jamaica. The fresh husk of a brown coconut may serve as a dish sponge or body sponge. Coconut cups, often with highly decorated mounts in precious metals, were an exotic luxury in medieval and Early Modern Europe, and were also thought to have medical benefits.
A coco chocolatero was a simpler type of cup used to serve small quantities of beverages (such as chocolate drinks) between the 17th and 19th centuries in countries such as Mexico, Guatemala, and Venezuela. In Asia, coconut shells are also used as bowls and in the manufacture of various handicrafts, including buttons carved from the dried shell. Coconut buttons are often used for Hawaiian aloha shirts. Tempurung, as the shell is called in the Malay language, can be used as a soup bowl and, if fixed with a handle, a ladle. In Thailand, the coconut husk is used as a potting medium to produce healthy forest tree saplings. The extraction of the husk fiber bypasses the retting process, using a custom-built coconut husk extractor designed by the ASEAN–Canada Forest Tree Seed Centre in 1986. Fresh husks contain more tannin than old husks, and tannin produces negative effects on sapling growth. The shell and husk can be burned for smoke to repel mosquitoes, and are used in parts of South India for this purpose. Half coconut shells are used in theatre Foley sound-effects work, struck together to create the sound effect of a horse's hoofbeats. Dried half shells are used as the bodies of musical instruments, including the Chinese yehu and banhu, along with the Vietnamese đàn gáo and Arabo-Turkic rebab. In the Philippines, dried half shells are also used as a musical instrument in a folk dance called maglalatik. The shell, freed from the husk and heated on warm ashes, exudes an oily material that is used to soothe dental pains in the traditional medicine of Cambodia. In World War II, coastwatcher scout Biuku Gasa was the first of two from the Solomon Islands to reach the shipwrecked and wounded crew of Motor Torpedo Boat PT-109, commanded by future U.S. president John F. Kennedy. Gasa suggested, for lack of paper, delivering by dugout canoe a message inscribed on a husked coconut shell, reading "Nauru Isl commander / native knows posit / he can pilot / 11 alive need small boat / Kennedy." This coconut was later kept on the president's desk, and is now in the John F. Kennedy Library. The Philippine Coast Guard used unconventional coconut-husk booms to clean up the oil slick in the 2024 Manila Bay oil spill. Coir Coir (the fiber from the husk of the coconut) is used in ropes, mats, doormats, brushes, and sacks, as caulking for boats, and as stuffing fiber for mattresses. It is used in horticulture in potting compost, especially in orchid mix. Coir is also used to make brooms in Cambodia. Leaves The stiff midribs of coconut leaves are used for making brooms in India, Indonesia (sapu lidi), Malaysia, the Maldives, and the Philippines (walis tingting). The green of the leaves (lamina) is stripped away, leaving the veins (long, thin, woodlike strips) which are tied together to form a broom or brush. A long handle made from some other wood may be inserted into the base of the bundle and used as a two-handed broom. The leaves also provide material for baskets that can draw well water and for roofing thatch; they can be woven into mats, cooking skewers, and kindling arrows as well. Leaves are also woven into small pouches that are filled with rice and cooked to make pusô and ketupat. Dried coconut leaves can be burned to ash, which can be harvested for lime. In India, woven coconut leaves are used to build wedding marquees, especially in the states of Kerala, Karnataka, and Tamil Nadu.
The leaves are used for thatching houses, or for decorating climbing frames and meeting rooms in Cambodia, where the plant is known as dôô:ng. Timber Coconut trunks are used for building small bridges and huts; they are preferred for their straightness, strength, and salt resistance. In Kerala, coconut trunks are used for house construction. Coconut timber comes from the trunk, and is increasingly being used as an ecologically sound substitute for endangered hardwoods. It has applications in furniture and specialized construction, as notably demonstrated in Manila's Coconut Palace. Hawaiians hollowed out the trunk to form drums, containers, or small canoes. The "branches" (leaf petioles) are strong and flexible enough to make a switch. The use of coconut branches in corporal punishment was revived in the Gilbertese community on Choiseul in the Solomon Islands in 2005. Roots The roots are used as a dye, a mouthwash, and a folk medicine for diarrhea and dysentery. A frayed piece of root can also be used as a toothbrush. In Cambodia, the roots are used in traditional medicine as a treatment for dysentery. Other uses The leftover fiber from coconut oil and coconut milk production, coconut meal, is used as livestock feed. The dried calyx is used as fuel in wood-fired stoves. Coconut water is traditionally used as a growth supplement in plant tissue culture and micropropagation. The smell of coconuts comes from the 6-pentyloxan-2-one molecule, known as δ-decalactone in the food and fragrance industries. Tool and shelter for animals Researchers from the Melbourne Museum in Australia observed the octopus species Amphioctopus marginatus use tools, specifically coconut shells, for defense and shelter. The discovery of this behavior was observed in Bali and North Sulawesi in Indonesia between 1998 and 2008. Amphioctopus marginatus is the first invertebrate known to be able to use tools. A coconut can be hollowed out and used as a home for a rodent or a small bird. Halved, drained coconuts can also be hung up as bird feeders, and after the flesh has gone, can be filled with fat in winter to attract tits. In culture The coconut was a critical food item for the people of Polynesia, and the Polynesians brought it with them as they spread to new islands. In the Ilocos region of the northern Philippines, the Ilocano people fill two halved coconut shells with diket (cooked sweet rice), and place liningta nga itlog (halved boiled egg) on top of it. This ritual, known as niniyogan, is an offering made to the deceased and one's ancestors. This accompanies the palagip (prayer to the dead). A coconut () is an essential element of rituals in Hindu tradition. Often it is decorated with bright metal foils and other symbols of auspiciousness. It is offered during worship to a Hindu god or goddess. Narali Purnima is celebrated on a full moon day which usually signifies the end of monsoon season in India. The word Narali is derived from naral implying "coconut" in Marathi. Fishermen give an offering of coconut to the sea to celebrate the beginning of a new fishing season. Irrespective of their religious affiliations, fishermen of India often offer it to the rivers and seas in the hopes of having bountiful catches. Hindus often initiate the beginning of any new activity by breaking a coconut to ensure the blessings of the gods and successful completion of the activity. The Hindu goddess of well-being and wealth, Lakshmi, is often shown holding a coconut. 
In the foothills of the temple town of Palani, before going to worship Murugan for the Ganesha, coconuts are broken at a place marked for the purpose. Every day, thousands of coconuts are broken, and some devotees break as many as 108 coconuts at a time as per the prayer. Coconuts are also used in Hindu weddings as a symbol of prosperity. The flowers are sometimes used in wedding ceremonies in Cambodia. The Zulu Social Aid and Pleasure Club of New Orleans traditionally throws hand-decorated coconuts, one of the most valuable Mardi Gras souvenirs, to parade revelers. The tradition began in the 1910s and has continued since. In 1987, a "coconut law" was signed by Governor Edwin Edwards exempting from insurance liability any decorated coconut "handed" from a Zulu float. The coconut is also used as a target and prize in the traditional British fairground game coconut shy. The player buys some small balls which are then thrown as hard as possible at coconuts balanced on sticks. The aim is to knock a coconut off the stand and win it. The coconut was the main food of adherents of the now-defunct Vietnamese religion Đạo Dừa. Myths and legends Some South Asian, Southeast Asian, and Pacific Ocean cultures have origin myths in which the coconut plays the main role. In the Hainuwele myth from Maluku, a girl emerges from the blossom of a coconut tree. In Maldivian folklore, one of the main myths of origin reflects the dependence of the Maldivians on the coconut tree. In the story of Sina and the Eel, the origin of the coconut is related in the tale of the beautiful woman Sina burying an eel, which eventually became the first coconut. According to urban legend, more deaths are caused by falling coconuts than by sharks annually. Historical records Literary evidence from the Ramayana and Sri Lankan chronicles indicates that the coconut was present in the Indian subcontinent before the 1st century BCE. The earliest direct description is given by Cosmas Indicopleustes in his Topographia Christiana, written around 545, in which he refers to it as "the great nut of India". Another early mention of the coconut dates back to the "One Thousand and One Nights" story of Sinbad the Sailor, wherein he bought and sold a coconut during his fifth voyage. In March 1521, a description of the coconut was given by Antonio Pigafetta, writing in Italian and using the words "cocho"/"cochi", as recorded in his journal after the first European crossing of the Pacific Ocean during the Magellan circumnavigation, when the expedition met the inhabitants of what would become known as Guam and the Philippines. He explained how at Guam "they eat coconuts" ("mangiano cochi") and that the natives there also "anoint the body and the hair with coconut and beniseed oil" ("ongieno el corpo et li capili co oleo de cocho et de giongioli"). In politics United States Vice President Kamala Harris said during a May 2023 White House ceremony "You think you just fell out of a coconut tree?", which became a meme among her supporters during her unsuccessful 2024 presidential campaign. See also
Domesticated plants and animals of Austronesia
Central Plantation Crops Research Institute
Coconut production in Kerala
Coir Board of India
List of coconut dishes
List of dishes made using coconut milk
Ravanahatha, a musical instrument sometimes made of coconuts
Voanioala gerardii (forest coconut), the closest relative of the modern coconut
51,357
https://en.wikipedia.org/wiki/Polychlorinated%20dibenzodioxins
Polychlorinated dibenzodioxins (PCDDs), or simply dioxins, are a group of long-lived polyhalogenated organic compounds that are primarily anthropogenic, and contribute toxic, persistent organic pollution in the environment. They are commonly but inaccurately referred to as dioxins for simplicity, because every PCDD molecule contains a dibenzo-1,4-dioxin skeletal structure, with 1,4-dioxin as the central ring. Members of the PCDD family bioaccumulate in humans and wildlife because of their lipophilic properties, and may cause developmental disturbances and cancer. Because dioxins can persist in the environment for more than 100 years, the majority of PCDD pollution today is not the result of recent emissions, but the cumulative result of synthetic processes undertaken since the beginning of the 20th century, including organochloride-related manufacturing, incineration of chlorine-containing substances such as polyvinyl chloride (PVC), and chlorine bleaching of paper. Forest fires and volcanic eruptions have also been cited as an airborne source, although their contribution to the current levels of PCDD accumulation are minor in comparison. Incidents of dioxin poisoning resulting from industrial emissions and accidents were first recorded as early as the mid 19th century during the Industrial Revolution. The word "dioxins" may also refer to other similarly acting chlorinated compounds (see Dioxins and dioxin-like compounds). Chemical structure of dibenzo-1,4-dioxins The structure of dibenzo-1,4-dioxin consists of two benzene rings joined by two oxygen bridges. This makes the compound an aromatic diether. The name dioxin formally refers to the central dioxygenated ring, which is stabilized by the two flanking benzene rings. In PCDDs, chlorine atoms are attached to this structure at any of 8 different places on the molecule, at positions 1–4 and 6–9. There are 75 different PCDD congeners (that is, related dioxin compounds). The toxicity of PCDDs depends on the number and positions of the chlorine atoms. Congeners that have chlorine in the 2, 3, 7, and 8 positions have been found to be significantly toxic. In fact, 7 congeners have chlorine atoms in the relevant positions which were considered toxic by the World Health Organization toxic equivalent (WHO-TEQ) scheme. Historical perspective Low concentrations of dioxins existed in nature prior to industrialization as a result of natural combustion and geological processes. Dioxins were first unintentionally produced as by-products from 1848 onwards as Leblanc process plants started operating in Germany. The first intentional synthesis of chlorinated dibenzodioxin was in 1872. Today, concentrations of dioxins are found in all humans, with higher levels commonly found in persons living in more industrialized countries. The most toxic dioxin, 2,3,7,8-tetrachlorodibenzodioxin (TCDD), became well known as a contaminant of Agent Orange, a herbicide used in the Malayan Emergency and the Vietnam War. Later, dioxins were found in Times Beach, Missouri and Love Canal, New York and Seveso, Italy. More recently, dioxins have been in the news with the poisoning of President Viktor Yushchenko of Ukraine in 2004, the Naples Mozzarella Crisis, the 2008 Irish pork crisis, and the German feed incident of 2010. Sources of dioxins The United States Environmental Protection Agency inventory of sources of dioxin-like compounds is possibly the most comprehensive review of the sources and releases of dioxins, but other countries now have substantial research as well. 
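As an aside, the congener counts quoted under Chemical structure above (75 PCDD congeners in total, of which 7 carry chlorine at all of the 2, 3, 7, and 8 positions) can be reproduced by brute-force enumeration. The sketch below is illustrative only: it assumes the standard 1–4/6–9 position numbering and the planar molecule's three two-fold symmetry operations, under which two chlorination patterns that map onto each other describe the same congener.

```python
# Illustrative sketch: count distinct PCDD congeners by enumerating chlorination
# patterns over positions 1-4 and 6-9 and collapsing patterns that are related
# by the molecule's symmetry operations (assumed position mappings listed below).
from itertools import product

positions = (1, 2, 3, 4, 6, 7, 8, 9)
symmetries = [
    {p: p for p in positions},                          # identity
    {1: 9, 9: 1, 2: 8, 8: 2, 3: 7, 7: 3, 4: 6, 6: 4},   # flip about the axis through the oxygens
    {1: 4, 4: 1, 2: 3, 3: 2, 6: 9, 9: 6, 7: 8, 8: 7},   # flip about the long in-plane axis
    {1: 6, 6: 1, 2: 7, 7: 2, 3: 8, 8: 3, 4: 9, 9: 4},   # 180 degree rotation in the plane
]

congeners = set()
for pattern in product((0, 1), repeat=8):               # every possible chlorination pattern
    occupied = {p for p, on in zip(positions, pattern) if on}
    if not occupied:
        continue                                        # skip the unchlorinated parent compound
    # Canonical representative of the pattern's symmetry orbit:
    canonical = min(tuple(sorted(s[p] for p in occupied)) for s in symmetries)
    congeners.add(canonical)

print(len(congeners))                                           # 75 congeners
print(len([c for c in congeners if {2, 3, 7, 8} <= set(c)]))    # 7 congeners chlorinated at 2,3,7,8
```

Restricting the enumeration to patterns that include all four lateral positions recovers the 7 congeners that the WHO scheme treats as significantly toxic.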
Occupational exposure is an issue for some in the chemical industries, historically for those making chlorophenols or chlorophenoxy acid herbicides, or in the application of chemicals, notably herbicides. In many developed nations there are now emissions regulations that have dramatically decreased emissions and thus alleviated some concerns, although the lack of continuous sampling of dioxin emissions causes concern about the understatement of emissions. In Belgium, through the introduction of a process called AMESA, continuous sampling showed that periodic sampling understated emissions by a factor of 30 to 50. Few facilities have continuous sampling. Dioxins are produced in small concentrations when organic material is burned in the presence of chlorine, whether the chlorine is present as chloride ions or as organochlorine compounds, so they are widely produced in many contexts. According to the most recent US EPA data, the major sources of dioxins fall broadly into the following types:
- combustion sources, e.g. municipal waste or medical waste incinerators and private backyard barrel burning
- metal smelting
- refining and process sources
- chemical manufacturing sources
- natural sources
- environmental reservoirs
When first carried out in 1987, the original US EPA inventory of dioxin sources revealed that incineration represented more than 80% of known dioxin sources. As a result, the US EPA implemented new emissions requirements. These regulations succeeded in reducing dioxin stack emissions from incinerators. Incineration of municipal solid waste, medical waste, sewage sludge, and hazardous waste together now produces less than 3% of all dioxin emissions. Since 1987, however, backyard barrel burning has shown almost no decrease, and is now the largest source of dioxin emissions, producing about one third of the total output. In incineration, dioxins can also reform or form de novo in the atmosphere above the stack as the exhaust gases cool through a temperature window of 600 to 200 °C. The most common method of reducing the quantity of dioxins reforming or forming de novo is rapid (30 millisecond) quenching of the exhaust gases through that 400 °C window. Incinerator emissions of dioxins have been reduced by over 90% as a result of new emissions control requirements, and incineration in developed countries is now a very minor contributor to dioxin emissions. Dioxins are also generated in reactions that do not involve burning, such as the chlorine bleaching of fibers for paper or textiles and the manufacture of chlorinated phenols, particularly when the reaction temperature is not well controlled. Compounds involved include the wood preservative pentachlorophenol and herbicides such as 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T). Higher levels of chlorination require higher reaction temperatures and result in greater dioxin production. Dioxins may also be formed during the photochemical breakdown of the common antimicrobial compound triclosan. Sources of human intake Tolerable daily, monthly or annual intakes have been set by the World Health Organization and a number of governments. Dioxins enter the general population almost exclusively through the ingestion of food, specifically through the consumption of fish, meat, and dairy products, since dioxins are fat-soluble and readily climb the food chain. Children are passed substantial body burdens by their mothers, and breastfeeding increases the child's body burden.
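Because elimination is so slow (half-lives of several years; see Metabolism below), even a small steady intake accumulates into a sizeable body burden. A minimal one-compartment sketch follows; the intake value and half-life used here are invented for illustration and are not figures from this article:

```python
# Illustrative sketch (invented numbers): with a constant daily intake I and
# first-order elimination of half-life t_half, the steady-state body burden is
# roughly I * t_half / ln(2), the standard one-compartment accumulation result.
import math

intake_pg_per_day = 100.0            # hypothetical net daily dioxin intake, pg TEQ
half_life_years = 7.0                # hypothetical, within the range quoted for PCDDs
half_life_days = half_life_years * 365.25

k = math.log(2) / half_life_days                 # elimination rate constant, per day
steady_state_pg = intake_pg_per_day / k          # equals intake * t_half / ln 2
print(f"steady-state body burden ~ {steady_state_pg / 1e6:.2f} micrograms TEQ")

t_days = 10 * 365.25                             # burden after 10 years, starting from zero
burden = steady_state_pg * (1 - math.exp(-k * t_days))
print(f"after 10 years ~ {burden / 1e6:.2f} micrograms TEQ "
      f"({burden / steady_state_pg:.0%} of steady state)")
```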
Dioxin exposure can also occur from contact with Pentachlorophenol (Penta) treated lumber as Pentachlorophenol often contains dioxins as a contaminant. Children's daily intakes during breast feeding are often many times above the intakes of adults based on body weight. This is why the WHO consultation group assessed the tolerable intake so as to prevent a woman from accumulating harmful body burdens before her first pregnancy. Breast fed children usually still have higher dioxin body burdens than non breast fed children. The WHO still recommends breast feeding for its other benefits. In many countries dioxins in breast milk have decreased by even 90% during the two last decades. Dioxins are present in cigarette smoke. Dioxin in cigarette smoke was noted as "understudied" by the US EPA in its "Re-Evaluating Dioxin" (1995). In that same document, the US EPA acknowledged that dioxin in cigarettes is "anthropogenic" (man-made, "not likely in nature"). Metabolism Dioxins are absorbed primarily through dietary intake of fat, as this is where they accumulate in animals and humans. In humans, the highly chlorinated dioxins are stored in fatty tissues and are neither readily metabolized nor excreted. The estimated elimination half-life for highly chlorinated dioxins (4–8 chlorine atoms) in humans ranges from 4.9 to 13.1 years. The persistence of a particular dioxin congener in an animal is thought to be a consequence of its structure. Dioxins with no lateral (2, 3, 7, and 8) chlorines, which thus contain hydrogen atoms on adjacent pairs of carbons, can more readily be oxidized by cytochromes P450. The oxidized dioxins can then be more readily excreted rather than stored for a long time. Toxicity 2,3,7,8-Tetrachlorodibenzodioxin (TCDD) is considered the most toxic of the congeners (for the mechanism of action, see 2,3,7,8-Tetrachlorodibenzodioxin and Aryl hydrocarbon receptor). Other dioxin congeners including PCDFs and PCBs with dioxin-like toxicity, are given a toxicity rating from 0 to 1, where TCDD = 1 (see Dioxins and dioxin-like compounds). This toxicity rating is called the Toxic Equivalence Factor concept, or TEF. TEFs are consensus values and, because of the strong species dependence for toxicity, are listed separately for mammals, fish, and birds. TEFs for mammalian species are generally applicable to human risk calculations. The TEFs have been developed from detailed assessment of literature data to facilitate both risk assessment and regulatory control. Many other compounds may also have dioxin-like properties, particularly non-ortho PCBs, one of which has a TEF as high as 0.1. The total dioxin toxic equivalence (TEQ) value expresses the toxicity as if the mixture were pure TCDD. The TEQ approach and current TEFs have been adopted internationally as the most appropriate way to estimate the potential health risks of mixture of dioxins. Recent data suggest that this type of simple scaling factor may not be the most appropriate treatment for complex mixtures of dioxins; both transfer from the source and absorption and elimination vary among different congeners, and the TEF value is not able to accurately reflect this. Dioxins and other persistent organic pollutants (POPs) are subject to the Stockholm Convention. The treaty obliges signatories to take measures to eliminate where possible, and minimize where not possible to eliminate, all sources of dioxin. 
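Computationally, the TEQ scheme described above is a TEF-weighted sum over the congeners in a mixture. A minimal sketch follows; the congener concentrations are invented, and the TEF values are given as the WHO 2005 mammalian values to the best of recollection, so they should be checked against the current WHO tables rather than treated as authoritative:

```python
# Illustrative sketch: the total toxic equivalence (TEQ) of a mixture is the
# TEF-weighted sum of congener concentrations, expressed as if the mixture
# were pure 2,3,7,8-TCDD. TEFs below are believed to follow the WHO 2005
# mammalian scheme (verify before use); the concentrations are invented.
tef = {
    "2,3,7,8-TCDD": 1.0,
    "1,2,3,7,8-PeCDD": 1.0,
    "1,2,3,4,7,8-HxCDD": 0.1,
    "1,2,3,4,6,7,8-HpCDD": 0.01,
    "OCDD": 0.0003,
}

sample_pg_per_g_fat = {            # hypothetical congener concentrations, pg/g fat
    "2,3,7,8-TCDD": 2.0,
    "1,2,3,7,8-PeCDD": 5.0,
    "1,2,3,4,7,8-HxCDD": 20.0,
    "1,2,3,4,6,7,8-HpCDD": 100.0,
    "OCDD": 500.0,
}

teq = sum(conc * tef[congener] for congener, conc in sample_pg_per_g_fat.items())
print(f"TEQ = {teq:.2f} pg TEQ/g fat")   # 2 + 5 + 2 + 1 + 0.15 = 10.15
```

Note that, as the text cautions, such a simple scaling treats the uptake and elimination of all congeners alike, which is exactly the limitation pointed out for complex mixtures.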
Health effects in humans Dioxins build up primarily in fatty tissues over time (bioaccumulation), so even small exposures may eventually reach dangerous levels. In 1994, the US EPA reported that dioxins are probable carcinogens, but noted that non-cancer effects (on reproduction and sexual development, and on the immune system) may pose a greater threat to human health. TCDD, the most toxic of the dibenzodioxins, is classified as a Group 1 carcinogen by the International Agency for Research on Cancer (IARC). TCDD has a half-life of approximately 8 years in humans, although at high concentrations the elimination rate is enhanced by metabolism. The health effects of dioxins are mediated by their action on a cellular receptor, the aryl hydrocarbon receptor (AhR). Exposure to high levels of dioxins in humans causes a severe form of persistent acne known as chloracne. High occupational or accidental levels of exposure to dioxins have been shown by epidemiological studies to lead to an increased risk of tumors at all sites. Other effects in humans (at high dose levels) may include:
- developmental abnormalities in the enamel of children's teeth
- central and peripheral nervous system pathology
- thyroid disorders
- damage to the immune system
- endometriosis
- diabetes
Recent studies have shown that high exposure to dioxins changes the ratio of male to female births in a population such that more females are born than males. Dioxins accumulate in food chains in a fashion similar to other chlorinated compounds (bioaccumulation). This means that even small concentrations in contaminated water can be concentrated up a food chain to dangerous levels because of the long biological half-life and low water solubility of dioxins. Toxic effects in animals While it has been difficult to establish specific health effects in humans due to the lack of controlled dose experiments, studies in animals have shown that dioxin causes a wide variety of toxic effects. In particular, TCDD has been shown to be teratogenic, mutagenic, carcinogenic, immunotoxic, and hepatotoxic. Furthermore, alterations in multiple endocrine and growth factor systems have been reported. The most sensitive effects, observed in multiple species, appear to be developmental, including effects on the developing immune, nervous, and reproductive systems. These most sensitive effects are caused at body burdens relatively close to those reported in humans. Among the animals for which TCDD toxicity has been studied, there is strong evidence for the following effects:
- birth defects (teratogenicity) in rodents (including rats, mice, hamsters and guinea pigs), birds, and fish
- cancer (including neoplasms in the mammalian lung, oral/nasal cavities, thyroid and adrenal glands, and liver, squamous cell carcinoma, and various animal hepatocarcinomas) in rodents and fish
- hepatotoxicity (liver toxicity) in rodents, chickens, and fish
- endocrine disruption in rodents and fish
- immunosuppression in rodents and fish
The LD50 of dioxin also varies widely between species, with the most notable disparity being between the ostensibly similar hamster and guinea pig. The oral LD50 for guinea pigs is as low as 0.5 to 2 μg/kg body weight, whereas the oral LD50 for hamsters can be as high as 1 to 5 mg/kg body weight, a difference of as much as a thousandfold or more, and even among rat strains there may be thousandfold differences. Agent Orange Agent Orange was the code name for one of the herbicides and defoliants the U.S.
military used as part of its herbicidal warfare program, Operation Ranch Hand, during the Vietnam War from 1961 to 1971. It was a mixture of 2,4,5-T and 2,4-D. The 2,4,5-T used was contaminated with 2,3,7,8-tetrachlorodibenzodioxin (TCDD), an extremely toxic dioxin compound. During the Vietnam War, between 1962 and 1971, the United States military sprayed chemical herbicides and defoliants in Vietnam, eastern Laos and parts of Cambodia, as part of Operation Ranch Hand. By 1971, 12% of the total area of South Vietnam had been sprayed with defoliating chemicals, which were often applied at rates that were 13 times as high as the legal USDA limit. In South Vietnam alone, an estimated 10 million hectares of agricultural land were ultimately destroyed. In some areas, TCDD concentrations in soil and water were hundreds of times greater than the levels considered safe by the U.S. Environmental Protection Agency. According to the Vietnamese Ministry of Foreign Affairs, 4.8 million Vietnamese people were exposed to Agent Orange, resulting in 400,000 people being killed or maimed, and 500,000 children born with birth defects. The Red Cross of Vietnam estimates that up to 1 million people are disabled or have health problems due to Agent Orange contamination. The United States government has challenged these figures as being unreliable and unrealistically high. Dioxin exposure incidents In 1949, in a Monsanto herbicide production plant for 2,4,5-T in Nitro, West Virginia, 240 people were affected when a relief valve opened. In 1963, a dioxin cloud escaped after an explosion in a Philips-Duphar plant (now Solvay Group) near Amsterdam. The plant was so polluted with dioxin after the accident that it had to be dismantled, embedded in concrete, and dumped into the ocean. Between 1965 and 1968, production of 2,4,5-trichlorophenol at the Spolana plant in Neratovice, Czechoslovakia, seriously poisoned about 60 workers with dioxins; after 3 years of investigation of the health problems of workers, Spolana stopped manufacture of 2,4,5-T (most of which was supplied to the US military in Vietnam). Several buildings of the Spolana chemical plant were heavily contaminated by dioxins. Unknown amounts of dioxins were flushed into the Elbe and Mulde rivers during the 2002 European flood, contaminating soil. Analysis of eggs and ducks found dioxin levels 15 times higher than the EU limit and a high concentration of dioxin-like PCBs in the village of Libiš. In 2004, the state health authority published a study which analysed the level of toxic substances in human blood near Spolana. According to the study, dioxin levels in Neratovice, Libiš and Tišice were about twice the level of the control group in Benešov. The quantity of dioxins near Spolana is significantly higher than the background levels in other countries. According to the US EPA, even a background level can pose a risk of cancer from 1:10000 to 1:1000, about 100 times higher than normal. The consumption of local fish, eggs, poultry, and some produce was prohibited because of post-flood contamination. Also, from 1965 through 1968, Dr. Albert M. Kligman was contracted by the Dow Chemical Company to perform threshold tests for TCDD on inmates at Holmesburg Prison in Philadelphia after Dow studies revealed that adverse effects on workers at Dow's Midland, Michigan, plant were likely due to TCDD. A subsequent Dow test using rabbit ear models found that exposure to 4–8 μg usually caused a severe response.
The human studies carried out in Holmesburg failed to follow Dow's original protocol and lacked proper informed consent by the participants. As a result of poor study design and the subsequent destruction of records, the tests were virtually worthless even though ten inmates were exposed to 7,500 μg of TCDD. In 1976, large amounts of dioxins were released in an industrial accident at Seveso, Italy, although no immediate human fatalities or birth defects occurred. In 1978, dioxins were some of the contaminants that forced the evacuation of the Love Canal neighborhood of Niagara Falls, New York. From 1982 through 1985, Times Beach, Missouri, was bought out and evacuated under order of the United States Environmental Protection Agency due to high levels of dioxins in the soil caused by applications of contaminated oil meant to control dust on the town's dirt roads. The town eventually disincorporated. In the spring of 1990, the Khimprom chemical plant in Ufa, Russia, released phenol into nearby tributaries. An investigation revealed previously classified disposal of dioxin from the manufacture of 2,4,5-trichlorophenoxyacetic acid. The accident affected 670,000 people. Dioxin was found in tap water; it was assumed to have resulted from chlorophenol formed by reaction with chlorine during water purification. In December 1991, an electrical explosion caused dioxins (created from the oxidation of polychlorinated biphenyl) to spread through four residence halls and two other buildings on the college campus of SUNY New Paltz. In May 1999, there was a dioxin crisis in Belgium: quantities of polychlorinated biphenyls with dioxin-like toxicity had entered the food chain through contaminated animal feed. 7,000,000 chickens and 60,000 pigs had to be slaughtered. This scandal was followed by a landslide change in government in the elections one month later. Explosions resulting from the terrorist attacks on the US on 11 September 2001 released massive amounts of dust into the air. The air was measured for dioxins from 23 September 2001 to 21 November 2001, and the levels were reported to be "likely the highest ambient concentration that have ever been reported [in history]." The United States Environmental Protection Agency report "Exposure and Human Health Evaluation of Airborne Pollution from the World Trade Center Disaster", dated October 2002, released in December 2002, and authored by the EPA Office of Research and Development in Washington, states that dioxin levels recorded at a monitoring station on Park Row near City Hall Park in New York between 12 and 29 October 2001 averaged 5.6 parts per trillion, or nearly six times the highest dioxin level ever recorded in the U.S. Dioxin levels in the rubble of the World Trade Center were much higher, with concentrations ranging from 10 to 170 parts per trillion. The report did not measure the toxicity of indoor air. In a 2001 case study, physicians reported clinical changes in a 30-year-old woman who had been exposed to a massive dosage (144,000 pg/g blood fat) of dioxin, equal to 16,000 times the normal body level and the highest dose of dioxin ever recorded in a human. She suffered from chloracne, nausea, vomiting, epigastric pain, loss of appetite, leukocytosis, anemia, amenorrhoea and thrombocytopenia. However, other notable laboratory tests, such as immune function tests, were relatively normal.
The same study also covered a second subject, who had received a dosage equivalent to 2,900 times the normal level and who apparently suffered no notable negative effects other than chloracne. These patients were provided with olestra to accelerate dioxin elimination. In 2004, in a notable individual case of dioxin poisoning, Ukrainian politician Viktor Yushchenko was exposed to the second-largest measured dose of dioxins, according to the reports of the physicians responsible for diagnosing him. This is the first known case of a single high dose of TCDD dioxin poisoning, and it was diagnosed only after a toxicologist recognized the symptoms of chloracne while viewing television news coverage of his condition. In the early 2000s, residents of the city of New Plymouth, New Zealand, reported many illnesses among people living around and working at the Dow Chemical plant. This plant ceased production of 2,4,5-T in 1987. DuPont has been sued by 1,995 people who claim dioxin emissions from DuPont's plant in DeLisle, Mississippi, caused their cancers, illnesses or loved ones' deaths; of these claims, only 850 were still pending as of June 2008. In August 2005, Glen Strong, an oyster fisherman with the rare blood cancer multiple myeloma, was awarded $14 million from DuPont, but the ruling was overturned on 5 June 2008 by a Mississippi jury, which found that DuPont's plant had no connection to Mr. Strong's disease. In another case, parents claimed dioxin from pollution caused the death of their 8-year-old daughter; the trial took place in the summer of 2007, and a jury wholly rejected the family's claims, as no scientific connection could be proven between DuPont and the child's death. DuPont's DeLisle plant is one of three titanium dioxide facilities (including Edgemoor, Delaware, and New Johnsonville, Tennessee) that are the largest producers of dioxin in the country, according to the US EPA's Toxic Release Inventory. DuPont maintains its operations are safe and environmentally responsible. In 2007, thousands of tonnes of foul-smelling refuse were piled up in Naples, Italy, and its surrounding villages, defacing entire neighbourhoods. Authorities discovered that polychlorinated dibenzodioxin levels in buffalo milk used by 29 mozzarella makers exceeded permitted limits; after further investigation they impounded milk from 66 farms. Authorities suspected that the contamination came from waste illegally disposed of on land grazed by buffalo. Prosecutors in Naples placed 109 people under investigation on suspicion of fraud and food poisoning. Sales of mozzarella cheese fell by 50% in Italy. In December 2008, dioxin levels in Irish pork were disclosed to have been between 80 and 200 times the legal limit. All Irish pork products were withdrawn from sale both nationally and internationally. In this case the dioxin toxicity was found to be mostly due to dioxin-like polychlorinated dibenzofurans and polychlorinated biphenyls, and the contribution from actual polychlorinated dibenzodioxins was relatively low. It is thought that the incident resulted from PCB contamination of fuel oil used in a drying burner at a single feed processor. The resulting combustion produced a highly toxic mixture of PCBs, dioxins and furans, which was included in the feed produced and subsequently fed to a large number of pigs.
According to 2009 data, in 2005 dioxin production by the ILVA steelworks in Taranto, Italy, accounted for 90.3 per cent of overall Italian emissions and 8.8 per cent of European emissions. German dioxin incident: In January 2011, about 4,700 German farms were banned from making deliveries after self-monitoring by an animal feed producer showed dioxin levels above the permitted maximum. This incident appeared to involve PCDDs and not PCBs. Dioxins were found in animal feed and eggs on many farms. The maximum values were exceeded by up to twofold in feed and up to fourfold in some individual eggs. The incident was thus minor compared with the Belgian crisis of 1999, and the delivery bans were rapidly cleared. Dioxin testing The analyses used to determine these compounds' relative toxicity share common elements that differ from methods used for more traditional analytical determinations. The preferred methods for dioxins and related analyses use high-resolution gas chromatography/mass spectrometry (HRGC/HRMS). Concentrations are determined by measuring the ratio of the analyte to the appropriate isotopically labeled internal standard. Novel bioassays such as DR CALUX are now also used to identify dioxins and dioxin-like compounds. Their advantage over HRGC/HRMS is that they can screen many samples at lower cost and can detect all compounds that interact with the Ah receptor, which is responsible for the carcinogenic effects. See also Dioxins and dioxin-like compounds Polychlorinated dibenzofurans (PCDFs) – A group of compounds, produced by the same conditions as dioxins and commonly co-present with dioxins in contamination incidents. They have the same toxic mode of action and are included in the toxic equivalent scheme for the purposes of assessing dioxin levels. Chemetco – This former copper smelter is cited in an academic study as one of the 10 highest-ranking sources of dioxin pollution reaching Nunavut in the Canadian Arctic. Polychlorinated biphenyls – A group of compounds historically used in the manufacture of electrical transformers, certain members of which can also contribute to dioxin-like toxicity. These dioxin-like compounds are also included in the toxic equivalent scheme when measuring dioxin levels. References External links NIEHS dioxin fact sheet "Dioxins and Dioxin-like Compounds in the Food Supply: Strategies to Decrease Exposure", a 2003 report by the National Academy of Sciences "Assessment of the Health Risks of Dioxins", a 1998 report by the World Health Organization. Chloroarenes IARC Group 1 carcinogens IARC Group 3 carcinogens Immunotoxins Dibenzodioxins Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution Persistent organic pollutants under the Stockholm Convention
Polychlorinated dibenzodioxins
[ "Chemistry" ]
5,894
[ "Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution", "Persistent organic pollutants under the Stockholm Convention" ]
51,393
https://en.wikipedia.org/wiki/Czech%20Biomass%20Association
The Czech Biomass Association (CZ Biom) is an NGO that supports the development of phytoenergetics (energy from plant material) in the Czech Republic. Members of CZ BIOM are scientists, specialists, entrepreneurs, and activists interested in using biomass as an energy resource. CZ BIOM is a member of the European Biomass Association. References External links Biom.cz Website of CZ BIOM, 2002 archive Bioenergy organizations Science and technology in the Czech Republic Environmental organizations based in the Czech Republic Biomass Nature conservation organisations based in Europe Renewable energy organizations
Czech Biomass Association
[ "Engineering" ]
121
[ "Renewable energy organizations", "Energy organizations" ]
51,399
https://en.wikipedia.org/wiki/Buckingham%20%CF%80%20theorem
In engineering, applied mathematics, and physics, the Buckingham π theorem is a key theorem in dimensional analysis. It is a formalisation of Rayleigh's method of dimensional analysis. Loosely, the theorem states that if there is a physically meaningful equation involving a certain number n of physical variables, then the original equation can be rewritten in terms of a set of p = n − k dimensionless parameters π1, π2, ..., πp constructed from the original variables, where k is the number of physical dimensions involved; it is obtained as the rank of a particular matrix. The theorem provides a method for computing sets of dimensionless parameters from the given variables, or nondimensionalization, even if the form of the equation is still unknown. The Buckingham π theorem indicates that the validity of the laws of physics does not depend on a specific unit system. A statement of this theorem is that any physical law can be expressed as an identity involving only dimensionless combinations (ratios or products) of the variables linked by the law (for example, pressure and volume are linked by Boyle's law: they are inversely proportional). If the dimensionless combinations' values changed with the systems of units, then the equation would not be an identity, and the theorem would not hold. History Although named for Edgar Buckingham, the theorem was first proved by the French mathematician Joseph Bertrand in 1878. Bertrand considered only special cases of problems from electrodynamics and heat conduction, but his article contains, in distinct terms, all the basic ideas of the modern proof of the theorem and clearly indicates the theorem's utility for modelling physical phenomena. The technique of using the theorem ("the method of dimensions") became widely known due to the works of Rayleigh. The first application of the theorem in the general case, to the dependence of pressure drop in a pipe upon governing parameters, probably dates back to 1892; a heuristic proof with the use of series expansions dates to 1894. Formal generalization of the theorem for the case of arbitrarily many quantities was given first by A. Vaschy in 1892, then in 1911, apparently independently, by both A. Federman and D. Riabouchinsky, and again in 1914 by Buckingham. It was Buckingham's article that introduced the use of the symbol "π" for the dimensionless variables (or parameters), and this is the source of the theorem's name. Statement More formally, the number p of dimensionless terms that can be formed is equal to the nullity of the dimensional matrix, and k is its rank. For experimental purposes, different systems that share the same description in terms of these dimensionless numbers are equivalent. In mathematical terms, if we have a physically meaningful equation such as f(q1, q2, ..., qn) = 0, where the qi are the n physical variables, and there is a maximal dimensionally independent subset of size k, then the above equation can be restated as F(π1, π2, ..., πp) = 0, where the πi are dimensionless parameters constructed from the qi by p = n − k dimensionless equations, the so-called Pi groups, of the form πi = q1^a1 q2^a2 ... qn^an, where the exponents ai are rational numbers. (They can always be taken to be integers by redefining πi as being raised to a power that clears all denominators.) If there are ℓ fundamental units in play, then p ≥ n − ℓ. Significance The Buckingham π theorem provides a method for computing sets of dimensionless parameters from given variables, even if the form of the equation remains unknown.
However, the choice of dimensionless parameters is not unique; Buckingham's theorem only provides a way of generating sets of dimensionless parameters and does not indicate the most "physically meaningful". Two systems for which these parameters coincide are called similar (as with similar triangles, they differ only in scale); they are equivalent for the purposes of the equation, and the experimentalist who wants to determine the form of the equation can choose the most convenient one. Most importantly, Buckingham's theorem describes the relation between the number of variables and fundamental dimensions. Proof For simplicity, it will be assumed that the space of fundamental and derived physical units forms a vector space over the real numbers, with the fundamental units as basis vectors, and with multiplication of physical units as the "vector addition" operation, and raising to powers as the "scalar multiplication" operation: represent a dimensional variable as the set of exponents needed for the fundamental units (with a power of zero if the particular fundamental unit is not present). For instance, the standard gravity has units of (length over time squared), so it is represented as the vector with respect to the basis of fundamental units (length, time). We could also require that exponents of the fundamental units be rational numbers and modify the proof accordingly, in which case the exponents in the pi groups can always be taken as rational numbers or even integers. Rescaling units Suppose we have quantities , where the units of contain length raised to the power . If we originally measure length in meters but later switch to centimeters, then the numerical value of would be rescaled by a factor of . Any physically meaningful law should be invariant under an arbitrary rescaling of every fundamental unit; this is the fact that the pi theorem hinges on. Formal proof Given a system of dimensional variables in fundamental (basis) dimensions, the dimensional matrix is the matrix whose rows correspond to the fundamental dimensions and whose columns are the dimensions of the variables: the th entry (where and ) is the power of the th fundamental dimension in the th variable. The matrix can be interpreted as taking in a combination of the variable quantities and giving out the dimensions of the combination in terms of the fundamental dimensions. So the (column) vector that results from the multiplication consists of the units of in terms of the fundamental independent (basis) units. If we rescale the th fundamental unit by a factor of , then gets rescaled by , where is the th entry of the dimensional matrix. In order to convert this into a linear algebra problem, we take logarithms (the base is irrelevant), yielding which is an action of on . We define a physical law to be an arbitrary function such that is a permissible set of values for the physical system when . We further require to be invariant under this action. Hence it descends to a function . All that remains is to exhibit an isomorphism between and , the (log) space of pi groups . We construct an matrix whose columns are a basis for . It tells us how to embed into as the kernel of . That is, we have an exact sequence Taking tranposes yields another exact sequence The first isomorphism theorem produces the desired isomorphism, which sends the coset to . This corresponds to rewriting the tuple into the pi groups coming from the columns of . 
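The kernel construction at the heart of this proof is straightforward to carry out in practice. The following minimal sketch (an added illustration, not part of the original exposition) uses SymPy to read off the pi groups as a basis of the null space of a dimensional matrix; the matrix shown is the one for the simple pendulum treated in the Examples section below, with columns for the period, the mass, the length, and the gravitational acceleration.

```python
from sympy import Matrix

# Dimensional matrix for the simple pendulum example:
# columns = (period T, mass m, length L, gravitational acceleration g),
# rows    = powers of the fundamental dimensions (mass, length, time).
dim_matrix = Matrix([
    [0, 1, 0, 0],    # mass
    [0, 0, 1, 1],    # length
    [1, 0, 0, -2],   # time
])

n = dim_matrix.cols            # number of physical variables
k = dim_matrix.rank()          # rank of the dimensional matrix
print("number of pi groups:", n - k)   # p = n - k = 1

# Each basis vector of the kernel lists the exponents of one dimensionless group.
for vec in dim_matrix.nullspace():
    print(list(vec))           # [2, 0, -1, 1]  ->  pi = T**2 * g / L
```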
The International System of Units defines seven base units, which are the ampere, kelvin, second, metre, kilogram, candela and mole. It is sometimes advantageous to introduce additional base units and techniques to refine the technique of dimensional analysis. (See orientational analysis and reference.) Examples Speed This example is elementary but serves to demonstrate the procedure. Suppose a car is driving at 100 km/h; how long does it take to go 200 km? This question considers dimensioned variables: distance time and speed and we are seeking some law of the form Any two of these variables are dimensionally independent, but the three taken together are not. Thus there is dimensionless quantity. The dimensional matrix is in which the rows correspond to the basis dimensions and and the columns to the considered dimensions where the latter stands for the speed dimension. The elements of the matrix correspond to the powers to which the respective dimensions are to be raised. For instance, the third column states that represented by the column vector is expressible in terms of the basis dimensions as since For a dimensionless constant we are looking for vectors such that the matrix-vector product equals the zero vector In linear algebra, the set of vectors with this property is known as the kernel (or nullspace) of the dimensional matrix. In this particular case its kernel is one-dimensional. The dimensional matrix as written above is in reduced row echelon form, so one can read off a non-zero kernel vector to within a multiplicative constant: If the dimensional matrix were not already reduced, one could perform Gauss–Jordan elimination on the dimensional matrix to more easily determine the kernel. It follows that the dimensionless constant, replacing the dimensions by the corresponding dimensioned variables, may be written: Since the kernel is only defined to within a multiplicative constant, the above dimensionless constant raised to any arbitrary power yields another (equivalent) dimensionless constant. Dimensional analysis has thus provided a general equation relating the three physical variables: or, letting denote a zero of function which can be written in the desired form (which recall was ) as The actual relationship between the three variables is simply In other words, in this case has one physically relevant root, and it is unity. The fact that only a single value of will do and that it is equal to 1 is not revealed by the technique of dimensional analysis. The simple pendulum We wish to determine the period of small oscillations in a simple pendulum. It will be assumed that it is a function of the length the mass and the acceleration due to gravity on the surface of the Earth which has dimensions of length divided by time squared. The model is of the form (Note that it is written as a relation, not as a function: is not written here as a function of ) Period, mass, and length are dimensionally independent, but acceleration can be expressed in terms of time and length, which means the four variables taken together are not dimensionally independent. 
Thus we need only dimensionless parameter, denoted by and the model can be re-expressed as where is given by for some values of The dimensions of the dimensional quantities are: The dimensional matrix is: (The rows correspond to the dimensions and and the columns to the dimensional variables For instance, the 4th column, states that the variable has dimensions of ) We are looking for a kernel vector such that the matrix product of on yields the zero vector The dimensional matrix as written above is in reduced row echelon form, so one can read off a kernel vector within a multiplicative constant: Were it not already reduced, one could perform Gauss–Jordan elimination on the dimensional matrix to more easily determine the kernel. It follows that the dimensionless constant may be written: In fundamental terms: which is dimensionless. Since the kernel is only defined to within a multiplicative constant, if the above dimensionless constant is raised to any arbitrary power, it will yield another equivalent dimensionless constant. In this example, three of the four dimensional quantities are fundamental units, so the last (which is ) must be a combination of the previous. Note that if (the coefficient of ) had been non-zero then there would be no way to cancel the value; therefore be zero. Dimensional analysis has allowed us to conclude that the period of the pendulum is not a function of its mass (In the 3D space of powers of mass, time, and distance, we can say that the vector for mass is linearly independent from the vectors for the three other variables. Up to a scaling factor, is the only nontrivial way to construct a vector of a dimensionless parameter.) The model can now be expressed as: Then this implies that for some zero of the function If there is only one zero, call it then It requires more physical insight or an experiment to show that there is indeed only one zero and that the constant is in fact given by For large oscillations of a pendulum, the analysis is complicated by an additional dimensionless parameter, the maximum swing angle. The above analysis is a good approximation as the angle approaches zero. Electric power To demonstrate the application of the theorem, consider the power consumption of a stirrer with a given shape. The power, P, in dimensions [M · L2/T3], is a function of the density, ρ [M/L3], and the viscosity of the fluid to be stirred, μ [M/(L · T)], as well as the size of the stirrer given by its diameter, D [L], and the angular speed of the stirrer, n [1/T]. Therefore, we have a total of n = 5 variables representing our example. Those n = 5 variables are built up from k = 3 independent dimensions, e.g., length: L (SI units: m), time: T (s), and mass: M (kg). According to the -theorem, the n = 5 variables can be reduced by the k = 3 dimensions to form p = n − k = 5 − 3 = 2 independent dimensionless numbers. Usually, these quantities are chosen as , commonly named the Reynolds number which describes the fluid flow regime, and , the power number, which is the dimensionless description of the stirrer. Note that the two dimensionless quantities are not unique and depend on which of the n = 5 variables are chosen as the k = 3 dimensionally independent basis variables, which, in this example, appear in both dimensionless quantities. The Reynolds number and power number fall from the above analysis if , n, and D are chosen to be the basis variables. If, instead, , n, and D are selected, the Reynolds number is recovered while the second dimensionless quantity becomes . 
We note that is the product of the Reynolds number and the power number. Other examples An example of dimensional analysis can be found for the case of the mechanics of a thin, solid and parallel-sided rotating disc. There are five variables involved which reduce to two non-dimensional groups. The relationship between these can be determined by numerical experiment using, for example, the finite element method. The theorem has also been used in fields other than physics, for instance in sports science. See also Blast wave Dimensionless quantity Natural units Similitude (model) Reynolds number References Notes Citations Bibliography Original sources External links Some reviews and original sources on the history of pi theorem and the theory of similarity (in Russian) Articles containing proofs Dimensional analysis Eponymous theorems of physics
Buckingham π theorem
[ "Physics", "Mathematics", "Engineering" ]
2,874
[ "Dimensional analysis", "Equations of physics", "Eponymous theorems of physics", "Mechanical engineering", "Articles containing proofs", "Physics theorems" ]
51,414
https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20algebra
The fundamental theorem of algebra, also called d'Alembert's theorem or the d'Alembert–Gauss theorem, states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. This includes polynomials with real coefficients, since every real number is a complex number with its imaginary part equal to zero. Equivalently (by definition), the theorem states that the field of complex numbers is algebraically closed. The theorem is also stated as follows: every non-zero, single-variable, degree n polynomial with complex coefficients has, counted with multiplicity, exactly n complex roots. The equivalence of the two statements can be proven through the use of successive polynomial division. Despite its name, it is not fundamental for modern algebra; it was named when algebra was synonymous with the theory of equations. History , in his book Arithmetica Philosophica (published in 1608, at Nürnberg, by Johann Lantzenberger), wrote that a polynomial equation of degree n (with real coefficients) may have n solutions. Albert Girard, in his book L'invention nouvelle en l'Algèbre (published in 1629), asserted that a polynomial equation of degree n has n solutions, but he did not state that they had to be real numbers. Furthermore, he added that his assertion holds "unless the equation is incomplete", by which he meant that no coefficient is equal to 0. However, when he explains in detail what he means, it is clear that he actually believes that his assertion is always true; for instance, he shows that the equation although incomplete, has four solutions (counting multiplicities): 1 (twice), and As will be mentioned again below, it follows from the fundamental theorem of algebra that every non-constant polynomial with real coefficients can be written as a product of polynomials with real coefficients whose degrees are either 1 or 2. However, in 1702 Leibniz erroneously said that no polynomial of the type (with real and distinct from 0) can be written in such a way. Later, Nikolaus Bernoulli made the same assertion concerning the polynomial , but he got a letter from Euler in 1742 in which it was shown that this polynomial is equal to with Also, Euler pointed out that A first attempt at proving the theorem was made by d'Alembert in 1746, but his proof was incomplete. Among other problems, it assumed implicitly a theorem (now known as Puiseux's theorem), which would not be proved until more than a century later and using the fundamental theorem of algebra. Other attempts were made by Euler (1749), de Foncenex (1759), Lagrange (1772), and Laplace (1795). These last four attempts assumed implicitly Girard's assertion; to be more precise, the existence of solutions was assumed and all that remained to be proved was that their form was a + bi for some real numbers a and b. In modern terms, Euler, de Foncenex, Lagrange, and Laplace were assuming the existence of a splitting field of the polynomial p(z). At the end of the 18th century, two new proofs were published which did not assume the existence of roots, but neither of which was complete. One of them, due to James Wood and mainly algebraic, was published in 1798 and it was totally ignored. Wood's proof had an algebraic gap. The other one was published by Gauss in 1799 and it was mainly geometric, but it had a topological gap, only filled by Alexander Ostrowski in 1920, as discussed in Smale (1981). 
The first rigorous proof was published by Argand, an amateur mathematician, in 1806 (and revisited in 1813); it was also here that, for the first time, the fundamental theorem of algebra was stated for polynomials with complex coefficients, rather than just real coefficients. Gauss produced two other proofs in 1816 and another incomplete version of his original proof in 1849. The first textbook containing a proof of the theorem was Cauchy's Cours d'analyse de l'École Royale Polytechnique (1821). It contained Argand's proof, although Argand is not credited for it. None of the proofs mentioned so far is constructive. It was Weierstrass who raised for the first time, in the middle of the 19th century, the problem of finding a constructive proof of the fundamental theorem of algebra. He presented his solution, which amounts in modern terms to a combination of the Durand–Kerner method with the homotopy continuation principle, in 1891. Another proof of this kind was obtained by Hellmuth Kneser in 1940 and simplified by his son Martin Kneser in 1981. Without using countable choice, it is not possible to constructively prove the fundamental theorem of algebra for complex numbers based on the Dedekind real numbers (which are not constructively equivalent to the Cauchy real numbers without countable choice). However, Fred Richman proved a reformulated version of the theorem that does work. Equivalent statements There are several equivalent formulations of the theorem: Every univariate polynomial of positive degree with real coefficients has at least one complex root. Every univariate polynomial of positive degree with complex coefficients has at least one complex root. This implies immediately the previous assertion, as real numbers are also complex numbers. The converse results from the fact that one gets a polynomial with real coefficients by taking the product of a polynomial and its complex conjugate (obtained by replacing each coefficient with its complex conjugate). A root of this product is either a root of the given polynomial, or of its conjugate; in the latter case, the conjugate of this root is a root of the given polynomial. Every univariate polynomial of positive degree with complex coefficients can be factorized as where are complex numbers. The complex numbers are the roots of the polynomial. If a root appears in several factors, it is a multiple root, and the number of its occurrences is, by definition, the multiplicity of the root. The proof that this statement results from the previous ones is done by recursion on : when a root has been found, the polynomial division by provides a polynomial of degree whose roots are the other roots of the given polynomial. The next two statements are equivalent to the previous ones, although they do not involve any nonreal complex number. These statements can be proved from previous factorizations by remarking that, if is a non-real root of a polynomial with real coefficients, its complex conjugate is also a root, and is a polynomial of degree two with real coefficients (this is the complex conjugate root theorem). Conversely, if one has a factor of degree two, the quadratic formula gives a root. Every univariate polynomial with real coefficients of degree larger than two has a factor of degree two with real coefficients. Every univariate polynomial with real coefficients of positive degree can be factored as where is a real number and each is a monic polynomial of degree at most two with real coefficients. 
Moreover, one can suppose that the factors of degree two do not have any real root. Proofs All proofs below involve some mathematical analysis, or at least the topological concept of continuity of real or complex functions. Some also use differentiable or even analytic functions. This requirement has led to the remark that the Fundamental Theorem of Algebra is neither fundamental, nor a theorem of algebra. Some proofs of the theorem only prove that any non-constant polynomial with real coefficients has some complex root. This lemma is enough to establish the general case because, given a non-constant polynomial with complex coefficients, the polynomial has only real coefficients, and, if is a root of , then either or its conjugate is a root of . Here, is the polynomial obtained by replacing each coefficient of with its complex conjugate; the roots of are exactly the complex conjugates of the roots of Many non-algebraic proofs of the theorem use the fact (sometimes called the "growth lemma") that a polynomial function p(z) of degree n whose dominant coefficient is 1 behaves like zn when |z| is large enough. More precisely, there is some positive real number R such that when |z| > R. Real-analytic proofs Even without using complex numbers, it is possible to show that a real-valued polynomial p(x): p(0) ≠ 0 of degree n > 2 can always be divided by some quadratic polynomial with real coefficients. In other words, for some real-valued a and b, the coefficients of the linear remainder on dividing p(x) by x2 − ax − b simultaneously become zero. where q(x) is a polynomial of degree n − 2. The coefficients Rp(x)(a, b) and Sp(x)(a, b) are independent of x and completely defined by the coefficients of p(x). In terms of representation, Rp(x)(a, b) and Sp(x)(a, b) are bivariate polynomials in a and b. In the flavor of Gauss's first (incomplete) proof of this theorem from 1799, the key is to show that for any sufficiently large negative value of b, all the roots of both Rp(x)(a, b) and Sp(x)(a, b) in the variable a are real-valued and alternating each other (interlacing property). Utilizing a Sturm-like chain that contain Rp(x)(a, b) and Sp(x)(a, b) as consecutive terms, interlacing in the variable a can be shown for all consecutive pairs in the chain whenever b has sufficiently large negative value. As Sp(a, b = 0) = p(0) has no roots, interlacing of Rp(x)(a, b) and Sp(x)(a, b) in the variable a fails at b = 0. Topological arguments can be applied on the interlacing property to show that the locus of the roots of Rp(x)(a, b) and Sp(x)(a, b) must intersect for some real-valued a and b < 0. Complex-analytic proofs Find a closed disk D of radius r centered at the origin such that |p(z)| > |p(0)| whenever |z| ≥ r. The minimum of |p(z)| on D, which must exist since D is compact, is therefore achieved at some point z0 in the interior of D, but not at any point of its boundary. The maximum modulus principle applied to 1/p(z) implies that p(z0) = 0. In other words, z0 is a zero of p(z). A variation of this proof does not require the maximum modulus principle (in fact, a similar argument also gives a proof of the maximum modulus principle for holomorphic functions). Continuing from before the principle was invoked, if a := p(z0) ≠ 0, then, expanding p(z) in powers of z − z0, we can write Here, the cj are simply the coefficients of the polynomial z → p(z + z0) after expansion, and k is the index of the first non-zero coefficient following the constant term. 
For z sufficiently close to z0 this function has behavior asymptotically similar to the simpler polynomial . More precisely, the function for some positive constant M in some neighborhood of z0. Therefore, if we define and let tracing a circle of radius r > 0 around z, then for any sufficiently small r (so that the bound M holds), we see that When r is sufficiently close to 0 this upper bound for |p(z)| is strictly smaller than |a|, contradicting the definition of z0. Geometrically, we have found an explicit direction θ0 such that if one approaches z0 from that direction one can obtain values p(z) smaller in absolute value than |p(z0)|. Another analytic proof can be obtained along this line of thought observing that, since |p(z)| > |p(0)| outside D, the minimum of |p(z)| on the whole complex plane is achieved at z0. If |p(z0)| > 0, then 1/p is a bounded holomorphic function in the entire complex plane since, for each complex number z, |1/p(z)| ≤ |1/p(z0)|. Applying Liouville's theorem, which states that a bounded entire function must be constant, this would imply that 1/p is constant and therefore that p is constant. This gives a contradiction, and hence p(z0) = 0. Yet another analytic proof uses the argument principle. Let R be a positive real number large enough so that every root of p(z) has absolute value smaller than R; such a number must exist because every non-constant polynomial function of degree n has at most n zeros. For each r > R, consider the number where c(r) is the circle centered at 0 with radius r oriented counterclockwise; then the argument principle says that this number is the number N of zeros of p(z) in the open ball centered at 0 with radius r, which, since r > R, is the total number of zeros of p(z). On the other hand, the integral of n/z along c(r) divided by 2πi is equal to n. But the difference between the two numbers is The numerator of the rational expression being integrated has degree at most n − 1 and the degree of the denominator is n + 1. Therefore, the number above tends to 0 as r → +∞. But the number is also equal to N − n and so N = n. Another complex-analytic proof can be given by combining linear algebra with the Cauchy theorem. To establish that every complex polynomial of degree n > 0 has a zero, it suffices to show that every complex square matrix of size n > 0 has a (complex) eigenvalue. The proof of the latter statement is by contradiction. Let A be a complex square matrix of size n > 0 and let In be the unit matrix of the same size. Assume A has no eigenvalues. Consider the resolvent function which is a meromorphic function on the complex plane with values in the vector space of matrices. The eigenvalues of A are precisely the poles of R(z). Since, by assumption, A has no eigenvalues, the function R(z) is an entire function and Cauchy theorem implies that On the other hand, R(z) expanded as a geometric series gives: This formula is valid outside the closed disc of radius (the operator norm of A). Let Then (in which only the summand k = 0 has a nonzero integral). This is a contradiction, and so A has an eigenvalue. Finally, Rouché's theorem gives perhaps the shortest proof of the theorem. Topological proofs Suppose the minimum of |p(z)| on the whole complex plane is achieved at z0; it was seen at the proof which uses Liouville's theorem that such a number must exist. 
We can write p(z) as a polynomial in z − z0: there is some natural number k and there are some complex numbers ck, ck + 1, ..., cn such that ck ≠ 0 and: If p(z0) is nonzero, it follows that if a is a kth root of −p(z0)/ck and if t is positive and sufficiently small, then |p(z0 + ta)| < |p(z0)|, which is impossible, since |p(z0)| is the minimum of |p| on D. For another topological proof by contradiction, suppose that the polynomial p(z) has no roots, and consequently is never equal to 0. Think of the polynomial as a map from the complex plane into the complex plane. It maps any circle |z| = R into a closed loop, a curve P(R). We will consider what happens to the winding number of P(R) at the extremes when R is very large and when R = 0. When R is a sufficiently large number, then the leading term zn of p(z) dominates all other terms combined; in other words, When z traverses the circle once counter-clockwise then winds n times counter-clockwise around the origin (0,0), and P(R) likewise. At the other extreme, with |z| = 0, the curve P(0) is merely the single point p(0), which must be nonzero because p(z) is never zero. Thus p(0) must be distinct from the origin (0,0), which denotes 0 in the complex plane. The winding number of P(0) around the origin (0,0) is thus 0. Now changing R continuously will deform the loop continuously. At some R the winding number must change. But that can only happen if the curve P(R) includes the origin (0,0) for some R. But then for some z on that circle |z| = R we have p(z) = 0, contradicting our original assumption. Therefore, p(z) has at least one zero. Algebraic proofs These proofs of the Fundamental Theorem of Algebra must make use of the following two facts about real numbers that are not algebraic but require only a small amount of analysis (more precisely, the intermediate value theorem in both cases): every polynomial with an odd degree and real coefficients has some real root; every non-negative real number has a square root. The second fact, together with the quadratic formula, implies the theorem for real quadratic polynomials. In other words, algebraic proofs of the fundamental theorem actually show that if R is any real-closed field, then its extension C = R() is algebraically closed. By induction As mentioned above, it suffices to check the statement "every non-constant polynomial p(z) with real coefficients has a complex root". This statement can be proved by induction on the greatest non-negative integer k such that 2k divides the degree n of p(z). Let a be the coefficient of zn in p(z) and let F be a splitting field of p(z) over C; in other words, the field F contains C and there are elements z1, z2, ..., zn in F such that If k = 0, then n is odd, and therefore p(z) has a real root. Now, suppose that n = 2km (with m odd and k > 0) and that the theorem is already proved when the degree of the polynomial has the form 2k − 1m′ with m′ odd. For a real number t, define: Then the coefficients of qt(z) are symmetric polynomials in the zi with real coefficients. Therefore, they can be expressed as polynomials with real coefficients in the elementary symmetric polynomials, that is, in −a1, a2, ..., (−1)nan. So qt(z) has in fact real coefficients. Furthermore, the degree of qt(z) is n(n − 1)/2 = 2k−1m(n − 1), and m(n − 1) is an odd number. So, using the induction hypothesis, qt has at least one complex root; in other words, zi + zj + tzizj is complex for two distinct elements i and j from {1, ..., n}. 
Since there are more real numbers than pairs (i, j), one can find distinct real numbers t and s such that zi + zj + tzizj and zi + zj + szizj are complex (for the same i and j). So, both zi + zj and zizj are complex numbers. It is easy to check that every complex number has a complex square root, thus every complex polynomial of degree 2 has a complex root by the quadratic formula. It follows that zi and zj are complex numbers, since they are roots of the quadratic polynomial z2 −  (zi + zj)z + zizj. Joseph Shipman showed in 2007 that the assumption that odd degree polynomials have roots is stronger than necessary; any field in which polynomials of prime degree have roots is algebraically closed (so "odd" can be replaced by "odd prime" and this holds for fields of all characteristics). For axiomatization of algebraically closed fields, this is the best possible, as there are counterexamples if a single prime is excluded. However, these counterexamples rely on −1 having a square root. If we take a field where −1 has no square root, and every polynomial of degree n ∈ I has a root, where I is any fixed infinite set of odd numbers, then every polynomial f(x) of odd degree has a root (since has a root, where k is chosen so that ). From Galois theory Another algebraic proof of the fundamental theorem can be given using Galois theory. It suffices to show that C has no proper finite field extension. Let K/C be a finite extension. Since the normal closure of K over R still has a finite degree over C (or R), we may assume without loss of generality that K is a normal extension of R (hence it is a Galois extension, as every algebraic extension of a field of characteristic 0 is separable). Let G be the Galois group of this extension, and let H be a Sylow 2-subgroup of G, so that the order of H is a power of 2, and the index of H in G is odd. By the fundamental theorem of Galois theory, there exists a subextension L of K/R such that Gal(K/L) = H. As [L:R] = [G:H] is odd, and there are no nonlinear irreducible real polynomials of odd degree, we must have L = R, thus [K:R] and [K:C] are powers of 2. Assuming by way of contradiction that [K:C] > 1, we conclude that the 2-group Gal(K/C) contains a subgroup of index 2, so there exists a subextension M of C of degree 2. However, C has no extension of degree 2, because every quadratic complex polynomial has a complex root, as mentioned above. This shows that [K:C] = 1, and therefore K = C, which completes the proof. Geometric proofs There exists still another way to approach the fundamental theorem of algebra, due to J. M. Almira and A. Romero: by Riemannian geometric arguments. The main idea here is to prove that the existence of a non-constant polynomial p(z) without zeros implies the existence of a flat Riemannian metric over the sphere S2. This leads to a contradiction since the sphere is not flat. A Riemannian surface (M, g) is said to be flat if its Gaussian curvature, which we denote by Kg, is identically null. Now, the Gauss–Bonnet theorem, when applied to the sphere S2, claims that which proves that the sphere is not flat. Let us now assume that n > 0 and for each complex number z. Let us define Obviously, p*(z) ≠ 0 for all z in C. Consider the polynomial f(z) = p(z)p*(z). Then f(z) ≠ 0 for each z in C. Furthermore, We can use this functional equation to prove that g, given by for w in C, and for w ∈ S2\{0}, is a well defined Riemannian metric over the sphere S2 (which we identify with the extended complex plane C ∪ {∞}). 
Now, a simple computation shows that since the real part of an analytic function is harmonic. This proves that Kg = 0. Corollaries Since the fundamental theorem of algebra can be seen as the statement that the field of complex numbers is algebraically closed, it follows that any theorem concerning algebraically closed fields applies to the field of complex numbers. Here are a few more consequences of the theorem, which are either about the field of real numbers or the relationship between the field of real numbers and the field of complex numbers: The field of complex numbers is the algebraic closure of the field of real numbers. Every polynomial in one variable z with complex coefficients is the product of a complex constant and polynomials of the form z + a with a complex. Every polynomial in one variable x with real coefficients can be uniquely written as the product of a constant, polynomials of the form x + a with a real, and polynomials of the form x2 + ax + b with a and b real and a2 − 4b < 0 (which is the same thing as saying that the polynomial x2 + ax + b has no real roots). (By the Abel–Ruffini theorem, the real numbers a and b are not necessarily expressible in terms of the coefficients of the polynomial, the basic arithmetic operations and the extraction of n-th roots.) This implies that the number of non-real complex roots is always even and remains even when counted with their multiplicity. Every rational function in one variable x, with real coefficients, can be written as the sum of a polynomial function with rational functions of the form a/(x − b)n (where n is a natural number, and a and b are real numbers), and rational functions of the form (ax + b)/(x2 + cx + d)n (where n is a natural number, and a, b, c, and d are real numbers such that c2 − 4d < 0). A corollary of this is that every rational function in one variable and real coefficients has an elementary primitive. Every algebraic extension of the real field is isomorphic either to the real field or to the complex field. Bounds on the zeros of a polynomial While the fundamental theorem of algebra states a general existence result, it is of some interest, both from the theoretical and from the practical point of view, to have information on the location of the zeros of a given polynomial. The simplest result in this direction is a bound on the modulus: all zeros ζ of a monic polynomial satisfy an inequality |ζ| ≤ R∞, where As stated, this is not yet an existence result but rather an example of what is called an a priori bound: it says that if there are solutions then they lie inside the closed disk of center the origin and radius R∞. However, once coupled with the fundamental theorem of algebra it says that the disk contains in fact at least one solution. More generally, a bound can be given directly in terms of any p-norm of the n-vector of coefficients that is |ζ| ≤ Rp, where Rp is precisely the q-norm of the 2-vector q being the conjugate exponent of p, for any 1 ≤ p ≤ ∞. Thus, the modulus of any solution is also bounded by for 1 < p < ∞, and in particular (where we define an to mean 1, which is reasonable since 1 is indeed the n-th coefficient of our polynomial). The case of a generic polynomial of degree n, is of course reduced to the case of a monic, dividing all coefficients by an ≠ 0. Also, in case that 0 is not a root, i.e. 
a0 ≠ 0, bounds from below on the roots ζ follow immediately as bounds from above on , that is, the roots of Finally, the distance from the roots ζ to any point can be estimated from below and above, seeing as zeros of the polynomial , whose coefficients are the Taylor expansion of P(z) at Let ζ be a root of the polynomial in order to prove the inequality |ζ| ≤ Rp we can assume, of course, |ζ| > 1. Writing the equation as and using the Hölder's inequality we find Now, if p = 1, this is thus In the case 1 < p ≤ ∞, taking into account the summation formula for a geometric progression, we have thus and simplifying, Therefore holds, for all 1 ≤ p ≤ ∞. See also Weierstrass factorization theorem, a generalization of the theorem to other entire functions Eilenberg–Niven theorem, a generalization of the theorem to polynomials with quaternionic coefficients and variables Hilbert's Nullstellensatz, a generalization to several variables of the assertion that complex roots exist Bézout's theorem, a generalization to several variables of the assertion on the number of roots. References Citations Historic sources (tr. Course on Analysis of the Royal Polytechnic Academy, part 1: Algebraic Analysis) . English translation: (tr. New proof of the theorem that every integral rational algebraic function of one variable can be resolved into real factors of the first or second degree). – first proof. – second proof. – third proof. – fourth proof. (The Fundamental Theorem of Algebra and Intuitionism). (tr. An extension of a work of Hellmuth Kneser on the Fundamental Theorem of Algebra). (tr. On the first and fourth Gaussian proofs of the Fundamental Theorem of Algebra). (tr. New proof of the theorem that every integral rational function of one variable can be represented as a product of linear functions of the same variable). Recent literature (tr. On the history of the fundamental theorem of algebra: theory of equations and integral calculus.) (tr. The rational functions §80–88: the fundamental theorem). – English translation of Gauss's second proof. External links Algebra, fundamental theorem of at Encyclopaedia of Mathematics Fundamental Theorem of Algebra — a collection of proofs From the Fundamental Theorem of Algebra to Astrophysics: A "Harmonious" Path Mizar system proof: http://mizar.org/version/current/html/polynom5.html#T74 Articles containing proofs Field (mathematics) Theorems about polynomials Theorems in complex analysis
Fundamental theorem of algebra
[ "Mathematics" ]
6,355
[ "Theorems in mathematical analysis", "Theorems in algebra", "Theorems in complex analysis", "Theorems about polynomials", "Articles containing proofs" ]
51,420
https://en.wikipedia.org/wiki/Carbonic%20acid
Carbonic acid is a chemical compound with the chemical formula H2CO3. The molecule rapidly converts to water and carbon dioxide in the presence of water. However, in the absence of water, it is quite stable at room temperature. The interconversion of carbon dioxide and carbonic acid is related to the breathing cycle of animals and the acidification of natural waters. In biochemistry and physiology, the name "carbonic acid" is sometimes applied to aqueous solutions of carbon dioxide. These chemical species play an important role in the bicarbonate buffer system, used to maintain acid–base homeostasis. Terminology in biochemical literature In chemistry, the term "carbonic acid" strictly refers to the chemical compound with the formula H2CO3. Some biochemistry literature effaces the distinction between carbonic acid and carbon dioxide dissolved in extracellular fluid. In physiology, carbon dioxide excreted by the lungs may be called volatile acid or respiratory acid. Anhydrous carbonic acid At ambient temperatures, pure carbonic acid is a stable gas. There are two main methods to produce anhydrous carbonic acid: reaction of hydrogen chloride and potassium bicarbonate at 100 K in methanol, and proton irradiation of pure solid carbon dioxide. Chemically, it behaves as a diprotic Brønsted acid. Carbonic acid monomers exhibit three conformational isomers: cis–cis, cis–trans, and trans–trans. At low temperatures and atmospheric pressure, solid carbonic acid is amorphous and lacks Bragg peaks in X-ray diffraction. At high pressure, however, carbonic acid crystallizes, and modern analytical spectroscopy can measure its geometry. According to neutron diffraction of dideuterated carbonic acid (D2CO3) in a hybrid clamped cell (Russian alloy/copper-beryllium) at 1.85 GPa, the molecules are planar and form dimers joined by pairs of hydrogen bonds. All three C-O bonds are nearly equidistant at 1.34 Å, intermediate between typical C-O and C=O distances (respectively 1.43 and 1.23 Å). The unusual C-O bond lengths are attributed to delocalized π bonding in the molecule's center and extraordinarily strong hydrogen bonds. The same effects also induce a very short O–O separation (2.13 Å), through the 136° O-H-O angle imposed by the doubly hydrogen-bonded 8-membered rings. Longer O–O distances are observed in strong intramolecular hydrogen bonds, e.g. in oxalic acid, where the distances exceed 2.4 Å. In aqueous solution In the presence of even a slight amount of water, carbonic acid dehydrates to carbon dioxide and water, which then catalyzes further decomposition. For this reason, carbon dioxide can be considered the carbonic acid anhydride. The hydration equilibrium constant at 25 °C is ≈ 1.7×10−3 in pure water and ≈ 1.2×10−3 in seawater. Hence the majority of carbon dioxide at geophysical or biological air-water interfaces does not convert to carbonic acid, remaining as dissolved gas. However, the uncatalyzed equilibrium is reached quite slowly: the rate constants are 0.039 s−1 for hydration and 23 s−1 for dehydration. In biological solutions In the presence of the enzyme carbonic anhydrase, equilibrium is instead reached rapidly, and the following reaction takes precedence: HCO3− + H+ ⇌ CO2 + H2O. When the created carbon dioxide exceeds its solubility, gas evolves and a third equilibrium, CO2(soln) ⇌ CO2(g), must also be taken into consideration. The equilibrium constant for this reaction is defined by Henry's law.
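As a rough numerical illustration of how Henry's law combines with the first dissociation of dissolved CO2 (a sketch added here; the constants are rounded, assumed textbook values at 25 °C, not figures taken from this article), one can estimate the pH of pure water equilibrated with present-day atmospheric CO2.

```python
import math

K_H   = 3.3e-2    # mol/(L*atm), approximate Henry's law solubility of CO2 in water
K_a1  = 4.5e-7    # apparent first dissociation constant of CO2(aq), pKa ~ 6.3
p_co2 = 4.2e-4    # atm, roughly the present-day atmospheric partial pressure of CO2

co2_aq = K_H * p_co2                 # dissolved CO2, mol/L, from Henry's law
h_plus = math.sqrt(K_a1 * co2_aq)    # [H+] ~ [HCO3-] for a dilute, weak acid
print(f"[CO2(aq)] = {co2_aq:.2e} M, pH = {-math.log10(h_plus):.2f}")
# prints a pH of about 5.6, the familiar mild acidity of unpolluted rainwater
```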
The two reactions can be combined for the equilibrium in solution: When Henry's law is used to calculate the denominator, care is needed with regard to units, since the Henry's law constant is commonly expressed with eight different dimensionalities. In water pH control In wastewater treatment and agriculture irrigation, carbonic acid is used to acidify the water, similarly to the sulfuric acid and sulfurous acid produced by sulfur burners. Under high CO2 partial pressure In the beverage industry, sparkling or "fizzy water" is usually referred to as carbonated water. It is made by dissolving carbon dioxide under a small positive pressure in water. Many soft drinks treated the same way also effervesce. Significant amounts of molecular H2CO3 exist in aqueous solutions subjected to pressures of multiple gigapascals (tens of thousands of atmospheres) in planetary interiors. Pressures of 0.6–1.6 GPa at 100 K, and 0.75–1.75 GPa at 300 K, are attained in the cores of large icy satellites such as Ganymede, Callisto, and Titan, where water and carbon dioxide are present. Pure carbonic acid, being denser, is expected to have sunk below the ice layers, separating them from the rocky cores of these moons. Relationship to bicarbonate and carbonate Carbonic acid is the formal Brønsted–Lowry conjugate acid of the bicarbonate anion, stable in alkaline solution. The protonation constants have been measured to great precision, but depend on overall ionic strength . The two equilibria most easily measured are as follows: where brackets indicate the concentration of species. At 25 °C, these equilibria empirically satisfy decreases with increasing , as does . In a solution absent other ions (e.g. ), these curves imply the following stepwise dissociation constants: Direct values for these constants in the literature include and . To interpret these numbers, note that two chemical species in an acid equilibrium are equiconcentrated when . In particular, the extracellular fluid in biological systems exhibits , so that carbonic acid will be almost 50%-dissociated at equilibrium. Ocean acidification The Bjerrum plot shows typical equilibrium concentrations, in solution, in seawater, of carbon dioxide and the various species derived from it, as a function of pH. As human industrialization has increased the proportion of carbon dioxide in Earth's atmosphere, the proportion of carbon dioxide dissolved in sea- and freshwater as carbonic acid is also expected to increase. This rise in dissolved acid is also expected to acidify those waters, generating a decrease in pH. It has been estimated that the increase in dissolved carbon dioxide has already caused the ocean's average surface pH to decrease by about 0.1 from pre-industrial levels. Further reading References External links Carbonic acid/bicarbonate/carbonate equilibrium in water: pH of solutions, buffer capacity, titration, and species distribution vs. pH, computed with a free spreadsheet How to calculate concentration of carbonic acid in water Carbonates Carboxylic acids Inorganic carbon compounds Mineral acids
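The speciation that a Bjerrum plot displays can be sketched as a short calculation. The following Python illustration is added here and is not part of the article; the pKa values are commonly cited approximate values for the carbonic acid system at 25 °C, stated as assumptions rather than taken from this article, and the function name is illustrative.

```python
# A minimal sketch of the speciation underlying a Bjerrum plot, assuming the
# commonly cited approximate constants pKa1 ~ 6.35 and pKa2 ~ 10.33 at 25 C.

pKa1, pKa2 = 6.35, 10.33
Ka1, Ka2 = 10.0 ** -pKa1, 10.0 ** -pKa2

def carbonate_fractions(pH):
    """Equilibrium fractions of (CO2(aq) + H2CO3, HCO3-, CO3^2-) at a given pH."""
    h = 10.0 ** -pH
    denom = h * h + h * Ka1 + Ka1 * Ka2
    return (h * h / denom, h * Ka1 / denom, Ka1 * Ka2 / denom)

for pH in (4, 6, 8, 10, 12):
    co2, hco3, co3 = carbonate_fractions(pH)
    print(f"pH {pH:>2}: CO2 {co2:.3f}  HCO3- {hco3:.3f}  CO3^2- {co3:.3f}")
```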
Carbonic acid
[ "Chemistry" ]
1,434
[ "Acids", "Inorganic compounds", "Mineral acids", "Carboxylic acids", "Functional groups", "Inorganic carbon compounds" ]
51,421
https://en.wikipedia.org/wiki/Integer%20sequence
In mathematics, an integer sequence is a sequence (i.e., an ordered list) of integers. An integer sequence may be specified explicitly by giving a formula for its nth term, or implicitly by giving a relationship between its terms. For example, the sequence 0, 1, 1, 2, 3, 5, 8, 13, ... (the Fibonacci sequence) is formed by starting with 0 and 1 and then adding any two consecutive terms to obtain the next one: an implicit description . The sequence 0, 3, 8, 15, ... is formed according to the formula n2 − 1 for the nth term: an explicit definition. Alternatively, an integer sequence may be defined by a property which members of the sequence possess and other integers do not possess. For example, we can determine whether a given integer is a perfect number, , even though we do not have a formula for the nth perfect number. Computable and definable sequences An integer sequence is computable if there exists an algorithm that, given n, calculates an, for all n > 0. The set of computable integer sequences is countable. The set of all integer sequences is uncountable (with cardinality equal to that of the continuum), and so not all integer sequences are computable. Although some integer sequences have definitions, there is no systematic way to define what it means for an integer sequence to be definable in the universe or in any absolute (model independent) sense. Suppose the set M is a transitive model of ZFC set theory. The transitivity of M implies that the integers and integer sequences inside M are actually integers and sequences of integers. An integer sequence is a definable sequence relative to M if there exists some formula P(x) in the language of set theory, with one free variable and no parameters, which is true in M for that integer sequence and false in M for all other integer sequences. In each such M, there are definable integer sequences that are not computable, such as sequences that encode the Turing jumps of computable sets. For some transitive models M of ZFC, every sequence of integers in M is definable relative to M; for others, only some integer sequences are. There is no systematic way to define in M itself the set of sequences definable relative to M and that set may not even exist in some such M. Similarly, the map from the set of formulas that define integer sequences in M to the integer sequences they define is not definable in M and may not exist in M. However, in any model that does possess such a definability map, some integer sequences in the model will not be definable relative to the model. If M contains all integer sequences, then the set of integer sequences definable in M will exist in M and be countable and countable in M. Complete sequences A sequence of positive integers is called a complete sequence if every positive integer can be expressed as a sum of values in the sequence, using each value at most once. 
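As an added illustration (not part of the article), the following Python sketch generates initial terms of the two example sequences above: the Fibonacci sequence from its implicit recurrence and the sequence n2 − 1 from its explicit formula. The function names are illustrative.

```python
# A minimal sketch contrasting an implicit (recurrence) definition with an
# explicit (formula) definition of an integer sequence.

def fibonacci(k):
    """First k terms of 0, 1, 1, 2, 3, 5, 8, 13, ... (implicit definition)."""
    terms = [0, 1]
    while len(terms) < k:
        terms.append(terms[-1] + terms[-2])
    return terms[:k]

def square_minus_one(k):
    """First k terms of 0, 3, 8, 15, ... given by the formula n*n - 1, n >= 1."""
    return [n * n - 1 for n in range(1, k + 1)]

print(fibonacci(8))          # [0, 1, 1, 2, 3, 5, 8, 13]
print(square_minus_one(4))   # [0, 3, 8, 15]
```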
Examples Integer sequences that have their own name include: Abundant numbers Baum–Sweet sequence Bell numbers Binomial coefficients Carmichael numbers Catalan numbers Composite numbers Deficient numbers Euler numbers Even and odd numbers Factorial numbers Fibonacci numbers Fibonacci word Figurate numbers Golomb sequence Happy numbers Highly composite numbers Highly totient numbers Home primes Hyperperfect numbers Juggler sequence Kolakoski sequence Lucky numbers Lucas numbers Motzkin numbers Natural numbers Padovan numbers Partition numbers Perfect numbers Practical numbers Prime numbers Pseudoprime numbers Recamán's sequence Regular paperfolding sequence Rudin–Shapiro sequence Semiperfect numbers Semiprime numbers Superperfect numbers Triangular numbers Thue–Morse sequence Ulam numbers Weird numbers Wolstenholme number See also Constant-recursive sequence On-Line Encyclopedia of Integer Sequences List of OEIS sequences References . External links Journal of Integer Sequences. Articles are freely available online. Arithmetic functions
Integer sequence
[ "Mathematics" ]
834
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Arithmetic functions", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
51,423
https://en.wikipedia.org/wiki/P-adic%20number
In number theory, given a prime number , the -adic numbers form an extension of the rational numbers which is distinct from the real numbers, though with some similar properties; -adic numbers can be written in a form similar to (possibly infinite) decimals, but with digits based on a prime number rather than ten, and extending to the left rather than to the right. For example, comparing the expansion of the rational number in base vs. the -adic expansion, Formally, given a prime number , a -adic number can be defined as a series where is an integer (possibly negative), and each is an integer such that A -adic integer is a -adic number such that In general the series that represents a -adic number is not convergent in the usual sense, but it is convergent for the -adic absolute value where is the least integer such that (if all are zero, one has the zero -adic number, which has as its -adic absolute value). Every rational number can be uniquely expressed as the sum of a series as above, with respect to the -adic absolute value. This allows considering rational numbers as special -adic numbers, and alternatively defining the -adic numbers as the completion of the rational numbers for the -adic absolute value, exactly as the real numbers are the completion of the rational numbers for the usual absolute value. -adic numbers were first described by Kurt Hensel in 1897, though, with hindsight, some of Ernst Kummer's earlier work can be interpreted as implicitly using -adic numbers. Motivation Roughly speaking, modular arithmetic modulo a positive integer consists of "approximating" every integer by the remainder of its division by , called its residue modulo . The main property of modular arithmetic is that the residue modulo of the result of a succession of operations on integers is the same as the result of the same succession of operations on residues modulo . If one knows that the absolute value of the result is less than , this allows a computation of the result which does not involve any integer larger than . For larger results, an old method, still in common use, consists of using several small moduli that are pairwise coprime, and applying the Chinese remainder theorem for recovering the result modulo the product of the moduli. Another method discovered by Kurt Hensel consists of using a prime modulus , and applying Hensel's lemma for recovering iteratively the result modulo If the process is continued infinitely, this provides eventually a result which is a -adic number. Basic lemmas The theory of -adic numbers is fundamentally based on the two following lemmas Every nonzero rational number can be written where , , and are integers and neither nor is divisible by . The exponent is uniquely determined by the rational number and is called its -adic valuation (this definition is a particular case of a more general definition, given below). The proof of the lemma results directly from the fundamental theorem of arithmetic. Every nonzero rational number of valuation can be uniquely written where is a rational number of valuation greater than , and is an integer such that The proof of this lemma results from modular arithmetic: By the above lemma, where and are integers coprime with . By Bezout lemma, there exist integers and , with , such that Setting (hence ), we have To show the uniqueness of this representation, observe that if with and , there holds by difference with and . Write , where is coprime to ; then , which is possible only if and . Hence and . 
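The first lemma above can be illustrated computationally. The following Python sketch is added here as an illustration and is not part of the article; it computes the p-adic valuation of a nonzero rational and its decomposition as a power of p times a ratio whose numerator and denominator are both prime to p. The function names are illustrative.

```python
# A minimal sketch of the first lemma: write a nonzero rational r as
# p**k * (a/b) with neither a nor b divisible by p.

from fractions import Fraction

def padic_valuation(r, p):
    """p-adic valuation of a nonzero rational r (the exponent of p in r)."""
    r = Fraction(r)
    k = 0
    num, den = r.numerator, r.denominator
    while num % p == 0:
        num //= p
        k += 1
    while den % p == 0:
        den //= p
        k -= 1
    return k

def padic_unit_part(r, p):
    """Return (k, u) with r = p**k * u and u a rational of valuation zero."""
    k = padic_valuation(r, p)
    return k, Fraction(r) / Fraction(p) ** k

print(padic_valuation(Fraction(45, 8), 2))   # -3, since 45/8 = 2**-3 * 45
print(padic_unit_part(Fraction(45, 8), 2))   # (-3, Fraction(45, 1))
```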
The above process can be iterated starting from instead of , giving the following. Given a nonzero rational number of valuation and a positive integer , there are a rational number of nonnegative valuation and uniquely defined nonnegative integers less than such that and The -adic numbers are essentially obtained by continuing this infinitely to produce an infinite series. p-adic series The -adic numbers are commonly defined by means of -adic series. A -adic series is a formal power series of the form where is an integer and the are rational numbers that either are zero or have a nonnegative valuation (that is, the denominator of is not divisible by ). Every rational number may be viewed as a -adic series with a single nonzero term, consisting of its factorization of the form with and both coprime with . Two -adic series and are equivalent if there is an integer such that, for every integer the rational number is zero or has a -adic valuation greater than . A -adic series is normalized if either all are integers such that and or all are zero. In the latter case, the series is called the zero series. Every -adic series is equivalent to exactly one normalized series. This normalized series is obtained by a sequence of transformations, which are equivalences of series; see § Normalization of a -adic series, below. In other words, the equivalence of -adic series is an equivalence relation, and each equivalence class contains exactly one normalized -adic series. The usual operations of series (addition, subtraction, multiplication, division) are compatible with equivalence of -adic series. That is, denoting the equivalence with , if , and are nonzero -adic series such that one has The -adic numbers are often defined as the equivalence classes of -adic series, in a similar way as the definition of the real numbers as equivalence classes of Cauchy sequences. The uniqueness property of normalization, allows uniquely representing any -adic number by the corresponding normalized -adic series. The compatibility of the series equivalence leads almost immediately to basic properties of -adic numbers: Addition, multiplication and multiplicative inverse of -adic numbers are defined as for formal power series, followed by the normalization of the result. With these operations, the -adic numbers form a field, which is an extension field of the rational numbers. The valuation of a nonzero -adic number , commonly denoted is the exponent of in the first non zero term of the corresponding normalized series; the valuation of zero is The -adic absolute value of a nonzero -adic number , is for the zero -adic number, one has Normalization of a p-adic series Starting with the series the first above lemma allows getting an equivalent series such that the -adic valuation of is zero. For that, one considers the first nonzero If its -adic valuation is zero, it suffices to change into , that is to start the summation from . Otherwise, the -adic valuation of is and where the valuation of is zero; so, one gets an equivalent series by changing to and to Iterating this process, one gets eventually, possibly after infinitely many steps, an equivalent series that either is the zero series or is a series such that the valuation of is zero. 
Then, if the series is not normalized, consider the first nonzero that is not an integer in the interval The second above lemma allows writing it one gets n equivalent series by replacing with and adding to Iterating this process, possibly infinitely many times, provides eventually the desired normalized -adic series. Definition There are several equivalent definitions of -adic numbers. The one that is given here is relatively elementary, since it does not involve any other mathematical concepts than those introduced in the preceding sections. Other equivalent definitions use completion of a discrete valuation ring (see ), completion of a metric space (see ), or inverse limits (see ). A -adic number can be defined as a normalized -adic series. Since there are other equivalent definitions that are commonly used, one says often that a normalized -adic series represents a -adic number, instead of saying that it is a -adic number. One can say also that any -adic series represents a -adic number, since every -adic series is equivalent to a unique normalized -adic series. This is useful for defining operations (addition, subtraction, multiplication, division) of -adic numbers: the result of such an operation is obtained by normalizing the result of the corresponding operation on series. This well defines operations on -adic numbers, since the series operations are compatible with equivalence of -adic series. With these operations, -adic numbers form a field called the field of -adic numbers and denoted or There is a unique field homomorphism from the rational numbers into the -adic numbers, which maps a rational number to its -adic expansion. The image of this homomorphism is commonly identified with the field of rational numbers. This allows considering the -adic numbers as an extension field of the rational numbers, and the rational numbers as a subfield of the -adic numbers. The valuation of a nonzero -adic number , commonly denoted is the exponent of in the first nonzero term of every -adic series that represents . By convention, that is, the valuation of zero is This valuation is a discrete valuation. The restriction of this valuation to the rational numbers is the -adic valuation of that is, the exponent in the factorization of a rational number as with both and coprime with . p-adic integers The -adic integers are the -adic numbers with a nonnegative valuation. A -adic integer can be represented as a sequence of residues mod for each integer , satisfying the compatibility relations for . Every integer is a -adic integer (including zero, since ). The rational numbers of the form with coprime with and are also -adic integers (for the reason that has an inverse mod for every ). The -adic integers form a commutative ring, denoted or , that has the following properties. It is an integral domain, since it is a subring of a field, or since the first term of the series representation of the product of two non zero -adic series is the product of their first terms. The units (invertible elements) of are the -adic numbers of valuation zero. It is a principal ideal domain, such that each ideal is generated by a power of . It is a local ring of Krull dimension one, since its only prime ideals are the zero ideal and the ideal generated by , the unique maximal ideal. It is a discrete valuation ring, since this results from the preceding properties. 
It is the completion of the local ring which is the localization of at the prime ideal The last property provides a definition of the -adic numbers that is equivalent to the above one: the field of the -adic numbers is the field of fractions of the completion of the localization of the integers at the prime ideal generated by . Topological properties The -adic valuation allows defining an absolute value on -adic numbers: the -adic absolute value of a nonzero -adic number is where is the -adic valuation of . The -adic absolute value of is This is an absolute value that satisfies the strong triangle inequality since, for every and one has if and only if Moreover, if one has This makes the -adic numbers a metric space, and even an ultrametric space, with the -adic distance defined by As a metric space, the -adic numbers form the completion of the rational numbers equipped with the -adic absolute value. This provides another way for defining the -adic numbers. However, the general construction of a completion can be simplified in this case, because the metric is defined by a discrete valuation (in short, one can extract from every Cauchy sequence a subsequence such that the differences between two consecutive terms have strictly decreasing absolute values; such a subsequence is the sequence of the partial sums of a -adic series, and thus a unique normalized -adic series can be associated to every equivalence class of Cauchy sequences; so, for building the completion, it suffices to consider normalized -adic series instead of equivalence classes of Cauchy sequences). As the metric is defined from a discrete valuation, every open ball is also closed. More precisely, the open ball equals the closed ball where is the least integer such that Similarly, where is the greatest integer such that This implies that the -adic numbers form a locally compact space, and the -adic integers—that is, the ball —form a compact space. p-adic expansion of rational numbers The decimal expansion of a positive rational number is its representation as a series where is an integer and each is also an integer such that This expansion can be computed by long division of the numerator by the denominator, which is itself based on the following theorem: If is a rational number such that there is an integer such that and with The decimal expansion is obtained by repeatedly applying this result to the remainder which in the iteration assumes the role of the original rational number . The -adic expansion of a rational number is defined similarly, but with a different division step. More precisely, given a fixed prime number , every nonzero rational number can be uniquely written as where is a (possibly negative) integer, and are coprime integers both coprime with , and is positive. The integer is the -adic valuation of , denoted and is its -adic absolute value, denoted (the absolute value is small when the valuation is large). The division step consists of writing where is an integer such that and is either zero, or a rational number such that (that is, ). The -adic expansion of is the formal power series obtained by repeating indefinitely the above division step on successive remainders. In a -adic expansion, all are integers such that If with , the process stops eventually with a zero remainder; in this case, the series is completed by trailing terms with a zero coefficient, and is the representation of in base-. 
The existence and the computation of the -adic expansion of a rational number results from Bézout's identity in the following way. If, as above, and and are coprime, there exist integers and such that So Then, the Euclidean division of by gives with This gives the division step as so that in the iteration is the new rational number. The uniqueness of the division step and of the whole -adic expansion is easy: if one has This means divides Since and the following must be true: and Thus, one gets and since divides it must be that The -adic expansion of a rational number is a series that converges to the rational number, if one applies the definition of a convergent series with the -adic absolute value. In the standard -adic notation, the digits are written in the same order as in a standard base- system, namely with the powers of the base increasing to the left. This means that the production of the digits is reversed and the limit happens on the left hand side. The -adic expansion of a rational number is eventually periodic. Conversely, a series with converges (for the -adic absolute value) to a rational number if and only if it is eventually periodic; in this case, the series is the -adic expansion of that rational number. The proof is similar to that of the similar result for repeating decimals. Example Let us compute the 5-adic expansion of Bézout's identity for 5 and the denominator 3 is (for larger examples, this can be computed with the extended Euclidean algorithm). Thus For the next step, one has to expand (the factor 5 has to be viewed as a "shift" of the -adic valuation, similar to the basis of any number expansion, and thus it should not be itself expanded). To expand , we start from the same Bézout's identity and multiply it by , giving The "integer part" is not in the right interval. So, one has to use Euclidean division by for getting giving and the expansion in the first step becomes Similarly, one has and As the "remainder" has already been found, the process can be continued easily, giving coefficients for odd powers of five, and for even powers. Or in the standard 5-adic notation with the ellipsis on the left hand side. Positional notation It is possible to use a positional notation similar to that which is used to represent numbers in base . Let be a normalized -adic series, i.e. each is an integer in the interval One can suppose that by setting for (if ), and adding the resulting zero terms to the series. If the positional notation consists of writing the consecutively, ordered by decreasing values of , often with appearing on the right as an index: So, the computation of the example above shows that and When a separating dot is added before the digits with negative index, and, if the index is present, it appears just after the separating dot. For example, and If a -adic representation is finite on the left (that is, for large values of ), then it has the value of a nonnegative rational number of the form with integers. These rational numbers are exactly the nonnegative rational numbers that have a finite representation in base . For these rational numbers, the two representations are the same. 
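The division step described above translates into a short computation. The following Python sketch is an added illustration, not part of the article; it produces the first digits of the p-adic expansion of a rational a/b in the valuation-zero case (b not divisible by p) and reproduces the 5-adic digits of 1/3 from the example above. The function name is illustrative.

```python
# A minimal sketch of the division step: the digit is d = a * b^{-1} mod p,
# and the next numerator is (a - d*b)/p while the denominator b is unchanged.

def padic_digits(a, b, p, n):
    """First n digits c_0, c_1, ... of the p-adic expansion of a/b (b not divisible by p)."""
    digits = []
    for _ in range(n):
        d = (a * pow(b, -1, p)) % p   # digit in {0, ..., p-1}
        digits.append(d)
        a = (a - d * b) // p          # exact division: a - d*b is divisible by p
    return digits

# 5-adic expansion of 1/3: c_0 = 2, then the digits 3 and 1 alternate,
# matching the worked example above.
print(padic_digits(1, 3, 5, 6))   # [2, 3, 1, 3, 1, 3]
```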
Modular properties The quotient ring may be identified with the ring of the integers modulo This can be shown by remarking that every -adic integer, represented by its normalized -adic series, is congruent modulo with its partial sum whose value is an integer in the interval A straightforward verification shows that this defines a ring isomorphism from to The inverse limit of the rings is defined as the ring formed by the sequences such that and for every . The mapping that maps a normalized -adic series to the sequence of its partial sums is a ring isomorphism from to the inverse limit of the This provides another way for defining -adic integers (up to an isomorphism). This definition of -adic integers is specially useful for practical computations, as allowing building -adic integers by successive approximations. For example, for computing the -adic (multiplicative) inverse of an integer, one can use Newton's method, starting from the inverse modulo ; then, each Newton step computes the inverse modulo from the inverse modulo The same method can be used for computing the -adic square root of an integer that is a quadratic residue modulo . This seems to be the fastest known method for testing whether a large integer is a square: it suffices to test whether the given integer is the square of the value found in . Applying Newton's method to find the square root requires to be larger than twice the given integer, which is quickly satisfied. Hensel lifting is a similar method that allows to "lift" the factorization modulo of a polynomial with integer coefficients to a factorization modulo for large values of . This is commonly used by polynomial factorization algorithms. Notation There are several different conventions for writing -adic expansions. So far this article has used a notation for -adic expansions in which powers of increase from right to left. With this right-to-left notation the 3-adic expansion of for example, is written as When performing arithmetic in this notation, digits are carried to the left. It is also possible to write -adic expansions so that the powers of increase from left to right, and digits are carried to the right. With this left-to-right notation the 3-adic expansion of is -adic expansions may be written with other sets of digits instead of  }. For example, the -adic expansion of can be written using balanced ternary digits }, with representing negative one, as In fact any set of integers which are in distinct residue classes modulo may be used as -adic digits. In number theory, Teichmüller representatives are sometimes used as digits. is a variant of the -adic representation of rational numbers that was proposed in 1979 by Eric Hehner and Nigel Horspool for implementing on computers the (exact) arithmetic with these numbers. Cardinality Both and are uncountable and have the cardinality of the continuum. For this results from the -adic representation, which defines a bijection of on the power set For this results from its expression as a countably infinite union of copies of : Algebraic closure contains and is a field of characteristic . Because can be written as sum of squares, cannot be turned into an ordered field. The field of real numbers has only a single proper algebraic extension: the complex numbers . In other words, this quadratic extension is already algebraically closed. By contrast, the algebraic closure of , denoted has infinite degree, that is, has infinitely many inequivalent algebraic extensions. 
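The successive-approximation idea described under "Modular properties" above, computing a p-adic multiplicative inverse by Newton's method with the precision doubling at each step, can be sketched as follows. This is an added illustration, not part of the article, and the function name is an assumption.

```python
# A minimal sketch of Newton iteration for a p-adic inverse: starting from the
# inverse modulo p, each step x -> x*(2 - a*x) lifts it to the next modulus.

def lift_inverse(a, p, k):
    """Return x with a*x congruent to 1 modulo p**(2**k), assuming gcd(a, p) = 1."""
    x = pow(a, -1, p)            # inverse modulo p: the starting approximation
    modulus = p
    for _ in range(k):
        modulus *= modulus       # square the modulus: p**m -> p**(2m)
        x = x * (2 - a * x) % modulus
    return x

a, p = 3, 5
x = lift_inverse(a, p, 3)        # inverse of 3 modulo 5**8
print(x, (a * x) % 5**8)         # prints 260417 1, so a*x is 1 modulo 5**8
```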
Also contrasting the case of real numbers, although there is a unique extension of the -adic valuation to the latter is not (metrically) complete. Its (metric) completion is called or . Here an end is reached, as is algebraically closed. However unlike this field is not locally compact. and are isomorphic as rings, so we may regard as endowed with an exotic metric. The proof of existence of such a field isomorphism relies on the axiom of choice, and does not provide an explicit example of such an isomorphism (that is, it is not constructive). If is any finite Galois extension of , the Galois group is solvable. Thus, the Galois group is prosolvable. Multiplicative group contains the -th cyclotomic field () if and only if . For instance, the -th cyclotomic field is a subfield of if and only if , or . In particular, there is no multiplicative -torsion in if . Also, is the only non-trivial torsion element in . Given a natural number , the index of the multiplicative group of the -th powers of the non-zero elements of in is finite. The number , defined as the sum of reciprocals of factorials, is not a member of any -adic field; but for . For one must take at least the fourth power. (Thus a number with similar properties as — namely a -th root of — is a member of for all .) Local–global principle Helmut Hasse's local–global principle is said to hold for an equation if it can be solved over the rational numbers if and only if it can be solved over the real numbers and over the -adic numbers for every prime . This principle holds, for example, for equations given by quadratic forms, but fails for higher polynomials in several indeterminates. Rational arithmetic with Hensel lifting Generalizations and related concepts The reals and the -adic numbers are the completions of the rationals; it is also possible to complete other fields, for instance general algebraic number fields, in an analogous way. This will be described now. Suppose D is a Dedekind domain and E is its field of fractions. Pick a non-zero prime ideal P of D. If x is a non-zero element of E, then xD is a fractional ideal and can be uniquely factored as a product of positive and negative powers of non-zero prime ideals of D. We write ordP(x) for the exponent of P in this factorization, and for any choice of number c greater than 1 we can set Completing with respect to this absolute value |⋅|P yields a field EP, the proper generalization of the field of p-adic numbers to this setting. The choice of c does not change the completion (different choices yield the same concept of Cauchy sequence, so the same completion). It is convenient, when the residue field D/P is finite, to take for c the size of D/P. For example, when E is a number field, Ostrowski's theorem says that every non-trivial non-Archimedean absolute value on E arises as some |⋅|P. The remaining non-trivial absolute values on E arise from the different embeddings of E into the real or complex numbers. (In fact, the non-Archimedean absolute values can be considered as simply the different embeddings of E into the fields Cp, thus putting the description of all the non-trivial absolute values of a number field on a common footing.) Often, one needs to simultaneously keep track of all the above-mentioned completions when E is a number field (or more generally a global field), which are seen as encoding "local" information. This is accomplished by adele rings and idele groups. p-adic integers can be extended to p-adic solenoids . 
There is a map from to the circle group whose fibers are the p-adic integers , in analogy to how there is a map from to the circle whose fibers are . See also Non-Archimedean p-adic quantum mechanics p-adic Hodge theory p-adic Teichmuller theory p-adic analysis p-adic valuation 1 + 2 + 4 + 8 + ... k-adic notation C-minimal theory Hensel's lemma Locally compact field Mahler's theorem Profinite integer Volkenborn integral Two's complement Footnotes Notes Citations References . — Translation into English by John Stillwell of Theorie der algebraischen Functionen einer Veränderlichen (1882). Further reading External links p-adic number at Springer On-line Encyclopaedia of Mathematics Field (mathematics) Number theory
P-adic number
[ "Mathematics" ]
5,322
[ "P-adic numbers", "Discrete mathematics", "Number theory" ]
51,426
https://en.wikipedia.org/wiki/Cantor%27s%20diagonal%20argument
Cantor's diagonal argument (among various similar names) is a mathematical proof that there are infinite sets which cannot be put into one-to-one correspondence with the infinite set of natural numbers; informally, that there are sets which in some sense contain more elements than there are positive integers. Such sets are now called uncountable sets, and the size of infinite sets is treated by the theory of cardinal numbers, which Cantor began. Georg Cantor published this proof in 1891, but it was not his first proof of the uncountability of the real numbers, which appeared in 1874. However, it demonstrates a general technique that has since been used in a wide range of proofs, including the first of Gödel's incompleteness theorems and Turing's answer to the Entscheidungsproblem. Diagonalization arguments are often also the source of contradictions like Russell's paradox and Richard's paradox. Uncountable set Cantor considered the set T of all infinite sequences of binary digits (i.e. each digit is zero or one). He begins with a constructive proof of the following lemma: If s1, s2, ... , sn, ... is any enumeration of elements from T, then an element s of T can be constructed that doesn't correspond to any sn in the enumeration. The proof starts with an enumeration of elements from T, for example
s1 = (0, 0, 0, 0, 0, 0, 0, ...)
s2 = (1, 1, 1, 1, 1, 1, 1, ...)
s3 = (0, 1, 0, 1, 0, 1, 0, ...)
s4 = (1, 0, 1, 0, 1, 0, 1, ...)
s5 = (1, 1, 0, 1, 0, 1, 1, ...)
s6 = (0, 0, 1, 1, 0, 1, 1, ...)
s7 = (1, 0, 0, 0, 1, 0, 0, ...)
...
Next, a sequence s is constructed by choosing the 1st digit as complementary to the 1st digit of s1 (swapping 0s for 1s and vice versa), the 2nd digit as complementary to the 2nd digit of s2, the 3rd digit as complementary to the 3rd digit of s3, and generally for every n, the nth digit as complementary to the nth digit of sn. For the example above, this yields
s1 = (0, 0, 0, 0, 0, 0, 0, ...)
s2 = (1, 1, 1, 1, 1, 1, 1, ...)
s3 = (0, 1, 0, 1, 0, 1, 0, ...)
s4 = (1, 0, 1, 0, 1, 0, 1, ...)
s5 = (1, 1, 0, 1, 0, 1, 1, ...)
s6 = (0, 0, 1, 1, 0, 1, 1, ...)
s7 = (1, 0, 0, 0, 1, 0, 0, ...)
...
s  = (1, 0, 1, 1, 1, 0, 1, ...)
By construction, s is a member of T that differs from each sn, since their nth digits differ (compare the diagonal entries in the example above). Hence, s cannot occur in the enumeration. Based on this lemma, Cantor then uses a proof by contradiction to show that: The set T is uncountable. The proof starts by assuming that T is countable. Then all its elements can be written in an enumeration s1, s2, ... , sn, ... . Applying the previous lemma to this enumeration produces a sequence s that is a member of T, but is not in the enumeration. However, if T is enumerated, then every member of T, including this s, is in the enumeration. This contradiction implies that the original assumption is false. Therefore, T is uncountable. Real numbers The uncountability of the real numbers was already established by Cantor's first uncountability proof, but it also follows from the above result.
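As an added illustration (not part of the article), the following Python sketch applies the diagonal rule to the finite example enumeration above. The real argument concerns infinite enumerations; this only mirrors the bookkeeping on a finite prefix.

```python
# A minimal sketch of the diagonal construction: the n-th digit of s is chosen
# to differ from the n-th digit of s_n.

enumeration = [
    [0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 1, 1],
    [0, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 0, 1, 0, 0],
]

# Flip the n-th digit of the n-th sequence.
s = [1 - enumeration[n][n] for n in range(len(enumeration))]
print(s)  # [1, 0, 1, 1, 1, 0, 1]; differs from each s_n in position n
```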
To prove this, an injection will be constructed from the set T of infinite binary strings to the set R of real numbers. Since T is uncountable, the image of this function, which is a subset of R, is uncountable. Therefore, R is uncountable. Also, by using a method of construction devised by Cantor, a bijection will be constructed between T and R. Therefore, T and R have the same cardinality, which is called the "cardinality of the continuum" and is usually denoted by or . An injection from T to R is given by mapping binary strings in T to decimal fractions, such as mapping t = 0111... to the decimal 0.0111.... This function, defined by , is an injection because it maps different strings to different numbers. Constructing a bijection between T and R is slightly more complicated. Instead of mapping 0111... to the decimal 0.0111..., it can be mapped to the base b number: 0.0111...b. This leads to the family of functions: . The functions are injections, except for . This function will be modified to produce a bijection between T and R. General sets A generalized form of the diagonal argument was used by Cantor to prove Cantor's theorem: for every set S, the power set of S—that is, the set of all subsets of S (here written as P(S))—cannot be in bijection with S itself. This proof proceeds as follows: Let f be any function from S to P(S). It suffices to prove f cannot be surjective. That means that some member T of P(S), i.e. some subset of S, is not in the image of f. As a candidate consider the set: . For every s in S, either s is in T or not. If s is in T, then by definition of T, s is not in f(s), so T is not equal to f(s). On the other hand, if s is not in T, then by definition of T, s is in f(s), so again T is not equal to f(s); cf. picture. For a more complete account of this proof, see Cantor's theorem. Consequences Ordering of cardinals With equality defined as the existence of a bijection between their underlying sets, Cantor also defines binary predicate of cardinalities and in terms of the existence of injections between and . It has the properties of a preorder and is here written "". One can embed the naturals into the binary sequences, thus proving various injection existence statements explicitly, so that in this sense , where denotes the function space . But following from the argument in the previous sections, there is no surjection and so also no bijection, i.e. the set is uncountable. For this one may write , where "" is understood to mean the existence of an injection together with the proven absence of a bijection (as opposed to alternatives such as the negation of Cantor's preorder, or a definition in terms of assigned ordinals). Also in this sense, as has been shown, and at the same time it is the case that , for all sets . Assuming the law of excluded middle, characteristic functions surject onto powersets, and then . So the uncountable is also not enumerable and it can also be mapped onto . Classically, the Schröder–Bernstein theorem is valid and says that any two sets which are in the injective image of one another are in bijection as well. Here, every unbounded subset of is then in bijection with itself, and every subcountable set (a property in terms of surjections) is then already countable, i.e. in the surjective image of . In this context the possibilities are then exhausted, making "" a non-strict partial order, or even a total order when assuming choice. 
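The general diagonal set used in the proof of Cantor's theorem can likewise be illustrated on a small finite example. The sketch below is an added illustration with an arbitrarily chosen f; it is not part of the article.

```python
# A minimal sketch of the diagonal set T = {s in S : s not in f(s)} for a small
# finite S and an arbitrary f: S -> P(S). T is never a value of f.

S = {0, 1, 2}
f = {0: set(), 1: {0, 1}, 2: {1, 2}}          # an arbitrary sample function S -> P(S)

T = {s for s in S if s not in f[s]}           # the diagonal set
print(T)                                       # {0}
print(any(f[s] == T for s in S))               # False: T is not in the image of f
```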
The diagonal argument thus establishes that, although both sets under consideration are infinite, there are actually more infinite sequences of ones and zeros than there are natural numbers. Cantor's result then also implies that the notion of the set of all sets is inconsistent: If were the set of all sets, then would at the same time be bigger than and a subset of . In the absence of excluded middle Also in constructive mathematics, there is no surjection from the full domain onto the space of functions or onto the collection of subsets , which is to say these two collections are uncountable. Again using "" for proven injection existence in conjunction with bijection absence, one has and . Further, , as previously noted. Likewise, , and of course , also in constructive set theory. It is however harder or impossible to order ordinals and also cardinals, constructively. For example, the Schröder–Bernstein theorem requires the law of excluded middle. In fact, the standard ordering on the reals, extending the ordering of the rational numbers, is not necessarily decidable either. Neither are most properties of interesting classes of functions decidable, by Rice's theorem, i.e. the set of counting numbers for the subcountable sets may not be recursive and can thus fail to be countable. The elaborate collection of subsets of a set is constructively not exchangeable with the collection of its characteristic functions. In an otherwise constructive context (in which the law of excluded middle is not taken as axiom), it is consistent to adopt non-classical axioms that contradict consequences of the law of excluded middle. Uncountable sets such as or may be asserted to be subcountable. This is a notion of size that is redundant in the classical context, but otherwise need not imply countability. The existence of injections from the uncountable or into is here possible as well. So the cardinal relation fails to be antisymmetric. Consequently, also in the presence of function space sets that are even classically uncountable, intuitionists do not accept this relation to constitute a hierarchy of transfinite sizes. When the axiom of powerset is not adopted, in a constructive framework even the subcountability of all sets is then consistent. That all said, in common set theories, the non-existence of a set of all sets also already follows from Predicative Separation. In a set theory, theories of mathematics are modeled. Weaker logical axioms mean fewer constraints and so allow for a richer class of models. A set may be identified as a model of the field of real numbers when it fulfills some axioms of real numbers or a constructive rephrasing thereof. Various models have been studied, such as the Cauchy reals or the Dedekind reals, among others. The former relate to quotients of sequences while the later are well-behaved cuts taken from a powerset, if they exist. In the presence of excluded middle, those are all isomorphic and uncountable. Otherwise, variants of the Dedekind reals can be countable or inject into the naturals, but not jointly. When assuming countable choice, constructive Cauchy reals even without an explicit modulus of convergence are then Cauchy-complete and Dedekind reals simplify so as to become isomorphic to them. Indeed, here choice also aids diagonal constructions and when assuming it, Cauchy-complete models of the reals are uncountable. 
Open questions Motivated by the insight that the set of real numbers is "bigger" than the set of natural numbers, one is led to ask if there is a set whose cardinality is "between" that of the integers and that of the reals. This question leads to the famous continuum hypothesis. Similarly, the question of whether there exists a set whose cardinality is between |S| and |P(S)| for some infinite S leads to the generalized continuum hypothesis. Diagonalization in broader context Russell's paradox has shown that set theory that includes an unrestricted comprehension scheme is contradictory. Note that there is a similarity between the construction of T and the set in Russell's paradox. Therefore, depending on how we modify the axiom scheme of comprehension in order to avoid Russell's paradox, arguments such as the non-existence of a set of all sets may or may not remain valid. Analogues of the diagonal argument are widely used in mathematics to prove the existence or nonexistence of certain objects. For example, the conventional proof of the unsolvability of the halting problem is essentially a diagonal argument. Also, diagonalization was originally used to show the existence of arbitrarily hard complexity classes and played a key role in early attempts to prove P does not equal NP. Version for Quine's New Foundations The above proof fails for W. V. Quine's "New Foundations" set theory (NF). In NF, the naive axiom scheme of comprehension is modified to avoid the paradoxes by introducing a kind of "local" type theory. In this axiom scheme, { s ∈ S: s ∉ f(s) } is not a set — i.e., does not satisfy the axiom scheme. On the other hand, we might try to create a modified diagonal argument by noticing that { s ∈ S: s ∉ f({s}) } is a set in NF. In which case, if P1(S) is the set of one-element subsets of S and f is a proposed bijection from P1(S) to P(S), one is able to use proof by contradiction to prove that |P1(S)| < |P(S)|. The proof follows by the fact that if f were indeed a map onto P(S), then we could find r in S, such that f({r}) coincides with the modified diagonal set, above. We would conclude that if r is not in f({r}), then r is in f({r}) and vice versa. It is not possible to put P1(S) in a one-to-one relation with S, as the two have different types, and so any function so defined would violate the typing rules for the comprehension scheme. See also Cantor's first uncountability proof Controversy over Cantor's theory Diagonal lemma Notes References External links Cantor's Diagonal Proof at MathPages Set theory Theorems in the foundations of mathematics Mathematical proofs Infinity Arguments Cardinal numbers Georg Cantor
Cantor's diagonal argument
[ "Mathematics" ]
3,479
[ "Mathematical theorems", "Cardinal numbers", "Foundations of mathematics", "Set theory", "Mathematical logic", "Mathematical objects", "Infinity", "Numbers", "nan", "Mathematical problems", "Theorems in the foundations of mathematics" ]
51,429
https://en.wikipedia.org/wiki/Hyperreal%20number
In mathematics, hyperreal numbers are an extension of the real numbers to include certain classes of infinite and infinitesimal numbers. A hyperreal number is said to be finite if, and only if, for some integer . is said to be infinitesimal if, and only if, for all positive integers . The term "hyper-real" was introduced by Edwin Hewitt in 1948. The hyperreal numbers satisfy the transfer principle, a rigorous version of Leibniz's heuristic law of continuity. The transfer principle states that true first-order statements about R are also valid in *R. For example, the commutative law of addition, , holds for the hyperreals just as it does for the reals; since R is a real closed field, so is *R. Since for all integers n, one also has for all hyperintegers . The transfer principle for ultrapowers is a consequence of Łoś's theorem of 1955. Concerns about the soundness of arguments involving infinitesimals date back to ancient Greek mathematics, with Archimedes replacing such proofs with ones using other techniques such as the method of exhaustion. In the 1960s, Abraham Robinson proved that the hyperreals were logically consistent if and only if the reals were. This put to rest the fear that any proof involving infinitesimals might be unsound, provided that they were manipulated according to the logical rules that Robinson delineated. The application of hyperreal numbers and in particular the transfer principle to problems of analysis is called nonstandard analysis. One immediate application is the definition of the basic concepts of analysis such as the derivative and integral in a direct fashion, without passing via logical complications of multiple quantifiers. Thus, the derivative of f(x) becomes for an infinitesimal , where st(⋅) denotes the standard part function, which "rounds off" each finite hyperreal to the nearest real. Similarly, the integral is defined as the standard part of a suitable infinite sum. Transfer principle The idea of the hyperreal system is to extend the real numbers R to form a system *R that includes infinitesimal and infinite numbers, but without changing any of the elementary axioms of algebra. Any statement of the form "for any number x ..." that is true for the reals is also true for the hyperreals. For example, the axiom that states "for any number x, x + 0 = x" still applies. The same is true for quantification over several numbers, e.g., "for any numbers x and y, xy = yx." This ability to carry over statements from the reals to the hyperreals is called the transfer principle. However, statements of the form "for any set of numbers S ..." may not carry over. The only properties that differ between the reals and the hyperreals are those that rely on quantification over sets, or other higher-level structures such as functions and relations, which are typically constructed out of sets. Each real set, function, and relation has its natural hyperreal extension, satisfying the same first-order properties. The kinds of logical sentences that obey this restriction on quantification are referred to as statements in first-order logic. The transfer principle, however, does not mean that R and *R have identical behavior. For instance, in *R there exists an element ω such that but there is no such number in R. (In other words, *R is not Archimedean.) This is possible because the nonexistence of ω cannot be expressed as a first-order statement. 
Use in analysis Informal notations for non-real quantities have historically appeared in calculus in two contexts: as infinitesimals, like dx, and as the symbol ∞, used, for example, in limits of integration of improper integrals. As an example of the transfer principle, the statement that for any nonzero number x, 2x ≠ x, is true for the real numbers, and it is in the form required by the transfer principle, so it is also true for the hyperreal numbers. This shows that it is not possible to use a generic symbol such as ∞ for all the infinite quantities in the hyperreal system; infinite quantities differ in magnitude from other infinite quantities, and infinitesimals from other infinitesimals. Similarly, the casual use of 1/0 = ∞ is invalid, since the transfer principle applies to the statement that zero has no multiplicative inverse. The rigorous counterpart of such a calculation would be that if ε is a non-zero infinitesimal, then 1/ε is infinite. For any finite hyperreal number x, the standard part, st(x), is defined as the unique closest real number to x; it necessarily differs from x only infinitesimally. The standard part function can also be defined for infinite hyperreal numbers as follows: If x is a positive infinite hyperreal number, set st(x) to be the extended real number +∞, and likewise, if x is a negative infinite hyperreal number, set st(x) to be −∞ (the idea is that an infinite hyperreal number should be smaller than the "true" absolute infinity but closer to it than any real number is). Differentiation One of the key uses of the hyperreal number system is to give a precise meaning to the differential operator d as used by Leibniz to define the derivative and the integral. For any real-valued function the differential is defined as a map which sends every ordered pair (where is real and is nonzero infinitesimal) to an infinitesimal Note that the very notation "" used to denote any infinitesimal is consistent with the above definition of the operator for if one interprets (as is commonly done) to be the function then for every the differential will equal the infinitesimal . A real-valued function is said to be differentiable at a point if the quotient is the same for all nonzero infinitesimals If so, this quotient is called the derivative of at . For example, to find the derivative of the function f(x) = x2, let dx be a non-zero infinitesimal. Then,
st((f(x + dx) − f(x)) / dx)
= st(((x + dx)2 − x2) / dx)
= st((x2 + 2x·dx + dx2 − x2) / dx)
= st((2x·dx + dx2) / dx)
= st(2x + dx)
= 2x
The use of the standard part in the definition of the derivative is a rigorous alternative to the traditional practice of neglecting the square of an infinitesimal quantity. Dual numbers are a number system based on this idea. After the third line of the differentiation above, the typical method from Newton through the 19th century would have been simply to discard the dx2 term. In the hyperreal system, dx2 ≠ 0, since dx is nonzero, and the transfer principle can be applied to the statement that the square of any nonzero number is nonzero. However, the quantity dx2 is infinitesimally small compared to dx; that is, the hyperreal system contains a hierarchy of infinitesimal quantities. Using hyperreal numbers for differentiation allows for a more algebraically manipulable approach to derivatives. In standard differentiation, partial differentials and higher-order differentials are not independently manipulable through algebraic techniques. However, using the hyperreals, a system can be established for doing so, though resulting in a slightly different notation.
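The dual numbers mentioned above admit a very short implementation, sketched here as an added illustration. Dual numbers are not the hyperreals (they are not a field, and the extra element satisfies ε·ε = 0 rather than being a nonzero infinitesimal), but they show how an "infinitesimal part" can be carried through algebra to read off a derivative. The class and function names are illustrative.

```python
# A minimal dual-number sketch: pairs a + b*eps with eps*eps = 0, which give an
# algebraic way to read off derivatives, loosely analogous to dividing by an
# infinitesimal and taking the standard part.

class Dual:
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps*eps = 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x     # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

result = f(Dual(2.0, 1.0))       # evaluate at x = 2 with "infinitesimal part" 1
print(result.real, result.eps)   # 12.0 (= f(2)) and 14.0 (= f'(2))
```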
Integration Another key use of the hyperreal number system is to give a precise meaning to the integral sign ∫ used by Leibniz to define the definite integral. For any infinitesimal functionone may define the integral as a map sending any ordered triple (whereandare real, andis infinitesimal of the same sign as ) to the value whereis any hyperinteger number satisfying A real-valued function is then said to be integrable over a closed intervalif for any nonzero infinitesimalthe integral is independent of the choice of If so, this integral is called the definite integral (or antiderivative) of on This shows that using hyperreal numbers, Leibniz's notation for the definite integral can actually be interpreted as a meaningful algebraic expression (just as the derivative can be interpreted as a meaningful quotient). Properties The hyperreals *R form an ordered field containing the reals R as a subfield. Unlike the reals, the hyperreals do not form a standard metric space, but by virtue of their order they carry an order topology. The use of the definite article the in the phrase the hyperreal numbers is somewhat misleading in that there is not a unique ordered field that is referred to in most treatments. However, a 2003 paper by Vladimir Kanovei and Saharon Shelah shows that there is a definable, countably saturated (meaning ω-saturated but not countable) elementary extension of the reals, which therefore has a good claim to the title of the hyperreal numbers. Furthermore, the field obtained by the ultrapower construction from the space of all real sequences, is unique up to isomorphism if one assumes the continuum hypothesis. The condition of being a hyperreal field is a stronger one than that of being a real closed field strictly containing R. It is also stronger than that of being a superreal field in the sense of Dales and Woodin. Development The hyperreals can be developed either axiomatically or by more constructively oriented methods. The essence of the axiomatic approach is to assert (1) the existence of at least one infinitesimal number, and (2) the validity of the transfer principle. In the following subsection we give a detailed outline of a more constructive approach. This method allows one to construct the hyperreals if given a set-theoretic object called an ultrafilter, but the ultrafilter itself cannot be explicitly constructed. From Leibniz to Robinson When Newton and (more explicitly) Leibniz introduced differentials, they used infinitesimals and these were still regarded as useful by later mathematicians such as Euler and Cauchy. Nonetheless these concepts were from the beginning seen as suspect, notably by George Berkeley. Berkeley's criticism centered on a perceived shift in hypothesis in the definition of the derivative in terms of infinitesimals (or fluxions), where dx is assumed to be nonzero at the beginning of the calculation, and to vanish at its conclusion (see Ghosts of departed quantities for details). When in the 1800s calculus was put on a firm footing through the development of the (ε, δ)-definition of limit by Bolzano, Cauchy, Weierstrass, and others, infinitesimals were largely abandoned, though research in non-Archimedean fields continued (Ehrlich 2006). However, in the 1960s Abraham Robinson showed how infinitely large and infinitesimal numbers can be rigorously defined and used to develop the field of nonstandard analysis. 
Robinson developed his theory nonconstructively, using model theory; however it is possible to proceed using only algebra and topology, and proving the transfer principle as a consequence of the definitions. In other words hyperreal numbers per se, aside from their use in nonstandard analysis, have no necessary relationship to model theory or first order logic, although they were discovered by the application of model theoretic techniques from logic. Hyper-real fields were in fact originally introduced by Hewitt (1948) by purely algebraic techniques, using an ultrapower construction. Ultrapower construction We are going to construct a hyperreal field via sequences of reals. In fact we can add and multiply sequences componentwise; for example: and analogously for multiplication. This turns the set of such sequences into a commutative ring, which is in fact a real algebra A. We have a natural embedding of R in A by identifying the real number r with the sequence (r, r, r, …) and this identification preserves the corresponding algebraic operations of the reals. The intuitive motivation is, for example, to represent an infinitesimal number using a sequence that approaches zero. The inverse of such a sequence would represent an infinite number. As we will see below, the difficulties arise because of the need to define rules for comparing such sequences in a manner that, although inevitably somewhat arbitrary, must be self-consistent and well defined. For example, we may have two sequences that differ in their first n members, but are equal after that; such sequences should clearly be considered as representing the same hyperreal number. Similarly, most sequences oscillate randomly forever, and we must find some way of taking such a sequence and interpreting it as, say, , where is a certain infinitesimal number. Comparing sequences is thus a delicate matter. We could, for example, try to define a relation between sequences in a componentwise fashion: but here we run into trouble, since some entries of the first sequence may be bigger than the corresponding entries of the second sequence, and some others may be smaller. It follows that the relation defined in this way is only a partial order. To get around this, we have to specify which positions matter. Since there are infinitely many indices, we don't want finite sets of indices to matter. A consistent choice of index sets that matter is given by any free ultrafilter U on the natural numbers; these can be characterized as ultrafilters that do not contain any finite sets. (The good news is that Zorn's lemma guarantees the existence of many such U; the bad news is that they cannot be explicitly constructed.) We think of U as singling out those sets of indices that "matter": We write (a0, a1, a2, ...) ≤ (b0, b1, b2, ...) if and only if the set of natural numbers { n : an ≤ bn } is in U. This is a total preorder and it turns into a total order if we agree not to distinguish between two sequences a and b if a ≤ b and b ≤ a. With this identification, the ordered field *R of hyperreals is constructed. From an algebraic point of view, U allows us to define a corresponding maximal ideal I in the commutative ring A (namely, the set of the sequences that vanish in some element of U), and then to define *R as A/I; as the quotient of a commutative ring by a maximal ideal, *R is a field. This is also notated A/U, directly in terms of the free ultrafilter U; the two are equivalent. 
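The componentwise ring operations at the start of the ultrapower construction can be written down directly; what cannot be implemented is the free ultrafilter used to compare sequences. The following Python sketch is an added illustration, not part of the article, and models only the ring A of real sequences, not the quotient field *R. Names are illustrative.

```python
# A minimal sketch of the ring A of real sequences with componentwise
# operations, together with the embedding of the reals as constant sequences.

def constant(r):
    """Embed the real r as the constant sequence (r, r, r, ...)."""
    return lambda n: r

def add(a, b):
    return lambda n: a(n) + b(n)

def mul(a, b):
    return lambda n: a(n) * b(n)

# A sequence tending to zero, which the construction turns into an infinitesimal.
epsilon = lambda n: 1.0 / (n + 1)

# Its componentwise reciprocal, which becomes an infinite hyperreal.
omega = lambda n: float(n + 1)

x = add(constant(2.0), epsilon)                     # represents 2 + epsilon
print([x(n) for n in range(5)])                     # [3.0, 2.5, 2.333..., 2.25, 2.2]
print([mul(epsilon, omega)(n) for n in range(3)])   # identically 1.0
```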
The maximality of I follows from the possibility of, given a sequence a, constructing a sequence b by inverting the non-null elements of a and not altering its null entries. If the set on which a vanishes is not in U, the product ab is identified with the number 1, and any ideal containing 1 must be A. In the resulting field, these a and b are inverses. The field A/U is an ultrapower of R. Since this field contains R, it has cardinality at least that of the continuum. Since A has cardinality c (the set of all real sequences has cardinality c^ℵ₀ = c, where c = 2^ℵ₀ is the cardinality of the continuum), the quotient A/U is also no larger than c, and hence has the same cardinality as R. One question we might ask is whether, if we had chosen a different free ultrafilter V, the quotient field A/U would be isomorphic as an ordered field to A/V. This question turns out to be equivalent to the continuum hypothesis; in ZFC with the continuum hypothesis we can prove this field is unique up to order isomorphism, and in ZFC with the negation of the continuum hypothesis we can prove that there are non-order-isomorphic pairs of fields that are both countably indexed ultrapowers of the reals. For more information about this method of construction, see ultraproduct.

An intuitive approach to the ultrapower construction

The following is an intuitive way of understanding the hyperreal numbers. The approach taken here is very close to the one in the book by Goldblatt. Recall that the sequences converging to zero are sometimes called infinitely small. These are almost the infinitesimals in a sense; the true infinitesimals include certain classes of sequences that contain a sequence converging to zero. Let us see where these classes come from. Consider first the sequences of real numbers. They form a ring; that is, one can multiply, add and subtract them, but not necessarily divide by a non-zero element. The real numbers are regarded as the constant sequences, and a sequence is zero if it is identically zero, that is, a_n = 0 for all n. In our ring of sequences one can get ab = 0 with neither a = 0 nor b = 0. Thus, if for two sequences one has ab = 0, at least one of them should be declared zero. Surprisingly enough, there is a consistent way to do it. As a result, the equivalence classes of sequences that differ by some sequence declared zero will form a field, which is called a hyperreal field. It will contain the infinitesimals in addition to the ordinary real numbers, as well as infinitely large numbers (the reciprocals of infinitesimals, including those represented by sequences diverging to infinity). Also, every hyperreal that is not infinitely large will be infinitely close to an ordinary real; in other words, it will be the sum of an ordinary real and an infinitesimal. This construction is parallel to the construction of the reals from the rationals given by Cantor. He started with the ring of the Cauchy sequences of rationals and declared all the sequences that converge to zero to be zero. The result is the reals. To continue the construction of hyperreals, consider the zero sets of our sequences, that is, the sets z(a) = { n : a_n = 0 }, where z(a) is the set of indexes n for which a_n = 0. It is clear that if ab = 0, then the union of z(a) and z(b) is N (the set of all natural numbers), so: one of two sequences that vanish on two complementary sets should be declared zero; if a is declared zero, ab should be declared zero too, no matter what b is; and if both a and b are declared zero, then a + b should also be declared zero. Now the idea is to single out a bunch U of subsets X of N and to declare that a = 0 if and only if z(a) belongs to U.
From the above conditions one can see that:

(1) From two complementary sets, one belongs to U.
(2) Any set having a subset that belongs to U also belongs to U.
(3) An intersection of any two sets belonging to U belongs to U.
(4) We do not want the empty set to belong to U, because then everything would belong to U, as every set has the empty set as a subset.

Any family of sets that satisfies (2)–(4) is called a filter (an example: the complements of the finite sets; this is called the Fréchet filter and it is used in the usual limit theory). If (1) also holds, U is called an ultrafilter (because you can add no more sets to it without breaking it). The only explicitly known example of an ultrafilter is the family of sets containing a given element (in our case, say, the number 10). Such ultrafilters are called trivial, and if we use one in our construction, we come back to the ordinary real numbers. Any ultrafilter containing a finite set is trivial. It is known that any filter can be extended to an ultrafilter, but the proof uses the axiom of choice. The existence of a nontrivial ultrafilter (the ultrafilter lemma) can be added as an extra axiom, as it is weaker than the axiom of choice. Now if we take a nontrivial ultrafilter (which is an extension of the Fréchet filter) and do our construction, we get the hyperreal numbers as a result.

If f is a real function of a real variable, then f naturally extends to a hyperreal function of a hyperreal variable by composition: f([a_n]) = [f(a_n)], where [a_n] means "the equivalence class of the sequence (a_n) relative to our ultrafilter", two sequences being in the same class if and only if the zero set of their difference belongs to our ultrafilter. All the arithmetical expressions and formulas make sense for hyperreals and hold true if they are true for the ordinary reals. It turns out that any finite hyperreal x (that is, one such that |x| < a for some ordinary real a) will be of the form y + d, where y is an ordinary (called standard) real and d is an infinitesimal. This can be proven by the bisection method used in proving the Bolzano–Weierstrass theorem; property (1) of ultrafilters turns out to be crucial.

Properties of infinitesimal and infinite numbers

The finite elements F of *R form a local ring, and in fact a valuation ring, with the unique maximal ideal S being the infinitesimals; the quotient F/S is isomorphic to the reals. Hence we have a homomorphic mapping, st(x), from F to R whose kernel consists of the infinitesimals and which sends every element x of F to a unique real number whose difference from x is in S; which is to say, x − st(x) is infinitesimal. Put another way, every finite nonstandard real number is "very close" to a unique real number, in the sense that if x is a finite nonstandard real, then there exists one and only one real number st(x) such that x − st(x) is infinitesimal. This number st(x) is called the standard part of x, conceptually the same as rounding x to the nearest real number. This operation is an order-preserving homomorphism and hence is well-behaved both algebraically and order-theoretically. It is order-preserving though not isotonic; i.e., x ≤ y implies st(x) ≤ st(y), but x < y does not imply st(x) < st(y). We have, if both x and y are finite,

st(x + y) = st(x) + st(y) and st(xy) = st(x) st(y).

If x is finite and not infinitesimal, st(1/x) = 1/st(x). x is real if and only if st(x) = x. The map st is continuous with respect to the order topology on the finite hyperreals; in fact it is locally constant.

Hyperreal fields

Suppose X is a Tychonoff space, also called a T3.5 space, and C(X) is the algebra of continuous real-valued functions on X. Suppose M is a maximal ideal in C(X).
Then the factor algebra A = C(X)/M is a totally ordered field F containing the reals. If F strictly contains R, then M is called a hyperreal ideal (terminology due to Hewitt (1948)) and F a hyperreal field. Note that no assumption is being made that the cardinality of F is greater than that of R; it can in fact have the same cardinality. An important special case is where the topology on X is the discrete topology; in this case X can be identified with a cardinal number κ and C(X) with the real algebra R^κ of functions from κ to R. The hyperreal fields we obtain in this case are called ultrapowers of R and are identical to the ultrapowers constructed via free ultrafilters in model theory.

See also
Surreal number – Surreal numbers are a much larger class of numbers that contains the hyperreals as well as other classes of non-real numbers.

References

Further reading
Hatcher, William S. (1982) "Calculus is Algebra", American Mathematical Monthly 89: 362–370.
Hewitt, Edwin (1948) Rings of real-valued continuous functions. I. Trans. Amer. Math. Soc. 64, 45–99.
Keisler, H. Jerome (1994) The hyperreal line. Real numbers, generalizations of the reals, and theories of continua, 207–237, Synthese Lib., 242, Kluwer Acad. Publ., Dordrecht.

External links
Crowell, Brief Calculus. A text using infinitesimals.
Hermoso, Nonstandard Analysis and the Hyperreals. A gentle introduction.
Keisler, Elementary Calculus: An Approach Using Infinitesimals. Includes an axiomatic treatment of the hyperreals, and is freely available under a Creative Commons license.

Mathematical analysis Nonstandard analysis Field (mathematics) Real closed field Infinity Mathematics of infinitesimals Numbers
Hyperreal number
[ "Mathematics" ]
5,029
[ "Mathematical analysis", "Mathematical objects", "Infinity", "Nonstandard analysis", "Mathematics of infinitesimals", "Arithmetic", "Model theory", "Numbers" ]
51,432
https://en.wikipedia.org/wiki/Surreal%20number
In mathematics, the surreal number system is a totally ordered proper class containing not only the real numbers but also infinite and infinitesimal numbers, respectively larger or smaller in absolute value than any positive real number. Research on the Go endgame by John Horton Conway led to the original definition and construction of surreal numbers. Conway's construction was introduced in Donald Knuth's 1974 book Surreal Numbers: How Two Ex-Students Turned On to Pure Mathematics and Found Total Happiness. The surreals share many properties with the reals, including the usual arithmetic operations (addition, subtraction, multiplication, and division); as such, they form an ordered field. If formulated in von Neumann–Bernays–Gödel set theory, the surreal numbers are a universal ordered field in the sense that all other ordered fields, such as the rationals, the reals, the rational functions, the Levi-Civita field, the superreal numbers (including the hyperreal numbers) can be realized as subfields of the surreals. The surreals also contain all transfinite ordinal numbers; the arithmetic on them is given by the natural operations. It has also been shown (in von Neumann–Bernays–Gödel set theory) that the maximal class hyperreal field is isomorphic to the maximal class surreal field. History of the concept Research on the Go endgame by John Horton Conway led to the original definition and construction of the surreal numbers. Conway's construction was introduced in Donald Knuth's 1974 book Surreal Numbers: How Two Ex-Students Turned On to Pure Mathematics and Found Total Happiness. In his book, which takes the form of a dialogue, Knuth coined the term surreal numbers for what Conway had called simply numbers. Conway later adopted Knuth's term, and used surreals for analyzing games in his 1976 book On Numbers and Games. A separate route to defining the surreals began in 1907, when Hans Hahn introduced Hahn series as a generalization of formal power series, and Felix Hausdorff introduced certain ordered sets called -sets for ordinals and asked if it was possible to find a compatible ordered group or field structure. In 1962, Norman Alling used a modified form of Hahn series to construct such ordered fields associated to certain ordinals and, in 1987, he showed that taking to be the class of all ordinals in his construction gives a class that is an ordered field isomorphic to the surreal numbers. If the surreals are considered as 'just' a proper-class-sized real closed field, Alling's 1962 paper handles the case of strongly inaccessible cardinals which can naturally be considered as proper classes by cutting off the cumulative hierarchy of the universe one stage above the cardinal, and Alling accordingly deserves much credit for the discovery/invention of the surreals in this sense. There is an important additional field structure on the surreals that isn't visible through this lens however, namely the notion of a 'birthday' and the corresponding natural description of the surreals as the result of a cut-filling process along their birthdays given by Conway. This additional structure has become fundamental to a modern understanding of the surreal numbers, and Conway is thus given credit for discovering the surreals as we know them today—Alling himself gives Conway full credit in a 1985 paper preceding his book on the subject. 
Description Notation In the context of surreal numbers, an ordered pair of sets and , which is written as in many other mathematical contexts, is instead written including the extra space adjacent to each brace. When a set is empty, it is often simply omitted. When a set is explicitly described by its elements, the pair of braces that encloses the list of elements is often omitted. When a union of sets is taken, the operator that represents that is often a comma. For example, instead of , which is common notation in other contexts, we typically write . Outline of construction In the Conway construction, the surreal numbers are constructed in stages, along with an ordering ≤ such that for any two surreal numbers and , or . (Both may hold, in which case and are equivalent and denote the same number.) Each number is formed from an ordered pair of subsets of numbers already constructed: given subsets and of numbers such that all the members of are strictly less than all the members of , then the pair represents a number intermediate in value between all the members of and all the members of . Different subsets may end up defining the same number: and may define the same number even if and . (A similar phenomenon occurs when rational numbers are defined as quotients of integers: and are different representations of the same rational number.) So strictly speaking, the surreal numbers are equivalence classes of representations of the form that designate the same number. In the first stage of construction, there are no previously existing numbers so the only representation must use the empty set: . This representation, where and are both empty, is called 0. Subsequent stages yield forms like and The integers are thus contained within the surreal numbers. (The above identities are definitions, in the sense that the right-hand side is a name for the left-hand side. That the names are actually appropriate will be evident when the arithmetic operations on surreal numbers are defined, as in the section below). Similarly, representations such as arise, so that the dyadic rationals (rational numbers whose denominators are powers of 2) are contained within the surreal numbers. After an infinite number of stages, infinite subsets become available, so that any real number can be represented by where is the set of all dyadic rationals less than and is the set of all dyadic rationals greater than (reminiscent of a Dedekind cut). Thus the real numbers are also embedded within the surreals. There are also representations like where is a transfinite number greater than all integers and is an infinitesimal greater than 0 but less than any positive real number. Moreover, the standard arithmetic operations (addition, subtraction, multiplication, and division) can be extended to these non-real numbers in a manner that turns the collection of surreal numbers into an ordered field, so that one can talk about or and so forth. Construction Surreal numbers are constructed inductively as equivalence classes of pairs of sets of surreal numbers, restricted by the condition that each element of the first set is smaller than each element of the second set. The construction consists of three interdependent parts: the construction rule, the comparison rule and the equivalence rule. Forms A form is a pair of sets of surreal numbers, called its left set and its right set. A form with left set and right set is written . When and are given as lists of elements, the braces around them are omitted. 
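For concreteness, the forms produced in the earliest stages, together with the conventional labels discussed above, can be written out as follows; these identifications are the standard ones and are justified once the arithmetic operations are defined below.

```latex
% First stages of the construction:
0 = \{\;\mid\;\}, \quad 1 = \{\,0 \mid\;\}, \quad -1 = \{\;\mid 0\,\}, \quad
2 = \{\,1 \mid\;\}, \quad \tfrac12 = \{\,0 \mid 1\,\};
% after infinitely many stages:
\omega = \{\,0,1,2,3,\dots \mid\;\}, \quad
\varepsilon = \{\,0 \mid 1,\tfrac12,\tfrac14,\tfrac18,\dots\,\}, \quad
\pi = \{\,x \text{ dyadic},\, x<\pi \mid x \text{ dyadic},\, x>\pi\,\}.
```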
Either or both of the left and right set of a form may be the empty set. The form with both left and right set empty is also written . Numeric forms and their equivalence classes Construction rule A form is numeric if the intersection of and is the empty set and each element of is greater than every element of , according to the order relation ≤ given by the comparison rule below. The numeric forms are placed in equivalence classes; each such equivalence class is a surreal number. The elements of the left and right sets of a form are drawn from the universe of the surreal numbers (not of forms, but of their equivalence classes). Equivalence rule Two numeric forms and are forms of the same number (lie in the same equivalence class) if and only if both and . An ordering relationship must be antisymmetric, i.e., it must have the property that (i. e., and are both true) only when and are the same object. This is not the case for surreal number forms, but is true by construction for surreal numbers (equivalence classes). The equivalence class containing is labeled 0; in other words, is a form of the surreal number 0. Order The recursive definition of surreal numbers is completed by defining comparison: Given numeric forms and , if and only if both: There is no such that . That is, every element in the left part of is strictly smaller than . There is no such that . That is, every element in the right part of is strictly larger than . Surreal numbers can be compared to each other (or to numeric forms) by choosing a numeric form from its equivalence class to represent each surreal number. Induction This group of definitions is recursive, and requires some form of mathematical induction to define the universe of objects (forms and numbers) that occur in them. The only surreal numbers reachable via finite induction are the dyadic fractions; a wider universe is reachable given some form of transfinite induction. Induction rule There is a generation , in which 0 consists of the single form . Given any ordinal number , the generation is the set of all surreal numbers that are generated by the construction rule from subsets of . The base case is actually a special case of the induction rule, with 0 taken as a label for the "least ordinal". Since there exists no with , the expression is the empty set; the only subset of the empty set is the empty set, and therefore consists of a single surreal form lying in a single equivalence class 0. For every finite ordinal number , is well-ordered by the ordering induced by the comparison rule on the surreal numbers. The first iteration of the induction rule produces the three numeric forms (the form is non-numeric because ). The equivalence class containing is labeled 1 and the equivalence class containing is labeled −1. These three labels have a special significance in the axioms that define a ring; they are the additive identity (0), the multiplicative identity (1), and the additive inverse of 1 (−1). The arithmetic operations defined below are consistent with these labels. For every , since every valid form in is also a valid form in , all of the numbers in also appear in (as supersets of their representation in ). (The set union expression appears in our construction rule, rather than the simpler form , so that the definition also makes sense when is a limit ordinal.) Numbers in that are a superset of some number in are said to have been inherited from generation . The smallest value of for which a given surreal number appears in is called its birthday. 
For example, the birthday of 0 is 0, and the birthday of −1 is 1. A second iteration of the construction rule yields the following ordering of equivalence classes: Comparison of these equivalence classes is consistent, irrespective of the choice of form. Three observations follow: contains four new surreal numbers. Two contain extremal forms: contains all numbers from previous generations in its right set, and contains all numbers from previous generations in its left set. The others have a form that partitions all numbers from previous generations into two non-empty sets. Every surreal number that existed in the previous "generation" exists also in this generation, and includes at least one new form: a partition of all numbers other than from previous generations into a left set (all numbers less than ) and a right set (all numbers greater than ). The equivalence class of a number depends on only the maximal element of its left set and the minimal element of the right set. The informal interpretations of and are "the number just after 1" and "the number just before −1" respectively; their equivalence classes are labeled 2 and −2. The informal interpretations of and are "the number halfway between 0 and 1" and "the number halfway between −1 and 0" respectively; their equivalence classes are labeled and −. These labels will also be justified by the rules for surreal addition and multiplication below. The equivalence classes at each stage of induction may be characterized by their -complete forms (each containing as many elements as possible of previous generations in its left and right sets). Either this complete form contains every number from previous generations in its left or right set, in which case this is the first generation in which this number occurs; or it contains all numbers from previous generations but one, in which case it is a new form of this one number. We retain the labels from the previous generation for these "old" numbers, and write the ordering above using the old and new labels: . The third observation extends to all surreal numbers with finite left and right sets. (For infinite left or right sets, this is valid in an altered form, since infinite sets might not contain a maximal or minimal element.) The number is therefore equivalent to ; one can establish that these are forms of 3 by using the birthday property, which is a consequence of the rules above. Birthday property A form occurring in generation represents a number inherited from an earlier generation if and only if there is some number in that is greater than all elements of and less than all elements of the . (In other words, if and are already separated by a number created at an earlier stage, then does not represent a new number but one already constructed.) If represents a number from any generation earlier than , there is a least such generation , and exactly one number with this least as its birthday that lies between and ; is a form of this . In other words, it lies in the equivalence class in that is a superset of the representation of in generation . Arithmetic The addition, negation (additive inverse), and multiplication of surreal number forms and are defined by three recursive formulas. 
Negation Negation of a given number is defined by where the negation of a set of numbers is given by the set of the negated elements of : This formula involves the negation of the surreal numbers appearing in the left and right sets of , which is to be understood as the result of choosing a form of the number, evaluating the negation of this form, and taking the equivalence class of the resulting form. This makes sense only if the result is the same, irrespective of the choice of form of the operand. This can be proved inductively using the fact that the numbers occurring in and are drawn from generations earlier than that in which the form first occurs, and observing the special case: Addition The definition of addition is also a recursive formula: where This formula involves sums of one of the original operands and a surreal number drawn from the left or right set of the other. It can be proved inductively with the special cases: For example: , which by the birthday property is a form of 1. This justifies the label used in the previous section. Subtraction Subtraction is defined with addition and negation: Multiplication Multiplication can be defined recursively as well, beginning from the special cases involving 0, the multiplicative identity 1, and its additive inverse −1: The formula contains arithmetic expressions involving the operands and their left and right sets, such as the expression that appears in the left set of the product of and . This is understood as , the set of numbers generated by picking all possible combinations of members of and , and substituting them into the expression. For example, to show that the square of is : . Division The definition of division is done in terms of the reciprocal and multiplication: where for positive . Only positive are permitted in the formula, with any nonpositive terms being ignored (and are always positive). This formula involves not only recursion in terms of being able to divide by numbers from the left and right sets of , but also recursion in that the members of the left and right sets of itself. 0 is always a member of the left set of , and that can be used to find more terms in a recursive fashion. For example, if }, then we know a left term of will be 0. This in turn means is a right term. This means is a left term. This means will be a right term. Continuing, this gives For negative , is given by If , then is undefined. 
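Because the definitions of order, negation, addition and multiplication are recursive, they translate almost directly into code for forms whose left and right sets are finite. The sketch below is an illustration under that finiteness assumption, not an implementation from any particular library; it manipulates forms rather than equivalence classes, and the method eq tests the defined equivalence (x ≤ y and y ≤ x) rather than identity of forms.

```python
class Surreal:
    """A surreal-number form with finite left and right sets (illustrative sketch)."""
    def __init__(self, left=(), right=()):
        self.L, self.R = tuple(left), tuple(right)

    def __le__(self, other):
        # x <= y unless some left option of x is >= y or some right option of y is <= x
        return (not any(other <= xl for xl in self.L)
                and not any(yr <= self for yr in other.R))

    def eq(self, other):
        # equality of the represented numbers (the forms themselves may differ)
        return self <= other and other <= self

    def __neg__(self):
        # -{L | R} = {-R | -L}
        return Surreal([-r for r in self.R], [-l for l in self.L])

    def __add__(self, other):
        # x + y = { X_L + y, x + Y_L | X_R + y, x + Y_R }
        left  = [xl + other for xl in self.L] + [self + yl for yl in other.L]
        right = [xr + other for xr in self.R] + [self + yr for yr in other.R]
        return Surreal(left, right)

    def __sub__(self, other):
        return self + (-other)

    def __mul__(self, other):
        # Conway's product: options of the form x_o*y + x*y_o - x_o*y_o
        def part(xs, ys):
            return [xo * other + self * yo - xo * yo for xo in xs for yo in ys]
        left  = part(self.L, other.L) + part(self.R, other.R)
        right = part(self.L, other.R) + part(self.R, other.L)
        return Surreal(left, right)

zero = Surreal()
one  = Surreal([zero])
two  = Surreal([one])
half = Surreal([zero], [one])

assert (one + one).eq(two)      # {0|} + {0|} represents 2
assert (half + half).eq(one)    # {0|1} + {0|1} represents 1
assert (half * two).eq(one)     # multiplication is consistent with the labels
```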
Consistency It can be shown that the definitions of negation, addition and multiplication are consistent, in the sense that: Addition and negation are defined recursively in terms of "simpler" addition and negation steps, so that operations on numbers with birthday will eventually be expressed entirely in terms of operations on numbers with birthdays less than ; Multiplication is defined recursively in terms of additions, negations, and "simpler" multiplication steps, so that the product of numbers with birthday will eventually be expressed entirely in terms of sums and differences of products of numbers with birthdays less than ; As long as the operands are well-defined surreal number forms (each element of the left set is less than each element of the right set), the results are again well-defined surreal number forms; The operations can be extended to numbers (equivalence classes of forms): the result of negating or adding or multiplying and will represent the same number regardless of the choice of form of and ; and These operations obey the associativity, commutativity, additive inverse, and distributivity axioms in the definition of a field, with additive identity and multiplicative identity . With these rules one can now verify that the numbers found in the first few generations were properly labeled. The construction rule is repeated to obtain more generations of surreals: Arithmetic closure For each natural number (finite ordinal) , all numbers generated in are dyadic fractions, i.e., can be written as an irreducible fraction , where and are integers and . The set of all surreal numbers that are generated in some for finite may be denoted as . One may form the three classes of which is the union. No individual is closed under addition and multiplication (except ), but is; it is the subring of the rationals consisting of all dyadic fractions. There are infinite ordinal numbers for which the set of surreal numbers with birthday less than is closed under the different arithmetic operations. For any ordinal , the set of surreal numbers with birthday less than (using powers of ) is closed under addition and forms a group; for birthday less than it is closed under multiplication and forms a ring; and for birthday less than an (ordinal) epsilon number it is closed under multiplicative inverse and forms a field. The latter sets are also closed under the exponential function as defined by Kruskal and Gonshor. However, it is always possible to construct a surreal number that is greater than any member of a set of surreals (by including the set on the left side of the constructor) and thus the collection of surreal numbers is a proper class. With their ordering and algebraic operations they constitute an ordered field, with the caveat that they do not form a set. In fact it is the biggest ordered field, in that every ordered field is a subfield of the surreal numbers. The class of all surreal numbers is denoted by the symbol . Infinity Define as the set of all surreal numbers generated by the construction rule from subsets of . (This is the same inductive step as before, since the ordinal number is the smallest ordinal that is larger than all natural numbers; however, the set union appearing in the inductive step is now an infinite union of finite sets, and so this step can be performed only in a set theory that allows such a union.) A unique infinitely large positive number occurs in : also contains objects that can be identified as the rational numbers. 
For example, the -complete form of the fraction is given by: The product of this form of with any form of 3 is a form whose left set contains only numbers less than 1 and whose right set contains only numbers greater than 1; the birthday property implies that this product is a form of 1. Not only do all the rest of the rational numbers appear in ; the remaining finite real numbers do too. For example, The only infinities in are and ; but there are other non-real numbers in among the reals. Consider the smallest positive number in : This number is larger than zero but less than all positive dyadic fractions. It is therefore an infinitesimal number, often labeled . The -complete form of (respectively ) is the same as the -complete form of 0, except that 0 is included in the left (respectively right) set. The only "pure" infinitesimals in are and its additive inverse ; adding them to any dyadic fraction produces the numbers , which also lie in . One can determine the relationship between and by multiplying particular forms of them to obtain: . This expression is well-defined only in a set theory which permits transfinite induction up to . In such a system, one can demonstrate that all the elements of the left set of are positive infinitesimals and all the elements of the right set are positive infinities, and therefore is the oldest positive finite number, 1. Consequently, . Some authors systematically use in place of the symbol . Contents of S Given any in , exactly one of the following is true: and are both empty, in which case ; is empty and some integer is greater than every element of , in which case equals the smallest such integer ; is empty and no integer is greater than every element of , in which case equals ; is empty and some integer is less than every element of , in which case equals the largest such integer ; is empty and no integer is less than every element of , in which case equals ; and are both non-empty, and: Some dyadic fraction is "strictly between" and (greater than all elements of and less than all elements of ), in which case equals the oldest such dyadic fraction ; No dyadic fraction lies strictly between and , but some dyadic fraction is greater than or equal to all elements of and less than all elements of , in which case equals ; No dyadic fraction lies strictly between and , but some dyadic fraction is greater than all elements of and less than or equal to all elements of , in which case equals ; Every dyadic fraction is either greater than some element of or less than some element of , in which case is some real number that has no representation as a dyadic fraction. is not an algebraic field, because it is not closed under arithmetic operations; consider , whose form does not lie in any number in . The maximal subset of that is closed under (finite series of) arithmetic operations is the field of real numbers, obtained by leaving out the infinities , the infinitesimals , and the infinitesimal neighbors of each nonzero dyadic fraction . This construction of the real numbers differs from the Dedekind cuts of standard analysis in that it starts from dyadic fractions rather than general rationals and naturally identifies each dyadic fraction in with its forms in previous generations. (The -complete forms of real elements of are in one-to-one correspondence with the reals obtained by Dedekind cuts, under the proviso that Dedekind reals corresponding to rational numbers are represented by the form in which the cut point is omitted from both left and right sets.) 
The rationals are not an identifiable stage in the surreal construction; they are merely the subset of containing all elements such that for some and some nonzero , both drawn from . By demonstrating that is closed under individual repetitions of the surreal arithmetic operations, one can show that it is a field; and by showing that every element of is reachable from by a finite series (no longer than two, actually) of arithmetic operations including multiplicative inversion, one can show that is strictly smaller than the subset of identified with the reals. The set has the same cardinality as the real numbers . This can be demonstrated by exhibiting surjective mappings from to the closed unit interval of and vice versa. Mapping onto is routine; map numbers less than or equal to (including ) to 0, numbers greater than or equal to (including ) to 1, and numbers between and to their equivalent in (mapping the infinitesimal neighbors of each dyadic fraction , along with itself, to ). To map onto , map the (open) central third (, ) of onto ; the central third (, ) of the upper third to ; and so forth. This maps a nonempty open interval of onto each element of , monotonically. The residue of consists of the Cantor set , each point of which is uniquely identified by a partition of the central-third intervals into left and right sets, corresponding precisely to a form in . This places the Cantor set in one-to-one correspondence with the set of surreal numbers with birthday . Transfinite induction Continuing to perform transfinite induction beyond produces more ordinal numbers , each represented as the largest surreal number having birthday . (This is essentially a definition of the ordinal numbers resulting from transfinite induction.) The first such ordinal is . There is another positive infinite number in generation : . The surreal number is not an ordinal; the ordinal is not the successor of any ordinal. This is a surreal number with birthday , which is labeled on the basis that it coincides with the sum of and . Similarly, there are two new infinitesimal numbers in generation : and . At a later stage of transfinite induction, there is a number larger than for all natural numbers : This number may be labeled both because its birthday is (the first ordinal number not reachable from by the successor operation) and because it coincides with the surreal sum of and ; it may also be labeled because it coincides with the product of and . It is the second limit ordinal; reaching it from via the construction step requires a transfinite induction on This involves an infinite union of infinite sets, which is a "stronger" set theoretic operation than the previous transfinite induction required. Note that the conventional addition and multiplication of ordinals does not always coincide with these operations on their surreal representations. The sum of ordinals equals , but the surreal sum is commutative and produces . The addition and multiplication of the surreal numbers associated with ordinals coincides with the natural sum and natural product of ordinals. Just as is bigger than for any natural number , there is a surreal number that is infinite but smaller than for any natural number . That is, is defined by where on the right hand side the notation is used to mean . It can be identified as the product of and the form of . The birthday of is the limit ordinal . 
Powers of ω and the Conway normal form To classify the "orders" of infinite and infinitesimal surreal numbers, also known as archimedean classes, Conway associated to each surreal number the surreal number , where and range over the positive real numbers. If then is "infinitely greater" than , in that it is greater than for all real numbers . Powers of also satisfy the conditions , , so they behave the way one would expect powers to behave. Each power of also has the redeeming feature of being the simplest surreal number in its archimedean class; conversely, every archimedean class within the surreal numbers contains a unique simplest member. Thus, for every positive surreal number there will always exist some positive real number and some surreal number so that is "infinitely smaller" than . The exponent is the "base logarithm" of , defined on the positive surreals; it can be demonstrated that maps the positive surreals onto the surreals and that . This gets extended by transfinite induction so that every surreal number has a "normal form" analogous to the Cantor normal form for ordinal numbers. This is the Conway normal form: Every surreal number may be uniquely written as , where every is a nonzero real number and the s form a strictly decreasing sequence of surreal numbers. This "sum", however, may have infinitely many terms, and in general has the length of an arbitrary ordinal number. (Zero corresponds of course to the case of an empty sequence, and is the only surreal number with no leading exponent.) Looked at in this manner, the surreal numbers resemble a power series field, except that the decreasing sequences of exponents must be bounded in length by an ordinal and are not allowed to be as long as the class of ordinals. This is the basis for the formulation of the surreal numbers as a Hahn series. Gaps and continuity In contrast to the real numbers, a (proper) subset of the surreal numbers does not have a least upper (or lower) bound unless it has a maximal (minimal) element. Conway defines a gap as such that every element of is less than every element of , and ; this is not a number because at least one of the sides is a proper class. Though similar, gaps are not quite the same as Dedekind cuts, but we can still talk about a completion of the surreal numbers with the natural ordering which is a (proper class-sized) linear continuum. For instance there is no least positive infinite surreal, but the gap is greater than all real numbers and less than all positive infinite surreals, and is thus the least upper bound of the reals in . Similarly the gap is larger than all surreal numbers. (This is an esoteric pun: In the general construction of ordinals, "is" the set of ordinals smaller than , and we can use this equivalence to write in the surreals; denotes the class of ordinal numbers, and because is cofinal in we have by extension.) With a bit of set-theoretic care, can be equipped with a topology where the open sets are unions of open intervals (indexed by proper sets) and continuous functions can be defined. An equivalent of Cauchy sequences can be defined as well, although they have to be indexed by the class of ordinals; these will always converge, but the limit may be either a number or a gap that can be expressed as with decreasing and having no lower bound in . (All such gaps can be understood as Cauchy sequences themselves, but there are other types of gap that are not limits, such as and ). 
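As an illustration of the normal form just described (an example constructed here rather than taken from the text), a surreal number might be written

```latex
x \;=\; \omega^{\omega}\cdot 3 \;+\; \omega^{2}\cdot\pi \;-\; \omega\cdot\tfrac12 \;+\; 7 \;+\; \omega^{-1}\cdot\sqrt2 ,
```

with the exponents ω > 2 > 1 > 0 > −1 strictly decreasing and every real coefficient nonzero; in this notation the infinitesimal ε appears as ω^(−1).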
Exponential function Based on unpublished work by Kruskal, a construction (by transfinite induction) that extends the real exponential function (with base ) to the surreals was carried through by Gonshor. Other exponentials The powers of function is also an exponential function, but does not have the properties desired for an extension of the function on the reals. It will, however, be needed in the development of the base- exponential, and it is this function that is meant whenever the notation is used in the following. When is a dyadic fraction, the power function , may be composed from multiplication, multiplicative inverse and square root, all of which can be defined inductively. Its values are completely determined by the basic relation , and where defined it necessarily agrees with any other exponentiation that can exist. Basic induction The induction steps for the surreal exponential are based on the series expansion for the real exponential, more specifically those partial sums that can be shown by basic algebra to be positive but less than all later ones. For positive these are denoted and include all partial sums; for negative but finite, denotes the odd steps in the series starting from the first one with a positive real part (which always exists). For negative infinite the odd-numbered partial sums are strictly decreasing and the notation denotes the empty set, but it turns out that the corresponding elements are not needed in the induction. The relations that hold for real are thenandand this can be extended to the surreals with the definition This is well-defined for all surreal arguments (the value exists and does not depend on the choice of and ). Results Using this definition, the following hold: is a strictly increasing positive function, satisfies is a surjection (onto ) and has a well-defined inverse, coincides with the usual exponential function on the reals (and thus ) For infinitesimal, the value of the formal power series (Taylor expansion) of is well defined and coincides with the inductive definition When is given in Conway normal form, the set of exponents in the result is well-ordered and the coefficients are finite sums, directly giving the normal form of the result (which has a leading ) Similarly, for infinitesimally close to , is given by power series expansion of For positive infinite , is infinite as well If has the form (), has the form where is a strictly increasing function of . In fact there is an inductively defined bijection whose inverse can also be defined inductively If is "pure infinite" with normal form where all , then Similarly, for , the inverse is given by Any surreal number can be written as the sum of a pure infinite, a real and an infinitesimal part, and the exponential is the product of the partial results given above The normal form can be written out by multiplying the infinite part (a single power of ) and the real exponential into the power series resulting from the infinitesimal Conversely, dividing out the leading term of the normal form will bring any surreal number into the form , for , where each factor has a form for which a way of calculating the logarithm has been given above; the sum is then the general logarithm While there is no general inductive definition of (unlike for ), the partial results are given in terms of such definitions. In this way, the logarithm can be calculated explicitly, without reference to the fact that it's the inverse of the exponential. 
The exponential function is much greater than any finite power For any positive infinite and any finite , is infinite For any integer and surreal , . This stronger constraint is one of the Ressayre axioms for the real exponential field satisfies all the Ressayre axioms for the real exponential field The surreals with exponential is an elementary extension of the real exponential field For an ordinal epsilon number, the set of surreal numbers with birthday less than constitute a field that is closed under exponentials, and is likewise an elementary extension of the real exponential field Examples The surreal exponential is essentially given by its behaviour on positive powers of , i.e., the function , combined with well-known behaviour on finite numbers. Only examples of the former will be given. In addition, holds for a large part of its range, for instance for any finite number with positive real part and any infinite number that is less than some iterated power of ( for some number of levels). and This shows that the "power of " function is not compatible with , since compatibility would demand a value of here Exponentiation A general exponentiation can be defined as , giving an interpretation to expressions like . Again it is essential to distinguish this definition from the "powers of " function, especially if may occur as the base. Surcomplex numbers A surcomplex number is a number of the form , where and are surreal numbers and is the square root of . The surcomplex numbers form an algebraically closed field (except for being a proper class), isomorphic to the algebraic closure of the field generated by extending the rational numbers by a proper class of algebraically independent transcendental elements. Up to field isomorphism, this fact characterizes the field of surcomplex numbers within any fixed set theory. Games The definition of surreal numbers contained one restriction: each element of must be strictly less than each element of . If this restriction is dropped we can generate a more general class known as games. All games are constructed according to this rule: Construction rule If and are two sets of games then is a game. Addition, negation, and comparison are all defined the same way for both surreal numbers and games. Every surreal number is a game, but not all games are surreal numbers, e.g. the game is not a surreal number. The class of games is more general than the surreals, and has a simpler definition, but lacks some of the nicer properties of surreal numbers. The class of surreal numbers forms a field, but the class of games does not. The surreals have a total order: given any two surreals, they are either equal, or one is greater than the other. The games have only a partial order: there exist pairs of games that are neither equal, greater than, nor less than each other. Each surreal number is either positive, negative, or zero. Each game is either positive, negative, zero, or fuzzy (incomparable with zero, such as ). A move in a game involves the player whose move it is choosing a game from those available in (for the left player) or (for the right player) and then passing this chosen game to the other player. A player who cannot move because the choice is from the empty set has lost. A positive game represents a win for the left player, a negative game for the right player, a zero game for the second player to move, and a fuzzy game for the first player to move. If , , and are surreals, and , then . 
However, if , , and are games, and , then it is not always true that . Note that "" here means equality, not identity. Application to combinatorial game theory The surreal numbers were originally motivated by studies of the game Go, and there are numerous connections between popular games and the surreals. In this section, we will use a capitalized Game for the mathematical object , and the lowercase game for recreational games like Chess or Go. We consider games with these properties: Two players (named Left and Right) Deterministic (the game at each step will completely depend on the choices the players make, rather than a random factor) No hidden information (such as cards or tiles that a player hides) Players alternate taking turns (the game may or may not allow multiple moves in a turn) Every game must end in a finite number of moves As soon as there are no legal moves left for a player, the game ends, and that player loses For most games, the initial board position gives no great advantage to either player. As the game progresses and one player starts to win, board positions will occur in which that player has a clear advantage. For analyzing games, it is useful to associate a Game with every board position. The value of a given position will be the Game , where is the set of values of all the positions that can be reached in a single move by Left. Similarly, is the set of values of all the positions that can be reached in a single move by Right. The zero Game (called ) is the Game where and are both empty, so the player to move next ( or ) immediately loses. The sum of two Games and is defined as the Game where the player to move chooses which of the Games to play in at each stage, and the loser is still the player who ends up with no legal move. One can imagine two chess boards between two players, with players making moves alternately, but with complete freedom as to which board to play on. If is the Game , is the Game , i.e. with the role of the two players reversed. It is easy to show for all Games (where is defined as ). This simple way to associate Games with games yields a very interesting result. Suppose two perfect players play a game starting with a given position whose associated Game is . We can classify all Games into four classes as follows: If then Left will win, regardless of who plays first. If then Right will win, regardless of who plays first. If then the player who goes second will win. If then the player who goes first will win. More generally, we can define as , and similarly for , and . The notation means that and are incomparable. is equivalent to , i.e. that , and are all false. Incomparable games are sometimes said to be confused with each other, because one or the other may be preferred by a player depending on what is added to it. A game confused with zero is said to be fuzzy, as opposed to positive, negative, or zero. An example of a fuzzy game is star (*). Sometimes when a game nears the end, it will decompose into several smaller games that do not interact, except in that each player's turn allows moving in only one of them. For example, in Go, the board will slowly fill up with pieces until there are just a few small islands of empty space where a player can move. Each island is like a separate game of Go, played on a very small board. It would be useful if each subgame could be analyzed separately, and then the results combined to give an analysis of the entire game. This doesn't appear to be easy to do. 
For example, there might be two subgames where whoever moves first wins, but when they are combined into one big game, it is no longer the first player who wins. Fortunately, there is a way to do this analysis. The following theorem can be applied: If a big game decomposes into two smaller games, and the small games have associated Games of and , then the big game will have an associated Game of . A game composed of smaller games is called the disjunctive sum of those smaller games, and the theorem states that the method of addition we defined is equivalent to taking the disjunctive sum of the addends. Historically, Conway developed the theory of surreal numbers in the reverse order of how it has been presented here. He was analyzing Go endgames, and realized that it would be useful to have some way to combine the analyses of non-interacting subgames into an analysis of their disjunctive sum. From this he invented the concept of a Game and the addition operator for it. From there he moved on to developing a definition of negation and comparison. Then he noticed that a certain class of Games had interesting properties; this class became the surreal numbers. Finally, he developed the multiplication operator, and proved that the surreals are actually a field, and that it includes both the reals and ordinals. Alternative realizations Alternative approaches to the surreal numbers complement Conway's exposition in terms of games. Sign expansion Definitions In what is now called the sign-expansion or sign-sequence of a surreal number, a surreal number is a function whose domain is an ordinal and whose codomain is . This is equivalent to Conway's L-R sequences. Define the binary predicate "simpler than" on numbers by: is simpler than if is a proper subset of , i.e. if and for all . For surreal numbers define the binary relation to be lexicographic order (with the convention that "undefined values" are greater than and less than ). So if one of the following holds: is simpler than and ; is simpler than and ; there exists a number such that is simpler than , is simpler than , and . Equivalently, let , so that if and only if . Then, for numbers and , if and only if one of the following holds: For numbers and , if and only if , and if and only if . Also if and only if . The relation is transitive, and for all numbers and , exactly one of , , , holds (law of trichotomy). This means that is a linear order (except that is a proper class). For sets of numbers and such that , there exists a unique number such that For any number such that , or is simpler than . Furthermore, is constructible from and by transfinite induction. is the simplest number between and . Let the unique number be denoted by . For a number , define its left set and right set by then . One advantage of this alternative realization is that equality is identity, not an inductively defined relation. Unlike Conway's realization of the surreal numbers, however, the sign-expansion requires a prior construction of the ordinals, while in Conway's realization, the ordinals are constructed as particular cases of surreals. However, similar definitions can be made that eliminate the need for prior construction of the ordinals. For instance, we could let the surreals be the (recursively-defined) class of functions whose domain is a subset of the surreals satisfying the transitivity rule and whose range is . "Simpler than" is very simply defined now: is simpler than if . 
The total ordering is defined by considering and as sets of ordered pairs (as a function is normally defined): Either , or else the surreal number is in the domain of or the domain of (or both, but in this case the signs must disagree). We then have if or (or both). Converting these functions into sign sequences is a straightforward task; arrange the elements of in order of simplicity (i.e., inclusion), and then write down the signs that assigns to each of these elements in order. The ordinals then occur naturally as those surreal numbers whose range is . Addition and multiplication The sum of two numbers and is defined by induction on and by , where , . The additive identity is given by the number , i.e. the number is the unique function whose domain is the ordinal , and the additive inverse of the number is the number , given by , and, for , if , and if . It follows that a number is positive if and only if and , and is negative if and only if and . The product of two numbers, and , is defined by induction on and by , where The multiplicative identity is given by the number , i.e. the number has domain equal to the ordinal , and . Correspondence with Conway's realization The map from Conway's realization to sign expansions is given by , where and . The inverse map from the alternative realization to Conway's realization is given by , where and . Axiomatic approach In another approach to the surreals, given by Alling, explicit construction is bypassed altogether. Instead, a set of axioms is given that any particular approach to the surreals must satisfy. Much like the axiomatic approach to the reals, these axioms guarantee uniqueness up to isomorphism. A triple is a surreal number system if and only if the following hold: is a total order over is a function from onto the class of all ordinals ( is called the "birthday function" on ). Let and be subsets of such that for all and , (using Alling's terminology, is a "Conway cut" of ). Then there exists a unique such that is minimal and for all and all , . (This axiom is often referred to as "Conway's Simplicity Theorem".) Furthermore, if an ordinal is greater than for all , then . (Alling calls a system that satisfies this axiom a "full surreal number system".) Both Conway's original construction and the sign-expansion construction of surreals satisfy these axioms. Given these axioms, Alling derives Conway's original definition of and develops surreal arithmetic. Simplicity hierarchy A construction of the surreal numbers as a maximal binary pseudo-tree with simplicity (ancestor) and ordering relations is due to Philip Ehrlich. The difference from the usual definition of a tree is that the set of ancestors of a vertex is well-ordered, but may not have a maximal element (immediate predecessor); in other words the order type of that set is a general ordinal number, not just a natural number. This construction fulfills Alling's axioms as well and can easily be mapped to the sign-sequence representation. Ehrlich additionally constructed an isomorphism between Conway's maximal surreal number field and the maximal hyperreals in von Neumann–Bernays–Gödel set theory. Hahn series Alling also proves that the field of surreal numbers is isomorphic (as an ordered field) to the field of Hahn series with real coefficients on the value group of surreal numbers themselves (the series representation corresponding to the normal form of a surreal number, as defined above). 
This provides a connection between surreal numbers and more conventional mathematical approaches to ordered field theory. This isomorphism makes the surreal numbers into a valued field where the valuation is the additive inverse of the exponent of the leading term in the Conway normal form, e.g., . The valuation ring then consists of the finite surreal numbers (numbers with a real and/or an infinitesimal part). The reason for the sign inversion is that the exponents in the Conway normal form constitute a reverse well-ordered set, whereas Hahn series are formulated in terms of (non-reversed) well-ordered subsets of the value group. See also Hyperreal number Non-standard analysis Notes References Further reading Donald Knuth's original exposition: Surreal Numbers: How Two Ex-Students Turned on to Pure Mathematics and Found Total Happiness, 1974, . More information can be found at the book's official homepage (archived). An update of the classic 1976 book defining the surreal numbers, and exploring their connections to games: John Conway, On Numbers And Games, 2nd ed., 2001, . An update of the first part of the 1981 book that presented surreal numbers and the analysis of games to a broader audience: Berlekamp, Conway, and Guy, Winning Ways for Your Mathematical Plays, vol. 1, 2nd ed., 2001, . Martin Gardner, Penrose Tiles to Trapdoor Ciphers, W. H. Freeman & Co., 1989, , Chapter 4. A non-technical overview; reprint of the 1976 Scientific American article. Polly Shulman, "Infinity Plus One, and Other Surreal Numbers", Discover, December 1995. A detailed treatment of surreal numbers: Norman L. Alling, Foundations of Analysis over Surreal Number Fields, 1987, . A treatment of surreals based on the sign-expansion realization: Harry Gonshor, An Introduction to the Theory of Surreal Numbers, 1986, . A detailed philosophical development of the concept of surreal numbers as a most general concept of number: Alain Badiou, Number and Numbers, New York: Polity Press, 2008, (paperback), (hardcover). The surreal numbers are studied in the context of homotopy type theory in section 11.6. External links Hackenstrings, and the 0.999... ?= 1 FAQ, by A. N. Walker, an archive of the disappeared original A gentle yet thorough introduction by Claus Tøndering Good Math, Bad Math: Surreal Numbers, a series of articles about surreal numbers and their variations Conway's Mathematics after Conway, survey of Conway's accomplishments in the AMS Notices, with a section on surreal numbers Combinatorial game theory Mathematical logic Infinity Real closed field John Horton Conway Nonstandard analysis Numbers
Surreal number
[ "Mathematics" ]
10,445
[ "Recreational mathematics", "Mathematical logic", "Mathematical objects", "Infinity", "Combinatorics", "Nonstandard analysis", "Arithmetic", "Mathematics of infinitesimals", "Game theory", "Model theory", "Combinatorial game theory", "Numbers" ]
51,438
https://en.wikipedia.org/wiki/Hypercomplex%20number
In mathematics, hypercomplex number is a traditional term for an element of a finite-dimensional unital algebra over the field of real numbers. The study of hypercomplex numbers in the late 19th century forms the basis of modern group representation theory. History In the nineteenth century, number systems called quaternions, tessarines, coquaternions, biquaternions, and octonions became established concepts in mathematical literature, added to the real and complex numbers. The concept of a hypercomplex number covered them all, and called for a discipline to explain and classify them. The cataloguing project began in 1872 when Benjamin Peirce first published his Linear Associative Algebra, and was carried forward by his son Charles Sanders Peirce. Most significantly, they identified the nilpotent and the idempotent elements as useful hypercomplex numbers for classifications. The Cayley–Dickson construction used involutions to generate complex numbers, quaternions, and octonions out of the real number system. Hurwitz and Frobenius proved theorems that put limits on hypercomplexity: Hurwitz's theorem says finite-dimensional real composition algebras are the reals , the complexes , the quaternions , and the octonions , and the Frobenius theorem says the only real associative division algebras are , , and . In 1958 J. Frank Adams published a further generalization in terms of Hopf invariants on H-spaces which still limits the dimension to 1, 2, 4, or 8. It was matrix algebra that harnessed the hypercomplex systems. For instance, 2 x 2 real matrices were found isomorphic to coquaternions. Soon the matrix paradigm began to explain several others as they were represented by matrices and their operations. In 1907 Joseph Wedderburn showed that associative hypercomplex systems could be represented by square matrices, or direct products of algebras of square matrices. From that date the preferred term for a hypercomplex system became associative algebra, as seen in the title of Wedderburn's thesis at University of Edinburgh. Note however, that non-associative systems like octonions and hyperbolic quaternions represent another type of hypercomplex number. As Thomas Hawkins explains, the hypercomplex numbers are stepping stones to learning about Lie groups and group representation theory. For instance, in 1929 Emmy Noether wrote on "hypercomplex quantities and representation theory". In 1973 Kantor and Solodovnikov published a textbook on hypercomplex numbers which was translated in 1989. Karen Parshall has written a detailed exposition of the heyday of hypercomplex numbers, including the role of mathematicians including Theodor Molien and Eduard Study. For the transition to modern algebra, Bartel van der Waerden devotes thirty pages to hypercomplex numbers in his History of Algebra. Definition A definition of a hypercomplex number is given by as an element of a unital, but not necessarily associative or commutative, finite-dimensional algebra over the real numbers. Elements are generated with real number coefficients for a basis . Where possible, it is conventional to choose the basis so that . A technical approach to hypercomplex numbers directs attention first to those of dimension two. Two-dimensional real algebras Theorem: Up to isomorphism, there are exactly three 2-dimensional unital algebras over the reals: the ordinary complex numbers, the split-complex numbers, and the dual numbers. In particular, every 2-dimensional unital algebra over the reals is associative and commutative. 
Proof: Since the algebra is 2-dimensional, we can pick a basis . Since the algebra is closed under squaring, the non-real basis element u squares to a linear combination of 1 and u: for some real numbers a0 and a1. Using the common method of completing the square by subtracting a1u and adding the quadratic complement a/4 to both sides yields Thus where The three cases depend on this real value: If , the above formula yields . Hence, ũ can directly be identified with the nilpotent element of the basis of the dual numbers. If , the above formula yields . This leads to the split-complex numbers which have normalized basis with . To obtain j from ũ, the latter must be divided by the positive real number which has the same square as ũ has. If , the above formula yields . This leads to the complex numbers which have normalized basis with . To yield i from ũ, the latter has to be divided by a positive real number which squares to the negative of ũ2. The complex numbers are the only 2-dimensional hypercomplex algebra that is a field. Split algebras such as the split-complex numbers that include non-real roots of 1 also contain idempotents and zero divisors , so such algebras cannot be division algebras. However, these properties can turn out to be very meaningful, for instance in representing a light cone with a null cone. In a 2004 edition of Mathematics Magazine the 2-dimensional real algebras have been styled the "generalized complex numbers". The idea of cross-ratio of four complex numbers can be extended to the 2-dimensional real algebras. Higher-dimensional examples (more than one non-real axis) Clifford algebras A Clifford algebra is the unital associative algebra generated over an underlying vector space equipped with a quadratic form. Over the real numbers this is equivalent to being able to define a symmetric scalar product, that can be used to orthogonalise the quadratic form, to give a basis such that: Imposing closure under multiplication generates a multivector space spanned by a basis of 2k elements, . These can be interpreted as the basis of a hypercomplex number system. Unlike the basis , the remaining basis elements need not anti-commute, depending on how many simple exchanges must be carried out to swap the two factors. So , but . Putting aside the bases which contain an element ei such that (i.e. directions in the original space over which the quadratic form was degenerate), the remaining Clifford algebras can be identified by the label Clp,q(), indicating that the algebra is constructed from p simple basis elements with , q with , and where indicates that this is to be a Clifford algebra over the reals—i.e. coefficients of elements of the algebra are to be real numbers. These algebras, called geometric algebras, form a systematic set, which turn out to be very useful in physics problems which involve rotations, phases, or spins, notably in classical and quantum mechanics, electromagnetic theory and relativity. Examples include: the complex numbers Cl0,1(), split-complex numbers Cl1,0(), quaternions Cl0,2(), split-biquaternions Cl0,3(), split-quaternions (the natural algebra of two-dimensional space); Cl3,0() (the natural algebra of three-dimensional space, and the algebra of the Pauli matrices); and the spacetime algebra Cl1,3(). The elements of the algebra Clp,q() form an even subalgebra Cl() of the algebra Clq+1,p(), which can be used to parametrise rotations in the larger algebra. 
There is thus a close connection between complex numbers and rotations in two-dimensional space; between quaternions and rotations in three-dimensional space; between split-complex numbers and (hyperbolic) rotations (Lorentz transformations) in 1+1-dimensional space, and so on. Whereas Cayley–Dickson and split-complex constructs with eight or more dimensions are not associative with respect to multiplication, Clifford algebras retain associativity at any number of dimensions. In 1995 Ian R. Porteous wrote on "The recognition of subalgebras" in his book on Clifford algebras. His Proposition 11.4 summarizes the hypercomplex cases: Let A be a real associative algebra with unit element 1. Then 1 generates (algebra of real numbers), any two-dimensional subalgebra generated by an element e0 of A such that is isomorphic to (algebra of complex numbers), any two-dimensional subalgebra generated by an element e0 of A such that is isomorphic to 2 (pairs of real numbers with component-wise product, isomorphic to the algebra of split-complex numbers), any four-dimensional subalgebra generated by a set of mutually anti-commuting elements of A such that is isomorphic to (algebra of quaternions), any four-dimensional subalgebra generated by a set of mutually anti-commuting elements of A such that is isomorphic to M2() (2 × 2 real matrices, coquaternions), any eight-dimensional subalgebra generated by a set of mutually anti-commuting elements of A such that is isomorphic to 2 (split-biquaternions), any eight-dimensional subalgebra generated by a set of mutually anti-commuting elements of A such that is isomorphic to M2() ( complex matrices, biquaternions, Pauli algebra). Cayley–Dickson construction All of the Clifford algebras Clp,q() apart from the real numbers, complex numbers and the quaternions contain non-real elements that square to +1; and so cannot be division algebras. A different approach to extending the complex numbers is taken by the Cayley–Dickson construction. This generates number systems of dimension 2n, n = 2, 3, 4, ..., with bases , where all the non-real basis elements anti-commute and satisfy . In 8 or more dimensions () these algebras are non-associative. In 16 or more dimensions () these algebras also have zero-divisors. The first algebras in this sequence include the 4-dimensional quaternions, 8-dimensional octonions, and 16-dimensional sedenions. An algebraic symmetry is lost with each increase in dimensionality: quaternion multiplication is not commutative, octonion multiplication is non-associative, and the norm of sedenions is not multiplicative. After the sedenions are the 32-dimensional trigintaduonions (or 32-nions), the 64-dimensional sexagintaquatronions (or 64-nions), the 128-dimensional centumduodetrigintanions (or 128-nions), the 256-dimensional ducentiquinquagintasexions (or 256-nions), and ad infinitum, as summarized in the table below. The Cayley–Dickson construction can be modified by inserting an extra sign at some stages. It then generates the "split algebras" in the collection of composition algebras instead of the division algebras: split-complex numbers with basis satisfying , split-quaternions with basis satisfying , and split-octonions with basis satisfying , Unlike the complex numbers, the split-complex numbers are not algebraically closed, and further contain nontrivial zero divisors and nontrivial idempotents. 
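These properties of the split-complex numbers can be verified with a short direct computation. The sketch below is illustrative rather than taken from the article (the class name SplitComplex and the use of floating-point coefficients are arbitrary choices); it models numbers a + bj with j² = +1 and exhibits the nontrivial idempotents and zero divisors just mentioned.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SplitComplex:
        # a + b*j with j*j = +1 (the hyperbolic unit)
        a: float
        b: float

        def __add__(self, other):
            return SplitComplex(self.a + other.a, self.b + other.b)

        def __mul__(self, other):
            # (a + b j)(c + d j) = (ac + bd) + (ad + bc) j, using j*j = +1
            return SplitComplex(self.a * other.a + self.b * other.b,
                                self.a * other.b + self.b * other.a)

    e = SplitComplex(0.5, 0.5)    # (1 + j)/2
    f = SplitComplex(0.5, -0.5)   # (1 - j)/2
    assert e * e == e and f * f == f            # nontrivial idempotents
    assert e * f == SplitComplex(0.0, 0.0)      # nonzero factors whose product is zero

Because e·f = 0 with both factors nonzero, the algebra has zero divisors and so cannot be a division algebra, in line with the remarks earlier in this section.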
As with the quaternions, split-quaternions are not commutative, but further contain nilpotents; they are isomorphic to the square matrices of dimension two. Split-octonions are non-associative and contain nilpotents. Tensor products The tensor product of any two algebras is another algebra, which can be used to produce many more examples of hypercomplex number systems. In particular taking tensor products with the complex numbers (considered as algebras over the reals) leads to four-dimensional bicomplex numbers (isomorphic to tessarines ), eight-dimensional biquaternions , and 16-dimensional complex octonions . Further examples bicomplex numbers: a 4-dimensional vector space over the reals, 2-dimensional over the complex numbers, isomorphic to tessarines. multicomplex numbers: 2n-dimensional vector spaces over the reals, 2n−1-dimensional over the complex numbers composition algebra: algebra with a quadratic form that composes with the product See also Thomas Kirkman Georg Scheffers Richard Brauer Hypercomplex analysis References Further reading . and Ouvres Completes T.2 pt. 1, pp 107–246. External links (English translation) (English translation) History of mathematics Historical treatment of quaternions
Hypercomplex number
[ "Mathematics" ]
2,651
[ "Mathematical structures", "Mathematical objects", "Algebraic structures", "Hypercomplex numbers", "Numbers" ]
51,441
https://en.wikipedia.org/wiki/Zero%20divisor
In abstract algebra, an element of a ring is called a left zero divisor if there exists a nonzero in such that , or equivalently if the map from to that sends to is not injective. Similarly, an element of a ring is called a right zero divisor if there exists a nonzero in such that . This is a partial case of divisibility in rings. An element that is a left or a right zero divisor is simply called a zero divisor. An element  that is both a left and a right zero divisor is called a two-sided zero divisor (the nonzero such that may be different from the nonzero such that ). If the ring is commutative, then the left and right zero divisors are the same. An element of a ring that is not a left zero divisor (respectively, not a right zero divisor) is called left regular or left cancellable (respectively, right regular or right cancellable). An element of a ring that is left and right cancellable, and is hence not a zero divisor, is called regular or cancellable, or a non-zero-divisor. A zero divisor that is nonzero is called a nonzero zero divisor or a nontrivial zero divisor. A non-zero ring with no nontrivial zero divisors is called a domain. Examples In the ring , the residue class is a zero divisor since . The only zero divisor of the ring of integers is . A nilpotent element of a nonzero ring is always a two-sided zero divisor. An idempotent element of a ring is always a two-sided zero divisor, since . The ring of n × n matrices over a field has nonzero zero divisors if n ≥ 2. Examples of zero divisors in the ring of 2 × 2 matrices (over any nonzero ring) are shown here: A direct product of two or more nonzero rings always has nonzero zero divisors. For example, in with each nonzero, , so is a zero divisor. Let be a field and be a group. Suppose that has an element of finite order . Then in the group ring one has , with neither factor being zero, so is a nonzero zero divisor in . One-sided zero-divisor Consider the ring of (formal) matrices with and . Then and . If , then is a left zero divisor if and only if is even, since , and it is a right zero divisor if and only if is even for similar reasons. If either of is , then it is a two-sided zero-divisor. Here is another example of a ring with an element that is a zero divisor on one side only. Let be the set of all sequences of integers . Take for the ring all additive maps from to , with pointwise addition and composition as the ring operations. (That is, our ring is , the endomorphism ring of the additive group .) Three examples of elements of this ring are the right shift , the left shift , and the projection map onto the first factor . All three of these additive maps are not zero, and the composites and are both zero, so is a left zero divisor and is a right zero divisor in the ring of additive maps from to . However, is not a right zero divisor and is not a left zero divisor: the composite is the identity. is a two-sided zero-divisor since , while is not in any direction. Non-examples The ring of integers modulo a prime number has no nonzero zero divisors. Since every nonzero element is a unit, this ring is a finite field. More generally, a division ring has no nonzero zero divisors. A non-zero commutative ring whose only zero divisor is 0 is called an integral domain. Properties In the ring of  ×  matrices over a field, the left and right zero divisors coincide; they are precisely the singular matrices. In the ring of  ×  matrices over an integral domain, the zero divisors are precisely the matrices with determinant zero. 
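A short numerical check of this statement (a sketch, not taken from the article; NumPy is used only for the matrix products, and the particular singular matrices are arbitrary illustrative choices):

    import numpy as np

    A = np.array([[1, 1],
                  [2, 2]])          # det(A) = 0, but A is not the zero matrix
    B = np.array([[ 1,  1],
                  [-1, -1]])        # det(B) = 0, B nonzero
    C = np.array([[ 2, -1],
                  [ 2, -1]])        # det(C) = 0, C nonzero

    assert np.isclose(np.linalg.det(A), 0.0)
    assert np.all(A @ B == 0)       # A is a left zero divisor (and B a right one)
    assert np.all(C @ A == 0)       # A is also a right zero divisor

Over a field, being singular and being a (two-sided) zero divisor coincide for square matrices, which is what the computation above illustrates in the 2 × 2 case.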
Left or right zero divisors can never be units, because if is invertible and for some nonzero , then , a contradiction. An element is cancellable on the side on which it is regular. That is, if is a left regular, implies that , and similarly for right regular. Zero as a zero divisor There is no need for a separate convention for the case , because the definition applies also in this case: If is a ring other than the zero ring, then is a (two-sided) zero divisor, because any nonzero element satisfies . If is the zero ring, in which , then is not a zero divisor, because there is no nonzero element that when multiplied by yields . Some references include or exclude as a zero divisor in all rings by convention, but they then suffer from having to introduce exceptions in statements such as the following: In a commutative ring , the set of non-zero-divisors is a multiplicative set in . (This, in turn, is important for the definition of the total quotient ring.) The same is true of the set of non-left-zero-divisors and the set of non-right-zero-divisors in an arbitrary ring, commutative or not. In a commutative noetherian ring , the set of zero divisors is the union of the associated prime ideals of . Zero divisor on a module Let be a commutative ring, let be an -module, and let be an element of . One says that is -regular if the "multiplication by " map is injective, and that is a zero divisor on otherwise. The set of -regular elements is a multiplicative set in . Specializing the definitions of "-regular" and "zero divisor on " to the case recovers the definitions of "regular" and "zero divisor" given earlier in this article. See also Zero-product property Glossary of commutative algebra (Exact zero divisor) Zero-divisor graph Sedenions, which have zero divisors Notes References Further reading Abstract algebra Ring theory 0 (number) Sedenions
Zero divisor
[ "Mathematics" ]
1,375
[ "Abstract algebra", "Fields of abstract algebra", "Ring theory", "Algebra" ]
51,442
https://en.wikipedia.org/wiki/Zorn%27s%20lemma
Zorn's lemma, also known as the Kuratowski–Zorn lemma, is a proposition of set theory. It states that a partially ordered set containing upper bounds for every chain (that is, every totally ordered subset) necessarily contains at least one maximal element. The lemma was proved (assuming the axiom of choice) by Kazimierz Kuratowski in 1922 and independently by Max Zorn in 1935. It occurs in the proofs of several theorems of crucial importance, for instance the Hahn–Banach theorem in functional analysis, the theorem that every vector space has a basis, Tychonoff's theorem in topology stating that every product of compact spaces is compact, and the theorems in abstract algebra that in a ring with identity every proper ideal is contained in a maximal ideal and that every field has an algebraic closure. Zorn's lemma is equivalent to the well-ordering theorem and also to the axiom of choice, in the sense that within ZF (Zermelo–Fraenkel set theory without the axiom of choice) any one of the three is sufficient to prove the other two. An earlier formulation of Zorn's lemma is the Hausdorff maximal principle which states that every totally ordered subset of a given partially ordered set is contained in a maximal totally ordered subset of that partially ordered set. Motivation To prove the existence of a mathematical object that can be viewed as a maximal element in some partially ordered set in some way, one can try proving the existence of such an object by assuming there is no maximal element and using transfinite induction and the assumptions of the situation to get a contradiction. Zorn's lemma tidies up the conditions a situation needs to satisfy in order for such an argument to work and enables mathematicians to not have to repeat the transfinite induction argument by hand each time, but just check the conditions of Zorn's lemma. Statement of the lemma Preliminary notions: A set P equipped with a binary relation ≤ that is reflexive ( for every x), antisymmetric (if both and hold, then ), and transitive (if and then ) is said to be (partially) ordered by ≤. Given two elements x and y of P with x ≤ y, y is said to be greater than or equal to x. The word "partial" is meant to indicate that not every pair of elements of a partially ordered set is required to be comparable under the order relation, that is, in a partially ordered set P with order relation ≤ there may be elements x and y with neither x ≤ y nor y ≤ x. An ordered set in which every pair of elements is comparable is called totally ordered. Every subset S of a partially ordered set P can itself be seen as partially ordered by restricting the order relation inherited from P to S. A subset S of a partially ordered set P is called a chain (in P) if it is totally ordered in the inherited order. An element m of a partially ordered set P with order relation ≤ is maximal (with respect to ≤) if there is no other element of P greater than m, that is, if there is no s in P with s ≠ m and m ≤ s. Depending on the order relation, a partially ordered set may have any number of maximal elements. However, a totally ordered set can have at most one maximal element. Given a subset S of a partially ordered set P, an element u of P is an upper bound of S if it is greater than or equal to every element of S. Here, S is not required to be a chain, and u is required to be comparable to every element of S but need not itself be an element of S. 
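With these notions fixed, the lemma admits the following standard formulation (the hypotheses are numbered to match the discussion below):

    % Standard formulation; hypotheses numbered to match the discussion that follows.
    \textbf{Lemma (Zorn).} Let $(P,\le)$ be a partially ordered set such that
    \begin{enumerate}
      \item $P$ is non-empty, and
      \item every chain $T \subseteq P$ has an upper bound in $P$.
    \end{enumerate}
    Then $P$ contains at least one maximal element.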
Zorn's lemma can then be stated as: In fact, property (1) is redundant, since property (2) says, in particular, that the empty chain has an upper bound in , implying is nonempty. However, in practice, one often checks (1) and then verifies (2) only for nonempty chains, since the case of the empty chain is taken care by (1). In the terminology of Bourbaki, a partially ordered set is called inductive if each chain has an upper bound in the set (in particular, the set is then nonempty). Then the lemma can be stated as: For some applications, the following variant may be useful. Indeed, let with the partial ordering from . Then, for a chain in , an upper bound in is in and so satisfies the hypothesis of Zorn's lemma and a maximal element in is a maximal element in as well. Example applications Every vector space has a basis Zorn's lemma can be used to show that every vector space V has a basis. If V = {0}, then the empty set is a basis for V. Now, suppose that V ≠ {0}. Let P be the set consisting of all linearly independent subsets of V. Since V is not the zero vector space, there exists a nonzero element v of V, so P contains the linearly independent subset {v}. Furthermore, P is partially ordered by set inclusion (see inclusion order). Finding a maximal linearly independent subset of V is the same as finding a maximal element in P. To apply Zorn's lemma, take a chain T in P (that is, T is a subset of P that is totally ordered). If T is the empty set, then {v} is an upper bound for T in P. Suppose then that T is non-empty. We need to show that T has an upper bound, that is, there exists a linearly independent subset B of V containing all the members of T. Take B to be the union of all the sets in T. We wish to show that B is an upper bound for T in P. To do this, it suffices to show that B is a linearly independent subset of V. Suppose otherwise, that B is not linearly independent. Then there exists vectors v1, v2, ..., vk ∈ B and scalars a1, a2, ..., ak, not all zero, such that Since B is the union of all the sets in T, there are some sets S1, S2, ..., Sk ∈ T such that vi ∈ Si for every i = 1, 2, ..., k. As T is totally ordered, one of the sets S1, S2, ..., Sk must contain the others, so there is some set Si that contains all of v1, v2, ..., vk. This tells us there is a linearly dependent set of vectors in Si, contradicting that Si is linearly independent (because it is a member of P). The hypothesis of Zorn's lemma has been checked, and thus there is a maximal element in P, in other words a maximal linearly independent subset B of V. Finally, we show that B is indeed a basis of V. It suffices to show that B is a spanning set of V. Suppose for the sake of contradiction that B is not spanning. Then there exists some v ∈ V not covered by the span of B. This says that B ∪ {v} is a linearly independent subset of V that is larger than B, contradicting the maximality of B. Therefore, B is a spanning set of V, and thus, a basis of V. Every nontrivial ring with unity contains a maximal ideal Zorn's lemma can be used to show that every nontrivial ring R with unity contains a maximal ideal. Let P be the set consisting of all proper ideals in R (that is, all ideals in R except R itself). Since R is non-trivial, the set P contains the trivial ideal {0}. Furthermore, P is partially ordered by set inclusion. Finding a maximal ideal in R is the same as finding a maximal element in P. To apply Zorn's lemma, take a chain T in P. If T is empty, then the trivial ideal {0} is an upper bound for T in P. 
Assume then that T is non-empty. It is necessary to show that T has an upper bound, that is, there exists an ideal I ⊆ R containing all the members of T but still smaller than R (otherwise it would not be a proper ideal, so it is not in P). Take I to be the union of all the ideals in T. We wish to show that I is an upper bound for T in P. We will first show that I is an ideal of R. For I to be an ideal, it must satisfy three conditions: I is a nonempty subset of R, For every x, y ∈ I, the sum x + y is in I, For every r ∈ R and every x ∈ I, the product rx is in I. #1 - I is a nonempty subset of R. Because T contains at least one element, and that element contains at least 0, the union I contains at least 0 and is not empty. Every element of T is a subset of R, so the union I only consists of elements in R. #2 - For every x, y ∈ I, the sum x + y is in I. Suppose x and y are elements of I. Then there exist two ideals J, K ∈ T such that x is an element of J and y is an element of K. Since T is totally ordered, we know that J ⊆ K or K ⊆ J. Without loss of generality, assume the first case. Both x and y are members of the ideal K, therefore their sum x + y is a member of K, which shows that x + y is a member of I. #3 - For every r ∈ R and every x ∈ I, the product rx is in I. Suppose x is an element of I. Then there exists an ideal J ∈ T such that x is in J. If r ∈ R, then rx is an element of J and hence an element of I. Thus, I is an ideal in R. Now, we show that I is a proper ideal. An ideal is equal to R if and only if it contains 1. (It is clear that if it is R then it contains 1; on the other hand, if it contains 1 and r is an arbitrary element of R, then r1 = r is an element of the ideal, and so the ideal is equal to R.) So, if I were equal to R, then it would contain 1, and that means one of the members of T would contain 1 and would thus be equal to R – but R is explicitly excluded from P. The hypothesis of Zorn's lemma has been checked, and thus there is a maximal element in P, in other words a maximal ideal in R. Proof sketch A sketch of the proof of Zorn's lemma follows, assuming the axiom of choice. Suppose the lemma is false. Then there exists a partially ordered set, or poset, P such that every totally ordered subset has an upper bound, and that for every element in P there is another element bigger than it. For every totally ordered subset T we may then define a bigger element b(T), because T has an upper bound, and that upper bound has a bigger element. To actually define the function b, we need to employ the axiom of choice (explicitly: let , that is, the set of upper bounds for T. The axiom of choice furnishes ). Using the function b, we are going to define elements a0 < a1 < a2 < a3 < ... < aω < aω+1 <…, in P. This uncountable sequence is really long: the indices are not just the natural numbers, but all ordinals. In fact, the sequence is too long for the set P; there are too many ordinals (a proper class), more than there are elements in any set (in other words, given any set of ordinals, there exists a larger ordinal), and the set P will be exhausted before long and then we will run into the desired contradiction. The ai are defined by transfinite recursion: we pick a0 in P arbitrary (this is possible, since P contains an upper bound for the empty set and is thus not empty) and for any other ordinal w we set aw = b({av : v < w}). Because the av are totally ordered, this is a well-founded definition. 
The above proof can be formulated without explicitly referring to ordinals by considering the initial segments {av : v < w} as subsets of P. Such sets can be easily characterized as well-ordered chains S ⊆ P where each x ∈ S satisfies x = b({y ∈ S : y < x}). Contradiction is reached by noting that we can always find a "next" initial segment either by taking the union of all such S (corresponding to the limit ordinal case) or by appending b(S) to the "last" S (corresponding to the successor ordinal case). This proof shows that actually a slightly stronger version of Zorn's lemma is true: Alternatively, one can use the same proof for the Hausdorff maximal principle. This is the proof given for example in Halmos' Naive Set Theory or in below. Finally, the Bourbaki–Witt theorem can also be used to give a proof. Proof The basic idea of the proof is to reduce the proof to proving the following weak form of Zorn's lemma: (Note that, strictly speaking, (1) is redundant since (2) implies the empty set is in .) Note the above is a weak form of Zorn's lemma since Zorn's lemma says in particular that any set of subsets satisfying the above (1) and (2) has a maximal element ((3) is not needed). The point is that, conversely, Zorn's lemma follows from this weak form. Indeed, let be the set of all chains in . Then it satisfies all of the above properties (it is nonempty since the empty subset is a chain.) Thus, by the above weak form, we find a maximal element in ; i.e., a maximal chain in . By the hypothesis of Zorn's lemma, has an upper bound in . Then this is a maximal element since if , then is larger than or equal to and so . Thus, . The proof of the weak form is given in Hausdorff maximal principle#Proof. Indeed, the existence of a maximal chain is exactly the assertion of the Hausdorff maximal principle. The same proof also shows the following equivalent variant of Zorn's lemma: Indeed, trivially, Zorn's lemma implies the above lemma. Conversely, the above lemma implies the aforementioned weak form of Zorn's lemma, since a union gives a least upper bound. Zorn's lemma implies the axiom of choice A proof that Zorn's lemma implies the axiom of choice illustrates a typical application of Zorn's lemma. (The structure of the proof is exactly the same as the one for the Hahn–Banach theorem.) Given a set of nonempty sets and its union (which exists by the axiom of union), we want to show there is a function such that for each . For that end, consider the set . It is partially ordered by extension; i.e., if and only if is the restriction of . If is a chain in , then we can define the function on the union by setting when . This is well-defined since if , then is the restriction of . The function is also an element of and is a common extension of all 's. Thus, we have shown that each chain in has an upper bound in . Hence, by Zorn's lemma, there is a maximal element in that is defined on some . We want to show . Suppose otherwise; then there is a set . As is nonempty, it contains an element . We can then extend to a function by setting and . (Note this step does not need the axiom of choice.) The function is in and , a contradiction to the maximality of . Essentially the same proof also shows that Zorn's lemma implies the well-ordering theorem: take to be the set of all well-ordered subsets of a given set and then shows a maximal element of is . History The Hausdorff maximal principle is an early statement similar to Zorn's lemma. 
Kazimierz Kuratowski proved in 1922 a version of the lemma close to its modern formulation (it applies to sets ordered by inclusion and closed under unions of well-ordered chains). Essentially the same formulation (weakened by using arbitrary chains, not just well-ordered) was independently given by Max Zorn in 1935, who proposed it as a new axiom of set theory replacing the well-ordering theorem, exhibited some of its applications in algebra, and promised to show its equivalence with the axiom of choice in another paper, which never appeared. The name "Zorn's lemma" appears to be due to John Tukey, who used it in his book Convergence and Uniformity in Topology in 1940. Bourbaki's Théorie des Ensembles of 1939 refers to a similar maximal principle as "le théorème de Zorn". The name "Kuratowski–Zorn lemma" prevails in Poland and Russia. Equivalent forms of Zorn's lemma Zorn's lemma is equivalent (in ZF) to three main results: Hausdorff maximal principle Axiom of choice Well-ordering theorem. A well-known joke alluding to this equivalency (which may defy human intuition) is attributed to Jerry Bona: "The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?" Zorn's lemma is also equivalent to the strong completeness theorem of first-order logic. Moreover, Zorn's lemma (or one of its equivalent forms) implies some major results in other mathematical areas. For example, Banach's extension theorem which is used to prove one of the most fundamental results in functional analysis, the Hahn–Banach theorem Every vector space has a basis, a result from linear algebra (to which it is equivalent). In particular, the real numbers, as a vector space over the rational numbers, possess a Hamel basis. Every commutative unital ring has a maximal ideal, a result from ring theory known as Krull's theorem, to which Zorn's lemma is equivalent Tychonoff's theorem in topology (to which it is also equivalent) Every proper filter is contained in an ultrafilter, a result that yields the completeness theorem of first-order logic In this sense, Zorn's lemma is a powerful tool, applicable to many areas of mathematics. Analogs under weakenings of the axiom of choice A weakened form of Zorn's lemma can be proven from ZF + DC (Zermelo–Fraenkel set theory with the axiom of choice replaced by the axiom of dependent choice). Zorn's lemma can be expressed straightforwardly by observing that the set having no maximal element would be equivalent to stating that the set's ordering relation would be entire, which would allow us to apply the axiom of dependent choice to construct a countable chain. As a result, any partially ordered set with exclusively finite chains must have a maximal element. More generally, strengthening the axiom of dependent choice to higher ordinals allows us to generalize the statement in the previous paragraph to higher cardinalities. In the limit where we allow arbitrarily large ordinals, we recover the proof of the full Zorn's lemma using the axiom of choice in the preceding section. In popular culture The 1970 film Zorns Lemma is named after the lemma. The lemma was referenced on The Simpsons in the episode "Bart's New Friend". See also Chain-complete partial order – a partially ordered set in which every chain has a least upper bound Teichmüller–Tukey lemma (sometimes named Tukey's lemma) Notes References Further reading The Zorn Identity at the n-category cafe. 
External links Zorn's Lemma at ProvenMath contains a formal proof down to the finest detail of the equivalence of the axiom of choice and Zorn's Lemma. Zorn's Lemma at Metamath is another formal proof. (Unicode version for recent browsers.) Axiom of choice Lemmas in set theory Order theory
Zorn's lemma
[ "Mathematics" ]
4,367
[ "Lemmas in set theory", "Mathematical axioms", "Axiom of choice", "Axioms of set theory", "Order theory", "Lemmas", "Theorems in the foundations of mathematics" ]
51,453
https://en.wikipedia.org/wiki/Singular%20function
In mathematics, a real-valued function f on the interval [a, b] is said to be singular if it has the following properties: f is continuous on [a, b]. (**) there exists a set N of measure 0 such that for all x outside of N, the derivative f′(x) exists and is zero; that is, the derivative of f vanishes almost everywhere. f is non-constant on [a, b]. A standard example of a singular function is the Cantor function, which is sometimes called the devil's staircase (a term also used for singular functions in general). There are, however, other functions that have been given that name. One is defined in terms of the circle map. If f(x) = 0 for all x ≤ a and f(x) = 1 for all x ≥ b, then the function can be taken to represent a cumulative distribution function for a random variable which is neither a discrete random variable (since the probability is zero for each point) nor an absolutely continuous random variable (since the probability density is zero everywhere it exists). Singular functions occur, for instance, as sequences of spatially modulated phases or structures in solids and magnets, described in a prototypical fashion by the Frenkel–Kontorova model and by the ANNNI model, as well as in some dynamical systems. Most famously, perhaps, they lie at the center of the fractional quantum Hall effect. When referring to functions with a singularity When discussing mathematical analysis in general, or more specifically real analysis or complex analysis or differential equations, it is common for a function which contains a mathematical singularity to be referred to as a 'singular function'. This is especially true when referring to functions which diverge to infinity at a point or on a boundary. For example, one might say, "1/x becomes singular at the origin, so 1/x is a singular function." Advanced techniques for working with functions that contain singularities have been developed in the subject called distributional or generalized function analysis. A weak derivative is defined that allows singular functions to be used in partial differential equations, etc. See also Absolute continuity Mathematical singularity Generalized function Distribution Minkowski's question-mark function References (**) This condition depends on the references Fractal curves Types of functions
Singular function
[ "Mathematics" ]
481
[ "Mathematical objects", "Functions and mappings", "Types of functions", "Mathematical relations" ]
51,462
https://en.wikipedia.org/wiki/Machine
A machine is a physical system that uses power to apply forces and control movement to perform an action. The term is commonly applied to artificial devices, such as those employing engines or motors, but also to natural biological macromolecules, such as molecular machines. Machines can be driven by animals and people, by natural forces such as wind and water, and by chemical, thermal, or electrical power, and include a system of mechanisms that shape the actuator input to achieve a specific application of output forces and movement. They can also include computers and sensors that monitor performance and plan movement, often called mechanical systems. Renaissance natural philosophers identified six simple machines which were the elementary devices that put a load into motion, and calculated the ratio of output force to input force, known today as mechanical advantage. Modern machines are complex systems that consist of structural elements, mechanisms and control components and include interfaces for convenient use. Examples include: a wide range of vehicles, such as trains, automobiles, boats and airplanes; appliances in the home and office, including computers, building air handling and water handling systems; as well as farm machinery, machine tools and factory automation systems and robots. Etymology The English word machine comes through Middle French from Latin , which in turn derives from the Greek (Doric , Ionic 'contrivance, machine, engine', a derivation from 'means, expedient, remedy'). The word mechanical (Greek: ) comes from the same Greek roots. A wider meaning of 'fabric, structure' is found in classical Latin, but not in Greek usage. This meaning is found in late medieval French, and is adopted from the French into English in the mid-16th century. In the 17th century, the word machine could also mean a scheme or plot, a meaning now expressed by the derived machination. The modern meaning develops out of specialized application of the term to stage engines used in theater and to military siege engines, both in the late 16th and early 17th centuries. The OED traces the formal, modern meaning to John Harris' Lexicon Technicum (1704), which has: Machine, or Engine, in Mechanicks, is whatsoever hath Force sufficient either to raise or stop the Motion of a Body. Simple Machines are commonly reckoned to be Six in Number, viz. the Ballance, Leaver, Pulley, Wheel, Wedge, and Screw. Compound Machines, or Engines, are innumerable. The word engine used as a (near-) synonym both by Harris and in later language derives ultimately (via Old French) from Latin 'ingenuity, an invention'. History The hand axe, made by chipping flint to form a wedge, in the hands of a human transforms force and movement of the tool into a transverse splitting forces and movement of the workpiece. The hand axe is the first example of a wedge, the oldest of the six classic simple machines, from which most machines are based. The second oldest simple machine was the inclined plane (ramp), which has been used since prehistoric times to move heavy objects. The other four simple machines were invented in the ancient Near East. The wheel, along with the wheel and axle mechanism, was invented in Mesopotamia (modern Iraq) during the 5th millennium BC. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale, and to move large objects in ancient Egyptian technology. 
The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia , and then in ancient Egyptian technology . The earliest evidence of pulleys date back to Mesopotamia in the early 2nd millennium BC, and ancient Egypt during the Twelfth Dynasty (1991–1802 BC). The screw, the last of the simple machines to be invented, first appeared in Mesopotamia during the Neo-Assyrian period (911–609) BC. The Egyptian pyramids were built using three of the six simple machines, the inclined plane, the wedge, and the lever. Three of the simple machines were studied and described by Greek philosopher Archimedes around the 3rd century BC: the lever, pulley and screw. Archimedes discovered the principle of mechanical advantage in the lever. Later Greek philosophers defined the classic five simple machines (excluding the inclined plane) and were able to roughly calculate their mechanical advantage. Hero of Alexandria (–75 AD) in his work Mechanics lists five mechanisms that can "set a load in motion"; lever, windlass, pulley, wedge, and screw, and describes their fabrication and uses. However, the Greeks' understanding was limited to statics (the balance of forces) and did not include dynamics (the tradeoff between force and distance) or the concept of work. The earliest practical wind-powered machines, the windmill and wind pump, first appeared in the Muslim world during the Islamic Golden Age, in what are now Iran, Afghanistan, and Pakistan, by the 9th century AD. The earliest practical steam-powered machine was a steam jack driven by a steam turbine, described in 1551 by Taqi ad-Din Muhammad ibn Ma'ruf in Ottoman Egypt. The cotton gin was invented in India by the 6th century AD, and the spinning wheel was invented in the Islamic world by the early 11th century, both of which were fundamental to the growth of the cotton industry. The spinning wheel was also a precursor to the spinning jenny. The earliest programmable machines were developed in the Muslim world. A music sequencer, a programmable musical instrument, was the earliest type of programmable machine. The first music sequencer was an automated flute player invented by the Banu Musa brothers, described in their Book of Ingenious Devices, in the 9th century. In 1206, Al-Jazari invented programmable automata/robots. He described four automaton musicians, including drummers operated by a programmable drum machine, where they could be made to play different rhythms and different drum patterns. During the Renaissance, the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how much useful work they could perform, leading eventually to the new concept of mechanical work. In 1586 Flemish engineer Simon Stevin derived the mechanical advantage of the inclined plane, and it was included with the other simple machines. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche ("On Mechanics"). He was the first to understand that simple machines do not create energy, they merely transform it. The classic rules of sliding friction in machines were discovered by Leonardo da Vinci (1452–1519), but remained unpublished in his notebooks. They were rediscovered by Guillaume Amontons (1699) and were further developed by Charles-Augustin de Coulomb (1785). James Watt patented his parallel motion linkage in 1782, which made the double acting steam engine practical. 
The Boulton and Watt steam engine and later designs powered steam locomotives, steam ships, and factories. The Industrial Revolution was a period from 1750 to 1850 where changes in agriculture, manufacturing, mining, transportation, and technology had a profound effect on the social, economic and cultural conditions of the times. It began in the United Kingdom, then subsequently spread throughout Western Europe, North America, Japan, and eventually the rest of the world. Starting in the later part of the 18th century, there began a transition in parts of Great Britain's previously manual labour and draft-animal-based economy towards machine-based manufacturing. It started with the mechanisation of the textile industries, the development of iron-making techniques and the increased use of refined coal. Simple machines The idea that a machine can be decomposed into simple movable elements led Archimedes to define the lever, pulley and screw as simple machines. By the time of the Renaissance this list increased to include the wheel and axle, wedge and inclined plane. The modern approach to characterizing machines focusses on the components that allow movement, known as joints. Wedge (hand axe): Perhaps the first example of a device designed to manage power is the hand axe, also called biface and Olorgesailie. A hand axe is made by chipping stone, generally flint, to form a bifacial edge, or wedge. A wedge is a simple machine that transforms lateral force and movement of the tool into a transverse splitting force and movement of the workpiece. The available power is limited by the effort of the person using the tool, but because power is the product of force and movement, the wedge amplifies the force by reducing the movement. This amplification, or mechanical advantage is the ratio of the input speed to output speed. For a wedge this is given by 1/tanα, where α is the tip angle. The faces of a wedge are modeled as straight lines to form a sliding or prismatic joint. Lever: The lever is another important and simple device for managing power. This is a body that pivots on a fulcrum. Because the velocity of a point farther from the pivot is greater than the velocity of a point near the pivot, forces applied far from the pivot are amplified near the pivot by the associated decrease in speed. If a is the distance from the pivot to the point where the input force is applied and b is the distance to the point where the output force is applied, then a/b is the mechanical advantage of the lever. The fulcrum of a lever is modeled as a hinged or revolute joint. Wheel: The wheel is an important early machine, such as the chariot. A wheel uses the law of the lever to reduce the force needed to overcome friction when pulling a load. To see this notice that the friction associated with pulling a load on the ground is approximately the same as the friction in a simple bearing that supports the load on the axle of a wheel. However, the wheel forms a lever that magnifies the pulling force so that it overcomes the frictional resistance in the bearing. The classification of simple machines to provide a strategy for the design of new machines was developed by Franz Reuleaux, who collected and studied over 800 elementary machines. He recognized that the classical simple machines can be separated into the lever, pulley and wheel and axle that are formed by a body rotating about a hinge, and the inclined plane, wedge and screw that are similarly a block sliding on a flat surface. 
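The two mechanical-advantage formulas quoted above (1/tan α for the wedge and a/b for the lever) are easy to evaluate directly. The short sketch below is illustrative only (the function names are arbitrary choices, not taken from any particular library), and makes the force amplification concrete.

    import math

    def wedge_mechanical_advantage(tip_angle_rad: float) -> float:
        # Ideal mechanical advantage of a wedge with tip angle alpha: 1 / tan(alpha)
        return 1.0 / math.tan(tip_angle_rad)

    def lever_mechanical_advantage(a: float, b: float) -> float:
        # Ideal mechanical advantage of a lever: input arm a divided by output arm b
        return a / b

    # A slender 10-degree wedge multiplies the applied force roughly 5.7 times.
    print(round(wedge_mechanical_advantage(math.radians(10)), 1))   # 5.7
    # Pushing 2 m from the fulcrum against a load 0.5 m from it gives a 4:1 advantage.
    print(lever_mechanical_advantage(2.0, 0.5))                     # 4.0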
Simple machines are elementary examples of kinematic chains or linkages that are used to model mechanical systems ranging from the steam engine to robot manipulators. The bearings that form the fulcrum of a lever and that allow the wheel and axle and pulleys to rotate are examples of a kinematic pair called a hinged joint. Similarly, the flat surface of an inclined plane and wedge are examples of the kinematic pair called a sliding joint. The screw is usually identified as its own kinematic pair called a helical joint. This realization shows that it is the joints, or the connections that provide movement, that are the primary elements of a machine. Starting with four types of joints, the rotary joint, sliding joint, cam joint and gear joint, and related connections such as cables and belts, it is possible to understand a machine as an assembly of solid parts that connect these joints called a mechanism . Two levers, or cranks, are combined into a planar four-bar linkage by attaching a link that connects the output of one crank to the input of another. Additional links can be attached to form a six-bar linkage or in series to form a robot. Mechanical systems A mechanical system manages power to accomplish a task that involves forces and movement. Modern machines are systems consisting of (i) a power source and actuators that generate forces and movement, (ii) a system of mechanisms that shape the actuator input to achieve a specific application of output forces and movement, (iii) a controller with sensors that compare the output to a performance goal and then directs the actuator input, and (iv) an interface to an operator consisting of levers, switches, and displays. This can be seen in Watt's steam engine in which the power is provided by steam expanding to drive the piston. The walking beam, coupler and crank transform the linear movement of the piston into rotation of the output pulley. Finally, the pulley rotation drives the flyball governor which controls the valve for the steam input to the piston cylinder. The adjective "mechanical" refers to skill in the practical application of an art or science, as well as relating to or caused by movement, physical forces, properties or agents such as is dealt with by mechanics. Similarly Merriam-Webster Dictionary defines "mechanical" as relating to machinery or tools. Power flow through a machine provides a way to understand the performance of devices ranging from levers and gear trains to automobiles and robotic systems. The German mechanician Franz Reuleaux wrote, "a machine is a combination of resistant bodies so arranged that by their means the mechanical forces of nature can be compelled to do work accompanied by certain determinate motion." Notice that forces and motion combine to define power. More recently, Uicker et al. stated that a machine is "a device for applying power or changing its direction."McCarthy and Soh describe a machine as a system that "generally consists of a power source and a mechanism for the controlled use of this power." Power sources Human and animal effort were the original power sources for early machines. Waterwheel: Waterwheels appeared around the world around 300 BC to use flowing water to generate rotary motion, which was applied to milling grain, and powering lumber, machining and textile operations. Modern water turbines use water flowing through a dam to drive an electric generator. Windmill: Early windmills captured wind power to generate rotary motion for milling operations. 
Modern wind turbines also drive a generator. This electricity in turn is used to drive motors forming the actuators of mechanical systems. Engine: The word engine derives from "ingenuity" and originally referred to contrivances that may or may not be physical devices. A steam engine uses heat to boil water contained in a pressure vessel; the expanding steam drives a piston or a turbine. This principle can be seen in the aeolipile of Hero of Alexandria. This is called an external combustion engine. An automobile engine is called an internal combustion engine because it burns fuel (an exothermic chemical reaction) inside a cylinder and uses the expanding gases to drive a piston. A jet engine uses a turbine to compress air which is burned with fuel so that it expands through a nozzle to provide thrust to an aircraft, and so is also an "internal combustion engine." Power plant: The heat from coal and natural gas combustion in a boiler generates steam that drives a steam turbine to rotate an electric generator. A nuclear power plant uses heat from a nuclear reactor to generate steam and electric power. This power is distributed through a network of transmission lines for industrial and individual use. Motors: Electric motors use either AC or DC electric current to generate rotational movement. Electric servomotors are the actuators for mechanical systems ranging from robotic systems to modern aircraft. Fluid Power: Hydraulic and pneumatic systems use electrically driven pumps to drive water or air respectively into cylinders to power linear movement. Electrochemical: Chemicals and materials can also be sources of power. They may chemically deplete or need re-charging, as is the case with batteries, or they may produce power without changing their state, which is the case for solar cells and thermoelectric generators. All of these, however, still require their energy to come from elsewhere. With batteries, it is the already existing chemical potential energy inside. In solar cells and thermoelectrics, the energy source is light and heat respectively. Mechanisms The mechanism of a mechanical system is assembled from components called machine elements. These elements provide structure for the system and control its movement. The structural components are, generally, the frame members, bearings, splines, springs, seals, fasteners and covers. The shape, texture and color of covers provide a styling and operational interface between the mechanical system and its users. The assemblies that control movement are also called "mechanisms." Mechanisms are generally classified as gears and gear trains, which include belt drives and chain drives, cam and follower mechanisms, and linkages, though there are other special mechanisms such as clamping linkages, indexing mechanisms, escapements and friction devices such as brakes and clutches. The number of degrees of freedom of a mechanism, or its mobility, depends on the number of links and joints and the types of joints used to construct the mechanism. The general mobility of a mechanism is the difference between the unconstrained freedom of the links and the number of constraints imposed by the joints. It is described by the Chebychev–Grübler–Kutzbach criterion. Gears and gear trains The transmission of rotation between contacting toothed wheels can be traced back to the Antikythera mechanism of Greece and the south-pointing chariot of China. Illustrations by the renaissance scientist Georgius Agricola show gear trains with cylindrical teeth.
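Before continuing with gear geometry, the mobility count described above can be made concrete. For a planar mechanism the Chebychev–Grübler–Kutzbach criterion is usually written M = 3(N − 1) − 2j₁ − j₂, where N is the number of links (one of them fixed), j₁ the number of one-degree-of-freedom joints and j₂ the number of two-degree-of-freedom joints; the sketch below assumes this standard planar form and is not taken from the article.

    def planar_mobility(links: int, one_dof_joints: int, two_dof_joints: int = 0) -> int:
        # Chebychev–Grübler–Kutzbach count for a planar mechanism with one link fixed:
        # M = 3*(N - 1) - 2*j1 - j2
        return 3 * (links - 1) - 2 * one_dof_joints - two_dof_joints

    print(planar_mobility(4, 4))   # 1: a planar four-bar linkage has a single degree of freedom
    print(planar_mobility(5, 5))   # 2: a five-bar chain needs two independent inputs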
The implementation of the involute tooth yielded a standard gear design that provides a constant speed ratio. Some important features of gears and gear trains are: The ratio of the pitch circles of mating gears defines the speed ratio and the mechanical advantage of the gear set. A planetary gear train provides high gear reduction in a compact package. It is possible to design gear teeth for gears that are non-circular, yet still transmit torque smoothly. The speed ratios of chain and belt drives are computed in the same way as gear ratios. See bicycle gearing. Cam and follower mechanisms A cam and follower is formed by the direct contact of two specially shaped links. The driving link is called the cam (also see cam shaft) and the link that is driven through the direct contact of their surfaces is called the follower. The shape of the contacting surfaces of the cam and follower determines the movement of the mechanism. Linkages A linkage is a collection of links connected by joints. Generally, the links are the structural elements and the joints allow movement. Perhaps the single most useful example is the planar four-bar linkage. However, there are many more special linkages: Watt's linkage is a four-bar linkage that generates an approximate straight line. It was critical to the operation of his design for the steam engine. This linkage also appears in vehicle suspensions to prevent side-to-side movement of the body relative to the wheels. Also see the article Parallel motion. The success of Watt's linkage led to the design of similar approximate straight-line linkages, such as Hoeken's linkage and Chebyshev's linkage. The Peaucellier linkage generates a true straight-line output from a rotary input. The Sarrus linkage is a spatial linkage that generates straight-line movement from a rotary input. The Klann linkage and the Jansen linkage are recent inventions that provide interesting walking movements. They are respectively a six-bar and an eight-bar linkage. Planar mechanism A planar mechanism is a mechanical system that is constrained so the trajectories of points in all the bodies of the system lie on planes parallel to a ground plane. The rotational axes of hinged joints that connect the bodies in the system are perpendicular to this ground plane. Spherical mechanism A spherical mechanism is a mechanical system in which the bodies move in a way that the trajectories of points in the system lie on concentric spheres. The rotational axes of hinged joints that connect the bodies in the system pass through the center of these spheres. Spatial mechanism A spatial mechanism is a mechanical system that has at least one body that moves in a way that its point trajectories are general space curves. The rotational axes of hinged joints that connect the bodies in the system form lines in space that do not intersect and have distinct common normals. Flexure mechanisms A flexure mechanism consists of a series of rigid bodies connected by compliant elements (also known as flexure joints) that is designed to produce a geometrically well-defined motion upon application of a force. Machine elements The elementary mechanical components of a machine are termed machine elements.
Machine elements The elementary mechanical components of a machine are termed machine elements. These elements consist of three basic types: (i) structural components such as frame members, bearings, axles, splines, fasteners, seals, and lubricants, (ii) mechanisms that control movement in various ways such as gear trains, belt or chain drives, linkages, cam and follower systems, including brakes and clutches, and (iii) control components such as buttons, switches, indicators, sensors, actuators and computer controllers. While generally not considered to be a machine element, the shape, texture and color of covers are an important part of a machine that provide a styling and operational interface between the mechanical components of a machine and its users. Structural components A number of machine elements provide important structural functions such as the frame, bearings, splines, springs and seals. The recognition that the frame of a mechanism is an important machine element changed the name three-bar linkage into four-bar linkage. Frames are generally assembled from truss or beam elements. Bearings are components designed to manage the interface between moving elements and are the source of friction in machines. In general, bearings are designed for pure rotation or straight-line movement. Splines and keys are two ways to reliably mount an axle to a wheel, pulley or gear so that torque can be transferred through the connection. Springs provide forces that can either hold components of a machine in place or act as a suspension to support part of a machine. Seals are used between mating parts of a machine to ensure that fluids, such as water, hot gases, or lubricant, do not leak between the mating surfaces. Fasteners such as screws, bolts, spring clips, and rivets are critical to the assembly of components of a machine. Fasteners are generally considered to be removable. In contrast, joining methods, such as welding, soldering, crimping and the application of adhesives, usually require cutting the parts to disassemble the components. Controllers Controllers combine sensors, logic, and actuators to maintain the performance of components of a machine. Perhaps the best known is the flyball governor for a steam engine. Examples of these devices range from a thermostat that opens a valve to cooling water as the temperature rises, to speed controllers such as the cruise control system in an automobile. The programmable logic controller replaced relays and specialized control mechanisms with a programmable computer. Servomotors that accurately position a shaft in response to an electrical command are the actuators that make robotic systems possible.
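As a concrete illustration of the sensor-logic-actuator loop described in the Controllers paragraph above, the sketch below simulates a minimal proportional speed controller of the kind used for cruise control. It is not based on any particular product; the gain, loop period and toy vehicle model are invented for illustration.

```python
# Minimal proportional cruise-control loop; every number here is illustrative.

def simulate_cruise(setpoint_mps: float = 30.0, steps: int = 50) -> float:
    speed = 20.0   # speed read from a (simulated) wheel-speed sensor, m/s
    gain = 0.5     # proportional gain: throttle command per m/s of error
    dt = 0.5       # control-loop period, s
    for _ in range(steps):
        error = setpoint_mps - speed                   # logic: compare sensor to setpoint
        throttle = max(0.0, min(1.0, gain * error))    # actuator command, clamped to 0..1
        accel = 3.0 * throttle - 0.02 * speed          # toy vehicle response with drag
        speed += accel * dt
    return speed

print(round(simulate_cruise(), 1))  # ~29.6: settles just below the 30 m/s setpoint;
                                    # the small steady-state error is typical of
                                    # proportional-only control
```

A flyball governor, a household thermostat and a programmable logic controller all close an analogous loop, differing only in how the sensing, logic and actuation are implemented.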
Computing machines Charles Babbage designed machines to tabulate logarithms and other functions in 1837. His Difference Engine can be considered an advanced mechanical calculator and his Analytical Engine a forerunner of the modern computer, though none of the larger designs were completed in Babbage's lifetime. The Arithmometer and the Comptometer are mechanical computers that are precursors to modern digital computers. Models used to study modern computers are termed state machines and Turing machines. Molecular machines The biological molecule myosin reacts to ATP and ADP to alternately engage with an actin filament and change its shape in a way that exerts a force, and then disengage to reset its shape, or conformation. This acts as the molecular drive that causes muscle contraction. Similarly, the biological molecule kinesin has two sections that alternately engage and disengage with microtubules, causing the molecule to move along the microtubule and transport vesicles within the cell. Dynein, another motor protein, moves cargo inside cells towards the nucleus and produces the axonemal beating of motile cilia and flagella. "In effect, the motile cilium is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines. Flexible linkers allow the mobile protein domains connected by them to recruit their binding partners and induce long-range allostery via protein domain dynamics." Other biological machines are responsible for energy production, for example ATP synthase, which harnesses energy from proton gradients across membranes to drive a turbine-like motion used to synthesise ATP, the energy currency of a cell. Still other machines are responsible for gene expression, including DNA polymerases for replicating DNA, RNA polymerases for producing mRNA, the spliceosome for removing introns, and the ribosome for synthesising proteins. These machines and their nanoscale dynamics are far more complex than any molecular machines that have yet been artificially constructed. These molecules are increasingly considered to be nanomachines. Researchers have used DNA to construct nano-dimensioned four-bar linkages. Impact Mechanization and automation Mechanization (or mechanisation in British English) is providing human operators with machinery that assists them with the muscular requirements of work or displaces muscular work. In some fields, mechanization includes the use of hand tools. In modern usage, such as in engineering or economics, mechanization implies machinery more complex than hand tools and would not include simple devices such as an un-geared horse or donkey mill. Devices that cause speed changes or changes to or from reciprocating to rotary motion, using means such as gears, pulleys or sheaves and belts, shafts, cams and cranks, usually are considered machines. After electrification, when most small machinery was no longer hand powered, mechanization was synonymous with motorized machines. Automation is the use of control systems and information technologies to reduce the need for human work in the production of goods and services. In the scope of industrialization, automation is a step beyond mechanization. Whereas mechanization provides human operators with machinery to assist them with the muscular requirements of work, automation greatly decreases the need for human sensory and mental requirements as well. Automation plays an increasingly important role in the world economy and in daily experience. Automata An automaton (plural: automata or automatons) is a self-operating machine. The word is sometimes used to describe a robot, more specifically an autonomous robot. A toy automaton was patented in 1863. Mechanics Usher reports that Hero of Alexandria's treatise on Mechanics focussed on the study of lifting heavy weights. Today mechanics refers to the mathematical analysis of the forces and movement of a mechanical system, and consists of the study of the kinematics and dynamics of these systems. Dynamics of machines The dynamic analysis of machines begins with a rigid-body model to determine reactions at the bearings, at which point the elasticity effects are included. Rigid-body dynamics studies the movement of systems of interconnected bodies under the action of external forces.
The assumption that the bodies are rigid, which means that they do not deform under the action of applied forces, simplifies the analysis by reducing the parameters that describe the configuration of the system to the translation and rotation of reference frames attached to each body. The dynamics of a rigid body system is defined by its equations of motion, which are derived using either Newton's laws of motion or Lagrangian mechanics. The solution of these equations of motion defines how the configuration of the system of rigid bodies changes as a function of time. The formulation and solution of rigid body dynamics is an important tool in the computer simulation of mechanical systems. Kinematics of machines The dynamic analysis of a machine requires the determination of the movement, or kinematics, of its component parts, known as kinematic analysis. The assumption that the system is an assembly of rigid components allows rotational and translational movement to be modeled mathematically as Euclidean, or rigid, transformations. This allows the position, velocity and acceleration of all points in a component to be determined from these properties for a reference point, together with the angular position, angular velocity and angular acceleration of the component. Machine design Machine design refers to the procedures and techniques used to address the three phases of a machine's lifecycle: invention, which involves the identification of a need, development of requirements, concept generation, prototype development, manufacturing, and verification testing; performance engineering, which involves enhancing manufacturing efficiency, reducing service and maintenance demands, adding features and improving effectiveness, and validation testing; and recycling, the decommissioning and disposal phase, which includes recovery and reuse of materials and components. See also Automaton Gear train History of technology Linkage (mechanical) List of mechanical, electrical and electronic equipment manufacturing companies by revenue Mechanism (engineering) Mechanical advantage Outline of automation Outline of machines Power (physics) Simple machines Technology Virtual work Work (physics) References Further reading External links Reuleaux Collection of Mechanisms and Machines – Cornell University
Machine
[ "Physics", "Technology", "Engineering" ]
5,927
[ "Physical systems", "Machines", "Mechanical engineering" ]
51,469
https://en.wikipedia.org/wiki/Mold
A mold () or mould () is one of the structures that certain fungi can form. The dust-like, colored appearance of molds is due to the formation of spores containing fungal secondary metabolites. The spores are the dispersal units of the fungi. Not all fungi form molds. Some fungi form mushrooms; others grow as single cells and are called microfungi (for example yeasts). A large and taxonomically diverse number of fungal species form molds. The growth of hyphae results in discoloration and a fuzzy appearance, especially on food. The network of these tubular branching hyphae, called a mycelium, is considered a single organism. The hyphae are generally transparent, so the mycelium appears like very fine, fluffy white threads over the surface. Cross-walls (septa) may delimit connected compartments along the hyphae, each containing one or multiple, genetically identical nuclei. The dusty texture of many molds is caused by profuse production of asexual spores (conidia) formed by differentiation at the ends of hyphae. The mode of formation and shape of these spores is traditionally used to classify molds. Many of these spores are colored, making the fungus much more obvious to the human eye at this stage in its life-cycle. Molds are considered to be microbes and do not form a specific taxonomic or phylogenetic grouping, but can be found in the divisions Zygomycota and Ascomycota. In the past, most molds were classified within the Deuteromycota. Mold had been used as a common name for now non-fungal groups such as water molds or slime molds that were once considered fungi. Molds cause biodegradation of natural materials, which can be unwanted when it becomes food spoilage or damage to property. They also play important roles in biotechnology and food science in the production of various pigments, foods, beverages, antibiotics, pharmaceuticals and enzymes. Some diseases of animals and humans can be caused by certain molds: disease may result from allergic sensitivity to mold spores, from growth of pathogenic molds within the body, or from the effects of ingested or inhaled toxic compounds (mycotoxins) produced by molds. Biology There are thousands of known species of mold fungi with diverse life-styles including saprotrophs, mesophiles, psychrophiles and thermophiles, and a very few opportunistic pathogens of humans. They all require moisture for growth and some live in aquatic environments. Like all fungi, molds derive energy not through photosynthesis but from the organic matter on which they live, utilizing heterotrophy. Typically, molds secrete hydrolytic enzymes, mainly from the hyphal tips. These enzymes degrade complex biopolymers such as starch, cellulose and lignin into simpler substances which can be absorbed by the hyphae. In this way, molds play a major role in causing decomposition of organic material, enabling the recycling of nutrients throughout ecosystems. Many molds also synthesize mycotoxins and siderophores which, together with lytic enzymes, inhibit the growth of competing microorganisms. Molds can also grow on stored food for animals and humans, making the food unpalatable or toxic and are thus a major source of food losses and illness. Many strategies for food preservation (salting, pickling, jams, bottling, freezing, drying) are to prevent or slow mold growth as well as the growth of other microbes. Molds reproduce by producing large numbers of small spores, which may contain a single nucleus or be multinucleate. 
Mold spores can be asexual (the products of mitosis) or sexual (the products of meiosis); many species can produce both types. Some molds produce small, hydrophobic spores that are adapted for wind dispersal and may remain airborne for long periods; in some the cell walls are darkly pigmented, providing resistance to damage by ultraviolet radiation. Other mold spores have slimy sheaths and are more suited to water dispersal. Mold spores are often spherical or ovoid single cells, but can be multicellular and variously shaped. Spores may cling to clothing or fur; some are able to survive extremes of temperature and pressure. Although molds can grow on dead organic matter everywhere in nature, their presence is visible to the unaided eye only when they form large colonies. A mold colony does not consist of discrete organisms but is an interconnected network of hyphae called a mycelium. All growth occurs at hyphal tips, with cytoplasm and organelles flowing forwards as the hyphae advance over or through new food sources. Nutrients are absorbed at the hyphal tip. In artificial environments such as buildings, humidity and temperature are often stable enough to foster the growth of mold colonies, commonly seen as a downy or furry coating growing on food or other surfaces. Few molds can begin growing at or below typical refrigeration temperatures, which is why refrigerating food slows mold growth. When conditions do not enable growth, molds may remain alive in a dormant state, depending on the species, over a large range of temperatures. The many different mold species vary enormously in their tolerance to temperature and humidity extremes. Certain molds can survive harsh conditions such as the snow-covered soils of Antarctica, refrigeration, highly acidic solvents, anti-bacterial soap and even petroleum products such as jet fuel. Xerophilic molds are able to grow in relatively dry, salty, or sugary environments, where water activity (aw) is less than 0.85; other molds need more moisture. Common molds Common genera of molds include: Acremonium Alternaria Aspergillus Cladosporium Fusarium Mucor Penicillium Rhizopus Stachybotrys Trichoderma Trichophyton Food production The Kōji molds are a group of Aspergillus species, notably Aspergillus oryzae, and secondarily A. sojae, that have been cultured in eastern Asia for many centuries. They are used to ferment a soybean and wheat mixture to make soybean paste and soy sauce. Koji molds break down the starch in rice, barley, sweet potatoes, etc., a process called saccharification, in the production of sake, shōchū and other distilled spirits. Koji molds are also used in the preparation of Katsuobushi. Red rice yeast is a product of the mold Monascus purpureus grown on rice, and is common in Asian diets. The yeast contains several compounds collectively known as monacolins, which are known to inhibit cholesterol synthesis. A study has shown that red rice yeast used as a dietary supplement, combined with fish oil and healthy lifestyle changes, may help reduce "bad" cholesterol as effectively as certain commercial statin drugs. Nonetheless, other work has shown it may not be reliable (perhaps due to non-standardization) and may even be toxic to the liver and kidneys. Some sausages, such as salami, incorporate starter cultures of molds to improve flavor and reduce bacterial spoilage during curing. Penicillium nalgiovense, for example, may appear as a powdery white coating on some varieties of dry-cured sausage.
Other molds that have been used in food production include: Fusarium venenatum – quorn Geotrichum candidum – cheese Neurospora sitophila – oncom Penicillium spp. – various cheeses including Brie and Blue cheese Rhizomucor miehei – microbial rennet for making vegetarian and other cheeses Rhizopus oligosporus – tempeh Rhizopus oryzae – tempeh, jiuqu for jiuniang or precursor for making Chinese rice wine Pharmaceuticals from molds Alexander Fleming's accidental discovery of the antibiotic penicillin involved a Penicillium mold called Penicillium rubrum (although the species was later established to be Penicillium rubens). Fleming continued to investigate penicillin, showing that it could inhibit various types of bacteria found in infections and other ailments, but he was unable to produce the compound in large enough amounts necessary for production of a medicine. His work was expanded by a team at Oxford University; Clutterbuck, Lovell, and Raistrick, who began to work on the problem in 1931. This team was also unable to produce the pure compound in any large amount, and found that the purification process diminished its effectiveness and negated the anti-bacterial properties it had. Howard Florey, Ernst Chain, Norman Heatley, Edward Abraham, also all at Oxford, continued the work. They enhanced and developed the concentration technique by using organic solutions rather than water, and created the "Oxford Unit" to measure penicillin concentration within a solution. They managed to purify the solution, increasing its concentration by 45–50 times, but found that a higher concentration was possible. Experiments were conducted and the results published in 1941, though the quantities of penicillin produced were not always high enough for the treatments required. As this was during the Second World War, Florey sought US government involvement. With research teams in the UK and some in the US, industrial-scale production of crystallized penicillin was developed during 1941–1944 by the USDA and by Pfizer. Several statin cholesterol-lowering drugs (such as lovastatin, from Aspergillus terreus) are derived from molds. The immunosuppressant drug cyclosporine, used to suppress the rejection of transplanted organs, is derived from the mold Tolypocladium inflatum. Health effects Molds are ubiquitous, and mold spores are a common component of household and workplace dust; however, when mold spores are present in large quantities, they can present a health hazard to humans, potentially causing allergic reactions and respiratory problems. Some molds also produce mycotoxins that can pose serious health risks to humans and animals. Some studies claim that exposure to high levels of mycotoxins can lead to neurological problems and, in some cases, death. Prolonged exposure, e.g. daily home exposure, may be particularly harmful. Research on the health impacts of mold has not been conclusive. The term "toxic mold" refers to molds that produce mycotoxins, such as Stachybotrys chartarum, and not to all molds in general. Mold in the home can usually be found in damp, dark or steamy areas, e.g. bathrooms, kitchens, cluttered storage areas, recently flooded areas, basement areas, plumbing spaces, areas with poor ventilation and outdoors in humid environments. Symptoms caused by mold allergy are: watery, itchy eyes; a chronic cough; headaches or migraines; difficulty breathing; rashes; tiredness; sinus problems; nasal blockage and frequent sneezing. 
Molds can also pose a hazard to human and animal health when they are consumed following the growth of certain mold species in stored food. Some species produce toxic secondary metabolites, collectively termed mycotoxins, including aflatoxins, ochratoxins, fumonisins, trichothecenes, citrinin, and patulin. These toxic properties may be used for the benefit of humans when the toxicity is directed against other organisms; for example, penicillin adversely affects the growth of Gram-positive bacteria (e.g. Clostridium species), certain spirochetes and certain fungi. Growth in buildings and homes Mold growth in buildings generally occurs as fungi colonize porous building materials, such as wood. Many building products commonly incorporate paper, wood products, or solid wood members, such as paper-covered drywall, wood cabinets, and insulation. Interior mold colonization can lead to a variety of health problems as microscopic airborne reproductive spores, analogous to tree pollen, are inhaled by building occupants. High quantities of indoor airborne spores as compared to exterior conditions are strongly suggestive of indoor mold growth. Determination of airborne spore counts is accomplished by way of an air sample, in which a specialized pump with a known flow rate is operated for a known period of time. To account for background levels, air samples should be drawn from the affected area, a control area, and the exterior. The air sampler pump draws in air and deposits microscopic airborne particles on a culture medium. The medium is cultured in a laboratory and the fungal genus and species are determined by visual microscopic observation. Laboratory results also quantify fungal growth by way of a spore count for comparison among samples. The pump operation time is recorded; multiplying it by the pump flow rate gives the specific volume of air sampled. Although a small volume of air is actually analyzed, common laboratory reports extrapolate the spore count data to estimate spores that would be present in a cubic meter of air. Mold spores germinate and grow only in environments that favor them, and they usually develop into a full-blown outbreak only if certain conditions, particularly persistent moisture, are met. Various practices can be followed to mitigate mold issues in buildings, the most important of which is to reduce moisture levels that can facilitate mold growth. Air filtration reduces the number of spores available for germination, especially when a High Efficiency Particulate Air (HEPA) filter is used. A properly functioning air-conditioning (AC) unit also reduces the relative humidity in rooms. The United States Environmental Protection Agency (EPA) currently recommends that relative humidity be maintained below 60%, ideally between 30% and 50%, to inhibit mold growth. Eliminating the moisture source is the first step in fungal remediation. Removal of affected materials may also be necessary for remediation, if materials are easily replaceable and not part of the load-bearing structure. Professional drying of concealed wall cavities and enclosed spaces such as cabinet toekick spaces may be required. Post-remediation verification of moisture content and fungal growth is required for successful remediation. Many contractors perform post-remediation verification themselves, but property owners may benefit from independent verification. Left untreated, mold can potentially cause serious cosmetic and structural damage to a property.
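The extrapolation from a raw laboratory spore count to an airborne concentration, described above, is simple arithmetic. The sketch below illustrates it with invented numbers and is not based on any particular laboratory's reporting method; spore-trap pumps are typically rated in litres of air per minute.

```python
# Illustrative conversion of a raw spore count to spores per cubic metre of air.
# The count, flow rate and sampling time below are hypothetical.

def spores_per_cubic_metre(raw_count: int, flow_l_per_min: float, minutes: float) -> float:
    sampled_litres = flow_l_per_min * minutes   # volume of air drawn through the sampler
    sampled_m3 = sampled_litres / 1000.0        # 1 cubic metre = 1000 litres
    return raw_count / sampled_m3

# Example: 120 spores counted after sampling at 15 L/min for 10 minutes.
print(spores_per_cubic_metre(120, flow_l_per_min=15, minutes=10))  # 800.0 spores per m^3
```

Comparing such figures for the affected area, a control area and the exterior, as described above, is what indicates whether indoor mold growth is likely.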
Use in art Various artists have used mold in their work. Daniele Del Nero, for example, constructs scale models of houses and office buildings and then induces mold to grow on them, giving them an unsettling, reclaimed-by-nature look. Stacy Levy sandblasts enlarged images of mold onto glass, then allows mold to grow in the crevices she has made, creating a macro-micro portrait. Sam Taylor-Johnson has made a number of time-lapse films capturing the gradual decay of classically arranged still lifes. See also Slime mold Water mold References External links The EPA's guide to mold Fungus common names Articles containing video clips
Mold
[ "Biology" ]
3,120
[ "Fungus common names", "Fungi", "Common names of organisms" ]
51,472
https://en.wikipedia.org/wiki/Spore
In biology, a spore is a unit of sexual (in fungi) or asexual reproduction that may be adapted for dispersal and for survival, often for extended periods of time, in unfavourable conditions. Spores form part of the life cycles of many plants, algae, fungi and protozoa. They are thought to have appeared as early as the mid-late Ordovician period as an adaptation of early land plants. Bacterial spores are not part of a sexual cycle, but are resistant structures used for survival under unfavourable conditions. Myxozoan spores release amoeboid infectious germs ("amoebulae") into their hosts for parasitic infection, but also reproduce within the hosts through the pairing of two nuclei within the plasmodium, which develops from the amoebula. In plants, spores are usually haploid and unicellular and are produced by meiosis in the sporangium of a diploid sporophyte. In some rare cases, diploid spores are also produced by some algae or fungi. Under favourable conditions, the spore can develop into a new organism using mitotic division, producing a multicellular gametophyte, which eventually goes on to produce gametes. Two gametes fuse to form a zygote, which develops into a new sporophyte. This cycle is known as alternation of generations. The spores of seed plants are produced internally, and the megaspores (formed within the ovules) and the microspores are involved in the formation of more complex structures that form the dispersal units, the seeds and pollen grains. Definition The term spore derives from the ancient Greek word σπορά spora, meaning "seed, sowing", related to σπόρος, "sowing", and σπείρειν, "to sow". In common parlance, the difference between a "spore" and a "gamete" is that a spore will germinate and develop into a sporeling, while a gamete needs to combine with another gamete to form a zygote before developing further. The main difference between spores and seeds as dispersal units is that spores are unicellular, the first cell of a gametophyte, while seeds contain within them a developing embryo (the multicellular sporophyte of the next generation), produced by the fusion of the male gamete of the pollen tube with the female gamete formed by the megagametophyte within the ovule. Spores germinate to give rise to haploid gametophytes, while seeds germinate to give rise to diploid sporophytes. Classification of spore-producing organisms Plants Vascular plant spores are always haploid. Vascular plants are either homosporous (or isosporous) or heterosporous. Plants that are homosporous produce spores of the same size and type. Heterosporous plants, such as seed plants, spikemosses, quillworts, and ferns of the order Salviniales, produce spores of two different sizes: the larger spore (megaspore) in effect functioning as a "female" spore and the smaller (microspore) functioning as a "male". Such plants typically give rise to the two kinds of spores from within separate sporangia, either a megasporangium that produces megaspores or a microsporangium that produces microspores. In flowering plants, these sporangia occur within the carpel and anthers, respectively. Fungi Fungi commonly produce spores during sexual and asexual reproduction. Spores are usually haploid and grow into mature haploid individuals through mitotic division of cells (urediniospores and teliospores among rusts are dikaryotic). Dikaryotic cells result from the fusion of two haploid gamete cells. Among sporogenic dikaryotic cells, karyogamy (the fusion of the two haploid nuclei) occurs to produce a diploid cell.
Diploid cells undergo meiosis to produce haploid spores. Classification of spores Spores can be classified in several ways, such as by their spore-producing structure, function, origin during the life cycle, and mobility. External anatomy Under high magnification, spores often have complex patterns or ornamentation on their exterior surfaces. A specialized terminology has been developed to describe features of such patterns. Some markings represent apertures, places where the tough outer coat of the spore can be penetrated when germination occurs. Spores can be categorized based on the position and number of these markings and apertures. Alete spores show no lines. In monolete spores, there is a single narrow line (laesura) on the spore, indicating the prior contact of two spores that eventually separated. In trilete spores, each spore shows three narrow lines radiating from a center pole. This shows that four spores shared a common origin and were initially in contact with each other forming a tetrahedron. A wider aperture in the shape of a groove may be termed a colpus. The number of colpi distinguishes major groups of plants. Eudicots have tricolpate spores (i.e. spores with three colpi). Spore tetrads and trilete spores Envelope-enclosed spore tetrads are taken as the earliest evidence of plant life on land, dating from the mid-Ordovician (early Llanvirn), a period from which no macrofossils have yet been recovered. Individual trilete spores resembling those of modern cryptogamic plants first appeared in the fossil record at the end of the Ordovician period. Dispersal In fungi, both asexual and sexual spores or sporangiospores of many fungal species are actively dispersed by forcible ejection from their reproductive structures. This ejection ensures exit of the spores from the reproductive structures as well as travelling through the air over long distances. Many fungi therefore possess specialized mechanical and physiological mechanisms as well as spore-surface structures, such as hydrophobins, for spore ejection. These mechanisms include, for example, forcible discharge of ascospores enabled by the structure of the ascus and accumulation of osmolytes in the fluids of the ascus that lead to explosive discharge of the ascospores into the air. The forcible discharge of single spores termed ballistospores involves formation of a small drop of water (Buller's drop), which upon contact with the spore leads to its projectile release with an initial acceleration of more than 10,000 g. Other fungi rely on alternative mechanisms for spore release, such as external mechanical forces, exemplified by puffballs. Attracting insects, such as flies, to fruiting structures, by virtue of their having lively colours and a putrid odour, for dispersal of fungal spores is yet another strategy, most prominently used by the stinkhorns. In the common smoothcap moss (Atrichum undulatum), the vibration of the sporophyte has been shown to be an important mechanism for spore release. In the case of spore-shedding vascular plants such as ferns, wind distribution of very light spores provides great capacity for dispersal. Also, spores are less subject to animal predation than seeds because they contain almost no food reserve; however, they are more subject to fungal and bacterial predation.
Their chief advantage is that, of all forms of progeny, spores require the least energy and materials to produce. In the spikemoss Selaginella lepidophylla, dispersal is achieved in part by an unusual type of diaspore, a tumbleweed. Origin Spores have been found in microfossils dating back to the mid-late Ordovician period. Two hypothesized initial functions of spores relate to whether they appeared before or after land plants. The more heavily studied hypothesis is that spores were an adaptation of early land plant species, such as embryophytes, that allowed plants to disperse easily while adapting to their non-aquatic environment. This is particularly supported by the observation of a thick spore wall in cryptospores. These spore walls would have protected potential offspring from novel weather elements. The second, more recent, hypothesis is that spores were an early predecessor of land plants and formed during errors in the meiosis of algae, a hypothesized early ancestor of land plants. Whether spores arose before or after land plants, their study has contributed usefully to fields such as paleontology and plant phylogenetics. The spores found in microfossils, also known as cryptospores, are well preserved due to the fixed material they are in as well as how abundant and widespread they were during their respective time periods. These microfossils are especially helpful when studying the early periods of Earth, as macrofossils such as plants are neither common nor well preserved. Both cryptospores and modern spores have diverse morphologies that indicate possible environmental conditions of earlier periods of Earth and evolutionary relationships of plant species. See also Aeroplankton Auxiliary cell Bioaerosol Cryptospore References Fungal morphology and anatomy Germ cells Plant reproduction Reproduction Articles containing video clips
Spore
[ "Biology" ]
1,950
[ "Behavior", "Plant reproduction", "Plants", "Reproduction", "Biological interactions" ]
51,474
https://en.wikipedia.org/wiki/Seat%20belt
A seat belt, also known as a safety belt (and sometimes spelled seatbelt), is a vehicle safety device designed to secure the driver or a passenger of a vehicle against harmful movement that may result during a collision or a sudden stop. A seat belt reduces the likelihood of death or serious injury in a traffic collision by reducing the force of secondary impacts with interior strike hazards, by keeping occupants positioned correctly for maximum effectiveness of the airbag (if equipped), and by preventing occupants being ejected from the vehicle in a crash or if the vehicle rolls over. When in motion, the driver and passengers are traveling at the same speed as the vehicle. If the vehicle suddenly stops or crashes, the occupants continue at the same speed the vehicle was going before it stopped. A seat belt applies an opposing force to the driver and passengers to prevent them from falling out or making contact with the interior of the car (especially preventing contact with, or going through, the windshield). Seat belts are considered primary restraint systems (PRSs), because of their vital role in occupant safety. Effectiveness An analysis conducted in the United States in 1984 compared a variety of seat belt types alone and in combination with air bags. The range of fatality reduction for front seat passengers was broad, from 20% to 55%, as was the range of major injury, from 25% to 60%. More recently, the Centers for Disease Control and Prevention has summarized these data by stating "seat belts reduce serious crash-related injuries and deaths by about half." Most malfunctions are a result of there being too much slack in the seat belt at the time of the accident. It has been suggested that although seat belt usage reduces the probability of death in any given accident, mandatory seat belt laws have little or no effect on the overall number of traffic fatalities because seat belt usage also disincentivizes safe driving behaviors, thereby increasing the total number of accidents. This idea, known as compensating-behavior theory, is not supported by the evidence. In the case of vehicle rollover in a U.S. passenger car or SUV from 1994 to 2004, wearing a seat belt reduced the risk of fatal or incapacitating injury and increased the probability of escaping without injury. History Seat belts were invented by English engineer George Cayley, for use on his glider, in the mid-19th century. In 1946, C. Hunter Shelden opened a neurological practice at Huntington Memorial Hospital in Pasadena, California. In the early 1950s, Shelden made a major contribution to the automotive industry with his idea of retractable seat belts. This came about from his care of the high number of head injuries coming through the emergency room. He investigated the early seat belts with primitive designs that were implicated in these injuries and deaths. Nash was the first American car manufacturer to offer seat belts as a factory option, in its 1949 models.
They were installed in 40,000 cars, but buyers did not want them and requested that dealers remove them. The feature was "met with insurmountable sales resistance" and Nash reported that after one year "only 1,000 had been used" by customers. Ford offered seat belts as an option in 1955. These were not popular, with only 2% of Ford buyers choosing to pay for seat belts in 1956. To reduce the high level of injuries Shelden was seeing, he proposed in late 1955 that retractable seat belts, recessed steering wheels, reinforced roofs, roll bars, automatic door locks, and passive restraints such as air bags be made mandatory. Glenn W. Sheren, of Mason, Michigan, submitted a patent application on March 31, 1955, for an automotive seat belt and was awarded a patent in 1958. This was a continuation of an earlier patent application that Sheren had filed on September 22, 1952. The first modern three-point seat belt (the so-called CIR-Griswold restraint) commonly used in consumer vehicles was patented in 1955 by the Americans Roger W. Griswold and Hugh DeHaven. Saab introduced seat belts as standard equipment in 1958. After the Saab GT 750 was introduced at the New York Motor Show in 1958 with safety belts fitted as standard, the practice became commonplace. Vattenfall, the Swedish national electric utility, did a study of all fatal, on-the-job accidents among their employees. The study revealed that the majority of fatalities occurred while the employees were on the road on company business. In response, two Vattenfall safety engineers, Bengt Odelgard and Per-Olof Weman, started to develop a seat belt. Their work was presented to Swedish manufacturer Volvo in the late 1950s, and set the standard for seat belts in Swedish cars. The three-point seat belt was developed to its modern form by Swedish inventor Nils Bohlin for Volvo, which introduced it in 1959 as standard equipment. In addition to designing an effective three-point belt, Bohlin demonstrated its effectiveness in a study of 28,000 accidents in Sweden. Unbelted occupants sustained fatal injuries throughout the whole speed scale, whereas none of the belted occupants was fatally injured at accident speeds below 60 mph. No belted occupant was fatally injured if the passenger compartment remained intact. Bohlin was granted a patent for the device. Subsequently, in 1966, Congress passed the National Traffic and Motor Vehicle Safety Act, requiring all automobiles to comply with certain safety standards. The first compulsory seat belt law was put in place in 1970, in the state of Victoria, Australia, requiring their use by drivers and front-seat passengers. This legislation was enacted after trialing Hemco seat belts, designed by Desmond Hemphill (1926–2001), in the front seats of police vehicles, lowering the incidence of officer injury and death. Mandatory seat belt laws in the United States began to be introduced in the 1980s and faced opposition, with some consumers going to court to challenge the laws. Some cut seat belts out of their cars. Material The 'belt' part of the typical seatbelt seen in vehicles worldwide is referred to as the 'webbing'. Modern seat belt webbing has a high tensile strength, about 3000–6000 lbs, to resist tearing at high loads such as during high-speed collisions or while restraining larger passengers. While nylon was used in some early seat belts (and is still used for lap belts), it was replaced by 100% polyester due to its better UV resistance, lower extensibility and higher stiffness.
Nylon was also prone to stretching much more than polyester, and was prone to wear and tear, with tiny abrasions drastically reducing tensile strength causing a lack of reliability in one of the most important safety measures in a vehicle. Seat belts are commonly 46 or 48 mm wide with a 2/2 herringbone twill weaving pattern to maximize the thread density. Modern seatbelt weaves also feature snag-proof selvedges reinforced with strong polyester threads to prevent the wear and tear, while remaining flexible. The weave features about 300 warp threads for every 46mm wide webbing, leading to around 150 ends per inch of webbing. Accident investigators often examine the webbing of a seatbelt to determine if an occupant of a vehicle was wearing their seatbelt during a collision. The material of the webbing may contain traces of the occupant's clothing. Certain materials such as nylons may become permanently affixed or melted onto the fabric as a result of heat produced by friction, whereas fiber based clothing leaves no remains on modern webbing. Types Two-point A two-point belt attaches at its two endpoints. A simple strap was first used March 12, 1910, by pilot Benjamin Foulois, a pioneering aviator with the Aeronautical Division, U.S. Signal Corps, so he might remain at the controls during turbulence. The Irvin Air Chute Company made the seat belt for use by professional race car driver Barney Oldfield when his team decided the daredevil should have a "safety harness" for the 1923 Indianapolis 500. Lap A lap belt is a strap that goes over the waist. This was the most common type of belt prior to legislation requiring three-point belts and is found in older cars. Coaches are equipped with lap belts (although many newer coaches have three-point belts), as are passenger aircraft seats. University of Minnesota professor James J. (Crash) Ryan was the inventor of, and held the patent for, the automatic retractable lap safety belt. Ralph Nader cited Ryan's work in Unsafe at Any Speed and, following hearings led by Senator Abraham Ribicoff, President Lyndon Johnson signed two bills in 1966 requiring safety belts in all passenger vehicles starting in 1968. Until the 1980s, three-point belts were commonly available only in the front outboard seats of cars; the back seats were often only fitted with lap belts. Evidence of the potential of lap belts to cause separation of the lumbar vertebrae and the sometimes-associated paralysis, or "seat belt syndrome" led to the progressive revision of passenger safety regulations in nearly all developed countries to require three-point belts, first in all outboard seating positions, and eventually in all seating positions in passenger vehicles. Since September 1, 2007, all new cars sold in the U.S. require a lap and shoulder belt in the center rear seat. In addition to regulatory changes, "seat belt syndrome" has led to a liability for vehicle manufacturers. One Los Angeles case resulted in a $45 million jury verdict against Ford; the resulting $30 million judgment (after deductions for another defendant who settled prior to trial) was affirmed on appeal in 2006. While lap belts are exceedingly rare to spot in modern cars, they are the standard in commercial airliners. The lift-lever style of commercial aircraft buckles allows for the seatbelt to be easily clasped and unclasped, accessible quickly in case of an emergency where a passenger must evacuate, and fulfills the minimum safety requirements provided by the FAA while remaining low-cost to produce. 
Furthermore, in case of any collision, a passenger in economy class has only around 9 inches for their head to travel forward, meaning restraining the torso and head is relatively unnecessary as the head has little room to accelerate before collision. Sash A "sash" or shoulder harness is a strap that goes diagonally over the vehicle occupant's outboard shoulder and is buckled inboard of their lap. The shoulder harness may attach to the lap belt tongue, or it may have a tongue and buckle completely separate from those of the lap belt. Shoulder harnesses of this separate or semi-separate type were installed in conjunction with lap belts in the outboard front seating positions of many vehicles in the North American market starting at the inception of the shoulder belt requirement of the U.S. National Highway Traffic Safety Administration's (NHTSA) Federal Motor Vehicle Safety Standard 208 on January 1, 1968. However, if the shoulder strap is used without the lap belt, the vehicle occupant is likely to "submarine", or slide forward in the seat and out from under the belt, in a frontal collision. In the mid-1970s, three-point belt systems such as Chrysler's "Uni-Belt" began to supplant the separate lap and shoulder belts in American-made cars, though such three-point belts had already been supplied in European vehicles such as Volvo, Mercedes-Benz, and Saab for some years. Three-point A three-point belt is a Y-shaped arrangement, similar to the separate lap and sash belts, but unified. Like the separate lap-and-sash belt, in a collision, the three-point belt spreads out the energy of the moving body over the chest, pelvis, and shoulders. Volvo introduced the first production three-point belt in 1959. The first car with a three-point belt was a Volvo PV 544 that was delivered to a dealer in Kristianstad on August 13, 1959. The first car model to have the three-point seat belt as a standard item was the 1959 Volvo 122, first outfitted with a two-point belt at initial delivery in 1958, replaced with the three-point seat belt the following year. The three-point belt was developed by Nils Bohlin, who had earlier also worked on ejection seats at Saab. Volvo then made the new seat belt design patent open in the interest of safety and made it available to other car manufacturers for free. Belt-in-Seat The Belt-in-Seat (BIS) is a three-point harness with the shoulder belt attached to the seat itself, rather than to the vehicle structure. The first car using this system was the Range Rover Classic, which offered BIS as standard on the front seats from 1970. Some cars like the Renault Vel Satis use this system for the front seats. A General Motors assessment concluded seat-mounted three-point belts offer better protection especially to smaller vehicle occupants, though GM did not find a safety performance improvement in vehicles with seat-mounted belts versus belts mounted to the vehicle body. Belt-in-Seat type belts have been used by automakers in convertibles and pillarless hardtops, where there is no "B" pillar to affix the upper mount of the belt. Chrysler and Cadillac are well known for using this design. Antique auto enthusiasts sometimes replace original seats in their cars with BIS-equipped front seats, providing a measure of safety not available when these cars were new. However, modern BIS systems typically use electronics that must be installed and connected with the seats and the vehicle's electrical system in order to function properly. 
4-, 5-, and 6-point Five-point harnesses are typically found in child safety seats and in racing cars. The lap portion is connected to a belt between the legs and there are two shoulder belts, making a total of five points of attachment to the seat. A 4-point harness is similar, but without the strap between the legs, while a 6-point harness has two belts between the legs. In NASCAR, the 6-point harness became popular after the death of Dale Earnhardt, who was wearing a five-point harness when he suffered his fatal crash. As it was first thought that his belt had broken, and broke his neck at impact, some teams ordered a six-point harness in response. Seven-point Aerobatic aircraft frequently use a combination harness consisting of a five-point harness with a redundant lap belt attached to a different part of the aircraft. While providing redundancy for negative-g maneuvers (which lift the pilot out of the seat), they also require the pilot to unlatch two harnesses if it is necessary to parachute from a failed aircraft. Technology Locking retractors The purpose of locking retractors (sometimes called ELR belts, for "Emergency Locking Retractors") is to provide the seated occupant the convenience of some free movement of the upper torso within the compartment while providing a method of limiting this movement in the event of a crash. Starting in 1996, all passenger vehicles were required to lock pre-crash, meaning they have a locking mechanism in the retractor or in the latch plate. Seat belts are stowed on spring-loaded reels called "retractors" equipped with inertial locking mechanisms that stop the belt from extending off the reel during severe deceleration. There are two main types of inertial seat belt locks. A webbing-sensitive lock is based on a centrifugal clutch activated by the rapid acceleration of the strap (webbing) from the reel. The belt can be pulled from the reel only slowly and gradually, as when the occupant extends the belt to fasten it. A sudden rapid pull of the belt—as in a sudden braking or collision event—causes the reel to lock, restraining the occupant in position. The first automatic locking retractor for seat belts and shoulder harnesses in the U.S. was the Irving "Dynalock" safety device. These "Auto-lock" front lap belts were optional on AMC cars with bucket seats in 1967. A vehicle-sensitive lock is based on a pendulum swung away from its plumb position by rapid deceleration or rollover of the vehicle. In the absence of rapid deceleration or rollover, the reel is unlocked and the belt strap may be pulled from the reel against the spring tension of the reel. The vehicle occupant can move around with relative freedom while the spring tension of the reel keeps the belt taut against the occupant. When the pendulum swings away from its normal plumb position due to sudden deceleration or rollover, a pawl is engaged, the reel locks and the strap restrains the belted occupant in position. Dual-sensing locking retractors use both vehicle G-loading and webbing payout rate to initiate the locking mechanism. Pretensioners and webclamps Seat belts in many newer vehicles are also equipped with "pretensioners" or "web clamps", or both. Pretensioners preemptively tighten the belt to prevent the occupant from jerking forward in a crash. Mercedes-Benz first introduced pretensioners on the 1981 S-Class. In the event of a crash, a pretensioner will tighten the belt almost instantaneously. This reduces the motion of the occupant in a violent crash. 
Like airbags, pretensioners are triggered by sensors in the car's body, and many pretensioners have used explosively expanding gas to drive a piston that retracts the belt. Pretensioners also lower the risk of "submarining", which occurs when a passenger slides forward under a loosely fitted seat belt. Some systems also pre-emptively tighten the belt during fast accelerations and strong decelerations, even if no crash has happened. This has the advantage that it may help prevent the driver from sliding out of position during violent evasive maneuvers, which could cause loss of control of the vehicle. These pre-emptive safety systems may prevent some collisions from happening, as well as reduce injuries in the event an actual collision occurs. Pre-emptive systems generally use electric pretensioners, which can operate repeatedly and for a sustained period, rather than pyrotechnic pretensioners, which can only operate a single time. Webclamps stop the webbing in the event of an accident and limit the distance the webbing can spool out (caused by the unused webbing tightening on the central drum of the mechanism). These belts also often incorporate an energy management loop ("rip stitching") in which a section of the webbing is looped and stitched with special stitching. The function of this is to "rip" at a predetermined load, which reduces the maximum force transmitted through the belt to the occupant during a violent collision, reducing injuries to the occupant. A study demonstrated that standard automotive three-point restraints fitted with pyrotechnic or electric pretensioners were not able to eliminate all interior passenger compartment head strikes in rollover test conditions. Electric pretensioners are often incorporated on vehicles equipped with precrash systems; they are designed to reduce seat belt slack in a potential collision and assist in placing the occupants in a more optimal seating position. The electric pretensioners also can operate on a repeated or sustained basis, providing better protection in the event of an extended rollover or a multiple collision accident. Inflatable The inflatable seat belt was invented by Donald Lewis and tested at the Automotive Products Division of Allied Chemical Corporation. Inflatable seat belts have tubular inflatable bladders contained within an outer cover. When a crash occurs, the bladder inflates with gas to increase the area of the restraint contacting the occupant and also shortening the length of the restraint to tighten the belt around the occupant, improving the protection. The inflatable sections may be shoulder-only or lap and shoulder. The system supports the head during the crash better than a web-only belt. It also provides side impact protection. In 2013, Ford began offering rear-seat inflatable seat belts on a limited set of models, such as the Explorer and Flex. Automatic Seat belts that automatically move into position around a vehicle occupant once the adjacent door is closed and/or the engine is started were developed as a countermeasure against low usage rates of manual seat belts, particularly in the United States. The 1972 Volkswagen ESVW1 Experimental Safety Vehicle presented passive seat belts. Volvo tried to develop a passive three point seat belt. In 1973, Volkswagen announced they had a functional passive seat belt. The first commercial car to use automatic seat belts was the 1975 Volkswagen Golf. 
Automatic seat belts received a boost in the United States in 1977 when Brock Adams, United States Secretary of Transportation in the Carter Administration, mandated that by 1983 every new car should have either airbags or automatic seat belts. There was strong lobbying against the passive restraint requirement by the auto industry. Adams was criticized by Ralph Nader, who said that the 1983 deadline was too late. The Volkswagen Rabbit also had automatic seat belts, and VW said that by early 1978, 90,000 cars had sold with them. General Motors introduced a three-point non-motorized passive belt system in 1980 to comply with the passive restraint requirement. However, it was used as an active lap-shoulder belt because of unlatching the belt to exit the vehicle. Despite this common practice, field studies of belt use still showed an increase in wearing rates with this door-mounted system. General Motors began offering automatic seat belts on the Chevrolet Chevette. However, the company reported disappointing sales because of this feature. For the 1981 model year, the new Toyota Cressida became the first car to offer motorized automatic passive seat belts. A study released in 1978 by the United States Department of Transportation said that cars with automatic seat belts had a fatality rate of .78 per 100 million miles, compared with 2.34 for cars with regular, manual belts. In 1981, Drew Lewis, the first Transportation Secretary of the Reagan Administration, influenced by studies done by the auto industry, dropped the mandate; the decision was overruled in a federal appeals court the following year, and then by the Supreme Court. In 1984, the Reagan Administration reversed its course, though in the meantime the original deadline had been extended; Elizabeth Dole, then Transportation Secretary, proposed that the two passive safety restraints be phased into vehicles gradually, from vehicle model year 1987 to vehicle model year 1990, when all vehicles would be required to have either automatic seat belts or driver side air bags. Though more awkward for vehicle occupants, most manufacturers opted to use less expensive automatic belts rather than airbags during this time period. When driver side airbags became mandatory on all passenger vehicles in model year 1995, most manufacturers stopped equipping cars with automatic seat belts. Exceptions include the 1995–96 Ford Escort/Mercury Tracer and the Eagle Summit Wagon, which had automatic safety belts along with dual airbags. Systems Manual lap belt with automatic motorized shoulder belt: When the door is opened, the shoulder belt moves from a fixed point near the seat back on a track mounted in the door frame of the car to a point at the other end of the track near the windshield. Once the door is closed and the car is started, the belt moves rearward along the track to its original position, thus securing the passenger. The lap belt must be fastened manually. Manual lap belt with automatic non-motorized shoulder belt: This system was used in American-market vehicles such as the Hyundai Excel and Volkswagen Jetta. The shoulder belt is fixed to the aft upper corner of the vehicle door and is not motorized. The lap belt must be fastened manually. Automatic shoulder and lap belts: This system was mainly used in General Motors vehicles, though it was also used on some Honda Civic hatchbacks and Nissan Sentra coupes. When the door is opened, the belts go from a fixed point in the middle of the car by the floor to the retractors on the door. 
Passengers must slide into the car under the belts. When the door closes, the seat belt retracts into the door. The belts have normal release buttons that are supposed to be used only in an emergency, but in practice are routinely used in the same manner as manual seat belt clasps. This system also found use by American Specialty Cars when they created the 1991-1994 convertible special edition of the Nissan 240SX, a car that traditionally had a motorized shoulder belt. Disadvantages Automatic belt systems generally offer inferior occupant crash protection. In systems with belts attached to the door rather than a sturdier fixed portion of the vehicle body, a crash that causes the vehicle door to open leaves the occupant without belt protection. In such a scenario, the occupant may be thrown from the vehicle and suffer greater injury or death. Many automatic belt system designs compliant with the U.S. passive-restraint mandate did not meet the anchorage requirements of Canada (CMVSS 210), which were not weakened to accommodate automatic belts. As a result, vehicle models that had been eligible for easy importation in either direction across the U.S.-Canada border when equipped with manual belts became ineligible for importation once the U.S. variants obtained automatic belts while the Canadian versions retained manual belts, although some Canadian versions also had automatic seat belts. Two particular models affected were the Dodge Spirit and Plymouth Acclaim. Automatic belt systems also present several operational disadvantages. Motorists who would normally wear seat belts must still fasten the manual lap belt, thus rendering redundant the automation of the shoulder belt. Those who do not fasten the lap belt wind up inadequately protected by the shoulder belt alone. In a crash, without a lap belt, such a vehicle occupant is likely to "submarine" (be thrown forward under the shoulder belt) and be seriously injured. Motorized or door-affixed shoulder belts hinder access to the vehicle, making it difficult to enter and exit, particularly if the occupant is carrying items such as a box or a purse. Vehicle owners tend to disconnect the motorized or door-affixed shoulder belt to relieve the nuisance when entering and exiting the vehicle, leaving only a lap belt for crash protection. Also, many automatic seat belt systems are incompatible with child safety seats, or are compatible only with special modifications. Homologation and testing From 1971 to 1972, the United States conducted a research project on seat belt effectiveness covering a total of 40,000 vehicle occupants, using car accident reports collected during that time. Of these 40,000 occupants, 18% were reported wearing lap belts, or two-point safety belts, 2% were reported wearing a three-point safety belt, and the remaining 80% were reported as wearing no safety belt. The results concluded that users of the two-point lap belt had a 73% lower fatality rate, a 53% lower serious injury rate, and a 38% lower injury rate than the occupants reported as unrestrained. Similarly, users of the three-point safety belt had a 60% lower serious injury rate and a 41% lower rate of all other injuries. Out of the 2% described as wearing a three-point safety belt, no fatalities were reported. This study and others led to the Restraint Systems Evaluation Program (RSEP), started by the NHTSA in 1975 to increase the reliability and authenticity of past studies. 
A study as part of this program used data taken from 15,000 tow-away accidents that involved only car models made between 1973 and 1975. The study found that for injuries considered "moderate" or worse, individuals wearing a three-point safety belt had a 56.5% lower injury rate than those wearing no safety belt. The study also concluded that the effectiveness of the safety belt did not differ with the size of a car. It was determined that the variation among results of the many studies conducted in the 1960s and 1970s was due to the use of different methodologies, and could not be attributed to any significant variation in the effectiveness of safety belts. Wayne State University's Automotive Safety Research Group, as well as other researchers, are testing ways to improve seat belt effectiveness and general vehicle safety apparatuses. Wayne State's Bioengineering Center uses human cadavers in its crash test research. The center's director, Albert King, wrote in 1995 that the vehicle safety improvements made possible since 1987 by the use of cadavers in research had saved nearly 8,500 lives each year, and indicated that improvements made to three-point safety belts save an average of 61 lives every year. The New Car Assessment Program (NCAP) was put in place by the United States National Highway Traffic Safety Administration in 1979. The NCAP is a government program that evaluates vehicle safety designs and sets standards for foreign and domestic automobile companies. The agency developed a rating system and requires access to safety test results. Manufacturers are required to place an NCAP star rating on the automobile price sticker. In 2004, the European New Car Assessment Program (Euro NCAP) started testing seat belts and whiplash safety on all test cars at the Thatcham Research Centre with crash test dummies. Experimental Research and development efforts are ongoing to improve the safety performance of vehicle seat belts. Some experimental designs include: Criss-cross: Experimental safety belt presented in the Volvo SCC. It forms a cross-brace across the chest. 3+2 Point: Experimental safety belt from Autoliv similar to the criss-cross. The 3+2 improves protection against rollovers and side impacts. Four point "belt and suspenders": An experimental design from Ford where the "suspenders" are attached to the backrest, not to the frame of the car. 3-point Adjustable: Experimental safety belt from GWR Safety Systems that allowed the car Hiriko, designed by MIT, to fold without compromising the safety and comfort of the occupants. In rear seats In 1955 (as a 1956 package), Ford offered lap-only seat belts in the rear seats as an option within the Lifeguard safety package. In 1967, Volvo started to install lap belts in the rear seats. In 1972, Volvo upgraded the rear seat belts to a three-point belt. In crashes, unbelted rear passengers increase the risk of belted front seat occupants' death by nearly five times. Child occupants As with adult drivers and passengers, the advent of seat belts was accompanied by calls for their use by child occupants, including legislation requiring such use. Generally, children using adult seat belts suffer significantly lower injury risk when compared to non-buckled children. The UK extended compulsory seat belt wearing to child passengers under the age of 14 in 1989. It was observed that this measure was accompanied by a 10% increase in fatalities and a 12% increase in injuries among the target population. 
In crashes, small children who wear adult seat belts can suffer "seat-belt syndrome" injuries including severed intestines, ruptured diaphragms, and spinal damage. There is also research suggesting that children in inappropriate restraints are at significantly increased risk of head injury. One of the authors of this research said, "The early graduation of kids into adult lap and shoulder belts is a leading cause of child-occupant injuries and deaths." As a result of such findings, many jurisdictions now advocate or require child passengers to use specially designed child restraints. Such systems include separate child-sized seats with their own restraints and booster cushions for children using adult restraints. In some jurisdictions, children below a certain size are forbidden to travel in front car seats. Automated reminders and engine start interlocks In Europe, the U.S., and some other parts of the world, most modern cars include a seat-belt reminder light for the driver, and some also include a reminder for the passenger, when present, activated by a pressure sensor under the passenger seat. Some cars will intermittently flash the reminder light and sound the chime until the driver, and sometimes the front passenger if present, fasten their seat belts. In 2005, in Sweden, 70% of all cars that were newly registered were equipped with seat belt reminders for the driver. Since November 2014, seat belt reminders are mandatory for the driver's seat on new cars sold in Europe. Two specifications define the standard for seat belt reminders: UN Regulation 16, Section 8.4 and the Euro NCAP assessment protocol (Euro NCAP, 2013). European Union seat belt reminder In the European Union, seat belt reminders are mandatory for the driver's seat in all new passenger cars. In 2014, EC Regulation 661/2009 made UN Regulation 16 applicable. Amendment of UN Regulation 16 made seat belt reminders mandatory in all front and rear seats of passenger cars and vans, and in all front seats of buses and trucks. This improvement applies from 1 September 2019 for new types of motor vehicles and from 1 September 2021 for all new motor vehicles. U.S. regulation history The Federal Motor Vehicle Safety Standard No. 208 (FMVSS 208) was amended by the NHTSA to require a seat belt/starter interlock system to prevent passenger cars from being started with an unbelted front-seat occupant. This mandate applied to passenger cars built after August 1973, i.e., starting with the 1974 model year. The specifications required the system to permit the car to be started only if the belt of an occupied seat were fastened after the occupant sat down, so pre-buckling the belts would not defeat the system. The interlock systems used logic modules complex enough to require special diagnostic computers, and were not entirely dependable—an override button was provided under the hood of equipped cars, permitting one (but only one) "free" starting attempt each time it was pressed. However, the interlock system spurred severe backlash from an American public who largely rejected seat belts. In 1974, Congress acted to prohibit NHTSA from requiring or permitting a system that prevents a vehicle from starting or operating with an unbelted occupant, or that gives an audible warning of an unfastened belt for more than 8 seconds after the ignition is turned on. This prohibition took effect on 27 October 1974, shortly after the 1975 model year began. 
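The sequencing rule described above (the belt counts only if it is fastened after the seat is occupied, and the under-hood override grants exactly one free start) is easiest to see as a small state model. The sketch below is purely illustrative and assumes simplified inputs; it is not the logic of any actual 1970s interlock module, and all names in it are invented for this example.

```python
# Illustrative sketch of the 1974-style seat belt/starter interlock sequencing.
# All class and method names are assumptions made for this example.

class SeatBeltInterlock:
    def __init__(self):
        self.seat_occupied_at = None   # time at which the seat became occupied
        self.belt_fastened_at = None   # time at which the belt was latched
        self.override_armed = False    # under-hood button grants one free start

    def occupy_seat(self, t):
        self.seat_occupied_at = t
        self.belt_fastened_at = None   # a pre-buckled belt does not count

    def fasten_belt(self, t):
        self.belt_fastened_at = t

    def press_override(self):
        self.override_armed = True     # permits one (and only one) free start

    def may_start(self):
        if self.override_armed:
            self.override_armed = False   # consumed by this starting attempt
            return True
        # Start is allowed only if the belt was fastened after the occupant sat down.
        return (self.seat_occupied_at is not None
                and self.belt_fastened_at is not None
                and self.belt_fastened_at > self.seat_occupied_at)

# Buckling the belt before sitting down does not defeat the system:
interlock = SeatBeltInterlock()
interlock.fasten_belt(t=0)
interlock.occupy_seat(t=1)
assert not interlock.may_start()
interlock.fasten_belt(t=2)      # fastened after sitting down
assert interlock.may_start()
```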
In response to the Congressional action, NHTSA once again amended FMVSS 208, requiring vehicles to come with a seat belt reminder system that gives an audible signal for 4 to 8 seconds and a warning light for at least 60 seconds after the ignition is turned on if the driver's seat belt is not fastened. This is called a seat belt reminder (SBR) system. In the mid-1990s, the Swedish insurance company Folksam worked with Saab and Ford to determine the requirements for the most efficient seat belt reminder. One characteristic of the optimal SBR, according to the research, is that the audible warning becomes increasingly penetrating the longer the seat belt remains unfastened. Efficacy In 2001, Congress directed NHTSA to study the benefits of technology meant to increase the use of seat belts. NHTSA found that seat belt usage had increased to 73% since the initial introduction of the SBR system. In 2002, Ford demonstrated that seat belts were used more in Fords with seat belt reminders than in those without: 76% and 71% respectively. In 2007, Honda conducted a similar study and found that 90% of people who drove Hondas with seat belt reminders used a seat belt, while 84% of people who drove Hondas without seat belt reminders used a seat belt. In 2003, the Transportation Research Board Committee, chaired by two psychologists, reported that "Enhanced SBRs" (ESBRs) could save an additional 1,000 lives a year. Research by the Insurance Institute for Highway Safety found that Ford's ESBR, which provides an intermittent chime for up to five minutes if the driver is unbelted, sounding for 6 seconds then pausing for 30, increased seat belt use by 5 percentage points. Farmer and Wells found that driver fatality rates were 6% lower for vehicles with ESBR compared with otherwise-identical vehicles without. Delayed start Starting with the 2020 model year, some Chevrolet cars refuse to shift from Park to Drive for 20 seconds if the driver is unbuckled and the car is in "teen driver" mode. A similar feature was previously available on some General Motors fleet cars. Regulation by country International regulations Several countries apply UN-ECE vehicle regulations 14 and 16: UN Regulation No. 14: safety belt anchorages UN Regulation No. 16: safety belts, restraint systems, child restraint systems, and ISOFIX child restraint systems for occupants of power-driven vehicles; and vehicles equipped with safety belts, safety belt reminders, restraint systems, child restraint systems, ISOFIX child restraint systems, and i-Size child restraint systems UN Regulation No. 44: restraining devices for child occupants of power-driven vehicles ("Child Restraint Systems") UN Regulation No. 129: Enhanced Child Restraint Systems Local regulations Legislation Observational studies of car crash morbidity and mortality, as well as experiments using both crash test dummies and human cadavers, indicate that wearing seat belts greatly reduces the risk of death and injury in the majority of car crashes. This has led many countries to adopt mandatory seat belt wearing laws. It is generally accepted that, in comparing like-for-like accidents, a vehicle occupant not wearing a properly fitted seat belt has a significantly and substantially higher chance of death and serious injury. One large observational study using U.S. data showed that the odds ratio of crash death is 0.46 with a three-point belt when compared with no belt. 
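To unpack what an odds ratio of 0.46 expresses, consider purely hypothetical counts (invented for illustration, not taken from the study above): if 50 of 1,000 belted occupants in comparable crashes died versus 100 of 1,000 unbelted occupants, the odds ratio would be

\[
\mathrm{OR} = \frac{50/950}{100/900} \approx \frac{0.053}{0.111} \approx 0.47,
\]

meaning the odds of death for belted occupants are a little under half the odds for unbelted occupants.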
In another study that examined injuries presenting to the ER pre- and post-seat belt law introduction, it was found that 40% more escaped injury and 35% more escaped mild and moderate injuries. The effects of seat belt laws are disputed by those who observe that their passage did not reduce road fatalities. There has also been concern that instead of legislating for a general protection standard for vehicle occupants, laws that required a particular technical approach would rapidly become dated, as motor manufacturers would tool up for a particular standard that could not easily be changed. For example, in 1969 there were competing designs for lap and three-point seat belts, rapidly tilting seats, and airbags being developed. As countries started to mandate seat belt restraints, the global auto industry invested in the tooling, standardized exclusively on seat belts, and ignored other restraint designs such as airbags for several decades. As of 2016, seat belt laws can be divided into two categories: primary and secondary. A primary seat belt law allows an officer to issue a citation for lack of seat belt use without any other citation, whereas a secondary seat belt law allows an officer to issue a seat belt citation only in the presence of a different violation. In the United States, 15 states enforce secondary laws, while 34 states, as well as the District of Columbia, American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the Virgin Islands, enforce primary seat belt laws. New Hampshire has neither a primary nor a secondary seat belt law. Risk compensation Some have proposed that the number of deaths was influenced by risk compensation, the theory that drivers adjust their behavior in response to the increased sense of personal safety wearing a seat belt provides. In one trial, subjects were asked to drive go-karts around a track under various conditions. It was found that subjects who started driving unbelted drove consistently faster when subsequently belted. Similarly, a study of habitual non-seat belt wearers driving in freeway conditions found evidence that they had adapted to belt use by adopting higher driving speeds and closer following distances. A 2001 analysis of U.S. crash data aimed to establish the effects of legislation on driving fatalities and found that previous estimates of seat belt effectiveness had been significantly overstated. According to the analysis, seat belts decreased fatalities by 1.35% for each 10% increase in seat belt use. The study controlled for endogenous motivations for seat belt use, because these create an artificial correlation between seat belt use and fatalities that could otherwise lead to the spurious conclusion that seat belts cause fatalities. For example, drivers in high-risk areas are more likely to use seat belts and are more likely to be in accidents, creating a non-causal correlation between seat belt use and mortality. After accounting for the endogeneity of seat belt usage, Cohen and Einav found no evidence that the risk compensation effect makes seat belt-wearing drivers more dangerous, a finding at variance with other research. Increased traffic Other statistical analyses have included adjustments for factors such as increased traffic and age and, based on these adjustments, attribute a reduction in morbidity and mortality to seat belt use. However, Smeed's law predicts a fall in accident rate with increasing car ownership and has been demonstrated independently of seat belt legislation. 
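For reference, Smeed's 1949 relationship is commonly quoted in the form

\[
D = 0.0003\,(n p^{2})^{1/3},
\]

where D is the number of annual road deaths, n the number of registered vehicles, and p the population. Dividing by n gives deaths per vehicle, \(D/n = 0.0003\,(p/n)^{2/3}\), which falls as vehicle ownership rises; this is the effect referred to above, stated here as a commonly cited approximation rather than a result from the seat belt studies discussed.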
Mass transit considerations Buses School buses In the U.S., six states—California, Florida, Louisiana, New Jersey, New York, and Texas—require seat belts on school buses. Pros and cons have been debated about the use of seat belts in school buses. School buses, which are much bigger than the average vehicle, allow for the mass transportation of students from place to place. The American School Bus Council states in a brief article: "The children are protected like eggs in an egg carton—compartmentalized, and surrounded with padding and structural integrity to secure the entire container." Although school buses are considered safe for mass transit of students, this will not guarantee that the students will be injury-free if an impact were to occur. Seat belts in buses are sometimes believed to make recovering from a roll or tip harder for passengers, as they could be easily trapped in their own safety belts. In 2015, for the first time, NHTSA endorsed seat belts on school buses. Motor coaches In the European Union, all new long-distance buses and coaches must be fitted with seat belts. Australia has required lap/sash seat belts in new coaches since 1994. These must comply with Australian Design Rule 68, which requires the seat belt, seat and seat anchorage to withstand 20g deceleration and an impact by an unrestrained occupant to the rear. In the United States, NHTSA now requires lap-shoulder seat belts in new "over-the-road" buses (includes most coaches) starting in 2016. Trains The use of seat belts in trains has been investigated. Concerns about survival space intrusion in train crashes and increased injuries to unrestrained or incorrectly restrained passengers led researchers to discourage the use of seat belts in trains. "It has been shown that there is no net safety benefit for passengers who choose to wear 3-point restraints on passenger-carrying rail vehicles. Generally, passengers who choose not to wear restraints in a vehicle modified to accept 3-point restraints receive marginally more severe injuries." Airplanes All aerobatic aircraft and gliders (sailplanes) are fitted with four or five-point harnesses, as are many types of light aircraft and many types of military aircraft. The seat belts in these aircraft have the dual function of crash protection and keeping the pilot(s) and crew in their seat(s) during turbulence and aerobatic maneuvers. Airliners are fitted with lap belts. Unlike road vehicles, airliner seat belts are not primarily designed for crash protection. Their main purpose is to keep passengers in their seats during events such as turbulence. Many civil aviation authorities require a "fasten seat belt" sign in the cabin that can be activated by a pilot during taxiing, takeoff, turbulence, and landing. The International Civil Aviation Organization recommends the use of child restraints. Some airline authorities, including the UK Civil Aviation Authority (CAA), permit the use of airline infant lap belts (sometimes known as an infant loop or belly belt) to secure an infant under age two sitting on an adult's lap. See also Automobile safety Baby transport Crashworthiness List of auto parts Passive safety device Safety harness Seat belt use rates by country References External links Vehicle safety technologies English inventions Safety equipment Vehicle parts
Seat belt
[ "Technology" ]
9,160
[ "Vehicle parts", "Components" ]
51,483
https://en.wikipedia.org/wiki/Mutualism%20%28biology%29
Mutualism describes the ecological interaction between two or more species where each species has a net benefit. Mutualism is a common type of ecological interaction. Prominent examples are: the nutrient exchange between vascular plants and mycorrhizal fungi, the fertilization of flowering plants by pollinators, the ways plants use fruits and edible seeds to encourage animal aid in seed dispersal, and the way corals become photosynthetic with the help of microorganisms known as zooxanthellae. Mutualism can be contrasted with interspecific competition, in which each species experiences reduced fitness, and with exploitation, or parasitism, in which one species benefits at the expense of the other. However, mutualism may evolve from interactions that began with imbalanced benefits, such as parasitism. The term mutualism was introduced by Pierre-Joseph van Beneden in his 1876 book Animal Parasites and Messmates to mean "mutual aid among species". Mutualism is often conflated with two other types of ecological phenomena: cooperation and symbiosis. Cooperation most commonly refers to increases in fitness through within-species (intraspecific) interactions, although it has been used (especially in the past) to refer to mutualistic interactions, and it is sometimes used to refer to mutualistic interactions that are not obligate. Symbiosis involves two species living in close physical contact over a long period of their existence and may be mutualistic, parasitic, or commensal, so symbiotic relationships are not always mutualistic, and mutualistic interactions are not always symbiotic. Although mutualism and symbiosis have different definitions, the two terms have largely been used interchangeably in the past, and confusion over their use has persisted. Mutualism plays a key part in ecology and evolution. For example, mutualistic interactions are vital for terrestrial ecosystem function: about 80% of land plant species rely on mycorrhizal relationships with fungi to provide them with inorganic compounds and trace elements, and estimates of the proportion of tropical rainforest plants with seed dispersal mutualisms with animals range from at least 70% to 93.5%. In addition, mutualism is thought to have driven the evolution of much of the biological diversity we see, such as flower forms (important for pollination mutualisms) and co-evolution between groups of species. A prominent example of pollination mutualism is that between bees and flowering plants. Bees feed on the plants' pollen and nectar. In turn, they transfer pollen to other nearby flowers, inadvertently allowing for cross-pollination. Cross-pollination has become essential in plant reproduction and fruit/seed production. The bees get their nutrients from the plants while enabling successful fertilization of the plants, demonstrating a mutualistic relationship between two seemingly unlike species. Mutualism has also been linked to major evolutionary events, such as the evolution of the eukaryotic cell (symbiogenesis) and the colonization of land by plants in association with mycorrhizal fungi. Types Resource-resource relationships Mutualistic relationships can be thought of as a form of "biological barter" in mycorrhizal associations between plant roots and fungi, with the plant providing carbohydrates to the fungus in return for primarily phosphate but also nitrogenous compounds. Other examples include rhizobia bacteria that fix nitrogen for leguminous plants (family Fabaceae) in return for energy-containing carbohydrates. 
Metabolite exchange between multiple mutualistic species of bacteria has also been observed in a process known as cross-feeding. Service-resource relationships Service-resource relationships are common. Three important types are pollination, cleaning symbiosis, and zoochory. In pollination, a plant trades food resources in the form of nectar or pollen for the service of pollen dispersal. However, daciniphilous Bulbophyllum orchid species trade sex pheromone precursor or booster components via floral synomones/attractants in a true mutualistic interaction with males of Dacini fruit flies (Diptera: Tephritidae: Dacinae). Phagophiles feed (resource) on ectoparasites, thereby providing an anti-pest service, as in cleaning symbiosis. Elacatinus and Gobiosoma, genera of gobies, feed on ectoparasites of their clients while cleaning them. Zoochory is the dispersal of the seeds of plants by animals. This is similar to pollination in that the plant produces food resources (for example, fleshy fruit, overabundance of seeds) for animals that disperse the seeds (service). Plants may advertise these resources using colour and a variety of other fruit characteristics, e.g., scent. Fruit of the aardvark cucumber (Cucumis humifructus) is buried so deeply that the plant is solely reliant upon the aardvark's keen sense of smell to detect its ripened fruit, and to extract, consume and then scatter its seeds; the geographical range of C. humifructus is thus restricted to that of the aardvark. Another type is ant protection of aphids, where the aphids trade sugar-rich honeydew (a by-product of their mode of feeding on plant sap) in return for defense against predators such as ladybugs. Service-service relationships Strict service-service interactions are very rare, for reasons that are far from clear. One example is the relationship between sea anemones and anemone fish in the family Pomacentridae: the anemones provide the fish with protection from predators (which cannot tolerate the stings of the anemone's tentacles) and the fish defend the anemones against butterflyfish (family Chaetodontidae), which eat anemones. However, in common with many mutualisms, there is more than one aspect to it: in the anemonefish-anemone mutualism, waste ammonia from the fish feeds the symbiotic algae that are found in the anemone's tentacles. Therefore, what appears to be a service-service mutualism in fact has a service-resource component. A second example is that of the relationship between some ants in the genus Pseudomyrmex and trees in the genus Acacia, such as the whistling thorn and bullhorn acacia. The ants nest inside the plant's thorns. In exchange for shelter, the ants protect acacias from attack by herbivores (which they frequently eat when those are small enough, introducing a resource component to this service-service relationship) and from competition from other plants by trimming back vegetation that would shade the acacia. In addition, another service-resource component is present, as the ants regularly feed on lipid-rich food-bodies called Beltian bodies that are on the Acacia plant. In the neotropics, the ant Myrmelachista schumanni makes its nest in special cavities in Duroia hirsuta. Plants in the vicinity that belong to other species are killed with formic acid. This selective gardening can be so aggressive that small areas of the rainforest are dominated by Duroia hirsuta. These peculiar patches are known by local people as "devil's gardens". 
In some of these relationships, the ants' protection can come at a considerable cost to the plant. Cordia sp. trees in the Amazonian rainforest have a kind of partnership with Allomerus sp. ants, which make their nests in modified leaves. To increase the amount of living space available, the ants will destroy the tree's flower buds. The flowers die and leaves develop instead, providing the ants with more dwellings. Another type of Allomerus sp. ant lives with the Hirtella sp. tree in the same forests, but in this relationship, the tree has turned the tables on the ants. When the tree is ready to produce flowers, the ant abodes on certain branches begin to wither and shrink, forcing the occupants to flee, leaving the tree's flowers to develop free from ant attack. The term "species group" can be used to describe the manner in which individual organisms group together. In this non-taxonomic context one can refer to "same-species groups" and "mixed-species groups." While same-species groups are the norm, examples of mixed-species groups abound. For example, zebra (Equus burchelli) and wildebeest (Connochaetes taurinus) can remain in association during periods of long distance migration across the Serengeti as a strategy for thwarting predators. Cercopithecus mitis and Cercopithecus ascanius, species of monkey in the Kakamega Forest of Kenya, can stay in close proximity and travel along exactly the same routes through the forest for periods of up to 12 hours. These mixed-species groups cannot be explained by the coincidence of sharing the same habitat. Rather, they are created by the active behavioural choice of at least one of the species in question. Evolution Mutualistic symbiosis can sometimes evolve from parasitism or commensalism. Symbiogenesis, a leading theory on the evolution of eukaryotes, states that the mitochondrion and the cell nucleus emerged from a parasitic relationship between ancient Archaea and Bacteria. Fungi's relationship to plants in the form of mycelium evolved from parasitism and commensalism. Under certain conditions, species of fungi previously in a state of mutualism can turn parasitic on weak or dying plants. Likewise, the symbiotic relationship of clown fish and sea anemones emerged from a commensalist relationship. Once a mutualistic relationship emerges, both symbionts are pushed towards co-evolution with each other. Mathematical modeling Mathematical treatments of mutualisms, like the study of mutualisms in general, have lagged behind those for predation (predator-prey) and consumer-resource interactions. In models of mutualisms, the terms "type I" and "type II" functional responses refer to the linear and saturating relationships, respectively, between the benefit provided to an individual of species 1 (dependent variable) and the density of species 2 (independent variable). Type I functional response One of the simplest frameworks for modeling species interactions is the Lotka–Volterra equations. In this model, the changes in population densities of the two mutualists are quantified as:

\[
\frac{dN_1}{dt} = N_1\left(r_1 - \alpha_1 N_1 + \beta_{12} N_2\right), \qquad
\frac{dN_2}{dt} = N_2\left(r_2 - \alpha_2 N_2 + \beta_{21} N_1\right)
\]

where \(N_i\) is the population density of species i, \(r_i\) is the intrinsic growth rate of the population of species i, \(\alpha_i\) is the negative effect of within-species crowding on species i, and \(\beta_{ij}\) is the beneficial effect of the density of species j on species i. This is in essence the logistic growth equation modified for mutualistic interaction. The mutualistic interaction term represents the increase in population growth of one species as a result of the presence of greater numbers of another species. 
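The behaviour of this Type I model is easy to explore numerically. The sketch below is a minimal illustration only: the parameter values, the explicit Euler scheme, and the density cap are assumptions chosen for demonstration, not values drawn from any study cited in this article.

```python
# Illustrative forward-Euler integration of the Type I (linear-benefit)
# Lotka-Volterra mutualism model described above:
#   dN1/dt = N1 * (r1 - a1*N1 + b12*N2)
#   dN2/dt = N2 * (r2 - a2*N2 + b21*N1)
# All parameter values below are assumptions for demonstration only.

def simulate(b12, b21, r1=0.5, r2=0.5, a1=0.01, a2=0.01,
             n1=10.0, n2=10.0, dt=0.01, steps=5000, cap=1e6):
    """Integrate the two ODEs; stop early if either density exceeds `cap`."""
    for step in range(steps):
        dn1 = n1 * (r1 - a1 * n1 + b12 * n2)
        dn2 = n2 * (r2 - a2 * n2 + b21 * n1)
        n1 += dn1 * dt
        n2 += dn2 * dt
        if n1 > cap or n2 > cap:
            break
    return round((step + 1) * dt, 2), round(n1, 1), round(n2, 1)

# Weak mutualism (b12*b21 < a1*a2): densities settle near the finite joint
# equilibrium N* = r/(a - b) = 100, above the single-species value r/a = 50.
print(simulate(b12=0.005, b21=0.005))   # -> (50.0, ~100, ~100)

# Strong mutualism (b12*b21 >= a1*a2): the linear benefit term dominates and
# the densities run away, exceeding the cap within a few time units. This is
# the unrealistic unbounded growth noted in the following paragraph.
print(simulate(b12=0.02, b21=0.02))
```

With the weak coupling the densities settle near the joint equilibrium; with the strong coupling there is no finite equilibrium and the simulation terminates at the cap.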
As the mutualistic interactive term β is always positive, this simple model may lead to unrealistic unbounded growth. So it may be more realistic to include a further term in the formula, representing a saturation mechanism, to avoid this occurring. Type II functional response In 1989, David Hamilton Wright modified the above Lotka–Volterra equations by adding a new term, βM/K, to represent a mutualistic relationship. Wright also considered the concept of saturation, which means that with higher densities, there is a decrease in the benefits of further increases of the mutualist population. Without saturation, depending on the size of parameter α, species densities would increase indefinitely. Because that is not possible due to environmental constraints and carrying capacity, a model that includes saturation would be more accurate. Wright's mathematical theory is based on the premise of a simple two-species mutualism model in which the benefits of mutualism become saturated due to limits posed by handling time. Wright defines handling time as the time needed to process a food item, from the initial interaction to the start of a search for new food items, and assumes that processing of food and searching for food are mutually exclusive. Mutualists that display foraging behavior are exposed to the restrictions on handling time. Mutualism can be associated with symbiosis. Handling time interactions In 1959, C. S. Holling performed his classic disc experiment that assumed that the number of food items captured is proportional to the allotted searching time, and that there is a handling time variable that exists separately from the notion of search time. He then developed an equation for the Type II functional response, which showed that the feeding rate is equivalent to

\[
\frac{a x}{1 + a T_H x}
\]

where a is the instantaneous discovery rate, x is the food item density, and \(T_H\) is the handling time. The equation that incorporates the Type II functional response and mutualism is:

\[
\frac{dN}{dt} = N\left(r - cN + \frac{b a M}{1 + a T_H M}\right)
\]

where N and M are the densities of the two mutualists, r is the intrinsic rate of increase of N, c is a coefficient measuring negative intraspecific interaction (equivalent to the inverse of the carrying capacity, 1/K, of N in the logistic equation), a is the instantaneous discovery rate, and b is a coefficient converting encounters with M into new units of N. Equivalently,

\[
\frac{dN}{dt} = N\left(r - cN + \frac{\beta M}{X + M}\right), \qquad X = \frac{1}{a T_H}, \quad \beta = \frac{b}{T_H}.
\]

This model is most effectively applied to free-living species that encounter a number of individuals of the mutualist partner in the course of their existence. Wright notes that models of biological mutualism tend to be similar qualitatively, in that the featured isoclines generally have a positive decreasing slope, and by and large similar isocline diagrams. Mutualistic interactions are best visualized as positively sloped isoclines, which can be explained by the fact that the saturation of benefits accorded to mutualism or restrictions posed by outside factors contribute to a decreasing slope. The type II functional response is visualized as the graph of the saturating benefit term \(\beta M/(X + M)\) versus M. Structure of networks Mutualistic networks made up of the interactions between plants and pollinators were found to have a similar structure in very different ecosystems on different continents, consisting of entirely different species. The structure of these mutualistic networks may have large consequences for the way in which pollinator communities respond to increasingly harsh conditions and on the community carrying capacity. 
Mathematical models that examine the consequences of this network structure for the stability of pollinator communities suggest that the specific way in which plant-pollinator networks are organized minimizes competition between pollinators, reduces the spread of indirect effects and thus enhances ecosystem stability, and may even lead to strong indirect facilitation between pollinators when conditions are harsh. This means that pollinator species together can survive under harsh conditions. But it also means that pollinator species collapse simultaneously when conditions pass a critical point. This simultaneous collapse occurs because pollinator species depend on each other to survive under difficult conditions. Such a community-wide collapse, involving many pollinator species, can occur suddenly when increasingly harsh conditions pass a critical point, and recovery from such a collapse might not be easy. The improvement in conditions needed for pollinators to recover could be substantially larger than the improvement needed to return to conditions at which the pollinator community collapsed. Humans Humans are involved in mutualisms with other species: their gut flora is essential for efficient digestion. Infestations of head lice might have been beneficial for humans by fostering an immune response that helps to reduce the threat of body louse borne lethal diseases. Some relationships between humans and domesticated animals and plants are to different degrees mutualistic. For example, agricultural varieties of maize provide food for humans and are unable to reproduce without human intervention because the leafy sheath does not fall open, and the seedhead (the "corn on the cob") does not shatter to scatter the seeds naturally. In traditional agriculture, some plants have mutualists as companion plants, providing each other with shelter, soil fertility and/or natural pest control. For example, beans may grow up cornstalks as a trellis, while fixing nitrogen in the soil for the corn, a phenomenon that is used in Three Sisters farming. One researcher has proposed that the key advantage Homo sapiens had over Neanderthals in competing over similar habitats was the former's mutualism with dogs. Intestinal microbiota The microbiota in the human intestine coevolved with the human species, and this relationship is considered to be a mutualism that is beneficial both to the human host and to the bacteria in the gut population. The mucous layer of the intestine contains commensal bacteria that produce bacteriocins, modify the pH of the intestinal contents, and compete for nutrition to inhibit colonization by pathogens. The gut microbiota, containing trillions of microorganisms, possesses the metabolic capacity to produce and regulate multiple compounds that reach the circulation and act to influence the function of distal organs and systems. Breakdown of the protective mucosal barrier of the gut can contribute to the development of colon cancer. Evolution of mutualism Evolution by type Every generation of every organism needs nutrients, and broadly similar nutrients, more than it needs particular defensive characteristics, because the fitness benefit of defensive traits varies heavily, especially with the environment. This may be the reason that hosts are more likely to evolve to become dependent on vertically transmitted bacterial mutualists that provide nutrients than on those providing defensive benefits. This pattern is generalized beyond bacteria by Yamada et al. 
2015's demonstration that undernourished Drosophila are heavily dependent on their fungal symbiont Issatchenkia orientalis for amino acids. Mutualism breakdown Mutualisms are not static, and can be lost by evolution. Sachs and Simms (2006) suggest that this can occur via four main pathways: One mutualist shifts to parasitism, and no longer benefits its partner, such as headlice One partner abandons the mutualism and lives autonomously One partner may go extinct A partner may be switched to another species There are many examples of mutualism breakdown. For example, plant lineages inhabiting nutrient-rich environments have evolutionarily abandoned mycorrhizal mutualisms many times independently. Evolutionarily, headlice may have been mutualistic as they allow for early immunity to various body-louse borne disease; however, as these diseases became eradicated, the relationship has become less mutualistic and more parasitic. Measuring and defining mutualism Measuring the exact fitness benefit to the individuals in a mutualistic relationship is not always straightforward, particularly when the individuals can receive benefits from a variety of species, for example most plant-pollinator mutualisms. It is therefore common to categorise mutualisms according to the closeness of the association, using terms such as obligate and facultative. Defining "closeness", however, is also problematic. It can refer to mutual dependency (the species cannot live without one another) or the biological intimacy of the relationship in relation to physical closeness (e.g., one species living within the tissues of the other species). See also Arbuscular mycorrhiza Co-adaptation Coevolution Ecological facilitation Frugivore Greater honeyguide – has a mutualism with humans Interspecies communication Müllerian mimicry Mutualisms and conservation Mutual Aid: A Factor of Evolution Symbiogenesis Plant–animal interaction References Further references Bronstein JL. 2001. The costs of mutualism. American Zoologist 41 (4): 825-839 S Thompson, J. N. 2005. The Geographic Mosaic of Coevolution. University of Chicago Press. Further reading Biological interactions Symbiosis Ethology
Mutualism (biology)
[ "Biology" ]
4,002
[ "Behavior", "Symbiosis", "Biological interactions", "Behavioural sciences", "Mutualism (biology)", "nan", "Ethology" ]
51,510
https://en.wikipedia.org/wiki/Silk
Silk is a natural protein fiber, some forms of which can be woven into textiles. The protein fiber of silk is composed mainly of fibroin and is most commonly produced by certain insect larvae to form cocoons. The best-known silk is obtained from the cocoons of the larvae of the mulberry silkworm Bombyx mori reared in captivity (sericulture). The shimmering appearance of silk is due to the triangular prism-like structure of the silk fibre, which allows silk cloth to refract incoming light at different angles, thus producing different colors. Harvested silk is produced by several insects; but, generally, only the silk of various moth caterpillars has been used for textile manufacturing. There has been some research into other types of silk, which differ at the molecular level. Silk is mainly produced by the larvae of insects undergoing complete metamorphosis, but some insects, such as webspinners and raspy crickets, produce silk throughout their lives. Silk production also occurs in hymenoptera (bees, wasps, and ants), silverfish, caddisflies, mayflies, thrips, leafhoppers, beetles, lacewings, fleas, flies, and midges. Other types of arthropods produce silk, most notably various arachnids, such as spiders. Etymology The word silk comes from , from and , "silken", ultimately from the Chinese word "sī" and other Asian sources—compare Mandarin "silk", Manchurian , Mongolian . History The production of silk originated in China in the Neolithic period, although it would eventually reach other places of the world ( culture, 4th millennium BC). Silk production remained confined to China until the Silk Road opened at some point during the latter part of the 1st millennium BC, though China maintained its virtual monopoly over silk production for another thousand years. Wild silk Several kinds of wild silk, produced by caterpillars other than the mulberry silkworm, have been known and spun in China, South Asia, and Europe since ancient times. However, the scale of production was always far smaller than for cultivated silks. There are several reasons for this: first, they differ from the domesticated varieties in colour and texture and are therefore less uniform; second, cocoons gathered in the wild have usually had the pupa emerge from them before being discovered so the silk thread that makes up the cocoon has been torn into shorter lengths; and third, many wild cocoons are covered in a mineral layer that prevents attempts to reel from them long strands of silk. Thus, the only way to obtain silk suitable for spinning into textiles in areas where commercial silks are not cultivated was by tedious and labor-intensive carding. Some natural silk structures have been used without being unwound or spun. Spider webs were used as a wound dressing in ancient Greece and Rome, and as a base for painting from the 16th century. Butterfly caterpillar nests were pasted together to make a fabric in the Aztec Empire. Commercial silks originate from reared silkworm pupae, which are bred to produce a white-colored silk thread with no mineral on the surface. The pupae are killed by either dipping them in boiling water before the adult moths emerge or by piercing them with a needle. These factors all contribute to the ability of the whole cocoon to be unravelled as one continuous thread, permitting a much stronger cloth to be woven from the silk. Wild silks also tend to be more difficult to dye than silk from the cultivated silkworm. 
A technique known as demineralizing allows the mineral layer around the cocoon of wild silk moths to be removed, leaving only variability in color as a barrier to creating a commercial silk industry based on wild silks in the parts of the world where wild silk moths thrive, such as in Africa and South America. China Silk use in fabric was first developed in ancient China. The earliest evidence for silk is the presence of the silk protein fibroin in soil samples from two tombs at the neolithic site Jiahu in Henan, which date back about 8,500 years. The earliest surviving example of silk fabric dates from about 3630 BC, and was used as the wrapping for the body of a child at a Yangshao culture site in Qingtaicun near Xingyang, Henan. Legend gives credit for developing silk to a Chinese empress, Leizu (Hsi-Ling-Shih, Lei-Tzu). Silks were originally reserved for the emperors of China for their own use and gifts to others, but spread gradually through Chinese culture and trade both geographically and socially, and then to many regions of Asia. Because of its texture and lustre, silk rapidly became a popular luxury fabric in the many areas accessible to Chinese merchants. Silk was in great demand, and became a staple of pre-industrial international trade. Silk was also used as a surface for writing, especially during the Warring States period (475–221 BCE). The fabric was light, it survived the damp climate of the Yangtze region, absorbed ink well, and provided a white background for the text. In July 2007, archaeologists discovered intricately woven and dyed silk textiles in a tomb in Jiangxi province, dated to the Eastern Zhou dynasty roughly 2,500 years ago. Although historians have suspected a long history of a formative textile industry in ancient China, this find of silk textiles employing "complicated techniques" of weaving and dyeing provides direct evidence for silks dating before the Mawangdui-discovery and other silks dating to the Han dynasty (202 BC – 220 AD). Silk is described in a chapter of the Fan Shengzhi shu from the Western Han (202 BC – 9 AD). There is a surviving calendar for silk production in an Eastern Han (25–220 AD) document. The two other known works on silk from the Han period are lost. The first evidence of the long distance silk trade is the finding of silk in the hair of an Egyptian mummy of the 21st dynasty, c.1070 BC. The silk trade reached as far as the Indian subcontinent, the Middle East, Europe, and North Africa. This trade was so extensive that the major set of trade routes between Europe and Asia came to be known as the Silk Road. The emperors of China strove to keep knowledge of sericulture secret to maintain the Chinese monopoly. Nonetheless, sericulture reached Korea with technological aid from China around 200 BC, the ancient Kingdom of Khotan by AD 50, and India by AD 140. In the ancient era, silk from China was the most lucrative and sought-after luxury item traded across the Eurasian continent, and many civilizations, such as the ancient Persians, benefited economically from trade. Japan Archaeological evidence indicates that sericulture has been practiced since the Yayoi period. The silk industry was dominant from the 1930s to 1950s, but is less common now. Silk from East Asia had declined in importance after silkworms were smuggled from China to the Byzantine Empire. However, in 1845, an epidemic of flacherie among European silkworms devastated the silk industry there. 
This led to a demand for silk from China and Japan, where as late as the nineteenth and early twentieth centuries, Japanese exports competed directly with Chinese in the international market in such low value-added, labor-intensive products as raw silk. Between 1850 and 1930, raw silk ranked as the leading export for both countries, accounting for 20%–40% of Japan's total exports and 20%–30% of China's. Between the 1890s and the 1930s, Japanese silk exports quadrupled, making Japan the largest silk exporter in the world. This increase in exports was mostly due to the economic reforms during the Meiji period and the decline of the Qing dynasty in China, which led to rapid industrialization of Japan whilst the Chinese industries stagnated. During World War II, embargoes against Japan had led to adoption of synthetic materials such as nylon, which led to the decline of the Japanese silk industry and its position as the lead silk exporter of the world. Today, China exports the largest volume of raw silk in the world. India Silk has a long history in India. It is known as Resham in eastern and north India, and Pattu in southern parts of India. Recent archaeological discoveries in Harappa and Chanhu-daro suggest that sericulture, employing wild silk threads from native silkworm species, existed in South Asia during the time of the Indus Valley civilisation (now in Pakistan and India) dating between 2450 BC and 2000 BC. Shelagh Vainker, a silk expert at the Ashmolean Museum in Oxford, who sees evidence for silk production in China "significantly earlier" than 2500–2000 BC, suggests, "people of the Indus civilization either harvested silkworm cocoons or traded with people who did, and that they knew a considerable amount about silk." India is the second largest producer of silk in the world after China. About 97% of the raw mulberry silk comes from six Indian states, namely, Andhra Pradesh, Karnataka, Jammu and Kashmir, Tamil Nadu, Bihar, and West Bengal. North Bangalore, the upcoming site of a $20 million "Silk City" Ramanagara and Mysore, contribute to a majority of silk production in Karnataka. In Tamil Nadu, mulberry cultivation is concentrated in the Coimbatore, Erode, Bhagalpuri, Tiruppur, Salem, and Dharmapuri districts. Hyderabad, Andhra Pradesh, and Gobichettipalayam, Tamil Nadu, were the first locations to have automated silk reeling units in India. In the northeastern state of Assam, three different types of indigenous variety of silk are produced, collectively called Assam silk: Muga silk, Eri silk and Pat silk. Muga, the golden silk, and Eri are produced by silkworms that are native only to Assam. They have been reared since ancient times similar to other East and South-East Asian countries. Thailand Silk is produced year-round in Thailand by two types of silkworms, the cultured Bombycidae and wild Saturniidae. Most production is after the rice harvest in the southern and northeastern parts of the country. Women traditionally weave silk on hand looms and pass the skill on to their daughters, as weaving is considered to be a sign of maturity and eligibility for marriage. Thai silk textiles often use complicated patterns in various colours and styles. Most regions of Thailand have their own typical silks. A single thread filament is too thin to use on its own so women combine many threads to produce a thicker, usable fiber. They do this by hand-reeling the threads onto a wooden spindle to produce a uniform strand of raw silk. 
The process takes around 40 hours to produce a half kilogram of silk. Many local operations use a reeling machine for this task, but some silk threads are still hand-reeled. The difference is that hand-reeled threads produce three grades of silk: two fine grades that are ideal for lightweight fabrics, and a thick grade for heavier material. The silk fabric is soaked in extremely cold water and bleached before dyeing to remove the natural yellow coloring of Thai silk yarn. To do this, skeins of silk thread are immersed in large tubs of hydrogen peroxide. Once washed and dried, the silk is woven on a traditional hand-operated loom. Bangladesh The Rajshahi Division of northern Bangladesh is the hub of the country's silk industry. There are three types of silk produced in the region: mulberry, endi, and tassar. Bengali silk was a major item of international trade for centuries. It was known as Ganges silk in medieval Europe. Bengal was the leading exporter of silk between the 16th and 19th centuries. Central Asia The 7th century CE murals of Afrasiyab in Samarkand, Sogdiana, show a Chinese Embassy carrying silk and a string of silkworm cocoons to the local Sogdian ruler. Middle East In the Torah, a scarlet cloth item called in Hebrew "sheni tola'at" שני תולעת – literally "crimson of the worm" – is described as being used in purification ceremonies, such as those following a leprosy outbreak (Leviticus 14), alongside cedar wood and hyssop (za'atar). Eminent scholar and leading medieval translator of Jewish sources and books of the Bible into Arabic, Rabbi Saadia Gaon, translates this phrase explicitly as "crimson silk" – חריר קרמז حرير قرمز. In Islamic teachings, Muslim men are forbidden to wear silk. Many religious jurists believe the reasoning behind the prohibition lies in avoiding clothing for men that can be considered feminine or extravagant. There are disputes regarding the amount of silk a fabric can consist of (e.g., whether a small decorative silk piece on a cotton caftan is permissible or not) for it to be lawful for men to wear, but the dominant opinion of most Muslim scholars is that the wearing of silk by men is forbidden. Modern attire has raised a number of issues, including, for instance, the permissibility of wearing silk neckties, which are masculine articles of clothing. Ancient Mediterranean In the Odyssey, 19.233, when Odysseus, while pretending to be someone else, is questioned by Penelope about her husband's clothing, he says that he wore a shirt "gleaming like the skin of a dried onion" (varies with translations, literal translation here) which could refer to the lustrous quality of silk fabric. Aristotle wrote of Coa vestis, a wild silk textile from Kos. Sea silk from certain large sea shells was also valued. The Roman Empire knew of and traded in silk, and Chinese silk was the most highly priced luxury good imported by them. During the reign of emperor Tiberius, sumptuary laws were passed that forbade men from wearing silk garments, but these proved ineffectual. The Historia Augusta mentions that the third-century emperor Elagabalus was the first Roman to wear garments of pure silk, whereas it had been customary to wear fabrics of silk/cotton or silk/linen blends. Despite the popularity of silk, the secret of silk-making only reached Europe around AD 550, via the Byzantine Empire. Contemporary accounts state that monks working for the emperor Justinian I smuggled silkworm eggs to Constantinople from China inside hollow canes. 
All top-quality looms and weavers were located inside the Great Palace complex in Constantinople, and the cloth produced was used in imperial robes or in diplomacy, as gifts to foreign dignitaries. The remainder was sold at very high prices. Medieval and modern Europe Italy was the most important producer of silk during the Medieval age. The first center to introduce silk production to Italy was the city of Catanzaro during the 11th century in the region of Calabria. The silk of Catanzaro supplied almost all of Europe and was sold in a large market fair in the port of Reggio Calabria, to Spanish, Venetian, Genovese, and Dutch merchants. Catanzaro became the lace capital of the world with a large silkworm breeding facility that produced all the laces and linens used in the Vatican. The city was world-famous for its fine fabrication of silks, velvets, damasks, and brocades. Another notable center was the Italian city-state of Lucca which largely financed itself through silk-production and silk-trading, beginning in the 12th century. Other Italian cities involved in silk production were Genoa, Venice, and Florence. The Piedmont area of Northern Italy became a major silk producing area when water-powered silk throwing machines were developed. The Silk Exchange in Valencia from the 15th century—where previously in 1348 also perxal (percale) was traded as some kind of silk—illustrates the power and wealth of one of the great Mediterranean mercantile cities. Silk was produced in and exported from the province of Granada, Spain, especially the Alpujarras region, until the Moriscos, whose industry it was, were expelled from Granada in 1571. Since the 15th century, silk production in France has been centered around the city of Lyon where many mechanic tools for mass production were first introduced in the 17th century. James I attempted to establish silk production in England, purchasing and planting 100,000 mulberry trees, some on land adjacent to Hampton Court Palace, but they were of a species unsuited to the silk worms, and the attempt failed. In 1732 John Guardivaglio set up a silk throwing enterprise at Logwood mill in Stockport; in 1744, Burton Mill was erected in Macclesfield; and in 1753 Old Mill was built in Congleton. These three towns remained the centre of the English silk throwing industry until silk throwing was replaced by silk waste spinning. British enterprise also established silk filature in Cyprus in 1928. In England in the mid-20th century, raw silk was produced at Lullingstone Castle in Kent. Silkworms were raised and reeled under the direction of Zoe Lady Hart Dyke, later moving to Ayot St Lawrence in Hertfordshire in 1956. During World War II, supplies of silk for UK parachute manufacture were secured from the Middle East by Peter Gaddum. North America Wild silk taken from the nests of native butterfly and moth caterpillars was used by the Aztecs to make containers and as paper. Silkworms were introduced to Oaxaca from Spain in the 1530s and the region profited from silk production until the early 17th century, when the king of Spain banned export to protect Spain's silk industry. Silk production for local consumption has continued until the present day, sometimes spinning wild silk. King James I introduced silk-growing to the British colonies in America around 1619, ostensibly to discourage tobacco planting. The Shakers in Kentucky adopted the practice. 
The history of industrial silk in the United States is largely tied to several smaller urban centers in the Northeast region. Beginning in the 1830s, Manchester, Connecticut emerged as the early center of the silk industry in America, when the Cheney Brothers became the first in the United States to properly raise silkworms on an industrial scale; today the Cheney Brothers Historic District showcases their former mills. With the mulberry tree craze of that decade, other smaller producers began raising silkworms. This economy particularly gained traction in the vicinity of Northampton, Massachusetts and its neighboring Williamsburg, where a number of small firms and cooperatives emerged. Among the most prominent of these was the cooperative utopian Northampton Association for Education and Industry, of which Sojourner Truth was a member. Following the destructive Mill River Flood of 1874, one manufacturer, William Skinner, relocated his mill from Williamsburg to the then-new city of Holyoke. Over the next 50 years he and his sons maintained relations between the American silk industry and its counterparts in Japan, and expanded their business to the point that by 1911, the Skinner Mill complex contained the largest silk mill under one roof in the world, and the brand Skinner Fabrics had become the largest manufacturer of silk satins internationally. Other efforts later in the 19th century would also bring the new silk industry to Paterson, New Jersey, with several firms hiring European-born textile workers and earning it the nickname "Silk City" as another major center of production in the United States. World War II interrupted the silk trade from Asia, and silk prices increased dramatically. U.S. industry began to look for substitutes, which led to the use of synthetics such as nylon. Synthetic silks have also been made from lyocell, a type of cellulose fiber, and are often difficult to distinguish from real silk (see spider silk for more on synthetic silks). Malaysia In Terengganu, which is now part of Malaysia, a second generation of silkworms was being imported as early as 1764 for the country's silk textile industry, especially songket. Since the 1980s, however, Malaysia has no longer been engaged in sericulture, though it still plants mulberry trees. Vietnam In Vietnamese legend, silk appeared in the first millennium AD and is still being woven today. Production process The process of silk production is known as sericulture. The entire production process of silk can be divided into several steps which are typically handled by different entities. Extracting raw silk starts by cultivating the silkworms on mulberry leaves. Once the worms start pupating in their cocoons, these are dissolved in boiling water in order for individual long fibres to be extracted and fed into the spinning reel. To produce 1 kg of silk, 104 kg of mulberry leaves must be eaten by 3000 silkworms. It takes about 5000 silkworms to make a pure silk kimono. The major silk producers are China (54%) and India (14%). The environmental impact of silk production is potentially large when compared with other natural fibers. A life-cycle assessment of Indian silk production shows that the production process has a large carbon and water footprint, mainly because it is an animal-derived fiber and more inputs such as fertilizer and water are needed per unit of fiber produced.
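The production figures quoted above lend themselves to a quick back-of-the-envelope check. The sketch below is purely illustrative and simply takes the quoted ratios at face value (104 kg of mulberry leaves and 3,000 silkworms per kilogram of raw silk, roughly 5,000 silkworms per kimono); the derived quantities are assumptions implied by those ratios, not published production data.

```python
# Rough yield arithmetic using the figures quoted above (a sketch, not
# authoritative production data). All derived numbers are only what the
# quoted ratios imply when taken at face value.

LEAVES_PER_KG_SILK = 104.0   # kg of mulberry leaves per kg of raw silk
WORMS_PER_KG_SILK = 3000     # silkworms per kg of raw silk
WORMS_PER_KIMONO = 5000      # silkworms per pure-silk kimono

def kimono_inputs():
    """Estimate the raw silk and mulberry leaves implied by one kimono."""
    silk_kg = WORMS_PER_KIMONO / WORMS_PER_KG_SILK   # about 1.67 kg of raw silk
    leaves_kg = silk_kg * LEAVES_PER_KG_SILK         # about 173 kg of leaves
    return silk_kg, leaves_kg

if __name__ == "__main__":
    silk, leaves = kimono_inputs()
    print(f"One kimono implies ~{silk:.2f} kg raw silk and ~{leaves:.0f} kg mulberry leaves")
```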
Properties Physical properties Silk fibers from the Bombyx mori silkworm have a triangular cross section with rounded corners, 5–10 μm wide. The fibroin-heavy chain is composed mostly of beta-sheets, due to a 59-mer amino acid repeat sequence with some variations. The flat surfaces of the fibrils reflect light at many angles, giving silk a natural sheen. The cross-section from other silkworms can vary in shape and diameter: crescent-like for Anaphe and elongated wedge for tussah. Silkworm fibers are naturally extruded from two silkworm glands as a pair of primary filaments (brin), which are stuck together, with sericin proteins that act like glue, to form a bave. Bave diameters for tussah silk can reach 65 μm. See cited reference for cross-sectional SEM photographs. Silk has a smooth, soft texture that is not slippery, unlike many synthetic fibers. Silk is one of the strongest natural fibers, but it loses up to 20% of its strength when wet. It has a good moisture regain of 11%. Its elasticity is moderate to poor: if elongated even a small amount, it remains stretched. It can be weakened if exposed to too much sunlight. It may also be attacked by insects, especially if left dirty. One example of the durable nature of silk over other fabrics is demonstrated by the recovery in 1840 of silk garments from a wreck of 1782: 'The most durable article found has been silk; for besides pieces of cloaks and lace, a pair of black satin breeches, and a large satin waistcoat with flaps, were got up, of which the silk was perfect, but the lining entirely gone ... from the thread giving way ... No articles of dress of woollen cloth have yet been found.' Silk is a poor conductor of electricity and thus susceptible to static cling. Silk has a high emissivity for infrared light, making it feel cool to the touch. Unwashed silk chiffon may shrink up to 8% due to a relaxation of the fiber macrostructure, so silk should either be washed prior to garment construction, or dry cleaned. Dry cleaning may still shrink the chiffon up to 4%. Occasionally, this shrinkage can be reversed by a gentle steaming with a press cloth. There is almost no gradual shrinkage nor shrinkage due to molecular-level deformation. Natural and synthetic silk is known to manifest piezoelectric properties in proteins, probably due to its molecular structure. Silkworm silk was used as the standard for the denier, a measurement of linear density in fibers. Silkworm silk therefore has a linear density of approximately 1 den, or 1.1 dtex. Chemical properties Silk emitted by the silkworm consists of two main proteins, sericin and fibroin, fibroin being the structural center of the silk, and sericin being the sticky material surrounding it. Fibroin is made up of the amino acids Gly-Ser-Gly-Ala-Gly-Ala and forms beta pleated sheets. Hydrogen bonds form between chains, and side chains are oriented above and below the plane of the hydrogen bond network. The high proportion (50%) of glycine allows tight packing. This is because glycine has no side chain and is therefore unencumbered by steric strain. The addition of alanine and serine makes the fibres strong and resistant to breaking. This tensile strength is due to the many interceded hydrogen bonds, and when stretched the force is applied to these numerous bonds and they do not break. Silk resists most mineral acids, except for sulfuric acid, which dissolves it. It is yellowed by perspiration. Chlorine bleach will also destroy silk fabrics. 
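The denier and decitex figures quoted in the physical-properties discussion above follow directly from the definitions of the units: denier is the mass in grams of 9,000 metres of fibre, while decitex (dtex) is the mass in grams of 10,000 metres. The short sketch below only illustrates that conversion; the helper names are arbitrary, and nothing here is specific to silk beyond the roughly 1 den figure quoted above.

```python
# Unit-definition check for the denier/dtex figures above (a sketch).
# Denier: grams per 9,000 m of fibre. Decitex (dtex): grams per 10,000 m.

def denier_to_dtex(den: float) -> float:
    """Convert linear density from denier to decitex."""
    return den * 10_000 / 9_000   # same mass per length, re-expressed per 10,000 m

def dtex_to_denier(dtex: float) -> float:
    """Convert linear density from decitex to denier."""
    return dtex * 9_000 / 10_000

if __name__ == "__main__":
    print(f"1 den = {denier_to_dtex(1.0):.2f} dtex")   # ~1.11 dtex, i.e. the ~1.1 dtex quoted
```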
Variants Regenerated silk fiber RSF is produced by chemically dissolving silkworm cocoons, leaving their molecular structure intact. The silk fibers dissolve into tiny thread-like structures known as microfibrils. The resulting solution is extruded through a small opening, causing the microfibrils to reassemble into a single fiber. The resulting material is reportedly twice as stiff as silk. Applications Clothing Silk's absorbency makes it comfortable to wear in warm weather and while active. Its low conductivity keeps warm air close to the skin during cold weather. It is often used for clothing such as shirts, ties, blouses, formal dresses, high-fashion clothes, lining, lingerie, pajamas, robes, dress suits, sun dresses, and traditional Asian clothing. Silk is also excellent for insect-proof clothing, protecting the wearer from mosquitoes and horseflies. Fabrics that are often made from silk include satin, charmeuse, habutai, chiffon, taffeta, crêpe de chine, dupioni, noil, tussah, and shantung, among others. Furniture Silk's attractive lustre and drape make it suitable for many furnishing applications. It is used for upholstery, wall coverings, window treatments (if blended with another fiber), rugs, bedding, and wall hangings. Industry Silk has had many industrial and commercial uses, such as in parachutes, bicycle tires, comforter filling, and artillery gunpowder bags. Medicine A special manufacturing process removes the outer sericin coating of the silk, which makes it suitable for use as non-absorbable surgical sutures. Sometimes wearing silk is suggested for people with atopic dermatitis but, even though it is safe for the skin, it does not improve symptoms of the condition. Biomaterial Silk began to serve as a biomedical material for sutures in surgeries as early as the second century CE. In the past 30 years, it has been widely studied and used as a biomaterial due to its mechanical strength, biocompatibility, tunable degradation rate, ease of loading cellular growth factors (for example, BMP-2), and its ability to be processed into several other formats such as films, gels, particles, and scaffolds. Silks from Bombyx mori, a kind of cultivated silkworm, are the most widely investigated silks. Silks derived from Bombyx mori are generally made of two parts: the silk fibroin fiber, which contains a light chain of 25 kDa and a heavy chain of 350 kDa (or 390 kDa) linked by a single disulfide bond, and a glue-like protein, sericin, comprising 25 to 30 percent by weight. Silk fibroin contains hydrophobic beta sheet blocks, interrupted by small hydrophilic groups. The beta-sheets contribute much to the high mechanical strength of silk fibers, which reaches 740 MPa, tens of times that of poly(lactic acid) and hundreds of times that of collagen. This impressive mechanical strength has made silk fibroin very competitive for applications in biomaterials. Indeed, silk fibers have found their way into tendon tissue engineering, where mechanical properties matter greatly. In addition, the mechanical properties of silks from various kinds of silkworms vary widely, which provides more choices for their use in tissue engineering. Most products fabricated from regenerated silk are weak and brittle, with only ≈1–2% of the mechanical strength of native silk fibers, due to the absence of appropriate secondary and hierarchical structure. Biocompatibility Biocompatibility, i.e., to what level the silk will cause an immune response, is a critical issue for biomaterials. The issue arose during its increasing clinical use.
Wax or silicone is usually used as a coating to avoid fraying and potential immune responses when silk fibers serve as suture materials. Although the lack of detailed characterization of silk fibers in the literature (such as the extent of sericin removal, the surface chemistry of the coating material, and the process used) makes it difficult to determine their real immune response, it is generally believed that sericin is the major cause of immune response. Thus, the removal of sericin is an essential step in assuring biocompatibility in biomaterial applications of silk. However, further research on isolated sericin and sericin-based biomaterials has failed to clearly demonstrate a contribution of sericin to inflammatory responses. In addition, silk fibroin exhibits an inflammatory response similar to that of tissue culture plastic in vitro when assessed with human mesenchymal stem cells (hMSCs), and a lower response than collagen and PLA when rat MSCs are implanted with silk fibroin films in vivo. Thus, appropriate degumming and sterilization help assure the biocompatibility of silk fibroin, as further validated by in vivo experiments on rats and pigs. In contrast to these promising results, there are still concerns about the long-term safety of silk-based biomaterials in the human body. Even though silk sutures serve well, they remain in the body and interact with it for only a limited period governed by wound recovery (several weeks), much shorter than the timescales involved in tissue engineering. Another concern arises from biodegradation, because the biocompatibility of silk fibroin does not necessarily assure the biocompatibility of its decomposed products. In fact, different levels of immune responses and diseases have been triggered by the degraded products of silk fibroin. Biodegradability Biodegradability (also known as biodegradation), the ability to be disintegrated by biological means including bacteria, fungi, and cells, is another significant property of biomaterials. Biodegradable materials can minimize patients' pain from surgery, especially in tissue engineering, since there is no need for a second operation to remove the implanted scaffold. Wang et al. showed the in vivo degradation of silk via aqueous 3D scaffolds implanted into Lewis rats. Enzymes are the means used to achieve degradation of silk in vitro. Protease XIV from Streptomyces griseus and α-chymotrypsin from bovine pancreas are two popular enzymes for silk degradation. In addition, gamma radiation, as well as cell metabolism, can also regulate the degradation of silk. Compared with synthetic biomaterials such as polyglycolides and polylactides, silk is advantageous in some aspects of biodegradation. The acidic degradation products of polyglycolides and polylactides decrease the pH of the ambient environment and thus adversely influence the metabolism of cells, which is not an issue for silk. In addition, silk materials can retain strength over a desired period, from weeks to months, on an as-needed basis, by modulating the beta-sheet content. Genetic modification Genetic modification of domesticated silkworms has been used to alter the composition of the silk. As well as possibly facilitating the production of more useful types of silk, this may allow other industrially or therapeutically useful proteins to be made by silkworms. Cultivation Silk moths lay eggs on specially prepared paper. The eggs hatch and the caterpillars (silkworms) are fed fresh mulberry leaves.
After about 35 days and 4 moltings, the caterpillars are 10,000 times heavier than when hatched and are ready to begin spinning a cocoon. A straw frame is placed over the tray of caterpillars, and each caterpillar begins spinning a cocoon by moving its head in a pattern. Two glands produce liquid silk and force it through openings in the head called spinnerets. Liquid silk is coated in sericin, a water-soluble protective gum, and solidifies on contact with the air. Within 2–3 days, the caterpillar spins about of filament and is completely encased in a cocoon. The silk farmers then heat the cocoons to kill them, leaving some to metamorphose into moths to breed the next generation of caterpillars. Harvested cocoons are then soaked in boiling water to soften the sericin holding the silk fibers together in a cocoon shape. The fibers are then unwound to produce a continuous thread. Since a single thread is too fine and fragile for commercial use, anywhere from three to ten strands are spun together to form a single thread of silk. Animal rights As the process of harvesting the silk from the cocoon kills the larvae by boiling, sericulture has been criticized by animal welfare activists, including People for the Ethical Treatment of Animals (PETA), who urge people not to buy silk items. Mahatma Gandhi was critical of silk production because of his Ahimsa (non-violent) philosophy, which led to the promotion of cotton and Ahimsa silk, a type of wild silk made from the cocoons of wild and semi-wild silk moths. See also Art silk Bulletproofing International Year of Natural Fibres Mommes Rayon Sea silk Silk waste Sinchaw Spider silk Xiangyunsha silk References Citations Bibliography Hill, John E. (2004). The Peoples of the West from the Weilüe 魏略 by Yu Huan 魚豢: A Third Century Chinese Account Composed between 239 and 265 AD. Draft annotated English translation. Appendix E. Hill, John E. (2009) Through the Jade Gate to Rome: A Study of the Silk Routes during the Later Han Dynasty, 1st to 2nd centuries CE. BookSurge, Charleston, South Carolina. . Magie, David (1924). Historia Augusta Life of Heliogabalus. Loeb Classical Texts No. 140: Harvard University Press.. Further reading Feltwell, John (1990). The Story of Silk. Alan Sutton Publishing. . Good, Irene (December 1995). "On the Question of Silk in Pre-Han Eurasia". Antiquity. Vol. 69, Number 266. pp. 959–968. Kadolph, Sara J. (2007). Textiles (10th ed.). Upper Saddle River: Pearson Prentice Hall. pp. 76–81. Kuhn, Dieter (1995). "Silk Weaving in Ancient China: From Geometric Figures to Patterns of Pictorial Likeness". Chinese Science. 12. pp. 77–114. Ricci, G.; et al. (2004). "Clinical Effectiveness of a Silk Fabric in the Treatment of Atopic Dermatitis". British Journal of Dermatology. Issue 150. pp. 127–131. Sung, Ying-Hsing. 1637. "Chapter 2. Clothing materials". Chinese Technology in the Seventeenth Century – T'ien-kung K'ai-wu. Translated and annotated by E-tu Zen Sun and Shiou-chuan Sun. Pennsylvania State University Press, 1966. Reprint: Dover, 1997. Liu, Xinru (1996). Silk and Religion: An Exploration of Material Life and the Thought of People, AD 600–1200. Oxford University Press. Liu, Xinru (2010). The Silk Road in World History. Oxford University Press. ; (pbk). 
External links References to silk by Roman and Byzantine writers A series of maps depicting the global trade in silk History of traditional silk in martial arts uniforms Raising silkworms in classrooms for educational purposes (with photos) New thread in fabric of insect silks (physorg.com) Animal glandular products Biomaterials Chinese inventions Insect products Silk Road Woven fabrics
Silk
[ "Physics", "Biology" ]
7,521
[ "Biomaterials", "Materials", "Matter", "Medical technology" ]
51,512
https://en.wikipedia.org/wiki/Etiquette
Etiquette () is the set of norms of personal behaviour in polite society, usually occurring in the form of an ethical code of the expected and accepted social behaviours that accord with the conventions and norms observed and practised by a society, a social class, or a social group. In modern English usage, the French word (label and tag) dates from the year 1750. History In , the Ancient Egyptian vizier Ptahhotep wrote The Maxims of Ptahhotep (), a didactic book of precepts extolling civil virtues such as truthfulness, self-control, and kindness towards other people. Recurrent thematic motifs in the maxims include learning by listening to other people, being mindful of the imperfection of human knowledge, that avoiding open conflict whenever possible should not be considered weakness, and that the pursuit of justice should be foremost. Yet, in human affairs, the command of a god ultimately prevails in all matters. Some of Ptahhotep's maxims indicate a person's correct behaviours in the presence of great personages (political, military, religious), and instructions on how to choose the right master and how to serve him. Other maxims teach the correct way to be a leader through openness and kindness, that greed is the base of all evil and should be guarded against, and that generosity towards family and friends is praiseworthy. Confucius () was a Chinese intellectual and philosopher whose works emphasized personal and governmental morality, correctness of social relationships, the pursuit of justice in personal dealings, and sincerity in all personal relations. Baldassare Castiglione (), count of Casatico, was an Italian courtier and diplomat, soldier, and author of The Book of the Courtier (1528), an exemplar courtesy book dealing with questions of the etiquette and morality of the courtier during the Italian Renaissance. Louis XIV (1638–1715), King of France, used a codified etiquette to tame the French nobility and assert his supremacy as the absolute monarch of France. In consequence, the ceremonious royal court favourably impressed foreign dignitaries whom the king received at the seat of French government, the Palace of Versailles, to the south-west of Paris. Politeness In the 18th century, during the Age of Enlightenment, the adoption of etiquette was a self-conscious process for acquiring the conventions of politeness and the normative behaviours (charm, manners, demeanour) which symbolically identified the person as a genteel member of the upper class. To identify with the social élite, the upwardly mobile middle class and the bourgeoisie adopted the behaviours and the artistic preferences of the upper class. To that end, socially ambitious people of the middle classes occupied themselves with learning, knowing, and practising the rules of social etiquette, such as the arts of elegant dress and gracious conversation, when to show emotion, and courtesy with and towards women. 
In the early 18th century, Anthony Ashley-Cooper, 3rd Earl of Shaftesbury, wrote influential essays that defined politeness as the art of being pleasing in company; and discussed the function and nature of politeness in the social discourse of a commercial society: Periodicals, such as The Spectator, a daily publication founded in 1711 by Joseph Addison and Richard Steele, regularly advised their readers on the etiquette required of a gentleman, a man of good and courteous conduct; their stated editorial goal was "to enliven morality with wit, and to temper wit with morality… to bring philosophy out of the closets and libraries, schools and colleges, to dwell in clubs and assemblies, at tea-tables and coffeehouses"; to which end, the editors published articles written by educated authors, which provided topics for civil conversation, and advice on the requisite manners for carrying a polite conversation, and for managing social interactions. Conceptually allied to etiquette is the notion of civility (social interaction characterised by sober and reasoned debate) which for socially ambitious men and women also became an important personal quality to possess for social advancement. In the event, gentlemen's clubs, such as Harrington's Rota Club, published an in-house etiquette that codified the civility expected of the members. Besides The Spectator, other periodicals sought to infuse politeness into English coffeehouse conversation, the editors of The Tatler were explicit that their purpose was the reformation of English manners and morals; to those ends, etiquette was presented as the virtue of morality and a code of behaviour. In the mid-18th century, the first, modern English usage of etiquette (the conventional rules of personal behaviour in polite society) was by Philip Stanhope, 4th Earl of Chesterfield, in the book Letters to His Son on the Art of Becoming a Man of the World and a Gentleman (1774), a correspondence of more than 400 letters written from 1737 until the death of his son, in 1768; most of the letters were instructive, concerning varied subjects that a worldly gentleman should know. The letters were first published in 1774, by Eugenia Stanhope, the widow of the diplomat Philip Stanhope, Chesterfield's bastard son. Throughout the correspondence, Chesterfield endeavoured to decouple the matter of social manners from conventional morality, with perceptive observations that pragmatically argue to Philip that mastery of etiquette was an important means for social advancement, for a man such as he. Chesterfield's elegant, literary style of writing epitomised the emotional restraint characteristic of polite social intercourse in 18th-century society: In the 19th century, Victorian era (1837–1901) etiquette developed into a complicated system of codified behaviours, which governed the range of manners in society—from the proper language, style, and method for writing letters, to correctly using cutlery at table, and to the minute regulation of social relations and personal interactions between men and women and among the social classes. Manners Sociological perspectives In a society, manners are described as either good manners or as bad manners to indicate whether a person's behaviour is acceptable to the cultural group. As such, manners enable ultrasociality and are integral to the functioning of the social norms and conventions that are informally enforced through self-regulation. 
The perspectives of sociology indicate that manners are a means for people to display their social status, and a means of demarcating, observing, and maintaining the boundaries of social identity and of social class. In The Civilizing Process (1939), sociologist Norbert Elias said that manners arose as a product of group living, and persist as a way of maintaining social order. Manners proliferated during the Renaissance in response to the development of the 'absolute state'—the progression from small-group living to large-group living characterised by the centralized power of the State. The rituals and manners associated with the royal court of England during that period were closely bound to a person's social status. Manners demonstrate a person's position within a social network, and a person's manners are a means of negotiation from that social position. From the perspective of public health, in The Healthy Citizen (1995), Alana R. Petersen and Deborah Lupton said that manners assisted the diminishment of the social boundaries that existed between the public sphere and the private sphere of a person's life, and so gave rise to "a highly reflective self, a self who monitors his or her behavior with due regard for others with whom he or she interacts, socially"; and that "the public behavior of individuals came to signify their social standing; a means of presenting the self and of evaluating others, and thus the control of the outward self was vital." Sociologist Pierre Bourdieu applied the concept of habitus to define the societal functions of manners. The habitus is the set of mental attitudes, personal habits, and skills that a person possesses—his or her dispositions of character that are neither self-determined, nor pre-determined by the external environment, but which are produced and reproduced by social interactions—and are "inculcated through experience and explicit teaching", yet tend to function at the subconscious level. Manners are likely to be a central part of the dispositions that guide a person's ability to decide upon socially-compliant behaviours. Anthropologic perspective In Purity and Danger: An Analysis of Concepts of Pollution and Taboo (2003) the anthropologist Mary Douglas said that manners, social behaviors, and group rituals enable the local cosmology to remain ordered and free from those things that may pollute or defile the integrity of the culture. Ideas of pollution, defilement, and disgust are attached to the margins of socially acceptable behaviour in order to curtail unacceptable behaviour, and so maintain "the assumptions by which experience is controlled" within the culture. Evolutionary perspectives In studying the expression of emotion by humans and animals, naturalist Charles Darwin noted the universality of facial expressions of disgust and shame among infants and blind people, and concluded that the emotional responses of shame and disgust are innate behaviours. Public health specialist Valerie Curtis said that the development of facial responses was concomitant with the development of manners, which are behaviours with an evolutionary role in preventing the transmission of diseases, thus, people who practise personal hygiene and politeness will most benefit from membership in their social group, and so stand the best chance of biological survival, by way of opportunities for reproduction. 
From the study of the evolutionary bases of prejudice, social psychologists Catherine Cottrell and Steven Neuberg said that human behavioural responses to 'otherness' might enable the preservation of manners and social norms. The feeling of "foreignness"—which people experience in their first social interaction with someone from another culture—might partly serve an evolutionary function: 'Group living surrounds one with individuals [who are] able to physically harm fellow group members, to spread contagious disease, or to "free ride" on their efforts'; therefore, a commitment to sociality is a risk: 'If threats, such as these, are left unchecked, the costs of sociality will quickly exceed its benefits. Thus, to maximize the returns on group "living", individual group members should be attuned to others' features or behaviors.' Therefore, people who possess the social traits common to the cultural group are to be trusted, and people without the common social traits are to be distrusted as 'others', and thus treated with suspicion or excluded from the group. That pressure of social exclusivity, born from the shift towards communal living, excluded uncooperative people and persons with poor personal hygiene. The threat of social exclusion led people to avoid personal behaviours that might embarrass the group or that might provoke revulsion among the group. To demonstrate the transmission of social conformity, anthropologists Joseph Henrich and Robert Boyd developed a behavioural model in which manners are a means of mitigating social differences, curbing undesirable personal behaviours, and fostering co-operation within the social group. Natural selection favoured the acquisition of genetically transmitted mechanisms for learning, thereby increasing a person's chances for acquiring locally adaptive behaviours: "Humans possess a reliably developing neural encoding that compels them both to punish individuals who violate group norms (common beliefs or practices) and [to] punish individuals who do not punish norm-violators." Categories Social manners are in three categories: (i) manners of hygiene, (ii) manners of courtesy, and (iii) manners of cultural norm. Each category accounts for an aspect of the functional role that manners play in a society. The categories of manners are based upon the social outcome of behaviour, rather than upon the personal motivation of the behaviour. As a means of social management, the rules of etiquette encompass most aspects of human social interaction; thus, a rule of etiquette reflects an underlying ethical code and a person's fashion and social status. Manners of hygiene concern avoiding the transmission of disease, and usually are taught by the parent to the child by way of parental discipline, positive behavioural enforcement of body-fluid continence (toilet training), and the avoidance of and removal of disease vectors that risk the health of children. Society expects that by adulthood the manners for personal hygiene have become a second-nature behaviour, violations of which shall provoke physical and moral disgust. Hygiene etiquette during the COVID-19 pandemic included social distancing and warnings against public spitting. Manners of courtesy concern self-control and good-faith behaviour, by which a person gives priority to the interests of another person, and priority to the interests of a socio-cultural group, in order to be a trusted member of that group. 
Courtesy manners maximize the benefits of group-living by regulating the nature of social interactions; however, the performance of courtesy manners occasionally interferes with the avoidance of communicable disease. Generally, parents teach courtesy manners in the same way they teach hygiene manners, but the child also learns manners directly (by observing the behaviour of other people in their social interactions) and by imagined social interactions (through the executive functions of the brain). A child usually learns courtesy manners at an older age than when he or she was toilet trained (taught hygiene manners), because learning the manners of courtesy requires that the child be self-aware and conscious of social position, which then facilitate understanding that violations (accidental or deliberate) of social courtesy will provoke peer disapproval within the social group. Manners of cultural norms concern the social rules by which a person establishes his or her identity and membership in a given socio-cultural group. In abiding the manners of cultural norm, a person demarcates socio-cultural identity and establishes social boundaries, which then identify whom to trust and whom to distrust as 'the other'. Cultural norm manners are learnt through the enculturation with and the routinisation of 'the familiar', and through social exposure to the 'cultural otherness' of people identified as foreign to the group. Transgressions and flouting of the manners of cultural norm usually result in the social alienation of the transgressor. The nature of culture-norm manners allows a high level of intra-group variability, but the manners usually are common to the people who identify with the given socio-cultural group. Courtesy books 16th century The Book of the Courtier (1528), by Baldassare Castiglione, identified the manners and the morals required by socially ambitious men and women for success in a royal court of the Italian Renaissance (14th–17th c.); as an etiquette text, The Courtier was an influential courtesy book in 16th-century Europe. On Civility in Children (1530), by Erasmus of Rotterdam, instructs boys in the means of becoming a young man; how to walk and talk, speak and act in the company of adults. The practical advice for acquiring adult self-awareness includes explanations of the symbolic meanings—for adults—of a boy's body language when he is fidgeting and yawning, scratching and bickering. On completing Erasmus's curriculum of etiquette, the boy has learnt that civility is the point of good manners: the adult ability to 'readily ignore the faults of others, but avoid falling short, yourself,' in being civilised. 20th century Etiquette in Society, in Business, in Politics, and at Home (1922), by Emily Post documents the "trivialities" of desirable conduct in daily life, and provided pragmatic approaches to the practice of good manners—the social conduct expected and appropriate for the events of life, such as a baptism, a wedding, and a funeral. As didactic texts, books of etiquette (the conventional rules of personal behaviour in polite society) usually feature explanatory titles, such as The Ladies' Book of Etiquette, and Manual of Politeness: A Complete Hand Book for the Use of the Lady in Polite Society (1860), by Florence Hartley; Amy Vanderbilt's Complete Book of Etiquette (1957); Miss Manners' Guide to Excruciatingly Correct Behavior (1979), by Judith Martin; and Peas & Queues: The Minefield of Modern Manners (2013), by Sandi Toksvig. 
Such books present ranges of civility, socially acceptable behaviours for their respective times. Each author cautions the reader that to be a well-mannered person they must practise good manners in their public and private lives. The How Rude! comic-book series addresses and discusses adolescent perspectives and questions of etiquette, social manners, and civility. Business In commerce, the purpose of etiquette is to facilitate the social relations necessary for realising business transactions; in particular, social interactions among workers, and between labour and management. Business etiquette varies by culture, such as the Chinese and Australian approaches to conflict resolution. The Chinese business philosophy is based upon (personal connections), whereby person-to-person negotiation resolves difficult matters, whereas Australian business philosophy relies upon attorneys-at-law to resolve business conflicts through legal mediation; thus, adjusting to the etiquette and professional ethics of another culture is an element of culture shock for businesspeople. In 2011, etiquette trainers formed the Institute of Image Training and Testing International (IITTI) a non-profit organisation to train personnel departments in measuring and developing and teaching social skills to employees, by way of education in the rules of personal and business etiquette, in order to produce business workers who possess standardised manners for successfully conducting business with people from other cultures. In the retail branch of commerce, the saying "the customer is always right" summarises the profit-orientation of good manners, between the buyer and the seller of goods and services: See also Etiquette and language Acrolect Aizuchi Basilect Honorific title Honorifics (linguistics) - politeness markers Insult Netiquette Polite fiction Prescription and description Profanity Semantics Slang Slang dictionary Standard language Style of address T–V distinction What happens on tour, stays on tour Etiquette and letters Airmail etiquette Email etiquettes Missed call#Social usage Etiquette and society Aliénor de Poitiers early documentor of French etiquette Code of conduct Church etiquette Cigar etiquette Cinema etiquette Civics Concert etiquette Dance etiquette Debrett's Diplomacy Disability etiquette Drinking etiquette Driving etiquette Escalator etiquette Faux pas, Faux pas derived from Chinese pronunciation Golf etiquette Intercultural competence Levée, the English version of Louis XIV's morning rising etiquette (lever) at Versailles. Military courtesy Order of precedence Protocol Respect Rules of Civility and Decent Behaviour In Company and Conversation by George Washington Rudeness Social graces Social Norms Table manners Eating utensil etiquette Technology Cell phone Gaming Work Etiquette Zigzag method Worldwide etiquette Africa Asia Chinese dining Indian dining Indonesia Japan Jewish customs of etiquette Myanmar Pakistan South Korea Australia and New Zealand Europe Dutch customs and etiquette Latin America Middle East North America Islamic etiquette Islamic toilet etiquette References Further reading – proper etiquette for men and women and Emily Post's book Etiquette in Society in Business in Politics and at Home were the U.S. etiquette bibles of the 1950s–1970s era. Habits Popular culture Social concepts
Etiquette
[ "Biology" ]
4,008
[ "Etiquette", "Behavior", "Human behavior", "Habits" ]
51,521
https://en.wikipedia.org/wiki/Low-density%20lipoprotein
Low-density lipoprotein (LDL) is one of the five major groups of lipoprotein that transport all fat molecules around the body in extracellular water. These groups, from least dense to most dense, are chylomicrons (aka ULDL by the overall density naming convention), very low-density lipoprotein (VLDL), intermediate-density lipoprotein (IDL), low-density lipoprotein (LDL) and high-density lipoprotein (HDL). LDL delivers fat molecules to cells. LDL has been associated with the progression of atherosclerosis. Overview Lipoproteins transfer lipids (fats) around the body in the extracellular fluid, making fats available to body cells for receptor-mediated endocytosis. Lipoproteins are complex particles composed of multiple proteins, typically 80–100 proteins per particle (organized by a single apolipoprotein B for LDL and the larger particles). A single LDL particle is about 220–275 angstroms in diameter, typically transporting 3,000 to 6,000 fat molecules per particle, and varying in size according to the number and mix of fat molecules contained within. The lipids carried include all fat molecules with cholesterol, phospholipids, and triglycerides dominant; amounts of each vary considerably. A good clinical interpretation of blood lipid levels is that high LDL, in combination with a high amount of triglycerides, which indicates a high likelihood of the LDL being oxidised, is associated with increased risk of cardiovascular diseases. Biochemistry Structure Each native LDL particle enables emulsification, i.e. surrounding the fatty acids being carried, enabling these fats to move around the body within the water outside cells. Each particle contains a single apolipoprotein B-100 molecule (Apo B-100, a protein that has 4536 amino acid residues and a mass of 514 kDa), along with 80 to 100 additional ancillary proteins. Each LDL has a highly hydrophobic core consisting of polyunsaturated fatty acid known as linoleate and hundreds to thousands (about 1500 commonly cited as an average) of esterified and unesterified cholesterol molecules. This core also carries varying numbers of triglycerides and other fats and is surrounded by a shell of phospholipids and unesterified cholesterol, as well as the single copy of Apo B-100. LDL particles are approximately 22 nm (0.00000087 in.) to 27.5 nm in diameter and have a mass of about 3 million daltons. Since LDL particles contain a variable and changing number of fatty acid molecules, there is a distribution of LDL particle mass and size. Determining the structure of LDL has been difficult for biochemists because of its heterogeneous structure. However, the structure of LDL at human body temperature in native condition, with a resolution of about 16 Angstroms using cryogenic electron microscopy, has been described in 2011. Physiology LDL particles are formed when triglycerides are removed from VLDL by the lipoprotein lipase enzyme (LPL) and they become smaller and denser (i.e. fewer fat molecules with same protein transport shell), containing a higher proportion of cholesterol esters. Transport into the cell When a cell requires additional cholesterol (beyond its current internal HMGCoA production pathway), it synthesizes the necessary LDL receptors as well as PCSK9, a proprotein convertase that marks the LDL receptor for degradation. LDL receptors are inserted into the plasma membrane and diffuse freely until they associate with clathrin-coated pits. 
When LDL receptors bind LDL particles in the bloodstream, the clathrin-coated pits are endocytosed into the cell. Vesicles containing LDL receptors bound to LDL are delivered to the endosome. In the presence of low pH, such as that found in the endosome, LDL receptors undergo a conformation change, releasing LDL. LDL is then shipped to the lysosome, where cholesterol esters in the LDL are hydrolysed. LDL receptors are typically returned to the plasma membrane, where they repeat this cycle. If LDL receptors bind to PCSK9, however, transport of LDL receptors is redirected to the lysosome, where they are degraded. Role in the innate immune system LDL interferes with the quorum sensing system that upregulates genes required for invasive Staphylococcus aureus infection. The mechanism of antagonism entails binding apolipoprotein B to a S. aureus autoinducer pheromone, preventing signaling through its receptor. Mice deficient in apolipoprotein B are more susceptible to invasive bacterial infection. LDL size patterns LDL can be grouped based on its size: large low density LDL particles are described as pattern A, and small high density LDL particles are pattern B. Pattern B has been associated by some with a higher risk for coronary heart disease. This is thought to be because the smaller particles are more easily able to penetrate the endothelium of arterial walls. Pattern I, for intermediate, indicates that most LDL particles are very close in size to the normal gaps in the endothelium (26 nm). According to one study, sizes 19.0–20.5 nm were designated as pattern B and LDL sizes 20.6–22 nm were designated as pattern A. Other studies have shown no such correlation at all. Some evidence suggests the correlation between Pattern B and coronary heart disease is stronger than the correspondence between the LDL number measured in the standard lipid profile test. Tests to measure these LDL subtype patterns have been more expensive and not widely available, so the common lipid profile test is used more often. There has also been noted a correspondence between higher triglyceride levels and higher levels of smaller, denser LDL particles and alternately lower triglyceride levels and higher levels of the larger, less dense ("buoyant") LDL. With continued research, decreasing cost, greater availability and wider acceptance of other lipoprotein subclass analysis assay methods, including NMR spectroscopy, research studies have continued to show a stronger correlation between human clinically obvious cardiovascular events and quantitatively measured particle concentrations. Oxidized LDL Oxidized LDL is a general term for LDL particles with oxidatively modified structural components. As a result, from free radical attack, both lipid and protein parts of LDL can be oxidized in the vascular wall. Besides the oxidative reactions taking place in vascular wall, oxidized lipids in LDL can also be derived from oxidized dietary lipids. Oxidized LDL is known to associate with the development of atherosclerosis, and it is therefore widely studied as a potential risk factor of cardiovascular diseases. Atherogenicity of oxidized LDL has been explained by lack of recognition of oxidation-modified LDL structures by the LDL receptors, preventing the normal metabolism of LDL particles and leading eventually to development of atherosclerotic plaques. Of the lipid material contained in LDL, various lipid oxidation products are known as the ultimate atherogenic species. 
Acting as a transporter of these injurious molecules is another mechanism by which LDL can increase the risk of atherosclerosis. Testing Blood tests commonly report LDL-C: the amount of cholesterol which is estimated to be contained within LDL particles, on average, using a formula, the Friedewald equation. In a clinical context, mathematically calculated estimates of LDL-C are commonly used as an estimate of how much low density lipoproteins are driving progression of atherosclerosis. The problem with this approach is that LDL-C values are commonly discordant with both direct measurements of LDL particles and actual rates of atherosclerosis progression. Direct LDL measurements are also available and better reveal individual issues but are less often promoted or done due to slightly higher costs and being available from only a couple of laboratories in the United States. In 2008, the ADA and ACC recognized direct LDL particle measurement by NMR as superior for assessing individual risk of cardiovascular events. Estimation of LDL particles via cholesterol content Chemical measures of lipid concentration have long been the most-used clinical measurement, not because they have the best correlation with individual outcome, but because these lab methods are less expensive and more widely available. The lipid profile does not measure LDL particles. It only estimates them using the Friedewald equation, by subtracting the amount of cholesterol associated with other particles, such as HDL and VLDL, assuming a prolonged fasting state: L = C - H - kT, where H is HDL cholesterol, L is LDL cholesterol, C is total cholesterol, T is triglycerides, and k is 0.20 if the quantities are measured in mg/dL and 0.45 if in mmol/L (a short computational sketch of this estimate appears below). There are limitations to this method, most notably that samples must be obtained after a 12 to 14 h fast and that LDL-C cannot be calculated if plasma triglyceride is >4.52 mmol/L (400 mg/dL). Even at triglyceride levels of 2.5 to 4.5 mmol/L, this formula is considered inaccurate. If both total cholesterol and triglyceride levels are elevated then a modified formula, with quantities in mg/dL, may be used. This formula provides an approximation with fair accuracy for most people, assuming the blood was drawn after fasting for about 14 hours or longer, but it does not reveal the actual LDL particle concentration, because the percentage of fat molecules within LDL particles that are cholesterol varies, by as much as 8:1. Several published formulas address this inaccuracy in LDL-C estimation, which stems from the assumption that VLDL cholesterol (VLDL-C) is always one-fifth of the triglyceride concentration; they do so by using an adjustable factor or a regression equation. Few studies have compared the LDL-C values derived from such formulas with values obtained by the direct enzymatic method. The direct enzymatic method has been found to be accurate and should be the test of choice in clinical situations; in resource-poor settings, the formula-based estimate may be considered. However, the concentration of LDL particles, and to a lesser extent their size, has a stronger and more consistent correlation with individual clinical outcome than the amount of cholesterol within LDL particles, even if the LDL-C estimation is approximately correct. There is increasing evidence and recognition of the value of more targeted and accurate measurements of LDL particles.
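As a concrete illustration of the Friedewald estimate just described, the sketch below implements L = C - H - kT with the unit-dependent factor k and the triglyceride validity limit quoted above; the function name and the example numbers are illustrative assumptions, not values taken from any guideline.

```python
# Minimal sketch of the Friedewald LDL-C estimate described above.
# Variable roles follow the text: C = total cholesterol, H = HDL cholesterol,
# T = triglycerides; k and the triglyceride cut-off depend on the units used.
# Function name and example inputs are illustrative, not clinical guidance.

def friedewald_ldl_c(c_total: float, h_hdl: float, t_trig: float,
                     units: str = "mg/dL") -> float:
    """Estimate LDL-C as C - H - k*T; not valid at high triglyceride levels."""
    if units == "mg/dL":
        k, tg_limit = 0.20, 400.0
    elif units == "mmol/L":
        k, tg_limit = 0.45, 4.52
    else:
        raise ValueError("units must be 'mg/dL' or 'mmol/L'")
    if t_trig > tg_limit:
        raise ValueError("Friedewald estimate is not defined above the triglyceride limit")
    return c_total - h_hdl - k * t_trig

print(friedewald_ldl_c(200, 50, 150))              # 120.0 mg/dL
print(friedewald_ldl_c(5.2, 1.3, 1.7, "mmol/L"))   # ~3.14 mmol/L
```

Note that such a formula yields only an estimated cholesterol mass, not a particle count; the particle-based measurements discussed next avoid that limitation.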
Specifically, LDL particle number (concentration), and to a lesser extent size, have shown slightly stronger correlations with atherosclerotic progression and cardiovascular events than obtained using chemical measures of the amount of cholesterol carried by the LDL particles. It is possible that the LDL cholesterol concentration can be low, yet LDL particle number high and cardiovascular events rates are high. Correspondingly, it is possible that LDL cholesterol concentration can be relatively high, yet LDL particle number low and cardiovascular events are also low. Normal ranges In the US, the American Heart Association, NIH, and NCEP provide a set of guidelines for fasting LDL-Cholesterol levels, estimated or measured, and risk for heart disease. As of about 2005, these guidelines were: Over time, with more clinical research, these recommended levels keep being reduced because LDL reduction, including to abnormally low levels, was the most effective strategy for reducing cardiovascular death rates in one large double blind, randomized clinical trial of men with hypercholesterolemia; far more effective than coronary angioplasty/stenting or bypass surgery. For instance, for people with known atherosclerosis diseases, the 2004 updated American Heart Association, NIH and NCEP recommendations are for LDL levels to be lowered to less than 70 mg/dL. This low level of less than 70 mg/dL was recommended for primary prevention of 'very-high risk patients' and in secondary prevention as a 'reasonable further reduction'. This position was disputed. Statin drugs involved in such clinical trials have numerous physiological effects beyond simply the reduction of LDL levels. From longitudinal population studies following progression of atherosclerosis-related behaviors from early childhood into adulthood, the usual LDL in childhood, before the development of fatty streaks, is about 35 mg/dL. However, all the above values refer to chemical measures of lipid/cholesterol concentration within LDL, not measured low-density lipoprotein concentrations, the accurate approach. A study was conducted measuring the effects of guideline changes on LDL cholesterol reporting and control for diabetes visits in the US from 1995 to 2004. It was found that although LDL cholesterol reporting and control for diabetes and coronary heart disease visits improved continuously between 1995 and 2004, neither the 1998 ADA guidelines nor the 2001 ATP III guidelines increased LDL cholesterol control for diabetes relative to coronary heart disease. Direct measurement of LDL particle concentrations There are several competing methods for measurement of lipoprotein particle concentrations and size. The evidence is that the NMR methodology (developed, automated & greatly reduced in costs while improving accuracy as pioneered by Jim Otvos and associates) results in a 22-25% reduction in cardiovascular events within one year, contrary to the longstanding claims by many in the medical industry that the superiority over existing methods was weak, even by statements of some proponents. Since the later 1990s, because of the development of NMR measurements, it has been possible to clinically measure lipoprotein particles at lower cost [under $80 US (including shipping) & is decreasing; versus the previous costs of >$400 to >$5,000] and higher accuracy. There are two other assays for LDL particles, however, like LDL-C, most only estimate LDL particle concentrations. 
Direct LDL particle measurement by NMR was mentioned by the ADA and ACC, in a 28 March 2008 joint consensus statement, as having advantages for predicting individual risk of atherosclerosis disease events, but the statement noted that the test is less widely available, is more expensive [about $13.00 US (2015 without insurance coverage) from some labs which use the Vantera Analyzer]. Debate continues that it is "...unclear whether LDL particle size measurements add value to measurement of LDL-particle concentration", though outcomes have always tracked LDL particle, not LDL-C, concentrations. Using NMR, the total LDL particle concentrations, in nmol/L plasma, are typically subdivided by percentiles referenced to the 5,382 men and women, not on any lipid medications, who are participating in the MESA trial. LDL particle concentration can also be measured by measuring the concentration of the protein ApoB, based on the generally accepted principle that each LDL or VLDL particle carries one ApoB molecule. Optimal ranges The LDL particle concentrations are typically categorized by percentiles, <20%, 20–50%, 50th–80th%, 80th–95% and >95% groups of the people participating and being tracked in the MESA trial, a medical research study sponsored by the United States National Heart, Lung, and Blood Institute. The lowest incidence of atherosclerotic events over time occurs within the <20% group, with increased rates for the higher groups. Multiple other measures, including particle sizes, small LDL particle concentrations, large total and HDL particle concentrations, along with estimations of insulin resistance pattern and standard cholesterol lipid measurements (for comparison of the plasma data with the estimation methods discussed above) are also routinely provided. Lowering LDL-cholesterol The mevalonate pathway serves as the basis for the biosynthesis of many molecules, including cholesterol. The enzyme 3-hydroxy-3-methylglutaryl coenzyme A reductase (HMG CoA reductase) is an essential component and performs the first of 37 steps within the cholesterol production pathway, and is present in every animal cell. LDL-C is not a measurement of actual LDL particles. LDL-C is only an estimate (not measured from the individual's blood sample) of how much cholesterol is being transported by all LDL particles, which is either a smaller concentration of large particles or a high concentration of small particles. LDL particles carry many fat molecules (typically 3,000 to 6,000 fat molecules per LDL particle); this includes cholesterol, triglycerides, phospholipids and others. Thus even if the hundreds to thousands of cholesterol molecules within an average LDL particle were measured, this does not reflect the other fat molecules or even the number of LDL particles. Pharmaceutical PCSK9 inhibitors, in clinical trials, by several companies, are more effective for LDL reduction than the statins, including statins alone at high dose (though not necessarily the combination of statins plus ezetimibe). Statins reduce high levels of LDL particles by inhibiting the enzyme HMG-CoA reductase in cells, the rate-limiting step of cholesterol synthesis. To compensate for the decreased cholesterol availability, synthesis of LDL receptors (including hepatic) is increased, resulting in an increased clearance of LDL particles from the extracellular water, including of the blood. Ezetimibe reduces intestinal absorption of cholesterol, thus can reduce LDL particle concentrations when combined with statins. 
Niacin (B3) lowers LDL by selectively inhibiting hepatic diacylglycerol acyltransferase 2, reducing triglyceride synthesis and VLDL secretion through the receptors HM74 and HM74A (GPR109A). Several CETP inhibitors have been researched to improve HDL concentrations, but so far, despite dramatically increasing HDL-C, they have not had a consistent track record in reducing atherosclerosis disease events. Some have increased mortality rates compared with placebo. Clofibrate is effective at lowering cholesterol levels, but has been associated with significantly increased cancer and stroke mortality, despite lowered cholesterol levels. Other developed and tested fibrates, e.g. fenofibric acid, have had a better track record and are primarily promoted for lowering VLDL particles (triglycerides), not LDL particles, yet can help some in combination with other strategies. Some tocotrienols, especially delta- and gamma-tocotrienols, are being promoted as non-prescription alternatives to statins to treat high cholesterol, having been shown in vitro to have an effect. In particular, gamma-tocotrienol appears to be another HMG-CoA reductase inhibitor, and can reduce cholesterol production. As with statins, this decrease in intra-hepatic (liver) LDL levels may induce hepatic LDL receptor up-regulation, also decreasing plasma LDL levels. As always, a key issue is how the benefits and complications of such agents compare with statins, molecular tools that have been analyzed in a large number of human research studies and clinical trials since the mid-1970s. Phytosterols are widely recognized as having a proven LDL-cholesterol-lowering efficacy. A 2018 review found a dose-response relationship for phytosterols, with intakes of 1.5 to 3 g/day lowering LDL-C by 7.5% to 12%, but reviews as of 2017 had found no data indicating that the consumption of phytosterols may reduce the risk of CVD. Current supplemental guidelines for reducing LDL recommend doses of phytosterols in the 1.6-3.0 grams per day range (Health Canada, EFSA, ATP III, FDA), with a 2009 meta-analysis demonstrating an 8.8% reduction in LDL-cholesterol at a mean dose of 2.15 grams per day. Lifestyle LDL cholesterol can be lowered through dietary intervention by limiting foods with saturated fat and avoiding foods with trans fat. Saturated fats are found in meat products (including poultry), full-fat dairy, eggs, and refined tropical oils like coconut and palm. Added trans fat (in the form of partially hydrogenated oils) has been banned in the US since 2021. However, trans fat can still be found in red meat and dairy products as it is produced in small amounts by ruminants such as sheep and cows. LDL cholesterol can also be lowered by increasing consumption of soluble fiber and plant-based foods. Another lifestyle approach to reduce LDL cholesterol has been minimizing total body fat, in particular fat stored inside the abdominal cavity (visceral body fat). Visceral fat, which is more metabolically active than subcutaneous fat, has been found to produce many enzymatic signals, e.g. resistin, which increase insulin resistance and circulating VLDL particle concentrations, thus both increasing LDL particle concentrations and accelerating the development of diabetes mellitus. Research Gene editing In 2021, scientists demonstrated that CRISPR gene editing can decrease blood levels of LDL cholesterol by 60% for months in Macaca fascicularis monkeys via knockout of PCSK9 in the liver.
See also Notes and references External links Fat (LDL) Degradation: PMAP The Proteolysis Map-animation Adult Treatment Panel III Full Report ATP III Update 2004 Cardiology Lipid disorders Lipoproteins
Low-density lipoprotein
[ "Chemistry" ]
4,595
[ "Lipid biochemistry", "Lipoproteins" ]
51,526
https://en.wikipedia.org/wiki/Heinrich%20Anton%20de%20Bary
Heinrich Anton de Bary (26 January 1831 – 19 January 1888) was a German surgeon, botanist, microbiologist, and mycologist (fungal systematics and physiology). He is considered a founding father of plant pathology (phytopathology) as well as the founder of modern mycology. His extensive and careful studies of the life history of fungi and contribution to the understanding of algae and higher plants established landmarks in biology. Early life and education Born in Frankfurt to physician August Theodor de Bary (1802–1873) and Emilie Meyer de Bary, Anton de Bary was one of ten children. He joined excursions of naturalists who collected local specimens. De Bary's interest was further inspired by George Fresenius, a physician who also taught botany at the Senckenberg Institute. Fresenius was an expert on thallophytes. In 1848, de Bary graduated from a gymnasium at Frankfurt, and began to study medicine at Heidelberg, continuing at Marburg. In 1850, he went to Berlin to continue pursuing his study of medicine, and also continued to explore and develop his interest in plant science. Although he received his degree in medicine, his dissertation at Berlin in 1853 was titled "De plantarum generatione sexuali", a botanical subject. He also published a book on fungi and the causes of rusts and smuts. Early career After graduation, de Bary briefly practiced medicine in Frankfurt, but he was drawn back to botany and became Privatdozent in botany at the University of Tübingen, where he worked for a while as an assistant to Hugo von Mohl (1805–1872). In 1855, he succeeded the botanist Karl Wilhelm von Nägeli (1818–1891) at Freiburg, where he established the most advanced botanical laboratory at the time and directed many students. Later career and research In 1867, de Bary moved to the University of Halle as successor to Professor Diederich Franz Leonhard von Schlechtendal, who, with Hugo von Mohl, had co-founded the pioneering botanical journal Botanische Zeitung. De Bary became its coeditor and later sole editor. As an editor of and contributor to the journal, he exercised great influence upon the development of botany. Following the Franco-Prussian War (1870–1871), de Bary took the position of professor of botany at the University of Strasbourg, where he was the director of the Jardin botanique de l'Université de Strasbourg, and founder of its New Garden. He was also elected as the inaugural rector of the reorganized university. He conducted much research in the university botanical institute, attracted many international students, and made a large contribution to the development of botany. His 1884 book Vergleichende Morphologie und Biologie der Pilze, Mycetozoen und Bakterien was translated into English as Comparative Morphology and Biology of the Fungi, Mycetozoa, and Bacteria (Clarendon Press, 1887). Fungi and plant diseases De Bary was devoted to the study of the life history of fungi. At that time, various fungi were still considered to arise via spontaneous generation. He proved that pathogenic fungi were like other plants, and not the products of secretions from sick cells. In de Bary's time, potato late blight had caused sweeping crop devastation and economic loss. The origin of such plant diseases was not known at that time. De Bary studied the pathogen Phytophthora infestans (formerly Peronospora infestans) and elucidated its life cycle. Miles Joseph Berkeley (1803–1889) had insisted in 1841 that the oomycete found in potato blight caused the disease. 
Similarly, de Bary asserted that rust and smut fungi caused the pathological changes that affected diseased plants. He concluded that Uredinales and Ustilaginales were parasites. De Bary spent much time studying the morphology of fungi and noticed that certain forms that were classed as separate species were actually successive stages of development of the same organism. De Bary studied the developmental history of Myxomycetes (slime molds), and thought it was necessary to reclassify the lower animals. He coined the term Mycetozoa to include lower animals and slime molds. In his work on Myxomycetes (1858), he pointed out that at one stage of their life cycle (the plasmodial stage), they were nearly formless, motile masses of a substance that Félix Dujardin (1801–1860) had called sarcode (protoplasm). This is the fundamental basis of the protoplasmic theory of life. De Bary was the first to demonstrate sexuality in fungi. In 1858, he had observed conjugation in the alga Spirogyra, and in 1861, he described sexual reproduction in the fungus Peronospora sp. He saw the importance of observing pathogens throughout their whole life cycle and attempted to follow that practice in his studies of living host plants. Peronosporeae De Bary published his first work on potato blight fungi in 1861, and then spent more than 15 years studying Peronosporeae, particularly Phytophthora infestans (formerly Peronospora infestans) and Cystopus (Albugo), parasites of potato. In his published work of 1863 entitled "Recherches sur le developpement de quelques champignons parasites", he reported inoculating healthy potato leaves with spores of P. infestans. He observed that the mycelium penetrated the leaf and affected the tissue, forming conidia and the black spots characteristic of potato blight. He did similar experiments on tubers and potato stalks. He watched conidia in the soil and their infection of the tubers, observing that the mycelium could survive the cold winter in the tubers. Based on these studies, he concluded that the organisms were not being generated spontaneously. Puccinia graminis He did a thorough investigation of Puccinia graminis, the pathogen that produces rust in wheat, rye and other grains. He noticed that P. graminis produced reddish summer spores or "urediospores", and darker winter spores or "teleutospores". He inoculated the leaves of barberry (Berberis vulgaris) with sporidia from winter spores of wheat rust. The sporidia germinated, leading to the formation of aecia with yellow spores, the familiar symptoms of infection on the barberry. De Bary then placed aecidiospores on moisture-retaining slides and transferred them to the leaves of rye seedlings. In time, he observed the reddish summer spores appearing on the leaves. Sporidia from winter spores germinated only on barberry. De Bary thus clearly demonstrated that P. graminis lived upon different hosts at different stages of its development. He called this phenomenon "heteroecism", in contrast to "autoecism", in which development takes place in only one host. De Bary's discovery explained why the practice of eradicating barberry plants was important as a control for rust. Lichen De Bary also studied the formation of lichens, which are the result of an association between a fungus and an alga. He traced their stages of growth and reproduction and showed how adaptations helped them to survive conditions of drought and winter. 
In 1879 he coined the word "symbiosis", meaning "the living together of unlike organisms", in the publication "Die Erscheinung der Symbiose" (Strasbourg, 1879). He carefully studied the morphology of molds, yeasts, and fungi and effectively established mycology as an independent science. Influence De Bary's concepts and methods had a great impact on the fields of bacteriology and botany, making him one of the most influential bioscientists of the 19th century. He published more than 100 research papers. Many of his students later became distinguished botanists and microbiologists, including Sergei Winogradsky (1856–1953), William Gilson Farlow (1844–1919), and Pierre-Marie-Alexis Millardet (1838–1902). Personal life and death De Bary came from a noble Huguenot family from Wallonia, which was driven out by the Spanish Habsburgs under Emperor Charles V and had been established in Frankfurt since 1555. Anton's father and his brother Johann Jakob de Bary were respected doctors in Frankfurt. His mother was Caroline Emilie von Meyer (1805–1887), whose family produced two renowned scientists. De Bary married Antonie Einert (21 January 1831, Leipzig – 22 May 1892, Thann, Alsace–Lorraine) in 1861; they raised four children: Wilhelm, August, Marie and Hermann. Antonie was a talented artist and painter, particularly of plants, who contributed to her husband's scientific work. He died on 19 January 1888 in Strasbourg, of a tumor of the jaw, after undergoing extensive surgery. See also List of mycologists Wm. Theodore de Bary, American sinologist, a great-nephew References External links 1831 births 1888 deaths German phycologists 19th-century German botanists German surgeons German mycologists German entomologists Physicians from Frankfurt Heidelberg University alumni University of Marburg alumni Humboldt University of Berlin alumni Academic staff of the University of Tübingen Academic staff of the University of Freiburg Academic staff of the Martin Luther University of Halle-Wittenberg Academic staff of the University of Strasbourg Corresponding members of the Saint Petersburg Academy of Sciences Foreign members of the Royal Society Symbiosis Scientists from Frankfurt Members of the Göttingen Academy of Sciences and Humanities
Heinrich Anton de Bary
[ "Biology" ]
2,014
[ "Biological interactions", "Behavior", "Symbiosis" ]
51,537
https://en.wikipedia.org/wiki/Hydraulic%20fill
Hydraulic fill is a means of selectively emplacing soil or other materials using a stream of water. It is also a term used to describe the materials thus emplaced. Gravity, coupled with velocity control, is used to effect the selective deposition of the material. Where borrow pits containing suitable material are accessible at an elevation such that the earth can be sluiced to the fill after being washed from the bank by high-pressure nozzles, hydraulic fill is likely to be the most economical method of construction. Even when the source material lacks sufficient elevation, it can be elevated to the sluice by a dredge pump. In the construction of a hydraulic fill dam, the edges of the dam are defined by low embankments or dykes which are built upward as the fill progresses. The sluices are carried parallel to, and just inside of, these dykes. The sluices discharge their water-earth mixture at intervals, the water fanning out and flowing towards the central pool, which is maintained at the desired level by discharge control. While the mixture flows from the sluices, coarse material is deposited first and then finer material is deposited as the flow velocity is reduced towards the center of the dam (fine material has a slower terminal settling velocity and thus takes longer to settle; see Stokes' law). This fine material forms an impervious core to the dam. The water flow must be well controlled at all times, otherwise the central section may be bridged by tongues of coarse material which would facilitate seepage through the dam later. Hydraulic fill dams can be dangerous in areas of seismic activity due to the high susceptibility of the uncompacted, cohesion-less soils in them to liquefaction. The Lower San Fernando Dam is an example of a hydraulic fill dam that failed during an earthquake. In these situations, a dam built of compacted soil may be a better choice. Poorly built hydraulic fill dams pose a risk of catastrophic failure. The Fort Peck Dam is an example of a hydraulic fill dam that failed during construction, where the hydraulic filling process may have contributed to the failure. Hydraulic fill is also a term used in hard rock mining, where it describes the placement of finely ground mining wastes into underground stopes as a slurry, delivered through boreholes and pipes, to stabilize the voids. References Hydraulic engineering
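The grading effect described above, with coarse grains settling near the sluices while fines reach the central pool, follows directly from Stokes' law. The short calculation below is an illustrative sketch added to this article, not part of the original text; the grain sizes, grain density and water viscosity are assumed example values, and Stokes' law itself is strictly valid only for the finer fractions (for coarse sand it overestimates the settling velocity, but the ordering still makes the point).

g = 9.81        # gravitational acceleration, m/s^2
rho_f = 1000.0  # density of water, kg/m^3
mu = 1.0e-3     # dynamic viscosity of water, Pa*s
rho_p = 2650.0  # assumed grain density (quartz), kg/m^3

def stokes_settling_velocity(radius_m):
    # Stokes' law: v = (2/9) * (rho_p - rho_f) * g * r^2 / mu
    return (2.0 / 9.0) * (rho_p - rho_f) * g * radius_m ** 2 / mu

for label, radius in [("coarse sand, r = 0.25 mm", 0.25e-3),
                      ("silt, r = 0.01 mm", 0.01e-3),
                      ("clay, r = 0.001 mm", 0.001e-3)]:
    print(f"{label}: {stokes_settling_velocity(radius):.2e} m/s")

The several-orders-of-magnitude spread in settling velocity is what sorts the coarse fraction close to the dykes and lets the slow-settling fines accumulate as the impervious core.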
Hydraulic fill
[ "Physics", "Engineering", "Environmental_science" ]
470
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
51,549
https://en.wikipedia.org/wiki/Postal%20code
A postal code (also known locally in various English-speaking countries throughout the world as a postcode, post code, PIN or ZIP Code) is a series of letters or digits or both, sometimes including spaces or punctuation, included in a postal address for the purpose of sorting mail. The Universal Postal Union lists 160 countries which require the use of a postal code. Although postal codes are usually assigned to geographical areas, special codes are sometimes assigned to individual addresses or to institutions that receive large volumes of mail, such as government agencies and large commercial companies. One example is the French CEDEX system. Terms There are a number of synonyms for postal code; some are country-specific: CAP: The standard term in Italy; CAP is an acronym for 'postal expedition code'. CEP: The standard term in Brazil; CEP is an acronym for 'postal addressing code'. Eircode: The standard term in Ireland. NPA: The term used in French-speaking and Italian-speaking Switzerland. PIN: The standard term in India; PIN is an acronym for Postal Index Number. Sometimes called a PIN code. PLZ: The standard term in Germany, Austria, German-speaking Switzerland and Liechtenstein; PLZ is an abbreviation of 'postal routing number'. Postal code: The general term used in Canada. Postcode: This solid compound is popular in many English-speaking countries and is also the standard term in the Netherlands. Postal index: This term is used in Eastern European countries such as Belarus, Moldova, Russia, Ukraine, etc. PSČ: The standard term in Slovakia and the Czech Republic; in both languages PSČ is an acronym for 'postal routing number'. ZIP Code: The standard term in the United States and the Philippines; ZIP is an acronym for Zone Improvement Plan. History The development of postal codes happened first in large cities. Postal codes began with postal district numbers (or postal zone numbers) within large cities. London was first subdivided into 10 districts in 1857 (EC (East Central), WC (West Central), N, NE, E, SE, S, SW, W, and NW); four districts were created to cover Liverpool in 1864; and Manchester/Salford was split into eight numbered districts in 1867/68. By World War I, such postal district or zone numbers also existed in various large European cities. They existed in the United States at least as early as the 1920s, possibly implemented at the local post office level only (for example, instances of "Boston 9, Mass" in 1920 are attested), although they were evidently not used throughout all major US cities (i.e. implemented USPOD-wide) until World War II. By 1930 or earlier, the idea of extending the postal district or zone numbering plans beyond large cities to cover even small towns and rural locales had started. These developed into postal codes as they are defined today. The name of US postal codes, "ZIP Codes", reflects this evolutionary growth from a zone plan to a zone improvement plan, "ZIP". Modern postal codes were first introduced in the Ukrainian Soviet Socialist Republic in December 1932, but the system was abandoned in 1939. The next country to introduce postal codes was Germany in 1941, followed by Singapore in 1950, Argentina in 1958, the United States in 1963 and Switzerland in 1964. The United Kingdom began introducing its current system in Norwich in 1959, but it was not used nationwide until 1974. 
Presentation Character sets The characters used in postal codes are: The Western Arabic numerals "0" to "9" Letters of the ISO basic Latin alphabet Spaces, hyphens Reserved characters Postal codes in the Netherlands originally did not use the letters 'F', 'I', 'O', 'Q', 'U' and 'Y' for technical reasons. But as almost all existing combinations are now used, these letters were allowed for new locations starting in 2005. The letter combinations "SS", "SD", and "SA" are not used, due to their links with the Nazi occupation in World War II. Postal codes in Canada do not include the letters D, F, I, O, Q, or U, as the optical character recognition (OCR) equipment used in automated sorting could easily confuse them with other letters and digits. The letters W and Z are used, but are not currently used as the first letter. Canadian postal codes alternate letters and numbers (with a space after the third character) in this format: A9A 9A9 In Ireland, the Eircode system uses the following letters only: A, C, D, E, F, H, K, N, P, R, T, V, W, X, Y. This serves to avoid confusion in OCR, and to avoid accidental double entendres through the creation of word lookalikes, as an Eircode's last four characters are random. Alphanumeric postal codes Most postal code systems are numeric; only a few are alphanumeric (i.e., use both letters and digits). Alphanumeric systems can, given the same number of characters, encode many more locations. For example, while a two-digit numeric code can represent 100 locations, a two-character alphanumeric code using ten digits and twenty letters can represent 900 locations. The independent nations using alphanumeric postal code systems are: Argentina (see table) Brunei (see table) Canada (see table) Eswatini Ireland (see table) Jamaica (see table) (suspended in 2007) Kazakhstan (since 2015) Malta (see table) Netherlands (see table) Peru (see table); the postal code format in Peru was updated in February 2011 to a five-digit format. Somalia United Kingdom (see table) Countries which prefix their postal codes with a fixed group of letters, indicating a country code, include Andorra, Azerbaijan, Barbados, Ecuador and Saint Vincent and the Grenadines. Country code prefixes ISO 3166-1 alpha-2 country codes were recommended by the European Committee for Standardization as well as the Universal Postal Union to be used in conjunction with postal codes starting in 1994, but they have not become widely used. Andorra, Azerbaijan, Barbados, Ecuador, Latvia and Saint Vincent and the Grenadines use the ISO 3166-1 alpha-2 as a prefix in their postal codes. In some countries (such as in continental Europe, where a numeric postcode format of four or five digits is commonly used) the numeric postal code is sometimes prefixed with a country code when sending international mail to that country. Placement of the code Postal services have their own formats and placement rules for postal codes. In most English-speaking countries, the postal code forms the last item of the address, following the city or town name, whereas in most continental European countries it precedes the name of the city or town. When it follows the city, it may be on the same line or on a new line. In Belarus, Kyrgyzstan, Russia and Turkmenistan, it is written at the beginning of an address. In Japan, it is written at the start of the address when written in Japanese, but at the end when the address is written in the Latin alphabet. 
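To make the coding-capacity comparison above concrete, the following sketch (added here for illustration; it is not part of the original article) counts the number of distinct codes each scheme can express, assuming ten digits and a twenty-letter alphabet as in the example.

# Number of distinct codes of length n over an alphabet of k symbols is k**n.
def capacity(symbols: int, length: int) -> int:
    return symbols ** length

print(capacity(10, 2))       # two-character numeric code: 100
print(capacity(10 + 20, 2))  # two-character alphanumeric code (10 digits + 20 letters): 900
print(capacity(10, 5))       # five-digit numeric code, e.g. a basic US ZIP Code: 100000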
Geographic coverage Postal codes are usually assigned to geographical areas. Sometimes codes are assigned to individual addresses or to institutions that receive large volumes of mail, e.g. government agencies or large commercial companies. One example is the French Cedex system. Postal zone numbers Before postal codes as described here were used, large cities were often divided into postal zones or postal districts, usually numbered from 1 upwards within each city. The newer postal code systems often incorporate the old zone numbers, as with London postal district numbers, for example. Ireland still uses postal district numbers in Dublin. In New Zealand, Auckland, Wellington and Christchurch were divided into postal zones, but these fell into disuse, and have now become redundant as a result of a new postcode system being introduced. Codes defined along administrative borders Some postal code systems, like those of Ecuador and Costa Rica, show an exact agreement with the hierarchy of administrative divisions. Format of the six-digit numeric (eight-digit alphanumeric) postal codes in Ecuador, introduced in December 2007: ECAABBCC EC – ISO 3166-1 alpha-2 country code AA – one of the 24 provinces of Ecuador (24 of 100 possible codes used = 24%) BB – one of the 226 cantons of Ecuador (for AABB 221 of 10000 codes used, i.e. 2.21%) CC – one of the parishes of Ecuador. Format of the five-digit numeric postal codes in Costa Rica, introduced in 2007: ABBCC A – one of the seven provinces of Costa Rica (7 of 10 used, i.e. 70%) BB – one of the 81 cantons of Costa Rica (81 of 100 used, i.e. 81%) CC – one of the districts of Costa Rica. In Costa Rica these codes were originally used as district identifiers by the National Institute of Statistics and Census of Costa Rica and the Administrative Territorial Division, and continue to be equivalent. The first two digits of the postal codes in Turkey correspond to the provinces, and each province is assigned only one number. They are the same as those in ISO 3166-2:TR. The first two digits of the postal codes in Vietnam indicate a province. Some provinces have one two-digit number assigned, others have several. The numbers differ from those used in ISO 3166-2:VN. Codes defined close to administrative boundaries In France the numeric code for the departments is used as the first two digits of the postal code, except for the two departments in Corsica, which have codes 2A and 2B but use 20 as the first two digits of the postal code. Furthermore, the codes indicate only the department in charge of delivery of the post, so a location in one department may have a postal code starting with the number of a neighbouring department. Codes defined indirectly in relation to administrative borders The first digit of the postal codes in the United States designates a group of states. From the first three digits one can infer the state, with a few exceptions where an area is served by a central office in an adjacent state. Similarly, in Canada, the first letter indicates the province or territory, although the provinces of Quebec and Ontario are divided into several lettered sub-regions (e.g. H for Montreal and Laval), and the Northwest Territories and Nunavut share the letter X. Codes defined independently from administrative areas The first two digits of the postal codes in Germany define areas independently of administrative regions. The coding space of the first digit is fully used (0–9); that of the first two combined is utilized to 89%, i.e. there are 89 postal zones defined. 
Zone 11 is non-geographic. Royal Mail designed the postal codes in the United Kingdom mostly for efficient distribution. Nevertheless, people have come to associate codes with certain areas, leading to some people wanting or not wanting to have a certain code. See also postcode lottery. In Brazil the 8-digit postcodes are an evolution of the five-digit area postal codes. In the 1990s the Brazilian five-digit postal code, DDDDD, received a three-digit suffix, DDDDD-SSS, but this suffix is not directly related to the administrative district hierarchy. The suffix was created only for logistic reasons. The postal code can be assigned to individual land lots in some special cases – in Brazil, these are named "large receivers" and receive suffixes 900–959. It is an error to associate the postal code with the whole land lot area. A postal code is often related to a land lot, but postal codes are usually related to access points on streets. Small or middle-sized houses, in general, only have a single main gate, which is the delivery point. Parks, large businesses such as shopping centers, and big houses may have more than one entrance and more than one delivery point. Precision Czechoslovakia Czechoslovakia introduced Postal Routing Numbers (PSČ – poštovní směrovací čísla) in 1973. The code consists of 5 digits formatted into two groups: NNN NN. Originally, the first group marked a district transport centre, and the second group represented the order of post offices on the collection route. In the first group, the first digit corresponds partly with the region, the second digit denoted a collection transport node (sběrný přepravní uzel, SPU) and the third digit a "district transport node" (okresní přepravní uzel). However, processing was later centralized and mechanized while the codes remained the same. After separation, Slovakia and the Czech Republic kept the system. Codes with an initial digit of 1, 2, 3, 4, 5, 6, or 7 are used in the Czech Republic, while codes with an initial digit of 8, 9, or 0 are used in Slovakia. A code corresponds to a local post office. However, some larger companies or organizations have their own post codes. In 2004–2006, there were some efforts in Slovakia to reform the system, to obtain separate post codes for each individual postman's delivery district, but the change was not realized. India Postal codes are known as Postal Index Numbers (PINs; sometimes as PIN codes) in India. The PIN system was introduced on 15 August 1972 by India Post. India uses a unique six-digit code as a geographical number to identify locations in India. The format of the PIN is ZSDPPP, defined as follows: Z – Zone S – Sub-zone D – Sorting District P – Service Route PP – Post Office The first digit represents one of nine zones: eight regional and one functional. Ireland In Ireland, the postal code system launched in 2015, known as Eircode, provides a unique 7-character alphanumeric code for each individual address. The first three characters are the routing key, which identifies a postal district, and the last four characters are a unique identifier that relates to an individual address (business, house or apartment). A fully developed API is also available for integrating the Eircode database into business databases and logistics systems. With a single exception, these codes are in the format: ANN XXXX The single exception is the Dublin D6W postal district. 
It is the only routing key area in the country that takes the format ANA instead of ANN: D6W XXXX While it is not intended to replace addresses, in theory simply providing a seven-character Eircode would locate any Irish delivery address. For example, the Irish Parliament Dáil Éireann is: D02 A272 Netherlands Postal codes in the Netherlands, known as postcodes, are alphanumeric, consisting of four digits followed by a space and two letters (NNNN AA). Adding the house number to the postcode will identify the address, making the street name and town name redundant. For example: 2597 GV 75 will direct a postal delivery to Theo Mann-Bouwmeesterlaan 75, 's-Gravenhage (the International School of The Hague). Singapore Since 1 September 1995, every building in Singapore has been given a unique, six-digit postal code. United Kingdom For domestic properties, an individual postcode may cover up to 100 properties in contiguous proximity (e.g. a short section of a populous road, or a group of less populous neighbouring roads). The postcode together with the number or name of a property is not always unique, particularly in rural areas. For example, GL20 8NX/1 might refer to either 1 Frampton Cottages or 1 Frampton Farm Cottages, roughly a quarter of a mile (400 metres) apart. The structure is alphanumeric, with the following six valid formats, as defined by BS 7666: A9 9AA A9A 9AA A99 9AA AA9 9AA AA9A 9AA AA99 9AA There are always two halves: the separation between the outward and inward postcodes is indicated by one space. The outward postcode covers a unique area and has two parts which may in total be two, three or four characters in length: a postcode area of one or two letters, followed by one or two numbers, followed in some parts of London by a letter. The outward postcode and the leading numeric of the inward postcode in combination form a postal sector, and this usually corresponds to a couple of thousand properties. Larger businesses and isolated properties such as farms may have a unique postcode. Extremely large organisations such as larger government offices or bank headquarters may have multiple postcodes for different departments. There are 121 postcode areas in the UK, ranging widely in size from BT, which covers the whole of Northern Ireland, to WC, for a small part of Central London. Postcode areas occasionally cross national boundaries, such as SY, which covers a large, predominantly rural area from Shrewsbury and Ludlow in Shropshire, England, through to the seaside town of Aberystwyth, Ceredigion on Wales' west coast. There are a number of special purpose postcode areas that are "non-geographic" and which provide special routing instructions (such as parcel returns to online retailers). The three Crown dependencies and Gibraltar also use UK-formatted postcodes. Some British Overseas Territories have adopted a single postcode for their territory that is very similar to the UK format. United States In the United States, the basic ZIP Code is composed of five digits. The first three digits identify a specific sectional center facility—or central sorting facility—that serves a geographic region (typically a large part of a state). The next two digits identify a specific post office either serving an area of a city (if in an urban area or large suburban area) or an entire village, town, or small city and its surrounding area (if in a small suburban or rural area). 
There is an extended format of the ZIP Code known as the ZIP+4, which contains the basic five-digit ZIP Code, followed by a hyphen and four additional digits. These digits identify a specific delivery route, such as one side of a building, a group of apartments, or several floors of a large office building. Although using the ZIP+4 offers higher accuracy, addressing redundancy, and sorting efficiency within the USPS, it is optional and not widely used by the general public; it is used primarily by business mailers. For high-volume business mailers using automated mailing machines, the USPS has promulgated the Intelligent Mail barcode standard, which is a barcode containing the ZIP+4 code plus a two-digit delivery point. This 11-digit number is theoretically a unique identifier for every address in the country. States and overseas territories sharing a postal code system French overseas departments and territories use the five-digit French postal code system, each code starting with the three-digit department identifier. Monaco is also integrated into the French system and has no system of its own. The British Crown Dependencies of Guernsey, Jersey and the Isle of Man are part of the UK postcode system. They use the schemes AAN NAA and AANN NAA, in which the first two letters are a unique code (GY, JE and IM respectively). Most of the Overseas Territories have UK-style postcodes, with a single postcode for each territory or dependency, although they are still treated as international destinations by Royal Mail in the UK, and charged at international rather than UK inland rates. The four other Overseas Territories, Anguilla, Bermuda, the British Virgin Islands and the Cayman Islands, have their own separate systems and formats. The Pacific island states of Palau, the Marshall Islands and the Federated States of Micronesia remain part of the US ZIP Code system, despite having become independent states. San Marino and the Vatican City are part of the Italian postcode system, while Liechtenstein similarly uses the Swiss system, as do the Italian exclave of Campione d'Italia and the German exclave of Büsingen am Hochrhein, although they also form part of their respective countries' postal code systems. The Czech Republic and Slovakia still use the codes of the former Czechoslovakia, their ranges not overlapping. In 2004–2006, Slovakia prepared a reform of the system but the plan was postponed and may have been abandoned. In the Czech Republic, there was no significant effort to modify the system. Non-geographic codes In the United Kingdom, the non-conforming postal code GIR 0AA was used for the National Girobank until its closure in 2003. A non-geographic series of postcodes, starting with BX, is used by some banks and government departments. HM Revenue and Customs – VAT Controller VAT Central Unit BX5 5AT The XX postcode is used for parcel returns. The BF postcode is used for British Forces Post Office (BFPO) addresses. A fictional address is also used by Royal Mail for letters to Santa Claus, more commonly known as Santa or Father Christmas: Santa's Grotto Reindeerland XM4 5HQ Previously, the postcode SAN TA1 was used. In Finland, the special postal code 96930 is for Korvatunturi, the place where Santa Claus is said to live, although mail is delivered to the Santa Claus Village in Rovaniemi. The special postal code 99999 was formerly used. 
In Canada, the amount of mail sent to Santa Claus increased every Christmas, up to the point that Canada Post decided to start an official Santa Claus letter-response program in 1983. Approximately one million letters are addressed to Santa Claus each Christmas, including some from outside Canada, and all of them are answered in the same languages in which they are written. Canada Post introduced a special address for mail to Santa Claus, complete with its own postal code: SANTA CLAUS NORTH POLE H0H 0H0 In Belgium, bpost sends a small present to children who have written a letter to Sinterklaas. They can use the non-geographic postal code 0612, which refers to the date Sinterklaas is celebrated (6 December), although a fictional town, street and house number are also used. In Dutch, the address is Sinterklaas Spanjestraat 1 0612 Hemel This translates as "1 Spain Street, 0612 Heaven". In French, the street is called "Paradise Street": Saint-Nicolas Rue du Paradis 1 0612 Ciel Non-postal uses While postal codes were introduced to expedite the delivery of mail, they can be used for: Finding the nearest branch of an organisation to a given address. A computer program uses the postal codes of the target address and the branches to list the closest branches in order of distance. This can be used by companies to inform potential customers where to go, by job centres to find jobs for job-seekers, to alert people of town planning applications in their area, and for other applications. Fine-grained postal codes can be used with satellite navigation systems to navigate to an address by street number and postcode. Geographical sales territories for representatives in the pharmaceutical industry are allocated based on a workload index that is based upon postcodes. Population data can be isolated, grouped and/or organized by postal code for statistical analysis. Availability In some countries, the postal authorities charge for access to the code database. The United Kingdom Government is consulting on whether to waive licensing fees for some geographical data sets (to be determined) related to UK postcodes. See also List of postal codes :Category:Lists of postal codes Address#Format by country and area Postcode Address File References External links Universal Postal Union Addressing Postcodes Postal systems Ukrainian inventions Soviet inventions
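As a concrete illustration of the format descriptions in this article, the sketch below checks strings against two of the schemes mentioned: the basic and ZIP+4 forms of the US ZIP Code, and a simplified pattern covering the six UK outward/inward formats. It is added here for illustration and is not part of the original article; real postal validation relies on official address databases, and the UK pattern is a simplification that will also accept some codes Royal Mail has never issued.

import re

# Basic five-digit US ZIP Code, optionally extended with the four-digit ZIP+4 suffix.
ZIP_PATTERN = re.compile(r"^\d{5}(-\d{4})?$")

# Simplified UK pattern: outward code (one or two area letters, a district digit,
# an optional extra letter or digit), a space, then the inward code (digit + two letters).
UK_PATTERN = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]? \d[A-Z]{2}$")

for code in ["90210", "90210-1234", "SW1A 1AA", "GL20 8NX", "B33 8TH", "D02 A272"]:
    if ZIP_PATTERN.match(code):
        kind = "US ZIP"
    elif UK_PATTERN.match(code):
        kind = "UK-style"
    else:
        kind = "unrecognized"
    print(code, "->", kind)

Note that the Irish Eircode D02 A272 is reported as unrecognized by the UK pattern (its inward part begins with a letter), which illustrates why each national scheme needs its own rule.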
Postal code
[ "Technology" ]
4,872
[ "Transport systems", "Postal systems" ]
51,596
https://en.wikipedia.org/wiki/Inositol%20trisphosphate
Inositol trisphosphate or inositol 1,4,5-trisphosphate, abbreviated InsP3 or Ins3P or IP3, is an inositol phosphate signaling molecule. It is made by hydrolysis of phosphatidylinositol 4,5-bisphosphate (PIP2), a phospholipid that is located in the plasma membrane, by phospholipase C (PLC). Together with diacylglycerol (DAG), IP3 is a second messenger molecule used in signal transduction in biological cells. While DAG stays inside the membrane, IP3 is soluble and diffuses through the cell, where it binds to its receptor, which is a calcium channel located in the endoplasmic reticulum. When IP3 binds its receptor, calcium is released into the cytosol, thereby activating various calcium-regulated intracellular signals. Properties Chemical formula and molecular weight IP3 is an organic molecule with a molecular mass of 420.10 g/mol. Its chemical formula is C6H15O15P3. It is composed of an inositol ring with three phosphate groups bound at the 1, 4, and 5 carbon positions, and three hydroxyl groups bound at positions 2, 3, and 6. Chemical properties Phosphate groups can exist in three different forms depending on a solution's pH. Phosphorus atoms can bind three oxygen atoms with single bonds and a fourth oxygen atom using a double/dative bond. The pH of the solution, and thus the form of the phosphate group, determines its ability to bind to other molecules. The phosphate groups are attached to the inositol ring by phosphoester bonds (see phosphoric acids and phosphates). This bond involves combining a hydroxyl group from the inositol ring and a free phosphate group through a dehydration reaction. Considering that the average physiological pH is approximately 7.4, the main form of the phosphate groups bound to the inositol ring in vivo is PO42−. This gives IP3 a net negative charge, which is important in allowing it to dock to its receptor, through binding of the phosphate groups to positively charged residues on the receptor. IP3 has three hydrogen bond donors in the form of its three hydroxyl groups. The hydroxyl group on the 6th carbon atom in the inositol ring is also involved in IP3 docking. Binding to its receptor The docking of IP3 to its receptor, which is called the inositol trisphosphate receptor (InsP3R), was first studied using deletion mutagenesis in the early 1990s. Studies focused on the N-terminal side of the IP3 receptor. In 1997, researchers localized the region of the IP3 receptor involved in binding of IP3 to between amino acid residues 226 and 578. Considering that IP3 is a negatively charged molecule, positively charged amino acids such as arginine and lysine were believed to be involved. Two arginine residues at positions 265 and 511 and one lysine residue at position 508 were found to be key in IP3 docking. Using a modified form of IP3, it was discovered that all three phosphate groups interact with the receptor, but not equally. The phosphates at the 4th and 5th positions interact more extensively than the phosphate at the 1st position and the hydroxyl group at the 6th position of the inositol ring. Discovery The discovery that a hormone can influence phosphoinositide metabolism was made by Mabel R. Hokin (1924–2003) and her husband Lowell E. Hokin in 1953, when they discovered that radioactive 32P phosphate was incorporated into the phosphatidylinositol of pancreas slices when stimulated with acetylcholine. Up until then, phospholipids were believed to be inert structures used by cells only as building blocks for construction of the plasma membrane. 
Over the next 20 years, little was discovered about the importance of PIP2 metabolism in terms of cell signaling, until the mid-1970s when Robert H. Michell hypothesized a connection between the catabolism of PIP2 and increases in intracellular calcium (Ca2+) levels. He hypothesized that receptor-activated hydrolysis of PIP2 produced a molecule that caused increases in intracellular calcium mobilization. This idea was researched extensively by Michell and his colleagues, who in 1981 were able to show that PIP2 is hydrolyzed into DAG and IP3 by a then unknown phosphodiesterase. In 1984 it was discovered that IP3 acts as a secondary messenger that is capable of traveling through the cytoplasm to the endoplasmic reticulum (ER), where it stimulates the release of calcium into the cytoplasm. Further research provided valuable information on the IP3 pathway, such as the discovery in 1986 that one of the many roles of the calcium released by IP3 is to work with DAG to activate protein kinase C (PKC). It was discovered in 1989 that phospholipase C (PLC) is the phosphodiesterase responsible for hydrolyzing PIP2 into DAG and IP3. Today the IP3 signaling pathway is well mapped out, and is known to be important in regulating a variety of calcium-dependent cell signaling pathways. Signaling pathway Increases in the intracellular Ca2+ concentration are often a result of IP3 activation. When a ligand binds to a G protein-coupled receptor (GPCR) that is coupled to a Gq heterotrimeric G protein, the α-subunit of Gq can bind to and induce activity in the PLC isozyme PLC-β, which results in the cleavage of PIP2 into IP3 and DAG. If a receptor tyrosine kinase (RTK) is involved in activating the pathway, the isozyme PLC-γ has tyrosine residues that can become phosphorylated upon activation of the RTK, and this will activate PLC-γ and allow it to cleave PIP2 into DAG and IP3. This occurs in cells that are capable of responding to growth factors such as insulin, because the growth factors are the ligands responsible for activating the RTK. IP3 (also abbreviated Ins(1,4,5)P3) is a soluble molecule and is capable of diffusing through the cytoplasm to the ER, or the sarcoplasmic reticulum (SR) in the case of muscle cells, once it has been produced by the action of PLC. Once at the ER, IP3 is able to bind to the Ins(1,4,5)P3 receptor, Ins(1,4,5)P3R, which is a ligand-gated Ca2+ channel found on the surface of the ER. The binding of IP3 (the ligand in this case) to Ins(1,4,5)P3R triggers the opening of the Ca2+ channel, and thus release of Ca2+ into the cytoplasm. In heart muscle cells this increase in Ca2+ activates the ryanodine receptor-operated channel on the SR, resulting in further increases in Ca2+ through a process known as calcium-induced calcium release. IP3 may also activate Ca2+ channels on the cell membrane indirectly, by increasing the intracellular Ca2+ concentration. Function Human IP3's main functions are to mobilize Ca2+ from storage organelles and to regulate cell proliferation and other cellular reactions that require free calcium. In smooth muscle cells, for example, an increase in concentration of cytoplasmic Ca2+ results in the contraction of the muscle cell. In the nervous system, IP3 serves as a second messenger, with the cerebellum containing the highest concentration of IP3 receptors. There is evidence that IP3 receptors play an important role in the induction of plasticity in cerebellar Purkinje cells. 
Sea urchin eggs The slow block to polyspermy in the sea urchin is mediated by the PIP2 secondary messenger system. Activation of the binding receptors activates PLC, which cleaves PIP2 in the egg plasma membrane, releasing IP3 into the egg cell cytoplasm. IP3 diffuses to the ER, where it opens Ca2+ channels. Research Huntington's disease Huntington's disease occurs when the cytosolic protein Huntingtin (Htt) has an additional 35 glutamine residues added to its amino terminal region. This modified form of Htt is called Httexp. Httexp makes Type 1 IP3 receptors more sensitive to IP3, which leads to the release of too much Ca2+ from the ER. The release of Ca2+ from the ER causes an increase in the cytosolic and mitochondrial concentrations of Ca2+. This increase in Ca2+ is thought to be the cause of GABAergic MSN degradation. Alzheimer's disease Alzheimer's disease involves the progressive degeneration of the brain, severely impacting mental faculties. Since the Ca2+ hypothesis of Alzheimer's was proposed in 1994, several studies have shown that disruptions in Ca2+ signaling are the primary cause of Alzheimer's disease. Familial Alzheimer's disease has been strongly linked to mutations in the presenilin 1 (PS1), presenilin 2 (PS2), and amyloid precursor protein (APP) genes. All of the mutated forms of these genes observed to date have been found to cause abnormal Ca2+ signaling in the ER. Mutations in PS1 have been shown to increase IP3-mediated Ca2+ release from the ER in several animal models. Calcium channel blockers have been used to treat Alzheimer's disease with some success, and the use of lithium to decrease IP3 turnover has also been suggested as a possible method of treatment. See also Adenophostin Inositol Inositol phosphate myo-Inositol Myo-inositol trispyrophosphate Inositol pentakisphosphate Inositol hexaphosphate Inositol trisphosphate receptor ITPR1 ITPKC References External links Signal transduction Inositol Phosphate esters Second messenger system
Inositol trisphosphate
[ "Chemistry", "Biology" ]
2,162
[ "Inositol", "Second messenger system", "Signal transduction", "Biochemistry", "Neurochemistry" ]
51,632
https://en.wikipedia.org/wiki/Puffball
Puffballs are a type of fungus featuring a ball-shaped fruit body that (when mature) bursts on contact or impact, releasing a cloud of dust-like spores into the surrounding area. Puffballs belong to the division Basidiomycota and encompass several genera, including Calvatia, Calbovista and Lycoperdon. The puffballs were previously treated as a taxonomic group called the Gasteromycetes or Gasteromycetidae, but they are now known to be a polyphyletic assemblage. The distinguishing feature of all puffballs is that they do not have an open cap with spore-bearing gills. Instead, spores are produced internally, in a spheroidal fruit body called a gasterothecium (gasteroid 'stomach-like' basidiocarp). As the spores mature, they form a mass called a gleba in the centre of the fruitbody that is often of a distinctive color and texture. The basidiocarp remains closed until after the spores have been released from the basidia. Eventually, it develops an aperture, or dries, becomes brittle, and splits, and the spores escape. The spores of puffballs are statismospores rather than ballistospores, meaning they are not forcibly extruded from the basidium. Puffballs and similar forms are thought to have evolved convergently (that is, in numerous independent events) from Hymenomycetes by gasteromycetation, through secotioid stages. Thus, 'Gasteromycetes' and 'Gasteromycetidae' are now considered to be descriptive, morphological terms (more properly gasteroid or gasteromycetes, to avoid taxonomic implications) but not valid cladistic terms. True puffballs do not have a visible stalk or stem, while stalked puffballs do have a stalk that supports the gleba. None of the stalked puffballs are edible, as they are tough and woody mushrooms. The Hymenogastrales and Enteridium lycoperdon, a slime mold, are the false puffballs. A gleba which is powdery on maturity is a feature of true puffballs, stalked puffballs and earthstars. False puffballs are hard like rock or brittle. All false puffballs are inedible, as they are tough and bitter to taste. The genus Scleroderma, which has a young purple gleba, should also be avoided. Puffballs were traditionally used in Tibet for making ink by burning them, grinding the ash, then putting it in water and adding glue liquid and "a decoction", which, when pressed for a long time, made a dark black substance that was used as ink. Rural Americans burned the common puffball in a bee smoker to anesthetize honey bees as a means to safely procure honey; the practice later inspired experimental medicinal application of puffball smoke as a surgical general anesthetic in 1853. Edibility and identification While most puffballs are not poisonous, some often look similar to young agarics, especially the deadly Amanitas such as the death cap or destroying angel mushrooms. Young puffballs in the edible stage, before maturation of the gleba, have undifferentiated white flesh within, whereas the gills of immature Amanita mushrooms can be seen if they are closely examined; these immature Amanitas can be very toxic. The giant puffball, Calvatia gigantea (earlier classified as Lycoperdon giganteum), can grow very large and is difficult to mistake for any other fungus. It has been estimated that, when mature, a large specimen of this fungus will produce around 7 × 10^12 spores. Not all true puffball mushrooms are without stalks. Some may also be stalked, such as Podaxis pistillaris, which is also called the "false shaggy mane". 
There are also a number of false puffballs that look similar to the true ones. Stalked Stalked puffballs species: Battarrea phalloides Calostoma cinnabarina (Stalked Puffball-in-Aspic) Pisolithus tinctorius Tulostoma (genus) True True puffballs genera and species: Bovista – various species, including: Bovista aestivalis Bovista dermoxantha Bovista nigrescens Bovista plumbea Calvatia – various species, including: Calvatia bovista Calvatia craniiformis Calvatia cyathiformis Calvatia gigantea Calvatia booniana Calvatia fumosa Calvatia lepidophora Calvatia pachyderma Calvatia sculpta Calvatia subcretacea – edible Calbovista subsculpta Handkea – various species, including: Handkea utriformis Lycoperdon – various species, including: Lycoperdon candidum Lycoperdon echinatum Lycoperdon fusillum Lycoperdon umbrinum Scleroderma – various species, including: Scleroderma auratium Scleroderma geaster – not edible False False puffballs species: Endoptychum agaricoides Nivatogastrium nubigenum Podaxis pistillaris Rhizopogon rubescens Truncocolumella citrina Classification Major orders: Agaricales (including now-obsolete orders Lycoperdales, Tulostomatales, and Nidulariales) Basidiomycetes: Agaricales: Lycoperdaceae: Calvatia Calvatia booniana Calvatia bovista (Handkea utriformis) Calvatia craniiformis Calvatia cyathiformis Calvatia fumosa (Handkea fumosa) Calvatia gigantea Calvatia lepidophora Calvatia rubroflava Calvatia sculpta Calvatia subcretacea (Handkea subcretacea) Basidiomycetes: Agaricales: Lycoperdaceae: Lycoperdon Lycoperdon foetidum (Lycoperdon nigrescens) Lycoperdon perlatum Lycoperdon pulcherrimum Lycoperdon pusillum Lycoperdon pyriforme Basidiomycetes: Agaricales: Lycoperdaceae: Vascellum Vascellum curtisii Vascellum pratense – edible when interior is white Geastrales and Phallales (related to Cantharellales), Basidiomycetes: Phallales: Geastraceae: Geastrum Geastrum coronatum Geastrum fornicatum Geastrum saccatum Sclerodermatales (related to Boletales) Basidiomycetes: Boletales: Sclerodermataceae: Scleroderma Scleroderma areolatum Scleroderma bovista Scleroderma cepa Scleroderma citrinum Scleroderma meridionale Scleroderma michiganense Scleroderma polyrhizum Scleroderma septentrionale Various false-truffles (hypogaeic gasteromycetes) related to different hymenomycete orders Similarly, the true truffles (Tuberales) are gasteroid Ascomycota. Their ascocarps are called tuberothecia. Footnotes References Homobasidiomycetes at the Tree of Life Web Project External links Edible fungi Fungus common names Basidiomycota Mushroom types
Puffball
[ "Biology" ]
1,611
[ "Fungus common names", "Fungi", "Common names of organisms", "Mushroom types" ]
51,653
https://en.wikipedia.org/wiki/Burali-Forti%20paradox
In set theory, a field of mathematics, the Burali-Forti paradox demonstrates that constructing "the set of all ordinal numbers" leads to a contradiction and therefore shows an antinomy in a system that allows its construction. It is named after Cesare Burali-Forti, who, in 1897, published a paper proving a theorem which, unknown to him, contradicted a previously proved result by Georg Cantor. Bertrand Russell subsequently noticed the contradiction, and when he published it in his 1903 book Principles of Mathematics, he stated that it had been suggested to him by Burali-Forti's paper, with the result that it came to be known by Burali-Forti's name. Stated in terms of von Neumann ordinals We will prove this by contradiction. (1) Let Ω be a set consisting of all ordinal numbers. (2) Ω is transitive because for every element x of Ω (which is an ordinal number and can be any ordinal number) and every element y of x (i.e., under the definition of von Neumann ordinals, for every ordinal number y < x), we have that y is an element of Ω, because any ordinal number contains only ordinal numbers, by the definition of this ordinal construction. (3) Ω is well ordered by the membership relation because all its elements are also well ordered by this relation. (4) So, by steps 2 and 3, we have that Ω is an ordinal class and also, by step 1, an ordinal number, because all ordinal classes that are sets are also ordinal numbers. (5) This implies that Ω is an element of Ω. (6) Under the definition of von Neumann ordinals, Ω < Ω is the same as Ω being an element of Ω. This latter statement is proven by step 5. (7) But no ordinal class is less than itself, including Ω because of step 4 (Ω is an ordinal class), i.e. Ω ≮ Ω. (8) We have deduced two contradictory propositions (Ω < Ω and Ω ≮ Ω) from the sethood of Ω and, therefore, disproved that Ω is a set. Stated more generally The version of the paradox above is anachronistic, because it presupposes the definition of the ordinals due to John von Neumann, under which each ordinal is the set of all preceding ordinals, which was not known at the time the paradox was framed by Burali-Forti. Here is an account with fewer presuppositions: suppose that we associate with each well-ordering an object called its order type in an unspecified way (the order types are the ordinal numbers). The order types (ordinal numbers) themselves are well-ordered in a natural way, and this well-ordering must have an order type Ω. It is easily shown in naïve set theory (and remains true in ZFC but not in New Foundations) that the order type of all ordinal numbers less than a fixed α is α itself. So the order type of all ordinal numbers less than Ω is Ω itself. But this means that Ω, being the order type of a proper initial segment of the ordinals, is strictly less than the order type of all the ordinals, but the latter is Ω itself by definition. This is a contradiction. If we use the von Neumann definition, under which each ordinal is identified as the set of all preceding ordinals, the paradox is unavoidable: the offending proposition that the order type of all ordinal numbers less than a fixed α is α itself must be true. The collection of von Neumann ordinals, like the collection in the Russell paradox, cannot be a set in any set theory with classical logic. But the collection of order types in New Foundations (defined as equivalence classes of well-orderings under similarity) is actually a set, and the paradox is avoided because the order type of the ordinals less than Ω turns out not to be Ω. 
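As a compact symbolic restatement of the argument (added here for illustration and not part of the original article; Ω denotes the putative set of all ordinals, as above):

\[
\Omega = \{\alpha : \alpha \text{ is an ordinal}\}
\;\Longrightarrow\; \Omega \text{ is transitive and well-ordered by } \in
\;\Longrightarrow\; \Omega \text{ is an ordinal}
\;\Longrightarrow\; \Omega \in \Omega
\;\Longrightarrow\; \Omega < \Omega,
\]

which contradicts the fact that no ordinal is less than itself; hence Ω cannot be a set.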
Resolutions of the paradox Modern axioms for formal set theory such as ZF and ZFC circumvent this antinomy by not allowing the construction of sets using terms like "all sets with the property P", as is possible in naive set theory and as is possible with Gottlob Frege's axioms, specifically Basic Law V, in the "Grundgesetze der Arithmetik". Quine's system New Foundations (NF) uses a different solution. Rosser showed that in the original version of Quine's system "Mathematical Logic" (ML), an extension of New Foundations, it is possible to derive the Burali-Forti paradox, showing that this system was contradictory. Quine's revision of ML following Rosser's discovery does not suffer from this defect, and indeed was subsequently proved equiconsistent with NF by Hao Wang. See also Absolute infinite References Irving Copi (1958), "The Burali-Forti Paradox", Philosophy of Science 25(4): 281–286. External links Stanford Encyclopedia of Philosophy: "Paradoxes and Contemporary Logic", by Andrea Cantini. Ordinal numbers Paradoxes of naive set theory
Burali-Forti paradox
[ "Mathematics" ]
1,051
[ "Ordinal numbers", "Basic concepts in infinite set theory", "Mathematical objects", "Basic concepts in set theory", "Paradoxes of naive set theory", "Order theory", "Numbers" ]
51,654
https://en.wikipedia.org/wiki/Soliton
In mathematics and physics, a soliton is a nonlinear, self-reinforcing, localized wave packet that is strongly stable, in that it preserves its shape while propagating freely, at constant velocity, and recovers it even after collisions with other such localized wave packets. Its remarkable stability can be traced to a balanced cancellation of nonlinear and dispersive effects in the medium. (Dispersive effects are a property of certain systems where the speed of a wave depends on its frequency.) Solitons were subsequently found to provide stable solutions of a wide class of weakly nonlinear dispersive partial differential equations describing physical systems. The soliton phenomenon was first described in 1834 by John Scott Russell, who observed a solitary wave in the Union Canal in Scotland. He reproduced the phenomenon in a wave tank and named it the "Wave of Translation". The Korteweg–de Vries equation was later formulated to model such waves, and the term soliton was coined by Zabusky and Kruskal to describe localized, strongly stable propagating solutions to this equation. The name was meant to characterize the solitary nature of the waves, with the 'on' suffix recalling the usage for particles such as electrons, baryons or hadrons, reflecting their observed particle-like behaviour. Definition A single, consensus definition of a soliton is difficult to find. One widely used characterization ascribes three properties to solitons: They are of permanent form; They are localized within a region; They can interact with other solitons, and emerge from the collision unchanged, except for a phase shift. More formal definitions exist, but they require substantial mathematics. Moreover, some scientists use the term soliton for phenomena that do not quite have these three properties (for instance, the 'light bullets' of nonlinear optics are often called solitons despite losing energy during interaction). Explanation Dispersion and nonlinearity can interact to produce permanent and localized wave forms. Consider a pulse of light traveling in glass. This pulse can be thought of as consisting of light of several different frequencies. Since glass shows dispersion, these different frequencies travel at different speeds and the shape of the pulse therefore changes over time. However, the nonlinear Kerr effect also occurs: the refractive index of a material at a given frequency depends on the light's amplitude or strength. If the pulse has just the right shape, the Kerr effect exactly cancels the dispersion effect and the pulse's shape does not change over time. Thus, the pulse is a soliton. See soliton (optics) for a more detailed description. Many exactly solvable models have soliton solutions, including the Korteweg–de Vries equation, the nonlinear Schrödinger equation, the coupled nonlinear Schrödinger equation, and the sine-Gordon equation. The soliton solutions are typically obtained by means of the inverse scattering transform, and owe their stability to the integrability of the field equations. The mathematical theory of these equations is a broad and very active field of mathematical research. Some types of tidal bore, a wave phenomenon of a few rivers including the River Severn, are 'undular': a wavefront followed by a train of solitons. Other solitons occur as the undersea internal waves, initiated by seabed topography, that propagate on the oceanic pycnocline. 
Atmospheric solitons also exist, such as the morning glory cloud of the Gulf of Carpentaria, where pressure solitons traveling in a temperature inversion layer produce vast linear roll clouds. The recent and not widely accepted soliton model in neuroscience proposes to explain the signal conduction within neurons as pressure solitons. A topological soliton, also called a topological defect, is any solution of a set of partial differential equations that is stable against decay to the "trivial solution". Soliton stability is due to topological constraints, rather than integrability of the field equations. The constraints arise almost always because the differential equations must obey a set of boundary conditions, and the boundary has a nontrivial homotopy group, preserved by the differential equations. Thus, the differential equation solutions can be classified into homotopy classes. No continuous transformation maps a solution in one homotopy class to another. The solutions are truly distinct, and maintain their integrity, even in the face of extremely powerful forces. Examples of topological solitons include the screw dislocation in a crystalline lattice, the Dirac string and the magnetic monopole in electromagnetism, the Skyrmion and the Wess–Zumino–Witten model in quantum field theory, the magnetic skyrmion in condensed matter physics, and cosmic strings and domain walls in cosmology. History In 1834, John Scott Russell described his wave of translation: Scott Russell spent some time making practical and theoretical investigations of these waves. He built wave tanks at his home and noticed some key properties: The waves are stable, and can travel over very large distances (normal waves would tend to either flatten out, or steepen and topple over) The speed depends on the size of the wave, and its width on the depth of water. Unlike normal waves they will never merge – so a small wave is overtaken by a large one, rather than the two combining. If a wave is too big for the depth of water, it splits into two, one big and one small. Scott Russell's experimental work seemed at odds with Isaac Newton's and Daniel Bernoulli's theories of hydrodynamics. George Biddell Airy and George Gabriel Stokes had difficulty accepting Scott Russell's experimental observations because they could not be explained by the existing water wave theories. Additional observations were reported by Henry Bazin in 1862 after experiments carried out in the canal de Bourgogne in France. Their contemporaries spent some time attempting to extend the theory but it would take until the 1870s before Joseph Boussinesq and Lord Rayleigh published a theoretical treatment and solutions. In 1895 Diederik Korteweg and Gustav de Vries provided what is now known as the Korteweg–de Vries equation, including solitary wave and periodic cnoidal wave solutions. In 1965 Norman Zabusky of Bell Labs and Martin Kruskal of Princeton University first demonstrated soliton behavior in media subject to the Korteweg–de Vries equation (KdV equation) in a computational investigation using a finite difference approach. They also showed how this behavior explained the puzzling earlier work of Fermi, Pasta, Ulam, and Tsingou. In 1967, Gardner, Greene, Kruskal and Miura discovered an inverse scattering transform enabling analytical solution of the KdV equation. The work of Peter Lax on Lax pairs and the Lax equation has since extended this to solution of many related soliton-generating systems. 
Solitons are, by definition, unaltered in shape and speed by a collision with other solitons. So solitary waves on a water surface are near-solitons, but not exactly – after the interaction of two (colliding or overtaking) solitary waves, they have changed a bit in amplitude and an oscillatory residual is left behind. Solitons are also studied in quantum mechanics, thanks to the fact that they could provide a new foundation of it through de Broglie's unfinished program, known as "Double solution theory" or "Nonlinear wave mechanics". This theory, developed by de Broglie in 1927 and revived in the 1950s, is the natural continuation of his ideas developed between 1923 and 1926, which extended the wave–particle duality introduced by Albert Einstein for the light quanta, to all the particles of matter. The observation of an accelerating surface gravity water wave soliton using an external hydrodynamic linear potential was demonstrated in 2019. This experiment also demonstrated the ability to excite and measure the phases of ballistic solitons. In fiber optics Much experimentation has been done using solitons in fiber optics applications. Solitons in a fiber optic system are described by the Manakov equations. Solitons' inherent stability makes long-distance transmission possible without the use of repeaters, and could potentially double transmission capacity as well. In biology Solitons may occur in proteins and DNA, where they are related to the low-frequency collective motion of these molecules. A recently developed model in neuroscience proposes that signals, in the form of density waves, are conducted within neurons in the form of solitons. Solitons can be described as almost lossless energy transfer in biomolecular chains or lattices as wave-like propagations of coupled conformational and electronic disturbances. In material physics Solitons can occur in materials, such as ferroelectrics, in the form of domain walls. Ferroelectric materials exhibit spontaneous polarization, or electric dipoles, which are coupled to configurations of the material structure. Domains of oppositely poled polarizations can be present within a single material, as the structural configurations corresponding to opposing polarizations are equally favorable in the absence of external forces. The domain boundaries, or “walls”, that separate these local structural configurations are regions of lattice dislocations. The domain walls can propagate as the polarizations, and thus the local structural configurations, switch within a domain under applied forces such as an electric bias or mechanical stress. Consequently, the domain walls can be described as solitons: discrete regions of dislocations that are able to slip or propagate while maintaining their shape in width and length. In recent literature, ferroelectricity has been observed in twisted bilayers of van der Waals materials such as molybdenum disulfide and graphene. The moiré superlattice that arises from the relative twist angle between the van der Waals monolayers generates regions of different stacking orders of the atoms within the layers. These regions exhibit inversion-symmetry-breaking structural configurations that enable ferroelectricity at the interface of these monolayers. The domain walls that separate these regions are composed of partial dislocations where different types of stresses, and thus strains, are experienced by the lattice.
It has been observed that soliton or domain wall propagation across a moderate length of the sample (on the order of nanometers to micrometers) can be initiated with applied stress from an AFM tip on a fixed region. The soliton propagation carries the mechanical perturbation with little loss in energy across the material, which enables domain switching in a domino-like fashion. It has also been observed that the type of dislocations found at the walls can affect propagation parameters such as direction. For instance, STM measurements showed four types of strains of varying degrees of shear, compression, and tension at domain walls, depending on the type of localized stacking order in twisted bilayer graphene. Different slip directions of the walls are achieved with different types of strains found at the domains, influencing the direction of the soliton network propagation. Nonidealities such as disruptions to the soliton network and surface impurities can influence soliton propagation as well. Domain walls can meet at nodes and get effectively pinned, forming triangular domains, which have been readily observed in various ferroelectric twisted bilayer systems. In addition, closed loops of domain walls enclosing multiple polarization domains can inhibit soliton propagation, and thus the switching of polarizations across them. Also, domain walls can propagate and meet at wrinkles and surface inhomogeneities within the van der Waals layers, which can act as obstacles obstructing the propagation. In magnets In magnets, there also exist different types of solitons and other nonlinear waves. These magnetic solitons are exact solutions of classical nonlinear differential equations — magnetic equations, e.g. the Landau–Lifshitz equation, continuum Heisenberg model, Ishimori equation, nonlinear Schrödinger equation and others. In nuclear physics Atomic nuclei may exhibit solitonic behavior. Here the whole nuclear wave function is predicted to exist as a soliton under certain conditions of temperature and energy. Such conditions are suggested to exist in the cores of some stars, in which the nuclei would not react but pass through each other unchanged, retaining their soliton waves through a collision between nuclei. The Skyrme model is a model of nuclei in which each nucleus is considered to be a topologically stable soliton solution of a field theory with conserved baryon number. Bions The bound state of two solitons is known as a bion, or, in systems where the bound state periodically oscillates, a breather. The interference-type forces between solitons could be used in making bions. However, these forces are very sensitive to their relative phases. Alternatively, the bound state of solitons could be formed by dressing atoms with highly excited Rydberg levels. The resulting self-generated potential profile features an inner attractive soft core supporting the 3D self-trapped soliton, an intermediate repulsive shell (barrier) preventing the solitons' fusion, and an outer attractive layer (well) used for completing the bound state, resulting in giant stable soliton molecules. In this scheme, the distance and size of the individual solitons in the molecule can be controlled dynamically by adjusting the laser. In field theory, bion usually refers to the solution of the Born–Infeld model. The name appears to have been coined by G. W.
Gibbons in order to distinguish this solution from the conventional soliton, understood as a regular, finite-energy (and usually stable) solution of a differential equation describing some physical system. The word regular means a smooth solution carrying no sources at all. However, the solution of the Born–Infeld model still carries a source in the form of a Dirac-delta function at the origin. As a consequence it displays a singularity in this point (although the electric field is everywhere regular). In some physical contexts (for instance string theory) this feature can be important, which motivated the introduction of a special name for this class of solitons. On the other hand, when gravity is added (i.e. when considering the coupling of the Born–Infeld model to general relativity) the corresponding solution is called EBIon, where "E" stands for Einstein. Alcubierre drive Erik Lentz, a physicist at the University of Göttingen, has theorized that solitons could allow for the generation of Alcubierre warp bubbles in spacetime without the need for exotic matter, i.e., matter with negative mass. See also Compacton, a soliton with compact support Dissipative soliton Freak waves may be a Peregrine soliton related phenomenon involving breather waves which exhibit concentrated localized energy with non-linear properties. Instantons Nematicons Non-topological soliton, in quantum field theory Nonlinear Schrödinger equation Oscillons Pattern formation Peakon, a soliton with a non-differentiable peak Q-ball a non-topological soliton Sine-Gordon equation Soliton (optics) Soliton (topological) Soliton distribution Soliton hypothesis for ball lightning, by David Finkelstein Soliton model of nerve impulse propagation Topological quantum number Vector soliton Notes References Further reading External links Related to John Scott Russell John Scott Russell and the solitary wave John Scott Russell biography Photograph of soliton on the Scott Russell Aqueduct Other Heriot–Watt University soliton page Helmholtz solitons, Salford University Short didactic review on optical solitons 1834 introductions 1834 in science Fluid dynamics Integrable systems Partial differential equations Quasiparticles Wave mechanics
Soliton
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,246
[ "Physical phenomena", "Integrable systems", "Chemical engineering", "Theoretical physics", "Classical mechanics", "Waves", "Wave mechanics", "Subatomic particles", "Condensed matter physics", "Piping", "Quasiparticles", "Matter", "Fluid dynamics" ]
51,668
https://en.wikipedia.org/wiki/Shoot%20%28botany%29
In botany, a plant shoot consists of any plant stem together with its appendages like leaves, lateral buds, flowering stems, and flower buds. The new growth from seed germination that grows upward is a shoot where leaves will develop. In the spring, perennial plant shoots are the new growth that grows from the ground in herbaceous plants or the new stem or flower growth that grows on woody plants. In everyday speech, shoots are often synonymous with stems. Stems, which are an integral component of shoots, provide an axis for buds, fruits, and leaves. Young shoots are often eaten by animals because the fibers in the new growth have not yet completed secondary cell wall development, making the young shoots softer and easier to chew and digest. As shoots grow and age, the cells develop secondary cell walls that have a hard and tough structure. Some plants (e.g. bracken) produce toxins that make their shoots inedible or less palatable. Shoot types of woody plants Many woody plants have distinct short shoots and long shoots. In some angiosperms, the short shoots, also called spur shoots or fruit spurs, produce the majority of flowers and fruit. A similar pattern occurs in some conifers and in Ginkgo, although the "short shoots" of some genera such as Picea are so small that they can be mistaken for part of the leaf that they have produced. A related phenomenon is seasonal heterophylly, which involves visibly different leaves from spring growth and later lammas growth. Whereas spring growth mostly comes from buds formed the previous season, and often includes flowers, lammas growth often involves long shoots. See also Bud Crown (botany) Heteroblasty, an abrupt change in the growth pattern of some plants as they mature Lateral shoot Phyllotaxis, the arrangement of leaves along a plant stem Seedling Sterigma, the "woody peg" below the leaf of some conifers Thorn (botany), true thorns, as distinct from spines or prickles, are short shoots References Plant morphology
Shoot (botany)
[ "Biology" ]
417
[ "Plant morphology", "Plants" ]
51,672
https://en.wikipedia.org/wiki/Vacuous%20truth
In mathematics and logic, a vacuous truth is a conditional or universal statement (a universal statement that can be converted to a conditional statement) that is true because the antecedent cannot be satisfied. It is sometimes said that a statement is vacuously true because it does not really say anything. For example, the statement "all cell phones in the room are turned off" will be true when no cell phones are present in the room. In this case, the statement "all cell phones in the room are turned on" would also be vacuously true, as would the conjunction of the two: "all cell phones in the room are turned on and turned off", which would otherwise be incoherent and false. More formally, a relatively well-defined usage refers to a conditional statement (or a universal conditional statement) with a false antecedent. One example of such a statement is "if Tokyo is in Spain, then the Eiffel Tower is in Bolivia". Such statements are considered vacuous truths because the fact that the antecedent is false prevents using the statement to infer anything about the truth value of the consequent. In essence, a conditional statement that is based on the material conditional is true when the antecedent ("Tokyo is in Spain" in the example) is false, regardless of whether the conclusion or consequent ("the Eiffel Tower is in Bolivia" in the example) is true or false, because the material conditional is defined in that way. Examples common to everyday speech include conditional phrases used as idioms of improbability like "when hell freezes over ..." and "when pigs can fly ...", indicating that not before the given (impossible) condition is met will the speaker accept some respective (typically false or absurd) proposition. In pure mathematics, vacuously true statements are not generally of interest by themselves, but they frequently arise as the base case of proofs by mathematical induction. This notion has relevance in pure mathematics, as well as in any other field that uses classical logic. Outside of mathematics, statements in the form of a vacuous truth, while logically valid, can nevertheless be misleading. Such statements make reasonable assertions about qualified objects which do not actually exist. For example, a child might truthfully tell their parent "I ate every vegetable on my plate", when there were no vegetables on the child's plate to begin with. In this case, the parent can believe that the child has actually eaten some vegetables, even though that is not true. Scope of the concept A statement is "vacuously true" if it resembles a material conditional statement P ⇒ Q, where the antecedent P is known to be false. Vacuously true statements that can be reduced (with suitable transformations) to this basic form (material conditional) include the following universally quantified statements: ∀x: P(x) ⇒ Q(x), where it is the case that ∀x: ¬P(x); and ∀x ∈ A: Q(x), where the set A is empty. This logical form can be converted to the material conditional form in order to easily identify the antecedent. For the above example "all cell phones in the room are turned off", it can be formally written as ∀x ∈ S: T(x), where S is the set of all cell phones in the room and T(x) is "x is turned off". This can be rewritten as the material conditional statement ∀x ∈ R: (C(x) ⇒ T(x)), where R is the set of all things in the room (including cell phones if they exist in the room), the antecedent C(x) is "x is a cell phone", and the consequent T(x) is "x is turned off". A further such statement is ∀ξ: Q(ξ), where the symbol ξ is restricted to a type that has no representatives. Vacuous truths most commonly appear in classical logic with two truth values.
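A minimal Python sketch of the formalization just given (the names room, is_cell_phone and is_turned_off are only illustrative): when the set S of cell phones in the room is empty, both universally quantified statements, and the material-conditional form ranging over everything in the room, evaluate to true.

```python
# The room contains no cell phones, so the set S below is empty.
room = ["chair", "table", "lamp"]        # things in the room (illustrative)

def is_cell_phone(x):
    return x == "cell phone"

def is_turned_off(x):
    # Never called for this room; any definition would do.
    return False

# Universal-quantifier form  "for all x in S: T(x)"  with S empty:
cell_phones = [x for x in room if is_cell_phone(x)]                   # S is empty here
print(all(is_turned_off(x) for x in cell_phones))                     # True (vacuously)
print(all(not is_turned_off(x) for x in cell_phones))                 # True (also vacuously)

# Material-conditional form  "for all x in R: C(x) implies T(x)":
print(all((not is_cell_phone(x)) or is_turned_off(x) for x in room))  # True
```

This is the same convention most programming languages adopt for "for all" queries over empty collections, as discussed below.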
However, vacuous truths can also appear in, for example, intuitionistic logic, in the same situations as given above. Indeed, if P is false, then P ⇒ Q will yield a vacuous truth in any logic that uses the material conditional; if P is a necessary falsehood, then it will also yield a vacuous truth under the strict conditional. Other non-classical logics, such as relevance logic, may attempt to avoid vacuous truths by using alternative conditionals (such as the case of the counterfactual conditional). In computer programming Many programming environments have a mechanism for querying if every item in a collection of items satisfies some predicate. It is common for such a query to always evaluate as true for an empty collection. For example: In JavaScript, the array method every executes a provided callback function once for each element present in the array, only stopping (if and when) it finds an element where the callback function returns false. Notably, calling the every method on an empty array will return true for any condition. In Python, the built-in all() function returns True only when all of the elements of an array are True or the array is of length zero, as shown in these examples: all([1,1])==True; all([1,1,0])==False; all([])==True. A less ambiguous way to express this is to say all() returns True when none of the elements are False. In Rust, the Iterator::all function accepts an iterator and a predicate and returns true only when the predicate returns true for all items produced by the iterator, or if the iterator produces no items. In SQL, the behaviour of the function ANY_VALUE can differ depending on how the RDBMS relates NULLs to vacuous truth. Some RDBMSs might return NULL even if there are non-NULL values. Some might not allow its use in filter(…) or over(…) clauses. In Kotlin, the collection method all returns true when the collection is empty. In C#, the LINQ method All returns true when the collection is empty. In C++, the std::all_of function template returns true for an empty collection. Examples These examples, one from mathematics and one from natural language, illustrate the concept of vacuous truths: "For any integer x, if x > 5 then x > 3." – This statement is true non-vacuously (since some integers are indeed greater than 5), but some of its implications are only vacuously true: for example, when x is the integer 2, the statement implies the vacuous truth that "if 2 > 5 then 2 > 3". "All my children are goats" is a vacuous truth when spoken by someone without children. Similarly, "None of my children is a goat" would also be a vacuous truth when spoken by the same person. See also Definite description De Morgan's laws – specifically the law that a universal statement is true just in case no counterexample exists: ∀x P(x) ≡ ¬∃x ¬P(x) Empty sum and empty product Empty function Paradoxes of material implication, especially the principle of explosion Presupposition, double question State of affairs (philosophy) Tautology (logic) – another type of true statement that also fails to convey any substantive information Triviality (mathematics) and degeneracy (mathematics) References Bibliography Blackburn, Simon (1994). "vacuous", The Oxford Dictionary of Philosophy. Oxford: Oxford University Press, p. 388. David H. Sanford (1999). "implication". The Cambridge Dictionary of Philosophy, 2nd. ed., p. 420. External links Conditional Assertions: Vacuous truth Mathematical logic Informal fallacies Logical truth
Vacuous truth
[ "Mathematics" ]
1,542
[ "Mathematical logic", "Logical truth" ]
51,698
https://en.wikipedia.org/wiki/Extended%20real%20number%20line
In mathematics, the extended real number system is obtained from the real number system ℝ by adding two elements denoted +∞ and −∞ that are respectively greater and lower than every real number. This allows for treating the potential infinities of infinitely increasing sequences and infinitely decreasing series as actual infinities. For example, the infinite sequence of the natural numbers increases infinitely and has no upper bound in the real number system (a potential infinity); in the extended real number line, the sequence has +∞ as its least upper bound and as its limit (an actual infinity). In calculus and mathematical analysis, the use of +∞ and −∞ as actual limits extends significantly the possible computations. It is the Dedekind–MacNeille completion of the real numbers. The extended real number system is denoted ℝ̄, [−∞, +∞], or ℝ ∪ {−∞, +∞}. When the meaning is clear from context, the symbol +∞ is often written simply as ∞. There is also a distinct projectively extended real line where +∞ and −∞ are not distinguished, i.e., there is a single actual infinity for both infinitely increasing sequences and infinitely decreasing sequences that is denoted as just ∞ or as ±∞. Motivation Limits The extended number line is often useful to describe the behavior of a function f when either the argument x or the function value f(x) gets "infinitely large" in some sense. For example, consider the function f defined by f(x) = 1/x². The graph of this function has a horizontal asymptote at y = 0. Geometrically, when moving increasingly farther to the right along the x-axis, the value of f(x) approaches 0. This limiting behavior is similar to the limit of a function f(x) in which the real number x approaches x₀, except that there is no real number that x approaches when x increases infinitely. Adjoining the elements +∞ and −∞ to ℝ enables a definition of "limits at infinity", which is very similar to the usual definition of limits, except that the condition |x − x₀| < δ is replaced by x > N (for +∞) or x < −N (for −∞). This allows proving and writing, for example, lim (x → +∞) 1/x² = 0. Measure and integration In measure theory, it is often useful to allow sets that have infinite measure and integrals whose value may be infinite. Such measures arise naturally out of calculus. For example, in assigning a measure to ℝ that agrees with the usual length of intervals, this measure must be larger than any finite real number. Also, when considering improper integrals, such as ∫₁^∞ (1/x) dx, the value "infinity" arises. Finally, it is often useful to consider the limit of a sequence of functions, such as . Without allowing functions to take on infinite values, such essential results as the monotone convergence theorem and the dominated convergence theorem would not make sense. Order and topological properties The extended real number system ℝ̄, defined as [−∞, +∞] or ℝ ∪ {−∞, +∞}, can be turned into a totally ordered set by defining −∞ ≤ a ≤ +∞ for all a. With this order topology, ℝ̄ has the desirable property of compactness: every subset of ℝ̄ has a supremum and an infimum (the infimum of the empty set is +∞, and its supremum is −∞). Moreover, with this topology, ℝ̄ is homeomorphic to the unit interval [0, 1]. Thus the topology is metrizable, corresponding (for a given homeomorphism) to the ordinary metric on this interval. There is no metric, however, that is an extension of the ordinary metric on ℝ. In this topology, a set A is a neighborhood of +∞ if and only if it contains a set {x : x > a} for some real number a. The notion of the neighborhood of −∞ can be defined similarly.
Using this characterization of extended-real neighborhoods, limits with tending to or , and limits "equal" to and , reduce to the general topological definition of limits—instead of having a special definition in the real number system. Arithmetic operations The arithmetic operations of can be partially extended to as follows: For exponentiation, see . Here, means both and , while means both and . The expressions , , and (called indeterminate forms) are usually left undefined. These rules are modeled on the laws for infinite limits. However, in the context of probability or measure theory, is often defined as 0. When dealing with both positive and negative extended real numbers, the expression is usually left undefined, because, although it is true that for every real nonzero sequence that converges to 0, the reciprocal sequence is eventually contained in every neighborhood of , it is not true that the sequence must itself converge to either or Said another way, if a continuous function achieves a zero at a certain value then it need not be the case that tends to either or in the limit as tends to . This is the case for the limits of the identity function when tends to 0, and of (for the latter function, neither nor is a limit of , even if only positive values of are considered). However, in contexts where only non-negative values are considered, it is often convenient to define . For example, when working with power series, the radius of convergence of a power series with coefficients is often defined as the reciprocal of the limit-supremum of the sequence . Thus, if one allows to take the value , then one can use this formula regardless of whether the limit-supremum is 0 or not. Algebraic properties With the arithmetic operations defined above, is not even a semigroup, let alone a group, a ring or a field as in the case of . However, it has several convenient properties: and are either equal or both undefined. and are either equal or both undefined. and are either equal or both undefined. and are either equal or both undefined and are equal if both are defined. If and if both and are defined, then . If and and if both and are defined, then . In general, all laws of arithmetic are valid in as long as all occurring expressions are defined. Miscellaneous Several functions can be continuously extended to by taking limits. For instance, one may define the extremal points of the following functions as: , , , . Some singularities may additionally be removed. For example, the function can be continuously extended to (under some definitions of continuity), by setting the value to for , and 0 for and . On the other hand, the function cannot be continuously extended, because the function approaches as approaches 0 from below, and as approaches 0 from above, i.e., the function not converging to the same value as its independent variable approaching to the same domain element from both the positive and negative value sides. A similar but different real-line system, the projectively extended real line, does not distinguish between and (i.e. infinity is unsigned). As a result, a function may have limit on the projectively extended real line, while in the extended real number system only the absolute value of the function has a limit, e.g. in the case of the function at . On the other hand, on the projectively extended real line, and correspond to only a limit from the right and one from the left, respectively, with the full limit only existing when the two are equal. 
Thus, the functions and cannot be made continuous at on the projectively extended real line. See also Division by zero Extended complex plane Extended natural numbers Improper integral Infinity Log semiring Series (mathematics) Projectively extended real line Computer representations of extended real numbers, see and IEEE floating point Notes References Further reading Infinity Real numbers
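IEEE floating-point arithmetic, mentioned above among the computer representations of extended real numbers, implements essentially this affinely extended system. A minimal Python sketch using math.inf (the values shown follow directly from IEEE 754 behaviour):

```python
import math

inf = math.inf          # plays the role of +infinity

# Defined extended-real operations:
print(inf + 1.0)        # inf   (+infinity plus a real number is +infinity)
print(-inf * 3.0)       # -inf
print(1.0 / inf)        # 0.0   (the reciprocal of infinity)
print(math.atan(inf))   # 1.5707963... (= pi/2, the continuous extension of arctan)

# Indeterminate forms are left undefined; IEEE 754 signals this with NaN:
print(inf - inf)        # nan
print(inf * 0.0)        # nan

# Division of a nonzero real by zero is usually left undefined when both signs
# of infinity are present; Python mirrors this by raising an exception for floats.
try:
    1.0 / 0.0
except ZeroDivisionError as exc:
    print("undefined:", exc)
```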
Extended real number line
[ "Mathematics" ]
1,451
[ "Real numbers", "Mathematical objects", "Infinity", "Numbers" ]
51,702
https://en.wikipedia.org/wiki/Superscalar%20processor
A superscalar processor (or multiple-issue processor) is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor. In contrast to a scalar processor, which can execute at most one single instruction per clock cycle, a superscalar processor can execute or start executing more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to different execution units on the processor. It therefore allows more throughput (the number of instructions that can be executed in a unit of time, which for a non-superscalar processor can even be less than one instruction per cycle) than would otherwise be possible at a given clock rate. Each execution unit is not a separate processor (or a core if the processor is a multi-core processor), but an execution resource within a single CPU such as an arithmetic logic unit. While a superscalar CPU is typically also pipelined, superscalar and pipelining execution are considered different performance enhancement techniques. The former (superscalar) executes multiple instructions in parallel by using multiple execution units, whereas the latter (pipelining) executes multiple instructions in the same execution unit in parallel by dividing the execution unit into different phases. In the "Simple superscalar pipeline" figure, fetching two instructions at the same time is superscaling, and fetching the next two before the first pair has been written back is pipelining. The superscalar technique is traditionally associated with several identifying characteristics (within a given CPU): Instructions are issued from a sequential instruction stream The CPU dynamically checks for data dependencies between instructions at run time (versus software checking at compile time) The CPU can execute multiple instructions per clock cycle History Seymour Cray's CDC 6600 from 1964 is often mentioned as the first superscalar design. The 1967 IBM System/360 Model 91 was another superscalar mainframe. The Intel i960CA (1989), the AMD 29000-series 29050 (1990), and the Motorola MC88110 (1991) were the first commercial single-chip superscalar microprocessors. RISC microprocessors like these were the first to have superscalar execution, because RISC architectures free transistors and die area that can be used to include multiple execution units, and the traditional uniformity of the instruction set favors superscalar dispatch (this was why RISC designs were faster than CISC designs through the 1980s and into the 1990s, and it is far more complicated to do multiple dispatch when instructions have variable bit length). Except for CPUs used in low-power applications, embedded systems, and battery-powered devices, essentially all general-purpose CPUs developed since about 1998 are superscalar. The P5 Pentium was the first superscalar x86 processor; the Nx586, P6 Pentium Pro and AMD K5 were among the first designs which decode x86 instructions asynchronously into dynamic microcode-like micro-op sequences prior to actual execution on a superscalar microarchitecture; this opened the way for dynamic scheduling of buffered partial instructions and enabled more parallelism to be extracted compared to the more rigid methods used in the simpler P5 Pentium; it also simplified speculative execution and allowed higher clock frequencies compared to designs such as the advanced Cyrix 6x86. Scalar to superscalar The simplest processors are scalar processors. Each instruction executed by a scalar processor typically manipulates one or two data items at a time.
By contrast, each instruction executed by a vector processor operates simultaneously on many data items. An analogy is the difference between scalar and vector arithmetic. A superscalar processor is a mixture of the two. Each instruction processes one data item, but there are multiple execution units within each CPU thus multiple instructions can be processing separate data items concurrently. Superscalar CPU design emphasizes improving the instruction dispatcher accuracy and allowing it to keep the multiple execution units in use at all times. This has become increasingly important as the number of units has increased. While early superscalar CPUs would have two ALUs and a single FPU, a later design such as the PowerPC 970 includes four ALUs, two FPUs, and two SIMD units. If the dispatcher is ineffective at keeping all of these units fed with instructions, the performance of the system will be no better than that of a simpler, cheaper design. A superscalar processor usually sustains an execution rate in excess of one instruction per machine cycle. But merely processing multiple instructions concurrently does not make an architecture superscalar, since pipelined, multiprocessor or multi-core architectures also achieve that, but with different methods. In a superscalar CPU the dispatcher reads instructions from memory and decides which ones can be run in parallel, dispatching each to one of the several execution units contained inside a single CPU. Therefore, a superscalar processor can be envisioned as having multiple parallel pipelines, each of which is processing instructions simultaneously from a single instruction thread. Most modern superscalar CPUs also have logic to reorder the instructions to try to avoid pipeline stalls and increase parallel execution. Limitations Available performance improvement from superscalar techniques is limited by three key areas: The degree of intrinsic parallelism in the instruction stream (instructions requiring the same computational resources from the CPU) The complexity and time cost of dependency checking logic and register renaming circuitry The branch instruction processing Existing binary executable programs have varying degrees of intrinsic parallelism. In some cases instructions are not dependent on each other and can be executed simultaneously. In other cases they are inter-dependent: one instruction impacts either resources or results of the other. The instructions a = b + c; d = e + f can be run in parallel because none of the results depend on other calculations. However, the instructions a = b + c; b = e + f might not be runnable in parallel, depending on the order in which the instructions complete while they move through the units. Although the instruction stream may contain no inter-instruction dependencies, a superscalar CPU must nonetheless check for that possibility, since there is no assurance otherwise and failure to detect a dependency would produce incorrect results. No matter how advanced the semiconductor process or how fast the switching speed, this places a practical limit on how many instructions can be simultaneously dispatched. While process advances will allow ever greater numbers of execution units (e.g. ALUs), the burden of checking instruction dependencies grows rapidly, as does the complexity of register renaming circuitry to mitigate some dependencies. Collectively the power consumption, complexity and gate delay costs limit the achievable superscalar speedup. 
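As a minimal sketch of the kind of test a dispatcher must perform (illustrative only; real hardware works on decoded register fields rather than Python tuples), the following function flags read-after-write, write-after-read, and write-after-write hazards between two candidate instructions, reproducing the two cases discussed above.

```python
# Each instruction is modelled as (destination_register, source_registers).
Instr = tuple  # e.g. ("a", ("b", "c")) stands for  a = b + c

def can_dual_issue(i1: Instr, i2: Instr) -> bool:
    """Return True if the two instructions have no data dependency
    (no RAW, WAR or WAW hazard) and could be dispatched together."""
    d1, s1 = i1
    d2, s2 = i2
    raw = d1 in s2            # i2 reads what i1 writes
    war = d2 in s1            # i2 writes what i1 reads
    waw = d1 == d2            # both write the same register
    return not (raw or war or waw)

# a = b + c ; d = e + f  -> independent, can issue in parallel
print(can_dual_issue(("a", ("b", "c")), ("d", ("e", "f"))))   # True

# a = b + c ; b = e + f  -> the second writes a source of the first (WAR hazard)
print(can_dual_issue(("a", ("b", "c")), ("b", ("e", "f"))))   # False
```

Register renaming removes the write-after-read and write-after-write cases, which is why the complexity of the renaming circuitry grows together with that of the dependency-checking logic.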
However even given infinitely fast dependency checking logic on an otherwise conventional superscalar CPU, if the instruction stream itself has many dependencies, this would also limit the possible speedup. Thus the degree of intrinsic parallelism in the code stream forms a second limitation. Alternatives Collectively, these limits drive investigation into alternative architectural changes such as very long instruction word (VLIW), explicitly parallel instruction computing (EPIC), simultaneous multithreading (SMT), and multi-core computing. With VLIW, the burdensome task of dependency checking by hardware logic at run time is removed and delegated to the compiler. Explicitly parallel instruction computing (EPIC) is like VLIW with extra cache prefetching instructions. Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar processors. SMT permits multiple independent threads of execution to better utilize the resources provided by modern processor architectures. The fact that they are independent means that we know that the instruction of one thread can be executed out of order and/or in parallel with the instruction of a different one. Also, one independent thread will not produce a pipeline bubble in the code stream of a different one, for example, due to a branch. Superscalar processors differ from multi-core processors in that the several execution units are not entire processors. A single processor is composed of finer-grained execution units such as the ALU, integer multiplier, integer shifter, FPU, etc. There may be multiple versions of each execution unit to enable the execution of many instructions in parallel. This differs from a multi-core processor that concurrently processes instructions from multiple threads, one thread per processing unit (called "core"). It also differs from a pipelined processor, where the multiple instructions can concurrently be in various stages of execution, assembly-line fashion. The various alternative techniques are not mutually exclusive—they can be (and frequently are) combined in a single processor. Thus a multicore CPU is possible where each core is an independent processor containing multiple parallel pipelines, each pipeline being superscalar. Some processors also include vector capability. See also Eager execution Hyper-threading Simultaneous multithreading Out-of-order execution Shelving buffer Speculative execution Software lockout, a multiprocessor issue similar to logic dependencies on superscalars Super-threading References Mike Johnson, Superscalar Microprocessor Design, Prentice-Hall, 1991, Sorin Cotofana, Stamatis Vassiliadis, "On the Design Complexity of the Issue Logic of Superscalar Machines", EUROMICRO 1998: 10277-10284 Steven McGeady, et al., "Performance Enhancements in the Superscalar i960MM Embedded Microprocessor," ACM Proceedings of the 1991 Conference on Computer Architecture (Compcon), 1991, pp. 4–7 External links Eager Execution / Dual Path / Multiple Path, By Mark Smotherman Classes of computers Computer architecture Parallel computing
Superscalar processor
[ "Technology", "Engineering" ]
1,992
[ "Computer engineering", "Computer architecture", "Computer systems", "Computers", "Classes of computers" ]
51,714
https://en.wikipedia.org/wiki/Taylor%27s%20theorem
In calculus, Taylor's theorem gives an approximation of a -times differentiable function around a given point by a polynomial of degree , called the -th-order Taylor polynomial. For a smooth function, the Taylor polynomial is the truncation at the order of the Taylor series of the function. The first-order Taylor polynomial is the linear approximation of the function, and the second-order Taylor polynomial is often referred to as the quadratic approximation. There are several versions of Taylor's theorem, some giving explicit estimates of the approximation error of the function by its Taylor polynomial. Taylor's theorem is named after the mathematician Brook Taylor, who stated a version of it in 1715, although an earlier version of the result was already mentioned in 1671 by James Gregory. Taylor's theorem is taught in introductory-level calculus courses and is one of the central elementary tools in mathematical analysis. It gives simple arithmetic formulas to accurately compute values of many transcendental functions such as the exponential function and trigonometric functions. It is the starting point of the study of analytic functions, and is fundamental in various areas of mathematics, as well as in numerical analysis and mathematical physics. Taylor's theorem also generalizes to multivariate and vector valued functions. It provided the mathematical basis for some landmark early computing machines: Charles Babbage's Difference Engine calculated sines, cosines, logarithms, and other transcendental functions by numerically integrating the first 7 terms of their Taylor series. Motivation If a real-valued function is differentiable at the point , then it has a linear approximation near this point. This means that there exists a function h1(x) such that Here is the linear approximation of for x near the point a, whose graph is the tangent line to the graph at . The error in the approximation is: As x tends to a, this error goes to zero much faster than , making a useful approximation. For a better approximation to , we can fit a quadratic polynomial instead of a linear function: Instead of just matching one derivative of at , this polynomial has the same first and second derivatives, as is evident upon differentiation. Taylor's theorem ensures that the quadratic approximation is, in a sufficiently small neighborhood of , more accurate than the linear approximation. Specifically, Here the error in the approximation is which, given the limiting behavior of , goes to zero faster than as x tends to a. Similarly, we might get still better approximations to f if we use polynomials of higher degree, since then we can match even more derivatives with f at the selected base point. In general, the error in approximating a function by a polynomial of degree k will go to zero much faster than as x tends to a. However, there are functions, even infinitely differentiable ones, for which increasing the degree of the approximating polynomial does not increase the accuracy of approximation: we say such a function fails to be analytic at x = a: it is not (locally) determined by its derivatives at this point. Taylor's theorem is of asymptotic nature: it only tells us that the error in an approximation by a -th order Taylor polynomial Pk tends to zero faster than any nonzero -th degree polynomial as . 
It does not tell us how large the error is in any concrete neighborhood of the center of expansion, but for this purpose there are explicit formulas for the remainder term (given below) which are valid under some additional regularity assumptions on f. These enhanced versions of Taylor's theorem typically lead to uniform estimates for the approximation error in a small neighborhood of the center of expansion, but the estimates do not necessarily hold for neighborhoods which are too large, even if the function f is analytic. In that situation one may have to select several Taylor polynomials with different centers of expansion to have reliable Taylor-approximations of the original function (see animation on the right.) There are several ways we might use the remainder term: Estimate the error for a polynomial Pk(x) of degree k estimating on a given interval (a – r, a + r). (Given the interval and degree, we find the error.) Find the smallest degree k for which the polynomial Pk(x) approximates to within a given error tolerance on a given interval (a − r, a + r) . (Given the interval and error tolerance, we find the degree.) Find the largest interval (a − r, a + r) on which Pk(x) approximates to within a given error tolerance. (Given the degree and error tolerance, we find the interval.) Taylor's theorem in one real variable Statement of the theorem The precise statement of the most basic version of Taylor's theorem is as follows: The polynomial appearing in Taylor's theorem is the -th order Taylor polynomial of the function f at the point a. The Taylor polynomial is the unique "asymptotic best fit" polynomial in the sense that if there exists a function and a -th order polynomial p such that then p = Pk. Taylor's theorem describes the asymptotic behavior of the remainder term which is the approximation error when approximating f with its Taylor polynomial. Using the little-o notation, the statement in Taylor's theorem reads as Explicit formulas for the remainder Under stronger regularity assumptions on f there are several precise formulas for the remainder term Rk of the Taylor polynomial, the most common ones being the following. These refinements of Taylor's theorem are usually proved using the mean value theorem, whence the name. Additionally, notice that this is precisely the mean value theorem when . Also other similar expressions can be found. For example, if G(t) is continuous on the closed interval and differentiable with a non-vanishing derivative on the open interval between and , then for some number between and . This version covers the Lagrange and Cauchy forms of the remainder as special cases, and is proved below using Cauchy's mean value theorem. The Lagrange form is obtained by taking and the Cauchy form is obtained by taking . The statement for the integral form of the remainder is more advanced than the previous ones, and requires understanding of Lebesgue integration theory for the full generality. However, it holds also in the sense of Riemann integral provided the (k + 1)th derivative of f is continuous on the closed interval [a,x]. Due to the absolute continuity of f on the closed interval between and , its derivative f exists as an L-function, and the result can be proven by a formal calculation using the fundamental theorem of calculus and integration by parts. Estimates for the remainder It is often useful in practice to be able to estimate the remainder term appearing in the Taylor approximation, rather than having an exact formula for it. 
Suppose that f is -times continuously differentiable in an interval I containing a. Suppose that there are real constants q and Q such that throughout I. Then the remainder term satisfies the inequality if , and a similar estimate if . This is a simple consequence of the Lagrange form of the remainder. In particular, if on an interval with some , then for all The second inequality is called a uniform estimate, because it holds uniformly for all x on the interval Example Suppose that we wish to find the approximate value of the function on the interval while ensuring that the error in the approximation is no more than 10−5. In this example we pretend that we only know the following properties of the exponential function: From these properties it follows that for all , and in particular, . Hence the -th order Taylor polynomial of at and its remainder term in the Lagrange form are given by where is some number between 0 and x. Since ex is increasing by (), we can simply use for to estimate the remainder on the subinterval . To obtain an upper bound for the remainder on , we use the property for to estimate using the second order Taylor expansion. Then we solve for ex to deduce that simply by maximizing the numerator and minimizing the denominator. Combining these estimates for ex we see that so the required precision is certainly reached, when (See factorial or compute by hand the values and .) As a conclusion, Taylor's theorem leads to the approximation For instance, this approximation provides a decimal expression , correct up to five decimal places. Relationship to analyticity Taylor expansions of real analytic functions Let I ⊂ R be an open interval. By definition, a function f : I → R is real analytic if it is locally defined by a convergent power series. This means that for every a ∈ I there exists some r > 0 and a sequence of coefficients ck ∈ R such that and In general, the radius of convergence of a power series can be computed from the Cauchy–Hadamard formula This result is based on comparison with a geometric series, and the same method shows that if the power series based on a converges for some b ∈ R, it must converge uniformly on the closed interval , where . Here only the convergence of the power series is considered, and it might well be that extends beyond the domain I of f. The Taylor polynomials of the real analytic function f at a are simply the finite truncations of its locally defining power series, and the corresponding remainder terms are locally given by the analytic functions Here the functions are also analytic, since their defining power series have the same radius of convergence as the original series. Assuming that ⊂ I and r < R, all these series converge uniformly on . Naturally, in the case of analytic functions one can estimate the remainder term by the tail of the sequence of the derivatives f′(a) at the center of the expansion, but using complex analysis also another possibility arises, which is described below. Taylor's theorem and convergence of Taylor series The Taylor series of f will converge in some interval in which all its derivatives are bounded and do not grow too fast as k goes to infinity. (However, even if the Taylor series converges, it might not converge to f, as explained below; f is then said to be non-analytic.) One might think of the Taylor series of an infinitely many times differentiable function f : R → R as its "infinite order Taylor polynomial" at a. 
Now the estimates for the remainder imply that if, for any r, the derivatives of f are known to be bounded over (a − r, a + r), then for any order k and for any r > 0 there exists a constant such that for every x ∈ (a − r,a + r). Sometimes the constants can be chosen in such way that is bounded above, for fixed r and all k. Then the Taylor series of f converges uniformly to some analytic function (One also gets convergence even if is not bounded above as long as it grows slowly enough.) The limit function is by definition always analytic, but it is not necessarily equal to the original function f, even if f is infinitely differentiable. In this case, we say f is a non-analytic smooth function, for example a flat function: Using the chain rule repeatedly by mathematical induction, one shows that for any order k, for some polynomial pk of degree 2(k − 1). The function tends to zero faster than any polynomial as , so f is infinitely many times differentiable and for every positive integer k. The above results all hold in this case: The Taylor series of f converges uniformly to the zero function Tf(x) = 0, which is analytic with all coefficients equal to zero. The function f is unequal to this Taylor series, and hence non-analytic. For any order k ∈ N and radius r > 0 there exists Mk,r > 0 satisfying the remainder bound () above. However, as k increases for fixed r, the value of Mk,r grows more quickly than rk, and the error does not go to zero. Taylor's theorem in complex analysis Taylor's theorem generalizes to functions f : C → C which are complex differentiable in an open subset U ⊂ C of the complex plane. However, its usefulness is dwarfed by other general theorems in complex analysis. Namely, stronger versions of related results can be deduced for complex differentiable functions f : U → C using Cauchy's integral formula as follows. Let r > 0 such that the closed disk B(z, r) ∪ S(z, r) is contained in U. Then Cauchy's integral formula with a positive parametrization of the circle S(z, r) with gives Here all the integrands are continuous on the circle S(z, r), which justifies differentiation under the integral sign. In particular, if f is once complex differentiable on the open set U, then it is actually infinitely many times complex differentiable on U. One also obtains Cauchy's estimate for any z ∈ U and r > 0 such that B(z, r) ∪ S(c, r) ⊂ U. The estimate implies that the complex Taylor series of f converges uniformly on any open disk with into some function Tf. Furthermore, using the contour integral formulas for the derivatives f(c), so any complex differentiable function f in an open set U ⊂ C is in fact complex analytic. All that is said for real analytic functions here holds also for complex analytic functions with the open interval I replaced by an open subset U ∈ C and a-centered intervals (a − r, a + r) replaced by c-centered disks B(c, r). In particular, the Taylor expansion holds in the form where the remainder term Rk is complex analytic. Methods of complex analysis provide some powerful results regarding Taylor expansions. For example, using Cauchy's integral formula for any positively oriented Jordan curve which parametrizes the boundary of a region , one obtains expressions for the derivatives as above, and modifying slightly the computation for , one arrives at the exact formula The important feature here is that the quality of the approximation by a Taylor polynomial on the region is dominated by the values of the function f itself on the boundary . 
Similarly, applying Cauchy's estimates to the series expression for the remainder, one obtains the uniform estimates Example The function is real analytic, that is, locally determined by its Taylor series. This function was plotted above to illustrate the fact that some elementary functions cannot be approximated by Taylor polynomials in neighborhoods of the center of expansion which are too large. This kind of behavior is easily understood in the framework of complex analysis. Namely, the function f extends into a meromorphic function on the compactified complex plane. It has simple poles at and , and it is analytic elsewhere. Now its Taylor series centered at z0 converges on any disc B(z0, r) with r < |z − z0|, where the same Taylor series converges at z ∈ C. Therefore, Taylor series of f centered at 0 converges on B(0, 1) and it does not converge for any z ∈ C with |z| > 1 due to the poles at i and −i. For the same reason the Taylor series of f centered at 1 converges on and does not converge for any z ∈ C with . Generalizations of Taylor's theorem Higher-order differentiability A function is differentiable at if and only if there exists a linear functional and a function such that If this is the case, then is the (uniquely defined) differential of at the point . Furthermore, then the partial derivatives of exist at and the differential of at is given by Introduce the multi-index notation for and . If all the -th order partial derivatives of are continuous at , then by Clairaut's theorem, one can change the order of mixed derivatives at , so the short-hand notation for the higher order partial derivatives is justified in this situation. The same is true if all the ()-th order partial derivatives of exist in some neighborhood of and are differentiable at . Then we say that is times differentiable at the point . Taylor's theorem for multivariate functions Using notations of the preceding section, one has the following theorem. If the function is times continuously differentiable in a closed ball for some , then one can derive an exact formula for the remainder in terms of order partial derivatives of f in this neighborhood. Namely, In this case, due to the continuity of ()-th order partial derivatives in the compact set , one immediately obtains the uniform estimates Example in two dimensions For example, the third-order Taylor polynomial of a smooth function is, denoting , Proofs Proof for Taylor's theorem in one real variable Let where, as in the statement of Taylor's theorem, It is sufficient to show that The proof here is based on repeated application of L'Hôpital's rule. Note that, for each , . Hence each of the first derivatives of the numerator in vanishes at , and the same is true of the denominator. Also, since the condition that the function be times differentiable at a point requires differentiability up to order in a neighborhood of said point (this is true, because differentiability requires a function to be defined in a whole neighborhood of a point), the numerator and its derivatives are differentiable in a neighborhood of . Clearly, the denominator also satisfies said condition, and additionally, doesn't vanish unless , therefore all conditions necessary for L'Hôpital's rule are fulfilled, and its use is justified. So where the second-to-last equality follows by the definition of the derivative at . Alternate proof for Taylor's theorem in one real variable Let be any real-valued continuous function to be approximated by the Taylor polynomial. Step 1: Let and be functions. 
Proofs Proof for Taylor's theorem in one real variable Let hk(x) = (f(x) − P(x)) / (x − a)^k for x ≠ a and hk(a) = 0, where, as in the statement of Taylor's theorem, P(x) = f(a) + f′(a)(x − a) + (f″(a)/2!)(x − a)² + ⋯ + (f^(k)(a)/k!)(x − a)^k is the k-th order Taylor polynomial. It is sufficient to show that hk(x) → 0 as x → a. The proof here is based on repeated application of L'Hôpital's rule. Note that, for each j = 0, 1, …, k − 1, f^(j)(a) = P^(j)(a). Hence each of the first k − 1 derivatives of the numerator in hk(x) vanishes at x = a, and the same is true of the denominator. Also, since the condition that the function f be k times differentiable at a point requires differentiability up to order k − 1 in a neighborhood of said point (this is true, because differentiability requires a function to be defined in a whole neighborhood of a point), the numerator and its k − 2 derivatives are differentiable in a neighborhood of a. Clearly, the denominator also satisfies said condition, and additionally, doesn't vanish unless x = a, therefore all conditions necessary for L'Hôpital's rule are fulfilled, and its use is justified. So lim_{x→a} (f(x) − P(x))/(x − a)^k = lim_{x→a} (f′(x) − P′(x))/(k(x − a)^(k−1)) = ⋯ = (1/k!) lim_{x→a} (f^(k−1)(x) − P^(k−1)(x))/(x − a) = (1/k!)(f^(k)(a) − f^(k)(a)) = 0, where the second-to-last equality follows by the definition of the derivative at x = a. Alternate proof for Taylor's theorem in one real variable Let f be any real-valued continuous function to be approximated by the Taylor polynomial. Step 1: Let and be functions. Set and to be Step 2: Properties of and : Similarly, Step 3: Use Cauchy Mean Value Theorem Let and be continuous functions on . Since so we can work with the interval . Let and be differentiable on . Assume for all . Then there exists such that Note: in and so for some . This can also be performed for : for some . This can be continued to . This gives a partition in : with Set : Step 4: Substitute back By the Power Rule, repeated derivatives of , , so: This leads to: By rearranging, we get: or because eventually: Derivation for the mean value forms of the remainder Let G be any real-valued function, continuous on the closed interval between a and x and differentiable with a non-vanishing derivative on the open interval between a and x, and define F(t) = f(t) + f′(t)(x − t) + (f″(t)/2!)(x − t)² + ⋯ + (f^(k)(t)/k!)(x − t)^k for t on the closed interval between a and x. Then, by Cauchy's mean value theorem, F′(ξ)/G′(ξ) = (F(x) − F(a))/(G(x) − G(a)) for some ξ on the open interval between a and x. Note that here the numerator F(x) − F(a) = Rk(x) is exactly the remainder of the Taylor polynomial for f(x). Compute F′(t) = (f^(k+1)(t)/k!)(x − t)^k, plug it into the identity above and rearrange terms to find that Rk(x) = (f^(k+1)(ξ)/k!)(x − ξ)^k · (G(x) − G(a))/G′(ξ). This is the form of the remainder term mentioned after the actual statement of Taylor's theorem with remainder in the mean value form. The Lagrange form of the remainder is found by choosing G(t) = (x − t)^(k+1) and the Cauchy form by choosing G(t) = x − t. Remark. Using this method one can also recover the integral form of the remainder by choosing G(t) = ∫_a^t (f^(k+1)(s)/k!)(x − s)^k ds, but the requirements for f needed for the use of mean value theorem are too strong, if one aims to prove the claim in the case that f is only absolutely continuous. However, if one uses Riemann integral instead of Lebesgue integral, the assumptions cannot be weakened. Derivation for the integral form of the remainder Due to the absolute continuity of f^(k) on the closed interval between a and x, its derivative f^(k+1) exists as an L1-function, and we can use the fundamental theorem of calculus and integration by parts. This same proof applies for the Riemann integral assuming that f^(k) is continuous on the closed interval and differentiable on the open interval between a and x, and this leads to the same result as using the mean value theorem. The fundamental theorem of calculus states that f(x) = f(a) + ∫_a^x f′(t) dt. Now we can integrate by parts and use the fundamental theorem of calculus again to see that f(x) = f(a) + f′(a)(x − a) + ∫_a^x f″(t)(x − t) dt, which is exactly Taylor's theorem with remainder in the integral form in the case k = 1. The general statement is proved using induction. Suppose that f(x) = f(a) + f′(a)(x − a) + ⋯ + (f^(k)(a)/k!)(x − a)^k + ∫_a^x (f^(k+1)(t)/k!)(x − t)^k dt. Integrating the remainder term by parts we arrive at ∫_a^x (f^(k+1)(t)/k!)(x − t)^k dt = (f^(k+1)(a)/(k + 1)!)(x − a)^(k+1) + ∫_a^x (f^(k+2)(t)/(k + 1)!)(x − t)^(k+1) dt. Substituting this into the formula shows that if it holds for the value k, it must also hold for the value k + 1. Therefore, since it holds for k = 1, it must hold for every positive integer k. Derivation for the remainder of multivariate Taylor polynomials We prove the special case, where f : Rn → R has continuous partial derivatives up to the order k + 1 in some closed ball B with center a. The strategy of the proof is to apply the one-variable case of Taylor's theorem to the restriction of f to the line segment adjoining x and a. Parametrize the line segment between a and x by u(t) = a + t(x − a). We apply the one-variable version of Taylor's theorem to the function g(t) = f(u(t)): f(x) = g(1) = g(0) + Σ_{j=1}^{k} (1/j!) g^(j)(0) + ∫_0^1 ((1 − t)^k/k!) g^(k+1)(t) dt. Applying the chain rule for several variables gives g^(j)(t) = Σ_{|α| = j} (j!/α!) (D^α f)(a + t(x − a)) (x − a)^α, where j!/α! is the multinomial coefficient. Since (1/j!) g^(j)(0) = Σ_{|α| = j} (D^α f(a)/α!) (x − a)^α, we get: f(x) = Σ_{|α| ≤ k} (D^α f(a)/α!) (x − a)^α + Σ_{|α| = k + 1} ((k + 1)/α!) (x − a)^α ∫_0^1 (1 − t)^k (D^α f)(a + t(x − a)) dt. See also Footnotes References External links Taylor Series Approximation to Cosine at cut-the-knot Trigonometric Taylor Expansion interactive demonstrative applet Taylor Series Revisited at Holistic Numerical Methods Institute Articles containing proofs Theorems in calculus Theorems in real analysis Approximations
Taylor's theorem
[ "Mathematics" ]
4,353
[ "Theorems in mathematical analysis", "Theorems in calculus", "Calculus", "Theorems in real analysis", "Mathematical relations", "Articles containing proofs", "Approximations" ]
51,718
https://en.wikipedia.org/wiki/United%20States%20Atomic%20Energy%20Commission
The United States Atomic Energy Commission (AEC) was an agency of the United States government established after World War II by the U.S. Congress to foster and control the peacetime development of atomic science and technology. President Harry S. Truman signed the McMahon/Atomic Energy Act on August 1, 1946, transferring the control of atomic energy from military to civilian hands, effective on January 1, 1947. This shift gave the members of the AEC complete control of the plants, laboratories, equipment, and personnel assembled during the war to produce the atomic bomb. An increasing number of critics during the 1960s charged that the AEC's regulations were insufficiently rigorous in several important areas, including radiation protection standards, nuclear reactor safety, plant siting, and environmental protection. By 1974, the AEC's regulatory programs had come under such strong attack that the U.S. Congress decided to abolish the AEC. The AEC was abolished by the Energy Reorganization Act of 1974, which assigned its functions to two new agencies: the Energy Research and Development Administration and the Nuclear Regulatory Commission. On August 4, 1977, President Jimmy Carter signed into law the Department of Energy Organization Act, which created the Department of Energy. The new agency assumed the responsibilities of the Federal Energy Administration (FEA), the Energy Research and Development Administration (ERDA), the Federal Power Commission (FPC), and various other federal agencies. History In creating the AEC, Congress declared that atomic energy should be employed not only in the form of nuclear weapons for the nation's defense, but also to promote world peace, improve the public welfare and strengthen free competition in private enterprise. At the same time, the McMahon Act which created the AEC also gave it unprecedented powers of regulation over the entire field of nuclear science and technology. It furthermore explicitly prevented technology transfer between the United States and other countries, and required FBI investigations for all scientists or industrial contractors who wished to have access to any AEC controlled nuclear information. The signing was the culmination of long months of intensive debate among politicians, military planners and atomic scientists over the fate of this new energy source and the means by which it would be regulated. President Truman appointed David Lilienthal as the first Chairman of the AEC. Congress gave the new civilian AEC extraordinary power and considerable independence to carry out its mission. To provide the AEC exceptional freedom in hiring its scientists and engineers, AEC employees were exempt from the civil service system. The AEC's first order of business was to inspect the scattered empire of atomic plants and laboratories to be inherited from the U.S. Army. Because of the great need for security, all production facilities and nuclear reactors would be government-owned, while all technical information and research results would be under AEC control. The National Laboratory system was established from the facilities created under the Manhattan Project. Argonne National Laboratory was one of the first laboratories authorized under this legislation as a contractor-operated facility dedicated to fulfilling the new AEC's missions. Argonne was the first of the regional laboratories to involve universities in the Chicago area. 
Others were the Clinton (CEW) labs and the Brookhaven National Laboratory in the Northeast, although a similar lab in Southern California did not eventuate. On 11 March 1948 Lilienthal and Kenneth Nichols were summoned to the White House where Truman told them "I know you two hate each other's guts". He directed that "the primary objective of the AEC was to develop and produce atomic weapons". Nichols was appointed a major general and replaced Leslie Groves as chief of the Armed Forces Special Weapons Project (AFSWP); previously, Lilienthal had opposed his appointment. Lilienthal was told to "forgo your desire to place a bottle of milk on every doorstep and get down to the business of producing atomic weapons." Nichols became General Manager of the AEC on 2 November 1953. The AEC was in charge of developing the U.S. nuclear arsenal, taking over these responsibilities from the wartime Manhattan Project. In its first decade, the AEC oversaw the operation of the Los Alamos Scientific Laboratory, devoted primarily to weapons development, and, in 1952, the creation of a second weapons laboratory in California, the Lawrence Livermore National Laboratory. The AEC also carried out the "crash program" to develop the hydrogen bomb (H-bomb), and the AEC played a key role in the prosecution of the Rosenbergs for espionage. The AEC also began a program of regular nuclear weapons testing, both in the faraway Pacific Proving Grounds and at the Nevada Test Site in the western United States. While the AEC also supported much basic research, the vast majority of its early budget was devoted to nuclear weapons development and production. After serving as director of the Manhattan Project's Los Alamos Laboratory, physicist J. Robert Oppenheimer, as chairman of the AEC's general advisory board of nuclear scientists, joined Lilienthal in voicing strong opposition to development of the "super", or hydrogen bomb. Subsequently, Lilienthal left the AEC at the White House's request in 1950, and Oppenheimer's appointment to the board was not renewed in 1952. With them removed, President Truman announced his decision to develop and produce the hydrogen bomb. The first test firing of an experimental H-bomb ("Ivy Mike") was carried out in the Central Pacific on November 1, 1952, under President Truman. In 1953, U.S. Navy Admiral Lewis W. Strauss was appointed Chairman of the AEC by the new president, Dwight D. Eisenhower, to carry out the military development and production of the H-bomb. Lilienthal had wanted to give high priority to peaceful uses, especially nuclear power plants. However, coal was still cheap, and the electric power industry was not interested. The first experimental nuclear power plant was started in Pennsylvania under President Eisenhower in 1954.
This prompted individuals to discover and produce the ore, which the government would then buy. The AEC was the only legal buyer of uranium from the beginning of the program in 1947 through 1966. From 1966 to the end of the program in 1970, the AEC continued to buy uranium to support the market until private industry could develop sufficiently. Because the government itself was not producing ore, it claimed that it had no obligation to regulate miner safety. A congressional report published in 1995 concluded that, "The government failed to act to require the reduction of the hazard by ventilating the mines, and it failed to adequately warn the miners of the hazard to which they were being exposed." The Radiation Exposure Compensation Act of 1990 sought to compensate miners and families who developed cancer as a result of exposure to radon gas in uranium mines. Regulations and experiments The AEC was connected with the U.S. Department of Defense by a "Military Liaison Committee". The Joint Committee on Atomic Energy exercised congressional oversight over the AEC and had considerable power in influencing AEC decisions and policy. The AEC's sweeping powers and its control over a subject matter with far-reaching social, public health, and military implications made it an extremely controversial organization. One of the drafters of the McMahon Act, James R. Newman, famously concluded that the bill made "the field of atomic energy [an] island of socialism in the midst of a free-enterprise economy". Before the Nuclear Regulatory Commission (NRC) was created, nuclear regulation was the responsibility of the AEC, which Congress first established in the Atomic Energy Act of 1946. Eight years later, Congress replaced that law with the Atomic Energy Act of 1954, which for the first time made the development of commercial nuclear power possible, and resolved a number of other outstanding problems in implementing the first Atomic Energy Act. The act assigned the AEC the functions of both encouraging the use of nuclear power and regulating its safety. The AEC's regulatory programs sought to ensure public health and safety from the hazards of nuclear power without imposing excessive requirements that would inhibit the growth of the industry. This was a difficult goal to achieve, especially in a new industry, and within a short time the AEC's programs stirred considerable controversy. Stephanie Cooke has written that: the AEC had become an oligarchy controlling all facets of the military and civilian sides of nuclear energy, promoting them and at the same time attempting to regulate them, and it had fallen down on the regulatory side ... a growing legion of critics saw too many inbuilt conflicts of interest. The AEC had a history of involvement in experiments involving radioactive iodine. In a 1949 operation called the "Green Run", the AEC released iodine-131 and xenon-133 into the atmosphere, which contaminated an area containing three small towns near the Hanford site in Washington. In 1953, the AEC ran several studies on the health effects of radioactive iodine in newborns and pregnant women at the University of Iowa. Also in 1953, the AEC sponsored a study to discover if radioactive iodine affected premature babies differently from full-term babies. In the experiment, researchers from Harper Hospital in Detroit orally administered iodine-131 to 65 premature and full-term infants.
In another AEC study, researchers at the University of Nebraska College of Medicine fed iodine-131 to 28 healthy infants through a gastric tube to test the concentration of iodine in the infants' thyroid glands. Public opinion and abolition of the AEC During the 1960s and early 1970s, the Atomic Energy Commission came under fire from opposition concerned with more fundamental ecological problems such as the pollution of air and water. Under the Nixon Administration, environmental consciousness grew exponentially and the first Earth Day was held on April 22, 1970. Along with rising environmental awareness came a growing suspicion of the AEC and public hostility for their projects increased. In the public eye, there was a strong association between nuclear power and nuclear weapons, and even though the AEC had made a push in the late 1960s, to portray their efforts as being geared toward peaceful uses of atomic energy, criticism of the agency grew. The AEC was chiefly held responsible for the health problems of people living near atmospheric test sites from the early 1960s, and there was a strong association of nuclear energy with the radioactive fallout from these tests. Around the same time, the AEC was also struggling with opposition to nuclear power plant siting as well as nuclear testing. An organized push was finally made to curb the power held by the AEC, and in 1970 the AEC was forced to prepare an Environmental impact statement (EIS) for a nuclear test in northwestern Colorado as part of the initial preparation for Project Rio Blanco. In 1973, the AEC predicted that, by the turn of the century, one thousand reactors would be needed producing electricity for homes and businesses across the United States. However, after 1973, orders for nuclear reactors declined sharply as electricity demand fell and construction costs rose. Some partially completed nuclear power plants in the U.S. were stricken, and many planned nuclear plants were canceled. By 1974, the AEC's regulatory programs had come under such strong attack that Congress decided to abolish the agency. Supporters and critics of nuclear power agreed that the promotional and regulatory duties of the AEC should be assigned to different agencies. The Energy Reorganization Act of 1974 transferred the regulatory functions of the AEC to the new Nuclear Regulatory Commission (NRC), which began operations on January 19, 1975. Promotional functions went to the Energy Research and Development Administration which was later incorporated into the United States Department of Energy. Lasting through the mid-1970s, the AEC, along with other entities including the Department of Defense, National Institutes of Health, the American Cancer Society, the Manhattan Project, and various universities funded or conducted human radiation experiments. The government covered up most of these radiation mishaps until 1993, when President Bill Clinton ordered a change of policy. Nuclear radiation was known to be dangerous and deadly (from the atomic bombings of Hiroshima and Nagasaki in 1945), and the experiments were designed to ascertain the detailed effect of radiation on human health. In Oregon, 67 prisoners with inadequate consent to vasectomies had their testicles exposed to irradiation. In Chicago, 102 volunteers with unclear consent received injections of strontium and cesium solutions to simulate radioactive fallout. AEC Chair Atomic Energy Commission Commissioners Sumner Pike: October 31, 1946 – December 15, 1951 David E. 
Lilienthal, Chairman: November 1, 1946 – February 15, 1950 Robert F. Bacher: November 1, 1946 – May 10, 1949 William W. Waymack: November 5, 1946 – December 21, 1948 Lewis L. Strauss: November 12, 1946 – April 15, 1950; Chairman: July 2, 1953 – June 30, 1958 Gordon Dean: May 24, 1949 – June 30, 1953; Chairman: July 11, 1950 – June 30, 1953 Henry DeWolf Smyth: May 30, 1949 – September 30, 1954 Thomas E. Murray: May 9, 1950 – June 30, 1957 Thomas Keith Glennan: October 2, 1950 – November 1, 1952 Eugene M. Zuckert: February 25, 1952 – June 30, 1954 Joseph Campbell: July 27, 1953 – November 30, 1954 Willard F. Libby: October 5, 1954 – June 30, 1959 John von Neumann: March 15, 1955 – February 8, 1957 Harold S. Vance: October 31, 1955 – August 31, 1959 John Stephens Graham: September 12, 1957 – June 30, 1962 John Forrest Floberg: October 1, 1957 – June 23, 1960 John A. McCone, Chairman: July 14, 1958 – January 20, 1961 John H. Williams: August 13, 1959 – June 30, 1960 Robert E. Wilson: March 22, 1960 – January 31, 1964 Loren K. Olson: June 23, 1960 – June 30, 1962 Glenn T. Seaborg, Chairman: March 1, 1961 – August 16, 1971 Leland J. Haworth: April 17, 1961 – June 30, 1963 John G. Palfrey: August 31, 1962 – June 30, 1966 James T. Ramey: August 31, 1962 – June 30, 1973 Gerald F. Tape: July 15, 1963 – April 30, 1969 Mary I. Bunting: June 29, 1964 – June 30, 1965 Wilfrid E. Johnson: August 1, 1966 – June 30, 1972 Samuel M. Nabrit: August 1, 1966 – August 1, 1967 Francesco Costagliola: October 1, 1968 – June 30, 1969 Theos J. Thompson: June 12, 1969 – November 25, 1970 Clarence E. Larson: September 2, 1969 – June 30, 1974 James R. Schlesinger, Chairman: August 17, 1971 – January 26, 1973 William O. Doub: August 17, 1971 – August 17, 1974 Dixy Lee Ray: August 8, 1972; Chairman: February 6, 1973 – January 18, 1975 William E. Kriegsman: June 12, 1973 – January 18, 1975 William A. Anders: August 6, 1973 – January 18, 1975 Relationship with science Ecology For many years, the AEC provided the most conspicuous example of the benefit of atomic age technologies to biology and medicine. Shortly after the Atomic Energy Commission was established, its Division of Biology and Medicine began supporting diverse programs of research in the life sciences, mainly the fields of genetics, physiology, and ecology. Specifically concerning the AEC's relationship with the field of ecology, one of the first approved funding grants went to Eugene Odum in 1951. This grant sought to observe and document the effects of radiation emission on the environment from a recently built nuclear facility on the Savannah River in South Carolina. Odum, a professor at the University of Georgia, initially submitted a proposal requesting annual funding of $267,000, but the AEC rejected the proposal and instead offered to fund a $10,000 project to observe local animal populations and the effects of secondary succession on abandoned farmland around the nuclear plant. In 1961, AEC chairman Glenn T. Seaborg established the Technical Analysis Branch (to be directed by Hal Hollister) to study the long-term biological and ecological effects of nuclear war. Throughout the early 1960s, this group of scientists conducted several studies to determine nuclear weapons' ecological consequences and their implications for human life. As a result, during the 1950s and 1960s, the U.S. government placed emphasis on the development and potential use of "clean" nuclear weapons to mitigate these effects. 
In later years, the AEC began providing increased research opportunities to scientists by approving funding for ecological studies at various nuclear testing sites, most notably at Eniwetok, which was part of the Marshall Islands. Through their support of nuclear testing, the AEC gave ecologists a unique opportunity to study the effects of radiation on whole populations and entire ecological systems in the field. Prior to 1954, no one had investigated a complete ecosystem with the intent to measure its overall metabolism, but the AEC provided the means as well as the funding to do so. Ecological development was further spurred by environmental concerns about radioactive waste from nuclear energy and postwar atomic weapons production. In the 1950s, such concerns led the AEC to build a large ecology research group at their Oak Ridge National Laboratory, which was instrumental in the development of radioecology. A wide variety of research efforts in biology and medicine took place under the umbrella of the AEC at national laboratories and at some universities with agency sponsorship and funding. As a result of increased funding as well as the increased opportunities given to scientists and the field of ecology in general, a plethora of new techniques were developed which led to rapid growth and expansion of the field as a whole. One of these techniques afforded to ecologists involved the use of radiation, namely in ecological dating and to study the effects of stresses on the environment. In 1969, the AEC's relationship with science and the environment was brought to the forefront of a growing public controversy that had been building since 1965. In search for an ideal location for a large-yield nuclear test, the AEC settled upon the island of Amchitka, part of the Aleutian Islands National Wildlife Refuge in Alaska. The main public concern was about their location choice, as there was a large colony of endangered sea otters in close proximity. To help defuse the issue, the AEC sought a formal agreement with the Department of the Interior and the U.S. state of Alaska to help transplant the colony of sea otters to other former habitats along the West Coast. Arctic ecology The AEC played a role in expanding the field of arctic ecology. From 1959 to 1962, the Commission's interest in this type of research peaked. For the first time, extensive effort was placed by a national agency on funding bio-environmental research in the Arctic. Research took place at Cape Thompson on the northwest coast of Alaska, and was tied to an excavation proposal named Project Chariot. The excavation project was to involve a series of underground nuclear detonations that would create an artificial harbor, consisting of a channel and circular terminal basin, which would fill with water. This would have allowed for enhanced ecological research of the area in conjunction with any nuclear testing that might occur, as it essentially would have created a controlled environment where levels and patterns of radioactive fallout resulting from weapons testing could be measured. The proposal never went through, but it evidenced the AEC's interest in Arctic research and development. The simplicity of biotic compositions and ecological processes in the arctic regions of the globe made ideal locations in which to pursue ecological research, especially since at the time there was minimal human modification of the landscape. 
All investigations conducted by the AEC produced new data from the Arctic, but few or none of them were supported solely on that basis. While the development of ecology and other sciences was not always the primary objective of the AEC, support was often given to research in these fields indirectly as an extension of their efforts for peaceful applications of nuclear energy. Reports The AEC issued a large number of technical reports through their technical information service and other channels. These had many numbering schemes, often associated with the lab from which the report was issued. AEC report numbers included AEC-AECU (unclassified), AEC-AECD (declassified), AEC-BNL (Brookhaven National Lab), AEC-HASL (Health and Safety Laboratory), AEC-HW (Hanford Works), AEC-IDO (Idaho Operations Office), AEC-LA (Los Alamos), AEC-MDCC (Manhattan District), AEC-TID (Technical Information Division), and others. Today, these reports can be found in library collections that received government documents, through the National Technical Information Service (NTIS), and through public domain digitization projects such as the Technical Report Archive & Image Library, which are available via HathiTrust. Gallery See also Anti-nuclear movement in the United States Atomic bombing of Hiroshima and Nagasaki Harold Hodge, administrator and researcher for the Manhattan Project List of anti-nuclear groups in the United States Nuclear waste Operation Plowshare Price-Anderson Nuclear Industries Indemnity Act Alvin Radkowsky (Chief Scientist, Office of Naval Reactors from 1950 to 1972) The Cult of the Atom We Almost Lost Detroit References Further reading Clarfield, Gerard H., and William M. Wiecek. Nuclear America: military and civilian nuclear power in the United States, 1940–1980 (Harpercollins, 1984). Richard G. Hewlett; Oscar E. Anderson. The New World, 1939–1946. University Park: Pennsylvania State University Press, 1962. Richard G. Hewlett; Francis Duncan. Atomic Shield, 1947–1952. University Park: Pennsylvania State University Press, 1969. Richard G. Hewlett; Jack M. Holl. Atoms for Peace and War, 1953–1961: Eisenhower and the Atomic Energy Commission. Berkeley: University of California Press, 1989. Rebecca S. Lowen. "Entering the Atomic Power Race: Science, Industry, and Government," Political Science Quarterly 102#3 (1987), pp. 459–479 in JSTOR Mazuzan, George T., and J. Samuel Walker. Controlling the atom: The beginnings of nuclear regulation, 1946–1962 (Univ of California Press, 1985) online. External links U.S. Nuclear Regulatory Commission Glossary: "Atomic Energy Commission" Diary of T. Keith Glennan, Dwight D. Eisenhower Presidential Library Papers of John A. McCone, Dwight D. Eisenhower Presidential Library Technicalreports.org: TRAIL—Technical Report Archive and Image Library – historic technical reports from the Atomic Energy Commission (& other Federal agencies) are available here Briefing Book: "Clean" Nukes and the Ecology of Nuclear War, published by the National Security Archive Governmental nuclear organizations Atomic Energy Commission Atomic Energy Commission Atomic Energy Commission Government agencies established in 1946 1946 establishments in the United States 1975 disestablishments in the United States Atomic Energy Commission Atomic Energy Commission
United States Atomic Energy Commission
[ "Engineering" ]
4,839
[ "Governmental nuclear organizations", "Nuclear organizations" ]
51,736
https://en.wikipedia.org/wiki/Blimp
A non-rigid airship, commonly called a blimp (/blɪmp/), is an airship (dirigible) without an internal structural framework or a keel. Unlike semi-rigid and rigid airships (e.g. Zeppelins), blimps rely on the pressure of their lifting gas (usually helium, rather than flammable hydrogen) and the strength of the envelope to maintain their shape. Blimps are known for their use in advertising, surveillance, and observation due to their maneuverability, slow speeds and steady flight capabilities. Principle Since blimps keep their shape with internal overpressure, typically the only solid parts are the passenger car (gondola) and the tail fins. A non-rigid airship that uses heated air instead of a light gas (such as helium) as a lifting medium is called a hot-air airship (sometimes there are battens near the bow, which assist with higher forces there from a mooring attachment or from the greater aerodynamic pressures there). Volume changes of the lifting gas due to temperature changes or to changes of altitude are compensated for by pumping air into internal ballonets (air bags) to maintain the overpressure. Without sufficient overpressure, the blimp loses its ability to be steered and is slowed due to increased drag and distortion. The propeller air stream can be used to inflate the ballonets and so the hull. In some models, such as the Skyship 600, differential ballonet inflation can provide a measure of pitch trim control. The engines driving the propellers are usually directly attached to the gondola, and in some models are partly steerable. Blimps are the most commonly built airships because they are relatively easy to build and easy to transport once deflated. However, because of their unstable hull, their size is limited. A blimp with too long a hull may kink in the middle when the overpressure is insufficient or when maneuvered too fast (this has also happened with semi-rigid airships with weak keels). This led to the development of semi-rigids and rigid airships. Modern blimps are launched somewhat heavier than air (overweight), in contrast to historic blimps. The missing lift is provided by lifting the nose and using engine power, or by angling the engine thrust. Some types also use steerable propellers or ducted fans. Operating in a state heavier than air avoids the need to dump ballast at lift-off and also avoids the need to lose costly helium lifting gas on landing (most of the Zeppelins achieved lift with very inexpensive hydrogen, which could be vented without concern to decrease altitude). Etymology The origin of the word "blimp" has been the subject of some confusion. Lennart Ege notes two possible derivations: A 1943 etymology, published in The New York Times, supports a British origin during the First World War when the British were experimenting with lighter-than-air craft. The initial non-rigid aircraft was called the A-limp; and a second version called the B-limp was deemed more satisfactory. Yet a third derivation is given by Barnes and James in Shorts Aircraft since 1900: Dr. A. D. Topping researched the origins of the word and concluded that the British had never had a "Type B, limp" designation, and that Cunningham's coinage appeared to be the correct explanation. The Oxford English Dictionary notes its use in print in 1916: "Visited the Blimps ... this afternoon at Capel". In 1918, the Illustrated London News said that it was "an onomatopœic name invented by that genius for apposite nomenclature, the late Horace Short". 
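As a numerical illustration of the lift and ballonet bookkeeping described in the Principle section above, the following sketch may help; it is not part of the article, and the envelope volume, gas densities and temperatures are assumed round figures used with the ideal-gas approximation.

```python
# Illustrative sketch (assumed round numbers, ideal-gas approximation):
# gross lift of a helium envelope, and the volume the ballonets must give up
# when the lifting gas warms, as described under Principle above.
RHO_AIR = 1.225            # kg/m^3, air at sea level, about 15 C
RHO_HELIUM = 0.169         # kg/m^3, helium at the same conditions
ENVELOPE_VOLUME = 5000.0   # m^3, assumed size of a small advertising blimp

# Buoyant (gross) lift = weight of displaced air minus weight of the lifting gas.
gross_lift_kg = ENVELOPE_VOLUME * (RHO_AIR - RHO_HELIUM)
print(f"Gross lift: {gross_lift_kg:.0f} kg")   # roughly 5300 kg for these figures

# If the helium warms from 288 K to 303 K at constant pressure, it expands by the
# temperature ratio (ideal gas law), so the ballonets must vent air of roughly the
# same volume for the envelope to keep its shape and overpressure.
T_COLD, T_WARM = 288.0, 303.0
expansion_m3 = ENVELOPE_VOLUME * (T_WARM / T_COLD - 1.0)
print(f"Helium expansion to absorb: {expansion_m3:.0f} m^3")   # roughly 260 m^3
```

The same bookkeeping runs in reverse when the gas cools or the ship descends: air is pumped back into the ballonets so that the envelope keeps its overpressure and shape.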
Use The B-class blimps were patrol airships operated by the United States Navy during and shortly after World War I. The Navy learned a great deal from the DN-1 fiasco. The result was the very successful B-type airships. Dr. Jerome Hunsaker was asked to develop a theory of airship design. This was followed by then-Lieutenant John H. Towers, USN, returning from Europe having inspected British designs, and the U.S. Navy subsequently sought bids for 16 blimps from American manufacturers. On 4 February 1917 the Secretary of the Navy directed that 16 nonrigid airships of Class B be procured. Ultimately Goodyear built 9 envelopes, Goodrich built five and Curtiss built the gondolas for all of those 14 ships. Connecticut Aircraft contracted with U.S. Rubber for its two envelopes and with Pigeon Fraser for its gondolas. The Curtiss-built gondolas were modified JN-4 fuselages and were powered by OX-5 engines. The Connecticut Aircraft blimps were powered by Hall-Scott engines. In 1930, a former German airship officer, Captain Anton Heinen, working in the US for the US Navy on its dirigible fleet, attempted to design and build a four-place blimp called the "family air yacht" for private fliers which the inventor claimed would be priced below $10,000 and easier to fly than a fixed-wing aircraft if placed in production. It was unsuccessful. In 2021, Reader's Digest said that "consensus is that there are about 25 blimps still in existence and only about half of them are still in use for advertising purposes". The Airsign Airship Group is the owner and operator of 8 of these active ships, including the Hood Blimp, DirecTV blimp, and the MetLife blimp. Surveillance blimp This blimp is a type of airborne early warning and control aircraft, typically as the active part of a system which includes a mooring platform, communications and information processing. Example systems include the U.S. JLENS and Israeli Aeronautics Defense Skystar 300. Surveillance blimps known as aerostats have been used extensively in the Middle East by the United States military, the United Arab Emirates and Kuwait. Examples of non-rigid airships Manufacturers in many countries have built blimps in many designs. 
Some examples include: ADB-3-X01, the largest lightship ever manufactured by Airship do Brasil, the only blimp manufacturing company in Latin America AVIC AS700 Airship Astra-Torres airship, non-rigid airships manufactured by Société Astra and used in World War I by France and UK British Army airship Beta Coastal class airship, C* class airship UK coastal blimps used in WW I SS, SSP, SST, SSZ and NS class airships, convoy escort blimps used by the UK in WW I G class blimp and L class blimp, US training blimps built by Goodyear during World War II K class blimp and M class blimp, US anti-submarine blimps operated during World War II Mantainer Ardath, an Australian blimp, in use during the mid-1970s N class blimp (the "Nan ship"), used for anti-submarine and as a radar early-warning platform during the 1950s Goodyear Blimps, a fleet of blimps operated for advertising purposes and as a television camera platform Skyship 600, a private blimp used by advertising companies P-791, an experimental aerostatic/aerodynamic hybrid airship developed by Lockheed-Martin corporation SVAM CA-80, an airship manufactured by the Shanghai Vantage Airship Manufacture Co in China TC-3 and TC-7, two US Army Corps non-rigid blimps used for parasite fighter trials during 1923–24 UConn Lumpy, an airship built and flown in 1975 by students at the University of Connecticut WDL 2, airship for aerial advertising manufactured and used by WDL Group, Germany Willows airships See also Airship hangar List of current airships in the United States Mooring mast Solar aircraft Thermal airship, a type of blimp using hot air for lift Notes References External links Popular Mechanics, June 1943, "Gas Bags Go On Patrol" detailed article on antisubmarine blimps during World War II "How The First Sea-Air Rescue Was Made", October 1944, Popular Science first air-to-sea rescue without aircraft landing first Airships Airship configurations Industrial gases
Blimp
[ "Chemistry" ]
1,708
[ "Chemical process engineering", "Industrial gases" ]
51,742
https://en.wikipedia.org/wiki/Drawing%20board
A drawing board (also drawing table, drafting table or architect's table) is, in its antique form, a kind of multipurpose desk which can be used for any kind of drawing, writing or impromptu sketching on a large sheet of paper or for reading a large format book or other oversized document or for drafting precise technical illustrations (such as engineering drawings or architectural drawings). The drawing table used to be a frequent companion to a pedestal desk in a study or private library, during the pre-industrial and early industrial era. During the Industrial Revolution, draftsmanship gradually became a specialized trade and drawing tables slowly moved out of the libraries and offices of most gentlemen. They became more utilitarian and were built of steel and plastic instead of fine woods and brass. More recently, engineers and draftsmen use the drawing board for making and modifying drawings on paper with ink or pencil. Different drawing instruments (set square, protractor, etc.) are used on it to draw parallel, perpendicular or oblique lines. There are instruments for drawing circles, arcs, other curves and symbols too (compass, French curve, stencil, etc.). However, with the gradual introduction of computer aided drafting and design (CADD or CAD) in the last decades of the 20th century and the first of the 21st century, the drawing board is becoming less common. A drawing table is also sometimes called a mechanical desk because, for several centuries, most mechanical desks were drawing tables. Unlike the gadgety mechanical desks of the second part of the 18th century, however, the mechanical parts of drawing tables were usually limited to notches, ratchets, and perhaps a few simple gears, or levers or cogs to elevate and incline the working surface. Very often a drawing table could look like a writing table or even a pedestal desk when the working surface was set at the horizontal and the height adjusted to 29 inches, in order to use it as a "normal" desk. The only giveaway was usually a lip on one of the sides of the desktop. This lip or edge stopped paper or books from sliding when the surface was given an angle. It was also sometimes used to hold writing implements. When the working surface was extended at its full height, a drawing table could be used as a standing desk. Many reproductions have been made and are still being produced of drawing tables, copying the period styles they were originally made in during the 18th and 19th centuries. History In the 18th and 19th centuries, drawing paper was dampened and then its edges glued to the drawing board. After drying the paper would be flat and smooth. The completed drawing was then cut free. Paper could also be secured to the drawing board with drawing pins or even C-clamps. More recent practice is to use self-adhesive drafting tape to secure paper to the board, including the sophisticated use of individualized adhesive dots from a dispensing roll. Some drawing boards are magnetized, allowing paper to be held down by long steel strips. Boards used for overlay drafting or animation may include registration pins or peg bars to ensure alignment of multiple layers of drawing media. Contemporary drafting tables Despite the prevalence of computer aided drafting, many older architects and even some structural designers still rely on paper and pencil graphics produced on a drafting table. Modern drafting tables typically rely on a steel frame. 
Steel provides as much strength as the old oak drafting table frames and much easier portability. Typically the drafting board surface is a thick sheet of compressed fibreboard with sheets of Formica laminated to all its surfaces. The drafting board surface is usually secured to the frame by screws which can easily be removed for drafting table transportation. The steel frame allows mechanical linkages to be installed that control both the height and angle of the drafting board surface. Typically, a single foot pedal is used to control a clutch which clamps the board in the desired position. A heavy counterweight full of lead shot is installed in the steel linkage so that if the pedal is accidentally released, the drafting board will not spring into the upright position and injure the user. Drafting table linkages and clutches have to be maintained to ensure that this safety mechanism counterbalances the weight of the table surface. The drafting table surface is usually covered with a thin vinyl sheet called a board cover. This provides an optimum surface for pen and pencil drafting. It allows compasses and dividers to be used without damaging the wooden surface of the board. A board cover must be frequently cleaned to prevent graphite buildup from making new drawings dirty. At the bottom edge of the table, a single strip of aluminum or steel may serve as a place to rest drafting pencils. More purpose-built trays are also used which hold pencils even while the board is being adjusted. Various types of drafting machine may be attached to the board surface to assist the draftsperson or artist. Parallel rules often span the entire width of the board and are so named because they remain parallel to the top edge of the board as they are moved up and down. Drafting machines use pre-calibrated scales and built in protractors to allow accurate drawing measurement. Some drafting tables incorporate electric motors to provide the up and down and angle adjustment of the drafting table surface. These tables are at least as heavy as the original oak and brass drafting tables and so sacrifice portability for the convenience of push button table adjustment. Modern-day idiom The expression "back to the drawing board" is used when a plan or course of action needs to be changed, often drastically; usually due to a very unsuccessful result; e.g., "The battle plan, the result of months of conferences, failed because the enemy retreated too far back. It was back to the drawing board for the army captains." The phrase was coined in the caption to a Peter Arno cartoon of The New Yorker of March 1, 1941. See also List of desk forms and types Studio Surface computing Drafting machine Technical drawing tools Plane table References External links Drafting Table Use and Care Architectural communication Furniture Tables (furniture) Technical drawing tools
Drawing board
[ "Engineering" ]
1,238
[ "Architectural communication", "Architecture" ]
51,746
https://en.wikipedia.org/wiki/Cisco
Cisco Systems, Inc. (using the trademark Cisco) is an American multinational digital communications technology conglomerate corporation headquartered in San Jose, California. Cisco develops, manufactures, and sells networking hardware, software, telecommunications equipment and other high-technology services and products. Cisco specializes in specific tech markets, such as the Internet of things (IoT), domain security, videoconferencing, and energy management with products including Webex, OpenDNS, Jabber, Duo Security, Silicon One, and Jasper. Cisco Systems was founded in December 1984 by Leonard Bosack and Sandy Lerner, two Stanford University computer scientists who had been instrumental in connecting computers at Stanford. They pioneered the concept of a local area network (LAN) being used to connect distant computers over a multiprotocol router system. The company went public in 1990 and, by the end of the dot-com bubble in 2000, had a market capitalization of $500 billion, surpassing Microsoft as the world's most valuable company. Cisco stock (CSCO) was added to the Dow Jones Industrial Average on June 8, 2009, and is also included in the S&P 500, Nasdaq-100, the Russell 1000, and the Russell 1000 Growth Stock indices. History 1984–1995: Origins and initial growth Cisco Systems was founded in December 1984 by Sandy Lerner along with her husband Leonard Bosack. Lerner was the director of computer facilities for the Stanford University Graduate School of Business. Bosack was in charge of the Stanford University computer science department's computers. Cisco's initial product has roots in Stanford University's campus technology. In the early 1980s students and staff at Stanford, including Bosack, used technology on the campus to link all of the school's computer systems to talk to one another, creating a box that functioned as a multiprotocol router called the "Blue Box". The Blue Box used circuitry made by Andy Bechtolsheim, and software that was originally written at Stanford by research engineer William Yeager. Due to the underlying architecture, and its ability to scale well, Yeager's well-designed invention became a key to Cisco's early success. In 1985, Bosack and Stanford employee Kirk Lougheed began a project to formally network Stanford's campus. They adapted Yeager's software into what became the foundation for Cisco IOS, despite Yeager's claims that he had been denied permission to sell the Blue Box commercially. On July 11, 1986, Bosack and Lougheed were forced to resign from Stanford and the university contemplated filing criminal complaints against Cisco and its founders for the theft of its software, hardware designs, and other intellectual properties. In 1987, Stanford licensed the router software and two computer boards to Cisco. In addition to Bosack, Lerner, Lougheed, Greg Satz (a programmer), and Richard Troiano (who handled sales), completed the early Cisco team. The company's first CEO was Bill Graves, who held the position from 1987 to 1988. In 1988, John Morgridge was appointed CEO. The name "Cisco" was derived from the city name San Francisco, which is why the company's engineers insisted on using the lower case "cisco" in its early years. The logo is a stylized depiction of the two towers of the Golden Gate Bridge. On February 16, 1990, Cisco Systems went public with a market capitalization of $224 million, and was listed on the NASDAQ stock exchange. On August 28, 1990, Lerner was fired. Upon hearing the news, her husband Bosack resigned in protest. 
Although Cisco was not the first company to develop and sell dedicated network nodes, it was one of the first to sell commercially successful routers supporting multiple network protocols. Classical, CPU-based architecture of early Cisco devices coupled with flexibility of operating system IOS allowed for keeping up with evolving technology needs by means of frequent software upgrades. Some popular models of that time (such as Cisco 2500) managed to stay in production for almost a decade virtually unchanged. The company was quick to capture the emerging service provider environment, entering the SP market with product lines such as Cisco 7000 and Cisco 8500. Between 1992 and 1994, Cisco acquired several companies in Ethernet switching, such as Kalpana, Grand Junction and most notably, Mario Mazzola's Crescendo Communications, which together formed the Catalyst business unit. At the time, the company envisioned layer 3 routing and layer 2 (Ethernet, Token Ring) switching as complementary functions of different intelligence and architecture—the former was slow and complex, the latter was fast but simple. This philosophy dominated the company's product lines throughout the 1990s. In 1995, John Morgridge was succeeded by John T. Chambers. 1996–2005: Internet and silicon intelligence The Internet Protocol (IP) became widely adopted in the mid-to-late 1990s. Cisco introduced products ranging from modem access shelves (AS5200) to core GSR routers, making them a major player in the market. In late March 2000, at the height of the dot-com bubble, Cisco became the most valuable company in the world, with a market capitalization of more than $500 billion. As of July 2014, with a market cap of about US$129 billion, it was still one of the most valuable companies. The perceived complexity of programming routing functions in silicon led to the formation of several startups determined to find new ways to process IP and MPLS packets entirely in hardware and blur boundaries between routing and switching. One of them, Juniper Networks, shipped their first product in 1999 and by 2000 chipped away about 30% from Cisco SP Market share. In response, Cisco later developed homegrown ASICs and fast processing cards for GSR routers and Catalyst 6500 switches. In 2004, Cisco also started the migration to new high-end hardware CRS-1 and software architecture IOS XR. 2006–2012: The Human Network As part of a rebranding campaign in 2006, Cisco Systems adopted the shortened name "Cisco" and created "The Human Network" advertising campaign. These efforts were meant to make Cisco a "household" brand—a strategy designed to support the low-end Linksys products and future consumer products. On the more traditional business side, Cisco continued to develop its routing, switching and security portfolio. The quickly growing importance of Ethernet also influenced the company's product lines. Limits of IOS and aging Crescendo architecture also forced Cisco to look at merchant silicon in the carrier Ethernet segment. This resulted in a new ASR 9000 product family intended to consolidate the company's carrier Ethernet and subscriber management business around EZChip-based hardware and IOS-XR. In March 2007, Cisco acquired Reactivity Inc, a privately held XML gateway provider based in Redwood City, California. Cisco placed the Reactivity team and product portfolio under its Datacenter Switching and Security Technology Group, which reported to the company's then senior vice president Jayshree Ullal. 
Throughout the mid-2000s, Cisco also built a significant presence in India, establishing its Globalization Centre East in Bangalore for $1 billion. Cisco also expanded into new markets by acquisition—one example being a 2009 purchase of mobile specialist Starent Networks. Cisco continued to be challenged by both domestic competitors Alcatel-Lucent, Juniper Networks, and an overseas competitor Huawei. Due to lower-than-expected profit in 2011, Cisco reduced annual expenses by $1 billion. The company cut around 3,000 employees with an early-retirement program who accepted a buyout and planned to eliminate as many as 10,000 jobs (around 14 percent of the 73,400 total employees before curtailment). During the 2011 analyst call, Cisco's CEO John Chambers called out several competitors by name, including Juniper and HP. On July 24, 2012, Cisco received approval from the EU to acquire NDS (a TV software developer) for US$5 billion. In 2013, Cisco sold its Linksys home-router unit to Belkin International Inc., signaling a shift to sales to businesses rather than consumers. 2013–present On July 23, 2013, Cisco Systems announced a definitive agreement to acquire Sourcefire for $2.7 billion. On August 14, 2013, Cisco Systems announced it would cut 4,000 jobs from its workforce, which was roughly 6%, starting in 2014. At the end of 2013, Cisco announced poor revenue due to depressed sales in emerging markets, caused by economic uncertainty and by fears of the National Security Agency planting backdoors in its products. In April 2014, Cisco announced funding for early-stage firms to focus on the Internet of Things. The investment fund was allocated to investments in IoT accelerators and startups such as The Alchemist Accelerator, Ayla Networks and EVRYTHNG. Later that year, the company announced it was laying off another 6,000 workers or 8% of its global workforce, as part of a second restructuring. On November 4, 2014, Cisco announced an investment in Stratoscale. On May 4, 2015, Cisco announced CEO and Chairman John Chambers would step down as CEO on July 26, 2015, but remain chairman. Chuck Robbins, senior vice president of worldwide sales & operations and 17-year Cisco veteran, was announced as the next CEO. On July 23, 2015, Cisco announced the divestiture of its television set-top-box and cable modem business to Technicolor SA for $600 million, a division originally formed by Cisco's $6.9 billion purchase of Scientific Atlanta. The deal came as part of Cisco's gradual exit from the consumer market, and as part of an effort by Cisco's new leadership to focus on cloud-based products in enterprise segments. Cisco indicated that it would still collaborate with Technicolor on video products. On November 19, 2015, Cisco, alongside ARM Holdings, Dell, Intel, Microsoft and Princeton University, founded the OpenFog Consortium, to promote interests and development in fog computing. In January 2016, Cisco invested in VeloCloud, a software-defined WAN (SD-WAN) start-up with a cloud offering for configuring and optimizing branch office networks. Cisco contributed to VeloCloud's $27 million Series C round, led by March Capital Partners. In February 2017, Cisco launched a cloud-based secure internet gateway, called Cisco Umbrella, to provide safe internet access to users who do not use their corporate networks or VPNs to connect to remote data centers. 
Immediately after Cisco reported its fourth-quarter earnings for 2017, its price per share jumped by over 7%, while its earnings per share increased from 60 to 61 cents per share, due in part to Cisco's outperformance of analyst expectations. In September 2017, Chambers announced that he would step down from the executive chairman role at the end of his term on the board in December 2017. On December 11, 2017, Robbins was elected to succeed Chambers as executive chairman while retaining his role as CEO, and Chambers was given the title of "Chairman Emeritus". Reuters reported that "Cisco Systems Inc's (CSCO.O) product revenue in Russia grew 20 percent in 2017, ahead of Cisco's technology product revenue growth in the other so-called BRIC countries of Brazil, China and India." On May 1, 2018, Cisco Systems agreed to buy AI-driven business intelligence startup Accompany for $270 million. As of June 2018, Cisco Systems ranked 444th on the Forbes Global 2000 list, with a $221.3 billion market cap. In 2019, Cisco acquired CloudCherry, a customer experience management company, and Voicea, an artificial intelligence company. In 2019, Cisco also introduced the "Silicon One" ASIC chip, with the G100 model reaching a speed of 25.6 Tbit/s. The Silicon One competes against the Tomahawk series by Broadcom, the Nvidia Spectrum, the Marvell Teralynx and the Intel Tofino. The Silicon One G200, introduced in 2023, offers a speed of 51.2 Tbit/s. In March 2020, SVP and GM of Enterprise Networking David Goeckeler left to become CEO of Western Digital and was replaced by Todd Nightingale, head of Cisco Meraki. In October 2022, Cisco announced a partnership adding the Microsoft Teams app to its meeting devices. In 2022, Cisco completely curtailed sales of its equipment in Russia due to the Russian invasion of Ukraine, and completely discontinued service for already-sold devices. In April 2023, it became known that the company had destroyed equipment, spare parts, and even vehicles and office furniture worth 1.86 billion rubles (about $23 million) because the goods could not be re-exported. In February 2023, Cisco also wrote off the debt of the Russian mobile operator MTS in the amount of 1.234 billion rubles, reportedly unpaid amounts for previous equipment deliveries. In 2023, Cisco announced plans to begin manufacturing equipment in India. On 15 February 2024, Cisco announced it would lay off more than 4,000 employees, or 5% of its global workforce, and lowered its annual revenue forecast due to economic challenges and reduced demand from telecom and cable service providers. On 24 April 2024, Chuck Robbins, CEO of Cisco, met with Pope Francis and signed the Rome Call for AI ethics at the Vatican, endorsing the document's principles for responsible and ethical AI use. Finance For the fiscal year 2023, Cisco reported earnings of US$12.6 billion, with an annual revenue of US$57 billion, an increase of 10.6% over the previous fiscal cycle. Cisco's shares traded at over $43 per share, and its market capitalization was valued at US$213.2 billion in September 2018. Corporate structure Acquisitions and subsidiaries Cisco acquired a variety of companies to spin products and talent into the company. In 1995–1996 the company completed 11 acquisitions. Several acquisitions, such as that of Stratacom, were among the biggest deals in the industry when they occurred. During the Internet boom in 1999, the company acquired Cerent Corporation, a start-up company located in Petaluma, California, for about US$7 billion.
It was the most expensive acquisition made by Cisco to that date, and only the acquisition of Scientific Atlanta has been larger. In 1999, Cisco also acquired a $1 billion stake in KPMG Consulting to enable the establishment of the Internet firm Metrius, founded by Keyur Patel of Fuse. Several acquired companies have grown into $1Bn+ business units for Cisco, including LAN switching, the enterprise Voice over Internet Protocol (VoIP) platform Webex, and home networking. The latter came as a result of Cisco acquiring Linksys in 2003 and was supplemented in 2010 with a new product line dubbed Cisco Valet. Cisco announced on January 12, 2005, that it would acquire Airespace for US$450 million to reinforce the wireless controller product lines. Cisco announced on January 4, 2007, that it would buy IronPort in a deal valued at US$830 million and completed the acquisition on June 25, 2007. IronPort was best known for its IronPort AntiSpam, its SenderBase email reputation service and its email security appliances. Accordingly, IronPort was integrated into the Cisco Security business unit. IronPort's SenderBase was renamed SensorBase to take account of the input into this database that other Cisco devices provide. SensorBase allows these devices to build a risk profile on IP addresses, therefore allowing risk profiles to be dynamically created for HTTP sites and SMTP email sources. In 2010, Cisco bought Starent Networks (a mobile packet core company) for $2.9 billion and Moto Development Group, a product design consulting firm that helped develop Cisco's Flip video camera. Also in 2010, Cisco became a key stakeholder in e-Skills Week. In March 2011, Cisco completed the acquisition of privately held network configuration and change management software company Pari Networks. Although many buy-ins (such as Crescendo Networks in 1993 and Tandberg in 2010) resulted in the acquisition of flagship technology for Cisco, many others have failed—partially or completely. For instance, although in 2010 Cisco occupied a meaningful share of the packet-optical market, revenues were still not on par with the US$7 billion price tag paid in 1999 for Cerent. Some of the acquired technologies (such as Flip from Pure Digital) saw their product lines terminated. Cisco announced on March 15, 2012, that it would acquire NDS Group for $5bn. The transaction was completed on July 30, 2012. In January 2013, Cisco Systems acquired Israeli software maker Intucell for around $475 million in cash, a move to expand its mobile network management offerings. In the same month, Cisco Systems acquired Cognitive Security, a company focused on cyber threat protection. Cisco also acquired SolveDirect (cloud services) in March 2013 and UK-based Ubiquisys (mobile software) in April 2013 for $310 million. Cisco acquired cyber-security firm Sourcefire in October 2013. On June 16, 2014, Cisco announced that it had completed the acquisition of ThreatGRID, a company that provided dynamic malware analysis and threat intelligence technology. On June 17, 2014, Cisco announced its intent to acquire privately held Tail-f Systems, a leader in configuration management software. On April 2, 2015, Cisco announced plans to buy Embrane, a software-defined networking startup. The deal would give Cisco Embrane's software platform, which provides layer 3–7 network services for things such as firewalls, VPN termination, server load balancers and SSL offload.
On May 7, 2015, Cisco announced plans to buy Tropo, a cloud API platform that simplifies the addition of real-time communications and collaboration capabilities within applications. On June 30, 2015, Cisco acquired privately held OpenDNS, the company best known for its DNS service that adds a level of security by monitoring domain name requests. On August 6, 2015, Cisco announced that it had completed the acquisition of privately held MaintenanceNet, the US-based company best known for its cloud-based contract management platform ServiceExchange. In the same month, Cisco acquired Pawaa, a privately held company in Bangalore, India, that provides secure on-premises and cloud-based file-sharing software. On September 30, 2015, Cisco announced its intent to acquire privately held Portcullis Computer Security, a UK-based company that provides cybersecurity services to enterprise clients and the government sector. On October 26, 2015, Cisco announced its intent to acquire ParStream, a privately held company based in Cologne, Germany, that provides an analytics database allowing companies to analyze large amounts of data and store it in near real time anywhere in the network. On October 27, 2015, Cisco announced that it would acquire Lancope, a company that focuses on detecting threat activity, for $452.5 million in a cash-and-equity deal. On June 28, 2016, Cisco announced its intent to acquire CloudLock, a privately held cloud security company founded in 2011 by three Israeli military veterans, for $293 million. The deal was expected to close in the first quarter of 2017. In August 2016, Cisco announced it was getting closer to a deal to acquire Springpath, the startup whose technology is used in Cisco's HyperFlex Systems; Cisco already owned an undisclosed stake in the hyper-converged provider. In September 2023, Cisco announced the discontinuation of its HyperFlex infrastructure products. In January 2017, Cisco announced it would acquire AppDynamics, a company that monitors application performance, for $3.7 billion. The acquisition came just one day before AppDynamics was set to IPO. On January 26, 2017, Cisco founded the Innovation Alliance in Germany with eleven other companies, bringing together 40 sites and 2,000 staff to provide small businesses in Germany with expertise. On August 1, 2017, Cisco completed the acquisition of Viptela Inc. for $610 million in cash and assumed equity awards. Viptela was a privately held software-defined wide area network (SD-WAN) company based in San Jose, California. On October 23, 2017, Cisco Systems announced it would be acquiring BroadSoft for $1.9 billion to further entrench itself in the cloud communication and collaboration area. On August 7, 2020, Cisco completed its acquisition of network intelligence company ThousandEyes for around $1 billion. On October 1, 2020, Cisco announced that it would be acquiring Israeli startup Portshift for a reported $100 million. On December 7, 2020, Cisco announced that it would be acquiring Slido, a Q&A and polling platform, to improve Q&A, polls and engagement in Webex videoconferencing. On the same day, Cisco announced the acquisition of UK-based IMImobile in a $730 million deal. On May 3, 2021, Cisco completed its acquisition of Slido, which it offers both as a standalone product and through integrations. In 2023, Cisco acquired the following cybersecurity companies: Valtix, Lightspin, and Armorblox. Cisco also announced its intention to acquire networking and security startup Isovalent later that year.
On September 21, 2023, Cisco announced the acquisition of cybersecurity firm Splunk in a $28 billion deal, its biggest acquisition to date; the acquisition was completed on March 18, 2024.
Ownership
As of 2017, Cisco Systems shares were mainly held by institutional investors (The Vanguard Group, BlackRock, State Street Corporation and others).
Facilities
Cisco is headquartered in San Jose, California, at 170 West Tasman Dr., with dozens of buildings comprising its corporate campus. Over 15,000 full-time employees are based at the San Jose campus and in the surrounding Bay Area. Cisco's second largest campus in the United States is located at Research Triangle Park in North Carolina, with 7,000 employees across 12 buildings. In August 2020, Cisco announced the creation of a new 130,000-square-foot Midwest headquarters in Chicago at the Old Chicago Main Post Office, accommodating 1,200 employees. Cisco maintains over 200 corporate offices in more than 80 countries. In July 2021, Cisco announced that all employees would have the option to work remotely on a permanent basis.
Products and services
Cisco's products and services focus on three market segments: enterprise, service provider, and midsize and small business. Cisco provides IT products and services across five major technology areas: networking (including Ethernet, optical, wireless and mobility), security, collaboration (including voice, video, and data), data center, and the Internet of Things. Cisco has grown increasingly popular in the Asia-Pacific region over the last three decades and is the dominant vendor in the Australian market, with leadership across all market segments. It uses its Australian office as one of the main headquarters for the Asia-Pacific region.
VoIP services
Cisco became a major provider of Voice over IP to enterprises and is now moving into the home user market through its acquisitions of Scientific Atlanta and Linksys.
Hosted Collaboration Solution (HCS)
Cisco partners can offer cloud-based services based on Cisco's virtualized Unified Computing System (UCS). It is part of the Cisco Unified Services Delivery Solution, which includes hosted versions of Cisco Unified Communications Manager (UCM), Cisco Unified Contact Center, Cisco Unified Mobility, Cisco Unified Presence, Cisco Unity Connection (unified messaging) and Cisco Webex Meeting Center.
Network Emergency Response
As part of its Crisis Response initiative, Cisco maintains several Network Emergency Response Vehicles (NERVs). The vehicles are maintained and deployed by Cisco employees during natural disasters and other public crises. The vehicles are self-contained and provide wired and wireless services, including voice and radio interoperability, voice over IP, network-based video surveillance, and secured high-definition video-conferencing for leaders and first responders in crisis areas, with up to 3-72 Mbit/s of bandwidth (up and down) via a 1.8-meter satellite antenna. NERVs are based at Cisco headquarters sites in San Jose, California, and at Research Triangle Park, North Carolina, allowing strategic deployment in North America. They can become fully operational within 15 minutes of arrival. High-capacity diesel fuel tanks allow the largest vehicles to run for up to 72 hours continuously. The NERV has been deployed to incidents such as the October 2007 California wildfires; hurricanes Gustav, Ike and Katrina; the 2010 San Bruno gas pipeline explosion; tornado outbreaks in North Carolina and Alabama in 2011; and Hurricane Sandy in 2012.
The Crisis Response Operations team maintains and deploys smaller, more portable communication kits to emergencies outside of North America. In 2010, the team deployed to assist in earthquake recovery in Haiti and in Christchurch, New Zealand. In 2011, they deployed to flooding in Brazil, as well as in response to the 2011 earthquake and tsunami in Japan. In 2011, Cisco received the Innovation Preparedness award from the American Red Cross Silicon Valley Chapter for its development and use of these vehicles in disasters.
Certifications
Cisco Systems also sponsors a line of IT professional certifications for Cisco products. There are five levels of certification: Entry (CCT), Associate (CCNA/CCDA), Specialist (Cisco Certified Specialist), Professional (CCNP/CCDP) and Expert (CCIE/CCDE), along with the more recent Architect level (CCAr), for which the CCDE is a prerequisite. These span several certification paths: Collaboration, CyberOps, Data Center, DevNet, Enterprise, Security, and Service Provider. A number of specialist technician, sales, and data center certifications are also available. Cisco also provides training for these certifications via a portal called the Cisco Networking Academy. Qualifying schools can become members of the Cisco Networking Academy and then provide CCNA-level or other courses. Cisco Academy instructors must be CCNA certified to become CCAI certified instructors. Through its Cisco Academy program, Cisco is involved with technical education in 180 countries. In March 2013, Cisco announced its interest in Myanmar by investing in two Cisco Networking Academies in Yangon and Mandalay and a channel partner network.
Corporate affairs
Awards and accolades
Cisco products, including IP phones and Telepresence, have been seen in movies and TV series. The company was featured in the documentary film Something Ventured, which premiered in 2011. Cisco was a 2002–03 recipient of the Ron Brown Award, a U.S. presidential honor to recognize companies "for the exemplary quality of their relationships with employees and communities". Cisco ranked number one in Great Place to Work's World's Best Workplaces 2019. In 2020, Fortune magazine ranked Cisco Systems at number four on its list of the Top 100 Companies to Work For, based on an employee satisfaction survey. According to a 2015 report by technology consulting firm LexInnova, Cisco was one of the leading recipients of network security-related patents, holding the largest portfolio among the companies surveyed (6,442 security-related patents). In 2024, Cisco was awarded Best Office Phone for its CP-8861 model by PhonePrices.co.uk.
Sponsorship
In February 2021, Webex signed a multi-year partnership with McLaren Racing as the Official Collaboration Partner of the team. In the following year, the partnership was extended to Cisco as the Official Technology Partner of the team. In October 2023, Cisco was announced as the Official Primary Partner of the McLaren F1 Academy programme. Cisco's branding is carried on Bianca Bustamante's car, race suit and team kit in the 2024 F1 Academy season.
Controversies
Shareholder relations
A class action lawsuit filed on April 20, 2001, accused Cisco of making misleading statements that "were relied on by purchasers of Cisco stock" and of insider trading. While Cisco denied all allegations in the suit, on August 18, 2006, Cisco's liability insurers, its directors and officers paid the plaintiffs US$91.75 million to settle the suit.
Intellectual property disputes
On December 11, 2008, the Free Software Foundation filed suit against Cisco regarding Cisco's failure to comply with the GPL and LGPL licenses and make the applicable source code publicly available. On May 20, 2009, Cisco settled this lawsuit by complying with FSF licensing terms and making a monetary contribution to the FSF. In October 2020, Cisco was ordered to pay US$1.9 billion to Centripetal Networks for infringing four cybersecurity patents.
Censorship in China
Cisco has been criticized for its involvement in censorship in the People's Republic of China. According to author Ethan Gutmann, Cisco and other telecommunications equipment providers supplied the Chinese government with surveillance and Internet infrastructure equipment that is used to block Internet websites and track online activities in China. Cisco has stated that it does not customize or develop specialized or unique filtering capabilities to enable governments to block access to information and that it sells the same equipment in China as it sells worldwide. Wired News uncovered a purported leaked, confidential PowerPoint presentation from Cisco that detailed the commercial opportunities of the Golden Shield Project of Internet control. In May 2011, a group of Falun Gong practitioners filed a lawsuit under the Alien Tort Statute alleging that Cisco knowingly developed and customized its products to assist the Chinese government in the prosecution and abuse of Falun Gong practitioners. The lawsuit was dismissed in September 2014 by the United States District Court for the Northern District of California; that decision was appealed to the United States Court of Appeals for the Ninth Circuit in September 2015. On July 7, 2023, the Ninth Circuit reversed the lower court's decision and ruled that the lawsuit may proceed to trial.
Tax fraud investigation
In October 2007, employees of Cisco's Brazilian unit were arrested on charges that they had imported equipment without paying import duties. In response, Cisco stated that it does not import directly into Brazil and instead uses middlemen.
Antitrust lawsuit
On December 1, 2008, Multiven filed an antitrust lawsuit against Cisco Systems, Inc. Multiven's complaint alleged that Cisco harmed Multiven and consumers by bundling and tying bug fixes/patches and updates for its operating system software to its maintenance services (SMARTnet). In May 2010, Cisco accused the person who filed the antitrust suit, British-Nigerian technology entrepreneur Peter Alfred-Adekeye, of hacking and pressured the US government to extradite him from Canada. Cisco settled the antitrust lawsuit two months after Alfred-Adekeye's arrest by making its software updates available to all Multiven customers.
Remotely monitoring users' connections
Cisco's Linksys E2700, E3500 and E4500 devices have been reported to be remotely updated to a firmware version that forces users to register for a cloud service, allows Cisco to monitor their network use, and can ultimately shut down the cloud service account, thus rendering the affected router unusable.
Firewall backdoor developed by NSA
According to the German magazine Der Spiegel, the NSA has developed JETPLOW to gain access to ASA (series 5505, 5510, 5520, 5540 and 5550) and 500-series PIX firewalls. Cisco's Chief Security Officer addressed the allegations publicly and denied working with any government to weaken Cisco products for exploitation or to implement security backdoors.
A document included in the trove of National Security Agency files released with Glenn Greenwald's book No Place to Hide details how the agency's Tailored Access Operations (TAO) unit and other NSA employees intercept servers, routers and other network gear being shipped to organizations targeted for surveillance and install covert firmware onto them before they are delivered. These Trojan horse systems were described by an NSA manager as being "some of the most productive operations in TAO because they pre-position access points into hard target networks around the world." Cisco denied the allegations in a customer document, saying that no information was included about specific Cisco products, supply chain intervention or implant techniques, or new security vulnerabilities. Cisco's general counsel also said that Cisco does not work with any government, including the United States government, to weaken its products. The allegations are reported to have prompted the company's CEO to express concern to the President of the United States. Whistleblowers such as Edward Snowden and journalist Julian Assange have echoed similar sentiments publicly.
Spherix patent suit
In March 2014, Cisco Systems was sued for patent infringement. Spherix claimed that over $43 billion of Cisco's sales infringed on old Nortel patents owned by Spherix. Spherix officials said that a wide range of Cisco products, from switches to routers, infringe on 11 former Nortel patents that the company now owns.
India net censorship role
Cisco Systems is alleged to be helping the Indian Jammu and Kashmir administration build a firewall that will prevent Internet users in Kashmir from accessing blacklisted websites, including social media portals, through fixed-line connections. Cisco denies the allegations.
Caste discrimination lawsuit
In 2020, a lawsuit was initiated against Cisco and two of its employees by the California Department of Fair Employment and Housing for alleged discrimination against an Indian engineer on account of his being from a lower caste than they were.
Xinjiang
In 2020, the Australian Strategic Policy Institute accused at least 82 major brands, including Cisco, of being connected to forced Uyghur labor in Xinjiang.
See also
Mass surveillance in the United States
Cisco certifications
Cisco IOS
Packet Tracer
Cisco Catalyst
Cisco DevNet
Cisco Express Forwarding
Cisco Discovery Protocol
Cisco Security Agent
Cisco Systems VPN Client
Cisco WebEx
Cisco Field
References
Further reading
Bunnell, D. (2000). Making the Cisco Connection: The Story Behind the Real Internet Superpower. Wiley.
Bunnell, D. & Brate, A. (2001). Die Cisco Story (in German). Moderne Industrie.
Paulson, E. (2001). Inside Cisco: The Real Story of Sustained M&A Growth. Wiley.
Slater, R. (2003). The Eye of the Storm: How John Chambers Steered Cisco Through the Technology Collapse. HarperCollins.
Stauffer, D. (2001). Nothing but Net Business the Cisco Way. Wiley.
Waters, J. K. (2002). John Chambers and the Cisco Way: Navigating Through Volatility. Wiley.
Young, J. S. (2001). Cisco Unauthorized: Inside the High-Stakes Race to Own the Future. Prima Lifestyles.
External links 1984 establishments in California American companies established in 1984 Manufacturing companies based in San Jose, California Companies in the Dow Jones Industrial Average Companies in the Nasdaq-100 Companies listed on the Nasdaq Companies in the Dow Jones Global Titans 50 Computer companies established in 1984 Computer companies of the United States Computer hardware companies Computer systems companies Deep packet inspection Electronics companies established in 1984 Multinational companies headquartered in the United States Networking companies of the United States Networking hardware companies Technology companies based in the San Francisco Bay Area Technology companies established in 1984 Telecommunications equipment vendors Videotelephony Companies listed on the Hong Kong Stock Exchange 1990 initial public offerings
Cisco
[ "Technology" ]
7,185
[ "Computer hardware companies", "Computer systems companies", "Computers", "Computer systems" ]
51,761
https://en.wikipedia.org/wiki/Elias%20gamma%20coding
Elias γ code or Elias gamma code is a universal code encoding positive integers developed by Peter Elias. It is used most commonly when coding integers whose upper bound cannot be determined beforehand.
Encoding
To code a number x ≥ 1:
Let N = ⌊log₂ x⌋ be the highest power of 2 it contains, so 2^N ≤ x < 2^(N+1).
Write out N zero bits, then
Append the binary form of x, an (N+1)-bit binary number.
An equivalent way to express the same process:
Encode N in unary; that is, as N zeroes followed by a one.
Append the remaining N binary digits of x to this representation of N.
To represent a number x, Elias gamma (γ) uses 2⌊log₂ x⌋ + 1 bits. The code begins as follows (the implied probability distribution for the code is added for clarity):
1 → 1 (implied probability 1/2)
2 → 010 (1/8)
3 → 011 (1/8)
4 → 00100 (1/32)
5 → 00101 (1/32)
6 → 00110 (1/32)
7 → 00111 (1/32)
8 → 0001000 (1/128)
9 → 0001001 (1/128)
...
Decoding
To decode an Elias gamma-coded integer:
Read and count 0s from the stream until you reach the first 1. Call this count of zeroes N.
Considering the one that was reached to be the first digit of the integer, with a value of 2^N, read the remaining N digits of the integer.
Uses
Gamma coding is used in applications where the largest encoded value is not known ahead of time, or to compress data in which small values are much more frequent than large values. Gamma coding can be more size efficient in those situations. For example, note that, in the table above, if a fixed 8-bit size is chosen to store a small number like 5, the resulting binary would be 00000101, while the γ-encoded variable-bit version would be 00 1 01, needing 3 fewer bits. Conversely, a bigger value like 254 stored in a fixed 8-bit size would be 11111110, while the γ-encoded variable-bit version would be 0000000 1 1111110, needing 7 extra bits. Gamma coding is a building block in the Elias delta code.
Generalizations
Gamma coding does not code zero or negative integers. One way of handling zero is to add 1 before coding and then subtract 1 after decoding. Another way is to prefix each nonzero code with a 1 and then code zero as a single 0. One way to code all integers is to set up a bijection, mapping integers (0, −1, 1, −2, 2, −3, 3, ...) to (1, 2, 3, 4, 5, 6, 7, ...) before coding. In software, this is most easily done by mapping non-negative inputs to odd outputs and negative inputs to even outputs, so the least-significant bit becomes an inverted sign bit (a code sketch is given below).
Exponential-Golomb coding generalizes the gamma code to integers with a "flatter" power-law distribution, just as Golomb coding generalizes the unary code. It involves dividing the number by a positive divisor, commonly a power of 2, writing the gamma code for one more than the quotient, and writing out the remainder in an ordinary binary code.
See also
References
Further reading
Entropy coding Numeral systems Data compression
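The encoding and decoding steps above, together with the sign mapping just described, can be sketched in a few lines of C++. This sketch is illustrative and not part of the original article: the bit stream is modeled as a string of '0'/'1' characters for readability, and the names eliasGammaEncode, eliasGammaDecode and mapSignedToPositive are hypothetical.
#include <cstdint>
#include <iostream>
#include <string>

// Elias gamma encoding of x >= 1: N zeros, then the (N+1)-bit binary form of x.
std::string eliasGammaEncode(uint64_t x)
{
    int n = 0;
    while ((x >> (n + 1)) != 0) ++n;          // n = floor(log2(x))
    std::string out(n, '0');                  // N leading zeros
    for (int i = n; i >= 0; --i)              // x, most significant bit first
        out += ((x >> i) & 1) ? '1' : '0';
    return out;
}

// Decode one gamma codeword starting at bits[pos]; pos is advanced past it.
uint64_t eliasGammaDecode(const std::string& bits, size_t& pos)
{
    int n = 0;
    while (bits[pos] == '0') { ++n; ++pos; }  // count leading zeros
    ++pos;                                    // consume the terminating '1'
    uint64_t x = 1;                           // that '1' has value 2^n
    for (int i = 0; i < n; ++i, ++pos)        // read the remaining n bits
        x = (x << 1) | (bits[pos] == '1' ? 1u : 0u);
    return x;
}

// Bijection from the Generalizations section: non-negative -> odd, negative -> even,
// i.e. (0, -1, 1, -2, 2, ...) -> (1, 2, 3, 4, 5, ...). Overflow at INT64_MIN is ignored.
uint64_t mapSignedToPositive(int64_t v)
{
    return v >= 0 ? 2 * (uint64_t)v + 1 : 2 * (uint64_t)(-v);
}

int main()
{
    std::string code = eliasGammaEncode(5);   // "00101", matching the table above
    size_t pos = 0;
    std::cout << code << " decodes to " << eliasGammaDecode(code, pos) << "\n";
    std::cout << "mapSignedToPositive(-3) = " << mapSignedToPositive(-3) << "\n";  // 6
}
Encoding 5 produces 00101 and decoding recovers 5, consistent with the table above; in a real codec the string would be replaced by a packed bit writer.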
Elias gamma coding
[ "Mathematics" ]
631
[ "Numeral systems", "Mathematical objects", "Numbers" ]
51,766
https://en.wikipedia.org/wiki/Elias%20delta%20coding
Elias δ code or Elias delta code is a universal code encoding the positive integers developed by Peter Elias.
Encoding
To code a number X ≥ 1:
Let N = ⌊log₂ X⌋ be the highest power of 2 in X, so 2^N ≤ X < 2^(N+1).
Let L = ⌊log₂(N+1)⌋ be the highest power of 2 in N+1, so 2^L ≤ N+1 < 2^(L+1).
Write L zeros, followed by the (L+1)-bit binary representation of N+1, followed by all but the leading bit (i.e. the last N bits) of X.
An equivalent way to express the same process:
Separate X into the highest power of 2 it contains (2^N) and the remaining N binary digits.
Encode N+1 with Elias gamma coding.
Append the remaining N binary digits to this representation of N+1.
To represent a number X, Elias delta (δ) uses ⌊log₂ X⌋ + 2⌊log₂(⌊log₂ X⌋ + 1)⌋ + 1 bits. This is useful for very large integers, where the overall encoded representation ends up using fewer bits than what one might obtain using Elias gamma coding, because only the ⌊log₂(⌊log₂ X⌋ + 1)⌋ portion of the previous expression is doubled, rather than the ⌊log₂ X⌋ portion. The code begins:
1 → 1
2 → 0100
3 → 0101
4 → 01100
5 → 01101
6 → 01110
7 → 01111
8 → 00100000
9 → 00100001
...
To decode an Elias delta-coded integer:
Read and count zeros from the stream until you reach the first one. Call this count of zeros L.
Considering the one that was reached to be the first digit of an integer, with a value of 2^L, read the remaining L digits of the integer. Call this integer N+1, and subtract one to get N.
Put a one in the first place of our final output, representing the value 2^N.
Read and append the following N digits.
Example: 001010011
1. 2 leading zeros in 001
2. read 2 more bits, i.e. 00101
3. decode N+1 = 00101 = 5
4. get N = 5 − 1 = 4 remaining bits for the complete code, i.e. '0011'
5. encoded number = 2^4 + 3 = 19
This code can be generalized to zero or negative integers in the same ways described in Elias gamma coding.
Example code
Encoding
void eliasDeltaEncode(char* source, char* dest)
{
    IntReader intreader(source);
    BitWriter bitwriter(dest);
    while (intreader.hasLeft())
    {
        int num = intreader.getInt();
        int len = 0;
        int lengthOfLen = 0;
        len = 1 + floor(log2(num));        // calculate 1+floor(log2(num))
        lengthOfLen = floor(log2(len));    // calculate floor(log2(len))
        for (int i = lengthOfLen; i > 0; --i)
            bitwriter.outputBit(0);
        for (int i = lengthOfLen; i >= 0; --i)
            bitwriter.outputBit((len >> i) & 1);
        for (int i = len - 2; i >= 0; i--)
            bitwriter.outputBit((num >> i) & 1);
    }
    bitwriter.close();
    intreader.close();
}
Decoding
void eliasDeltaDecode(char* source, char* dest)
{
    BitReader bitreader(source);
    IntWriter intwriter(dest);
    while (bitreader.hasLeft())
    {
        int num = 1;
        int len = 1;
        int lengthOfLen = 0;
        while (!bitreader.inputBit())      // potentially dangerous with malformed files.
            lengthOfLen++;
        for (int i = 0; i < lengthOfLen; i++)
        {
            len <<= 1;
            if (bitreader.inputBit())
                len |= 1;
        }
        for (int i = 0; i < len - 1; i++)
        {
            num <<= 1;
            if (bitreader.inputBit())
                num |= 1;
        }
        intwriter.putInt(num);             // write out the value
    }
    bitreader.close();
    intwriter.close();
}
Generalizations
Elias delta coding does not code zero or negative integers. One way to code all non-negative integers is to add 1 before coding and then subtract 1 after decoding. One way to code all integers is to set up a bijection, mapping all integers (0, 1, −1, 2, −2, 3, −3, ...) to strictly positive integers (1, 2, 3, 4, 5, 6, 7, ...) before coding. This bijection can be performed using the "ZigZag" encoding from Protocol Buffers (not to be confused with Zigzag code, nor the JPEG Zig-zag entropy coding); a sketch of such a mapping is given below.
See also
Elias gamma (γ) coding
Elias omega (ω) coding
Golomb-Rice code
References
Further reading
(NB. The Elias δ code coincides with Hamada's URR representation.)
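As a small illustrative sketch of the bijection mentioned in the Generalizations section (not part of the original article; the names mapToPositive and mapFromPositive are invented here, and overflow at INT64_MIN is ignored), the mapping (0, 1, −1, 2, −2, ...) → (1, 2, 3, 4, 5, ...) and its inverse might look like this in C++:
#include <cstdint>
#include <iostream>

// Positive v -> even values 2v; zero and negative v -> odd values 2(-v)+1.
// This realizes (0, 1, -1, 2, -2, 3, -3, ...) -> (1, 2, 3, 4, 5, 6, 7, ...).
uint64_t mapToPositive(int64_t v)
{
    return v > 0 ? 2 * (uint64_t)v : 2 * (uint64_t)(-v) + 1;
}

// Inverse: even values came from positive v, odd values from zero or negative v.
int64_t mapFromPositive(uint64_t m)
{
    return (m % 2 == 0) ? (int64_t)(m / 2) : -(int64_t)((m - 1) / 2);
}

int main()
{
    const int64_t vals[] = {0, 1, -1, 2, -2, 3, -3};
    for (int64_t v : vals)
    {
        uint64_t m = mapToPositive(v);
        std::cout << v << " -> " << m << " -> " << mapFromPositive(m) << "\n";
    }
}
The mapped values are strictly positive and can then be fed to a delta (or gamma) encoder such as the example routines above.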
Entropy coding Numeral systems Data compression
Elias delta coding
[ "Mathematics" ]
1,102
[ "Numeral systems", "Mathematical objects", "Numbers" ]