| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
17,599,701 | https://en.wikipedia.org/wiki/Burn%20recovery%20bed | A burn recovery bed or burn bed is a special type of bed designed for hospital patients who have suffered severe skin burns across large portions of their body.
Generally, concentrated pressure on any one spot of the damaged skin can be extremely painful to the patient, so the primary function of a burn bed is to distribute the weight of the patient so evenly that no single bed contact point is pressed harder than any other.
Air-chamber burn bed
One type of weight-distributing burn bed uses a series of interlinked inflatable air chambers which have the surface appearance of an upside-down egg carton. Although inflatable, the air chambers are maintained in a partially deflated state so that the air pressure can freely distribute itself. Heavier parts of the patient's body can sink deeper into the grid of chambers and the air moves to chambers with less weight.
Air volume in the chambers may be regulated so as to make the bed firmer when the patient is first being placed on the bed; air is then released to allow a more conformal shape once the patient is lying flat across the bed surface.
Deep-floatation water burn bed
This type of burn bed is similar in construction to a typical water bed, except the surface covering of the water pool has a large amount of slack and extra folds of material around the perimeter of the pool.
To limit the depth of immersion into the burn bed water pool, the water's density may be increased by adding several hundred pounds of salt to the water, as is done with a relaxation float tank.
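The depth limit follows from Archimedes' principle: a floating body sinks until it displaces its own weight of fluid, so the immersed fraction equals the ratio of the body's average density to the fluid's density. The sketch below only illustrates that effect with assumed, round-number densities (roughly 985 kg/m3 for a human body, 1,000 kg/m3 for fresh water, and about 1,250 kg/m3 for a heavily salted solution); the figures are not taken from this article's sources.

```python
def immersed_fraction(body_density, fluid_density):
    """Fraction of a floating body below the fluid surface (Archimedes' principle)."""
    return min(body_density / fluid_density, 1.0)

BODY = 985.0           # kg/m^3, illustrative average density of a human body
FRESH_WATER = 1000.0   # kg/m^3
SALTED_WATER = 1250.0  # kg/m^3, roughly a heavily salted float-tank solution

for name, rho in [("fresh water", FRESH_WATER), ("salted water", SALTED_WATER)]:
    print(f"{name}: {immersed_fraction(BODY, rho):.0%} of the body immersed")
# fresh water: 98% of the body immersed
# salted water: 79% of the body immersed
```

Raising the fluid density therefore leaves more of the patient supported above the surface, which is why salt is added to limit how deeply the patient sinks.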
As the patient is placed onto the bed, they displace the water and can freely sink down into the pool, unlike on a typical consumer water bed. As they sink, the slack around the edges plays out, so that the patient settles into a form-fitting, very gentle, and dry depression in the pool.
Generally the pool is not deep enough to permit a patient to lie on their side in the pool, but even so, lying sideways is a safe condition since the patient is at no risk of breathing in water and drowning due to the water-isolation covering.
See also
Skin graft
References
Medical equipment
Beds
Burns | Burn recovery bed | [
"Biology"
] | 443 | [
"Behavior",
"Beds",
"Medical equipment",
"Sleep",
"Medical technology"
] |
17,600,071 | https://en.wikipedia.org/wiki/Polydioctylfluorene | Polydioctylfluorene (PFO) is an organic compound, a polymer of 9,9-dioctylfluorene, with formula (C13H6(C8H17)2)n. It is an electroluminescent conductive polymer that characteristically emits blue light. Like other polyfluorene polymers, it has been studied as a possible material for light-emitting diodes.
Structure
The monomer has an aromatic fluorene core, -C13H6-, with two aliphatic n-octyl tails, -C8H17, attached to the central carbon. Polydioctylfluorene (PFO) can adopt liquid-crystalline, glassy, amorphous, semi-crystalline, or β-chain formations. This variety arises from the intermolecular forces in which PFO can participate. The secondary forces in PFO are typically van der Waals forces, which are relatively weak; these weak forces make it a solid that can also be used as a film on a substrate. The glassy films formed by PFO chains dissolve in good solvents, meaning the polymer is at least partially soluble. The same van der Waals forces add complexity to the microstructure of PFO, which is why it exhibits such a wide range of solid formations. These solid formations, however, typically have low density because of the polymer's low cooling rate. The density of polydioctylfluorene is measured using ultraviolet photoelectron spectroscopy. Chain stiffness is also prominent in PFO; because of this, its molecular weight is predicted to be a factor of 2.7 lower than that of polystyrene, giving an approximation of 190 repeat units in a standard PFO chain. Changing the strain and temperature applied to the polymer's structure alters PFO's properties. Thermal treatment such as friction transfer can be applied to the structure as a way of altering the properties; friction transfer aligns the structure so that it becomes crystalline or liquid crystalline. Polymer 196 is the most commonly studied type of polydioctylfluorene; in studies it has shown the most promising properties and the best crystallinity. Within the crystal structure of polymer 196, octyl side chains are inserted between the layers of the polymer to provide more space for efficient packing of the material.
In studies, the structure of polydioctylfluorene was observed using grazing-incidence X-ray diffraction after friction had been applied to it. Experiments revealed that, after cooling and friction transfer, PFO was present in both crystalline and liquid-crystalline films. As a result of the friction exerted, the twofold symmetry of PFO was broken. The friction transfer used to obtain a single-crystal film is important in the fabrication of polarized light-emitting diodes.
Properties
Polydioctylfluorene is also known as polymer 196 within the polyfluorene family. The molar mass of PFO ranges between 24,000 and 41,600 g/mol, and because the molar mass varies, many other properties vary as well. For example, the glass transition temperature can fall anywhere between 72 and 113 degrees Celsius. The wavelength emitted by PFO ranges between 386 and 389 nm in a CHCl3 solution and falls around 389 nm in a THF solution, while the emission wavelength of a PFO film falls between 380 and 394 nm. The melting point of crystalline PFO is predicted to be about 150 degrees Celsius.
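As a rough consistency check on these figures, the number of repeat units implied by the reported molar mass range can be estimated by dividing it by the molar mass of the 9,9-dioctylfluorene repeat unit (C29H40, about 389 g/mol, taken from the formula given above). The sketch below is only an illustrative back-of-the-envelope calculation, not a figure from this article's sources.

```python
# Molar mass of one repeat unit, C29H40 (C13H6 core plus two C8H17 tails).
M_C, M_H = 12.011, 1.008           # g/mol
repeat_unit = 29 * M_C + 40 * M_H  # ~388.6 g/mol

for polymer_mass in (24_000, 41_600):   # reported molar mass range, g/mol
    n = polymer_mass / repeat_unit
    print(f"{polymer_mass:>6} g/mol  ->  ~{n:.0f} repeat units")
#  24000 g/mol  ->  ~62 repeat units
#  41600 g/mol  ->  ~107 repeat units
```

This simple estimate gives fewer repeat units than the roughly 190 quoted in the Structure section, which was derived from a comparison with polystyrene rather than directly from the molar mass range.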
There have also been reports that some of the solid states of polydioctylfluorene are composed of sheet-like layers about 50–100 nm thick. These sheets give rise to the glassy and semicrystalline states (but not the amorphous, liquid-crystalline, or β-chain states). When cooled quickly, the chains align tightly, giving PFO a close packing factor, though because of the high complexity of the chains the packing is sometimes disordered, producing the amorphous state. The parts of the molecule that add this complexity are the carbon rings located in the backbone, which make the molecule large overall.
Applications
The formation of beta-phase chains in PFO can be achieved through dip-pen nanolithography, to represent wavelength changes in metamaterials. The dip-pen technique allows features on the order of 500 nm to be visible. The beta chains can be converted into the glassy films by adding extra stress to the main fluorene backbone unit; whether beta chains are formed is determined by peaks in wavelength absorption. Beta chains can also be confirmed to be present by using solvent-to-non-solvent mixtures: if the molecule is dipped into such a mixture for ten seconds, chains that show no dissolution of the film are able to produce these beta chains.
Polydioctylfluorene is used in polymer light-emitting devices (PLEDs), in which it bonds covalently to the carbon-hydrogen chains. PFO is a copolymer of basic polyfluorene, which enables it to emit phosphorescent light. The basic fluorene backbone strengthens the molecule on account of its carbon rings. Cross-linking in the polydioctylfluorene structure provides an efficient route to hole-transport layers for light emission, and adding a solvent-polymer compound allows the β-phase crystalline structure to be maintained. Current efficiency can reach a maximum of about 17 cd/A, and the maximum luminance obtained can be approximately 14,000 cd/m². The hole-transport layers (HTLs) improve hole injection from the anode and greatly increase electron blocking. The ability to control the microstructure of the phase domains provides an opportunity to optimize the optoelectronic properties of PFO-based products. When the requirements for optoelectronic emission are met in polydioctylfluorene, the electroluminescence given off depends on the active layer in the conjugated polymer. Another way to affect the optoelectronic properties is to alter how densely the phase chain segments are ordered: low densities can be achieved through very slow crystallization, while directional crystallization can be achieved by use of thermal gradients.
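For context, current efficiency (in cd/A) relates luminance to driving current density through the standard device relation L = η·J. The short sketch below is only an illustrative calculation, using the two figures quoted above, of the current density that would be needed to reach the quoted maximum luminance; it is not a measurement from this article's sources.

```python
eta = 17.0        # current efficiency, cd/A
L_max = 14_000.0  # maximum luminance, cd/m^2

J = L_max / eta   # A/m^2, since L = eta * J
print(f"~{J:.0f} A/m^2  (~{J / 10:.0f} mA/cm^2)")
# ~824 A/m^2  (~82 mA/cm^2)
```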
References
Organic polymers
Conductive polymers | Polydioctylfluorene | [
"Chemistry"
] | 1,350 | [
"Organic compounds",
"Organic polymers",
"Molecular electronics",
"Conductive polymers"
] |
17,600,105 | https://en.wikipedia.org/wiki/Scrim%20and%20sarking | Scrim and sarking is a method of interior construction widely used in Australia and New Zealand in the late 19th and early 20th centuries. In this method, wooden panels were nailed over the beams and joists of a house frame, and a heavy, loosely woven cloth, called scrim, was then stapled or tacked over the wood panels. This construction method allowed wallpaper to be applied directly.
In New Zealand, the sarking was often the native rimu (red pine), and the scrim was usually either jute or hessian. It is easy to tell whether walls have scrim and sarking as their basis: knocking on the wall produces the sound of the wood, and any wallpaper laid over the top has an uneven finish. In many instances, the scrim will come loose from the sarking, in which case the wallpaper will appear to float loose from the wall.
Disuse
Compared with more modern forms of interior wall surfacing, scrim and sarking has poor insulation properties and can encourage damp. It is also more costly to insure homes with scrim and sarking walls, as they pose a fire danger. For these reasons, home renovation will often see it replaced with gypsum-based wallboards.
References
Interior design
Construction
Timber framing | Scrim and sarking | [
"Physics",
"Technology",
"Engineering"
] | 263 | [
"Timber framing",
"Materials stubs",
"Structural system",
"Construction",
"Materials",
"Matter"
] |
17,600,324 | https://en.wikipedia.org/wiki/Q%20band | The Q band is a range of frequencies contained in the microwave region of the electromagnetic spectrum. Common usage places this range between 33 and 50 GHz, but may vary depending on the source using the term. The foregoing range corresponds to the recommended frequency band of operation of WR22 waveguides. These frequencies are equivalent to wavelengths between 6 mm and 9.1 mm in air/vacuum. The Q band is in the EHF range of the radio spectrum.
The term "Q band" does not have a consistently precise usage in the technical literature, but tends to be a concurrent subset of both the IEEE designated Ka band (26.5–40 GHz) and V band (40–75 GHz). Neither the IEEE nor the ITU-R recognize the Q band in their standards, which define the nomenclature of bands in the electromagnetic spectrum. The ISO recognizes the Q band; however, the range therefore defined is 36 to 46 GHz. Other ISO frequency band definitions do not precisely match the concurrent definitions of the IEEE and ITU-R.
The Q band is mainly used for satellite communications, terrestrial microwave communications and for radio astronomy studies such as the QUIET telescope. It is also used in automotive radar and in radar investigating the properties of the Earth's surface.
References
Microwave bands
Satellite broadcasting | Q band | [
"Engineering"
] | 261 | [
"Telecommunications engineering",
"Satellite broadcasting"
] |
17,600,647 | https://en.wikipedia.org/wiki/Nightcap%20%28drink%29 | A nightcap is a drink taken shortly before bedtime. For example, a small alcoholic beverage or glass of warm milk can supposedly promote a good night's sleep.
Alcoholic nightcaps and sleep
In folk medicine, a nightcap is consumed to induce sleep. Many doctors do not recommend alcohol as a sleep aid because it interferes with sleep quality. In low doses, however, alcohol has sleep-promoting benefits, and some popular sleep medicines include 10% alcohol, although the effects of alcohol on sleep can wear off somewhat after several nights of consecutive use.
Nightcaps can be neat or mixed. They should not be served chilled or on the rocks, because a nightcap is supposed to induce a feeling of warmth. The hot toddy is usually considered the original nightcap. Other traditional nightcaps include brown liquor like brandy or bourbon, and cream-based liqueurs such as Irish cream. Wine, especially fortified, can also function as a nightcap. Since some nightcaps are made of amaro, a digestif, they are believed to also make settling into bed easier by promoting digestion.
Non-alcoholic
A nightcap was originally alcoholic, since it warms drinkers and helps them sleep, just like the garment of the same name. However, warm milk is often recommended as a nightcap for inducing sleep, as it contains both tryptophan and calcium. The effectiveness of warm milk for inducing sleep is disputed.
In 1930, Ovaltine was advertised as "the world's best 'night-cap' to ensure sound, natural sleep".
See also
Apéritif and digestif
Sleep hygiene
References
External links
Drinking culture
Sleep medicine | Nightcap (drink) | [
"Biology"
] | 343 | [
"Behavior",
"Sleep",
"Sleep medicine"
] |
17,601,160 | https://en.wikipedia.org/wiki/Economic%20epidemiology | Economic epidemiology is a field at the intersection of epidemiology and economics. Its premise is to incorporate incentives for healthy behavior and their attendant behavioral responses into an epidemiological context to better understand how diseases are transmitted. This framework should help improve policy responses to epidemic diseases by giving policymakers and health-care providers clear tools for thinking about how certain actions can influence the spread of disease transmission.
The main context through which this field emerged was the idea of prevalence-dependence, or disinhibition, which suggests that individuals change their behavior as the prevalence of a disease changes. However, economic epidemiology also encompasses other ideas, including the role of externalities, global disease commons and how individuals’ incentives can influence the outcome and cost of health interventions.
Strategic epidemiology is a branch of economic epidemiology that adopts an explicitly game theoretic approach to analyzing the interplay between individual behavior and population wide disease dynamics.
Prevalence-dependence
The spread of an infectious disease is a population-level phenomenon, but decisions to prevent or treat a disease are typically made by individuals who may change their behavior over the course of an epidemic, especially if their perception of risk changes depending on the available information on the epidemics – their decisions will then have population-level consequences. For example, an individual may choose to have unsafe sex or a doctor may prescribe antibiotics to someone without a confirmed bacterial infection. In both cases, the choice may be rational from the individual's point of view but undesirable from a societal perspective.
Limiting the spread of disease at the population level requires changing individual behavior, which in turn depends on what information individuals have about the level of risk. When risk is low, people will tend to ignore it. However, if the risk of infection is higher, individuals are more likely to take preventive action. Moreover, the more transmissible the pathogen, the greater the incentive is to make personal investments for control.
The converse is also true: if there is a lowered risk of disease, either through vaccination or because of lowered prevalence, individuals may increase their risk-taking behavior. This effect is analogous to the introduction of safety regulations such as seatbelts in cars: because seatbelts reduce the expected cost of an accident in terms of injury and death, they could lead people to drive with less caution, and the resulting injuries to nonoccupants and increase in nonfatal crashes may offset some of the gains from seatbelt use.
Prevalence-dependent behavior introduces a crucial difference with respect to the way individuals respond when the prevalence of a disease increases. If behavior is exogenous or if behavioral responses are assumed to be inelastic with respect to disease prevalence, the per capita risk of infection in the susceptible population increases as prevalence increases. In contrast, when behavior is endogenous and elastic, hosts can act to reduce their risks. If their responses are strong enough, they can reduce the average per capita risk and offset the increases in the risk of transmission associated with higher prevalence.
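To make the distinction concrete, the sketch below contrasts a standard SIR-style epidemic with one in which the contact rate falls as prevalence rises (prevalence-elastic behavior). It is a minimal illustration with made-up parameter values and a deliberately simple response function, not a model taken from the economic-epidemiology literature.

```python
def sir(days, beta0, gamma=0.1, elasticity=0.0, i0=0.001):
    """Discrete-time SIR; the contact rate shrinks as prevalence rises when elasticity > 0."""
    s, i, peak = 1.0 - i0, i0, i0
    for _ in range(days):
        beta = beta0 / (1.0 + elasticity * i)   # endogenous, prevalence-elastic behavior
        new_infections = beta * s * i
        s, i = s - new_infections, i + new_infections - gamma * i
        peak = max(peak, i)
    return peak

# With exogenous behavior the epidemic peaks much higher than when hosts respond to prevalence.
print(f"exogenous behavior : peak prevalence {sir(300, beta0=0.3):.1%}")
print(f"prevalence-elastic : peak prevalence {sir(300, beta0=0.3, elasticity=50):.1%}")
```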
Alternatively, the waning of perceived risk, either through the diminution of prevalence or the introduction of a vaccine, may lead to increases in risky behavior. For example, models suggested that the introduction of highly active antiretroviral therapy (HAART), which significantly reduced the morbidity and mortality associated with HIV/AIDS, may lead to increases in the incidence of HIV as the perceived risk of HIV/AIDS decreased.
Recent analysis suggests that an individual's likelihood of engaging in unprotected sex is related to their personal assessment of risk: those who believed that receiving HAART or having an undetectable viral load protects against transmitting HIV, or who were less concerned about engaging in unsafe sex given the availability of HAART, were more likely to engage in unprotected sex regardless of HIV status.
This behavioral response can have important implications for the timing of public interventions, because prevalence and public subsidies may compete to induce protective behavior. In other words, if prevalence induces the same sort of protective behavior as public subsidies, the subsidies become irrelevant because people will choose to protect themselves when prevalence is high, regardless of the subsidy, and subsidies may not be helpful at the times when they are typically applied.
Although STDs are logical targets for examining the role of human behavior in a modeling framework, personal actions are important for other infectious diseases as well. The rapidity with which individuals reduce their contact rate with others during an outbreak of a highly transmissible disease can significantly affect the spread of the disease. Even small reductions in the contact rate can be important, especially for diseases like influenza or severe acute respiratory syndrome (SARS). However, this may also affect policy planning for a biological attack with a disease such as smallpox.
Individual behavioral responses to interventions for non-sexually transmitted diseases are also important. For example, mass spraying to reduce malaria transmission can reduce the irritating effects of biting by nuisance mosquitoes and so lead to reduced personal use of bednets. Economic epidemiology strives to incorporate these types of behavior responses into epidemiological models to enhance a model's utility in evaluating control measures.
Vaccination
Immunization represents a classic case of a social dilemma: a conflict of interest between the private gains of individuals and the collective gains of society, and prevalence-dependent behavior may have significant effects on vaccine policy formation. For instance, an analysis of the hypothetical introduction of a vaccine that would reduce (though not eliminate) the risk of contracting HIV found that individual levels of risk behavior were a significant barrier to eliminating HIV, as small changes in behavior could actually increase the incidence and prevalence of HIV even if the vaccine were highly efficacious. These results, as well as others, may have contributed to a decision not to release existing semi-efficacious vaccines.
An individual's self-interest and choice often leads to a vaccination uptake rate less than the social optimum as individuals do not take into account the benefit to others. In addition, prevalence dependent behavior suggests how the introduction of a vaccine may affect the spread of a disease. As the prevalence of a disease increases, people will demand to be vaccinated. As prevalence decreases, however, the incentive, and thus demand, will slacken and allow the susceptible population to increase until the disease can reinvade. As long as a vaccine is not free, either monetarily or through true or even perceived side effects, demand will be insufficient to pay for the vaccine at some point, leaving some people unvaccinated. If the disease is contagious, it could then begin spreading again among non-vaccinated individuals. Thus, it is impossible to eradicate a vaccine-preventable disease through voluntary vaccination if people act in their own self-interest.
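The argument in this paragraph can be illustrated with a toy best-response calculation: each individual vaccinates only if their perceived infection risk, which falls as coverage rises, outweighs the perceived cost of the vaccine. The sketch below uses entirely hypothetical numbers and a deliberately crude risk function; it shows the general result that equilibrium coverage stalls just below the herd-immunity threshold whenever the vaccine carries any positive perceived cost.

```python
R0 = 4.0             # hypothetical basic reproduction number
COST_VACCINE = 0.02  # perceived cost of vaccinating, relative to a cost of infection of 1.0
STEP = 0.0001        # sliver of the population that vaccinates each round

def infection_risk(coverage):
    """Crude long-run infection probability for an unvaccinated person at a given coverage."""
    return max(0.0, 1.0 - 1.0 / (R0 * (1.0 - coverage)))  # reaches zero near 1 - 1/R0

# Iterate individual best responses until no unvaccinated person still wants the vaccine.
coverage = 0.0
while coverage < 1.0 and infection_risk(coverage) > COST_VACCINE:
    coverage += STEP

herd_immunity = 1.0 - 1.0 / R0
print(f"equilibrium coverage ~{coverage:.2f}; herd-immunity threshold {herd_immunity:.2f}")
# equilibrium coverage ~0.74; herd-immunity threshold 0.75
```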
COVID-19
The idea of intertwining epidemiology and economics is relatively new, first appearing in the early 1990s amidst the HIV/AIDS epidemic. Epidemiologists at the time realized that the disease was spread through individuals' decisions around sex, and reasoned that behavior must therefore be treated as an endogenous variable within a Nash equilibrium, thereby linking the problem with economics, since outcomes could then be predicted. Both economics and epidemiology are also influenced by utilitarianism, in the form of "doing the most good for the most people" or cost-benefit analysis, as both fields hope to find net positives in the outcomes of their decisions. The SARS-CoV-2 pandemic and its fallout, however, have brought extremely relevant and timely data to researchers in this field.
From January 1, 2020, until December 4, 2022, there were a centrally estimated 1,277,204 excess deaths related to the COVID-19 pandemic, the majority of them attributed to the disease itself. Somewhat as John Snow discovered the vector for cholera through water pumps, epidemiologists were able to track community spread of COVID-19 through municipal wastewater systems. These excess deaths are often thought of in terms of the human loss, the relationships and family members no longer with us, but there is also an economic side to these excess mortalities. According to data from the World Bank, in 2021 the average GDP per capita in the United States was $69,288. Despite the shortcomings of gross domestic product in this scenario, it serves as a reasonable variable for describing the lost economic output due to these excess deaths. Multiplying excess deaths by GDP per capita suggests that the United States has lost around $88.5 billion in total output due to excess deaths during the COVID-19 pandemic. The costs of the pandemic can also be extrapolated to the cost of vaccine development and deployment, the cost of shutdowns or the lack thereof (i.e., lost work, lost spending, low-risk areas being closed), the extra health spending for patients who did not need it or could have avoided hospitalization if vaccinated, the fiscal stimulus provided by the government, the lost value of retirement accounts, and the broader effects of inflation.
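A minimal sketch of the back-of-the-envelope output calculation described above, using only the two figures quoted in this paragraph:

```python
excess_deaths = 1_277_204  # centrally estimated US excess deaths, Jan 1, 2020 - Dec 4, 2022
gdp_per_capita = 69_288    # US GDP per capita in 2021, USD (World Bank)

lost_output = excess_deaths * gdp_per_capita
print(f"~${lost_output / 1e9:.1f} billion in lost output")
# ~$88.5 billion in lost output
```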
Individuals also have something to lose when they contract SARS-CoV-2. For many hourly workers, sick time off results in lost income, while many salaried workers are able to do some work from a home office. Both situations can have positive and negative outcomes, whether that is receiving additional assistance from the enhanced unemployment benefits available for the greater part of 2021, or working from home with poor internet connectivity or no dedicated workspace. For many, these frictions potentially contributed to the difference between reported incidence and estimated actual incidence of COVID-19 within a population. A 2020 cross-sectional study published in JAMA Internal Medicine performed blood testing on a convenience sample at 10 geographic sites across the United States and found, based on seroprevalence, that there were 10 times more cases than were being reported.
References
Philipson, T. "Economic epidemiology and infectious disease". In Handbook of Health Economics. Edited by Cuyler AJ, Newhouse JP. Amsterdam: North Holland, 2000; volume 1, part 2, pages 1761–1799.
Further reading
Interdisciplinary subfields of economics
Epidemiology
Medical statistics | Economic epidemiology | [
"Environmental_science"
] | 2,114 | [
"Epidemiology",
"Environmental social science"
] |
17,601,646 | https://en.wikipedia.org/wiki/Restoration%20of%20the%20Everglades | An ongoing effort to remedy damage inflicted during the 20th century on the Everglades, a region of tropical wetlands in southern Florida, is the most expensive and comprehensive environmental repair attempt in history. The degradation of the Everglades became an issue in the United States in the early 1970s after a proposal to construct an airport in the Big Cypress Swamp. Studies indicated the airport would have destroyed the ecosystem in South Florida and Everglades National Park. After decades of destructive practices, both state and federal agencies are looking for ways to balance the needs of the natural environment in South Florida with urban and agricultural centers that have recently and rapidly grown in and near the Everglades.
In response to floods caused by hurricanes in 1947, the Central and Southern Florida Flood Control Project (C&SF) was established to construct flood control devices in the Everglades. The C&SF built of canals and levees between the 1950s and 1971 throughout South Florida. Their last venture was the C-38 canal, which straightened the Kissimmee River and caused catastrophic damage to animal habitats, adversely affecting water quality in the region. The canal became the first C&SF project to be reversed when it began to be backfilled, or refilled with the material excavated from it, in the 1980s.
When high levels of phosphorus and mercury were discovered in the waterways in 1986, water quality became a focus for water management agencies. Costly and lengthy court battles were waged between various government entities to determine who was responsible for monitoring and enforcing water quality standards. Governor Lawton Chiles proposed a bill that determined which agencies would have that responsibility, and set deadlines for pollutant levels to decrease in water. Initially the bill was criticized by conservation groups for not being strict enough on polluters, but the Everglades Forever Act was passed in 1994. Since then, the South Florida Water Management District (SFWMD) and the U.S. Army Corps of Engineers have surpassed expectations for achieving lower phosphorus levels.
A commission appointed by Governor Chiles published a report in 1995 stating that South Florida was unable to sustain its growth, and the deterioration of the environment was negatively affecting daily life for residents in South Florida. The environmental decline was predicted to harm tourism and commercial interests if no actions were taken to halt current trends. Results of an eight-year study that evaluated the C&SF were submitted to the United States Congress in 1999. The report warned that if no action was taken the region would rapidly deteriorate. A strategy called the Comprehensive Everglades Restoration Plan (CERP) was enacted to restore portions of the Everglades, Lake Okeechobee, the Caloosahatchee River, and Florida Bay to undo the damage of the past 50 years. It would take 30 years and cost $7.8 billion to complete. Though the plan was passed into law in 2000, it has been compromised by political and funding problems.
Background
The Everglades are part of a very large watershed that begins in the vicinity of Orlando. The Kissimmee River drains into Lake Okeechobee, a lake with an average depth of . During the wet season when the lake exceeds its capacity, the water leaves the lake in a very wide and shallow river, approximately long and wide. This wide and shallow flow is known as sheetflow. The land gradually slopes toward Florida Bay, the historical destination of most of the water leaving the Everglades. Before drainage attempts, the Everglades comprised , taking up a third of the Florida peninsula.
Since the early 19th century the Everglades have been a subject of interest for agricultural development. The first attempt to drain the Everglades occurred in 1882 when Pennsylvania land developer Hamilton Disston constructed the first canals. Though these attempts were largely unsuccessful, Disston's purchase of land spurred tourism and real estate development of the state. The political motivations of Governor Napoleon Bonaparte Broward resulted in more successful attempts at canal construction between 1906 and 1920. Recently reclaimed wetlands were used for cultivating sugarcane and vegetables, while urban development began in the Everglades.
The 1926 Miami Hurricane and the 1928 Okeechobee Hurricane caused widespread devastation and flooding which prompted the Army Corps of Engineers to construct a dike around Lake Okeechobee. The four-story wall cut off water from the Everglades. Floods from hurricanes in 1947 motivated the US Congress to establish the Central and Southern Florida Flood Control Project (C&SF), responsible for constructing of canals and levees, hundreds of pumping stations and other water control devices. The C&SF established Water Conservation Areas (WCAs) in 37% of the original Everglades, which acted as reservoirs providing excess water to the South Florida metropolitan area, or flushing it into the Atlantic Ocean or the Gulf of Mexico. The C&SF also established the Everglades Agricultural Area (EAA), which grows the majority of sugarcane crops in the United States. When the EAA was first established, it encompassed approximately 27% of the original Everglades.
By the 1960s, urban development and agricultural use had decreased the size of the Everglades considerably. The remaining 25% of the Everglades in its original state is protected in Everglades National Park, but the park was established before the C&SF, and it depended upon the actions of the C&SF to release water. As Miami and other metropolitan areas began to intrude on the Everglades in the 1960s, political battles took place between park management and the C&SF when insufficient water in the park threw ecosystems into chaos. Fertilizers used in the EAA began to alter soil and hydrology in Everglades National Park, causing the proliferation of exotic plant species. A proposition to build a massive jetport in the Big Cypress Swamp in 1969 focused attention on the degraded natural systems in the Everglades. For the first time, the Everglades became a subject of environmental conservation.
Everglades as a priority
Environmental protection became a national priority in the 1970s. Time magazine declared it the Issue of the Year in January 1971, reporting that it was rated as Americans' "most serious problem confronting their community—well ahead of crime, drugs and poor schools". When South Florida experienced a severe drought from 1970 to 1975, with Miami receiving only of rain in 1971— less than average—media attention focused on the Everglades. With the assistance of governor's aide Nathaniel Reed and U.S. Fish and Wildlife Service biologist Arthur R. Marshall, politicians began to take action. Governor Reubin Askew implemented the Land Conservation Act in 1972, allowing the state to use voter-approved bonds of $240 million to purchase land considered to be environmentally unique and irreplaceable. Since then, Florida has purchased more land for public use than any other state. In 1972 President Richard Nixon declared the Big Cypress Swamp—the intended location for the Miami jetport in 1969—to be federally protected. Big Cypress National Preserve was established in 1974, and Fakahatchee Strand State Preserve was created the same year.
In 1976, Everglades National Park was declared an International Biosphere Reserve by UNESCO, which also listed the park as a World Heritage Site in 1979. The Ramsar Convention designated the Everglades a Wetland of International Importance in 1987. Only three locations on Earth have appeared on all three lists: Everglades National Park, Lake Ichkeul in Tunisia, and Srebarna Lake in Bulgaria.
Kissimmee River
In the 1960s, the C&SF came under increased scrutiny from government overseers and conservation groups. Critics maintained its size was comparable to the Tennessee Valley Authority's dam-building projects during the Great Depression, and that the construction had run into the billions of dollars without any apparent resolution or plan. The projects of the C&SF have been characterized as part of "crisis and response" cycles that "ignored the consequence for the full system, assumed certainty of the future, and succeeded in solving the momentary crisis, but set in motion conditions that exaggerate future crises". The last project, to build a canal to straighten the winding floodplain of the Kissimmee River that had historically fed Lake Okeechobee which in turn fed the Everglades, began in 1962. Marjory Stoneman Douglas later wrote that the C&SF projects were "interrelated stupidity", crowned by the C-38 canal. Designed to replace a meandering river with a channel, the canal was completed in 1971 and cost $29 million. It supplanted approximately of marshland with retention ponds, dams, and vegetation. Loss of habitat has caused the region to experience a drastic decrease of waterfowl, wading birds, and game fish. The reclaimed floodplains were taken over by agriculture, bringing fertilizers and insecticides that washed into Lake Okeechobee. Even before the canal was finished, conservation organizations and sport fishing and hunting groups were calling for the restoration of the Kissimmee River.
Arthur R. Marshall led the efforts to undo the damage. According to Douglas, Marshall was successful in portraying the Everglades from the Kissimmee Chain of Lakes to Florida Bay—including the atmosphere, climate, and limestone—as a single organism. Rather than remaining the preserve of conservation organizations, the cause of restoring the Everglades became a priority for politicians. Douglas observed, "Marshall accomplished the extraordinary magic of taking the Everglades out of the bleeding-hearts category forever". At the insistent urging of Marshall, newly elected Governor Bob Graham announced the formation of the "Save Our Everglades" campaign in 1983, and in 1985 Graham lifted the first shovel of backfill for a portion of the C-38 canal. Within a year the area was covered with water returning to its original state. Graham declared that by the year 2000, the Everglades would resemble its predrainage state as much as possible. The Kissimmee River Restoration Project was approved by Congress in the Water Resources Development Act of 1992. The project was estimated to cost $578 million to convert only of the canal; the cost was designed to be divided between the state of Florida and the U.S. government, with the state being responsible for purchasing land to be restored. A project manager for the Army Corps of Engineers explained in 2002, "What we're doing on this scale is going to be taken to a larger scale when we do the restoration of the Everglades". The entire project was originally estimated to be completed by 2011, but was completed in July 2021. In all, about of the Kissimmee River was restored, plus 20,000 acres of wetlands.
Water quality
Attention to water quality was focused in South Florida in 1986 when a widespread algal bloom occurred in one-fifth of Lake Okeechobee. The bloom was discovered to be the result of fertilizers from the Everglades Agricultural Area. Although laws stated in 1979 that the chemicals used in the EAA should not be deposited into the lake, they were flushed into the canals that fed the Everglades Water Conservation Areas, and eventually pumped into the lake. Microbiologists discovered that, although phosphorus assists plant growth, it destroys periphyton, one of the basic building blocks of marl in the Everglades. Marl is one of two types of Everglades soil, along with peat; it is found where parts of the Everglades are flooded for shorter periods of time as layers of periphyton dry. Most of the phosphorus compounds also rid peat of dissolved oxygen and promote algae growth, causing native invertebrates to die, and sawgrass to be replaced with invasive cattails that grow too tall and thick to allow nesting for birds and alligators. Tested water showed 500 parts per billion (ppb) of phosphorus near sugarcane fields. State legislation in 1987 mandated a 40% reduction of phosphorus by 1992.
Attempts to correct phosphorus levels in the Everglades met with resistance. The sugarcane industry, dominated by two companies named U.S. Sugar and Flo-Sun, was responsible for more than half of the crop in the EAA. They were well represented in state and federal governments by lobbyists who enthusiastically protected their interests. According to the Audubon Society, the sugar industry, nicknamed "Big Sugar", donated more money to political parties and candidates than General Motors. The sugar industry attempted to block government-funded studies of polluted water, and when the federal prosecutor in Miami faulted the sugar industry in legal action to protect Everglades National Park, Big Sugar tried to get the lawsuit withdrawn and the prosecutor fired. A costly legal battle ensued from 1988 to 1992 between the State of Florida, the U.S. government, and the sugar industry to resolve who was responsible for water quality standards, the maintenance of Everglades National Park and the Arthur R. Marshall Loxahatchee National Wildlife Refuge.
A different concern about water quality arose when mercury was discovered in fish during the 1980s. Because mercury is damaging to humans, warnings were posted for fishermen that cautioned against eating fish caught in South Florida, and scientists became alarmed when a Florida panther was found dead near Shark River Slough with mercury levels high enough to be fatal to humans. When mercury is ingested it adversely affects the central nervous system, and can cause brain damage and birth defects. Studies of mercury levels found that it is bioaccumulated through the food chain: animals that are lower on the chain have decreased amounts, but as larger animals eat them, the amount of mercury is multiplied. The dead panther's diet consisted of small animals, including raccoons and young alligators. The source of the mercury was found to be waste incinerators and fossil fuel power plants that expelled the element in the atmosphere, which precipitated with rain, or in the dry season, dust. Naturally occurring bacteria in the Everglades that function to reduce sulfur also transform mercury deposits into methylmercury. This process was more dramatic in areas where flooding was not as prevalent. Because of requirements that reduced power plant and incinerator emissions, the levels of mercury found in larger animals decreased as well: approximately a 60% decrease in fish and a 70% decrease in birds, though some levels still remain a health concern for people.
Everglades Forever Act
In an attempt to resolve the political quagmire over water quality, Governor Lawton Chiles introduced a bill in 1994 to clean up water within the EAA that was being released to the lower Everglades. The bill stated that the "Everglades ecosystem must be restored both in terms of water quality and water quantity and must be preserved and protected in a manner that is long term and comprehensive". It ensured the Florida Department of Environmental Protection (DEP) and the South Florida Water Management District (SFWMD) would be responsible for researching water quality, enforcing water supply improvement, controlling exotic species, and collecting taxes, with the aim of decreasing the levels of phosphorus in the region. It allowed for purchase of land where pollutants would be sent to "treat and improve the quality of waters coming from the EAA".
Critics of the bill argued that the deadline for meeting the standards was unnecessarily delayed until 2006—a period of 12 years—to enforce better water quality. They also maintained that it did not force sugarcane farmers, who were the primary polluters, to pay enough of the costs, and increased the threshold of what was an acceptable amount of phosphorus in water from 10 ppb to 50 ppb. Governor Chiles initially named it the Marjory Stoneman Douglas Act, but Douglas was so unimpressed with the action it took against polluters that she wrote to Chiles and demanded her name be stricken from it. Despite criticism, the Florida legislature passed the Act in 1994. The SFWMD stated that its actions have exceeded expectations earlier than anticipated, by creating Stormwater Treatment Areas (STA) within the EAA that contain a calcium-based substance such as lime rock layered between peat, and filled with calcareous periphyton. Early tests by the Army Corps of Engineers revealed this method reduced phosphorus levels from 80 ppb to 10 ppb. The STAs are intended to treat water until the phosphorus levels are low enough to be released into the Loxahatchee National Wildlife Refuge or other WCAs.
Wildlife concerns
The intrusion of urban areas into wilderness has had a substantial impact on wildlife, and several species of animals are considered endangered in the Everglades region. One animal that has benefited from endangered species protection is the American alligator (Alligator mississippiensis), whose holes give refuge to other animals, often allowing many species to survive during times of drought. Once abundant in the Everglades, the alligator was listed as an endangered species in 1967, but a combined effort by federal and state organizations and the banning of alligator hunting allowed it to rebound; it was pronounced fully recovered in 1987 and is no longer an endangered species. However, alligators' territories and average body masses have been found to be generally smaller than in the past, and because populations have been reduced, their role during droughts has become limited.
The American Crocodile (Crocodylus acutus) is also native to the region and has been designated as endangered since 1975. Unlike their relatives the alligators, crocodiles tend to thrive in brackish or salt-water habitats such as estuarine or marine coasts. Their most significant threat is disturbance by people. Too much contact with humans causes females to abandon their nests, and males in particular are often victims of vehicle collisions while roaming over large territories and attempting to cross U.S. 1 and Card Sound Road in the Florida Keys. There are an estimated 500 to 1,000 crocodiles in southern Florida.
The most critically endangered of any animal in the Everglades region is the Florida panther (Puma concolor coryi), a species that once lived throughout the southeastern United States: there were only 25–30 in the wild in 1995. The panther is most threatened by urban encroachment, because males require approximately for breeding territory. A male and two to five females may live within that range. When habitat is lost, panthers will fight over territory. After vehicle collisions, the second most frequent cause of death for panthers is intra-species aggression. In the 1990s urban expansion crowded panthers from southwestern Florida as Naples and Ft. Myers began to expand into the western Everglades and Big Cypress Swamp. Agencies such as the Army Corps of Engineers and the U.S. Fish and Wildlife Service were responsible for maintaining the Clean Water Act and the Endangered Species Act, yet still approved 99% of all permits to build in wetlands and panther territory. A limited genetic pool is also a danger. Biologists introduced eight female Texas cougars (Puma concolor) in 1995 to diversify genes, and there are between 80 and 120 panthers in the wild .
Perhaps the most dramatic loss of any group of animals has been to wading birds. Their numbers were estimated by eyewitness accounts to be approximately 2.5 million in the late 19th century. However, snowy egrets (Egretta thula), roseate spoonbills (Platalea ajaja), and reddish egrets (Egretta rufescens) were hunted to the brink of extinction for the colorful feathers used in women's hats. After about 1920 when the fashion passed, their numbers returned in the 1930s, but over the next 50 years actions by the C&SF further disturbed populations. When the canals were constructed, natural water flow was restricted from the mangrove forests near the coast of Florida Bay. From one wet season to the next, fish were unable to reach traditional locations to repopulate when water was withheld by the C&SF. Birds were forced to fly farther from their nests to forage for food. By the 1970s, bird numbers had decreased 90%. Many of the birds moved to smaller colonies in the WCAs to be closer to a food source, making them more difficult to count. Yet they remain significantly fewer in number than before the canals were constructed.
Invasive species
Around 6 million people moved to South Florida between 1940 and 1965. With a thousand people moving to Miami each week, urban development quadrupled. As the human population grew rapidly, the problem of exotic plant and animal species also grew. Many species of plants were brought into South Florida from Asia, Central America, or Australia as decorative landscaping. Exotic animals imported by the pet trade have escaped or been released. Biological controls that keep invasive species smaller in size and fewer in number in their native lands often do not exist in the Everglades, and they compete with the embattled native species for food and space. Of imported plant species, melaleuca trees (Melaleuca quinquenervia) have caused the most problems. Melaleucas grow on average in the Everglades, as opposed to in their native Australia. They were brought to southern Florida as windbreaks and deliberately seeded in marsh areas because they absorb vast amounts of water. In a region that is regularly shaped by fire, melaleucas are fire-resistant and their seeds are more efficiently spread by fire. They are too dense for wading birds with large wingspans to nest in, and they choke out native vegetation. Costs of controlling melaleucas topped $2 million in 1998 for Everglades National Park. In Big Cypress National Preserve, melaleucas covered at their most pervasive in the 1990s.
Brazilian pepper (Schinus terebinthifolius) was brought to Southern Florida as an ornamental shrub and was dispersed by the droppings of birds and other animals that ate its bright red berries. It thrives on abandoned agricultural land growing in forests too dense for wading birds to nest in, similar to melaleucas. It grows rapidly especially after hurricanes and has invaded pineland forests. Following Hurricane Andrew, scientists and volunteers cleared damaged pinelands of Brazilian pepper so the native trees would be able to return to their natural state.
The species that is causing the most impediment to restoration is the Old World climbing fern (Lygodium microphyllum), introduced in 1965. The fern grows rapidly and thickly on the ground, making passage for land animals such as black bears and panthers problematic. The ferns also grow as vines into taller portions of trees, and fires climb the ferns in "fire ladders" to scorch portions of the trees that are not naturally resistant to fire.
Several animal species have been introduced to Everglades waterways. Many tropical fish are released, the most detrimental being the blue tilapia (Oreochromis aureus), which builds large nests in shallow waters. Tilapia also consume vegetation which would normally be used by young native fishes for cover and protection.
Reptiles have a particular affinity for the South Florida ecosystem. Virtually all lizards appearing in the Everglades have been introduced, such as the brown anole (Anolis sagrei) and the tropical house gecko (Hemidactylus mabouia). The herbivorous green iguana (Iguana iguana) can reproduce rapidly in wilderness habitats. However, the reptile that has earned media attention for its size and potential to harm children and domestic pets is the Burmese python (Python bivittatus), which has spread quickly throughout the area. The python can grow up to long and competes with alligators for the top of the food chain.
Though exotic birds such as parrots and parakeets are also found in the Everglades, their impact is negligible. Conversely, perhaps the animal that causes the most damage to native wildlife is the domestic or feral cat. Across the U.S., cats are responsible for approximately a billion bird deaths annually. They are estimated to number 640 per square mile; cats living in suburban areas have devastating effects on migratory birds and marsh rabbits.
Homestead Air Force Base
Hurricane Andrew struck Miami in 1992, with catastrophic damage to Homestead Air Force Base in Homestead. A plan to rejuvenate the property in 1993 and convert it into a commercial airport was met with enthusiasm from local municipal and commercial entities hoping to recoup $480 million and 11,000 jobs lost in the local community by the destruction and subsequent closing of the base. On March 31, 1994, the base was designated as a reserve base, functioning only part-time. A cursory environmental study performed by the Air Force was deemed insufficient by local conservation groups, who threatened to sue in order to halt the acquisition when estimates of 650 flights a day were projected. Groups had previously been alarmed in 1990 by the inclusion of Homestead Air Force Base on a list of the U.S. Government's most polluted properties. Their concerns also included noise, and the inevitable collisions with birds using the mangrove forests as rookeries. The Air Force base is located between Everglades National Park and Biscayne National Park, giving it the potential to cause harm to both. In 2000, Secretary of the Interior Bruce Babbitt and the director of the U.S. Environmental Protection Agency expressed their opposition to the project, despite other Clinton Administration agencies previously working to ensure the base would be turned over to local agencies quickly and smoothly as "a model of base disposal".
Although attempts were made to make the base more environmentally friendly, in 2001 local commercial interests promoting the airport lost federal support.
Comprehensive Everglades Restoration Plan
Sustainable South Florida
Despite the successes of the Everglades Forever Act and the decreases in mercury levels, the focus intensified on the Everglades in the 1990s as quality of life in the South Florida metropolitan areas diminished. It was becoming clear that urban populations were consuming increasingly unsustainable levels of natural resources. A report entitled "The Governor's Commission for a Sustainable South Florida", submitted to Lawton Chiles in 1995, identified the problems the state and municipal governments were facing. The report remarked that the degradation of the natural quality of the Everglades, Florida Bay, and other bodies of water in South Florida would cause a significant decrease in tourism (12,000 jobs and $200 million annually) and income from compromised commercial fishing (3,300 jobs and $52 million annually). The report noted that past abuses and neglect of the environment had brought the region to "a precipitous juncture" where the inhabitants of South Florida faced health hazards in polluted air and water; furthermore, crowded and unsafe urban conditions hurt the reputation of the state. It noted that though the population had increased by 90% over the previous two decades, registered vehicles had increased by 166%. On the quality and availability of water, the report stated, "[The] frequent water shortages ... create the irony of a natural system dying of thirst in a subtropical environment with over 53 inches of rain per year".
Restoration of the Everglades, however, briefly became a bipartisan cause in national politics. A controversial penny-a-pound (2 cent/kg) tax on sugar was proposed to fund some of the necessary changes to be made to help decrease phosphorus and make other improvements to water. State voters were asked to support the tax, and environmentalists paid $15 million to encourage the issue. Sugar lobbyists responded with $24 million in advertising to discourage it and succeeded; it became the most expensive ballot issue in state history. How restoration might be funded became a political battleground and seemed to stall without resolution. However, in the 1996 election year, Republican senator Bob Dole proposed that Congress give the State of Florida $200 million to acquire land for the Everglades. Democratic Vice President Al Gore promised the federal government would purchase of land in the EAA to turn it over for restoration. Politicking reduced the number to , but both Dole's and Gore's gestures were approved by Congress.
Central and South Florida Project Restudy
As part of the Water Resources Development Act of 1992, Congress authorized an evaluation of the effectiveness of the Central and Southern Florida Flood Control Project. A report known as the "Restudy", written by the U.S. Army Corps of Engineers and the South Florida Water Management District, was submitted to Congress in 1999. It cited indicators of harm to the system: a 50% reduction in the original Everglades, diminished water storage, harmful timing of water release, an 85 to 90% decrease in wading bird populations over the past 50 years, and the decline of output from commercial fisheries. Bodies of water including Lake Okeechobee, the Caloosahatchee River, St. Lucie estuary, Lake Worth Lagoon, Biscayne Bay, Florida Bay, and the Everglades reflected drastic water level changes, hypersalinity, and dramatic changes in marine and freshwater ecosystems. The Restudy noted the overall decline in water quality over the past 50 years was caused by loss of wetlands that act as filters for polluted water. It predicted that without intervention the entire South Florida ecosystem would deteriorate. Canals took roughly of water to the Atlantic Ocean or Gulf of Mexico daily, so there was no opportunity for water storage, yet flooding was still a problem. Without changes to the current system, the Restudy predicted water restrictions would be necessary every other year, and annually in some locations. It also warned that revising some portions of the project without dedicating efforts to an overall comprehensive plan would be insufficient and probably detrimental.
After evaluating ten plans, the Restudy recommended a comprehensive strategy that would cost $7.8 billion over 20 years. The plan advised taking the following actions:
Create surface water storage reservoirs to capture of water in several locations taking up .
Create water preserve areas between Miami-Dade and Palm Beach and the eastern Everglades to treat runoff water.
Manage Lake Okeechobee as an ecological resource to avoid the drastic rise and fall of water levels in the lake that are harmful to aquatic plant and animal life and disturb the lake sediments.
Improve water deliveries to estuaries to reduce the rapid discharge of excess water to the Caloosahatchee and St. Lucie estuaries that upset nutrient balances and cause lesions on fish. Stormwater discharge would be sent instead to reservoirs.
Increase underground water storage to hold a day in wells, or reservoirs in the Floridan Aquifer, to be used later in dry periods, in a method called Aquifer Storage and Recovery (ASR).
Construct treatment wetlands as Stormwater Treatment Areas throughout , that would decrease the amount of pollutants in the environment.
Improve water deliveries to the Everglades by increasing them at a rate of approximately 26% into Shark River Slough.
Remove barriers to sheetflow by destroying or removing of canals and levees, specifically removing the Miami Canal and reconstructing the Tamiami Trail from a highway to culverts and bridges to allow sheetflow to return to a more natural rate of water flow into Everglades National Park.
Store water in quarries and reuse wastewater by employing existing quarries to supply the South Florida metropolitan area as well as Florida Bay and the Everglades. Construct two wastewater treatment plants capable of discharging a day to recharge the Biscayne Aquifer.
The implementation of all of the advised actions, the report stated, would "result in the recovery of healthy, sustainable ecosystems throughout south Florida". The report admitted that it did not have all the answers, though no plan could. However, it predicted that it would restore the "essential defining features of the pre-drainage wetlands over large portions of the remaining system", that populations of all animals would increase, and animal distribution patterns would return to their natural states. Critics expressed concern over some unused technology; scientists were unsure if the quarries would hold as much water as was being suggested, and whether the water would harbor harmful bacteria from the quarries. Overtaxing the aquifers was another concern—it was not a technique that had been previously attempted.
Though it was optimistic, the Restudy noted, It is important to understand that the 'restored' Everglades of the future will be different from any version of the Everglades that has existed in the past. While it certainly will be vastly superior to the current ecosystem, it will not completely match the pre-drainage system. This is not possible, in light of the irreversible physical changes that have made (sic) to the ecosystem. It will be an Everglades that is smaller and somewhat differently arranged than the historic ecosystem. But it will be a successfully restored Everglades, because it will have recovered those hydrological and biological patterns which defined the original Everglades, and which made it unique among the world's wetland systems. It will become a place that kindles the wildness and richness of the former Everglades.
The report was the result of many cooperating agencies that often had conflicting goals. An initial draft was submitted to Everglades National Park management who asserted not enough water would be released to the park quickly enough—that the priority went to delivering water to urban areas. When they threatened to refuse to support it, the plan was rewritten to provide more water to the park. However, the Miccosukee Indians have a reservation in between the park and water control devices, and they threatened to sue to ensure their tribal lands and a $50 million casino would not be flooded. Other special interests were also concerned that businesses and residents would take second priority after nature. The Everglades, however, proved to be a bipartisan cause. The Comprehensive Everglades Restoration Plan (CERP) was authorized by the Water Resources Development Act of 2000 and signed into law by President Bill Clinton on December 11, 2000. It approved the immediate use of $1.3 billion for implementation to be split by the federal government and other sources.
Implementation
The State of Florida reports that it has spent more than $2 billion on the various projects since CERP was signed. More than of Stormwater Treatment Areas (STA) have been constructed to filter of phosphorus from Everglades waters. An STA covering was constructed in 2004, making it the largest environmental restoration project in the world. Fifty-five percent of the land necessary for restoration, totaling , has been purchased by the State of Florida. A plan named "Acceler8", to hasten the construction and funding of the project, was put into place, spurring the start of six of eight construction projects, including that of three large reservoirs.
Despite the bipartisan goodwill and declarations of the importance of the Everglades, the region still remains in danger. Political maneuvering continues to impede CERP: sugar lobbyists promoted a bill in the Florida legislature in 2003 that increased the acceptable amount of phosphorus in Everglades waterways from 10 ppb to 15 ppb and extended the deadline for the mandated decrease by 20 years. A compromise deadline of 2016 was eventually reached. Environmental organizations express concern that attempts to speed up some of the construction through Acceler8 are politically motivated; the six projects Acceler8 focuses on do not provide more water to natural areas in desperate need of it, but rather to projects in populated areas bordering the Everglades, suggesting that water is being diverted to make room for more people in an already overtaxed environment. Though Congress promised half the funds for restoration, after the War in Iraq began and two of CERP's major supporters in Congress retired, the federal role in CERP was left unfulfilled. According to a story in The New York Times, state officials say the restoration is lost in a maze of "federal bureaucracy, a victim of 'analysis paralysis' ". In 2007, the release of $2 billion for Everglades restoration was approved by Congress, overriding President George W. Bush's veto of the entire Water Development Project the money was a part of. Bush's rare veto went against the wishes of Florida Republicans, including his brother, Governor Jeb Bush. A lack of subsequent action by the Congress prompted Governor Charlie Crist to travel to Washington D.C. in February 2008 and inquire about the promised funds. By June 2008, the federal government had spent only $400 million of the $7.8 billion legislated. Carl Hiaasen characterized George W. Bush's attitude toward the environment as "long-standing indifference" in June 2008, exemplified when Bush stated he would not intervene to change the Environmental Protection Agency's (EPA) policy allowing the release of water polluted with fertilizers and phosphorus into the Everglades.
Reassessment of CERP
Florida still receives a thousand new residents daily and lands slated for restoration and wetland recovery are often bought and sold before the state has a chance to bid on them. The competitive pricing of real estate also drives it beyond the purchasing ability of the state. Because the State of Florida is assisting with purchasing lands and funding construction, some of the programs under CERP are vulnerable to state budget cuts. In June 2008 Governor Crist announced that the State of Florida will buy U.S. Sugar for $1.7 billion. The idea came when sugar lobbyists were trying to persuade Crist to relax restriction of U.S. Sugar's practice of pumping phosphorus-laden water into the Everglades. According to one of the lobbyists who characterized it as a "duh moment", Crist said, "If sugar is polluting the Everglades, and we're paying to clean the Everglades, why don't we just get rid of sugar?" The largest producer of cane sugar in the U.S. will continue operations for six years, and when ownership transfers to Florida, of the Everglades will remain undeveloped to allow it to be restored to its pre-drainage state.
In September 2008 the National Research Council (NRC), a nonprofit agency providing science and policy advice to the federal government, submitted a report on the progress of CERP. The report noted "scant progress" in restoration because of problems in budgeting, planning, and bureaucracy. The NRC report called the Everglades one of the "world's treasured ecosystems" that is being further endangered by lack of progress: "Ongoing delay in Everglades restoration has not only postponed improvements—it has allowed ecological decline to continue". It cited the shrinking tree islands, and the negative population growth of the endangered Rostrhamus sociabilis or Everglades snail kite, and Ammodramus maritimus mirabilis, the Cape Sable seaside sparrow. The lack of water reaching Everglades National Park was characterized as "one of the most discouraging stories" in implementation of the plan. The NRC recommended improving planning on the state and federal levels, evaluating each CERP project annually, and further acquisition of land for restoration. Everglades restoration was earmarked $96 million in federal funds as part of the American Recovery and Reinvestment Act of 2009 with the intention of providing civil service and construction jobs while simultaneously implementing the legislated repair projects.
In January 2010, work began on the C-111 canal, built in the 1960s to drain irrigated farmland, to reconstruct it to keep from diverting water from Everglades National Park. Two other projects focusing on restoration were also scheduled to start in 2010. Governor Crist announced the same month that $50 million would be earmarked for Everglades restoration. In April of the same year, a federal district court judge sharply criticized both state and federal failures to meet deadlines, describing the cleanup efforts as being slowed by "glacial delay" and calling the government's neglect of environmental law enforcement "incomprehensible".
See also
Draining and development of the Everglades
Everglades National Park
Geography and ecology of the Everglades
History of Miami, Florida
Indigenous people of the Everglades region
Notes and references
Bibliography
Barnett, Cynthia (2007). Mirage: Florida and the Vanishing Water of the Eastern U.S., University of Michigan Press.
Douglas, Marjory; Rothchild, John (1987). Marjory Stoneman Douglas: Voice of the River. Pineapple Press.
Grunwald, Michael (2006). The Swamp: The Everglades, Florida, and the Politics of Paradise, Simon & Schuster.
Lodge, Thomas E. (1994). The Everglades Handbook: Understanding the Ecosystem. CRC Press.
U.S. Army Corps of Engineers and South Florida Water Management District (April 1999). "Summary", Central and Southern Florida Project Comprehensive Review Study.
Further reading
Alderson, Doug. 2009. New Dawn for the Kissimmee River. Gainesville, FL: University Press of Florida.
The Everglades in the Time of Marjory Stoneman Douglas Photo exhibit created by the State Archives of Florida
External links
CERP: A Visual Explanation of the Comprehensive Everglades Restoration Project (SFWMD)
C-44 Reservoir Storm Water Treatment Area Project (SFWMD/CERP)
Everglades
Everglades
History of sugar
Constructed wetlands
Sugar industry of Florida | Restoration of the Everglades | [
"Chemistry",
"Engineering",
"Biology"
] | 8,391 | [
"Bioremediation",
"Constructed wetlands",
"Environmental engineering"
] |
17,602,635 | https://en.wikipedia.org/wiki/Suaeda%20australis | Suaeda australis, the austral seablite, is a species of plant in the family Amaranthaceae, native to Australia. It grows to in height, with a spreading habit and branching occurring from the base. The leaves are up to 40 mm in length and are succulent, linear and flattened. They are light green to purplish-red in colour.
The species occurs on shorelines in coastal or estuarine areas or in salt marshes. It is native across Australia including the states of Queensland, New South Wales, Victoria, Tasmania, South Australia and the south-west of Western Australia.
In irrigated areas, the species is known as a salinity indicator plant and is referred to as redweed.
References
External links
Online Field guide to Common Saltmarsh Plants of Queensland
Suaeda australis occurrence data from Australasian Virtual Herbarium
australis
Caryophyllales of Australia
Halophytes
Flora of New South Wales
Flora of Queensland
Flora of South Australia
Flora of Tasmania
Flora of Victoria (state)
Eudicots of Western Australia | Suaeda australis | [
"Chemistry"
] | 224 | [
"Halophytes",
"Salts"
] |
17,602,697 | https://en.wikipedia.org/wiki/Delta-K | The Delta-K was an American rocket stage, developed by McDonnell Douglas and Aerojet. It was first used on 27 August 1989 as the second stage for the Delta 4000 series.
It continued to serve as the second stage for subsequent variants of the Delta rocket.
It was propelled by a single AJ10-118K rocket engine, fueled by Aerozine 50 and dinitrogen tetroxide, which are hypergolic.
The Delta-K traced a long heritage back to the first Able stage used in Project Vanguard. The AJ-10 engine was first used in the Able second stage of the Vanguard rocket, as the AJ10-118 configuration. It was initially fueled by nitric acid and UDMH. An AJ10 engine was first fired in flight during the third Vanguard launch, on 17 March 1958, which successfully placed the Vanguard 1 satellite into orbit.
As of 25 May 2008, 138 have been launched, and excluding one which was destroyed by the explosion of a lower stage, none have failed.
The Delta-K was used as the second stage of the Delta II rocket from 1989 to 2018. This second stage was retired at conclusion of the ICESat-2 launch on 15 September 2018.
See also
Delta Cryogenic Second Stage
Advanced Common Evolved Stage
Transtage
References
Rocket stages | Delta-K | [
"Astronomy"
] | 258 | [
"Rocketry stubs",
"Astronomy stubs"
] |
17,604,025 | https://en.wikipedia.org/wiki/Jam%20nut | A jam nut is a low profile type of nut, typically half as tall as a standard nut. It is commonly used as a type of locknut, where it is "jammed" up against a standard nut to lock the two in place. It is also used in situations where a standard nut would not fit.
The term "jam nut" can also refer to any nut that is used in the same function (even a standard nut used for the jamming purpose). Jam nuts, other types of locknuts, lock washers, and thread-locking fluid are ways to prevent vibration from loosening a bolted joint.
Use of two nuts to prevent self-loosening
In normal use, a nut-and-bolt joint holds together because the bolt is under a constant tensile stress called the preload. The preload pulls the nut threads against the bolt threads, and the nut face against the bearing surface, with a constant force, so that the nut cannot rotate without overcoming the friction between these surfaces. If the joint is subjected to vibration, however, the preload increases and decreases with each cycle of movement. If the minimum preload during the vibration cycle is not enough to hold the nut firmly in contact with the bolt and the bearing surface, then the nut is likely to become loose.
Specialized locking nuts exist to prevent this problem, but sometimes it is sufficient to add a second nut. For this technique to be reliable, each nut must be tightened to the correct torque. The inner nut is tightened to about a quarter to a half of the torque of the outer nut. It is then held in place by a wrench while the outer nut is tightened on top using the full torque. This arrangement causes the two nuts to push against each other, creating a tensile stress in the short section of the bolt that lies between them. Even when the main joint is vibrated, the stress between the two nuts remains constant, thus holding the nut threads in constant contact with the bolt threads and preventing self-loosening. When the joint is assembled correctly, the outer nut bears the full tension of the joint. The inner nut functions merely to add a small additional force to the outer nut and does not need to be as strong, so a thin nut can be used.
The jam nut essentially acts as the "other object", as the two nuts are tightened against each other. They can also be used to secure an item on a fastener without applying force to that object. This is achieved by first tightening one of the nuts onto the item. Then the other nut is screwed down on top of the first nut. The inner nut is then slackened back and tightened against the outer nut.
Jam nuts can also be used in situations where a threaded rod must be rotated. Since threaded rods have no bolt heads, it is difficult or impossible to apply torque to a threaded rod. A pair of jam nuts is used to create a point where a wrench may be used.
Jam nuts can be unreliable under significant loads. If the inner nut is torqued more than the outer nut, the outer nut may yield. If the outer nut is torqued more than the inner nut, the inner nut may loosen up.
References
Nuts (hardware)
Kontermutter | Jam nut | [
"Engineering"
] | 657 | [
"Mechanical engineering stubs",
"Mechanical engineering"
] |
17,604,036 | https://en.wikipedia.org/wiki/German%20Statutory%20Accident%20Insurance | German Statutory Accident Insurance or workers' compensation is among the oldest branches of German social insurance. Occupational accident insurance was established in Germany by statute in 1884. It is now a national, compulsory program that insures workers for injuries or illness incurred through their employment, or the commute to or from their employment. Wage earners, apprentices, family helpers and students including children in kindergarten are covered by this program. Almost all self-employed persons can voluntarily become insured. The German workers' compensation laws were the first of their kind.
History
In 1871, the German Empire was founded at the end of the Franco-Prussian War. Formerly Chancellor of Prussia, Otto von Bismarck, now Chancellor of the new German Empire, introduced highly-progressive welfare legislation by the standards of Europe at the time.
The Sickness Bill became law in 1883 and the Accident Bill in 1884. Otto von Bismarck, Chancellor of the German Empire, introduced the programs to assist workers in the event of accidental injury, illness or old age. The initial system was financed by workers and employers. The Sickness Insurance law paid indemnity for up to 13 weeks. The first 4 weeks were paid at 50% of prior wages; from the fifth week on, the benefit was 66.7% of previous earnings. Workers who were completely disabled received benefits at 67% after the 13th week, financed entirely by employers. If the disabled person required constant care, up to 100% of previous wages were awarded.
The agencies in charge of providing this form of insurance are the industrial and agricultural employers' liability funds as well as public sector accident insurance funds, which include both municipal accident insurance associations and other accident funds. While employers' liability funds are organized according to industry, the public sector accident insurance funds are for the most part organized regionally.
The accident insurance funds govern themselves (self-administration) with equal representation divided between employers, entrepreneurs and employees. The organs of self-administration are the members' assembly and the executive board. That arrangement ensures that the interests of all participants are represented.
The legal basis for occupational accident insurance is formed by the German Social Code, in particular Book VII (SGB VII).
The German compensation system was used as a model for many other nations' workers' compensation programs.
Modern compensation system in Germany
Today, in Germany, every worker is a member of a related Workers Compensation Institute (Berufsgenossenschaft), and almost all self-employed persons may voluntarily become insured members of an institute as well. The institutes have an approximately 90% return-to-work rate, using vocational retraining and upgraded vocational qualifications as key strategies.
All accidents in the workplace or in the commute to and from it are covered. Also, 80 diseases are considered occupational diseases and are also covered by the program.
The workers' compensation program is funded by employers (except for the government's coverage for students and children and a government subsidy to the Agricultural Accident Fund). The average employer contribution in 2019 was 1.14% of payroll.
Injured workers have a right to appeal to the committee of their Institute. The next level of appeal after the committee is to a Sozialgericht court. The appeal to the German Social Courts (Sozialgerichte) is free of cost for the worker.
Financing
Statutory occupational accident insurance is among the oldest branches of German social insurance. Unlike health, long-term care, pension and unemployment insurance, statutory occupational accident insurance is contribution-free for those insured. The costs for comprehensive insurance coverage for prevention and rehabilitation are borne by employers. For public sector jobs, the federal, state and municipal governments carry the costs.
The contribution rates are determined according to the pay-as-you-go principle, based on expenditures in prior years. This means that at the end of each fiscal year the statutory accident insurance funds allocate their expenditures among the member companies. The calculation basis is thus formed by actual financing needs: the allocation amount to be put aside, the wages and salaries of the insured and the hazard class of the particular industry concerned. For the municipal accident insurance associations and accident funds, the contributions are based on the population, the number of insured persons, or wages and salaries.
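The apportionment principle described above can be illustrated with a small sketch. The function below is only a schematic rendering of the pay-as-you-go idea (a prior-year allocation amount split in proportion to wages weighted by hazard class); the actual statutory calculation under SGB VII is more involved, and all firm names and figures here are hypothetical.

```python
def apportion_contributions(total_allocation, members):
    """Split a prior-year allocation amount across member firms.

    Each firm's share is proportional to its wage bill weighted by the
    hazard class of its industry (schematic only, not the statutory formula).
    members: list of dicts with keys 'name', 'wages', 'hazard_class'.
    """
    weights = {m["name"]: m["wages"] * m["hazard_class"] for m in members}
    total_weight = sum(weights.values())
    return {name: total_allocation * w / total_weight for name, w in weights.items()}

# Hypothetical example: three firms sharing a 1,000,000 EUR allocation.
firms = [
    {"name": "construction_co", "wages": 2_000_000, "hazard_class": 6.0},
    {"name": "office_services", "wages": 5_000_000, "hazard_class": 0.5},
    {"name": "metal_works", "wages": 3_000_000, "hazard_class": 3.0},
]
print(apportion_contributions(1_000_000, firms))
```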
Who is insured
Every year, around 1 million accidents occur in the Federal Republic of Germany involving employees who are either working or on their way to or from work. These are joined by around 18,000 cases of recognized occupational illnesses and some 1.2 million school accidents. For those affected, the consequences often entail wide-ranging changes in their way of life. Restoring these people’s health and, as far as possible, their ability to work is the task of statutory accident insurance.
Every employee and trainee is covered by statutory occupational accident insurance. In industry and agriculture the employer’s liability insurance fund (Berufsgenossenschaften) is responsible for accident insurance. Providing coverage in the public sector are the municipal accident insurance associations (Gemeindeunfallversicherungsverbände) and other public-sector accident funds.
Coverage is provided for accidents at work or school or on the way to or from work or school, as well as for occupational illnesses.
Benefits
Statutory occupational accident insurance has the task of undertaking measures to prevent job-related accidents and illnesses, as well as to protect workers from on-the-job hazards. If occupational accidents or illnesses occur, accident insurance provides assistance toward restoring the health and working ability of the persons involved and compensation to the insured persons or their survivors through the provision of cash benefits.
The primary mission of statutory occupational accident insurance according to the legislation is to use all means at its disposal to prevent occupational accidents and illnesses from occurring in the first place and to minimize potential job-related hazards. The focus is placed on advising companies in all matters having to do with industrial safety and health. That includes providing employers and employees with comprehensive instructions and guidelines, as well as informational media. Accident insurance agencies also hold free informational, motivational events on the subject of safety at work.
If an insured person has an accident at work or suffers from an occupational illness, statutory occupational accident insurance covers the resulting costs. That means that the insurance fund provides the best possible medical, occupational and social rehabilitation, as well as financial compensation if applicable.
In the event of an occupational accident or illness, statutory occupational accident insurance provides:
payment for full medical treatment
occupational integration assistance (for example, retraining)
social integration assistance and supplementary assistance
cash benefits to the insured and their surviving dependents.
The top priority of the accident insurance fund is to restore the health and ability to work of the insured person. Pensions are paid to fund members only if it is not possible to fully restore their ability to work: for those whose earning capacity is reduced by at least 20 percent.
Disability benefits are paid as a weekly "wage loss” compensation. Workers unable to perform their current job due to injury or illness receive periodic payments of 80% of their prior gross earnings until they return to work (up to a maximum total payment). If rehabilitation is prognosticated to be impossible, the worker receives the benefit for 78 weeks.
Wages are paid for six weeks by the employer before the employee goes onto short-term disability benefits.
Workers who have a loss of earning capacity for work injury or occupational disease of 20% or more receive a pension equal to 66.7% of their previous year's earnings up to the specified maximum. That is paid until the age of 65 unless they begin to receive an old-age pension earlier than that age.
Medical care benefits are comprehensive, with the total cost of physical rehabilitation and appliances being covered. Institutes provide all medical care benefits and control the choice of doctor and hospital.
References
Further reading
German International Culture
Germany
External links
German Statutory Accident Insurance Association
Actuarial science
Employee benefits
German labour law
Social security in Germany
Trade unions in Germany
Types of insurance | German Statutory Accident Insurance | [
"Mathematics"
] | 1,605 | [
"Applied mathematics",
"Actuarial science"
] |
17,604,070 | https://en.wikipedia.org/wiki/Knurled%20nut | A knurled nut is a nut with a knurled outside surface. This facilitates tightening by hand (thumb nut) or secures the nut into a handle or cover (insertion nut).
Uses
Knurled nuts are commonly used in any application where the fasteners will be removed regularly but are not needed for structural support. They can commonly be found on electrical panel covers, precision measuring tools, squares, and service covers. The advantages of using a knurled fastener in this situation are that it improves the ease of removal, deters possible over-tightening and thread stripping, and does not require any tools to manipulate the fastener.
However, there are knurled nuts available that have a slot cut into them for the use of a Phillips head screwdriver. This expands the versatility of the nut and provides the option to use tools. Nuts with the Phillips slot are common in applications where vibration is a concern.
References
What Are Thumb Screws and Where Are They Used, www.rabcomponents.com/blog/what-are-thumb-screws-and-where-are-they-used.html.
“Understanding Benefits and Applications of Knurled Thumb Screws.” Norwood, norwoodscrewmachine.com/blog/understanding-benefits-applications-knurled-thumb-screws/.
Nuts (hardware)
Mechanical fasteners | Knurled nut | [
"Engineering"
] | 286 | [
"Mechanical fasteners",
"Mechanical engineering"
] |
17,604,172 | https://en.wikipedia.org/wiki/Tecticornia%20arbuscula | Tecticornia arbuscula, the shrubby glasswort or scrubby samphire, is a species of plant in the family Amaranthaceae, native to Australia. It is a shrub that grows to 2 metres in height, with a spreading habit. It has succulent swollen branchlets with small leaf lobes.
The species occurs on shorelines in coastal or estuarine areas or in salt marshes, especially marshes subject to occasional inundation by the ocean. It has a patchy distribution across south coastal Australia, occurring in southern Western Australia, South Australia, Victoria, New South Wales and Tasmania.
Seeds of the species are enclosed in a hard, vaguely pyramid-shaped pericarp; the seeds themselves are about 1.5 mm long and narrow, appearing golden brown, transparent and unornamented.
Originally published by Robert Brown under the name Salicornia arbuscula, it was transferred into Sclerostegia by Paul G. Wilson in 1980, before being merged into Tecticornia in 2007.
References
arbuscula
Caryophyllales of Australia
Eudicots of Western Australia
Flora of South Australia
Flora of Victoria (state)
Flora of New South Wales
Flora of the Northern Territory
Halophytes
Taxa named by Robert Brown (botanist, born 1773) | Tecticornia arbuscula | [
"Chemistry"
] | 265 | [
"Halophytes",
"Salts"
] |
17,604,272 | https://en.wikipedia.org/wiki/Cerrosafe | Cerrosafe is a fusible alloy with a low melting point. It is a non-eutectic mixture consisting of 42.5% bismuth, 37.7% lead, 11.3% tin, and 8.5% cadmium that melts between and . It is useful for making reference castings whose dimensions can be correlated to those of the mold or other template due to its well-known thermal expansion properties during cooling. The alloy contracts during the first 30 minutes, allowing easy removal from a mold, then expands during the next 30 minutes to return to the exact original size. It then continues expanding at a known rate for 200 hours, allowing conversion of measurements of the casting back to those of the mold.
Similar metals
References
External links
Examples of chamber casts using low-temp metal
Fusible alloys
Bismuth alloys
Cadmium alloys
Lead alloys
Tin alloys | Cerrosafe | [
"Chemistry",
"Materials_science"
] | 179 | [
"Lead alloys",
"Alloy stubs",
"Metallurgy",
"Bismuth alloys",
"Fusible alloys",
"Tin alloys",
"Alloys",
"Cadmium alloys"
] |
17,604,891 | https://en.wikipedia.org/wiki/Keps%20nut | A Keps nut, (also called a k-lock nut or washer nut), is a nut with an attached, free-spinning washer.
It is used to make assembly more convenient. Common washer types are star-type lock washers, conical, and flat washers.
'Keps' trademark
Keps is a trademark of ITW Shakeproof. The name comes from the letters "kep" in ShaKEProof, and the "s" was added because more than one is usually purchased.
References
Notes
Bibliography
Nuts (hardware) | Keps nut | [
"Engineering"
] | 112 | [
"Mechanical engineering stubs",
"Mechanical engineering"
] |
17,604,902 | https://en.wikipedia.org/wiki/Serrated%20face%20nut | A serrated face nut is a locknut with ridges on the face of the nut that bite into the surface it is tightened against. The serrations are angled such that they keep the nut from rotating in the direction that would loosen the nut. Due to the serrations they cannot be used with a washer or on surfaces that cannot be scratched. Sometimes both faces of the nut are serrated, permitting either side to lock.
See also
Serrated flange nut
References
Nuts (hardware) | Serrated face nut | [
"Engineering"
] | 100 | [
"Mechanical engineering stubs",
"Mechanical engineering"
] |
17,605,257 | https://en.wikipedia.org/wiki/Lambda%20diode | A lambda diode is an electronic circuit that combines a complementary pair of junction gated field effect transistors into a two-terminal device that exhibits an area of differential negative resistance much like a tunnel diode. The term refers to the shape of the V–I curve of the device, which resembles the Greek letter λ (lambda).
Lambda diodes work at higher voltage than tunnel diodes. Whereas a typical tunnel diode may exhibit negative differential resistance approximately between 70 mV and 350 mV, this region occurs approximately between 1.5 V and 6 V in a lambda diode due to the higher pinch-off voltages of typical JFET devices. A lambda diode therefore cannot replace a tunnel diode directly.
Moreover, in a tunnel diode the current reaches a minimum of about 20% of the peak current before rising again towards higher voltages. The lambda diode current approaches zero as voltage increases, before rising quickly again at a voltage high enough to cause gate–source Zener breakdown in the FETs.
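A purely qualitative sketch of the λ-shaped curve described above can be written as a piecewise function. The breakpoint voltages (1.5 V and 6 V) come from the text; the peak current, breakdown voltage and slopes below are arbitrary illustrative values, not measured device data.

```python
def lambda_diode_current(v, i_peak=1e-3, v_peak=1.5, v_valley=6.0, v_breakdown=7.5):
    """Qualitative piecewise I-V model of a lambda diode (illustrative only).

    - 0 .. v_peak: current rises roughly linearly to i_peak
    - v_peak .. v_valley: negative differential resistance, current falls toward zero
    - above v_breakdown: current rises sharply (gate-source Zener breakdown)
    """
    if v <= 0:
        return 0.0
    if v <= v_peak:
        return i_peak * v / v_peak
    if v <= v_valley:
        return i_peak * (v_valley - v) / (v_valley - v_peak)
    if v <= v_breakdown:
        return 0.0
    return i_peak * (v - v_breakdown) * 10  # steep rise past breakdown (arbitrary slope)

for volts in (0.5, 1.5, 3.0, 6.0, 8.0):
    print(volts, lambda_diode_current(volts))
```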
It is also possible to construct a device similar to a lambda diode by combining an n-channel JFET with a PNP bipolar transistor.
A suggested modulatable variant, which is a bit more difficult to build, uses a PNP-based optocoupler and can be tweaked by driving its IR diode. This has the advantage that its properties can be fine-tuned with a simple bias driver, making it usable for high-sensitivity radio applications. Sometimes a modified open-can PNP transistor with an IR LED can be used instead.
Applications
Like the tunnel diode, the negative resistance aspect of the lambda diode lends itself naturally to application in oscillator circuits and amplifiers. In addition, bistable circuits such as memory cells have been described.
References
Literature
Analog circuits | Lambda diode | [
"Engineering"
] | 370 | [
"Analog circuits",
"Electronic engineering"
] |
17,606,723 | https://en.wikipedia.org/wiki/3D%20city%20model | A 3D city model is digital model of urban areas that represent terrain surfaces, sites, buildings, vegetation, infrastructure and landscape elements in three-dimensional scale as well as related objects (e.g., city furniture) belonging to urban areas. Their components are described and represented by corresponding two- and three-dimensional spatial data and geo-referenced data. 3D city models support presentation, exploration, analysis, and management tasks in a large number of different application domains. In particular, 3D city models allow "for visually integrating heterogeneous geoinformation within a single framework and, therefore, create and manage complex urban information spaces."
Storage
To store 3D city models, both file-based and database approaches are used. There is no single, unique representation schema due to the heterogeneity and diversity of 3d city model contents.
Encoding of components
The Components of 3D city models are encoded by common file and exchange formats for 2D raster-based GIS data (e.g., GeoTIFF), 2D vector-based GIS data (e.g., AutoCAD DXF), 3D models (e.g., .3DS, .OBJ), and 3D scenes (e.g., Collada, Keyhole Markup Language) such as supported by CAD, GIS, and computer graphics tools and systems. All components of a 3D city model have to be transformed into a common geographic coordinate system.
Databases
A database for 3D city models stores its components in a hierarchically structured, multi-scale way, which allows for a stable and reliable data management and facilitates complex GIS modeling and analysis tasks. For example, the 3D City Database is a free 3D geo database to store, represent, and manage virtual 3D city models on top of a standard spatial relational database. A database is required if 3D city models have to be continuously managed. 3D city model databases form a key element in 3D spatial data infrastructures that require support for storing, managing, maintenance, and distribution of 3D city model contents. Their implementation requires support of a multitude of formats (e.g., based on FME multi formats). As common application, geodata download portals can be set up for 3D city model contents (e.g., virtualcityWarehouse).
CityGML
The Open Geospatial Consortium (OGC) defines an explicit XML-based exchange format for 3D city models, CityGML, which supports not only geometric descriptions of 3D city model components but also the specification of semantics and topology information.
CityJSON
CityJSON is a JSON-based format for storing 3D city models. It mostly follows the CityGML data model, but aims to be developer- and user-friendly by avoiding most of the complexities of its usual GML encoding. Due to its simple encoding and the use of JSON, it is also suitable for web applications.
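To give a feel for the encoding, the snippet below assembles a minimal CityJSON-style document in Python. It is a simplified sketch: the top-level fields (`type`, `version`, `CityObjects`, `vertices`) follow the published CityJSON specification as commonly described, but a real file carries additional required metadata (such as a coordinate transform) and should be validated against the official schema; the building and its attribute values here are hypothetical.

```python
import json

# Minimal CityJSON-style document: one building as an extruded block (LOD 1).
# Geometry boundaries index into the shared `vertices` list.
city_model = {
    "type": "CityJSON",
    "version": "1.1",
    "CityObjects": {
        "building_1": {
            "type": "Building",
            "attributes": {"function": "residential"},  # illustrative attribute
            "geometry": [{
                "type": "Solid",
                "lod": "1",
                "boundaries": [[  # one shell; each face is a ring of vertex indices
                    [[0, 1, 2, 3]], [[4, 7, 6, 5]],
                    [[0, 4, 5, 1]], [[1, 5, 6, 2]],
                    [[2, 6, 7, 3]], [[3, 7, 4, 0]],
                ]],
            }],
        }
    },
    "vertices": [
        [0, 0, 0], [10, 0, 0], [10, 8, 0], [0, 8, 0],   # footprint (ground level)
        [0, 0, 6], [10, 0, 6], [10, 8, 6], [0, 8, 6],   # roof level at 6 m
    ],
}

print(json.dumps(city_model, indent=2)[:200], "...")
```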
Construction
Level of detail
3D city models are typically constructed at various levels of detail (LOD) to provide notions of multiple resolutions and at different levels of abstraction. Other metrics such as the level of spatio-semantic coherence and resolution of the texture can be considered a part of the LOD. For example, CityGML defines five LODs for building models:
LOD 0: 2.5D footprints
LOD 1: Buildings represented by block models (usually extruded footprints)
LOD 2: Building models with standard roof structures
LOD 3: Detailed (architectural) building models
LOD 4: LOD 3 building models supplemented with interior features.
There exist also approaches to generalize a given detailed 3D city model by means of automated generalization. For example, a hierarchical road network (e.g., OpenStreetMap) can be used to group 3D city model components into "cells"; each cell is abstracted by aggregating and merging contained components.
GIS data
GIS data provide the base information to build a 3D city model, such as digital terrain models, road networks, land use maps, and related geo-referenced data. GIS data also include cadastral data that can be converted into simple 3D models, for example in the case of extruded building footprints. Digital terrain models (DTMs), represented for example by TINs or grids, form core components of 3D city models.
CAD data
Typical sources of data for 3D city models also include CAD models of buildings, sites, and infrastructure elements. They provide a high level of detail, possibly not required by 3D city model applications, but can be incorporated either by exporting their geometry or as encapsulated objects.
BIM data
Building information models represent another category of geo-spatial data that can be integrated into a 3D city model providing the highest level of detail for building components.
Integration at visualization level
Complex 3D city models typically are based on different sources of geodata such as geodata from GIS, building and site models from CAD and BIM. It is one of their core properties to establish a common reference frame for heterogeneous geo-spatial and geo-referenced data, i.e., the data need not be merged or fused based on one common data model or schema. The integration is possible by sharing a common geo-coordinate system at the visualization level.
Building reconstruction
The simplest form of building model construction consists in extruding the footprint polygons of buildings, e.g., taken from the cadaster, by pre-computed average heights. In practice, 3D models of buildings of urban regions are generated based on capturing and analyzing 3D point clouds (e.g., sampled by terrestrial or aerial laser scanning) or by photogrammetric approaches. To achieve a high percentage of geometrically and topologically correct 3D building models, digital terrain surfaces and 2D footprint polygons are required by automated building reconstruction tools such as BREC. One key challenge is to find building parts with their corresponding roof geometry. "Since fully automatic image understanding is very hard to solve, semi-automatic components are usually required to at least support the recognition of very complex buildings by a human operator." Statistical approaches are common for roof reconstruction based on airborne laser scanning point clouds.
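A minimal sketch of the extrusion approach mentioned above: given a 2D footprint polygon and a single (e.g., averaged) height, produce the vertices and faces of an LOD 1 block model. This is an illustrative implementation, not the procedure of any particular reconstruction tool, and it assumes a simple, non-self-intersecting footprint; the sample footprint and height are hypothetical.

```python
def extrude_footprint(footprint, base_z, height):
    """Extrude a 2D footprint polygon into a simple LOD1 block model.

    footprint: list of (x, y) tuples in order (no repeated closing point).
    Returns (vertices, faces) where each face is a list of vertex indices.
    """
    n = len(footprint)
    bottom = [(x, y, base_z) for x, y in footprint]
    top = [(x, y, base_z + height) for x, y in footprint]
    vertices = bottom + top

    faces = []
    faces.append(list(range(n)))          # floor polygon
    faces.append(list(range(n, 2 * n)))   # roof polygon
    # (consistent outward orientation/winding is omitted for brevity)
    for i in range(n):                    # one quad wall per footprint edge
        j = (i + 1) % n
        faces.append([i, j, n + j, n + i])
    return vertices, faces

# Hypothetical cadastral footprint (metres) extruded to an average height of 9 m.
verts, faces = extrude_footprint([(0, 0), (12, 0), (12, 8), (0, 8)], base_z=0.0, height=9.0)
print(len(verts), "vertices,", len(faces), "faces")
```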
Fully automated processes exist to generate LOD1 and LOD2 building models for large regions. For example, the Bavarian Office for Surveying and Spatial Information is responsible for about 8 million building models at LOD1 and LOD2.
Visualization
The visualization of 3D city models represents a core functionality required for interactive applications and systems based on 3D city models.
Real-time rendering
Providing high quality visualization of massive 3D city models in a scalable, fast, and cost efficient manner is still a challenging task due to the complexity in terms of 3D geometry and textures of 3D city models. Real-time rendering provides a large number of specialized 3D rendering techniques for 3D city models.
Examples of specialized real-time 3D rendering include:
Real-time 3D rendering of road networks on high resolution terrain models.
Real-time 3D rendering of water surfaces with cartography-oriented design.
Real-time 3D rendering of day and night sky phenomena.
Real-time 3D rendering of grid-based terrain models.
Real-time 3D rendering using different levels of abstraction, ranging between 2D map views and 3D views.
Real-time 3D rendering of multiperspective views on 3D city models.
Real-time rendering algorithms and data structures are listed by the virtual terrain project.
Service-based rendering
Service-oriented architectures (SOA) for visualizing 3D city models offer a separation of concerns into management and rendering and their interactive provision by client applications. For SOA-based approaches, 3D portrayal services are required, whose main functionality represents the portrayal in the sense of 3D rendering and visualization. SOA-based approaches can be distinguished into two main categories, currently discussed in the Open Geospatial Consortium:
Web 3D service (W3DS): This type of service handles geodata access and mapping to computer graphics primitives such as scene graphs with textured 3D geometry models as well as their delivery to the requesting client applications. The client applications are responsible for the 3D rendering of delivered scene graphs, i.e., they are responsible for the interactive display using their own 3D graphics hardware.
Web view service (WVS): This type of service encapsulates the 3D rendering process for 3D city models at the server side. The server generates views of the 3D scene or intermediate, image-based representations (e.g., virtual panoramas or G-buffer cube maps), which are streamed and uploaded to requesting client applications. The client applications are responsible for re-construction the 3D scene based on the intermediate representations. Client applications do not have to process 3D graphics data, but to provide management for loading, caching, and displaying the image-based representations of 3D scenes and do not have to process the original (and possibly large) 3D city model.
Map-based visualization
A map-based technique, the "smart map" approach, aims at providing "massive, virtual 3D city models on different platforms namely web browsers, smartphones or tablets, by means of an interactive map assembled from artificial oblique image tiles." The map tiles are synthesized by an automatic 3D rendering process of the 3D city model; the map tiles, generated for different levels-of-detail, are stored on the server. This way, the 3D rendering is completely performed on the server's side, simplifying access and usage of 3D city models. The 3D rendering process can apply advanced rendering techniques (e.g., global illumination and shadow calculation, illustrative rendering), but does not require client devices to have advanced 3D graphics hardware. Most importantly, the map-based approach allows for distributing and using complex 3D city models without having to stream the underlying data to client devices - only the pre-generated map tiles are sent. This way, "(a) The complexity of the 3D city model data is decoupled from data transfer complexity (b) the implementation of client applications is simplified significantly as 3D rendering is encapsulated on server side (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach."
Applications
3D city models can be used for a multitude of purposes in a growing number of different application domains. Examples:
Navigation systems: 3D navigation maps have become omnipresent both in automotive and pedestrian navigation systems, which include 3D city models, in particular, terrain models and 3D building models, to enhance the visual depiction and to simplify the recognition of locations.
Urban planning and architecture: To set up, analyze, and disseminate urban planning concepts and projects, 3D city models serve as communication and participation medium. 3D city models provide means for project communication, better acceptance of development projects through visualization, and therefore avoid monetary loss through project delays; they also help to prevent planning errors.
Spatial data infrastructures (SDIs): 3D city models extend spatial data infrastructures and support the management, storage, and usage of 3D models within SDIs; they require not only tools and processes for the initial construction and storage of 3D city models but also have to provide efficient data management and data distribution to support workflows and applications.
GIS: GIS support 3D geodata and provide computational algorithms to construct, transform, validate, and analyze 3D city model components.
Emergency management: For emergency, risk, and disaster management systems, 3D city models provide the computational framework. In particular, they serve to simulate fire, flooding, and explosions. For example, the DETORBA project aims at simulating and analyzing the effects of explosions in urban areas at high precision, to support prediction of effects on the structural integrity and soundness of the urban infrastructure and the safety preparations of rescue forces.
Spatial analysis: 3D city models provide the computational framework for 3D spatial analysis and simulation. For example, they can be used to compute solar potential for 3D roof surfaces of cities, visibility analysis within the urban space, noise simulation, thermographic inspections of buildings
Geodesign: In geodesign, virtual 3D models of the environment (e.g., landscape models or urban models) facilitate exploration and presentation as well as analysis and simulation.
Gaming: 3D city models can be used to obtain base data for virtual 3D scenes used in online and video games.
Cultural heritage: 3D city model tools and systems are applied for modeling, design, exploration, and analysis tasks in the scope of cultural heritage. For example, archeological data can be embedded in 3D city models.
City information systems: 3D city models represent the framework for interactive 3D city information systems and 3D city maps. For example, municipalities apply 3D city models as centralized information platform for location marketing.
Property management: 3D city model technology can extend systems and applications used in real-estate and property management.
Intelligent transportation systems: 3D city models can be applied to intelligent transportation systems.
Augmented reality: 3D city models can be used as reference frame for augmented reality applications.
See also
City map
References
External links
3D City Model Systems and Tools Management and infrastructure components for 3D city models.
Map-based Visualization of 3D City Models Components for 3d city model applications.
OGC 3D Portrayal IE 3D portrayal interoperability experiment of the Open Geospatial Consortium.
3D City Model of Berlin Example of a massive 3D city models for an urban area.
3D City Model of Roman Cologne Example of a 3D city model for cultural heritage applications.
Applications of geographic information systems
Earth sciences graphics software
Geographic data and information
3D computer graphics
Maps of cities | 3D city model | [
"Technology"
] | 2,778 | [
"Geographic data and information",
"Data"
] |
17,606,886 | https://en.wikipedia.org/wiki/OZI%20rule | The Okubo–Zweig–Iizuka rule or OZI rule is a consequence of quantum chromodynamics (QCD) that explains why certain decay modes appear less frequently than otherwise might be expected. It was independently proposed by Susumu Okubo, George Zweig and Jugoro Iizuka in the 1960s.
It states that any strongly occurring process will be suppressed if, through only the removal of internal gluon lines, its Feynman diagram can be separated into two disconnected diagrams: one containing all of the initial-state particles and one containing all of the final-state particles.
An example of such a suppressed decay is that of the φ meson into pions: it would be expected that this decay mode would dominate over other decay modes such as which have much lower values. In actuality, it is seen that the φ decays to kaons 84% of the time, suggesting the decay path to pions is suppressed.
An explanation of the OZI rule can be seen from the decrease of the coupling constant in QCD with increasing energy (or momentum transfer). For the OZI suppressed channels, the gluons must have high Q² (at least as much as the rest mass energies of the quarks into which they decay) and so the coupling constant will appear small to these gluons.
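For reference, one common leading-order (one-loop) expression for the running coupling, which makes the suppression at high Q² explicit, is shown below; it is quoted as a standard textbook form rather than something given in this article.

```latex
% One-loop running of the strong coupling (standard textbook form):
%   n_f = number of active quark flavours, \Lambda = QCD scale parameter.
\alpha_s(Q^2) \;=\; \frac{12\pi}{\left(33 - 2 n_f\right)\,\ln\!\left(Q^2/\Lambda^2\right)}
% As Q^2 grows, \ln(Q^2/\Lambda^2) grows, so \alpha_s shrinks and each hard
% gluon in the OZI-suppressed diagram costs a small factor of \alpha_s.
```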
Another explanation of the OZI rule comes from the large- limit, in which the number of colors is assumed to be infinite. The OZI suppressed processes have a higher ratio of vertices (which contribute factors of ) to independent fermion loops (which contribute factors of ) when compared to the non-suppressed processes, and so these processes are much less common.
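One short way to make this counting concrete uses 't Hooft's large-N convention, which is an assumption of this sketch rather than something spelled out above: the coupling is taken to scale as g ∝ 1/√N so that each vertex costs 1/√N while each closed colour loop supplies a factor N.

```latex
% 't Hooft scaling (assumed convention): g^2 N held fixed, so g \propto 1/\sqrt{N}.
% A diagram with V interaction vertices and L independent colour loops then scales as
\mathcal{A} \;\sim\; g^{\,V}\, N^{\,L} \;\sim\; N^{\,L - V/2},
% so diagrams with more vertices per colour loop (the OZI-suppressed ones)
% are down by powers of 1/\sqrt{N} relative to the OZI-allowed diagrams.
```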
A further example is given by the decays of excited states of charmonium (bound state of charm quark and antiquark).
For states lighter than the charged D mesons, the decay must proceed just like the above example into three pions, with three virtual gluons mediating the interaction, each of which must have enough energy to produce a quark-antiquark pair.
But above the D meson threshold, the original valence quarks need not annihilate; they can propagate into the final states. In this case, only two gluons are required, which share the energy of the light quark-antiquark pair that is spontaneously nucleated. They are thus lower in energy than the three gluons of the OZI-suppressed annihilation. The suppression arises from both the smaller values of the QCD coupling constant at high energies, as well as the greater number of interaction vertices.
See also
J/ψ meson
References
Sources
Quantum chromodynamics | OZI rule | [
"Physics"
] | 557 | [
"Particle physics stubs",
"Particle physics"
] |
17,608,149 | https://en.wikipedia.org/wiki/Close%20stool | A close stool was an early type of portable toilet, made in the shape of a cabinet or box at sitting height with an opening in the top. The external structure contained a pewter or earthenware chamberpot to receive the user's excrement and urine when they sat on it; this was normally covered (closed) by a folding lid. "Stool" has two relevant meanings: as a type of seat and as human feces. Close stools were used from the Middle Ages (the Oxford English Dictionary gives the first citation as 1410) until the introduction of the indoor flush toilet.
At the Tudor Court
Records of the English court mention the "close stool" and detail its construction. As an example, the furniture maker and upholsterer William Green made a "close stool" in August 1537 for the Lady Mary. The stool was upholstered with crimson velvet and a silkwoman, Mistress Margery Vaughan, provided crimson silk fringes and ribbons for its decoration. Green made a leather carrying case for the stool. Close stools belonging to Cardinal Wolsey were covered with scarlet cloth and black velvet. Lady Jane Grey ordered crimson velvet to cover two close stools in July 1553.
Other names
In Scotland, equivalent close stools appear in inventories and were sometimes called "dry stools" or "stools of ease". James V of Scotland and his daughter Mary, Queen of Scots, both owned silk canopies which were suspended from the ceiling over the stool.
The close stool was sometimes called a necessary stool or a night stool. The eighteenth-century euphemism was convenience; the term was further euphemised in the nineteenth century with the term night commode, which John Gloag suggested may have derived its significance from a "balance night stool" described in Thomas Sheraton's Cabinet Dictionary (London, 1803). Sheraton's design was "made to have the appearance of a small commode standing upon legs; when it is used the seat part presses down to a proper height by the hand, and afterwards it rises by means of lead weights, hung to the seat, by lines passing over pulleys at each end, all which are enclosed in a case." This appears to be the link between "commode" as an elegant article of French furniture, and "commode" as a prosaic invalid toilet. "Close stool", in turn, is itself a euphemism for toilet chair. One meaning of commode survived into the twentieth century to refer to the flush toilet; "toilet" itself was originally a euphemism.
The French term for this item of furniture is a chaise percée ("pierced chair"), as it often takes the form of a chair with a seat which raises to show the opening to the pot; similar items were made specifically as a moveable bidet.
The French secretary of Mary, Queen of Scots, Claude Nau described her talking to the Countess of Huntly about their plans to escape from Holyroodhouse after the murder of David Rizzio, while she was sitting on her chaise percée.
Developments
A nineteenth century development is the thunderbox.
Cultural significance
The Groom of the Stool was a high-ranking courtier who assisted the monarch with the close stool.
See also
Commode
Potty chair
References
History of furniture
Chairs
Toilets | Close stool | [
"Biology"
] | 698 | [
"Excretion",
"Toilets"
] |
17,608,453 | https://en.wikipedia.org/wiki/Danofloxacin | Danofloxacin is a fluoroquinolone antibiotic used in veterinary medicine.
References
Fluoroquinolone antibiotics
Cyclopropyl compounds
Veterinary drugs
Carboxylic acids | Danofloxacin | [
"Chemistry"
] | 41 | [
"Carboxylic acids",
"Functional groups"
] |
17,608,460 | https://en.wikipedia.org/wiki/Texas%20ratio | The Texas ratio is a metric used to assess the extent of a bank's credit problems.
Developed by Gerard Cassidy and others at RBC Capital Markets, it is calculated by dividing the value of the lender's non-performing assets (NPL + Real Estate Owned) by the sum of its tangible common equity capital and loan loss reserves.
While analyzing Texas banks during the early 1980s recession, Cassidy observed that banks typically failed when this ratio reached 1:1, or 100%. He later identified a similar pattern among New England banks during the early 1990s recession.
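A minimal sketch of the calculation as defined above; the bank figures used in the example are hypothetical.

```python
def texas_ratio(non_performing_loans, real_estate_owned,
                tangible_common_equity, loan_loss_reserves):
    """Texas ratio = non-performing assets / (tangible common equity + loan loss reserves).

    Non-performing assets are taken here as non-performing loans plus
    real estate owned (REO). A value near or above 1.0 (100%) has
    historically signalled a high risk of bank failure.
    """
    non_performing_assets = non_performing_loans + real_estate_owned
    return non_performing_assets / (tangible_common_equity + loan_loss_reserves)

# Hypothetical bank, figures in millions of dollars.
ratio = texas_ratio(non_performing_loans=180.0, real_estate_owned=40.0,
                    tangible_common_equity=150.0, loan_loss_reserves=70.0)
print(f"Texas ratio: {ratio:.2f} ({ratio:.0%})")
```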
References
External links
Current Texas Ratios for all US Banks and Credit Unions
Current Texas Ratios for US Banks Updated May 21st 2010 by Amateur Investors
Complete list of US banks and their Texas ratios as published in December of 2008 and an updated listing published in October of 2009; the original blog entry includes notes how the tables were created (that the ratio was multiplied by 100 for easier comprehension, etc.)
Banking
Credit
Debt
Financial ratios | Texas ratio | [
"Mathematics"
] | 198 | [
"Financial ratios",
"Quantity",
"Metrics"
] |
17,608,461 | https://en.wikipedia.org/wiki/Wireless%20identity%20theft | Wireless identity theft, also known as contactless identity theft or RFID identity theft, is a form of identity theft described as "the act of compromising an individual’s personal identifying information using wireless (radio frequency) mechanics." Numerous articles have been written about wireless identity theft and broadcast television has produced several investigations of this phenomenon. According to Marc Rotenberg of the Electronic Privacy Information Center, wireless identity theft is a serious issue as the contactless (wireless) card design is inherently flawed, increasing the vulnerability to attacks.
Overview
Wireless identity theft is a relatively new technique for gathering individuals' personal information from RF-enabled cards carried on a person in their access control, credit, debit, or government issued identification cards. Each of these cards carry a radio frequency identification chip which responds to certain radio frequencies. When these "tags" come into contact with radio waves, they respond with a slightly altered signal. The response can contain encoded personally identifying information, including the card holder's name, address, Social Security Number, phone number, and pertinent account or employee information.
Upon capturing (or ‘harvesting’) this data, one is then able to program other cards to respond in an identical fashion (‘cloning’). Many websites are dedicated to teaching people how to do this, as well as supplying the necessary equipment and software.
The financial industry is migrating away from magnetic stripes on debit and credit cards, which require a swipe through a magnetic card reader, toward contactless RFID-enabled cards. More transactions can be processed per minute, arguably making for shorter lines at the cashier.
Controversies
Academic researchers and ‘White-Hat’ hackers have analysed and documented the covert theft of RFID credit card information and been met with both denials and criticisms from RFID card-issuing agencies. Nevertheless, after public disclosure of information that could be stolen by low-cost jerry-rigged detectors which were used to scan cards in mailing envelopes (and in other studies also even via drive-by data attacks), the design of security features on various cards was upgraded to remove card owners’ names and other data. Additionally, a number of completely unencrypted card designs were converted to encrypted data systems.
RSA report
The issues raised in a 2006 report were of importance due to the tens of millions of cards that have already been issued. Credit and debit card data could be stolen via special low cost radio scanners without the cards being physically touched or removed from their owner's pocket, purse or carry bag. Among the findings of the 2006 research study "Vulnerabilities in First-Generation RFID-Enabled Credit Cards", and in reports by other white-hat hackers:
some scanned credit cards revealed their owners’ names, card numbers and expiration dates;
that the short maximum scanning distance of the cards and tags (normally measured in inches or centimetres) could be extended to several feet via technological modifications;
that even without range-extension technologies, Black Hatters walking through crowded venues or delivering fliers could easily capture card data from other individuals and from mail envelopes;
that security experts who reviewed the study findings were startled by the breaches of privacy of the study (conducted in 2006);
that other e-systems, such as ExxonMobil’s Speedpass keychain payment device, used weak encryption methods which could be compromised by a half-hour or so of computing time;
that some cards’ scanned stolen data quickly yielded actual credit card numbers and didn't use data tokens;
that data illicitly obtained from some cards was successfully used to trick a regular commercial card-reader (used by the study group) into accepting purchase transactions from an online store that didn't require the entry of the cards’ validation codes;
that while higher level security systems have been and continue to be developed, and are available for RFID credit cards, it is only the actual banks which decide how much security they want to deploy for their cardholders;
that every one of the 20 cards tested in the study was defeated by at least one of the attacks the researchers deployed;
another related security threat concerned a different product: new government issued ePassports (passports that now incorporate RFID tags similar to credit and debit cards). The RFID tags in ePassports are also subject to data theft and cloning attacks. The United States government has been issuing ePassports since 2006.
In a related issue, privacy groups and individuals have also raised "Big Brother" concerns, where there is a threat to individuals from their aggregated information and even tracking of their movements by either card issuing agencies, other third party entities, and even by governments. Industry observers have stated that ‘...RFID certainly has the potential to be the most invasive consumer technology ever.’
Credit card issuing agencies have issued denial statements regarding wireless identity theft or fraud and provided marketing information that either directly criticized or implied that:
beyond the card data itself, other data protection and anti-fraud measures in their payment systems are in place to protect consumers;
the academic study conducted in 2006 used a sample of only 20 RFID cards, and was not accurately representative of the general RFID marketplace which generally used higher security than the tested cards;
unencrypted plain text information on the cards was "...basically useless" (by itself), since financial transactions they were tied to used verifications systems with powerful encryption technologies;
even if consumers were victims of RFID credit card fraud or identity theft, they would not be financially liable for such credit card fraud (a marketing strategy that ignores the other serious consequences to card holders after they've been associated with fraudulent transactions or have their identity stolen);
After the release of the study results, all of the credit card companies contacted during the New York Times' investigative report said that they were removing card holder names from the data being transmitted with their new second generation RFID cards.
Compromised U.S. identification documents
Certain official identification documents issued by the U.S. government, U.S. Passports, Passport Cards, and also enhanced driver's licenses issued by States of New York and Washington, contain RFID chips for the purpose of assisting those policing the U.S. border. Various security issues have been identified with their use, including the ability of black hats to harvest their identifier numbers at a distance and apply them to blank counterfeit documents and cards, thus assuming those people's identifiers.
Various issues and potential issues with their use have been identified, including privacy concerns. Although the RFID identifier number associated with each document is not supposed to include personal identification information, "...numbers evolve over time, and uses evolve over time, and eventually these things can reveal more information than we initially expect," stated Tadayoshi Kohno, an assistant professor of computer science at the University of Washington who participated in a study of such government-issued documents.
See also
Identity theft
RFID
HID Global
Credit card fraud
References
Further reading
(on how deliberately corrupted RFID tags could introduce viruses into computer systems).
Crime
Data security
Credit cards
Identity theft
Radio-frequency identification | Wireless identity theft | [
"Engineering"
] | 1,467 | [
"Radio-frequency identification",
"Cybersecurity engineering",
"Data security",
"Radio electronics"
] |
17,608,555 | https://en.wikipedia.org/wiki/Omaha%20Ford%20Motor%20Company%20Assembly%20Plant | The Omaha Ford Motor Company Assembly Plant is located at 1514-1524 Cuming Street in North Omaha, Nebraska. In its 16 years of operation, the plant employed 1,200 people and built approximately 450,000 cars and trucks. In the 1920s, it was Omaha's second-biggest shipper.
History
Ford plant
The plant was designed by Albert Kahn as a Model T assembly plant, and built in 1916. Its design represents an important step in the development of Ford's assembly process. Previously, each step in the assembly of an automobile had taken place in a different building, which entailed a cost in time and labor to move the product from one building to another. From 1903 to 1916, Kahn designed "all-under-one-roof" buildings for a variety of manufacturers. In such buildings, Ford's usual practice was to begin assembly on the top floor and move downward until the product was finished at ground level. The Omaha plant was an exception to this: assembly began on the lowest floor and moved upward. It is speculated that the roof was used for storage of finished automobiles.
In 1917, Kahn designed the first single-floor assembly plant with a continuous moving assembly line at Ford's Rouge River plant. This design supplanted the older one; the Model A, which replaced the Model T, used a continuous line that could not be installed in the Omaha plant. Assembly ceased at the Omaha plant in 1932. Ford continued to use the building as a sales and service center until 1955.
Post-Ford
After Ford's departure, the building was used as a warehouse by the Western Electric Company from 1956 to 1959. It was then vacant until 1963, when it was occupied by Tip Top Products, an Omaha manufacturer of liquid solder, hair accessories, and other plastic goods founded by Carl W. Renstrom. Tip Top left the building in 1986, after which it was again vacant for several years. It served as a tire warehouse and retail outlet for some time, but then fell vacant again.
In 2005, the building was opened as TipTop Apartments, a mixed-use building with office space on the first floor and with 96 loft-style apartments on the upper levels; an adjoining building houses a banquet-and-conference center.
See also
History of Omaha
References
Industrial buildings completed in 1916
National Register of Historic Places in Omaha, Nebraska
Ford factories
Motor vehicle assembly plants in Nebraska
Buildings and structures in Omaha, Nebraska
History of Downtown Omaha, Nebraska
Industrial buildings and structures on the National Register of Historic Places in Nebraska
Motor vehicle manufacturing plants on the National Register of Historic Places
Transportation buildings and structures on the National Register of Historic Places in Nebraska
Defunct manufacturing companies based in Nebraska
Mill architecture | Omaha Ford Motor Company Assembly Plant | [
"Engineering"
] | 546 | [
"Mill architecture",
"Architecture"
] |
17,608,618 | https://en.wikipedia.org/wiki/Harald%20zur%20Hausen | Harald zur Hausen NAS EASA APS (; 11 March 1936 – 29 May 2023) was a German virologist. He carried out research on cervical cancer and discovered the role of papilloma viruses in cervical cancer, for which he received the Nobel Prize in Physiology or Medicine in 2008. He was chairman of the German Cancer Research Center (Deutsches Krebsforschungszentrum, DKFZ) in Heidelberg.
Early life and education
Zur Hausen was born in Gelsenkirchen in a Catholic family. He completed his Abitur at Antonianum Grammar School in Vechta, then studied medicine at the universities of Bonn from 1955, Hamburg from 1957, and Düsseldorf from 1958, and received a Doctor of Medicine degree there in 1960. He pursued internships in Wimbern, Isny, Gelsenkirchen, and Düsseldorf, qualifying as a physician in 1962.
Career
He joined the Institute for Microbiology at the University of Düsseldorf as a laboratory assistant in 1962. After three and a half years there, he moved to Philadelphia to work at the Virus Laboratories of Children's Hospital of Philadelphia together with eminent virologists Werner and Gertrude Henle, who had escaped from Nazi Germany. In 1967, he contributed to a ground-breaking study that for the first time proved a virus (Epstein–Barr virus) can turn healthy cells (lymphocytes) into cancer cells. He became an assistant professor at the University of Pennsylvania in 1968. In 1969, he returned to Germany to become a regular teaching and researching professor at the University of Würzburg's Institute for Virology. In 1972, he moved to the University of Erlangen–Nuremberg. In 1977, he moved on to the University of Freiburg (Breisgau), where he headed the Department of Virology and Hygiene.
Working with Lutz Gissmann, zur Hausen first isolated human papillomavirus 6 by simple centrifugation from genital warts. He isolated HPV 6 DNA from genital warts, suggesting a possible new way of identifying viruses in human tumours. This discovery paid off several years later, in 1983, when zur Hausen identified HPV 16 DNA in cervical cancer tumours by means of Southern blot hybridization. This was followed by the discovery of HPV18 a year later, thus identifying the causes of approximately 75% of human cervical cancer. The announcement of his breakthrough sparked a major scientific controversy.
From 1983 until 2003, zur Hausen served as chairman of the board and scientific advisory board member of the German Cancer Research Center (Deutsches Krebsforschungszentrum, DKFZ) in Heidelberg and as professor of medicine at Heidelberg University.
From 2007 to 2011, zur Hausen was a member of the scientific advisory board of Zukunftskolleg at the University of Konstanz. He was editor-in-chief of the International Journal of Cancer until the end of 2010. On 1 January 2010, zur Hausen became the vice president of German Cancer Aid, the largest cancer charity in Europe.
Scientific merits
Zur Hausen's field of research was the study of oncoviruses. In 1976, he hypothesised that human papillomavirus plays an important role in causing cervical cancer. Together with his collaborators, he then identified HPV16 and HPV18 in cervical cancers in 1983–84. This research made possible the development of the HPV vaccine, the first formulation of which was commercialised in 2006. He is also credited with discovery of the virus causing genital warts (HPV 6) and a monkey lymphotropic polyomavirus that is a close relative to a recently discovered human Merkel cell polyomavirus, as well as of techniques to immortalise cells with Epstein–Barr virus and to induce replication of the virus using phorbol esters. His work on papillomaviruses and cervical cancer received a great deal of scientific criticism when first published but subsequently was confirmed and was used as the basis for research on other high-risk papillomaviruses.
Nobel Prize
Zur Hausen shared the 2008 Nobel Prize in Medicine with Luc Montagnier and Françoise Barré-Sinoussi, for his discovery of human papilloma virus (HPV) causing cervical cancer.
The award of the 2008 Nobel Prize to zur Hausen became controversial following the revelation that Bo Angelin, a member of the Nobel Assembly that year, also sat on the board of AstraZeneca, a company that earns patent royalties for HPV vaccines. The controversy was exacerbated by the fact that AstraZeneca had also entered into a partnership with Nobel Web and Nobel Media to sponsor documentaries and lectures to increase awareness of the prize. However, colleagues widely felt that the award was deserved, and the secretary of the Nobel Committee and Assembly issued a statement affirming that Bo Angelin was unaware of AstraZeneca's HPV vaccine patents at the time of the vote.
Personal life
Zur Hausen had three sons from his first marriage, Jan Dirk, Axel and Gerrit. In 1993, he married Ethel-Michele de Villiers, who at the time was a fellow researcher at the German Cancer Research Center, and who in prior years had co-authored many research journal articles with zur Hausen on papilloma virus and genital cancer, dating as far back as 1981. He acknowledged her research contributions and support in his Nobel Prize biography.
Zur Hausen died on 29 May 2023, at age 87.
Books
Awards
Robert Koch Prize (1975)
Lila and Murray Gruber Memorial Cancer Research Award from the American Academy of Dermatology (1985)
Charles S. Mott Prize (1986)
Paul Ehrlich and Ludwig Darmstaedter Prize (1994)
International member of the American Philosophical Society (1998)
Raymond Bourgine Award (2006)
William B. Coley Award for Distinguished Research in Basic and Tumor Immunology (with Ian Frazer) (2006)
Loeffler-Frosch Medal of Erlangen (2007)
Johann-Georg-Zimmermann Medal of Hannover (2007)
Warren Alpert Foundation Prize (2007)
AACR Award for Lifetime Achievement in Cancer Research (2008)
Gairdner Foundation International Award (2008)
Nobel Prize in Physiology or Medicine (2008)
Knight Commander's Cross of the Order of Merit of the Federal Republic of Germany (2009)
Tsungming-Tu Prize (2011)
Ernst Wertheim Prize (2012)
Science of Oncology Award from the American Society of Clinical Oncology (2014)
Mike Price Gold Medal Award from The European Association for Cancer Research (2014)
Memberships
Member of the Academia Europaea (1990)
Member of the American Philosophical Society (1998)
Honorary Member European Academy of Sciences and Arts (2008)
International member of the National Academy of Sciences (2009)
Foreign Member of the Finnish Society of Sciences and Letters (2010)
Honorary Fellow of the World Hellenic Biomedical Association (2013)
Fellow of the American Association for Cancer Research (2013)
Honorary Member of the German Society of Virology (2013)
Corresponding member of the Slovenian Academy of Sciences and Arts (June 2015)
Fellow of the American Association for the Advancement of Science (2017)
Honorary degrees
Zur Hausen received almost 40 honorary doctorates and numerous honorary professorships, including degrees from the universities of Chicago, Umeå, Prague, Salford, Helsinki, Erlangen-Nuremberg, Ferrara, Guadalajara and Sal.
References
Further reading
(interview, CV, publications)
External links
Zur Hausen Nobel Prize lecture
Harald zur Hausen / Nobelpreis für Medizin 2008 (in German) DKFZ 2008
1936 births
2023 deaths
Academic staff of the University of Würzburg
Academic staff of the University of Erlangen-Nuremberg
Academic staff of the University of Freiburg
Academic staff of Heidelberg University
Members of the European Molecular Biology Organization
Members of the German Academy of Sciences at Berlin
Members of the American Philosophical Society
Members of the National Academy of Medicine
Members of the Slovenian Academy of Sciences and Arts
People from Gelsenkirchen
Cancer researchers
German virologists
German medical researchers
German Nobel laureates
People from the Province of Westphalia
Nobel laureates in Physiology or Medicine
Papillomavirus
University of Bonn alumni
University of Hamburg alumni
Heinrich Heine University Düsseldorf alumni
Knights Commander of the Order of Merit of the Federal Republic of Germany
Recipients of the Order of Merit of Baden-Württemberg
University of Pennsylvania faculty
Infectious causes of cancer
Foreign associates of the National Academy of Sciences | Harald zur Hausen | [
"Biology"
] | 1,754 | [
"Viruses",
"Papillomavirus"
] |
1,052,323 | https://en.wikipedia.org/wiki/Sphere%20of%20influence%20%28astrodynamics%29 | A sphere of influence (SOI) in astrodynamics and astronomy is the oblate spheroid-shaped region where a particular celestial body exerts the main gravitational influence on an orbiting object. This is usually used to describe the areas in the Solar System where planets dominate the orbits of surrounding objects such as moons, despite the presence of the much more massive but distant Sun.
In the patched conic approximation, used in estimating the trajectories of bodies moving between the neighbourhoods of different bodies using a two-body approximation, ellipses and hyperbolae, the SOI is taken as the boundary where the trajectory switches which mass field it is influenced by. It is not to be confused with the sphere of activity which extends well beyond the sphere of influence.
Models
The most common base models used to calculate the sphere of influence are the Hill sphere and the Laplace sphere, but updated and particularly more dynamic ones have been described.
The general equation describing the radius of the sphere of influence of a planet is

r_SOI ≈ a (m/M)^(2/5)

where
a is the semimajor axis of the smaller object's (usually a planet's) orbit around the larger body (usually the Sun),
m and M are the masses of the smaller and the larger object (usually a planet and the Sun), respectively.
In the patched conic approximation, once an object leaves the planet's SOI, the primary/only gravitational influence is the Sun (until the object enters another body's SOI). Because the definition of r_SOI relies on the presence of the Sun and a planet, the term is only applicable in a three-body or greater system and requires the mass of the primary body to be much greater than the mass of the secondary body. This changes the three-body problem into a restricted two-body problem.
Table of selected SOI radii
The table lists the radii of the spheres of influence of Solar System bodies with respect to the Sun (with the exception of the Moon, which is reported relative to Earth):
An important point to draw from this table is that "sphere of influence" here means the influence of the body as primary. For example, although Jupiter is much larger in mass than, say, Neptune, its primary SOI is much smaller because of Jupiter's much closer proximity to the Sun. A short numerical sketch of the same effect is given below.
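The following is a minimal Python sketch of the formula r_SOI ≈ a (m/M)^(2/5); the masses and semimajor axes are rounded approximate values assumed here purely for illustration.

```python
# Sphere-of-influence radius r_SOI ~= a * (m/M)**(2/5) for a few planets.
# Masses and semimajor axes are rounded reference values, for illustration only.

M_SUN = 1.989e30  # kg

planets = {                      # name: (semimajor axis a in m, mass m in kg)
    "Earth":   (1.496e11, 5.972e24),
    "Mars":    (2.279e11, 6.417e23),
    "Jupiter": (7.785e11, 1.898e27),
    "Neptune": (4.498e12, 1.024e26),
}

for name, (a, m) in planets.items():
    r_soi = a * (m / M_SUN) ** (2 / 5)
    print(f"{name:8s} r_SOI ~ {r_soi / 1e9:6.2f} million km")
```

With these inputs Earth's SOI comes out near 0.92 million km and Jupiter's near 48 million km, while Neptune's is roughly 87 million km, consistent with the Jupiter-versus-Neptune comparison above.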
Increased accuracy on the SOI
The sphere of influence is, in fact, not quite a sphere. The distance to the SOI boundary depends on the angular distance θ from the massive body. A more accurate formula is given by

r_SOI(θ) ≈ a (m/M)^(2/5) / (1 + 3 cos²θ)^(1/10)

Averaging over all possible directions we get:

r_SOI ≈ 0.9431 a (m/M)^(2/5)
Derivation
Consider two point masses A and B at locations r_A and r_B, with mass m_A and m_B respectively. The distance R = |r_B − r_A| separates the two objects. Given a massless third point P at location r_P, one can ask whether to use a frame centered on A or on B to analyse the dynamics of P.
Consider a frame centered on A. The gravity of B is denoted as g_B and will be treated as a perturbation to the dynamics of P due to the gravity g_A of body A. Due to their gravitational interactions, point A is attracted to point B with acceleration a_A; this frame is therefore non-inertial. To quantify the effects of the perturbations in this frame, one should consider the ratio of the perturbation to the main body gravity, i.e. χ_A = |g_B − a_A| / |g_A|. The perturbation g_B − a_A is also known as the tidal force due to body B. It is possible to construct the perturbation ratio χ_B for the frame centered on B by interchanging A ↔ B.
As P gets close to A, χ_A → 0 and χ_B → ∞, and vice versa. The frame to choose is the one that has the smallest perturbation ratio. The surface for which χ_A = χ_B separates the two regions of influence. In general this region is rather complicated, but in the case that one mass dominates the other, say m_A ≪ m_B, it is possible to approximate the separating surface. In such a case this surface must be close to the mass A; denote r as the distance from A to the separating surface.
The distance to the sphere of influence must thus satisfy r ≪ R, and so r_SOI ≈ R (m_A/m_B)^(2/5) is the radius of the sphere of influence of body A.
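A rough numerical check of this derivation is sketched below, assuming approximate values for the gravitational constant, the Sun and Earth masses, and the Sun–Earth distance: the code scans along the Sun–Earth line for the distance at which the two perturbation ratios become equal and compares it with the 2/5-power estimate.

```python
G = 6.674e-11                          # m^3 kg^-1 s^-2, approximate
m_sun, m_earth = 1.989e30, 5.972e24    # kg, approximate
R = 1.496e11                           # Sun-Earth distance in m, approximate

def ratios(r):
    """Perturbation ratios for a test point at distance r from Earth,
    placed on the straight line between Earth and the Sun (0 < r < R)."""
    g_earth = G * m_earth / r**2               # main term, Earth-centred frame
    g_sun_at_p = G * m_sun / (R - r)**2        # Sun's pull on the test point
    g_sun_at_earth = G * m_sun / R**2          # frame acceleration of the Earth
    chi_earth = abs(g_sun_at_p - g_sun_at_earth) / g_earth

    g_earth_at_sun = G * m_earth / R**2        # frame acceleration of the Sun
    chi_sun = abs(g_earth - g_earth_at_sun) / g_sun_at_p
    return chi_earth, chi_sun

# Scan for the distance where the two ratios cross (the separating surface).
best_r = min((step * 1e5 for step in range(1000, 30000)),
             key=lambda r: abs(ratios(r)[0] - ratios(r)[1]))

r_formula = R * (m_earth / m_sun) ** (2 / 5)
print(f"boundary along the Sun-Earth line: {best_r / 1e9:.3f} million km")
print(f"2/5-power estimate:                {r_formula / 1e9:.3f} million km")
```

Along this particular direction the numerically located boundary comes out somewhat smaller than the direction-averaged 2/5-power value, which is expected because the separating surface is direction-dependent, as noted in the previous section.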
Gravity well
"Gravity well" is a metaphorical name for the sphere of influence, highlighting the gravitational potential that shapes the sphere of influence and that needs to be accounted for in order to escape it or to stay within it.
See also
Hill sphere
Sphere of influence (black hole)
Clearing the neighbourhood
References
General references
External links
Project Pluto
Astrodynamics
Orbits | Sphere of influence (astrodynamics) | [
"Engineering"
] | 877 | [
"Astrodynamics",
"Aerospace engineering"
] |
1,052,632 | https://en.wikipedia.org/wiki/Sylvester%E2%80%93Gallai%20theorem | The Sylvester–Gallai theorem in geometry states that every finite set of points in the Euclidean plane has a line that passes through exactly two of the points or a line that passes through all of them. It is named after James Joseph Sylvester, who posed it as a problem in 1893, and Tibor Gallai, who published one of the first proofs of this theorem in 1944.
A line that contains exactly two of a set of points is known as an ordinary line. Another way of stating the theorem is that every finite set of points that is not collinear has an ordinary line. According to a strengthening of the theorem, every finite point set (not all on one line) has at least a linear number of ordinary lines. An algorithm can find an ordinary line in a set of n points in time O(n log n).
History
The Sylvester–Gallai theorem was posed as a problem by Sylvester in 1893. It has been suggested that Sylvester may have been motivated by a related phenomenon in algebraic geometry, in which the inflection points of a cubic curve in the complex projective plane form a configuration of nine points and twelve lines (the Hesse configuration) in which each line determined by two of the points contains a third point. The Sylvester–Gallai theorem implies that it is impossible for all nine of these points to have real coordinates.
A short proof was claimed soon after the problem was posed, but it was already noted to be incomplete at the time of publication. In 1941 Melchior proved the theorem (and actually a slightly stronger result) in an equivalent formulation, its projective dual. Unaware of Melchior's proof, Erdős again stated the conjecture, which was subsequently proved by Tibor Gallai, and soon afterwards by other authors.
In a 1951 review, Erdős called the result "Gallai's theorem", but it was already called the Sylvester–Gallai theorem in a 1954 review by Leonard Blumenthal. It is one of many mathematical topics named after Sylvester.
Equivalent versions
The question of the existence of an ordinary line can also be posed for points in the real projective plane RP2 instead of the Euclidean plane. The projective plane can be formed from the Euclidean plane by adding extra points "at infinity" where lines that are parallel in the Euclidean plane intersect each other, and by adding a single line "at infinity" containing all the added points. However, the additional points of the projective plane cannot help create non-Euclidean finite point sets with no ordinary line, as any finite point set in the projective plane can be transformed into a Euclidean point set with the same combinatorial pattern of point-line incidences. Therefore, any pattern of finitely many intersecting points and lines that exists in one of these two types of plane also exists in the other. Nevertheless, the projective viewpoint allows certain configurations to be described more easily. In particular, it allows the use of projective duality, in which the roles of points and lines in statements of projective geometry can be exchanged for each other. Under projective duality, the existence of an ordinary line for a set of non-collinear points in RP2 is equivalent to the existence of an ordinary point in a nontrivial arrangement of finitely many lines. An arrangement is said to be trivial when all its lines pass through a common point, and nontrivial otherwise; an ordinary point is a point that belongs to exactly two lines.
Arrangements of lines have a combinatorial structure closely connected to zonohedra, polyhedra formed as the Minkowski sum of a finite set of line segments, called generators. In this connection, each pair of opposite faces of a zonohedron corresponds to a crossing point of an arrangement of lines in the projective plane, with one line for each generator. The number of sides of each face is twice the number of lines that cross at the corresponding point of the arrangement. For instance, the elongated dodecahedron is a zonohedron with five generators, two pairs of opposite hexagon faces, and four pairs of opposite parallelogram faces.
In the corresponding five-line arrangement, two triples of lines cross (corresponding to the two pairs of opposite hexagons) and the remaining four pairs of lines cross at ordinary points (corresponding to the four pairs of opposite parallelograms). An equivalent statement of the Sylvester–Gallai theorem, in terms of zonohedra, is that every zonohedron has at least one parallelogram face (counting rectangles, rhombuses, and squares as special cases of parallelograms). More strongly, whenever sets of n points in the plane can be guaranteed to have at least t ordinary lines, zonohedra with n generators can be guaranteed to have at least 2t parallelogram faces.
Proofs
The Sylvester–Gallai theorem has been proved in many different ways. Gallai's 1944 proof switches back and forth between Euclidean and projective geometry, in order to transform the points into an equivalent configuration in which an ordinary line can be found as a line of slope closest to zero. The 1941 proof by Melchior uses projective duality to convert the problem into an equivalent question about arrangements of lines, which can be answered using Euler's polyhedral formula. Another proof by Leroy Milton Kelly shows by contradiction that the connecting line with the smallest nonzero distance to another point must be ordinary. And, following an earlier proof by Steinberg, H. S. M. Coxeter showed that the metric concepts of slope and distance appearing in Gallai's and Kelly's proofs are unnecessarily powerful, instead proving the theorem using only the axioms of ordered geometry.
Kelly's proof
This proof is by Leroy Milton Kelly. It has been called "simply the best" of the many proofs of this theorem.
Suppose that a finite set S of points is not all collinear. Define a connecting line to be a line that contains at least two points of S. By finiteness, S must have a point P and a connecting line ℓ that are a positive distance apart such that no other point–line pair has a smaller positive distance. Kelly proved that ℓ is ordinary, by contradiction.
Assume that ℓ is not ordinary. Then it goes through at least three points of S. At least two of these are on the same side of P′, the perpendicular projection of P on ℓ. Call them B and C, with B being closest to P′ (and possibly coinciding with it). Draw the connecting line m passing through P and C, and the perpendicular from B to m, meeting m at B′. Then BB′ is shorter than PP′. This follows from the fact that PP′C and BB′C are similar triangles, one contained inside the other.
However, this contradicts the original definition of P and ℓ as the point–line pair with the smallest positive distance. So the assumption that ℓ is not ordinary cannot be true, QED.
Melchior's proof
In 1941 (thus, prior to Erdős publishing the question and Gallai's subsequent proof) Melchior showed that any nontrivial finite arrangement of lines in the projective plane has at least three ordinary points. By duality, this result also says that any finite nontrivial set of points on the plane has at least three ordinary lines.
Melchior observed that, for any graph embedded in the real projective plane, the quantity V − E + F must equal 1, the Euler characteristic of the projective plane. Here V, E, and F are the number of vertices, edges, and faces of the graph, respectively. Any nontrivial line arrangement on the projective plane defines a graph in which each face is bounded by at least three edges, and each edge bounds two faces; so, double counting gives the additional inequality 3F ≤ 2E. Using this inequality to eliminate F from the Euler characteristic leads to the inequality E ≤ 3V − 3. But if every vertex in the arrangement were the crossing point of three or more lines, then the total number of edges would be at least 3V, contradicting this inequality. Therefore, some vertices must be the crossing point of only two lines, and as Melchior's more careful analysis shows, at least three ordinary vertices are needed in order to satisfy the inequality E ≤ 3V − 3.
The same argument for the existence of an ordinary vertex was also given in 1944 by Norman Steenrod, who explicitly applied it to the dual ordinary line problem.
Melchior's inequality
By a similar argument, Melchior was able to prove a more general result. For every k ≥ 2, let t_k be the number of points to which exactly k lines are incident. Then

Σ_{k ≥ 2} (k − 3) t_k ≤ −3

or equivalently,

t_2 ≥ 3 + Σ_{k ≥ 4} (k − 3) t_k
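Melchior's inequality can be checked by machine for small arrangements. The Python sketch below is a minimal affine check, so it only handles arrangements with no parallel lines (intersections at infinity are ignored); the example arrangement, a complete quadrilateral built from the six lines through pairs of four points in general position, is chosen here purely for illustration.

```python
from fractions import Fraction
from itertools import combinations
from collections import Counter

def intersection(l1, l2):
    """Intersection of lines a*x + b*y = c, or None if they are parallel."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return (Fraction(c1 * b2 - c2 * b1, det),
            Fraction(a1 * c2 - a2 * c1, det))

def crossing_multiplicities(lines):
    """Counter t with t[k] = number of crossing points where exactly k lines meet."""
    through = {}
    for l1, l2 in combinations(lines, 2):
        p = intersection(l1, l2)
        if p is not None:
            through.setdefault(p, set()).update({l1, l2})
    return Counter(len(ls) for ls in through.values())

# Complete quadrilateral: the 6 lines through pairs of 4 points in general
# position, giving 4 triple points and 3 ordinary (double) points.
lines = [(0, 1, 0),   # y = 0
         (1, 0, 0),   # x = 0
         (3, -2, 0),  # 3x - 2y = 0
         (1, 1, 1),   # x + y = 1
         (3, -1, 3),  # 3x - y = 3
         (-1, 1, 1)]  # -x + y = 1
t = crossing_multiplicities(lines)
rhs = 3 + sum((k - 3) * t[k] for k in t if k >= 4)
print(dict(t), t[2] >= rhs)   # expected: {3: 4, 2: 3} True
```

For this arrangement the crossing multiplicities are four triple points and exactly three ordinary points, so the bound t_2 ≥ 3 is attained with equality.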
Axiomatics
Coxeter writes of Kelly's proof that its use of Euclidean distance is unnecessarily powerful, "like using a sledge hammer to crack an almond". Instead, Coxeter gave another proof of the Sylvester–Gallai theorem within ordered geometry, an axiomatization of geometry in terms of betweenness that includes not only Euclidean geometry but several other related geometries. Coxeter's proof is a variation of an earlier proof given by Steinberg in 1944. The problem of finding a minimal set of axioms needed to prove the theorem belongs to reverse mathematics and has been studied in that setting.
The usual statement of the Sylvester–Gallai theorem is not valid in constructive analysis, as it implies the lesser limited principle of omniscience, a weakened form of the law of excluded middle that is rejected as an axiom of constructive mathematics. Nevertheless, it is possible to formulate a version of the Sylvester–Gallai theorem that is valid within the axioms of constructive analysis, and to adapt Kelly's proof of the theorem to be a valid proof under these axioms.
Finding an ordinary line
Kelly's proof of the existence of an ordinary line can be turned into an algorithm that finds an ordinary line by searching for the closest pair of a point and a line through two other points. The time for this closest-pair search has been reported as O(n^3), based on a brute-force search of all triples of points, but an algorithm to find the closest given point to each line through two given points, in time O(n^2), was given earlier as a subroutine for finding the minimum-area triangle determined by three of a given set of points. The same work also shows how to construct the dual arrangement of lines to the given points (as used in Melchior and Steenrod's proof) in the same O(n^2) time, from which it is possible to identify all ordinary vertices and all ordinary lines. It was first shown how to find a single ordinary line (not necessarily the one from Kelly's proof) in time O(n log n), and a simpler algorithm with the same time bound was described later.
This simpler algorithm is based on Coxeter's proof using ordered geometry. It performs the following steps:
Choose a point p0 that is a vertex of the convex hull of the given points.
Construct a line ℓ that passes through p0 and otherwise stays outside of the convex hull.
Sort the other given points by the angle they make with p0, grouping together points that form the same angle.
If any of the points is alone in its group, then return the ordinary line through that point and p0.
For each two consecutive groups of points, in the sorted sequence by their angles, form two lines, each of which passes through the closest point to p0 in one group and the farthest point from p0 in the other group.
For each line ℓi in the set of lines formed in this way, find the intersection point of ℓi with ℓ.
Return the line ℓi whose intersection point with ℓ is the closest to p0.
As the authors prove, the line returned by this algorithm must be ordinary. The proof is either by construction if it is returned by step 4, or by contradiction if it is returned by step 7: if the line returned in step 7 were not ordinary, then the authors prove that there would exist an ordinary line between one of its points and p0, but this line should have already been found and returned in step 4.
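For comparison with these optimized methods, the brute-force O(n^3) approach mentioned above is easy to state in code. The minimal Python sketch below examines every pair of points with exact rational arithmetic and keeps those pairs whose connecting line contains no third point; the test configuration is an integer-coordinate version of the triangle-with-midpoints-and-centroid example discussed in the next section, which has exactly three ordinary lines.

```python
from itertools import combinations
from fractions import Fraction

def ordinary_lines(points):
    """Return the ordinary lines: lines through exactly two of the points.

    Brute-force O(n^3): for every pair of points, count how many of the
    points are collinear with the pair (the pair itself included).
    Exact rational arithmetic avoids floating-point collinearity errors.
    """
    pts = [(Fraction(x), Fraction(y)) for x, y in points]
    result = []
    for (ax, ay), (bx, by) in combinations(pts, 2):
        on_line = sum(
            1 for (cx, cy) in pts
            if (bx - ax) * (cy - ay) == (by - ay) * (cx - ax)
        )
        if on_line == 2:           # only the two defining points lie on it
            result.append(((ax, ay), (bx, by)))
    return result

# Triangle vertices, edge midpoints and centroid (integer coordinates,
# affinely equivalent to the equilateral case): 7 points, 3 ordinary lines.
triangle7 = [(0, 0), (6, 0), (0, 6),   # vertices
             (3, 0), (3, 3), (0, 3),   # edge midpoints
             (2, 2)]                   # centroid
print(len(ordinary_lines(triangle7)))  # prints 3
```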
The number of ordinary lines
While the Sylvester–Gallai theorem states that an arrangement of points, not all collinear, must determine an ordinary line, it does not say how many must be determined. Let t_2(n) be the minimum number of ordinary lines determined over every set of n non-collinear points. Melchior's proof showed that t_2(n) ≥ 3. It was then asked whether t_2(n) approaches infinity with n; Motzkin confirmed that it does by proving that t_2(n) ≥ √n. Dirac conjectured that t_2(n) ≥ ⌊n/2⌋ for all values of n. This is often referred to as the Dirac–Motzkin conjecture. Kelly and Moser proved that t_2(n) ≥ 3n/7.
Dirac's conjectured lower bound is asymptotically the best possible, as the even numbers n greater than four have a matching upper bound t_2(n) ≤ n/2. The construction, due to Károly Böröczky, that achieves this bound consists of the m vertices of a regular m-gon in the real projective plane and another m points (thus, n = 2m) on the line at infinity corresponding to each of the directions determined by pairs of vertices. Although there are m(m − 1)/2 pairs of these points, they determine only m distinct directions. This arrangement has only m ordinary lines, the lines that connect a vertex v with the point at infinity collinear with the two neighbors of v. As with any finite configuration in the real projective plane, this construction can be perturbed so that all points are finite, without changing the number of ordinary lines.
For odd n, only two examples are known that match Dirac's lower bound conjecture, that is, with t_2(n) = (n − 1)/2. One example, by Kelly and Moser, consists of the vertices, edge midpoints, and centroid of an equilateral triangle; these seven points determine only three ordinary lines. The configuration in which these three ordinary lines are replaced by a single line cannot be realized in the Euclidean plane, but forms a finite projective space known as the Fano plane. Because of this connection, the Kelly–Moser example has also been called the non-Fano configuration. The other counterexample, due to McKee, consists of two regular pentagons joined edge-to-edge together with the midpoint of the shared edge and four points on the line at infinity in the projective plane; these 13 points have among them 6 ordinary lines. Modifications of Böröczky's construction lead to sets of odd numbers of points with 3⌊n/4⌋ ordinary lines.
Csima and Sawyer proved that t_2(n) ≥ ⌈6n/13⌉ except when n is seven. Asymptotically, this formula is already 12/13 ≈ 92.3% of the proven n/2 upper bound. The n = 7 case is an exception because otherwise the Kelly–Moser construction would be a counterexample; their construction shows that t_2(7) ≤ 3. However, were the Csima–Sawyer bound valid for n = 7, it would claim that t_2(7) ≥ 4.
A closely related result is Beck's theorem, stating a tradeoff between the number of lines with few points and the number of points on a single line.
Ben Green and Terence Tao showed that for all sufficiently large point sets (that is, n > n_0 for some suitable choice of n_0), the number of ordinary lines is indeed at least n/2. Furthermore, when n is odd, the number of ordinary lines is at least 3⌊n/4⌋ − C, for some constant C. Thus, the constructions of Böröczky for even and odd n (discussed above) are best possible. Minimizing the number of ordinary lines is closely related to the orchard-planting problem of maximizing the number of three-point lines, which Green and Tao also solved for all sufficiently large point sets. In the dual setting, where one is looking for ordinary points, one can consider the minimum number of ordinary points in an arrangement of pseudolines. In this context, the Csima–Sawyer lower bound is still valid, though it is not known whether the Green and Tao asymptotic bound still holds.
The number of connecting lines
As Paul Erdős observed, the Sylvester–Gallai theorem immediately implies that any set of n points that are not collinear determines at least n different lines. This result is known as the De Bruijn–Erdős theorem. As a base case, the result is clearly true for n = 3. For any larger value of n, the result can be reduced from n points to n − 1 points, by deleting an ordinary line and one of the two points on it (taking care not to delete a point for which the remaining subset would lie on a single line). Thus, it follows by mathematical induction. The example of a near-pencil, a set of n − 1 collinear points together with one additional point that is not on the same line as the other points, shows that this bound is tight.
Generalizations
The Sylvester–Gallai theorem has been generalized to colored point sets in the Euclidean plane, and to systems of points and lines defined algebraically or by distances in a metric space. In general, these variations of the theorem consider only finite sets of points, to avoid examples like the set of all points in the Euclidean plane, which does not have an ordinary line.
Colored points
A variation of Sylvester's problem, posed in the mid-1960s by Ronald Graham and popularized by Donald J. Newman, considers finite planar sets of points (not all in a line) that are given two colors, and asks whether every such set has a line through two or more points that are all the same color. In the language of sets and families of sets, an equivalent statement is that the family of the collinear subsets of a finite point set (not all on one line) cannot have Property B. A proof of this variation was announced by Theodore Motzkin but never published; the first published proof appeared later.
Non-real coordinates
Just as the Euclidean plane or projective plane can be defined by using real numbers for the coordinates of their points (Cartesian coordinates for the Euclidean plane and homogeneous coordinates for the projective plane), analogous abstract systems of points and lines can be defined by using other number systems as coordinates. The Sylvester–Gallai theorem does not hold for geometries defined in this way over finite fields: for some finite geometries defined in this way, such as the Fano plane, the set of all points in the geometry has no ordinary lines.
The Sylvester–Gallai theorem also does not directly apply to geometries in which points have coordinates that are pairs of complex numbers or quaternions, but these geometries have more complicated analogues of the theorem. For instance, in the complex projective plane there exists a configuration of nine points, Hesse's configuration (the inflection points of a cubic curve), in which every line is non-ordinary, violating the Sylvester–Gallai theorem. Such a configuration is known as a Sylvester–Gallai configuration, and it cannot be realized by points and lines of the Euclidean plane. Another way of stating the Sylvester–Gallai theorem is that whenever the points of a Sylvester–Gallai configuration are embedded into a Euclidean space, preserving collinearities, the points must all lie on a single line, and the example of the Hesse configuration shows that this is false for the complex projective plane. However, Kelly proved a complex-number analogue of the Sylvester–Gallai theorem: whenever the points of a Sylvester–Gallai configuration are embedded into a complex projective space, the points must all lie in a two-dimensional subspace. Equivalently, a set of points in three-dimensional complex space whose affine hull is the whole space must have an ordinary line, and in fact must have a linear number of ordinary lines. Similarly, it has been shown that whenever a Sylvester–Gallai configuration is embedded into a space defined over the quaternions, its points must lie in a three-dimensional subspace.
Matroids
Every set of points in the Euclidean plane, and the lines connecting them, may be abstracted as the elements and flats of a rank-3 oriented matroid. The points and lines of geometries defined using other number systems than the real numbers also form matroids, but not necessarily oriented matroids. In this context, the result of lower-bounding the number of ordinary lines can be generalized to oriented matroids: every rank-3 oriented matroid with n elements has at least 3n/7 two-point lines, or equivalently every rank-3 matroid with fewer two-point lines must be non-orientable. A matroid without any two-point lines is called a Sylvester matroid. Relatedly, the Kelly–Moser configuration with seven points and only three ordinary lines forms one of the forbidden minors for GF(4)-representable matroids.
Distance geometry
Another generalization of the Sylvester–Gallai theorem to arbitrary metric spaces was conjectured by Chvátal and proved by Chen. In this generalization, a triple of points in a metric space is defined to be collinear when the triangle inequality for these points is an equality, and a line is defined from any pair of points by repeatedly including additional points that are collinear with points already added to the line, until no more such points can be added. The generalization of Chvátal and Chen states that every finite metric space has a line that contains either all points or exactly two of the points.
Notes
References
External links
Proof presentation by Terence Tao at the 2013 Minerva Lectures
Euclidean plane geometry
Theorems in discrete geometry
Matroid theory
Articles containing proofs | Sylvester–Gallai theorem | [
"Mathematics"
] | 4,235 | [
"Articles containing proofs",
"Euclidean plane geometry",
"Combinatorics",
"Theorems in discrete mathematics",
"Theorems in geometry",
"Theorems in discrete geometry",
"Planes (geometry)",
"Matroid theory"
] |
1,052,729 | https://en.wikipedia.org/wiki/Sear%20%28firearm%29 | In a firearm, the sear is the part of the trigger mechanism that holds the hammer, striker, or bolt back until the correct amount of pressure has been applied to the trigger, at which point the hammer, striker, or bolt is released to discharge the weapon. The sear may be a separate part or can be a surface incorporated into the trigger. Sear mechanisms are also frequently employed in archery release aids.
Description
As one firearms manufacturer notes:
Sear: A sharp bar, resting in a notch (or in British: "bent") in a hammer (or in British: "tumbler"), holding the hammer back under the tension of the mainspring. When the trigger is pulled, the sear moves out of its notch, releasing the hammer and firing the gun.
The term "sear" is sometimes incorrectly used to describe a complete trigger group.
Within a trigger group, any number of sears may exist. For example, a Ruger Blackhawk single-action revolver contains one for releasing the hammer. A Ruger Redhawk double/single-action revolver contains two, one for single-action release and the other for double-action release. A Browning BLR rifle contains three sears, all used simultaneously for hammer release. On many select-fire weapons, two sears exist, one for semi-automatic fire and the second for full-automatic fire. In this case, the selector switch disengages one over the other.
Trigger sears are a key component for trigger pull characteristics. Larger sears create creep while shorter ones produce a crisp pull. Aftermarket trigger companies, such as Bold, Timney, and Jewell, produce products in which sear contact is adjustable for personal preference. When a gunsmith does a "trigger job" to improve the quality and release of a trigger pull, most often the work includes modifying the sear, such as polishing, lapping, etc.
The sear on many firearms is often connected to a disconnector, which, after a cycle of semi-automatic fire has proceeded, keeps the hammer in place until the trigger is released and the sear takes over. Many firearms, such as the M1911 pistol, use a notch in the slide of the handgun that the top end of the disconnector returns to after the trigger is released. When the trigger is still under pressure by the firearm operator, the disconnector will not retract to its resting position. On other handguns, such as the Series 80 version of the M1911, a firing pin block acts as an internal safety, which is disengaged by the disconnector after the trigger is pulled. However, because of the spring tension placed on the disconnector by the firing pin block, the weight of the trigger pull is significantly increased.
Trigger pull is related to the interaction of the sear with the trigger and the spring. It can be measured, regulated and adjusted, but it is a complicated mechanical problem.
History
The sear has been found on early weapons such as the crossbow. The term may be related to the French verb serrer, "to grip", and the noun serre, "claw, talon, grasp." The term appears in Hamlet: "the Clown shall make those laugh whose lungs are tickled o'th' sear" (i.e. those who have a 'hair-trigger' laugh reaction).
See also
Notes
Bibliography
Guns by Dudley Pope, 1969, Hamlyn Publishing Group, Ltd.
External links
Animation of a M1911 firing sequence
Firearm components | Sear (firearm) | [
"Technology"
] | 726 | [
"Firearm components",
"Components"
] |
1,052,884 | https://en.wikipedia.org/wiki/Tricycle%20landing%20gear | Tricycle gear is a type of aircraft undercarriage, or landing gear, that is arranged in a tricycle fashion. The tricycle arrangement has one or more nose wheels in a single front undercarriage and two or more main wheels slightly aft of the center of gravity. Tricycle gear aircraft are the easiest for takeoff, landing and taxiing, and consequently the configuration is now the most widely used on aircraft.
History
Several early aircraft had primitive tricycle gear, notably very early Antoinette planes and the Curtiss Pushers of the pre-World War I Pioneer Era of aviation. Waldo Waterman's 1929 tailless Whatsit was one of the first to have a steerable nose wheel.
In 1956, Cessna introduced sprung-steel tricycle landing gear on the Cessna 172. Their marketing department described this as "Land-O-Matic" to imply that these aircraft were much easier to land than tailwheel aircraft.
Tricycle gear and taildraggers compared
Tricycle gear is essentially the reverse of conventional landing gear or taildragger. On the ground, tricycle aircraft have a visibility advantage for the pilot, as the nose of the aircraft is level, whereas the high nose of the taildragger can block the view ahead. Tricycle gear aircraft are much less liable to 'nose over', as can happen if a taildragger hits a bump or has the brakes heavily applied. In a nose-over, the aircraft's tail rises and the propeller strikes the ground, causing damage. The tricycle layout reduces the possibility of a ground loop, because the main gear lies behind the center of mass. However, tricycle aircraft can be susceptible to wheel-barrowing. Nosewheel-equipped aircraft are also easier to handle on the ground in high winds, owing to the wing's negative angle of attack. Student pilots are able to safely master nosewheel-equipped aircraft more quickly.
Tricycle gear aircraft are easier to land because the attitude required to land on the main gear is the same as that required in the flare, and they are less vulnerable to crosswinds. As a result, the majority of modern aircraft are fitted with tricycle gear. Almost all jet-powered aircraft have been fitted with tricycle landing gear to prevent the blast of hot, high-speed gases from causing damage to the ground surface, in particular runways and taxiways. The few exceptions have included the Yakovlev Yak-15, the Supermarine Attacker, and prototypes such as the Heinkel He 178 that pioneered jet flight, the first four prototypes (V1 through V4) of the Messerschmitt Me 262, and the Nene powered version of the Vickers VC.1 Viking. Outside of the United States – where the tricycle undercarriage had solidly begun to take root with its aircraft firms before that nation's World War II involvement at the end of 1941 – the Heinkel firm in World War II Germany began building airframe designs meant to use tricycle undercarriage systems from their beginnings, as early as late 1939 with the Heinkel He 280 pioneering jet fighter demonstrator series, and the unexpectedly successful Heinkel He 219 twin-engined night fighter of 1942 origin.
The taildragger configuration has its own advantages, and is arguably more suited to rougher landing strips. The tailwheel makes the plane sit naturally in a nose-up attitude when on the ground, which is useful for operations on unpaved gravel surfaces where debris could damage the propeller. The tailwheel also transmits loads to the airframe in a way much less likely to cause airframe damage when operating on rough fields. The small tailwheel is much lighter and much less vulnerable than a nosewheel. Also, a fixed-gear taildragger exhibits less interference drag and form drag in flight than a fixed-gear tricycle aircraft whose nosewheel may sit directly in the propeller's slipstream. Tailwheels are smaller and cheaper to buy and to maintain. Most tailwheel aircraft are lower in overall height and thus may fit in lower hangars. Tailwheel aircraft are also more suitable for fitting with skis in wintertime.
References
Aircraft configurations | Tricycle landing gear | [
"Engineering"
] | 835 | [
"Aircraft configurations",
"Aerospace engineering"
] |
1,053,016 | https://en.wikipedia.org/wiki/Paul%20Scherrer%20Institute | The Paul Scherrer Institute (PSI) is a multi-disciplinary research institute for natural and engineering sciences in Switzerland. It is located in the Canton of Aargau in the municipalities Villigen and Würenlingen on either side of the River Aare, and covers an area over 35 hectares in size. Like ETH Zurich and EPFL, PSI belongs to the ETH Domain of the Swiss Confederation. The PSI employs around 3000 people. It conducts basic and applied research in the fields of matter and materials, human health, and energy and the environment. About 37% of PSI's research activities focus on material sciences, 24% on life sciences, 19% on general energy, 11% on nuclear energy and safety, and 9% on particle physics.
PSI develops, builds and operates large and complex research facilities and makes them available to the national and international scientific communities. In 2017, for example, more than 2,500 researchers from 60 different countries came to PSI to take advantage of the concentration of large-scale research facilities in the same location, which is unique worldwide. About 1,900 experiments are conducted each year at the approximately 40 measuring stations in these facilities.
In recent years, the institute has been one of the largest recipients of money from the Swiss lottery fund.
History
The institute, named after the Swiss physicist Paul Scherrer, was created in 1988 when EIR (Eidgenössisches Institut für Reaktorforschung, Swiss Federal Institute for Reactor Research, founded in 1960) was merged with SIN (Schweizerisches Institut für Nuklearphysik, Swiss Institute for Nuclear Research, founded in 1968). The two institutes on opposite sides of the River Aare served as national centres for research: one focusing on nuclear energy and the other on nuclear and particle physics. Over the years, research at the centres expanded into other areas, and nuclear and reactor physics accounts for just 11 percent of the research work at PSI today. Since Switzerland decided in 2011 to phase out nuclear energy, this research has primarily been concerned with questions of safety, such as how to store radioactive waste safely in a deep geological repository.
Since 1984, PSI has operated (initially as SIN) the centre for Proton Therapy for treating patients with eye melanomas and other tumours located deep inside the body. More than 9,000 patients have been treated there until now (status 2020).
The institute is also active in space research. For example, in 1990 PSI engineers built the detector of the EUVITA telescope for the Russian satellite Spectrum X-G, and later also supplied NASA and ESA with detectors to analyse radiation in space. In 1992, physicists used accelerator mass spectrometry and radiocarbon methods to determine the age of Ötzi, the mummy found in a glacier in the Ötztal Alps a year earlier, from small samples of just a few milligrams of bone, tissue and grass. They were analysed at the TANDEM accelerator on the Hönggerberg near Zurich, which at the time was jointly operated by ETH Zurich and PSI.
In 2009, the Indian-born British structural biologist Venkatraman Ramakrishnan was awarded the Nobel Prize in Chemistry for, among other things, his research at the Synchrotron Light Source Switzerland (SLS). The SLS is one of PSI's four large-scale research facilities. His investigations there enabled Ramakrishnan to clarify what ribosomes look like and how they function at the level of individual molecules. Using the information encoded in the genes, ribosomes produce proteins that control many chemical processes in living organisms.
In 2010, an international team of researchers at PSI used negative muons to perform a new measurement of the proton and found that its radius is significantly smaller than previously thought: 0.84184 femtometers instead of 0.8768. According to press reports, this result was not only surprising, it could also call previous models in physics into question. The measurements were only possible with PSI's 590 MeV proton accelerator HIPA because its secondarily generated muon beam is the only one worldwide that is intense enough to conduct the experiment.
In 2011, researchers from PSI and elsewhere succeeded in deciphering the basic structure of the protein molecule rhodopsin with the help of the SLS. This optical pigment acts as a kind of light sensor and plays a decisive role in the process of sight.
A so-called ‘barrel pixel detector’ built at PSI was a central element in the CMS detector at the Geneva nuclear research centre CERN, and was thus involved in detecting the Higgs boson. This discovery, announced on 4 July 2012, was awarded the Nobel Prize in Physics one year later.
In January 2016, 20 kilograms of plutonium were taken from PSI to the USA. According to a newspaper report, the federal government had a secret plutonium storage facility in which the material had been kept since the 1960s to construct an atomic bomb as planned at the time. The Federal Council denied this, maintaining the plutonium-239 content of the material was below 92 percent, which meant it was not weapons-grade material. The idea was rather to use the material obtained from reprocessed fuel rods of the Diorit research reactor, which was operated from 1960 to 1977, to develop a new generation of fuel element types for nuclear power plants. This, however, never happened. By the time it was decided, in 2011, to phase out nuclear power, it had become clear that there was no further use for the material in Switzerland. The Federal Council decided at the Nuclear Security Summit in 2014 to close the Swiss plutonium storage facility. A bilateral agreement between the two countries meant the plutonium could then be transferred to the US for further storage.
In July 2017, the three-dimensional alignment of magnetization inside a three-dimensional magnetic object was investigated and visualized with the help of the SLS without affecting the material. The technology is expected to be useful in developing better magnets, for example for motors or data storage.
Joël François Mesot, the long-standing Director of PSI (2008 to 2018), was elected President of ETH Zurich at the end of 2018. His post was temporarily taken over by the physicist and PSI Chief of Staff Thierry Strässle from January 2019. Since 1 April 2020, the physicist Christian Rüegg has been Director of PSI. He was previously head of the PSI research division Neutrons and Muons.
Numerous PSI spin-off companies have been founded over the years to make the research findings available to the wider society. The largest spin-off, with 120 employees, is the DECTRIS AG, founded in 2006 in nearby Baden, which specializes in the development and marketing of X-ray detectors. SwissNeutronics AG in Klingnau, which sells optical components for neutron research facilities, was founded as early as 1999. Several recent PSI offshoots, such as the manufacturer of metal-organic frameworks novoMOF or the drug developer leadXpro, have settled close to PSI in the Park Innovaare, which was founded in 2015 with the support of several companies and Canton Aargau.
Research Areas and Departments
PSI develops, builds and operates several accelerator facilities, e.g. a 590 MeV high-current cyclotron, which in normal operation supplies a beam current of about 2.2 mA. PSI also operates four large-scale research facilities: a synchrotron light source (SLS), which is particularly brilliant and stable, a spallation neutron source (SINQ), a muon source (SμS) and an X-ray free-electron laser (SwissFEL). This makes PSI currently (2020) the only institute in the world to provide the four most important probes for researching the structure and dynamics of condensed matter (neutrons, muons and synchrotron radiation) on one campus for the international user community. In addition, HIPA's target facilities also produce pions that feed the muon source, and the Ultracold Neutron source UCN produces very slow, ultracold neutrons. All these particle types are used for research in particle physics.
Research at PSI is conducted with the help of these facilities. Its focus areas include:
Matter and Material
All the materials humans work with are made up of atoms. The interaction of atoms and their arrangement determine the properties of a material. Most of the researchers in the field of matter and materials at PSI want to find out more about how the internal structure of different materials relates to their observable properties. Fundamental research in this area contributes to the development of new materials with a wide range of applications, for example in electrical engineering, medicine, telecommunications, mobility, new energy storage systems, quantum computers and spintronics. The phenomena investigated include superconductivity, ferro- and antiferromagnetism, spin fluids and topological insulators.
Neutrons are intensively used for materials research at PSI because they enable unique and non-destructive access to the interior of materials on a scale ranging from the size of atoms to objects a centimetre long. They therefore serve as ideal probes for investigating fundamental and applied research topics, such as quantum spin systems and their potential for application in future computer technologies, the functionalities of complex lipid membranes and their use for the transport and targeted release of drug substances, as well as the structure of novel materials for energy storage as key components in intelligent energy networks.
In particle physics, PSI researchers are investigating the structure and properties of the innermost layers of matter and what holds them together. Muons, pions and ultra-cold neutrons are used to test the Standard Model of elementary particles, to determine fundamental natural constants and to test theories that go beyond the Standard Model. Particle physics at PSI holds many records, including the most precise determination of the coupling constants of the weak interaction and the most accurate measurement of the charge radius of the proton. Some experiments aim to find effects that are not foreseen in the Standard Model, but which could correct inconsistencies in the theory or solve unexplained phenomena from astrophysics and cosmology. Their results so far agree with the Standard Model. Examples include the upper limit measured in the MEG experiment of the hypothetical decay of positive muons into positrons and photons as well as that of the permanent electric dipole moment for neutrons.
Muons are not only useful in particle physics, but also in solid-state physics and materials science. The muon spin spectroscopy method (μSR) is used to investigate the fundamental properties of magnetic and superconducting materials as well as of semiconductors, insulators and semiconductor structures, including technologically relevant applications such as for solar cells.
Energy and the Environment
PSI researchers are addressing all aspects of energy use with the aim to make energy supplies more sustainable. Focus areas include: new technologies for renewable energies, low-loss energy storage, energy efficiency, low-pollution combustion, fuel cells, experimental and model-based assessment of energy and material cycles, environmental impacts of energy production and consumption, and nuclear energy research, in particular reactor safety and waste management.
PSI operates the ESI (Energy System Integration) experimental platform to answer specific questions on seasonal energy storage and sector coupling. The platform can be used in research and industry to test promising approaches to integrating renewable energies into the energy system – for example, storing excess electricity from solar or wind power in the form of hydrogen or methane.
At PSI a method for extracting significantly more methane gas from biowaste was developed and successfully tested with the help of the ESI platform together with the Zurich power company Energie 360°. The team was awarded the Watt d'Or 2018 of the Swiss Federal Office of Energy.
A platform for catalyst research is also maintained at PSI. Catalysis is a central component in various energy conversion processes, for example in fuel cells, water electrolysis and the methanation of carbon dioxide.
To test the pollutant emissions of various energy production processes and the behaviour of the corresponding substances in the atmosphere, PSI also operates a smog chamber.
Another area of research at PSI concerns the effects of energy production on the atmosphere, both locally and further afield, including in the Alps, in the polar regions of the Earth and in China.
The Nuclear Energy and Safety Division is dedicated to maintaining a good level of nuclear expertise and thus to training scientists and engineers in nuclear energy. For example, PSI maintains one of the few laboratories in Europe for investigating fuel rods in commercial reactors. The division works closely with ETH Zurich, EPFL and the University of Bern, using, for example, their high-performance computers or the CROCUS research reactor at EPFL.
Human health
PSI is one of the leading institutions worldwide in the research and application of proton therapy for the treatment of cancer. Since 1984, the Center for Proton Therapy has been successfully treating cancer patients with a special form of radiation therapy. To date, more than 7500 patients with ocular tumours have been irradiated (status 2020). The success rate for eye therapy using the OPTIS facility is over 98 percent.
In 1996, an irradiation unit (Gantry 1) was equipped for the first time to use the so-called spot-scanning proton technique developed at PSI. With this technique, tumours deep inside the body are scanned three-dimensionally with a proton beam about 5 to 7 mm in width. By superimposing many individual proton spots – about 10,000 spots per litre volume – the tumour is evenly exposed to the necessary radiation dose, which is monitored individually for each spot. This allows an extremely precise, homogeneous irradiation that is optimally adapted to the usually irregular shape of the tumour. The technique enables as much as possible of the surrounding healthy tissue to be spared. The first gantry was in operation for patients from 1996 to the end of 2018. In 2013, the second Gantry 2, developed at PSI, went into operation, and in mid-2018 another treatment station, Gantry 3, was opened.
In the field of radiopharmacy, PSI's infrastructure covers the entire spectrum. In particular, PSI researchers are tackling very small tumours distributed throughout the body. These cannot be treated with the usual radiotherapy techniques. New medically applicable radionuclides have, however, been produced with the help of the proton accelerators and the neutron source SINQ at PSI. When combined for therapy with special biomolecules (antibodies), therapeutic molecules can be formed to selectively and specifically detect tumour cells. These are then labelled with a radioactive isotope. Its radiation can be localized with imaging techniques such as SPECT or PET, which enables the diagnosis of tumours and their metastases. Moreover, it can be dosed so that it also destroys the tumour cells. Several such radioactive substances have been developed at PSI. They are currently being tested in clinical trials, in close cooperation with universities, clinics and the pharmaceutical industry. PSI also supplies local hospitals with radiopharmaceuticals if required.
Since the opening of the Synchrotron Light Source Switzerland (SLS), structural biology has been a further focus of research in the field of human health. Here, the structure and function of biomolecules are being investigated – preferably at atomic resolution. The PSI researchers are primarily concerned with proteins. Every living cell needs a myriad of these molecules in order, for example, to be able to metabolise, receive and transmit signals or to divide. The aim is to understand these life processes better and thus to be able to treat or prevent diseases more effectively.
For example, PSI is investigating the structure of microtubules, filamentous structures which, among other things, pull apart chromosomes during cell division. They consist of long protein chains. When chemotherapy is used to treat cancer, it disturbs the assembly or breakdown of these chains so that the cancer cells can no longer divide. Researchers are closely observing the structure of these proteins and how they change to find out exactly where cancer drugs have to attack the microtubules. With the help of PSI's SwissFEL free-electron X-ray laser, which was inaugurated in 2016, researchers have been able to analyse dynamic processes in biomolecules with extremely high time resolution – less than a trillionth of a second (picosecond). For example, they have detected how certain proteins in the photoreceptors of the retina of our eyes are activated by light.
Accelerators and large research facilities at PSI
Proton accelerator facility
While PSI's proton accelerator, which went into service in 1974, was primarily used in the early days for elementary particle physics, today the focus is on applications for solid-state physics, radiopharmaceuticals and cancer therapy. Since it started operating, it has been constantly developed further, and its beam current today is as much as 2.4 mA, which is 24 times higher than the initial 100 μA. This is why the facility is now considered a high-performance proton accelerator, or HIPA (High Intensity Proton Accelerator) for short. Basically, it consists of three accelerators in series: the Cockcroft-Walton pre-accelerator, the Injector-2 cyclotron, and the Ring Cyclotron. They accelerate the protons to around 80 percent of the speed of light.
Proton source and Cockcroft-Walton
In a proton source based on cyclotron resonance, microwaves are used to strip electrons from hydrogen atoms. What remains are the hydrogen atomic nuclei, each consisting of only one proton. These protons leave the source with a potential of 60 kilovolts and are then subjected to a further voltage of 810 kilovolts in an accelerator tube. Both voltages are supplied by a Cockcroft-Walton accelerator. With a total of 870 kilovolts, the protons are accelerated to a speed of 46 million km/h or 4 percent of the speed of light. The protons are then fed into the Injector-2.
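As a rough cross-check of these figures, the proton speed can be computed from its kinetic energy with the usual relativistic relation; the short Python sketch below (assuming a proton rest energy of 938.27 MeV) reproduces the quoted 4 percent of the speed of light and roughly 46 million km/h.

```python
# Sketch: relativistic speed of a proton for a given kinetic energy.
# Assumes a proton rest energy of 938.27 MeV; figures are for illustration only.
import math

PROTON_REST_ENERGY_MEV = 938.27
C_M_PER_S = 299_792_458.0

def beta_from_kinetic_energy(t_mev: float) -> float:
    """Return v/c for a proton with kinetic energy t_mev (in MeV)."""
    gamma = 1.0 + t_mev / PROTON_REST_ENERGY_MEV   # total energy / rest energy
    return math.sqrt(1.0 - 1.0 / gamma ** 2)

beta = beta_from_kinetic_energy(0.870)              # 870 keV after the Cockcroft-Walton stage
print(f"v/c ≈ {beta:.3f}")                          # ≈ 0.043, i.e. about 4 percent of c
print(f"v ≈ {beta * C_M_PER_S * 3.6 / 1e6:.0f} million km/h")   # ≈ 46 million km/h
```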
Injector-1
With Injector-1, operating currents of 170 μA and peak currents of 200 μA could be reached. It was also used for low energy experiments, for OPTIS eye therapy and for the LiSoR experiment in the MEGAPIE project. Since December 1, 2010, this ring accelerator has been out of operation.
Injector-2
The Injector-2, which was commissioned in 1984 and developed by what was then SIN, replaced the Injector-1 as the injection machine for the 590 MeV ring cyclotron. Initially, it was possible to operate Injector-1 and Injector-2 alternately, but now only Injector-2 is used to feed the proton beam into the ring. The new cyclotron has enabled an increase in the beam current from 1 to 2 mA, which was the absolute record value for the 1980s. Today, the injector-2 delivers a beam current of ≈ 2.2 mA in routine operation and 2.4 mA in high current operation at 72 MeV, which is about 38 percent of the speed of light.
Originally, two resonators were operated at 150 MHz in flat-top mode to enable a clear separation of the proton orbits, but these are now also used for acceleration. Part of the extracted 72 MeV proton beam can be split off for isotope production, while the main part is fed into the Ring Cyclotron for further acceleration.
Ring
Like the Injector-2, the Ring Cyclotron, which has a circumference of about 48 m, went into operation in 1974. It was specially developed at SIN and is at the heart of the PSI proton accelerator facilities. The protons are accelerated to 80 percent of the speed of light on the approximately 4 km long track, which the protons cover inside the ring in 186 laps. This corresponds to a kinetic energy of 590 MeV. Only three such rings exist worldwide, namely: TRIUMF in Vancouver, Canada; LAMPF in Los Alamos, USA; and the one at PSI. TRIUMF has only reached beam currents of 500 μA and LAMPF 1 mA.
In addition to the four original cavities, a smaller fifth cavity was added in 1979. It is operated at 150 megahertz as a flat-top cavity, and has enabled a significant increase in the number of extracted particles. Since 2008 all the old aluminium cavities of the Ring Cyclotron have been replaced with new copper cavities. These allow higher voltage amplitudes and thus a greater acceleration of the protons per revolution. The number of revolutions of the protons in the cyclotron could thus be reduced from approx. 200 to 186, and the distance travelled by the protons in the cyclotron decreased from 6 km to 4 km. With a beam current of 2.2 mA, this proton facility at PSI is currently the most powerful continuous particle accelerator in the world. The 1.3 MW proton beam is directed towards the muon source (SμS) and the spallation neutron source (SINQ).
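The quoted beam power follows directly from the beam energy and current, since the power in watts equals the kinetic energy per proton in electron volts multiplied by the current in amperes; a small illustrative Python check using the figures above:

```python
# Sketch: average beam power and speed of the 590 MeV, 2.2 mA proton beam.
# Beam power (W) = kinetic energy per proton (eV) x beam current (A); figures from the text.
import math

ENERGY_MEV = 590.0                    # kinetic energy per proton
CURRENT_A = 2.2e-3                    # beam current
PROTON_REST_ENERGY_MEV = 938.27       # assumed proton rest energy

power_w = ENERGY_MEV * 1e6 * CURRENT_A
print(f"beam power ≈ {power_w / 1e6:.2f} MW")     # ≈ 1.30 MW

gamma = 1.0 + ENERGY_MEV / PROTON_REST_ENERGY_MEV
beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
print(f"v/c ≈ {beta:.2f}")                         # ≈ 0.79, i.e. roughly 80 percent of c
```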
Swiss Muon Source (SμS)
In the middle of the large experimental hall, the proton beam of the Ring Cyclotron collides with two targets – rings of carbon. During the collisions of the protons with the atomic carbon nuclei, pions are first formed and then decay into muons after about 26 billionths of a second. Magnets then direct these muons to instruments used in materials science and particle physics. Thanks to the Ring Cyclotron's enormously high proton current, the muon source is able to generate the world's most intense muon beams. These enable researchers to conduct experiments in particle physics and materials science that cannot be carried out anywhere else.
The Swiss Muon Source (SμS) has seven beamlines that scientists can use to investigate various aspects of modern physics. Some materials scientists use them for muon spin spectroscopy experiments. PSI is the only place in the world where a muon beam of sufficient intensity is available at a very low energy of only a few kiloelectron volts – thanks to the Muon Source's high muon intensity and a special process. The resulting muons are slow enough to be used to analyse thin layers of material and surfaces. Six measuring stations (FLAME (from 2021), DOLLY, GPD, GPS, HAL-9500, and LEM) with instruments for a wide range of applications are available for such investigations.
Particle physicists are using some of the beamlines to perform high-precision measurements to test the limits of the Standard Model.
Swiss Spallation Neutron Source (SINQ)
The neutron source SINQ, which has been in operation since 1996, was the first, and is still the strongest, of its kind. It delivers a continuous neutron flux of about 10¹⁴ n·cm⁻²·s⁻¹. In SINQ the protons from the large particle accelerator strike a lead target and knock the neutrons out of the lead nuclei, making them available for experiments. In addition to thermal neutrons, a moderator made of liquid deuterium also enables the production of slow neutrons, which have a lower energy spectrum.
The MEGAPIE Target (Megawatt Pilot-Experiment) came into operation in summer 2006. By replacing the solid target with a target made of a lead-bismuth eutectic, the neutron yield could be increased by about another 80%.
Since it would be very costly to dispose of the MEGAPIE target, PSI decided in 2009 not to produce another such target and instead to develop the solid target further as it had already proven its worth. Based on the findings from the MEGAPIE project, it was possible to obtain almost as large an increase in neutron yield for operation with a solid target.
SINQ was one of the first facilities to use specially developed optical guide systems to transport slow neutrons. Metal-coated glass conduits guide neutrons over longer distances (a few tens of metres) by means of total reflection, analogous to the light guidance in glass fibres, with a low loss of intensity. The efficiency of these neutron guides has steadily increased with advances in manufacturing technology. This is why PSI decided to carry out a comprehensive upgrade in 2019. When SINQ goes back into operation in summer 2020, it will be able to provide, on average, five times more neutrons for experiments, and in a special case, even 30 times more.
SINQ's 15 instruments are not only used for PSI research projects but are also available for national and international users.
Ultracold Neutron Source (UCN)
Since 2011, PSI has also been operating a second spallation neutron source for the generation of ultracold neutrons (UCN). Unlike SINQ, it is pulsed and uses HIPA's full beam, but normally only for 8 seconds every 5 minutes. The design is similar to that of SINQ. In order to cool down the neutrons, however, it uses frozen deuterium at a temperature of 5 Kelvin (corresponding to −268 degrees Celsius) as a cold moderator. The UCN generated can be stored in the facility and observed for a few minutes in experiments.
COMET cyclotron
This superconducting 250 MeV cyclotron has been in operation for proton therapy since 2007 and provides the beam for treating tumours in cancer patients. It was the first superconducting cyclotron worldwide to be used for proton therapy. Previously, part of the proton beam from the Ring Cyclotron was split off for this purpose, but since 2007 the medical facility has been producing its own proton beam independently, which supplies several irradiation stations for therapy. Other components of the facility, the peripheral equipment and the control systems have also been improved in the meantime, so that today the facility is available over 98 percent of the time with more than 7000 operating hours per year.
Swiss Light Source (SLS)
The Swiss Light Source (SLS), an electron synchrotron, has been in operation since 1 August 2001. It works like a kind of combined X-ray machine and microscope to screen a wide variety of substances. In the circular structure, the electrons move on a circular path 288 m in circumference, emitting synchrotron radiation in a tangential direction. A total of 350 magnets hold the electron beam on its course and focus it. Acceleration cavities restore the energy the electrons lose as synchrotron radiation, keeping the beam energy constant.
Since 2008, the SLS has been the accelerator with the thinnest electron beam in the world. PSI researchers and technicians have been working on this for eight years and have repeatedly adjusted each of the many magnets. The SLS offers a very broad spectrum of synchrotron radiation from infrared light to hard X-rays. This enables researchers to take microscopic pictures inside objects, materials and tissue to, for example, improve materials or develop drugs.
In 2017, a new instrument at the SLS made it possible to look inside a computer chip for the first time without destroying it. Structures such as 45 nanometre narrow power lines and 34 nanometre high transistors became visible. This technology enables chip manufacturers to, for example, check whether their products comply with the specifications more easily.
Currently, under the working title "SLS 2.0", plans are being made to upgrade the SLS and thus create a fourth-generation synchrotron light source.
SwissFEL
The SwissFEL free-electron laser was officially opened on 5 December 2016 by the Federal Councillor Johann Schneider-Ammann. In 2018, the first beamline ARAMIS came into operation. The second beamline ATHOS is scheduled to follow in autumn 2020. Worldwide, only four comparable facilities are in operation.
Training Centre
The PSI Education Centre has over 30 years of experience in training and providing further education in technical and interdisciplinary fields. It trains over 3,000 participants annually.
The centre offers a wide range of basic and advanced training courses for both professionals and others working with ionising radiation or radioactive materials. The courses, in which participants acquire the relevant expertise, are recognised by the Federal Office of Public Health (FOPH) and the Swiss Federal Nuclear Safety Inspectorate (ENSI).
It also runs basic and advanced training courses for PSI's staff and interested individuals from the ETH Domain. Since 2015, courses on human resources development (such as conflict management, leadership workshops, communication and transferable skills) have also been held.
The quality of the PSI Education Centre is certified according to ISO 29990:2010.
Cooperation with industry
PSI holds about 100 active patent families in, for example, medicine, with investigation techniques for proton therapy against cancer or for the detection of prions, the cause of mad cow disease. Other patent families are in the field of photoscience, with special lithography processes for structuring surfaces, in the environmental sciences for recycling rare earths, for catalysts or for the gasification of biomass, in the materials sciences and in other fields. PSI maintains its own technology transfer office for patents.
Patents have, for example, been granted for detectors used in high-performance X-ray cameras developed for the Swiss Synchrotron Light Source SLS, which can be used to investigate materials at the atomic level. These provided the basis for founding the company DECTRIS, the largest spin-off to date to emerge from PSI. In 2017, the Lausanne-based company Debiopharm licensed the active substance 177Lu-PSIG-2, which was developed at the Centre for Radiopharmaceutical Sciences at PSI. This substance is effective in treating a type of thyroid cancer. It is to be further developed under the name DEBIO 1124 with the aim to have it approved and get it ready for market launch. Another PSI spin-off, GratXray, works with a method based on phase contrasts in lattice interferometry. The method was originally developed to characterize synchrotron radiation and is expected to become the gold standard in screening for breast cancer. The new technology has already been used in a prototype that PSI developed in collaboration with Philips.
See also
Science and technology in Switzerland
Swiss Innovation Park
Proton therapy
References
External links
PSI Homepage
Website of SLS
Website of SINQ
Website of SwissFEL
Proton therapy program
High-Intensity-Proton-Accelerators at PSI
ETH Domain
1988 establishments in Switzerland
Physics research institutes
Neutron facilities
Research institutes in Switzerland
Particle physics facilities
Accelerator physics
Synchrotron radiation
Institutes associated with CERN
Research institutes established in 1988 | Paul Scherrer Institute | [
"Physics"
] | 6,287 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
1,053,052 | https://en.wikipedia.org/wiki/Tetrahydrobiopterin | Tetrahydrobiopterin (BH4, THB), also known as sapropterin (INN), is a cofactor of the three aromatic amino acid hydroxylase enzymes, used in the degradation of amino acid phenylalanine and in the biosynthesis of the neurotransmitters serotonin (5-hydroxytryptamine, 5-HT), melatonin, dopamine, norepinephrine (noradrenaline), epinephrine (adrenaline), and is a cofactor for the production of nitric oxide (NO) by the nitric oxide synthases. Chemically, its structure is that of a (dihydropteridine reductase) reduced pteridine derivative (quinonoid dihydrobiopterin).
Tetrahydrobiopterin is available as a tablet for oral administration in the form of sapropterin dihydrochloride (BH4*2HCL). It was approved for use in the United States as a tablet in December 2007 and as a powder in December 2013. It was approved for use in the European Union in December 2008, Canada in April 2010, and Japan in July 2008. It is sold under the brand names Kuvan and Biopten. The typical cost of treating a patient with Kuvan is per year. BioMarin holds the patent for Kuvan until at least 2024, but Par Pharmaceutical has a right to produce a generic version by 2020.
Medical uses
Sapropterin is indicated in tetrahydrobiopterin deficiency caused by GTP cyclohydrolase I (GTPCH) deficiency, or 6-pyruvoyltetrahydropterin synthase (PTPS) deficiency. Also, BH4*2HCL is FDA approved for use in phenylketonuria (PKU), along with dietary measures. However, most people with PKU have little or no benefit from BH4*2HCL.
Adverse effects
The most common adverse effects, observed in more than 10% of people, include headache and a runny or blocked nose. Diarrhea and vomiting are also relatively common, seen in at least 1% of people.
Interactions
No interaction studies have been conducted. Because of its mechanism, tetrahydrobiopterin might interact with dihydrofolate reductase inhibitors like methotrexate and trimethoprim, and NO-enhancing drugs like nitroglycerin, molsidomine, minoxidil, and PDE5 inhibitors. Combination of tetrahydrobiopterin with levodopa can lead to increased excitability.
Functions
Tetrahydrobiopterin has multiple roles in human biochemistry. The major one is to convert amino acids such as phenylalanine, tyrosine, and tryptophan to precursors of dopamine and serotonin, major monoamine neurotransmitters. It works as a cofactor, being required for an enzyme's activity as a catalyst, mainly hydroxylases.
Cofactor for tryptophan hydroxylases
Tetrahydrobiopterin is a cofactor for tryptophan hydroxylase (TPH) for the conversion of L-tryptophan (TRP) to 5-hydroxytryptophan (5-HTP).
Cofactor for phenylalanine hydroxylase
Phenylalanine hydroxylase (PAH) catalyses the conversion of L-phenylalanine (PHE) to L-tyrosine (TYR). Therefore, a deficiency in tetrahydrobiopterin can cause a toxic buildup of L-phenylalanine, which manifests as the severe neurological issues seen in phenylketonuria.
Cofactor for tyrosine hydroxylase
Tyrosine hydroxylase (TH) catalyses the conversion of L-tyrosine to L-DOPA (DOPA), which is the precursor for dopamine. Dopamine is a vital neurotransmitter, and is the precursor of norepinephrine and epinephrine. Thus, a deficiency of BH4 can lead to systemic deficiencies of dopamine, norepinephrine, and epinephrine. In fact, one of the primary conditions that can result from GTPCH-related BH4 deficiency is dopamine-responsive dystonia; currently, this condition is typically treated with carbidopa/levodopa, which directly restores dopamine levels within the brain.
Cofactor for nitric oxide synthase
Nitric oxide synthase (NOS) catalyses the conversion of a guanidino nitrogen of L-arginine (L-Arg) to nitric oxide (NO). Among other things, nitric oxide is involved in vasodilation, which improves systematic blood flow. The role of BH4 in this enzymatic process is so critical that some research points to a deficiency of BH4 – and thus, of nitric oxide – as being a core cause of the neurovascular dysfunction that is the hallmark of circulation-related diseases such as diabetes. As a co-factor for nitric oxide synthase, tetrahydrobiopterin supplementation has shown beneficial results for the treatment of endothelial dysfunction in animal experiments and clinical trials, although the tendency of BH4 to become oxidized to BH2 remains a problem.
Cofactor for ether lipid oxidase
Ether lipid oxidase (alkylglycerol monooxygenase, AGMO) catalyses the conversion of 1-alkyl-sn-glycerol to 1-hydroxyalkyl-sn-glycerol.
History
Tetrahydrobiopterin was discovered to play a role as an enzymatic cofactor. The first enzyme found to use tetrahydrobiopterin is phenylalanine hydroxylase (PAH).
Biosynthesis and recycling
Tetrahydrobiopterin is biosynthesized from guanosine triphosphate (GTP) by three chemical reactions mediated by the enzymes GTP cyclohydrolase I (GTPCH), 6-pyruvoyltetrahydropterin synthase (PTPS), and sepiapterin reductase (SR).
BH4 can be oxidized by one or two electron reactions, to generate BH4 or BH3 radical and BH2, respectively. Research shows that ascorbic acid (also known as ascorbate or vitamin C) can reduce BH3 radical into BH4, preventing the BH3 radical from reacting with other free radicals (superoxide and peroxynitrite specifically). Without this recycling process, uncoupling of the endothelial nitric oxide synthase (eNOS) enzyme and reduced bioavailability of the vasodilator nitric oxide occur, creating a form of endothelial dysfunction. Ascorbic acid is oxidized to dehydroascorbic acid during this process, although it can be recycled back to ascorbic acid.
Folic acid and its metabolites seem to be particularly important in the recycling of BH4 and NOS coupling.
Research
Other than PKU studies, tetrahydrobiopterin has participated in clinical trials studying other approaches to solving conditions resultant from a deficiency of tetrahydrobiopterin. These include autism, depression, ADHD, hypertension, endothelial dysfunction, and chronic kidney disease. Experimental studies suggest that tetrahydrobiopterin regulates deficient production of nitric oxide in cardiovascular disease states, and contributes to the response to inflammation and injury, for example in pain due to nerve injury. A 2015 BioMarin-funded study of PKU patients found that those who responded to tetrahydrobiopterin also showed a reduction of ADHD symptoms.
Depression
In psychiatry, tetrahydrobiopterin has been hypothesized to be involved in the pathophysiology of depression, although evidence is inconclusive to date.
Autism
In 1997, a small pilot study was published on the efficacy of tetrahydrobiopterin (BH4) on relieving the symptoms of autism, which concluded that it "might be useful for a subgroup of children with autism" and that double-blind trials are needed, as are trials which measure outcomes over a longer period of time. In 2010, Frye et al. published a paper which concluded that it was safe, and also noted that "several clinical trials have suggested that treatment with BH4 improves ASD symptomatology in some individuals."
Cardiovascular disease
Since nitric oxide production is important in regulation of blood pressure and blood flow, thereby playing a significant role in cardiovascular diseases, tetrahydrobiopterin is a potential therapeutic target. In the endothelial cell lining of blood vessels, endothelial nitric oxide synthase is dependent on tetrahydrobiopterin availability. Increasing tetrahydrobiopterin in endothelial cells by augmenting the levels of the biosynthetic enzyme GTPCH can maintain endothelial nitric oxide synthase function in experimental models of disease states such as diabetes, atherosclerosis, and hypoxic pulmonary hypertension. However, treatment of people with existing coronary artery disease with oral tetrahydrobiopterin is limited by oxidation of tetrahydrobiopterin to the inactive form, dihydrobiopterin, with little benefit on vascular function.
Neuroprotection in prenatal hypoxia
Depletion of tetrahydrobiopterin occurs in the hypoxic brain and leads to toxin production. Preclinical studies in mice reveal that treatment with oral tetrahydrobiopterin therapy mitigates the toxic effects of hypoxia on the developing brain, specifically improving white matter development in hypoxic animals.
Programmed cell death
GTPCH (GCH1) and tetrahydrobiopterin were found to have a secondary role protecting against cell death by ferroptosis in cellular models by limiting the formation of toxic lipid peroxides. Tetrahydrobiopterin acts as a potent, diffusable antioxidant that resists oxidative stress and enables cancer cell survival via promotion of angiogenesis.
References
Further reading
External links
Coenzymes
Lactams
Drugs developed by Merck
Orphan drugs
Pteridines
Vicinal diols | Tetrahydrobiopterin | [
"Chemistry"
] | 2,261 | [
"Organic compounds",
"Coenzymes"
] |
1,053,070 | https://en.wikipedia.org/wiki/Zire%2072 | The Zire 72 is Palm, Inc.'s second Personal Digital Assistant with an integrated digital camera. Introduced in 2004, it is the replacement for the Zire 71, having a 1.2 megapixel camera, 32 MB of memory, built-in Bluetooth wireless communication, video recording and playback capability, a built-in microphone, hi-res hi-color screen, SecureDigital smartcard slot, and a 312 MHz Intel PXA270 processor.
Users have complained about several problems, most notably
Blue paint peels off (a "special edition" version has been released that has no blue paint, just silver.)
Camera quality (broken pixels, uncovered lens etc.)
Screen whining
Battery life
Palm, Inc. released a second version of the Zire 72, the Zire 72s, which is silver. This change fixed the problem with the paint. They subsequently released a version of the blue Zire 72, which used a different paint, and does not peel.
As with other Bluetooth-enabled devices, turning that connectivity off when not required extends battery life.
This is the last model in PalmOne's line of multimedia Zire devices, and the last model made with a camera. The Zire 72 has been discontinued and is no longer supported by PalmOne/Palm's successor, Hewlett-Packard.
There have also been many complaints about discolored pixels, such as certain groups of pixels on the screen turning green, red, and blue.
See also
Zire Handheld
Notes
References
Zire 72 review by Mobile Tech Reviews
External links
Product Support from Palm US Website
Palm OS devices | Zire 72 | [
"Technology"
] | 334 | [
"Mobile computer stubs",
"Mobile technology stubs"
] |
1,053,191 | https://en.wikipedia.org/wiki/H3%20%28pyrotechnics%29 | H3 is a pyrotechnic composition which is used mostly as a burst charge for small diameter shells. It is friction and shock sensitive, as are most compositions containing chlorates. For this reason, H3 should be mixed using the "diaper method" and not with a ball mill. The composition consists of:
Potassium chlorate (KClO3) (oxidizing agent) - 75%
Charcoal (fuel) - 25%
Dextrin (binder) - 2% (added in addition to the 100% chlorate/charcoal base mixture)
Due to the potassium chlorate, H3 should not be mixed with sulfur or compositions containing sulfur, as sulfur increases the sensitivity of the mixture.
External links
A pyroguide article on H3
Pyrotechnic compositions | H3 (pyrotechnics) | [
"Chemistry"
] | 156 | [
"Pyrotechnic compositions"
] |
1,053,303 | https://en.wikipedia.org/wiki/Statistical%20learning%20theory | Statistical learning theory is a framework for machine learning drawing from the fields of statistics and functional analysis. Statistical learning theory deals with the statistical inference problem of finding a predictive function based on data. Statistical learning theory has led to successful applications in fields such as computer vision, speech recognition, and bioinformatics.
Introduction
The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood. Supervised learning involves learning from a training set of data. Every point in the training set is an input–output pair, where the input maps to an output. The learning problem consists of inferring the function that maps between the input and the output, such that the learned function can be used to predict the output from future input.
Depending on the type of output, supervised learning problems are either problems of regression or problems of classification. If the output takes a continuous range of values, it is a regression problem. Using Ohm's law as an example, a regression could be performed with voltage as input and current as an output. The regression would find the functional relationship between voltage and current to be such that $I = \frac{V}{R}$, where $R$ is the resistance.
Classification problems are those for which the output will be an element from a discrete set of labels. Classification is very common for machine learning applications. In facial recognition, for instance, a picture of a person's face would be the input, and the output label would be that person's name. The input would be represented by a large multidimensional vector whose elements represent pixels in the picture.
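As a minimal illustration of that representation (a sketch only — the 64×64 greyscale image and the label are placeholders), the picture is flattened into one long feature vector and paired with its label:

```python
# Sketch: representing one labelled face image as an input-output pair (x, y).
# The 64x64 greyscale image and the label "Alice" are illustrative placeholders.
import numpy as np

image = np.random.rand(64, 64)   # stand-in for a greyscale photograph of a face
x = image.reshape(-1)            # input vector: one element per pixel (4096 values)
y = "Alice"                      # output label: the person's name

print(x.shape)                   # (4096,)
```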
After learning a function based on the training set data, that function is validated on a test set of data, data that did not appear in the training set.
Formal description
Take $X$ to be the vector space of all possible inputs, and $Y$ to be the vector space of all possible outputs. Statistical learning theory takes the perspective that there is some unknown probability distribution over the product space $Z = X \times Y$, i.e. there exists some unknown $p(z) = p(\vec{x}, y)$. The training set is made up of $n$ samples from this probability distribution, and is notated
$$S = \{(\vec{x}_1, y_1), \dots, (\vec{x}_n, y_n)\} = \{\vec{z}_1, \dots, \vec{z}_n\}.$$
Every $\vec{x}_i$ is an input vector from the training data, and $y_i$ is the output that corresponds to it.
In this formalism, the inference problem consists of finding a function $f: X \to Y$ such that $f(\vec{x}) \sim y$. Let $\mathcal{H}$ be a space of functions $f: X \to Y$ called the hypothesis space. The hypothesis space is the space of functions the algorithm will search through. Let $V(f(\vec{x}), y)$ be the loss function, a metric for the difference between the predicted value $f(\vec{x})$ and the actual value $y$. The expected risk is defined to be
$$I[f] = \int_{X \times Y} V(f(\vec{x}), y)\, p(\vec{x}, y)\, d\vec{x}\, dy.$$
The target function, the best possible function $f$ that can be chosen, is given by the $f$ that satisfies
$$f = \operatorname*{arg\,min}_{h \in \mathcal{H}} I[h].$$
Because the probability distribution $p(\vec{x}, y)$ is unknown, a proxy measure for the expected risk must be used. This measure is based on the training set, a sample from this unknown probability distribution. It is called the empirical risk
$$I_S[f] = \frac{1}{n} \sum_{i=1}^{n} V(f(\vec{x}_i), y_i).$$
A learning algorithm that chooses the function that minimizes the empirical risk is called empirical risk minimization.
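A minimal sketch of empirical risk minimization, assuming an illustrative finite hypothesis space of linear functions and the square loss (neither choice is prescribed by the theory):

```python
# Sketch: empirical risk minimization over a small, finite hypothesis space.
# The candidate slopes, the synthetic data and the square loss are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=50)
y = 2.0 * x + rng.normal(scale=0.5, size=50)       # noisy samples of a "true" relationship

# Hypothesis space: linear functions f(x) = a * x for a handful of candidate slopes a
hypotheses = {a: (lambda x, a=a: a * x) for a in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0)}

def empirical_risk(f, x, y):
    """Average square loss of hypothesis f on the training sample."""
    return np.mean((y - f(x)) ** 2)

best_slope = min(hypotheses, key=lambda a: empirical_risk(hypotheses[a], x, y))
print(best_slope)                                   # 2.0 -- the slope minimizing the empirical risk
```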
Loss functions
The choice of loss function is a determining factor in the function that will be chosen by the learning algorithm. The loss function also affects the convergence rate for an algorithm. It is important for the loss function to be convex.
Different loss functions are used depending on whether the problem is one of regression or one of classification.
Regression
The most common loss function for regression is the square loss function (also known as the L2-norm). This familiar loss function is used in Ordinary Least Squares regression. The form is:
$$V(f(\vec{x}), y) = (y - f(\vec{x}))^2.$$
The absolute value loss (also known as the L1-norm) is also sometimes used:
$$V(f(\vec{x}), y) = |y - f(\vec{x})|.$$
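Both regression losses are straightforward to write down; a minimal sketch:

```python
# Sketch: the square (L2) and absolute-value (L1) regression losses for a single prediction.
def square_loss(y_true: float, y_pred: float) -> float:
    return (y_true - y_pred) ** 2

def absolute_loss(y_true: float, y_pred: float) -> float:
    return abs(y_true - y_pred)

print(square_loss(3.0, 2.5))     # 0.25
print(absolute_loss(3.0, 2.5))   # 0.5
```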
Classification
In some sense the 0-1 indicator function is the most natural loss function for classification. It takes the value 0 if the predicted output is the same as the actual output, and it takes the value 1 if the predicted output is different from the actual output. For binary classification with $Y = \{-1, 1\}$, this is:
$$V(f(\vec{x}), y) = \theta(-y f(\vec{x})),$$
where $\theta$ is the Heaviside step function.
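A minimal sketch of the same indicator loss for labels in {-1, +1}, using the Heaviside convention described above:

```python
# Sketch: the 0-1 indicator loss for binary classification with labels in {-1, +1},
# using the Heaviside convention H(t) = 1 for t > 0 and 0 otherwise, so a correct
# prediction (y * f(x) > 0) costs 0 and a wrong one costs 1.
def zero_one_loss(y_true: int, f_x: float) -> int:
    return 1 if -y_true * f_x > 0 else 0

print(zero_one_loss(+1, 0.8))    # 0 (predicted sign agrees with the label)
print(zero_one_loss(+1, -0.3))   # 1 (predicted sign disagrees with the label)
```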
Regularization
In machine learning problems, a major problem that arises is that of overfitting. Because learning is a prediction problem, the goal is not to find a function that most closely fits the (previously observed) data, but to find one that will most accurately predict output from future input. Empirical risk minimization runs this risk of overfitting: finding a function that matches the data exactly but does not predict future output well.
Overfitting is symptomatic of unstable solutions; a small perturbation in the training set data would cause a large variation in the learned function. It can be shown that if the stability for the solution can be guaranteed, generalization and consistency are guaranteed as well. Regularization can solve the overfitting problem and give the problem stability.
Regularization can be accomplished by restricting the hypothesis space $\mathcal{H}$. A common example would be restricting $\mathcal{H}$ to linear functions: this can be seen as a reduction to the standard problem of linear regression. $\mathcal{H}$ could also be restricted to polynomials of degree $p$, exponentials, or bounded functions on $L^1$. Restriction of the hypothesis space avoids overfitting because the form of the potential functions is limited, and so it does not allow for the choice of a function that gives empirical risk arbitrarily close to zero.
One example of regularization is Tikhonov regularization. This consists of minimizing
$$\frac{1}{n} \sum_{i=1}^{n} V(f(\vec{x}_i), y_i) + \gamma \|f\|_{\mathcal{H}}^2,$$
where $\gamma$ is a fixed and positive parameter, the regularization parameter. Tikhonov regularization ensures existence, uniqueness, and stability of the solution.
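For the common special case of a linear hypothesis space with the square loss, Tikhonov regularization reduces to ridge regression, which has a closed-form solution; a minimal sketch (the data and the value of the regularization parameter are illustrative):

```python
# Sketch: Tikhonov regularization for linear least squares (ridge regression).
# Minimizes (1/n) * sum_i (y_i - w.x_i)^2 + gamma * ||w||^2, which has a closed form.
import numpy as np

def ridge_fit(X: np.ndarray, y: np.ndarray, gamma: float) -> np.ndarray:
    n, d = X.shape
    A = X.T @ X / n + gamma * np.eye(d)   # (X^T X / n + gamma I)
    b = X.T @ y / n                       # X^T y / n
    return np.linalg.solve(A, b)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)
print(ridge_fit(X, y, gamma=0.1))         # close to the true weights, shrunk slightly toward zero
```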
Bounding empirical risk
Consider a binary classifier $f: X \to \{0, 1\}$. We can apply Hoeffding's inequality to bound the probability that the empirical risk deviates from the true risk by more than $\epsilon$, since the deviation is sub-Gaussian:
$$\mathbb{P}\left( |I_S[f] - I[f]| \geq \epsilon \right) \leq 2 e^{-2 n \epsilon^2}.$$
But generally, when we do empirical risk minimization, we are not given a classifier; we must choose it. Therefore, a more useful result is to bound the probability of the supremum of the difference over the whole class; a standard form of such a bound is
$$\mathbb{P}\left( \sup_{f \in \mathcal{H}} |I_S[f] - I[f]| \geq \epsilon \right) \leq 2\, S(\mathcal{H}, n)\, e^{-n \epsilon^2 / 8},$$
where $S(\mathcal{H}, n)$ is the shattering number and $n$ is the number of samples in your dataset. The exponential term comes from Hoeffding, but there is an extra cost of taking the supremum over the whole class, which is the shattering number.
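A small numerical illustration of how such a bound behaves as the sample size grows (assuming, purely for the example, the form given above together with a Sauer–Shelah-style polynomial bound on the shattering number; the exact constants differ between references):

```python
# Sketch: evaluating a uniform-deviation bound of the form
#   P( sup_f |I_S[f] - I[f]| >= eps ) <= 2 * S(H, n) * exp(-n * eps^2 / 8),
# with the shattering number bounded by (n + 1)^d for a class of VC dimension d
# (Sauer-Shelah style bound). The exact constants differ between references.
import math

def deviation_bound(n: int, eps: float, vc_dim: int) -> float:
    shattering = (n + 1) ** vc_dim
    return 2.0 * shattering * math.exp(-n * eps ** 2 / 8.0)

for n in (1_000, 10_000, 100_000):
    print(n, deviation_bound(n, eps=0.1, vc_dim=3))
# The bound is vacuous (>= 1) for small n and shrinks rapidly toward 0 as n grows.
```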
See also
Reproducing kernel Hilbert spaces are a useful choice for .
Proximal gradient methods for learning
Rademacher complexity
Vapnik–Chervonenkis dimension
References
Machine learning
Estimation theory | Statistical learning theory | [
"Engineering"
] | 1,288 | [
"Artificial intelligence engineering",
"Machine learning"
] |
1,053,364 | https://en.wikipedia.org/wiki/GraphicsMagick | GraphicsMagick is a fork of ImageMagick, emphasizing stability of both programming API and command-line options. It was branched off ImageMagick's version 5.5.2 in 2002 after irreconcilable differences emerged in the developers' group.
In addition to the programming language APIs available with ImageMagick, GraphicsMagick also includes a Tcl API, called TclMagick.
GraphicsMagick is used by several websites to process large numbers of uploaded photographs. As of 2023, GraphicsMagick had 4 active code contributors while ImageMagick had 24 active contributors.
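A minimal sketch of that kind of batch processing from Python, shelling out to the gm command-line tool (the directory names and the 800×800 bounding size are placeholders):

```python
# Sketch: batch-resizing uploaded photos by shelling out to GraphicsMagick's
# "gm convert" tool from Python. Directory names and the 800x800 bound are placeholders.
import subprocess
from pathlib import Path

def make_thumbnails(src_dir: str, dst_dir: str, max_size: str = "800x800") -> None:
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for photo in Path(src_dir).glob("*.jpg"):
        # -resize fits the image inside max_size while preserving its aspect ratio
        subprocess.run(
            ["gm", "convert", str(photo), "-resize", max_size, str(dst / photo.name)],
            check=True,
        )

make_thumbnails("uploads", "thumbnails")
```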
The most recent stable release of GraphicsMagick, version 1.3.45, was made available on August 27, 2024.
References
External links
Slides from Web2.0 Expo 2009
Batch Processing Millions and Millions of Images
Command-line software
Free raster graphics editors
Graphics libraries
Software forks
Free software programmed in C
Software using the MIT license | GraphicsMagick | [
"Technology"
] | 211 | [
"Command-line software",
"Computing commands"
] |
1,053,447 | https://en.wikipedia.org/wiki/Physical%20attractiveness | Physical attractiveness is the degree to which a person's physical features are considered aesthetically pleasing or beautiful. The term often implies sexual attractiveness or desirability, but can also be distinct from either. There are many factors which influence one person's attraction to another, with physical aspects being one of them. Physical attraction itself includes universal perceptions common to all human cultures such as facial symmetry, sociocultural dependent attributes, and personal preferences unique to a particular individual.
In many cases, humans subconsciously attribute positive characteristics, such as intelligence and honesty, to physically attractive people, a psychological phenomenon called the Halo effect. Research done in the United States and United Kingdom found that objective measures of physical attractiveness and intelligence are positively correlated, and that the association between the two attributes is stronger among men than among women. Evolutionary psychologists have tried to answer why individuals who are more physically attractive should also, on average, be more intelligent, and have put forward the notion that both general intelligence and physical attractiveness may be indicators of underlying genetic fitness. A person's physical characteristics can signal cues to fertility and health, with statistical modeling studies showing that the facial shape variables that reflect aspects of physiological health, including body fat and blood pressure, also influence observers' perceptions of health. Attending to these factors increases reproductive success, furthering the representation of one's genes in the population.
Heterosexual men tend to be attracted to women who have a youthful appearance and exhibit features such as a symmetrical face, full breasts, full lips, and a low waist–hip ratio. Heterosexual women tend to be attracted to men who are taller than them and who display a high degree of facial symmetry, masculine facial dimorphism, upper body strength, broad shoulders, a relatively narrow waist, and a V-shaped torso.
General contributing factors
Generally, physical attractiveness can be viewed from a number of perspectives; with universal perceptions being common to all human cultures, cultural and social aspects, and individual subjective preferences. The perception of attractiveness can have a significant effect on how people are judged in terms of employment or social opportunities, friendship, sexual behaviour, and marriage.
Some physical features are attractive in both men and women, particularly bodily and facial symmetry, although one contrary report suggests that "absolute flawlessness" with perfect symmetry can be "disturbing". Symmetry may be evolutionarily beneficial as a sign of health because asymmetry "signals past illness or injury". One study suggested people were able to "gauge beauty at a subliminal level" by seeing only a glimpse of a picture for one-hundredth of a second. Other important factors include youthfulness, skin clarity and smoothness of skin; and "vivid colour" in the eyes and hair. However, there are numerous differences based on gender.
A 1921 study of the reports of college students regarding those traits argued that static traits, such as beauty or ugliness of features, hold a position subordinate to groups of physical elements like expressive behaviour, affectionate disposition, grace of manner, aristocratic bearing, social accomplishments and personal habits.
Grammer and colleagues have identified eight "pillars" of beauty: youthfulness, symmetry, averageness, sex-hormone markers, body odor, motion, skin complexion, and hair texture. Traditionally in Samoa, body fat was acceptable or attractive.
Facial features
An Italian study published in 2008 analyzed the positions of the 50 soft-tissue landmarks of the faces of 324 white Northern Italian adolescent boys and girls to compare the features of a group of 93 "beautiful" individuals selected by a commercial casting agency with those of a reference group with normal dentofacial dimensions and proportions. The research found that, in comparison with the reference group, the attractive adolescents tended to have the following characteristics:
the ratio between the volume of the forehead and that of the total face was larger;
the nasal volume was smaller;
the distance between outer canthi was larger;
total facial height and depth were reduced.
Some tendencies differed by age and sex:
the facial volume was smaller in older attractive boys than in their peers, but bigger in attractive girls;
the faces of older attractive adolescents were less rounded (bigger ratio between facial area and volume), but the reverse was true for girls of any age;
attractive older boys had smaller angles of facial convexity with more acute profiles, while in girls the reverse pattern was found;
the nasolabial angle was reduced in girls, but in older boys the effect was reversed;
older attractive boys tended to have more prominent chins.
The study concluded that attractive adolescents had more neotenous and juvenile features, but older attractive boys also showed tendencies towards sexual dimorphism.
Contrary to common misconception, one study finds that non-severe facial scarring increases male attractiveness for short-term relationships.
Symmetry
Symmetrical faces and bodies may be signs of good inheritance to women of child-bearing age seeking to create healthy offspring. Studies suggest women are less attracted to men with asymmetrical faces, and symmetrical faces correlate with long-term mental performance and are an indication that a man has experienced "fewer genetic and environmental disturbances such as diseases, toxins, malnutrition or genetic mutations" while growing. Since achieving symmetry is a difficult task during human growth, requiring billions of cell reproductions while maintaining a parallel structure, achieving symmetry is a visible signal of genetic health.
Studies have also suggested that women at peak fertility were more likely to fantasize about men with greater facial symmetry, and other studies have found that male symmetry was the only factor that could significantly predict the likelihood of a woman experiencing orgasm during sex. Women with partners possessing greater symmetry reported significantly more copulatory female orgasms than were reported by women with partners possessing low symmetry, even with many potential confounding variables controlled. This finding has been found to hold across different cultures. It has been argued that masculine facial dimorphism (in men) and symmetry in faces are signals advertising genetic quality in potential mates. Low facial and body fluctuating asymmetry may indicate good health and intelligence, which are desirable features. Studies have found that women who perceive themselves as being more physically attractive are more likely to favour men with a higher degree of facial symmetry than are women who perceive themselves as being less physically attractive. It has been found that symmetrical females and males have a tendency to begin to have sexual intercourse at an earlier age, to have more sexual partners, and to have more one-night stands. They are also more likely to engage in infidelity. A study of quarterbacks in the American National Football League found a positive correlation between facial symmetry and salaries.
Body scent
Double-blind studies found that women prefer the scent of men who are rated as facially attractive. For example, both males and females were more attracted to the natural scent of individuals who had been rated by consensus as facially attractive. Additionally, it has also been shown that women have a preference for the scent of men with more symmetrical faces, and that women's preference for the scent of more symmetrical men is strongest during the most fertile period of their menstrual cycle. Within the set of normally cycling women, individual women's preference for the scent of men with high facial symmetry correlated with their probability of conception. Men's body odor is also affected by their diet, with women expressing preferences for male body odor associated with increased dietary fruit, vegetable and protein content, and reduced carbohydrate content.
Genetics
Studies have explored the genetic basis behind such issues as facial symmetry and body scent and how they influence physical attraction. In one study in which women wore men's T-shirts, researchers found that women were more attracted to the bodily scents in shirts of men who had a different type of gene section within the DNA called major histocompatibility complex (MHC). MHC is a large gene area within the DNA of vertebrates which encodes proteins dealing with the immune system and which influences individual bodily odors. One hypothesis is that humans are naturally attracted by the sense of smell and taste to others with dissimilar MHC sections, perhaps to avoid subsequent inbreeding while increasing the genetic diversity of offspring. Furthermore, there are studies showing that women's natural attraction for men with dissimilar immune profiles can be distorted with use of birth control pills. Other research findings involving the genetic foundations of attraction suggest that MHC heterozygosity positively correlates with male facial attractiveness. Women judge the faces of men who are heterozygous at all three MHC loci to be more attractive than the faces of men who are homozygous at one or more of these loci. Additionally, a second experiment with genotyped women raters found these preferences were independent of the degree of MHC similarity between the men and the female rater. With MHC heterozygosity independently seen as a genetic advantage, the results suggest that facial attractiveness in men may be a measure of genetic quality. General genetic heterozygosity has been demonstrated to be related to attractiveness in that people with mixed genetic backgrounds (i.e., mixed-race people) are seen as more attractive than people whose parents are more genetically similar (i.e., single-race people). However, some studies have not found that mixed race individuals are rated as more attractive, and one found that only certain mixes were rated as more attractive; this study argued that equating race with genetics was incorrect and argued for social influences as the cause.
Youthfulness
A 2010 study by American dating site OkCupid on 200,000 of its male and female users found that heterosexual women except those during their early to mid-twenties are open to relationships with both somewhat older and somewhat younger men; they have a larger potential dating pool than men until age 26. At age 20, women, in a "dramatic change", begin sending private messages to significantly older men. At age 29, they become "even more open to older men". Male desirability to women peaks in the late 20s and does not fall below the average for all men until 36. Other research indicates that women, irrespective of their own age, are attracted to men who are the same age or older.
For the Romans especially, "beardlessness" and "smooth young bodies" were considered beautiful to both men and women. For Greek and Roman men, the most desirable traits of boys were their "youth" and "hairlessness". Pubescent boys were considered a socially appropriate object of male desire, while post-pubescent boys were considered to be "ἔξωροι" or "past the prime". This was largely in the context of pederasty (adult male interest in adolescent boys). Today, men and women's attitudes towards male beauty have changed. For example, body hair on men may even be preferred (see below).
A 1984 study said that gay men tend to prefer gay men of the same age as ideal partners, but there was a statistically significant effect (p < 0.05) of masculinity-femininity. The study said that more feminine men tended to prefer relatively older men than themselves and more masculine men tended to prefer relatively younger men than themselves.
Cross-cultural data shows that the reproductive success of women is tied to their youth and physical attractiveness, such as the pre-industrial Sami where the most reproductively successful women were 15 years younger than their man. One study covering 37 cultures showed that, on average, a woman was 2.5 years younger than her male partner, with the age difference in Nigeria and Zambia being at the far extreme of 6.5 to 7.5 years. As men age, they tend to seek a mate who is younger.
25% of online dating website eHarmony's male customers over the age of 50 request to only be matched with women younger than 40. The 2010 OkCupid study found that female desirability to its male users peaks at age 21, and falls below the average for all women at 31. After age 26, men have a larger potential dating pool than women on the site; and by age 48, their pool is almost twice as large. The median 31-year-old male user searches for women aged 22-to-35, while the median 42-year-old male searches for women 27-to-45. The age skew is even greater with messages to other users; the median 30-year-old male messages teenage girls as often as women his own age, while mostly ignoring women a few years older than him. Excluding the 10% most and 10% least beautiful women, women's attractiveness does not change between 18 and 40. If extremes are included, however, "there's no doubt that younger [women] are more physically attractive – indeed in many ways beauty and youth are inextricable. That's why most of the models you see in magazines are teenagers".
Pheromones (detected by female hormone markers) reflect female fertility and the reproductive value mean. As females age, the estrogen-to-androgen production ratio changes and results in female faces to appear more masculine (thus appearing less "attractive"). In a small (n=148) study performed in the United States, using male college students at one university, the mean age expressed as ideal for a wife was found to be 16.87 years old, while 17.76 was the mean ideal age for a brief sexual encounter. However, the study sets up a framework where "taboos against sex with young girls" are purposely diminished, and biased their sample by removing any participant over the age of 30, with a mean participant age of 19.83. In a study of penile tumescence, men were found most aroused by pictures of young adult females.
Signals of fertility in women are often also seen as signals of youth. The evolutionary perspective proposes the idea that when it comes to sexual reproduction, the minimal parental investment required by men gives them the ability and desire to simply reproduce 'as much as possible.' It therefore makes sense that men are attracted to the features in women which signal youthfulness, and thus fertility. Their chances of reproductive success are much higher than they would be should they pair with someone older—and therefore less fertile.
This may explain why combating age declines in attractiveness occurs from a younger age in women than in men. For example, the removal of one's body hair is considered a very feminine thing to do. This can be explained by the fact that aging results in raised levels of testosterone and thus, body hair growth. Shaving reverts one's appearance to a more youthful stage and although this may not be an honest signal, men will interpret this as a reflection of increased fertile value. Research supports this, showing hairlessness is considered sexually attractive by men.
Leg-to-body ratio
"Leg-to-body ratio" is seen as an indicator of physical attractiveness but there appears to be no single accepted definition of leg-length: the 'perineum-to-floor' measure is the most frequently used, but arguably the distance from the ankle bone to the outer hip bone is more rigorous. With the latter metric, the most attractive male leg-to-body ratio (judged by American women) is 1:1. A Japanese study using the former metric found the same result for male attractiveness, but women with longer legs than the rest of their body were judged to be more attractive. Excessive deviations from the mean were seen as indicative of disease. A study using Polish participants found that legs 5% longer than the average for both sexes was considered most attractive. The study concluded this preference might stem from the influence of long-legged runway models. Another study using British and American participants found "mid-ranging" leg-to-body ratios to be most ideal.
A study by Swami et al. of British male and female undergraduates showed a preference for men with legs as long as the rest of their body and women with 40% longer legs than the rest of their body. The researcher concluded that this preference might be influenced by American culture, in which long-legged women are portrayed as more attractive.
Marco Bertamini criticized the Swami et al. study for using a picture of the same person with digitally altered leg lengths, which he felt would make the modified image appear unrealistic. Bertamini also criticized the Swami study for only changing the leg length while keeping the arm length constant. After accounting for these concerns in his own study, Bertamini, using stick figures, also found a preference for women with proportionately longer legs than men. When Bertamini investigated the issue of possible sexual dimorphism of leg length, he found two sources indicating that men usually have slightly proportionately longer legs than women, or that differences in leg-length proportion may not exist between men and women. Following this review of the existing literature, he conducted his own calculations using data from 1774 men and 2208 women and reached the same conclusion. These findings led him to rule out the possibility that a preference for women with proportionately longer legs than men is due to proportionately longer legs being a secondary sex characteristic of women.
Genitalia
A 2006 study of 25,594 heterosexual men found that "men who perceived themselves as having a large penis were more satisfied with their own appearance".
A 2014 study criticized previous studies on the grounds that they relied on images and used terms such as "small", "medium", and "large" when asking for female preference. The new study instead used 3D models of penises spanning a range of lengths and circumferences and let the women "view and handle" them. It was found that women overestimated the actual size of the penises they experimented with when asked in a follow-up survey. The study concluded that women on average preferred a penis slightly larger than average in length, both for long-term and for one-time partners, and that penises with larger girth were preferred for one-time partners.
Evidence from various cultures suggests that heterosexual men tend to find the sight of women's genitalia to be sexually arousing.
Skin colour
Manual labourers who spent extended periods of time outside developed a darker skin tone due to exposure to the sun. As a consequence, an association between dark skin and the lower classes developed. Light skin became an aesthetic ideal because it symbolized wealth. "Over time society attached various meanings to these coloured differences. Including assumptions about a person's race, socioeconomic class, intelligence, and physical attractiveness."
Some research has suggested that redder and yellower skin tones, reflecting higher levels of oxygenated blood, carotenoid and to a lesser extent melanin pigment, and net dietary intakes of fruit and vegetables, appear healthier, and therefore more attractive. However, there is little direct evidence that skin colour is actually related to health or immune system strength.
A historical preference for lighter-skinned women has been documented across many cultures. However, the accuracy of this research has been questioned by other authors. Experimental studies show that white Western men are more attracted to tanned women, rather than pale women, and that women themselves believe that they are more attractive with tan skin. A 2010 study found a preference for lighter-skinned (but not lightest) women in New Zealand and California. However, other research has found that African-American males and females consider medium complexion as more attractive than lighter or darker skin, while white and Hispanic women seek to tan their skin in order to increase their attractiveness to the opposite sex. There is a direct correlation between being tan and self-perceived attractiveness, especially among young women.
According to research from China, since the 2010s, tan skin has emerged as the new beauty ideal for women in China, and Chinese women themselves believe their tan skin is more attractive and healthier than pale skin. Similar findings from Japan have found that the ideal female skin colour is tan, with no spots or roughness. There is a widespread perception in Japan that White women's skin is less beautiful than Japanese women's, as White women are stereotyped as being too pale and roughly textured.
The relationship between attractiveness and skin colour may also intersect with ethnicity and prior experience. Skin colour preferences may shift over time, as in Western culture, where tanned skin used to be associated with the sun-exposed manual labour of the lower-class, but since the mid-20th century it has generally been considered more attractive and healthier than before, with sun tanning becoming fashionable. In the African state of Mali, skin bleaching is common as it is thought to improve one's social standing and attractiveness to the opposite sex, although there has also been vocal opposition to this notion from pop culture icons.
Skin radiance or glowing skin may influence perception of beauty and physical attractiveness.
Hands
Hands have been found to contribute to physical attractiveness. Hands rated as physically attractive tend to have longer index and ring fingers. Men have a smaller index-to-ring-finger ratio than women; this gender difference is said to be influenced by exposure to testosterone within the womb. In a study where participants were shown computer-based images of hands, male participants found feminine hands with a smaller index finger less attractive, whereas female participants found masculine hands with a longer ring finger more attractive. The study suggests that finger length affects physical attraction because it gives an indication of the desirable sex-hormone-dependent traits which one may possess. Another study found that averageness, healthiness of the skin, how fat the hands appear to be, and the grooming of the hands all affect the attractiveness of hands. Averageness here means the degree to which the hands look like an average of the hands in the population; average-looking hands give an indication of an individual's health (because there are no abnormalities).
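As a minimal illustration of the index-to-ring-finger ratio mentioned above (sometimes written 2D:4D), with hypothetical finger lengths rather than values from any cited study, the ratio can be expressed as
\[ \mathrm{2D{:}4D} = \frac{\text{index finger length}}{\text{ring finger length}}, \qquad \text{e.g.}\ \frac{7.0\ \text{cm}}{7.3\ \text{cm}} \approx 0.96, \]
so a value below 1 (ring finger longer than index finger) corresponds to the lower, more typically male ratios described in this section.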
The healthier the skin on the hands appears, the more attractive the hands are judged to be. One explanation is that skin health may reflect an individual's overall health: healthy skin can indicate freedom from illness, because some illnesses visibly affect the skin. These features are found attractive because they suggest that the person has good genes and is therefore a suitable mate to reproduce with. Skin health may also give an indication of socioeconomic status, as rough hands may indicate a low-paying, laborious job; low socioeconomic status might show that someone lacks the resources to provide for offspring and is therefore less attractive. The fatter the hands appear, the less attractive they are judged, because of the co-morbidities associated with obesity: an overweight person may have other diseases and may be less able to produce healthy offspring. The attractiveness of the hands also gives an indication of other features of the individual; people with more attractive hands have been found to be taller and slimmer. Most of these hand-attractiveness studies used only white, European hands, with participants aged 18–26, so the attractiveness of non-white hands and of other age groups was not tested. The raters were also white Europeans, so their ratings may not represent how individuals of other skin colours and cultures would rate the hands.
Height
Females' sexual attraction towards males may be influenced by the height of the man. For example, the dating site eHarmony only matches women with men taller than themselves, because of complaints from women matched with shorter men.
Other studies have shown that heterosexual women often prefer men taller than they are, rather than a man with above average height. While women usually desire men to be at least the same height as themselves or taller, several other factors also determine male attractiveness, and the "taller male" norm is not universal. For example, taller women are more likely to relax the "taller male" norm than shorter women. Furthermore, professor Adam Eyre-Walker, from the University of Sussex, has stated that there is, as yet, no evidence that these preferences are evolutionary preferences, as opposed to merely cultural preferences. Still, the cultural perceived attractiveness preferences for taller men are powerful and confirmed by multiple studies. One study of speed-daters by Stulp found that "women were most likely to choose [men] 25 cm taller than themselves, whereas men were most likely to choose women only 7 cm shorter than themselves".
Additionally, women seem more receptive to an erect posture than men, though both prefer it as an element within beauty. According to one study (Yee N., 2002), gay men who identify as "only tops" tend to prefer shorter men, while gay men who identify as "only bottoms" tend to prefer taller men.
In romances in Middle English literature, all of the "ideal" male heroes are tall, and the vast majority of the "valiant" male heroes are tall too.
Most men tend to be taller than their female partners. In Western societies, it has been found that most men prefer women shorter than themselves. Nevertheless, height is a more important factor for a woman when choosing a man than it is for a man choosing a woman. Western men tend to view women taller than themselves as less attractive, and many people view heterosexual couples where the woman is taller to be less ideal. Women who are 0.7 to 1.7 standard deviations below the mean female height have been reported to be the most reproductively successful, since fewer tall women get married compared to shorter women. However, in other ethnic groups, such as the Hadza people from Tanzania, a study has found that height is irrelevant in choosing a mate. Another study found the same preference in rural Gambia.
In Middle English literature, "tallness" is a characteristic of ideally beautiful women. The British Fashion Model Agents Association (BFMA) specifies a minimum height requirement for female models.
Body language
Standing postures
Standing in a contrapposto posture (with bodyweight predominantly supported by one leg which is either straight, or very slightly bent, and with the other leg slightly bent) has been found to be more attractive looking than standing in a more plain, upright posture. This was found to be the case for both men and women. This posture may lower a person's observable waist-hip ratio and make their hips look wider and their waists thinner. For women especially, this can accentuate the curvature of their figure on one side of their body and make them seem more attractive. Such poses have been used in historical sculpture to emphasize an ideal of physical beauty. It has also been demonstrated that the contrapposto posture in women elicits more neural activity in brain areas linked to perception and attractiveness assessments than a standing position.
Movement patterns
The way an individual moves can influence attractiveness and indicate health and age. A study reflecting the views of 700 individuals and that involved animated representations of people walking, found that the physical attractiveness of women increased by about 50 percent when they walked with a hip sway. Similarly, the perceived attractiveness of males doubled when they moved with a swagger in their shoulders.
Male-specific factors
Women, on average, tend to be more attracted to men who have a relatively narrow waist, a V-shaped torso, wide chest and broad shoulders. Women also tend to be more attracted to men who are taller and larger than they are, and display a high degree of facial symmetry, as well as relatively masculine facial dimorphism. Women, regardless of sexual orientation, tend to be more interested in a partner's physical attractiveness than men.
Sexual dimorphism
The degree of difference between male and female anatomical traits is called sexual dimorphism. Female respondents in the follicular phase of their menstrual cycle were significantly more likely to choose a masculine face than those in the menses and luteal phases (or those taking hormonal contraception). This distinction supports the sexy son hypothesis, which posits that it is evolutionarily advantageous for women to select potential fathers who are more genetically attractive, rather than the best caregivers. However, the effort women will exert to view male faces does not seem to depend on the faces' masculinity, but instead increases generally with women's testosterone levels.
It is suggested that the masculinity of facial features is a reliable indication of good health, or, alternatively, that masculine-looking males are more likely to achieve high status. However, the correlation between attractive facial features and health has been questioned. Sociocultural factors, such as self-perceived attractiveness, status in a relationship and degree of gender-conformity, have been reported to play a role in female preferences for male faces. Studies have found that women who perceive themselves as physically attractive are more likely to choose men with masculine facial dimorphism, than are women who perceive themselves as physically unattractive. In men, facial masculinity significantly correlates with facial symmetry – it has been suggested that both are signals of developmental stability and genetic health. One study called into question the importance of facial masculinity in physical attractiveness in men, arguing that when perceived health, which is factored into facial masculinity, is discounted it makes little difference in physical attractiveness. In a cross-country study involving 4,794 women in their early twenties, a difference was found in women's average "masculinity preference" between countries.
A study found that the same genetic factors cause facial masculinity in both males and females such that a male with a more masculine face would likely have a sister with a more masculine face due to the siblings having shared genes. The study also found that, although female faces that were more feminine were judged to be more attractive, there was no association between male facial masculinity and male facial attractiveness for female judges. With these findings, the study reasoned that if a woman were to reproduce with a man with a more masculine face, then her daughters would also inherit a more masculine face, making the daughters less attractive. The study concluded that there must be other factors that advantage the genetics for masculine male faces to offset their reproductive disadvantage in terms of "health", "fertility" and "facial attractiveness" when the same genetics are present in females. The study reasoned that the "selective advantage" for masculine male faces must "have (or had)" been due to some factor that is not directly tied to female perceptions of male facial attractiveness.
In a study of 447 gay men in China, researchers said that tops preferred feminized male faces, bottoms preferred masculinized male faces and versatiles had no preference for either feminized or masculinized male faces.
In pre-modern Chinese literature, the ideal man in caizi jiaren romances was said to have "rosy lips, sparkling white teeth" and a "jasper-like face".
In Middle English literature, a beautiful man should have a long, broad and strong face.
Waist-to-chest ratio
The physique of a slim waist, broad shoulders and muscular chest are often found to be attractive to both females and males. Further research has shown that, when choosing a mate, the traits females look for indicate higher social status, such as dominance, resources, and protection. An indicator of health in males (a contributing factor to physical attractiveness) is the android fat distribution pattern which is categorized as more fat distributed on the upper body and abdomen, commonly referred to as the "V shape." When asked to rate other men, both heterosexual and homosexual men found low waist-to-chest ratios (WCR) to be more attractive on other men, with the gay men showing a preference for lower WCR (more V-shaped) than the straight men.
Other researchers found waist-to-chest ratio the largest determinant of male attractiveness, with body mass index and waist-to-hip ratio not as significant.
Women focus primarily on the waist-to-chest ratio, or more specifically the waist-to-shoulder ratio. This is analogous to the waist-to-hip ratio (WHR) that men prefer in women. Some studies have shown that attractive bodily traits in the eyes of a heterosexual woman include a tall, athletic physique with wide shoulders and a slim waist. Research has additionally shown that college males were more satisfied with their bodies than college females, and that as a college female's waist-to-hip ratio went up, her body-image satisfaction decreased.
Some research has shown that body weight may have a stronger effect than WHR when it comes to perceiving attractiveness of the opposite sex. Waist-to-hip ratio was found to play a smaller role in body preference than body weight for both sexes.
Psychologists Viren Swami and Martin J. Tovee compared female preference for male attractiveness cross culturally, between Britain and Malaysia. They found that females placed more importance on WCR (and therefore body shape) in urban areas of Britain and Malaysia, while females in rural areas placed more importance on BMI (therefore weight and body size). Both WCR and BMI are indicative of male status and ability to provide for offspring, as noted by evolutionary theory.
Females have been found to desire males who are of normal weight and have the average WHR for a male; such males are viewed as attractive and healthy. Males who had the average WHR but were overweight or underweight were not perceived as attractive to females. This suggests that WHR alone is not a major factor in male attractiveness, but that a combination of normal body weight and a typical male WHR seems to be most attractive. Research has also shown that men with a higher waist-to-hip ratio and a higher salary are perceived as more attractive to women.
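As a brief illustrative calculation (the measurements below are hypothetical, not taken from any of the studies cited here), the two ratios discussed in this section are defined as
\[ \mathrm{WHR} = \frac{\text{waist circumference}}{\text{hip circumference}}, \qquad \mathrm{WCR} = \frac{\text{waist circumference}}{\text{chest circumference}}. \]
For a man with an 81 cm waist, 96 cm hips and a 107 cm chest, WHR ≈ 81/96 ≈ 0.84 and WCR ≈ 81/107 ≈ 0.76; the lower the WCR, the more "V-shaped" the torso described above.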
Flat abdomen
A 1982 study found that an abdomen that protrudes was the "least attractive" trait for men.
In Middle English literature, a beautiful man should have a flat abdomen.
Musculature
Men's bodies portrayed in magazines marketed to men are more muscular than the men's bodies portrayed in magazines marketed to women. From this, some have concluded that men perceive a more muscular male body to be ideal, as distinct from a woman's ideal male, which is less muscular than what men perceive to be ideal. This is due to the within-gender prestige granted by increased muscularity and within-gender competition for increased muscularity. Men perceive the attractiveness of their own musculature by how closely their bodies resemble the "muscle man." This "muscle man" ideal is characterized by large muscular arms, especially biceps, a large muscular chest that tapers to their waist and broad shoulders. Among Australian university students, the male body composition found to be most attractive (12.16 kg fat, 63.27 kg muscle) was in line with the composition that was perceived as healthiest, and was well within the healthy range.
In a study of stated profile preferences on Match.com, a greater percentage of gay men than lesbians selected their ideal partner's body type as "Athletic and Toned" as opposed to the other two options of "Average" or "Overweight".
In pre-modern Chinese literature, such as in Romance of the Western Chamber, a type of masculinity called "scholar masculinity" is depicted wherein the "ideal male lover" is "weak, vulnerable, feminine, and pedantic".
In Middle English literature, a beautiful man typically has thick, broad shoulders, a square and muscular chest, a muscular back, strong sides that taper to a small waist, large hands and arms and legs with huge muscles.
Body hair
Studies based in the United States, New Zealand, and China have shown that women rate men with no trunk (chest and abdominal) hair as most attractive, and that attractiveness ratings decline as hairiness increases. Another study, however, found that moderate amounts of trunk hair on men was most attractive, to the sample of British and Sri Lankan women. Further, a degree of hirsuteness (hairiness) and a waist-to-shoulder ratio of 0.6 is often preferred when combined with a muscular physique.
In a study using Finnish women, women with hairy fathers were more likely to prefer hairy men, suggesting that preference for hairy men is the result of either genetics or imprinting. Among gay men, another study reported gay males who identify as "only tops" prefer less hairy men, while gay males who identify as "only bottoms" prefer more hairy men.
Facial hair
One study shows that men with facial hair covering the cheeks, upper lip, and lower jaw were perceived as more physically attractive than men with patchy facial hair. In this study, men's facial hair was split into four categories differing in thickness and coverage: very light, light, medium, and heavy. Light facial hair was rated as the most attractive, followed by medium and then heavy, while 'very light' facial hair was rated the least attractive. This study suggests that some facial hair is better than none because it signals masculine development, as beard growth requires the conversion of testosterone. An earlier study found that women from Western and Oceanic cultures are more attracted to clean-shaven faces than beards. However, they also rated full-bearded men as having higher status than clean-shaven men.
Jawline
More angular male jawlines tend to be selected as ideal in Western countries, while the ideal female jawline is rounder and softer.
Most research shows that attractive bigonial width and ramus measurements have similarities, but the jutting square chin is a predominantly European-heritage trait, which means it should not be held as a universal indicator of attractiveness. Men with low submental fat were viewed as having "better jawlines" and a more "youthful look".
Female-specific factors
Research indicates that heterosexual men tend to be attracted to young and beautiful women with bodily symmetry. Rather than decreasing it, modernity has only increased the emphasis men place on women's looks. Evolutionary psychologists attribute such attraction to an evaluation of the fertility potential in a prospective mate.
Facial features
General
Research has attempted to determine which facial features communicate attractiveness. Facial symmetry has been shown to be considered attractive in women, and men have been found to prefer full lips, high forehead, broad face, small chin, small nose, short and narrow jaw, high cheekbones, clear and smooth skin, and wide-set eyes. The shape of the face in terms of "how everything hangs together" is an important determinant of beauty. Women with thick, dark limbal rings in their eyes have also been found to be more attractive. The explanation given is that because the ring tends to fade with age and medical problems, a prominent limbal ring gives an honest indicator of youth.
In Persian literature, beautiful women are said to have noses like hazelnuts. In Arabian society in the Middle Ages, a component of the female beauty ideal was for women to have straight and fine noses. In Jewish Rabbinic literature, the rabbis considered a delicate nose to be the ideal type of nose for women. In Japan, during the Edo period, a component of the female beauty ideal was for women to have tall noses which were straight and not "too tall".
In a cross-cultural study, more neotenized (i.e., youthful looking) female faces were found to be most attractive to men while less neotenized female faces were found to be less attractive to men, regardless of the females' actual age. In a study of Italian women who have won beauty competitions, it was found that their faces had more "babyish" (pedomorphic) traits than those of the "normal" women used as a reference.
In a cross-cultural study, Marcinkowska et al. said that 18-to-45-year-old heterosexual men in all 28 countries surveyed preferred photographs of 18-to-24-year-old white women whose faces were feminized using facial image editing software over faces of 18-to-24-year-old white women that were masculinized using that software, but there were differences in preferences for femininity across countries. The higher the National Health Index (based on eight national health statistics taken from the World Health Organization Statistical Information Service using data from 2002 to 2006) of a country, the more were the feminized faces preferred over the masculinized faces. Among the countries surveyed, Japan had the highest femininity preference and Nepal had the lowest femininity preference.
Michael R. Cunningham of the Department of Psychology at the University of Louisville found, using a panel of East Asian, Hispanic and White judges, that the female faces tended to be judged as more attractive if they had a mixture of youthful and sexually mature features. Using a panel of African Americans and Whites as judges, Cunningham found more neotenous faces were perceived as having both higher "femininity" and "sociability". The authors found no evidence of ethnocentric bias in the Asian or White samples, as Asians and Whites did not differ significantly in preference for neonate cues, and positive ratings of White women did not increase with exposure to Western media.
Rather than finding evidence for purely "neonate" faces being most appealing, Cunningham found faces with "sexually-mature" features at the "periphery" of the face combined with "neonate" features in the "centre of the face" most appealing in women. Upon analyzing the results of his study, Cunningham concluded that preference for "neonate features may display the least cross-cultural variability" in terms of "attractiveness ratings" and, in another study, Cunningham concluded that there exists a large agreement on the characteristics of an attractive face.
In computer face averaging tests, women with averaged faces have been shown to be considered more attractive. This is possibly due to average features being more familiar and, therefore, more comfortable.
According to Chinese scholar Liu Jieyu (2008), there is more pressure on women than men to be physically attractive. Whereas there are various criteria that women might be expected to meet, a man might only need to be tall to be considered attractive.
On average, symmetrical features are one ideal, while unusual, stand-out features are another. A study performed by the University of Toronto found that the most attractive facial dimensions were those found in the average female face. However, that particular University of Toronto study looked only at white women.
A 2011 study, by Wilkins, Chan and Kaiser found correlations between perceived femininity and attractiveness; that is, women's faces which were seen as more feminine were judged by both men and women to be more attractive. The study also found that East Asian women's faces were more "prototypically" feminine than White women's, a finding that was replicated by several follow-up studies which found that this explains the higher attractiveness ratings of East Asian women compared to White women.
External links
Interpersonal attraction
Human sexuality
Seduction
Human appearance
Beauty | Physical attractiveness | [
"Biology"
] | 8,837 | [
"Human sexuality",
"Behavior",
"Human behavior",
"Sexuality"
] |
1,053,470 | https://en.wikipedia.org/wiki/Public%20toilet | A public toilet, restroom, bathroom or washroom is a room or small building with toilets (or urinals) and sinks for use by the general public. The facilities are available to customers, travelers, employees of a business, school pupils or prisoners. Public toilets are typically found in many different places: inner-city locations, offices, factories, schools, universities and other places of work and study. Similarly, museums, cinemas, bars, restaurants, and entertainment venues usually provide public toilets. Railway stations, filling stations, and long distance public transport vehicles such as trains, ferries, and planes usually provide toilets for general use. Portable toilets are often available at large outdoor events.
Public toilets are commonly separated by sex (or gender) into male and female toilets, although some are unisex (gender-neutral), especially small or single-occupancy facilities. Public toilets are sometimes also made accessible to people with disabilities. Depending on the culture, there may be varying degrees of separation between males and females and different levels of privacy. Typically, the entire room, or a stall or cubicle containing a toilet, is lockable. Urinals, if present in a male toilet, are typically mounted on a wall, with or without a divider between them.
Local authorities or commercial businesses may provide public toilet facilities. Some are unattended while others are staffed by an attendant. In many cultures, it is customary to tip the attendant, especially if they provide a specific service, such as might be the case at upscale nightclubs or restaurants. Public toilets may be municipally owned or managed and entered directly from the street. Alternatively, they may be within a building that, while privately owned, allows public access, such as a department store, or it may be limited to the business's customers, such as a restaurant. Some public toilets are free of charge, while others charge a fee. In the latter case they are also called pay toilets and sometimes have a charging turnstile. In the most basic form, a public toilet may just be a street urinal known as a pissoir, after the French term.
Public toilets are known by many other names depending on the country; examples are: restroom, bathroom, men's room, women's room, powder room (US); washroom (Canada); and toilets, lavatories, water closet (W.C.), ladies and gents (Europe).
Alternative names
Public toilets are known by many names in different varieties of English.
In American English, "restroom" commonly denotes a facility featuring toilets and sinks designed for use by the public, but "restroom" and "bathroom" are often used interchangeably for any room with a toilet (both in public and in private homes). "Restroom" is considered by some to be slightly more formal or polite. "Bathroom" is quite common in schools. "Comfort station" sometimes refers to a visitor welcome center such as those in national parks. The term "restroom" derives from the fact that, from the early 1900s through the middle of the century, upscale restaurants, theatres and performing facilities would often have comfortable chairs or sofas located within, or in a room directly adjacent to, the actual toilet and sink facilities, something which can be seen in some movies of the period. An example of this is the description of a "movie palace" opening in 1921, which was described as including " ... a rest-room for the fair sex and a lounging room for the sterner sex ... off these rooms are the toilets."
In Canadian English, public facilities are frequently called and signed as "washrooms", although usage varies regionally. The word "toilet" generally denotes the fixture itself rather than the room. The word "washroom" is rarely used to mean "utility room" or "mud room" as it is in some parts of the United States. "Bathroom" is generally used to refer to the room in a person's home that includes a bathtub or shower while a room with only a toilet and sink in a person's residence is typically called a "washroom" because one would wash one's hands in it upon returning home or before a meal or a "powder room" because women would fix their make-up on their faces in that room. These terms are the terms typically used on floor plans for residences or other buildings. Real estate advertisements for residences often refer to "three-piece washrooms" (include a bathtub or shower) and "two-piece washrooms" (only toilet and sink). In public athletic or aquatic facilities, showers are available in locker rooms.
In Britain, Australia, Hong Kong, Singapore, and New Zealand, the terms in use are "public toilet", "public lavatory" (abbreviated "lav"), "public convenience", and more informally, "public loo". As public toilets were traditionally signed as "gentlemen" or "ladies", the colloquial terms "the gents' room" and "the ladies' room", or simply "the gents" and "the ladies" are used to indicate the facilities themselves. The British Toilet Association, sponsor of the Loo of the Year Awards, refers to public toilets collectively as "away-from-home" toilets.
In Philippine English, "comfort room", or "C.R.", is the most common term in use.
Some European languages use words cognate with "toilet" (e.g. les toilettes in French; туалет (tualet) in Russian), or the initialism "W.C.", an abbreviation for "water closet", an older term for the flush toilet. In Slavic languages, such as Russian and Belarusian, the term sanuzel (санузел; short for sanitarny uzel — sanitary unit/hub) is sometimes used for public facilities which include a toilet, sink, and possibly a shower, bathtub, and / or bidet. Public urinals (pissoir) are known in several Romance languages by the name of a Roman Emperor: vespasienne in French and vespasiani in Italian.
Mosques, madrassas (schools), and other places where Muslims gather have public sex-separated "ablution rooms", since Islam requires specific procedures for cleansing parts of the body before prayer. These rooms normally adjoin the toilets, which are also subject to Muslim hygienical jurisprudence and Islamic toilet etiquette.
Types
Many public toilets are permanent small buildings visible to passers-by on the street. Others are underground, including older facilities in Britain and Canada. Contemporary street toilets include automatic, self-cleaning toilets in self-contained pods; an example is the Sanisette, which first became popular in France. As part of its campaign against open defecation, the Indian government introduced the remotely-monitored eToilet to some public spaces in 2014.
Public toilets may use seated toilets, as in most Western countries, or squat toilets. Squat toilets are common in many Asian and African countries, and, to a lesser extent, in Southern European countries. In many of those countries, anal cleansing with water is also the cultural norm and easier to perform while squatting than seated.
Another traditional type that has been modernized is the screened French street urinal known as a pissoir (vespasienne).
The telescopic toilet is designed to extend and retract vertically from a cylinder relative to street level depending on the time of day. It is typically installed in entertainment districts and operational only during weekends, evenings, and nights. The first such toilet was a telescopic urinal invented in the Netherlands, which now also offers pop-up toilets for women.
Private firms may maintain permanent public toilets. The companies are then permitted to use the external surfaces of the enclosures for advertising. The installations are part of a street furniture contract between the out-of-home advertising company and the city government and allow these public conveniences to be installed and maintained without requiring funds from the municipal budget.
Various portable toilet technologies are used as public toilets. Portables can be moved into place where and when needed and are popular at outdoor festivals and events. A portable toilet can either be connected to the local sewage system or store the waste in a holding tank until it is emptied by a vacuum truck. Portable composting toilets require removal of the container to a composting facility.
The standard wheelchair-accessible public toilet features wider doors, ample space for turning, lowered sinks, and grab-bars for safety. Features above and beyond this standard are advocated by the Changing Places campaign. Features include a hoist for an adult, a full-sized changing bench, and space for up to two caregivers.
Public toilets have frequently been inaccessible to people with certain disabilities.
Purposes
As an "away-from-home" toilet room, a public toilet can provide far more than access to the toilet for urination and defecation. People also wash their hands, use the mirrors for grooming, get drinking water (e.g. refilling water bottles), attend to menstrual hygiene needs, and use the waste bins. Public toilets may also become places for harassment of others or illegal activities, particularly if principles of Crime prevention through environmental design (CPTED) are not applied in the design of the facility.
History
Europe
Public toilets were part of the sanitation system of ancient Rome. These latrines housed long benches with holes accommodating multiple simultaneous users, with no division between individuals or groups. Using the facilities was considered a social activity.
By the Middle Ages, public toilets had become uncommon, with only a few attested: in Frankfurt in 1348, in London in 1383, and in Basel in 1455. A public toilet built in Ottoman Sarajevo in 1530, just outside a mosque's exterior courtyard wall, is still operating today.
Sociologist Dara Blumenthal notes changing bodily habits, attitudes, and practices regarding hygiene starting in the 16th century, which eventually led to a resurgence of public toilets. While it had been perfectly acceptable to relieve oneself anywhere, civility increasingly required the removal of waste product from contact with others.
New instruction manuals, schoolbooks, and court regulations dictated what was appropriate. For instance, in Galateo: or, A Treatise on Politeness and Delicacy of Manners, Giovanni della Casa states “It does not befit a modest, honourable man to prepare to relieve nature in the presence of other people, nor do up his clothes afterward in their presence. Similarly, he will not wash his hands on returning to decent society from private places, as the reason for his washing will arouse disagreeable thoughts in people.” Historian Lawrence Stone contends that the development of these new behaviours had nothing to do with problems of hygiene and bacterial infection, but rather with conforming to increasingly artificial standards of gentlemanly behaviour.
These standards were internalized at an early age. Over time, much that had to be explained earlier was no longer mentioned, due to successful social conditioning. This resulted in substantial reduction of explicit text on these topics in subsequent editions of etiquette literature; for example, the same passage in Les règles de la bienséance et de la civilité Chrétienne by Jean-Baptiste de la Salle is reduced from 208 words in the 1729 edition, to 74 words in the 1774 edition.
The first modern flush toilet had been invented in 1596, but it did not gain popularity until the Victorian era. When hygiene became a heightened concern, rapid advancements in toilet technology ensued. In the 19th century, large cities in Europe started installing modern flushing public toilets.
George Jennings, the sanitary engineer, introduced public toilets, which he called "monkey closets", to the Crystal Palace for The Great Exhibition of 1851. Public toilets were also known as "retiring rooms." They included separate amenities for men and women, and were the first flush toilet facilities to introduce sex-separation to the activity. The next year, London's first public toilet facility was opened.
Underground public toilets were introduced in the United Kingdom in the Victorian era, in built-up urban areas where no space was available to provide them above ground. The facilities were accessible by stairs, and lit by glass brick on the pavement. Local health boards often built underground public toilets to a high standard, although provisions were higher for men than women. Most have been closed as they did not have disabled access, and were more prone to vandalism and sexual encounters, especially in the absence of an attendant. A few remain in London, but others have been converted into alternative uses such as cafes, bars and even dwellings.
Hong Kong
In the early days of the colony of Hong Kong, people would relieve themselves in sewers, barrels or alleys. Once Hong Kong opened up for trade (1856–1880), the British Hong Kong government determined that the appalling hygiene situation was becoming critical, and it set up public toilets (squat toilets) in 1867. However, these toilets needed to be cleaned and emptied manually every day and were not popular. In 1894, plague broke out in Hong Kong and 2,500 people died, among them many public toilet cleaners. The government decided to act, setting up underground toilet facilities to improve the situation, though these toilets also had to be cleaned and emptied manually.
Early in 1940, the colonial government built the first public flush toilet. In 1953, a fire broke out in Shek Kip Mei, after which the government embarked on a major public housing project in Hong Kong that included public toilets for residents. More than ten people shared each toilet, using them for bathing and laundry as well as going to the toilet. Finally, in the 1970s, the government decided that one toilet for four or five families was insufficient and renovated all public housing, providing separate flush pedestal toilets for all residents.
United States
In the United States, concerns over public health and sanitation spurred the sanitarian movement during the late 1800s. Reforms to standardize plumbing codes and household plumbing were advocated for; the intersection of advancements in technology and desire for cleanliness and disease-free spaces spurred the development of public toilets.
Facilities for women sometimes had a wider emphasis, providing a safe and comfortable private space in the public sphere. The Ladies Rest Room is one example of the non-euphemistic use of the term: literally, a place to rest. Historically such rooms pre-dated the washroom and washrooms were added afterwards. Subsequent integrated designs resulted in the "women's restroom lounge".
A notable early example of a public toilet in the United States is the Old School Privy. The American architect Frank Lloyd Wright claimed to have "invented the hung wall for the w.c. (easier to clean under)" when he designed the Larkin Administration Building in Buffalo, New York in 1904.
According to a 2021 study by QS Supplies, the United States has just 8 public toilets for every 100,000 people, a rate that ties the country with Botswana in terms of access to toilet facilities. In the 1970s there were 50,000 coin-operated public restrooms in the U.S., but they were eliminated by 1980, and public facilities did not replace them.
South Africa
During the apartheid years in South Africa, public toilets were usually segregated by race.
Legislation
Mandatory requirements
In Brazil, there is no federal law or regulation that makes public toilet provision compulsory. The lack of public toilets across Brazil results in frequent acts of public urination.
Sex separation
United States
Massachusetts passed the first law requiring sex separation of public toilets in 1887. By 1920, this was mandated in 43 states.
In jurisdictions using the Uniform Plumbing Code in the U.S., sex separation is a legal mandate via the building code.
Toilets for employees and customers
Various countries have legislation stipulating how many public toilets are required in a given area for employees or for customers.
United States
The Restroom Access Act is legislation several U.S. States passed that requires retail establishments with toilet facilities for employees to also allow customers to use the facilities if the customer suffers from an inflammatory bowel disease or other medical condition requiring immediate access to a toilet.
United Kingdom
In the United Kingdom, the Workplace (Health, Safety and Welfare) Regulations 1992 requires businesses to provide toilets for their employees, along with washing facilities including soap or other suitable means of cleaning. The Workplace (Health, Safety and Welfare) Approved Code of Practice and Guidance L24, available from Health and Safety Executive Books, outlines guidance on the number of toilets to provide and the type of washing facilities associated with them.
Local authorities are not legally required to provide public toilets, and while in 2008 the House of Commons Communities and Local Government Committee called for a duty on local authorities to develop a public toilet strategy, the Government rejected the proposal.
In 2022 the UK Government Equality Minister Kemi Badenoch announced plans to make provision of single-sex toilets compulsory in new public buildings above a certain size. The technical review consultation on increasing accessibility and provision of toilets for men and women in municipal and private sector locations outlined the context in a call for evidence.
Equality of access
The presence or absence of public toilets has also long been a reflection of a society's class inequalities and social hierarchies.
In the UK the number of public toilets fell by nearly 20%, from 3,154 in 2015/16 to 2,556 in 2020/21. This loss leads to health and mobility inequality issues for a range of people, including the homeless, disabled, outdoor workers and those whose illnesses mean that they frequently need to access a toilet. The decline of the great British public toilet is described by the Royal Society for Public Health as creating a "urinary leash" which restricts how far people can travel from their homes.
Access for females
The lack of public toilets for females reflects their exclusion from the public sphere in the Victorian era. During this period, after leaving their parents' home, women were expected to devote themselves to roles as homemakers and wives. Thus, safe and private public toilets were rarely available for women, with the result that they were often restricted in how far they could travel away from home without returning. Alternatively, they had to make do in the public streets as best they could, and they often experienced sexual harassment as men tried to "sneak a peek" or otherwise bothered them. Some women fared even worse if they could not secure safety and privacy even at home or in their workplaces. These problems continue for women and girls in all parts of the world.
The practice of pay toilets emerged in the US in the late 19th century. In these spaces, public toilets could only be accessed by paying a fee. Sex-separated pay toilets were available at the Chicago World's Fair (US) in 1893. Females complained that these were practically unavailable to them; authorities allowed them to be free, but on Fridays only. In the twentieth century, activist groups in the U.S., including the Committee to End Pay Toilets in America, claimed that such practices disadvantaged women and girls because men and boys did not have to pay for urinals. As an act of protest against this phenomenon, in 1969 California Assemblywoman March Fong Eu destroyed a toilet on the steps of the California State Capitol. By the 1990s most US jurisdictions had migrated away from pay toilets. Until 1992, U.S. female senators had to use toilets located on different floor levels than the ones they were working on, a reflection of their intrusion in an all-male profession.
While some public facilities were available to females in London by 1890, there were many fewer than those available to males.
Toilets also were assigned strong moral overtones. While public water closets were considered necessary for sanitation reasons, they were viewed as offending public sensibilities. It has been said that because public facilities were associated with access to public spaces, extending these rights to women was viewed as "immoral" and an "abomination". As a result of Victorian-era codes, women were relegated to the private sphere, away from the public, fulfilling their roles as dutiful wives and mothers, where any association with sexuality or private body parts was taboo. For women, the female lavatory in a public space was associated with danger and immoral sexual conduct.
According to World Bank data from 2017, over 500 million females lacked access to sanitation facilities to go to the bathroom or manage menstrual hygiene. Risk of sexual assault is high, in India as high as 50%. Amnesty International includes sex-separated toilets among its list of suggested measures to ensure the safety of women and girls in schools.
In many places the queues for the female toilets are longer than those for the males; efforts to deal with this are known as potty parity. It has been estimated that females can take up to 50% longer in the toilet. The reasons given include the requirement to use a cubicle rather than a urinal, pregnancy, managing menstruation, health conditions (such as cystitis), clothing design, and helping others. Women are more likely to be accompanied by very young children, disabled, or older people.
Access for African-American people (racial segregation)
After slavery ended in the United States, southern states attempted to replicate socioeconomic oppression by passing laws requiring that blacks and whites be separated in all public and private venues. Racial segregation included public toilets, mandated by Jim Crow laws prior to the Civil Rights Act of 1964. Justifications provided for segregated facilities included "protection of a certain group, privacy, cleanliness, and morality". This segregation imposed significant restrictions on the lives of African-Americans. Strategies to keep African-Americans out of sight included the "basement solution", which involved locating public toilets for black people in the basement next to janitor supply rooms. Black workers often had to walk long distances to get to the toilets they were assigned.
Those who were able to afford cars could avoid the indignities of segregated trains and buses, but they faced the difficulty of finding a public toilet they were allowed to use. Courtland Milloy of the Washington Post recalled that on cross-country road trips in the 1950s his parents were reluctant to stop the car to allow the children to relieve themselves – it just was not safe. One solution to this was to carry a portable toilet (a sort of bucket-like arrangement) in the trunk of the car. This treatment led to the creation of The Negro Motorist Green Book, an annually updated guidebook. Once the traveler found the correct "colored restroom", it could serve "as a respite from the insults of the white world", akin to what is now called safe space.
Following the 1941 executive order which prohibited “discrimination in the employment of workers in defence industries or government,” white women refused to share bathrooms with black women throughout the South. Engaging in numerous labor strikes and walkouts against Fair Employment Practice Committee politics, they erroneously claimed that racial integration would cause them to catch syphilis from toilet seats. Similar arguments equating equal access to restrooms with contracting venereal diseases were made by white women after the 1954 court ruling against segregated public schools which led to the desegregation of Little Rock Central High School.
Samuel Younge Jr., then a student at Tuskegee Institute, was murdered in 1966 after trying to use a "whites-only" restroom. He was the first black college student to be killed for his actions supporting the Civil Rights Movement.
Access for people with disabilities
Public toilets have frequently been inaccessible to people with disabilities. In the United States, all public toilets in federal buildings were required to be accessible to people with disabilities by the Architectural Barriers Act of 1968. These requirements were extended to all public buildings by the Americans with Disabilities Act of 1990.
Access for transgender and gender non-conforming people
Access to public toilets for transgender and gender non-conforming people is often contested. In the United States, various bathroom bills have been put forward to define who can have public toilet access, and on what terms. Many of these bills seek to criminalize usage by people whose gender identity does not match the sex on their birth certificates.
A variety of reasons have been put forward for these measures, including protecting the privacy of females, avoidance of retraumatization in females affected by male violence, and to protect females from being assaulted by males donning disguises, although there is no evidence of the latter ever having occurred in the past. The UK's Equality and Human Rights Commission published guidance in 2022 outlining scenarios where it considered exclusion of transgender people from single-sex spaces to be justifiable and proportionate. While transgender public toilet usage has been labelled as a moral panic, the ongoing discourse continues to have significant impacts on this group.
Health aspects
Health problems from lack of public toilets
Public toilets play a role in community health and individual well-being. Where toilets are available, people can enjoy outings and physical activities in their communities. By letting people get out of their cars and onto their feet, bicycles and mass transit, public toilets can contribute to improved environmental health. Mental well-being is enhanced when people are out with families and friends and know a place "to go" is available.
Public toilets also serve people who are "toilet challenged". First, some people need to go very frequently, including young and old people, people who are pregnant or menstruating, and those with some medical conditions. Second, some people need toilet access urgently, suddenly and without warning: such as those with chronic conditions such as Crohn's disease and colitis, and those temporarily afflicted with food-borne illnesses.
The inability to satisfy essential physiological needs because no toilet is available contributes to health issues such as urinary tract infections, kidney infections, and digestive problems, which can later develop into severe health problems. Inadequate access to a public toilet when required can lead to substantial problems for people with prostate problems, people who are menstruating or going through the menopause, and people with urinary and fecal incontinence.
A 2015 study by the National Center for Transgender Equality found that 8% of transgender Americans reported having developed urinary tract infections, kidney infections, and other kidney-related problems as a result of avoiding, or not being granted access to, the facilities. In another survey, the group DC Trans Coalition found that 54% of its respondents (located in Washington, DC) reported physical problems from avoiding using public toilets, such as dehydration, kidney infections, and urinary tract infections.
According to the Government of Australia, more than 3.8 million Australians of all ages are estimated to suffer continence issues. This represents 18% of the Australian population. Therefore, the Department of Health and Ageing maintains the National Public Toilet Map to enable the public to find the closest facility.
Workers have legal rights to access a toilet during their work day. In the United States, the Department of Labor's Occupational Safety and Health Administration (OSHA) protects workers' rights to toilet breaks because of the documented health risks. This protected right to a toilet is a function of the workplace and is lost when workers leave the workplace.
If bus and truck drivers on timed schedules have difficulty in accessing toilets, this puts them at risk of bladder and digestive health problems. Furthermore, if the concentration of a driver in urgent need is compromised, it becomes a broader public safety concern.
Design
Entry
Doorless entry
Modern public toilets may be designed with a labyrinth entrance (doorless entry), which prevents the spread of disease that might otherwise occur when coming into contact with a door. Doorless entry provides visual privacy while simultaneously offering a measure of security by allowing the passage of sound. Doorless entry also helps deter vandalism: without a door, a would-be vandal gets fewer audible clues that another person is about to enter, and so runs a greater risk of being caught in the act. Doorless entry may also be achieved simply by keeping an existing door propped open, closed only when necessary.
Coin operated entry
Pay toilets usually have some form of coin operated turnstile, or they have an attendant who collects the fee.
Privacy
People often expect a high level of privacy when using public toilets. Privacy expectations may include toilet cubicles, cubicle doors, urinal partitions and similar.
The World Health Organization states that toilets should be "suitable, private and safe to use for all intended users, taking into consideration their gender, age and physical mobility (e.g. disabled, sick etc.)" and "All shared or public toilets should have [...] doors that can be locked from the inside, and lights".
Service access
Modern public toilets often have a service entrance, utilities passage, and the like, that run behind all the fixtures. Sensors are installed in a separate room, behind the fixtures. Usually, the separate room is just a narrow corridor or passageway.
Sensors
Sensor-operated fixtures (faucets, soap dispensers, hand dryers, paper towel dispensers) prevent the spread of disease by allowing patrons to circumvent the need to touch common surfaces. Sensor-operated toilets also help conserve water by limiting the amount used per flush, and require less routine maintenance. Each sensor views through a small window into each fixture. Sometimes the metal plates that house the sensor windows are bolted on from behind, to prevent tampering. Additionally, all of the electrical equipment is safely behind the walls, so that there is no danger of electric shock. However, a residual-current device must be used for all such electrical equipment.
Some public toilets have an automatic sensor-controlled flushing system that flushes the toilet when the user steps away from the sensor. They might also have an additional button that the user can push to provide a second flush.
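As a rough illustration of the flush-on-departure logic just described, the sketch below models it in Python. It is a minimal, hypothetical example: in a real fixture the presence readings would come from an infrared or ultrasonic occupancy sensor, and the class and method names here are invented for illustration rather than taken from any real device API.

```python
class AutoFlushValve:
    """Minimal model of a sensor-controlled flush: flush when the user steps away."""

    def __init__(self):
        self.user_present = False

    def update(self, presence_detected: bool) -> bool:
        """Feed in one sensor reading; return True if a flush should be triggered."""
        flush = self.user_present and not presence_detected  # user has just left
        self.user_present = presence_detected
        return flush

    def manual_button(self) -> bool:
        """The optional button simply requests an extra flush."""
        return True


valve = AutoFlushValve()
readings = [False, True, True, True, False]      # user arrives, stays, then leaves
print([valve.update(r) for r in readings])       # [False, False, False, False, True]
```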
Urinals
Urinals for males are common in public toilets as they are more space efficient than toilets (for urination). Urinals in public toilets are common in Western countries but less so in Muslim countries, partly due to Islamic toilet etiquette rules. Urinals for females exist but are rare. Urinals can be with automatic or manual flushing, or without flush water as is the case for waterless urinals. They can be arranged as single sanitary fixtures (with or without privacy walls) or in a trough design without privacy walls. The body posture for users of urinals is specifically the standing position. Compared with urination in a general-purpose toilet, usage is faster and more sanitary because at the urinal there are no fecal germs, no additional doors or locks to touch, and no seat to turn up. A urinal takes less space, is simpler, and consumes less water per flush (or even no water at all) than a flush toilet. Urinal setups can have individual urinals (with or without privacy partitions) or a communal urinal (also called a trough urinal) which is used by multiple men.
Lighting
Service lighting may consist of windows that run all the way around the outside of the toilet, with electric lights behind the windows, creating the illusion of extensive natural light even when the toilets are underground or otherwise do not have access to natural light. The windows are sometimes made of glass brick, permanently cemented in place. Lighting installed in service tunnels that run around the outside of the toilets provides optimum safety from electrical shock (keeping the lights outside the toilet), hygiene (no cracks or openings), security (no way for vandals to access the light bulbs), and aesthetics (clean architectural lines that maintain a continuity of whatever aesthetic design is present, e.g., the raw industrial urban aesthetic that works well with glass brick).
Cisterns (tanks)
Older toilets infrequently have service ducts; in old toilets that have been modernized, the cistern is often hidden in a tiled-over, purpose-built 'box'. Old toilets often still have high-level cisterns in the service ducts. On the outside, the toilet is flushed by a handle (just like an ordinary low-level cistern toilet), although behind the wall this handle activates a chain. Sometimes a long flushing trough is used to allow closets to be flushed repeatedly without waiting for the cistern to refill. This trend of hiding cisterns and fittings behind the walls started in the late 1930s in the United States and in the United Kingdom from the 1950s, and by the late 1960s it was unusual for toilet cisterns to be visible in public toilets. In some buildings such as schools, however, a cistern can still be visible, although high-level cisterns had become outdated by the 1970s. Many schools now have low-level cisterns.
Hand drying options
An option for hand drying is usually provided next to the sink. This can be either a paper towel dispenser (sometimes they have auto-sensors for touchless dispensing) or a mechanical hand dryer (used manually or with auto-sensors). Drying of washed hands is important for convenience but also because wet hands are more easily recontaminated. Paper towels are more hygienic than electric air dryers.
Other fixtures
Public toilets by their nature see heavy usage. Some high-vandalism settings, such as beaches or stadiums, will use metal toilets. Public toilets generally contain several of the following fixtures.
In the lockable cubicle (stall)
Toilet cubicle door
Toilet with toilet seat; whereas a home toilet seat has a lid, a public toilet may or may not have one, and may not even have a seat
Toilet paper, often within a lockable dispenser
Coat hook
"Pull-down" purse holder
Sanitary protection bin for menstrual products; this may be classified as clinical waste and be subject to special regulations concerning disposal
Dispenser for flushable paper toilet seat covers
Toilet cubicle door lock sign, indicated in one of two colours: Vacant is marked in green, while Engaged is marked in red
At the point of handwashing
Faucets (taps); some are at a lower level for children and wheelchair users
Antiseptic hand-wash dispenser or soap dispensers, pump bottles or auto dispensers (not commonly supplied)
Mirror (usually over sinks)
Waste container / rubbish bin
Elsewhere
Urinals (almost exclusively in public toilets for males; although see female urinal)
Vending machines dispensing condoms, diapers (nappies), painkillers, energy drinks, perfume, breath mints, facial tissue, confectionery, undergarments, swimwear, soap, sex toys, or sanitary napkins or tampons
Air fresheners or odour control systems
Infant changing table, often fold-down (usually in women's rooms, but increasingly also in men's rooms); these are sometimes placed within a toilet cubicle, usually a large one
Sometimes showers are also present, often with soap, shampoo, or similar dispensers (often at truck stops)
Cleaning, maintenance and management
Thorough cleaning and maintenance are important for public toilets. This task is usually performed by a "public toilet attendant" (who is there during an entire shift) or by professional cleaning staff. They maintain and clean the facilities, ensuring that toilet paper, soap, paper towels, and other necessary items are kept stocked.
Public toilets need both periodic maintenance and emergency cleaning. Volunteer-managed facilities may also be an option in some cases.
There are now durable options for restroom stall materials, such as solid plastic, that were designed to help fight vandalism. Solid plastic allows scratches to be less noticeable because the color runs throughout the material, whereas powder-coated steel chips easily, leading to obvious damage. Solid plastic is also easier to clean and maintain in public restrooms with high traffic volumes.
Costs and economics
User fees
Toilets that require the user to pay may be street furniture or be inside a building, e.g. a shopping mall, department store, or railway station. The reason for charging money is usually to pay for the maintenance of the equipment. Paying to use a toilet can be traced back almost 2,000 years: the Roman emperor Vespasian is believed to have begun charging his citizens to use toilet facilities in 74 AD. The payment may be taken by a bathroom attendant, or by a coin-operated turnstile or cubicle door (see John Nevil Maskelyne, who invented a door lock requiring the insertion of a penny coin, hence the euphemism to "spend a penny"). The first pay toilet in the United States was installed in 1910 in Terre Haute, Indiana.
Privatization and closures
In some places, the provision of public toilet facilities is under great pressure. One response by public authorities is to close the buildings, often citing criminal activity. The United Kingdom government austerity programme has led to major council cut-backs to public toilet provision, with knock-on effects on the public realm as a whole. Some of the buildings, particularly the underground ones, are sold and used for other purposes, e.g. as a bar.
Another response is to privatise the toilets, so that a public good is provided by a contractor, just as private prisons are. The toilets may fall under the category of privately owned public space - anyone can use them, but the land ultimately belongs to the corporation in question. When toilets that have been privatised are improperly run, or closed, there may be calls to take them back into the control of the public authority.
Society and culture
Unisex (gender neutral)
Public toilets are often separated by sex. In many cultures, this separation is so characteristic that pictograms of a man or a woman often suffice to indicate the facility, without explicit reference to the fixtures themselves. In restaurants and other private locations, the identifications can be designed to match the decoration of the premises. Toilet facilities for people with disabilities, especially those reliant on a wheelchair, may be either gender-specific or unisex. Gender-neutral toilets are usual in cases where sex-separated ones are not practical, such as in aircraft lavatories and passenger train toilets.
In the 21st century, with support from the transgender rights movement, some initiatives have called for gender-neutral public toilets, also called unisex public toilets (also called gender-inclusive, or all-gender). These may be instead of, or in addition to, gendered toilets, depending on the circumstances. Many groups are re-imagining what public toilets can look like; for instance, architect Joel Sanders, transgender historian Susan Stryker, and legal scholar Terry Kogan launched Stalled!, an open source website which offers lectures, workshops, and design guidelines for unisex public toilets.
In addition to accommodating transgender and gender non-conforming individuals, gender-neutral public toilets facilitate usage for people who may require assistance from a caretaker of another gender, such as people with disabilities, elderly people, and children.
An additional consideration with regard to gendered public restrooms is the availability of baby changing tables. Sometimes, these tables have only been installed in women's restrooms, owing to stereotypical assumptions that only women were likely to be accompanied by babies needing to have their diapers changed. This can be an impediment for fathers with their children and other male caregivers. Advocates have worked for changing tables to be installed in men's restrooms. Unisex washrooms would provide access to either regardless.
Graffiti and street art
Public toilets have long been associated with graffiti, often of a transgressive, gossipy, or low-brow humorous nature (cf. toilet humour). The word latrinalia—from latrine (toilet) and -alia (a collection)—was coined to describe this kind of graffiti. A famous example of such artwork was featured on the album cover of the satirical Tony Award Broadway musical Urinetown, using felt-tip pen scribblings.
As graffiti merged into street art, so some public street-level toilets began to make a feature of their visibility. The Hundertwasser toilet block is a colourful example in Kawakawa, New Zealand, designed by an Austrian artist and viewed as a tourist draw in a small town.
Drugs, vandalism and violent crime
Some public toilets are known for drug-taking and drug-selling, as well as vandalism. This type of criminal activity is associated with all "neglected, unsupervised buildings", not just toilets, and good cleanliness and maintenance, and ideally an attendant on the premises, can act as a protection against these problems.
Violent crime inside public toilets can be a problem in areas where the rate of such crimes in general is very high. In South Africa, for instance, many people have reported being afraid to use public toilets. There have been several highly publicized murders in public toilets, such as the Seocho-dong public toilet murder case in South Korea in 2016. In the US, an infamous case was the murder of a 9-year-old boy in 1998 in a San Diego county public toilet.
Increasing public toilet provision can help to protect women from violent attacks. Research studies have found increased risk of women and girls being raped where there is limited or no access to safe toilets at night.
Several billion people lack access to improved water and sanitation and must travel long distances or wait until nighttime to defecate under cover of darkness. Women and girls managing menstruation increases their water and sanitation requirements for several days each month. Amongst the UN sustainable development goals, there is specific reference to achieving access to adequate and equitable sanitation and hygiene for all, paying special attention to the needs of women and girls in vulnerable situations (indicator 6.2).
A study conducted by the UCLA School of Law's Williams Institute found no significant change in the number of crimes since the passage of various laws that enable transgender public toilet usage. Transgender and gender non-conforming people are at risk of violence when using the public toilet (see: trans bashing). A 2015 study by the National Center for Transgender Equality found that 59% of transgender Americans avoided using public facilities for fear of confrontation. This landmark study, which included 27,715 respondents, found that 24% of respondents had their presence in the restroom questioned, 12% had experienced verbal harassment, physical assault, or sexual assault when attempting to use the restroom, and 9% were denied access entirely. Several studies have found that preventing transgender people from using public toilets has negative mental health impacts, leading to a higher risk of suicide.
Anonymous sex
Before the gay liberation movement, public toilets were amongst the few places where men too young to enter gay bars legally could meet others whom they knew with certainty to be gay. Many, if not most, gay and bisexual men at the time were closeted, and almost no public gay social groups were available for those under legal drinking age. The privacy and anonymity public toilets provided made them a convenient and attractive location to engage in sexual acts then.
Sexual acts in public toilets are outlawed in many jurisdictions (e.g. the Sexual Offences Act 2003 in the UK). It is likely that the element of risk involved in cottaging makes it an attractive activity to some.
Symbols in unicode
Unicode provides several symbols for public toilets.
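For example, the short Python snippet below prints a handful of Unicode code points that carry toilet-related character names, together with their official names; the particular selection is the editor's illustration rather than a complete list.

```python
import unicodedata

# A few Unicode characters used on public-toilet signage.
code_points = [0x1F6B9, 0x1F6BA, 0x1F6BB, 0x1F6BC, 0x1F6BD, 0x1F6BE, 0x267F]

for cp in code_points:
    ch = chr(cp)
    print(f"U+{cp:04X}  {ch}  {unicodedata.name(ch, 'UNKNOWN')}")
# e.g. U+1F6BB prints as RESTROOM, U+267F as WHEELCHAIR SYMBOL
```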
Toilets in particular locations
Shopping centres
Customers often expect retail stores and shopping centres to offer public toilets. Customers rank complimentary toilets highly, and their availability influences shopping behaviour. By offering appropriate customer toilets, retail stores and shopping centres may enhance their profits and image; however, many retailers pay insufficient attention to their customer toilet facilities. Due to the potential of customer toilets to increase profits and improve store image, retailers could benefit from regarding toilets as a marketing investment rather than a property expense. Some businesses, like Starbucks, have officially opted to let anyone use their toilets, without having to purchase anything. This decision was made after a highly publicized instance of racial profiling.
Schools
Lack of adequate school toilets is a very serious problem in many developing countries, and contributes to many problems from poor child health to school dropouts. Many pit latrines are not adequately built for young children, which has resulted in the tragic consequence of children dying by falling inside the hole. Globally, about 620 million children do not have adequate toilets at school, around 900 million cannot wash their hands properly, and almost half of schools do not provide soap.
The situation of inadequate school toilets violates children's right to education and right to water and sanitation. Such situations are common in many parts of the world, especially in Africa and South Asia, but also in other regions. For example, in the Caucasus and Central Asia, 30% of schools do not provide adequate toilets and 37% of schools do not have access to adequate water supplies. The presence of soap and toilet paper is very important, but is largely non-existent in many regions. Missing or inadequate doors and partitioning are observed in both high- and low-income countries, which can affect children's self-esteem, especially around puberty; in the case of girls, lack of menstrual hygiene management and privacy (such as the availability of functional toilet doors with locks, disposal facilities and menstrual hygiene products in schools, soap and toilet paper) can severely impact upon their well-being and is considered a form of violation of girls' rights.
In Japan, squat toilets are still found in most schools; many Japanese children are unable to use them or find them uncomfortable, which can cause constipation.
The Bill & Melinda Gates Foundation has funded several research projects for provision of community, shared or school toilets in developing countries since 2011, when they launched their "Reinvent the Toilet Challenge".
Prisons
It is today accepted in the countries of the Council of Europe that a lack of basic privacy is a violation of fundamental rights. For example, the European Court of Human Rights ruled in Szafrański v. Poland (2015) that the forcing of prisoners to use the toilet without adequate privacy amounts to a violation of Article 8 (right to respect for private life) of the European Convention on Human Rights.
In Vietnam
In Vietnam, many cities, especially large and densely populated ones, are experiencing a severe shortage of public toilets due to a lack of land for toilet construction. Toilets in Vietnam are generally insufficient in number, poorly installed, and dirty. Many public places do not have toilets at all, leading to littering everywhere.
Gallery
See also
Accessible toilet
Toilet room, in a private setting
EToilet
National Public Toilet Map (in Australia)
Human right to water and sanitation
Sanitation
Self-cleaning floor
Spray-and-vac cleaning, a method of professional cleaning
World Toilet Day
World Toilet Organization
References
External links
British Toilet Association Campaigning for Better Public Toilets for All
American Restroom Association America's advocate for the availability of clean, safe, well designed public restrooms
Australia's National Public Toilet Map shows the location of more than 14,000 public and private toilet facilities across Australia.
Public Toilets Database Locations of public toilets in 18 countries. New locations and comments can be added. Detailed information includes the geographic coordinates and quality of the facility.
Needaloo The UK Online Disabled Loo Locator
PHLUSH Volunteer advocacy group for public toilets
Urinal Dot Net
Rooms
Sanitation
Equality rights
Social inequality
Women's health | Public toilet | [
"Engineering"
] | 9,680 | [
"Rooms",
"Architecture"
] |
1,053,500 | https://en.wikipedia.org/wiki/DNA%20extraction | The first isolation of deoxyribonucleic acid (DNA) was done in 1869 by Friedrich Miescher. DNA extraction is the process of isolating DNA from the cells of an organism in a sample, typically a biological sample such as blood, saliva, or tissue. It involves breaking open the cells, removing proteins and other contaminants, and purifying the DNA so that it is free of other cellular components. The purified DNA can then be used for downstream applications such as PCR, sequencing, or cloning. Currently, it is a routine procedure in molecular biology and forensic analyses.
This process can be done in several ways, depending on the type of sample and the downstream application; the most common steps are mechanical, chemical, or enzymatic lysis, followed by precipitation, purification, and concentration. Specific methods used to extract the DNA include phenol-chloroform extraction, alcohol precipitation, and silica-based purification.
For the chemical method, many different kits are used for extraction, and selecting the correct one will save time on kit optimization and extraction procedures. The sensitivity of downstream PCR detection is considered to show the variation in performance between commercial kits.
There are many different methods for extracting DNA, but some common steps include:
Lysis: This step involves breaking open the cells to release the DNA. For example, in the case of bacterial cells, a solution of detergent and salt (such as SDS) can be used to disrupt the cell membrane and release the DNA. For plant and animal cells, mechanical or enzymatic methods are often used.
Precipitation: Once the DNA is released, proteins and other contaminants must be removed. This is typically done by adding a precipitating agent, such as alcohol (such as ethanol or isopropanol), or a salt (such as ammonium acetate). The DNA will form a pellet at the bottom of the solution, while the contaminants will remain in the liquid.
Purification: After the DNA is precipitated, it is usually further purified by using column-based methods. For example, silica-based spin columns can be used to bind the DNA, while contaminants are washed away. Alternatively, a centrifugation step can be used to purify the DNA by spinning it down to the bottom of a tube.
Concentration: Finally, the amount of DNA present is usually increased by removing any remaining liquid. This is typically done by using a vacuum centrifugation or a lyophilization (freeze-drying) step.
Some variations on these steps may be used depending on the specific DNA extraction protocol. Additionally, some kits are commercially available that include reagents and protocols specifically tailored to a specific type of sample.
What does it deliver?
DNA extraction is frequently a preliminary step in many diagnostic procedures used to identify environmental viruses and bacteria and diagnose illnesses and hereditary diseases. These methods consist of, but are not limited to:
The Fluorescence In Situ Hybridization (FISH) technique was developed in the 1980s. The basic idea is to use a nucleic acid probe to hybridize nuclear DNA from either interphase cells or metaphase chromosomes attached to a microscope slide. It is a molecular method used, among other things, to recognize and count particular bacterial groupings.
To recognize, define, and quantify the geographical and temporal patterns in marine bacterioplankton communities, researchers employ a technique called terminal restriction fragment length polymorphism (T-RFLP).
Sequencing: whole or partial genomes and other chromosomal components, intended for comparison with previously published sequences.
Basic procedure
Cells that are to be studied need to be collected.
Breaking the cell membranes open exposes the DNA along with the cytoplasm within (cell lysis).
Lipids from the cell membrane and the nucleus are broken down with detergents and surfactants.
Breaking down proteins by adding a protease (optional).
Breaking down RNA by adding an RNase (optional).
The solution is treated with a concentrated salt solution (saline) to make debris such as broken proteins, lipids, and RNA clump together.
Centrifugation of the solution, which separates the clumped cellular debris from the DNA.
DNA purification from detergents, proteins, salts, and reagents is used during the cell lysis step. The most commonly used procedures are:
Ethanol precipitation usually by ice-cold ethanol or isopropanol. Since DNA is insoluble in these alcohols, it will aggregate together, giving a pellet upon centrifugation. Precipitation of DNA is improved by increasing ionic strength, usually by adding sodium acetate.
Phenol–chloroform extraction in which phenol denatures proteins in the sample. After centrifugation of the sample, denatured proteins stay in the organic phase while the aqueous phase containing nucleic acid is mixed with chloroform to remove phenol residues from the solution.
Minicolumn purification relies on the fact that the nucleic acids may bind (adsorption) to the solid phase (silica or other) depending on the pH and the salt concentration of the buffer.
Cellular and histone proteins bound to the DNA can be removed either by adding a protease, by precipitating the proteins with sodium or ammonium acetate, or by extracting them with a phenol-chloroform mixture before the DNA precipitation.
After isolation, the DNA is dissolved in a slightly alkaline buffer, usually in a TE buffer, or in ultra-pure water.
Common chemicals
The most common chemicals used for DNA extraction include:
Detergents, such as SDS or Tween-20, which are used to break open cells and release the DNA.
Protease enzymes, such as Proteinase K, which are used to digest proteins that may be binding to the DNA.
Phenol and chloroform, which are used to separate the DNA from other cellular components.
Ethanol or isopropanol, which are used to precipitate the DNA.
Salt, such as NaCl, which is often used to help dissolve the DNA and maintain its stability.
EDTA, which is used to chelate metal ions that can damage the DNA.
Tris-HCL, which is used to maintain the pH at the optimal condition for DNA extraction.
Method selection
Some of the most common DNA extraction methods include organic extraction, Chelex extraction, and solid phase extraction. These methods consistently yield isolated DNA, but they differ in both the quality and the quantity of DNA yielded. When selecting a DNA extraction method, there are multiple factors to consider, including cost, time, safety, and risk of contamination.
Organic extraction involves the addition of, and incubation in, multiple different chemical solutions, including a lysis step, a phenol-chloroform extraction, an ethanol precipitation, and washing steps. Organic extraction is often used in laboratories because it is cheap and yields large quantities of pure DNA. Though it is easy, there are many steps involved, and it takes longer than other methods. It also involves the unfavorable use of the toxic chemicals phenol and chloroform, and there is an increased risk of contamination due to transferring the DNA between multiple tubes. Several protocols based on organic extraction of DNA were effectively developed decades ago, though improved and more practical versions of these protocols have also been developed and published in recent years.
The Chelex extraction method involves adding the Chelex resin to the sample, boiling the solution, then vortexing and centrifuging it. The cellular materials bind to the Chelex beads, while the DNA is available in the supernatant. The Chelex method is much faster and simpler than organic extraction, and it only requires one tube, which decreases the risk of DNA contamination. Unfortunately, Chelex extraction does not yield as much DNA, and the DNA obtained is single-stranded, which means it can only be used for PCR-based analyses and not for RFLP.
Solid phase extraction such as using a spin-column-based extraction method takes advantage of the fact that DNA binds to silica. The sample containing DNA is added to a column containing a silica gel or silica beads and chaotropic salts. The chaotropic salts disrupt the hydrogen bonding between strands and facilitate the binding of the DNA to silica by causing the nucleic acids to become hydrophobic. This exposes the phosphate residues so they are available for adsorption. The DNA binds to the silica, while the rest of the solution is washed out using ethanol to remove chaotropic salts and other unnecessary constituents. The DNA can then be rehydrated with aqueous low-salt solutions allowing for elution of the DNA from the beads.
This method yields high-quality, largely double-stranded DNA which can be used for both PCR and RFLP analysis. This procedure can be automated and has a high throughput, although lower than the phenol-chloroform method. This is a one-step method i.e. the entire procedure is completed in one tube. This lowers the risk of contamination making it very useful for the forensic extraction of DNA. Multiple solid-phase extraction commercial kits are manufactured and marketed by different companies; the only problem is that they are more expensive than organic extraction or Chelex extraction.
Special types
Specific techniques must be chosen for the isolation of DNA from some samples. Typical samples with complicated DNA isolation are:
archaeological samples containing partially degraded DNA, see ancient DNA
samples containing inhibitors of subsequent analysis procedures, most notably inhibitors of PCR, such as humic acid from the soil, indigo and other fabric dyes or haemoglobin in blood
samples from microorganisms with thick cellular walls, for example, yeast
samples containing mixed DNA from multiple sources
Extrachromosomal DNA is generally easy to isolate, especially plasmids may be easily isolated by cell lysis followed by precipitation of proteins, which traps chromosomal DNA in insoluble fraction and after centrifugation, plasmid DNA can be purified from soluble fraction.
A Hirt DNA Extraction is an isolation of all extrachromosomal DNA in a mammalian cell. The Hirt extraction process gets rid of the high molecular weight nuclear DNA, leaving only low molecular weight mitochondrial DNA and any viral episomes present in the cell.
Detection of DNA
A diphenylamine (DPA) indicator will confirm the presence of DNA. This procedure involves chemical hydrolysis of DNA: when heated (e.g. ≥95 °C) in acid, the reaction requires a deoxyribose sugar and therefore is specific for DNA. Under these conditions, the 2-deoxyribose is converted to ω-hydroxylevulinyl aldehyde, which reacts with diphenylamine to produce a blue-colored compound. DNA concentration can be determined by measuring the intensity of absorbance of the solution at 600 nm with a spectrophotometer and comparing to a standard curve of known DNA concentrations.
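The standard-curve comparison is usually a straight-line fit of absorbance against known concentrations, which is then inverted to estimate an unknown sample. The sketch below shows the idea in Python; the calibration values are made up for illustration and are not measured data.

```python
import numpy as np

# Hypothetical calibration: absorbance at 600 nm for DPA-treated standards
# of known DNA concentration (ug/mL). Values are illustrative only.
standard_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
standard_a600 = np.array([0.02, 0.11, 0.20, 0.39, 0.77])

# Fit a straight line through the standards (linear response assumed).
slope, intercept = np.polyfit(standard_conc, standard_a600, 1)

def concentration_from_a600(a600):
    """Invert the calibration line to estimate an unknown sample's concentration."""
    return (a600 - intercept) / slope

print(round(concentration_from_a600(0.30), 1))  # estimated ug/mL for A600 = 0.30
```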
Measuring the intensity of absorbance of the DNA solution at wavelengths 260 nm and 280 nm is used as a measure of DNA purity. DNA can be quantified by cutting the DNA with a restriction enzyme, running it on an agarose gel, staining with ethidium bromide (EtBr) or a different stain and comparing the intensity of the DNA with a DNA marker of known concentration.
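As a worked example of these absorbance measures, the snippet below applies two common rules of thumb: an A260 of 1.0 corresponds to roughly 50 µg/mL of double-stranded DNA for a 1 cm path length, and an A260/A280 ratio near 1.8 is generally taken to indicate relatively pure DNA. Exact conversion factors and acceptable ratios vary between laboratories and instruments, so treat this only as a sketch.

```python
def dsdna_concentration_ug_per_ml(a260, dilution_factor=1.0):
    """Estimate double-stranded DNA concentration from absorbance at 260 nm,
    using the common approximation of 50 ug/mL per A260 unit (1 cm path)."""
    return a260 * 50.0 * dilution_factor

def purity_ratio(a260, a280):
    """A260/A280 ratio; ~1.8 suggests pure DNA, while lower values suggest
    protein or phenol contamination."""
    return a260 / a280

# Example: a 1:10 dilution reading A260 = 0.25 and A280 = 0.13
print(dsdna_concentration_ug_per_ml(0.25, dilution_factor=10))  # ~125 ug/mL
print(round(purity_ratio(0.25, 0.13), 2))                        # ~1.92
```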
Using the Southern blot technique, this quantified DNA can be isolated and examined further using PCR and RFLP analysis. These procedures allow differentiation of the repeated sequences within the genome. It is these techniques which forensic scientists use for comparison, identification, and analysis.
High-molecular-weight DNA extraction method
In this method, plant nuclei are isolated by physically grinding tissues and reconstituting the intact nuclei in a unique Nuclear Isolation Buffer (NIB). The plastid DNAs are released from organelles and eliminated with an osmotic buffer by washing and centrifugation. The purified nuclei are then lysed and further cleaned by organic extraction, and the genomic DNA is precipitated with a high concentration of CTAB. The highly pure, high molecular weight gDNA is extracted from the nuclei, dissolved in a high pH buffer, allowing for stable long-term storage.
DNA storage
DNA storage is an important aspect of DNA extraction projects as it ensures the integrity and stability of the extracted DNA for downstream applications.
One common method of DNA storage is ethanol precipitation, which involves adding ethanol and a salt, such as sodium chloride or potassium acetate, to the extracted DNA to precipitate it out of solution. The DNA is then pelleted by centrifugation and washed with 70% ethanol to remove any remaining contaminants. The DNA pellet is then air-dried and resuspended in a buffer, such as Tris-EDTA (TE) buffer, for storage.
Another method is freezing the DNA in a buffer such as TE buffer, or in a cryoprotectant such as glycerol or DMSO, at -20 or -80 degrees Celsius. This method preserves the integrity of the DNA and slows down the activity of any enzymes that may degrade it.
It's important to note that the choice of storage buffer and conditions will depend on the downstream application for which the DNA is intended. For example, if the DNA is to be used for PCR, it may be stored in TE buffer at 4 degrees Celsius, while if it is to be used for long-term storage or shipping, it may be stored in ethanol at -20 degrees Celsius. The extracted DNA should be regularly checked for its quality and integrity, such as by running a gel electrophoresis or spectrophotometry. The storage conditions should be also noted and controlled, such as the temperature and humidity.
It's also important to consider the long-term stability of the DNA and the potential for degradation over time. The extracted DNA should be stored for as short a time as possible, and the conditions for storage should be chosen to minimize the risk of degradation.
In general, the extracted DNA should be stored under the best possible conditions to ensure its stability and integrity for downstream applications.
Quality control
There are several quality control techniques used to ensure the quality of extracted DNA, including:
Spectrophotometry: This is a widely used method for measuring the concentration and purity of a DNA sample. Spectrophotometry measures the absorbance of a sample at different wavelengths, typically at 260 nm and 280 nm. The ratio of absorbance at 260 nm and 280 nm is used to determine the purity of the DNA sample.
Gel electrophoresis: This technique is used to visualize and compare the size and integrity of DNA samples. The DNA is loaded onto an agarose gel and then subjected to an electric field, which causes the DNA to migrate through the gel. The migration of the DNA can be visualized using ethidium bromide, which intercalates into the DNA and fluoresces under UV light.
Fluorometry: Fluorometry is a method to determine the concentration of nucleic acids by measuring the fluorescence of the sample when excited by a specific wavelength of light. Fluorometry uses dyes that specifically bind to nucleic acids and have a high fluorescence intensity.
PCR: Polymerase Chain Reaction (PCR) is a technique that amplifies a specific region of DNA, it is also used as a QC method by amplifying a small fragment of the DNA, if the amplification is successful, it means the extracted DNA is of good quality and it's not degraded.
Qubit Fluorometer: The Qubit Fluorometer is an instrument that uses fluorescent dyes to measure the concentration of DNA and RNA in a sample. It is a quick and sensitive method that can be used to determine the concentration of DNA samples.
Bioanalyzer: The bioanalyzer is an instrument that uses electrophoresis to separate and analyze DNA, RNA, and protein samples. It can provide detailed information about the size, integrity, and purity of a DNA sample.
See also
Boom method
DNA fingerprinting
DNA sequencing
DNA structure
Ethanol precipitation
Plasmid preparation
Polymerase chain reaction
SCODA DNA purification
References
Further reading
Li, Richard (2015). Forensic Biology. Boca Raton: CRC Press, Taylor & Francis Group.
Green, Michael R.; Sambrook, Joseph (2012). Molecular Cloning: A Laboratory Manual (4th ed.). Cold Spring Harbor, N.Y.: Cold Spring Harbor Laboratory Press.
External links
How to extract DNA from anything living
DNA Extraction Virtual Lab
Biochemical separation processes
Genetics techniques
Molecular biology
Laboratory techniques
DNA
Polymerase chain reaction
Forensic genetics | DNA extraction | [
"Chemistry",
"Engineering",
"Biology"
] | 3,499 | [
"Biochemistry methods",
"Genetics techniques",
"Separation processes",
"Polymerase chain reaction",
"Genetic engineering",
"Biochemical separation processes",
"nan",
"Molecular biology",
"Biochemistry"
] |
1,053,506 | https://en.wikipedia.org/wiki/Influx%20of%20disease%20in%20the%20Caribbean | The first European contact in 1492 started an influx of communicable diseases into the Caribbean. Diseases originating in the Old World (Afro-Eurasia) came to the New World (the Americas) for the first time, resulting in demographic and sociopolitical changes due to the Columbian Exchange from the late 15th century onwards. The Indigenous peoples of the Americas had little immunity to the predominantly Old World diseases, resulting in significant loss of life and contributing to their enslavement and exploitation perpetrated by the European colonists. Waves of enslaved Africans were brought to replace the dwindling Indigenous populations, solidifying the position of disease in triangular trade.
Infectious diseases
Before the first wave of European colonization, the Indigenous peoples of the Americas and the Caribbean are thought to have lived with infrequent epidemic diseases, brought about by limited contact between tribes. This left them socially and biologically unprepared when the Italian explorer Christopher Columbus and his crew introduced several infectious diseases, including typhus, smallpox, influenza, whooping cough, and measles following his 1492 voyage to the Americas. The Old World diseases spread from the carriers to the Indigenous populations, who had no immunity, leading to more serious cases and higher mortality. Because the Indigenous societies of the Americas were not used to the diseases as European nations were at the time, there was no system in place to care for the sick.
Smallpox is among the most notable of diseases in the Columbian Exchange due to the high number of deaths and impact on life for Indigenous societies. Smallpox first broke out in the Americas on the island of Hispaniola in 1518. The disease was carried over from Europe, where it had been endemic for over seven hundred years. Like the other diseases introduced in the time period, the Europeans were familiar with the treatment of the disease and had some natural immunity, which reduced mortality and facilitated quicker recovery. The Taíno people, who inhabited Hispaniola, had no natural smallpox immunity and were unfamiliar with treating epidemic disease.
In 1493, the first recorded influenza epidemic to strike the Americas occurred on the island of Hispaniola in the northern Spanish settlement of Isabela. The virus was introduced to the Isle of Santo Domingo by the Cristóbal Colón, which docked at La Isabela on 10 December 1493, carrying about 2,000 Spanish passengers. Despite the general poor health of the colony, Columbus returned in 1494 and found that the Native American population had been affected by disease even more catastrophically than Isabela's first settlers were. By 1506, only a third of the native population remained. The Taíno population before European contact is estimated to have been between 60,000 and 8 million people, and the entire nation was virtually extinct 50 years after contact, which has primarily been attributed to the infectious diseases.
After the first European contact, social disruption and epidemic diseases led to a decline in the Amerindian population. Because the Indigenous societies, including the Taínos, were unfamiliar with the diseases, they were not prepared to deal with the social consequences. The high number of people incapacitated by the disease disrupted the normal cycles of agriculture and hunting that sustained the Native American populations. This led to increased dependence on the Europeans, and reduced capacity to resist the European invasion. The eventual enslavement of the Taíno people by the Europeans compounded the effects of the epidemics in the downfall of the Indigenous societies.
Impact of the transatlantic slave trade
As the population of enslaved Indigenous peoples fell due to disease and abuse, the Spanish and Portuguese conquistadors began to import enslaved workers from Africa in 1505. Until 1800 the population rose as slaves arrived from West Africa. Because there was already an established European colonial presence in Africa at the time, the enslaved Africans were less vulnerable to disease than the Taíno people on Hispaniola. However, they came carrying their own diseases, including malaria. At the time, malaria was endemic both in Europe and Africa, though more prevalent in the latter continent. The climate of the Caribbean was hospitable to mosquitoes of the genus Anopheles, which act as vectors for the disease and allowed it to spread. Many of the African-born enslaved people had genetic protections against malaria that Indigenous enslaved people did not. As malaria, smallpox, and other diseases spread, the Indigenous populations continued to fall, which increased the motivation for the Spanish and Portuguese colonists to continue to import more enslaved workers from Africa. These enslaved people worked in mining and agriculture, driving the development of triangular trade.
See also
Catholic Church and the Age of Discovery
Columbian Exchange
HIV/AIDS in the Caribbean
Malaria and the Caribbean
Native American disease and epidemics
Seasoning (colonialism)
Timeline of European imperialism
Triangular trade
Virgin soil epidemic
References
Bibliography
Engerman, Stanley L. "A Population History of the Caribbean", pp. 483–528 in A Population History of North America Michael R. Haines and Richard Hall Steckel (Eds.), Cambridge University Press, 2000, .
1490s in the Caribbean
15th-century epidemics
16th-century epidemics
17th-century epidemics
18th-century epidemics
Atlantic slave trade
Age of Discovery
Disease transmission
Epidemiology
European colonization of the Caribbean
Health in the Caribbean
History of Indigenous peoples of North America
History of indigenous peoples of the Americas
History of the Caribbean
Indigenous health
Portuguese colonization of the Americas
Spanish colonization of the Americas | Influx of disease in the Caribbean | [
"Environmental_science"
] | 1,086 | [
"Epidemiology",
"Environmental social science"
] |
1,053,553 | https://en.wikipedia.org/wiki/List%20of%20cosmologists | This is a list of people who have made noteworthy contributions to cosmology (the study of the history and large-scale structure of the universe) and their cosmological achievements.
A
Tom Abel (1970–) studied primordial star formation
Roberto Abraham (1965–) studied the shapes of early galaxies
Andreas Albrecht studied the formation of the early universe, cosmic structure, and dark energy
Hannes Alfvén (1908–1995) theorized that galactic magnetic fields could be generated by plasma currents
Ralph A. Alpher (1921–2007) argued that observed proportions of hydrogen and helium in the universe could be explained by the big bang model, predicted cosmic background radiation
Aristarchus of Samos (310–230 BC) early proponent of heliocentrism
Aristotle (circa 384–322 BC) posited a geocentric cosmology that was widely accepted for many centuries
Aryabhata (476–550) described a geocentric model with slow and fast epicycles
B
Ja'far ibn Muhammad Abu Ma'shar al-Balkhi (787–886) conveyed Aristotle's theories from Persia to Europe
James M. Bardeen (1939–2022) studied the mathematics of black holes and of vacua under general relativity
John D. Barrow (1952–2020) popularized the anthropic cosmological principle
Charles L. Bennett (1956–) studied the large-scale structure of the universe by mapping irregularities in microwave background radiation
Orfeu Bertolami (1959–) studied the cosmological constant, cosmic inflation, dark energy–dark matter unification and interaction, alternative gravity theories
Somnath Bharadwaj (1964–) studied large-scale structure formation
James Binney (1950–) studied galactic dynamics and supernova disruption of galactic gasses
Martin Bojowald (1973–) studied loop quantum gravity and established loop quantum cosmology
Hermann Bondi (1919–2005) developed the steady-state model
Mustapha Ishak Boushaki (1967–) physicist researcher on cosmology
Tycho Brahe (1546–1601) promoted a geo-heliocentric system of epicycles
Robert Brandenberger (1956–) formulated the theory of string gas cosmology, with colleague Cumrun Vafa, and developed cosmological perturbation theory
C
Bernard J. Carr (1949–) promoted the anthropic principle, studied primordial black holes
Sean M. Carroll (1966–) researched dark energy, general relativity, and spontaneous cosmic inflation
Gennady V. Chibisov (1946–2008) origin of cosmological density perturbations from quantum fluctuations
Peter Coles (1963–) modeled galactic clustering and authored several cosmology books
C. B. Collins used the anthropic principle to solve the flatness problem
Asantha Cooray (1973–) studied dark energy, halo models of large structure, and cosmic microwave radiation
Nicolaus Copernicus (1473–1543) formulated a heliocentric cosmology
D
Paul Davies (1946–) developed a vacuum model that explains microwave background fluctuation, studies time's arrow, and has written many popular-press books
Marc Davis (1947–) was lead astronomer of a survey of 50,000 high-redshift galaxies
Avishai Dekel (1951–) studied galaxy formation and large scale structure of the cosmos in dark matter-dark energy dominated universes
Robert H. Dicke (1916–1997) measured background radiation, used an early version of the anthropic principle to relate the gravitational constant to the age of the universe
Mike J. Disney (1937–) discovered low surface brightness galaxies
E
George Efstathiou (1955–) pioneering computer simulations, observations of galaxy clustering and studies of the fluctuations in the cosmic microwave background
Jürgen Ehlers (1929–2008) described gravitational lensing and studied the mathematical implications of an isotropic microwave background
Jaan Einasto (1929–) studied structure in the large-scale distribution of superclusters of galaxies, early proponent of dark matter
Albert Einstein (1879–1955) introduced general relativity and the cosmological constant
George F. R. Ellis (1939–) theorized a cylindrical steady-state universe with a naked singularity as recycling mechanism
Richard S. Ellis (1950–) used gravitational lensing and high-redshift supernovae to study the origin of galaxies, large scale structure, and dark matter
F
Sandra M. Faber (1944–) discovered the Great Attractor, a supercluster-scale gravitational anomaly; co-inventor of the theory of cold dark matter
Hume A. Feldman (1953–) studies cosmological perturbations and the statistical and dynamical properties of the large scale structure of the universe
Pedro G. Ferreira (1968–) his main interests are in general relativity and theoretical cosmology
Carlos S. Frenk (1951–) studied cosmic structure formation
Alexander Friedmann (1888–1925) discovered the expanding-universe solution to general relativity
G
George Gamow (1904–1968) argued that observed proportions of hydrogen and helium in the universe could be explained by the big bang model, modeled the mass and radius of primordial galaxies
Margaret J. Geller (1947–) discovered the Great Wall, a superstructure-scale filament of galaxies
Thomas Gold (1920–2004) proposed the steady-state theory
Gerson Goldhaber (1924–2010) used supernova observations to measure the energy density of the universe
J. Richard Gott (1947–) proposed the use of cosmic strings for time travel
Alan Guth (1947–) explained the isotropy of the universe by theorizing a phase of exponential cosmic inflation soon after the big bang
H
Stephen W. Hawking (1942–2018) described singularities in general relativity and developed singularity-free models of the big bang; predicted primordial black holes
Charles W. Hellaby described models of general relativity with nonconstant metric signature
Michał Heller (1936–) researched noncommutative approaches to quantum gravity
Robert C. Herman (1914–1997) predicted the background radiation temperature
Lars Hernquist (1954–) studied galaxy formation and evolution
Chris Hirata (1982–) researched weak gravitational lensing
Honorius Augustodunensis (c.1080−1151) wrote a popular encyclopedia of cosmology, geography, and world history
Hanns Hörbiger (1860–1931) formulated a pseudoscientific theory of ice as the basic substance of all cosmic processes
Fred Hoyle (1915–2001) promoted the steady state theory, used the anthropic principle to explain the energy levels of carbon nuclei
Edwin P. Hubble (1889–1953) demonstrated the existence of other galaxies and confirmed the relation between redshift and distance
John P. Huchra (1948–2010) discovered the Great Wall, a superstructure-scale filament of galaxies
I
Mustapha Ishak Boushaki (1967–) physicist researcher on cosmology
Jamal Nazrul Islam (1939–2013) published seven books on cosmology
K
Ronald Kantowski (1939–) discovered spatially homogeneous but anisotropic solutions to general relativity
Johannes Kepler (1571–1630) pioneered heliocentrism, discovered elliptical planetary motion, attempted to explain heavenly motions through physical causes
Isaak Markovich Khalatnikov (1919–2021) conjectured an oscillatory model with an essential singularity for the evolution of the universe
Tom W. B. Kibble (1932–2016) introduced the concept of cosmic strings
Robert Kirshner (1949–) discovered the Boötes void, a large region sparsely populated with galaxies, and wrote a popular book on cosmology
Edward Kolb (1951–) studied big bang cosmology including the emergence of baryons and dark matter, and wrote a popular textbook on cosmology
Lawrence M. Krauss (1954–) author of popular science books on cosmology including A Universe from Nothing
L
Ofer Lahav (1959–) studied dark matter and dark energy
Tod R. Lauer (1957–) catalogued massive black holes at galaxy centers and correlated their mass with other properties of the galaxies' structures
Georges Henri Lemaître (1894–1966) proposed the big bang theory and the distance-redshift relation
Janna Levin (1967–) seeks evidence for a bounded universe of nontrivial topology
Andrew R. Liddle (1965–) studied inflationary models, wrote two books on inflation and primordial inhomogeneities
Evgeny M. Lifshitz (1915–1985) conjectured an oscillatory model with an essential singularity for the evolution of the universe
Andrei Linde (1948–) pioneered cosmic inflationary models and proposed eternal chaotic inflation of universes from the false vacuum
Abraham (Avi) Loeb (1962–) researched primordial stars, primordial black holes, quasars, reionization, gravitational lensing, and gamma-ray bursts
Jean-Pierre Luminet (1951–) studied black holes and the topology of the Universe
David H. Lyth (1940–) studied particle cosmology, wrote two books on cosmic inflation and primordial inhomogeneities
M
João Magueijo (1967–) proposed much faster speeds of light in the young universe as an alternative explanation to cosmic inflation for its homogeneity
Richard Massey (1977–) mapped dark matter in the universe
Charles W. Misner (1932–2023) studied solutions to general relativity including the mixmaster universe and Misner space, wrote influential text on gravitation
John Moffat (1932–) proposed much faster speeds of light in the young universe, developed antisymmetric theories of gravity
Lauro Moscardini (1961–) modeled galaxy clustering in the early universe
N
Jayant Narlikar (1938–) promoted steady state theories
Isaac Newton (1642–1727) formulated the law of universal gravitation and supported the heliocentric model
P
György Paál (1934–1992) studied quasar and galaxy cluster distributions in the late 1950s, and in 1970 proposed, based on redshift quantization, that the Universe might have a nontrivial topological structure
Thanu Padmanabhan (1957–2021) studied quantum gravity and quantum cosmology
Leonard Parker (1938–) established the study of quantum field theory within general relativity
P. James E. Peebles (1935–) predicted cosmic background radiation, contributed to structure theory, developed models that avoid dark matter
Roger Penrose (1931–) linked singularities to gravitational collapse, conjectured the nonexistence of naked singularities, and used gravitational entropy to explain homogeneity
Arno Penzias (1933–2024) was the first to observe the cosmic background radiation
Saul Perlmutter (1959–) used supernova observations to measure the expansion of the universe
Mark M. Phillips (1951–) used supernova observations to discover acceleration in the expansion of the universe, calibrated the supernova distance scale
Joel Primack (1945–) co-invented the theory of cold dark matter
Ptolemy (90–168) wrote the only surviving ancient text on astronomy, conjectured a model of the universe as a set of nested spheres with epicycles
Q
Ali Qushji (1403–1474) challenged Aristotelian physics, in particular presenting empirical evidence against a stationary Earth, and may have influenced Copernicus
R
Lisa Randall (1962–) contributed to Randall–Sundrum models, which describe the world in terms of a warped geometry higher-dimensional universe
Martin Rees (1942–) proposed that quasars are powered by black holes, disproved steady state by studying distribution of quasars
Yoel Rephaeli used the distortion of the cosmic background by high-energy electrons to infer the existence of galaxy clusters
Adam Riess (1969–) found evidence in supernova data that the expansion of the universe is accelerating and confirming dark energy models
Wolfgang Rindler (1924–2019) coined the phrase "event horizon", Rindler coordinates, and popularized the use of spinors (with Roger Penrose)
Howard P. Robertson (1903–1961) solved the two-body problem in an approximation to general relativity, developed the standard model of general relativity
Vera Rubin (1928–2016) discovered discrepancies in galactic rotation rates leading to the theory of dark matter
S
Rainer K. Sachs (1932–2024) discovered gravitationally induced redshifts in the cosmic background radiation
Carl Sagan (1934–1996) American astrophysicist, cosmologist and author
Andrei Sakharov (1921–1989) invented the theory of twins, CPT-symmetric universes
Allan Sandage (1936–2010) set the cosmological distance scale and accurately estimated the speed of expansion of the universe
Brian P. Schmidt (1967–) used supernova data to measure the acceleration in the expansion of the universe
David N. Schramm (1945–1997) was an expert on big bang theory and an early proponent of dark matter
Dennis W. Sciama (1926–1999) studied many aspects of cosmology and supervised many other leading cosmologists
Irving Segal (1918–1998) created chronometric cosmology with alternative explanation of redshift in spectra of distant sources
Seleucus of Seleucia (c.190–c.150 BC) used tidal observations to support a heliocentric model
Roman Ulrich Sexl (1939–1986) developed an ether-based theory of absolute simultaneity that is mathematically equivalent to special relativity
Al-Sijzi (c. 945–1020) invented an astrolabe based on the Earth's rotation
Joseph Silk (1942–) explained the homogeneity of the early universe using photon diffusion damping
Willem de Sitter (1872–1934) developed a theory of dark matter with Einstein, found an expanding matterless solution to general relativity
Vesto Slipher (1875–1969) performed the first measurements of radial velocities for galaxies, providing the empirical basis for the expansion of the universe
Lee Smolin (1955–) studied quantum gravity, popularized a theory of cosmological natural selection
George F. Smoot (1945–) used Cosmic Background Explorer satellite to measure the temperature and anisotropy of the early universe
David N. Spergel (1961–) used Wilkinson Microwave Anisotropy Probe satellite to measure the temperature and anisotropy of the early universe
Paul Steinhardt (1952–) pioneered inflationary cosmology, introduced first example of eternal inflation, introduced quintessential dark energy, introduced the concept of strongly self-interacting dark matter, studied brane cosmology and cyclic models of the universe
Abd al-Rahman al-Sufi (903–986) wrote the Book of Fixed Stars, which lists over forty constellations and the stars within them
Nicholas B. Suntzeff (1952–) used supernova observations to discover acceleration in the expansion of the universe, calibrated the supernova distance scale
Rashid Sunyaev (1943–) developed a theory of density fluctuations in the early universe, described how to use cosmic background distortion to observe large-scale density fluctuations
Alex Szalay (1949–) worked on structure formation in a neutrino-dominated universe, biased galaxy formation in a cold dark matter dominated universe, and the power spectrum in hot, cold and warm dark matter dominated universes
T
Max Tegmark (1967–) determined the parameters of the lambda-cold dark matter model using Sloan Survey data, studied mathematical models of multiverses
Trinh Xuan Thuan (1948–) researched galaxy formation and evolution
William G. Tifft theorized that galactic redshifts are quantized
Beatrice Tinsley (1941–1981) researched galactic evolution, the creation of lightweight elements, and accelerated expansion of the universe
Frank J. Tipler (1947–) proved that time travel requires singularities, promoted the anthropic principle
Richard C. Tolman (1881–1948) showed that the cosmic background keeps a black-body profile as the universe expands
Mark Trodden (1968–) studied cosmological implications of topological defects in field theories
Michael S. Turner (1949–) coined the term dark energy
Neil Turok (1958–) predicted correlations between polarization and temperature anisotropy in the cosmic background, explained the big bang as a brane collision
Henry Tye (1947–) proposed brane-antibrane interactions as a cause of cosmic inflation
V
Alexander Vilenkin (1949–) showed that eternal inflation is generic, studied cosmic strings, theorized the creation of the universe from quantum fluctuations
W
Robert M. Wald (1947–) wrote a popular textbook on general relativity, studied the thermodynamics of black holes and created an axiomatic formulation of quantum field theory in curved spacetime
Arthur Geoffrey Walker (1909–2001) developed the standard model of general relativity and studied the mathematics of relativistic reference frames
David Wands studied inflation, superstrings, and density perturbations in the early universe
Yun Wang (1964–) uses supernova and galactic redshift data to probe dark energy
Jeffrey Weeks (1956–) used cosmic background patterns to determine the topology of the universe
Simon D. White (1951–) studied galaxy formation in the lambda-cold dark matter model
David Todd Wilkinson (1935–2002) used satellite probes to measure the cosmic background radiation
Edward L. Wright (1947–) promoted big bang theories, studied the effect of dust absorption on measurements of the cosmic background radiation
Z
Yakov Borisovich Zel'dovich (1914–1987) used accretion disks of massive black holes to explain quasars, predicted Compton scattering of the cosmic background radiation
Fritz Zwicky (1898–1974), along with Walter Baade (1893–1960), coined the term "supernova", and made contributions to the understanding of neutron stars, supernovae as standard candles, gravitational lensing, and dark matter.
See also
Timeline of cosmological theories
Cosmologists, List of
Physical cosmology
Cosmologists | List of cosmologists | [
"Physics",
"Astronomy"
] | 3,741 | [
"Astronomy-related lists",
"Theoretical physics",
"Astrophysics",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
1,053,747 | https://en.wikipedia.org/wiki/Vladimir%20Drinfeld | Vladimir Gershonovich Drinfeld (; ; born February 14, 1954), surname also romanized as Drinfel'd, is a mathematician from the former USSR, who emigrated to the United States and is currently working at the University of Chicago.
Drinfeld's work connected algebraic geometry over finite fields with number theory, especially the theory of automorphic forms, through the notions of elliptic module and the theory of the geometric Langlands correspondence. Drinfeld introduced the notion of a quantum group (independently discovered by Michio Jimbo at the same time) and made important contributions to mathematical physics, including the ADHM construction of instantons, algebraic formalism of the quantum inverse scattering method, and the Drinfeld–Sokolov reduction in the theory of solitons.
He was awarded the Fields Medal in 1990.
In 2016, he was elected to the National Academy of Sciences. In 2018 he received the Wolf Prize in Mathematics. In 2023 he was awarded the Shaw Prize in Mathematical Sciences.
Biography
Drinfeld was born into a Jewish mathematical family, in Kharkiv, Ukrainian SSR, Soviet Union in 1954. In 1969, at the age of 15, Drinfeld represented the Soviet Union at the International Mathematics Olympiad in Bucharest, Romania, and won a gold medal with the full score of 40 points. He was, at the time, the youngest participant to achieve a perfect score, a record that has since been surpassed by only four others including Sergei Konyagin and Noam Elkies. Drinfeld entered Moscow State University in the same year and graduated from it in 1974. Drinfeld was awarded the Candidate of Sciences degree in 1978 and the Doctor of Sciences degree from the Steklov Institute of Mathematics in 1988. He was awarded the Fields Medal in 1990. From 1981 till 1999 he worked at the Verkin Institute for Low Temperature Physics and Engineering (Department of Mathematical Physics). Drinfeld moved to the United States in 1999 and has been working at the University of Chicago since January 1999.
Contributions to mathematics
In 1974, at the age of twenty, Drinfeld announced a proof of the Langlands conjectures for GL2 over a global field of positive characteristic. In the course of proving the conjectures, Drinfeld introduced a new class of objects that he called "elliptic modules" (now known as Drinfeld modules). Later, in 1983, Drinfeld published a short article that expanded the scope of the Langlands conjectures. The Langlands conjectures, when published in 1967, could be seen as a sort of non-abelian class field theory. It postulated the existence of a natural one-to-one correspondence between Galois representations and some automorphic forms. The "naturalness" is guaranteed by the essential coincidence of L-functions. However, this condition is purely arithmetic and cannot be considered for a general one-dimensional function field in a straightforward way. Drinfeld pointed out that instead of automorphic forms one can consider automorphic perverse sheaves or automorphic D-modules. "Automorphicity" of these modules and the Langlands correspondence could be then understood in terms of the action of Hecke operators.
Drinfeld has also worked in mathematical physics. In collaboration with his advisor Yuri Manin, he constructed the moduli space of Yang–Mills instantons, a result that was proved independently by Michael Atiyah and Nigel Hitchin. Drinfeld coined the term "quantum group" in reference to Hopf algebras that are deformations of simple Lie algebras, and connected them to the study of the Yang–Baxter equation, which is a necessary condition for the solvability of statistical mechanical models. He also generalized Hopf algebras to quasi-Hopf algebras and introduced the study of Drinfeld twists, which can be used to factorize the R-matrix corresponding to the solution of the Yang–Baxter equation associated with a quasitriangular Hopf algebra.
Drinfeld has also collaborated with Alexander Beilinson to rebuild the theory of vertex algebras in a coordinate-free form, which have become increasingly important to two-dimensional conformal field theory, string theory, and the geometric Langlands program. Drinfeld and Beilinson published their work in 2004 in a book titled "Chiral Algebras."
See also
Drinfeld reciprocity
Drinfeld upper half plane
Manin–Drinfeld theorem
Quantum group
Chiral algebra
Quasitriangular Hopf algebra
Ruziewicz problem
Notes
References
Victor Ginzburg, Preface to the special volume of Transformation Groups (vol 10, 3–4, December 2005, Birkhäuser) on occasion of Vladimir Drinfeld's 50th birthday, pp 277–278,
Report by Manin
External links
Langlands Seminar homepage
1954 births
20th-century Ukrainian mathematicians
21st-century Ukrainian mathematicians
Moscow State University alumni
Fields Medalists
Living people
Algebraic geometers
Number theorists
Soviet mathematicians
Ukrainian Jews
Scientists from Kharkiv
International Mathematical Olympiad participants
University of Chicago faculty
Institute for Advanced Study visiting scholars
Members of the United States National Academy of Sciences
Corresponding members of the National Academy of Sciences of Ukraine
Russian scientists | Vladimir Drinfeld | [
"Mathematics"
] | 1,051 | [
"Number theorists",
"Number theory"
] |
1,053,858 | https://en.wikipedia.org/wiki/Functional%20genomics | Functional genomics is a field of molecular biology that attempts to describe gene (and protein) functions and interactions. Functional genomics make use of the vast data generated by genomic and transcriptomic projects (such as genome sequencing projects and RNA sequencing). Functional genomics focuses on the dynamic aspects such as gene transcription, translation, regulation of gene expression and protein–protein interactions, as opposed to the static aspects of the genomic information such as DNA sequence or structures. A key characteristic of functional genomics studies is their genome-wide approach to these questions, generally involving high-throughput methods rather than a more traditional "candidate-gene" approach.
Definition and goals
In order to understand functional genomics it is important to first define function. In their paper Graur et al. define function in two possible ways. These are "selected effect" and "causal role". The "selected effect" function refers to the function for which a trait (DNA, RNA, protein etc.) is selected for. The "causal role" function refers to the function that a trait is sufficient and necessary for. Functional genomics usually tests the "causal role" definition of function.
The goal of functional genomics is to understand the function of genes or proteins, eventually all components of a genome. The term functional genomics is often used to refer to the many technical approaches to study an organism's genes and proteins, including the "biochemical, cellular, and/or physiological properties of each and every gene product" while some authors include the study of nongenic elements in their definition. Functional genomics may also include studies of natural genetic variation over time (such as an organism's development) or space (such as its body regions), as well as functional disruptions such as mutations.
The promise of functional genomics is to generate and synthesize genomic and proteomic knowledge into an understanding of the dynamic properties of an organism. This could potentially provide a more complete picture of how the genome specifies function compared to studies of single genes. Integration of functional genomics data is often a part of systems biology approaches.
Techniques and applications
Functional genomics includes function-related aspects of the genome itself such as mutation and polymorphism (such as single nucleotide polymorphism (SNP) analysis), as well as the measurement of molecular activities. The latter comprise a number of "-omics" such as transcriptomics (gene expression), proteomics (protein production), and metabolomics. Functional genomics uses mostly multiplex techniques to measure the abundance of many or all gene products such as mRNAs or proteins within a biological sample. A more focused functional genomics approach might test the function of all variants of one gene and quantify the effects of mutants by using sequencing as a readout of activity. Together these measurement modalities endeavor to quantitate the various biological processes and improve our understanding of gene and protein functions and interactions.
At the DNA level
Genetic interaction mapping
Systematic pairwise deletion of genes or inhibition of gene expression can be used to identify genes with related function, even if they do not interact physically. Epistasis refers to the fact that effects for two different gene knockouts may not be additive; that is, the phenotype that results when two genes are inhibited may be different from the sum of the effects of single knockouts.
DNA/Protein interactions
Proteins formed by the translation of the mRNA (messenger RNA, a coded information from DNA for protein synthesis) play a major role in regulating gene expression. To understand how they regulate gene expression it is necessary to identify DNA sequences that they interact with. Techniques have been developed to identify sites of DNA-protein interactions. These include ChIP-sequencing, CUT&RUN sequencing and Calling Cards.
DNA accessibility assays
Assays have been developed to identify regions of the genome that are accessible. These regions of accessible chromatin are candidate regulatory regions. These assays include ATAC-seq, DNase-Seq and FAIRE-Seq.
At the RNA level
Microarrays
Microarrays measure the amount of mRNA in a sample that corresponds to a given gene or probe DNA sequence. Probe sequences are immobilized on a solid surface and allowed to hybridize with fluorescently labeled "target" mRNA. The intensity of fluorescence of a spot is proportional to the amount of target sequence that has hybridized to that spot and therefore to the abundance of that mRNA sequence in the sample. Microarrays allow for the identification of candidate genes involved in a given process based on variation between transcript levels for different conditions and shared expression patterns with genes of known function.
SAGE
Serial analysis of gene expression (SAGE) is an alternate method of analysis based on RNA sequencing rather than hybridization. SAGE relies on the sequencing of 10–17 base pair tags which are unique to each gene. These tags are produced from poly-A mRNA and ligated end-to-end before sequencing. SAGE gives an unbiased measurement of the number of transcripts per cell, since it does not depend on prior knowledge of what transcripts to study (as microarrays do).
RNA sequencing
RNA sequencing has taken over from microarray and SAGE technology in recent years (as noted in 2016) and has become the most efficient way to study transcription and gene expression. This is typically done by next-generation sequencing.
A subset of sequenced RNAs are small RNAs, a class of non-coding RNA molecules that are key regulators of transcriptional and post-transcriptional gene silencing, or RNA silencing. Next-generation sequencing is the gold standard tool for non-coding RNA discovery, profiling and expression analysis.
Massively Parallel Reporter Assays (MPRAs)
Massively parallel reporter assays is a technology to test the cis-regulatory activity of DNA sequences. MPRAs use a plasmid with a synthetic cis-regulatory element upstream of a promoter driving a synthetic gene such as Green Fluorescent Protein. A library of cis-regulatory elements is usually tested using MPRAs, a library can contain from hundreds to thousands of cis-regulatory elements. The cis-regulatory activity of the elements is assayed by using the downstream reporter activity. The activity of all the library members is assayed in parallel using barcodes for each cis-regulatory element. One limitation of MPRAs is that the activity is assayed on a plasmid and may not capture all aspects of gene regulation observed in the genome.
STARR-seq
STARR-seq is a technique similar to MPRAs to assay enhancer activity of randomly sheared genomic fragments. In the original publication, randomly sheared fragments of the Drosophila genome were placed downstream of a minimal promoter. Candidate enhancers amongst the randomly sheared fragments will transcribe themselves using the minimal promoter. By using sequencing as a readout and controlling for input amounts of each sequence the strength of putative enhancers are assayed by this method.
Perturb-seq
Perturb-seq couples CRISPR mediated gene knockdowns with single-cell gene expression. Linear models are used to calculate the effect of the knockdown of a single gene on the expression of multiple genes.
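As a schematic illustration of the linear-model idea (not the published Perturb-seq pipeline), one can regress each gene's expression across single cells on indicator variables recording which CRISPR knockdown each cell received; the fitted coefficients then estimate the effect of each knockdown on each gene. The cell counts, numbers of perturbations and genes, and noise levels below are invented for the sketch.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_cells, n_perturbations, n_genes = 500, 5, 20

# Design matrix: one column per guide/perturbation, 1 if the cell received it.
X = rng.integers(0, 2, size=(n_cells, n_perturbations))

# Simulated expression: each perturbation shifts some genes, plus noise.
true_effects = rng.normal(0, 1, size=(n_perturbations, n_genes))
Y = X @ true_effects + rng.normal(0, 0.5, size=(n_cells, n_genes))

# One multi-output linear model; coef_[g, p] estimates the effect of
# perturbation p on the expression of gene g.
model = LinearRegression().fit(X, Y)
print(model.coef_.shape)   # (n_genes, n_perturbations)
```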
At the protein level
Yeast two-hybrid system
A yeast two-hybrid screening (Y2H) tests a "bait" protein against many potential interacting proteins ("prey") to identify physical protein–protein interactions. This system is based on a transcription factor, originally GAL4, whose separate DNA-binding and transcription activation domains are both required in order for the protein to cause transcription of a reporter gene. In a Y2H screen, the "bait" protein is fused to the binding domain of GAL4, and a library of potential "prey" (interacting) proteins is recombinantly expressed in a vector with the activation domain. In vivo interaction of bait and prey proteins in a yeast cell brings the activation and binding domains of GAL4 close enough together to result in expression of a reporter gene. It is also possible to systematically test a library of bait proteins against a library of prey proteins to identify all possible interactions in a cell.
MS and AP/MS
Mass spectrometry (MS) can identify proteins and their relative levels, hence it can be used to study protein expression. When used in combination with affinity purification, mass spectrometry (AP/MS) can be used to study protein complexes, that is, which proteins interact with one another in complexes and in which ratios. In order to purify protein complexes, usually a "bait" protein is tagged with a specific protein or peptide that can be used to pull out the complex from a complex mix. The purification is usually done using an antibody or a compound that binds to the fusion part. The proteins are then digested into short peptide fragments and mass spectrometry is used to identify the proteins based on the mass-to-charge ratios of those fragments.
Deep mutational scanning
In deep mutational scanning, every possible amino acid change in a given protein is first synthesized. The activity of each of these protein variants is assayed in parallel using barcodes for each variant. By comparing the activity to the wild-type protein, the effect of each mutation is identified. While it is possible to assay every possible single amino-acid change, due to combinatorics two or more concurrent mutations are hard to test. Deep mutational scanning experiments have also been used to infer protein structure and protein–protein interactions. Deep mutational scanning is an example of a multiplexed assay of variant effect (MAVE), a family of methods that involve mutagenesis of a DNA-encoded protein or regulatory element followed by a multiplexed assay for some aspect of function. MAVEs enable the generation of 'variant effect maps' characterizing aspects of the function of every possible single nucleotide change in a gene or functional element of interest.
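A common way to score such experiments is to compare each variant's barcode count before and after selection, normalized to the wild type; the log-ratio ("enrichment score") is one widely used summary, though published analyses differ in detail. The toy variant names and counts below are invented.

```python
import math

# Hypothetical barcode counts (variant -> reads) before and after selection.
pre  = {"WT": 10000, "A24V": 950, "A24G": 1100, "L56P": 1200}
post = {"WT": 12000, "A24V": 1500, "A24G": 1000, "L56P": 150}

def enrichment(variant):
    """log2 change in variant frequency relative to wild type."""
    ratio_post = post[variant] / post["WT"]
    ratio_pre = pre[variant] / pre["WT"]
    return math.log2(ratio_post / ratio_pre)

for v in ("A24V", "A24G", "L56P"):
    print(v, round(enrichment(v), 2))   # strongly negative => deleterious variant
```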
Mutagenesis and phenotyping
An important functional feature of genes is the phenotype caused by mutations. Mutants can be produced by random mutations or by directed mutagenesis, including site-directed mutagenesis, deleting complete genes, or other techniques.
Knock-outs (gene deletions)
Gene function can be investigated by systematically "knocking out" genes one by one. This is done by either deletion or disruption of function (such as by insertional mutagenesis) and the resulting organisms are screened for phenotypes that provide clues to the function of the disrupted gene. Knock-outs have been produced for whole genomes, i.e. by deleting all genes in a genome. For essential genes, this is not possible, so other techniques are used, e.g. deleting a gene while expressing the gene from a plasmid, using an inducible promoter, so that the level of gene product can be changed at will (and thus a "functional" deletion achieved).
Site-directed mutagenesis
Site-directed mutagenesis is used to mutate specific bases (and thus amino acids). This is critical to investigate the function of specific amino acids in a protein, e.g. in the active site of an enzyme.
RNAi
RNA interference (RNAi) methods can be used to transiently silence or knockdown gene expression using ~20 base-pair double-stranded RNA typically delivered by transfection of synthetic ~20-mer short-interfering RNA molecules (siRNAs) or by virally encoded short-hairpin RNAs (shRNAs). RNAi screens, typically performed in cell culture-based assays or experimental organisms (such as C. elegans) can be used to systematically disrupt nearly every gene in a genome or subsets of genes (sub-genomes); possible functions of disrupted genes can be assigned based on observed phenotypes.
CRISPR screens
CRISPR-Cas9 has been used to delete genes in a multiplexed manner in cell lines. Quantifying the amount of guide RNAs for each gene before and after the experiment can point towards essential genes. If a guide RNA disrupts an essential gene it will lead to the loss of that cell, and hence there will be a depletion of that particular guide RNA after the screen. In a recent CRISPR-Cas9 experiment in mammalian cell lines, around 2,000 genes were found to be essential in multiple cell lines. Some of these genes were essential in only one cell line. Most of these genes are part of multi-protein complexes. This approach can be used to identify synthetic lethality by using the appropriate genetic background. CRISPRi and CRISPRa enable loss-of-function and gain-of-function screens in a similar manner. CRISPRi identified ~2,100 essential genes in the K562 cell line. CRISPR deletion screens have also been used to identify potential regulatory elements of a gene. For example, a technique called ScanDel was published which attempted this approach. The authors deleted regions outside a gene of interest (HPRT1, involved in a Mendelian disorder) in an attempt to identify regulatory elements of this gene. Gasperini et al. did not identify any distal regulatory elements for HPRT1 using this approach; however, such approaches can be extended to other genes of interest.
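The depletion logic can be illustrated with a small calculation: guide RNAs are counted before and after the screen, counts are normalized for sequencing depth, and genes whose guides show consistently negative log fold-changes are candidate essential genes. The guide names and counts below are hypothetical, and real analysis pipelines add statistical modelling on top of this.

```python
import numpy as np

# Hypothetical guide counts (guide -> (count_before, count_after)).
counts = {
    "GENE1_g1": (800, 60),   "GENE1_g2": (700, 40),
    "GENE2_g1": (500, 520),  "GENE2_g2": (650, 700),
}

before_total = sum(c[0] for c in counts.values())
after_total = sum(c[1] for c in counts.values())

def log2_fold_change(guide):
    b, a = counts[guide]
    # Normalize by library size (sequencing depth) before taking the ratio.
    return np.log2((a / after_total) / (b / before_total))

# Summarize per gene by the median of its guides' fold-changes.
genes = {g.split("_")[0] for g in counts}
for gene in sorted(genes):
    lfcs = [log2_fold_change(g) for g in counts if g.startswith(gene)]
    print(gene, round(float(np.median(lfcs)), 2))  # strongly negative => candidate essential gene
```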
Functional annotations for genes
Genome annotation
Putative genes can be identified by scanning a genome for regions likely to encode proteins, based on characteristics such as long open reading frames, transcriptional initiation sequences, and polyadenylation sites. A sequence identified as a putative gene must be confirmed by further evidence, such as similarity to cDNA or EST sequences from the same organism, similarity of the predicted protein sequence to known proteins, association with promoter sequences, or evidence that mutating the sequence produces an observable phenotype.
Rosetta stone approach
The Rosetta stone approach is a computational method for de-novo protein function prediction. It is based on the hypothesis that some proteins involved in a given physiological process may exist as two separate genes in one organism and as a single gene in another. Genomes are scanned for sequences that are independent in one organism and in a single open reading frame in another. If two genes have fused, it is predicted that they have similar biological functions that make such co-regulation advantageous.
Bioinformatics methods for Functional genomics
Because of the large quantity of data produced by these techniques and the desire to find biologically meaningful patterns, bioinformatics is crucial to the analysis of functional genomics data. Examples of techniques in this class are data clustering or principal component analysis for unsupervised machine learning (class detection), as well as artificial neural networks or support vector machines for supervised machine learning (class prediction, classification). Functional enrichment analysis is used to determine the extent of over- or under-expression (positive or negative regulators in the case of RNAi screens) of functional categories relative to a background set. Gene Ontology-based enrichment analysis is provided by DAVID and gene set enrichment analysis (GSEA), pathway-based analysis by Ingenuity and Pathway Studio, and protein complex-based analysis by COMPLEAT.
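As a small, self-contained illustration of the unsupervised side of such analyses (not any specific published pipeline), the sketch below clusters samples from a made-up expression matrix after principal component analysis, and then applies a hypergeometric test of the kind underlying many functional-enrichment tools; the matrix sizes, cluster count and gene-set sizes are arbitrary.

```python
import numpy as np
from scipy.stats import hypergeom
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Made-up expression matrix: 60 samples x 1000 genes.
expression = rng.normal(size=(60, 1000))

# Unsupervised structure detection: standardize, reduce, cluster.
scaled = StandardScaler().fit_transform(expression)
components = PCA(n_components=2).fit_transform(scaled)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(components)
print(np.bincount(labels))          # samples per cluster

# Functional enrichment: is a 40-gene category over-represented among
# 100 "hit" genes drawn from a background of 1000, if 12 hits fall in it?
background, category, hits, overlap = 1000, 40, 100, 12
p_value = hypergeom.sf(overlap - 1, background, category, hits)
print(f"enrichment p-value = {p_value:.3g}")
```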
New computational methods have been developed for understanding the results of a deep mutational scanning experiment. 'phydms' compares the result of a deep mutational scanning experiment to a phylogenetic tree. This allows the user to infer if the selection process in nature applies similar constraints on a protein as the results of the deep mutational scan indicate. This may allow an experimenter to choose between different experimental conditions based on how well they reflect nature. Deep mutational scanning has also been used to infer protein-protein interactions. The authors used a thermodynamic model to predict the effects of mutations in different parts of a dimer. Deep mutational scanning can also be used to infer protein structure. Strong positive epistasis between two mutations in a deep mutational scan can be indicative of two parts of the protein that are close to each other in 3-D space. This information can then be used to infer protein structure. A proof of principle of this approach was shown by two groups using the protein GB1.
Results from MPRA experiments have required machine learning approaches to interpret the data. A gapped k-mer SVM model has been used to infer the kmers that are enriched within cis-regulatory sequences with high activity compared to sequences with lower activity. These models provide high predictive power. Deep learning and random forest approaches have also been used to interpret the results of these high-dimensional experiments. These models are beginning to help develop a better understanding of non-coding DNA function towards gene-regulation.
Consortium projects
The ENCODE project
The ENCODE (Encyclopedia of DNA elements) project is an in-depth analysis of the human genome whose goal is to identify all the functional elements of genomic DNA, in both coding and non-coding regions. Important results include evidence from genomic tiling arrays that most nucleotides are transcribed as coding transcripts, non-coding RNAs, or random transcripts, the discovery of additional transcriptional regulatory sites, further elucidation of chromatin-modifying mechanisms.
The Genotype-Tissue Expression (GTEx) project
The GTEx project is a human genetics project aimed at understanding the role of genetic variation in shaping variation in the transcriptome across tissues. The project has collected a variety of tissue samples (> 50 different tissues) from more than 700 post-mortem donors. This has resulted in the collection of >11,000 samples. GTEx has helped understand the tissue-sharing and tissue-specificity of eQTLs. The genomic resource was developed to "enrich our understanding of how differences in our DNA sequence contribute to health and disease."
The Atlas of Variant Effects Alliance
The Atlas of Variant Effects Alliance (AVE), founded in 2020, is an international consortium aiming to catalog the impact of all possible genetic variants for disease-related functional genomics by creating variant effect maps that reveal the function of every possible single nucleotide change in a gene or regulatory element. AVE is funded in part through the Brotman Baty Institute at the University of Washington and the National Human Genome Research Institute, via funding from the Center of Excellence in Genome Science grant (NHGRI RM1HG010461).
See also
Systems biology
Structural genomics
Comparative genomics
Pharmacogenomics
MGED Society
Epigenetics
Bioinformatics
Epistasis and functional genomics
Synthetic viability
Protein function prediction
References
External links
European Science Foundation Programme on Frontiers of Functional Genomics
MUGEN NoE — Integrated Functional Genomics in Mutant Mouse Models
Nature insights: functional genomics
ENCODE
Molecular biology
Genomics | Functional genomics | [
"Chemistry",
"Biology"
] | 3,832 | [
"Biochemistry",
"Molecular biology"
] |
1,053,909 | https://en.wikipedia.org/wiki/Stein%20manifold | In mathematics, in the theory of several complex variables and complex manifolds, a Stein manifold is a complex submanifold of the vector space of n complex dimensions. They were introduced by and named after . A Stein space is similar to a Stein manifold but is allowed to have singularities. Stein spaces are the analogues of affine varieties or affine schemes in algebraic geometry.
Definition
Suppose $X$ is a complex manifold of complex dimension $n$ and let $\mathcal{O}(X)$ denote the ring of holomorphic functions on $X$. We call $X$ a Stein manifold if the following conditions hold:
$X$ is holomorphically convex, i.e. for every compact subset $K \subset X$, the so-called holomorphically convex hull,
$$\bar{K} = \{ z \in X : |f(z)| \leq \sup_{w \in K} |f(w)| \ \text{for all } f \in \mathcal{O}(X) \},$$
is also a compact subset of $X$.
$X$ is holomorphically separable, i.e. if $x \neq y$ are two points in $X$, then there exists $f \in \mathcal{O}(X)$ such that $f(x) \neq f(y)$.
Non-compact Riemann surfaces are Stein manifolds
Let X be a connected, non-compact Riemann surface. A deep theorem of Heinrich Behnke and Stein (1948) asserts that X is a Stein manifold.
Another result, attributed to Hans Grauert and Helmut Röhrl (1956), states moreover that every holomorphic vector bundle on X is trivial. In particular, every line bundle is trivial, so $H^1(X, \mathcal{O}_X^*) = 0$. The exponential sheaf sequence leads to the following exact sequence:
$$H^1(X, \mathcal{O}_X) \longrightarrow H^1(X, \mathcal{O}_X^*) \longrightarrow H^2(X, \mathbb{Z}) \longrightarrow H^2(X, \mathcal{O}_X)$$
Now Cartan's theorem B shows that $H^1(X, \mathcal{O}_X) = H^2(X, \mathcal{O}_X) = 0$, therefore $H^2(X, \mathbb{Z}) = 0$.
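Spelled out, the vanishing follows from exactness alone (a standard argument, written here in the notation reconstructed above):
$$\underbrace{H^1(X, \mathcal{O}_X)}_{=0} \longrightarrow H^1(X, \mathcal{O}_X^*) \longrightarrow H^2(X, \mathbb{Z}) \longrightarrow \underbrace{H^2(X, \mathcal{O}_X)}_{=0},$$
so the middle map is both injective and surjective, giving $H^2(X, \mathbb{Z}) \cong H^1(X, \mathcal{O}_X^*)$, and the latter group vanishes because every holomorphic line bundle on $X$ is trivial.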
This is related to the solution of the second Cousin problem.
Properties and examples of Stein manifolds
The standard complex space $\mathbb{C}^n$ is a Stein manifold.
Every domain of holomorphy in $\mathbb{C}^n$ is a Stein manifold.
Every closed complex submanifold of a Stein manifold is a Stein manifold, too.
The embedding theorem for Stein manifolds states the following: Every Stein manifold $X$ of complex dimension $n$ can be embedded into $\mathbb{C}^{2n+1}$ by a biholomorphic proper map.
These facts imply that a Stein manifold is a closed complex submanifold of complex space, whose complex structure is that of the ambient space (because the embedding is biholomorphic).
Every Stein manifold of (complex) dimension n has the homotopy type of an n-dimensional CW-complex.
In one complex dimension the Stein condition can be simplified: a connected Riemann surface is a Stein manifold if and only if it is not compact. This can be proved using a version of the Runge theorem for Riemann surfaces, due to Behnke and Stein.
Every Stein manifold is holomorphically spreadable, i.e. for every point $x \in X$, there are $n$ holomorphic functions defined on all of $X$ which form a local coordinate system when restricted to some open neighborhood of $x$.
Being a Stein manifold is equivalent to being a (complex) strongly pseudoconvex manifold. The latter means that it has a strongly pseudoconvex (or plurisubharmonic) exhaustive function, i.e. a smooth real function $\psi$ on $X$ (which can be assumed to be a Morse function) with $i\partial\bar{\partial}\psi > 0$, such that the subsets $\{ z \in X : \psi(z) \leq c \}$ are compact in $X$ for every real number $c$. This is a solution to the so-called Levi problem, named after Eugenio Levi (1911). The function $\psi$ invites a generalization of Stein manifold to the idea of a corresponding class of compact complex manifolds with boundary called Stein domains. A Stein domain is the preimage $\{ z : -\infty \leq \psi(z) \leq c \}$. Some authors call such manifolds therefore strictly pseudoconvex manifolds.
Related to the previous item, another equivalent and more topological definition in complex dimension 2 is the following: a Stein surface is a complex surface X with a real-valued Morse function f on X such that, away from the critical points of f, the field of complex tangencies to the preimage $X_c = f^{-1}(c)$ is a contact structure that induces an orientation on $X_c$ agreeing with the usual orientation as the boundary of $f^{-1}(-\infty, c]$. That is, $f^{-1}(-\infty, c]$ is a Stein filling of $X_c$.
Numerous further characterizations of such manifolds exist, in particular capturing the property of their having "many" holomorphic functions taking values in the complex numbers. See for example Cartan's theorems A and B, relating to sheaf cohomology. The initial impetus was to have a description of the properties of the domain of definition of the (maximal) analytic continuation of an analytic function.
In the GAGA set of analogies, Stein manifolds correspond to affine varieties.
Stein manifolds are in some sense dual to the elliptic manifolds in complex analysis which admit "many" holomorphic functions from the complex numbers into themselves. It is known that a Stein manifold is elliptic if and only if it is fibrant in the sense of so-called "holomorphic homotopy theory".
Relation to smooth manifolds
Every compact smooth manifold of dimension 2n, which has only handles of index ≤ n, has a Stein structure provided n > 2, and when n = 2 the same holds provided the 2-handles are attached with certain framings (framing less than the Thurston–Bennequin framing). Every closed smooth 4-manifold is a union of two Stein 4-manifolds glued along their common boundary.
Notes
References
Complex manifolds
Several complex variables | Stein manifold | [
"Mathematics"
] | 1,086 | [
"Several complex variables",
"Functions and mappings",
"Mathematical relations",
"Mathematical objects"
] |
1,053,990 | https://en.wikipedia.org/wiki/Figure%20%28music%29 |
A musical figure or figuration is the shortest idea in music; a short succession of notes, often recurring. It may have melodic pitch, harmonic progression, and rhythmic meter. The 1964 Grove's Dictionary defines the figure as "the exact counterpart of the German 'motiv' and the French 'motif'": it produces a "single complete and distinct impression". To Roger Scruton, however, a figure is distinguished from a motif in that a figure is background while a motif is foreground.
Allen Forte describes the term figuration as being applied to two distinct things.
A phrase originally presented or heard as a motif may become a figure that accompanies another melody, such as in the second movement of Claude Debussy's String Quartet. It is perhaps best to view a figure as a motif when it has special importance in a piece. According to White, motives are, "significant in the structure of the work," while figures or figurations are not and, "may often occur in accompaniment passages or in transitional or connective material designed to link two sections together," with the former being more common.
Minimalist music may be constructed entirely from figures. Scruton describes music by Philip Glass such as Akhnaten as "nothing but figures...endless daisy-chains".
A basic figure is known as a riff in American popular music.
Importance of Figures
Figures play a most important part in instrumental music, in which it is necessary that a strong and definite impression should be produced to answer the purpose of words, and convey the sense of vitality to the otherwise incoherent succession of sounds. In pure vocal music this is not the case, as on the one hand the words assist the audience to follow and understand what they hear, and on the other the quality of voices in combination is such as to render strong characteristic features somewhat inappropriate. But without strongly marked figures the very reason of existence of instrumental movements can hardly be perceived, and the success of a movement of any dimensions must ultimately depend, to a very large extent, on the appropriate development of the figures which are contained in the chief subjects. The common expression that a subject is very 'workable,' merely means that it contains well-marked figures; though it must be observed on the other hand, that there are not a few instances in which masterly treatment has invested with powerful interest a figure which at first sight would seem altogether deficient in character.
Examples
As clear an instance as could be given of the breaking up of a subject into its constituent figures for the purpose of development, is the treatment of the first subject of Beethoven's Pastoral Symphony, which he breaks up into three figures corresponding to the first three bars. As an example of his treatment of (a) may be taken—(b) is twice repeated no less than thirty-six times successively in the development of the movement; and (c) appears at the close as follows. Examples of this kind of treatment of the figures contained in subjects are very numerous in classical instrumental music, in various degrees of refinement and ingenuity; as in the 1st movement of Mozart's G minor Symphony; in the same movement of Beethoven's 8th Symphony; and in a large number of Bach's fugues, as for instance Nos. 2, 7, 16, of the Wohltemperirte Klavier. The beautiful little musical poem, the 18th fugue of that series, contains as happy a specimen of this device as could be cited.
In music of an ideally high order, everything should be recognizable as having a meaning; or, in other words, every part of the music should be capable of being analyzed into figures, so that even the most insignificant instrument in the orchestra should not be merely making sounds to fill up the mass of the harmony, but should be playing something which is worth playing in itself. It is of course impossible for any but the highest genius to carry this out consistently, but in proportion as music approaches to this ideal, it is of a high order as a work of art, and in the measure in which it recedes from it, it approaches more nearly to the mass of base, slovenly, or false contrivances which lie at the other extreme, and are not works of art at all. This will be very well recognized by a comparison of Schubert's method of treating the accompaniment of his songs and the method adopted in the large proportion of the thousands of 'popular' songs which annually make their appearance in this country. For even when the figure is as simple as in Wohin, Mein, or Ave Maria, the figure is there, and is clearly recognized, and is as different from mere sound or stuffing to support the voice as a living creature is from dead and inert clay.
Bach and Beethoven
Bach and Beethoven were the great masters in the use of figures, and both were content at times to make a short figure of three or four notes the basis of a whole movement. As examples of this may be quoted the truly famous rhythmic figure of the C minor Symphony (d), the figure of the Scherzo of the 9th Symphony (e), and the figure of the first movement of the last Sonata, in C minor (f). As a beautiful example from Bach may be quoted the Adagio from the Toccata in D minor (g), but it must be said that examples in his works are almost innumerable, and will meet the student at every turn. A very peculiar use which Bach occasionally makes of figures, is to use one as the bond of connection running through a whole movement by constant repetition, as in Prelude No. 10 of the Wohltemperirte Klavier, and in the slow movement of the Italian Concerto, where it serves as accompaniment to an impassioned recitative. In this case the figure is not identical on each repetition, but is freely modified, in such a way however that it is always recognized as the same, partly by the rhythm and partly by the relative positions of the successive notes. This manner of modifying a given figure shows a tendency in the direction of a mode of treatment which has become a feature in modern music: namely, the practice of transforming figures in order to show different aspects of the same thought, or to establish a connection between one thought and another by bringing out the characteristics they possess in common. As a simple specimen of this kind of transformation, may be quoted a passage from the first movement of Brahms's P.F. Quintet in F minor. The figure stands at first as at (h), then by transposition as at (i). Its first stage of transformation is (j); further (k) (l) (m) are progressive modifications towards the stage (n), which, having been repeated twice in different positions, appears finally as the figure immediately attached to the Cadence in D♭, thus. A similar very fine example—too familiar to need quotation here—is at the close of Beethoven's Overture to Coriolan.
The use which Wagner makes of strongly marked figures is very important, as he establishes a consistent connection between the characters and situations and the music by using appropriate figures (Leitmotive), which appear whenever the ideas or characters to which they belong come prominently forward.
That figures vary in intensity to an immense degree hardly requires to be pointed out; and it will also be obvious that figures of accompaniment do not require to be so marked as figures which occupy positions of individual importance. With regard to the latter it may be remarked that there is hardly any department in music in which true feeling and inspiration are more absolutely indispensable, since no amount of ingenuity or perseverance can produce such figures as that which opens the C-minor Symphony, or such soul-moving figures as those in the death march of Siegfried in Wagner's 'Götterdämmerung.'
As the common notion that music chiefly consists of pleasant tunes grows weaker, the importance of figures becomes proportionately greater. A succession of isolated tunes is always more or less inconsequent, however deftly they may be connected together, but by the appropriate use of figures and groups of figures, such as real musicians only can invent, and the gradual unfolding of all their latent possibilities, continuous and logical works of art may be constructed; such as will not merely tickle the hearer's fancy, but arouse profound interest, and raise him mentally and morally to a higher standard.
See also
Alberti bass
References
Bibliography
Attribution
Accompaniment
Formal sections in music analysis
Harmony
Melody
Rhythm and meter | Figure (music) | [
"Physics",
"Technology"
] | 1,759 | [
"Physical quantities",
"Time",
"Formal sections in music analysis",
"Rhythm and meter",
"Spacetime",
"Components"
] |
1,053,995 | https://en.wikipedia.org/wiki/Stevens%27s%20power%20law | Stevens' power law is an empirical relationship in psychophysics between an increased intensity or strength in a physical stimulus and the perceived magnitude increase in the sensation created by the stimulus. It is often considered to supersede the Weber–Fechner law, which is based on a logarithmic relationship between stimulus and sensation, because the power law describes a wider range of sensory comparisons, down to zero intensity.
The theory is named after psychophysicist Stanley Smith Stevens (1906–1973). Although the idea of a power law had been suggested by 19th-century researchers, Stevens is credited with reviving the law and publishing a body of psychophysical data to support it in 1957.
The general form of the law is
ψ(I) = kI^a,
where I is the intensity or strength of the stimulus in physical units (energy, weight, pressure, mixture proportions, etc.), ψ(I) is the magnitude of the sensation evoked by the stimulus, a is an exponent that depends on the type of stimulation or sensory modality, and k is a proportionality constant that depends on the units used.
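Because the power law becomes linear in log-log coordinates (log ψ = log k + a log I), the exponent a and constant k can be recovered from magnitude-estimation data with an ordinary least-squares fit. The sketch below illustrates this on synthetic data with an assumed exponent of 0.67 (roughly the value reported for loudness); the numbers are illustrative only, not an analysis of Stevens' data.

```python
import numpy as np

# Synthetic magnitude-estimation data: assumed "true" exponent 0.67,
# proportionality constant 2.0, with multiplicative response noise.
rng = np.random.default_rng(0)
intensity = np.logspace(0, 3, 30)                # stimulus intensities I
true_a, true_k = 0.67, 2.0
reported = true_k * intensity**true_a * rng.lognormal(0.0, 0.1, intensity.size)

# Stevens' law is linear in log-log space: log psi = log k + a * log I.
slope, intercept = np.polyfit(np.log(intensity), np.log(reported), 1)
a_hat, k_hat = slope, np.exp(intercept)

print(f"estimated exponent a = {a_hat:.2f}, constant k = {k_hat:.2f}")
```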
A distinction has been made between local psychophysics, where stimuli can only be discriminated with a probability around 50%, and global psychophysics, where the stimuli can be discriminated correctly with near certainty (Luce & Krumhansl, 1988). The Weber–Fechner law and methods described by L. L. Thurstone are generally applied in local psychophysics, whereas Stevens' methods are usually applied in global psychophysics.
Stevens reported a different exponent for each type of stimulation; for example, the exponent is about 0.67 for the loudness of a 3000 Hz tone and about 3.5 for an electric current passed through the fingers.
Methods
The principal methods used by Stevens to measure the perceived intensity of a stimulus were magnitude estimation and magnitude production. In magnitude estimation with a standard, the experimenter presents a stimulus called a standard and assigns it a number called the modulus. For subsequent stimuli, subjects report numerically their perceived intensity relative to the standard so as to preserve the ratio between the sensations and the numerical estimates (e.g., a sound perceived twice as loud as the standard should be given a number twice the modulus). In magnitude estimation without a standard (usually just magnitude estimation), subjects are free to choose their own standard, assigning any number to the first stimulus and all subsequent ones with the only requirement being that the ratio between sensations and numbers is preserved. In magnitude production, a number and a reference stimulus are given and subjects produce a stimulus that is perceived as that number times the reference. Also used is cross-modality matching, which generally involves subjects altering the magnitude of one physical quantity, such as the brightness of a light, so that its perceived intensity is equal to the perceived intensity of another type of quantity, such as warmth or pressure.
Criticisms
Stevens generally collected magnitude estimation data from multiple observers, averaged the data across subjects, and then fitted a power function to the data. Because the fit was generally reasonable, he concluded the power law was correct.
A principal criticism has been that Stevens' approach provides a direct test of neither the power law itself nor the underlying assumptions of the magnitude estimation/production method: it simply fits curves to data points. In addition, the power law can be deduced mathematically from the Weber–Fechner logarithmic function (Mackay, 1963), and the relation makes predictions consistent with data (Staddon, 1978). As with all psychometric studies, Stevens' approach ignores individual differences in the stimulus-sensation relationship, and there are generally large individual differences in this relationship that averaging the data will obscure.
Stevens' main assertion was that using magnitude estimations/productions respondents were able to make judgements on a ratio scale (i.e., if x and y are values on a given ratio scale, then there exists a constant k such that x = ky). In the context of axiomatic psychophysics, a testable property was formulated capturing the implicit underlying assumption this assertion entailed. Specifically, for two proportions p and q and three stimuli x, y, z: if y is judged p times x and z is judged q times y, then the stimulus t judged pq times x should be equal to z. This amounts to assuming that respondents interpret numbers in a veridical way. This property was unambiguously rejected. Without assuming veridical interpretation of numbers, another property was formulated that, if sustained, meant that respondents could make ratio-scaled judgments, namely: if y is judged p times x and z is judged q times y, while y′ is judged q times x and z′ is judged p times y′, then z should equal z′. This property has been sustained in a variety of situations.
Critics of the power law also point out that the validity of the law is contingent on the measurement of perceived stimulus intensity that is employed in the relevant experiments. Under the condition that respondents' numerical distortion function and the psychophysical functions could be separated, a behavioral condition was formulated that is equivalent to the psychophysical function being a power function. This condition was confirmed for just over half the respondents, and the power form was found to be a reasonable approximation for the rest.
It has also been questioned, particularly in terms of signal detection theory, whether any given stimulus is actually associated with a particular and absolute perceived intensity; i.e. one that is independent of contextual factors and conditions. Consistent with this, Luce (1990, p. 73) observed that "by introducing contexts such as background noise in loudness judgements, the shape of the magnitude estimation functions certainly deviates sharply from a power function". Indeed, nearly all sensory judgments can be changed by the context in which a stimulus is perceived.
See also
Perception
Sone
References
Luce, R. D. & Krumhansl, C. (1988) Measurement, scaling, and psychophysics. In R. C. Atkinson, R. J. Herrnstein, G. Lindzey, & R. D. Luce (Eds.) Stevens' Handbook of Experimental Psychology. New York: Wiley. Pp. 1–74.
Smelser, N.J., & Baltes, P.B. (2001). International encyclopedia of the social & behavioral sciences. pp. 15105–15106. Amsterdam; New York: Elsevier. .
Stevens, S.S. (1975), Geraldine Stevens, editor. Psychophysics: introduction to its perceptual, neural, and social prospects, Transaction Publishers, .
Perception
Behavioral concepts
Power laws
Psychophysics
Mathematical psychology
Psychoacoustics
it:Soglia percettiva | Stevens's power law | [
"Physics",
"Mathematics",
"Biology"
] | 1,334 | [
"Behavior",
"Mathematical psychology",
"Applied and interdisciplinary physics",
"Behavioral concepts",
"Applied mathematics",
"Psychophysics",
"Behaviorism"
] |
1,054,286 | https://en.wikipedia.org/wiki/World%20Food%20Prize | The World Food Prize is an international award recognizing the achievements of individuals who have advanced human development by improving the quality, quantity, or availability of food in the world. Conceived by Nobel Peace Prize laureate Norman Borlaug and established in 1986 through the support of General Foods, the prize is envisioned and promoted as the Nobel or the highest honors in the field of food and agriculture. It is now administered by the World Food Prize Foundation with support from numerous sponsors. Since 1987, the prize has been awarded annually to recognize contributions in any field involved in the world food supply, such as animal science, aquaculture, soil science, water conservation, nutrition, health, plant science, seed science, plant pathology, crop protection, food technology, food safety, policy, research, infrastructure, emergency relief, and poverty alleviation and hunger.
Laureates are honored and officially awarded their prize in Des Moines, Iowa, in an award ceremony held at Iowa State Capitol. Laureates are presented with a diploma, a commemorative sculpture designed by Saul Bass and a monetary award of $500,000. The Foundation also has the aim of "inspiring exceptional achievement in assuring adequate food and nutrition for all". A number of associated events and honors include the World Food Prize Symposium or the Borlaug Dialogue, the Iowa Hunger Summit and youth programs such as the Borlaug-Ruan International Internships.
History
Norman Borlaug (1914–2009) was awarded the Nobel Peace Prize in 1970 for contributions that resulted in the extensive increase in global food production. Aase Lionæs, chairperson of the Nobel Committee, explained that the committee saw the provision of much-needed food to the world as a path to peace, and that the increase in food production had given policy planners across the world more time to work out how to feed the growing population. Twelve years later, Borlaug approached the Nobel Foundation to propose a prize for food and agriculture. However, the Foundation was bound by Alfred Nobel's will, which did not allow for the creation of such a new prize. Borlaug continued his search for a sponsor elsewhere.
In 1986, General Foods Corporation, under Vice President A. S. Clausi's leadership, agreed to establish the prize and be the founding sponsor. The amount they agreed to, US$200,000, was equivalent to the value of the Nobel Prizes at the time. In 1990, the sponsorship was taken over by businessman and philanthropist John Ruan and his family. The Ruan family established the World Food Prize Foundation backed by an endowment of $10 million. In 2000, Kenneth M. Quinn was made the president. Borlaug, Ruan, and Quinn were all from the US state of Iowa. Barbara Stinson succeeded Quinn as the second president in 2019.
The former Des Moines Library was acquired and the Ruan family gave $5 million to renovate the building into the headquarters for the World Food Prize Foundation. A number of sponsors would go on to contribute over US$20 million in a campaign to transform the building into a public museum, the Hall of Laureates, to honor Borlaug and the work of the World Food Prize laureates. Other sponsors have included over 100 charitable foundations, corporations and individuals, who have helped sustain the prize and the Foundation's associated events. The Founder's Boardroom in the Hall of Laureates commemorates 27 individuals who played an important part in the foundation of the prize.
The first chairperson of the World Food Prize laureate selection committee was Norman Borlaug. Borlaug appointed the first laureate M. S. Swaminathan as his successor in 2009. Currently, Gebisa Ejeta, the 2009 laureate, is the chairperson. Apart from the chairperson who is a non-voting member, other members of the selection committee remain anonymous.
On January 24, 2023, the Foundation announced that former Iowa Governor and U.S. Ambassador to China Terry Branstad would take over as president, replacing outgoing former president Barbara Stinson.
Laureates
World Food Prize laureates include the following:
Associated events
The Foundation has expanded into a number of associated events, including the Norman E. Borlaug International Symposium, also known as the World Food Prize Symposium or the Borlaug Dialogue. A Youth Institute was established in 1994 to motivate young people in agriculture, food, population and related sciences. Youth Institutes have been set up in 24 states of the United States and in three other countries. Based on essays, high school students are selected to take part in the activities of these institutes. Participation in these institutes also makes one eligible for an eight-week internship program.
The Borlaug-Ruan International Internship provides high school students an eight-week opportunity for a hands-on experience, working with scientists and policymakers in hunger and nutrition at research centres around the world. The internship was founded in 1998 and has funded over 350 Borlaug-Ruan interns who have travelled to 34 agricultural research centres around the world. The Iowa Hunger Summit has taken place during the week of the World Food Prize events since 2007. The event is open to the public and celebrates the role Iowans play in fighting hunger and advancing food security each year.
See also
List of agriculture awards
References
Notes
Citations
Bibliography
External links
Food and drink awards
Agriculture awards
Awards established in 1986
General Foods | World Food Prize | [
"Technology"
] | 1,090 | [
"Science and technology awards",
"Agriculture awards"
] |
1,054,394 | https://en.wikipedia.org/wiki/Autothysis | Autothysis (from the Greek roots autos- "self" and thysia "sacrifice") or suicidal altruism is the process where an animal destroys itself via an internal rupturing or explosion of an organ which ruptures the skin. The term was proposed by Ulrich Maschwitz and Eleonore Maschwitz in 1974 to describe the defensive mechanism of Colobopsis saundersi, a species of ant. It is caused by a contraction of muscles around a large gland that leads to the breaking of the gland wall. Some termites (such as the soldiers of Globitermes sulphureus) release a sticky secretion by rupturing a gland near the skin of their neck, producing a tar effect in defense against ants.
Termites
Groups of termites whose soldiers have been found to use autothysis to defend their colonies include: Serritermes serrifer, Dentispicotermes, Genuotermes, and Orthognathotermes. Several species of the soldierless Apicotermitinae, for example those of the Grigiotermes and Ruptitermes genera, have workers that can also use autothysis. This is thought to be one of the most effective forms of defense that termites possess as the ruptured workers block the tunnels running into the nest and it causes a one-to-one exchange between attackers and defenders, meaning attacks have a high energy cost to predators.
The soldiers of the Neotropical termite family Serritermitidae have a defense strategy which involves front gland autothysis, with the body rupturing between the head and abdomen. When outside the nest they try to run away from attackers, and only use autothysis when in the nest to block tunnels up, preventing attackers entering.
Old workers of Neocapritermes taracua develop blue spots on their abdomens that are filled with copper-containing proteins (blue laccase). These react with a secretion from the labial gland upon autothysis to form a mixture which is toxic to other termites.
Ants
Some ants belonging to the genera Camponotus and Colobopsis have adapted to using autothysis as an altruistic defensive trait to better fight against arthropods and to possibly deter vertebrate predators for the benefit of the colony as a whole. These ants use autothysis as a self-destructive defense to protect their territory, but they use it differently from termites, in that their primary uses for autothysis do not include blocking the tunnels of their territory from attackers, but more so for combat purposes during territorial battles.
Early ants used mechanical stinging to defend themselves, but stings proved more useful against large vertebrate predators and less successful against other arthropods. Autothysis is therefore thought to have been selected in ants as a more effective way to kill arthropod enemies. The products of autothysis in ants are sticky and corrosive substances, released when the ants contract their gasters, causing a burst at an intersegmental fold as well as at the mandibular glands. The ants use this self-sacrifice to kill one or more enemies that become entangled in the sticky substance. A worker ant has been observed to wrap itself around an opponent, placing its dorsal gaster onto the opponent's head before expelling sticky corrosive material from its mouth and gaster, permanently sticking to the opponent while killing itself and the enemy, as well as any other enemies that become stuck to the products.
These ants mostly use autothysis against other arthropods, such as invading ant or termite colonies, and it is rather ineffective against larger vertebrate predators such as lizards or birds. The self-sacrifice is most useful against arthropods because the sticky adhesives in the products work best on the bodies of other arthropods. The compounds used in autothysis, however, may also help to deter vertebrate predators from eating the ants, because the products are inedible.
See also
Animal suicide
Anti-predator adaptation
Apoptosis
Autohaemorrhaging
Exploding animal
Self-destruct
References
Antipredator adaptations
Exploding animals
Insect ecology | Autothysis | [
"Chemistry",
"Biology"
] | 877 | [
"Antipredator adaptations",
"Biological defense mechanisms",
"Exploding animals",
"Explosions"
] |
1,054,629 | https://en.wikipedia.org/wiki/Setuid | The Unix and Linux access rights flags setuid and setgid (short for set user identity and set group identity) allow users to run an executable with the file system permissions of the executable's owner or group respectively and to change behaviour in directories. They are often used to allow users on a computer system to run programs with temporarily elevated privileges to perform a specific task. While the assumed user id or group id privileges provided are not always elevated, at a minimum they are specific.
The setuid and setgid flags are needed for tasks that require privileges different from those the user is normally granted, such as the ability to alter the system files or databases that store login passwords. Some of the tasks that require additional privileges may not be immediately obvious, though, such as the ping command, which must send and listen for control packets on a network interface.
File modes
The setuid and setgid bits are normally represented as the values 4 for setuid and 2 for setgid in the high-order octal digit of the file mode. For example, 6711 has both the setuid and setgid bits set (4 + 2 = 6), makes the file readable, writable and executable by the owner (7), and executable by the group (first 1) and by others (second 1). Most implementations have a symbolic representation of these bits; in the previous example, this could be u=rwx,go=x,ug+s.
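As a quick check of that arithmetic, here is a minimal Python sketch using the standard stat module (an illustration added here, not part of the original article), confirming that 6711 is exactly the sum of those bits:

import stat

# 6711 = setuid (4000) + setgid (2000) + rwx for the owner (0700)
#        + execute for the group (0010) + execute for others (0001)
mode = 0o6711
assert mode == (stat.S_ISUID | stat.S_ISGID
                | stat.S_IRWXU
                | stat.S_IXGRP | stat.S_IXOTH)

# Rendered as a symbolic string for a regular file: '-rws--s--x'
print(stat.filemode(stat.S_IFREG | mode))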
Typically, chmod does not have a recursive mode restricted to directories, so modifying an existing directory tree must be done manually, by visiting each directory and changing its mode, for example with a short script such as the sketch below.
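The following Python sketch is only an illustration of that manual approach (the /srv/shared path is hypothetical); it is not the command omitted from the original text:

import os
import stat

def add_setgid_to_directories(root):
    """Set the setgid bit on every directory under root (including root),
    leaving regular files untouched."""
    for dirpath, _dirnames, _filenames in os.walk(root):
        mode = os.stat(dirpath).st_mode
        os.chmod(dirpath, mode | stat.S_ISGID)

add_setgid_to_directories("/srv/shared")  # hypothetical example path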
Effects
The setuid and setgid flags have different effects depending on whether they are applied to a file or to a directory, and on whether the file is a binary executable or a non-binary executable. The flags have an effect only on binary executable files and not on scripts (e.g., Bash, Perl, Python).
When set on an executable file
When the setuid or setgid attributes are set on an executable file, then any users able to execute the file will automatically execute the file with the privileges of the file's owner (commonly root) and/or the file's group, depending upon the flags set. This allows the system designer to permit trusted programs to be run which a user would otherwise not be allowed to execute. These may not always be obvious. For example, the ping command may need access to networking privileges that a normal user cannot access; therefore it may be given the setuid flag to ensure that a user who needs to ping another system can do so, even if their account does not have the required privilege for sending packets.
Security impact
For security purposes, the invoking user is usually prohibited by the system from altering the new process in any way, such as by using ptrace, LD_LIBRARY_PATH or sending signals to it, to exploit the raised privilege, although signals from the terminal will still be accepted.
While the setuid feature is very useful in many cases, its improper use can pose a security risk if the setuid attribute is assigned to executable programs that are not carefully designed. Due to potential security issues, many operating systems ignore the setuid attribute when applied to executable shell scripts.
The presence of setuid executables explains why the chroot system call is not available to non-root users on Unix. See limitations of chroot for more details.
When set on a directory
Setting the setgid permission on a directory causes files and subdirectories created within to inherit its group ownership, rather than the primary group of the file-creating process. Created subdirectories also inherit the setgid bit. The policy is only applied during creation and, thus, only prospectively. Directories and files existing when the setgid bit is applied are unaffected, as are directories and files moved into the directory on which the bit is set.
This grants a group of users the capacity to work with shared files without explicitly setting permissions, but it is limited by the security-model expectation that the permissions of existing files do not change implicitly.
The setuid permission set on a directory is ignored on most UNIX and Linux systems. However FreeBSD can be configured to interpret setuid in a manner similar to setgid, in which case it forces all files and sub-directories created in a directory to be owned by that directory's owner - a simple form of inheritance. This is generally not needed on most systems derived from BSD, since by default directories are treated as if their setgid bit is always set, regardless of the actual value. As is stated in open(2), "When a new file is created it is given the group of the directory which contains it."
Examples
Checking permissions
Permissions of a file can be checked in octal form and/or alphabetic form with the command line tool stat
[ torvalds ~ ] $ stat -c "%a %A" ~/test/
1770 drwxrwx--T
SUID
4701 on an executable file owned by 'root' and the group 'root'
A user named 'thompson' attempts to execute the file. The executable permission for all users is set (the '1') so 'thompson' can execute the file. The file owner is 'root' and the SUID permission is set (the '4') - so the file is executed as 'root'.
The reason an executable would be run as 'root' is so that it can modify specific files that the user would not normally be allowed to, without giving the user full root access.
A default use of this can be seen with the /usr/bin/passwd binary file. /usr/bin/passwd needs to modify /etc/passwd and /etc/shadow, which store account information and password hashes for all users, and these can only be modified by the user 'root'.
[ thompson ~ ] $ stat -c "%a %U:%G %n" /usr/bin/passwd
4701 root:root /usr/bin/passwd
[ thompson ~ ] $ passwd
passwd: Changing password for thompson
The owner of the process is not the user running the executable file but the owner of the executable file
SGID
2770 on a directory named 'music' owned by the user 'root' and the group 'engineers'
A user named 'torvalds', who belongs primarily to the group 'torvalds' but secondarily to the group 'engineers', makes a directory named 'electronic' under the directory named 'music'. The new directory named 'electronic' inherits the group ownership 'engineers'. The same happens when making a new file named 'imagine.txt'.
Without SGID the group ownership of the new directory/file would have been 'torvalds' as that is the primary group of user 'torvalds'.
[ torvalds ~ ] $ groups torvalds
torvalds : torvalds engineers
[ torvalds ~ ] $ stat -c "%a %U:%G %n" ./music/
2770 root:engineers ./music/
[ torvalds ~ ] $ mkdir ./music/electronic
[ torvalds ~ ] $ stat -c "%U:%G %n" ./music/electronic/
torvalds:engineers ./music/electronic/
[ torvalds ~ ] $ echo 'NEW FILE' > ./music/imagine.txt
[ torvalds ~ ] $ stat -c "%U:%G %n" ./music/imagine.txt
torvalds:engineers ./music/imagine.txt
[ torvalds ~ ] $ touch ~/test
[ torvalds ~ ] $ stat -c "%U:%G %n" ~/test
torvalds:torvalds ~/test
Sticky bit
1770 on a directory named 'videogames' owned by the user 'torvalds' and the group 'engineers'.
A user named 'torvalds' creates a file named 'tekken' under the directory named 'videogames'. A user named 'wozniak', who is also part of the group 'engineers', attempts to delete the file named 'tekken' but he cannot, since he is not the owner.
Without sticky bit, 'wozniak' could have deleted the file, because the directory named 'videogames' allows read and write by 'engineers'. A default use of this can be seen at the /tmp folder.
[ torvalds /home/shared/ ] $ groups torvalds
torvalds : torvalds engineers
[ torvalds /home/shared/ ] $ stat -c "%a %U:%G %n" ./videogames/
1770 torvalds:engineers ./videogames/
[ torvalds /home/shared/ ] $ echo 'NEW FILE' > videogames/tekken
[ torvalds /home/shared/ ] $ su - wozniak
Password:
[ wozniak ~/ ] $ groups wozniak
wozniak : wozniak engineers
[ wozniak ~/ ] $ cd /home/shared/videogames
[ wozniak /home/shared/videogames/ ] $ rm tekken
rm: cannot remove ‘tekken’: Operation not permitted
Sticky bit with SGID
3171 on a directory named 'blog' owned by the group 'engineers' and the user 'root'
A user named 'torvalds' who belongs primarily to the group 'torvalds' but secondarily to the group 'engineers' creates a file or directory named 'thoughts' inside the directory 'blog'. A user named 'wozniak' who also belongs to the group 'engineers' cannot delete, rename, or move the file or directory named 'thoughts', because he is not the owner and the sticky bit is set. However, if 'thoughts' is a file, then 'wozniak' can edit it.
The sticky bit has the final decision. If the sticky bit and SGID had not been set, the user 'wozniak' could rename, move, or delete the file named 'thoughts' because the directory named 'blog' allows read and write by group, wozniak belongs to the group, and the default 0002 umask allows new files to be edited by group. The sticky bit and SGID could be combined with something such as a read-only umask or an append-only attribute.
[ torvalds /home/shared/ ] $ groups torvalds
torvalds : torvalds engineers
[ torvalds /home/shared/ ] $ stat -c "%a %U:%G %n" ./blog/
3171 root:engineers ./blog/
[ torvalds /home/shared/ ] $ echo 'NEW FILE' > ./blog/thoughts
[ torvalds /home/shared/ ] $ su - wozniak
Password:
[ wozniak ~/ ] $ cd /home/shared/blog
[ wozniak /home/shared/blog/ ] $ groups wozniak
wozniak : wozniak engineers
[ wozniak /home/shared/blog/ ] $ stat -c "%a %U:%G %n" ./thoughts
664 torvalds:engineers ./thoughts
[ wozniak /home/shared/blog/ ] $ rm thoughts
rm: cannot remove ‘thoughts’: Operation not permitted
[ wozniak /home/shared/blog/ ] $ mv thoughts /home/wozniak/
mv: cannot move ‘thoughts’ to ‘/home/wozniak/thoughts’: Operation not permitted
[ wozniak /home/shared/blog/ ] $ mv thoughts pondering
mv: cannot move ‘thoughts’ to ‘pondering’: Operation not permitted
[ wozniak /home/shared/blog/ ] $ echo 'REWRITE!' > thoughts
[ wozniak /home/shared/blog/ ] $ cat thoughts
REWRITE!
Security
Developers must design and implement programs that use this bit on executables carefully in order to avoid security vulnerabilities, including buffer overruns and path injection. Successful buffer-overrun attacks on vulnerable applications allow the attacker to execute arbitrary code under the rights of the exploited process. In the event that a vulnerable process uses the setuid bit to run as root, the code will execute with root privileges, in effect giving the attacker root access to the system on which the vulnerable process is running.
Of particular importance in the case of a setuid process is the environment of the process. If the environment is not properly sanitized by a privileged process, its behavior can be changed by the unprivileged process that started it. For example, GNU libc was at one point vulnerable to an exploit using setuid and an environment variable that allowed executing code from untrusted shared libraries.
History
The setuid bit was invented by Dennis Ritchie and included in su. His employer, then Bell Telephone Laboratories, applied for a patent in 1972; the patent was granted in 1979. The patent was later placed in the public domain.
See also
References
External links
Chen, Hao; Wagner, David; and Dean, Drew; Setuid Demystified (pdf)
Tsafrir, Dan; Da Silva, Dilma; and Wagner, David; The Murky Issue of Changing Process Identity: Revising Setuid Demystified (pdf)
Pollock, Wayne; Unix File and Directory Permissions and Modes
Computer security procedures
Unix file system technology
Patents placed into the public domain | Setuid | [
"Engineering"
] | 2,905 | [
"Cybersecurity engineering",
"Computer security procedures"
] |
1,054,913 | https://en.wikipedia.org/wiki/Straight-fourteen%20engine | A straight-14 engine or inline-14 engine is a fourteen-cylinder piston engine with all fourteen cylinders mounted in a straight line along the crankcase. This design results in a very long engine, therefore it has only been used as marine propulsion engines in large ships.
The only straight-14 engine known to reach production is part of the Wärtsilä-Sulzer RTA96-C family of 6-cylinder to 14-cylinder two-stroke marine engines. This engine is used in the Emma Mærsk, which was the world's largest container ship when it was built in 2006. The engine produces and displaces , has a bore of and a stroke of . The engine is long, high and weighs .
References
Straight-14
14-cylinder engines
14 | Straight-fourteen engine | [
"Engineering"
] | 160 | [
"Mechanical engineering stubs",
"Mechanical engineering"
] |
1,055,001 | https://en.wikipedia.org/wiki/Contra-rotating%20propellers | Aircraft equipped with contra-rotating propellers (CRP) coaxial contra-rotating propellers, or high-speed propellers, apply the maximum power of usually a single piston engine or turboprop engine to drive a pair of coaxial propellers in contra-rotation. Two propellers are arranged one behind the other, and power is transferred from the engine via a planetary gear or spur gear transmission. Although contra-rotating propellers are also known as counter-rotating propellers, the term is much more widely used when referring to airscrews on separate non-coaxial shafts turning in opposite directions.
Operation
When airspeed is low, the mass of the air flowing through the propeller disk (thrust) causes a significant amount of tangential or rotational air flow to be created by the spinning blades. The energy of this tangential air flow is wasted in a single-propeller design, and causes handling problems at low speed as the air strikes the vertical stabilizer, causing the aircraft to yaw left or right, depending on the direction of propeller rotation. To use this wasted effort, the placement of a second propeller behind the first takes advantage of the disturbed airflow.
A well-designed contra-rotating propeller will have no rotational air flow, pushing a maximum amount of air uniformly through the propeller disk, resulting in high performance and low induced energy loss. It also serves to counter the asymmetrical torque effect of a conventional propeller (see P-factor). Some contra-rotating systems were designed to be used at takeoff for maximum power and efficiency, while allowing one of the propellers to be disabled during cruise to extend flight time.
Advantages and disadvantages
The torque on the aircraft from a pair of contra-rotating propellers effectively cancels out.
Contra-rotating propellers have been found to be between 6% and 16% more efficient than normal propellers.
However they can be very noisy, with increases in noise in the axial (forward and aft) direction of up to 30 dB, and tangentially 10 dB. Most of this extra noise can be found in the higher frequencies. These substantial noise problems limit commercial applications. One possibility is to enclose the contra-rotating propellers in a shroud. It is also helpful if the tip speed or the loading of the blades is reduced, if the aft propeller has fewer blades or a smaller diameter than the fore propeller, or if the spacing between the aft and fore propellers is increased.
The efficiency of a contra-rotating propeller is somewhat offset by its mechanical complexity and the added weight of this gearing that makes the aircraft heavier, thus some performance is sacrificed to carry it. Nonetheless, coaxial contra-rotating propellers and rotors have been used in several military aircraft, such as the Tupolev Tu-95 "Bear".
They are also being examined for use in airliners.
Use in aircraft
While several nations experimented with contra-rotating propellers in aircraft, only the United Kingdom and the Soviet Union produced them in large numbers. The first aircraft fitted with a contra-rotating propeller flew in the US, when two inventors from Fort Worth, Texas, tested the concept on an aircraft.
United Kingdom
A contra-rotating propeller was patented by F. W. Lanchester in 1907.
Some of the more successful British aircraft with contra-rotating propellers are the Avro Shackleton, powered by the Rolls-Royce Griffon engine, and the Fairey Gannet, which used the Double Mamba Mk.101 engine. In the Double Mamba two separate power sections drove one propeller each, allowing one power section (engine) to be shut down in flight, increasing endurance.
Another naval aircraft, the Westland Wyvern, had contra-rotating propellers. The Martin-Baker MB 5 test aircraft also used this propeller type.
Later variants of the Supermarine Spitfire and Seafire used the Griffon with contra-rotating props. In the Spitfire/Seafire and Shackleton's case the primary reason for using contra-rotating propellers was to increase the propeller blade-area, and hence absorb greater engine power, within a propeller diameter limited by the height of the aircraft's undercarriage. The Short Sturgeon used two Merlin 140s with contra-rotating propellers.
The Bristol Brabazon prototype airliner used eight Bristol Centaurus engines driving four pairs of contra-rotating propellers, each engine driving a single propeller.
The post-war SARO Princess prototype flying boat airliner also had eight of its ten engines driving contra-rotating propellers.
USSR, Russia and Ukraine
In the 1950s, the Soviet Union's Kuznetsov Design Bureau developed the NK-12 turboprop. It drives an eight-blade contra-rotating propeller and, at , it is the most powerful turboprop in service. Four NK-12 engines power the Tupolev Tu-95 Bear, the only turboprop bomber to enter service, as well as one of the fastest propeller-driven aircraft. The Tu-114, an airliner derivative of the Tu-95, holds the world speed record for propeller aircraft. The Tu-95 was also the first Soviet bomber to have intercontinental range. The Tu-126 AEW aircraft and Tu-142 maritime patrol aircraft are two more NK-12 powered designs derived from the Tu-95.
The NK-12 engine powers another well-known Soviet aircraft, the Antonov An-22 Antheus, a heavy-lift cargo aircraft. At the time of its introduction, the An-22 was the largest aircraft in the world and is still by far the world's largest turboprop-powered aircraft. From the 1960s through the 1970s, it set several world records in the categories of maximum payload-to-height ratio and maximum payload lifted to altitude.
Of lesser note is the use of the NK-12 engine in the A-90 Orlyonok, a mid-size Soviet ekranoplan. The A-90 uses one NK-12 engine mounted at the top of its T-tail, along with two turbofans installed in the nose.
In the 1980s, Kuznetsov continued to develop powerful contra-rotating engines. The NK-110, which was tested in the late 1980s, had a contra-rotating propeller configuration with four blades in front and four in back, like the NK-12. Its diameter was smaller than the NK-12's, but it produced a power output of , delivering a takeoff thrust of . Even more powerful was the NK-62, which was in development throughout most of the decade. The NK-62 had an identical propeller diameter and blade configuration to the NK-110, but it offered a higher takeoff thrust of . The associated NK-62M had a takeoff thrust of , and it could deliver of emergency thrust. Unlike the NK-12, however, these later engines were not adopted by any of the aircraft design bureaus.
In 1994, Antonov produced the An-70, a heavy transport aircraft. It is powered by four Progress D-27 propfan engines driving contra-rotating propellers. The characteristics of the D-27 engine and its propeller make it a propfan, a hybrid between a turbofan engine and a turboprop engine.
United States
The United States worked with several prototypes, including the Northrop XB-35, XB-42 Mixmaster, the Douglas XTB2D Skypirate, the Curtiss XBTC, the A2J Super Savage, the Boeing XF8B, the XP-56 Black Bullet, the Fisher P-75 Eagle and the tail-sitting Convair XFY "Pogo" and Lockheed XFV "Salmon" VTOL fighters and the Hughes XF-11 reconnaissance plane. The Convair R3Y Tradewind flying boat entered service with contra-rotating propellers. However, both piston-engined and turboprop-powered propeller-driven aircraft were reaching their zenith and new technological developments such as the advent of the pure turbojet and turbofan engines, both without propellers, meant that the designs were quickly eclipsed.
The US propeller manufacturer, Hamilton Standard, bought a Fairey Gannet in 1983 to study the effects of counter rotation on propeller noise and blade vibratory stresses. The Gannet was particularly suitable because the independently-driven propellers provided a comparison between counter and single rotation.
Ultralight applications
An Austrian company, Sun Flightcraft, distributes a contra-rotating gearbox for use on Rotax 503 and 582 engines on ultralight and microlight aircraft. The Coax-P was developed by Hans Neudorfer of NeuraJet and allows powered hang-gliders and parachutes to develop 15 to 20 percent more power while reducing torque moments. The manufacturer also reports reduced noise levels from dual contra-rotating props using the Coax-P gearbox.
See also
Contra-rotating marine propellers
Toroidal propeller ("looped propeller")
References
External links
Luftfahrtmuseum.com – Further information and pictures of contra rotators for the Fairey Gannet and Shackleton
A History of Aircraft Using Contra-Rotating Propellers (Part 1) – Aircraft Engine Historical Society
A History of Aircraft Using Contra-Rotating Propellers (Part 2) – Aircraft Engine Historical Society
A History of Aircraft Using Contra-Rotating Propellers (Part 3) – Aircraft Engine Historical Society
A History of Aircraft Using Contra-Rotating Propellers (Part 4) – Aircraft Engine Historical Society
Aircraft engines
Aircraft configurations
Propellers | Contra-rotating propellers | [
"Technology",
"Engineering"
] | 1,907 | [
"Aerospace engineering",
"Aircraft configurations",
"Engines",
"Aircraft engines"
] |
1,055,010 | https://en.wikipedia.org/wiki/Oxo%20%28food%29 | Oxo (stylized OXO) is a brand of food products, including stock cubes, herbs and spices, dried gravy, and yeast extract. The original product was the beef stock cube, and the company now also markets chicken and other flavour cubes, including versions with Chinese and Indian spices. The cubes are broken up and used as flavouring in meals or gravy or dissolved into boiling water to produce a bouillon.
In the United Kingdom, the OXO brand belongs to Premier Foods. In South Africa, the Oxo brand is owned and manufactured by Mars, Incorporated and in Canada is owned and manufactured by Knorr.
History
Around 1840, Justus von Liebig developed a concentrated meat extract. Liebig's Extract of Meat Company (Lemco; established in the United Kingdom) promoted it, starting in 1866. The original product was a viscous liquid, containing only meat extract and 4% salt. In 1899, the company introduced the trademark Oxo; the origin of the name is unknown, but presumably comes from the word "ox." Since the cost of liquid Oxo remained beyond the reach of many families, the company launched a research project to develop a solid version that could be sold in cubes for a penny. After much research, Oxo produced their first cubes in 1910 and further increased Oxo's popularity. During World War I, 100 million Oxo cubes were provided to the British armed forces, all of them individually hand-wrapped.
The Vestey Group acquired Lemco in 1924, and the factory was renamed El Anglo. Vestey merged with Brooke Bond in 1968, which was in turn acquired by Unilever in 1984. Unilever sold the Oxo brand to the Campbell Soup Company in 2001, and Premier Foods bought Campbell's UK operation in 2006. This sale included sites at both Worksop and Kings Lynn. The Worksop plant currently produces Oxo cubes.
In South Africa, Oxo is now a brand of Mars, Incorporated. The only product marketed under the Oxo brand in South Africa was a yeast-extract-based spread. The product also contained a small portion of beef extract, giving it a slightly "beefier" taste than other yeast extracts. At the beginning of 2015, Mars Consumer Products Africa (Pty), Ltd discontinued Oxo spread in South Africa, with no prior communication to the public.
Marketing
In 1908, Oxo (alongside Odol mouthwash and Indian Foot Powder) was one of the sponsors of the London Olympic Games (despite claims by Coca-Cola of being the "first" commercial sponsor of the games) and supplied marathon runners with Oxo drinks "to fortify them." During the first half of the 20th century, Oxo was promoted through the issue of recipes, gifts, and sponsorships, before fading into the background as a part of the fabric of English life in the latter parts of the century.
In the 1920s, the Liebig's Extract of Meat Company acquired a wharf on the south bank of the River Thames in London. There they erected a factory, demolishing most of the original building and constructing a tower with distinctive windows appearing to spell out 'OXO', which became known as the Oxo Tower.
In 1958, Oxo commenced their longest running television advertising campaign, "Life with Katie." Katie was played by Mary Holland, and her long-suffering husband by Peter Moynihan. The campaign ran until the early 1970s, including two seasons where the family traveled to the US to film. By this time, the couple were joined by their "son."
As styles and tastes changed, Oxo moved to a more up-to-date format with Dennis Waterman as the sole face of the brand in the mid '70s.
In 1966, Oxo had a sponsored show on the offshore radio station, Wonderful Radio London. The show was presented by Tony Windsor; his assistant was a woman called "Katie." Oxo presented a new recipe in each episode.
Oxo launched another long-running advertising campaign in the UK in 1983, when a second "Oxo Family" debuted on commercial television. The father was played by Michael Redfern, the mother was played by Lynda Bellingham, while the children were played by Blair MacKichan, Colin McCoy and Alison Reynolds. The advertisements typically featured the family sitting down to a meal at which Oxo gravy would be served. The product was not always mentioned by name, occasionally appearing only as a logo in the corner of the screen at the end of the commercial. Throughout the 1980s and 1990s, the family were seen to grow older, and, when the campaign was retired in 1999, the family moved out of the house.
On 11 November 2014, it was announced that a 1984 Oxo advert starring Lynda Bellingham would be screened on Christmas Day as a tribute to the actress, who had died of colon cancer the previous month. It was aired during a commercial break of Coronation Street.
See also
Bovril
Oxo Tower
References
External links
History of Campbell Soup UK
Food brands of the United Kingdom
Food ingredients
Campbell Soup Company brands
Mars brands
Yeast extract spreads
Umami enhancers
Companies based in Nottinghamshire
Food paste
Food and drink companies of the United Kingdom
Former Unilever brands
Premier Foods brands | Oxo (food) | [
"Technology"
] | 1,090 | [
"Food ingredients",
"Components"
] |
1,055,255 | https://en.wikipedia.org/wiki/Altazimuth%20mount | An altazimuth mount or alt-azimuth mount is a simple two-axis mount for supporting and rotating an instrument about two perpendicular axes – one vertical and the other horizontal. Rotation about the vertical axis varies the azimuth (compass bearing) of the pointing direction of the instrument. Rotation about the horizontal axis varies the altitude angle (angle of elevation) of the pointing direction.
These mounts are used, for example, with telescopes, cameras, radio antennas, heliostat mirrors, solar panels, and guns and similar weapons.
Several names are given to this kind of mount, including altitude-azimuth, azimuth-elevation and various abbreviations thereof. A gun turret is essentially an alt-azimuth mount for a gun, and a standard camera tripod is an alt-azimuth mount as well.
Astronomical telescope altazimuth mounts
When used as an astronomical telescope mount, the biggest advantage of an alt-azimuth mount is the simplicity of its mechanical design. The primary disadvantage is its inability to follow astronomical objects in the night sky as the Earth spins on its axis. On the other hand, an equatorial mount only needs to be rotated about a single axis, at a constant rate, to follow the rotation of the night sky (diurnal motion). Altazimuth mounts need to be rotated about both axes at variable rates, achieved via microprocessor based two-axis drive systems, to track equatorial motion. This imparts an uneven rotation to the field of view that also has to be corrected via a microprocessor based counter rotation system. On smaller telescopes an equatorial platform is sometimes used to add a third "polar axis" to overcome these problems, providing an hour or more of motion in the direction of right ascension to allow for astronomical tracking. The design also does not allow for the use of mechanical setting circles to locate astronomical objects although modern digital setting circles have removed this shortcoming.
Another limitation is the problem of gimbal lock at zenith pointing. When tracking at elevations close to 90°, the azimuth axis must rotate very quickly; if the altitude is exactly 90°, the speed is infinite. Thus, altazimuth telescopes, although they can point in any direction, cannot track smoothly within a "zenith blind spot", commonly 0.5 or 0.75 degrees from the zenith (i.e. at elevations greater than 89.5° or 89.25°, respectively).
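To make the zenith problem concrete, here is a small Python sketch (added here for illustration; it uses the standard hour angle/declination to altitude/azimuth conversion, and the latitude and declination values are just example numbers) showing how fast the azimuth axis would have to move for a star passing almost directly overhead:

import math

def alt_az(lat_deg, dec_deg, ha_deg):
    """Convert hour angle/declination to altitude/azimuth in degrees,
    using the standard spherical-astronomy transformation
    (azimuth measured from north through east)."""
    lat, dec, ha = map(math.radians, (lat_deg, dec_deg, ha_deg))
    alt = math.asin(math.sin(lat) * math.sin(dec)
                    + math.cos(lat) * math.cos(dec) * math.cos(ha))
    az = math.atan2(-math.cos(dec) * math.sin(ha),
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.sin(lat) * math.cos(ha))
    return math.degrees(alt), math.degrees(az) % 360.0

# A star whose declination is within 0.01 degrees of the site latitude
# culminates almost exactly at the zenith.
lat, dec = 50.0, 49.99
for ha in (-1.0, -0.1, 0.1, 1.0):   # hour angle in degrees; 15 degrees = 1 hour
    alt, az = alt_az(lat, dec, ha)
    print(f"HA = {ha:+5.1f}  alt = {alt:6.2f}  az = {az:7.2f}")
# The azimuth sweeps through well over 150 degrees while the hour angle
# changes by only 2 degrees (about 8 minutes of time), so the azimuth axis
# cannot keep up as the star passes near the zenith.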
Current applications
Typical current applications of altazimuth mounts include the following.
Research telescopes
In the largest telescopes, the mass and cost of an equatorial mount is prohibitive and they have been superseded by computer-controlled altazimuth mounts. The simple structure of an altazimuth mount allows significant cost reductions, in spite of the additional cost associated with the more complex tracking and image-orienting mechanisms. An altazimuth mount also reduces the cost in the dome structure covering the telescope since the simplified motion of the telescope means the structure can be more compact.
Amateur telescopes
Beginner telescopes: Altazimuth mounts are cheap and simple to use.
Dobsonian telescopes: John Dobson popularized a simplified altazimuth mount design for Newtonian reflectors because of its ease of construction; Dobson's innovation was to build the mount from non-machined parts that could be found in any hardware store, such as plywood, Formica, and plastic plumbing parts, combined with modern materials like nylon or Teflon.
"GoTo" telescopes: It has often proved more convenient to build a mechanically simpler altazimuth mount and use a motion controller to manipulate both axes simultaneously to track an object, when compared with a more mechanically complex equatorial mount that requires minimally complex control of a single motor.
Gallery
See also
Dobsonian mount
Equatorial mount
Heliostat
Horizontal coordinate system – a system to locate objects on the celestial sphere via Alt-azimuth coordinates
Parallactic angle
Solar tracker
Tripod
References
External links
Images of the Unitron altazimuth mount
Telescopes | Altazimuth mount | [
"Astronomy"
] | 821 | [
"Telescopes",
"Astronomical instruments"
] |
1,055,257 | https://en.wikipedia.org/wiki/Ammonium%20sulfate%20precipitation | Ammonium sulfate precipitation is one of the most commonly used methods for large and laboratory scale protein purification and fractionation that can be used to separate proteins by altering their solubility in the presence of a high salt concentration.
Properties
Ammonium sulfate is an inorganic salt with a high solubility that dissociates into ammonium (NH4+) and sulfate (SO42−) ions in aqueous solutions. Ammonium sulfate is especially useful as a precipitant because it is highly soluble, stabilizes protein structure, has a relatively low density, is readily available, and is relatively inexpensive.
Mechanism
Ammonium sulfate, as well as other neutral salts, will stabilize proteins by preferential solvation. Proteins are usually stored in ammonium sulfate because it inhibits bacterial growth. With the addition of ammonium sulfate, proteins unfolded by denaturants can be pushed into their native conformations. This can be seen with the folding of recombinant proteins.
The solubility of proteins varies according to the ionic strength of the solution, thus according to the salt concentration. At low ion concentrations (less than 0.5 mol/L), the solubility of proteins increases with increasing salt concentration, an effect termed "salting in". As the salt concentration is further increased, the solubility of the protein begins to decrease. At a sufficiently high ionic strength, the protein will precipitate out of the solution, an effect termed "salting out". When the ammonium (NH4+) and sulfate (SO42−) ions are within the aqueous solution, they are attracted to the opposite charges on the compound that is being purified. This attraction of opposite charges prevents the water molecules from interacting with the compound being purified, leading to the precipitation or "salting out".
Proteins differ markedly in their solubilities at high ionic strength; therefore, "salting out" is a very useful procedure to assist in the purification of the desired protein. Ammonium sulfate is commonly used for precipitation because of its high solubility; additionally, it forms two ions that are high in the Hofmeister series. Because these two ions are at the end of the Hofmeister series, ammonium sulfate can also stabilize protein structure. The solubility behavior of a protein in ammonium sulfate is usually expressed as a function of the percentage of saturation. A solubility curve can be determined by plotting the log of the experimentally determined solubility, expressed as mg/mL, versus the percentage saturation of ammonium sulfate.
In the salting-out mechanism, the salt is excluded from the layer of water closely associated with the surface of the protein, known as the hydration layer. The hydration layer plays a vital role in sustaining solubility and the protein's native conformation. There are three main protein-water interactions: ion hydration of charged side chains, hydrogen bonding between polar groups and water, and hydrophobic hydration. Once salt is added to the mixture, the surface tension of the water increases, which strengthens hydrophobic interactions between water and the protein of interest. The protein of interest then reduces its surface area, which diminishes its contact with the solvent; this takes the form of folding and self-association, which ultimately leads to precipitation. The folding and self-association of the protein pushes out free water, leading to an increase in entropy and making the process energetically favorable.
Procedure
Typically, the ammonium sulfate concentration is increased stepwise, and the precipitated protein is recovered at each stage. This is usually done by adding solid ammonium sulfate; however, calculating the amount of ammonium sulfate that should be added to a solution to achieve the desired concentration may be difficult because the addition of ammonium sulfate significantly increases the volume of the solution. The amount of ammonium sulfate that should be added to the solution can be determined from published nomograms or by using an online calculator. The direct addition of solid ammonium sulfate does change the pH of the solution, which can lead to loss of enzyme activity. In those cases, the addition of saturated ammonium sulfate in a suitable buffer is used as an alternative to adding solid ammonium sulfate. In either approach, the resulting protein precipitate can be dissolved individually in a standard buffer and assayed to determine the total protein content.
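As an illustration of such a calculation, the Python sketch below uses one commonly quoted room-temperature approximation; it is not taken from this article's sources, and published tables differ slightly in the constants, so the result should be treated only as an estimate:

def ammonium_sulfate_to_add(volume_l, start_sat, target_sat):
    """Approximate grams of solid ammonium sulfate needed to raise a solution
    of the given volume (litres) from start_sat% to target_sat% saturation,
    using the empirical formula g/L = 533*(S2 - S1)/(100 - 0.3*S2)
    (an approximation valid near room temperature)."""
    if not 0 <= start_sat < target_sat <= 100:
        raise ValueError("require 0 <= start_sat < target_sat <= 100")
    grams_per_litre = 533.0 * (target_sat - start_sat) / (100.0 - 0.3 * target_sat)
    return grams_per_litre * volume_l

# Example: bringing 0.25 L of a protein solution from 20% to 50% saturation
print(round(ammonium_sulfate_to_add(0.25, 20, 50), 1), "g of ammonium sulfate")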
The ammonium sulfate concentration added should be increased to a value that will precipitate most of the protein of interest whilst leaving the maximum amount of protein contaminants still in the solution. The precipitated protein of interest can subsequently be recovered by centrifugation and dissolved in standard buffer to prepare the sample for the next stage of purification.
In the next stage of purification, all of this added salt needs to be removed from the protein. One way to do so is dialysis, but dialysis further dilutes the concentrated protein. The better way of removing ammonium sulfate from the protein is to mix the precipitated protein in a buffer containing SDS, Tris-HCl, and phenol and to centrifuge the mixture. The precipitate from this centrifugation will contain concentrated, salt-free protein.
Applications
Ammonium sulfate precipitation is a useful technique as an initial step in protein purification because it enables quick, bulk precipitation of cellular proteins. It is also often employed during the later stages of purification to concentrate protein from dilute solution following procedures such as gel filtration. The drawback of this method is that oftentimes different substances can precipitate along with the protein, and other purification techniques must be performed, such as ion chromatography or size-exclusion chromatography.
References
Biochemical separation processes | Ammonium sulfate precipitation | [
"Chemistry",
"Biology"
] | 1,187 | [
"Biochemistry methods",
"Separation processes",
"Biochemical separation processes"
] |
1,055,334 | https://en.wikipedia.org/wiki/Equatorial%20mount | An equatorial mount is a mount for instruments that compensates for Earth's rotation by having one rotational axis, called polar axis, parallel to the Earth's axis of rotation. This type of mount is used for astronomical telescopes and cameras. The advantage of an equatorial mount lies in its ability to allow the instrument attached to it to stay fixed on any celestial object with diurnal motion by driving one axis at a constant speed. Such an arrangement is called a sidereal drive or clock drive. Equatorial mounts achieve this by aligning their rotational axis with the Earth, a process known as polar alignment.
Astronomical telescope mounts
In astronomical telescope mounts, the equatorial axis (the right ascension) is paired with a second perpendicular axis of motion (known as the declination). The equatorial axis of the mount is often equipped with a motorized "clock drive", that rotates that axis one revolution every 23 hours and 56 minutes in exact sync with the apparent diurnal motion of the sky. They may also be equipped with setting circles to allow for the location of objects by their celestial coordinates. Equatorial mounts differ from mechanically simpler altazimuth mounts, which require variable speed motion around both axes to track a fixed object in the sky. Also, for astrophotography, the image does not rotate in the focal plane, as occurs with altazimuth mounts when they are guided to track the target's motion, unless a rotating erector prism or other field-derotator is installed.
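For orientation, the constant drive rate required of the clock drive is easy to work out; the short Python sketch below does so (added here for illustration; the 360-tooth worm wheel is an assumed example figure, not something stated in this article):

# One revolution of the right ascension axis per sidereal day.
SIDEREAL_DAY_S = 23 * 3600 + 56 * 60 + 4.1      # about 86164.1 seconds
rate_deg_per_s = 360.0 / SIDEREAL_DAY_S
print(f"sidereal rate: {rate_deg_per_s:.6f} deg/s "
      f"({rate_deg_per_s * 3600:.2f} arcsec/s)")

# With a hypothetical 360-tooth worm wheel on the polar axis, the worm
# itself must turn once every SIDEREAL_DAY_S / 360 seconds.
print(f"one worm revolution every {SIDEREAL_DAY_S / 360:.1f} s")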
Equatorial telescope mounts come in many designs. In the last twenty years motorized tracking has increasingly been supplemented with computerized object location. There are two main types. Digital setting circles use a small computer, loaded with an object database, that is attached to encoders. The computer monitors the telescope's position in the sky. The operator must push the telescope. Go-to systems use (in most cases) a worm and ring gear system driven by servo or stepper motors, and the operator need not touch the instrument at all to change its position in the sky. The computers in these systems are typically either hand-held in a control "paddle" or supplied through an adjacent laptop computer which is also used to capture images from an electronic camera. The electronics of modern telescope systems often include a port for autoguiding. A special instrument tracks a star and makes adjustments to the telescope's position while photographing the sky. To do so the autoguider must be able to issue commands through the telescope's control system. These commands can compensate for very slight errors in the tracking performance, such as periodic error caused by the worm drive that makes the telescope move.
In new observatory designs, equatorial mounts have been out of favor for decades in large-scale professional applications. Massive new instruments are most stable when mounted in an alt-azimuth (up down, side-to-side) configuration. Computerized tracking and field-derotation are not difficult to implement at the professional level. At the amateur level, however, equatorial mounts remain popular, particularly for astrophotography.
German equatorial mount
In the German equatorial mount, (sometimes called a "GEM" for short) the primary structure is a T-shape, where the lower bar is the right ascension axis (lower diagonal axis in image), and the upper bar is the declination axis (upper diagonal axis in image). The mount was developed by Joseph von Fraunhofer for the Great Dorpat Refractor that was finished in 1824. The telescope is placed on one end of the declination axis (top left in image), and a suitable counterweight on other end of it (bottom right). The right ascension axis has bearings below the T-joint, that is, it is not supported above the declination axis.
Open fork mount
The open fork mount has a fork attached to a right ascension axis at its base. The telescope is attached to two pivot points at the other end of the fork so it can swing in declination. Most modern mass-produced catadioptric reflecting telescopes (200 mm or larger in diameter) tend to be of this type. The mount resembles an altazimuth mount, but with the azimuth axis tilted and lined up to match the Earth's rotation axis using a piece of hardware usually called a "wedge".
Many mid-size professional telescopes also have equatorial forks; these are usually in the range of 0.5–2.0 meters in diameter.
English or Yoke mount
The English mount or Yoke mount has a frame or "yoke" with right ascension axis bearings at the top and the bottom ends, and a telescope attached inside the midpoint of the yoke allowing it to swing on the declination axis. The telescope is usually fitted entirely inside the fork, although there are exceptions such as the Mount Wilson 2.5 m reflector, and there are no counterweights as with the German mount.
The original English fork design is disadvantaged in that it does not allow the telescope to point too near the north or south celestial pole.
Horseshoe mount
The horseshoe mount overcomes the design disadvantage of English or Yoke mounts by replacing the polar bearing with an open "horseshoe" structure to allow the telescope to access Polaris and stars near it. The Hale Telescope is the most prominent example of a horseshoe mount in use.
Cross-axis mount
The Cross-axis or English cross axis mount is like a big "plus" sign (+). The right ascension axis is supported at both ends, and the declination axis is attached to it at approximately midpoint with the telescope on one end of the declination axis and a counter weight on the other.
Equatorial platform
An equatorial platform is a specially designed platform that allows any device sitting on it to track on an equatorial axis. It achieves this by having a surface that pivots about a "virtual polar axis". This gives equatorial tracking to anything sitting on the platform, from small cameras up to entire observatory buildings. These platforms are often used with altazimuth mounted amateur astronomical telescopes, such as the common Dobsonian telescope type, to overcome that type of mount's inability to track the night sky.
See also
Altazimuth mount
Barn door tracker
Equatorial room
Hexapod-Telescope
List of telescope parts and construction
List of telescope types
Parallactic angle
Polar mount - a similar mount used with satellite dishes
Poncet Platform
References
Telescopes | Equatorial mount | [
"Astronomy"
] | 1,315 | [
"Telescopes",
"Astronomical instruments"
] |
1,055,357 | https://en.wikipedia.org/wiki/Sheaf%20cohomology | In mathematics, sheaf cohomology is the application of homological algebra to analyze the global sections of a sheaf on a topological space. Broadly speaking, sheaf cohomology describes the obstructions to solving a geometric problem globally when it can be solved locally. The central work for the study of sheaf cohomology is Grothendieck's 1957 Tôhoku paper.
Sheaves, sheaf cohomology, and spectral sequences were introduced by Jean Leray at the prisoner-of-war camp Oflag XVII-A in Austria. From 1940 to 1945, Leray and other prisoners organized a "université en captivité" ("university in captivity") in the camp.
Leray's definitions were simplified and clarified in the 1950s. It became clear that sheaf cohomology was not only a new approach to cohomology in algebraic topology, but also a powerful method in complex analytic geometry and algebraic geometry. These subjects often involve constructing global functions with specified local properties, and sheaf cohomology is ideally suited to such problems. Many earlier results such as the Riemann–Roch theorem and the Hodge theorem have been generalized or understood better using sheaf cohomology.
Definition
The category of sheaves of abelian groups on a topological space X is an abelian category, and so it makes sense to ask when a morphism f: B → C of sheaves is injective (a monomorphism) or surjective (an epimorphism). One answer is that f is injective (respectively surjective) if and only if the associated homomorphism on stalks Bx → Cx is injective (respectively surjective) for every point x in X. It follows that f is injective if and only if the homomorphism B(U) → C(U) of sections over U is injective for every open set U in X. Surjectivity is more subtle, however: the morphism f is surjective if and only if for every open set U in X, every section s of C over U, and every point x in U, there is an open neighborhood V of x in U such that s restricted to V is the image of some section of B over V. (In words: every section of C lifts locally to sections of B.)
As a result, the question arises: given a surjection B → C of sheaves and a section s of C over X, when is s the image of a section of B over X? This is a model for all kinds of local-vs.-global questions in geometry. Sheaf cohomology gives a satisfactory general answer. Namely, let A be the kernel of the surjection B → C, giving a short exact sequence

0 → A → B → C → 0

of sheaves on X. Then there is a long exact sequence of abelian groups, called sheaf cohomology groups:

0 → H0(X,A) → H0(X,B) → H0(X,C) → H1(X,A) → H1(X,B) → H1(X,C) → H2(X,A) → ⋯
where H0(X,A) is the group A(X) of global sections of A on X. For example, if the group H1(X,A) is zero, then this exact sequence implies that every global section of C lifts to a global section of B. More broadly, the exact sequence makes knowledge of higher cohomology groups a fundamental tool in aiming to understand sections of sheaves.
Grothendieck's definition of sheaf cohomology, now standard, uses the language of homological algebra. The essential point is to fix a topological space X and think of cohomology as a functor from sheaves of abelian groups on X to abelian groups. In more detail, start with the functor E ↦ E(X) from sheaves of abelian groups on X to abelian groups. This is left exact, but in general not right exact. Then the groups Hi(X,E) for integers i are defined as the right derived functors of the functor E ↦ E(X). This makes it automatic that Hi(X,E) is zero for i < 0, and that H0(X,E) is the group E(X) of global sections. The long exact sequence above is also straightforward from this definition.
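As the simplest worked example (a standard fact, stated here for illustration rather than taken from this article's sources): when X is a single point, a sheaf of abelian groups on X is just an abelian group, the global-sections functor is exact, and so H0(X,E) = E(X) while Hj(X,E) = 0 for all j > 0.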
The definition of derived functors uses that the category of sheaves of abelian groups on any topological space X has enough injectives; that is, for every sheaf E there is an injective sheaf I with an injection E → I. It follows that every sheaf E has an injective resolution:

0 → E → I0 → I1 → I2 → ⋯

Then the sheaf cohomology groups Hi(X,E) are the cohomology groups (the kernel of one homomorphism modulo the image of the previous one) of the chain complex of abelian groups:

I0(X) → I1(X) → I2(X) → ⋯
Standard arguments in homological algebra imply that these cohomology groups are independent of the choice of injective resolution of E.
The definition is rarely used directly to compute sheaf cohomology. It is nonetheless powerful, because it works in great generality (any sheaf of abelian groups on any topological space), and it easily implies the formal properties of sheaf cohomology, such as the long exact sequence above. For specific classes of spaces or sheaves, there are many tools for computing sheaf cohomology, some discussed below.
Functoriality
For any continuous map f: X → Y of topological spaces, and any sheaf E of abelian groups on Y, there is a pullback homomorphism

f*: Hj(Y,E) → Hj(X,f*(E))

for every integer j, where f*(E) denotes the inverse image sheaf or pullback sheaf. If f is the inclusion of a subspace X of Y, f*(E) is the restriction of E to X, often just called E again, and the pullback of a section s from Y to X is called the restriction s|X.
Pullback homomorphisms are used in the Mayer–Vietoris sequence, an important computational result. Namely, let X be a topological space which is a union of two open subsets U and V, and let E be a sheaf on X. Then there is a long exact sequence of abelian groups:

0 → H0(X,E) → H0(U,E) ⊕ H0(V,E) → H0(U∩V,E) → H1(X,E) → H1(U,E) ⊕ H1(V,E) → H1(U∩V,E) → H2(X,E) → ⋯
Sheaf cohomology with constant coefficients
For a topological space X and an abelian group A, the constant sheaf AX means the sheaf of locally constant functions with values in A. The sheaf cohomology groups Hj(X,AX) with constant coefficients are often written simply as Hj(X,A), unless this could cause confusion with another version of cohomology such as singular cohomology.
For a continuous map f: X → Y and an abelian group A, the pullback sheaf f*(AY) is isomorphic to AX. As a result, the pullback homomorphism makes sheaf cohomology with constant coefficients into a contravariant functor from topological spaces to abelian groups.
For any spaces X and Y and any abelian group A, two homotopic maps f and g from X to Y induce the same homomorphism on sheaf cohomology:

f* = g*: Hj(Y,A) → Hj(X,A)
It follows that two homotopy equivalent spaces have isomorphic sheaf cohomology with constant coefficients.
Let X be a paracompact Hausdorff space which is locally contractible, even in the weak sense that every open neighborhood U of a point x contains an open neighborhood V of x such that the inclusion V → U is homotopic to a constant map. Then the singular cohomology groups of X with coefficients in an abelian group A are isomorphic to sheaf cohomology with constant coefficients, H*(X,AX). For example, this holds for X a topological manifold or a CW complex.
As a result, many of the basic calculations of sheaf cohomology with constant coefficients are the same as calculations of singular cohomology. See the article on cohomology for the cohomology of spheres, projective spaces, tori, and surfaces.
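For instance, one of those standard computations, stated here for convenience (a well-known fact rather than anything specific to this article's sources): for the n-sphere Sn with n ≥ 1, the constant-coefficient groups are

Hj(Sn,Z) ≅ Z for j = 0 and j = n, and Hj(Sn,Z) = 0 for all other j.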
For arbitrary topological spaces, singular cohomology and sheaf cohomology (with constant coefficients) can be different. This happens even for H0. The singular cohomology H0(X,Z) is the group of all functions from the set of path components of X to the integers Z, whereas sheaf cohomology H0(X,ZX) is the group of locally constant functions from X to Z. These are different, for example, when X is the Cantor set. Indeed, the sheaf cohomology H0(X,ZX) is a countable abelian group in that case, whereas the singular cohomology H0(X,Z) is the group of all functions from X to Z, which has cardinality 2^(2^ℵ0).
For a paracompact Hausdorff space X and any sheaf E of abelian groups on X, the cohomology groups Hj(X,E) are zero for j greater than the covering dimension of X. (This does not hold in the same generality for singular cohomology: for example, there is a compact subset of Euclidean space R3 that has nonzero singular cohomology in infinitely many degrees.) The covering dimension agrees with the usual notion of dimension for a topological manifold or a CW complex.
Flabby and soft sheaves
A sheaf E of abelian groups on a topological space X is called acyclic if Hj(X,E) = 0 for all j > 0. By the long exact sequence of sheaf cohomology, the cohomology of any sheaf can be computed from any acyclic resolution of E (rather than an injective resolution). Injective sheaves are acyclic, but for computations it is useful to have other examples of acyclic sheaves.
A sheaf E on X is called flabby (French: flasque) if every section of E on an open subset of X extends to a section of E on all of X. Flabby sheaves are acyclic. Godement defined sheaf cohomology via a canonical flabby resolution of any sheaf; since flabby sheaves are acyclic, Godement's definition agrees with the definition of sheaf cohomology above.
A sheaf E on a paracompact Hausdorff space X is called soft if every section of the restriction of E to a closed subset of X extends to a section of E on all of X. Every soft sheaf is acyclic.
Some examples of soft sheaves are the sheaf of real-valued continuous functions on any paracompact Hausdorff space, or the sheaf of smooth (C∞) functions on any smooth manifold. More generally, any sheaf of modules over a soft sheaf of commutative rings is soft; for example, the sheaf of smooth sections of a vector bundle over a smooth manifold is soft.
For example, these results form part of the proof of de Rham's theorem. For a smooth manifold X, the Poincaré lemma says that the de Rham complex is a resolution of the constant sheaf RX:

0 → RX → ΩX0 → ΩX1 → ΩX2 → ⋯

where ΩXj is the sheaf of smooth j-forms and the map ΩXj → ΩXj+1 is the exterior derivative d. By the results above, the sheaves ΩXj are soft and therefore acyclic. It follows that the sheaf cohomology of X with real coefficients is isomorphic to the de Rham cohomology of X, defined as the cohomology of the complex of real vector spaces:

0 → ΩX0(X) → ΩX1(X) → ΩX2(X) → ⋯
The other part of de Rham's theorem is to identify sheaf cohomology and singular cohomology of X with real coefficients; that holds in greater generality, as discussed above.
Čech cohomology
Čech cohomology is an approximation to sheaf cohomology that is often useful for computations. Namely, let 𝒰 be an open cover of a topological space X, and let E be a sheaf of abelian groups on X. Write the open sets in the cover as Ui for elements i of a set I, and fix an ordering of I. Then Čech cohomology Hj(𝒰,E) is defined as the cohomology of an explicit complex of abelian groups whose jth group is

∏ E(Ui0 ∩ ⋯ ∩ Uij),

the product running over all (j+1)-tuples i0 < i1 < ⋯ < ij of elements of I.

There is a natural homomorphism Hj(𝒰,E) → Hj(X,E). Thus Čech cohomology is an approximation to sheaf cohomology using only the sections of E on finite intersections of the open sets Ui.
If every finite intersection V of the open sets in 𝒰 has no higher cohomology with coefficients in E, meaning that Hj(V,E) = 0 for all j > 0, then the homomorphism from Čech cohomology to sheaf cohomology is an isomorphism.
Another approach to relating Čech cohomology to sheaf cohomology is as follows. The Čech cohomology groups Ȟj(X,E) are defined as the direct limit of Hj(𝒰,E) over all open covers 𝒰 of X (where open covers are ordered by refinement). There is a homomorphism Ȟj(X,E) → Hj(X,E) from Čech cohomology to sheaf cohomology, which is an isomorphism for j ≤ 1. For arbitrary topological spaces, Čech cohomology can differ from sheaf cohomology in higher degrees. Conveniently, however, Čech cohomology is isomorphic to sheaf cohomology for any sheaf on a paracompact Hausdorff space.
The isomorphism implies a description of H1(X,E) for any sheaf E of abelian groups on a topological space X: this group classifies the E-torsors (also called principal E-bundles) over X, up to isomorphism. (This statement generalizes to any sheaf of groups G, not necessarily abelian, using the non-abelian cohomology set H1(X,G).) By definition, an E-torsor over X is a sheaf S of sets together with an action of E on S such that every point in X has an open neighborhood on which S is isomorphic to E, with E acting on itself by translation. For example, on a ringed space (X,OX), it follows that the Picard group of invertible sheaves on X is isomorphic to the sheaf cohomology group H1(X,OX*), where OX* is the sheaf of units in OX.
Relative cohomology
For a subset Y of a topological space X and a sheaf E of abelian groups on X, one can define relative cohomology groups:
for integers j. Other names are the cohomology of X with support in Y, or (when Y is closed in X) local cohomology. A long exact sequence relates relative cohomology to sheaf cohomology in the usual sense:
When Y is closed in X, cohomology with support in Y can be defined as the derived functors of the functor
the group of sections of E that are supported on Y.
There are several isomorphisms known as excision. For example, if X is a topological space with subspaces Y and U such that the closure of Y is contained in the interior of U, and E is a sheaf on X, then the restriction
is an isomorphism. (So cohomology with support in a closed subset Y only depends on the behavior of the space X and the sheaf E near Y.) Also, if X is a paracompact Hausdorff space that is the union of closed subsets A and B, and E is a sheaf on X, then the restriction
is an isomorphism.
Cohomology with compact support
Let X be a locally compact topological space. (In this article, a locally compact space is understood to be Hausdorff.) For a sheaf E of abelian groups on X, one can define cohomology with compact support Hcj(X,E). These groups are defined as the derived functors of the functor of compactly supported sections:
There is a natural homomorphism Hcj(X,E) →
Hj(X,E), which is an isomorphism for X compact.
For a sheaf E on a locally compact space X, the compactly supported cohomology of X × R with coefficients in the pullback of E is a shift of the compactly supported cohomology of X:
It follows, for example, that Hcj(Rn,Z) is isomorphic to Z if j = n and is zero otherwise.
Compactly supported cohomology is not functorial with respect to arbitrary continuous maps. For a proper map f: Y → X of locally compact spaces and a sheaf E on X, however, there is a pullback homomorphism
on compactly supported cohomology. Also, for an open subset U of a locally compact space X and a sheaf E on X, there is a pushforward homomorphism known as extension by zero:
Both homomorphisms occur in the long exact localization sequence for compactly supported cohomology, for a locally compact space X and a closed subset Y:
Cup product
For any sheaves A and B of abelian groups on a topological space X, there is a bilinear map, the cup product
for all i and j. Here A⊗B denotes the tensor product over Z, but if A and B are sheaves of modules over some sheaf OX of commutative rings, then one can map further from Hi+j(X,A⊗ZB) to Hi+j(X,A⊗OXB). In particular, for a sheaf OX of commutative rings, the cup product makes the direct sum
into a graded-commutative ring, meaning that uv = (−1)^(ij)vu for all u in Hi and v in Hj.
Complexes of sheaves
The definition of sheaf cohomology as a derived functor extends to define cohomology of a topological space X with coefficients in any complex E of sheaves:
In particular, if the complex E is bounded below (the sheaf Ej is zero for j sufficiently negative), then E has an injective resolution I just as a single sheaf does. (By definition, I is a bounded below complex of injective sheaves with a chain map E → I that is a quasi-isomorphism.) Then the cohomology groups Hj(X,E) are defined as the cohomology of the complex of abelian groups
The cohomology of a space with coefficients in a complex of sheaves was earlier called hypercohomology, but usually now just "cohomology".
More generally, for any complex of sheaves E (not necessarily bounded below) on a space X, the cohomology group Hj(X,E) is defined as a group of morphisms in the derived category of sheaves on X:
where ZX is the constant sheaf associated to the integers, and E[j] means the complex E shifted j steps to the left.
Poincaré duality and generalizations
A central result in topology is the Poincaré duality theorem: for a closed oriented connected topological manifold X of dimension n and a field k, the group Hn(X,k) is isomorphic to k, and the cup product Hj(X,k) × Hn−j(X,k) → Hn(X,k) ≅ k
is a perfect pairing for all integers j. That is, the resulting map from Hj(X,k) to the dual space Hn−j(X,k)* is an isomorphism. In particular, the vector spaces Hj(X,k) and Hn−j(X,k)* have the same (finite) dimension.
Many generalizations are possible using the language of sheaf cohomology. If X is an oriented n-manifold, not necessarily compact or connected, and k is a field, then cohomology is the dual of cohomology with compact support:
For any manifold X and field k, there is a sheaf oX on X, the orientation sheaf, which is locally (but perhaps not globally) isomorphic to the constant sheaf k. One version of Poincaré duality for an arbitrary n-manifold X is the isomorphism:
More generally, if E is a locally constant sheaf of k-vector spaces on an n-manifold X and the stalks of E have finite dimension, then there is an isomorphism
With coefficients in an arbitrary commutative ring rather than a field, Poincaré duality is naturally formulated as an isomorphism from cohomology to Borel–Moore homology.
Verdier duality is a vast generalization. For any locally compact space X of finite dimension and any field k, there is an object DX in the derived category D(X) of sheaves on X called the dualizing complex (with coefficients in k). One case of Verdier duality is the isomorphism:
For an n-manifold X, the dualizing complex DX is isomorphic to the shift oX[n] of the orientation sheaf. As a result, Verdier duality includes Poincaré duality as a special case.
Alexander duality is another useful generalization of Poincaré duality. For any closed subset X of an oriented n-manifold M and any field k, there is an isomorphism:
This is interesting already for X a compact subset of M = Rn, where it says (roughly speaking) that the cohomology of Rn−X is the dual of the sheaf cohomology of X. In this statement, it is essential to consider sheaf cohomology rather than singular cohomology, unless one makes extra assumptions on X such as local contractibility.
Higher direct images and the Leray spectral sequence
Let f: X → Y be a continuous map of topological spaces, and let E be a sheaf of abelian groups on X. The direct image sheaf f*E is the sheaf on Y defined by
for any open subset U of Y. For example, if f is the map from X to a point, then f*E is the sheaf on a point corresponding to the group E(X) of global sections of E.
The functor f* from sheaves on X to sheaves on Y is left exact, but in general not right exact. The higher direct image sheaves Rif*E on Y are defined as the right derived functors of the functor f*. Another description is that Rif*E is the sheaf associated to the presheaf
on Y. Thus, the higher direct image sheaves describe the cohomology of inverse images of small open sets in Y, roughly speaking.
The Leray spectral sequence relates cohomology on X to cohomology on Y. Namely, for any continuous map f: X → Y and any sheaf E on X, there is a spectral sequence
This is a very general result. The special case where f is a fibration and E is a constant sheaf plays an important role in homotopy theory under the name of the Serre spectral sequence. In that case, the higher direct image sheaves are locally constant, with stalks the cohomology groups of the fibers F of f, and so the Serre spectral sequence can be written as
for an abelian group A.
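In the usual indexing, the Leray spectral sequence for a continuous map f: X → Y with coefficients in a sheaf E, and its special case the Serre spectral sequence for a fibration with fiber F and constant coefficients A, take the standard forms sketched below in LaTeX notation:

```latex
% Leray spectral sequence (standard form)
E_2^{p,q} \;=\; H^p\bigl(Y,\, R^q f_* E\bigr) \;\Longrightarrow\; H^{p+q}(X, E)

% Serre spectral sequence (fibration with fiber F, constant coefficients A)
E_2^{p,q} \;=\; H^p\bigl(Y,\, H^q(F, A)\bigr) \;\Longrightarrow\; H^{p+q}(X, A)
```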
A simple but useful case of the Leray spectral sequence is that for any closed subset X of a topological space Y and any sheaf E on X, writing f: X → Y for the inclusion, there is an isomorphism
As a result, any question about sheaf cohomology on a closed subspace can be translated to a question about the direct image sheaf on the ambient space.
Finiteness of cohomology
There is a strong finiteness result on sheaf cohomology. Let X be a compact Hausdorff space, and let R be a principal ideal domain, for example a field or the ring Z of integers. Let E be a sheaf of R-modules on X, and assume that E has "locally finitely generated cohomology", meaning that for each point x in X, each integer j, and each open neighborhood U of x, there is an open neighborhood V ⊂ U of x such that the image of Hj(U,E) → Hj(V,E) is a finitely generated R-module. Then the cohomology groups Hj(X,E) are finitely generated R-modules.
For example, for a compact Hausdorff space X that is locally contractible (in the weak sense discussed above), the sheaf cohomology group Hj(X,Z) is finitely generated for every integer j.
One case where the finiteness result applies is that of a constructible sheaf. Let X be a topologically stratified space. In particular, X comes with a sequence of closed subsets
such that each difference Xi−Xi−1 is a topological manifold of dimension i. A sheaf E of R-modules on X is constructible with respect to the given stratification if the restriction of E to each stratum Xi−Xi−1 is locally constant, with stalk a finitely generated R-module. A sheaf E on X that is constructible with respect to the given stratification has locally finitely generated cohomology. If X is compact, it follows that the cohomology groups Hj(X,E) of X with coefficients in a constructible sheaf are finitely generated.
More generally, suppose that X is compactifiable, meaning that there is a compact stratified space W containing X as an open subset, with W–X a union of connected components of strata. Then, for any constructible sheaf E of R-modules on X, the R-modules Hj(X,E) and Hcj(X,E) are finitely generated. For example, any complex algebraic variety X, with its classical (Euclidean) topology, is compactifiable in this sense.
Cohomology of coherent sheaves
In algebraic geometry and complex analytic geometry, coherent sheaves are a class of sheaves of particular geometric importance. For example, an algebraic vector bundle (on a locally Noetherian scheme) or a holomorphic vector bundle (on a complex analytic space) can be viewed as a coherent sheaf, but coherent sheaves have the advantage over vector bundles that they form an abelian category. On a scheme, it is also useful to consider the quasi-coherent sheaves, which include the locally free sheaves of infinite rank.
A great deal is known about the cohomology groups of a scheme or complex analytic space with coefficients in a coherent sheaf. This theory is a key technical tool in algebraic geometry. Among the main theorems are results on the vanishing of cohomology in various situations, results on finite-dimensionality of cohomology, comparisons between coherent sheaf cohomology and singular cohomology such as Hodge theory, and formulas on Euler characteristics in coherent sheaf cohomology such as the Riemann–Roch theorem.
Sheaves on a site
In the 1960s, Grothendieck defined the notion of a site, meaning a category equipped with a Grothendieck topology. A site C axiomatizes the notion of a set of morphisms Vα → U in C being a covering of U. A topological space X determines a site in a natural way: the category C has objects the open subsets of X, with morphisms being inclusions, and with a set of morphisms Vα → U being called a covering of U if and only if U is the union of the open subsets Vα. The motivating example of a Grothendieck topology beyond that case was the étale topology on schemes. Since then, many other Grothendieck topologies have been used in algebraic geometry: the fpqc topology, the Nisnevich topology, and so on.
The definition of a sheaf works on any site. So one can talk about a sheaf of sets on a site, a sheaf of abelian groups on a site, and so on. The definition of sheaf cohomology as a derived functor also works on a site. So one has sheaf cohomology groups Hj(X, E) for any object X of a site and any sheaf E of abelian groups. For the étale topology, this gives the notion of étale cohomology, which led to the proof of the Weil conjectures. Crystalline cohomology and many other cohomology theories in algebraic geometry are also defined as sheaf cohomology on an appropriate site.
See also
de Rham theorem
Notes
References
External links
The thread "Sheaf cohomology and injective resolutions" on MathOverflow
The thread "Sheaf cohomology" on Stack Exchange
Cohomology theories
Homological algebra
Sheaf theory
Topological methods of algebraic geometry | Sheaf cohomology | [
"Mathematics"
] | 5,965 | [
"Mathematical structures",
"Fields of abstract algebra",
"Topology",
"Sheaf theory",
"Category theory",
"Homological algebra"
] |
1,055,365 | https://en.wikipedia.org/wiki/Sangaku | Sangaku or san gaku () are Japanese geometrical problems or theorems on wooden tablets which were placed as offerings at Shinto shrines or Buddhist temples during the Edo period by members of all social classes.
History
The sangaku were painted in color on wooden tablets (ema) and hung in the precincts of Buddhist temples and Shinto shrines as offerings to the kami and buddhas, as challenges to the congregants, or as displays of the solutions to questions. Many of these tablets were lost during the period of modernization that followed the Edo period, but around nine hundred are known to remain.
Fujita Kagen (1765–1821), a Japanese mathematician of prominence, published the first collection of sangaku problems, his Shimpeki Sampo (Mathematical problems Suspended from the Temple) in 1790, and in 1806 a sequel, the Zoku Shimpeki Sampo.
During this period Japan applied strict regulations to commerce and foreign relations for western countries so the tablets were created using Japanese mathematics, developed in parallel to western mathematics. For example, the connection between an integral and its derivative (the fundamental theorem of calculus) was unknown, so sangaku problems on areas and volumes were solved by expansions in infinite series and term-by-term calculation.
Select examples
A typical problem, which is presented on an 1824 tablet in Gunma Prefecture, covers the relationship of three touching circles with a common tangent, a special case of Descartes' theorem. Given the size of the two outer large circles, what is the size of the small circle between them? The answer is:
(See also Ford circle.)
Soddy's hexlet, previously thought to have been discovered in the West in 1937, had already been discovered on a sangaku dating from 1822.
One sangaku problem from Sawa Masayoshi and another from Jihei Morikawa were solved only recently.
See also
Equal incircles theorem
Japanese theorem for concyclic polygons
Japanese theorem for concyclic quadrilaterals
Problem of Apollonius
Recreational mathematics
Seki Takakazu
Notes
References
Fukagawa, Hidetoshi, and Dan Pedoe. (1989). Japanese temple geometry problems = Sangaku. Winnipeg: Charles Babbage. OCLC 474564475
__ and Dan Pedoe. (1991) Tōkyō: Mori Kitashuppan. OCLC 47500620
__ and Tony Rothman. (2008). Sacred Mathematics: Japanese Temple Geometry. Princeton: Princeton University Press. OCLC 181142099
Huvent, Géry. (2008). Sangaku. Le mystère des énigmes géométriques japonaises. Paris: Dunod. OCLC 470626755
Rehmeyer, Julie, "Sacred Geometry", Science News, March 21, 2008.
External links
Sangaku (Japanese votive tablets featuring mathematical puzzles)
Japanese Temple Geometry Problem
Sangaku: Reflections on the Phenomenon
Sangaku Journal of Mathematics
Euclidean geometry
Japanese mathematics
Recreational mathematics | Sangaku | [
"Mathematics"
] | 628 | [
"Recreational mathematics"
] |
1,055,370 | https://en.wikipedia.org/wiki/Lucas%20pseudoprime | Lucas pseudoprimes and Fibonacci pseudoprimes are composite integers that pass certain tests which all primes and very few composite numbers pass: in this case, criteria relative to some Lucas sequence.
Baillie-Wagstaff-Lucas pseudoprimes
Baillie and Wagstaff define Lucas pseudoprimes as follows: Given integers P and Q, where P > 0 and D = P² − 4Q,
let Uk(P, Q) and Vk(P, Q) be the corresponding Lucas sequences.
Let n be a positive integer and let (D/n) be the Jacobi symbol. We define δ(n) = n − (D/n).
If n is a prime that does not divide Q, then the following congruence condition holds: Uδ(n) ≡ 0 (mod n).
If this congruence does not hold, then n is not prime.
If n is composite, then this congruence usually does not hold. These are the key facts that make Lucas sequences useful in primality testing.
The congruence () represents one of two congruences defining a Frobenius pseudoprime. Hence, every Frobenius pseudoprime is also a Baillie-Wagstaff-Lucas pseudoprime, but the converse does not always hold.
Some good references are chapter 8 of the book by Bressoud and Wagon (with Mathematica code), pages 142–152 of the book by Crandall and Pomerance, and pages 53–74 of the book by Ribenboim.
Lucas probable primes and pseudoprimes
A Lucas probable prime for a given (P, Q) pair is any positive integer n for which equation () above is true (see, page 1398).
A Lucas pseudoprime for a given (P, Q) pair is a positive composite integer n for which equation () is true (see, page 1391).
A Lucas probable prime test is most useful if D is chosen such that the Jacobi symbol (D/n) is −1
(see pages 1401–1409 of, page 1024 of, or pages 266–269 of
). This is especially important when combining a Lucas test with a strong pseudoprime test, such as the Baillie–PSW primality test. Typically implementations will use a parameter selection method that ensures this condition (e.g. the Selfridge method recommended in and described below).
If the Jacobi symbol (D/n) = −1, then δ(n) = n + 1 and equation () becomes Un+1 ≡ 0 (mod n).
If congruence () is false, this constitutes a proof that n is composite.
If congruence () is true, then n is a Lucas probable prime.
In this case, either n is prime or it is a Lucas pseudoprime.
If congruence () is true, then n is likely to be prime (this justifies the term probable prime), but this does not prove that n is prime.
As is the case with any other probabilistic primality test, if we perform additional Lucas tests with different D, P and Q, then unless one of the tests proves that n is composite, we gain more confidence that n is prime.
Examples: If P = 3, Q = −1, and D = 13, the sequence of Us is: U0 = 0, U1 = 1, U2 = 3, U3 = 10, etc.
First, let n = 19. The Jacobi symbol (13/19) is −1, so δ(n) = 20, U20 = 6616217487 = 19·348221973, and we have U20 ≡ 0 (mod 19).
Therefore, 19 is a Lucas probable prime for this (P, Q) pair. In this case 19 is prime, so it is not a Lucas pseudoprime.
For the next example, let n = 119. We have (13/119) = −1, and we can compute U120 ≡ 0 (mod 119).
However, 119 = 7·17 is not prime, so 119 is a Lucas pseudoprime for this (P, Q) pair.
In fact, 119 is the smallest pseudoprime for P = 3, Q = −1.
We will see below that, in order to check equation () for a given n, we do not need to compute all of the first n + 1 terms in the U sequence.
Let Q = −1; the smallest Lucas pseudoprimes for P = 1, 2, 3, ... are
323, 35, 119, 9, 9, 143, 25, 33, 9, 15, 123, 35, 9, 9, 15, 129, 51, 9, 33, 15, 21, 9, 9, 49, 15, 39, 9, 35, 49, 15, 9, 9, 33, 51, 15, 9, 35, 85, 39, 9, 9, 21, 25, 51, 9, 143, 33, 119, 9, 9, 51, 33, 95, 9, 15, 301, 25, 9, 9, 15, 49, 155, 9, 399, 15, 33, 9, 9, 49, 15, 119, 9, ...
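The values above can be reproduced with a brute-force search. The following sketch (plain Python; the helper names jacobi, lucas_u_mod, and smallest_lucas_pseudoprime are illustrative, not from any library) applies the definitions above with Q = −1, so D = P² + 4, and tests odd composite n coprime to 2QD:

```python
# A brute-force search for the smallest Lucas pseudoprime with Q = -1 and a given P,
# following the definitions above (D = P^2 - 4Q, delta(n) = n - (D|n)).
# Composites sharing a factor with 2QD are skipped, matching the usual convention.
from math import gcd

def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0, by the standard binary algorithm."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def lucas_u_mod(P, Q, k, n):
    """U_k(P, Q) mod n via the simple recurrence U_{j+1} = P*U_j - Q*U_{j-1}."""
    if k == 0:
        return 0
    u_prev, u = 0, 1                     # U_0, U_1
    for _ in range(k - 1):
        u_prev, u = u, (P * u - Q * u_prev) % n
    return u

def is_prime(n):
    """Trial division; fine for the small n searched here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def smallest_lucas_pseudoprime(P, Q=-1, limit=1000):
    D = P * P - 4 * Q
    for n in range(3, limit, 2):                       # odd n only
        if is_prime(n) or gcd(n, abs(2 * Q * D)) != 1:
            continue
        delta = n - jacobi(D, n)
        if lucas_u_mod(P, Q, delta, n) == 0:
            return n
    return None

print(smallest_lucas_pseudoprime(1))   # 323, matching the P = 1 entry above
print(smallest_lucas_pseudoprime(3))   # 119, matching the P = 3 entry above
```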
Strong Lucas pseudoprimes
Now, factor δ(n) = n − (D/n) into the form d·2^s where d is odd.
A strong Lucas pseudoprime for a given (P, Q) pair is an odd composite number n with GCD(n, D) = 1, satisfying one of the conditions
Ud ≡ 0 (mod n)
or
Vd·2^r ≡ 0 (mod n)
for some 0 ≤ r < s; see page 1396 of. A strong Lucas pseudoprime is also a Lucas pseudoprime (for the same (P, Q) pair), but the converse is not necessarily true.
Therefore, the strong test is a more stringent primality test than equation ().
There are infinitely many strong Lucas pseudoprimes, and therefore, infinitely many Lucas pseudoprimes.
Theorem 7 in states: Let and be relatively prime positive integers for which is positive but not a square. Then there is a positive constant (depending on and ) such that the number of strong Lucas pseudoprimes not exceeding is greater than , for sufficiently large.
We can set Q = −1; then Un and Vn are the P-Fibonacci sequence and the P-Lucas sequence, and the pseudoprimes can be called strong Lucas pseudoprimes in base P. For example, the least strong Lucas pseudoprimes with P = 1, 2, 3, ... are 4181, 169, 119, ...
An extra strong Lucas pseudoprime is a strong Lucas pseudoprime for a set of parameters (P, Q) where Q = 1, satisfying one of the conditions
Ud ≡ 0 (mod n) and Vd ≡ ±2 (mod n),
or
Vd·2^r ≡ 0 (mod n)
for some 0 ≤ r < s − 1. An extra strong Lucas pseudoprime is also a strong Lucas pseudoprime for the same pair. No number can be a strong Lucas pseudoprime to more than 4/15 of all bases, or an extra strong Lucas pseudoprime to more than 1/8 of all bases.
Implementing a Lucas probable prime test
Before embarking on a probable prime test, one usually verifies that n, the number to be tested for primality, is odd, is not a perfect square, and is not divisible by any small prime less than some convenient limit. Perfect squares are easy to detect using Newton's method for square roots.
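A quick way to carry out the perfect-square check is with exact integer square roots, for example (a minimal sketch in Python; math.isqrt computes the floor of the square root with integer arithmetic):

```python
# Detecting perfect squares with exact integer square roots.
from math import isqrt

def is_perfect_square(n: int) -> bool:
    if n < 0:
        return False
    r = isqrt(n)          # floor of the square root, computed exactly
    return r * r == n

print(is_perfect_square(10877))      # False
print(is_perfect_square(104 * 104))  # True
```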
We choose a Lucas sequence where the Jacobi symbol (D/n) = −1, so that δ(n) = n + 1.
Given n, one technique for choosing D is to use trial and error to find the first D in the sequence 5, −7, 9, −11, ... such that the Jacobi symbol (D/n) is −1.
(If D and n have a prime factor in common, then (D/n) = 0.)
With this sequence of D values, the average number of D values that must be tried before we encounter one whose Jacobi symbol is −1 is about 1.79.
Once we have D, we set P = 1 and Q = (1 − D)/4.
It is a good idea to check that n has no prime factors in common with P or Q.
This method of choosing D, P, and Q was suggested by John Selfridge.
(This search will never succeed if n is square, and conversely if it does succeed, that is proof that n is not square. Thus, some time can be saved by delaying testing n for squareness until after the first few search steps have all failed.)
Given D, P, and Q, there are recurrence relations that enable us to quickly compute Uk, Vk, and Qk in O(log2(k)) steps; see . To start off, U1 = 1, V1 = P, and Q1 = Q.
First, we can double the subscript from k to 2k in one step using the recurrence relations U2k = Uk·Vk, V2k = Vk² − 2·Qk, and Q2k = (Qk)², all reduced mod n.
Next, we can increase the subscript by 1 using the recurrences U2k+1 = (P·U2k + V2k)/2, V2k+1 = (D·U2k + P·V2k)/2, and Q2k+1 = Q2k·Q, again reduced mod n.
At each stage, we reduce all of the variables modulo n. When dividing by 2 modulo n, if the numerator is odd add n (which does not change the value modulo n) to make it even before dividing by 2.
We use the bits of the binary expansion of n to determine which terms in the sequence to compute. For example, if n+1 = 44 (= 101100 in binary), then, taking the bits one at a time from left to right, we obtain the sequence of indices to compute: 1 (binary 1), 2 (binary 10), 4 (binary 100), 5 (binary 101), 10 (binary 1010), 11 (binary 1011), 22 (binary 10110), 44 (binary 101100). Therefore, we compute U1, U2, U4, U5, U10, U11, U22, and U44. We also compute the same-numbered terms in the V sequence, along with Q1, Q2, Q4, Q5, Q10, Q11, Q22, and Q44.
By the end of the calculation, we will have computed Un+1, Vn+1, and Qn+1, (mod n).
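A minimal sketch of this bit-by-bit computation in Python (assuming n is odd and using the doubling and add-one recurrences quoted above; the function names are illustrative):

```python
# Compute U_k, V_k and Q^k modulo an odd n by the ladder described above.
def half_mod(x, n):
    """Divide x by 2 modulo odd n, adding n first if x is odd."""
    if x % 2:
        x += n
    return (x // 2) % n

def lucas_uvq_mod(P, Q, D, k, n):
    """Return (U_k, V_k, Q^k) modulo n for k >= 1 and odd n."""
    u, v, qk = 1, P % n, Q % n              # U_1, V_1, Q^1
    for bit in bin(k)[3:]:                  # bits of k after the leading 1
        # double the subscript: m -> 2m
        u, v, qk = (u * v) % n, (v * v - 2 * qk) % n, (qk * qk) % n
        if bit == '1':
            # increase the subscript by one: 2m -> 2m + 1
            u, v = half_mod(P * u + v, n), half_mod(D * u + P * v, n)
            qk = (qk * Q) % n
    return u, v, qk

# The second example above: P = 3, Q = -1, D = 13, n = 119, delta(n) = 120.
u, v, qk = lucas_uvq_mod(3, -1, 13, 120, 119)
print(u)   # 0, so 119 passes the test (it is a Lucas pseudoprime, as noted above)
```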
We then check congruence () using our expected value of Un+1.
When the parameters D, P, and Q are chosen as described above, the first 10 Lucas pseudoprimes are:
323, 377, 1159, 1829, 3827, 5459, 5777, 9071, 9179, and 10877
The strong versions of the Lucas test can be implemented in a similar way. With the same parameters, the first 10 strong Lucas pseudoprimes are: 5459, 5777, 10877, 16109, 18971, 22499, 24569, 25199, 40309, and 58519
Extra strong Lucas pseudoprimes use different parameters: fix Q = 1.
Then try P = 3, 4, 5, 6, ..., until a value of D = P² − 4 is found so that the Jacobi symbol (D/n) is −1. The first 10 extra strong Lucas pseudoprimes are
989, 3239, 5777, 10877, 27971, 29681, 30739, 31631, 39059, and 72389
Checking additional congruence conditions
If we have checked that congruence () is true, there are additional congruence conditions we can check that have almost no additional computational cost. By providing an additional opportunity for n to be proved composite, these increase the reliability of the test.
If n is an odd prime and (D/n) = −1, then we have the following: Vn+1 ≡ 2Q (mod n).
Although this congruence condition is not part of the Lucas probable prime test proper, it is almost free to check this condition because, as noted above, the easiest way to compute Un+1 is to compute Vn+1 as well.
If Selfridge's method (above) for choosing parameters is modified so that, if it selects D = 5, it uses the parameters P = Q = 5 rather than P = 1, Q = −1, then 913 = 11·83 is the only composite less than 10^8 for which congruence () is true (see page 1409 and Table 6 of;). More extensive calculations show that, with this method of choosing D, P, and Q, there are only five odd, composite numbers less than 10^15 for which congruence () is true.
If (and GCD(n, Q) = 1), then an Euler–Jacobi probable prime test to the base Q can also be implemented at minor computational cost.
The computation of depends on and . This is times , and if n is prime, then by Euler's criterion,
.
(Here, is the Legendre symbol; if n is prime, this is the same as the Jacobi symbol).
Therefore, if n is prime, we must have,
The Jacobi symbol on the right side is easy to compute, so this congruence is easy to check.
If this congruence does not hold, then n cannot be prime. Provided GCD(n, Q) = 1 then testing for congruence () is equivalent to augmenting our Lucas test with a "base Q" Solovay–Strassen primality test.
There is one more congruence condition on and which must be true if n is prime and can be checked.
Comparison with the Miller–Rabin primality test
k applications of the Miller–Rabin primality test declare a composite n to be probably prime with a probability at most (1/4)^k.
There is a similar probability estimate for the strong Lucas probable prime test.
Aside from two trivial exceptions (see below), the fraction of (P,Q) pairs (modulo n) that declare a composite n to be probably prime is at most (4/15).
Therefore, k applications of the strong Lucas test would declare a composite n to be probably prime with a probability at most (4/15)^k.
There are two trivial exceptions. One is n = 9. The other is when n = p(p+2) is the product of two twin primes. Such an n is easy to factor, because in this case, n+1 = (p+1)^2 is a perfect square. One can quickly detect perfect squares using Newton's method for square roots.
By combining a Lucas pseudoprime test with a Fermat primality test, say, to base 2, one can obtain very powerful probabilistic tests for primality, such as the Baillie–PSW primality test.
Fibonacci pseudoprimes
When P = 1 and Q = −1, the Un(P,Q) sequence represents the Fibonacci numbers.
A Fibonacci pseudoprime is often
defined as a composite number n not divisible by 5 for which congruence () holds with P = 1 and Q = −1. By this definition, the Fibonacci pseudoprimes form a sequence:
323, 377, 1891, 3827, 4181, 5777, 6601, 6721, 8149, 10877, ... .
The references of Anderson and Jacobsen below use this definition.
If n is congruent to 2 or 3 modulo 5, then Bressoud, and Crandall and Pomerance point out that it is rare for a Fibonacci pseudoprime to also be a Fermat pseudoprime base 2. However, when n is congruent to 1 or 4 modulo 5, the opposite is true, with over 12% of Fibonacci pseudoprimes under 10^11 also being base-2 Fermat pseudoprimes.
If n is prime and GCD(n, Q) = 1, then we also have Vn ≡ P (mod n).
This leads to an alternative definition of Fibonacci pseudoprime:
a Fibonacci pseudoprime is a composite number n for which congruence () holds with P = 1 and Q = −1.
This definition leads the Fibonacci pseudoprimes form a sequence:
705, 2465, 2737, 3745, 4181, 5777, 6721, 10877, 13201, 15251, ... ,
which are also referred to as Bruckman-Lucas pseudoprimes.
Hoggatt and Bicknell studied properties of these pseudoprimes in 1974. Singmaster computed these pseudoprimes up to 100000. Jacobsen lists all 111443 of these pseudoprimes less than 10^13.
It has been shown that there are no even Fibonacci pseudoprimes as defined by equation (5). However, even Fibonacci pseudoprimes do exist under the first definition given by ().
A strong Fibonacci pseudoprime is a composite number n for which congruence () holds for Q = −1 and all P. It follows that an odd composite integer n is a strong Fibonacci pseudoprime if and only if:
n is a Carmichael number
2(p + 1) | (n − 1) or 2(p + 1) | (n − p) for every prime p dividing n.
The smallest example of a strong Fibonacci pseudoprime is 443372888629441 = 17·31·41·43·89·97·167·331.
Pell pseudoprimes
A Pell pseudoprime may be defined as a composite number n for which equation () above is true with P = 2 and Q = −1; the sequence Un then being the Pell sequence. The first pseudoprimes are then 35, 169, 385, 779, 899, 961, 1121, 1189, 2419, ...
This differs from the definition in which may be written as:
with (P, Q) = (2, −1) again defining Un as the Pell sequence. The first pseudoprimes are then 169, 385, 741, 961, 1121, 2001, 3827, 4879, 5719, 6215 ...
A third definition uses equation (5) with (P, Q) = (2, −1), leading to the pseudoprimes 169, 385, 961, 1105, 1121, 3827, 4901, 6265, 6441, 6601, 7107, 7801, 8119, ...
References
External links
Anderson, Peter G. Fibonacci Pseudoprimes, their factors, and their entry points.
Anderson, Peter G. Fibonacci Pseudoprimes under 2,217,967,487 and their factors.
Jacobsen, Dana Pseudoprime Statistics, Tables, and Data (data for Lucas, Strong Lucas, AES Lucas, ES Lucas pseudoprimes below 10^14; Fibonacci and Pell pseudoprimes below 10^12)
Fibonacci numbers
Pseudoprimes | Lucas pseudoprime | [
"Mathematics"
] | 3,845 | [
"Fibonacci numbers",
"Mathematical relations",
"Golden ratio",
"Recurrence relations"
] |
1,055,399 | https://en.wikipedia.org/wiki/Barium%20chloride | Barium chloride is an inorganic compound with the formula . It is one of the most common water-soluble salts of barium. Like most other water-soluble barium salts, it is a white powder, highly toxic, and imparts a yellow-green coloration to a flame. It is also hygroscopic, converting to the dihydrate , which are colourless crystals with a bitter salty taste. It has limited use in the laboratory and industry.
Preparation
On an industrial scale, barium chloride is prepared via a two-step process from barite (barium sulfate). The first step, a carbothermic reduction, requires high temperatures:
BaSO4 + 4 C → BaS + 4 CO
The second step requires reaction between barium sulfide and hydrogen chloride:
BaS + 2 HCl → BaCl2 + H2S
or between barium sulfide and calcium chloride:
BaS + CaCl2 → BaCl2 + CaS
In place of HCl, chlorine can be used. Barium chloride is extracted out from the mixture with water. From water solutions of barium chloride, its dihydrate () can be crystallized as colorless crystals.
Barium chloride can in principle be prepared by the reaction between barium hydroxide or barium carbonate with hydrogen chloride. These basic salts react with hydrochloric acid to give hydrated barium chloride.
Structure and properties
crystallizes in two forms (polymorphs). At room temperature, the compound is stable in the orthorhombic cotunnite () structure, whereas the cubic fluorite structure () is stable between 925 and 963 °C. Both polymorphs accommodate the preference of the large ion for coordination numbers greater than six. The coordination of is 8 in the fluorite structure and 9 in the cotunnite structure. When cotunnite-structure is subjected to pressures of 7–10 GPa, it transforms to a third structure, a monoclinic post-cotunnite phase. The coordination number of increases from 9 to 10.
In aqueous solution behaves as a simple salt; in water it is a 1:2 electrolyte and the solution exhibits a neutral pH. Its solutions react with sulfate ion to produce a thick white solid precipitate of barium sulfate.
This precipitation reaction is used in chlor-alkali plants to control the sulfate concentration in the feed brine for electrolysis.
Oxalate effects a similar reaction, precipitating barium oxalate:
Ba2+ + C2O42− → BaC2O4
When it is mixed with sodium hydroxide, it gives barium hydroxide, which is moderately soluble in water.
is stable in the air at room temperature, but loses one water of crystallization above , becoming , and becomes anhydrous above . may be formed by shaking the dihydrate with methanol.
readily forms eutectics with alkali metal chlorides.
Uses
Although inexpensive, barium chloride finds limited applications in the laboratory and industry.
Its main laboratory use is as a reagent for the gravimetric determination of sulfates. The sulfate compound being analyzed is dissolved in water and hydrochloric acid is added. When barium chloride solution is added, the sulfate present precipitates as barium sulfate, which is then filtered through ashless filter paper. The paper is burned off in a muffle furnace, the resulting barium sulfate is weighed, and the purity of the sulfate compound is thus calculated.
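The arithmetic behind this gravimetric determination is a simple mass-ratio conversion; a minimal sketch (plain Python; the molar masses are approximate textbook values and the sample mass is made up):

```python
# Convert a weighed mass of BaSO4 precipitate into the mass of sulfate it contains.
M_Ba, M_S, M_O = 137.33, 32.06, 16.00
M_BaSO4 = M_Ba + M_S + 4 * M_O          # ~233.39 g/mol
M_SO4   = M_S + 4 * M_O                 # ~96.06 g/mol

def sulfate_mass(baso4_mass_g: float) -> float:
    """Mass of sulfate (SO4 2-) corresponding to a weighed mass of BaSO4."""
    return baso4_mass_g * M_SO4 / M_BaSO4

precipitate = 0.500                     # g of BaSO4 recovered, for example
print(round(sulfate_mass(precipitate), 4))   # ~0.2058 g of sulfate
```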
In industry, barium chloride is mainly used in the purification of brine solution in caustic chlorine plants and also in the manufacture of heat treatment salts and in the case hardening of steel. It is also used to make red pigments such as Lithol red and Red Lake C. Its toxicity limits its applicability.
Toxicity
Barium chloride, along with other water-soluble barium salts, is highly toxic. It irritates eyes and skin, causing redness and pain. It damages kidneys. The fatal dose of barium chloride for a human has been reported to be about 0.8–0.9 g. Systemic effects of acute barium chloride toxicity include abdominal pain, diarrhea, nausea, vomiting, cardiac arrhythmia, muscular paralysis, and death. The Ba2+ ions compete with the K+ ions, causing the muscle fibers to become electrically unexcitable, thus causing weakness and paralysis of the body. Sodium sulfate and magnesium sulfate are potential antidotes because they form barium sulfate BaSO4, which is relatively non-toxic because of its insolubility in water.
Barium chloride is not classified as a human carcinogen.
References
External links
International Chemical Safety Card 0614. (anhydrous)
International Chemical Safety Card 0615. (dihydrate)
Barium chloride's use in industry.
ChemSub Online: Barium chloride.
Chlorides
Alkaline earth metal halides
Barium compounds
Inorganic compounds
Pyrotechnic colorants
Fluorite crystal structure | Barium chloride | [
"Chemistry"
] | 979 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
1,055,437 | https://en.wikipedia.org/wiki/Deniable%20encryption | In cryptography and steganography, plausibly deniable encryption describes encryption techniques where the existence of an encrypted file or message is deniable in the sense that an adversary cannot prove that the plaintext data exists.
The users may convincingly deny that a given piece of data is encrypted, or that they are able to decrypt a given piece of encrypted data, or that some specific encrypted data exists. Such denials may or may not be genuine. For example, it may be impossible to prove that the data is encrypted without the cooperation of the users. If the data is encrypted, the users genuinely may not be able to decrypt it. Deniable encryption serves to undermine an attacker's confidence either that data is encrypted, or that the person in possession of it can decrypt it and provide the associated plaintext.
In their 1996 paper, Ran Canetti, Cynthia Dwork, Moni Naor, and Rafail Ostrovsky introduced the concept of deniable encryption, which allows the participants in an encrypted communication to plausibly deny the true content of their messages even under coercion. Their work laid the foundations for later research on protecting privacy against forced disclosure. The notion was used by Julian Assange and Ralf Weinmann in the Rubberhose filesystem.
Function
Deniable encryption makes it impossible to prove the origin or existence of the plaintext message without the proper decryption key. This may be done by allowing an encrypted message to be decrypted to different sensible plaintexts, depending on the key used. This allows the sender to have plausible deniability if compelled to give up their encryption key.
Scenario
In some jurisdictions, statutes assume that human operators have access to such things as encryption keys. An example is the United Kingdom's Regulation of Investigatory Powers Act, which makes it a crime not to surrender encryption keys on demand from a government official authorized by the act. According to the Home Office, the burden of proof that an accused person is in possession of a key rests on the prosecution; moreover, the act contains a defense for operators who have lost or forgotten a key, and they are not liable if they are judged to have done what they can to recover a key.
In cryptography, rubber-hose cryptanalysis is a euphemism for the extraction of cryptographic secrets (e.g. the password to an encrypted file) from a person by coercion or torture—such as beating that person with a rubber hose, hence the name—in contrast to a mathematical or technical cryptanalytic attack.
An early use of the term was on the sci.crypt newsgroup, in a message posted 16 October 1990 by Marcus J. Ranum, alluding to corporal punishment: "...the rubber-hose technique of cryptanalysis. (in which a rubber hose is applied forcefully and frequently to the soles of the feet until the key to the cryptosystem is discovered, a process that can take a surprisingly short time and is quite computationally inexpensive)."

Deniable encryption allows the sender of an encrypted message to deny sending that message. This requires a trusted third party. A possible scenario works like this:
Bob suspects his wife Alice is engaged in adultery. That being the case, Alice wants to communicate with her secret lover Carl. She creates two keys, one intended to be kept secret, the other intended to be sacrificed. She passes the secret key (or both) to Carl.
Alice constructs an innocuous message M1 for Carl (intended to be revealed to Bob in case of discovery) and an incriminating love letter M2 to Carl. She constructs a cipher-text C out of both messages, M1 and M2, and emails it to Carl.
Carl uses his key to decrypt M2 (and possibly M1, in order to read the fake message, too).
Bob finds out about the email to Carl, becomes suspicious and forces Alice to decrypt the message.
Alice uses the sacrificial key and reveals the innocuous message M1 to Bob. Since it is impossible for Bob to know for sure that there might be other messages contained in C, he might assume that there are no other messages.
Another scenario involves Alice sending the same ciphertext (some secret instructions) to Bob and Carl, to whom she has handed different keys. Bob and Carl are to receive different instructions and must not be able to read each other's instructions. Bob will receive the message first and then forward it to Carl.
Alice constructs the ciphertext out of both messages, M1 and M2, and emails it to Bob.
Bob uses his key to decrypt M1 and isn't able to read M2.
Bob forwards the ciphertext to Carl.
Carl uses his key to decrypt M2 and isn't able to read M1.
Forms of deniable encryption
Normally, ciphertexts decrypt to a single plaintext that is intended to be kept secret. However, one form of deniable encryption allows its users to decrypt the ciphertext to produce a different (innocuous but plausible) plaintext and plausibly claim that it is what they encrypted. The holder of the ciphertext will not be able to differentiate between the true plaintext, and the bogus-claim plaintext. In general, one ciphertext cannot be decrypted to all possible plaintexts unless the key is as large as the plaintext, so it is not practical in most cases for a ciphertext to reveal no information whatsoever about its plaintext. However, some schemes allow decryption to decoy plaintexts that are close to the original in some metric (such as edit distance).
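To illustrate the key-as-long-as-the-message case mentioned above, here is a minimal sketch (plain Python; an illustration only, not a real scheme): with an XOR, one-time-pad-style cipher, whoever is forced to open the ciphertext can present a "sacrificial" key that decrypts it to a harmless decoy of the same length.

```python
# One ciphertext, two keys: the real key yields the secret, a constructed fake key
# yields a decoy of the same length. Messages and keys here are made up.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

secret = b"meet me at the old pier at midnight"
decoy  = b"please pick up milk and bread today"   # must be the same length
assert len(secret) == len(decoy)

real_key   = os.urandom(len(secret))
ciphertext = xor(secret, real_key)

# A "sacrificial" key that makes the same ciphertext open to the decoy:
fake_key = xor(ciphertext, decoy)

print(xor(ciphertext, real_key))   # b'meet me at the old pier at midnight'
print(xor(ciphertext, fake_key))   # b'please pick up milk and bread today'
```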
Modern deniable encryption techniques exploit the fact that without the key, it is infeasible to distinguish between ciphertext from block ciphers and data generated by a cryptographically secure pseudorandom number generator (the cipher's pseudorandom permutation properties).
This is used in combination with some decoy data that the user would plausibly want to keep confidential that will be revealed to the attacker, claiming that this is all there is. This is a form of steganography.
If the user does not supply the correct key for the truly secret data, decrypting it will result in apparently random data, indistinguishable from not having stored any particular data there.
Examples
Layers
One example of deniable encryption is a cryptographic filesystem that employs a concept of abstract "layers", where each layer can be decrypted with a different encryption key. Additionally, special "chaff layers" are filled with random data in order to have plausible deniability of the existence of real layers and their encryption keys. The user can store decoy files on one or more layers while denying the existence of others, claiming that the rest of space is taken up by chaff layers. Physically, these types of filesystems are typically stored in a single directory consisting of equal-length files with filenames that are either randomized (in case they belong to chaff layers), or cryptographic hashes of strings identifying the blocks. The timestamps of these files are always randomized. Examples of this approach include Rubberhose filesystem.
Rubberhose (also known by its development codename Marutukku) is a deniable encryption program which encrypts data on a storage device and hides the encrypted data. The existence of the encrypted data can only be verified using the appropriate cryptographic key. It was created by Julian Assange as a tool for human rights workers who needed to protect sensitive data in the field and was initially released in 1997.
The name Rubberhose is a joking reference to the cypherpunks term rubber-hose cryptanalysis, in which encryption keys are obtained by means of violence.
It was written for Linux kernel 2.2, NetBSD and FreeBSD in 1997–2000 by Julian Assange, Suelette Dreyfus, and Ralf Weinmann. The latest version available, still in alpha stage, is v0.8.3.
Container volumes
Another approach used by some conventional disk encryption software suites is creating a second encrypted volume within a container volume. The container volume is first formatted by filling it with encrypted random data, and then initializing a filesystem on it. The user then fills some of the filesystem with legitimate, but plausible-looking decoy files that the user would seem to have an incentive to hide. Next, a new encrypted volume (the hidden volume) is allocated within the free space of the container filesystem which will be used for data the user actually wants to hide. Since an adversary cannot differentiate between encrypted data and the random data used to initialize the outer volume, this inner volume is now undetectable. LibreCrypt and BestCrypt can have many hidden volumes in a container; TrueCrypt is limited to one hidden volume.
Other software
OpenPuff, freeware semi-open-source steganography for MS Windows.
LibreCrypt, open-source transparent disk encryption for MS Windows and PocketPC PDAs that provides both deniable encryption and plausible deniability. Offers an extensive range of encryption options, and doesn't need to be installed before use as long as the user has administrator rights.
Off-the-Record Messaging, a cryptographic technique providing true deniability for instant messaging.
StegFS, the current successor to the ideas embodied by the Rubberhose and PhoneBookFS filesystems.
VeraCrypt (a successor to a discontinued TrueCrypt), an on-the-fly disk encryption software for Windows, Mac and Linux providing limited deniable encryption and to some extent (due to limitations on the number of hidden volumes which can be created) plausible deniability, without needing to be installed before use as long as the user has full administrator rights.
Vanish, a research prototype implementation of self-destructing data storage.
Detection
The existence of hidden encrypted data may be revealed by flaws in the implementation. It may also be revealed by a so-called watermarking attack if an inappropriate cipher mode is used.
The existence of the data may be revealed by it 'leaking' into non-encrypted disk space where it can be detected by forensic tools.
Doubts have been raised about the level of plausible deniability in 'hidden volumes' – the contents of the "outer" container filesystem have to be 'frozen' in its initial state to prevent the user from corrupting the hidden volume (this can be detected from the access and modification timestamps), which could raise suspicion. This problem can be eliminated by instructing the system not to protect the hidden volume, although this could result in lost data.
Drawbacks
Possession of deniable encryption tools could lead attackers to continue torturing a user even after the user has revealed all their keys, because the attackers could not know whether the user had revealed their last key or not. However, knowledge of this fact can disincentivize users from revealing any keys to begin with, since they will never be able to prove to the attacker that they have revealed their last key.
Deniable authentication
Some in-transit encrypted messaging suites, such as Off-the-Record Messaging, offer deniable authentication which gives the participants plausible deniability of their conversations. While deniable authentication is not technically "deniable encryption" in that the encryption of the messages is not denied, its deniability refers to the inability of an adversary to prove that the participants had a conversation or said anything in particular.
This is achieved by the fact that all information necessary to forge messages is appended to the encrypted messages – if an adversary is able to create digitally authentic messages in a conversation (see hash-based message authentication code (HMAC)), they are also able to forge messages in the conversation. This is used in conjunction with perfect forward secrecy to assure that the compromise of encryption keys of individual messages does not compromise additional conversations or messages.
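As a concrete illustration of why a shared-key MAC does not prove authorship, consider the following minimal sketch (plain Python using the standard hmac module; the key and message are made up):

```python
# With a symmetric MAC key known to both parties, either party can produce a valid
# tag, so a tag cannot prove to a third party who actually wrote the message.
import hmac, hashlib

shared_key = b"session-mac-key"          # known to both conversation partners
message    = b"let's meet at 6"

tag_from_alice = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
tag_from_bob   = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
print(tag_from_alice == tag_from_bob)    # True: Bob could have forged this himself
```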
See also
References
Further reading
Cryptography | Deniable encryption | [
"Mathematics",
"Engineering"
] | 2,586 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
1,055,454 | https://en.wikipedia.org/wiki/Stroboscope | A stroboscope, also known as a strobe, is an instrument used to make a cyclically moving object appear to be slow-moving, or stationary. It consists of either a rotating disk with slots or holes or a lamp such as a flashtube which produces brief repetitive flashes of light. Usually, the rate of the stroboscope is adjustable to different frequencies. When a rotating or vibrating object is observed with the stroboscope at its vibration frequency (or a submultiple of it), it appears stationary. Thus stroboscopes are also used to measure frequency.
The principle is used for the study of rotating, reciprocating, oscillating or vibrating objects. Machine parts and vibrating string are common examples. A stroboscope used to set the ignition timing of internal combustion engines is called a timing light.
Mechanical
In its simplest mechanical form, a stroboscope can be a rotating cylinder (or bowl with a raised edge) with evenly spaced holes or slots placed in the line of sight between the observer and the moving object. The observer looks through the holes/slots on the near and far side at the same time, with the slots/holes moving in opposite directions. When the holes/slots are aligned on opposite sides, the object is visible to the observer.
Alternately, a single moving hole or slot can be used with a fixed/stationary hole or slot. The stationary hole or slot limits the light to a single viewing path and reduces glare from light passing through other parts of the moving hole/slot.
Viewing through a single line of holes/slots does not work, since the holes/slots appear to just sweep across the object without a strobe effect.
The rotational speed is adjusted so that it becomes synchronised with the movement of the observed system, which seems to slow and stop. The illusion is caused by temporal aliasing, commonly known as the stroboscopic effect.
Electronic
In electronic versions, the perforated disc is replaced by a lamp capable of emitting brief and rapid flashes of light. Typically a gas-discharge or solid-state lamp is used, because they are capable of emitting light nearly instantly when power is applied, and extinguishing just as fast when the power is removed.
By comparison, incandescent lamps have a brief warm-up when energized, followed by a cool-down period when power is removed. These delays result in smearing and blurring of detail of objects partially illuminated during the warm-up and cool-down periods. For most applications, incandescent lamps are too slow for clear stroboscopic effects. Yet when operated from an AC source they are mostly fast enough to cause audible hum (at double mains frequency) on optical audio playback such as on film projection.
The frequency of the flash is adjusted so that it is equal to, or a unit fraction of, the object's cyclic speed, at which point the object is seen to be either stationary or moving slowly backward or forward, depending on the flash frequency.
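The apparent motion can be estimated from the mismatch between the rotation rate and the flash rate; a minimal sketch (plain Python; the function name and the example rates are illustrative):

```python
# Apparent rotation rate of an object spinning at f_rot under a strobe at f_flash:
# only the fractional number of revolutions advanced per flash is perceived.
def apparent_rate(f_rot: float, f_flash: float) -> float:
    per_flash = f_rot / f_flash          # revolutions advanced per flash
    per_flash -= round(per_flash)        # whole revolutions per flash look stationary
    return per_flash * f_flash           # apparent revolutions per second

print(round(apparent_rate(50.0, 50.0), 3))   # 0.0  -> appears stationary
print(round(apparent_rate(50.5, 50.0), 3))   # 0.5  -> creeps slowly forward
print(round(apparent_rate(49.5, 50.0), 3))   # -0.5 -> creeps slowly backward
print(round(apparent_rate(50.0, 25.0), 3))   # 0.0  -> stationary at a unit fraction
```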
Neon lamps or light-emitting diodes are commonly used for low-intensity strobe applications. Neon lamps were more common before the development of solid-state electronics, but are being replaced by LEDs in most low-intensity strobe applications.
Xenon flash lamps are used for medium- and high-intensity strobe applications. Sufficiently rapid or bright flashing may require active cooling such as forced-air or water cooling to prevent the xenon flash lamp from melting.
History
Joseph Plateau of Belgium is generally credited with the invention of the stroboscope in 1832, when he used a disc with radial slits which he turned while viewing images on a separate rotating wheel. Plateau's device became known as the "Phenakistoscope". There was an almost simultaneous and independent invention of the device by the Austrian Simon Ritter von Stampfer, which he named the "Stroboscope", and it is his term which is used today. The etymology is from the Greek words στρόβος - strobos, meaning "whirlpool" and σκοπεῖν - skopein, meaning "to look at".
As well as having important applications for scientific research, the earliest inventions received immediate popular success as methods for producing moving pictures, and the principle was used for numerous toys. Other early pioneers employed rotating mirrors, or vibrating mirrors known as mirror galvanometers.
In 1917, French engineer Etienne Oehmichen patented the first electric stroboscope, building at the same time a camera capable of shooting 1,000 frames per second.
Harold Eugene Edgerton ("Doc" Edgerton) employed a flashing lamp to study machine parts in motion. General Radio Corporation then went on to produce this device in the form of their "Strobotac", an early example of a commercially successful stroboscope.
Edgerton later used very short flashes of light as a means of producing still photographs of fast-moving objects, such as bullets in flight.
Applications
Stroboscopes play an important role in the study of stresses on machinery in motion, and in many other forms of research. Bright stroboscopes are able to overpower ambient lighting and make stop-motion effects apparent without the need for dark ambient operating conditions.
They are also used as measuring instruments for determining cyclic speed. As a timing light they are used to set the ignition timing of internal combustion engines.
In medicine, stroboscopes are used to view the vocal cords for the diagnosis of conditions that have produced dysphonia (hoarseness). The patient hums or speaks into a microphone which in turn activates the stroboscope at either the same or a slightly different frequency. The light source and a camera are positioned by endoscopy.
Another application of the stroboscope can be seen on many gramophone turntables. The edge of the platter has marks at specific intervals so that when viewed under fluorescent lighting powered at mains frequency, provided the platter is rotating at the correct speed, the marks appear to be stationary. This will not work well under incandescent lighting, as incandescent bulbs do not significantly strobe. For this reason, some turntables have a neon bulb or LED next to the platter. The LED must be driven by a half wave rectifier from the mains transformer, or by an oscillator.
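The number of marks on such a platter follows from the ratio of flash rate to rotation rate; a minimal sketch (plain Python; it assumes the lamp flickers on each half-cycle of the mains, i.e. at twice the mains frequency):

```python
# Evenly spaced marks appear stationary when the platter advances exactly one
# mark spacing per flash, so the mark count equals flashes per revolution.
def marks_needed(rpm: float, mains_hz: float) -> float:
    flashes_per_second = 2 * mains_hz        # one flicker per half-cycle
    revolutions_per_second = rpm / 60.0
    return flashes_per_second / revolutions_per_second

print(round(marks_needed(100/3, 50)))   # 180 marks for 33 1/3 rpm on 50 Hz mains
print(round(marks_needed(100/3, 60)))   # 216 marks for 33 1/3 rpm on 60 Hz mains
```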
Flashing lamp strobes have also been adapted as a lighting effect for discotheques and night clubs where they give the impression of dancing in slow motion. The strobe rate of these devices is typically not very precise or very fast, because entertainment applications do not usually require a high degree of performance.
Fechner color
Rapid flashing of the stroboscopic light can give the illusion that white light is tinged with color, known as Fechner color. Within certain ranges, the apparent color can be controlled by the frequency of the flash. Effective stimuli frequencies go from 3 Hz upwards, with optimal frequencies of about 4–6 Hz. The colours are an illusion generated in the mind of the observer and not a real color. The Benham's top demonstrates the effect.
See also
Electrotachyscope
Flip book
Reciprocating motion
Phenakistoscope
Praxinoscope
Strobe light
Strobe tuner
Tachometer
Thaumatrope
Zoetrope
References
External links
How the Germans Measured Milliseconds MECHANICALLY - Smarter Every Day 283
Demonstration of Phenakistoscope and Stroboscope at North Carolina School of Science and Mathematics
Audiovisual introductions in 1832
Measuring instruments
Medical equipment | Stroboscope | [
"Technology",
"Engineering",
"Biology"
] | 1,578 | [
"Medical equipment",
"Measuring instruments",
"Medical technology"
] |
1,055,486 | https://en.wikipedia.org/wiki/List%20of%20endemic%20species%20of%20Taiwan | The endemic species of Taiwan are organisms that are endemic to the island of Taiwan – that is, they naturally occur nowhere else on Earth.
Percentages of endemic species and subspecies in selected animal groups in Taiwan:
Percentages of endemic plants of all living species in Taiwan.
Endemic fauna
Endemic mammals
Order: Carnivora (carnivorans)
Formosan black bear – Ursus thibetanus formosanus
Formosan ferret-badger – Melogale subaurantiaca (Swinhoe)
Order Artiodactyla (even-toed ungulates)
Formosan boar – Sus scrofa taivanus
Formosan sika deer – Cervus nippon taiouanus
Taiwan serow – Naemorhedus swinhoei (Gray)
Order Eulipotyphla (shrews and kin)
Taiwanese mole shrew – Anourosorex yamashinai Kuroda
Tada's shrew – Crocidura tadae Tokuda & Kano
Formosan shrew – Episoriculus fumidus Thomas
Koshun shrew – Chodsigoa sodalis Thomas
Kano's mole – Mogera kanoana Kawada et al.
Order Rodentia (rodents)
Formosan field vole – Apodemus semotus Thomas
Spinous country-rat – Niviventer coxingi (Swinhoe)
Formosan white-bellied rat – Niviventer culturatus (Thomas)
Kikuchi's field vole – Microtus kikuchii (Kuroda)
Formosan giant flying squirrel – Petaurista grandis (Swinhoe)
Taiwan red-and-white giant flying squirrel – Petaurista lena Thomas
Order Primate (primates)
Formosan macaque – Macaca cyclopis (Swinhoe)
Order Chiroptera (bats)
Formosan long-eared bat – Plecotus taivanus Yoshiyuki
Long-toed myotis – Myotis secundus Ruedi, Csorba, Lin & Chou
Reddish myotis – Myotis soror Ruedi, Csorba, Lin & Chou
Formosan mouse-eared bat – Myotis taiwanensis Linde
Formosan broad-muzzled bat – Submyotodon latirostris Kishida
Bicolored tube-nosed bat – Murina bicolor Kuo, Fang, Csorba & Lee
Slender tube-nosed bat – Murina gracilis Kuo et al.
Formosan tube-nosed bat – Murina puta Kishida
Murina recondita Kuo, Fang, Csorba & Lee, 2009
Yellow-necked bat – Thainycteris torquatus Gabor & Lee
Formosan leaf-nosed bat – Hipposideros terasensis Kisida
Formosan greater horseshoe bat – Rhinolophus formosae Sanborn
Formosan lesser horseshoe bat – Rhinolophus monoceros Andersen
Endemic birds
32 endemic bird species and another 52 endemic subspecies of Taiwan have been identified (out of a total of 686 bird species). The thirty-two endemic species make up about 4.6% of all birds living in Taiwan.
Order Passeriformes (passerines and relatives)
Taiwan yellow tit – Machlolophus holsti
Chestnut-bellied tit – Sittiparus castaneoventris
White-whiskered laughingthrush – Garrulax morrisonianus (Ogilvie-Grant)
White-throated laughingthrush – Pterorhinus albogularis
Rusty laughingthrush – Pterorhinus poecilorhynchus
White-eared sibia – Heterophasia auricularis (Swinhoe)
Taiwan yuhina – Yuhina brunneiceps Ogilvie-Grant
Taiwan scimitar babbler – Pomatorhinus musicus
Black-necklaced scimitar babbler – Erythrogenys erythrocnemis
Taiwan cupwing – Pnoepyga formosana
Taiwan barwing – Actinodura morrisoniana Ogilvie-Grant
Steere's liocichla – Liocichla steerii Swinhoe
Morrison's fulvetta – Alcippe morrisonia
Taiwan blue magpie – Urocissa caerulea (Gould)
Styan's bulbul – Pycnonotus taivanus Styan
Taiwan whistling thrush – Myophonus insularis Gould
Collared bush robin – Tarsiger johnstoniae (Ogilvie-Grant)
White-browed bush robin – Tarsiger indicus
Taiwan bullfinch – Pyrrhula owstoni
Taiwan rosefinch – Carpodacus formosanus
Taiwan vivid niltava – Niltava vivida
Taiwan thrush – Turdus niveiceps
Taiwan shortwing – Brachypteryx goodfellowi
Taiwan fulvetta – Fulvetta formosana
Flamecrest – Regulus goodfellowi Ogilvie-Grant
Taiwan bush warbler – Bradypterus alishanensis
Taiwan hwamei – Garrulax taewanus
Order Galliformes (chicken-like birds)
Taiwan partridge – Arborophila crudigularis (Swinhoe)
Taiwan bamboo partridge – Bambusicola sonorivox
Mikado pheasant – Syrmaticus mikado (Ogilvie-Grant)
Swinhoe's pheasant – Lophura swinhoii (Gould)
Order Piciformes (woodpeckers and relatives)
Taiwan barbet – Megalaima nuchalis
Endemic reptiles
Order Squamata (lizards and snakes)
Formosan smooth skink – Scincella formosensis (Van Denburgh)
Taiwan forest skink – Sphenomorphus taiwanensis Chen & Lue
Plestiodon leucostictus (Hikida, 1988)
Lanyu scaly-toed gecko – Lepidodactylus yami Ota
Kikuchi's gecko – Gekko kikuchii (Oshima)
Gekko guishanicus – Lin & Yao, 2016
Short-legged japalure – Diploderma brevipes Gressitt
Diploderma luei – Ota, Chen & Shang, 1998
Maki's japalura – Diploderma makii Ota
Swinhoe's japalura – Diploderma swinhonis Gunther
Formosan legless lizard – Ophisaurus formosensis Kishida
Formosa grass lizard – Takydromus formosanus Boulenger
Sauteri grass lizard – Takydromus sauteri Van Denburgh
Hsuehshan grass lizard – Takydromus hsuehshanensis Lin & Cheng
Stejneger's grass lizard – Takydromus stejnegeri Van Denburgh
Hengchun blind snake – Argyrophis koshunensis Oshima
Atayal slug-eating snake – Pareas atayal You, Poyarkov & Lin, 2015
Taiwan slug-eating snake – Pareas formosensis (Van Dengurgh)
Formosa odd-scaled snake – Achalinus formosanus Boulenger
Black odd-scaled snake – Achalinus niger Maki
Maki's keelback – Hebius miyajimae Maki
Swinhoe's grass snake – Rhabdophis swinhonis (Günther)
Formosa coral snake – Sinomicrurus sauteri (Steindachner)
Swinhoe's temperate Asian coralsnake – Sinomicrurus swinhoei (Van Dengurgh)
Taiwan pit viper – Trimeresurus gracilis Oshima
Endemic amphibians
Order Anura (frogs and toads)
Central Formosa toad – Bufo bankorensis
Stejneger's narrow-mouthed toad – Micryletta steinegeri
Swinhoe's brown frog – Odorrana swinhoana
Taipa frog – Rana longicrus
Sauter's brown frog – Rana sauteri
Ota's stream tree frog – Buergeria otai
Robust Buerger's frog – Buergeria robusta
Kurixalus berylliniris
Temple treefrog – Kurixalus idiootocus
Kurixalus wangi
Farmland green treefrog – Zhangixalus arvalis
Orange-belly treefrog – Zhangixalus aurantiventris
Moltrecht's green tree frog – Rhacophorus moltrechti
Emerald green treefrog – Rhacophorus prasinatus
Taipei tree frog – Zhangixalus taipeianus
Order Urodela (salamanders and newts)
Alishan salamander – Hynobius arisanensis
Formosan salamander – Hynobius formosanus
Taiwan lesser salamander – Hynobius fuca
Nanhu salamander – Hynobius glacialis
Sonani's salamander – Hynobius sonani
Endemic freshwater fishes
Order Cypriniformes (minnows and carp)
Taitung river loach – Hemimyzon taitungensis Tzeng & Shen
Formosan river loach – Hemimyzon formosanus (Boulenger)
Shen's river loach – Hemimyzon sheni Chen & Fang
River loach – Formosania lacustris (Steindachner)
Pulin river loach – Sinogastromyzon puliensis Liang
Metzia mesembrinum (Regan)
Pararasbora moltrechti Regan
Acrossocheilus paradoxus (Günther)
Lake Candidus dace – Candidia barbata (Regan)
Microphysogobio alticorpus Banarescu & Nalbant
Taiwan ku fish – Onychostoma alticorpus (Oshima)
Freshwater minnow – Opsariichthys pachycephalus Günther
Gobiobotia cheni Banarescu & Nalbant
Microphysogobio brevirostris (Günther)
Aphyocypris kikuchii (Oshima)
Squalidus iijimae (Oshima)
Order Gobiiformes (gobies and their relatives)
Goby – Cryptocentrus yatsui Tomiyama
Goby – Myersina yangii (Chen)
Goby – Rhinogobius candidianus Regan
Goby – Rhinogobius gigas Aonuma & Chen
Goby – Rhinogobius formosanus Oshima
Goby – Rhinogobius nantaiensis Aonuma & Chen
Goby – Rhinogobius henchuenensis Chen & Shao
Goby – Rhinogobius delicatus Chen & Shao
Goby – Rhinogobius maculafasciatus Chen & Shao
Goby – Rhinogobius rubromaculatus Lee & Chang
Goby – Rhinogobius lanyuensis Chen, Miller & Fang
Order Siluriformes (catfishes)
Formosan torrent catfish – Liobagrus formosanus Regan
Bagrid catfish – Pseudobagrus adiposalis Oshima
Bagrid catfish – Pseudobagrus brevianalis Regan
Order Osmeriformes (smelts, galaxiids, and relatives)
Ariake icefish – Neosalanx acuticeps Regan (endangered)
Order Salmoniformes (salmons and trouts)
Formosan landlocked salmon – Oncorhynchus formosanus (Jordan & Oshima)
Endemic flora
Taiwan is home to several hundred endemic plant species and subspecies, as well as four endemic genera – Conjugatovarium, Kudoacanthus, Shaolinchiana, and Tangshuia. Plants are sorted by family.
Acanthaceae
Dicliptera longiflora
Hygrophila pogonocalyx
Kudoacanthus
Kudoacanthus albonervosus
Lepidagathis palinensis
Rungia taiwanensis
Strobilanthes chihsinlinensis
Strobilanthes formosana
Strobilanthes lanyuensis
Strobilanthes longespicata
Strobilanthes taiwanensis
Actinidiaceae
Actinidia chinensis var. setosa
Celosia taitoensis
Amaranthaceae
Amaranthus shengkuangensis
Amaryllidaceae
Allium taiwanianum
Apiaceae
Angelica dahurica var. formosana
Angelica hohuanshanensis
Angelica morrisonicola
Angelica tarokoensis
Anthriscus taiwanensis
Bupleurum kaoi
Chaerophyllum involucratum
Chaerophyllum nanhuense
Chaerophyllum taiwanianum
Cnidium monnieri var. formosanum
Cnidium warburgii
Conioselinum morrisonense
Pimpinella niitakayamensis
Pimpinella tagawae
Apocynaceae
Anodendron benthamianum
Cynanchum lanhsuense
Melodinus angustifolius
Trachelospermum tulinense
Vincetoxicum insulicola
Vincetoxicum lui
Vincetoxicum oshimae
Vincetoxicum sui
Vincetoxicum taiwanense
Aquifoliaceae
Ilex arisanensis
Ilex guanwuensis
Ilex hayatana
Ilex hengchunensis
Ilex lonicerifolia
Ilex lonicerifolia var. hakkuensis
Ilex lonicerifolia var. lonicerifolia
Ilex lonicerifolia var. matsudae
Ilex matsudae
Ilex nokoensis
Ilex parvifolia
Ilex rarasanensis
Ilex rubroantheriana
Ilex suzukii
Ilex tugitakayamensis
Ilex yunnanensis var. parvifolia
Araceae
Amorphophallus henryi
Amorphophallus hirtus
Arisaema consanguineum subsp. kelung-insulare
Arisaema formosanum
Arisaema guanwuense
Arisaema ilanense
Arisaema taiwanense
Arisaema taiwanense var. brevipedunculatum
Arisaema taiwanense var. taiwanense
Arisaema thunbergii subsp. autumnale
Arum taiwanianum
Homalomena kelungensis
Rhaphidophora formosana
Araliaceae
Aralia castanopsicola – northern & central Taiwan
Aralia decaisneana
Dendropanax pellucidopunctatus
Fatsia polycarpa
Hedera rhombea var. formosana
Hedera siyuanwukouensis
Heptapleurum taiwanianum
Hydrocotyle setulosa
Sinopanax formosanus
Arecaceae
Arenga engleri
Calamus beccarii – southwestern Taiwan
Calamus formosanus
Pinanga tashiroi
Aristolochiaceae
Aristolochia cucurbitifolia
Aristolochia hohuanensis
Aristolochia pahsienshanianum
Aristolochia yujungiana
Asarum ampulliflorum
Asarum chatienshanianum – northern Taiwan
Asarum crassisepalum
Asarum hypogynum
Asarum macranthum – northern Taiwan
Asarum pubitessellatum
Asarum shoukaense
Asarum taipingshanianum
Asarum tawushanianum - southern Taiwan
Asarum tungyanshanianum
Asarum villisepalum
Asparagaceae
Aspidistra attenuata – central and southern Taiwan
Aspidistra daibuensis – southeastern and southern Taiwan
Aspidistra daibuensis var. daibuensis – southeastern and southern Taiwan
Aspidistra daibuensis var. longkiauensis
Aspidistra longiconnectiva
Aspidistra mushaensis – central Taiwan
Aspidistra shoukaensis
Disporopsis shaolinchiensis
Heteropolygonatum alte-lobatum
Maianthemum formosanum
Maianthemum harae
Maianthemum shaolinchii
Ophiopogon shuangliuensis
Peliosanthes kaoi – southeastern Taiwan (Taidong Xian)
Polygonatum arisanense
Polygonatum arisanense var. arisanense
Polygonatum arisanense var. chingshuishanianum
Polygonatum arisanense var. formosanum
Aspleniaceae
Asplenium cuneatiforme
Asplenium matsumurae
Asplenium subtrialatum
Asplenium × wangii (A. bullatum × A. wrightii)
Asplenium wilfordii var. densum
Athyrium erythropodum
Athyrium leiopodum
Athyrium tripinnatum
Diplazium chioui
Diplazium ketagalaniorum
Diplazium kuoi
Hymenasplenium adiantifrons
Thelypteris × erubesquirolica (T. erubescens × T. esquirolii)
Thelypteris longipetiolata
Woodsia okamotoi
Asteraceae
Ainsliaea chaochiei
Ainsliaea lalashanensis
Ainsliaea paucicapitata
Anaphalis horaimontana
Anaphalis nagasawae
Anaphalis transnokoensis
Artemisia kawakamii
Artemisia morrisonensis
Artemisia niitakayamensis
Artemisia oligocarpa
Artemisia somae
Artemisia somae var. batakensis
Artemisia somae var. somae
Artemisia tsugitakaensis
Aster altaicus var. taitoensis - southeastern Taiwan
Aster chingshuiensis
Aster guanwuensis
Aster ilanmontanus
Aster itsunboshi
Aster kanoi
Aster morrisonensis
Aster oldhamii – northern Taiwan
Aster ovalifolius
Aster siyuanensis
Aster taiwanensis var. taiwanensis
Aster takasago-montanus
Aster taoshanensis
Aster taoyuenensis
Blumea chishangensis
Blumea hsinbaiyangensis
Blumea humilis
Blumea linearis
Blumea luoshaoensis
Carpesium parvicapitulum
Carpesium taiwanense
Chrysanthemum horaimontanum
Chrysanthemum morii
Cirsium arisanense
Cirsium chilaishanense
Cirsium ferum
Cirsium guanwuense
Cirsium nanhutashanense
Cirsium pitouchaoense
Cirsium suzukii – northern Taiwan
Cirsium taiwanense
Cirsium tatakaense
Crepidiastrum hualienianum
Crepidiastrum taiwanianum
Crepis meifenggensis
Erigeron fukuyamae
Erigeron morrisonensis
Erigeron taiwanensis
Eupatorium amabile
Eupatorium guanyuanense
Eupatorium hualienense
Eupatorium loshanense
Eupatorium tashiroi
Gynura formosana
Gynura taitungensis
Gynura tungyanshanensis
Hieracium morii
Hieracium taiwanense
Inula taiwanensis
Ixeridium calcicola
Ixeridium guanwuense
Ixeridium hohuanshanense
Ixeridium sandaiolingwaterfallense
Ixeridium transnokoense
Ixeris hopingtunnelensis
Jacobaea kuanshanensis
Jacobaea morrisonensis
Jacobaea tarokoensis
Lactuca mansuensis
Lapsanastrum takasei
Launaea taiwanensis
Leontopodium microphyllum
Ligularia kojimae
Melanthera taiwanensis
Nemosenecio formosanus
Paraprenanthes nanhutashanensis
Paraprenanthes shaolinchiensis
Paraprenanthes yangtoushanensis
Parasenecio morrisonensis
Parasenecio nokoensis
Parasenecio sylviaensis
Pertya simozawae
Petasites formosanus
Picris angustifolia subsp. morrisonensis
Picris angustifolia subsp. ohwiana
Saussurea glandulosa
Saussurea kanzanensis
Saussurea kiraisiensis
Scorzonera taiwanensis
Senecio bilushenmulatus
Senecio loshanensis
Senecio scandens var. crataegifolius
Senecio shaoakoulatus
Senecio tulinensis
Syneilesis hayatae
Syneilesis subglabrata
Taraxacum hohuanshanense
Tephroseris taitoensis
Youngia japonica subsp. monticola
Youngia lalashanensis
Youngia macrophylla
Balsaminaceae
Impatiens devolii
Impatiens tayemonii
Impatiens uniflora
Begoniaceae
Begonia austrotaiwanensis - southern Taiwan
Begonia bouffordii – Nantou
Begonia × buimontana (B. palmata × B. taiwaniana)
Begonia chitoensis – northern and central Taiwan
Begonia × chungii (B. longifolia × B. palmata)
Begonia chuyunshanensis – southern Taiwan
Begonia hohuanensis
Begonia lukuana
Begonia nantoensis – Nantou
Begonia pinglinensis – northern Taiwan
Begonia ravenii – western Taiwan
Begonia shitoushanensis
Begonia shoukaensis
Begonia × taipeiensis (B. formosana × B. longifolia) – northern Taiwan
Begonia taiwaniana – southern Taiwan
Begonia tengchiana – south-central Taiwan
Begonia tungyanshanensis
Begonia wutaiana – south-central Taiwan
Berberidaceae
Berberis alpicola – Mt. Alishan
Berberis aristatoserrulata – central Taiwan
Berberis brevisepala – central Taiwan
Berberis chingshuiensis – east-central Taiwan
Berberis hayatana – north-central Taiwan
Berberis japonica
Berberis kawakamii – central Taiwan
Berberis mingetsensis – west-central Taiwan
Berberis morii – eastern Taiwan
Berberis morrisonensis
Berberis nantoensis – northern and north-central Taiwan
Berberis pengii – southern Taiwan
Berberis ravenii – southern Taiwan
Berberis schaaliae – eastern Taiwan
Berberis tarokoensis – north-central Taiwan
Betulaceae
Alnus formosana
Alnus henryi
Carpinus hebestroma – Hualien Xian
Carpinus kawakamii var. minutiserrata
Carpinus rankanensis
Carpinus rankanensis var. mutsudae
Boraginaceae
Cynoglossum alpestre – above 2500 m elevation
Ehretia lengshuikengensis
Heliotropium formosanum
Thyrocarpus cuifengensis
Trichodesma calycosum var. formosanum
Trigonotis formosana var. elevatovenosa – northern Taiwan
Trigonotis nankotaizanensis
Brassicaceae
Arabis piluchiensis
Arabis shengkuangshanensis
Arabis taihumilis
Barbarea taiwaniana – northern Taiwan
Brassica taiwanensis
Draba sekiyana – Taiwan mountains
Thismia huangii – northwestern Taiwan
Thismia taiwanensis – south-central Taiwan (Gaoxiong)
Buxaceae
Buxus sinica var. intermedia
Sarcococca taiwaniana
Campanulaceae
Adenophora morrisonensis
Adenophora morrisonensis subsp. morrisonensis
Adenophora morrisonensis subsp. uehatae
Adenophora taiwaniana
Codonopsis kawakamii – central Taiwan
Wahlenbergia taiwaniana
Caprifoliaceae
Lonicera kawakamii
Lonicera taiwanensis
Lonicera tulinensis
Patrinia glabrifolia – central and eastern Taiwan
Scabiosa lacerifolia
Valeriana hsui
Valeriana kawakamii
Valeriana siyuaniana
Caryophyllaceae
Arenaria fulungensis
Arenaria siyuanakouensis
Arenaria taiwanensis
Cerastium morrisonense
Cerastium nanhutashanense
Cerastium parvipetalum – southern Taiwan
Cerastium subpilosum – central Taiwan
Cerastium takasagomontanum – central Taiwan
Dianthus palinensis – Taoyuan Xian: Bali
Dianthus pygmaeus
Dianthus taoshanensis
Nubelaria arisanensis
Silene morrisonmontana
Silene morrisonmontana var. glabella – northern Taiwan
Silene morrisonmontana var. morrisonmontana
Silene ohwii northern and central Taiwan
Spergularia hohuanshanensis
Stellaria taiwanensis
Celastraceae
Euonymus spraguei
Euonymus wulinensis
Glyptopetalum pallidifolium
Clusiaceae
Garcinia linii – eastern Taiwan
Colchicaceae
Disporum cantoniense var. kawakamii
Disporum kawakamii
Disporum sessile var. intermedium – central Taiwan
Disporum shimadae – northern Taiwan
Commelinaceae
Commelina bicaeruloflora
Cyanotis kawakamii – southern Taiwan
Convolvulaceae
Argyreia formosana – southern Taiwan
Ipomoea fangliaoensis
Ipomoea taiwanensis
Crassulaceae
Crassula nanshanchunensis
Kalanchoe tachingshuii
Kalanchoe tashiroi – southeastern Taiwan
Phedimus subcapitatum
Sedum actinocarpum
Sedum arisanense
Sedum brachyrhinchum
Sedum cirenianum
Sedum erythrospermum subsp. erythrospermum
Sedum kwanwuense
Sedum microsepalum – north-central Taiwan
Sedum morrisonense
Sedum nokoense
Sedum parviflorum
Sedum parvisepalum subsp. parvisepalum – central Taiwan
Sedum sasakii – northern Taiwan
Sedum sekiteiense – northern Taiwan
Sedum shaoakouense
Sedum shengkuangense
Sedum tachingshuianum
Sedum taiwanalpinum
Sedum taiwanianum
Sedum tarokoense – eastern Taiwan
Sedum triangulisepalum
Sinocrassula parvifoliana
Sinocrassula shaolinchiana
Sinocrassula taiwaniana
Cucurbitaceae
Sinobaijiania taiwaniana
Trichosanthes homophylla
Trichosanthes taiwanensis
Cupressaceae
Calocedrus formosana – northern and central Taiwan
Chamaecyparis formosensis – northern and central Taiwan
Chamaecyparis obtusa var. formosana – northern and central Taiwan
Juniperus morrisonicola
Juniperus tairukouensis
Juniperus tsukusiensis var. taiwanensis – Mt. Chingshui
Cyatheaceae
Cibotium taiwanense
Cyperaceae
Carex ayako-maedae
Carex caucasica subsp. jisaburo-ohwiana
Carex dissitiflora subsp. taiwanensis
Carex dolichostachya subsp. trichosperma – Mt. Alishan
Carex fulvorubescens subsp. fulvorubescens
Carex gentilis var. nakaharae
Carex morii
Carex orthostemon
Carex urelytra
Fimbristylis shimadana
Fimbristylis subinclinata – eastern Taiwan
Daphniphyllaceae
Daphniphyllum × lanyuense (D. macropodum × D. pentandrum) – Lan Yü
Dioscoreaceae
Dioscorea formosana
Ebenaceae
Diospyros fengchangensis
Diospyros kotoensis – Lan Yü
Elaeagnaceae
Elaeagnus darenensis
Elaeagnus formosana
Elaeagnus formosensis – southern Taiwan
Elaeagnus grandifolia – central Taiwan
Elaeagnus tarokoensis
Elaeagnus thunbergii
Elaeocarpaceae
Elaeocarpus decipiens var. changii – southern Taiwan
Elaeocarpus hayatae – southern Taiwan including Lan Yü
Ericaceae
Chimaphila monticola subsp. taiwaniana
Gaultheria taiwaniana – central Taiwan
Pyrola alboreticulata
Pyrola morrisonensis
Rhododendron breviperulatum – northern and eastern Taiwan
Rhododendron chilanshanense – northern Taiwan (Mt. Chilan)
Rhododendron chiliangense
Rhododendron formosanum – southern Taiwan
Rhododendron huanshanense
Rhododendron hyperythrum
†Rhododendron kanehirae – northern Taiwan (Peishi River). Last recorded in 1984
Rhododendron kawakamii
Rhododendron lasiostylum – central Taiwan
Rhododendron longiperulatum – southern Taiwan
Rhododendron morii - central Taiwan
Rhododendron nakaharae – northern Taiwan
Rhododendron nantouense
Rhododendron noriakianum – northern and central Taiwan
Rhododendron oldhamii
Rhododendron pachysanthum – central Taiwan
Rhododendron pseudochrysanthum – central Taiwan
Rhododendron rubropilosum – central Taiwan
Rhododendron rubropilosum var. grandiflorum – Nantou
Rhododendron rubropilosum var. rubropilosum – central Taiwan
Rhododendron shaoakouense
Rhododendron sikayotaizanense
Rhododendron taiwanalpinum – central Taiwan
Vaccinium delavayi subsp. merrillianum
Vaccinium dunalianum var. caudatifolium
Vaccinium japonicum var. lasiostemon
Vaccinium kengii – northern and central Taiwan
Vaccinium wrightii var. formosanum – eastern Taiwan
Euphorbiaceae
Acalypha eastmostpointensis
Acalypha matsudae – Hengchun Peninsula
Euphorbia garanbiensis – southern Taiwan
Euphorbia hsinchuensis – Taiwan (Xinzhu)
Euphorbia taihsiensis – western Taiwan
Euphorbia tarokoensis – eastern Taiwan (Hualien)
Euphorbia tzitanshaniana
Excoecaria formosana var. formosana
Excoecaria kawakamii – southern Taiwan including Lan Yü
Mallotus paniculatus var. formosanus
Fabaceae
Astragalus nankotaizanensis
Astragalus nokoensis – central Taiwan
Bauhinia longiracemosa
Crotalaria similis – Hengchun Peninsula
Dendrolobium dispermum
Derris lasiantha
Derris laxiflora
Dumasia villosa subsp. bicolor
Glycine dolichocarpa
Glycine max subsp. formosana – northern and central Taiwan
Hylodesmum taiwanianum
Indigofera byobiensis
Indigofera hopingensis
Indigofera ramulosissima
Indigofera taiwaniana
Maackia taiwanensis
Millettia pulchra var. microphylla
Mucuna gigantea subsp. tashiroi
Ormosia formosana
Ormosia hengchuniana
Smithia yehii
Sohmaea gracillima – southern Taiwan
Tephrosia ionophlebia
Tephrosia purpurea var. glabra
Zornia intecta
Fagaceae
Fagus hayatae – northern Taiwan
Lithocarpus dodonaeifolius
Lithocarpus formosanus
Lithocarpus kawakamii
Lithocarpus lepidocarpus – central and southern Taiwan
Lithocarpus nantoensis – central and southern Taiwan
Lithocarpus shinsuiensis – southern Taiwan
Quercus hypophaea
Quercus liaoi
Quercus longinux
Quercus morii
Quercus spinosa subsp. miyabei – central Taiwan
Quercus stenophylloides – central Taiwan
Quercus tarokoensis – eastern Taiwan
Quercus tatakaensis
Gentianaceae
Gentiana arisanensis
Gentiana bambuseti – central Taiwan
Gentiana davidi var. formosana
Gentiana flavomaculata
Gentiana flavomaculata subsp. flavomaculata
Gentiana flavomaculata subsp. tatakensis – central Taiwan
Gentiana kaohsiungensis
Gentiana scabrida subsp. horaimontana – central Taiwan
Gentiana scabrida subsp. itzershanensis – central Taiwan
Gentiana scabrida subsp. scabrida
Gentiana taiwanialbiflora
Gentiana taiwanica
Gentiana tarokoensis
Gentiana zollingeri subsp. tentyoensis – eastern Taiwan
Lomatogonium chilaiensis – Mt. Chilaishan
Swertia arisanensis – central and eastern Taiwan
Swertia changii – central Taiwan
Swertia tozanensis
Tripterospermum alutaceofolium – northern Taiwan
Tripterospermum cordifolium
Tripterospermum guanwuense
Tripterospermum hualienense
Tripterospermum lanceolatum
Tripterospermum lilungshanense – south-central Taiwan
Tripterospermum microphyllum
Tripterospermum shaolinchianum
Tripterospermum taiwanense – central and southern Taiwan
Geraniaceae
Geranium hayatanum
Geranium suzukii
Gesneriaceae
Lysionotus pauciflorus var. ikedae – Lan Yü
Lysionotus tairukouensis
Rhynchotechum brevipedunculatum
Rhynchotechum lalashanense
Rhynchotechum uniflorum
Whytockia sasakii
Gleicheniaceae
Dicranopteris tetraphylla
Grossulariaceae
Ribes formosanum
Hydrangeaceae
Deutzia taiwanensis – northern Taiwan
Hydrangea lalashanensis
Hydrangea longifolia
Hydrangea pingtungensis
Hydrangea taiwaniana – central Taiwan
Hymenophyllaceae
Hymenophyllum alishanense
Hymenophyllum chamaecyparicola
Hymenophyllum devolii
Hymenophyllum exquisitum
Hymenophyllum okadae
Hymenophyllum parallelocarpum
Hymenophyllum semialatum – Mt. Tahan
Hymenophyllum taiwanense
Hypericaceae
Hypericum eastmostianum
Hypericum formosanum – northern Taiwan
Hypericum geminiflorum subsp. simplicistylum – north-central and central Taiwan
Hypericum gouanyuanianum
Hypericum lalashanense
Hypericum nagasawae – north-central and central Taiwan
Hypericum nakamurae – east-northeastern Taiwan (Hualian)
Hypericum nokoense east-central Taiwan
Hypericum subalatum – northern and northeastern Taiwan
Iridaceae
Iris formosana – northeastern Taiwan
Iris nantouensis – central Taiwan
Isoetaceae
Isoetes taiwanensis
Isoetes taiwanensis var. kinmenensis
Isoetes taiwanensis var. taiwanensis
Iteaceae
Itea parviflora
Juncaceae
Juncus kuohii
Luzula formosana – central Taiwan
Luzula taiwaniana
Lamiaceae
Ajuga rubrobracteosa
Callicarpa hengchunensis
Callicarpa hypoleucophylla – southern Taiwan
Callicarpa lalashanensis
Callicarpa pilosissima
Callicarpa randaiensis
Callicarpa remotiflora – Hengchun Peninsula
Callicarpa remotiserrulata – Hengchun Peninsula
Callicarpa rubrocarpa
Callicarpa tikusikensis – northern Taiwan
Callicarpa tungyanensis
Clerodendrum ohwii
Clinopodium cirenianum
Clinopodium cuifengense
Clinopodium laxiflorum
Clinopodium loshanense
Clinopodium shaofengkouensis
Clinopodium wulinianum
Clinopodium wutaianum
Collinsonia macrobracteata
Comanthosphace formosana
Elsholtzia oldhamii
Elsholtzia taiwanensis
Lamium taiwanense
Paraphlomis cauliflora
Paraphlomis parviflora
Paraphlomis tomentosocapitata
Platostoma taiwanense
Pogostemon formosanus
Pogostemon monticola
Salvia hayatae
Salvia hayatae var. hayatae
Salvia hayatae var. pinnata
Salvia japonica var. formosana – northern Taiwan
Salvia muratae
Salvia shaofengkouensis
Salvia siyuanensis
Scutellaria hsiehii
Scutellaria lilungensis
Scutellaria playfairii
Scutellaria playfairii var. playfairii
Scutellaria playfairii var. procumbens
Scutellaria taiwanensis – Ali Shan
Scutellaria tarokoensis
Suzukia shikikunensis – central and eastern Taiwan
Teucrium guanwuense
Teucrium taiwanianum
Teucrium taoshanense
Lardizabalaceae
Stauntonia hengchunensis
Stauntonia purpurea – central Taiwan
Lauraceae
Actinodaphne mushaensis
Camphora kanahirae
Camphora officinarum var. nominale – eastern and southern Taiwan
Cinnamomum chingchuanium
Cinnamomum insularimontanum
Cinnamomum kotoense – Lan Yü
Cinnamomum osmophloeum – northern and central Taiwan
Cinnamomum reticulatum
Lindera akoensis
Litsea akoensis
Litsea akoensis var. akoensis
Litsea akoensis var. sasakii
Litsea hayatae
Litsea hypophaea
Litsea morrisonensis
Machilus konishii – central and southern Taiwan
Machilus obovatifolius – southern Taiwan
Machilus obovatifolius var. obovatifolius – southern Taiwan
Machilus obovatifolius var. taiwuensis – southeastern Taiwan
Machilus zuihoensis
Machilus zuihoensis var. mushaensis
Machilus zuihoensis var. zuihoensis
Neolitsea acuminatissima
Neolitsea buisanensis f. sutsuoensis – southern Taiwan
Neolitsea daibuensis – southern Taiwan
Neolitsea hiiranensis – southern Taiwan
Neolitsea parvigemma – south-central Taiwan
Neolitsea variabillima – central Taiwan
Sassafras randaiense – central and southern Taiwan
Liliaceae
Lilium formosanum
Lilium lalashanense
Lilium linearifolianum
Lilium longiflorum var. scabrum
Lilium × shimenianum
Tricyrtis bilushenmulata
Tricyrtis formosana var. glandosa – northeastern and central Taiwan
Tricyrtis lasiocarpa – western and southern Taiwan
Tricyrtis suzukii – northeastern Taiwan
Tricyrtis × tachingshuii
Tricyrtis uniflora
Linderniaceae
Lindernia sandaiolingensis
Vandellia scutellariiformis – Tainan Xian
Loranthaceae
Loranthus kaoi
Scurrula phoebe-formosanae
Taxillus limprichtii var. ritozanensis
Taxillus liquidambaricola var. liquidambaricola
Taxillus nigrans var. longifolius
Taxillus pseudochinensis – southern Taiwan
Taxillus theifer
Taxillus tsaii – southern Taiwan
Lycopodiaceae
Huperzia changii – Last recorded in 2010
Huperzia myriophyllifolia
Lycopodium yueshanense
Lythraceae
Rotala taiwaniana – eastern Taiwan
Magnoliaceae
Magnolia kachirachirai – southeastern Taiwan
Malvaceae
Corchorus aestuans var. brevicaulis
Hibiscus indicus var. integrilobus – southern Taiwan (Hengchun)
Hibiscus taiwanensis – Alishan
Melochia taiwaniana
Sida austrotaiwaniana
Marrataceae
Angiopteris × itoi (A. lygodiifolia × A. somae)
Mazaceae
Mazus alpinus
Mazus fauriei – northern Taiwan
Mazus lalashanensis
Mazus somggangensis
Mazus tainanensis – Tainan city
Mazus uniflorus
Melanthiaceae
Helonias umbellata
Paris taiwanensis
Trillium taiwanense – eastern Taiwan
Veratrum formosanum
Melastomataceae
Bredia dulanica
Bredia hirsuta var. scandens
Bredia oldhamii
Medinilla formosana
Medinilla hayatana – Lan Yü
Melastoma kudoi – central Taiwan
Melastoma scaberrimum
Memecylon pendulum
Tashiroea laisherana
Meliaceae
Aglaia taiwaniana
Menispermaceae
Cocculus taiwanianus
Cyclea ochiaiana
Paratinospora dentata
Stephania merrillii
Mitrastemonaceae
Mitrastemon yamamotoi var. kanehirae
Moraceae
Ficus tannoensis – southern Taiwan
Ficus vaccinioides – southern Taiwan
Musaceae
Musa × formobisiana (M. balbisiana × M. itinerans var. formosana)
Musa insularimontana
Musa itinerans var. chiumei
Musa itinerans var. formosana
Musa itinerans var. kavalanensis
Musa yamiensis
Myrtaceae
Syzygium densinervium var. insulare – Hengchun Peninsula, Lü Dao, and Lan Yü
Syzygium euphlebium – Hengchun Peninsula
Syzygium formosanum
Syzygium kusukusuense – Hengchun Peninsula
Syzygium taiwanicum – Lan Yu, Pengjia Yu
Nyctaginaceae
Boerhavia hualienensis – eastern Taiwan
Oleaceae
Ligustrum morrisonense
Osmanthus kaoi
Osmanthus lanceolatus
Onagraceae
Circaea cireniana
Circaea hsuehshanensis
Circaea lalashanensis
Epilobium hohuanense
Epilobium nanhualpinum
Epilobium nankotaizanense
Epilobium pengii
Epilobium taiwanianum
Epilobium tulinianum
Orchidaceae
Agrostophyllum formosanum – southern Taiwan
Anoectochilus lalashanensis
Anoectochilus semiresupinatus
Aphyllorchis montana var. membranacea
Aphyllorchis montana f. pingtungensis
Aphyllorchis montana var. rotundatipetala
Appendicula reflexa var. kotoensis – Lan Yü
Bulbophyllum albociliatum
Bulbophyllum albociliatum var. albociliatum – central and southern Taiwan
Bulbophyllum albociliatum var. remotifolium – Taiwan (Hualien)
Bulbophyllum albociliatum var. shanlinshiense – Taiwan (Nantou)
Bulbophyllum albociliatum var. weiminianum – southern Taiwan
Bulbophyllum brevipedunculatum – eastern Taiwan
Bulbophyllum cryptomeriicola
Bulbophyllum fimbriperianthium – southern Taiwan
Bulbophyllum flaviflorum – central and southern Taiwan
Bulbophyllum insulsoides – central and southern Taiwan
Bulbophyllum karenkoensis
Bulbophyllum karenkoensis var. calvum
Bulbophyllum karenkoensis var. karenkoensis
Bulbophyllum karenkoensis var. puniceum
Bulbophyllum kuanwuense – southern Taiwan
Bulbophyllum linearibractium
Bulbophyllum maxi
Bulbophyllum × omerumbellatum (B. omerandrum × B. umbellatum)
Bulbophyllum pingtungense – southern Taiwan (east Henchun Peninsula)
Bulbophyllum sasakii
Bulbophyllum setaceum
Bulbophyllum setaceum var. confragosum
Bulbophyllum setaceum var. setaceum – central Taiwan
Bulbophyllum somae – northern Taiwan
Bulbophyllum taiwanense – southern Taiwan
Bulbophyllum tenuislinguae
Bulbophyllum tokioi – northern and central Taiwan
Calanthe arcuata subsp. caudatilabella – southern Taiwan
Calanthe arisanensis
Calanthe dolichopoda
Calanthe formosana
Calanthe × hsinchuensis (C. arisanensis × C. striata)
Cheirostylis nantouensis
Cheirostylis pusilla var. simplex
Cheirostylis tabiyahanensis – southeastern Taiwan (Mt. Ayushan)
Cheirostylis tortilacinia var. rubrifolia – southern Taiwan
Cheirostylis tortilacinia var. wutaiensis
Chiloschista segawae – south-central Taiwan
Corybas puniceus – Yunlin
Corybas taiwanensis – northern Taiwan (Taoyuen)
Crepidium × cordilabium (C. matsudae × C. ophrydis)
Crepidium roohutuense – southern Taiwan
Cymbidium formosanum
Cypripedium formosanum – central Taiwan
Cypripedium segawae – east-central Taiwan
Cypripedium taiwanalpinum
Cyrtosia taiwanica
Dendrobium furcatopedicellatum – central and southern Taiwan
Dendrobium leptocladum – central and southern Taiwan
Dendrobium sanseiense
Dendrobium somae – eastern and southern Taiwan
Dienia shuicae
Epipactis fascicularis
Epipactis ohwii – central Taiwan
Epipogium kentingense
Epipogium lalashanense
Epipogium meridianum
Epipogium taiwanense
Erythrodes aggregata
Erythrodes chinensis var. triantherae – Lan Yü
Eulophia brachycentra – southern Taiwan
Eulophia segawae – southeastern Taiwan
Gastrochilus deltoglossus
Gastrochilus guanwuensis
Gastrochilus × hsuehshanensis – (G. formosanus × G. rantabunensis)
Gastrochilus linii – central Taiwan
Gastrochilus matsudae – southern Taiwan
Gastrochilus matsudae var. hoi
Gastrochilus matsudae var. matsudae
Gastrochilus raraensis
Gastrochilus shaolinchianus
Gastrochilus somae
Gastrochilus yehii
Gastrodia appendiculata – central Taiwan
Gastrodia confusoides
Gastrodia flavilabella – central Taiwan
Gastrodia kaohsiungensis
Gastrodia leoui
Gastrodia leucochila
Gastrodia nantoensis
Gastrodia rubinea
Gastrodia sui
Goodyera daibuzanensis
Goodyera maculata
Goodyera yamiana – Lu Dao
Habenaria alishanensis
Habenaria longiracema – central and southern Taiwan
Habenaria tsaiana
Hemipilia alpestris – northern and central Taiwan
Hemipilia × alpestroides (H. alpestris × H. kiraishiensis)
Hemipilia kiraishiensis
Hemipilia taiwanensis – central and southern Taiwan
Hemipilia takasago-montana central and eastern Taiwan
Hemipilia tominagae
Holcoglossum pumilum
Holcoglossum quasipinifolium
× Holcosia pseudotaiwaniana (Holcoglossum quasipinifolium × Luisia megasepala)
× Holcosia taiwaniana (Holcoglossum quasipinifolium × Luisia teres)
Hylophila nipponica – southern Taiwan, including Lan Yü
Lecanorchis cerina – Mt. Tatungshan
Lecanorchis latens
Lecanorchis multiflora var. bihuensis
Lecanorchis multiflora var. subpelorica
Lecanorchis ohwii
Lecanorchis thalassica var. thalassica – central Taiwan
Liparis amabilis – northern Taiwan (Mt. Chiaopanshan)
Liparis derchiensis – Taichung
Liparis elongata – northern and eastern Taiwan
Liparis formosamontana
Liparis henryi – Hengchun Peninsula
Liparis laurisilvatica
Liparis liangzuensis
Liparis monoceros
Liparis nakaharae
Liparis reckoniana
Liparis rubrotincta
Liparis sasakii – central Taiwan
Luisia cordata – southern Taiwan
Luisia lui
Luisia megasepala – central and southern Taiwan
Neottia atayalica
Neottia breviscapa
Neottia cinsbuensis
Neottia deltoidea – northeastern and southern Taiwan
Neottia fukuyamae – central Taiwan
Neottia hohuanshanensis
Neottia kuanshanensis – south-central Taiwan
Neottia meifongensis – central Taiwan
Neottia microauriculata
Neottia morrisonicola
Neottia nankomontana – northern and central Taiwan
Neottia piluchiensis
Neottia pseudonipponica – central Taiwan (T'aichung)
Neottia shenlengiana
Neottia taizanensis – northern Taiwan (Mt. Nanhutashan)
Neottia tatakaensis
Nervilia hungii
Nervilia lanyuensis – Lan Yü
Nervilia linearilabia
Nervilia purpureotincta
Nervilia septemtrionarius
Nervilia tahanshanensis – southern Taiwan
Nervilia taitoensis – southern Taiwan (T'aitung)
Nervilia taiwaniana var. ratis
Oberonia formosana
Oberonia linguae
Oberonia segawae – central and southern Taiwan
Odontochilus bisaccatus
Odontochilus brevistylis subsp. candidus
Odontochilus formosanus
Odontochilus gouanyuanensis
Odontochilus humilis
Odontochilus integrus – Lan Yü
Oreorchis bilamellata – central and southern Taiwan
Oreorchis wumanae
Peristylus gracilis subsp. insularis – Lan Yü
Phalaenopsis formosana – southeastern Taiwan, including islands
Phreatia morii
Phreatia taiwaniana
Platanthera alboflora
Platanthera brevicalcarata subsp. brevicalcarata
Platanthera devolii – northern and central Taiwan
Platanthera formosana
Platanthera hohuanshanensis
Platanthera longicalcarata – north-central and central Taiwan
Platanthera nantousylvatica
Platanthera pachyglossa – central and south-central Taiwan
Platanthera peichatieniana – northern Taiwan (Beicha Shan)
Platanthera quadricalcarata
Platanthera taiwanensis – central and southern Taiwan
Rhomboda lalashanensis
Sarcophyton taiwanianum – southern Taiwan
Spiranthes nivea
Spiranthes nivea var. nivea – southern Taiwan
Spiranthes nivea var. papillata – northeastern Taiwan
Tainia dunnii f. caterva
Tainia elliptica – northern Taiwan
Tainia hohuanshanensis
Tainia hualienia – eastern Taiwan
Tipularia odorata – northern and central Taiwan
Tuberolabium kotoense – southern Taiwan including Lan Yü
Vanda lamellata var. taiwuensis
Zeuxine arisanensis
Zeuxine flava var. pingtungensis
Zeuxine kantokeiensis – central Taiwan
Zeuxine lalashanensis
Zeuxine niijimae – central Taiwan (Nantou)
Zeuxine yehii
Orobanchaceae
Euphrasia nankotaizanensis
Euphrasia tarokoana – Hualian Xian
Euphrasia transmorrisonensis
Euphrasia transmorrisonensis var. durietziana
Euphrasia transmorrisonensis var. transmorrisonensis
Pedicularis ikomae – northeastern Taiwan
Pedicularis nanfutashanensis
Pedicularis refracta var. transmorrisonensis – north-central Taiwan
Striga crispata
Oxalidaceae
Oxalis daitunensis
Oxalis griffithii subsp. taimonii
Oxalis taitastricta
Papaveraceae
Corydalis campulicarpa
Pentaphylacaceae
Adinandra formosana
Adinandra formosana var. formosana
Adinandra formosana var. obtusissima – southern Taiwan
Adinandra lasiostyla – central and southern Taiwan
Adinandra taiwanensis
Cleyera japonica var. taipinensis – northern and central Taiwan
Cleyera lipingensis var. taipinensis – northern and central Taiwan
Cleyera longicarpa – northern Taiwan
Eurya chichaoyangensis
Eurya citrifolia
Eurya crenatifolia – northern and eastern Taiwan
Eurya glaberrima
Eurya guanwuensis
Eurya leptophylla – central and eastern Taiwan
Eurya rengechiensis – Taizhong
Eurya septata
Eurya shaolinchiensis
Eurya taianensis
Eurya taitungensis – Hualian
Phyllanthaceae
Flueggea taiwanensis
Glochidion lanyuense – Lan Yü
Phyllanthus niinamii
Phyllanthus oligospermus subsp. oligospermus
Pinaceae
Abies kawakamii – central Taiwan
Keteleeria davidiana var. formosana
Picea morrisonicola – central Taiwan
Pinus armandi var. mastersiana – Ali-shan and Yu-shan
Pinus morrisonicola
Pinus taiwanensis
Pinus taiwanensis var. fragilissima – eastern Taiwan (Kuan Shan)
Pinus taiwanensis var. taiwanensis
Piperaceae
Peperomia tairukouensis
Piper kwashoense – southern Taiwan
Piper lanyuense
Piper okamotoi
Piper taiwanense
Pittosporaceae
Pittosporum daphniphylloides var. daphniphylloides
Pittosporum tobira var. calvescens – northern Taiwan
Pittosporum viburnifolium – southern Taiwan
Plantaginaceae
Callitriche raveniana
Veronica morrisonicola
Veronica oligosperma
Veronica shichengensis
Veronica taiwanica – Ilan Xian
Veronica wulingensis
Veronicastrum formosanum – Hualian Xian
Veronicastrum loshanense – eastern Taiwan
Plumbaginaceae
Limonium wrightii var. luteum
Poaceae
Ampelocalamus naibunensis
Arundinella taiwanica
Arundo formosana var. gracilis – northern and western Taiwan
Bambusa odashimae
Bambusa utilis
Brachypodium kawakamii
Digitaria fauriei
Elymus formosanus
Eragrostis fauriei
Festuca hondae
Helictotrichon abietetorum
Lolium formosanum – northeastern Taiwan
Microstegium fauriei
Mnesithea laevis var. chenii
Panicum taiwanense
Poa nankoensis
Poa takasagomontana
Poa tenuicula
Spodiopogon formosanus
Podocarpaceae
Podocarpus nakaii
Polygalaceae
Polygala arcuata – central and southern Taiwan
Polygala sandiaochiaoensis
Polygala taiwanensis
Polygonaceae
Koenigia yatagaiana
Persicaria pilushanensis
Polygonum hohuanshanense
Polygonum ilanense
Polygonum loshanense
Polygonum taiwanense
Polypodiaceae
Arachniodes pseudoaristata
Bolbitis lianhuachihensis
Bolbitis × nanjenensis (B. appendiculata × B. heteroclita)
Cyrtomium simadae
Cyrtomium taiwanianum
Davallia chrysanthemifolia
Dryopteris × holttumii (D. apiciflora × D. maximowicziana)
Dryopteris kwanzanensis
Dryopteris pseudolunanensis
Dryopteris pseudosieboldii
Dryopteris subatrata
Dryopteris subexaltata
Dryopteris tenuipes
Goniophlebium raishaense
Grammitis moorei
Grammitis nuda
Grammitis taiwanensis
Lepisorus kawakamii
Lepisorus megasorus
Lepisorus monilisorus
Lepisorus pseudoussuriensis
Loxogramme biformis
Loxogramme remote-frondigera
Polystichum × gemmilachenense
Polystichum integripinnum
Polystichum parvipinnulum
Polystichum pseudodeltodon
Polystichum × pseudoparvipinnulum
Polystichum pseudostenophyllum
Polystichum × silviamontanum
Polystichum subapiciflorum
Polystichum subobliquum
Polystichum taizhongense
Selliguea echinospora
Selliguea falcatopinnata – southern Taiwan and Lan Yü
Selliguea taiwanensis
Tectaria subfuscipes
Primulaceae
Ardisia cornudentata
Ardisia cornudentata subsp. cornudentata
Ardisia cornudentata subsp. morrisonensis
Ardisia cornudentata var. stenosepala
Ardisia violacea
Lysimachia chingshuiensis – Ch'ing-shui Shan
Lysimachia taiwaniana
Maesa hengchunensis
Maesa hotungensis
Maesa lanyuensis
Maesa perlaria var. formosana
Maesa tairukouensis
Primula miyabeana
Proteaceae
Helicia rengetiensis
Helicia yingtzulinia
Pteridaceae
Adiantum formosanum
Adiantum meishanianum
Adiantum menglianense – south-central Taiwan
Adiantum taiwanianum
Antrophyum castaneum
Haplopteris heterophylla
Pteris angustipinna
Pteris austrotaiwanensis – southern Taiwan
Pteris incurvata
Pteris longipinna
Pteris rugosifolia
Ranunculaceae
Aconitum formosanum – northern Taiwan
Aconitum fukutomei
Actaea taiwanensis
Aquilegia kozakii – northeastern Taiwan (Mt. Taipingshan)
Calathodes polycarpa
Clematis akoensis – southern Taiwan
Clematis chinensis var. tatushanensis
Clematis formosana – eastern and southern Taiwan
Clematis lishanensis – central Taiwan
Clematis morii – central Taiwan
Clematis parviloba var. bartlettii – central Taiwan
Clematis psilandra – central Taiwan
Clematis tamurae
Clematis terniflora var. garanbiensis – southern Taiwan
Clematis tsugetorum – northern Taiwan
Dichocarpum arisanense
Dichocarpum uniflorum
Ranunculus cheirophyllus
Ranunculus formosa-montanus – Nanhu Dashan
Ranunculus junipericola
Ranunculus matsudae
Ranunculus morii – northern Taiwan
Ranunculus nankotaizanus – Nanhu Dashan
Ranunculus taisanensis
Ranunculus taiwanensis
Thalictrum lecoyeri var. debilistylum
Thalictrum myriophyllum – northern Taiwan
Thalictrum oshimae
Thalictrum rubescens – northern Taiwan
Thalictrum sessile
Thalictrum urbaini
Thalictrum urbaini var. majus
Thalictrum urbaini var. urbaini
Trollius taihasenzanensis
Rhamnaceae
Berchemia arisanensis
Berchemia fenchifuensis
Berchemia paniculata
Rhamnus formosana
Rhamnus nakaharae
Rhamnus pilushanensis
Rhamnus salixiophylla
Rhamnus utilis var. chingshuiensis
Sageretia randaiensis – northern and central Taiwan
Ventilago elegans
Rosaceae
Argentina tugitakensis
Cotoneaster chingshuiensis
Cotoneaster hualiensis
Cotoneaster konishii
Cotoneaster morrisonensis
Cotoneaster nantouensis
Cotoneaster rosiflorus
Cotoneaster siyuanensis
Cotoneaster taiwanensis
Cotoneaster tetrapetalus
Filipendula kiraishiensis
Fragaria hayatae
Fragaria tayulinensis
Macromeles formosana
Photinia chingshuiensis
Photinia serratifolia var. ardisiifolia – eastern Taiwan
Photinia serratifolia var. daphniphylloides – eastern Taiwan
Photinia serratifolia var. lasiopetala – central Taiwan
Potentilla morrisonensis
Pourthiaea lucida
Prinsepia scandens
Prunus chiliangensis
Prunus guanwuensis
Prunus tayulinensis
Prunus transarisanensis
Pyracantha koidzumii
Pyrus alpinotaiwaniana
Rhaphiolepis indica f. impressivena
Rhaphiolepis indica var. shilanensis
Rhaphiolepis indica var. tashiroi – northern and southern Taiwan
Rosa hohuanlinparvifolia
Rosa morrisonensis – Yushan
Rosa pricei
Rosa shaolinchiensis
Rosa taiwanensis
Rosa yilanalpina
Rubus arachnoideus – eastern Taiwan
Rubus cuifengensis
Rubus glandulosocalycinus – central and northern Taiwan
Rubus hohuanshanensis
Rubus kawakamii
Rubus lanyuensis
Rubus liui – northeastern Taiwan
Rubus parviaraliifolius
Rubus parvifolius var. toapiensis – eastern Taiwan
Rubus siyuanensis
Rubus taitoensis
Rubus taitoensis var. aculeatiflorus
Rubus taitoensis var. taitoensis
Rubus taiwanicola – central Taiwan
Rubus tayulinensis
Rubus yuenfengensis
Rubus yuliensis – eastern Taiwan
Sorbus randaiensis
Spiraea japonica var. formosana
Spiraea morrisonicola
Spiraea morrisonicola var. hayatana
Spiraea morrisonicola var. morrisonicola
Spiraea prunifolia var. pseudoprunifolia
Spiraea tarokoensis – eastern Taiwan
Spiraea tatakaensis – central Taiwan
Rubiaceae
Conjugatovarium
Conjugatovarium lalashanianum
Damnacanthus angustifolius
Galium alboflorum
Galium echinocarpum – central Taiwan
Galium formosense – Gaoxiong
Galium guanwuense
Galium hohuanshanense
Galium lishanense
Galium minutissimum – Hualian
Galium morii – Jiayi
Galium nanhumontanum
Galium nankotaizanum – mountains of Taiwan
Galium shengkuangense
Galium siyuanianum
Galium taiwanense – northern Taiwan mountains
Galium takasagomontanum
Galium tarokoense
Lasianthus simizui
Leptopetalum taiwanense – Taiwan including Lü Tao
Mussaenda acalycophylla
Mussaenda darenensis
Mussaenda horenensis
Neanotis formosana
Oldenlandia butensis – Yilan
Ophiorrhiza hayatana
Rubia linii
Scleromitrion sirayanum
Shaolinchiana
Shaolinchiana lalashaniana
Shaolinchiana taiwaniana
Shaolinchiana tungyanshaniana
Tangshuia
Tangshuia pitouchaoensis
Theligonum formosanum – western Taiwan (Pingtung: Tawushan)
Wendlandia erythroxylon
Rutaceae
Glycosmis erythrocarpa
Skimmia japonica subsp. distincte-venulosa
Skimmia japonica var. orthoclada
Zanthoxylum wutaiense – Pingdong
Sabiaceae
Sabia transarisanensis
Salicaceae
Salix doii
Salix fulvopubescens
Salix kunyangensis
Salix kusanoi
Salix morrisonicola
Salix morrisonicola var. morrisonicola
Salix morrisonicola var. takasagoalpina
Salix okamotoana
Salix pilushanensis
Salix tagawana
Salix taiwanalpina
Santalaceae
Viscum taiwanianum
Sapindaceae
Acer albopurpurascens
Acer buergerianum var. formosanum – northern and central Taiwan
Acer caudatifolium
Acer morrisonense
Acer serrulatum – northern and central Taiwan
Acer tutcheri subsp. formosanum
Koelreuteria elegans subsp. formosana
Saxifragaceae
Asimitellaria formosana
Astilbe longicarpa
Astilbe macroflora – central Taiwan
Chrysosplenium hebetatum
Chrysosplenium lanuginosum var. formosanum
Chrysosplenium taiwanianum
Schisandraceae
Illicium arborescens
Illicium × rubellum
Schisandra arisanensis subsp. arisanensis
Scrophulariaceae
Scrophularia formosana – Taidong Xian
Scrophularia yoshimurae
Selaginellaceae
Selaginella devolii
Selaginella helvetica subsp. pseudonipponica
Simaroubaceae
Ailanthus altissima var. tanakae – northern Taiwan
Smilacaceae
Smilax horridiramula – central and eastern Taiwan
Smilax insularis – southern Taiwan
Smilax luei – central Taiwan
Smilax nantoensis – central Taiwan
Smilax taipeiensis – northern Taiwan
Smilax taiwanensis
Smilax tungyuanensis
Solanaceae
Solanum chingchunense
Solanum peikuoense
Staphyleaceae
Staphylea formosana
Styracaceae
Alniphyllum pterospermum
Styrax suberifolius var. hayataianus
Symplocaceae
Symplocos juiyenensis – central Taiwan
Symplocos koidzumiana
Symplocos migoi
Symplocos nokoensis
Symplocos shilanensis
Symplocos sonoharae var. formosana
Symplocos sumuntia var. modesta
Symplocos taiwanensis
Taxaceae
Amentotaxus formosana – southeastern Taiwan
Theaceae
Camellia chinmeiae
Camellia guanwuensis
Camellia hsinpeiensis
Camellia tungyanshanensis
Thymelaeaceae
Daphne arisanensis
Daphne chingshuishaniana – Ch'ing-shui Shan
Daphne morrisonensis – Mt. Yushan
Daphne nana – eastern Taiwan
Daphne yangtoushanensis
Wikstroemia mononectaria
Wikstroemia taiwanensis
Ulmaceae
Ulmus uyematsui – central Taiwan
Urticaceae
Chamabainia guanwuensis
Chamabainia meifenggensis
Dendrocnide kotoensis – Taidong
Elatostema acuteserratum – southeastern Taiwan including Lan Yü
Elatostema amoenum
Elatostema caudifolium
Elatostema elongatopeduncellatum
Elatostema guanwuense
Elatostema hirtellipedunculata
Elatostema × hybrida (E. lineolatum × E. platyphyllum)
Elatostema hypoglaucum
Elatostema lalashanense
Elatostema liutangshuii
Elatostema nanhumontanum
Elatostema pauciflorum
Elatostema rivulare
Elatostema siyuanwukouense
Elatostema strigillosum – Taidong
Elatostema subcoriaceum – Lan Yü
Elatostema taiwanense
Elatostema taoyuanense
Elatostema tungyanshanense
Elatostema villosum – southern Taiwan
Laportea taiwanensis
Parietaria taiwaniana
Pilea funkikensis
Pilea loshanensis
Pilea matsudae
Pilea rotundinucula
Pilea somae – southern Taiwan
Pilea taiwanensis
Pilea yingshaoyaoana
Pouzolzia sanguinea var. formosana
Pouzolzia taiwaniana
Urtica taiwaniana – central Taiwan
Urtica thunbergiana subsp. perserrata
Viburnaceae
Viburnum formosanum subsp. formosanum
Viburnum hayatae
Viburnum odoratissimum var. arboricola
Viburnum parvifolium
Viburnum pilushanicum
Viburnum plicatum var. formosanum – northern Taiwan
Viburnum taiwanianum – central and southern Taiwan
Violaceae
Viola adenothrix
Viola adenothrix var. adenothrix
Viola adenothrix var. tsugitakaensis
Viola betonicifolia var. yuanfengia
Viola formosana
Viola formosana var. formosana
Viola formosana var. kawakamii
Viola lungtungensis
Viola nagasawae
Viola nagasawae var. nagasawae
Viola nagasawae var. pricei
Viola obtusa var. tsuifengensis – Nantou
Viola pilushanensis
Viola pitouchaoensis
Viola pubipetala
Viola sandaiojiaoensis
Viola senzanensis
Viola shaoyoukengensis
Viola shinchikuensis
Viola wulinfarmensis
Viola xibaoensis
Vitaceae
Cissus pingtungensis
Pseudocayratia pengiana
Tetrastigma lanyuense
Xyridaceae
Xyris formosana
Zingiberaceae
Alpinia × ilanensis (A. japonica × A. pricei) – northeastern Taiwan
Alpinia kawakamii – southern Taiwan
Alpinia koshunensis – southern Taiwan
Alpinia kusshakuensis (perhaps A. shimadae × A. uraiensis) – northern Taiwan
Alpinia lalashanensis
Alpinia mesanthera
Alpinia nantoensis
Alpinia oui
Alpinia pricei – eastern Taiwan
Alpinia sessiliflora – central Taiwan
Alpinia shimadae
Alpinia shoukaensis
Alpinia tonrokuensis – northern Taiwan
Alpinia uraiensis – northern Taiwan
Zingiber chengii
Zingiber kawagoei
Zingiber pleiostachyum – southern Taiwan
Zingiber shuanglongense
Cultivated crops endemic to Taiwan
Spodiopogon formosanus
Symplocos trichoclada
See also
List of protected species in Taiwan
:Category:Endemic flora of Taiwan
:Category:Endemic fauna of Taiwan
References
External links
Taiwan Endemic Species Research Institute
Taiwan Biodiversity National Information Network
Taiwan Ecological Research Network
Ecogrid project
Biota of Taiwan
Endemic | List of endemic species of Taiwan | [
"Biology"
] | 14,732 | [
"Biota by country",
"Biota of Taiwan"
] |
1,055,795 | https://en.wikipedia.org/wiki/DSTN | DSTN (double super twisted nematic), also known as dual-scan super twisted nematic or simply dual-scan, is an LCD technology in which the screen is divided into two halves that are refreshed simultaneously, giving a faster refresh rate than traditional passive-matrix screens. It is an improved form of supertwist nematic display that offers low power consumption but inferior sharpness and brightness compared to TFT screens.
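The refresh-rate benefit of splitting the panel can be illustrated with a rough, first-order timing model. The sketch below is illustrative only: the panel height, row-address time, and the assumption that each half has its own row drivers are example values chosen for demonstration, not the specifications of any particular DSTN product.

```python
# Illustrative first-order model of passive-matrix refresh timing.
# The panel height and row-address time below are assumed example values,
# not specifications of any real STN or DSTN panel.

def effective_refresh_hz(total_rows: int, row_address_time_s: float, scan_regions: int = 1) -> float:
    """Effective full-frame refresh rate for a passive-matrix panel.

    Each region is assumed to have its own drivers and to be scanned in
    parallel, so a frame only takes as long as scanning one region's rows.
    """
    rows_per_region = total_rows / scan_regions
    frame_time_s = rows_per_region * row_address_time_s
    return 1.0 / frame_time_s

if __name__ == "__main__":
    ROWS = 480            # assumed VGA-height panel
    ROW_TIME = 70e-6      # assumed 70 microseconds to address one row

    single_scan = effective_refresh_hz(ROWS, ROW_TIME, scan_regions=1)
    dual_scan = effective_refresh_hz(ROWS, ROW_TIME, scan_regions=2)

    print(f"single-scan STN : {single_scan:.1f} Hz")   # ~29.8 Hz
    print(f"dual-scan (DSTN): {dual_scan:.1f} Hz")     # ~59.5 Hz
```

Under these assumptions, halving the number of rows each driver section must scan roughly doubles the achievable full-frame refresh rate for the same per-row timing, which is the improvement dual-scan designs aimed for.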
History
For several years (from the early 1990s to the early 2000s), TFT screens were found only in high-end laptops because they were more expensive; lower-end laptops offered only DSTN screens. This was at a time when the screen was often the most expensive component of a laptop. The price difference between a laptop with a DSTN screen and one with a TFT screen could easily be $400 or more. However, TFT gradually became cheaper and essentially captured the entire market, before being displaced by IPS (itself in the process of being replaced by OLED, starting with high-end models).
DSTN display quality is poor compared to TFT, with visible noise, smearing, much lower contrast and slow response. Such screens are unsuitable for viewing movies or playing video games of any kind.
References
Liquid crystal displays
Display technology | DSTN | [
"Engineering"
] | 259 | [
"Electronic engineering",
"Display technology"
] |
1,055,864 | https://en.wikipedia.org/wiki/Federal%20Chancellery%20of%20Germany | The Federal Chancellery (Bundeskanzleramt) is a German federal agency serving the executive office of the chancellor of Germany, the head of the federal government, currently Olaf Scholz. The Chancellery's primary function is to assist the chancellor in coordinating the activities of the federal government. The head of the Chancellery (Chef des Bundeskanzleramts) holds the rank of either a Secretary of State or a Federal Minister; the post is currently held by Wolfgang Schmidt. The headquarters of the German Chancellery is at the Federal Chancellery building in Berlin, which is the largest government headquarters in the world.
History
When the North German Confederation was created in 1867, the constitution mentioned only the Bundeskanzler as the responsible executive officer. There was no collegial government with ministers. At first, Federal Chancellor Otto von Bismarck established only the Bundeskanzleramt as his office. It was the only 'ministry' of the country until the Prussian foreign office became the North German foreign office in early 1870. On that occasion, the Bundeskanzleramt lost some tasks to the foreign office.
Reichskanzleramt
When the North German Confederation became the German Empire in 1871, the Bundeskanzleramt was renamed the Reichskanzleramt. It initially had its seat in the Radziwiłł Palace (also known as the Reichskanzlerpalais), built by Prince Antoni Radziwiłł at Wilhelmstraße 77 in Berlin. More and more imperial offices were split off from the Reichskanzleramt, e.g. the Reichsjustizamt (Office for National Justice) in 1877. What remained of the Reichskanzleramt became the Reichsamt des Innern (the home office) in 1879.
Reichskanzlei
In 1878, Imperial Chancellor Bismarck created a new office for the chancellor's affairs, the Reichskanzlei. It kept this name over the years, including under the republic from 1919. In 1938–39 the Neue Reichskanzlei (New Imperial Chancellery), designed by Albert Speer, was built; its main entrance was located at Voßstraße 6, and the building occupied the entire northern side of the street. It was damaged during World War II and later demolished by the Soviet occupation forces.
Bundeskanzleramt
In 1949, the Federal Republic was created and Bonn was made the provisional capital. Federal Chancellor Konrad Adenauer used the Museum Koenig for the first two months and then moved the Bundeskanzleramt into Palais Schaumburg, where it remained until a new Chancellery building was completed in 1976. The new West German Chancellery building was a black structure in the International Style, an unassuming example of modernism.
In 1999, the headquarters of the Federal Chancellery were moved from Bonn to Berlin under the Berlin-Bonn Act, first into the Staatsratsgebäude and then, in 2001, to the new building on the Spreebogen; since 2001 the secondary seat of the Federal Chancellery has been Palais Schaumburg. A separate building, the Kanzlerbungalow, served as the private residence of the Chancellor and his family from 1964 to 1999.
Headquarters
Bundeskanzleramt is also the name of the building in Berlin that houses the personal offices of the chancellor and the Chancellery staff. Palais Schaumburg in Bonn is the secondary official seat of the German Federal Chancellery.
Opened in the spring of 2001, the current Chancellery building was designed by Charlotte Frank and Axel Schultes and was built by a joint venture of Royal BAM Group's subsidiary Wayss & Freytag and the Spanish company Acciona. Occupying 12,000 square meters (129,166 square feet), it is also the largest government headquarters building in the world. By comparison, the new Chancellery building is ten times the size of the White House.
Because of its distinctive but controversial architecture, journalists, tourist guides and some locals refer to the buildings as Kohllosseum (as a mix of Colosseum and former chancellor Helmut Kohl under whom it was built), Bundeswaschmaschine (federal laundry machine, because of the round-shaped windows and its cubic form), or Elefantenklo (elephant loo).
Access for the general public is only possible on particular days during the year. Since 1999, the German government has welcomed the general public for one weekend per year to visit its buildings – usually in August.
Heads of the Chancellery
Heads of the German Chancellery (Chef des Bundeskanzleramts, ChefBK) attend Cabinet meetings. They may also sit as members of the Cabinet if they are additionally given the position of Minister for Special Affairs (Minister für besondere Aufgaben); in that case they are often called "Kanzleramtsminister" (chancellery minister). Otherwise, they have the rank of a secretary of state (comparable to a minor or vice minister in other countries).
The current Head of the Chancellery is Wolfgang Schmidt.
Typically, the ChefBK is a very close advisor of the chancellor and the primary contact for the cabinet ministers. Many of them later became cabinet ministers (with other portfolios) themselves, several of them ministers of the interior. Frank-Walter Steinmeier, who served as Head of the Chancellery under Schröder (1999–2005), later served as minister of foreign affairs (2005–2009 and 2013–2017), candidate for chancellor (2009), leader of the opposition (2009–2013), and ultimately in the largely ceremonial role of federal president (2017–).
See also
Berlin Police
German Chancery (Deutsche Kanzlei) – government agency located in London during the reigns of the Hanoverian kings of the UK
Wachbataillon
References
External links
1871 establishments in Germany
History of Berlin
Bonn
German federal agencies
Buildings and structures in Berlin
Government buildings completed in 2001
Postmodern architecture
Germany | Federal Chancellery of Germany | [
"Engineering"
] | 1,219 | [
"Postmodern architecture",
"Architecture"
] |
1,055,890 | https://en.wikipedia.org/wiki/Sustainable%20energy | Energy is sustainable if it "meets the needs of the present without compromising the ability of future generations to meet their own needs." Definitions of sustainable energy usually look at its effects on the environment, the economy, and society. These impacts range from greenhouse gas emissions and air pollution to energy poverty and toxic waste. Renewable energy sources such as wind, hydro, solar, and geothermal energy can cause environmental damage but are generally far more sustainable than fossil fuel sources.
The role of non-renewable energy sources in sustainable energy is controversial. Nuclear power does not produce carbon pollution or air pollution, but has drawbacks that include radioactive waste, the risk of nuclear proliferation, and the risk of accidents. Switching from coal to natural gas has environmental benefits, including a lower climate impact, but may lead to a delay in switching to more sustainable options. Carbon capture and storage can be built into power plants to remove their carbon dioxide (CO2) emissions, but this technology is expensive and has rarely been implemented.
Fossil fuels provide 85% of the world's energy consumption, and the energy system is responsible for 76% of global greenhouse gas emissions. Around 790 million people in developing countries lack access to electricity, and 2.6 billion rely on polluting fuels such as wood or charcoal to cook. Air pollution from cooking with biomass, together with fossil fuel pollution, causes an estimated 7 million deaths each year. Limiting global warming to 1.5 °C (2.7 °F) will require transforming energy production, distribution, storage, and consumption. Universal access to clean electricity can have major benefits to the climate, human health, and the economies of developing countries.
Climate change mitigation pathways have been proposed to limit global warming to 1.5 °C (2.7 °F). These include phasing out coal-fired power plants, conserving energy, producing more electricity from clean sources such as wind and solar, and switching from fossil fuels to electricity for transport and heating buildings. Power output from some renewable energy sources varies depending on when the wind blows and the sun shines. Switching to renewable energy can therefore require electrical grid upgrades, such as the addition of energy storage. Some processes that are difficult to electrify can use hydrogen fuel produced from low-emission energy sources. In the International Energy Agency's proposal for achieving net zero emissions by 2050, about 35% of the reduction in emissions depends on technologies that are still in development as of 2023.
Wind and solar market share grew to 8.5% of worldwide electricity in 2019, and costs continue to fall. The Intergovernmental Panel on Climate Change (IPCC) estimates that 2.5% of world gross domestic product (GDP) would need to be invested in the energy system each year between 2016 and 2035 to limit global warming to 1.5 °C (2.7 °F). Governments can fund the research, development, and demonstration of new clean energy technologies. They can also build infrastructure for electrification and sustainable transport. Finally, governments can encourage clean energy deployment with policies such as carbon pricing, renewable portfolio standards, and phase-outs of fossil fuel subsidies. These policies may also increase energy security.
Definitions and background
Definitions
The United Nations Brundtland Commission described the concept of sustainable development, for which energy is a key component, in its 1987 report Our Common Future. It defined sustainable development as meeting "the needs of the present without compromising the ability of future generations to meet their own needs". This description of sustainable development has since been referenced in many definitions and explanations of sustainable energy.
There is no universally accepted interpretation of how the concept of sustainability applies to energy on a global scale. Working definitions of sustainable energy encompass multiple dimensions of sustainability such as environmental, economic, and social dimensions. Historically, the concept of sustainable energy development has focused on emissions and on energy security. Since the early 1990s, the concept has broadened to encompass wider social and economic issues.
The environmental dimension of sustainability includes greenhouse gas emissions, impacts on biodiversity and ecosystems, hazardous waste and toxic emissions, water consumption, and depletion of non-renewable resources. Energy sources with low environmental impact are sometimes called green energy or clean energy. The economic dimension of sustainability covers economic development, efficient use of energy, and energy security to ensure that each country has constant access to sufficient energy. Social issues include access to affordable and reliable energy for all people, workers' rights, and land rights.
Environmental impacts
The current energy system contributes to many environmental problems, including climate change, air pollution, biodiversity loss, the release of toxins into the environment, and water scarcity. As of 2019, 85% of the world's energy needs are met by burning fossil fuels. Energy production and consumption are responsible for 76% of annual human-caused greenhouse gas emissions as of 2018. The 2015 international Paris Agreement on climate change aims to limit global warming to well below 2 °C (3.6 °F) and preferably to 1.5 °C (2.7 °F); achieving this goal will require that emissions be reduced as soon as possible and reach net-zero by mid-century.
The burning of fossil fuels and biomass is a major source of air pollution, which causes an estimated 7 million deaths each year, with the greatest attributable disease burden seen in low and middle-income countries. Fossil-fuel burning in power plants, vehicles, and factories is the main source of the sulfur dioxide and nitrogen oxide emissions that react in the atmosphere to form acid rain. Air pollution is the second-leading cause of death from non-infectious disease. An estimated 99% of the world's population lives with levels of air pollution that exceed the World Health Organization recommended limits.
Cooking with polluting fuels such as wood, animal dung, coal, or kerosene is responsible for nearly all indoor air pollution, which causes an estimated 1.6 to 3.8 million deaths annually, and also contributes significantly to outdoor air pollution. Health effects are concentrated among women, who are likely to be responsible for cooking, and young children.
Environmental impacts extend beyond the by-products of combustion. Oil spills at sea harm marine life and may cause fires which release toxic emissions. Around 10% of global water use goes to energy production, mainly for cooling in thermal energy plants. In dry regions, this contributes to water scarcity. Bioenergy production, coal mining and processing, and oil extraction also require large amounts of water. Excessive harvesting of wood and other combustible material for burning can cause serious local environmental damage, including desertification.
Sustainable development goals
Meeting existing and future energy demands in a sustainable way is a critical challenge for the global goal of limiting climate change while maintaining economic growth and enabling living standards to rise. Reliable and affordable energy, particularly electricity, is essential for health care, education, and economic development. As of 2020, 790 million people in developing countries do not have access to electricity, and around 2.6 billion rely on burning polluting fuels for cooking.
Improving energy access in the least-developed countries and making energy cleaner are key to achieving most of the United Nations 2030 Sustainable Development Goals, which cover issues ranging from climate action to gender equality. Sustainable Development Goal 7 calls for "access to affordable, reliable, sustainable and modern energy for all", including universal access to electricity and to clean cooking facilities by 2030.
Energy conservation
Energy efficiency—using less energy to deliver the same goods or services, or delivering comparable services with fewer goods—is a cornerstone of many sustainable energy strategies. The International Energy Agency (IEA) has estimated that increasing energy efficiency could achieve 40% of the greenhouse gas emission reductions needed to fulfil the Paris Agreement's goals.
Energy can be conserved by increasing the technical efficiency of appliances, vehicles, industrial processes, and buildings. Another approach is to use fewer materials whose production requires a lot of energy, for example through better building design and recycling. Behavioural changes such as using videoconferencing rather than business flights, or making urban trips by cycling, walking or public transport rather than by car, are another way to conserve energy. Government policies to improve efficiency can include building codes, performance standards, carbon pricing, and the development of energy-efficient infrastructure to encourage changes in transport modes.
The energy intensity of the global economy (the amount of energy consumed per unit of gross domestic product (GDP)) is a rough indicator of the energy efficiency of economic production. In 2010, global energy intensity was 5.6 megajoules (1.6 kWh) per US dollar of GDP. United Nations goals call for energy intensity to decrease by 2.6% each year between 2010 and 2030. In recent years this target has not been met. For instance, between 2017 and 2018, energy intensity decreased by only 1.1%.
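As a rough illustration of what the 2.6% annual target implies, the compound effect over 2010–2030 can be computed directly; the figures below reuse the numbers quoted above, and the calculation itself is a hypothetical back-of-the-envelope check, not from the source.

```python
# Hypothetical illustration: cumulative effect of the UN target of a 2.6%
# annual decline in global energy intensity between 2010 and 2030.
start_intensity = 5.6        # MJ per US dollar of GDP in 2010 (from the text)
annual_decline = 0.026       # UN target
years = 20                   # 2010 to 2030

implied_2030 = start_intensity * (1 - annual_decline) ** years
print(f"Implied 2030 intensity: {implied_2030:.2f} MJ/USD "
      f"({(1 - implied_2030 / start_intensity) * 100:.0f}% below the 2010 level)")
# Roughly 3.3 MJ/USD, i.e. about a 40% cumulative reduction.
```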
Efficiency improvements often lead to a rebound effect in which consumers use the money they save to buy more energy-intensive goods and services. For example, recent technical efficiency improvements in transport and buildings have been largely offset by trends in consumer behaviour, such as selecting larger vehicles and homes.
Sustainable energy sources
Renewable energy sources
Renewable energy sources are essential to sustainable energy, as they generally strengthen energy security and emit far fewer greenhouse gases than fossil fuels. Renewable energy projects sometimes raise significant sustainability concerns, such as risks to biodiversity when areas of high ecological value are converted to bioenergy production or wind or solar farms.
Hydropower is the largest source of renewable electricity while solar and wind energy are growing rapidly. Photovoltaic solar and onshore wind are the cheapest forms of new power generation capacity in most countries. For more than half of the 770 million people who currently lack access to electricity, decentralised renewable energy such as solar-powered mini-grids is likely the cheapest method of providing it by 2030. United Nations targets for 2030 include substantially increasing the proportion of renewable energy in the world's energy supply.
According to the International Energy Agency, renewable energy sources like wind and solar power are now commonplace sources of electricity, accounting for 70% of all new investment in the world's power generation. The Agency expects renewables to become the primary source of electricity generation globally in the next three years, overtaking coal.
Solar
The Sun is Earth's primary source of energy, a clean and abundantly available resource in many regions. In 2019, solar power provided around 3% of global electricity, mostly through solar panels based on photovoltaic cells (PV). Solar PV is expected to be the electricity source with the largest installed capacity worldwide by 2027. The panels are mounted on top of buildings or installed in utility-scale solar parks. Costs of solar photovoltaic cells have dropped rapidly, driving strong growth in worldwide capacity. The cost of electricity from new solar farms is competitive with, or in many places, cheaper than electricity from existing coal plants. Various projections of future energy use identify solar PV as one of the main sources of energy generation in a sustainable mix.
Most components of solar panels can be easily recycled, but this is not always done in the absence of regulation. Panels typically contain heavy metals, so they pose environmental risks if put in landfills. It takes fewer than two years for a solar panel to produce as much energy as was used for its production. Less energy is needed if materials are recycled rather than mined.
In concentrated solar power, solar rays are concentrated by a field of mirrors, heating a fluid. Electricity is produced from the resulting steam with a heat engine. Concentrated solar power can support dispatchable power generation, as some of the heat is typically stored to enable electricity to be generated when needed. In addition to electricity production, solar energy is used more directly; solar thermal heating systems are used for hot water production, heating buildings, drying, and desalination.
Wind power
Wind has been an important driver of development over millennia, providing mechanical energy for industrial processes, water pumps, and sailing ships. Modern wind turbines are used to generate electricity and provided approximately 6% of global electricity in 2019. Electricity from onshore wind farms is often cheaper than existing coal plants and competitive with natural gas and nuclear. Wind turbines can also be placed offshore, where winds are steadier and stronger than on land but construction and maintenance costs are higher.
Onshore wind farms, often built in wild or rural areas, have a visual impact on the landscape. While collisions with wind turbines kill both bats and to a lesser extent birds, these impacts are lower than from other infrastructure such as windows and transmission lines. The noise and flickering light created by the turbines can cause annoyance and constrain construction near densely populated areas. Wind power, in contrast to nuclear and fossil fuel plants, does not consume water. Little energy is needed for wind turbine construction compared to the energy produced by the wind power plant itself. Turbine blades are not fully recyclable, and research into methods of manufacturing easier-to-recycle blades is ongoing.
Hydropower
Hydroelectric plants convert the energy of moving water into electricity. In 2020, hydropower supplied 17% of the world's electricity, down from a high of nearly 20% in the mid-to-late 20th century.
In conventional hydropower, a reservoir is created behind a dam. Conventional hydropower plants provide a highly flexible, dispatchable electricity supply. They can be combined with wind and solar power to meet peaks in demand and to compensate when wind and sun are less available.
Compared to reservoir-based facilities, run-of-the-river hydroelectricity generally has less environmental impact. However, its ability to generate power depends on river flow, which can vary with daily and seasonal weather. Reservoirs provide water quantity controls that are used for flood control and flexible electricity output while also providing security during drought for drinking water supply and irrigation.
Hydropower ranks among the energy sources with the lowest levels of greenhouse gas emissions per unit of energy produced, but levels of emissions vary enormously between projects. The highest emissions tend to occur with large dams in tropical regions. These emissions are produced when the biological matter that becomes submerged in the reservoir's flooding decomposes and releases carbon dioxide and methane. Deforestation and climate change can reduce energy generation from hydroelectric dams. Depending on location, large dams can displace residents and cause significant local environmental damage; potential dam failure could place the surrounding population at risk.
Geothermal
Geothermal energy is produced by tapping into deep underground heat and harnessing it to generate electricity or to heat water and buildings. The use of geothermal energy is concentrated in regions where heat extraction is economical: a combination is needed of high temperatures, heat flow, and permeability (the ability of the rock to allow fluids to pass through). Power is produced from the steam created in underground reservoirs. Geothermal energy provided less than 1% of global energy consumption in 2020.
Geothermal energy is a renewable resource because thermal energy is constantly replenished from neighbouring hotter regions and the radioactive decay of naturally occurring isotopes. On average, the greenhouse gas emissions of geothermal-based electricity are less than 5% that of coal-based electricity. Geothermal energy carries a risk of inducing earthquakes, needs effective protection to avoid water pollution, and releases toxic emissions which can be captured.
Bioenergy
Biomass is renewable organic material that comes from plants and animals. It can either be burned to produce heat and electricity or be converted into biofuels such as biodiesel and ethanol, which can be used to power vehicles.
The climate impact of bioenergy varies considerably depending on where biomass feedstocks come from and how they are grown. For example, burning wood for energy releases carbon dioxide; those emissions can be significantly offset if the trees that were harvested are replaced by new trees in a well-managed forest, as the new trees will absorb carbon dioxide from the air as they grow. However, the establishment and cultivation of bioenergy crops can displace natural ecosystems, degrade soils, and consume water resources and synthetic fertilisers.
Approximately one-third of all wood used for traditional heating and cooking in tropical areas is harvested unsustainably. Bioenergy feedstocks typically require significant amounts of energy to harvest, dry, and transport; the energy usage for these processes may emit greenhouse gases. In some cases, the impacts of land-use change, cultivation, and processing can result in higher overall carbon emissions for bioenergy compared to using fossil fuels.
Use of farmland for growing biomass can result in less land being available for growing food. In the United States, around 10% of motor gasoline has been replaced by corn-based ethanol, which requires a significant proportion of the harvest. In Malaysia and Indonesia, clearing forests to produce palm oil for biodiesel has led to serious social and environmental effects, as these forests are critical carbon sinks and habitats for diverse species. Since photosynthesis captures only a small fraction of the energy in sunlight, producing a given amount of bioenergy requires a large amount of land compared to other renewable energy sources.
Second-generation biofuels which are produced from non-food plants or waste reduce competition with food production, but may have other negative effects including trade-offs with conservation areas and local air pollution. Relatively sustainable sources of biomass include algae, waste, and crops grown on soil unsuitable for food production.
Carbon capture and storage technology can be used to capture emissions from bioenergy power plants. This process is known as bioenergy with carbon capture and storage (BECCS) and can result in net carbon dioxide removal from the atmosphere. However, BECCS can also result in net positive emissions depending on how the biomass material is grown, harvested, and transported. Deployment of BECCS at scales described in some climate change mitigation pathways would require converting large amounts of cropland.
Marine energy
Marine energy has the smallest share of the energy market. It includes ocean thermal energy conversion (OTEC), tidal power, which is approaching maturity, and wave power, which is earlier in its development. Two tidal barrage systems, in France and in South Korea, make up 90% of global production. While single marine energy devices pose little risk to the environment, the impacts of larger devices are less well known.
Non-renewable energy sources
Fossil fuel switching and mitigation
Switching from coal to natural gas has advantages in terms of sustainability. For a given unit of energy produced, the life-cycle greenhouse-gas emissions of natural gas are around 40 times the emissions of wind or nuclear energy but are much less than those of coal. Burning natural gas produces around half the emissions of coal when used to generate electricity and around two-thirds the emissions of coal when used to produce heat. Natural gas combustion also produces less air pollution than coal. However, natural gas consists largely of methane, itself a potent greenhouse gas, and leaks during extraction and transportation can negate the advantages of switching away from coal. The technology to curb methane leaks is widely available but it is not always used.
Switching from coal to natural gas reduces emissions in the short term and thus contributes to climate change mitigation. However, in the long term it does not provide a path to net-zero emissions. Developing natural gas infrastructure risks carbon lock-in and stranded assets, where new fossil infrastructure either commits to decades of carbon emissions, or has to be written off before it makes a profit.
The greenhouse gas emissions of fossil fuel and biomass power plants can be significantly reduced through carbon capture and storage (CCS). Most studies use a working assumption that CCS can capture 85–90% of the carbon dioxide (CO2) emissions from a power plant. Even if 90% of the emitted CO2 is captured from a coal-fired power plant, its uncaptured emissions are still many times greater than the emissions of nuclear, solar or wind energy per unit of electricity produced.
Since coal plants using CCS are less efficient, they require more coal and thus increase the pollution associated with mining and transporting coal. CCS is one of the most expensive ways of reducing emissions in the energy sector. Deployment of this technology is very limited. As of 2024, CCS is used in only 5 power plants and in 39 other facilities.
Nuclear power
Nuclear power has been used since the 1950s as a low-carbon source of baseload electricity. Nuclear power plants in over 30 countries generate about 10% of global electricity. As of 2019, nuclear generated over a quarter of all low-carbon energy, making it the second largest source after hydropower.
Nuclear power's lifecycle greenhouse gas emissions—including the mining and processing of uranium—are similar to the emissions from renewable energy sources. Nuclear power uses little land per unit of energy produced, compared to the major renewables. Additionally, nuclear power does not create local air pollution. Although the uranium ore used to fuel nuclear fission plants is a non-renewable resource, enough exists to provide a supply for hundreds to thousands of years. However, the uranium resources that can currently be accessed in an economically feasible manner are limited, and uranium production might struggle to keep up during an expansion phase. Climate change mitigation pathways consistent with ambitious goals typically see an increase in power supply from nuclear.
There is controversy over whether nuclear power is sustainable, in part due to concerns around nuclear waste, nuclear weapon proliferation, and accidents. Radioactive nuclear waste must be managed for thousands of years. For each unit of energy produced, nuclear energy has caused far fewer accidental and pollution-related deaths than fossil fuels, and the historic fatality rate of nuclear is comparable to renewable sources. Public opposition to nuclear energy often makes nuclear plants politically difficult to implement.
Reducing the time and cost of building new nuclear plants has been a goal for decades, but costs remain high and timescales long. Various new forms of nuclear energy are in development, with the hope of addressing the drawbacks of conventional plants. Fast breeder reactors are capable of recycling nuclear waste and can therefore significantly reduce the amount of waste that requires geological disposal, but they have not yet been deployed on a large-scale commercial basis. Nuclear power based on thorium (rather than uranium) may be able to provide higher energy security for countries that do not have a large supply of uranium. Small modular reactors may have several advantages over current large reactors: it should be possible to build them faster, and their modularization would allow for cost reductions via learning-by-doing.
Several countries are attempting to develop nuclear fusion reactors, which would generate small amounts of waste and no risk of explosions. Although fusion power has taken steps forward in the lab, the multi-decade timescale needed to bring it to commercialization and then scale means it will not contribute to a 2050 net zero goal for climate change mitigation.
Energy system transformation
Decarbonisation of the global energy system
The emissions reductions necessary to keep global warming below 2°C will require a system-wide transformation of the way energy is produced, distributed, stored, and consumed. For a society to replace one form of energy with another, multiple technologies and behaviours in the energy system must change. For example, transitioning from oil to solar power as the energy source for cars requires the generation of solar electricity, modifications to the electrical grid to accommodate fluctuations in solar panel output or the introduction of variable battery chargers and higher overall demand, adoption of electric cars, and networks of electric vehicle charging facilities and repair shops.
Many climate change mitigation pathways envision three main aspects of a low-carbon energy system:
The use of low-emission energy sources to produce electricity
Electrification – that is increased use of electricity instead of directly burning fossil fuels
Accelerated adoption of energy efficiency measures
Some energy-intensive technologies and processes are difficult to electrify, including aviation, shipping, and steelmaking. There are several options for reducing the emissions from these sectors: biofuels and synthetic carbon-neutral fuels can power many vehicles that are designed to burn fossil fuels; however, biofuels cannot be sustainably produced in the quantities needed, and synthetic fuels are currently very expensive. For some applications, the most prominent alternative to electrification is to develop a system based on sustainably-produced hydrogen fuel.
Full decarbonisation of the global energy system is expected to take several decades and can mostly be achieved with existing technologies. In the IEA's proposal for achieving net zero emissions by 2050, about 35% of the reduction in emissions depends on technologies that are still in development as of 2023. Technologies that are relatively immature include batteries and processes to create carbon-neutral fuels. Developing new technologies requires research and development, demonstration, and cost reductions via deployment.
The transition to a zero-carbon energy system will bring strong co-benefits for human health: The World Health Organization estimates that efforts to limit global warming to 1.5 °C could save millions of lives each year from reductions to air pollution alone. With good planning and management, pathways exist to provide universal access to electricity and clean cooking by 2030 in ways that are consistent with climate goals. Historically, several countries have made rapid economic gains through coal usage. However, there remains a window of opportunity for many poor countries and regions to "leapfrog" fossil fuel dependency by developing their energy systems based on renewables, given adequate international investment and knowledge transfer.
Integrating variable energy sources
To deliver reliable electricity from variable renewable energy sources such as wind and solar, electrical power systems require flexibility. Most electrical grids were constructed for non-intermittent energy sources such as coal-fired power plants. As larger amounts of solar and wind energy are integrated into the grid, changes have to be made to the energy system to ensure that the supply of electricity is matched to demand. In 2019, these sources generated 8.5% of worldwide electricity, a share that has grown rapidly.
There are various ways to make the electricity system more flexible. In many places, wind and solar generation are complementary on a daily and a seasonal scale: there is more wind during the night and in winter when solar energy production is low. Linking different geographical regions through long-distance transmission lines allows for further cancelling out of variability. Energy demand can be shifted in time through energy demand management and the use of smart grids, matching the times when variable energy production is highest. With grid energy storage, energy produced in excess can be released when needed. Further flexibility could be provided from sector coupling, that is coupling the electricity sector to the heat and mobility sector via power-to-heat-systems and electric vehicles.
Building overcapacity for wind and solar generation can help ensure that enough electricity is produced even during poor weather. In optimal weather, energy generation may have to be curtailed if excess electricity cannot be used or stored. The final demand-supply mismatch may be covered by using dispatchable energy sources such as hydropower, bioenergy, or natural gas.
Energy storage
Energy storage helps overcome barriers to intermittent renewable energy and is an important aspect of a sustainable energy system. The most commonly used and available storage method is pumped-storage hydroelectricity, which requires locations with large differences in height and access to water. Batteries, especially lithium-ion batteries, are also deployed widely. Batteries typically store electricity for short periods; research is ongoing into technology with sufficient capacity to last through seasons.
Costs of utility-scale batteries in the US have fallen by around 70% since 2015, however the cost and low energy density of batteries makes them impractical for the very large energy storage needed to balance inter-seasonal variations in energy production. Pumped hydro storage and power-to-gas (converting electricity to gas and back) with capacity for multi-month usage has been implemented in some locations.
Electrification
Compared to the rest of the energy system, emissions can be reduced much faster in the electricity sector. As of 2019, 37% of global electricity is produced from low-carbon sources (renewables and nuclear energy). Fossil fuels, primarily coal, produce the rest of the electricity supply. One of the easiest and fastest ways to reduce greenhouse gas emissions is to phase out coal-fired power plants and increase renewable electricity generation.
Climate change mitigation pathways envision extensive electrification—the use of electricity as a substitute for the direct burning of fossil fuels for heating buildings and for transport. Ambitious climate policy would see the share of energy consumed as electricity double by 2050, from 20% in 2020.
One of the challenges in providing universal access to electricity is distributing power to rural areas. Off-grid and mini-grid systems based on renewable energy, such as small solar PV installations that generate and store enough electricity for a village, are important solutions. Wider access to reliable electricity would lead to less use of kerosene lighting and diesel generators, which are currently common in the developing world.
Infrastructure for generating and storing renewable electricity requires minerals and metals, such as cobalt and lithium for batteries and copper for solar panels. Recycling can meet some of this demand if product lifecycles are well designed; however, achieving net zero emissions would still require major increases in mining for 17 types of metals and minerals. A small group of countries or companies sometimes dominates the markets for these commodities, raising geopolitical concerns. Most of the world's cobalt, for instance, is mined in the Democratic Republic of the Congo, a politically unstable region where mining is often associated with human rights risks. More diverse geographical sourcing may ensure a more flexible and less brittle supply chain.
Hydrogen
Hydrogen gas is widely discussed in the context of energy, as an energy carrier with potential to reduce greenhouse gas emissions. This requires hydrogen to be produced cleanly and in sufficient quantities, and to be used in sectors and applications where cheaper and more energy-efficient mitigation alternatives are limited. These applications include heavy industry and long-distance transport.
Hydrogen can be deployed as an energy source in fuel cells to produce electricity, or via combustion to generate heat. When hydrogen is consumed in fuel cells, the only emission at the point of use is water vapour. Combustion of hydrogen can lead to the thermal formation of harmful nitrogen oxides. The overall lifecycle emissions of hydrogen depend on how it is produced. Nearly all of the world's current supply of hydrogen is created from fossil fuels.
The main method is steam methane reforming, in which hydrogen is produced from a chemical reaction between steam and methane, the main component of natural gas. Producing one tonne of hydrogen through this process emits 6.6–9.3 tonnes of carbon dioxide. While carbon capture and storage (CCS) could remove a large fraction of these emissions, the overall carbon footprint of hydrogen from natural gas is difficult to assess, in part because of emissions (including vented and fugitive methane) created in the production of the natural gas itself.
Electricity can be used to split water molecules, producing sustainable hydrogen provided the electricity was generated sustainably. However, this electrolysis process is currently more expensive than creating hydrogen from methane without CCS and the efficiency of energy conversion is inherently low. Hydrogen can be produced when there is a surplus of variable renewable electricity, then stored and used to generate heat or to re-generate electricity. It can be further transformed into liquid fuels such as green ammonia and green methanol. Innovation in hydrogen electrolysers could make large-scale production of hydrogen from electricity more cost-competitive.
Hydrogen fuel can produce the intense heat required for industrial production of steel, cement, glass, and chemicals, thus contributing to the decarbonisation of industry alongside other technologies, such as electric arc furnaces for steelmaking. For steelmaking, hydrogen can function as a clean energy carrier and simultaneously as a low-carbon catalyst replacing coal-derived coke. Hydrogen used to decarbonise transportation is likely to find its largest applications in shipping, aviation and to a lesser extent heavy goods vehicles. For light duty vehicles including passenger cars, hydrogen is far behind other alternative fuel vehicles, especially compared with the rate of adoption of battery electric vehicles, and may not play a significant role in future.
Disadvantages of hydrogen as an energy carrier include high costs of storage and distribution due to hydrogen's explosivity, its large volume compared to other fuels, and its tendency to make pipes brittle.
Energy usage technologies
Transport
Transport accounts for 14% of global greenhouse gas emissions, but there are multiple ways to make transport more sustainable. Public transport typically emits fewer greenhouse gases per passenger than personal vehicles, since trains and buses can carry many more passengers at once. Short-distance flights can be replaced by high-speed rail, which is more efficient, especially when electrified. Promoting non-motorised transport such as walking and cycling, particularly in cities, can make transport cleaner and healthier.
The energy efficiency of cars has increased over time, but shifting to electric vehicles is an important further step towards decarbonising transport and reducing air pollution. A large proportion of traffic-related air pollution consists of particulate matter from road dust and the wearing-down of tyres and brake pads. Substantially reducing pollution from these non-tailpipe sources cannot be achieved by electrification; it requires measures such as making vehicles lighter and driving them less. Light-duty cars in particular are a prime candidate for decarbonisation using battery technology. Around 25% of the world's emissions still originate from the transportation sector.
Long-distance freight transport and aviation are difficult sectors to electrify with current technologies, mostly because of the weight of batteries needed for long-distance travel, battery recharging times, and limited battery lifespans. Where available, freight transport by ship and rail is generally more sustainable than by air and by road. Hydrogen vehicles may be an option for larger vehicles such as lorries. Many of the techniques needed to lower emissions from shipping and aviation are still early in their development, with ammonia (produced from hydrogen) a promising candidate for shipping fuel. Aviation biofuel may be one of the better uses of bioenergy if emissions are captured and stored during manufacture of the fuel.
Buildings
Over one-third of energy use is in buildings and their construction. To heat buildings, alternatives to burning fossil fuels and biomass include electrification through heat pumps or electric heaters, geothermal energy, central solar heating, reuse of waste heat, and seasonal thermal energy storage. Heat pumps provide both heat and air conditioning through a single appliance. The IEA estimates heat pumps could provide over 90% of space and water heating requirements globally.
A highly efficient way to heat buildings is through district heating, in which heat is generated in a centralised location and then distributed to multiple buildings through insulated pipes. Traditionally, most district heating systems have used fossil fuels, but modern and cold district heating systems are designed to use high shares of renewable energy.
Cooling of buildings can be made more efficient through passive building design, planning that minimises the urban heat island effect, and district cooling systems that cool multiple buildings with piped cold water. Air conditioning requires large amounts of electricity and is not always affordable for poorer households. Some air conditioning units still use refrigerants that are greenhouse gases, as some countries have not ratified the Kigali Amendment to only use climate-friendly refrigerants.
Cooking
In developing countries where populations suffer from energy poverty, polluting fuels such as wood or animal dung are often used for cooking. Cooking with these fuels is generally unsustainable, because they release harmful smoke and because harvesting wood can lead to forest degradation. The universal adoption of clean cooking facilities, which are already ubiquitous in rich countries, would dramatically improve health and have minimal negative effects on climate. Clean cooking facilities, i.e. cooking facilities that produce less indoor soot, typically use natural gas, liquefied petroleum gas (both of which consume oxygen and produce carbon dioxide) or electricity as the energy source; biogas systems are a promising alternative in some contexts. Improved cookstoves that burn biomass more efficiently than traditional stoves are an interim solution where transitioning to clean cooking systems is difficult.
Industry
Over one-third of energy use is by industry. Most of that energy is deployed in thermal processes: generating heat, drying, and refrigeration. The share of renewable energy in industry was 14.5% in 2017—mostly low-temperature heat supplied by bioenergy and electricity. The most energy-intensive activities in industry have the lowest shares of renewable energy, as they face limitations in generating heat at temperatures over 200 °C (392 °F).
For some industrial processes, commercialisation of technologies that have not yet been built or operated at full scale will be needed to eliminate greenhouse gas emissions. Steelmaking, for instance, is difficult to electrify because it traditionally uses coke, which is derived from coal, both to create very high-temperature heat and as an ingredient in the steel itself. The production of plastic, cement, and fertilisers also requires significant amounts of energy, with limited possibilities available to decarbonise. A switch to a circular economy would make industry more sustainable as it involves recycling more and thereby using less energy compared to investing energy to mine and refine new raw materials.
Government policies
Well-designed government policies that promote energy system transformation can lower greenhouse gas emissions and improve air quality simultaneously, and in many cases can also increase energy security and lessen the financial burden of using energy.
Environmental regulations have been used since the 1970s to promote more sustainable use of energy. Some governments have committed to dates for phasing out coal-fired power plants and ending new fossil fuel exploration. Governments can require that new cars produce zero emissions, or new buildings are heated by electricity instead of gas. Renewable portfolio standards in several countries require utilities to increase the percentage of electricity they generate from renewable sources.
Governments can accelerate energy system transformation by leading the development of infrastructure such as long-distance electrical transmission lines, smart grids, and hydrogen pipelines. In transport, appropriate infrastructure and incentives can make travel more efficient and less car-dependent. Urban planning that discourages sprawl can reduce energy use in local transport and buildings while enhancing quality of life. Government-funded research, procurement, and incentive policies have historically been critical to the development and maturation of clean energy technologies, such as solar and lithium batteries. In the IEA's scenario for a net zero-emission energy system by 2050, public funding is rapidly mobilised to bring a range of newer technologies to the demonstration phase and to encourage deployment.
Carbon pricing (such as a tax on emissions) gives industries and consumers an incentive to reduce emissions while letting them choose how to do so. For example, they can shift to low-emission energy sources, improve energy efficiency, or reduce their use of energy-intensive products and services. Carbon pricing has encountered strong political pushback in some jurisdictions, whereas energy-specific policies tend to be politically safer. Most studies indicate that to limit global warming to 1.5°C, carbon pricing would need to be complemented by stringent energy-specific policies.
As of 2019, the price of carbon in most regions is too low to achieve the goals of the Paris Agreement. Carbon taxes provide a source of revenue that can be used to lower other taxes or help lower-income households afford higher energy costs. Some governments, such as the EU and the UK, are exploring the use of carbon border adjustments. These place tariffs on imports from countries with less stringent climate policies, to ensure that industries subject to internal carbon prices remain competitive.
The scale and pace of policy reforms that have been initiated as of 2020 are far less than needed to fulfil the climate goals of the Paris Agreement. In addition to domestic policies, greater international cooperation is required to accelerate innovation and to assist poorer countries in establishing a sustainable path to full energy access.
Countries may support renewables to create jobs. The International Labour Organization estimates that efforts to limit global warming to 2 °C would result in net job creation in most sectors of the economy. It predicts that 24 million new jobs would be created by 2030 in areas such as renewable electricity generation, improving energy-efficiency in buildings, and the transition to electric vehicles. Six million jobs would be lost, in sectors such as mining and fossil fuels. Governments can make the transition to sustainable energy more politically and socially feasible by ensuring a just transition for workers and regions that depend on the fossil fuel industry, to ensure they have alternative economic opportunities.
Finance
Raising enough money for innovation and investment is a prerequisite for the energy transition. The IPCC estimates that to limit global warming to 1.5 °C, US$2.4 trillion would need to be invested in the energy system each year between 2016 and 2035. Most studies project that these costs, equivalent to 2.5% of world GDP, would be small compared to the economic and health benefits. Average annual investment in low-carbon energy technologies and energy efficiency would need to be six times more by 2050 compared to 2015. Underfunding is particularly acute in the least developed countries, which are not attractive to the private sector.
The United Nations Framework Convention on Climate Change estimates that climate financing totalled $681 billion in 2016. Most of this is private-sector investment in renewable energy deployment, public-sector investment in sustainable transport, and private-sector investment in energy efficiency. The Paris Agreement includes a pledge of an extra $100 billion per year from developed countries to poor countries, to do climate change mitigation and adaptation. This goal has not been met and measurement of progress has been hampered by unclear accounting rules. If energy-intensive businesses like chemicals, fertilizers, ceramics, steel, and non-ferrous metals invest significantly in R&D, its usage in industry might amount to between 5% and 20% of all energy used.
Fossil fuel funding and subsidies are a significant barrier to the energy transition. Direct global fossil fuel subsidies were $319 billion in 2017. This rises to $5.2 trillion when indirect costs are priced in, like the effects of air pollution. Ending these could lead to a 28% reduction in global carbon emissions and a 46% reduction in air pollution deaths. Funding for clean energy has been largely unaffected by the COVID-19 pandemic, and pandemic-related economic stimulus packages offer possibilities for a green recovery.
References
Sources
External links
Climate change mitigation
Climate change policy
Emissions reduction
Energy economics
Environmental impact of the energy industry
Sustainable development | Sustainable energy | [
"Physics",
"Chemistry",
"Environmental_science"
] | 8,489 | [
"Physical quantities",
"Emissions reduction",
"Energy economics",
"Energy (physics)",
"Energy",
"Greenhouse gases",
"Environmental social science"
] |
1,055,940 | https://en.wikipedia.org/wiki/Completely%20multiplicative%20function | In number theory, functions of positive integers which respect products are important and are called completely multiplicative functions or totally multiplicative functions. A weaker condition is also important, respecting only products of coprime numbers, and such functions are called multiplicative functions. Outside of number theory, the term "multiplicative function" is often taken to be synonymous with "completely multiplicative function" as defined in this article.
Definition
A completely multiplicative function (or totally multiplicative function) is an arithmetic function (that is, a function whose domain is the natural numbers), such that f(1) = 1 and f(ab) = f(a)f(b) holds for all positive integers a and b.
In logic notation: f(1) = 1 and ∀a, b ∈ ℤ⁺: f(ab) = f(a)f(b).
Without the requirement that f(1) = 1, one could still have f(1) = 0, but then f(a) = 0 for all positive integers a, so this is not a very strong restriction. If one did not fix f(1) = 1, one can see that both 0 and 1 are possibilities for the value of f(1) in the following way: f(1) = f(1 · 1) = f(1)f(1), so f(1)(f(1) − 1) = 0, and hence f(1) = 0 or f(1) = 1.
The definition above can be rephrased using the language of algebra: A completely multiplicative function is a homomorphism from the monoid (ℤ⁺, ⋅) (that is, the positive integers under multiplication) to some other monoid.
Examples
The easiest example of a completely multiplicative function is a monomial with leading coefficient 1: For any particular positive integer n, define f(a) = a^n. Then f(bc) = (bc)^n = b^n c^n = f(b)f(c), and f(1) = 1^n = 1.
The Liouville function is a non-trivial example of a completely multiplicative function, as are Dirichlet characters, the Jacobi symbol and the Legendre symbol.
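As a quick illustration (a minimal sketch, not part of the article), the Liouville function can be computed by counting prime factors with multiplicity, and its complete multiplicativity can be spot-checked numerically:

```python
# Minimal sketch: the Liouville function lambda(n) = (-1)^Omega(n), where
# Omega(n) counts the prime factors of n with multiplicity. It should satisfy
# lambda(ab) = lambda(a) * lambda(b) for all positive integers a and b.
def big_omega(n: int) -> int:
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:          # whatever remains is itself a prime factor
        count += 1
    return count

def liouville(n: int) -> int:
    return (-1) ** big_omega(n)

# Spot-check complete multiplicativity on a small range.
assert all(liouville(a * b) == liouville(a) * liouville(b)
           for a in range(1, 60) for b in range(1, 60))
```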
Properties
A completely multiplicative function is completely determined by its values at the prime numbers, a consequence of the fundamental theorem of arithmetic. Thus, if n is a product of powers of distinct primes, say n = p^a q^b ..., then f(n) = f(p)^a f(q)^b ...
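The following sketch (hypothetical, not from the article) shows how a completely multiplicative function can be evaluated anywhere once its values at the primes are known, by factorising the argument:

```python
# Sketch: evaluate a completely multiplicative function from its prime values.
def factorize(n: int) -> dict:
    """Prime factorisation of n as {prime: exponent}, by trial division."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def evaluate(prime_values: dict, n: int):
    """f(p1^a * p2^b * ...) = f(p1)^a * f(p2)^b * ..., with f(1) = 1."""
    result = 1
    for p, exp in factorize(n).items():
        result *= prime_values[p] ** exp
    return result

# Example: setting f(p) = p at every prime recovers the identity function.
primes_below_100 = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43,
                    47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97)
prime_values = {p: p for p in primes_below_100}
assert all(evaluate(prime_values, n) == n for n in range(1, 100))
```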
While the Dirichlet convolution of two multiplicative functions is multiplicative, the Dirichlet convolution of two completely multiplicative functions need not be completely multiplicative. Arithmetic functions which can be written as the Dirichlet convolution of two completely multiplicative functions are said to be quadratic or specially multiplicative functions. They are rational arithmetic functions of order (2, 0) and obey the Busche–Ramanujan identity.
There are a variety of statements about a function which are equivalent to it being completely multiplicative. For example, if a function f is multiplicative then it is completely multiplicative if and only if its Dirichlet inverse is μ · f, where μ is the Möbius function and · denotes pointwise multiplication.
Completely multiplicative functions also satisfy a distributive law. If f is completely multiplicative then
f · (g * h) = (f · g) * (f · h),
where * represents the Dirichlet product and · represents pointwise multiplication. One consequence of this is that for any completely multiplicative function f one has
f * f = τ · f,
which can be deduced from the above by putting both g = h = 1, where 1 denotes the constant function with value 1. Here τ is the divisor function.
Proof of distributive property
For every positive integer n,
(f · (g * h))(n) = f(n) Σ_{d|n} g(d) h(n/d) = Σ_{d|n} f(d)g(d) · f(n/d)h(n/d) = ((f · g) * (f · h))(n),
where the middle step uses complete multiplicativity: f(n) = f(d) f(n/d) for every divisor d of n.
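The identities above can also be checked numerically. The sketch below (a hypothetical check, not part of the article) verifies the distributive law and f * f = τ · f for the completely multiplicative function f(n) = n:

```python
# Numeric check of f . (g * h) = (f . g) * (f . h) and f * f = tau . f,
# where * is Dirichlet convolution, . is pointwise multiplication, and
# f(n) = n is completely multiplicative.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def dirichlet(g, h):
    """Dirichlet convolution: (g * h)(n) = sum over d | n of g(d) h(n/d)."""
    return lambda n: sum(g(d) * h(n // d) for d in divisors(n))

def pointwise(g, h):
    return lambda n: g(n) * h(n)

f = lambda n: n                       # completely multiplicative
g = lambda n: n * n                   # two arbitrary arithmetic functions
h = lambda n: 1
tau = lambda n: len(divisors(n))      # divisor-counting function

for n in range(1, 300):
    assert pointwise(f, dirichlet(g, h))(n) == dirichlet(pointwise(f, g), pointwise(f, h))(n)
    assert dirichlet(f, f)(n) == pointwise(tau, f)(n)
```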
Dirichlet series
The L-function of a completely (or totally) multiplicative Dirichlet series a(n) satisfies
L(s, a) = Σ_{n≥1} a(n)/n^s = Π_p (1 − a(p)/p^s)^{−1},
which means that the sum over all the natural numbers is equal to the product over all the prime numbers.
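A truncated numeric comparison makes the identity concrete. The sketch below (hypothetical, not from the article) takes a(n) = 1, so the series is the Riemann zeta function, and compares the partial Dirichlet sum with the partial Euler product at s = 2:

```python
# Truncated check of the Euler product for the completely multiplicative
# function a(n) = 1: sum of 1/n^s versus product over primes of 1/(1 - p^-s).
def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

s, N = 2.0, 100_000
dirichlet_sum = sum(1 / n ** s for n in range(1, N + 1))
euler_product = 1.0
for p in primes_up_to(N):
    euler_product *= 1 / (1 - p ** -s)

# Both values approach zeta(2) = pi^2 / 6 ≈ 1.6449.
print(dirichlet_sum, euler_product)
```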
See also
Arithmetic function
Dirichlet L-function
Dirichlet series
Multiplicative function
References
T. M. Apostol, Some properties of completely multiplicative arithmetical functions, Amer. Math. Monthly 78 (1971) 266-271.
P. Haukkanen, On characterizations of completely multiplicative arithmetical functions, in Number theory, Turku, de Gruyter, 2001, pp. 115–123.
E. Langford, Distributivity over the Dirichlet product and completely multiplicative arithmetical functions, Amer. Math. Monthly 80 (1973) 411–414.
V. Laohakosol, Logarithmic operators and characterizations of completely multiplicative functions, Southeast Asian Bull. Math. 25 (2001) no. 2, 273–281.
K. L. Yocom, Totally multiplicative functions in regular convolution rings, Canad. Math. Bull. 16 (1973) 119–128.
Multiplicative functions | Completely multiplicative function | [
"Mathematics"
] | 949 | [
"Multiplicative functions",
"Number theory"
] |
1,056,003 | https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20curves | In differential geometry, the fundamental theorem of space curves states that every regular curve in three-dimensional space, with non-zero curvature, has its shape (and size or scale) completely determined by its curvature and torsion.
Use
A curve can be described, and thereby defined, by a pair of scalar fields: curvature κ and torsion τ, both of which depend on a parameter along the curve, ideally its arc length s. From just the curvature and torsion, the vector fields for the tangent, normal, and binormal vectors can be derived using the Frenet–Serret formulas. Then, integration of the tangent field (done numerically, if not analytically) yields the curve.
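As a concrete (hypothetical) illustration of this procedure, the sketch below integrates the Frenet–Serret equations numerically for prescribed κ(s) and τ(s); constant curvature and torsion reproduce a circular helix. The step size, initial frame, and re-orthonormalisation are implementation choices, not part of the theorem.

```python
# Numerical sketch: reconstruct a space curve from curvature kappa(s) and
# torsion tau(s) by integrating the Frenet-Serret equations
#   T' = kappa N,  N' = -kappa T + tau B,  B' = -tau N,  r' = T
# with a simple Euler step and Gram-Schmidt re-orthonormalisation.
import numpy as np

def reconstruct_curve(kappa, tau, length=20.0, steps=20000):
    ds = length / steps
    r = np.zeros(3)
    T = np.array([1.0, 0.0, 0.0])     # initial orthonormal frame
    N = np.array([0.0, 1.0, 0.0])
    B = np.array([0.0, 0.0, 1.0])
    points = [r.copy()]
    s = 0.0
    for _ in range(steps):
        k, t = kappa(s), tau(s)
        r = r + T * ds
        T, N, B = T + k * N * ds, N + (-k * T + t * B) * ds, B - t * N * ds
        # Keep the frame orthonormal despite integration drift.
        T = T / np.linalg.norm(T)
        N = N - np.dot(N, T) * T
        N = N / np.linalg.norm(N)
        B = np.cross(T, N)
        s += ds
        points.append(r.copy())
    return np.array(points)

helix = reconstruct_curve(lambda s: 0.5, lambda s: 0.2)   # constant kappa, tau
```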
Congruence
If a pair of curves are in different positions but have the same curvature and torsion, then they are congruent to each other.
See also
Differential geometry of curves
Gaussian curvature
References
Further reading
Theorems about curves
Theorems in differential geometry | Fundamental theorem of curves | [
"Mathematics"
] | 206 | [
"Theorems in differential geometry",
"Theorems about curves",
"Theorems in geometry"
] |
1,056,006 | https://en.wikipedia.org/wiki/Vulcan%20Iron%20Works | Vulcan Iron Works was the name of several iron foundries in both England and the United States during the Industrial Revolution and, in one case, lasting until the mid-20th century. Vulcan, the Roman god of fire and smithery, was a popular namesake for these foundries.
England
During the Industrial Revolution, numerous entrepreneurs independently founded factories named Vulcan Iron Works in England, notably that of Robinson Thwaites and Edward Carbutt at Bradford, and that of Thomas Clunes at Worcester, England. The largest of all the ironworks of Victorian England, the Cleveland Works of Bolckow Vaughan in Middlesbrough, were on Vulcan Street.
Thwaites & Carbutt, Bradford
The Vulcan Works at Thornton Road, Bradford was a spacious and handsome factory. It was described in Industries of Yorkshire as
Ley's, Derby
The Vulcan Iron Works at Osmaston Road, Derby was founded in 1874 by Francis Ley (1846-1916). On a site occupying 11 acres by the Birmingham and Derby Junction Railway, he manufactured castings for motor cars. The company became the Ley's Malleable Castings Company Ltd. In the London Gazette of April 14, 1876, Ley was granted a patent for "improvements in apparatus for locking and fastening nuts on fish plate and other bolts". The iron foundry was closed and demolished in 1986.
McKenzie, Clunes & Holland, Worcester
The Vulcan Iron Works at Cromwell Street, Worcester was founded in 1857 by Thomas Clunes (b. 1818, d. 28 September 1879). The firm started out as "Engineers, Millwrights, Iron & Brass Founders, Plumbers etc", according to the listing in Kelly's Directory. The works had a single tall tapering square chimney, a covered area with open sides, and a handsome main building on a largely open site on the west side of the Worcester and Birmingham Canal.
By 1861, Clunes, a former "Plumber and Brass Founder" from Aberdeen, Scotland living in St Martin's, Worcester, with nine children, was a "Master Engineer employing 104 men and 10 boys"; his son Robert at age 11 was an "Apprentice to Engineer". In 1861, Clunes was joined by two former railwaymen, McKenzie and Holland, and the firm moved into railway signalling equipment. Clunes retired to Fowey, Cornwall, and his name was dropped from the company's name in the 1870s. The entry in the Worcestershire Post Office Directory for 1876 is simply "RAILWAY SIGNAL MANFRS. McKenzie & Holland, Vulcan Iron Works, Worcester."
Vulcan Iron Works, Langley Mill
The G R Turner company's Vulcan Iron Works at Langley Mill, Derbyshire was built in 1874. GR Turner produced railway rolling stock until the 1960s; at its peak it employed 350 men. According to Grace's Guide, G R Turner was established in 1863; it became a Limited Company in 1902, and was registered on 29 January 1903 as acquiring T N Turner's business of "engineer, wheel and wagon maker"; in 1914 it was described as "Colliery Engineers" as well as making rolling stock, with 800 engineers.
Vulcan Ironworks, Preston
In 1857 the firm of Baxendale and Gregson was founded in Shepherd Street, Preston, Lancashire. When the works there became too small, the business moved to a new Vulcan Ironworks, built at Salter Street, just off North Road, Preston, under the name Gregson and Monk.
In 1873, James Gregson bought 82 acres of land at Fulwood; in 1876 he built Highgate Park mansion with the land as its extensive gardens. He owned much property in Preston and was a councillor of Fulwood District. His son George Frederick Gregson ran the firm after him.
When Monk retired in March 1874, James Gregson became sole proprietor. He employed about 400 men, making up to 100 weaving looms per week. Over 25,000 looms made by Gregson were claimed to be at work in or near Preston in 1884.
The machines made by the firm included:
The ironworks was reported in 1884 to have grindstones of 7 ft (2 metre) diameter; "two cupolas blown by fans, one of which is capable of melting twenty tons of metal per day"; cranes and hoists; a brass moulding shop; a sand mill (for the mouldings); and a machine for grinding coal to dust. The buildings included a draughtsmen's office; a pattern makers' and joiners' shop; a packing room; an erecting and turning shop; and a smithy. All the machines were driven by rope from a single large wheel; two horizontal steam engines powered the entire ironworks. The journalist noted that "The death rate among grinders is very high indeed, which it is almost impossible to prevent."
United States
Seattle
The Vulcan Iron Works in Seattle had Jacob Furth as its president. Furth ran the Vulcan Iron Works along with the Puget Sound Electric Railway and street railways on the Puget Sound.
San Francisco
A Vulcan Iron Works was established at 135 Fremont Street, San Francisco in 1850 during the California gold rush. The factory occupied the block bounded by Fremont, Mission, Howard, and First Streets. The factory maintained the name through a number of owners building boilers, steam engines, mining machinery, sawmills, and some relatively primitive steam locomotives for 19th century California railroads. It built the Oregon Pony in 1861. The factory was destroyed by the 1906 San Francisco earthquake, but steel fabrication activities resumed on the site after the quake.
Charleston
There was a Vulcan Iron Works on Cumberland Street, Charleston, South Carolina in 1865.
See also
Vulcan (motor vehicles)
References
External links
Preserved Vulcan Iron Works steam locomotive list
Photograph of Vulcan Iron Works Worcester steel at Shrub Hill station
Photograph of Gregson and Monk Engineers, Salter Street, Preston
Photograph of a Gregson and Monk power loom
Photograph of James Gregson's Highgate Park mansion, Preston in 1900
Photograph of a grate, cast by Vulcan Iron Works San Francisco
Finding Aid for Vulcan Iron Works collection at Hagley Library
Industrial machine manufacturers
History of Worcester, England
Foundries in the United States
Ironworks and steel mills in the United States
Industrial buildings in England
Foundries in the United Kingdom
Buildings and structures destroyed by the 1906 San Francisco earthquake | Vulcan Iron Works | [
"Engineering"
] | 1,275 | [
"Industrial machine manufacturers",
"Industrial machinery"
] |
1,056,047 | https://en.wikipedia.org/wiki/Hao%20Wang%20%28academic%29 | Hao Wang (; 20 May 1921 – 13 May 1995) was a Chinese-American logician, philosopher, mathematician, and commentator on Kurt Gödel.
Biography
Born in Jinan, Shandong, in the Republic of China (today in the People's Republic of China), Wang received his early education in China. He obtained a BSc degree in mathematics from the National Southwestern Associated University in 1943 and an M.A. in philosophy from Tsinghua University in 1945, where his teachers included Feng Youlan and Jin Yuelin, after which he moved to the United States for further graduate studies. He studied logic under W. V. O. Quine at Harvard University, culminating in a Ph.D. in 1948. He was appointed to an assistant professorship at Harvard the same year.
During the early 1950s, Wang studied with Paul Bernays in Zürich. In 1956, he was appointed Reader in the Philosophy of Mathematics at the University of Oxford. In 1959, Wang wrote on an IBM 704 computer a program that in only 9 minutes mechanically proved several hundred mathematical logic theorems in Whitehead and Russell's Principia Mathematica. In 1961, he was appointed Gordon McKay Professor of Mathematical Logic and Applied Mathematics at Harvard. From 1967 until 1991, he headed the logic research group at Rockefeller University in New York City, where he was professor of logic. In 1972, Wang joined in a group of Chinese American scientists led by Chih-Kung Jen as the first such delegation from the U.S. to the People's Republic of China.
One of Wang's most important contributions was the Wang tile. He showed that any Turing machine can be turned into a set of Wang tiles. The domino problem asks for an algorithm that decides whether a given set of Wang tiles can tile the plane. The first noted example of an aperiodic tiling is a set of Wang tiles, discovered by his student Robert Berger in 1966; Wang had conjectured that no such set could exist. Wang also had a significant influence on the theory of computational complexity.
A philosopher in his own right, Wang also developed a penetrating interpretation of Ludwig Wittgenstein's later philosophy of mathematics, which he called "anthropologism." Later he broadened this reading in the foundations of mathematics. He chronicled Kurt Gödel's philosophical ideas and authored several books on the subject, thereby providing contemporary scholars many insights elucidating Gödel's later philosophical thought. He saw his own philosophy of "substantial factualism" as a middle ground that includes both abstract theoretical formulations and the ordinary language of everyday discourse.
In 1983 he was presented with the first Milestone Prize for Automated Theorem-Proving, sponsored by the International Joint Conference on Artificial Intelligence.
On 13 May 1995, Wang died at New York Hospital one week before his 74th birthday. According to his wife Hanne Tierney, Wang's cause of death was lymphoma. In addition to Tierney, Wang was survived by a daughter and two sons.
Books
Les Systèmes axiomatiques de la Théorie des Ensembles, Gauthier-Villars; Paris, 1953. [Wang 1953a, with Robert McNaughton].
A Survey of Mathematical Logic. Peking: Science Press; Amsterdam: North-Holland, 1962. [Wang 1962a].
From Mathematics to Philosophy. London: Routledge & Kegan Paul, 1974. [Wang 1974a].
Popular Lectures on Mathematical Logic. New York: Van Nostrand, 1981. [Wang 1981a]. . Dover reprint 2014.
Beyond Analytic Philosophy: Doing Justice to What We Know. Cambridge, Massachusetts: MIT Press, 1985. [Wang 1985a]. .
Reflections on Kurt Gödel. Cambridge, Massachusetts: MIT Press, 1987. [Wang 1987a]. .
Computation, Logic, Philosophy. A Collection of Essays. Beijing: Science Press; Dordrecht: Kluwer Academic, 1990. [Wang 1990a]. .
A Logical Journey: From Gödel to Philosophy. Cambridge, Massachusetts: MIT Press, 1996. [Wang 1996a]. .
References
External links
Video interview with Hao Wang and Robin Gandy (and portrait of Wang)
Detailed bibliography
"A Bibliography of Hao Wang" from Philosophia Mathematica. References in square brackets are to this source.
1921 births
1995 deaths
20th-century American mathematicians
Philosophers of mathematics
Chinese emigrants to the United States
Chinese logicians
American logicians
Harvard Graduate School of Arts and Sciences alumni
Harvard University Department of Philosophy faculty
Tsinghua University alumni
American writers of Chinese descent
Writers from Dezhou
Educators from Shandong
Scientists from Shandong
Philosophers from Shandong
Corresponding fellows of the British Academy
National Southwestern Associated University alumni | Hao Wang (academic) | [
"Mathematics"
] | 953 | [
"Philosophers of mathematics"
] |
1,056,090 | https://en.wikipedia.org/wiki/Fatality%20%28Mortal%20Kombat%29 | Fatality is a gameplay feature in the Mortal Kombat fighting game series, in which the victor of the match inflicts a brutally murderous finishing move onto their defeated opponent. Prompted by the announcer saying "Finish Him" or "Finish Her", players have a short time window to execute a Fatality by entering specific commands while positioned at a specific distance from the opponent. The Fatality and its derivations are notable features of the Mortal Kombat series and have caused controversies.
History
The origins of the Fatality concept have been traced back to several violent Asian martial arts media. In The Street Fighter (1974), a Japanese martial arts film, Sonny Chiba performs x-ray fatality finishing moves, which at the time were seen as a gimmick to distinguish the film from other martial arts films. In the Japanese shōnen manga and anime series Fist of the North Star, the protagonist Kenshiro performs gory fatalities in the form of finishing moves which consist of attacking pressure points that cause heads and bodies to explode. The Japanese seinen manga and anime series Riki-Oh (1988 debut), along with its Hong Kong martial arts film adaptation Story of Ricky (1991), featured gory fatalities in the form of finishing moves similar to those that later appeared in Mortal Kombat. The nature of graphic violence depicted in Fatalities from the original Mortal Kombat was considered highly controversial and contributed to the formation of the Entertainment Software Rating Board (ESRB), a regulatory system for video game content. The impact of the Fatality inspired finishing moves in other video game franchises, including Killer Instinct, Gears of War, War Gods, and ClayFighter.
While creating Mortal Kombat, Ed Boon and John Tobias started with the idea of a Street Fighter II-style system and retained many of its conventions but tweaked others. The most notable additions were graphic blood effects, more brutal fighting techniques, and especially the fatal finishing moves (this was a novelty as the traditional fighting games ended with the loser simply knocked unconscious and the victor posing for the players). According to Boon, it started with an idea to enable the player to hit a dizzied opponent at the end of the match with a "free hit", and that idea "quickly evolved into something nasty". However, Tobias recalled it differently, stating that Fatalities were not initially part of the game's design. Early development focused on using a finishing move exclusively for the final boss, Shang Tsung, who was envisioned decapitating his opponent with a sword. This concept evolved when developers considered allowing players to perform similar finishing moves on their opponents. The positive reactions from players solidified Fatalities as a core mechanic of the game, leading to their prominence within the franchise. In Mortal Kombat 1 (2023), an accessibility feature for visually impaired individuals was made available, in which the properties of Fatalities are explained through in-game narration.
Gameplay
Fatalities, like special moves, often have specific requirements. Each character has a unique Fatality that must be performed at a specific distance: close (right next to the opponent), sweep/mid (one or two steps away, within sweeping kick range), or far (about one jump's length away).
Alternatives
Animality: Allows a character to morph into an animal and maul their opponent. Introduced in Mortal Kombat 3. According to Ed Boon, this finisher was rumored to be in Mortal Kombat II and was later added to Mortal Kombat 3 due to high fan demand.
Babality: Introduced in Mortal Kombat II, it turns defeated opponents into an infant version of themselves.
Brutality: Introduced in Ultimate Mortal Kombat 3, this involves a multi-hit combo that would cause the opponent to explode. In the later games, Brutalities are tied to specific moves serving as finishing blows.
Death Traps: Featured in Mortal Kombat: Armageddon, allowing players to kill opponents using interactive stage elements, but unlike Stage Fatalities, they can be executed at any point of a match.
Faction Kill: Appearing only in Mortal Kombat X, this finisher aligns with the game's faction system, offering faction-themed finishing moves as a reward for allegiance to the respective faction.
Fergality: The Sega Genesis version of Mortal Kombat II featured an exclusive finishing move that allowed Raiden to transform his opponent into Probe Ltd. employee Fergus McGovern.
Friendship: This finishing move is an alternative to Fatalities, used for ending a match in a friendly manner.
Hara-Kiri: Introduced in Mortal Kombat: Deception. It is a finishing move in which a losing player kills themselves rather than being finished by their opponent.
Heroic Brutality: This is exclusive to the 2008 crossover game Mortal Kombat vs. DC Universe. In addition to the Mortal Kombat characters' Fatalities toned down to maintain the game's "Teen" rating, the Heroic Brutalities represented the DC characters' moral code against killing.
Kreate-A-Fatality: For Mortal Kombat: Armageddon, the Fatality concept was completely revised, which focused more on combinations of attacks instead of character-specific finishers.
Mercy: Allows players to spare opponents instead of finishing them.
Multality: Mortal Kombat: Shaolin Monks features Multalities, which are Fatalities performed on multiple enemies simultaneously.
Nudalities: Planned for Mortal Kombat 3, however, they were canceled by one of the game's publishers, Williams Entertainment.
Seasonal Fatality: This concept of Fatality was introduced in Mortal Kombat 1, in which the Fatalities are themed around a special festival. Examples include Thanksgiving, Halloween, and Christmas.
Stage Fatality: It brought environmental interaction within the series, occurring when a player uses a part of the stage to kill an opponent. Some examples of Stage Fatalities are having the victim fall into a pool of acid or a pit of spikes or colliding with a subway train.
Quitality: Introduced in Mortal Kombat X, it occurs when a player disconnects during an online match. This results in losing a match and a character instantly dying or snapping their own neck.
Notable Fatalities
In December 1994, GamePro conducted a reader poll to determine the most popular Fatalities from MKII. The results, published in March 1995, highlighted Jax's "Arm Rip", Sub-Zero's "Ice Grenade", and Shang Tsung's "Soul Stealer" as fan favorites. Years later, in November 2008, GamePro's Patrick Shaw ranked his "12 Lamest Fatalities" across various fighting games. Among those from the Mortal Kombat series, from least to highest ranking, were Liu Kang's "Death by Arcade Machine" (MK3), The Flash's "Tornado Slam" (MK vs. DCU), Jax's "Amazing Growing Man" (MK3), Scorpion's and Rain's Animalities (UMK3/MKT), Sindel's "Killer Hair" (MK3), Kano's "Stomach Pounce" (MK vs. DCU), and the censored Super NES version of his "Heart Rip" Fatality from the original Mortal Kombat.
In May 2010, Dan Ryckert from Game Informer reviewed the Fatalities, categorizing them into the best, worst, and most confusing. Best Fatalities: Sub-Zero's "Spine Rip" (MK 1992), Liu Kang's "Dragon", Reptile's "Head Snack", and Jax's "Arm Pull" (all three from MKII); Sektor's "Compactor" and Sindel's "Scream" (MK3); Jade's "Head Gymnastics" and Dairou's "Ribs to the Eyes" (MKD). Worst Fatalities: Liu Kang's "Cartwheel" (MK 1992), Kitana's "Kiss of Death" (MKII), Kabal's "Inflating Head" and "Scary Face" (MK3), Rain's "Upside-Down Uppercut" (MKT), Bo' Rai Cho's "Fart of Doom" (MKD), and Kano's "Knee Stomp" (MK vs. DCU). Confusing Fatalities: Johnny Cage's "Three Head Punch", Liu Kang's "Arcade Machine", Jax's "Giant Stomp", Cyrax's "Self-Destruct", Kano's "Skeleton Pull", and Smoke's "Blow Up The World" (all six from MK3), and Darrius' "Rearranger" (MKD).
In February 2011, UGO Entertainment's K. Thor Jensen ranked the top 50 "Most Gruesome Finishing Moves Ever" in video games, with several Mortal Kombat Fatalities making the list. The least to highest ranking Fatalities: Sub-Zero's "Spine Rip" (MK 1992), Johnny Cage's "Triple Uppercut" (MKII), the Joker's "Last Joke" (MK vs. DCU), Kung Lao's "Hat Slice" (MKII), Johnny Cage's "Nutbuster" (MKSM), the "Pit" Fatality, Sektor's "Iron Clamp" (MK3), Dairou's "Ribeyes" (MKD), and Smoke's "Armageddon" (MK3). In April 2014, Prima Games' Robert Workman compiled a list of the top 50 Fatalities. The order from top 10 to top 1 included Baraka's "Lifting Stab" (MKII), Noob Saibot's "Make a Wish" (MK9), Kitana's "Kiss of Death" (MKII), Johnny Cage's "Nut Buster" (MKSM), Ermac's "Mind Over Splatter" (MK9), the "Pit" Fatality, Dairou's "Eye Stab" (MKD), Kung Lao's "Blade Drag" (MK9), Kano's "Heart Rip" and Sub-Zero's "Beheading, Complete with Spine" (MK 1992).
In May 2020, Gavin Jasper of Den of Geek selected his top 3 Fatalities from each Mortal Kombat game, spanning the series from its original release to MK11. Highlights from the original Mortal Kombat included Kano's "Heart Rip", Scorpion's "Toasty", and Sub-Zero's "Spine Rip". From MKII: Mileena's "Devourer", Baraka's "Blade Elevation", and Kung Lao's "Hat Splitter". From MK3/UMK3/MKT: Sektor's "Compactor", Shang Tsung's "Soul Steal", and Scorpion's "Hell Hand". From MK4/MKG: Raiden's "Overload", Reiko's "Throwing Stars", and Quan Chi's "Leg Beatdown". From MKDA: Kano's "Organ Robbery", Kenshi's "Telekinetic Destruction", and Kung Lao's "Splitting Headache". From MKD/MKU: Goro's "Limb Tear", Havik's "Arm Feast", and Sub-Zero's "Leg Shatter". From MKSM: "The Tearing Down of Kintaro", Johnny Cage's "Crotch Destroyer", and Scorpion's "Judgment Day". From MK vs. DCU: The Joker's "Cards" and "Gun", and Scorpion's "Trip to Hell". From MK9: Kung Lao's "Hat Trick", Sheeva's "Lend a Hand", and Noob Saibot's "Make a Wish". From MKX: Quan Chi's "Mind Game", Mileena's "Tasty Treat", and Cassie Cage's "Selfie". Lastly, from MK11: The Terminator's "Target Terminated", D'Vorah's "New Species", and Johnny Cage's "Who Hired This Guy?".
In October 2022, Justin Clark of GameSpot celebrated the 30th anniversary of the Mortal Kombat series by selecting the 10 best and worst Fatalities in its history. Among the best were Sub-Zero's "Spine Rip" (MK 1992), Kung Lao's "Hat Split" and Shang Tsung's Kintaro transformation (MKII), Quan Chi's "Shake a Leg" (MK4), Sub-Zero's "The Pitch" (MKD), Scorpion's "Nether Gate" (MK9), Ermac's "Inner Workings" and Cassie Cage's "Selfie" (MKX), Shang Tsung's "Kondemned to the Damned" and D'Vorah's "New Species" (MK11). The worst included Liu Kang's "Cartwheel Uppercut" (MK 1992/MKII), Jade's "Shaky Staff" and Classic Sub-Zero's "Blackout" (UMK3), Quan Chi's "Neck Stretch" (MKDA), Scorpion's "Only a Flesh Wound" and Ashrah's "Voodoo Doll" (MKD), "Ultimate Fatalities" (MKA), Kano's "Stomp, Drop, and Roll" (MK vs. DCU), Cassie Cage's "I <3 You" and Skarlet's "Heart Condition" (MK11). Fatalities from 2023's MK1 have drawn critical attention, including Liu Kang's Fatality, where he transports his opponent to outer space and summons a black hole, and the "Thanksgiving" Fatality, described by Polygon's Michael McWhertor as the "most disgusting finishing move yet".
Further reading
References
Mortal Kombat
Video game terminology | Fatality (Mortal Kombat) | [
"Technology"
] | 2,908 | [
"Computing terminology",
"Video game terminology"
] |
1,056,276 | https://en.wikipedia.org/wiki/NEC%20SX-6 | The SX-6 is a NEC SX supercomputer built by NEC Corporation that debuted in 2001; the SX-6 was sold under license by Cray Inc. in the U.S. Each SX-6 single-node system contains up to eight vector processors, which share up to 64 GB of computer memory. The SX-6 processor is a single chip implementation containing a vector processor unit and a scalar processor fabricated in a 0.15 μm CMOS process with copper interconnects, whereas the SX-5 was a multi-chip implementation. The Earth Simulator is based on the SX-6 architecture.
The vector processor is made up of eight vector pipeline units each with seventy-two 256-word vector registers. The vector unit performs add/shift, multiply, divide and logical operations. The scalar unit is 64 bits wide and contains a 64 KB cache. The scalar unit can decode, issue and complete four instructions per clock cycle. Branch prediction and speculative execution is supported. A multi-node system is configured by interconnecting up to 128 single-node systems via a high-speed, low-latency IXS (Internode Crossbar Switch).
The peak performance of the SX-6 series vector processors is 8 GFLOPS. Thus a single-node system provides a peak performance of 64 GFLOPS, while a multi-node system provides up to 8 TFLOPS of peak floating-point performance.
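These figures follow directly from the per-processor peak quoted above: 8 GFLOPS per processor × 8 processors = 64 GFLOPS per node, and 64 GFLOPS per node × 128 nodes = 8,192 GFLOPS, i.e. roughly 8 TFLOPS.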
The SX-6 uses SUPER-UX, a Unix-like operating system developed by NEC. A SAN-based global file system (NEC's GFS) is available for a multinode installation. The default batch processing system is NQSII, but open source batch systems such as Sun Grid Engine are also supported.
See also
SUPER-UX
NEC SX
Earth Simulator
NEC Corporation
References
External links
SX-6 Specifications
Scalable Vector Supercomputer - SX Series Downloads
Sx-6
Vector supercomputers | NEC SX-6 | [
"Technology"
] | 418 | [
"Computing stubs",
"Computer hardware stubs"
] |
1,056,460 | https://en.wikipedia.org/wiki/Electronic%20brakeforce%20distribution | Electronic brakeforce distribution (EBD or EBFD) or electronic brakeforce limitation (EBL) is an automobile brake technology that automatically varies the amount of force applied to each of a vehicle's wheels, based on road conditions, speed, loading, etc, thus providing intelligent control of both brake balance and overall brake force. Always coupled with anti-lock braking systems (ABS), EBD can apply more or less braking pressure to each wheel in order to maximize stopping power whilst maintaining vehicular control. Typically, the front end carries more weight and EBD distributes less braking pressure to the rear brakes so the rear brakes do not lock up and cause a skid. In some systems, EBD distributes more braking pressure at the rear brakes during initial brake application before the effects of weight transfer become apparent.
ABS
Vehicle wheels may lock up when braking torque exceeds the available tire–road friction, caused by too much hydraulic line pressure. The ABS monitors wheel speeds and releases pressure on individual wheel brake lines, rapidly pulsing individual brakes to prevent lock-up. During heavy braking, preventing wheel lock-up helps the driver maintain steering control. Four channel ABS systems have an individual brake line for each of the four wheels, enabling different braking pressure on different road surfaces. Three channel systems are equipped with a sensor for each wheel, but control the rear brakes as a single unit. For example, less braking pressure is needed to lock a wheel on ice than a wheel that is on bare asphalt. If the left wheels are on asphalt and the right wheels are on ice, during an emergency stop, ABS detects the right wheels are about to lock and reduces braking force on the right front wheel. Four channel systems also reduce brake force on the right rear wheel, while a three channel system would also reduce force on both back wheels. Both systems help avoid lock-up and loss of vehicle control.
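The per-wheel, slip-based behavior just described can be sketched, purely for illustration, in a few lines of Python; the 20% slip threshold, the release factor, and the wheel naming are assumptions for the sketch, not any manufacturer's actual control law:

```python
# Illustrative sketch only: per-wheel brake pressure modulation based on wheel slip.
# The slip threshold (20%) and release factor are assumed values, not real calibration data.

def wheel_slip(vehicle_speed, wheel_speed):
    """Fraction by which a wheel lags the vehicle: 0 = free rolling, 1 = fully locked."""
    if vehicle_speed <= 0:
        return 0.0
    return max(0.0, (vehicle_speed - wheel_speed) / vehicle_speed)

def modulate_pressures(vehicle_speed, wheel_speeds, requested_pressure,
                       slip_threshold=0.2, release_factor=0.5):
    """Return a brake pressure per wheel, releasing pressure where slip is excessive."""
    pressures = {}
    for wheel, speed in wheel_speeds.items():
        if wheel_slip(vehicle_speed, speed) > slip_threshold:
            # Wheel is approaching lock-up: release part of the requested pressure.
            pressures[wheel] = requested_pressure * release_factor
        else:
            pressures[wheel] = requested_pressure
    return pressures

# Example: the right-side wheels are on ice and slip more, so they receive less pressure.
print(modulate_pressures(
    vehicle_speed=20.0,                                            # m/s
    wheel_speeds={"FL": 19.5, "FR": 14.0, "RL": 19.4, "RR": 13.5},
    requested_pressure=100.0))                                     # arbitrary units
```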
EBD
As per the technical paper published by Buschmann et al.,
"The job of the EBD as a subsystem of the ABS system is to control the effective adhesion utilization by the rear wheels. The pressure of the rear wheels are approximated to the ideal brake force distribution in a partial braking operation. To do so, the conventional brake design is modified in the direction of rear axle overbraking, and the components of the ABS are used. EBD reduces the strain on the hydraulic brake force proportioning valve in the vehicle. EBD optimizes the brake design with regard to: adhesion utilization; driving stability; wear; temperature stress; and pedal force."
EBD may work in conjunction with ABS and electronic stability control (ESC) to minimize yaw accelerations during turns. ESC compares the steering wheel angle to vehicle turning rate using a yaw rate sensor. "Yaw" is the vehicle's rotation around its vertical center of gravity (turning left or right). If the yaw sensor detects less (more) yaw than the steering wheel angle should create, the car is understeering (oversteering) and ESC activates one of the front or rear brakes to rotate the car back onto its intended course. For example, if a car is making a left turn and begins to understeer (the car plows forward to the outside of the turn) ESC activates the left rear brake, which will help turn the car left. The sensors are so sensitive and the actuation is so quick that the system may correct direction before the driver reacts. ABS helps prevent wheel lock-up and EBD helps apply appropriate brake force to make ESC work effectively and easily.
See also
Brake assist
Cornering brake control
Automobile safety
References
Vehicle braking technologies
Vehicle safety technologies
Mechanical power control | Electronic brakeforce distribution | [
"Physics"
] | 771 | [
"Mechanics",
"Mechanical power control"
] |
1,056,500 | https://en.wikipedia.org/wiki/Return%20on%20capital | Return on capital (ROC), or return on invested capital (ROIC), is a ratio used in finance, valuation and accounting, as a measure of the profitability and value-creating potential of companies relative to the amount of capital invested by shareholders and other debtholders. It indicates how effective a company is at turning capital into profits.
The ratio is calculated by dividing the after tax operating income (NOPAT) by the average book-value of the invested capital (IC).
Return on invested capital formula
There are three main components of this measurement:
While ratios such as return on equity and return on assets use net income as the numerator, ROIC uses net operating income after tax (NOPAT), which means that after-tax expenses (income) from financing activities are added back to (deducted from) net income.
While many financial computations use market value instead of book value (for instance, calculating debt-to-equity ratios or calculating the weights for the weighted average cost of capital (WACC)), ROIC uses book values of the invested capital as the denominator. This procedure is done because, unlike market values which reflect future expectations in efficient markets, book values more closely reflect the amount of initial capital invested to generate a return.
The denominator represents the average value of the invested capital rather than the value of the end of the year. This is because the NOPAT represents a sum of money flows, while the value of the invested capital changes every day (e.g., the invested capital on December 31 could be 30% lower than the invested capital on December 30). Because the exact average is difficult to calculate, it is often estimated by taking the average between the IC at the beginning of the year and the IC at the end of the year.
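Restated as a formula (this simply combines the prose definition above with the two-point average for invested capital):

```latex
\mathrm{ROIC} \;=\; \frac{\mathrm{NOPAT}}{\text{average invested capital}}
\;\approx\; \frac{\mathrm{NOPAT}}{\tfrac{1}{2}\left(\mathrm{IC}_{\text{beginning of year}} + \mathrm{IC}_{\text{end of year}}\right)}
```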
Some practitioners make an additional adjustment to the formula to add depreciation, amortization, and depletion charges back to the numerator. These charges are considered by some to be "non-cash expenses" which are often included as part of operating expenses. The practice of adding these back is said to more closely reflect the cash return of a firm over a given period of time. However, others (such as Warren Buffett) argue that depreciation should not be excluded seeing that it represents a real cash outflow. When a company purchases a depreciating asset, the cost is not immediately expensed on the income statement. Instead, it is capitalized on the balance sheet as an asset. Over time, the depreciation expenses on the income statement will reduce the asset value on the balance sheet. In turn, depreciation represents the delayed expensing of the initial cash outflow that purchased the asset, and adding it back to the numerator is thus seen as a rather liberal accounting practice.
Relationship with WACC
Because financial theory states that the value of an investment is determined by both the amount and the risk of its expected cash flows to an investor, ROIC is commonly assessed in relation to the weighted average cost of capital (WACC).
The cost of capital is the return expected from investors for bearing the risk that the projected cash flows of an investment deviate from expectations. It is said that for investments in which future cash flows are incrementally less certain, rational investors require incrementally higher rates of return as compensation for bearing higher degrees of risk. In corporate finance, WACC is a common measurement of the minimum expected weighted average return of all investors in a company given the riskiness of its future cash flows.
Since return on invested capital is said to measure the ability of a firm to generate a return on its capital, and since WACC is said to measure the minimum expected return demanded by the firm's capital providers, the difference between ROIC and WACC is sometimes referred to as a firm's "excess return", or "economic profit".
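A minimal numerical sketch of the ROIC and excess-return calculation, using made-up figures purely for illustration (none of these numbers come from any real company):

```python
# Hypothetical figures chosen only to illustrate the calculation described above.
nopat = 120.0                         # after-tax operating income for the year
ic_beginning, ic_end = 950.0, 1050.0  # book value of invested capital at start/end of year
wacc = 0.08                           # weighted average cost of capital (8%)

ic_average = (ic_beginning + ic_end) / 2
roic = nopat / ic_average
excess_return = roic - wacc           # the "economic profit" spread described above

print(f"ROIC = {roic:.1%}, excess return over WACC = {excess_return:.1%}")
# ROIC = 12.0%, excess return over WACC = 4.0%
```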
See also
Cash flow return on investment (CFROI)
Fairfield Plaza, Inc. v. Commissioner
Negative return (finance)
Profit maximization
Profitability
Rate of profit
Rate of return on a portfolio
Recovery of capital doctrine
Return on assets (RoA)
Return on brand (ROB)
Return on capital employed (ROCE)
Return on net assets (RoNA)
Tendency of the rate of profit to fall
References
Financial ratios
Investment indicators | Return on capital | [
"Mathematics"
] | 874 | [
"Financial ratios",
"Quantity",
"Metrics"
] |
178,649 | https://en.wikipedia.org/wiki/General%20topology | In mathematics, general topology (or point set topology) is the branch of topology that deals with the basic set-theoretic definitions and constructions used in topology. It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology.
The fundamental concepts in point-set topology are continuity, compactness, and connectedness:
Continuous functions, intuitively, take nearby points to nearby points.
Compact sets are those that can be covered by finitely many sets of arbitrarily small size.
Connected sets are sets that cannot be divided into two pieces that are far apart.
The terms 'nearby', 'arbitrarily small', and 'far apart' can all be made precise by using the concept of open sets. If we change the definition of 'open set', we change what continuous functions, compact sets, and connected sets are. Each choice of definition for 'open set' is called a topology. A set with a topology is called a topological space.
Metric spaces are an important class of topological spaces where a real, non-negative distance, also called a metric, can be defined on pairs of points in the set. Having a metric simplifies many proofs, and many of the most common topological spaces are metric spaces.
History
General topology grew out of a number of areas, most importantly the following:
the detailed study of subsets of the real line (once known as the topology of point sets; this usage is now obsolete)
the introduction of the manifold concept
the study of metric spaces, especially normed linear spaces, in the early days of functional analysis.
General topology assumed its present form around 1940. It captures, one might say, almost everything in the intuition of continuity, in a technically adequate form that can be applied in any area of mathematics.
A topology on a set
Let X be a set and let τ be a family of subsets of X. Then τ is called a topology on X if:
Both the empty set and X are elements of τ
Any union of elements of τ is an element of τ
Any intersection of finitely many elements of τ is an element of τ
If τ is a topology on X, then the pair (X, τ) is called a topological space. The notation Xτ may be used to denote a set X endowed with the particular topology τ.
The members of τ are called open sets in X. A subset of X is said to be closed if its complement is in τ (i.e., its complement is open). A subset of X may be open, closed, both (clopen set), or neither. The empty set and X itself are always both closed and open.
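For small finite examples, the three axioms above can be checked mechanically. The following sketch is illustrative only; because the family is finite, checking pairwise unions and intersections suffices:

```python
from itertools import combinations

def is_topology(X, tau):
    """Check the three axioms above for a family tau of subsets of a finite set X."""
    tau = {frozenset(s) for s in tau}
    X = frozenset(X)
    if frozenset() not in tau or X not in tau:
        return False                      # axiom 1: the empty set and X are in tau
    for a, b in combinations(tau, 2):
        if a | b not in tau:              # axiom 2: unions (pairwise suffices for a finite family)
            return False
        if a & b not in tau:              # axiom 3: finite intersections
            return False
    return True

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, X]))   # True: a chain of open sets
print(is_topology(X, [set(), {1}, {2}, X]))      # False: {1} | {2} = {1, 2} is missing
```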
Basis for a topology
A base (or basis) B for a topological space X with topology T is a collection of open sets in T such that every open set in T can be written as a union of elements of B. We say that the base generates the topology T. Bases are useful because many properties of topologies can be reduced to statements about a base that generates that topology—and because many topologies are most easily defined in terms of a base that generates them.
Subspace and quotient
Every subset of a topological space can be given the subspace topology in which the open sets are the intersections of the open sets of the larger space with the subset. For any indexed family of topological spaces, the product can be given the product topology, which is generated by the inverse images of open sets of the factors under the projection mappings. For example, in finite products, a basis for the product topology consists of all products of open sets. For infinite products, there is the additional requirement that in a basic open set, all but finitely many of its projections are the entire space.
A quotient space is defined as follows: if X is a topological space and Y is a set, and if f : X→ Y is a surjective function, then the quotient topology on Y is the collection of subsets of Y that have open inverse images under f. In other words, the quotient topology is the finest topology on Y for which f is continuous. A common example of a quotient topology is when an equivalence relation is defined on the topological space X. The map f is then the natural projection onto the set of equivalence classes.
Examples of topological spaces
A given set may have many different topologies. If a set is given a different topology, it is viewed as a different topological space.
Discrete and trivial topologies
Any set can be given the discrete topology, in which every subset is open. The only convergent sequences or nets in this topology are those that are eventually constant. Also, any set can be given the trivial topology (also called the indiscrete topology), in which only the empty set and the whole space are open. Every sequence and net in this topology converges to every point of the space. This example shows that in general topological spaces, limits of sequences need not be unique. However, often topological spaces must be Hausdorff spaces where limit points are unique.
Cofinite and cocountable topologies
Any set can be given the cofinite topology in which the open sets are the empty set and the sets whose complement is finite. This is the smallest T1 topology on any infinite set.
Any set can be given the cocountable topology, in which a set is defined as open if it is either empty or its complement is countable. When the set is uncountable, this topology serves as a counterexample in many situations.
Topologies on the real and complex numbers
There are many ways to define a topology on R, the set of real numbers. The standard topology on R is generated by the open intervals. The set of all open intervals forms a base or basis for the topology, meaning that every open set is a union of some collection of sets from the base. In particular, this means that a set is open if there exists an open interval of nonzero radius about every point in the set. More generally, the Euclidean spaces Rn can be given a topology. In the usual topology on Rn the basic open sets are the open balls. Similarly, C, the set of complex numbers, and Cn have a standard topology in which the basic open sets are open balls.
The real line can also be given the lower limit topology. Here, the basic open sets are the half open intervals [a, b). This topology on R is strictly finer than the Euclidean topology defined above; a sequence converges to a point in this topology if and only if it converges from above in the Euclidean topology. This example shows that a set may have many distinct topologies defined on it.
The metric topology
Every metric space can be given a metric topology, in which the basic open sets are open balls defined by the metric. This is the standard topology on any normed vector space. On a finite-dimensional vector space this topology is the same for all norms.
Further examples
There exist numerous topologies on any given finite set. Such spaces are called finite topological spaces. Finite spaces are sometimes used to provide examples or counterexamples to conjectures about topological spaces in general.
Every manifold has a natural topology, since it is locally Euclidean. Similarly, every simplex and every simplicial complex inherits a natural topology from Rn.
The Zariski topology is defined algebraically on the spectrum of a ring or an algebraic variety. On Rn or Cn, the closed sets of the Zariski topology are the solution sets of systems of polynomial equations.
A linear graph has a natural topology that generalises many of the geometric aspects of graphs with vertices and edges.
Many sets of linear operators in functional analysis are endowed with topologies that are defined by specifying when a particular sequence of functions converges to the zero function.
Any local field has a topology native to it, and this can be extended to vector spaces over that field.
The Sierpiński space is the simplest non-discrete topological space. It has important relations to the theory of computation and semantics.
If Γ is an ordinal number, then the set Γ = [0, Γ) may be endowed with the order topology generated by the intervals (a, b), [0, b) and (a, Γ) where a and b are elements of Γ.
Continuous functions
Continuity is expressed in terms of neighborhoods: f : X → Y is continuous at a point x ∈ X if and only if for any neighborhood V of f(x), there is a neighborhood U of x such that f(U) ⊆ V. Intuitively, continuity means no matter how "small" V becomes, there is always a U containing x that maps inside V and whose image under f contains f(x). This is equivalent to the condition that the preimages of the open (closed) sets in Y are open (closed) in X. In metric spaces, this definition is equivalent to the ε–δ-definition that is often used in analysis.
An extreme example: if a set is given the discrete topology, all functions to any topological space are continuous. On the other hand, if a set is equipped with the indiscrete topology and the target space is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose range is indiscrete is continuous.
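Continuing the finite sketch above (again purely illustrative, with made-up sets), continuity can be tested through the preimage characterization:

```python
def is_continuous(f, tau_X, tau_Y):
    """f maps points of X to points of Y (given as a dict); f is continuous
    iff the preimage of every open set of Y is open in X."""
    tau_X = {frozenset(s) for s in tau_X}
    for V in tau_Y:
        preimage = frozenset(x for x, y in f.items() if y in V)
        if preimage not in tau_X:
            return False
    return True

X, Y = {1, 2, 3}, {"a", "b"}
discrete_X = [set(), {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, X]
sierpinski_Y = [set(), {"a"}, Y]     # the Sierpinski space mentioned later in this article

f = {1: "a", 2: "a", 3: "b"}
# Every map out of a discrete space is continuous, as noted above.
print(is_continuous(f, discrete_X, sierpinski_Y))   # True
```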
Alternative definitions
Several equivalent definitions for a topological structure exist and thus there are several equivalent ways to define a continuous function.
Neighborhood definition
Definitions based on preimages are often difficult to use directly. The following criterion expresses continuity in terms of neighborhoods: f is continuous at some point x ∈ X if and only if for any neighborhood V of f(x), there is a neighborhood U of x such that f(U) ⊆ V. Intuitively, continuity means no matter how "small" V becomes, there is always a U containing x that maps inside V.
If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above δ-ε definition of continuity in the context of metric spaces. However, in general topological spaces, there is no notion of nearness or distance.
Note, however, that if the target space is Hausdorff, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous.
Sequences and nets
In several contexts, the topology of a space is conveniently specified in terms of limit points. In many instances, this is accomplished by specifying when a point is the limit of a sequence, but for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition.
In detail, a function f: X → Y is sequentially continuous if whenever a sequence (xn) in X converges to a limit x, the sequence (f(xn)) converges to f(x). Thus sequentially continuous functions "preserve sequential limits". Every continuous function is sequentially continuous. If X is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if X is a metric space, sequential continuity and continuity are equivalent. For non first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve limits of nets, and in fact this property characterizes continuous functions.
Closure operator definition
Instead of specifying the open subsets of a topological space, the topology can also be determined by a closure operator (denoted cl), which assigns to any subset A ⊆ X its closure, or an interior operator (denoted int), which assigns to any subset A of X its interior. In these terms, a function
f : X → X'
between topological spaces is continuous in the sense above if and only if for all subsets A of X
f(cl(A)) ⊆ cl(f(A)).
That is to say, given any element x of X that is in the closure of any subset A, f(x) belongs to the closure of f(A). This is equivalent to the requirement that for all subsets A' of X'
cl(f−1(A')) ⊆ f−1(cl(A')).
Moreover, f is continuous if and only if
f−1(int(A')) ⊆ int(f−1(A'))
for any subset A' of X'.
Properties
If f: X → Y and g: Y → Z are continuous, then so is the composition g ∘ f: X → Z. If f: X → Y is continuous and
X is compact, then f(X) is compact.
X is connected, then f(X) is connected.
X is path-connected, then f(X) is path-connected.
X is Lindelöf, then f(X) is Lindelöf.
X is separable, then f(X) is separable.
The possible topologies on a fixed set X are partially ordered: a topology τ1 is said to be coarser than another topology τ2 (notation: τ1 ⊆ τ2) if every open subset with respect to τ1 is also open with respect to τ2. Then, the identity map
idX: (X, τ2) → (X, τ1)
is continuous if and only if τ1 ⊆ τ2 (see also comparison of topologies). More generally, a continuous function
stays continuous if the topology τY is replaced by a coarser topology and/or τX is replaced by a finer topology.
Homeomorphisms
Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. In fact, if an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function f−1 need not be continuous. A bijective continuous function with continuous inverse function is called a homeomorphism.
If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism.
Defining topologies via continuous functions
Given a function f : X → S, where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which f−1(A) is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus the final topology can be characterized as the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f.
Dually, for a function f from a set S to a topological space X, the initial topology on S has a basis of open sets given by those sets of the form f−1(U), where U is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus the initial topology can be characterized as the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X.
A topology on a set S is uniquely determined by the class of all continuous functions S → X into all topological spaces X. Dually, a similar idea can be applied to maps X → S.
Compact sets
Formally, a topological space X is called compact if each of its open covers has a finite subcover. Otherwise it is called non-compact. Explicitly, this means that for every arbitrary collection {Uα : α ∈ A} of open subsets of X such that X = ⋃α∈A Uα, there is a finite subset J of A such that X = ⋃i∈J Ui.
Some branches of mathematics such as algebraic geometry, typically influenced by the French school of Bourbaki, use the term quasi-compact for the general notion, and reserve the term compact for topological spaces that are both Hausdorff and quasi-compact. A compact set is sometimes referred to as a compactum, plural compacta.
Every closed interval in R of finite length is compact. More is true: In Rn, a set is compact if and only if it is closed and bounded. (See Heine–Borel theorem).
Every continuous image of a compact space is compact.
A compact subset of a Hausdorff space is closed.
Every continuous bijection from a compact space to a Hausdorff space is necessarily a homeomorphism.
Every sequence of points in a compact metric space has a convergent subsequence.
Every compact finite-dimensional manifold can be embedded in some Euclidean space Rn.
Connected sets
A topological space X is said to be disconnected if it is the union of two disjoint nonempty open sets. Otherwise, X is said to be connected. A subset of a topological space is said to be connected if it is connected under its subspace topology. Some authors exclude the empty set (with its unique topology) as a connected space, but this article does not follow that practice.
For a topological space X the following conditions are equivalent:
X is connected.
X cannot be divided into two disjoint nonempty closed sets.
The only subsets of X that are both open and closed (clopen sets) are X and the empty set.
The only subsets of X with empty boundary are X and the empty set.
X cannot be written as the union of two nonempty separated sets.
The only continuous functions from X to {0,1}, the two-point space endowed with the discrete topology, are constant.
Every interval in R is connected.
The continuous image of a connected space is connected.
Connected components
The maximal connected subsets (ordered by inclusion) of a nonempty topological space are called the connected components of the space.
The components of any topological space X form a partition of X: they are disjoint, nonempty, and their union is the whole space.
Every component is a closed subset of the original space. It follows that, in the case where their number is finite, each component is also an open subset. However, if their number is infinite, this might not be the case; for instance, the connected components of the set of the rational numbers are the one-point sets, which are not open.
Let Γx be the connected component of x in a topological space X, and Γ′x be the intersection of all open-closed sets containing x (called the quasi-component of x). Then Γx ⊆ Γ′x, where the equality holds if X is compact Hausdorff or locally connected.
Disconnected spaces
A space in which all components are one-point sets is called totally disconnected. Related to this property, a space X is called totally separated if, for any two distinct elements x and y of X, there exist disjoint open neighborhoods U of x and V of y such that X is the union of U and V. Clearly any totally separated space is totally disconnected, but the converse does not hold. For example, take two copies of the rational numbers Q, and identify them at every point except zero. The resulting space, with the quotient topology, is totally disconnected. However, by considering the two copies of zero, one sees that the space is not totally separated. In fact, it is not even Hausdorff, and the condition of being totally separated is strictly stronger than the condition of being Hausdorff.
Path-connected sets
A path from a point x to a point y in a topological space X is a continuous function f from the unit interval [0,1] to X with f(0) = x and f(1) = y. A path-component of X is an equivalence class of X under the equivalence relation, which makes x equivalent to y if there is a path from x to y. The space X is said to be path-connected (or pathwise connected or 0-connected) if there is at most one path-component; that is, if there is a path joining any two points in X. Again, many authors exclude the empty space.
Every path-connected space is connected. The converse is not always true: examples of connected spaces that are not path-connected include the extended long line L* and the topologist's sine curve.
However, subsets of the real line R are connected if and only if they are path-connected; these subsets are the intervals of R. Also, open subsets of Rn or Cn are connected if and only if they are path-connected. Additionally, connectedness and path-connectedness are the same for finite topological spaces.
Products of spaces
Given X such that X := ∏i∈I Xi is the Cartesian product of the topological spaces Xi, indexed by i ∈ I, and the canonical projections pi : X → Xi, the product topology on X is defined as the coarsest topology (i.e. the topology with the fewest open sets) for which all the projections pi are continuous. The product topology is sometimes called the Tychonoff topology.
The open sets in the product topology are unions (finite or infinite) of sets of the form ∏i∈I Ui, where each Ui is open in Xi and Ui ≠ Xi only finitely many times. In particular, for a finite product (in particular, for the product of two topological spaces), the products of base elements of the Xi gives a basis for the product ∏i∈I Xi.
The product topology on X is the topology generated by sets of the form pi−1(U), where i is in I and U is an open subset of Xi. In other words, the sets {pi−1(U)} form a subbase for the topology on X. A subset of X is open if and only if it is a (possibly infinite) union of intersections of finitely many sets of the form pi−1(U). The pi−1(U) are sometimes called open cylinders, and their intersections are cylinder sets.
In general, the product of the topologies of each Xi forms a basis for what is called the box topology on X. In general, the box topology is finer than the product topology, but for finite products they coincide.
Related to compactness is Tychonoff's theorem: the (arbitrary) product of compact spaces is compact.
Separation axioms
Many of these names have alternative meanings in some of the mathematical literature, as explained on History of the separation axioms; for example, the meanings of "normal" and "T4" are sometimes interchanged, similarly "regular" and "T3", etc. Many of the concepts also have several names; however, the one listed first is always least likely to be ambiguous.
Most of these axioms have alternative definitions with the same meaning; the definitions given here fall into a consistent pattern that relates the various notions of separation defined in the previous section. Other possible definitions can be found in the individual articles.
In all of the following definitions, X is again a topological space.
X is T0, or Kolmogorov, if any two distinct points in X are topologically distinguishable. (It is a common theme among the separation axioms to have one version of an axiom that requires T0 and one version that doesn't.)
X is T1, or accessible or Fréchet, if any two distinct points in X are separated. Thus, X is T1 if and only if it is both T0 and R0. (Though you may say such things as T1 space, Fréchet topology, and Suppose that the topological space X is Fréchet, avoid saying Fréchet space in this context, since there is another entirely different notion of Fréchet space in functional analysis.)
X is Hausdorff, or T2 or separated, if any two distinct points in X are separated by neighbourhoods. Thus, X is Hausdorff if and only if it is both T0 and R1. A Hausdorff space must also be T1.
X is T2½, or Urysohn, if any two distinct points in X are separated by closed neighbourhoods. A T2½ space must also be Hausdorff.
X is regular, or T3, if it is T0 and if given any point x and closed set F in X such that x does not belong to F, they are separated by neighbourhoods. (In fact, in a regular space, any such x and F is also separated by closed neighbourhoods.)
X is Tychonoff, or T3½, completely T3, or completely regular, if it is T0 and if, given any point x and closed set F in X such that x does not belong to F, they are separated by a continuous function.
X is normal, or T4, if it is Hausdorff and if any two disjoint closed subsets of X are separated by neighbourhoods. (In fact, a space is normal if and only if any two disjoint closed sets can be separated by a continuous function; this is Urysohn's lemma.)
X is completely normal, or T5 or completely T4, if it is T1 and if any two separated sets are separated by neighbourhoods. A completely normal space must also be normal.
X is perfectly normal, or T6 or perfectly T4, if it is T1 and if any two disjoint closed sets are precisely separated by a continuous function. A perfectly normal Hausdorff space must also be completely normal Hausdorff.
The Tietze extension theorem: In a normal space, every continuous real-valued function defined on a closed subspace can be extended to a continuous map defined on the whole space.
Countability axioms
An axiom of countability is a property of certain mathematical objects (usually in a category) that requires the existence of a countable set with certain properties, while without it such sets might not exist.
Important countability axioms for topological spaces:
sequential space: a set is open if every sequence convergent to a point in the set is eventually in the set
first-countable space: every point has a countable neighbourhood basis (local base)
second-countable space: the topology has a countable base
separable space: there exists a countable dense subspace
Lindelöf space: every open cover has a countable subcover
σ-compact space: there exists a countable cover by compact spaces
Relations:
Every first countable space is sequential.
Every second-countable space is first-countable, separable, and Lindelöf.
Every σ-compact space is Lindelöf.
A metric space is first-countable.
For metric spaces second-countability, separability, and the Lindelöf property are all equivalent.
Metric spaces
A metric space is an ordered pair (M, d) where M is a set and d is a metric on M, i.e., a function
d : M × M → R
such that for any x, y, z ∈ M, the following holds:
d(x, y) ≥ 0 (non-negative),
d(x, y) = 0 iff x = y (identity of indiscernibles),
d(x, y) = d(y, x) (symmetry) and
d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality).
The function d is also called the distance function or simply the distance. Often, d is omitted and one just writes M for a metric space if it is clear from the context what metric is used.
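As a small illustrative check (the particular metric and sample points below are arbitrary choices, not part of the definition), the four conditions can be verified numerically for the Euclidean distance on the plane:

```python
import math
from itertools import product

def d(p, q):
    """Euclidean metric on R^2."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

points = [(0.0, 0.0), (3.0, 4.0), (-1.0, 2.0)]

for x, y, z in product(points, repeat=3):
    assert d(x, y) >= 0                            # non-negativity
    assert (d(x, y) == 0) == (x == y)              # identity of indiscernibles
    assert d(x, y) == d(y, x)                      # symmetry
    assert d(x, z) <= d(x, y) + d(y, z) + 1e-12    # triangle inequality (float tolerance)

print("All four metric axioms hold on the sample points.")
```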
Every metric space is paracompact and Hausdorff, and thus normal.
The metrization theorems provide necessary and sufficient conditions for a topology to come from a metric.
Baire category theorem
The Baire category theorem says: If X is a complete metric space or a locally compact Hausdorff space, then the interior of every union of countably many nowhere dense sets is empty.
Any open subspace of a Baire space is itself a Baire space.
Main areas of research
Continuum theory
A continuum (pl continua) is a nonempty compact connected metric space, or less frequently, a compact connected Hausdorff space. Continuum theory is the branch of topology devoted to the study of continua. These objects arise frequently in nearly all areas of topology and analysis, and their properties are strong enough to yield many 'geometric' features.
Dynamical systems
Topological dynamics concerns the behavior of a space and its subspaces over time when subjected to continuous change. Many examples with applications to physics and other areas of math include fluid dynamics, billiards and flows on manifolds. The topological characteristics of fractals in fractal geometry, of Julia sets and the Mandelbrot set arising in complex dynamics, and of attractors in differential equations are often critical to understanding these systems.
Pointless topology
Pointless topology (also called point-free or pointfree topology) is an approach to topology that avoids mentioning points. The name 'pointless topology' is due to John von Neumann. The ideas of pointless topology are closely related to mereotopologies, in which regions (sets) are treated as foundational without explicit reference to underlying point sets.
Dimension theory
Dimension theory is a branch of general topology dealing with dimensional invariants of topological spaces.
Topological algebras
A topological algebra A over a topological field K is a topological vector space together with a continuous multiplication
A × A → A, (a, b) ↦ a·b,
that makes it an algebra over K. A unital associative topological algebra is a topological ring.
The term was coined by David van Dantzig; it appears in the title of his doctoral dissertation (1931).
Metrizability theory
In topology and related areas of mathematics, a metrizable space is a topological space that is homeomorphic to a metric space. That is, a topological space (X, τ) is said to be metrizable if there is a metric
d : X × X → [0, ∞)
such that the topology induced by d is τ. Metrization theorems are theorems that give sufficient conditions for a topological space to be metrizable.
Set-theoretic topology
Set-theoretic topology is a subject that combines set theory and general topology. It focuses on topological questions that are independent of Zermelo–Fraenkel set theory (ZFC). A famous problem is the normal Moore space question, a question in general topology that was the subject of intense research. The answer to the normal Moore space question was eventually proved to be independent of ZFC.
See also
List of examples in general topology
Glossary of general topology for detailed definitions
List of general topology topics for related articles
Category of topological spaces
References
Further reading
Some standard books on general topology include:
Bourbaki, Topologie Générale (General Topology), .
Stephen Willard, General Topology, .
James Munkres, Topology, .
George F. Simmons, Introduction to Topology and Modern Analysis, .
Paul L. Shick, Topology: Point-Set and Geometric, .
Ryszard Engelking, General Topology, .
O.Ya. Viro, O.A. Ivanov, V.M. Kharlamov and N.Yu. Netsvetaev, Elementary Topology: Textbook in Problems, .
The arXiv subject code is math.GN.
External links | General topology | [
"Mathematics"
] | 6,409 | [
"General topology",
"Topology"
] |
178,655 | https://en.wikipedia.org/wiki/Ernst%20Zermelo | Ernst Friedrich Ferdinand Zermelo (, ; 27 July 187121 May 1953) was a German logician and mathematician, whose work has major implications for the foundations of mathematics. He is known for his role in developing Zermelo–Fraenkel axiomatic set theory and his proof of the well-ordering theorem. Furthermore, his 1929 work on ranking chess players is the first description of a model for pairwise comparison that continues to have a profound impact on various applied fields utilizing this method.
Life
Ernst Zermelo graduated from Berlin's Luisenstädtisches Gymnasium (now ) in 1889. He then studied mathematics, physics and philosophy at the University of Berlin, the University of Halle, and the University of Freiburg. He finished his doctorate in 1894 at the University of Berlin, awarded for a dissertation on the calculus of variations (Untersuchungen zur Variationsrechnung). Zermelo remained at the University of Berlin, where he was appointed assistant to Planck, under whose guidance he began to study hydrodynamics. In 1897, Zermelo went to the University of Göttingen, at that time the leading centre for mathematical research in the world, where he completed his habilitation thesis in 1899.
In 1910, Zermelo left Göttingen upon being appointed to the chair of mathematics at Zurich University, which he resigned in 1916.
He was appointed to an honorary chair at the University of Freiburg in 1926, which he resigned in 1935 because he disapproved of Adolf Hitler's regime. At the end of World War II and at his request, Zermelo was reinstated to his honorary position in Freiburg.
Research in set theory
In 1900, in the Paris conference of the International Congress of Mathematicians, David Hilbert challenged the mathematical community with his famous Hilbert's problems, a list of 23 unsolved fundamental questions which mathematicians should attack during the coming century. The first of these, a problem of set theory, was the continuum hypothesis introduced by Cantor in 1878, and in the course of its statement Hilbert mentioned also the need to prove the well-ordering theorem.
Zermelo began to work on the problems of set theory under Hilbert's influence and in 1902 published his first work concerning the addition of transfinite cardinals. By that time he had also discovered the so-called Russell paradox. In 1904, he succeeded in taking the first step suggested by Hilbert towards the continuum hypothesis when he proved the well-ordering theorem (every set can be well ordered). This result brought fame to Zermelo, who was appointed Professor in Göttingen, in 1905. His proof of the well-ordering theorem, based on the powerset axiom and the axiom of choice, was not accepted by all mathematicians, mostly because the axiom of choice was a paradigm of non-constructive mathematics. In 1908, Zermelo succeeded in producing an improved proof making use of Dedekind's notion of the "chain" of a set, which became more widely accepted; this was mainly because that same year he also offered an axiomatization of set theory.
Zermelo began to axiomatize set theory in 1905; in 1908, he published his results despite his failure to prove the consistency of his axiomatic system. See the article on Zermelo set theory for an outline of this paper, together with the original axioms, with the original numbering.
In 1922, Abraham Fraenkel and Thoralf Skolem independently improved Zermelo's axiom system. The resulting system, now called Zermelo–Fraenkel axioms (ZF), is now the most commonly used system for axiomatic set theory.
Zermelo's navigation problem
Proposed in 1931, Zermelo's navigation problem is a classic optimal control problem. The problem concerns a boat navigating on a body of water from a starting point O to a destination point D. The boat is capable of a certain maximum speed, and we want to derive the best possible control to reach D in the least possible time.
Without considering external forces such as current and wind, the optimal control is for the boat to always head towards D. Its path then is a line segment from O to D, which is trivially optimal. With consideration of current and wind, if the combined force applied to the boat is non-zero, the control for no current and wind does not yield the optimal path.
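A minimal illustrative sketch of the simplest case, a constant (uniform) current: the time-optimal ground track is then the straight line from O to D, flown with a fixed "crab" heading that cancels the cross-track component of the current. This is not Zermelo's general solution, which handles position-dependent currents; the coordinates, speeds and function name below are invented for illustration.

import math

def crab_heading(O, D, boat_speed, current):
    dx, dy = D[0] - O[0], D[1] - O[1]
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist                 # unit vector along the course O -> D
    cross = -current[0] * uy + current[1] * ux    # current component across the course
    along = current[0] * ux + current[1] * uy     # current component along the course
    offset = math.asin(-cross / boat_speed)       # heading offset that cancels the cross current
    ground_speed = boat_speed * math.cos(offset) + along
    return math.degrees(offset), dist / ground_speed

offset_deg, travel_time = crab_heading((0, 0), (10, 0), boat_speed=2.0, current=(0.0, 0.5))
print(offset_deg, travel_time)   # about -14.5 degrees (pointed into the current) and 5.2 time units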
Publications
Jean van Heijenoort, 1967. From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931. Harvard Univ. Press.
1904. "Proof that every set can be well-ordered," 139−41.
1908. "A new proof of the possibility of well-ordering," 183–98.
1908. "Investigations in the foundations of set theory I," 199–215.
1913. "On an Application of Set Theory to the Theory of the Game of Chess" in Rasmusen E., ed., 2001. Readings in Games and Information, Wiley-Blackwell: 79–82.
1930. "On boundary numbers and domains of sets: new investigations in the foundations of set theory" in Ewald, William B., ed., 1996. From Kant to Hilbert: A Source Book in the Foundations of Mathematics, 2 vols. Oxford University Press: 1219–33.
Works by others:
Zermelo's Axiom of Choice, Its Origins, Development, & Influence, Gregory H. Moore, being Volume 8 of Studies in the History of Mathematics and Physical Sciences, Springer Verlag, New York, 1982.
See also
Axiom of choice
Axiom of infinity
Axiom of limitation of size
Axiom of union
Boltzmann brain
Choice function
Cumulative hierarchy
Pairwise comparison
Von Neumann universe
14990 Zermelo, asteroid
References
Citations
External links
Zermelo Navigation
1871 births
1953 deaths
20th-century German philosophers
19th-century German mathematicians
Mathematical logicians
Writers from Berlin
People from the Province of Brandenburg
German set theorists
Academic staff of the University of Zurich
Humboldt University of Berlin alumni
Martin Luther University of Halle-Wittenberg alumni
University of Freiburg alumni
Academic staff of the University of Freiburg
Academic staff of the University of Göttingen
German male writers
20th-century German mathematicians | Ernst Zermelo | [
"Mathematics"
] | 1,266 | [
"Mathematical logic",
"Mathematical logicians"
] |
178,702 | https://en.wikipedia.org/wiki/Pound%20%28force%29 | The pound of force or pound-force (symbol: lbf) is a unit of force used in some systems of measurement, including English Engineering units and the foot–pound–second system.
Pound-force should not be confused with pound-mass (lb), often simply called "pound", which is a unit of mass; nor should these be confused with foot-pound (ft⋅lbf), a unit of energy, or pound-foot (lbf⋅ft), a unit of torque.
Definitions
The pound-force is equal to the gravitational force exerted on a mass of one avoirdupois pound on the surface of Earth. Since the 18th century, the unit has been used in low-precision measurements, for which small changes in Earth's gravity (which varies from equator to pole by up to half a percent) can safely be neglected.
The 20th century, however, brought the need for a more precise definition, requiring a standardized value for acceleration due to gravity.
Product of avoirdupois pound and standard gravity
The pound-force is the product of one avoirdupois pound (exactly 0.45359237 kg) and the standard acceleration due to gravity, 9.80665 m/s² (approximately 32.174 ft/s²).
The standard values of acceleration of the standard gravitational field (gn) and the international avoirdupois pound (lb) result in a pound-force equal to 4.4482216152605 N exactly.
This definition can be rephrased in terms of the slug. A slug has a mass of 32.174049 lb. A pound-force is the amount of force required to accelerate a slug at a rate of 1 ft/s², so: 1 lbf = 1 slug⋅ft/s².
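Putting the definitions above together as a single worked conversion (this restates the standard values already quoted and adds no new data):
\[
1\ \text{lbf} = 0.45359237\ \text{kg} \times 9.80665\ \tfrac{\text{m}}{\text{s}^2} = 4.4482216152605\ \text{N} .
\]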
Conversion to other units
Foot–pound–second (FPS) systems of units
In some contexts, the term "pound" is used almost exclusively to refer to the unit of force and not the unit of mass. In those applications, the preferred unit of mass is the slug, i.e. lbf⋅s2/ft. In other contexts, the unit "pound" refers to a unit of mass. The international standard symbol for the pound as a unit of mass is lb.
In the "engineering" systems (middle column), the weight of the mass unit (pound-mass) on Earth's surface is approximately equal to the force unit (pound-force). This is convenient because one pound mass exerts one pound force due to gravity. Note, however, unlike the other systems the force unit is not equal to the mass unit multiplied by the acceleration unit—the use of Newton's second law, , requires another factor, gc, usually taken to be 32.174049 (lb⋅ft)/(lbf⋅s2).
"Absolute" systems are coherent systems of units: by using the slug as the unit of mass, the "gravitational" FPS system (left column) avoids the need for such a constant. The SI is an "absolute" metric system with kilogram and meter as base units.
Pound of thrust
The term pound of thrust is an alternative name for pound-force in specific contexts. It is frequently seen in US sources on jet engines and rocketry, some of which continue to use the FPS notation. For example, the thrust produced by each of the Space Shuttle's two Solid Rocket Boosters was , together .
See also
Foot-pound (energy)
Ton-force
Kip (unit)
Mass in general relativity
Mass in special relativity
Mass versus weight for the difference between the two physical properties
Newton
Poundal
Pounds per square inch, a unit of pressure
Notes and references
General sources
Obert, Edward F. (1948). Thermodynamics. New York: D. J. Leggett Book Company. Chapter I "Survey of Dimensions and Units", pp. 1-24.
Customary units of measurement in the United States
Imperial units
Units of force | Pound (force) | [
"Physics",
"Mathematics"
] | 787 | [
"Force",
"Physical quantities",
"Quantity",
"Units of force",
"Units of measurement"
] |
178,711 | https://en.wikipedia.org/wiki/Cable%20length | A cable length or length of cable is a nautical unit of measure equal to one tenth of a nautical mile or approximately 100 fathoms. Owing to anachronisms and varying techniques of measurement, the exact length of a cable varies, depending on the standard used.
Etymology and origin
The modern word cable is directly descended from the Middle English cable, cabel or kabel and also occurs in Middle Dutch and Middle German. Ultimately the word comes from Romanic, probably from a cattle halter. A cable in this usage is a thick rope or, by transference, a chain cable. The OED gives quotations from onwards. A cable's length (often "cable length" or just "cable") is simply the standard length in which cables came, which by 1555 had settled to around or .
Traditionally rope is made on long ropewalks, the length of which determines the maximum length of rope it is possible to make. As rope is "closed" (the final stage in manufacture) the length reduces, thus the ropewalk at Chatham Dockyard is long in order to produce standard coils.
Definition
The definition varies:
International: 185.2 m, equivalent to one tenth of a nautical mile
UK traditional: 608 feet, though the Admiralty used one tenth of a sea mile (1 minute of latitude locally).
US customary (US Navy): 720 feet (120 fathoms)
In 2008 the Royal Navy in a handbook defined it as
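A small conversion sketch for the figures listed above; it assumes the international definitions of the foot (0.3048 m) and the nautical mile (1,852 m), and simply restates the traditional UK and US figures from the list.

NM_M = 1852.0      # metres in one international nautical mile
FT_M = 0.3048      # metres in one international foot

print(NM_M / 10)   # international cable: 185.2 m
print(608 * FT_M)  # UK traditional cable (608 ft): about 185.3 m
print(720 * FT_M)  # US Navy cable (720 ft): about 219.5 m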
References
Citations
. Also "fathom", from the same work (pp. 88–89, retrieved 12 January 2017).
Various subpages within the ropery section.
Nautical terminology
Units of length
Customary units of measurement in the United States | Cable length | [
"Mathematics"
] | 331 | [
"Quantity",
"Units of measurement",
"Units of length"
] |
178,713 | https://en.wikipedia.org/wiki/Light-second | The light-second is a unit of length useful in astronomy, telecommunications and relativistic physics. It is defined as the distance that light travels in free space in one second, and is equal to exactly 299,792,458 metres (approximately 983,571,056 feet, or 186,282 miles).
Just as the second forms the basis for other units of time, the light-second can form the basis for other units of length, ranging from the light-nanosecond (299.8 mm, or just under one international foot) to the light-minute, light-hour and light-day, which are sometimes used in popular science publications. The more commonly used light-year is also currently defined to be equal to precisely 9,460,730,472,580.8 km, since the definition of a year is based on a Julian year (not the Gregorian year) of exactly 365.25 days, each of exactly 86,400 seconds.
Use in telecommunications
Communications signals on Earth rarely travel at precisely the speed of light in free space. Distances in fractions of a light-second are useful for planning telecommunications networks.
One light-nanosecond is almost 300 millimetres (299.8 mm, 5 mm less than one foot), which limits the speed of data transfer between different parts of a computer.
One light-microsecond is about 300 metres.
The mean distance, over land, between opposite sides of the Earth is 66.8 light-milliseconds.
Communications satellites are typically 1.337 light-milliseconds (low Earth orbit) to 119.4 light-milliseconds (geostationary orbit) from the surface of the Earth. Hence there will always be a delay of at least a quarter of a second in a communication via geostationary satellite (119.4 ms times 2); this delay is just perceptible in a transoceanic telephone conversation routed by satellite. The reply is delayed by a further quarter of a second, which is clearly noticeable during satellite-linked interviews or discussions on TV.
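A small sketch of the delay arithmetic described above. It assumes signals travel at the free-space speed of light (the text notes that real links are somewhat slower) and uses a nominal geostationary altitude of about 35,786 km, a figure not stated in the article itself.

C = 299_792_458.0  # speed of light in free space, m/s

def one_way_delay_ms(distance_m):
    return distance_m / C * 1000.0

print(one_way_delay_ms(0.3))                 # about 1 ns: one light-nanosecond is roughly 30 cm
print(one_way_delay_ms(35_786_000.0))        # about 119.4 ms up to a geostationary satellite
print(2 * one_way_delay_ms(35_786_000.0))    # about 238.7 ms ground-to-ground, roughly a quarter of a second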
Use in astronomy
The light-second is a convenient unit for measuring distances in the inner Solar System, since it corresponds very closely to the radiometric data used to determine them. (The match is not exact for an Earth-based observer because of a very small correction for the effects of relativity.) The value of the astronomical unit (roughly the distance between Earth and the Sun) in light-seconds is a fundamental measurement for the calculation of modern ephemerides (tables of planetary positions). It is usually quoted as "light-time for unit distance" in tables of astronomical constants, and its currently accepted value is approximately 499.004784 s.
The mean diameter of Earth is about 0.0425 light-seconds.
The average distance between Earth and the Moon (the lunar distance) is about 1.282 light-seconds.
The diameter of the Sun is about 4.643 light-seconds.
The average distance between Earth and the Sun (the astronomical unit) is 499.0 light-seconds.
Multiples of the light-second can be defined, although apart from the light-year, they are more used in popular science publications than in research works. For example:
A light-minute is 60 light-seconds, and so the average distance between Earth and the Sun is 8.317 light-minutes.
The average distance between Pluto and the Sun (34.72 AU) is 4.81 light-hours.
Humanity's most distant artificial object, Voyager 1, has an interstellar velocity of 3.57 AU per year, or 29.7 light-minutes per year. As of 2023 the probe, launched in 1977, is over 22 light-hours from Earth and the Sun, and is expected to reach a distance of one light-day around November 2026 – February 2027.
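A quick arithmetic check (a sketch) of some of the light-time figures above. The astronomical unit value used is the exact IAU 2012 definition and the lunar distance is a rounded average; neither number appears explicitly in the text.

C = 299_792_458.0        # speed of light, m/s
AU = 149_597_870_700.0   # astronomical unit in metres (exact by definition)
MOON = 384_400_000.0     # average Earth-Moon distance in metres (rounded)

print(AU / C)            # about 499.005 light-seconds
print(AU / C / 60.0)     # about 8.317 light-minutes
print(MOON / C)          # about 1.282 light-seconds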
See also
100 megametres
Geometrized unit system
Light-year
References
Units of length
Units of measurement in astronomy | Light-second | [
"Astronomy",
"Mathematics"
] | 777 | [
"Quantity",
"Units of measurement in astronomy",
"Units of measurement",
"Units of length"
] |
178,789 | https://en.wikipedia.org/wiki/Experimental%20Aircraft%20Association | The Experimental Aircraft Association (EAA) is an international organization of aviation enthusiasts based in Oshkosh, Wisconsin. Since its inception, it has grown internationally with over 300,000 members and nearly 1,000 chapters worldwide. It hosts the largest aviation gathering of its kind in the world, EAA AirVenture Oshkosh.
History
The EAA was founded in 1953 by veteran aviator Paul Poberezny along with other aviation enthusiasts. The organization began as more or less a flying club. Poberezny explains the nature of the organization's name, "Because the planes we flew were modified or built from scratch, they were required to display an EXPERIMENTAL placard where it could be seen on the door or cockpit, so it was quite natural that we call ourselves the "Experimental Aircraft Association". The EAA was incorporated in Wisconsin on 22 March 1955. Homebuilding is still a large part of EAA, but the organization has grown over the years to include almost every aspect of aviation and aeronautics.
EAA's first location was in the basement of Poberezny's Hales Corners, Wisconsin home. In the early 1960s, the association's first headquarters was built in the Milwaukee suburb of Franklin. That was the headquarters for the organization until 1983, when EAA combined its headquarters and fly-in site in Oshkosh, Wisconsin. The EAA Aviation Center also includes the EAA Aviation Museum, with more than 200 aircraft, approximately 130 of which are on display at any given time.
In 1953, the Experimental Aircraft Association released a two-page newsletter named The Experimenter. The newsletter was written and published by founding members Paul and Audrey Poberezny along with other volunteers. The newsletter transitioned to a magazine format and was renamed Sport Aviation and became a membership benefit. The Experimenter name lives on, however, in an online magazine specifically for amateur-built and light plane enthusiasts that debuted in 2012. It was folded into the monthly Sport Aviation print magazine in 2015.
In 2010, the United States' national aeromodeling organization, the Academy of Model Aeronautics, was involved in negotiations with the EAA that resulted in a "memorandum of understanding" intended to encourage collaboration between the two American-based sport aviation organizations in developing, in the words of the AMA's then-President Dave Mathewson, "the creation of new concepts that will promote aviation, both full-scale and modeling, as a perfect family recreational and educational activity". The link with the AMA has strengthened further as the FAA, now tasked with regulating unmanned aircraft systems (UAS), has turned its attention to model aviation. In late November 2019, the EAA stated that "We see model aviation as an important pathway to manned flight," adding that "Our goal in this risk assessment process is to represent the safety concerns of our members while allowing the highest degree of freedom for legacy model aircraft, which have flown alongside us in the airspace for decades."
In 2015, the EAA and EAA Young Eagles were inducted into the International Air & Space Hall of Fame at the San Diego Air & Space Museum.
Museum
First opened in 1983 and located adjacent to EAA's headquarters in Oshkosh, Wisconsin, the EAA Aviation Museum is an extensive collection of aircraft and aviation displays. The Museum is home to EAA's collection of more than 200 aircraft, of which more than 90 are on display inside the museum at any time. The museum's Pioneer Airport is a re-creation of a vintage aerodrome, with more than 40 additional airplanes on display. From May through mid-October (daily Memorial Day through Labor Day), flights are offered in vintage aircraft.
Programs and activities
Technical Counselor program
To help ensure that all amateur-built aircraft are well-constructed, safe aircraft, the EAA organizes a group of volunteers, known as Technical Counselors, who will visit the construction project to identify any areas of concern. Technical Counselors are EAA members who volunteer their time and who have met at least one of the following criteria:
Have built an experimental category aircraft
Have restored an antique/classic aircraft
Hold an A&P, IA, DAR, DER or Aerospace Engineer rating in the United States, an equivalent international rating or have the qualifications for those ratings.
There is no charge for this on-site review. The program is strictly voluntary. The recommendations of the Technical Counselor are advisory only. The EAA recommends a minimum of three Technical Counselor visits over the course of construction.
Flight Advisor program
The Flight Advisor Program is designed to increase homebuilt aircraft safety by developing a corps of volunteers who have demonstrated expertise in specific areas of flying and making them available to EAA members who may be preparing to fly an unfamiliar aircraft. A Flight Advisor helps the pilot conduct a self-evaluation as well as evaluate the flying characteristics of the aircraft. Pilots use that evaluation to decide whether they are capable of flying that airplane. If not capable, the Flight Advisor explains where and how the pilot can get the proper instruction, or alternatively find someone to make the initial flights.
Under the EAA Flight Advisor Program, the Advisor does not fly or decide whether or not the pilot is capable of flying the airplane to be tested. The Advisor provides the pilot with the pros and cons as they relate to this specific combination of pilot and airplane. The pilot makes the final decision on how to proceed with the flight testing program.
EAA AirVenture Oshkosh
Each summer EAA presents the largest annual general aviation event in the world, EAA AirVenture Oshkosh, also commonly known as the "Oshkosh Airshow". During the event, the city's airport, Wittman Regional Airport, named after Steve Wittman, is the busiest airport in the world (in terms of traffic movements). The week-long event annually attracts around 10,000-12,000 planes and a total attendance of more than 500,000. The event also attracts more than 800 exhibitors, hosts nearly 1,000 forums, seminars and workshops, and welcomes more than 700 journalists each year.
The annual fly-in was first held in 1953 at what is now Timmerman Field in Milwaukee. In 1959, the growing event moved to the Rockford, Illinois airport. Attendance at the fly-in continued to grow until the Rockford airport (now Chicago Rockford International Airport) was too small to accommodate the crowds, and so it was moved to Oshkosh in 1970. A study conducted in 2008 by the University of Wisconsin–Oshkosh determined that the 500,000 annual fly-in attendance generates $110 million of tourist income for the three counties surrounding the airport. In 2017, that economic impact was estimated at over $170 million and a total attendance of nearly 600,000 people.
Young Eagles
The EAA also sponsors the Young Eagles program, which was started in 1992 by Tom Poberezny and others, with the aim of giving one million children an airplane ride by 17 December 2003, the Centennial of Flight (see Wright brothers and Wright Flyer). The program reached that goal, and has continued, with more than 2 million young people flown as of July 2016 and more introduced to and educated around the benefits of general aviation.
The Young Eagles program has been overseen by a series of nationally famous chairmen:
Cliff Robertson - founding chairman, film and stage actor (1992–1994)
Chuck Yeager - USAF General and first man to fly faster than the speed of sound (1994–2004)
Harrison Ford - film and stage actor (2004–2009)
Chesley Sullenberger and Jeffrey B. Skiles - pilots of US Airways Flight 1549 (2009–2013)
Sean D. Tucker - aerobatic pilot (2013–present)
Jimmy Graham - NFL tight end (2018–present)
Sun 'N Fun Airshow
The other major yearly airshow attended by EAA members and staff is Sun 'n Fun, held every April in Lakeland, Florida. Sun 'n Fun has been an independent organization from the EAA since its first show in 1975, although the event has always involved significant EAA participation.
The two organizations signed an agreement in January 1989 recognizing their independence. On 30 March 2005 Sun 'n Fun issued a press release affirming the independence of the two organizations but assuring the aviation public that they would continue to work together. As such Sun 'n Fun remains a show with participation from EAA chapters and a presence from the national EAA staff, but it is not an EAA event.
Organizational structure
The organization is overseen by a chairman, a president, a CEO and a board of directors. Paul Poberezny assumed the duties of president and CEO at the 1953 founding. In 1989 he assumed the (newly created) position of chairman of the board, and his son, aerobatic pilot Tom Poberezny, became president and CEO. In March 2009, Paul Poberezny resigned, and the board voted to elevate Tom Poberezny to chairman of the board. At AirVenture 2010, it was announced that businessman Rod Hightower would succeed Tom Poberezny as president of the organization, effective September 2010.
Hightower resigned on 22 October 2012 "effective immediately", directly after a board of directors meeting during which former Cessna chairman, president and CEO Jack J. Pelton was elected chairman. Hightower indicated he was resigning to spend more time with his family and would not relocate from St. Louis to Oshkosh. Pelton was named acting CEO and will oversee the hiring process for Hightower's permanent replacement. In response to questions about Hightower's resignation, Mac McClellan, EAA vice president of publications, stated that it was due to Hightower failing to relocate himself from his home in St. Louis to EAA headquarters in Oshkosh, as the board had expected him to. McClellan said, "I know there's all kinds of complaints, but that's not it. [The residency] was the unsolvable requirement. The board sees the president/CEO living in the Fox Valley as essential to the mission."
In December 2022, the EAA Board of Directors welcomed Shelly deZevallos, Ed.D., as a Class III Director.
Local chapters may be formed whenever ten or more EAA members reside in a given area.
Chapters are encouraged to meet monthly. The first chapter meeting occurred at Flabob Airport in California, with noted aircraft designer and builder Ray Stits presiding.
EAA Freedom of Flight Award
In addition to the Dr. August Raspet Memorial Award, EAA also presents the Freedom of Flight Award, which the organization considers its highest honor. The award is bestowed annually to recognize those whose contributions to aviation closely mirror the integrity, entrepreneurship, and innovativeness of EAA members.
List of Recipients
2024 – Pete Bunce
2023 – Jim Irwin and Aircraft Spruce
2022 – James Inhofe
2021 – Jerry Gregoire
2020 – (no recipient, AirVenture canceled)
2019 – the Brown family and Hartzell Propeller
2018 – Andrew Barker and Robert Hamilton
2017 – Sebastien Heintz
2016 – Mark Van Tine
2015 – Chesley “Sully” Sullenberger and Jeff Skiles
2014 – Audrey Poberezny
2013 – John Monnett
2012 – Charles McGee
2011 – Bob Hoover
2010 – Sean D. Tucker
2009 – Harrison Ford
2008 – Jack J. Pelton
2007 – Dale and Alan Klapmeier
2006 – Scott Crossfield
2005 – Mike Melvill
2004 – Dick VanGrunsven
2003 – Jeanie MacPherson
2002 – Steven J. Brown
2001 – Dick Rutan
2000 – Dick Hansen
1999 – Dan Goldin
1998 – Ed Stimpson
1997 – Sam Johnson
1996 – Burt Rutan
1995 – (no recipient)
1994 – Barron Hilton
1993 – John Denver
1992 – James C. Ray
1991 – Ray Scholler
1990 – Paul Poberezny
1989 – Robert "Hoot" Gibson
1988 – Neil Armstrong
1987 – Cliff Robertson
1986 – Steve Wittman
Aircraft
EAA Biplane
EAA Spirit of St. Louis replica
EAA Wright Flyer Model B replica
See also
Aircraft Kit Industry Association
Aircraft Owners and Pilots Association
Tannkosh
References
Further reading
Povletich, William. "The Little Fly-in That Could: How Oshkosh Landed the Largest Annual Aviation Event in the World". Wisconsin Magazine of History, vol. 105, no. 4 (Summer 2022), pp. 24-37.
External links
EAA AirVenture Oshkosh
EAA Aviation Museum
EAA Young Eagles program
Aviation organizations based in the United States
1953 establishments in Wisconsin
Organizations based in Wisconsin
Organizations established in 1953
Non-profit organizations based in Wisconsin | Experimental Aircraft Association | [
"Engineering"
] | 2,610 | [
"Experimental Aircraft Association",
"Aerospace engineering organizations"
] |
178,816 | https://en.wikipedia.org/wiki/Tiwanaku | Tiwanaku ( or ) is a Pre-Columbian archaeological site in western Bolivia, near Lake Titicaca, about 70 kilometers from La Paz, and it is one of the largest sites in South America. Surface remains currently cover around 4 square kilometers and include decorated ceramics, monumental structures, and megalithic blocks. It has been conservatively estimated that the site was inhabited by 10,000 to 20,000 people in AD 800.
The site was first recorded in written history in 1549 by Spanish conquistador Pedro Cieza de León while searching for the southern Inca capital of Qullasuyu.
Jesuit chronicler of Peru Bernabé Cobo reported that Tiwanaku's name once was taypiqala, which is Aymara meaning "stone in the center", alluding to the belief that it lay at the center of the world. The name by which Tiwanaku was known to its inhabitants may have been lost as they had no written language. Heggarty and Beresford-Jones suggest that the Puquina language is most likely to have been the language of Tiwanaku.
Site history
The dating of the site has been significantly refined over the last century. From 1910 to 1945, Arthur Posnansky maintained that the site was 11,000–17,000 years old based on comparisons to geological eras and archaeoastronomy. Beginning in the 1970s, Carlos Ponce Sanginés proposed the site was first occupied around 1580 BC, the site's oldest radiocarbon date. This date is still seen in some publications and museums in Bolivia. Since the 1980s, researchers have recognized this date as unreliable, leading to the consensus that the site is no older than 200 or 300 BC. More recently, a statistical assessment of reliable radiocarbon dates estimates that the site was founded around AD 110 (50–170, 68% probability), a date supported by the lack of ceramic styles from earlier periods.
Tiwanaku began its steady growth in the early centuries of the first millennium AD. From approximately 375 to 700 AD, this Andean city grew to significance. At its height, the city of Tiwanaku spanned an area of roughly 4 square kilometers (1.5 square miles) and had a population greater than 10,000 individuals. The growth of the city was due to its complex agropastoral economy, supported by trade.
The site appears to have collapsed around 1000 AD; however, the reasons for this are still debated. Recent studies by geologist Elliott Arnold of the University of Pittsburgh have shown evidence of greater aridity in the region around the time of collapse. A drought in the region would have affected local systems of agriculture and likely played a role in the collapse of Tiwanaku.
Relationships
The people of Tiwanaku held a tight relationship with the Wari culture. The Wari and Tiwanaku civilizations shared the same iconography, referred to as the "Southern Andean Iconographic Series". The relationship between the two civilizations is presumed to have been based on trade or military contact. The Wari were not the only other civilization that Tiwanaku could have had contact with: Inca cities also contained types of architecture and infrastructure similar to those seen in Tiwanaku. From this it can be inferred that the Inca took some inspiration from the city of Tiwanaku and other early civilizations in the Andean basin.
Structures
The structures that have been excavated by researchers at Tiwanaku include the terraced platform mound Akapana, Akapana East, and Pumapunku stepped platforms, the Kalasasaya, the Kantatallita, the Kheri Kala, and Putuni enclosures, and the Semi-Subterranean Temple.
The Akapana is a "half Andean Cross"-shaped structure that is 257 m wide, 197 m broad at its maximum, and 16.5 m tall. At its center appears to have been a sunken court. This was nearly destroyed by a deep looters excavation that extends from the center of this structure to its eastern side. Material from the looter's excavation was dumped off the eastern side of the Akapana. A staircase is present on its western side. Possible residential complexes might have occupied both the northeast and southeast corners of this structure.
Originally, the Akapana was thought to have been developed from a modified hill. Twenty-first-century studies have shown that it is an entirely man-made earthen mound, faced with a mixture of large and small stone blocks. The dirt comprising Akapana appears to have been excavated from the "moat" that surrounds the site. The largest stone block within the Akapana, made of andesite, is estimated to weigh 65.7 tons. Tenon stone blocks in the form of puma and human heads stud the upper terraces.
The Akapana East was built on the eastern side of early Tiwanaku. Later it was considered a boundary between the ceremonial center and the urban area. It was made of a thick, prepared floor of sand and clay, which supported a group of buildings. Yellow and red clay was used in different areas for what seems like aesthetic purposes. It was swept clean of all domestic refuse, signaling its great importance to the culture.
The Pumapunku is a man-made platform built on an east-west axis like the Akapana. It is a T-shaped, terraced earthen platform mound faced with megalithic blocks. It is 167.36 m wide along its north-south axis and 116.7 m broad along its east-west axis and is 5 m tall. Identical 20-meter-wide projections extend 27.6 meters north and south from the northeast and southeast corners of the Pumapunku. Walled and unwalled courts and an esplanade are associated with this structure.
A prominent feature of the Pumapunku is a large stone terrace; it is 6.75 by 38.72 meters in dimension and paved with large stone blocks. It is called the "Plataforma Lítica" and contains the largest stone block found in the Tiwanaku site. According to Ponce Sangines, the block is estimated to weigh 131 metric tonnes. The second-largest stone block found within the Pumapunku is estimated to be 85 metric tonnes.
Scattered around the site of the Puma Punku are various types of cut stones. Due to the complexity of the stonework, the site is often cited by conspiracy theorists as evidence of ancient alien intervention. These claims are entirely unsubstantiated.
The Kalasasaya is a large courtyard more than 300 feet long, outlined by a high gateway. It is located to the north of the Akapana and west of the Semi-Subterranean Temple. Within the courtyard is where explorers found the Gateway of the Sun. Since the late 20th century, researchers have theorized that this was not the gateway's original location.
Near the courtyard is the Semi-Subterranean Temple; a square sunken courtyard that is unique for its north-south rather than east-west axis. The walls are covered with tenon heads of many different styles, suggesting that the structure was reused for different purposes over time. It was built with walls of sandstone pillars and smaller blocks of Ashlar masonry. The largest stone block in the Kalasasaya is estimated to weigh 26.95 metric tons.
Within many of the site's structures are impressive gateways; the ones of monumental scale are placed on artificial mounds, platforms, or sunken courts. One gateway shows the iconography of a front-facing figure in Staff God pose. This iconography also is used on some oversized vessels, indicating an importance to the culture. The iconography of the Gateway of the Sun called Southern Andean Iconographic Series can be seen on several stone sculptures, Qirus, snuff trays and other Tiwanaku artifacts.
The unique carvings on the top of the Gate of the sun depict animals and other beings. Some have claimed that the symbolism represents a calendar system unique to the people of Tiwanaku, although there is no definitive evidence that this theory is correct.
The Gateway of the Sun and others located at Pumapunku are not complete. They are missing part of a typical recessed frame known as a chambranle, which typically have sockets for clamps to support later additions. These architectural examples, as well as the Akapana Gate, have unique detail and demonstrate high skill in stone-cutting. This reveals a knowledge of descriptive geometry. The regularity of elements suggests they are part of a system of proportions.
Many theories for the skill of Tiwanaku's architectural construction have been proposed. One is that they used a luk’ a, which is a standard measurement of about sixty centimeters. Another argument is for the Pythagorean Ratio. This idea calls for right triangles at a ratio of five to four to three used in the gateways to measure all parts. Lastly, Protzen and Nair argue that Tiwanaku had a system set for individual elements dependent on context and composition. This is shown in the construction of similar gateways ranging from diminutive to monumental size, proving that scaling factors did not affect proportion. With each added element, the individual pieces were shifted to fit together.
As the population grew, occupational niches developed, and people began to specialize in certain skills. There was an increase in artisans, who worked in pottery, jewelry, and textiles. Like the later Inca, the Tiwanaku had few commercial or market institutions. Instead, the culture relied on elite redistribution. That is, the elites of the state controlled essentially all economic output but were expected to provide each commoner with all the resources needed to perform his or her function. Selected occupations include agriculturists, herders, pastoralists, etc. Such separation of occupations was accompanied by hierarchical stratification within the state.
Some authors believe that the elites of Tiwanaku lived inside four walls that were surrounded by a moat. This theory is called "Tiwanaku moat theory". This moat, some believe, was to create the image of a sacred island. Inside the walls were many images devoted to human origin, which only the elites would see. Commoners may have entered this structure only for ceremonial purposes since it was home to the holiest of shrines.
Cosmology
In many Andean cultures, mountains are venerated and may be considered sacred objects. The site of Tiwanaku is located in the valley between two sacred mountains, Pukara and Chuqi Q’awa. At such temples in ancient times, ceremonies were conducted to honor and pay gratitude to the gods and spirits. They were places of worship and rituals that helped unify Andean peoples through shared symbols and pilgrimage destinations.
Tiwanaku became a center of pre-Columbian religious ceremonies for both the general public and elites. For example, human sacrifice was used in several pre-Columbian civilizations to appease a god in exchange for good fortune. Excavations of the Akapana at Tiwanaku revealed the remains of sacrificial dedications of humans and camelids. Researchers speculate that the Akapana may also have been used as an astronomical observatory. It was constructed so that it was aligned with the peak of Quimsachata, providing a view of the rotation of the Milky Way from the southern pole. Other structures like Kalasasaya are positioned to provide optimal views of the sunrise on the Equinox, Summer Solstice, and Winter Solstice. Although the symbolic and functional value of these monuments can only be speculated upon, the Tiwanaku were able to study and interpret the positions of the sun, moon, Milky Way and other celestial bodies well enough to give them a significant role in their architecture.
Aymara legends place Tiwanaku at the center of the universe, probably because of the importance of its geographical location. The Tiwanaku were highly aware of their natural surroundings and would use them and their understanding of astronomy as reference points in their architectural plans. The most significant landmarks in Tiwanaku are the mountains and Lake Titicaca. The lake level of Lake Titicaca has fluctuated significantly over time. The spiritual importance and location of the lake contributed to the religious significance of Tiwanaku. In the Tiwanaku worldview, Lake Titicaca is the spiritual birthplace of their cosmic beliefs. According to Incan mythology, Lake Titicaca is the birthplace of Viracocha, who was responsible for creating the sun, moon, people, and the cosmos. In the Kalasasaya at Tiwanaku, carved atop a monolith known as the Gate of the Sun, is a front-facing figure holding a spear-thrower and snuff. Some speculate that this is a representation of Viracocha. However, it is also possible that this figure represents a deity that the Aymara refer to as “Tunuupa” who, like Viracocha, is associated with legends of creation and destruction.
The Aymara, who are thought to be descendants of the Tiwanaku, have a complex belief system similar to the cosmology of several other Andean civilizations. They believe in the existence of three spaces: Arajpacha, the upper world; Akapacha, the middle or inner world; and Manqhaoacha, the lower world. Often associated with the cosmos and Milky Way, the upper world is considered to be where celestial beings live. The middle world is where all living things are, and the lower world is where life itself is inverted.
Archaeology
As the site has suffered from looting and amateur excavations since shortly after Tiwanaku's fall, archeologists must attempt to interpret it with the understanding that materials have been jumbled and destroyed. This destruction continued during the Spanish conquest and colonial period, and during the 19th century and the early 20th century. Other damage was committed by people quarrying stone for building and railroad construction, and by military personnel using the site for target practice.
No standing buildings have survived at the modern site. Only public, non-domestic foundations remain, with poorly reconstructed walls. The ashlar blocks used in many of these structures were mass-produced in similar styles so that they could possibly be used for multiple purposes. Throughout the period of the site, certain buildings changed purposes, causing a mix of artifacts found today.
Detailed study of Tiwanaku began on a small scale in the mid-nineteenth century. In the 1860s, Ephraim George Squier visited the ruins and later published maps and sketches completed during his visit. German geologist Alphons Stübel spent nine days in Tiwanaku in 1876, creating a map of the site based on careful measurements. He also made sketches and created paper impressions of carvings and other architectural features. A book containing major photographic documentation was published in 1892 by engineer Georg von Grumbkow. With commentary by archaeologist Max Uhle, it was the first in-depth scientific account of the ruins.
Von Grumbkow had first visited Tiwanaku between the end of 1876 and the beginning of 1877, when he accompanied, as a photographer, the expedition of French adventurer Théodore Ber, financed by American businessman Henry Meiggs, in exchange for Ber's promise to donate the artifacts he found, on behalf of Meiggs, to Washington's Smithsonian Institution and the American Museum of Natural History in New York. Ber's expedition was cut short by the violent hostility of the local population, instigated by the Catholic parish priest, but von Grumbkow's early pictures survive.
Pictures of archaeological excavations in 1903
Contemporary excavation and restoration
In the 1960s, the Bolivian government initiated an effort to restore the site and reconstruct part of it. The walls of the Kalasasaya are almost all reconstructed. However, the reconstruction was not well grounded in evidence, and its stonework is of lower quality than the original work at Tiwanaku.
Early visitors compared Kalasasaya to England's Stonehenge. Ephraim Squier called it "American Stonehenge". Before the reconstruction, it had more of a "Stonehenge"-like appearance, as the filler stones between the large stone pillars had all been looted.
As noted, the Gateway of the Sun, now in the Kalasasaya, is believed to have been moved from its original location.
Modern, academically sound archaeological excavations were performed from 1978 through the 1990s by University of Chicago anthropologist Alan Kolata and his Bolivian counterpart, Oswaldo Rivera. Among their contributions are the rediscovery of the suka kollus, accurate dating of the civilization's growth and influence, and evidence for a drought-based collapse of the Tiwanaku civilization.
Archaeologists such as Paul Goldstein have argued that the Tiwanaku empire ranged outside of the altiplano area and into the Moquegua Valley in Peru. Excavations at Omo settlements show signs of similar architecture characteristic of Tiwanaku, such as a temple and terraced mound. Evidence of similar types of cranial vault modification in burials between the Omo site and the main site of Tiwanaku is also being used for this argument.
Today Tiwanaku has been designated as a UNESCO World Heritage Site, administered by the Bolivian government.
Recently, the Department of Archaeology of Bolivia (DINAR, directed by Javier Escalante) has been conducting excavations on the terraced platform mound Akapana. The Proyecto Arqueologico Pumapunku-Akapana (Pumapunku-Akapana Archaeological Project, PAPA) run by the University of Pennsylvania, has been excavating in the area surrounding the terraced platform mound for the past few years, and also conducting Ground Penetrating Radar surveys of the area.
In former years, an archaeological field school offered through Harvard's Summer School Program, conducted in the residential area outside the monumental core, provoked controversy amongst local archaeologists. The program was directed by Gary Urton of Harvard, an expert on quipus, and Alexei Vranich of the University of Pennsylvania. The controversy was over allowing a team of untrained students to work on the site, even under professional supervision; critics held that the site was so important that only certified professional archaeologists with documented funding should be allowed access. The controversy was charged with nationalistic and political undertones. The Harvard field school lasted for three years, beginning in 2004 and ending in 2007. The project was not renewed in subsequent years, nor was permission sought to do so.
In 2009 state-sponsored restoration work on Akapana was halted due to a complaint from UNESCO. The restoration had consisted of facing the platform mound with adobe, although researchers had not established this as appropriate.
In 2013, marine archaeologists exploring Lake Titicaca's Khoa reef discovered an ancient ceremonial site and lifted artifacts such as lapis lazuli and ceramic figurines, incense burners and a ceremonial medallion from the lake floor. The artifacts are representative of the lavishness of the ceremonies and the Tiwanaku culture.
When a topographical map of the site was created in 2016 by the use of a drone, a "set of hitherto unknown structures" was revealed. These structures spanned over 411 hectares, and included a stone temple and about one hundred circular or rectangular structures of vast dimensions, which were possibly domestic units.
Aerial surveillance
Between 2005 and 2007, various types of aerial surveillance methods were used by UNESCO to create an aerial picture of the site. Lidar, aerial photography, drones, and terrestrial laser scanning were all used in this process. Results of this research include topographical maps that show the principal structures at the site, along with mapping of multiple structures in the Mollo Kuntu area. Over 300 million data points were placed by these methods and have helped redefine main structures that have not been fully excavated, such as the Puma Punku.
Important authors
Alan Kolata of the University of Chicago conducted research at Tiwanaku in the late 20th century, from which he described the city, its structure, and its culture in his book The Tiwanaku. He later published Valley of The Spirits, which described further aspects of Tiwanaku culture such as astrology and mythology.
John Wayne Janusek of Vanderbilt University spent time in the late 1900s as well at the site of Tiwanaku recording findings of the excavations going on. In 2008 he published Ancient Tiwanaku which described his findings on the architecture, agriculture and other aspects of Tiwanaku life.
Jean-Pierre Protzen was an architecture professor of the University of California at Berkeley and spent much of his life studying the architecture of Tiwanaku. In 2013, he published The Stones of Tiahuanaco which gives great descriptions of the architecture and stonework seen at Tiwanaku. His work has played a huge role in creating potential reconstructions of what many of the structures look like, especially the puma punku.
See also
Arthur Posnansky
Kalasasaya
Kimsa Chata
Las Ánimas complex
List of megalithic sites
List of World Heritage Sites in South America
Qhunqhu Wankani
Tiwanaku Empire
Wari culture
Wari Empire
References
Bibliography
Bermann, Marc, Lukurmata, Princeton University Press (1994).
Bruhns, Karen Olsen, Ancient South America, Cambridge University Press, Cambridge, UK, c. 1994.
Arthur Posnansky, Tihuanacu: cuna del hombre americano (bilingual Spanish–English edition), New York, 1945.
Goldstein, Paul, "Tiwanaku Temples and State Expansion: A Tiwanaku Sunken-Court Temple in Moduegua, Peru", Latin American Antiquity, Vol. 4, No. 1 (March 1993), pp. 22–47, Society for American Archaeology.
Hoshower, Lisa M., Jane E. Buikstra, Paul S. Goldstein, and Ann D. Webster, "Artificial Cranial Deformation at the Omo M10 Site: A Tiwanaku Complex from the Moquegua Valley, Peru", Latin American Antiquity, Vol. 6, No. 2 (June, 1995) pp. 145–64, Society for American Archaeology.
Janusek, John Wayne, Ancient Tiwanaku, Cambridge University Press (2008).
Kolata, Alan L., "The Agricultural Foundations of the Tiwanaku State: A View from the Heartland", American Antiquity, Vol. 51, No. 4 (October 1986), pp. 748–762, Society for American Archaeology.
Protzen, Jean-Pierre and Stella E. Nair, "On Reconstructing Tiwanaku Architecture", The Journal of the Society of Architectural Historians, Vol. 59, No. 3 (September 2000), pp. 358–71, Society of Architectural Historians.
Reinhard, Johan, "Chavin and Tiahuanaco: A New Look at Two Andean Ceremonial Centers." National Geographic Research 1(3): 395–422, 1985.
External links
Short BBC documentary on Tiwanaku
1st-millennium BC establishments
Archaeological sites in Bolivia
World Heritage Sites in Bolivia
Former populated places in Bolivia
Buildings and structures in La Paz Department (Bolivia)
Tourist attractions in La Paz Department (Bolivia)
Prehistory of Bolivia
Archaeoastronomy
Tiwanaku culture | Tiwanaku | [
"Astronomy"
] | 4,748 | [
"Archaeoastronomy",
"Astronomical sub-disciplines"
] |
178,870 | https://en.wikipedia.org/wiki/Atavism | In biology, an atavism is a modification of a biological structure whereby an ancestral genetic trait reappears after having been lost through evolutionary change in previous generations. Atavisms can occur in several ways, one of which is when genes for previously existing phenotypic features are preserved in DNA, and these become expressed through a mutation that either knocks out the dominant genes for the new traits or makes the old traits dominate the new one. A number of traits can vary as a result of shortening of the fetal development of a trait (neoteny) or by prolongation of the same. In such a case, a shift in the time a trait is allowed to develop before it is fixed can bring forth an ancestral phenotype. Atavisms are often seen as evidence of evolution.
In social sciences, atavism is the tendency of reversion: for example, people in the modern era reverting to the ways of thinking and acting of a former time.
The word atavism is derived from the Latin atavus—a great-great-great-grandfather or, more generally, an ancestor.
Biology
Traits that have disappeared phenotypically over the course of evolution do not necessarily disappear from an organism's DNA. The gene sequence often remains, but is inactive. Such an unused gene may remain in the genome for many generations. As long as the gene remains intact, a fault in the genetic control suppressing the gene can lead to it being expressed again. Sometimes, the expression of dormant genes can be induced by artificial stimulation.
Atavisms have been observed in humans, such as with infants born with vestigial tails (called a "coccygeal process", "coccygeal projection", or "caudal appendage"). Atavism can also be seen in humans who possess large teeth, like those of other primates. In addition, a case of "snake heart", the presence of "coronary circulation and myocardial architecture [that closely] resemble those of the reptilian heart", has also been reported in medical literature. Atavism has also recently been induced in avian dinosaur (bird) fetuses to express dormant ancestral non-avian dinosaur (non-bird) features, including teeth.
Other examples of observed atavisms include:
Hind limbs in cetaceans and sirenians.
Extra toes of the modern horse.
Reappearance of limbs in limbless vertebrates.
Re-evolution of sexuality from parthenogenesis in oribatid mites.
Teeth in avian dinosaurs (birds).
Dewclaws in dogs.
Reappearance of prothoracic wings in insects.
Reappearance of wings on wingless stick insects and leaf insects and earwigs.
Atavistic muscles in several birds and mammals such as the beagle and the jerboa.
Extra toes in guinea pigs.
Reemergence of sexual reproduction in the flowering plant Hieracium pilosella and the Crotoniidae family of mites.
Webbed feet in adult axolotls.
Human tails (not pseudo-tails) and supernumerary nipples in humans (and other primates).
Color blindness in humans.
Culture
Atavism is a term in Joseph Schumpeter's explanation of World War I in twentieth-century liberal Europe. He defends the liberal international relations theory that an international society built on commerce will avoid war because of war's destructiveness and comparative cost. He termed the cause of World War I "atavism", asserting that senescent governments in Europe (those of the German Empire, Russian Empire, Ottoman Empire, and Austro-Hungarian Empire) pulled liberal Europe into war, and that the liberal regimes of the other continental powers did not cause it. He used this idea to say that liberalism and commerce would continue to have a soothing effect in international relations, and that war would not arise between nations which are connected by commercial ties. This latter idea is very similar to the later Golden Arches theory.
University of London professor Guy Standing has identified three distinct sub-groups of the precariat, one of which he refers to as "atavists", who long for what they see as a lost past.
Social Darwinism
During the interval between the acceptance of evolution in the mid-1800s and the rise of the modern understanding of genetics in the early 1900s, atavism was used to account for the reappearance in an individual of a trait after several generations of absence—often called a "throw-back". The idea that atavisms could be made to accumulate by selective breeding, or breeding back, led to breeds such as Heck cattle, bred from ancient landraces with selected primitive traits in an attempt to "revive" the aurochs, an extinct species of wild cattle. The same notions of atavism were used by social Darwinists, who claimed that "inferior" races displayed atavistic traits and represented more primitive traits than other races. Both atavism and Ernst Haeckel's recapitulation theory were tied to notions of evolutionary progress, understood as development towards greater complexity and superior ability.
In addition, the concept of atavism as part of an individualistic explanation of the causes of criminal deviance was popularised by the Italian criminologist Cesare Lombroso in the 1870s. He attempted to identify physical characteristics common to criminals and labeled those he found as atavistic, 'throw-back' traits that determined 'primitive' criminal behavior. His statistical evidence and the closely related idea of eugenics have long since been abandoned by the scientific community, but the concept that physical traits may affect the likelihood of criminal or unethical behavior in a person still has some scientific support.
See also
Atavistic regression
Exaptation
Spandrel (biology)
Torna atrás
References
External links
Photograph of an additional (third) hoof of cows
Evolutionary biology
Genetics | Atavism | [
"Biology"
] | 1,237 | [
"Evolutionary biology",
"Genetics"
] |
178,937 | https://en.wikipedia.org/wiki/High%20frequency | High frequency (HF) is the ITU designation for the band of radio waves with frequency between 3 and 30 megahertz (MHz). It is also known as the decameter band or decameter wave as its wavelengths range from one to ten decameters (ten to one hundred meters). Frequencies immediately below HF are denoted medium frequency (MF), while the next band of higher frequencies is known as the very high frequency (VHF) band. The HF band is a major part of the shortwave band of frequencies, so communication at these frequencies is often called shortwave radio. Because radio waves in this band can be reflected back to Earth by the ionosphere layer in the atmosphere – a method known as "skip" or "skywave" propagation – these frequencies can be used for communication across intercontinental distances and in mountainous terrain that prevents line-of-sight communication. The band is used by international shortwave broadcasting stations (3.95–25.82 MHz), aviation communication, government time stations, weather stations, amateur radio and citizens band services, among other uses.
Propagation characteristics
The dominant means of long-distance communication in this band is skywave ("skip") propagation, in which radio waves directed at an angle into the sky refract back to Earth from layers of ionized atoms in the ionosphere. By this method HF radio waves can travel beyond the horizon, around the curve of the Earth, and can be received at intercontinental distances. However, suitability of this portion of the spectrum for such communication varies greatly with a complex combination of factors:
Sunlight/darkness at site of transmission and reception
Transmitter/receiver proximity to solar terminator
Season
Sunspot cycle
Solar activity
Polar aurora
At any point in time, for a given "skip" communication path between two points, the frequencies at which communication is possible are specified by these parameters:
Maximum usable frequency (MUF)
Lowest usable high frequency (LUF) and a
Frequency of optimum transmission (FOT)
The maximum usable frequency regularly drops below 10 MHz in darkness during the winter months, while in summer during daylight it can easily surpass 30 MHz. It depends on the angle of incidence of the waves; it is lowest when the waves are directed straight upwards, and is higher for waves launched at shallower angles. This means that at longer distances, where the waves graze the ionosphere at a very oblique angle, the MUF may be much higher. The lowest usable frequency depends on the absorption in the lower layer of the ionosphere (the D-layer). This absorption is stronger at low frequencies and is also stronger with increased solar activity (for example in daylight); total absorption often occurs at frequencies below 5 MHz during the daytime. The result of these two factors is that the usable spectrum shifts towards the lower frequencies and into the Medium Frequency (MF) range during winter nights, while on a day in full summer the higher frequencies tend to be more usable, often into the lower VHF range.
When all factors are at their optimum, worldwide communication is possible on HF. At many other times it is possible to make contact across and between continents or oceans. At worst, when a band is "dead", no communication beyond the limited groundwave paths is possible no matter what powers, antennas or other technologies are brought to bear. When a transcontinental or worldwide path is open on a particular frequency, digital, SSB and Morse code communication is possible using surprisingly low transmission powers, often of the order of milliwatts, provided suitable antennas are in use at both ends and that there is little or no artificial or natural interference. On such an open band, interference originating over a wide area affects many potential users. These issues are significant to military, safety and amateur radio users of the HF bands.
There is some propagation by ground waves, the main propagation mode in the lower bands, but transmission distance decreases with frequency due to greater absorption in the earth. At the top end of the band ground wave transmission distances are limited to 10-20 miles. Short range communication can occur by a combination of line-of-sight (LOS), ground bounce, and ground wave paths, but multipath interference can cause fading.
Uses
The main uses of the high frequency spectrum are:
Military and governmental communication systems
Aviation air-to-ground communications
Amateur radio
Shortwave international and regional broadcasting
Maritime sea-to-shore and ship-to-ship services
Over-the-horizon radar systems
Global Maritime Distress and Safety System (GMDSS) communication
Citizen's Band Radio services worldwide (generally 26-28 MHz, the higher portion of the HF band, that behaves more like low-VHF)
Coastal ocean dynamics applications radar
The high frequency band is very popular with amateur radio operators, who can take advantage of direct, long-distance (often inter-continental) communications and the "thrill factor" resulting from making contacts in variable conditions. International shortwave broadcasting utilizes this set of frequencies, as well as a seemingly declining number of "utility" users (marine, aviation, military, and diplomatic interests), who have, in recent years, been swayed over to less volatile means of communication (for example, via satellites), but may maintain HF stations after switch-over for back-up purposes.
However, the development of Automatic Link Establishment technology based on MIL-STD-188-141 for automated connectivity and frequency selection, along with the high costs of satellite usage, have led to a renaissance in HF usage in government networks. The development of higher speed modems such as those conforming to MIL-STD-188-110C which support data rates up to 120 kilobit/s has also increased the usability of HF for data communications and video transmission. Other standards development such as STANAG 5066 provides for error free data communications through the use of ARQ protocols.
Some modes of communication, such as continuous wave Morse code transmissions (especially by amateur radio operators) and single sideband voice transmissions are more common in the HF range than on other frequencies, because of their bandwidth-conserving nature, but broadband modes, such as TV transmissions, are generally prohibited by HF's relatively small chunk of electromagnetic spectrum space.
Noise, especially man-made interference from electronic devices, tends to have a great effect on the HF bands. In recent years, concerns have risen among certain users of the HF spectrum over "broadband over power lines" (BPL) Internet access, which has an almost destructive effect on HF communications. This is due to the frequencies on which BPL operates (typically corresponding with the HF band) and the tendency for the BPL signal to leak from power lines. Some BPL providers have installed notch filters to block out certain portions of the spectrum (namely the amateur radio bands), but a great amount of controversy over the deployment of this access method remains. Other electronic devices including plasma televisions can also have a detrimental effect on the HF spectrum.
In aviation, HF communication systems are required for all trans-oceanic flights. These systems incorporate frequencies down to 2 MHz to include the 2182 kHz international distress and calling channel.
The upper section of HF (26.5-30 MHz) shares many characteristics with the lower part of VHF. The parts of this section not allocated to amateur radio are used for local communications. These include CB radios around 27 MHz, studio-to-transmitter (STL) radio links, radio control devices for models and radio paging transmitters.
Some radio frequency identification (RFID) tags utilize HF. These tags are commonly known as HFID's or HighFID's (High-Frequency Identification).
Antennas
The most common antennas in this band are wire antennas such as wire dipoles or rhombic antennas; in the upper frequencies, multielement dipole antennas such as the Yagi, quad, and log-periodic antennas. Powerful shortwave broadcasting stations often use large wire curtain arrays.
Antennas for transmitting skywaves are typically made from horizontal dipoles or bottom-fed loops, both of which emit horizontally polarized waves. The preference for horizontally polarized transmission is because (approximately) only half of the signal power transmitted by an antenna travels directly into the sky; about half travels downward towards the ground and must "bounce" into the sky. For frequencies in the upper HF band, the ground is a better reflector of horizontally polarized waves, and better absorber of power from vertically polarized waves. The effect diminishes for longer wavelengths.
For receiving, random wire antennas are often used. Alternatively, the same directional antennas used for transmitting are helpful for receiving, since most noise comes from all directions, but the desired signal comes from only one direction. Long-distance (skywave) receiving antennas can generally be oriented either vertically or horizontally since refraction through the ionosphere usually scrambles signal polarization, and signals are received directly from the sky to the antenna.
The antenna should have a wide enough bandwidth to cover the desired frequency range. Broadband antennas can operate over a wider range of frequencies, while narrowband antennas are more efficient at specific frequencies.
Transmit and receive performance of an HF antenna generally improves as more of its conductive elements are exposed to the air. However, in places with a lot of radio-frequency noise, such as urban areas, the surrounding noise signals are also received, so designers either use a directional HF antenna or site the antenna in a remote area with a low HF noise floor and connect it to the HF transceiver from there.
See also
High-frequency Active Auroral Research Program
High Frequency Internet Protocol
Radio propagation
Space weather
Critical frequency
References
Further reading
Maslin, N.M. "HF Communications - A Systems Approach". Taylor & Francis Ltd, 1987.
Johnson, E.E., et al. "Advanced High-Frequency Radio Communications". Artech House, 1997.
External links
Tomislav Stimac, "Definition of frequency bands (VLF, ELF... etc.)". IK1QFK Home Page (vlf.it).
Douglas C. Smith, High Frequency Measurements Web Page; Index and Technical Tidbits. D. C. Smith Consultants, Los Gatos, CA.
High Frequency Propagation Models, its.bldrdoc.gov.
High Frequency Wave Propagation, cscamm.umd.edu.
"High frequency noise" (PDF)
"Advantages of HF Radio" Codan
Solar conditions for HF-radio
Radio spectrum
Wireless | High frequency | [
"Physics",
"Engineering"
] | 2,174 | [
"Telecommunications engineering",
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Wireless"
] |
178,952 | https://en.wikipedia.org/wiki/Kimchi | Kimchi (; , ) is a traditional Korean side dish (banchan) consisting of salted and fermented vegetables, most often napa cabbage or Korean radish. A wide selection of seasonings are used, including gochugaru (Korean chili powder), spring onions, garlic, ginger, and jeotgal (a salted seafood). Kimchi is also used in a variety of soups and stews. Kimchi is a staple food in Korean cuisine and is eaten as a side dish with almost every Korean meal.
There are hundreds of different types of kimchi made with different vegetables as the main ingredients. Traditionally, winter kimchi, called gimjang, was stored in large earthenware fermentation vessels, called onggi, in the ground to prevent freezing during the winter months and to keep it cool enough to slow down the fermentation process during summer months. The process of making kimchi was called kimjang and was a way for the whole village to participate. The vessels are also kept outdoors in special terraces called jangdokdae. In contemporary times, household kimchi refrigerators are more commonly used.
Etymology
Ji
The term ji (), which has its origins in archaic Korean dihi (), has been used to refer to kimchi since ancient times. The sound change can be roughly described as:
dihi () > di () > ji ()
The Middle Korean form dihi is found in several books from the Joseon period (1392–1897). In Modern Korean, the word remains as the suffix -ji in the standard language (as in jjanji, seokbak-ji), and as the suffix -ji as well as the noun ji in Gyeongsang and Jeolla dialects. The unpalatalized form di is preserved in P'yŏngan dialect.
Kimchi
Kimchi () is the accepted word in both North and South Korean standard languages. Earlier forms of the word include (), a Middle Korean transcription of the Sino-Korean word (literally "submerged vegetable"). appears in Sohak Eonhae, the 16th century Korean rendition of the Chinese book, Xiaoxue. Sound changes from Middle Korean to Modern Korean regarding the word can be described as:
(; ) > () > () > () > ()
The aspirated first consonant of became unaspirated in , then underwent palatalization in . The word then became with the loss of the vowel () in Korean language, then kimchi, with the depalatalized word-initial consonant. In Modern Korean, the hanja characters are pronounced chimchae (), and are not used to refer to kimchi, or anything else. The word kimchi is not considered as a Sino-Korean word. Older forms of the word are retained in many regional dialects: jimchae (Jeolla, Hamgyŏng dialects), jimchi (Chungcheong, Gangwon, Gyeonggi, Gyeongsang, Hamgyŏng, Jeolla dialects), and dimchi (P'yŏngan dialect).
The English word "kimchi" perhaps originated from kimch'i, the McCune–Reischauer transcription of the Korean word kimchi ().
History
Early history
Samguk Sagi, a historical record of the Three Kingdoms of Korea, mentions the pickle jar used to ferment vegetables, which indicates that fermented vegetables were commonly eaten during this time. Attributed with the earliest kimchi, the Goguryeo people were skilled at fermenting and widely consumed fermented food. During the Silla dynasty (57 BCE – CE 935), kimchi became prevalent as Buddhism caught on throughout the nation and fostered a vegetarian lifestyle.
The pickling of vegetables was an ideal method, prior to refrigerators, that helped to preserve the lifespan of foods. In Korea, kimchi was made during the winter by fermenting vegetables, and burying it in the ground in traditional brown ceramic pots called onggi. This labor further allowed a bonding among women within the family. A poem on Korean radish written by Yi Gyubo, a 13th-century literatus, shows that radish kimchi was common in Goryeo (918–1392).
Kimchi has been a staple in Korean culture, but historical versions were not a spicy dish. Early records of kimchi do not mention garlic or chili pepper. Chili peppers, now a standard ingredient in kimchi, had been unknown in Korea until the early seventeenth century due to its being a New World crop. Chili peppers, originally native to the Americas, were introduced to East Asia by Portuguese traders. The first mention of chili pepper is found in Jibong yuseol, an encyclopedia published in 1614. Sallim gyeongje, a 17‒18th century book on farm management, wrote on kimchi with chili peppers. However, it was not until the 19th century that the use of chili peppers in kimchi became widespread. Recipes from the early 19th century closely resemble today's kimchi.
A 1766 book, Jeungbo sallim gyeongje, reports kimchi varieties made with myriad ingredients, including chonggak-kimchi (kimchi made with chonggak radish), oi-sobagi (with cucumber), seokbak-ji (with jogi-jeot), and dongchimi. However, napa cabbage was introduced to Korea only at the end of 19th century, and whole-cabbage kimchi similar to its current form is described in Siuijeonseo, a cookbook published around that time.
Modern history
During South Korea's involvement in the Vietnam War, the industrialization and commercialization of kimchi production became increasingly important because the Korean government wanted to provide rations for its troops. The Korean government requested American help to ensure that South Korean troops, reportedly "desperate" for the food, could obtain it in the field.
In 2008, South Korean scientists created a special low-calorie, vitamin-rich "space kimchi" for Yi So-yeon, the first Korean astronaut, to take to space. It was bacteria-free, unlike normal kimchi in which bacteria are essential for fermentation. It was feared that cosmic rays might mutate the bacteria.
South Korea developed programs for adult Korean adoptees to return to South Korea and learn about what it means to be Korean. One of these programs was learning how to make kimchi.
1996 kimchi standard dispute with Japan
In 1996, Korea protested against Japanese commercial production of kimchi arguing that the Japanese-produced product (kimuchi, ) was different from kimchi. In particular, Japanese kimchi was not fermented and was more similar to asazuke. Korea lobbied for an international standard from the Codex Alimentarius, an organization associated with the World Health Organization that defines voluntary standards for food preparation for international trade purposes. In 2001, the Codex Alimentarius published a voluntary standard defining kimchi as "a fermented food that uses salted napa cabbages as its main ingredient mixed with seasonings, and goes through a lactic acid production process at a low temperature", but which neither specified a minimum amount of fermentation nor forbade the use of any additives. Following the inclusion of the kimchi standard, kimchi exports in Korea did increase, but so did the production of kimchi in China and the import of Chinese kimchi into Korea.
2010 Kimchi ingredient price crisis
Due to heavy rainfall shortening the harvesting time for cabbage and other main ingredients for kimchi in 2010, the price of kimchi ingredients and kimchi itself rose greatly. Korean and international newspapers described the rise in prices as a national crisis. Some restaurants stopped offering kimchi as a free side dish, which The New York Times compared to an American hamburger restaurant no longer offering free ketchup. In response to the kimchi price crisis, the South Korean government announced the temporary reduction of tariffs on imported cabbage to coincide with the kimjang season.
Intangible Cultural Heritage of Humanity
Kimchi-related items have been inscribed on UNESCO's Representative List of the Intangible Cultural Heritage of Humanity by both South and North Korea. This makes kimchi the second intangible heritage that was submitted by two countries, the other one being the folk song "Arirang" which was also submitted by both the Koreas. "The culture of kimjang" was the subject of the Intangible Cultural Heritage: kimchi is not registered by itself.
Submitted by South Korea (inscribed 2013)
Kimjang, the tradition of making and sharing kimchi that usually takes place in late autumn, was added to the list as "Gimjang, making and sharing kimchi in the Republic of Korea". The practice of Gimjang reaffirms Korean identity and strengthens family cooperation. Gimjang is also an important reminder for many Koreans that human communities need to live in harmony with nature.
Submitted by North Korea (inscribed 2015)
North Korean kimchi-making was inscribed on the list in December 2015 as "Tradition of kimchi-making in the Democratic People's Republic of Korea". North Korean kimchi tends to be less spicy and less red than South Korean kimchi. Seafood is used less often and less salt is added. Additional sugar is used to help with fermentation in the cold climate.
Kimchi Day
In the United States, the states of California, Virginia, Maryland and New York, as well as Washington, D.C., have issued proclamations declaring 22 November 'Kimchi Day' to recognize the importance of the dish as part of Korean culture.
2012 effective ban by China of Korean kimchi imports
Since 2012, the Chinese government has effectively banned the import of Korean kimchi through government regulations. Ignoring the standards of kimchi outlined by the Codex Alimentarius, China defined kimchi as a derivative of one of its own cuisines, called pao cai. However, because its preparation differs significantly from that of pao cai, kimchi contains far more lactic acid bacteria through its fermentation process, which exceeds China's regulations. Since 2012, commercial exports of Korean kimchi to China have effectively been zero; the only minor exceptions are small amounts shown at exhibition events held in China.
2017 boycott in China
A 2017 article in The New York Times said that anti-Korean sentiment in China had risen after South Korea's acceptance of the deployment of THAAD in South Korea. Government-run Chinese news media encouraged the boycott of South Korean goods, and some Chinese nationalists vowed to not eat kimchi. The move was criticized by other Chinese nationalists, who noted that China officially considered Koreans an integral ethnic group in the multinational state, and that kimchi is also indigenous to the Yanbian Korean Autonomous Prefecture.
2020 kimchi ISO standard dispute with China
In November 2020, the International Organization for Standardization (ISO) posted ISO 24220:2020, new regulations for the making of pao cai. The same month, BBC News reported that Chinese news organization Global Times claimed the new ISO standard was "an international standard for the kimchi industry led by China" despite the standard clearly stating "this document does not apply to kimchi". This sparked strong anger from South Korean media and people, as well as the responses from some Chinese people who argued China held the right to claim kimchi as their own.
However, later clarifications from both countries revealed that the controversy was triggered by a misunderstanding in the translation of the Chinese word pao cai. After the controversy emerged, Global Times explained it was simply a "misunderstanding in translation": they had meant to refer to Chinese pao cai, and their Chinese-language article had used the term pao cai, but their English-language version had "erroneously" translated it as "kimchi", so the dispute arose from being innocently "lost in translation". They acknowledged that kimchi and pao cai are two different foods, where "Kimchi refers to a kind of fermented cabbage dish that plays an integral role in Korean cuisine, while pàocài, or Sichuan pàocài, refers to pickled vegetables that are popular originally in Southwest China's Sichuan Province, but now in most parts of northern China." Global Times also reported that Baidu Baike, a Chinese online encyclopedia, removed the controversial phrase "Korean kimchi originated from China" after the request.
According to Sojin Lim, co-director of the Institute of Korean Studies of the University of Central Lancashire, Korean kimchi is often called pao cai in China, but China has its own Sichuanese fermented vegetable dish that it also calls pao cai. In 2021, the South Korean Ministry of Culture, Sports and Tourism subsequently presented the guidelines to set the term xīnqí as the new proper Chinese translation of kimchi, while pàocài was no longer the acceptable translation. However, CNN reported that the new Chinese translation of kimchi was unpopular with both Chinese and Korean netizens, and that some Chinese people complained that they do recognise the difference between dishes, but don't like to be told how to translate Kimchi in Chinese. There were also complaints among Koreans that Korea is appropriating their own traditional culture for the Chinese, by trying to promote a Chinese term for Kimchi which doesn't have an authentic Korean sound.
Ingredients
Kimchi varieties are determined by the main vegetable ingredients and the mix of seasoning used to flavor the kimchi.
Vegetables
Cabbages (napa cabbages, bomdong, headed cabbages) and radishes (Korean radishes, ponytail radishes, gegeol radishes, yeolmu radishes) are the most commonly used kimchi vegetables. Other kimchi vegetables include: aster, balloon flower roots, burdock roots, celery, chamnamul, cilantro, cress, crown daisy greens, cucumber, eggplant, garlic chives, garlic scapes, ginger, Korean angelica-tree shoots, Korean parsley, Korean wild chive, lotus roots, mustard greens, onions, perilla leaves, bamboo shoot, momordica charantia, pumpkins, radish greens, rapeseed leaves, scallions, seaweed, soybean sprouts, spinach, sugar beets, sweet potato vines, and tomatoes.
Seasonings
Brining salt (with a larger grain size compared to kitchen salt) is used mainly for initial salting of kimchi vegetables. Being minimally processed, it serves to help develop flavors in fermented foods. Cabbage is usually salted twice when making spicy kimchi.
Commonly used seasonings include gochugaru (chili powder), scallions, garlic, ginger, and jeotgal (salted seafood). Jeotgal can be replaced with raw seafood in colder Northern parts of the Korean peninsula. If used, milder saeu-jeot (salted shrimp) or jogi-jeot (salted croaker) is preferred and the amount of jeotgal is also reduced in Northern and Central regions. In Southern Korea, on the other hand, a generous amount of stronger myeolchi-jeot (salted anchovies) and galchi-jeot (salted hairtail) is commonly used. Raw seafood or daegu-agami-jeot (salted cod gills) are used in the East coast areas.
Salt, scallions, garlic, fish sauce, and sugar are commonly added to flavor the kimchi.
Production
To make kimchi, start by slicing cabbage or daikon into small, uniform pieces to increase surface area. The pieces are then coated with salt to draw out water, which helps preserve them by preventing the growth of harmful microorganisms. This salting process can use 5–7% salt for 12 hours or 15% salt for 3–7 hours.
After salting, drain the excess water and mix in seasoning ingredients. Adding sugar can also help by binding any remaining water. Finally, pack the brined vegetables into an airtight jar and let them ferment at room temperature for 24 to 48 hours. The ideal salt concentration during fermentation is about 3%.
Since the fermentation process results in the production of carbon dioxide, the jar should be "burped" daily to release the gas. The more fermentation that occurs, the more carbon dioxide will be incorporated, which results in a very carbonated-drink-like effect.
Microorganisms in kimchi
The microorganisms present in kimchi include Bacillus mycoides, B. pseudomycoides, B. subtilis, Lactobacillus brevis, Lb. curvatus, Lb. kimchii, Lb. parabrevis, Lb. pentosus, Lb. plantarum, Lb. sakei, Lb. spicheri, Lactococcus carnosum, Lc. gelidum, Lc. lactis, Leuconostoc carnosum, Ln. citreum, Ln. gasicomitatum, Ln. gelidum, Ln. holzapfelii, Ln. inhae, Ln. kimchii, Ln. lactis, Ln. mesenteroides, Serratia marcescens, Weissella cibaria, W. confusa, W. kandleri, W. kimchii. W. koreensis, and W. soli. Archaea and yeasts, such as Saccharomyces, Candida, Pichia, and Kluyveromyces are also present in kimchi, with the latter being responsible for undesirable white colonies that sometimes form in the product as well as food spoilages and off-flavors.
In early fermentation stages, the Leuconostoc variety is found more dominantly in kimchi fermentation because of its lower acid tolerance and microaerophilic properties; the Leuconostoc variety also grows better at low salt concentrations. Throughout the fermentation process, as acidity rises, the Lactobacillus and Weissella variety become dominant because of their higher acid tolerance. Lactobacillus also grows better in conditions with a higher salt concentration.
These microorganisms are present due to the natural microflora provided by utilizing unsterilized food materials in the production of kimchi. The step of salting the raw materials as well as the addition of red pepper powder inhibit the pathogenic and putrefactive bacteria present in the microflora, allowing the lactic acid bacteria (LAB) to flourish and become the dominant microorganism. These anaerobic microorganisms steadily increase in number during the middle stages of fermentation, and prefer to be kept at low temperatures of about 10°C, pH of 4.2-4, and remain in the presence of 1.5% – 4% NaCl. A faster fermentation at a higher temperature may be chosen as well to accelerate the growth of bacterial cultures for a faster decrease in pH level.
Since the raw cruciferous vegetables themselves are the source of LAB required for fermentation, no starter culture is required for the production of kimchi; rather, spontaneous fermentation occurs. The total population of microorganisms present at the beginning of processing determine the outcome of fermentation, causing the final product to be highly variable in terms of quality and flavor. Currently, there are no recommended approaches to control the microbial community during fermentation to predict the outcome. In the industrial production of kimchi, starter cultures made up of Leu. mesenteroides, Leu. citreum, and Lb. plantarum are used, which are often unsuccessful because they fail to outcompete the naturally occurring cultures on the raw materials.
By-products of microorganisms
The lactic acid bacteria (LAB) produce lactic acid, hydrogen peroxide, and carbon dioxide as by-products during metabolism. Lactic acid quickly lowers the pH, creating an acidic environment that is uninhabitable for most other microorganisms that survived salting. This also modifies the flavor of sub-ingredients and can increase the nutritive value of the raw materials, as the microbial community in the fermentation process can synthesize B vitamins and hydrolyze cellulose in plant tissues to free nutrients that are normally indigestible by the human gastrointestinal tract. Hydrogen peroxide is formed by the oxidation of reduced nicotinamide adenine dinucleotide (NADH) and provides an antibiotic to inhibit some undesirable microorganisms. Carbon dioxide functions as a preservative, flushing out oxygen to create an anaerobic environment, as well as creating the desired carbonation in the final product.
Odor
Kimchi is known for its strong, spicy, flavors and odors, although milder varieties exist. Variations in the fermentation process cause the final product to be highly variable in terms of quality and flavor. The strong odor is especially tied to the sulfur compounds from garlic and ginger of kimchi, which can be less appealing to non-Koreans. Thus, scientists are experimenting with the types of bacteria used in its production to minimize the odor to increase the appeal for international markets. These efforts are not universally appreciated by lovers of kimchi, as the flavor is affected in the process, and some see that "South Korea's narrative about its own culinary staple" is being manipulated to suit the foreigners' tastes.
Varieties
Kimchi is one of the most important staples of Korean cuisine. The Korean term "kimchi" refers to fermented vegetables and encompasses salted and seasoned vegetables. It is mainly served as a side dish with every meal, but can also be served as a main dish. Globally, kimchi is mainly recognized as a spicy fermented cabbage dish.
New variations of kimchi continue to be created, and the taste can vary depending on the region and season. Conventionally, the secret of kimchi preparation was passed down by mothers to their daughters in a bid to make them suitable wives to their husbands. However, with the current technological advancement and increase in social media use, many individuals worldwide can now access recipes for kimchi preparation.
Kimchi can be categorized by main ingredients, regions or seasons. Korea's northern and southern sections have a considerable temperature difference. There are over 180 recognized varieties of kimchi. The most common kimchi variations are:
Baechu-kimchi () spicy napa cabbage kimchi, made from whole cabbage leaves
Baechu-geotjeori () unfermented napa cabbage kimchi
Bossam-kimchi () wrapped kimchi
Baek-kimchi () white kimchi, made without chili pepper
Dongchimi () a non-spicy watery kimchi
Nabak-kimchi () a mildly spicy watery kimchi
Chonggak-kimchi () cubed chonggak "ponytail" radish, a popular spicy kimchi
Kkakdugi () spicy cubed Korean radish strongly-scented kimchi containing fermented shrimp
Oi-sobagi () cucumber kimchi that can be stuffed with seafood and chili paste, and is a popular choice during the spring and summer seasons
Pa-kimchi () spicy green onion kimchi
Yeolmu-kimchi () is also a popular choice during the spring and summer, and is made with yeolmu radishes, and does not necessarily have to be fermented.
Gat-kimchi (), made with Indian mustard
Yangbaechu-kimchi (양배추 김치) spicy cabbage kimchi, made from "headed" cabbage leaves (as opposed to napa cabbage)
Kimchi from the northern parts of Korea tends to have less salt and red chili and usually does not include brined seafood for seasoning. Northern kimchi often has a watery consistency. Kimchi made in the southern parts of Korea, such as Jeolla Province and Gyeongsang Province, uses salt, chili peppers and myeolchijeot (, brined anchovy allowed to ferment) or saeujeot (, brined shrimp allowed to ferment), myeolchiaekjeot (), anchovy fish sauce, kkanariaekjeot (), liquid anchovy jeot, similar to fish sauce used in Southeast Asia, but thicker.
Saeujeot () or myeolchijeot is not added to the kimchi spice-seasoning mixture, but is simmered first to reduce odors, eliminate tannic flavor and fats, and then is mixed with a thickener made of rice or wheat starch (). This technique has been falling into disuse in the past 40 years.
Color
White kimchi is neither red nor spicy. It includes white napa cabbage kimchi and other varieties such as white radish kimchi (dongchimi). Watery white kimchi varieties are sometimes used as an ingredient in a number of dishes such as cold noodles in dongchimi brine (dongchimi-guksu).
Age
Geotjeori (): fresh, unfermented kimchi.
Mugeun-ji (), also known as mugeun-kimchi (): aged kimchi
Region
The following regional classification dates to the 1960s. Since then, kimchi-making practices and trends in Korea have diverged from it.
Pyongan Province: Non-traditional ingredients have been adopted in rural areas due to severe food shortages.
Hamgyong Province: Due to its proximity to the ocean, people in this particular region use fresh fish and oysters to season their kimchi.
Hwanghae Province: The taste of kimchi in Hwanghae Province is not bland but not extremely spicy. Most kimchi from this region has less color since red chili flakes are not used. The typical kimchi for Hwanghae Province is called hobakji (호박지). It is made with pumpkin (bundi).
Gyeonggi Province
Chungcheong Province: Instead of using fermented fish, people in the region rely on salt and fermentation to make savory kimchi. Chungcheong Province has the most varieties of kimchi.
Gangwon Province, South Korea/Kangwon Province, North Korea: In Gangwon Province, kimchi is stored for longer periods. Unlike other coastal regions in Korea, kimchi in this area does not contain much salted fish.
Jeolla Province: Salted yellow corvina and salted butterfish are used in this region to create different seasonings for kimchi.
Gyeongsang Province: This region's cuisine is saltier and spicier. The most common seasoning components include myeolchijeot () which produce a briny and savory flavor. They also use oysters in their kimchi.
Foreign countries: In some places of the world people sometimes make kimchi with western cabbage and many other alternative ingredients such as broccoli.
Seasonal variations
Different types of kimchi were traditionally made at different times of the year, based on when various vegetables were in season and also to take advantage of hot and cold seasons before the era of refrigeration. Although the advent of modern refrigeration – including kimchi refrigerators specifically designed with precise controls to keep different varieties of kimchi at optimal temperatures at various stages of fermentation – has made this seasonality unnecessary, Koreans continue to consume kimchi according to traditional seasonal preferences.
Spring
After a long period of consuming gimjang kimchi () during the winter, fresh potherbs and vegetables were used to make kimchi. These kinds of kimchi were not fermented or even stored for long periods of time but were consumed fresh.
Summer
Yeolmu radishes and cucumbers are summer vegetables made into kimchi, yeolmu-kimchi () which is eaten in several bites. Brined fish or shellfish can be added, and freshly ground dried chili peppers are often used.
Autumn
Baechu kimchi is prepared by inserting blended stuffing materials, called sok (literally inside), between layers of salted leaves of uncut, whole Napa cabbage. The ingredients of sok () can vary, depending on the regions and weather conditions. Generally, baechu kimchi used to have a strong salty flavor until the late 1960s, before which a large amount of myeolchijeot or saeujeot had been used.
Gogumasoon Kimchi is made from sweet potato stems.
Winter
Traditionally, the greatest varieties of kimchi were available during the winter. In preparation for the long winter months, many types of kimjang kimchi () were prepared in early winter and stored in the ground in large kimchi pots. Today, many city residents use modern kimchi refrigerators offering precise temperature controls to store kimjang kimchi. November and December are traditionally when people begin to make kimchi; women often gather together in each other's homes to help with winter kimchi preparations. "Baechu kimchi" is made with salted baechu filled with thin strips of radish, parsley, pine nuts, pears, chestnuts, shredded red pepper, manna lichen (), garlic, and ginger.
Korean preference
As of 2004, the preference of kimchi preparation in Korean households from the most prepared type of kimchi to less prepared types of kimchi was: baechu kimchi, being the most prepared type of kimchi, then kkakdugi, then dongchimi and then chonggak kimchi. Baechu kimchi comprised more than seventy percent of marketed kimchi and radish kimchi comprised about twenty percent of marketed kimchi.
Nutrition
Kimchi is made of various vegetables and contains a high concentration of dietary fiber, while being low in food energy. The vegetables used in kimchi also contribute to intake of vitamin A, thiamine (B1), riboflavin (B2), calcium, and iron.
A 2003 article said that South Koreans consume 18kg (40lbs) of kimchi per person annually. Many credit the Korean Miracle in part to eating the dish. Adult Koreans eat from to of kimchi a day.
Trade
South Korea spent around $129 million in 2017 to purchase 275,000 metric tons of foreign kimchi, more than 11 times the amount it exported, according to data released by the Korea Customs Service in 2017. South Korea consumes 1.85 million metric tons of kimchi annually, or 36.1 kg per person. It imports a significant fraction of that, mostly from China, and runs a $47.3 million kimchi trade deficit.
Consumption
In 2021, Koreans collectively consumed 1,965,000 tons of kimchi, with the average Korean consuming 88.3 grams of kimchi daily. This average has been steadily declining from 109.9 grams per day in 2010, a 19.6% decrease. Males tend to consume more kimchi than females, with an average of 106.6 grams compared to 70.0 grams.
Food regulations
The Canadian Food Inspection Agency has regulations for the commercial production of kimchi. The final product should have a pH ranging from 4.2 to 4.5. Any low-acidity ingredients with a pH above 4.6, including white daikon and napa cabbage, should not be left under conditions that enable the growth of undesirable microorganisms and require a written illustration of the procedure designed to ensure this is available if requested. This procedural design should include steps that maintain sterility of the equipment and products used, and the details of all sterilization processes. The cutoff pH of 4.6 is a value common to many food safety regulations, initially defined because botulism toxin is not produced below this level.
Gallery
See also
– a variety of kimchi made of carrots by Koryo-saram
References
Further reading
Banchan
Brassica dishes
Cabbage dishes
Korean cuisine
National dishes
Fermented foods | Kimchi | [
"Biology"
] | 6,677 | [
"Fermented foods",
"Biotechnology products"
] |
178,959 | https://en.wikipedia.org/wiki/Kruithof%20curve | The Kruithof curve describes a region of illuminance levels and color temperatures that are often viewed as comfortable or pleasing to an observer. The curve was constructed from psychophysical data collected by Dutch physicist Arie Andries Kruithof, though the original experimental data is not present on the curve itself. Lighting conditions within the bounded region were empirically assessed as being pleasing or natural, whereas conditions outside the region were considered uncomfortable, displeasing or unnatural. The Kruithof curve is a sufficient model for describing sources that are considered natural or closely resemble Planckian black bodies, but its value in describing human preference has been consistently questioned by further studies on interior lighting.
For example, natural daylight has a color temperature of 6500 K and an illuminance of about 10⁴ to 10⁵ lux. This color temperature–illuminance pair results in natural color rendition, but if viewed at a low illuminance, would appear bluish. At typical indoor office illuminance levels of about 400 lux, pleasing color temperatures are lower (between 3000 and 6000 K), and at typical home illuminance levels of about 75 lux, pleasing color temperatures are even lower (between 2400 and 2700 K). These color temperature-illuminance pairs are often achieved with fluorescent and incandescent sources, respectively. The pleasing region of the curve contains color temperatures and illuminance levels comparable to naturally lit environments.
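The curve itself is an empirical region rather than a formula, but the example pairs above can be used for a rough illustration. The sketch below is a hedged illustration only: the anchor values are taken from the figures quoted in this article, and the log-linear interpolation is our own assumption, not part of Kruithof's work.

```python
# Illustrative only: interpolate a rough "pleasing" CCT range between the two
# example points quoted above (75 lux: 2400-2700 K; 400 lux: 3000-6000 K).
# The real Kruithof curve has no closed form; these anchors and the log-linear
# interpolation are assumptions made purely for demonstration.
import math

LOW = (75.0, 2400.0, 2700.0)    # (illuminance in lux, lower CCT, upper CCT)
HIGH = (400.0, 3000.0, 6000.0)

def rough_pleasing_cct(lux: float):
    """Return an approximate (lower, upper) pleasing CCT range in kelvin."""
    lux = min(max(lux, LOW[0]), HIGH[0])   # clamp to the quoted interval
    t = (math.log10(lux) - math.log10(LOW[0])) / (math.log10(HIGH[0]) - math.log10(LOW[0]))
    return (LOW[1] + t * (HIGH[1] - LOW[1]), LOW[2] + t * (HIGH[2] - LOW[2]))

print(rough_pleasing_cct(200.0))   # about (2750, 4630) K under these assumptions
```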
History
At the emergence of fluorescent lighting in 1941, Kruithof conducted psychophysical experiments to provide a technical guide to design artificial lighting. Using gas-discharge fluorescent lamps, Kruithof was able to manipulate the color of emitted light and ask observers to report as to whether or not the source was pleasing to them. The sketch of his curve as presented consists of three major regions: the middle region, which corresponds to light sources considered pleasing; the lower region, which corresponds to colors that are considered cold and dim; and the upper region, which corresponds to colors that are warm and unnaturally colorful. These regions, while approximate, are still used to determine appropriate lighting configurations for homes or offices.
Perception and adaptation
Kruithof's findings are directly related to human adaptation to changes in illumination. As illuminance decreases, human sensitivity to blue light increases. This is known as the Purkinje effect. The human visual system switches from photopic (cone-dominated) vision to scotopic (rod-dominated) vision when luminance levels decrease. Rods have a very high spectral sensitivity to blue energy, whereas cones have varying spectral sensitivities to reds, greens and blues. Since the dominating photoreceptor in scotopic vision is most sensitive to blue, human sensitivity to blue light is therefore increased. Because of this, intense sources of higher (bluer) color temperatures are all generally considered to be displeasing at low luminance levels, and a narrow range of pleasing sources exist. Subsequently, the range of pleasing sources increases in photopic vision as luminance levels are increased.
Criticism
While the curve has been used as a guide to design artificial lighting for indoor spaces, with the general suggestion to use sources with low correlated color temperatures (CCT) at low illuminances, Kruithof did not describe the method of evaluation, the independent variables, nor the test sample that were used to develop the curve. Without these data, nor other validation, the conclusions should not be considered credible. The relationship between illuminance and CCT was not supported by subsequent work.
Illuminance and CCT have been examined in many studies of interior lighting, and these studies consistently demonstrate a different relationship to that suggested by Kruithof. Rather than having upper and lower boundaries, these studies do not suggest CCT to have a significant effect, and for illuminance they suggest only avoiding levels below 300 lux. Current studies have not explored the most critical region, namely the low-illuminance regime and the low CCT range beneath 3000 K, though some of the studies above reached down to 2850 K. This lacuna in the data is particularly important as it relates to almost all "lifestyle" environments in which lighting designers operate: hotels, restaurants and residential settings. Further evaluation of these areas would be valuable, given recent findings on the effects of light on the circadian rhythm and their health implications.
Further studies
The Kruithof curve, as presented, does not contain experimental data points and serves as an approximation for desirable lighting conditions. Therefore, its scientific accuracy has been reassessed.
Color rendering index is a metric for describing the appearance of a source and whether or not it is considered pleasing. The color rendering index of a given source is a measure of that source's ability to faithfully reproduce colors of an object. Light sources, like candles or incandescent light bulbs produce spectrums of electromagnetic energy that closely resemble Planckian black bodies; they look much like natural sources. Many fluorescent lamps or LED light bulbs have spectrums that do not match those of Planckian blackbodies and are considered unnatural. Therefore, the way that they render the perceived colors of an environment may be also considered unnatural. While these newer sources can still achieve correlated color temperatures and illuminance levels that are within the comfortable region of the Kruithof curve, variability in their color rendering indices may cause these sources to ultimately be displeasing.
Different activities or scenarios call for different color temperature–illuminance pairs: preferred light sources change depending on the scenario the source is illuminating. Individuals did prefer color temperature–illuminance pairs within the comfortable region for dining, socializing and studying, but also preferred color temperature–illuminance pairs that were in the lower uncomfortable region for night time activities and preparing for bed. This is linked to the Purkinje effect; individuals who desire some light at night time desire lower (redder) color temperatures even if luminance levels are very low.
Kruithof's findings may also vary as a function of culture or geographic location. Desirable sources are based on an individual's previous experiences of perceiving color, and as different regions of the world may have their own lighting standards, each culture would likely have its own acceptable light sources.
The illuminance of a source is the dominant factor in deciding whether a source is pleasing or comfortable: viewers in one such experiment evaluated a range of correlated color temperatures and illuminance levels, yet their impressions remained generally unchanged as correlated color temperature changed. Additionally, there is a relationship between correlated color temperature and the apparent brightness of a source. From these findings, color rendering index, in place of correlated color temperature, may be a more appropriate metric for determining whether a certain source is considered pleasing.
See also
f.lux
Melanopsin
Melatonin
References
Further reading
(A study in which the average luminance was 8 cd/m2, or the illumination 200–400 lux, with an average of about 330 lux.)
External links
Daylight: Is it in the eye of the beholder? by Kevin P. McGuire.
Color
Vision
Lighting
Psychophysics | Kruithof curve | [
"Physics"
] | 1,474 | [
"Psychophysics",
"Applied and interdisciplinary physics"
] |
179,017 | https://en.wikipedia.org/wiki/Dirichlet%20convolution | In mathematics, the Dirichlet convolution (or divisor convolution) is a binary operation defined for arithmetic functions; it is important in number theory. It was developed by Peter Gustav Lejeune Dirichlet.
Definition
If f and g are two arithmetic functions from the positive integers to the complex numbers, the Dirichlet convolution f ∗ g is a new arithmetic function defined by:

(f * g)(n) = \sum_{d \mid n} f(d) \, g(n/d) = \sum_{ab = n} f(a) \, g(b)

where the sum extends over all positive divisors d of n, or equivalently over all distinct pairs (a, b) of positive integers whose product is n.

This product occurs naturally in the study of Dirichlet series such as the Riemann zeta function. It describes the multiplication of two Dirichlet series in terms of their coefficients:

\left( \sum_{n \ge 1} \frac{f(n)}{n^s} \right) \left( \sum_{n \ge 1} \frac{g(n)}{n^s} \right) = \sum_{n \ge 1} \frac{(f * g)(n)}{n^s}
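A direct, brute-force implementation of the definition can make the operation concrete. The following Python sketch is our own illustration (the function names are arbitrary); it models arithmetic functions as ordinary callables and convolves them by summing over divisors:

```python
# A minimal sketch (not from the source): Dirichlet convolution by direct
# summation over the divisors of n, with arithmetic functions modelled as
# ordinary Python callables on positive integers.

def dirichlet_convolve(f, g):
    """Return the Dirichlet convolution f * g as a new arithmetic function."""
    def fg(n: int):
        return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)
    return fg

# Example: convolving the constant function 1 with itself yields the
# number-of-divisors function tau.
one = lambda n: 1
tau = dirichlet_convolve(one, one)
print([tau(n) for n in range(1, 13)])   # [1, 2, 2, 3, 2, 4, 2, 4, 3, 4, 2, 6]
```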
Properties
The set of arithmetic functions forms a commutative ring, the Dirichlet ring, under pointwise addition, where f + g is defined by (f + g)(n) = f(n) + g(n), and Dirichlet convolution. The multiplicative identity is the unit function ε defined by ε(n) = 1 if n = 1 and ε(n) = 0 if n > 1. The units (invertible elements) of this ring are the arithmetic functions f with f(1) ≠ 0.
Specifically, Dirichlet convolution is associative,
(f * g) * h = f * (g * h),
distributive over addition,
f * (g + h) = f * g + f * h,
commutative,
f * g = g * f,
and has an identity element,
f * ε = ε * f = f.
Furthermore, for each f having f(1) ≠ 0, there exists an arithmetic function f⁻¹ with f ∗ f⁻¹ = ε, called the Dirichlet inverse of f.
The Dirichlet convolution of two multiplicative functions is again multiplicative, and every not constantly zero multiplicative function has a Dirichlet inverse which is also multiplicative. In other words, multiplicative functions form a subgroup of the group of invertible elements of the Dirichlet ring. Beware however that the sum of two multiplicative functions is not multiplicative (since (f + g)(1) = f(1) + g(1) = 2 ≠ 1), so the subset of multiplicative functions is not a subring of the Dirichlet ring. The article on multiplicative functions lists several convolution relations among important multiplicative functions.
Another operation on arithmetic functions is pointwise multiplication: f · g is defined by (f · g)(n) = f(n) g(n). Given a completely multiplicative function h, pointwise multiplication by h distributes over Dirichlet convolution: h · (f ∗ g) = (h · f) ∗ (h · g). The convolution of two completely multiplicative functions is multiplicative, but not necessarily completely multiplicative.
Properties and examples
In these formulas, we use the following arithmetical functions:
ε is the multiplicative identity: ε(1) = 1, otherwise 0.
1 is the constant function with value 1: 1(n) = 1 for all n. Keep in mind that 1 is not the identity. (Some authors denote this as ζ because the associated Dirichlet series is the Riemann zeta function.)
1_C for a set C is the indicator function: 1_C(n) = 1 iff n ∈ C, otherwise 0.
Id is the identity function with value n: Id(n) = n.
Id_k is the kth power function: Id_k(n) = n^k.
The following relations hold:
1 ∗ μ = ε, the Dirichlet inverse of the constant function 1 is the Möbius function (see proof). Hence:
g = f ∗ 1 if and only if f = g ∗ μ, the Möbius inversion formula
σ_k = Id_k ∗ 1, the kth-power-of-divisors sum function σk
σ = Id ∗ 1, the sum-of-divisors function
τ = 1 ∗ 1, the number-of-divisors function
Id_k = σ_k ∗ μ, Id = σ ∗ μ, and 1 = τ ∗ μ, by Möbius inversion of the formulas for σk, σ, and τ
φ ∗ 1 = Id, proved under Euler's totient function
φ = Id ∗ μ, by Möbius inversion
σ = φ ∗ τ, from convolving 1 on both sides of φ ∗ 1 = Id
λ ∗ |μ| = ε, where λ is Liouville's function
λ ∗ 1 = 1_Sq, where Sq = {1, 4, 9, ...} is the set of squares
J_k ∗ 1 = Id_k, where J_k is Jordan's totient function
Λ ∗ 1 = log, where Λ is von Mangoldt's function
1_P ∗ 1 = ω, where ω is the prime omega function counting distinct prime factors of n
Ω ∗ μ = 1_PP, the characteristic function of the prime powers.
ω ∗ μ = 1_P, where 1_P is the characteristic function of the primes.
This last identity shows that the prime-counting function is given by the summatory function

\pi(x) = \sum_{n \le x} (\omega * \mu)(n) = \sum_{n \le x} \omega(n) \, M(x/n)

where M(x) is the Mertens function and ω is the distinct prime factor counting function from above. This expansion follows from the identity for the sums over Dirichlet convolutions given on the divisor sum identities page (a standard trick for these sums).
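Several of the relations above are easy to verify numerically for small n. The sketch below is illustrative only; it uses brute-force definitions of μ and φ (our own helper names, not an optimized library routine) and checks 1 ∗ μ = ε and φ ∗ 1 = Id for n up to 20:

```python
# Illustrative check of two of the relations above, 1 * mu = eps and
# phi * 1 = Id, using brute-force definitions.
from math import gcd

def mobius(n: int) -> int:
    """Möbius function via trial division; 0 if n has a squared prime factor."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def phi(n: int) -> int:
    """Euler's totient: count of 1 <= k <= n coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def conv(f, g, n: int):
    """(f * g)(n) by summing over the divisors of n."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

for n in range(1, 21):
    assert conv(lambda d: 1, mobius, n) == (1 if n == 1 else 0)   # 1 * mu = eps
    assert conv(phi, lambda d: 1, n) == n                          # phi * 1 = Id
print("verified 1 * mu = eps and phi * 1 = Id for n = 1..20")
```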
Dirichlet inverse
Examples
Given an arithmetic function f, its Dirichlet inverse g = f⁻¹ may be calculated recursively: the value of g(n) is given in terms of g(m) for m < n.
For n = 1:
(f * g)(1) = f(1) g(1) = ε(1) = 1, so
g(1) = 1 / f(1). This implies that f does not have a Dirichlet inverse if f(1) = 0.
For n = 2:
(f * g)(2) = f(1) g(2) + f(2) g(1) = ε(2) = 0,
g(2) = −(f(2) g(1)) / f(1),
For n = 3:
(f * g)(3) = f(1) g(3) + f(3) g(1) = ε(3) = 0,
g(3) = −(f(3) g(1)) / f(1),
For n = 4:
(f * g)(4) = f(1) g(4) + f(2) g(2) + f(4) g(1) = ε(4) = 0,
g(4) = −(f(4) g(1) + f(2) g(2)) / f(1),
and in general for n > 1,

g(n) = \frac{-1}{f(1)} \sum_{d \mid n,\; d < n} f(n/d) \, g(d).
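The recursion translates directly into code. The following Python sketch is an illustration of the formula above (not an optimized implementation; names are our own) and uses exact rational arithmetic to avoid rounding error:

```python
# Sketch of the recursion above: compute g = f^{-1} for n = 1..N, assuming
# f(1) != 0 and integer-valued f.
from fractions import Fraction

def dirichlet_inverse(f, N: int):
    """Return a dict {n: g(n)} for n = 1..N satisfying f * g = epsilon."""
    g = {1: Fraction(1, 1) / f(1)}
    for n in range(2, N + 1):
        acc = sum(f(n // d) * g[d] for d in range(1, n) if n % d == 0)
        g[n] = -acc / f(1)
    return g

# Example: the Dirichlet inverse of the constant function 1 is the Möbius function.
inv = dirichlet_inverse(lambda n: 1, 12)
print([int(inv[n]) for n in range(1, 13)])   # [1, -1, -1, 0, -1, 1, -1, 0, 0, 1, -1, 0]
```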
Properties
The following properties of the Dirichlet inverse hold:
The function f has a Dirichlet inverse if and only if f(1) ≠ 0.
The Dirichlet inverse of a multiplicative function is again multiplicative.
The Dirichlet inverse of a Dirichlet convolution is the convolution of the inverses of each function: (f ∗ g)⁻¹ = f⁻¹ ∗ g⁻¹.
A multiplicative function f is completely multiplicative if and only if f⁻¹(n) = μ(n) f(n).
If f is completely multiplicative then f · (g ∗ h) = (f · g) ∗ (f · h) for all arithmetic functions g and h, where · denotes pointwise multiplication of functions.
Other formulas
An exact, non-recursive formula for the Dirichlet inverse of any arithmetic function f is given in Divisor sum identities. A more partition theoretic expression for the Dirichlet inverse of f is given by
The following formula provides a compact way of expressing the Dirichlet inverse of an invertible arithmetic function f :

f^{-1} = \frac{1}{f(1)} \sum_{k = 0}^{\infty} \left( \varepsilon - \frac{f}{f(1)} \right)^{*k}

where the expression (ε − f/f(1))^{∗k} stands for the arithmetic function ε − f/f(1) convoluted with itself k times. Notice that, for a fixed positive integer n, if 2^k > n then (ε − f/f(1))^{∗k}(n) = 0; this is because (ε − f/f(1))(1) = 0 and every way of expressing n as a product of k positive integers must then include a 1, so the series on the right hand side converges for every fixed positive integer n.
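As a sanity check, the truncated series can be compared against the defining property of the inverse numerically. In the sketch below (illustrative only; value tables over n = 1..N stand in for arithmetic functions, and f is taken to be the divisor-sum function σ) the partial sum of convolution powers is divided by f(1) and then convolved back with f; the result should reproduce ε:

```python
# Illustrative sketch: truncate the series above and check that convolving the
# result back with f reproduces epsilon. Functions are stored as value tables
# indexed by n = 1..N (index 0 unused).
N = 12

def conv_table(a, b):
    """Dirichlet convolution of two value tables over n = 1..N."""
    c = [0.0] * (N + 1)
    for n in range(1, N + 1):
        c[n] = sum(a[d] * b[n // d] for d in range(1, n + 1) if n % d == 0)
    return c

sigma = [0.0] + [float(sum(d for d in range(1, n + 1) if n % d == 0)) for n in range(1, N + 1)]
eps = [0.0] + [1.0 if n == 1 else 0.0 for n in range(1, N + 1)]
h = [0.0] + [eps[n] - sigma[n] / sigma[1] for n in range(1, N + 1)]   # eps - f/f(1)

inverse = eps[:]          # running sum of convolution powers, starting with h^{*0} = eps
power = eps[:]
for _ in range(N):        # h^{*k}(n) = 0 once 2^k > n, so N terms are more than enough
    power = conv_table(power, h)
    inverse = [x + y for x, y in zip(inverse, power)]
inverse = [x / sigma[1] for x in inverse]    # the overall 1/f(1) factor

print([round(x, 6) for x in conv_table(sigma, inverse)[1:]])   # [1.0, 0.0, 0.0, ...]
```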
Dirichlet series
If f is an arithmetic function, the Dirichlet series generating function is defined by

DG(f; s) = \sum_{n = 1}^{\infty} \frac{f(n)}{n^s}

for those complex arguments s for which the series converges (if there are any). The multiplication of Dirichlet series is compatible with Dirichlet convolution in the following sense:

DG(f; s) \, DG(g; s) = DG(f * g; s)
for all s for which both series of the left hand side converge, one of them at least converging
absolutely (note that simple convergence of both series of the left hand side does not imply convergence of the right hand side!). This is akin to the convolution theorem if one thinks of Dirichlet series as a Fourier transform.
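A quick numerical illustration of this compatibility (a sketch, not a proof): since τ = 1 ∗ 1, the Dirichlet series of τ should equal the square of the Riemann zeta function wherever both converge. Comparing partial sums at s = 3 (the bound N and helper name are our own choices):

```python
# Rough numerical check: sum of tau(n)/n^3 versus (sum of 1/n^3)^2.
N, s = 2000, 3.0
tau = lambda n: sum(1 for d in range(1, n + 1) if n % d == 0)
lhs = sum(1.0 / n**s for n in range(1, N + 1)) ** 2
rhs = sum(tau(n) / n**s for n in range(1, N + 1))
print(lhs, rhs)   # both are close to zeta(3)^2, roughly 1.4449
```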
Related concepts
The restriction of the divisors in the convolution to unitary, bi-unitary or infinitary divisors defines similar commutative operations which share many features with the Dirichlet convolution (existence of a Möbius inversion, persistence of multiplicativity, definitions of totients, Euler-type product formulas over associated primes, etc.).
Dirichlet convolution is a special case of the convolution multiplication for the incidence algebra of a poset, in this case the poset of positive integers ordered by divisibility.
The Dirichlet hyperbola method computes the summation of a convolution in terms of its functions and their summation functions.
See also
Arithmetic function
Divisor sum identities
Möbius inversion formula
References
External links
Arithmetic functions
Bilinear maps
"Mathematics"
] | 1,458 | [
"Arithmetic functions",
"Number theory"
] |
179,045 | https://en.wikipedia.org/wiki/Land%20Camera | The Land Camera is a model of self-developing film camera manufactured by Polaroid between 1948 and 1983. It is named after the inventor, American scientist Edwin Land, who developed a process for self-developing photography between 1943 and 1947. After Edwin Land's retirement from Polaroid in 1982, the name 'Land' was dropped from the camera name. The first commercially available model was the Model 95, which produced sepia-colored prints in about 1 minute. It was first sold to the public on November 26, 1948.
Film
The photography developing process, invented by Polaroid founder Edwin Land, employs diffusion transfer to move the dyes from the negative to the positive via a reagent. A negative sheet was exposed inside the camera, then lined up with a positive sheet and squeezed through a set of rollers which spread a reagent between the two layers, creating a developing film "sandwich". The negative developed quickly, after which some of the unexposed silver halide grains (and the latent image it contained) were solubilized by the reagent and transferred by diffusion from the negative to the positive. After a minute, the back of the camera was opened and the negative peeled away to reveal the print.
In 1963, Land introduced Polacolor pack film, which made instant color photographs possible. This process involved pulling two tabs from the camera, the second of which pulled the film sandwich through the rollers to develop outside the camera. The instant color process is much more complex, involving a negative which contains three layers of emulsion sensitive to blue, green, and red. Underneath each layer are dye developing molecules in their complementary colors of yellow, magenta, and cyan. When light strikes an emulsion layer, it blocks the complementary dye below it. For instance, when blue strikes the blue-sensitive emulsion layer, it blocks the yellow dye, but allows the magenta and cyan dyes to transfer to the positive, which combine to create blue. When green and red (yellow) strike their respective layers, they block the complementary dyes of magenta and cyan below them, allowing only yellow dye to transfer to the positive.
In 1972, integral film was introduced which did not require the user to time the development or peel apart the negative from the positive. This process was similar to Polacolor film with added timing and receiving layers. The film itself integrates all the layers to expose, develop, and fix the photo into a plastic envelope commonly associated with a Polaroid photo. The Polaroid SX-70 was the first camera to use this film.
Improvements in SX-70 film led to the higher speed 600 series film, then to different formats such as 500 series (captiva), and spectra.
Cameras
Roll film
The original cameras folded into the body and used bellows to protect the light path. The film was put on two spools, one with the negative roll, and one with the positive paper and reagent pods. The film developed inside the camera. The exception to this is the Polaroid Swinger, a hard-bodied roll-film camera whose film was pulled out of the camera body to develop outside the camera. The film for roll-film cameras was discontinued in 1992.
100 Series Pack cameras
These cameras were developed after the roll-film models and were designed to use the newly developed 100 series pack film. As with the Swinger, the film sandwich was pulled out of the camera to develop outside of the camera, but instead of two separate rolls the film was built into a compact, easy-loading film pack which contained 8 exposures. Hard-bodied plastic models were marketed later as a low-cost alternative to the more expensive models with bellows.
There are four generations of folding colorpack cameras: the 100, 200, 300, and 400 series. Polaroid announced in 2008 the discontinuation of all of its film by 2009, and Fujifilm stopped producing pack film in 2016. Polaroid B.V. manufactures and sells Polaroid integral-type film for 600 and SX-70 cameras. In September 2019, Spectra/Image film was discontinued.
Meanwhile, Bob Crowley of New55, the investor David Bohnett, and Florian Kaps, known as the founder of the Impossible Project (now Polaroid B.V.), managed to produce pack film for the folding colorpack cameras under the New55 label. New55 FILM ended operations on December 31, 2017.
Supersense (also founded by Florian Kaps) currently produces ONE INSTANT, a handmade single-shot pack film made from original 8x10 film manufactured by Polaroid.
References
External links
The Land List
Cameras
Polaroid cameras | Land Camera | [
"Technology"
] | 947 | [
"Recording devices",
"Cameras"
] |
179,076 | https://en.wikipedia.org/wiki/Scapegoating | Scapegoating is the practice of singling out a person or group for unmerited blame and consequent negative treatment. Scapegoating may be conducted by individuals against individuals (e.g. "he did it, not me!"), individuals against groups (e.g., "I couldn't see anything because of all the tall people"), groups against individuals (e.g., "He was the reason our team didn't win"), and groups against groups.
A scapegoat may be an adult, child, sibling, employee, peer, ethnic, political or religious group, or country. A whipping boy, identified patient, or "fall guy" are forms of scapegoat.
Scapegoating has its origins in the scapegoat ritual of atonement described in chapter 16 of the Biblical Book of Leviticus, in which a goat (or ass) is released into the wilderness bearing all the sins of the community, which have been placed on the goat's head by a priest.
At the individual level
A medical definition of scapegoating is:
Scapegoated groups throughout history have included almost every imaginable group of people: genders, religions, people of different races, nations, or sexual orientations, people with different political beliefs, or people differing in behaviour from the majority. However, scapegoating may also be applied to organizations, such as governments, corporations, or various political groups.
Its archetype
Jungian analyst Sylvia Brinton Perera situates the scapegoat complex within a mythology of shadow and guilt, which individuals experience at the archetypal level. As an ancient social process to rid a community of its past evil deeds and reconnect it to the sacred realm, the scapegoat appeared in a biblical rite, which involved two goats and the pre-Judaic, chthonic god Azazel. In the modern scapegoat complex, however, "the energy field has been radically broken apart" and the libido "split off from consciousness". Azazel's role is deformed into an accuser of the scapegoated victim.
Blame for breaking a perfectionist moral code, for instance, might be measured out by aggressive scapegoaters. Themselves often wounded, the scapegoaters can be sadistic, superego accusers with brittle personas, who have driven their own shadows underground from where such are projected onto the victim. The scapegoated victim may then live in a hell of felt unworthiness, retreating from consciousness, burdened by shadow and transpersonal guilt, and hiding from the pain of self-understanding. Therapy includes modeling self-protective skills for the victim's battered ego, and guidance in the search for inner integrity, to find the victim's own voice.
Projection
Unwanted thoughts and feelings can be unconsciously projected onto another who becomes a scapegoat for one's own problems. This concept can be extended to projection by groups. In this case the chosen individual, or group, becomes the scapegoat for the group's problems. "Political agitation in all countries is full of such projections, just as much as the backyard gossip of little groups and individuals." Swiss psychiatrist Carl Jung considered indeed that "there must be some people who behave in the wrong way; they act as scapegoats and objects of interest for the normal ones".
Scapegoat theory of intergroup conflict
The scapegoat theory of intergroup conflict provides an explanation for the correlation between times of relative economic despair and increases in prejudice and violence toward outgroups. Studies of anti-black violence (racist violence) in the southern United States between 1882 and 1930 show a correlation between poor economic conditions and outbreaks of violence (e.g. lynchings) against black people. The correlation between the price of cotton (the principal product of the area at that time) and the number of lynchings of black men by whites ranged from −0.63 to −0.72, suggesting that a poor economy induced white people to take out their frustrations by attacking an outgroup.
Scapegoating as a group necessitates that ingroup members settle on one specific target to blame for their problems.
In management, scapegoating is a known practice in which a lower staff employee is blamed for the mistakes of senior executives. This is often due to lack of accountability in upper management.
Scapegoat mechanism
Literary critic and philosopher Kenneth Burke first coined and described the expression scapegoat mechanism in his books Permanence and Change (1935), and A Grammar of Motives (1945). These works influenced some philosophical anthropologists, such as Ernest Becker and René Girard.
Girard developed the concept much more extensively as an interpretation of human culture. In Girard's view, it is humankind, not God, who has need for various forms of atoning violence. Humans are driven by desire for that which another has or wants (mimetic desire). This causes a triangulation of desire and results in conflict between the desiring parties. This mimetic contagion increases to a point where society is at risk; it is at this point that the scapegoat mechanism is triggered. This is the point where one person is singled out as the cause of the trouble and is expelled or killed by the group. This person is the scapegoat. Social order is restored as people are contented that they have solved the cause of their problems by removing the scapegoated individual, and the cycle begins again.
Scapegoating serves as a psychological relief for a group of people. Girard contends that this is what happened in the narrative of Jesus of Nazareth, the central figure in Christianity. The difference between the scapegoating of Jesus and others, Girard believes, is that in the resurrection of Jesus from the dead, he is shown to be an innocent victim; humanity is thus made aware of its violent tendencies and the cycle is broken. Thus Girard's work is significant as a reconstruction of the Christus Victor atonement theory.
See also
References
Notes
Further reading
Books
Colman, A.D. Up from Scapegoating: Awakening Consciousness in Groups (1995)
Douglas, Tom Scapegoats: Transferring Blame (1995)
Dyckman, JM & Cutler JA Scapegoats at Work: Taking the Bull's-Eye Off Your Back (2003)
Girard, René: Violence and the Sacred (1972)
Girard, René: The Scapegoat (1986)
Jasinski, James: "Sourcebook on Rhetoric" (2001)
Perera, Sylvia Brinton, The Scapegoat Complex: Toward a Mythology of Shadow and Guilt (Toronto: Inner City 1986), Studies in Jungian Psychology By Jungian Analysts
Pillari V Scapegoating in Families: Intergenerational Patterns of Physical and Emotional Abuse (1991)
Quarmby K Scapegoat: Why We Are Failing Disabled People (2011)
Wilcox C.W. Scapegoat: Targeted for Blame (2009)
Zemel, Joel: Scapegoat, the extraordinary legal proceedings following the 1917 Halifax Explosion (2012)
Academic articles
Reference books
Diversionary tactics
Abuse
Aggression
Injustice
Persecution
Political metaphors referring to people | Scapegoating | [
"Biology"
] | 1,513 | [
"Abuse",
"Behavior",
"Aggression",
"Human behavior"
] |
179,088 | https://en.wikipedia.org/wiki/SPSS | SPSS Statistics is a statistical software suite developed by IBM for data management, advanced analytics, multivariate analysis, business intelligence, and criminal investigation. Long produced by SPSS Inc., it was acquired by IBM in 2009. Versions of the software released since 2015 have the brand name IBM SPSS Statistics.
The software name originally stood for Statistical Package for the Social Sciences (SPSS), reflecting the original market, then later changed to Statistical Product and Service Solutions.
Overview
SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, industries, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping and creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software.
The many features of SPSS Statistics are accessible via pull-down menus or can be programmed with a proprietary 4GL command syntax language. Command syntax programming has the benefits of reproducible output, simplifying repetitive tasks, and handling complex data manipulations and analyses. Additionally, some complex applications can only be programmed in syntax and are not accessible through the menu structure. The pull-down menu interface also generates command syntax: this can be displayed in the output, although the default settings have to be changed to make the syntax visible to the user. They can also be pasted into a syntax file using the "paste" button present in each menu. Programs can be run interactively or unattended, using the supplied Production Job Facility.
A "macro" language can be used to write command language subroutines. A Python programmability extension can access the information in the data dictionary and data and dynamically build command syntax programs. This extension, introduced in SPSS 14, replaced the less functional SAX Basic "scripts" for most purposes, although SaxBasic remains available. In addition, the Python extension allows SPSS to run any of the statistics in the free software package R. From version 14 onwards, SPSS can be driven externally by a Python or a VB.NET program using supplied "plug-ins". (From version 20 onwards, these two scripting facilities, as well as many scripts, are included on the installation media and are normally installed by default.)
SPSS Statistics places constraints on internal file structure, data types, data processing, and matching files, which together considerably simplify programming. SPSS datasets have a two-dimensional table structure, where the rows typically represent cases (such as individuals or households) and the columns represent measurements (such as age, sex, or household income). Only two data types are defined: numeric and text (or "string"). All data processing occurs sequentially case-by-case through the file (dataset). Files can be matched one-to-one and one-to-many, but not many-to-many. In addition to that cases-by-variables structure and processing, there is a separate Matrix session where one can process data as matrices using matrix and linear algebra operations.
The graphical user interface has two views which can be toggled. The 'Data View' shows a spreadsheet view of the cases (rows) and variables (columns). Unlike spreadsheets, the data cells can only contain numbers or text, and formulas cannot be stored in these cells. The 'Variable View' displays the metadata dictionary, where each row represents a variable and shows the variable name, variable label, value label(s), print width, measurement type, and a variety of other characteristics. Cells in both views can be manually edited, defining the file structure and allowing data entry without using command syntax. This may be sufficient for small datasets. Larger datasets such as statistical surveys are more often created in data entry software, or entered during computer-assisted personal interviewing, by scanning and using optical character recognition and optical mark recognition software, or by direct capture from online questionnaires. These datasets are then read into SPSS.
SPSS Statistics can read and write data from ASCII text files (including hierarchical files), other statistics packages, spreadsheets and databases. It can also read and write to external relational database tables via ODBC and SQL.
Statistical output is to a proprietary file format (*.spv file, supporting pivot tables) for which, in addition to the in-package viewer, a stand-alone reader can be downloaded. The proprietary output can be exported to text or Microsoft Word, PDF, Excel, and other formats. Alternatively, output can be captured as data (using the OMS command), as text, tab-delimited text, PDF, XLS, HTML, XML, SPSS dataset or a variety of graphic image formats (JPEG, PNG, BMP and EMF).
Several variants of SPSS Statistics exist. SPSS Statistics Gradpacks are highly discounted versions sold only to students. SPSS Statistics Server is a version of the software with a client/server architecture. Add-on packages can enhance the base software with additional features (examples include complex samples, which can adjust for clustered and stratified samples, and custom tables, which can create publication-ready tables). SPSS Statistics is available under either an annual or a monthly subscription license.
Version 25 of SPSS Statistics launched on August 8, 2017. This added new and advanced statistics, such as random effects solution results (GENLINMIXED), robust standard errors (GLM/UNIANOVA), and profile plots with error bars within the Advanced Statistics and Custom Tables add-on. V25 also includes new Bayesian statistics capabilities, a method of statistical inference, and publication-ready charts, with powerful new charting capabilities including new default templates and the ability to share with Microsoft Office applications.
Versions and ownership history
SPSS 1 - 1968
SPSS 2 - 1983
SPSS 5 - 1993
SPSS 6.1 - 1995
SPSS 7.5 - 1997
SPSS 8 - 1998
SPSS 9 - 1999
SPSS 10 - 1999
SPSS 11 - 2002
SPSS 12 - 2004
SPSS 13 - 2005
SPSS 14 - 2006
SPSS 15 - 2006
SPSS 16 - 2007
SPSS 17 - 2008
PASW 17 - 2009
PASW 18 - 2009
SPSS 19 - 2010
SPSS 20 - 2011
SPSS 21 - 2012
SPSS 22 - 2013
SPSS 23 - 2015
SPSS 24 - 2016, March
SPSS 25 - 2017, July
SPSS 26 - 2018
SPSS 27 - 2019, June (and 27.0.1 in November, 2020)
SPSS 28 - 2021, May
SPSS 29 - 2022, Sept
SPSS 30 - 2024, Sept
SPSS was released in its first version in 1968 as the Statistical Package for the Social Sciences (SPSS) after being developed by Norman H. Nie, Dale H. Bent, and C. Hadlai Hull. Those principals incorporated as SPSS Inc. in 1975. Early versions of SPSS Statistics were written in Fortran and designed for batch processing on mainframes, including for example IBM and ICL versions, originally using punched cards for data and program input. A processing run read a command file of SPSS commands and either a raw input file of fixed-format data with a single record type, or a 'getfile' of data saved by a previous run. To save precious computer time an 'edit' run could be done to check command syntax without analysing the data. From version 10 (SPSS-X) in 1983, data files could contain multiple record types.
Prior to SPSS 16.0, different versions of SPSS were available for Windows, Mac OS X and Unix.
SPSS Statistics version 13.0 for Mac OS X was not compatible with Intel-based Macintosh computers, due to the Rosetta emulation software causing errors in calculations. SPSS Statistics 15.0 for Windows needed a downloadable hotfix to be installed in order to be compatible with Windows Vista.
From version 16.0, the same version runs under Windows, Mac, and Linux. The graphical user interface is written in Java. The Mac OS version is provided as a Universal binary, making it fully compatible with both PowerPC and Intel-based Mac hardware.
SPSS Inc announced on July 28, 2009, that it was being acquired by IBM for US$1.2 billion. Because of a dispute about ownership of the name "SPSS", between 2009 and 2010, the product was referred to as PASW (Predictive Analytics SoftWare). As of January 2010, it became "SPSS: An IBM Company". Complete transfer of business to IBM was done by October 1, 2010. By that date, SPSS: An IBM Company ceased to exist. IBM SPSS is now fully integrated into the IBM Corporation, and is one of the brands under IBM Software Group's Business Analytics Portfolio, together with IBM Algorithmics, IBM Cognos and IBM OpenPages.
Companion software in the "IBM SPSS" family are used for data mining and text analytics (IBM SPSS Modeler), realtime credit scoring services (IBM SPSS Collaboration and Deployment Services), and structural equation modeling (IBM SPSS Amos).
SPSS Data Collection and SPSS Dimensions were sold in 2015 to UNICOM Systems, Inc., a division of UNICOM Global, and merged into the integrated software suite UNICOM Intelligence (survey design, survey deployment, data collection, data management and reporting).
IDA (Interactive Data Analysis)
IDA (Interactive Data Analysis) was a software package that originated at what was formerly the National Opinion Research Center (NORC), at the University of Chicago. Initially offered on the HP-2000, it later also became available on MUSIC/SP under the ownership of SPSS. Regression analysis was one of IDA's strong points.
SCSS - Conversational / Columnar SPSS
SCSS was a software product intended for online use of IBM mainframes.
Although the "C" was for "conversational", it also represented a distinction regarding how the data was stored: it used a column-oriented rather than a row-oriented (internal) database.
This gave good interactive response time for the SPSS Conversational Statistical System (SCSS), whose strong point, as with SPSS, was Cross-tabulation.
Project NX
In October, 2020 IBM announced the start of an Early Access Program for the "New SPSS Statistics", codenamed Project NX. It contains "many of your favorite SPSS capabilities presented in a new easy to use interface, with integrated guidance, multiple tabs, improved graphs and much more".
In December, 2021, IBM opened up the Early Access Program for the next generation of SPSS Statistics for more users and shared more visuals about it.
See also
Comparison of statistical packages
JASP and jamovi, both open-source and free of charge alternatives, offering frequentist and Bayesian models
PSPP, a free SPSS replacement from the GNU Project
SPSS Modeler
References
Further reading
External links
Official SPSS User Community
50 years of SPSS history
Raynald Levesque's SPSS Tools – library of worked solutions for SPSS programmers (FAQ, command syntax; macros; scripts; Python)
Archives of SPSSX-L Discussion – SPSS Listserv active since 1996. Discusses programming, statistics and analysis
UCLA ATS Resources to help you learn SPSS – Resources for learning SPSS
UCLA ATS Technical Reports – Report 1 compares Stata, SAS, and SPSS against R (R is a language and environment for statistical computing and graphics).
SPSS Community – Support for developers of applications using SPSS products, including materials and examples of the Python and R programmability features
Biomedical Statistics - An educational website dedicated to statistical evaluation of biomedical data using SPSS software
IBM software
Business intelligence software
Java platform software
Science software for Linux
Proprietary commercial software for Linux
Data mining and machine learning software
Statistical software
Statistical programming languages
Econometrics software
Time series software
Data warehousing
Proprietary cross-platform software
Extract, transform, load tools
Mathematical optimization software
Numerical software | SPSS | [
"Mathematics"
] | 2,557 | [
"Statistical software",
"Numerical software",
"Mathematical software"
] |
179,093 | https://en.wikipedia.org/wiki/Aphasiology | Aphasiology is the study of language impairment usually resulting from brain damage, due to neurovascular accident—hemorrhage, stroke—or associated with a variety of neurodegenerative diseases, including different types of dementia. These specific language deficits, termed aphasias, may be defined as impairments of language production or comprehension that cannot be attributed to trivial causes such as deafness or oral paralysis. A number of aphasias have been described, but two are best known: expressive aphasia (Broca's aphasia) and receptive aphasia (Wernicke's or sensory aphasia).
Acute aphasias
Acute aphasias are often the result of tissue damage following a stroke.
Expressive aphasia
First described by the French neurologist Paul Broca in the nineteenth century, expressive aphasia causes the speech of those affected to display a considerable vocabulary but to show grammatical deficits. It is characterized by a halting speech consisting mainly of content words, i.e. nouns and verbs, and, at least in English, distinctly lacking small grammatical function words such as articles and prepositions. This observation gave rise to the terms telegraphic speech and, more recently, agrammatism. The extent to which expressive aphasics retain knowledge of grammar is a matter of considerable controversy. Nonetheless, because their comprehension of spoken language is mostly preserved, and because their speech is usually good enough to get their point across, the agrammatic nature of their speech suggests that the disorder chiefly involves the expressive mechanisms of language that turn thoughts into well-formed sentences.
The view of expressive aphasia as an expressive disorder is supported by its frequent co-occurrence with facial motor difficulties, and its anatomical localization. Although expressive aphasia may be caused by brain damage to many regions, it is most commonly associated with the inferior frontal gyrus, a region that overlaps with motor cortex controlling the mouth and tongue, extending into the periventricular white matter. Not surprisingly, this region has come to be known as "Broca's area". However, an intriguing line of research has demonstrated specific comprehension deficits in expressive aphasics as well. These deficits generally involve sentences that are grammatical, but atypical in their word order. The simplest example is sentences in the passive voice, such as "The boy was chased by the girl." Expressive aphasics may have quite a hard time realizing that the girl is doing the chasing, but they do much better with "The mouse was chased by the cat," where world knowledge constraints contribute to the correct interpretation. However, "The cat was chased by the mouse" would likewise be incomprehensible. This evidence suggests that grammatical competence may be a specific function of Broca's area.
Lesions exclusive to Broca's area (the foot of the inferior frontal gyrus) do not produce Broca's aphasia but instead mild dysprosody and agraphia, sometimes accompanied by word-finding pauses and mild dysarthria. Not much is known about what other areas must be damaged in order to produce Broca's aphasia, but some maintain that damage to the inferior pre-Rolandic motor strip (the motor cortex region responsible for glossopharyngeal muscle control) is also necessary.
Receptive aphasia
Receptive aphasia was originally described by the German neurologist Karl Wernicke, a contemporary of Broca. Receptive aphasics produce speech that seems fluent and grammatical, but is largely devoid of sensible content. Comprehension is severely impaired, but while patients display a great deal of difficulty comprehending individual words, they can more easily understand words in context. Receptive aphasia is associated with the posterior third of the superior temporal gyrus in the distribution of the inferior division of the middle cerebral artery, known as "Wernicke's area", an area adjacent to the cortex responsible for auditory processing. If the damage extends posteriorly, visual connections are disrupted, and the patient will have difficulty understanding written language. Therefore, the localization of the two best-known aphasias mirrors the grossest dichotomy in brain organization: anterior areas are specialized for motor output, and posterior areas for sensory processing.
A fascinating corollary of this has come from research on aphasias in deaf users of sign language, who show deficits in signing and comprehension analogous to Expressive and Receptive aphasias in hearing populations. These studies demonstrate that the grammatical functions of Broca's area and the semantic functions of Wernicke's area are indeed deep, abstract properties of the language system independent of its modality of expression.
Global aphasia
Another less commonly known aphasia is global aphasia, which generally manifests after a stroke affecting an extensive portion of the brain, including infarction of both divisions of the middle cerebral artery and generally both Broca's area and Wernicke's area. Survivors with global aphasia may have great difficulty understanding and forming words and sentences, and generally experience a great deal of difficulty when trying to communicate. With considerable speech therapy rehabilitation, global aphasia may progress into expressive aphasia or receptive aphasia.
Anomic aphasia
A person with anomic aphasia has word-finding difficulties. Anomic aphasia, also known as anomia, is a non-fluent aphasia, which means the person speaks hesitantly because of difficulty naming words or producing correct syntax. The person struggles to find the right words for speaking and writing. Subjects tend to use circumlocutions, in which they speak around the word they cannot find, to make up for their loss. People with anomic aphasia often know how to use an object but cannot name it. Any damage in or near the zone of language can result in anomic aphasia. Other forms of aphasia often transition into a syndrome of primarily anomic aphasia in the process of recovery.
Conduction aphasia
Conduction aphasia is a rare form of aphasia in which fibres in the arcuate fasciculus and superior longitudinal fasciculus are damaged. These fibres link Wernicke's and Broca's areas. Damage to this area connecting comprehension and expression produces the following symptoms: fluent speech, good comprehension, poor oral reading, poor repetition, and very frequent transpositions of sounds within words.
Primary progressive aphasias
Primary progressive aphasia is a rare disorder where people slowly lose their ability to talk, read, write, and comprehend what they hear in conversation over a period of time. It was first described as a distinct syndrome by Mesulam in 1982. There are three variants: progressive nonfluent aphasia (PNFA), semantic dementia (SD), and logopenic progressive aphasia (LPA).
History
The nineteenth century marked the most important time in the evolution of aphasiology, beginning with the works of Franz Josef Gall. Gall is the founder of the more modern localization theory and is the origin of the idea of a language center in the brain. However, supporting evidence for the theory that language had its own anatomical representation was not found until the case study of Mr. Leborgne, also known as Tan, by Paul Broca in 1861. The discovery of what is now known as Broca's area was followed years later by Carl Wernicke's famous work, 'The Symptom-Complex of Aphasia: A Psychological Study on an Anatomical Basis' in 1874. This paper is regarded as one of the most influential works in the history of the field of aphasiology. In it, Wernicke described many of the different classifications of aphasia and is the basis for the classical model of aphasia.
See also
Aphasiology (journal)
Neurolinguistics
References
Aphasias
Pathology
Neurolinguistics | Aphasiology | [
"Biology"
] | 1,671 | [
"Pathology"
] |
179,098 | https://en.wikipedia.org/wiki/Racetrack%20%28game%29 | Racetrack is a paper and pencil game that simulates a car race, played by two or more players. The game is played on a squared sheet of paper, with a pencil line tracking each car's movement. The rules for moving represent a car with a certain inertia and physical limits on traction, and the resulting line is reminiscent of how real racing cars move. The game requires players to slow down before bends in the track, and requires some foresight and planning for successful play. The game is popular as an educational tool teaching vectors.
The game is also known under names such as Vector Formula, Vector Rally, Vector Race, Graph Racers, PolyRace, Paper and pencil racing, or the Graph paper race game.
The basic game
The rules are here explained in simple terms. As will follow from a later section, if the mathematical concept of vectors is known, some of the rules may be stated more briefly. The rules may also be stated in terms of the physical concepts velocity and acceleration.
The track
On a sheet of quadrille paper ("quad pad", e.g. Letter preprinted with a 1/4" square grid, or A4 with a 5 mm square grid), a freehand loop is drawn as the outer boundary of the racetrack. A large ellipse will do for a first game, but some irregularities are needed to make the game interesting. Another freehand loop is drawn inside the first. It can be more or less parallel with the outer loop, or the track can have wider and narrower spots (pinch spots), with usually at least two squares between the loops. A straight starting and finishing line is drawn across the two loops, and a direction for the race is chosen (e.g., counter clockwise).
Preparing to play
The order of players is agreed upon. Each player chooses a color or mark (such as x and o) to represent the player's car. Each player marks a starting point for their car - a grid intersection at or behind the starting line.
The moves
All moves will be from one grid point to another grid point. Each grid point has eight neighbouring grid points: Up, down, left, right, and the four diagonal directions. Players take turns to move their cars according to some simple rules. Each move is marked by drawing a line from the starting point of this move to a new point.
Each player's first move must be to one of the eight neighbours of their starting position. (The player can also choose to stand still.)
On each turn after that, the player can choose to move the same number of squares in the same direction as on the previous turn; the grid point reached by this move is called the principal point for this turn. (E.g., if the previous move was four squares to the right and two squares upwards, then the principal point is found by moving another four squares to the right and two more squares upwards.) However, the player also has the choice of any of the eight neighbours of this principal point.
Cars must stay within the boundaries of the racetrack; otherwise they crash.
Finding a winner
The winner is the first player to complete a lap (cross the finish line).
Additional and alternative rules
Combining the following rules in various ways, there are many variants of the game.
The track
The track need not be a closed curve; the starting and finishing lines could be different.
Before starting to play, the players may go over the track, agreeing in advance about each grid point near the boundaries as to whether that point is inside or outside the track.
Alternatively, the track may be drawn with straight lines only, with corners at grid points only. This removes the need to decide dubious points. Players may or may not be allowed to touch the walls, but not to cross them.
The moves
Instead of allowing moves to any of eight neighbours of the principal point, one may use the four neighbours rule, limiting moves to the principal point or any of its four nearest neighbours.
When drawing the track, slippery regions with oil spill may be marked, wherein the cars cannot change velocity at all, or only according to the four neighbours rule. Also, turbo regions may be marked with an arrow with a specific length and direction, wherein possible moves are given by a principal point displaced as indicated by the arrow. These rules may apply to all moves either beginning in, or ending in, or beginning and ending in, or passing through, the marked region.
Collisions and crashes
Usually, cars are required to stay on the track for the entire length of the move, not just the start and end. On heavily convoluted racetracks, allowing the line segment representing a move to cross the boundary twice (with start and end points inside the track), some unreasonable shortcuts may be allowed.
Several cars may be allowed to occupy the same point simultaneously. However, the most common and entertaining rule is that while the line segments are allowed to intersect, a car cannot move to or through a grid point that is occupied by another car, as they would collide.
If a player is unable to move according to these rules, the player has crashed. A crashed car may leave the game, or various systems for penalizing crashes can be devised.
A player running off the track may be allowed to continue, but is required to brake and turn around, and re-enter the track again crossing the boundary at a point behind the point where it left. At high speeds, this will take a considerable number of moves.
Another possibility is to penalize a car with "damage points" for each crash. E.g., if it runs off the track or collides, it receives 1 damage point for each square of the last movement, and comes to an immediate stand-still. A car with 5 damage points, say, cannot run anymore.
Finding a winner
At the end of the game, one may complete a round. E.g., with three players A, B and C (starting on that order), if B is the first to cross the finish line, C is allowed one more move to complete the A-B-C cycle. The winner is the player whose car is the greatest distance beyond the finish line.
If the collision rule mentioned above is used, there is still a considerable advantage in moving first. This may be partially counterbalanced by having the players choose their individual starting points in reverse order. E.g., first C chooses a start point, then B, then A. Then, A makes the first move, followed by B, then C.
Another possible rule is to let the loser move first in the next game.
Mathematics and physics
Each move may be represented by a vector. E.g., a move four squares to the right and two up may be represented by the vector (4,2).
The eight neighbour rule allows changing each coordinate of the vector by ±1. E.g., if the previous move was (4,2), the next one may be any of the following nine:
(3,3) (4,3) (5,3)
(3,2) (4,2) (5,2)
(3,1) (4,1) (5,1)
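The nine candidate points can be generated mechanically, as in the following sketch (Python, purely illustrative). For simplicity it checks only that the end point lies on the track, not the whole line segment of the move, and the sets of track points and occupied points are assumed to be supplied by the caller.

def legal_moves(position, velocity, track_points, occupied=frozenset()):
    """Grid points a car may move to, given its position and its previous move."""
    px, py = position
    vx, vy = velocity
    principal = (px + vx, py + vy)        # repeating the previous move
    moves = []
    for dx in (-1, 0, 1):                 # adjust each component by at most one
        for dy in (-1, 0, 1):
            target = (principal[0] + dx, principal[1] + dy)
            if target in track_points and target not in occupied:
                moves.append(target)
    return moves

# Example: a car at (7, 4) whose previous move was (4, 2) considers the nine
# points around the principal point (11, 6), keeping only those on the track.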
If each round represents 1 second and each square represents 1 metre, the vector representing each move is a velocity vector in metres per second. The four neighbour rule allows accelerations up to 1 metre per second squared, and the eight neighbours rule allows accelerations up to √2 ≈ 1.4 metres per second squared. A more realistic maximum acceleration for car racing would be 10 metres per second squared, corresponding for example to each round representing a reaction time of 0.5 seconds and each square representing 2.5 metres (using the four neighbour rule).
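The figure of 10 metres per second squared follows directly from the chosen scale; restating the arithmetic from the paragraph above:

$a_{\max} = \dfrac{\Delta s}{(\Delta t)^2} = \dfrac{2.5\ \text{m}}{(0.5\ \text{s})^2} = 10\ \text{m/s}^2$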
The speed built up by acceleration can only be reduced at the same rate. This restriction reflects the inertia or momentum of the car. Note that in physics, speeding, braking, and turning right or left all are forms of acceleration, represented by one vector. For a sports car, having the same maximum acceleration without loss of traction in all directions is not unrealistic; see Circle of forces. Note, however, that the circle of forces strictly applies to an individual tyre rather than an entire vehicle, that a slightly elongated ellipse would be more realistic than a circle, and that the theory of traction involving this circle or ellipse is quite simplified.
History and contemporary use
The origins of the game are unknown, but it certainly existed as early as the 1960s. The rules for the game, and a sample track game was published by Martin Gardner in January 1973 in his "Mathematical Games" column in Scientific American; and it was again described in Car and Driver magazine, in August 1973, page 65. Today, the game is used by math and physics teachers around the world when teaching vectors and kinematics. However, the game has a certain charm of its own, and may be played as a pure recreation.
Martin Gardner noted that the game was "virtually unknown" in the United States, and called it "a truly remarkable simulation of automobile racing". He mentions having learned the game from Jürg Nievergelt, "a computer scientist at the University of Illinois who picked it up on a recent trip to Switzerland". Car and Driver described it as having an "almost supernatural" resemblance to actual racing, commenting that "If you enter a turn too rapidly, you will spin. If you "brake" too early, it will take you longer to accelerate out of the turn."
Triplanetary was a science fiction rocket ship racing game that was sold commercially between 1973 and 1981. It used similar rules to Racetrack but on a hexagonal grid and with the spaceships being placed in the center of the grid cells rather than at the vertices. The game used a laminated board which could be written on with a grease pencil.
References
See also
Paper soccer
Mathematical games
Paper-and-pencil games
Racing games | Racetrack (game) | [
"Mathematics"
] | 2,037 | [
"Recreational mathematics",
"Mathematical games"
] |
179,132 | https://en.wikipedia.org/wiki/Spacelab | Spacelab was a reusable laboratory developed by European Space Agency (ESA) and used on certain spaceflights flown by the Space Shuttle. The laboratory comprised multiple components, including a pressurized module, an unpressurized carrier, and other related hardware housed in the Shuttle's cargo bay. The components were arranged in various configurations to meet the needs of each spaceflight.
Spacelab components flew on a total of about 32 Shuttle missions, depending on how such hardware and missions are tabulated. Spacelab allowed scientists to perform experiments in microgravity in geocentric orbit. There was a variety of Spacelab-associated hardware, so a distinction can be made between the major Spacelab program missions with European scientists running missions in the Spacelab habitable module, missions running other Spacelab hardware experiments, and other Space Transportation System (STS) missions that used some component of Spacelab hardware. There is some variation in counts of Spacelab missions, in part because there were different types of Spacelab missions with a large range in the amount of Spacelab hardware flown and the nature of each mission. There were at least 22 major Spacelab missions between 1983 and 1998, and Spacelab hardware was used on a number of other missions, with some of the Spacelab pallets being flown as late as 2008.
Background and history
In August 1973, NASA and European Space Research Organisation (ESRO), now European Space Agency or ESA, signed a memorandum of understanding (MOU) to build a science laboratory for use on Space Shuttle flights. Construction of Spacelab was started in 1974 by Entwicklungsring Nord (ERNO), a subsidiary of VFW-Fokker GmbH, after merger with Messerschmitt-Bölkow-Blohm (MBB) named MBB/ERNO, and merged into EADS SPACE Transportation in 2003. The first lab module, LM1, was donated to NASA in exchange for flight opportunities for European astronauts. A second module, LM2, was bought by NASA for its own use from ERNO.
Construction on the Spacelab modules began in 1974 by what was then the company ERNO-VFW-Fokker.
In the early 1970s NASA shifted its focus from the Lunar missions to the Space Shuttle, and also space research. The Administrator of NASA at the time moved the focus from a new space station to a space laboratory for the planned Space Shuttle. This would allow technologies for future space stations to be researched and harness the capabilities of the Space Shuttle for research.
Spacelab was produced by European Space Research Organisation (ESRO), a consortium of ten European countries including:
Austria
Belgium
Denmark
France
West Germany/Germany
Italy
Netherlands
Spain
Switzerland
United Kingdom
Components
In addition to the laboratory module, the complete set also included five external pallets for experiments in vacuum built by British Aerospace (BAe) and a pressurized "Igloo" containing the subsystems needed for the pallet-only flight configuration operation. Eight flight configurations were qualified, though more could be assembled if needed.
The system had some unique features including an intended two-week turn-around time (for the original Space Shuttle launch turn-around time) and the roll-on-roll-off for loading in aircraft (Earth-transportation).
Spacelab consisted of a variety of interchangeable components, with the major one being a crewed laboratory that could be flown in the Space Shuttle orbiter's bay and returned to Earth. However, the habitable module did not have to be flown to conduct a Spacelab-type mission and there was a variety of pallets and other hardware supporting space research. The habitable module expanded the volume for astronauts to work in a shirt-sleeve environment and had space for equipment racks and related support equipment. When the habitable module was not used, some of the support equipment for the pallets could instead be housed in the smaller Igloo, a pressurized cylinder connected to the Space Shuttle orbiter crew area.
Spacelab missions typically supported multiple experiments, and the Spacelab 1 mission had experiments in the fields of space plasma physics, solar physics, atmospheric physics, astronomy, and Earth observation. The selection of appropriate modules was part of mission planning for Spacelab Shuttle missions, and for example, a mission might need less habitable space and more pallets, or vice versa.
Habitable module
The habitable Spacelab laboratory module comprised a cylindrical environment in the rear of the Space Shuttle orbiter payload bay, connected to the orbiter crew compartment by a tunnel. The laboratory had an outer diameter of , and each segment a length of . The laboratory module consisted at minimum of a core segment, which could be used alone in a short module configuration. The long module configuration included an additional experiment segment. It was also possible to operate Spacelab experiments from the orbiter's aft flight deck.
The pressurized tunnel had its connection point at the orbiter's mid-deck. There were two different length tunnels depending on the location of the habitable module in the payload bay. When the laboratory module was not used, but additional space was needed for support equipment, another structure called the Igloo could be used.
Two laboratory modules were built, identified as LM1 and LM2. LM1 is on display at the Steven F. Udvar-Hazy Center at the Smithsonian Air and Space Museum behind the Space Shuttle Discovery. LM2 was on display in the Bremenhalle exhibition in the Bremen Airport of Bremen, Germany from 2000 to 2010. It resides in building 4c at the nearby Airbus Defence and Space plant since 2010 and can only be viewed during guided tours.
Pallet
The Spacelab Pallet is a U-shaped platform for mounting instrumentation, large instruments, experiments requiring exposure to space, and instruments requiring a large field of view, such as telescopes. The pallet has several hard points for mounting heavy equipment. The pallet can be used in single configuration or stacked end to end in double or triple configurations. Up to five pallets can be configured in the Space Shuttle cargo bay by using a double pallet plus triple pallet configurations.
The Spacelab Pallet used to transport both Canadarm2 and Dextre to the International Space Station is currently at the Canada Aviation and Space Museum, on loan from NASA through the Canadian Space Agency (CSA).
A Spacelab Pallet was transferred to the Swiss Museum of Transport for permanent display on 5 March 2010. The Pallet, nicknamed Elvis, was used during the eight-day STS-46 mission, 31 July – 8 August 1992, when ESA astronaut Claude Nicollier was on board Space Shuttle Atlantis to deploy ESA's European Retrievable Carrier (Eureca) scientific mission and the joint NASA/ASI (Italian Space Agency) Tethered Satellite System (TSS-1). The Pallet carried TSS-1 in the Shuttle's cargo bay.
Another Spacelab Pallet is on display at the U.S. National Air and Space Museum in Washington, D.C. There was a total of ten space-flown Spacelab pallets.
Igloo
On spaceflights where a habitable module was not flown, but pallets were flown, a pressurized cylinder known as the Igloo carried the subsystems needed to operate the Spacelab equipment. The Igloo was tall, had a diameter of , and weighed . Two Igloo units were manufactured, both by Belgium company SABCA, and both were used on spaceflights. An Igloo component was flown on Spacelab 2, ASTRO-1, ATLAS-1, ATLAS-2, ATLAS-3, and ASTRO-2.
A Spacelab Igloo is on display at the James S. McDonnell Space Hangar at the Steven F. Udvar-Hazy Center in the US.
Instrument Pointing System
The IPS was a gimbaled pointing device, capable of aiming telescopes, cameras, or other instruments. IPS was used on three different Space Shuttle missions between 1985 and 1995. IPS was manufactured by Dornier, and two units were made. The IPS was primarily constructed out of aluminum, steel, and multi-layer insulation.
IPS would be mounted inside the payload bay of the Space Shuttle Orbiter, and could provide gimbaled 3-axis pointing. It was designed for a pointing accuracy of less than 1 arcsecond (1/3600 of a degree), and three pointing modes including Earth, Sun, and Stellar focused modes. The IPS was mounted on a pallet exposed to outer space in the payload bay.
IPS missions:
Spacelab 2, a.k.a. STS-51-F launched 1985
Astro-1, a.k.a. STS-35 launched in 1990
Astro-2, a.k.a. STS-67 launched in 1995
The Spacelab 2 mission flew the Infrared Telescope (IRT), which was a aperture helium-cooled infrared telescope, observing light between wavelengths of 1.7 to 118 μm. IRT collected infrared data on 60% of the galactic plane.
List of parts
Examples of Spacelab components or hardware:
EVA Airlock
Tunnel
Tunnel adapter
Igloo
Spacelab module
Forward end cone
Aft end cone
Core segment/module
Experiment racks
Experiment segment/module
Electrical Ground Support Equipment
Mechanical Ground Support Equipment
Electrical Power Distribution Subsystem
Command and Data Management Subsystem
Environmental Control Subsystem
Instrument Pointing System
Pallet Structure
Multi-Purpose Experiment Support Structure (MPESS)
The Extended Duration Orbiter (EDO) assembly was not Spacelab hardware, strictly speaking. However, it was used most often on Spacelab flights. Also, NASA later used it with the SpaceHab modules.
Missions
Spacelab components flew on 22 Space Shuttle missions from November 1983 to April 1998. The Spacelab components were decommissioned in 1998, except the Pallets. Science work was moved to the International Space Station (ISS) and the Spacehab module, a pressurized carrier similar to the Spacelab Module. A Spacelab Pallet was recommissioned in 2000 for flight on STS-99. The "Spacelab Pallet – Deployable 1 (SLP-D1) with Canadian Dextre (Special Purpose Dexterous Manipulator)" was launched on STS-123. The Spacelab components were used on 41 Shuttle missions in total.
The habitable modules were flown on 16 Space Shuttle missions in the 1980s and 1990s. Spacelab Pallet missions were flown 6 times and Spacelab Pallets were flown on other missions 19 times.
Mission name acronyms:
ATLAS: Atmospheric Laboratory for Applications and Science
ASTRO: Not an acronym; abbreviation for "astronomy"
IML: International Microgravity Laboratory
LITE: Lidar In-space Technology Experiment
LMS: Life and Microgravity Sciences
MSL: Materials Science Laboratory
SLS: Spacelab Life Sciences
SRL: Space Radar Laboratory
TSS: Tethered Satellite System
USML: U.S. Microgravity Laboratory
USMP: U.S. Microgravity Payload
Besides contributing to ESA missions, Germany and Japan each funded their own Space Shuttle and Spacelab missions. Although superficially similar to other flights, these were the first and only human space missions conducted under complete German and Japanese mission control rather than that of NASA or ESA.
The first West German mission Deutschland 1 (Spacelab-D1, DLR-1, NASA designation STS-61-A) took place in 1985. A second similar mission, Deutschland 2 (Spacelab-D2, DLR-2, NASA designation STS-55), was first planned for 1988, but due to the Space Shuttle Challenger disaster, was delayed until 1993. It became the first German human space mission after German reunification.
The only Japan mission, Spacelab-J (NASA designation STS-47), took place in 1992.
Other missions
STS-92, October 2000, PMA-3
STS-108, December 2001, Lightweight Mission Peculiar Support Structure Carrier (LMC)
STS-123, March 2008, Pallet, Dextre
Cancelled missions
Spacelab-4, Spacelab-5, and other planned Spacelab missions were cancelled due to the late development of the Shuttle and the Challenger disaster.
Gallery
Legacy
The legacy of Spacelab lives on in the form of the Multi-Purpose Logistics Modules (MPLMs) and the systems derived from it. These systems include the ATV and Cygnus spacecraft used to transfer payloads to the International Space Station, and the Columbus, Harmony and Tranquility modules of the International Space Station.
The Spacelab 2 mission surveyed 60% of the galactic plane in infrared in 1985.
Spacelab was an extremely large program, and this was enhanced by different experiments and multiple payloads and configurations over two decades. For example, in a subset of just one part of the Spacelab 1 (STS-9) mission, no less than eight different imaging systems were flown into space. Including those experiments, there was a total of 73 separate experiments across different disciplines on the Spacelab 1 flight alone. Spacelab missions conducted experiments in materials, life, solar, astrophysics, atmospheric, and Earth science.
Diagram, Spacelab Module and Pallet
See also
Columbus Man-Tended Free Flyer
Hermes (spacecraft)
International Space Station
Columbus (ISS module)
Space Shuttle retirement
Space Station Freedom
Spacehab module (various, not to be confused with Spacelab)
Spacelab, a 1978 song by Kraftwerk
References
External links
Spacelab history on NASA.gov
Spacelab: An International Short-Stay Orbiting Laboratory, NASA-EP-165 on NASA.gov
Science in Orbit: The Shuttle & Spacelab Experience, 1981–1986, NASA-NP-119 on NASA.gov
Spacelab Payloads on Shuttle Flights on NASA.gov
James Downey Collection, UAH Archives and Special Collections files of James A. Downey III, project manager for Spacelab payloads
Lord, Douglas R. Spacelab An international success story, NASA-SP-487 NASA, January 1, 1987
SLP/2104-2: Spacelab Payload Accommodation Handbook
Crewed space observatories
Space hardware returned to Earth intact
Space science
Space Shuttle program | Spacelab | [
"Astronomy"
] | 2,877 | [
"Space science",
"Space telescopes",
"Outer space",
"Crewed space observatories"
] |
179,211 | https://en.wikipedia.org/wiki/Lignotuber | A lignotuber is a woody swelling of the root crown possessed by some plants as a protection against destruction of the plant stem, such as by fire. Other woody plants may develop basal burls as a similar survival strategy, often as a response to coppicing or other environmental stressors. However, lignotubers are specifically part of the normal course of development of the plants that possess them, and often develop early on in growth. The crown contains buds from which new stems may sprout, as well as stores of starch that can support a period of growth in the absence of photosynthesis. The term "lignotuber" was coined in 1924 by Australian botanist Leslie R. Kerr.
Plants possessing lignotubers include many species in Australia: Eucalyptus marginata (jarrah), Eucalyptus brevifolia (snappy gum) and Eucalyptus ficifolia (scarlet gum) all of which can have lignotubers wide and deep, as well as most mallees (where it is also known as a mallee root) and many Banksia species.
Plants possessing lignotubers on the western coast of the USA include California buckeye, coast redwood, California bay laurel (aka Oregon myrtle), and multiple species of manzanita and Ceanothus.
At least 14 species in the Mediterranean region have been identified as having lignotubers (as of 1993). Lignotubers develop from the cotyledonary bud in seedlings of several oak species including cork oak Quercus suber, but do not develop in several other oak species, and are not apparent in mature cork oak trees.
The fire-resistant lignotubers of Erica arborea, known as "briar root", are commonly used to make smoking pipes.
The largest known lignotubers (also called "root collar burls") are those of the Coast Redwood (Sequoia sempervirens) of central and northern California and extreme southwestern Oregon. A lignotuber washed into Big Lagoon, California, by the full gale storm of 1977 was in diameter and about half as tall and estimated to weigh . The largest dicot lignotubers are those of the Chinese Camphor Tree, or Kusu (Cinnamomum camphora) of Japan, China and the Koreas. Ones at the Vergelegen Estate in Cape Town, South Africa, which were planted in the late 1600s have muffin-shaped lignotubers up to high and about in diameter. Perhaps the largest lignotuber in Australia would be that of "Old Bottle Butt", a Red Bloodwood Tree (Corymbia gummifera) near Wauchope, New South Wales, that has a lignotuber about in height and in circumference at breast height.
Many plants with lignotubers grow in a shrubby habit, but with multiple stems arising from the lignotuber. The term lignotuberous shrub is used to describe this habit.
See also
California chaparral and woodlands
Chaparral
Crown sprouting
Epicormic shoot, also fire-induced buds
Fire ecology
Geoxyle
Resprouter
References
Plant morphology
Wildfire ecology | Lignotuber | [
"Biology"
] | 658 | [
"Plant morphology",
"Plants"
] |
179,252 | https://en.wikipedia.org/wiki/Gastropoda | Gastropods (), commonly known as slugs and snails, belong to a large taxonomic class of invertebrates within the phylum Mollusca called Gastropoda ().
This class comprises snails and slugs from saltwater, freshwater, and from the land. There are many thousands of species of sea snails and slugs, as well as freshwater snails, freshwater limpets, land snails and slugs.
The class Gastropoda is a diverse and highly successful class of mollusks within the phylum Mollusca. It contains a vast total of named species, second only to the insects in overall number. The fossil history of this class goes back to the Late Cambrian. , 721 families of gastropods are known, of which 245 are extinct and appear only in the fossil record, while 476 are currently extant with or without a fossil record.
Gastropoda (previously known as univalves and sometimes spelled "Gasteropoda") are a major part of the phylum Mollusca, and are the most highly diversified class in the phylum, with 65,000 to 80,000 living snail and slug species. The anatomy, behavior, feeding, and reproductive adaptations of gastropods vary significantly from one clade or group to another, so stating many generalities for all gastropods is difficult.
The class Gastropoda has an extraordinary diversification of habitats. Representatives live in gardens, woodland, deserts, and on mountains; in small ditches, great rivers, and lakes; in estuaries, mudflats, the rocky intertidal, the sandy subtidal, the abyssal depths of the oceans, including the hydrothermal vents, and numerous other ecological niches, including parasitic ones.
Although the name "snail" can be, and often is, applied to all the members of this class, commonly this word means only those species with an external shell big enough that the soft parts can withdraw completely into it. Slugs are gastropods that have no shell or a very small, internal shell; semislugs are gastropods that have a shell that they can partially retreat into but not entirely.
The marine shelled species of gastropods include species such as abalone, conches, periwinkles, whelks, and numerous other sea snails that produce seashells that are coiled in the adult stage—though in some, the coiling may not be very visible, for example in cowries. In a number of families of species, such as all the various limpets, the shell is coiled only in the larval stage, and is a simple conical structure after that.
Etymology
In the scientific literature, gastropods were described as "gasteropodes" by in 1795. The word gastropod comes from Greek ( 'stomach') and ( 'foot'), a reference to the fact that the animal's "foot" is positioned below its guts.
The earlier name "univalve" means one valve (or shell), in contrast to bivalves, such as clams, which have two valves or shells.
Diversity
At all taxonomic levels, gastropods are second only to insects in terms of their diversity.
Gastropods have the greatest numbers of named mollusk species. However, estimates of the total number of gastropod species vary widely, depending on cited sources. The number of gastropod species can be ascertained from estimates of the number of described species of Mollusca with accepted names: about 85,000 (minimum 50,000, maximum 120,000). But an estimate of the total number of Mollusca, including undescribed species, is about 240,000 species. The estimate of 85,000 mollusks includes 24,000 described species of terrestrial gastropods.
Different estimates for aquatic gastropods (based on different sources) give about 30,000 species of marine gastropods, and about 5,000 species of freshwater and brackish gastropods. Many deep-sea species remain to be discovered, as only 0.0001% of the deep-sea floor has been studied biologically. The total number of living species of freshwater snails is about 4,000.
Recently extinct species of gastropods (extinct since 1500) number 444, 18 species are now extinct in the wild (but still exist in captivity), and 69 species are "possibly extinct".
The number of prehistoric (fossil) species of gastropods is at least 15,000 species.
In marine habitats, the continental slope and the continental rise are home to the highest diversity, while the continental shelf and abyssal depths have a low diversity of marine gastropods.
Habitat
Gastropods are found in a wide range of aquatic and terrestrial habitats, from deep ocean trenches to deserts.
Some of the more familiar and better-known gastropods are terrestrial gastropods (the land snails and slugs). Some live in fresh water, but most named species of gastropods live in a marine environment.
Gastropods have a worldwide distribution, from the near Arctic and Antarctic zones to the tropics. They have become adapted to almost every kind of existence on earth, having colonized nearly every available medium.
In habitats where not enough calcium carbonate is available to build a really solid shell, such as on some acidic soils on land, various species of slugs occur, and also some snails with thin, translucent shells, mostly or entirely composed of the protein conchiolin.
Snails such as Sphincterochila boissieri and Xerocrassa seetzeni have adapted to desert conditions. Other snails have adapted to an existence in ditches, near deepwater hydrothermal vents, in oceanic trenches 10,000 meters (6 miles) below the surface, the pounding surf of rocky shores, caves, and many other diverse areas.
Gastropods can be accidentally transferred from one habitat to another by other animals, e.g. by birds.
Anatomy
Snails are distinguished by an anatomical process known as torsion, where the visceral mass of the animal rotates 180° to one side during development, such that the anus is situated more or less above the head. This process is unrelated to the coiling of the shell, which is a separate phenomenon. Torsion is present in all gastropods, but the opisthobranch gastropods are secondarily untorted to various degrees.
Torsion occurs in two stages. The first, mechanistic stage is muscular, and the second is mutagenetic. The effects of torsion are primarily physiological. The organism develops by asymmetrical growth, with the majority of growth occurring on the left side. This leads to the loss of right-side anatomy that in most bilaterians is a duplicate of the left side anatomy. The essential feature of this asymmetry is that the anus generally lies to one side of the median plane. The gill-combs, the olfactory organs, the foot slime-gland, nephridia, and the auricle of the heart are single or at least are more developed on one side of the body than the other. Furthermore, there is only one genital orifice, which lies on the same side of the body as the anus. Furthermore, the anus becomes redirected to the same space as the head. This is speculated to have some evolutionary function, as prior to torsion, when retracting into the shell, first the posterior end would get pulled in, and then the anterior. Now, the front can be retracted more easily, perhaps suggesting a defensive purpose.
Gastropods typically have a well-defined head with two or four sensory tentacles with eyes, and a ventral foot. The foremost division of the foot is called the propodium. Its function is to push away sediment as the snail crawls. The larval shell of a gastropod is called a protoconch.
Shell
Most shelled gastropods have a one piece shell (with exceptional bivalved gastropods), typically coiled or spiraled, at least in the larval stage. This coiled shell usually opens on the right-hand side (as viewed with the shell apex pointing upward). Numerous species have an operculum, which in many species acts as a trapdoor to close the shell. This is usually made of a horn-like material, but in some molluscs it is calcareous. In the land slugs, the shell is reduced or absent, and the body is streamlined.
Some gastropods have adult shells which are bottom heavy due to the presence of a thick, often broad, convex ventral callus deposit on the inner lip and adapical to the aperture which may be important for gravitational stability.
Body wall
Some sea slugs are very brightly colored. This serves either as a warning, when they are poisonous or contain stinging cells, or to camouflage them on the brightly colored hydroids, sponges, and seaweeds on which many of the species are found.
Lateral outgrowths on the body of nudibranchs are called cerata. These contain an outpocketing of digestive glands called the diverticula.
Sensory organs and nervous system
The sensory organs of gastropods include olfactory organs, eyes, statocysts and mechanoreceptors. Gastropods have no hearing.
In terrestrial gastropods (land snails and slugs), the olfactory organs, located on the tips of the four tentacles, are the most important sensory organ. The chemosensory organs of opisthobranch marine gastropods are called rhinophores.
The majority of gastropods have simple visual organs, eye spots either at the tip or base of the tentacles. However, "eyes" in gastropods range from simple ocelli that only distinguish light and dark, to more complex pit eyes, and even to lens eyes. In land snails and slugs, vision is not the most important sense, because they are mainly nocturnal animals.
The nervous system of gastropods includes the peripheral nervous system and the central nervous system. The central nervous system consists of ganglia connected by nerve cells. It includes paired ganglia: the cerebral ganglia, pedal ganglia, osphradial ganglia, pleural ganglia, parietal ganglia and the visceral ganglia. There are sometimes also buccal ganglia.
Digestive system
The radula of a gastropod is usually adapted to the food that a species eats. The simplest gastropods are the limpets and abalone, herbivores that use their hard radula to rasp at seaweeds on rocks.
Many marine gastropods are burrowers, and have a siphon that extends out from the mantle edge. Sometimes the shell has a siphonal canal to accommodate this structure. A siphon enables the animal to draw water into their mantle cavity and over the gill. They use the siphon primarily to "taste" the water to detect prey from a distance. Gastropods with siphons tend to be either predators or scavengers.
Respiratory system
Almost all marine gastropods breathe with a gill, but many freshwater species, and the majority of terrestrial species, have a pallial lung. The respiratory protein in almost all gastropods is hemocyanin, but one freshwater pulmonate family, the Planorbidae, have hemoglobin as the respiratory protein.
In one large group of sea slugs, the gills are arranged as a rosette of feathery plumes on their backs, which gives rise to their other name, nudibranchs. Some nudibranchs have smooth or warty backs with no visible gill mechanism, such that respiration may likely take place directly through the skin.
Circulatory system
Gastropods have open circulatory system and the transport fluid is hemolymph. Hemocyanin is present in the hemolymph as the respiratory pigment.
Excretory system
The primary organs of excretion in gastropods are nephridia, which produce either ammonia or uric acid as a waste product. The nephridium also plays an important role in maintaining water balance in freshwater and terrestrial species. Additional organs of excretion, at least in some species, include pericardial glands in the body cavity, and digestive glands opening into the stomach.
Reproductive system
Courtship is a part of mating behavior in some gastropods, including some of the Helicidae. Again, in some land snails, an unusual feature of the reproductive system of gastropods is the presence and utilization of love darts.
In many marine gastropods other than the opisthobranchs, there are separate sexes (dioecious/gonochoric); most land gastropods, however, are hermaphrodites.
Life cycle
Courtship is a part of the behavior of mating gastropods with some pulmonate families of land snails creating and utilizing love darts, the throwing of which have been identified as a form of sexual selection.
The main aspects of the life cycle of gastropods include:
Egg laying and the eggs of gastropods
The embryonic development of gastropods
The larvae or larval stadium: some gastropods may be trochophore and/or veliger
Estivation and hibernation (each of these are present in some gastropods only)
The growth of gastropods
Courtship and mating in gastropods: fertilization is internal or external according to the species. External fertilization is common in marine gastropods.
Feeding behavior
The diet of gastropods differs according to the group considered. Marine gastropods include some that are herbivores, detritus feeders, predatory carnivores, scavengers, parasites, and also a few ciliary feeders, in which the radula is reduced or absent. Land-dwelling species can chew up leaves, bark, fruit, fungi, and decomposing animals, while marine species can scrape algae off the rocks on the seafloor. Certain species such as the Archaeogastropoda maintain horizontal rows of slender marginal teeth. In some species that have evolved into endoparasites, such as the eulimid Thyonicola doglieli, many of the standard gastropod features are strongly reduced or absent.
A few sea slugs are herbivores and some are carnivores. The carnivorous habit is due to specialisation. Many gastropods have distinct dietary preferences and regularly occur in close association with their food species.
Some predatory carnivorous gastropods include: cone shells, Testacella, Daudebardia, turrids, ghost slugs and others.
Terrestrial gastropods
Studies based on direct observations, fecal and gut analyses, as well as food-choice experiments, have revealed that snails and slugs consume a wide variety of food resources. Their diet spans from living plants at various developmental stages such as pollen, seeds, seedlings, and wood, to decaying plant material like leaf litter. Additionally, they feed on fungi, lichens, algae, soil, and even other animals, both living and dead, including their feces. Given this diverse diet, terrestrial gastropods can be classified as herbivores, omnivores, carnivores, and detritivores. However, the majority are microbivores, primarily consuming microbes associated with decaying organic material. Despite their ecological importance, there is a notable lack of research exploring the specific roles that terrestrial gastropods play within soil food webs.
Fungivory
Many terrestrial gastropod mollusks are known to consume fungi, a behavior observed in various species of snails and slugs across distinct families. Notable examples of fungivore slugs include members of the family Philomycidae, which feed on slime molds (myxomycetes), and the Ariolimacidae, which primarily consume mushrooms (basidiomycetes). Snail families that contain fungivore species include Clausiliidae, Macrocyclidae, and Polygyridae.
Mushroom-producing fungi used as a food source by snails and slugs include species from several genera. Some examples are milk-caps (Lactarius spp.), the oyster mushroom (Pleurotus ostreatus), and the penny bun. Additionally, slugs feed on fungi from other genera, such as Agaricus, Pleurocybella, and Russula. Snails have also been reported to feed on penny buns as well as Coprinellus, Aleurodiscus, Armillaria, Grifola, Marasmiellus, Mycena, Pholiota, and Ramaria. As for slime molds, commonly consumed species include Stemonitis axifera and Symphytocarpus flaccidus.
Feeding behaviors in slugs exhibit considerable variation. Some species display selectivity, consuming specific parts or developmental stages of fungi. For instance, certain slugs may target fungi only at particular stages of maturity, such as immature fruiting bodies or spore-producing structures. Conversely, other species show little to no selectivity, consuming entire mushrooms regardless of developmental stage. This variability stresses the diverse dietary adaptations among slug species and their ecological roles in fungal consumption. Moreover, by consuming fungi, snails and slugs can also indirectly help in their dispersal by carrying along some of their spores or the fungi themselves.
Genetics
Gastropods exhibit a considerable degree of variation in mitochondrial gene organization when compared to other animals. Main events of gene rearrangement occurred at the origin of Patellogastropoda and Heterobranchia, whereas fewer changes occurred between the ancestors of Vetigastropoda (only tRNAs D, C and N) and Caenogastropoda (a large single inversion, and translocations of the tRNAs D and N). Within Heterobranchia, gene order seems relatively conserved, and gene rearrangements are mostly related to transposition of tRNA genes.
Geological history and evolution
The first gastropods were exclusively marine, with the earliest known representatives appearing in the Late Cambrian (e.g., Chippewaella, Strepsodiscus). However, their only definitive gastropod feature is a coiled shell, which raises the possibility that they may belong to the stem lineage of gastropods, or might not be gastropods at all. Early Cambrian species such as Helcionella, Barskovia, and Scenella are no longer considered gastropods, and the small coiled Aldanella from the same period is probably not even a mollusk.
It is not until the Ordovician that true crown-group gastropods appear. By this time, gastropods had diversified into a variety of forms and inhabited a range of aquatic environments. Fossil gastropods from the early Paleozoic are often poorly preserved, making identification difficult. However, the Silurian genus Poleumita contains at least 15 identified species. Overall, gastropods were less common in the Paleozoic than bivalves.
Most Paleozoic gastropods belong to primitive groups, some of which still exist today. By the Carboniferous period, many gastropod shell shapes found in fossils resemble those of modern species, though most of these early forms are not directly related to living gastropods. It was during the Mesozoic era that the ancestors of many extant gastropods evolved. One of the earliest known terrestrial gastropods is Anthracopupa (or Maturipupa), found in the Carboniferous Coal Measures of Europe. However, land snails and their relatives were rare before the Cretaceous period.
In Mesozoic rocks, gastropods become more common in the fossil record, with well-preserved shells. Fossils are found in ancient beds from both freshwater and marine environments. Notable examples include the Purbeck Marble of the Jurassic and the Sussex Marble of the early Cretaceous, both from southern England. These limestones contain abundant remains of the pond snail Viviparus. Cenozoic rocks yield vast numbers of gastropod fossils, many of which are closely related to modern species. The diversity of gastropods increased significantly at the start of this era, alongside that of bivalves.
Certain trail-like markings preserved in ancient sedimentary rocks are thought to have been made by gastropods crawling over the soft mud and sand. Although these trace fossils are of debatable origin, some of them do resemble the trails made by living gastropods today.
Gastropod fossils may sometimes be confused with ammonites or other shelled cephalopods. An example of this is Bellerophon from the limestones of the Carboniferous period in Europe, the shell of which is planispirally coiled and can be mistaken for the shell of a cephalopod.
Gastropods also provide important evidence of faunal changes during the Pleistocene epoch, reflecting the impacts of advancing and retreating ice sheets.
Phylogeny
A cladogram showing the phylogenetic relationships of Gastropoda with example species:
Cocculiniformia, Neomphalina and Lower Heterobranchia are not included in the above cladogram.
Taxonomy
Current classification
The present backbone classification of gastropods relies on the results of phylogenomic analyses. Consensus has not yet been reached concerning the relationships at the very base of the gastropod tree of life, but otherwise the major groups are known with confidence.
Gastropoda
Adenogonogastropoda (Angiogastropoda)
Apogastropoda
Caenogastropoda
Heterobranchia
Neritimorpha
Patellogastropoda
Vetigastropoda (including Neomphaliones)
History
Since Darwin, biological taxonomy has attempted to reflect the phylogeny of organisms, i.e., the tree of life. The classifications used in taxonomy attempt to represent the precise interrelatedness of the various taxa. However, the taxonomy of the Gastropoda is constantly being revised and so the versions shown in various texts can differ in major ways.
In the older classification of the gastropods, there were four subclasses:
Opisthobranchia (gills to the right and behind the heart).
Gymnomorpha (no shell)
Prosobranchia (gills in front of the heart).
Pulmonata (with a lung instead of gills)
The taxonomy of the Gastropoda is still under revision, and more and more of the old taxonomy is being abandoned, as the results of DNA studies slowly become clearer. Nevertheless, a few of the older terms such as "opisthobranch" and "prosobranch" are still sometimes used in a descriptive way.
New insights based on DNA sequencing of gastropods have produced some revolutionary new taxonomic conclusions. In the case of the Gastropoda, the taxonomy is now gradually being rewritten to embody strictly monophyletic groups (only one lineage of gastropods in each group). Integrating new findings into a working taxonomy remains challenging. Consistent ranks within the taxonomy at the level of subclass, superorder, order, and suborder have already been abandoned as unworkable. Ongoing revisions of the higher taxonomic levels are expected in the near future.
Convergent evolution, which appears to exist at especially high frequency in gastropods, may account for the observed differences between the older phylogenies, which were based on morphological data, and more recent gene-sequencing studies.
In 2004, Brian Simison and David R. Lindberg showed possible diphyletic origins of the Gastropoda based on mitochondrial gene order and amino acid sequence analyses of complete genes.
In 2005, Philippe Bouchet and Jean-Pierre Rocroi made sweeping changes in the systematics, resulting in the Bouchet & Rocroi taxonomy, which is a step closer to the evolutionary history of the phylum. The Bouchet & Rocroi classification system is based partly on the older systems of classification, and partly on new cladistic research. In the past, the taxonomy of gastropods was largely based on phenetic morphological characters of the taxa. The recent advances are more based on molecular characters from DNA and RNA research. This has made the taxonomical ranks and their hierarchy controversial.
In 2017, Bouchet, Rocroi, and other collaborators published a significantly updated version of the 2005 taxonomy. In the Bouchet et al. taxonomy, the authors used unranked clades for taxa above the rank of superfamily (replacing the ranks suborder, order, superorder and subclass), while using the traditional Linnaean approach for all taxa below the rank of superfamily. Whenever monophyly has not been tested, or is known to be paraphyletic or polyphyletic, the term "group" or "informal group" has been used. The classification of families into subfamilies is often not well resolved.
Fixed ranks like family, genus, and species, however, remain useful for practical classification and are still used in the World Register of Marine Species (WoRMS). Many researchers also continue to use traditional ranks because they are entrenched in the literature and familiar to specialists and non-specialists alike.
Ecology and conservation
Many gastropod species face threats from habitat destruction, pollution, and climate change. Some species are endangered or have become extinct due to these factors. Conservation efforts often focus on protecting their habitats, especially in freshwater and terrestrial ecosystems.
Predators
Gastropods are prey to a wide range of organisms depending on the environment. In marine habitats, gastropods are preyed upon by fish, marine birds, marine mammals, crustaceans, and other mollusks such as cephalopods. In terrestrial environments, gastropod predators include insects, arachnids (spiders, harvestmen), birds, and mammals, among others.
References
Sources
This article incorporates CC-BY-2.0 text from the following source: Cunha, R. L.; Grande, C.; Zardoya, R. (2009). "Neogastropod phylogenetic relationships based on entire mitochondrial genomes". BMC Evolutionary Biology. 9: 210. doi:10.1186/1471-2148-9-210. PMC 2741453. PMID 19698157.
Abbott, R. T. (1989): Compendium of Landshells. A color guide to more than 2,000 of the World's Terrestrial Shells. 240 S., American Malacologists. Melbourne, Fl, Burlington, Ma.
Abbott, R. T. & Dance, S. P. (1998): Compendium of Seashells. A full-color guide to more than 4,200 of the world's marine shells. 413 S., Odyssey Publishing. El Cajon, Calif.
Parkinson, B., Hemmen, J. & Groh, K. (1987): Tropical Landshells of the World. 279 S., Verlag Christa Hemmen. Wiesbaden.
Ponder, W. F. & Lindberg, D. R. (1997): Towards a phylogeny of gastropod molluscs: an analysis using morphological characters. Zoological Journal of the Linnean Society, 119 83–265.
Robin, A. (2008): Encyclopedia of Marine Gastropods. 480 S., Verlag ConchBooks. Hackenheim.
External links
Gastropod reproductive behavior
2004 Linnean taxonomy of gastropods
– Article about social learning also in gastropods.
Gastropod photo gallery, mostly fossils, a few modern shells
A video of a crawling Garden Snail (Cornu aspersum), YouTube
Grove, S.J. (2018). A Guide to the Seashells and other Marine Molluscs of Tasmania: Molluscs of Tasmania with images
Mollusc classes
Asymmetry
Articles containing video clips
Extant Cambrian first appearances
Taxa named by Georges Cuvier | Gastropoda | [
"Physics"
] | 5,670 | [
"Symmetry",
"Asymmetry"
] |
179,260 | https://en.wikipedia.org/wiki/No-hair%20theorem | The no-hair theorem (strictly speaking, a conjecture rather than a proven theorem) states that all stationary black hole solutions of the Einstein–Maxwell equations of gravitation and electromagnetism in general relativity can be completely characterized by only three independent externally observable classical parameters: mass, angular momentum, and electric charge. Other characteristics (such as geometry and magnetic moment) are uniquely determined by these three parameters, and all other information (for which "hair" is a metaphor) about the matter that formed a black hole or is falling into it "disappears" behind the black-hole event horizon and is therefore permanently inaccessible to external observers after the black hole "settles down" (by emitting gravitational and electromagnetic waves). Physicist John Archibald Wheeler expressed this idea with the phrase "black holes have no hair", which was the origin of the name.
In a later interview, Wheeler said that Jacob Bekenstein coined this phrase.
As Wheeler recalled: Richard Feynman objected to the phrase that seemed to me to best symbolize the finding of one of the graduate students: graduate student Jacob Bekenstein had shown that a black hole reveals nothing outside it of what went in, in the way of spinning electric particles. It might show electric charge, yes; mass, yes; but no other features or as he put it, "A black hole has no hair". Richard Feynman thought that was an obscene phrase and he didn't want to use it. But that is a phrase now often used to state this feature of black holes, that they don't indicate any other properties other than a charge and angular momentum and mass.
The first version of the no-hair theorem for the simplified case of the uniqueness of the Schwarzschild metric was shown by Werner Israel in 1967. The result was quickly generalized to the cases of charged or spinning black holes. There is still no rigorous mathematical proof of a general no-hair theorem, and mathematicians refer to it as the no-hair conjecture. Even in the case of gravity alone (i.e., zero electric fields), the conjecture has only been partially resolved by results of Stephen Hawking, Brandon Carter, and David C. Robinson, under the additional hypothesis of non-degenerate event horizons and the technical, restrictive and difficult-to-justify assumption of real analyticity of the space-time continuum.
Example
Suppose two black holes have the same masses, electrical charges, and angular momenta, but the first black hole was made by collapsing ordinary matter whereas the second was made out of antimatter; the conjecture states that they will nevertheless be completely indistinguishable to an observer outside the event horizon. None of the special particle physics pseudo-charges (i.e., the global charges baryonic number, leptonic number, etc., all of which would be different for the originating masses of matter that created the black holes) are conserved in the black hole, or if they are conserved somehow then their values would be unobservable from the outside.
Changing the reference frame
Every isolated unstable black hole decays rapidly to a stable black hole; and (excepting quantum fluctuations) stable black holes can be completely described (in a Cartesian coordinate system) at any moment in time by these eleven numbers:
mass–energy ,
electric charge ,
position (three components),
linear momentum (three components),
angular momentum (three components).
These numbers represent the conserved attributes of an object which can be determined from a distance by examining its gravitational and electromagnetic fields. All other variations in the black hole will either escape to infinity or be swallowed up by the black hole.
By changing the reference frame one can set the linear momentum and position to zero and orient the spin angular momentum along the positive z axis. This eliminates eight of the eleven numbers, leaving three which are independent of the reference frame: mass, angular momentum magnitude, and electric charge. Thus any black hole that has been isolated for a significant period of time can be described by the Kerr–Newman metric in an appropriately chosen reference frame.
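As a purely illustrative bookkeeping sketch of this counting (the symbols below are our own notation, not from the original text), one can write

\[
\underbrace{1}_{M} + \underbrace{1}_{Q} + \underbrace{3}_{\vec{x}} + \underbrace{3}_{\vec{p}} + \underbrace{3}_{\vec{J}} = 11,
\qquad
11 \;-\; \underbrace{3}_{\vec{x}\,\to\,0} \;-\; \underbrace{3}_{\vec{p}\,\to\,0} \;-\; \underbrace{2}_{\hat{J}\,\to\,\hat{z}} \;=\; 3,
\]

leaving the three frame-independent parameters \( (M,\, |\vec{J}|,\, Q) \) of the Kerr–Newman family.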
Extensions
The no-hair theorem was originally formulated for black holes within the context of a four-dimensional spacetime, obeying the Einstein field equation of general relativity with zero cosmological constant, in the presence of electromagnetic fields, or optionally other fields such as scalar fields and massive vector fields (Proca fields, etc.).
It has since been extended to include the case where the cosmological constant is positive (which recent observations are tending to support).
Magnetic charge, if detected as predicted by some theories, would form the fourth parameter possessed by a classical black hole.
Counterexamples
Counterexamples in which the theorem fails are known in spacetime dimensions higher than four; in the presence of non-abelian Yang–Mills fields, non-abelian Proca fields, some non-minimally coupled scalar fields, or skyrmions; or in some theories of gravity other than Einstein's general relativity. However, these exceptions are often unstable solutions and/or do not lead to conserved quantum numbers so that "The 'spirit' of the no-hair conjecture, however, seems to be maintained". It has been proposed that "hairy" black holes may be considered to be bound states of hairless black holes and solitons.
In 2004, the exact analytical solution of a (3+1)-dimensional spherically symmetric black hole with minimally coupled self-interacting scalar field was derived. This showed that, apart from mass, electrical charge and angular momentum, black holes can carry a finite scalar charge which might be a result of interaction with cosmological scalar fields such as the inflaton. The solution is stable and does not possess any unphysical properties; however, the existence of a scalar field with the desired properties is only speculative.
Observational results
The results from the first observation of gravitational waves in 2015 provide some experimental evidence consistent with the uniqueness predicted by the no-hair theorem. This observation is consistent with Stephen Hawking's theoretical work on black holes in the 1970s.
Soft hair
A study by Sasha Haco, Stephen Hawking, Malcolm Perry and Andrew Strominger postulates that black holes might contain "soft hair", giving the black hole more degrees of freedom than previously thought. This hair exists at a very low-energy state, which is why it did not show up in the earlier calculations underlying the no-hair theorem. This was the subject of Hawking's final paper, which was published posthumously.
See also
Black hole information paradox
Event Horizon Telescope
References
External links
, Stephen Hawking's purported solution to the black hole unitarity paradox, first reported in July 2004.
Black holes
Theorems in general relativity | No-hair theorem | [
"Physics",
"Astronomy",
"Mathematics"
] | 1,374 | [
"Physical phenomena",
"Black holes",
"Equations of physics",
"Physical quantities",
"Theorems in general relativity",
"Unsolved problems in physics",
"Astrophysics",
"Theorems in mathematical physics",
"Density",
"Stellar phenomena",
"Astronomical objects",
"Physics theorems"
] |
179,340 | https://en.wikipedia.org/wiki/List%20of%20deaths%20from%20drug%20overdose%20and%20intoxication | Drug overdose and intoxication are significant causes of accidental death and can also be used as a form of suicide. Death can occur from overdosing on a single or multiple drugs, or from combined drug intoxication (CDI) due to poly drug use. Poly drug use often carries more risk than use of a single drug, due to an increase in side effects, and drug synergy. For example, the chance of death from overdosing on opiates is greatly increased when they are consumed in conjunction with alcohol. While they are two distinct phenomena, deaths from CDI are often misreported as overdoses. Drug overdoses and intoxication can also cause indirect deaths. For example, while marijuana does not cause fatal overdoses, being intoxicated by it can increase the chance of fatal traffic collisions.
Drug use and overdoses increased significantly in the 1800s due to the commercialization and availability of certain drugs. For example, while opium and coca had been used for centuries, their active ingredients, morphine and the cocaine alkaloid, were not isolated until 1803 and 1855 respectively. Cocaine and various opiates were subsequently mass-produced and sold openly and legally in the Western world, resulting in widespread misuse and addiction. Drug use and addiction also increased significantly following the invention of the hypodermic syringe in 1853, with overdose being a leading cause of death among intravenous drug users.
Efforts to prohibit various drugs began to be enacted in the early 20th century, though the effectiveness of such policies is debated. Deaths from drug overdoses are increasing. Between 2000 and 2014, fatal overdoses rose 137% in the United States, causing nearly half a million deaths in that period, and have also been continually increasing in Australia, Scotland, England, and Wales.
While prohibited drugs are generally viewed as being the most dangerous, the misuse of prescription drugs is linked to more deaths in several countries. Cocaine and heroin combined caused fewer deaths than prescription drugs in the United Kingdom in 2013, and fewer deaths than prescription opiates alone in the United States in 2008. , benzodiazepines were most likely to cause fatal overdose in Australia, with diazepam (Valium) being the drug most responsible. While fatal overdoses are highly associated with drugs such as opiates, cocaine and alcohol, deaths from other drugs such as caffeine are extremely rare.
This alphabetical list contains 634 people whose deaths can be reliably sourced to be the result of drug overdose or acute drug intoxication. Where sources indicate drug overdose or intoxication was only suspected to be the cause of death, this will be specified in the 'notes' column. Where sources are able to indicate, deaths are specified as 'suicide', 'accidental', 'undetermined', or otherwise in the 'cause' column. Where sources do not explicitly state intent, they will be listed in this column as 'unknown'. Deaths from accidents or misadventure caused by drug overdoses or intoxication are also included on this list. Deaths from long-term effects of drugs, such as tobacco-related cancers and cirrhosis from alcohol, are not included, nor are deaths from lethal injection or legal euthanasia.
Deaths
See also
List of deaths through alcohol
Lists of people by cause of death
List of deaths from legal euthanasia and assisted suicide
List of people executed by lethal injection
Opioid epidemic
United States drug overdose death rates and totals over time
References
Citations
Book sources
list
Drug-related lists
Drug overdose
Lists of people by cause of death | List of deaths from drug overdose and intoxication | [
"Chemistry"
] | 732 | [
"Drug-related lists"
] |
179,400 | https://en.wikipedia.org/wiki/Urinary%20incontinence | Urinary incontinence (UI), also known as involuntary urination, is any uncontrolled leakage of urine. It is a common and distressing problem, which may have a large impact on quality of life. Urinary incontinence is common in older women and has been identified as an important issue in geriatric health care. The term enuresis is often used to refer to urinary incontinence primarily in children, such as nocturnal enuresis (bed wetting). UI is an example of a stigmatized medical condition, which creates barriers to successful management and makes the problem worse. People may be too embarrassed to seek medical help, and attempt to self-manage the symptom in secrecy from others.
Pelvic surgery, pregnancy, childbirth, attention deficit disorder (ADHD), and menopause are major risk factors. Urinary incontinence is often a result of an underlying medical condition but is under-reported to medical practitioners. There are four main types of incontinence:
Urge incontinence due to an overactive bladder
Stress incontinence due to "a poorly functioning urethral sphincter muscle (intrinsic sphincter deficiency) or to hypermobility of the bladder neck or urethra"
Overflow incontinence due to either poor bladder contraction or blockage of the urethra
Mixed incontinence involving features of different other types
Treatments include behavioral therapy, pelvic floor muscle training, bladder training, medication, surgery, and electrical stimulation. Treatments that incorporate behavioral therapy are more likely to improve or cure stress, urge, and mixed incontinence, whereas, there is limited evidence to support the benefit of hormones and periurethral bulking agents. The complications and long-term safety of the treatments is variable.
Causes
Urinary incontinence can result from both urologic and non-urologic causes. Urologic causes can be classified as either bladder dysfunction or urethral sphincter incompetence and may include detrusor overactivity, poor bladder compliance, urethral hypermobility, or intrinsic sphincter deficiency. Non-urologic causes may include infection, medication or drugs, psychological factors, polyuria, hydrocephalus, stool impaction, and restricted mobility. The causes leading to urinary incontinence are usually specific to each sex, however, some causes are common to both men and women.
Women
The most common types of urinary incontinence in women are stress urinary incontinence and urge urinary incontinence. Women that have symptoms of both types are said to have "mixed" urinary incontinence. After menopause, estrogen production decreases and, in some women, urethral tissue will demonstrate atrophy, becoming weaker and thinner, possibly playing a role in the development of urinary incontinence.
Stress urinary incontinence in women is most commonly caused by loss of support of the urethra, which is usually a consequence of damage to pelvic support structures as a result of pregnancy, childbirth, obesity, age, among others. About 33% of all women experience urinary incontinence after giving birth, and women who deliver vaginally are about twice as likely to have urinary incontinence as women who give birth via a Caesarean section. Stress incontinence is characterized by leaking of small amounts of urine with activities that increase abdominal pressure such as coughing, sneezing, laughing and lifting. This happens when the urethral sphincter cannot close completely due to the damage in the sphincter itself, or the surrounding tissue. Additionally, frequent exercise in high-impact activities can cause athletic incontinence to develop. Urge urinary incontinence, is caused by uninhibited contractions of the detrusor muscle, a condition known as overactive bladder syndrome. This type of urinary incontinence is more commonly seen in women of older age. It is characterized by leaking of large amounts of urine in association with insufficient warning to get to the bathroom in time.
Men
Urge incontinence is the most common type of incontinence in men. As in women, urine leakage follows a very intense urge to urinate, not allowing enough time to reach the bathroom, a condition called overactive bladder syndrome. In men, the condition is commonly associated with benign prostatic hyperplasia (an enlarged prostate), which causes bladder outlet obstruction; this in turn leads to dysfunction of the detrusor muscle (the muscle of the bladder), eventually causing overactive bladder syndrome and the associated incontinence.
Stress urinary incontinence is the other common type of incontinence in men, and it most commonly happens after prostate surgery. Prostatectomy, transurethral resection of the prostate, prostate brachytherapy, and radiotherapy can all damage the urethral sphincter and surrounding tissue, causing it to be incompetent. An incompetent urethral sphincter cannot prevent urine from leaking out of the urinary bladder during activities that increase the intraabdominal pressure, such as coughing, sneezing, or laughing. Continence usually improves within 6 to 12 months after prostate surgery without any specific interventions, and only 5 to 10% of people report persistent symptoms.
Both
Age is a risk factor that increases both the severity and prevalence of UI
Polyuria (excessive urine production), the most frequent causes of which are uncontrolled diabetes mellitus, primary polydipsia (excessive fluid drinking), central diabetes insipidus and nephrogenic diabetes insipidus. Polyuria generally causes urinary urgency and frequency, but does not necessarily lead to incontinence.
Neurogenic disorders like multiple sclerosis, spina bifida, Parkinson's disease, strokes and spinal cord injury can all interfere with nerve function of the bladder. This can lead to neurogenic bladder dysfunction
Overactive bladder syndrome. However, the etiology behind this is usually different between men and women, as mentioned above.
Other suggested risk factors include smoking, caffeine intake and depression
Mechanism
Adults
The body stores urine — water and wastes removed by the kidneys — in the urinary bladder, a balloon-like organ. The bladder connects to the urethra, the tube through which urine leaves the body.
Continence and micturition involve a balance between urethral closure and detrusor muscle activity (the muscle of the bladder). During urination, detrusor muscles in the wall of the bladder contract, forcing urine out of the bladder and into the urethra. At the same time, sphincter muscles surrounding the urethra relax, letting urine pass out of the body. The urethral sphincter is the muscular ring that closes the outlet of the urinary bladder, preventing urine from passing outside the body. Urethral pressure normally exceeds bladder pressure, resulting in urine remaining in the bladder, and maintaining continence. The urethra is supported by pelvic floor muscles and tissue, allowing it to close firmly. Any damage to this balance between the detrusor muscle, urethral sphincter, supportive tissue and nerves can lead to some type of incontinence.
For example, stress urinary incontinence is usually a result of the incompetent closure of the urethral sphincter. This can be caused by damage to the sphincter itself, the muscles that support it, or nerves that supply it. In men, the damage usually happens after prostate surgery or radiation, and in women, it's usually caused by childbirth and pregnancy. The pressure inside the abdomen (from coughing and sneezing) is normally transmitted to both urethra and bladder equally, leaving the pressure difference unchanged, resulting in continence. When the sphincter is incompetent, this increase in pressure will push the urine against it, leading to incontinence.
Another example is urge incontinence. This incontinence is associated with sudden forceful contractions of the detrusor muscle (bladder muscle), leading to an intense feeling of urination, and incontinence if the person does not reach the bathroom on time. The syndrome is known as overactive bladder syndrome, and it's related to dysfunction of the detrusor muscle.
Children
Urination, or voiding, is a complex activity. The bladder is a balloon-like muscle that lies in the lowest part of the abdomen. The bladder stores urine and then releases it through the urethra, which is the canal that carries urine to the outside of the body. Controlling this activity involves nerves, muscles, the spinal cord and the brain.
The bladder is made of two types of muscles: the detrusor and the sphincter. The detrusor is a muscular sac that stores urine and squeezes to empty. Connected to the bottom or neck of the bladder, the sphincter is a circular group of muscles that automatically stays contracted to hold the urine in. It will automatically relax when the detrusor contracts to let the urine into the urethra. A third group of muscles below the bladder (pelvic floor muscles) can contract to keep urine back.
A baby's bladder fills to a set point, then automatically contracts and empties. As the child gets older, the nervous system develops. The child's brain begins to get messages from the filling bladder and begins to send messages to the bladder to keep it from automatically emptying until the child decides it is the time and place to void.
Failures in this control mechanism result in incontinence. Reasons for this failure range from the simple to the complex.
Diagnosis
The pattern of voiding and urine leakage is important as it suggests the type of incontinence. Other points include straining and discomfort, use of drugs, recent surgery, and illness.
The physical examination looks for signs of medical conditions causing incontinence, such as tumors that block the urinary tract, stool impaction, and poor reflexes or sensations, which may be evidence of a nerve-related cause.
Other tests include:
Stress test – the patient relaxes, then coughs vigorously as the doctor watches for loss of urine.
Urinalysis – urine is tested for evidence of infection, urinary stones, or other contributing causes.
Blood tests – blood is taken, sent to a laboratory, and examined for substances related to causes of incontinence.
Ultrasound – sound waves are used to visualize the kidneys and urinary bladder, assess the capacity of the bladder before voiding, and measure the amount of urine remaining after voiding. This helps determine whether there is a problem with bladder emptying.
Cystoscopy – a thin tube with a tiny camera is inserted in the urethra and used to see the inside of the urethra and bladder.
Urodynamics – various techniques measure pressure in the bladder and the flow of urine.
People are often asked to keep a diary for a day or more, up to a week, to record the pattern of voiding, noting times and the amounts of urine produced.
Research projects that assess the efficacy of anti-incontinence therapies often quantify the extent of urinary incontinence. The methods include the 1-h pad test, measuring leakage volume; using a voiding diary, counting the number of incontinence episodes (leakage episodes) per day; and assessing the strength of pelvic floor muscles, measuring the maximum vaginal squeeze pressure.
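The following is a minimal, purely illustrative Python sketch of how such diary- and pad-based measures could be tabulated; the record format, field names, and numbers are hypothetical and not taken from any cited study. It uses the common approximation that 1 g of pad weight gain corresponds to roughly 1 mL of leaked urine.

# Hypothetical sketch: quantifying incontinence from a 1-hour pad test and a voiding diary.
# All record formats and values below are illustrative only.

def pad_test_leakage_ml(dry_pad_g, used_pad_g):
    """Leakage volume from a 1-hour pad test; ~1 g of weight gain is about 1 mL of urine."""
    return max(used_pad_g - dry_pad_g, 0.0)

def leakage_episodes_per_day(diary_entries, days_recorded):
    """Mean number of leakage episodes per day recorded in a voiding diary."""
    episodes = sum(1 for entry in diary_entries if entry.get("leak", False))
    return episodes / days_recorded

# Example with a hypothetical 3-day diary
diary = [
    {"time": "08:10", "voided_ml": 250, "leak": False},
    {"time": "11:45", "voided_ml": 0, "leak": True},
    {"time": "15:30", "voided_ml": 180, "leak": True},
]
print(pad_test_leakage_ml(25.0, 33.5))      # 8.5 (mL leaked during the test)
print(leakage_episodes_per_day(diary, 3))   # about 0.67 episodes per day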
Main types
There are 4 main types of urinary incontinence:
Stress incontinence, also known as effort incontinence, is essentially due to incomplete closure of the urinary sphincter, due to problems in the sphincter itself or insufficient strength of the pelvic floor muscles supporting it. This type of incontinence is when urine leaks during activities that increase intra-abdominal pressure, such as coughing, sneezing or bearing down.
Urge incontinence is an involuntary loss of urine occurring while suddenly feeling the need or urge to urinate, usually secondary to overactive bladder syndrome.
Overflow incontinence is the incontinence that happens suddenly without feeling the urge to urinate and without necessarily doing any physical activities. It is also known as under-active bladder syndrome. This usually happens with chronic obstruction of the bladder outlet or with diseases damaging the nerves supplying the urinary bladder. The urine stretches the bladder without the person feeling the pressure, and eventually, it overwhelms the ability of the urethral sphincter to hold it back.
Mixed incontinence contains symptoms of multiple other types of incontinence. It is not uncommon in the elderly female population and can sometimes be complicated by urinary retention.
Other types
Functional incontinence occurs when a person recognizes the need to urinate but cannot make it to the bathroom. The loss of urine may be large. There are several causes of functional incontinence including confusion, dementia, poor eyesight, mobility or dexterity, unwillingness to use the toilet because of depression or anxiety or inebriation due to alcohol. Functional incontinence can also occur in certain circumstances where no biological or medical problem is present. For example, a person may recognize the need to urinate but may be in a situation where there is no toilet nearby or access to a toilet is restricted.
Structural incontinence: Rarely, structural problems can cause incontinence, usually diagnosed in childhood (for example, an ectopic ureter). Fistulas caused by obstetric and gynecologic trauma or injury are commonly known as obstetric fistulas and can lead to incontinence. These types of vaginal fistulas include, most commonly, vesicovaginal fistula and, more rarely, ureterovaginal fistula. These may be difficult to diagnose, and may require standard techniques along with a vaginogram or radiological viewing of the vaginal vault with instillation of contrast media.
Nocturnal enuresis is episodic UI while asleep. It is normal in young children.
Transient incontinence is temporary incontinence most often seen in pregnant women when it subsequently resolves after the birth of the child.
Giggle incontinence is an involuntary response to laughter. It usually affects children.
Double incontinence. There is also a related condition for defecation known as fecal incontinence. Due to involvement of the same muscle group (levator ani) in bladder and bowel continence, patients with urinary incontinence are more likely to have fecal incontinence in addition. This is sometimes termed "double incontinence".
Post-void dribbling is the phenomenon where urine remaining in the urethra after voiding the bladder slowly leaks out after urination.
Coital incontinence (CI) is urinary leakage that occurs during either penetration or orgasm and can occur with a sexual partner or with masturbation. It has been reported to occur in 10% to 24% of sexually active women with pelvic floor disorders.
Climacturia is urinary incontinence at the moment of orgasm. It can be a result of radical prostatectomy.
Screening
Yearly screening is recommended for women by the Women's Preventive Services Initiative (WPSI); people who test positive in the screening process should be referred for further testing to guide treatment of their condition. Screening questions should inquire about what symptoms the person has experienced, how severe the symptoms are, and whether the symptoms affect their daily life. , studies have not shown a change in outcomes with urinary incontinence screenings in women.
Management
Treatment options include conservative treatment, behavioral therapy, bladder retraining, pelvic floor therapy, collecting devices (for men), fixer-occluder devices for incontinence (in men), medications, and surgery. Both nonpharmacological and pharmacological treatments may be effective for treating UI in non-pregnant women. All treatments, except hormones and periurethral bulking agents, are more effective than no treatment in improving or curing UI symptoms or achieving patient satisfaction. For urinary incontinence in women, it is typical in clinical practice to begin with behavioral therapy, then move on to oral medication if behavioral therapy is ineffective. If both behavioral therapy and oral medication are ineffective, the patient may be given bladder botox or neuromodulation therapy.
Behavioral therapy, physical therapy and exercise
Behavioral therapy involves the use of both suppressive techniques (distraction, relaxation) and learning to avoid foods that may worsen urinary incontinence. This may involve avoiding or limiting consumption of caffeine and alcohol. Behavioral therapies, including bladder training, biofeedback, and pelvic floor muscle training, are most effective for improving urinary incontinence in women, with a low risk of adverse events. Behavioral therapy is not curative for urinary incontinence, but it can improve a person's quality of life. Behavioral therapy has benefits as both a monotherapy (behavioral therapy alone) and as an adjunct to medications (combining different therapies) for symptom reduction. Timed voiding and bladder training are techniques that use biofeedback. In timed voiding, the patient fills in a chart of voiding and leaking. From the patterns that appear in the chart, the patient can plan to empty his or her bladder before he or she would otherwise leak. Biofeedback and muscle conditioning, known as bladder training, can alter the bladder's schedule for storing and emptying urine. These techniques are effective for urge and overflow incontinence.
Avoiding heavy lifting and preventing constipation may help with uncontrollable urine leakage. Stopping smoking is also recommended as it is associated with improvements in urinary incontinence in men and women. Weight loss may also be helpful for people who are overweight to improve symptoms of incontinence.
Physical therapy can be effective for women in reducing urinary incontinence. Pelvic floor physical therapists work with patients to identify and treat underlying pelvic muscle dysfunction in order to reduce urinary incontinence. They may recommend exercises to strengthen the muscles, electrostimulation, or biofeedback treatments. Exercising the muscles of the pelvis, such as with Kegel exercises, is a first-line treatment for women with stress incontinence. Efforts to increase the time between urinating, known as bladder training, are recommended in those with urge incontinence. Both of these may be used in those with mixed incontinence.
Physical therapy, both by itself and in combination with anticholinergic drugs, was found to be more successful in reducing urinary incontinence in women than anticholinergics by themselves.
Small vaginal cones of increasing weight may be used to help with exercise. They seem to be better than no active treatment in women with stress urinary incontinence, and have similar effects to training of pelvic floor muscles or electrostimulation.
Biofeedback uses measuring devices to help the patient become aware of his or her body's functioning. By using electronic devices or diaries to track when the bladder and urethral muscles contract, the patient can gain control over these muscles. Biofeedback can be used with pelvic muscle exercises and electrical stimulation to relieve stress and urge incontinence. The evidence supporting the role for biofeedback devices in treating urinary incontinence is mixed. There is some very weak evidence that electrical stimulation that is low in frequency may be helpful in combination with other standard treatments for women with overactive bladder condition, however, the evidence supporting a role for biofeedback combined with pelvic floor muscle training is very weak and likely indicates that biofeedback-assistance is not helpful when included with conservative treatments for overactive bladder.
Preoperative pelvic floor muscle training in men undergoing radical prostatectomy was not effective in reducing urinary incontinence.
Alternative exercises have been studied for stress urinary incontinence in women. Evidence was insufficient to support the use of Paula method, abdominal muscle training, Pilates, Tai chi, breathing exercises, postural training, and generalized fitness.
Devices
Individuals who continue to experience urinary incontinence need to find a management solution that matches their individual situation. The use of mechanical devices has not been well studied in women, as of 2014.
Collecting systems (for men) – consists of a sheath worn over the penis funneling the urine into a urine bag worn on the leg. These products come in a variety of materials and sizes for individual fit. Studies show that urisheaths and urine bags are preferred over absorbent products – in particular when it comes to 'limitations to daily activities'. Solutions exist for all levels of incontinence. Advantages with collecting systems are that they are discreet, the skin stays dry all the time, and they are convenient to use both day and night. Disadvantages are that it is necessary to get measured to ensure proper fit, and in some countries, a prescription is needed.
Absorbent products (include shields, incontinence pads, undergarments, protective underwear, briefs, diapers, adult diapers and underpants) are the best-known product types to manage incontinence. They are widely available in pharmacies and supermarkets. The advantages of using these are that they barely need any fitting or introduction by a healthcare specialist. The disadvantages with absorbent products are that they can be bulky, leak, have odors and can cause skin breakdown due to the constant dampness.
Intermittent catheters are single-use catheters that are inserted into the bladder to empty it, and once the bladder is empty they are removed and discarded. Intermittent catheters are primarily used for urinary retention (inability to empty the bladder), but for some people they can be used to reduce or avoid incontinence. These are prescription-only medical devices.
Indwelling catheters (also known as foleys) are often used in hospital settings, or if the user is not able to handle any of the above solutions himself/herself (e.g. severe neurologic injury or neurodegenerative disease). These are also prescription-only medical devices. The indwelling catheter is typically connected to a urine bag that can be worn on the leg or hung on the side of the bed. Indwelling catheters need to be monitored and changed on a regular basis by a healthcare professional. The advantage of indwelling catheters is that because the urine is funneled away from the body, the skin remains dry. However, the disadvantage is that it is very common to incur urinary tract infections when using indwelling catheters. Bladder spasms and other problems can also occur with long-term use of indwelling catheters.
Penis clamp (or penis compression device), which is applied to compress the urethra to compensate for the malfunctioning of the natural urinary sphincter, preventing leakage from the bladder. This management solution is only suitable for light or moderate incontinence.
Vaginal pessaries for women are devices inserted into the vagina. This device provides support to the urethra which passes right in front of it, allowing it to close more firmly.
Medications
A number of medications exist to treat urinary incontinence, including fesoterodine, tolterodine and oxybutynin. These medications work by relaxing smooth muscle in the bladder. While some of these medications appear to have a small benefit, the risk of side effects is a concern. Medications are effective for about one in ten people, and all medications have similar efficacy.
Medications are not recommended for those with stress incontinence and are only recommended in those with urge incontinence who do not improve with bladder training. While medications can help treat urinary incontinence, studies have shown that behavioral therapy is the most effective first-line treatment.
Injectable bulking agents may be used to enhance urethral support, however, they are of unclear benefit.
Surgery
Women and men that have persistent incontinence despite optimal conservative therapy may be candidates for surgery. Surgery may be used to help stress or overflow incontinence. Common surgical techniques for stress incontinence include slings, tension-free vaginal tape, bladder suspension, artificial urinary sphincters, among others. It is not clear if antibiotics taken prophylactically after surgery are helpful at decreasing the risk of an infection after surgery.
The use of transvaginal mesh implants and bladder slings is controversial due to the risk of debilitating painful side effects such as vaginal erosion. In 2012 transvaginal mesh implants were classified as a high risk device by the US Food and Drug Administration. Urodynamic testing seems to confirm that surgical restoration of vault prolapse can cure motor urge incontinence.
Traditional suburethral sling operations are probably slightly better than open abdominal retropubic colposuspension and are probably slightly less effective than mid-urethral sling operations in reducing urinary incontinence in women, but it is still uncertain if any of the different types of traditional suburethral sling operations are better than others. Similarly, there is insufficient long term evidence to be certain about the effectiveness or safety of single-incision sling operations for urinary incontinence in women. Traditional suburethral slings may have a higher risk of surgical complications than minimally invasive slings but the risk of complications compared with other types of operation is still uncertain.
Laparoscopic colposuspension (keyhole surgery through the abdomen) with sutures is as effective as open colposuspension for curing incontinence in women up to 18 months after surgery, but it is unclear whether there is a lower risk of complications during or after surgery. There is probably a higher risk of complications with traditional suburethral slings than with open abdominal retropubic suspension.
The artificial urinary sphincter is an implantable device used to treat stress incontinence, mostly in men. The device is made of two or three parts: the pump, cuff, and balloon reservoir, connected to each other by specialized tubes. The cuff wraps around the urethra and closes it. When the person wants to urinate, he presses the pump (implanted in the scrotum) to deflate the cuff and allow the urine to pass. The cuff regains pressure within a few minutes to restore continence. The European Association of Urology considers the artificial urinary sphincter the gold standard in the surgical management of stress urinary incontinence in men after prostatectomy.
Epidemiology
Globally, up to 35% of the population over the age of 60 years is estimated to be incontinent.
In 2014, urinary leakage affected between 30% and 40% of people over 65 years of age living in their own homes or apartments in the U.S. Twenty-four percent of older adults in the U.S. have moderate or severe urinary incontinence that should be treated medically. People with dementia are three times more likely to have urinary incontinence compared to people of similar ages.
Bladder control problems have been found to be associated with higher incidence of many other health problems such as obesity and diabetes. Difficulty with bladder control results in higher rates of depression and limited activity levels.
Incontinence is expensive both to individuals, in the form of bladder control products, and to the health care system and nursing home industry. Injury related to incontinence is a leading cause of admission to assisted living and nursing care facilities. In 1997, more than 50% of nursing facility admissions were related to incontinence.
Women
Approximately 17% of non-pregnant women have urinary incontinence, with the most common types being stress, urgency, and mixed. Bladder symptoms affect women of all ages. However, bladder problems are most prevalent among older women. Women over the age of 60 years are twice as likely as men to experience incontinence; one in three women over the age of 60 years are estimated to have bladder control problems. One reason why women are more affected is the weakening of pelvic floor muscles by pregnancy.
Men
Men tend to experience incontinence less often than women, and the structure of the male urinary tract accounts for this difference. Stress incontinence is common after prostate cancer treatments.
While urinary incontinence affects older men more often than younger men, the onset of incontinence can happen at any age. Estimates around 2007 suggested that 17 percent of men over age 60, an estimated 600,000 men in the US, experienced urinary incontinence, with this percentage increasing with age.
Children
Incontinence happens less often after age 5: About 10 percent of 5-year-olds, 5 percent of 10-year-olds, and 1 percent of 17-year-olds experience episodes of incontinence. It is twice as common in girls as in boys.
History
The management of urinary incontinence with pads is mentioned in the earliest medical book known, the Ebers Papyrus (1500 BC).
Incontinence has historically been a taboo subject in Western culture. However, this situation changed some when Kimberly-Clark aggressively marketed adult diapers in the 1980s with actor June Allyson as spokeswoman. Allyson was initially reticent to participate, but her mother, who had incontinence, convinced her that it was her duty in light of her successful career. The product proved a success.
Law
The case Hiltibran et al v. Levy et al in the United States District Court for the Western District of Missouri resulted in that court issuing an order in 2011. That order requires incontinence briefs funded by Medicaid to be given by Missouri to adults who would be institutionalized without them.
Research
The effectiveness of different therapeutic approaches to treating urinary incontinence is not well studied for some medical conditions. For example, for people who experience urinary incontinence due to stroke, treatment approaches such as physical therapy, cognitive therapy, complementary medicine, and specialized interventions with experienced medical professionals are sometimes suggested, however it is not clear how effective these are at improving incontinence and there is no strong medical evidence to guide clinical practice.
See also
Diaper
Fecal incontinence
Stress incontinence
References
External links
Patient-centered information from the European Urological Association
Independent continence product advisor
Aging-associated diseases
Symptoms and signs: Urinary system | Urinary incontinence | [
"Biology"
] | 6,530 | [
"Senescence",
"Aging-associated diseases"
] |
179,404 | https://en.wikipedia.org/wiki/Fecal%20incontinence | Fecal incontinence (FI), or in some forms, encopresis, is a lack of control over defecation, leading to involuntary loss of bowel contents — including flatus (gas), liquid stool elements and mucus, or solid feces. FI is a sign or a symptom, not a diagnosis. Incontinence can result from different causes and might occur with either constipation or diarrhea. Continence is maintained by several interrelated factors, including the anal sampling mechanism, and incontinence usually results from a deficiency of multiple mechanisms. The most common causes are thought to be immediate or delayed damage from childbirth, complications from prior anorectal surgery (especially involving the anal sphincters or hemorrhoidal vascular cushions), altered bowel habits (e.g., caused by irritable bowel syndrome, Crohn's disease, ulcerative colitis, food intolerance, or constipation with overflow incontinence). Reported prevalence figures vary: an estimated 2.2% of community-dwelling adults are affected, while a prevalence of 8.39% was reported among non-institutionalized U.S. adults between 2005 and 2010, and figures approach 50% among institutionalized elders.
Fecal incontinence has three main consequences: local reactions of the perianal skin and urinary tract, including maceration (softening and whitening of the skin due to continuous moisture), urinary tract infections, or decubitus ulcers (pressure sores); a financial expense for individuals (due to the cost of medication and incontinence products, and loss of productivity), employers (days off), and medical insurers and society generally (health care costs, unemployment); and an associated decrease in quality of life. There is often reduced self-esteem, shame, humiliation, depression, a need to organize life around easy access to a toilet, and avoidance of enjoyable activities. FI is an example of a stigmatized medical condition, which creates barriers to successful management and makes the problem worse. People may be too embarrassed to seek medical help and attempt to self-manage the symptom in secrecy from others.
FI is one of the most psychologically and socially debilitating conditions in an otherwise healthy individual and is generally treatable. More than 50% of hospitalized seriously ill patients rated bladder or fecal incontinence as "worse than death". Management may be achieved through an individualized mix of dietary, pharmacologic, and surgical measures. Health care professionals are often poorly informed about treatment options, and may fail to recognize the effect of FI.
Signs and symptoms
FI affects virtually all aspects of peoples' lives, greatly diminishing physical and mental health, and affecting personal, social, and professional life. Emotional effects may include stress, fearfulness, anxiety, exhaustion, fear of public humiliation, feeling dirty, poor body image, reduced desire for sex, anger, humiliation, depression, isolation, secrecy, frustration, and embarrassment. Some patients cope by controlling their emotions or behavior. Physical symptoms such as skin soreness, pain and odor may also affect quality of life. Physical activity such as shopping or exercise is often affected. Travel may be affected, requiring careful planning. Working is also affected for most. Relationships, social activities and self-image likewise often suffer. Symptoms may worsen over time.
Causes
FI is a sign or a symptom, not a diagnosis, and represents an extensive list of causes. Usually, it is the result of a complex interplay of several coexisting factors, many of which may be simple to correct. Up to 80% of people may have more than one abnormality that is contributing. Deficits of individual functional components of the continence mechanism can be partially compensated for a certain period, until the compensating components themselves fail. For example, obstetric injury may precede onset by decades, but postmenopausal changes in the tissue strength reduce in turn the competence of the compensatory mechanisms. The most common factors in the development are thought to be obstetric injury and after-effects of anorectal surgery, especially those involving the anal sphincters and hemorrhoidal vascular cushions. The majority of incontinent persons over the age of 18 fall into one of several groups: those with structural anorectal abnormalities (sphincter trauma, sphincter degeneration, perianal fistula, rectal prolapse), neurological disorders (multiple sclerosis, spinal cord injury, spina bifida, stroke, etc.), constipation/fecal loading (presence of a large amount of feces in the rectum with stool of any consistency), cognitive and/or behavioral dysfunction (dementia, learning disabilities), diarrhea, inflammatory bowel diseases (e.g. ulcerative colitis, Crohn's disease), irritable bowel syndrome, disability related (people who are frail, acutely unwell, or have chronic/acute disabilities), and those cases which are idiopathic (of unknown cause). Diabetes mellitus is also known to be a cause, but the mechanism of this relationship is not well understood.
Congenital
Anorectal anomalies and spinal cord defects may be a cause in children. These are usually picked up and operated upon during early life, but continence is often imperfect thereafter.
Child birth
Vaginal delivery can damage the pelvic floor, resulting in fecal incontinence.
Traumatic
Fecal incontinence caused by trauma is uncommon. Rare causes of traumatic injury to the anal sphincters include military or traffic accidents complicated by pelvic fractures, spine injuries or perineal lacerations, insertion of foreign bodies in the rectum, and anal sexual abuse.
Some correlational research indicates that anal sex may contribute to the development of fecal incontinence, although the majority of people who receive anal sex report no issues with fecal incontinence. Associations between receptive anal sex and fecal incontinence are stronger for practices such as anal fisting. A 2024 review concluded that therapeutic exercises (e.g. Kegels) may be sufficient for the prevention and treatment of incontinence in this population.
Anal canal
The functioning of the anal canal can be damaged, traumatically or atraumatically. Fecal incontinence caused by trauma is uncommon. The resting tone of the anal canal is not the only important factor; both the length of the high-pressure zone and its radial translation of force are required for continence. This means that even with normal anal canal pressure, focal defects such as the keyhole deformity can be the cause of substantial symptoms. External anal sphincter (EAS) dysfunction is associated with impaired voluntary control, whereas internal anal sphincter (IAS) dysfunction is associated with impaired fine-tuning of fecal control. Damage to the nerve supply of the external anal sphincter on one side may not result in severe symptoms because there is substantial overlap in innervation by the nerves on the other side.
Lesions which mechanically interfere with, or prevent the complete closure of the anal canal can cause a liquid stool or mucous rectal discharge. Such lesions include piles (inflamed hemorrhoids), anal fissures, anal cancer, or fistulae. Obstetric injury may tear the anal sphincters, and some of these injuries may be occult (undetected). The risk of injury is greatest when labor has been especially difficult or prolonged, when forceps are used, with higher birth weights, or when a midline episiotomy is performed. Only when there is post-operative investigation of FI such as endoanal ultrasound is the injury discovered. FI is a much under-reported complication of surgery. The IAS is easily damaged with an anal retractor (especially the Park's anal retractor), leading to reduced resting pressure postoperatively. Since the hemorrhoidal vascular cushions contribute 15% of the resting anal tone, surgeries involving these structures may affect continence status.
Partial internal sphincterotomy, fistulotomy, anal stretch (Lord's operation), hemorrhoidectomy or transanal advancement flaps may all lead to FI postoperatively, with soiling being far more common than solid FI. The "keyhole deformity" refers to scarring within the anal canal and is another cause of mucus leakage and minor incontinence. This defect is also described as a groove in the anal canal wall and may occur after posterior midline fissurectomy or fistulotomy, or with lateral IAS defects.
Nontraumatic conditions causing anal sphincter weakness include scleroderma, damage to the pudendal nerves, and IAS degeneration of unknown cause. Radiation-induced FI may involve the anal canal as well as the rectum, when proctitis, anal fistula formation, and diminished function of internal and external sphincter occur. Irradiation may occur during radiotherapy, e.g. for prostate cancer.
Pelvic floor
Many people with FI have a generalized weakness of the pelvic floor, especially puborectalis. A weakened puborectalis leads to widening of the anorectal angle and impaired barrier to the stool in the rectum entering the anal canal, and this is associated with incontinence to solids. Abnormal descent of the pelvic floor can also be a sign of pelvic floor weakness. Abnormal descent manifests as descending perineum syndrome (>4 cm perineal descent). This syndrome initially gives constipation, and later FI. The pelvic floor is innervated by the pudendal nerve and the S3 and S4 branches of the pelvic plexus. With recurrent straining, e.g. during difficult labour or long-term constipation, then stretch injury can damage the nerves supplying levator ani. The pudendal nerve is especially vulnerable to irreversible damage, (stretch-induced pudendal neuropathy) which can occur with a 12% stretch. If the pelvic floor muscles lose their innervation, they cease to contract and their muscle fibres are in time replaced by fibrous tissue, which is associated with pelvic floor weakness and incontinence. Increased pudendal nerve terminal motor latency may indicate pelvic floor weakness. Any damage to the pudendal nerve occurring during childbirth may not become fully apparent until years later, for example at the onset of menopause. Pudendal neuropathy (nerve damage) is detectable in up to 70% of people with FI. The various types of prolapse of the posterior compartment (e.g. external rectal prolapse, mucosal prolapse and internal rectal intussusception & solitary rectal ulcer syndrome) may also cause coexisting obstructed defecation.
Rectum
The rectum needs to be of a sufficient volume to store stool until defecation. The rectal walls need to be "compliant" i.e. able to distend to an extent to accommodate stool. Rectal sensation is required to detect the presence, nature, and amount of rectal contents. The rectum must also be able to evacuate its contents fully. There must also be efficient coordination of rectal sensation and relaxation of the anal canal. If the sensory nerves are damaged, the detection of stool in the rectum is dulled or absent, and the person will not feel the need to defecate until too late. Rectal hyposensitivity may manifest as constipation, FI, or both. Rectal hyposensitivity was reported to be present in 10% of people with FI. Pudendal neuropathy is one cause of rectal hyposensitivity and may lead to fecal loading/impaction, megarectum and overflow FI. Normal evacuation of rectal contents is 90–100%. If there is incomplete evacuation during defecation, residual stool will be left in the rectum and threaten continence once defecation is finished. This is a feature of people with soiling secondary to obstructed defecation. Obstructed defecation is often due to anismus (paradoxical contraction or relaxation failure of the puborectalis). Whilst anismus is largely a functional disorder, organic pathologic lesions may mechanically interfere with rectal evacuation. Other causes of incomplete evacuation include non-emptying defects like a rectocele. Straining to defecate pushes stool into the rectocele, which acts like a diverticulum and causes stool sequestration. Once the voluntary attempt to defecate, albeit dysfunctional, is finished, the voluntary muscles relax, and residual rectal contents are then able to descend into the anal canal and cause leaking.
Central nervous system
Continence requires conscious and subconscious networking of information from and to the anorectum. Defects/brain damage may affect the central nervous system focally (e.g. stroke, tumor, spinal cord lesions, trauma, multiple sclerosis) or diffusely (e.g. dementia, multiple sclerosis, infection, Parkinson's disease or drug-induced). FI (and urinary incontinence) may also occur during epileptic seizures. Dural ectasia is an example of a spinal cord lesion that may affect continence.
Diarrhea
Liquid stool is more difficult to control than formed, solid stool. Hence, FI can be exacerbated by diarrhea. Some consider diarrhea to be the most common aggravating factor. Where diarrhea is caused by temporary problems such as mild infections or food reactions, incontinence tends to be short-lived. Chronic conditions, such as irritable bowel syndrome or Crohn's disease, can cause severe diarrhea lasting for weeks or months. Diseases, drugs, and indigestible dietary fats that interfere with intestinal absorption may cause steatorrhea (oily rectal discharge & fatty diarrhea) and degrees of FI. Respective examples include cystic fibrosis, orlistat, and olestra. Postcholecystectomy diarrhea is diarrhea that occurs following gall bladder removal, due to excess bile acid. Orlistat is an anti-obesity (weight loss) drug that blocks the absorption of fats. This may give side effects of FI, diarrhea, and steatorrhea.
Overflow incontinence
This may occur when there is a large mass of feces in the rectum (fecal loading), which may become hardened (fecal impaction). Liquid stool elements can pass around the obstruction, leading to incontinence. Megarectum (enlarged rectal volume) and rectal hyposensitivity are associated with overflow incontinence. Hospitalized patients and care home residents may develop FI via this mechanism, possibly a result of lack of mobility, reduced alertness, the constipating effect of medication, and/or dehydration. In overflow incontinence, the rectum is constantly distended because of the presence of retained feces in the rectum. Therefore, the recto-anal inhibitory reflex (RAIR) is persistently activated, meaning the internal anal sphincter relaxes, which is not under voluntary control.
Pathophysiology
The mechanisms and factors contributing to normal continence are multiple and interrelated. The puborectalis sling, forming the anorectal angle (see diagram), is responsible for the gross continence of solid stool. The IAS is an involuntary muscle, contributing about 55% of the resting anal pressure. Together with the hemorrhoidal vascular cushions, the IAS maintains continence of flatus and liquid during rest. The EAS is a voluntary muscle, that doubles the pressure in the anal canal during contraction, which is possible for a short time. The rectoanal inhibitory reflex (RAIR) is an involuntary IAS relaxation in response to rectal distension, allowing some rectal contents to descend into the anal canal where it is brought into contact with specialized sensory mucosa to detect consistency. The rectoanal excitatory reflex (RAER) is an initial, semi-voluntary contraction of the EAS and puborectalis which in turn prevents incontinence following the RAIR. Other factors include the specialized anti-peristaltic function of the last part of the sigmoid colon, which keeps the rectum empty most of the time, sensation in the lining of the rectum and the anal canal to detect when there is stool present, its consistency and quantity, and the presence of normal rectoanal reflexes and defecation cycle which completely evacuates stool from the rectum and anal canal. Problems affecting any of these mechanisms and factors may be involved in the cause.
Diagnosis
Identification of the exact causes usually begins with a thorough medical history, including detailed questioning about symptoms, bowel habits, diet, medication, and other medical problems. Digital rectal examination is performed to assess resting pressure and voluntary contraction (maximum squeeze) of the sphincter complex and puborectalis. Anal sphincter defects, rectal prolapse, and abnormal perineal descent may be detected. Anorectal physiology tests assess the functioning of the anorectal anatomy. Anorectal manometry records the pressure exerted by the anal sphincters and puborectalis during rest and contraction. The procedure is also able to assess the sensitivity of the anal canal and rectum. Anal electromyography tests for nerve damage, which is often associated with obstetric injury. Pudendal nerve terminal motor latency tests for damage to the pudendal motor nerves. Proctography, also known as defecography, shows how much stool the rectum can hold, how well the rectum holds it, and how well the rectum can evacuate the stool. It will also highlight defects in the structure of the rectum such as internal rectal intussusception. Dynamic pelvic MRI, also called MRI defecography is an alternative that is better for some problems but not as good for other problems. Proctosigmoidoscopy involves the insertion of an endoscope (a long, thin, flexible tube with a camera) into the anal canal, rectum and sigmoid colon. The procedure allows for visualization of the interior of the gut and may detect signs of disease or other problems that could be a cause, such as inflammation, tumors, or scar tissue. Endoanal ultrasound, which some consider the gold standard for detection of anal canal lesions, evaluates the structure of the anal sphincters and may detect occult sphincter tears that otherwise would go unseen.
Functional FI is common. The Rome process published diagnostic criteria for functional FI, which they defined as "recurrent uncontrolled passage of fecal material in an individual with a developmental age of at least four years". The diagnostic criteria are, one or more of the following factors present for the last 3 months: abnormal functioning of normally innervated and structurally intact muscles, minor abnormalities of sphincter structure/innervation (nerve supply), normal or disordered bowel habits, (i.e., fecal retention or diarrhea), and psychological causes. Furthermore, exclusion criteria are given. These are factors that all must be excluded for a diagnosis of functional FI, and are abnormal innervation caused by lesion(s) within the brain (e.g., dementia), spinal cord (at or below T12), or sacral nerve roots, or mixed lesions (e.g., multiple sclerosis), or as part of a generalized peripheral or autonomic neuropathy (e.g., due to diabetes), anal sphincter abnormalities associated with a multisystem disease (e.g., scleroderma), and structural or neurogenic abnormalities that are the major cause.
Definition
There is no globally accepted definition, but fecal incontinence is generally defined as the recurrent inability to voluntarily control the passage of bowel contents through the anal canal and expel it at a socially acceptable location and time, occurring in individuals over the age of four. "Social continence" has been given various precise definitions for the purposes of research; however, generally it refers to symptoms being controlled to an extent that is acceptable to the individual in question, with no significant effect on their life. There is no consensus about the best way to classify FI, and several methods are used.
Symptoms can be directly or indirectly related to the loss of bowel control. The direct (primary) symptom is a lack of control over bowel contents which tends to worsen without treatment. Indirect (secondary) symptoms, which are the result of leakage, include pruritus ani (an intense itching sensation from the anus), perianal dermatitis (irritation and inflammation of the skin around the anus), and urinary tract infections. Due to embarrassment, people may only mention secondary symptoms rather than acknowledge incontinence. Any major underlying cause will produce additional signs and symptoms, such as protrusion of mucosa in external rectal prolapse. Symptoms of fecal leakage (FL) are similar and may occur after defecation. There may be loss of small amounts of brown fluid and staining of the underwear.
Types
FI can be divided into those people who experience a defecation urge before leakage (urge incontinence), and those who experience no sensation before leakage (passive incontinence or soiling). Urge incontinence is characterized by a sudden need to defecate, with little time to reach a toilet. Urge and passive FI may be associated with weakness of the external anal sphincter (EAS) and internal anal sphincter (IAS) respectively. Urgency may also be associated with reduced rectal volume, reduced ability of the rectal walls to distend and accommodate stool, and increased rectal sensitivity.
There is a continuous spectrum of different clinical presentations from incontinence of flatus (gas), through incontinence of mucus or liquid stool, to solids. The term anal incontinence is often used to describe flatus incontinence (that is, involuntary loss of flatus). In other sources, the term anal incontinence is distinguished as involuntary loss of feces or flatus caused by loss of control of the anal sphincter, whereas fecal incontinence may be given the definition of involuntary loss of solid or liquid feces which may also be caused by enlarged skin tags, poor hygiene, hemorrhoids, rectal prolapse, and fistula in ano. It may occur together with incontinence of liquids or solids, or it may present in isolation. Flatus incontinence may be the first sign of FI. Once continence to flatus is lost, it is rarely restored. Anal incontinence may be as disabling as the other types. However, the term anal incontinence is also often used interchangeably as a synonym for FI generally, with a wider definition that includes uncontrolled passage of feces or gas.
Fecal leakage, fecal soiling and fecal seepage are minor degrees of FI, and describe incontinence of liquid stool, mucus, or very small amounts of solid stool. They cover a spectrum of increasing symptom severity (staining, soiling, seepage, and accidents). Rarely, minor FI in adults may be described as encopresis. Fecal leakage is a related topic to rectal discharge, but this term does not necessarily imply any degree of incontinence. Discharge generally refers to conditions where there is pus or increased mucus production, or anatomical lesions that prevent the anal canal from closing fully, whereas fecal leakage generally concerns disorders of IAS function and functional evacuation disorders which cause a solid fecal mass to be retained in the rectum. Solid stool incontinence may be called complete (or major) incontinence, and anything less as partial (or minor) incontinence (i.e. incontinence of flatus (gas), liquid stool and/or mucus).
In children over the age of four who have been toilet trained, a similar condition is generally termed encopresis (or soiling), which refers to the voluntary or involuntary loss of (usually soft or semi-liquid) stool. The term pseudoincontinence is used when there is FI in children who have anatomical defects (e.g. enlarged sigmoid colon or anal stenosis). Encopresis is a term that is usually applied when there are no such anatomical defects present. The ICD-10 classifies nonorganic encopresis under "behavioural and emotional disorders with onset usually occurring in childhood and adolescence" and organic causes of encopresis along with FI. FI can also be classified according to gender, since the cause in females may be different from males, for example it may develop following radical prostatectomy in males, whereas females may develop FI as an immediate or delayed consequence of damage whilst giving birth. Pelvic anatomy is also different according to gender, with a wider pelvic outlet in females.
Clinical measurement
Several severity scales exist. The Cleveland Clinic (Wexner) fecal incontinence score takes into account five parameters that are scored on a scale from zero (absent) to four (daily) frequency of incontinence to gas, liquid, solid, of need to wear pad, and of lifestyle changes. The Park's incontinence score uses four categories:
those continent for solid and liquid stool and also for flatus.
those continent for solid and liquid stool but incontinent for flatus (with or without urgency).
those continent for solid stool but incontinent for liquid stool or flatus.
those incontinent to formed stool (complete incontinence).
The fecal incontinence severity index is based on four types of leakage (gas, mucus, liquid stool, solid stool) and five frequencies (once to three times per month, once per week, twice per week, once per day, twice or more per day). Other severity scales include AMS, Pescatori, Williams score, Kirwan, Miller score, Saint Mark's score, and the Vaizey scale.
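As a rough illustration of how such scales reduce symptoms to a single number, the sketch below computes a Cleveland Clinic (Wexner)-style total from the five frequency-scored items described above. The item names and the frequency-to-score mapping are simplified assumptions for illustration only, not a clinical instrument.

```python
# Illustrative sketch of a Wexner-style severity total (not a clinical tool).
# Each of the five items is scored 0 (never) to 4 (daily/always); the total
# therefore ranges from 0 (full continence) to 20 (complete incontinence).

FREQUENCY_SCORE = {
    "never": 0,
    "rarely": 1,      # roughly less than once per month
    "sometimes": 2,   # roughly less than once per week
    "usually": 3,     # roughly less than once per day
    "always": 4,      # daily
}

ITEMS = ("solid stool", "liquid stool", "gas", "wears pad", "lifestyle altered")

def wexner_total(responses):
    """responses: dict mapping each item to one of the frequency words above."""
    missing = [item for item in ITEMS if item not in responses]
    if missing:
        raise ValueError(f"missing items: {missing}")
    return sum(FREQUENCY_SCORE[responses[item]] for item in ITEMS)

# Example: occasional gas incontinence and pad use, otherwise largely continent.
example = {
    "solid stool": "never",
    "liquid stool": "rarely",
    "gas": "sometimes",
    "wears pad": "sometimes",
    "lifestyle altered": "rarely",
}
print(wexner_total(example))  # -> 6 on the 0-20 scale
```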
Differential diagnosis
FI may present with signs similar to rectal discharge (e.g. fistulae, proctitis, or rectal prolapse), pseudoincontinence, encopresis (with no organic cause), and irritable bowel syndrome.
Management
FI is generally treatable with conservative management, surgery, or both. The success of treatment depends upon the exact causes and how easily these are corrected. Treatment choice depends on the cause and severity of the disease, and the motivation and general health of the person affected. Commonly, conservative measures are used together, and if appropriate surgery is carried out. Treatments may be attempted until symptoms are satisfactorily controlled. A treatment algorithm based upon the cause has been proposed, including conservative, non-operative and surgical measures (neosphincter refers to either dynamic graciloplasty or artificial bowel sphincter, lavage refers to retrograde rectal irrigation).
Conservative measures include dietary modification, drug treatment, retrograde anal irrigation, biofeedback retraining, and anal sphincter exercises. Incontinence products refer to devices such as anal plugs and perineal pads, and garments such as diapers/nappies. Perineal pads are efficient and acceptable only for minor incontinence. If all other measures are ineffective, removing the entire colon may be an option.
Diet
Dietary modification may be important for successful management. Both diarrhea and constipation can contribute to different cases, so dietary advice must be tailored to address the underlying cause or it may be ineffective or counterproductive. In persons with disease aggravated by diarrhea or those with rectal loading by soft stools, the following suggestions may be beneficial: increase dietary fiber; reduce wholegrain cereals/bread; reduce fruit and vegetables which contain natural laxative compounds (rhubarb, figs, prunes/plums); limit beans, pulses, cabbage and sprouts; reduce spices (especially chili); reduce artificial sweeteners (e.g. sugar-free chewing gum); reduce alcohol (especially stout, beer and ale); reduce lactose if there is some degree of lactase deficiency; and reduce caffeine. Caffeine lowers the resting tone of the anal canal and also causes diarrhea. Excessive doses of vitamin C, magnesium, phosphorus and/or calcium supplements may increase FI. Reducing the olestra fat substitute, which can cause diarrhea, may also help.
Medication
Pharmacological management may include anti-diarrheal/constipating agents and laxatives/stool bulking agents. Stopping or substituting any previous medication that causes diarrhea may be helpful in some (see table). There is no good evidence for the use of any medications, however.
In people who have undergone gallbladder removal, the bile acid sequestrant cholestyramine may help minor degrees of FI. Bulking agents also absorb water, so may be helpful for those with diarrhea. A common side effect is bloating and flatulence. Topical agents to treat and prevent dermatitis may also be used, such as topical antifungals when there is evidence of perianal candidiasis or occasionally mild topical anti-inflammatory medication. Prevention of secondary lesions is carried out by perineal cleansing, moisturization, and the use of a skin protectant.
Other measures
Evacuation aids (suppositories or enemas) e.g. glycerine or bisacodyl suppositories may be prescribed. People may have a poor resting tone of the anal canal, and consequently may not be able to retain an enema, in which case transanal irrigation (retrograde anal irrigation) may be a better option, as this equipment utilizes an inflatable catheter to prevent loss of the irrigation tip and to provide a water tight seal during irrigation. A volume of lukewarm water is gently pumped into the colon via the anus. People can be taught how to perform this treatment in their own homes, but it does require special equipment. If the irrigation is efficient, the stool will not reach the rectum again for up to 48 hours. By regularly emptying the bowel using transanal irrigation, controlled bowel function is often re-established to a high degree in patients with bowel incontinence and/or constipation. This enables control over the time and place of evacuation and the development of a consistent bowel routine. However, persistent leaking of residual irrigation fluid during the day may occur and make this option unhelpful, particularly in persons with obstructed defecation syndrome who may have an incomplete evacuation of any rectal contents. Consequently, the best time to carry out the irrigation is typically in the evening, allowing any residual liquid to be passed the next morning before leaving the home. Complications such as electrolyte imbalance and perforation are rare. The effect of transanal irrigation varies considerably. Some individuals experience complete control of incontinence, and others report little or no benefit. It has been suggested that if appropriate, people be offered home retrograde anal irrigation.
Biofeedback (the use of equipment to record or amplify and then feed back activities of the body) is a commonly used and researched treatment, but the benefits are uncertain. Biofeedback therapy varies in the way it is delivered, but it is unknown if one type has benefits over another.
The role of pelvic floor exercises and anal sphincter exercises in FI is poorly determined. While there may be some benefits they appear less useful than implanted sacral nerve stimulators. These exercises aim to increase the strength of the pelvic floor muscles (mainly levator ani). The anal sphincters are not technically part of the pelvic floor muscle group, but the EAS is a voluntary, striated muscle that therefore can be strengthened in a similar manner. It has not been established whether pelvic floor exercises can be distinguished from anal sphincter exercises in practice by the people doing them. This kind of exercise is more commonly used to treat urinary incontinence, for which there is a sound evidence base for effectiveness. More rarely are they used in FI. The effect of anal sphincter exercises are variously stated as an increase in the strength, speed, or endurance of voluntary contraction (EAS).
Electrical stimulation can also be applied to the anal sphincters and pelvic floor muscles, inducing muscle contraction without traditional exercises (similar to transcutaneous electrical nerve stimulation, TENS). The evidence supporting its use is limited, and any benefit is tentative. In light of the above, intra-anal electrical stimulation (using an anal probe as an electrode) appears to be more efficacious than intra-vaginal (using a vaginal probe as an electrode). Rarely, skin reactions may occur where the electrodes are placed, but these issues typically resolve when the stimulation is stopped. Surgically implanted sacral nerve stimulation may be more effective than exercises, and electrical stimulation and biofeedback may be more effective than exercises or electrical stimulation by themselves. TENS is also sometimes used to treat FI by transcutaneous tibial nerve stimulation.
In a minority of people, anal plugs may be useful for either standalone therapy or in concert with other treatments. Anal plugs (sometimes termed tampons) aim to block the involuntary loss of fecal material, and they vary in design and composition. Polyurethane plugs were reported to perform better than those made of polyvinyl-alcohol. Plugs are less likely to help those with frequent bowel movements, and many find them difficult to tolerate.
In women, a device that functions as an inflatable balloon in the vagina has been approved for use in the United States.
Surgery
Surgery may be carried out if conservative measures alone are not sufficient to control incontinence. There are many surgical options, and their relative effectiveness is debated due to a lack of good-quality evidence. The optimal treatment regime may be both surgical and non-surgical treatments. The surgical options can be considered in four categories: restoration and improvement of residual sphincter function (sphincteroplasty, sacral nerve stimulation, tibial nerve stimulation, correction of anorectal deformity), replacement/imitation of the sphincter or its function (anal encirclement, SECCA procedure, non-dynamic graciloplasty, perianal injectable bulking agents and implantable bulking agents), dynamic sphincter replacement (artificial bowel sphincter, dynamic graciloplasty), antegrade continence enema (Malone procedure), and finally fecal diversion (e.g. colostomy). A surgical treatment algorithm has been proposed. Isolated sphincter defects (IAS/EAS) may be initially treated with sphincteroplasty and if this fails, the person can be assessed for sacral nerve stimulation. Functional deficits of the EAS and/or IAS (i.e. where there is no structural defect, or only limited EAS structural defect, or with neurogenic incontinence) may be assessed for sacral nerve stimulation. If this fails, neosphincter with either dynamic graciloplasty or artificial anal sphincter may be indicated. Substantial muscular and/or neural defects may be treated with neosphincter initially.
Epidemiology
FI is thought to be very common, but much under-reported due to embarrassment. One study reported a prevalence of 2.2% in the general population. It affects people of all ages but is more common in older adults (but it should not be considered a normal part of aging). Females are more likely to develop it than males (63% of those with FI over 30 may be female). In 2014, the National Center for Health Statistics reported that one out of every six seniors in the U.S. who lived in their own homes or apartment had FI. Men and women were equally affected. 45–50% of people with FI have severe physical and/or mental disabilities. People with dementia are four times more likely to have fecal incontinence compared to people of similar ages.
Risk factors include age, female gender, urinary incontinence, history of vaginal delivery (non-Caesarean section childbirth), obesity, prior anorectal surgery, poor general health, and physical limitations. Combined urinary and fecal incontinence is sometimes termed double incontinence, and it is more likely to be present in those with urinary incontinence.
Traditionally, FI was thought to be an insignificant complication of surgery, but it is now known that a variety of different procedures are associated with this possible complication, and sometimes at high levels. Examples are midline internal sphincterotomy (8% risk), lateral internal sphincterotomy, fistulectomy, fistulotomy (18–52%), hemorrhoidectomy (33%), ileo-anal reservoir reconstruction, lower anterior resection, total abdominal colectomy, ureterosigmoidostomy, and anal dilation (Lord's procedure, 0–50%). Some authors consider obstetric trauma to be the most common cause.
History
While the first mention of urinary incontinence occurs in 1500 BC in the Ebers Papyrus, the first mention of FI in a medical context is unknown. For many centuries, colonic irrigation was the only treatment available. Stoma creation was described in AD 1776, FI associated with rectal prolapse in AD 1873, and anterior sphincter repair in AD 1875. During the mid 20th century, several operations were developed for instances where the sphincters were intact but weakened. Muscle transpositions using the gluteus maximus or the gracilis were devised, but did not become widely used until later. End-to-end sphincteroplasty was shown to have a high failure rate in 1940. In 1971, Parks and McPartlin first described an overlapping sphincteroplasty procedure. Biofeedback was first introduced in 1974. In 1975, Parks described post-anal repair, a technique to reinforce the pelvic floor and EAS to treat idiopathic cases. Endoanal ultrasound was invented in 1991, and began to demonstrate the high number of occult sphincter tears following vaginal deliveries. In 1994, the use of an endoanal coil during pelvic MRI showed greater detail of the anal canal than previously. During the last 20 years, dynamic graciloplasty, sacral nerve stimulation, injectable perianal bulking agents and radiofrequency ablation have been devised, mainly due to the relatively poor success rates and high morbidity associated with the earlier procedures.
Society and culture
Persons with this symptom are frequently ridiculed and ostracized in public. It has been described as one of the most psychologically and socially debilitating conditions in an otherwise healthy individual. In older people, it is one of the most common reasons for admission into a care home. Persons who develop FI earlier in life are less likely to marry and obtain employment. Often, people will go to great lengths to keep their condition secret. It has been termed "the silent affliction" since many do not discuss the problem with their close family, employers, or clinicians. They may be subject to gossip, hostility, and other forms of social exclusion. The economic cost has not received much attention.
Netherlands
In the Netherlands, a 2004 study estimated that total costs of patients with fecal incontinence were €2169 per patient per year. Over half of this was productivity loss in work.
United States
In the US, the average lifetime cost (treatment and follow-up) was $17,166 per person in 1996. The average hospital charge for sphincteroplasty was $8555 per procedure. Overall, in the US, the total charges associated with surgery increased from $34 million in 1998 to $57.5 million in 2003. Sacral nerve stimulation, dynamic graciloplasty, and colostomy were all shown to be cost-effective.
Japan
Some insults in Japan relate to incontinence, such as kusotare/kusottare and shikkotare which mean shit hanger/leaker/oozer and piss leaker/oozer respectively, though these have not been in common use since the 1980s.
Law
The case Hiltibran et al v. Levy et al in the United States District Court for the Western District of Missouri resulted in that court issuing an order in 2011. That order requires incontinence briefs funded by Medicaid to be given by the State of Missouri to adults who would be institutionalized without them.
Research
Engineered anal sphincters grown from stem cells have been successfully implanted in mice. New blood vessels developed and the tissue displayed normal contraction and relaxation. In the future, these methods may become part of the management of FI, replacing the need for high-morbidity implanted devices such as the artificial bowel sphincter.
See also
Open defecation
References
Further reading
External links
Independent continence product advisor
Gastrointestinal motility disorders
Gastrointestinal tract disorders
Symptoms and signs: Digestive system and abdomen
Incontinence
Defecation | Fecal incontinence | [
"Biology"
] | 8,853 | [
"Incontinence",
"Excretion",
"Defecation"
] |
179,485 | https://en.wikipedia.org/wiki/DMZ%20%28computing%29 | In computer security, a DMZ or demilitarized zone (sometimes referred to as a perimeter network or screened subnet) is a physical or logical subnetwork that contains and exposes an organization's external-facing services to an untrusted, usually larger, network such as the Internet. The purpose of a DMZ is to add an additional layer of security to an organization's local area network (LAN): an external network node can access only what is exposed in the DMZ, while the rest of the organization's network is protected behind a firewall. The DMZ functions as a small, isolated network positioned between the Internet and the private network.
This is not to be confused with a DMZ host, a feature present in some home routers that frequently differs greatly from an ordinary DMZ.
The name is from the term demilitarized zone, an area between states in which military operations are not permitted.
Rationale
The DMZ is seen as not belonging to either network bordering it. This metaphor applies to the computing use as the DMZ acts as a gateway to the public Internet. It is neither as secure as the internal network, nor as insecure as the public internet.
In this case, the hosts most vulnerable to attack are those that provide services to users outside of the local area network, such as e-mail, Web and Domain Name System (DNS) servers. Because of the increased potential of these hosts suffering an attack, they are placed into this specific subnetwork in order to protect the rest of the network in case any of them become compromised.
Hosts in the DMZ are permitted to have only limited connectivity to specific hosts in the internal network, as the content of the DMZ is not as secure as the internal network. Similarly, communication between hosts in the DMZ and the external network is also restricted to make the DMZ more secure than the Internet and suitable for housing these special-purpose services. This allows hosts in the DMZ to communicate with both the internal and external network, while an intervening firewall controls the traffic between the DMZ servers and the internal network clients, and another firewall would perform some level of control to protect the DMZ from the external network.
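The traffic rules described above can be pictured as a small zone-to-zone policy matrix with a default-deny stance. The sketch below is a minimal illustration of that idea in Python; the zone names, permitted services and port numbers are assumptions chosen for the example, not the configuration of any particular firewall product.

```python
# Minimal sketch of a default-deny, zone-to-zone DMZ policy (illustrative only).
# Only the flows listed in ALLOWED are permitted; everything else is dropped.

ALLOWED = {
    # (source zone, destination zone): set of permitted destination ports
    ("internet", "dmz"):      {80, 443, 25},   # public web and inbound mail
    ("dmz", "internal"):      {1433},          # e.g. web tier to one database port
    ("internal", "dmz"):      {80, 443, 22},   # staff access and administration
    ("internal", "internet"): {80, 443},       # outbound browsing
    # note: no ("internet", "internal") entry - direct access is never allowed
}

def is_permitted(src_zone, dst_zone, dst_port):
    """Return True only if the flow matches an explicit allow rule."""
    return dst_port in ALLOWED.get((src_zone, dst_zone), set())

print(is_permitted("internet", "dmz", 443))       # True  - exposed service
print(is_permitted("internet", "internal", 443))  # False - blocked by default deny
print(is_permitted("dmz", "internal", 22))        # False - DMZ hosts kept restricted
```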
A DMZ configuration provides additional security from external attacks, but it typically has no bearing on internal attacks such as sniffing communication via a packet analyzer or spoofing such as e-mail spoofing.
It is also sometimes good practice to configure a separate classified militarized zone (CMZ), a highly monitored militarized zone comprising mostly Web servers (and similar servers that interface to the external world, i.e. the Internet) that are not in the DMZ but contain sensitive information about accessing servers within the LAN (like database servers). In such an architecture, the DMZ usually has the application firewall and the FTP server, while the CMZ hosts the Web servers. (The database servers could be in the CMZ, in the LAN, or in a separate VLAN altogether.)
Any service that is being provided to users on the external network can be placed in the DMZ. The most common of these services are:
Web servers
Mail servers
FTP servers
VoIP servers
Web servers that communicate with an internal database require access to a database server, which may not be publicly accessible and may contain sensitive information. The web servers can communicate with database servers either directly or through an application firewall for security reasons.
E-mail messages and particularly the user database are confidential, so they are typically stored on servers that cannot be accessed from the Internet (at least not in an insecure manner), but can be accessed from email servers that are exposed to the Internet.
The mail server inside the DMZ passes incoming mail to the secured/internal mail servers. It also handles outgoing mail.
For security, compliance with legal standards such as HIPAA, and monitoring reasons, in a business environment, some enterprises install a proxy server within the DMZ. This has the following benefits:
Obliges internal users (usually employees) to use the proxy server for Internet access.
Reduced Internet access bandwidth requirements since some web content may be cached by the proxy server.
Simplifies recording and monitoring of user activities.
Centralized web content filtering.
A reverse proxy server, like a proxy server, is an intermediary but is used the other way around. Instead of providing a service to internal users wanting to access an external network, it provides indirect access for an external network (usually the Internet) to internal resources.
For example, a back office application access, such as an email system, could be provided to external users (to read emails while outside the company) but the remote user would not have direct access to their email server (only the reverse proxy server can physically access the internal email server). This is an extra layer of security particularly recommended when internal resources need to be accessed from the outside, but it's worth noting this design still allows remote (and potentially malicious) users to talk to the internal resources with the help of the proxy. Since the proxy functions as a relay between the non-trusted network and the internal resource: it may also forward malicious traffic (e.g. application level exploits) towards the internal network; therefore the proxy's attack detection and filtering capabilities are crucial in preventing external attackers from exploiting vulnerabilities present in the internal resources that are exposed via the proxy. Usually such a reverse proxy mechanism is provided by using an application layer firewall that focuses on the specific shape and contents of the traffic rather than just controlling access to specific TCP and UDP ports (as a packet filter firewall would do), but a reverse proxy is usually not a good substitute for a well thought out DMZ design as it has to rely on continuous signature updates for updated attack vectors.
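As a rough sketch of the relay role described above, the following minimal Python reverse proxy accepts requests from the outside and forwards them to a single internal backend, so external clients never address the internal server directly. The backend address and listening port are placeholders, and a real deployment would use a hardened application-layer firewall or production proxy with filtering and inspection rather than this toy.

```python
# Toy reverse proxy: listens on the DMZ side and relays GET requests to one
# internal backend. Illustrative only - no TLS, filtering or traffic inspection.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request, error

INTERNAL_BACKEND = "http://10.0.0.5:8080"   # placeholder internal address

class ReverseProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            # The external client only ever talks to this proxy; the proxy
            # opens a second connection towards the internal resource.
            with request.urlopen(INTERNAL_BACKEND + self.path, timeout=5) as upstream:
                body = upstream.read()
                self.send_response(upstream.status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
        except error.URLError:
            self.send_error(502, "Bad gateway: internal resource unreachable")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8443), ReverseProxyHandler).serve_forever()
```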
Architecture
There are many different ways to design a network with a DMZ. Two of the most basic methods are with a single firewall, also known as the three-legged model, and with dual firewalls, also known as back to back. These architectures can be expanded to create very complex architectures depending on the network requirements.
Single firewall
A single firewall with at least 3 network interfaces can be used to create a network architecture containing a DMZ. The external network is formed from the ISP to the firewall on the first network interface, the internal network is formed from the second network interface, and the DMZ is formed from the third network interface. The firewall becomes a single point of failure for the network and must be able to handle all of the traffic going to the DMZ as well as the internal network.
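One way to picture the three-legged model is to map each network interface to a zone and then express the forwarding policy between interfaces. The sketch below does this in Python and prints iptables-flavoured rule strings; the interface names and permitted flows are illustrative assumptions, not any vendor's actual syntax.

```python
# Illustrative three-legged firewall: one device, three interfaces, one policy.
INTERFACES = {
    "eth0": "internet",   # towards the ISP
    "eth1": "internal",   # private LAN
    "eth2": "dmz",        # screened subnet with the exposed servers
}

# Flows permitted to cross the firewall (everything else is dropped).
PERMITTED_FLOWS = [
    ("internet", "dmz", 443),       # public HTTPS to the DMZ web server
    ("internal", "dmz", 443),       # staff reach the same service
    ("internal", "internet", 443),  # outbound browsing
    ("dmz", "internal", 5432),      # web tier to one database port only
]

def rules():
    """Render the policy as readable, iptables-flavoured rule strings."""
    zone_to_iface = {zone: iface for iface, zone in INTERFACES.items()}
    lines = []
    for src, dst, port in PERMITTED_FLOWS:
        lines.append(
            f"ACCEPT in={zone_to_iface[src]} out={zone_to_iface[dst]} dport={port}"
        )
    lines.append("DROP   (default policy for all other forwarded traffic)")
    return lines

print("\n".join(rules()))
```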
The zones are usually marked with colors, for example purple for LAN, green for DMZ, and red for Internet (often with another color used for wireless zones).
Dual firewall
The most secure approach, according to Colton Fralick, is to use two firewalls to create a DMZ. The first firewall (also called the "front-end" or "perimeter" firewall) must be configured to allow traffic destined to the DMZ only. The second firewall (also called "back-end" or "internal" firewall) only allows traffic to the DMZ from the internal network.
This setup is considered more secure since two devices would need to be compromised. There is even more protection if the two firewalls are provided by two different vendors, because it makes it less likely that both devices suffer from the same security vulnerabilities. For example, a security hole found to exist in one vendor's system is less likely to occur in the other one. One of the drawbacks of this architecture is that it's more costly, both to purchase and to manage. The practice of using different firewalls from different vendors is sometimes described as a component of a "defense in depth" security strategy.
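One small way to see why two devices would need to be compromised is to model the traffic path: anything moving between the Internet and the internal network must be allowed by the perimeter firewall and then again by the internal firewall. The sketch below is an illustrative assumption of such a layered check with made-up rules, not a real product configuration.

```python
# Illustrative dual-firewall check: a flow is only delivered if every firewall
# on its path accepts it. Zone names and rules are assumptions for the example.

PERIMETER_RULES = {("internet", "dmz"): {80, 443}}            # front-end firewall
INTERNAL_RULES  = {("dmz", "internal"): {5432},               # back-end firewall
                   ("internal", "dmz"): {80, 443, 22}}

def path(src, dst):
    """Which firewalls sit between the two zones (simplified topology)."""
    hops = []
    if "internet" in (src, dst):
        hops.append(PERIMETER_RULES)
    if "internal" in (src, dst):
        hops.append(INTERNAL_RULES)
    return hops

def delivered(src, dst, port):
    firewalls = path(src, dst)
    return bool(firewalls) and all(
        port in fw.get((src, dst), set()) for fw in firewalls
    )

print(delivered("internet", "dmz", 443))      # True  - crosses only the perimeter firewall
print(delivered("dmz", "internal", 5432))     # True  - crosses only the internal firewall
print(delivered("internet", "internal", 443)) # False - would have to pass both, neither allows it
```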
DMZ host
Some routers have a feature called DMZ host. This feature designates one node (a PC or other device with an IP address) as the DMZ host. The router's firewall exposes all ports on the DMZ host to the external network and does not block any inbound traffic from the outside destined for the DMZ host. This is a less secure alternative to port forwarding, which exposes only a handful of ports. This feature should be avoided, except when:
The node designated as DMZ host is the downstream firewall of the actual DMZ (perhaps the router itself isn't part of a home network)
The node runs a powerful firewall capable of regulating internal security
The sheer number of ports is too great for the port-forwarding feature
Correct port forwarding rules could not be formulated in advance
The router's port forwarding is not capable of handling relevant traffic, e.g., 6in4 or GRE tunnels
In all but the first scenario above, the DMZ host feature is used outside a true DMZ configuration.
See also
Bastion host
Screened subnet
Science DMZ Network Architecture, a DMZ network in high-performance computing
References
Further reading
SolutionBase: Strengthen network defenses by using a DMZ by Deb Shinder at TechRepublic.
Eric Maiwald. Network Security: A Beginner's Guide. Second Edition. McGraw-Hill/Osborne, 2003.
Internet Firewalls: Frequently Asked Questions, compiled by Matt Curtin, Marcus Ranum and Paul Robertson
Computer network security
Wide area networks | DMZ (computing) | [
"Engineering"
] | 1,975 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer network security"
] |
179,505 | https://en.wikipedia.org/wiki/Physical%20property | A physical property is any property of a physical system that is measurable. The changes in the physical properties of a system can be used to describe its changes between momentary states. A quantifiable physical property is called physical quantity. Measurable physical quantities are often referred to as observables.
Some physical properties are qualitative, such as shininess, brittleness, etc.; some general qualitative properties admit more specific related quantitative properties, such as in opacity, hardness, ductility, viscosity, etc.
Physical properties are often characterized as intensive and extensive properties. An intensive property does not depend on the size or extent of the system, nor on the amount of matter in the object, while an extensive property shows an additive relationship. These classifications are in general only valid in cases when smaller subdivisions of the sample do not interact in some physical or chemical process when combined.
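As a simple illustration, for two non-interacting subsystems A and B of a homogeneous system at equilibrium, an extensive property X adds across the subsystems while an intensive property x takes the same value in the whole as in each part:

X_{A \cup B} = X_A + X_B \qquad \text{(extensive, e.g. mass or volume)}
x_{A \cup B} = x_A = x_B \qquad \text{(intensive, e.g. temperature or density)}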
Properties may also be classified with respect to the directionality of their nature. For example, isotropic properties do not change with the direction of observation, and anisotropic properties do have spatial variance.
It may be difficult to determine whether a given property is a material property or not. Color, for example, can be seen and measured; however, what one perceives as color is really an interpretation of the reflective properties of a surface and the light used to illuminate it. In this sense, many ostensibly physical properties are called supervenient. A supervenient property is one which is actual, but is secondary to some underlying reality. This is similar to the way in which objects are supervenient on atomic structure. A cup might have the physical properties of mass, shape, color, temperature, etc., but these properties are supervenient on the underlying atomic structure, which may in turn be supervenient on an underlying quantum structure.
Physical properties are contrasted with chemical properties which determine the way a material behaves in a chemical reaction.
List of properties
The physical properties of an object that are traditionally defined by classical mechanics are often called mechanical properties. Other broad categories, commonly cited, are electrical properties, optical properties, thermal properties, etc. Physical properties include:
absorption (physical)
absorption (electromagnetic)
albedo
angular momentum
area
boiling point
brittleness
capacitance
color
concentration
density
dielectric
ductility
distribution
efficacy
elasticity
electric charge
electrical conductivity
electrical impedance
electric field
electric potential
emission
flow rate (mass)
flow rate (volume)
fluidity
frequency
hardness
heat capacity
inductance
intrinsic impedance
intensity
irradiance
length
location
luminance
luminescence
luster
malleability
magnetic field
magnetic flux
mass
melting point
moment
momentum
opacity
permeability
permittivity
plasticity
pressure
radiance
resistivity
reflectivity
refractive index
solubility
specific heat
spin
strength
stiffness
temperature
tension
thermal conductivity (and resistance)
velocity
viscosity
volume
wave impedance
See also
List of materials properties
Physical quantity
Physical test
Test method
References
Bibliography
External links
Physical and Chemical Property Data Sources – a list of references which cover several chemical and physical properties of various materials
Physical phenomena | Physical property | [
"Physics"
] | 627 | [
"Physical phenomena",
"Physical properties"
] |
179,599 | https://en.wikipedia.org/wiki/Nadine%20Gordimer | Nadine Gordimer (20 November 1923 – 13 July 2014) was a South African writer and political activist. She received the Nobel Prize in Literature in 1991, recognised as a writer "who through her magnificent epic writing has ... been of very great benefit to humanity".
Gordimer was one of the most honored female writers of her generation. She received the Booker Prize for The Conservationist, and the Central News Agency Literary Award for The Conservationist, Burger's Daughter and July's People.
Gordimer's writing dealt with moral and racial issues, particularly apartheid in South Africa. Under that regime, works such as Burger's Daughter were banned. She was active in the anti-apartheid movement, joining the African National Congress during the days when the organisation was banned, and gave Nelson Mandela advice on his famous 1964 defence speech at the trial which led to his conviction for life. She was also active in HIV/AIDS causes.
Early life
Gordimer was born to Jewish parents near Springs, an East Rand mining town outside Johannesburg. She was the second daughter of Isidore Gordimer (1887–1962), a Lithuanian Jewish immigrant watchmaker from Žagarė in Lithuania (then part of the Russian Empire), and Hannah "Nan" (née Myers) Gordimer (1897–1973), a British Jewish immigrant from London. Her father was raised with an Orthodox Jewish education before immigrating with his family to South Africa at the age of 13. Her mother was from an established family and came to South Africa at the age of 6 with her parents. Gordimer was raised in a secular household. Her mother was not religiously observant, and mostly assimilated, whereas her father maintained a membership of the local Orthodox synagogue and attended once a year for the Yom Kippur services.
Family background
Gordimer's early interest in racial and economic inequality in South Africa was shaped in part by her parents. Her father's experience as a refugee from Tsarist Russia helped form Gordimer's political identity, but he was neither an activist nor particularly sympathetic toward the experiences of black people under apartheid. Conversely, Gordimer saw activism by her mother, whose concern about the poverty and discrimination faced by black people in South Africa led her to found a crèche for black children. Gordimer also witnessed government repression first-hand as a teenager; the police raided her family home, confiscating letters and diaries from a servant's room.
Gordimer was educated at a Catholic convent school, but was largely home-bound as a child because her mother, for "strange reasons of her own", did not put her into school (apparently, she feared that Gordimer had a weak heart). Home-bound and often isolated, she began writing at an early age, and published her first stories in 1937 at the age of 13. Her first published work was a short story for children, "The Quest for Seen Gold", which appeared in the Children's Sunday Express in 1937; "Come Again Tomorrow", another children's story, appeared in Forum around the same time. At the age of 16, she had her first adult fiction published.
Career
Gordimer studied for a year at the University of the Witwatersrand, where she mixed for the first time with fellow professionals across the colour bar. She also became involved in the Sophiatown renaissance. She did not complete her degree, but moved to Johannesburg in 1948, where she lived thereafter. While taking classes in Johannesburg, she continued to write, publishing mostly in local South African magazines. She collected many of these early stories in Face to Face, published in 1949.
In 1951, the New Yorker accepted Gordimer's story "A Watcher of the Dead", beginning a long relationship, and bringing Gordimer's work to a much larger public. Gordimer, who said she believed the short story was the literary form for our age, continued to publish short stories in the New Yorker and other prominent literary journals. Her first publisher, Lulu Friedman, was the wife of the Parliamentarian Bernard Friedman, and it was at their house, "Tall Trees" in First Avenue, Lower Houghton, Johannesburg, that Gordimer met other anti-apartheid writers. Gordimer's first novel, The Lying Days, was published in 1953.
Activism and professional life
The arrest of her best friend, Bettie du Toit, in 1960 and the Sharpeville massacre spurred Gordimer's entry into the anti-apartheid movement. Thereafter, she quickly became active in South African politics, and was close friends with Nelson Mandela's defence attorneys (Bram Fischer and George Bizos) during his 1962 trial. She also helped Mandela edit his famous speech "I Am Prepared to Die", given from the defendant's dock at the trial. When Mandela was released from prison in 1990, she was one of the first people he wanted to see.
During the 1960s and 1970s, she continued to live in Johannesburg, although she occasionally left for short periods of time to teach at several universities in the United States. She had begun to achieve international literary recognition, receiving her first major literary award, the W. H. Smith Commonwealth Literary Award, in 1961. Throughout this time, Gordimer continued to demand through both her writing and her activism that South Africa re-examine and replace its long-held policy of apartheid. In 1973, she was nominated for the Nobel Prize in Literature by Artur Lundkvist of the Swedish Academy's Nobel committee.
During this time, the South African government banned several of her works, two for lengthy periods of time. The Late Bourgeois World was Gordimer's first personal experience with censorship; it was banned in 1976 for a decade by the South African government. A World of Strangers was banned for twelve years. Other works were censored for lesser amounts of time. Burger's Daughter, published in June 1979, was banned one month later. The Publications Committee's Appeal Board reversed the censorship of Burger's Daughter three months later, determining that the book was too one-sided to be subversive. Gordimer responded to this decision in Essential Gesture (1988), pointing out that the board banned two books by black authors at the same time it unbanned her own work. Gordimer's subsequent novels escaped censorship under apartheid. In 2001, a provincial education department temporarily removed July's People from the school reading list, along with works by other anti-apartheid writers, describing July's People as "deeply racist, superior and patronising"—a characterisation that Gordimer took as a grave insult, and that many literary and political figures protested.
In South Africa, she joined the African National Congress when it was still listed as an illegal organisation by the South African government. While never blindly loyal to any organisation, Gordimer saw the ANC as the best hope for reversing South Africa's treatment of black citizens. Rather than simply criticising the organisation for its perceived flaws, she advocated joining it to address them. She hid ANC leaders in her own home to aid their escape from arrest by the government, and she said that the proudest day of her life was when she testified at the 1986 Delmas Treason Trial on behalf of 22 South African anti-apartheid activists. (See Simon Nkoli, Mosiuoa Lekota, etc.) Throughout these years she also regularly took part in anti-apartheid demonstrations in South Africa, and traveled internationally speaking out against South African apartheid and discrimination and political repression.
Her works began achieving literary recognition early in her career, with her first international recognition in 1961, followed by numerous literary awards throughout the ensuing decades. Literary recognition for her accomplishments culminated with the Nobel Prize for Literature on 3 October 1991, which noted that Gordimer "through her magnificent epic writing has—in the words of Alfred Nobel—been of very great benefit to humanity".
Gordimer's activism was not limited to the struggle against apartheid. She resisted censorship and state control of information, and fostered the literary arts. She refused to let her work be aired by the South African Broadcasting Corporation because it was controlled by the apartheid government. Gordimer also served on the steering committee of South Africa's Anti-Censorship Action Group. A founding member of the Congress of South African Writers, Gordimer was also active in South African letters and international literary organisations. She was Vice President of International PEN.
In the post-apartheid 1990s and 21st century, Gordimer was active in the HIV/AIDS movement, addressing a significant public health crisis in South Africa. In 2004, she organised about 20 major writers to contribute short fiction for Telling Tales, a fundraising book for South Africa's Treatment Action Campaign, which lobbies for government funding for HIV/AIDS prevention and care. On this matter, she was critical of the South African government, noting in 2004 that she approved of everything President Thabo Mbeki had done except his stance on AIDS.
In 2005, Gordimer went on lecture tours and spoke on matters of foreign policy and discrimination beyond South Africa. For instance, in 2005, when Fidel Castro fell ill, Gordimer joined six other Nobel prize winners in a public letter to the United States warning it not to seek to destabilise Cuba's communist government. Gordimer's resistance to discrimination extended to her even refusing to accept "shortlisting" in 1998 for the Orange Prize, because the award recognizes only women writers. Gordimer also taught at the Massey College of the University of Toronto as a lecturer in 2006.
She was a vocal critic of the ANC government's Protection of State Information Bill, publishing a lengthy condemnation in The New York Review of Books in 2012.
Personal life
Gordimer had a daughter, Oriane (born 1950), by her first marriage in 1949 to Gerald Gavron (Gavronsky), a local dentist, from whom she was divorced within three years. In 1954, she married Reinhold Cassirer, a highly respected art dealer from the well-known German-Jewish Cassirer family. Cassirer established the South African Sotheby's and later ran his own gallery; their "wonderful marriage" lasted until his death from emphysema in 2001. Their son, Hugo, was born in 1955, and is a filmmaker in New York, with whom Gordimer collaborated on at least two documentaries. Gordimer's daughter, Oriane Gavronsky, has two children and lives in the South of France. Gordimer also spent time with her family in France, as she and Cassirer had bought a small hilltop home near Nice.
In a 1979–80 interview Gordimer, who was Jewish, identified herself as an atheist, but added: "I think I have a basically religious temperament, perhaps even a profoundly religious one." She was not involved in Jewish communal life, though both her husbands were Jewish. In a 1996 interview she said: "The only time I seriously enquired into religion was in my mid-thirties, when I experienced a strange kind of loss or lack in myself and thought this may be because I had no religion." She read Teilhard de Chardin, Simone Weil and books about world religions, continuing: "For the first time in my life I learned something about Judaism, the religion of my parents. But it didn't happen. I could not take the leap of faith." She did, however, feel that her moral values emerged from the Judeo-Christian tradition.
She did not feel that being from an oppressed people was the reason that she was engaged in the anti-apartheid struggle: "I get rather annoyed when people suggest that my engagement in the anti-apartheid struggle can somehow be traced back to my Jewishness... I refuse to accept that one must oneself have been exposed to prejudice and exploitation to be opposed to it. I like to think that all decent people, whatever their religious or ethnic background, have an equal responsibility to fight what is evil. To say otherwise is to concede too much."
In 2008, Gordimer defended her decision to attend a Jerusalem Writers Conference in Israel. Gordimer could be critical of Israel, but rejected comparison of its policies to apartheid in South Africa.
Until the end of her life, she lived in the same home in Parktown in Johannesburg for over five decades. In 2006, Gordimer was attacked in her home by robbers, sparking outrage in the country. Gordimer apparently refused to move into a gated complex, against the advice of some friends. Although her children and grandchildren lived overseas and friends had emigrated, she had no plans to leave South Africa permanently: "It's always been a nightmare in my mind, to be cut off."
Unauthorised biography
Ronald Suresh Roberts published a biography of Gordimer, No Cold Kitchen, in 2006. She had granted Roberts interviews and access to her personal papers, with an understanding that she would authorise the biography in return for a right to review the manuscript before publication. However, Gordimer and Roberts failed to reach an agreement over his account of the illness and death of Gordimer's husband Reinhold Cassirer and an affair Gordimer had in the 1950s, as well as criticism of her views on the Israel–Palestine conflict. Gordimer disowned the book, accusing Roberts of breach of trust. Publishers Bloomsbury Publishing in London and Farrar, Straus and Giroux in New York subsequently withdrew from the project. Roberts later criticised Gordimer for her decision and her stances on other issues.
Death
Gordimer died in her sleep at her Johannesburg home on 13 July 2014 at the age of 90.
Works, themes, and reception
Gordimer achieved lasting international recognition for her works, most of which deal with political issues, as well as the "moral and psychological tensions of her racially divided home country." Virtually all of Gordimer's works deal with themes of love and politics, particularly concerning race in South Africa. Always questioning power relations and truth, Gordimer tells stories of ordinary people, revealing moral ambiguities and choices. Her characterisation is nuanced, revealed more through the choices her characters make than through their claimed identities and beliefs. She also weaves in subtle details within the characters' names.
Overview of critical works
Her first published novel, The Lying Days (1953), takes place in Gordimer's home town of Springs, Transvaal, an East Rand mining town near Johannesburg. Arguably a semi-autobiographical work, The Lying Days is a Bildungsroman, charting the growing political awareness of a young white woman, Helen, toward small-town life and South African racial division.
In her 1963 work, Occasion for Loving, Gordimer puts apartheid and love squarely together. Her protagonist, Ann Davis, is married to Boaz Davis, an ethnomusicologist, but in love with Gideon Shibalo, an artist with several failed relationships. Davis is white, however, and Shibalo is black, and South Africa's government criminalised such relationships.
Gordimer collected the James Tait Black Memorial Prize for A Guest of Honour in 1971 and, in common with a number of winners of this award, she was to go on to win the Booker Prize. The Booker was awarded to Gordimer for her 1974 novel, The Conservationist, and was a co-winner with Stanley Middleton's novel Holiday. The Conservationist explores Zulu culture and the world of a wealthy white industrialist through the eyes of Mehring, the antihero. Per Wästberg described The Conservationist as Gordimer's "densest and most poetical novel". Thematically covering the same ground as Olive Schreiner's The Story of an African Farm (1883) and J. M. Coetzee's In the Heart of the Country (1977), the "conservationist" seeks to conserve nature to preserve the apartheid system, keeping change at bay. When an unidentified corpse is found on his farm, Mehring does the "right thing" by providing it a proper burial; but the dead person haunts the work, a reminder of the bodies on which Mehring's vision would be built.
Gordimer's 1979 novel Burger's Daughter is the story of a woman analysing her relationship with her father, a martyr to the anti-apartheid movement. The child of two Communist and anti-apartheid revolutionaries, Rosa Burger finds herself drawn into political activism as well. Written in the aftermath of the 1976 Soweto uprising, the novel was shortly thereafter banned by the South African government. Gordimer described the novel as a "coded homage" to Bram Fischer, the lawyer who defended Nelson Mandela and other anti-apartheid activists.
In July's People (1981), she imagines a bloody South African revolution, in which white people are hunted and murdered after blacks revolt against the apartheid government. The work follows Maureen and Bamford Smales, an educated white couple, hiding for their lives with July, their long-time former servant. The novel plays off the various groups of "July's people": his family and his village, as well as the Smales. The story examines how people cope with the terrible choices forced on them by violence, race hatred, and the state.
The House Gun (1998) was Gordimer's second post-apartheid novel. It follows the story of a couple, Claudia and Harald Lingard, dealing with their son Duncan's murder of one of his housemates. The novel treats the rising crime rate in South Africa and the guns that virtually all households have, as well as the legacy of South African apartheid and the couple's concerns about their son's lawyer, who is black. The novel was optioned for film rights to Granada Productions.
Gordimer's award-winning 2002 novel, The Pickup, considers the issues of displacement, alienation, and immigration; class and economic power; religious faith; and the ability for people to see, and love, across these divides. It tells the story of a couple: Julie Summers, a white woman from a financially secure family, and Abdu, an illegal Arab immigrant in South Africa. After Abdu's visa is refused, the couple returns to his homeland, where she is the alien. Her experiences and growth as an alien in another culture form the heart of the work.
Get a Life, written in 2005 after the death of her long-time spouse, Reinhold Cassirer, is the story of a man undergoing treatment for a life-threatening disease. While clearly drawn from personal life experiences, the novel also continues Gordimer's exploration of political themes. The protagonist is an ecologist, battling installation of a planned nuclear plant. But he is at the same time undergoing radiation therapy for his cancer, causing him personal grief and, ironically, rendering him a nuclear health hazard in his own home. Here, Gordimer again pursues the questions of how to integrate everyday life and political activism. New York Times critic J. R. Ramakrishnan, who noted a similarity with author Mia Alvar, wrote that Gordimer wrote about "long-suffering spouses and (the) familial enablers of political men" in her fiction.
Jewish themes and characters
Gordimer has occasionally given voice to Jewish characters, rituals and themes in her short stories and novels.
Kenneth Bonert, writing in The Forward, expressed the view that Jewish identity was rarely explored in her work: "For all of her Jewish heritage and personal connections (not only were her parents and family Jews, so were both of her husbands), overt signs of Jewishness are largely absent from her body of work. It's impossible to guess from the books alone that Gordimer was Jewish; and it would be easy to assume the contrary, since whenever Jews do appear in her fiction, they tend to be seen through the eyes of a non-Jew, looking in with almost anthropological fascination onto an alien culture."
In The Later Fiction of Nadine Gordimer (Palgrave Macmillan, 1993), edited by Bruce King, Michael Wade discussed Jewish identity as a repressed theme in Gordimer's novel A Sport of Nature (1987): "Any exploration of the Jewish theme in Nadine Gordimer's writing, especially her novels, is an exploration of the absent, the unwritten, the repressed." Wade noted parallels between Gordimer's white, Jewish social milieu and those of Jewish writers living in urban areas on America's east coast: "Jewishness functioning as a mysterious but ineluctable cultural component of individual identity and expressed as an aspect of the nominally Jewish writer's particular, unique quest for identity in a heterogeneous society".
Benjamin Ivry, writing in The Forward, highlighted several examples where Gordimer employed Jewish characters and themes: "Gordimer proved that indeed anything was possible when examining the personal significance of Yiddishkeit."
In 1951, she wrote "A Watcher of the Dead" for The New Yorker. It centres on the death of a Jewish grandmother and her family's observance of the ritual of Shemira, as they arrange for a shomer to watch over the body from the time of death until burial. The story appeared in the collection The Soft Voice of the Serpent the following year.
Gordimer's story "The Defeated" appeared in the same collection. It follows the narrator's friendship with a young Jewish immigrant, Miriam Saiyetovitz, whose parents operate a concession store among the mine compound stores. The two later study together at university to become teachers, and Miriam marries a doctor. The narrator visits Miriam's parents on an impulse at their store; they feel abandoned by Miriam, who rarely visits from Johannesburg with their grandson. The narrator explains: "I stood there in Miriam's guilt before the Saiyetovitzes, and they were silent, in the accusation of the humble." For Wade: "Miriam's punishment of her parents for their otherness is severe and complete, and conceals Gordimer's own desire to avenge her sense of displacement on her parents for their otherness."
In her debut novel The Lying Days (1953), a major character, Joel Aaron, son of a working class Jewish shopkeeper, acts as a voice of conscience. He has progressive, enlightened views about apartheid. His ethical stances and sense of Jewish identity and ancestry impresses his non-Jewish white middle-class friend, Helen: "His nature had for mine the peculiar charm of the courage to be itself without defiance." Joel is known for his intelligence and integrity. In contrast to Miriam in "The Defeated", Aaron effortlessly accepts his parents and their background. He is a Zionist and makes aliyah to Israel.
In A World of Strangers (1958), there is less Jewish character development, with only a reference to an older man at a party with a thick Eastern European accent with an attractive blonde spouse. In Occasion for Loving (1963), a Jewish character, Boaz Davis appears, but for Wade: "the only Jewish thing is his name".
For Wade, Gordimer saw her father as the most emblematic symbol of Jewishness in her household: "she was compelled to make him both the sign of Jewishness and the object of her rejection." The Jewish otherness is also attributed to the patriarch in "Harry's Presence", a 1960 short story by Gordimer. It is notable as Gordimer's only treatment of the Jewish immigrant experience that does not include or mention black characters.
In 1966, Gordimer wrote an original story for The Jewish Chronicle. "The Visit" includes an extract from the Talmud and follows David Levy returning home from a Friday night Shabbat service. In the same year she published "A Third Presence" for The London Magazine. The story follows two Jewish sisters, Rose and Naomi Rasovsky. According to Wade: "The story's ending indicates that Gordimer has not yet broken through the wool-and-iron barriers of confusion and conflict aroused by the question of her Jewish identity."
In 1983, she published "Letter from His Father" in The London Review of Books, a response to Franz Kafka's "Letter to His Father". In the letter, Gordimer makes references to Yiddish, Yom Kippur, Aliyah, Kibbutzim and Yiddish theatre.
Hillela, a Jewish South African woman, figures as the protagonist of A Sport of Nature (1987). Wade concluded: "By writing A Sport of Nature in the transcendent style she chose, she tried again to give meaning to her personal muddle over Jewish identity and experience, this time by creating Hillela, whose name represents the deepest moral and prophetic tradition in Jewish history, and who, united with Reuel (=Jethro), the great (not-Jewish) guide and adviser of the beginnings of that history, is able to resolve the inherent contradictions of (the writer's?) white-South-African-radical-Jewish identity. But Hillela is perhaps the most striking example in all Gordimer's writing of 'the Jew that went away', and it is not clear that she succeeds in creating the new sign she seems to have sought."
In the short story "My Father Leaves Home", that appears in Jump: And Other Stories (1991), Gordimer describes an Eastern European shtetl, presumably the hometown of the title character. The anti-semitism the character faced in Europe makes him more sensitive to racism against black people in South Africa.
In Gordimer's final novel No Time Like the Present (2012), one of the central characters, Stephen, is half-Jewish and married to a Zulu woman. His nephew's Bar Mitzvah prompts a meditation on his own Jewish background and he fails to grasp his brother's embrace of Judaism.
Nobel Prize in Literature
Gordimer was nominated for the Nobel Prize in Literature in 1972 and 1973 by Swedish Academy member Artur Lundkvist.
Honours and awards
W. H. Smith Commonwealth Literary Award for Friday's Footprint (1961)
James Tait Black Memorial Prize for A Guest of Honour (1972)
Booker Prize for The Conservationist (1974)
Central News Agency Literary Award for The Conservationist (1974)
Grand Aigle d'Or (France) (1975)
Orange Prize shortlist (1998); she declined
Central News Agency Literary Award for Burger's Daughter (1979)
Central News Agency Literary Award for July's People (1981)
Scottish Arts Council Neil M. Gunn Fellowship (1981)
Modern Language Association Honorary Fellow (1984)
Rome Prize (1984)
Premio Malaparte (Italy) (1985)
Nelly Sachs Prize (Germany) (1985)
Bennett Award (United States) (1987)
Anisfield-Wolf Book Award for A Sport of Nature (1988)
Inducted as an honorary member into Phi Beta Kappa (1988)
Central News Agency Literary Award for My Son's Story (1990)
Nobel Prize for Literature (1991)
International Botev Prize Laureate (1996)
Commonwealth Writers' Prize for the Best Book from Africa for The Pickup (2002)
Booker Prize longlist for The Pickup (2001)
Officier of the Legion of Honour (2007)
American Philosophical Society, Member (2008)
American Academy of Arts and Letters, Honorary Member (1979)
American Academy of Arts and Sciences, Honorary Member (1980)
Royal Society of Literature, Fellow
Congress of South African Writers, Patron
Ordre des Arts et des Lettres, Commander
15 honorary degrees
Senior Fellow, Massey College of the University of Toronto
Golden Plate Award of the American Academy of Achievement presented by Awards Council member Archbishop Desmond Tutu at an awards ceremony at St. George's Cathedral in Cape Town, South Africa (2009)
Order of the Aztec Eagle
Tribute
On 20 November 2015, Google celebrated her 92nd birthday with a Google Doodle.
Bibliography
Novels
The Lying Days (1953)
A World of Strangers (1958)
Occasion for Loving (1963)
The Late Bourgeois World (1966)
A Guest of Honour (1970)
The Conservationist (1974) – joint winner of the Booker Prize in 1974
Burger's Daughter (1979)
July's People (1981)
A Sport of Nature (1987)
My Son's Story (1990)
None to Accompany Me (1994)
The House Gun (1998)
The Pickup (2001)
Get a Life (2005)
No Time Like the Present (2012)
Plays
The First Circle, in Six One-act Plays by South African Authors (1949)
Short fiction
Collections
Face to Face (1949)
The Soft Voice of the Serpent (1952)
Six Feet of the Country (1956)
Which New Era Would That Be? (1956)
Friday's Footprint (1960)
Not for Publication (1965)
Livingstone's Companions (1970)
"City Lovers" (1975)
Selected Stories (1975)
Some Monday for Sure (1976)
No Place Like: Selected Stories (1978)
A Soldier's Embrace (1980)
Town and Country Lovers (1982), published by Sylvester & Orphanos
Something Out There (1984)
Correspondence Course and other Stories (1984)
The Moment Before the Gun Went Off (1988)
Once Upon a Time (1989)
Crimes of Conscience (1991)
Jump: And Other Stories (1991)
Why Haven't You Written: Selected Stories 1950-1972 (1992)
Something for the Time Being 1950-1972 (1992)
Loot and Other Stories (2003)
Beethoven Was One-Sixteenth Black (2007)
"A Beneficiary" (2007)
Essays, reporting and other contributions
The Black Interpreters (1973)
What Happened to Burger's Daughter or How South African Censorship Works (1980)
The Essential Gesture: Writing, Politics and Places (1988)
Writing and Being: The Charles Eliot Norton Lectures (1995)
Living in Hope and History (1999)
Edited works
Telling Tales (2004)
Other
The Gordimer Stories (1981–82) – adaptations of seven short stories; she wrote screenplays for four of them
On the Mines (1973)
Lifetimes Under Apartheid (1986)
Choosing for Justice: Allan Boesak (1983) (documentary with Hugo Cassirer)
Berlin and Johannesburg: The Wall and the Colour Bar (documentary with Hugo Cassirer)
Reviews
Girdwood, Alison (1984), Gordimer's South Africa, a review of Something Out There, in Parker, Geoff (ed.), Cencrastus No. 18, Autumn 1984, p. 50.
See also
List of female Nobel laureates
List of Jewish Nobel laureates
References
Further reading
Brief biographies
LitWeb.net: Nadine Gordimer Biography (2003)
Guardian Books "Author Page", with profile and links to further articles
Obituaries
The Guardian
The Independent
The New York Times
The Washington Post
The Wall Street Journal
Critical studies
Stephen Clingman, The Novels of Nadine Gordimer: History from the Inside (1986)
John Cooke, The Novels of Nadine Gordimer
Andrew Vogel Ettin, Betrayals of the Body Politic: The Literary Commitments of Nadine Gordimer (1993)
Dominic Head, Nadine Gordimer (1994)
Christopher Heywood, Nadine Gordimer (1983)
Santayana, Vivek. 2021. Most difficult and least glamorous: the politics of style in the late works of Nadine Gordimer. Doctoral dissertation, University of Edinburgh.
Rowland Smith, editor, Critical Essays on Nadine Gordimer (1990)
Barbara Temple-Thurston, Nadine Gordimer Revisited (1999)
Kathrin Wagner, Rereading Nadine Gordimer (1994)
Louise Yelin, From the Margins of Empire: Christina Stead, Doris Lessing, Nadine Gordimer (1998)
Nadine Gordimer's Politics Article by Jillian Becker in Commentary, February 1992
Articles
Ian Fullerton, Politics and the South African Novel in English, in Bold, Christine (ed.) Cencrastus No. 3, Summer 1980, pp. 22 & 23
Short reviews
Index of New York Times articles on Gordimer
Speeches and interviews
Ian Fullerton & Glen Murray, An Interview with Nadine Gordimer, in Murray, Glen (ed.), Cencrastus No. 6, Autumn 1981, pp. 2 – 5
Nadine Gordimer, Nancy Topping Bazin, and Marilyn Dallman Seymour, Conversations with Nadine Gordimer (1990)
Nobel Lecture, 7 December 1991: Writing and Being
Nadine Gordimer: The Ultimate Safari reading from 2007 PEN World Voices Festival
A Conversation with Nadine Gordimer at The Arthur Miller Freedom to Write Lecture, 2007 from PEN American Center
Biographies
Ronald Suresh Roberts, No Cold Kitchen: A Biography of Nadine Gordimer (2005)
Research archives
Collection Index for Nadine Gordimer Short Stories and Novel Manuscript collection, 1958–1965 (Harry Ransom Humanities Research Center, University of Texas, Austin, Texas)
Guide to the Gordimer manuscripts, 1934–1991 (Lilly Library, Indiana University, Bloomington, Indiana)
Nadine Gordimer Collection at the Harry Ransom Center at the University of Texas at Austin
External links
Short Stories by Nadine Gordimer on the Web
1923 births
2014 deaths
Nobel laureates in Literature
South African Nobel laureates
Women Nobel laureates
Booker Prize winners
Fellows of the Royal Society of Literature
James Tait Black Memorial Prize recipients
Jewish dramatists and playwrights
Jewish South African anti-apartheid activists
South African anti-apartheid activists
Jewish atheists
Jewish women writers
Recipients of the Legion of Honour
People from Springs, Gauteng
South African atheists
South African dramatists and playwrights
South African Jews
South African people of British-Jewish descent
South African people of Lithuanian-Jewish descent
South African women novelists
South African women short story writers
South African short story writers
20th-century South African novelists
20th-century South African women writers
21st-century South African novelists
21st-century South African women writers
South African women dramatists and playwrights
White South African anti-apartheid activists
20th-century dramatists and playwrights
20th-century short story writers
21st-century short story writers
Columbia University faculty
The New Yorker people
Academic staff of the University of Toronto
Jewish Nobel laureates
South African secular Jews | Nadine Gordimer | [
"Technology"
] | 7,069 | [
"Women Nobel laureates",
"Women in science and technology"
] |
179,628 | https://en.wikipedia.org/wiki/Pharmaceutical%20Research%20and%20Manufacturers%20of%20America | Pharmaceutical Research and Manufacturers of America (PhRMA), formerly known as the Pharmaceutical Manufacturers Association, is an American trade group representing companies in the pharmaceutical industry. Founded in 1958, PhRMA lobbies on behalf of pharmaceutical companies. PhRMA is headquartered in Washington, D.C.
The organization has lobbied fiercely against allowing Medicare to negotiate drug prices for Medicare recipients, and filed lawsuits against the drug price provisions in the Inflation Reduction Act. At the state level, the organization has lobbied to prevent price limits and greater price transparency for drugs. The organization claims that higher prices incentivize research and development, even though pharmaceutical spending on marketing exceeds that spent on research, including off-label promotion that has resulted in settlements in the billions of dollars.
PhRMA has given substantial dark money donations to right-wing advocacy groups such as the American Action Network (which lobbied heavily against the Affordable Care Act), Americans for Prosperity, and Americans for Tax Reform.
The organization has also lobbied against lowering drug prices internationally. The most visible conflict has been over AIDS drugs in Africa. Despite the role that patents have played in maintaining higher drug costs for public health programs across Africa, the organization worked to minimize the effect of the Doha Declaration, which said that TRIPS should not prevent countries from dealing with public health crises and allowed for compulsory licenses. The organization also opposed a World Trade Organization TRIPS Agreement waiver during the COVID-19 pandemic, which would have reduced the price of COVID-19 vaccines for low-income countries.
Membership
Leadership
Daniel O'Day, Chairman and Chief Executive Officer of Gilead Sciences, is chairman of the PhRMA board. Albert Bourla, DVM, PhD, Chairman and Chief Executive Officer of Pfizer, is board chair-elect and Paul Hudson, Chief Executive Officer of Sanofi, is board treasurer.
Since 2015, the president of the organization has been Stephen J. Ubl. Previous leadership includes: John J. Castellani, formerly head of the Business Roundtable, a U.S. advocacy and lobbying group, Billy Tauzin, a former Republican congressman from Louisiana, and John J. Horan, former CEO and chairman of Merck & Co.
Members
Current member companies include Alkermes, Amgen, Astellas Pharma, Bayer, Biogen, BioMarin Pharmaceutical, Boehringer Ingelheim, Bristol Myers Squibb, CSL Behring, Daiichi Sankyo, Eisai, Eli Lilly and Company, EMD Serono, Genentech, Genmab, Gilead Sciences, GlaxoSmithKline, Incyte, Ipsen, Johnson & Johnson, Lundbeck, Merck & Co., Neurocrine Biosciences, Novartis, Novo Nordisk, Otsuka Pharmaceutical, Pfizer, Sage Therapeutics, Sanofi, Takeda Pharmaceutical Company, and UCB.
Programs
SMARxT Disposal is a joint program run by the U.S. Fish and Wildlife Service, the American Pharmacists Association, and PhRMA to encourage consumers to properly dispose of unused medicines to avoid harm to the environment.
The Partnership for Prescription Assistance is a program by PhRMA and its member companies that connects patients in need with information on low-cost and free prescription medication. In 2017, PhRMA raised concerns over price increases by Marathon Pharmaceuticals for an out-of-patent drug used to treat Duchenne muscular dystrophy.
The organization has also advocated abroad, in South Africa, regarding pharmaceutical intellectual property rules.
In 2017, the organization had revenue of $455 million, $128 million of which was spent on lobbying activities.
The organization has notably opposed market pricing strategies of Valeant Pharmaceuticals, deriding the firm as having a strategy "reflective of a hedge fund".
In January 2018, the organization introduced the "Let's Talk About Cost" website, which made the argument that much of the cost of medication goes to middlemen unassociated with pharmaceutical companies.
See also
Biotechnology Innovation Organization (BIO)
Association of the British Pharmaceutical Industry
Ethics in pharmaceutical sales
European Federation of Pharmaceutical Industries and Associations (EFPIA)
Generic Pharmaceutical Association
International Federation of Pharmaceutical Manufacturers Associations (IFPMA)
International Intellectual Property Alliance (IIPA)
Japan Pharmaceutical Manufacturers Association
Pharmaceutical Inspection Convention and Pharmaceutical Inspection Co-operation Scheme
Pharmaceutical marketing
Portuguese Pharmaceutical Industry Association
References
External links
Official website
Pharmaceutical companies of the United States
1958 establishments in the United States
Organizations established in 1958
Life sciences industry
Pharmaceutical industry trade groups
Health industry trade groups based in the United States
Medical and health organizations based in Washington, D.C.
Lobbying organizations in the United States
1958 establishments in Washington, D.C. | Pharmaceutical Research and Manufacturers of America | [
"Biology"
] | 953 | [
"Life sciences industry"
] |