| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
19,441,259 | https://en.wikipedia.org/wiki/Plasma%20Acoustic%20Shield%20System | The Plasma Acoustic Shield System, or PASS, is being developed by Stellar Photonics, which received a $2.7 million contract from the U.S. Government to build it. The system is part of a project supervised by the United States Army Armament Research, Development and Engineering Center. The laser was first tested in 2008, with testing planned to continue into 2009 using a turret-mounted PASS.
Function
The device is able to disorient an enemy using a series of mid-air explosions, and may also use "high-power speakers for hailing or warning, and a dazzler light source". Its low power means it would be unable to do significant damage to a specific enemy. While it would not be classified as a weapon, because of its inability to stun or disable a target, its distracting light and explosions are intended to impede the progress of those in its path. The PASS "creates a 'mid-air plasma ball' that 'basically ignites the air in front of the person... It creates fireworks right in front of you.'"
Description
The PASS uses Synchronized Photo-pulse Detonation (SPD), a technology researched by Stellar Photonics wherein two short but powerful laser pulses first create a ball of plasma, then a supersonic shockwave creates a flash and a loud bang. PASS is the first functioning SPD weapon system, and it may lead to the construction of a "man-portable tuneable laser weapon that could be used in both non-lethal and lethal modes".
References
External links
New Scientist Article about the PASS
Stellar Photonics Webpage
Picatinny Advanced Energy Weapons Systems Webpage
Plasma physics facilities
Military lasers | Plasma Acoustic Shield System | Physics | 351 |
18,856,018 | https://en.wikipedia.org/wiki/Water%20fluoridation%20by%20country | Water fluoridation is the controlled addition of fluoride to a public water supply to reduce tooth decay, and is handled differently by countries across the world.
Water fluoridation is considered very common in the United States, Canada, Ireland, Chile and Australia where over 50% of the population drinks fluoridated water.
Most European countries including Italy, France, Finland, Germany, Sweden, Netherlands, Austria, Poland, Hungary and Switzerland do not fluoridate water.
Fluoridated water contains fluoride at a level that is proven effective for preventing cavities; this can occur naturally or by adding fluoride. Fluoridated water creates low levels of fluoride in saliva, which reduces the rate at which tooth enamel demineralizes, and increases the rate at which it remineralizes in the early stages of cavities. Typically, a fluoridated compound is added to drinking water, a process that in the U.S. costs an average of about $ per person-year. Defluoridation is needed when the naturally occurring fluoride level exceeds recommended limits. In 2011, the World Health Organization suggested a level of fluoride from 0.5 to 1.5 mg/L (milligrams per liter), depending on climate, local environment, and other sources of fluoride. Bottled water typically has unknown fluoride levels.
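The guideline band above amounts to a simple classification rule. The following is a minimal sketch in Python with entirely hypothetical names (classify_fluoride is illustrative, not code from the WHO or any water authority), using the 2011 WHO band of 0.5–1.5 mg/L cited above:

```python
# 2011 WHO guideline band for fluoride in drinking water, per the text above.
WHO_MIN_MG_L, WHO_MAX_MG_L = 0.5, 1.5

def classify_fluoride(mg_per_litre: float) -> str:
    """Roughly classify a measured drinking-water fluoride level (mg/L)."""
    if mg_per_litre < WHO_MIN_MG_L:
        return "below guideline range (little protective effect expected)"
    if mg_per_litre > WHO_MAX_MG_L:
        return "above guideline range (defluoridation may be needed)"
    return "within guideline range"

for sample in (0.3, 0.7, 2.1):  # illustrative measurements in mg/L
    print(f"{sample} mg/L: {classify_fluoride(sample)}")
```

The appropriate target within that band still depends on climate, local environment, and other fluoride sources, as noted above.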
Health effects
Dental caries remain a major public health concern in most industrialized countries, affecting 60–90% of schoolchildren and the vast majority of adults. Water fluoridation reduces cavities in children, while its efficacy in adults is less clear. A Cochrane review estimates that, for children with no other access to sources of fluoride, water fluoridation reduces cavities by 35% in baby teeth and 26% in permanent teeth. Recent studies suggest that water fluoridation, particularly in industrialized countries, may be unnecessary because topical fluorides (such as in toothpaste) are widely used and cavity rates have become low. For this reason, some scientists consider fluoridation unethical because of the lack of informed consent. However, a recent NHS-funded study found no significant difference between people who receive fluoridated water and those who do not, either in missing teeth or in reduced social inequities.
Although fluoridation can cause dental fluorosis, which can alter the appearance of developing teeth (enamel fluorosis), the effects are mild and usually not considered to be of aesthetic or public-health concern. There is no clear evidence of other adverse effects from water fluoridation, as found by the York review of 2000. A 2007 Australian systematic review used the same inclusion criteria as York's, plus one additional study; this did not affect the York conclusions. Fluoride's effects depend on the total daily intake of fluoride from all sources. Drinking water is typically the largest source; other methods of fluoride therapy include fluoridation of toothpaste, salt, and milk. Views on the most effective method for community prevention of tooth decay are mixed. The Australian government states that water fluoridation is the most effective means of achieving community-wide fluoride exposure. The World Health Organization states that water fluoridation, when feasible and culturally acceptable, has substantial advantages, especially for subgroups at high risk, while the European Commission finds no advantage to water fluoridation compared with topical use.
Currently, about 372 million people (around 5.7% of the world population) receive artificially fluoridated water in about 24 countries, including Australia, Brazil, Canada, Chile, the Republic of Ireland, Malaysia, the U.S., and Vietnam. A further 57.4 million people receive water with naturally occurring fluoride at or above optimal levels in countries such as Sweden, China, Sri Lanka, Finland, Zimbabwe and Gabon. Community water fluoridation is rare in continental Europe, where 97–98% of the population does not receive fluoridated drinking water; fluoridated salt and milk are promoted in some European countries instead. Water fluoridation has been replaced by other approaches in many countries where water supplies are too decentralized for it to be practical, or where existing natural fluoride levels were already ample, including Germany, Finland, Japan, the Netherlands, Sweden, Switzerland (Swiss water contains about 1 mg of fluoride per litre, compared with the 0.3–0.7 mg/L used in the US), Denmark and, for a time, Israel. Studies, such as a recent one in Calgary, Alberta, have found that cessation of water fluoridation results in increased rates of dental decay. While fluoridation can result in mild dental fluorosis, this effect is barely detectable and raises no concerns about the appearance or health of teeth. Countries practicing artificial water fluoridation vary in their recommended fluoride levels according to what their health authorities have determined to be most effective. The US recently lowered its recommended optimal level of fluoride in drinking water slightly, because of observed increases in fluorosis, likely due to additional fluoride sources such as toothpaste and mouthwash that were not present when the level was originally set.
Africa
Of Africa's 1.1 billion people, about 400,000 get artificially-fluoridated water (in Libya, data pre-2003).
Libya
Before 2003, 400,000 Libyans were receiving artificially-fluoridated water.
Nigeria
Only a fraction of Nigerians receive water from waterworks, so water fluoridation affects very few people. A 2009 study found that about 21% of water sources naturally contain fluoride within the recommended range of 0.3–0.6 ppm; about 62% have fluoride below this range.
South Africa
South Africa's Health Department recommends adding fluoridation chemicals to drinking water in some areas. It also advises removal of fluoride from drinking water (defluoridation) where the fluoride content is too high.
Legislation around mandatory fluoridation was introduced in 2002, but has been delayed since then pending further research after opposition from water companies, municipalities and the public.
Zambia
Approximately 947,000 people (7% of the population) receive water with naturally occurring fluoride in it.
Zimbabwe
Roughly 2,600,000 people (21% of the population) receive water with naturally occurring fluoride in it.
Asia
China
Many areas in China have fluoride at levels far higher than recommended, due to natural occurrence or industrial contamination, resulting in widespread skeletal fluorosis. The national standard for water fluoridation is 1 mg/L, with a higher limit of 1.2 mg/L for rural areas. Water fluoridation began in 1965 in the urban area of Guangzhou. It was interrupted during 1976–1978 by a shortage of sodium silicofluoride, resumed only in the Fangcun district of the city, and halted in 1983 after opponents claimed that fluoride levels were already sufficiently high in local foods and tea. Later analysis in 1988 found that the incidence of dental caries among 4-year-old children had increased by 62%. The fluoridation had reduced the number of cavities but increased dental fluorosis; the fluoride levels may have been set too high, and low-quality equipment led to inconsistent, and often excessive, fluoride concentrations.
Hong Kong
All Hong Kong residents receive naturally occurring fluoride in their water, at about half the traditionally recommended fluoride level. The Water Supplies Department fluoridates rainwater from 17 local reservoirs at 21 treatment plants. Recent tests showed drinking water to have an average fluoride level of 0.48 mg/L, and a maximum of 0.69 mg/L.
India
Water fluoridation is not practiced in India. Due to naturally occurring fluoride, both skeletal and dental fluorosis are endemic in at least 20 states, including Uttarakhand, Jharkhand and Chhattisgarh. The maximum permissible limit of fluoride in drinking water in India is 1.2 mg/L, and the government has been obliged to install fluoride removal plants of various technologies to reduce fluoride levels arising from industrial waste and mineral deposits. Reverse osmosis plants are now widely used, and household and public-system units are common in the market. Alleppey in Kerala is the area most affected by over-fluoridated water; government-installed reverse osmosis plants there supply free filtered water, and the Rotary Club of Saratoga, USA, helped to install three of them in rural Alleppey.
There are 14,132 habitations in 19 states that still have fluoride above permissible levels in their drinking water. Rajasthan has the highest number of such habitations (7,670), followed by Telangana (1,174), Karnataka (1,122) and Madhya Pradesh (1,055). Assam, Andhra Pradesh, Bihar, Chhattisgarh, Maharashtra, Odisha, West Bengal and Uttar Pradesh also have such habitations.
The government of India launched the National Programme for Prevention and Control of Fluorosis in 2008–2009. In 2013–2014, the programme was brought under the National Rural Health Mission, which has so far covered 111 districts. The programme includes surveillance of fluorosis in the community, training and manpower support, establishment of diagnostic facilities, treatment and health education. The Indian Council of Medical Research has formed a task force on fluorosis to address issues related to prevention and control.
Israel
Fluoride was required in water supplies nationwide by legislation passed in 2002, but the requirement was repealed in 2014, and artificial fluoridation was discouraged by national health officials, effectively ending the practice in Israel for a time.
Mekorot, Israel's national water company, states: "In the South of the country, it is unnecessary to add fluoride because it is found naturally in the water." Water fluoridation was introduced in Israel's large cities in 1981, and a national effort to fluoridate all the country's water was approved in 1988.
In 2002, the Union of Local Authorities (ULA) and others petitioned Israel's High Court to stop the Health Ministry from forcing cities to implement water fluoridation. The court soon issued a restraining order, but after half a year ULA withdrew its petition upon the request of the court.
By 2011, about 65% of the municipalities and local authorities in Israel had agreed to allow fluoridation, and there was active opposition to its spread to towns where it had not yet been instituted. In 2011, the Health and Welfare Committee of the Knesset criticized the Health Ministry for continuing water fluoridation.
On 26 August 2014, Israel officially stopped adding fluoride to its water supplies. According to a Ministry of Health press release statement, the reasons it ended water fluoridation were: "Only some 1% of the water is used for drinking, while 99% of the water is intended for other uses (industry, agriculture, flushing toilets etc.). There is also scientific evidence that fluoride in large amounts can lead to damage to health. When fluoride is supplied via drinking water, there is no control regarding the amount of fluoride actually consumed, which could lead to excessive consumption. Supply of fluoridated water forces those who do not so wish to also consume water with added fluoride." Many in the medical and dental communities in Israel criticized the decision as a mistake.
After the election of 2015, the new deputy Health Minister Yaakov Litzman announced that the fluoridation program will be re-debated.
In 2022, the Israel Journal of Health Policy Research released a study titled The effect of community water fluoridation cessation on children's dental health: a national experience which concluded that lack of fluoride in the water caused a surge in tooth decay, especially in children. The paper stated, "our results clearly show the benefits of CWF in maintaining pediatric dental health. It seems that CWF was stopped for political reasons, and the lack of fluoride has led to an increase in dental problems which can cause systemic health issues."
As of July 2024, although there is no longer any legal impediment to resuming drinking-water fluoridation, it has not resumed in practice, and no drinking water in Israel is fluoridated.
Japan
The first community water fluoridation programme was in Kyoto Prefecture in 1952, lasting 13 years. The second was established by US military authorities in Okinawa Prefecture in 1957, lasting 15 years. The last was in Mie Prefecture in 1967, lasting 4 years.
Less than 1% of Japan practices water fluoridation. Instead, as of March 2010, a total of 7,479 schools and 777,596 preschool to junior high school children were participating in a school-based fluoride mouth-rinsing programme (S-FMR), with an estimated 2,000,000 children participating in 2020.
South Korea
In 2005, the ruling Uri Party proposed legislation for compulsory water fluoridation for municipalities. The legislation failed, and only 29 out of around 250 municipal governments had introduced water fluoridation at that time. Fluoridation was proposed again in 2012.
Malaysia
In 1998, 66% of Malaysians were getting fluoridated water.
In 2010, Bernama reported, "Principal Director (Oral Health) in the Health Ministry, Datuk Dr Norain Abu Taib said that only 75.5% of the country's population are enjoying the benefits of water fluoridation".
Singapore
In 1956, Singapore was the first Asian country to institute a water fluoridation program that covered 100% of the population. Water is fluoridated to a typical value of 0.4-0.6 mg per litre.
Taiwan
Taiwan does not currently fluoridate its water.
Vietnam
Only about 4% of the population of Vietnam receives fluoridated water, and only 70% get their water from public supplies at all. Many places in Vietnam already have sufficient levels of natural fluoride; in some cases, concentrations were too high and needed to be reduced to avoid fluorosis.
Europe
Out of a population of about three-quarters of a billion, under 14 million people (approximately 2%) in Europe receive artificially-fluoridated water. Those people are in the UK (5,797,000), Republic of Ireland (4,780,000), Spain (4,250,000), and Serbia (300,000).
The first water fluoridation in Europe was in West Germany and Sweden in 1952, bringing fluoridated water to about 42,000 people. By mid-1962, about 1 million Europeans in 18 communities in 11 countries were receiving fluoridated water.
Many European countries have rejected water fluoridation, including: Austria, Belgium, Finland, France, Germany, Hungary, Luxembourg, Netherlands, Northern Ireland, Norway, Sweden, Switzerland, Scotland, Iceland, and Italy. A 2003 survey of over 500 Europeans from 16 countries concluded that "the vast majority of people opposed water fluoridation".
Austria
Austria has never implemented fluoridation, owing to an adequate level of fluoride in drinking water according to a 1993 study (Nell A, Sperr W. "Fluoridgehaltuntersuchung des Trinkwassers in Österreich 1993" [Analysis of the fluoride content of drinking water in Austria 1993]. Wien Klin Wochenschr. 1994;106(19):608–14. In German. PMID 7998407).
Belgium
Belgium does not fluoridate its water supply, although legislation permits it.
Czech Republic
Czech Republic (previously Czechoslovakia) started water fluoridation in 1958 in Tábor. In Prague fluoridation started in 1975. It was stopped in Prague in 1988 and subsequently in the whole country. Since 2008 no water has been fluoridated. Fluoridated salt is available.
Croatia
Croatia does not fluoridate its tap water.
Denmark
Denmark has released test results for levels of various water contaminants, including fluoride, in the drinking water of some cities: Copenhagen, Brøndby, Albertslund, Dragør, Hvidovre, Rødovre, Vallensbæk, and Herlev.
Estonia
There is no water fluoridation in Estonia. About 5% of the population may be exposed to excessive natural fluoride in drinking waters, and there are measures to remove excess fluoride.
Finland
Kuopio is the only community in Finland with at least 70,000 people that has ever fluoridated its water; it stopped in 1992. In regions with rapakivi bedrock (small but densely populated regions), 22% of well waters and 55% of drilled-well waters exceed the legal limit of 1.5 mg/L; surface and well waters in affected regions generally contain 0.5–2.0 mg/L fluoride.
France
Fluoridated salt is available in France, and 3% of the population uses naturally fluoridated water, but the water is not artificially fluoridated.
Germany
Public drinking water supplies are not currently fluoridated in any part of Germany; however, the German Ministry of Health strongly encourages the use of fluoridated salt and toothpaste, as well as fluoride tablets and rinses, for children and adolescents.
Kassel-Wahlershausen in West Germany became the second location in Europe where water fluoridation was practiced in 1952. By 1962, no other part of the FRG was fluoridating, and Kassel-Wahlershausen discontinued the practice in 1971.
In the GDR (East Germany) in the late 1980s, about 3.4 million people (20%) were receiving water with added fluoride. Fluoride tablets were also provided. The fluoridated areas of the GDR included the towns of Karl-Marx-Stadt (now Chemnitz), Plauen, Zittau, and Spremberg. Children in those towns were part of large, long-running studies of caries prevalence. A fluoride cessation study found that, consistent with a previously observed population-wide phenomenon, the rate of cavities continued to drop after the fluoride concentration in water fell from the augmented 1.0 ppm to its natural level below 0.2 ppm. Water fluoridation was discontinued after German reunification, although it still exists on some US military bases.
Greece
There is no water fluoridation in Greece.
Hungary
In the early 1960s the city of Szolnok briefly fluoridated its water. The program was discontinued due to technical problems and a public view that fluoridation did not seem reasonable. Hungary has not used artificially fluoridated water since then.
Ireland
Ireland is the only European country with a policy of mandatory water fluoridation. Worldwide, the Irish Republic, Singapore and New Zealand are the only countries which implement mandatory water fluoridation.
The majority of drinking water in the Republic, (but not Northern Ireland), is fluoridated. In 2012, roughly 3.25 million people received artificially-fluoridated water. Almost 71% of the population in 2002 resided in fluoridated communities. All public water supplies are fluoridated and the remainder of the supplies are group water schemes which are privately owned and not fluoridated artificially. The fluoridation agent used is hydrofluorosilicic acid (HFSA; H2SiF6). In a 2002 public survey, 45% of respondents expressed some concern about fluoridation.
In 1957, the Department of Health established a Fluorine Consultative Council which recommended fluoridation at 1.0 ppm of public water supplies, then accessed by approximately 50% of the population. This was felt to be an effective way of preventing tooth decay, in an era before fluoridated toothpaste was commonly used. This led to the Health (Fluoridation of Water Supplies) Act 1960, which mandated compulsory fluoridation by local authorities. The statutory instruments made in 1962–65 under the 1960 Act were separate for each local authority, setting the level of fluoride in drinking water to 0.8–1.0 ppm. The current regulations date from 2007, and set the level to 0.6–0.8 ppm, with a target value of 0.7 ppm.
Implementation of fluoridation was held up by preliminary dental surveying and water testing, and a court case, Ryan v. Attorney General.
In 1960, the Fianna Fáil minister for health, Seán MacEntee, brought forward the Health (Fluoridation of Water Supplies) Act, and a Dublin housewife, Gladys Ryan, challenged the Act as an "invasion of family rights". Ryan lost the case, which lasted 65 days, at the High Court, and appealed to the Supreme Court. She was represented in court by Seán MacBride, who argued that fluoridation was an infringement of human rights since people had no option but to drink the water. Ryan's lawyers, including the politician Richie Ryan, worked on a pro bono basis, and expenses were paid by fundraising. In 1965, the Supreme Court rejected Gladys Ryan's appeal that the Act violated the Constitution of Ireland's guarantee of the right to bodily integrity.
By 1965, Greater Dublin's water was fluoridated; by 1973, other urban centres were too. Studies from the late 1970s to the mid 1990s showed a lower incidence of dental decay in schoolchildren living in areas where water was fluoridated than in areas where it was not. The government of the Republic of Ireland has yet to carry out a public health survey on the effects of fluoridation, even though this is required under the 1960 Health (Fluoridation of Water Supplies) Act.
A private member's bill to end fluoridation was defeated in the Dáil on 12 November 2013. It was supported by Sinn Féin and some of the technical group and opposed by the Fine Gael-Labour government and Fianna Fáil.
There is much local-government opposition to compulsory fluoridation, which is legally mandated nationwide by Dáil Éireann. Early in 2014, Cork County Council and Laois County Council passed motions for the cessation of water fluoridation; in autumn 2014, Cork City Council, Dublin City Council and Kerry County Council passed similar motions. However, because the 1960 law mandates artificial fluoridation of public water, city councils and corporations can vote to stop fluoridation but have no power to do so unless the law is repealed.
Fine Gael was formerly opposed to compulsory water fluoridation but now supports the policy. Fianna Fáil is in favour of compulsory water fluoridation, and in 2004 Micheál Martin set up the pro-fluoride Irish Expert Body on Fluorides and Health.
Italy
There is no water or food fluoridation in Italy. Except for isolated locations near volcanoes or pollution sources, fluoride levels in water are low across the country.
Latvia
There is no water fluoridation in Latvia. Riga's upper limit on natural fluoride is 1.5 mg/L.
Netherlands
Water was fluoridated in large parts of the Netherlands from 1960 to 1973, when the High Council of the Netherlands ruled fluoridation of drinking water unauthorized, as the authorities had no legal basis for adding chemicals to drinking water that did not contribute to a sound water supply. Drinking water has not been fluoridated in any part of the Netherlands since 1973.
Norway
In 2000, representatives of the Norwegian National Institute for Public Health reported that no cities in Norway were practicing water fluoridation. There had been intense discussion of the issue around 1980, but no ongoing political discussion in 2000. In recent years, Norway has continued its policy against water fluoridation. The Norwegian Directorate of Health has stated that there is no need for water fluoridation due to the low prevalence of dental caries and the availability of fluoride through other means, such as toothpaste and professional dental treatments. Public debate in Norway remains focused on promoting overall dental hygiene rather than introducing fluoridation of public water supplies.
Serbia
About 300,000 people in Serbia (3%) were receiving fluoridated water before 2003.
Spain
Around 10% of the population (4,250,000 people) receive fluoridated water.
Sweden
In 1952, Norrköping in Sweden became one of the first cities in Europe to fluoridate its water supply. Fluoridation was declared illegal by the Supreme Administrative Court of Sweden in 1961, re-legalized in 1962, and finally prohibited by parliament in 1971 after considerable debate; the parliamentary majority held that there were other and better ways of reducing tooth decay than water fluoridation. Four cities had received permission to fluoridate tap water while it was legal. An official commission was formed, and its final report, published in 1981, recommended other ways of reducing tooth decay (improving food and oral hygiene habits) instead of fluoridating tap water. It also found that many people considered fluoridation an infringement of personal liberty and freedom of choice by forcing them to be medicated, and that the long-term effects of fluoridation were insufficiently understood; a proper study of the effects of fluoridation on formula-fed infants was also lacking. In 2004, the maximum permitted fluoride level in drinking water was lowered to 1.5 mg/L.
Switzerland
In Switzerland, since 1962, two fluoridation programs had operated in tandem: water fluoridation in the City of Basel, and salt fluoridation in the rest of Switzerland (around 83% of domestic salt sold had fluoride added). However it became increasingly difficult to keep the two programs separate. As a result, some of the population of Basel were assumed to use both fluoridated salt and fluoridated water. In order to correct the situation, in April 2003 the Grand Council of Basel-Stadt resolved to cease water fluoridation and expand salt fluoridation to Basel.
United Kingdom
Around 14% of the population of the United Kingdom receives fluoridated water. About half a million people receive water that is naturally fluoridated with calcium fluoride, and about 6 million total receive fluoridated water. The Water Act 2003 required water suppliers to comply with requests from local health authorities to fluoridate their water.
The following UK water utility companies fluoridate their supply:
Anglian Water Services Ltd
Northumbrian Water Ltd
South Staffordshire Water plc
Severn Trent plc
United Utilities Water plc
Under earlier plans, fluoridation was introduced progressively between 1964 and 1988 in the Health Authority areas of Bedfordshire, Hertfordshire, Birmingham, the Black Country, Cheshire, Merseyside, County Durham, Tees Valley, Cumbria, Lancashire, North and East Yorkshire, Northern Lincolnshire, Northumberland, Tyne and Wear, Shropshire, Staffordshire, Trent and West Midlands South.
The South Central Strategic Health Authority carried out the first public consultation under the Water Act 2003, and in 2009 its board voted to fluoridate water supplies in the Southampton area to address the high incidence of tooth decay in children there. Surveys had found that the majority of surveyed Southampton residents opposed the plan, but the Southampton City Primary Care Trust decided that the public vote could not be the deciding factor, arguing that medical evidence showed fluoridation would reduce tooth decay and that claims of serious negative side effects were unsubstantiated. Fluoridation plans in the northwest of England were delayed after concerns were raised over increased projected costs and health risks. In October 2014, Public Health England abandoned plans for water fluoridation for 195,000 people in Southampton and neighbouring parts of south-west Hampshire, owing to opposition from both Hampshire County Council and Southampton City Council.
It was reported in 2007 that the UK Milk Fluoridation Programme, centered in the northwest of England, involved more than 16,000 children.
The water supply in Northern Ireland has never been artificially fluoridated except in two small localities where fluoride was added to the water for about 30 years. By 1999, fluoridation ceased in those two areas, as well.
In 2004, following a public consultation, Scotland's parliament rejected proposals to fluoridate public drinking water.
There are currently no community fluoridation schemes in Wales. The Welsh Government stated in November 2014 that it had no plans to fluoridate the water supply, but said that it was something the Welsh Government will continue to review.
In September 2021, the UK's chief medical officers concluded that fluoridation of water supplies would cut tooth decay.
North America
Canada
The decision to fluoridate lies with local governments, with guidelines set by provincial, territorial, and federal governments. Brantford, Ontario, became the first city in Canada to fluoridate its water supplies in 1945. In 1955, Toronto approved water fluoridation, but delayed implementation of the program until 1963 due to a campaign against fluoridation by broadcaster Gordon Sinclair. The city continues to fluoridate its water today.
In 2008, the recommended fluoride level in Canada was reduced from 0.8–1.0 mg/L to 0.7 mg/L to minimize the risk of dental fluorosis. Ontario, Alberta, and Manitoba have the highest rates of fluoridation, about 70–75%. The lowest rates are in Quebec (about 6%), British Columbia (about 4%; Vancouver does not add fluoride), and Newfoundland and Labrador (1.5%), with Nunavut and the Yukon having no fluoridation at all. Overall, about 45% of the Canadian population had access to fluoridated water supplies in 2007. A 2008 telephone survey found that about half of Canadian adults knew about fluoridation, and of these, 62% supported it.
In 2010, the Region of Waterloo held a non-binding referendum on whether water fluoridation should continue; 50.3% voted against fluoridation. The regional council honored the vote, and over forty years of fluoridation in Waterloo Region ended that November.
In 2011, Calgary city council voted 10–3 to stop adding fluoride to the city's drinking water, having started water fluoridation in 1991. A research project has been planned to study the effects of Calgary's cessation, using Edmonton as a control.
Lakeshore and Amherstburg have voted to end water fluoridation.
Hamilton, London, and Toronto have recently chosen to continue fluoridation. Toronto treats its water to 0.6 mg/L.
Fluoridation was gradually abandoned in the province of Quebec, with Montreal stopping the treatment in the areas where it was still in operation in 2024, leaving St-George as the last municipality in the province to maintain it.
On 28 January 2013, Windsor city council voted 8–3 to cease fluoridation of Windsor's drinking water for five years, honoring a February 2012 recommendation by the Windsor Utilities Commission. Tecumseh gets its water from Windsor, and Tecumseh's council had voted on 13 March 2012 to ask Windsor to stop fluoridating. Money formerly spent on fluoridation was reallocated to oral health and nutrition education programs. Windsor's water had been fluoridated for over fifty years. On 14 December 2018, Windsor city council voted 8–3 to reintroduce fluoridation of Windsor's drinking water. According to the Oral Health 2018 report released by the health unit, the percentage of children with tooth decay or requiring urgent care increased by 51 per cent in 2016–17 compared with 2011–12.
In 2021, Regina, Saskatchewan, city council voted to add fluoride to the city’s drinking water with the program expected to start once upgrades to the Buffalo Water Treatment plant are completed in 2025. Communities such as Saskatoon and Moose Jaw fluoridate their water, while others do not.
Mexico
Mexico has no water fluoridation program; instead it fluoridates table salt. However, the potable water in Mexico City has higher fluoride levels than recommended by the WHO.
United States
As of May 2000, 42 of the 50 largest U.S. cities had water fluoridation. In 2010, 66% of all U.S. residents, and 74% of those with access to community water systems, received fluoridated water. In 2010, a U.S. Centers for Disease Control and Prevention study determined that "40.7% of adolescents aged 12–15 had dental fluorosis [in 1999–2004]". In response, in 2011 the U.S. Department of Health and Human Services and the U.S. Environmental Protection Agency (EPA) proposed reducing the recommended level of fluoride in drinking water to the lowest end of the then-current range, 0.7 milligrams per liter (mg/L), from the previously recommended range of 0.7 to 1.2 mg/L, in recognition of the growth in other sources of fluoride such as fluoridated toothpastes and mouthwashes. This could effectively end municipal water fluoridation in areas where fluoride levels from mineral deposits and industrial pollution already exceed the new recommendation. As of 2021, the federal maximum contaminant level for fluoride in public water systems remains at 4.0 mg/L, as promulgated by the EPA in 1986. Several states have set more stringent standards, including New York, where the fluoride MCL is 2.2 mg/L.
As of 2023, approximately 73% of the U.S. population continues to receive fluoridated water. In the same year, the CDC reported that water fluoridation prevents roughly 25% of cavities in children and adults. Despite this, debates about the safety and necessity of fluoridation persist. Some municipalities, such as Portland, Oregon, have chosen not to fluoridate their water, citing concerns over potential health risks and the ethical implications of mass medication. Conversely, areas like San Francisco, California, have maintained their fluoridation programs, emphasizing the public health benefits, particularly for low-income populations who may have limited access to dental care. A 2022 study in the Journal of Public Health Dentistry found that cessation of water fluoridation in Calgary, Alberta, led to an increase in dental caries among children, reinforcing the CDC's stance on the importance of fluoridation.
Oceania
Australia
Australia now provides fluoridated water to 70% or more of the population in every state and territory. Many of Australia's drinking water supplies began fluoridation in the 1960s and 1970s; by 1984, almost 66% of the Australian population, across 850 towns and cities, had access to fluoridated drinking water. Some areas of Australia have natural fluoride in the groundwater, which in 1991 was estimated to provide drinking water to approximately 0.9% of the population.
The first town to fluoridate the water supply in Australia was Beaconsfield, Tasmania in 1953. Queensland became the last state to formally require the addition of fluoride to public drinking water supplies in December 2008.
Fiji
In 2011, Water Authority of Fiji announced that it would add fluoride to water supplied to residents of the Suva-Nausori corridor, with the long term goal of adding fluoride to water nationwide.
New Zealand
Water fluoridation began in Hastings, New Zealand in 1954. A Commission of Inquiry was held in 1957, and use expanded rapidly in the mid 1960s. New Zealand now supplies fluoridated water to about half of the total population. Of the six main centres, only Christchurch and Tauranga do not have a fluoridated water supply. Wellington's water supply is mostly fluoridated, but the suburbs of Petone and Korokoro receive a non-fluoridated supply, as do the Auckland suburbs of Onehunga and Huia Village.
In 2013, a Hamilton City Council committee voted to remove fluoride from late June 2013. A referendum held during the council elections in October 2013 saw approximately 70% of voters favour adding fluoride back into the water supply, and in March 2014 the council voted 9 to 1 to re-introduce it. In a 2007 referendum, about half of voters in Central Otago, South Otago and the Southland Region did not want fluoridation, and voters in the Waitaki District opposed water fluoridation in all wards. Ashburton and Greymouth also voted against fluoridation.
In 2014, the Prime Minister's Chief Science Advisor and the Royal Society of New Zealand published a report on the health effects of water fluoridation.
In June 2018, the Supreme Court of New Zealand in New Health New Zealand Inc v South Taranaki District Council upheld the legality of water fluoridation in New Zealand.
In late July 2022, Director-General of Health Ashley Bloomfield ordered 14 territorial authorities to add fluoride to their water supplies. Bloomfield stated that this measure would boost the share of the New Zealand population receiving fluoridated water from 51% to 60%.
Central and South America
Argentina
As of 2012, 21% of the Argentinian population had fluoridated water. The capital city, Buenos Aires, has its water fluoridated via a local scheme.
Brazil
By 2008, 41% of people (73.2 million) in Brazil were getting artificially-fluoridated water.
Water fluoridation was first adopted in Brazil in the city of Baixo Guandu, ES, in 1953. A 1974 federal law required new or enlarged water treatment plants to include fluoridation, and availability was greatly expanded in the 1980s, with optimum fluoridation levels set at 0.8 mg/L. Expansion of fluoridation in Brazil is now a governmental priority; between 2005 and 2008, fluoridation became available to 7.6 million people in 503 municipalities. As of 2008, 3,351 municipalities (60.6%) had adopted fluoridation, up from 2,466 in 2000.
Chile
In Chile, 70.5% of the population receives fluoridated water (10.1 million people via artificial fluoridation, 604,000 via naturally occurring fluoride). The Biobío Region is the only administrative division that does not fluoridate its water.
Colombia
In Bogotá, the average drinking-water fluoride concentration is 0.08 ppm. Medellín, whose drinking water contains an average fluoride concentration of 0.05 ppm, is the only city that has maintained an annual oral-health prevention programme, based on education and fluoridated mouth rinses in public schools, since 1981. Cartagena, located in the coastal region of Colombia with some of the highest average temperatures in the country, has drinking water with an average fluoride concentration of 0.08 ppm.
The average fluoride levels in Bogotá and Medellín are comparable with the values reported for the optimally fluoridated water of Indianapolis.
Guatemala
As of 2012, 1,800,000 people received fluoridated water, amounting to 13% of the population.
Guyana
In Guyana, 245,000 people, or 32% of the population, have access to fluoridated water. Of those, 45,000 have access to artificially fluoridated water, with the rest receiving naturally fluoridated water.
Panama
By 2012, over 15% (510,000 people) of the population were receiving artificially fluoridated water. There are fluoridation schemes in Panama City and San Miguelito.
Paraguay
Approximately 6% of the population, or 350,000 people, receive fluoridated water as of 2012.
Peru
An estimated 80,000 people drink naturally fluoridated water, with 500,000 people receiving artificially fluoridated water. This amounts to 2% of the population.
Venezuela
Following an unsuccessful rollout of water fluoridation, the government began a salt fluoridation program in 1995. Fluoride was introduced at a level of 60–90 mg F per kg of salt; this concentration was later raised to 180–220 mg F per kg, considered the appropriate range for preventing dental caries in the Latin American population, which is at minimal risk of dental fluorosis. Around 80% of table salt on the market is fluoridated.
References
Fluoridation | Water fluoridation by country | Chemistry | 8,473 |
15,190,840 | https://en.wikipedia.org/wiki/Chaotic%20hysteresis | A nonlinear dynamical system exhibits chaotic hysteresis if it simultaneously exhibits chaotic dynamics (chaos theory) and hysteresis. Since hysteresis involves the persistence of a state, such as magnetization, after the causal or exogenous force or factor is removed, it implies multiple equilibria for given sets of control conditions. Such systems generally exhibit sudden jumps from one equilibrium state to another (sometimes amenable to analysis using catastrophe theory). If chaotic dynamics appear either just before or just after such jumps, or persist throughout each of the various equilibrium states, the system is said to exhibit chaotic hysteresis. Chaotic dynamics are irregular, bounded, and subject to sensitive dependence on initial conditions.
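As a numerical illustration of the multiple-equilibria and jump behaviour described above, the sketch below (Python with NumPy; a toy bistable system chosen for simplicity, not a model from the chaotic-hysteresis literature) slowly sweeps the control parameter a of dx/dt = x − x³ + a up and then back down. The state jumps between the two stable branches at different values of a, tracing a hysteresis loop; in chaotic hysteresis proper, the coexisting states between which the system jumps would be chaotic attractors rather than fixed points.

```python
import numpy as np

def sweep(a_values, x0, dt=0.01, steps_per_a=200):
    """Track the state of dx/dt = x - x**3 + a while a is swept slowly."""
    x, trace = x0, []
    for a in a_values:
        for _ in range(steps_per_a):
            x += dt * (x - x**3 + a)  # forward-Euler step of the fast dynamics
        trace.append(x)
    return np.array(trace)

a = np.linspace(-1.0, 1.0, 400)
up = sweep(a, x0=-1.0)               # lower branch; jumps up near a = +2/(3*sqrt(3)) = +0.385
down = sweep(a[::-1], x0=1.0)[::-1]  # upper branch; jumps down near a = -0.385
# For |a| < 0.385 the two passes disagree: that mismatch is the hysteresis loop.
mid = len(a) // 2
print(f"state at a = 0: {up[mid]:+.2f} sweeping up, {down[mid]:+.2f} sweeping down")
```

The sweep direction determines which branch the system occupies between the two fold points, which is exactly the persistence-of-state property the definition describes.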
Background and applications
The term was introduced initially by Ralph Abraham and Christopher Shaw (1987), but was modeled conceptually earlier and has been applied to a wide variety of systems in many disciplines. The first model of such a phenomenon was due to Otto Rössler in 1983, which he viewed as applying to major brain dynamics, and arising from three-dimensional chaotic systems. In 1986 it was applied to electric oscillators by Newcomb and El-Leithy, perhaps the most widely used application since (see also Pecora and Carroll, 1990).
The first to use the term for a specific application was J. Barkley Rosser, Jr. in 1991, who suggested that it could be applied to explaining the process of systemic economic transition, with Poirot (2001) following up on this in regard to the Russian financial crisis of 1998. Empirical analysis of the phenomenon in the Russian economic transition was done by Rosser, Rosser, Guastello, and Bond (2001). While he did not use the term, Tönu Puu (1989) presented a multiplier-accelerator business cycle model with a cubic accelerator function that exhibited the phenomenon.
The concept has also been consciously applied to Rayleigh–Bénard convection rolls, to hysteretic scaling in ferromagnetism, and to a pendulum on a rotating table (Berglund and Kunz, 1999); to induction motors (Súto and Nagy, 2000); to combinatorial optimization in integer programming (Wataru and Eitaro, 2001); to isotropic magnetization (Hauser, 2004); to bursting oscillations in pancreatic beta cells and to population dynamics (Françoise and Piquet, 2005); to thermal convection (Vadasz, 2006); and to neural networks (Liu and Xiu, 2007).
References
Ralph H. Abraham and Christopher D. Shaw. “Dynamics: A Visual Introduction.” In F. Eugene Yates, ed., Self-Organizing Systems: The Emergence of Order. New York: Plenum Press, pp. 543–597, 1987.
Otto E. Rössler. "The Chaotic Hierarchy." Zeitschrift für Naturforschung 1983, 38a, pp. 788–802.
R.W. Newcomb and N. El-Leithy. “Chaos Generation Using Binary Hysteresis.” Circuits, Systems and Signal Processing September 1986, 5(3), pp. 321–341.
L.M. Pecora and T.L. Carroll. “Synchronization in Chaotic Systems.” Physical Review Letters February 19, 1990, 64(8), pp. 821–824.
J. Barkley Rosser, Jr. From Catastrophe to Chaos: A General Theory of Economic Discontinuities. Boston/Dordrecht: Kluwer Academic Publishers, Chapter 17, 1991.
Clifford S. Poirot. “Financial Integration under Conditions of Chaotic Hysteresis: The Russian Financial Crisis of 1998.” Journal of Post Keynesian Economics Spring 2001, 23(3), pp. 485–508.
J. Barkley Rosser, Jr., Marina V. Rosser, Stephen J. Guastello, and Robert W. Bond, Jr. “Chaotic Hysteresis and Systemic Economic Transformation: Soviet Investment Patterns.” Nonlinear Dynamics, Psychology, and Life Sciences October 2001, 5(4), pp. 545–566.
Tönu Puu. Nonlinear Economic Dynamics. Berlin: Springer-Verlag, 1989.
N. Berglund and H. Kunz. “Memory Effects and Scaling Laws in Slowly Driven Systems.” Journal of Physics A: Mathematical and General January 8, 1999, 32(1), pp. 15–39.
Zoltán Súto and István Nagy. "Study of Chaotic and Periodic Behaviours of a Hysteresis Current Controlled Induction Motor Drive." In Hajime Tsuboi and István Vajda, eds., Applied Electromagnetics and Computational Technology II. Amsterdam: IOS Press, pp. 233–243, 2000.
Murano Wataru and Aiyoshi Eitaro. “Opening Door toward 21st Century. Integer Programming by the Multi-Valued Hysteresis Machines with the Chaotic Properties.” Transactions of the Institute of Electrical Engineers of Japan C 2001, 121(1), pp. 76–82.
Hans Hauser. “Energetic Model of Ferromagnetic Hysteresis: Isotropic Magnetization.” Journal of Applied Physics September 1, 2004, 96(5), pp. 2753–2767.
J.P. Françoise and C. Piquet. “Hysteresis Dynamics, Bursting Oscillations and Evolution to Chaotic Regimes.” Acta Biotheoretica 2005, 53(4), pp. 381–392.
P. Vadasz. "Chaotic Dynamics and Hysteresis in Thermal Convection." Journal of Mechanical Engineering Science 2006, 220(3), pp. 309–323.
Xiangdong Liu and Chunko Xiu. "Hysteresis Modeling Based on the Hysteretic Chaotic Neural Network." Neural Computing and Applications, online October 30, 2007: http://www.springerlink.com/content/x76777476785m48.
Chaos theory
Bifurcation theory | Chaotic hysteresis | Mathematics | 1,257 |
36,780,591 | https://en.wikipedia.org/wiki/Fort%20des%20Dunes | The Fort des Dunes, also known as Fort Leffrinckoucke and sometimes Fort de l'Est, is located in the commune of Leffrinckoucke, France, east of Dunkirk (Dunkerque). Built from 1878 to 1880, it is part of the Séré de Rivières system of fortifications that France built following its defeat in the Franco-Prussian War. Although it played no part in World War I, it had a significant role in both the beginning and end of World War II. It has been preserved and is interpreted for the public by a local preservation association.
Description
The Fort des Dunes was built as the westernmost frontier fort in the Séré de Rivières system, in the coastal sand dunes within a few hundred metres of the English Channel. The chosen site was both served by and a place of protection for the coastal railway and canal. It occupies a sandy hill and is itself protected by a thick cover of sand. The rectangular fort is surrounded by a dry moat defended by caponiers, which provide protected firing positions to sweep the length of the ditch with gunfire. The main fort is accessed by a drawbridge over the ditch. The Fort des Dunes was armed with a variety of artillery over its history, initially mounted on the fort's surface. The fort's barracks and service areas are recessed into the surface and covered with soil and turf. The walls are built of brick and stone masonry. The fort was initially armed with about 25 artillery pieces, served by 451 men.
The Fort des Dunes was a component of a larger system of coastal batteries and outlying positions defending the greater Dunkirk area. These fortifications were modified as artillery technology developed, making fixed open-air gun emplacements untenable. During World War I the fort's primary armament was two or three 90mm guns on the ramparts. Several 120mm anti-aircraft guns were positioned in the area surrounding the fort.
History
The Fort des Dunes did not see action during the First World War, since it was well behind the lines. It was garrisoned largely by reservists. The fort's primary function during this time was as a munitions depot.
Operation Dynamo
During the Battle of France in 1940, large numbers of French and British troops arrived in the Dunkirk area, separated from their units. The Camp des Dunes was established at the fort to process French soldiers and to assign them duties. General Georges Blanchard, whose First French Army had effectively ceased to exist, arrived at the fort on 30 May. The fort became the headquarters of the French 12th Motorized Infantry Division on 1 June. On 2 June the fort was attacked by aircraft. Two bombs exploded in the courtyard of the fort. Among the dead was the 12th Motorized Infantry's General Janssen. Another bombing raid on 3 June hit the fort with six bombs, heavily damaging the fort and killing six more officers, with a total of between 150 and 200 killed at the fort in both raids. The repeated attacks and heavy damage led the 12th Division to leave the fort.
Following the last evacuations from the beaches and port of Dunkirk on 4 June, German forces took possession of the fort. The Germans made repairs to the fort and organized reburials of soldiers who had been interred where they had fallen or who had been entombed by debris, mostly carried out by local citizens and prisoners of war. The fort became a component of the German Atlantic Wall fortifications, primarily as an annex to the Zuydcoote battery. Apart from functioning as a rations depot, the fort supported an anti-aircraft battery with a radar installation. A small blockhouse was built by the Todt Organization on the west side, with another bunker covering the approach road. These were reinforced in 1944 with temporary revetments and a heavy machine gun position. German troops left significant murals and decorations in the magazines and barracks.
1944
On 4 September 1944 the French Resistance attempted to kill a German soldier in Rosendäel. The house to which the assailant fled was surrounded and its occupants arrested. All eight were held at the Fort des Dunes except Daniel Decroos, who had been killed while trying to escape; another detainee had been wounded by an exploding grenade. On 6 September, six detainees were executed by firing squad in the north ditch and the wounded detainee was also killed. The seven were buried next to the ramparts, and the wall where the prisoners had been shot was destroyed so that it collapsed onto the graves. Following the Liberation, the execution was investigated and the graves discovered; the remains were exhumed and reinterred.
German forces held the Dunkirk Pocket through the Siege of Dunkirk until the end of the war, when they surrendered; 10,000 Germans were taken prisoner on 9 May 1945. 3,700 of them were kept in the Dunkirk area, many at the Fort des Dunes, engaged in cleanup and mine removal. After the departure of the prisoners, the fort was transferred to the customs service, which used it as a storage centre for seized goods; a customs officer and his family lived at the fort. The fort went back to the army in 1955, which began excavations to recover the remains of those killed in 1940. A ceremony of re-interment was held in August 1955, attended by the widow of General Janssen. The fort was then abandoned for twenty years. In 1978 a local organization was formed to preserve the fort as a centre for military reserve functions, and by 1990 the fort had been made habitable.
Present
The fort is maintained and interpreted by the Association Fort des Dunes. It is usually open to the public on summer weekends. It was transferred to the commune of Leffrinckoucke in 1998. The fort still shows the scars of the 1940 bombardment, with the entry tunnel largely exposed.
References
External links
Association Fort des Dunes
Fort des Dunes at Dunkerque Flandre Côte d’Opale
Fort des Dunes at fortiff.be
Leffrinckoucke at Chemins de Mémoire
Séré de Rivières system
World War I museums in France
World War II museums in France | Fort des Dunes | Engineering | 1,241 |
481,852 | https://en.wikipedia.org/wiki/Somatostatin | Somatostatin, also known as growth hormone-inhibiting hormone (GHIH) or by several other names, is a peptide hormone that regulates the endocrine system and affects neurotransmission and cell proliferation via interaction with G protein-coupled somatostatin receptors and inhibition of the release of numerous secondary hormones. Somatostatin inhibits insulin and glucagon secretion.
Somatostatin has two active forms, produced by alternative cleavage of a single preproprotein: one consisting of 14 amino acids, the other of 28 amino acids.
Among the vertebrates, there exist six different somatostatin genes that have been named: SS1, SS2, SS3, SS4, SS5 and SS6. Zebrafish have all six. The six different genes, along with the five different somatostatin receptors, allow somatostatin to possess a large range of functions.
Humans have only one somatostatin gene, SST.
Nomenclature
Synonyms of "somatostatin" include:
growth hormone–inhibiting hormone (GHIH)
growth hormone release–inhibiting hormone (GHRIH)
somatotropin release–inhibiting factor (SRIF)
somatotropin release–inhibiting hormone (SRIH)
Production
Digestive system
Somatostatin is secreted by delta cells at several locations in the digestive system, namely the pyloric antrum, the duodenum and the pancreatic islets.
Somatostatin released in the pyloric antrum travels via the portal venous system to the heart, then enters the systemic circulation to reach the locations where it will exert its inhibitory effects. In addition, somatostatin release from delta cells can act in a paracrine manner.
In the stomach, somatostatin acts directly on the acid-producing parietal cells via a G protein-coupled receptor (which inhibits adenylate cyclase, effectively antagonising the stimulatory effect of histamine) to reduce acid secretion. Somatostatin can also indirectly decrease stomach acid production by preventing the release of other hormones, including gastrin and histamine, which effectively slows the digestive process.
Brain
Somatostatin is produced by neuroendocrine neurons of the ventromedial nucleus of the hypothalamus. These neurons project to the median eminence, where somatostatin is released from neurosecretory nerve endings into the hypothalamo-hypophysial portal system. Somatostatin is then carried to the anterior pituitary gland, where it inhibits the secretion of growth hormone from somatotrope cells. The somatostatin neurons in the periventricular nucleus mediate the negative feedback effects of growth hormone on its own release: they respond to high circulating concentrations of growth hormone and somatomedins by increasing the release of somatostatin, reducing the rate of growth hormone secretion.
Somatostatin is also produced by several other populations that project centrally, i.e., to other areas of the brain, and somatostatin receptors are expressed at many different sites in the brain. In particular, populations of somatostatin neurons occur in the arcuate nucleus, the hippocampus, and the brainstem nucleus of the solitary tract.
Functions
Somatostatin is classified as an inhibitory hormone, and its secretion is induced by low pH. Its actions are exerted in several parts of the body. Somatostatin release is inhibited by the vagus nerve.
Anterior pituitary
In the anterior pituitary gland, the effects of somatostatin are:
Inhibiting the release of growth hormone (GH) (thus opposing the effects of growth hormone–releasing hormone (GHRH))
Inhibiting the release of thyroid-stimulating hormone (TSH)
Inhibiting adenylyl cyclase in parietal cells
Inhibiting the release of prolactin (PRL)
Gastrointestinal system
Somatostatin is homologous with cortistatin (see somatostatin family) and suppresses the release of gastrointestinal hormones
Decreases the rate of gastric emptying, and reduces smooth muscle contractions and blood flow within the intestine
Suppresses the release of pancreatic hormones
Somatostatin release is triggered by the beta cell peptide urocortin3 (Ucn3) to inhibit insulin release.
Inhibits the release of glucagon
Suppresses the exocrine secretory action of the pancreas
Synthetic substitutes
Octreotide (brand name Sandostatin, Novartis Pharmaceuticals) is an octapeptide that mimics natural somatostatin pharmacologically, though it is a more potent inhibitor of growth hormone, glucagon, and insulin than the natural hormone, and it has a much longer half-life (about 90 minutes, compared to 2–3 minutes for somatostatin). Since it is absorbed poorly from the gut, it is administered parenterally (subcutaneously, intramuscularly, or intravenously). It is indicated for symptomatic treatment of carcinoid syndrome and acromegaly. It is also finding increased use in polycystic diseases of the liver and kidney.
Lanreotide (Somatuline, Ipsen Pharmaceuticals) is a medication used in the management of acromegaly and symptoms caused by neuroendocrine tumors, most notably carcinoid syndrome. It is a long-acting analog of somatostatin, like octreotide. It is available in several countries, including the United Kingdom, Australia, and Canada, and was approved for sale in the United States by the Food and Drug Administration on August 30, 2007.
Pasireotide, sold under the brand name Signifor, is an orphan drug approved in the United States and the European Union for the treatment of Cushing's disease in patients who fail or are ineligible for surgical therapy. It was developed by Novartis. Pasireotide is a somatostatin analog with a 40-fold increased affinity for somatostatin receptor 5 compared to other somatostatin analogs.
Evolutionary history
Six somatostatin genes have been discovered in vertebrates. The current proposed history as to how these six genes arose is based on the three whole-genome duplication events that took place in vertebrate evolution, along with local duplications in teleost fish. An ancestral somatostatin gene was duplicated during the first whole-genome duplication event (1R) to create SS1 and SS2. These two genes were duplicated during the second whole-genome duplication event (2R) to create four new somatostatin genes: SS1, SS2, SS3, and one gene that was lost during the evolution of vertebrates. Tetrapods retained SS1 (also known as SS-14 and SS-28) and SS2 (also known as cortistatin) after the split of the Sarcopterygii and Actinopterygii lineages. In teleost fish, SS1, SS2, and SS3 were duplicated during the third whole-genome duplication event (3R) to create SS1, SS2, SS4, SS5, and two genes that were lost during the evolution of teleost fish. SS1 and SS2 went through local duplications to give rise to SS6 and SS3.
See also
FK962
Hypothalamic–pituitary–somatic axis
Octreotide
References
Further reading
External links
Antidiarrhoeals
Endocrine system
Hormones of the somatotropic axis
Neuropeptides
Neuroendocrinology
Pancreatic hormones
Somatostatin inhibitors | Somatostatin | Biology | 1,660 |
29,537 | https://en.wikipedia.org/wiki/Scientific%20misconduct | Scientific misconduct is the violation of the standard codes of scholarly conduct and ethical behavior in the publication of professional scientific research. It is violation of scientific integrity: violation of the scientific method and of research ethics in science, including in the design, conduct, and reporting of research.
A Lancet review on Handling of Scientific Misconduct in Scandinavian countries provides the following sample definitions, reproduced in The COPE report 1999:
Danish definition: "Intention or gross negligence leading to fabrication of the scientific message or a false credit or emphasis given to a scientist"
Swedish definition: "Intention[al] distortion of the research process by fabrication of data, text, hypothesis, or methods from another researcher's manuscript form or publication; or distortion of the research process in other ways."
The consequences of scientific misconduct can be damaging for the perpetrators, for the journal's audience, and for any individual who exposes it. In addition, there are public health implications attached to the promotion of medical or other interventions based on false or fabricated research findings. Scientific misconduct can result in loss of public trust in the integrity of science.
Three percent of the 3,475 research institutions that report to the US Department of Health and Human Services' Office of Research Integrity indicate some form of scientific misconduct. However, the ORI will only investigate allegations of impropriety where research was funded by federal grants. They routinely monitor such research publications for red flags, and their investigation is subject to a statute of limitations. Other private organizations, like the International Committee of Medical Journal Editors (ICMJE), can only police their own members.
Motivation
According to David Goodstein of Caltech, there are motivators for scientists to commit misconduct, which are briefly summarised here.
Career pressure
Science is still a very strongly career-driven discipline. Scientists depend on a good reputation to receive ongoing support and funding, and a good reputation relies largely on the publication of high-profile scientific papers. Hence, there is a strong imperative to "publish or perish". This may motivate desperate (or fame-hungry) scientists to fabricate results.
Ease of fabrication
In many scientific fields, results are often difficult to reproduce accurately, being obscured by noise, artifacts, and other extraneous data. That means that even if a scientist does falsify data, they can expect to get away with it – or at least claim innocence if their results conflict with others in the same field. There are few strongly backed systems to investigate possible violations, attempt to press charges, or punish deliberate misconduct. It is relatively easy to cheat although difficult to know exactly how many scientists fabricate data.
Monetary gain
In many scientific fields, the most lucrative option for professionals is often selling opinions. Corporations can pay experts to support products directly or indirectly via conferences. Psychologists can make money by repeatedly acting as an expert witness in custody proceedings for the same law firms.
Forms
The U.S. National Science Foundation defines three types of research misconduct: fabrication, falsification, and plagiarism.
Fabrication is making up results and recording or reporting them. This is sometimes referred to as "drylabbing". A more minor form of fabrication is where references are included to give arguments the appearance of widespread acceptance, but are actually fake, or do not support the argument.
Falsification is manipulating research materials, equipment, or processes or changing or omitting data or results such that the research is not accurately represented in the research record.
Plagiarism is the appropriation of another person's ideas, processes, results, or words without giving appropriate credit. One form is the appropriation of the ideas and results of others, publishing them so as to make it appear that the author performed all the work by which the data were obtained. A subset is citation plagiarism – willful or negligent failure to appropriately credit other or prior discoverers, so as to give an improper impression of priority. This is also known as "citation amnesia", the "disregard syndrome", and "bibliographic negligence". Arguably, this is the most common type of scientific misconduct. Sometimes it is difficult to tell whether authors intentionally ignored a highly relevant citation or simply lacked knowledge of the prior work. Discovery credit can also be inadvertently reassigned from the original discoverer to a better-known researcher. This is a special case of the Matthew effect.
Plagiarism-fabrication – the act of taking an unrelated figure from an unrelated publication and reproducing it exactly in a new publication, claiming that it represents new data.
Self-plagiarism – or multiple publication of the same content with different titles or in different journals is sometimes also considered misconduct; scientific journals explicitly ask authors not to do this. It is referred to as "salami" (i.e. many identical slices) in the jargon of medical journal editors. According to some editors this includes publishing the same article in a different language.
Other types of research misconduct are also recognized:
Ghostwriting – the phenomenon where someone other than the named author(s) makes a major contribution. Typically, this is done to mask contributions from authors with a conflict of interest.
Guest authorship – the phenomenon where authorship is given to someone who has not made any substantial contribution. This can be done by senior researchers who muscle their way onto the papers of inexperienced junior researchers, as well as by others who stack authorship in an effort to guarantee publication. It is much harder to prove due to a lack of consistency in defining "authorship" or "substantial contribution".
Scientific misconduct can also occur during the peer-review process, by a reviewer or editor with a conflict of interest. Reviewer-coerced citation can also inflate the perceived citation impact of a researcher's work and their reputation in the scientific community, similar to excessive self-citation. Reviewers are expected to be impartial and to assess the quality of the work under review. They are expected to declare a conflict of interest to the editors if they are colleagues or competitors of the authors. A rarer case of scientific misconduct is editorial misconduct, where an editor does not declare conflicts of interest, creates pseudonyms to review papers, issues strongly worded editorial decisions supporting reviews that suggest adding excessive citations to the editor's own unrelated works, or pushes to be added as a co-author or to have their name added to the title of the manuscript.
Publishing in a predatory journal, knowingly or unknowingly, was discussed as a form of potential scientific misconduct.
The peer-review process can have limitations when considering research outside the conventional scientific paradigm: social factors such as "groupthink" can interfere with open and fair deliberation of new research.
Sneaked references – the act of subtly embedding references, not present in the manuscript itself, into the metadata of the accepted manuscript, without the original authors being able to notice or correct such modifications.
Photo manipulation
Compared to other forms of scientific misconduct, image fraud (manipulation of images to distort their meaning) is of particular interest since it can frequently be detected by external parties. In 2006, the Journal of Cell Biology gained publicity for instituting tests to detect photo manipulation in papers that were being considered for publication. This was in response to the increased usage of programs such as Adobe Photoshop by scientists, which facilitate photo manipulation. Since then more publishers, including the Nature Publishing Group, have instituted similar tests and require authors to minimize and specify the extent of photo manipulation when a manuscript is submitted for publication. However, there is little evidence to indicate that such tests are applied rigorously. One Nature paper published in 2009 has subsequently been reported to contain around 20 separate instances of image fraud.
Although the type of manipulation that is allowed can depend greatly on the type of experiment that is presented and also differ from one journal to another, in general the following manipulations are not allowed:
splicing together different images to represent a single experiment
changing brightness and contrast of only a part of the image
any change that conceals information, even when it is considered to be non-specific, which includes:
changing brightness and contrast to leave only the most intense signal
using clone tools to hide information
showing only a very small part of the photograph so that additional information is not visible
Image manipulations are typically done on visually repetitive images such as those of blots and microscope images.
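Because clone-tool edits often leave pixel-identical regions behind, exact-duplicate block detection is one simple illustration of how such manipulation can sometimes be flagged automatically. The following is a minimal sketch of the idea, not any journal's actual screening pipeline; the file name, block size, and flatness threshold are illustrative assumptions, and real tools must also cope with rescaling, rotation, and recompression:

```python
from collections import defaultdict

import numpy as np
from PIL import Image  # Pillow, assumed available


def block_std(raw, block):
    """Standard deviation of a block's pixel values (flatness measure)."""
    return np.frombuffer(raw, dtype=np.uint8).reshape(block, block).std()


def find_cloned_blocks(path, block=16, min_std=5.0):
    """Return locations of pixel blocks that occur verbatim more than once.

    Flat background blocks repeat legitimately, so blocks whose standard
    deviation falls below min_std are ignored.
    """
    img = np.asarray(Image.open(path).convert("L"))  # 8-bit grayscale
    seen = defaultdict(list)
    h, w = img.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            raw = img[y:y + block, x:x + block].tobytes()
            seen[raw].append((x, y))
    return [locs for raw, locs in seen.items()
            if len(locs) > 1 and block_std(raw, block) > min_std]


if __name__ == "__main__":
    for locations in find_cloned_blocks("blot_figure.png"):  # hypothetical file
        print("identical non-flat block found at:", locations)
```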
Helicopter research
Responsibilities
Authorship responsibility
All authors of a scientific publication are expected to have made reasonable attempts to check findings submitted to academic journals for publication.
Simultaneous submission of scientific findings to more than one journal or duplicate publication of findings is usually regarded as misconduct, under what is known as the Ingelfinger rule, named after the editor of The New England Journal of Medicine 1967–1977, Franz Ingelfinger.
Guest authorship (where there is stated authorship in the absence of involvement, also known as gift authorship) and ghost authorship (where the real author is not listed as an author) are commonly regarded as forms of research misconduct. In some cases coauthors of faked research have been accused of inappropriate behavior or research misconduct for failing to verify reports authored by others or by a commercial sponsor. Examples include the case of Gerald Schatten who co-authored with Hwang Woo-Suk, the case of Professor Geoffrey Chamberlain named as guest author of papers fabricated by Malcolm Pearce, (Chamberlain was exonerated from collusion in Pearce's deception) – and the coauthors with Jan Hendrik Schön at Bell Laboratories. More recent cases include that of Charles Nemeroff, then the editor-in-chief of Neuropsychopharmacology, and a well-documented case involving the drug Actonel.
Authors are expected to keep all study data for later examination even after publication. The failure to keep data may be regarded as misconduct. Some scientific journals require that authors provide information to allow readers to determine whether the authors might have commercial or non-commercial conflicts of interest. Authors are also commonly required to provide information about ethical aspects of research, particularly where research involves human or animal participants or use of biological material. Provision of incorrect information to journals may be regarded as misconduct. Financial pressures on universities have encouraged this type of misconduct. The majority of recent cases of alleged misconduct involving undisclosed conflicts of interest or failure of the authors to have seen scientific data involve collaborative research between scientists and biotechnology companies.
Research institution responsibility
In general, defining whether an individual is guilty of misconduct requires a detailed investigation by the individual's employing academic institution. Such investigations require detailed and rigorous processes and can be extremely costly. Furthermore, the more senior the individual under suspicion, the more likely it is that conflicts of interest will compromise the investigation. In many countries (with the notable exception of the United States) acquisition of funds on the basis of fraudulent data is not a legal offence and there is consequently no regulator to oversee investigations into alleged research misconduct. Universities therefore have few incentives to investigate allegations in a robust manner, or act on the findings of such investigations if they vindicate the allegation.
Well-publicised cases illustrate the potential role that senior academics in research institutions play in concealing scientific misconduct. A King's College (London) internal investigation showed research findings from one of their researchers to be 'at best unreliable, and in many cases spurious', but the college took no action, such as retracting relevant published research or preventing further episodes from occurring.
In a more recent case an internal investigation at the National Centre for Cell Science (NCCS), Pune determined that there was evidence of misconduct by Gopal Kundu, but an external committee was then organised which dismissed the allegation, and the NCCS issued a memorandum exonerating the authors of all charges of misconduct. Undeterred by the NCCS exoneration, the relevant journal (Journal of Biological Chemistry) withdrew the paper based on its own analysis.
Scientific peer responsibility
Some academics believe that scientific colleagues who suspect scientific misconduct should consider taking informal action themselves, or reporting their concerns. This question is of great importance since much research suggests that it is very difficult for people to act or come forward when they see unacceptable behavior, unless they have help from their organizations. A "User-friendly Guide," and the existence of a confidential organizational ombudsman may help people who are uncertain about what to do, or afraid of bad consequences for their speaking up.
Responsibility of journals
Journals are responsible for safeguarding the research record and hence have a critical role in dealing with suspected misconduct. This is recognised by the Committee on Publication Ethics (COPE) which has issued clear guidelines on the form (e.g. retraction) that concerns over the research record should take.
The COPE guidelines state that journal editors should consider retracting a publication if they have clear evidence that the findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error (e.g. miscalculation or experimental error). Retraction is also appropriate in cases of redundant publication, plagiarism and unethical research.
Journal editors should consider issuing an expression of concern if: they receive inconclusive evidence of research or publication misconduct by the authors; there is evidence that the findings are unreliable but the authors' institution will not investigate the case; they believe that an investigation into alleged misconduct related to the publication either has not been, or would not be, fair, impartial, or conclusive; or an investigation is underway but a judgement will not be available for a considerable time.
Journal editors should consider issuing a correction if a small portion of an otherwise reliable publication proves to be misleading (especially because of honest error), or the author / contributor list is incorrect (i.e. a deserving author has been omitted or somebody who does not meet authorship criteria has been included).
Evidence emerged in 2012 that journals learning of cases where there is strong evidence of possible misconduct, with issues potentially affecting a large portion of the findings, frequently fail to issue an expression of concern or correspond with the host institution so that an investigation can be undertaken. In one case, Nature allowed a corrigendum to be published despite clear evidence of image fraud. Subsequent retraction of the paper required the actions of an independent whistleblower.
The cases of Joachim Boldt and Yoshitaka Fujii in anaesthesiology focussed attention on the role that journals play in perpetuating scientific fraud as well as how they can deal with it. In the Boldt case, the editors-in-chief of 18 specialist journals (generally anaesthesia and intensive care) made a joint statement regarding 88 published clinical trials conducted without Ethics Committee approval. In the Fujii case, involving nearly 200 papers, the journal Anesthesia & Analgesia, which published 24 of Fujii's papers, has accepted that its handling of the issue was inadequate. Following publication of a letter to the editor from Kranke and colleagues in April 2000, along with a non-specific response from Dr. Fujii, there was no follow-up on the allegation of data manipulation and no request for an institutional review of Dr. Fujii's research. Anesthesia & Analgesia went on to publish 11 additional manuscripts by Dr. Fujii following the 2000 allegations of research fraud, with Editor Steven Shafer stating in March 2012 that subsequent submissions to the Journal by Dr. Fujii should not have been published without first vetting the allegations of fraud. In April 2012 Shafer led a group of editors to write a joint statement, in the form of an ultimatum made available to the public, to a large number of academic institutions where Fujii had been employed, offering these institutions the chance to attest to the integrity of the bulk of the allegedly fraudulent papers.
Consequences of scientific misconduct
Consequences for science
The consequences of scientific fraud vary based on the severity of the fraud, the level of notice it receives, and how long it goes undetected. For cases of fabricated evidence, the consequences can be wide-ranging, with others working to confirm (or refute) the false finding, or with research agendas being distorted to address the fraudulent evidence. The Piltdown Man fraud is a case in point: The significance of the bona-fide fossils that were being found was muted for decades because they disagreed with Piltdown Man and the preconceived notions that those faked fossils supported. In addition, the prominent paleontologist Arthur Smith Woodward spent time at Piltdown each year until he died, trying to find more Piltdown Man remains. The misdirection of resources kept others from taking the real fossils more seriously and delayed the reaching of a correct understanding of human evolution. (The Taung Child, which should have been the death knell for the view that the human brain evolved first, was instead treated very critically because of its disagreement with the Piltdown Man evidence.)
In the case of Prof Don Poldermans, the misconduct occurred in reports of trials of treatment to prevent death and myocardial infarction in patients undergoing operations. The trial reports were relied upon to issue guidelines that applied for many years across North America and Europe.
In the case of Dr Alfred Steinschneider, two decades and tens of millions of research dollars were lost trying to find the elusive link between infant sleep apnea, which Steinschneider said he had observed and recorded in his laboratory, and sudden infant death syndrome (SIDS), of which he stated it was a precursor. The cover was blown in 1994, 22 years after Steinschneider's 1972 Pediatrics paper claiming such an association, when Waneta Hoyt, the mother of the patients in the paper, was arrested, indicted and convicted on five counts of second-degree murder for the smothering deaths of her five children. While that in itself was bad enough, the paper, presumably written as an attempt to save infants' lives, ironically was ultimately used as a defense by parents suspected in multiple deaths of their own children in cases of Münchausen syndrome by proxy. The 1972 Pediatrics paper was cited in 404 papers in the interim and is still listed on Pubmed without comment.
Consequences for those who expose misconduct
The potentially severe consequences for individuals who are found to have engaged in misconduct also reflect on the institutions that host or employ them and also on the participants in any peer review process that has allowed the publication of questionable research. This means that a range of actors in any case may have a motivation to suppress any evidence or suggestion of misconduct. Persons who expose such cases, commonly called whistleblowers, find themselves open to retaliation by a number of different means. These negative consequences for exposers of misconduct have driven the development of whistle blowers charters – designed to protect those who raise concerns (for more details refer to retaliation (law)).
Regulatory violations and consequences (example)
Title 10 Code of Federal Regulations (CFR) Part 50.5, Deliberate Misconduct, of the U.S. Nuclear Regulatory Commission (NRC) regulations addresses the prohibition of certain activities by individuals involved in NRC-licensed activities. 10 CFR 50.5 is designed to ensure the safety and integrity of nuclear operations. 10 CFR Part 50.9, Completeness and Accuracy of Information, focuses on the requirements for providing information and data to the NRC. The intent of 10 CFR 50.5 is to deter and penalize intentional wrongdoing (i.e., violations). 10 CFR 50.9 is crucial in maintaining transparency and reliability in the nuclear industry, effectively emphasizing honesty and integrity in maintaining the safety and security of nuclear operations. Providing false or misleading information or data to the NRC is therefore a violation of 10 CFR 50.9.
Violation of any of these rules can lead to severe penalties, including termination, fines and criminal prosecution. It can also result in the revocation of licenses or certifications, thereby barring individuals or entities from participating in any NRC-licensed activities in the future.
Data issues
Exposure of fraudulent data
With the advancement of the internet, there are now several tools available to aid in the detection of plagiarism and multiple publication within biomedical literature. One tool developed in 2006 by researchers in Dr. Harold Garner's laboratory at the University of Texas Southwestern Medical Center at Dallas is Déjà vu, an open-access database containing several thousand instances of duplicate publication. All of the entries in the database were discovered through the use of text data mining algorithm eTBLAST, also created in Dr. Garner's laboratory. The creation of Déjà vu and the subsequent classification of several hundred articles contained therein have ignited much discussion in the scientific community concerning issues such as ethical behavior, journal standards, and intellectual copyright. Studies on this database have been published in journals such as Nature and Science, among others.
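At bottom, duplicate-publication detectors rest on text-similarity scoring (eTBLAST itself uses a more elaborate alignment strategy). A minimal, self-contained sketch of the underlying idea is the cosine similarity of bag-of-words vectors; the two abstracts below are invented examples:

```python
import math
import re
from collections import Counter


def word_vector(text):
    """Bag-of-words count vector over lowercased alphanumeric tokens."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine_similarity(a, b):
    """Cosine of the angle between two sparse count vectors, in [0, 1]."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


abstract_1 = "We report a randomized trial of drug X in 120 patients with condition Y."
abstract_2 = "A randomized trial of drug X was conducted in 120 patients with condition Y."

score = cosine_similarity(word_vector(abstract_1), word_vector(abstract_2))
print(f"similarity = {score:.2f}")  # scores near 1.0 flag possible duplication
```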
Other tools which may be used to detect fraudulent data include error analysis. Measurements generally have a small amount of error, and repeated measurements of the same item will generally result in slight differences in readings. These differences can be analyzed, and follow certain known mathematical and statistical properties. Should a set of data appear to be too faithful to the hypothesis, i.e., the amount of error that would normally be in such measurements does not appear, a conclusion can be drawn that the data may have been forged. Error analysis alone is typically not sufficient to prove that data have been falsified or fabricated, but it may provide the supporting evidence necessary to confirm suspicions of misconduct.
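One concrete, hedged example of such an error analysis is a terminal-digit test: in genuine measurements dominated by noise, the last recorded digit is expected to be roughly uniform over 0–9, and a chi-square statistic far from that expectation can support suspicions of fabrication (as evidence, not proof). The data below are invented for illustration:

```python
from collections import Counter


def terminal_digit_chi2(values):
    """Chi-square statistic of last-digit frequencies against uniformity.

    For genuine noisy measurements the terminal digit should be close to
    uniform; with 9 degrees of freedom, a statistic above ~16.9 is
    significant at the 5% level. A suspiciously *small* statistic can also
    indicate over-smooth, possibly invented data.
    """
    counts = Counter(str(v)[-1] for v in values)
    expected = len(values) / 10.0
    return sum((counts.get(str(d), 0) - expected) ** 2 / expected
               for d in range(10))


# Hypothetical data: readings whose last digits cluster on 0 and 5,
# a pattern typical of invented "round" numbers.
readings = [120, 135, 140, 125, 150, 115, 130, 145, 110, 105,
            100, 155, 160, 135, 125, 140, 130, 120, 115, 150]
print(f"chi-square = {terminal_digit_chi2(readings):.1f} (df = 9)")
```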
Data sharing
Kirby Lee and Lisa Bero suggest, "Although reviewing raw data can be difficult, time-consuming and expensive, having such a policy would hold authors more accountable for the accuracy of their data and potentially reduce scientific fraud or misconduct."
Underreporting
The vast majority of cases of scientific misconduct may not be reported. The number of article retractions in 2022 was nearly 5,500, but Ivan Oransky and Adam Marcus, co-founders of Retraction Watch, estimate that at least 100,000 retractions should occur every year, with only about one in five being due to "honest error".
Some notable cases
In 1998 Andrew Wakefield published a fraudulent research paper in The Lancet claiming links between the MMR vaccine, autism, and inflammatory bowel disease. In 2010, he was found guilty of dishonesty in his research and banned from medicine by the UK General Medical Council following an investigation by Brian Deer of the London Sunday Times.
The claims in Wakefield's paper were widely reported, leading to a sharp drop in vaccination rates in the UK and Ireland and outbreaks of mumps and measles. Promotion of the claimed link continues to fuel the anti-vaccination movement.
In 2011 Diederik Stapel, a highly regarded Dutch social psychologist was discovered to have fabricated data in dozens of studies on human behaviour. He has been called "the biggest con man in academic science".
In 2020, Sapan Desai and his coauthors published two papers in the prestigious medical journals The Lancet and The New England Journal of Medicine, early in the COVID-19 pandemic. The papers were based on a very large dataset published by Surgisphere, a company owned by Desai. The dataset was exposed as a fabrication, and the papers were soon retracted.
In 2024, Eliezer Masliah, head of the Division of Neuroscience at the National Institute on Aging, was suspected of having manipulated and inappropriately reused images in over 100 scientific papers spanning several decades, including those that were used by the FDA to greenlight testing for the experimental drug prasinezumab as a treatment for Parkinson's.
Solutions
Changing research assessment
Since 2012, the Declaration on Research Assessment (DORA), from San Francisco, gathered many institutions, publishers, and individuals committing to improving the metrics used to assess research and to stop focusing on the journal impact factor.
See also
Academic dishonesty
Archaeological forgery
Bioethics
Bullying in academia
Committee on Publication Ethics
Conflicts of interest in academic publishing
Cyril Burt
Dana-Farber Cancer Institute
Danish Committees on Scientific Dishonesty
Data fabrication
Engineering ethics
Fabrication (science)
Hippocratic Oath for scientists
International Committee of Medical Journal Editors
Japanese scientific misconduct allegations
Laurie Glimcher
List of cognitive biases
List of experimental errors and frauds in physics
List of fallacies
List of memory biases
List of topics characterized as pseudoscience
Lysenkoism
Mertonian norms
Metascience
Pathological science
Politicization of science
Reproducibility
Research ethics
Research integrity
Research paper mill
Retraction
Scientific method
Scientific plagiarism in India
Scientific plagiarism in the United States
Sham peer review
Source criticism
United States Office of Research Integrity (ORI)
Betrayers of the Truth: Fraud and Deceit in the Halls of Science
EASE Guidelines for Authors and Translators of Scientific Articles
Straight and Crooked Thinking
The Great Betrayal: Fraud In Science
References
Further reading
Patricia Keith-Spiegel, Joan Sieber, and Gerald P. Koocher (November 2010). Responding to Research Wrongdoing: A User Friendly Guide.
Jargin SV. Misconduct in Medical Research and Practice. Nova Science Publishers, 2020. https://novapublishers.com/shop/misconduct-in-medical-research-and-practice/
External links
Publication ethics checklist (PDF) (for routine use during manuscript submission to a scientific journal)
| Scientific misconduct | Technology | 5,131 |
33,899,746 | https://en.wikipedia.org/wiki/Thiourea%20dioxide | Thiourea dioxide or thiox is an organosulfur compound that is used in the textile industry. It functions as a reducing agent. It is a white solid, and exhibits tautomerism in solution.
Structure
The structure of thiourea dioxide depends on its environment. Crystalline and gaseous thiourea dioxide adopts a structure with C2v symmetry. Selected bond lengths: S-C = 186, C-N = 130, and S-O = 149 pm. The sulfur center is pyramidal. The C-S bond length is more similar to that of a single bond. For comparison, the C=S bond in thiourea is 171 pm. The long C-S bond indicates the absence of C=S character. Instead the bonding is described with a significant contribution from a dipolar resonance structure with multiple bonding between C and N. One consequence of this bonding is the planarity of the nitrogen centers. In the presence of water or DMSO, thiourea dioxide converts to the tautomer, a sulfinic acid, (H2N)HN=CS(O)(OH), named formamidine sulfinic acid.
Synthesis
Thiourea dioxide was first prepared in 1910 by the English chemist Edward de Barry Barnett.
Thiourea dioxide is prepared by the oxidation of thiourea with hydrogen peroxide.
(NH2)2CS + 2H2O2 → (NH)(NH2)CSO2H + 2H2O
The mechanism of the oxidation has been examined. An aqueous solution of thiourea dioxide has a pH of about 6.5, at which thiourea dioxide is hydrolyzed to urea and sulfoxylic acid. It has been found that at pH values of less than 2, thiourea and hydrogen peroxide react to form a disulfide species. It is therefore convenient to keep the pH between 3 and 5 and the temperature below 10 °C. It can also be prepared by oxidation of thiourea with chlorine dioxide. The quality of the product can be assessed by titration with indigo.
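The 1:2 stoichiometry above fixes the peroxide requirement for a given batch of thiourea; a minimal sketch of the arithmetic, using standard atomic weights (the 30 wt% peroxide solution is an assumed, though common, concentration):

```python
# Molar masses (g/mol) from standard atomic weights.
M_THIOUREA = 12.011 + 4 * 1.008 + 2 * 14.007 + 32.06  # CH4N2S ≈ 76.12 g/mol
M_H2O2 = 2 * 1.008 + 2 * 15.999                        # ≈ 34.01 g/mol


def h2o2_solution_required(thiourea_grams, wt_fraction=0.30):
    """Grams of aqueous H2O2 solution needed for the 1:2 oxidation above."""
    mol_thiourea = thiourea_grams / M_THIOUREA
    grams_pure_h2o2 = 2 * mol_thiourea * M_H2O2  # 2 mol H2O2 per mol thiourea
    return grams_pure_h2o2 / wt_fraction


print(f"{h2o2_solution_required(100):.0f} g of 30% H2O2 solution per 100 g thiourea")
```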
Uses
Thiourea dioxide is used in reductive bleaching in textiles. Thiourea dioxide has also been used for the reduction of aromatic nitroaldehydes and nitroketones to nitroalcohols.
References
Sulfinic acids
Thioureas
Amidines
Reducing agents
Substances discovered in the 1910s | Thiourea dioxide | Chemistry | 518 |
4,775,999 | https://en.wikipedia.org/wiki/BioSteel%20%28fiber%29 | BioSteel was a trademark name for a high-strength fiber-based material made of the recombinant spider silk-like protein extracted from the milk of transgenic goats, made by the defunct Montreal-based company Nexia Biotechnologies, and later by the Randy Lewis lab of the University of Wyoming and Utah State University. It is reportedly 7–10 times as strong as steel when compared by weight, and can stretch up to 20 times its unaltered size without losing its strength properties. It also has very high resistance to extreme temperatures, retaining its properties over a wide temperature range.
The company had created lines of goats to produce recombinant versions of two spidroins from Nephila clavipes, the golden orb weaver: MaSp1 and MaSp2. When the female goats lactate, the milk, containing the recombinant silk protein, was to be harvested and subjected to chromatographic techniques to purify the recombinant silk proteins.
The purified silk proteins could be dried, dissolved using solvents (dope formation) and transformed into microfibers using wet-spinning fiber production methods. The spun fibers were reported to have tenacities in the range of 2–3 grams per denier and an elongation range of 25–45%. The "Biosteel biopolymer" had been transformed into nanofibers and nanomeshes using the electrospinning technique.
Nexia is the only company that has successfully produced fibers from spider silk expressed in goat's milk. The Lewis lab has produced fibers from recombinant spider silk protein, synthetic spider silk proteins, and genetic chimeras produced in both recombinant E. coli and the milk of recombinant goats; however, no one has been able to produce the silk in commercial quantities thus far. The company was founded in 1993 by Dr. Jeffrey Turner and Paul Ballard and was sold in 2005 to Pharmathene.
In 2009, two transgenic goats were sold to the Canada Agriculture Museum after Nexia Biotechnologies went bankrupt.
Research has since continued with the help of Randy Lewis, a professor formerly at the University of Wyoming and now at Utah State University. He was also able to successfully breed "spider goats" in order to create artificial silk. As of 2012, there are about 30 of the goats at a university-run farm. The U.S. Navy has plans to turn this silk into a tool for stopping vessels by entangling their propellers.
Potential applications of artificial spider silk biopolymers include using it for the coating of implants and medical products as well as for artificial ligaments and tendons, due to its elastic tendencies and also since it is a natural product which will synthesize well with the body. Other potential uses for artificial silk biopolymers include personal care products and textiles.
References
Biotechnology products
Genetically modified organisms | BioSteel (fiber) | Engineering,Biology | 596 |
14,555,039 | https://en.wikipedia.org/wiki/International%20Electrical%20Congress | The International Electrical Congress was a series of international meetings, from 1881 to 1904, in the then new field of applied electricity. The first meeting was initiated by the French government, including official national representatives, leading scientists, and others. Subsequent meetings also included official representatives, leading scientists, and others. Primary aims were to develop reliable standards, both in relation to electrical units and electrical apparatus.
Historical background
In 1881, both within and across countries, different electrical units were being used. There were at least 12 different units of electromotive force, 10 different units of electric current and 15 different units of resistance.
A number of international Congresses were held, and sometimes referred to as International Electrical Congress, Electrical Conference, and similar variations. Secondary sources make different judgments about how to classify the Congresses. In this article, the Congresses with representatives from national governments are identified as International Electrical Congress. Other Congresses — often addressing the same issues — are identified here as Concurrent Related International Electrical Congresses. Some of these related conferences were devoted to preparing for an International Electrical Congress.
In 1906 the International Electrotechnical Commission was created. Congresses organised under its auspices were also sometimes referred to as International Electrical Congress. In this article, Congresses organized by the Commission are listed under International Electrotechnical Congresses, while other related Congresses are listed under Related International Electrotechnical Conferences.
International Electrical Congress
Source:
1881 in Paris
Held from 15 September to 5 October 1881, in connection with the International Exposition of Electricity. Adolphe Cochery, Minister of Posts and Telegraphs of the French Government, was the Chairman. At the Congress, William Thomson (United Kingdom), Hermann von Helmholtz (Germany), and (Italy) were elected as foreign vice-presidents.
About 200–250 persons participated, and a proceedings was published in 1882. Notable participants included Helmholtz, Clausius, Kirchhoff, Werner Siemens, Ernst Mach, Rayleigh, and Lenz, among others.
Important events
The three main topics for the Congress were: electrical units, improvements in international telegraphy, and various applications of electricity. The Congress resolved to endorse the 1873 British Association for the Advancement of Science proposal for defining the ohm and the volt as practical units, and also made resolutions to define the ampere, coulomb and farad, as units for current, quantity, and capacity respectively, to complete the practical system. It also resolved that an international committee should conduct new tests to determine the length of the column of mercury for measuring the ohm.
1893 in Chicago
Held from 21 to 25 August, in connection with the World's Columbian Exposition, with almost 500 participants. Elisha Gray was the Congress president. A proceedings was published.
Refinements to the units of measurement, including the Clark cell, were discussed. The Congress laid down rules for the physical representation of the ohm, ampere and volt; the ohm and ampere were defined in terms of the CGS electromagnetic system. The units were named international to distinguish them from the 1881 proposal, hence the International System of Electrical and Magnetic Units.
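The decimal relations underlying these definitions are standard; stated in modern notation (a summary, not the congress's original wording):

```latex
1\ \Omega = 10^{9}\ \text{emu of resistance (abohm)}, \qquad
1\ \text{V} = 10^{8}\ \text{emu of potential (abvolt)}, \qquad
1\ \text{A} = 10^{-1}\ \text{emu of current (abampere)}
```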
1900 in Paris
Held 18–25 August, in connection with the Paris Exposition Universelle. Éleuthère Mascart was the congress president. There were more than 900 participants, about half of whom were from France, and about 120 technical papers were presented. A two-volume proceedings was published in 1901.
Dealt mainly with magnetic units. During this congress, names were proposed for four magnetic-circuit units in the C.G.S. system. Only two were accepted by vote: the C.G.S. unit of magnetic flux (Φ) was named the maxwell, and the C.G.S. unit of magnetising force (or magnetic field intensity) (H) was named the gauss. Some delegates mistakenly believed and reported that the gauss was adopted as the C.G.S. unit of flux density (B); this mistake has been reproduced in contemporary texts that cite the mistaken report. It is relevant to note that the Congress's official formulation for the gauss was in French and translates into English as "magnetic field", a term that has been used to refer both to (B) and (H), as noted at magnetic field. In 1930 the International Electrotechnical Commission decided that the magnetic field strength (H) was different from the magnetic flux density (B), but assigned the gauss to magnetic flux density (B), in contrast to the decision of this Congress.
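For reference, the modern SI equivalents of the two CGS units named at this congress are (using today's assignment of the gauss to flux density):

```latex
1\ \text{maxwell (Mx)} = 10^{-8}\ \text{weber}, \qquad
1\ \text{gauss (G)} = 10^{-4}\ \text{tesla}
```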
1904 in St. Louis, Missouri
Held from 12 to 17 September 1904, in connection with the Louisiana Purchase Exposition.
Recommended two permanent international commissions, one about electrical units and standards, the other about unification of nomenclature and characteristics of electrical machines and apparatus. These recommendations are considered the seed that initiated the creation of the International Electrotechnical Commission in 1906.
Concurrent Related International Electrical Congresses
During the period that the Electrical Congresses were held, other conferences and international Congresses were held, sometimes in preparation to the official Electrical Congresses. These events are listed here.
1882 in Paris
Conférence internationale pour la détermination des unités électriques (International Conference for the Determination of Electrical Units)
Held 16–26 October. It was motivated by a resolution from the 1881 International Electrical Congress. A verbatim transcript of the conference was published.
1884 in Paris
International Conference for Determination of Electrical Units
1889 in Paris
International Congress of Electricians
Held 24–31 August, in connection with Exposition universelle de 1889. About 530 participants from at least 11 countries.
Adopted several units, including practical units of power (watt) and work (joule), where 1 watt = 10⁷ erg/second, and 1 joule = 10⁷ erg. Considered practical magnetic units, but did not make any resolutions or recommendations.
1891 in Frankfurt
Held 7–12 September, in connection with the International Electrotechnical Exhibition (Die Internationale Elektrotechnische Ausstellung 1891), organized by Elektrotechnische Gesellschaft. Galileo Ferraris was a vice-president at the conference. There were 715 participants (473 from Germany and 243 from other countries, including Austria, United Kingdom, USA, and France). An official report of the conference was published.
Papers and discussions were organised in five main areas: Theory and Measuring Science; Strong Current Technology; Signalling, Telegraphy, and Telephony; Electrochemistry and Electric Current Applications; and Legislation to Mediate Conflicts between Cities around different currents used for electric lights, telephones, and telegraphs.
1892 in Edinburgh
Held in connection with the British Association for the Advancement of Science annual meeting
1896 in Geneva
Held 4–9 August, in connection with the Swiss National Exhibition. Insufficient and late communication about the organization of the Congress hampered widespread participation, so the conference had about 200 participants, mostly from Switzerland, Austria, Germany and Belgium.
Topics for discussion were magnetic units, photometric units, the long-distance transmission of power, the protection of high-tension lines against atmospheric discharge, and the problems and challenges of electric railway operation.
International Electrotechnical Congress
1908 in London
International Conference on Electric Units and Standards. Held in October. Organized by the Commission on Electric Units and Standards of the International Electrotechnical Commission
Formal adoption of the "international units" (e.g., international ohm, international ampere), which were proposed originally in the 1893 meeting of the International Electrical Congress in Chicago.
1911 in Turin
Held 10–17 September, organized by the Italian Electrotechnical Committee of the International Electrotechnical Commission.
1915 in San Francisco
Was to be held 13–18 September, and organized by the American Institute of Electrical Engineers, but was cancelled because of the outbreak of World War I.
Related International Electrotechnical Conferences
1905 in Berlin
Internationale Konferenz über Elektrische Masseinheiten (International Conference on Electrical Units)
Held 23–25 October at the Physikalisch-Technische Reichsanstalt in Charlottenburg. The 1904 Congress recommended holding an international conference to address discrepancies in the electrical units and their interpretation. Emil Warburg, president of the Physikalisch-Technische Reichsanstalt in Germany, invited representatives from corresponding national laboratories in the United States (National Bureau of Standards), the United Kingdom (National Physical Laboratory), and the official standards commissions in Austria and Belgium to an informal conference on electrical standards and units. Additionally, Mascart (France), Rayleigh (United Kingdom) and Carhart (USA) were invited because of their expertise and influence. Thirteen of the fifteen invited persons participated in the conference: six from the Reichsanstalt, two from the Belgian Commission on Electrical Units, two from the Austrian Commission on Standardization, Richard Glazebrook from the National Physical Laboratory, Mascart, and Carhart. The non-attendees were Samuel Wesley Stratton, director of the National Bureau of Standards, who sent three papers outlining the positions and proposals of the Bureau, and Rayleigh. A proceedings was published.
Concentrated on the redefinition of the ohm, ampere, and volt, as resolved in the 1904 Congress. The aim was to attain true international uniformity in definitions of these concepts. The main question was whether the ohm, ampere, and volt should be independent of each other, or whether only two should be defined, and which two. The conference concluded that only two electrical units should be taken as fundamental: the international ohm and the international ampere. It also adopted the Weston cadmium cell as the standard cell, and added rules about the preparation and use of the mercury tube, whose geometry was specified at the 1893 Congress. The conference resolved that another international conference should be held within a year to establish an agreement about the electric standards in use, because different countries had different laws about electrical units.
1908 in Marseille
Held 14–19 September, in connection with the Exposition internationale des applications de l'électricité. A three-volume proceedings was published.
References
International standards
International conferences
1881 conferences
1893 conferences
1900 conferences
1904 conferences
1908 conferences
History of electrical engineering | International Electrical Congress | Engineering | 2,041 |
30,308,308 | https://en.wikipedia.org/wiki/C8H16O6 |
The molecular formula C8H16O6 (molar mass: 208.21 g/mol, exact mass: 208.0947 u) may refer to:
Pinpollitol
Viscumitol
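Both quoted masses follow directly from standard atomic weights and monoisotopic masses; a minimal sketch of the check:

```python
# Average atomic weights (g/mol) and monoisotopic masses (u).
AVERAGE = {"C": 12.011, "H": 1.008, "O": 15.999}
MONOISOTOPIC = {"C": 12.0, "H": 1.007825, "O": 15.994915}

FORMULA = {"C": 8, "H": 16, "O": 6}  # C8H16O6

molar_mass = sum(n * AVERAGE[el] for el, n in FORMULA.items())
exact_mass = sum(n * MONOISOTOPIC[el] for el, n in FORMULA.items())
print(f"molar mass ≈ {molar_mass:.2f} g/mol")  # ≈ 208.21
print(f"exact mass ≈ {exact_mass:.4f} u")      # ≈ 208.0947
```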
Molecular formulas | C8H16O6 | Physics,Chemistry | 61 |
42,742,118 | https://en.wikipedia.org/wiki/LG%20G%20Pad%208.0 | The LG G Pad 8.0 (also known as LG G Tab 8.0) is an 8.0-inch Android-based tablet computer produced and marketed by LG Electronics. It belongs to the LG G series, and was announced on 13 May 2014 along with the G Pad 7.0, and G Pad 10.1. This is one of LG's new tablet size variants aimed to compete directly with the Samsung Galaxy Tab 4 series.
History
The G Pad 8.0 was first announced on 13 May 2014. It was officially unveiled at the MedPI tradeshow in Monaco. It was released in July 2014.
Features
The G Pad 8.0 was released with Android 4.4.2 KitKat. LG has customized the interface with its Optimus UI software. As well as apps from Google, including Google Play, Gmail and YouTube, it has access to LG apps such as QPair, QSlide, KnockOn, and Slide Aside.
The G Pad 8.0 is available in Wi-Fi-only, 3G & Wi-Fi, and 4G/LTE & Wi-Fi variants. Internal storage is 16 GB, with a microSDXC card slot for expansion. It has an 8.0-inch IPS LCD screen with a resolution of 1280×800 pixels. It also features a front-facing camera without flash and a rear-facing camera, and it can record HD video.
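Those screen figures imply a pixel density of roughly 189 ppi by the standard diagonal-resolution formula; a minimal sketch of the arithmetic:

```python
import math


def pixels_per_inch(width_px, height_px, diagonal_inches):
    """Pixel density: diagonal pixel count divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches


print(f"{pixels_per_inch(1280, 800, 8.0):.0f} ppi")  # ≈ 189 ppi
```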
References
G Pad 8.0
Android (operating system) devices
Tablet computers
Tablet computers introduced in 2014 | LG G Pad 8.0 | Technology | 323 |
59,457,867 | https://en.wikipedia.org/wiki/Abeng | An abeng is an animal horn used as a musical instrument; in the language of the Akan people the word denotes an animal horn or musical instrument. The word abeng comes from the Twi language of modern-day Ghana; it is commonly used in the Caribbean, especially Jamaica, and the instrument is associated with the Maroon people.
The Maroons of Jamaica used the horn to communicate over great distances in ways that couldn't be understood by people outside the community.
Today the abeng is made from cattle horn and is still used in Maroon communities on ceremonial occasions or to announce important news.
See also
Sneng, a similar side-blown horn in Cambodia
References
External links
Article with details on Abeng
Animal products
Natural horns and trumpets | Abeng | Chemistry | 140 |
17,004 | https://en.wikipedia.org/wiki/Kennelly%E2%80%93Heaviside%20layer | The Heaviside layer, sometimes called the Kennelly–Heaviside layer, named after Arthur E. Kennelly and Oliver Heaviside, is a layer of ionised gas occurring roughly between 90 and 150 km (56 and 93 mi) above the ground — one of several layers in the Earth's ionosphere. It is also known as the E region. It reflects medium-frequency radio waves. Because of this reflective layer, radio waves radiated into the sky can return to Earth beyond the horizon. This "skywave" or "skip" propagation technique has been used since the 1920s for radio communication at long distances, up to transcontinental distances.
Propagation is affected by the time of day. During the daytime, solar radiation ionizes the lower atmosphere much more strongly, and absorption in the ionized region below the layer limits how far reflected radio waves can travel. At night, this lower ionization largely recombines, so reflection becomes effective from a greater height and radio waves can travel much farther by reflection. The extent of the effect is further influenced by the season, and the amount of sunspot activity.
History
Existence of a reflective layer was predicted in 1902 independently and almost simultaneously by the American electrical engineer Arthur Edwin Kennelly (1861–1939) and the British polymath Oliver Heaviside (1850–1925), as an explanation for the propagation of radio waves beyond the horizon observed by Guglielmo Marconi in 1901. However, it was not until 1924 that its existence was shown by British scientist Edward V. Appleton, for which he received the 1947 Nobel Prize in Physics.
Physicists resisted the idea of the reflecting layer for one very good reason: it would require total internal reflection, which in turn would require that the speed of light in the ionosphere be greater than in the atmosphere below it. Since the latter speed is essentially the same as the speed of light in vacuum (c), scientists were unwilling to believe the speed in the ionosphere could be higher. Nevertheless, Marconi had received signals in Newfoundland that were broadcast in England, so clearly there must be some mechanism allowing the transmission to reach that far. The paradox was resolved by the discovery that there are two velocities of light, the phase velocity and the group velocity. The phase velocity can in fact be greater than c, but the group velocity, which carries the information, cannot, by special relativity, exceed c. The phase velocity for radio waves in the ionosphere is indeed greater than c, and that makes total internal reflection possible, and so the ionosphere can reflect radio waves. In the simple cold-plasma model of the ionosphere, the product of the phase velocity and the group velocity is exactly c², so when the phase velocity rises above c, the group velocity falls below it.
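In the simplest cold-plasma model of the ionosphere, with plasma frequency ω_p, the standard textbook dispersion relation makes this trade-off exact:

```latex
\omega^{2} = \omega_p^{2} + c^{2}k^{2}
\;\Longrightarrow\;
v_{\mathrm{phase}} = \frac{\omega}{k} = \frac{c}{\sqrt{1 - \omega_p^{2}/\omega^{2}}} > c,
\qquad
v_{\mathrm{group}} = \frac{d\omega}{dk} = c\,\sqrt{1 - \omega_p^{2}/\omega^{2}} < c,
\qquad
v_{\mathrm{phase}}\, v_{\mathrm{group}} = c^{2}.
```

Reflection occurs where the wave frequency approaches the local plasma frequency, which is why higher frequencies require a more strongly ionised layer to be turned back.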
In 1925, Americans Gregory Breit and Merle A. Tuve first mapped the Heaviside layer's variations in altitude. The ITU standard model of absorption and reflection of radio waves by the Heaviside layer was developed by the British ionospheric physicist Louis Muggleton in the 1970s.
Etymology
Around 1910, William Eccles proposed the name "Heaviside Layer" for the radio-wave reflecting layer in the upper atmosphere, and the name has subsequently been widely adopted. The name Kennelly–Heaviside layer was proposed in 1925 to give credit to the work of Kennelly, which predated the proposal by Heaviside by several months.
See also
Van Allen Belt
References
Ionosphere
Radio frequency propagation
| Kennelly–Heaviside layer | Physics | 731 |
53,100,222 | https://en.wikipedia.org/wiki/Triple-twin | The Triple-twin was a type of double vacuum triode for audio power amplifiers. A triple-twin contained two dissimilar, directly coupled triodes in a common envelope. To maximize power yield, the output triode was intended to be positively biased, and thus required substantial grid current. This current was supplied by the input triode, configured as a cathode follower. The cathode of the input triode was hard-wired to the control grid of the output triode inside the envelope.
The first tube of the family, type 295, was introduced by Cable Radio Tube Corporation under the Speed label in March 1932. The company advertised the 295 as being twice as powerful as the type 47 pentode, and three times as powerful as the type 45 directly heated triode (the most common output tube of the period) – hence the name triple-twin. Maximum output power reached 4.5 W at 5% distortion into a 4 kOhm load; at 2 kOhm and 10 kOhm loads distortion increased to 8%. The 295 required a +250 V plate voltage, and around +6 V positive grid bias.
The original type 295 had a directly heated output section, and an indirectly heated input section. The 2B6 tube, introduced in 1933, had similar power ratings, but had both cathodes indirectly heated. The Sylvania 6N6G, introduced in 1936, had both cathodes indirectly heated, and also had the cathode follower resistor hard-wired inside the envelope. A single-ended 6N6G amplifier required only one external component, the output transformer; currents and bias voltages were set by the internal resistor. A push-pull 6N6G amplifier required only two tubes and two transformers (input and output).
Despite massive advertising, the triple-twin was a market failure. The industry preferred general-purpose tube types, and the triple-twin was obsolete by the end of the 1930s.
References
Vacuum tubes
1932 in technology
1932 in radio | Triple-twin | Physics | 410 |
75,583,266 | https://en.wikipedia.org/wiki/Adidas%20miCoach | Adidas miCoach (stylized as adidas miCoach) was an Adidas subsidiary which was first announced as a pacer and smart watch on January 7, 2010, at the International Consumer Electronics Show.
Adidas miCoach was the parent of many products, including a video game, a fitness app, a pacer, a smart watch, and a performance center at the Ajax Youth Academy.
Video game
Adidas miCoach is a fitness/sports simulation game developed by Lightning Fish Games and Chromativity and published by 505 Games. It was released on July 13, 2012, in Europe, July 24, 2012, in North America, and July 26, 2012, in Australia for the Xbox 360 via Kinect and PlayStation 3 via PlayStation Move.
Gameplay
Adidas miCoach brought the miCoach interactive athletic training system to video game consoles. Players received real-time feedback on the actual in-game performance during their workouts when wearing a miCoach heart rate monitor.
Instead of faceless narrators or anonymous characters, Adidas miCoach makes use of digitized video footage of actual star athletes, such as Dwight Howard and Ana Ivanovic.
Adidas miCoach offers fitness training for specific sports across six disciplines. Users can train in basketball, association football, American football, tennis, running and rugby. There are also general fitness plans for both men and women.
Reception
Adidas miCoach received "mixed or average" reviews, according to review aggregator Metacritic.
Liam Martin on Digital Spy rated the game 3/5. Also stated that "Adidas miCoach lacks a little finesse, making it hard to recommend above superior fitness titles such as UFC Personal Trainer."
Ravi Sinha on GamingBolt rated the game 5/10 for the Xbox 360. Also stated that "Adidas miCoach isn’t fun, it isn’t responsive and while it means well with excessive content and features, it just doesn’t measure up to Kinect fitness game standards."
Play Sense rated the PlayStation 3 version of the game 3.5/10, stating that "The game uses the PlayStation Move and after calibrating, you would expect it to work well. However, the registration is pretty worthless." Also stating that "In conclusion, Adidas miCoach is a failure on almost every level."
Fitness app
The adidas miCoach app was a free fitness app that provides real-time audible training and sports-specific training programs. It was available for iOS, Android, and Windows.
The user could customize the app with the voice of an adidas athlete of their choice, such as David Villa.
The app could track the user's exercise and act as a personal trainer. It also provided GPS tracking, though pace and speed were calculated guesses without GPS. The app worked similarly to Nike+.
Discontinuation
In February 2017, Adidas announced it was discontinuing the miCoach platform. On February 28, 2018, it shut down the platform and handed it over to Runtastic; Adidas set up a transition service that let miCoach users link their accounts to Runtastic, sync their workout data, and receive a free Premium membership. All miCoach user data was anonymized within Adidas systems and is no longer accessible unless it was migrated to Runtastic; migration from the miCoach platform to the Adidas Running app is no longer possible.
In September 2019, Runtastic rebranded its "Runtastic" app to "Adidas Running" and its "Results" app to "Adidas Training".
Smart watch
The Adidas miCoach Smart Run watch was announced on January 7, 2010, at the 2010 International Consumer Electronics Show and was released in mid-August 2014 in North America and on August 15, 2014, worldwide.
The game and app were compatible with the discontinued Adidas miCoach Smart Run watch. The Smart Run was the sports company's first entry into the smartwatch market and was intended to feel more like a personal coach on the wrist than a simple activity tracker. It was a stand-alone device that could not be used with Adidas Running; it worked only with the Adidas miCoach app.
The miCoach replaced the need for a chest-mounted heart rate monitor by building one in directly beneath the watch, and also came with GPS and Bluetooth for an all-in-one running gadget. The company priced it at $199 and released it in mid-August 2014 in North America and for the rest of the world on August 15, 2014, through Adidas' website, and on September 1 through Adidas Sports Performance stores.
Richard Trenholm of CNET stated that "Adidas's new high-tech timepiece does a lot more than the new Nike FuelBand SE. The FuelBand is a bracelet that records your activity and converts your exertions into NikeFuel points on the Nike+ website, but fitness fanatics are disappointed (as) the new bracelet doesn't feature a heart rate monitor. Still, it's substantially cheaper than the Smart Run."
See also
Adidas
References
Adidas
GPS sports tracking applications
Fitness apps
Exercise equipment
IOS software
WatchOS software
Android (operating system) software
Windows Phone software
Xbox 360 games
PlayStation 3 games
Fitness games
Sports video games
Adidas video games
Kinect games
PlayStation Move-compatible games
2012 video games
505 Games games
Lightning Fish games | Adidas miCoach | Technology | 1,141 |
164,332 | https://en.wikipedia.org/wiki/Chiaroscuro | In art, chiaroscuro ( , ; ) is the use of strong contrasts between light and dark, usually bold contrasts affecting a whole composition. It is also a technical term used by artists and art historians for the use of contrasts of light to achieve a sense of volume in modelling three-dimensional objects and figures. Similar effects in cinema, and black and white and low-key photography, are also called chiaroscuro. Taken to its extreme, the use of shadow and contrast to focus strongly on the subject of a painting is called tenebrism.
Further specialized uses of the term include chiaroscuro woodcut for colour woodcuts printed with different blocks, each using a different coloured ink; and chiaroscuro for drawings on coloured paper in a dark medium with white highlighting.
Chiaroscuro originated in the Renaissance period but is most notably associated with Baroque art. It is one of the canonical painting modes of the Renaissance, alongside cangiante, sfumato and unione (see also Renaissance art). Artists known for using the technique include Leonardo da Vinci, Caravaggio, Rembrandt, Vermeer, Goya, and Georges de La Tour.
History
Origin in the chiaroscuro drawing
The term chiaroscuro originated during the Renaissance as drawing on coloured paper, where the artist worked from the paper's base tone toward light using white gouache, and toward dark using ink, bodycolour or watercolour. These in turn drew on traditions in illuminated manuscripts going back to late Roman Imperial manuscripts on purple-dyed vellum. Such works are called "chiaroscuro drawings", but may only be described in modern museum terminology by such formulae as "pen on prepared paper, heightened with white bodycolour". Chiaroscuro woodcuts began as imitations of this technique. When discussing Italian art, the term sometimes is used to mean painted images in monochrome or two colours, more generally known in English by the French equivalent, grisaille. The term broadened in meaning early on to cover all strong contrasts in illumination between light and dark areas in art, which is now the primary meaning.
Chiaroscuro modelling
The more technical use of the term chiaroscuro is the effect of light modelling in painting, drawing, or printmaking, where three-dimensional volume is suggested by the value gradation of colour and the analytical division of light and shadow shapes—often called "shading". The invention of these effects in the West, "skiagraphia" or "shadow-painting" to the Ancient Greeks, traditionally was ascribed to the famous Athenian painter of the fifth century BC, Apollodoros. Although few Ancient Greek paintings survive, their understanding of the effect of light modelling still may be seen in the late-fourth-century BC mosaics of Pella, Macedonia, in particular the Stag Hunt Mosaic, in the House of the Abduction of Helen, inscribed gnosis epoesen, or 'knowledge did it'.
The technique also survived in rather crude standardized form in Byzantine art and was refined again in the Middle Ages to become standard by the early fifteenth-century in painting and manuscript illumination in Italy and Flanders, and then spread to all Western art.
According to the theory of the art historian Marcia B. Hall, which has gained considerable acceptance, chiaroscuro is one of four modes of painting colours available to Italian High Renaissance painters, along with cangiante, sfumato and unione.
The Raphael painting illustrated, with light coming from the left, demonstrates both delicate modelling chiaroscuro to give volume to the body of the model, and strong chiaroscuro in the more common sense, in the contrast between the well-lit model and the very dark background of foliage. To further complicate matters, however, the compositional chiaroscuro of the contrast between model and background probably would not be described using this term, as the two elements are almost completely separated. The term is mostly used to describe compositions where at least some principal elements of the main composition show the transition between light and dark, as in the Baglioni and Geertgen tot Sint Jans paintings illustrated above and below.
Chiaroscuro modelling is now taken for granted, but it has had some opponents: the English portrait miniaturist Nicholas Hilliard cautioned in his treatise on painting against all but the minimal use seen in his works, reflecting the views of his patron Queen Elizabeth I of England: "seeing that best to show oneself needeth no shadow of place but rather the open light... Her Majesty... chose her place to sit for that purpose in the open alley of a goodly garden, where no tree was near, nor any shadow at all..."
In drawings and prints, modelling chiaroscuro often is achieved by the use of hatching, or shading by parallel lines. Washes, stipple or dotting effects, and "surface tone" in printmaking are other techniques.
Chiaroscuro woodcuts
Chiaroscuro woodcuts are old master prints in woodcut using two or more blocks printed in different colours; they do not necessarily feature strong contrasts of light and dark. They were first produced to achieve similar effects to chiaroscuro drawings. After some early experiments in book-printing, the true chiaroscuro woodcut conceived for two blocks was probably first invented by Lucas Cranach the Elder in Germany in 1508 or 1509, though he backdated some of his first prints and added tone blocks to some prints first produced for monochrome printing, swiftly followed by Hans Burgkmair the Elder. The formschneider or block-cutter who worked in the press of Johannes Schott in Strasbourg is claimed to be the first one to achieve chiaroscuro woodcuts with three blocks. Despite Vasari's claim of Italian precedence in Ugo da Carpi, it is clear that his examples, the first in Italy, date to around 1516. Other sources suggest that the first chiaroscuro woodcut was the Triumph of Julius Caesar, created by the Italian painter Andrea Mantegna between 1470 and 1500. Another view states that "Lucas Cranach backdated two of his works in an attempt to grab the glory" and that the technique was invented "in all probability" by Burgkmair, "who was commissioned by the emperor Maximilian to find a cheap and effective way of getting the imperial image widely disseminated as he needed to drum up money and support for a crusade".
Other printmakers who have used this technique include Hans Wechtlin, Hans Baldung Grien, and Parmigianino. In Germany, the technique achieved its greatest popularity around 1520, but it was used in Italy throughout the sixteenth century. Later artists such as Goltzius sometimes made use of it. In most German two-block prints, the keyblock (or "line block") was printed in black and the tone block or blocks had flat areas of colour. In Italy, chiaroscuro woodcuts were produced without keyblocks to achieve a very different effect.
Compositional chiaroscuro to Caravaggio
Manuscript illumination was, as in many areas, especially experimental in attempting ambitious lighting effects since the results were not for public display. The development of compositional chiaroscuro received a considerable impetus in northern Europe from the vision of the Nativity of Jesus of Saint Bridget of Sweden, a very popular mystic. She described the infant Jesus as emitting light; depictions increasingly reduced other light sources in the scene to emphasize this effect, and the Nativity remained very commonly treated with chiaroscuro through to the Baroque. Hugo van der Goes and his followers painted many scenes lit only by candle or the divine light from the infant Christ. As with some later painters, in their hands the effect was of stillness and calm rather than the drama with which it would be used during the Baroque.
Strong chiaroscuro became a popular effect during the sixteenth century in Mannerism and Baroque art. Divine light continued to illuminate, often rather inadequately, the compositions of Tintoretto, Veronese, and their many followers. The use of dark subjects dramatically lit by a shaft of light from a single constricted and often unseen source, was a compositional device developed by Ugo da Carpi (c. 1455 – c. 1523), Giovanni Baglione (1566–1643), and Caravaggio (1571–1610), the last of whom was crucial in developing the style of tenebrism, where dramatic chiaroscuro becomes a dominant stylistic device.
17th and 18th centuries
Tenebrism was especially practiced in Spain and the Spanish-ruled Kingdom of Naples, by Jusepe de Ribera and his followers. Adam Elsheimer (1578–1610), a German artist living in Rome, produced several night scenes lit mainly by fire, and sometimes moonlight. Unlike Caravaggio's, his dark areas contain very subtle detail and interest. The influences of Caravaggio and Elsheimer were strong on Peter Paul Rubens, who exploited their respective approaches to tenebrosity for dramatic effect in paintings such as The Raising of the Cross (1610–1611). Artemisia Gentileschi (1593–1656), a Baroque artist who was a follower of Caravaggio, was also an outstanding exponent of tenebrism and chiaroscuro.
A particular genre that developed was the nocturnal scene lit by candlelight, which looked back to earlier northern artists such as Geertgen tot Sint Jans and more immediately, to the innovations of Caravaggio and Elsheimer. This theme played out with many artists from the Low Countries in the first few decades of the seventeenth century, where it became associated with the Utrecht Caravaggisti such as Gerrit van Honthorst and Dirck van Baburen, and with Flemish Baroque painters such as Jacob Jordaens. Rembrandt van Rijn's (1606–1669) early works from the 1620s also adopted the single-candle light source. The nocturnal candle-lit scene re-emerged in the Dutch Republic in the mid-seventeenth century on a smaller scale in the works of fijnschilders such as Gerrit Dou and Gottfried Schalken.
Rembrandt's own interest in effects of darkness shifted in his mature works. He relied less on the sharp contrasts of light and dark that marked the Italian influences of the earlier generation, a factor found in his mid-seventeenth-century etchings. In that medium he shared many similarities with his contemporary in Italy, Giovanni Benedetto Castiglione, whose work in printmaking led him to invent the monotype.
Outside the Low Countries, artists such as Georges de La Tour and Trophime Bigot in France and Joseph Wright of Derby in England, carried on with such strong, but graduated, candlelight chiaroscuro. Watteau used a gentle chiaroscuro in the leafy backgrounds of his fêtes galantes, and this was continued in paintings by many French artists, notably Fragonard. At the end of the century Fuseli and others used a heavier chiaroscuro for romantic effect, as did Delacroix and others in the nineteenth century.
Use of the term
The French use of the term, , was introduced by the seventeenth-century art-critic Roger de Piles in the course of a famous argument (Débat sur le coloris), on the relative merits of drawing and colour in painting (his Dialogues sur le coloris, 1673, was a key contribution to the Débat).
In English, the Italian term has been used since at least the late seventeenth century. The term is less frequently used of art after the late nineteenth century, although the Expressionist and other modern movements make great use of the effect.
Especially since the strong twentieth-century rise in the reputation of Caravaggio, in non-specialist use the term is mainly used for strong chiaroscuro effects such as his, or Rembrandt's. As the Tate puts it: "Chiaroscuro is generally only remarked upon when it is a particularly prominent feature of the work, usually when the artist is using extreme contrasts of light and shade".
Cinema and photography
Chiaroscuro is used in cinematography for extreme low-key and high-contrast lighting to create distinct areas of light and darkness in films, especially in black and white films. Classic examples are The Cabinet of Dr. Caligari (1920), Nosferatu (1922), Metropolis (1927), The Hunchback of Notre Dame (1939), The Devil and Daniel Webster (1941), and the black and white scenes in Andrei Tarkovsky's Stalker (1979).
For example, in Metropolis, chiaroscuro lighting creates contrast between light and dark mise-en-scene and figures. The effect highlights the differences between the capitalist elite and the workers.
In photography, chiaroscuro can be achieved by using "Rembrandt lighting". In more highly developed photographic processes, the technique may be termed "ambient/natural lighting", although when done so for the effect, the look is artificial and not generally documentary in nature. In particular, Bill Henson along with others, such as W. Eugene Smith, Josef Koudelka, Lothar Wolleh, Annie Leibovitz, Floria Sigismondi, and Ralph Gibson may be considered some of the modern masters of chiaroscuro in documentary photography.
Perhaps the most direct use of chiaroscuro in filmmaking is Stanley Kubrick's 1975 film Barry Lyndon. When informed that no lens then had a sufficiently wide aperture to shoot a costume drama set in grand palaces using only candlelight, Kubrick bought and retrofitted special equipment for the purpose: a modified Mitchell BNC camera and a Zeiss lens manufactured for the rigors of space photography, with a maximum aperture of f/0.7. The natural, unaugmented lighting of the sets in the film exemplified low-key, natural lighting in filmwork at its most extreme, outside of the Eastern European/Soviet filmmaking tradition (itself exemplified by the harsh low-key lighting style employed by Soviet filmmaker Sergei Eisenstein).
Sven Nykvist, the longtime collaborator of Ingmar Bergman, also informed much of his photography with chiaroscuro realism, as did Gregg Toland, who influenced such cinematographers as László Kovács, Vilmos Zsigmond, and Vittorio Storaro with his use of deep and selective focus augmented with strong horizon-level key lighting penetrating through windows and doorways. Much of the celebrated film noir tradition relies on techniques related to chiaroscuro that Toland perfected in the early 1930s (though high-key lighting, stage lighting, frontal lighting, and other film noir effects are interspersed in ways that diminish the chiaroscuro claim).
Gallery
Chiaroscuro in modelling; paintings
Chiaroscuro in modelling; prints and drawings
Chiaroscuro as a major element in composition: painting
Chiaroscuro as a major element in composition: photography
Chiaroscuro faces
Chiaroscuro drawings and woodcuts
See also
Light-and-shade watermark
Notes
References
David Landau & Peter Parshall, The Renaissance Print, Yale, 1996, pp. 179–202, 273–81 and passim.
External links
Chiaroscuro Woodcut from the Metropolitan Museum of Art Timeline of Art History
Chiaroscuro woodcut from Spencer Museum of Art, Kansas
(Modelling) chiaroscuro from Evansville University
Visual arts terminology
Artistic techniques
Italian words and phrases
Composition in visual art
Shadows | Chiaroscuro | Physics | 3,272 |
4,041,866 | https://en.wikipedia.org/wiki/128P/Shoemaker%E2%80%93Holt | 128P/Shoemaker–Holt, also known as Shoemaker-Holt 1, is a periodic comet in the Solar System. The comet passed close to Jupiter in 1982 and was discovered in 1987. The comet was last observed in March 2018.
The nucleus split into two fragments (A and B) during the 1997 apparition. Fragment A was last observed in 1996 and has only a 79-day observation arc. Fragment B is estimated to be 4.6 km in diameter.
References
External links
Orbital simulation from JPL (Java) / Horizons Ephemeris
128P/Shoemaker-Holt 1 – Seiichi Yoshida @ aerith.net
128P at Kronk's Cometography
Periodic comets
0128
128P
128P
128P
128P
128P
19871018 | 128P/Shoemaker–Holt | Astronomy | 163 |
71,742,822 | https://en.wikipedia.org/wiki/PALISADE%20%28software%29 | PALISADE is an open-source cross platform software library that provides implementations of lattice cryptography building blocks and homomorphic encryption schemes.
History
PALISADE adopted the open modular design principles of the predecessor SIPHER software library from the DARPA PROCEED program. SIPHER development began in 2010, with a focus on modular open design principles to support rapid application deployment over multiple FHE schemes and hardware accelerator back-ends, including on mobile, FPGA and CPU-based computing systems. PALISADE began building from earlier SIPHER designs in 2014, with an open-source release in 2017 and substantial improvements every subsequent 6 months.
PALISADE development was funded originally by the DARPA PROCEED and SafeWare programs, with subsequent improvements funded by additional DARPA programs, IARPA, the NSA, NIH, ONR, the United States Navy, the Sloan Foundation and commercial entities such as Duality Technologies. PALISADE has subsequently been used in commercial offerings, such as by Duality Technologies who raised funding in a Seed round and a later Series A round led by Intel Capital.
In 2022 OpenFHE was released as a fork that also implements CKKS bootstrapping.
Features
PALISADE includes the following features:
Post-quantum public-key encryption
Fully homomorphic encryption (FHE)
Brakerski/Fan-Vercauteren (BFV) scheme for integer arithmetic with RNS optimizations
Brakerski-Gentry-Vaikuntanathan (BGV) scheme for integer arithmetic with RNS optimizations
Cheon-Kim-Kim-Song (CKKS) scheme for real-number arithmetic with RNS optimizations
Ducas-Micciancio (FHEW) scheme for Boolean circuit evaluation with optimizations
Chillotti-Gama-Georgieva-Izabachene (TFHE) scheme for Boolean circuit evaluation with extensions
Multiparty extensions of FHE
Threshold FHE for BGV, BFV, and CKKS schemes
Proxy re-encryption for BGV, BFV, and CKKS schemes
Digital signature
Identity-based encryption
Ciphertext-policy attribute-based encryption
Availability
There are several known git repositories/ports for PALISADE:
C++
PALISADE Stable Release (official stable release repository)
PALISADE Preview Release (official development/preview release repository)
PALISADE Digital Signature Extensions
PALISADE Attribute-Based Encryption Extensions (includes identity-based encryption and ciphertext-policy attribute-based encryption)
JavaScript / WebAssembly
PALISADE WebAssembly (official WebAssembly port)
Python
Python Demos (official Python demos)
FreeBSD
PALISADE (FreeBSD port)
References
Homomorphic encryption
Cryptographic software
Free and open-source software
Software using the BSD license
Free software programmed in C++ | PALISADE (software) | Mathematics | 576 |
171,915 | https://en.wikipedia.org/wiki/Lee%20Smolin | Lee Smolin (; born June 6, 1955) is an American theoretical physicist, a faculty member at the Perimeter Institute for Theoretical Physics, an adjunct professor of physics at the University of Waterloo, and a member of the graduate faculty of the philosophy department at the University of Toronto. Smolin's 2006 book The Trouble with Physics criticized string theory as a viable scientific theory. He has made contributions to quantum gravity theory, in particular the approach known as loop quantum gravity. He advocates that the two primary approaches to quantum gravity, loop quantum gravity and string theory, can be reconciled as different aspects of the same underlying theory. He also advocates an alternative view on space and time that he calls temporal naturalism. His research interests also include cosmology, elementary particle theory, the foundations of quantum mechanics, and theoretical biology.
Personal life
Smolin was born in New York City to Michael Smolin, an environmental and process engineer, and Pauline Smolin, a playwright. Smolin has said his parents were Jewish followers of the Fourth Way, founded by the Armenian mystic George Gurdjieff, and has described himself as Jewish. His brother, David M. Smolin, became a professor at the Cumberland School of Law in Birmingham, Alabama.
Smolin dropped out of Walnut Hills High School in Cincinnati, Ohio. His interest in physics began at that time, when he read Einstein's reflections on the two tasks he would leave unfinished at his death: (1) to make sense of quantum mechanics, and (2) to unify that understanding of the quanta with gravity. Smolin would take it as his "mission" to try to complete these tasks. Shortly afterward, he browsed the physics library at the University of Cincinnati, where he came across Louis de Broglie's pilot wave theory in French. "I still can close my eyes," Smolin wrote in Einstein's Unfinished Revolution, "and see a page of the book, displaying the equation that relates wavelength to momentum." Soon after that he would "talk his way into" Hampshire College, find great teachers, and get lucky in his applications to graduate school. As to his mission of solving Einstein's two big questions, by Smolin's account he did not succeed: "Very unfortunately, neither has anyone else."
Smolin has stayed involved with theatre, becoming a scientific consultant for such plays as A Walk in the Woods by Lee Blessing, Background Interference by Drucilla Cornell, and Infinity by Hannah Moscovitch.
Smolin is married to Dina Graser, a lawyer and urban policy consultant in Toronto, Ontario. He was previously married to Fotini Markopoulou-Kalamara.
Career
He held postdoctoral research positions at the Institute for Advanced Study in Princeton, New Jersey, the Kavli Institute for Theoretical Physics in Santa Barbara, and the University of Chicago, before becoming a faculty member at Yale, Syracuse, and Pennsylvania State Universities. He was a visiting scholar at the Institute for Advanced Study in 1995 and a visiting professor at Imperial College London (1999-2001), before becoming one of the founding faculty members at the Perimeter Institute in 2001.
Theories and work
Loop quantum gravity
Smolin contributed to the theory of loop quantum gravity (LQG) in collaborative work with Ted Jacobson, Carlo Rovelli, Louis Crane, Abhay Ashtekar and others. LQG is an approach to the unification of quantum mechanics with general relativity which utilizes a reformulation of general relativity in the language of gauge field theories, which allows the use of techniques from particle physics, particularly the expression of fields in terms of the dynamics of loops. With Rovelli he discovered the discreteness of areas and volumes and found their natural expression in terms of a discrete description of quantum geometry in terms of spin networks. In recent years he has focused on connecting LQG to phenomenology by developing implications for experimental tests of spacetime symmetries as well as investigating ways elementary particles and their interactions could emerge from spacetime geometry.
Background independent approaches to string theory
Between 1999 and 2002, Smolin made several proposals to provide a fundamental formulation of string theory that does not depend on approximate descriptions involving classical background spacetime models.
Experimental tests of quantum gravity
Smolin is among those theorists who have proposed that the effects of quantum gravity can be experimentally probed by searching for modifications in special relativity detected in observations of high energy astrophysical phenomena, including very high energy cosmic rays and photons and neutrinos from gamma ray bursts. Among Smolin's contributions are the co-invention of doubly special relativity (with João Magueijo, independently of work by Giovanni Amelino-Camelia), and of relative locality (with Amelino-Camelia, Laurent Freidel, and Jerzy Kowalski-Glikman).
Foundations of quantum mechanics
Smolin has worked since the early 1980s on a series of proposals for hidden variables theories, which would be non-local deterministic theories which would give a precise description of individual quantum phenomena. In recent years, he has pioneered two new approaches to the interpretation of quantum mechanics suggested by his work on the reality of time, called the real ensemble interpretation and the principle of precedence.
Cosmological natural selection
Smolin's hypothesis of cosmological natural selection, also called the fecund universes theory, suggests that a process analogous to biological natural selection applies at the grandest of scales. Smolin published the idea in 1992 and summarized it in a book aimed at a lay audience called The Life of the Cosmos.
Black holes have a role in this natural selection. In fecund theory, a collapsing black hole causes the emergence of a new universe on the "other side", whose fundamental constant parameters (masses of elementary particles, Planck constant, elementary charge, and so forth) may differ slightly from the universe where the black hole collapsed. Each universe gives rise to as many new universes — its "offspring" — as it has black holes, giving an evolutionary advantage to universes in which black holes are common, which are similar to our own. The theory thus explains why our universe appears "fine-tuned" for the emergence of life as we know it. Because the theory applies the evolutionary concepts of "reproduction", "mutation", and "selection" to universes, it is formally analogous to models of population biology.
When Smolin published the theory in 1992, he proposed as a prediction of his theory that no neutron star should exist with a mass of more than 1.6 times the mass of the sun. Later this figure was raised to two solar masses following more precise modeling of neutron star interiors by nuclear astrophysicists. Smolin also predicted that inflation, if true, must only be in its simplest form, governed by a single field and parameter.
Contributions to the philosophy of physics
Smolin has contributed to the philosophy of physics through a series of papers and books that advocate the relational, or Leibnizian, view of space and time. Since 2006, he has collaborated with the Brazilian philosopher and Harvard Law School professor Roberto Mangabeira Unger on the issues of the reality of time and the evolution of laws; in 2014 they published a book, its two parts being written separately.
A book length exposition of Smolin's philosophical views appeared in April 2013. Time Reborn argues that physical science has made time unreal while, as Smolin insists, it is the most fundamental feature of reality: "Space may be an illusion, but time must be real" (p. 179). An adequate description according to him would give a Leibnizian universe: indiscernibles would not be admitted and every difference should correspond to some other difference, as the principle of sufficient reason would have it. A few months later a more concise text was made available in a paper with the title Temporal Naturalism.
The Trouble with Physics
Smolin's 2006 book The Trouble with Physics explored the role of controversy and disagreement in the progress of science. It argued that science progresses fastest if the scientific community encourages the widest possible disagreement among trained and accredited professionals prior to the formation of consensus brought about by experimental confirmation of predictions of falsifiable theories. He proposed that this meant the fostering of diverse competing research programs, and that premature formation of paradigms not forced by experimental facts can slow the progress of science.
As a case study, The Trouble with Physics focused on the issue of the falsifiability of string theory due to the proposals that the anthropic principle be used to explain the properties of our universe in the context of the string landscape. The book was criticized by physicist Joseph Polchinski and other string theorists.
In his earlier book Three Roads to Quantum Gravity (2002), Smolin stated that loop quantum gravity and string theory were essentially the same concept seen from different perspectives. In that book, he also favored the holographic principle. The Trouble with Physics, on the other hand, was strongly critical of the prominence of string theory in contemporary theoretical physics, which he believes has suppressed research in other promising approaches. Smolin suggests that string theory suffers from serious deficiencies and has an unhealthy near-monopoly in the particle theory community. He called for a diversity of approaches to quantum gravity, and argued that more attention should be paid to loop quantum gravity, an approach Smolin has devised. Finally, The Trouble with Physics is also broadly concerned with the role of controversy and the value of diverse approaches in the ethics and process of science.
In the same year that The Trouble with Physics was published, Peter Woit published Not Even Wrong, a book for nonspecialists whose conclusion was similar to Smolin's, namely that string theory was a fundamentally flawed research program.
Views
Smolin's view on the nature of time:
More and more, I have the feeling that quantum theory and general relativity are both deeply wrong about the nature of time. It is not enough to combine them. There is a deeper problem, perhaps going back to the beginning of physics.
Smolin does not believe that quantum mechanics is a "final theory":
I am convinced that quantum mechanics is not a final theory. I believe this because I have never encountered an interpretation of the present formulation of quantum mechanics that makes sense to me. I have studied most of them in depth and thought hard about them, and in the end I still can't make real sense of quantum theory as it stands.
In a 2009 article, Smolin articulated the following philosophical views (the sentences in italics are quotations):
There is only one universe. There are no others, nor is there anything isomorphic to it. Smolin denies the existence of a "timeless" multiverse. Neither other universes nor copies of our universe—within or outside—exist. No copies can exist within the universe, because no subsystem can model precisely the larger system it is a part of. No copies can exist outside the universe, because the universe is by definition all there is. This principle also rules out the notion of a mathematical object isomorphic in every respect to the history of the entire universe, a notion more metaphysical than scientific.
All that is real is real in a moment, which is a succession of moments. Anything that is true is true of the present moment. Not only is time real, but everything that is real is situated in time. Nothing exists timelessly.
Everything that is real in a moment is a process of change leading to the next or future moments. Anything that is true is then a feature of a process in this process causing or implying future moments. This principle incorporates the notion that time is an aspect of causal relations. A reason for asserting it, is that anything that existed for just one moment, without causing or implying some aspect of the world at a future moment, would be gone in the next moment. Things that persist must be thought of as processes leading to newly changed processes. An atom at one moment is a process leading to a different or a changed atom at the next moment.
Mathematics is derived from experience as a generalization of observed regularities, when time and particularity are removed. Under this heading, Smolin distances himself from mathematical platonism, and gives his reaction to Eugene Wigner's "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".
Smolin views rejecting the idea of a creator as essential to cosmology on similar grounds to his objections against the multiverse. He does not definitively exclude or reject religion or mysticism but rather believes that science should only deal with that which is observable. He also opposes the anthropic principle, which he claims "cannot help us to do science."
He also advocates "principles for an open future" which he claims underlie the work of both healthy scientific communities and democratic societies: "(1) When rational argument from public evidence suffices to decide a question, it must be considered to be so decided. (2) When rational argument from public evidence does not suffice to decide a question, the community must encourage a diverse range of viewpoints and hypotheses consistent with a good-faith attempt to develop convincing public evidence." (Time Reborn p. 265.)
Lee Smolin has been a recurring guest on Through the Wormhole.
Awards and honors
Smolin was named as #21 on Foreign Policy Magazine's list of Top 100 Public Intellectuals. He is also one of many physicists dubbed the "New Einstein" by the media. The Trouble with Physics was named by Newsweek magazine as number 17 on a list of 50 "Books for our Time", June 27, 2009. In 2007 he was awarded the Majorana Prize from the Electronic Journal of Theoretical Physics, and in 2009 the Klopsteg Memorial Award from the American Association of Physics Teachers (AAPT) for "extraordinary accomplishments in communicating the excitement of physics to the general public". He is a fellow of the Royal Society of Canada and the American Physical Society. In 2014 he was awarded the Buchalter Cosmology Prize for a work published in collaboration with Marina Cortês.
Publications
1997. The Life of the Cosmos
2001. Three Roads to Quantum Gravity
2006. The Trouble With Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Houghton Mifflin.
2013. Time Reborn: From the Crisis in Physics to the Future of the Universe.
2014. The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy, by Lee Smolin and Roberto Mangabeira Unger, Cambridge University Press.
2019. Einstein’s Unfinished Revolution: The Search for What Lies Beyond the Quantum, Penguin Press.
See also
List of University of Waterloo people
References
External links
A partial list of Smolin's published work
A debate of the merits of string theory between Smolin and Brian Greene, from National Public Radio (2006)
"The Unique Universe ": Smolin explains his skepticism re the multiverse (2009)
Closer to the Truth: Series of interviews by Smolin on fundamental issues in physics
: Smolin's presentation at the Royal Society of Arts (2013)
21st-century American physicists
American cosmologists
1955 births
Living people
American Jews
Hampshire College alumni
Harvard University alumni
University of Cincinnati alumni
Institute for Advanced Study visiting scholars
Academic staff of the University of Waterloo
Loop quantum gravity researchers
American relativity theorists
Philosophers of cosmology
Philosophers of time
Fellows of the American Physical Society | Lee Smolin | Astronomy | 3,145 |
1,629,349 | https://en.wikipedia.org/wiki/Catechol | Catechol ( or ), also known as pyrocatechol or 1,2-dihydroxybenzene, is an organic compound with the molecular formula . It is the ortho isomer of the three isomeric benzenediols. This colorless compound occurs naturally in trace amounts. It was first discovered by destructive distillation of the plant extract catechin. About 20,000 tonnes of catechol are now synthetically produced annually as a commodity organic chemical, mainly as a precursor to pesticides, flavors, and fragrances. Small amounts of catechol occur in fruits and vegetables.
Isolation and synthesis
Catechol was first isolated in 1839 by Edgar Hugo Emil Reinsch (1809–1884) by distilling it from the solid tannic preparation catechin, which is the residuum of catechu, the boiled or concentrated juice of Mimosa catechu (Acacia catechu). Upon heating catechin above its decomposition point, a substance that Reinsch first named Brenz-Katechusäure (burned catechu acid) sublimated as a white efflorescence. This was a thermal decomposition product of the flavanols in catechin. In 1841, both Wackenroder and Zwenger independently rediscovered catechol; in reporting on their findings, Philosophical Magazine coined the name pyrocatechin. By 1852, Erdmann realized that catechol was benzene with two oxygen atoms added to it; in 1867, August Kekulé realized that catechol was a diol of benzene, so by 1868, catechol was listed as pyrocatechol. In 1879, the Journal of the Chemical Society recommended that catechol be called "catechol", and in the following year, it was listed as such.
Catechol has since been shown to occur in free form naturally in kino and in beechwood tar. Its sulfonic acid has been detected in the urine of horses and humans.
Catechol is produced industrially by the hydroxylation of phenol using hydrogen peroxide.
It can be produced by the reaction of salicylaldehyde with base and hydrogen peroxide (Dakin oxidation), as well as by the hydrolysis of 2-substituted phenols, especially 2-chlorophenol, with hot aqueous solutions containing alkali metal hydroxides. Its methyl ether derivative, guaiacol, converts to catechol via hydrolysis of the ether bond, as promoted by hydroiodic acid (HI).
Reactions
Like some other difunctional benzene derivatives, catechol readily condenses to form heterocyclic compounds. For example, treatment with phosphorus trichloride or phosphorus oxychloride gives the cyclic chlorophosphonite or chlorophosphonate, respectively; sulfuryl chloride gives the sulfate; and phosgene (COCl2) gives the carbonate:
C6H4(OH)2 + XCl2 → C6H4(O2X) + 2 HCl, where X = PCl, POCl, SO2, or CO
Basic solutions of catechol react with iron(III) to give the red complex [Fe(C6H4O2)3]3−. Ferric chloride gives a green coloration with the aqueous solution, while the alkaline solution rapidly changes to a green and finally to a black color on exposure to the air. Iron-containing dioxygenase enzymes catalyze the cleavage of catechol.
Redox chemistry
Catechols undergo one-electron oxidation to the semiquinone radical; this conversion occurs at about 100 mV. The semiquinone radical can in turn be reduced to the catecholate dianion, the potential being dependent on pH. Catechol itself is produced by a reversible two-electron, two-proton reduction of 1,2-benzoquinone.
The redox series catecholate dianion, monoanionic semiquinonate, and benzoquinone are collectively called dioxolenes. Dioxolenes can function as ligands for metal ions.
Catechol derivatives
Catechol derivatives are found widely in nature. They often arise by hydroxylation of phenols.
Arthropod cuticle consists of chitin linked by a catechol moiety to protein. The cuticle may be strengthened by cross-linking (tanning and sclerotization), particularly in insects, and by biomineralization.
The synthetic derivative 4-tert-butylcatechol is used as an antioxidant and polymerization inhibitor.
Uses
Approximately 50% of the synthetic catechol is consumed in the production of pesticides, the remainder being used as a precursor to fine chemicals such as perfumes and pharmaceuticals. It is a common building block in organic synthesis. Several industrially significant flavors and fragrances are prepared starting from catechol. Guaiacol is prepared by methylation of catechol and is then converted to vanillin on a scale of about 10 million kg per year (1990). The related monoethyl ether of catechol, guethol, is converted to ethylvanillin, a component of chocolate confectioneries. 3-trans-Isocamphylcyclohexanol, widely used as a replacement for sandalwood oil, is prepared from catechol via guaiacol and camphor. Piperonal, a flowery scent, is prepared from the methylene diether of catechol followed by condensation with glyoxal and decarboxylation.
Josef Maria Eder published in 1879 his findings on the use of catechol as a black-and-white photographic developer, but, except for some special purpose applications, its use is largely historical. It is rumored to have been used briefly in Eastman Kodak's HC-110 developer and is rumored to be a component in Tetenal's Neofin Blau developer. It is a key component of Finol from Moersch Photochemie in Germany. Modern catechol developing was pioneered by noted photographer Sandy King. His "PyroCat" formulation is popular among modern black-and-white film photographers. King's work has since inspired further 21st-century development by others such as Jay De Fehr with Hypercat and Obsidian Acqua developers, and others.
Nomenclature
Although rarely encountered, the officially "preferred IUPAC name" (PIN) of catechol is benzene-1,2-diol. The trivial name pyrocatechol is a retained IUPAC name, according to the 1993 Recommendations for the Nomenclature of Organic Chemistry.
See also
Enol
Pyrogallol
Thiotimoline
References
External links
International Chemical Safety Card 0411
NIOSH Pocket Guide to Chemical Hazards
IARC Monograph: "Catechol"
IUPAC Nomenclature of Organic Chemistry (online version of the "Blue Book")
Antioxidants
Chelating agents
Enediols
IARC Group 2B carcinogens
Photographic chemicals
Reducing agents
Substances discovered in the 19th century | Catechol | Chemistry | 1,446 |
37,725,306 | https://en.wikipedia.org/wiki/Ball-and-disk%20integrator | The ball-and-disk integrator is a key component of many advanced mechanical computers. Through simple mechanical means, it performs continual integration of the value of an input. Typical uses were the measurement of area or volume of material in industrial settings, range-keeping systems on ships, and tachometric bombsights. The addition of the torque amplifier by Vannevar Bush led to the differential analysers of the 1930s and 1940s.
Description and operation
The basic mechanism consists of two inputs and one output. The first input is a spinning disk, generally electrically driven, and using some sort of governor to ensure that it turns at a fixed rate. The second input is a movable carriage that holds a bearing against the input disk, along its radius. The bearing transfers motion from the disk to an output shaft. The axis of the output shaft is oriented parallel to the rails of the carriage. As the carriage slides, the bearing remains in contact with both the disk & the output, allowing one to drive the other.
The spin rate of the output shaft is governed by the displacement of the carriage; this is the "integration." When the bearing is positioned at the center of the disk, no net motion is imparted; the output shaft remains stationary. As the carriage moves the bearing away from the center and towards the edge of the disk, the bearing, and thus the output shaft, begins to rotate faster and faster. Effectively, this is a system of two gears with an infinitely variable gear ratio; when the bearing is nearer to the center of the disk, the ratio is low (or zero), and when the bearing is nearer to the edge, it is high.
The output shaft can rotate either "forward" or "backward," depending on the direction of the bearing's displacement; this is a useful property for an integrator.
Consider an example system that measures the total amount of water flowing through a sluice: A float is attached to the input carriage so the bearing moves up and down with the level of the water. As the water level rises, the bearing is pushed farther from the center of the input disk, increasing the output's rotation rate. By counting the total number of turns of the output shaft (for example, with an odometer-type device), and multiplying by the cross-sectional area of the sluice, the total amount of water flowing past the meter can be determined.
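The behaviour of this arrangement is easy to verify numerically. Below is a minimal Python sketch of the sluice example, assuming the bearing displacement tracks the water level through a simple lever and the disk turns at a constant governed rate; all names and constants (DISK_RATE, LEVER_GAIN, OUTPUT_RADIUS) are illustrative, not taken from any historical device.

```python
DISK_RATE = 2.0        # disk revolutions per second (constant, governed)
LEVER_GAIN = 0.1       # metres of bearing displacement per metre of water level
OUTPUT_RADIUS = 0.05   # effective contact radius of the output shaft (m)
DT = 0.01              # simulation time step (s)

def integrate_level(levels):
    """Accumulate output-shaft turns for a sampled water level (m).

    The output spin rate is proportional to the bearing's displacement
    from the disk centre, so summing rate * DT approximates the time
    integral of the level -- exactly what the mechanism computes.
    """
    total_turns = 0.0
    for level in levels:
        r = LEVER_GAIN * level                        # bearing displacement (m)
        output_rate = DISK_RATE * r / OUTPUT_RADIUS   # output shaft rev/s
        total_turns += output_rate * DT
    return total_turns

# Water level ramping linearly from 0 to 1 m over 10 s.
levels = [t * DT / 10.0 for t in range(1000)]
turns = integrate_level(levels)

# Analytic check: the integral of the ramp is 5 m*s, so we expect
# DISK_RATE * LEVER_GAIN / OUTPUT_RADIUS * 5 = 20 turns (up to step error).
print(f"output shaft turns: {turns:.2f}")
```

Multiplying the accumulated turns by the sluice cross-section and a calibration constant would convert them to a total volume, as described above.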
History
Invention and early use
The basic concept of the ball-and-disk integrator was first described by James Thomson, brother of William Thomson, 1st Baron Kelvin. William used the concept to build the Harmonic Analyser in 1886. This system was used to calculate the coefficients of a Fourier series representing inputs dialled in as the positions of the balls. The inputs were set to measured tide heights from any port being studied. The output was then fed into a similar machine, the Harmonic Synthesiser, which spun several wheels to represent the phase of the contribution from the sun and moon. A wire running along the top of the wheels took the maximum value, which represented the tide in the port at a given time. Thomson mentioned the possibility of using the same system as a way to solve differential equations, but realized that the output torque from the integrator was too low to drive the required downstream systems of pointers.
A number of similar systems followed, notably those of Leonardo Torres Quevedo, a Spanish engineer who built several machines for solving real and complex roots of polynomials, and of Michelson and Stratton, whose Harmonic Analyser performed Fourier analysis using an array of 80 springs rather than Kelvin integrators. This work led to the mathematical understanding of the Gibbs phenomenon of overshoot in Fourier representation near discontinuities.
Military computers
By the turn of the 20th century, naval ships were starting to mount guns with over-the-horizon range. At these sorts of distances, spotters in the towers could not accurately estimate range by eye, leading to the introduction of ever more complex range finding systems. Additionally, the gunners could no longer directly spot the fall of their own shot, relying on the spotters to do this and relay this information to them. At the same time the speed of the ships was increasing, consistently breaking the 20 knot barrier en masse around the time of the introduction of the Dreadnought in 1906. Centralized fire control followed in order to manage the information flow and calculations, but calculating the firing proved to be very complex and error prone.
The solution was the Dreyer table, which used a large ball-and-disk integrator as a way to compare the motion of the target relative to the ship, and thereby calculate its range and speed. Output was to a roll of paper. The first systems were introduced around 1912 and installed in 1914. Over time, the Dreyer system added more and more calculators, solving for the effects of wind, corrections between apparent and real wind speed and direction based on the ships motion, and similar calculations. By the time the Mark V systems were installed on later ships after 1918, the system might have as many as 50 people operating it in concert.
Similar devices soon appeared in other navies and for other roles. The US Navy used a somewhat simpler device known as the Rangekeeper, but this also saw continual modification over time and eventually turned into a system of equal or greater sophistication to the UK versions. A similar calculator formed the basis of the Torpedo Data Computer, which solved the more demanding problem of the very long engagement times of torpedo fire.
A well-known example is the Norden bombsight which used a slight variation on the basic design, replacing the ball with another disk. In this system the integrator was used to calculate the relative motion of objects on the ground given the altitude, airspeed, and heading. By comparing the calculated output with the actual motion of objects on the ground, any difference would be due to the effects of wind on the aircraft. Dials setting these values were used to zero out any visible drift, which resulted in accurate wind measurements, formerly a very difficult problem.
Ball-and-disk integrators were used in the analog guidance computers of ballistic missile weapon systems as late as the mid-1970s. The Pershing 1 missile system utilized the Bendix ST-120 inertial guidance platform, combined with a mechanical analog computer, to achieve accurate guidance. The ST-120 provided accelerometer information for all three axes. The accelerometer for forward movement transmitted its position to the ball-position radial arm, causing the ball fixture to move away from the disk center as acceleration increased. The disk itself represents time and rotates at a constant rate. As the ball fixture moves farther from the center of the disk, the ball spins faster. The ball speed represents the missile speed, and the number of ball rotations represents the distance traveled. These mechanical positions were used to determine staging events, thrust termination, and warhead separation, as well as "good guidance" signals used to complete the arming chain for the warhead. The first known use of this general concept was in the V-2 missile developed by the von Braun group at Peenemünde (see PIGA accelerometer). It was later refined at Redstone Arsenal and applied to the Redstone rocket and subsequently Pershing 1.
References
Bibliography
Mechanical computers | Ball-and-disk integrator | Physics,Technology | 1,501 |
38,007 | https://en.wikipedia.org/wiki/Julian%20day | The Julian day is a continuous count of days from the beginning of the Julian period; it is used primarily by astronomers, and in software for easily calculating elapsed days between two events (e.g. food production date and sell by date).
The Julian period is a chronological interval of 7980 years, derived from three multi-year cycles: the indiction, solar, and lunar cycles. The last year that was simultaneously the beginning of all three cycles was 4713 BC, so that is year 1 of the current Julian period; a year AD is converted to a year of the Julian Period by adding 4713. The next Julian Period begins in the year AD 3268. Historians used the period to identify Julian calendar years within which an event occurred when no such year was given in the historical record, or when the year given by previous historians was incorrect.
The Julian day number (JDN) shares the epoch of the Julian period, but counts days instead of years. Specifically, Julian day number 0 is assigned to the day starting at noon Universal Time on Monday, January 1, 4713 BC, in the proleptic Julian calendar (November 24, 4714 BC, in the proleptic Gregorian calendar). For example, the Julian day number for the day starting at 12:00 UT (noon) on January 1, 2000, was 2451545.
The Julian date (JD) of any instant is the Julian day number plus the fraction of a day since the preceding noon in Universal Time. Julian dates are expressed as a Julian day number with a decimal fraction added. For example, the Julian Date for 00:30:00.0 UT January 1, 2013, is 2456293.520833.
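For readers who want to reproduce these figures, the following Python sketch implements one widely used civil-date-to-Julian-day conversion (a Fliegel–Van Flandern style formula, rearranged so that Python's floor division is safe); it is one common algorithm, not the only one.

```python
def gregorian_to_jdn(year, month, day):
    """Julian day number of the day starting at noon UT on the given
    Gregorian calendar date (integer arithmetic only)."""
    a = (14 - month) // 12          # 1 for January/February, else 0
    y = year + 4800 - a             # year count shifted to keep terms positive
    m = month + 12 * a - 3          # March = 0 ... February = 11
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)

def julian_date(year, month, day, hour=12, minute=0, second=0.0):
    """Julian date: day number plus the fraction since the preceding noon."""
    frac = (hour - 12) / 24 + minute / 1440 + second / 86400
    return gregorian_to_jdn(year, month, day) + frac

# Both checks reproduce the values quoted in the text above.
assert gregorian_to_jdn(2000, 1, 1) == 2451545
assert abs(julian_date(2013, 1, 1, 0, 30) - 2456293.520833) < 1e-6
```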
Terminology
The term Julian date may also refer, outside of astronomy, to the day-of-year number (more properly, the ordinal date) in the Gregorian calendar, especially in computer programming, the military and the food industry, or it may refer to dates in the Julian calendar. For example, if a given "Julian date" is "October 5, 1582", this means that date in the Julian calendar (which was October 15, 1582, in the Gregorian calendar, the date on which the Gregorian calendar was first established). Without an astronomical or historical context, a "Julian date" given as "36" most likely means the 36th day of a given Gregorian year, namely February 5. Other possible meanings of a "Julian date" of "36" include an astronomical Julian day number, the year AD 36 in the Julian calendar, or a duration of 36 astronomical Julian years. This is why the terms "ordinal date" or "day-of-year" are preferred. In contexts where a "Julian date" means simply an ordinal date, calendars of a Gregorian year with formatting for ordinal dates are often called "Julian calendars", but this could also mean that the calendars are of years in the Julian calendar system.
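As a quick illustration of this ordinal-date sense of "Julian date", Python's standard library computes the day-of-year directly; this snippet simply checks the February 5 example above (the choice of 2013, a non-leap year, is arbitrary).

```python
import datetime

d = datetime.date(2013, 2, 5)
assert d.timetuple().tm_yday == 36    # "Julian date" 36 = February 5
assert d.strftime("%j") == "036"      # zero-padded day-of-year string
```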
Historically, Julian dates were recorded relative to Greenwich Mean Time (GMT) (later, Ephemeris Time), but since 1997 the International Astronomical Union has recommended that Julian dates be specified in Terrestrial Time. Seidelmann indicates that Julian dates may be used with International Atomic Time (TAI), Terrestrial Time (TT), Barycentric Coordinate Time (TCB), or Coordinated Universal Time (UTC) and that the scale should be indicated when the difference is significant. The fraction of the day is found by converting the number of hours, minutes, and seconds after noon into the equivalent decimal fraction. Time intervals calculated from differences of Julian Dates specified in non-uniform time scales, such as UTC, may need to be corrected for changes in time scales (e.g. leap seconds).
Variants
Because the starting point or reference epoch is so long ago, numbers in the Julian day can be quite large and cumbersome. A more recent starting point is sometimes used, for instance by dropping the leading digits, in order to fit into limited computer memory with an adequate amount of precision. Several such variants, each with its own epoch (the point in time used to set its zero), are described below; dates are Gregorian calendar dates unless otherwise specified, and times are Universal Time.
The Modified Julian Date (MJD) was introduced by the Smithsonian Astrophysical Observatory in 1957 to record the orbit of Sputnik via an IBM 704 (a 36-bit machine); it needs only 18 bits to represent dates until August 7, 2576. The MJD epoch is also that of VAX/VMS and its successor OpenVMS, whose 63-bit date/time format allows times to be stored up to July 31, 31086, 02:48:05.47. The MJD has a starting point of midnight on November 17, 1858, and is computed by MJD = JD − 2400000.5
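A small sketch of the offset arithmetic, using the MJD definition above. The Dublin Julian Date offset (2415020.0, the JD of noon UT on December 31, 1899, per the epoch stated below) is derived here and is not quoted in the text, so treat it as an assumption to verify.

```python
MJD_OFFSET = 2400000.5   # MJD = JD - 2400000.5, as defined above
DJD_OFFSET = 2415020.0   # derived: JD of the Dublin JD epoch (noon, Dec 31, 1899)

def jd_to_mjd(jd):
    return jd - MJD_OFFSET

def jd_to_djd(jd):
    return jd - DJD_OFFSET

assert jd_to_mjd(2451545.0) == 51544.5   # noon UT, January 1, 2000
assert jd_to_djd(2415020.0) == 0.0       # the DJD epoch itself
```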
The Truncated Julian Day (TJD) was introduced by NASA/Goddard in 1979 as part of a parallel grouped binary time code (PB-5) "designed specifically, although not exclusively, for spacecraft applications". TJD was a 4-digit day count from MJD 40000, which was May 24, 1968, represented as a 14-bit binary number. Since this code was limited to four digits, TJD recycled to zero on MJD 50000, or October 10, 1995, "which gives a long ambiguity period of 27.4 years". (NASA codes PB-1–PB-4 used a 3-digit day-of-year count.) Only whole days are represented. Time of day is expressed by a count of seconds of a day, plus optional milliseconds, microseconds and nanoseconds in separate fields. Later PB-5J was introduced which increased the TJD field to 16 bits, allowing values up to 65535, which will occur in the year 2147. There are five digits recorded after TJD 9999.
The Dublin Julian Date (DJD) is the number of days that has elapsed since the epoch of the solar and lunar ephemerides used from 1900 through 1983, Newcomb's Tables of the Sun and Ernest W. Brown's Tables of the Motion of the Moon (1919). This epoch was noon UT on January 0, 1900, which is the same as noon UT on December 31, 1899. The DJD was defined by the International Astronomical Union at their meeting in Dublin, Ireland, in 1955.
The Lilian day number is a count of days of the Gregorian calendar and not defined relative to the Julian Date. It is an integer applied to a whole day; day 1 was October 15, 1582, which was the day the Gregorian calendar went into effect. The original paper defining it makes no mention of the time zone, and no mention of time-of-day. It was named for Aloysius Lilius, the principal author of the Gregorian calendar.
Rata Die is a system used in Rexx, Go and Python. Some implementations or options use Universal Time, others use local time. Day 1 is January 1, 1, that is, the first day of the Christian or Common Era in the proleptic Gregorian calendar. In Rexx, January 1 is Day 0.
The Heliocentric Julian Day (HJD) is the same as the Julian day, but adjusted to the frame of reference of the Sun, and thus can differ from the Julian day by as much as 8.3 minutes (498 seconds), that being the time it takes light to reach Earth from the Sun.
History
Julian Period
The Julian day number is based on the Julian Period proposed by Joseph Scaliger, a classical scholar, in 1583 (one year after the Gregorian calendar reform), as it is the product of three calendar cycles used with the Julian calendar: the 28-year solar cycle, the 19-year lunar (Metonic) cycle, and the 15-year indiction cycle, with 28 × 19 × 15 = 7980 years.
Its epoch occurs when all three cycles (if they are continued backward far enough) were in their first year together. Years of the Julian Period are counted from this year, 4713 BC, as year 1, which was chosen to be before any historical record.
Scaliger corrected chronology by assigning each year a tricyclic "character", three numbers indicating that year's position in the 28-year solar cycle, the 19-year lunar cycle, and the 15-year indiction cycle. One or more of these numbers often appeared in the historical record alongside other pertinent facts without any mention of the Julian calendar year. The character of every year in the historical record was unique – it could only belong to one year in the 7980-year Julian Period. Scaliger determined that 1 BC or year 0 was Julian Period (JP) 4713. He knew that 1 BC or year 0 had the character 9 of the solar cycle, 1 of the lunar cycle, and 3 of the indiction cycle. By inspecting a 532-year Paschal cycle with 19 solar cycles (each of 28 years, each year numbered 1–28) and 28 lunar cycles (each of 19 years, each year numbered 1–19), he determined that the first two numbers, 9 and 1, occurred at its year 457. He then calculated via remainder division that he needed to add eight 532-year Paschal cycles totaling 4256 years before the cycle containing 1 BC or year 0 in order for its year 457 to be indiction 3. The sum was thus JP 4713.
A formula for determining the year of the Julian Period given its character involving three four-digit numbers was published by Jacques de Billy in 1665 in the Philosophical Transactions of the Royal Society (its first year). John F. W. Herschel gave the same formula using slightly different wording in his 1849 Outlines of Astronomy.
Carl Friedrich Gauss introduced the modulo operation in 1801, restating de Billy's formula as:

year of the Julian Period = mod(6916a + 4200b + 4845c, 7980)
where a is the year of the indiction cycle, b of the lunar cycle, and c of the solar cycle.
John Collins described the details of how these three numbers were calculated in 1666, using many trials. A summary of Collins's description is in a footnote.
Reese, Everett and Craun reduced the dividends in the Try column from 285, 420, 532 to 5, 2, 7 and changed remainder to modulo, but apparently still required many trials.
The specific cycles used by Scaliger to form his tricyclic Julian Period were, first, the indiction cycle with a first year of 313. Then he chose the dominant 19-year Alexandrian lunar cycle with a first year of 285, the Era of Martyrs and the Diocletian Era epoch, or a first year of 532 according to Dionysius Exiguus. Finally, Scaliger chose the post-Bedan solar cycle with a first year of 776, when its first quadrennium of concurrents, 1 2 3 4, began in sequence. Although not their intended use, the equations of de Billy or Gauss can be used to determine the first year of any 15-, 19-, and 28-year tricyclic period given any first years of their cycles. For those of the Julian Period, the result is AD 3268, because both remainder and modulo usually return the lowest positive result. Thus 7980 years must be subtracted from it to yield the first year of the present Julian Period, −4712 or 4713 BC, when all three of its sub-cycles are in their first years.
Scaliger got the idea of using a tricyclic period from "the Greeks of Constantinople" as Herschel stated in his quotation below in Julian day numbers. Specifically, the monk and priest Georgios wrote in 638/39 that the Byzantine year 6149 AM (640/41) had indiction 14, lunar cycle 12, and solar cycle 17, which places the first year of the Byzantine Era in 5509/08 BC, the Byzantine Creation. Dionysius Exiguus called the Byzantine lunar cycle his "lunar cycle" in argumentum 6, in contrast with the Alexandrian lunar cycle which he called his "nineteen-year cycle" in argumentum 5.
Although many references say that the Julian in "Julian Period" refers to Scaliger's father, Julius Scaliger, at the beginning of Book V of his Opus de Emendatione Temporum ("Work on the Emendation of Time") he states, as Reese, Everett and Craun translate it, "We have termed it Julian because it fits the Julian year". Thus Julian refers to the Julian calendar.
Julian day numbers
Julian days were first used by Ludwig Ideler for the first days of the Nabonassar and Christian eras in his 1825 Handbuch der mathematischen und technischen Chronologie. John F. W. Herschel then developed them for astronomical use in his 1849 Outlines of Astronomy, after acknowledging that Ideler was his guide.
At least one mathematical astronomer adopted Herschel's "days of the Julian period" immediately. Benjamin Peirce of Harvard University used over 2,800 Julian days in his Tables of the Moon, begun in 1849 but not published until 1853, to calculate the lunar ephemerides in the new American Ephemeris and Nautical Almanac from 1855 to 1888. The days are specified for "Washington mean noon", with Greenwich defined as 282°57′ west of Washington (that is, Washington at 77°3′ west of Greenwich). A table with 197 Julian days ("Date in Mean Solar Days", one per century mostly) was included for the years –4713 to 2000 with no year 0, thus "–" means BC, including decimal fractions for hours, minutes, and seconds. The same table appears in Tables of Mercury by Joseph Winlock, without any other Julian days.
The national ephemerides started to include a multi-year table of Julian days, under various names, for either every year or every leap year beginning with the French Connaissance des Temps in 1870 for 2,620 years, increasing in 1899 to 3,000 years. The British Nautical Almanac began in 1879 with 2,000 years. The Berliner Astronomisches Jahrbuch began in 1899 with 2,000 years. The American Ephemeris was the last to add a multi-year table, in 1925 with 2,000 years. However, it was the first to include any mention of Julian days with one for the year of issue beginning in 1855, as well as later scattered sections with many days in the year of issue. It was also the first to use the name "Julian day number" in 1918. The Nautical Almanac began in 1866 to include a Julian day for every day in the year of issue. The Connaissance des Temps began in 1871 to include a Julian day for every day in the year of issue.
The French mathematician and astronomer Pierre-Simon Laplace first expressed the time of day as a decimal fraction added to calendar dates in his book Traité de Mécanique Céleste, in 1823. Other astronomers added fractions of the day to the Julian day number to create Julian Dates, which are typically used by astronomers to date astronomical observations, thus eliminating the complications resulting from using standard calendar periods like eras, years, or months. They were first introduced into variable star work in 1860 by the English astronomer Norman Pogson, which he stated was at the suggestion of John Herschel. They were popularized for variable stars by Edward Charles Pickering, of the Harvard College Observatory, in 1890.
Julian days begin at noon because when Herschel recommended them, the astronomical day began at noon. The astronomical day had begun at noon ever since Ptolemy chose to begin the days for his astronomical observations at noon. He chose noon because the transit of the Sun across the observer's meridian occurs at the same apparent time every day of the year, unlike sunrise or sunset, which vary by several hours. Midnight was not even considered because it could not be accurately determined using water clocks. Nevertheless, he double-dated most nighttime observations with both Egyptian days beginning at sunrise and Babylonian days beginning at sunset. Medieval Muslim astronomers used days beginning at sunset, so astronomical days beginning at noon did produce a single date for an entire night. Later medieval European astronomers used Roman days beginning at midnight, so astronomical days beginning at noon also allowed observations during an entire night to use a single date. When all astronomers decided to start their astronomical days at midnight to conform to the beginning of the civil day, on January 1, 1925, it was decided to keep Julian days continuous with previous practice, beginning at noon.
During this period, usage of Julian day numbers as a neutral intermediary when converting a date in one calendar into a date in another calendar also occurred. An isolated use was by Ebenezer Burgess in his 1860 translation of the Surya Siddhanta wherein he stated that the beginning of the Kali Yuga era occurred at midnight at the meridian of Ujjain at the end of the 588,465th day and the beginning of the 588,466th day (civil reckoning) of the Julian Period, or between February 17 and 18 of JP 1612 (3102 BC). Robert Schram was notable beginning with his 1882 Hilfstafeln für Chronologie. Here he used about 5,370 "days of the Julian Period". He greatly expanded his usage of Julian days in his 1908 Kalendariographische und Chronologische Tafeln containing over 530,000 Julian days, one for the zeroth day of every month over thousands of years in many calendars. He included over 25,000 negative Julian days, given in a positive form by adding 10,000,000 to each. He called them "day of the Julian Period", "Julian day", or simply "day" in his discussion, but no name was used in the tables. Continuing this tradition, in his book "Mapping Time: The Calendar and Its History" British physics educator and programmer Edward Graham Richards uses Julian day numbers to convert dates from one calendar into another using algorithms rather than tables.
Julian day number calculation
The Julian day number can be calculated using the following formulas (integer division rounding towards zero is used exclusively, that is, positive values are rounded down and negative values are rounded up):
The months January to December are numbered 1 to 12. For the year, astronomical year numbering is used, thus 1 BC is 0, 2 BC is −1, and 4713 BC is −4712. JDN is the Julian Day Number. Use the previous day of the month if trying to find the JDN of an instant before midday UT.
Converting Gregorian calendar date to Julian Day Number
The algorithm is valid for all (possibly proleptic) Gregorian calendar dates after November 23, −4713. Divisions are integer divisions towards zero; fractional parts are ignored:

JDN = (1461 × (Y + 4800 + (M − 14)/12))/4 + (367 × (M − 2 − 12 × ((M − 14)/12)))/12 − (3 × ((Y + 4900 + (M − 14)/12)/100))/4 + D − 32075
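A direct transcription into Python, for illustration. Note that Python's // operator floors (rounds toward negative infinity), so a small helper is needed to reproduce the divisions-toward-zero required above:

```python
def div(a, b):
    """Integer division truncating toward zero."""
    q = a // b
    if q < 0 and q * b != a:
        q += 1
    return q

def gregorian_to_jdn(year, month, day):
    a = div(month - 14, 12)
    return (div(1461 * (year + 4800 + a), 4)
            + div(367 * (month - 2 - 12 * a), 12)
            - div(3 * div(year + 4900 + a, 100), 4)
            + day - 32075)

assert gregorian_to_jdn(2000, 1, 1) == 2451545  # noon UT, January 1, 2000
```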
Converting Julian calendar date to Julian Day Number
The algorithm is valid for all (possibly proleptic) Julian calendar years ≥ −4712, that is, for all JDN ≥ 0. Divisions are integer divisions, fractional parts are ignored:

JDN = 367 × Y − (7 × (Y + 5001 + (M − 9)/7))/4 + (275 × M)/9 + D + 1729777
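The Julian-calendar formula transcribes the same way; a short Python sketch with its own truncating division:

```python
def julian_to_jdn(year, month, day):
    # div(a, b): integer division truncating toward zero, as above
    div = lambda a, b: -(-a // b) if (a < 0) != (b < 0) else a // b
    return (367 * year
            - div(7 * (year + 5001 + div(month - 9, 7)), 4)
            + div(275 * month, 9)
            + day + 1729777)

# Julian-calendar January 1, 2000 fell 13 days after Gregorian January 1, 2000:
assert julian_to_jdn(2000, 1, 1) == 2451558
```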
Finding Julian date given Julian day number and time of day
For the full Julian Date of a moment after 12:00 UT one can use the following. Divisions are real numbers:

JD = JDN + (hour − 12)/24 + minute/1440 + second/86400
So, for example, January 1, 2000, at 18:00:00 UT corresponds to JD = 2451545.25 and January 1, 2000, at 6:00:00 UT corresponds to JD = 2451544.75.
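The fraction is simple enough to verify against these two examples; a minimal Python sketch:

```python
def julian_date(jdn, hour, minute=0, second=0.0):
    """Full Julian Date from a Julian day number and UT clock time."""
    return jdn + (hour - 12) / 24 + minute / 1440 + second / 86400

assert julian_date(2451545, 18) == 2451545.25  # January 1, 2000, 18:00 UT
assert julian_date(2451545, 6) == 2451544.75   # January 1, 2000, 06:00 UT
```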
Finding day of week given Julian day number
Because a Julian day starts at noon while a civil day starts at midnight, the Julian day number needs to be adjusted to find the day of week: for a point in time in a given Julian day after midnight UT and before 12:00 UT, add 1 or use the JDN of the next afternoon.
The US day of the week W1 (for an afternoon or evening UT) can be determined from the Julian Day Number J with the expression:

W1 = mod(J + 1, 7), with 0 = Sunday, 1 = Monday, …, 6 = Saturday.
If the moment in time is after midnight UT (and before 12:00 UT), then one is already in the next day of the week.
The ISO day of the week W0 can be determined from the Julian Day Number J with the expression:

W0 = mod(J, 7) + 1, with 1 = Monday, …, 7 = Sunday.
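Both weekday rules follow directly; a quick check in Python:

```python
def us_weekday(jdn):
    """US convention: 0 = Sunday, 1 = Monday, ..., 6 = Saturday."""
    return (jdn + 1) % 7

def iso_weekday(jdn):
    """ISO convention: 1 = Monday, ..., 7 = Sunday."""
    return jdn % 7 + 1

# JDN 2451545 is January 1, 2000, which was a Saturday:
assert us_weekday(2451545) == 6
assert iso_weekday(2451545) == 6
```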
Julian or Gregorian calendar from Julian day number
This is an algorithm by Edward Graham Richards to convert a Julian Day Number, J, to a date in the Gregorian calendar (proleptic, when applicable). Richards states the algorithm is valid for Julian day numbers greater than or equal to 0. All variables are integer values; the notation "a div b" indicates integer division, and "mod(a, b)" denotes the modulus operator. The algorithm uses the constants y = 4716, j = 1401, m = 2, n = 12, r = 4, p = 1461, v = 3, u = 5, s = 153, w = 2, B = 274277, and C = −38.
For Julian calendar:
f = J + j
For Gregorian calendar:
f = J + j + (((4 × J + B) div 146097) × 3) div 4 + C
For Julian or Gregorian, continue:

e = r × f + v
g = mod(e, p) div r
h = u × g + w
D = (mod(h, s)) div u + 1
M = mod(h div s + m, n) + 1
Y = (e div p) − y + (n + m − M) div n

D, M, and Y are the numbers of the day, month, and year respectively for the afternoon at the beginning of the given Julian day.
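The steps translate directly into Python. This is a sketch based on the commonly published form of Richards' algorithm; since the original table of constants was lost here, treat the values as reconstructed rather than quoted:

```python
def jdn_to_calendar(J, gregorian=True):
    """Convert a Julian Day Number J >= 0 to a (year, month, day) tuple."""
    y, j, m, n, r, p = 4716, 1401, 2, 12, 4, 1461
    v, u, s, w, B, C = 3, 5, 153, 2, 274277, -38
    f = J + j
    if gregorian:
        f += (((4 * J + B) // 146097) * 3) // 4 + C
    e = r * f + v
    g = (e % p) // r
    h = u * g + w
    D = (h % s) // u + 1
    M = (h // s + m) % n + 1
    Y = e // p - y + (n + m - M) // n
    return Y, M, D

assert jdn_to_calendar(2451545) == (2000, 1, 1)                      # Gregorian
assert jdn_to_calendar(2451545, gregorian=False) == (1999, 12, 19)   # Julian
```

For J ≥ 0 every intermediate value is non-negative, so Python's floor division and modulus coincide with the div and mod used above.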
Julian Period from indiction, Metonic and solar cycles
Let Y be the year BC or AD and i, m, and s respectively its positions in the indiction, Metonic and solar cycles. Divide 6916i + 4200m + 4845s by 7980 and call the remainder r. The year of the Julian Period is then r, so the year is r − 4713 AD if r > 4713, or 4714 − r BC if r ≤ 4713.
Example
i = 8, m = 2, s = 8. What is the year? 6916 × 8 + 4200 × 2 + 4845 × 8 = 55328 + 8400 + 38760 = 102488. Since 102488 = 12 × 7980 + 6728, the remainder is r = 6728, and 6728 − 4713 = 2015, so the year is AD 2015.
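The same computation as a short Python check:

```python
def julian_period_year(i, m, s):
    """Year from positions in the indiction (i), Metonic (m) and solar (s) cycles."""
    r = (6916 * i + 4200 * m + 4845 * s) % 7980
    return (r - 4713, "AD") if r > 4713 else (4714 - r, "BC")

assert julian_period_year(8, 2, 8) == (2015, "AD")
```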
Julian date calculation
As stated above, the Julian date (JD) of any instant is the Julian day number for the preceding noon in Universal Time plus the fraction of the day since that instant. Ordinarily calculating the fractional portion of the JD is straightforward; the number of seconds that have elapsed in the day divided by the number of seconds in a day, 86,400. But if the UTC timescale is being used, a day containing a positive leap second contains 86,401 seconds (or in the unlikely event of a negative leap second, 86,399 seconds). One authoritative source, the Standards of Fundamental Astronomy (SOFA), deals with this issue by treating days containing a leap second as having a different length (86,401 or 86,399 seconds, as required). SOFA refers to the result of such a calculation as "quasi-JD".
See also
5th millennium BC
Barycentric Julian Date
Dual dating
Decimal time
Epoch (astronomy)
Epoch (reference date)
Era
J2000 – the epoch that starts on JD 2451545.0 (TT), the standard epoch used in astronomy since 1984
Julian year (calendar)
Lunation Number (similar concept)
Ordinal date
Time
Time standards
Zeller's congruence
Notes
References
Sources
Alsted, Johann Heinrich (1649) [1630]. Encyclopaedia, Tome 4, p. 122.
American Ephemeris and Nautical Almanac, Washington, 1855–1980, Hathi Trust
Astronomical almanac for the year 2001. (2000). U.S. Nautical Almanac Office and Her Majesty's Nautical Almanac Office. .
Astronomical almanac for the year 2017. (2016). U.S. Naval Observatory and Her Majesty's Nautical Almanac Office. .
Astronomical Almanac Online . (2016). U.S. Nautical Almanac Office and Her Majesty's Nautical Almanac Office.
Bede: The Reckoning of Time, tr. Faith Wallis, 725/1999, pp. 392–404. Also Appendix 2 (Beda Venerabilis' Paschal table).
Blackburn, Bonnie; Holford-Strevens, Leofranc. (1999) The Oxford Companion to the Year, Oxford University Press, .
Burgess, Ebenezer, translator. 1860. Translation of the Surya Siddhanta. Journal of the American Oriental Society 6 (1858–1860) 141–498, p. 161.
Berliner astronomisches Jahrbuch, Berlin, 1776–1922, Hathi Trust
Chi, A. R. (December 1979). "A Grouped Binary Time Code for Telemetry and Space Application" (NASA Technical Memorandum 80606). Retrieved from NASA Technical Reports Server April 24, 2015.
Collins, John (1666–1667). "A method for finding the number of the Julian Period for any year assign'd", Philosophical Transactions of the Royal Society, series 1665–1678, volume 2, pp. 568–575.
Connaissance des Temps 1689–1922, Hathi Trust table of contents at end of book
Chronicon Paschale 284–628 AD, tr. Michael Whitby, Mary Whitby, 1989, p. 10, .
"CS 1063 Introduction to Programming: Explanation of Julian Day Number Calculation." (2011). Computer Science Department, University of Texas at San Antonio.
"De argumentis lunæ libellus" in Patrologia Latina, 90: 701–728, col. 705D (in Latin).
de Billy (1665–1666). "A problem for finding the year of the Julian Period by a new and very easie method", Philosophical Transactions of the Royal Society, series 1665–1678, volume 1, p. 324.
Leo Depuydt, "AD 297 as the first indiction cycle",The bulletin of the American Society of Papyrologists, 24 (1987), 137–139.
Dershowitz, N. & Reingold, E. M. (2008). Calendrical Calculations 3rd ed. Cambridge University Press. .
Franz Diekamp, "Der Mönch und Presbyter Georgios, ein unbekannter Schriftsteller des 7. Jahrhunderts", Byzantinische Zeitschrift 9 (1900) 14–51 (in German and Greek).
Digital Equipment Corporation. Why is Wednesday, November 17, 1858, the base time for VAX/VMS? Modified Julian Day explanation
Dionysius Exiguus, 1863 [525], Cyclus Decemnovennalis Dionysii, Patrologia Latina vol. 67, cols. 493–508 (in Latin).
Dionysius Exiguus, 2003 [525], tr. Michael Deckers, Nineteen year cycle of Dionysius, Argumentum 5 (in Latin and English).
Explanatory Supplement to the Astronomical Ephemeris and the American Ephemeris and Nautical Almanac, Her Majesty's Stationery Office, 1961, pp. 21, 71, 97, 100, 264, 351, 365, 376, 386–389, 392, 431, 437–441, 489.
Fliegel, Henry F. & Van Flandern, Thomas C. (October 1968). "A machine algorithm for processing calendar dates". Communications of the Association for Computing Machinery Vol. 11, No. 10, p. 657.
Furness, Caroline Ellen (1915). An introduction to the study of variable stars. Boston: Houghton-Mifflin. Vassar Semi-Centennial Series.
Gauss, Carl Frederich (1966). Clarke, Arthur A., translator. Disquisitiones Arithmeticae. Article 36. pp. 16–17. Yale University Press.
Gauss, Carl Frederich (1801). Disquisitiones Arithmeticae. Article 36. pp. 25–26.
Grafton, Anthony T. (May 1975) "Joseph Scaliger and historical chronology: The rise and fall of a discipline", History and Theory 14/2 pp. 156–185.
Grafton, Anthony T. (1994) Joseph Scaliger: A Study in the History of Classical Scholarship. Volume II: Historical Chronology (Oxford-Warburg Studies).
Venance Grumel, La chronologie, 1958, 31–55 (in French).
Heath, B. (1760). Astronomia accurata; or the royal astronomer and navigator. London: author. Google Books version.
Herschel, John F. W. (1849). Outlines of Astronomy. Herschel's words remained the same in all editions, even while the page varied.
Hopkins, Jeffrey L. (2013). Using Commercial Amateur Astronomical Spectrographs, p. 257, Springer Science & Business Media,
HORIZONS System. (April 4, 2013). NASA.
Ideler, Ludwig. Handbuch der mathematischen und technischen Chronologie, vol. 1, 1825, pp. 102–106 (in German).
IBM 2004. "CEEDATEconvert Lilian date to character format". COBOL for AIX (2.0): Programming Guide.
Information Bulletin No. 81. (January 1998). International Astronomical Union.
"Julian Date". (n.d.). Defit's Definitions of Information Technology Terms. Brainsoft.
Julian Date Converter (March 20, 2013). US Naval Observatory. Retrieved September 16, 2013.
Kempler, Steve. (2011). Day of Year Calendar. Goddard Earth Sciences Data and Information Services Center.
Laplace (1823). Traité de Mécanique Céleste vol. 5 p. 348 (in French)
McCarthy, D. & Guinot, B. (2013). Time. In S. E. Urban & P. K. Seidelmann, eds. Explanatory Supplement to the Astronomical Almanac, 3rd ed. (pp. 76–104). Mill Valley, Calif.: University Science Books.
Meeus Jean. Astronomical Algorithms (1998), 2nd ed,
Moyer, Gordon. (April 1981). "The Origin of the Julian Day System", Sky and Telescope 61 311−313.
Nautical Almanac and Astronomical Ephemeris, London, 1767–1923, Hathi Trust
Otto Neugebauer, Ethiopic Astronomy and Computus, Red Sea Press, 2016, pp. 22, 93, 111, 183, . Page references in text, footnotes, and index are six greater than the page numbers in this edition.
Noerdlinger, P. (April 1995 revised May 1996). Metadata Issues in the EOSDIS Science Data Processing Tools for Time Transformations and Geolocation. NASA Goddard Space Flight Center.
Nothaft, C. Philipp E., Scandalous Error: Calendar Reform and Calendrical Astronomy in Medieval Europe, Oxford University Press, 2018, pp. 57–58, .
Ohms, B. G. (1986). Computer processing of dates outside the twentieth century. IBM Systems Journal 25, 244–251. doi:10.1147/sj.252.0244
Pallé, Pere L., Esteban, Cesar. (2014). Asteroseismology, p. 185, Cambridge University Press,
Ransom, D. H. Jr. () ASTROCLK Astronomical Clock and Celestial Tracking Program pp. 69–143, "Dates and the Gregorian calendar" pp. 106–111. Retrieved September 10, 2009.
Reese, Ronald Lane; Everett, Steven M.; Craun, Edwin D. (1981). "The origin of the Julian Period: An application of congruences and the Chinese Remainder Theorem", American Journal of Physics, Vol. 49, pp. 658–661.
"Resolution B1". (1997). XXIIIrd General Assembly (Kyoto, Japan). International Astronomical Union, p. 7.
Richards, E. G. (2013). Calendars. In S. E. Urban & P. K. Seidelmann, eds. Explanatory Supplement to the Astronomical Almanac, 3rd ed. (pp. 585–624). Mill Valley, Calif.: University Science Books.
Richards, E. G. (1998). Mapping Time: The Calendar and its History. Oxford University Press.
"SDP Toolkit Time Notes". (July 21, 2014). In SDP Toolkit / HDF-EOS. NASA.
Seidelmann, P. Kenneth (ed.) (1992). Explanatory Supplement to the Astronomical Almanac pp. 55, 603–606. University Science Books, .
Seidelmann, P. Kenneth. (2013). "Introduction to Positional Astronomy" in Sean Urban and P. Kenneth Seidelmann (eds.) Explanatory Supplement to the Astronomical Almanac (3rd ed.) pp. 1–44. Mill Valley, CA: University Science Books.
"SOFA Time Scale and Calendar Tools". (June 14, 2016). International Astronomical Union.
Theveny, Pierre-Michel. (September 10, 2001). "Date Format" The TPtime Handbook. Media Lab.
Tøndering, Claus. (2014). "The Julian Period" in Frequently Asked Questions about Calendars. author.
USDA. (). Julian date calendar.
US Naval Observatory. (2005, last updated July 2, 2011). Multiyear Interactive Computer Almanac 1800–2050 (ver. 2.2.2). Richmond VA: Willmann-Bell.
Winkler, M. R. (n. d.). "Modified Julian Date". US Naval Observatory. Retrieved April 24, 2015.
External links
Calendar algorithms
Calendaring standards
Celestial mechanics
Chronology
Time in astronomy | Julian day | Physics,Astronomy | 6,922 |
41,626,159 | https://en.wikipedia.org/wiki/%CE%93-Tocotrienol | γ-Tocotrienol is one of the four types of tocotrienol, a type of vitamin E.
Vitamin E exists in nature in eight forms, each of which consists of a head section joined to either a saturated (phytyl) or an unsaturated (farnesyl) tail. The four compounds with the saturated tails are the tocopherols, and the four compounds with the unsaturated tails are the tocotrienols. There are four unique chromanol head sections, distinguished by one of four substitution patterns and designated as α, β, γ, or δ. The alpha- forms are distinguished by their three ring methyl groups and the delta- forms by their single ring methyl group. The beta- and gamma- forms both have two methyl groups, although at different ring positions (5,8-dimethyl and 7,8-dimethyl, respectively), making the beta/gamma tocotrienols, like the beta/gamma tocopherols, pairs of positional isomers.
See also
Vitamin E
Tocopherol
Tocotrienol
Antioxidants
α-Tocotrienol
β-Tocotrienol
δ-Tocotrienol
References
Vitamin E | Γ-Tocotrienol | Chemistry | 256 |
49,235,577 | https://en.wikipedia.org/wiki/2MASS%20J2126%E2%80%938140 | 2MASS J21265040−8140293, also known as 2MASS J2126−8140, is an exoplanet orbiting the red dwarf TYC 9486-927-1, 111.4 light-years away from Earth. Its estimated mass, age (10-45 million years), spectral type (L3), and Teff (1800 K) are similar to the well-studied planet β Pictoris b. With an estimated distance of around 1 trillion kilometres from the host star, this is one of the largest solar systems ever found.
See also
COCONUTS-2b
Gliese 900
References
J21265040−8140293
Exoplanets detected by direct imaging
Giant planets
Exoplanets discovered in 2009
Octans | 2MASS J2126–8140 | Astronomy | 169 |
4,440,211 | https://en.wikipedia.org/wiki/International%20Display%20Technology | International Display Technology (IDTech) was a partnership between Taiwan's Chi Mei Corporation and IBM Japan. Its manufacturing factory was sold to Sony in 2005. The headquarters was renamed to the current name, CMO Japan Co., Ltd. in 2006. It manufactured the IBM T220/T221 LCD monitors, among other products.
External links
Official Website (site already down)
SONY TO ACQUIRE IDTECH'S YASU LCD MANUFACTURING FACILITY Acquisition Will Serve As Second Manufacturing Base of Low-Temperature Polysilicon TFT LCD Display Panel for Mobile-Products
Electronics companies of Taiwan | International Display Technology | Technology | 119 |
11,531,972 | https://en.wikipedia.org/wiki/Ascochyta%20sorghi | Ascochyta sorghi is a fungal plant pathogen. It causes Ascochyta leaf spot (also known as rough leaf spot) on barley that can also be caused by the related fungi Ascochyta hordei, Ascochyta graminea and Ascochyta tritici. It is considered a minor disease of barley.
Hosts and symptoms
Ascochyta sorghi infects grain crops such as sorghum (Sorghum bicolor), Johnson grass (Sorghum halepense), Sudan grass (Sorghum sudanense), and barley (Hordeum vulgare). It can also infect wild sorghum species.
Symptoms of rough leaf spot can appear on leaf blades, leaf sheaths, peduncles, stalks, and glumes of susceptible species. On sorghum, symptoms are usually noted on leaf blades beginning as small red lesions. Lesions expand over time, becoming broadly-elliptical up to one inch in length. Spots usually develop a tan interior bordered by a dark red to purple color, but can remain a uniform dark color. The presence of black pycnidia exposed on the surface of the lesions give the leaf a rough, sandpapery feeling, hence the name "rough leaf spot". Rough leaf spot can eventually lead to leaf senescence.
Management
Ascochyta sorghi is controlled through host plant resistance, cultural practices, and chemical application when necessary. Varieties of sorghum are not generally susceptible to rough leaf spot, although exceptions do exist. Cultural practices include crop rotation, deep plowing, and avoiding field operations when leaf surfaces are wet. As Ascochyta sorghi survives in plant debris and pycnidia in the soil, crop rotation and deep plowing allow for the avoidance of potential inoculum sources. Other sanitation, such as using clean seed and removing alternate hosts, such as wild sorghum species, can reduce disease incidence. Spores are spread from water splash, and can also be transmitted through contact with field equipment, especially when leaves are wet, so delaying field operations until plants are dry can help prevent spread of the pathogen. If necessary, the application of fungicides can help limit disease severity.
Importance
Ascochyta sorghi is found in all sorghum growing areas. Throughout most areas of the Americas, Asia, Africa, and Europe, Ascochyta sorghi causes little crop loss and is considered to have a very low overall impact on sorghum production. The lack of economic importance of rough leaf spot is thought to be due to the prevalence of resistant varieties. However, there are a few areas where it may be more prevalent. Some states in India, such as Madhya Pradesh, have seen severe outbreaks of A. sorghi, where it has the potential to become epidemic. Weimer et al. (1937) reported that A. sorghi had the capacity to become damaging in Georgia. Historically, rough spot has been responsible for crop losses between 3 and 10% in French Equatorial Africa.
See also
List of Ascochyta species
References
Fungal plant pathogens and diseases
Barley diseases
sorghi
Fungi described in 1878
Fungus species | Ascochyta sorghi | Biology | 655 |
61,814,067 | https://en.wikipedia.org/wiki/David%20Robertson%20%28engineer%29 | David Robertson (1875 – 1941) was the first Professor of Electrical Engineering at Bristol University. Robertson had wide interests and one of these was horology – he wanted to provide the foundation of what we could call “horological engineering”, that is, a firm science-based approach to the design of accurate mechanical clocks. He contributed a long series on the scientific foundations of precision clocks to the Horological Journal which was the main publication for the trade in the UK; he and his students undertook research on clocks and pendulums (some funded by the Society of Merchant Venturers); and he designed at least one notable clock, to keep University time and control the chiming of Great George in the Wills Memorial Building from its inauguration on 1925, for which he also designed the chiming mechanism.
Today, we get accurate time from atomic clock ensembles in observatories round the world, compared and distributed by GPS satellites and over the internet, and displayed on almost any public or personal screen. Accurate time has become ubiquitous and its maintenance a branch of information and communications technology. A century ago none of this existed, and the world depended on the pendulum clock to keep its time, referenced to astronomical observations. There was a scientific literature on the behaviour of pendulums and clocks; and a widespread craft-based industry making timepieces; but it could not be said that horology was a branch of engineering.
Robertson became Professor of Electrical Engineering in Merchant Venturer’s Technical College in 1902. MVTC merged with University College Bristol when the latter was granted a Royal Charter in 1909 and became the engineering faculty of the new University of Bristol – Robertson then became the first professor of the subject in the faculty. He served in this post until his death in 1941. Clock-wise, the Shortt Synchronome Free Pendulum clock entered service at the Royal Observatory in 1923 and kept Greenwich, and therefore the nation’s, time until supplanted by quartz clocks in the 1940s. Throughout Robertson’s career therefore, pendulum time was paramount. Suppliers such as the Synchronome Company or Gents of Leicester could by 1925 have supplied perfectly satisfactory and well-proven systems to run the bell and slave clocks throughout the building. The fact that the University chose to commission a unique and original design is a tribute perhaps to its pride in the new building and to its distinguished Professor, who was able to put into practice the principles that he had developed.
The Robertson Clock
Originally mounted in an interior foyer of the Wills Memorial Building, Robertson's clock is housed in an oak case 1753 x 837 x 310 mm (h/w/d), originally carried on stout oak “dogs” let into the masonry of an internal wall. The case was also secured to the wall through its back, but does not support any of the mechanisms, which are separately mounted through the case back into the wall using studs. The opening front door is fully glazed. In its new home in Queen’s Building the original studs are re-mounted on to a large steel plate, firmly screwed to the reinforced concrete wall.
At the top of the case a clock dial displays hours and minutes as kept by the pendulum. The dial is a standard Gents slave clock movement which is advanced by a pulse every 30s, counted down from seconds pulses generated by the pendulum. Additional circuits in the clock once generated other half-minute pulses that controlled 3 strings of similar slave clocks throughout the building.
Right down the centre of the case is the pendulum, of the order of a metre long and with a period of 2 seconds. It is suspended from a bracket attached to a massive iron casting bolted through to the wall, which also carries the “escapement” mechanism to the right under the face. This drives the pendulum with a small impulse of force every second, generated by the drop of a small weight under the control of an electromagnet. Part of the mechanism includes a 60-tooth ratchet wheel advanced on every pendulum swing by a pawl driven by the electromagnet. Originally this operated a pair of contacts by two pins on its periphery to generate the half-minute pulses, but at some stage these contacts were removed.
To the left of the pendulum is the regulator. This is arranged to apply a small force to the pendulum which through an ingenious linkage effectively works against gravity, slowing the pendulum down. The force comes from a torque generated by a spiral hair-spring, one end being attached to the pivot of a lever that forms part of the escapement linkage, the other to a disk that can be rotated in small steps by a solenoid-operated “stepper motor”. This allows the period of the pendulum to be adjusted by changing the torque, under the control of a system that compares the pendulum phase to a time standard (originally a daily pulse sent out over the telegraph network at 10.00 GMT).
Behind the pendulum and near its top is a standard aneroid barometer, and below that a mercury thermometer. These would have been used when checking the clocks’ rate, which depends on both atmospheric temperature and pressure.
To the left of the pendulum is the Civil Time Unit (CTU). This is essentially a clock that receives a pulse every second from the pendulum and keeps track of local time, GMT or BST depending on the season, to control the pulses sent to Great George to make it chime on the hours, 0700 through 2100 except Sundays. The CTU was driven by its own electromagnet.
On the right is the Greenwich Time Unit (GTU), which essentially kept GMT by counting seconds impulses but also controlled the sequencing of the synchronising system around 10.00 am GMT every day. Again, the GTU had its own electromagnet drive.
Behind the wall to which the clock was mounted there was a Control Box that housed several terminal frames, some relays, and ancillary components, which were connected to contacts on the TUs by wires going through the wall. Most of this has now been lost. The clock and its circuits were powered by a 24 volt lead-acid battery, possibly also housed in this room. This Control Box has also been recovered and will be installed beside the clock case to house support electronics.
References
Horology
British electrical engineers
1875 births
1941 deaths | David Robertson (engineer) | Physics | 1,299 |
51,181,206 | https://en.wikipedia.org/wiki/Timeline%20of%20social%20media | This page is a timeline of social media. Major launches, milestones, and other major events are included.
Overview
Timeline
An asterisk (*) indicates relaunches.
See also
Timeline of Facebook
Timeline of Instagram
Timeline of LinkedIn
Timeline of Pinterest
Timeline of Snapchat
Timeline of Twitter
Timeline of YouTube
References
Social media
Social media | Timeline of social media | Technology | 73 |
37,871,408 | https://en.wikipedia.org/wiki/Foreground%20detection | Foreground detection is one of the major tasks in the field of computer vision and image processing whose aim is to detect changes in image sequences. Background subtraction is any technique which allows an image's foreground to be extracted for further processing (object recognition etc.).
Many applications do not need to know everything about the evolution of movement in a video sequence, but only require the information of changes in the scene, because an image's regions of interest are objects (humans, cars, text etc.) in its foreground. After the stage of image preprocessing (which may include image denoising, post processing like morphology etc.) object localisation is required which may make use of this technique.
Foreground detection separates foreground from background based on these changes taking place in the foreground. It is a set of techniques that typically analyze video sequences recorded in real time with a stationary camera.
Description
All detection techniques are based on modelling the background of the image, i.e., building a model of the background and detecting which changes occur relative to it. Defining the background can be very difficult when it contains shapes, shadows, and moving objects. In defining the background, it is assumed that the stationary objects could vary in color and intensity over time.
Scenarios where these techniques apply tend to be very diverse. There can be highly variable sequences, such as images with very different lighting, interiors, exteriors, quality, and noise. In addition to processing in real time, systems need to be able to adapt to these changes.
A very good foreground detection system should be able to:
Develop a background (estimate) model.
Be robust to lighting changes, repetitive movements (leaves, waves, shadows), and long-term changes.
Background subtraction
Background subtraction is a widely used approach for detecting moving objects in videos from static cameras. The rationale in the approach is that of detecting the moving objects from the difference between the current frame and a reference frame, often called "background image", or "background model". Background subtraction is mostly done if the image in question is a part of a video stream. Background subtraction provides important cues for numerous applications in computer vision, for example surveillance tracking or human pose estimation.
Background subtraction is generally based on a static background hypothesis which is often not applicable in real environments. With indoor scenes, reflections or animated images on screens lead to background changes. Similarly, due to wind, rain or illumination changes brought by weather, static backgrounds methods have difficulties with outdoor scenes.
Temporal average filter
The temporal average filter is a method proposed by Velastin. This system estimates the background model from the median of all pixels of a number of previous images. The system uses a buffer with the pixel values of the last frames to update the median for each image.
To model the background, the system examines all images in a given time period called the training time. During this time, the system accumulates frames and computes, pixel by pixel, the median of all frames observed so far; this median image serves as the background model.
After the training period, each new frame's pixel values are compared with the background value previously calculated. If an input pixel is within a threshold of the background value, the pixel is considered to match the background model and its value is included in the pixel buffer. Otherwise, if the value is outside this threshold, the pixel is classified as foreground and not included in the buffer.
This method is not considered very efficient: it lacks a rigorous statistical basis and requires a buffer of recent pixel values, which carries a high computational and memory cost.
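A minimal NumPy sketch of the median variant. The buffer length and threshold are illustrative assumptions, and for simplicity every frame is appended whole rather than pixel-selectively as described above:

```python
import numpy as np
from collections import deque

BUFFER_LEN = 30   # frames kept in the buffer (assumed value)
THRESHOLD = 25    # intensity difference threshold (assumed value)
buffer = deque(maxlen=BUFFER_LEN)

def process(frame):
    """frame: 2-D uint8 grayscale image; returns (foreground_mask, background)."""
    if not buffer:
        buffer.append(frame)
    background = np.median(np.stack(buffer), axis=0).astype(np.uint8)
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    fg_mask = diff > THRESHOLD
    buffer.append(frame)  # simplification: no per-pixel selectivity
    return fg_mask, background
```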
Conventional approaches
A robust background subtraction algorithm should be able to handle lighting changes, repetitive motions from clutter and long-term scene changes. The following analyses make use of the function V(x, y, t) as a video sequence, where t is the time dimension and x and y are the pixel location variables. e.g. V(1,2,3) is the pixel intensity at (1,2) pixel location of the image at t = 3 in the video sequence.
Using frame differencing
A motion detection algorithm begins with the segmentation step, in which foreground (moving) objects are segmented from the background. The simplest way to implement this is to take an image as background and compare the frames obtained at time t, denoted by I(t), with the background image, denoted by B. Using simple arithmetic, we can segment out the objects with the image-subtraction technique of computer vision: for each pixel in I(t), take the pixel value denoted by P[I(t)] and subtract from it the corresponding pixel at the same position in the background image, denoted by P[B].

In mathematical form, it is written as:

P[F(t)] = P[I(t)] − P[B]
The background is assumed to be the frame at time t. This difference image would only show some intensity for the pixel locations which have changed between the two frames. Though we have seemingly removed the background, this approach will only work for cases where all foreground pixels are moving and all background pixels are static. A threshold "Threshold" is put on this difference image to improve the subtraction (see Image thresholding):

|P[F(t)]| = |P[I(t)] − P[B]| > Threshold

This means that the difference image's pixel intensities are 'thresholded', or filtered, on the basis of the value of Threshold.
The accuracy of this approach is dependent on speed of movement in the scene. Faster movements may require higher thresholds.
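In NumPy the whole scheme is two lines; a sketch (the default threshold value is an illustrative assumption):

```python
import numpy as np

def frame_difference(current, background, threshold=25):
    """Thresholded background subtraction: |I(t) - B| > Threshold.

    current, background: 2-D uint8 grayscale images of the same shape.
    Returns a boolean foreground mask.
    """
    diff = np.abs(current.astype(np.int16) - background.astype(np.int16))
    return diff > threshold
```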
Mean filter
For calculating the image containing only the background, a series of preceding images are averaged. For calculating the background image at the instant t:

B(x, y, t) = (1/N) Σ_{i=1}^{N} V(x, y, t − i)

where N is the number of preceding images taken for averaging. This averaging refers to averaging corresponding pixels in the given images. N would depend on the video speed (number of images per second in the video) and the amount of movement in the video. After calculating the background B(x, y, t) we can then subtract it from the image V(x, y, t) at time t and threshold it. Thus the foreground is:

|V(x, y, t) − B(x, y, t)| > Th

where Th is a threshold value. Similarly, we can also use the median instead of the mean in the above calculation of B(x, y, t).
Usage of global and time-independent thresholds (same Th value for all pixels in the image) may limit the accuracy of the above two approaches.
Running Gaussian average
For this method, Wren et al. propose fitting a Gaussian probabilistic density function (pdf) on the most recent n frames. In order to avoid fitting the pdf from scratch at each new frame time t, a running (or on-line cumulative) average is computed.
The pdf of every pixel is characterized by mean μ_t and variance σ_t². The following is a possible initial condition (assuming that initially every pixel is background):

μ_0 = I_0

where I_t is the value of the pixel's intensity at time t. In order to initialize variance, we can, for example, use the variance in x and y from a small window around each pixel.
Note that background may change over time (e.g. due to illumination changes or non-static background objects). To accommodate that change, at every frame t, every pixel's mean and variance must be updated, as follows:

μ_t = ρ I_t + (1 − ρ) μ_{t−1}
σ_t² = d² ρ + (1 − ρ) σ_{t−1}²
d = |I_t − μ_t|

where ρ determines the size of the temporal window that is used to fit the pdf (usually ρ = 0.01) and d is the Euclidean distance between the mean and the value of the pixel.
We can now classify a pixel as background if its current intensity lies within some confidence interval of its distribution's mean:

|I_t − μ_t| / σ_t > k → pixel is foreground
|I_t − μ_t| / σ_t ≤ k → pixel is background

where the parameter k is a free threshold (usually k = 2.5). A larger value for k allows for more dynamic background, while a smaller k increases the probability of a transition from background to foreground due to more subtle changes.
In a variant of the method, a pixel's distribution is only updated if it is classified as background. This is to prevent newly introduced foreground objects from fading into the background. The update formula for the mean is changed accordingly:

μ_t = M μ_{t−1} + (1 − M)(ρ I_t + (1 − ρ) μ_{t−1})

where M = 1 when I_t is considered foreground and M = 0 otherwise. So when M = 1, that is, when the pixel is detected as foreground, the mean will stay the same. As a result, a pixel, once it has become foreground, can only become background again when the intensity value gets close to what it was before turning foreground. This method, however, has several issues: It only works if all pixels are initially background pixels (or foreground pixels are annotated as such). Also, it cannot cope with gradual background changes: If a pixel is categorized as foreground for a too long period of time, the background intensity in that location might have changed (because illumination has changed etc.). As a result, once the foreground object is gone, the new background intensity might not be recognized as such anymore.
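A per-pixel NumPy sketch of the running Gaussian average using the typical parameter values above. Whether classification uses the pre- or post-update statistics varies between implementations; here the old model classifies and is then updated:

```python
import numpy as np

RHO = 0.01  # temporal window parameter rho (typical value)
K = 2.5     # confidence threshold k (typical value)

def step(frame, mean, var, selective=True):
    """One update of the running Gaussian average for a grayscale frame.

    mean, var: float arrays holding the per-pixel background model.
    Returns (foreground_mask, new_mean, new_var).
    """
    frame = frame.astype(np.float64)
    d = np.abs(frame - mean)
    fg = d / np.sqrt(var) > K          # classify against the current model
    new_mean = RHO * frame + (1 - RHO) * mean
    new_var = RHO * d ** 2 + (1 - RHO) * var
    if selective:
        # Variant: foreground pixels do not update the background model.
        new_mean = np.where(fg, mean, new_mean)
        new_var = np.where(fg, var, new_var)
    return fg, new_mean, new_var
```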
Background mixture models
Mixture of Gaussians method approaches by modelling each pixel as a mixture of Gaussians and uses an on-line approximation to update the model. In this technique, it is assumed that every pixel's intensity values in the video can be modeled using a Gaussian mixture model. A simple heuristic determines which intensities are most probably of the background. Then the pixels which do not match to these are called the foreground pixels. Foreground pixels are grouped using 2D connected component analysis.
At any time t, a particular pixel (x₀, y₀) has history

{X₁, …, X_t} = {V(x₀, y₀, i) : 1 ≤ i ≤ t}

This history is modeled by a mixture of K Gaussian distributions:

P(X_t) = Σ_{i=1}^{K} ω_{i,t} N(X_t ∣ μ_{i,t}, Σ_{i,t})

First, each pixel is characterized by its intensity in RGB color space, so the probability of observing the current pixel is given by the mixture formula above in the multidimensional case. Here K is the number of distributions, ω_{i,t} is the weight associated with the ith Gaussian at time t, and μ_{i,t} and Σ_{i,t} are the mean and covariance matrix of that Gaussian, respectively.
Once the parameter initialization is done, a first foreground detection can be made and then the parameters are updated. The first B Gaussian distributions whose cumulative weight exceeds the threshold T are retained as the background distribution:

B = argmin_b (Σ_{i=1}^{b} ω_{i,t} > T)
The other distributions are considered to represent a foreground distribution. Then, when a new frame arrives at time t + 1, a match test is made for each pixel: a pixel matches a Gaussian distribution if the Mahalanobis distance satisfies

((X_{t+1} − μ_{i,t})ᵀ Σ_{i,t}^{−1} (X_{t+1} − μ_{i,t}))^{1/2} < k σ_{i,t}
where k is a constant threshold equal to 2.5. Then, two cases can occur:
Case 1: A match is found with one of the K Gaussians. For the matched component, the update is done as follows:

ω_{i,t+1} = (1 − α) ω_{i,t} + α
μ_{i,t+1} = (1 − ρ) μ_{i,t} + ρ X_{t+1}
σ²_{i,t+1} = (1 − ρ) σ²_{i,t} + ρ (X_{t+1} − μ_{i,t+1})ᵀ (X_{t+1} − μ_{i,t+1})

where α is a learning rate.
Power and Schoonees [3] used the same algorithm to segment the foreground of the image. The essential approximation to ρ is given by:

ρ ≈ α / ω_{i,t}
Case 2: No match is found with any of the K Gaussians. In this case, the least probable distribution is replaced with a new one whose parameters are a low prior weight, a mean equal to X_{t+1}, and a large initial variance.
Once the parameter maintenance is made, foreground detection can be made and so on. An on-line K-means approximation is used to update the Gaussians. Numerous improvements of this original method developed by Stauffer and Grimson have been proposed and a complete survey can be found in Bouwmans et al. A standard method of adaptive backgrounding is averaging the images over time, creating a background approximation which is similar to the current static scene except where motion occurs.
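In practice a Gaussian-mixture background subtractor is available off the shelf in OpenCV; a minimal sketch (the video filename is a placeholder, and the parameter values are OpenCV's defaults made explicit):

```python
import cv2

cap = cv2.VideoCapture("input.mp4")  # placeholder path
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)  # 255 = foreground, 127 = shadow, 0 = background
    cv2.imshow("foreground mask", fg_mask)
    if cv2.waitKey(30) == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```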
Surveys
Several surveys which concern categories or sub-categories of models can be found as follows:
MOG background subtraction
Subspace learning background subtraction
Statistical background subtraction
Fuzzy background subtraction
RPCA background subtraction (See Robust principal component analysis for more details)
Dynamic RPCA for background/foreground separation (See Robust principal component analysis for more details)
Decomposition into low-rank plus additive matrices for background/foreground Separation
Deep neural networks concepts for background subtraction
Traditional and recent approaches for background subtraction
Applications
Video surveillance
Optical motion capture
Human computer interaction
Content-based video coding
Traffic monitoring
Real-time motion gesture recognition
See also
3D data acquisition and object reconstruction
Gaussian adaptation
Region of interest
Teknomo–Fernandez algorithm
ViBe
References
Comparisons
Several comparison/evaluation papers can be found in the literature:
A. Sobral, A. Vacavant. "A comprehensive review of background subtraction algorithms evaluated with synthetic and real videos". Computer Vision and Image Understanding, CVIU 2014, 2014.
A. Shahbaz, J. Hariyono, K. Jo, "Evaluation of Background Subtraction Algorithms for Video Surveillance", FCV 2015, 2015.
Y. Xu, J. Dong, B. Zhang, D. Xu, "Background modeling methods in video analysis: A review and comparative evaluation', CAAI Transactions on Intelligence Technology, pages 43–60, Volume 1, Issue 1, January 2016.
Books
T. Bouwmans, F. Porikli, B. Horferlin, A. Vacavant, Handbook on "Background Modeling and Foreground Detection for Video Surveillance: Traditional and Recent Approaches, Implementations, Benchmarking and Evaluation", CRC Press, Taylor and Francis Group, June 2014. (For more information: http://www.crcpress.com/product/isbn/9781482205374)
T. Bouwmans, N. Aybat, and E. Zahzah. Handbook on Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image and Video Processing, CRC Press, Taylor and Francis Group, May 2016. (For more information: http://www.crcpress.com/product/isbn/9781498724623)
Journals
T. Bouwmans, L. Davis, J. Gonzalez, M. Piccardi, C. Shan, Special Issue on "Background Modeling for Foreground Detection in Real-World Dynamic Scenes", Special Issue in Machine Vision and Applications, July 2014.
A. Vacavant, L. Tougne, T. Chateau, Special section on "Background models comparison", Computer Vision and Image Understanding, CVIU 2014, May 2014.
A. Petrosino, L. Maddalena, T. Bouwmans, Special Issue on "Scene Background Modeling and Initialization", Pattern Recognition Letters, September 2017.
T. Bouwmans, Special Issue on "Detection of Moving Objects", MDPI Journal of Imaging, 2018.
Workshops
Background Learning for Detection and Tracking from RGB videos (RGBD 2017) Workshop in conjunction with ICIAP 2017. (For more information: http://rgbd2017.na.icar.cnr.it/)
Scene Background Modeling and Initialization (SBMI 2015) Workshop in conjunction with ICIAP 2015. (For more information: http://sbmi2015.na.icar.cnr.it/)
IEEE Change Detection Workshop in conjunction with CVPR 2014. (For more information: http://www.changedetection.net/)
Workshop on Background Model Challenges (BMC 2012) in conjunction with ACCV 2012. (For more information: http://bmc.iut-auvergne.com/)
Contests
IEEE Scene Background Modeling Contest (SBMC 2016) in conjunction with ICPR 2016 (For more information: http://pione.dinf.usherbrooke.ca/sbmc2016/ )
External links
Background subtraction by R. Venkatesh Babu
Foreground Segmentation and Tracking based on Foreground and Background Modeling Techniques by Jaume Gallego
Detecció i extracció d’avions a seqüències de vídeo by Marc Garcia i Ramis
Websites
Background Subtraction website
The Background Subtraction Website (T. Bouwmans, Univ. La Rochelle, France) contains a comprehensive list of the references in the field, and links to available datasets and software.
Datasets
ChangeDetection.net (For more information: http://www.changedetection.net/)
Background Models Challenge (For more information: http://bmc.iut-auvergne.com/)
Stuttgart Artificial Background Subtraction Dataset (For more information: http://www.vis.uni-stuttgart.de/index.php?id=sabs )
SBMI dataset (For more information: http://sbmi2015.na.icar.cnr.it/)
SBMnet dataset (For more information: http://pione.dinf.usherbrooke.ca/dataset/ )
Libraries
BackgroundSubtractorCNT
The BackgroundSubtractorCNT library implements a very fast and high quality algorithm written in C++ based on OpenCV. It is targeted at low spec hardware but works just as fast on modern Linux and Windows. (For more information: https://github.com/sagi-z/BackgroundSubtractorCNT).
BGS Library
The BGS Library (A. Sobral, Univ. La Rochelle, France) provides a C++ framework to perform background subtraction algorithms. The code works either on Windows or on Linux. Currently the library offers more than 30 BGS algorithms. (For more information: https://github.com/andrewssobral/bgslibrary)
LRS Library – Low-Rank and Sparse tools for Background Modeling and Subtraction in Videos The LRSLibrary (A. Sobral, Univ. La Rochelle, France) provides a collection of low-rank and sparse decomposition algorithms in MATLAB. The library was designed for motion segmentation in videos, but it can be also used or adapted for other computer vision problems. Currently the LRSLibrary contains more than 100 matrix-based and tensor-based algorithms. (For more information: https://github.com/andrewssobral/lrslibrary)
OpenCV – The OpenCV library provides a number background/foreground segmentation algorithms.
Telecommunications | Foreground detection | Technology | 3,683 |
72,329,175 | https://en.wikipedia.org/wiki/Hanseniaspora%20pseudoguilliermondii | Hanseniaspora pseudoguilliermondii is a species of yeast in the family Saccharomycetaceae. Originally isolated from orange juice concentrate, it has been found on fruit and fruit juices in locations around the world. It has also been observed forming hybrids with Hanseniaspora opuntiae.
Taxonomy
A sample of H. pseudoguilliermondii was first isolated from orange juice concentrate in Georgia, USA. It was studied in 2003 by Neža Čadež, Gé A. Poot, Peter Raspor, and Maudy Th. Smith, who found that it could not be distinguished from Hanseniaspora guilliermondii using physiological criteria. After further testing in 2006, Čadež, Raspor, and Smith offered a description of the species, based upon DNA testing, that they called Hanseniaspora pseudoguilliermondii. The specific epithet "pseudoguilliermondii" was chosen because the species is similar to H. guilliermondii.
Description
Microscopic examination of the yeast cells in YM liquid medium after 48 hours at 25°C reveals cells that are 2.2 to 8.7 μm by 1.6 to 4.2 μm in size, apiculate, ovoid to elongate, appearing singly or in pairs. Reproduction is by budding, which occurs at both poles of the cell. In broth culture, sediment is present, and after one month a very thin ring and a sediment is formed.
Colonies that are grown on malt agar for one month at 25°C appear cream-colored, butyrous, glossy, and smooth. Growth is flat to slightly raised at the center, with an entire to slightly undulating margin. The yeast forms poorly developed pseudohyphae on cornmeal or potato agar. The yeast has been observed to form four hat-shaped ascospores when grown for at least seven days on 5% Difco malt extract agar.
The yeast can ferment glucose and cellobiose, but not galactose, sucrose, maltose, lactose, raffinose or trehalose. It has a positive growth rate at 37°C, but there is no growth at 40°C. It can grow on agar media containing 0.1% cycloheximide and 10% sodium chloride, but growth on 50% glucose-yeast extract agar is weak.
Ecology
The original strain of this species was isolated from orange juice concentrate. It has also been isolated from fruit and fermenting fruit juices in The Philippines, Réunion, and French Guiana. It has been observed to form hybrids with Hanseniaspora opuntiae.
It is not known whether it has any human pathogenic potential, but it can grow at a normal body temperature.
References
Saccharomycetes
Yeasts
Fungi described in 2006
Fungus species | Hanseniaspora pseudoguilliermondii | Biology | 594 |
77,834,296 | https://en.wikipedia.org/wiki/Buloxibutid | Buloxibutid is an investigational new drug that is being evaluated to treat COVID-19 infections. It is an angiotensin II receptor type 2 agonist.
References
Carbamates
Imidazoles
Sulfonamides
Thiophenes
Butyl esters
Isobutyl compounds | Buloxibutid | Chemistry | 65 |
3,608,401 | https://en.wikipedia.org/wiki/Acoustic%20foam | Acoustic foam is an open celled foam used for acoustic treatment. It attenuates airborne sound waves, reducing their amplitude, for the purposes of noise reduction or noise control. The energy is dissipated as heat. Acoustic foam can be made in several different colors, sizes and thickness.
Acoustic foam can be attached to walls, ceilings, doors, and other features of a room to control noise levels, vibration, and echoes.
Many acoustic foam products are treated with dyes and/or fire retardants.
Uses
The objective of acoustic foam is to improve or change a room's sound qualities by controlling residual sound through absorption. This purpose requires strategic placement of acoustic foam panels on walls, ceilings, floors and other surfaces. Proper placement can help effectively manage resonance within the room and help give the room the desired sonic qualities.
Acoustic enhancement
The objective of acoustic foam is to enhance the sonic properties of a room by effectively managing unwanted reverberations. For this reason, acoustic foam is often used in restaurants, performance spaces, and recording studios. Acoustic foam is also often installed in large rooms with large, reverberative surfaces like gymnasiums, places of worship, theaters, and concert halls where excess reverberation is prone to arise. The purpose is to reduce, but not entirely eliminate, resonance within the room. In unmanaged spaces without acoustic foam or similar sound absorbing materials, sound waves reflect off surfaces and continue to bounce around in the room. When a wave encounters a change in acoustic impedance, such as hitting a solid surface, acoustic reflections occur. These reflections repeat many times before the wave becomes inaudible. Reflections can cause acoustic problems such as phase summation and phase cancellation. A new complex wave originates when the direct source wave coincides with the reflected waves. This complex wave will change the frequency response of the source material.
Functionality
Acoustic foam is a lightweight material made from polyurethane (either polyether or polyester) or extruded melamine foam. It is usually cut into tiles. One surface of these tiles often features pyramid, cone, wedge, or uneven cuboid shapes. Acoustic foam tiles are suited to placing on sonically reflective surfaces to act as sound absorbers, thus enhancing or changing the sound properties of a room.
This type of sound absorption is different from soundproofing, which is typically used to keep sound from escaping or entering a room rather than changing the properties of sound within the room itself.
Acoustic foam panels typically suppress reverberations in the mid and high frequencies. To deal with lower frequencies, much thicker pieces of acoustic foam (often in metal or wood enclosures) can be placed in the corners of a room and are called acoustic foam bass traps.
See also
Anechoic chamber
Bushing (isolator)
Polystyrene
Polyurethane
Sorbothane
Soundproofing
Styrofoam
Vibration isolation
References
Acoustics
Foams
Noise reduction
Noise control | Acoustic foam | Physics,Chemistry | 600 |
51,227,133 | https://en.wikipedia.org/wiki/NGC%20146 | NGC 146 is a small open cluster in the constellation Cassiopeia. It was discovered by John Herschel in 1829 using his father's 18.7 inch reflecting telescope.
Location
NGC 146 is fairly easy to locate in the sky, being half a degree away from the bright star Kappa Cassiopeiae. However, spotting the cluster itself is difficult because of its low apparent magnitude of 9.1. Its relatively high declination of about +63° means it never rises for observers south of latitude 27° S (90° − 63° = 27°).
Its distance is estimated at 3,030 parsecs (9,880 light-years), but it may be around 3,500 pc (11,000 ly) away.
Characteristics
The cluster is at most 10 million years old, as there are numerous B-type main sequence stars and pre-main-sequence stars but relatively few evolved supergiants. Among its most massive stars are two Herbig Be stars.
References
Further reading
Open clusters
0146
Cassiopeia (constellation) | NGC 146 | Astronomy | 199 |
8,529 | https://en.wikipedia.org/wiki/Disjunction%20elimination | In propositional logic, disjunction elimination (sometimes named proof by cases, case analysis, or or elimination) is the valid argument form and rule of inference that allows one to eliminate a disjunctive statement from a logical proof. It is the inference that if a statement P implies a statement Q and a statement R also implies Q, then if either P or R is true, then Q has to be true. The reasoning is simple: since at least one of the statements P and R is true, and since either of them would be sufficient to entail Q, Q is certainly true.
An example in English:
If I'm inside, I have my wallet on me.
If I'm outside, I have my wallet on me.
It is true that either I'm inside or I'm outside.
Therefore, I have my wallet on me.
The rule can be stated as:
P → Q, R → Q, P ∨ R
∴ Q
where the rule is that whenever instances of "P → Q", "R → Q" and "P ∨ R" appear on lines of a proof, "Q" can be placed on a subsequent line.
Formal notation
The disjunction elimination rule may be written in sequent notation:
(P → Q), (R → Q), (P ∨ R) ⊢ Q
where ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P → Q, R → Q and P ∨ R in some logical system;
and expressed as a truth-functional tautology or theorem of propositional logic:
(((P → Q) ∧ (R → Q)) ∧ (P ∨ R)) → Q
where P, Q, and R are propositions expressed in some formal system.
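The rule corresponds directly to the Or.elim eliminator found in proof assistants. A minimal sketch in Lean 4 (the theorem name and the wallet propositions are illustrative):

```lean
-- Disjunction elimination: from P → Q, R → Q and P ∨ R, conclude Q.
theorem disj_elim (P Q R : Prop) (hpq : P → Q) (hrq : R → Q) (hpr : P ∨ R) : Q :=
  Or.elim hpr hpq hrq

-- The wallet example from above: whether inside or outside, the wallet is on me.
example (Inside Outside Wallet : Prop)
    (h1 : Inside → Wallet) (h2 : Outside → Wallet)
    (h3 : Inside ∨ Outside) : Wallet :=
  h3.elim h1 h2
```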
See also
Disjunction
Argument in the alternative
Disjunctive normal form
Proof by exhaustion
References
Rules of inference
Theorems in propositional logic | Disjunction elimination | Mathematics | 313 |
7,091,330 | https://en.wikipedia.org/wiki/Water-fuelled%20car | A water-fuelled car is an automobile that hypothetically derives its energy directly from water. Water-fuelled cars have been the subject of numerous international patents, newspaper and popular science magazine articles, local television news coverage, and websites. The claims for these devices have been found to be pseudoscience and some were found to be tied to investment frauds. These vehicles may be claimed to produce fuel from water on board with no other energy input, or may be a hybrid claiming to derive some of its energy from water in addition to a conventional source (such as gasoline). According to the currently accepted laws of physics, there is no way to extract chemical energy from water alone.
What water-fuelled cars are not
A water-fuelled car is not any of the following:
Water injection, which is a method for cooling the combustion chambers of engines by adding water to the incoming fuel-air mixture, allowing for greater compression ratios and reduced engine knocking (detonation).
The hydrogen car, although it often incorporates some of the same elements. To fuel a hydrogen car from water, electricity is used to generate hydrogen by electrolysis. The resulting hydrogen is an energy carrier that can power a car by reacting with oxygen from the air to create water, either through burning in a combustion engine or catalyzed to produce electricity in a fuel cell.
Hydrogen fuel enhancement, where a mixture of hydrogen and conventional hydrocarbon fuel is burned in an internal combustion engine, usually in an attempt to improve fuel economy or reduce emissions.
The steam car, which uses water (in both liquid and gaseous forms) as a working fluid, not as a fuel.
An electric car charged with or directly powered by hydroelectricity.
Extracting energy from water
According to the currently accepted laws of physics, there is no way to extract chemical energy from water alone. Water itself is highly stable—it was one of the classical elements and contains very strong chemical bonds. Its enthalpy of formation is negative (−68.3 kcal/mol or −285.8 kJ/mol), meaning that energy is required to break those stable bonds, to separate water into its elements, and there are no other compounds of hydrogen and oxygen with more negative enthalpies of formation, meaning that no energy can be released in this manner either.
Most proposed water-fuelled cars rely on some form of electrolysis to separate water into hydrogen and oxygen and then recombine them to release energy. However, the first law of thermodynamics guarantees that the energy required to separate the elements will always be equal to the amount of energy released (assuming no losses), so this cannot be used to produce net energy. The second law of thermodynamics further states that the amount of useful energy released this way is necessarily less than the amount of energy input.
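The bookkeeping can be made explicit with the enthalpy value quoted above. Splitting water costs exactly what recombining it returns, so the round trip can never yield net energy (an idealized balance, ignoring real-world losses):

2H2O(l) → 2H2 + O2, ΔH = +571.6 kJ (electrolysis: +285.8 kJ per mole of water split)
2H2 + O2 → 2H2O(l), ΔH = −571.6 kJ (combustion: −285.8 kJ per mole of water formed)

Net over the full cycle: +571.6 kJ − 571.6 kJ = 0, before inevitable conversion losses make the balance negative.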
Claims of functioning water-fuelled cars
Garrett electrolytic carburetor
Charles H. Garrett allegedly demonstrated a water-fuelled car "for several minutes", which was reported on September 8, 1935, in The Dallas Morning News. The car generated hydrogen by electrolysis as can be seen by examining Garrett's patent, issued that same year. This patent includes drawings which show a carburetor similar to an ordinary float-type carburetor but with electrolysis plates in the lower portion, and where the float is used to maintain the level of the water. Garrett's patent fails to identify a new source of energy.
Stanley Meyer's water fuel cell
At least as far back as 1980, Stanley Meyer claimed that he had built a dune buggy that ran on water, although he gave inconsistent explanations as to its mode of operation. In some cases, he claimed that he had replaced the spark plugs with a "water splitter", while in other cases it was claimed to rely on a "fuel cell" that split the water into hydrogen and oxygen. The "fuel cell", which he claimed was subjected to an electrical resonance, would split the water mist into hydrogen and oxygen gas, which would then be combusted back into water vapour in a conventional internal combustion engine to produce net energy. Meyer's claims were never independently verified, and in an Ohio court in 1996 he was found guilty of "gross and egregious fraud". He died of an aneurysm in 1998, although conspiracy theories claim that he was poisoned.
Dennis Klein
In 2002, the firm Hydrogen Technology Applications patented an electrolyser design and trademarked the term "Aquygen" to refer to the hydrogen oxygen gas mixture produced by the device. Originally developed as an alternative to oxyacetylene welding, the company claimed to be able to run a vehicle exclusively on water, via the production of "Aquygen", and invoked an unproven state of matter called "magnegases" and a discredited theory about magnecules to explain their results. Company founder Dennis Klein claimed to be in negotiations with a major US auto manufacturer and that the US government wanted to produce Hummers that used his technology.
At present, the company no longer claims it can run a car exclusively on water, and is instead marketing "Aquygen" production as a technique to increase fuel efficiency, thus making it hydrogen fuel enhancement rather than a water-fuelled car.
Genesis World Energy (GWE)
Also in 2002, Genesis World Energy announced a market ready device which would extract energy from water by separating the hydrogen and oxygen and then recombining them. In 2003, the company announced that this technology had been adapted to power automobiles. The company collected over $2.5 million from investors, but none of their devices were ever brought to market. In 2006, Patrick Kelly, the owner of Genesis World Energy was sentenced in New Jersey to five years in prison for theft and ordered to pay $400,000 in restitution.
Genepax Water Energy System
In June 2008, Japanese company Genepax unveiled a car it claimed ran on only water and air, and many news outlets dubbed the vehicle a "water-fuel car". The company said it "cannot [reveal] the core part of this invention" yet, but it disclosed that the system used an onboard energy generator, which it called a "membrane electrode assembly", to extract the hydrogen using a "mechanism which is similar to the method in which hydrogen is produced by a reaction of metal hydride and water". The hydrogen was then used to generate energy to run the car. This led to speculation that the metal hydride is consumed in the process and is the ultimate source of the car's energy, making it a hydride-fuelled "hydrogen on demand" vehicle rather than water-fuelled as claimed. On the company's website the energy source is explained only with the words "Chemical reaction". The science and technology magazine Popular Mechanics described Genepax's claims as "rubbish". The vehicle Genepax demonstrated to the press in 2008 was a REVAi electric car, which was manufactured in India and sold in the UK as the G-Wiz.
In early 2009, Genepax announced they were closing their website, citing large development costs.
Thushara Priyamal Edirisinghe
Also in 2008, Sri Lankan news sources reported that Thushara Priyamal Edirisinghe claimed to drive a water-fuelled car a substantial distance on a small amount of water. Like other alleged water-fuelled cars described above, energy for the car was supposedly produced by splitting water into hydrogen and oxygen using electrolysis, and then burning the gases in the engine. Thushara showed the technology to Prime Minister Ratnasiri Wickramanayaka, who "extended the Government's full support to his efforts to introduce the water-powered car to the Sri Lankan market". Thushara was arrested a few months later on suspicion of investment fraud.
Daniel Dingel
Daniel Dingel, a Filipino inventor, has been claiming since 1969 to have developed technology allowing water to be used as fuel. In 2000, Dingel entered into a business partnership with Formosa Plastics Group to further develop the technology. In 2008, Formosa Plastics successfully sued Dingel for fraud and Dingel, who was 82, was sentenced to 20 years' imprisonment.
Ghulam Sarwar
In December 2011, Ghulam Sarwar claimed he had invented a car that ran only on water. At the time, the car was claimed to use 60% water and 40% diesel or other fuel, but the inventor was working to make it run on water alone, probably by the end of June 2012. It was further claimed the car "emits only oxygen rather than the usual carbon".
Agha Waqar Ahmad
Pakistani man Agha Waqar Ahmad claimed in July 2012 to have invented a water-fuelled car by installing a "water kit", suitable for all kinds of automobiles, which consists of a cylindrical jar that holds the water, a bubbler, and a pipe leading to the engine. He claimed the kit used electrolysis to convert water into "HHO", which is then used as fuel. The kit required the use of distilled water to work. Ahmad claimed he had been able to generate more oxyhydrogen than any other inventor because of "undisclosed calculations". He applied for a patent in Pakistan. Some Pakistani scientists said Agha's invention was a fraud that violated the laws of thermodynamics.
Aryanto Misel
Indonesian inventor Aryanto Misel claimed in May 2022 that his invention, called Nikuba, can convert water into hydrogen that can be used as fuel for motorcycles. Aryanto claimed that it required only 1 liter of water to cover a distance of 500 kilometers.
In July 2023, Aryanto claimed that Italian-based automobile manufacturers Lamborghini, Ducati, and Ferrari are interested in Nikuba. He also claimed that he is willing to sell the device to foreign companies for 15 billion rupiahs, while also claiming that he didn't need the Indonesian government and National Research and Innovation Agency as they have "destroyed" him. Indonesian scientists from National Research and Innovation Agency stated that the device is theoretically impossible. They also stated that there is no interest from Italian automobile manufacturers in Nikuba, and Aryanto was invited by their partners instead of the automobile manufacturers.
Hydrogen as a supplement
In addition to claims of cars that run exclusively on water, there have also been claims that burning hydrogen or oxyhydrogen together with petrol or diesel increases mileage and efficiency; these claims are debated. A number of websites promote the use of oxyhydrogen, also called "HHO", selling plans for do-it-yourself electrolysers or kits with the promise of large improvements in fuel efficiency. According to a spokesman for the American Automobile Association, "All of these devices look like they could probably work for you, but let me tell you they don't".
Gasoline pill and related additives
Related to the water-fuelled car hoax are claims that additives, often a pill, can convert the water into usable fuel, similar to a carbide lamp, in which a high-energy additive produces the combustible fuel. These claims are all false, and often with fraudulent intent, as water itself cannot contribute any energy to the process.
Hydrogen on demand technologies
A hydrogen on demand vehicle uses a chemical reaction to produce hydrogen from water. The hydrogen is then burned in an internal combustion engine or used in a fuel cell to generate electricity which powers the vehicle. These designs take energy from the chemical that reacts with water; vehicles of this type are not precluded by the laws of nature. Aluminium, magnesium, and sodium borohydride react with water to generate hydrogen and have been used in hydrogen on demand prototypes. Eventually, the chemical runs out and has to be replenished. The energy required to produce such compounds exceeds the energy obtained from their reaction with water.
One example of a hydrogen on demand device, created by scientists from the University of Minnesota and the Weizmann Institute of Science, uses boron to generate hydrogen from water. An article in New Scientist in July 2006 described the power source under the headline "A fuel tank full of water", quoting Abu-Hamed.
A vehicle powered by the device would take on water and boron instead of petrol, and generate boron trioxide. Elemental boron is difficult to prepare and does not occur naturally. Boron trioxide is an example of a borate, which is the predominant form of boron on earth. Thus, a boron-powered vehicle would require an economical method of preparing elemental boron. The chemical reactions describing the oxidation of boron are:
4B + 6H₂O → 2B₂O₃ + 6H₂ [Hydrogen generation step]
6H₂ + 3O₂ → 6H₂O [Combustion step]
The balanced chemical equation representing the overall process (hydrogen generation and combustion) is:
4B + 3O₂ → 2B₂O₃
As shown above, boron trioxide is the only net byproduct, and it could be removed from the car and turned back into boron and reused. Electricity input is required to complete this process, which Abu-Hamed suggests could come from solar panels. Although it is possible to obtain elemental boron by electrolysis, a substantial expenditure of energy is required. The process of converting borates to elemental boron and back might be compared with the analogous process involving carbon: carbon dioxide could be converted to charcoal (elemental carbon), then burnt to produce carbon dioxide.
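As a rough stoichiometric check on the scheme above, the ideal hydrogen yield per kilogram of boron follows directly from the generation step. A minimal sketch in Python (standard molar masses; losses and the energy cost of regenerating boron are ignored):

```python
M_B = 10.81    # molar mass of boron, g/mol
M_H2 = 2.016   # molar mass of molecular hydrogen, g/mol

# 4 B + 6 H2O -> 2 B2O3 + 6 H2, i.e. 1.5 mol of H2 per mol of B.
mol_B = 1000.0 / M_B           # moles of boron in 1 kg
mol_H2 = 1.5 * mol_B
kg_H2 = mol_H2 * M_H2 / 1000.0

print(f"1 kg of boron ideally yields about {kg_H2:.2f} kg of H2")  # ~0.28 kg
```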
In popular culture
It is referred to in the pilot episode for the That '70s Show sitcom, as well as in the twenty-first episode of the fifth season and the series finale.
"Gashole" (2010), a documentary film about the history of oil prices and the future of alternative mentions multiple stories regarding engines that use water to increase mileage efficiency.
"Like Water for Octane," an episode of The Lone Gunmen, is based on a "water-powered" car that character Melvin Frohike saw with his own eyes back in 1962.
The Water Engine, a David Mamet play made into a television film in 1994, tells the story of Charles Lang inventing an engine that runs using water for fuel. The plot centers on the many obstacles the inventor must overcome to patent his device.
The plot of the 1996 action film Chain Reaction revolves around a technology to turn water (via a type of self-sustaining bubble fusion & electrolysis) into fuel and official suppression of it.
A water-powered car was depicted in a 1997 episode of Team Knight Rider (a spinoff of the original Knight Rider TV series) entitled "Oil and Water". In the episode, the vehicle explodes after a character sabotages it by putting seltzer tablets in the fuel tank. The car shown was actually a Bricklin SV-1.
See also
List of topics characterized as pseudoscience
List of water fuel inventions
Perpetual motion
Water power engine
References
Further reading
Free energy conspiracy theories
Fringe physics | Water-fuelled car | Technology | 3,087 |
53,902,845 | https://en.wikipedia.org/wiki/Graphene%20plasmonics | Graphene is a 2D nanosheet with an atomically thin thickness of 0.34 nm. Because of this ultrathin thickness, graphene shows many properties that are quite different from those of its bulk graphite counterpart. The most prominent advantages are its high electron mobility and high mechanical strength.
Thus, it exhibits potential for applications in optics and electronics especially for the development of wearable devices as flexible substrates. More importantly, the optical absorption rate of graphene is 2.3% in the visible and near-infrared region. This broadband absorption characteristic also attracted great attention of the research community to exploit the graphene-based photodetectors/modulators.
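The 2.3% figure is notable because it is set by fundamental constants alone: a single suspended graphene layer absorbs a fraction πα of normally incident light, where α is the fine-structure constant. A quick numerical check in Python:

```python
import math

ALPHA = 7.2973525693e-3           # fine-structure constant (CODATA value)
absorption = math.pi * ALPHA      # universal optical absorption of one graphene layer

print(f"pi * alpha = {absorption:.4f} = {absorption:.2%}")  # about 2.29%
```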
Plasmons are collective electron oscillations, usually excited at metal surfaces by a light source. Doped graphene layers have also shown surface plasmon effects similar to those of metallic thin films. By engineering metallic substrates or nanoparticles (e.g., gold, silver and copper) with graphene, the plasmonic properties of the hybrid structures can be tuned to improve optoelectronic device performance. Electrons at the metallic structure can transfer to the graphene conduction band, which is attributed to the zero-bandgap property of the graphene nanosheet.
Graphene plasmons can also be decoupled from their environment and give rise to genuine Dirac plasmons in the low-energy range where the wavelengths exceed the damping length. These graphene plasma resonances have been observed in the GHz–THz electronic domain.
Graphene plasmonics is an emergent research field, that is attracting plenty of interest and has already resulted in a textbook.
Application
When plasmons are resonant at the graphene/metal interface, a strong electric field is induced that enhances the generation of electron-hole pairs in the graphene layer. The number of excited charge carriers increases linearly with the field intensity, in accordance with Fermi's golden rule. Owing to this plasmonic enhancement, the induced charge carrier density of a metal/graphene hybrid nanostructure can be up to 7 times higher than that of pristine graphene.
So far, graphene plasmonic effects have been demonstrated for applications ranging from light modulation to biological/chemical sensing. High-speed photodetection at 10 Gbit/s based on graphene and a 20-fold improvement in detection efficiency through graphene/gold nanostructures have also been reported. Graphene plasmons are considered good alternatives to noble-metal plasmons, not only because of their cost-effectiveness for large-scale production but also because of the higher confinement of the plasmons at the graphene surface. The enhanced light-matter interactions can be further optimized and tuned through electrostatic gating. These advantages of graphene plasmonics have paved the way toward single-molecule detection and single-plasmon excitation.
See also
Surface plasmon polariton
Nanomaterial
References
Graphene
Plasmonics | Graphene plasmonics | Physics,Chemistry,Materials_science | 629 |
4,595,931 | https://en.wikipedia.org/wiki/Flux%20tube | A flux tube is a generally tube-like (cylindrical) region of space containing a magnetic field, B, such that the cylindrical sides of the tube are everywhere parallel to the magnetic field lines. It is a graphical visual aid for visualizing a magnetic field. Since no magnetic flux passes through the sides of the tube, the flux through any cross section of the tube is equal, and the flux entering the tube at one end is equal to the flux leaving the tube at the other. Both the cross-sectional area of the tube and the magnetic field strength may vary along the length of the tube, but the magnetic flux inside is always constant.
As used in astrophysics, a flux tube generally means an area of space through which a strong magnetic field passes, in which the behavior of matter (usually ionized gas or plasma) is strongly influenced by the field. They are commonly found around stars, including the Sun, which has many flux tubes from tens to hundreds of kilometers in diameter. Sunspots are also associated with larger flux tubes of 2500 km diameter. Some planets also have flux tubes. A well-known example is the flux tube between Jupiter and its moon Io.
Definition
The flux of a vector field passing through any closed orientable surface is the surface integral of the field over the surface. For example, for a vector field consisting of the velocity of a volume of liquid in motion, and an imaginary surface within the liquid, the flux is the volume of liquid passing through the surface per unit time.
A flux tube can be defined passing through any closed, orientable surface in a vector field , as the set of all points on the field lines passing through the boundary of . This set forms a hollow tube. The tube follows the field lines, possibly turning, twisting, and changing its cross sectional size and shape as the field lines converge or diverge. Since no field lines pass through the tube walls there is no flux through the walls of the tube, so all the field lines enter and leave through the end surfaces. Thus a flux tube divides all the field lines into two sets; those passing through the inside of the tube, and those outside. Consider the volume bounded by the tube and any two surfaces and intersecting it. If the field has sources or sinks within the tube the flux out of this volume will be nonzero. However, if the field is divergenceless (solenoidal, ) then from the divergence theorem the sum of the flux leaving the volume through these two surfaces will be zero, so the flux leaving through will be equal to the flux entering through . In other words, the flux within the tube through any surface intersecting the tube is equal, the tube encloses a constant quantity of flux along its length. The strength (magnitude) of the vector field, and the cross sectional area of the tube varies along its length, but the surface integral of the field over any surface spanning the tube is equal.
Since from Maxwell's equations (specifically Gauss's law for magnetism) magnetic fields are divergenceless, magnetic flux tubes have this property, so flux tubes are mainly used as an aid in visualizing magnetic fields. However flux tubes can also be useful for visualizing other vector fields in regions of zero divergence, such as electric fields in regions where there are no charges and gravitational fields in regions where there is no mass.
In particle physics, the hadron particles that make up all matter, such as neutrons and protons, are composed of more basic particles called quarks, which are bound together by thin flux tubes of strong nuclear force field. The flux tube model is important in explaining the so-called color confinement mechanism, why quarks are never seen separately in particle experiments.
Types
Flux rope: Twisted magnetic flux tube.
Fibril field: Magnetic flux tube that does not have a magnetic field outside the tube.
History
In 1861, James Clerk Maxwell introduced the concept of a flux tube, inspired by Michael Faraday's work on electrical and magnetic behavior, in his paper titled "On Physical Lines of Force". Maxwell described flux tubes as:
If upon any surface which cuts the lines of fluid motion we draw a closed curve, and if from every point of this curve we draw lines of motion, these lines of motion will generate a tubular surface which we may call a tube of fluid motion.
Flux tube strength
The flux tube's strength, F, is defined to be the magnetic flux through a surface S intersecting the tube, equal to the surface integral of the magnetic field over S:
F = ∫S B · dA
Since the magnetic field is solenoidal, as defined in Maxwell's equations (specifically Gauss's law for magnetism), ∇ · B = 0, the strength is constant at any surface along a flux tube. Under the condition that the cross-sectional area A of the flux tube is small enough that the magnetic field is approximately constant over it, F can be approximated as F ≈ BA. Therefore, if the cross-sectional area of the tube decreases along the tube from A₁ to A₂, then the magnetic field strength must increase proportionally from B₁ to B₂ = B₁(A₁/A₂) in order to satisfy the condition of constant flux F.
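A minimal numerical sketch of this conservation law in Python (the field value and areas are illustrative, not drawn from any particular tube):

```python
def field_after_narrowing(b1: float, a1: float, a2: float) -> float:
    """Flux conservation F = B*A gives B2 = B1 * A1 / A2."""
    return b1 * a1 / a2

# A tube narrowing to half its cross-section doubles the field strength.
B1, A1, A2 = 0.3, 2.0e6, 1.0e6        # tesla, m^2, m^2 (assumed values)
B2 = field_after_narrowing(B1, A1, A2)
print(f"B2 = {B2:.2f} T; flux before = {B1 * A1:.2e} Wb, after = {B2 * A2:.2e} Wb")
```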
Plasma physics
Flux conservation
In magnetohydrodynamics, Alfvén's theorem states that the magnetic flux through a surface, such as the surface of a flux tube, moving along with a perfectly conducting fluid is conserved. In other words, the magnetic field is constrained to move with the fluid or is "frozen-in" to the fluid.
This can be shown mathematically for a flux tube using the induction equation of a perfectly conducting fluid,
∂B/∂t = ∇ × (v × B),
where B is the magnetic field and v is the velocity field of the fluid. The change in magnetic flux over time through any open surface S of the flux tube enclosed by the curve C with a differential line element dl can be written as
dF/dt = ∫S (∂B/∂t) · dA + ∮C B · (v × dl).
Using the induction equation gives
dF/dt = ∫S [∇ × (v × B)] · dA + ∮C B · (v × dl),
which can be rewritten using Stokes' theorem and an elementary vector identity on the first and second term, respectively, to give
dF/dt = ∮C (v × B) · dl − ∮C (v × B) · dl = 0.
Compression and extension
In ideal magnetohydrodynamics, if a cylindrical flux tube of length L₀ is compressed while the length of the tube stays the same, the magnetic field and the density of the tube increase with the same proportionality. If a flux tube with a configuration of a magnetic field B₀ and a plasma density ρ₀ confined to the tube is compressed by a scalar value defined as λ = A₀/A (the ratio of the original to the compressed cross-sectional area), the new magnetic field and density are given by:
B = λB₀,  ρ = λρ₀
If λ > 1, known as transverse compression, B and ρ increase and are scaled the same, while transverse expansion (λ < 1) decreases B and ρ by the same value and proportion, where B/ρ = B₀/ρ₀ is constant.
Extending the flux tube from its original length L₀ to a new length L while the density of the tube remains the same, ρ₀, results in the magnetic field strength increasing by the factor L/L₀ (that is, B = (L/L₀)B₀), since mass conservation at constant density forces the cross-section to shrink while the flux BA stays constant. Reducing the length of the tube results in a decrease of the magnetic field's strength.
Plasma pressure
In magnetohydrostatic equilibrium, the following condition is met for the equation of motion of the plasma confined to the flux tube:
0 = −∇p + j × B + F_g,
where
p is the plasma pressure,
j is the current density of the plasma, and
F_g is the gravitational force.
With the magnetohydrostatic equilibrium condition met, a cylindrical flux tube's plasma pressure p(r) is given by the following relation, written in cylindrical coordinates with r as the radial distance from the axis (here B² = B_φ² + B_z² and μ₀ is the vacuum permeability):
dp/dr + d/dr(B²/(2μ₀)) + B_φ²/(μ₀r) = 0
The second term in the above equation gives the magnetic pressure force while the third term represents the magnetic tension force. The field line's twist around the axis from one end of the tube of length L to the other end is given by:
Φ(r) = L B_φ(r) / (r B_z(r))
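A small sketch in Python evaluating the twist formula for assumed field components (all numbers illustrative):

```python
import math

def twist(length: float, b_phi: float, b_z: float, r: float) -> float:
    """Twist angle (radians) of a field line over the tube length: Phi = L*B_phi/(r*B_z)."""
    return length * b_phi / (r * b_z)

L = 1.0e8    # tube length in meters (assumed)
r = 5.0e6    # radial distance from the axis in meters (assumed)
phi = twist(L, b_phi=0.01, b_z=0.1, r=r)   # field components in tesla (assumed)
print(f"twist = {phi:.1f} rad = {phi / (2 * math.pi):.2f} turns")
```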
Examples
Solar
Examples of solar flux tubes include sunspots and intense magnetic tubes in the photosphere and the field around the solar prominence and coronal loops in the corona.
Sunspots occur when small flux tubes combine into a large flux tube that breaks the surface of the photosphere. The large flux tube of a sunspot has a field intensity of around 3 kG with a diameter of typically 4000 km. In extreme cases, the large flux tubes can reach considerably greater diameters while retaining a field strength of 3 kG. Sunspots can continue to grow as long as there is a constant supply of new flux from small flux tubes on the surface of the Sun. The magnetic field within the flux tube can be compressed by decreasing the gas pressure inside, and therefore the internal temperature of the tube, while maintaining a constant pressure outside.
Intense magnetic tubes are isolated flux tubes with diameters of 100 to 300 km and an overall field strength of 1 to 2 kG. These flux tubes are concentrated strong magnetic fields that are found between solar granules. The magnetic field causes the plasma pressure in the flux tube to decrease, known as the plasma density depletion region. If there is a significant difference between the temperatures in the flux tube and the surroundings, there is a decrease in plasma pressure as well as a decrease in plasma density, causing some of the magnetic field to escape the plasma.
Plasma that is trapped within magnetic flux tubes that are attached to the photosphere, referred to as footpoints, create a loop-like structure known as a coronal loop. The plasma inside the loop has a higher temperature than the surroundings causing the pressure and density of the plasma to increase. These coronal loops get their characteristic high luminosity and ranges of shapes from the behavior of the magnetic flux tube. These flux tubes confine plasma and are characterized as isolated. The confined magnetic field strength varies from 0.1 to 10 G with diameters ranging from 200 to 300 km.
The result of emerging twisted flux tubes from the interior of the Sun cause twisted magnetic structures in the corona, which then lead to solar prominences. Solar prominences are modeled using twisted magnetic flux tubes known as flux ropes.
Planetary
Magnetized planets have a region above their ionospheres which traps energetic particles and plasma along magnetic fields, referred to as a magnetosphere. The extension of the magnetosphere away from the Sun, known as a magnetotail, is modeled as magnetic flux tubes. Mars and Venus both lack a strong intrinsic magnetic field, resulting in flux tubes from the solar wind gathering at high altitudes of the ionosphere on the sunward side of the planets and causing the flux tubes to distort along the magnetic field lines, creating flux ropes. Particles on the solar wind magnetic field lines can transfer to the magnetic field lines of a planet's magnetosphere through magnetic reconnection, which occurs when a flux tube from the solar wind and a flux tube from the magnetosphere with opposite field directions get close to one another.
Flux tubes that arise from magnetic reconnection form into a dipole-like configuration around the planet where plasma flow occurs. An example of this case is the flux tube between Jupiter and its moon Io, approximately 450 km in diameter at the points closest to Jupiter.
See also
QCD string, sometimes called a flux tube
Flux transfer event
Birkeland current
Magnetohydrodynamics (MHD)
Marklund convection
References
Concepts in astrophysics | Flux tube | Physics | 2,183 |
49,216,832 | https://en.wikipedia.org/wiki/NGC%20128 | NGC 128 is a lenticular galaxy in the constellation Pisces. It is approximately 190 million light-years from the Sun and has a diameter of about 165,000 light-years.
Discovery
NGC 128 was discovered by astronomer William Herschel on 25 December 1790 using a reflecting telescope with an aperture of 18.7 inches. At the time of discovery, its coordinates were recorded as 00h 22m 05s, +87° 54.6′ -20.0″. It was later observed by John Herschel on 12 October 1827.
Visual appearance
The galaxy is described as "pretty bright", "very small" with a "brighter middle". It is approximately 165,000 light-years in diameter and is elongated. The galaxy is famous for its peanut-shell-shaped bulge, and in 2016 it was discovered that there are two such nested structures, possibly associated with two stellar bars.
Galaxy group information
NGC 128 is the largest member, and the namesake of, the NGC 128 group which also includes the galaxies NGC 127 and NGC 130. NGC 128 has a strong tidal bridge with NGC 127 and there is evidence of interaction between all three galaxies in the group. NGC 128 has a noticeable peanut shape that is likely to be caused by gravitational effects of the other two galaxies.
Gallery
See also
List of NGC objects (1–1000)
NGC 125
NGC 126
NGC 127
NGC 130
References
External links
Lenticular galaxies
0128
Astronomical objects discovered in 1790
Pisces (constellation) | NGC 128 | Astronomy | 299 |
11,467,997 | https://en.wikipedia.org/wiki/Sporisorium%20cruentum | Sporisorium cruentum is a plant pathogen infecting sorghum.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Sorghum diseases
Ustilaginomycotina
Fungus species | Sporisorium cruentum | Biology | 50 |
5,814,292 | https://en.wikipedia.org/wiki/Angiopoietin | Angiopoietin is part of a family of vascular growth factors that play a role in embryonic and postnatal angiogenesis. Angiopoietin signaling most directly corresponds with angiogenesis, the process by which new arteries and veins form from preexisting blood vessels. Angiogenesis proceeds through sprouting, endothelial cell migration, proliferation, and vessel destabilization and stabilization. They are responsible for assembling and disassembling the endothelial lining of blood vessels. Angiopoietin cytokines are involved with controlling microvascular permeability, vasodilation, and vasoconstriction by signaling smooth muscle cells surrounding vessels.
There are now four identified angiopoietins: ANGPT1, ANGPT2, ANGPTL3, ANGPT4.
In addition, there are a number of proteins that are closely related to ('like') angiopoietins (angiopoietin-related proteins 1 through 8).
Angiopoietin-1 is critical for vessel maturation, adhesion, migration, and survival. Angiopoietin-2, on the other hand, promotes cell death and disrupts vascularization. Yet, when it is in conjunction with vascular endothelial growth factors, or VEGF, it can promote neo-vascularization.
Structure
Structurally, angiopoietins have an N-terminal super clustering domain, a central coiled domain, a linker region, and a C-terminal fibrinogen-related domain responsible for the binding between the ligand and receptor.
Angiopoietin-1 encodes a 498 amino acid polypeptide with a molecular weight of 57 kDa whereas angiopoietin-2 encodes a 496 amino acid polypeptide.
Only clusters/multimers activate receptors
Angiopoietin-1 and angiopoietin-2 can form dimers, trimers, and tetramers. Angiopoietin-1 has the ability to form higher order multimers through its super clustering domain. However, not all of the structures can interact with the tyrosine kinase receptor. The receptor can only be activated at the tetramer level or higher.
Specific mechanisms
Tie pathway
The collective interactions between angiopoietins, receptor tyrosine kinases, vascular endothelial growth factors and their receptors form the two signaling pathways, Tie-1 and Tie-2. The two receptor pathways are named for their role in mediating cell signals by inducing the phosphorylation of specific tyrosines. This in turn initiates the binding and activation of downstream intracellular enzymes, a process known as cell signaling.
Tie-2
Tie-2/Ang-1 signaling activates β1-integrin and N-cadherin in LSK-Tie2+ cells and promotes hematopoietic stem cell (HSC) interactions with the extracellular matrix and its cellular components. Ang-1 promotes quiescence of HSCs in vivo. This quiescence or slow cell cycling of HSCs induced by Tie-2/Ang-1 signaling contributes to the maintenance of the long-term repopulating ability of HSCs and the protection of the HSC compartment from various cellular stresses. Tie-2/Ang-1 signaling thus plays a critical role in HSCs and is required for the long-term maintenance and survival of HSCs in bone marrow. In the endosteum, Ang-1 is predominantly expressed by osteoblastic cells. Although which specific TIE receptors mediate signals downstream of angiogenesis stimulation is highly contested, it is clear that TIE-2 is capable of activation as a result of binding angiopoietins.
Angiopoietin proteins 1 through 4 are all ligands for Tie-2 receptors. Tie-1 heterodimerizes with Tie-2 to enhance and modulate signal transduction of Tie-2 for vascular development and maturation. These Tyrosine kinase receptors are typically expressed on vascular endothelial cells and specific macrophages for immune responses. Angiopoietin-1 is a growth factor produced by vascular support cells, specialized pericytes in the kidney, and hepatic stellate cells (ITO) cells in the liver. This growth factor is also a glycoprotein and functions as an agonist for the tyrosine receptor found in endothelial cells. Angiopoietin-1 and tyrosine kinase signaling are essential for regulating blood vessel development and the stability of mature vessels.
The expression of Angiopoietin-2 in the absence of vascular endothelial growth factor (VEGF) leads to endothelial cell death and vascular regression. Increased levels of Ang2 promote tumor angiogenesis, metastasis, and inflammation. Effective means to control Ang2 in inflammation and cancer should have clinical value. Angiopoeitin, more specifically Ang-1 and Ang-2, work hand in hand with VEGF to mediate angiogenesis. Ang-2 works as an antagonist of Ang-1 and promotes vessel regression if VEGF is not present. Ang-2 works with VEGF to facilitate cell proliferation and migration of endothelial cells. Changes in expression of Ang-1, Ang-2 and VEGF have been reported in the rat brain after cerebral ischemia.
Angiogenesis signaling
To migrate, the endothelial cells need to loosen the endothelial connections by breaking down the basal lamina and the ECM scaffold of blood vessels. These connections are a key determinant of vascular permeability and relieve peri-endothelial cell contact, which is also a major factor in vessel stability and maturity. After the physical barrier is removed, under the influence of the growth factors VEGF with addition contributions of other factors like angiopoietin-1, integrins, and chemokines play an essential role. VEGF and ang-1 are involved in endothelial tube formation.
Vascular permeability signaling
Angiopoietin-1 and angiopoietin-2 are modulators of endothelial permeability and barrier function. Endothelial cells secrete angiopoietin-2 for autocrine signaling while parenchymal cells of the extravascular tissue secrete angiopoietin-2 onto endothelial cells for paracrine signaling, which then binds to the extracellular matrix and is stored within the endothelial cells.
Cancer
Angiopoietin-2 has been proposed as a biomarker in different cancer types. Angiopoietin-2 expression levels are proportional to the cancer stage for both small and non-small cell lung cancers. It has been also implicated to play role in hepatocellular and endometrial carcinoma-induced angiogenesis. Experiments using blocking antibodies for angiopoietin-2 have shown to decrease metastasis to lungs and lymph nodes.
Clinical relevance
Deregulation of angiopoietin and the tyrosine kinase pathway is common in blood-related diseases such as diabetes, malaria, sepsis, and pulmonary hypertension. This is demonstrated by an increased ratio of angiopoietin-2 to angiopoietin-1 in blood serum. To be specific, angiopoietin levels can serve as an indicator of sepsis. Research on angiopoietin-2 has shown that it is involved in the onset of septic shock. The combination of fever and high levels of angiopoietin-2 is correlated with a greater likelihood of developing septic shock. It has also been shown that angiopoietin-1 and angiopoietin-2 signaling can act independently of each other: one angiopoietin factor can signal at high levels while the other remains at baseline-level signaling.
Angiopoietin-2 is produced and stored in Weibel-Palade bodies in endothelial cells and acts as a TEK tyrosine kinase antagonist. As a result, the promotion of endothelial activation, destabilization, and inflammation are promoted. Its role during angiogenesis depends on the presence of Vegf-a.
Serum levels of angiopoietin-2 expression are associated with the growth of multiple myeloma, angiogenesis, and overall survival in oral squamous cell carcinoma. Circulating angiopoietin-2 is a marker for early cardiovascular disease in children on chronic dialysis. Kaposi's sarcoma-associated herpesvirus induces rapid release of angiopoietin-2 from endothelial cells.
Angiopoietin-2 is elevated in patients with angiosarcoma.
Research has shown angiopoietin signaling to be relevant in treating cancer as well. During tumor growth, pro-angiogenic molecules and anti-angiogenic molecules are off balance. Equilibrium is disrupted such that the number of pro-angiogenic molecules are increased. Angiopoietins have been known to be recruited as well as VEGFs and platelet-derived growth factors (PDGFs). This is relevant for clinical use relative to cancer treatments because the inhibition of angiogenesis can aid in suppressing tumor proliferation.
References
External links
Angiogenesis
Growth factors | Angiopoietin | Chemistry,Biology | 1,967 |
588,193 | https://en.wikipedia.org/wiki/MOS%20%28operating%20system%29 | Mobile Operating System (MOS) is an operating system, a Soviet clone of Unix from the 1980s.
Overview
This operating system is commonly found on SM EVM minicomputers; it was also ported to ES EVM and Elbrus. MOS is also used by high-end PDP-11 clones.
Modifications of MOS include MNOS, DEMOS, and others.
See also
List of Soviet computer systems
References
Unix variants
Computing in the Soviet Union | MOS (operating system) | Technology | 96 |
23,522,900 | https://en.wikipedia.org/wiki/Phase-jitter%20modulation | Phase-jitter modulation (PJM) is a modulation method specifically designed to meet the unique requirements of passive RFID tags. It has been adopted by the high-frequency RFID Air Interface Standard ISO/IEC 18000-3 MODE 2 for high-speed bulk conveyor-fed item-level identification because of its demonstrably higher data rates. The MODE 2 PJM data rate is 423.75 kbit/s, 16 times faster than the alternative ISO/IEC 18000-3 MODE 1 system and the legacy HF system ISO/IEC 15693.
Method
PJM works by representing data as very small phase changes in the instantaneous phase of a carrier signal. PJM can be regarded as a very low-level phase-modulation (PM) signal where amplitude-modulation (AM) components are suppressed to provide a constant-modulus signal. Most of the power (greater than 99%) in a PJM signal is transmitted as an un-modulated carrier and conveys no information. Less than 1% of the transmitted power is used for conveying the modulated data.
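A toy illustration of the idea: data bits are encoded as very small jumps in the instantaneous phase of an otherwise constant-amplitude carrier, so essentially all transmitted power stays in the unmodulated carrier. A minimal sketch in Python (the sample rate, symbol length, and 1° phase step are illustrative assumptions, not values from the standard):

```python
import numpy as np

FC = 13.56e6                 # HF RFID carrier frequency in Hz
FS = 16 * FC                 # sample rate (assumed)
SAMPLES_PER_BIT = 256        # samples per data bit (assumed)
DPHI = np.deg2rad(1.0)       # tiny phase offset encoding a '1' (assumed)

def pjm_waveform(bits):
    """Constant-modulus carrier; each bit nudges the instantaneous phase by DPHI."""
    phase = np.repeat([DPHI if b else 0.0 for b in bits], SAMPLES_PER_BIT)
    t = np.arange(phase.size) / FS
    return np.cos(2 * np.pi * FC * t + phase)

sig = pjm_waveform([1, 0, 1, 1, 0])
# The envelope is exactly constant (amplitude 1), so the tag's power supply is
# undisturbed; only the small phase steps carry the data.
print(sig.size, float(sig.max()))
```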
Passive RFID tags have no internal power source and derive their power from an external power source, typically the interrogating signal generated by an RFID interrogator. The interrogation signal is required to both power and communicate with the RFID tag. For a PJM signal the un-modulated carrier component powers the passive tag and the low-level modulated component conveys data to the tag. The tag uses the un-modulated carrier signal as a phase reference for demodulating the data signal. There is no reduction in the transfer of power to the tag during PJM.
There are international and US regulations that restrict the spectrum of the transmitted interrogation signal used by any RFID system. These regulations mandate a spectral mask that restricts both the frequency and amplitude of the interrogation signal. For a PJM signal the powering signal and the modulated data signal components are decoupled allowing the spectrum of the PJM signal to be matched to the spectral mask defined under these regulations by suitable amplitude adjustment of the un-modulated carrier and encoding and/or filtering of the modulated data signal.
Applications
Primary applications are in RFID tags for use in gaming, healthcare, pharmaceuticals, document and media management.
References
External links
Infineon.com
Satovicinity.com
Logistics
Sensors
Radio-frequency identification | Phase-jitter modulation | Technology,Engineering | 495 |
43,937,224 | https://en.wikipedia.org/wiki/Pan%20Jianwei | Pan Jianwei (; born 11 March 1970) is a Chinese academic administrator and quantum physicist. He is a university administrator and professor of physics at the University of Science and Technology of China. Pan is known for his work in the field of quantum entanglement, quantum information and quantum computers. In 2017, he was named one of Nature's 10, which labelled him "Father of Quantum". He is an academician of the Chinese Academy of Sciences and the World Academy of Sciences and Executive Vice President of the University of Science and Technology of China. He also serves as one of the Vice Chairman of Jiusan Society.
Early life and education
Pan was born in Dongyang, Jinhua, Zhejiang province in 1970. In 1987, he entered the University of Science and Technology of China (USTC), from which he received his bachelor's and master's degrees. He received his PhD from the University of Vienna in Austria, where he studied and worked in the group led by Nobel prize winning physicist Anton Zeilinger.
Contributions
Pan's team demonstrated five-photon entanglement in 2004. Under his leadership, the world's first quantum satellite launched successfully in August 2016 as part of the Quantum Experiments at Space Scale, a Chinese research project. In June 2017, Pan's team used their quantum satellite to demonstrate entanglement with satellite-to-ground total summed lengths of between 1,600 km and 2,400 km, and entanglement distribution over 1,200 km between receiver stations.
In 2021, Pan led a team which built quantum computers. One of the devices, named "Zuchongzhi 2.1", was claimed to be one million times faster than its nearest competitor, Google's Sycamore.
Awards and recognition
Pan was elected to the Chinese Academy of Sciences in 2011 at the age of 41, making him one of the youngest CAS academicians. He was then elected to the World Academy of Sciences in 2012 and won the International Quantum Communication Award in the same year.
In April 2014, he was appointed Vice President of the University of Science and Technology of China.
His team's work on double quantum-teleportation was selected as the Physics World "Top Breakthrough of the Year" in 2015. His team, whose members include Peng Chengzhi, Chen Yu'ao, Lu Chaoyang, and Chen Zengbing, won the State Natural Science Award (First Class) in 2015.
In 2017, the journal Nature named Pan, along with such figures as Ann Olivarius and Scott Pruitt, one of the top 10 people who made "a significant impact in science either for good or for bad", with the label "Father of Quantum" given to Pan. The same year he won the Future Science Prize.
Pan was included in Time magazine's 100 Most Influential People of 2018.
In 2019, Pan was appointed as lead editor of Physical Review Research. He also received The Optical Society's R. W. Wood Prize.
In 2020, Pan received the ZEISS Research Award.
References
1970 births
Living people
21st-century Chinese physicists
Academic staff of the University of Science and Technology of China
Chinese academic administrators
Educators from Jinhua
Members of the Chinese Academy of Sciences
Members of the Jiusan Society
Quantum physicists
People from Dongyang
Physicists from Zhejiang
Scientists from Jinhua
TWAS fellows
University of Science and Technology of China alumni
University of Vienna alumni
Fellows of the American Physical Society
Westlake University
Fellows of Optica (society) | Pan Jianwei | Physics | 703 |
55,410,471 | https://en.wikipedia.org/wiki/CRISPR%20activation | CRISPR activation (CRISPRa) is a gene regulation technique that utilizes an engineered form of the CRISPR-Cas9 system to enhance the expression of specific genes without altering the underlying DNA sequence. Unlike traditional CRISPR-Cas9, which introduces double-strand breaks to edit genes, CRISPRa employs a modified, catalytically inactive Cas9 (dCas9) fused with transcriptional activators to target promoter or enhancer regions, thereby boosting gene transcription. This method allows for precise control of gene expression, making it a valuable tool for studying gene function, creating gene regulatory networks, and developing potential therapeutic interventions for a variety of diseases.
As with CRISPR interference, the CRISPR effector is guided to its target by a complementary guide RNA. However, CRISPR activation systems are fused to transcriptional activators to increase the expression of genes of interest. Such systems are usable for many purposes, including, but not limited to, genetic screens and overexpression of proteins of interest.
The most commonly-used effector is based on Cas9 (from Type II systems), but other effectors like Cas12a (Type V) have been used as well.
Components
dCas9
Cas9 Endonuclease Dead, also known as dead Cas9 or dCas9, is a mutant form of Cas9 whose endonuclease activity is removed through point mutations in its endonuclease domains. Similar to its unmutated form, dCas9 is used in CRISPR systems along with gRNAs to target specific genes or nucleotides complementary to the gRNA with PAM sequences that allow Cas9 to bind. Cas9 ordinarily has 2 endonuclease domains called the RuvC and HNH domains. The point mutations D10A and H840A change 2 important residues for endonuclease activity that ultimately results in its deactivation. Although dCas9 lacks endonuclease activity, it is still capable of binding to its guide RNA and the DNA strand that is being targeted because such binding is managed by other domains. This alone is often enough to attenuate if not outright block transcription of the targeted gene if the gRNA positions dCas9 in a way that prevents transcriptional factors and RNA polymerase from accessing the DNA. However, this ability to bind DNA can also be exploited for activation since dCas9 has modifiable regions, typically the N and C terminus of the protein, that can be used to attach transcriptional activators.
Guide RNA
See: Guide RNA, CRISPR
A single guide RNA (sgRNA), or gRNA, is an RNA whose roughly 20-nucleotide spacer is used to direct Cas9 or dCas9 to its target. gRNAs contain two major regions of importance for CRISPR systems: the scaffold and spacer regions. The spacer region has nucleotides that are complementary to those found on the target genes, often in the promoter region. The scaffold region is responsible for formation of a complex with (d)Cas9. Together, they bind (d)Cas9 and direct it to the gene(s) of interest. Since the spacer region of a gRNA can be modified to match any potential sequence, it gives CRISPR systems great flexibility: any gene with a sequence complementary to the spacer region can become a possible target.
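To make the targeting logic concrete, the sketch below scans a sequence for candidate 20-nt spacers that sit immediately 5′ of an NGG PAM, the motif required by S. pyogenes (d)Cas9 as noted above. A minimal sketch in Python (the promoter sequence is made up for illustration):

```python
import re

def find_spacers(seq: str, spacer_len: int = 20):
    """Yield (position, spacer, PAM) for each window followed by an NGG PAM."""
    seq = seq.upper()
    for i in range(len(seq) - spacer_len - 2):
        pam = seq[i + spacer_len : i + spacer_len + 3]
        if re.fullmatch(r"[ACGT]GG", pam):
            yield i, seq[i : i + spacer_len], pam

promoter = "TTGACAGCTAGCTCAGTCCTAGGTATAATGCTAGCACGGTAGCTTGGCACTGG"  # made-up sequence
for pos, spacer, pam in find_spacers(promoter):
    print(f"pos {pos:2d}  spacer {spacer}  PAM {pam}")
```

A real design would also check the opposite strand, screen for off-target matches elsewhere in the genome, and consider the distance of the site from the transcription start site, which matters for CRISPRa.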
Transcriptional activators
See: Transcriptional Activator, Transcription Factor
Transcriptional Activators are protein domains or whole proteins linked to dCas9 or sgRNAs that assist in the recruitment of important co-factors as well as RNA Polymerase for transcription of the gene(s) targeted by the system. In order for a protein to be made from the gene that encodes it, RNA polymerase must make RNA from the DNA template of the gene during a process called transcription. Transcriptional activators have a DNA binding domain and a domain for activation of transcription. The activation domain can recruit general transcription factors or RNA polymerase to the gene sequence. Activation domains can also function by facilitating transcription by stalled RNA polymerases, and in eukaryotes can act to move nucleosomes on the DNA or modify histones to increase gene expression. These activators can be introduced into the system through attachment to dCas9 or to the sgRNA. Some researchers have noted that the extent of transcriptional upregulation can be modulated by using multiple sites for activator attachment in one experiment and by using different variations and combinations of activators at once in a given experiment or sample.
Expression system
An expression system is required for the introduction of the gRNAs and (d)Cas9 proteins into the cells of interest. Typically employed options include but are not limited to plasmids and viral vectors such as adeno-associated virus (AAV) vector or lentivirus vector.
Specific activation systems
VP64-p65-Rta
The VP64-p65-Rta, or VPR, dCas9 activator was created by modifying an existing dCas9 activator in which a VP64 transcriptional activator is joined to the C terminus of dCas9. In the dCas9-VPR protein, the transcription factors p65 and Rta are added to the C terminus of dCas9-VP64. Therefore, all three transcription factors are targeted to the same gene. The use of three transcription factors, as opposed to VP64 alone, results in increased expression of targeted genes. When different genes were targeted by dCas9, they all showed significantly greater expression with dCas9-VPR than with dCas9-VP64. It has also been demonstrated that dCas9-VPR can be used to increase expression of multiple genes within the same cell by putting multiple sgRNAs into the same cell.
dCas9-VPR has been used to activate the neurogenin 2 and neurogenic differentiation 1 genes, resulting in differentiation of induced pluripotent stem cells into induced neurons. A study comparing dCas9 activators found that the VPR, SAM, and SunTag activators worked best with dCas9 to increase gene expression in a variety of fruit fly, mouse, and human cell types.
Synergistic activation mediator
To overcome the limitations of the dCas9-VP64 gene activation system, the dCas9-SAM system was developed to incorporate multiple transcriptional factors. Utilizing the MS2, p65, and HSF1 proteins, the dCas9-SAM system recruits various transcriptional factors that work synergistically to activate the gene of interest.
In order to assemble different transcriptional activators, the dCas9-SAM system uses a modified single guide RNA (sgRNA) that has binding sites for the MS2 protein. Hairpin aptamers are attached to the tetraloop and stem loop 2 of the sgRNA to serve as binding sites for dimerized MS2 bacteriophage coat proteins. As the hairpins are exposed outside of the dCas9-sgRNA complex, other transcriptional factors can bind to the MS2 protein without disrupting the dCas9-sgRNA complex. Thus, the MS2 protein is engineered to include the p65 and HSF1 proteins. The MS2-p65-HSF1 fusion protein interacts with dCas9-VP64 to recruit more transcriptional factors onto the promoters of the target genes.
Employing the dCas9-SAM system, Zhang et al. (2015) successfully reactivated the latent HIV gene to over-express viral proteins in HIV host cells. They were able to over-express viral proteins substantially enough to trigger apoptosis of HIV-1 latent cells due to the toxicity of the viral proteins. In another dCas9-SAM experiment, Konermann et al. (2015) identified genes in melanoma cells that confer resistance to a BRAF inhibitor by activating candidate genes via the dCas9 system. Thus, the dCas9-SAM system can further be employed to activate latent genes, develop gene therapies, and discover new genes.
SunTag
The SunTag activator system uses the dCas9 protein modified to be linked with the SunTag, a repeating polypeptide array that can recruit multiple copies of antibodies. By attaching transcription factors to these antibodies, the SunTag-dCas9 activating complex amplifies its recruitment of transcription factors. As in other dCas9 systems, an sgRNA guides the dCas9 protein to its target gene.
Tanenbaum et al. (2014) are credited with creating the dCas9 SunTag system. They employed GCN4 antibodies bound to the transcription factor VP64. In order to transport the antibodies to the nuclei of the cells, an NLS tag was attached, and sfGFP was included to confirm nuclear localization by visualization. The resulting GCN4-sfGFP-NLS-VP64 protein was thus developed to interact with the dCas9 SunTag system. The antibodies successfully bound to the SunTag polypeptides and activated the target CXCR4 gene in K562 cell lines. Compared with the dCas9-VP64 activation complex, CXCR4 expression was increased 5-25 times in K562 cell lines. Not only was CXCR4 over-expressed, but the CXCR4 proteins were also functional, as shown by their activity in a transwell migration assay. The dCas9-SunTag system can therefore be used to activate genes that are latently present, such as viral genes.
Applications
The dCas9 activation system allows a desired gene or multiple genes in the same cell to be expressed. It is possible to study genes involved in a certain process using a genome wide screen that involves activating expression of genes. Examining which sgRNAs yield a phenotype suggests which genes are involved in a specific pathway. The dCas9 activation system can be used to control exactly which cells are activated and at what time activation occurs. dCas9 constructs have been made that turn on a dCas9-activator fusion protein in the presence of light or chemicals. Cells can also be reprogrammed or differentiated from one cell type into another by increasing the expression of certain genes important for the formation or maintenance of a cell type.
Greater control over gene expression
One research group used a system in which dCas9 was fused to a particular domain, CIB1. When blue light is shone on the cell, the cryptochrome 2 (Cry2) domain binds to CIB1. The Cry2 domain is fused to a transcriptional activator, so blue light targets the activator to the spot where dCas9 is bound. The use of light allows a great deal of control over when the targeted gene is activated. Removing the light from the cell results in only dCas9 remaining at the target gene, so expression is not increased. In this way, the system is reversible. A similar system was developed using chemical control. In this system, dCas9 recruits an MS2 fusion protein that contains the domain FKBP. In the presence of the chemical RAP, an FRB domain fused to a chromatin-modifying complex binds to FKBP. Whenever RAP is added to the cells, a specific chromatin modifier complex can be targeted to the gene. That allows scientists to examine how specific chromatin modifications affect the expression of a gene. The dCas9-VPR system is used as an activator by targeting it to the promoter of a gene upstream of the coding region. A study used various sgRNAs to target different portions of the gene, finding that dCas9-VPR can act as an activator or a repressor depending on where it binds: sgRNAs targeting the promoter allow dCas9-VPR to increase expression, while sgRNAs targeting the coding region of the gene cause dCas9-VPR to decrease expression.
Genome wide activation
The versatility of sgRNAs allows dCas9 activators to increase the expression of any gene within an organism's genome. That could be used to increase expression of a protein-coding gene or a transcribed RNA. A paper demonstrated that genome-wide activation could be used to determine which proteins are involved in mediating resistance to a specific drug. Another paper used genome-wide activation of long, noncoding RNAs and observed that increasing the expression of certain long noncoding RNAs conferred resistance to the drug vemurafenib. In both cases, the cells that survive the drug can be studied to determine which sgRNAs they contain. That allows researchers to determine which gene was activated in each surviving cell, which suggests which genes are important for resistance to that drug.
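The readout of such a survival screen is typically a comparison of sgRNA abundances before and after selection. The following Python sketch illustrates the basic log2 fold-change calculation; all counts and sgRNA names are hypothetical, and real analyses additionally normalize for sequencing depth and aggregate scores across the several sgRNAs targeting each gene.

```python
# Illustrative sketch of scoring a CRISPRa survival screen: sgRNAs enriched
# in the drug-surviving population point to genes whose activation confers
# resistance.  All counts and sgRNA names below are made up.
import math

# sgRNA -> (reads in initial library, reads in surviving cells)
counts = {
    "sg_geneA_1": (520, 9800),
    "sg_geneA_2": (480, 7400),
    "sg_control_1": (510, 430),
    "sg_control_2": (495, 560),
}

def log2_fold_change(before: int, after: int, pseudocount: float = 1.0) -> float:
    """Log2 change in sgRNA abundance; the pseudocount avoids division by
    zero for sgRNAs that drop out of the population entirely."""
    return math.log2((after + pseudocount) / (before + pseudocount))

# Rank sgRNAs by enrichment; strongly positive scores suggest that the
# targeted gene contributes to drug resistance when activated.
for name, (before, after) in sorted(counts.items(),
                                    key=lambda kv: -log2_fold_change(*kv[1])):
    print(f"{name}\t{log2_fold_change(before, after):+.2f}")
```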
Use in organisms
A dCas9 fusion with VP64, p65, and HSF1 (heat shock factor 1) allowed researchers to target genes in Arabidopsis thaliana and increase transcription to a similar level as when the gene itself is inserted into the plant's genome. For one of the two genes tested, the dCas9 activator changed the number and size of leaves and made the plants better able to handle drought. The authors conclude that the dCas9 activator can create phenotypes in plants that are similar to those observed when a transgene is inserted for overexpression. Researchers have also used multiple guide RNAs to target the dCas9 activation system to multiple genes in a mouse strain in which dCas9 can be turned on in specific cell types using the Cre recombinase system. They used this targeted increase in the expression of several genes to examine the processes involved in regeneration and carcinomas of the liver.
References
Genetic engineering
Genome editing | CRISPR activation | Chemistry,Engineering,Biology | 2,882 |
64,919,624 | https://en.wikipedia.org/wiki/GamEvac-Combi | GamEvac-Combi is a heterologous VSV- and Ad5-vectored Ebola vaccine. There is also a version called GamEvac, which is a homologous Ad5-vectored vaccine. GamEvac-Combi was developed by the Gamaleya Research Institute of Epidemiology and Microbiology. The vaccine has been licensed in Russia for emergency use on the basis of Phase 1 and Phase 2 clinical trials.
Description
The vaccine consists of live-attenuated recombinant vesicular stomatitis virus (VSV) and adenovirus serotype-5 (Ad5) expressing Ebola envelope glycoprotein. The vaccine is targeted against the Makona variant of Ebola that was circulating in West Africa during the 2013-2016 outbreak.
History
GamEvac-Combi was licensed by the Ministry of Health of the Russian Federation for emergency use in the territory of the Russian Federation in December 2015. The emergency license was based on Phase I and II clinical data of safety and immunogenicity.
See also
Gam-COVID-Vac
References
Vaccines
Science and technology in Russia
Ebola
Gamaleya Research Institute of Epidemiology and Microbiology | GamEvac-Combi | Biology | 253 |
44,328,955 | https://en.wikipedia.org/wiki/Churn%20turbulent%20flow | Churn turbulent flow is a two-phase gas/liquid flow regime characterized by a highly-agitated flow where gas bubbles are sufficient in numbers to both interact with each other and, while interacting, coalesce to form larger distorted bubbles with unique shapes and behaviors in the system. This flow regime is created when there is a large gas fraction in a system with a high gas and low liquid velocity. It is an important flow regime to understand and model because of its predictive value in nuclear reactor vessel boiling flow.
Occurrence
A flow in which the number of bubbles is low is called ideally-separated bubble flow; the bubbles do not interact with each other. As the number of bubbles increases, they begin to collide with each other. A situation then arises in which they tend to coalesce into cap bubbles, and the new flow pattern formed is called churn turbulent flow. The bubbles occurring in such a flow can be classified as small, large, or distorted. The small bubbles are generally spherical or elliptical and are found in greatest concentration in the wake of large and distorted bubbles and close to the walls. Large, ellipsoidal or cap bubbles can be found in the core region of the flow, as can the distorted bubbles with their highly deformed interfaces.
Churn turbulent flow is commonly encountered in industrial applications. A typical example is boiling flow in nuclear reactors.
Numerical simulation of bubble column flows in churn turbulent regime
Numerical simulations of cylindrical bubble columns operating in the churn-turbulent regime have been carried out using an Euler–Euler approach incorporated with the RNG k–ε model for liquid turbulence. Several approaches have been carried out, including single-sized bubble modeling, double-sized bubble modeling, and the multiple sizes group modeling (MUSIG).
Mass-conserving formulations of the breakup and coalescence rates were used in the computation of bubble size distributions. For single-size modelling the Schiller–Naumann drag force was used, and for the MUSIG modelling the Ishii–Zuber drag force was used; an empirical drag formulation was used for the double-size bubble model. The simulated time-averaged axial velocity and gas holdup obtained with the three models were compared with experimental data reported in the literature. The comparison makes clear that only MUSIG models including a lift force can replicate the measured radial distribution of gas holdup in the fully developed flow regime. The inhomogeneous MUSIG model performs slightly better than the other models in predicting axial liquid velocity. All simulations used the RNG k–ε model, and the results showed that this version of the k–ε model yields a comparatively high turbulence dissipation rate and a high bubble breakup rate, and hence a rational bubble size distribution, without ad hoc manipulation of the breakup rates. Mutual effects of drag force, mean bubble size, and turbulence characteristics are evident in the simulation results: an increase in the drag force decreases the relative velocity between the two phases, which can in turn decrease k and ε; low breakup rates result in a large Sauter diameter, which is directly connected to the turbulence dissipation rate; and the drag force is in turn directly influenced by changes in the Sauter diameter.
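Since the discussion above turns on the Sauter diameter, a minimal sketch may help: for a discrete bubble-size distribution, the Sauter mean diameter is d32 = Σnᵢdᵢ³ / Σnᵢdᵢ², the diameter of a sphere with the same volume-to-surface ratio as the whole population. The size classes below are illustrative values, not data from the cited simulations.

```python
# Sauter mean diameter of a discrete bubble size distribution:
#   d32 = sum(n_i * d_i**3) / sum(n_i * d_i**2)
# The size classes and counts here are made-up illustrative values.
sizes_mm = [1.0, 2.0, 4.0, 8.0]   # bubble diameter of each class
counts = [500, 300, 150, 50]      # number of bubbles in each class

numerator = sum(n * d ** 3 for n, d in zip(counts, sizes_mm))
denominator = sum(n * d ** 2 for n, d in zip(counts, sizes_mm))
d32 = numerator / denominator
print(f"Sauter mean diameter: {d32:.2f} mm")

# Shifting the population toward large bubbles (low breakup, high
# coalescence) raises d32, which feeds back into the drag acting on
# each bubble class.
```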
References
Montoya, G.; Liao, Y.; Lucas, D.; Krepper, E. "Analysis and Applications of a Two-Fluid Multi-Field Hydrodynamic Model for Churn-Turbulent Flows", 21st International Conference on Nuclear Engineering – ICONE 21. China (2013)
Montoya, G.; Baglietto, E.; Lucas, D.; Krepper, E. "A Generalized Multi-Field Two-Fluid Approach for Treatment of Multi-Scale Interfacial Structures in High Void-Fraction Regimes", MIT Energy Night 2013. Cambridge, Massachusetts, USA (2013)
Montoya, G.; Lucas, D.; Krepper, E.; Hänsch, S.; Baglietto, E. "Analysis and Applications of a Generalized Multi-Field Two-Fluid Approach for Treatment of Multi-Scale Interfacial Structures in High Void-Fraction Regimes", 2014 International Congress on Advances in Nuclear Power Plants – ICAPP 2014. USA (2014)
Montoya, G.; Baglietto, E.; Lucas, D.; Krepper, E.; Hoehne, T. "Comparative Analysis of High Void Fraction Regimes using an Averaging Euler-Euler Multi-Fluid Approach and a Generalized Two-Phase Flow (GENTOP) Concept", 22nd International Conference on Nuclear Engineering – ICONE 22. Czech Republic (2014)
Montoya, G.; Baglietto, E.; Lucas, D.; Krepper, E. "Development and Analysis of a CMFD Generalized Multi-Field Model for Treatment of Different Interfacial Scales in Churn-Turbulent and Transitional Flows", CFD4NRS-5 – Application of CFD/CMFD Codes to Nuclear Reactor Safety Design and their Experimental Validation. Switzerland (2014)
https://www.hzdr.de/db/!Publications?pSelTitle=18077&pSelMenu=-1&pNid=3016
Flow regimes
Turbulence models | Churn turbulent flow | Chemistry | 1,222 |
60,897,141 | https://en.wikipedia.org/wiki/International%20Biodeterioration%20and%20Biodegradation%20Society | The International Biodeterioration and Biodegradation Society (IBBS) is a scientific society with an international membership. It is a charity registered in the UK. IBBS belongs to the Federation of European Microbiological Societies (FEMS), along with national organizations from European countries and appears in the Yearbook of International Organisations On-line, published by the Union of International Associations. The aim of IBBS is to promote and spread knowledge of Biodeterioration and Biodegradation. Conferences are arranged on specific topics and every three years an International Symposium covering a wide range of research in these scientific areas is organized; the last (IBBS17) was held in Manchester, UK. Members can apply for various grants or bursaries. The Society's journal, International Biodeterioration and Biodegradation, is published by Elsevier.
Aims and early history
The International Biodeterioration and Biodegradation Society (IBBS) is a learned scientific society with a worldwide membership coming from academia and industry. Its aims are to promote the sciences of Biodeterioration and Biodegradation by means of international meetings, conferences and publications. It appears in the Yearbook of International Organisations On-line, published by the Union of International Associations in cooperation with the United Nations Economic and Social Council. It began as the Biodeterioration Society. The draft constitution of the Society was agreed in 1969 and the first annual general meeting was held on 9 July 1971. The aim of the Society was to promote the science of Biodeterioration, which is defined as any undesirable change in the properties of a material caused by the vital activities of living organisms. The economic importance of biodeterioration was discussed in an article by Dennis Allsopp, a former president and secretary of the Society. The first Biodeterioration Symposium was held prior to the inauguration of the Society, in Southampton, UK, in 1968, and a copy of the abstracts is available. The Second International Biodeterioration Symposium, and the first to be held under the auspices of the newly-formed Society, was held in Lunteren, The Netherlands, in September, 1971. The Third International Symposium, held at the University of Rhode Island, USA, in 1975, was designated the "Third International Biodegradation Symposium", this being the more recognized word in the USA. It was not until the 8th Symposium, however, in Windsor, Ontario, in 1990, that the term was reintroduced. Since then, all triennial events have been entitled "International Biodeterioration and Biodegradation Symposia" and the Society adopted the word into its name, becoming the International Biodeterioration and Biodegradation Society, or IBBS.
Governance and publications
IBBS is a charity registered in the UK. It has an executive body, the Council, with elected honorary officers, which meets three times each year. The Honorary Scientific Programme Officers collaborate on the organization of conferences and small meetings suggested by members. A Newsletter is produced under the aegis of its Honorary Managing Editor and emailed to members three times each year. IBBS has no physical headquarters, any physical records and publications being kept by Council members. Back issues of the Society's first publication, International Biodeterioration Bulletin (1965-1986, now discontinued), have been converted into digital format and made freely available on the website. From 1984, the Journal was published by the Commonwealth Agricultural Bureaux (CAB) in the UK, under ISSN 0265-3036. In 1987, the Society agreed with Elsevier that the journal "International Biodeterioration and Biodegradation" (ISSN 0964-8305) would be published by them and acknowledged as the Official Journal of IBBS. Reduced subscriptions are available to IBBS members.
Membership and meetings
The Society is a member of FEMS (Federation of European Microbiological Societies), but its members are not restricted to Europe. IBBS has a diverse membership with scientists from all over the world and with approximately equal numbers of male and female members. "Country Representatives" have the role of promoting IBBS in their countries and acting as a focal point for members in that area. Meetings have been held in the UK, USA, Austria, Canada, Czech Republic, France, Germany, Holland, India, Italy, Poland and Spain, with overarching international symposia held every 3 years. The last triennial International Biodeterioration and Biodegradation Symposium (IBBS17) was held in Manchester, UK, in September, 2017. The 2020 Symposium was delayed because of the COVID outbreak and was held on-line in September, 2021, www.ibbs18.org.
References
Biodegradation
British biology societies
International scientific organizations
Microbiology societies | International Biodeterioration and Biodegradation Society | Chemistry | 998 |
47,016,009 | https://en.wikipedia.org/wiki/RD-0243 | The RD-0243 is a propulsion module composed of an RD-0244 main engine and an RD-0245 vernier thruster. Both are liquid-fuel rocket engines burning a hypergolic mixture of unsymmetrical dimethylhydrazine (UDMH) fuel with dinitrogen tetroxide (N2O4) oxidizer. The RD-0244 main engine operates on the oxidizer-rich staged combustion cycle, while the vernier RD-0245 uses the simpler gas generator cycle. Since volume is at a premium on submarine launches, the module is submerged in the propellant tank. It was developed between 1977 and 1985 and had its first launch on December 27, 1981. Originally developed for the RSM-54, it was later used for the Shtil'.
See also
R-29RM Shtil
Shtil'
Rocket engine using liquid fuel
References
External links
KbKhA official information on the engine. (Archived)
Encyclopedia Astronautica information on the propulsion module. (Archived)
Rocket engines of the Soviet Union
Rocket engines using hypergolic propellant
Sea launch to orbit
Rocket engines using the staged combustion cycle
KBKhA rocket engines | RD-0243 | Astronomy | 244 |
61,060,210 | https://en.wikipedia.org/wiki/Melanie%20Leng | Melanie Jane Leng is a Professor of Isotope Geosciences at the University of Nottingham working on isotopes, palaeoclimate and geochemistry. She also serves as the Chief Scientist for Environmental Change Adaptation and Resilience at the British Geological Survey and Director of the Centre for Environmental Geochemistry, a collaboration between the University of Nottingham and the British Geological Survey. For many years (till 2019) she has been the UK convenor and representative of the UK geoscience community on the International Continental Scientific Drilling Program.
Early life and education
Leng grew up in Scarborough, North Yorkshire. She spent her childhood on the cliffs and beaches of the Lower Jurassic. Leng studied geology for GCSE and A Level. At Sixth Form College she took a field trip to Ravenscar and described finding an ammonite which hooked her into geology. She studied for a BSc in Earth Science at Oxford Polytechnic, gained her PhD at Aberystwyth University in 1990, then moved to the British Geological Survey to work in the isotope laboratory.
Research and career
Leng has several roles; her most recent is Chief Scientist for Environmental Change Adaptation and Resilience at the British Geological Survey. She is also Director of the Centre for Environmental Geochemistry, a collaboration between the British Geological Survey and the University of Nottingham, where she leads research around environmental change, human impact, food security, and resource management. Leng has been involved in deep drilling as part of the International Continental Scientific Drilling Program, and worked in Lake Ohrid in Macedonia and Lake Chala in East Africa. She also heads the Stable Isotope Facility at the British Geological Survey, which is part of the National Environmental Isotope Facility. Stable isotopes can be used to better understand climate change and human-landscape interactions, with increasing importance on the Anthropocene and the modern calibration period; to trace modern pollution; and to understand the hydrological cycle, especially in areas suffering human impact. Leng takes part in expeditions, most recently the Natural Environment Research Council (NERC) mission called Ocean Regulation of Climate by Heat and Carbon Sequestration and Transports (ORCHESTRA). She actively blogs about her research.
Leng serves on the editorial board of the journals Quaternary Research, Quaternary Science Reviews, Scientific Reports and the Journal of Paleolimnology.
She has written several articles about successfully undertaking a PhD.
Awards and honours
Leng was appointed a Member of the Order of the British Empire (MBE) in the 2019 Birthday Honours.
Leng received an Honorary Doctor of Science (DSc) degree from Oxford Brookes University in 2022.
References
Year of birth missing (living people)
Living people
Members of the Order of the British Empire
Geochemists
People from Scarborough, North Yorkshire
Alumni of Aberystwyth University | Melanie Leng | Chemistry | 567 |
8,729,683 | https://en.wikipedia.org/wiki/Stirling%20numbers%20and%20exponential%20generating%20functions%20in%20symbolic%20combinatorics | The use of exponential generating functions (EGFs) to study the properties of Stirling numbers is a classical exercise in combinatorial mathematics and possibly the canonical example of how symbolic combinatorics is used. It also illustrates the parallels in the construction of these two types of numbers, lending support to the binomial-style notation that is used for them.
This article uses the coefficient extraction operator $[z^n]$ for formal power series, as well as the (labelled) operators $\operatorname{CYC}$ (for cycles) and $\operatorname{SET}$ (for sets) on combinatorial classes, which are explained on the page for symbolic combinatorics. Given a combinatorial class, the cycle operator creates the class obtained by placing objects from the source class along a cycle of some length, where cyclical symmetries are taken into account, and the set operator creates the class obtained by placing objects from the source class in a set (symmetries from the symmetric group, i.e. an "unstructured bag"). The two combinatorial classes (shown without additional markers) are
permutations (for unsigned Stirling numbers of the first kind):
$$\mathcal{P} = \operatorname{SET}(\operatorname{CYC}(\mathcal{Z})),$$
and
set partitions into non-empty subsets (for Stirling numbers of the second kind):
$$\mathcal{B} = \operatorname{SET}(\operatorname{SET}_{\ge 1}(\mathcal{Z})),$$
where $\mathcal{Z}$ is the singleton class.
Warning: The notation used here for the Stirling numbers is not that of the Wikipedia articles on Stirling numbers; square brackets denote the signed Stirling numbers here.
Stirling numbers of the first kind
The unsigned Stirling numbers of the first kind count the number of permutations of [n] with k cycles. A permutation is a set of cycles, and hence the set of permutations is given by
$$\mathcal{P} = \operatorname{SET}(\mathcal{U} \times \operatorname{CYC}(\mathcal{Z})),$$
where the singleton $\mathcal{U}$ marks cycles. This decomposition is examined in some detail on the page on the statistics of random permutations.
Translating to generating functions we obtain the mixed generating function of the unsigned Stirling numbers of the first kind:
$$G(z, u) = \exp\left(u \log \frac{1}{1-z}\right) = \left(\frac{1}{1-z}\right)^u = \sum_{n=0}^{\infty} \sum_{k=0}^{n} \left|\left[\begin{matrix} n \\ k \end{matrix}\right]\right| u^k \, \frac{z^n}{n!}.$$
Now the signed Stirling numbers of the first kind are obtained from the unsigned ones through the relation
$$\left[\begin{matrix} n \\ k \end{matrix}\right] = (-1)^{n-k} \left|\left[\begin{matrix} n \\ k \end{matrix}\right]\right|.$$
Hence the generating function of these numbers is
$$H(z, u) = G(-z, -u) = (1+z)^u = \sum_{n=0}^{\infty} \sum_{k=0}^{n} \left[\begin{matrix} n \\ k \end{matrix}\right] u^k \, \frac{z^n}{n!}.$$
A variety of identities may be derived by manipulating this generating function:
$$(1+z)^u = e^{u \log(1+z)} = \sum_{k=0}^{\infty} \frac{\left(\log(1+z)\right)^k}{k!} u^k = \sum_{n=0}^{\infty} \frac{z^n}{n!} \sum_{k=0}^{n} \left[\begin{matrix} n \\ k \end{matrix}\right] u^k.$$
In particular, the order of summation may be exchanged, and derivatives taken, and then z or u may be fixed.
Finite sums
A simple sum is
$$\sum_{k=0}^{n} (-1)^k \left[\begin{matrix} n \\ k \end{matrix}\right] = (-1)^n \, n!.$$
This formula holds because the exponential generating function of the sum is
$$H(z, -1) = \frac{1}{1+z} = \sum_{n=0}^{\infty} (-1)^n \, n! \, \frac{z^n}{n!}.$$
Infinite sums
Some infinite sums include
$$\sum_{n \ge k} \left[\begin{matrix} n \\ k \end{matrix}\right] \frac{z^n}{n!} = \frac{\left(\log(1+z)\right)^k}{k!},$$
valid for $|z| < 1$ (the singularity nearest to $z = 0$ of $\log(1+z)$ is at $z = -1$).
This relation holds because
$$H(z, u) = e^{u \log(1+z)} = \sum_{k=0}^{\infty} \frac{\left(\log(1+z)\right)^k}{k!} u^k.$$
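These identities are easy to check numerically. The short Python sketch below (an addition, not part of the original exposition) expands the falling factorial $u(u-1)\cdots(u-n+1) = n!\,[z^n](1+z)^u$ with exact integer arithmetic, recovers the signed Stirling numbers of the first kind as its coefficients in powers of u, and verifies the simple sum above.

```python
# Numerical check: since n! [z^n] (1+z)^u = u(u-1)...(u-n+1), expanding
# this falling factorial in powers of u yields the signed Stirling
# numbers of the first kind.
import math

def signed_stirling1_row(n: int) -> list[int]:
    """Coefficients [c_0, ..., c_n] of u^k in u(u-1)...(u-n+1)."""
    poly = [1]                                   # the empty product
    for i in range(n):                           # multiply by (u - i)
        shifted = [0] + poly                     # poly * u
        scaled = [-i * c for c in poly] + [0]    # poly * (-i)
        poly = [a + b for a, b in zip(shifted, scaled)]
    return poly

def signed_stirling1(n: int, k: int) -> int:
    """Same numbers from the recurrence s(n+1,k) = s(n,k-1) - n*s(n,k)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0 or k > n:
        return 0
    return signed_stirling1(n - 1, k - 1) - (n - 1) * signed_stirling1(n - 1, k)

for n in range(7):
    row = signed_stirling1_row(n)
    assert row == [signed_stirling1(n, k) for k in range(n + 1)]
    # the finite sum above: sum_k (-1)^k s(n,k) = (-1)^n n!
    assert sum((-1) ** k * c for k, c in enumerate(row)) == (-1) ** n * math.factorial(n)
print("all checks passed")
```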
Stirling numbers of the second kind
These numbers count the number of partitions of [n] into k nonempty subsets. First consider the total number of partitions, i.e. $B_n$, where
$$B_n = \sum_{k=1}^{n} \left\{\begin{matrix} n \\ k \end{matrix}\right\} \quad\text{and}\quad B_0 = 1,$$
i.e. the Bell numbers. The Flajolet–Sedgewick fundamental theorem applies (labelled case).
The set of partitions into non-empty subsets is given by ("set of non-empty sets of singletons")
$$\mathcal{B} = \operatorname{SET}(\operatorname{SET}_{\ge 1}(\mathcal{Z})).$$
This decomposition is entirely analogous to the construction of the set of permutations from cycles, which is given by
$$\mathcal{P} = \operatorname{SET}(\operatorname{CYC}(\mathcal{Z}))$$
and yields the Stirling numbers of the first kind. Hence the name "Stirling numbers of the second kind."
The decomposition is equivalent to the EGF
$$B(z) = \exp\left(e^z - 1\right).$$
Differentiate to obtain
$$\frac{d}{dz} B(z) = e^z \exp\left(e^z - 1\right) = e^z B(z),$$
which implies that
$$B_{n+1} = \sum_{k=0}^{n} \binom{n}{k} B_k,$$
by convolution of exponential generating functions and because differentiating an EGF drops the first coefficient and shifts $B_{n+1}$ to $z^n/n!$.
The EGF of the Stirling numbers of the second kind is obtained by marking every subset that goes into the partition with the term $\mathcal{U}$, giving
$$\mathcal{B} = \operatorname{SET}(\mathcal{U} \times \operatorname{SET}_{\ge 1}(\mathcal{Z})).$$
Translating to generating functions, we obtain
$$B(z, u) = \exp\left(u \left(e^z - 1\right)\right) = \sum_{n=0}^{\infty} \sum_{k=0}^{n} \left\{\begin{matrix} n \\ k \end{matrix}\right\} u^k \, \frac{z^n}{n!}.$$
This EGF yields the formula for the Stirling numbers of the second kind:
$$\left\{\begin{matrix} n \\ k \end{matrix}\right\} = n! \, [z^n] \frac{\left(e^z - 1\right)^k}{k!},$$
or
$$\left\{\begin{matrix} n \\ k \end{matrix}\right\} = \frac{n!}{k!} \, [z^n] \sum_{j=0}^{k} \binom{k}{j} (-1)^{k-j} e^{jz},$$
which simplifies to
$$\left\{\begin{matrix} n \\ k \end{matrix}\right\} = \frac{1}{k!} \sum_{j=0}^{k} \binom{k}{j} (-1)^{k-j} j^n.$$
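As a quick sanity check (an addition, not part of the original article), the closed form can be verified against the standard recurrence S(n+1, k) = k·S(n, k) + S(n, k−1), and summing over k recovers the Bell numbers via the convolution recurrence derived above.

```python
# Sanity check: the inclusion-exclusion closed form above agrees with the
# recurrence S(n+1,k) = k*S(n,k) + S(n,k-1), and summing over k recovers
# the Bell numbers via B_{n+1} = sum_k C(n,k) B_k.
from math import comb, factorial

def stirling2_formula(n: int, k: int) -> int:
    return sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1)) // factorial(k)

def stirling2_rec(n: int, k: int) -> int:
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0 or k > n:
        return 0
    return k * stirling2_rec(n - 1, k) + stirling2_rec(n - 1, k - 1)

bell = [1]                                        # B_0 = 1
for n in range(8):
    assert all(stirling2_formula(n, k) == stirling2_rec(n, k) for k in range(n + 1))
    assert sum(stirling2_formula(n, k) for k in range(n + 1)) == bell[n]
    bell.append(sum(comb(n, k) * bell[k] for k in range(n + 1)))  # B_{n+1}
print("all checks passed")
```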
References
Ronald Graham, Donald Knuth, Oren Patashnik (1989): Concrete Mathematics, Addison–Wesley,
D. S. Mitrinovic, Sur une classe de nombre relies aux nombres de Stirling, C. R. Acad. Sci. Paris 252 (1961), 2354–2356.
A. C. R. Belton, The monotone Poisson process, in: Quantum Probability (M. Bozejko, W. Mlotkowski and J. Wysoczanski, eds.), Banach Center Publications 73, Polish Academy of Sciences, Warsaw, 2006
Milton Abramowitz and Irene A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, USGPO, 1964, Washington DC,
Enumerative combinatorics | Stirling numbers and exponential generating functions in symbolic combinatorics | Mathematics | 888 |
70,081,751 | https://en.wikipedia.org/wiki/Aircraft%20Research%20Association | The Aircraft Research Association (ARA) is an aerodynamics research institute in the north-west of Bedford.
History
The association was founded on 22 January 1952. 14 main British aviation companies funded £1.25m to build a large wind tunnel.
It was first proposed in 1953 to build the site at Stevington, north-east of Bedford. By March 1953, the current site was chosen.
Construction
Work started on Monday 7 September 1953.
The wind tunnel was fabricated by Moreland Hayne of east London.
The transonic tunnel first ran in April 1956.
Visits
The Duke of Edinburgh visited on the morning of Friday 4 May 1956. He had been planning to land by helicopter in the south-east of Bedford, and to be driven from there to the site by car, but weather conditions were unsuitable.
Structure
The site has the largest transonic wind tunnel in the UK, known as the TWT, with speeds up to Mach 1.4. It is powered by a Sulzer axial compressor driven by a 25,000 hp electric motor.
Wind tunnels
Supersonic tunnel, Mach 1.4 - 3.5, built in 1958
Two hypersonic tunnels
Mach 4-5 tunnel, built in 1965
Mach 7 tunnel, built in 1968
Research
Projects worked on include Concorde, the Harrier and most Airbus aircraft. The Rolls-Royce RB211 was tested there.
The site now works with RUAG of Switzerland.
See also
Aerospace Technology Institute, in Bedfordshire, launched in 2012 by the government as the UK Aerodynamics Centre
British Hydromechanics Research Association (BHRA), also in Bedfordshire
UK Aerospace Research Consortium (UK-ARC), formed in 2018, an alliance of university departments
List of wind tunnels
References
External links
ARA Bedford
1952 establishments in the United Kingdom
Aerospace engineering organizations
Aerospace industry in the United Kingdom
Engineering research institutes
Organisations based in Bedford
Research institutes established in 1952
Science and technology in Bedfordshire
Technology consortia
Wind tunnels | Aircraft Research Association | Engineering | 385 |
14,391,787 | https://en.wikipedia.org/wiki/Bayes%20linear%20statistics | Bayes linear statistics is a subjectivist statistical methodology and framework. Traditional subjective Bayesian analysis is based upon fully specified probability distributions, which are very difficult to specify at the necessary level of detail. Bayes linear analysis attempts to solve this problem by developing theory and practice for using partially specified probability models. Bayes linear in its current form has been primarily developed by Michael Goldstein. Mathematically and philosophically it extends Bruno de Finetti's Operational Subjective approach to probability and statistics.
Motivation
Consider first a traditional Bayesian analysis where you expect to shortly know D and you would like to know more about some other observable B. In the traditional Bayesian approach it is required that every possible outcome is enumerated, i.e. every possible outcome is the cross product of the partitions of a set of B and D. If represented on a computer, where B requires n bits and D m bits, then the number of states required is $2^{n+m}$. The first step in such an analysis is to determine a person's subjective probabilities, e.g. by asking about their betting behaviour for each of these outcomes. When we learn D, conditional probabilities for B are determined by the application of Bayes' rule.
Practitioners of subjective Bayesian statistics routinely analyse datasets where the size of this set is large enough that subjective probabilities cannot be meaningfully determined for every element of D × B. This is normally accomplished by assuming exchangeability and then using parameterized models with prior distributions over parameters, appealing to de Finetti's theorem to justify that this produces valid operational subjective probabilities over D × B. The difficulty with such an approach is that the validity of the statistical analysis requires that the subjective probabilities are a good representation of an individual's beliefs, yet this method results in a very precise specification over D × B, and it is often difficult to articulate what it would mean to adopt such a belief specification.
In contrast to the traditional Bayesian paradigm, Bayes linear statistics, following de Finetti, uses prevision or subjective expectation as a primitive; probability is then defined as the expectation of an indicator variable. Instead of specifying a subjective probability for every element in the partition D × B, the analyst specifies subjective expectations for just a few quantities that they are interested in or feel knowledgeable about. Then, instead of conditioning, an adjusted expectation is computed by a rule that generalizes Bayes' rule and is based upon expectation.
The use of the word linear in the title refers to de Finetti's arguments that probability theory is a linear theory (de Finetti argued against the more common measure theory approach).
Example
In Bayes linear statistics, the probability model is only partially specified, and it is not possible to calculate conditional probability by Bayes' rule. Instead Bayes linear suggests the calculation of an Adjusted Expectation.
To conduct a Bayes linear analysis it is necessary to identify some values that you expect to know shortly by making measurements D and some future value which you would like to know B. Here D refers to a vector containing data and B to a vector containing quantities you would like to predict. For the following example B and D are taken to be two-dimensional vectors, i.e.
$$B = (Y_1, Y_2), \quad D = (X_1, X_2).$$
In order to specify a Bayes linear model it is necessary to supply expectations for the vectors B and D, and to also specify the correlation between each component of B and each component of D.
For example the expectations are specified as
$$E(Y_1) = 5, \quad E(Y_2) = 3, \quad E(X_1) = 5, \quad E(X_2) = 3,$$
and the covariance matrix is specified as
$$\begin{array}{c|cccc} & X_1 & X_2 & Y_1 & Y_2 \\ \hline X_1 & 1 & u & \gamma & \gamma \\ X_2 & u & 1 & \gamma & \gamma \\ Y_1 & \gamma & \gamma & 1 & v \\ Y_2 & \gamma & \gamma & v & 1 \end{array}$$
The repetition in this matrix has some interesting implications, to be discussed shortly.
An adjusted expectation is a linear estimator of the form
$$c_0 + c_1 X_1 + c_2 X_2,$$
where $c_0$, $c_1$ and $c_2$ are chosen to minimise the prior expected loss for the observations, i.e. $Y_1$ and $Y_2$ in this case. That is, for $Y_1$, the quantity
$$E\left(\left[Y_1 - c_0 - c_1 X_1 - c_2 X_2\right]^2\right)$$
is minimised over the choice of $c_0$, $c_1$ and $c_2$, yielding the least prior expected loss in estimating $Y_1$.
In general the adjusted expectation is calculated with
$$E_D(X) = \sum_{i=0}^{k} h_i D_i,$$
setting $h_0, \ldots, h_k$ to minimise
$$E\left(\left[X - \sum_{i=0}^{k} h_i D_i\right]^2\right),$$
where $D_0 = 1$ accounts for the constant term. From a proof provided in (Goldstein and Wooff 2007) it can be shown that
$$E_D(X) = E(X) + \operatorname{Cov}(X, D)\operatorname{Var}(D)^{-1}\left(D - E(D)\right).$$
For the case where $\operatorname{Var}(D)$ is not invertible, the Moore–Penrose pseudoinverse should be used instead.
Furthermore, the adjusted variance of the variable X after observing the data D is given by
$$\operatorname{Var}_D(X) = \operatorname{Var}(X) - \operatorname{Cov}(X, D)\operatorname{Var}(D)^{-1}\operatorname{Cov}(D, X).$$
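As a minimal sketch of these two formulas, the adjustment can be implemented in a few lines of numpy, using the pseudoinverse as recommended above. The prior moment specifications below are illustrative placeholders, not a worked example from the literature.

```python
# Minimal sketch (added for illustration) of the Bayes linear adjustment.
import numpy as np

def bayes_linear_adjust(e_b, var_b, e_d, var_d, cov_bd, d_obs):
    """Return the adjusted expectation E_D(B) and adjusted variance Var_D(B).

    cov_bd is the Cov(B, D) block; the Moore-Penrose pseudoinverse is used
    so that a singular Var(D) is handled as recommended in the text."""
    var_d_pinv = np.linalg.pinv(var_d)
    e_adj = e_b + cov_bd @ var_d_pinv @ (d_obs - e_d)
    var_adj = var_b - cov_bd @ var_d_pinv @ cov_bd.T
    return e_adj, var_adj

# Illustrative two-dimensional prior specification (placeholder numbers)
e_b = np.array([5.0, 3.0])
e_d = np.array([5.0, 3.0])
var_b = np.array([[1.0, 0.2], [0.2, 1.0]])
var_d = np.array([[1.0, 0.5], [0.5, 1.0]])
cov_bd = np.array([[0.6, 0.6], [0.6, 0.6]])

e_adj, var_adj = bayes_linear_adjust(e_b, var_b, e_d, var_d, cov_bd,
                                     d_obs=np.array([6.0, 2.5]))
print("adjusted expectation:", e_adj)
print("adjusted variance:\n", var_adj)
```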
See also
Imprecise probability
External links
Bayes Linear Methods
References
Goldstein, M. (1981) Revising Previsions: a Geometric Interpretation (with Discussion). Journal of the Royal Statistical Society, Series B, 43(2), 105-130
Goldstein, M. (2006) Subjectivism principles and practice. Bayesian Analysis
Michael Goldstein, David Wooff (2007) Bayes Linear Statistics, Theory & Methods, Wiley.
de Finetti, B. (1931) "Probabilism: A Critical Essay on the Theory of Probability and on the Value of Science," (translation of 1931 article) in Erkenntnis, volume 31, September 1989. The entire double issue is devoted to de Finetti's philosophy of probability.
de Finetti, B. (1937) “La Prévision: ses lois logiques, ses sources subjectives,” Annales de l'Institut Henri Poincaré,
- "Foresight: its Logical Laws, Its Subjective Sources," (translation of the 1937 article in French) in H. E. Kyburg and H. E. Smokler (eds), Studies in Subjective Probability, New York: Wiley, 1964.
de Finetti, B. (1974) Theory of Probability, (translation by A Machi and AFM Smith of 1970 book) 2 volumes, New York: Wiley, 1974-5.
Linear statistics
Probability interpretations | Bayes linear statistics | Mathematics | 1,133 |
18,150,648 | https://en.wikipedia.org/wiki/TAC%20%28building%20automation%29 | TAC is a Swedish-based building automation company in the fields of both energy and security. It also operates in other countries including the United Kingdom and the United States.
It was originally established in 1925 as Tour Agenturer in Stockholm.
TAC has announced a name change to Schneider Electric, its parent company, to take place in October 2009.
History
1925–1995
In the beginning, the company produced a product range focusing particularly on draught regulators and radiator valves.
It continued to develop its product range introducing the first transistorized heating regulator in 1962 and a computer-based system for climate control in 1974.
Tour Agenturer became Tour & Andersson in 1977 following a merger with AH Andersson.
Over the following years, Tour & Andersson extended its product range to include an integrated access control system and hotel management & signal system in 1980 and 1981 respectively.
In 1987 it released Micro 7, an IBM PC-based control system with an easier user interface than previously available. It was operated with a mouse in a similar fashion to modern-day computers.
In 1994, the company "moves towards open systems architecture." The following year, Tour & Andersson was separated into two companies: TA Hydronics and TA Control. TA Hydronics became IMI Hydronic Engineering in 2014.
1996–2006
The first major development by TA Control arrived in 1996 with a programmable control system featuring graphical programming.
The following year TA Control changed its name to TAC and focused equally on its services and systems operations in addition to its international partner networks. TAC acquired Norwegian major systems integration company, Solberg Andersen in 1998 shortly before being bought by investment company EQT.
Between 1998 and 2006, several more acquisitions and mergers took place, beginning with TAC's acquisition of Danish-based Danfoss System Automatik. It then merged with CSI—an American company based in Dallas, Texas—in 2000 to create a new company of 2,000 employees covering three major regions: Europe, the Americas and the Pacific.
The years 2002 and 2003 brought further development for TAC Group with its acquisition of both Control Solutions and MicroSign in 2002 before TAC Group was itself acquired by French-based Schneider Electric in 2003. In 2004, the company bought Seattle-based Abacus Engineered Systems which was then merged with the Energy Solutions Division of TAC.
In the same year TAC's parent company, Schneider Electric, acquired Andover Controls which was then merged with TAC to boost its security operations in addition to expanding its building automation activities.
The company became Tour Andover Controls and merged with three more companies over the next two years: Satchwell Controls and the European division of Invensys Advanced Building Systems in 2005, and Invensys Building Systems (IBS) in 2006.
It became TAC Satchwell for a brief period between 2005 and 2007 during the integration of Satchwell Controls into the company.
2007 to present
The year 2007 brought TAC's most recent merger, with Pelco, with Dean Meyer in charge of the Pelco division. TAC now covers 7 continents and 80 countries.
Industries
Commercial
Education
Healthcare
Data Centers
Hotel
Transportation
Retail
Industry and Technology
Government and Military
Residential
Life Sciences
References
"TAC Corporate Brochure 2009"
Building automation
Schneider Electric | TAC (building automation) | Engineering | 659 |
55,772,368 | https://en.wikipedia.org/wiki/Sukanasa | In Hindu temple architecture a sukanasa (, IAST: śukanāsa) or sukanasi is an external ornamented feature over the entrance to the garbhagriha or inner shrine. It sits on the face of the sikhara tower (in South India, the vimana) as a sort of antefix. The forms of the sukanasa can vary considerably, but it normally has a vertical face, very often in the form of a large gavaksha or "window" motif, with an ornamental frame above and to the sides, forming a roughly triangular shape. In discussing temples in Karnataka local authors tend to use "sukanasi" (the preferred form in these cases) as a term for the whole structure of the antarala or ante-chamber from the floor to the top of the sukanasa roof above.
It often contains an image of the deity to whom the temple is dedicated inside this frame, or other figurative subjects. The vertical face may be the termination of a horizontally-projecting structure of the same shape, especially in temples with an antarala or ante-chamber between the mandapa or public worship hall and the garbhagriha. In these cases the projection is over the antarala. Some temples have large gavaksha motifs, in effect sukanasas, on all four faces of the shikara, and there may be two tiers of sukanasa going up the tower. Sukasanas are also often found in Jain temples.
The name strictly means "parrot's beak", and is often referred to as the "nose" of the temple superstructure, as part of the understanding of the temple as representing in its various parts the anatomy of the deity. Various early texts set out proportions for the shape of the sukanasa, centred on a circular gavaksha, and its size in proportion to the rest of the temple, especially the height of the shikhara. They vary and in any case are not always followed.
Especially in the south, the sukanasa may be topped by a kirtimukha head, the open-mouthed monster swallowing or vomiting the rest of the motif below. As with the gavaksha, the motif represents a window through which the light of the deity shines out across the world.
History
The sukanasa appears to develop from later forms of the large "chaitya arch" on the outside facade of Buddhist chaitya halls. Initially these were a large practical window admitting light to the interior, and reflecting the shape of the curved internal roof, based on timber and thatch predecessors. Later, these large motifs developed into a setting for sculpture that was largely "blind" or not actually an opening in the wall. Both phases now only survive in rock-cut "cave temples" at sites such as the Ajanta Caves, where the first type can be seen at Caves 9, 10, 19, and 26, and Ellora, where Cave 10 shows the second type.
According to Adam Hardy, "possibly the first use of a sukanasa in a Dravida temple" is the Parvati temple, Sandur (7–8th century), using his terminology where "Karnataka Dravida" architecture is treated as a form of Dravidian architecture; others describe this as Badami Chalukya architecture or similar terms.
In Hoysala architecture, the sukanasa is typically brought forward over an antarala, and the royal emblem of the Hoysala Empire, the mythical founder Sala stabbing a lion (according to the legend a tiger, but the two are not distinguished in Indian art), often stands over the barrel roof as sculpture in the round. Among other places, this can be seen at the Bucesvara Temple, Koravangala, both temples at the Nageshvara-Chennakeshava Temple complex, Mosale, and the Kedareshvara Temple, Balligavi.
Notes
References
Foekema, Gerard, A Complete Guide to Hoysala Temples, Abhinav, 1996 , google books
Hardy, Adam, Indian Temple Architecture: Form and Transformation : the Karṇāṭa Drāviḍa Tradition, 7th to 13th Centuries, 1995, Abhinav Publications, , 9788170173120, google books
Harle, J.C., The Art and Architecture of the Indian Subcontinent, 2nd edn. 1994, Yale University Press Pelican History of Art,
Krishna Murthy, M.S., "Jaina Monuments In Southern Karnataka", Ahimsa Foundation (www.jainsamaj.org)
Kramrisch, Stella, The Hindu Temple, Volume 1, 1996 (originally 1946), , 9788120802223, google books
Michell, George, The Penguin Guide to the Monuments of India, Volume 1: Buddhist, Jain, Hindu, 1989, Penguin Books,
Hindu temple architecture
Architectural elements | Sukanasa | Technology,Engineering | 1,013 |
9,131,931 | https://en.wikipedia.org/wiki/Z%20curve | The Z curve (or Z-curve) method is a bioinformatics algorithm for genome analysis. The Z-curve is a three-dimensional curve that constitutes a unique representation of a DNA sequence, i.e., for the Z-curve and the given DNA sequence each can be uniquely reconstructed from the other.
The resulting curve has a zigzag shape, hence the name Z-curve.
Background
The Z Curve method was first created in 1994 as a way to visually map a DNA or RNA sequence. Different properties of the Z curve, such as its symmetry and periodicity can give unique information on the DNA sequence. The Z curve is generated from a series of nodes, P0, P1,...PN, with the coordinates xn, yn, and zn (n=0,1,2...N, with N being the length of the DNA sequence). The Z curve is created by connecting each of the nodes sequentially.
Applications
Information on the distribution of nucleotides in a DNA sequence can be determined from the Z curve. The four nucleotides are combined into six different categories. The nucleotides are placed into each category by some defining characteristic and each category is designated a letter.
The x, y, and z components of the Z curve display the distribution of each of these categories of bases for the DNA sequence being studied. The x-component represents the distribution of purines and pyrimidine bases (R/Y). The y-component shows the distribution of amino and keto bases (M/K) and the z-component shows the distribution of strong-H bond and weak-H bond bases (S/W) in the DNA sequence.
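A compact sketch of the transform follows, using the commonly stated cumulative-count definitions of the three components; the function name and example sequence are illustrative.

```python
# Sketch of the Z-curve transform described above.  With A_n, C_n, G_n, T_n
# the cumulative counts of each base among the first n nucleotides, the
# commonly used component definitions are:
#   x_n = (A_n + G_n) - (C_n + T_n)   # purine vs. pyrimidine (R/Y)
#   y_n = (A_n + C_n) - (G_n + T_n)   # amino vs. keto (M/K)
#   z_n = (A_n + T_n) - (G_n + C_n)   # weak vs. strong H-bonding (W/S)

def z_curve(seq: str) -> list[tuple[int, int, int]]:
    """Return the nodes P_0 ... P_N of the Z curve for a DNA sequence."""
    counts = {"A": 0, "C": 0, "G": 0, "T": 0}
    nodes = [(0, 0, 0)]                       # P_0 at the origin
    for base in seq.upper():
        counts[base] += 1
        a, c, g, t = counts["A"], counts["C"], counts["G"], counts["T"]
        nodes.append(((a + g) - (c + t),
                      (a + c) - (g + t),
                      (a + t) - (g + c)))
    return nodes

print(z_curve("ATGCGC"))   # six bases -> seven nodes
```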
The Z-curve method has been used in many different areas of genome research, such as replication origin identification, ab initio gene prediction, isochore identification, genomic island identification and comparative genomics. Analysis of the Z curve has also been shown to be able to predict whether a gene contains introns.
Research
Experiments have shown that the Z curve can be used to identify the replication origin in various organisms. One study analyzed the Z curve for multiple species of Archaea and found that the oriC is located at a sharp peak on the curve followed by a broad base. This region was rich in AT bases and had multiple repeats, which is expected for replication origin sites. This and other similar studies were used to generate a program that could predict the origins of replication using the Z curve.
The Z curve has also been experimentally used to determine phylogenetic relationships. In one study, a novel coronavirus in China was analyzed using sequence analysis and the Z curve method to determine its phylogenetic relationship to other coronaviruses. It was determined that similarities and differences in related species can quickly be determined by visually examining their Z curves. An algorithm was created to identify the geometric center and other trends in the Z curve of 24 species of coronaviruses. The data were used to create a phylogenetic tree. The results matched the tree that was generated using sequence analysis. The Z curve method proved superior because while sequence analysis creates a phylogenetic tree based solely on coding sequences in the genome, the Z curve method analyzed the entire genome.
References
External links
The Z curve database
— a free, web-based program for predicting "origins of replication" using Z-curves.
ENCODE threads explorer Three-dimensional connections across the genome. Nature (journal)
ZCurve
Introduction to Z curves. http://tubic.tju.edu.cn/zcurve/introduce.php
Identify Gene Start Sites Using Z curves. http://tubic.tju.edu.cn/GS-Finder/
Bioinformatics algorithms | Z curve | Biology | 772 |
11,421,821 | https://en.wikipedia.org/wiki/HgcG%20RNA | The HgcG RNA gene is a non-coding RNA that was identified computationally and experimentally verified in AT-rich hyperthermophiles. The genes from this screen were named hgcA through hgcG ("high GC"). HgcG is of unknown function. hgcG is significantly similar to a region of the Archaeoglobus fulgidus genome. It was later identified as Pab40 H/ACA snoRNA with rRNA targets.
See also
HgcC family RNA
HgcE RNA
HgcF RNA
References
External links
Non-coding RNA | HgcG RNA | Chemistry | 146 |
20,209,852 | https://en.wikipedia.org/wiki/Intracavernous%20injection | An intracavernous (or intracavernosal) injection is an injection into the base of the penis. This injection site is often used to administer medications to check for or treat erectile dysfunction in adult men (in, for example, a combined intracavernous injection and stimulation test). The more common medications administered in this manner include Caverject, Trimix (prostaglandin, papaverine, and phentolamine), Bimix (papaverine and phentolamine), and Quadmix (prostaglandin, papaverine, phentolamine, and either atropine or forskolin). These medications are all types of vasodilators and cause tumescence within 15 minutes.
Common side effects include priapism, bruising, fibrosis, Peyronie's disease, and pain.
Priapism is also often treated with intracavernous injections, usually with sympathomimetic vasoconstricting drugs like adrenaline or phenylephrine.
References
Male genital procedures
Routes of administration
Dosage forms | Intracavernous injection | Chemistry | 236 |
50,362,339 | https://en.wikipedia.org/wiki/Pregnancy%20specific%20biological%20substances | Pregnancy-specific biological substances, which include the placenta, umbilical cord, amniotic fluid, and amniotic membrane, are being studied for a number of health uses. For example, placenta-derived stem cells are being studied as a potential treatment method for cell therapy. Hepatocyte-like cells (HLC) are generated from differentiated human amniotic epithelial cells (hAEC), which are abundant in the placenta. HLC may replace hepatocytes in hepatocyte transplantation to treat acute or chronic liver damage.
Recent research has shown that the placenta and placenta derivatives are being investigated as regenerative cell therapies and that they also possess immunological features. The placenta has a unique structure, which not only regulates its function but also offers the possibility of efficient use in clinics and in biotechnology.
According to a research study by Bhattacharya N., anemia caused by diabetes mellitus in patients with albuminuria can be treated with cord blood transfusion. The research showed an increase in albumin per gram of creatinine, the measure used to assess albuminuria, in patients who received cord blood transfusions.
References
Human pregnancy
Organs (anatomy) | Pregnancy specific biological substances | Chemistry,Biology | 263 |
376,458 | https://en.wikipedia.org/wiki/Emilio%20Segr%C3%A8 | Emilio Gino Segrè (; 1 February 1905 – 22 April 1989) was an Italian and naturalized-American physicist and Nobel laureate, who discovered the elements technetium and astatine, and the antiproton, a subatomic antiparticle, for which he was awarded the Nobel Prize in Physics in 1959 along with Owen Chamberlain.
Born in Tivoli, near Rome, Segrè studied engineering at the University of Rome La Sapienza before taking up physics in 1927. Segrè was appointed assistant professor of physics at the University of Rome in 1932 and worked there until 1936, becoming one of the Via Panisperna boys. From 1936 to 1938 he was director of the Physics Laboratory at the University of Palermo. After a visit to Ernest O. Lawrence's Berkeley Radiation Laboratory, he was sent a molybdenum strip from the laboratory's cyclotron accelerator in 1937, which was emitting anomalous forms of radioactivity. Using careful chemical and theoretical analysis, Segrè was able to prove that some of the radiation was being produced by a previously unknown element, named technetium, the first artificially synthesized chemical element that does not occur in nature.
In 1938, while Segrè was visiting the Berkeley Radiation Laboratory, Benito Mussolini's fascist government passed antisemitic laws barring Jews from university positions. As a Jew, Segrè was rendered an indefinite émigré. At the Berkeley Radiation Lab, Lawrence offered him an underpaid job as a research assistant. There, Segrè helped discover the element astatine and the isotope plutonium-239, which was later used to make the Fat Man nuclear bomb dropped on Nagasaki. From 1943 to 1946 he worked at the Los Alamos National Laboratory as a group leader for the Manhattan Project. He found in April 1944 that Thin Man, the proposed plutonium gun-type nuclear weapon, would not work due to the presence of plutonium-240 impurities. In 1944, he became a naturalized citizen of the United States. On his return to Berkeley in 1946, he became a professor of physics and of history of science, serving until 1972. Segrè and Owen Chamberlain co-headed a research group at the Lawrence Radiation Laboratory that discovered the antiproton, for which the two shared the 1959 Nobel Prize in Physics.
Segrè was an active photographer who took many pictures documenting events and people in the history of modern science, which were donated to the American Institute of Physics after his death. The American Institute of Physics named its photographic archive of physics history in his honor.
Early life
Emilio Gino Segrè was born into a Sephardic Jewish family in Tivoli, near Rome, on 1 February 1905, the son of Giuseppe Segrè, a businessman who owned a paper mill, and Amelia Susanna Treves. He had two older brothers, Angelo and Marco. His uncle, Gino Segrè, was a law professor. He was educated at the ginnasio in Tivoli and, after the family moved to Rome in 1917, the ginnasio and liceo in Rome. He graduated in July 1922 and enrolled in the University of Rome La Sapienza as an engineering student.
In 1927, Segrè met Franco Rasetti, who introduced him to Enrico Fermi. The two young physics professors were looking for talented students. They attended the Volta Conference at Como in September 1927, where Segrè heard lectures from notable physicists including Niels Bohr, Werner Heisenberg, Robert Millikan, Wolfgang Pauli, Max Planck and Ernest Rutherford. Segrè then joined Fermi and Rasetti at their laboratory in Rome. With the help of the director of the Institute of Physics, Orso Mario Corbino, Segrè was able to transfer to physics, and, studying under Fermi, earned his laurea degree in July 1928, with a thesis on "Anomalous Dispersion and Magnetic Rotation".
After a stint in the Italian Army from 1928 to 1929, during which he was commissioned as a second lieutenant in the antiaircraft artillery, Segrè returned to the laboratory on Via Panisperna. He published his first article, which summarised his thesis, "On anomalous dispersion in mercury and in lithium", jointly with Edoardo Amaldi in 1928, and another article with him the following year on the Raman effect.
In 1930, Segrè began studying the Zeeman effect in certain alkaline metals. When his progress stalled because the diffraction grating he required to continue was not available in Italy, he wrote to four laboratories elsewhere in Europe asking for assistance and received an invitation from Pieter Zeeman to finish his work at Zeeman's laboratory in Amsterdam. Segrè was awarded a Rockefeller Foundation fellowship and, on Fermi's advice, elected to use it to study under Otto Stern in Hamburg. Working with Otto Frisch on space quantization produced results that apparently did not agree with the current theory; but Isidor Isaac Rabi showed that theory and experiment were in agreement if the nuclear spin of potassium was +1/2.
Physics professor
Segrè was appointed assistant professor of physics at the University of Rome in 1932 and worked there until 1936, becoming one of the Via Panisperna boys. In 1934, he met Elfriede Spiro, a Jewish woman whose family had come from Ostrowo in West Prussia, but had fled to Breslau when that part of Prussia became part of Poland after World War I. After the Nazi Party came to power in Germany in 1933, she had emigrated to Italy, where she worked as a secretary and an interpreter. At first she did not speak Italian well, and Segrè and Spiro conversed in German, in which he was fluent. The two were married at the Great Synagogue of Rome on 2 February 1936. He agreed with the rabbi to spend the minimal amount on the wedding, giving the balance of what would be spent on a luxury wedding to Jewish refugees from Germany. The rabbi managed to give them many of the trappings of a luxury wedding anyway. The couple had three children: Claudio, born in 1937, Amelia Gertrude Allegra, born in 1939, and Fausta Irene, born in 1945.
After marrying, Segrè sought a stable job and became professor of physics and director of the Physics Institute at the University of Palermo. He found the equipment there primitive and the library bereft of modern physics literature, but his colleagues at Palermo included the mathematicians Michele Cipolla and Michele De Franchis, the mineralogist Carlo Perrier and the botanist . In 1936 he paid a visit to Ernest O. Lawrence's Berkeley Radiation Laboratory, where he met Edwin McMillan, Donald Cooksey, Franz Kurie, Philip Abelson and Robert Oppenheimer. Segrè was intrigued by the radioactive scrap metal that had once been part of the laboratory's cyclotron. In Palermo, this was found to contain a number of radioactive isotopes. In February 1937, Lawrence sent him a molybdenum strip that was emitting anomalous forms of radioactivity. Segrè enlisted Perrier's help to subject the strip to careful chemical and theoretical analysis, and they were able to prove that some of the radiation was being produced by a previously unknown element. In 1947 they named it technetium, as it was the first artificially synthesized chemical element.
Radiation Laboratory
In June 1938, Segrè paid a summer visit to California to study the short-lived isotopes of technetium, which did not survive being mailed to Italy. While Segrè was en route, Benito Mussolini's fascist government passed racial laws barring Jews from university positions. As a Jew, Segrè was now rendered an indefinite émigré. The Czechoslovakian crisis prompted Segrè to send for Elfriede and Claudio, as he now feared that war in Europe was inevitable. In November 1938 and February 1939 they made quick trips to Mexico to exchange their tourist visas for immigration visas. Both Segrè and Elfriede held grave fears for the fate of their parents in Italy and Germany.
At the Berkeley Radiation Lab, Lawrence offered Segrè a job as a research assistant—a relatively lowly position for someone who had discovered an element—for six months. When Lawrence learned that Segrè was legally trapped in California, he took advantage of the situation to reduce Segrè's salary to $116 a month. Working with Glenn Seaborg, Segrè isolated the metastable isotope technetium-99m. Its properties made it ideal for use in nuclear medicine, and it is now used in about 10 million medical diagnostic procedures annually. Segrè went looking for element 93, but did not find it, as he was looking for an element chemically akin to rhenium instead of a rare-earth element, which is what element 93 was. Working with Alexander Langsdorf, Jr., and Chien-Shiung Wu, he discovered xenon-135, which later became important as a nuclear poison in nuclear reactors.
Segrè then turned his attention to another missing element on the periodic table, element 85. After he announced how he intended to create it by bombarding bismuth-209 with alpha particles at a Monday Radiation Laboratory meeting, two of his colleagues, Dale R. Corson and Robert A. Cornog, carried out his proposed experiment. Segrè then asked whether he could do the chemistry and, with Kenneth Ross MacKenzie, successfully isolated the new element, which is today called astatine. Segrè and Wu then attempted to find the last remaining missing non-transuranic element, element 61. They had the correct technique for making it, but lacked the chemical methods to separate it. He also worked with Seaborg, McMillan, Joseph W. Kennedy and Arthur C. Wahl to create plutonium-239 in Lawrence's cyclotron in December 1940.
Manhattan Project
The Japanese attack on Pearl Harbor in December 1941 and the subsequent United States declaration of war upon Italy rendered Segrè an enemy alien and cut him off from communication with his parents. Physicists began leaving the Radiation Laboratory to do war work, and Raymond T. Birge asked him to teach classes to the remaining students. This provided a useful supplement to Segrè's income, and he established important friendships and professional associations with some of these students, who included Owen Chamberlain and Clyde Wiegand.
In late 1942, Oppenheimer asked Segrè to join the Manhattan Project at its Los Alamos Laboratory. Segrè became the head of the laboratory's P-5 (Radioactivity) Group, which formed part of Robert Bacher's P (Experimental Physics) Division. For security reasons, he was given the cover name of Earl Seaman. He moved to Los Alamos with his family in June 1943.
Segrè's group set up its equipment in a disused Forest Service cabin in the Pajarito Canyon near Los Alamos in August 1943. His group's task was to measure and catalog the radioactivity of various fission products. An important line of research was determining the degree of isotope enrichment achieved with various samples of enriched uranium. Initially, the mass spectrometry tests used by Columbia University and the neutron assays used by Berkeley gave different results. Segrè studied Berkeley's results and could find no error, while Kenneth Bainbridge likewise found no fault with Columbia's. However, analysis of another sample showed close agreement. Higher rates of spontaneous fission were observed at Los Alamos, which Segrè's group concluded were due to cosmic rays, more prevalent there because of the site's high altitude.
The group measured the activity of thorium, uranium-234, uranium-235 and uranium-238, but only had access to microgram quantities of plutonium-239. The first sample of plutonium produced in the nuclear reactor at Oak Ridge was received in April 1944. Within days, the group observed five times the rate of spontaneous fission seen in the cyclotron-produced plutonium. This was not news that the leaders of the project wanted to hear. It meant that Thin Man, the proposed plutonium gun-type nuclear weapon, would not work, and implied that the project's investment in plutonium production facilities at the Hanford Site was wasted. Segrè's group carefully checked their results and concluded that the increased activity was due to the plutonium-240 isotope.
In June 1944, Segrè was summoned into Oppenheimer's office and informed that while his father was safe, his mother had been rounded up by the Nazis in October 1943. Segrè never saw either of his parents again. His father died in Rome in October 1944. In late 1944, Segrè and Elfriede became naturalized citizens of the United States. His group, now designated R-4, was given responsibility for measuring the gamma radiation from the Trinity nuclear test in July 1945. The blast damaged or destroyed most of the experiments, but enough data was recovered to measure the gamma rays.
Later life
In August 1945, a few days before the surrender of Japan and the end of World War II, Segrè received an offer of an associate professorship from Washington University in St. Louis. The following month, the University of Chicago also made him an offer. After some prompting, Birge offered $6,500 and a full professorship, which Segrè decided to accept. He left Los Alamos in January 1946 and returned to Berkeley.
In the late 1940s, many academics left the University of California, lured away by higher-salary offers elsewhere or alienated by the university's peculiar loyalty oath requirement. Segrè chose to take the oath and stay, but this did not allay suspicions about his loyalty. Luis Alvarez was incensed that Amaldi, Fermi, Pontecorvo, Rasetti and Segrè had chosen to pursue patent claims against the United States for their pre-war discoveries, and told Segrè to let him know when Pontecorvo wrote from Russia. He also clashed with Lawrence over the latter's plan to create a rival nuclear-weapons laboratory to Los Alamos in Livermore, California, in order to develop the hydrogen bomb, a weapon that Segrè felt would be of dubious utility.
Unhappy with his deteriorating relationships with his colleagues and with the poisonous political atmosphere at Berkeley caused by the loyalty oath controversy, Segrè accepted a job offer from the University of Illinois at Urbana–Champaign. The courts ultimately resolved the patent claims in the Italian scientists' favour in 1953, awarding them $400,000 for the patents related to generating neutrons, which worked out to about $20,000 each after legal costs. Kennedy, Seaborg, Wahl and Segrè were subsequently awarded the same amount for their discovery of plutonium, which came to $100,000 apiece after being divided four ways, there being no legal fees this time.
After turning down offers from IBM and the Brookhaven National Laboratory, Segrè returned to Berkeley in 1952. He was elected to the United States National Academy of Sciences that same year. He moved his family from Berkeley to nearby Lafayette, California, in 1955. Working with Chamberlain and others, he began searching for the antiproton, the antiparticle of the proton. The antiparticle of the electron, the positron, had been predicted by Paul Dirac in 1931 and then discovered by Carl D. Anderson in 1932. By analogy, it was now expected that there would be an antiparticle corresponding to the proton, but no one had found one, and even in 1955 some scientists doubted that it existed. Using Lawrence's Bevatron set to 6 GeV, they managed to detect conclusive evidence of antiprotons. Chamberlain and Segrè were awarded the 1959 Nobel Prize in Physics for their discovery. This was controversial, because Clyde Wiegand and Thomas Ypsilantis were co-authors of the same article, but did not share the prize.
Segrè served on the university's powerful Budget Committee from 1961 to 1965 and was chairman of the Physics Department from 1965 to 1966. He supported Teller's successful bid to separate the Lawrence Berkeley Laboratory from the Lawrence Livermore Laboratory in 1970. He was elected to the American Philosophical Society in 1963. He was one of the trustees of Fermilab from 1965 to 1968. He attended its inauguration with Laura Fermi in 1974. During the 1950s, Segrè edited Fermi's papers. He later published a biography of Fermi, Enrico Fermi: Physicist (1970). He published his own lecture notes as From X-rays to Quarks: Modern Physicists and Their Discoveries (1980) and From Falling Bodies to Radio Waves: Classical Physicists and Their Discoveries (1984). He also edited the Annual Review of Nuclear and Particle Science from 1958 to 1977 and wrote an autobiography, A Mind Always in Motion (1993), which was published posthumously.
Elfriede died in October 1970, and Segrè married Rosa Mines in February 1972. He was elected to the American Academy of Arts and Sciences in 1973. That year he reached the University of California's compulsory retirement age. He continued teaching the history of physics. In 1974 he returned to the University of Rome as a professor, but served only a year before reaching the mandatory retirement age. Segrè died from a heart attack at the age of 84 while out walking near his home in Lafayette. Active as a photographer, Segrè took many photos documenting events and people in the history of modern science. After his death Rosa donated many of his photographs to the American Institute of Physics, which named its photographic archive of physics history in his honor. The collection was bolstered by a subsequent bequest from Rosa after her death from an accident in Tivoli in 1997.
Notes
See also
List of Jewish Nobel laureates
References
Bibliography
E. Segrè (1964). Nuclei and Particles.
E. Segrè (1970). Enrico Fermi, Physicist, University of Chicago Press.
eBook published by Plunkett Lake Press (2016).
E. Segrè (1980). From X-rays to Quarks: Modern Physicists and Their Discoveries (Dover Classics of Science & Mathematics), Dover Publications.
E. Segrè (1984). From Falling Bodies to Radio Waves: Classical Physicists and Their Discoveries.
Free Online – UC Press E-Books Collection.
eBook published by Plunkett Lake Press (2016).
Further reading
Segrè, E.; et al. "Formation of the 50-Year Element 94 from Deuteron Bombardment of U238", (June 1942), Argonne National Laboratory, United States Department of Energy (through predecessor agency the Atomic Energy Commission).
Segrè, E. "Spontaneous Fission", (22 November 1950), Radiation Laboratory, Lawrence Berkeley National Laboratory, United States Department of Energy (through predecessor agency the Atomic Energy Commission).
Segrè, E. (1953) Experimental Nuclear Physics.
Segrè, E.; et al. "Observation of Antiprotons", (19 October 1955), Radiation Laboratory, Lawrence Berkeley National Laboratory, United States Department of Energy (through predecessor agency the Atomic Energy Commission).
Segrè, E.; et al. "Antiprotons", (29 November 1955), Radiation Laboratory, Lawrence Berkeley National Laboratory, United States Department of Energy (through predecessor agency the Atomic Energy Commission).
Segrè, E.; et al. "The Antiproton-Nucleon Annihilation Process (Antiproton Collaboration Experiment)", (10 September 1956), Radiation Laboratory, Lawrence Berkeley National Laboratory, United States Department of Energy (through predecessor agency the Atomic Energy Commission).
Segrè, E.; et al. "Experiments on Antiprotons: Antiproton-Nucleon Cross Sections", (22 July 1957), Radiation Laboratory, Lawrence Berkeley National Laboratory, United States Department of Energy (through predecessor agency the Atomic Energy Commission).
External links
1965 Audio Interview with Emilio Segre by Stephane Groueff, Voices of the Manhattan Project
Oral History transcripts with Emilio G. Segre, American Institute of Physics, Niels Bohr Library and Archives
Emilio Segrè on Nobelprize.org, including his Nobel Lecture, 11 December 1959: Properties of Antinucleons
Archival collections
Emilio Segre lectures and other collected recordings, 1968-1997, Niels Bohr Library & Archives
1905 births
1989 deaths
People from Tivoli, Lazio
Sapienza University of Rome alumni
Academic staff of the Sapienza University of Rome
Nobel laureates in Physics
Italian Nobel laureates
20th-century Italian physicists
Discoverers of chemical elements
Experimental physicists
Italian emigrants to the United States
20th-century Italian Jews
Jewish American physicists
Manhattan Project people
Fellows of the American Physical Society
University of California, Berkeley faculty
Academic staff of the University of Palermo
Rare earth scientists
Italian exiles
People from Los Alamos, New Mexico
Italian Sephardi Jews
Annual Reviews (publisher) editors
Time Person of the Year
Members of the American Philosophical Society | Emilio Segrè | Physics | 4,281 |
23,776,575 | https://en.wikipedia.org/wiki/Postselection | In probability theory, to postselect is to condition a probability space upon the occurrence of a given event. In symbols, once we postselect for an event E, the probability of some other event F changes from P(F) to the conditional probability P(F | E).
For a discrete probability space, P(F | E) = P(F ∩ E) / P(E), and thus we require that P(E) be strictly positive in order for the postselection to be well-defined.
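As an informal illustration (an added sketch, not part of the original article), postselection can be simulated by sampling and discarding every run in which the conditioning event failed to occur; the events E and F below are arbitrary choices made for the example.

    import random

    def estimate_postselected(trials=100_000):
        # Estimate P(F | E) for a fair die by postselecting on E.
        kept = 0   # runs in which the postselected event E occurred
        hits = 0   # runs in which F also occurred, among those kept
        for _ in range(trials):
            die = random.randint(1, 6)
            e = die % 2 == 0   # E: the roll is even
            f = die >= 4       # F: the roll is at least 4
            if e:              # postselect on E; discard the run otherwise
                kept += 1
                hits += f
        return hits / kept     # converges to P(F ∩ E) / P(E) = (2/6)/(3/6) = 2/3

    print(estimate_postselected())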
See also PostBQP, a complexity class defined with postselection. Postselection appears to make quantum Turing machines much more powerful: Scott Aaronson proved that PostBQP is equal to PP.
Some quantum experiments use post-selection after the experiment as a replacement for communication during the experiment, by post-selecting the communicated value into a constant.
References
Conditional probability
Theoretical computer science
Quantum complexity theory | Postselection | Mathematics | 159 |
8,591,677 | https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Grus | This is a list of notable stars in the constellation Grus, sorted by decreasing brightness.
See also
List of stars by constellation
References
List
Grus | List of stars in Grus | Astronomy | 31 |
52,014,798 | https://en.wikipedia.org/wiki/Thermodesulforhabdus | Thermodesulforhabdus is an acetate-oxidizing bacterial genus from the order Syntrophobacterales. To date, only one species of this genus is known (Thermodesulforhabdus norvegica).
See also
List of bacterial orders
List of bacteria genera
References
Thermodesulfobacteriota
Monotypic bacteria genera
Bacteria genera | Thermodesulforhabdus | Biology | 81 |
57,298,509 | https://en.wikipedia.org/wiki/1st%20Railway%20Corps%20%28People%27s%20Republic%20of%20China%29 | 1st Railway Corps () of the People's Liberation Army was a military formation mainly focusing on railway construction missions. It was activated on September 1, 1956. The corps commander was He Huiyan.
The corps was composed of 2nd, 7th and 11th Railway Divisions.
From 1956 to early 1957, the corps was in charge of the construction of Lanzhou–Xinjiang railway.
In early 1957 the corps was redeployed to Ningxia and Inner Mongolia to build the Baotou–Lanzhou railway. During the construction, the 11th Railway Division was detached from the corps, while the 9th Railway Division and the Independent Bridge Construction Regiment were attached. In July 1958 the construction was finished.
The corps was inactivated before November 1958.
References
Corps of the People's Liberation Army
Military units and formations established in 1956
Military units and formations disestablished in 1958
Railway troops | 1st Railway Corps (People's Republic of China) | Engineering | 166 |
74,885,040 | https://en.wikipedia.org/wiki/Satellite%20imagery%20in%20North%20Korea | Satellite imagery in North Korea is a knowledge-building tool in the field of North Korean studies. It enables researchers to produce data-based analyses in the agricultural, humanitarian, economic and military fields, in a country where access to the field is limited.
Context
Collecting data on North Korea is very difficult. States generally produce reliable data, but in North Korea the data produced is often non-existent or of poor quality. When the North Korean government does produce data, its completeness and relevance are often called into question.
Access to the country is limited and satellite imagery is sometimes the only way to get an overview of important political or military locations.
History
Started in 1999 and emerging in 2012
High-resolution satellite imagery (1 metre and less) has been available since 1999, but its use for North Korean studies did not emerge until 2012. By 2004, researchers and NGOs had imaging and computing capabilities comparable to those available to the US government 20–30 years earlier, in the 1970s. Prior to 2012, imagery was largely focused on nuclear sites in North Korea, notably Yongbyon and Punggye-ri.
Satellite images with a resolution of less than one metre can be used to identify a wide range of objects such as buildings, forests, orchards, fields, fences, rivers, railways and roads.
Alongside improving resolution, satellite images have made spectral analysis accessible beyond the visible light spectrum. Synthetic-aperture radars (SARs) provide a 3D rendering of the earth, even in rainy weather or at night.
Obstacle to the acquisition of satellite images
The development of analyses based on satellite images has been hampered by a number of factors: the cost of acquiring these images, the decision by satellite imagery companies not to include images of the country in their public catalogues for political or technical reasons, or simply as a management decision.
Speeding up satellite image acquisition
The use of satellite imagery has been boosted by technological improvements, the expansion of the satellite imagery industry and public interest in North Korea, prompting satellite imagery companies to capture more frequently and highlight this product on their catalogues.
Difficulties of analysis and risks of misinformation
Despite the name "high-resolution image", the analysis of satellite images is hampered by the quality of the images available, which leaves more or less room for interpretation, as well as by the frequency of image capture. Analysis also depends on the analysts' cultural and technical knowledge and their experience of the specific country context; this is exacerbated for observation of North Korea's nuclear programme. To produce robust information, satellite imagery must be cross-referenced with other data sources to understand long-term dynamics.
The growing competition in the field of North Korea studies and the resulting pressure on researchers to be the first to produce a scientific paper is a source of haste in the analysis. The analysis is constructed with less information. Over-extrapolation from satellite images can lead to major errors in analysis.
In a context of short, rapid media cycles, brief or incomplete analyses based on satellite imagery can lead to the dissemination of misinformation, even unintentionally.
The number of experts in the analysis of satellite imagery in North Korea is limited, but recognition algorithms can partially mitigate this problem.
North Korea's adaptation
North Korea is adapting to the high level of surveillance of its territory by satellite images, so the examination of any satellite image must take into account the possibility of concealment, camouflage or even deception by the North Koreans.
Usage
Economic
Analysis of satellite imagery creates knowledge based on new data to better understand activities in North Korea in areas where data acquisition is difficult, such as infrastructure development, construction projects, smuggling activities, etc.
Imagery of the country via Google Earth has enabled researcher Benjamin Katzeff to analyse markets in North Korea by their location, size and geographical distribution, and then to approximate economic aggregates, such as reported taxes.
Imagery is used to observe regional and national development patterns and trends or to observe activities violating sanctions against North Korea such as fishing activities.
Agriculture
Satellite imagery is used to monitor agricultural and food production. Changes in weather conditions (flooding, drought, etc.) are tracked in agricultural areas to estimate harvest possibilities. Examination of areas hit by typhoons has enabled an analysis of the impact on harvests, enabling food needs to be assessed and humanitarian aid to be provided.
Humanitarian aid
Some NGOs use satellite images to study the progress of their project as they are not always on the ground.
Military
Nuclear and weapons of mass destruction
Analysis of satellite imagery makes it possible to follow the development of North Korea's nuclear arsenal by observing the infrastructure and activity of nuclear sites. This is sometimes the only way to observe North Korea's nuclear programme, as international and US experts are rarely admitted to the country's nuclear sites.
Human rights
Satellite imagery is very useful in the field of human rights in the country. The U.S. Committee for Human Rights in North Korea has published reports using satellite imagery and defector testimony to analyse the infrastructure and activity of prison camps, particularly to understand renovations, extensions or closures of these sites.
Satellite imagery producer
There are different types of imagery providers: military, governmental and commercial.
Military
United States Government, through the National Reconnaissance Office (classified documents)
Government
Landsat (by the USGS)
European Union, with research programmes (Copernicus...)
Commercial
Maxar Technologies, formerly known as Digital Globe (0.3 m)
Airbus DS Geo (0.5 m)
ESRI
Planet Labs (imagery captured at daily frequency, 3m)
References
External links
Remote sensing
Geographic data and information
Science and technology in North Korea | Satellite imagery in North Korea | Technology | 1,119 |
75,596,143 | https://en.wikipedia.org/wiki/Minister%20for%20Biosecurity | The Minister for Biosecurity is a minister in the New Zealand Government with the responsibility of managing biosecurity.
The current Minister for Biosecurity is Andrew Hoggard.
History
The portfolio was created after the 1996 general election. Previously, biosecurity matters had been under the purview of the Minister of Agriculture; it was John Falloon, acting in that portfolio, who had been responsible for the passage of the Biosecurity Act 1993. Briefly from 1998 to 1999 and again from 2011 to 2017, the portfolio was consolidated with other primary industries portfolios, first as the Minister for Food, Fibre, Biosecurity and Border Control and latterly as the Minister for Primary Industries.
List of ministers for biosecurity
The following ministers have held the office of Minister for Biosecurity.
Notes
References
Lists of government ministers of New Zealand
Biosecurity | Minister for Biosecurity | Environmental_science | 183 |
36,132,944 | https://en.wikipedia.org/wiki/Motion%20%28geometry%29 | In geometry, a motion is an isometry of a metric space. For instance, a plane equipped with the Euclidean distance metric is a metric space in which a mapping associating congruent figures is a motion. More generally, the term motion is a synonym for surjective isometry in metric geometry, including elliptic geometry and hyperbolic geometry. In the latter case, hyperbolic motions provide an approach to the subject for beginners.
Motions can be divided into direct and indirect motions.
Direct, proper or rigid motions are motions like translations and rotations that preserve the orientation of a chiral shape.
Indirect, or improper motions are motions like reflections, glide reflections and improper rotations that invert the orientation of a chiral shape.
Some geometers define motion in such a way that only direct motions are motions.
In differential geometry
In differential geometry, a diffeomorphism is called a motion if it induces an isometry between the tangent space at a manifold point and the tangent space at the image of that point.
Group of motions
Given a geometry, the set of motions forms a group under composition of mappings. This group of motions is noted for its properties. For example, the Euclidean group is noted for the normal subgroup of translations. In the plane, a direct Euclidean motion is either a translation or a rotation, while in space every direct Euclidean motion may be expressed as a screw displacement according to Chasles' theorem. When the underlying space is a Riemannian manifold, the group of motions is a Lie group. Furthermore, the manifold has constant curvature if and only if, for every pair of points and every isometry between their tangent spaces, there is a motion taking one point to the other that induces the given isometry.
The idea of a group of motions for special relativity has been advanced as Lorentzian motions. For example, fundamental ideas were laid out for a plane characterized by a quadratic form in an article in the American Mathematical Monthly.
The motions of Minkowski space were described by Sergei Novikov in 2006:
The physical principle of constant velocity of light is expressed by the requirement that the change from one inertial frame to another is determined by a motion of Minkowski space, i.e. by a transformation φ
preserving space-time intervals. This means that
|φ(x) − φ(y)|² = |x − y|²
for each pair of points x and y in R^{1,3}.
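As a concrete check (an added illustration, not part of Novikov's description), a Lorentz boost is one such motion of Minkowski space; the sketch below numerically verifies that it preserves the squared interval between two arbitrarily chosen points, in units where the speed of light is 1.

    import math

    def boost(v):
        # Lorentz boost along the x-axis with speed v (in units where c = 1)
        g = 1.0 / math.sqrt(1.0 - v * v)
        return lambda t, x, y, z: (g * (t - v * x), g * (x - v * t), y, z)

    def interval2(p, q):
        # Squared Minkowski interval with signature (+, -, -, -)
        dt, dx, dy, dz = (a - b for a, b in zip(p, q))
        return dt * dt - dx * dx - dy * dy - dz * dz

    phi = boost(0.6)
    p, q = (1.0, 2.0, 0.5, -1.0), (0.0, 0.0, 3.0, 4.0)
    print(interval2(p, q), interval2(phi(*p), phi(*q)))  # the two values agree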
History
An early appreciation of the role of motion in geometry was given by Alhazen (965 to 1039). His work "Space and its Nature" uses comparisons of the dimensions of a mobile body to quantify the vacuum of imaginary space. He was criticised by Omar Khayyam, who pointed out that Aristotle had condemned the use of motion in geometry.
In the 19th century Felix Klein became a proponent of group theory as a means to classify geometries according to their "groups of motions". He proposed using symmetry groups in his Erlangen program, a suggestion that was widely adopted. He noted that every Euclidean congruence is an affine mapping, and each of these is a projective transformation; therefore the group of projectivities contains the group of affine maps, which in turn contains the group of Euclidean congruences. The term motion, shorter than transformation, puts more emphasis on the adjectives: projective, affine, Euclidean. The context was thus expanded, so much that "In topology, the allowed movements are continuous invertible deformations that might be called elastic motions."
The science of kinematics is dedicated to rendering physical motion into expression as mathematical transformation. Frequently the transformation can be written using vector algebra and linear mapping. A simple example is a turn written as a complex number multiplication: z ↦ ωz, where ω is a complex number of unit modulus (ω = cos θ + i sin θ). Rotation in space is achieved by use of quaternions, and Lorentz transformations of spacetime by use of biquaternions. Early in the 20th century, hypercomplex number systems were examined. Later their automorphism groups led to exceptional groups such as G2.
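To make the complex-number description concrete (an added sketch, not from the source), a direct planar motion can be modelled as z ↦ ωz + b with |ω| = 1, combining a turn about the origin with a translation; the function names below are illustrative only.

    import cmath

    def direct_motion(omega, b):
        # Direct (orientation-preserving) planar motion z -> omega*z + b,
        # where omega of unit modulus contributes a rotation and b a translation.
        assert abs(abs(omega) - 1.0) < 1e-12
        return lambda z: omega * z + b

    turn = direct_motion(cmath.exp(1j * cmath.pi / 2), 0)  # quarter turn about the origin
    print(turn(1 + 0j))  # ~1j: the point (1, 0) maps to (0, 1)

    # Isometry check: distances between points are preserved
    f = direct_motion(cmath.exp(1j * 0.7), 2 - 1j)
    z, w = 3 + 4j, -1 + 2j
    print(abs(z - w), abs(f(z) - f(w)))  # the two distances agree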
In the 1890s logicians were reducing the primitive notions of synthetic geometry to an absolute minimum. Giuseppe Peano and Mario Pieri used the expression motion for the congruence of point pairs. Alessandro Padoa celebrated the reduction of primitive notions to merely point and motion in his report to the 1900 International Congress of Philosophy. It was at this congress that Bertrand Russell was exposed to continental logic through Peano. In his book Principles of Mathematics (1903), Russell considered a motion to be a Euclidean isometry that preserves orientation.
In 1914 D. M. Y. Sommerville used the idea of a geometric motion to establish the idea of distance in hyperbolic geometry when he wrote Elements of Non-Euclidean Geometry. He explains:
By a motion or displacement in the general sense is not meant a change of position of a single point or any bounded figure, but a displacement of the whole space, or, if we are dealing with only two dimensions, of the whole plane. A motion is a transformation which changes each point P into another point P ′ in such a way that distances and angles are unchanged.
Axioms of motion
László Rédei gives as axioms of motion:
Any motion is a one-to-one mapping of space R onto itself such that every three points on a line will be transformed into (three) points on a line.
The identical mapping of space R is a motion.
The product of two motions is a motion.
The inverse mapping of a motion is a motion.
If we have two planes A, A', two lines g, g' and two points P, P' such that P is on g, g is on A, P' is on g' and g' is on A', then there exists a motion mapping A to A', g to g' and P to P'.
If there is a plane A, a line g, and a point P such that P is on g and g is on A, then there exist four motions mapping A, g and P onto themselves, respectively, and not more than two of these motions may have every point of g as a fixed point, while there is one of them (i.e. the identity) for which every point of A is fixed.
There exist three points A, B, P on line g such that P is between A and B, and for every point C (unequal to P) between A and B there is a point D between C and P for which no motion with P as a fixed point can be found that will map C onto a point lying between D and P.
Axioms 2 to 4 imply that motions form a group.
Axiom 5 means that the group of motions acts transitively on the lines of R, so that there is a motion mapping every line to every line.
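As a small consistency check on this group structure (an added illustration reusing the planar model z ↦ ωz + b from the History section, not part of Rédei's axiom system), composing two direct planar motions, or inverting one, yields a motion of the same form.

    import cmath

    # A direct planar motion is represented by the pair (w, b), acting as z -> w*z + b.
    # The composite f1(f2(z)) works out to (w1*w2)*z + (w1*b2 + b1), and the inverse of
    # (w, b) is (1/w, -b/w) — both again of the same form, so closure and inverses hold.
    def compose(m1, m2):
        (w1, b1), (w2, b2) = m1, m2
        return (w1 * w2, w1 * b2 + b1)

    def inverse(m):
        w, b = m
        return (1 / w, -b / w)

    m = (cmath.exp(1j * 0.3), 1 + 2j)
    print(compose(m, inverse(m)))  # ~((1+0j), 0j): the identity motion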
Notes and references
Tristan Needham (1997) Visual Complex Analysis, Euclidean motion p. 34, direct motion p. 36, opposite motion p. 36, spherical motion p. 279, hyperbolic motion p. 306, Clarendon Press.
Miles Reid & Balázs Szendröi (2005) Geometry and Topology, Cambridge University Press.
External links
Motion. I.P. Egorov (originator), Encyclopedia of Mathematics.
Group of motions. I.P. Egorov (originator), Encyclopedia of Mathematics.
Metric geometry
Differential geometry
Transformation (function) | Motion (geometry) | Mathematics | 1,475 |
18,005,720 | https://en.wikipedia.org/wiki/Manganese%28II%29%20bromide | Manganese(II) bromide is the chemical compound composed of manganese and bromine with the formula MnBr2.
It can be used in place of palladium in the Stille reaction, a coupling that forms a carbon–carbon bond between an organotin compound and an organic electrophile.
References
Manganese(II) compounds
Bromides
Metal halides | Manganese(II) bromide | Chemistry | 68 |
2,378,222 | https://en.wikipedia.org/wiki/Islamic%20Society%20of%20Engineers | The Islamic Society of Engineers (ISE) is a principlist political organization of engineers in Iran. Formerly one of the parties aligned with the Combatant Clergy Association, it is close to the Islamic Coalition Party, whose decisions it mostly follows. Its independence and strength as a party have been questioned.
The Society was formed at the end of the Iran–Iraq War (1988) with the objective of elevating the Islamic, political, scientific and technical knowledge of the Muslim people of Iran, defending major freedoms such as freedom of expression and assembly, and continuing to campaign against foreign cultural agents, whether of Eastern or Western materialism.
Members
Mahmoud Ahmadinejad, the sixth president of Iran, was an active member since its establishment but turned against the party after his presidency.
Mohammad Reza Bahonar, current secretary-general and former deputy speaker of the Parliament of Iran
Manouchehr Mottaki, former minister of foreign affairs
Mohammad Nazemi Ardakani, former minister of cooperatives
Party leaders
Current officeholders
Morteza Nabavi, member of Expediency Discernment Council
Morteza Saghaiyannejad, mayor of Qom
Parliament members
Hamidreza Fouladgar (Isfahan)
Mohammad Mehdi Zahedi (Kerman and Ravar)
Jabbar Kouchakinejad (Rasht)
Mohammad Mehdi Mofatteh (Toiserkan)
References
External links
ecoi.net's profile of ISE
Hamshahri's report from the general congress of the Islamic Society of Engineers (in Persian)
1988 establishments in Iran
Political parties established in 1988
Principlist political groups in Iran
Engineering organizations | Islamic Society of Engineers | Engineering | 340 |
11,415,141 | https://en.wikipedia.org/wiki/Masturbation | Masturbation is a form of autoeroticism in which a person sexually stimulates their own genitals for sexual arousal or other sexual pleasure, usually to the point of orgasm. Stimulation may involve use of hands, everyday objects, sex toys, or more rarely, the mouth (autofellatio and autocunnilingus). Masturbation may also be performed with a sex partner, either masturbating together or watching the other partner masturbate; this is known as mutual masturbation.
Masturbation is frequent in both sexes. Various medical and psychological benefits have been attributed to a healthy attitude toward sexual activity in general and to masturbation in particular. No causal relationship between masturbation and any form of mental or physical disorder has been found. Masturbation is considered by clinicians to be a healthy, normal part of sexual enjoyment. The only exception to the finding that masturbation causes no harm is some cases of Peyronie's disease.
Masturbation has been depicted in art since prehistoric times, and is both mentioned and discussed in very early writings. Religions vary in their views of masturbation. In the 18th and 19th centuries, some European theologians and physicians described it in negative terms, but during the 20th century, these taboos generally declined. There has been an increase in discussion and portrayal of masturbation in art, popular music, television, films, and literature. The legal status of masturbation has also varied through history and masturbation in public is illegal in most countries. Masturbation in non-human animals has been observed both in the wild and captivity.
Etymology
The word masturbation was introduced in the 18th century, based on the Latin verb masturbari, alongside the slightly earlier onanism.
The Latin verb is of uncertain origin. Suggested derivations include an unattested word for penis, *mazdo, cognate with Greek μέζεα mézea 'genitals', or alternatively a corruption of an unattested *manu stuprare ("to defile with the hand"), by association with turbare 'to disturb'.
Terminology
While masturbation is the formal word for this practice, many other expressions are in common use. Terms such as playing with oneself, pleasuring oneself and slang such as wanking, jerking off, jacking off, fapping and frigging are common. Self-abuse and self-pollution were common in early modern times and are still found in modern dictionaries. A large variety of other euphemisms and dysphemisms exist which describe masturbation. For a list of terms, see the entry for masturbate in Wiktionary.
Techniques
General
Masturbation involves touching, pressing, rubbing, or massaging one's own genital area with the hands, fingers, or against an object such as a pillow; inserting fingers or an object into the vagina or anus (see anal masturbation); and stimulating the penis or vulva with an electric vibrator, which may also be inserted into the vagina or anus. It may also involve touching, rubbing, or pinching the nipples or other erogenous zones while masturbating. Both sexes sometimes apply lubricants to reduce friction.
Reading or viewing pornography, sexual fantasies, or other erotic stimuli may lead to a desire for sexual release such as by masturbation. Pornography is also used to assist with masturbation and to improve the experience of masturbating. Some people get sexual pleasure by inserting objects, such as urethral sounds, into the urethra (the tube through which urine and, in men, semen, flows), a practice known as urethral play or "sounding". Other objects such as ball point pens and thermometers are sometimes used, although this practice can lead to injury or infection. Some people use sex machines to simulate intercourse.
Men and women may masturbate until they are close to orgasm, stop for a while to reduce excitement, and then resume masturbating. They may repeat this cycle multiple times. This "stop and go" build-up, known as "edging", can achieve even stronger orgasms. Rarely, people quit stimulation just before orgasm to retain the heightened energy that normally comes down after orgasm.
Female masturbation
Manual stimulation (fingering)
Manual stimulation for masturbation among females involves the stroking or rubbing of the vulva, especially the clitoris, with an index or middle finger, or both. Sometimes one or more fingers may be inserted into the vagina to stroke its frontal wall where the G-spot may be located.
Other methods
Masturbation aids such as a vibrator, dildo, or Ben Wa balls can also be used to stimulate the vagina and clitoris. Many women caress their breasts or stimulate a nipple with the free hand and anal stimulation is also enjoyed by some. Personal lubricant is sometimes used during masturbation, especially when penetration is involved, but this is not universal and many women find their natural lubrication sufficient.
Common positions for female masturbation include lying on one's back or face down, sitting, squatting, kneeling, or standing. In a bath or shower, a female may direct water via a handheld showerhead at her clitoris, vulva, or perineum. Lying face down, one may use the hands or straddle a pillow, the corner or edge of the bed, a partner's leg or some scrunched-up clothing and "hump" the vulva and clitoris against it. Standing up, a chair, the corner of an item of furniture, or even a washing machine can be used to stimulate the clitoris through the labia and clothing. Some masturbate only using pressure applied to the clitoris without direct contact, for example by pressing the palm or ball of the hand against underwear or other clothing. In the 1920s, Havelock Ellis reported that turn-of-the-century seamstresses using treadle-operated sewing machines could achieve orgasm by sitting near the edge of their chairs.
Women can stimulate themselves sexually by crossing their legs tightly and clenching the muscles in their legs, creating pressure on the genitals. This can potentially be done in public without observers noticing. Thoughts, fantasies, and memories of previous instances of arousal and orgasm can produce sexual excitation. Some women can orgasm spontaneously by force of will alone, although this may not strictly qualify as masturbation as no physical stimulus is involved.
Sex therapists will sometimes recommend that female patients take time to masturbate to orgasm, for example, to help improve sexual health and relationships, to help determine what is erotically pleasing to them, and because mutual masturbation can lead to more satisfying sexual relationships and added intimacy.
Male masturbation
Manual stimulation
The most common masturbation technique is to hold the penis with a loose fist and then move the hand up and down on the glans and the shaft of the penis. This type of stimulation can result in orgasm and ejaculation. The hand motion and the speed of the action may vary throughout the masturbation session. Some men may use their free hand to fondle their scrotum and testicles, the perineum, and other body parts, or may place both hands directly on the penis. Common positions include standing, sitting, lying on one's back or lying face down, squatting, or kneeling. In some cases, to avoid friction and irritation or to enhance sexual sensation, men prefer to use a personal lubricant or saliva. Men may also rub or massage different areas of their glans, like its ventral surface, the left and right sides, the rounded rim, known as the corona, and around the frenulum. Some men lie face down in prone position and gently rub their penis against a comfortable surface, such as a mattress or pillow, a technique known as prone masturbation.
Other methods
Prostate massage is one other technique used for sexual stimulation, often to reach orgasm. The prostate is sometimes referred to as the "male G-spot" or P-spot. Some men can achieve orgasm through stimulation of the prostate gland, by stimulating it using a well-lubricated finger or dildo inserted through the anus into the rectum. Men who report the sensation of prostate stimulation often give descriptions similar to females' accounts of G-spot stimulation. In some men, prostate stimulation might produce more intense orgasms than penile stimulation. Stimulating the prostate from outside, via pressure on the perineum, can be pleasurable as well. Anal masturbation without any prostate stimulation, with fingers or otherwise, is also a technique that some men enjoy. The muscles of the anus contract during orgasm, thus the presence of an object holding the sphincter open can strengthen the sensation of the contractions and intensify orgasm.
Some men keep their hands stationary while pumping into them with pelvic thrusts to simulate the motions of sexual intercourse. The nipples are erogenous zones and vigorous stimulation of them during masturbation can result in enhanced sexual arousal. Others may also use vibrators and other sexual devices for sexual stimulation. The device can be used to stimulate the penis and other areas, like the scrotum, the perineum or the anus. Other sexual toys for men are artificial vaginas, like fleshlights or other simulacrums. In a bath or shower, a male may direct water via a handheld showerhead at his frenulum, testicles, or perineum. A somewhat controversial ejaculation control technique is to put intense pressure on the perineum, about halfway between the scrotum and the anus, just before ejaculating. This can, however, redirect semen into the bladder (referred to as retrograde ejaculation).
Mutual masturbation
Mutual masturbation involves two or more people who either masturbate at the same time or sexually stimulate each other, usually with the hands. It can be practiced by people of any sexual orientation, and can be part of other sexual activity. It may be used as foreplay, or as an alternative to sexual penetration. When used as an alternative to penile-vaginal penetration, the goal may be to preserve virginity or to avoid risk of pregnancy.
Forms of mutual masturbation include:
Non-contact mutual masturbation – Two people masturbating in the presence of each other but not touching.
Contact mutual masturbation – One person touching another person to masturbate. The other person may do the same during or after.
Non-contact group – More than two people masturbating in the presence of each other in a group but not touching each other.
Contact group – More than two people physically touching each other to masturbate as a group.
Mutual masturbation foreplay – The manual stimulation of each other's genitals where the session eventually leads to sexual intercourse.
Remote mutual masturbation – Some mutual masturbation occurs between individuals in different locations, facilitated by internet enabled devices, sometimes referred to as teledildonics.
Frequency, age, and sex
Frequency of masturbation is determined by many factors, e.g., one's resistance to sexual tension, hormone levels influencing sexual arousal, sexual habits, peer influences, health and one's attitude to masturbation formed by culture; E. Heiby and J. Becker examined the latter. Medical causes have also been associated with masturbation, wherein masturbation is not the cause but the effect, with the exception of inserting foreign objects into the urinary bladder.
Different studies have found that masturbation is frequent in humans. Alfred Kinsey's 1950s studies on the US population have shown that 92% of men and 62% of women have masturbated during their lifespan. Similar results have been found in a 2007 British national probability survey. It was found that, among individuals aged 16 to 44, 95% of men and 71% of women masturbated at some point in their lives. 73% of men and 37% of women reported masturbating in the four weeks before their interview, while 53% of men and 18% of women reported masturbating in the previous seven days.
The Merck Manual says that 97% of men and 80% of women have masturbated and that, generally speaking, males masturbate more than females. It states that almost half of the population reported to have masturbated in the past four weeks.
Masturbation is considered normal when performed by children, even in early infancy. In 2009, the Sheffield NHS Health Trust issued a pamphlet called "Pleasure" which discussed the health benefits of masturbation. This was done in response to data and experience from the other EU member states to reduce teen pregnancy and STIs (STDs), and to promote healthy habits.
According to the New Oxford Textbook of Psychiatry (1st ed.), "Masturbation and sexual play are common well before puberty. Sexual behaviour in young children is common, and should only be regarded as a sign of sexual abuse when it is out of context and is inappropriate."
In the book Human Sexuality: Diversity in Contemporary America, by Strong, Devault and Sayad, the authors point out, "A baby boy may laugh in his crib while playing with his erect penis". "Baby girls sometimes move their bodies rhythmically, almost violently, appearing to experience orgasm." Italian gynecologists Giorgio Giorgi and Marco Siccardi observed via ultrasound a female fetus possibly masturbating and having what appeared to be an orgasm.
Popular belief asserts that individuals of either sex who are not in sexually active relationships tend to masturbate more frequently than those who are; however, much of the time this is not true as masturbation alone or with a partner is often a feature of a relationship. Contrary to this belief, several studies reveal a positive correlation between the frequency of masturbation and the frequency of intercourse. A study has reported a significantly higher rate of masturbation in gay men and women who were in a relationship.
Coon and Mitterer stated: "Approximately 70 percent of married women and men masturbate at least occasionally."
Mitterer, Coon and Martini wrote in 2015: "Do more men masturbate than women? Yes. While 89 percent of women reported that they had masturbated at some time, the figure was 95 percent for men. (Some cynics add, 'And the other 5 percent lied!')"
Evolutionary utility
Female masturbation alters conditions in the vagina, cervix and uterus, in ways that can alter the chances of conception from intercourse, depending on the timing of the masturbation. A female's orgasm between one minute before and 45 minutes after insemination favors the chances of sperm reaching her egg. If, for example, she has had intercourse with more than one male, such an orgasm can increase the likelihood of a pregnancy by one of them. Female masturbation can also provide protection against cervical infections by increasing the acidity of the cervical mucus and by moving debris out of the cervix.
In males, masturbation flushes out old sperm with low motility from the male's genital tract. The next ejaculation then contains proportionally more fresh sperm, which have higher chances of achieving conception during intercourse. If more than one male has intercourse with a female, the sperm with the highest motility will compete more effectively.
Health effects
The American Medical Association declared masturbation to be normal by consensus in 1972. It does not deplete one's body of energy or cause premature ejaculation. The medical consensus is that masturbation is a medically healthy and psychologically normal habit. According to the Merck Manual of Diagnosis and Therapy, "It is considered abnormal only when it inhibits partner-oriented behavior, is done in public, or is sufficiently compulsive to cause distress."
Solo masturbation is a sexual activity that is nearly free of risk of sexually transmitted infection. With two or more participants, the risk of sexually transmitted infection, while not eliminated, remains lower than with most forms of penetrative sex. Support for such a view and for making masturbation part of the American sex education curriculum led to the dismissal of US Surgeon General Joycelyn Elders during the Clinton administration.
Benefits
Masturbation among adolescents contributes to their developing a sense of mastery over sexual impulses, and it has a role in the physical and emotional development of prepubescents and pubescents.
Sex therapists sometimes recommend that female patients take time to masturbate to orgasm; for example, to help improve sexual health and relationships, to help determine what is erotically pleasing to them, and because mutual masturbation can lead to more satisfying sexual relationships and added intimacy. Encyclopedia Britannica mentions the use of masturbation in sex therapy, as does Human Sexuality: An Encyclopedia. Britannica also calls the idea that masturbation is physically harmful a "myth", and states that there is no evidence that it is an immature behavior.
Mutual masturbation enables partners in a couple to reveal the "map to [their] pleasure centers", learning how they enjoy being touched. When intercourse is inconvenient or impractical, mutual masturbation affords couples the opportunity to obtain sexual release as often as desired.
It is held in many mental health circles that masturbation can relieve depression and lead to a higher sense of self-esteem. When one partner in a relationship wants more sex than the other, masturbation can provide a balancing effect and promote a more harmonious relationship.
In 2003, an Australian research team led by Graham Giles of The Cancer Council Australia found that males who masturbated frequently had a lower probability of developing prostate cancer, although they could not demonstrate a direct causation. A 2008 study concluded that frequent ejaculation between the ages of 20 and 40 was correlated with higher risk of developing prostate cancer, while frequent ejaculation in the sixth decade of life was found to be correlated with a lower risk. However, a larger 2016 study found that regular ejaculation markedly reduced prostate cancer risk in all age groups.
A study published in 1997 found an inverse association between death from coronary heart disease and frequency of orgasm, even given the risk that myocardial ischaemia and myocardial infarction can be triggered by sexual activity. Its authors stated: "The association between frequency of orgasm and all cause mortality was also examined using the midpoint of each response category recorded as number of orgasms per year. The age adjusted odds ratio for an increase of 100 orgasms per year was 0.64 (0.44 to 0.95)." That is, a difference in mortality appeared between any two subjects when one subject ejaculated around two times per week more than the other (100 orgasms per year is roughly two per week). Assuming a broad-range average of between three and five ejaculations per week for healthy males, this would mean five to seven ejaculations per week. This is consistent with a 2003 paper that found the strength of these correlations increased with increasing frequency of ejaculation.
A 2008 study at Tabriz Medical University found that ejaculation reduces swollen nasal blood vessels, freeing the airway for normal breathing. The mechanism is through stimulation of the sympathetic nervous system and is long-lasting. The study author suggests: "It can be done [from] time-to-time to alleviate the congestion and the patient can adjust the number of intercourses or masturbations depending on the severity of the symptoms."
Sexual climax leaves an individual in a relaxed and contented state, frequently followed by drowsiness and sleep.
Some professionals consider masturbation equivalent to a cardiovascular workout. Though research remains scant, those suffering from cardiovascular disorders, particularly those recovering from heart attacks, should resume physical activity gradually and with the frequency and rigor which their physical status will allow. This limitation can serve as encouragement to follow through with physical therapy sessions to help improve endurance. In general, sex slightly increases energy consumption.
Risks
Masturbation is generally safe, and complications are rare. When issues do occur, they are generally due to methodology or underlying psychiatric illness.
Those who insert objects as aids to masturbation risk them becoming stuck (whether due to size, technique, or anatomy; including rectal foreign bodies and urethral foreign bodies), causing damage. Such risks can affect both men and women, with a multitude of case reports available, including that of a female who pierced her urethra after inserting two pencils during masturbation, and the case of a male who required extensive treatment after inserting a pair of headphones into his bladder.
A male whose penis is bluntly traumatized during intercourse or masturbation may, rarely, sustain a penile fracture or develop Peyronie's disease. In these cases, any energetic manipulation of the penis can cause discomfort or further damage.
A small percentage of males experience postorgasmic illness syndrome (POIS), which can cause severe muscle pain throughout the body and other symptoms immediately following ejaculation, whether due to masturbation or partnered sex. The symptoms last for up to a week. Some doctors speculate that the frequency of POIS "in the population may be greater than has been reported in the academic literature", and that many cases are undiagnosed.
Compulsive masturbation and other compulsive behaviors can be signs of an emotional problem, which may need to be addressed by a mental health specialist. As with any "nervous habit", it is more helpful to consider the causes of compulsive behavior, rather than try to repress masturbation.
Alongside many other factors—such as medical evidence, age-inappropriate sexual knowledge, sexualized play and precocious or seductive behavior—excessive masturbation may be an indicator of sexual abuse.
According to DSM-5-TR, "Delayed ejaculation is associated with highly frequent masturbation, use of masturbation techniques not easily duplicated by a partner, and marked disparities between sexual fantasies during masturbation and the reality of sex with a partner."
Cultural history
Ancient world
The sexual stimulation of one's own genitals has been interpreted variously by different religions, the subject of legislation, social controversy, activism, as well as intellectual study in sexology. Social views regarding masturbation taboo have varied greatly in different cultures, and over history.
There are depictions of male and female masturbation in prehistoric rock paintings around the world. From the earliest records, the ancient Sumerians had very relaxed attitudes toward sex. The Sumerians widely believed that masturbation enhanced sexual potency, both for men and for women, and they frequently engaged in it, both alone and with their partners. Men would often use puru-oil, a special oil probably mixed with pulverized iron ore intended to enhance friction. Masturbation was also an act of creation and, in Sumerian mythology, the god Enki was believed to have created the Tigris and Euphrates rivers by masturbating and ejaculating into their empty riverbeds. The ancient Egyptians also regarded masturbation by a deity as an act of creation; the god Atum was believed to have created the universe by masturbating to ejaculation.
The ancient Greeks also regarded masturbation as a normal and healthy substitute for other forms of sexual pleasure. Most information about masturbation in ancient Greece comes from surviving works of ancient Greek comedy and pottery. Masturbation is frequently referenced in the surviving comedies of Aristophanes, which are the most important sources of information on ancient Greek views on the subject. In ancient Greek pottery, satyrs are often depicted masturbating. According to the Lives and Opinions of Eminent Philosophers by the third-century AD biographer Diogenes Laërtius, Diogenes of Sinope, the fourth-century BC Cynic philosopher, often masturbated in public, which was considered scandalous. When people confronted him over this, he would say, "If only it were as easy to banish hunger by rubbing my belly."
Among non-western perspectives on the matter, some teachers and practitioners of Traditional Chinese medicine, Taoist meditative and martial arts say that masturbation can cause a lowered energy level of the yang in men, but causes no harm to women with yin, even going further to introduce masturbating tools for women in books. Within the African Congo Basin, the Aka, Ngandu, Lesi, Brbs, and Ituri ethnic groups all lack a word for masturbation in their languages and are confused by the concept of masturbation.
Development of the contemporary Western world view
18th century
Onanism is a hybrid term which combines the proper noun, Onan, with the suffix, -ism. Notions of self-pollution, impurity and uncleanness were increasingly associated with various other sexual vices and crimes of the body (such as fornication, sodomy, adultery, incest and obscene language); in reaction to the 17th-century libertine culture, middle-class moralists increasingly campaigned for a reformation of manners and a stricter regulation of the body. Paradoxically, a crime that was secret and private became a popular and fashionable topic. Moreover, writers tended to focus more on the perceived links with mental and physical illnesses that were deemed to be associated with the sense of moral outrage. Attention increasingly shifted to the prevention and cure of this illness which perilously sapped men of their virility.
The first use of the word "onanism" to consistently and specifically refer to masturbation is a pamphlet first distributed in London in 1716, titled "Onania, or the Heinous Sin of self-Pollution, And All Its Frightful Consequences, In Both Sexes, Considered: With Spiritual and Physical Advice To Those Who Have Already Injured Themselves By This Abominable Practice." The Online Etymology Dictionary, however, claims the earliest known use of onanism occurred in 1727. In 1743–1745, the British physician Robert James published A Medicinal Dictionary, in which he described masturbation as being "productive of the most deplorable and generally incurable disorders" and stated that "there is perhaps no sin productive of so many hideous consequences". One of the many horrified by the descriptions of malady in Onania was the notable Swiss physician Samuel-Auguste Tissot. In 1760, he published L'Onanisme, his own comprehensive medical treatise on the purported ill-effects of masturbation. Though Tissot's ideas are now considered conjectural at best, his treatise was presented as a scholarly, scientific work in a time when experimental physiology was practically nonexistent.
Immanuel Kant regarded masturbation as a violation of the moral law. In The Metaphysics of Morals (1797), he made the a posteriori argument that "such an unnatural use of one's sexual attribute" strikes "everyone upon his thinking of it" as "a violation of one's duty to himself", and suggested that it was regarded as immoral even to give it its proper name (unlike the case of the similarly undutiful act of suicide). He went on, however, to acknowledge that "it is not so easy to produce a rational demonstration of the inadmissibility of that unnatural use", but ultimately concluded that its immorality lay in the fact that "a man gives up his personality … when he uses himself merely as a means for the gratification of an animal drive". His arguments were rejected as flawed by ethicists of the 20th and 21st centuries.
19th century
By 1838, Jean Esquirol had declared in his Des Maladies Mentales that masturbation was "recognized in all countries as a cause of insanity". The medical literature of the time also described more invasive procedures including electric shock treatment, infibulation, restraining devices like chastity belts and straitjackets, cauterization or – as a last resort – wholesale surgical excision of the genitals. Medical attitudes toward masturbation began to change towards the end of the 19th century when H. Havelock Ellis, in his seminal 1897 work Studies in the Psychology of Sex, questioned Tissot's premises.
20th century
In 1905, Sigmund Freud addressed masturbation in his Three Essays on the Theory of Sexuality and associated it with addictive substances. He described the masturbation of infants at the period when the infant is nursing, at four years of age, and at puberty. At the same time, the supposed medical condition of hysteria (from the Greek hystera, or uterus) was being treated by what would now be described as medically administered or medically prescribed masturbation for women. In 1910, the meetings of the Vienna psychoanalytic circle discussed the moral or health effects of masturbation, but its publication on the matter was suppressed. "Concerning Specific Forms of Masturbation" is a 1922 essay by another Austrian, the psychiatrist and psychoanalyst Wilhelm Reich. In the seven-and-a-half-page essay Reich accepts the prevalent notions on the roles of unconscious fantasy and the subsequent emerging guilt feelings, which he saw as originating from the act itself.
By 1930, F. W. W. Griffin, editor of The Scouter, had written in a book for Rover Scouts that the temptation to masturbate was "a quite natural stage of development" and, citing Ellis' work, held that "the effort to achieve complete abstinence was a very serious error." The work of sexologist Alfred Kinsey during the 1940s and 1950s, most notably the Kinsey Reports, insisted that masturbation was an instinctive behavior for both males and females. In 1961 The Encyclopedia of Sexual Behavior, edited by Albert Ellis and Albert Abarbanel, declared that masturbation is normal and healthy at any age. In the US, masturbation has not been a diagnosable condition since DSM II (1968). Circumcision was sometimes used as a preventive against masturbation, with some mainstream pediatric manuals in English-speaking countries continuing to recommend it as a deterrent into the 1950s, and a 1970 edition of the standard US urology textbook said "Parents readily ... adopt measures which may avert masturbation. Circumcision is usually advised on these grounds."
Writing in 1973, Thomas Szasz summed up the shift in scientific consensus: "Masturbation: the primary sexual activity of mankind. In the nineteenth century, it was a disease; in the twentieth, it's a cure."
Dörner and others wrote in their now classic book (1978): "Self-satisfaction is therefore a priceless good for the success of sexual pleasure, but also for other partnership and sexual relationships: for only if I can offer something to myself can I also offer it to someone else. ... Not self-satisfaction, but feelings closely correlated with it need among others help through counseling, respectively therapy!"
In the 1980s, Michel Foucault argued that the masturbation taboo was "rape by the parents of the sexual activity of their children". However, in 1994, when the surgeon general of the United States, Joycelyn Elders, suggested that school sex education curricula mention, as a side note, that masturbation is safe and healthy, she was forced to resign, with opponents asserting that she was promoting the teaching of how to masturbate.
21st century
Thomas W. Laqueur stated: "Less clinical, less overtly political, the solitary vice of the imagination and of fantasy that had so terrified Rousseau had been transformed into a virtue: self-pleasuring was the path to self-knowledge, self-discovery, and spiritual well-being."
Both practices and cultural views of masturbation have continued to evolve in the 21st century, partly because everyday life has become increasingly technological. For example, digital photographs or live video may be used to share masturbatory experiences either in a broadcast format (possibly in exchange for money, as with performances by webcam models) or between members of a long-distance relationship. Teledildonics is a growing field. Masturbation has been depicted as a complicated part of "Love in the 21st Century" in the Channel 4 drama of the same name.
In modern culture
Stigma
Although medical professionals and scientists have found extensive evidence that masturbation is healthy and commonly practiced by males and females, stigma on the topic still persists today. In November 2013, Matthew Burdette died by suicide after a fellow student secretly filmed him masturbating in a restroom stall and published the video.
In an article published by the nonprofit organization Planned Parenthood Federation of America, it was reported: "Proving that these ancient stigmas against masturbation are still alive and felt by women and men, researchers in 1994 found that half of the adult women and men who masturbate feel guilty about it (Laumann, et al., 1994. p.85). Another study in 2000 found that adolescent young men are still frequently afraid to admit that they masturbate (Halpern, et al., 2000, 327)."
Sperm donation
Male masturbation may be used as a method to obtain semen for third-party reproductive procedures such as artificial insemination and in vitro fertilisation, which may involve the use of either partner or donor sperm.
At a sperm bank or fertility clinic, a special room or cabin may be set aside so that semen may be produced by male masturbation for use in fertility treatments such as artificial insemination. Most semen used for sperm donation, and all semen donated through a sperm bank by sperm donors, is produced in this way. The facility at a sperm bank used for this purpose is known as a masturbatorium (US) or men's production room (UK). A bed or couch is usually provided for the man, and pornographic films or other material may be made available.
Encouragement
In the UK in 2009, a leaflet was issued by the National Health Service in Sheffield carrying the slogan, "an orgasm a day keeps the doctor away". It also says: "Health promotion experts advocate five portions of fruit and veg a day and 30 minutes' physical activity three times a week. What about sex or masturbation twice a week?" This leaflet has been circulated to parents, teachers and youth workers and is meant to update sex education by telling older school students about the benefits of enjoyable sex. Its authors have said that for too long, experts have concentrated on the need for "safe sex" and committed relationships while ignoring the principal reason that many people have sex. The leaflet is entitled Pleasure. Instead of promoting teenage sex, it could encourage young people to delay losing their virginity until they are certain they will enjoy the experience, said one of its authors.
The Spanish region of Extremadura launched a program in 2009 to encourage "sexual self-exploration and the discovery of self-pleasure" in people aged from 14 to 17. The €14,000 campaign includes leaflets, flyers, a "fanzine", and workshops for the young in which they receive instruction on masturbation techniques along with advice on contraception and self-respect. The initiative, whose slogan is, "Pleasure is in your own hands" has angered local right-wing politicians and challenged traditional Roman Catholic views. Officials from the neighboring region of Andalucia have expressed an interest in copying the program.
The textbook Palliative care nursing: quality care to the end of life states, "Terminally ill people are likely no different from the general population regarding their masturbation habits. Palliative care practitioners should routinely ask their patients if anything interferes in their ability to masturbate and then work with the patient to correct the problem if it is identified."
The sex-positive movement argues for a supportive environment for masturbation.
A 2016 review paper reports that safe, moderate (not excessive) masturbation is beneficial for heart health and decreases the risk of major adverse cardiovascular diseases.
A 2019 research paper says that masturbation, in moderation, can improve sleep quality, especially when one or more orgasms occur during the activity.
Law
The legal treatment of masturbation has varied at different times, from complete illegality to virtually unlimited acceptance. In a 17th-century law code for the Puritan colony of New Haven, Connecticut, blasphemers, homosexuals and masturbators were eligible for the death penalty.
Often, masturbation in the sight of others is prosecuted under a general law such as public indecency, though some laws make specific mention of masturbation. In the UK, masturbating in public is illegal under Section 28 of the Town Police Clauses Act 1847. The penalty may be up to 14 days in prison, depending on a range of circumstantial factors. In the US, laws vary from state to state. In 2010, the Supreme Court of Alabama upheld a state law criminalizing the distribution of sex toys. In the city of Charlotte, North Carolina, masturbating in public is a class 3 misdemeanor. In 2013, a man found masturbating openly on a beach in Sweden was cleared of charges of sexual assault, the court finding that his activities had not been directed towards any specific person.
In many jurisdictions, masturbation by one person of another is considered digital penetration which may be illegal in some cases, such as when the other person is a minor.
There is debate whether masturbation should be promoted in correctional institutions. Restrictions on pornography, used to accompany masturbation, are common in American correctional facilities. Connecticut Department of Corrections officials say that these restrictions are intended to avoid a hostile work environment for correctional officers. Other researchers argue allowing masturbation could help prisoners restrict their sexual urges to their imaginations rather than engaging in prison rape or other non-masturbatory sexual activity that could pose sexually transmitted infection or other health risks.
Religious views
Religions vary broadly in their views of masturbation, from considering it completely impermissible (for example in Catholicism, most forms of Islam, and some sects of Judaism) to encouraging and refining it (as, for example, in some Dharmic, Neotantra, and Taoist sexual practices).
In popular culture
Literature
The 1858 schoolboys' novel Eric, or, Little by Little was a tract against masturbation, but it did not mention the subject except extremely obliquely as "Kibroth-Hattaavah", a place mentioned in the Old Testament where those that lusted after meat were buried.
In October 1972, an important censorship case was held in Australia, leading to the banning of Philip Roth's novel Portnoy's Complaint in that country due to its masturbation references. The censorship led to public outcry at the time.
Further portrayals and references to masturbation have occurred throughout literature, and the practice itself has even contributed to the production of literature among certain writers, such as Wolfe, Balzac, Flaubert and John Cheever.
Perhaps the most famous fictional depiction of masturbation occurs in the "Nausicaa" episode of Ulysses by James Joyce. Here, the novel's protagonist Bloom brings himself to covert climax during a public fireworks display after being aroused by a young woman's exhibitionism.
Music
In popular music, there are various songs that deal with masturbation. Some of the earliest examples are "My Ding-a-Ling" by Chuck Berry and "Mary Ann with the Shaky Hand" and "Pictures of Lily" by The Who.
More recent popular songs include "Love Myself" by Hailee Steinfeld, "Rosie" by Jackson Browne, "Una luna de miel en la mano" by Virus, "I Touch Myself" by the Divinyls, "Very Busy People" by The Limousines, "Dancing with Myself" by Billy Idol, "Everyday I Die" by Gary Numan, "You're Makin' Me High" by Toni Braxton, "Holding My Own" by The Darkness, "Nickelodeon Girls" by Pink Guy, "Vibe On" by Dannii Minogue, "Orgasm Addict" by the Buzzcocks, "Spank Thru" and "Paper Cuts" by Nirvana, "Captain Jack" and "The Stranger" by Billy Joel, "Blister in the Sun" by Violent Femmes, "Longview" by Green Day, "M+Ms" by Blink-182, "Wow, I Can Get Sexual Too" by Say Anything, "Touch of My Hand" by Britney Spears, "Fingers" and "U + Ur Hand" by P!nk, "So Happy I Could Die" by Lady Gaga, "Masturbating Jimmy" by The Tiger Lillies, "When Life Gets Boring" by Gob, "Daybed" by FKA Twigs, "Get a Grip" by Semisonic, "Darling Nikki" by Prince, and "Masturbation" by Dadaroma. The 1983 recording "She Bop" by Cyndi Lauper was one of the first fifteen songs ever required to carry a Parental Advisory sticker for sexual content. In a 1993 interview on The Howard Stern Show, Lauper claimed she recorded the vocal track in the nude. The song "Masturbates" by rock group Mindless Self Indulgence also deals with the concept of auto-erotic activity in a punk framework.
Television
In the Seinfeld episode "The Contest", the show's main characters enter into a contest to see who can go the longest without masturbating. Because Seinfeld's network, NBC, did not think masturbation was a suitable topic for prime-time television, the word is never used. Instead, the subject is described using a series of euphemisms. "Master of my domain" became a part of the American lexicon from this episode.
Another NBC show, Late Night with Conan O'Brien, had a character known as the Masturbating Bear, a costume of a bear with a diaper covering its genitals. The Masturbating Bear would touch his diaper to simulate masturbation. Prior to leaving Late Night to become host of The Tonight Show, O'Brien retired the character because of concerns about its appropriateness in an earlier time slot. The Masturbating Bear nevertheless made his Tonight Show debut during the final days of O'Brien's tenure as host. By then it was clear that O'Brien was being removed from the show, and he spent his last episodes pushing the envelope with sketches that would not typically have been appropriate for the Tonight Show, one of which featured the Masturbating Bear. After much debate about whether the character could be used on O'Brien's new TBS show, Conan, the Masturbating Bear made an appearance on its first episode.
In March 2007, the UK broadcaster Channel 4 was to air a season of television programs about masturbation, called Wank Week. (Wank is a Briticism for masturbate.) The series came under public attack from senior television figures and was pulled amid claims of declining editorial standards and controversy over the channel's public service broadcasting credentials.
Film
In Monty Python's The Meaning of Life (1983), the song "Every Sperm Is Sacred" is a satire of Catholic teachings on reproduction that forbid masturbation (and contraception) by artificial means. In Talking Cock by comedian Richard Herring, the sketch is used to ridicule those who condemn masturbation (and sex) for any purpose other than procreation.
In American Pie (1999), Nadia (Shannon Elizabeth) discovers Jim's (Jason Biggs) pornography collection and, while sitting on his bed half-naked, masturbates to it. In American Reunion (2012), Noah (Eugene Levy) attempts to explain to Jim the potential joys and difficulties of explaining masturbation to his future son.
Pornography
Depictions of male and female masturbation are common in pornography, including gay pornography. Am Abend (1910), one of the earliest pornographic films collected at the Kinsey Institute for Research in Sex, Gender, and Reproduction, starts with a female masturbation scene. Solo performances in gay pornography were described in 1985 as "either or both active (tense, upright) and/or passive (supine, exposed, languid, available)", whereas female solo performances are said to be "exclusively passive (supine, spread, seated, squatted, orifices offered, etc.)". Solo pornography recognized with AVN Awards includes the All Alone series and All Natural: Glamour Solos.
Other animals
Masturbatory behavior has been documented in a very wide range of species. Individuals of some species have been known to create tools for masturbation purposes.
See also
Adult video arcade
Cum shot
Die große Nacht im Eimer
Jugum penis, archaic anti-masturbation device
National Masturbation Day
Nocturnal emission
Phone sex
Self-love
Sex doll
Sex magic
Sexual assistance
Venus Butterfly
References
Further reading
External links
(University of Guelph students enrolled in FRHD 4200 Issues in Human Sexuality created a Public Service Announcement (PSA) on the health benefits of masturbation.)
Female masturbation
Habits
Leisure activities
Male masturbation
Sexual acts | Masturbation | Biology | 9,523 |
633,822 | https://en.wikipedia.org/wiki/Sarich%20orbital%20engine | The Sarich orbital engine is a type of internal combustion engine, invented in 1972 by Ralph Sarich, an engineer from Perth, Australia, which features orbital rather than reciprocating motion of its central piston. It differs from the conceptually similar Wankel engine by using a generally prismatic shaped piston that orbits the axis of the engine, without rotation, rather than the rotating trilobular rotor of the Wankel.
Overview
The engine promised to be about one third the size and weight of conventional piston engines due to the compact arrangement of the combustion chambers. Another advantage is that there is no high-speed contact area with the engine walls, unlike in the Wankel engine in which edge wear is a problem. However, the combustion chambers are divided by vanes which do have contact with both the walls and the orbiting piston and are more difficult to seal due to the eight corners of the combustion chamber.
In the patent, the engine is described as a two-stroke internal combustion engine, but the patent claims that with a different valve mechanism it could be used as a four-stroke engine. However, most of the development work was done on four-stroke versions with both poppet and disk valve arrangements. A supercharger is required if the engine is operated in two-stroke mode, since crankcase pumping cannot be used to charge the combustion chamber.
Notably, in his seminal book researching and documenting the possible ways to create a rotary piston displacer, Felix Wankel illustrates the orbiting-piston and reciprocating-vane mechanism used in the orbital engine.
Research and development
The Orbital Engine Company, with funding from partner BHP and Federal Government R&D grants, worked on the concept from 1972 until 1983 and had a 3.5 L four-stroke engine performing as well as comparable petrol car engines of the day at typical road-load conditions. A technical paper was presented to the Society of Automotive Engineers in 1982, and is now part of their historic transaction collection.
A major reason for the good performance of this engine was the development of a unique and patented injection system directed into the combustion chamber which created a stratified charge combustion process.
Several auto makers from around the world showed great interest in the engine; however, at least $100 million of further development work was judged to be required to commercialise it, and the funding sources decided this was not a sound investment. Instead, the same injection and combustion system was adapted onto existing two- and four-stroke petrol engines, and this work became the future of the company under the name Orbital Combustion Process.
During the prototyping process, the engine was installed in three vehicles, including a three-cylinder unit in the Toyota Kijang and a two-cylinder unit in the Suzuki Karimun, fitted by Sangeet Hari Kapoor while he was working at PT Wahana Perkasa Auto Jaya, a company in the Texmaco group. The three-cylinder unit was also installed in 100 Ford Festivas in Australia, dubbed the Festiva EcoSport; the verdict was that while the car was somewhat more powerful than the Ford Festiva 1.3, it failed to deliver emissions compliance, efficiency, and NVH (noise, vibration, harshness) reduction at the same time.
Technical problems
The orbital engine has two fundamental design issues, which also plague the Wankel engine:
A combustion chamber with a large surface-to-volume ratio, which leads to greater heat losses through the chamber walls and so a loss of power; these losses can be greatly reduced using stratified combustion (see the sketch after this list);
Long sealing paths and multiple corner seals, which make it harder to contain the chamber gases, so some pressure, and thus power, is lost.
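To make the first point concrete, the following Python sketch (with invented dimensions) compares two chambers of equal volume, one compact and one flattened; the flatter shape exposes roughly three times more wall area per unit of trapped gas, and the walls are where combustion heat is lost.

import math

def cylinder_surface_to_volume(radius, height):
    """Surface-to-volume ratio of a closed cylinder, in 1/m."""
    volume = math.pi * radius ** 2 * height
    surface = 2 * math.pi * radius * (radius + height)
    return surface / volume

# Two chambers of identical volume (about 0.2 L); dimensions are illustrative only.
compact = cylinder_surface_to_volume(radius=0.04, height=0.04)       # cube-like
flattened = cylinder_surface_to_volume(radius=0.10, height=0.0064)   # pancake-like

print(f"compact chamber S/V:   {compact:.0f} per metre")    # about 100
print(f"flattened chamber S/V: {flattened:.0f} per metre")  # about 330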
Drawings
Some conceptual sketches from the engine's patent:
See also
Orbital Corporation
Powerplus supercharger
William Selwood
Cecil Hughes
References
External links
Pistonless rotary engine
Australian inventions
Engine technology | Sarich orbital engine | Technology | 770 |
3,811,895 | https://en.wikipedia.org/wiki/Skycam |
Skycam is a computer-controlled, stabilized, cable-suspended camera system. The system is maneuvered through three dimensions in the open space over the playing area of a stadium or arena by a computer-controlled cable-drive system. It is responsible for bringing video game–like camera angles to television sports coverage. The camera package is lightweight and can travel at high speed across the venue.
History
Skycam was invented by Garrett Brown (also the inventor of the Steadicam) in the early 1980s. The patent for Skycam was assigned to Skycam, Inc. In 2004, Skycam, Inc. was acquired by Winnercomm, Inc. In 2009, Winnercomm was acquired by Outdoor Channel Holdings, Inc., parent company of the Outdoor Channel. In 2013, Outdoor Channel was acquired by Kroenke Sports & Entertainment, owner of several sports franchises including the Los Angeles Rams and Denver Nuggets.
In 2015, a federal lawsuit was filed by Nic Salomon, the former President of Skycam, claiming intentional interference with contractual relations related to the 2013 acquisition by Kroenke Sports & Entertainment. The case was allowed to proceed by the court in the Northern District of Texas in August 2017. It was then dismissed in February 2019, weeks before a jury trial. In March 2019, Salomon appealed to the United States Court of Appeals for the Fifth Circuit. In April 2021, the Court of Appeals ruled in Kroenke Sports' favor.
Despite the dispute, Skycam remains an important technology for the presentation of football content.
Usage
While "SkyCam" is a registered trademark, the term "Skycam" is often used generically for cable-suspended camera system, and competing systems like CableCam (invented by Jim Rodnunsky but also a subsidiary of Kroenke Sports & Entertainment, LLC), Spidercam and Robycam 3D. Systems like it have been in limited use since the mid-1980s when the technology was first patented, but until the mid-1990s progress was slow due to limitations in computer and servo motor technology as well as cost (a 2001 estimate pegged the cost to use the Skycam at $30,000 per event). All of these systems began seeing more widespread use in the 21st century.
American football
Skycam was first publicly used in fall 1984, at a preseason National Football League (NFL) game in San Diego between the Chargers and 49ers, televised by CBS. NBC debuted the first wire-flown remote-controlled camera used in sports coverage at the 1985 Orange Bowl.
The XFL was one of the first leagues to make extensive use of the Skycam as a primary camera angle for broadcasts when it debuted in spring 2001. Traditional camera angles were used more prominently after the first week of play; the "Xcam" (as it was known in that league's broadcasts) remained in regular use throughout the rest of the season.
ESPN first used Skycam in 2001 for an NFL pre-season telecast and then consistently in 2002 for Sunday Night Football broadcasts. Since then, ESPN and sister-network ABC have made widespread use of Skycam for NCAA football, Monday Night Football, and Super Bowl XXXVII. The networks have regularly offered a Skycam-only internet broadcast of many of its more important sportscasts under the Megacast brand. Amazon Prime Video offers an alternate analytics broadcast of Thursday Night Football branded as "Prime Vision with Next Gen Stats", which mainly carries the Skycam's video in addition to real-time statistical graphics.
CBC used a CableCam in their broadcasts of the 2005 and 2006 Grey Cups.
On October 22, 2017, NBC was required to broadcast the majority of a Sunday Night Football game using Skycam angles, as their traditional sideline angles were obscured by a large amount of fog. Reception to the impromptu experiment was mostly positive (with some drawing comparisons to the default camera angle used in football video games, such as the Madden NFL franchise); NBC announced that it would experiment with intentionally using the Skycam as a primary angle during a subsequent Thursday-night game on November 16, 2017 and again for the December 14 game the same year.
The Skycam's perspective, while making more effective use of the field of vision offered by a television screen (thus allowing viewers to see plays develop more clearly than the traditional sideline view), distorts vertical distances and makes it more difficult to assess yardages, which was part of the reason it has not been used more often. Consequently, in NBC's trial runs, they switched to a traditional sideline camera in short-yardage situations including the red zone, where the shorter distances negate some of the disadvantages of the sideline camera. To mitigate some of these disadvantages, NBC experimented with expanding the live on-field graphics to include a "green zone" that darkens the area between the line of scrimmage and the line to gain for a first down.
Other sports
Prior to the 1984 Olympics in Los Angeles, it was proposed that Skycam be used at the Opening Ceremonies and Track & Field events at the LA Coliseum. During test runs, the images were excellent, but on its last test, one of its four support wires snagged on the top of the steel football goal post at the peristyle end of the Coliseum and bent one of the arms. Skycam was unharmed, but was not used at the Olympics that year.
Systems from Skycam and CableCam have also been used for the NBA and NHL final series and the beginning of the 2005 and 2006 NASCAR season broadcast on Fox. CableCam was used on the famous 17th hole at the Tournament Players Club at Sawgrass for NBC's coverage of The Players Championship in 2005. CBS used a SkyCam for their coverage of the 2010 NCAA Men's Basketball Final Four games in Lucas Oil Stadium.
In Australia, the Nine Network trialed Skycam for three of their Friday Night Football broadcasts of the Australian Football League for the 2004 season. It was also used in the State of Origin series.
The first use of Skycam for an MLS broadcast was on April 2, 2005, for an ESPN broadcast of a match between DC United and Chivas USA at the Home Depot Center in Carson, California. However, the use of Skycam proved controversial three weeks later, on April 23, 2005, when the camera crashed to the field of the Home Depot Center during a match between the LA Galaxy and Chivas USA.
Skycam has been used infrequently for MLS broadcasts since then, including the 2015 MLS All-Star Game. On April 2, 2016, Sporting Kansas City debuted the league's first semi-permanent Skycam installation at Children's Mercy Park, in a match against Real Salt Lake.
Technical overview
Skycam consists of three major components: the reel (the motor drive and cables); the spar (the counterbalanced pan-and-tilt video camera); and central control (the computer software used by the operator to fly the camera).
Reels
The system consists of four reels anchored at high fixed points at the corners of the stadium or arena (the cables are attached to fixed spars formed by tall extensible lift platforms when permanent anchors are not available). Each reel is a cable spool, driven by a motor with disc brakes, with its own computer providing fine positioning resolution. The cable is a braided, Kevlar-jacketed, single-mode optical fiber with conductive copper elements, and a single cable is capable of supporting the suspended camera package.
Mobile spar
The spar contains the Sony HD camera, the pan-and-tilt motor, and stabilization sensors. The package also includes a power distribution module and electronics for fiber-optic signaling.
Central control
Central control is an industrial-grade Linux computer workstation that provides camera flight and video control. Both the pilot (who flies the spar in 3D space) and the operator (who controls the camera's pan, tilt, zoom and focus) use this system to control the overall video shot. The central computer system uses a custom software package to control each aspect of the camera system, including motion, video, and obstacle avoidance.
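At its core, the flight-control problem reduces to simple geometry: to hold the camera at a target point, each reel must pay out cable equal to the straight-line distance from its anchor to that point, and to move the camera it must change that length at a rate matched to the camera's velocity. The following Python sketch is a toy model of this relationship under assumed anchor coordinates; it is not Skycam's actual control software, which additionally handles stabilization, cable sag, and obstacle avoidance.

import numpy as np

# Toy model of a four-reel cable-suspended camera. Anchor positions (x, y, z)
# in meters are assumptions, not real stadium data.
anchors = np.array([
    [0.0,   0.0,  60.0],
    [160.0, 0.0,  60.0],
    [160.0, 70.0, 60.0],
    [0.0,   70.0, 60.0],
])

def cable_lengths(camera_pos):
    """Cable length required from each corner anchor to the camera."""
    return np.linalg.norm(anchors - camera_pos, axis=1)

def payout_rates(camera_pos, camera_vel):
    """Rate of change of each cable length for a desired camera velocity
    (positive means the reel pays cable out, negative means it reels in)."""
    directions = camera_pos - anchors
    directions = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    return directions @ camera_vel

pos = np.array([80.0, 35.0, 20.0])   # camera over midfield, 20 m above ground
vel = np.array([5.0, 0.0, 0.0])      # drifting toward one end of the venue
print(cable_lengths(pos))            # four required cable lengths
print(payout_rates(pos, vel))        # matching reel pay-out speeds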
Incidents
In the December 20, 2009 Las Vegas Bowl between the Oregon State Beavers and the BYU Cougars, Skycam had to be taken down as a result of high winds, with strong gusts reported.
In the 2011 Insight Bowl on December 30, 2011 between the Iowa Hawkeyes and the Oklahoma Sooners, Skycam crashed onto the field with 2:22 left to play, almost striking Iowa receiver Marvin McNutt. The game was delayed for about 5 minutes as a result, as the camera and its cables were removed from the field of play.
During a Week 9 game between the Buffalo Bills and New York Jets at MetLife Stadium on November 6, 2022, one of the Skycam cables snapped with 8:27 remaining in the third quarter, causing a 12-minute delay. The camera and its cables were removed from the field of play.
See also
Spidercam
References
Notes
Bibliography
Gwinn, Eric (November 11, 2004). "Working the angles". Chicago Tribune.
External links
Official site
CableCam
Skycam Inventor Garrett Brown
Article at DTV Professional
Press release announcing LynxOS real-time operating system in Skycam (2003)
A cable camera platform used in the 1994 Winter Olympics and for filming the Peter Pan film Hook
A DIY Skycam
Cameras
Film and video technology
Sports television technology
Kroenke Sports & Entertainment | Skycam | Technology | 1,916 |
51,171,821 | https://en.wikipedia.org/wiki/Zapier | Zapier is an American multinational software company that provides integrations for web applications for use in automated workflows.
Overview
Zapier provides integrations that allow different web applications to be combined in a single automated workflow. Its products focus on automating recurring tasks, such as lead management. Users can set up "rules" that govern the flow of data between different tools and services.
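As a conceptual sketch only (the service names and handler functions below are hypothetical and do not correspond to Zapier's actual API, which is configured through its web interface), such a rule pairs a trigger in one application with an action in another:

# Conceptual sketch of a trigger -> action rule; all names are hypothetical.

def on_new_crm_lead(event):
    """Trigger: fires when a (hypothetical) CRM reports a new lead,
    mapping the fields the action needs."""
    return {"name": event["name"], "email": event["email"]}

def append_spreadsheet_row(row):
    """Action: appends the mapped fields to a (hypothetical) spreadsheet."""
    print(f"Appending row: {row}")

# A "rule" wires one trigger to one action.
rules = [{"trigger": on_new_crm_lead, "action": append_spreadsheet_row}]

incoming = {"name": "Ada", "email": "ada@example.com"}
for rule in rules:
    rule["action"](rule["trigger"](incoming))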
History
Zapier was founded in Columbia, Missouri by Wade Foster, Bryan Helmig, and Mike Knoop in 2011. The following year, they were accepted to the Y Combinator startup seed accelerator and temporarily relocated to Mountain View, California. In October 2012, Zapier received a $1.3 million seed round led by Bessemer Venture Partners.
In March 2017, the company offered a "relocation package", consisting of a $10,000 moving reimbursement to employees who wished to leave the San Francisco Bay Area.
In 2020, as the COVID-19 pandemic spread, Zapier set up a $1 million small business assistance fund for struggling customers.
Sequoia Capital and Steadfast Financial bought shares from some of the company's original investors in January 2021 at a valuation of $5 billion.
In March 2021, the company acquired Makerpad, a no-code education service, for an undisclosed sum of money.
As of January 2022, the company employs approximately 500 people in 38 countries.
See also
IFTTT
Power Automate
References
External links
Cloud applications
Data synchronization
Remote companies
Y Combinator companies
Companies based in Mountain View, California
2011 establishments in Missouri
Automation software
Workflow applications | Zapier | Engineering | 335 |
60,437,977 | https://en.wikipedia.org/wiki/Reinsurance%20to%20close | Reinsurance to close (RITC) is a business transaction whereby the estimated future liabilities of an insurance company are reinsured into another, in order that the profitability of the former can be finally determined. It is most closely associated with the Lloyd's of London insurance market that comprises numerous competing "syndicates", and in order to close each accounting year and declare a profit or loss, each syndicate annually "reinsures to close" its books. In most cases, the liabilities are simply reinsured into the subsequent accounting year of the same syndicate, however, in some circumstances the RITC may be made to a different syndicate or even to a company outside of the Lloyd's market.
History
At Lloyd's, traditionally each year of each syndicate is a separate enterprise, and the profitability of each year is determined essentially by payments for known liabilities (claims) and money reserved for unknown liabilities that may emerge in the future on claims that have been incurred but not reported (IBNR). The estimation of the quantity of IBNR is difficult and can be inaccurate.
Capital providers typically "joined" their syndicate for one calendar year only, and at the end of the year the syndicate as an ongoing trading entity was effectively disbanded. However, usually the syndicate re-formed for the next calendar year with more or less the same capital membership. In this way, a syndicate could have a continuous existence for many years, but each year was accounted for separately. Since some claims can take time to be reported and then paid, the profitability of each syndicate took time to realise. The practice at Lloyd's was to wait three years from the beginning of the year in which the business was written before "closing" the year and declaring a result. For example, for the 1984 year a syndicate would ordinarily declare its result at 31 December 1986. The syndicate's 1984 members would therefore be paid any profit during 1987 (in proportion to their share of the total capacity of the syndicate); conversely, they would have to reimburse the syndicate during 1987 for their share of any 1984 loss.
For the estimated future claims liabilities, the syndicate bought an RITC; the premium for the reinsurance was equal to the amount of the reserve. In other words, rather than placing the reserve in a bank to earn interest, the syndicate transferred its liabilities for future claims to a reinsurer, thus allowing the year to be closed and the profit or loss to be declared. For example, the members of syndicate number '1' in 1984 reinsured the future liabilities of the members of syndicate '1' in 1985. The membership might be the same, or it might have changed.
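As a toy illustration of the closing arithmetic (all figures invented), the declared result for a year of account is what remains of premium income after paid claims, expenses, and the RITC premium, which is set equal to the reserve for outstanding and incurred-but-not-reported liabilities:

# Toy illustration of reinsurance to close; all figures are invented.
premium_income = 10_000_000   # premiums written for the 1984 year of account
claims_paid    =  6_500_000   # claims settled over the three open years
expenses       =  1_000_000
reserve        =  1_800_000   # estimate for outstanding and IBNR claims

# Instead of banking the reserve, the syndicate pays it as the RITC premium
# to the 1985 year, which then takes on the remaining liabilities.
ritc_premium = reserve

result_1984 = premium_income - claims_paid - expenses - ritc_premium
print(f"Declared 1984 result: {result_1984:,}")   # 700,000 profit to members

If the reserve later proves too small, the shortfall falls on the members of the year that accepted the RITC premium, which is precisely the risk discussed below.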
Disadvantages
A capital provider for a syndicate with a long history of RITC transactions could – and often did – become liable for losses on insurance policies written many years or even decades previously. If the reserves had been accurately estimated and the appropriate RITC premium paid every year, then this would not present an issue. However, it became apparent during the asbestosis crisis at Lloyd's in the 1990s that in many cases this had not been possible: a huge surge in asbestos and pollution related losses was not foreseen or adequately reserved for. Therefore, the amounts of money transferred from earlier years by successive RITC premiums to cover these losses were grossly insufficient, and the later members had to pay the shortfall.
Similarly, within a stock company, an initial reserve for future claims liabilities is set aside immediately, in year one. Any deterioration in that initial reserve in subsequent years will result in a reduced profit in the later years, and a consequently reduced dividend and/or share price for shareholders in those later years, whether or not those shareholders in the later year are the same as the shareholders in year one. Arguably, Lloyd's practice of using reserves in year three to establish the RITC premiums should have resulted in a more equitable handling of losses such as asbestosis than would the stock company approach. Nevertheless, the difficulties in correctly estimating losses such as these overwhelmed even Lloyd's extended process.
See also
Actuarial science
Loss reserving
References
Actuarial science
Types of insurance | Reinsurance to close | Mathematics | 849 |
9,875,250 | https://en.wikipedia.org/wiki/Geison | Geison (Greek: γεῖσον, often used interchangeably with the somewhat broader term cornice) is an architectural term of relevance particularly to ancient Greek and Roman buildings, as well as archaeological publications of the same. The geison is the part of the entablature that projects outward from the top of the frieze in the Doric order and from the top of the frieze course (or sometimes architrave) of the Ionic and Corinthian orders; it forms the outer edge of the roof on the sides of a structure with a sloped roof. The upper edge of the exterior often had a drip edge formed as a hawksbeak molding to shed water; there were also typically elaborate moldings or other decorative elements, sometimes painted. Above the geison ran the sima. The underside of the geison may be referred to as a soffit. The form of a geison (particularly the hawksbeak molding of the outer edge) is often used as one element of the argument for the chronology of its building.
Horizontal geison
The horizontal geison runs around the full perimeter of a Greek temple, projecting from the top of the entablature to protect it from the elements and as a decorative feature. Horizontal geisa may be found in other ancient structures that are built according to one of the architectural orders. The horizontal sima (with its antefixes and water-spouts) ran above the horizontal geison along the sides of a building, acting as a rain gutter and final decoration.
Doric order
In the Doric order, the sloped underside of the horizontal geison is decorated with a series of protruding, rectangular mutules aligned with the triglyphs and metopes of the Doric frieze below. Each mutule typically had three rows of six guttae (decorative conical projections) protruding from its underside. The gaps between the mutules are termed viae (roads). The effect of this decoration was to thematically link the entire Doric entablature (architrave, frieze, and geisa) with a repeating pattern of vertically and horizontally aligned architectural elements. Use of the hawksbeak molding at the top of the projecting segment is common, as is the undercutting of the lower edge to aid in dispersing rainwater. In order to separate the geison from the frieze visually, there is typically a bed molding aligned with the face of the triglyphs.
Ionic and Corinthian orders
Horizontal geisa of these orders relied on moldings rather than the mutules of the Doric order for their decoration.
Raking geison
A raking geison ran along the top edge of a pediment, on a temple or other structure such as the aedicula of a scaenae frons (theater stage building). This element was typically less decorative than the horizontal geison, and often of a differing profile from the horizontal geison of the same structure. The difference is particularly marked in the Doric order, where the raking geison lacks the distinctive mutules. The raking sima ran over the raking geison as a decorative finish and, essentially, a rain gutter.
See also
Glossary of architecture
Fascia (architecture)
Notes
References
Robertson, D. S. 1943. Handbook of Greek and Roman Architecture 2nd Edition. Cambridge: Cambridge University Press
Architectural elements
Ancient Greek architecture | Geison | Technology,Engineering | 698 |
6,662,470 | https://en.wikipedia.org/wiki/Chlorotrianisene | Chlorotrianisene (CTA), also known as tri-p-anisylchloroethylene (TACE) and sold under the brand name Tace among others, is a nonsteroidal estrogen related to diethylstilbestrol (DES) which was previously used in the treatment of menopausal symptoms and estrogen deficiency in women and prostate cancer in men, among other indications, but has since been discontinued and is now no longer available. It is taken by mouth.
CTA is an estrogen, or an agonist of the estrogen receptors, the biological target of estrogens like estradiol. It is a high-efficacy partial estrogen and shows some properties of a selective estrogen receptor modulator, with predominantly estrogenic activity but also some antiestrogenic activity. CTA itself is inactive and is a prodrug in the body.
CTA was introduced for medical use in 1952. It has been marketed in the United States and Europe. However, it has since been discontinued and is no longer available in any country.
Medical uses
CTA has been used in the treatment of menopausal symptoms and estrogen deficiency in women and prostate cancer in men, among other indications. It has been used to suppress lactation in women. CTA has been used in the treatment of acne as well.
Side effects
In men, CTA can produce gynecomastia as a side effect. Conversely, it does not appear to lower testosterone levels in men, and hence does not seem to have a risk of hypogonadism and associated side effects in men.
Pharmacology
CTA is a relatively weak estrogen, with about one-eighth the potency of DES. However, it is highly lipophilic and is stored in fat tissue for prolonged periods of time, with its slow release from fat resulting in a very long duration of action. CTA itself is inactive; it behaves as a prodrug to desmethylchlorotrianisene (DMCTA), a weak estrogen that is formed as a metabolite via mono-O-demethylation of CTA in the liver. As such, the potency of CTA is reduced if it is given parenterally instead of orally.
Although it is referred to as a weak estrogen and was used solely as an estrogen in clinical practice, CTA is a high-efficacy partial agonist of the estrogen receptor. As such, it is a selective estrogen receptor modulator (SERM), with predominantly estrogenic effects but also with antiestrogenic effects, and was arguably the first SERM to ever be introduced. CTA can antagonize estradiol at the level of the hypothalamus, resulting in disinhibition of the hypothalamic–pituitary–gonadal axis and an increase in estrogen levels. Clomifene and tamoxifen were both derived from CTA via structural modification, and are much lower-efficacy partial agonists than CTA and hence much more antiestrogenic in comparison. As an example, chlorotrianisene produces gynecomastia in men, albeit reportedly to a lesser extent than other estrogens, while clomifene and tamoxifen do not and can be used to treat gynecomastia.
CTA at a dosage of 48 mg/day inhibits ovulation in almost all women. Conversely, it has been reported that CTA has no measurable effect on circulating levels of testosterone in men. This is in contrast to other estrogens, like diethylstilbestrol, which can suppress testosterone levels by as much as 96%, an extent equivalent to castration. These findings suggest that CTA is not an effective antigonadotropin in men.
Chemistry
Chlorotrianisene, also known as tri-p-anisylchloroethylene (TACE) or as tris(p-methoxyphenyl)chloroethylene, is a synthetic nonsteroidal compound of the triphenylethylene group. It is structurally related to the nonsteroidal estrogen diethylstilbestrol and to the SERMs clomifene and tamoxifen.
History
CTA was introduced for medical use in the United States in 1952, and was subsequently introduced for use throughout Europe. It was the first estrogenic compound of the triphenylethylene series to be introduced. CTA was derived from estrobin (DBE), a derivative of the very weakly estrogenic compound triphenylethylene (TPE), which in turn was derived from structural modification of diethylstilbestrol (DES). The SERMs clomifene and tamoxifen, as well as the antiestrogen ethamoxytriphetol, were derived from CTA via structural modification.
Society and culture
Generic names
Chlorotrianisene is the generic name of the drug and its INN, USAN, and BAN. It is also known as tri-p-anisylchloroethylene (TACE).
Brand names
CTA has been marketed under the brand names Tace, Estregur, Anisene, Clorotrisin, Merbentyl, Merbentul, and Triagen among many others.
Availability
CTA is no longer marketed and hence is no longer available in any country. It was previously used in the United States and Europe.
References
Abandoned drugs
Hormonal antineoplastic drugs
Organochlorides
4-Methoxyphenyl compounds
Prodrugs
Progonadotropins
Selective estrogen receptor modulators
Synthetic estrogens
Triphenylethylenes
Bis(4-hydroxyphenyl)methanes | Chlorotrianisene | Chemistry | 1,222 |
42,063,681 | https://en.wikipedia.org/wiki/Korteweg-de%20Vries-Burgers%27%20equation | The Korteweg-de Vries–Burgers equation is a nonlinear partial differential equation, given in a standard form by

$$u_t + u u_x - \nu u_{xx} + \mu u_{xxx} = 0,$$

where $\nu \ge 0$ is the dissipation coefficient and $\mu$ the dispersion coefficient.
The equation describes nonlinear waves in dispersive-dissipative media by combining the nonlinear and dispersive elements of the KdV equation with the dissipative element of Burgers' equation.
The modified KdV-Burgers equation, which carries a cubic rather than quadratic nonlinearity, can be written as:

$$u_t + u^2 u_x - \nu u_{xx} + \mu u_{xxx} = 0.$$
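As a sketch of how solutions can be explored numerically (the scheme, coefficients, grid, and initial pulse here are illustrative choices, not taken from any particular reference), a pseudospectral method treats the stiff linear terms exactly in Fourier space and steps the nonlinear term explicitly:

import numpy as np

# Pseudospectral integrating-factor Euler scheme for
#   u_t + u u_x - nu * u_xx + mu * u_xxx = 0
# on a periodic domain; parameters and initial data are illustrative only.
N, L = 256, 50.0
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)            # spectral wavenumbers

nu, mu = 0.1, 0.1                                       # dissipation, dispersion
dt, steps = 1e-3, 20000

u_hat = np.fft.fft(np.exp(-((x - L / 2) ** 2) / 4.0))   # smooth initial pulse

# Exact one-step propagator for the linear part:
#   d(u_hat)/dt = (-nu k^2 + i mu k^3) u_hat
E = np.exp((-nu * k ** 2 + 1j * mu * k ** 3) * dt)

for _ in range(steps):
    u = np.real(np.fft.ifft(u_hat))
    u_x = np.real(np.fft.ifft(1j * k * u_hat))
    u_hat = E * (u_hat + dt * np.fft.fft(-u * u_x))     # explicit nonlinear step

u_final = np.real(np.fft.ifft(u_hat))

Dissipation damps the high-wavenumber content through the real part of the propagator, while dispersion only rotates phases and sheds oscillatory ripples, which is exactly the interplay the equation is meant to capture.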
See also
Burgers' equation
Korteweg–de Vries equation
modified KdV equation
Notes
References
Nonlinear partial differential equations | Korteweg-de Vries-Burgers' equation | Mathematics | 108 |
49,668,158 | https://en.wikipedia.org/wiki/HTC%20One%20X9 | HTC One X9 is a touchscreen, slate smartphone designed and manufactured by HTC. It was released running Android 6.0 (Marshmallow) with the HTC Sense skin overlay. The One X9 was announced in December 2015 and released in China in January 2016.
HTC smartphones
Android (operating system) devices
Mobile phones introduced in 2016 | HTC One X9 | Technology | 76 |
72,573 | https://en.wikipedia.org/wiki/Ted%20Taylor%20%28physicist%29 | Theodore Brewster "Ted" Taylor (July 11, 1925 – October 28, 2004) was an American theoretical physicist who worked chiefly on nuclear energy and nuclear weapons. He earned a PhD in theoretical physics from Cornell University. His most noteworthy contributions to the field of nuclear weaponry were his small-bomb developments at the Los Alamos Laboratory in New Mexico. Although not widely known to the general public, Taylor is credited with numerous landmarks in fission weaponry, including having designed and developed the smallest, most powerful, and most efficient fission weapons ever tested by the US. Though he was not considered a brilliant physicist in terms of calculation, his vision and creativity allowed him to thrive in the field. The later part of Taylor's career focused on nuclear energy instead of weaponry, and included his work on Project Orion, nuclear reactor development, and efforts against nuclear proliferation.
Early life
Ted Taylor was born in Mexico City, Mexico, on July 11, 1925. His mother and father were both Americans. His mother, Barbara Southworth Howland Taylor, held a PhD in Mexican literature from the Universidad Nacional Autónoma de México, and his father, Walter Clyde Taylor, was the director of a YMCA in Mexico City. Before marrying in 1922, his father had been a widower with three sons and his mother a widow with a son of her own. Both of his maternal grandparents were Congregationalist missionaries in Guadalajara. Taylor grew up in a house without electricity in the Atlixo 13 neighborhood of Cuernavaca. His upbringing was quiet and religious, and his home filled with books, mainly atlases and geographies, which he would read by candlelight. This interest followed him into adulthood.
Taylor showed an early interest in chemistry, specifically pyrotechnics, when he received a chemistry set at the age of ten. This fascination was enhanced when a small and exclusive university in the area built a chemistry laboratory in his neighborhood, after which Taylor had access to items from local druggists that otherwise would not have been readily available, including corrosive and explosive chemicals, as well as nitric and sulfuric acids. These allowed him to conduct his own experiments. He also often read through the 1913 New International Encyclopedia, which contained extensive chemistry, for new concoctions to make. These included sleeping drugs, small explosives, guncotton, precipitates, and many more. His mother was extremely tolerant of his experimentation but prohibited any experiments that involved nitroglycerin.
Growing up, Taylor also showed an interest in billiards. In the afternoons after school he played billiards for almost ten hours a week. He would recall this early interest as his introduction to the mechanics of collisions, relating it to his later work in particle physics. The behavior of the interacting balls on the table and their elastic collisions within the confining framework of the reflector cushions helped him to conceptualize the difficult abstractions of cross sections, neutron scattering, and fission chain reactions.
As a child, he developed a passion for music, and would quietly sit for an hour and listen to his favorite songs in the mornings before school. Later, while completing his PhD at Cornell, he noted that while his theoretical physicist peers embraced the classical music piped into their rooms, their experimentalist counterparts would uniformly shut the system off.
Taylor attended the American School in Mexico City from elementary school through high school. A gifted student, he finished the fourth through sixth grades in one year. Being an accelerated student, Taylor found himself three years younger than his friends as he entered his teens. Taylor graduated early from high school in 1941 at the age of 15. Not yet meeting the age requirements for American universities, he then attended the Exeter Academy in New Hampshire for one year, where he took Modern Physics from Elbert P. Little. This developed his interest in physics, though he displayed poor academic performance in the course: Little gave Taylor a grade D on his final winter term examination. He quickly brushed this failure off, and soon confirmed that he wanted to be a physicist. Apart from education, he also developed an interest in throwing discus at Exeter. This interest continued into his college career, as he continued to throw discus at Caltech.
He enrolled at the California Institute of Technology in 1942 and then spent his second and third years in the Navy V-12 program. This accelerated his schooling and he graduated with a bachelor's degree in physics from Caltech in 1945 at age nineteen.
After graduation, he attended the midshipman school at Throgs Neck, in the Bronx, New York, for one year to fulfill his naval active duty requirement. He was discharged in mid-1946, by which time he had been promoted to the rank of lieutenant.
He then enrolled in a graduate program in theoretical physics at the University of California at Berkeley, while also working part-time at the Berkeley Radiation Laboratory, mainly on the cyclotron and a beta-ray spectrograph. After failing an oral preliminary examination on mechanics and heat, and a second prelim in modern physics in 1949, Taylor was disqualified from the graduate program.
Taylor married Caro Arnim in 1948 and had five children in the following years: Clare Hastings, Katherine Robertson, Christopher Taylor, Robert Taylor, and Jeffrey Taylor. Arnim was majoring in Greek at Scripps College, a liberal arts university in Claremont, California, and Taylor would visit her whenever he could. Both Arnim and Taylor were very shy people, and unsure of what the future held. When they first met they both believed that Taylor would end up as a college professor in a sleepy town, and that Caro would be a librarian. After 44 years of marriage the couple divorced in 1992.
Taylor died on October 28, 2004, of coronary artery disease.
Early career
Prior to Taylor's work at Los Alamos, he had firmly declared himself an opponent of nuclear weapons. While at the midshipmen school, he received news of the atomic bombing of Hiroshima by the United States. He immediately wrote a letter home discussing the perils of nuclear proliferation and his fears that it would lead to the end of mankind in the event of another war. He showed some optimism, however, as he felt with proper leadership the nuclear bomb could result in the end of wars altogether. Either way, he was still very curious about the field of nuclear physics after his time as an undergraduate.
Taylor began his work in nuclear physics in 1949 when he was hired to a junior position at Los Alamos National Laboratory in the Theoretical Physics Division. He received this job after failing out of the PhD program at Berkeley; J. Carson Mark connected Taylor with a leader at Los Alamos and recommended him for a position. Taylor was unsure of the details of his new job at Los Alamos prior to his arrival. He had only been briefed that his first assignment related to investigations of neutron diffusion theory, a theoretical analysis of neutron movement within a nuclear core. While at Los Alamos, Taylor's strictly anti-nuclear-development beliefs changed. His approach to preventing nuclear war turned to developing bombs of unprecedented power, in an attempt to make people, including governments, so afraid of the consequences of nuclear warfare that they would not dare engage in such a conflict. He continued in his junior position at Los Alamos until 1953, when he took a temporary leave of absence to obtain his PhD from Cornell.
Finishing his PhD in 1954, he returned to Los Alamos, and by 1956 he was famous for his work in small-bomb development. Freeman Dyson is quoted as saying, "A great part of the small-bomb development of the last five years [at Los Alamos] was directly due to Ted." Although the majority of the brilliant minds at Los Alamos were focused on developing the fusion bomb, Taylor remained hard at work on improving fission bombs. His innovations in this area of study were so important that he was eventually given the freedom to choose whatever he wanted to study. Eventually, Taylor's stance on nuclear warfare and weapon development changed, altering his career path. In 1956, Taylor left his position at Los Alamos and went to work for General Atomics. Here, he developed TRIGA, a reactor that produced isotopes used in the medical field. In 1958, Taylor began working on Project Orion, which sought to develop space travel powered by nuclear energy: the proposed spacecraft would be propelled by a series of nuclear fission explosions, speeding space travel while consuming fissile material that might otherwise fuel weapons. In collaboration with Dyson, Taylor led the project development team for six years until the 1963 Nuclear Test Ban Treaty was instituted. After this, they could not test their developments and the project became unviable.
Late career
Theodore Taylor's career shifted again after Project Orion. He developed an even greater fear of the potential ramifications of his life's work, and began taking precautionary measures to mitigate those concerns. In 1964 he served as the deputy director of the Defense Atomic Support Agency (a branch within the Department of Defense), where he managed the U.S. nuclear weapons inventory. Then, in 1966 he created a consulting firm called the International Research and Technology Corporation, located in Vienna, Austria, which sought to prevent the development of more nuclear weapons programs. Taylor also worked as a visiting professor at the University of California, Santa Cruz and Princeton University. His focus eventually turned to renewable energy, and in 1980 Taylor started a company called Nova Incorporated, which focused on nuclear energy alternatives as a means of supplementing the energy requirements of the earth. He studied energy capture from sources like cooling ice ponds and heating solar ponds, and eventually turned to energy conservation within buildings. Concerning this work in energy conservation, he founded a not-for-profit organization in Montgomery County, Maryland called Damascus Energy, which focuses on energy efficiency within the home. Taylor also served on the President's commission concerning the Three Mile Island accident, working to mitigate the issues associated with the reactor meltdown.
Legacy
Theodore Taylor was involved in many important projects and made numerous contributions to nuclear development for the United States. During his time at Los Alamos, he was responsible for designing the smallest fission bomb of the era, named Davy Crockett, which weighed only 50 pounds, measured approximately 12 inches across, and could produce between 10 and 20 tons of TNT equivalent. This device was formally known as the M28 Weapons System. The Davy Crockett itself was the M388 Atomic Round fired from the weapons system, featuring a recoilless rifle either fixed on a freestanding tripod or mounted on the frame of a light utility vehicle such as a Jeep; the round functioned similarly to other modern rocket-propelled rounds (see RPG-7). It was a mounted weapons system, meaning that it would be set up, aimed, and fired as a crew-served weapon. Taylor also designed fission bombs smaller than Davy Crockett, which were developed after he left Los Alamos. He designed a nuclear bomb so small that it weighed only 20 pounds, but it was never developed and tested. Taylor designed the Super Oralloy Bomb, also known as the "SOB". It still holds the record for the largest fission explosion ever tested (as the Ivy King device tested during Operation Ivy), producing over 500 kilotons of TNT equivalent. Taylor was credited with developing multiple techniques that improved the fission bomb. For example, he was largely responsible for the development of fusion boosting, a technique that improves the yield and efficiency of a nuclear reaction. This technique was a re-invention of the implosion mechanism used in the bomb detonated at Nagasaki. He theorized a series of nuclear reactions within the implosion mechanism that, in combination, trigger the large chain reaction to detonate. This eliminated much of the energy waste and the need for precision of the original reaction mechanism. This technique is still found in all U.S. fission nuclear weapons today. He also developed a technique that greatly reduced the size of atomic bombs. First tested in a bomb called "Scorpion", it used a reflector made of beryllium, which was drastically lighter than previously used materials such as tungsten carbide (WC). Taylor recognized that although a low-atomic-number element like beryllium did not "bounce" neutrons back into the fissile core as efficiently as heavy tungsten, its propensity for neutron spallation (in nuclear physics the so-called "(n,2n)" reaction) more than compensated in overall reflector performance.
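The physics behind the beryllium reflector can be made explicit. The "(n,2n)" spallation channel on beryllium is, in plain notation (a standard textbook reaction, not taken from Taylor's own papers):

n + Be-9 → 2 He-4 + 2n

Each sufficiently fast neutron absorbed by the reflector can thus return two, which is how the light element compensated for scattering neutrons back into the core less effectively than heavy tungsten carbide.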
After these breakthroughs, Taylor became a more important figure at Los Alamos. He was included in high-priority situations reserved for important personnel, and was even taken to The Pentagon as a consultant on strategies and the potential outcomes of a nuclear war with Russia. In total, Taylor was responsible for the development of eight bombs: the Super Oralloy Bomb, Davy Crockett, Scorpion, Hamlet, Bee, Hornet, Viper, and the Puny Plutonium bomb. The latter was the first-ever dud in the history of U.S. nuclear tests. He produced the bomb called Hamlet after receiving direct orders from military officials to pursue a project in bomb efficiency; it ended up being the most efficient fission bomb ever exploded in the kiloton range.
Apart from bombs, Taylor also explored concepts of producing large amounts of nuclear fuel in an expedited manner. His plans, known as MICE (Megaton Ice Contained Explosions), essentially sought to plant a thermonuclear weapon deep in the ice and detonate it, resulting in a giant underground pool of radioactive materials that could then be retrieved. While his idea had merit, Taylor ultimately received little support for this concept and the project never came to fruition.
Publications and other works
Ted Taylor was an accomplished author in the latter part of his career. He worked in cooperation with many specialists in other fields to publish his work on anti-nuclear proliferation and sustainable nuclear energy. Perhaps the greatest fear that propelled Taylor to work so fervently in these areas was the realization that the consequences of nuclear material ending up in the wrong hands could be severe.
Nuclear Theft: Risks and Safeguards is a book Taylor wrote in collaboration with Mason Willrich in the 1970s. According to reviews, the book predicted a future in which nuclear energy was the primary energy source in the United States and therefore needed enhanced safeguards to protect the public. In the book, Taylor and Willrich provide multiple recommendations on ways to prevent nuclear material from ending up in the wrong hands, as they anticipated that there would be many more sources of nuclear byproducts and therefore more opportunities for nuclear theft. This book was likely a culmination of much of Ted's work in the field, as he often toured nuclear reactor sites and provided insight on potential weak points in their security measures.
Taylor also co-authored the book The Restoration of the Earth with Charles C. Humpstone. According to reviews, the book focused on techniques to enhance sustainability and expanded on different sources of energy that could serve as alternatives to meet the power needs of the earth. This book was also a culmination of his focus on nuclear security and the ramifications of the use of nuclear weaponry. In it he addressed the potential effects of nuclear fallout on the environment. This 1973 hardcover discussed potential sources of energy in the year 2000, along with the conceptualization of safer alternatives to the methods of acquiring nuclear energy available at the time. In fact, Taylor indirectly referenced a concept for a nuclear reactor inherently similar to a reactor that he patented in 1964. Taylor spent much of his time studying the risk potential of the nuclear power fuel cycle after learning about the detrimental effects his nuclear weapons had on the environment, and he sought to explore new opportunities for safer use of nuclear power. In his writing, Taylor argued that the most dangerous and devastating events that could occur during nuclear research would most likely happen at reactors incapable of running efficiently and maintaining a safe temperature. Taylor went on to state that safety in nuclear reactors was given a far lower priority than it should be, and that if one were to create a nuclear reactor capable of cooling down without the initiation of a fission reaction, then efforts at harvesting nuclear energy would be better incentivized and exponentially safer.
Taylor also wrote the book Nuclear Proliferation: Motivations, Capabilities and Strategies for Control with Harold Feiveson and Ted Greenwood. The book explains the two most dangerous mechanisms by which nuclear proliferation could be devastating for the world, as well as how to disincentivize nuclear proliferation within destabilizing political systems.
Taylor further collaborated with George Gamow on a study called "What the World Needs Is a Good Two-Kiloton Bomb", which investigated the concept of small nuclear artillery weapons. This paper reflected another shift in Taylor's beliefs about nuclear weapons. He had changed from his deterrent position to one that sought to develop small-yield nuclear weapons that could target specific areas and minimize collateral damage.
Taylor was not only involved in the publication of the aforementioned books, but he, along with a few of his colleagues, was also responsible for a number of patents involving nuclear physics. Taylor is credited with patenting a nuclear reactor with a prompt negative temperature coefficient and fuel element, along with a patent protecting their discovery of an efficient method of producing isotopes from thermonuclear explosions. The patent concerning the production of isotopes from thermonuclear explosions was groundbreaking because of its efficiency and cost effectiveness. It also provides a means for attaining necessary elements that otherwise are difficult to find in nature. Prior to this discovery, the cost per neutron in a nuclear reaction was relatively high. The patent concerning the prompt negative temperature coefficient was groundbreaking because it provided a markedly safer reactor even in the event of misuse. With the negative temperature coefficient, the reactor can mitigate sudden surges of reactivity propelled into the system. These patented realizations would later become vital components in the future of nuclear technology.
The Curve of Binding Energy, by John McPhee, is written primarily about the life of Theodore Taylor, as he and McPhee traveled together quite often—spending a great deal of time with one another. It is evident that during their time together, McPhee was very inclined to learn from Taylor. Many of Taylor's personal opinions regarding nuclear energy and safety are mentioned throughout McPhee's writing. McPhee voices one of Taylor's bigger concerns in particular—that plutonium can be devastating if left in the wrong hands. According to McPhee, Taylor suspected that if plutonium were to be acquired by someone with ill-intentions and handled improperly, the aftermath could be catastrophic—as plutonium is a rather volatile element and can be lethal for anyone within hundreds of miles. This clearly can be avoided, Taylor suggests, if nuclear reactors are protected and all sources of nuclear fuel elements are heavily guarded. The book would inspire Princeton student John Aristotle Phillips, and several other imitators, to prove Taylor's contention that "anyone" could design a plausible nuclear weapon using declassified and public information.
The Santa Claus machine and Pugwash
According to Freitas and Merkle, the only known extant source on Taylor's concept of the "Santa Claus machine" is found in Nigel Calder's Spaceships of the Mind. The concept would use a large mass spectrometer to separate an ion beam into atomic elements for later use in making products.
Taylor was a member of the Pugwash Conferences on Science and World Affairs and attended several of its meetings during the 1980s. After his retirement he lived in Wellsville, New York.
Freeman Dyson on Taylor
Freeman Dyson said of Taylor, "Very few people have Ted's imagination. ... I think he is perhaps the greatest man that I ever knew well. And he is completely unknown."
Media appearances
The Voyage of the Mimi: Water, Water, Everywhere (PBS, 1984)
History Undercover: Code Name Project Orion (1999)
To Mars by A-Bomb: The Secret History of Project Orion (BBC, 2003)
See also
Alvin C. Graves
Amory Lovins
List of books about nuclear issues
List of nuclear whistleblowers
National Security Archive
Nevada Test Site
Nuclear disarmament
Nuclear weapons of the United States
References
Further reading
Nigel Calder Spaceships of the Mind, Viking Press, New York, 1978.
Robert A. Freitas Jr. and Ralph C. Merkle. Kinematic Self-Replicating Machines, 2004, 3.10
John McPhee, The Curve of Binding Energy, Ballantine, 1973, 1974. This book about proliferation is largely an account of Taylor's ideas, including his idea that it is "easy" for rogue actors to produce nuclear bombs.
George Dyson, Project Orion: The True Story of the Atomic Spaceship, Henry Holt and Company, 2002.
Mason Willrich, Ted Taylor, Nuclear Theft: Risks and Safeguards: A Report to the Energy Policy Project of the Ford Foundation, Ballinger, 1974.
Taylor, Theodore B., Humpstone, Charles C., The Restoration of the Earth, Harper and Row, 1973
Nuclear Power and Nuclear Weapons, an anti-proliferation essay by Taylor (1996)
Oral History interview transcript with Ted Taylor on February 13, 1995, American Institute of Physics, Niels Bohr Library and Archives
External links
Audio Interview with Ted Taylor by Richard Rhodes, Voices of the Manhattan Project
Annotated Bibliography for Ted Taylor from the Alsos Digital Library for Nuclear Issues
American nuclear physicists
20th-century American physicists
Cornell University alumni
Freeman Dyson
Mexican people of American descent
Mexican emigrants to the United States
Energy engineers
Scientists from Mexico City
People from Wellsville, New York
United States Navy sailors
1925 births
2004 deaths
Scientists from New York (state) | Ted Taylor (physicist) | Engineering | 4,459 |
1,359,407 | https://en.wikipedia.org/wiki/Pacific%20Tsunami%20Warning%20Center | The Pacific Tsunami Warning Center (PTWC), located on Ford Island, Hawaii, is one of two tsunami warning centers in the United States, covering Hawaii, Guam, American Samoa and the Northern Mariana Islands in the Pacific, as well as Puerto Rico, the U.S. Virgin Islands and the British Virgin Islands in the Caribbean Sea. Other parts of the United States are covered by the National Tsunami Warning Center.
PTWC is also the operational center of the Pacific Tsunami Warning System and issued tsunami warnings for dozens of countries from 1965 to 2014. In October 2014, the authority to issue tsunami warnings was delegated to individual member states. As a result, the center now issues advice rather than official warnings for non-U.S. coastlines, with the exception of the British Virgin Islands.
The PTWC uses seismic data as its starting point, but then takes into account oceanographic data when calculating possible threats. Tide gauges in the area of the earthquake are checked to establish if a tsunami has formed. The center then forecasts the future of the tsunami.
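Part of that forecasting rests on the long-wave (shallow-water) approximation, under which a tsunami's propagation speed depends only on gravity and water depth. As a sketch of the standard formula (an illustration, not PTWC's operational model):

$$ c = \sqrt{g h} $$

For a typical Pacific depth of h = 4000 m and g = 9.81 m/s², this gives c ≈ 198 m/s, roughly 713 km/h, which is why trans-Pacific arrival times are on the order of hours and can be estimated well in advance.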
History
Up until the late 1940s, the United States had no way to warn the public about tsunami threats. After the 1946 Aleutian Islands earthquake, which generated a tsunami and killed more than 170 people in Hawaii, a plan was devised to warn the public of possible tsunami inundation. The facility became operational in 1948 and was called the Seismic Sea Wave Warning System (SSWWS), headquartered at the Coast and Geodetic Survey's seismological observatory in Honolulu, Hawaii.
Initially, the Seismic Sea Wave Warning System covered only the Hawaiian Islands and was limited to teletsunamis (distant events), using data from 4 seismic stations and 9 tide gauges. The 1960 Valdivia earthquake and tsunami, which killed thousands of people, led to the establishment of the Pacific Tsunami Warning System under the auspices of UNESCO's Intergovernmental Oceanographic Commission, with the Seismic Sea Wave Warning System as its operational center. As a result, the name of the facility was changed to the Pacific Tsunami Warning Center.
The expanded system became operational in April 1965 but, like its local predecessor, was limited to teletsunamis – tsunamis which are capable of causing damage far away from their source. The system covered all countries of the Pacific Ocean with data from 20 seismic stations around the world and 40 tide stations.
In the aftermath of the 1964 Alaska earthquake and tsunami, which killed 131 people, it was decided to create another warning system to provide timely warnings about local events for coastal areas of Alaska. After Congress approved funding in 1965, the Alaska Regional Tsunami Warning System was launched in September 1967 with observatories in Palmer, Adak and Sitka. At that time, PTWC ended its coverage of Alaska.
The 1975 Hawaii earthquake and tsunami, which killed several people, highlighted the threat of tsunamis caused by nearby events. As a result, PTWC began issuing tsunami warnings for local events near Hawaii.
In 1982, the Alaska Tsunami Warning Center's area of responsibility was enlarged to include California, Oregon and Washington, as well as British Columbia in Canada, but only for earthquakes in the vicinity of the West Coast. PTWC continued to provide coverage of teletsunamis. The Alaska center's responsibilities were expanded in 1996 to include all Pacific-wide sources, after which it became known as the West Coast/Alaska Tsunami Warning Center (WCATWC). As a result, PTWC's area of responsibility was further reduced.
On December 1, 2001, the PTWC was re-dedicated as the Richard H. Hagemeyer Pacific Tsunami Warning Center, in honor of the former U.S. Tsunami Program Manager and National Weather Service Pacific Region Director who managed the center for many years.
In 2005, in the aftermath of the 2004 Indian Ocean earthquake and tsunami, the Pacific Tsunami Warning Center's responsibilities were expanded to include tsunami guidance for the Indian Ocean, the South China Sea and the Caribbean Sea, though its authority to issue warnings was limited to Puerto Rico and the U.S. Virgin Islands. For all other areas, the decision to issue tsunami warnings was left to individual countries.
The responsibility for Puerto Rico and the U.S. Virgin Islands was passed to the West Coast/Alaska Tsunami Warning Center in June 2007, while PTWC continued to issue advice for other parts of the Caribbean Sea. In 2013, the West Coast/Alaska Tsunami Warning Center became known as the National Tsunami Warning Center.
PTWC discontinued its messages for the Indian Ocean in 2013 after regional tsunami warning centers were opened in Australia, India and Indonesia.
In October 2014, the authority to issue official tsunami warnings for coastlines in the Pacific was delegated to individual member states. This happened because warnings and watches issued by PTWC caused confusion when they conflicted with a country's independently derived level of alert. As a result, the center now issues advice rather than official warnings for all non-U.S. coastlines, with the exception of the British Virgin Islands.
In 2015, the annual operating cost of the Pacific Tsunami Warning System was estimated to be between 50 and 80 million U.S. dollars.
In April 2017, the responsibility for Puerto Rico and the U.S. Virgin Islands returned to PTWC, along with the British Virgin Islands, to consolidate Caribbean responsibilities under one warning center.
As of 2023, the Pacific Tsunami Warning System has access to about 600 high-quality seismic stations around the world and about 500 coastal and deep-ocean sea level stations. It has 46 member states: Brunei, Cambodia, Canada, Chile (including Easter Island and the Juan Fernández Islands), China (which is considered to include Hong Kong and Macau), Colombia, Costa Rica, East Timor, North Korea, Ecuador (including the Galapagos Islands), El Salvador, Guatemala, Honduras, Indonesia, Japan, Malaysia, Mexico, Nicaragua, Panama, Peru, Philippines, South Korea, Russia, Singapore, Thailand, United States (including Guam, Northern Mariana Islands, and the Minor Outlying Islands), Vietnam, Australia (including Norfolk Island), Cook Islands, Fiji, France (including French Polynesia, New Caledonia and Wallis and Futuna), Kiribati (including the Gilbert Islands, the Phoenix Islands and Kiritimati), the Marshall Islands (including Kwajalein Atoll and Majuro), the Federated States of Micronesia, Nauru, New Zealand (including the Kermadec Islands), Niue, Palau, Papua New Guinea, Samoa, the Solomon Islands, Tokelau, Tonga, Tuvalu, the United Kingdom (including the Pitcairn Islands), and Vanuatu.
Coverage area
Alert levels
Official tsunami warnings and watches are limited to U.S. coastlines, with the exception of the British Virgin Islands. PTWC messages for other regions do not include alerts, but rather advice, as the authority to issue tsunami warnings was delegated to member states in 2014 to avoid confusion among the public.
Current format
Old format (before 2014)
The alert levels below were retired on October 1, 2014.
Distribution
Local populations in the United States of America receive tsunami information through radio and television receivers connected to the Emergency Alert System, and in some places (such as Hawaii) civil defense sirens and roving loudspeaker broadcasts from police vehicles. The public can subscribe to the RSS feed or email alerts from the PTWC web site, and the UNESCO site. Email and text messages are also available from the USGS Earthquake Notification Service which includes tsunami alerts.
Deep-ocean tsunami detection
In 1995, NOAA began developing the Deep-ocean Assessment and Reporting of Tsunamis (DART) system. By 2001, an array of six stations had been deployed in the Pacific Ocean.
Beginning in 2005, as a result of the tsunami caused by the 2004 Indian Ocean earthquake, plans were announced to add 32 more DART buoys to be operational by mid-2007.
These stations give detailed information about tsunamis while they are still far off shore. Each station consists of a sea-bed bottom pressure recorder (at a depth of 1000–6000 m) which detects the passage of a tsunami and transmits the data to a surface buoy via acoustic modem. The surface buoy then radios the information to the PTWC via the GOES satellite system. The bottom pressure recorder lasts for two years while the surface buoy is replaced every year. The system has considerably improved the forecasting and warning of tsunamis in the Pacific Ocean.
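The detection step lends itself to a compact illustration. A real DART station removes the predicted tidal signal and triggers when the residual exceeds an amplitude threshold (reported to be on the order of 30 mm); the sketch below is a simplified stand-in, with the function name, window length, and cubic-fit predictor chosen for illustration rather than taken from the operational algorithm:

```python
import numpy as np

def tsunami_detected(pressure_mm, window=240, threshold_mm=30.0):
    """Toy DART-style detector.

    Predicts the background (tidal) pressure at the newest sample by
    extrapolating a cubic fit to the preceding `window` samples, then
    flags a tsunami if the observation deviates by more than threshold_mm.
    """
    p = np.asarray(pressure_mm, dtype=float)
    if p.size <= window:
        return False  # not enough history to form a prediction
    t = np.arange(window)
    coeffs = np.polyfit(t, p[-window - 1:-1], deg=3)  # fit the recent background
    predicted = np.polyval(coeffs, window)            # extrapolate one step ahead
    return abs(p[-1] - predicted) > threshold_mm
```

On a slowly varying tide the cubic fit tracks the background closely, so only a rapid pressure offset of the kind produced by a passing tsunami wave trips the threshold.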
References
External links
US Tsunami Warning System
National Tsunami Warning Center
Northwest Pacific Tsunami Advisory
DART
How the US Tsunami Warning System works
Warning systems
Tsunami
Earthquake and seismic risk mitigation
Seismological observatories, organisations and projects
Disaster preparedness in the United States
Emergency management in Oceania
1949 establishments in Hawaii | Pacific Tsunami Warning Center | Technology,Engineering | 1,786 |
10,389,193 | https://en.wikipedia.org/wiki/Temperature-sensitive%20mutant | Temperature-sensitive mutants are variants of genes that allow normal function of the organism at low temperatures, but altered function at higher temperatures. Cold sensitive mutants are variants of genes that allow normal function of the organism at higher temperatures, but altered function at low temperatures.
Mechanism
Most temperature-sensitive mutations affect proteins and cause loss of protein function at the non-permissive temperature. The permissive temperature is one at which the protein typically can fold properly, or remain properly folded. At higher temperatures, the protein is unstable and ceases to function properly. These mutations are usually recessive in diploid organisms. Temperature-sensitive mutants provide a reversible mechanism: particular gene products can be depleted at a chosen stage of growth simply by shifting the growth temperature.
Permissive temperature
The permissive temperature is the temperature at which a temperature-sensitive mutant gene product takes on a normal, functional phenotype.
When a temperature-sensitive mutant is grown in a permissive condition, the mutant gene product behaves normally (meaning that the phenotype is not observed), even if there is a mutant allele present. This results in the survival of the cell or organism, as if it were a wild type strain. In contrast, the nonpermissive temperature or restrictive temperature is the temperature at which the mutant phenotype is observed.
Temperature-sensitive mutations are usually missense mutations that slightly modify the energy landscape of protein folding. The mutant protein functions at the standard, permissive (low) temperature, loses function at a higher, non-permissive temperature, and displays a hypomorphic phenotype (partial loss of gene function) at an intermediate, semi-permissive temperature.
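The qualitative behaviour can be captured with a minimal two-state folding model; the sketch below is purely pedagogical, and the ΔH and Tm values are invented rather than measurements of any real protein. A destabilizing missense mutation lowers the melting temperature Tm, leaving the protein mostly folded at the permissive temperature but mostly unfolded at the restrictive one:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def fraction_folded(temp_c, tm_c, dh_kj=300.0):
    """Two-state model: dG(T) = dH*(1 - T/Tm) (van 't Hoff, dCp neglected);
    the folded fraction follows from the folding equilibrium constant."""
    t, tm = temp_c + 273.15, tm_c + 273.15
    dg = dh_kj * 1000.0 * (1.0 - t / tm)          # stability in J/mol
    return 1.0 / (1.0 + math.exp(-dg / (R * t)))

# Hypothetical wild type (Tm = 55 C) versus a destabilized ts mutant (Tm = 33 C):
for temp in (25, 30, 37):  # permissive, semi-permissive, restrictive (deg C)
    print(temp, round(fraction_folded(temp, 55), 3), round(fraction_folded(temp, 33), 3))
```

With these invented numbers the mutant is about 96% folded at 25 °C, about 76% at 30 °C, and only about 18% at 37 °C, while the wild type stays folded throughout, mirroring the permissive/semi-permissive/non-permissive distinction described above.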
Use in research
Temperature-sensitive mutants are useful in biological research. They allow the study of essential processes required for the survival of the cell or organism. Mutations to essential genes are generally lethal and hence temperature-sensitive mutants enable researchers to induce the phenotype at the restrictive temperatures and study the effects. The temperature-sensitive phenotype could be expressed during a specific developmental stage to study the effects.
Examples
In the late 1970s, the Saccharomyces cerevisiae secretory pathway, essential for viability of the cell and for growth of new buds, was dissected using temperature-sensitive mutants, resulting in the identification of twenty-three essential genes.
In the 1970s, several temperature-sensitive mutant genes were identified in Drosophila melanogaster, such as shibirets, which led to the first genetic dissection of synaptic function. In the 1990s, the heat shock promoter hsp70 was used in temperature-modulated gene expression in the fruit fly.
Bacteriophage
An infection of an Escherichia coli host cell by a bacteriophage (phage) T4 temperature-sensitive (ts) conditionally lethal mutant at a high restrictive temperature generally leads to no phage growth. However, a co-infection under restrictive conditions with two ts mutants defective in different genes generally leads to robust growth because of intergenic complementation. The discovery of ts mutants of phage T4, and the employment of such mutants in complementation tests, contributed to the identification of many of the genes in this organism. Because multiple copies of a polypeptide specified by a gene often form multimers, mixed infections with two different ts mutants defective in the same gene often lead to mixed multimers and partial restoration of function, a phenomenon referred to as intragenic complementation. Intragenic complementation of ts mutants defective in the same gene can provide information on the structural organization of the multimer.
Growth of phage ts mutants under partially restrictive conditions has been used to identify the functions of genes. Thus genes employed in the repair of DNA damage were identified, as well as genes affecting genetic recombination. For example, growing a ts DNA repair mutant at an intermediate temperature will allow some progeny phage to be produced. However, if that ts mutant is irradiated with UV light, its survival will be more strongly reduced compared with the reduction in survival of irradiated wild-type phage T4.
Conditional lethal mutants able to grow at high temperatures, but unable to grow at low temperatures, were also isolated in phage T4. These cold sensitive mutants defined a discrete set of genes, some of which had been previously identified by other types of conditional lethal mutants.
References
Temperature
Cell biology
Biology terminology | Temperature-sensitive mutant | Physics,Chemistry,Biology | 905 |
15,757,169 | https://en.wikipedia.org/wiki/Satellite%20watching | Satellite watching or satellite spotting is a hobby which consists of the observation and tracking of artificial satellites that are orbiting Earth. People with this hobby are variously called satellite watchers, trackers, spotters, observers, etc. Since satellites outside Earth's shadow reflect sunlight, those especially in low Earth orbit may visibly glint (or "flare") as they traverse the observer's sky, usually during twilight.
History
Amateur satellite spotting traces back to the days of early artificial satellites when the Smithsonian Astrophysical Observatory launched the Operation Moonwatch program in 1956 to enlist amateur astronomers in an early citizen science effort to track Soviet sputniks. The program was an analog to the World War II Ground Observer Corps citizen observation program to spot enemy bombers. Moonwatch was crucial until professional stations were deployed in 1958. The program was discontinued in 1975. Those who had been involved continued to track satellites, however, and began to concentrate on satellites that had been deliberately omitted from the Satellite Catalog; these are satellites of the US and other allied countries.
In February 2008 the front page of The New York Times featured an article about an amateur satellite watcher, Ted Molczan, in relation to the story about the falling American spy satellite USA-193. American officials were reluctant to provide information about the satellite; instead, Ted Molczan, as the article says, "uncovers some of the deepest of the government’s expensive secrets and shares them on the Internet."
Molczan participates with a group of other sky-watchers who have created a "network of amateur sky-watchers and satellite observers" who focus on "spotting secret intelligence-gathering satellites launched by the United States, Russia and China." The amateurs continue to make their sightings and analysis public on the internet via an electronic mailing list called SeeSat-L, just as they had a decade earlier, having begun the practice in the days of the early internet in the previous century.
Prior to 2008, NASA's Orbital Information Group had been providing free information about over 10,000 objects in Earth orbit. US security authorities identified this as a security threat, and a pilot program was launched in 2008 to replace the NASA OIG website with a US Air Force site (Space-Track.org) with somewhat more controlled access.
The practice by the militaries of countries such as the United States to not distribute all of their satellite orbital data can be counteracted by the skills of satellite watchers, who can calculate the orbits of many military satellites.
As the digital revolution continued to advance in the 2000s, many planetarium and satellite-tracking computer programs emerged to aid satellite spotting. In the 2010s, accompanied by the development of augmented reality (AR) technologies, satellite watching programs for mobile devices were developed. During the 64th International Astronautical Congress 2013 in Beijing, a citizen science method to track satellite beacon signals with a Distributed Ground Station Network (DGSN) was presented. At announcement, the purpose of this network was to support the small satellite and cubesat projects of universities.
In 2019, amateur sky-watchers analyzed the high-resolution photograph of an Iranian launch site accident tweeted by US President Trump and identified the specific classified spysat (USA-224, a KH-11 satellite with an objective mirror as large as the Hubble Space Telescope) that had taken the photograph, and when it was taken.
Spotting satellites
Satellite watching was initially done with the naked eye or with the aid of binoculars, since predicting when satellites would be visible was difficult; most low Earth orbit satellites also move too quickly to be tracked easily by the telescopes available to astronomers. It is this movement, as the satellite tracks across the night sky, that makes them possible to see. As with any sky-watching pastime, the darker the sky the better, so hobbyists meet with better success further away from light-polluted urban areas.
Today most observers use digital still cameras or video cameras; imagery is fed into astrometry software to derive the angular positions that serve as "observations" for calculating the orbits of the satellites imaged.
Because geosynchronous satellites move slowly relative to the viewer they can be difficult to find and were not typically sought when satellite watching. However, with digital cameras it is easy to photograph most high-altitude satellites.
Although to the observer low Earth orbit satellites can move at a speed similar to that of high-altitude commercial aircraft, individual satellites can be faster or slower; they do not all move at the same speed. A satellite moves steadily across the sky, without the sudden changes in speed or heading seen in aircraft. Satellites can also be distinguished from aircraft because they do not leave contrails and do not have red and green navigation lights. They are lit solely by the reflection of sunlight from solar panels or other surfaces. A satellite's brightness sometimes changes as it moves across the sky. Occasionally a satellite will 'flare' as it changes orientation relative to the viewer, suddenly increasing in reflectivity. Satellites often grow dimmer and are more difficult to see toward the horizon. Because reflected sunlight is necessary to see satellites, the best viewing times are the few hours immediately after nightfall and the few hours before dawn.
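Pass prediction of the kind offered by services such as Heavens-Above can be reproduced in a few lines of code. A minimal sketch using the Python skyfield library (the CelesTrak URL, satellite name, and observer coordinates are assumptions for illustration; substitute your own location):

```python
from skyfield.api import load, wgs84

ts = load.timescale()

# Fetch current two-line elements for the ISS and other stations from CelesTrak.
url = "https://celestrak.org/NORAD/elements/gp.php?GROUP=stations&FORMAT=tle"
satellites = load.tle_file(url)
sat = {s.name: s for s in satellites}["ISS (ZARYA)"]

observer = wgs84.latlon(52.0, 4.4)   # assumed observer location (lat, lon)
t0 = ts.now()
t1 = ts.tt_jd(t0.tt + 1.0)           # search the next 24 hours

# Times when the satellite rises above 10 degrees, culminates, and sets.
times, events = sat.find_events(observer, t0, t1, altitude_degrees=10.0)

eph = load("de421.bsp")              # solar-system ephemeris for the sunlit test
for t, event in zip(times, events):
    name = ("rise", "culminate", "set")[event]
    sunlit = sat.at(t).is_sunlit(eph)
    print(t.utc_strftime("%Y-%m-%d %H:%M:%S"), name,
          "sunlit" if sunlit else "in Earth's shadow")
```

A pass is actually visible only when the satellite is sunlit while the observer's sky is dark, which is what the is_sunlit test captures and why the hours around twilight dominate the predictions.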
Given the number of satellites now in orbit, a fifteen-minute session of sky watching will generally yield at least one satellite passing overhead.
Satellite watcher clubs
There are many satellite watcher clubs, which collect observations and issue awards for observations according to various rules.
The Astronomical League has the Earth Orbiting Satellite Observers Club.
SeeSat-L is the internet list of an amateur sky-watching group that focuses on spotting the military intelligence-gathering satellites of the United States, Russia and China. Many of these satellites are "visible with the naked eye and require only data-sharing to pinpoint."
See also
Pass (spaceflight)
United States Space Surveillance Network
Geoffrey Perry
References
External links
Real Time Satellite Tracking and Predictions
Archive of SeeSat-L mailing list
How to Spot Satellites at space.com
Heavens Above computes times that satellites pass over your location.
spectator.earth Real-time tracking of Earth Observation satellite overpasses, acquisition plans and data updates
See A Satellite Tonight shows you where to look using Google Street View.
Observation hobbies
Satellites | Satellite watching | Astronomy | 1,254 |
28,741,893 | https://en.wikipedia.org/wiki/IPv6-to-IPv6%20Network%20Prefix%20Translation | IPv6-to-IPv6 Network Prefix Translation (NPTv6) is a specification for IPv6 to achieve address-independence at the network edge, similar to network address translation (NAT) in Internet Protocol version 4 (IPv4). It has fewer architectural problems than traditional IPv4 NAT; for example, it is stateless and preserves the reachability attributed to the end-to-end principle. However, the method may not translate embedded IPv6 addresses properly (IPsec can be impacted), and split-horizon DNS may be required for use in a business environment.
NPTv6 differs from NAT66, which is stateful. With NPTv6, no port translation is required nor other manipulation of transport characteristics. Compared to NAT66, with NPTv6 there is end-to-end reachability along with 1:1 address mapping. This makes NPTv6 a better choice than NAT66.
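The core of the mechanism is a stateless, checksum-neutral prefix rewrite: because the one's-complement sum of the address words enters the transport pseudo-header checksum, RFC 6296 compensates for the prefix change by adjusting one 16-bit word of the address. A minimal Python sketch for /48 prefixes (function names are illustrative, and the RFC's edge cases are omitted):

```python
import ipaddress

def _fold(x: int) -> int:
    """Fold carries to stay within one's-complement 16-bit arithmetic."""
    while x >> 16:
        x = (x & 0xFFFF) + (x >> 16)
    return x

def _sum48(net: ipaddress.IPv6Network) -> int:
    """One's-complement sum of the three 16-bit words of a /48 prefix."""
    b = int(net.network_address).to_bytes(16, "big")
    return _fold(sum(int.from_bytes(b[i:i + 2], "big") for i in (0, 2, 4)))

def npt66(addr: ipaddress.IPv6Address,
          inside: ipaddress.IPv6Network,
          outside: ipaddress.IPv6Network) -> ipaddress.IPv6Address:
    """Rewrite addr from the inside /48 to the outside /48, adjusting the
    subnet word (word 3) so transport-layer checksums remain valid."""
    a = bytearray(int(addr).to_bytes(16, "big"))
    a[0:6] = int(outside.network_address).to_bytes(16, "big")[0:6]
    # adjustment = sum(inside) - sum(outside), in one's-complement arithmetic
    adj = _fold(_sum48(inside) + (~_sum48(outside) & 0xFFFF))
    word3 = _fold(int.from_bytes(a[6:8], "big") + adj)
    a[6:8] = word3.to_bytes(2, "big")  # RFC special cases (e.g. 0xFFFF) omitted
    return ipaddress.IPv6Address(bytes(a))

inside = ipaddress.ip_network("fd01:203:405::/48")
outside = ipaddress.ip_network("2001:db8:1::/48")
print(npt66(ipaddress.ip_address("fd01:203:405:1::1234"), inside, outside))
# prints 2001:db8:1:d550::1234
```

Because the adjustment with the prefixes swapped is the one's-complement negative of the outbound one, calling the same function with inside and outside exchanged maps the translated address back, which is what makes the scheme stateless and 1:1.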
References
External links
Cisco documentation on NPTv6
Juniper documentation on NPTv6
VyOS documentation on NPTv6
OPNsense documentation on NPTv6
APNIC blog post from 2018 on NAT66
pfSense documentation on NPt
IPv6
Network address translation | IPv6-to-IPv6 Network Prefix Translation | Technology | 248 |
988,753 | https://en.wikipedia.org/wiki/Yoga%20nidra | Yoga nidra or yogic sleep in modern usage is a state of consciousness between waking and sleeping, typically induced by a guided meditation.
A state called yoga nidra is mentioned in the Upanishads and the Mahabharata, while a goddess named Yoganidrā appears in the Devīmāhātmya. Yoga nidra is linked to meditation in Shaiva and Buddhist tantras, while some medieval hatha yoga texts use "yoganidra" as a synonym for the deep meditative state of samadhi. These texts, however, offer no precedent for the modern technique of guided meditation. That derives from 19th and 20th century Western "proprioceptive relaxation" as described by practitioners such as Annie Payson Call and Edmund Jacobson.
The modern form of the technique, pioneered by Dennis Boyes in 1973, made widely known by Satyananda Saraswati in 1976, and then by Swami Rama, Richard Miller, and others has spread worldwide. It is applied by the U.S. Army to assist soldier recovery from post-traumatic stress disorder. There is limited scientific evidence that the technique helps relieve stress.
Historical usage
Ancient times
The Hindu epic Mahabharata, completed by the 3rd century CE, mentions a state called "yoganidra", and associates it with Lord Vishnu:
The Devīmāhātmya, written around the 6th century CE, mentions a goddess whose name is Yoganidrā. The God Brahma asks Yoganidrā to wake up Vishnu to go and fight the Asuras or demigods named Madhu and Kaitabha. These early mentions do not define any yoga technique or practice, but describe the God Vishnu's transcendental sleep in between the Yugas, the cycles of the universe, and the manifestation of the goddess as sleep itself.
Medieval practices
Yoganidra is first linked to meditation in Shaiva and Buddhist tantras. In the Shaiva text Ciñcinīmatasārasamuccaya (7.164), yoganidra is called "peace beyond words"; in the Mahāmāyātantra (2.19ab) it is named as a state in which perfected Buddhas may access secret knowledge. In the 11th or 12th century, yoganidra is first used in Hatha yoga and Raja yoga texts as a synonym for samadhi, a deep state of meditative consciousness where the yogi no longer thinks, moves, or breathes. The Amanaska (2.64) asserts that "Just as someone who has suddenly arisen from sleep becomes aware of sense objects, so the yogi wakes up from that [world of sense objects] at the end of his yogic sleep."
By the 14th century, the Yogatārāvalī (24–26) gives a more detailed description, stating that yoganidra "removes all thought of the world of multiplicity" in the advanced yogi who has completely uprooted his "network of Karma". He then enters the "fourth state", namely turiya or samadhi, beyond the usual states of waking, dreaming, and deep sleep, "that special thoughtless sleep, which consists of [just] consciousness." The 15th century Haṭha Yoga Pradīpikā goes further, stating (4.49) that "One should practice Khecarī Mudrā until one is asleep in yoga. For one who has achieved Yoganidrā, death never occurs." Khecarī Mudrā is the Hatha yoga practice of folding the tongue back so that it reaches inside the nasal cavity, where it can enable the yogi to reach samadhi. In the 17th century Haṭha Ratnāvalī (3.70), Yoganidrasana is first described. It is an asana or yoga pose where the legs are wrapped around the back of the neck. The text says that the yogi should sleep in this position, which "bestows bliss". These texts view yoganidra as a state, not a practice in itself.
Modern usage
Western "relaxationism"
The yoga scholar Mark Singleton states that while relaxation is a primary feature of modern Western yoga, its relaxation techniques "have no precedent in the pre-modern yoga tradition", but derive mostly from 19th and 20th century Western "proprioceptive relaxation". This prescriptive approach was described by authors such as the "relaxationist" Annie Payson Call in her 1891 book Power through Repose, and the Chicago psychiatrist Edmund Jacobson, the creator of progressive muscle relaxation and biofeedback, in his 1934 book You Must Relax!.
Dennis Boyes
In 1973, French yoga advocate Dennis Boyes published his book Le Yoga du sommeil éveillé; méthode de relaxation, yoga nidra ("The Yoga of Waking Sleep: method of relaxation, yoga nidra"). This is the first known usage of "yoga nidra" in a modern sense. In the book, Boyes makes use of relaxation techniques including the direction of attention to each part of the body:
The French journal Revue 3e Millénaire, reviewing Boyes's approach in 1984, wrote that Boyes proposes relaxation in order to "reach the state of emptiness". The person thus imperceptibly moves to a stage where relaxation becomes meditation and can remain there once the mind's obsession with external objects or thoughts is removed.
Satyananda
In modern times, Satyananda Saraswati claimed to have experienced yoga nidra when he was living with his guru Sivananda Saraswati in Rishikesh. In 1976, he constructed a system of relaxation through guided meditation, which he popularized in the mid-20th century. He explained yoga nidra as a state of mind between wakefulness and sleep that opened deep phases of the mind, suggesting a connection with the ancient tantric practice called nyasa, whereby Sanskrit mantras are mentally placed within specific body parts while meditating on each part (of the bodymind). The form of practice taught by Satyananda includes eight stages (internalisation, resolve (sankalpa), rotation of consciousness, breath awareness, manifestation of opposites, creative visualization, repeated resolve (sankalpa), and externalisation). Satyananda used this technique, along with suggestion, on the child who was to become his successor, Niranjanananda Saraswati, from age four. He claimed to have been taught several languages by this method.
Satyananda's multi-stage yoga nidra technique is not found in ancient or medieval texts. However, the yoga scholars Jason Birch and Jacqueline Hargreaves note that there are analogues for several of his yoga nidra activities.
Yoga nidra in this modern sense is a state in which the body is completely relaxed, and the practitioner becomes systematically and increasingly aware of the inner world by following a set of verbal instructions. This state of consciousness is different from meditation, in which concentration on a single focus is required. In yoga nidra the practitioner remains in a state of light withdrawal of the 5 senses (pratyahara) with four senses internalised, that is, withdrawn, and only hearing still connects to any instructions given.
Swami Rama
Swami Rama taught a form of yoga nidra (in a broad sense), which involves an exercise called shavayatra, "inner pilgrimage [through the body]", which directs the attention around "61 sacred points of the body" during relaxation in shavasana, corpse pose. A second exercise, shithali karana, is said to induce "a very deep state of relaxation", and is described as a preliminary for yoga nidra (in a narrow sense). It, too, is performed in Shavasana, involving exhalations imagined as directed from the crown of the head to different points around the body, each repeated 5 or 10 times. The yoga nidra exercise involves directed breathing on the left side, then the right side, then in Shavasana. In Shavasana, the attention is directed to the eyebrow, throat, and heart centers or chakras.
Richard Miller
The Western pioneer of yoga as therapy, Richard Miller, has developed the use of yoga nidra for rehabilitating soldiers in pain, using the Integrative Restoration (iRest) methodology. Miller worked with Walter Reed Army Medical Center and the United States Department of Defense studying the efficacy of the approach. According to Yoga Journal, "Miller is responsible for bringing the practice to a remarkable variety of nontraditional settings," which includes "military bases and in veterans' clinics, homeless shelters, Montessori schools, Head Start programs, hospitals, hospices, chemical dependency centers, and jails." The iRest protocol was used with soldiers returning from Iraq and Afghanistan suffering from post-traumatic stress disorder (PTSD). The Surgeon General of the United States Army endorsed Yoga Nidra as a complementary alternative medicine (CAM) for chronic pain in 2010.
Post-lineage yoga nidra
In 2021, the yoga teachers Uma Dinsmore-Tuli and Nirlipta Tuli jointly published a "declaration of independence for Yoga Nidrā Shakti". In it, they stated that yoga nidra had become commodified and promoted by commercial organisations for profit; that abuse had taken place within those organisations; and that the organisations had propagated origin stories for yoga nidra "that privilege their own founders" and exclude or neglect older roots of the practice. They state their shock at abuses by Satyananda, Swami Rama, Amrit Desai, and Richard Miller. They invite practitioners and teachers to learn about the history of yoga nidra outside organisational boundaries and to work without "trademarked versions" of the practice.
Reception
The Mindful Yoga teacher Anne Cushman states that "This body-sensing journey [that I teach in Mindful Yoga] ... is one variation of the ancient practice of Yoga nidra ... and of the body-scan technique commonly used in the Buddhist Vipassana tradition."
The cultural historian Alistair Shearer writes that the name yoga nidra is an umbrella term for different systems of "progressive relaxation or 'guided meditation'." He comments that Satyananda promoted his version of yoga nidra, claiming it was ancient, when its connections to ancient texts "seem vague at best". Shearer writes that other teachers have defined yoga nidra as "the state of conscious sleep" in which inner awareness is maintained, without reference to Satyananda's method of progressive relaxation by directing attention to different parts of the body. Shearer attributes this "inner lucidity" to the buddhi (intellect, literally "wakefulness") of Sankhya philosophy. He compares buddhi to the "intellect" discussed by Saint Augustine and the Apostolic Fathers at about the same time as Patanjali's Yoga Sutra.
Scientific evidence
Scientific evidence for the action of yoga nidra is patchy. Parker (2019) conducted a single-observation study of a famous yogi; in it, Swami Rama demonstrated conscious entry into NREM delta-wave sleep through yoga nidra, while a disciple produced delta and theta waves even with eyes open and talking. A therapeutic model developed by Datta and colleagues (2017) appeared to be useful for insomnia patients. Datta and colleagues (2022) report a beneficial effect of yoga nidra on the sleep of forty-five male athletes, noting that sportsmen often have sleep problems. Their small randomised controlled trial found improvements in subjective sleep latency and sleep efficiency with four weeks of yoga nidra compared to progressive muscular relaxation (used as the control).
Primary research, sometimes informal, on a small scale, and without strictly controlled trials, has been conducted on various aspects of yoga nidra. These have made tentative findings of benefits to mind and body such as increased dopamine release in the brain, improved heart rate variability, reduced blood pressure, reduced anxiety, and improved self-esteem.
See also
Dream yoga
Notes
References
External links
Systematic review articles on Yoga Nidra indexed by Google Scholar
Sleep
Yoga as therapy | Yoga nidra | Biology | 2,470 |
3,955,475 | https://en.wikipedia.org/wiki/Weather%20warfare | Weather warfare is the use of weather modification techniques such as cloud seeding for military purposes.
History
Prior to the Environmental Modification Convention signed in Geneva in 1977, the United States used weather warfare in the Vietnam War. Operation Popeye saw the use of cloud seeding over the Ho Chi Minh trail. It was hoped that the increased rainfall would reduce the rate of infiltration down the trail.
A research paper written for the United States Air Force in 1996 speculates about the future use of nanotechnology to produce "artificial weather": clouds of microscopic computer particles, all communicating with each other, forming an "intelligent fog" that could be used for various purposes. "Artificial weather technologies do not currently exist. But as they are developed, the importance of their potential applications rises rapidly."
The Environmental Modification Convention (ENMOD), which was signed in Geneva on May 18, 1977, and entered into force on October 5, 1978, prohibits environmental modification having "widespread, long-lasting or severe effects as the means of destruction, damage or injury". In 1972, discussions preceding the convention suggested that this wording permits "local, non-permanent changes". The "Consultative Committee of Experts" established in Article VIII of the Convention stated in their "Understanding relating to Article II" that any use of environmental modification "as a means of destruction, damage or injury to another State Party, would be prohibited". It also suggests all signatories are expected to abstain from using weather modification to cause harm at any scale, stating that "military or any other hostile use of environmental modification techniques, would result, or could reasonably be expected to result, in widespread, long-lasting or severe destruction, damage or injury." However, the treaty does not directly condemn military use of weather modification when it does not directly cause harm, such as the United States' use of weather modification in the Battle of Khe Sanh in the Vietnam War. The limitations of the treaty, and its application only to signatory states, have allowed weather warfare to continue to play a role in warfare in the 21st century. The United States prohibits weather modification without permission of the United States Secretary of Commerce.
See also
Weather Modification Operations and Research Board
References
External links
Winter Northern Lights 2023: Why It’s The Best In The Last 20 Years And How To Watch It
Non Lethal Warfare Proposal: Weather Modification, The Sunshine Project
Weather modification
Warfare by type
Meteorological hypotheses | Weather warfare | Engineering | 500 |
2,010,826 | https://en.wikipedia.org/wiki/Sodium%20oxide | Sodium oxide is a chemical compound with the formula Na2O. It is used in ceramics and glasses. It is a white solid but the compound is rarely encountered. Instead "sodium oxide" is used to describe components of various materials such as glasses and fertilizers which contain oxides that include sodium and other elements; sodium oxide is one such component.
Structure
The structure of sodium oxide has been determined by X-ray crystallography. Most alkali metal oxides M2O (M = Li, Na, K, Rb) crystallise in the antifluorite structure. In this motif the positions of the anions and cations are reversed relative to their positions in CaF2 (fluorite), with sodium ions tetrahedrally coordinated to 4 oxide ions and oxide cubically coordinated to 8 sodium ions.
Preparation
Sodium oxide is produced by the reaction of sodium with sodium hydroxide, sodium peroxide, or sodium nitrite:
2 NaOH + 2 Na → 2 Na2O + H2
Na2O2 + 2 Na → 2 Na2O
2 NaNO2 + 6 Na → 4 Na2O + N2
To the extent that NaOH is contaminated with water, correspondingly greater amounts of sodium are employed. Excess sodium is distilled from the crude product.
A second method involves heating a mixture of sodium azide and sodium nitrate:
5 NaN3 + NaNO3 → 3 Na2O + 8 N2
Burning sodium in air produces a mixture of Na2O and sodium peroxide (Na2O2).
A third, much less known, method involves heating sodium metal with iron(III) oxide (rust):
6 Na + Fe2O3 → 3 Na2O + 2 Fe
The reaction should be carried out in an inert atmosphere to prevent the sodium from reacting with the air instead.
Applications
Glassmaking
Glasses are often described in terms of their sodium oxide content although they do not really contain Na2O. Furthermore, such glasses are not made from sodium oxide; rather, the equivalent of Na2O is added in the form of "soda" (sodium carbonate), which loses carbon dioxide at high temperatures:
Na2CO3 → Na2O + CO2
A typical manufactured glass contains around 15% sodium oxide, 70% silica (silicon dioxide), and 9% lime (calcium oxide). The sodium carbonate "soda" serves as a flux to lower the temperature at which the silica mixture melts. Such soda-lime glass has a much lower melting temperature than pure silica and has slightly higher elasticity. These changes arise because the sodium-modified silica network is somewhat more flexible.
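The flux arithmetic can be made concrete. Since one mole of Na2CO3 (105.99 g/mol) delivers one mole of Na2O (61.98 g/mol) on losing CO2, each gram of Na2O in the finished glass requires about 1.71 g of soda ash in the batch. A small sketch of this batch calculation (illustrative only, using the typical composition above):

```python
# Soda-ash requirement for a target Na2O content in glass (molar-mass arithmetic).
M_NA2CO3 = 105.99   # g/mol
M_NA2O = 61.98      # g/mol

def soda_ash_needed(glass_mass_g: float, na2o_fraction: float = 0.15) -> float:
    """Grams of Na2CO3 per batch to supply the Na2O fraction of the glass."""
    na2o_mass = glass_mass_g * na2o_fraction
    return na2o_mass * (M_NA2CO3 / M_NA2O)   # 1 mol Na2CO3 -> 1 mol Na2O + CO2

print(round(soda_ash_needed(1000.0), 1))  # ~256.5 g of soda ash per kg of glass
```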
Reactions
Sodium oxide reacts readily and irreversibly with water to give sodium hydroxide:
Na2O + H2O → 2 NaOH
Because of this reaction, sodium oxide is sometimes referred to as the base anhydride of sodium hydroxide (more archaically, "anhydride of caustic soda").
References
Oxides
Sodium compounds
Fluorite crystal structure | Sodium oxide | Chemistry | 497 |
3,865,797 | https://en.wikipedia.org/wiki/Uranium%20tetrafluoride | Uranium tetrafluoride is the inorganic compound with the formula UF4. It is a green solid with an insignificant vapor pressure and low solubility in water. Uranium in its tetravalent (uranous) state is important in various technological processes. In the uranium refining industry it is known as green salt.
Production
UF4 is prepared from UO2 in a fluidized bed by reaction with hydrogen fluoride:
UO2 + 4 HF → UF4 + 2 H2O
The UO2 is derived from mining operations. Around 60,000 tonnes are prepared in this way annually. A common impurity is UO2F2. UF4 is susceptible to hydrolysis as well.
UF4 is also formed by the reaction of UF6 with hydrogen gas in a vertical tube-type reactor:
UF6 + H2 → UF4 + 2 HF
The bulk density of UF4 varies from about 2.0 g/cm3 to about 4.5 g/cm3 depending on the production process and the properties of the starting uranium compounds.
A molten salt reactor design, a type of nuclear reactor where the working fluid is a molten salt, would use UF4 as the core material. UF4 is generally chosen over related compounds because its constituent elements are usable without isotope separation, and for its better neutron economy and moderating efficiency, lower vapor pressure, and better chemical stability.
Reactions
Uranium tetrafluoride reacts stepwise with fluorine, first to give uranium pentafluoride and then volatile UF6:
2UF4 + F2 → 2UF5
2UF5 + F2 → 2UF6
UF4 is reduced by magnesium to give the metal:
UF4 + 2Mg → U + 2MgF2
UF4 reacts slowly with moisture at ambient temperature, forming UO2 and HF:
UF4 + 2 H2O → UO2 + 4 HF
Structure
Like most binary metal fluorides, UF4 is a dense highly crosslinked inorganic polymer. As established by X-ray crystallography, the U centres are eight-coordinate with square antiprismatic coordination spheres. The fluoride centres are doubly bridging.
Safety
Like all uranium salts, UF4 is toxic and thus harmful by inhalation, ingestion, and through skin contact.
See also
Praseodymium(IV) fluoride which has the same crystal structure
References of historical interest
References
External links
Uranium(IV) compounds
Nuclear materials
Fluorides
Actinide halides
Inorganic compounds | Uranium tetrafluoride | Physics,Chemistry | 492 |
11,305,178 | https://en.wikipedia.org/wiki/Nectriella%20pironii | Nectriella pironii is a plant pathogen. It parasitizes Aphelandra squarrosa, Clerodendron bungei, Codiaeum variegatum, Jussiaea peruviana, Leucophyllum frutescens, Pittosporum tobira, Plumbago capensis, Chrysanthemum morifolium and Psychotria undata.
The species is named in honor of Pascal Pompey Pirone.
Plant symptoms
Symptoms on leaves appear as slightly sunken spots surrounded by a dark brown edge. The center of the spot may appear pink from the spore masses being produced by the fungus. Frequently, large areas of the leaf turn brown, dry out along the leaf margins, and eventually the leaf falls off. Symptoms on fruit also develop as small, discolored, sunken areas that enlarge and develop pink spore masses in the center. Fruits that develop will succumb to soft rot and drop off the tree.
In chrysanthemum, the fungus first produces an infection canker near the bottom of the stem, after which it most likely disturbs the plant's hormonal system, causing infected plants to grow taller than uninfected plants. In time, infected chrysanthemum plants also show very typical blister-like lesions on the leaves, making this disease quite easily recognisable (William Quaedvlieg, unpublished information).
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Fungi described in 1980
Bionectriaceae
Fungus species | Nectriella pironii | Biology | 325 |
1,037,221 | https://en.wikipedia.org/wiki/Fructooligosaccharide | Fructooligosaccharides (FOS) also sometimes called oligofructose or oligofructan, are oligosaccharide fructans, used as an alternative sweetener. FOS exhibits sweetness levels between 30 and 50 percent of sugar in commercially prepared syrups. It occurs naturally, and its commercial use emerged in the 1980s in response to demand for healthier and calorie-reduced foods.
Chemistry
Two different classes of fructooligosaccharide (FOS) mixtures are produced commercially, based on inulin degradation or transfructosylation processes.
FOS can be produced by degradation of inulin, or polyfructose, a polymer of D-fructose residues linked by β(2→1) bonds with a terminal α(1→2) linked D-glucose. The degree of polymerization of inulin ranges from 10 to 60. Inulin can be degraded enzymatically or chemically to a mixture of oligosaccharides with the general structure Glu–Frun (abbrev. GFn) and Frum (Fm), with n and m ranging from 1 to 7. This process also occurs to some extent in nature, and these oligosaccharides may be found in a large number of plants, especially in Jerusalem artichoke, chicory and the blue agave plant. The main components of commercial products are kestose (GF2), nystose (GF3), fructosylnystose (GF4), bifurcose (GF3), inulobiose (F2), inulotriose (F3), and inulotetraose (F4).
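The composition Glu–Frun translates directly into molecular weight: each residue contributes one hexose (180.16 g/mol), and each glycosidic bond releases one water (18.02 g/mol). A small sketch of this arithmetic (illustrative only; the values reproduce the accepted molar masses of the named components):

```python
HEXOSE = 180.16  # g/mol, glucose or fructose
WATER = 18.02    # g/mol lost per glycosidic bond

def gfn_molar_mass(n: int) -> float:
    """Molar mass of a GF_n oligosaccharide: 1 glucose + n fructose units,
    joined by n glycosidic bonds (each bond releases one water)."""
    residues = 1 + n
    return residues * HEXOSE - n * WATER

for name, n in [("kestose (GF2)", 2), ("nystose (GF3)", 3), ("fructosylnystose (GF4)", 4)]:
    print(name, round(gfn_molar_mass(n), 2))  # 504.44, 666.58, 828.72 g/mol
```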
The second class of FOS is prepared by the transfructosylation action of a β-fructosidase of Aspergillus niger or Aspergillus on sucrose. The resulting mixture has the general formula of GFn, with n ranging from 1 to 5. Contrary to the inulin-derived FOS, not only β(1→2) linkages but also other linkages occur, though in limited numbers.
Because of the configuration of their glycosidic bonds, fructooligosaccharides resist hydrolysis by salivary and intestinal digestive enzymes. In the colon they are fermented by anaerobic bacteria. In other words, they have a lower caloric value, while contributing to the dietary fiber fraction of the diet. Fructooligosaccharides are more soluble than inulins and are, therefore, sometimes used as an additive to yogurt and other (dairy) products. Fructooligosaccharides are used especially in combination with high-intensity artificial sweeteners, whose sweetness profile and aftertaste they improve.
Food sources
FOS is extracted from the blue agave plant as well as fruits and vegetables such as bananas, onions, chicory root, garlic, asparagus, jícama, and leeks. Some grains and cereals, such as wheat and barley, also contain FOS. The Jerusalem artichoke and its relative yacón together with the blue agave plant have been found to have the highest concentrations of FOS of cultured plants.
Health benefits
FOS has been a popular sweetener in Japan and Korea for many years, since before 1990, when the Japanese government installed a "Functionalized Food Study Committee" of 22 experts to begin regulating "special nutrition foods or functional foods", a category that includes fortified foods (e.g., vitamin-fortified wheat flour). FOS is now becoming increasingly popular in Western cultures for its prebiotic effects. It serves as a substrate for microflora in the large intestine, increasing the overall health of the gastrointestinal tract. It has also been proposed as a supplement for treating yeast infections.
Several studies have found that FOS and inulin promote calcium absorption in both the animal and the human gut. The intestinal microflora in the lower gut can ferment FOS, which results in a reduced pH. Calcium is more soluble in acid, and, therefore, more of it comes out of food and is available to move from the gut into the bloodstream.
In a randomized controlled trial involving 36 twin pairs aged 60 and above, participants were given either a prebiotic (3.375 mg inulin and 3.488 mg FOS) or a placebo daily for 12 weeks along with resistance exercise and branched-chain amino acid (BCAA) supplementation. The trial, conducted remotely, showed that the prebiotic supplement led to changes in the gut microbiome, specifically increasing Bifidobacterium abundance. While there was no significant difference in chair rise time between the prebiotic and placebo groups, the prebiotic did improve cognition. The study suggests that simple gut microbiome interventions could enhance cognitive function in the elderly.
FOS can be considered a small dietary fibre with, like all types of fibre, a low caloric value. The fermentation of FOS results in the production of gases and short-chain fatty acids. The latter provide some energy to the body.
Side-effects
All inulin-type prebiotics, including FOS, are generally thought to stimulate the growth of Bifidobacteria species. Bifidobacteria are considered beneficial bacteria. This effect has not been uniformly found in all studies, either for bifidobacteria or for other gut organisms. FOS is also fermented by numerous bacterial species in the intestine, including Klebsiella, E. coli and many Clostridium species, which can be pathogenic in the gut. These species are mainly responsible for the gas (hydrogen and carbon dioxide) formed after ingestion of FOS. Studies have shown that up to 20 grams/day is well tolerated.
Regulation
US FDA regulation
FOS is classified as generally recognized as safe (GRAS).
NZ FSANZ regulation
The New Zealand Food Safety Authority warned parents of babies that a major European baby-formula brand made in New Zealand did not comply with local regulations (because it contained fructo-oligosaccharides (FOS)), and urged them to stop using it.
EU regulation
FOS use has been approved in the European Union, which allows the addition of FOS in restricted amounts to baby formula (for babies up to 6 months) and follow-on formula (for babies between 6 and 12 months). Infant and follow-on formula products containing FOS have been sold in the EU since 1999.
Canadian regulations
FOS is currently not approved for use in baby formula.
See also
Xylooligosaccharide (XOS)
References
Oligosaccharides
Prebiotics (nutrition)
Sugar substitutes | Fructooligosaccharide | Chemistry | 1,431 |
41,214,091 | https://en.wikipedia.org/wiki/Action%20AWE | Action AWE (Action Atomic Weapons Eradication) is a grassroots activist anti-nuclear weapons campaign/group launched in February 2013. Its aim is to increase and activate public opposition to the UK Trident nuclear weapons system and to the nuclear warheads manufactured at AWE Burghfield (where AWE stands for Atomic Weapons Establishment) and AWE Aldermaston.
The group has been involved in numerous non-violent "disarmament" direct actions, both under its own banner and in association with other groups, and has attracted media attention.
Basis for Action
The foundation of Action AWE's various disarmament actions is the 1996 Advisory Opinion of the International Court of Justice, Legality of the Threat or Use of Nuclear Weapons, in which it found that 'the threat or use of nuclear weapons would generally be contrary to the rules of international law applicable in armed conflict'.
In addition to this, activists also argue that since the British government is not actively negotiating nuclear disarmament and is actively considering upgrading the UK Trident programme, it is in violation of the Non-Proliferation Treaty of 1968.
There is dissent over the continuation of the Trident programme from within the government as well as from outside it.
Actions and Protests
The launch event in February 2013 was a public meeting and a banner hang, timed to coincide with a parliamentary meeting on defence spending.
From 26 August to 2 September 2013, Action AWE camped beside AWE Burghfield for solidarity, research, training and the sharing of information, and to support activists blockading and researching in the area. Traffic was surveyed and monitored, with data compiled for Nukewatch.
On a blockade day on Monday 2 September, there were 21 arrests, and at various times throughout the day all four gates to the establishment were blockaded by activists.
On 6 September, a more light-hearted "nearly nude" protest was well covered in the Reading Post.
In August 2014, in association with Wool Against Weapons, Action AWE activists laid 7 miles of pink scarf between AWE Burghfield and AWE Aldermaston. It was knitted by hundreds of people all over the country who shared the anti-nuclear-weapons sentiment.
There is an ongoing, regular vigil on the first Tuesday of every month, at AWE Aldermaston.
See also
Anti-nuclear movement in the United Kingdom
Trident Ploughshares
International Campaign to Abolish Nuclear Weapons
Christian CND
War Resisters' International
References
External links
Archived as of 6 October 2014.
Action AWE (archived)
War Resisters' International (archived)
Trident Ploughshares (archived)
Nukewatch (archived)
Anti–nuclear weapons movement
Anti-nuclear organizations
Trident (UK nuclear programme)
Direct action | Action AWE | Engineering | 542 |
76,671,176 | https://en.wikipedia.org/wiki/JANNAF | The JANNAF Interagency Propulsion Committee (JANNAF IPC, or simply JANNAF) is a joint-agency committee chartered by the U.S. Department of Defense (DoD) and NASA. JANNAF is composed of two committees: the Technical Committee and the Programmatic & Industrial Base (PIB) Committee. The Technical Committee is itself divided into subcommittees focused on specific technology areas of mutual interest to the DoD and NASA. The JANNAF PIB Committee is a forum for the discussion of strategic program planning and industrial base capabilities in the area of rocket propulsion and energetic systems and components for military and civil space, tactical and strategic missiles, and large gun systems.
JANNAF was re-chartered on June 19, 2014, with the signatures of Frank Kendall III, Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)), and Robert Lightfoot Jr., Associate Administrator of the National Aeronautics and Space Administration (NASA).
References
External links
NIST-JANAF Thermochemical Tables
JANNAF/GL-2016-0001 Simulation Credibility - Advances in Verification, Validation, and Uncertainty Quantification
JANNAF DRAFT: Test and Evaluation Guideline for Liquid Rocket Engines
United States military associations
Aerospace engineering organizations
1945 establishments in Maryland | JANNAF | Engineering | 255 |
62,589,823 | https://en.wikipedia.org/wiki/H4K91ac | H4K91ac is an epigenetic modification to the DNA packaging protein histone H4. It is a mark that indicates acetylation of the 91st lysine residue of the histone H4 protein. No known diseases are attributed to this mark, but it might be implicated in melanoma.
Nomenclature
H4K91ac indicates acetylation of lysine 91 on the histone H4 protein subunit: H4 denotes the histone family, K is the standard abbreviation for lysine, 91 is the position of the residue counted from the N-terminus, and ac indicates the acetyl modification.
Histone modifications
The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal ends of these histones contribute to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of many of the post-translational modifications.
H4 histone
H4 modifications are less well characterized than those of H3, and H4 has fewer sequence variants, which might reflect its important function.
H4K91ac
As of December 15, 2019, no diseases are attributed to this mark, although the targetable bromodomain of pleckstrin homology domain interacting protein (PHIP) specifically binds H4K91ac, which could implicate PHIP in the progression of melanoma. The mark is found at the transcription start site (TSS) of active and poised genes.
Histone acetyltransferase KAT2A is the specific reader.
Lysine acetylation and deacetylation
Proteins are typically acetylated on lysine residues and this reaction relies on acetyl-coenzyme A as the acetyl group donor. In histone acetylation and deacetylation, histone proteins are acetylated and deacetylated on lysine residues in the N-terminal tail as part of gene regulation. Typically, these reactions are catalyzed by enzymes with histone acetyltransferase (HAT) or histone deacetylase (HDAC) activity, although HATs and HDACs can modify the acetylation status of non-histone proteins as well.
The regulation of transcription factors, effector proteins, molecular chaperones, and cytoskeletal proteins by acetylation and deacetylation is a significant post-translational regulatory mechanism. These regulatory mechanisms are analogous to phosphorylation and dephosphorylation by the action of kinases and phosphatases. Not only can the acetylation state of a protein modify its activity, but this post-translational modification may also crosstalk with phosphorylation, methylation, ubiquitination, sumoylation, and others for dynamic control of cellular signaling.
Epigenetic implications
The post-translational modification of histone tails by either histone modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones come from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping the interactions of different proteins and/or histone modifications together.
Chromatin states were investigated in Drosophila cells by looking at the binding locations of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications.
The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell specific gene regulation.
Methods
Histone acetylation marks can be detected in a variety of ways:
1. Chromatin immunoprecipitation sequencing (ChIP-sequencing) measures the amount of DNA enrichment once it is bound to a targeted protein and immunoprecipitated. The method is well optimized and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region (a minimal analysis sketch follows this list).
2. Micrococcal nuclease sequencing (MNase-seq) is used to investigate regions bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes show enrichment of sequences.
3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses the hyperactive Tn5 transposase to highlight nucleosome localisation.
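The quantitative core of a ChIP-seq analysis is comparing read coverage in the immunoprecipitated sample against an input control. The following minimal Python sketch illustrates that comparison using hypothetical per-window read counts; it stands in for no real pipeline or file format, and real analyses add depth normalisation and statistical peak calling.

# Minimal sketch: fold enrichment of ChIP reads over an input control,
# computed per fixed-size genomic window. All counts are illustrative.
chip_counts = [12, 9, 85, 90, 11, 7]    # hypothetical ChIP reads per 1 kb window
input_counts = [10, 11, 12, 10, 9, 10]  # hypothetical input-control reads

def fold_enrichment(chip, ctrl, pseudo=1.0):
    # Simple pseudocount ratio; real pipelines also normalise for
    # sequencing depth before calling peaks.
    return (chip + pseudo) / (ctrl + pseudo)

for i, (c, n) in enumerate(zip(chip_counts, input_counts)):
    fe = fold_enrichment(c, n)
    flag = "  <- candidate enriched window" if fe > 2.0 else ""
    print(f"window {i}: fold enrichment = {fe:.2f}{flag}")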
See also
Histone code
Histone acetylation
References
Epigenetics
Post-translational modification | H4K91ac | Chemistry | 1,240 |
74,848,733 | https://en.wikipedia.org/wiki/Meitei%20astronomy | The astronomy of Meitei civilisation deals with celestial objects, space, and the physical universe as a whole.
The Meitei language term "Khenchanglon" is derived from its ancient Meitei equivalent "Khenchonglon", literally meaning "the growing up, evolving or emergence of natural / celestial body(ies) and energy(ies)" and colloquially meaning "astronomy or astronomical bodies, like stars, constellations, planets, satellites, comets, meteors, etc." Meitei astronomy was also related to the tradition of astrology.
Constellations
Planets
Star tracks
See also
Khongjomnubi Nonggarol
Meitei calendar
Chinese astronomy
Islamic astronomy
Greek astronomy
References
External links
Meitei culture
Astronomy | Meitei astronomy | Astronomy | 159 |
28,301,027 | https://en.wikipedia.org/wiki/Suillus%20viscidus | Suillus viscidus (commonly known as the sticky bolete) is an edible, uncommon mushroom in the genus Suillus. It associates with larch and is found throughout Europe and in Japan.
Description
The cap is hemispherical when young, later convex to flat, whitish grey or darker. It is up to 12 cm in diameter. It is slimy, and blotchy when old. The large, angular pores on the underside of the cap are coloured pallid to yellowish at first, but become darker with maturity. Young specimens bear a whitish partial veil which soon shreds, sometimes leaving fragments on the cap edge. The tubes are concolorous, and have a slightly decurrent stem attachment. The stem bears a thin, slimy, dark-coloured ring in the uppermost part of the stem which is sometimes lost in mature specimens. The stem is divided by the ring into a short lighter, yellowish section above, and a duller, greyish section below, which is viscid. The flesh is whitish, staining bluish, very soft and has a mild or non-distinct taste.
The spores are clay-coloured and ellipsoid or subfusiform in shape. Their dimensions are 10–12 by 4–5.5 μm.
It is an edible mushroom of low quality.
Habitat
Suillus viscidus forms an ectomycorrhizal association with larch (Larix) specifically, and its distribution is thus limited by the range of the host tree. It occurs throughout Europe, and also in Japan. In Europe, it is considered an uncommon to rare fungus and it is to be found in the same habitat as the common larch bolete, Suillus grevillei, and also the rare Suillus tridentinus. Fruiting bodies are found in groups among grass under larch, from summer to autumn.
References
External links
viscidus
Fungi of Asia
Fungi of Europe
Edible fungi
Fungus species
Taxa named by Carl Linnaeus | Suillus viscidus | Biology | 404 |
44,654,779 | https://en.wikipedia.org/wiki/PRIME%20%28labeling%20technique%29 | PRIME (probe incorporation mediated by enzymes) is a molecular biology research tool developed by Alice Y. Ting and the Ting Lab at MIT for site-specific labeling of proteins in living cells with chemical probes. Probes often have useful biophysical properties, such as fluorescence, and allow imaging of proteins. Ultimately, PRIME enables scientists to study functions of specific proteins of interest.
Significance
Protein labeling with fluorescent molecules allows the visualization of protein dynamics, localization, and protein-protein interactions, and therefore serves as an important technique to understand protein functions and networks in living cells. The protein labeling should have a high selectivity towards the protein of interest, and should not interfere with the natural functions of the protein. Although genetic encoding of fluorescent proteins, such as the green fluorescent protein (GFP), is the most popular technique due to its high specificity, fluorescent proteins are likely to interfere with the functions of the protein to which they are fused because of their large size. There are multiple tagging tools, such as HaloTag, SNAP-tag, and FlAsH, developed in order to overcome the weaknesses of traditional labeling with fluorescent proteins. However, they still have significant shortcomings, either due to the large size of a tag or the low specificity of the labeling process. PRIME has been developed in order to achieve a labeling specificity comparable to fluorescent proteins using small molecules.
Principles
In PRIME, a mutant enzyme LplA (lipoic acid ligase from Escherichia coli) first catalyzes the conjugation of a "functional group handle" to the LplA acceptor peptide (LAP), which is genetically fused to the protein of interest. "Functional group handle" here denotes a bridge molecule connecting the LAP tag to a fluorescent probe or fluorophore. The fluorescent probe reacts with the functional group handle attached to the tag, and ultimately labels the protein of interest. Different chemical reactions can be utilized to attach the fluorescent probe to the complex consisting of the protein, the LAP tag, and the bridge: the Diels-Alder reaction, and chelation-assisted copper-catalyzed azide-alkyne cycloaddition (CuAAC) (refer to azide-alkyne Huisgen cycloaddition). Two other versions of PRIME labeling use mutant LplA proteins to directly incorporate a fluorophore onto the LAP-tagged protein of interest.
Limitations
Despite the advantages of PRIME over other tagging methods, PRIME still has some possible limitations. First of all, the LAP tag may interfere with the function of proteins to which it is fused. It is recommended that the experimenters perform control experiments in order to make sure that the tagged recombinant protein functions properly. Secondly, even at a low concentration, chemicals such as the fluorescent probe can be toxic to the cells. Experimenters are also required to obtain the right balance between maximal signal of fluorescence and minimal disruption of cellular function.
References
Molecular biology techniques
Cell imaging
Protein imaging | PRIME (labeling technique) | Chemistry,Biology | 607 |
8,736,713 | https://en.wikipedia.org/wiki/Mobile%20RFID | Mobile RFID (M-RFID) are services that provide information on objects equipped with an RFID tag over a telecommunication network. The reader or interrogator can be installed in a mobile device such as a mobile phone or PDA.
Unlike ordinary fixed RFID, in mobile RFID the readers are mobile and the tags fixed, instead of the other way around. The advantages of M-RFID over fixed RFID include the absence of wiring to fixed readers and the ability of a small number of mobile readers to cover a large area that would otherwise require dozens of fixed readers.
The main focus is on supporting supply chain management, but the technology has also found its way into m-commerce. A customer in a supermarket can scan the Electronic Product Code (EPC) from a tag and connect via the internet to get more information about the product.
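As a rough illustration of that m-commerce flow, the sketch below takes an EPC string from a mobile reader's scan and builds a web query for product information. This is a toy example: the endpoint URL and the EPC value are hypothetical, and real deployments resolve EPCs through dedicated lookup services rather than a plain web query.

# Minimal sketch of the m-commerce lookup flow: a mobile RFID reader scans
# an EPC from a tag and the handset fetches product data over the internet.
# The base URL below is a hypothetical placeholder, not a real EPC service.
from urllib.parse import quote
from urllib.request import urlopen

def lookup_product(epc, base_url="https://example.com/epc-info"):
    # Build a query URL for the scanned EPC and fetch the product record.
    url = f"{base_url}?epc={quote(epc)}"
    with urlopen(url, timeout=5) as resp:  # network call; fails offline
        return resp.read().decode("utf-8")

scanned = "urn:epc:id:sgtin:0614141.112345.400"  # illustrative EPC URN
print("Would query:", f"https://example.com/epc-info?epc={quote(scanned)}")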
ISO/IEC 29143 "Information technology — Automatic Identification and Data Capture Technique — Air Interface specification for Mobile RFID interrogator" is the first standard to be developed for Mobile RFID.
References
See also
MIIM
RFID
RTLS
ISO
Mobile telecommunications
Radio-frequency identification | Mobile RFID | Technology,Engineering | 221 |
74,127,943 | https://en.wikipedia.org/wiki/2023%20Yellowstone%20River%20train%20derailment | The Yellowstone River bridge collapse was a train derailment that occurred on June 24, 2023, near Columbus, Montana, United States. A bridge that crosses the Yellowstone River collapsed, causing several cars of a freight train carrying hazardous materials to fall into the water below. The incident resulted in environmental concerns and internet service disruptions in the state.
The bridge was part of the Montana Rail Link (MRL) network, a privately owned regional railroad that operates over 900 miles of track in Montana and Idaho. The train involved in the derailment was carrying hot asphalt and molten sulfur, which are both flammable. Sulfur is used in phosphate fertilizer production and for direct soil supplement. Hot sulfur burns easily, producing toxic sulfur dioxide. The train crew was safe and no injuries were reported.
Collapse
The collapse occurred around 6 a.m. local time on June 24, 2023. The cause of the collapse is under investigation, but some experts have suggested that repeated years of heavy river flows may have eroded the river bottom and weakened the bridge structure. The adjacent Twin Bridges Road Bridge constructed in 1931 was demolished in 2021 after decades of riverbed bridge scour had undermined its concrete piers and the Montana Department of Transportation had determined it was in danger of collapse. The river was swollen with recent heavy rains at the time of the derailment.
Aftermath
The collapse triggered an emergency response from local, state and federal agencies. Officials shut down drinking water intakes downstream while they evaluated the danger after the derailment. The Yellowstone County Disaster and Emergency Services asked residents to conserve water and implemented precautions at water treatment plants, irrigation districts and industrial companies. The Montana Department of Environmental Quality said it was monitoring the water quality and potential impacts to fish and wildlife. An Associated Press reporter witnessed a yellow substance coming out of some of the tank cars.
The collapse also took out a fiber-optic cable providing internet service to many customers in the state, including the high-speed provider Global Net. The company said it was working to restore service as soon as possible. Montana Gov. Greg Gianforte tweeted that he was monitoring the situation and that the state was standing by to support MRL and county officials.
MRL said it was committed to addressing any potential impacts and working to understand the reasons behind the crash.
References
June 2023 events in the United States
2023 disasters in the United States
Miamisburg train derailment
Derailments in the United States
Railway accidents and incidents in Montana
2023 in Montana
Stillwater County, Montana
Yellowstone River | 2023 Yellowstone River train derailment | Technology | 513 |
7,098,644 | https://en.wikipedia.org/wiki/System%20Fault%20Tolerance | In computing, System Fault Tolerance (SFT) is a fault tolerant system built into NetWare operating systems. Three levels of fault tolerance exist:
SFT I 'Hot Fix' maps out bad disk blocks on the file system level to help ensure data integrity (fault tolerance on the disk-block level).
SFT II provides a disk mirroring or duplexing system based on RAID 1; mirroring refers to two disk drives holding the same data, while duplexing uses two data channels/controllers to connect the disks (fault tolerance on the disk level and optionally on the data-channel level); a minimal sketch of the mirroring idea follows this list.
SFT III is a server duplexing scheme where if a server fails, a constantly synchronized server seamlessly takes its place (fault tolerance on the system level).
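To make the SFT II mirroring idea concrete, here is a minimal sketch in Python (not NetWare code): every write goes to two independent stores, and a read falls back to the surviving copy if one fails.

# Minimal sketch of SFT II-style disk mirroring (the RAID 1 principle):
# each block is written to two independent stores, so a read tolerates
# the failure of either copy. Purely illustrative, not how NetWare did it.
class MirroredStore:
    def __init__(self):
        self.disk_a = {}  # block number -> data
        self.disk_b = {}  # identical copy on a second "drive"

    def write(self, block, data):
        self.disk_a[block] = data  # with duplexing, this write would also
        self.disk_b[block] = data  # travel over a separate controller/channel

    def read(self, block):
        try:
            return self.disk_a[block]
        except KeyError:           # primary copy lost: fall back to the mirror
            return self.disk_b[block]

store = MirroredStore()
store.write(7, b"payroll record")
del store.disk_a[7]                # simulate a failed primary disk block
print(store.read(7))               # the mirror still serves the data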
References
Novell NetWare 4.2 documentation
Novell NetWare | System Fault Tolerance | Technology | 193 |
1,939,119 | https://en.wikipedia.org/wiki/Shuttle%20%28weaving%29 | A shuttle is a tool designed to neatly and compactly store a holder that carries the thread of the weft yarn while weaving with a loom. Shuttles are thrown or passed back and forth through the shed, between the yarn threads of the warp in order to weave in the weft.
The simplest shuttles, known as "stick shuttles", are made from a flat, narrow piece of wood with notches on the ends to hold the weft yarn. More complicated shuttles incorporate bobbins or pirns.
In the United States, shuttles are often made of wood from the flowering dogwood, because it is hard, resists splintering, and can be polished to a very smooth finish. In the United Kingdom shuttles were usually made of boxwood, cornel, or persimmon.
Gallery
References
Chandler, Deborah (1995). Learning to Weave, Loveland, Colorado: Interweave Press LLC.
External links
Pak Shuttle Company (Pvt) Ltd.
Heraldic charges
Weaving equipment | Shuttle (weaving) | Engineering | 208 |
5,450,517 | https://en.wikipedia.org/wiki/Corrosion%20in%20space | Corrosion in space is the corrosion of materials occurring in outer space. Instead of moisture and oxygen acting as the primary corrosion causes, the materials exposed to outer space are subjected to vacuum, bombardment by ultraviolet and X-rays, solar energetic particles (mostly electrons and protons from solar wind), and electromagnetic radiation. In the upper layers of the atmosphere (between 90 and 800 km), the atmospheric atoms, ions, and free radicals, most notably atomic oxygen, play a major role. The concentration of atomic oxygen depends on altitude and solar activity, as the bursts of ultraviolet radiation cause photodissociation of molecular oxygen. Between 160 and 560 km, the atmosphere consists of about 90% atomic oxygen.
Materials
Corrosion in space has the highest impact on spacecraft with moving parts. Early satellites tended to develop problems with seizing bearings. Now the bearings are coated with a thin layer of gold.
Different materials resist corrosion in space differently. Electrolytes in batteries or cooling loops can cause galvanic corrosion, general corrosion, and stress corrosion. Aluminium is slowly eroded by atomic oxygen, while gold and platinum are highly corrosion-resistant. Gold-coated foils and thin layers of gold on exposed surfaces are therefore used to protect the spacecraft from the harsh environment. Thin layers of silicon dioxide deposited on the surfaces can also protect metals from the effects of atomic oxygen; e.g., the Starshine 3 satellite aluminium front mirrors were protected that way. However, the protective layers are subject to erosion by micrometeorites.
Silver builds up a layer of silver oxide, which tends to flake off and has no protective function; such gradual erosion of silver interconnects of solar cells was found to be the cause of some observed in-orbit failures.
Many plastics are considerably sensitive to atomic oxygen and ionizing radiation. Coatings resistant to atomic oxygen are a common protection method, especially for plastics. Silicone-based paints and coatings are frequently employed, due to their excellent resistance to radiation and atomic oxygen. However, the silicone durability is somewhat limited, as the surface exposed to atomic oxygen is converted to silica which is brittle and tends to crack.
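Atomic-oxygen damage to exposed materials is commonly estimated as erosion depth = fluence × erosion yield. The minimal Python sketch below shows that arithmetic; the yield is an assumed, material-dependent value of the order reported for Kapton polyimide, and the fluence is a hypothetical multi-year low-Earth-orbit exposure.

# Minimal sketch: estimating atomic-oxygen erosion depth as
#   depth (cm) = fluence (atoms/cm^2) * erosion yield (cm^3/atom).
erosion_yield_cm3_per_atom = 3.0e-24  # assumed; strongly material-dependent
fluence_atoms_per_cm2 = 5.0e21        # hypothetical mission total

depth_cm = fluence_atoms_per_cm2 * erosion_yield_cm3_per_atom
print(f"Estimated erosion depth: {depth_cm * 1e4:.0f} micrometres")
# 5.0e21 * 3.0e-24 = 1.5e-2 cm, i.e. 150 micrometres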
Solving corrosion
The process of space corrosion is being actively investigated. One effort aims to design a zinc oxide-based sensor able to measure the amount of atomic oxygen in the vicinity of the spacecraft; the sensor relies on the drop in electrical conductivity of zinc oxide as it absorbs further oxygen.
Other problems
The outgassing of volatile silicones on low Earth orbit devices leads to presence of a cloud of contaminants around the spacecraft. Together with atomic oxygen bombardment, this may lead to gradual deposition of thin layers of carbon-containing silicon dioxide. Their poor transparency is a concern in case of optical systems and solar panels. Deposits of up to several micrometers were observed after 10 years of service on the solar panels of the Mir space station.
Other sources of problems for structures subjected to outer space are erosion and redeposition of the materials by sputtering caused by fast atoms and micrometeoroids. Another major concern, though of non-corrosive kind, is material fatigue caused by cyclical heating and cooling and associated thermal expansion mechanical stresses.
See also
Space weathering
References
External links
The Cosmos on a Shoestring: Small Spacecraft for Space and Earth Science, Appendix B: Failure in Spacecraft Systems PDF
New Scientist premium article: Space is corrosive
NASA Long Duration Exposition Facility: surface contamination in space
Corrosion
Spaceflight | Corrosion in space | Chemistry,Materials_science,Astronomy | 705 |
498,934 | https://en.wikipedia.org/wiki/ARts | aRts (which stands for analog real time synthesizer) is an audio framework that is no longer under development. It was best known for previously being used in K Desktop Environment 2 and 3 to simulate an analog synthesizer.
A key component of aRts was its sound server, which mixed several sound streams in real time. The server, called artsd (the d stands for daemon), was also the standard sound server for K Desktop Environment 2–3. It did not depend on K Desktop Environment, however, and could be used in other projects. It was a direct competitor to PulseAudio, another sound server, and an indirect competitor to the Enlightened Sound Daemon (ESD). It is now common to use PulseAudio instead of artsd.
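The core job of a sound server such as artsd is mixing: summing the sample streams of several clients into one output in real time. Below is a minimal Python sketch of that mixing step, assuming plain lists of 16-bit PCM samples rather than the actual aRts API.

# Minimal sketch of a sound server's mixing step: sum several clients'
# 16-bit PCM streams sample by sample, clamping to avoid integer overflow.
# Illustrative only; aRts' real mixing ran inside the artsd daemon.
def mix(streams, lo=-32768, hi=32767):
    mixed = []
    for samples in zip(*streams):                     # one sample per client
        mixed.append(max(lo, min(hi, sum(samples))))  # clamp, don't wrap
    return mixed

client_a = [1000, 2000, 30000, -5000]  # hypothetical client streams
client_b = [500, -1500, 10000, 2500]
print(mix([client_a, client_b]))       # [1500, 500, 32767, -2500]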
The aRts platform also includes aRts Builder — an application for building custom layouts and configurations for audio mixers, sequencers, synthesizers and other audio schema via a user-friendly graphical user interface. aRts is free software, distributed under the terms of the GNU General Public License.
End of project
On December 2, 2004, aRts' creator and primary developer Stefan Westerfeld announced he was leaving the project, citing a variety of fundamental development and technical issues with aRts.
In KDE Software Compilation 4 developers chose to replace aRts with a new multimedia API known as Phonon. Phonon provides a common interface on top of other systems, usually VLC media player or GStreamer, to avoid being dependent on a single multimedia framework.
See also
JACK Audio Connection Kit – prevailing sound server for professional audio production
References
External links
– The aRts project website
Audio libraries
Audio software for Linux
Free audio software
KDE Platform
Software that uses Qt | ARts | Technology | 342 |