id: int64 (39 to 79M)
url: string (length 31 to 227)
text: string (length 6 to 334k)
source: string (length 1 to 150)
categories: list (length 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (length 0 to 30)
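For orientation, here is a minimal sketch of how rows with this schema could be loaded and filtered, assuming the data is published as a Hugging Face dataset; the dataset path below is a placeholder, not a real ID.

```python
# A minimal sketch of loading and filtering rows with the schema above,
# assuming the data is published as a Hugging Face dataset. The dataset
# path "example/wikipedia-categorized" is a placeholder, not a real ID.
from datasets import load_dataset

ds = load_dataset("example/wikipedia-categorized", split="train")

row = ds[0]  # each record carries the seven fields listed above
print(row["id"], row["url"])
print(row["source"], row["categories"], row["subcategories"])
print("tokens:", row["token_count"], "chars:", len(row["text"]))

# Example filter: keep only substantial articles (token_count is an int64).
long_docs = ds.filter(lambda r: r["token_count"] > 1000)
print(len(long_docs), "articles with more than 1,000 tokens")
```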
8,746,538
https://en.wikipedia.org/wiki/Turtle%20Beak
The Turtle Beak mansion (觜宿, pinyin: Zī Xiù) is one of the twenty-eight mansions of the Chinese constellations. It is one of the western mansions of the White Tiger.
Turtle Beak
[ "Astronomy" ]
50
[ "Chinese constellations", "Constellations" ]
8,746,727
https://en.wikipedia.org/wiki/Level%20of%20support%20for%20evolution
The level of support for evolution among scientists, the public, and other groups is a topic that frequently arises in the creation–evolution controversy, and touches on educational, religious, philosophical, scientific, and political issues. The subject is especially contentious in countries where significant levels of non-acceptance of evolution by the general population exist, but evolution is taught at public schools and universities. Nearly all (around 98%) of the scientific community accepts evolution as the dominant scientific theory of biological diversity, with some 87% accepting that evolution occurs due to natural processes, such as natural selection. Scientific associations have strongly rebutted the challenges to evolution proposed by intelligent design proponents. Many religious groups and denominations across several countries reject the theory of evolution because it conflicts with their central belief of creationism. Countries with such groups include the United States, South Africa, the Muslim world, South Korea, Singapore, the Philippines, and Brazil, with smaller followings in the United Kingdom, the Republic of Ireland, Japan, Italy, Germany, Israel, Australia, New Zealand, and Canada. Several publications discuss the subject of acceptance, including a document produced by the United States National Academy of Sciences. Scientific The vast majority of the scientific community and academia supports evolutionary theory as the only explanation that can fully account for observations in the fields of biology, paleontology, molecular biology, genetics, anthropology, and others. A 1991 Gallup poll found that about 5% of American scientists (including those with training outside biology) identified themselves as creationists. Additionally, the scientific community considers intelligent design, a neo-creationist offshoot, to be unscientific, pseudoscience, or junk science. The U.S. National Academy of Sciences has stated that intelligent design "and other claims of supernatural intervention in the origin of life" are not science because they cannot be tested by experiment, do not generate any predictions, and propose no new hypotheses of their own. In September 2005, 38 Nobel laureates issued a statement saying "Intelligent design is fundamentally unscientific; it cannot be tested as scientific theory because its central conclusion is based on belief in the intervention of a supernatural agent." In October 2005, a coalition representing more than 70,000 Australian scientists and science teachers issued a statement saying "intelligent design is not science" and calling on "all schools not to teach Intelligent Design (ID) as science, because it fails to qualify on every count as a scientific theory". In 1986, an amicus curiae brief, signed by 72 US Nobel Prize winners, 17 state academies of science and 7 other scientific societies, asked the US Supreme Court in Edwards v. Aguillard to reject a Louisiana state law requiring that where evolutionary science was taught in public schools, creation science must also be taught. The brief also stated that the term "creation science" as used by the law embodied religious dogma, and that "teaching religious ideas mislabeled as science is detrimental to scientific education". This was the largest collection of Nobel Prize winners to sign a petition up to that point.
According to anthropologists Almquist and Cronin, the brief is the "clearest statement by scientists in support of evolution yet produced." There are many scientific and scholarly organizations from around the world that have issued statements in support of the theory of evolution. The American Association for the Advancement of Science, the world's largest general scientific society with more than 130,000 members and over 262 affiliated societies and academies of science including over 10 million individuals, has made several statements and issued several press releases in support of evolution. The prestigious United States National Academy of Sciences, which provides science advice to the nation, has published several books supporting evolution and criticizing creationism and intelligent design. There is a notable difference between the opinion of scientists and that of the general public in the United States. A poll by Pew Research Center found that "Nearly all scientists (97%) say humans and other living things have evolved over time – 87% say evolution is due to natural processes, such as natural selection. The dominant position among scientists – that living things have evolved due to natural processes – is shared by only about a third (32%) of the public." By comparison, a Pew poll found that "65% of [U.S.] adults say that humans and other living things have evolved". Votes, resolutions, and statements of scientists before 1985 One of the earliest resolutions in support of evolution was issued by the American Association for the Advancement of Science in 1922, and readopted in 1929. Another early effort to express support for evolution by scientists was organized by Nobel Prize–winning American biologist Hermann J. Muller in 1966. Muller circulated a petition entitled "Is Biological Evolution a Principle of Nature that has been well established by Science?" in May 1966. The manifesto was signed by 177 of the leading American biologists, including George G. Simpson of Harvard University, Nobel Prize winner Peter Agre of Duke University, Carl Sagan of Cornell, John Tyler Bonner of Princeton, Nobel Prize winner George Beadle, President of the University of Chicago, and Donald F. Kennedy of Stanford University, formerly head of the United States Food and Drug Administration. This was followed by the passing of a resolution by the American Association for the Advancement of Science (AAAS) in the fall of 1972 that stated, in part, that "the theory of creation ... is neither scientifically grounded nor capable of performing the roles required of scientific theories". The United States National Academy of Sciences passed a similar resolution in the fall of 1972. A statement on evolution called "A Statement Affirming Evolution as a Principle of Science" was signed by Nobel Prize winner Linus Pauling, Isaac Asimov, George G. Simpson, Caltech biology professor Norman H. Horowitz, Ernst Mayr, and others, and published in 1977. The governing board of the American Geological Institute issued a statement supporting evolution in November 1981. Shortly thereafter, the AAAS passed another resolution supporting evolution and disparaging efforts to teach creationism in science classes. To date, no peer-reviewed research articles disclaiming evolution are listed in the scientific and medical journal search engine PubMed. Project Steve The Discovery Institute announced that over 700 scientists had expressed support for intelligent design as of February 8, 2007.
This prompted the National Center for Science Education to produce a "light-hearted" petition called "Project Steve" in support of evolution. Only scientists named "Steve" or some variation (such as Stephen, Stephanie, and Stefan) are eligible to sign the petition. It is intended to be a "tongue-in-cheek parody" of the lists of alleged "scientists" supposedly supporting creationist principles that creationist organizations produce. The petition demonstrates that more scientists named "Steve" alone (over 1,370) accept evolution than the total number of scientists who support intelligent design. On the basis of such comparisons, Brian Alters has estimated the percentage of scientists who support evolution at about 99.9 percent. Religious Creationists have claimed that they represent the interests of true Christians and that evolution is associated only with atheism. However, not all religious organizations find support for evolution incompatible with their religious faith. For example, 12 of the plaintiffs opposing the teaching of creation science in the influential McLean v. Arkansas court case were clergy representing Methodist, Episcopal, African Methodist Episcopal, Catholic, Southern Baptist, Reform Jewish, and Presbyterian groups. There are several religious organizations that have issued statements advocating the teaching of evolution in public schools. In addition, the Archbishop of Canterbury, Dr. Rowan Williams, issued statements in support of evolution in 2006. The Clergy Letter Project, organized in 2004, is a statement rejecting creationism signed by 12,808 American Christian clergy of different denominations (as of 28 May 2012). Molleen Matsumura of the National Center for Science Education found that, of Americans in the twelve largest Christian denominations, at least 77% belong to churches that support evolution education (and that at one point, this figure was as high as 89.6%). These religious groups include the Catholic Church, as well as various denominations of Protestantism, including the United Methodist Church, National Baptist Convention, USA, Evangelical Lutheran Church in America, Presbyterian Church (USA), National Baptist Convention of America, African Methodist Episcopal Church, the Episcopal Church, and others. An analysis by Walter B. Murfin and David F. Beck puts the figure closer to 71%. Michael Shermer argued in Scientific American in October 2006 that evolution supports concepts like family values, avoiding lies, fidelity, moral codes, and the rule of law. Shermer also suggests that evolution gives more support to the notion of an omnipotent creator than to that of a tinkerer with limitations based on a human model. Ahmadiyya The Ahmadiyya Movement universally accepts evolution and actively promotes it. Mirza Tahir Ahmad, Fourth Caliph of the Ahmadiyya Muslim Community, stated in his magnum opus Revelation, Rationality, Knowledge & Truth that evolution did occur, but only through God being the One who brings it about; according to the Ahmadiyya Muslim Community, it does not occur on its own. The Ahmadis do not believe Adam was the first human on Earth, but merely the first prophet to receive a revelation of God. Baha'i Faith A fundamental part of 'Abdu'l-Bahá's teachings on evolution is the belief that all life came from the same origin: "the origin of all material life is one..."
He states that from this sole origin, the complete diversity of life was generated: "Consider the world of created beings, how varied and diverse they are in species, yet with one sole origin." He explains that a slow, gradual process led to the development of complex entities. Catholic Church The 1950 encyclical Humani generis advocated scepticism towards evolution without explicitly rejecting it; this was substantially amended by Pope John Paul II in 1996 in an address to the Pontifical Academy of Sciences in which he said, "Today, almost half a century after publication of the encyclical, new knowledge has led to the recognition of the theory of evolution as more than a hypothesis." Between 2000 and 2002 the International Theological Commission found that "Converging evidence from many studies in the physical and biological sciences furnishes mounting support for some theory of evolution to account for the development and diversification of life on earth, while controversy continues over the pace and mechanisms of evolution." This statement was published by the Vatican in July 2004 by the authority of Cardinal Ratzinger (who became Pope Benedict XVI), who was the president of the Commission at the time. The Magisterium has not made an authoritative statement on intelligent design, and has permitted arguments on both sides of the issue. In 2005, Cardinal Christoph Schönborn of Vienna appeared to endorse intelligent design when he denounced philosophically materialist interpretations of evolution. In an op-ed in the New York Times he said "Evolution in the sense of common ancestry might be true, but evolution in the neo-Darwinian sense - an unguided, unplanned process of random variation and natural selection - is not." In the January 16–17, 2006, edition of the official Vatican newspaper L'Osservatore Romano, University of Bologna evolutionary biology professor Fiorenzo Facchini wrote an article agreeing with the judge's ruling in Kitzmiller v. Dover and stating that intelligent design was unscientific. Jesuit Father George Coyne, former director of the Vatican Observatory, has also denounced intelligent design. Sikhism The Sikh scripture explicitly states that the Universe and its processes are created by, and subject to, the laws of Nature. Furthermore, the name that Sikhs use for God, Waheguru, is literally translated as "the Wonderful Teacher", implying that these laws are, at least in principle, partially discernible by human inquiry. One of the hymns that observant Sikhs recite daily describes the orbit of the Earth as being caused by those same laws (and not some mythological cause). Thus, the scientific world-view, which includes the Darwinian theory of evolution, is compatible with traditional Sikh belief. Hinduism Hindus believe in the concept of evolution of life on Earth. The concepts of Dashavatara—different incarnations of God starting from simple organisms and progressively becoming complex beings—and the Day and Night of Brahma are generally cited as instances of Hindu acceptance of evolution. US religious denominations In the United States, many Protestant denominations promote creationism, preach against evolution, and sponsor lectures and debates on the subject.
Denominations that explicitly advocate creationism instead of evolution or "Darwinism" include the Assemblies of God, the Free Methodist Church, Lutheran Church–Missouri Synod, Pentecostal Churches, Seventh-day Adventist Churches, Wisconsin Evangelical Lutheran Synod, Christian Reformed Church, Southern Baptist Convention, the Pentecostal Oneness churches, and the Evangelical Lutheran Synod. Jehovah's Witnesses produce day-age creationism literature to refute evolution but reject the "creationist" label, which they consider to apply only to Young Earth creationism. Medicine and industry A common complaint of creationists is that evolution is of no value, has never been used for anything, and will never be of any use. According to many creationists, nothing would be lost by getting rid of evolution, and science and industry might even benefit. In fact, evolution is put to practical use in industry and used on a daily basis by researchers in medicine, biochemistry, molecular biology, and genetics, both to formulate hypotheses about biological systems for the purposes of experimental design and to rationalize observed data and prepare applications. As of May 2019 there are 554,965 scientific papers in PubMed that mention 'evolution'. Pharmaceutical companies utilize biological evolution in their development of new products, and also draw on it when using these medicines to combat evolving bacteria and viruses. Because of the perceived value of evolution in applications, there have been some expressions of support for evolution on the part of corporations. In Kansas, there has been widespread concern in the corporate and academic communities that a move to weaken the teaching of evolution in schools will hurt the state's ability to recruit the best talent, particularly in the biotech industry. Paul Hanle of the Biotechnology Institute warned that the United States risks falling behind in the biotechnology race with other nations if it does not do a better job of teaching evolution. James McCarter of Divergence Incorporated stated that the work of 2001 Nobel Prize winner Leland Hartwell relied heavily on the use of evolutionary knowledge and predictions, both of which have significant implications for the treatment of cancers. Furthermore, McCarter concluded that 47 of the last 50 Nobel Prizes in medicine or physiology depended on an understanding of evolutionary theory (according to McCarter's unspecified personal criteria). Public support There does not appear to be a significant correlation between believing in evolution and understanding evolutionary science. In some countries, creationist beliefs (or a lack of support for evolutionary theory) are relatively widespread, even garnering a majority of public opinion. A study published in Science compared attitudes about evolution in the United States, 32 European countries, and Japan. The only country where acceptance of evolution was lower than in the United States was Turkey (25%). Public acceptance of evolution was most widespread (at over 80% of the population) in Iceland, Denmark, and Sweden. Afghanistan According to the Pew Research Center, Afghanistan has the lowest acceptance of evolution among Muslim countries. Only 26% of people in Afghanistan accept evolution, while 62% deny human evolution and believe that humans have always existed in their present form.
Argentina According to a 2014 poll produced by the Pew Research Center, 71% of people in Argentina believe "humans and other living things evolved over time" while 23% believe they have "always existed in the present form." Armenia According to the Pew Research Center, 56 percent of Armenians deny human evolution and claim that humans have always existed in their present form, while only 34 percent of Armenians accept human evolution. Australia A 2009 Nielsen poll showed that 23% of Australians believe "the biblical account of human origins," 42% believe in a "wholly scientific" explanation for the origins of life, while 32% believe in an evolutionary process "guided by God". A 2013 survey conducted by Auspoll and the Australian Academy of Science found that 80% of Australians believe in evolution (70% believe it is currently occurring, 10% believe in evolution but do not think it is currently occurring), 12% were not sure and 9% stated they do not believe in evolution. Belarus According to the Pew Research Center, 63 percent of respondents in Belarus accept the theory of evolution while 23 percent of them deny evolution and claim that "humans have always existed in their present form." Bolivia According to a 2014 poll by the Pew Research Center, 44% of people in Bolivia believe "humans and other living things evolved over time" while 39% believe they have "always existed in the present form." Brazil In a 2010 poll, 59% of respondents said they believe in theistic evolution, or evolution guided by God. A further 8% believe in evolution without divine intervention, while 25% were creationists. Support for creationism was stronger among the poor and the least educated. According to a 2014 poll produced by the Pew Research Center, 66% of Brazilians agree that humans evolved over time and 29% think they have always existed in the present form. Canada In a 2019 nationwide poll, 61% of Canadians said they believe that humans evolved from less advanced life forms over millions of years, while 23% believe that God created human beings in their present form within the last 10,000 years. Chile According to a 2014 poll by the Pew Research Center, 69% of people in Chile believe "humans and other living things evolved over time" while 26% believe they have "always existed in the present form." Colombia According to a 2014 poll by the Pew Research Center, 59% of people in Colombia believe "humans and other living things evolved over time" while 35% believe they have "always existed in the present form." Costa Rica According to a 2014 poll by the Pew Research Center, 56% of people in Costa Rica believe "humans and other living things evolved over time" while 38% believe they have "always existed in the present form." Czech Republic According to the Pew Research Center, the Czech Republic has the highest acceptance of evolution in Eastern Europe, with 83 percent of people in the Czech Republic believing that humans evolved over time. Dominican Republic According to a 2014 poll by the Pew Research Center, 41% of people in the Dominican Republic believe "humans and other living things evolved over time" while 56% believe they have "always existed in the present form." Ecuador According to a 2014 poll by the Pew Research Center, 50% of people in Ecuador believe "humans and other living things evolved over time" while 44% believe they have "always existed in the present form."
El Salvador According to a 2014 poll by the Pew Research Center, 46% of people in El Salvador believe "humans and other living things evolved over time" while 45% believe they have "always existed in the present form." Estonia According to the Pew Research Center, 74% of Estonians accept the theory of evolution while 21% deny it and claim that "humans have always existed in their present form." Georgia According to the Pew Research Center, 58 percent of Georgians accept the theory of evolution while 34 percent of Georgians deny the theory of evolution. Guatemala According to a 2014 poll by the Pew Research Center, 55% of people in Guatemala believe "humans and other living things evolved over time" while 38% believe they have "always existed in the present form." Honduras According to a 2014 poll by the Pew Research Center, 49% of people in Honduras believe "humans and other living things evolved over time" while 45% believe they have "always existed in the present form." Hungary According to the Pew Research Center, 69 percent of Hungarians accept the theory of evolution and 21 percent of Hungarians deny human evolution. Kazakhstan According to the Pew Research Center, Kazakhstan has the highest acceptance of evolution among Muslim countries, with 79% of people in Kazakhstan accepting the theory of evolution. India According to a 2009 survey conducted by the British Council, 77% of people in India agree that enough scientific evidence exists to support evolution. Also, 85% of God-believing Indians who know about evolution agree that life on earth evolved over time as a result of natural selection. In the same 2009 survey, carried out among 10 major nations, the highest proportion that agreed that evolutionary theories alone should be taught in schools was in India, at 49%. In a survey conducted across 12 states in India, public acceptance of evolution stood at 68.5%. In 2023, NCERT, under its rationalization scheme, removed Darwin's theory of evolution from class 10 school textbooks; only students who opt for biology in class 11 will be taught Darwin's theory of evolution. Indonesia A 2009 survey conducted by McGill researchers and their international collaborators found that 85% of Indonesian high school students agreed with the statement, "Millions of fossils show that life has existed for billions of years and changed over time." Israel The theory of evolution is a 'hard sell' in schools in Israel. More than half of Israeli Jews accept human evolution, while more than 40% deny it and claim that humans have always existed in their present form. Latvia According to the Pew Research Center, 66 percent of Latvians accept the theory of evolution while 25 percent of Latvians deny evolution and claim that "humans have always existed in their present form." Lithuania According to the Pew Research Center, 54 percent of Lithuanians accept the theory of evolution while 34 percent of them deny evolution and claim that "humans have always existed in their present form." Mexico According to a 2014 poll by the Pew Research Center, 64% of people in Mexico believe "humans and other living things evolved over time" while 32% believe they have "always existed in the present form." Moldova According to the Pew Research Center, 49 percent of Moldovans accept the theory of evolution while 42 percent of Moldovans deny the theory of evolution and claim that "humans have always existed in the present form."
Nicaragua According to a 2014 poll by the Pew Research Center, 47% of people in Nicaragua believe "humans and other living things evolved over time" while 48% believe they have "always existed in the present form." Norway According to a 2008 Norstat poll for NRK, 59% of the Norwegian population fully accept evolution, 24% somewhat agree with the theory, 4% somewhat disagree, and 8% do not accept evolution; 4% did not know. Pakistan A 2009 survey conducted by McGill researchers and their international collaborators found that 86% of Pakistani high school students agreed with the statement, "Millions of fossils show that life has existed for billions of years and changed over time." Panama According to a 2014 poll by the Pew Research Center, 61% of people in Panama believe "humans and other living things evolved over time" while 34% believe they have "always existed in the present form." Paraguay According to a 2014 poll by the Pew Research Center, 59% of people in Paraguay believe "humans and other living things evolved over time" while 30% believe they have "always existed in the present form." Peru According to a 2014 poll by the Pew Research Center, 51% of people in Peru believe "humans and other living things evolved over time" while 39% believe they have "always existed in the present form." Poland According to the Pew Research Center, 61 percent of Poles accept the theory of evolution while 23 percent of Poles deny the theory of evolution and claim that "humans have always existed in their present form." Russia According to the Pew Research Center, 65 percent of Russians accept the theory of evolution while 26 percent of Russians deny the theory of evolution and claim that "humans have always existed in their present form." Serbia According to the Pew Research Center, 61 percent of Serbians accept the theory of evolution while 29 percent of respondents in Serbia deny the theory of evolution and claim that "humans have always existed in their present form." Turkey In 2017, the government removed the theory of evolution from the school curriculum. United Kingdom A 2006 United Kingdom poll on the "origin and development of life" asked participants to choose between three different explanations for the origin of life: 22% chose (Young Earth) creationism, 17% opted for intelligent design ("certain features of living things are best explained by the intervention of a supernatural being, e.g. God"), 48% selected evolution theory (with a divine role explicitly excluded), and the rest did not know. A 2009 poll found that only 38% of Britons believe God played no role in evolution. In a 2012 poll, 69% of Britons said they believe that humans evolved from less advanced life forms, while 17% believe that God created human beings in their present forms within the last 10,000 years. United States United States courts have ruled in favor of teaching evolution in science classrooms, and against teaching creationism, in numerous cases such as Edwards v. Aguillard, Hendren v. Campbell, McLean v. Arkansas and Kitzmiller v. Dover Area School District. A prominent organization in the United States behind the intelligent design movement is the Discovery Institute, which, through its Center for Science and Culture, conducts a number of public relations and lobbying campaigns aimed at influencing the public and policy makers in order to advance its position in academia.
The Discovery Institute claims that because there is a significant lack of public support for evolution, public schools should, as its campaign states, "Teach the Controversy", although there is no controversy over the validity of evolution within the scientific community. The US has one of the highest levels of public belief in biblical or other religious accounts of the origins of life on Earth among industrialized countries. However, according to the Pew Research Center, 62 percent of adults in the United States accept human evolution while 34 percent of adults believe that humans have always existed in their present form. The poll involved over 35,000 adults in the United States. Acceptance of evolution varies by state: Vermont has the highest acceptance of human evolution of any US state, at 79%, while Mississippi, at 43%, has the lowest. According to a 2021 study, in 2019, 54% of Americans agreed with the statement: "Human beings, as we know them today, developed from earlier species of animals". A 2019 Gallup creationism survey found that 40% of adults in the United States were inclined to the belief that "God created humans in their present form at one time within the last 10,000 years" when asked for their beliefs regarding the origin and development of human beings. 22% believed that "human beings have developed over millions of years from less advanced forms of life, but God had no part in this process", despite 49% of respondents indicating they believed in evolution. Belief in creationism is inversely correlated with education; only 22% of those with post-graduate degrees believe in strict creationism. The level of support for strict creationism could be even lower when poll results are adjusted after comparison with other polls with questions that more specifically account for uncertainty and ambivalence. A 2000 poll for People for the American Way found that 70% of the American public thought that evolution is compatible with a belief in God. According to a 2021 study, in 2019, 34% of conservative Republicans and 83% of liberal Democrats accepted evolution. A 2005 Pew Research Center poll found that 70% of evangelical Christians believed that living organisms have not changed since their creation, but only 31% of Catholics and 32% of mainline Protestants shared this opinion. A 2005 Harris Poll estimated that 63% of liberals and 37% of conservatives agreed that humans and other primates have a common ancestry. Ukraine According to the Pew Research Center, 54 percent of respondents in Ukraine accept the theory of evolution while 34 percent deny the theory of evolution and claim that "humans have always existed in their present form." Uruguay According to a 2014 poll produced by the Pew Research Center, 74% of people in Uruguay believe "humans and other living things evolved over time" while 20% believe they have "always existed in the present form." Venezuela According to a 2014 poll by the Pew Research Center, 63% of people in Venezuela believe "humans and other living things evolved over time" while 33% believe they have "always existed in the present form." Other support for evolution There are also many educational organizations that have issued statements in support of the theory of evolution. Repeatedly, creationists and intelligent design advocates have lost suits in US courts.
Here is a list of important court cases in which creationists have suffered setbacks: 1968 Epperson v. Arkansas (United States Supreme Court); 1981 Segraves v. State of California (Supreme Court of California); 1982 McLean v. Arkansas Board of Education (U.S. Federal Court); 1987 Edwards v. Aguillard (United States Supreme Court); 1990 Webster v. New Lenox School District (Seventh Circuit Court of Appeals); 1994 Peloza v. Capistrano Unified School District (Ninth Circuit Court of Appeals); 1997 Freiler v. Tangipahoa Parish Board of Education (United States District Court for the Eastern District of Louisiana); 2000 Rodney LeVake v. Independent School District 656, et al. (District Court for the Third Judicial District of the State of Minnesota); 2005 Kitzmiller v. Dover Area School District (US Federal Court); 2006 Hurst v. Newman (US District Court, Eastern District of California). Trends The level of assent that evolution garners has changed with time, and trends in its acceptance can be estimated. Early impact of Darwin's theory The level of support for evolution in different communities has varied with time and social context. Darwin's theory had convinced almost every naturalist within 20 years of its publication in 1858, and was making serious inroads with the public and the more liberal clergy. Acceptance had spread so far that by 1880, one American religious weekly publication estimated that "perhaps a quarter, perhaps a half of the educated ministers in our leading Evangelical denominations" thought "that the story of the creation and fall of man, told in Genesis, is no more the record of actual occurrences than is the parable of the Prodigal Son." By the late 19th century, many of the most conservative Christians accepted an ancient Earth, and life on Earth before Eden. Victorian-era creationists were more akin to people who subscribe to theistic evolution today. Even fervent anti-evolutionist Scopes Trial prosecutor William Jennings Bryan interpreted the "days" of Genesis as ages of the Earth, and acknowledged that biochemical evolution took place, drawing the line only at the story of Adam and Eve's creation. Prominent pre-World War II creationist Harry Rimmer allowed an Old Earth by slipping millions of years into putative gaps in the Genesis account, and claimed that the Noachian Flood was only a local phenomenon. In the early decades of the 20th century, George McCready Price and a tiny group of Seventh-day Adventist followers were among the very few believers in a Young Earth and a worldwide flood, which Price championed in his "new catastrophism" theories. It was not until the publication of John C. Whitcomb, Jr., and Henry M. Morris's book The Genesis Flood in 1961 that Price's idea was revived. In the last few decades, many creationists have adopted Price's beliefs, becoming progressively more strict biblical literalists. Recent public beliefs In a 1991 Gallup poll, 47% of the US population and 25% of college graduates agreed with the statement, "God created man pretty much in his present form at one time within the last 10,000 years." Fourteen years later, in 2005, Gallup found that 53% of Americans expressed the belief that "God created human beings in their present form exactly the way the Bible describes it." About two-thirds (65.5%) of those surveyed thought that creationism was definitely or probably true. In 2005 a Newsweek poll found that 80 percent of the American public thought that "God created the universe,"
and the Pew Research Center reported that "nearly two-thirds of Americans say that creationism should be taught alongside evolution in public schools." Ronald Numbers commented on this: "Most surprising of all was the discovery that large numbers of high-school biology teachers — from 30% in Illinois and 38% in Ohio to a whopping 69% in Kentucky — supported the teaching of creationism." The National Center for Science Education reports that from 1985 to 2005, the number of Americans unsure about evolution increased from 7% to 21%, while the number rejecting evolution declined from 48% to 39%. Jon Miller of Michigan State University has found in his polls that the number of Americans who accept evolution declined from 45% to 40% between 1985 and 2005. In light of these somewhat contradictory results, it is difficult to know for sure what is happening to public opinion on evolution in the US; neither side appears to be making unequivocal progress, but uncertainty about the issue does appear to be increasing. A Pew Research Center poll in 2018 found that the way the question is asked changes the results: among U.S. adults, the share who believe humans have evolved over time varies from 68% to 81% depending on the question format. Anecdotal evidence suggests that creationism is gaining ground in the UK as well. One report in 2006 stated that UK students are increasingly arriving ill-prepared to participate in medical studies or other advanced education. Recent scientific trends The level of support for creationism among relevant scientists is minimal. In 2007 the Discovery Institute reported that about 600 scientists had signed its A Scientific Dissent from Darwinism list, up from 100 in 2001. The actual statement of the Scientific Dissent from Darwinism is a relatively mild one that expresses skepticism about the ability of 'Darwinism' to explain all features of life (and is in line with the falsifiability required of scientific theories); it does not in any way represent an absolute denial or rejection of evolution. By contrast, a tongue-in-cheek response known as Project Steve, a list restricted to scientists with the name Steve (or variations of it) who agree that evolution is "a vital, well-supported, unifying principle of the biological sciences," has 1,491 signatories. People with these names make up approximately 1% of the total U.S. population. The United States National Science Foundation statistics on US yearly science graduates demonstrate that from 1987 to 2001, the number of biological science graduates increased by 59% while the number of geological science graduates decreased by 20.5%. As a result, the number of geology graduates in 2001 was only 5.4% of the number of graduates in the biological sciences, whereas it was 10.7% of the number of biological science graduates in 1987; a quick consistency check of these figures appears below. The Science Resources Statistics Division of the National Science Foundation estimated that in 1999, there were 955,300 biological scientists in the US (about one-third of whom held graduate degrees), along with 152,800 earth scientists. A large fraction of the Darwin Dissenters have specialties unrelated to research on evolution; three-quarters of the dissenters are not biologists. In 2006, the dissenter list was expanded to include non-US scientists. Some researchers are attempting to understand the factors that affect people's acceptance of evolution.
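As a brief aside, the NSF graduate figures quoted above can be sanity-checked for internal consistency; the following back-of-envelope sketch is our illustration, not a calculation from the source.

```python
# Back-of-envelope consistency check of the NSF graduate figures quoted
# above. The 1987 biology count is normalized to 1.0; only the quoted
# percentages are used, so this is an illustration, not NSF data.
bio_1987 = 1.0
geo_1987 = 0.107 * bio_1987        # geology was 10.7% of biology in 1987

bio_2001 = bio_1987 * 1.59         # biological graduates up 59% by 2001
geo_2001 = geo_1987 * (1 - 0.205)  # geological graduates down 20.5%

print(f"geology as a share of biology in 2001: {geo_2001 / bio_2001:.1%}")
# -> 5.3%, matching the quoted 5.4% to within rounding
```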
Such studies have yielded inconsistent results, explains David Haury, associate professor of education at Ohio State University. A study he performed found that people are likely to reject evolution if they have feelings of uncertainty, regardless of how well they understand evolutionary theory. Haury believes that teachers need to show students that their intuitive feelings may be misleading (for example, using the Wason selection task), and thus that they should exercise caution when relying on intuition to judge the rational merits of ideas. See also: History of creationism; List of scientific societies rejecting intelligent design.
Level of support for evolution
[ "Biology" ]
7,430
[ "Creationism", "Biology theories", "Obsolete biology theories" ]
8,747,251
https://en.wikipedia.org/wiki/Bentazepam
Bentazepam (also known as Thiadipone or Tiadipona) is a thienodiazepine, a benzodiazepine analog. It possesses anxiolytic, anticonvulsant, sedative, and skeletal muscle relaxant properties. Peak plasma concentrations are achieved around 2.5 hours after oral administration, and the elimination half-life is approximately 2 to 4 hours. Bentazepam is effective as an anxiolytic. A severe benzodiazepine overdose with bentazepam may result in coma and respiratory failure. Adverse effects include dry mouth, somnolence, asthenia, dyspepsia, constipation, and nausea; drug-induced lymphocytic colitis has also been associated with bentazepam. Severe liver damage and hepatitis have also been associated with bentazepam. Whilst liver failure from bentazepam is considered to be rare, liver function monitoring has been recommended for all patients taking bentazepam. See also: List of benzodiazepines.
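To make the pharmacokinetic figures concrete, the sketch below assumes simple first-order elimination (a standard model that the article itself does not specify) and uses the quoted 2 to 4 hour half-life range; the 8-hour time point is an arbitrary illustration.

```python
# A minimal sketch assuming simple first-order elimination, using the
# article's quoted half-life range of roughly 2 to 4 hours.

def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """Fraction of the peak plasma concentration left after t_hours."""
    return 0.5 ** (t_hours / half_life_hours)

for t_half in (2.0, 4.0):
    left = fraction_remaining(8.0, t_half)  # 8 h is an arbitrary time point
    print(f"half-life {t_half} h: {left:.1%} of peak remains after 8 h")
# half-life 2 h -> 6.2% remains; half-life 4 h -> 25.0% remains
```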
Bentazepam
[ "Biology" ]
248
[ "Hypnotics", "Behavior", "Sleep" ]
11,882,145
https://en.wikipedia.org/wiki/Hfq%20protein
The Hfq protein (also known as HF-I protein) encoded by the hfq gene was discovered in 1968 as an Escherichia coli host factor that was essential for replication of the bacteriophage Qβ. It is now clear that Hfq is an abundant bacterial RNA-binding protein with many important physiological roles that are usually mediated by interactions with Hfq-binding sRNAs. In E. coli, Hfq mutants show multiple stress response related phenotypes. The Hfq protein is now known to regulate the translation of two major stress transcription factors, σS (RpoS) and σE (RpoE), in Enterobacteria. It also regulates sRNA in Vibrio cholerae, a specific example being MicX sRNA. In Salmonella typhimurium, Hfq has been shown to be an essential virulence factor, as its deletion attenuates the ability of S. typhimurium to invade epithelial cells, secrete virulence factors, or survive in cultured macrophages. In Salmonella, Hfq deletion mutants are also non-motile and exhibit chronic activation of the σE-mediated envelope stress response. A CLIP-Seq study of Hfq in Salmonella revealed 640 binding sites across the Salmonella transcriptome; the majority of these binding sites were found in mRNAs and sRNAs. In Photorhabdus luminescens, deletion of the hfq gene causes loss of secondary metabolite production. Hfq mediates its pleiotropic effects through several mechanisms. It interacts with regulatory sRNAs and facilitates their antisense interaction with their targets. It also acts independently to modulate mRNA decay (directing mRNA transcripts for degradation) and acts as a repressor of mRNA translation. Genomic SELEX has been used to show that Hfq-binding RNAs are enriched in the sequence motif 5'-AAYAAYAA-3'. Hfq was also found to act on ribosome biogenesis in E. coli, specifically on the 30S subunit: Hfq mutants accumulate higher levels of immature small subunits and show decreased translation accuracy. This function on the bacterial ribosome could also account for the pleiotropic effect typical of Hfq deletion strains. Electron microscopy imaging reveals that, in addition to the expected localization of this protein in cytoplasmic regions and in the nucleoid, an important fraction of Hfq is located in close proximity to the membrane. Crystallographic structures Six crystallographic structures of four different Hfq proteins have been published so far: E. coli Hfq, P. aeruginosa Hfq in a low salt condition and a high salt condition, Hfq from S. aureus with and without bound RNA, and the Hfq(-like) protein from M. jannaschii. All six structures confirm the hexameric ring shape of the Hfq protein complex. See also: RNA-OUT.
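As an illustration of that motif (ours, not from the SELEX study), the sketch below scans an RNA string for 5'-AAYAAYAA-3', where the IUPAC symbol Y denotes a pyrimidine (C or U in RNA); the example sequence is a toy, not a real Hfq target.

```python
import re

# IUPAC Y = pyrimidine (C or U in RNA); lookahead lets overlapping hits count.
MOTIF = re.compile(r"(?=AA[CU]AA[CU]AA)")

def find_motif(rna: str) -> list[int]:
    """Return 0-based start positions of 5'-AAYAAYAA-3' in an RNA string."""
    return [m.start() for m in MOTIF.finditer(rna.upper())]

seq = "GGAACAAUAACCAAUAACAAGG"  # toy sequence, not a real Hfq target
print(find_motif(seq))          # -> [2, 12]
```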
Hfq protein
[ "Chemistry" ]
804
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
11,882,154
https://en.wikipedia.org/wiki/CD4%20immunoadhesin
CD4 immunoadhesin is a recombinant fusion protein consisting of a combination of CD4 and the fragment crystallizable (Fc) region of an immunoglobulin. It belongs to the immunoglobulin (Ig) gene family. CD4 is a surface receptor for human immunodeficiency virus (HIV). The CD4 immunoadhesin molecular fusion allows the protein to possess key functions from each independent subunit. The CD4-specific properties include the gp120-binding and HIV-blocking capabilities. Properties specific to immunoglobulin are the long plasma half-life and Fc receptor binding. These properties mean that the protein has potential to be used in AIDS therapy as of 2017. Specifically, CD4 immunoadhesin plays a role in antibody-dependent cell-mediated cytotoxicity (ADCC) towards HIV-infected cells. While natural anti-gp120 antibodies exhibit a response towards uninfected CD4-expressing cells that have soluble gp120 bound to the CD4 on the cell surface, CD4 immunoadhesin will not exhibit such a response. One of the most relevant of these therapeutic possibilities is its ability to cross the placenta. History and significance CD4 immunoadhesin was first developed in the mid-1990s as a potential therapeutic agent and treatment for HIV/AIDS. The protein is a fusion of the extracellular domain of the CD4 receptor and the Fc domain of human immunoglobulin G (IgG), the most abundant antibody isotype in the human body. The Fc domain of IgG contributes several important properties to the fusion protein, including increased half-life in the bloodstream, enhanced binding to Fc receptors on immune cells, and the ability to activate complement. The development of CD4 immunoadhesin stems from the observation that the CD4 receptor plays a critical role in the entry of HIV into human cells. The CD4 receptor is used as a primary receptor by HIV to attach to the surface of target cells. HIV then uses a co-receptor, either CCR5 or CXCR4, to facilitate entry into the cell. The ability of CD4 immunoadhesin to block the interaction between the CD4 receptor and HIV was intended to prevent HIV from entering and infecting human cells. CD4 immunoadhesin has been extensively studied in preclinical and clinical trials as a potential treatment for HIV/AIDS. In addition to its antiviral activity, CD4 immunoadhesin has also been investigated for its potential immunomodulatory effects. For example, the fusion protein has been shown to induce the production of cytokines, such as interleukin-2 (IL-2) and interferon-gamma (IFN-γ), which are important for the activation and proliferation of immune cells. Despite its potential as a therapeutic agent, the development of CD4 immunoadhesin has faced several challenges. One major obstacle is the emergence of drug-resistant strains of HIV, which can limit the effectiveness of CD4 immunoadhesin in certain patients. Additionally, the need for frequent dosing and the potential for immune responses against the fusion protein have also limited the clinical application of CD4 immunoadhesin. Nevertheless, knowledge of the function of CD4 immunoadhesin has contributed to increased understanding of the biology of HIV and the mechanisms of viral entry. The protein has also inspired the development of other immunoadhesin molecules, such as CD4-IgG2 and CD4-mimetic compounds, which are being investigated as potential therapies for HIV/AIDS.
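To make the blocking idea concrete, here is a toy equilibrium sketch (our illustration under simplifying assumptions, not data from any trial): gp120 is assumed to partition between cell-surface CD4 and the soluble CD4-Ig decoy, with both ligands in excess; all concentrations and dissociation constants below are invented numbers.

```python
# Toy competitive-binding model: the fraction of gp120 bound to
# cell-surface CD4 falls as the soluble CD4-Ig decoy concentration rises.
# All numbers below are hypothetical illustrations, not measured values.

def receptor_bound_fraction(r_nM, kd_r_nM, decoy_nM, kd_d_nM):
    """Equilibrium fraction of gp120 occupying cell-surface CD4 when a
    soluble decoy competes for the same gp120 binding site."""
    return (r_nM / kd_r_nM) / (1 + r_nM / kd_r_nM + decoy_nM / kd_d_nM)

KD = 5.0  # nM; same affinity assumed for receptor and decoy (hypothetical)
for decoy in (0.0, 5.0, 50.0, 500.0):
    f = receptor_bound_fraction(r_nM=5.0, kd_r_nM=KD, decoy_nM=decoy, kd_d_nM=KD)
    print(f"decoy {decoy:>5.0f} nM -> {f:5.1%} of gp120 on cell-surface CD4")
# Occupancy falls from 50% with no decoy to about 1% at 500 nM decoy.
```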
Structure and function CD4 immunoadhesin is a bifunctional protein that has the ability to block HIV infection, inhibit autoreactive T-cell activation, and potentially modulate immune responses. Its structure, which consists of the extracellular domain of CD4 and the Fc region of IgG1, allows for soluble circulation throughout the body. The extracellular domain of CD4 contains four immunoglobulin-like domains (D1-D4), which are responsible for binding to the major histocompatibility complex (MHC) class II molecules on antigen-presenting cells. The Fc region of IgG1 is responsible for mediating effector functions such as antibody-dependent cell-mediated cytotoxicity (ADCC) and complement activation. CD4-Ig works by mimicking the binding of CD4 to HIV, thereby preventing the virus from infecting T-helper cells. HIV infects T-helper cells by binding to the CD4 receptor and the co-receptor CCR5 or CXCR4. CD4-Ig binds to the viral envelope glycoprotein gp120, which is responsible for HIV binding to CD4. By binding to gp120, CD4-Ig prevents the virus from binding to the CD4 receptor on T-helper cells, thus preventing infection. CD4-Ig has also been investigated as a potential treatment for other diseases that involve immune dysregulation, such as multiple sclerosis and rheumatoid arthritis. In these diseases, CD4-Ig may work by inhibiting the activation of autoreactive T-cells. CD4-Ig binds to MHC class II molecules on antigen-presenting cells, thereby preventing the activation of T-helper cells that are specific for self-antigens. In addition to its role in blocking HIV infection and inhibiting autoreactive T-cell activation, CD4-Ig may also have immunomodulatory effects. CD4 is known to be involved in the regulation of immune responses, and CD4-Ig may therefore have the ability to modulate immune responses in a way that is beneficial for the treatment of various diseases. CD4 immunoadhesin functions by blocking the interaction between the HIV envelope glycoprotein (gp120) and the CD4 receptor on the surface of CD4-positive cells. By binding to gp120, CD4 immunoadhesin prevents the virus from attaching to and entering host cells, thus inhibiting the spread of HIV infection. CD4 immunoadhesin has been shown to be effective in vitro and in animal models of HIV infection, and has been used in clinical trials as a potential treatment for HIV/AIDS. Clinical applications CD4 immunoadhesin has been studied extensively in preclinical and clinical trials as a potential treatment for HIV/AIDS. In a phase I/II clinical trial, CD4 immunoadhesin was found to be safe and well-tolerated in HIV-positive patients, and was able to reduce viral load in some patients. However, the development of CD4 immunoadhesin as a therapeutic agent for HIV/AIDS has limitations, including the emergence of drug-resistant strains of HIV, the need for frequent dosing, and the potential for immune responses against the fusion protein. In a phase I/II clinical trial conducted by the National Institute of Allergy and Infectious Diseases (NIAID), 25 HIV-positive patients received intravenous infusions of CD4 immunoadhesin over a period of 12 weeks. The trial found that CD4 immunoadhesin was safe and well-tolerated in all patients, with no serious adverse events reported. Additionally, some patients showed a reduction in viral load, although the effect was not sustained after the end of the treatment period. Despite these results, the development of CD4 immunoadhesin as a therapeutic agent for HIV/AIDS has faced several difficulties. 
To address these challenges, researchers have explored various strategies to improve the efficacy and safety of CD4 immunoadhesin. For example, some studies have investigated the use of CD4 immunoadhesin in combination with other antiretroviral therapies to enhance the antiviral effect and reduce the risk of drug resistance. Other studies have focused on engineering CD4 immunoadhesin variants with improved pharmacokinetic properties and reduced immunogenicity. Future uses CD4 immunoadhesin has been explored in the treatment of various diseases, many of which are still being studied and developed. Here are some potential future uses of CD4 immunoadhesin: HIV/AIDS: CD4 immunoadhesin has been studied extensively for its potential use in the treatment of HIV/AIDS. It works by binding to the viral envelope protein and blocking the entry of the virus into CD4+ T cells, thereby inhibiting viral replication. A phase I/II clinical trial involving CD4 immunoadhesin showed promising results in reducing the viral load in HIV-infected patients. Further studies are underway to explore the efficacy of CD4 immunoadhesin as a therapeutic agent for HIV/AIDS. Autoimmune diseases: CD4 immunoadhesin has been investigated for its potential use in the treatment of autoimmune diseases such as rheumatoid arthritis, multiple sclerosis, and psoriasis. It acts by binding to the CD4 receptor on T cells and inhibiting the activation and proliferation of autoreactive T cells. Preclinical studies have shown that CD4 immunoadhesin can reduce disease severity and improve clinical outcomes in animal models of autoimmune diseases. Cancer: CD4 immunoadhesin has shown potential in the treatment of cancer, particularly in enhancing the immune response against cancer cells. It works by targeting the CD4 receptor on T cells and stimulating the production of cytokines and chemokines that can promote tumor cell death. CD4 immunoadhesin has been shown to be effective in preclinical studies of various types of cancer, including melanoma, breast cancer, and leukemia. Inflammatory diseases: CD4 immunoadhesin has been investigated for its potential use in the treatment of inflammatory diseases such as asthma and chronic obstructive pulmonary disease (COPD). It acts by binding to the CD4 receptor on T cells and reducing the release of pro-inflammatory cytokines and chemokines that cause inflammation in the lungs. Preclinical studies have shown that CD4 immunoadhesin can reduce inflammation and improve lung function in animal models of asthma and COPD.
CD4 immunoadhesin
[ "Biology" ]
2,268
[ "Immunology" ]
11,883,272
https://en.wikipedia.org/wiki/Heparin-binding%20EGF-like%20growth%20factor
Heparin-binding EGF-like growth factor (HB-EGF) is a member of the EGF family of proteins that in humans is encoded by the HBEGF gene. HB-EGF-like growth factor is synthesized as a membrane-anchored mitogenic and chemotactic glycoprotein. It is an epidermal growth factor produced by monocytes and macrophages, and is termed HB-EGF because of its affinity for heparin. It has been shown to play a role in wound healing, cardiac hypertrophy, and heart development and function. First identified in the conditioned media of human macrophage-like cells, HB-EGF is an 87-amino acid glycoprotein that displays highly regulated gene expression. Ectodomain shedding produces the soluble mature form of HB-EGF, which is mitogenic and chemotactic for smooth muscle cells and fibroblasts. The transmembrane form of HB-EGF is the unique receptor for diphtheria toxin and functions in juxtacrine signaling in cells. Both forms of HB-EGF participate in normal physiological processes and in pathological processes including tumor progression and metastasis, organ hyperplasia, and atherosclerotic disease. HB-EGF can bind two locations on cell surfaces: heparan sulfate proteoglycans and EGF receptors, mediating cell-to-cell interactions. Interactions Heparin-binding EGF-like growth factor has been shown to interact with NRD1, zinc finger and BTB domain-containing protein 16, and BAG1. The biological activities of HB-EGF with these partners influence cell cycle progression, molecular chaperone regulation, cell survival, cellular functions, adhesion, and mediation of cell migration. The NRD1 gene codes for the protein nardilysin, an HB-EGF modulator. Zinc finger and BTB domain-containing protein 16 and the BAG family molecular chaperone regulator function as co-chaperone proteins in processes involving HB-EGF. Role in cancer Recent studies indicate significant elevation of HB-EGF gene expression in a number of human cancers as well as in cancer-derived cell lines. Evidence indicates that HB-EGF plays a significant role in the development of malignant phenotypes, contributing to the metastatic and invasive behaviors of tumors. The proliferative and chemotactic effects of HB-EGF result from its influence on particular target cells, including fibroblasts, smooth muscle cells, and keratinocytes. For numerous cell types, such as breast and ovarian tumor cells, human epithelial cells, and keratinocytes, HB-EGF is a potent mitogen, and upregulation of HB-EGF has been observed in such specimens. Both in vivo and in vitro studies of tumor formation in cancer-derived cell lines indicate that expression of HB-EGF is essential for tumor development. As a result, studies using specific HB-EGF inhibitors and monoclonal antibodies against HB-EGF show the potential for the development of novel therapies for treating cancers by targeting HB-EGF expression. Role in cardiac development and vasculature HB-EGF binding and activation of EGF receptors plays a critical role during cardiac valve tissue development and the maintenance of normal heart function in adults. During valve tissue development, the interaction of HB-EGF with EGF receptors and heparan sulfate proteoglycans is essential to prevent malformation of valves due to enlargement. In the vascular system, areas of disturbed flow show upregulation of HB-EGF, promoting vascular lesions, atherogenesis, and hyperplasia of intimal tissue in vessels.
The flow-disturbance remodeling of vascular tissues due to HB-EGF expression contributes to aortic valve disease, peripheral vascular disease, and conduit stenosis. Role in wound healing HB-EGF is the predominant growth factor in the epithelialization required for cutaneous wound healing. The mitogenic and migratory effects of HB-EGF on keratinocytes and fibroblasts promote the dermal repair and angiogenesis necessary for wound healing, and HB-EGF is a major component of wound fluids. HB-EGF displays target cell specificity during the early stages of wound healing, when it is released by macrophages, monocytes, and keratinocytes. HB-EGF cell surface binding to heparan sulfate proteoglycans enhances its mitogenic capabilities, increasing the rate of skin wound healing, decreasing human skin graft healing times, and promoting rapid healing of ulcers, burns, and epidermal split-thickness wounds. Role in other physiological processes HB-EGF is recognized as an important component for the modulation of cell activity in various biological interactions. Found widely distributed in cerebral neurons and neuroglia, HB-EGF induced by brain hypoxia and/or ischemia subsequently stimulates neurogenesis. Interactions between uterine HB-EGF and epidermal growth factor receptors of blastocysts influence embryo-uterine interactions and implantation. Studies show HB-EGF protects intestinal stem cells and intestinal epithelial cells in necrotizing enterocolitis, a disease affecting premature newborns. Associated with a breakdown in gut barrier function, necrotizing enterocolitis may be mediated by HB-EGF effects on intestinal mucosa. HB-EGF expressed during skeletal muscle contraction facilitates peripheral glucose removal, glucose tolerance, and uptake. The upregulation of HB-EGF with exercise may explain the molecular basis for the decrease in metabolic disorders such as obesity and type 2 diabetes with regular exercise. References Further reading External links Growth factors Morphogens
Heparin-binding EGF-like growth factor
[ "Chemistry", "Biology" ]
1,227
[ "Growth factors", "Morphogens", "Induced stem cells", "Signal transduction" ]
11,884,960
https://en.wikipedia.org/wiki/Extensin
Extensins are a family of flexuous, rodlike, hydroxyproline-rich glycoproteins (HRGPs) of the plant cell wall. They are highly abundant proteins. There are around 20 extensins in Arabidopsis thaliana. They form crosslinked networks in the young cell wall. Typically they have two major diagnostic repetitive peptide motifs, one hydrophilic and the other hydrophobic, with potential for crosslinking. Extensins are thought to act as self-assembling amphiphiles essential for cell-wall assembly and growth by cell extension and expansion. The name "extensin" encapsulates the hypothesis that they are involved in cell extension. Hydrophilic motif This pentapeptide consists of serine (Ser) and four hydroxyprolines (Hyp): Ser-Hyp-Hyp-Hyp-Hyp. Hydroxyproline is unusual not only as a cyclic amino acid that restricts peptide flexibility but as an amino acid with no codon, being encoded as proline. Polypeptides targeted for secretion are subsequently hydroxylated by direct addition of molecular oxygen to proline at C-4. Extensin hydroxyproline is uniquely glycosylated with short chains of L-arabinose that further rigidify the protein and increase its hydrophilicity. Generally, the serine has a single galactose attached. Hydrophobic tyrosine crosslinking motif Two tyrosines separated by a single amino acid, typically valine or another tyrosine, form a short intra-molecular diphenylether crosslink. This can be crosslinked further by the enzyme extensin peroxidase to form an inter-molecular bridge between extensin molecules and thus form networks and sheets. References Further reading Kieliszewski M, Lamport DTA (1994) Extensin: repetitive motifs, functional sites, post-translational codes, and phylogeny. Plant Journal 5: 157–172. Plant proteins Structural proteins Glycoproteins
Extensin
[ "Chemistry" ]
431
[ "Glycoproteins", "Glycobiology" ]
11,884,979
https://en.wikipedia.org/wiki/National%20boundary%20delimitation
In international law, national boundary delimitation (also known as national delimitation and boundary delimitation) is the process of legally establishing the outer limits ("borders") of a state within which full territorial or functional sovereignty is exercised. National delimitation involves negotiations surrounding the modification of a state's borders and often takes place as part of the negotiations seeking to end a conflict over resource control, popular loyalties, or political interests. When applied to maritime boundaries, the process is called maritime delimitation, a form of national delimitation addressing disputes between nations over maritime claims. An example is found at Maritime Boundary Delimitation in the Gulf of Tonkin. In international politics, the Division for Ocean Affairs and the Law of the Sea (Office of Legal Affairs, United Nations Secretariat) is responsible for the collection of all claims to territorial waters. See also National delimitation in the Soviet Union on the creation of territorial units based on ethnicity in the USSR; Nation-building on the processes of creating or strengthening national identity within national territorial limits. Sugauli Treaty Maritime delimitation between Romania and Ukraine Georges Bank List of maritime boundary treaties References Borders
National boundary delimitation
[ "Physics" ]
259
[ "Spacetime", "Borders", "Space" ]
11,885,085
https://en.wikipedia.org/wiki/Mark%20McMenamin
Mark A. S. McMenamin (born c. 1957) is an American paleontologist and professor of geology at Mount Holyoke College. He has contributed to the study of the Cambrian explosion and the Ediacaran biota. He is the author of several books, most recently Deep Time Analysis (2018) and Dynamic Paleontology (2016). His earlier works include The Garden of Ediacara: Discovering the Earliest Complex Life (1998), one of the only popular accounts of research on the Ediacaran biota, and Science 101: Geology (2007). He is credited with co-naming several geological formations in Mexico, describing several new fossil genera and species, and naming the Precambrian supercontinent Rodinia and the superocean Mirovia. The Cambrian archeocyathid species Markocyathus clementensis was named in his honor in 1989. Early life and career McMenamin was born in Oregon and earned his B.S. at Stanford University in 1979 and his PhD at the University of California, Santa Barbara, in 1984. In 1980, while at Santa Barbara, he met his future wife, Dianna Schulte McMenamin, also a paleontology graduate student, with whom he would co-author several publications. He joined the staff at Mount Holyoke College in 1984. Research and theories McMenamin's work on the paleoecology of the Cambrian explosion controversially argued that the tiny planktonic trilobites belonging to the Agnostida may have had a predatory lifestyle. McMenamin's research on the Phoenician world map helped to inspire Clive Cussler and Paul Kemprecos's 2007 novel The Navigator, and his Garden of Ediacara theory helped to inspire Greg Bear's novel Vitals. Origins of complex life In 1995 McMenamin led a field expedition to Sonora, Mexico, that discovered fossils (550-560 million years old) which McMenamin argued belonged to a diverse community of early animals and Ediacaran biota. The paper was published in the Proceedings of the National Academy of Sciences of the United States of America where it was reviewed by Ediacaran expert James G. Gehling. In 2011, McMenamin reported the discovery of the oldest known adult animal fossils, Proterozoic chitons from the Clemente Formation, northwestern Sonora, Mexico. However, when he first reported these structures, other researchers questioned whether they were biogenic fossils at all, since they would predate the majority of the Ediacaran biota by at least 50 million years. Further up in this same stratigraphic sequence, McMenamin also discovered and named the early shelly fossil Sinotubulites cienegensis, a fossil that allowed the first confident Proterozoic biostratigraphic correlation between Asia and the Americas. In Lower Cambrian strata higher in the stratigraphic sequence, McMenamin also discovered important stem group brachiopods belonging to the genus Mickwitzia. During a Mount Holyoke College field trip to Death Valley, California, McMenamin and his co-authors found evidence indicating that the Proterozoic shelly fossil Qinella survived the Proterozoic-Cambrian boundary. In 2012 McMenamin proposed that the enigmatic Cambrian trace fossil Paleodictyon was the nest of an unknown animal, a hypothesis which, if supported, may represent the earliest fossil evidence of parental behavior, surpassing previous findings by 200 million years. In his 2019 article 'Cambrian Chordates and Vetulicolians', McMenamin described Shenzianyuloma yunnanense, a new genus and species of Vetulicolia interpreted as bearing myotome cones, a notochord, and gut diverticula in its posterior section.
Hypersea In an attempt to explain the unprecedented and rapid spread of vegetation over dry land surfaces during the middle Paleozoic, Mark and Dianna McMenamin proposed the Hypersea Theory. Their Hypersea is a geophysiological entity consisting of eukaryotic organisms on land and their symbionts. By means of a process known as hypermarine upwelling, the expansion of Hypersea led to a dramatic increase in global species diversity and a one hundred-fold increase in global biomass. Mark McMenamin's Hypertopia Option has been called one of only two "means by which planetary temperature can be stabilized." Critique of Neodarwinism Mark McMenamin has repeatedly criticized conventional Neodarwinian theory as inadequate to the task of explaining the evolutionary process. Joining with Lynn Margulis and the Russian symbiogeneticists, McMenamin has argued that symbiogenesis theory is important as one means of addressing the gap in our understanding of macroevolutionary change in conventional Neodarwinian terms. Phoenician coins In 1996, McMenamin proposed that Phoenician sailors discovered the New World c. 350 BC. The Phoenician state of Carthage minted gold staters in 350 BC bearing a pattern in the reverse exergue of the coins, which McMenamin interpreted as a map of the Mediterranean with the Americas shown to the west across the Atlantic. McMenamin later demonstrated that other (base metal) coins found in America were modern forgeries. Triassic kraken Mark McMenamin and Dianna Schulte McMenamin argued that a formation of multiple ichthyosaur fossils (belonging to the genus Shonisaurus) placed together at Berlin–Ichthyosaur State Park may represent evidence of a gigantic cephalopod or Triassic kraken that killed the ichthyosaurs and intentionally arranged their bones in the unusual pattern seen at the site. Opponents have challenged the theory as too far-fetched to be credible. PZ Myers believes that a much simpler explanation is that the rows of vertebral discs may be a result of the ichthyosaurs having fallen to one side or the other after death and rotting in that position, while Ryosuke Motani, a paleontologist at the University of California, Davis, has alternately proposed that the bones may have been moved together by ocean currents because of their circular shape. McMenamin has dismissed both of these concerns as not being in accord with either the sequence of bone placement or the hydrodynamics of the site. Mark and Dianna McMenamin presented new evidence favoring the existence of the hypothesized Triassic kraken on October 31, 2013 at the Geological Society of America annual meeting in Denver, Colorado. Paleontologist David Fastovsky critiqued McMenamin's argument, saying that the fossil fragment used as evidence was too small to determine its origin and that the argument about currents did not take into account the lack of knowledge about currents at the time and what would be needed to move the vertebrae. Fastovsky stated that the most likely scenario was one in which, once the tendons and ligaments holding the vertebrae together are gone, the vertebral column "sort of starts to fall over almost like a row of dominoes" with the most likely configuration for that to be the assemblage found. Adolf Seilacher has noted that this ichthyosaur bone arrangement "has never been observed at other localities". In 2023, McMenamin described a fossil that he interpreted as the upper beak rostrum of a large cephalopod, offering an estimate of the animal's total length.
Based on the morphology of the fossil, McMenamin rejected the previous interpretation of the fossil as part of the hinge of a ramonalinid clam. Filmography Books References External links Mount Holyoke College Faculty Profile 1950s births American paleontologists Educators from Oregon Living people Mount Holyoke College faculty Non-Darwinian evolution University of California, Santa Barbara alumni Stanford University alumni Symbiogenesis researchers
Mark McMenamin
[ "Biology" ]
1,630
[ "Non-Darwinian evolution", "Biology theories" ]
11,885,652
https://en.wikipedia.org/wiki/Technical%20debt
In software development and other information technology fields, technical debt (also known as design debt or code debt) is the implied cost of future reworking because a solution prioritizes expedience over long-term design. Analogous with monetary debt, if technical debt is not repaid, it can accumulate "interest", making it harder to implement changes. Unaddressed technical debt increases software entropy and the cost of further rework. Similarly to monetary debt, technical debt is not necessarily a bad thing, and sometimes (e.g. as a proof-of-concept) is required to move projects forward. On the other hand, some experts claim that the "technical debt" metaphor tends to minimize the ramifications, which results in insufficient prioritization of the necessary work to correct it. As a change is started on a codebase, there is often the need to make other coordinated changes in other parts of the codebase or documentation. Changes required that are not completed are considered debt, and until paid, will incur interest on top of interest, making it cumbersome to build a project. Although the term is primarily used in software development, it can also be applied to other professions. In a Dagstuhl seminar held in 2016, technical debt was defined by academic and industrial experts of the topic as follows: "In software-intensive systems, technical debt is a collection of design or implementation constructs that are expedient in the short term, but set up a technical context that can make future changes more costly or impossible. Technical debt presents an actual or contingent liability whose impact is limited to internal system qualities, primarily maintainability and evolvability." Assumptions Technical debt posits that an expedient design essentially reduces expense in the present, but causes extra expense in the future. This premise makes assumptions about the future: That the product survives long enough to actually incur the future costs That future events do not make the "long-term" design obsolete just as soon as the expedient design That future advancements do not make reworking less expensive than present assumptions Since the future is uncertain, it is possible that a perceived technical debt today may in fact turn out to be a savings in the future. Although the debt scenario is considered more likely, the uncertainty further complicates design decisions. Also, the calculation of technical debt typically considers the cost of employee work time, but a complete assessment should include other costs incurred or deferred by the design decision, such as training, licensing, tools, services, hardware, opportunity cost, etc. Causes Common causes of technical debt include: Ongoing development, where a long series of project enhancements over time renders old solutions sub-optimal. Insufficient up-front definition, where requirements are still being defined during development and development starts before any design takes place; this saves time initially but often necessitates rework later. Business pressures, where the business pushes to release something sooner, before the necessary changes are complete, building up technical debt involving those uncompleted changes. Lack of process or understanding, where businesses are blind to the concept of technical debt and make decisions without considering the implications. Tightly coupled components, where functions are not modular and the software is not flexible enough to adapt to changes in business needs.
Lack of a test suite, which encourages quick and risky band-aid bug fixes. Lack of software documentation, where code is created without supporting documentation. The work to create documentation represents debt. Lack of collaboration, where knowledge isn't shared around the organization and business efficiency suffers, or junior developers are not properly mentored. Parallel development on multiple branches accrues technical debt because of the work required to merge the changes into a single source base. The more changes done in isolation, the more debt. Deferred refactoring: as the requirements for a project evolve, it may become clear that parts of the code have become inefficient or difficult to edit and must be refactored in order to support future requirements. The longer refactoring is delayed, and the more code is added, the bigger the debt. Lack of alignment to standards, where industry standard features, frameworks, and technologies are ignored. Eventually integration with standards will come, and doing so sooner will cost less (similar to "deferred refactoring"). Lack of knowledge, when the developer doesn't know how to write elegant code. Lack of ownership, when outsourced software efforts result in in-house engineering being required to refactor or rewrite outsourced code. Poor technological leadership, where poorly thought out commands are handed down the chain of command. Last-minute specification changes, which have the potential to percolate throughout a project but leave insufficient time or budget to document and test the changes. Service or repay the technical debt Kenny Rubin uses the following status categories: Happened-upon technical debt—debt that the development team was unaware existed until it was exposed during the normal course of performing work on the product. For example, the team is adding a new feature to the product and in doing so it realizes that a work-around had been built into the code years before by someone who has long since departed. Known technical debt—debt that is known to the development team and has been made visible using one of many approaches. Targeted technical debt—debt that is known and has been targeted for servicing by the development team. Consequences "Interest payments" are caused by both the necessary local maintenance and the absence of maintenance by other users of the project. Ongoing development in the upstream project can increase the cost of "paying off the debt" in the future. One pays off the debt by simply completing the uncompleted work. The buildup of technical debt is a major cause for projects to miss deadlines. It is difficult to estimate exactly how much work is necessary to pay off the debt. For each change that is initiated, an uncertain amount of uncompleted work is committed to the project. The deadline is missed when the project realizes that there is more uncompleted work (debt) than there is time to complete it in. To have predictable release schedules, a development team should limit the amount of work in progress in order to keep the amount of uncompleted work (or debt) small at all times. If enough work is completed on a project to not present a barrier to submission, then the project will be released while still carrying a substantial amount of technical debt.
If this software reaches production, then the risks of implementing any future refactors which might address the technical debt increase dramatically. Modifying production code carries the risk of outages, actual financial losses and possibly legal repercussions if contracts involve service-level agreements (SLA). For this reason, carrying technical debt into production can be viewed almost as an increase in the interest rate, one that decreases only when deployments are turned down and retired. While Manny Lehman's Law already indicated that evolving programs continually add to their complexity and deteriorating structure unless work is done to maintain them, Ward Cunningham first drew the comparison between technical complexity and debt in a 1992 experience report, likening the shipping of first-time code to going into debt on which interest accrues until it is paid back with a rewrite. In his 2004 text, Refactoring to Patterns, Joshua Kerievsky presents a comparable argument concerning the costs associated with architectural negligence, which he describes as "design debt". Activities that might be postponed include documentation, writing tests, attending to TODO comments and tackling compiler and static code analysis warnings. Other instances of technical debt include knowledge that isn't shared around the organization and code that is too confusing to be modified easily. Writing about PHP development in 2014, Junade Ali made a similar argument. Grady Booch compares the evolution of cities to that of software-intensive systems, noting how a lack of refactoring can lead to technical debt. In open source software, postponing sending local changes to the upstream project is a form of technical debt. See also Code smell (symptoms of inferior code quality that can contribute to technical debt) Big ball of mud Bus factor Escalation of commitment Manumation Overengineering Shotgun surgery Software entropy Software rot Spaghetti code SQALE Sunk cost TODO, FIXME, XXX References External links Ward Explains Debt Metaphor, video from Ward Cunningham OnTechnicalDebt The online community for discussing technical debt Experts interviews on Technical Debt: Ward Cunningham, Philippe KRUCHTEN, Ipek OZKAYA, Jean-Louis LETOUZEY Steve McConnell discusses technical debt TechnicalDebt from Martin Fowler Bliki Averting a "Technical Debt" Crisis by Doug Knesek "Get out of Technical Debt Now!", a talk by Andy Lester Lehman's Law Managing Technical Debt Webinar by Steve McConnell Boundy, David, Software cancer: the seven early warning signs or here, ACM SIGSOFT Software Engineering Notes, Vol. 18 No. 2 (April 1993), Association for Computing Machinery, New York, New York, US Technical debt: investeer en voorkom faillissement (in Dutch) by Colin Spoel Technical debts: Everything you need to know What is technical debt? from DeepSource blog Metaphors Software architecture Software engineering terminology Software maintenance
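To make the debt-and-interest metaphor concrete, the following is a minimal illustrative sketch in Python; it is not from the article or its sources, and all names and rates are hypothetical. The first function takes the expedient branch-per-case route, where every new case means another branch that all future changes must account for; the second repays the debt by separating data from logic, so the marginal cost of change stays flat instead of compounding.

```python
# Hypothetical example: an expedient implementation versus a refactored one.

# Expedient version: cheap to write now, but each new country means another
# branch, and every later change must re-read all of them (the "interest").
def shipping_cost_expedient(country, weight_kg):
    if country == "US":
        return 5.0 + 1.2 * weight_kg
    elif country == "CA":
        return 7.0 + 1.5 * weight_kg
    elif country == "MX":
        return 8.0 + 1.6 * weight_kg
    else:
        return 12.0 + 2.0 * weight_kg

# Refactored version: the debt is "repaid" by moving the variation into data,
# so adding a country is a one-line table change rather than new control flow.
RATES = {
    "US": (5.0, 1.2),  # (base cost, cost per kg)
    "CA": (7.0, 1.5),
    "MX": (8.0, 1.6),
}
DEFAULT_RATE = (12.0, 2.0)

def shipping_cost(country, weight_kg):
    base, per_kg = RATES.get(country, DEFAULT_RATE)
    return base + per_kg * weight_kg

# Both versions agree today; they differ in what tomorrow's change will cost.
assert shipping_cost("CA", 3) == shipping_cost_expedient("CA", 3)
```

Neither version is wrong in isolation; the sketch only illustrates the trade the metaphor describes, in which the expedient form borrows time now and pays it back, with interest, on every subsequent change.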
Technical debt
[ "Technology", "Engineering" ]
1,890
[ "Software engineering", "Computing terminology", "Software maintenance", "Software engineering terminology" ]
11,885,910
https://en.wikipedia.org/wiki/Betacellulin
Betacellulin is a protein that in humans is encoded by the BTC gene located on chromosome 4 at locus 4q13-q21. Betacellulin was initially identified as a mitogen. It is part of the epidermal growth factor (EGF) family and functions as a ligand for the epidermal growth factor receptor (EGFR). The role of betacellulin as an EGF is manifested differently in various tissues, and it has a great effect on mitogenic signaling in retinal pigment epithelial cells and vascular smooth muscle cells. While many studies attest to a role for betacellulin in the differentiation of pancreatic β-cells, the last decade witnessed the association of betacellulin with many additional biological processes, ranging from reproduction to the control of neural stem cells. Betacellulin is a member of the EGF family of growth factors. It is synthesized primarily as a transmembrane precursor, which is then processed to the mature molecule by proteolytic events. Structure The secondary structure of human betacellulin-2 is about 6% helical (1 helix; 3 residues) and 36% beta sheet (5 strands; 18 residues). The betacellulin gene contains six exons, and the mRNA is 2,816 base pairs long. The mRNA is translated into a 178-amino acid precursor, with different regions of the protein responsible for different functions: the first 31 amino acids form the signal peptide (exon 1), residues 32–118 the extracellular region (exons 2 and 3), residues 65–105 the EGF-like domain (exon 3), residues 119–139 the transmembrane domain (exon 4), and residues 140–178 the cytoplasmic tail (exon 5). Function As a typical EGFR ligand, betacellulin is expressed by a variety of cell types and tissues; after translation the precursor can undergo ectodomain shedding, and the proteolytically released soluble factor can bind and activate homodimers or heterodimers of the ERBB receptors. The membrane-anchored form of betacellulin can activate the epidermal growth factor receptor (EGFR). Betacellulin stimulates the proliferation of retinal pigment epithelial and vascular smooth muscle cells but does not stimulate the growth of several other cell types, such as endothelial cells and fetal lung fibroblasts. Tissue distribution The mRNA coding for betacellulin was found to be slightly elevated in rat sciatic nerve segments after nerve damage, suggesting that betacellulin can play a role in peripheral nerve regeneration. Immunohistochemistry has been used to look for betacellulin expression in Schwann cells. Treating cells with recombinant betacellulin protein can be used to investigate the role of betacellulin in managing Schwann cells. A co-culture assay can also be used to assess the effect of Schwann cell-secreted betacellulin on neurons. Mouse BTC is expressed as a 178-amino acid precursor. The membrane-bound precursor is cleaved to yield mature secreted mouse BTC. BTC is synthesized in a wide range of adult tissues and in many cultured cells, including smooth muscle cells and epithelial cells. The amino acid sequence of mature mouse BTC is 82.5% identical with that of human BTC, and both exhibit significant overall similarity with other members of the EGF family. Clinical significance The transcription factor signal transducer and activator of transcription 3 (STAT3) has been identified as a therapeutic target for glioblastoma.
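The residue ranges quoted in the Structure section can be made easier to scan by encoding them as a small lookup table. The sketch below is purely illustrative and not from the article's sources; the table layout and the function name are hypothetical, and the ranges are simply those stated above (1-based, inclusive).

```python
# Hypothetical helper: map a residue position in the 178-amino acid
# betacellulin precursor to the region named in the text.
DOMAINS = [
    (1, 31, "signal peptide"),          # exon 1
    (32, 118, "extracellular region"),  # exons 2-3; includes the EGF-like
                                        # domain at residues 65-105 (exon 3)
    (119, 139, "transmembrane domain"), # exon 4
    (140, 178, "cytoplasmic tail"),     # exon 5
]

def region_of(residue):
    for start, end, name in DOMAINS:
        if start <= residue <= end:
            return name
    raise ValueError("the precursor is only 178 residues long")

print(region_of(90))  # "extracellular region" (within the EGF-like domain)
```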
References Further reading External links Genes on human chromosome 4 Growth factors
Betacellulin
[ "Chemistry" ]
830
[ "Growth factors", "Signal transduction" ]
11,885,926
https://en.wikipedia.org/wiki/Flow%20velocity
In continuum mechanics the flow velocity in fluid dynamics, also macroscopic velocity in statistical mechanics, or drift velocity in electromagnetism, is a vector field used to mathematically describe the motion of a continuum. The length of the flow velocity vector is a scalar, the flow speed. It is also called velocity field; when evaluated along a line, it is called a velocity profile (as in, e.g., law of the wall). Definition The flow velocity u of a fluid is a vector field $\mathbf{u} = \mathbf{u}(\mathbf{x}, t)$, which gives the velocity of an element of fluid at position $\mathbf{x}$ and time $t$. The flow speed q is the length of the flow velocity vector, $q = \|\mathbf{u}\|$, and is a scalar field. Uses The flow velocity of a fluid effectively describes everything about the motion of a fluid. Many physical properties of a fluid can be expressed mathematically in terms of the flow velocity. Some common examples follow: Steady flow The flow of a fluid is said to be steady if $\mathbf{u}$ does not vary with time. That is, if $\partial \mathbf{u} / \partial t = 0$. Incompressible flow If a fluid is incompressible the divergence of $\mathbf{u}$ is zero: $\nabla \cdot \mathbf{u} = 0$. That is, $\mathbf{u}$ is a solenoidal vector field. Irrotational flow A flow is irrotational if the curl of $\mathbf{u}$ is zero: $\nabla \times \mathbf{u} = 0$. That is, $\mathbf{u}$ is an irrotational vector field. A flow in a simply-connected domain which is irrotational can be described as a potential flow, through the use of a velocity potential $\Phi$, with $\mathbf{u} = \nabla \Phi$. If the flow is both irrotational and incompressible, the Laplacian of the velocity potential must be zero: $\nabla^2 \Phi = 0$. Vorticity The vorticity, $\boldsymbol{\omega}$, of a flow can be defined in terms of its flow velocity by $\boldsymbol{\omega} = \nabla \times \mathbf{u}$. If the vorticity is zero, the flow is irrotational. The velocity potential If an irrotational flow occupies a simply-connected fluid region then there exists a scalar field $\phi$ such that $\mathbf{u} = \nabla \phi$. The scalar field $\phi$ is called the velocity potential for the flow. (See Irrotational vector field.) Bulk velocity In many engineering applications the local flow velocity vector field is not known in every point and the only accessible velocity is the bulk velocity or average flow velocity $\bar{u}$ (with the usual dimension of length per time), defined as the quotient between the volume flow rate $\dot{V}$ (with dimension of cubed length per time) and the cross sectional area $A$ (with dimension of square length): $\bar{u} = \dot{V} / A$. See also Displacement field (mechanics) Drift velocity Enstrophy Group velocity Particle velocity Pressure gradient Strain rate Strain-rate tensor Stream function Velocity potential Vorticity Wind velocity References Fluid dynamics Continuum mechanics Vector calculus Velocity Spatial gradient Vector physical quantities
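The incompressibility and vorticity definitions above can be checked numerically on a sample field. The following Python/NumPy sketch is illustrative only (it is not from the article): it uses the solid-body rotation field u = (-y, x), for which the divergence should vanish and the vorticity should be uniformly 2.

```python
import numpy as np

# Sample the 2-D flow field u = (-y, x) on a grid.
n = 64
x = np.linspace(-1.0, 1.0, n)
y = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing="ij")
U = -Y  # x-component of the flow velocity
V = X   # y-component of the flow velocity

dx = x[1] - x[0]
dy = y[1] - y[0]

# Divergence du/dx + dv/dy: zero for an incompressible (solenoidal) flow.
div = np.gradient(U, dx, axis=0) + np.gradient(V, dy, axis=1)

# Vorticity (z-component in 2-D): dv/dx - du/dy.
vort = np.gradient(V, dx, axis=0) - np.gradient(U, dy, axis=1)

print(np.abs(div).max())  # ~0: the field is divergence-free
print(vort.mean())        # ~2: rigid rotation has uniform vorticity
```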
Flow velocity
[ "Physics", "Chemistry", "Engineering" ]
532
[ "Physical phenomena", "Physical quantities", "Chemical engineering", "Motion (physics)", "Vector physical quantities", "Piping", "Velocity", "Wikipedia categories named after physical quantities", "Fluid dynamics" ]
11,886,605
https://en.wikipedia.org/wiki/FNAEG
The Fichier National Automatisé des Empreintes Génétiques ("national automated file of genetic fingerprints") is the French national DNA database, used by both the national police force and local gendarmerie. Origins and evolution In June 1998, the Guigou law on the prevention of sexually-related crimes, passed by the Plural Left Lionel Jospin government, created a national DNA database. The implementation, originally planned for 1999, was finally completed in 2001, with the database itself located at Écully in the Rhône, managed by a subdirectorate of the technical and scientific departments of the French police force. In the aftermath of the September 11 attacks on the US in 2001, the French government increased the scope of the database to include DNA related to other serious criminal offences, such as voluntary manslaughter, criminal violence and terrorism. A further 'law for interior safety' introduced on 18 March 2003 expanded the scope still further to cover almost all violent crimes against people or property, serious crimes such as drug trafficking, simple thefts, graffiti tagging and property damage, and finally almost all small offenses, but not traffic offenses or crimes committed abroad. Samples are taken from convicted persons and also from mere suspects. The law does not specify a minimum age. In September 2009, Matthieu Bonduelle, the general secretary of the Syndicat de la Magistrature (the leading union of French judges), declared that "nobody defends a universal database, but, in fact, it is being done." Relative size As of 1 October 2003, FNAEG was understood to contain the DNA records of approximately 8,000 convicted criminals and another 3,200 suspects. In 2006, the number of entries was believed to be in excess of 330,000. In May 2007, it was believed to be approaching 500,000. In December 2009, there were 1.27 million entries. Privacy concerns With the expansion of the database in 2003, it also became an offence for suspects to fail to provide a DNA sample, with punishment ranging from a prison sentence of between six months and two years, and a fine of between €7,500 and €30,000. At the end of 2006, the media raised the case of individuals refusing to provide DNA samples. Many of them were civil disobedience activists opposed to genetically modified organisms (GMOs) (see fr:Faucheurs volontaires). Although there were only around 200 such cases, the activists denounced what they regarded as a threat to personal freedom. See also Government database References External links Fichier national automatisé des empreintes génétiques On the official website of the French ministère de l'Intérieur: Fichier national automatisé des empreintes génétiques Human genetics Identity documents Biometrics Government databases in France National DNA databases Privacy in France Forensic databases Biological databases
FNAEG
[ "Biology" ]
592
[ "Bioinformatics", "Biological databases" ]
11,887,250
https://en.wikipedia.org/wiki/Stellar%20magnetic%20field
A stellar magnetic field is a magnetic field generated by the motion of conductive plasma inside a star. This motion is created through convection, which is a form of energy transport involving the physical movement of material. A localized magnetic field exerts a force on the plasma, effectively increasing the pressure without a comparable gain in density. As a result, the magnetized region rises relative to the remainder of the plasma, until it reaches the star's photosphere. This creates starspots on the surface, and the related phenomenon of coronal loops. Measurement A star's magnetic field can be measured using the Zeeman effect. Normally, the atoms in a star's atmosphere will absorb certain frequencies of energy in the electromagnetic spectrum, producing characteristic dark absorption lines in the spectrum. However, when the atoms are within a magnetic field, these lines become split into multiple, closely spaced lines. The energy also becomes polarized with an orientation that depends on the orientation of the magnetic field. Thus the strength and direction of the star's magnetic field can be determined by examination of the Zeeman effect lines. A stellar spectropolarimeter is used to measure the magnetic field of a star. This instrument consists of a spectrograph combined with a polarimeter. The first instrument to be dedicated to the study of stellar magnetic fields was NARVAL, which was mounted on the Bernard Lyot Telescope at the Pic du Midi de Bigorre in the French Pyrenees mountains. Various measurements—including magnetometer measurements over the last 150 years; 14C in tree rings; and 10Be in ice cores—have established substantial magnetic variability of the Sun on decadal, centennial and millennial time scales. Field generation Stellar magnetic fields, according to solar dynamo theory, are generated within the convective zone of the star. The convective circulation of the conducting plasma functions like a dynamo. This activity destroys the star's primordial magnetic field, then generates a dipolar magnetic field. As the star undergoes differential rotation—rotating at different rates for various latitudes—the magnetism is wound into a toroidal field of "flux ropes" that become wrapped around the star. The fields can become highly concentrated, producing activity when they emerge on the surface. The magnetic field of a rotating body of conductive gas or liquid develops self-amplifying electric currents, and thus a self-generated magnetic field, due to a combination of differential rotation (different angular velocity of different parts of the body), Coriolis forces and induction. The distribution of currents can be quite complicated, with numerous open and closed loops, and thus the magnetic field of these currents in their immediate vicinity is also quite twisted. At large distances, however, the magnetic fields of currents flowing in opposite directions cancel out and only a net dipole field survives, slowly diminishing with distance. Because the major currents flow in the direction of conductive mass motion (equatorial currents), the major component of the generated magnetic field is the dipole field of the equatorial current loop, thus producing magnetic poles near the geographic poles of a rotating body. The magnetic fields of celestial bodies are often aligned with the direction of rotation, with notable exceptions such as certain pulsars. Periodic field reversal Another feature of this dynamo model is that the currents are AC rather than DC.
Their direction, and thus the direction of the magnetic field they generate, alternates more or less periodically, changing amplitude and reversing direction, although still more or less aligned with the axis of rotation. The Sun's major component of magnetic field reverses direction every 11 years (so the period is about 22 years), resulting in a diminished magnitude of magnetic field near reversal time. During this dormancy, the sunspot activity is at a maximum (because of the lack of magnetic braking on plasma) and, as a result, massive ejection of high energy plasma into the solar corona and interplanetary space takes place. Collisions of neighboring sunspots with oppositely directed magnetic fields result in the generation of strong electric fields near rapidly disappearing magnetic field regions. This electric field accelerates electrons and protons to high energies (kiloelectronvolts), which results in jets of extremely hot plasma leaving the Sun's surface and heating coronal plasma to high temperatures (millions of kelvins). If the gas or liquid is very viscous (resulting in turbulent differential motion), the reversal of the magnetic field may not be very periodic. This is the case with the Earth's magnetic field, which is generated by turbulent currents in a viscous outer core. Surface activity Starspots are regions of intense magnetic activity on the surface of a star. (On the Sun they are termed sunspots.) These form a visible component of magnetic flux tubes that are formed within a star's convection zone. Due to the differential rotation of the star, the tube becomes curled up and stretched, inhibiting convection and producing zones of lower than normal temperature. Coronal loops often form above starspots, forming from magnetic field lines that stretch out into the stellar corona. These in turn serve to heat the corona to temperatures over a million kelvins. The magnetic fields associated with starspots and coronal loops are linked to flare activity, and the associated coronal mass ejection. The plasma is heated to tens of millions of kelvins, and the particles are accelerated away from the star's surface at extreme velocities. Surface activity appears to be related to the age and rotation rate of main-sequence stars. Young stars with a rapid rate of rotation exhibit strong activity. By contrast, middle-aged Sun-like stars with a slow rate of rotation show low levels of activity that varies in cycles. Some older stars display almost no activity, which may mean they have entered a lull that is comparable to the Sun's Maunder minimum. Measurements of the time variation in stellar activity can be useful for determining the differential rotation rates of a star.
As the rotation rate slows, so too does the angular deceleration. By this means, a star will gradually approach, but never quite reach, the state of zero rotation. Magnetic stars A T Tauri star is a type of pre-main-sequence star that is being heated through gravitational contraction and has not yet begun to burn hydrogen at its core. They are variable stars that are magnetically active. The magnetic fields of these stars are thought to interact with their strong stellar winds, transferring angular momentum to the surrounding protoplanetary disk. This allows the star to brake its rotation rate as it collapses. Small, M-class stars (with 0.1–0.6 solar masses) that exhibit rapid, irregular variability are known as flare stars. These fluctuations are hypothesized to be caused by flares, although the activity is much stronger relative to the size of the star. The flares on this class of stars can extend up to 20% of the circumference, and radiate much of their energy in the blue and ultraviolet portion of the spectrum. Straddling the boundary between stars that undergo nuclear fusion in their cores and non-hydrogen fusing brown dwarfs are the ultracool dwarfs. These objects can emit radio waves due to their strong magnetic fields. Approximately 5–10% of these objects have had their magnetic fields measured. The coolest of these, 2MASS J10475385+2124234 with a temperature of 800–900 K, retains a magnetic field stronger than 1.7 kG, making it some 3000 times stronger than the Earth's magnetic field. Radio observations also suggest that their magnetic fields periodically change their orientation, similar to the Sun during the solar cycle. Planetary nebulae are created when a red giant star ejects its outer envelope, forming an expanding shell of gas. However, it remains a mystery why these shells are not always spherically symmetrical. 80% of planetary nebulae do not have a spherical shape, instead forming bipolar or elliptical nebulae. One hypothesis for the formation of a non-spherical shape is the effect of the star's magnetic field. Instead of expanding evenly in all directions, the ejected plasma tends to leave by way of the magnetic poles. Observations of the central stars in at least four planetary nebulae have confirmed that they do indeed possess powerful magnetic fields. After some massive stars have ceased thermonuclear fusion, a portion of their mass collapses into a compact body of neutrons called a neutron star. These bodies retain a significant magnetic field from the original star, but the collapse in size causes the strength of this field to increase dramatically. The rapid rotation of these collapsed neutron stars results in a pulsar, which emits a narrow beam of energy that can periodically point toward an observer. Compact and fast-rotating astronomical objects (white dwarfs, neutron stars and black holes) have extremely strong magnetic fields. The magnetic field of a newly born fast-spinning neutron star is so strong (up to 10^8 teslas) that it electromagnetically radiates enough energy to quickly (in a matter of a few million years) damp down the star rotation by 100 to 1000 times. Matter falling on a neutron star also has to follow the magnetic field lines, resulting in two hot spots on the surface where it can reach and collide with the star's surface. These spots are literally a few feet (about a metre) across but tremendously bright. Their periodic eclipsing during star rotation is hypothesized to be the source of pulsating radiation (see pulsars).
An extreme form of a magnetized neutron star is the magnetar. These are formed as the result of a core-collapse supernova. The existence of such stars was confirmed in 1998 with the measurement of the star SGR 1806-20. The magnetic field of this star has increased the surface temperature to 18 million K and it releases enormous amounts of energy in gamma ray bursts. Jets of relativistic plasma are often observed along the direction of the magnetic poles of active black holes in the centers of very young galaxies. Star-planet interaction controversy In 2008, a team of astronomers first described how, as the exoplanet orbiting HD 189733 A reaches a certain place in its orbit, it causes increased stellar flaring. In 2010, a different team found that every time they observed the exoplanet at a certain position in its orbit, they also detected X-ray flares. Theoretical research since 2000 suggested that an exoplanet very near to the star that it orbits may cause increased flaring due to the interaction of their magnetic fields, or because of tidal forces. In 2019, astronomers combined data from Arecibo Observatory, MOST, and the Automated Photoelectric Telescope, in addition to historical observations of the star at radio, optical, ultraviolet, and X-ray wavelengths to examine these claims. Their analysis found that the previous claims were exaggerated and the host star failed to display many of the brightness and spectral characteristics associated with stellar flaring and solar active regions, including sunspots. They also found that the claims did not stand up to statistical analysis, given that many stellar flares are seen regardless of the position of the exoplanet, thereby debunking the earlier claims. The magnetic fields of the host star and exoplanet do not interact, and this system is no longer believed to have a "star-planet interaction." See also References External links Magnetic field Magnetism in astronomy Concepts in stellar astronomy
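As a rough illustration of the Zeeman-effect measurement described above, the wavelength separation of the normal Zeeman components can be estimated as delta-lambda = e B lambda^2 / (4 pi m_e c). The Python sketch below is not from the article; the 1.7 kG field strength is borrowed from the ultracool dwarf mentioned in the text, and the line wavelength is an arbitrary illustrative choice. It shows why even kilogauss-strength fields split optical lines by only picometres, which is why dedicated spectropolarimeters such as NARVAL are needed.

```python
import math

E_CHARGE = 1.602176634e-19  # electron charge, C
M_E = 9.1093837015e-31      # electron mass, kg
C = 2.99792458e8            # speed of light, m/s

def zeeman_splitting(wavelength_m, b_tesla):
    """Wavelength separation (m) of the normal Zeeman components."""
    return E_CHARGE * b_tesla * wavelength_m**2 / (4 * math.pi * M_E * C)

# A 1.7 kG (0.17 T) field acting on a 600 nm line:
print(zeeman_splitting(600e-9, 0.17))  # ~2.9e-12 m, i.e. a few picometres
```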
Stellar magnetic field
[ "Physics", "Astronomy" ]
2,495
[ "Concepts in astrophysics", "Concepts in stellar astronomy", "Magnetism in astronomy", "Astronomical sub-disciplines", "Stellar astronomy" ]
11,887,599
https://en.wikipedia.org/wiki/Epsilon%20Tauri%20b
Epsilon Tauri b (abbreviated ε Tauri b or ε Tau b), formally named Amateru, is a super-Jupiter exoplanet orbiting the K-type giant star Epsilon Tauri approximately 155 light-years (47.53 parsecs) away from the Earth in the constellation of Taurus. It orbits the star farther out than Earth orbits the Sun. It has moderate eccentricity. Name In July 2014, the International Astronomical Union (IAU) launched NameExoWorlds, a process for giving proper names to certain exoplanets. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the winning name was Amateru for this planet. The name was based on that submitted by the Kamagari Astronomical Observatory of Kure, Hiroshima Prefecture, Japan: namely 'Amaterasu', the Shinto goddess of the Sun, born from the left eye of the god Izanagi. The IAU substituted 'Amateru' - which is a common Japanese appellation for shrines when they enshrine Amaterasu - because 'Amaterasu' is already used for asteroid 10385 Amaterasu. Characteristics Mass and radius Epsilon Tauri b is a "super-Jupiter", an exoplanet that has a radius and mass larger than that of the gas giants Jupiter and Saturn. It has an estimated mass of around 7.6 Jupiter masses and a potential radius around 18% larger than Jupiter's (1.18 Jupiter radii), inferred from its mass, since it is more massive than the jovian planet. Host star The planet orbits a (K-type) giant star named Epsilon Tauri. It has exhausted the hydrogen supply in its core and is currently fusing helium. The star has a mass of 2.7 solar masses and a radius of around 12.6 solar radii. It has a surface temperature of 4901 K and is 625 million years old. In comparison, the Sun is about 4.6 billion years old and has a surface temperature of 5778 K. The star's apparent magnitude, or how bright it appears from Earth's perspective, is 3.53. Therefore, Epsilon Tauri can be seen with the naked eye. Orbit Epsilon Tauri b orbits its star, which shines with nearly 97 times the Sun's luminosity, every 645 days at a distance of 1.93 AU (compared to Mars' orbital distance from the Sun, which is 1.52 AU). It has a mildly eccentric orbit, with an eccentricity of 0.15. Discovery Epsilon Tauri b was discovered using the High Dispersion Echelle Spectrograph at Okayama Astrophysical Observatory (OAO) as part of a survey of G-type and K-type giant stars to search for exoplanets. Measurements of radial velocity from Epsilon Tauri were taken between December 2003 and July 2006. Wobbles in the star were detected, and after analyzing the data, it was eventually concluded that there was a planetary companion with a mass about 7.6 times that of Jupiter orbiting Epsilon Tauri every 645 days, or nearly 2 years, with an eccentricity of 0.15. See also 4 Ursae Majoris b Epsilon Eridani b Epsilon Reticuli Ab In popular culture The planet Amateru is mentioned by name in the science fiction book Starsong Chronicles: Exodus by American author JJ Clayborn. References External links Taurus (constellation) Giant planets Exoplanets discovered in 2007 Exoplanets detected by radial velocity Exoplanets with proper names Hyades (star cluster)
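The orbital figures quoted above can be cross-checked with Kepler's third law in solar units, a^3 = M P^2, with the semi-major axis a in AU, the stellar mass M in solar masses, and the period P in years. The short Python sketch below is only a back-of-the-envelope check using numbers taken from the article, not part of the discovery analysis itself.

```python
STAR_MASS_MSUN = 2.7  # mass of Epsilon Tauri, from the article
PERIOD_DAYS = 645.0   # orbital period of Epsilon Tauri b, from the article

period_years = PERIOD_DAYS / 365.25
semi_major_axis_au = (STAR_MASS_MSUN * period_years**2) ** (1.0 / 3.0)

print(round(semi_major_axis_au, 2))  # ~2.0 AU, consistent with the quoted 1.93 AU
```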
Epsilon Tauri b
[ "Astronomy" ]
751
[ "Taurus (constellation)", "Constellations" ]
11,887,815
https://en.wikipedia.org/wiki/Postreplication%20repair
Postreplication repair is the repair of damage to the DNA that takes place after replication. Some example genes in humans include BRCA1, BRCA2, BLM, and NBS1. Accurate and efficient DNA replication is crucial for the health and survival of all living organisms. Under optimal conditions, the replicative DNA polymerases ε, δ, and α can work in concert to ensure that the genome is replicated efficiently with high accuracy in every cell cycle. However, DNA is constantly challenged by exogenous and endogenous genotoxic threats, including solar ultraviolet (UV) radiation and reactive oxygen species (ROS) generated as a byproduct of cellular metabolism. Damaged DNA can act as a steric block to replicative polymerases, thereby leading to incomplete DNA replication or the formation of secondary DNA strand breaks at the sites of replication stalling. Incomplete DNA synthesis and DNA strand breaks are both potential sources of genomic instability. An arsenal of DNA repair mechanisms exists to repair various forms of damaged DNA and minimize genomic instability. Most DNA repair mechanisms require an intact DNA strand as template to fix the damaged strand. DNA damage prevents the normal enzymatic synthesis of DNA by the replication fork. At damaged sites in the genome, both prokaryotic and eukaryotic cells utilize a number of postreplication repair (PRR) mechanisms to complete DNA replication. Chemically modified bases can be bypassed by either error-prone or error-free translesion polymerases, or through genetic exchange with the sister chromatid. The replication of DNA with a broken sugar-phosphate backbone is most likely facilitated by the homologous recombination proteins that confer resistance to ionizing radiation. The activity of PRR enzymes is regulated by the SOS response in bacteria and may be controlled by the postreplication checkpoint response in eukaryotes. The elucidation of PRR mechanisms is an active area of molecular biology research, and the terminology is currently in flux. For instance, PRR has recently been referred to as "DNA damage tolerance" to emphasize the instances in which postreplication DNA damage is repaired without removing the original chemical modification to the DNA. While the term PRR has most frequently been used to describe the repair of single-stranded postreplication gaps opposite damaged bases, a broader usage has been suggested. In this case, the term PRR would encompass all processes that facilitate the replication of damaged DNA, including those that repair replication-induced double-strand breaks. Melanoma cells are commonly defective in postreplication repair of DNA damage in the form of cyclobutane pyrimidine dimers, a type of damage caused by ultraviolet radiation. A particular repair process that appears to be defective in melanoma cells is homologous recombinational repair. Defective postreplication repair of cyclobutane pyrimidine dimers can lead to mutations that are the primary driver of melanoma. References DNA repair
Postreplication repair
[ "Biology" ]
616
[ "Molecular genetics", "Cellular processes", "DNA repair" ]
11,888,310
https://en.wikipedia.org/wiki/Mummia
Mummia, mumia, or originally mummy referred to several different preparations in the history of medicine, from "mineral pitch" to "powdered human mummies". It originated from Arabic mūmiyā "a type of resinous bitumen found in Western Asia and used curatively" in traditional Islamic medicine, which was translated as pissasphaltus (from Greek píssa "pitch" and ásphaltos "asphalt") in ancient Greek medicine. In medieval European medicine, mūmiyā "bitumen" was transliterated into Latin as mumia meaning both "a bituminous medicine from Persia" and "mummy". Apothecaries dispensed expensive mummia bitumen, which was thought to be an effective cure-all for many ailments. It was also used as an aphrodisiac. Beginning around the 12th century when supplies of imported natural bitumen ran short, mummia was misinterpreted as "mummy", and the word's meaning expanded to "a black resinous exudate scraped out from embalmed Egyptian mummies". This began a period of lucrative trade between Egypt and Europe, and suppliers substituted rare mummia exudate with entire mummies, either embalmed or desiccated. After Egypt banned the shipment of mummia in the 16th century, unscrupulous European apothecaries began to sell fraudulent mummia prepared by embalming and desiccating fresh corpses. During the Renaissance, scholars proved that translating bituminous mummia as mummy was a mistake, and physicians stopped prescribing the ineffective drug. Artists in the 17–19th centuries still used ground-up mummies to tint a popular oil-paint called mummy brown. Terminology The etymologies of both English mummia and mummy derive from Medieval Latin mumia, which transcribes Arabic mūmiyā "a kind of bitumen used medicinally; a bitumen-embalmed body" from mūm "wax (used in embalming)", which descend from Persian mumiya and mum. The Oxford English Dictionary records the complex semantic history of mummy and mummia. Mummy was first recorded meaning "a medicinal preparation of the substance of mummies; hence, an unctuous liquid or gum used medicinally" (c. 1400), which Shakespeare used jocularly for "dead flesh; body in which life is extinct" (1598), and later "a pulpy substance or mass" (1601). Second, it was semantically extended to mean "a sovereign remedy" (1598), "a medicinal bituminous drug obtained from Arabia and the East" (1601), "a kind of wax used in the transplanting and grafting of trees" (1721), and "a rich brown bituminous pigment" (1854). The third mummy meaning was "the body of a human being or animal embalmed (according to the ancient Egyptian or some analogous method) as a preparation for burial" (1615), and "a human or animal body desiccated by exposure to sun or air" (1727). Mummia was originally used in mummy's first meaning "a medicinal preparation…" (1486), then in the second meaning "a sovereign remedy" (1741), and lastly to specify "in mineralogy, a sort of bitumen, or mineral pitch, which is soft and tough, like shoemaker's wax, when the weather is warm, but brittle, like pitch, in cold weather. It is found in Persia, where it is highly valued" (1841). In modern English usage, mummy commonly means "embalmed body" as distinguished from mummia "a medicine" in historical contexts. Mummia or mumia is defined by three English mineralogical terms. Bitumen (from Latin bitūmen) originally meant "a kind of mineral pitch found in Palestine and Babylon, used as mortar, etc.
The same as asphalt, mineral pitch, Jew's pitch, Bitumen judaicum", and in modern scientific use means "the generic name of certain mineral inflammable substances, native hydrocarbons more or less oxygenated, liquid, semi-solid, and solid, including naphtha, petroleum, asphalt, etc." Asphalt (from Ancient Greek ásphaltos "asphalt, bitumen") first meant "A bituminous substance, found in many parts of the world, a smooth, hard, brittle, black or brownish-black resinous mineral, consisting of a mixture of different hydrocarbons; called also mineral pitch, Jews' pitch, and in the [Old Testament] 'slime'", and presently means "A composition made by mixing bitumen, pitch, and sand, or manufactured from natural bituminous limestones, used to pave streets and walks, to line cisterns, etc.", used as an abbreviation for asphalt concrete. Until the 20th century, the Latinate term asphaltum was also used. Pissasphalt (from Greek píssa "pitch" and ásphaltos "asphalt") names "A semi-liquid variety of bitumen, mentioned by ancient writers". The medicinal use of bituminous mummia has a parallel in Ayurveda: shilajit or silajit (from Sanskrit shilajatu "rock-conqueror") or mumijo (from Persian mūmiyā "wax") is "A name given to various solid or viscous substances found on rock in India and Nepal … esp. a usu. dark-brown odoriferous substance which is used in traditional Indian medicine and probably consists principally of dried animal urine". History The usage of mumiya as medicine began with the famous Persian mumiya black pissasphalt remedy for wounds and fractures, which was confused with similarly appearing black bituminous materials used in Egyptian mummification. This was misinterpreted by Medieval Latin translators to mean whole mummies. Starting in the 12th century and continuing as late as the 19th century, mummies and bitumen from mummies would be central to European medicine and art, as well as to Egyptian trade. Bitumen or asphalt had many uses in the ancient world such as glue, mortar, and waterproofing. The ancient Egyptians began to use bitumen for embalming mummies during the Twelfth Dynasty (1991–1802 BCE). (S. G. F. Brandon, "Mummification", Man, Myth and Magic: An Illustrated Encyclopedia of the Supernatural.) According to historians of pharmacy, mummia became part of the materia medica of the Arabs, discussed by Muhammad ibn Zakariya al-Razi (845–925) and Ibn al-Baitar (1197–1248). Medieval Persian physicians used bitumen/asphalt both as a salve for cuts, bruises, and bone fractures, and as an internal medicine for stomach ulcers and tuberculosis. They achieved the best results with a black pissasphalt that seeped from a mountain in Darabgerd, Persia. The Greek physician Pedanius Dioscorides' c. 50–70 De Materia Medica ranked bitumen from the Dead Sea as medicinally superior to the pissasphalt from Apollonia (Illyria), both of which were considered to be an equivalent substitute for the scarce and expensive Persian mumiya. During the Crusades, European soldiers learned firsthand of the drug mummia, which was considered to have great healing powers in cases of fracture and rupture. The demand for mummia increased in Europe, and since the supply of natural bitumen from Persia and the Dead Sea was limited, the search for a new source turned to the tombs of Egypt. Misinterpreting the Latin word mumia "medicinal bitumen" involved several steps. The first was to substitute substances exuded by Egyptian mummies for the natural product. The Arab physician Serapion the Younger (fl.
12th century) wrote about bituminous mumia and its many uses, but the Latin translation of Simon Genuensis (d. 1303) said, "Mumia, this is the mumia of the sepulchers with aloes and myrrh mixed with the liquid (humiditate) of the human body". Two 12th-century Italian examples: Gerard of Cremona mistakenly translated Arabic mumiya as "the substance found in the land where bodies are buried with aloes by which the liquid of the dead, mixed with the aloes, is transformed and it is similar to marine pitch", and the physician Matthaeus Platearius said "Mumia is a spice found in the sepulchers of the dead.... That is best which is black, ill-smelling, shiny, and massive". The second step was to confuse and replace the rare black exudation from embalmed corpses with the black bitumen that Egyptians used as an embalming preservative. The Baghdad physician Abd al-Latif al-Baghdadi (1162–1231) described ancient Egyptian mummies, "In the belly and skull of these corpses is also found in great abundance [a substance] called mummy", and added that although the word properly denoted bitumen or asphalt, "The mummy found in the hollows of the corpses in Egypt, differs but immaterially from the nature of mineral mummy; and where any difficulty arises in procuring the latter, may be substituted in its stead." The third step in misinterpreting mummia was to substitute the blackened flesh of an entire mummy for the hardened bituminous materials from the interior cavities of the cadavers. The ancient tombs of Egypt and the deserts could not meet the European demand for the drug mumia, so a commerce developed in the manufacture and sale of fraudulent mummies, sometimes called mumia falsa. The Italian surgeon Giovanni da Vigo (1450–1525) defined mumia as "The flesh of a dead body that is embalmed, and it is hot and dry in the second [grade], and therefore it has virtue to incarne [i.e., heal over] wounds and to staunch blood", and included it in his list of essential drugs. The Swiss-German polymath Paracelsus (1493–1541) gave mummia a new meaning of "intrinsic spirit" and said true pharmaceutical mummia must be "the body of a man who did not die a natural death but rather died an unnatural death with a healthy body and without sickness". The German physician Oswald Croll (1563–1609) said mumia was "not the liquid matter which is found in the Egyptian sepulchers," but rather "the flesh of a man that perishes a violent death, and kept for some time in the air", and gave a detailed recipe for making tincture of mumia from the corpse of a young red-haired man, who had been hanged, bludgeoned on the breaking wheel, exposed to the air for days, then cut into small pieces, sprinkled with powdered myrrh and aloes, soaked in wine, and dried. Renaissance scholars and physicians first expressed opposition to using human mumia in the 16th century. The French naturalist Pierre Belon (1517–1564) concluded that the Arab physicians, from whom the western writers derived their knowledge of mumia, had actually referred to the pissasphalt of Dioscorides, which had been misconstrued by the translators. He said Europeans were importing both the "falsely called" mumia obtained from scraping the bodies of cadavers, and "artificial mumia" made by exposing buried dead bodies to the heat of the sun before grinding them up. While he considered the available mumia to be a valueless and even dangerous drug, he noted that King Francis I always carried with him a mixture of mumia and rhubarb to use as an immediate remedy for any injury.
The barber-surgeon Ambroise Paré (d. 1590) revealed the manufacture of fake mummia both in France, where apothecaries would steal the bodies of executed criminals, dry them in an oven, and sell the flesh; and in Egypt, where a merchant, who admitted collecting dead bodies and preparing mummia, expressed surprise that the Christians, "so dainty-mouthed, could eat the bodies of the dead". Paré admitted to having personally administered mumia a hundred times, but condemned it, writing that "this wicked kinde of Drugge, doth nothing helpe the diseased", and so he stopped prescribing it and encouraged others not to use mumia. The English herbalist John Gerard's 1597 Herball described the ancient Egyptians using cedar pitch for embalming, and noted that the preserved bodies that shopkeepers falsely call "mumia" should be what the Greeks called pissasphalton. Gerard blamed the error on the translator of Serapion, who interpreted mumia "according to his own fancie" as the exudate from an embalmed human corpse. The medical use of Egyptian mumia continued through the 17th century. The physicist Robert Boyle (1627–1691) praised it as "one of the useful medicines commended and given by our physicians for falls and bruises, and in other cases too." The Dutch physician Steven Blankaart's 1754 Lexicon medicum renovatum listed four types of mumia: Arabian exudate from bodies embalmed with spices and asphalt, Egyptian bodies embalmed with pissasphalt, sun-dried bodies found in the desert, and natural pissasphalt. Mummia's familiarity as a remedy in Britain is demonstrated by passing references in Shakespeare, Francis Beaumont and John Fletcher, and John Donne, and also by more detailed remarks in the writings of Thomas Browne, Francis Bacon, and Robert Boyle. By the 18th century, skepticism about the pharmaceutical value of mumia was increasing, and medical opinion was turning against its use. The English medical writer John Quincy wrote in 1718 that although mumia was still listed in medicinal catalogues, "it is quite out of use in Prescription". Mummia was offered for sale medicinally as late as 1924 in the price list of Merck & Co. Both mummia and asphalt have long been used as pigments. The British chemist and painter Arthur Herbert Church described the use of mummia for making "mummy brown" oil paint: 'Mummy,' as a pigment, is inferior to prepared, but superior to raw, asphalt, inasmuch as it has been submitted to a considerable degree of heat, and has thereby lost some of its volatile hydrocarbons. Moreover, it is usual to grind up the bones and other parts of the mummy together, so that the resulting powder has more solidity and is less fusible than the asphalt alone would be. A London colourman informs me that one Egyptian mummy furnishes sufficient material to satisfy the demands of his customers for twenty years. It is perhaps scarcely necessary to add that some samples of the pigment sold as 'mummy' are spurious. The modern pigment sold as "mummy brown" is composed of a mixture of kaolin, quartz, goethite and hematite. See also Bitumen of Judea Human fat Medical cannibalism Mellified man References Additional sources External links Sheba's Secret Mummies – Channel 4 documentary The Gruesome History of Eating Corpses as Medicine, Smithsonian Ancient Egyptian mummies Medical cannibalism History of pharmacy Magic powders Resins Traditional medicine
Mummia
[ "Physics" ]
3,206
[ "Amorphous solids", "Unsolved problems in physics", "Resins" ]
11,888,347
https://en.wikipedia.org/wiki/Tersano
Tersano Inc. is a privately held company based in Oldcastle, Ontario with offices and distribution centres throughout Canada, the USA, Latin America, APAC and EMEA. Established in 2001, Tersano develops and manufactures devices that produce Stabilized Aqueous Ozone (SAO) for personal and professional use. It serves clients in industries such as healthcare, education, manufacturing, and hospitality. Tersano was the first company to concurrently receive both the Green Seal 37 (for Industrial and Institutional Cleaners) and the Green Seal 53 (for Biologically Active Cleaning Products). In 2006 the company's product was ready for launch, having been approved by the United States Environmental Protection Agency (EPA Establishment #: 89093-CAN-1), the U.S. Food and Drug Administration (FDA), the Canadian Standards Association (CSA) and Underwriters Laboratories (UL). Since then Tersano has released more commercial and residential products such as the ProScrub degreaser, the ProBowl descaler, laundry pods, and microfiber cloths. Time magazine declared it one of the best inventions of 2006. The system also received television attention when it was featured on an episode of "The Big Idea with Donny Deutsch". References External links Companies established in 2001 Ozone 2001 establishments in Ontario Companies based in Windsor, Ontario
Tersano
[ "Chemistry" ]
275
[ "Ozone", "Oxidizing agents" ]
11,888,426
https://en.wikipedia.org/wiki/Polish%20Register%20of%20Shipping
Polish Register of Shipping, also known as PRS, is an independent classification society established in 1936. It is a not-for-profit company working on the marine market, developing technical rules and supervising their implementation, managing risk and performing surveys on ships. PRS has been authorized by a number of State Maritime Administrations to act on their behalf. PRS is the only classification society which has its own team of scuba-diving surveyors performing underwater inspections. The Society's head office is located at 126 Aleja gen. Hallera, Gdańsk, Poland. Main activities 1. The development and updating of the rules for classification and construction of floating objects, industrial objects as well as statutory and administration survey guidelines resulting from authorizations granted to PRS by Governments. 2. Performing surveys for compliance with the requirements of the Society's own rules for classification and construction and/or the requirements of the relevant international conventions as well as national regulations regarding the following: floating objects, including naval craft, special purpose objects intended for the State security and defence, construction of steel structures, pipelines and industrial installations, as well as land objects, construction and repair of containers, manufacture of materials and products, approval of products, manufacturers and service suppliers. 3. Provision of technical expertise and advisory services. 4. Certification Certification of quality management systems (ISO 9001) Certification of environmental protection systems (ISO 14001) Certification of work safety management systems (PN-N-18001) HACCP Certification Product certification for conformity with the European Union directives 5. Industrial supervision Investor supervision of steel structures Supervision of gas transfer pipelines, power engineering and municipal media and their equipment Investor supervision of motorways, roads, viaducts, tunnels and culverts Supervision of land environment protection objects Supervision of hydrotechnical objects (weirs, locks, breakwaters, platforms, wharves) 6. Other activities Underwater inspections Approval of products Approval of firms Technical expertise and advice for external bodies such as underwriters and banks Research and development History In 1936 the Polish Register of Inland Shipping was established. In 1946 the name of the organisation was changed to Polski Rejestr Statków. In 1954 the institution, by decision of the authorities in power, was transformed into a State Enterprise. From 1972 to 2000 PRS was a member of IACS. After the sinking of the Polish Register-classed Leader L in the Western Atlantic on March 23, 2000, PRS's associate status in IACS was terminated with immediate effect. In 2001 Polski Rejestr Statków State Enterprise was transformed into a single-shareholder joint-stock company. Since 2006 PRS has been recognised by the European Commission and EMSA. On June 3, 2011, PRS was re-admitted into IACS. On 2 February 2012 the European Commission issued a Commission Implementing Decision 2012/66/EU on the recognition of the Polish Register of Shipping as a classification society for inland waterway vessels. External links PRS website Download PRS Rules and publications Water transport in Poland Ship classification societies Polish Joint-stock companies
Polish Register of Shipping
[ "Engineering" ]
615
[ "Marine engineering organizations", "Ship classification societies" ]
11,889,100
https://en.wikipedia.org/wiki/Wildlife%20of%20Eritrea
The wildlife of Eritrea is composed of its flora and fauna. Eritrea has 96 species of mammals and a rich avifauna of 566 species of birds. Fauna Mammals Birds References External links Eastern Africa: Ethiopia, extending into Eritrea, Eritrean coastal desert Eastern Africa: Eritrea, Ethiopia, Kenya, Somalia, and Sudan Biota of Eritrea Eritrea
Wildlife of Eritrea
[ "Biology" ]
71
[ "Biota by country", "Wildlife by country", "Biota of Eritrea" ]
11,889,121
https://en.wikipedia.org/wiki/Diels%E2%80%93Reese%20reaction
The Diels–Reese reaction is a reaction between hydrazobenzene and dimethyl acetylenedicarboxylate (or related esters), first reported in 1934 by Otto Diels and Johannes Reese. Later work by others extended the reaction scope to include substituted hydrazobenzenes. The exact mechanism is not known. By changing the acidic or basic nature of the solvent, the reaction gives different products. With acetic acid as solvent (acidic), the reaction gives a diphenylpyrazolone. With xylene as solvent (neutral), the reaction gives an indole. With pyridine as solvent (basic), the reaction gives a carbomethoxyquinoline, which can be degraded to a dihydroquinoline. References Name reactions
Diels–Reese reaction
[ "Chemistry" ]
166
[ "Name reactions", "Organic chemistry stubs" ]
11,889,169
https://en.wikipedia.org/wiki/Cercospora%20apiicola
Cercospora apiicola is a fungal plant pathogen that causes leaf spot on celery. References apiicola Fungal plant pathogens and diseases Eudicot diseases Fungus species
Cercospora apiicola
[ "Biology" ]
40
[ "Fungi", "Fungus species" ]
11,890,087
https://en.wikipedia.org/wiki/Pulsometer%20pump
The Pulsometer steam pump is a pistonless pump which was patented in 1872 by the American Charles Henry Hall. In 1875 a British engineer bought the patent rights of the Pulsometer and it was introduced to the market soon thereafter. The invention was inspired by the Savery steam pump invented by Thomas Savery. Around the turn of the 20th century, it was a popular and effective pump for quarry pumping. Construction and operation This extremely simple pump was made of cast iron, and had no pistons, rods, cylinders, cranks, or flywheels. It operated by the direct action of steam on water. The mechanism consisted of two chambers. As the steam condensed in one chamber, it acted as a suction pump, while in the other chamber, steam was introduced under pressure and so it acted as a force pump. At the end of every stroke, a ball valve consisting of a small brass ball moved slightly, causing the two chambers to swap functions from suction-pump to force-pump and vice versa. The result was that the water was first suction pumped and then force pumped. A good explanation can be found in the 1901 article referenced below: The operation of the pulsometer is as follows: The ball being at the entrance of the left-hand chamber, and the right-hand being full of water, steam enters, pressing on the surface of the water, and forcing it out through the discharge passage. A rapid condensation of steam occurs from contact with the water and with the walls of the chamber, previously cooled by the water. When the water level has reached the horizontal edge of the discharge passage, a large volume of steam suddenly escapes and is at once condensed by the relatively cold water between the chamber and the discharge valve. The pressure in the chamber quickly decreases; it cannot be sustained by steam from the boiler, for, in accordance with the inventor's first specifications, the steam pipe is small. If now the pressure in the left chamber is equal, or nearly equal, to that in the right, friction caused by the rapid flow of steam past the ball will draw the ball over and close the right-hand chamber. Cut off from further supply, the steam, in contact with water, begins to condense; a jet of cold water from the discharge pipe spurts up through the injection tube, and by breaking into spray against the side of the steam space, completes the condensation. The partial vacuum produced brings water through the suction valve to fill the chamber; but at the same time the air valve admits a little air, which passes up ahead of the water and forms an elastic cushion to prevent the water from striking violently against the steam ball. The air chamber is for the purpose of preventing water-hammer in the suction pipe. Advantages The pump ran automatically without attendance. It was praised for its "extreme simplicity of construction, operation, compact form, high efficiency, economy, durability, and adaptability". Later designs were improved to enhance efficiency and to make the machine more accessible for inspection and repairs, thus reducing maintenance costs. Detailed analysis In the January 1901 issue of Technology Quarterly and Proceedings of the Society of Arts, an article by Joseph C. Riley appeared, describing key operational details and giving a technical evaluation of the pulsometer pump's performance. Riley noted that although somewhat inefficient, the pulsometer's simplicity and robust construction made it well suited to pumping "thick liquids or semi-fluids, such as heavy syrups, or even liquid mud".
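To make the alternating two-chamber cycle concrete, the following toy sketch models the role swap at each stroke; the stroke volume, chamber labels, and function name are invented for illustration and are not taken from the 1901 article:

```python
# Illustrative toy model of the pulsometer's alternating cycle (hypothetical
# parameters; not from the original sources). Each chamber alternates between
# a "discharging" phase (steam pressure forces water out) and a "filling"
# phase (condensation creates a partial vacuum that draws water in).

STROKE_VOLUME = 10.0  # litres moved per stroke (assumed value)

def pulsometer_cycle(strokes: int) -> float:
    """Return total litres pumped after a given number of strokes."""
    chambers = {"left": "filling", "right": "discharging"}
    pumped = 0.0
    for _ in range(strokes):
        pumped += STROKE_VOLUME  # the discharging chamber expels its charge
        # The ball valve shifts and the chambers swap roles.
        for name, phase in chambers.items():
            chambers[name] = "filling" if phase == "discharging" else "discharging"
    return pumped

print(pulsometer_cycle(60))  # 600.0 litres after 60 strokes
```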
Pulsometer Engineering Company Limited Pulsometer Engineering Company Limited was founded in Britain in 1875 after a British engineer bought the patent rights of the pulsometer pump from Charles Henry Hall. In 1901 the company moved from London to Reading, Berkshire. In 1961 Pulsometer merged with Sigmund Pumps of Gateshead to form Sigmund Pulsometer Pumps. SPP Pumps Ltd became one of the largest pump companies in Europe. SPP Pumps Ltd is now part of Kirloskar Brothers Ltd. References Kirloskar Brothers Limited Pumps Steam power
Pulsometer pump
[ "Physics", "Chemistry" ]
823
[ "Pumps", "Physical quantities", "Turbomachinery", "Steam power", "Physical systems", "Power (physics)", "Hydraulics" ]
11,890,255
https://en.wikipedia.org/wiki/Palaeonisciformes
The Palaeonisciformes, commonly known as "palaeoniscoids" (also spelled "paleoniscoid", or alternatively "paleoniscids"), are an extinct grouping of primitive ray-finned fish (Actinopterygii), spanning from the Silurian/Devonian to the Cretaceous. They are generally considered paraphyletic, but their exact relationships to living ray-finned fish are uncertain. While some and perhaps most palaeoniscoids likely belong to the stem-group of Actinopterygii, it has been suggested that some may belong to the crown group, with some of these possibly related to Cladistia (containing bichirs) and/or Chondrostei (which contains sturgeons and paddlefish). Many palaeoniscoids share a conservative body shape and a similar arrangement of skull bones, though palaeoniscoids as a whole exhibit considerable diversity in body shape. Historic background The systematics of fossil and extant fishes has puzzled ichthyologists since the time of Louis Agassiz, who first grouped all Palaeozoic ray-finned fishes together with Chondrostei (sturgeons, paddlefishes), gars, lungfishes, and acanthodians in his Ganoidei. Johannes Müller later proposed to divide actinopterygians into three groups: Chondrostei, Holostei, and Teleostei. Later, Edward Drinker Cope included these three groups within Actinopteri. The same classification is also used today, though the definitions of these groups have changed significantly over the years. The sister group to Actinopteri is the Cladistia, which includes Polypterus (bichirs), Erpetoichthys and their fossil relatives. Together they are grouped as Actinopterygii. A few additional classification schemes were proposed over the years. Lev Berg erected the superorder Palaeonisci, in which he included early actinopterygians that belonged to neither Chondrostei nor Polypteri (Cladistia). Mostly following Berg, Jean-Pierre Lehman grouped the Actinopterygii into 26 orders, among others the Palaeonisciformes with the two suborders Palaeoniscoidei and Platysomoidei. Numerous genera of early actinopterygians have been referred to either Palaeonisciformes or to one of its suborders based on superficial resemblance with either Palaeoniscum (Palaeoniscoidei) or Platysomus (Platysomoidei), especially during the early and middle parts of the 20th century. Palaeonisciformes, Palaeoniscoidei, and Platysomoidei have therefore become wastebasket taxa. They are not natural groups, but instead paraphyletic assemblages of the early members of several ray-finned fish lineages. Palaeoniscoidei have traditionally encompassed most Paleozoic actinopterygians, except those that exhibit atypical body forms (such as the deep-bodied Platysomoidei) or those assigned securely to any of the living groups of ray-finned fishes. The same can also be said about the family Palaeoniscidae sensu lato, to which several genera not closely related to Palaeoniscum have been referred in the past. The grouping of "palaeonisciforms" was based largely on shared plesiomorphic features, such as the forward position of the eye, the large gape, or the presence of rhombic scales. However, such symplesiomorphies are not informative with regard to phylogeny, but rather an indication of common ancestry. In modern biology, taxonomists group taxa based on shared apomorphies (synapomorphies) in order to detect monophyletic groups (natural groups). They use computer software (e.g., PAUP) to determine the most likely evolutionary relationships between taxa, thereby putting previous hypotheses of such relationships to the test.
As a consequence, many genera have been subsequently removed from Palaeonisciformes and referred to distinct orders (e.g., Saurichthyiformes). The term Palaeonisciformes has mostly disappeared from the modern literature or is nowadays only used to refer to the "primitive" morphology of a taxon (e.g., "palaeonisciform skull shape" or "palaeoniscoid body shape"). In order to make the Palaeonisciformes, Palaeoniscoidei or Palaeoniscidae monophyletic, these terms should only be used in a strict sense, i.e., when referring to the clade of actinopterygians that includes Palaeoniscum and the taxa closely related to it. A monophyletic clade including several taxa classically referred to the Palaeonisciformes (e.g., Aesopichthys, Birgeria, Boreosomus, Canobius, Pteronisculus, Rhadinichthys) was recovered in the cladistic analysis by Lund et al. This clade, coined Palaeoniscimorpha, is also used in subsequent publications. Recent cladistic analyses also recovered clades containing several genera that have historically been grouped within Palaeonisciformes, while excluding others. Due to the delicate nature of fossils of ray-finned fishes and the incomplete knowledge of several taxa (especially with regard to the internal cranial anatomy), there is still no consensus about the evolutionary relationships of several early actinopterygians previously grouped within Palaeonisciformes. Classification The following list includes species that have been referred to Palaeonisciformes (or Palaeoniscidae, respectively), usually because of superficial resemblance with Palaeoniscum freieslebeni. Many of these species are poorly known and have never been included in any cladistic analysis. Their inclusion in Palaeonisciformes (or Palaeoniscidae) is in most cases doubtful and requires confirmation by cladistic studies. Which taxa should be included in Palaeonisciformes sensu stricto (or Palaeoniscidae sensu stricto) and which ones should be moved to other orders or families, respectively, is a matter of ongoing research. Order †Palaeonisciformes Hay, 1902 sensu stricto [Palaeoniscida Moy-Thomas & Miles, 1971] Family †Palaeoniscidae Vogt, 1852 Genus ?†Agecephalichthys Wade, 1935 Species †Agecephalichthys granulatus Wade, 1935 Genus ?†Atherstonia Woodward, 1889 [Broometta Chabakov, 1927] Species †Atherstonia scutata Woodward, 1889 [Atherstonia cairncrossi Broom, 1913; Amblypterus capensis Broom, 1913; Broometta cairncrossi Chabakov, 1927] Species †Atherstonia minor Woodward, 1893 Genus ?†Cryphaeiolepis Traquair, 1881 Species †Cryphaeiolepis scutata Traquair, 1881 Genus ?†Cteniolepidotrichia Poplin & Su, 1992 Species †Cteniolepidotrichia turfanensis Poplin & Su, 1992 Genus †Dicellopyge Brough, 1931 Species †Dicellopyge macrodentata Brough, 1931 Species †Dicellopyge lissocephalus Brough, 1931 Genus ?†Duwaichthys Liu et al., 1990 Species †Duwaichthys mirabilis Liu et al., 1990 Genus ?†Ferganiscus Sytchevskaya & Yakolev, 1999 Species †Ferganiscus osteolepis Sytchevskaya & Yakolev, 1999 Genus †Gyrolepis Agassiz, 1833 non Kade, 1858 Species †G. albertii Agassiz, 1833 Species †G. gigantea Agassiz, 1833 Species †G. maxima Agassiz, 1833 Species †G. quenstedti Dames, 1888 Species †G. tenuistriata Agassiz, 1833 Genus †Gyrolepidoides Cabrera, 1944 Species †G. creyanus Schaeffer, 1955 Species †G. cuyanus Cabrera, 1944 Species †G. multistriatus Rusconi, 1948 Genus ?†Palaeoniscinotus Rohon, 1890 Species †P.
czekanowskii Rohon, 1890 Genus †Palaeoniscum de Blainville, 1818 [Palaeoniscus Agassiz, 1833 non Von Meyer, 1858; Palaeoniscas Rzehak, 1881; Eupalaeoniscus Rzehak, 1881; Palaeomyzon Weigelt, 1930; Geomichthys Sauvage, 1888] Species †P. angustum (Rzehak, 1881) [Palaeoniscas angustus Rzehak, 1881] Species †P. antipodeum (Egerton, 1864) [Palaeoniscus antipodeus Egerton, 1864] Species †P. antiquum Williams, 1886 Species †P. arenaceum Berger, 1832 Species †P. capense (Bloom, 1913) [Palaeoniscus capensis Bloom, 1913] Species †P. comtum (Agassiz, 1833) [Palaeoniscus comtus Agassiz, 1833] Species †P. daedalium Yankevich & Minich, 1998 Species †P. devonicum Clarke, 1885 Species †P. elegans (Sedgwick, 1829) [Palaeoniscus elegans Sedgwick, 1829] Species †P. freieslebeni de Blainville, 1818 [Eupalaeoniscus freieslebeni (de Blainville, 1818); Palaeoniscus freieslebeni (de Blainville, 1818)] Species †P. hassiae (Jaekel, 1898) [Galeocerdo contortus hassiae Jaekel, 1898; Palaeomyzon hassiae (Jaekel, 1898)] Species †P. kasanense Geinitz & Vetter, 1880 Species †P. katholitzkianum (Rzehak, 1881) [Palaeoniscas katholitzkianus Rzehak, 1881] Species †P. landrioti (le Sauvage, 1890) [Palaeoniscus landrioti le Sauvage, 1890] Species †P. longissimum (Agassiz, 1833) [Palaeoniscus longissimus Agassiz, 1833] Species †P. macrophthalmum (McCoy, 1855) [Palaeoniscus macrophthalmus McCoy, 1855] Species †P. magnum (Woodward, 1937) [Palaeoniscus magnus Woodward, 1937] Species †P. moravicum (Rzehak, 1881) [Palaeoniscas moravicus Rzehak, 1881] Species †P. promtu (Rzehak, 1881) [Palaeoniscas promtus Rzehak, 1881] Species †P. reticulatum Williams, 1886 Species †P. scutigerum Newberry, 1868 Species †P. vratislavensis (Agassiz, 1833) [Palaeoniscus vratislavensis Agassiz, 1833] Genus †Palaeothrissum de Blainville, 1818 Species †P. elegans Sedgwick, 1829 Species †P. macrocephalum de Blainville, 1818 Species †P. magnum de Blainville, 1818 Genus ?†Shuniscus Su, 1983 Species †Shuniscus longianalis Su, 1983 Genus ?†Suchonichthys Minich, 2001 Species †Suchonichthys molini Minich, 2001 Genus ?†Trachelacanthus Fischer De Waldheim, 1850 Species †Trachelacanthus stschurovskii Fischer De Waldheim, 1850 Genus ?†Triassodus Su, 1984 Species †Triassodus yanchangensis Su, 1984 Genus ?†Turfania Liu & Martínez, 1973 Species †T. taoshuyuanensis Liu & Martínez, 1973 Species †T. varta Wang, 1979 Genus ?†Turgoniscus Jakovlev, 1968 Species †Turgoniscus reissi Jakovlev, 1968 Genus ?†Weixiniscus Su & Dezao, 1994 Species †Weixiniscus microlepis Su & Dezao, 1994 Genus ?†Xingshikous Liu, 1988 Species †Xingshikous xishanensis Liu, 1988 Genus ?†Yaomoshania Poplin et al., 1991 Species †Yaomoshania minutosquama Poplin et al., 1991 Brachydegma Other families attributed to Palaeonisciformes This list includes families that at one time or another were placed in the order Palaeonisciformes. The species included in these families are often poorly known, and a close relationship with the family Palaeoniscidae is therefore doubtful unless confirmed by cladistic analyses. These families are therefore better treated as Actinopterygii incertae sedis for the time being. The evolutionary relationships of early actinopterygians are a matter of ongoing studies. †Acropholidae Kazantseva-Selezneva, 1977 †Atherstoniidae Gardiner, 1969 †Brazilichthyidae Cox & Hutchinson, 1991 †Centrolepididae Gardiner, 1960 †Coccolepididae Berg, 1940 corrig. †Commentryidae Gardiner, 1963 †Cryphiolepididae Moy-Thomas, 1939 corrig.
†Dwykiidae Gardiner, 1969 †Holuridae Moy-Thomas, 1939 †Igornichthyidae Heyler, 1977 †Irajapintoseidae Beltan, 1978 †Monesedeiphidae Beltan, 1989 †Moythomasiidae Kazantseva, 1971 †Rhabdolepididae Gardiner, 1963 †Stegotrachelidae Gardiner, 1963 †Thrissonotidae Berg, 1955 †Tienshaniscidae Lu & Chen, 2010 †Turseodontidae Bock, 1959 corrig. †Uighuroniscidae Jin, 1996 †Urosthenidae Woodward, 1931 Timeline of genera Andreolepis hedei, previously grouped within Palaeonisciformes, has proven so far to be the earliest-known actinopterygian, living around 420 million years ago (Late Silurian) in Russia, Sweden, Estonia, and Latvia. Actinopterygians underwent an extensive diversification during the Carboniferous, after the end-Devonian Hangenberg extinction. References External links Palaeonisciformes at University of Bristol Prehistoric ray-finned fish orders Paraphyletic groups
Palaeonisciformes
[ "Biology" ]
3,147
[ "Phylogenetics", "Paraphyletic groups" ]
11,890,372
https://en.wikipedia.org/wiki/SahysMod
SahysMod is a computer program for the prediction of the salinity of soil moisture, groundwater and drainage water, the depth of the watertable, and the drain discharge in irrigated agricultural lands, using different hydrogeologic and aquifer conditions, varying water management options, including the use of ground water for irrigation, and several crop rotation schedules, whereby the spatial variations are accounted for through a network of polygons. Rationale There is a need for a computer program that is easier to operate and that requires a simpler data structure than most currently available models. Therefore, the SahysMod program was designed with relative simplicity of operation in mind, to facilitate its use by field technicians, engineers and project planners rather than specialized geo-hydrologists. It aims at using input data that are generally available, or that can be estimated with reasonable accuracy, or that can be measured with relative ease. Although the calculations are done numerically and have to be repeated many times, the final results can be checked by hand using the formulas in this manual. SahysMod's objective is to predict the long-term hydro-salinity in terms of general trends, not to arrive at exact predictions of how, for example, the situation would be on the first of April in ten years from now. Further, SahysMod gives the option of the re-use of drainage and well water (e.g. for irrigation) and it can account for farmers' responses to waterlogging, soil salinity, water scarcity and over-pumping from the aquifer. Also it offers the possibility to introduce subsurface drainage systems at varying depths and with varying capacities so that they can be optimized. Other features of SahysMod are found in the next section. Methods Calculation of aquifer conditions in polygons The model calculates the ground water levels and the incoming and outgoing ground water flows between the polygons by a numerical solution of the well-known Boussinesq equation. The levels and flows influence each other mutually. The ground water situation is further determined by the vertical groundwater recharge that is calculated from the agronomic water balance, which in turn depends on the level of the ground water. When semi-confined aquifers are present, the resistance to vertical flow in the slowly permeable top-layer and the overpressure in the aquifer, if any, are taken into account. Hydraulic boundary conditions are given as hydraulic heads in the external nodes in combination with the hydraulic conductivity between internal and external nodes. If one wishes to impose a zero flow condition at the external nodes, the conductivity can be set at zero. Further, aquifer flow conditions can be given for the internal nodes. These are required when a geological fault is present at the bottom of the aquifer or when flow occurs between the main aquifer and a deeper aquifer separated by a semi-confining layer. The depth of the water table, the rainfall and salt concentrations of the deeper layers are assumed to be the same over the whole polygon. Other parameters can vary within the polygons according to type of crops and cropping rotation schedule. Seasonal approach The model is based on seasonal input data and returns seasonal outputs. The number of seasons per year can be chosen between a minimum of one and a maximum of four. One can distinguish for example dry, wet, cold, hot, irrigation or fallow seasons.
Reasons for not using smaller input/output periods are: short-term (e.g., daily) inputs would require much information, which, in large areas, may not be readily available; short-term outputs would lead to immense output files, which would be difficult to manage and interpret; this model is especially developed to predict long-term trends, and predictions for the future are more reliably made on a seasonal (long-term) than on a daily (short-term) basis, due to the high variability of short-term data; though the precision of the predictions for the future may be limited, a lot is gained when the trend is sufficiently clear. For example, it need not be a major constraint to the design of appropriate soil salinity control measures when a certain salinity level, predicted by SahysMod to occur after 20 years, will in reality occur after 15 or 25 years. Computational time steps Many water balance factors depend on the level of the water table, which again depends on some of the water-balance factors. Due to these mutual influences there can be non-linear changes throughout the season. Therefore, the computer program performs daily calculations. For this purpose, the seasonal water-balance factors given with the input are reduced automatically to daily values. The calculated seasonal water-balance factors, as given in the output, are obtained by summations of the daily calculated values. Groundwater levels and soil salinity (the state variables) at the end of the season are found by accumulating the daily changes of water and salt storage. In some cases the program may detect that the time step must be taken as less than 1 day for better accuracy. The necessary adjustments are made automatically. Data requirements Polygonal network The model permits a maximum of 240 internal and 120 external polygons with a minimum of 3 and a maximum of 6 sides each. The subdivision of the area into polygons, based on nodal points with known coordinates, should be governed by the characteristics of the distribution of the cropping, irrigation, drainage and groundwater characteristics over the study area. The nodes must be numbered, which can be done at will. With an index one indicates whether the node is internal or external. Nodes can be added and removed at will or changed from internal to external or vice versa. Through another index one indicates whether the internal nodes have an unconfined or semi-confined aquifer. This can also be changed at will. Nodal network relations are to be given indicating the neighboring polygon numbers of each node. The program then calculates the surface area of each polygon, the distance between the nodes and the length of the sides between them using the Thiessen principle. The hydraulic conductivity can vary for each side of the polygons. The depth of the water table, the rainfall and salt concentrations of the deeper layers are assumed to be the same over the whole polygon. Other parameters can vary within the polygons according to type of crops and cropping rotation schedule. Hydrological data The method uses seasonal water balance components as input data. These are related to the surface hydrology (like rainfall, potential evaporation, irrigation, use of drain and well water for irrigation, runoff), and the aquifer hydrology (e.g., pumping from wells). The other water balance components (like actual evaporation, downward percolation, upward capillary rise, subsurface drainage, groundwater flow) are given as output.
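To make the seasonal-to-daily reduction concrete, here is a minimal sketch. All variable names, the linear storage response, and the parameter values are illustrative assumptions; the actual SahysMod source implements a much fuller balance:

```python
# Illustrative sketch of SahysMod-style time stepping (hypothetical variable
# names and a simplified storage response; not the actual model code).
# Seasonal water-balance inputs are reduced to daily values, the state
# variable is updated day by day, and the daily results are summed back
# into a seasonal output total.

def run_season(days, rain_season, irrigation_season, evap_season,
               storage0, storage_coefficient=0.1):
    rain_d = rain_season / days          # seasonal inputs reduced to daily values
    irr_d = irrigation_season / days
    evap_d = evap_season / days
    storage = storage0
    percolation_total = 0.0
    for _ in range(days):
        # Daily water balance: net recharge minus an assumed linear storage loss.
        recharge = rain_d + irr_d - evap_d
        percolation = storage_coefficient * max(storage, 0.0)
        storage += recharge - percolation
        percolation_total += percolation  # accumulate daily values into the seasonal output
    return storage, percolation_total

final_storage, seasonal_percolation = run_season(
    days=180, rain_season=300.0, irrigation_season=600.0,
    evap_season=700.0, storage0=50.0)
print(final_storage, seasonal_percolation)
```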
The quantity of drainage water, as output, is determined by two drainage intensity factors for drainage above and below drain level respectively (to be given with the input data) and the height of the water table above the given drain level. This height results from the computed water balance. Further, a drainage reduction factor can be applied to simulate a limited operation of the drainage system. Variation of the drainage intensity factors and the drainage reduction factor gives the opportunity to simulate the effect of different drainage options. To obtain accuracy in the computations of the ground water flow, the actual evaporation and the capillary rise, the computer calculations are done on a daily basis. For this purpose, the seasonal hydrological data are divided by the number of days per season to obtain daily values. The daily values are added to yield seasonal values. Cropping patterns/rotations The input data on irrigation, evaporation, and surface runoff are to be specified per season for three kinds of agricultural practices, which can be chosen at the discretion of the user: A: irrigated land with crops of group A B: irrigated land with crops of group B U: non-irrigated land with rain-fed crops or fallow land The groups, expressed in fractions of the total area, may consist of combinations of crops or just of a single kind of crop. For example, as the A-type crops one may specify the lightly irrigated cultures, and as the B type the more heavily irrigated ones, such as sugarcane and rice. But one can also take A as rice and B as sugar cane, or perhaps trees and orchards. A, B and/or U crops can be taken differently in different seasons, e.g. A=wheat plus barley in winter and A=maize in summer while B=vegetables in winter and B=cotton in summer. Non-irrigated land can be specified in two ways: (1) as U = 1−A−B and (2) as A and/or B with zero irrigation. A combination can also be made. Further, a specification must be given of the seasonal rotation of the different land uses over the total area, e.g. full rotation, no rotation at all, or incomplete rotation. This is specified with a rotation index. The rotations are taken over the seasons within the year. To obtain rotations over the years it is advisable to introduce annual input changes, as explained below. When a fraction A1, B1 and/or U1 differs from the fraction A2, B2 and/or U2 in another season, because the irrigation regime changes in the different seasons, the program will detect that a certain rotation occurs. If one wishes to avoid this, one may specify the same fractions in all seasons (A2=A1, B2=B1, U2=U1) but the crops and irrigation quantities may be different and may need to be proportionally adjusted. One may even specify irrigated land (A or B) with zero irrigation, which is the same as un-irrigated land (U). Cropping rotation schedules vary widely in different parts of the world. Creative combinations of area fractions, rotation indexes, irrigation quantities and annual input changes can accommodate many types of agricultural practices. Variation of the area fractions and/or the rotational schedule gives the opportunity to simulate the effect of different agricultural practices on the water and salt balance. Soil strata, type of aquifer SahysMod distinguishes four different reservoirs, of which three are in the soil profile: s: a surface reservoir, r: an upper (shallow) soil reservoir or root zone, x: an intermediate soil reservoir or transition zone, q: a deep reservoir or main aquifer.
The upper soil reservoir is defined by the soil depth, from which water can evaporate or be taken up by plant roots. It can be taken equal to the root zone. It can be saturated, unsaturated, or partly saturated, depending on the water balance. All water movements in this zone are vertical, either upward or downward, depending on the water balance. (In a future version of Sahysmod, the upper soil reservoir may be divided into two equal parts to detect the trend in the vertical salinity distribution.) The transition zone can also be saturated, unsaturated or partly saturated. All flows in this zone are horizontal, except the flow to subsurface drains, which is radial. If a horizontal subsurface drainage system is present, this must be placed in the transition zone, which is then divided into two parts: an upper transition zone (above drain level) and a lower transition zone (below drain level). If one wishes to distinguish an upper and lower part of the transition zone in the absence of a subsurface drainage system, one may specify in the input data a drainage system with zero intensity. The aquifer has mainly horizontal flow. Pumped wells, if present, receive their water from the aquifer only. The flow in the aquifer is determined by the spatially varying depths of the aquifer, the levels of the water table, and the hydraulic conductivity. SahysMod permits the introduction of phreatic (unconfined) and semi-confined aquifers. The latter may develop a hydraulic over- or underpressure below the slowly permeable top-layer (aquitard). Agricultural water balances The agricultural water balances are calculated for each soil reservoir separately as shown in the article Hydrology (agriculture). The excess water leaving one reservoir is converted into incoming water for the next reservoir. The three soil reservoirs can be assigned different thicknesses and storage coefficients, to be given as input data. When, in a particular situation, the transition zone or the aquifer is not present, they must be given a minimum thickness of 0.1 m. The depth of the water table at the end of the previous time step, calculated from the water balances, is assumed to be the same within each polygon. If this assumption is not acceptable, the area must be divided into a larger number of polygons. Under certain conditions, the height of the water table influences the water-balance components. For example, a rise of the water table towards the soil surface may lead to an increase of capillary rise, actual evaporation, and subsurface drainage, or a decrease of percolation losses. This, in turn, leads to a change of the water-balance, which again influences the height of the water table, etc. This chain of reactions is one of the reasons why Sahysmod has been developed into a computer program, in which the computations are made day by day to account for these interactions with a sufficient degree of accuracy. Drains, wells, and re-use The sub-surface drainage can be accomplished through drains or pumped wells. The subsurface drains, if any, are characterized by drain depth and drainage capacity. The drains are located in the transition zone. The subsurface drainage facility can be applied to natural or artificial drainage systems. The functioning of an artificial drainage system can be regulated through a drainage control factor. By installing a drainage system with zero capacity one obtains the opportunity to have separate water and salt balances in the transition zone above and below drain level.
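As an illustration of how a drain-discharge relation of the kind described earlier (two drainage intensity factors plus a drainage reduction factor) might look, here is a hedged sketch; the quadratic-plus-linear form, the parameter names QH1 and QH2, and all values are assumptions for demonstration, not SahysMod's published equations:

```python
# Hedged sketch of a drain-discharge relation (illustrative assumptions only,
# not SahysMod's actual equations). QH1 and QH2 play the role of the two
# drainage intensity factors for flow above and below drain level; `control`
# mimics the drainage reduction factor used to simulate limited operation.

def drain_discharge(watertable_depth, drain_depth, qh1=0.5, qh2=0.1, control=1.0):
    """Discharge (m/day) as a function of the water table height above the drains."""
    h = drain_depth - watertable_depth   # height of the water table above drain level (m)
    if h <= 0.0:                         # water table below the drains: no discharge
        return 0.0
    return control * (qh1 * h * h + qh2 * h)

print(drain_discharge(watertable_depth=0.8, drain_depth=1.2))  # water table 0.4 m above drains
```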
The pumped wells, if any, are located in the aquifer. Their functioning is characterized by the well discharge. The drain and well water can be used for irrigation through a (re)use factor. This may affect the water and salt balance and the irrigation efficiency or sufficiency. Salt balances The salt balances are calculated for each soil reservoir separately. They are based on their water balances, using the salt concentrations of the incoming and outgoing water. Some concentrations must be given as input data, like the initial salt concentrations of the water in the different soil reservoirs, of the irrigation water and of the incoming groundwater in the aquifer. The concentrations are expressed in terms of electric conductivity (EC in dS/m). When the concentrations are known in terms of g salt/L water, the rule of thumb 1 g/L -> 1.7 dS/m can be used. Usually, salt concentrations of the soil are expressed in ECe, the electric conductivity of an extract of a saturated soil paste. In Sahysmod, the salt concentration is expressed as the EC of the soil moisture when saturated under field conditions. As a rule, one can use the conversion rate EC : ECe = 2 : 1. The principles used correspond to those described in the article soil salinity control. Salt concentrations of outgoing water (either from one reservoir into the other or by subsurface drainage) are computed on the basis of salt balances, using different leaching or salt mixing efficiencies to be given with the input data. The effects of different leaching efficiencies can be simulated by varying their input value. If drain or well water is used for irrigation, the method computes the salt concentration of the mixed irrigation water in the course of time and the subsequent effect on the soil and ground water salinity, which again influences the salt concentration of the drain and well water. By varying the fraction of used drain or well water (through the input), the long-term effect of different fractions can be simulated. The dissolution of solid soil minerals or the chemical precipitation of poorly soluble salts is not included in the computation method. However, to some extent it can be accounted for through the input data, e.g. by increasing or decreasing the salt concentration of the irrigation water or of the incoming water in the aquifer. In a future version, the precipitation of gypsum may be introduced. Farmers' responses If required, farmers' responses to waterlogging and soil salinity can be automatically accounted for. The method can gradually decrease: the amount of irrigation water applied when the water table becomes shallower, depending on the kind of crop (paddy rice and non-rice); the fraction of irrigated land when the available irrigation water is scarce; the fraction of irrigated land when the soil salinity increases (for this purpose, the salinity is given a stochastic interpretation); and the groundwater abstraction by pumping from wells when the water table drops. The farmers' responses influence the water and salt balances, which, in turn, slow down the process of water logging and salinization. Ultimately a new equilibrium situation will arise. The user can also introduce farmers' responses by manually changing the relevant input data. Perhaps it will be useful to study the automatic farmers' responses and their effects first, and thereafter decide what the farmers' responses should be in the view of the user.
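The unit conversions and mixing logic from the salt balance section above can be illustrated with a short sketch. The function names are invented, and the single leaching-efficiency factor applied to a complete-mixing balance is a simplification of the manual's treatment, not the actual SahysMod code:

```python
# Sketch of the salt bookkeeping described above (complete mixing with one
# leaching-efficiency factor; a simplification of the manual's method).

def gl_to_dsm(grams_per_litre):
    """Rule of thumb from the text: 1 g salt/L water -> 1.7 dS/m."""
    return 1.7 * grams_per_litre

def ec_to_ece(ec_field):
    """Text's rule EC : ECe = 2 : 1, i.e. ECe is half the field-moisture EC."""
    return ec_field / 2.0

def mix_reservoir(ec_old, water_old, ec_in, water_in, leaching_efficiency=0.8):
    """EC of a reservoir after inflow, with partial mixing of the incoming salt."""
    salt = ec_old * water_old + leaching_efficiency * ec_in * water_in
    water = water_old + water_in
    return salt / water

print(gl_to_dsm(2.0))                     # 3.4 dS/m
print(ec_to_ece(8.0))                     # 4.0 dS/m
print(mix_reservoir(4.0, 300.0, 1.0, 100.0))  # 3.2 dS/m after dilution
```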
Annual input changes The program can run with fixed input data for the number of years determined by the user. This option can be used to predict future developments based on long-term average input values, e.g. rainfall, as it will be difficult to assess the future values of the input data year by year. The program also offers the possibility to follow historic records with annually changing input values (e.g. rainfall, irrigation, cropping rotations); in that case the calculations must be made year by year. If this possibility is chosen, the program creates a transfer file by which the final conditions of the previous year (e.g. water table and salinity) are automatically used as the initial conditions for the subsequent period. This facility also makes it possible to use various generated rainfall sequences drawn randomly from a known rainfall probability distribution and to obtain a stochastic prediction of the resulting output parameters. Some input parameters should not be changed, like the nodal network relations, the system geometry, the thickness of the soil layers, and the total porosity; otherwise illogical jumps occur in the water and salt balances. These parameters are also stored in the transfer file, so that any impermissible change is overruled by the transfer data. In some cases of incorrect changes, the program will stop and request the user to adjust the input. Output data The output is given for each season of any year during any number of years, as specified with the input data. The output data comprise hydrological and salinity aspects. As the soil salinity is very variable from place to place, SahysMod includes frequency distributions in the output; such distributions can be made with the CumFreq program. The output data are filed in the form of tables that can be inspected directly through the user menu, which calls up selected groups of data either for a certain polygon over time or for a certain season over the polygons. The model includes mapping facilities for output data. Also, the program has the facility to store the selected data in a spreadsheet format for further analysis and for import into a GIS program. Different users may wish to establish different cause-effect relationships. The program offers only a limited number of standard graphics, as it is not possible to foresee all the different uses that may be made. This is the reason why the possibility for further analysis through spreadsheet programs was created. Although the computations need many iterations, all the results can be checked by hand using the equations presented in the manual. See also DPHM-RS References External links and download location Free download location of SahysMod software Soil chemistry Soil physics Environmental soil science Environmental chemistry Agricultural soil science Hydrogeology Hydrology models Irrigation Drainage Land management Land reclamation Scientific simulation software
SahysMod
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
4,208
[ "Hydrology", "Applied and interdisciplinary physics", "Biological models", "Environmental chemistry", "Soil physics", "Soil chemistry", "Hydrology models", "nan", "Environmental soil science", "Environmental modelling", "Hydrogeology" ]
11,890,691
https://en.wikipedia.org/wiki/Stewart%E2%80%93Tolman%20effect
The Stewart–Tolman effect is a phenomenon in electrodynamics caused by the finite mass of electrons in conducting metal, or, more generally, the finite mass of charge carriers in an electrical conductor. It is named after T. Dale Stewart and Richard C. Tolman, two American physicists who carried out their experimental work in the 1910s. This eponym appears to have been first used by Lev Landau. In a conducting body undergoing accelerating motion, inertia causes the electrons in the body to "lag" behind the overall motion. In the case of linear acceleration, negative charge accumulates at the rear end of the body, while for rotation the negative charge accumulates at the outer rim. The accumulation of charges can be measured by a galvanometer. This effect is proportional to the mass of the charge carriers. It is much more significant in electrolyte conductors than in metals, because ions in the former are 10³–10⁴ times more massive than electrons in the latter. Notes External links R.C. Tolman, T.D. Stewart: The electromotive force produced by the acceleration of metals. The original article in Physical Review from 1916. Electrodynamics
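The proportionality to carrier mass can be made explicit with the standard textbook estimate (a sketch, not taken from the 1916 paper):

```latex
% In the frame of a conductor with acceleration a, the inertial force -ma
% on a carrier of mass m and charge q acts like an effective electric field,
\[
  E_{\mathrm{eff}} = -\frac{m}{q}\,a ,
\]
% so a straight conductor of length L accelerated along its axis develops
% an electromotive force proportional to the carrier mass:
\[
  \mathcal{E} = E_{\mathrm{eff}}\,L = -\frac{m a L}{q} .
\]
```

Since the EMF scales with m/q, the far heavier ions in electrolytes give a correspondingly larger effect than electrons in metals, as noted above.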
Stewart–Tolman effect
[ "Materials_science", "Mathematics" ]
240
[ "Electrodynamics", "Materials science stubs", "Electromagnetism stubs", "Dynamical systems" ]
11,890,785
https://en.wikipedia.org/wiki/Summer%20solstice
The summer solstice or estival solstice occurs when one of Earth's poles has its maximum tilt toward the Sun. It happens twice yearly, once in each hemisphere (Northern and Southern). The summer solstice is the day with the longest period of daylight and shortest night of the year in that hemisphere, when the sun is at its highest position in the sky. At either pole there is continuous daylight at the time of its summer solstice. The opposite event is the winter solstice. The summer solstice occurs during the hemisphere's summer. In the Northern Hemisphere, this is the June solstice (20, 21 or 22 June) and in the Southern Hemisphere, this is the December solstice (20, 21, 22 or 23 of December). Since prehistory, the summer solstice has been a significant time of year in many cultures, and has been marked by festivals and rituals. Traditionally, in temperate regions (especially Europe), the summer solstice is seen as the middle of summer and referred to as midsummer; although today in some countries and calendars it is seen as the beginning of summer. On the summer solstice, Earth's maximum axial tilt toward the Sun is 23.44°. Likewise, the Sun's declination from the celestial equator is 23.44°. In areas outside the tropics, the sun reaches its highest elevation angle at solar noon on the summer solstice. Although the summer solstice is the longest day of the year for that hemisphere, the dates of earliest sunrise and latest sunset vary by a few days. This is because Earth orbits the Sun in an ellipse, and its orbital speed varies slightly during the year. Culture There is evidence that the summer solstice has been culturally important since the Neolithic era. Many ancient monuments in Europe especially, as well as parts of the Middle East, Asia and the Americas, are aligned with the sunrise or sunset on the summer solstice (see archaeoastronomy). The significance of the summer solstice has varied among cultures, but most recognize the event in some way with holidays, festivals, and rituals around that time with themes of fertility. In the Roman Empire, the traditional date of the summer solstice was 24 June. In Germanic-speaking cultures, the time around the summer solstice is called 'midsummer'. Traditionally in northern Europe midsummer was reckoned as the night of 23–24 June, with summer beginning on May Day. The summer solstice continues to be seen as the middle of summer in many European cultures, but in some cultures or calendars it is seen as summer's beginning. In Sweden, midsummer is one of the year's major holidays when the country closes down as much as during Christmas. Observances Traditional festivals Saint John's Eve (Europe), including: Juhannus (Finland) Jaanipäev (Estonia) Jāņi (Latvia) Joninės (Lithuania) Jónsmessa (Iceland) Golowan (Cornwall) Kupala Night (Slavic peoples) Yhyakh (Yakuts) Tiregān (Iran) Xiazhi (China) Shën Gjini–Shën Gjoni, Festa e Malit/Bjeshkës, Festa e Blegtorisë, etc. (Albanians) Modern observances National Indigenous Peoples Day (Canada) Day of Private Reflection (Northern Ireland) Fremont Solstice Parade (Fremont, Seattle, Washington, United States) Santa Barbara Summer Solstice Parade (Santa Barbara, California, United States) International Yoga Day Fête de la Musique, also known as World Music Day In folk music "Oh at Ivan, oh at Kupala" (Ukr. Ой на Івана, ой на Купала) - Ukrainian folk song. "Kupalinka" - (Belar. Купалінка) - Belarusian folk song "There is a lake behind the hill" (Lith. Už kalnelio ežerėlis) - Lithuanian folk song. 
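The relation between latitude and the Sun's noon elevation stated in the introduction (the Sun's solstice declination equals the axial tilt of 23.44°) can be illustrated with a short sketch; the function is the standard spherical-astronomy approximation, ignoring atmospheric refraction, and the names and example latitude are illustrative:

```python
# Noon solar elevation on the summer solstice (standard approximation,
# ignoring refraction; function and variable names are illustrative).

TILT = 23.44  # Earth's axial tilt in degrees = Sun's declination at the solstice

def noon_elevation_deg(latitude_deg, declination_deg=TILT):
    """Sun's elevation above the horizon at solar noon, in degrees."""
    return 90.0 - abs(latitude_deg - declination_deg)

# At roughly 51.18 deg N (approximately the latitude of Stonehenge),
# the June-solstice noon Sun stands about 62.3 degrees above the horizon.
print(noon_elevation_deg(51.18))
```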
Length of the day on northern summer solstice The length of day increases from the equator towards the North Pole in the Northern Hemisphere in June (around the summer solstice there), but decreases towards the South Pole in the Southern Hemisphere at the time of the southern winter solstice. Notes References External links SummerSolstice.uk - Summer Solstice Dates, Information & Community. NeoProgrammics - Table of Northern/Southern Solstice Dates/Times From 1600–2400 December observances June observances International observances solstice
Summer solstice
[ "Astronomy" ]
980
[ "Time in astronomy", "Summer solstice" ]
11,890,889
https://en.wikipedia.org/wiki/Taste%20receptor
A taste receptor is a type of cellular receptor that facilitates the sensation of taste; a molecule that binds to such a receptor is called a tastant. When food or other substances enter the mouth, molecules interact with saliva and are bound to taste receptors in the oral cavity and other locations. Molecules which give a sensation of taste are considered "sapid". Vertebrate taste receptors are divided into two families: Type 1, sweet (TAS1R), first characterized in 2001, and Type 2, bitter (TAS2R), first characterized in 2000. In humans there are 25 known different bitter receptors, in cats there are 12, in chickens there are three, and in mice there are 35 known different bitter receptors. Visual, olfactory, "sapictive" (the perception of tastes), trigeminal (hot, cool), and mechanical senses all contribute to the perception of taste. Of these, transient receptor potential cation channel subfamily V member 1 (TRPV1) vanilloid receptors are responsible for the perception of heat from some molecules such as capsaicin, and a CMR1 receptor is responsible for the perception of cold from molecules such as menthol, eucalyptol, and icilin. Tissue distribution The gustatory system consists of taste receptor cells in taste buds. Taste buds, in turn, are contained in structures called papillae. There are three types of papillae involved in taste: fungiform papillae, foliate papillae, and circumvallate papillae. (The fourth type, filiform papillae, do not contain taste buds.) Beyond the papillae, taste receptors are also in the palate and early parts of the digestive system like the larynx and upper esophagus. There are three cranial nerves that innervate the tongue: the vagus nerve, the glossopharyngeal nerve, and the facial nerve. The glossopharyngeal nerve and the chorda tympani branch of the facial nerve innervate the TAS1R and TAS2R taste receptors. Besides the taste receptors on the tongue, the gut epithelium is also equipped with a subtle chemosensory system that communicates the sensory information to several effector systems involved in the regulation of appetite, immune responses, and gastrointestinal motility. In 2010, researchers found bitter receptors in lung tissue, which cause airways to relax when a bitter substance is encountered. They believe this mechanism is evolutionarily adaptive because it helps clear lung infections, but could also be exploited to treat asthma and chronic obstructive pulmonary disease. The sweet taste receptor (T1R2/T1R3) can be found in various extra-oral organs throughout the human body such as the brain, heart, kidney, bladder, nasal respiratory epithelium and more. In most of the organs, the receptor function is unclear. The sweet taste receptor found in the gut and in the pancreas was found to play an important role in the metabolic regulation of the gut carbohydrate-sensing process and in insulin secretion. This receptor is also found in the bladder, suggesting that consumption of artificial sweeteners, which activate this receptor, might cause excessive bladder contraction. Function Taste helps to identify toxins, maintain nutrition, and regulate appetite, immune responses, and gastrointestinal motility. Five basic tastes are recognized today: salty, sweet, bitter, sour, and umami. Salty and sour taste sensations are both detected through ion channels. Sweet, bitter, and umami tastes, however, are detected by way of G protein-coupled taste receptors. In addition, some agents can function as taste modifiers, such as miraculin or curculin for sweet, or sterubin to mask bitter.
Mechanism of action The standard bitter, sweet, or umami taste receptor is a G protein-coupled receptor with seven transmembrane domains. Ligand binding at the taste receptors activates second messenger cascades to depolarize the taste cell. Gustducin is the most common taste Gα subunit, having a major role in TAS2R bitter taste reception. Gustducin is a homologue of transducin, a G protein involved in vision transduction. Additionally, taste receptors share the use of the TRPM5 ion channel, as well as the phospholipase PLCβ2. Savory or glutamates (Umami) The TAS1R1+TAS1R3 heterodimer receptor functions as an umami receptor, responding to L-amino acid binding, especially L-glutamate. The umami taste is most frequently associated with the food additive monosodium glutamate (MSG) and can be enhanced through the binding of inosine monophosphate (IMP) and guanosine monophosphate (GMP) molecules. TAS1R1+3 expressing cells are found mostly in the fungiform papillae at the tip and edges of the tongue and in palate taste receptor cells in the roof of the mouth. These cells are shown to synapse upon the chorda tympani nerve to send their signals to the brain, although some activation of the glossopharyngeal nerve has been found. Alternative candidate umami taste receptors include splice variants of the metabotropic glutamate receptors mGluR4 and mGluR1, and the NMDA receptor. During the evolution of songbirds, the umami taste receptor has undergone structural modifications in the ligand binding site, enabling these birds to sense the sweet taste through this receptor. Sweet The TAS1R2+TAS1R3 heterodimer receptor functions as the sweet receptor by binding to a wide variety of sugars and sugar substitutes. TAS1R2+3 expressing cells are found in circumvallate papillae and foliate papillae near the back of the tongue and in palate taste receptor cells in the roof of the mouth. These cells are shown to synapse upon the chorda tympani and glossopharyngeal nerves to send their signals to the brain. The TAS1R3 homodimer also functions as a sweet receptor in much the same way as TAS1R2+3, but has decreased sensitivity to sweet substances. Natural sugars are more easily detected by the TAS1R3 receptor than sugar substitutes. This may help explain why sugar and artificial sweeteners have different tastes. Genetic polymorphisms in TAS1R3 partly explain the difference in sweet taste perception and sugar consumption between people of African American ancestry and people of European and Asian ancestries. Sensing of the sweet taste has changed throughout the evolution of different animals. Mammals sense the sweet taste by transferring the signal through the heterodimer T1R2/T1R3, the sweet taste receptor. In birds, however, the T1R2 monomer does not exist, and they sense the sweet taste through the heterodimer T1R1/T1R3, the umami taste receptor, which has gone through modifications during their evolution. A recent study showed that along the evolutionary stages of songbirds, there was a decrease in the ability to sense the umami taste and an increase in the ability to sense the sweet taste, whereas the ancestral songbird could only sense the umami taste. Researchers found a possible explanation for this phenomenon in a structural change in the ligand binding site of the umami receptor between the sweet-sensing and non-sensing songbirds. A mutation in the binding site is assumed to have occurred over time, which allowed these birds to sense the sweet taste through the umami taste receptor. 
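To make the combinatorial logic of the Type 1 receptor pairings concrete, here is a minimal illustrative sketch in Python. The mapping follows only the dimer pairings stated above; the function and variable names are invented for this example and are not part of any real taste-research library.

# Toy lookup of vertebrate taste receptor subunit combinations to taste
# qualities, following the pairings described in the text. Illustrative only.
RECEPTOR_DIMERS = {
    frozenset(["TAS1R1", "TAS1R3"]): "umami",  # responds to L-amino acids such as L-glutamate
    frozenset(["TAS1R2", "TAS1R3"]): "sweet",  # sugars and sugar substitutes
    frozenset(["TAS1R3"]): "sweet (decreased sensitivity)",  # TAS1R3 homodimer
}

def taste_quality(subunits):
    """Return the taste quality sensed by a given subunit combination."""
    key = frozenset(subunits)
    if key in RECEPTOR_DIMERS:
        return RECEPTOR_DIMERS[key]
    if any(s.startswith("TAS2R") for s in subunits):
        return "bitter"  # the TAS2R family mediates bitter taste
    return "unknown"

print(taste_quality(["TAS1R1", "TAS1R3"]))  # -> umami
print(taste_quality(["TAS2R38"]))           # -> bitter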
Bitter The TAS2R proteins function as bitter taste receptors. There are 43 human TAS2R genes, each of which (excluding the five pseudogenes) lacks introns and codes for a GPCR protein. These proteins, as opposed to TAS1R proteins, have short extracellular domains and are located in circumvallate papillae, palate, foliate papillae, and epiglottis taste buds, with reduced expression in fungiform papillae. Though it is certain that multiple TAS2Rs are expressed in one taste receptor cell, it is still debated whether mammals can distinguish between the tastes of different bitter ligands. Some overlap must occur, however, as there are far more bitter compounds than there are TAS2R genes. Common bitter ligands include cycloheximide, denatonium, PROP (6-n-propyl-2-thiouracil), PTC (phenylthiocarbamide), and β-glucopyranosides. Signal transduction of bitter stimuli is accomplished via the α-subunit of gustducin. This G protein subunit activates a taste phosphodiesterase and decreases cyclic nucleotide levels. Further steps in the transduction pathway are still unknown. The βγ-subunit of gustducin also mediates taste by activating IP3 (inositol trisphosphate) and DAG (diglyceride). These second messengers may open gated ion channels or may cause release of internal calcium. Though all TAS2Rs are located in gustducin-containing cells, knockout of gustducin does not completely abolish sensitivity to bitter compounds, suggesting a redundant mechanism for bitter tasting (unsurprising given that a bitter taste generally signals the presence of a toxin). One proposed mechanism for gustducin-independent bitter tasting is via ion channel interaction by specific bitter ligands, similar to the ion channel interaction which occurs in the tasting of sour and salty stimuli. One of the best-researched TAS2R proteins is TAS2R38, which contributes to the tasting of both PROP and PTC. It is the first taste receptor whose polymorphisms were shown to be responsible for differences in taste perception. Current studies are focused on determining other such taste phenotype-determining polymorphisms. More recent studies show that genetic polymorphisms in other bitter taste receptor genes influence bitter taste perception of caffeine, quinine, and denatonium benzoate. It has been demonstrated that bitterness receptors (TAS2R) play an important role in the innate immune system of airway (nose and sinus) ciliated epithelial tissue. This innate immune system adds an "active fortress" to the physical surface barrier of the immune system. This fixed immune system is activated by the binding of ligands to specific receptors. These natural ligands are bacterial markers; for TAS2R38, for example, they are acyl-homoserine lactones or quinolones produced by Pseudomonas aeruginosa. To defend against predators, some plants produce substances that mimic these bacterial markers. These plant mimics are interpreted by the tongue, and the brain, as bitterness. The fixed immune system receptors are identical to the bitter taste receptors, TAS2R, and bitter substances are agonists of this fixed immune system. The innate immune system uses nitric oxide and defensins, which are capable of destroying bacteria and also viruses. 
These fixed innate immune systems ("active fortresses") are known in epithelial tissues other than the upper airway (nose, sinuses, trachea, bronchi), for example in breast (mammary epithelial cells), gut, and human skin (keratinocytes). Bitter molecules, their associated bitter taste receptors, and the sequences and homology models of bitter taste receptors are available via BitterDB. Sour Historically it was thought that the sour taste was produced solely when free hydrogen ions (H+) directly depolarised taste receptors. However, specific receptors for sour taste with other methods of action are now being proposed. The HCN channels were one such proposal, as they are cyclic nucleotide-gated channels. The two ion channels now suggested to contribute to sour taste are ASIC2 and TASK-1. Salt Various receptors have also been proposed for salty tastes, along with the possible taste detection of lipids, complex carbohydrates, and water. Evidence for these receptors has been unconvincing in most mammalian studies. For example, the proposed ENaC receptor for sodium detection could only be shown to contribute to sodium taste in Drosophila. However, proteolyzed forms of ENaC have been shown to function as a human salt taste receptor. Proteolysis is the process by which a protein is cleaved. The mature form of ENaC is thought to be proteolyzed; however, the characterization of which proteolyzed forms exist in which tissues is incomplete. Proteolysis of cells created to overexpress heteromultimeric ENaC comprising alpha, beta, and gamma subunits was used to identify compounds that selectively enhanced the activity of proteolyzed ENaC versus non-proteolyzed ENaC. Human sensory studies demonstrated that a compound that enhances proteolyzed ENaC enhances the salty taste of table salt, or sodium chloride, confirming proteolyzed ENaC as the first human salt taste receptor. Carbonation An enzyme connected to the sour receptor transmits information about carbonated water. Fat A possible taste receptor for fat, CD36, has been identified. CD36 has been localized to the circumvallate and foliate papillae, which are present in taste buds and where lingual lipase is produced, and research has shown that the CD36 receptor binds long chain fatty acids. Differences in the amount of CD36 expression in human subjects were associated with their ability to taste fats, creating a case for the receptor's relationship to fat tasting. Further research into the CD36 receptor could be useful in determining the existence of a true fat-tasting receptor. Free fatty acid receptor 4 (also termed GPR120) and, to a much lesser extent, free fatty acid receptor 1 (also termed GPR40) have been implicated in responding to oral fat, and their absence leads to reduced fat preference and reduced neuronal response to orally administered fatty acids. TRPM5 has been shown to be involved in oral fat response and identified as a possible oral fat receptor, but recent evidence presents it as primarily a downstream actor. Types Human bitter taste receptor genes are named TAS2R1 to TAS2R64, with many gaps due to non-existent genes, pseudogenes, or proposed genes that have not been annotated to the most recent human genome assembly. Many bitter taste receptor genes also have confusing synonym names, with several different gene names referring to the same gene. Loss of function In many species, taste receptors have shown loss of function. 
The evolutionary process by which taste receptors lost their function is believed to be adaptive: it is associated with feeding ecology and drives specialization and divergence of taste receptors. Among the taste receptors, the bitter, sweet, and umami receptors show a correlation between receptor inactivation and feeding behavior. However, there is no strong evidence that any vertebrate is missing the bitter taste receptor genes. The sweet taste receptor is one of the taste receptors whose function has been lost. In mammals, the predominant sweet taste receptor is the Type 1 taste receptor Tas1r2/Tas1r3. Some mammalian species, such as cats and vampire bats, have shown an inability to taste sweet. In these species, the cause of loss of function of the sweet receptor is the pseudogenization of Tas1r2. The pseudogenization of Tas1r2 is also observed in non-mammalian species such as chickens and the tongueless Western clawed frog, and these species also show an inability to taste sweet. The pseudogenization of Tas1r2 is widespread and independent in the order Carnivora. Many studies have shown that the pseudogenization of taste receptors is caused by deleterious mutations in the open reading frame (ORF). One study found that nonfeline carnivorous species showed ORF-disrupting mutations of Tas1r2 that occurred independently among the species and varied widely across their lineages. It is hypothesized that the pseudogenization of Tas1r2 occurred through convergent evolution, in which carnivorous species lost their ability to taste sweet because of dietary behavior. Umami is another taste whose receptor function has been lost in many species. The predominant umami taste receptors are Tas1r1/Tas1r3. In two lineages of aquatic mammals, including dolphins and sea lions, Tas1r1 has been found to be pseudogenized. The pseudogenization of Tas1r1 has also been found in terrestrial, carnivorous species. While the panda belongs to the order Carnivora, it is herbivorous: 99% of its diet is bamboo, and it cannot taste umami. The genome sequence of the panda shows that its Tas1r1 gene is pseudogenized. One study found that in all species in the order Carnivora except the panda, the open reading frame was maintained. In the panda, the ratio of nonsynonymous to synonymous substitutions was found to be much higher than in other species in the order Carnivora. This correlates with fossil-record dates for the panda, showing when the panda switched from a carnivorous to an herbivorous diet. Therefore, the loss of function of umami in the panda is hypothesized to have been caused by dietary change, as the panda became less dependent on meat. However, these studies do not explain herbivores such as horses and cows that have retained the Tas1r1 receptor. Overall, the loss of function of a taste receptor is an evolutionary process that occurred due to a dietary change in a species. See also List of distinct cell types in the adult human body References External links G protein-coupled receptors Gustation
Taste receptor
[ "Chemistry" ]
3,765
[ "G protein-coupled receptors", "Signal transduction" ]
11,890,895
https://en.wikipedia.org/wiki/Namako%20wall
Namako wall or namako-kabe (sometimes misspelled as nameko) is a Japanese wall design widely used for vernacular houses, particularly on fireproof storehouses, by the latter half of the Edo period. The namako wall is distinguished by a white grid pattern on black slate. Geographically, it was most prominent in parts of western Japan, notably the San'in region and San'yō region and, from the 19th century, further east, in the Izu Peninsula. Origin As the base of the external walls of earthen kura storehouses is vulnerable to physical damage and damage from rain, it is often tiled for protection. The exaggerated white clay joints, a few centimetres wide and rounded on top, remind people of the namako sea cucumber. Modern uses During the Meiji period (1868–1912), when Japan imported many Western ideas, the namako wall was used in a way that mimicked the "bricks and mortar" style of Western countries. For example, Kisuke Shimizu's Tsukiji Hotel for foreigners in Tokyo Bay (completed in 1868) had namako walls that stretched from the ground to the eaves. The Misono-za kabuki theatre in Nagoya features a modern namako pattern on its facade. See also Japanese wall Footnotes References External links Architectural elements Architecture in Japan Types of wall
Namako wall
[ "Technology", "Engineering" ]
276
[ "Structural engineering", "Building engineering", "Types of wall", "Architectural elements", "Components", "Architecture" ]
11,891,204
https://en.wikipedia.org/wiki/Gustducin
Gustducin is a G protein associated with taste and the gustatory system, found in some taste receptor cells. Research on the discovery and isolation of gustducin is recent. It is known to play a large role in the transduction of bitter, sweet, and umami stimuli. Its pathways (especially for detecting bitter stimuli) are many and diverse. An intriguing feature of gustducin is its similarity to transducin. These two G proteins have been shown to be structurally and functionally similar, leading researchers to believe that the sense of taste evolved in a similar fashion to the sense of sight. Gustducin is a heterotrimeric protein composed of the products of the genes GNAT3 (α-subunit), GNB1 (β-subunit), and GNG13 (γ-subunit). Discovery Gustducin was discovered in 1992 when degenerate oligonucleotide primers were synthesized and mixed with a taste tissue cDNA library. The DNA products were amplified by the polymerase chain reaction method, and eight positive clones were shown to encode the α subunits of G proteins (which interact with G protein-coupled receptors). Of these eight, two had previously been shown to encode rod and cone α-transducin. The eighth clone, α-gustducin, was unique to the gustatory tissue. Comparisons with transducin Upon analyzing the amino-acid sequence of α-gustducin, it was discovered that α-gustducin and α-transducin are closely related. This work showed that α-gustducin's protein sequence gives it 80% identity to both rod and cone α-transducin. Despite the structural similarities, the two proteins serve very different sensory functions; however, they have similar mechanisms and capabilities. Transducin removes the inhibition from cGMP phosphodiesterase, which leads to the breakdown of cGMP. Similarly, α-gustducin binds the inhibitory subunits of taste cell cAMP phosphodiesterase, which causes a decrease in cAMP levels. Also, the terminal 38 amino acids of α-gustducin and α-transducin are identical. This suggests that gustducin can interact with opsin and opsin-like G-coupled receptors. Conversely, this also suggests that transducin can interact with taste receptors. The structural similarities between gustducin and transducin are so great that comparisons with transducin were used to propose a model of gustducin's role and functionality in taste transduction. Other G protein α-subunits have been identified in TRCs (e.g. Gαi-2, Gαi-3, Gα14, Gα15, Gαq, Gαs) with functions that have not yet been determined. Location While gustducin was known to be expressed in some taste receptor cells (TRCs), studies with rats showed that gustducin was also present in a limited subset of cells lining the stomach and intestine. These cells appear to share several features of TRCs. Another study with humans brought to light two immunoreactive patterns for α-gustducin in human circumvallate and foliate taste cells: plasmalemmal and cytosolic. These two studies showed that gustducin is distributed through gustatory tissue and some gastric and intestinal tissue, and that gustducin is present either in the cytoplasm or in apical membranes on TRC surfaces. Research showed that bitter-stimulated type 2 taste receptors (T2R/TRB) are found only in taste receptor cells positive for the expression of gustducin; α-gustducin is selectively expressed in roughly 25–30% of TRCs. Evolution of the gustducin-mediated signaling model Due to its structural similarity to transducin, gustducin was predicted to activate a phosphodiesterase (PDE). 
Phosphodiesterases were found in taste tissues, and their activation was tested in vitro with both gustducin and transducin. This experiment revealed that transducin and gustducin were both expressed in taste tissue (in a 1:25 ratio) and that both G proteins are capable of activating retinal PDE. Furthermore, when presented with denatonium and quinine, both G proteins can activate taste-specific PDEs. This indicated that both gustducin and transducin are important in the signal transduction of denatonium and quinine. Subsequent research investigated the role of gustducin in bitter taste reception by using "knock-out" mice lacking the gene for α-gustducin. A taste test with knock-out and control mice revealed that the knock-out mice showed no preference between bitter and regular food in most cases. When the α-gustducin gene was re-inserted into the knock-out mice, the original taste ability returned. However, the loss of the α-gustducin gene did not completely remove the ability of the knock-out mice to taste bitter food, indicating that α-gustducin is not the only mechanism for tasting bitter food. It was thought at the time that an alternative mechanism of bitter taste detection could be associated with the βγ subunit of gustducin. This theory was later validated when it was discovered that both peripheral and central gustatory neurons typically respond to more than one type of taste stimulant, although a neuron typically favors one specific stimulant over others. This suggests that, while many neurons favor bitter taste stimuli, neurons that favor other stimuli such as sweet and umami may be capable of detecting bitter stimuli in the absence of bitter stimulant receptors, as with the knock-out mice. Second messengers IP3 and cAMP Until recently, the nature of gustducin and its second messengers was unclear. It was clear, however, that gustducin transduced intracellular signals. Spielman was one of the first to look at the speed of taste reception, utilizing the quenched-flow technique. When the taste cells were exposed to the bitter stimulants denatonium and sucrose octaacetate, the intracellular response, a transient increase of IP3, occurred within 50–100 milliseconds of stimulation. This was not unexpected, as it was known that transducin was capable of sending signals within rod and cone cells at similar speeds. This indicated that IP3 was one of the second messengers used in bitter taste transduction. It was later discovered that cAMP also causes an influx of cations during bitter and some sweet taste transduction, leading to the conclusion that it also acts as a second messenger to gustducin. Bitter transduction When bitter-stimulated T2R/TRB receptors activate gustducin heterotrimers, gustducin acts to mediate two responses in taste receptor cells: a decrease in cAMP triggered by α-gustducin, and a rise in IP3 (inositol trisphosphate) and diacylglycerol (DAG) from βγ-gustducin. Although the following steps of the α-gustducin pathway are unconfirmed, it is suspected that decreased cAMP may act on protein kinases which would regulate taste receptor cell ion channel activity. It is also possible that cNMP levels directly regulate the activity of cNMP-gated channels and cNMP-inhibited ion channels expressed in taste receptor cells. The βγ-gustducin pathway continues with the activation of IP3 receptors and the release of Ca2+, followed by neurotransmitter release. 
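The two-branch logic of this pathway can be sketched as a toy Python simulation. This is purely illustrative and not quantitative: the numbers are arbitrary, and every name in it is invented for the example rather than drawn from any real signaling library.

# Toy model of gustducin-mediated bitter transduction, following the two
# branches described above. Values are arbitrary; illustrative only.
def bitter_transduction(ligand_bound):
    state = {"cAMP": 1.0, "IP3": 0.0, "DAG": 0.0, "Ca2+": 0.0,
             "neurotransmitter_release": False}
    if ligand_bound:               # T2R/TRB receptor activates the gustducin heterotrimer
        state["cAMP"] -= 0.5       # alpha-gustducin branch: PDE activity lowers cAMP
        state["IP3"] += 1.0        # betagamma-gustducin branch raises IP3 ...
        state["DAG"] += 1.0        # ... and DAG
        if state["IP3"] > 0:       # IP3 receptors release internal Ca2+
            state["Ca2+"] += 1.0
        state["neurotransmitter_release"] = state["Ca2+"] > 0
    return state

print(bitter_transduction(ligand_bound=True))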
Bitter taste transduction models Several models have been suggested for the mechanisms regarding the transduction of bitter taste signals. Cell-surface receptors: Patch clamping experiments have shown evidence that bitter compounds such as denatonium and sucrose octaacetate act directly on specific cell-surface receptors. Direct activation of G proteins: Certain bitter stimulants such as quinine have been shown to activate G proteins directly. While these mechanisms have been identified, their physiological relevance has not yet been established. PDE activation: Other bitter compounds, such as thioacetamide and propylthiouracil, have been shown to have stimulatory effects on PDEs. This mechanism has been recognized in bovine tongue epithelium containing fungiform papillae. PDE inhibition: Other bitter compounds have been shown to inhibit PDE. Bacitracin and hydrochloride have been shown to inhibit PDE in bovine taste tissue. Channel blockage: Patch clamping experiments have shown that several bitter ions act directly on potassium channels, blocking them. This suggests that the potassium channels would be located in the apical region of the taste cells. While this theory seems valid, it has only been identified in mudpuppy taste cells. It is thought that these five diverse mechanisms developed as defense mechanisms. This would imply that many different poisonous or harmful bitter agents exist and that these five mechanisms exist to prevent humans from eating or drinking them. It is also possible that some mechanisms can act as backups should a primary mechanism fail. One example of this could be quinine, which has been shown to both inhibit and activate PDE in bovine taste tissue. Sweet transduction There are currently two models proposed for sweet taste transduction. The first pathway is a GPCR–Gs–cAMP pathway. This pathway starts with sucrose and other sugars activating Gs inside the cell through a membrane-bound GPCR. The activated Gαs activates adenylyl cyclase to generate cAMP. From this point, one of two pathways can be taken. cAMP may act directly to cause an influx of cations through cAMP-gated channels, or cAMP can activate protein kinase A, which causes the phosphorylation of K+ channels, thus closing the channels, allowing for depolarization of the taste cell and subsequent opening of voltage-gated Ca2+ channels, causing neurotransmitter release. The second pathway is a GPCR–Gq/Gβγ–IP3 pathway, which is used with artificial sweeteners. Artificial sweeteners bind and activate GPCRs coupled to PLCβ2 by either Gαq or Gβγ. The activated subunits activate PLCβ2 to generate IP3 and DAG. IP3 and DAG elicit Ca2+ release from the endoplasmic reticulum and cause cellular depolarization. An influx of Ca2+ triggers neurotransmitter release. While these two pathways coexist in the same TRCs, it is unclear how the receptors selectively mediate cAMP responses to sugars and IP3 responses to artificial sweeteners. Evolution of bitter taste receptors Of the five basic tastes, three (sweet, bitter, and umami) are mediated by receptors from the G protein-coupled receptor family. Mammalian bitter taste receptors (T2Rs) are encoded by a gene family of only a few dozen members. It is believed that bitter taste receptors evolved as a mechanism to avoid ingesting poisonous and harmful substances. If this is the case, one might expect different species to develop different bitter taste receptors based on dietary and geographical constraints. 
With the exception of T2R1 (which lies on chromosome 5), all human bitter taste receptor genes can be found clustered on chromosomes 7 and 12. Analyzing the relationships between bitter taste receptor genes shows that genes on the same chromosome are more closely related to each other than genes on different chromosomes. Furthermore, the genes on chromosome 12 have higher sequence similarity than the genes found on chromosome 7. This indicates that these genes evolved via tandem gene duplications and that chromosome 12, given the higher sequence similarity among its genes, went through these tandem duplications more recently than chromosome 7. Gustducin in the stomach Recent work by Enrique Rozengurt has shed some light on the presence of gustducin in the stomach and gastrointestinal tract. His work suggests that gustducin is present in these areas as a defense mechanism. It is widely known that some drugs and toxins can cause harm and even be lethal if ingested. It has already been theorized that multiple bitter taste reception pathways exist to prevent harmful substances from being ingested, but a person can choose to ignore the taste of a substance. Rozengurt suggests that the presence of gustducin in epithelial cells in the stomach and gastrointestinal tract is indicative of another line of defense against ingested toxins. Whereas taste cells in the mouth act to compel a person to spit out a toxin, these stomach cells may act to force a person to expel the toxins in the form of vomit. See also transducin gustatory system References Further reading External links G proteins
Gustducin
[ "Chemistry" ]
2,651
[ "G proteins", "Signal transduction" ]
11,891,218
https://en.wikipedia.org/wiki/Steven%20R.%20White
Steven R. White is a professor of physics at the University of California, Irvine. He is a condensed matter physicist who specializes in the simulation of quantum systems. He graduated from the University of California, San Diego; he then received his Ph.D. at Cornell University, where he was a student shared between Kenneth Wilson and John Wilkins. He works mostly in condensed matter theory, specializing in computational techniques for strongly correlated systems. These strongly correlated systems include both high-temperature superconductors and quantum spin liquids. He is best known for inventing the Density Matrix Renormalization Group (DMRG) in 1992, a numerical variational technique for high-accuracy calculations of the low-energy physics of quantum many-body systems. His more than one hundred seventy papers on this and related subjects have been used and cited widely; his most cited article has received over seven thousand citations (according to Google Scholar, February 2023). Awards National Science Foundation (NSF) Fellowship, 1982–1985 Andrew D. White Supplementary Fellowship, 1982–1985 IBM Postdoctoral Fellowship, 1988–1989 American Physical Society, Fellow, 1998 American Physical Society, Division Councilor for Computational Physics, 1999 American Physical Society Aneesur Rahman Prize, 2003 Fellow, American Association for the Advancement of Science (2008) Fellow, American Academy of Arts and Sciences (2016) Physical Review Letters Milestone Paper of 1992 (Honored 2008) Perimeter Distinguished Visiting Research Chair (2012–present) Member, National Academy of Sciences (2018) References External links 1959 births Living people People from Lawton, Oklahoma Cornell University alumni 21st-century American physicists University of California, Irvine faculty Fellows of the American Physical Society Fellows of the American Association for the Advancement of Science Fellows of the American Academy of Arts and Sciences Computational physicists Members of the United States National Academy of Sciences
Steven R. White
[ "Physics" ]
399
[ "Computational physicists", "Computational physics" ]
11,891,512
https://en.wikipedia.org/wiki/System%20on%20module
A system on a module (SoM) is a board-level circuit that integrates a system function in a single module. It may integrate digital and analog functions on a single board. A typical application is in the area of embedded systems. Unlike a single-board computer, a SoM serves a special function, like a system on a chip (SoC). The devices integrated in the SoM typically require a high level of interconnection for reasons such as speed, timing, bus width, etc. There are benefits in building a SoM, as for a SoC; one notable result is reduced cost of the base board or the main PCB. Two other major advantages of SoMs are design reuse and the ease with which they can be integrated into many embedded computer applications. History The acronym SoM has its roots in blade-based modules. In the mid 1980s, when VMEbus blades used M-Modules, these were commonly referred to as system on a module (SoM). These SoMs performed specific functions such as computation and data acquisition. SoMs were used extensively by Sun Microsystems, Motorola, Xerox, DEC, and IBM in their blade computers. Design A typical SoM consists of: at least one microcontroller, microprocessor or digital signal processor (DSP) core (multiprocessor systems-on-chip (MPSoCs) have more than one processor core); memory blocks including a selection of ROM, RAM, EEPROM and/or flash memory; timing sources; industry-standard communication interfaces such as USB, FireWire, Ethernet, USART, SPI, and I²C; peripherals including counter-timers, real-time timers and power-on reset generators; analog interfaces including analog-to-digital converters and digital-to-analog converters; and voltage regulators and power management circuits. See also References ANSI/IEEE Std 1014-1987 and ANSI/VITA 1-1994 1386-2001 - IEEE Standard for a Common Mezzanine Card Family: CMC Standard ANSI/VITA 46.0-2007 VITA Technologies Hall of Fame - PCI Mezzanine Cards Microcomputers Embedded systems Computer buses IEEE standards
System on module
[ "Technology", "Engineering" ]
450
[ "Computer engineering", "Embedded systems", "Computer standards", "Computer systems", "Computer science", "IEEE standards" ]
11,891,713
https://en.wikipedia.org/wiki/Gs%20alpha%20subunit
The Gs alpha subunit (Gαs, Gsα) is a subunit of the heterotrimeric G protein Gs that stimulates the cAMP-dependent pathway by activating adenylyl cyclase. Gsα is a GTPase that functions as a cellular signaling protein. Gsα is the founding member of one of the four families of heterotrimeric G proteins, defined by the alpha subunits they contain: the Gαs family, Gαi/Gαo family, Gαq family, and Gα12/Gα13 family. The Gs family has only two members: the other member is Golf, named for its predominant expression in the olfactory system. In humans, Gsα is encoded by the GNAS complex locus, while Golfα is encoded by the GNAL gene. Function The general function of Gs is to activate intracellular signaling pathways in response to activation of cell surface G protein-coupled receptors (GPCRs). GPCRs function as part of a three-component system of receptor-transducer-effector. The transducer in this system is a heterotrimeric G protein, composed of three subunits: a Gα protein such as Gsα, and a complex of two tightly linked proteins called Gβ and Gγ in a Gβγ complex. When not stimulated by a receptor, Gα is bound to GDP and to Gβγ to form the inactive G protein trimer. When the receptor binds an activating ligand outside the cell (such as a hormone or neurotransmitter), the activated receptor acts as a guanine nucleotide exchange factor to promote GDP release from and GTP binding to Gα, which drives dissociation of GTP-bound Gα from Gβγ. In particular, GTP-bound, activated Gsα binds to adenylyl cyclase to produce the second messenger cAMP, which in turn activates the cAMP-dependent protein kinase (also called protein kinase A or PKA). Cellular effects of Gsα are thus mediated through PKA. Although each GTP-bound Gsα can activate only one adenylyl cyclase enzyme, amplification of the signal occurs because one receptor can activate multiple copies of Gs while that receptor remains bound to its activating agonist, and each Gsα-bound adenylyl cyclase enzyme can generate substantial cAMP to activate many copies of PKA. Receptors The G protein-coupled receptors that couple to the Gs family proteins include: 5-HT4, 5-HT6 and 5-HT7 serotonergic receptors Adenosine receptor types A2a and A2b Adrenocorticotropic hormone receptor (a.k.a. MC2R) Arginine vasopressin receptor 2 β-adrenergic receptors types β1, β2 and β3 Calcitonin receptor Calcitonin gene-related peptide receptor Corticotropin-releasing hormone receptor Dopamine D1 and D5 receptors Follicle-stimulating hormone receptor Gastric inhibitory polypeptide receptor Glucagon receptor Growth-hormone-releasing hormone receptor Histamine H2 receptor Luteinizing hormone/choriogonadotropin receptor Melanocortin receptor: MC1R, MC2R (a.k.a. ACTH receptor), MC3R, MC4R, MC5R Olfactory receptors, through Golf in the olfactory neurons Parathyroid hormone receptors PTH1R and PTH2R Prostaglandin receptor types D2 and I2 Secretin receptor Thyrotropin receptor Trace amine-associated receptor 1 See also Second messenger system G protein-coupled receptor Heterotrimeric G protein Adenylyl cyclase Protein kinase A Gi alpha subunit Gq alpha subunit G12/G13 alpha subunits References External links Peripheral membrane proteins Medical mnemonics
Gs alpha subunit
[ "Chemistry" ]
830
[ "G proteins", "Signal transduction" ]
11,891,750
https://en.wikipedia.org/wiki/Gq%20alpha%20subunit
Gq protein alpha subunit is a family of heterotrimeric G protein alpha subunits. This family is also commonly called the Gq/11 (Gq/G11) family or Gq/11/14/15 family to include closely related family members. Gq alpha subunits may be referred to as Gq alpha, Gαq, or Gqα. Gq proteins couple to G protein-coupled receptors to activate beta-type phospholipase C (PLC-β) enzymes. PLC-β in turn hydrolyzes phosphatidylinositol 4,5-bisphosphate (PIP2) to diacylglycerol (DAG) and inositol trisphosphate (IP3). IP3 acts as a second messenger to release stored calcium into the cytoplasm, while DAG acts as a second messenger that activates protein kinase C (PKC). Family members In humans, there are four distinct proteins in the Gq alpha subunit family: Gαq is encoded by the gene GNAQ. Gα11 is encoded by the gene GNA11. Gα14 is encoded by the gene GNA14. Gα15 is encoded by the gene GNA15. Function The general function of Gq is to activate intracellular signaling pathways in response to activation of cell surface G protein-coupled receptors (GPCRs). GPCRs function as part of a three-component system of receptor-transducer-effector. The transducer in this system is a heterotrimeric G protein, composed of three subunits: a Gα protein such as Gαq, and a complex of two tightly linked proteins called Gβ and Gγ in a Gβγ complex. When not stimulated by a receptor, Gα is bound to guanosine diphosphate (GDP) and to Gβγ to form the inactive G protein trimer. When the receptor binds an activating ligand outside the cell (such as a hormone or neurotransmitter), the activated receptor acts as a guanine nucleotide exchange factor to promote GDP release from and guanosine triphosphate (GTP) binding to Gα, which drives dissociation of GTP-bound Gα from Gβγ. Recent evidence suggests that Gβγ and Gαq-GTP could maintain partial interaction via the N-terminal α-helix region of Gαq. GTP-bound Gα and Gβγ are then freed to activate their respective downstream signaling enzymes. Gq/11/14/15 proteins all activate beta-type phospholipase C (PLC-β) to signal through calcium and PKC signaling pathways. PLC-β then cleaves a specific plasma membrane phospholipid, phosphatidylinositol 4,5-bisphosphate (PIP2), into diacylglycerol (DAG) and inositol 1,4,5-trisphosphate (IP3). DAG remains bound to the membrane, and IP3 is released as a soluble molecule into the cytoplasm. IP3 diffuses to bind to IP3 receptors, a specialized calcium channel in the endoplasmic reticulum (ER). These channels are specific to calcium and only allow the passage of calcium from the ER into the cytoplasm. Since cells actively sequester calcium in the ER to keep cytoplasmic levels low, this release causes the cytosolic concentration of calcium to increase, causing a cascade of intracellular changes and activity through calcium binding proteins and calcium-sensitive processes. Further reading: Calcium function in vertebrates. DAG works together with released calcium to activate specific isoforms of PKC, which are activated to phosphorylate other molecules, leading to further altered cellular activity. Further reading: function of protein kinase C. The Gαq/Gα11 (Q209L) mutation is associated with the development of uveal melanoma, and its pharmacological inhibition (with the cyclic depsipeptide inhibitor FR900359) decreases tumor growth in preclinical trials. 
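To summarize the receptor–transducer–effector chain for this family alongside the Gs family described earlier, here is a minimal illustrative mapping in Python. The structure and all identifier names are my own invention for the example, not part of any real signaling library; the effector and second-messenger assignments follow only what the text above states.

# Toy mapping of G alpha families to their primary effectors and second
# messengers, as described in the text. Illustrative only.
G_ALPHA_PATHWAYS = {
    "Gq/G11/G14/G15": {
        "effector": "phospholipase C-beta (PLC-beta)",
        "second_messengers": ["IP3 (releases ER calcium)", "DAG (activates PKC)"],
    },
    "Gs": {
        "effector": "adenylyl cyclase",
        "second_messengers": ["cAMP (activates PKA)"],
    },
}

def describe(family):
    pathway = G_ALPHA_PATHWAYS[family]
    messengers = "; ".join(pathway["second_messengers"])
    return f"{family} -> {pathway['effector']} -> {messengers}"

print(describe("Gq/G11/G14/G15"))
print(describe("Gs"))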
Receptors The following G protein-coupled receptors couple to Gq subunits: 5-HT2 serotonergic receptors Alpha-1 adrenergic receptor Vasopressin type 1 receptors: 1A and 1B Angiotensin II receptor type 1 Calcitonin receptor Glutamate mGluR1 and mGluR5 receptors Gonadotropin-releasing hormone receptor Histamine H1 receptor M1, M3, and M5 muscarinic receptors Thyrotropin-releasing hormone receptor Trace amine-associated receptor 1 At least some Gq-coupled receptors (e.g., the muscarinic acetylcholine M3 receptor) can be found preassembled (pre-coupled) with Gq. The common polybasic domain in the C-tail of Gq-coupled receptors appears necessary for this receptor-G protein preassembly. Inhibitors The cyclic depsipeptides FR900359 and YM-254890 are strong, highly specific inhibitors of Gq and G11. See also Second messenger system G protein-coupled receptor Heterotrimeric G protein Phospholipase C Calcium signaling Protein kinase C Gs alpha subunit Gi alpha subunit G12/G13 alpha subunits References External links G proteins Peripheral membrane proteins
Gq alpha subunit
[ "Chemistry" ]
1,162
[ "G proteins", "Signal transduction" ]
11,891,828
https://en.wikipedia.org/wiki/Aggrecanase
Aggrecanases are extracellular proteolytic enzymes that are members of the ADAMTS (A Disintegrin And Metalloprotease with Thrombospondin Motifs) family. Aggrecanases act on large proteoglycans known as aggrecans, which are components of connective tissues such as cartilage. The inappropriate activity of aggrecanase is a mechanism by which cartilage degradation occurs in diseases such as arthritis. At least two forms of aggrecanase exist in humans: ADAMTS4 or aggrecanase-1 and ADAMTS5 or aggrecanase-2. Both proteins contain thrombospondin (TS) motifs required for proper recognition of substrates. Although both proteins can cleave the substrate aggrecan at the same position, they differ in kinetics and in secondary cleavage sites. References ADAMTS EC 3.4.24
Aggrecanase
[ "Chemistry" ]
189
[ "Biochemistry stubs", "Protein stubs" ]
2,201,259
https://en.wikipedia.org/wiki/Ballistic%20coefficient
In ballistics, the ballistic coefficient (BC, C) of a body is a measure of its ability to overcome air resistance in flight. It is inversely proportional to the negative acceleration: a high number indicates a low negative acceleration, meaning the drag on the body is small in proportion to its mass. BC can be expressed with the units kilogram-force per square meter (kgf/m2) or pounds per square inch (lb/in2) (where 1 lb/in2 corresponds to 703 kg/m2). Formulas General The general form used in physics and engineering is Cb,physics = m / (Cd · A) = (ρ · ℓ) / Cd, where: Cb,physics, ballistic coefficient as used in physics and engineering; m, mass; A, cross-sectional area; Cd, drag coefficient; ρ, density; ℓ, characteristic body length. Ballistics The formula for calculating the ballistic coefficient for small and large arms projectiles only is Cb,projectile = m / (i · d²), where: Cb,projectile, ballistic coefficient as used in point mass trajectory from the Siacci method (less than 20 degrees); m, mass of bullet; d, measured cross-sectional diameter of projectile; i, coefficient of form. The coefficient of form, i, can be derived by 6 methods and applied differently depending on the trajectory models used: G model, Beugless/Coxe; 3 Sky Screen; 4 Sky Screen; target zeroing; Doppler radar. A drag coefficient can also be calculated mathematically from the projectile's density ρ, its velocity at range v, the constant π (pi) = 3.14159..., and its measured cross-sectional diameter d. From standard physics as applied to "G" models, the coefficient of form is the ratio i = Cp / CG, where: i, coefficient of form; CG, drag coefficient of the reference projectile from any "G" model drawing (the reference projectile itself being assigned a ballistic coefficient of 1.00); Cp, drag coefficient of the actual test projectile at range. Commercial use This formula is used for calculating the ballistic coefficient within the small arms shooting community, but is redundant with Cb,projectile: Cb,small-arms = SD / i, where: Cb,small-arms, ballistic coefficient; SD, sectional density; i, coefficient of form (form factor). History Background In 1537, Niccolò Tartaglia performed test firing to determine the maximum angle and range for a shot. His conclusion was near 45 degrees. He noted that the shot trajectory was continuously curved. In 1636, Galileo Galilei published results in "Dialogues Concerning Two New Sciences". He found that a falling body had a constant acceleration. This allowed Galileo to show that a bullet's trajectory was a curve. Circa 1665, Sir Isaac Newton derived the law of air resistance. Newton's experiments on drag were through air and fluids. He showed that drag on shot increases proportionately with the density of the air (or the fluid), cross-sectional area, and the square of the speed. Newton's experiments were only at low velocities. In 1718, John Keill challenged the Continental mathematicians: "To find the curve that a projectile may describe in the air, on behalf of the simplest assumption of gravity, and the density of the medium uniform, on the other hand, in the duplicate ratio of the velocity of the resistance". This challenge supposes that air resistance increases with the square of a projectile's velocity. Keill gave no solution for his challenge. Johann Bernoulli took up this challenge and soon thereafter solved the problem, with air resistance varying as "any power" of velocity; this is known as the Bernoulli equation and is the precursor to the concept of the "standard projectile". In 1742, Benjamin Robins invented the ballistic pendulum. This was a simple mechanical device that could measure a projectile's velocity. Robins reported a range of measured muzzle velocities. 
In his book published that same year, "New Principles of Gunnery", he used numerical integration from Euler's method and found that air resistance varies as the square of the velocity, but insisted that it changes at the speed of sound. In 1753, Leonhard Euler showed how theoretical trajectories might be calculated using his method as applied to the Bernoulli equation, but only for resistance varying as the square of the velocity. In 1864, the electro-ballistic chronograph was invented, and by 1867 one electro-ballistic chronograph was claimed by its inventor to be able to resolve one ten-millionth of a second, though its absolute accuracy is unknown. Test firing Many countries and their militaries carried out test firings from the mid eighteenth century on, using large ordnance to determine the drag characteristics of each individual projectile. These individual test firings were logged and reported in extensive ballistics tables. Most notable among the test firings were: Francis Bashforth at Woolwich Marshes and Shoeburyness, England (1864–1889); M. Krupp (1865–1880) of Friedrich Krupp AG at Meppen, Germany (Friedrich Krupp AG continued these test firings to 1930); to a lesser extent General Nikolai V. Mayevski, then a Colonel (1868–1869), at St. Petersburg, Russia; the Commission d'Experience de Gâvre (1873 to 1889) at Le Gâvre, France; and the British Royal Artillery (1904–1906). The test projectiles (shot) used varied from spherical to spheroidal to ogival, being hollow, solid, and cored in design, with the elongated ogival-headed projectiles having 1, 1½, 2, and 3 caliber radii. These projectiles varied considerably in size and weight. Methods and the standard projectile Many militaries up until the 1860s used calculus to compute the projectile trajectory. The numerical computations necessary to calculate just a single trajectory were lengthy and tedious, and were done by hand. So, investigations to develop a theoretical drag model began. The investigations led to a major simplification in the experimental treatment of drag: the concept of a "standard projectile". The ballistic tables are made up for a factitious projectile, defined as "a factitious weight and with a specific shape and specific dimensions in a ratio of calibers". This simplifies calculation for the ballistic coefficient of a standard model projectile, which could mathematically move through the standard atmosphere with the same ability as any actual projectile could move through the actual atmosphere. The Bashforth method In 1870, Bashforth published a report containing his ballistic tables. Bashforth found that the drag of his test projectiles varied with the square of velocity (v²) over one range of velocities and with the cube of velocity (v³) over another. As of his 1880 report, he found that drag varied as v⁶ over a still higher range. Bashforth used rifled guns of several calibers, smooth-bore guns of similar caliber for firing spherical shot, and howitzers that propelled elongated projectiles having an ogival head of 1½ caliber radius. Bashforth used b as the variable for ballistic coefficient. When b is equal to or less than v², then b is equal to P for the drag of a projectile. It would be found that air does not deflect off the fronts of differently shaped projectiles in the same way. This prompted the introduction of a second factor to b, the coefficient of form (i). This is particularly true at high velocities. 
Hence, Bashforth introduced the "undetermined multiplier" of any power, called the k factor, which compensates for the unknown effects of drag at higher velocities; k > i. Bashforth then integrated k and i as K. Although Bashforth did not conceive the "restricted zone", he showed mathematically there were 5 restricted zones. Bashforth did not propose a standard projectile, but was well aware of the concept. Mayevski–Siacci method In 1872, Mayevski published his report Traité de Balistique Extérieure, which included the Mayevski model. Using his ballistic tables along with Bashforth's tables from the 1870 report, Mayevski created an analytical mathematical formula that calculated the air resistance of a projectile in terms of log A and the value n. Although Mayevski's mathematics used a different approach than Bashforth's, the resulting calculation of air resistance was the same. Mayevski proposed the restricted zone concept and found there to be six restricted zones for projectiles. Circa 1886, Mayevski published the results from a discussion of experiments made by M. Krupp (1880). Though the ogival-headed projectiles used varied greatly in caliber, they had essentially the same proportions as the standard projectile, being mostly 3 calibers in length, with an ogive of 2 calibers radius. In 1880, Colonel Francesco Siacci published his work "Balistica". Siacci found, as did those who came before him, that the resistance and density of the air become greater and greater as a projectile displaces the air at higher and higher velocities. Siacci's method was for flat-fire trajectories with angles of departure of less than 20 degrees. He found that the angle of departure is sufficiently small to allow for air density to remain the same, and was able to reduce the ballistics tables to easily tabulated quadrants giving distance, time, inclination and altitude of the projectile. Using Bashforth's k and Mayevski's tables, Siacci created a four-zone model. Siacci used Mayevski's standard projectile. From this method and standard projectile, Siacci formulated a shortcut. Siacci found that within a low-velocity restricted zone, projectiles of similar shape and velocity in the same air density behave similarly. Siacci used the variable C for ballistic coefficient. Meaning, air density is generally the same for flat-fire trajectories, thus sectional density is equal to the ballistic coefficient and air density can be dropped. Then, as the velocity rises into Bashforth's high-velocity region, the coefficient of form must be introduced, following which, within today's currently used ballistic trajectory tables, an average ballistic coefficient would equal C = SD / i. Siacci wrote that within any restricted zone, C being the same for two or more projectiles, the trajectory differences will be minor. Therefore, C agrees with an average curve, and this average curve applies for all projectiles. Therefore, a single trajectory can be computed for the standard projectile without having to resort to tedious calculus methods, and then a trajectory for any actual bullet with known C can be computed from the standard trajectory with just simple algebra. The ballistic tables The aforementioned ballistics tables generally contain: functions, air density, projectile time at range, range, degree of projectile departure, weight and diameter to facilitate the calculation of ballistic formulae. These formulae produce the projectile velocity at range, drag and trajectories. 
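As a worked example of the relations above (SD = m / d² and C = SD / i), here is a small Python sketch. The bullet values are hypothetical and the function names are my own for illustration; they are not drawn from any ballistics library.

# Compute sectional density and ballistic coefficient from the relations
# described in the text: SD = m / d^2 and C = SD / i.
# Units follow the small-arms convention: mass in pounds, diameter in inches.
def sectional_density(mass_lb, diameter_in):
    return mass_lb / diameter_in ** 2

def ballistic_coefficient(mass_lb, diameter_in, form_factor):
    # form_factor (i) = 1.00 for the standard reference projectile
    return sectional_density(mass_lb, diameter_in) / form_factor

# Hypothetical 0.308 in, 168 grain bullet (7000 grains = 1 lb), assumed i = 0.95
mass_lb = 168 / 7000.0
print(round(ballistic_coefficient(mass_lb, 0.308, 0.95), 3))  # ~0.266 lb/in^2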
Modern commercially published ballistic tables, or software-computed ballistic tables, for small arms and sporting ammunition are exterior ballistic trajectory tables. The 1870 Bashforth tables extended to the limit of his tested velocities. Mayevski, using his tables, supplemented by the Bashforth tables (to 6 restricted zones) and the Krupp tables, conceived a 7th restricted zone and extended the tables to higher velocities. Mayevski converted Bashforth's data from Imperial units of measure to metric units of measure (now in SI units of measure). In 1884, James Ingalls published his tables in the U.S. Army Artillery Circular M using the Mayevski tables. Ingalls extended Mayevski's ballistics tables to within an 8th restricted zone, but still with the same n value (1.55) as Mayevski's 7th restricted zone. Ingalls converted Mayevski's results back to Imperial units. The British Royal Artillery results were very similar to those of Mayevski and extended their tables to within the 8th restricted zone, changing the n value from 1.55 to 1.67. These ballistic tables were published in 1909 and were almost identical to those of Ingalls. In 1971 the Sierra Bullet company calculated their ballistic tables to 9 restricted zones, but only within their tested velocity range. The G model In 1881, the Commission d'Experience de Gâvre did a comprehensive survey of data available from their tests as well as other countries. After adopting a standard atmospheric condition for the drag data, the Gâvre drag function was adopted. This drag function was known as the Gâvre function, and the standard projectile adopted was the Type 1 projectile. Thereafter, the Type 1 standard projectile was renamed G1 by the Ballistics Section of Aberdeen Proving Ground in Maryland, USA, after the Commission d'Experience de Gâvre. For practical purposes the subscript 1 in G1 is generally written in normal font size as G1. The general form for the calculations of trajectory adopted for the G model is the Siacci method. The standard model projectile is a "fictitious projectile" used as the mathematical basis for the calculation of an actual projectile's trajectory when an initial velocity is known. The G1 model projectile adopted is, in dimensionless measures, of 2 caliber radius ogival head and 3.28 calibers in length. By calculation this leaves the body 1.96 calibers long and the head 1.32 calibers long. Over the years there has been some confusion as to the adopted size, weight, and radius of the ogival head of the G1 standard projectile. This misconception may be explained by Colonel Ingalls in the 1886 publication, Exterior Ballistics in the Plane Fire, page 15: In the following tables the first and second columns give the velocities and corresponding resistance, in pounds, to an elongated [projectile] one inch in diameter and having an ogival head of one and a half calibers. They were deduced from Bashforth's experiments by Professor A. G. Greenhill, and are taken from his papers published in the Proceedings of the Royal Artillery Institution, Number 2, Volume XIII. Further, it is discussed that said projectile's weight was one pound. For the purposes of mathematical convenience, for any standard projectile (G) the C is 1.00. The projectile's sectional density (SD) is dimensionless: a mass of 1 divided by the square of a diameter of 1 caliber equals an SD of 1. Then the standard projectile is assigned a coefficient of form of 1. Following that, C = SD / i = 1.00. C, as a general rule, within flat-fire trajectory, is carried out to 2 decimal points. 
C is commonly found within commercial publications carried out to 3 decimal points, as few sporting small-arms projectiles rise to the level of 1.00 for a ballistic coefficient. When using the Siacci method for different G models, the formula used to compute the trajectories is the same. What differs are the retardation factors found through testing of actual projectiles that are similar in shape to the standard projectile reference. This creates a slightly different set of retardation factors between differing G models. When the correct G model retardation factors are applied within the Siacci mathematical formula for the same G model C, a corrected trajectory can be calculated for any G model. Another method of determining trajectory and ballistic coefficient was developed and published by Wallace H. Coxe and Edgar Beugless of DuPont in 1936. This method works by shape comparison on a logarithmic scale as drawn on 10 charts. The method estimates the ballistic coefficient related to the drag model of the Ingalls tables. When matching an actual projectile against the drawn caliber radii of Chart No. 1, it will provide i, and by using Chart No. 2, C can be quickly calculated. Coxe and Beugless used the variable C for ballistic coefficient. The Siacci method was abandoned by the end of World War I for artillery fire. But the U.S. Army Ordnance Corps continued using the Siacci method into the middle of the 20th century for direct (flat-fire) tank gunnery. The development of the electromechanical analog computer contributed to the calculation of aerial bombing trajectories during World War II. After World War II the advent of the silicon semiconductor-based digital computer made it possible to create trajectories for guided missiles/bombs, intercontinental ballistic missiles and space vehicles. Between World War I and II, the U.S. Army ballistics research laboratories at Aberdeen Proving Ground, Maryland, USA developed the standard models G2, G5 and G6. In 1965, Winchester Western published a set of ballistics tables for G1, G5, G6 and GL. In 1971 the Sierra Bullet Company retested all their bullets and concluded that the G5 model was not the best model for their boat-tail bullets, and started using the G1 model. This was fortunate, as the entire commercial sporting and firearms industries had based their calculations on the G1 model. The G1 model and Mayevski/Siacci method continue to be the industry standard today. This allows for comparison of all ballistic tables for trajectory within the commercial sporting and firearms industry. In recent years there have been vast advancements in the calculation of flat-fire trajectories with the advent of Doppler radar and the personal computer and handheld computing devices. Also, the newer methodology proposed by Dr. Arthur Pejsa, the use of the G7 model by Mr. Bryan Litz, ballistic engineer for Berger Bullets, LLC, for calculating boat-tailed spitzer rifle bullet trajectories, and 6 DoF model-based software have improved the prediction of flat-fire trajectories. Differing mathematical models and bullet ballistic coefficients Most ballistic mathematical models, and hence tables or software, take for granted that one specific drag function correctly describes the drag and hence the flight characteristics of a bullet related to its ballistic coefficient. Those models do not differentiate between wadcutter, flat-based, spitzer, boat-tail, very-low-drag, etc. bullet types or shapes. 
They assume one invariable drag function as indicated by the published BC. Several drag curve models optimized for several standard projectile shapes are available, however. The resulting drag curve models for several standard projectile shapes or types are referred to as:

G1 or Ingalls (flatbase with 2 caliber (blunt) nose ogive - by far the most popular)
G2 (Aberdeen J projectile)
G5 (short 7.5° boat-tail, 6.19 calibers long tangent ogive)
G6 (flatbase, 6 calibers long secant ogive)
G7 (long 7.5° boat-tail, 10 calibers secant ogive, preferred by some manufacturers for very-low-drag bullets)
G8 (flatbase, 10 calibers long secant ogive)
GL (blunt lead nose)

Since these standard projectile shapes differ significantly, the Gx BC will also differ significantly from the Gy BC for an identical bullet. To illustrate this, the bullet manufacturer Berger has published the G1 and G7 BCs for most of their target, tactical, varmint and hunting bullets. Other bullet manufacturers like Lapua and Nosler have also published the G1 and G7 BCs for most of their target bullets. Many of these manufacturer-stated and independently verified G1 and G7 ballistic coefficients for modern bullets are published and updated regularly in freely available bullet databases. How much a projectile deviates from the applied reference projectile is mathematically expressed by the form factor (i). The applied reference projectile shape always has a form factor (i) of exactly 1. When a particular projectile has a sub-1 form factor (i), this indicates that the particular projectile exhibits lower drag than the applied reference projectile shape. A form factor (i) greater than 1 indicates the particular projectile exhibits more drag than the applied reference projectile shape. In general the G1 model yields comparatively high BC values and is often used by the sporting ammunition industry.

The transient nature of bullet ballistic coefficients

Variations in BC claims for exactly the same projectiles can be explained by differences in the ambient air density used to compute specific values, or by differing range-speed measurements on which the stated G1 BC averages are based. Also, the BC changes during a projectile's flight, and stated BCs are always averages for particular range-speed regimes. Further explanation about the variable nature of a projectile's G1 BC during flight can be found in the external ballistics article, which implies that knowing how a BC was determined is almost as important as knowing the stated BC value itself. For the precise establishment of BCs (or perhaps the scientifically better expressed drag coefficients), Doppler radar measurements are required. The normal shooting or aerodynamics enthusiast, however, has no access to such expensive professional measurement devices. Weibel 1000e or Infinition BR-1001 Doppler radars are used by governments, professional ballisticians, defense forces, and a few ammunition manufacturers to obtain exact real-world data on the flight behavior of projectiles of interest. Doppler radar measurement results for a lathe-turned monolithic solid .50 BMG very-low-drag bullet (Lost River J40 , monolithic solid bullet / twist rate 1:) look like this: The initial rise in the BC value is attributed to a projectile's always-present yaw and precession out of the bore. The test results were obtained from many shots, not just a single shot.
The bullet was assigned 1.062 lb/in2 (746.7 kg/m2) for its BC number by the bullet's manufacturer, Lost River Ballistic Technologies, before it went out of business. Measurements on other bullets can give totally different results. How different speed regimes affect several 8.6 mm (.338 in calibre) rifle bullets made by the Finnish ammunition manufacturer Lapua can be seen in the .338 Lapua Magnum product brochure, which states Doppler radar-established BC data.

General trends

Sporting bullets, with a calibre d ranging from , have C in the range 0.12 lb/in2 to slightly over 1.00 lb/in2 (84 kg/m2 to 703 kg/m2). Those bullets with the higher BCs are the most aerodynamic, and those with low BCs are the least. Very-low-drag bullets with C ≥ 1.10 lb/in2 (over 773 kg/m2) can be designed and produced on CNC precision lathes out of mono-metal rods, but they often have to be fired from custom-made full-bore rifles with special barrels. Ammunition makers often offer several bullet weights and types for a given cartridge. Heavy-for-caliber pointed (spitzer) bullets with a boattail design have BCs at the higher end of the normal range, whereas lighter bullets with square tails and blunt noses have lower BCs. The 6 mm and 6.5 mm cartridges are probably the most well known for having high BCs and are often used in long range target matches of – . The 6 and 6.5 have relatively light recoil compared to high-BC bullets of greater caliber and tend to be shot by the winners in matches where accuracy is key. Examples include the 6mm PPC, 6mm Norma BR, 6×47mm SM, 6.5×55mm Swedish Mauser, 6.5×47mm Lapua, 6.5 Creedmoor, 6.5 Grendel, .260 Remington, and the 6.5-284. In the United States, hunting cartridges such as the .25-06 Remington (a 6.35 mm caliber), the .270 Winchester (a 6.8 mm caliber), and the .284 Winchester (a 7 mm caliber) are used when high BCs and moderate recoil are desired. The .30-06 Springfield and .308 Winchester cartridges also offer several high-BC loads, although the bullet weights are on the heavy side for the available case capacity, and thus are velocity limited by the maximum allowable pressure. In the larger caliber category, the .338 Lapua Magnum and the .50 BMG are popular with very high BC bullets for shooting beyond 1,000 meters. Newer chamberings in the larger caliber category are the .375 and .408 Cheyenne Tactical and the .416 Barrett.

Information sources

For many years, bullet manufacturers were the main source of ballistic coefficients for use in trajectory calculations. However, in the past decade or so, it has been shown that ballistic coefficient measurements by independent parties can often be more accurate than manufacturer specifications. Since ballistic coefficients depend on the specific firearm and other conditions that vary, methods have been developed for individual users to measure their own ballistic coefficients.

Satellites and reentry vehicles

Satellites in low Earth orbit (LEO) with high ballistic coefficients experience smaller perturbations to their orbits due to atmospheric drag. The ballistic coefficient of an atmospheric reentry vehicle has a significant effect on its behavior. A very high ballistic coefficient vehicle would lose velocity very slowly and would impact the Earth's surface at higher speeds. In contrast, a low ballistic coefficient vehicle would reach subsonic speeds before reaching the ground.
In general, reentry vehicles carrying human beings or other sensitive payloads back to Earth from space have high drag and a correspondingly low ballistic coefficient (less than approx. 100 lb/ft2). Vehicles that carry nuclear weapons launched by an intercontinental ballistic missile (ICBM), by contrast, have a high ballistic coefficient, ranging between 100 and 5000 lb/ft2, enabling a significantly faster descent from space to the surface. This in turn makes the weapon less affected by crosswinds or other weather phenomena, and harder to track, intercept, or otherwise defend against. See also External ballistics - The behavior of a projectile in flight. Trajectory of a projectile References External links Aerospace Corporation Definition Chuck Hawks Article on Ballistic Coefficient Ballistic Coefficient Tables Exterior Ballistics.com How do bullets fly? The ballistic coefficient (bc) by Ruprecht Nennstiel, Wiesbaden, Germany Ballistic Coefficients - Explained Ballistic calculators Projectiles Aerodynamics Ballistics
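As an illustration of the reentry ballistic coefficients quoted above, here is a small sketch using the standard definition β = m/(Cd·A), expressed in lb/ft² to match the quoted ranges. All vehicle figures below are hypothetical placeholders, not data from the article.

```python
# Illustrative sketch of the reentry ballistic coefficient beta = m / (Cd * A),
# in lb/ft^2. Vehicle numbers are assumed placeholders.
import math

def beta_lb_ft2(weight_lb: float, cd: float, diameter_ft: float) -> float:
    area = math.pi * (diameter_ft / 2.0) ** 2   # frontal area, ft^2
    return weight_lb / (cd * area)

# Hypothetical blunt crewed capsule: heavy, but very draggy and wide
print(beta_lb_ft2(weight_lb=12000, cd=1.3, diameter_ft=13))   # ~70 lb/ft^2, slows high up
# Hypothetical slender ICBM reentry vehicle: low drag, small frontal area
print(beta_lb_ft2(weight_lb=800, cd=0.12, diameter_ft=1.8))   # ~2600 lb/ft^2, stays fast
```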
Ballistic coefficient
[ "Physics", "Chemistry", "Engineering" ]
5,336
[ "Applied and interdisciplinary physics", "Aerodynamics", "Aerospace engineering", "Ballistics", "Fluid dynamics" ]
2,201,331
https://en.wikipedia.org/wiki/Telephone%20exchange%20names
A telephone exchange name or central office name was a distinguishing and memorable name assigned to a central office. It identified the switching system to which a telephone was connected, and facilitated the connection of telephone calls between switching systems in different localities. While small towns and rural areas might each be served by a single exchange, large cities were served by multiple switching systems, either distributed in the community constituting multiple exchange areas, or sometimes hosted in the same building to serve a densely populated area. Central offices were usually identified by names that were locally significant. The leading letters of a central office name were used as the leading components of the telephone number representation, so that each telephone number in the area was unique. These letters were mapped to the digits of the dial, which was indicated visibly on the dial's numbering plate. Several systematic telephone numbering plans existed in various communities, typically evolving over time as the subscriber base outgrew older numbering schemes. A widely used numbering plan was a system of using one or two letters from the central office name with four or five digits. Such systems were designated as 2L-4N or 2L-5N, or simply 2–4 and 2–5, respectively, but some large cities initially selected plans with three letters (3L-4N). In 1917, W. G. Blauvelt of AT&T proposed a mapping system that displayed three letters with each of the digits 2 through 9 on the dial. Telephone directories or other telephone number displays, such as in advertising, typically listed the telephone number showing the significant letters of the central office name in bold capital letters, followed by the digits that identified the subscriber line. On the number card of the telephone instrument, the name was typically shown in full, but only the significant letters to be dialed were capitalized, while the rest of the name was shown in lower case. Telephone exchange names were used in many countries, but were phased out in favor of numeric systems in the 1960s. In the United States, the demand for telephone service outpaced the scalability of the alphanumeric system, and after the introduction of area codes for direct-distance dialing, all-number calling became necessary. Similar developments followed around the world, such as the British all-figure dialling.

Background

In the early, small telephone networks, it was customary to initiate a connection to another subscriber by requesting the name of the desired party from the operator. While this method persisted into the 1920s in very small communities, growth of the business soon made this impractical, and subscriber stations were assigned telephone numbers. Single exchanges (central offices) were typically named after the municipality or location, so that calls to another town could be easily identified. Cities soon needed additional branch offices some distance from Central, to accommodate the subscriber base and expanding area, as a single office typically served a maximum of ten thousand telephone numbers. Often, additional central offices might be named for the directions of the compass: North, South, East, and West. But many cities chose other naming schemes, using locally significant names of districts, parks, or other well-known features, such as Market. A caller would request a connection to Market 1234, for example.
The selection of central office names was conducted in a careful manner to avoid misunderstanding of verbal requests. For automatic telephone service, impulse senders (dials) were installed on customer telephones, so that subscribers did not need operators to initiate a call, but simply dialed the directory number themselves. This required a digit or letter identification of central offices, so that the central office for the recipient could be dialed before the line number. Telephone dials were typically supplemented with letters next to the numerals on the dial, as seen in the accompanying photo, so that a name could be dialed by its first letter, or by multiple letters.

United States and Canada

In the United States, the most-populous cities, such as New York City, Philadelphia, Boston, and Chicago, initially implemented dial service with telephone numbers consisting of three letters and four digits (3L-4N) according to a system developed by W. G. Blauvelt of AT&T in 1917. This system mapped letters of the alphabet to digits on the telephone dial. In 1930, New York City converted to a 2L-5N telephone numbering plan. Most other major Canadian and US cities, such as Toronto and Atlanta, were converted from manual exchanges using four digits to a 2L-4N numbering plan. For example, in Montréal, ATwater 1234 was dialed as six pulls on the dial (AT1234) to send the digit sequence 281234. Eventually, starting in the late 1940s, all local numbering plans were changed to the 2L-5N system to prepare for nationwide Operator Toll Dialing. For example, under this system, a well-known number in New York City was listed as PEnnsylvania 6-5000. In small towns with a single central office, local calls typically required dialing only four digits at most. A toll call required the assistance of an operator, who asked for the name of the town and the station number. Some independent telephone companies, not part of the Bell System, also did not implement central office names. In 1915, newly developed panel switching systems were tested in the Mulberry and Waverly exchanges in Newark, New Jersey. When the technology first appeared in the Mulberry exchange, subscriber telephones had no dials, and placing a call required no change for subscribers: they asked an operator to ring their called party as usual. The operator keyed the number into the panel equipment, instead of making cord connections manually. The panel switch was later, from the early 1920s through the 1930s, installed in large metropolitan areas in the Bell System. By the 1950s twenty cities were served by this type of office. Several representations of telephone numbers using central office names capitalized and emboldened the leading letters that were dialed, for example:

Kenmore 9392 is a five-pull (1L-4N) small-city telephone number for the Kenmore exchange in Fort Wayne, Indiana.
MArket 7032 is a six-digit (2L-4N) telephone number. This format was in use from the 1920s through the 1950s, and was phased out c. 1960.
BALdwin 6828 is an urban 3L-4N example, used only in the largest cities before conversion to two-letter central office names.
ENglewood 3-1234 is an example of the 2L-5N format, gradually implemented continent-wide starting in the 1940s, in preparation for DDD.
MUrray Hill 5-9975 is another example of the 2L-5N format, one of the Ricardos' numbers on I Love Lucy. The H in Hill, although not dialed, is still capitalized, but not emboldened, as the first letter of the second word.
In print, such as on business cards or in advertisements, the central office name was often shown by only two letters: TEmpleton 1-6400 would often appear as TE 1-6400. If the central office was known by a name, but no letters were dialed, it was common to capitalize only the first letter of the central office name without emboldening it, e.g., Main 600W or Fairmont 33. Such numbers were typically assigned in manual offices, and the name would be spoken by a subscriber when requesting a destination. Often these were geographically significant names, such as the town's name. In large cities with coexisting manual and dial areas, the numbering was generally standardized to one format. For example, when the last manual exchange in San Francisco was converted to dial in 1953, the numbers had for several years been in the format of JUniper 6-5833. JUniper 4 was an automatic switching system, but JUniper 6 was manual. To call JUniper 6 from JUniper 4, the subscriber dialed the number and it was displayed to the B-board operator at JUniper 6, and that operator would complete the connection manually. In the other direction, to call JUniper 4 from JUniper 6, the subscriber would lift the receiver and speak to the operator, who would in turn dial the JUniper 4 number.

In the 1940s, the Bell System developed the nationwide telephone number plan for Operator Toll Dialing, a system of initially eighty-six numbering plan areas (NPAs) that were assigned the first set of area codes. These were used at first only by switchboard operators to route trunk calls between numbering plan areas, but were the foundation for the North American Numbering Plan. The 2L-5N system for the local directory number became the North American standard. Direct long-distance dialing by customers, using the three-digit area code and a seven-digit telephone number, commenced in the 1950s. During the 1950s, cities still using five- or six-digit numbers converted to the new method of seven-digit dialing. Typically, several six-digit (2L-4N) exchanges were co-located in one building already, with new ones added as old ones filled up. After the conversion, they may have been combined into a new 2L-5N exchange area. For example, the CHerry, FIllmore, ATwater, and KLondike exchanges might be converted to OXford 1, 3, 6, and 7. Usually customers would keep the same station numbers. When mobile radio telephone service was offered, some telephone companies used letters based on various prefixes for unit identifiers (e.g. JL6-1212), or to identify radio channels (e.g. channel JR).

Standardization

In the early 1950s, AT&T established a list of recommended exchange names that were the result of studies to minimize misunderstandings when spoken. The recommendation was intended for newly established exchanges, and did not mandate the renaming of existing historical names. The number sequences 55, 57x, 95x, and 97x had no exchange names specified, as the mappings for the digits 5, 7, and 9 had no vowels, making it difficult to find names with those consonant combinations. As a result, those numbers were very seldom assigned to exchanges. However, KLondike was used for 55x in San Francisco and Columbus, Ohio, and WRigley 5 (975) in Chicago (Wrigley Field). On the telephone dial, letters were mapped to digits using the standard assignments. The following is AT&T's recommended list of central office names as published in 1955, sorted by the first two digits of the three-digit central office code.
The letters Q and Z were never used in the naming system, but Z was often mapped on the telephone dial to the digit 0 (zero). The prefix 55 was set aside for fictitious telephone numbers of the form 555-XXXX. These were often used with the fictitious exchange name KLondike (55).

All-number calling

As demand for telephone service grew in the post–World War II period, it was foreseeable that the demand would exceed the addressing capacity of the existing system of using memorable telephone central office names as prefixes in telephone numbers. Several letter combinations had no pronounceable or memorable names and could not be used. Several North American numbering plan areas (NPAs) were divided so that more office codes became available to smaller regions. However, as the growth accelerated, the Bell System decided to implement all-number calling (ANC) and to deprecate the use of central office names, to provide more central office codes in each NPA. This extended the usable numbering plan, and only two area code splits became necessary between 1962 and 1981. All-number calling was phased in starting in 1958, and most areas had adopted it fully by the mid-1960s. In some areas it did not become universal until the 1980s. The Bell System published and distributed area code handbooks yearly, which compiled the towns available for calling using an area code. Experiencing significant resistance to all-number calling in many areas, the Bell System employed a strategy of gradual changes to ease the transition for customers, and employed media productions to explain the need for and process of the change to the public. Originally, directory listings were printed with the central office name spelled out in full. In the first stage of the conversion, only the dialed letters were printed, as illustrated in this example:

Subscriber                      Original listing      Abbreviated listing
Jones John 123 Anystreet        BUtterfield 5-1212    BU 5-1212
Jones Paul 5 Revolution Rd      ANdrew 3-2368         AN 3-2368

At this stage, telephone companies had the means to assign letter combinations for central office prefixes that were previously unavailable; thus any set of five- or fewer-digit numbers could be expanded to seven digits without naming conflicts. Finally, all central office codes were converted to a numerical format, as in the last column of this table:

Subscriber                       Alphanumeric coding   All-number calling
Ramsay Betty 12 Connecticut Rd   LT 1-5225             581-5225

The Bell System proceeded to convert named exchanges to all-number calling, starting in smaller communities. No significant opposition arose until conversion began in major cities. In some cities, such as San Francisco, the opposition was organized; the opposition group in San Francisco was called the Anti Digit Dialing League, of which S. I. Hayakawa was a notable member. The opposition caused AT&T to slow the conversion process, and names continued to be used in cities such as New York, which went to ANC only in 1978. Philadelphia had named exchanges in the Bell of Pennsylvania telephone book as late as 1983, long after AT&T had hoped to complete the conversion. Bell Canada, Alberta Government Telephones, and BC Tel completed most conversions of existing numbers during the first half of the 1960s. In Toronto, historically 2L+4N before numbers were lengthened to accommodate the 1957 introduction of direct distance dialling, the March 1966 directory had no exchange names.
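The letter-to-digit conversion shown in the tables above can be expressed compactly. Here is a minimal sketch using the standard North American dial mapping (Q and Z unassigned; the digits 1 and 0 carried no letters):

```python
# Sketch of the standard North American dial mapping behind the tables above.

DIAL = {"2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
        "6": "MNO", "7": "PRS", "8": "TUV", "9": "WXY"}
LETTER_TO_DIGIT = {letter: digit for digit, letters in DIAL.items() for letter in letters}

def to_all_number(listing: str) -> str:
    """Convert an abbreviated listing such as 'LT 1-5225' to all-number form."""
    compact = listing.upper().replace(" ", "")
    return "".join(LETTER_TO_DIGIT.get(ch, ch) for ch in compact)

print(to_all_number("LT 1-5225"))   # -> 581-5225, matching the table above
print(to_all_number("BU 5-1212"))   # -> 285-1212
print(to_all_number("AN 3-2368"))   # -> 263-2368
```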
Typically in larger communities, conversions would be timed with issues of the telephone directory. For example, in London, Ontario, three conversions took place, starting in February 1962 and completing in September 1963. GEneral 2, 3, and 9 were converted first; later GLadstone 1 and 5, and finally GEneral 4 and 8. An example from Montreal, Quebec, where numbers were extended from 2L+4N to 2L+5N on August 4, 1957: WIlbank became WEllington 2, WEllington became WEllington 3 (a rare example of keeping the exchange name), FItzroy became WEllington, GLenview became WEllington 7, VEndome became DUpont 7, HEmlock became POntiac 7, TRenmore became POntiac 8, HArbour became VIctor 5, and MArquette became VIctor 9. The use of letters in exchange names resulted in the placement of letters on the telephone dial, even outside the areas using the letter and number combinations. Some Canadian areas at first used original letter schemes, notably Calgary, Alberta, until later standardization within North America. Québec exchange names differed from those on standard Bell System lists due to the need for names in the French language; Hull, Quebec's 77x (PR as in PRovince) needed to be recognizable in both languages in 1957. In smaller communities with four- or five-digit numbers and a single city exchange, central office names appeared for the first time in the late 1950s, and then solely to match the North American direct distance dialing standard of a three-digit area code and seven-digit local number. The names, usually chosen from standard Bell System lists, had no local significance and were short-lived; phase-out began soon after 1960.

United Kingdom and France

Virtually every telephone exchange in Europe was named after a single village, town, or city, but sometimes a geographical feature (e.g. The Lizard) or region (e.g. Scillonia for the Isles of Scilly) would be used for rural areas. However, in the largest cities it was clear as early as the 1880s that several exchanges would be needed. These were usually given names reflecting a district of a city, for example Holborn in London, Docks in Manchester, Leith in Edinburgh, or in some cases an entirely unrelated name, e.g., Acorn or Advance in London, Pyramid in Manchester, and Midland in Birmingham. As automated systems were introduced starting in the late 1920s, the first three letters of these names were used in the numbering plans for those exchanges. The 3L-4N system was notably used in the capital cities Paris and London, both examples of the big-city problem: large cities served by many manual exchanges could only be converted to automatic operation gradually, which required operating both types simultaneously for several years. Telephone directories for these cities showed the first three letters of the exchange in bold capital letters when all seven digits were to be dialed. For example, a subscriber number for Scotland Yard on London's Whitehall exchange was shown as "WHItehall 1212". If the first three letters were capitalized but not bolded, e.g., HAYes 1295, the caller would dial the first three letters only, and when connected to Hayes ask the operator for the local number. Later, Coded Call Indicator working equipment was installed at some manual exchanges, so that the caller could dial all seven digits and the required number would be displayed to the operator. In the United Kingdom, the first Director exchange in London, Holborn Tandem, was cut over in 1927, preceded by any necessary changes in the London area, e.g.
changing some exchange names and making all local numbers 4N (4-digit). As each digit represents three letters, the same network cannot have exchanges called BRIxton and CRIcklewood, which both correspond to 274. In smaller director areas, some A-digit levels were combined so that a local director exchange would only need four or fewer groups of directors instead of eight. But if (say) A-digit levels 7 and 8 were combined, it would not be possible to have both PERivale and TERminus exchanges in the same network. The other main UK conurbations followed suit, namely Manchester in 1930 (e.g., DEAnsgate 3414, the number for Kendals department store), Birmingham (in 1931), Glasgow (in 1937), and later Liverpool and Edinburgh (c. 1950). The standards for converting exchange name letters in Europe varied, notably in the placement of the letters O, Q, and Z. When national automatic Subscriber Trunk Dialling (STD) was introduced in the United Kingdom in 1958, the first two letters of main exchange names were incorporated in the STD codes, e.g. Aylesbury was allocated STD code 0AY6. A switchover to all-figure dialling began in 1966, although it was not until the early 1970s that all alphanumeric exchange names were converted. Despite the move to all-figure STD codes, and although the former Director areas were merged into single dialling codes and stated as all-figure numbers, until the 1990s it remained standard practice in the rest of the United Kingdom to state telephone numbers as exchange name + number, or to include the exchange name before the national STD code. This was to enable callers to look up the correct dialling code, because calls to nearby exchanges often required a local dialling code rather than the STD code. In Paris and its suburbs, the conversion from 3L-4N to all numbers occurred in October 1963. For example, ÉLYsées became 379, LOUvre 508, PIGalle 744, POMpadour 706... But until October 1985, when an eighth digit was added, it remained possible to make use of almost all the previous combinations.

In popular culture

Telephone exchange names often provide a historical, memorable, and even nostalgic context, personal connection, or identity to a community. They can therefore often be found in popular culture, such as music, art, and prose. An old 2L-5N format appears in the song title "PEnnsylvania 6-5000" (phone number PE 6-5000), recorded by Glenn Miller. The inspiration for that song, the Hotel Pennsylvania in New York City, held that phone number as +1-212-736-5000 until its closure in April 2020. PEnnsylvania 6-5000 was later spoofed in the Bugs Bunny cartoon Transylvania 6-5000 and the horror/comedy film Transylvania 6-5000. Other popular songs have used 2L-5N telephone exchanges in their names, including "BEechwood 4-5789" by The Marvelettes, "LOnesome 7-7203" by Hawkshaw Hawkins, and "ECho Valley 2-6809" by The Partridge Family. The title of BUtterfield 8, the 1935 John O'Hara novel whose film adaptation won Elizabeth Taylor an Academy Award for Best Actress, refers to the exchange of the characters' telephone numbers (on the Upper East Side of Manhattan). The radio show Candy Matson, YUkon 2-8209 first aired on NBC West Coast radio in March 1949. Another movie title based on these types of phone exchanges is director Henry Hathaway's "Call Northside 777" (1948), starring Jimmy Stewart. Artie Shaw named his band, Artie Shaw and His Gramercy 5, after his home telephone exchange in Greenwich Village.
In 1940 the original Gramercy Five pressed eight records; Shaw dissolved the band in early 1941. The 1952 stage play and screenplay "Dial M for Murder" by Frederick Knott refers to the MAIda Vale number used to summon the intended victim to the telephone. Stan Freberg, on his 1966 album Freberg Underground, objected to all-digit dialing in song, including the lyrics: They took away our Murray Hills, They took away our Sycamores, They took away Tuxedo and State, They took away our Plaza, our Yukon, our Michigan, And left us with 47329768… See also Phonewords Telephone keypad References External links phone.net46.net historical exchange lists for Atlanta, Boston, Chicago, New Orleans, NYC, Philadelphia, Pittsburgh, Washington DC Notes on Nationwide Dialing, AT&T - 1955. Section II Appendix A is a List of Suitable Central Office Names London Telephone numbers
Telephone exchange names
[ "Mathematics" ]
4,614
[ "Mathematical objects", "Numbers", "Telephone numbers" ]
2,201,417
https://en.wikipedia.org/wiki/Bulk%20density
In materials science, bulk density, also called apparent density, is a material property defined as the mass of the many particles of the material divided by the bulk volume. Bulk volume is defined as the total volume the particles occupy, including the particles' own volume, inter-particle void volume, and the particles' internal pore volume. Bulk density is useful for materials such as powders, granules, and other "divided" solids, especially in reference to mineral components (soil, gravel), chemical substances, pharmaceutical ingredients, foodstuff, or any other masses of corpuscular or particulate matter (particles). Bulk density is not the same as the particle density, which is an intrinsic property of the solid and does not include the volume for voids between particles (see: density of non-compact materials). Bulk density is an extrinsic property of a material; it can change depending on how the material is handled. For example, a powder poured into a cylinder will have a particular bulk density; if the cylinder is disturbed, the powder particles will move and usually settle closer together, resulting in a higher bulk density. For this reason, the bulk density of powders is usually reported both as "freely settled" (or "poured") density and "tapped" density (where the tapped density refers to the bulk density of the powder after a specified compaction process, usually involving vibration of the container).

Soil

The bulk density of soil depends greatly on the mineral makeup of the soil and the degree of compaction. The density of quartz is around , but the (dry) bulk density of a mineral soil is normally about half that density, between . In contrast, soils rich in soil organic carbon and some friable clays tend to have lower bulk densities () due to a combination of the low density of the organic materials themselves and increased porosity. For instance, peat soils have bulk densities from . In a detailed study which used 6,000 analysed samples from the European Union, a high-resolution (100 m) map of soil bulk density for the 0–20 cm layer was produced using a regression model. Croplands have almost 1.5 times higher bulk density compared to woodlands. Bulk density of soil is usually determined from a core sample, which is taken by driving a metal corer into the soil at the desired depth and horizon. This gives a soil sample of known total volume, $V_t$. From this sample the wet bulk density and the dry bulk density can be determined. For the wet bulk density (total bulk density) this sample is weighed, giving the mass $m_t$. For the dry bulk density, the sample is oven dried and weighed, giving the mass of soil solids, $m_s$. The relationship between these two masses is $m_t = m_s + m_l$, where $m_l$ is the mass of substances lost on oven drying (often, mostly water). The dry and wet bulk densities are calculated as

dry bulk density = $m_s / V_t$ (mass of dry soil divided by the volume as a whole)
wet bulk density = $m_t / V_t$ (mass of soil plus liquids divided by the volume as a whole)

The dry bulk density of a soil is inversely related to the porosity of the same soil: the more pore space in a soil, the lower the value for bulk density. Bulk density of a region in the interior of the Earth is also related to the seismic velocity of waves travelling through it: for P-waves, this has been quantified with Gardner's relation. The higher the density, the faster the velocity.
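A minimal sketch of the core-sample calculations described above; the sample masses, core volume, and the assumed mineral particle density of 2.65 g/cm³ are illustrative, not values from the text.

```python
# Sketch of the wet/dry bulk density and porosity calculations above.
# All sample numbers are made-up illustrative values.

def wet_bulk_density(m_t: float, v_t: float) -> float:
    """Wet (total) bulk density: mass of soil plus liquids over total volume."""
    return m_t / v_t

def dry_bulk_density(m_s: float, v_t: float) -> float:
    """Dry bulk density: mass of oven-dried solids over total volume."""
    return m_s / v_t

def porosity(dry_bd: float, particle_density: float = 2.65) -> float:
    """Porosity via the inverse relation to dry bulk density; 2.65 g/cm^3 is a
    commonly assumed particle density for mineral soil."""
    return 1.0 - dry_bd / particle_density

v_t = 100.0                # core volume, cm^3
m_t, m_s = 165.0, 132.0    # field-moist and oven-dry masses, g (m_l = 33 g of water)
bd = dry_bulk_density(m_s, v_t)
print(f"wet BD = {wet_bulk_density(m_t, v_t):.2f} g/cm^3")
print(f"dry BD = {bd:.2f} g/cm^3, porosity = {porosity(bd):.2f}")
```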
See also Brazil nut effect Characterisation of pore space in soil Effective porosity Density meter Number density Notes External links University of Leicester podcast 'How to measure dry bulk density' Bulk density calculator 'Determination of bulk density' Mass density Particulates Soil physics
Bulk density
[ "Physics", "Chemistry" ]
749
[ "Mechanical quantities", "Applied and interdisciplinary physics", "Physical quantities", "Mass", "Soil physics", "Intensive quantities", "Volume-specific quantities", "Particulates", "Density", "Particle technology", "Mass density", "Matter" ]
2,201,447
https://en.wikipedia.org/wiki/Angle%20of%20arrival
The angle of arrival (AoA) of a signal is the direction from which the signal (e.g. radio, optical or acoustic) is received.

Measurement

Measurement of AoA can be done by determining the direction of propagation of a radio-frequency wave incident on an antenna array, or from the maximum signal strength during antenna rotation. The AoA can be calculated by measuring the time difference of arrival (TDOA) between individual elements of the array. Generally this TDOA measurement is made by measuring the difference in received phase at each element in the antenna array. This can be thought of as beamforming in reverse. In beamforming, the signal from each element is weighted to "steer" the gain of the antenna array. In AoA, the delay of arrival at each element is measured directly and converted to an AoA measurement. Consider, for example, a two-element array spaced apart by one-half the wavelength of an incoming RF wave. If a wave is incident upon the array at boresight, it will arrive at each antenna simultaneously. This will yield a 0° phase difference measured between the two antenna elements, equivalent to a 0° AoA. If a wave is incident upon the array from endfire (along the axis of the array), a 180° phase difference will be measured between the elements, corresponding to a 90° AoA. In the single-antenna case, data-driven techniques are powerful tools to estimate the AoA, capitalizing on the inherent imperfections of the antenna. In optics, AoA can be calculated using interferometry.

Applications

An application of AoA is in the geolocation of cell phones. The aim is either for the cell system to report the location of a cell phone placing an emergency call, or to provide a service to tell the user of the cell phone where they are. Multiple receivers on a base station would calculate the AoA of the cell phone's signal, and this information would be combined to determine the phone's location. AoA is generally used to discover the location of pirate radio stations or of any military radio transmitter. In submarine acoustics, AoA is used to localize objects with active or passive ranging.

Limitation

Limitations on the accuracy of angle-of-arrival estimation in digital antenna arrays are associated with the jitter of the ADCs and DACs.

See also Geolocation GNSS GSM localization Multilateration Radiolocation Time of arrival Triangulation Trilateration Wideband Space Division Multiple Access Direction finding References Angle Signal processing
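As a sketch of the two-element phase-difference calculation described in the measurement section above: the measured phase difference Δφ maps to an angle via arcsin(Δφ·λ/(2π·d)). The spacing and wavelength below are arbitrary illustrative choices.

```python
# Two-element AoA estimate from a measured phase difference.
import math

def angle_of_arrival(phase_diff_rad: float, spacing_m: float, wavelength_m: float) -> float:
    """AoA in degrees from boresight for a two-element array."""
    s = phase_diff_rad * wavelength_m / (2.0 * math.pi * spacing_m)
    return math.degrees(math.asin(max(-1.0, min(1.0, s))))  # clamp for safety

wavelength = 0.10              # e.g. a ~3 GHz signal
d = wavelength / 2             # half-wavelength element spacing, as in the text
print(angle_of_arrival(0.0, d, wavelength))       # 0.0  -> boresight arrival
print(angle_of_arrival(math.pi, d, wavelength))   # 90.0 -> endfire arrival
```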
Angle of arrival
[ "Physics", "Technology", "Engineering" ]
504
[ "Geometric measurement", "Scalar physical quantities", "Telecommunications engineering", "Physical quantities", "Computer engineering", "Signal processing", "Wikipedia categories named after physical quantities", "Angle" ]
2,201,538
https://en.wikipedia.org/wiki/Recurrence%20quantification%20analysis
Recurrence quantification analysis (RQA) is a method of nonlinear data analysis (cf. chaos theory) for the investigation of dynamical systems. It quantifies the number and duration of recurrences of a dynamical system presented by its phase space trajectory.

Background

The recurrence quantification analysis (RQA) was developed in order to quantify differently appearing recurrence plots (RPs), based on the small-scale structures therein. Recurrence plots are tools which visualise the recurrence behaviour of the phase space trajectory of dynamical systems:

$R_{i,j} = \Theta(\varepsilon - \|\vec{x}_i - \vec{x}_j\|), \quad i, j = 1, \ldots, N$,

where $\Theta$ is the Heaviside function and $\varepsilon$ a predefined tolerance. Recurrence plots mostly contain single dots and lines which are parallel to the mean diagonal (line of identity, LOI) or which are vertical/horizontal. Lines parallel to the LOI are referred to as diagonal lines and the vertical structures as vertical lines. Because an RP is usually symmetric, horizontal and vertical lines correspond to each other, and, hence, only vertical lines are considered. The lines correspond to a typical behaviour of the phase space trajectory: whereas the diagonal lines represent such segments of the phase space trajectory which run parallel for some time, the vertical lines represent segments which remain in the same phase space region for some time.

If only a univariate time series is available, the phase space can be reconstructed by using a time delay embedding (see Takens' theorem):

$\vec{x}_i = (u_i, u_{i+\tau}, \ldots, u_{i+(m-1)\tau})$,

where $u_i$ is the time series (with $u_i = u(i\,\Delta t)$ and $\Delta t$ the sampling time), $m$ the embedding dimension, and $\tau$ the time delay. However, phase space reconstruction is not an essential part of the RQA (although often stated in literature), because it is based on phase space trajectories, which could be derived from the system's variables directly (e.g., from the three variables of the Lorenz system) or from multivariate data.

The RQA quantifies the small-scale structures of recurrence plots, which present the number and duration of the recurrences of a dynamical system. The measures introduced for the RQA were developed heuristically between 1992 and 2002. They are actually measures of complexity. The main advantage of the RQA is that it can provide useful information even for short and non-stationary data, where other methods fail. RQA can be applied to almost every kind of data. It is widely used in physiology, but has also been successfully applied to problems in engineering, chemistry, Earth sciences, etc. Further extensions and variations of measures for quantifying recurrence properties have been proposed to address specific research questions. RQA measures are also combined with machine learning approaches for classification tasks.

RQA measures

The simplest measure is the recurrence rate, which is the density of recurrence points in a recurrence plot:

$RR = \frac{1}{N^2} \sum_{i,j=1}^{N} R_{i,j}$.

The recurrence rate corresponds with the probability that a specific state will recur. It is almost equal to the definition of the correlation sum, except that the LOI is excluded from the computation.

The next measure is the percentage of recurrence points which form diagonal lines in the recurrence plot of minimal length $l_{\min}$:

$DET = \frac{\sum_{l=l_{\min}}^{N} l\, P(l)}{\sum_{l=1}^{N} l\, P(l)}$,

where $P(l)$ is the frequency distribution of the lengths $l$ of the diagonal lines (i.e., it counts how many instances have length $l$). This measure is called determinism and is related with the predictability of the dynamical system, because white noise has a recurrence plot with almost only single dots and very few diagonal lines, whereas a deterministic process has a recurrence plot with very few single dots but many long diagonal lines.
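A minimal sketch of the definitions above: the time-delay embedding, the recurrence matrix, the recurrence rate RR, and determinism DET. The parameters (m, τ, ε) and the toy signal are illustrative choices only.

```python
# Recurrence matrix R_ij = Theta(eps - ||x_i - x_j||), RR, and DET.
import numpy as np

def embed(u, m, tau):
    """Time-delay embedding of a univariate series u into m dimensions."""
    n = len(u) - (m - 1) * tau
    return np.column_stack([u[k * tau : k * tau + n] for k in range(m)])

def recurrence_matrix(x, eps):
    """Binary recurrence matrix from phase-space points (rows of x)."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d <= eps).astype(int)

def recurrence_rate(R):
    return R.sum() / R.shape[0] ** 2

def diagonal_lengths(R):
    """Diagonal line lengths above the LOI (sufficient, since R is symmetric)."""
    lengths = []
    for k in range(1, R.shape[0]):
        run = 0
        for v in list(np.diagonal(R, offset=k)) + [0]:  # 0-sentinel flushes last run
            if v:
                run += 1
            else:
                if run:
                    lengths.append(run)
                run = 0
    return np.array(lengths)

def determinism(R, l_min=2):
    ls = diagonal_lengths(R)
    return ls[ls >= l_min].sum() / ls.sum() if ls.size else 0.0

u = np.sin(np.linspace(0, 8 * np.pi, 300))          # toy periodic signal
R = recurrence_matrix(embed(u, m=2, tau=5), eps=0.1)
print(recurrence_rate(R), determinism(R))           # DET near 1 for periodic data
```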
The number of recurrence points which form vertical lines can be quantified in the same way:

$LAM = \frac{\sum_{v=v_{\min}}^{N} v\, P(v)}{\sum_{v=1}^{N} v\, P(v)}$,

where $P(v)$ is the frequency distribution of the lengths $v$ of the vertical lines, which have at least a length of $v_{\min}$. This measure is called laminarity and is related with the amount of laminar phases in the system (intermittency).

The lengths of the diagonal and vertical lines can be measured as well. The averaged diagonal line length

$L = \frac{\sum_{l=l_{\min}}^{N} l\, P(l)}{\sum_{l=l_{\min}}^{N} P(l)}$

is related with the predictability time of the dynamical system, and the trapping time, measuring the average length of the vertical lines,

$TT = \frac{\sum_{v=v_{\min}}^{N} v\, P(v)}{\sum_{v=v_{\min}}^{N} P(v)}$,

is related with the laminarity time of the dynamical system, i.e. how long the system remains in a specific state.

Because the length of the diagonal lines is related to the time for which segments of the phase space trajectory run parallel, i.e. to the divergence behaviour of the trajectories, it was sometimes stated that the reciprocal of the maximal length of the diagonal lines (without LOI) would be an estimator for the positive maximal Lyapunov exponent of the dynamical system. Therefore, the maximal diagonal line length $L_{\max}$ and the divergence

$DIV = \frac{1}{L_{\max}}$

are also measures of the RQA. However, the relationship between these measures and the positive maximal Lyapunov exponent is not as simple as stated, but even more complex (to calculate the Lyapunov exponent from an RP, the whole frequency distribution of the diagonal lines has to be considered). The divergence can have the trend of the positive maximal Lyapunov exponent, but not more. Moreover, RPs of white noise processes can also have a really long diagonal line, although very seldom, just by a finite probability. Therefore, the divergence cannot reflect the maximal Lyapunov exponent.

The probability $p(l)$ that a diagonal line has exactly length $l$ can be estimated from the frequency distribution $P(l)$ with $p(l) = P(l) / \sum_{l} P(l)$. The Shannon entropy of this probability,

$ENTR = -\sum_{l=l_{\min}}^{N} p(l) \ln p(l)$,

reflects the complexity of the deterministic structure in the system. However, this entropy depends sensitively on the bin number and, thus, may differ for different realisations of the same process, as well as for different data preparations.

The last measure of the RQA quantifies the thinning-out of the recurrence plot. The trend is the regression coefficient of a linear relationship between the density of recurrence points in a line parallel to the LOI and its distance to the LOI. More exactly, consider the recurrence rate in a diagonal line parallel to the LOI at distance $k$ (diagonal-wise recurrence rate or τ-recurrence rate):

$RR_k = \frac{1}{N-k} \sum_{j-i=k} R_{i,j}$;

then the trend is defined by

$TREND = \frac{\sum_{k=1}^{\tilde{N}} (k - \tilde{N}/2)(RR_k - \langle RR_k \rangle)}{\sum_{k=1}^{\tilde{N}} (k - \tilde{N}/2)^2}$,

with $\langle RR_k \rangle$ as the average value and $\tilde{N} < N$. This latter relation should ensure the avoidance of edge effects from too low recurrence point densities at the edges of the recurrence plot. The measure trend provides information about the stationarity of the system.

Similar to the $\tau$-recurrence rate, the other measures based on the diagonal lines (DET, L, ENTR) can be defined diagonal-wise. These definitions are useful to study interrelations or synchronisation between different systems (using recurrence plots or cross recurrence plots).

Time-dependent RQA

Instead of computing the RQA measures of the entire recurrence plot, they can be computed in small windows moving over the recurrence plot along the LOI. This provides time-dependent RQA measures which allow detecting, e.g., chaos-chaos transitions. Note: the choice of the size of the window can strongly influence the measure trend.

Example

See also Recurrence plot, a powerful visualisation tool of recurrences in dynamical (and other) systems.
Recurrence period density entropy, an information-theoretic method for summarising the recurrence properties of both deterministic and stochastic dynamical systems. Approximate entropy References External links http://www.recurrence-plot.tk/ Signal processing Dynamical systems Chaos theory Nonlinear time series analysis
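As a companion to the laminarity (LAM) and trapping time (TT) definitions given earlier, here is a small sketch that reuses the recurrence matrix R from the previous example.

```python
# Vertical-line measures LAM and TT from a recurrence matrix R, following
# the definitions given earlier (naive O(N^2) line counting).
import numpy as np

def vertical_lengths(R):
    """Lengths of vertical line segments in the recurrence matrix."""
    lengths = []
    for col in R.T:
        run = 0
        for v in list(col) + [0]:        # 0-sentinel flushes the last run
            if v:
                run += 1
            else:
                if run:
                    lengths.append(run)
                run = 0
    return np.array(lengths)

def laminarity(R, v_min=2):
    vs = vertical_lengths(R)
    return vs[vs >= v_min].sum() / vs.sum() if vs.size else 0.0

def trapping_time(R, v_min=2):
    vs = vertical_lengths(R)
    vs = vs[vs >= v_min]
    return vs.mean() if vs.size else 0.0
```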
Recurrence quantification analysis
[ "Physics", "Mathematics", "Technology", "Engineering" ]
1,559
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Mechanics", "Dynamical systems" ]
2,201,661
https://en.wikipedia.org/wiki/New%20car%20smell
New car smell is the odor that comes from the combination of materials found in new automobiles, as well as other vehicles like buses, trucks, and aircraft. It comprises various elements, such as new leather, plastics and textile interiors. Because the scent is widely liked, many automobile manufacturers have also started mimicking desired scents and utilising them to attract customers in showrooms. Although the scent is described as pleasant by some, there is some question about the possibility that these chemicals pose a health risk. A study in 2023 found that formaldehyde and acetaldehyde gases exceeded Chinese government safety standards, and researchers recommended that new car buyers drive with windows open.

Chemical composition

Both the scent and what produces it vary somewhat in different kinds of cars. Most of the interior of an automobile consists of plastic held together with a number of adhesives and sealers. Such materials release volatile organic compounds via off-gassing. These fumes are generally attributed to mixtures of many different chemicals off-gassing and to plasticizers, although DEHP, widely used in PVC, is not very volatile. Researchers tested more than 200 U.S. vehicles of model years 2011–2012 for chemicals such as organobromine compounds (associated with brominated flame retardants, or BFRs), organochlorine compounds (e.g., polyvinyl chloride, or PVC), and heavy metals that off-gas from various parts such as the steering wheel, dashboard, armrests, and seats. It is recommended to keep new cars well ventilated while driving, especially during the summer. A 1995 analysis of the air from a new Lincoln Continental found over 50 volatile organic compounds, which were identified as coming from sources such as cleaning and lubricating compounds, paint, carpeting, leather and vinyl treatments, latex glue, and gasoline and exhaust fumes. An analysis two months after the initial one found a significant reduction in the chemicals. The researchers observed that the potential toxicity of many of these compounds could pose a danger to human health. Total volatile organic compound levels can reach 7,500 micrograms per cubic meter; in one study, concentrations decayed by approximately 90% over a three-week period, and over sixty chemical compounds were identified inside the interiors of the four vehicles examined. In some instances, the odor results from a manufacturing defect. According to official documents of Bentley Motors (BT26), an "obnoxious odor" in Bentley cars for model years 1999–2002 was traced to a rust inhibitor. In some cultures, e.g. the Chinese culture, the new car smell is not considered desirable, and manufacturers work to eliminate it.

Health hazards

A two-year study released in 2001 by the CSIRO in Australia found several health problems associated with these chemicals. CSIRO research scientist Dr. Stephen Brown reported anecdotal accounts of disorientation, headache, and irritation in some drivers of new cars. He measured pollutant levels in new cars that were sufficient to cause similar effects within minutes in controlled experiments by other researchers. Chemicals found in the cars included the carcinogen benzene, two other possible carcinogens, cyclohexanone and styrene, and several other toxic chemicals. The "new car smell", while appealing to many, can pose certain health risks due to the presence of volatile organic compounds (VOCs) emitted from materials in the vehicle's interior.
A more recent study in Japan found that the volatile organic chemicals in a new minivan were over 35 times the health limit the day after its delivery. Within four months levels had fallen under the limit, but they increased again in the hot summer months, taking three years to remain permanently below the limit. The limits were set by the Japanese health ministry in response to more car owners suffering from sick building syndrome. A Daily Telegraph article on the study described the enjoyment of new car smell as "akin to glue-sniffing". However, another study showed no toxicity from new car odors in lab-grown cells, although the odors did trigger an immune system reaction. The most common side effects of the new car smell are headaches, sore throats, nausea, and drowsiness. References Further reading Car culture Odor Air pollution Transport and the environment Car ownership
New car smell
[ "Physics" ]
871
[ "Physical systems", "Transport", "Transport and the environment" ]
2,201,758
https://en.wikipedia.org/wiki/Tishchenko%20reaction
The Tishchenko reaction is an organic chemical reaction that involves disproportionation of an aldehyde in the presence of an alkoxide. The reaction is named after Russian organic chemist Vyacheslav Tishchenko, who discovered that aluminium alkoxides are effective catalysts for the reaction. In the related Cannizzaro reaction, the base is sodium hydroxide, the oxidation product is a carboxylic acid, and the reduction product is an alcohol.

History

The reaction involving benzaldehyde was discovered by Claisen, using sodium benzylate as the base. The reaction produces benzyl benzoate. Enolizable aldehydes are not amenable to Claisen's conditions. Vyacheslav Tishchenko discovered that aluminium alkoxides allowed the conversion of enolizable aldehydes to esters.

Examples

The Tishchenko reaction of acetaldehyde gives the commercially important solvent ethyl acetate. The reaction is catalyzed by aluminium alkoxides. The Tishchenko reaction is used to obtain isobutyl isobutyrate, a specialty solvent. Hydroxypivalic acid neopentyl glycol ester is produced by a Tishchenko reaction from hydroxypivaldehyde in the presence of a basic catalyst (e.g., aluminium oxide). The Tishchenko reaction of paraformaldehyde in the presence of aluminium methylate or magnesium methylate forms methyl formate; paraformaldehyde also reacts with boric acid to form methyl formate. The key step in the reaction mechanism is a 1,3-hydride shift in the hemiacetal intermediate formed from two successive nucleophilic addition reactions, the first one from the catalyst. The hydride shift regenerates the alkoxide catalyst.

See also Aldol–Tishchenko reaction Baylis–Hillman reaction Cannizzaro reaction Meerwein–Ponndorf–Verley reduction Oppenauer oxidation References Further reading В. Е. Тищенко and Г. Н. Григорьева (V. E. Tishchenko and G. N. Grigor'eva) (1906) "О действии амальгамы магния на изомасляного альдегида" (On the effect of magnesium amalgam on isobutyric aldehyde), Журнал Русского Физико-Химического Общества (Journal of the Russian Physico-Chemical Society), 38 : 540–547. (in Russian) М. П. Воронкова and В. Е. Тищенко (M. P. Voronkova and V. E. Tishchenko) (1906) "О действии амальгамы магния на уксусный альдегид" (On the effect of magnesium amalgam on acetic aldehyde), Журнал Русского Физико-Химического Общества (Journal of the Russian Physico-Chemical Society), 38 : 547–550. (in Russian) В. Тищенко (V. Tishchenko) (1899) "Действие амальгамированного алюминия на алкоголь. Алкоголятов алюминия, их свойства и реакции." (Effect of amalgamated aluminium on alcohol. Aluminium alkoxides, their properties and reactions.), Журнал Русского Физико-Химического Общества (Journal of the Russian Physico-Chemical Society), 31 : 694–770. (in Russian) Organic reactions Name reactions
Tishchenko reaction
[ "Chemistry" ]
961
[ "Name reactions", "Organic redox reactions", "Organic reactions" ]
2,201,779
https://en.wikipedia.org/wiki/Lamellar%20structure
In materials science, lamellar structures or microstructures are composed of fine, alternating layers of different materials in the form of lamellae. They are often observed in cases where a phase transition front moves quickly, leaving behind two solid products, as in rapid cooling of eutectic (such as solder) or eutectoid (such as pearlite) systems. Such conditions force phases of different composition to form, but allow little time for diffusion to produce those phases' equilibrium compositions. Fine lamellae solve this problem by shortening the diffusion distance between phases, but their high surface energy makes them unstable and prone to break up when annealing allows diffusion to progress. A deeper eutectic or more rapid cooling will result in finer lamellae; as the size of an individual lamella approaches zero, the system will instead retain its high-temperature structure. Two common cases of this include cooling a liquid to form an amorphous solid, and cooling eutectoid austenite to form martensite. In biology, normal adult bones possess a lamellar structure, which may be disrupted by some diseases. References Membrane biology Physical chemistry
Lamellar structure
[ "Physics", "Chemistry" ]
234
[ "Applied and interdisciplinary physics", "Membrane biology", "nan", "Molecular biology", "Physical chemistry" ]
2,201,851
https://en.wikipedia.org/wiki/Defeminization
In developmental biology and zoology, defeminization is an aspect of the process of sexual differentiation by which a potential female-specific structure, function, or behavior is changed by one of the processes of male development. See also Sexual differentiation Defeminization and masculinization Virilization Feminization References Sexual anatomy Zoology Physiology
Defeminization
[ "Biology" ]
69
[ "Sexual anatomy", "Zoology", "Physiology", "Sex" ]
2,202,156
https://en.wikipedia.org/wiki/Komagataella
Komagataella is a methylotrophic yeast within the order Saccharomycetales. It was found in the 1960s as Pichia pastoris, notable for its ability to use methanol as a source of carbon and energy. In 1995, P. pastoris was reassigned as the sole representative of the genus Komagataella, becoming Komagataella pastoris. In 2005, it was found that almost all strains used industrially and in labs belong to a separate species, K. phaffii. Later studies have further distinguished new species in this genus, resulting in a total of 7 recognized species. It is not uncommon to see the old name still in use in the context of protein production, as of 2023; in less formal use, the yeast may confusingly be referred to as pichia. After years of study, Komagataella is widely used in biochemical research and the biotech industry. With strong potential as an expression system for protein production, as well as a model organism for genetic study, Komagataella phaffii has become important for biological research and biotech applications.

Taxonomy

According to GBIF:
Komagataella kurtzmanii
Komagataella mondaviorum
Komagataella pastoris
Komagataella phaffii – responsible for most, if not all, industrial & research use
Komagataella populi
Komagataella pseudopastoris
Komagataella ulmi

Komagataella in nature

Natural habitat

In nature, Komagataella is found on trees, such as chestnut trees. It is a heterotroph and can use several carbon sources, such as glucose, glycerol and methanol; however, it cannot use lactose.

Reproduction

Komagataella can undergo both asexual reproduction and sexual reproduction, by budding and by ascospore formation. Accordingly, two types of Komagataella cells exist: haploid and diploid cells. In the asexual life cycle, haploid cells undergo mitosis for reproduction. In the sexual life cycle, diploid cells undergo sporulation and meiosis. The growth rate of its colonies can vary over a large range, from near zero to a doubling time of one hour, which is suitable for industrial processes.

Komagataella as a model organism

In recent years, Komagataella has been investigated and identified as a good model organism, with several advantages. First of all, Komagataella can be grown and used easily in the lab. Like other widely used yeast models, it has a relatively short life span and fast regeneration time. Moreover, inexpensive culture media have been designed on which Komagataella can grow quickly, with high cell density. Whole genome sequencing of Komagataella has been performed. The K. phaffii GS115 genome has been sequenced by the Flanders Institute for Biotechnology and Ghent University, and published in Nature Biotechnology. The genome sequence and gene annotation can be browsed through the ORCAE system. The complete genomic data allow scientists to identify homologous proteins and evolutionary relationships between other yeast species and Komagataella. In addition, all seven species had been sequenced by 2022. Furthermore, Komagataella is a single-celled eukaryote, which means researchers can investigate the proteins inside it; homology comparisons with other, more complex eukaryotic species can then be carried out to infer the functions and origins of those proteins. Another advantage of Komagataella is its similarity to the well-studied yeast model Saccharomyces cerevisiae. As a model organism for biology, S. cerevisiae has been well studied for decades and used by researchers for various purposes throughout history.
The two yeast genera, Pichia (sensu lato) and Saccharomyces, have similar growth conditions and tolerances; thus, Komagataella culture can be adopted by laboratories without many modifications. Moreover, unlike S. cerevisiae, Komagataella can functionally process proteins of large molecular weight, which is useful in an expression host. Considering all these advantages, Komagataella can be usefully employed as both a genetic and an experimental model organism. Komagataella as a genetic model organism As a genetic model organism, Komagataella can be used for genetic analysis and large-scale genetic crossing, since it combines complete genome data with the ability to carry out complex eukaryotic genetic processing in a relatively small genome. The functional genes for peroxisome assembly were investigated by comparing wild-type and mutant strains of Komagataella. Komagataella as an experimental model organism As an experimental model organism, Komagataella has mainly been used as a host system for transformation. Because it recombines readily with foreign DNA and can process large proteins, much research has used Komagataella as a transformation host to investigate the production of new proteins and the function of artificially designed proteins. In the last decade, Komagataella has been engineered into expression-system platforms, a typical application for a standard experimental model organism, as described below. Komagataella as expression system platform Komagataella is frequently used as an expression system for the production of heterologous proteins. Several properties make Komagataella suited to this task. Currently, several strains of Komagataella are used for biotechnical purposes, with significant differences among them in growth and protein production. Some common variants possess a mutation in the HIS4 gene, which permits the selection of cells that have been successfully transformed with expression vectors. The technology for integrating vectors into the Komagataella genome is similar to that used in Saccharomyces cerevisiae. Advantage Komagataella is able to grow on simple, inexpensive medium at a high growth rate. Komagataella can grow in either shake flasks or a fermenter, which makes it suitable for both small- and large-scale production. Komagataella has two alcohol oxidase genes, AOX1 and AOX2, with strongly inducible promoters. These two genes allow Komagataella to use methanol as a carbon and energy source. The AOX promoters are induced by methanol and repressed by glucose. Usually, the gene for the desired protein is introduced under the control of the AOX1 promoter, which means that protein production can be induced by the addition of methanol to the medium. Research has shown that the promoter derived from the AOX1 gene is extremely well suited to controlling the expression of foreign genes integrated into the Komagataella genome for heterologous protein production. A key trait is that Komagataella can grow to extremely high cell density in culture, which raises the yields of heterologous protein expression. The technology required for genetic manipulation of Komagataella is similar to that of Saccharomyces cerevisiae, one of the most well-studied yeast model organisms; as a result, experimental protocols and materials for Komagataella are easy to establish. 
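The weight given above to high cell density can be illustrated with a short calculation. The sketch below is a back-of-the-envelope illustration only; the doubling time, densities and per-cell productivity are assumed, order-of-magnitude figures, not values taken from this article:

```python
import math

# Assumed, order-of-magnitude parameters (illustrative only):
doubling_time_h = 1.0            # fast exponential-phase doubling time
inoculum_g_per_l = 0.1           # starting biomass, g dry cell weight (DCW) per litre
per_cell_yield_mg_per_g = 20.0   # mg secreted protein per g DCW

for culture, final_density in [("shake flask", 10.0), ("high-density fermenter", 100.0)]:
    # Time to grow from the inoculum to the final density at a constant
    # doubling time: N(t) = N0 * 2^(t / t_d)  =>  t = t_d * log2(N / N0)
    hours = doubling_time_h * math.log2(final_density / inoculum_g_per_l)
    # Volumetric yield is roughly per-cell productivity times cell density.
    yield_mg_per_l = per_cell_yield_mg_per_g * final_density
    print(f"{culture}: ~{hours:.1f} h to {final_density:g} g DCW/L, "
          f"~{yield_mg_per_l:.0f} mg protein/L")
```

On these assumptions, the tenfold difference in attainable cell density translates directly into a tenfold difference in volumetric protein yield, which is the point of the high-cell-density advantage.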
Disadvantage As some proteins require chaperones for proper folding, Komagataella is unable to produce a number of proteins, since it does not contain the appropriate chaperones. The technologies for introducing genes for mammalian chaperones into the yeast genome, and for overexpressing existing chaperones, still require improvement. Comparison with other expression systems In standard molecular biology research, the bacterium Escherichia coli is the organism most frequently used as an expression system for producing heterologous proteins, owing to its fast growth rate, high protein production rate and undemanding growth conditions. Protein production in E. coli is usually faster than in Komagataella, for two reasons: Competent E. coli cells can be stored frozen and thawed before use, whereas Komagataella cells have to be prepared immediately before use. Expression yields in Komagataella vary between clones, so a large number of clones has to be screened for protein production to find the best producer. The biggest advantage of Komagataella over E. coli is that Komagataella is capable of forming disulfide bonds and glycosylating proteins, whereas E. coli is not. E. coli may produce a misfolded protein when the final product contains disulfides, leading to inactive or insoluble forms of the protein. The well-studied Saccharomyces cerevisiae is also used as an expression system, with advantages over E. coli similar to those of Komagataella. However, Komagataella has two main advantages over S. cerevisiae in laboratory and industrial settings: Komagataella, as mentioned above, is a methylotroph, meaning that it can grow with methanol as its sole source of energy. It grows quickly in cell suspension containing a reasonably strong methanol solution that would kill most other micro-organisms, so the expression system is cheap to set up and maintain. Komagataella can grow to a very high cell density; under ideal conditions, it can multiply to the point where the cell suspension is practically a paste. Since the volumetric protein yield of a microbial expression system is roughly the product of per-cell productivity and cell density, this makes Komagataella very useful for producing large quantities of protein without expensive equipment. Compared with other expression systems, such as S2 cells from Drosophila melanogaster and Chinese hamster ovary cells, Komagataella usually gives much better yields. Generally, cell lines from multicellular organisms require complex and expensive media, including amino acids, vitamins and other growth factors, which significantly increase the cost of producing heterologous proteins. Additionally, Komagataella can grow in media containing only one carbon source and one nitrogen source, which suits isotopic labelling applications such as protein NMR. Industrial applications Komagataella has been used in several kinds of biotech industry, such as the pharmaceutical industry. All of these applications are based on its capacity for protein expression. Biotherapeutic production In recent years, Komagataella has been used for the production of over 500 types of biotherapeutics, such as IFNγ. Initially, one drawback of this protein expression system was over-glycosylation with high-density mannose structures, a potential cause of immunogenicity. In 2006, a research group managed to create a new strain called YSH597. 
In this strain, the enzymes responsible for fungal-type glycosylation were exchanged for their mammalian homologues, so that erythropoietin is expressed with its normal glycosylation pattern; the altered glycosylation allows the protein to be fully functional. Enzyme production In food industries such as brewing and baking, Komagataella is used to produce various enzymes that act as processing aids and food additives with many functions. For example, some enzymes produced by genetically modified Komagataella can keep bread soft, while in beer, enzymes can be used to lower the alcohol concentration. Recombinant phospholipase C can degum high-phosphorus oils by breaking down phospholipids. In animal feed, K. phaffii-produced phytase is used to break down phytic acid, an antinutrient. References Saccharomycetaceae Fungal models Yeasts Taxa described in 1995
Komagataella
[ "Biology" ]
2,445
[ "Model organisms", "Fungi", "Yeasts", "Fungal models" ]
2,202,274
https://en.wikipedia.org/wiki/Table-turning
Table-turning (also known as table-tapping, table-tipping or table-tilting) is a type of séance in which participants sit around a table, place their hands on it, and wait for rotations. The table was purportedly made to serve as a means of communicating with the spirits; the alphabet would be slowly spoken aloud and the table would tilt at the appropriate letter, thus spelling out words and sentences. The process is similar to that of a Ouija board. Scientists and skeptics consider table-turning to be the result of the ideomotor effect, or of conscious trickery. History When the movement of spiritualism first reached Europe from America in the winter of 1852–1853, the most popular method of consulting the spirits was for several persons to sit round a table, with their hands resting on it, and wait for the table to move. If the experiment was successful, the table would rotate with considerable rapidity and would occasionally rise in the air, or perform other movements. Whilst most spiritualists ascribed the table movements to the agency of spirits, two investigators, count de Gasparin and professor Thury (father of René Thury) of Geneva, conducted a careful series of experiments. They claimed to have demonstrated that the movements of the table were due to a physical force emanating from the bodies of the sitters, for which they proposed the name ectenic force. Their conclusion rested on the supposed elimination of all known physical causes for the movements; but it is doubtful from the description of the experiments whether the precautions taken were sufficient to exclude unconscious muscular action (the ideomotor effect) or even deliberate fraud. In England, table-turning became a fashionable diversion and was practised all over the country in the year 1853. John Elliotson and his followers attributed the phenomena to mesmerism. The general public were content to find the explanation of the movements in spirits, animal magnetism, Odic force, galvanism, electricity, or even the rotation of the earth. Some Evangelical clergymen alleged that the spirits who caused the movements were of a diabolic nature. In France, Allan Kardec studied the phenomenon and concluded in The Book on Mediums that some communications were caused by an outside intelligence, as the message contained information that was not known to the group. Scientific reception The Scottish surgeon James Braid, the English physiologist W. B. Carpenter and others pointed out that the phenomena could depend upon the expectation of the sitters, and could be stopped altogether by appropriate suggestion. Michel Eugène Chevreul explained that the purported magical movement was due to involuntary and unconscious muscular reactions. Michael Faraday devised a simple apparatus which conclusively demonstrated that the movements he investigated were due to unconscious muscular action. The apparatus consisted of two small boards, with glass rollers between them, the whole fastened together by india-rubber bands in such a manner that the upper board could slide under lateral pressure to a limited extent over the lower one. The occurrence of such lateral movement was at once indicated by means of an upright haystalk fastened to the apparatus. When by this means it was made clear to the experimenters that it was the fingers which moved the table, the phenomena generally ceased. After this experimental approach, Faraday criticized the believers of table-turning. 
Faraday's work was followed up a century later by clinical psychologist Kenneth Batcheldor who pioneered the use of infrared video recording to observe experimental subjects in complete darkness. Trickery Apart from the ideomotor effect, conscious fraudulent table tipping has also been uncovered. Professional magicians and skeptics have exposed many of the methods utilized by mediums to tip tables. The magician Chung Ling Soo described a method that involved a pin driven into the table and the use of a ring with a slot on the medium's finger. Once the pin entered the slot, the table could be lifted. Another example comes from Eusapia Palladino, who used custom-made boots with soles that extended beyond the boots' edges in order to lift tables. According to John Mulholland: The multiplicity of methods used to tip and raise tables in a séance is almost as great as the number of mediums performing the feat. One of the simplest was to slide the hands back until one or both of the medium's thumbs could catch hold of the table top. Another way was to exert no pressure on the table at all, and in the event that the sitter opposite the medium did press on the table, to permit the table to tip far enough away from him so that he could get the toe of one foot under the table leg. He would then immediately put pressure on his side, and, holding the table between his hands and his toe, move it about at will. By this method a small table can be made to float two feet off the floor... Another method was to catch the under side of the table top with the knee; and still another was merely to kick the table into the air. References Further reading John Henry Anderson. (1855). The Fashionable Science of Parlour Magic. London. pp. 85–87 Willis Dutcher. (1922). On the Other Side of the Footlights: An Expose of Routines, Apparatus and Deceptions Resorted to by Mediums, Clairvoyants, Fortune Tellers and Crystal Gazers in Deluding the Public. Berlin, WI: Heaney Magic. pp. 80–81 F. Attfield Fawkes. (1920). Spiritualism Exposed. J. W. Arrowsmith Ltd. pp. 27–29 External links based on the work of Kenneth Batcheldor. Museum of Talking Boards (official website) 1852 introductions Paranormal Spiritualism Spiritism Parapsychology History of science Séances Tables (furniture)
Table-turning
[ "Technology" ]
1,184
[ "History of science", "History of science and technology" ]
2,202,290
https://en.wikipedia.org/wiki/Stickelberger%27s%20theorem
In mathematics, Stickelberger's theorem is a result of algebraic number theory, which gives some information about the Galois module structure of class groups of cyclotomic fields. A special case was first proven by Ernst Kummer (1847) while the general result is due to Ludwig Stickelberger (1890). The Stickelberger element and the Stickelberger ideal Let $K_m$ denote the $m$th cyclotomic field, i.e. the extension of the rational numbers obtained by adjoining the $m$th roots of unity to $\mathbb{Q}$ (where $m \geq 2$ is an integer). It is a Galois extension of $\mathbb{Q}$ with Galois group $G_m$ isomorphic to the multiplicative group of integers modulo $m$, $(\mathbb{Z}/m\mathbb{Z})^{\times}$. The Stickelberger element (of level $m$ or of $K_m$) is an element in the group ring $\mathbb{Q}[G_m]$ and the Stickelberger ideal (of level $m$ or of $K_m$) is an ideal in the group ring $\mathbb{Z}[G_m]$. They are defined as follows. Let $\zeta_m$ denote a primitive $m$th root of unity. The isomorphism from $(\mathbb{Z}/m\mathbb{Z})^{\times}$ to $G_m$ is given by sending $a$ to $\sigma_a$ defined by the relation $\sigma_a(\zeta_m) = \zeta_m^{a}$. The Stickelberger element of level $m$ is defined as
$$\theta(K_m) = \frac{1}{m} \sum_{\substack{a = 1 \\ \gcd(a,m) = 1}}^{m} a \, \sigma_a^{-1} \in \mathbb{Q}[G_m].$$
The Stickelberger ideal of level $m$, denoted $I(K_m)$, is the set of integral multiples of $\theta(K_m)$ which have integral coefficients, i.e.
$$I(K_m) = \theta(K_m)\,\mathbb{Z}[G_m] \cap \mathbb{Z}[G_m].$$
More generally, if $F$ is any abelian number field whose Galois group over $\mathbb{Q}$ is denoted $G_F$, then the Stickelberger element of $F$ and the Stickelberger ideal of $F$ can be defined. By the Kronecker–Weber theorem there is an integer $m$ such that $F$ is contained in $K_m$. Fix the least such $m$ (this is the (finite part of the) conductor of $F$ over $\mathbb{Q}$). There is a natural group homomorphism $G_m \to G_F$ given by restriction, i.e. if $\sigma \in G_m$, its image in $G_F$ is its restriction to $F$, denoted $\mathrm{res}_m \sigma$. The Stickelberger element of $F$ is then defined as
$$\theta(F) = \frac{1}{m} \sum_{\substack{a = 1 \\ \gcd(a,m) = 1}}^{m} a \, (\mathrm{res}_m \sigma_a)^{-1}.$$
The Stickelberger ideal of $F$, denoted $I(F)$, is defined as in the case of $K_m$, i.e.
$$I(F) = \theta(F)\,\mathbb{Z}[G_F] \cap \mathbb{Z}[G_F].$$
In the special case where $F = K_m$, the Stickelberger ideal $I(K_m)$ is generated by $(a - \sigma_a)\,\theta(K_m)$ as $a$ varies over $\mathbb{Z}/m\mathbb{Z}$. This is not true for general $F$. Examples If $F$ is a totally real field of conductor $m$, then
$$\theta(F) = \frac{\varphi(m)}{2\,[F:\mathbb{Q}]} \sum_{\sigma \in G_F} \sigma,$$
where $\varphi$ is the Euler totient function and $[F:\mathbb{Q}]$ is the degree of $F$ over $\mathbb{Q}$. Statement of the theorem Stickelberger's Theorem Let $F$ be an abelian number field. Then, the Stickelberger ideal of $F$ annihilates the class group of $F$. Note that $\theta(F)$ itself need not be an annihilator, but any multiple of it in $\mathbb{Z}[G_F]$ is. Explicitly, the theorem is saying that if $\theta = \sum_{\sigma \in G_F} x_{\sigma} \sigma \in \mathbb{Z}[G_F]$ is such that $\theta \in I(F)$, and if $J$ is any fractional ideal of $F$, then $\prod_{\sigma \in G_F} \sigma(J)^{x_{\sigma}}$ is a principal ideal. See also Gross–Koblitz formula Herbrand–Ribet theorem Thaine's theorem Jacobi sum Gauss sum Notes References Boas Erez, Darstellungen von Gruppen in der Algebraischen Zahlentheorie: eine Einführung External links PlanetMath page Cyclotomic fields Theorems in algebraic number theory
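As a concrete illustration of the definitions above (a worked example added here, not part of the cited sources), take $m = 5$:

```latex
% For m = 5: G_5 = {\sigma_1, \sigma_2, \sigma_3, \sigma_4} with \sigma_a(\zeta_5) = \zeta_5^a.
% Inverses in (Z/5Z)^x: \sigma_2^{-1} = \sigma_3, \sigma_3^{-1} = \sigma_2, \sigma_4^{-1} = \sigma_4, so
\theta(K_5) = \tfrac{1}{5}\left(\sigma_1 + 3\sigma_2 + 2\sigma_3 + 4\sigma_4\right).
% Multiplying by (2 - \sigma_2) clears the denominators:
(2 - \sigma_2)\,\theta(K_5) = \tfrac{1}{5}\left(5\sigma_2 + 5\sigma_4\right)
                            = \sigma_2 + \sigma_4 \in I(K_5).
```

By the theorem, $\sigma_2(J)\,\sigma_4(J)$ is therefore principal for every fractional ideal $J$ of $K_5$; this is automatic here because $\mathbb{Q}(\zeta_5)$ has class number 1, but the same computation applies for any $m$.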
Stickelberger's theorem
[ "Mathematics" ]
579
[ "Theorems in algebraic number theory", "Theorems in number theory" ]
2,202,301
https://en.wikipedia.org/wiki/Thermal%20decomposition
Thermal decomposition, or thermolysis, is a chemical decomposition of a substance caused by heat. The decomposition temperature of a substance is the temperature at which the substance chemically decomposes. The reaction is usually endothermic, as heat is required to break the chemical bonds in the compound undergoing decomposition. If decomposition is sufficiently exothermic, a positive feedback loop is created, producing thermal runaway and possibly an explosion or other chemical reaction. Thermal decomposition is a chemical reaction in which heat acts as a reactant; because heat is a reactant, these reactions are endothermic, meaning that the reaction requires thermal energy to break the chemical bonds in the molecule. Decomposition temperature definition A simple substance (like water) may exist in equilibrium with its thermal decomposition products, effectively halting the decomposition. The equilibrium fraction of decomposed molecules increases with the temperature. Since thermal decomposition is a kinetic process, the observed temperature of its onset will in most instances be a function of the experimental conditions and the sensitivity of the experimental setup. For a rigorous description of the process, the use of thermokinetic modeling is recommended. Examples Calcium carbonate (limestone or chalk) decomposes into calcium oxide and carbon dioxide when heated. The chemical reaction is as follows: CaCO3 → CaO + CO2 The reaction is used to make quicklime, which is an industrially important product. Another example of thermal decomposition is 2Pb(NO3)2 → 2PbO + O2 + 4NO2. Some oxides, especially of weakly electropositive metals, decompose when heated to a high enough temperature. A classical example is the decomposition of mercuric oxide to give oxygen and mercury metal; the reaction was used by Joseph Priestley to prepare samples of gaseous oxygen for the first time. When water is heated to well over 2,000 °C, a small percentage of it will decompose into OH, monatomic oxygen, monatomic hydrogen, O2, and H2. The compound with the highest known decomposition temperature is carbon monoxide, at ≈3870 °C (≈7000 °F). Decomposition of nitrates, nitrites and ammonium compounds Ammonium dichromate on heating yields nitrogen, water and chromium(III) oxide. Ammonium nitrate on strong heating yields dinitrogen oxide ("laughing gas") and water. Ammonium nitrite on heating yields nitrogen gas and water. Barium azide, Ba(N3)2, on heating yields barium metal and nitrogen gas. Sodium azide on heating decomposes violently to nitrogen and metallic sodium. Sodium nitrate on heating yields sodium nitrite and oxygen gas. Organic quaternary ammonium hydroxides on heating undergo Hofmann elimination and yield tertiary amines and alkenes. Ease of decomposition When metals are near the bottom of the reactivity series, their compounds generally decompose easily at high temperatures. This is because stronger bonds form in compounds of metals towards the top of the reactivity series, and strong bonds are difficult to break. For example, copper is near the bottom of the reactivity series, and copper sulfate (CuSO4) begins to decompose at a relatively moderate temperature, with the rate of decomposition increasing rapidly as the temperature rises. In contrast, potassium is near the top of the reactivity series, and potassium sulfate (K2SO4) does not decompose at its melting point, nor even at its boiling point. 
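Because the observed onset of decomposition depends on kinetics, as noted above, a simple thermokinetic model makes the idea concrete. The following sketch assumes first-order Arrhenius kinetics with illustrative parameters; the pre-exponential factor and activation energy are invented for demonstration, not data for any real compound:

```python
import math

R = 8.314    # gas constant, J/(mol*K)
A = 1e13     # assumed pre-exponential factor, 1/s (illustrative)
Ea = 180e3   # assumed activation energy, J/mol (illustrative)

def rate_constant(T):
    """First-order Arrhenius rate constant k(T) = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# For first-order kinetics, the fraction decomposed after an isothermal
# hold of duration t is x(t) = 1 - exp(-k(T) * t).
hold_s = 600.0  # a 10-minute hold
for T in (500, 550, 600):  # temperatures in kelvin
    x = 1.0 - math.exp(-rate_constant(T) * hold_s)
    print(f"T = {T} K: k = {rate_constant(T):.2e} 1/s, "
          f"decomposed after 10 min = {x:.1%}")
```

With these assumed parameters, the fraction converted in ten minutes rises from about 0.1% at 500 K to roughly 70% at 600 K, which illustrates why the apparent "decomposition temperature" shifts with hold time and instrument sensitivity.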
Practical applications Many real-world scenarios involve thermal degradation. One example is fingerprints: whenever someone touches a surface, residue is left behind by the fingers, and if the fingers are sweaty or oily, the residue contains many chemicals. De Paoli and her colleagues conducted a study testing the thermal degradation of certain components found in fingerprints, and found that heat exposure caused the amino acid, urea and lactic acid samples to begin degrading at characteristic onset temperatures. Because these components are necessary for further testing, the thermal decomposition of fingerprints is significant in the forensics discipline. See also Thermal degradation of polymers Ellingham diagram Thermochemical cycle Thermal depolymerization Chemical thermodynamics Pyrolysis - thermal decomposition of organic material Gas generator References Chemical reactions Thermodynamics
Thermal decomposition
[ "Physics", "Chemistry", "Mathematics" ]
889
[ "Thermodynamics", "nan", "Dynamical systems" ]
2,202,360
https://en.wikipedia.org/wiki/TUGboat
TUGboat (DOI prefix 10.47397) is a journal published three times per year by the TeX Users Group. It covers a wide range of topics in digital typography relevant to the TeX typesetting system. The editor is Barbara Beeton. See also The PracTeX Journal External links TUGboat home page List of TeX-related publications and journals TeX Typesetting Academic journals established in 1980 Computer science journals
TUGboat
[ "Mathematics", "Technology" ]
89
[ "TeX", "Mathematical markup languages", "Digital typography stubs", "Computing stubs" ]
2,202,381
https://en.wikipedia.org/wiki/Faint%20young%20Sun%20paradox
The faint young Sun paradox or faint young Sun problem describes the apparent contradiction between observations of liquid water early in Earth's history and the astrophysical expectation that the Sun's output would have been only 70 percent as intense during that epoch as it is during the modern epoch. The paradox is this: with the young Sun's output at only 70 percent of its current output, early Earth would be expected to be completely frozen, but early Earth seems to have had liquid water and supported life. The issue was raised by astronomers Carl Sagan and George Mullen in 1972. Proposed resolutions of this paradox have taken into account greenhouse effects, changes to planetary albedo, astrophysical influences, or combinations of these suggestions. The predominant theory is that the greenhouse gas carbon dioxide contributed most to the warming of the Earth. Solar evolution Models of stellar structure, especially the standard solar model, predict a brightening of the Sun. The brightening is caused by a decrease in the number of particles per unit mass due to nuclear fusion in the Sun's core, from four protons and electrons each to one helium nucleus and two electrons. Fewer particles would exert less pressure. A collapse under the enormous gravity is prevented by an increase in temperature, which is both cause and effect of a higher rate of nuclear fusion. More recent modeling studies have shown that the Sun is currently 1.4 times as bright as it was 4.6 billion years ago (Ga), and that the brightening has accelerated considerably. At the surface of the Sun, more fusion power means a higher solar luminosity (via slight increases in temperature and radius), which is termed radiative forcing. Theories Greenhouse gases Sagan and Mullen suggested, in their description of the paradox, that it might be solved by high concentrations of ammonia gas, NH3. However, it has since been shown that while ammonia is an effective greenhouse gas, it is easily destroyed photochemically in the atmosphere and converted to nitrogen (N2) and hydrogen (H2) gases. It was suggested (again by Sagan) that a photochemical haze could have prevented this destruction of ammonia and allowed it to continue acting as a greenhouse gas during this time; however, by 2001, this idea had been tested using a photochemical model and discounted. Furthermore, such a haze is thought to have cooled Earth's surface beneath it and counteracted the greenhouse effect. Around 2010, scholars at the University of Colorado revived the idea, arguing that the ammonia hypothesis is a viable contributor if the haze formed a fractal pattern. It is now thought that carbon dioxide was present in higher concentrations during this period of lower solar radiation. It was first proposed and tested as part of Earth's atmospheric evolution in the late 1970s. An atmosphere that contained about 1,000 times the present atmospheric level (or PAL) was found to be consistent with the evolutionary path of Earth's carbon cycle and solar evolution. The primary mechanism for attaining such high CO2 concentrations is the carbon cycle. On large timescales, the inorganic branch of the carbon cycle, known as the carbonate–silicate cycle, is responsible for determining the partitioning of CO2 between the atmosphere and the surface of Earth. In particular, during a time of low surface temperatures, rainfall and weathering rates would be reduced, allowing for the build-up of carbon dioxide in the atmosphere on timescales of 0.5 million years. 
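The size of the problem these mechanisms must solve can be made concrete with two textbook formulas (a back-of-the-envelope illustration added here, not a calculation from the sources this article cites): Gough's 1981 fit for the Sun's main-sequence luminosity, and the radiative equilibrium temperature of a planet:

```latex
% Gough (1981) fit for solar luminosity as a function of age t,
% with t_0 ~ 4.57 Gyr the present age of the Sun:
L(t) = \left[ 1 + \frac{2}{5}\left( 1 - \frac{t}{t_0} \right) \right]^{-1} L_{\odot},
\qquad L(0) \approx 0.71\, L_{\odot}.

% Radiative equilibrium temperature for solar flux S and Bond albedo A
% (\sigma is the Stefan-Boltzmann constant):
T_{\mathrm{eq}} = \left( \frac{S\,(1 - A)}{4 \sigma} \right)^{1/4}
\approx 255\ \mathrm{K}\ \text{today}
\quad (S = 1361\ \mathrm{W\,m^{-2}},\ A = 0.3),

% and since T_eq scales as L^{1/4}:
T_{\mathrm{eq}}(t \approx 0) \approx 255 \times 0.7^{1/4} \approx 233\ \mathrm{K}.
```

Even crediting the early Earth with the present-day greenhouse warming of roughly 33 K would leave the mean surface temperature near 266 K, below the freezing point of water, which is why the resolutions discussed here invoke a stronger greenhouse effect, a lower albedo, or additional heat sources.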
Using one-dimensional models, which represent Earth as a single point (rather than resolving variations across three dimensions), scientists have determined that at 4.5 Ga, with a 30% dimmer Sun, a minimum partial pressure of 0.1 bar of CO2 is required to maintain an above-freezing surface temperature; 10 bar of CO2 has been suggested as a plausible upper limit. The amount of carbon dioxide is still under debate. In 2001, Sleep and Zahnle suggested that increased weathering on the sea floor on a young, tectonically active Earth could have reduced carbon dioxide levels. Then in 2010, Rosing et al. analyzed marine sediments called banded iron formations and found large amounts of various iron-rich minerals, including magnetite (Fe3O4), an oxidized mineral, alongside siderite (FeCO3), a reduced mineral, and saw that they formed during the first half of Earth's history (and not afterward). The minerals' relative coexistence suggested an analogous balance between CO2 and H2. In the analysis, Rosing et al. connected the atmospheric H2 concentrations with regulation by biotic methanogenesis. Anaerobic, single-celled organisms that produced methane (CH4) may therefore have contributed to the warming in addition to carbon dioxide. Tidal heating The Moon was originally much closer to the Earth, which rotated faster than it does today, resulting in greater tidal heating than experienced today. Original estimates found that even early tidal heating would be minimal, perhaps 0.02 watts per square meter. (For comparison, the solar energy incident on the Earth's atmosphere is on the order of 1,000 watts per square meter.) However, around 2021, a team led by René Heller in Germany argued that such estimates were simplistic and that in some plausible models tidal heating might have contributed on the order of 10 watts per square meter and increased the equilibrium temperature by up to five degrees Celsius on a timescale of 100 million years. Such a contribution would partially resolve the paradox but is insufficient to solve the faint young Sun paradox on its own without additional factors such as greenhouse heating. The underlying assumption of the Moon's formation just outside the Roche limit is not certain, however: a magnetized disk of debris could have transported angular momentum, leading to a less massive Moon in a higher orbit. Cosmic rays A minority view propounded by the Israeli-American physicist Nir Shaviv uses climatological influences of solar wind combined with a hypothesis of Danish physicist Henrik Svensmark for a cooling effect of cosmic rays. According to Shaviv, the early Sun had emitted a stronger solar wind that produced a protective effect against cosmic rays. In that early age, a moderate greenhouse effect comparable to today's would have been sufficient to explain a largely ice-free Earth. Evidence for a more active early Sun has been found in meteorites. The temperature minimum around 2.4 Ga goes along with a cosmic ray flux modulation by a variable star formation rate in the Milky Way. The reduced solar impact later results in a stronger impact of cosmic ray flux, which is hypothesized to lead to a relationship with climatological variations. Mass loss from Sun It has been proposed several times that mass loss from the faint young Sun in the form of stronger solar winds could have compensated for the low temperatures from greenhouse gas forcing. In this framework, the early Sun underwent an extended period of higher solar wind output. 
Based on exoplanetary data, this would amount to a mass loss from the Sun of 5–6 percent over its lifetime, yielding a more consistent level of solar luminosity (since the early Sun would have had more mass, and thus more energy output, than the standard model predicts). In order to explain the warm conditions in the Archean eon, this mass loss must have occurred over an interval of about one billion years. However, records of ion implantation from meteorites and lunar samples show that the elevated rate of solar wind flux lasted for a period of only 100 million years. Observations of the young Sun-like star π1 Ursae Majoris match this rate of decline in the stellar wind output, suggesting that a higher mass loss rate cannot by itself resolve the paradox. Changes in clouds If greenhouse gas concentrations did not compensate completely for the fainter Sun, the moderate temperature range may be explained by a lower surface albedo. At the time, a smaller area of exposed continental land would have resulted in fewer cloud condensation nuclei, both in the form of wind-blown dust and of biogenic sources. A lower albedo allows a higher fraction of solar radiation to penetrate to the surface. Goldblatt and Zahnle (2011) investigated whether a change in cloud fraction could have provided sufficient warming and found that the net effect was equally as likely to have been negative as positive. At most the effect could have raised surface temperatures to just above freezing on average. Another proposed mechanism of cloud cover reduction relates a decrease in cosmic rays during this time to reduced cloud fraction. However, this mechanism does not work for several reasons, including the fact that ions do not limit cloud formation as much as cloud condensation nuclei do, and cosmic rays have been found to have little impact on global mean temperature. Clouds continue to be the dominant source of uncertainty in 3-D global climate models, and a consensus has yet to be reached on how changes in cloud spatial patterns and cloud type may have affected Earth's climate during this time. Local Hubble expansion Although both simulations and direct measurements of the effects of Hubble's law on gravitationally bound systems were returning inconclusive results as of 2022, it has been noted that orbital expansion at a fraction of the local Hubble expansion rate could explain the observed anomalies in orbital evolution, including the faint young Sun paradox. Gaia hypothesis The Gaia hypothesis holds that biological processes work to maintain a stable surface climate on Earth, preserving habitability through various negative feedback mechanisms. Although organic processes, such as the organic carbon cycle, work to regulate dramatic climate changes, and the surface of Earth has presumably remained habitable, this hypothesis has been criticized as intractable. Furthermore, life has existed on the surface of Earth through dramatic changes in climate, including Snowball Earth episodes. There are also strong and weak versions of the Gaia hypothesis, which has caused some tension in this research area. On other planets Mars Mars has its own version of the faint young Sun paradox. Martian terrains show clear signs of past liquid water on the surface, including outflow channels, gullies, modified craters, and valley networks. These geomorphic features suggest Mars had an ocean on its surface and river networks that resemble current Earth's during the late Noachian (4.1–3.7 Ga). 
It is unclear how Mars's orbital pattern, which places it even further from the Sun, and the faintness of the young Sun could have produced what is thought to have been a very warm and wet climate on Mars. Scientists debate over which geomorphological features can be attributed to shorelines or other water flow markers and which can be ascribed to other mechanisms. Nevertheless, the geologic evidence, including observations of widespread fluvial erosion in the southern highlands, are generally consistent with an early warm and semi-arid climate. Given the orbital and solar conditions of early Mars, a greenhouse effect would have been necessary to increase surface temperatures at least 65 K in order for these surface features to have been carved by flowing water. A much denser, CO2-dominated atmosphere has been proposed as a way to produce such a temperature increase. This would depend upon the carbon cycle and the rate of volcanism throughout the pre-Noachian and Noachian, which is not well known. Volatile outgassing is thought to have occurred during these periods. One way to ascertain whether Mars possessed a thick CO2-rich atmosphere is to examine carbonate deposits. A primary sink for carbon in Earth's atmosphere is the carbonate–silicate cycle. However it would have been difficult for CO2 to have accumulated in the Martian atmosphere in this way because the greenhouse effect would have been outstripped by CO2 condensation. A volcanically-outgassed CO2-H2 greenhouse is a plausible scenario suggested recently for early Mars. Intermittent bursts of methane may have been another possibility. Such greenhouse gas combinations appear necessary because carbon dioxide alone, even at pressures exceeding a few bar, cannot explain the temperatures required for the presence of surface liquid water on early Mars. Venus Venus's atmosphere is composed of 96% carbon dioxide. Billions of years ago, when the Sun was 25 to 30% dimmer, Venus's surface temperature could have been much cooler, and its climate could have resembled current Earth's, complete with a hydrological cycle—before it experienced a runaway greenhouse effect. See also Cool early Earth Effective temperature – of a planet, dependent on reflectivity of its surface and clouds. Isua Greenstone Belt Paleoclimatology References Further reading Sun Climate history Paradoxes 1972 in science Unsolved problems in astronomy
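The same equilibrium-temperature estimate used for Earth above can be applied to early Mars (again a back-of-the-envelope illustration with rounded, assumed values, not a calculation from the cited studies):

```latex
% Present-day equilibrium temperature of Mars
% (S ~ 590 W m^{-2} at Mars's orbit, Bond albedo A ~ 0.25):
T_{\mathrm{eq}} = \left( \frac{S\,(1 - A)}{4 \sigma} \right)^{1/4}
\approx 210\ \mathrm{K}.

% With the young Sun at ~75% of its present output:
T_{\mathrm{eq}} \approx 210 \times 0.75^{1/4} \approx 195\ \mathrm{K},

% leaving a gap of roughly 273 - 195 ~ 78 K to the melting point of water,
% of the same order as the "at least 65 K" of greenhouse warming quoted above.
```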
Faint young Sun paradox
[ "Physics", "Astronomy" ]
2,546
[ "Concepts in astronomy", "Unsolved problems in astronomy", "Astronomical controversies" ]
2,202,422
https://en.wikipedia.org/wiki/Ligand%20%28biochemistry%29
In biochemistry and pharmacology, a ligand is a substance that forms a complex with a biomolecule to serve a biological purpose. The etymology stems from Latin ligare, which means 'to bind'. In protein-ligand binding, the ligand is usually a molecule which produces a signal by binding to a site on a target protein. The binding typically results in a change of conformational isomerism (conformation) of the target protein. In DNA-ligand binding studies, the ligand can be a small molecule, ion, or protein which binds to the DNA double helix. The relationship between ligand and binding partner is a function of charge, hydrophobicity, and molecular structure. Binding occurs by intermolecular forces, such as ionic bonds, hydrogen bonds and Van der Waals forces. The association or docking is reversible through dissociation; measurably irreversible covalent bonding between a ligand and target molecule is atypical in biological systems. In contrast to the definition of ligand in metalorganic and inorganic chemistry, in biochemistry it is ambiguous whether the ligand generally binds at a metal site, as is the case in hemoglobin. In general, the interpretation of ligand is contextual with regard to what sort of binding has been observed. Ligand binding to a receptor protein alters the conformation by affecting the three-dimensional shape orientation, and the conformation of a receptor protein determines its functional state. Ligands include substrates, inhibitors, activators, signaling lipids, and neurotransmitters. The strength of binding is called affinity, and this measurement typifies the tendency or strength of the effect. Binding affinity is determined not only by host–guest interactions, but also by solvent effects, which can play a dominant, steric role driving non-covalent binding in solution. The solvent provides a chemical environment for the ligand and receptor to adapt, and thus accept or reject each other as partners. Radioligands are radioisotope-labeled compounds used in vivo as tracers in PET studies and for in vitro binding studies. Receptor/ligand binding affinity The interaction of ligands with their binding sites can be characterized in terms of a binding affinity. In general, high-affinity ligand binding results from greater attractive forces between the ligand and its receptor, while low-affinity ligand binding involves less attractive force. In general, high-affinity binding results in a higher occupancy of the receptor by its ligand than is the case for low-affinity binding; the residence time (lifetime of the receptor-ligand complex) does not correlate. High-affinity binding of ligands to receptors is often physiologically important when some of the binding energy can be used to cause a conformational change in the receptor, resulting in altered behavior of, for example, an associated ion channel or enzyme. A ligand that can bind to and alter the function of the receptor in a way that triggers a physiological response is called a receptor agonist. Ligands that bind to a receptor but fail to activate the physiological response are receptor antagonists. Agonist binding to a receptor can be characterized both in terms of how much physiological response can be triggered (that is, the efficacy) and in terms of the concentration of the agonist that is required to produce the physiological response (often measured as EC50, the concentration required to produce the half-maximal response). 
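These occupancy and dose-response relationships follow from simple mass-action kinetics. The equations below are standard pharmacology textbook forms, added here as an illustration rather than taken from this article's sources:

```latex
% Mass-action equilibrium R + L <-> RL, with dissociation constant
% K_d = [R][L]/[RL], gives the fractional occupancy of the receptor:
\theta = \frac{[\mathrm{RL}]}{[\mathrm{R}] + [\mathrm{RL}]}
       = \frac{[\mathrm{L}]}{K_d + [\mathrm{L}]},
% so half of the receptors are occupied when [L] = K_d.

% The agonist concentration-response curve is often summarized by a
% Hill-type equation with half-maximal response at EC50 and Hill slope n:
E = E_{\max}\, \frac{[\mathrm{L}]^{n}}{\mathrm{EC}_{50}^{\,n} + [\mathrm{L}]^{n}}.
```

For a full agonist with no receptor reserve, EC50 tracks Kd; more generally the two can differ, which is one reason affinity and potency are treated as separate quantities below.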
High-affinity ligand binding implies that a relatively low concentration of a ligand is adequate to maximally occupy a ligand-binding site and trigger a physiological response. Receptor affinity is measured by an inhibition constant or Ki value, the concentration required to occupy 50% of the receptor. Ligand affinities are most often measured indirectly as an IC50 value from a competition binding experiment, in which the concentration of a ligand required to displace 50% of a fixed concentration of reference ligand is determined. The Ki value can be estimated from IC50 through the Cheng–Prusoff equation. Ligand affinities can also be measured directly as a dissociation constant (Kd) using methods such as fluorescence quenching, isothermal titration calorimetry or surface plasmon resonance. Low-affinity binding (high Ki level) implies that a relatively high concentration of a ligand is required before the binding site is maximally occupied and the maximum physiological response to the ligand is achieved. In the example shown to the right, two different ligands bind to the same receptor binding site. Only one of the agonists shown can maximally stimulate the receptor and, thus, can be defined as a full agonist. An agonist that can only partially activate the physiological response is called a partial agonist. In this example, the concentration at which the full agonist (red curve) can half-maximally activate the receptor is about 5 × 10−9 molar (nM = nanomolar). Binding affinity is most commonly determined using a radiolabeled ligand, known as a tagged ligand. Homologous competitive binding experiments involve binding competition between a tagged ligand and an untagged ligand. Real-time, often label-free, methods such as surface plasmon resonance, dual-polarization interferometry and multi-parametric surface plasmon resonance (MP-SPR) can quantify affinity not only from concentration-based assays but also from the kinetics of association and dissociation and, in the latter cases, the conformational change induced upon binding. MP-SPR also enables measurements in high-saline dissociation buffers thanks to a unique optical setup. Microscale thermophoresis (MST), an immobilization-free method, has been developed. This method allows the determination of binding affinity without any limitation on the ligand's molecular weight. For the use of statistical mechanics in a quantitative study of the ligand-receptor binding affinity, see the comprehensive article on the configurational partition function. Drug or hormone binding potency Binding affinity data alone does not determine the overall potency of a drug or a naturally produced (biosynthesized) hormone. Potency results from the complex interplay of both binding affinity and ligand efficacy. Drug or hormone binding efficacy Ligand efficacy refers to the ability of the ligand to produce a biological response upon binding to the target receptor and the quantitative magnitude of this response. This response may be as an agonist, antagonist, or inverse agonist, depending on the physiological response produced. Selective and non-selective Selective ligands have a tendency to bind to very limited kinds of receptor, whereas non-selective ligands bind to several types of receptors. This plays an important role in pharmacology, where drugs that are non-selective tend to have more adverse effects, because they bind to several other receptors in addition to the one generating the desired effect. 
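As a sketch of how the quantities above fit together in practice, the snippet below converts a measured IC50 from a competition binding experiment into an estimated Ki using the Cheng–Prusoff equation; all numerical values are hypothetical and chosen only for illustration:

```python
def ki_from_ic50(ic50, tracer_conc, tracer_kd):
    """Cheng-Prusoff equation for competitive binding:
    Ki = IC50 / (1 + [L]/Kd), where [L] is the fixed concentration of the
    labeled reference ligand and Kd is its dissociation constant."""
    return ic50 / (1.0 + tracer_conc / tracer_kd)

# Hypothetical competition-binding experiment (all concentrations in nM):
ic50 = 50.0        # test-ligand concentration that displaces 50% of the tracer
tracer_conc = 2.0  # fixed concentration of the radiolabeled reference ligand
tracer_kd = 1.0    # dissociation constant of the reference ligand

ki = ki_from_ic50(ic50, tracer_conc, tracer_kd)
print(f"Estimated Ki = {ki:.1f} nM")  # -> 16.7 nM; lower Ki means higher affinity
```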
Hydrophobic ligands For hydrophobic ligands (e.g. PIP2) in complex with a hydrophobic protein (e.g. lipid-gated ion channels), determining the affinity is complicated by non-specific hydrophobic interactions. Non-specific hydrophobic interactions can be overcome when the affinity of the ligand is high. For example, PIP2 binds with high affinity to PIP2-gated ion channels. Bivalent ligand Bivalent ligands consist of two drug-like molecules (pharmacophores or ligands) connected by an inert linker. There are various kinds of bivalent ligands, often classified based on what the pharmacophores target. Homobivalent ligands target two of the same receptor types. Heterobivalent ligands target two different receptor types. Bitopic ligands target an orthosteric binding site and an allosteric binding site on the same receptor. In scientific research, bivalent ligands have been used to study receptor dimers and to investigate their properties. This class of ligands was pioneered by Philip S. Portoghese and coworkers while studying the opioid receptor system. Bivalent ligands were also reported early on by Michael Conn and coworkers for the gonadotropin-releasing hormone receptor. Since these early reports, many bivalent ligands have been reported for various G protein-coupled receptor (GPCR) systems, including the cannabinoid, serotonin, oxytocin, and melanocortin receptor systems, and for GPCR-LIC systems (D2 and nACh receptors). Bivalent ligands usually tend to be larger than their monovalent counterparts and, therefore, not 'drug-like' in the sense of Lipinski's rule of five. Many believe this limits their applicability in clinical settings. In spite of these beliefs, many bivalent ligands have shown success in pre-clinical animal studies. Given that some bivalent ligands can have many advantages compared to their monovalent counterparts (such as tissue selectivity, increased binding affinity, and increased potency or efficacy), bivalents may offer some clinical advantages as well. Mono- and polydesmic ligands Ligands of proteins can also be characterized by the number of protein chains they bind. "Monodesmic" ligands (μόνος: single, δεσμός: binding) are ligands that bind a single protein chain, while "polydesmic" ligands (πολοί: many) are frequent in protein complexes, and are ligands that bind more than one protein chain, typically in or near protein interfaces. Recent research shows that the type of ligand and the binding site structure have profound consequences for the evolution, function, allostery and folding of protein complexes. Privileged scaffold A privileged scaffold is a molecular framework or chemical moiety that is statistically recurrent among known drugs or among a specific array of biologically active compounds. These privileged elements can be used as a basis for designing new active biological compounds or compound libraries. 
Methods used to study binding The main methods for studying protein–ligand interactions are hydrodynamic and calorimetric techniques, and spectroscopic and structural methods such as Fourier transform spectroscopy, Raman spectroscopy, fluorescence spectroscopy, circular dichroism, nuclear magnetic resonance, mass spectrometry, atomic force microscopy, paramagnetic probes, dual polarisation interferometry, multi-parametric surface plasmon resonance, and ligand binding assays including the radioligand binding assay. Other techniques include: fluorescence intensity, bimolecular fluorescence complementation, FRET (fluorescent resonance energy transfer) / FRET quenching, surface plasmon resonance, bio-layer interferometry, co-immunoprecipitation, indirect ELISA, equilibrium dialysis, gel electrophoresis, far western blot, fluorescence polarization anisotropy, electron paramagnetic resonance, microscale thermophoresis, and switchSENSE. The dramatically increased computing power of supercomputers and personal computers has made it possible to study protein–ligand interactions also by means of computational chemistry. For example, a worldwide grid of well over a million ordinary PCs was harnessed for cancer research in the project grid.org, which ended in April 2007. Grid.org has been succeeded by similar projects such as World Community Grid, Human Proteome Folding Project, Compute Against Cancer, and Folding@home. See also Agonist Schild regression Allosteric regulation Ki Database Docking@Home GPUGRID.net DNA binding ligand BindingDB SAMPL Challenge References External links BindingDB, a public database of measured protein-ligand binding affinities. BioLiP, a comprehensive database for ligand-protein interactions. Biomolecules Cell signaling Chemical bonding Proteins
Ligand (biochemistry)
[ "Physics", "Chemistry", "Materials_science", "Biology" ]
2,394
[ "Biomolecules by chemical classification", "Natural products", "Biochemistry", "Signal transduction", "Organic compounds", "Ligands (biochemistry)", "Condensed matter physics", "nan", "Biomolecules", "Structural biology", "Proteins", "Chemical bonding", "Molecular biology" ]
2,202,441
https://en.wikipedia.org/wiki/Ornidazole
Ornidazole is an antibiotic used to treat protozoan infections. A synthetic nitroimidazole, it is commercially obtained from an acid-catalyzed reaction between 2-methyl-5-nitroimidazole and epichlorohydrin. Structurally, ornidazole is the chloro derivative of secnidazole. Its antimicrobial spectrum is similar to that of metronidazole, and it is better tolerated; however, there are concerns about lower relative efficacy. It was first introduced for treating trichomoniasis before being recognized for its broad anti-protozoan and anti-anaerobic-bacterial activity. It has also been investigated for use in Crohn's disease after bowel resection. References Antiprotozoal agents Disulfiram-like drugs Poultry diseases Nitroimidazole antibiotics Organochlorides Halohydrins
Ornidazole
[ "Biology" ]
191
[ "Antiprotozoal agents", "Biocides" ]
2,202,538
https://en.wikipedia.org/wiki/Dimetridazole
Dimetridazole is a drug of the nitroimidazole class that combats protozoan infections. It was once commonly added to poultry feed, which led to residues of the drug being found in eggs. Because of suspicions that it is carcinogenic, its use has been legally restricted, but it is still found in eggs. It is now banned as a livestock feed additive in many jurisdictions, for example in the European Union, Canada, and the United States. In the US, the Food and Drug Administration bans its extralabel use. See also Metronidazole Nimorazole References Antiparasitic agents Nitroimidazole antibiotics
Dimetridazole
[ "Biology" ]
141
[ "Biocides", "Antiparasitic agents" ]
2,202,583
https://en.wikipedia.org/wiki/Aldol%E2%80%93Tishchenko%20reaction
The Aldol–Tishchenko reaction is a tandem reaction involving an aldol reaction and a Tishchenko reaction. In organic synthesis, it is a method for converting aldehydes and ketones into 1,3-hydroxyl compounds. In many examples, the reaction sequence starts with the conversion of a ketone into an enolate by the action of lithium diisopropylamide (LDA), to which a suitable aldehyde is then added. The resulting 1,3-diol monoester is then converted into the diol by a hydrolysis step. With acetyl trimethylsilane and propiophenone as reactants, the diol is obtained as a single, pure diastereoisomer. References Addition reactions Name reactions
Aldol–Tishchenko reaction
[ "Chemistry" ]
156
[ "Name reactions" ]
2,202,629
https://en.wikipedia.org/wiki/Mebendazole
Mebendazole (MBZ), sold under the brand name Vermox among others, is a medication used to treat a number of parasitic worm infestations. These include ascariasis, pinworm infection, hookworm infections, guinea worm infections and hydatid disease, among others. It has been used for the treatment of giardiasis but is not a preferred agent. It is taken by mouth. Mebendazole is usually well tolerated. Common side effects include headache, vomiting, and ringing in the ears. If used at large doses it may cause bone marrow suppression. It is unclear if it is safe in pregnancy. Mebendazole is a broad-spectrum anthelmintic agent of the benzimidazole type. Mebendazole came into use in 1971, after it was developed by Janssen Pharmaceutica in Belgium. It is on the World Health Organization's List of Essential Medicines and is available as a generic medication. Medical use Mebendazole is a highly effective, broad-spectrum anthelmintic indicated for the treatment of nematode infestations, including roundworm, hookworm, whipworm, threadworm (pinworm), and the intestinal form of trichinosis prior to its spread into the tissues beyond the digestive tract. Other drugs are used to treat worm infections outside the digestive tract, as mebendazole is poorly absorbed into the bloodstream. Mebendazole is used alone in those with mild to moderate infestations. It kills parasites relatively slowly, and in those with very heavy infestations, it can cause some parasites to migrate out of the digestive system, leading to appendicitis, bile duct problems, or intestinal perforation. To avoid this, heavily infested patients may be treated with piperazine, either before or instead of mebendazole. Piperazine paralyses the parasites, causing them to pass in the feces. Mebendazole is also used, rarely, in the treatment of cystic echinococcosis, also known as hydatid disease; evidence for its effectiveness against this disease, however, is poor. Mebendazole and other benzimidazole anthelmintics are active against both larval and adult stages of nematodes, and in the cases of roundworm and whipworm, kill the eggs as well. Paralysis and death of the parasites occur slowly, and elimination in the feces may require several days. Special populations Mebendazole has been shown to cause ill effects in pregnancy in animal models, and no adequate studies of its effects in human pregnancy have been conducted. Whether it can be passed to an infant through breastfeeding is unknown. Adverse effects Mebendazole sometimes causes diarrhea, abdominal pain, and elevated liver enzymes. In rare cases, it has been associated with a dangerously low white blood cell count, low platelet count, and hair loss, with a risk of agranulocytosis. Drug interactions Carbamazepine and phenytoin lower serum levels of mebendazole. Cimetidine does not appreciably raise serum mebendazole levels (in contrast to the similar drug albendazole), consistent with mebendazole's poor systemic absorption. Stevens–Johnson syndrome and the more severe toxic epidermal necrolysis can occur when mebendazole is combined with high doses of metronidazole. Mechanism Mebendazole works by selectively inhibiting the synthesis of microtubules, binding to the colchicine binding site of β-tubulin and thereby blocking the polymerisation of tubulin dimers in the intestinal cells of parasites. Disruption of cytoplasmic microtubules blocks the uptake of glucose and other nutrients, resulting in the gradual immobilization and eventual death of the helminths. 
Poor absorption in the digestive tract makes mebendazole an efficient drug for treating intestinal parasitic infections with limited adverse effects. However, mebendazole does have an impact on mammalian cells, mostly by inhibiting the polymerisation of tubulin dimers, thereby disrupting essential microtubule structures such as the mitotic spindle. Disassembly of the mitotic spindle then leads to apoptosis, mediated via dephosphorylation of Bcl-2, which allows the pro-apoptotic protein Bax to dimerize and initiate programmed cell death. Society and culture Availability Mebendazole is available as a generic medication. It is distributed in international markets by Johnson and Johnson and a number of generic manufacturers. Economics In the United States, mebendazole is sometimes sold at about 200 times the price of the same medication in other countries. References Anthelmintics Aromatic ketones Belgian inventions Benzimidazoles Carbamates Embryotoxicants Drugs developed by Johnson & Johnson Janssen Pharmaceutica Teratogens World Health Organization essential medicines Methyl esters
Mebendazole
[ "Chemistry" ]
1,052
[ "Teratogens" ]
2,202,672
https://en.wikipedia.org/wiki/Carbadox
Carbadox is a veterinary drug that combats infection in swine, particularly swine dysentery. Indications Carbadox is indicated for the control of swine dysentery (vibrionic dysentery, bloody scours, or hemorrhagic dysentery); the control of bacterial swine enteritis (salmonellosis or necrotic enteritis caused by Salmonella enterica); as an aid in the prevention of migration and establishment of large roundworm (Ascaris suum) infections; and as an aid in the prevention of establishment of nodular worm (Oesophagostomum) infections. Safety In animal models, carbadox has been shown to be carcinogenic and to induce birth defects. The Food and Drug Administration's Center for Veterinary Medicine has questioned its safety in light of its possible carcinogenicity. Regulation Carbadox is approved in the United States only for use in swine and may not be used within 42 days of slaughter or used in pregnant animals. In 2016, the United States Food and Drug Administration moved to ban its use in pork, citing a potential cancer risk to humans. However, as of August 2018, the FDA had indefinitely stayed its withdrawal of approval, and carbadox remains available. In 2004, carbadox was banned by the Canadian government as a livestock feed additive and for human consumption. The European Union also forbids the use of carbadox at any level. Australia forbids the use of carbadox in food-producing animals. References Antimicrobials Antiparasitic agents Quinoxalines Amine oxides Hydrazones Veterinary drugs Methyl esters
Carbadox
[ "Chemistry", "Biology" ]
338
[ "Antimicrobials", "Antiparasitic agents", "Functional groups", "Amine oxides", "Hydrazones", "Biocides" ]
2,202,712
https://en.wikipedia.org/wiki/Paromomycin
Paromomycin is an antimicrobial used to treat a number of parasitic infections including amebiasis, giardiasis, leishmaniasis, and tapeworm infection. It is a first-line treatment for amebiasis or giardiasis during pregnancy; otherwise, it is generally a second-line treatment option. It is taken by mouth, applied to the skin, or given by injection into a muscle. Common side effects when taken by mouth include loss of appetite, vomiting, abdominal pain, and diarrhea. When applied to the skin, side effects include itchiness, redness, and blisters. When given by injection, there may be fever, liver problems, or hearing loss. Use during breastfeeding appears to be safe. Paromomycin is in the aminoglycoside family of medications and causes microbe death by stopping the creation of bacterial proteins. Paromomycin was discovered in the 1950s from a species of Streptomyces and came into medical use in 1960. It is on the World Health Organization's List of Essential Medicines and is available as a generic medication. Medical uses It is an antimicrobial used to treat intestinal parasitic infections such as cryptosporidiosis and amoebiasis, and other diseases such as leishmaniasis. Paromomycin was demonstrated to be effective against cutaneous leishmaniasis in clinical studies in the USSR in the 1960s, and in trials with visceral leishmaniasis in the early 1990s. The routes of administration are intramuscular injection and capsule. Paromomycin topical cream, with or without gentamicin, is an effective treatment for ulcerative cutaneous leishmaniasis, according to the results of a phase-3, randomized, double-blind, parallel-group-controlled trial. Pregnancy and breastfeeding The medication is poorly absorbed; the effect it may have on the baby is still unknown. There are limited data regarding the safety of taking paromomycin while breastfeeding, but because the drug is poorly absorbed, minimal amounts will be secreted in breast milk. HIV/AIDS There is limited evidence that paromomycin can be used in persons coinfected with HIV and Cryptosporidium. A few small trials have shown a reduction in oocyst shedding after treatment with paromomycin. Adverse effects The most common adverse effects associated with paromomycin sulfate are abdominal cramps, diarrhea, heartburn, nausea, and vomiting. Long-term use of paromomycin increases the risk of bacterial or fungal infection; signs of overgrowth include white patches in the oral cavity. Other less common adverse events include myasthenia gravis, kidney damage, enterocolitis, malabsorption syndrome, eosinophilia, headache, hearing loss, ringing in the ear, itching, severe dizziness, and pancreatitis. Interactions Paromomycin belongs to the aminoglycoside drug class and is therefore toxic to the kidneys and the ears. These toxicities are additive and are more likely to occur when the drug is used with other drugs that cause ear and kidney toxicity. Concurrent use of foscarnet increases the risk of kidney toxicity. Concurrent use of colistimethate and paromomycin can cause a dangerous slowing of breathing known as respiratory depression, and should be undertaken with extreme caution if necessary. When used with systemic antibiotics such as paromomycin, the cholera vaccine can cause an immune response. Use with strong diuretics, which can also harm hearing, should be avoided. Paromomycin may have dangerous reactions when used with the paralytic succinylcholine, by increasing its neuromuscular effects. There are no known food or drink interactions with paromomycin. 
Mechanism Paromomycin inhibits protein synthesis in nonresistant cells by binding to 16S ribosomal RNA. This water-soluble, broad-spectrum antibiotic is very similar in action to neomycin. Antimicrobial activity of paromomycin against Escherichia coli and Staphylococcus aureus has been shown. Paromomycin works as an antibiotic by increasing the error rate in ribosomal translation. It binds to an RNA loop where residues A1492 and A1493 are usually stacked, and expels these two residues. These two residues are involved in detection of correct Watson-Crick pairing between the codon and anticodon: normally, only a correct codon–anticodon match provides the binding energy needed to expel them. Paromomycin binding supplies enough energy for residue expulsion regardless of whether the pairing is correct, and thus results in the ribosome incorporating the incorrect amino acid into the nascent peptide chain. Recent real-time measurements of aminoglycoside effects on protein synthesis in live E. coli cells found that paromomycin's interference with protein synthesis is due not only to the misreading of mRNA but also to a significant reduction in the overall protein elongation rate, suggesting a more comprehensive inhibition of protein synthesis. Pharmacokinetics Absorption GI absorption is poor. Any obstructions or factors which impair GI motility may increase the absorption of the drug from the digestive tract. In addition, any structural damage, such as lesions or ulcerations, will tend to increase drug absorption. For intramuscular (IM) injection, absorption is rapid; paromomycin reaches peak plasma concentration within one hour following IM injection. The in vitro and in vivo activities parallel those of neomycin. Elimination Almost 100% of the oral dose is eliminated unchanged via feces. Any absorbed drug is excreted in urine. History Paromomycin was discovered in the 1950s amongst the secondary metabolites of a variety of Streptomyces then known as Streptomyces krestomuceticus, now known as Streptomyces rimosus. It came into medical use in 1960. References External links Aminoglycoside antibiotics Antiprotozoal agents Orphan drugs Drugs developed by Pfizer World Health Organization essential medicines Wikipedia medicine articles ready to translate Antiparasitic agents
Paromomycin
[ "Biology" ]
1,279
[ "Antiprotozoal agents", "Biocides", "Antiparasitic agents" ]
2,202,805
https://en.wikipedia.org/wiki/The%20Space%20Explorers
The Space Explorers is an animated film created by Fred Ladd that was later turned into a cartoon serial and spawned a sequel series, New Adventures of the Space Explorers. The film aired in 1958; the sequel series aired the following year. For accuracy, both animated feature films used a consultant from the Hayden Planetarium. Synopsis The cartoon, which featured Jimmy, Smitty and Professor Leon Nordheim on board the Polaris spaceship, taught space-related concepts. Production The films were originally created for the education market, to be shown in classrooms. They were made under the technical guidance of Franklyn M. Branley, Associate Astronomer, American Museum of Natural History-Hayden Planetarium. It may have been rushed into production to "capitalize on the Sputnik craze". The material comes primarily from three foreign films: Various animation sequences come from the 1951 Russian film "Universe" by the late Soviet director Pavel Klushantsev. Images of the rocket Polaris come from footage of the German film "Weltraumschiff 1 Startet" (Anton Kutter, 1937). Except for images of the interior of the spaceship, images of the characters and of the walk on the planet's surface were extracted from the Russian cartoon film "Polet na lunu" (Flight to the Moon, 1953, Soyuzmultfilm). Release The Space Explorers first aired in 1958 on nationwide television shows such as Claude Kirchner's show on WWOR-TV, Captain Kangaroo, Captain Video (DuMont), Captain Satellite, Sheriff John, Officer Joe Bolton, and Romper Room. It was followed by the two-hour-long sequel New Adventures of the Space Explorers the following year. In popular culture The spaceship from the series, the Polaris, has been featured at the very beginning of Chapter 5 of NOVA's Public Television (PBS) production of The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory. It has also been seen in Mike Myers's Saturday Night Live sketch featuring the character Dieter. Reception In a book written by Ladd and Harvey Deneroff, the authors describe the film as a "cult classic". According to Jörg Hartmann, The Space Explorers quickly achieved wide distribution on North American television. It stood out among similarly themed children's series through its impressive special effects. The Space Explorers as well as the New Adventures of the Space Explorers remained very popular for ten years. Hartmann suggested that the popularization of space flight through media like The Space Explorers influenced some members of the baby boomer generation to take up careers in that field and, in the 1960s, to put the depicted flight around the Moon into practice. Telepolis journalist Marcus Hammerschmitt called The Space Explorers an instant hit, and concluded that the climate in late-1950s America must have been favorable for its reception. He counted The Space Explorers among the works by Ladd which contributed to the spread of anime in the West. Hammerschmitt also saw a parallel between the incorporation of film material produced in Nazi Germany into an American piece of media and the careers of some rocket engineers from the German Peenemünde facility who went on to work successfully at NASA.
References External links The Space Explorers website 1958 films 1958 animated films 1958 television films 1950s science fiction films 1950s American animated television series 1950s American science fiction television series 1958 American television series debuts 1958 American television series endings American children's animated adventure films Animated science fiction films Fiction about spaceflight Space exploration 1950s American films
The Space Explorers
[ "Astronomy" ]
706
[ "Space exploration", "Outer space" ]
2,202,860
https://en.wikipedia.org/wiki/Magnesium%20sulfide
Magnesium sulfide is an inorganic compound with the formula MgS. It is a white crystalline material but is often encountered as an impure, brown, non-crystalline powder. It is generated industrially in the production of metallic iron. Preparation and general properties MgS is formed by the reaction of sulfur or hydrogen sulfide with magnesium. It crystallizes in the rock salt structure as its most stable phase; its zinc blende and wurtzite structures can be prepared by molecular beam epitaxy. The chemical properties of MgS resemble those of related ionic sulfides such as those of sodium, barium, or calcium. It reacts with oxygen to form the corresponding sulfate, magnesium sulfate. MgS reacts with water to give hydrogen sulfide and magnesium hydroxide (balanced equations for these reactions are given at the end of this entry). Applications In the BOS steelmaking process, sulfur is the first element to be removed. Sulfur is removed from the impure blast furnace iron by the addition of several hundred kilograms of magnesium powder through a lance. Magnesium sulfide is formed, which then floats on the molten iron and is removed. MgS is a wide-band-gap direct semiconductor of interest as a blue-green emitter, a property that has been known since the early 1900s. The wide band gap also allows the use of MgS as a photodetector for short-wavelength ultraviolet light. Occurrence Aside from being a component of some slags, MgS occurs as the rare nonterrestrial mineral niningerite, detected in some meteorites. It is also a solid solution component, along with CaS and FeS, in oldhamite. MgS is also found in the circumstellar envelopes of certain evolved carbon stars, i.e., those with C/O > 1. Safety MgS evolves hydrogen sulfide upon contact with moisture. References Monosulfides Magnesium compounds II-VI semiconductors Rock salt crystal structure
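As a supplementary note, the reactions described above can be written as balanced equations. The stoichiometry below is standard textbook chemistry added for clarity; the equations themselves are not quoted from the source:

```latex
% Formation from the elements:
\mathrm{Mg} + \mathrm{S} \longrightarrow \mathrm{MgS}
% Oxidation to the corresponding sulfate:
\mathrm{MgS} + 2\,\mathrm{O_2} \longrightarrow \mathrm{MgSO_4}
% Hydrolysis on contact with water or moisture:
\mathrm{MgS} + 2\,\mathrm{H_2O} \longrightarrow \mathrm{Mg(OH)_2} + \mathrm{H_2S}
```

The hydrolysis equation is the basis of the safety note above: any moisture liberates toxic hydrogen sulfide gas.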
Magnesium sulfide
[ "Chemistry" ]
383
[ "Semiconductor materials", "II-VI semiconductors", "Inorganic compounds" ]
2,202,872
https://en.wikipedia.org/wiki/Harvard%20Mark%20IV
The Harvard Mark IV was an electronic stored-program computer built by Harvard University under the supervision of Howard Aiken for the United States Air Force. Construction was completed in 1952. The machine stayed at Harvard, where the Air Force used it extensively. The Mark IV was all-electronic. It used magnetic drum storage and had 200 registers of ferrite magnetic-core memory, making it one of the first computers to use core memory. It separated the storage of data and instructions in what is now sometimes referred to as the Harvard architecture, although that term was not coined until the 1970s (in the context of microcontrollers). See also Harvard Mark I Harvard Mark II Harvard Mark III List of vacuum-tube computers References Further reading External links Harvard Mark IV 64-bit Magnetic Shift Register at ComputerHistory.org 1950s computers Computer-related introductions in 1952 Vacuum tube computers One-of-a-kind computers Harvard University
Harvard Mark IV
[ "Technology" ]
187
[ "Computing stubs", "Computer hardware stubs" ]
2,202,916
https://en.wikipedia.org/wiki/New%20Source%20Performance%20Standards
New Source Performance Standards (NSPS) are pollution control standards issued by the United States Environmental Protection Agency (EPA). The term is used in the Clean Air Act Extension of 1970 (CAA) to refer to air pollution emission standards, and in the Clean Water Act (CWA) to refer to standards for water pollution discharges of industrial wastewater to surface waters. Introduction Some pollution control laws impose standards with varying degrees of stringency. The different standards may be based on several factors, including whether the pollution source is an existing facility at the time the standard is published, or is constructed after publication. The standards for new sources may be more stringent than those for existing facilities, on the principle that a new plant can be designed with the latest and most advanced control technologies. Clean Air Act The Clean Air Act NSPS dictate the level of pollution that a new stationary source may produce. These standards are authorized by Section 111 of the CAA and the regulations are published in 40 CFR Part 60. NSPS have been established for a number of individual industrial or source categories. Examples: Air emissions from chemical manufacturing wastewater Boilers Landfills Petroleum refineries Stationary gas turbines. Basic process for establishing standards Identify the type of emitting facility. For each type of facility, identify the type of pollutant control technology that is appropriate. From a study of all the plants and all the information available about the plants and their technologies, establish an allowed concentration of the criteria pollutants that is the upper limit of what can be emitted. Clean Water Act Under the Clean Water Act, NSPS set the level of allowable wastewater discharges from new industrial facilities. EPA issues NSPS for categories of industrial dischargers, typically in conjunction with the issuance of effluent guidelines for existing sources. In developing NSPS, the CWA requires that EPA determine the "best available demonstrated control technology" (BADCT) for the particular industrial category. BADCT may be more stringent than the best available technology economically achievable standard used for existing dischargers. This consideration may include setting a "no discharge of pollutants" standard (also called a "zero discharge" standard) if practicable. NSPS regulations are published at 40 CFR Subchapter N (Parts 405-499). NSPS issued by EPA include the following categories: Coal Mining Concentrated Animal Feeding Operations (including zero discharge requirements) Dairy Products Inorganic Chemicals Manufacturing (including a zero discharge requirement for several subcategories) Iron and Steel Manufacturing Oil and Gas Extraction Petroleum Refining Pulp, Paper and Paperboard Sugar Processing (including a zero discharge requirement for one subcategory) Textile Mills. EPA published a general definition of "new source" in its wastewater permit regulations. More specialized definitions of "new source" are included in some of the individual category regulations, e.g., the definition for the Pulp, Paper and Paperboard category.
See also National Emissions Standards for Hazardous Air Pollutants New Source Review - CAA pre-construction review process for new or modified facilities References External links EPA Air Toxics Program (CAA) EPA Effluent Guidelines Program (CWA) Air pollution in the United States Atmospheric dispersion modeling Emission standards Environmental law in the United States United States Environmental Protection Agency Waste legislation in the United States Water law in the United States Water pollution in the United States Standards of the United States
New Source Performance Standards
[ "Chemistry", "Engineering", "Environmental_science" ]
690
[ "Atmospheric dispersion modeling", "Environmental modelling", "Environmental engineering" ]
2,202,965
https://en.wikipedia.org/wiki/Major%20stationary%20source
A major stationary source is a source that emits more than a certain amount of a pollutant, as defined by the U.S. Environmental Protection Agency (EPA). The amount of pollutants allowed for certain new sources is defined by the EPA's New Source Performance Standards (NSPS); a schematic sketch of such a threshold test is given below. A stationary source in air quality terminology is any fixed emitter of air pollutants, such as fossil fuel burning power plants, petroleum refineries, petrochemical plants, food processing plants and other heavy industrial sources. A mobile source in air quality terminology is a non-stationary source of air pollutants, such as automobiles, buses, trucks, ships, trains, aircraft and various other vehicles. See also Air pollution dispersion terminology Atmospheric dispersion modeling AP 42 Compilation of Air Pollutant Emission Factors Lowest Achievable Emissions Rate References United States Environmental Protection Agency Atmospheric dispersion modeling
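To make the definition concrete, here is a minimal Python sketch of the kind of threshold test described above. The 100 tons-per-year cutoff and the example facility are illustrative assumptions only; actual major-source thresholds vary by pollutant, regulatory program, and the attainment status of the area.

```python
# Illustrative sketch: classify a stationary source as "major" by comparing
# its potential annual emissions against a threshold. The threshold below is
# an assumed, commonly cited figure, not a statement of the actual rule for
# any particular pollutant or program.
MAJOR_SOURCE_THRESHOLD_TONS_PER_YEAR = 100.0  # assumed illustrative cutoff

def is_major_stationary_source(annual_emissions_tons):
    """Return True if any pollutant's annual emissions exceed the threshold.

    annual_emissions_tons: dict mapping pollutant name -> tons per year.
    """
    return any(tons > MAJOR_SOURCE_THRESHOLD_TONS_PER_YEAR
               for tons in annual_emissions_tons.values())

# Hypothetical example facility (all values invented for illustration):
plant = {"SO2": 140.0, "NOx": 85.0, "PM10": 12.5}
print(is_major_stationary_source(plant))  # True: SO2 exceeds the cutoff
```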
Major stationary source
[ "Chemistry", "Engineering", "Environmental_science" ]
188
[ "Atmospheric dispersion modeling", "Environmental modelling", "Environmental engineering" ]
2,203,131
https://en.wikipedia.org/wiki/Geomagnetic%20reversal
A geomagnetic reversal is a change in a planet's dipole magnetic field such that the positions of magnetic north and magnetic south are interchanged (not to be confused with geographic north and geographic south). The Earth's magnetic field has alternated between periods of normal polarity, in which the predominant direction of the field was the same as the present direction, and reverse polarity, in which it was the opposite. These periods are called chrons. Reversal occurrences are statistically random. There have been at least 183 reversals over the last 83 million years (on average once every ~450,000 years). The latest, the Brunhes–Matuyama reversal, occurred 780,000 years ago with widely varying estimates of how quickly it happened. Other sources estimate that the time that it takes for a reversal to complete is on average around 7,000 years for the four most recent reversals. Clement (2004) suggests that this duration is dependent on latitude, with shorter durations at low latitudes and longer durations at mid and high latitudes. The duration of a full reversal varies between 2,000 and 12,000 years. Although there have been periods in which the field reversed globally (such as the Laschamp excursion) for several hundred years, these events are classified as excursions rather than full geomagnetic reversals. Stable polarity chrons often show large, rapid directional excursions, which occur more often than reversals, and could be seen as failed reversals. During such an excursion, the field reverses in the liquid outer core but not in the solid inner core. Diffusion in the outer core is on timescales of 500 years or less while that of the inner core is longer, around 3,000 years. History In the early 20th century, geologists such as Bernard Brunhes first noticed that some volcanic rocks were magnetized opposite to the direction of the local Earth's field. The first systematic evidence for and time-scale estimate of the magnetic reversals were made by Motonori Matuyama in the late 1920s; he observed that rocks with reversed fields were all of early Pleistocene age or older. At the time, the Earth's polarity was poorly understood, and the possibility of reversal aroused little interest. Three decades later, when Earth's magnetic field was better understood, theories were advanced suggesting that the Earth's field might have reversed in the remote past. Most paleomagnetic research in the late 1950s included an examination of the wandering of the poles and continental drift. Although it was discovered that some rocks would reverse their magnetic field while cooling, it became apparent that most magnetized volcanic rocks preserved traces of the Earth's magnetic field at the time the rocks had cooled. In the absence of reliable methods for obtaining absolute ages for rocks, it was thought that reversals occurred approximately every million years. The next major advance in understanding reversals came when techniques for radiometric dating were improved in the 1950s. Allan Cox and Richard Doell, at the United States Geological Survey, wanted to know whether reversals occurred at regular intervals, and they invited geochronologist Brent Dalrymple to join their group. They produced the first magnetic-polarity time scale in 1959. As they accumulated data, they continued to refine this scale in competition with Don Tarling and Ian McDougall at the Australian National University. 
A group led by Neil Opdyke at the Lamont–Doherty Earth Observatory showed that the same pattern of reversals was recorded in sediments from deep-sea cores. During the 1950s and 1960s, information about variations in the Earth's magnetic field was gathered largely by means of research vessels, but the complex routes of ocean cruises rendered the association of navigational data with magnetometer readings difficult. Only when data were plotted on a map did it become apparent that remarkably regular and continuous magnetic stripes appeared on the ocean floors. In 1963, Frederick Vine and Drummond Matthews provided a simple explanation by combining the seafloor spreading theory of Harry Hess with the known time scale of reversals: sea floor rock is magnetized in the direction of the field when it is formed. Thus, sea floor spreading from a central ridge will produce pairs of magnetic stripes parallel to the ridge. Canadian L. W. Morley independently proposed a similar explanation in January 1963, but his work was rejected by the scientific journals Nature and Journal of Geophysical Research, and remained unpublished until 1967, when it appeared in the literary magazine Saturday Review. The Morley–Vine–Matthews hypothesis was the first key scientific test of the seafloor spreading theory of continental drift. Past field reversals are recorded in the solidified ferrimagnetic minerals of consolidated sedimentary deposits or cooled volcanic flows on land. Beginning in 1966, Lamont–Doherty Geological Observatory scientists found that the magnetic profiles across the Pacific-Antarctic Ridge were symmetrical and matched the pattern in the north Atlantic's Reykjanes ridge. The same magnetic anomalies were found over most of the world's oceans, which permitted estimates for when most of the oceanic crust had developed. Observing past fields Because no existing unsubducted sea floor (or sea floor thrust onto continental plates) is more than about 180 million years (Ma) old, other methods are necessary for detecting older reversals. Most sedimentary rocks incorporate minute amounts of iron-rich minerals, whose orientation is influenced by the ambient magnetic field at the time at which they formed. These rocks can preserve a record of the field if it is not later erased by chemical, physical or biological change. Because Earth's magnetic field is a global phenomenon, similar patterns of magnetic variations at different sites may be used to help calculate age in different locations. The past four decades of paleomagnetic data about seafloor ages have been useful in estimating the age of geologic sections elsewhere. While not an independent dating method, it depends on "absolute" age dating methods like radioisotopic systems to derive numeric ages. It has become especially useful when studying metamorphic and igneous rock formations where index fossils are seldom available. Geomagnetic polarity time scale Through analysis of seafloor magnetic anomalies and dating of reversal sequences on land, paleomagnetists have been developing a Geomagnetic Polarity Time Scale. The current time scale contains 184 polarity intervals in the last 83 million years (and therefore 183 reversals). Changing frequency over time The rate of reversals in the Earth's magnetic field has varied widely over time. Around , the field reversed 5 times in a million years. In a 4-million-year period centered on , there were 10 reversals; at around , 17 reversals took place in the span of 3 million years.
In a period of 3 million years centering on , 13 reversals occurred. No fewer than 51 reversals occurred in a 12-million-year period, centering on . Two reversals occurred during a span of 50,000 years. These eras of frequent reversals have been counterbalanced by a few "superchrons": long periods when no reversals took place. Superchrons A superchron is a polarity interval lasting at least 10 million years. There are two well-established superchrons, the Cretaceous Normal and the Kiaman. A third candidate, the Moyero, is more controversial. The Jurassic Quiet Zone in ocean magnetic anomalies was once thought to represent a superchron but is now attributed to other causes. The Cretaceous Normal (also called the Cretaceous Superchron or C34) lasted for almost 40 million years, from about , including stages of the Cretaceous period from the Aptian through the Santonian. The frequency of magnetic reversals steadily decreased prior to the period, reaching its low point (no reversals) during the period. Between the Cretaceous Normal and the present, the frequency has generally increased slowly. The Kiaman Reverse Superchron lasted from approximately the late Carboniferous to the late Permian, or for more than 50 million years, from around . The magnetic field had reversed polarity. The name "Kiaman" derives from the Australian town of Kiama, where some of the first geological evidence of the superchron was found in 1925. The Ordovician is suspected to have hosted another superchron, called the Moyero Reverse Superchron, lasting more than 20 million years (485 to 463 million years ago). Thus far, this possible superchron has only been found in the Moyero river section north of the polar circle in Siberia. Moreover, the best data from elsewhere in the world do not show evidence for this superchron. Certain regions of ocean floor, older than , have low-amplitude magnetic anomalies that are hard to interpret. They are found off the east coast of North America, the northwest coast of Africa, and the western Pacific. They were once thought to represent a superchron called the Jurassic Quiet Zone, but magnetic anomalies are found on land during this period. The geomagnetic field is known to have low intensity between about and , and these sections of ocean floor are especially deep, causing the geomagnetic signal to be attenuated between the seabed and the surface. Statistical properties Several studies have analyzed the statistical properties of reversals in the hope of learning something about their underlying mechanism. The discriminating power of statistical tests is limited by the small number of polarity intervals. Nevertheless, some general features are well established. In particular, the pattern of reversals is random. There is no correlation between the lengths of polarity intervals. There is no preference for either normal or reversed polarity, and no statistical difference between the distributions of these polarities. This lack of bias is also a robust prediction of dynamo theory. Because reversal occurrences are statistically random, there is no single well-defined rate of reversals. Although the randomness of the reversals is inconsistent with periodicity, several authors have claimed to find periodicity; these results, however, are probably artifacts of analyses using sliding windows to attempt to determine reversal rates. Most statistical models of reversals have analyzed them in terms of a Poisson process or other kinds of renewal process.
A homogeneous Poisson process would have, on average, a constant reversal rate; because the observed rate varies over tens of millions of years, it is common to use a non-stationary Poisson process instead. However, compared to a Poisson process, there is a reduced probability of reversal for tens of thousands of years after a reversal. This could be due to an inhibition in the underlying mechanism, or it could just mean that some shorter polarity intervals have been missed. A random reversal pattern with inhibition can be represented by a gamma process (a minimal numerical sketch of the Poisson and gamma pictures is given at the end of this article). In 2006, a team of physicists at the University of Calabria found that the reversals also conform to a Lévy distribution, which describes stochastic processes with long-ranging correlations between events in time. The data are also consistent with a deterministic, but chaotic, process. Character of transitions Duration Most estimates for the duration of a polarity transition are between 1,000 and 10,000 years, but some estimates are as quick as a human lifetime. During a transition, the magnetic field will not vanish completely, but many poles might form chaotically in different places during reversal, until it stabilizes again. Studies of 16.7-million-year-old lava flows on Steens Mountain, Oregon, indicate that the Earth's magnetic field is capable of shifting at a rate of up to 6 degrees per day. This was initially met with skepticism from paleomagnetists. Even if changes occur that quickly in the core, the mantle, which is a semiconductor, is thought to remove variations with periods less than a few months. A variety of possible rock magnetic mechanisms were proposed that would lead to a false signal. That said, paleomagnetic studies of other sections from the same region (the Oregon Plateau flood basalts) give consistent results. It appears that the reversed-to-normal polarity transition that marks the end of Chron C5Cr contains a series of reversals and excursions. In addition, geologists Scott Bogue of Occidental College and Jonathan Glen of the US Geological Survey, sampling lava flows in Battle Mountain, Nevada, found evidence for a brief, several-year-long interval during a reversal when the field direction changed by over 50 degrees. The reversal was dated to approximately 15 million years ago. In 2018, researchers reported a reversal lasting only 200 years. A 2019 paper estimates that the most recent reversal, 780,000 years ago, lasted 22,000 years. Causes The magnetic field of the Earth, and of other planets that have magnetic fields, is generated by dynamo action in which convection of molten iron in the planetary core generates electric currents, which in turn give rise to magnetic fields. In simulations of planetary dynamos, reversals often emerge spontaneously from the underlying dynamics. For example, Gary Glatzmaier and collaborator Paul Roberts of UCLA ran a numerical model of the coupling between electromagnetism and fluid dynamics in the Earth's interior. Their simulation reproduced key features of the magnetic field over more than 40,000 years of simulated time, and the computer-generated field reversed itself. Global field reversals at irregular intervals have also been observed in the laboratory liquid metal experiment "VKS2". In some simulations, fluctuations in the dynamo lead to an instability in which the magnetic field spontaneously flips over into the opposite orientation. This scenario is supported by observations of the solar magnetic field, which undergoes spontaneous reversals every 9–12 years.
With the Sun it is observed that the solar magnetic intensity greatly increases during a reversal, whereas reversals on Earth seem to occur during periods of low field strength. Some scientists, such as Richard A. Muller, think that geomagnetic reversals are not spontaneous processes but rather are triggered by external events that directly disrupt the flow in the Earth's core. Proposals include impact events or internal events such as the arrival of continental slabs carried down into the mantle by the action of plate tectonics at subduction zones or the initiation of new mantle plumes from the core-mantle boundary. Supporters of this hypothesis hold that any of these events could lead to a large-scale disruption of the dynamo, effectively turning off the geomagnetic field. Because the magnetic field is stable in either the present north–south orientation or a reversed orientation, they propose that when the field recovers from such a disruption it spontaneously chooses one state or the other, such that half the recoveries become reversals. This proposed mechanism does not appear to work in a quantitative model, and the evidence from stratigraphy for a correlation between reversals and impact events is weak. There is no evidence for a reversal connected with the impact event that caused the Cretaceous–Paleogene extinction event. Effects on biosphere Shortly after the first geomagnetic polarity time scales were produced, scientists began exploring the possibility that reversals could be linked to extinction events. Many such arguments were based on an apparent periodicity in the rate of reversals, but more careful analyses show that the reversal record is not periodic. It may be that the ends of superchrons have caused vigorous convection leading to widespread volcanism, and that the subsequent airborne ash caused extinctions. Tests of correlations between extinctions and reversals are difficult for several reasons. Larger animals are too scarce in the fossil record for good statistics, so paleontologists have analyzed microfossil extinctions. Even microfossil data can be unreliable if there are hiatuses in the fossil record. It can appear that the extinction occurs at the end of a polarity interval when the rest of that polarity interval was simply eroded away. Statistical analysis shows no evidence for a correlation between reversals and extinctions. Most proposals tying reversals to extinction events assume that the Earth's magnetic field would be much weaker during reversals. Possibly the first such hypothesis was that high-energy particles trapped in the Van Allen radiation belt could be liberated and bombard the Earth. Detailed calculations confirm that if the Earth's dipole field disappeared entirely (leaving the quadrupole and higher components), most of the atmosphere would become accessible to high-energy particles; the atmosphere itself, however, would act as a barrier to them, and cosmic-ray collisions within it would produce secondary radiation of beryllium-10 or chlorine-36. A 2012 German study of Greenland ice cores showed a peak of beryllium-10 during a brief complete reversal 41,000 years ago, during which the magnetic field strength dropped to an estimated 5% of normal. There is evidence that such enhanced isotope production occurs both during secular variation and during reversals. A hypothesis by McCormac and Evans assumes that the Earth's field disappears entirely during reversals. They argue that the atmosphere of Mars may have been eroded away by the solar wind because it had no magnetic field to protect it.
They predict that ions would be stripped away from Earth's atmosphere above 100 km. Paleointensity measurements show that the magnetic field has not disappeared during reversals. Based on paleointensity data for the last 800,000 years, the magnetopause is still estimated to have been at about three Earth radii during the Brunhes–Matuyama reversal. Even if the internal magnetic field did disappear, the solar wind can induce a magnetic field in the Earth's ionosphere sufficient to shield the surface from energetic particles. See also List of geomagnetic reversals, including ages Magnetic anomaly References Further reading External links Is it true that the Earth's magnetic field is about to flip? physics.org, accessed 8 January 2019 Pole Reversal Happens All The (Geologic) Time NASA, accessed 1 March 2022 Paleomagnetism Geophysics
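As referenced in the statistics discussion above, the following minimal Python sketch draws synthetic polarity-interval sequences from a Poisson model (exponential waiting times) and from a gamma renewal model of the kind described in the article. All parameter values are illustrative assumptions, not fits to the real reversal record.

```python
# Minimal sketch: compare polarity-interval statistics for a Poisson process
# and a gamma renewal process. A gamma shape parameter k > 1 suppresses very
# short intervals, mimicking the post-reversal "inhibition" described in the
# text. Parameter values are illustrative, not fitted to real data.
import numpy as np

rng = np.random.default_rng(seed=0)

MEAN_INTERVAL_KYR = 450.0  # rough mean reversal spacing quoted in the text
N_INTERVALS = 183          # number of reversals in the last ~83 Myr

# Poisson process: waiting times between reversals are exponential.
poisson_intervals = rng.exponential(MEAN_INTERVAL_KYR, size=N_INTERVALS)

# Gamma renewal process with the same mean (scale = mean / k).
k = 2.0
gamma_intervals = rng.gamma(shape=k, scale=MEAN_INTERVAL_KYR / k,
                            size=N_INTERVALS)

for name, x in (("Poisson", poisson_intervals), ("gamma", gamma_intervals)):
    frac_short = np.mean(x < 50.0)  # fraction of intervals under 50 kyr
    print(f"{name:8s} mean = {x.mean():7.1f} kyr, "
          f"fraction shorter than 50 kyr = {frac_short:.2f}")
```

For the same mean interval, the gamma model produces far fewer very short polarity intervals than the Poisson model, which is the qualitative signature of the inhibition discussed above.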
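The roughly three-Earth-radii magnetopause figure quoted above can also be sanity-checked with the standard Chapman–Ferraro pressure-balance scaling, a textbook estimate added here for clarity rather than taken from the source:

```latex
r_s \;=\; R_E \left( \frac{B_0^2}{2 \mu_0\, \rho_{sw} v_{sw}^2} \right)^{1/6}
\quad\Longrightarrow\quad r_s \propto B_0^{1/3}
```

Here B0 is the equatorial dipole field strength, rho_sw and v_sw are the solar wind density and speed, and R_E is the Earth radius. Taking a present-day standoff distance of roughly 10 R_E and a field reduced to about 5% of normal, as estimated above, gives r_s of about 10 R_E x (0.05)^(1/3), or roughly 3.7 R_E, consistent in magnitude with the ~3 Earth radii cited for the Brunhes–Matuyama reversal.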
Geomagnetic reversal
[ "Physics" ]
3,627
[ "Applied and interdisciplinary physics", "Geophysics" ]
2,203,151
https://en.wikipedia.org/wiki/Madison%20Symmetric%20Torus
The Madison Symmetric Torus (MST) is a reversed field pinch (RFP) physics experiment with applications to both fusion energy research and astrophysical plasmas. MST is located at the Center for Magnetic Self Organization (CMSO) at the University of Wisconsin-Madison. RFPs are significantly different from tokamaks (the most popular magnetic confinement scheme) in that they tend to have a higher power density and better confinement characteristics for a given average magnetic field. RFPs also tend to be dominated by non-ideal phenomena and turbulent effects. Classification As in most such experiments, the MST plasma is a toroidal pinch, which means the plasma is shaped like a donut and confined by a magnetic field generated by a large current flowing through it. MST falls into an unconventional class of machine called a reversed field pinch (RFP). The RFP is so named because the toroidal magnetic field that permeates the plasma spontaneously reverses direction near the edge. A reversed field pinch is formed similarly to other toroidal pinch devices, by driving current through the plasma from an associated capacitor bank or other high-current power sources. In a tokamak the toroidal field is much stronger than the poloidal field, but in an RFP the opposite is true. In fact, in an RFP the externally applied toroidal field is switched off shortly after startup. The plasma in an RFP is also much closer to the wall than in a tokamak. This permits a peculiar arrangement of the magnetic field lines, which will 'relax' into a new state such that the total magnetic energy in the plasma is minimized and the total magnetic helicity is conserved. The relaxed state, called a Taylor state, is marked by the distinctive field-line arrangement in which the toroidal magnetic field at the edge spontaneously reverses direction. Ongoing experiments in the MST program Oscillating field current drive Like most toroidal confinement schemes, the RFP relies on a transient burst of current to create the plasma and the magnetic fields that confine it. But for the RFP to be a viable fusion energy candidate, the plasma must be sustained by a steady-state current source. OFCD is a scheme for driving a steady current in a relaxed plasma by adding sizable oscillating perturbations to the toroidal and poloidal fields, injecting both power and helicity into the plasma. A similar approach was patented and suggested for the Lockheed-Martin Compact Fusion Reactor. A nonlinear reaction in the plasma combines the two oscillations in such a way that, on average, a steady current is maintained. Pellet injection One of the challenges facing the RFP is fueling the hot core of the plasma directly, rather than relying on the deuterium gas to seep in slowly from the edge. The pellet injector fires a frozen pellet of deuterium into the plasma using a blast of gas or a mechanical punch. The pellet is vaporized and ionized as it travels into the core of the plasma. Pulsed poloidal current drive Every gradient is a source of free energy, especially one across a magnetic field. In MST the current is stronger in the core than at the edge. This peaked current profile serves as a source of free energy for magnetic fluctuations, culminating in violent events in the plasma called sawteeth. PPCD alleviates this effect by driving a current at the edge of the plasma, flattening the current profile. Small pulses are added to the power supply currents that drive the toroidal field.
The resultant pulsed toroidal magnetic field, in accordance with Faraday's law, induces a poloidal electric field and hence a poloidal current. A great deal of research on MST is devoted to the study of this effect and its application for enhanced confinement. Neutral beam injection In order to initiate a sustained fusion reaction, it is usually necessary to use many methods to heat the plasma. Neutral beam injection (NBI) involves injecting a high-energy beam of neutral atoms, typically hydrogen or deuterium, into the core of the plasma. These energetic atoms transfer their energy to the plasma, raising the overall temperature. The injected atoms do not remain neutral: as the beam passes through the plasma, the atoms are ionized as they bounce off the ions in the plasma. Because the magnetic field inside the torus is bent into a circle, the fast ions are then expected to be confined in the background plasma. The confined fast ions are slowed down by the background plasma, the same way air resistance slows down a baseball. The energy transfer from the fast ions to the plasma increases the plasma temperature. The injector itself looks like a long silver cylinder lying on its side, tilted slightly downward against the torus near the back of the machine. When the injector is pulsed, a 20,000-volt potential accelerates the beam, delivering about 30 amperes of current for about 1.5 milliseconds. Problems would occur if the fast ions were not confined within the plasma long enough for them to deposit their energy. Magnetic fluctuations bedevil plasma confinement in this type of device by scrambling what we hoped were well-behaved magnetic fields. If the fast ions are susceptible to this type of behavior, they can escape very quickly. However, there is evidence to suggest that they are not. Electron Bernstein wave current drive EBW is an acronym for Electron Bernstein Wave, named after the plasma physicist Ira Bernstein. Bernstein Wave Mode relates to a method of injecting ion or electron energy (IBW or EBW) into a plasma to increase its temperature in an attempt to reach fusion conditions. A plasma is a phase of matter which occurs naturally during lightning and electrical discharges and which is created artificially in fusion reactors to produce extremely high temperatures. This is an experiment on the MST to heat the plasma and to drive electric current inside the plasma. There is a large electric current in the plasma inside this machine; it is responsible for creating the necessary magnetic fields to make the reversed field pinch configuration. It also heats the plasma very quickly, the same way wires inside a toaster get hot. A toaster uses about 10 amperes of current, while the plasma in MST is heated by up to 600,000 amperes. But even though the plasma reaches over 10,000,000 degrees Fahrenheit, it is not hot enough for practical fusion energy, and we need to find other ways to deposit energy into the plasma. The EBW is a way to inject microwave power to further heat the plasma. A standard microwave oven produces around 1 kW of power at a frequency of 2.45 GHz; the EBW experiment is currently producing 150 kW at 3.6 GHz, and it is a goal of the team to upgrade to over 2 MW. To generate this type of power (on a low budget), decommissioned military radar equipment and home-made voltage power supplies are used.
The second (and perhaps more scientifically important) goal of the EBW experiment is to drive electric current in a prescribed place within the plasma. The main plasma current distributes itself naturally, and the plasma tends to concentrate current into the center, leaving less current near the edge. This can lead to instability of the plasma. It has been shown (both theoretically and by experiments in the Madison Symmetric Torus) that driving current in the edge makes the plasma more stable to fluctuations in the magnetic field, resulting in better confinement of the hot plasma and leading to much higher temperatures. Using the EBW to drive this stabilizing current would be a very important scientific result. The ability to deposit the auxiliary current very precisely gives us the opportunity to optimize our current drive schemes. The heating is also very localized, allowing us to study how hot (at least locally) the plasma can become within this magnetic confinement scheme; in plasma physics terms, this is called finding the beta limit. This is an unanswered question for the RFP and will give insight on whether or not this type of machine could be scaled up to a cost-effective, efficient fusion reactor. The heavy ion beam probe The Heavy Ion Beam Probe (HIBP) fires singly charged alkali ions into the plasma. By measuring their trajectory we get a profile of several key properties inside the plasma. This versatile diagnostic tool has been used in magnetic confinement fusion experiments to determine the electric potential, electron density, electron temperature, and magnetic vector potential of the plasma. A stream of sodium ions (the primary beam) is injected from the ion gun across the magnetic field into the plasma. As the singly charged particles pass through the plasma, they are further ionized, creating the doubly charged secondary beam. The secondaries are then detected and analyzed outside the plasma. By curving the trajectories, the magnetic field separates secondary ions from primary ions. Because of this, only secondaries ionized at a given plasma position reach a given detector location. This allows the HIBP to make measurements localized to the ionization position. The secondary current is related to local electron density and the ionization cross-section of the primary ions, which is itself a function of the electron temperature. The electric potential can be obtained from the energy difference between primary and secondary ion beams. The energy of the secondary beam can be determined from the angle at which it enters the energy analyzer. The MST-HIBP system consists of: a 200 keV electrostatic accelerator that forms, focuses and accelerates the diagnostic ion beam; the primary and secondary beamlines with sweep systems that provide beam transmission and steering; an electrostatic analyzer that measures the energy, intensity and position of the secondary beam; and auxiliary components and systems, which include the primary beam detectors and the plasma/UV suppression structures, etc. Far infrared polarimetry-interferometry system FIR, or far infrared, refers to light with wavelengths of roughly 0.1 to 1 mm. The FIR system in MST is based on far-infrared lasers housed in a dedicated laser safety room. There are four FIR lasers in the system. One is a CO2 laser which produces a continuous power of about 120 W. This beam is then split in three.
Each beam optically pumps a formic acid vapor laser operating at a wavelength of 432.6 µm, with a power of about 20 mW. The FIR system has 2 modes of operation: interferometry and polarimetry. What does the FIR diagnostic system measure? The electron density, plasma current density, and magnetic field are three important plasma parameters of MST. The FIR system is used to measure their spatial and temporal distributions. How does FIR interferometry work? Like glass, a plasma has a refractive index different from that of vacuum (or air), and that index depends on the plasma electron density. We send one laser beam through the plasma (the probe beam), one through the air (the reference beam), and measure the phase difference between them. This experimental configuration is called a Mach-Zehnder interferometer. The measured phase is proportional to the average plasma electron density along the beam path (the standard formula is given below). In MST, we send multiple probe beams through the plasma at different radii. We then apply the Abel inversion technique to obtain a profile of the plasma electron density. How does FIR polarimetry work? A plasma is also an optically active medium, meaning that when a linearly polarized electromagnetic wave propagates parallel (or anti-parallel) to the magnetic field, the polarization of the wave exiting the plasma is rotated by a small angle. This is called Faraday rotation, and the angle is called the Faraday rotation angle. The FIR system measures the Faraday rotation, which is proportional to the line average of the electron density times the magnetic field component parallel to the beam path. The reason for Faraday rotation is as follows: when a linearly polarized wave propagates along a magnetic field line, it is decomposed into left-hand and right-hand circularly polarized components. The phase difference between them as they exit the plasma causes the recombined linearly polarized wave to rotate its polarization direction. In MST, we launch two co-propagating, counter-rotating waves to probe the plasma. We then measure the phase difference between these two beams, which will be twice the Faraday rotation angle. Each of the 11 probe beams is a combination of two counter-rotating, circularly polarized beams, measuring the Faraday rotation angles along the same chords as the interferometer does. The interferometer phases and Faraday rotation angles can then be combined to determine the poloidal magnetic field distribution. Using Ampere's law, the toroidal plasma current can be determined as well. How well does the FIR diagnostic system work? The FIR system for MST is very precise. The Faraday rotation angle for MST plasmas is typically within 5 degrees. To measure such a small signal, we have achieved an accuracy of 0.06 degrees. The temporal resolution is less than 1 microsecond. What are some of the research topics related to FIR? FIR is an essential tool for most of the research topics in MST since it provides information about the basic plasma parameters. The system measures electron density, toroidal current, poloidal magnetic field, and the spatial profiles of each. Currently, we are exploring the possibility of measuring the toroidal magnetic field and poloidal plasma current by using the plasma birefringence effect, or the Cotton-Mouton effect. When a linearly polarized EM wave propagates perpendicular to the magnetic field, the refractive index depends on whether the wave polarization is parallel or perpendicular to the magnetic field direction.
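For reference, the two line-integral relationships described above can be written explicitly. These are the standard plasma-diagnostic formulas in SI units (valid when the wave frequency is far above the plasma frequency); they are added here for clarity and are not quoted from the source:

```latex
% Interferometry: phase shift of the probe beam relative to the reference beam
\Delta\phi \;=\; \lambda\, r_e \int n_e \, dl,
\qquad r_e = \frac{e^2}{4\pi \varepsilon_0 m_e c^2} \;\;\text{(classical electron radius)}

% Polarimetry: Faraday rotation angle of the probe beam
\alpha \;=\; \frac{e^3 \lambda^2}{8 \pi^2 \varepsilon_0 m_e^2 c^3} \int n_e B_{\parallel} \, dl
\;\approx\; 2.62 \times 10^{-13}\, \lambda^2 \int n_e B_{\parallel}\, dl
```

Here lambda is the laser wavelength in meters, n_e the electron density in m^-3, and B_parallel the field component along the beam in teslas. Both quantities grow with wavelength, which is one reason a far-infrared wavelength gives measurable but not overwhelming phase shifts.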
Why choose FIR lasers? For plasma polarimetry-interferometry, the wavelength we chose is sufficiently long to provide measurable plasma-induced phase changes, but sufficiently short to avoid complicated plasma-wave interactions, including the bending of the beam. There are many high-power molecular laser lines available in this wavelength range, and many commercially available detectors. Thomson scattering What is Thomson scattering? Thomson scattering is the result of a collision between a photon (an electromagnetic wave) and a charged particle, such as an electron. When an electron and photon "collide", the electron feels a Lorentz force from the oscillating electric and magnetic fields of the photon and is accelerated. This acceleration causes the electron to emit a different photon in a different direction. This emitted photon has a wavelength shifted from that of the incident photon by an amount dependent on the electron energy. Another way of looking at this is that the electron absorbs the energy of the photon and re-emits the energy in the form of a different electromagnetic wave. This scattering of a photon by an electron is called Thomson scattering. How is Thomson scattering useful to plasma physicists? Since the wavelength of the scattered photon depends on the energy of the scattering electron, Thomson scattering is a good way to measure the energy of an electron. This is done by creating a photon of known wavelength and measuring the wavelength of the scattered photon. The Thomson scattering configuration at MST uses a 1064 nm Nd:YAG laser system, which produces the best time-resolution electron temperature readings in the world. We create our photons with high-power lasers that we shine into a window on the top of the MST, and collect scattered photons with a large collection lens on the side of the MST. The wavelength distribution of the scattered photons tells us the energy distribution of the electrons in the plasma, giving us a direct, unobtrusive way of getting the temperature of the electrons. The number of photons we actually collect can also tell us something about the density of the electrons in the plasma. Charge exchange recombination spectroscopy and ion Doppler spectroscopy Fusion plasmas are typically generated from ionization of a neutral gas. In most cases, deuterium is used as the plasma fuel. These plasmas are therefore primarily made up of deuterium ions (plus electrons), and it is necessary to diagnose the behavior of these ions if the relevant plasma physics is to be understood. However, in any fusion device, other types of ions ("impurities") are also present. These exist naturally due to the inability to achieve a perfect vacuum in a fusion reactor before fueling. Thus, materials such as water vapor, nitrogen, and carbon will be found in small amounts in typical plasma discharges. Impurities may also be generated during plasma discharges due to plasma-wall interactions. These interactions primarily cause material from the wall to be ejected into the plasma through sputtering. In the Madison Symmetric Torus (MST), properties of the impurity ions (e.g. carbon, oxygen, etc.) are closely linked to properties of the deuterium ions as a result of strong interaction between the ion species. Thus, impurity ion measurements can, in principle, provide direct information about the deuterium ions. Measurements of the impurity ion temperature (Ti) and flow velocity (vi) are obtained on MST using Charge Exchange Recombination Spectroscopy, or CHERS.
The CHERS process can be broken down into two separate steps: charge exchange and radiative decay. In the first stage, an electron is transferred from a neutral atom (e.g. deuterium) to an impurity ion that has no electrons (e.g. C+6). During this transfer, the electron typically winds up in an excited state (high energy level) of the impurity ion. As the electron decays down to the ground state (minimum energy level), energy conservation requires radiation to be emitted by the impurity ion. This emission has discrete values of energy, or wavelength, which correspond to the energy differences between the initial and final atomic levels of a particular electron transition. For example, consider charge exchange between a deuterium atom and a C+6 ion: if the electron is transferred to the n=7 energy level of the carbon ion, then the ion will emit radiation at discrete energies given by the difference in energy between the n=7 and n=6 levels, the n=6 and n=5 levels, the n=5 and n=4 levels, and so on (down to n=1). This line emission is Doppler-broadened as a result of ion thermal motion, and Doppler-shifted as a result of ion flow. The Doppler shift causes the emission to be blue-shifted (towards shorter wavelength/higher frequency) if the ions are moving towards the point of observation, or red-shifted (towards longer wavelength/lower frequency) if the flow is away from the point of observation. Measurements of the carbon emission line shape are therefore used to extract values for the impurity ion temperature and velocity (the standard Doppler formulas are given below). Charge exchange: H + C+6 → H+1 + C+5 (n=7, l=6) Radiative decay: C+5 (n=7, l=6) → C+5 (n=6, l=5) + hν (photon) In a typical fusion device the neutral atom density is small. Therefore, the amount of radiated emission that results from charge exchange between impurity ions and neutrals is also small. On MST, the neutral density is enhanced by injection of fast hydrogen atoms via a diagnostic neutral beam (DNB); as a result, the radiated emission is greatly increased, though primarily along the beam injection path. Perpendicular to the beam path, there exist a number of optical ports for viewing the plasma at different radial positions. For a given plasma discharge, a fiber bundle system is placed on one of these ports and is used to collect emission along its line of sight. This emission is sent to a spectrometer, where it is dispersed by a pair of optical gratings over a finite wavelength range centered on the emission line of interest. However, because the collected emission is dominated by radiation from along the beam path, the measurements are effectively localized to the intersection volume between the fiber view and the beam. On MST, this intersection volume is small (~2 cm³) compared to the plasma volume, allowing spatially resolved measurements of Ti and vi to be obtained. Data collected from a number of plasma discharges, for which the location of the fiber bundle system is varied, are used to construct radial profiles of the impurity ion temperature and velocity, providing important information for understanding the physics of plasmas in MST.
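The Doppler relations underlying these line-shape measurements can be written explicitly. These are the standard spectroscopy formulas, added here for clarity rather than quoted from the source:

```latex
% Doppler shift from bulk ion flow along the line of sight:
\frac{\Delta\lambda}{\lambda_0} \;=\; \frac{v_i}{c}

% Doppler broadening (Gaussian FWHM) from a Maxwellian ion distribution:
\Delta\lambda_{\mathrm{FWHM}} \;=\; \lambda_0 \sqrt{\frac{8 \ln 2 \; k_B T_i}{m_i c^2}}
\quad\Longrightarrow\quad
T_i \;=\; \frac{m_i c^2}{8 \ln 2 \; k_B}\left(\frac{\Delta\lambda_{\mathrm{FWHM}}}{\lambda_0}\right)^2
```

As a rough check on the transition used in the example above, the hydrogenic estimate for the C+5 n = 7 → 6 line gives a photon energy of 13.6 eV x 6² x (1/36 − 1/49) ≈ 3.6 eV, i.e., a wavelength near 344 nm.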
Typical ion temperatures measured by CHERS on MST are in the range of 100 to 800 eV (2 million to 17 million degrees Fahrenheit), depending on position in the plasma and type of discharge. Likewise, measured equilibrium ion velocities are on the order of 1,000 to 10,000 meters per second. References Magnetic confinement fusion devices University of Wisconsin–Madison
Madison Symmetric Torus
[ "Chemistry" ]
4,371
[ "Particle traps", "Magnetic confinement fusion devices" ]
2,203,446
https://en.wikipedia.org/wiki/Alexander%20Men
Alexander Vladimirovich Men (22 January 1935 – 9 September 1990) was a Soviet Russian Orthodox priest, dissident, theologian, biblical scholar and author of works on theology, the history of religion, the fundamentals of Christian doctrine, and Orthodox worship. Men wrote dozens of books, including his magnum opus, History of Religion: In Search of the Way, the Truth and the Life (1970 onwards), whose seventh volume (entitled Son of Man, 1969) served as the introduction to Christianity for thousands of citizens of the Soviet Union; baptized hundreds; founded an Orthodox open university in 1990; opened one of the first Sunday schools in the USSR; and founded a charity group at the Russian Children's Hospital. Alexander Men was murdered early on a Sunday morning, 9 September 1990, by an axe-wielding assailant (or assailants) outside his home in Semkhoz in the Sergiyevo-Posadsky District of the Moscow Oblast in Russia. The circumstances of the murder remain unclear. Life Background Men's father, Volf Gersh-Leibovich (Vladimir Grigoryevich) Men, was born in 1902 in Kiev. Volf attended a religious Jewish school during his childhood, but did not practice religion. He later graduated from two universities and worked as the chief engineer of a textile factory. Men's maternal ancestors, originally from Poland, had lived in Russia since the 18th century. His grandmother, Cecilia Vasilevskaya, and grandfather, Odessa resident Semyon (Solomon) Ilyich Tsuperfein, met in Switzerland while studying at the Faculty of Chemistry at the University of Bern. Their daughter Yelena (Alexander's mother) was born in Bern in 1908. After graduating, Semyon, Cecilia, and their daughter lived in Paris. In 1914, during a visit to Russia, Semyon was mobilized, and the family settled in Kharkov. Yelena Semyonovna Men (née Tsuperfein) was drawn to Christianity from a young age. She studied the Orthodox faith at a private gymnasium in Kharkov. As a high school student, she moved to Moscow to live with her grandmother Anna Osipovna Vasilevskaya. In 1934, she married Volf. Early life Men was born in Moscow to a Jewish family on 22 January 1935. At the age of six months, he was secretly baptized with his mother in Zagorsk by the priest Archimandrite Seraphim (Bityukov) of the banned Catacomb Church, a branch of the Russian Orthodox Church that refused to cooperate with Soviet authorities. When Men was six years old, his father was arrested by the NKVD. His father spent more than a year under guard and then was assigned to labor in the Ural Mountains. Men studied at the Moscow Fur Institute and in 1955 transferred to the Irkutsk Agricultural Institute, from which he was expelled in 1958 due to his religious beliefs. One month after his expulsion, on 1 June 1958, he was ordained a deacon and sent to serve at a church in Akulovo. Priesthood On September 1, 1960, Men was ordained a priest, having graduated from the Leningrad Theological Seminary. His ordination took place at the Donskoy Monastery. Men was appointed second priest at a parish church outside Moscow, where a year later he became rector. In 1965, he completed his studies at Moscow Theological Academy. In 1964 and 1965, Men's father was investigated in connection with his acquaintance Aleksandr Solzhenitsyn. Men became a leader with considerable influence and a good reputation among Christians both locally and abroad, among Roman Catholics, Protestants, and Orthodox. He served in a series of parishes near Moscow. During the 1960s, Men was a pioneer of Christian "samizdat" (self-publishing).
Starting in the early 1970s, Men became a popular figure in Russia's religious community, especially among the intelligentsia. Men was targeted by the KGB for his active missionary and evangelistic efforts. In 1974, Yuri Andropov wrote a letter to the Central Committee of the Communist Party of the Soviet Union about the "ideological struggle of the Vatican against the USSR," in which he wrote: "A group of pro-Catholic-minded priests, headed by A. Men (Moscow Oblast), in their theological works pushes through the idea that only Catholicism can be the ideal of church life. These works, illegally exported abroad, are published by the Catholic publishing house Life with God (Belgium) and are then sent for distribution in the USSR." In 1984, Men was interrogated in the case of one of his students; during these interrogations, Men was threatened with a ban on serving in any of the Moscow parishes. An article published in the Trud newspaper in the spring of 1986 accused him of attempting to create an "anti-Soviet underground" under the auspices of Archpriest John Meyendorff; organizing "illegal religious matinees"; and personally voicing "slide films of a religious propaganda nature, which he illegally distributed among believers." On May 11, 1988, Men's first public lecture took place in the hall of the Institute of Steel and Alloys. As one observer noted, "the organizers were completely amazed that a church theme could attract a full hall without any advertising." In the late 1980s, he utilized mass media to proselytize and was invited to host a nationally televised program on religion. Men was one of the founders of the Russian Bible Society in 1990; that same year he founded the Open Orthodox University and "The World of the Bible" journal. His efforts to educate the Russian populace about the Orthodox faith led the Soviet newspaper Sotsialisticheskaya Industriya to describe him as a modern-day apostle to the Soviet intelligentsia. However, some representatives of Orthodox Christianity have voiced the opinion that several of Men's views were not sufficiently "orthodox" and even advised against using his books as an introduction to Orthodoxy. Men actively supported charitable activities, taking part in the founding of the Mercy Group at the Russian Children's Clinical Hospital; the group was later named after him. Murder On Sunday morning, 9 September 1990, Men was murdered while walking along the wooded path from his home in the Russian village of Semkhoz (near Moscow) to the local train platform. He was on his way to catch the train to Novaya Derevnya to celebrate the Divine Liturgy. Men had served at the parish in Novaya Derevnya for 20 years. His assailant or assailants used an axe. The murder occurred around the time of the dissolution of the Soviet Union, and despite orders from within the Soviet (and later the Russian) government that the case be further investigated, the murder remains unsolved. His funeral was held on the day in the Orthodox calendar which commemorates the beheading of John the Baptist. Views and thought According to Men, "the history of world religiosity begins not with Christianity, but much earlier. Christianity is the highest point in the development of religious experience." Men framed his attitude towards antiquity and paganism in Christian terms: "even in paganism you will find a presentiment and anticipation of the Good News. 
It is not for nothing that the Apostle Paul made the altar of the 'Unknown God' the starting point of his sermon in Athens. However, this kind of dialogue will often be replaced by a compromise with aspects of ancient beliefs that are alien to the Gospel." Works Alexander Men's greatest work is his History of Religion, published in seven volumes under the title In Search of the Way, the Truth, and the Life (volumes 1–6, Brussels, 1970–1983; 2nd edition Moscow, 1991–1992), in which the author examines the history of non-Christian religions as a path towards Christianity, presented as the struggle between Magism and Monotheism. The seventh volume is his most famous work, Son of Man (Brussels, 1969; 2nd edition Moscow, 1991). Because of the persecution in the Soviet Union at the time, the Brussels editions were published under a pseudonym. An English translation of Son of Man by Mormon author Samuel Brown was completed in 1998, but is now out of print, as are several other works in English translation. In 2014, a new project was commenced by Alastair Macnaughton (1954–2017), an Anglican priest and Russian scholar, to translate the entire History of Religion into English for the first time. Volume 1 was published in 2018. An abridged version, History of Religion in Two Volumes, was also translated into English in 2021; it additionally covers the history of Christianity in the first millennium. Recent works of Alexander Men in English translation include: "An Inner Step Toward God: Writings and Teachings on Prayer" (2014); "Russian Religious Philosophy: 1989–1990 Lectures" (2015), published to mark the 25th anniversary of his death; "The Wellsprings of Religion. The History of Religion: In Search of the Way, the Truth, and the Life Vol 1", trans. Alastair Macnaughton (2018); "History of Religion in Two Volumes" (2021), of which Volume 1 surveys humanity's spiritual search from ancient times to the coming of Christ, and Volume 2 is an overview of the history of the Church in the first millennium. Many other works by Alexander Men have been published in Russian, most notably: Heaven on Earth (1969), published abroad under a pseudonym, later reissued in Russia; "Where Did This All Come From?" (1972), published abroad under a pseudonym, later reissued in Russia; "How to Read the Bible?" (1981), published abroad under a pseudonym, later reissued in Russia; "World Spiritual Culture" (1995); "The History of Religions" (Volumes 1–2, 1997); "The First Apostles" (1998); "Isagogics: Old and New Testaments" (2000); "Bibliological Dictionary" (Volumes 1–3, 2002); "Mystery, Word, Image" (Brussels, 1980; 2nd edition Moscow, 1991), published abroad under a pseudonym. Legacy Since his death, Men's works and ideas have been regarded as controversial by the conservative faction of the Russian Orthodox Church, which points to the strong tendency towards ecumenism that his books advocate. Nevertheless, Men has a considerable number of supporters, some of whom argue for his canonization. His lectures are regularly broadcast over Russian radio. His books are no longer restricted from print in Russia, whereas during his lifetime they had to be printed abroad, mainly in Brussels, Belgium, by the publishing house Foyer Chrétien Oriental, and circulated in secret. Several key Russian Orthodox parishes encourage following his example as one who faithfully followed Christ. Two Russian Orthodox churches have been built on the site of his assassination, and a growing number of believers in both Russia and abroad consider him a martyr. 
In December 1990, the Alexander Men Foundation was founded in Riga. Men was canonized by a church body in 2004. To mark the 25th anniversary of his death, the Moscow Patriarchate's Izdatel'stvo publishing house began a project to publish Fr. Men's "Collected Works" in a series of 15 volumes. Men's son, Mikhail Men, is a Russian political figure who from 2005 to 2013 served as the Governor of Ivanovo Oblast and subsequently as Minister of Construction Industry, Housing and Utilities Sector in Dmitry Medvedev's Cabinet. He is also a musician known outside Russia for the Michael Men Project. Views on Men's work Positive Many Orthodox commentators evaluate the activities and works of Men positively. Arkady Mahler noted in 2010: "The number of people who came to the Russian Orthodox Church of the Moscow Patriarchate thanks to the sermons of Father Alexander Men is always greater than we can imagine. Many of them now admit, in a half-whisper, "in fact, it was Men who brought me to the Church from the very beginning," and look away, as if apologizing for something. Moreover, we are talking not only about the "intelligentsia" - Father Alexander was a real people's preacher, quite ordinary people from all over the Soviet empire sought him out, because it was from his texts, randomly found among acquaintances of their acquaintances, that they first learned about God." One archpriest assessed Men's work positively: "Men was great: he took on the heaviest burden - working with atheistic intellectuals." In February 2021, Metropolitan Hilarion (Alfeyev) allowed for the possibility of Men's canonization: "Father Alexander Men was an outstanding preacher, catechist, and missionary of his time. His death was tragic, and I think that if it is proven that it was martyrdom, he can be canonized as a martyr." Criticism At the same time, many representatives of the Russian Orthodox Church argue that some of Men's statements contradict the fundamentals of Orthodox teaching; his ecumenical views in particular have been criticized, and he has been accused of sympathizing with Catholicism. Orthodox theologian Alexei Osipov and Protodeacon Andrey Kuraev advised against using the books of Archpriest Alexander Men as an introduction to Orthodoxy. An open letter to Men, allegedly written by a metropolitan, reads: "You are not new to the church, Father Alexander <...> This means that, in your interpretation, when you combine the One God of Christians and Ancient Israel with the "god" of modern Judaism, the devil, you are doing this deliberately, deliberately mixing light with darkness." Priest Daniel Sysoev was sharply critical of Men. In 2002, he identified nine points in Men's teaching that he considered heretical: "Manichaeism — the doctrine of the complicity of Satan in the creation of the world, the result of which was the supposed evolution that took place", "the doctrine of man as a transfigured ape", "the rejection of the inspiration of the Holy Scriptures", "the rejection of original sin and the postulation of the independence of death from human sin", "the rejection of the existence of a personal Adam and the introduction of the Kabbalistic doctrine of Adam Kadmon", "the rejection of the authorship of almost all Old Testament books", "acceptance of branch theory", "syncretism", "encouragement of magic and extrasensory perception". 
Metropolitan Hilarion (Alfeyev), speaking on the Church and the World program, aired on the Russia-24 channel on 13 February 2021, stated that Men's works contain some controversial views, but that this is not an obstacle to Men's canonization. Men's views on himself Despite the controversial image that surrounded him, Men seemed to view himself and his work in a simple and humble manner. In a letter to a friend penned shortly before his death, Men wrote, "I work now as I have always worked: with my face into the wind... I'm only an instrument that God is using for the moment. Afterwards, things will be as God wants them." See also Georgy Chistyakov List of unsolved murders Literature References External links Slidefilm by Sergei Bessmertny - English version Alexander Men' Foundation, Moscow, in Russian Fr Alexander Men' Open Orthodox University, Moscow (English version) Saint Pachomius Library Links on Alexander Men In pictures: Fr. Alexander Men: Russian Orthodox priest of fearless faith BBC - Religion & Ethics photo gallery with captions Content on this page has been translated from the corresponding article in OrthoWiki. https://web.archive.org/web/20180719174253/https://radubrava.ru/event/alexander-men-static-exhibition/ 1935 births 1990 deaths 20th-century Eastern Orthodox clergy 20th-century Eastern Orthodox martyrs 20th-century Eastern Orthodox priests 20th-century Eastern Orthodox theologians Converts to Eastern Orthodoxy from Judaism Eastern Orthodox biblical scholars Eastern Orthodox theologians Theistic evolutionists People murdered in the Soviet Union Deaths by edged and bladed weapons Axe murder Russian biblical scholars Russian Eastern Orthodox priests Russian murder victims Russian theologians Soviet dissidents Unsolved murders in the Soviet Union 20th-century Russian Jews Moscow Theological Academy alumni
Alexander Men
[ "Biology" ]
3,327
[ "Non-Darwinian evolution", "Theistic evolutionists", "Biology theories" ]
2,203,555
https://en.wikipedia.org/wiki/Helminthology
Helminthology is the study of parasitic worms (helminths). The field covers the taxonomy of helminths and their effects on their hosts. The first element of the word derives from the Greek ἕλμινς (helmins), meaning "worm". In the 18th and early 19th centuries there was a wave of publications on helminthology; this period has been described as the science's "Golden Era". During that period the authors Félix Dujardin, William Blaxland Benham, Peter Simon Pallas, Marcus Elieser Bloch, Otto Friedrich Müller, Johann Goeze, Friedrich Zenker, Charles Wardell Stiles, Carl Asmund Rudolphi, Otto Friedrich Bernhard von Linstow and Johann Gottfried Bremser started systematic scientific studies of the subject. The Japanese parasitologist Satyu Yamaguti was one of the most active helminthologists of the 20th century; he wrote the six-volume Systema Helminthum. See also Nematology References Subfields of zoology
Helminthology
[ "Biology" ]
219
[ "Subfields of zoology" ]
2,203,756
https://en.wikipedia.org/wiki/Jaccard%20index
The Jaccard index is a statistic used for gauging the similarity and diversity of sample sets. It is defined in general as the ratio of two sizes (areas or volumes): the intersection size divided by the union size, also called intersection over union (IoU). It was developed by Grove Karl Gilbert in 1884 as his ratio of verification (v) and now is often called the critical success index in meteorology. It was later developed independently by Paul Jaccard, originally under the French name coefficient de communauté (community coefficient), and independently formulated again by T. Tanimoto. Thus, it is also called Tanimoto index or Tanimoto coefficient in some fields. Overview The Jaccard index measures similarity between finite sample sets and is defined as the size of the intersection divided by the size of the union of the sample sets: J(A, B) = |A ∩ B| / |A ∪ B| = |A ∩ B| / (|A| + |B| − |A ∩ B|). Note that by design, 0 ≤ J(A, B) ≤ 1; if A ∩ B is empty, then J(A, B) = 0. The Jaccard index is widely used in computer science, ecology, genomics, and other sciences, where binary or binarized data are used. Both the exact solution and approximation methods are available for hypothesis testing with the Jaccard index. Jaccard similarity also applies to bags, i.e., multisets. This has a similar formula, but the symbols used represent bag intersection and bag sum (not union). The maximum value is 1/2. The Jaccard distance, which measures dissimilarity between sample sets, is complementary to the Jaccard index and is obtained by subtracting the Jaccard index from 1 or, equivalently, by dividing the difference of the sizes of the union and the intersection of two sets by the size of the union: dJ(A, B) = 1 − J(A, B) = (|A ∪ B| − |A ∩ B|) / |A ∪ B|. An alternative interpretation of the Jaccard distance is as the ratio of the size of the symmetric difference to the union. Jaccard distance is commonly used to calculate an n × n matrix for clustering and multidimensional scaling of n sample sets. This distance is a metric on the collection of all finite sets. There is also a version of the Jaccard distance for measures, including probability measures. If μ is a measure on a measurable space X, then we define the Jaccard index by Jμ(A, B) = μ(A ∩ B) / μ(A ∪ B), and the Jaccard distance by dμ(A, B) = 1 − Jμ(A, B) = μ(A △ B) / μ(A ∪ B). Care must be taken if μ(A ∪ B) = 0 or μ(A ∪ B) = ∞, since these formulas are not well defined in these cases. The MinHash min-wise independent permutations locality sensitive hashing scheme may be used to efficiently compute an accurate estimate of the Jaccard similarity index of pairs of sets, where each set is represented by a constant-sized signature derived from the minimum values of a hash function. Similarity of asymmetric binary attributes Given two objects, A and B, each with n binary attributes, the Jaccard index is a useful measure of the overlap that A and B share with their attributes. Each attribute of A and B can either be 0 or 1. The total number of each combination of attributes for both A and B are specified as follows: M11 represents the total number of attributes where A and B both have a value of 1. M01 represents the total number of attributes where the attribute of A is 0 and the attribute of B is 1. M10 represents the total number of attributes where the attribute of A is 1 and the attribute of B is 0. M00 represents the total number of attributes where A and B both have a value of 0. Each attribute must fall into one of these four categories, meaning that M11 + M01 + M10 + M00 = n. The Jaccard similarity index, J, is given as J = M11 / (M01 + M10 + M11). The Jaccard distance, dJ, is given as dJ = (M01 + M10) / (M01 + M10 + M11) = 1 − J. Statistical inference can be made based on the Jaccard similarity index, and consequently related metrics. 
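The set form of these definitions can be sketched in a few lines of Python for reference (the function names are illustrative, not from any particular library):

```python
def jaccard_index(a: set, b: set) -> float:
    """Size of the intersection divided by the size of the union."""
    if not a and not b:
        return 1.0  # avoid 0/0; one common convention treats two empty sets as identical
    return len(a & b) / len(a | b)

def jaccard_distance(a: set, b: set) -> float:
    """Complement of the Jaccard index."""
    return 1.0 - jaccard_index(a, b)

# The supermarket baskets used in the SMC comparison below:
basket1 = {"salt", "pepper"}
basket2 = {"salt", "sugar"}
print(jaccard_index(basket1, basket2))     # 1/3: one shared item, three in the union
print(jaccard_distance(basket1, basket2))  # 2/3
```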
Given two sample sets A and B with n attributes, a statistical test can be conducted to see if an overlap is statistically significant. The exact solution is available, although computation can be costly as n increases. Estimation methods are available either by approximating a multinomial distribution or by bootstrapping. Difference with the simple matching index (SMC) When used for binary attributes, the Jaccard index is very similar to the simple matching coefficient. The main difference is that the SMC has the term M00 in its numerator and denominator, whereas the Jaccard index does not. Thus, the SMC counts both mutual presences (when an attribute is present in both sets) and mutual absence (when an attribute is absent in both sets) as matches and compares it to the total number of attributes in the universe, whereas the Jaccard index only counts mutual presence as matches and compares it to the number of attributes that have been chosen by at least one of the two sets. In market basket analysis, for example, the basket of two consumers whom we wish to compare might only contain a small fraction of all the available products in the store, so the SMC will usually return very high values of similarity even when the baskets bear very little resemblance, thus making the Jaccard index a more appropriate measure of similarity in that context. For example, consider a supermarket with 1000 products and two customers. The basket of the first customer contains salt and pepper and the basket of the second contains salt and sugar. In this scenario, the similarity between the two baskets as measured by the Jaccard index would be 1/3, but the similarity becomes 0.998 using the SMC. In other contexts, where 0 and 1 carry equivalent information (symmetry), the SMC is a better measure of similarity. For example, vectors of demographic variables stored in dummy variables, such as gender, would be better compared with the SMC than with the Jaccard index since the impact of gender on similarity should be equal, independently of whether male is defined as a 0 and female as a 1 or the other way around. However, when we have symmetric dummy variables, one could replicate the behaviour of the SMC by splitting the dummies into two binary attributes (in this case, male and female), thus transforming them into asymmetric attributes, allowing the use of the Jaccard index without introducing any bias. The SMC remains, however, more computationally efficient in the case of symmetric dummy variables since it does not require adding extra dimensions. Weighted Jaccard similarity and distance If x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn) are two vectors with all real xi, yi ≥ 0, then their Jaccard similarity index (also known then as Ruzicka similarity) is defined as JW(x, y) = Σi min(xi, yi) / Σi max(xi, yi), and Jaccard distance (also known then as Soergel distance) as dJW(x, y) = 1 − JW(x, y). With even more generality, if f and g are two non-negative measurable functions on a measurable space X with measure μ, then we can define J(f, g) = ∫ min(f, g) dμ / ∫ max(f, g) dμ, where min and max are pointwise operators. Then the Jaccard distance is dJ(f, g) = 1 − J(f, g). Then, for example, for two measurable sets A, B ⊆ X, we have Jμ(A, B) = J(χA, χB), where χA and χB are the characteristic functions of the corresponding sets. Probability Jaccard similarity and distance The weighted Jaccard similarity described above generalizes the Jaccard Index to positive vectors, where a set corresponds to a binary vector given by the indicator function, i.e. xi ∈ {0, 1}. However, it does not generalize the Jaccard Index to probability distributions, where a set corresponds to a uniform probability distribution, i.e. xi = 1/|X| if i ∈ X and 0 otherwise. The weighted Jaccard similarity of two such uniform distributions is always less than the Jaccard index of the underlying sets if the sets differ in size. 
If |X| = |Y| and x and y are the corresponding uniform distributions, then JW(x, y) = J(X, Y). Instead, a generalization that is continuous between probability distributions and their corresponding support sets is JP(x, y) = Σ{i : xi ≠ 0, yi ≠ 0} 1 / Σj max(xj/xi, yj/yi), which is called the "Probability" Jaccard. It has the following bounds against the Weighted Jaccard on probability vectors: JW(x, y) ≤ JP(x, y) ≤ 2 JW(x, y) / (1 + JW(x, y)). Here the upper bound is the (weighted) Sørensen–Dice coefficient. The corresponding distance, 1 − JP(x, y), is a metric over probability distributions, and a pseudo-metric over non-negative vectors. The Probability Jaccard Index has a geometric interpretation as the area of an intersection of simplices. Every point on a unit k-simplex corresponds to a probability distribution on k + 1 elements, because the unit k-simplex is the set of points in k + 1 dimensions that sum to 1. To derive the Probability Jaccard Index geometrically, represent a probability distribution as the unit simplex divided into sub simplices according to the mass of each item. If you overlay two distributions represented in this way on top of each other, and intersect the simplices corresponding to each item, the area that remains is equal to the Probability Jaccard Index of the distributions. Optimality of the Probability Jaccard Index Consider the problem of constructing random variables such that they collide with each other as much as possible. That is, if X ~ x and Y ~ y, we would like to construct X and Y to maximize Pr[X = Y]. If we look at just two distributions x and y in isolation, the highest Pr[X = Y] we can achieve is 1 − TV(x, y), where TV is the Total Variation distance. However, suppose we weren't just concerned with maximizing that particular pair, suppose we would like to maximize the collision probability of any arbitrary pair. One could construct an infinite number of random variables, one for each distribution x, and seek to maximize Pr[X = Y] for all pairs. In a fairly strong sense described below, the Probability Jaccard Index is an optimal way to align these random variables. For any sampling method G and discrete distributions x and y, if Pr[G(x) = G(y)] > JP(x, y), then for some z with JP(x, z) > JP(x, y) and JP(y, z) > JP(x, y), either Pr[G(x) = G(z)] < JP(x, z) or Pr[G(y) = G(z)] < JP(y, z). That is, no sampling method can achieve more collisions than JP on one pair without achieving fewer collisions than JP on another pair, where the reduced pair is more similar under JP than the increased pair. This theorem is true for the Jaccard Index of sets (if interpreted as uniform distributions) and the probability Jaccard, but not for the weighted Jaccard. (The theorem uses the word "sampling method" to describe a joint distribution over all distributions on a space, because it derives from the use of weighted minhashing algorithms that achieve this as their collision probability.) This theorem has a visual proof on three element distributions using the simplex representation. Tanimoto similarity and distance Various forms of functions described as Tanimoto similarity and Tanimoto distance occur in the literature and on the Internet. Most of these are synonyms for Jaccard similarity and Jaccard distance, but some are mathematically different. Many sources cite an IBM Technical Report as the seminal reference. In "A Computer Program for Classifying Plants", published in October 1960, a method of classification based on a similarity ratio, and a derived distance function, is given. It seems that this is the most authoritative source for the meaning of the terms "Tanimoto similarity" and "Tanimoto Distance". The similarity ratio is equivalent to Jaccard similarity, but the distance function is not the same as Jaccard distance. 
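A short sketch contrasting the weighted (Ruzicka) form with the probability form as reconstructed above, assuming only the definitions given (illustrative names, plain Python):

```python
def weighted_jaccard(x, y):
    """Ruzicka similarity: sum of element-wise minima over sum of maxima."""
    num = sum(min(xi, yi) for xi, yi in zip(x, y))
    den = sum(max(xi, yi) for xi, yi in zip(x, y))
    return num / den if den else 1.0

def probability_jaccard(x, y):
    """Probability Jaccard, per the formula given above."""
    total = 0.0
    for xi, yi in zip(x, y):
        if xi > 0 and yi > 0:
            total += 1.0 / sum(max(xj / xi, yj / yi) for xj, yj in zip(x, y))
    return total

# Two equal-sized sets over a 4-item universe, as indicator vectors:
X = [1, 1, 1, 0]   # X = {0, 1, 2}
Y = [0, 1, 1, 1]   # Y = {1, 2, 3}
print(weighted_jaccard(X, Y))        # 0.5 = |X ∩ Y| / |X ∪ Y|

# The same sets as uniform probability distributions; since |X| == |Y|,
# both generalizations agree with the set Jaccard here:
x = [v / sum(X) for v in X]
y = [v / sum(Y) for v in Y]
print(weighted_jaccard(x, y))        # 0.5
print(probability_jaccard(x, y))     # 0.5
```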
Tanimoto's definitions of similarity and distance In that paper, a "similarity ratio" is given over bitmaps, where each bit of a fixed-size array represents the presence or absence of a characteristic in the plant being modelled. The definition of the ratio is the number of common bits, divided by the number of bits set (i.e. nonzero) in either sample. Presented in mathematical terms, if samples X and Y are bitmaps, Xi is the ith bit of X, and ∧, ∨ are bitwise and, or operators respectively, then the similarity ratio is Ts(X, Y) = Σi (Xi ∧ Yi) / Σi (Xi ∨ Yi). If each sample is modelled instead as a set of attributes, this value is equal to the Jaccard index of the two sets. Jaccard is not cited in the paper, and it seems likely that the authors were not aware of it. Tanimoto goes on to define a "distance" based on this ratio, defined for bitmaps with non-zero similarity: Td(X, Y) = −log2(Ts(X, Y)). This coefficient is, deliberately, not a distance metric. It is chosen to allow the possibility of two specimens, which are quite different from each other, to both be similar to a third. It is easy to construct an example which disproves the property of triangle inequality. Other definitions of Tanimoto distance Tanimoto distance is often referred to, erroneously, as a synonym for Jaccard distance, 1 − Ts. This function is a proper distance metric. "Tanimoto Distance" is often stated as being a proper distance metric, probably because of its confusion with Jaccard distance. If Jaccard or Tanimoto similarity is expressed over a bit vector, then it can be written as f(A, B) = (A · B) / (|A|² + |B|² − A · B), where the same calculation is expressed in terms of vector scalar product and magnitude. This representation relies on the fact that, for a bit vector (where the value of each dimension is either 0 or 1), A · B = |A ∩ B| and |A|² = |A|. This is a potentially confusing representation, because the function as expressed over vectors is more general, unless its domain is explicitly restricted. Properties of Ts do not necessarily extend to f. In particular, the difference function 1 − f does not preserve triangle inequality, and is not therefore a proper distance metric, whereas 1 − Ts is. There is a real danger that the combination of "Tanimoto Distance" being defined using this formula, along with the statement "Tanimoto Distance is a proper distance metric", will lead to the false conclusion that the function 1 − f is in fact a distance metric over vectors or multisets in general, whereas its use in similarity search or clustering algorithms may fail to produce correct results. Lipkus uses a definition of Tanimoto similarity which is equivalent to f, and refers to Tanimoto distance as the function 1 − f. It is, however, made clear within the paper that the context is restricted by the use of a (positive) weighting vector W such that, for any vector A being considered, each Ai is either 0 or Wi. Under these circumstances, the function is a proper distance metric, and so a set of vectors governed by such a weighting vector forms a metric space under this function. Jaccard index in binary classification confusion matrices In confusion matrices employed for binary classification, the Jaccard index can be framed in the following formula: J = TP / (TP + FP + FN), where TP are the true positives, FP the false positives and FN the false negatives. See also Overlap coefficient Simple matching coefficient Hamming distance Sørensen–Dice coefficient, which is equivalent: J = S / (2 − S) and S = 2J / (1 + J) (J: Jaccard index, S: Sørensen–Dice coefficient) Tversky index Correlation Mutual information, a normalized metricated variant of which is an entropic Jaccard distance. 
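Both the bit-vector identity and the confusion-matrix form are easy to check numerically; a minimal sketch with made-up vectors:

```python
def jaccard_from_confusion(tp: int, fp: int, fn: int) -> float:
    """J = TP / (TP + FP + FN); true negatives do not appear."""
    return tp / (tp + fp + fn)

def tanimoto_bitvector(a, b):
    """f(A, B) = A.B / (|A|^2 + |B|^2 - A.B) over 0/1 vectors,
    which equals the Jaccard index of the corresponding sets."""
    dot = sum(ai * bi for ai, bi in zip(a, b))
    return dot / (sum(a) + sum(b) - dot)  # for bits, |A|^2 equals sum(a)

A = [1, 1, 0, 1, 0]   # the set {0, 1, 3}
B = [1, 0, 0, 1, 1]   # the set {0, 3, 4}
print(tanimoto_bitvector(A, B))         # 2 / (3 + 3 - 2) = 0.5
print(jaccard_from_confusion(2, 1, 1))  # same overlap counted as TP/FP/FN: 0.5
```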
References Further reading External links Introduction to Data Mining lecture notes from Tan, Steinbach, Kumar Kaggle Dstl Satellite Imagery Feature Detection - Evaluation Index numbers Measure theory Clustering criteria String metrics Similarity measures
Jaccard index
[ "Physics", "Mathematics" ]
2,902
[ "Physical quantities", "Distance", "Mathematical objects", "Similarity measures", "Index numbers", "Numbers" ]
2,203,789
https://en.wikipedia.org/wiki/Amplified%20fragment%20length%20polymorphism
Amplified fragment length polymorphism (AFLP-PCR or AFLP) is a PCR-based tool used in genetics research, DNA fingerprinting, and in the practice of genetic engineering. Developed in the early 1990s by Pieter Vos, AFLP uses restriction enzymes to digest genomic DNA, followed by ligation of adaptors to the sticky ends of the restriction fragments. A subset of the restriction fragments is then selected to be amplified. This selection is achieved by using primers complementary to the adaptor sequence, the restriction site sequence and a few nucleotides inside the restriction site fragments (as described in detail below). The amplified fragments are separated by denaturing gel electrophoresis and visualized either through autoradiography or fluorescence methodologies, or via automated capillary sequencing instruments. Although AFLP should not be used as an acronym, it is commonly referred to as "Amplified fragment length polymorphism". However, the resulting data are not scored as length polymorphisms, but instead as presence-absence polymorphisms. AFLP-PCR is a highly sensitive method for detecting polymorphisms in DNA. The technique was originally described by Vos and Zabeau in 1993. In detail, the procedure of this technique is divided into three steps: Digestion of total cellular DNA with one or more restriction enzymes and ligation of restriction half-site specific adaptors to all restriction fragments. Selective amplification of some of these fragments with two PCR primers that have corresponding adaptor and restriction site specific sequences. Electrophoretic separation of amplicons on a gel matrix, followed by visualisation of the band pattern. Applications The AFLP technology has the capability to detect various polymorphisms in different genomic regions simultaneously. It is also highly sensitive and reproducible. As a result, AFLP has become widely used for the identification of genetic variation in strains or closely related species of plants, fungi, animals, and bacteria. The AFLP technology has been used in criminal and paternity tests, to detect slight differences within populations, and in linkage studies to generate maps for quantitative trait locus (QTL) analysis. There are many advantages to AFLP when compared to other marker technologies including randomly amplified polymorphic DNA (RAPD), restriction fragment length polymorphism (RFLP), and microsatellites. AFLP not only has higher reproducibility, resolution, and sensitivity at the whole genome level compared to other techniques, but it also has the capability to amplify between 50 and 100 fragments at one time. In addition, no prior sequence information is needed for amplification (Meudt & Clarke 2007). As a result, AFLP has become extremely beneficial in the study of taxa including bacteria, fungi, and plants, where much is still unknown about the genomic makeup of various organisms. The AFLP technology is covered by patents and patent applications of Keygene N.V. AFLP is a registered trademark of Keygene N.V. 
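The digestion step lends itself to a toy in-silico illustration. The sketch below simply cuts a DNA string at EcoRI (G^AATTC) and MseI (T^TAA) recognition sites, a commonly used AFLP enzyme pair; adaptor ligation and selective amplification are omitted, and the sequence is invented for the example:

```python
import re

# Recognition site and cut offset for each enzyme (assumed pair):
# EcoRI cuts G^AATTC, MseI cuts T^TAA.
SITES = {"EcoRI": ("GAATTC", 1), "MseI": ("TTAA", 1)}

def digest(seq: str):
    """Return the fragments produced by cutting at every recognition site."""
    cut_positions = {0, len(seq)}
    for site, offset in SITES.values():
        for m in re.finditer(site, seq):
            cut_positions.add(m.start() + offset)
    cuts = sorted(cut_positions)
    return [seq[i:j] for i, j in zip(cuts, cuts[1:])]

genome = "ATGAATTCGGTTAACCGAATTCTTAAGC"  # toy sequence, not real data
fragments = digest(genome)
print(len(fragments), "fragments:", fragments)
# 5 fragments: ['ATG', 'AATTCGGT', 'TAACCG', 'AATTCT', 'TAAGC']
```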
References External links Software for analyzing AFLP data CLIQS 1D Pro Automated electrophoresis (gel-based or capillary) band-matching and databasing of AFLP fragments BioNumerics Gelcompar II (Discontinued) One universal platform to manage and analyze all your biological data including AFLP KeyGene Quantar Suite Versatile marker scoring software SoftGenetics GeneMarker fragment analysis software Freeware for analyzing AFLP data SourceForge Genographer Free software for manual scoring (Java application) SourceForge RawGeno Free automated scoring (R CRAN environment, including a user-friendly GUI) Online programs for simulation of AFLP-PCR ALFIE - In silico AFLP-PCR for prokaryotes or uploaded sequences In silico AFLP-PCR for prokaryotes, some eukaryotes or uploaded sequences Enzymes for AFLP New England Biolabs AFLP Technology note at KeyGene AFLP Applications Molecular biology DNA DNA profiling techniques
Amplified fragment length polymorphism
[ "Chemistry", "Biology" ]
824
[ "Biochemistry", "Genetics techniques", "DNA profiling techniques", "Molecular biology" ]
2,204,154
https://en.wikipedia.org/wiki/Spaghetti%20sort
Spaghetti sort is a linear-time, analog algorithm for sorting a sequence of items, introduced by A. K. Dewdney in his Scientific American column. This algorithm sorts a sequence of items requiring O(n) stack space in a stable manner. It requires a parallel processor, which is assumed to be able to find the maximum of a sequence of items in O(1) time. Algorithm For simplicity, assume we are sorting a list of natural numbers. The sorting method is illustrated using uncooked rods of spaghetti: For each number x in the list, obtain a rod of length x. (One practical way of choosing the unit is to let the largest number m in the list correspond to one full rod of spaghetti. In this case, the full rod equals m spaghetti units. To get a rod of length x, break a rod in two so that one piece is of length x units; discard the other piece.) Once you have all your spaghetti rods, take them loosely in your fist and lower them to the table, so that they all stand upright, resting on the table surface. Now, for each rod, lower your other hand from above until it meets with a rod—this one is clearly the longest. Remove this rod and insert it into the front of the (initially empty) output list (or equivalently, place it in the last unused slot of the output array). Repeat until all rods have been removed. Analysis Preparing the n rods of spaghetti takes linear time. Lowering the rods on the table takes constant time, O(1). This is possible because the hand, the spaghetti rods and the table work as a fully parallel computing device. There are then n rods to remove so, assuming each contact-and-removal operation takes constant time, the worst-case time complexity of the algorithm is O(n). References External links A. K. Dewdney's homepage Implementations of a model of physical sorting, Boole Centre for Research in Informatics Classical/Quantum Computing, IFF-Institute Sorting algorithms Metaphors referring to spaghetti
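A software rendering of the procedure, for illustration only: the physical O(1) find-the-tallest-rod step becomes a max() scan, so this simulation runs in O(n²) rather than the analog O(n):

```python
def spaghetti_sort(numbers):
    """Simulate spaghetti sort: repeatedly remove the longest rod and
    place it at the front of the output, yielding ascending order."""
    rods = list(numbers)           # cut one rod per input value
    output = []
    while rods:
        tallest = max(rods)        # the lowered hand meets the tallest rod first
        rods.remove(tallest)       # remove that rod from the table
        output.insert(0, tallest)  # insert it at the front of the output list
    return output

print(spaghetti_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```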
Spaghetti sort
[ "Mathematics" ]
423
[ "Order theory", "Sorting algorithms" ]
2,204,453
https://en.wikipedia.org/wiki/Street%20elbow
A street elbow (sometimes called a street ell or service ell) is a type of plumbing or piping fitting intended to join a piece of pipe and another fitting at an angle. The difference between a street elbow and a regular elbow is the gender of its two connections. A regular elbow has a hub or female-threaded connection on each end, so it can join two male pipes. A street elbow, by contrast, has a female fitting on one end and a male fitting on the other. The advantage of the street elbow is that it can be connected directly to another fitting without having to use an additional short connecting piece (a pipe nipple). Applications Street elbows are available with bend angles of 90°, 45°, and 22.5°. They can be used in many plumbing applications, including water supply, drainage, sewers, vents, central vacuum systems, compressed air and gas lines, heating and air conditioning, sump pump drains, and other locations where plumbing fittings would be used to join sections of pipe. Plumbing codes regulate the use of street elbows. For example, Canada's national plumbing code prohibits them in natural gas and propane installations: Street elbows and tees are not permitted because these fittings have both male and female threaded ends. This makes alignment of the piping difficult since the direction of the piping does not always correspond with the fully seated position of the fitting. When the connection is backed off to align the piping, leakage may result. See also Coupling (piping) Elbow (piping) Piping and plumbing fittings References Plumbing
Street elbow
[ "Engineering" ]
322
[ "Construction", "Plumbing" ]
2,204,566
https://en.wikipedia.org/wiki/Preemption%20%28computing%29
In computing, preemption is the act of temporarily interrupting an executing task, with the intention of resuming it at a later time. This interrupt is done by an external scheduler with no assistance or cooperation from the task. This preemptive scheduler usually runs in the most privileged protection ring, meaning that interruption and then resumption are considered highly secure actions. Such changes to the currently executing task of a processor are known as context switching. User mode and kernel mode In any given system design, some operations performed by the system may not be preemptable. This usually applies to kernel functions and service interrupts which, if not permitted to run to completion, would tend to produce race conditions resulting in deadlock. Barring the scheduler from preempting tasks while they are processing kernel functions simplifies the kernel design at the expense of system responsiveness. The distinction between user mode and kernel mode, which determines privilege level within the system, may also be used to distinguish whether a task is currently preemptable. Most modern operating systems have preemptive kernels, which are designed to permit tasks to be preempted even when in kernel mode. Examples of such operating systems are Solaris 2.0/SunOS 5.0, Windows NT, Linux kernel (2.5.4 and newer), AIX and some BSD systems (NetBSD, since version 5). Preemptive multitasking The term preemptive multitasking is used to distinguish a multitasking operating system, which permits preemption of tasks, from a cooperative multitasking system wherein processes or tasks must be explicitly programmed to yield when they do not need system resources. In simple terms: Preemptive multitasking involves the use of an interrupt mechanism which suspends the currently executing process and invokes a scheduler to determine which process should execute next. Therefore, all processes will get some amount of CPU time at any given time. In preemptive multitasking, the operating system kernel can also initiate a context switch to satisfy the scheduling policy's priority constraint, thus preempting the active task. In general, preemption means "prior seizure of". When the high-priority task at that instance seizes the currently running task, it is known as preemptive scheduling. The term "preemptive multitasking" is sometimes mistakenly used when the intended meaning is more specific, referring instead to the class of scheduling policies known as time-shared scheduling, or time-sharing. Preemptive multitasking allows the computer system to more reliably guarantee each process a regular "slice" of operating time. It also allows the system to rapidly deal with important external events like incoming data, which might require the immediate attention of one or another process. At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In early systems, processes would often "poll" or "busy-wait" while waiting for requested input (such as disk, keyboard or network input). During this time, the process was not performing useful work, but still maintained complete control of the CPU. With the advent of interrupts and preemptive multitasking, these I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. 
As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution. Although multitasking techniques were originally developed to allow multiple users to share a single machine, it became apparent that multitasking was useful regardless of the number of users. Many operating systems, from mainframes down to single-user personal computers and no-user control systems (like those in robotic spacecraft), have recognized the usefulness of multitasking support for a variety of reasons. Multitasking makes it possible for a single user to run multiple applications at the same time, or to run "background" processes while retaining control of the computer. Time slice The period of time for which a process is allowed to run in a preemptive multitasking system is generally called the time slice or quantum. The scheduler is run once every time slice to choose the next process to run. The length of each time slice can be critical to balancing system performance vs process responsiveness - if the time slice is too short then the scheduler will consume too much processing time, but if the time slice is too long, processes will take longer to respond to input. An interrupt is scheduled to allow the operating system kernel to switch between processes when their time slices expire, effectively allowing the processor's time to be shared among a number of tasks, giving the illusion that it is dealing with these tasks in parallel (simultaneously). The operating system which controls such a design is called a multi-tasking system. System support Today, nearly all operating systems support preemptive multitasking, including the current versions of Windows, macOS, Linux (including Android), iOS and iPadOS. An early microcomputer operating system providing preemptive multitasking was Microware's OS-9, available for computers based on the Motorola 6809, including home computers such as the TRS-80 Color Computer 2 when configured with disk drives, with the operating system supplied by Tandy as an upgrade. Sinclair QDOS and AmigaOS on the Amiga were also microcomputer operating systems offering preemptive multitasking as a core feature. These both ran on Motorola 68000-family microprocessors without memory management. Amiga OS used dynamic loading of relocatable code blocks ("hunks" in Amiga jargon) to multitask preemptively all processes in the same flat address space. Early operating systems for IBM PC compatibles such as MS-DOS and PC DOS, did not support multitasking at all, however alternative operating systems such as MP/M-86 (1981) and Concurrent CP/M-86 did support preemptive multitasking. Other Unix-like systems including MINIX and Coherent provided preemptive multitasking on 1980s-era personal computers. Later MS-DOS compatible systems natively supporting preemptive multitasking/multithreading include Concurrent DOS, Multiuser DOS, Novell DOS (later called Caldera OpenDOS and DR-DOS 7.02 and higher). Since Concurrent DOS 386, they could also run multiple DOS programs concurrently in virtual DOS machines. The earliest version of Windows to support a limited form of preemptive multitasking was Windows/386 2.0, which used the Intel 80386's Virtual 8086 mode to run DOS applications in virtual 8086 machines, commonly known as "DOS boxes", which could be preempted. 
In Windows 95, 98 and Me, 32-bit applications were made preemptive by running each one in a separate address space, but 16-bit applications remained cooperative for backward compatibility. In Windows 3.1x (protected mode), the kernel and virtual device drivers ran preemptively, but all 16-bit applications were non-preemptive and shared the same address space. Preemptive multitasking has always been supported by Windows NT (all versions), OS/2 (native applications), Unix and Unix-like systems (such as Linux, BSD and macOS), VMS, OS/360, and many other operating systems designed for use in the academic and medium-to-large business markets. Early versions of the classic Mac OS did not support multitasking at all, with cooperative multitasking becoming available via MultiFinder in System Software 5 and then standard in System 7. Although there were plans to upgrade the cooperative multitasking found in the classic Mac OS to a preemptive model (and a preemptive API did exist in Mac OS 9, although in a limited sense), these were abandoned in favor of Mac OS X (now called macOS) that, as a hybrid of the old Mac System style and NeXTSTEP, is an operating system based on the Mach kernel and derived in part from BSD, which had always provided Unix-like preemptive multitasking. See also Computer multitasking Cooperative multitasking References Operating system technology Concurrent computing
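The time-slice mechanism described above can be sketched as a toy round-robin scheduler; the task names, burst lengths, and quantum are invented for the example, and a real kernel preempts via timer interrupts rather than a loop:

```python
from collections import deque

def round_robin(tasks, quantum):
    """tasks: dict mapping task name -> remaining work units.
    Each task is preempted after `quantum` units and requeued."""
    ready = deque(tasks.items())
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        ran = min(quantum, remaining)              # run until the quantum expires
        timeline.append((name, ran))
        if remaining - ran > 0:
            ready.append((name, remaining - ran))  # preempted: back of the queue
    return timeline

print(round_robin({"editor": 3, "compiler": 7, "player": 4}, quantum=2))
# [('editor', 2), ('compiler', 2), ('player', 2), ('editor', 1),
#  ('compiler', 2), ('player', 2), ('compiler', 2), ('compiler', 1)]
```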
Preemption (computing)
[ "Technology" ]
1,755
[ "Computing platforms", "Concurrent computing", "IT infrastructure" ]
2,204,768
https://en.wikipedia.org/wiki/Renninger%20negative-result%20experiment
In quantum mechanics, the Renninger negative-result experiment is a thought experiment that illustrates some of the difficulties of understanding the nature of wave function collapse and measurement in quantum mechanics. The statement is that a particle need not be detected in order for a quantum measurement to occur, and that the lack of a particle detection can also constitute a measurement. The thought experiment was first posed in 1953 by Mauritius Renninger. The non-detection of a particle in one arm of an interferometer implies that the particle must be in the other arm. It can be understood to be a refinement of the paradox presented in the Mott problem. The Mott problem The Mott problem concerns the paradox of reconciling the spherical wave function describing the emission of an alpha ray by a radioactive nucleus, with the linear tracks seen in a cloud chamber. Formulated in 1927 by Albert Einstein and Max Born, it was resolved by a calculation done by Sir Nevill Francis Mott that showed that the correct quantum mechanical system must include the wave functions for the atoms in the cloud chamber as well as that for the alpha ray. The calculation showed that the resulting probability is non-zero only on straight lines raying out from the decayed atom; that is, once the measurement is performed, the wave-function becomes non-vanishing only near the classical trajectory of a particle. Renninger's negative-result experiment In Renninger's 1960 formulation, the cloud chamber is replaced by a pair of hemispherical particle detectors, completely surrounding a radioactive atom at the center that is about to decay by emitting an alpha ray. For the purposes of the thought experiment, the detectors are assumed to be 100% efficient, so that the emitted alpha ray is always detected. By consideration of the normal process of quantum measurement, it is clear that if one detector registers the decay, then the other will not: a single particle cannot be detected by both detectors. The core observation is that the non-observation of a particle on one of the shells is just as good a measurement as detecting it on the other. The strength of the paradox can be heightened by considering the two hemispheres to be of different diameters; with the outer shell a good distance farther away. In this case, after the non-observation of the alpha ray on the inner shell, one is led to conclude that the (originally spherical) wave function has "collapsed" to a hemisphere shape, and (because the outer shell is distant) is still in the process of propagating to the outer shell, where it is guaranteed to eventually be detected. In the standard quantum-mechanical formulation, the statement is that the wave-function has partially collapsed, and has taken on a hemispherical shape. The full collapse of the wave function, down to a single point, does not occur until it interacts with the outer hemisphere. The conundrum of this thought experiment lies in the idea that the wave function interacted with the inner shell, causing a partial collapse of the wave function, without actually triggering any of the detectors on the inner shell. This illustrates that wave function collapse can occur even in the absence of particle detection. Common objections There are a number of common objections to the standard interpretation of the experiment. Some of these objections, and standard rebuttals, are listed below. 
Finite radioactive lifetime It is sometimes noted that the time of the decay of the nucleus cannot be controlled, and that the finite half-life invalidates the result. This objection can be dispelled by sizing the hemispheres appropriately with regards to the half-life of the nucleus. The radii are chosen so that the more distant hemisphere is much farther away than the distance the alpha ray travels in one half-life of the decaying nucleus. To lend concreteness to the example, assume that the half-life of the decaying nucleus is 0.01 microsecond (most elementary particle decay half-lives are much shorter; most nuclear decay half-lives are much longer; some atomic electromagnetic excitations have a half-life about this long). If one were to wait 0.4 microseconds, then the probability that the particle will have decayed will be 1 − 2^−40; that is, the probability will be very, very close to one. The outer hemisphere is then placed at (speed of light) times (0.4 microseconds) away: that is, at about 120 meters away. The inner hemisphere is taken to be much closer, say at 1 meter. If, after (for example) 0.3 microseconds, one has not seen the decay product on the inner, closer, hemisphere, one can conclude that the particle has decayed with almost absolute certainty, but is still in-flight to the outer hemisphere. The paradox then concerns the correct description of the wave function in such a scenario. Classical trajectories Another common objection states that the decay particle was always travelling in a straight line, and that only the probability of the distribution is spherical. This, however, is a mis-interpretation of the Mott problem, and is false. The wave function was truly spherical, and is not the incoherent superposition (mixed state) of a large number of plane waves. The distinction between mixed and pure states is illustrated more clearly in a different context, in the debate comparing the ideas behind local-hidden variables and their refutation by means of the Bell inequalities. Diffraction A true quantum-mechanical wave would diffract from the inner hemisphere, leaving a diffraction pattern to be observed on the outer hemisphere. This is not really an objection, but rather an affirmation that a partial collapse of the wave function has occurred. If a diffraction pattern were not observed, one would be forced to conclude that the particle had collapsed down to a ray, and stayed that way, as it passed the inner hemisphere; this is clearly at odds with standard quantum mechanics. Diffraction from the inner hemisphere is expected. Complex decay products In this objection, it is noted that in real life, a decay product is either spin-1/2 (a fermion) or a photon (spin-1). This is taken to mean that the decay is not truly sphere symmetric, but rather has some other distribution, such as a p-wave. However, on closer examination, one sees this has no bearing on the spherical symmetry of the wave-function. Even if the initial state were polarized (for example, by placing it in a magnetic field), the non-spherical decay pattern is still properly described by quantum mechanics. Non-relativistic language The above formulation is inherently phrased in a non-relativistic language; and it is noted that elementary particles have relativistic decay products. This objection only serves to confuse the issue. The experiment can be reformulated so that the decay product is slow-moving. At any rate, special relativity is not in conflict with quantum mechanics. 
Imperfect detectors This objection states that in real life, particle detectors are imperfect, and sometimes neither the detectors on the one hemisphere, nor the other, will go off. This argument only serves to confuse the issue, and has no bearing on the fundamental nature of the wave-function. See also Interaction-free measurement Elitzur–Vaidman bomb-tester Counterfactual definiteness References English translation at https://arxiv.org/abs/physics/0504043v1 Louis de Broglie, The Current Interpretation of Wave Mechanics, (1964) Elsevier, Amsterdam. (Provides discussion of the Renninger experiment.) (Section 4.1 reviews Renninger's experiment). Quantum measurement Thought experiments in quantum mechanics
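The half-life arithmetic in the finite-lifetime rebuttal is easy to verify; a quick sketch using the numbers from the text:

```python
half_life_us = 0.01       # assumed half-life of the nucleus, in microseconds
wait_us = 0.4             # waiting time before placing the outer hemisphere
c_m_per_us = 299.792458   # speed of light, in metres per microsecond

half_lives = wait_us / half_life_us   # 40 half-lives elapse
p_decay = 1 - 2 ** -half_lives        # probability the decay has occurred
outer_radius_m = c_m_per_us * wait_us # light-travel distance in 0.4 microseconds

print(half_lives)      # 40.0
print(p_decay)         # 0.9999999999990905... -- very, very close to one
print(outer_radius_m)  # ~119.9, the "about 120 meters" in the text
```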
Renninger negative-result experiment
[ "Physics" ]
1,589
[ "Quantum measurement", "Quantum mechanics", "Thought experiments in quantum mechanics" ]
2,204,788
https://en.wikipedia.org/wiki/Intervalence%20charge%20transfer
In chemistry, intervalence charge transfer, often abbreviated IVCT or even IT, is a type of charge-transfer band that is associated with mixed valence compounds. It is most common for systems with two metal sites differing only in oxidation state. Quite often such electron transfer reverses the oxidation states of the sites. The term is frequently extended to the case of metal-to-metal charge transfer between non-equivalent metal centres. The transition produces a characteristically intense absorption in the electromagnetic spectrum. The band is usually found in the visible or near infrared region of the spectrum and is broad. The process can be described as follows: LnM+-bridge-M'Ln + hν → LnM-bridge-M'+Ln Mixed valency and the IT band Since the energy states of valence tautomers affect the IVCT band, the strength of electronic interaction between the sites, known as α (the mixing coefficient), can be determined by analysis of the IVCT band. Depending on the value of α, mixed valence complexes are classified into three groups: class I: α ≈ 0, the complex has no interaction between redox sites. No IVCT band is observed. The oxidation states of the two metal sites are distinct and do not readily interconvert. class II: 0 < α ≤ 0.707, intermediate interaction between sites. An IVCT band is observed. The oxidation states of the two metal sites are distinct, but they readily interconvert. This is by far the most common class of intervalence complexes. class III: α ≥ 0.707, interaction between redox sites is very strong. It is better to consider these sites as one united site, not as two isolated sites. An IVCT band is observed. The oxidation states of the two metal sites are essentially equivalent. In these situations, the two metals are often best described as having the same half integer oxidation state. References Analytical chemistry Coordination chemistry Spectroscopy
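The classification by α above is a simple threshold rule; a minimal sketch (the boundary 0.707 ≈ √2/2 is taken from the text, and the function name is invented for the example):

```python
import math

BOUNDARY = math.sqrt(2) / 2  # ~0.707, the class II / class III boundary used above

def mixed_valence_class(alpha: float) -> str:
    """Classify a mixed-valence complex by its mixing coefficient alpha."""
    if math.isclose(alpha, 0.0, abs_tol=1e-3):
        return "class I: no interaction, no IVCT band"
    if alpha <= BOUNDARY:
        return "class II: intermediate interaction, IVCT band observed"
    return "class III: strong interaction, sites best treated as one"

print(mixed_valence_class(0.0))  # class I
print(mixed_valence_class(0.3))  # class II
print(mixed_valence_class(0.9))  # class III
```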
Intervalence charge transfer
[ "Physics", "Chemistry" ]
404
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Coordination chemistry", "nan", "Spectroscopy" ]
2,204,862
https://en.wikipedia.org/wiki/Viscosity%20index
The viscosity index (VI) is an arbitrary, unit-less measure of a fluid's change in viscosity relative to temperature change. It is mostly used to characterize the viscosity-temperature behavior of lubricating oils. The lower the VI, the more the viscosity is affected by changes in temperature. The higher the VI, the more stable the viscosity remains over some temperature range. The VI was originally measured on a scale from 0 to 100; however, advancements in lubrication science have led to the development of oils with much higher VIs. The viscosity of a lubricant is closely related to its ability to reduce friction in solid body contacts. Generally, the least viscous lubricant which still forces the two moving surfaces apart to achieve "fluid bearing" conditions is desired. If the lubricant is too viscous, it will require a large amount of energy to move (as in honey); if it is too thin, the surfaces will come in contact and friction will increase. Relevance Many lubricant applications require the lubricant to perform across a wide range of conditions, for example, automotive lubricants are required to reduce friction between engine components when the engine is started from cold (relative to the engine's operating temperatures) up to when it is running. The best oils with the highest VI will remain stable and not vary much in viscosity over the temperature range. This provides consistent engine performance within the normal working conditions. Historically, there were two different oil types recommended for usage in different weather conditions. As an example, with winter oils and cold starting the engines, and with temperature ranges from, say, −30 °C to 0 °C, a 5 weight oil would be pumpable at the very low temperatures and the generally cooler engine operating temperatures. However, in hot climates, where temperatures range from 30 °C to 45 °C, a 50 weight oil would be necessary, so it would remain thick enough to hold up an oil film between the moving hot parts. Thus the issue of multigrade oils came into being, where with variable temperatures of, say, −10 °C during the cold nights and 20 °C during the days, a 5 weight oil would be good as the oil would be pumpable in a cold engine and as the engine came up to running temperature, and the day warmed up, the characteristics of a 30 weight oil would be ideal. Thus the 5W-30 oils were introduced, rather than the fixed and temperature limiting grades where the thin oils became too thin when hot and the thicker oils became too thick when cold. The effects of temperature on a single-viscosity oil can be demonstrated by pouring a small amount of vegetable oil into a pot or pan and then either cooling it in a freezer or heating it on a cooking stove. When oils get cold enough in a deep freezer, they will solidify into a block of "wax"-like oil that cannot be pumped around inside an engine's lubrication system. However, when a spoonful of very cold oil is put into a pan on a stove and it is slowly heated and swirled around, the oil will gradually warm up, and there is a definite temperature range where the oil is warm and traditionally "oily". However, as the oil is heated further, the oil becomes thinner and thinner, until it is nearly smoking and is almost as thin as water and thus it has almost no capacity to keep moving parts separated, resulting in metal-to-metal contact and damage of the components that are supposed to be kept apart with a thin film of oil. 
Thus multigrade oils are recommended for use based on the ambient temperature range of the season or environment. Additionally, there are the issues of oil temperature maintenance: oil or engine heaters that enable easy starting and a shorter warm-up period in very cold climates, and oil coolers that dump enough heat from the oil, and thus from the engine, gearbox, or hydraulic oil circuit, to keep the oil below a specified upper working limit. Classification The VI scale was set up by the Society of Automotive Engineers (SAE). The temperatures chosen arbitrarily for reference are 40 °C and 100 °C. The scale was originally interpolated between 0 for a naphthenic Texas Gulf crude and 100 for a paraffinic Pennsylvania crude. Since the inception of the scale, better oils have been produced, leading to VIs greater than 100 (see below). VI-improving additives and higher-quality base oils are widely used nowadays, which increase the attainable VIs beyond 100. The viscosity index of synthetic oils ranges from 80 to over 400. Viscosity index classification: under 35, low; 35 to 80, medium; 80 to 110, high; above 110, very high. Calculation For oils with a VI of up to 100, the viscosity index can be calculated using the following formula: VI = 100 × (L − U) / (L − H), where U is the oil's kinematic viscosity at 40 °C, Y is the oil's kinematic viscosity at 100 °C, and L and H are the viscosities at 40 °C of two hypothetical oils of VI 0 and 100 respectively, having the same viscosity at 100 °C as the oil whose VI we are trying to determine. That is, the two oils with viscosity Y at 100 °C and a VI of 0 and 100 would have at 40 °C the viscosities L and H respectively. These L and H values can be found in tables in ASTM D2270 and are incorporated in online calculators. References https://xenum.com/en/engine-oil-viscosity-index/ https://www.lubricants.total.com/what-motor-oil-vi Lubricants Oil additives Tribology
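The relationship above is easy to mechanise. The following Python sketch implements the VI formula and the classification table; it assumes L and H have already been looked up in the ASTM D2270 tables, and the numbers in the example run are made up for illustration, not taken from the standard.

```python
def viscosity_index(u: float, l: float, h: float) -> float:
    """Viscosity index for oils with VI <= 100.

    u -- kinematic viscosity of the test oil at 40 degC (cSt)
    l -- 40 degC viscosity of the 0-VI reference oil (from ASTM D2270 tables)
    h -- 40 degC viscosity of the 100-VI reference oil (from ASTM D2270 tables)
    All three oils share the same kinematic viscosity at 100 degC.
    """
    return 100.0 * (l - u) / (l - h)

def classify_vi(vi: float) -> str:
    """Bucket a VI value using the classification table above."""
    if vi < 35:
        return "Low"
    if vi <= 80:
        return "Medium"
    if vi <= 110:
        return "High"
    return "Very high"

# Hypothetical illustration only: u, l, h are invented numbers, not table values.
vi = viscosity_index(u=73.0, l=180.0, h=63.0)
print(round(vi), classify_vi(vi))  # -> 91 High
```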
Viscosity index
[ "Chemistry", "Materials_science", "Engineering" ]
1,221
[ "Tribology", "Mechanical engineering", "Materials science", "Surface science" ]
2,204,906
https://en.wikipedia.org/wiki/Tractor%20vaporising%20oil
Tractor vaporising oil (TVO) is a fuel for petrol-paraffin engines. It is seldom made or used today. In the United Kingdom and Australia, after the Second World War, it was commonly used for tractors until diesel engines became commonplace, especially from the 1960s onward. In Australian English it was known as power kerosene. History TVO existed for at least fifteen years before it became widely used. A 1920 publication mentions it as a product of British Petroleum, but it was not until the late 1930s that it first became widely used. The post-war Ferguson TE20 tractor, a carefully researched and near-ideal tractor for use on British farms, was designed around a petrol (gasoline) engine, the Standard inline-four. Although there was a campaign for the reintroduction of road-duty-free agricultural petrol, which had been curtailed during the war, this was not forthcoming. Perkins Engines supplied some conversions to diesel engines, which could use untaxed red diesel. On the early Fordson model N, the tap which changed over from petrol to TVO was marked G for gasoline and K for kerosene, reflecting that these tractors had their design origin in the USA. In the UK tractor vaporising oil was usually called TVO. Octane rating TVO was developed as a substitute for petrol. Paraffin (kerosene) was commonly used as a domestic heating fuel and was untaxed. Paraffin has a low octane rating and would damage an engine built for petrol. The manufacture of paraffin involves the removal of aromatic hydrocarbons from what is now sold as heating oil. These aromatics have a relatively high octane rating, so adding some of this otherwise-waste material back into paraffin in a controlled manner gave TVO. The resulting octane rating of TVO was somewhere between 55 and 70. The words paraffin and kerosene are often used interchangeably, but their differing octane ratings suggest the two are not strictly identical; kerosene and heating oil, by contrast, have similar octane ratings. Paraffin, kerosene and petrol (gasoline) are all rather loosely defined; for example, gasoline may have an octane rating anywhere between 88 and 102. Engine modifications Compression ratio Because TVO has a lower octane rating than petrol, the engine needs a lower compression ratio. On the TVO version of the Ferguson TE20 tractor, the cylinder head was redesigned to reduce the compression ratio to 4.5:1. This reduced the power output, so the cylinder bore was increased to 85 mm to restore the power. The petrol version had a compression ratio of 5.77:1 and a cylinder bore of 80 mm on early versions. Vaporiser In practice TVO had most of the properties of paraffin, including the need for heating to encourage vaporisation. As a result, the exhaust and inlet manifolds were adapted so that more heat from the former warmed the latter. Such a setup was called a vaporiser. To get the tractor to start from cold, a small second fuel tank was added that contained petrol. The tractor was started on the expensive petrol, then – once the engine was warm – the fuel supply was switched over to TVO or paraffin. So long as the engine was working hard, as when ploughing or pulling a load, the TVO would burn well. Under light loads, such as travelling unloaded on the highway, the engine ran better on petrol. Radiator blind Some tractor designs included a radiator "blind" that restricted the flow of air over the radiator, making the engine run hotter, which could help with starting. 
If the radiator blind was left shut, though, there was a risk of engine damage, especially in warm weather. Terminology The phrase petrol-paraffin engine is often used to describe an engine that uses TVO. This can be interpreted either as the use of two fuels (starting on petrol, then switching to the paraffin-based TVO) or as the use of a mixture of petrol and paraffin as a substitute for proper TVO. Supply TVO was withdrawn from sale by UK suppliers in 1974. An approximation to the correct specification can be made from petrol and heating oil (burning oil). In the UK there is an exception that permits the use of rebated kerosene and fuel oils in vintage vehicles. North American distillate fuel In North America a similar product, called distillate, was produced. Of lower quality than TVO, its octane rating varied between 33 and 45. Manufacture of tractors using distillate ended by 1956, when gasoline- and diesel-engined tractors had captured the North American farming equipment market. Notes and references Petroleum products Tractors Fuels Internal combustion piston engines
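The blending described under "Octane rating" can be illustrated numerically. The Python sketch below uses a simple linear-by-volume octane estimate; real octane blending is non-linear, and the component ratings are assumptions chosen to sit near the ranges quoted above (petrol 88–102, TVO 55–70), so the output is only indicative.

```python
def blend_octane(components):
    """Rough linear-by-volume octane estimate for a fuel blend.

    components -- list of (volume_parts, octane_rating) pairs.
    Real octane blending is non-linear, so this is only a first
    approximation of the kind of petrol/paraffin substitute mix
    described in the Terminology section.
    """
    total = sum(parts for parts, _ in components)
    return sum(parts * octane for parts, octane in components) / total

# Illustrative numbers only: 1 part petrol (assumed ~93 octane) to
# 2 parts paraffin (assumed ~45), a home-brew stand-in for TVO's
# quoted 55-70 range.
print(round(blend_octane([(1, 93), (2, 45)])))  # -> 61
```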
Tractor vaporising oil
[ "Chemistry", "Engineering" ]
978
[ "Tractors", "Petroleum products", "Chemical energy sources", "Petroleum", "Fuels", "Engineering vehicles" ]
2,204,980
https://en.wikipedia.org/wiki/Virus%20Bulletin
Virus Bulletin is a magazine about the prevention, detection and removal of malware and spam. It regularly features analyses of the latest virus threats, articles exploring new developments in the fight against viruses, interviews with anti-virus experts, and evaluations of current anti-malware products. History and profile Virus Bulletin was founded in 1989 as a monthly hardcopy magazine, and later distributed electronically in PDF format. The monthly publication format was discontinued in July 2014 and articles are now made available as standalone pieces on the website. The magazine was originally located in the Sophos headquarters in Abingdon, Oxfordshire in the UK. It was co-founded and is owned by Jan Hruska and Peter Lammer, the co-founders of Sophos. Virus Bulletin claims to have full editorial independence and not favour Sophos products in its tests and reviews. Technical experts from anti-virus vendors have written articles for the magazine, which also conducts comparison tests of the detection rates of anti-virus software. Products which manage to detect 100% of the viruses in the wild, without false alarms, are given the VB100 award. The magazine holds an annual conference (in late September or early October) for computer security professionals. In recent years both magazine and conference have branched out to discuss anti-spam and other security issues as well as malware. Notable previous speakers include Mikko Hyppönen, Eugene Kaspersky and Graham Cluley, as well as representatives from all major anti-virus vendors. Virus Bulletin was a founder member of the Anti-Malware Testing Standards Organization and remains a member today. References External links Computer magazines published in the United Kingdom Magazines established in 1989 Mass media in Oxfordshire Works about computer hacking
Virus Bulletin
[ "Technology" ]
350
[ "Computing stubs", "Computer magazine stubs" ]
2,205,017
https://en.wikipedia.org/wiki/Allium%20montanum
The scientific name Allium montanum has been used for at least six different species of Allium. Allium montanum Schrank – the only legitimate name, first used in 1785, now considered a synonym of Allium schoenoprasum subsp. schoenoprasum (chives) The following names are synonyms for other species: Allium montanum F.W.Schmidt, nom. illeg. = Allium lusitanicum Allium montanum Guss., nom. illeg. = Allium tenuiflorum Allium montanum Rchb., nom. illeg. = Allium flavum subsp. flavum Allium montanum Sm., nom. illeg. = Allium sibthorpianum Allium montanum Ten., nom. illeg. = Allium cupani subsp. cupani References Set index articles on plants
Allium montanum
[ "Biology" ]
194
[ "Set index articles on plants", "Set index articles on organisms", "Plants" ]
2,205,030
https://en.wikipedia.org/wiki/Vinylon
Vinylon, also known as Vinalon (more common in Korean sources), is a synthetic fiber produced from the reaction between polyvinyl alcohol (PVA) fiber and formaldehyde. Chemically it is polyvinyl formal (PVF). Vinylon was first developed in Japan in 1939 by Ri Sung-gi, Ichiro Sakurada, and H. Kawakami. In North Korea, Ri Sung-gi found a route to produce PVA using domestic anthracite (black coal) and limestone as raw materials. Trial production began in 1954, and in 1961 the massive "Vinylon City" was built in Hamhung, North Korea. Vinylon's widespread usage in North Korea is often pointed to as an example of the implementation of the Juche philosophy, and it is known as the Juche fiber. PVF is also a useful thermoplastic resin in its own right, most commonly used as electric wire insulation. Applications Vinylon is the national fiber of North Korea and is used for the majority of textiles, outstripping fibers such as cotton or nylon, which are produced only in small amounts in North Korea. Other than clothing, vinylon is also used for shoes, ropes, and quilt wadding. Japanese-Canadian textile artist Toshiko MacAdam used vinylon in her early works, as it was more economical than nylon. Swedish outdoor brand Fjällräven makes their popular Kånken backpack line out of a version of vinylon, branded Vinylon F. Properties Vinylon is resistant to heat and chemicals but has several disadvantages: it is stiff, relatively expensive to manufacture, and difficult to dye. Production The production process devised by Ri is as follows: Limestone and coal are mixed to produce calcium carbide. The carbide is used to produce acetylene gas. Reaction with acetic acid produces vinyl acetate. Polymerization produces polyvinyl acetate (PVAc). Hydrolysis of PVAc produces polyvinyl alcohol (PVOH). PVOH is stretched into a fiber and spun. The PVOH yarn is finally reacted with formaldehyde to make vinylon, a chain of acetals and hemiacetals. Other locations may use alternative feedstocks to synthesize PVOH. History 1939: colonial introduction Between 1910 and 1945, Korea was ruled as a Japanese colony, which forced the integration of Korea into the Japanese empire's economic and political spheres. Thus, after the Second Sino-Japanese War began in 1937, Korea was integrated into the Japanese war effort. It was amid Japan's wartime push for scientific and technological advancement that a team of researchers worked to fabricate vinylon. The first successful creation of vinylon was in 1939, by a Kyoto University research team in Japan, using petroleum as the feedstock. However, vinylon was later brought to North Korea by Ri Sung-gi, one of the researchers on the Kyoto University team, amid a North Korean campaign to recruit scientists and engineers from South Korea in the period following Korea's liberation from Japan in 1945. He was working as a professor at Seoul National University at the time. During the Korean War, when Seoul was occupied by the Korean People's Army, Ri was offered a research position in North Korea. He accepted and defected to the North, where he found a way to produce vinylon from coal. 1961: Vinylon City, Hungnam After the liberation of Korea in 1945, North Korea was under Soviet occupation and was thus provided with aid by the Soviets as a means to stabilize the country. Beginning at the end of the Korean War in 1953, the Soviet Union, China, and other communist countries began actively providing foreign aid to North Korea. 
Therefore, the North Korean economy depended heavily on aid from other socialist countries. However, in the 1960s, aid from the Soviet Union decreased: North Korea was no longer receiving aid in the form of grants, but loans. Hence, the North Korean leadership decided to accelerate efforts towards developing a self-sufficient economy, resulting in the full mobilization of domestic resources. Beginning in 1961, North Korea launched its First Seven-Year Economic Development Plan, which focused on technological innovation, cultural revolution, improvement of living standards, modernization of the economy, and the facilitation of trade and international economic cooperation. As a result, the North Korean government decided to develop the vinylon industry and build the February 8 Vinylon Complex, nicknamed Vinylon City. In the early stages of North Korea's history, the government under Kim Il Sung and the official "Juche" (self-reliance) ideology promoted the idea that the only way to reach the goal of economic independence was through heavy machine industry. The manufacturing of vinylon was therefore taken as a step towards developing North Korea into a modern industrial state. With such an appeal to nationalism, the North Korean government mobilized its citizens to construct and support a new vinylon factory, called Vinylon City. In 1961, Vinylon City, the factory compound for producing vinylon, was built in the northeastern industrial city of Hungnam. The construction of the factory took fourteen months, which was quite fast considering that fifty buildings made up Vinylon City. Vinylon City comprised 15,000 production machines and 1,700 container tanks. The tallest building in Vinylon City, topped by a smokestack, was the acetic acid shop; the spinning shop, responsible for creating the vinylon fiber and shipping it, was the largest building. Vinylon City became the pride of North Korea, being touted as having been built without foreign assistance. Its success demonstrated independence from the Soviet Union and China and appeared to reflect the Juche ideology. Even though workers had to complete dangerous tasks and some ultimately lost their lives for the sake of demonstrating the country's capabilities, vinylon thus served as a reinforcement of the party's ideological command and the Kim family's rule. The city began with a goal of producing enough fiber to supply the entire country with clothing, shoes, and other necessities, a goal that appears to have been met for several decades. The fiber produced at Vinylon City was considered so important that during the annual commemoration of Kim Il-sung's birthday, the people were given gifts of vinylon clothing. The factory is said to have hit a production ceiling in 1973. A second complex was planned in 1983 but never built. 1990s–2000s The North Korean economy suffered enormously after the collapse of the Soviet Union, culminating in the North Korean famine of 1994–1998. In 1994, the complex was forced to close due to coal shortages. After the North Korean economy recovered, the complex remained closed until 2010. During this time, vinylon was displaced by other fabrics, some made domestically, others imported from China. One defector said that only the army continued to purchase the material. 2010 reopening On February 8, 2010, Kim Jong Il visited the former Vinylon City complex in Hamhung to celebrate its reopening. 
Kim was accompanied by high-ranking party officials, such as the Chairman of the Presidium of the Supreme People's Assembly Kim Yong-nam, Defense Minister Kim Yong-chun, and Korean Workers' Party secretaries Kim Ki-nam and Choi Tae-bok. This was the first documented time he ever attended an industrial mass rally. While his attendance, and that of the most important party members, could signify the importance of the vinylon complex and its role in advancing the economic policies of Kim Jong Il, there is evidence that the facility could play a role in the North Korean nuclear weapons program. Based on an analysis of satellite imagery, information from Ko Chong-song (a North Korean official who had defected), and a number of North Korean technical documents, there is speculation that the Hamhung plant manufactures unsymmetrical dimethylhydrazine, a rocket fuel used in North Korean long-range missiles. In his new-year speech for 2017, Kim Jong Un expressed plans to revamp Vinylon City. Historical significance Although vinylon was initially used to help develop the North Korean economy as a home-grown product, it also became intertwined with nationalism. As a result, vinylon became a firm part of the North Korean national identity. References Further reading External links NKChosun.com Fibers-Poly-Vinyl-Alcohol Korean clothing Science and technology in North Korea Synthetic fibers Korean inventions Japanese inventions
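For reference, the first steps of the carbide route listed under "Production" correspond to the following textbook reaction equations; catalysts and industrial process conditions are omitted, and the later polymerization, hydrolysis, and acetalization steps are not shown.

```latex
% Textbook equations for the carbide route to vinyl acetate
\begin{align*}
\mathrm{CaCO_3} &\xrightarrow{\ \Delta\ } \mathrm{CaO} + \mathrm{CO_2}
  && \text{limestone calcined to quicklime}\\
\mathrm{CaO} + 3\,\mathrm{C} &\longrightarrow \mathrm{CaC_2} + \mathrm{CO}
  && \text{calcium carbide from lime and coal}\\
\mathrm{CaC_2} + 2\,\mathrm{H_2O} &\longrightarrow \mathrm{C_2H_2} + \mathrm{Ca(OH)_2}
  && \text{acetylene generation}\\
\mathrm{C_2H_2} + \mathrm{CH_3COOH} &\longrightarrow \mathrm{CH_2{=}CHOCOCH_3}
  && \text{vinyl acetate monomer}
\end{align*}
```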
Vinylon
[ "Chemistry" ]
1,761
[ "Synthetic materials", "Synthetic fibers" ]
2,205,075
https://en.wikipedia.org/wiki/Christen%20C.%20Raunki%C3%A6r
Christen Christensen Raunkiær (29 March 1860 – 11 March 1938) was a Danish botanist, who was a pioneer of plant ecology. He is mainly remembered for his scheme of plant strategies for surviving an unfavourable season ("life forms") and his demonstration that the relative abundance of strategies in floras largely corresponds to the Earth's climatic zones. This scheme, the Raunkiær system, is still widely used today and may be seen as a precursor of modern plant strategy schemes, e.g. J. Philip Grime's CSR system. Life He was born on a small heathland farm, named Raunkiær, in Lyhne parish in western Jutland, Denmark, and later took his surname from it. He succeeded Eugen Warming as professor in botany at the University of Copenhagen and director of the Copenhagen Botanical Garden, a position he held from 1912 to 1923. He was married to the author and artist Ingeborg Raunkiær (1863–1921), who accompanied him on journeys to the West Indies and the Mediterranean and made line drawings for his botanical works. They divorced in 1915, the same year their only son Barclay Raunkiær died. Raunkiær later married the botanist Agnete Seidelin (1874–1956). Raunkiær's research axiom was that everything countable in nature should be subjected to numerical analysis, e.g. the number of male and female catkins in monoecious plants and the number of male and female individuals in dioecious plants. Raunkiær was also an early student of apomixis in flowering plants and of hybrid swarms. In addition, he studied the effect of soil pH on plants and of plants on soil pH, work his student Carsten Olsen continued. After his retirement, Raunkiær made numerical studies of plants and flora in literature ("The flora and the heathland poets", "The dandelion in Danish poetry", "Plants in the psalms"). In these studies, he applied strict quantitative criteria, as in his ecological studies; for example, he defined a poet as a person who has written 1,000 or more lines of verse. Legacy Life form spectra Raunkiær devised a system for categorising plants by life-form as a way of ecologically meaningful comparison of species and vegetation in regions having different floras. Raunkiær statistically compared local life-form spectra (relative abundances) with the world average, which he called "the normal spectrum" (Raunkiær 1918 – see below). In doing so, he devised the first null model in the history of ecology. Raunkiær was a keen naturalist who described the flora and fungi of Denmark, the Virgin Islands, Tunisia, and other countries. In contrast to many contemporary naturalists, however, he strongly promoted quantitative and numerical approaches and experimentation. He devised a method to quantify the abundance of plants in vegetation as frequency in subplots and used it for quantitative studies of a range of plant communities. Raunkiær's law When plotting the number of species in a plant community that fell in each 20-percentile frequency class, from very frequent (i.e. numerically dominant) to very infrequent, Raunkiær discovered that most species were either very common or very rare. This came to be known as "Raunkiær's law" and is related to R. A. Fisher's logseries distribution and to Frank W. Preston's lognormal distribution of the number of individuals of each species in a community. The significance of his idea was, however, disputed even by some of his contemporaries. 
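A biological spectrum of the kind Raunkiær compared against his "normal spectrum" is simply the percentage of a flora's species falling in each life-form class. A minimal Python sketch follows, using a made-up toy flora; the species names and class assignments are illustrative only.

```python
from collections import Counter

def life_form_spectrum(species_to_class):
    """Percentage of a flora's species in each Raunkiaer life-form class.

    species_to_class -- mapping of species name -> life-form class.
    Returns the 'biological spectrum' that Raunkiaer compared with his
    world-average 'normal spectrum'.
    """
    counts = Counter(species_to_class.values())
    n = len(species_to_class)
    return {cls: 100.0 * c / n for cls, c in counts.items()}

# Hypothetical toy flora -- names and class assignments are invented.
flora = {
    "Species A": "phanerophyte",
    "Species B": "hemicryptophyte",
    "Species C": "hemicryptophyte",
    "Species D": "therophyte",
}
print(life_form_spectrum(flora))
# {'phanerophyte': 25.0, 'hemicryptophyte': 50.0, 'therophyte': 25.0}
```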
Leaf size in plant geography As a further experiment in characterizing plant communities, Raunkiaer devised a numerical scheme based on leaf size classes and leaf type (simple or compound) that was extended by L. J. Webb and is used as a way to classify forest types more simply than by lists of component species. Popular references Farley Mowat, in his book, Never Cry Wolf, described using a Raunkiær's Circle in making a "cover degree" study to determine the ratios of various plants one to the other. He spoke of it as "a device designed in hell." Scientific publications Raunkiær, C. (1887) Botanisk Tidsskrift 16, 152–167. Raunkiær, C. (1888–89) Botanisk Tidsskrift 17, 20–105. Description in English of some new and of some unsatisfactorily known species of Myxomycetes described in the preceding treatise. Botanisk Tidsskrift 17, 106–110. Raunkiær, C. (1889) Notes on the vegetation of the North-Frisian Islands and a contribution to an eventual flora of these islands. Botanisk Tidsskrift 17, 179–196. Raunkiær, C. (1893) Botanisk Tidsskrift 18, 19–23. Raunkiær, C. (1895) l. Enkimbladede. Gyldendalske Boghandels Forlag, København. Raunkiær, C. (1901) Botanisk Tidsskrift 24, 223–238. Raunkiær, C. (1902) Botanisk Tidsskrift 24, 289–296. Raunkiær, C. (1903) Anatomical Potamogeton-Studies and Potamogeton fluitans. Botanisk Tidsskrift 25, 253–280. Ostenfeld, C.H. & Raunkiær, C. (1903) [English summary]. Botanisk Tidsskrift 25, 409–413. Raunkiær, C. (1904) Oversigt over Det Kongelige Danske Videnskabernes Selskabs Forhandlinger, 1904, 330–349. Raunkiær, C. (1904) Botanisk Tidsskrift 26, XIV. Ch. 1 in Raunkiær (1934): Biological types with reference to the adaption of plants to survive the unfavourable season, p. 1. Raunkiær, C. (1905) Oversigt over Det Kongelige Danske Videnskabernes Selskabs Forhandlinger, 1905, 347–438. Raunkiær, C. (1905) Botanisk Tidsskrift 26, 86–88. Raunkiær, C. (1906) Botanisk Tidsskrift 27, 313–316. Raunkiær, C. (1906) Oversigt over Det Kongelige Danske Videnskabernes Selskabs Forhandlinger, 1906, 31–39. Raunkiær, C. (1907) 132 pp. Ch. 2 in Raunkiær (1934): The life-forms of plants and their bearings on geography, p. 2-104. Raunkiær, C. (1907) Botanisk Tidsskrift 28, 210. Ch. 3 in Raunkiær (1934): The life-form of Tussilago farfarus, p. 105-110. Raunkiær, C. (1908) Botanisk Tidsskrift 29, 42–43. German translation: Beiheft zum Bot. Centralbl., 27 (2), 171-206 (1910). Ch. 4 in Raunkiær (1934): The statistics of life-forms as a basis for biological plant geography, p. 111-147. Raunkiær, C. (1908) Fungi from the Danish West Indies collected 1905–1906. Botanisk Tidsskrift 29, 1–3. Raunkiær, C. (1909) Kongelige Danske Videnskabernes Selskabs Skrifter - Naturvidenskabelig og Mathematisk Afdeling, 7.Rk., 8, 1-70. Ch. 5 in Raunkiær (1934): The life-forms of plants on new soil, p. 148-200. Raunkiær, C. (1909a) Botanisk Tidsskrift 30, 20–132. Ch. 6 in Raunkiær (1934): Investigations and statistics of plant formations, p. 201-282. Raunkiær, C. (1911) In: Biologiske Arbejder tilegnede Eug. Warming paa hans 70 Aars Fødselsdag den 3. Nov. 1911. Kjøbenhavn. Ch. 7 in Raunkiær (1934): The Arctic and Antarctic chamaephyte climate, p. 283-302. Raunkiær, C. (1912) Measuring-apparatus for statistical investigations of plant-formations. Botanisk Tidsskrift 33, 45–48. Raunkiær, C. (1913) Botanisk Tidsskrift 33, 197–243. Ch. 
8 in Raunkiær (1934): Statistical investigations of the plant formations of Skagens Odde (The Skaw), p. 303-342. Raunkiær, C. (1914) Mindeskrift i Anledning af Hundredeaaret for Japetus Steenstrups Fødsel (eds H. F. E. Jungersen & E. Warming), pp. 1–33. København. Ch. 9 in Raunkiær (1934): On the vegetation of the French mediterranean alluvia, p. 343-367. Raunkiær, C. (1916) Botanisk Tidsskrift 34, 1–13. Ch. 10 in Raunkiær (1934): The use of leaf size in biological plant geography, p. 368-378. Raunkiær, C. (1916) Botanisk Tidsskrift 34, 289–311. Raunkiær, C. (1918) [English summary: On leaftime in the descendants from beeches with different leaftimes]. Botanisk Tidsskrift 36, 197–203. Raunkiær, C. (1918) Biologiske Meddelelser / Kongelige Danske Videnskabernes Selskab, 1 (3), 1-80. Ch. 11 in Raunkiær (1934): Statistical researches on plant formations, p. 379-424. Raunkiær, C. (1918) Biologiske Meddelelser / Kongelige Danske Videnskabernes Selskab, 1 (4), 1–17. Ch. 12 in Raunkiær (1934): On the biological normal spectrum, p. 425-434. Raunkiær, C. (1918) Biologiske Meddelelser / Kongelige Danske Videnskabernes Selskab, 1 (7), 1–17. Børgesen, F. & Raunkiær, C. (1918) Mosses and Lichens collected in the former Danish West Indies. Dansk Botanisk Arkiv 2 (9): 1–18. Raunkiær, C. (1919) Biologiske Meddelelser / Kongelige Danske Videnskabernes Selskab, 1 (12), 1-22. Raunkiær, C. (1920) En naturhistorisk Studie. Biologiske Meddelelser / Kongelige Danske Videnskabernes Selskab, 2 (4), 1-90. Raunkiær, C. (1920) Botanisk Tidsskrift 37, 151–158. Ch. 13 in Raunkiær (1934): On the significance of Cryptogams for characterizing Plant climates, p. 435-442. Raunkiær, C. (1922) Biologiske Meddelelser / Kongelige Danske Videnskabernes Selskab, 3 (10), 1-74. Ch. 14 in Raunkiær (1934): The different influence exercised by various types of vegetation on the degree of acidity (Hydrogen-ion concentration), p. 443-487. Raunkiær, C. (1926) Biologiske Meddelelser / Kongelige Danske Videnskabernes Selskab, 5 (5), 1-47. Ch. 15 in Raunkiær (1934): The Nitrate content of Anemone nemorosa growing in various localities, p. 488-516. Raunkiær, C. (1926) (Isoreagent-Studier II). Botanisk Tidsskrift 39, 329–347. Raunkiær, C. (1926) Botanisk Tidsskrift 39, 348–356. Raunkiær, C. (1928) Biologiske Meddelelser / Kongelige Danske Videnskabernes Selskab, 7 (1), 1-47. Ch. 16 in Raunkiær (1934): The area of dominance, species density, and formation dominants, p. 517-546. Raunkiær, C. (1928) Myxomycetes from the West Indian Islands St. Croix, St. Thomas and St. Jan. Dansk Botanisk Arkiv 5 (16), 1–9. Raunkiær, C. (1930) Botanisk Tidsskrift 41, 257–258. Raunkiær, C. (1934) Botaniske Studier, 1. haefte (ed C. Raunkiær), pp. 3–30. J.H. Schultz Forlag, København. Raunkiær, C. (1934) The Life Forms of Plants and Statistical Plant Geography. Introduction by A.G. Tansley. Oxford University Press, Oxford. 632 pp. Collection of 16 of Raunkiær's publications plus one new. Raunkiær, C. (1934) Botanical studies in the Mediterranean region. Ch. 17 in The Life Forms of Plants and Statistical Plant Geography, pp. 547–620. Raunkiær, C. (1935) The vegetation of the sand dunes north of Sousse. Botaniske Studier, 3. haefte (ed C. Raunkiær), pp. 244–248. J.H. Schultz Forlag, København. Raunkiær, C. (1936) The life-form spectrum of some Atlantic islands. Botaniske Studier, 4. haefte (ed C. Raunkiær), pp. 249–328. J.H. Schultz Forlag, København. Raunkiær, C. (1937) Botaniske Studier, 5. haefte (ed C. Raunkiær), pp. 357–382. J.H. Schultz Forlag, København. 
Raunkiær, C. (1937) Botaniske Studier, 5. haefte (ed C. Raunkiær), pp. 329–336. J.H. Schultz Forlag, København. Raunkiær, C. (1937) Life-form, genus area, and number of species. Botaniske Studier, 5. haefte (ed C. Raunkiær), pp. 343–356. J.H. Schultz Forlag, København. Raunkiær, C. (1937) Botaniske Studier, 5. haefte (ed C. Raunkiær), pp. 337–342. J.H. Schultz Forlag, København. Reviews and biographies Biography by O.G. Petersen in: Dansk Biografisk Lexikon, 1st edn 1887-1905 (ed. Carl Frederik Bricka) Biography by Carl Christensen in: Dansk Biografisk Leksikon, 3rd edn 1979-1984 (ed. Svend Cedergreen Bech). References External links http://www.wku.edu/%7Esmithch/chronob/RAUN1860.htm 20th-century Danish botanists Danish ecologists Danish science writers Academic staff of the University of Copenhagen University of Copenhagen alumni 1860 births 1938 deaths Plant life-forms 19th-century Danish botanists
Christen C. Raunkiær
[ "Biology" ]
3,658
[ "Plant life-forms", "Plants" ]
2,205,147
https://en.wikipedia.org/wiki/Raunki%C3%A6r%20plant%20life-form
The Raunkiær system is a system for categorizing plants using life-form categories, devised by Danish botanist Christen C. Raunkiær and later extended by various authors. History It was first proposed in a talk to the Danish Botanical Society in 1904, as can be inferred from the printed discussion of that talk; neither the talk itself nor its title was published. The journal Botanisk Tidsskrift published brief comments on the talk by M.P. Porsild, with replies by Raunkiær. A fuller account appeared in French the following year. Raunkiær elaborated further on the system and published this in Danish in 1907. The original note and the 1907 paper were much later translated into English and published with Raunkiær's collected works. Modernization Raunkiær's life-form scheme has subsequently been revised and modified by various authors, but the main structure has survived. Raunkiær's life-form system may be useful in researching the transformations of biotas and the genesis of some groups of phytophagous animals. Subdivisions The subdivisions of the Raunkiær system are premised on the location of the plant's bud during seasons with adverse conditions, i.e. cold seasons and dry seasons: Phanerophytes These plants, normally woody perennials, grow stems into the air, with their resting buds more than 50 cm above the soil surface, e.g. trees and shrubs, and also epiphytes, which Raunkiær later separated as a distinct class (see below). Raunkiær further divided the phanerophytes according to height into megaphanerophytes, mesophanerophytes, microphanerophytes, and nanophanerophytes. Further division was based on the duration of foliage (evergreen or deciduous) and the presence of covering bracts on buds, giving 12 classes. Three further divisions were made to increase the total to 15 classes: phanerophytic stem succulents, phanerophytic epiphytes, and phanerophytic herbs. Epiphytes Epiphytes were originally included in the phanerophytes (see above) but were then separated because they do not grow in soil, so soil location is irrelevant in classifying them. They form characteristic communities under moist climatic conditions. Chamaephytes These are woody plants with buds on persistent shoots near the soil surface – perennating buds borne at most 25 cm above the soil surface, e.g. bilberry and periwinkle. Hemicryptophytes These plants have buds at or near the soil surface, e.g. common daisy and dandelion, and are divided into: protohemicryptophytes, with only cauline foliage; partial rosette plants, with both cauline and basal rosette foliage; and rosette plants, with only basal rosette foliage. Cryptophytes These plants have subterranean or underwater resting buds and are divided into: geophytes, which rest in dry soil as a rhizome, bulb, corm, et cetera, e.g. crocus and tulip, and are subdivided into rhizome, stem-tuber, root-tuber, bulb, and root geophytes; helophytes, which rest in marshy or wet soil, e.g. reedmace and marsh-marigold; and hydrophytes, which rest submerged under water, e.g. water lily and frogbit. Therophytes These are annual plants that complete their lives rapidly under favorable conditions and survive the unfavorable cold or dry season in the form of seeds. About 6% of plants are therophytes, but their proportion is much higher in regions with a hot, dry summer. Aerophytes Aerophytes were a later addition to the system. These are plants that obtain moisture and nutrients from the air and rain. 
They usually grow on other plants yet are not parasitic on them. They are perennial plants, similar to epiphytes but with reduced root systems, and they occur in communities confined to hyper-arid areas with abundant fog. Like epiphytes and hemicryptophytes, their buds are near the soil surface. Some Tillandsia species are classified as aerophytes. Popular references Farley Mowat, in his book, Never Cry Wolf, described using a Raunkiær's Circle in making a "cover degree" study to determine the ratios of various plants one to the other. He spoke of it as "a device designed in hell." References Plant life-forms Botanical nomenclature Ecology
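The bud-position criterion underlying the subdivisions above can be sketched as a toy classifier in Python. The 25 cm and 50 cm thresholds follow the figures quoted in this article (sources vary on the exact cut-offs), and the function is a simplification: real assignment also weighs woodiness, foliage duration, and bud protection.

```python
def raunkiaer_class(bud_height_cm: float, survives_as_seed: bool = False) -> str:
    """Toy classifier using the bud-position criterion described above.

    bud_height_cm -- height of the perennating bud relative to the soil
    surface (negative = subterranean or under water). Treating 'at or
    near the soil surface' as exactly zero is a deliberate simplification.
    """
    if survives_as_seed:
        return "therophyte"
    if bud_height_cm < 0:
        return "cryptophyte"
    if bud_height_cm == 0:
        return "hemicryptophyte"
    if bud_height_cm <= 25:
        return "chamaephyte"
    if bud_height_cm > 50:
        return "phanerophyte"
    return "unresolved (between 25 and 50 cm; sources differ)"

print(raunkiaer_class(300))   # phanerophyte
print(raunkiaer_class(-10))   # cryptophyte
print(raunkiaer_class(0))     # hemicryptophyte
```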
Raunkiær plant life-form
[ "Biology" ]
1,032
[ "Botanical nomenclature", "Plants", "Botanical terminology", "Biological nomenclature", "Ecology", "Plant life-forms" ]
2,205,197
https://en.wikipedia.org/wiki/Gaseous%20fire%20suppression
Gaseous fire suppression, also called clean agent fire suppression, is the use of inert gases and chemical agents to extinguish a fire. These agents are governed in the US by the National Fire Protection Association (NFPA) Standard for Clean Agent Fire Extinguishing Systems – NFPA 2001, with different standards and regulations elsewhere. The system typically consists of the agent, agent storage containers, agent release valves, fire detectors, a fire detection system (wiring, control panel, actuation signaling), agent delivery piping, and agent dispersion nozzles. Theory There are four means used by the agents to extinguish a fire. They act on the "fire tetrahedron": Reduction or isolation of fuel. No agents currently use this as the primary means of fire suppression. Reduction of heat. Representative agents: clean agent FS 49 C2 (NAF S 227, MH227, FM-200), Novec 1230, pentafluoroethane (NAF S125, ECARO-25). Reduction or isolation of oxygen. Representative agents: IG-01 (argon); Argonite / IG-55 (ProInert), a blend of 50% argon and 50% nitrogen; carbon dioxide (CO2); IG-541 (Inergen), a blend of 52% nitrogen, 40% argon, and 8% CO2; and IG-100 (NN100), nitrogen. Inhibiting the chain reaction of the above components. Representative agents: FE-13, 1,1,1,2,3,3,3-heptafluoropropane, FE-25, haloalkanes, bromotrifluoromethane, trifluoroiodomethane, NAF P-IV, NAF S-III, NAF S 125, NAF S 227, and Triodide (trifluoroiodomethane). Application Broadly speaking, there are two methods for applying an extinguishing agent: total flooding and local application. Systems working on a total flooding principle apply an extinguishing agent to a room in order to achieve a concentration of the agent (volume percent of the agent in air) adequate to extinguish the fire. These types of systems may be operated automatically by detection and related controls or manually by the operation of a system actuator. Systems working on a local application principle apply an extinguishing agent directly onto a fire (usually a two-dimensional area), or into the three-dimensional region immediately surrounding the substance or object on fire. The main difference of local application from total flooding design is the absence of physical barriers enclosing the fire space. In the context of automatic extinguishing systems, local application generally refers to the use of systems that have been emplaced some time prior to their usage, rather than the use of manually operated wheeled or portable fire extinguishers, although the nature of the agent delivery is similar and many automatic systems may also be activated manually. The lines are blurred somewhat with portable automatic extinguishing systems, although these are not common. Safety precautions Room integrity testing Room integrity testing (RIT) is also required in conjunction with gas fire suppression systems. RIT ensures that in the event of a fire, the room's containment is sufficient to ensure the effectiveness of the suppression system. RIT works by creating pressure within the room or enclosure where the suppression system has been installed and ensuring that the gas does not escape from the room so quickly that it is unable to extinguish the fire. Suffocation An extinguishing system which is primarily based on inert gases, such as CO2 or nitrogen, in enclosed spaces presents a risk of suffocation. Some incidents have occurred where individuals in these spaces have been killed by inert gas release. 
When installed according to fire codes, the systems have an excellent safety record. To prevent such occurrences, additional life safety systems are typically installed, with a warning alarm that precedes the agent release. The warning, usually an audible and visible alert, advises the immediate evacuation of the enclosed space. After a preset time, the agent starts to discharge. The discharge can be paused by activating an abort switch, which holds the countdown as long as it is activated, allowing everyone to evacuate the area. Accidents have also occurred during maintenance of these systems, so proper safety precautions must be taken beforehand. The pressure differential caused by these gases may be sufficient to break windows and walls. Pressure vents are mandatory with inert gases and may be required with synthetic agents; they are sized according to the physical strength of the protected enclosure. Accidents A study carried out by the Italian Workers' Compensation Authority and the Italian Fire Brigade (Piccolo et al., 2018) reported 12 different accidents caused by inert gas fire suppression systems, which in many cases resulted in serious injuries. From the cases analyzed, it was determined that the overpressure of inert gas systems can constitute a risk even in the presence of a system designed and built according to technical standards. On September 20, 2018, two people died after an inert gas leak from a fire suppression system located in the State Archives in Arezzo (Italy). On 8 July 2013 an explosion of Argonite cylinders tore through a Hertfordshire (U.K.) construction site, killing a plumber and injuring six other workers. Red Bee Media had a major incident with their broadcast playout facility in London, England, that is used by BBC Television, Channel 4, Channel 5 and ViacomCBS. The broadcasters upload programs and continuity links to Red Bee's servers for broadcast to TV and online. On 25 September 2021 a gaseous fire suppression system at the facility was triggered, causing loss of service. It was found that an acoustic shock wave from the gas release nozzles had severely damaged the servers, destroying the hard drives. See also Fire protection Hypoxic air technology for fire prevention References External links Fire Suppression Systems Association National Fire Protection Association EPA U.S. Environmental Protection Agency Active fire protection Fire suppression Firefighting equipment Industrial gases
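For total flooding systems, the agent quantity follows from the design concentration described under "Application". The Python sketch below implements a simplified form of the halocarbon sizing relation used in NFPA 2001, W = (V/S) · C/(100 − C) with S = k1 + k2·T; the constants k1 and k2 are agent-specific values tabulated in the standard, and the numbers used in the example are placeholders, not figures from NFPA 2001.

```python
def halocarbon_agent_mass(volume_m3: float, design_conc_pct: float,
                          k1: float, k2: float, temp_c: float) -> float:
    """Simplified total-flooding sizing relation for halocarbon agents.

    W = (V / S) * (C / (100 - C)), where S = k1 + k2 * T is the agent's
    specific vapour volume (m^3/kg) at temperature T. k1 and k2 are
    agent-specific constants tabulated in NFPA 2001; the caller must
    supply real values from the standard for a real design.
    """
    s = k1 + k2 * temp_c
    return (volume_m3 / s) * design_conc_pct / (100.0 - design_conc_pct)

# Hypothetical: a 200 m^3 room at 20 degC, 8% design concentration,
# with made-up constants k1 and k2 (placeholders only).
print(round(halocarbon_agent_mass(200.0, 8.0, k1=0.13, k2=0.0005,
                                  temp_c=20.0)))  # -> 124 (kg of agent)
```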
Gaseous fire suppression
[ "Chemistry" ]
1,268
[ "Chemical process engineering", "Industrial gases" ]
2,205,504
https://en.wikipedia.org/wiki/Dysplasia
Dysplasia is any of various types of abnormal growth or development of cells (microscopic scale) or organs (macroscopic scale), and the abnormal histology or anatomical structure(s) resulting from such growth. Dysplasias on a mainly microscopic scale include epithelial dysplasia and fibrous dysplasia of bone. Dysplasias on a mainly macroscopic scale include hip dysplasia, myelodysplastic syndrome, and multicystic dysplastic kidney. In one of the modern histopathological senses of the term, dysplasia is sometimes differentiated from other categories of tissue change including hyperplasia, metaplasia, and neoplasia, and dysplasias are thus generally not cancerous. An exception is that the myelodysplasias include a range of benign, precancerous, and cancerous forms. Various other dysplasias tend to be precancerous. The word's meanings thus cover a spectrum of histopathological variations. Microscopic scale Epithelial dysplasia Epithelial dysplasia consists of an expansion of immature cells (such as cells of the ectoderm), with a corresponding decrease in the number and location of mature cells. Dysplasia is often indicative of an early neoplastic process. The term dysplasia is typically used when the cellular abnormality is restricted to the originating tissue, as in the case of an early, in-situ neoplasm. Dysplasia, in which cell maturation and differentiation are delayed, can be contrasted with metaplasia, in which cells of one mature, differentiated type are replaced by cells of another mature, differentiated type. Myelodysplastic syndrome Myelodysplastic syndromes (MDS) are a group of cancers in which immature blood cells in the bone marrow do not mature and therefore do not become healthy blood cells. Problems with blood cell formation result in some combination of low red blood cells, low platelets, and low white blood cells. Some types have an increase in immature blood cells, called blasts, in the bone marrow or blood. Fibrous dysplasia of bone Fibrous dysplasia of bone is a disorder where normal bone and marrow is replaced with fibrous tissue, resulting in formation of bone that is weak and prone to expansion. As a result, most complications result from fracture, deformity, functional impairment and pain. Macroscopic scale Hip dysplasia Hip dysplasia is an abnormality of the hip joint where the socket portion does not fully cover the ball portion, resulting in an increased risk for joint dislocation. Hip dysplasia may occur at birth or develop in early life. Regardless, it does not typically produce symptoms in babies less than a year old. Occasionally one leg may be shorter than the other. The left hip is more often affected than the right. Complications without treatment can include arthritis, limping, and low back pain. Multicystic dysplastic kidney Multicystic dysplastic kidney (MCDK) is a condition that results from the malformation of the kidney during fetal development. The kidney consists of irregular cysts of varying sizes. Multicystic dysplastic kidney is a common type of renal cystic disease, and it is a cause of an abdominal mass in infants. Etymology From Ancient Greek δυσ- dys- 'bad' or 'difficult' and πλάσις plasis 'formation'. The equivalent surface analysis, in parallel with classical compounds, is dys- + -plasia. See also Pleomorphism List of biological development disorders References Further reading Oncology Histopathology Induced stem cells Noninflammatory disorders of female genital tract
Dysplasia
[ "Chemistry", "Biology" ]
795
[ "Stem cell research", "Induced stem cells", "Histopathology", "Microscopy" ]
2,205,694
https://en.wikipedia.org/wiki/Frank%20Austin%20Gooch
Frank Austin Gooch (1852 – 1929) was an American chemist and engineer. Biography He was born to Joshua G. & Sarah Gates (Coolidge) Gooch in Watertown, Massachusetts. On his mother's side of the family, he was a descendant of Thomas Hastings, who came from the East Anglia region of England to the Massachusetts Bay Colony in 1634. Gooch invented the Gooch crucible, which is used, for example, to determine the solubility of bituminous materials such as road tars and petroleum asphalts. He was awarded a Ph.D. by Harvard University in 1877 and was a professor of chemistry at Yale University from 1885 to 1918. He devised or perfected a large number of analytical processes and methods, including: the invention of the Gooch filtering crucible; the quantitative separation of lithium from the other alkali metals; the estimation of boric acid by distillation with methanol and fixation by calcium oxide; methods for estimating molybdenum, vanadium, selenium, and tellurium; the use of the paratungstate and pyrophosphate ions in analysis; a series of methods for estimating various elements based on the volumetric determination of iodine; and a method for the rapid electrolytic estimation of metals. He was a member of the Connecticut Academy of Arts and Sciences, the American Philosophical Society, the American Academy of Arts and Sciences, and the U.S. National Academy of Sciences. Further reading Biog. Mem. Nat. Acad. Sci. 1931, 15, 105–135. Ind. Eng. Chem. 1923, 15, 1088–1089. Proc. Am. Acad. Arts Sci. 1935–36, 70, 541. Am. J. Sci. (Ser. 5) 1929, 18, 539–540. National Cyclopaedia of American Biography, James T. White & Co.: 1921–1984; vol. 12, pp. 329–330. References External links Frank Austin Gooch (biography at University of Illinois) Descendants of Thomas Hastings website Descendants of Thomas Hastings on Facebook 1852 births 1929 deaths American chemists American engineers Harvard University alumni People involved with the periodic table Members of the American Philosophical Society
Frank Austin Gooch
[ "Chemistry" ]
474
[ "Periodic table", "People involved with the periodic table" ]
2,205,917
https://en.wikipedia.org/wiki/AUO%20Corporation
AUO Corporation (AUO) is a Taiwanese company that specialises in optoelectronics. It was formed in September 2001 by the merger of Acer Display Technology, Inc. (the predecessor of AUO, established in 1996) and Unipac Optoelectronics Corporation. AUO offers display panels, and in recent years has expanded its business into smart retail, smart transportation, general health, solar energy, the circular economy and smart manufacturing services. AUO employs 38,000 people. History August 1996 Acer Display Technology, Inc. (the predecessor of AUO) was founded September 2000 Listed on the Taiwan Stock Exchange September 2001 Merged with Unipac Optoelectronics Corporation to form AUO October 2006 Merged with Quanta Display Inc. December 2008 Entered the solar business June 2009 Joint venture with Changhong in Sichuan, China to set up a module plant April 2010 Joint venture with TCL in China to set up a module plant July 2010 Acquired AFPD Pte., Ltd. ("AFPD"), a Singapore subsidiary of Toshiba Mobile Display Co., Ltd. February 2012 OLED strategic alliance formed with Idemitsu of Japan April 2014 Initiated a new model of solar power plant operation by founding Star River Energy Corporation May 2014 CSR report received Taiwan's first GRI G4 certification in the manufacturing industry December 2015 Launched Taiwan's first full process-water recycling system December 2020 AUO's 9.4-inch high-resolution flexible micro LED display technology honored with the 2020 Innovative Product Award from Hsinchu Science Park May 2021 Established AUO Display Plus, AUO's industrial and commercial display subsidiary October 2023 AUO acquired Behr-Hella Thermocontrol for €600 million (approximately NT$20.4 billion) Controversies In September 2012, AUO was sentenced to pay a US$500 million criminal fine for its participation in a five-year conspiracy to fix the prices of thin-film transistor LCD panels sold worldwide. Its American subsidiary and two former top executives were also sentenced; the two executives were sentenced to prison and fined for their roles in the conspiracy. The $500 million fine matches the largest fine imposed against a company for violating U.S. antitrust laws. In July 2014, the Ninth Circuit rejected AUO's appeal of the fine. Shareholding and subsidiaries Lextar Electronics Corporation Qisda Corporation Darwin Precision Corporation Daxin Materials Corporation AUO Crystal Corporation Toppan CFI Fargen Power Corporation See also List of companies of Taiwan References BenQ Group Computer companies of Taiwan Computer hardware companies Taiwanese companies established in 1996 Companies listed on the Taiwan Stock Exchange Electronics companies of Taiwan Display technology companies Manufacturing companies based in Hsinchu Electronics companies established in 1996
AUO Corporation
[ "Technology" ]
553
[ "Computer hardware companies", "Computers" ]
2,205,922
https://en.wikipedia.org/wiki/List%20of%20biodiversity%20conservation%20sites%20in%20the%20United%20Kingdom
This article provides a list of sites in the United Kingdom which are recognised for their importance to biodiversity conservation. The list is divided geographically by region and county. Inclusion criteria Sites are included in this list if they are given any of the following designations: Sites of importance in a global context Biosphere Reserves (BR) World Heritage Sites (WHS) (where biological interest forms part of the reason for designation) all Ramsar Sites Sites of importance in a European context all Special Protection Areas (SPA) all Special Area of Conservation (SAC) all Important Bird Areas (IBA) Sites of importance in a national context all sites which were included in the Nature Conservation Review (NCR site) all national nature reserves (NNR) Sites of Special Scientific Interest (SSSI), where biological interest forms part of the justification for notification (SSSIs which are designated purely for their geological interest are not included unless they meet other criteria) England Southwest Cornwall Devon Dorset Somerset Avon Wiltshire Gloucestershire Southeast Bedfordshire Berkshire Buckinghamshire Essex Greater London Hampshire Hertfordshire Kent Oxfordshire Surrey Sussex Rye Harbour Nature Reserve Midlands Derbyshire Herefordshire Leicestershire Northamptonshire Shropshire Staffordshire Nottinghamshire Warwickshire Worcestershire East Anglia Northwest Cheshire Northeast Lincolnshire Yorkshire County Durham Wales Anglesey Scotland Northeast Scotland Shetland Unst Orkney Outer Hebrides Lewis and Harris North Uist, South Uist and Benbecula Other islands See also Conservation in the United Kingdom National Nature Reserves in the United Kingdom Sites of Special Scientific Interest References Biodiversity Biodiversity Conservation in the United Kingdom
List of biodiversity conservation sites in the United Kingdom
[ "Biology" ]
291
[ "Biodiversity" ]
2,205,989
https://en.wikipedia.org/wiki/Cybergirl
Cybergirl is an Australian-French children's television series that was first broadcast on Network Ten in Australia. The 26-episode series was created by Jonathan M. Shiff, whose previous series include Ocean Girl. It stars Ania Stepien in the title role. Plot Cybergirl is a blue-haired superheroine living under the secret identity of ordinary teenage girl Ashley Campbell. In reality, she is a "Human Prototype 6000" from a distant planet. Her powers include super-human strength, super-human speed, and the ability to interface directly with electronic devices and computers; she is also able to physically change her appearance between that of the blue-haired, ethereal-looking Cybergirl and the less conspicuous, mousy-haired Ashley, and can alter her clothing at will. She was originally known as the Cyber Replicant Human Prototype 6000, the only one of her model to be built. Not only are her powers far above those of earlier models, she has a much wider emotional scope than her predecessors. She ran away from her planet of origin in order to explore the beings she was modeled after, namely humans. Two other replicants, the evil red replicants Isaac and Xanda, are sent after her with the sole mission of destroying her. She lands on Earth in the fictional city of River City, Australia, which is modeled on, and filmed in, Brisbane; indeed, "River City" is a popular nickname for Brisbane. She meets Jackson and Hugh Campbell, who take her in, and she adopts the name Cybergirl as her superheroine identity. Jackson calls her "Cy", and she later uses her powers to make herself look more human; in this identity, called Ashley, she poses as Jackson's cousin and Hugh's niece. The only other person besides Hugh and Jackson to know her identity is Kat, her friend and neighbour. She is pursued not only by Xanda and Isaac but also by a powerful software mogul named Rhyss. She is well loved by the populace of River City, however, and she enjoys the approval of Mayor Buxton, whose twin daughters Emerald and Sapphire are big fans of the superheroine. Ironically, they snub her, as Ashley, at school. Cast Main Ania Stepien as Ashley Campbell / Cybergirl Craig Horner as Jackson Campbell Mark Owen-Taylor as Hugh Campbell Jennifer Congram as Xanda Ric Anderson as Isaac Septimus Caton as Rhyss Winston Cooper as Giorgio Peter Mochrie as Rick Fontaine Jovita Shaw as Kat Fontaine Recurring Christine Amor as Mayor Burdette Buxton David Vallon as Romirez Michelle Atkinson as Anthea Jessica Origliasso as Emerald Buxton Lisa Origliasso as Sapphire Buxton Tony Hawkins as McMurtrie Guest John Dommett as Mr. Southerly Jason Klarwein as Sales Assistant Daniel Amalm as Marco Julie Eckersley as Julia Damien Garvey as Paramedic Remi Broadway as Zak Furnace Episodes Home media Cybergirl was released on DVD on 4 December 2006 as CyberGirl: The Superhero for a New Generation – The Complete Series. The set includes all 26 episodes on 4 DVDs and is Region 0. The release includes making-of/behind-the-scenes featurettes created from a period Electronic Press Kit, and the packaging makes frequent references to the fact that the series features "before they were famous" appearances by The Veronicas. Reception The first episode won the 2001 AFI Award for Best Children's Television Series and was nominated for the 2002 Logie Award for Most Outstanding Children's Program. 
In a negative review, Vicki Englund of The Courier-Mail wrote, "After a disappointingly slow debut episode, this Brisbane-based children's series seems to be gathering momentum, although some quickening of the pace would still be advisable to keep the young'uns from channel-surfing." The Courier-Mails Amelia Oberhardt praised the show, stating, "Cybergirl is, surprisingly, an entertaining way to spend 30 minutes. ... We can only assume Cybergirl is a spoof and is therefore more than a bit of fun. Ania Stephen, who plays Cybergirl, is either one of the most wooden actors of all time or a genius at playing spoofs." Leanne Younes of the Canberra Times gave it a rating of "Fast action, great make-up and cool music." References External links Cybergirl at the Australian Television Information Archive Jonathan M. Shiff Productions - the official website for Jonathan M. Shiff productions, has a Cybergirl section Cybergirl at the National Film and Sound Archive Network 10 original programming Australian English-language television shows Fictional computers Australian adventure television series Australian children's television series Australian science fiction television series French adventure television series French children's television series French science fiction television series Television shows set in Queensland 2001 Australian television series debuts 2002 Australian television series endings 2001 French television series debuts 2002 French television series endings Cyborgs in television Fiction about malware Child superheroes Television series about television Cyberpunk television series Australian television series about teenagers Television shows filmed in Australia Teen superhero television series
Cybergirl
[ "Technology" ]
1,038
[ "Fictional computers", "Computers" ]
2,206,157
https://en.wikipedia.org/wiki/Concurrent%20lines
In geometry, lines in a plane or higher-dimensional space are concurrent if they intersect at a single point. The set of all lines through a point is called a pencil, and their common intersection is called the vertex of the pencil. In any affine space (including a Euclidean space) the set of lines parallel to a given line (sharing the same direction) is also called a pencil, and the vertex of each pencil of parallel lines is a distinct point at infinity; including these points results in a projective space in which every pair of lines has an intersection. Examples Triangles In a triangle, four basic types of sets of concurrent lines are altitudes, angle bisectors, medians, and perpendicular bisectors: A triangle's altitudes run from each vertex and meet the opposite side at a right angle. The point where the three altitudes meet is the orthocenter. Angle bisectors are rays running from each vertex of the triangle and bisecting the associated angle. They all meet at the incenter. Medians connect each vertex of a triangle to the midpoint of the opposite side. The three medians meet at the centroid. Perpendicular bisectors are lines running out of the midpoints of each side of a triangle at 90 degree angles. The three perpendicular bisectors meet at the circumcenter. Other sets of lines associated with a triangle are concurrent as well. For example: Any median (which is necessarily a bisector of the triangle's area) is concurrent with two other area bisectors each of which is parallel to a side. A cleaver of a triangle is a line segment that bisects the perimeter of the triangle and has one endpoint at the midpoint of one of the three sides. The three cleavers concur at the center of the Spieker circle, which is the incircle of the medial triangle. A splitter of a triangle is a line segment having one endpoint at one of the three vertices of the triangle and bisecting the perimeter. The three splitters concur at the Nagel point of the triangle. Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter, and each triangle has one, two, or three of these lines. Thus if there are three of them, they concur at the incenter. The Tarry point of a triangle is the point of concurrency of the lines through the vertices of the triangle perpendicular to the corresponding sides of the triangle's first Brocard triangle. The Schiffler point of a triangle is the point of concurrence of the Euler lines of four triangles: the triangle in question, and the three triangles that each share two vertices with it and have its incenter as the other vertex. The Napoleon points and generalizations of them are points of concurrency. For example, the first Napoleon point is the point of concurrency of the three lines each from a vertex to the centroid of the equilateral triangle drawn on the exterior of the opposite side from the vertex. A generalization of this notion is the Jacobi point. The de Longchamps point is the point of concurrence of several lines with the Euler line. Three lines, each formed by drawing an external equilateral triangle on one of the sides of a given triangle and connecting the new vertex to the original triangle's opposite vertex, are concurrent at a point called the first isogonal center. In the case in which the original triangle has no angle greater than 120°, this point is also the Fermat point. 
The Apollonius point is the point of concurrence of three lines, each of which connects a point of tangency of the circle to which the triangle's excircles are internally tangent, to the opposite vertex of the triangle. Quadrilaterals The two bimedians of a quadrilateral (segments joining midpoints of opposite sides) and the line segment joining the midpoints of the diagonals are concurrent and are all bisected by their point of intersection (a numerical check appears at the end of this section). In a tangential quadrilateral, the four angle bisectors concur at the center of the incircle; a tangential quadrilateral has several other concurrencies as well. In a cyclic quadrilateral, four line segments, each perpendicular to one side and passing through the opposite side's midpoint, are concurrent. These line segments are called the maltitudes, which is an abbreviation for midpoint altitude. Their common point is called the anticenter. A convex quadrilateral is ex-tangential if and only if there are six concurrent angle bisectors: the internal angle bisectors at two opposite vertex angles, the external angle bisectors at the other two vertex angles, and the external angle bisectors at the angles formed where the extensions of opposite sides intersect. Hexagons If the successive sides of a cyclic hexagon are a, b, c, d, e, f, then the three main diagonals concur at a single point if and only if ace = bdf. If a hexagon has an inscribed conic, then by Brianchon's theorem its principal diagonals are concurrent. Concurrent lines arise in the dual of Pappus's hexagon theorem. For each side of a cyclic hexagon, extend the adjacent sides to their intersection, forming a triangle exterior to the given side. Then the segments connecting the circumcenters of opposite triangles are concurrent. Regular polygons If a regular polygon has an even number of sides, the diagonals connecting opposite vertices are concurrent at the center of the polygon. Circles The perpendicular bisectors of all chords of a circle are concurrent at the center of the circle. The lines perpendicular to the tangents to a circle at the points of tangency are concurrent at the center. All area bisectors and perimeter bisectors of a circle are diameters, and they are concurrent at the circle's center. Ellipses All area bisectors and perimeter bisectors of an ellipse are concurrent at the center of the ellipse. Hyperbolas In a hyperbola the following are concurrent: (1) a circle passing through the hyperbola's foci and centered at the hyperbola's center; (2) either of the lines that are tangent to the hyperbola at the vertices; and (3) either of the asymptotes of the hyperbola. The following are also concurrent: (1) the circle that is centered at the hyperbola's center and that passes through the hyperbola's vertices; (2) either directrix; and (3) either of the asymptotes. Tetrahedrons In a tetrahedron, the four medians and three bimedians are all concurrent at a point called the centroid of the tetrahedron. An isodynamic tetrahedron is one in which the cevians that join the vertices to the incenters of the opposite faces are concurrent, and an isogonic tetrahedron has concurrent cevians that join the vertices to the points of contact of the opposite faces with the inscribed sphere of the tetrahedron. In an orthocentric tetrahedron the four altitudes are concurrent.
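As promised above, a quick numerical check of the quadrilateral bimedian fact (a minimal sketch, assuming NumPy is available): each bimedian, and the segment joining the midpoints of the diagonals, has midpoint (A + B + C + D)/4, so all three segments pass through, and are bisected by, that common point.

```python
import numpy as np

# Vertices of an arbitrary (here random) quadrilateral ABCD.
rng = np.random.default_rng(0)
A, B, C, D = rng.random((4, 2))

mid = lambda p, q: (p + q) / 2

# Bimedians join midpoints of opposite sides; the third segment joins
# the midpoints of the two diagonals.
segments = [
    (mid(A, B), mid(C, D)),  # first bimedian
    (mid(B, C), mid(D, A)),  # second bimedian
    (mid(A, C), mid(B, D)),  # joins the midpoints of the diagonals
]

# All three segments share the midpoint (A + B + C + D) / 4,
# which is exactly the concurrency-and-bisection claim.
for p, q in segments:
    assert np.allclose(mid(p, q), (A + B + C + D) / 4)
print("common midpoint:", (A + B + C + D) / 4)
```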
Algebra According to the Rouché–Capelli theorem, a system of equations is consistent if and only if the rank of the coefficient matrix is equal to the rank of the augmented matrix (the coefficient matrix augmented with a column of intercept terms), and the system has a unique solution if and only if that common rank equals the number of variables. Thus with two variables the k lines in the plane, associated with a set of k equations, are concurrent if and only if the rank of the k × 2 coefficient matrix and the rank of the k × 3 augmented matrix are both 2. In that case only two of the k equations are independent, and the point of concurrency can be found by solving any two mutually independent equations simultaneously for the two variables. Projective geometry In projective geometry, in two dimensions concurrency is the dual of collinearity; in three dimensions, concurrency is the dual of coplanarity. References External links Wolfram MathWorld Concurrent, 2010. Elementary geometry Line (geometry)
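The rank criterion described in the Algebra section above translates directly into a short computation. A minimal sketch (assuming NumPy; the helper names are illustrative), applied to the three medians of a triangle, which should concur at the centroid:

```python
import numpy as np

def are_concurrent(lines, tol=1e-9):
    """Test whether lines given as (a, b, c) with a*x + b*y = c all pass
    through one point, via the rank criterion described above."""
    A = np.array([[a, b] for a, b, c in lines], dtype=float)     # k x 2 coefficient matrix
    M = np.array([[a, b, c] for a, b, c in lines], dtype=float)  # k x 3 augmented matrix
    # Concurrent iff both ranks equal 2 (consistent system, unique solution).
    if np.linalg.matrix_rank(A, tol) == 2 and np.linalg.matrix_rank(M, tol) == 2:
        point, *_ = np.linalg.lstsq(A, M[:, 2], rcond=None)  # exact for a consistent system
        return point
    return None

def line_through(p, q):
    """Line a*x + b*y = c through two points."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    return (a, b, a * x1 + b * y1)

# Medians of the triangle (0,0), (4,0), (0,6): each runs from a vertex to
# the midpoint of the opposite side. They meet at the centroid (4/3, 2).
A_, B_, C_ = (0, 0), (4, 0), (0, 6)
medians = [line_through(A_, (2, 3)), line_through(B_, (0, 3)), line_through(C_, (2, 0))]
print(are_concurrent(medians))  # approx. [1.3333, 2.0]
```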
Concurrent lines
[ "Mathematics" ]
1,729
[ "Line (geometry)", "Elementary mathematics", "Elementary geometry" ]
2,206,410
https://en.wikipedia.org/wiki/Saran%20%28plastic%29
Saran is a trade name used by S.C. Johnson & Son, Inc. for a polyethylene food wrap. The Saran trade name was first owned by Dow Chemical for polyvinylidene chloride (PVDC), along with other monomers. The formulation was changed to the less effective polyethylene in 2004 due to the chlorine content of PVDC. Since its accidental discovery in 1933, polyvinylidene chloride has been used for a number of commercial and industrial products. When formed into a thin plastic film, the principal advantages of polyvinylidene chloride, when compared to other plastics, are its ability to adhere to itself and its very low permeability to water vapor, flavor and aroma molecules, and oxygen. This oxygen barrier prevents food spoilage, while the film's barrier to flavor and aroma molecules helps food retain its flavor and aroma. History Polyvinylidene chloride (PVDC) was discovered at Dow Chemical Company (Michigan, United States) in 1933 when a lab worker, Ralph Wiley, was having trouble washing beakers used in his process of developing a dry-cleaning product. It was initially developed into a spray that was used on US fighter planes and, later, automobile upholstery, to protect them from the elements. Dow Chemical later named the product Saran and eliminated its green hue and offensive odor. In 1942, fused layers of original-specification PVDC were used to make woven mesh ventilating insoles for newly developed jungle or tropical combat boots made of rubber and canvas. These insoles were tested by experimental Army units in jungle exercises in Panama, Venezuela, and other countries, where they were found to increase the flow of dry outside air to the insole and base of the foot, reducing blisters and tropical ulcers. The PVDC ventilating mesh insole was later adopted by the United States Army for standard issue in its M-1945 and M-1966 Jungle Boots. In 1943, Ralph Wiley and his boss, John Reilly, both employed by Dow Chemical Company, completed the final work needed for the introduction of PVDC, which had been invented in 1939. PVDC monofilaments were also extruded for the first time. The word Saran was formed from a combination of John Reilly's wife's and daughter's names, Sarah and Ann Reilly. In 1949, Dow introduced Saran Wrap, a thin, clingy plastic wrap that was sold in rolls and used primarily for wrapping food. It quickly became popular for preserving food items stored in the refrigerator. Saran Wrap was later acquired by S. C. Johnson & Son. After the end of the Vietnam War, the U.S. military phased out PVDC insoles in favor of Poron®, a microcellular urethane, for its jungle and combat boots. However, the British Army continues to use PVDC insoles in its combat boots, primarily because of their insulating properties. Formulation change to polyethylene Today's Saran Wrap is no longer composed of PVDC in the United States, due to cost, processing difficulties, and health and environmental concerns with halogenated materials, and is now made from polyethylene. However, polyethylene has a higher oxygen permeability, which in turn affects food spoilage prevention. For example, at 23 °C and 95% relative humidity, polyvinylidene chloride has an oxygen permeability of 0.6 cm³·µm·m⁻²·d⁻¹·kPa⁻¹, while low-density polyethylene under the same conditions has an oxygen permeability of 2000 cm³·µm·m⁻²·d⁻¹·kPa⁻¹, making it more than 3,000 times more permeable. For that reason, packaging for the meat industry may still use PVDC-containing films as a barrier layer.
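A quick back-of-the-envelope check of the permeability comparison above (values as quoted in the text; a minimal sketch in Python):

```python
# Oxygen permeability at 23 °C and 95% relative humidity,
# in cm³·µm·m⁻²·d⁻¹·kPa⁻¹ (values quoted above).
pvdc_o2 = 0.6
ldpe_o2 = 2000

ratio = ldpe_o2 / pvdc_o2
print(f"LDPE is ~{ratio:,.0f}x more permeable to oxygen than PVDC")
# -> LDPE is ~3,333x more permeable to oxygen than PVDC
```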
References External links "Saran Wrap - The History of PVDC" (from About.com) Plastics Packaging materials Food preparation utensils Synthetic fibers Kitchenware brands S. C. Johnson & Son brands Dow Chemical Company Brand name materials Products introduced in 1933
Saran (plastic)
[ "Physics", "Chemistry" ]
845
[ "Synthetic fibers", "Synthetic materials", "Unsolved problems in physics", "Amorphous solids", "Plastics" ]
2,206,496
https://en.wikipedia.org/wiki/Ionomer
An ionomer (iono- + -mer) is a polymer composed of repeat units of both electrically neutral repeating units and ionized units covalently bonded to the polymer backbone as pendant group moieties. Usually no more than 15 mole percent are ionized. The ionized units are often carboxylic acid groups. The classification of a polymer as an ionomer depends on the level of substitution of ionic groups as well as how the ionic groups are incorporated into the polymer structure. For example, polyelectrolytes also have ionic groups covalently bonded to the polymer backbone, but have a much higher ionic group molar substitution level (usually greater than 80%); ionenes are polymers where ionic groups are part of the actual polymer backbone. These two classes of ionic-group-containing polymers have vastly different morphological and physical properties and are therefore not considered ionomers. Ionomers have unique physical properties, including electrical conductivity and unusual viscosity behavior: ionomer solution viscosity increases with increasing temperature (see conducting polymer). Ionomers also have unique morphological properties, as the non-polar polymer backbone is energetically incompatible with the polar ionic groups. As a result, the ionic groups in most ionomers will undergo microphase separation to form ionic-rich domains. Commercial applications for ionomers include golf ball covers, semipermeable membranes, sealing tape and thermoplastic elastomers. Common examples of ionomers include polystyrene sulfonate, Nafion and Hycar. Synthesis Usually ionomer synthesis consists of two steps – the introduction of acid groups to the polymer backbone and the neutralization of some of the acid groups by a metal cation. In very rare cases, the groups introduced are already neutralized by a metal cation. The first step (introduction of acid groups) can be done in two ways; a neutral non-ionic monomer can be copolymerized with a monomer that contains pendant acid groups, or acid groups can be added to a non-ionic polymer through post-reaction modifications. For example, ethylene-methacrylic acid and sulfonated perfluorocarbon (Nafion) are synthesized through copolymerization, while polystyrene sulfonate is synthesized through post-reaction modifications. In most cases, the acid form of the copolymer is synthesized (i.e. 100% of the carboxylic acid groups remain protonated) and the ionomer is formed through subsequent neutralization by the appropriate metal cation. The identity of the neutralizing metal cation has an effect on the physical properties of the ionomer; the most commonly used metal cations (at least in academic research) are zinc, sodium, and magnesium. Neutralization, or ionomerization, can also be accomplished in two ways: the acid copolymer can be melt-mixed with a basic metal compound, or neutralization can be achieved through solution processes. The former method is preferred commercially. However, as commercial manufacturers are reluctant to share their procedures, little is known about the exact conditions of the melt-mixing neutralization process other than that hydroxides are generally used to provide the metal cation. The latter solution neutralization process is generally used in academic settings. The acid copolymer is dissolved and a basic salt with the appropriate metal cation is added to this solution. Where dissolution of the acid copolymer is difficult, simply swelling the polymer in the solvent is sufficient, though dissolving is always preferred.
Because basic salts are polar and are not soluble in the non-polar solvents used to dissolve most polymers, mixed solvents (e.g. 90:10 toluene/alcohol) are often used. Neutralization level must be determined after an ionomer is synthesized, as varying the neutralization level varies the morphological and physical properties of the ionomer. One method used to do this is to examine the peak heights of infrared vibrations of the acid form. However, there may be substantial error in determining peak height, especially since small amounts of water appear in the same wavenumber range. Titration of the acid groups is another method that can be used, though this is not possible in some systems. Surlyn Surlyn is the brand name of an ionomer resin created by DuPont, a copolymer of ethylene and methacrylic acid used as a coating and packaging material. DuPont neutralizes the acid with NaOH, yielding the sodium salt. Crystals of ethylene-methacrylic acid ionomers exhibit dual melting behavior. Application Golf Ball Covers: Ionomers are widely used for golf ball covers because of their impact resistance, toughness, and durability. The ionic crosslinks in the polymer structure allow the material to withstand the high forces of a golf swing while retaining its shape and performance over time, and the resilience provided by the ionic clusters helps the ball maintain its flight characteristics over a long lifespan. The material's excellent abrasion resistance also reduces surface wear, giving consistent performance across many rounds of play. Packaging Films: In the packaging industry, ionomers are prized for their combination of optical clarity, toughness, and sealing properties. They form strong, heat-sealable bonds, making them ideal for food packaging films, where durability and transparency are both important: the film protects the contents from external contaminants while offering a clear view of the product. Ionomer films are also resistant to punctures and tears, so the packaging remains intact during transportation and handling, and their resistance to oils and fats makes them particularly useful for packaging greasy or oily foods without degradation. Semipermeable Membranes: Ionomers are used to make semipermeable membranes for applications that require selective ion transport, including fuel cells and water purification systems. The ionic domains in the ionomer structure allow ions to pass through selectively while blocking other molecules, which makes ionomers well suited to proton exchange membranes (PEMs) in fuel cells; this selective ion transport is crucial for the efficiency and effectiveness of such devices, allowing controlled chemical reactions and energy production. In water purification, ionomer-based membranes can selectively remove contaminants while allowing pure water to pass through, contributing to safe and efficient filtration. Adhesives and Sealants: The strong adhesive properties and flexibility of ionomers make them useful in adhesives and sealants. They form strong bonds with a range of materials, including metals, plastics, and glass, which suits them to automotive, construction, and consumer goods applications. In sealants, ionomers provide excellent resistance to environmental factors such as moisture and temperature changes.
This ensures long-lasting performance even in harsh conditions, and the flexibility that ionomers retain is important in applications where materials expand or are subject to mechanical stress. Thermoplastic Elastomers: Ionomers are used as thermoplastic elastomers (TPEs), where their elasticity and ability to be remolded without significant degradation are advantageous. These materials can be stretched and deformed yet return to their original shape when the stress is released, making them useful in applications that require both flexibility and strength. Ionomer-based TPEs are found in a wide range of products, including footwear and medical devices, where comfort, durability, and resilience are critical. Their resistance to chemical and UV degradation also makes them well suited to outdoor applications, where long-term exposure to the elements is a concern. Coatings and Paints: Ionomers are used in coatings and paints, where their adhesion properties and resistance to environmental damage make surfaces more durable. In automotive and industrial coatings, ionomers create protective layers that resist corrosion, abrasion, and chemical exposure, and their ability to form smooth, uniform coatings makes them suitable for applications needing both aesthetic and functional surface protection. Some ionomer-based coatings also show self-healing behavior: small scratches can be repaired through thermal treatment, extending the lifespan of coated products and reducing maintenance costs. Biomedical Applications: Ionomers have potential applications in the biomedical field, including drug delivery systems and medical implants. Because they are biocompatible and can interact with biological tissues, they are suitable for devices that require the controlled release of drugs or that need to integrate with living tissue. Research is ongoing to explore the use of ionomers in innovative medical applications, where their unique properties could offer new solutions for healthcare challenges; for example, ionomer-based drug delivery systems can provide targeted therapy by controlling the release rate of medications, improving efficacy and reducing the side effects of treatments. Ion-Exchange Resins: Ionomers are used to make ion-exchange resins, which are important for water treatment and purification. Resins made from ionomer materials can selectively exchange ions in a solution, allowing them to remove unwanted contaminants such as heavy metals, or to soften water by exchanging calcium and magnesium ions for sodium or potassium ions. Ionomer-based resins are stable and durable, making them suitable for repeated use in industrial and household water treatment systems. Electrochemical Devices: In electrochemical devices, ionomers play a crucial role as solid electrolytes. Because they conduct ions while acting as an insulating barrier to electrons, they are ideal for use in batteries, supercapacitors, and fuel cells, and their stability under electrochemical conditions ensures long-term performance and efficiency. In fuel cells, ionomers are used in the membrane electrode assembly (MEA), where they facilitate the transport of protons from the anode to the cathode, enabling the generation of electricity. See also Nafion External links Ionomer primer with examples References Eisenberg, A.
and Kim, J.-S., Introduction to Ionomers, New York: Wiley, 1998. Grady, Brian P. "Review and Critical Analysis of the Morphology of Random Ionomers Across Many Length Scales." Polymer Engineering and Science 48 (2008): 1029–1051. Print. Spencer, M.W., M.D. Wetzel, C. Troeltzsch, and D.R. Paul. "Effects of Acid Neutralization on the Properties of K and Na Poly(ethylene-co-methacrylic Acid) Ionomers." Polymer 53 (2011): 569–580. Print. Plastics Polyelectrolytes Copolymers Salts Thermoplastics
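As a concrete footnote to the neutralization-level determination discussed under Synthesis above, a minimal sketch of the bookkeeping (all numbers are hypothetical, for illustration only):

```python
def degree_of_neutralization(total_acid_mmol_per_g, residual_acid_mmol_per_g):
    """Fraction of acid groups converted to the metal salt, estimated from
    titration of the residual (un-neutralized) acid groups."""
    return 1 - residual_acid_mmol_per_g / total_acid_mmol_per_g

# Hypothetical example: a copolymer with 0.70 mmol -COOH per gram before
# neutralization; titration of the finished ionomer finds 0.28 mmol/g left.
print(f"{degree_of_neutralization(0.70, 0.28):.0%} neutralized")  # -> 60% neutralized
```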
Ionomer
[ "Physics", "Chemistry" ]
2,308
[ "Amorphous solids", "Unsolved problems in physics", "Salts", "Plastics" ]
2,206,550
https://en.wikipedia.org/wiki/Amastigote
An amastigote is a protist cell that does not have visible external flagella or cilia. The term is used mainly to describe the replicating intracellular phase in the life-cycle of trypanosomes. It is also called the leishmanial stage, since in Leishmania it is the form the parasite takes in the vertebrate host, although it occurs in all trypanosome genera. References Kinetoplastids
Amastigote
[ "Biology" ]
95
[ "Eukaryotes", "Eukaryote stubs" ]
2,206,555
https://en.wikipedia.org/wiki/Annexin
Annexin is a common name for a group of cellular proteins. They are mostly found in eukaryotic organisms (animals, plants and fungi). In humans, the annexins are found inside the cell. However, some annexins (Annexin A1, Annexin A2, and Annexin A5) can be secreted from the cytoplasm to outside cellular environments, such as blood. Annexin is also known as lipocortin. Lipocortins suppress phospholipase A2. Increased expression of the gene coding for annexin-1 is one of the mechanisms by which glucocorticoids (such as cortisol) inhibit inflammation. Introduction The protein family of annexins has continued to grow since their association with intracellular membranes was first reported in 1977. The recognition that these proteins were members of a broad family first came from protein sequence comparisons and their cross-reactivity with antibodies. One of the researchers involved (Geisow) coined the name annexin shortly afterwards. As of 2002, 160 annexin proteins had been identified in 65 different species. The criteria that a protein has to meet to be classified as an annexin are: it has to be capable of binding negatively charged phospholipids in a calcium dependent manner and must contain a 70 amino acid repeat sequence called an annexin repeat. Several proteins combine an annexin domain with other domains, such as gelsolin. The basic structure of an annexin is composed of two major domains. The first is located at the COOH-terminal and is called the “core” region. The second is located at the NH2 terminal and is called the “head” region. The core region consists of an alpha helical disk. The convex side of this disk has type 2 calcium-binding sites. They are important for allowing interaction with the phospholipids at the plasma membrane. The N terminal region is located on the concave side of the core region and is important for providing a binding site for cytoplasmic proteins. In some annexins it can become phosphorylated, which can cause affinity changes for calcium in the core region or alter cytoplasmic protein interactions. Annexins are important in various cellular and physiological processes such as providing a membrane scaffold, which is relevant to changes in the cell's shape. Annexins have also been shown to be involved in the trafficking and organization of vesicles, exocytosis, endocytosis and calcium ion channel formation. Annexins have also been found outside the cell in the extracellular space and have been linked to fibrinolysis, coagulation, inflammation and apoptosis. The first study to identify annexins was published by Creutz et al. (1978). These authors used bovine adrenal glands and identified a calcium dependent protein that was responsible for aggregation of granules amongst each other and the plasma membrane. This protein was given the name synexin, which comes from the Greek word “synexis” meaning “meeting”. Structure Several subfamilies of annexins have been identified based on structural and functional differences. However, all annexins share a common organizational theme that involves two distinct regions, an annexin core and an amino (N)-terminus. The annexin core is highly conserved across the annexin family, while the N-terminus varies greatly; this variability of the N-terminus is the physical basis for the differences between annexin subfamilies. The 310 amino acid annexin core has four annexin repeats, each composed of 5 alpha-helices. The exception is annexin A-VI, which has two annexin core domains connected by a flexible linker.
A-VI was produced via duplication and fusion of the genes for A-V and A-X and therefore will not be discussed at length. The four annexin repeats produce a curved protein and allow functional differences based on the structure of the curve. The concave side of the annexin core interacts with the N-terminus and cytosolic second messengers, while the convex side of the annexin contains calcium binding sites. Each annexin core contains one type II, also known as an annexin type, calcium binding site; these binding sites are the typical location of ionic membrane interactions. However, other methods of membrane connections are possible. For example, A-V exposes a tryptophan residue, upon calcium binding, which can interact with the hydrocarbon chains of the lipid bilayer. The diverse structure of the N-terminus confers specificity to annexin intracellular signaling. In all annexins the N-terminus is thought to sit inside the concave side of the annexin core and folds separately from the rest of the protein. The structure of this region can be divided into two broad categories, short and long N-termini. A short N-terminus, as seen in A-III, can consist of 16 or fewer amino acids and travels along the concave protein core interacting via hydrogen bonds. Short N-termini are thought to stabilize the annexin complex in order to increase calcium binding and can be the sites for post-translational modifications. Long N-termini can contain up to 40 residues and have a more complex role in annexin signaling. For example, in A-I the N-terminus folds into an amphipathic alpha-helix and inserts into the protein core, displacing helix D of annexin repeat III. However, when calcium binds, the N-terminus is pushed from the annexin core by conformational changes within the protein. Therefore, the N-terminus can interact with other proteins, notably the S-100 protein family, and includes phosphorylation sites which allow for further signaling. A-II can also use its long N-terminus to form a heterotrimer between a S100 protein and two peripheral annexins. The structural diversity of annexins is the basis for the functional range of these complex, intracellular messengers. Cellular localization Membrane Annexins are characterized by their calcium dependent ability to bind to negatively charged phospholipids (i.e. membrane walls). They are located in some but not all of the membranous surfaces within a cell, which would be evidence of a heterogeneous distribution of Ca²⁺ within the cell. Nuclei Annexin species (II, V, XI) have been found within the nucleus. Tyrosine kinase activity has been shown to increase the concentrations of annexins II and V within the nucleus. Annexin XI is predominantly located within the nucleus and is absent from the nucleoli. During prophase, annexin XI will translocate to the nuclear envelope. Bone Annexins are abundant in bone matrix vesicles, and are speculated to play a role in Ca²⁺ entry into vesicles during hydroxyapatite formation. The subject area has not been thoroughly studied; however, it has been speculated that annexins may be involved in closing the neck of the matrix vesicle as it is endocytosed. Role in vesicle transport Exocytosis Annexins have been observed to play a role along the exocytotic pathway, specifically in the later stages, near or at the plasma membrane. Evidence that annexins or annexin-like proteins are involved in exocytosis has been found in lower organisms, such as the Paramecium.
Through antibody recognition, there is evidence that annexin-like proteins are involved in the positioning and attachment of secretory organelles in the organism Paramecium. Annexin VII was the first annexin to be discovered, during a search for proteins that promote the contact and fusion of chromaffin granules. In vitro studies, however, have shown that annexin VII does not promote the fusion of membranes, only their close attachment to one another. Endocytosis Annexins have been found to be involved in transport and sorting during endocytotic events. Annexin I is a substrate of the EGF (epidermal growth factor) receptor tyrosine kinase and becomes phosphorylated on its N-terminus when the receptor is internalized. Unique endosome targeting sequences have been found in the N-terminus of annexins I and II, which would be useful in the sorting of endocytotic vesicles. Annexins are present in several different endocytotic processes. Annexin VI is thought to be involved in clathrin-coated budding events, while annexin II participates in both cholesteryl ester internalization and the biogenesis of multi-vesicular endosomes. Membrane scaffolding Annexins can function as scaffolding proteins to anchor other proteins to the cell membrane. Annexins assemble as trimers, and this trimer formation is facilitated by calcium influx and efficient membrane binding. The trimer assembly is often stabilized by other membrane-bound annexin cores in the vicinity. Eventually, enough annexin trimers assemble and bind the cell membrane to induce the formation of membrane-bound annexin networks. These networks can induce indentation and vesicle budding during an exocytosis event. While different types of annexins can function as membrane scaffolds, annexin A-V is the most abundant membrane-bound annexin scaffold. Annexin A-V can form 2-dimensional networks when bound to the phosphatidylserine unit of the membrane. Annexin A-V is effective in stabilizing changes in cell shape during endocytosis and exocytosis, as well as other cell membrane processes. Alternatively, annexins A-I and A-II bind phosphatidylserine and phosphatidylcholine units in the cell membrane, and are often found forming monolayered clusters that lack a definite shape. In addition, annexins A-I and A-II have been shown to bind PIP2 (phosphatidylinositol-4,5-bisphosphate) in the cell membrane and facilitate actin assembly near the membrane. More recently, annexin scaffolding functions have been linked to medical applications. These medical implications have been uncovered with in vivo studies in which the path of a fertilized egg is tracked to the uterus. After fertilization, the egg must enter a canal whose opening is up to five times smaller than the diameter of the egg. Once the fertilized egg has passed through the opening, annexins are believed to promote membrane folding in an accordion-like fashion to return the stretched membrane back to its original form. Though this was discovered in the nematode annexin NEX-1, it is believed that a similar mechanism takes place in humans and other mammals.
In addition, annexin A-II can bind other membrane lipids such as cholesterol, with this binding made possible by the influx of calcium ions. The binding of annexin A-II to lipids in the bilayer orchestrates the organization of lipid rafts in the bilayer at sites of actin assembly. In fact, annexin A-II is itself an actin-binding protein and can therefore form a region of interaction with filamentous actin. In turn, this allows for further cell-cell interactions between monolayers of cells like epithelial and endothelial cells. In addition to annexin A-II, annexin A-XI has also been shown to organize cell membrane properties. Annexin A-XI is believed to be highly involved in the last stage of mitosis: cytokinesis. It is in this stage that daughter cells separate from one another because annexin A-XI inserts a new membrane that is believed to be required for abscission. Without annexin A-XI, it is believed that the daughter cells will not fully separate and may undergo apoptosis. Clinical significance Apoptosis and inflammation Annexin A-I seems to be one of the annexins most heavily involved in anti-inflammatory responses. Upon infection or damage to tissues, annexin A-I is believed to reduce inflammation of tissues by interacting with annexin A-I receptors on leukocytes. In turn, the activation of these receptors functions to send the leukocytes to the site of infection and target the source of inflammation directly. As a result, this inhibits leukocyte (specifically neutrophil) extravasation and down-regulates the magnitude of the inflammatory response. Without annexin A-I mediating this response, neutrophil extravasation is highly active and worsens the inflammatory response in damaged or infected tissues. Annexin A-I has also been implicated in apoptotic mechanisms in the cell. When expressed on the surface of neutrophils, annexin A-I promotes pro-apoptotic mechanisms; alternatively, when expressed on the surface of cells that have undergone apoptosis, it promotes their removal. Moreover, annexin A-I has further medical implications in the treatment of cancer: it can be used as a cell surface protein to mark some forms of tumors, which can then be targeted by various immunotherapies with antibodies against annexin A-I. Coagulation Annexin A-V is the annexin most heavily involved in mechanisms of coagulation. Like other annexin types, annexin A-V can be expressed on the cell surface, where it forms 2-dimensional crystals that protect the lipids of the cell membrane from involvement in coagulation mechanisms. Medically speaking, phospholipids can often be recruited in autoimmune responses, most commonly observed in cases of fetal loss during pregnancy. In such cases, antibodies against annexin A-V destroy its 2-dimensional crystal structure and uncover the phospholipids in the membrane, making them available for contribution to various coagulation mechanisms. Fibrinolysis While several annexins may be involved in mechanisms of fibrinolysis, annexin A-II is the most prominent in mediating these responses. The expression of annexin A-II on the cell surface is believed to serve as a receptor for plasminogen, which functions to produce plasmin. Plasmin initiates fibrinolysis by degrading fibrin. The destruction of fibrin is a natural preventative measure because it prevents the formation of blood clots by fibrin networks.
Annexin A-II has medical implications because it can be utilized in treatments for cardiovascular diseases in which clot formation by fibrin networks plays a central role. Types/subfamilies Annexin, type I Annexin, type II Annexin, type III Annexin, type IV Annexin, type V Annexin, type VI Alpha giardin Annexin, type X Annexin, type VIII Annexin, type XXXI Annexin, type fungal XIV Annexin, type plant Annexin, type XIII Annexin, type VII Annexin like protein Annexin XI Human proteins containing this domain ANXA1; ANXA10; ANXA11; ANXA13; ANXA2; ANXA3; ANXA4; ANXA5; ANXA6; ANXA7; ANXA8; ANXA8L1; ANXA8L2; ANXA9 References Further reading External links European Annexin Homepage, acquired on 20 August 2005 - Calculated spatial positions of annexins in membranes (the initially bound state) Annexins repeated domain in PROSITE Protein domains Protein families Peripheral membrane proteins
Annexin
[ "Biology" ]
3,305
[ "Protein families", "Protein domains", "Protein classification" ]
2,206,580
https://en.wikipedia.org/wiki/Demining
Demining or mine clearance is the process of removing land mines from an area. In military operations, the object is to rapidly clear a path through a minefield, and this is often done with devices such as mine plows and blast waves. By contrast, the goal of humanitarian demining is to remove all of the landmines to a given depth and make the land safe for human use. Specially trained dogs are also used to narrow down the search and verify that an area is cleared. Mechanical devices such as flails and excavators are sometimes used to clear mines. A great variety of methods for detecting landmines have been studied. These include electromagnetic methods, one of which (ground penetrating radar) has been employed in tandem with metal detectors. Acoustic methods can sense the cavity created by mine casings. Sensors have been developed to detect vapor leaking from landmines. Animals such as rats and mongooses can safely move over a minefield and detect mines, and animals can also be used to screen air samples over potential minefields. Bees, plants, and bacteria are also potentially useful. Explosives in landmines can also be detected directly using nuclear quadrupole resonance and neutron probes. Detection and removal of landmines is a dangerous activity, and personal protective equipment does not protect against all types of landmine. Once found, mines are generally defused or blown up with more explosives, but it is possible to destroy them with certain chemicals or extreme heat without making them explode. Land mines Land mines overlap with other categories of explosive devices, including unexploded ordnance (UXOs), booby traps and improvised explosive devices (IEDs). In particular, most mines are factory-built, but the definition of landmine can include "artisanal" (improvised) mines. Thus, the United Nations Mine Action Service includes mitigation of IEDs in its mission. Injuries from IEDs are much more serious, but factory-built landmines are longer lasting and often more plentiful. Over 1999–2016, yearly casualties from landmines and unexploded ordnance have varied between 9,228 and 3,450. In 2016, 78% of the casualties were suffered by civilians (42% by children), 20% by military and security personnel and 2% by deminers. There are two main categories of land mine: anti-tank and anti-personnel. Anti-tank mines are designed to damage tanks or other vehicles; they are usually larger and require at least of force to trigger, so infantry will not set them off. Anti-personnel mines are designed to maim or kill soldiers. There are over 350 types, but they come in two main groups: blast and fragmentation. Blast mines are buried close to the surface and triggered by pressure. A weight between , the weight of a small child, is usually enough to set one off. They are usually cylindrical with a diameter of and a height of . Fragmentation mines are designed to explode outwards resulting in casualties as much as 100 metres away. A subtype of fragmentation mines called "bounding" mines are specifically designed to launch upward off the ground before detonating. Their size varies and they are mostly metal, so they are easily detected by metal detectors. However, they are normally activated by tripwires that can extend up to 20 metres away from the mine, so tripwire detection is essential. The casing of blast mines may be made of metal, wood, or plastic. Some mines, referred to as minimum metal mines, are constructed with as little metal as possible – as little as – to make them difficult to detect. 
Common explosives used in land mines include TNT (), RDX (), pentaerythritol tetranitrate (PETN, ), HMX () and ammonium nitrate (). Land mines are found in about 60 countries. Deminers must cope with environments including deserts, jungles, and cities. Antitank mines are buried deeply while antipersonnel mines are usually within 6 inches of the surface. Mines may be placed by hand or scattered from airplanes, in regular or irregular patterns. In urban environments, fragments of destroyed buildings may hide them; in rural environments, soil erosion may cover them or displace them. Detectors can be confused by high-metal soils and junk. Thus, demining presents a considerable engineering challenge. Goals Military mine clearance In military demining, the goal is to create a safe path for troops and equipment. The soldiers who carry out this task are known as combat engineers, sappers, or pioneers. Sometimes soldiers may bypass a minefield, but some bypasses are designed to concentrate advancing troops into a killing zone. If engineers need to clear a path (an operation known as breaching), they may be under heavy fire and need supporting fire to suppress the enemy or obscure the site with smoke. Some risk of casualties is accepted, but engineers under heavy fire may need to clear an obstacle in 7–10 minutes to avoid excessive casualties, so manual breaching may be too slow. They may need to operate in bad weather or at night. Good intelligence is needed on factors like the locations of minefields, types of mines and how they were laid, their density and pattern, ground conditions and the size and location of enemy defenses. Humanitarian demining Humanitarian demining is a component of mine action, a broad effort to reduce the social, economic and environmental damage of mines. The other "pillars" of mine action are risk education, victim assistance, stockpile destruction, and advocacy against the use of anti-personnel mines and cluster munitions. Humanitarian demining differs from military demining in several ways. Military demining operations require speed and reliability under combat conditions to safely bypass a minefield, so it is more acceptable if some mines are missed in the process. Humanitarian demining aims to reduce risk for deminers and civilians as much as possible by removing (ideally) all landmines, and demining work can usually be halted temporarily if unfavorable circumstances arise. In some situations, it is a necessary precondition for other humanitarian programs. Normally, a national mine action authority (NMAA) is given the primary responsibility for mine action, which it manages through a mine action center (MAC). This coordinates the efforts of other players including government agencies, non-governmental organizations (NGOs), commercial companies, and militaries. The International Mine Action Standards (IMAS) provide a framework for mine action. While not legally binding in themselves, they are intended as guidelines for countries to develop their own standards. The IMAS also draw on international treaties including the Mine Ban Treaty, which has provisions for destroying stockpiles and clearing minefields. In the 1990s, before the IMAS, the United Nations required deminers to clear 99.6% of all mines and explosive ordnance. However, professional deminers found that unacceptably lax because they would be responsible if any mines later harmed civilians.
In contrast, the IMAS call for the clearance of all mines and UXOs from a given area to a specified depth. Contamination and clearance As of 2017, antipersonnel mines are known to contaminate 61 states and suspected in another 10. The most heavily contaminated (with more than 100 square kilometres of minefield each) are Afghanistan, Angola, Azerbaijan, Bosnia and Herzegovina, Cambodia, Chad, Iraq, Thailand, Turkey, and Ukraine. Parties to the Mine Ban Treaty are required to clear all mines within 10 years of joining the treaty, and as of 2017, 28 countries had succeeded. However, several countries were not on track to meet their deadline or had requested extensions. A 2003 RAND Corporation report estimated that there are 45–50 million mines and 100,000 are cleared each year, so at present rates it would take about 500 years to clear them all. Another 1.9 million (19 more years of clearance) are added each year. However, there is a large uncertainty in the total number and the area affected. Records by armed forces are often incomplete or nonexistent, and many mines were dropped by airplane. Various natural events such as floods can move mines around and new mines continue to be laid. When minefields are cleared, the actual number of mines tends to be far smaller than the initial estimate; for example, early estimates for Mozambique were several million, but after most of the clearing had been done only 140,000 mines had been found. Thus, it may be more accurate to say that there are millions of landmines, not tens of millions. Before minefields can be cleared, they need to be located. This begins with non-technical survey, gathering records of mine placement and accidents from mines, interviewing former combatants and locals, noting locations of warning signs and unused agricultural land, and going to look at possible sites. This is supplemented by technical survey, where potentially hazardous areas are physically explored to improve knowledge of their boundaries. A good survey can greatly reduce the time required to clear an area; in one study of 15 countries, less than 3 percent of the area cleared actually contained mines. Economics By one United Nations estimate, the cost to produce a landmine is between $3 and $75 while the cost of removing it is between $300 and $1000. However, such estimates may be misleading. The cost of clearance can vary considerably since it depends on the terrain, the ground cover (dense foliage makes it more difficult) and the method; and some areas that are checked for mines turn out to have none. Although the Mine Ban Treaty gives each state the primary responsibility to clear its own mines, other states that can help are required to do so. In 2016, 31 donors (led by the United States with $152.1 million and the European Union with $73.8 million) contributed a total of $479.5 million to mine action, of which $343.2 million went to clearance and risk education. The top 5 recipient states (Iraq, Afghanistan, Croatia, Cambodia and Laos) received 54% of this support. Conventional detection methods The conventional method of landmine detection was developed in World War II and has changed little since then. It involves a metal detector, prodding instrument and tripwire feeler. Deminers clear an area of vegetation and then divide it into lanes. A deminer advances along a lane, swinging a metal detector close to the ground. When metal is detected, the deminer prods the object with a stick or stainless steel probe to determine whether it is a mine. 
If a mine is found, it must be deactivated. Although conventional demining is slow (5–150 square metres cleared per day), it is reliable, so it is still the most commonly used method. Integration with other methods such as explosive sniffing dogs can increase its reliability. Demining is a dangerous occupation. If a deminer prods a mine too hard or fails to detect it, the deminer can suffer injury or death, and the large number of false positives from metal detectors can make deminers tired and careless. According to one report, there is an accident for every 1000–2000 mines cleared. 35 percent of the accidents occur during mine excavation and 24 percent result from missed mines. Mine layers often use anti-demining techniques, including anti-lift devices, booby traps and two or three mines placed on top of each other. Anti-personnel mines are often triggered by tripwires. Prodders In World War II, the primary method of locating mines was by prodding the ground with a pointed stick or bayonet. Modern tools for prodding range from a military prodder to a screwdriver or makeshift object. They are inserted at shallow angles (30 degrees or less) to probe the sides of potential mines, avoiding the triggering mechanism that is usually on top. This method requires the deminer's head and hands to be near the mine. Rakes may also be used when the terrain is soft (e.g., sandy beaches); the deminer is further away from the mine and the rake can be used to either prod or scoop up mines from beneath. Metal detectors Metal detectors used by deminers work on the same principles as detectors used in World War I and refined during World War II. A practical design by Polish officer Józef Kosacki, known as the Polish mine detector, was used to clear German mine fields during the Second Battle of El Alamein. Although metal detectors have become much lighter, more sensitive and easier to operate than the early models, the basic principle is still electromagnetic induction. Current through a wire coil produces a time-varying magnetic field that in turn induces currents in conductive objects in the ground. In turn, these currents generate a magnetic field that induces currents in a receiver coil, and the resulting changes in electric potential can be used to detect metal objects. Similar devices are used by hobbyists. Nearly all mines contain enough metal to be detectable. No detector finds all mines, and the performance depends on factors such as the soil, type of mine and depth of burial. An international study in 2001 found that the most effective detector found 91 percent of the test mines in clay soil but only 71 percent in iron-rich soil. The worst detector found only 11 percent even in clay soils. The results can be improved by multiple passes. An even greater problem is the number of false positives. Minefields contain many other fragments of metal, including shrapnel, bullet casings, and metallic minerals. 100–1000 such objects are found for every real mine. The greater the sensitivity, the more false positives. The Cambodian Mine Action Centre found that, over a six-year period, 99.6 percent of the time (a total of 23 million hours) was spent digging up scrap. Dogs Dogs have been used in demining since World War II. They are up to a million times more sensitive to chemicals than humans, but their true capability is unknown because they can sense explosives at lower concentrations than the best chemical detectors. 
Well-trained mine-detection dogs (MDDs) can sniff out explosive chemicals like TNT, monofilament lines used in tripwires, and metallic wire used in booby traps and mines. The area they can clear ranges from a few hundred to a thousand meters per day, depending on several factors. In particular, an unfavorable climate or thick vegetation can impede them, and they can get confused if there is too high a density of mines. The detection rate is also variable, so the International Mine Action Standards require an area to be covered by two dogs before it can be declared safe. Preferred breeds for MDDs are the German Shepherd and Belgian Malinois, although some Labrador Retrievers and Beagles are used. They cost about $10,000 each to train. This cost includes 8–10 weeks of initial training. Another 8–10 weeks is needed in the country where the dog is deployed to accustom the dog to its handler, the soil and climate, and the type of explosives. MDDs were first deployed in WWII. They have been extensively used in Afghanistan, which still has one of the largest programs. Over 900 are used in 24 countries. Their preferred role is for verifying that an area is cleared and narrowing down the region to be searched. They are also used in Remote Explosive Scent Tracing (REST). This involves collecting air samples from stretches of land about 100 meters long and having dogs or rats sniff them to determine whether the area needs clearing. Mechanical Mine clearing machines Mechanical demining makes use of vehicles with devices such as tillers, flails, rollers, and excavation. Used for military operations as far back as World War I, they were initially "cumbersome, unreliable and under-powered", but have been improved with additional armor, safer cabin designs, reliable power trains, Global Positioning System logging systems and remote control. They are now primarily used in humanitarian demining for technical surveys, to prepare the ground (removing vegetation and tripwires), and to detonate explosives. Tiller systems consist of a heavy drum fitted with teeth or bits that are intended to destroy or detonate mines to a given depth. However, mines can be forced downwards or collected in a "bow wave" in front of the roller. They have trouble with steep slopes, wet conditions and large stones; light vegetation improves the performance, but thicker vegetation inhibits it. Flails, first used on Sherman tanks, have an extended arm with a rotating drum to which are attached chains with weights on the end. The chains act like swinging hammers. The strike force is enough to set off mines, smash them to pieces, damage the firing mechanism or throw the mine up. A blast shield protects the driver and the cabin is designed to deflect projectiles. Mine flail effectiveness can approach 100% in ideal conditions, but clearance rates as low as 50–60% have been reported. First used in World War I with tanks, rollers are designed to detonate mines; blast-resistant vehicles with steel wheels, such as the Casspir, serve a similar purpose. However, those used in humanitarian demining cannot withstand the blast from an anti-tank mine, so their use must be preceded by careful surveying. Unlike flails and tillers, they only destroy functioning mines, and even those do not always explode. Excavation, the removal of soil to a given depth, is done using modified construction vehicles such as bulldozers, excavators, front-end loaders, tractors and soil sifters. Armor plates and reinforced glass are added. 
Removed soil is sifted and inspected. It can also be fed through an industrial rock crusher, which is robust enough to withstand blasts from antipersonnel mines. Excavation is a reliable way of clearing an area to a depth that other mechanical systems cannot reach, and it has been used in several countries. In particular, the HALO Trust estimates that their excavation program destroys mines about 7 times faster than manual deminers. A 2004 study by the Geneva International Centre for Humanitarian Demining concluded that the data on the performance of mechanical demining systems was poor, and perhaps as a result, they were not being used as the primary clearance system (with the exception of excavators). However, by 2014, confidence in these systems had increased to the point where some deminers were using them as primary clearance systems. Mechanical demining techniques have some challenges. In steep, undulating terrain they may skip over some of the ground. Operators can be endangered by defective mines or mines with delay charges that detonate after the blast shield has passed over; shaped charge mines that are capable of piercing most armor; and intelligent mines that are off to the side and use a variety of sensors to decide when to fire a rocket at an armored vehicle. One answer is to use remote controlled vehicles such as the Caterpillar D7 MCAP (United States) and the Caterpillar D9 (Israel). Improvised techniques are sometimes used by people who need the use of land before formal demining. In parts of Ukraine mined during fighting associated with the Russian invasion that started in 2022, farmers who need to use the land improvised a mine-clearing machine by welding parts of rugged abandoned Russian fighting vehicles such as tanks on to an old tractor and harrow, remotely controlled by a battery-powered controller. Smart prodders Despite advances in mine detection technology, "mine detection boils down to rows of nervous people wearing blast-resistant clothing and creeping laboriously across a field, prodding the ground ahead to check for buried objects." Often, especially when the soil is hard, they unwittingly apply too much force and risk detonating a mine. Prodders have been developed that provide feedback on the amount of force. Detection methods under development Universities, corporations and government bodies have been developing a great variety of methods for detecting mines. However, it is difficult to compare their performance. One quantitative measure is a receiver operating characteristic (ROC) curve, which measures the tradeoff between false positives and false negatives. Ideally, there should be a high probability of detection with few false positives, but such curves have not been obtained for most of the technologies. Also, even if field tests were available for all technologies, they may not be comparable because performance depends on a myriad of factors, including the size, shape and composition of the mines; their depth and orientation; the type of explosive; environmental conditions; and performance of human operators. Most field tests have taken place in conditions that favor the performance of the technology, leading to overestimates of their performance. Electromagnetic Ground-penetrating radar Ground-penetrating radar (GPR) probes the ground using radar. A GPR device emits radio waves; these waves are reflected at discontinuities in permittivity and one or more antennae pick up the return signal. 
The signal is analyzed to determine the shapes and locations of the reflectors. Discontinuities occur between materials with different dielectric constants, such as a landmine, a rock, and soil. Unlike metal detectors, GPR devices can detect nonmetallic mine casings. However, radio waves have wavelengths that are comparable to the dimensions of landmines, so the images have low resolution. The wavelength can be varied; smaller wavelengths give better image quality but cannot penetrate as far into the soil. This tradeoff in performance depends on soil properties and other environmental factors as well as the properties of the mines. In particular, attenuation in wet soils can make it difficult to spot mines deeper than , while low-frequency radar will "bounce" off small plastic mines near the surface. Although GPR is a mature technology for other applications such as searching for archaeological artifacts, the effect of those factors on mine detection is still not adequately understood, and GPR is not widely used for demining (a short worked example of the depth calculation appears after this passage). GPR can be used with a metal detector and data-fusion algorithms to greatly reduce the false alarms generated by metallic clutter. One such dual-sensor device, the Handheld Standoff Mine Detection System (HSTAMIDS), became the standard mine detector of the U.S. Army in 2006. For humanitarian demining, it was tested in Cambodia for a variety of soil conditions and mine types, detecting 5,610 mines and correctly identifying 96.5% of the clutter. Another dual detector developed by ERA Technology, the Cobham VMR3 Minehound, had similar success in Bosnia, Cambodia and Angola. These dual-sensor devices are relatively light and cheap, and the HALO Trust has begun to deploy more of them around the world. Infrared and hyperspectral Soil absorbs radiation from the Sun and is heated, with a resulting change in the infrared radiation that it emits. Landmines are better insulators than soil. As a result, the soil overhead tends to heat faster during the day and cool faster at night. Thermography uses infrared sensors to detect anomalies in the heating and cooling cycle. The effect can be enhanced using a heat source. The act of burying a mine also affects the soil properties, with small particles tending to collect near the surface. This tends to suppress the frequency-dependent characteristics that are evident in the larger particles. Hyperspectral imaging, which senses dozens of frequency bands ranging from visible light to long-wave infrared, can detect this effect. Finally, polarized light reflecting off man-made materials tends to remain polarized while natural materials depolarize it; the difference can be seen using a polarimeter. The above methods can be used from a safe distance, including on airborne platforms. The detector technology is well developed and the main challenge is to process and interpret the images. The algorithms are underdeveloped and have trouble coping with the extreme dependence of performance on environmental conditions. Many of the surface effects are strongest just after the mine is buried and are soon removed by weathering. Electrical impedance tomography Electrical impedance tomography (EIT) maps out the electrical conductivity of the ground using a two-dimensional grid of electrodes. Pairs of electrodes receive a small current and the resulting voltages are measured on the remaining electrodes. The data are analyzed to construct a map of the conductivity. Both metallic and non-metallic mines will show up as anomalies.
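Returning briefly to ground-penetrating radar, as promised: a reflector's depth follows from the two-way travel time of the pulse and the wave speed in the ground, v = c/√εr, where εr is the soil's relative permittivity. A minimal sketch (the permittivity and timing values are illustrative, not taken from the text):

```python
import math

C = 3.0e8  # speed of light in vacuum, m/s

def reflector_depth(two_way_time_s, rel_permittivity):
    """Depth of a radar reflector from its two-way travel time.
    Wave speed in the ground is c / sqrt(relative permittivity);
    divide by 2 because the pulse travels down and back."""
    v = C / math.sqrt(rel_permittivity)
    return v * two_way_time_s / 2

# Illustrative soils: dry sand (eps_r ~ 4) vs. wet soil (eps_r ~ 25).
for eps_r in (4, 25):
    d = reflector_depth(2e-9, eps_r)  # a 2 ns echo
    print(f"eps_r = {eps_r}: depth ~ {d * 100:.0f} cm")
# Higher permittivity (wet soil) means slower waves, so the same echo
# time corresponds to a shallower reflector.
```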
Unlike most other methods, EIT works best in wet conditions, so it serves as a useful complement to them. However, the electrodes must be planted in the ground, which risks setting off a mine, and the method can only detect mines near the surface. X-ray backscatter In X-ray backscatter, an area is irradiated with X-rays (photons with wavelengths between 0.01 and 10 nanometres) and the photons that are scattered back are detected. Metals strongly absorb X-rays and reflect little back, while organic materials absorb little and reflect a lot. Methods that use collimators to narrow the beams are not suitable for demining because the collimators are heavy and high-power sources are required. The alternative is to use wide beams and deconvolve the signal using spatial filters. The medical industry has driven improvements in X-ray technology, so portable X-ray generators are available. In principle, the short wavelength would allow high-resolution images, but imaging may take too long because the intensity must be kept low to limit human exposure to the radiation. Also, only mines less than 10 centimetres deep would be imaged. Explosive vapor detection A buried mine will almost always leak explosives through the casing. About 95 percent of this will be adsorbed by the soil; the other 5 percent will mostly dissolve in water and be transported away. If it gets to the surface, it leaves a chemical signature. TNT biodegrades within a few days in soil, but an impurity, 2,4-dinitrotoluene (2,4-DNT), lasts much longer and has a high vapor pressure. Thus, it is the primary target for chemical detection. However, the concentrations are very small, particularly in dry conditions. A reliable vapor detection system needs to detect 10⁻¹⁸ grams of 2,4-DNT per millilitre of air in very dry soil, or 10⁻¹⁵ grams per millilitre in moist soil. Biological detectors are very effective, but some chemical sensors are being developed. Honey bees Honey bees can be used to locate mines in two ways: passive sampling and active detection. In passive sampling, their mop-like hairs, which are electrostatically charged, collect a variety of particles including chemicals leaking from explosives. The chemicals are also present in water that the bees bring back and air that they breathe. Methods such as solid phase microextraction, sorbent sol-gels, gas chromatography and mass spectrometry can be used to identify explosive chemicals in the hive. Honey bees can also be trained, in 1–2 days, to associate the smell of an explosive with food. In field trials, they detected concentrations of parts per trillion with a detection probability of 97–99 percent and false positives of less than 1 percent. When targets consisting of small amounts of 2,4-DNT mixed with sand were placed, the bees detected vapor plumes several meters away and followed them to the source. Bees make thousands of foraging flights per day, and over time high concentrations of bees occur over targets. The most challenging issue is tracking them, since a bee can fly 3–5 kilometres before returning to the hive. However, tests using lidar (a laser scanning technique) have been promising. Bees do not fly at night, in heavy rain or wind, or in temperatures below , but the performance of dogs is also limited under these conditions. So far, most tests have been conducted in dry conditions in open terrain, so the effect of vegetation is not known.
Tests have commenced in real minefields in Croatia and the results are promising, although after about three days the bees must be retrained because they stop getting food rewards from the mines. Rats Like dogs, giant pouched rats are being trained to sniff out chemicals like TNT in landmines. A Belgian NGO, APOPO, trains rats in Tanzania at a cost of $6,000 per rat. These rats, nicknamed "HeroRATS", have been deployed in Mozambique and Cambodia. APOPO credits the rats with clearing more than 100,000 mines. Rats have the advantage of far lower body mass than humans or dogs, so they are less likely to set off mines. They are just smart enough to learn repetitive tasks but not smart enough to get bored; and unlike dogs, they do not bond with their trainers, so they are easier to transfer between handlers. They have far fewer false positives than metal detectors, which respond to any form of metal, so in a day they can cover an area that would take a metal detector two weeks. Other mammals In Sri Lanka, dogs are an expensive option for mine detection because they cannot be trained locally. The Sri Lankan Army Corps of Engineers has been conducting research on the use of the mongoose for mine detection, with promising initial results. Engineer Thrishantha Nanayakkara and colleagues at the University of Moratuwa in Sri Lanka have been developing a method in which a mongoose is guided by a remote-controlled robot. During the Angolan Civil War, elephants fled to neighboring countries. After the war ended in 2002, they started returning, but Angola was littered with millions of landmines. A biologist noticed that the elephants soon learned to avoid them. In a study in South Africa, researchers found that some elephants could detect TNT samples with high sensitivity, missing only one out of 97 samples. They were 5% more likely than dogs to indicate the presence of TNT and 6% less likely to miss a sample (the more important measure of success). While researchers do not plan to send elephants to minefields, the animals could sniff samples collected by unmanned vehicles in a preliminary screening of potential minefields. Plants Thale cress, a member of the mustard family and one of the most-studied plants in the world, normally turns red under harsh conditions. Using a combination of natural mutations and genetic manipulation, scientists from the Danish biotechnology company Aresa Biodetection created a strain that changes color only in response to nitrate and nitrite, chemicals that are released when TNT breaks down. The plants would aid demining by indicating the presence of mines through color change, and could be sown either from aircraft or by people walking through demined corridors in minefields. In September 2008, Aresa Biodetection ceased development of the method, but in 2012 a group at Cairo University announced plans for large-scale testing of a method that would combine detection using Arabidopsis with bacteria that would corrode metal in mines, and rose periwinkle, sugar beet, or tobacco plants that would absorb nitrogen from the TNT that was released. An inherent problem with sensing nitrates and nitrites is that they are already present in soil naturally. There are no natural chemical sensors for TNT, so some researchers are attempting to modify existing receptors so that they respond to TNT-derived chemicals that do not occur naturally. Bacteria A bacterium, known as a bioreporter, has been genetically engineered to fluoresce under ultraviolet light in the presence of TNT.
Tests involving spraying such bacteria over a simulated minefield successfully located mines. In the field, this method could allow for searching hundreds of acres in a few hours, which is much faster than other techniques, and could be used on a variety of terrain types. While there are some false positives (especially near plants and water drainage), even three ounces of TNT were detectable using these bacteria. Unfortunately, there is no strain of bacteria capable of detecting RDX, another common explosive, and the bacteria may not be visible under desert conditions. Also, well-constructed munitions that have not had time to corrode may be undetectable using this method. Chemical As part of the "Dog's nose" program run by the Defense Advanced Research Projects Agency (DARPA), several kinds of non-biological detectors were developed in an attempt to find a cheap alternative to dogs. These include spectroscopic, piezoelectric, electrochemical, and fluorescent detectors. Of these, the fluorescent detector has the lowest detection limit. Two glass slides are coated with a fluorescent polymer. Explosive chemicals bind to the polymer and reduce the amount of fluorescent light emitted. This has been developed by Nomadics, Inc. into a commercial product, Fido, that has been incorporated in robots deployed in Iraq and Afghanistan. Chemical sensors can be made lightweight and portable and can operate at a walking pace. However, they do not have a 100% probability of detection, and the explosive vapors they detect have often drifted away from the source. Effects of environmental conditions are not well understood. As of 2016, dogs outperformed the best technological solutions. Bulk explosive detection Although some of the methods for detecting explosive vapors are promising, the transport of explosive vapors through the soil is still not well understood. An alternative is to detect the bulk explosive inside a landmine by interacting with the nuclei of certain elements. In landmines, explosives contain 18–38% nitrogen by weight, 16–37% carbon and 2–3% hydrogen. By contrast, soils contain less than 0.07% nitrogen, 0.1–9% carbon and 0–50% hydrogen. Methods for interrogating the nuclei include nuclear quadrupole resonance and neutron methods. Detection can be difficult because the "bulk" may amount to less than 100 grams and a much greater signal may come from the surrounding earth and cosmic rays. Nuclear quadrupole resonance Nuclear quadrupole resonance (NQR) spectroscopy uses radio frequency (RF) waves to determine the chemical structure of compounds. It can be regarded as nuclear magnetic resonance "without the magnet". The frequencies at which resonances occur are primarily determined by the quadrupole moment of the nuclear charge density and the gradient of the electric field due to valence electrons in the compound. Each compound has a unique set of resonance frequencies. Unlike a metal detector, NQR does not have false positives from other objects in the ground. Instead, the main performance issue is the low ratio of the signal to the random thermal noise in the detector. This signal-to-noise ratio can be increased by increasing the interrogation time, and in principle the probability of detection can be near unity and the probability of false alarm low. Unfortunately, the most common explosive material (TNT) has the weakest signal. Also, its resonance frequencies are in the AM radio band and can be overwhelmed by radio broadcasts. 
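As noted above, the NQR signal-to-noise ratio improves with interrogation time. The effect is the standard square-root law for signal averaging, illustrated below with synthetic data (a generic sketch, not tied to any particular NQR instrument; the frequency, amplitude and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def snr_after_averaging(n_acquisitions, signal=0.1, noise_sigma=1.0, n=4096):
    """Empirical SNR of a weak tone after averaging repeated acquisitions."""
    t = np.arange(n)
    tone = signal * np.sin(2 * np.pi * 0.05 * t)   # the 'resonance' signal
    acc = np.zeros(n)
    for _ in range(n_acquisitions):
        acc += tone + rng.normal(0.0, noise_sigma, n)
    avg = acc / n_acquisitions
    resid = avg - tone                 # residual noise left after averaging
    return (signal / np.sqrt(2)) / resid.std()     # RMS signal / RMS noise

for m in (1, 4, 16, 64):
    print(m, round(snr_after_averaging(m), 3))
# SNR roughly doubles each time the number of acquisitions quadruples,
# i.e. it grows like the square root of the interrogation time.
```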
It also cannot see through metal casings or detect liquid explosives. Nevertheless, it is considered a promising technology for confirming results from other scanners with a low false alarm rate. Neutrons Since the late 1940s, a great deal of research has examined the potential of nuclear techniques for detecting landmines, and there have been several reviews of the technology. According to a RAND study in 2003, "Virtually every conceivable nuclear reaction has been examined, but ... only a few have potential for mine detection." In particular, reactions that emit charged particles can be eliminated because such particles do not travel far in the ground, and methods involving transmission of neutrons through the medium (useful in applications such as airport security) are not feasible because the source and detector cannot be placed on opposite sides. This leaves emission of radiation from targets and scattering of neutrons. For neutron detectors to be portable, they must be able to detect landmines efficiently with low-intensity beams so that little shielding is needed to protect human operators. One factor that determines the efficiency is the cross section of the nuclear reaction; if it is large, a neutron does not have to come as close to a nucleus to interact with it. One possible source of neutrons is spontaneous fission from a radioactive isotope, most commonly californium-252. Neutrons can also be generated using a portable particle accelerator (a sealed neutron tube) that promotes the fusion of deuterium and tritium, producing helium-4 and a neutron. This has the advantage that tritium, being less radiotoxic than californium-252, would pose a smaller threat to humans in the event of an accident such as an explosion. These sources emit fast neutrons with an energy of 14.1 million electron volts (MeV) from the neutron tube and 0–13 MeV from californium-252. If low-energy (thermal) neutrons are needed, the fast neutrons must be passed through a moderator. In one method, thermal neutron analysis (TNA), thermal neutrons are captured by a nucleus, releasing energy in the form of a gamma ray. In one such reaction, nitrogen-14 captures a neutron to make nitrogen-15, releasing a gamma ray with an energy of 10.835 MeV. No other naturally occurring isotope emits a photon with such a high energy, and there are few transitions that emit nearly as much energy, so detectors do not need high energy resolution. Also, nitrogen has a large cross section for thermal neutrons. The Canadian Army has deployed a multi-detector vehicle, the Improved Landmine Detection System, with a TNA detector to confirm the presence of anti-tank mines that were spotted by other instruments. However, the time required to detect antipersonnel mines is prohibitively long, especially if they are deeper than a few centimeters, and a human-portable detector is considered unachievable. An alternative neutron detector uses fast neutrons that enter the ground and are moderated by it; the flux of thermal neutrons scattered back is measured. Hydrogen is a very effective moderator of neutrons, so the signal registers hydrogen anomalies. In an antipersonnel mine, hydrogen accounts for 25–35% of the atoms in the explosive and 55–65% in the casing. Hand-held devices are feasible and several systems have been developed. However, because these detectors are sensitive only to atoms and cannot distinguish different molecular structures, they are easily fooled by water, and are generally not useful in soils with water content over 10%.
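The 14.1 MeV figure quoted earlier for neutron-tube sources follows from standard two-body kinematics: the D–T reaction releases about 17.6 MeV, and momentum conservation gives the lighter neutron the larger share, in proportion to the helium nucleus's mass. A quick check (textbook kinematics only, not a detector model):

```python
# Two-body kinematics for D + T -> He-4 + n, with reactants nearly at rest:
# the released energy Q splits inversely with mass, so
#   E_n = Q * m_He / (m_He + m_n)
Q = 17.59          # MeV released by the D-T fusion reaction
m_he = 4.0026      # helium-4 mass, atomic mass units
m_n = 1.0087       # neutron mass, atomic mass units

e_neutron = Q * m_he / (m_he + m_n)
e_alpha = Q - e_neutron
print(f"neutron: {e_neutron:.2f} MeV, alpha: {e_alpha:.2f} MeV")
# -> neutron: 14.05 MeV, alpha: 3.54 MeV (commonly rounded to 14.1 and 3.5)
```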
The water sensitivity can be mitigated, however: if a distributed pulsed neutron source is used, it may be possible to distinguish wet soil from explosives by their decay constants. A "Timed Neutron Detector" based on this method has been created by the Pacific Northwest National Laboratory and has won design awards. Acoustic/seismic Acoustic/seismic methods involve creating sound waves above the ground and detecting the resulting vibrations at the surface. Usually the sound is generated by off-the-shelf loudspeakers or electrodynamic shakers, but some work has also been done with specialized ultrasound speakers that send tight beams into the ground. The measurements can be made with non-contact sensors such as microphones, radar, ultrasonic devices and laser Doppler vibrometers. A landmine has a distinctive acoustic signature because it is a container. Sound waves alternately compress and expand the enclosed volume of air, and there is a lag between the volume change and the applied pressure that increases as the frequency decreases. The landmine and the soil above it act like two coupled springs with a nonlinear response that does not depend on the composition of the container. Such a response is not seen in most other buried objects such as roots, rocks, concrete or other man-made objects (unless they are hollow items such as bottles and cans), so the detection method has few false positives. As well as having a low false-positive rate, acoustic/seismic methods respond to different physical properties than other detectors, so the two could be used in tandem for a richer source of information. They are also unaffected by moisture and weather, but have trouble in frozen ground and vegetation. However, because sound attenuates in the ground, the technology has shown difficulty finding mines "deeper than approximately one mine diameter". It is also slow, with scans taking between 125 and 1,000 seconds per square meter, but increasing the number of sensors can speed the scan up proportionately. Unmanned ground vehicles Unmanned ground vehicles (UGVs) such as demining robots help protect their operators by distancing them from potential mines. Being electric, they need a power source to charge their batteries, and they must be robust enough to withstand close detonations. In Ukraine in 2023, an "iron caterpillar" developed under the Brave1 platform entered operation: a robotic vehicle that pushes a cheap, disposable roller to activate mines across all types of terrain. Unmanned aerial vehicle Unmanned aerial vehicles (UAVs), or drones, can be used to detect mines. The system that includes the drone, the person operating the machine, and the communication system is called an unmanned aerial (or aircraft) system (UAS). In the past decade, the use of such systems for demining has grown rapidly. Drones equipped with cameras have been used to map areas during non-technical survey, to monitor changes in land use resulting from demining, to identify patterns of mine placement and predict new locations, and to plan access routes to minefields. One such system, a fixed-wing UAV made by SenseFly, is being tested by GICHD in Angola. A Spanish company, CATUAV, equipped a drone with optical sensors to scan potential minefields in Bosnia and Herzegovina; their design was a finalist in the 2015 Drones for Good competition. From February to October 2019, Humanity & Inclusion, an international NGO, tested drones for non-technical survey in northern Chad. Several other ideas for detecting landmines are in the research and development phase.
A research team at the University of Bristol is working on adding multispectral imaging (for detecting chemical leaks) to drones. Geophysicists at Binghamton University are testing the use of thermal imaging to locate "butterfly mines", which were dropped from airplanes in Afghanistan and mostly sit on the surface. At DTU Space, an institute in the Technical University of Denmark, researchers are designing a drone with a magnetometer suspended underneath it, with the initial goal of clearing mines from World War II so that power cables can be connected to offshore wind turbines. The Dutch Mine Kafon project, led by designer Massoud Hassani, is working on an autonomous drone called the Mine Kafon Drone. It uses robotic attachments in a three-step process. First, a map is generated using a 3-D camera and GPS. Next, a metal detector pinpoints the location of mines. Finally, a robotic gripping arm places a detonator above each mine and the drone triggers it from a distance. Drone programs must overcome challenges such as getting permission to fly, finding safe takeoff and landing spots, and getting access to electricity for charging batteries. In addition, there are concerns about privacy, and a danger that drones could be weaponized by hostile forces. A mine-detecting drone, the ST-1, developed in 2023 through the Ukrainian Brave1 platform, is also in use. Personal protective equipment Deminers may be issued personal protective equipment (PPE) such as helmets, visors, armoured gloves, vests and boots, in an attempt to protect them if a mine is set off by accident. The IMAS standards require that some parts of the body (including the chest, abdomen, groin and eyes) be protected against a blast from 240 grams of TNT at a distance of 60 centimeters; head protection is recommended. Although the standards say blast-resistant boots may be used, the benefits are unproven and the boots may instill a false sense of security. The recommended equipment can afford significant protection against antipersonnel blast mines, but the IMAS standards acknowledge that it is not adequate against fragmentation and antitank mines. Heavier armor increases protection at the expense of comfort and mobility; PPE selection is thus a balance between protection should a blast occur and remaining sufficiently unhindered to prevent a blast in the first place. Other ways of managing risk include better detectors, remote-controlled vehicles to remove fragmentation mines, long-handled rakes for excavation, and unmanned aerial vehicles to scout hazards before approaching. Removal methods Humanitarian Once a mine is found, the most common methods of removing it are to manually defuse it (a slow and dangerous process) or blow it up with more explosives (dangerous and costly). Research programs have explored alternatives that destroy the mine without exploding it, using chemicals or heat. The most common explosive material, TNT, is very stable: it cannot be lit with a match and is highly resistant to acids and common oxidizing agents. However, some chemicals destroy it through an autocatalytic reaction. Diethylenetriamine (DETA) and TNT spontaneously ignite when they come into contact with each other. One delivery system involves a bottle of DETA placed over a mine; a bullet shot through both brings them into contact, and the TNT is consumed within minutes. Other chemicals that can be used for this purpose include pyridine, diethylamine and pyrrole. They do not have the same effect on explosives such as RDX and PETN.
Thermal destruction methods generate enough heat to burn TNT; one approach uses leftover rocket propellant from NASA Space Shuttle missions. Thiokol, the company that built the engines for the shuttles, developed a flare with the propellant. Placed next to a mine and activated remotely, it reaches temperatures exceeding , burning a hole through the landmine casing and consuming the explosive. These flares have been used by the US Navy in Kosovo and Jordan. Another device uses a solid-state reaction to create a liquid that penetrates the case and starts the explosive burning. Military In World War II, one method used by the German SS to clear minefields was to force captured civilians to walk across them, triggering any mines they encountered. In 1987, during the Iran–Iraq War, Iran used children known as baseeji as human mine detonators. More humane methods included mine plows, mounted on Sherman and Churchill tanks, and the Bangalore torpedo. Variants of these are still used today. Mine plows use a specially designed shovel to unearth mines and shove them to the side, clearing a path. They are quick and effective for clearing a lane for vehicles and are still attached to some types of tank and remotely operated vehicles. The mines are moved but not deactivated, so mine plows are not used for humanitarian demining. The mine-clearing line charge, successor to the Bangalore torpedo, clears a path through a minefield by triggering the mines with a blast wave. Examples include the anti-personnel obstacle breaching system and the Python minefield breaching system, a hose-pipe filled with explosives that is carried across a minefield by a rocket. Since the 2000s, fuel-air explosive (FAE) technology has been increasingly used for demining operations, offering an effective method for clearing minefields and neutralizing IEDs. One notable example of this application is the Rafael Carpet, a mine breaching system developed by Rafael Advanced Defense Systems. This system uses a series of rockets to disperse a fuel spray over a targeted area, creating a fuel-air explosive cloud that detonates to clear mines over a wide area, thus providing a rapid and safe path for military operations. See also Aftermath: The Remnants of War (film) Bomb disposal Center for International Stabilization and Recovery Counter-IED efforts Land mines in Central America Mines Advisory Group Swiss Foundation for Mine Action (FSD) Mine clearance agency MineWolf Systems Fares Scale of Injuries due to Cluster Munitions Ottawa Treaty References Further reading External links Humanitarian Demining Accident and Incident Database Studies Drug and explosive detection (pdf) Humanitarian Mine Action (blog by Andy Smith) Government programs Anti-Personnel Landmines, Small Arms and Light Weapons (European Commission) International Test and Evaluation Program for Humanitarian Demining NGOs Danish Demining Group Mines Advisory Group Bomb disposal Articles containing video clips
Demining
[ "Chemistry" ]
9,785
[ "Explosion protection", "Bomb disposal" ]
13,410,249
https://en.wikipedia.org/wiki/Winzapper
Winzapper is a freeware utility / hacking tool used to delete events from the Microsoft Windows NT 4.0 and Windows 2000 Security Log. It was developed by Arne Vidstrom as a proof-of-concept tool, demonstrating that once the Administrator account has been compromised, event logs are no longer reliable. According to Hacking Exposed: Windows Server 2003, Winzapper works with Windows NT/2000/2003. Prior to Winzapper's creation, Administrators already had the ability to clear the Security log either through the Event Viewer or through third-party tools such as Clearlogs. However, Windows lacked any built-in method of selectively deleting events from the Security Log. An unexpected clearing of the log would likely be a red flag to system administrators that an intrusion had occurred. Winzapper would allow a hacker to hide the intrusion by deleting only those log events relevant to the attack. Winzapper, as publicly released, lacked the ability to be run remotely without the use of a tool such as Terminal Services. However, according to Arne Vidstrom, it could easily be modified for remote operation. There is also an unrelated trojan horse by the same name. Countermeasures Winzapper creates a backup security log, "dummy.dat," at %systemroot%\system32\config. This file may be undeleted after an attack to recover the original log. Conceivably, however, a savvy user might copy a sufficiently large file over the dummy.dat file and thus irretrievably overwrite it. Winzapper causes the Event Viewer to become unusable until after a reboot, so an unexpected reboot may be a clue that Winzapper has recently been used. Another potential clue to a Winzapper-based attempt would be corruption of the Security Log (requiring it to be cleared), since there is always a small risk that Winzapper will do this. According to WindowsNetworking.com, "One way to prevent rogue admins from using this tool on your servers is to implement a Software Restriction Policy using Group Policy that prevents the WinZapper executable from running". References Computer security software
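A defender could script a simple check for the backup log described above (a minimal sketch: the dummy.dat path comes from the description here, while the script itself is illustrative and assumes it is run with permission to read the config directory):

```python
import os

# Path of the backup security log that Winzapper reportedly leaves behind.
DUMMY = os.path.expandvars(r"%SystemRoot%\System32\config\dummy.dat")

def check_for_winzapper_artifact():
    """Return a short status string for the known Winzapper artifact."""
    if os.path.exists(DUMMY):
        size = os.path.getsize(DUMMY)
        return f"dummy.dat present ({size} bytes) - investigate possible log tampering"
    return "no dummy.dat found"

if __name__ == "__main__":
    print(check_for_winzapper_artifact())
```

A check like this is only one signal among several; as the article notes, an unexpected reboot or a corrupted Security Log can also point to recent use of the tool.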
Winzapper
[ "Engineering" ]
458
[ "Cybersecurity engineering", "Computer security software" ]
13,410,380
https://en.wikipedia.org/wiki/Mathematics%20and%20fiber%20arts
Ideas from mathematics have been used as inspiration for fiber arts including quilt making, knitting, cross-stitch, crochet, embroidery and weaving. A wide range of mathematical concepts have been used as inspiration, including topology, graph theory, number theory and algebra. Some techniques, such as counted-thread embroidery, are naturally geometrical; other kinds of textile provide a ready means for the colorful physical expression of mathematical concepts. Quilting The IEEE Spectrum has organized a number of competitions on quilt block design, and several books have been published on the subject. Notable quiltmakers include Diana Venters and Elaine Ellison, who have written a book on the subject, Mathematical Quilts: No Sewing Required. Examples of mathematical ideas used in the book as the basis of a quilt include the golden rectangle, conic sections, Leonardo da Vinci's Claw, the Koch curve, the Clifford torus, San Gaku, Mascheroni's cardioid, Pythagorean triples, spidrons, and the six trigonometric functions. Knitting and crochet Knitted mathematical objects include the Platonic solids, Klein bottles and Boy's surface. The Lorenz manifold and the hyperbolic plane have been crafted using crochet. Knitted and crocheted tori have also been constructed depicting toroidal embeddings of the complete graph K7 and of the Heawood graph. The crocheting of hyperbolic planes has been popularized by the Institute For Figuring; a book by Daina Taimina on the subject, Crocheting Adventures with Hyperbolic Planes, won the 2009 Bookseller/Diagram Prize for Oddest Title of the Year. Embroidery Embroidery techniques such as counted-thread embroidery, including cross-stitch and some canvas work methods such as Bargello, make use of the natural pixels of the weave, lending themselves to geometric designs. Weaving Ada Dietz (1882–1981) was an American weaver best known for her 1949 monograph Algebraic Expressions in Handwoven Textiles, which defines weaving patterns based on the expansion of multivariate polynomials. The Rule 90 cellular automaton has also been used to design tapestries depicting both trees and abstract patterns of triangles. Spinning Margaret Greig was a mathematician who articulated the mathematics of worsted spinning. Fashion design The silk scarves from DMCK Designs' 2013 collection are all based on Douglas McKenna's space-filling curve patterns. The designs are either generalized Peano curves or based on a new space-filling construction technique. The Issey Miyake Fall-Winter 2010–2011 ready-to-wear collection featured designs from a collaboration between fashion designer Dai Fujiwara and mathematician William Thurston. The designs were inspired by Thurston's geometrization conjecture, the statement that every 3-manifold can be decomposed into pieces with one of eight different uniform geometries, a proof of which had been sketched in 2003 by Grigori Perelman as part of his proof of the Poincaré conjecture. See also Mathematics and art References Further reading External links Mathematical quilts Mathematical knitting Mathematical weaving Mathematical craft projects Wooly Thoughts Creations: Maths Puzzles & Toys Penrose tiling quilt Crocheting the Hyperbolic Plane: An Interview with David Henderson and Daina Taimina AMS Special Session on Mathematics and Mathematics Education in Fiber Arts (2005) Mathematics and culture Textile arts Recreational mathematics Mathematics and art
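The Rule 90 automaton mentioned in the weaving section is easy to reproduce as a textile chart: each cell's next state is the exclusive-or of its two neighbors in the previous row (a minimal sketch; the grid size, symbols and wrap-around edges are arbitrary choices, not taken from any particular tapestry):

```python
def rule90_chart(width=31, rows=16):
    """Print a Rule 90 chart: '#' cells could be one yarn color, '.' the other."""
    row = [0] * width
    row[width // 2] = 1          # start from a single marked cell
    lines = []
    for _ in range(rows):
        lines.append("".join("#" if c else "." for c in row))
        # Rule 90: a cell's next state is left neighbor XOR right neighbor
        row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]
    return "\n".join(lines)

print(rule90_chart())   # yields a Sierpinski-triangle-like weaving/knitting chart
```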
Mathematics and fiber arts
[ "Mathematics" ]
679
[ "Recreational mathematics" ]
13,411,300
https://en.wikipedia.org/wiki/Astellas%20Institute%20for%20Regenerative%20Medicine
Astellas Institute for Regenerative Medicine is a subsidiary of Astellas Pharma located in Marlborough, Massachusetts, US, developing stem cell therapies with a focus on diseases that cause blindness. It was formed in 1994 as a company named Advanced Cell Technology, Incorporated (ACT), which was renamed Ocata Therapeutics in November 2014. In February 2016 Ocata was acquired by Astellas for $379 million. History Advanced Cell Technology was formed in 1994 and was led from 2005 to late 2010 by William M. Caldwell IV, Chairman and Chief Executive Officer. Upon Mr. Caldwell's death on December 13, 2010, Gary Rabin, a member of ACT's board of directors with experience in investment and capital raising, assumed the role of Chairman and CEO. In 2007 the company's Chief Scientific Officer (CSO), Michael D. West, PhD, who had also founded Geron, left the company to join the regenerative medicine firm BioTime as CEO. In 2008, for $250,000 plus royalties up to a total of $1 million, the company licensed its "ACTCellerate" technology to BioTime. Robert Lanza was appointed CSO. On November 22, 2010, the company announced that it had received approval from the U.S. Food and Drug Administration (FDA) to initiate the first human clinical trial using embryonic stem cells to treat retinal diseases. A preliminary report of the trial was published in 2012, and a follow-up article was published in February 2015. In July 2014, Ocata announced that Paul K. Wotton, previously of Antares Pharma Inc (ATRS:NASDAQ CM), had become President and Chief Executive Officer. On August 27, 2014, Ocata announced a 1-for-100 reverse stock split of its common stock. Ocata was listed on NASDAQ in February 2015. Research Macular degeneration On November 30, 2010, Ocata filed an Investigational New Drug application with the U.S. FDA for the first clinical trial using embryonic stem cells to regenerate retinal pigment epithelium to treat dry age-related macular degeneration (dry AMD). Dry AMD is the most common form of macular degeneration and represents a market size of $25–30 billion in the U.S. and Europe. Stargardt's disease In November 2010 the FDA allowed Ocata to begin a Phase I/II human clinical trial using its retinal pigment epithelium cell therapy to treat Stargardt disease, a form of inherited juvenile macular degeneration. See also Key stem cell research events Somatic cell nuclear transfer Stem cells without embryonic destruction References Astellas Pharma Biotechnology companies of the United States Stem cells Biotechnology companies established in 1994 Life sciences industry
Astellas Institute for Regenerative Medicine
[ "Biology" ]
593
[ "Life sciences industry" ]
13,411,406
https://en.wikipedia.org/wiki/Aleutian%20Low
The Aleutian Low is a semi-permanent low-pressure system located near the Aleutian Islands in the Bering Sea during the Northern Hemisphere winter, driven by the contrast between relatively warm sea water and cooler land. It is a climatic feature, centered near the Aleutian Islands, that is identified in mean sea-level pressure. It is one of the largest atmospheric circulation patterns in the Northern Hemisphere and represents one of the "main centers of action in atmospheric circulation." Classification The Aleutian Low heavily influences the path and strength of cyclones. Extratropical cyclones that form in the sub-polar latitudes of the North Pacific typically slow down and reach maximum intensity in the area of the Aleutian Low. Tropical cyclones that form in the tropical and equatorial regions of the Pacific can veer northward and get caught in the Aleutian Low, usually in the later summer months. Both the November 2011 Bering Sea cyclone and the November 2014 Bering Sea cyclone were extratropical cyclones that had weakened and then restrengthened when they entered the Aleutian Low region. The storms are remembered as two of the strongest to impact the Bering Sea and Aleutian Islands, with pressure dropping below 950 mb in each system. The magnitude of the low pressure creates an extreme atmospheric disturbance, which can cause other significant shifts in weather. Following the November 2014 Bering Sea cyclone, a huge cold wave, the November 2014 North American cold wave, hit the US, bringing record-breaking low temperatures to many states. Effects The low serves as an atmospheric driver for low-pressure systems, post-tropical cyclones and their remnants, and can generate strong storms that impact Alaska and Canada. The low is strongest in the winter and almost completely dissipates in the summer. The circulation pattern is measured from averages of synoptic features, which help mark the locations of cyclones and their paths over a given time period. However, there is significant variability in these measurements. The circulation pattern shifts during the Northern Hemisphere summer, when the North Pacific High takes over and breaks apart the Aleutian Low. This high-pressure circulation pattern strongly influences tropical cyclone paths. The presence of the Eurasian and North American continents prevents a continuous belt of low pressure from developing in the Northern Hemisphere sub-polar latitudes, one that would mirror the circumpolar belt of low pressure and frequent storms in the Southern Ocean. As a result, the subpolar belt of low pressure is well developed only in the North Pacific (the Aleutian Low) and the North Atlantic (the Icelandic Low, which is located between Greenland and Iceland). The strength of the Aleutian Low has been proposed as a driving factor in determining primary production in the water column and, in turn, in the catch of the salmon fishery. References C. Michael Hogan. 2011. Gulf of Alaska. Topic ed. P. Saundry. Ed.-in-chief C. J. Cleveland. Encyclopedia of Earth. National Council for Science and the Environment. Aleutian Islands Weather hazards Atmospheric dynamics Types of cyclone
Aleutian Low
[ "Physics", "Chemistry" ]
627
[ "Physical phenomena", "Atmospheric dynamics", "Weather hazards", "Weather", "Fluid dynamics" ]
13,411,552
https://en.wikipedia.org/wiki/Problem%20frames%20approach
Problem analysis or the problem frames approach is an approach to software requirements analysis. It was developed by British software consultant Michael A. Jackson in the 1990s. History The problem frames approach was first sketched by Jackson in his book Software Requirements & Specifications (1995) and in a number of articles in various journals devoted to software engineering. It received its fullest description in his Problem Frames: Analysing and Structuring Software Development Problems (2001). A session on problem frames was part of the 9th International Workshop on Requirements Engineering: Foundation for Software Quality (REFSQ) held in Klagenfurt/Velden, Austria in 2003. The First International Workshop on Applications and Advances in Problem Frames was held as part of ICSE'04 in Edinburgh, Scotland. One outcome of that workshop was a 2005 special issue on problem frames in the International Journal of Information and Software Technology. The Second International Workshop on Applications and Advances in Problem Frames was held as part of ICSE 2006 in Shanghai, China. The Third International Workshop on Applications and Advances in Problem Frames (IWAAPF) was held as part of ICSE 2008 in Leipzig, Germany. In 2010, the IWAAPF workshops were replaced by the International Workshop on Applications and Advances of Problem-Orientation (IWAAPO). IWAAPO broadens the focus of the workshops to include alternative and complementary approaches to software development that share an emphasis on problem analysis. IWAAPO-2010 was held as part of ICSE 2010 in Cape Town, South Africa. Today research on the problem frames approach is being conducted at a number of universities, most notably at the Open University in the United Kingdom as part of its Relating Problem & Solution Structures research theme. The ideas in the problem frames approach have been generalized into the concepts of problem-oriented development (POD) and problem-oriented engineering (POE), of which problem-oriented software engineering (POSE) is a particular sub-category. The first International Workshop on Problem-Oriented Development was held in June 2009. Overview Fundamental philosophy Problem analysis or the problem frames approach is an approach — a set of concepts — to be used when gathering requirements and creating specifications for computer software. Its basic philosophy is strikingly different from other software requirements methods in insisting that: The best way to approach requirements analysis is through a process of parallel — not hierarchical — decomposition of user requirements. User requirements are about relationships in the real world — the application domain — not about the software system or even the interface with the software system. The approach uses three sets of conceptual tools. Tools for describing specific problems Concepts used for describing specific problems include: phenomena (of various kinds, including events), problem context, problem domain, solution domain (aka the machine), shared phenomena (which exist in domain interfaces), domain requirements (which exist in the problem domains) and specifications (which exist at the problem domain:machine interface). The graphical tools for describing problems are the context diagram and the problem diagram. Tools for describing classes of problems (problem frames) The Problem Frames Approach includes concepts for describing classes of problems. A recognized class of problems is called a problem frame (roughly analogous to a design pattern).
In a problem frame, domains are given general names and described in terms of their important characteristics. A domain, for example, may be classified as causal (it reacts in a deterministic, predictable way to events) or biddable (it can be bid, or asked, to respond to events, but cannot be expected always to react to events in any predictable, deterministic way). (A biddable domain usually consists of people.) The graphical tool for representing a problem frame is a frame diagram. A frame diagram looks generally like a problem diagram except for a few minor differences — domains have general, rather than specific, names, and rectangles representing domains are annotated to indicate the type (causal or biddable) of the domain. A list of recognized classes of problems (problem frames) The first group of problem frames identified by Jackson included: required behavior, commanded behavior, information display, simple workpieces, and transformation. Subsequently, other researchers have described or proposed additional problem frames. Describing problems The problem context Problem analysis considers a software application to be a kind of software machine. A software development project aims to change the problem context by creating a software machine and adding it to the problem context, where it will bring about certain desired effects. The particular portion of the problem context that is of interest in connection with a particular problem — the portion that forms the context of the problem — is called the application domain. After the software development project has been finished, and the software machine has been inserted into the problem context, the problem context will contain both the application domain and the machine; the machine interface is where the machine and the application domain meet and interact. The same situation can also be represented in a context diagram. The context diagram The problem analyst's first task is to truly understand the problem. That means understanding the context in which the problem is set. And that means drawing a context diagram. Here is Jackson's description of examining the problem context, in this case the context for a bridge to be built: You're an engineer planning to build a bridge across a river. So you visit the site. Standing on one bank of the river, you look at the surrounding land, and at the river traffic. You feel how exposed the place is, and how hard the wind is blowing and how fast the river is running. You look at the bank and wonder what faults a geological survey will show up in the rocky terrain. You picture to yourself the bridge that you are going to build. (Software Requirements & Specifications: "The Problem Context") An analyst trying to understand a software development problem must go through the same process as the bridge engineer. He starts by examining the various problem domains in the application domain. These domains form the context into which the planned machine must fit. Then he imagines how the machine will fit into this context, and he constructs a context diagram showing his vision of the problem context with the machine installed in it. The context diagram shows the various problem domains in the application domain, their connections, and the machine and its connections to (some of) the problem domains.
A context diagram shows: the machine to be built (drawn with a dark border to identify the box that represents the machine); the problem domains that are relevant to the problem; and solid lines representing domain interfaces — areas where domains overlap and share phenomena in common. A domain is simply a part of the world that we are interested in. It consists of phenomena — individuals, events, states of affairs, relationships, and behaviors. A domain interface is an area where domains connect and communicate. Domain interfaces are not data flows or messages. An interface is a place where domains partially overlap, so that the phenomena in the interface are shared phenomena — they exist in both of the overlapping domains. You can imagine domains as being like primitive one-celled organisms (like amoebas). They are able to extend parts of themselves into pseudopods. Imagine that two such organisms extend pseudopods toward each other in a sort of handshake, and that the cellular material in the area where they are shaking hands is mixing, so that it belongs to both of them. That's an interface. If X is the interface between domains A and B, then individuals that exist or events that occur in X exist or occur in both A and B. Shared individuals, states and events may look different to the domains that share them. Consider, for example, an interface between a computer and a keyboard. When the keyboard domain sees the event "keyboard operator presses the spacebar", the computer will see the same event as "byte hex("20") appears in the input buffer". Problem diagrams The problem analyst's basic tool for describing a problem is a problem diagram. In addition to the kinds of things shown on a context diagram, a problem diagram shows: a dotted oval representing the requirement to bring about certain effects in the problem domains, and dotted lines representing requirement references — references in the requirement to phenomena in the problem domains. An interface that connects a problem domain to the machine is called a specification interface, and the phenomena in the specification interface are called specification phenomena. The goal of the requirements analyst is to develop a specification for the behavior that the machine must exhibit at the machine interface in order to satisfy the requirement. As an example of a real, if simple, problem, consider a problem diagram that might be part of a computer system in a hospital. In the hospital, patients are connected to sensors that can detect and measure their temperature and blood pressure. The requirement is to construct a machine that can display information about patient conditions on a panel in the nurses' station. The name of the requirement is "Display ~ Patient Condition". The tilde (~) indicates that the requirement is about a relationship or correspondence between the panel display and patient conditions. The arrowhead indicates that the requirement reference connected to the Panel Display domain is also a requirement constraint, meaning that the requirement contains some kind of stipulation that the panel display must meet. In short, the requirement is that the panel display must display information that matches and accurately reports the condition of the patients. Describing classes of problems Problem frames A problem frame is a description of a recognizable class of problems, where the class of problems has a known solution.
In a sense, problem frames are problem patterns. Each problem frame has its own frame diagram. A frame diagram looks essentially like a problem diagram, but instead of showing specific domains and requirements, it shows types of domains and types of requirements; domains have general, rather than specific, names; and rectangles representing domains are annotated to indicate the type (causal or biddable) of the domain. Variant frames In Problem Frames Jackson discussed variants of the five basic problem frames that he had identified. A variant typically adds a domain to the problem context: a description variant introduces a description (lexical) domain; an operator variant introduces an operator; a connection variant introduces a connection domain between the machine and the central domain with which it interfaces; and a control variant introduces no new domain but changes the control characteristics of interface phenomena. Problem concerns Jackson also discusses certain kinds of concerns that arise when working with problem frames. Particular concerns include overrun, initialization, reliability, identities and completeness. Composition concerns include commensurable descriptions, consistency, precedence, interference and synchronization. Recognized problem frames The first problem frames identified by Jackson included: required behavior, commanded behavior, information display, simple workpieces, and transformation. Subsequently, other researchers have described or proposed additional problem frames. Required-behavior problem frame The intuitive idea behind this problem frame is: There is some part of the physical world whose behavior is to be controlled so that it satisfies certain conditions. The problem is to build a machine that will impose that control. Commanded-behavior problem frame The intuitive idea behind this problem frame is: There is some part of the physical world whose behavior is to be controlled in accordance with commands issued by an operator. The problem is to build a machine that will accept the operator's commands and impose the control accordingly. Information display problem frame The intuitive idea behind this problem frame is: There is some part of the physical world about whose states and behavior certain information is continually needed. The problem is to build a machine that will obtain this information from the world and present it at the required place in the required form. Simple workpieces problem frame The intuitive idea behind this problem frame is: A tool is needed to allow a user to create and edit a certain class of computer-processible text or graphic objects, or similar structures, so that they can be subsequently copied, printed, analyzed or used in other ways. The problem is to build a machine that can act as this tool. Transformation problem frame The intuitive idea behind this problem frame is: There are some given computer-readable input files whose data must be transformed to give certain required output files. The output data must be in a particular format, and it must be derived from the input data according to certain rules. The problem is to build a machine that will produce the required outputs from the inputs.
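Although the approach is notational rather than executable, the elements of a problem diagram can be captured in a small data model. The sketch below encodes the hospital patient-monitoring example from the earlier section (the classes are hypothetical, invented for illustration; they are not part of Jackson's notation):

```python
from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    kind: str                 # "machine", "causal", or "biddable"

@dataclass
class Interface:
    domains: tuple            # the two domains sharing phenomena
    shared_phenomena: list

@dataclass
class Requirement:
    name: str
    references: list          # requirement references into problem domains
    constrains: list          # the subset of references that are constraints

machine = Domain("Monitor machine", "machine")
patients = Domain("Patients", "causal")
panel = Domain("Panel display", "causal")

diagram = {
    "domains": [machine, patients, panel],
    "interfaces": [
        Interface((machine, patients), ["temperature", "blood pressure"]),
        Interface((machine, panel), ["display commands"]),
    ],
    "requirement": Requirement(
        "Display ~ Patient Condition",
        references=[patients, panel],
        constrains=[panel],   # the arrowhead in the problem diagram
    ),
}
print(diagram["requirement"].name)
```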
Problem analysis and the software development process When problem analysis is incorporated into the software development process, the software development lifecycle starts with the problem analyst, who studies the situation and: creates a context diagram; gathers a list of requirements and adds a requirements oval to the context diagram, creating a grand "all-in-one" problem diagram (in many cases actually creating an all-in-one problem diagram may be impractical or unhelpful, because there will be too many requirements references criss-crossing the diagram for it to be very useful); decomposes the all-in-one problem and problem diagram into simpler problems and simpler problem diagrams, which are projections, not subsets, of the all-in-one diagram; and continues to decompose problems until each problem is simple enough to be seen as an instance of a recognized problem frame. Each subproblem description includes a description of the specification interfaces for the machine to be built. At this point, problem analysis — problem decomposition — is complete. The next step is to reverse the process and to build the desired software system through a process of solution composition. The solution composition process is not yet well understood and is still very much a research topic. Extrapolating from hints in Software Requirements & Specifications, we can guess that the software development process would continue with the developers, who would: compose the multiple subproblem machine specifications into the specification for a single all-in-one machine, that is, a specification for a software machine that satisfies all of the customer's requirements (a non-trivial activity, since the composition process may well raise composition problems that need to be solved); and implement the all-in-one machine by going through the traditional code/test/deploy process. Similar approaches There are a few other software development ideas that are similar in some ways to problem analysis. The notion of a design pattern is similar to Jackson's notion of a problem frame. It differs in that a design pattern is used for recognizing and handling design issues (often design issues in specific object-oriented programming languages such as C++ or Java) rather than for recognizing and handling requirements issues. A related difference is that design patterns describe solutions, while problem frames represent problems. Design patterns also tend to account for semantic outcomes that are not native to the programming language they are to be implemented in; in that sense, problem frames are a native meta-notation for the domain of problems, whereas design patterns are a catalogue of technical debt left behind by the language implementers. Aspect-oriented programming, AOP (also known as aspect-oriented software development, AOSD) is similarly interested in parallel decomposition, which addresses what AOP proponents call cross-cutting concerns or aspects. AOP addresses concerns that are much closer to the design and code-generation phase than to the requirements analysis phase. AOP has moved into requirements engineering notations such as the ITU-T Z.151 User Requirements Notation (URN), where aspects apply across all the intentional elements. AOP can also be applied over requirements modelling that uses problem frames as a heuristic. URN models driven by problem-frame thinking and interleaved with aspects allow architectural tactics to be included in the requirements model.
Martin Fowler's book Analysis Patterns is very similar to problem analysis in its search for patterns, but it does not really present a new requirements analysis method, and the notion of parallel decomposition — which is so important for problem analysis — is not part of Fowler's analysis patterns. Jon G. Hall and Lucia Rapanotti, together with Jackson, have developed the Problem Oriented Software Engineering (POSE) framework, which shares the problem frames foundations. Since 2005, Hall and Rapanotti have extended POSE into Problem Oriented Engineering (POE), which provides a framework for engineering design, including a development process model and assurance-driven design, and which may be scalable to projects that include many stakeholders and that combine diverse engineering disciplines such as software and education provision. References External links http://mcs.open.ac.uk/mj665/ is Michael A. Jackson's home page http://www.jacksonworkbench.co.uk/stevefergspages/pfa/index.html has papers and articles on the Problem Frames Approach Systems analysis Systems engineering Software requirements
Problem frames approach
[ "Engineering" ]
3,419
[ "Software engineering", "Systems engineering", "Software requirements" ]
13,412,252
https://en.wikipedia.org/wiki/High-speed%20voice%20and%20data%20link
High-speed voice and data link (HVDL) is a high-speed voice and data provisioning method that allows telcos and ISPs to provide up to three voice channels and data (up to 1 Mbit/s) on a copper pair over extremely long local loops. Most DSL technologies (Etherloop in particular) work well up to about 18,000 feet (5.5 km) on a 24 AWG copper pair. Reach DSL supports lengths up to approximately 32,800 feet (10 km). HVDL has a theoretical maximum loop length of approximately 112,000 feet (approximately 34 km). Such a distance would require one or more repeaters and would probably support a connection of only 128 kbit/s. The ideal speed for this service is 512 kbit/s or 384 kbit/s, which is programmed directly from the COT line card. The signal is sent from the telco's central office as an Ethernet-style signal and is demultiplexed at the customer's premises by a POTS/Ethernet splitter, a box that contains all the circuitry needed to split the data and voice channels. An Ethernet cable is run directly to the customer's PC or router, and the POTS lines within the home are connected to the POTS terminals inside the customer-premises equipment (CPE) unit. The CPE unit is powered from the telco's central office, so it will continue to work during a power outage, and it supports failover to POTS. External links HVDL vendor Charles Communication circuits Digital subscriber line
High-speed voice and data link
[ "Engineering" ]
328
[ "Telecommunications engineering", "Communication circuits" ]
13,412,573
https://en.wikipedia.org/wiki/Kinesthetic%20sympathy
Kinesthetic sympathy is the state of having an emotional attachment to an object while it is in hand that one does not have when it is out of sight. Concept The concept of kinesthetic sympathy is associated with John Martin, a dance critic. He introduced it in a New York Times article that discussed how audience members respond to the movements of dancers on stage. This response is said to occur subconsciously. According to Martin, "when we see a human body moving, we see movement which is potentially producible by any human body and therefore by our own." This link allows humans to reproduce the movement in their present muscular experience and even awaken its connotations as if the perceived movement were their own. In this way an individual could, through kinesthetic sympathy, experience movements that are beyond his or her own body's capacities. Kinesthetic sympathy is linked to the concept of kinesthetic empathy, which pertains to the embodied experience of movement emotion. NSGCD Study In 2003, a study was conducted by the National Study Group On Chronic Disorganization (NSGCD), the purpose of which was to collect data on the effectiveness of using special techniques with clients to avoid kinesthetic sympathy. Organizers working with chronically disorganized clients at their desks were asked to use the kinesthetic sympathy avoidance process by asking their respective clients to hold a mug, drinking glass, or plastic or metal tumbler as a distracting device while working together. The survey was meant to see if, by holding a solid "distraction" item, the client would exhibit less noticeable kinesthetic sympathy and, therefore, have a more successful paper-processing session. The survey achieved mixed results. See also Behaviorism Kinesthetic learning Professional organizing Stimulus control References Emotion
Kinesthetic sympathy
[ "Biology" ]
364
[ "Emotion", "Behavior", "Human behavior" ]
13,412,633
https://en.wikipedia.org/wiki/Graminivore
A graminivore is a herbivorous animal that feeds primarily on grass, specifically "true" grasses, plants of the family Poaceae (also known as Gramineae). Graminivory is a form of grazing. These herbivorous animals have digestive systems that are adapted to digest large amounts of cellulose, which is abundant in fibrous plant matter and more difficult for many other animals to break down. As such, they have specialized enzymes to aid in digestion and, in some cases, symbiotic bacteria that live in their digestive tract and "assist" with the digestive process through fermentation as the matter travels through the intestines. Horses, cattle, geese, guinea pigs, hippopotamuses, capybaras and giant pandas are examples of vertebrate graminivores. Some carnivorous vertebrates, such as dogs and cats, are known to eat grass occasionally. Grass consumption in dogs can be a way of ridding their intestinal tract of parasites that may threaten the carnivore's health. Various invertebrates also have graminivorous diets. Many grasshoppers, such as individuals from the family Acrididae, have diets consisting primarily of plants from the family Poaceae. Although humans are not graminivores, they derive much of their nutrition from a type of grass called cereal, and especially from the fruit of that grass, which is called grain. Graminivores generally exhibit preferences for which species of grass they consume. For example, in a study of North American bison feeding on shortgrass plains in north-eastern Colorado, the bison consumed a total of thirty-six different plant species. Of those thirty-six, five grass species were favoured and consumed most heavily; on average, these five species made up about 80% of the diet. A few of these species include Aristida longiseta, Muhlenbergia species, and Bouteloua gracilis. References Ethology Herbivory
Graminivore
[ "Biology" ]
423
[ "Behavior", "Ethology", "Herbivory", "Behavioural sciences", "Eating behaviors" ]