Dataset columns (name, type, observed range):
id: int64, 39 to 79M
url: string, lengths 31 to 227
text: string, lengths 6 to 334k
source: string, lengths 1 to 150
categories: list, lengths 1 to 6
token_count: int64, 3 to 71.8k
subcategories: list, lengths 0 to 30
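The rows below follow this schema. As a minimal sketch, assuming the dump comes from a Hugging Face-style dataset (the dataset path used here is a hypothetical placeholder, not the actual source), rows could be loaded and filtered on these columns as follows:

```python
# Minimal sketch: loading rows that follow the schema above and filtering on its columns.
# The dataset path "example/wikipedia-categorized" is a hypothetical placeholder.
from datasets import load_dataset

ds = load_dataset("example/wikipedia-categorized", split="train")

# Keep long astronomy articles, using the "categories" and "token_count" columns.
astronomy_long = ds.filter(
    lambda row: "Astronomy" in row["categories"] and row["token_count"] > 1000
)

for row in astronomy_long.select(range(min(3, len(astronomy_long)))):
    print(row["id"], row["source"], row["url"])
```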
972,070
https://en.wikipedia.org/wiki/Hodge%20301
Hodge 301 is a star cluster in the Tarantula Nebula, visible from Earth's Southern Hemisphere. The cluster and nebula lie about 168,000 light years away, in one of the Milky Way's orbiting satellite galaxies, the Large Magellanic Cloud. Hodge 301, along with the cluster R136, is one of two major star clusters situated in the Tarantula Nebula, a region which has seen intense bursts of star formation over the last few tens of millions of years. R136 is situated in the central regions of the nebula, while Hodge 301 is located about 150 light years away, to the north west as seen from Earth. Hodge 301 was formed early on in the current wave of star formation, with an age estimated at 20-25 million years old, some ten times older than R136. Since Hodge 301 formed, it is estimated that at least 40 stars within it have exploded as supernovae, giving rise to violent gas motions within the surrounding nebula and emission of x-rays. This contrasts with the situation around R136, which is young enough that none of its stars have yet exploded as supernovae; instead, the stars of R136 are emitting fast stellar winds, which are colliding with the surrounding gases. The two clusters thus provide astronomers with a direct comparison between the impact of supernova explosions and stellar winds on surrounding gases. References External links Hodge 301 at ESA/Hubble Large Magellanic Cloud Open clusters Tarantula Nebula Dorado
Hodge 301
[ "Astronomy" ]
307
[ "Dorado", "Constellations" ]
972,079
https://en.wikipedia.org/wiki/Genuine%20progress%20indicator
Genuine progress indicator (GPI) is a metric that has been suggested to replace, or supplement, gross domestic product (GDP). The GPI is designed to take fuller account of the well-being of a nation, only a part of which pertains to the size of the nation's economy, by incorporating environmental and social factors which are not measured by GDP. For instance, some models of GPI decrease in value when the poverty rate increases. The GPI separates the concept of societal progress from economic growth. The GPI is used in ecological economics, "green" economics, sustainability and more inclusive types of economics. It factors in environmental and carbon footprints that businesses produce or eliminate, including in the forms of resource depletion, pollution and long-term environmental damage. GDP is increased twice when pollution is created, since it increases once upon creation (as a side-effect of some valuable process) and again when the pollution is cleaned up; in contrast, GPI counts the initial pollution as a loss rather than a gain, generally equal to the amount it will cost to clean up later plus the cost of any negative impact the pollution will have in the meantime. While quantifying costs and benefits of these environmental and social externalities is a difficult task, "Earthster-type databases could bring more precision and currency to GPI's metrics." It has been noted that such data may also be embraced by those who attempt to "internalize externalities" by making companies pay the costs of the pollution they create (rather than having the government or society at large bear those costs) "by taxing their goods proportionally to their negative ecological and social impacts". GPI is an attempt to measure whether the environmental impact and social costs of economic production and consumption in a country are negative or positive factors in overall health and well-being. By accounting for the costs borne by the society as a whole to repair or control pollution and poverty, GPI balances GDP spending against external costs. GPI advocates claim that it can more reliably measure economic progress, as it distinguishes between the overall "shift in the 'value basis' of a product, adding its ecological impacts into the equation". Comparatively speaking, the relationship between GDP and GPI is analogous to the relationship between the gross profit of a company and the net profit; the net profit is the gross profit minus the costs incurred, while the GPI is the GDP (value of all goods and services produced) minus the environmental and social costs. Accordingly, the GPI will be zero if the financial costs of poverty and pollution equal the financial gains in production of goods and services, all other factors being constant. Motivations Some economists assess progress in people's welfare by comparing the gross domestic product over time—that is, by adding up the annual dollar value of all goods and services produced within a country over successive years. However, GDP was not intended to be used for such purpose. It is prone to productivism or consumerism, over-valuing production and consumption of goods, and not reflecting improvement in human well-being. It also does not distinguish between money spent for new production and money spent to repair negative outcomes from previous expenditure. 
For example, it would treat as equivalent one million dollars spent to build new homes and one million dollars spent in aid relief to those whose homes have been destroyed, despite these expenditures arguably not representing the same kind of progress. This is relevant for example when considering the true costs of development that destroys wetlands and hence exacerbates flood damages. Simon Kuznets, the inventor of the concept of GDP, noted in his first report to the US Congress in 1934:the welfare of a nation can scarcely be inferred from a measure of national income. In 1962, he also wrote: Distinctions must be kept in mind between quantity and quality of growth, between costs and returns, and between the short and long run... Goals for more growth should specify more growth of what and for what. Some have argued that an adequate measure must also take into account ecological yield and the ability of nature to provide services, and that these things are part of a more inclusive ideal of progress, which transcends the traditional focus on raw industrial production. Theoretical foundation The need for a GPI to supplement indicators such as GDP was highlighted by analyses of uneconomic growth in the 1980s, notably that of Marilyn Waring, who studied biases in the UN System of National Accounts. By the early 1990s, there was a consensus in human development theory and ecological economics that growth in money supply was actually reflective of a loss of well-being: that shortfalls in essential natural and social services were being paid for in cash and that this was expanding the economy but degrading life. The matter remains controversial and is a main issue between advocates of green economics and neoclassical economics. Neoclassical economists understand the limitations of GDP for measuring human well-being but nevertheless regard GDP as an important, though imperfect, measure of economic output and would be wary of too close an identification of GDP growth with aggregate human welfare. However, GDP tends to be reported as synonymous with economic progress by journalists and politicians, and the GPI seeks to correct this shorthand by providing a more encompassing measure. Some economists, notably Herman Daly, John B. Cobb and Philip Lawn, have asserted that a country's growth, increased goods production, and expanding services have both costs and benefits, not just the benefits that contribute to GDP. They assert that, in some situations, expanded production facilities damage the health, culture, and welfare of people. Growth that was in excess of sustainable norms (e.g., of ecological yield) had to be considered to be uneconomic. According to the "threshold hypothesis", developed by Manfred Max-Neef, "when macroeconomic systems expand beyond a certain size, the additional benefits of growth are exceeded by the attendant costs" (Max-Neef, 1995). This hypothesis is borne out in data comparing GDP/capita with GPI/capita from 17 countries. The graph demonstrates that, while GDP does increase overall well-being to a point, beyond $7,000 GDP/capita the increase in GPI is reduced or remains stagnant. Similar trends can be seen when comparing GDP to life satisfaction as well as in a Gallup Poll published in 2008. 
According to Lawn's model, the "costs" of economic activity include the following potential harmful effects: cost of resource depletion; cost of crime; cost of ozone depletion; cost of family breakdown; cost of air, water, and noise pollution; loss of farmland; and loss of wetlands. Analysis by Robert Costanza, also around 1995, of nature's services and their value showed that a great deal of degradation of nature's ability to clear waste, prevent erosion, pollinate crops, etc., was being done in the name of monetary profit opportunity: this was adding to GDP but causing a great deal of long-term risk in the form of mudslides, reduced yields, lost species, water pollution, etc. Such effects have been very marked in areas that suffered serious deforestation, notably Haiti, Indonesia, and some coastal mangrove regions of India and South America. Some of the worst land abuses, for instance, have been shrimp farming operations that destroyed mangroves, evicted families, left coastal lands salted and useless for agriculture, but generated a significant cash profit for those who were able to control the export market in shrimp. This has become a signal example to those who contest the idea that GDP growth is necessarily desirable. GPI systems generally try to take account of these problems by incorporating sustainability: whether a country's economic activity over a year has left the country with a better or worse future possibility of repeating at least the same level of economic activity in the long run. For example, agricultural activity that uses replenishing water resources, such as river runoff, would score a higher GPI than the same level of agricultural activity that drastically lowers the water table by pumping irrigation water from wells. Income vs. capital depletion Hicks (1946) pointed out that the practical purpose of calculating income is to indicate the maximum amount that people can produce and consume without undermining their capacity to produce and consume the same amount in the future. From a national income perspective, it is necessary to answer the following question: "Can a nation's entire GDP be consumed without undermining its ability to produce and consume the same GDP in the future?" This question is largely ignored in contemporary economics but fits under the idea of sustainability. In legislative decisions The best-known attempts to apply the concepts of GPI to legislative decisions are probably the GPI Atlantic, an index, not an indicator, invented by Ronald Colman for Atlantic Canada, who explicitly avoids aggregating the research results into a single number, arguing that a single figure keeps decision makers in the dark; the Alberta GPI created by ecological economist Mark Anielski to measure the long-term economic, social and environmental sustainability of the province of Alberta; and the "environmental and sustainable development indicators" used by the Government of Canada to measure its own progress toward achieving well-being goals. The Canadian Environmental Sustainability Indicators program is an effort to justify state services in GPI terms. It assigns the Commissioner of the Environment and Sustainable Development, an officer in the Auditor-General of Canada's office, to perform the analysis and report to the House of Commons. However, Canada continues to state its overall budgetary targets in terms of reducing its debt to GDP ratio, which implies that GDP increase and debt reduction in some combination are its main priorities. 
In the European Union (EU), the Metropole efforts and the London Health Observatory methods are equivalents focused mostly on urban lifestyle. The EU and Canadian efforts are among the most advanced in any of the G8 or OECD nations, but there are parallel efforts to measure quality of life or standard of living in health (not strictly wealth) terms in all developed nations. This has also been a recent focus of the labour movement. Calculation The calculation of GPI, presented in simplified form, is the following: GPI = A + B - C - D + I, where A is income-weighted private consumption, B is the value of non-market services generating welfare, C is the private defensive cost of natural deterioration, D is the cost of deterioration of nature and natural resources, and I is the increase in capital stock and the balance of international trade. The GPI indicator is based on the concept of sustainable income, presented by economist John Hicks (1948). Sustainable income is the amount a person or an economy can consume during one period without decreasing his or her consumption during the next period. In the same manner, GPI depicts the state of welfare in the society by taking into account the ability to maintain welfare at least at the same level in the future. Components The Genuine Progress Indicator is measured by 26 indicators which can be divided into three main categories: Economic, Environmental, and Social. Some regions, nations, or states may adjust the verbiage slightly to accommodate their particular scenario. For example, the GPI template uses the phrase "Carbon Dioxide Emissions Damage" whereas the state of Maryland uses "Cost of Climate Change" because it also accounts for other greenhouse gases (GHG) such as methane and nitrous oxide. Development in the United States Non-profit organizations and universities have measured the GPI of Vermont, Maryland, Colorado, Ohio, and Utah. These efforts have prompted government action in some states. As of 2014, Vermont, Maryland, Washington and Hawai'i have passed state government initiatives to consider GPI in budgeting decisions, with a focus on long-term costs and benefits. Hawai'i's GPI spans the years from 2000 to 2020 and will be updated annually. In 2009, the state of Maryland formed a coalition of representatives from several state government departments in search of a metric that would factor social well-being into the more traditional gross product indicators of the economy. The metric would help determine the sustainability of growth and economic progress against social and environmental factors typically left out of national indicators. The GPI was chosen as a comprehensive measure of sustainability as it has a well-accepted scientific methodology that can be adopted by other states and compared over time. Maryland's GPI trends are comparable to other states and nations that have measured their GPI in that gross state product (GSP) and GPI have diverged over the past four decades, with GSP increasing more rapidly than GPI. While economic elements of GPI have increased overall (with a significant drop-off during the Great Recession), social well-being has stagnated, with any values added being cancelled out by costs deducted, and environmental indicators, while improving slightly, are always considered costs. Combined, these elements bring the GPI below GSP. However, Maryland's GPI did increase by two points from 2010 to 2011. 
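A minimal numeric sketch of the simplified GPI formula given in the Calculation section above; the figures are invented for illustration and are not actual GPI accounts:

```python
# Sketch of the simplified formula GPI = A + B - C - D + I.
# All figures are invented for illustration; they are not real national accounts data.
def genuine_progress_indicator(A, B, C, D, I):
    """
    A: income-weighted private consumption
    B: value of non-market services generating welfare
    C: private defensive cost of natural deterioration
    D: cost of deterioration of nature and natural resources
    I: increase in capital stock and balance of international trade
    """
    return A + B - C - D + I

gpi = genuine_progress_indicator(A=900.0, B=250.0, C=120.0, D=180.0, I=60.0)
print(gpi)  # 910.0
```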
The calculation methodology of GPI was first developed and published in 1995 by Redefining Progress and applied to US data from 1950 to 1994. The original work on the GPI in 1995 was a modification of the 1994 version of the Index of Sustainable Economic Welfare in Daly and Cobb. Results showed that GDP increased substantially from 1950 to 1994. Over the same period, the GPI stagnated. Thus, according to GPI theory, economic growth in the US, i.e., the growth of GDP, did not increase the welfare of the people over that 44-year period. So far, GPI time series have been calculated for the US and Australia as well as for several of their states. In addition, GPI has been calculated for Austria, Canada, Chile, France, Finland, Italy, the Netherlands, Scotland, and the rest of the UK. Development in Finland The GPI time series for Finland for 1945 to 2011 has been calculated by Statistics Finland. The calculation closely followed the US methodology. The results show that in the 1970s and 1980s economic growth, as measured by GDP, clearly increased welfare, measured by the GPI. After the economic recession of the early 1990s the GDP continued to grow, but the GPI stayed at a lower level. This indicates a widening gap between the trends of GDP and GPI that began in the early 1990s. In the 1990s and 2000s the growth of GDP has not benefited the average Finn. If measured by GPI, sustainable economic welfare has actually decreased due to environmental hazards that have accumulated in the environment. The Finnish GPI time series have been updated by Dr Jukka Hoffrén at Statistics Finland. Within the EU's Interreg IV C FRESH Project (Forwarding Regional Environmental Sustainable Hierarchies), GPI time series were calculated for the Päijät-Häme, Kainuu and South Ostrobothnia (Etelä-Pohjanmaa) regions in 2009–2010. During 2011 these calculations were extended to the Lapland, Northern Ostrobothnia (Pohjois-Pohjanmaa) and Central Ostrobothnia (Keski-Pohjanmaa) regions. Criticism GPI considers some types of production to have a negative impact on the ability to continue that production. GDP measures the entirety of production at a given time. GDP is relatively straightforward to measure compared to GPI. Competing measures like GPI attempt to define well-being, which is arguably impossible to define. Therefore, opponents of GPI claim that GPI cannot function to measure the goals of a diverse, plural society. Supporters of GDP as a measure of societal well-being claim that competing measures such as GPI are more vulnerable to political manipulation. Finnish economists Mika Maliranta and Niku Määttänen write that the problem of alternative development indexes is their attempt to combine things that are incommensurable. It is hard to say what exactly they indicate, and difficult to make decisions based on them. They can be compared to an indicator that shows the mean of a car's velocity and the amount of fuel left. They add that it indeed seems as if the economy has to grow in order for the people to even remain as happy as they are at present. In Japan, for example, the degree of happiness expressed by the citizens in polls has been declining since the early 1990s, the period when Japan's economic growth stagnated. Supporting countries and groups Canada: planning applications. GDP has functioned as an "income sheet". GPI will function as a "balance sheet," taking into consideration that some income sources are very costly and contribute a negative profit overall. 
Beyond GDP is an initiative of the European Union, Club of Rome, WWF and OECD. Redefining Progress. Reports and analyses. A non-profit organization with headquarters in Oakland, California. Gross National Happiness USA has commissioned studies and advocated adoption of GPI in the United States. GPI and GPI-type studies completed See also Broad measures of economic progress Disability-adjusted life year Economics Full cost accounting Green national product Green gross domestic product (Green GDP) Gender-related Development Index Global Peace Index Gross National Happiness Gross National Well-being (GNW) Happiness economics Happy Planet Index (HPI) Human Development Index (HDI) ISEW (Index of sustainable economic welfare) Progress (history) Progressive utilization theory Legatum Prosperity Index Leisure satisfaction Living planet index Millennium Development Goals (MDGs) Post-materialism Psychometrics Subjective life satisfaction Where-to-be-born Index Wikiprogress World Values Survey (WVS) References Further reading News articles "Advantage or Illusion: Is Alberta's Progress Sustainable?" by Mark Anielski. Encompass Vol. 5, No. 5, July/August 2001. "The Growth Consensus Unravels" by Jonathan Rowe. Dollars and Sense, July–August 1999, pp. 15–18, 33. "Real Wealth: The Genuine Progress Indicator Could Provide an Environmental Measure of the Planet's Health" by Linda Baker. E Magazine, May/June 1999, pp. 37–41. "The GDP Myth: Why 'Growth' Isn't Always a Good Thing" by Jonathan Rowe and Judith Silverstein. Washington Monthly, March 1999, pp. 17–21. "Economic Issues" by Lusi Song, Troy Martin, and Timothy Polo. 4EM Taylor, May 28, 2008, pp. 1–3. "If the GDP is Up, Why is America Down?" by Clifford Cobb, Ted Halstead, and Jonathan Rowe, Atlantic Monthly, October 1995. "Why Bigger Isn't Better: The Genuine Progress Indicator – 1999 Update" by Clifford Cobb, Gary Sue Goodman, and Mathis Wackernagel, Redefining Progress, November 1999. Scientific articles and books Anielski, M., M. Griffiths, D. Pollock, A. Taylor, J. Wilson, S. Wilson. 2001. Alberta Sustainability Trends 2000: Genuine Progress Indicators Report 1961 to 1999. Pembina Institute for Appropriate Development. April 2001. Anielski Home (see the Alberta Genuine Progress Indicators Reports) Anielski, M. 2001. The Alberta GPI Blueprint: The Genuine Progress Indicator (GPI) Sustainable Well-Being Accounting System. Pembina Institute for Appropriate Development. September 2001. http://www.anielski.com/Publications.htm (see the Alberta Genuine Progress Indicators Reports) Anielski, M. and C. Soskolne. 2001. "Genuine Progress Indicator (GPI) Accounting: Relating Ecological Integrity to Human Health and Well-Being." Paper in Just Ecological Integrity: The Ethics of Maintaining Planetary Life, eds. Peter Miller and Laura Westra. Lanham, Maryland: Rowman and Littlefield: pp. 83–97. Bagstad, K., G. Berik, and E. Gaddis. 2014. "Methodological developments in U.S. state-level Genuine Progress Indicators: Toward GPI 2.0" Ecological Indicators 43: 474-485. Berik, G. 2020. "Measuring What Matters and Guiding Policy: An Evaluation of the Genuine Progress Indicator" International Labour Review, 159 (1): 71-93. Bleys, B., & Van der Slycken, J. (2019). De Index voor Duurzame Economische Welvaart (ISEW) voor Vlaanderen, 1990–2017. Studie uitgevoerd in opdracht van de Vlaamse Milieumaatschappij, MIRA, MIRA/2019/04, Universiteit Gent. Web: https://biblio.ugent.be/publication/8641018/file/8641020 Charles, A. C. Burbidge, H. Boyd and A. Lavers. 2009. 
Fisheries and the Marine Environment in Nova Scotia: Searching for Sustainability and Resilience. GPI Atlantic. Halifax, Nova Scotia. Web: Colman, Ronald. 2003. Economic Value of Civic and Voluntary Work. GPI Atlantic. Halifax, Nova Scotia. Web: Cobb, C., Halstead T., and J. Rowe. 1995. Genuine Progress Indicator: Summary of Data and Methodology. Redefining Progress, San Francisco. Costanza, R., Erickson, J.D., Fligger, K., Adams, A., Adams, C., Altschuler, B., Balter, S., Fisher, B., Hike, J., Kelly, J., Kerr, T., McCauley, M., Montone, K., Rauch, M., Schmiedeskamp, K., Saxton, D., Sparacino, L., Tusinski, W. and L. Williams. 2004. "Estimates of the Genuine Progress Indicator (GPI) for Vermont, Chittenden County, and Burlington, from 1950 to 2000." Ecological Economics 51: 139–155. Daly, H., 1996. Beyond Growth: The Economics of Sustainable Development. Beacon Press, Boston. Daly, H. & Cobb, J., 1989, 1994. For the Common Good. Beacon Press, Boston. Delang, C. O., Yu, Yi H. 2015. "Measuring Welfare beyond Economics: The genuine progress of Hong Kong and Singapore". London: Routledge. Fisher, I., 1906. Nature of Capital and Income. A.M. Kelly, New York. Hicks, J., 1946. Value and Capital, Second Edition. Clarendon, London. Redefining Progress, 1995. "Gross production vs genuine progress". Excerpt from the Genuine Progress Indicator: Summary of Data and Methodology. Redefining Progress, San Francisco. L. Pannozzo, R. Colman, N. Ayer, T. Charles, C. Burbidge, D. Sawyer, S. Stiebert, A. Savelson, C. Dodds. (2009), The 2008 Nova Scotia GPI Accounts; Indicators of Genuine Progress, GPI Atlantic. Halifax, Nova Scotia. Van der Slycken, J. (2021). Beyond GDP : alternative measures of economic welfare for the EU-15. Universiteit Gent. Faculteit Economie en Bedrijfskunde. Web: Beyond GDP : alternative measures of economic welfare for the EU-15 External links HI-GPI metrics MD-GPI metrics Maryland Green Registry Maryland GPI background Maryland GPI first steps Archived Maryland-DNR Overview of MD-GPI metrics The IAFOR International Conference on Social Sciences- Hawaii (2009-2017) Vermont GPI overview primer Gross domestic product Welfare economics Economic ideologies Sustainability metrics and indices Quality of life Ecological economics Environmental economics
Genuine progress indicator
[ "Environmental_science" ]
4,858
[ "Environmental economics", "Environmental social science" ]
972,123
https://en.wikipedia.org/wiki/Digitized%20Sky%20Survey
The Digitized Sky Survey (DSS) is a digitized version of several photographic astronomical surveys of the night sky, produced by the Space Telescope Science Institute between 1983 and 2006. Versions and source material The term Digitized Sky Survey originally referred to the publication in 1994 of a digital version of an all-sky photographic atlas used to produce the first version of the Guide Star Catalog. For the northern sky, the National Geographic Society – Palomar Observatory Sky Survey E-band (red, named after the Eastman Kodak IIIa-E emulsion used), provided almost all of the source data (plate code "XE" in the survey). For the southern sky, the J-band (blue, Eastman Kodak IIIa-J) of the ESO/SERC Southern Sky Atlas (known as the SERC-J, code "S") and the "quick" V-band (blue or V in the Johnson–Kron–Cousins system, Eastman Kodak IIa-D) SERC-J Equatorial Extension (SERC-QV, code "XV"), from the UK Schmidt Telescope at the Australian Siding Spring Observatory, were used. Three supplemental plates in the V-band from the SERC and Palomar surveys are included (code "XX"), with shorter exposure times for the fields containing the Andromeda Galaxy, the Large and the Small Magellanic Cloud. The publication of a digital version of these photographic collections has subsequently become known as the First Generation DSS or DSS1. After the original 1994 publication, more digitizations were made using recently completed photographic surveys, and released as the Second Generation DSS or DSS2. Second Generation DSS consists of three spectra bands, blue, red, and near infrared. The red part was first to complete, and includes the F-band (red, Eastman Kodak IIIa-F) plates from the Palomar Observatory Sky Survey II, made with the Oschin Schmidt Telescope at Palomar Observatory for the northern sky. Red band sources for the southern sky include the short red (SR) plates of the SERC I/SR Survey and Atlas of the Milky Way and Magellanic Clouds (referred to as AAO-SR in DSS2), the Equatorial Red (SERC-ER), and the F-band Second Epoch Survey (referred to as AAO-SES in DSS2, AAO-R in the original literature), all made with the UK Schmidt Telescope at Anglo-Australian Observatory. Production The Digitized Sky Survey was produced by the Catalogs and Survey Branch (CASB) of the Space Telescope Science Institute (STScI). They scanned plates using one of two Perkin-Elmer PDS 2020G microdensitometers. The pixel size was 25 ("First Generation", DSS1) or 15 micrometres ("Second Generation", DSS2), corresponding to 1.7 or 1.0 arcseconds in the source material. The scanning resulted in images 14,000 x 14,000 (DSS1) or 23,040 x 23,040 pixels (DSS2) in size, or approximately 0.4 (DSS1) and 1.1 gigabytes (DSS2) each. The scanning of First Generation DSS takes a little under seven hours per plate to complete. Due to the large size of the images, they were compressed using an H-transform algorithm. This algorithm is lossy, but adaptive, and preserves most of the information in the original. Most of the First Generation DSS files were shrunk by a factor of seven. Similar methods were used in the production of the "Second Generation" DSS, but the microdensitometers have since been modified for multi-channel operation, in order to keep the scan time under 12 hours per plate. The CASB has also published several companion scientific products. The most notable is a photometric calibration of part of the "First Generation" DSS. 
It allows photometric measurements to be made using the digital northern POSS-E, southern SERC-J, and southern Galactic Plane SERC-V data. Publication The compressed version of the First Generation DSS was published by the STScI and the Astronomical Society of the Pacific (ASP) on 102 CD-ROMs in 1994, under the name "Digitized Sky Survey." It has also been made available online by the STScI and several other facilities in databases that can be queried over the web. The moniker "First Generation" was added later. In 1996, a more highly compressed version of the DSS was published by the STScI and ASP under the name RealSky. RealSky files were compressed by a factor of roughly 100. RealSky consequently took up less space, but the additional compression made it inappropriate for use in photometry and fine detail in the images was degraded. The Second Generation DSS has appeared steadily over the course of several years. In 2006, the Second Generation DSS (second epoch POSS-II and SES surveys) was finished, and distributed on CD-ROM to partner institutions. Generally, the data are available through WWW services at partner institutions. Funding See also References External links Digitized Sky Survey A Seamless Spherical Stitch of the Digitized Sky Survey from Microsoft Research Digitized Sky Survey in Google Sky (partly covered by SDSS and other images) Digitized Sky Survey in WIKISKY.ORG Astronomical catalogues Astronomical surveys Astronomical databases
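A minimal sketch of the scan-size arithmetic described in the Production section above, assuming 16-bit (2-byte) pixels; that bit depth is an assumption not stated in the text, but it reproduces the quoted file sizes:

```python
# Sketch: raw scan sizes for DSS plates, assuming 16 bits (2 bytes) per pixel.
# The 2-byte depth is an assumption; it matches the quoted ~0.4 GB (DSS1) and ~1.1 GB (DSS2).
BYTES_PER_PIXEL = 2

def scan_size_gb(side_pixels: int) -> float:
    """Raw size in gigabytes of a square scan with side_pixels pixels on a side."""
    return side_pixels * side_pixels * BYTES_PER_PIXEL / 1e9

dss1 = scan_size_gb(14_000)   # "First Generation", 25-micrometre pixel size
dss2 = scan_size_gb(23_040)   # "Second Generation", 15-micrometre pixel size

print(f"DSS1 raw scan: {dss1:.2f} GB")   # about 0.39 GB
print(f"DSS2 raw scan: {dss2:.2f} GB")   # about 1.06 GB
print(f"DSS1 after ~7x H-transform compression: {dss1 / 7:.3f} GB")  # about 0.056 GB
```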
Digitized Sky Survey
[ "Astronomy" ]
1,131
[ "Astronomical surveys", "Works about astronomy", "Astronomical catalogues", "Astronomical databases", "Astronomical objects" ]
972,208
https://en.wikipedia.org/wiki/Other%20%28philosophy%29
Other is a term used to define another person or people as separate from oneself. In phenomenology, the terms the Other and the Constitutive Other distinguish other people from the Self, as a cumulative, constituting factor in the self-image of a person; as acknowledgement of being real; hence, the Other is dissimilar to and the opposite of the Self, of Us, and of the Same. The Constitutive Other is the relation between the personality (essential nature) and the person (body) of a human being; the relation of essential and superficial characteristics of personal identity that corresponds to the relationship between opposite, but correlative, characteristics of the Self, because the difference is inner-difference, within the Self. The condition and quality of Otherness (the characteristics of the Other) is the state of being different from and alien to the social identity of a person and to the identity of the Self. In the discourse of philosophy, the term Otherness identifies and refers to the characteristics of Who? and What? of the Other, which are distinct and separate from the Symbolic order of things; from the Real (the authentic and unchangeable); from the æsthetic (art, beauty, taste); from political philosophy; from social norms and social identity; and from the Self. Therefore, the condition of Otherness is a person's non-conformity to and with the social norms of society; and Otherness is the condition of disenfranchisement (political exclusion), effected either by the State or by the social institutions (e.g., the professions) invested with the corresponding socio-political power. Therefore, the imposition of Otherness alienates the person labelled as "the Other" from the centre of society, and places him or her at the margins of society, for being the Other. The term Othering or Otherizing describes the reductive action of labelling and defining a person as a subaltern native, as someone who belongs to the socially subordinate category of the Other. The practice of Othering excludes persons who do not fit the norm of the social group, which is a version of the Self; likewise, in human geography, the practice of othering persons means to exclude and displace them from the social group to the margins of society, where mainstream social norms do not apply to them, for being the Other. Background Philosophy The concept of the Self requires the existence of the constitutive Other as the counterpart entity required for defining the Self. Accordingly, in the late 18th century, Georg Wilhelm Friedrich Hegel (1770–1831) introduced the concept of the Other as a constituent part of self-consciousness (preoccupation with the Self), which complemented the propositions about self-awareness (capacity for introspection) proffered by Johann Gottlieb Fichte (1762–1814). John Stuart Mill (1806–1873) introduced the idea of the other mind in 1865 in An Examination of Sir William Hamilton's Philosophy, the first formulation of the other after René Descartes (1596–1650). Edmund Husserl (1859–1938) applied the concept of the Other as the basis for intersubjectivity, the psychological relations among people. In Cartesian Meditations: An Introduction to Phenomenology (1931), Husserl said that the Other is constituted as an alter ego, as an other self. As such, the Other person posed and was an epistemological problem—of being only a perception of the consciousness of the Self. 
In Being and Nothingness: An Essay on Phenomenological Ontology (1943), Jean-Paul Sartre (1905–1980) applied the dialectic of intersubjectivity to describe how the world is altered by the appearance of the Other, of how the world then appears to be oriented to the Other person, and not to the Self. The Other appears as a psychological phenomenon in the course of a person's life, and not as a radical threat to the existence of the Self. In that mode, in The Second Sex (1949), Simone de Beauvoir (1908–1986) applied the concept of Otherness to Hegel's dialectic of the "Lord and Bondsman" (Herrschaft und Knechtschaft, 1807) and found it to be like the dialectic of the Man–Woman relationship, thus a true explanation for society's treatment and mistreatment of women. Psychology The psychoanalyst Jacques Lacan (1901–1981) and the philosopher of ethics Emmanuel Levinas (1906–1995) established the contemporary definitions, usages, and applications of the constitutive Other, as the radical counterpart of the Self. Lacan associated the Other with language and with the symbolic order of things. Levinas associated the Other with the ethical metaphysics of scripture and tradition; the ethical proposition is that the Other is superior and prior to the Self. In the event, Levinas re-formulated the face-to-face encounter (wherein a person is morally responsible to the Other person) to include the propositions of Jacques Derrida (1930–2004) about the impossibility of the Other (person) being an entirely metaphysical pure-presence. That the Other could be an entity of pure Otherness (of alterity) personified in a representation created and depicted with language that identifies, describes, and classifies. The conceptual re-formulation of the nature of the Other also included Levinas's analysis of the distinction between "the saying and the said"; nonetheless, the nature of the Other retained the priority of ethics over metaphysics. In the psychology of the mind (e.g. R. D. Laing), the Other identifies and refers to the unconscious mind, to silence, to insanity, and to language ("to what is referred and to what is unsaid"). Nonetheless, in such psychologic and analytic usages, there might arise a tendency to relativism if the Other person (as a being of pure, abstract alterity) leads to ignoring the commonality of truth. Likewise, problems arise from unethical usages of the terms The Other, Otherness, and Othering to reinforce ontological divisions of reality: of being, of becoming, and of existence. Ethics In Totality and Infinity: An Essay on Exteriority (1961), Emmanuel Lévinas said that previous philosophy had reduced the constitutive Other to an object of consciousness, by not preserving its absolute alterity—the innate condition of otherness, by which the Other radically transcends the Self and the totality of the human network, into which the Other is being placed. As a challenge to self-assurance, the existence of the Other is a matter of ethics, because the ethical priority of the Other equals the primacy of ethics over ontology in real life. From that perspective, Lévinas described the nature of the Other as "insomnia and wakefulness"; an ecstasy (an exteriority) towards the Other that forever remains beyond any attempt at fully capturing the Other, whose Otherness is infinite; even in the murder of an Other, the Otherness of the person remains uncontrolled and not negated. 
The infinity of the Other allowed Lévinas to derive other aspects of philosophy and science as secondary to that ethic; thus: Critical theory Jacques Derrida said that the absolute alterity of the Other is compromised, because the Other person is other than the Self and the group. The logic of alterity (otherness) is especially negative in the realm of human geography, wherein the native Other is denied ethical priority as a person with the right to participate in the geopolitical discourse with an empire who decides the colonial fate of the homeland of the Other. In that vein, the language of Otherness used in Oriental Studies perpetuates the cultural perspective of the dominantor–dominated relation, which is characteristic of hegemony; likewise, the sociologic misrepresentation of the feminine as the sexual Other to man reasserts male privilege as the primary voice in social discourse between women and men. In The Colonial Present: Afghanistan, Palestine and Iraq (2004), the geographer Derek Gregory said that the US government's ideologic answers to questions about reasons for the terrorist attacks against the U.S. (i.e. 11 September 2001) reinforced the imperial purpose of the negative representations of the Middle-Eastern Other; especially when President G. W. Bush (2001–2009) rhetorically asked: "Why do they hate us?" as political prelude to the War on Terror (2001). Bush's rhetorical interrogation of armed resistance to empire, by the non–Western Other, produced an Us-and-Them mentality in American relations with the non-white peoples of the Middle East; hence, as foreign policy, the War on Terror is fought for control of imaginary geographies, which originated from the fetishised cultural representations of the Other invented by Orientalists; the cultural critic Edward Saïd said that: Imperialism and colonialism The contemporary, post-colonial world system of nation-states (with interdependent politics and economies) was preceded by the European imperial system of economic and settler colonies in which "the creation and maintenance of an unequal economic, cultural, and territorial relationship, usually between states, and often in the form of an empire, [was] based on domination and subordination." In the imperialist world system, political and economic affairs were fragmented, and the discrete empires "provided for most of their own needs ... [and disseminated] their influence solely through conquest [empire] or the threat of conquest [hegemony]." Racism The racialist perspective of the Western world during the 18th and 19th centuries was invented with the Othering of non-white peoples, which also was supported with the fabrications of scientific racism, such as the pseudo-science of phrenology, which claimed that, in relation to a white-man's head, the head-size of the non-European Other indicated inferior intelligence; e.g. the apartheid-era cultural representations of coloured people in South Africa (1948–94). Consequent to the Holocaust (1941–1945), with documents such as The Race Question (1950) and the Declaration on the Elimination of All Forms of Racial Discrimination (1963), the United Nations officially declared that racial differences are insignificant to anthropological likeness among human beings. Despite the United Nations' factual dismissal of racialism, institutional Othering in the United States produces the cultural misrepresentation of political refugees as illegal immigrants (from overseas) and of immigrants as illegal aliens (usually from México). 
Orientalism To European people, imperialism (military conquest of non-white people, annexation, and economic integration of their countries to the motherland) was intellectually justified by (among other reasons) orientalism, the study and fetishization of the Eastern world as "primitive peoples" requiring modernisation, the civilising mission. Colonial empires were justified and realised with essentialist and reductive representations (of people, places, and cultures) in books and pictures and fashion, which conflated different cultures and peoples into the binary relation of the Orient and the Occident. Orientalism created the artificial existence of the Western Self and the non–western Other. Orientalists rationalised the cultural artifice of a difference of essence between white and non-white peoples to fetishize (identify, classify, subordinate) the peoples and cultures of Asia into "the Oriental Other"—who exists in opposition to the Western Self. As a function of imperial ideology, Orientalism fetishizes people and things in three actions of cultural imperialism: (i) Homogenization (all Oriental peoples are one folk); (ii) Feminization (the Oriental always is subordinate in the East–West relation); and (iii) Essentialization (a people possess universal characteristics); thus established by Othering, the empire's cultural hegemony reduces to inferiority the people, places, and things of the Eastern world, as measured against the West, the standard of superior civilisation. Subaltern native Colonial stability requires the cultural subordination of the non-white Other for transformation into the subaltern native; a colonised people who facilitate the exploitation of their labour, of their lands, and of the natural resources of their country. The practise of Othering justifies the physical domination and cultural subordination of the native people by degrading them—first from being a national-citizen to being a colonial-subject—and then by displacing them to the periphery of the colony, and of geopolitical enterprise that is imperialism. Using the false dichotomy of "colonial strength" (imperial power) against "native weakness" (military, social, and economic), the coloniser invents the non-white Other in an artificial dominator-dominated relationship that can be resolved only through racialist noblesse oblige, the "moral responsibility" that psychologically allows the colonialist Self to believe that imperialism is a civilising mission to educate, convert, and then culturally assimilate the Other into the empire—thus transforming the "civilised" Other into the Self. In establishing a colony, Othering a non-white people allowed the colonisers to physically subdue and "civilise" the natives to establish the hierarchies of domination (political and social) required for exploiting the subordinated natives and their country. As a function of empire, a settler colony is an economic means for profitably disposing of two demographic groups: (i) the colonists (surplus population of the motherland) and (ii) the colonised (the subaltern native to be exploited) who antagonistically define and represent the Other as separate and apart from the colonial Self. Othering establishes unequal relationships of power between the colonised natives and the colonisers, who believe themselves essentially superior to the natives whom they othered into racial inferiority, as the non-white Other. 
That dehumanisation maintains the false binary-relations of social class, caste, and race, of sex and gender, and of nation and religion. The profitable functioning of a colony (economic or settler) requires continual protection of the cultural demarcations that are basic to the unequal socio-economic relation between the "civilised man" (the colonist) and the "savage man", thus the transformation of the Other into the colonial subaltern. Gender and sex LGBT identities The social exclusion function of Othering a person or a social group from mainstream society to the social margins—for being essentially different from the societal norm (the plural Self)—is a socio-economic function of gender. In a society wherein man–woman heterosexuality is the sexual norm, the Other refers to and identifies lesbians (women who love women) and gays (men who love men) as people of same-sex orientation whom society has othered as "sexually deviant" from the norms of binary-gender heterosexuality. In practise, sexual Othering is realised by applying the negative denotations and connotations of the terms that describe lesbian, gay, bisexual, and transgender people, in order to diminish their personal social status and political power, and so displace their LGBT communities to the legal margin of society. To neutralise such cultural Othering, LGBT communities queer a city by creating social spaces that use the spatial and temporal plans of the city to allow the LGBT communities free expression of their social identities, e.g. a boystown, a gay-pride parade, etc.; as such, queering urban spaces is a political means for the non-binary sexual Other to establish themselves as citizens integral to the reality (cultural and socio-economic) of their city's body politic. Woman as identity The philosopher of feminism, Cheshire Calhoun identified the female Other as the female-half of the binary-gender relation that is the Man and Woman relation. The deconstruction of the word Woman (the subordinate party in the Man and Woman relation) produced a conceptual reconstruction of the female Other as the Woman who exists independently of male definition, as rationalised by patriarchy. That the female Other is a self-aware Woman who is autonomous and independent of the patriarchy's formal subordination of the female sex with the institutional limitations of social convention, tradition, and customary law; the social subordination of women is communicated (denoted and connoted) in the sexist usages of the word Woman. In 1949, the philosopher of existentialism, Simone de Beauvoir applied Hegel's conception of "the Other" (as a constituent part of self-awareness) to describe a male-dominated culture that represents Woman as the sexual Other to Man. In a patriarchal culture, the Man–Woman relation is society's normative binary-gender relation, wherein the sexual Other is a social minority with the least socio-political agency, usually the women of the community, because patriarchal semantics established that "a man represents both the positive and the neutral, as indicated by the common use of [the word] Man to designate human beings in general; whereas [the word] Woman represents only the negative, defined by limiting criteria, without reciprocity" from the first sex, from Man. In 1957, Betty Friedan reported that a woman's social identity is formally established by the sexual politics of the Ordinate–Subordinate nature of the Man–Woman sexual relation, the social norm in the patriarchal West. 
When queried about their post-graduate lives, the majority of women interviewed at a university-class reunion, used binary gender language, and referred to and identified themselves by their social roles (wife, mother, lover) in the private sphere of life; and did not identify themselves by their own achievements (job, career, business) in the public sphere of life. Unawares, the women had acted conventionally, and automatically identified and referred to themselves as the social Other to men. Although the nature of the social Other is influenced by the society's social constructs (social class, sex, gender), as a human organisation, society holds the socio-political power to formally change the social relation between the male-defined Self and Woman, the sexual Other, who is not male. In feminist definition, women are the Other to men (but not the Other proposed by Hegel) and are not existentially defined by masculine demands; and also are the social Other who unknowingly accepts social subjugation as part of subjectivity, because the gender identity of woman is constitutionally different from the gender identity of man. The harm of Othering is in the asymmetric nature of unequal roles in sexual and gender relations; the inequality arises from the social mechanics of intersubjectivity. Knowledge Cultural representations About the production of knowledge of the Other who is not the Self, the philosopher Michel Foucault said that Othering is the creation and maintenance of imaginary "knowledge of the Other"—which comprises cultural representations in service to socio-political power and the establishment of hierarchies of domination. That cultural representations of the Other (as a metaphor, as a metonym, and as an anthropomorphism) are manifestations of the xenophobia inherent to the European historiographies that defined and labelled non–European peoples as the Other who is not the European Self. Supported by the reductive discourses (academic and commercial, geopolitical and military) of the empire's dominant ideology, the colonialist misrepresentations of the Other explain the Eastern world to the Western world as a binary relation of native weakness against colonial strength. In the 19th-century historiographies of the Orient as a cultural region, the Orientalists studied only what they said was the high culture (languages and literatures, arts and philologies) of the Middle East, but did not study that geographic space as a place inhabited by different nations and societies. About that Western version of the Orient, Edward Saïd said that: In so far as the Orient occurred in the existential awareness of the Western world, as a term, The Orient later accrued many meanings and associations, denotations, and connotations that did not refer to the real peoples, cultures, and geography of the Eastern world, but to Oriental studies, the academic field about the Orient as a word. Academia In the Eastern world, the field of Occidentalism, the investigation programme and academic curriculum of and about the essence of the West—Europe as a culturally homogeneous place—did not exist as a counterpart to Orientalism. In the postmodern era, the Orientalist practices of historical negationism, the writing of distorted histories about the places and peoples of "The East", continues in contemporary journalism; e.g. 
in the Third World, political parties practice Othering with fabricated facts about threat-reports and non-existent threats (political, social, military) that are meant to politically delegitimise opponent political parties composed of people from the social and ethnic groups designated as the Other in that society. The Othering of a person or of a social group—by means of an ideal ethnocentricity (the ethnic group of the Self) that evaluates and assigns negative, cultural meaning to the ethnic Other—is realised through cartography; hence, the maps of Western cartographers emphasised and bolstered artificial representations of the national-identities, the natural resources, and the cultures of the native inhabitants, as culturally inferior to the West. Historically, Western cartography often featured distortions (proportionate, proximate, and commercial) of places and true distances by placing the cartographer's homeland in the centre of the mapamundi; these ideas were often utilized to support imperialistic expansion. In contemporary cartography, the polar-perspective maps of the northern hemisphere, drawn by U.S. cartographers, also frequently feature distorted spatial relations (distance, size, mass) of and between the U.S. and Russia which according to historian Jerome D. Fellman emphasise the perceived inferiority (military, cultural, geopolitical) of the Russian Other. Practical perspectives In Key Concepts in Political Geography (2009), Alison Mountz proposed concrete definitions of the Other as a philosophic concept and as a term within phenomenology; as a noun, the Other identifies and refers to a person and to a group of persons; as a verb, the Other identifies and refers to a category and a label for persons and things. Post-colonial scholarship demonstrated that, in pursuit of empire, "the colonizing powers narrated an 'Other' whom they set out to save, dominate, control, [and] civilize . . . [in order to] extract resources through colonization" of the country whose people the colonial power designated as the Other. As facilitated by Orientalist representations of the non–Western Other, colonization—the economic exploitation of a people and their land—is misrepresented as a civilizing mission launched for the material, cultural, and spiritual benefit of the colonized peoples. Counter to the post-colonial perspective of the Other as part of a Dominator–Dominated binary relationship, postmodern philosophy presents the Other and Otherness as phenomenological and ontological progress for Man and Society. Public knowledge of the social identity of peoples classified as "Outsiders" is de facto acknowledgement of their being real, thus they are part of the body politic, especially in the cities. As such, "the post-modern city is a geographical celebration of difference that moves sites once conceived of as 'marginal' to the [social] centre of discussion and analysis" of the human relations between the Outsiders and the Establishment. See also References Sources Thomas, Calvin, ed. (2000). "Introduction: Identification, Appropriation, Proliferation", Straight with a Twist: Queer Theory and the Subject of Heterosexuality. University of Illinois Press. . Cahoone, Lawrence (1996). From Modernism to Postmodernism: An Anthology. Cambridge, Mass.: Blackwell. Colwill, Elizabeth. (2005). Reader—Wmnst 590: Feminist Thought. KB Books. Haslanger, Sally. Feminism and Metaphysics: Unmasking Hidden Ontologies. 28 November 2005. McCann, Carole. Kim, Seung-Kyung. (2003). 
Feminist Theory Reader: Local and Global Perspectives. Routledge. New York, NY. Rimbaud, Arthur (1966). "Letter to Georges Izambard", Complete Works and Selected Letters. Trans. Wallace Fowlie. Chicago: University of Chicago Press. Nietzsche, Friedrich (1974). The Gay Science. Trans. Walter Kaufmann. New York: Vintage. Saussure, Ferdinand de (1986). Course in General Linguistics. Eds. Charles Bally and Albert Sechehaye. Trans. Roy Harris. La Salle, Ill.: Open Court. Lacan, Jacques (1977). Écrits: A Selection. Trans. Alan Sheridan. New York: Norton. Althusser, Louis (1973). Lenin and Philosophy and Other Essays. Trans. Ben Brewster. New York: Monthly Review Press. Warner, Michael (1990). "Homo-Narcissism; or, Heterosexuality", Engendering Men, p. 191. Eds. Boone and Cadden, London UK: Routledge. Tuttle, Howard (1996). The Crowd is Untruth, Peter Lang Publishing, . Further reading Levinas, Emmanuel (1974). Autrement qu'être ou au-delà de l'essence. (Otherwise than Being or Beyond Essence). Levinas, Emmanuel (1972). Humanism de l'autre homme. Fata Morgana. Lacan, Jacques (1966). Ecrits. London: Tavistock, 1977. Lacan, Jacques (1964). The Four Fondamental Concepts of Psycho-analysis. London: Hogarth Press, 1977. Foucault, Michel (1990). The History of Sexuality vol. 1: An Introduction. Trans. Robert Hurley. New York: Vintage. Derrida, Jacques (1973). Speech and Phenomena and Other Essays on Husserl's Theory of Signs. Trans. David B. Allison. Evanston: Ill.: Northwestern University Press. Kristeva, Julia (1982). Powers of Horror: An Essay on Abjection. Trans. Leon S. Roudiez. New York: Columbia University Press. Butler, Judith (1990). Gender Trouble: Feminism and the Subversion of Identity. New York: Routledge. Butler, Judith (1993). Bodies That Matter: On the Discursive Limits of "Sex". New York: Routledge. Zuckermann, Ghil'ad (2006), "'Etymythological Othering' and the Power of 'Lexical Engineering' in Judaism, Islam and Christianity. A Socio-Philo(sopho)logical Perspective", Explorations in the Sociology of Language and Religion, edited by Tope Omoniyi and Joshua A. Fishman, Amsterdam: John Benjamins, pp. 237–258. External links The Centre for Studies in Otherness Concepts in metaphysics Phenomenology Conceptions of self Identity (philosophy) Discrimination
Other (philosophy)
[ "Biology" ]
5,680
[ "Behavior", "Aggression", "Discrimination" ]
972,242
https://en.wikipedia.org/wiki/Conditions%20comorbid%20to%20autism
Autism spectrum disorder (ASD) or simply autism is a neurodevelopmental disorder that begins in early childhood, persists throughout adulthood, and is characterized by difficulties in social communication and restricted, repetitive patterns of behavior. There are many conditions comorbid to autism, such as attention deficit hyperactivity disorder, anxiety disorders, and epilepsy. In medicine, comorbidity is the presence of one or more additional conditions co-occurring with the primary one, or the effect of such additional disorders. Distinguishing between ASD and other diagnoses can be challenging because the traits of ASD often overlap with symptoms of other disorders, and the characteristics of ASD make traditional diagnostic procedures difficult. Autism is associated with several genetic disorders, perhaps due to an overlap in genetic causes. About 10–15% of autism cases have an identifiable Mendelian (single-gene) condition, chromosome abnormality, or other genetic syndrome, a category referred to as syndromic autism. Comorbid conditions Abnormal folate metabolism Several lines of evidence indicate abnormalities of folate metabolism in ASD. These abnormalities can lead to a decrease in 5-methyltetrahydrofolate production, alter the production of folate metabolites and reduce folate transport across the blood-brain barrier and in neurons. The most significant abnormalities of folate metabolism associated with ASD may be autoantibodies to the alpha folate receptor (FRα). These autoantibodies have been associated with cerebral folate deficiency. Autoantibodies can bind to FRα and greatly impair its function. In 2013, one study reported that 60% and 44% of 93 children with ASD were positive for FRα-blocking and binding autoantibodies, respectively. This high rate of anti-FRα autoantibody positivity was confirmed by Ramaekers et al., who compared 75 children with ASD to 30 non-autistic "controls". These controls were children who had a developmental delay, but did not have ASD. FRα-blocking autoantibodies were positive in 47% of children with ASD, but only in 3% of children without ASD. Many children with ASD and cerebral folate deficiency have marked improvements in their clinical status when taking folinic acid. A case series of five children with cerebral folate deficiency and low-functioning autism with neurological deficits found a complete reduction of ASD symptoms with the use of folinic acid in one child and substantial improvements in communication in two other children. Abnormal redox metabolism An imbalance in glutathione-dependent redox metabolism has been shown to be associated with autism spectrum disorder (ASD). Glutathione synthesis and intracellular redox balance are related to folate metabolism and methylation, metabolic pathways that have also been shown to be abnormal in ASD. Together, these metabolic abnormalities define a distinct endophenotype of ASD closely associated with genetic, epigenetic and mitochondrial abnormalities, as well as environmental factors related to ASD. Glutathione is involved in neuroprotection against oxidative stress and neuroinflammation by improving the antioxidant stress system. In autistic children, studies have shown that glutathione metabolism can be improved by subcutaneous injection of methylcobalamin (a form of vitamin B12), oral folinic acid, a vitamin and mineral supplement that includes antioxidants, coenzyme Q10 and B vitamins, or tetrahydrobiopterin. 
Recent double-blind placebo-controlled (DBPC) studies have shown that N-acetyl-L-cysteine, a glutathione precursor supplement, is effective in improving the symptoms and behaviors associated with ASD. However, glutathione was not measured in these studies. Small, medium and large DBPC trials, as well as small and medium-sized open clinical trials, demonstrate that new treatments targeting oxidative stress in children with ASD are associated with improvements in baseline symptoms of ASD, sleep, gastrointestinal symptoms, hyperactivity, seizures, parental impression, and sensory and motor symptoms. These new treatments include N-acetyl-L-cysteine, methylcobalamin with and without oral folinic acid, vitamin C, and a vitamin and mineral supplement that includes antioxidants, coenzyme Q10, and B vitamins. Several other treatments that have antioxidant properties, including carnosine, have also been reported to significantly improve ASD behaviors, suggesting that treatment of oxidative stress could be beneficial for children with ASD. Many antioxidants can also help improve mitochondrial function, suggesting that clinical improvements with antioxidants could occur through a reduction in oxidative stress and an improvement in mitochondrial function. Some of these treatments can have frequent serious side effects (such as bronchospasm). Anxiety Anxiety disorders are common among children and adults with ASD. Symptoms are likely affected by age, level of cognitive functioning, degree of social impairment, and ASD-specific difficulties. Many anxiety disorders, such as social anxiety disorder and generalized anxiety disorder, are not commonly diagnosed in people with ASD because such symptoms are better explained by ASD itself, and it is often difficult to tell whether symptoms such as compulsive checking are part of ASD or a co-occurring anxiety problem. The prevalence of anxiety disorders in children with ASD has been reported to be anywhere between 11% and 84%; the wide range is likely due to differences in the ways the studies were conducted. A systematic review summarized available evidence on interventions to reduce anxiety in school children with autism spectrum disorder. Of the 24 studies reviewed, 22 used a cognitive behavioral therapy (CBT) approach. The review found that CBT was moderately to highly effective at reducing anxiety in school children with autism spectrum disorder, but that effects varied depending on whether they were reported by clinicians, by parents or by the children themselves. Treatments involving parents, and one-on-one treatments compared to group treatments, were more effective. Attention deficit hyperactivity disorder The diagnostic manual DSM-IV did not allow the co-diagnosis of ASD and attention deficit hyperactivity disorder (ADHD). However, following years of clinical research, the DSM-5 released in 2013 removed this prohibition of co-morbidity. Thus, individuals with autism spectrum disorder may also have a diagnosis of ADHD, with the modifiers of a predominantly inattentive, hyperactive, combined, or not otherwise specified presentation. Clinically significant symptoms of these two conditions commonly co-occur, and children with both sets of symptoms may respond poorly to standard ADHD treatments. Individuals with autism spectrum disorder may benefit from additional types of medications. The term AuDHD is sometimes used for those with both autism and ADHD. There are also studies suggesting noticeable differences in presenting symptoms by gender, which can complicate diagnosis, especially in adulthood. 
Avoidant/restrictive food intake disorder Avoidant/restrictive food intake disorder (ARFID) is a feeding or eating disorder in which individuals significantly limit the volume or variety of foods they consume, causing malnutrition, weight loss, and psychosocial problems. A 2023 review concluded that "there is considerable overlap between ARFID and autism," finding that 8% to 55% of children diagnosed with ARFID were autistic. Unlike eating disorders such as anorexia nervosa and bulimia, body image disturbance is not a root cause. Individuals with ARFID may have trouble eating due to the sensory characteristics of food (appearance, smell, texture, or taste); executive function dysregulation; fears of choking or vomiting; low appetite; or a combination of these factors. Bipolar disorder Bipolar disorder, or manic-depression, is itself often claimed to be comorbid with a number of conditions, including autism. Autism includes some symptoms commonly found in mood and anxiety disorders. Bowel disease Gastrointestinal symptoms are a common comorbidity in patients with autism spectrum disorders (ASD), even though the underlying mechanisms are largely unknown. The most common gastrointestinal symptoms, as reported by a proprietary tool developed and administered by Mayer, Padua, and Tillisch (2014), are abdominal pain, constipation, diarrhea and bloating, each reported in at least 25 percent of participants. Carbohydrate digestion and transport are impaired in individuals with autism spectrum disorder; this is thought to be attributable to functional disturbances that cause increased intestinal permeability, deficient disaccharidase enzyme activity, increased secretin-induced pancreatico-biliary secretion, and abnormal Clostridia taxa in the fecal flora. Altered gastrointestinal function accompanied by pain may induce feeding issues and increase perceived negative behaviors, including self-injury, in individuals with autism. Brain fog Brain fog is a constellation of symptoms that include reduced cognition, inability to concentrate and multitask, as well as loss of short- and long-term memory. Brain fog can be present in patients with autism spectrum disorder (ASD). Its prevalence, however, remains unknown. Depression Major depressive disorder has been shown by several studies to be one of the most common comorbid conditions in those with ASD, and is thought to develop and occur more in high-functioning individuals during adolescence, when the individual develops greater insight into their differences from others. In addition, the presentation of depression in ASDs can depend on the level of cognitive functioning in the individual, with lower functioning children displaying more behavioral issues and higher functioning children displaying more traditional depressive symptoms. Developmental coordination disorder (dyspraxia) The initial accounts of Asperger syndrome and other diagnostic schemes include descriptions of developmental coordination disorder. Children with ASD may be delayed in acquiring motor skills that require motor dexterity, such as bicycle riding or opening a jar, and may appear awkward or "uncomfortable in their own skin". They may be poorly coordinated, or have an odd or bouncy gait or posture, poor handwriting, other hand/dexterity impairments, or problems with visual-motor integration, visual-perceptual skills, and conceptual learning. 
They may show problems with proprioception (sensation of body position) on measures of developmental coordination disorder, balance, tandem gait, and finger-thumb apposition. Epilepsy ASD is also associated with epilepsy, with variations in risk of epilepsy due to age, cognitive level, and type of language disorder. One in four autistic children develops seizures, often starting either in early childhood or adolescence. Seizures, caused by abnormal electrical activity in the brain, can produce a temporary loss of consciousness (a "blackout"), a body convulsion, unusual movements, or staring spells. Sometimes a contributing factor is a lack of sleep or a high fever. An EEG can help confirm the seizure's presence. Typically, onset of epilepsy occurs before age five or during puberty, and is more common in females and individuals who also have a comorbid intellectual disability. Fetal alcohol spectrum disorder Fetal alcohol spectrum disorder (FASD) is a common disorder that can mimic the signs of ASD. Although results from studies are mixed, it is estimated that 2.6% of children with an FASD have an ASD as well, a rate almost two times higher than that reported in the general US population. Fragile X syndrome Fragile X syndrome is the most common inherited form of intellectual disability. It was so named because one part of the X chromosome has a defective piece that appears pinched and fragile when under a microscope. Fragile X syndrome affects about two to five percent of people with ASD. If one child has Fragile X, there is a 50% chance that boys born to the same parents will have Fragile X (see Mendelian genetics). Other members of the family who may be contemplating having a child may also wish to be checked for the syndrome. Gender dysphoria Gender dysphoria is a diagnosis given to transgender people who experience discomfort related to their gender identity. Autistic people are more likely to experience gender dysphoria. Around 20% of gender identity clinic-assessed individuals reported characteristics of ASD. Hypermobility spectrum disorder and Ehlers–Danlos syndromes Studies have confirmed a link between hereditary connective tissue disorders such as Ehlers-Danlos syndromes (EDS) and hypermobility spectrum disorder (HSD) with autism, as a comorbidity and a co-occurrence within the same families. Intellectual disability The fraction of autistic individuals who also meet criteria for intellectual disability has been reported as anywhere from 25% to 70%. This wide variation illustrates the difficulty of assessing intelligence in autistic individuals. For example, a 2001 British study of 26 autistic children found only about 30% with intelligence in the normal range (IQ above 70), 50% with a mild to moderate intellectual disability, and about 20% with a severe to profound intellectual disability (IQ below 35). For ASD other than autism the association is much weaker: the same study reported typical levels of intelligence in about 94% of 53 children with PDD-NOS. Estimates are that 40–69% of individuals with ASD have some degree of an intellectual disability, with females more likely to be in severe range of an intellectual disability. Learning disabilities are also highly comorbid in individuals with an ASD. Approximately 25–75% of individuals with an ASD also have some degree of learning disability, although the types of learning disability vary depending on the specific strengths and weaknesses of the individual. 
A 2006 review questioned the common assumption that most children with autism have an intellectual disability. It is possible that the association between an intellectual disability and autism is not because they usually have common causes, but because the presence of both makes it more likely that both will be diagnosed. The CDC states that, based on information from 11 reporting states, 46% of people with autism have an IQ above 85. Mitochondrial diseases The central player in bioenergetics is the mitochondrion. Mitochondria produce about 90% of cellular energy, regulate cellular redox status, produce ROS, maintain homeostasis, synthesize and degrade high-energy biochemical intermediates, and regulate cell death through activation of the mitochondrial permeability transition pore (mtPTP). When they fail, less and less energy is generated within the cell. Cell injury and even cell death follow. If this process is repeated throughout the body, whole organ systems begin to fail. Mitochondrial diseases are a heterogeneous group of disorders that can affect multiple organs with varying severity. Symptoms may be acute or chronic with intermittent decompensation. Manifestations include encephalopathy, stroke, cognitive regression, seizures, cardiopathies (cardiac conduction defects, hypertensive heart disease, cardiomyopathy, etc.), diabetes, visual and hearing loss, organ failure, neuropathic pain and peripheral neuropathy. Prevalence estimates of mitochondrial disease and dysfunction across studies range from about 5 to 80%. This may be, in part, due to the unclear distinction between mitochondrial disease and dysfunction. Mitochondrial diseases are difficult to diagnose, but they have become better recognized and more frequently detected; studies indicating the highest rates of mitochondrial diagnosis are usually the most recent. Some drugs are toxic to mitochondria. These can trigger or aggravate mitochondrial dysfunction or disease. Antiepileptics: Valproic acid (also used in various other indications) and phenytoin are the most toxic. Phenobarbital, carbamazepine, oxcarbazepine, ethosuximide, zonisamide, topiramate, gabapentin and vigabatrin are also toxic to mitochondria. Other types of drugs: Corticosteroids (such as cortisone), isotretinoin (Accutane) and other vitamin A derivatives, barbiturates, certain antibiotics, propofol, volatile anesthetics, non-depolarizing muscle relaxants, some local anesthetics, statins, fibrates, glitazones, beta blockers, biguanides, amiodarone, some chemotherapies, some neuroleptics, nucleoside reverse transcriptase inhibitors and various other drugs. Neurofibromatosis type I ASD is also associated with neurofibromatosis type I (NF-1). NF-1 is a complex multi-system human disorder caused by the mutation of a gene on chromosome 17 that is responsible for production of a protein, called neurofibromin 1, which is needed for normal function in many human cell types. NF-1 causes tumors along the nervous system which can grow anywhere on the body. NF-1 is one of the most common genetic disorders and is not limited to any person's race or sex. NF-1 is an autosomal dominant disorder, which means that mutation or deletion of one copy (or allele) of the NF-1 gene is sufficient for the development of NF-1, although presentation varies widely and is often different even between relatives affected by NF-1. Neuroinflammation and immune disorders The role of the immune system and neuroinflammation in the development of autism is controversial. 
Until recently, there was scant evidence supporting immune hypotheses, but research into the role of immune response and neuroinflammation may have important clinical and therapeutic implications. The exact role of heightened immune response in the central nervous system (CNS) of patients with autism is uncertain, but may be a primary factor in triggering and sustaining many of the comorbid conditions associated with autism. Recent studies indicate the presence of heightened neuroimmune activity in both the brain tissue and the cerebrospinal fluid of patients with autism, supporting the view that heightened immune response may be an essential factor in the onset of autistic symptoms. A 2013 review also found evidence of microglial activation and increased cytokine production in postmortem brain samples from people with autism. Neuropathies The prevalence of peripheral neuropathies appears to be significantly increased in ASD. Peripheral neuropathies may be asymptomatic. Peripheral neuropathy is a common manifestation of mitochondrial diseases, and polyneuropathies appear to be relatively common. Neuropathies could also be caused by other features of ASD. Neurotransmitter anomalies Nonverbal learning disorder Nonverbal learning disorder is a proposed category of neurodevelopmental disorder characterized by core deficits in non-verbal skills, especially visual-spatial processing. People with this condition have normal or advanced verbal intelligence and significantly lower nonverbal intelligence. Obsessive–compulsive disorder Obsessive–compulsive disorder is characterized by recurrent obsessive thoughts or compulsive acts. About 30% of individuals with autism spectrum disorders also have OCD. Obsessive–compulsive personality disorder Obsessive–compulsive personality disorder (OCPD) is a cluster C personality disorder characterized by a general pattern of excessive concern with orderliness, perfectionism, attention to details, mental and interpersonal control and a need for control over one's environment which interferes with personal flexibility, openness to experience and efficiency as well as interfering with relationships. There are considerable similarities and overlap between autism and OCPD, such as list-making, inflexible adherence to rules and obsessive aspects of routines, though autism may be distinguished from OCPD especially regarding affective behaviors, poor social skills, difficulties with theory of mind and intense intellectual interests, e.g. an ability to recall every aspect of a hobby. A 2009 study involving adult autistic people found that 40% of those diagnosed with autism met the diagnostic requirements for a co-morbid OCPD diagnosis. Psychosis and schizophrenia Childhood-onset schizophrenia is preceded by childhood autistic spectrum disorders in almost half of cases, and an increasing number of similarities are being discovered between the two disorders. Studies have also found that the presence of psychosis in adulthood is significantly higher in those with autism spectrum disorders, especially those with PDD-NOS, than in the general population. 
This psychosis generally occurs in an unusual way, with most individuals with ASD experiencing a highly atypical collection of symptoms. Recent studies have also found that the core ASD symptoms also generally present in a slightly different way during the childhood of the individuals that will later become psychotic, long before the actual psychosis develops. Reduced NMDA‐receptor function Reduced NMDA receptor function has been linked to reduced social interactions, locomotor hyperactivity, self-injury, prepulse inhibition (PPI) deficits, and sensory hypersensitivity, among others. Results suggest that NMDA dysregulation could contribute to core ASD symptoms. Schizoid personality disorder Schizoid personality disorder (SPD) is a personality disorder characterized by a lack of interest in social relationships, a tendency towards a solitary or sheltered lifestyle, secretiveness, emotional coldness, detachment and apathy. Other associated features include stilted speech, a lack of deriving enjoyment from most, if not all, activities, feeling as though one is an "observer" rather than a participant in life, an inability to tolerate emotional expectations of others, apparent indifference when praised or criticised, a degree of asexuality and idiosyncratic moral or political beliefs. Symptoms typically start in late childhood or adolescence. Several studies have reported an overlap, confusion or comorbidity with Asperger syndrome (which has been combined with autism spectrum disorder and no longer appears as a diagnostic label in the DSM-5). Asperger syndrome was at one time called "schizoid disorder of childhood". Eugen Bleuler coined the term "autism" to describe withdrawal to an internal fantasy, against which any influence from outside becomes an intolerable disturbance. In a 2012 study of a sample of 54 young adults with Asperger syndrome, it was found that 26% of them also met criteria for SPD, the highest comorbidity out of any personality disorder in the sample (the other comorbidities were 19% for obsessive–compulsive personality disorder, 13% for avoidant personality disorder and one female with schizotypal personality disorder). Additionally, twice as many men with Asperger syndrome met criteria for SPD than women. While 41% of the whole sample were unemployed with no occupation, this rose to 62% for the Asperger's and SPD comorbid group. Although the cause for this comorbidity is not yet certain, genetic evidence for a spectrum between cluster A personality disorders/schizophrenia and autism spectrum disorders has been found. Tantam suggested that Asperger syndrome may confer an increased risk of developing SPD. In the same 2012 study, it was noted that the DSM may complicate diagnosis of SPD by requiring the exclusion of a pervasive developmental disorder (PDD) before establishing a diagnosis of SPD. The study found that social interaction, stereotyped behaviours and specific interests were more severe in the individuals with Asperger syndrome also fulfilling SPD criteria, against the notion that social interaction skills are unimpaired in SPD. The authors believe that a substantial subgroup of people with autism spectrum disorder or PDD have clear "schizoid traits" and correspond largely to the "loners" in Lorna Wing's classification The autism spectrum (Lancet 1997), described by Sula Wolff. 
Sensory problems Unusual responses to sensory stimuli are more common and prominent in individuals with autism, and sensory abnormalities are commonly recognized as diagnostic criteria in autism spectrum disorder (ASD), as reported in the DSM-5; although there is no good evidence that sensory symptoms differentiate autism from other developmental disorders. Sensory processing disorder is comorbid with ASD, with comorbidity rates of 42–88%. With or without meeting the standards of SPD, about 90% of ASD individuals have some type of atypical sensory experiences, described as both hyper- and hypo-reactivity. The prevalence of reported "unusual sensory behaviors" that affect functioning in everyday life is also higher, ranging from 45 to 95% depending on factors such as age, IQ and the control group used. Several studies have reported associated motor problems that include poor muscle tone, poor motor planning, and toe walking; ASD is not associated with severe motor disturbances. Many with ASD often find it uncomfortable to sit or stand in a way which neurotypical people will find ordinary, and may stand in an awkward position, such as with both feet together, supinating, sitting cross-legged or with one foot on top of the other or simply having an awkward gait. However, despite evidently occurring more often in people with ASD, all evidence is anecdotal and unresearched at this point. It has been observed by some psychologists that there is commonality to the way in which these 'awkward' positions may manifest. Sleep disorders Sleep disorders are commonly reported by parents of individuals with ASDs, including late sleep onset, early morning awakening, and poor sleep maintenance; sleep disturbances are present in 53–78% of individuals with ASD. Unlike general pediatric insomnia, which has its roots in behavior, sleep disorders in individuals with ASD are comorbid with other neurobiological, medical, and psychiatric issues. If not addressed, severe sleep disorders can exacerbate ASD behaviors such as self-injury; however, there are no Food and Drug Administration-approved pharmacological treatments for pediatric insomnia at this time. Studies have found abnormalities in the physiology of melatonin and circadian rhythm in people with autism spectrum disorders (ASD). These physiological abnormalities include lower concentrations of melatonin or melatonin metabolites in ASDs compared to controls. Some evidence suggests that melatonin supplements improve sleep patterns in children with autism but robust, high-quality studies are overall lacking. Strabismus According to several studies, there is a high prevalence of strabismus in autistic individuals, with rates 3–10 times that of the general population. Tinnitus According to one study, 35% of people who are autistic would be affected by tinnitus, which is much higher than in the general population. Tourette syndrome The prevalence of Tourette syndrome among individuals who are autistic is estimated to be 6.5%, higher than the 2% to 3% prevalence for the general population. Several hypotheses for this association have been advanced, including common genetic factors and dopamine, glutamate or serotonin abnormalities. Tuberous sclerosis Tuberous sclerosis is a rare genetic disorder that causes benign tumors to grow in the brain as well as in other vital organs. It has a consistently strong association with the autism spectrum. One to four percent of autistic people also have tuberous sclerosis. 
Studies have reported that between 25% and 61% of individuals with tuberous sclerosis meet the diagnostic criteria for autism, with an even higher proportion showing features of a broader pervasive developmental disorder. Turner syndrome Turner syndrome is a chromosomal condition wherein a person is born phenotypically female but with only one X chromosome or with X/XX mosaicism instead of XX or XY chromosomes. One study found that 23% of the girls with Turner syndrome who were included met criteria for a diagnosis of an autism spectrum disorder and the majority had "significant social communication difficulties." Vitamin deficiencies Vitamin deficiencies are more common in autism spectrum disorders than in the general population. Vitamin D: In a German study, vitamin D deficiency affected 78% of a hospitalized autistic population, and 52% of the entire ASD group in the study was severely deficient, which is much higher than in the general population. Other studies also show a higher rate of vitamin D deficiency in ASDs. Vitamin B12: One research team found that, overall, B12 levels in the brain tissue of autistic children were three times lower than those in the brain tissue of children not affected by ASD. This lower-than-normal B12 profile persisted throughout life in the brain tissues of patients with autism. These deficiencies are not detectable by conventional blood sampling. Classic vitamin B12 deficiency, which may affect up to 40% of the population, is one of the most serious vitamin deficiencies, but its prevalence has not yet been studied in autism spectrum disorders. Vitamin B9 (folic acid): Studies have been conducted regarding folic acid supplementation in autistic children. "The results showed that folic acid supplementation significantly improved certain symptoms of autism such as sociability, verbal/preverbal cognitive language, receptive language, and emotional expression and communication. In addition, this treatment improved the concentrations of folic acid, homocysteine and redox metabolism of standardized glutathione." Vitamin A: Vitamin A can induce mitochondrial dysfunction. According to a study not specific to ASD: "Vitamin A and its derivatives, retinoids, are micronutrients necessary for the human diet in order to maintain several cellular functions of human development in adulthood as well as during aging ... Although it is an essential micronutrient, used in clinical applications, vitamin A has several toxic effects on the redox environment and mitochondrial function. A decline in the quality of life and an increase in the mortality rate among users of vitamin A supplements have been reported. Although the exact mechanism by which vitamin A causes its deleterious effects is not yet clear ... Vitamin A and its derivatives, retinoids, disrupt mitochondrial function by a mechanism that is not fully understood." Zinc: Zinc deficiency incidence rates in children aged 0 to 3, 4 to 9 and 10 to 15 years were estimated at 43.5%, 28.1% and 3.3% for boys and at 52.5%, 28.7% and 3.5% among girls. Magnesium: Incidence rates of magnesium deficiency in children aged 0 to 3, 4 to 9 and 10 to 15 years were estimated at 27%, 17.1% and 4.2% for boys and at 22.9%, 12.7% and 4.3% among girls. Calcium: Incidence rates of calcium deficiency in children aged 0 to 3, 4 to 9 and 10 to 15 years were estimated at 10.4%, 6.1% and 0.4% for boys and at 3.4%, 1.7% and 0.9% among girls. 
It has been found that special diets that are inappropriate for children with ASD usually result in excessive amounts of certain nutrients and persistent vitamin deficiencies. Other mental disorders Phobias and other psychopathological disorders have often been described along with ASD but this has not been assessed systematically. Notes References Autism
Conditions comorbid to autism
[ "Environmental_science" ]
6,492
[ "Epidemiology", "Environmental social science" ]
972,312
https://en.wikipedia.org/wiki/Liquid%20air
Liquid air is air that has been cooled to very low temperatures (cryogenic temperatures), so that it has condensed into a pale blue mobile liquid. It is stored in specialized containers, such as vacuum flasks, to insulate it from room temperature. Liquid air can absorb heat rapidly and revert to its gaseous state. It is often used for condensing other substances into liquid and/or solidifying them, and as an industrial source of nitrogen, oxygen, argon, and other inert gases through a process called air separation (industrially referred to as air rectification). Properties Liquid air has a density of approximately . The density of a given air sample varies depending on the composition of that sample (e.g. humidity and concentration). Since dry gaseous air contains approximately 78% nitrogen, 21% oxygen, and 1% argon, the density of liquid air at standard composition is calculated from the percentages of the components and their respective liquid densities (see liquid nitrogen and liquid oxygen). Although air contains trace amounts of carbon dioxide (about 0.03%), carbon dioxide solidifies from the gas phase without passing through an intermediate liquid phase, and hence will not be present in liquid air at pressures less than . The boiling point of air is , intermediate between the boiling points of liquid nitrogen and liquid oxygen. However, it can be difficult to keep at a stable temperature as the liquid boils, since the nitrogen will boil off first, leaving the mixture oxygen-rich and changing the boiling point. This may also occur in some circumstances due to the liquid air condensing oxygen out of the atmosphere. Liquid air starts to freeze at approximately , precipitating a nitrogen-rich solid (but with an appreciable amount of oxygen in solid solution). Unless the oxygen is previously accommodated in the solid solution, the eutectic freezes at 50 K. Preparation Principle of production The constituents of air were once known as "permanent gases", as they could not be liquefied solely by compression at room temperature. A compression process will raise the temperature of the gas. This heat is removed by cooling to the ambient temperature in a heat exchanger, and then expanding by venting into a chamber. The expansion causes a lowering of the temperature, and by counter-flow heat exchange with the expanded air, the pressurized air entering the expander is further cooled. With sufficient compression, flow, and heat removal, eventually droplets of liquid air will form, which may then be employed directly for low temperature demonstrations. The main constituents of air were liquefied for the first time by Polish scientists Karol Olszewski and Zygmunt Wróblewski in 1883. Devices for the production of liquid air are not commercially available, and not easily fabricated. Process of production The most common process for the preparation of liquid air is the two-column Hampson–Linde cycle using the Joule–Thomson effect. Air is fed at high pressure (>) into the lower column, in which it is separated into pure nitrogen and an oxygen-rich liquid. The rich liquid and some of the nitrogen are fed as reflux into the upper column, which operates at low pressure (<), where the final separation into pure nitrogen and oxygen occurs. A raw argon product can be removed from the middle of the upper column for further purification. Air can also be liquefied by Claude's process, which combines cooling by Joule–Thomson effect, isentropic expansion and regenerative cooling. 
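As a rough, worked illustration of the component-weighted density estimate described under Properties above, the short Python sketch below combines the dry-air volume fractions with approximate handbook densities for the liquefied components. The component densities and the simple volume-fraction weighting (which ignores mixing effects) are assumptions introduced here for illustration, not figures from this article.

```python
# Rough estimate of liquid-air density from its main components.
# Assumptions (not from this article): approximate liquid densities in kg/m^3
# near each component's boiling point, and simple volume-fraction weighting
# that ignores mixing effects.
components = {
    "nitrogen": (0.78, 807),   # volume fraction of dry air, assumed liquid density
    "oxygen":   (0.21, 1141),
    "argon":    (0.01, 1395),
}

density = sum(fraction * rho for fraction, rho in components.values())
print(f"Estimated liquid-air density: {density:.0f} kg/m^3")
# Prints a value in the high 800s of kg/m^3: denser than liquid nitrogen alone,
# but much less dense than liquid oxygen.
```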
Application In manufacturing processes, the liquid air product is typically fractionated into its constituent gases in either liquid or gaseous form, as the oxygen is especially useful for fuel gas welding and cutting and for medical use, and the argon is useful as an oxygen-excluding shielding gas in gas tungsten arc welding. Liquid nitrogen is useful in various low-temperature applications, being nonreactive at normal temperatures (unlike oxygen), and boiling at . Transport and energy storage Between 1899 and 1902, the automobile Liquid Air was produced and demonstrated by a joint American/English company, with the claim that they could construct a car that would run a hundred miles on liquid air. On 2 October 2012, the Institution of Mechanical Engineers said liquid air could be used as a means of storing energy. This was based on a technology that was developed by Peter Dearman, a garage inventor in Hertfordshire, England to power vehicles. See also Liquid nitrogen Liquid oxygen Cryogenic energy storage Industrial gas Liquefaction of gases Liquid nitrogen vehicle References External links 2013-05-20 MIT Technology Review article on liquid air developments for transportation and grid energy storage Atmosphere Coolants Cryogenics Energy storage Energy technology Engineering thermodynamics Industrial gases Industrial processes Phases of matter
Liquid air
[ "Physics", "Chemistry", "Engineering" ]
966
[ "Applied and interdisciplinary physics", "Engineering thermodynamics", "Phases of matter", "Cryogenics", "Industrial gases", "Thermodynamics", "Mechanical engineering", "Chemical process engineering", "Matter" ]
972,328
https://en.wikipedia.org/wiki/Wedderburn%E2%80%93Etherington%20number
In mathematics and computer science, the Wedderburn–Etherington numbers are an integer sequence named after Ivor Malcolm Haddon Etherington and Joseph Wedderburn that can be used to count certain kinds of binary trees. The first few numbers in the sequence are 0, 1, 1, 1, 2, 3, 6, 11, 23, 46, 98, 207, 451, 983, 2179, 4850, 10905, 24631, 56011, ... Combinatorial interpretation These numbers can be used to solve several problems in combinatorial enumeration. The nth number in the sequence (starting with the number 0 for n = 0) counts The number of unordered rooted trees with n leaves in which all nodes including the root have either zero or exactly two children. These trees have been called Otter trees, after the work of Richard Otter on their combinatorial enumeration. They can also be interpreted as unlabeled and unranked dendrograms with the given number of leaves. The number of unordered rooted trees with n nodes in which the root has degree zero or one and all other nodes have at most two children. Trees in which the root has at most one child are called planted trees, and the additional condition that the other nodes have at most two children defines the weakly binary trees. In chemical graph theory, these trees can be interpreted as isomers of polyenes with a designated leaf atom chosen as the root. The number of different ways of organizing a single-elimination tournament for n players (with the player names left blank, prior to seeding players into the tournament). The pairings of such a tournament may be described by an Otter tree. The number of different results that could be generated by different ways of grouping the expression x^n for a binary multiplication operation that is assumed to be commutative but neither associative nor idempotent. For instance x^5 can be grouped into binary multiplications in three ways, as x^2·(x·x^2), x·(x·(x·x^2)), or x·(x^2·x^2). This was the interpretation originally considered by both Etherington and Wedderburn. An Otter tree can be interpreted as a grouped expression in which each leaf node corresponds to one of the copies of x and each non-leaf node corresponds to a multiplication operation. In the other direction, the set of all Otter trees, with a binary multiplication operation that combines two trees by making them the two subtrees of a new root node, can be interpreted as the free commutative magma on one generator (the tree with one node). In this algebraic structure, each grouping of x^n has as its value one of the n-leaf Otter trees. Formula The Wedderburn–Etherington numbers may be calculated using the recurrence relation a_{2n-1} = \sum_{i=1}^{n-1} a_i a_{2n-1-i} for odd arguments and a_{2n} = \frac{a_n(a_n+1)}{2} + \sum_{i=1}^{n-1} a_i a_{2n-i} for even arguments, beginning with the base case a_1 = 1. In terms of the interpretation of these numbers as counting rooted binary trees with n leaves, the summation in the recurrence counts the different ways of partitioning these leaves into two subsets, and of forming a subtree having each subset as its leaves. The formula for even values of n is slightly more complicated than the formula for odd values in order to avoid double counting trees with the same number of leaves in both subtrees. Growth rate The Wedderburn–Etherington numbers grow asymptotically as a_n \approx \sqrt{\frac{\rho + \rho^2 B'(\rho^2)}{2\pi}} \cdot \frac{\rho^{-n}}{n^{3/2}}, where B is the generating function of the numbers and ρ is its radius of convergence, approximately 0.4027, and where the constant given by the part of the expression in the square root is approximately 0.3188. Applications One proposed application uses the Wedderburn–Etherington numbers as part of a design for an encryption system containing a hidden backdoor. 
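As a concrete check of the recurrence just given, the following Python sketch (an illustration added here, not part of the original article) computes the sequence by memoized recursion and reproduces the initial terms quoted above.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wedderburn_etherington(n: int) -> int:
    """a(n) = number of Otter trees with n leaves, via the recurrence:
    a(0) = 0, a(1) = 1,
    a(2m-1) = sum_{i=1..m-1} a(i) * a(2m-1-i)                  (odd arguments)
    a(2m)   = a(m)*(a(m)+1)/2 + sum_{i=1..m-1} a(i) * a(2m-i)  (even arguments)."""
    if n <= 1:
        return n
    m = (n + 1) // 2
    total = sum(wedderburn_etherington(i) * wedderburn_etherington(n - i)
                for i in range(1, m))
    if n % 2 == 0:  # even case: extra term avoids double counting balanced splits
        am = wedderburn_etherington(n // 2)
        total += am * (am + 1) // 2
    return total

# Should print the first terms quoted above: 0, 1, 1, 1, 2, 3, 6, 11, 23, 46, 98, 207
print([wedderburn_etherington(n) for n in range(12)])
```

Memoization keeps the computation fast despite the overlapping subproblems in the recursion.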
When an input to be encrypted by their system can be sufficiently compressed by Huffman coding, it is replaced by the compressed form together with additional information that leaks key data to the attacker. In this system, the shape of the Huffman coding tree is described as an Otter tree and encoded as a binary number in the interval from 0 to the Wedderburn–Etherington number for the number of symbols in the code. In this way, the encoding uses a very small number of bits, the base-2 logarithm of the Wedderburn–Etherington number. Other authors describe a similar encoding technique for rooted unordered binary trees, based on partitioning the trees into small subtrees and encoding each subtree as a number bounded by the Wedderburn–Etherington number for its size. Their scheme allows these trees to be encoded in a number of bits that is close to the information-theoretic lower bound (the base-2 logarithm of the Wedderburn–Etherington number) while still allowing constant-time navigation operations within the tree. Unordered binary trees, together with the fact that the Wedderburn–Etherington numbers are significantly smaller than the numbers that count ordered binary trees, have also been used to significantly reduce the number of terms in a series representation of the solution to certain differential equations. See also Catalan number Cryptography Information theory References Further reading Integer sequences Trees (graph theory) Graph enumeration
Wedderburn–Etherington number
[ "Mathematics" ]
1,034
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Graph enumeration", "Recreational mathematics", "Mathematical objects", "Graph theory", "Combinatorics", "Mathematical relations", "Numbers", "Number theory" ]
972,333
https://en.wikipedia.org/wiki/Highly%20totient%20number
A highly totient number k is an integer that has more solutions to the equation φ(x) = k, where φ is Euler's totient function, than any integer smaller than it. The first few highly totient numbers are 1, 2, 4, 8, 12, 24, 48, 72, 144, 240, 432, 480, 576, 720, 1152, 1440, with 2, 3, 4, 5, 6, 10, 11, 17, 21, 31, 34, 37, 38, 49, 54, and 72 totient solutions respectively. The sequence of highly totient numbers is a subset of the sequence of the smallest number k with exactly n solutions to φ(x) = k. The totient of a number n, with prime factorization n = \prod_i p_i^{e_i}, is the product: φ(n) = \prod_i p_i^{e_i - 1}(p_i - 1). Thus, a highly totient number is a number that has more ways of being expressed as a product of this form than does any smaller number. The concept is somewhat analogous to that of highly composite numbers, and in the same way that 1 is the only odd highly composite number, it is also the only odd highly totient number (indeed, the only odd number to not be a nontotient). And just as there are infinitely many highly composite numbers, there are also infinitely many highly totient numbers, though the highly totient numbers get harder to find the higher one goes, since calculating the totient function involves factorization into primes, something that becomes extremely difficult as the numbers get larger. Example There are five numbers (15, 16, 20, 24, and 30) whose totient is 8. No positive integer smaller than 8 has as many such solutions, so 8 is a highly totient number. Table See also Highly cototient number References Integer sequences
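To make the definition concrete, here is a small Python sketch (illustrative only; the function names, the search bound, and the limit are choices made here, not part of the original article). It computes the totient by trial-division factorization using the product formula above, counts how many x satisfy φ(x) = k by searching up to 2k² (assuming the standard bound φ(x) ≥ √(x/2)), and reports each new record count, reproducing the start of the sequence.

```python
def euler_phi(n: int) -> int:
    """phi(n) = product of p^(e-1) * (p - 1) over the prime factorization of n."""
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            result *= p ** (e - 1) * (p - 1)
        p += 1
    if m > 1:                       # remaining prime factor
        result *= m - 1
    return result

def count_solutions(k: int) -> int:
    """Number of x with phi(x) = k.  Assumes the standard bound
    phi(x) >= sqrt(x/2), so every solution satisfies x <= 2*k*k."""
    return sum(1 for x in range(1, 2 * k * k + 1) if euler_phi(x) == k)

def highly_totient_numbers(limit: int):
    """Yield (k, solution count) whenever k sets a new record."""
    record = -1
    for k in range(1, limit + 1):
        c = count_solutions(k)
        if c > record:
            record = c
            yield k, c

# Reproduces the start of the sequence:
# (1, 2), (2, 3), (4, 4), (8, 5), (12, 6), (24, 10), (48, 11)
print(list(highly_totient_numbers(48)))
```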
Highly totient number
[ "Mathematics" ]
365
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
972,457
https://en.wikipedia.org/wiki/Mass%20wasting
Mass wasting, also known as mass movement, is a general term for the movement of rock or soil down slopes under the force of gravity. It differs from other processes of erosion in that the debris transported by mass wasting is not entrained in a moving medium, such as water, wind, or ice. Types of mass wasting include creep, solifluction, rockfalls, debris flows, and landslides, each with its own characteristic features, and taking place over timescales from seconds to hundreds of years. Mass wasting occurs on both terrestrial and submarine slopes, and has been observed on Earth, Mars, Venus, Jupiter's moon Io, and on many other bodies in the Solar System. Subsidence is sometimes regarded as a form of mass wasting. A distinction is then made between mass wasting by subsidence, which involves little horizontal movement, and mass wasting by slope movement. Rapid mass wasting events, such as landslides, can be deadly and destructive. More gradual mass wasting, such as soil creep, poses challenges to civil engineering, as creep can deform roadways and structures and break pipelines. Mitigation methods include slope stabilization, construction of walls, catchment dams, or other structures to contain rockfall or debris flows, afforestation, or improved drainage of source areas. Types Mass wasting is a general term for any process of erosion that is driven by gravity and in which the transported soil and rock is not entrained in a moving medium, such as water, wind, or ice. The presence of water usually aids mass wasting, but the water is not abundant enough to be regarded as a transporting medium. Thus, the distinction between mass wasting and stream erosion lies between a mudflow (mass wasting) and a very muddy stream (stream erosion), without a sharp dividing line. Many forms of mass wasting are recognized, each with its own characteristic features, and taking place over timescales from seconds to hundreds of years. Based on how the soil, regolith or rock moves downslope as a whole, mass movements can be broadly classified as either creeps or landslides. Subsidence is sometimes also regarded as a form of mass wasting. A distinction is then made between mass wasting by subsidence, which involves little horizontal movement, and mass wasting by slope movement. Creep Soil creep is a slow and long term mass movement. The combination of small movements of soil or rock in different directions over time is directed by gravity gradually downslope. The steeper the slope, the faster the creep. The creep makes trees and shrubs curve to maintain their perpendicularity, and they can trigger landslides if they lose their root footing. The surface soil can migrate under the influence of cycles of freezing and thawing, or hot and cold temperatures, inching its way towards the bottom of the slope forming terracettes. Landslides are often preceded by soil creep accompanied with soil sloughing—loose soil that falls and accumulates at the base of the steepest creep sections. Solifluction Solifluction is a form of creep characteristics of arctic or alpine climates. It takes place in soil saturated with moisture that thaws during the summer months to creep downhill. It takes place on moderate slopes, relatively free of vegetation, that are underlain by permafrost and receive a constant supply of new debris by weathering. Solifluction affects the entire slope rather than being confined to channels and can produce terrace-like landforms or stone rivers. 
Landslide A landslide, also called a landslip, is a relatively rapid movement of a large mass of earth and rocks down a hill or a mountainside. Landslides can be further classified by the importance of water in the mass wasting process. In a narrow sense, landslides are rapid movement of large amounts of relatively dry debris down moderate to steep slopes. With increasing water content, the mass wasting takes the form of debris avalanches, then earthflows, then mudflows. Further increase in water content produces a sheetflood, which is a form of sheet erosion rather than mass wasting. Occurrences On Earth, mass wasting occurs on both terrestrial and submarine slopes. Submarine mass wasting is particularly common along glaciated coastlines where glaciers are retreating and great quantities of sediments are being released. Submarine slides can transport huge volumes of sediments for hundreds of kilometers in a few hours. Mass wasting is a common phenomenon throughout the Solar System, occurring where volatile materials are lost from a regolith. Such mass wasting has been observed on Mars, Io, Triton, and possibly Europa and Ganymede. Mass wasting also occurs in the equatorial regions of Mars, where slopes of soft sulfate-rich sediments are steepened by wind erosion. Mass wasting on Venus is associated with the rugged terrain of tesserae. Io shows extensive mass wasting of its volcanic mountains. Deposits and landforms Mass wasting affects geomorphology, most often in subtle, small-scale ways, but occasionally more spectacularly. Soil creep is rarely apparent but can produce such subtle effects as curved forest growth and tilted fences and telephone poles. It occasionally produces low scarps and shallow depressions. Solifluction produces lobed or sheetlike deposits, with fairly definite edges, in which clasts (rock fragments) are oriented perpendicular to the contours of the deposit. Rockfall can produce talus slopes at the feet of cliffs. A more dramatic manifestation of rockfall is the rock glacier, which forms from rockfall from cliffs oversteepened by glaciers. Landslides can produce scarps and step-like small terraces. Landslide deposits are poorly sorted. Those rich in clay may show stretched clay lumps (a phenomenon called boudinage) and zones of concentrated shear. Debris flow deposits take the form of long, narrow tracks of very poorly sorted material. These may have natural levees at the sides of the tracks, and sometimes consist of lenses of rock fragments alternating with lenses of fine-grained earthy material. Debris flows often form much of the upper slopes of alluvial fans. Causes Triggers for mass wasting can be divided into passive and activating (initiating) causes. Passive causes include: Rock and soil lithology. Unconsolidated or weak debris are more susceptible to mass wasting, as are materials that lose cohesion when wetted. Stratigraphy, such as thinly bedded rock or alternating beds of weak and strong or impermeable and permeable rock lithologies. Faults or other geologic structures that weaken the rock. Topography, such as steep slopes or cliffs. Climate, with large temperature swings, frequent freezing and thawing, or abundant rainfall. Lack of vegetation. Activating causes include: Undercutting of the slope by excavation or erosion. Increased overburden from structures. Increased soil moisture. Earthquakes. Hazards and mitigation Mass wasting causes problems for civil engineering, particularly highway construction. 
It can displace roads, buildings, and other construction and can break pipelines. Historically, mitigation of landslide hazards on the Gaillard Cut of the Panama Canal accounted for of the of material removed while excavating the cut. Rockslides or landslides can have disastrous consequences, both immediate and delayed. The Oso disaster of March 2014 was a landslide that caused 43 fatalities in Oso, Washington, US. Delayed consequences of landslides can arise from the formation of landslide dams, as at Thistle, Utah, in April 1983. Volcano flanks can become over-steep resulting in instability and mass wasting. This is now a recognised part of the growth of all active volcanoes. It is seen on submarine volcanoes as well as surface volcanoes: Kamaʻehuakanaloa (formerly Loihi) in the Hawaiian–Emperor seamount chain and Kick 'em Jenny in the Lesser Antilles Volcanic Arc are two submarine volcanoes that are known to undergo mass wasting. The failure of the northern flank of Mount St. Helens in 1980 showed how rapidly volcanic flanks can deform and fail. Methods of mitigation of mass wasting hazards include: Afforestation Construction of fences, walls, or ditches to contain rockfall Construction of catchment dams to contain debris flows Improved drainage of source areas Slope stabilization See also Denudation Slope mass rating Slump (geology) References Further reading Fundamentals of Physical Geography (Class 11th NCERT). External links Georgia Perimeter College: Mass Wasting CSU Long Beach: Introduction to Physical Geography: Introduction to Gradational Processes WFPA: Steep Slopes: Geology, Topography, Storms and Landslides in Washington State NPS.gov: Mass Wasting Environmental soil science Geological hazards Geomorphology
Mass wasting
[ "Physics", "Environmental_science" ]
1,750
[ "Soil mechanics", "Environmental soil science", "Applied and interdisciplinary physics" ]
972,564
https://en.wikipedia.org/wiki/British%20National%20Vegetation%20Classification
The British National Vegetation Classification or NVC is a system of classifying natural habitat types in Great Britain according to the vegetation they contain. A large scientific meeting of ecologists, botanists, and other related professionals in the United Kingdom resulted in the publication of a compendium of five books: British Plant Communities, edited by John S. Rodwell, which detail the incidence of plant species in twelve major habitat types in the British natural environment. They are the first systematic and comprehensive account of the vegetation types of the country. They cover all natural, semi-natural and major artificial habitats in Great Britain (not Northern Ireland) and represent fifteen years of research by leading plant ecologists. From the data collated from the books, commercial software products have been developed to help to classify vegetation identified into one of the many habitat types found in Great Britain – these include MATCH, TABLEFIT and MAVIS. Terminology The following are lists of terms used in connection with the British National Vegetation Classification, together with their meanings. Communities, subcommunities and variants A community is the fundamental unit of categorisation for vegetation. A subcommunity is a distinct recognisable subdivision of a community. A variant is a further subdivision of a subcommunity. Constant species A constant species in a community is a species that is always present in any given stand of vegetation belonging to that community. For a list of the constant species, and the NVC communities in which they are present, see List of constant species in the British National Vegetation Classification. Rare species A rare species is a species which is associated with a particular community and is rare nationally. The sources used by the authors of British Plant Communities for assessing rarity were as follows. a) for vascular plants, two sources were used: Perring, F. H. and S. M. Walters (1962) Atlas of the British Flora – a species was regarded as rare if it was given an "A" rating in this work (these were plants which Perring & Walters judged to be sufficiently rare to merit a special search in order to ensure all records were included in the atlas). Any species included on lists compiled by the Nature Conservancy Council of plants found in less than 100 hectads. b) for bryophytes, the source used was Corley, M. F. V. and M. O. Hill (1981) Distribution of bryophytes in the British Isles. This lists the species and the vice-counties in which they are recorded; presence in under 20 vice-counties was the criterion used for selection as rare. c) for lichens, no source was available, and the authors used their own selection of species. For a list of these rare species, and the NVC communities in which they are present, see List of rare species in the British National Vegetation Classification. Communities by category In total there are 286 communities in the British National Vegetation Classification. 
They are grouped into the following major categories: Woodland and scrub communities (25 communities, prefixed with the letter "W" — 19 classed as woodland, four as scrub and two as 'underscrub') Mires (38 communities, prefixed "M") Heaths (22 communities, prefixed "H") Mesotrophic grasslands (13 communities, prefixed "MG") Calcicolous grasslands (14 communities, prefixed "CG") Calcifugous grasslands and montane communities (21 communities, prefixed "U") Aquatic communities (24 communities, prefixed "A") Swamps and tall-herb fens (28 communities, prefixed "S") Salt-marsh communities (28 communities, prefixed "SM") Shingle, strandline and sand-dune communities (19 communities, prefixed "SD" — one shingle, two strandline and 16 sand-dune communities) Maritime cliff communities (12 communities, prefixed "MC") Vegetation of open habitats (42 communities, prefixed "OV") A full list of these communities, grouped into the above categories, can be found at List of plant communities in the British National Vegetation Classification. References Biota by conservation status system Conservation in the United Kingdom Metadata standards
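Purely as a hypothetical sketch of the kind of matching that classification software performs (this is not the algorithm used by MATCH, TABLEFIT or MAVIS, and the community table below is invented for the example), a recorded field sample could be scored against each community's constant-species list as follows.

```python
from typing import Dict, Set

# Invented, illustrative "constant species" lists keyed by made-up community codes;
# real tables come from British Plant Communities and the software named above.
example_communities: Dict[str, Set[str]] = {
    "W-example":  {"Quercus robur", "Pteridium aquilinum", "Rubus fruticosus"},
    "MG-example": {"Lolium perenne", "Cynosurus cristatus", "Trifolium repens"},
}

def match_score(sample: Set[str], constants: Set[str]) -> float:
    """Fraction of a community's constant species recorded in the sample."""
    return len(sample & constants) / len(constants) if constants else 0.0

# A hypothetical species list recorded in the field.
sample = {"Lolium perenne", "Trifolium repens", "Plantago lanceolata"}

# Rank the candidate communities by how many of their constants the sample contains.
for name, constants in sorted(example_communities.items(),
                              key=lambda kv: match_score(sample, kv[1]),
                              reverse=True):
    print(name, round(match_score(sample, constants), 2))
```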
British National Vegetation Classification
[ "Biology" ]
866
[ "Biota by conservation status system", "Biota by conservation status", "British National Vegetation Classification" ]
972,584
https://en.wikipedia.org/wiki/Common%20frog
The common frog or grass frog (Rana temporaria), also known as the European common frog, European common brown frog, European grass frog, European Holarctic true frog, European pond frog or European brown frog, is a semi-aquatic amphibian of the family Ranidae, found throughout much of Europe as far north as Scandinavia and as far east as the Urals, except for most of the Iberian Peninsula, southern Italy, and the southern Balkans. The farthest west it can be found is Ireland. It is also found in Asia, and eastward to Japan. The nominative, and most common, subspecies Rana temporaria temporaria is a largely terrestrial frog native to Europe. It is distributed throughout northern Europe and can be found in Ireland, the Isle of Lewis and as far east as Japan. Common frogs metamorphose through three distinct developmental life stages — aquatic larva, terrestrial juvenile, and adult. They have corpulent bodies with a rounded snout, webbed feet and long hind legs adapted for swimming in water and hopping on land. Common frogs are often confused with the common toad (Bufo bufo), but frogs can easily be distinguished as they have longer legs, hop, and have a moist skin, whereas toads crawl and have a dry 'warty' skin. The spawn of the two species also differs, in that frog spawn is laid in clumps and toad spawn is laid in long strings. There are 3 subspecies of the common frog, R. t. temporaria, R. t. honnorati and R. t. palvipalmata. R. t. temporaria is the most common subspecies of this frog. Description The adult common frog has a body length of . In addition, its back and flanks vary in colour from olive green to grey-brown, brown, olive brown, grey, yellowish and rufous. However, it can lighten and darken its skin to match its surroundings. Some individuals have more unusual colouration—both black and red individuals have been found in Scotland, and albino frogs have been found with yellow skin and red eyes. During the mating season the male common frog tends to turn greyish-blue (see video below). The average mass is ; the female is usually slightly larger than the male. The flanks, limbs and backs are covered with irregular dark blotches and they usually sport a chevron-shaped spot on the back of their neck and a dark spot behind the eye. Unlike other amphibians, common frogs generally lack a mid-dorsal band but, when they have one, it is comparatively faint. In many countries moor frogs have a light dorsal band which easily distinguishes them from common frogs. The underbelly is white or yellow (occasionally more orange in females) and can be speckled with brown or orange. The eyes are brown with transparent horizontal pupils, and they have transparent inner eyelids to protect the eyes while underwater, as well as a 'mask' which covers the eyes and eardrums. Although the common frog has long hind legs compared to the common toad, they are shorter than those of the agile frog with which it shares some of its range. The longer hind legs and fainter colouration of the agile frog are the main features that distinguish the two species. Males are distinguishable from females as they are smaller and have hard swellings, known as nuptial pads, on the first digits of the forelegs, used for gripping females during mating. During the mating season males' throats often turn white, and their overall colour is generally light and greyish, whereas the female is browner, or even red. 
These smooth-skinned frogs can grow to an average weight of 22.7 grams and length of seven to ten centimeters (2.8-3.9 in) with colors varying from gray to green, brown, yellow, or red and may be covered in blotches. The underbelly is white or yellow often with speckles. Habitat and distribution Outside the breeding season, common frogs live a solitary life in damp wetland niches near ponds or marshes or among long riparian grass. They are normally active for much of the year, only hibernating in the coldest months. In the most northern extremities of their range they may be trapped under ice for up to nine months of the year, but recent studies have shown that in these conditions they may be relatively active at temperatures close to freezing. In the British Isles, common frogs typically hibernate from late October to January. They will re-emerge as early as February if conditions are favorable, and migrate to bodies of water such as garden ponds to spawn. Where conditions are harsher, such as in the Alps, they emerge as late as early June. Common frogs hibernate in running waters, muddy burrows, or in layers of decaying leaves and mud at the bottom of ponds or lakes primarily with a current. The oxygen uptake through the skin suffices to sustain the needs of the cold and motionless frogs during hibernation. Common frogs are found throughout much of Europe as far north as northern Scandinavia inside the Arctic Circle and as far east as the Urals, except for most of Iberia, southern Italy, and the southern Balkans. Other areas where the common frog has been introduced include the Isle of Lewis, Shetland, Orkney and the Faroe Islands. It is also found in Asia, and eastward to Japan. The common frog has long been thought to be an entirely introduced species in Ireland, however, genetic analyses suggest that particular populations in the south west of Ireland are indeed indigenous to the country. The authors propose that the Irish frog population is a mixed group that includes native frogs that survived the last glacial period in ice free refugia, natural post-glacial colonizers and recent artificial introductions from Western Europe. Genetic population structure The common frog is a very widely distributed species, being common all throughout Europe and northwest Asia. The more peripheral subpopulations of common frogs are significantly less in number, as well as less genetically variable. There is a steep genetic decline when approaching the periphery of the common frog's distribution range. Additionally, genetic differentiation of common frog subpopulations tends to decrease in relation to increasing latitude. The colder climates create a strong selective pressure favoring common frog populations able to behaviorally thermoregulate at a high degree. Conservation Long-term impact of diseases Of the many diseases affecting common frogs, one of the most deadly has been the Ranavirus, which has been responsible for causing declines in amphibian populations worldwide. Two of the main, and most deadly, symptoms caused by Ranavirus towards common frogs are skin ulcerations and hemorrhaging. Mortality rates associated with the disease are very high, in some events it is observed to be over 90%. Deaths caused by Ranavirus occur in all stages of common frog development and are concentrated mostly during the summer months. Overall, common frog populations affected by ranavirus experience consistent and substantial declines in population size. 
Recent metagenomics studies on common frogs from the United Kingdom have revealed widespread viral infections of Rana tamanavirus, a positive-sense RNA virus that is closely related to Tamana bat virus; as yet, no pathology or effect on life-history traits has been observed. Impact of urbanization Due to the widespread nature of Rana temporaria, common frogs can make their homes in both urban and rural environments. However, many of the populations living in urban environments are subject to the detrimental effects of urbanization. The construction of roads and buildings – absolute barriers to migration – has stymied gene flow and drift between urban populations of common frogs, leading to lower levels of genetic diversity in urban common frog populations compared to their rural counterparts. Urban common frog populations also experience higher levels of mortality and developmental abnormality, indicative of forced inbreeding. However, the common frog is listed as a species of least concern on the IUCN Red List of Threatened Species. Diet Juvenile At metamorphosis, once the tadpole's fore legs have developed, the frog does not feed for a short time. Recently metamorphosed juvenile frogs mostly feed on small insects like Collembola (hexapods), Acarina (mites and ticks), and small fly larvae. Rana temporaria tadpoles, however, mostly feed on algae and decomposed plants, but once their hind legs develop, they become carnivorous. Adults The common frog is an unspecialized and opportunistic feeder wherever it is located. In other words, common frogs will consume whatever prey is most available and easy to capture. This usually means that the common frog feeds by remaining idle and waiting until suitable prey enters the frog's domain of capture. As a corollary, this also means that the common frog's diet changes with the season, as different prey become most abundant. In the summer, the common frog's diet mostly consists of adult crane flies and the larvae of butterflies and moths. To a slightly lesser extent, common frogs will feed on woodlice, arachnids, beetles, slugs, snails, and earthworms. In addition, common frogs will typically feed on bigger prey as they become larger. Therefore, newly developed common frogs are limited to smaller insect prey, whereas larger frogs are able to consume a wide range of insects. Common frogs will hide in damp places, such as in the water, during the day, and at night they will begin searching for food. Reproduction and mating patterns During the spring the frog's pituitary gland is stimulated by changes in external factors, such as rainfall, day length and temperature, to produce hormones which, in turn, stimulate the production of sex cells – eggs in the females and sperm in the male. The male's nuptial pad also swells and becomes more heavily pigmented. Common frogs breed in shallow, still, fresh water such as ponds, with spawning commencing sometime between late February and late June, but generally in April over the main part of their range. Competition among males Like its close cousin, the moor frog (R. arvalis), R. temporaria does not exhibit territoriality, which leads to a lack of physical fighting among males. During breeding season, male common frogs undergo a period of a few days (less than 10 days) in which they display rapid and frenzied breeding behavior, during which the purpose of the male is to quickly find and mate with as many female frogs as possible. 
Males with higher rates of mating success typically have longer thumbs than single males, which allows them to have a better grip on females. Mating interactions Around three years after being born, the common frog will return to its original site of birth and release a mating call. Males will be the first to arrive at the pond and await females as they enter. During this period of pre-female competition, the pond is significantly male-dominated, and there is a large amount of intrasexual competition taking place. The shallow portion of the pond, which is more suitable for egg laying, is predominantly occupied by the larger males. However, once the females arrive, this territoriality quickly dissipates and male-female amplexed pairs are free to migrate anywhere in the pond. Additionally, once a pair is engaged in amplexus, it is rare for single males to attempt to displace or "take over" the paired male. It is also important to note the effect of size on a male common frog's mating strategies. Smaller frogs, during the pre-spawning period, get displaced from the shallow areas of the pond. Therefore, they circumvent this issue by searching for females on the land or in areas of the pond where they first arrive. Meanwhile, the larger frogs occupy the spawning site, where they encounter more amplexed pairs and therefore rely on their ability to displace amplexed males to secure a mate. However, the frequency of these takeovers is not consistent. Life cycle Female common frog clutch sizes range from a few hundred up to 5,000 eggs. Many of these eggs form large aggregates that serve to thermoregulate as well as protect the developing embryo from potential predators. Bunching the eggs together raises the temperature of the embryo compared to the surrounding water, which is important because the rate of tadpole development is faster in higher temperatures. Additionally, the eggs are typically laid in the shallower regions of the pond to prevent hypoxia-induced fatality of the embryos. It normally takes 2–3 weeks for the eggs to hatch. Afterwards, common frog larvae group up into schools where they help each other feed off of algae and larger plants, as well as avoid predators. By June and July, most tadpoles will have metamorphosed, and the remaining time until winter is used to feed and grow larger. Only the largest frogs will survive the winter, which places a large emphasis on rapid development until then. In fact, a common frog's rate of development correlates with temperature. In lower temperature regions, common frogs will hatch earlier and metamorphose sooner than common frogs living in warmer climate regions. Sexual maturity occurs only after three years, and common frogs will typically live between six and eight years. Development in the presence of predators The presence of a predator in the early development of the tadpole has an effect on its metamorphosis traits. For instance, it can lead to a longer larval period and a smaller size and mass at metamorphosis. Once the predator is removed, the growth rate of the tadpole returns to, or even exceeds, baseline. This influence of predator threat is only significant during early tadpole development. One of the common frog's most pervasive predators is the red-eared slider (Trachemys scripta elegans), a highly invasive species of turtle. Thermoregulation As an ectotherm, the common frog is very reliant on temperature, as it directly influences its metabolism, development, reproduction, muscle ability, and respiration. 
As such, common frogs at mid and high elevations have developed a unique set of strategies to survive in cold climates. In fact, it is due to the common frog's ability to thermoregulate so effectively that the species has been able to become so pervasive across a multitude of environments and climates, living as far north as the Arctic Circle in Scandinavia, which is further north than any other amphibian in the region. Contrary to Lithobates sylvaticus (wood frogs), common frogs do not have the ability to freeze protect themselves by increasing their levels of blood glucose to serve as a cryoprotectant. As a result, common frogs must rely on behavioral thermoregulation by seeking out warm microhabitats (such as in the soil or between rocks) during wintertime. Additionally, common frogs will commonly hibernate throughout the winter season in groups to provide bodily heating. Social behavior Similar to other anuran species (Bufo americanus and Rana sylvatica), Rana temporaria are able to naturally discriminate others of its kind. Post-embryonic interaction with conspecifics is not necessary to induce associative behavior for common frogs as an adult. Rather, once common frog tadpoles have reached a certain age, they gain a strong innate associative tendency. Rana temporaria tend to aggregate as the result of environmental pressures, such as temperature or predators. Predators Tadpoles are eaten by fish, diving beetles, dragonfly larvae and birds. Adult frogs have many predators including storks, birds of prey, crows, gulls, ducks, terns, herons, pine martens, stoats, weasels, polecats, badgers, otters and snakes. Some frogs are killed, but rarely eaten, by domestic cats, and large numbers are killed on the roads by motor vehicles. Interactions with humans and livestock Common frogs have an important place in human ecology by controlling the insect populations. In particular, their consumption of mosquitos and other crop-damaging insects has been especially valuable. In addition, Rana temporaria, due to their ecological pervasiveness and relative abundance, have become a common laboratory specimen. Farming R. temporaria are farmed. Miles et al. 2004 provide improved ingredients for manufacturers of pellet food for farmed common frogs. Due to the spread of diseases such as ranavirus, the UK based amphibian charity Froglife advised the public to avoid transporting frogspawn, tadpoles or frogs from one pond to another, even if these are close by. It has also been recommended not to place goldfish or exotic frog species in outdoor ponds as this could have a negative effect on the frog population. References External links Amphibians of Europe FrogsWatch.com Web page developed around photographs of the common frog taken in the same suburban garden over a period of 10 years. Rana (genus) Amphibians of Europe Frog, Common Animal models Fauna of Finland Articles containing video clips Amphibians described in 1758 Taxa named by Carl Linnaeus Habitats Directive species
Common frog
[ "Biology" ]
3,527
[ "Model organisms", "Animal models" ]
972,601
https://en.wikipedia.org/wiki/Elementary%20equivalence
In model theory, a branch of mathematical logic, two structures M and N of the same signature σ are called elementarily equivalent if they satisfy the same first-order σ-sentences. If N is a substructure of M, one often needs a stronger condition. In this case N is called an elementary substructure of M if every first-order σ-formula φ(a1, …, an) with parameters a1, …, an from N is true in N if and only if it is true in M. If N is an elementary substructure of M, then M is called an elementary extension of N. An embedding h: N → M is called an elementary embedding of N into M if h(N) is an elementary substructure of M. A substructure N of M is elementary if and only if it passes the Tarski–Vaught test: every first-order formula φ(x, b1, …, bn) with parameters in N that has a solution in M also has a solution in N when evaluated in M. One can prove that two structures are elementarily equivalent using Ehrenfeucht–Fraïssé games. Elementary embeddings are used in the study of large cardinals, including rank-into-rank. Elementarily equivalent structures Two structures M and N of the same signature σ are elementarily equivalent if every first-order sentence (formula without free variables) over σ is true in M if and only if it is true in N, i.e. if M and N have the same complete first-order theory. If M and N are elementarily equivalent, one writes M ≡ N. A first-order theory is complete if and only if any two of its models are elementarily equivalent. For example, consider the language with one binary relation symbol '<'. The model R of real numbers with its usual order and the model Q of rational numbers with its usual order are elementarily equivalent, since they both interpret '<' as an unbounded dense linear ordering. This is sufficient to ensure elementary equivalence, because the theory of unbounded dense linear orderings is complete, as can be shown by the Łoś–Vaught test. More generally, any first-order theory with an infinite model has non-isomorphic, elementarily equivalent models, which can be obtained via the Löwenheim–Skolem theorem. Thus, for example, there are non-standard models of Peano arithmetic, which contain objects other than just the numbers 0, 1, 2, etc., and yet are elementarily equivalent to the standard model. Elementary substructures and elementary extensions N is an elementary substructure or elementary submodel of M if N and M are structures of the same signature σ such that for all first-order σ-formulas φ(x1, …, xn) with free variables x1, …, xn, and all elements a1, …, an of N, φ(a1, …, an) holds in N if and only if it holds in M, that is, N ⊨ φ(a1, …, an) if and only if M ⊨ φ(a1, …, an). This definition first appears in Tarski, Vaught (1957). It follows that N is a substructure of M. If N is a substructure of M, then both N and M can be interpreted as structures in the signature σN consisting of σ together with a new constant symbol for every element of N. Then N is an elementary substructure of M if and only if N is a substructure of M and N and M are elementarily equivalent as σN-structures. If N is an elementary substructure of M, one writes N ⪯ M and says that M is an elementary extension of N: M ⪰ N. The downward Löwenheim–Skolem theorem gives a countable elementary substructure for any infinite first-order structure in an at most countable signature; the upward Löwenheim–Skolem theorem gives elementary extensions of any infinite first-order structure of arbitrarily large cardinality. 
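The satisfaction relation underlying all of these definitions can be made concrete for finite structures. The following sketch (not part of the article; the function and formula encodings are invented for illustration) evaluates first-order sentences over small finite orders by brute force. For finite structures elementary equivalence coincides with isomorphism, so this cannot reproduce the interesting infinite examples such as R ≡ Q; it only illustrates what it means for a sentence to hold in a structure.

```python
# Illustrative sketch: brute-force evaluation of first-order sentences over
# *finite* structures in the signature with one binary relation '<'.
# A structure is a pair (domain, relation-as-set-of-pairs).

def make_order(n):
    dom = list(range(n))
    less = {(a, b) for a in dom for b in dom if a < b}
    return dom, less

# Formulas as nested tuples:
#   ('lt', i, j)       means  x_i < x_j
#   ('not', f), ('and', f, g), ('exists', i, f)
def holds(structure, formula, env):
    dom, less = structure
    kind = formula[0]
    if kind == 'lt':
        return (env[formula[1]], env[formula[2]]) in less
    if kind == 'not':
        return not holds(structure, formula[1], env)
    if kind == 'and':
        return holds(structure, formula[1], env) and holds(structure, formula[2], env)
    if kind == 'exists':
        i, body = formula[1], formula[2]
        return any(holds(structure, body, {**env, i: a}) for a in dom)
    raise ValueError(kind)

# Sentence: "there exist four elements forming a strictly increasing chain".
phi = ('exists', 0, ('exists', 1, ('exists', 2, ('exists', 3,
      ('and', ('lt', 0, 1), ('and', ('lt', 1, 2), ('lt', 2, 3)))))))

M = make_order(3)  # three-element linear order
N = make_order(5)  # five-element linear order
print(holds(M, phi, {}), holds(N, phi, {}))  # False True: M and N disagree on phi
```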
Tarski–Vaught test The Tarski–Vaught test (or Tarski–Vaught criterion) is a necessary and sufficient condition for a substructure N of a structure M to be an elementary substructure. It can be useful for constructing an elementary substructure of a large structure. Let M be a structure of signature σ and N a substructure of M. Then N is an elementary substructure of M if and only if for every first-order formula φ(x, y1, …, yn) over σ and all elements b1, …, bn from N, if M ⊨ ∃x φ(x, b1, …, bn), then there is an element a in N such that M ⊨ φ(a, b1, …, bn). Elementary embeddings An elementary embedding of a structure N into a structure M of the same signature σ is a map h: N → M such that for every first-order σ-formula φ(x1, …, xn) and all elements a1, …, an of N, N ⊨ φ(a1, …, an) if and only if M ⊨ φ(h(a1), …, h(an)). Every elementary embedding is a strong homomorphism, and its image is an elementary substructure. Elementary embeddings are the most important maps in model theory. In set theory, elementary embeddings whose domain is V (the universe of set theory) play an important role in the theory of large cardinals (see also Critical point). References Equivalence (mathematics) Mathematical logic Model theory
Elementary equivalence
[ "Mathematics" ]
1,188
[ "Mathematical logic", "Model theory" ]
972,615
https://en.wikipedia.org/wiki/Radial%20arm%20maze
The radial arm maze was designed by Olton and Samuelson in 1976 to measure spatial learning and memory in rats. The original apparatus consists of eight equidistantly spaced arms, each about 4 feet long, and all radiating from a small circular central platform (later versions have used as few as three and as many as 48 arms). At the end of each arm there is a food site, the contents of which are not visible from the central platform. Two types of memory that are assessed during the performance of this task are reference memory and working memory. Reference memory is assessed when the rats only visit the arms of the maze which contain the reward. Failure to do so results in a reference memory error. Working memory is assessed when the rats enter each arm a single time. Re-entry into the arms would result in a working memory error. The design ensures that, after checking for food at the end of each arm, the rat is always forced to return to the central platform before making another choice. As a result, the rat always has eight possible options. Elaborate controls are used to ensure that the rats are not simply using their sense of smell, either to sense unclaimed food objects or to sense their own tracks. Olton and Samuelson found that rats have excellent memories for visited and unvisited arms; they made, on average, about 7.0 novel entries in their first 8 choices, and thus were 88% correct. Chance performance with eight arms would be 5.3 novel entries in the first 8 choices (66% correct). Olton and Samuelson also found that, when they switched some already-visited arms into as-yet unvisited locations partway through a trial, the rats tended to visit as-yet unvisited locations even when doing so meant running down arms that had already been traversed, and tended to avoid arms that had not yet been traversed but were now in previously visited locations. It therefore seems that in remembering locations on the radial arm maze, rats do not rely on local intra-maze cues, but rather on extra-maze cues. Uses The maze has since been used extensively by researchers interested in studying the spatial learning and spatial memory of animals. For example, Olton and colleagues found that performance declined only slightly to 82% novel entries in the first 17 entries on a 17-arm maze. Roberts found no decline in the percentage of correct choices as the number of arms on a radial maze was increased from 8 to 16 and then to 24. Cole and Chappell-Stephenson, using a radial maze with food locations ranging from 8 to 48, estimated the limit of spatial memory in rats to be between 24 and 32 locations. In one experiment utilizing the radial arm maze, it was shown that spatial relations among hidden target sites control the spatial decisions that rats make and are unrelated to visual or perceptual cues associated with certain locations. In another experiment, it was shown that subjects with Williams syndrome performed significantly worse than control subjects on multiple parameters such as visuo-spatial memory, general spatial function, and procedural competence. In mice, large differences in learning ability exist among different inbred strains. These differences appear to be correlated with the size of a part of the hippocampal mossy fiber projection. The radial arm maze has been shown to be a practical way to investigate how drugs affect memory performance. It has also been shown to be useful in distinguishing the cognitive effects of an array of toxicants. 
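The chance-level figure quoted above can be checked directly: if "chance" is taken to mean that each of the eight arms is chosen uniformly at random and independently on every choice, the expected number of distinct arms entered in the first eight choices is 8 × (1 − (7/8)^8) ≈ 5.25, in line with the reported 5.3 novel entries (66% correct). A minimal sketch of this calculation (an illustration of the stated chance model, not code from the original studies):

```python
import random

def expected_novel(arms=8, choices=8):
    # Each arm is missed on all choices with probability ((arms-1)/arms)**choices,
    # so the expected number of distinct arms visited is arms * (1 - that).
    return arms * (1 - ((arms - 1) / arms) ** choices)

def simulate_novel(arms=8, choices=8, trials=100_000):
    total = 0
    for _ in range(trials):
        total += len({random.randrange(arms) for _ in range(choices)})
    return total / trials

print(expected_novel())        # ~5.25 novel entries at chance
print(expected_novel() / 8)    # ~0.66, i.e. about 66% "correct"
print(simulate_novel())        # Monte Carlo check, also ~5.25
```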
The radial arm maze has also been used for several studies in children and adults. A particular study led by L. Mandolesi used subjects with Williams syndrome (WS) because of the interest placed on their cognitive profile. The dissociation between spatial processing and visuo-object processing suggests that in WS subjects spatial functions are more severely impaired than visuo-perceptual ones, which is what the RAM tests for. Limitations Various different types of mazes are used to assess memory. It is believed that the performance of animals in one type of maze cannot be generalized to other mazes because each maze requires animals to utilize a different set of skills. See also Spontaneous alternation References Animal testing mazes Behavioral neuroscience
Radial arm maze
[ "Biology" ]
852
[ "Behavioural sciences", "Behavior", "Behavioral neuroscience" ]
972,711
https://en.wikipedia.org/wiki/Single-photon%20avalanche%20diode
A single-photon avalanche diode (SPAD), also called Geiger-mode avalanche photodiode (G-APD or GM-APD) is a solid-state photodetector within the same family as photodiodes and avalanche photodiodes (APDs), while also being fundamentally linked with basic diode behaviours. As with photodiodes and APDs, a SPAD is based around a semi-conductor p-n junction that can be illuminated with ionizing radiation such as gamma, x-rays, beta and alpha particles along with a wide portion of the electromagnetic spectrum from ultraviolet (UV) through the visible wavelengths and into the infrared (IR). In a photodiode, with a low reverse bias voltage, the leakage current changes linearly with absorption of photons, i.e. the liberation of current carriers (electrons and/or holes) due to the internal photoelectric effect. However, in a SPAD, the reverse bias is so high that a phenomenon called impact ionisation occurs which is able to cause an avalanche current to develop. Simply, a photo-generated carrier is accelerated by the electric field in the device to a kinetic energy which is enough to overcome the ionisation energy of the bulk material, knocking electrons out of an atom. A large avalanche of current carriers grows exponentially and can be triggered from as few as a single photon-initiated carrier. A SPAD is able to detect single photons providing short duration trigger pulses that can be counted. However, they can also be used to obtain the time of arrival of the incident photon due to the high speed that the avalanche builds up and the device's low timing jitter. The fundamental difference between SPADs and APDs or photodiodes, is that a SPAD is biased well above its reverse-bias breakdown voltage and has a structure that allows operation without damage or undue noise. While an APD is able to act as a linear amplifier, the level of impact ionisation and avalanche within the SPAD has prompted researchers to liken the device to a Geiger-counter in which output pulses indicate a trigger or "click" event. The diode bias region that gives rise to this "click" type behaviour is therefore called the "Geiger-mode" region. As with photodiodes the wavelength region in which it is most sensitive is a product of its material properties, in particular the energy bandgap within the semiconductor. Many materials including silicon, germanium, germanium on silicon and III-V elements such as InGaAs/InP have been used to fabricate SPADs for the large variety of applications that now utilise the run-away avalanche process. There is much research in this topic with activity implementing SPAD-based systems in CMOS fabrication technologies, and investigation and use of III-V material combinations and Ge on Si for single-photon detection at short-wave infrared wavelengths suitable for telecommunications applications. Applications Since the 1970s, the applications of SPADs have increased significantly. Recent examples of their use include LIDAR, time of flight (ToF) 3D imaging, PET scanning, single-photon experimentation within physics, fluorescence lifetime microscopy, and optical communications (particularly quantum key distribution). Operation Structures SPADs are semiconductor devices that are based on a p–n junction that is reverse-biased at an operating voltage that exceeds the junction's breakdown voltage (Figure 1). "At this bias, the electric field is so high [higher than 3×105 V/cm] that a single charge carrier injected into the depletion layer can trigger a self-sustaining avalanche. 
The current rises swiftly [sub-nanosecond rise-time] to a macroscopic steady level in the milliampere range. If the primary carrier is photo-generated, the leading edge of the avalanche pulse marks [with picosecond time jitter] the arrival time of the detected photon." The current continues until the avalanche is quenched by lowering the bias voltage down to or below the breakdown voltage: the lower electric field is no longer able to accelerate carriers to impact-ionize with lattice atoms, therefore current ceases. In order to be able to detect another photon, the bias voltage must be raised again above breakdown. "This operation requires a suitable circuit, which has to: Sense the leading edge of the avalanche current. Generate a standard output pulse synchronous with the avalanche build-up. Quench the avalanche by lowering the bias down to the breakdown voltage. Restore the photodiode to the operative level. This circuit is usually referred to as a quenching circuit." Biasing regions and current-voltage characteristic A semiconductor p-n junction can be biased at several operating regions depending on the applied voltage. For normal uni-directional diode operation, the forward biasing region and the forward voltage are used during conduction, while the reverse bias region prevents conduction. When operated with a low reverse bias voltage, the p-n junction can operate as a unity gain photodiode. As the reverse bias increases, some internal gain through carrier multiplication can occur allowing the photodiode to operate as an avalanche photodiode (APD) with a stable gain and a linear response to the optical input signal. However, as the bias voltage continues to increase, the p-n junction breaks down when the electric field strength across the p-n junction reaches a critical level. As this electric field is induced by the bias voltage over the junction it is denoted as the breakdown voltage, VBD. A SPAD is reverse biased with an excess bias voltage, Vex, above the breakdown voltage, but below a second, higher breakdown voltage associated with the SPAD's guard ring. The total bias (VBD+Vex) therefore exceeds the breakdown voltage to such a degree that "At this bias, the electric field is so high [higher than 3×105 V/cm] that a single charge carrier injected into the depletion layer can trigger a self-sustaining avalanche. The current rises swiftly [sub-nanosecond rise-time] to a macroscopic steady level in the milliampere range. If the primary carrier is photo-generated, the leading edge of the avalanche pulse marks [with picosecond time jitter] the arrival time of the detected photon". As the current vs voltage (I-V) characteristic of a p-n junction gives information about the conduction behaviour of the diode, this is often measured using an analogue curve-tracer. This sweeps the bias voltage in fine steps under tightly controlled laboratory conditions. For a SPAD, without photon arrivals or thermally generated carriers, the I-V characteristic is similar to the reverse characteristic of a standard semi-conductor diode, i.e. an almost total blockage of charge flow (current) over the junction other than a small leakage current (nano-amperes). This condition can be described as an "off-branch" of the characteristic. However, when this experiment is conducted, a "flickering" effect and a second I-V characteristic can be observed beyond breakdown. 
This occurs when the SPAD has experienced a triggering event (photon arrival or thermally generated carrier) during the voltage sweeps that are applied to the device. The SPAD, during these sweeps, sustains an avalanche current which is described as the "on-branch" of the I-V characteristic. As the curve tracer increases the magnitude of the bias voltage over time, there are times that the SPAD is triggered during the voltage sweep above breakdown. In this case a transition occurs from the off-branch to the on-branch, with an appreciable current starting to flow. This leads to the flickering of the I-V characteristic that is observed and was denoted by early researchers in the field as "bifurcation" (def: the division of something into two branches or parts). To detect single-photons successfully, the p-n junction must have very low levels of the internal generation and recombination processes. To reduce thermal generation, devices are often cooled, while phenomena such as tunnelling across the p-n junctions also need to be reduced through careful design of semi-conductor dopants and implant steps. Finally, to reduce noise mechanisms being exacerbated by trapping centres within the p-n junction's band gap structure the diode needs to have a "clean" process free of erroneous dopants. Passive quenching circuits The simplest quenching circuit is commonly called passive quenching circuit and comprises a single resistor in series with the SPAD. This experimental setup has been employed since the early studies on the avalanche breakdown in junctions. The avalanche current self-quenches simply because it develops a voltage drop across a high-value ballast load RL (about 100 kΩ or more). After the quenching of the avalanche current, the SPAD bias slowly recovers to the operating bias, and therefore the detector is ready to be ignited again. This circuit mode is therefore called passive quenching passive reset (PQPR), although an active circuit element can be used for reset forming a passive quench active reset (PQAR) circuit mode. A detailed description of the quenching process is reported by Zappa et al. Active quenching circuits A more advanced quenching, which was explored from the 1970s onwards, is a scheme called active quenching. In this case a fast discriminator senses the steep onset of the avalanche current across a 50 Ω resistor (or integrated transistor) and provides a digital (CMOS, TTL, ECL, NIM) output pulse, synchronous with the photon arrival time. The circuit then quickly reduces the bias voltage to below breakdown (active quenching), then relatively quickly returns bias to above the breakdown voltage ready to sense the next photon. This mode is called active quench active reset (AQAR), however depending on circuit requirements, active quenching passive reset (AQPR) may be more suitable. AQAR circuits often allow lower dead times, and significantly reduced dead time variation. Photon counting and saturation The intensity of the input signal can be obtained by counting (photon counting) the number of output pulses within a measurement time period. This is useful for applications such as low light imaging, PET scanning and fluorescence lifetime microscopy. However, while the avalanche recovery circuit is quenching the avalanche and restoring bias, the SPAD cannot detect further photon arrivals. Any photons, (or dark counts or after-pulses), that reach the detector during this brief period are not counted. 
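The dead-time effect just described is often summarised with the standard non-paralyzable and paralyzable detector models. The sketch below is an illustration under textbook assumptions (Poissonian photon arrivals and a single fixed dead time), not a model of any particular SPAD or quenching circuit; the 10 ns dead time is an arbitrary example value. Roughly speaking, an actively quenched and reset SPAD behaves like the non-paralyzable case, while a passively quenched SPAD, in which arrivals during recharge can extend the dead time, is closer to the paralyzable case.

```python
import math

def non_paralyzable(true_rate, tau):
    # Measured rate saturates towards 1/tau as the true photon rate grows.
    return true_rate / (1 + true_rate * tau)

def paralyzable(true_rate, tau):
    # Measured rate peaks at 1/(e*tau) when true_rate = 1/tau and then falls,
    # which is the origin of the roughly 1/e penalty on maximum counting rate
    # for passively quenched operation.
    return true_rate * math.exp(-true_rate * tau)

tau = 10e-9  # assumed dead time of 10 ns (illustrative only)
for rate in (1e6, 1e7, 1e8, 1e9):
    print(f"{rate:.0e} photons/s -> "
          f"non-paralyzable {non_paralyzable(rate, tau):.3e} cps, "
          f"paralyzable {paralyzable(rate, tau):.3e} cps")
print("paralyzable maximum:", 1 / (math.e * tau), "cps")
```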
As the number of photons increases such that the (statistical) time interval between photons gets within a factor of ten or so of the avalanche recovery time, missing counts become statistically significant and the count rate begins to depart from a linear relationship with detected light level. At this point the SPAD begins to saturate. If the light level were to increase further, ultimately to the point where the SPAD immediately avalanches the moment the avalanche recovery circuit restores bias, the count rate reaches a maximum defined purely by the avalanche recovery time in the case of active quenching (hundred million counts per second or more). This can be harmful to the SPAD as it will be experiencing avalanche current nearly continuously. In the passive case, saturation may lead to the count rate decreasing once the maximum is reached. This is called paralysis, whereby a photon arriving as the SPAD is passively recharging, has a lower detection probability, but can extend the dead time. It is worth noting that passive quenching, while simpler to implement in terms of circuitry, incurs a 1/e reduction in maximum counting rates. Dark count rate (DCR) Besides photon-generated carriers, thermally-generated carriers (through generation-recombination processes within the semiconductor) can also fire the avalanche process. Therefore, it is possible to observe output pulses when the SPAD is in complete darkness. The resulting average number of counts per second is called dark count rate (DCR) and is the key parameter in defining the detector noise. It is worth noting that the reciprocal of the dark count rate defines the mean time that the SPAD remains biased above breakdown before being triggered by an undesired thermal generation. Therefore, in order to work as a single-photon detector, the SPAD must be able to remain biased above breakdown for a sufficiently long time (e.g., a few milliseconds, corresponding to a count rate well under a thousand counts per second, cps). Afterpulsing noise One other effect that can trigger an avalanche is known as afterpulsing. When an avalanche occurs, the PN junction is flooded with charge carriers and trap levels between the valence and conduction band become occupied to a degree that is much greater than that expected in a thermal-equilibrium distribution of charge carriers. After the SPAD has been quenched, there is some probability that a charge carrier in a trap level receives enough energy to free it from the trap and promote it to the conduction band, which triggers a new avalanche. Thus, depending on the quality of the process and exact layers and implants that were used to fabricate the SPAD, a significant number of extra pulses can be developed from a single originating thermal or photo-generation event. The degree of afterpulsing can be quantified by measuring the autocorrelation of the times of arrival between avalanches when a dark count measurement is set up. Thermal generation produces Poissonian statistics with an impulse function autocorrelation, and afterpulsing produces non-Poissonian statistics. Photon timing and jitter The leading edge of a SPAD's avalanche breakdown is particularly useful for timing the arrival of photons. This method is useful for 3D imaging, LIDAR and is used heavily in physical measurements relying on time-correlated single photon counting (TCSPC). However, to enable such functionality dedicated circuits such as time-to-digital converters (TDCs) and time-to-analogue (TAC) circuits are required. 
The measurement of a photon's arrival is complicated by two general processes. The first is the statistical fluctuation in the arrival time of the photon itself, which is a fundamental property of light. The second is the statistical variation in the detection mechanism within the SPAD due to a) depth of photon absorption, b) diffusion time to the active p-n junction, c) the build up statistics of the avalanche and d) the jitter of the detection and timing circuitry. Optical fill factor For a single SPAD, the ratio of its optically sensitive area, Aact, to its total area, Atot, is called the fill factor, . As SPADs require a guard ring to prevent premature edge breakdown, the optical fill factor becomes a product of the diode shape and size with relation its guard ring. If the active area is large and the outer guard ring is thin, the device will have a high fill factor. With a single device, the most efficient method to ensure full utilisation of the area and maximum sensitivity is to focus the incoming optical signal to be within the device's active area, i.e. all incident photons are absorbed within the planar area of the p-n junction such that any photon within this area can trigger an avalanche. Fill factor is more applicable when we consider arrays of SPAD devices. Here the diode active area may be small or commensurate with the guard ring's area. Likewise, the fabrication process of the SPAD array may put constraints on the separation of one guard ring to another, i.e. the minimum separation of SPADs. This leads to the situation where the area of the array becomes dominated by guard ring and separation regions rather than optically receptive p-n junctions. The fill factor is made worse when circuitry must be included within the array as this adds further separation between optically receptive regions. One method to mitigate this issue is to increase the active area of each SPAD in the array such that guard rings and separation are no longer dominant, however for CMOS integrated SPADs the erroneous detections caused by dark counts increases as the diode size increases. Geometric improvements One of the first methods to increase fill factors in arrays of circular SPADs was to offset the alignment of alternate rows such that the curve of one SPAD partially uses the area between the two SPADs on an adjacent row. This was effective but complicated the routing and layout of the array. To address fill factor limitations within SPAD arrays formed of circular SPADs, other shapes are utilised as these are known to have higher maximum area values within a typically square pixel area and have higher packing ratios. A square SPAD within a square pixel achieves the highest fill factor, however the sharp corners of this geometry are known to cause premature breakdown of the device, despite a guard ring and consequently produce SPADs with high dark count rates. To compromise, square SPADs with sufficiently rounded corners have been fabricated. These are termed Fermat shaped SPADs while the shape itself is a super-ellipse or a Lamé curve. This nomenclature is common in the SPAD literature, however the Fermat curve refers to a special case of the super-ellipse that puts restrictions on the ratio of the shape's length, "a" and width, "b" (they must be the same, a = b = 1) and restricts the degree of the curve "n" to be even integers (2, 4, 6, 8 etc). The degree "n" controls the curvature of the shape's corners. 
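To make the geometric trade-off concrete, the closed-form area of a super-ellipse can be compared with that of an inscribed circle for the same pixel. The sketch below is purely illustrative: the pixel pitch and guard-ring allowance are arbitrary example values, not figures from any real device, and only the standard Lamé-curve area formula is assumed.

```python
from math import gamma

# Area of the super-ellipse |x/a|^n + |y/b|^n = 1:
#   A = 4ab * Gamma(1 + 1/n)**2 / Gamma(1 + 2/n)
# n = 2 is a circle/ellipse; larger even n approaches a square with
# progressively sharper corners.
def superellipse_area(a, b, n):
    return 4 * a * b * gamma(1 + 1 / n) ** 2 / gamma(1 + 2 / n)

pixel = 10.0   # assumed pixel pitch in micrometres (illustrative)
guard = 1.5    # assumed guard ring plus spacing per side, in micrometres
half = (pixel - 2 * guard) / 2   # half-width available for the active area

pixel_area = pixel ** 2
for n in (2, 4, 8, 16):
    active = superellipse_area(half, half, n)
    print(f"n={n:2d}: active area {active:6.2f} um^2, fill factor {active / pixel_area:.1%}")
# The circle (n=2) gives the lowest fill factor; increasing n recovers more of
# the pixel at the cost of sharper corners, and hence a higher risk of
# premature edge breakdown, as noted above.
```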
Ideally, to optimise the shape of the diode for both low noise and a high fill factor, the shape's parameters should be free of these restrictions. To minimise the spacing between SPAD active areas, researchers have removed all active circuitry from the arrays and have also explored the use of NMOS only CMOS SPAD arrays to remove SPAD guard ring to PMOS n-well spacing rules. This is of benefit but is limited by routing distances and congestion into the centre SPADs for larger arrays. The concept has been extended to develop arrays that use clusters of SPADs in so-called mini-SiPM arrangements whereby a smaller array is provided with its active circuitry at one edge, allowing a second small array to be abutted on a different edge. This reduced the routing difficulties by keeping the number of diodes in the cluster manageable and creating the required number of SPADs in total from collections of those clusters. A significant jump in fill factor and array pixel pitch was achieved by sharing the deep n-well of the SPADs in CMOS processes, and more recently also sharing portions of the guard-ring structure. This removed one of the major guard-ring to guard-ring separation rules and allowed the fill-factor to increase towards 60 or 70%. The n-well and guard ring sharing idea has been crucial in efforts towards lowering pixel pitch and increasing the total number of diodes in the array. Recently SPAD pitches have been reduced to 3.0 um and 2.2 um. Porting a concept from photodiodes and APDs, researchers have also investigated the use of drift electric fields within the CMOS substrate to attract photo generated carriers towards a SPAD's active p-n junction. By doing so a large optical collection area can be achieved with a smaller SPAD region. Another concept ported from CMOS image sensor technologies, is the exploration of stacked p-n junctions similar to Foveon sensors. The idea being that higher-energy photons (blue) tend to be absorbed at a short absorption depth, i.e. near the silicon surface. Red and infra-red photons (lower energy) travel deeper into the silicon. If there is a junction at that depth, red and IR sensitivity can be improved. IC fabrication improvements With the advancement of 3D IC technologies, i.e. stacking of integrated circuits, the fill factor could be enhanced further by allowing the top die to be optimised for a high fill-factor SPAD array, and the lower die for readout circuits and signal processing. As small dimension, high-speed processes for transistors may require different optimisations than optically sensitive diodes, 3D-ICs allow the layers to be separately optimised. Pixel-level optical improvements As with CMOS image sensors micro-lenses can be fabricated on the SPAD pixel array to focus light into the centre of the SPAD. As with a single SPAD, this allows light to only hit the sensitive regions and avoid both the guard ring and any routing that is needed within the array. This has also recently included Fresnel type lenses. Pixel pitch The above fill-factor enhancement methods, mostly concentrating on SPAD geometry along with other advancements, have led SPAD arrays to recently push the 1 mega pixel barrier. While this lags CMOS image sensors (with pitches now below 0.8 um), this is a product of both the youth of the research field (with CMOS SPADs introduced in 2003) and the complications of high voltages, avalanche multiplication within the silicon and the required spacing rules. 
Comparison with APDs While both APDs and SPADs are semiconductor p-n junctions that are heavily reverse biased, the principle difference in their properties is derived from their different biasing points upon the reverse I-V characteristic, i.e. the reverse voltage applied to their junction. An APD, in comparison to a SPAD, is not biased above its breakdown voltage. This is because the multiplication of charge carriers is known to occur prior to the breakdown of the device with this being utilised to achieve a stable gain that varies with the applied voltage. For optical detection applications, the resulting avalanche and subsequent current in its biasing circuit is linearly related to the optical signal intensity. The APD is therefore useful to achieve moderate up-front amplification of low-intensity optical signals but is often combined with a trans-impedance amplifier (TIA) as the APD's output is a current rather than the voltage of a typical amplifier. The resultant signal is a non-distorted, amplified version of the input, allowing for the measurement of complex processes that modulate the amplitude of the incident light. The internal multiplication gain factors for APDs vary by application, however typical values are of the order of a few hundred. The avalanche of carriers is not divergent in this operating region, while the avalanche present in SPADs quickly builds into a run-away (divergent) condition. In comparison, SPADs operate at a bias voltage above the breakdown voltage. This is such a highly unstable above-breakdown regime that a single photon or a single dark-current electron can trigger a significant avalanche of carriers. The semiconductor p-n junction breaks down completely, and a significant current is developed. A single photon can trigger a current spike equivalent to billions of billions of electrons per second (with this being dependent on the physical size of the device and its bias voltage). This allows subsequent electronic circuits to easily count such trigger events. As the device produces a trigger event, the concept of gain is not strictly compatible. However, as the photon detection efficiency (PDE) of SPADs varies with the reverse bias voltage, gain, in a general conceptual sense can be used to distinguish devices that are heavily biased and therefore highly sensitive in comparison to lightly biased and therefore of lower sensitivity. While APDs can amplify an input signal preserving any changes in amplitude, SPADs distort the signal into a series of trigger or pulse events. The output can still be treated as proportional to the input signal intensity, however it is now transformed into the frequency of trigger events, i.e. pulse frequency modulation (PFM). Pulses can be counted giving an indication of the input signal's optical intensity, while pulses can trigger timing circuits to provide accurate time-of-arrival measurements. One crucial issue present in APDs is multiplication noise induced by the statistical variation of the avalanche multiplication process. This leads to a corresponding noise factor on the output amplified photo current. Statistical variation in the avalanche is also present in SPAD devices, however due to the runaway process it is often manifest as timing jitter on the detection event. 
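The multiplication noise referred to here is commonly quantified with McIntyre's excess noise factor. The short sketch below uses that standard model rather than anything from this article, and the ionisation-ratio values are illustrative assumptions only.

```python
# McIntyre excess noise factor for a linear-mode APD:
#   F(M) = k*M + (2 - 1/M)*(1 - k)
# where M is the mean avalanche gain and k the effective ratio of the
# ionisation coefficients (material and design dependent; the values below
# are illustrative, not taken from the article).
def excess_noise_factor(gain, k):
    return k * gain + (2 - 1 / gain) * (1 - k)

for k in (0.02, 0.1, 0.4):
    for gain in (10, 100, 500):
        print(f"k={k:4.2f}  M={gain:3d}  F={excess_noise_factor(gain, k):6.1f}")
# Both higher gain and higher k inflate F, one reason linear APDs are operated
# at moderate gains, whereas a SPAD sidesteps linear gain altogether by
# producing a saturated trigger pulse.
```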
Along with their bias region, there are also structural differences between APDs and SPADs, principally due to the increased reverse bias voltages required and the need for SPADs to have a long quiescent period between noise trigger events to be suitable for the single-photon level signals to be measured. History, development and early pioneers The history and development of SPADs and APDs share a number of important points with the development of solid-state technologies such as diodes and early p–n junction transistors (particularly war-time efforts at Bell Labs). John Townsend in 1901 and 1903 investigated the ionisation of trace gases within vacuum tubes, finding that as the electric potential increased, gaseous atoms and molecules could become ionised by the kinetic energy of free electrons accelerated through the electric field. The newly liberated electrons were then themselves accelerated by the field, producing new ionisations once their kinetic energy had reached sufficient levels. This theory was later instrumental in the development of the thyratron and the Geiger-Mueller tube. The Townsend discharge was also instrumental as a base theory for electron multiplication phenomena (both DC and AC) within both silicon and germanium. However, the major advances in the early discovery and utilisation of the avalanche gain mechanism were a product of the study of Zener breakdown, related (avalanche) breakdown mechanisms and structural defects in early silicon and germanium transistor and p–n junction devices. These defects were called 'microplasmas' and are critical in the history of APDs and SPADs. Likewise, investigation of the light detection properties of p–n junctions is crucial, especially the early 1940s findings of Russell Ohl. Light detection in semiconductors and solids through the internal photoelectric effect is older still, with Foster Nix pointing to the work of Gudden and Pohl in the 1920s, who used the phrases primary and secondary to distinguish the internal and external photoelectric effects respectively. In the 1950s and 1960s, significant effort was made to reduce the number of microplasma breakdown and noise sources, with artificial microplasmas being fabricated for study. It became clear that the avalanche mechanism could be useful for signal amplification within the diode itself, as both light and alpha particles were used for the study of these devices and breakdown mechanisms. Since the early 2000s, SPADs have been implemented within CMOS processes. This has radically improved their performance (dark count rate, jitter, array pixel pitch, etc.) and has leveraged the analog and digital circuits that can be implemented alongside these devices. Notable circuits include photon counting using fast digital counters, photon timing using both time-to-digital converters (TDCs) and time-to-analog converters (TACs), passive quenching circuits using either NMOS or PMOS transistors in place of poly-silicon resistors, active quenching and reset circuits for high counting rates, and many on-chip digital signal processing blocks. Such devices, reaching optical fill factors of >70%, with >1024 SPADs, DCRs < 10 Hz and jitter values in the 50 ps region, are now available with dead times of 1-2 ns. Recent devices have leveraged 3D-IC technologies such as through-silicon vias (TSVs) to present a high-fill-factor SPAD-optimised top CMOS layer (90 nm or 65 nm node) with a dedicated signal processing and readout CMOS layer (45 nm node). 
Significant advancements in the noise terms for SPADs have been obtained by silicon process modelling tools such as TCAD, where guard rings, junction depths and device structures and shapes can be optimised prior to validation by experimental SPAD structures. See also Avalanche photodiode (APD) Oversampled binary image sensor p–n junction Silicon photomultiplier (SiPM) References Optical devices Optical diodes Particle detectors Photodetectors Single-photon detectors
Single-photon avalanche diode
[ "Materials_science", "Technology", "Engineering" ]
5,830
[ "Glass engineering and science", "Particle detectors", "Optical devices", "Measuring instruments" ]
972,745
https://en.wikipedia.org/wiki/Electra%20%28Euripides%20play%29
Euripides' Electra (Ēlektra) is a tragedy probably written in the mid 410s BC, likely before 413 BC. A version of the myth of the house of Atreus, Euripides' play reworks important aspects of the story found in Aeschylus' Oresteia trilogy (especially the second play, Libation Bearers) and also in Sophocles' Electra, although the relative dating of Euripides' and Sophocles' plays remains uncertain. In his tragedy, Euripides introduces startling and disturbing elements that ask his audience (and readers) to question the nature of tragic 'heroism,' assumptions of appropriate gender behavior, and the morality of both human characters and the gods. Background Years before the start of the play, at the outset of the Trojan War, the Greek general Agamemnon sacrifices his daughter Iphigeneia in order to appease the goddess Artemis. While his sacrifice allows the Greek army to sail for Troy, it leads to deep resentment from his wife, Clytemnestra, who has also borne Agamemnon's other children, Orestes and Electra. When Agamemnon returns victorious from the Trojan War ten years later, Clytemnestra (in some versions, helped by her lover Aegisthus) murders him in the bath. Orestes goes into exile, while Electra remains in the palace, under the thumb of her mother and Aegisthus, the new rulers in Argos. Plot The play opens with a prologue delivered by a poor farmer, who informs us that he is the husband of Electra. When Electra reached marriageable age, Aegisthus feared that a noble husband would father children who might avenge Agamemnon's murder. So he and Clytemnestra marry off Electra to this poor Mycenaean, who treats her kindly, honors her royal lineage, and respects her virginity. While lamenting her father's murder and her loss of status, Electra helps her husband with the household chores, going off to fetch water from the spring. After Agamemnon's murder, Clytemnestra and Aegisthus send Orestes to Phocis, where he befriends the king's son, Pylades. Now grown, Orestes returns to Argos with Pylades, and he seeks out news of his sister Electra and considers how to avenge their father Agamemnon's murder. They come to a poor hillside cottage that turns out to be Electra and the farmer's home. Claiming to bring a message from her brother, Orestes keeps his identity hidden, even after he determines Electra's loyalty and her commitment to avenge Agamemnon's murder. Sent for by Electra, the aged servant of the family arrives at her homestead, and he "outs" Orestes when he recognizes him by the scar on his brow. Joyfully reunited, brother and sister plot how they will murder both Aegisthus and Clytemnestra. The old servant explains that Aegisthus is currently in the country where his horses pasture, preparing a sacrifice and feast. Before Orestes and Pylades go to confront Aegisthus, Electra sends the old servant to tell Clytemnestra that she had a son ten days ago, knowing this will bring Clytemnestra to her house. A messenger arrives and describes Orestes' murder of Aegisthus during the sacrifice. Orestes and Pylades return with the corpse, and Electra delivers a vindictive speech over the body. When Orestes sees Clytemnestra approaching in a wagon, he wavers in his commitment to murder her. Electra shames her brother, and he hides in the cottage awaiting the inevitable. Mother and daughter alternate speeches of accusation, until Clytemnestra is invited into the cottage to help Electra with the birth ritual for her (fictional) newborn. 
Helped by Electra, Orestes kills their mother with a sword. The two leave the house, filled with grief and guilt. As they lament, Clytemnestra's deified brothers, Castor and Pollux, appear. They tell Electra and Orestes that their mother received just punishment but their matricide was still a shameful act, and they instruct the siblings on what they must do to atone and purge their souls. Aeschylean parody and Homeric allusion The enduring popularity of Aeschylus' Oresteia trilogy (produced in 458 BC) is evident in Euripides' construction of the recognition scene between Orestes and Electra, which mocks Aeschylus' play. In The Libation Bearers (whose plot is roughly equivalent to the events in Electra), Electra recognizes her brother by a series of tokens: a lock of his hair, a footprint he leaves at Agamemnon's grave, and an article of clothing she had made for him years earlier. Euripides' own recognition scene clearly ridicules Aeschylus' account. In Euripides' play (510ff.), Electra laughs at the idea of using such tokens to recognize her brother because: there is no reason their hair should match; Orestes' footprint would in no way resemble her smaller footprint; and it would be illogical for a grown Orestes to still have a piece of clothing made for him when he was a small child. Orestes is instead recognized from a scar he received on the forehead while chasing a doe in the house as a child (571-74). This is a mock-heroic allusion to a scene from Homer's Odyssey. In Odyssey 19.428-54, the nurse Eurycleia recognizes a newly returned Odysseus from a scar on his thigh that he received as a child while on his first boar hunt. In the Odyssey, Orestes' return to Argos and taking revenge for his father's death is held up several times as a model for Telemachus' behavior (see Telemachy). Euripides in turn uses his recognition scene to allude to the one in Odyssey 19. Instead of an epic heroic boar hunt, Euripides instead invents a semi-comic incident involving a fawn. Translations Edward P. Coleridge, 1891 – prose: full text Aurthur S. Way, 1896 – verse: full text Gilbert Murray, 1911 – verse: full text Moses Hadas and John McLean, 1936 - prose D. W. Lucas, 1951 – prose Emily Townsend Vermeule, 1958 – verse M. J. Cropp, 1988 – verse J. Lembke & K.J. Reckford, 1994 James Morwood, 1997 – prose K. McLeish, 1997 J. Davie, 1998 J. Morwood, 1998 M. MacDonald and J. M. Walton, 2004 – verse G. Theodoridis, 2006 – prose: full text Ian C. Johnston, 2009 – verse: full text Brian Vinero, 2012: verse Emily Wilson, 2016 - verse Adaptations Electra, 1962 film References Sources Arnott, W. G. 1993. "Double the Vision: A Reading of Euripides' Electra (1981)" In Greek Tragedy. Greece and Rome Studies, Volume II. Edited by Ian McAuslan and Peter Walcot. New York: Oxford University Press Gallagher, Robert L. 2003. "Making the Stronger Argument the Weaker: Euripides, Electra 518-41." Classical Quarterly 53.2: 401-415 Garner, R. 1990. From Homer to Tragedy: The Art of Allusion in Greek Poetry. London: Routledge. Garvie, Alexander F. 2012. "Three Different Electras in Three Different Plots." Lexis 30:283–293. Gellie, G. H. 1981. "Tragedy and Euripides’ Electra." Bulletin of the Institute of Classical Studies 28:1–12. Goff, Barbara. 1999–2000. "Try to Make it Real Compared to What? Euripides’ Electra and the Play of Genres." Illinois Classical Studies 24–25:93–105. Hammond, N. G. L. 1985. "Spectacle and Parody in Euripides’ Electra." Greek, Roman and Byzantine Studies 25:373–387. Morwood, J. H. W. 1981. 
"The Pattern of the Euripides Electra." American Journal of Philology 102:362–370. Mossman, Judith. 2001. "Women’s Speech in Greek Tragedy: The Case of Electra and Clytemnestra in Euripides’ Electra." Classical Quarterly n 51:374–384. Raeburn, David. 2000. "The Significance of Stage Properties in Euripides’ Electra." Greece & Rome 47:149–168. Solmsen, F. 1967. Electra and Orestes: Three Recognitions in Greek Tragedy. Amsterdam: Noord-Hollandsche Uitgevers Mij. Tarkow, T. 1981. "The Scar of Orestes: Observations on a Euripidean Innovation." Rheinisches Museum 124: 143-53. Wohl, Victoria. 2015. "How to Recognise a Hero in Euripides’ Electra." Bulletin of the Institute of Classical Studies 58:61–76. External links Textual criticism. Theatre Database (online). Plays by Euripides Trojan War literature Mythology of Argolis Plays set in ancient Greece Greek plays adapted into films Plays based on classical mythology Castor and Pollux Clytemnestra Atreidai
Electra (Euripides play)
[ "Astronomy" ]
2,026
[ "Castor and Pollux", "Astronomical myths" ]
972,800
https://en.wikipedia.org/wiki/Abyssal%20plain
An abyssal plain is an underwater plain on the deep ocean floor, usually found at depths between 3,000 and 6,000 metres. Lying generally between the foot of a continental rise and a mid-ocean ridge, abyssal plains cover more than 50% of the Earth's surface. They are among the flattest, smoothest, and least explored regions on Earth. Abyssal plains are key geologic elements of oceanic basins (the other elements being an elevated mid-ocean ridge and flanking abyssal hills). The creation of the abyssal plain is the result of the spreading of the seafloor (plate tectonics) and the melting of the lower oceanic crust. Magma rises from the asthenosphere (a layer of the upper mantle), and as this basaltic material reaches the surface at mid-ocean ridges it forms new oceanic crust, which is constantly pulled sideways by spreading of the seafloor. Abyssal plains result from the blanketing of an originally uneven surface of oceanic crust by fine-grained sediments, mainly clay and silt. Much of this sediment is deposited by turbidity currents that have been channelled from the continental margins along submarine canyons into deeper water. The rest is composed chiefly of pelagic sediments. Metallic nodules are common in some areas of the plains, with varying concentrations of metals, including manganese, iron, nickel, cobalt, and copper. Carbon, nitrogen, phosphorus, and silicon are also present, derived from material that sinks from the waters above and decomposes. Owing in part to their vast size, abyssal plains are believed to be major reservoirs of biodiversity. They also exert significant influence upon ocean carbon cycling, dissolution of calcium carbonate, and atmospheric CO2 concentrations over time scales of a hundred to a thousand years. The structure of abyssal ecosystems is strongly influenced by the rate of flux of food to the seafloor and the composition of the material that settles. Factors such as climate change, fishing practices, and ocean fertilization have a substantial effect on patterns of primary production in the euphotic zone. Animals on the abyssal plain absorb dissolved oxygen from the oxygen-poor waters. Much of the dissolved oxygen in abyssal waters entered the ocean long ago, when cold, oxygen-rich surface water in the polar regions sank and spread along the deep ocean floor. Due to this scarcity of oxygen, abyssal plains are inhospitable for organisms that would flourish in the oxygen-enriched waters above. Deep sea coral reefs are mainly found at depths of 3,000 metres and deeper, in the abyssal and hadal zones. Abyssal plains were not recognized as distinct physiographic features of the sea floor until the late 1940s and, until recently, none had been studied on a systematic basis. They are poorly preserved in the sedimentary record, because they tend to be consumed by the subduction process. Due to darkness and a water pressure that can reach about 750 times atmospheric pressure (76 megapascals), abyssal plains are not well explored. Oceanic zones The ocean can be conceptualized as zones, depending on depth and the presence or absence of sunlight. Nearly all life forms in the ocean depend on the photosynthetic activities of phytoplankton and other marine plants to convert carbon dioxide into organic carbon, which is the basic building block of organic matter. Photosynthesis in turn requires energy from sunlight to drive the chemical reactions that produce organic carbon. The stratum of the water column nearest the surface of the ocean (sea level) is referred to as the photic zone. The photic zone can be subdivided into two different vertical regions. 
The uppermost portion of the photic zone, where there is adequate light to support photosynthesis by phytoplankton and plants, is referred to as the euphotic zone (also referred to as the epipelagic zone, or surface zone). The lower portion of the photic zone, where the light intensity is insufficient for photosynthesis, is called the dysphotic zone (dysphotic means "poorly lit" in Greek). The dysphotic zone is also referred to as the mesopelagic zone, or the twilight zone. Its lowermost boundary lies at a thermocline which, in the tropics, generally sits between 200 and 1,000 metres. The euphotic zone is somewhat arbitrarily defined as extending from the surface to the depth where the light intensity is approximately 0.1–1% of surface sunlight irradiance, depending on season, latitude and degree of water turbidity. In the clearest ocean water, the euphotic zone may extend to a depth of about 150 metres, or rarely, up to 200 metres. Dissolved substances and solid particles absorb and scatter light, and in coastal regions the high concentration of these substances causes light to be attenuated rapidly with depth. In such areas the euphotic zone may be only a few tens of metres deep or less. The dysphotic zone, where light intensity is considerably less than 1% of surface irradiance, extends from the base of the euphotic zone to about 1,000 metres. Extending from the bottom of the photic zone down to the seabed is the aphotic zone, a region of perpetual darkness. Since the average depth of the ocean is about 4,300 metres, the photic zone represents only a tiny fraction of the ocean's total volume. However, due to its capacity for photosynthesis, the photic zone has the greatest biodiversity and biomass of all oceanic zones. Nearly all primary production in the ocean occurs here. Life forms which inhabit the aphotic zone are often capable of movement upwards through the water column into the photic zone for feeding. Otherwise, they must rely on material sinking from above, or find another source of energy and nutrition, such as occurs in chemosynthetic archaea found near hydrothermal vents and cold seeps. The aphotic zone can be subdivided into three different vertical regions, based on depth and temperature. First is the bathyal zone, extending from a depth of 1,000 metres down to 3,000 metres, with water temperature steadily decreasing as depth increases. Next is the abyssal zone, extending from a depth of 3,000 metres down to 6,000 metres. The final zone includes the deep oceanic trenches, and is known as the hadal zone. This, the deepest oceanic zone, extends from a depth of 6,000 metres down to approximately 11,034 metres, at the very bottom of the Mariana Trench, the deepest point on planet Earth. Abyssal plains are typically in the abyssal zone, at depths from 3,000 to 6,000 metres. Formation Oceanic crust, which forms the bedrock of abyssal plains, is continuously being created at mid-ocean ridges (a type of divergent boundary) by a process known as decompression melting. Plume-related decompression melting of solid mantle is responsible for creating ocean islands like the Hawaiian Islands, as well as the ocean crust at mid-ocean ridges. This phenomenon is also the most common explanation for flood basalts and oceanic plateaus (two types of large igneous provinces). Decompression melting occurs when the upper mantle is partially melted into magma as it moves upwards under mid-ocean ridges. 
This upwelling magma then cools and solidifies by conduction and convection of heat to form new oceanic crust. Accretion occurs as mantle is added to the growing edges of a tectonic plate, usually associated with seafloor spreading. The age of oceanic crust is therefore a function of distance from the mid-ocean ridge. The youngest oceanic crust is at the mid-ocean ridges, and it becomes progressively older, cooler and denser as it migrates outwards from the mid-ocean ridges as part of the process called mantle convection. The lithosphere, which rides atop the asthenosphere, is divided into a number of tectonic plates that are continuously being created and consumed at their opposite plate boundaries. Oceanic crust and tectonic plates are formed and move apart at mid-ocean ridges. Abyssal hills are formed by stretching of the oceanic lithosphere. Consumption or destruction of the oceanic lithosphere occurs at oceanic trenches (a type of convergent boundary, also known as a destructive plate boundary) by a process known as subduction. Oceanic trenches are found at places where the oceanic lithospheric slabs of two different plates meet, and the denser (older) slab begins to descend back into the mantle. At the consumption edge of the plate (the oceanic trench), the oceanic lithosphere has thermally contracted to become quite dense, and it sinks under its own weight in the process of subduction. The subduction process consumes older oceanic lithosphere, so oceanic crust is seldom more than 200 million years old. The overall process of repeated cycles of creation and destruction of oceanic crust is known as the Supercontinent cycle, first proposed by Canadian geophysicist and geologist John Tuzo Wilson. New oceanic crust, closest to the mid-oceanic ridges, is mostly basalt at shallow levels and has a rugged topography. The roughness of this topography is a function of the rate at which the mid-ocean ridge is spreading (the spreading rate). Magnitudes of spreading rates vary quite significantly. Typical values for fast-spreading ridges are greater than 100 mm/yr, while slow-spreading ridges are typically less than 20 mm/yr. Studies have shown that the slower the spreading rate, the rougher the new oceanic crust will be, and vice versa. It is thought this phenomenon is due to faulting at the mid-ocean ridge when the new oceanic crust was formed. These faults pervading the oceanic crust, along with their bounding abyssal hills, are the most common tectonic and topographic features on the surface of the Earth. The process of seafloor spreading helps to explain the concept of continental drift in the theory of plate tectonics. The flat appearance of mature abyssal plains results from the blanketing of this originally uneven surface of oceanic crust by fine-grained sediments, mainly clay and silt. Much of this sediment is deposited from turbidity currents that have been channeled from the continental margins along submarine canyons down into deeper water. The remainder of the sediment comprises chiefly dust (clay particles) blown out to sea from land, and the remains of small marine plants and animals which sink from the upper layer of the ocean, known as pelagic sediments. The total sediment deposition rate in remote areas is estimated at two to three centimeters per thousand years. Sediment-covered abyssal plains are less common in the Pacific Ocean than in other major ocean basins because sediments from turbidity currents are trapped in oceanic trenches that border the Pacific Ocean. 
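As a rough, purely illustrative check of the sedimentation figures quoted above, the short Python sketch below converts the stated deposition rate of two to three centimetres per thousand years into the sediment thickness that would accumulate on old oceanic crust, assuming a constant rate and ignoring compaction, turbidity-current input, and trench trapping.

```python
# Back-of-the-envelope estimate of pelagic sediment thickness, using only the
# deposition rate quoted above (2-3 cm per thousand years in remote areas).
# Constant rate, no compaction: an illustration, not a model of any real plain.
def sediment_thickness_m(crust_age_myr: float, rate_cm_per_kyr: float) -> float:
    thousand_year_intervals = crust_age_myr * 1_000   # 1 Myr = 1,000 kyr
    return thousand_year_intervals * rate_cm_per_kyr / 100  # cm -> m

for rate in (2.0, 3.0):
    print(f"{rate} cm/kyr over 100 Myr of crust -> "
          f"{sediment_thickness_m(100, rate):,.0f} m of sediment")
# 2 cm/kyr gives ~2,000 m and 3 cm/kyr gives ~3,000 m, enough to bury the
# original abyssal-hill relief and produce the flat plains described above.
```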
Abyssal plains are typically covered by deep sea, but during parts of the Messinian salinity crisis much of the Mediterranean Sea's abyssal plain was exposed to air as an empty deep hot dry salt-floored sink. Discovery The landmark scientific expedition (December 1872 – May 1876) of the British Royal Navy survey ship HMS Challenger yielded a tremendous amount of bathymetric data, much of which has been confirmed by subsequent researchers. Bathymetric data obtained during the course of the Challenger expedition enabled scientists to draw maps, which provided a rough outline of certain major submarine terrain features, such as the edge of the continental shelves and the Mid-Atlantic Ridge. This discontinuous set of data points was obtained by the simple technique of taking soundings by lowering long lines from the ship to the seabed. The Challenger expedition was followed by the 1879–1881 expedition of the Jeannette, led by United States Navy Lieutenant George Washington DeLong. The team sailed across the Chukchi Sea and recorded meteorological and astronomical data in addition to taking soundings of the seabed. The ship became trapped in the ice pack near Wrangel Island in September 1879, and was ultimately crushed and sunk in June 1881. The Jeannette expedition was followed by the 1893–1896 Arctic expedition of Norwegian explorer Fridtjof Nansen aboard the Fram, which proved that the Arctic Ocean was a deep oceanic basin, uninterrupted by any significant land masses north of the Eurasian continent. Beginning in 1916, Canadian physicist Robert William Boyle and other scientists of the Anti-Submarine Detection Investigation Committee (ASDIC) undertook research which ultimately led to the development of sonar technology. Acoustic sounding equipment was developed which could be operated much more rapidly than the sounding lines, thus enabling the German Meteor expedition aboard the German research vessel Meteor (1925–27) to take frequent soundings on east-west Atlantic transects. Maps produced from these techniques show the major Atlantic basins, but the depth precision of these early instruments was not sufficient to reveal the flat featureless abyssal plains. As technology improved, measurement of depth, latitude and longitude became more precise and it became possible to collect more or less continuous sets of data points. This allowed researchers to draw accurate and detailed maps of large areas of the ocean floor. Use of a continuously recording fathometer enabled Tolstoy & Ewing in the summer of 1947 to identify and describe the first abyssal plain. This plain, south of Newfoundland, is now known as the Sohm Abyssal Plain. Following this discovery many other examples were found in all the oceans. The Challenger Deep is the deepest surveyed point of all of Earth's oceans; it is at the south end of the Mariana Trench near the Mariana Islands group. The depression is named after HMS Challenger, whose researchers made the first recordings of its depth on 23 March 1875 at station 225. The reported depth was 4,475 fathoms (8184 meters) based on two separate soundings. On 1 June 2009, sonar mapping of the Challenger Deep by the Simrad EM120 multibeam sonar bathymetry system aboard the R/V Kilo Moana indicated a maximum depth of 10971 meters (6.82 miles). The sonar system uses phase and amplitude bottom detection, with an accuracy of better than 0.2% of water depth (this is an error of about 22 meters at this depth). 
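The depth figures in this section come from different eras and units; the following short sketch, using only numbers already quoted above, shows the unit arithmetic behind the 1875 Challenger sounding and the stated 0.2% accuracy of the 2009 multibeam survey.

```python
# Unit arithmetic for two figures quoted above (illustrative only).
FATHOM_IN_METRES = 1.8288          # 1 fathom = 6 feet

challenger_1875 = 4_475 * FATHOM_IN_METRES
print(f"4,475 fathoms  = {challenger_1875:,.0f} m")       # ~8,184 m

depth_2009 = 10_971                # metres, R/V Kilo Moana sonar survey
sonar_error = 0.002 * depth_2009   # better than 0.2% of water depth
print(f"0.2% of {depth_2009:,} m = {sonar_error:.0f} m")  # ~22 m
```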
Terrain features Hydrothermal vents A rare but important terrain feature found in the bathyal, abyssal and hadal zones is the hydrothermal vent. In contrast to the approximately 2 °C ambient water temperature at these depths, water emerges from these vents at temperatures ranging from 60 °C up to as high as 464 °C. Due to the high hydrostatic pressure at these depths, water may exist in either its liquid form or as a supercritical fluid at such temperatures. At a pressure of 218 atmospheres, the critical point of water is 375 °C. At a depth of 3,000 meters, the pressure exerted by sea water is more than 300 atmospheres (as salt water is denser than fresh water). At this depth and pressure, seawater becomes supercritical at a temperature of 407 °C. However, the increase in salinity at this depth pushes the water closer to its critical point. Thus, water emerging from the hottest parts of some hydrothermal vents, black smokers and submarine volcanoes can be a supercritical fluid, possessing physical properties between those of a gas and those of a liquid. Sister Peak (Comfortless Cove Hydrothermal Field, elevation −2996 m), Shrimp Farm and Mephisto (Red Lion Hydrothermal Field, elevation −3047 m) are three hydrothermal vents of the black smoker category on the Mid-Atlantic Ridge near Ascension Island. They are presumed to have been active since an earthquake shook the region in 2002. These vents have been observed to vent phase-separated, vapor-type fluids. In 2008, sustained exit temperatures of up to 407 °C were recorded at one of these vents, with a peak recorded temperature of up to 464 °C. These thermodynamic conditions exceed the critical point of seawater, and are the highest temperatures recorded to date from the seafloor. This is the first reported evidence for direct magmatic-hydrothermal interaction on a slow-spreading mid-ocean ridge. The initial stages of a vent chimney begin with the deposition of the mineral anhydrite. Sulfides of copper, iron, and zinc then precipitate in the chimney gaps, making it less porous over the course of time. Vent growths on the order of 30 cm (1 ft) per day have been recorded. An April 2007 exploration of the deep-sea vents off the coast of Fiji found those vents to be a significant source of dissolved iron (see iron cycle). Hydrothermal vents in the deep ocean typically form along the mid-ocean ridges, such as the East Pacific Rise and the Mid-Atlantic Ridge. These are locations where two tectonic plates are diverging and new crust is being formed. Cold seeps Another unusual feature found in the abyssal and hadal zones is the cold seep, sometimes called a cold vent. This is an area of the seabed where seepage of hydrogen sulfide, methane and other hydrocarbon-rich fluid occurs, often in the form of a deep-sea brine pool. The first cold seeps were discovered in 1983, at a depth of 3200 meters in the Gulf of Mexico. Since then, cold seeps have been discovered in many other areas of the World Ocean, including the Monterey Submarine Canyon just off Monterey Bay, California, the Sea of Japan, off the Pacific coast of Costa Rica, off the Atlantic coast of Africa, off the coast of Alaska, and under an ice shelf in Antarctica. Biodiversity Though the plains were once assumed to be vast, desert-like habitats, research over the past decade or so shows that they teem with a wide variety of microbial life. 
However, ecosystem structure and function at the deep seafloor have historically been poorly studied because of the size and remoteness of the abyss. Recent oceanographic expeditions conducted by an international group of scientists from the Census of Diversity of Abyssal Marine Life (CeDAMar) have found an extremely high level of biodiversity on abyssal plains, with up to 2000 species of bacteria, 250 species of protozoans, and 500 species of invertebrates (worms, crustaceans and molluscs), typically found at single abyssal sites. New species make up more than 80% of the thousands of seafloor invertebrate species collected at any abyssal station, highlighting our heretofore poor understanding of abyssal diversity and evolution. Richer biodiversity is associated with areas of known phytodetritus input and higher organic carbon flux. Abyssobrotula galatheae, a species of cusk eel in the family Ophidiidae, is among the deepest-living species of fish. In 1970, one specimen was trawled from a depth of 8370 meters in the Puerto Rico Trench. The animal was dead, however, upon arrival at the surface. In 2008, the hadal snailfish (Pseudoliparis amblystomopsis) was observed and recorded at a depth of 7700 meters in the Japan Trench. In December 2014 a type of snailfish was filmed at a depth of 8145 meters, followed in May 2017 by another snailfish filmed at 8178 meters. These are, to date, the deepest living fish ever recorded. Other fish of the abyssal zone include the fishes of the family Ipnopidae, which includes the abyssal spiderfish (Bathypterois longipes), tripodfish (Bathypterois grallator), feeler fish (Bathypterois longifilis), and the black lizardfish (Bathysauropsis gracilis). Some members of this family have been recorded from depths of more than 6000 meters. CeDAMar scientists have demonstrated that some abyssal and hadal species have a cosmopolitan distribution. One example of this would be protozoan foraminiferans, certain species of which are distributed from the Arctic to the Antarctic. Other faunal groups, such as the polychaete worms and isopod crustaceans, appear to be endemic to certain specific plains and basins. Many apparently unique taxa of nematode worms have also been recently discovered on abyssal plains. This suggests that the deep ocean has fostered adaptive radiations. The taxonomic composition of the nematode fauna in the abyssal Pacific is similar, but not identical to, that of the North Atlantic. Eleven of the 31 described species of Monoplacophora (a class of mollusks) live below 2000 meters. Of these 11 species, two live exclusively in the hadal zone. The greatest number of monoplacophorans are from the eastern Pacific Ocean along the oceanic trenches. However, no abyssal monoplacophorans have yet been found in the Western Pacific and only one abyssal species has been identified in the Indian Ocean. Of the 922 known species of chitons (from the Polyplacophora class of mollusks), 22 species (2.4%) are reported to live below 2000 meters and two of them are restricted to the abyssal plain. Although genetic studies are lacking, at least six of these species are thought to be eurybathic (capable of living in a wide range of depths), having been reported as occurring from the sublittoral to abyssal depths. 
A large number of the polyplacophorans from great depths are herbivorous or xylophagous, which could explain the difference between the distribution of monoplacophorans and polyplacophorans in the world's oceans. Peracarid crustaceans, including isopods, are known to form a significant part of the macrobenthic community that is responsible for scavenging on large food falls onto the sea floor. In 2000, scientists of the Diversity of the deep Atlantic benthos (DIVA 1) expedition (cruise M48/1 of the German research vessel RV Meteor III) discovered and collected three new species of the Asellota suborder of benthic isopods from the abyssal plains of the Angola Basin in the South Atlantic Ocean. In 2003, De Broyer et al. collected some 68,000 peracarid crustaceans from 62 species from baited traps deployed in the Weddell Sea, Scotia Sea, and off the South Shetland Islands. They found that about 98% of the specimens belonged to the amphipod superfamily Lysianassoidea, and 2% to the isopod family Cirolanidae. Half of these species were collected from depths of greater than 1000 meters. In 2005, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) remotely operated vehicle, KAIKO, collected sediment core from the Challenger Deep. 432 living specimens of soft-walled foraminifera were identified in the sediment samples. Foraminifera are single-celled protists that construct shells. There are an estimated 4,000 species of living foraminifera. Out of the 432 organisms collected, the overwhelming majority of the sample consisted of simple, soft-shelled foraminifera, with others representing species of the complex, multi-chambered genera Leptohalysis and Reophax. Overall, 85% of the specimens consisted of soft-shelled allogromiids. This is unusual compared to samples of sediment-dwelling organisms from other deep-sea environments, where the percentage of organic-walled foraminifera ranges from 5% to 20% of the total. Small organisms with hard calciferous shells have trouble growing at extreme depths because the water at that depth is severely lacking in calcium carbonate. The giant (5–20 cm) foraminifera known as xenophyophores are only found at depths of 500–10,000 metres, where they can occur in great numbers and greatly increase animal diversity due to their bioturbation and provision of living habitat for small animals. While similar lifeforms have been known to exist in shallower oceanic trenches (>7,000 m) and on the abyssal plain, the lifeforms discovered in the Challenger Deep may represent independent taxa from those shallower ecosystems. This preponderance of soft-shelled organisms at the Challenger Deep may be a result of selection pressure. Millions of years ago, the Challenger Deep was shallower than it is now. Over the past six to nine million years, as the Challenger Deep grew to its present depth, many of the species present in the sediment of that ancient biosphere were unable to adapt to the increasing water pressure and changing environment. Those species that were able to adapt may have been the ancestors of the organisms currently endemic to the Challenger Deep. Polychaetes occur throughout the Earth's oceans at all depths, from forms that live as plankton near the surface, to the deepest oceanic trenches. The robot ocean probe Nereus observed a 2–3 cm specimen (still unclassified) of polychaete at the bottom of the Challenger Deep on 31 May 2009. There are more than 10,000 described species of polychaetes; they can be found in nearly every marine environment. 
Some species live in the coldest ocean temperatures of the hadal zone, while others can be found in the extremely hot waters adjacent to hydrothermal vents. Within the abyssal and hadal zones, the areas around submarine hydrothermal vents and cold seeps have by far the greatest biomass and biodiversity per unit area. Fueled by the chemicals dissolved in the vent fluids, these areas are often home to large and diverse communities of thermophilic, halophilic and other extremophilic prokaryotic microorganisms (such as those of the sulfide-oxidizing genus Beggiatoa), often arranged in large bacterial mats near cold seeps. In these locations, chemosynthetic archaea and bacteria typically form the base of the food chain. Although the process of chemosynthesis is entirely microbial, these chemosynthetic microorganisms often support vast ecosystems consisting of complex multicellular organisms through symbiosis. These communities are characterized by species such as vesicomyid clams, mytilid mussels, limpets, isopods, giant tube worms, soft corals, eelpouts, galatheid crabs, and alvinocarid shrimp. The deepest seep community discovered thus far is in the Japan Trench, at a depth of 7700 meters. Probably the most important ecological characteristic of abyssal ecosystems is energy limitation. Abyssal seafloor communities are considered to be food limited because benthic production depends on the input of detrital organic material produced in the euphotic zone, thousands of meters above. Most of the organic flux arrives as an attenuated rain of small particles (typically, only 0.5–2% of net primary production in the euphotic zone), which decreases inversely with water depth. The small particle flux can be augmented by the fall of larger carcasses and downslope transport of organic material near continental margins. Exploitation of resources In addition to their high biodiversity, abyssal plains are of great current and future commercial and strategic interest. For example, they may be used for the legal and illegal disposal of large structures such as ships and oil rigs, radioactive waste and other hazardous waste, such as munitions. They may also be attractive sites for deep-sea fishing, and extraction of oil and gas and other minerals. Future deep-sea waste disposal activities that could be significant by 2025 include emplacement of sewage and sludge, carbon sequestration, and disposal of dredge spoils. As fish stocks dwindle in the upper ocean, deep-sea fisheries are increasingly being targeted for exploitation. Because deep sea fish are long-lived and slow growing, these deep-sea fisheries are not thought to be sustainable in the long term given current management practices. Changes in primary production in the photic zone are expected to alter the standing stocks in the food-limited aphotic zone. Hydrocarbon exploration in deep water occasionally results in significant environmental degradation resulting mainly from accumulation of contaminated drill cuttings, but also from oil spills. While the oil blowout involved in the Deepwater Horizon oil spill in the Gulf of Mexico originates from a wellhead only 1500 meters below the ocean surface, it nevertheless illustrates the kind of environmental disaster that can result from mishaps related to offshore drilling for oil and gas. Sediments of certain abyssal plains contain abundant mineral resources, notably polymetallic nodules. 
These potato-sized concretions of manganese, iron, nickel, cobalt, and copper, distributed on the seafloor at depths of greater than 4000 meters, are of significant commercial interest. The area of maximum commercial interest for polymetallic nodule mining (called the Pacific nodule province) lies in international waters of the Pacific Ocean, stretching from 118°–157°W, and from 9°–16°N, an area of more than 3 million km2. The abyssal Clarion-Clipperton fracture zone (CCFZ) is an area within the Pacific nodule province that is currently under exploration for its mineral potential. Eight commercial contractors are currently licensed by the International Seabed Authority (an intergovernmental organization established to organize and control all mineral-related activities in the international seabed area beyond the limits of national jurisdiction) to explore nodule resources and to test mining techniques in eight claim areas, each covering 150,000 km2. When mining ultimately begins, each mining operation is projected to directly disrupt 300–800 km2 of seafloor per year and disturb the benthic fauna over an area 5–10 times that size due to redeposition of suspended sediments. Thus, over the 15-year projected duration of a single mining operation, nodule mining might severely damage abyssal seafloor communities over areas of 20,000 to 45,000 km2 (a zone at least the size of Massachusetts). Limited knowledge of the taxonomy, biogeography and natural history of deep sea communities prevents accurate assessment of the risk of species extinctions from large-scale mining. Data acquired from the abyssal North Pacific and North Atlantic suggest that deep-sea ecosystems may be adversely affected by mining operations on decadal time scales. In 1978, a dredge aboard the Hughes Glomar Explorer, operated by the American mining consortium Ocean Minerals Company (OMCO), made a mining track at a depth of 5000 meters in the nodule fields of the CCFZ. In 2004, the French Research Institute for Exploitation of the Sea (IFREMER) conducted the Nodinaut expedition to this mining track (which is still visible on the seabed) to study the long-term effects of this physical disturbance on the sediment and its benthic fauna. Samples taken of the superficial sediment revealed that its physical and chemical properties had not shown any recovery since the disturbance made 26 years earlier. On the other hand, the biological activity measured in the track by instruments aboard the crewed submersible bathyscaphe Nautile did not differ from that at a nearby unperturbed site. These data suggest that the benthic fauna and nutrient fluxes at the water–sediment interface have fully recovered. List of abyssal plains See also List of oceanic landforms List of submarine topographical features Oceanic ridge Physical oceanography References Bibliography External links Coastal and oceanic landforms Submarine topography Oceanic plateaus Oceanographical terminology Physical oceanography Aquatic ecology
Abyssal plain
[ "Physics", "Biology" ]
6,499
[ "Aquatic ecology", "Ecosystems", "Applied and interdisciplinary physics", "Physical oceanography" ]
972,846
https://en.wikipedia.org/wiki/CCMP%20%28cryptography%29
Counter Mode Cipher Block Chaining Message Authentication Code Protocol (Counter Mode CBC-MAC Protocol) or CCM mode Protocol (CCMP) is an authenticated encryption protocol designed for Wireless LAN products that implements the standards of the IEEE 802.11i amendment to the original IEEE 802.11 standard. CCMP is a data cryptographic encapsulation mechanism designed for data confidentiality, integrity and authentication. It is based upon the Counter Mode with CBC-MAC (CCM mode) of the Advanced Encryption Standard (AES) standard. It was created to address the vulnerabilities presented by Wired Equivalent Privacy (WEP), a dated, insecure protocol. Technical details CCMP uses CCM, which combines CTR mode for data confidentiality and cipher block chaining message authentication code (CBC-MAC) for authentication and integrity. CCM protects the integrity of both the MPDU data field and selected portions of the IEEE 802.11 MPDU header. CCMP is based on AES processing and uses a 128-bit key and a 128-bit block size. CCMP uses CCM with the following two parameters: M = 8; indicating that the MIC is 8 octets (eight bytes). L = 2; indicating that the Length field is 2 octets. A CCMP Medium Access Control Protocol Data Unit (MPDU) comprises five sections. The first is the MAC header which contains the destination and source address of the data packet. The second is the CCMP header which is composed of 8 octets and consists of the packet number (PN), the Ext IV, and the key ID. The packet number is a 48-bit number stored across 6 octets. The PN codes are the first two and last four octets of the CCMP header and are incremented for each subsequent packet. Between the PN codes are a reserved octet and a Key ID octet. The Key ID octet contains the Ext IV (bit 5), Key ID (bits 6–7), and a reserved subfield (bits 0–4). CCMP uses these values to encrypt the data unit and the MIC. The third section is the data unit which is the data being sent in the packet. The fourth is the message integrity code (MIC) which protects the integrity and authenticity of the packet. Finally, the fifth is the frame check sequence (FCS) which is used for error detection. Of these sections only the data unit and MIC are encrypted. Security CCMP is the standard encryption protocol for use with the Wi-Fi Protected Access II (WPA2) standard and is much more secure than the Wired Equivalent Privacy (WEP) protocol and Temporal Key Integrity Protocol (TKIP) of Wi-Fi Protected Access (WPA). CCMP provides the following security services: Data confidentiality; ensures only authorized parties can access the information Authentication; provides proof of genuineness of the user Access control in conjunction with layer management Because CCMP is a block cipher mode using a 128-bit key, it is secure against attacks requiring up to 2^64 steps of operation. Generic meet-in-the-middle attacks do exist and can be used to limit the theoretical strength of the key to 2^(n/2) operations (where n is the number of bits in the key). Known attacks References Cryptographic protocols Wireless networking IEEE 802.11 Secure communication Key management
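A minimal Python sketch of the CCMP header layout described above may help make the field positions concrete. The layout assumed here follows the description given in the article (and IEEE 802.11i); the example bytes are hypothetical, and real frames should be parsed with a maintained 802.11 library rather than hand-rolled code.

```python
def parse_ccmp_header(hdr: bytes) -> dict:
    """Split an 8-octet CCMP header into packet number, Ext IV and Key ID.

    Assumed layout: PN0, PN1, reserved octet, Key ID octet, then PN2..PN5.
    PN0 is the least significant octet of the 48-bit packet number; the
    Key ID octet carries Ext IV in bit 5 and Key ID in bits 6-7.
    """
    if len(hdr) != 8:
        raise ValueError("CCMP header is exactly 8 octets")
    pn0, pn1, _reserved, key_octet, pn2, pn3, pn4, pn5 = hdr
    pn = (pn5 << 40) | (pn4 << 32) | (pn3 << 24) | (pn2 << 16) | (pn1 << 8) | pn0
    return {
        "packet_number": pn,
        "ext_iv": (key_octet >> 5) & 0x1,
        "key_id": (key_octet >> 6) & 0x3,
    }

# Hypothetical header with PN = 1, Ext IV set, Key ID 0.
print(parse_ccmp_header(bytes([0x01, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x00])))
```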
CCMP (cryptography)
[ "Technology", "Engineering" ]
701
[ "Wireless networking", "Computer networks engineering" ]
972,878
https://en.wikipedia.org/wiki/PC%20Weasel%202000
PC Weasel 2000 was a line of graphics cards designed by Middle Digital Incorporated (Herb Peyerl and Jonathan Levine) which output to a serial port instead of a monitor. This allows servers using PC hardware with conventional BIOSes or operating systems lacking serial capability to be administered remotely. The product was introduced in 1999 and began shipping January 20, 2000. PCI and ISA models were available. The PC Weasel is also connected to a PS/2-compatible keyboard port and effectively emulates a keyboard by converting characters obtained from the serial port to keyboard scancodes. The card may also be connected to the reset pins of the motherboard and reboot the machine on command. The PC Weasel is an open-source product. Every purchaser of the board is granted a license for the card's onboard microcontroller. The microcode, stored in flash memory, is modifiable using a gcc-based toolchain. See also Lights out management (LOM) Network Console on Acid (NCA) coreboot External links PC Weasel 2000 web site at archive.org release announcement at archive.org Graphics cards System administration Out-of-band management
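To make the serial-to-keyboard conversion described above concrete, here is a deliberately tiny Python sketch of the idea: map an incoming serial character to a PS/2 Scan Code Set 2 make code followed by its break sequence. The table covers only a few keys, the values are standard Set 2 codes rather than anything taken from the PC Weasel's actual firmware, and real firmware must also handle shifted characters, key repeat, and host commands.

```python
# Tiny illustration of serial-character -> PS/2 Scan Code Set 2 conversion.
# Values are standard Set 2 make codes; this is not the PC Weasel's firmware.
SET2_MAKE = {
    "a": 0x1C,
    "s": 0x1B,
    "\r": 0x5A,    # Enter
    "\x1b": 0x76,  # Escape
}

def char_to_scancodes(ch: str) -> list[int]:
    """Return the make code, then the break sequence (0xF0 prefix + make code)."""
    make = SET2_MAKE[ch]          # raises KeyError for characters not in the toy table
    return [make, 0xF0, make]

print([hex(b) for b in char_to_scancodes("a")])  # ['0x1c', '0xf0', '0x1c']
```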
PC Weasel 2000
[ "Technology" ]
235
[ "Information systems", "System administration" ]
972,944
https://en.wikipedia.org/wiki/List%20of%20sensors
This is a list of sensors sorted by sensor type. Acoustic, sound, vibration Acoustic radiometer Geophone Hydrophone Microphone Pickup Seismometer Sound locator Automotive Air flow meter Air–fuel ratio meter Blind spot monitor Crankshaft position sensor (CKP) Curb feeler Defect detector Engine coolant temperature sensor Hall effect sensor Wheel speed sensor Airbag sensors Automatic transmission speed sensor Brake fluid pressure sensor Camshaft position sensor (CMP) Cylinder Head Temperature gauge Engine crankcase pressure sensor Exhaust gas temperature sensor Fuel level sensor Fuel pressure sensor Knock sensor Light sensor MAP sensor Mass airflow sensor Oil level sensor Oil pressure sensor Omniview technology Oxygen sensor (O2) Parking sensor Radar gun Radar sensor Speed sensor Throttle position sensor Tire pressure sensor Torque sensor Transmission fluid temperature sensor Turbine speed sensor Variable reluctance sensor Vehicle speed sensor Water-in-fuel sensor Wheel speed sensor ABS sensors Chemical Breathalyzer Carbon dioxide sensor Carbon monoxide detector Catalytic bead sensor Chemical field-effect transistor Chemiresistor Electrochemical gas sensor Electronic nose Electrolyte–insulator–semiconductor sensor Energy-dispersive X-ray spectroscopy Fluorescent chloride sensors Holographic sensor Hydrocarbon dew point analyzer Hydrogen sensor Hydrogen sulfide sensor Infrared point sensor Ion-selective electrode ISFET Nondispersive infrared sensor Microwave chemistry sensor Morphix Chameleon Nitrogen oxide sensor Nondispersive infrared sensor Olfactometer Optode Oxygen sensor Ozone monitor Pellistor pH glass electrode Potentiometric sensor Redox electrode Smoke detector Zinc oxide nanorod sensor Electric current, electric potential, magnetic, radio Current sensor Daly detector Electroscope Electron multiplier Faraday cup Galvanometer Hall effect sensor Hall probe Magnetic anomaly detector Magnetometer Magnetoresistance MEMS magnetic field sensor Metal detector Planar Hall sensor Radio direction finder Test light Voltage detector Environment, weather, moisture, humidity Actinometer Air pollution sensor Bedwetting alarm Ceilometer Dew warning Electrochemical gas sensor Fish counter Frequency domain sensor Gas detector Hook gauge evaporimeter Humistor Hygrometer Leaf sensor Lysimeter Pyranometer Pyrgeometer Psychrometer Rain gauge Rain sensor Seismometer SNOTEL Snow gauge Soil moisture sensor Stream gauge Tide gauge Weather radar Flow, fluid velocity Air flow meter Anemometer Flow sensor Gas meter Mass flow sensor Water meter Ionizing radiation, subatomic particles Bubble chamber Cloud chamber Geiger counter Geiger–Müller tube Ionization chamber Gaseous ionization detectors Neutron detection Particle detector Proportional counter Scintillator Scintillation counter Semiconductor detector Thermoluminescent dosimeter Wire chamber Navigation instruments Airspeed indicator Altimeter Attitude indicator Depth gauge Fluxgate compass Gyroscope Inertial navigation system Inertial reference unit Machmeter Magnetic compass MHD sensor Ring laser gyroscope Sextant Turn coordinator Variometer Vibrating structure gyroscope Yaw-rate sensor Position, angle, displacement, distance, speed, acceleration Accelerometer Auxanometer Capacitive displacement sensor Capacitive sensing Displacement sensor (general article) Flex sensor Free fall sensor Gravimeter Gyroscopic sensor Impact sensor Inclinometer Incremental encoder Integrated circuit piezoelectric sensor Laser rangefinder Laser surface velocimeter LIDAR Linear 
encoder Linear variable differential transformer (LVDT) Liquid capacitive inclinometers Odometer Photoelectric sensor Piezoelectric accelerometer Position sensor Position sensitive device Angular rate sensor Rotary encoder Rotary variable differential transformer Selsyn Shock detector Shock data logger Sudden Motion Sensor Tilt sensor Tachometer Ultrasonic thickness gauge Ultra-wideband radar Variable reluctance sensor Velocity receiver Magnetic sensor Optical, light, imaging, photon Charge-coupled device CMOS sensor Angle–sensitive pixel Colorimeter Contact image sensor Electro-optical sensor Flame detector Infra-red sensor Kinetic inductance detector LED as light sensor Light-addressable potentiometric sensor Nichols radiometer Fiber optic sensors Optical position sensor Thermopile laser sensors Photodetector Photodiode Photomultiplier Photomultiplier tube Phototransistor Photoelectric sensor Photoionization detector Photomultiplier Photoresistor Photoswitch Phototube Scintillometer Shack–Hartmann wavefront sensor Single-photon avalanche diode Superconducting nanowire single-photon detector Transition-edge sensor Visible Light Photon Counter Wavefront sensor Pressure Barograph Barometer Boost gauge Bourdon gauge Hot filament ionization gauge Ionization gauge McLeod gauge Oscillating U-tube Permanent downhole gauge Piezometer Pirani gauge Pressure sensor Pressure gauge Tactile sensor Time pressure gauge Force, density, level Bhangmeter Hydrometer Force gauge and Force Sensor Level sensor Load cell Magnetic level gauge Nuclear density gauge Piezocapacitive pressure sensor Piezoelectric sensor Strain gauge Torque sensor Viscometer Thermal, heat, temperature Bolometer Bimetallic strip Calorimeter Exhaust gas temperature gauge Flame detection Gardon gauge Golay cell Heat flux sensor Infrared thermometer Microbolometer Microwave radiometer Net radiometer Quartz thermometer Resistance thermometer Silicon bandgap temperature sensor Special sensor microwave/imager Temperature gauge Thermistor Thermocouple Thermometer Phosphor thermometry Pyrometer Proximity, presence Alarm sensor Doppler radar Motion detector Occupancy sensor Proximity sensor Passive infrared sensor Reed switch Stud finder Triangulation sensor Touch switch Wired glove Sensor technology Active pixel sensor Back-illuminated sensor BioFET Biochip Biosensor Capacitance probe Capacitive sensing Catadioptric sensor Carbon paste electrode Digital sensors Displacement receiver Electromechanical film Electro-optical sensor Electrochemical fatigue crack sensor Fabry–Pérot interferometer Fiber Bragg grating Fisheries acoustics Image sensor Image sensor format Inductive sensor Intelligent sensor Lab-on-a-chip Leaf sensor Machine vision Microelectromechanical systems MOSFET Photoelasticity Quantum sensor Radar Ground-penetrating radar Synthetic aperture radar Radar tracker Stretch sensor Sensor array Sensor fusion Sensor grid Sensor node Soft sensor Sonar Staring array Tapered element oscillating microbalance (TEOM) Transducer Ultrasonic sensor Video sensor Visual sensor network Wheatstone bridge Wireless sensor network Through-beam edge sensor Speed sensor Speed sensors are machines used to detect the speed of an object, usually a transport vehicle. They include: Wheel speed sensors Speedometers Pitometer logs Pitot tubes Airspeed indicators Piezo sensors (e.g. 
in a road surface) LIDAR Ground speed radar Doppler radar ANPR (where vehicles are timed over a fixed distance) Laser surface velocimeters for moving surfaces Others Actigraphy Air pollution sensor Analog image processing Atomic force microscopy Atomic Gravitational Wave Interferometric Sensor Attitude control (spacecraft): Horizon sensor, Earth sensor, Moon sensor, Satellite Sensor, Sun sensor Catadioptric sensor Chemoreceptor Compressive sensing Cryogenic particle detectors Dew warning Diffusion tensor imaging Digital holography Electronic tongue Fine Guidance Sensor Flat panel detector Functional magnetic resonance imaging Glass break detector Heartbeat sensor Hyperspectral sensors IRIS (Biosensor), Interferometric Reflectance Imaging Sensor Laser beam profiler Littoral Airborne Sensor/Hyperspectral LORROS Millimeter wave scanner Magnetic resonance imaging Moire deflectometry Molecular sensor Nanosensor Nano-tetherball Sensor Omnidirectional camera Organoleptic sensors Optical coherence tomography Phase unwrapping techniques Polygraph Truth Detection Positron emission tomography Push broom scanner Quantization (signal processing) Range imaging Scanning SQUID microscope Single-Photon Emission Computed Tomography (SPECT) Smartdust SQUID, Superconducting quantum interference device SSIES, Special Sensors-Ions, Electrons, and Scintillation thermal plasma analysis package SSMIS, Special Sensor Microwave Imager / Sounder Structured-light 3D scanner Sun sensor, Attitude control (spacecraft) Superconducting nanowire single-photon detector Thin-film thickness monitor Time-of-flight camera TriDAR, Triangulation and LIDAR Automated Rendezvous and Docking Unattended Ground Sensors References Global List of Sensor Manufacturers List of commercial sensor manufacturers from around the world Electrical components Technology-related lists
List of sensors
[ "Technology", "Engineering" ]
1,676
[ "Electrical components", "Measuring instruments", "Electrical engineering", "Sensors", "Components" ]
973,032
https://en.wikipedia.org/wiki/International%20Committee%20on%20Anthropogenic%20Soils
The International Committee on Anthropogenic Soils (ICOMANTH) defines its mission as follows: "ICOMANTH is charged with defining appropriate classes in soil taxonomy for soils that have their major properties derived from human activities. The committee should establish which criteria significantly reflect human activities, or when a soil's properties are dominantly the result of human activities." References External links ICOMANTH home page Land management Environmental soil science Pedology
International Committee on Anthropogenic Soils
[ "Environmental_science" ]
91
[ "Environmental soil science" ]
973,095
https://en.wikipedia.org/wiki/Auxology
Auxology (from Greek auxō or auxanō, 'grow', and -logia) is a meta-term covering the study of all aspects of human physical growth (though growth is also a fundamental concern of biology). Auxology is a multi-disciplinary science involving health sciences/medicine (pediatrics, general practice, endocrinology, neuroendocrinology, physiology, epidemiology), and to a lesser extent: nutrition science, genetics, anthropology, anthropometry, ergonomics, history, economic history, economics, socio-economics, sociology, public health, and psychology, among others. History of auxology "Ancient Babylonians and Egyptians left some writings on child growth and variation in height between ethnic groups. In the late 18th century, scattered documents of child growth started to appear in the scientific literature, the studies of Jamberts in 1754 and the annual measurements of the son of Montbeillard published by Buffon in 1777 being the most cited ones [1]. Louis René Villermé (1829) was the first to realize that growth and adult height of an individual depend on the country's socio-economic situation. In the 19th century, the number of growth studies rapidly increased, with increasing interest also in growth velocity [2]. Günther documented monthly height increments in a group of 33 boys of various ages [3]. Kotelmann [4] first noted the adolescent growth spurt. In fact, the adolescent growth spurt appears to be a novel achievement in the history of human growth and the amount and intensity of the spurt seems to be greatest in tall and affluent populations [5]. By the beginning of the 20th century, national growth tables were published for most European nations with data for height, weight, and attempts to relate weight and height, though none of these were references in the proper sense of the word as the data were usually derived from small and unrepresentative samples. After the 1930s X-ray imaging of hand and wrist became popular for determining bone age. Current auxological knowledge is based on the large national studies performed in the 1950s, 1960s and the 1970s, many of them inaugurated by James Tanner [1]. In the late 1970s a new school of anthropometric history [6] emerged among historians and economists. The main aim of this school was to evaluate secular changes in conscript height during the last 100–200 years and to associate them with socio-economic changes and political events in the different countries [7]. In the 1980s and the 1990s new mathematical approaches have been added of which the LMS method has strongly been recommended for constructing modern growth reference tables [8,9]: M stands for mean, S stands for a scaling parameter, and L stands for the Box–Cox power that is required to transform the skewed data to normality. Meanwhile, many national and international growth references are based on this technology. And in view of the general idea of growth and adult height being a mirror of nutritional status, health and wealth [10], these techniques have generally been accepted for routine screening programs in Public Health. Anthropometry has also been considered essential for security purposes, for the usability of industrial products, and it has become routine for car and clothing industries, for furniture, housing, and many other aspects of design in the modern environment. Growth is defined as an increase in size over time. But the rigid metric of physical time is not directly related to the tempo at which an organism develops, matures and ages. 
Calendar time differs in its meaning in a fast maturing and in a slow maturing organism. Fast maturing children appear tall and "older" than their calendar age suggests, late maturers appear "too young" and often short even though both may later reach the same adult size. Whereas metric scales exist for height, weight and other amplitude parameters, there are no continuous scales for maturation and developmental tempo. Instead we are used to work with substitutes like the 5-step Tanner scale for staging puberty, and age equivalents for describing bone..." is an excerpt taken from Human Growth and Development, edited by Borms, J., R. Hauspie, A. Sand, C. Susanne, and M. Hebbelinck. From the section quoted above we can see that ancient cultures, such as the Babylonians and Egyptians, left writings and indicators of growth from childhood into adulthood, though it would not be until the later part of the 1700s that the subject appeared in scientific literature, in the light of the Age of Enlightenment, a movement of philosophical and scientific advancement and understanding that dominated the Western world in the 18th century. As the ideals of science and mathematics gained respect, figures such as Louis-René Villermé, a physician and economist, began to take an interest and to recognize that an individual's growth into adulthood was shaped by their socio-economic situation and status. From there the field continued to grow at a rapid rate. Contemporaries followed, and interest extended from growth charts that recorded growth velocity to more medically oriented public-health applications, in which individual growth and health are tracked against established standards. In relation to anthropology As biological anthropology is a sub-field of anthropology that provides insight into the biological/physical perspective of human beings and our ancestors, one can easily see how auxology relates to the anthropological fields. This can be seen in the study of physical human development and growth, in the slight sexual dimorphism within humans, and in the maturation of the body, such as the physical change from childhood into adulthood. Auxology can also be used when comparing remains, such as those of Neanderthals, Homo habilis and Australopithecus afarensis, with other members of the Hominidae. Notable auxologists Joerg Baten (economist, anthropometric historian) Barry Bogin (anthropologist) Noel Cameron (pediatrician, anthropologist) J. W. Drukker (economist, historian, ergonomist) Stanley Engerman (economist) Robert Fogel (economist) Theo Gasser (statistician, human biologist) Michael Healy (statistician) Michael Hermanussen (pediatrician, human biologist) Francis E. Johnston (anthropologist) John Komlos (economist, anthropometric historian) Gregory Livshits (human biologist) Robert Margo (economist) Alex F. Roche (pediatrician) Lawrence M. Schell (anthropologist) Nevin Scrimshaw (nutritionist) Anne Sheehy (human biologist) Richard Steckel (economist, anthropometric historian) Pak Sunyoung (anthropologist) James M. 
Tanner (pediatrician) Vincent Tassenaar (historian) Louis-René Villermé (Economist, Physician) Lucio Vinicius (anthropologist, human biologist) Krishna Kumar Choudhary (Public Health scholar, India) Florencio Escardó Garrahan See also Anthropometric history Human biology Human development (biology) Human height Human weight Human variability Malnutrition Nature versus nurture Population health Quality of life Social determinants of health and Social epidemiology Socioeconomics Standard of living Rod Usher Physical Anthropology References External links International Association for Human Auxology Tall Tales: New Approaches to the Standard of Living (Oberlin Alumni Magazine) Human physiology Human development Human height Nutrition
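The LMS method mentioned in the quoted history above reduces a raw measurement to a standard-deviation (z) score using three age- and sex-specific reference parameters: the Box–Cox power L, the median M, and the coefficient of variation S. A minimal Python sketch of that calculation follows; the reference values in the example are invented for illustration and are not taken from any published growth standard.

```python
import math

def lms_zscore(measurement: float, L: float, M: float, S: float) -> float:
    """Cole's LMS transformation: z = ((x/M)**L - 1) / (L*S), or ln(x/M)/S when L == 0."""
    if L != 0:
        return ((measurement / M) ** L - 1.0) / (L * S)
    return math.log(measurement / M) / S

# Hypothetical reference values (not from any real growth chart):
# a height of 118 cm against a reference median of 115 cm.
print(round(lms_zscore(measurement=118.0, L=1.0, M=115.0, S=0.04), 2))  # ~0.65
```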
Auxology
[ "Biology" ]
1,517
[ "Behavioural sciences", "Behavior", "Human development" ]
973,198
https://en.wikipedia.org/wiki/Federal%20Analogue%20Act
The Federal Analogue Act is a section of the United States Controlled Substances Act, passed in 1986, which allows any chemical "substantially similar" to a controlled substance listed in Schedule I or II to be treated as if it were listed in Schedule I, but only if intended for human consumption. These similar substances are often called designer drugs. The law's broad reach has been used to successfully prosecute possession of chemicals openly sold as dietary supplements and naturally contained in foods (e.g., the possession of phenethylamine, a compound found in chocolate, has been successfully prosecuted based on its "substantial similarity" to the controlled substance methamphetamine). The law's constitutionality has been questioned by now Supreme Court Justice Neil Gorsuch on the basis of the vagueness doctrine. Definition (32) (A) Except as provided in subparagraph (C), the term controlled substance analogue means a substance - (i) the chemical structure of which is substantially similar to the chemical structure of a controlled substance in schedule I or II; (ii) which has a stimulant, depressant, or hallucinogenic effect on the central nervous system that is substantially similar to or greater than the stimulant, depressant, or hallucinogenic effect on the central nervous system of a controlled substance in schedule I or II; or (iii) with respect to a particular person, which such person represents or intends to have a stimulant, depressant, or hallucinogenic effect on the central nervous system that is substantially similar to or greater than the stimulant, depressant, or hallucinogenic effect on the central nervous system of a controlled substance in schedule I or II. (B) The designation of gamma butyrolactone or any other chemical as a listed chemical pursuant to paragraph (34) or (35) does not preclude a finding pursuant to subparagraph (A) of this paragraph that the chemical is a controlled substance analogue. (C) Such term does not include - (i) a controlled substance; (ii) any substance for which there is an approved new drug application; (iii) with respect to a particular person any substance, if an exemption is in effect for investigational use, for that person, under section 355 of this title to the extent conduct with respect to such substance is pursuant to such exemption; or (iv) any substance to the extent not intended for human consumption before such an exemption takes effect with respect to that substance. Case law United States v. Forbes United States v. Forbes, 806 F. Supp. 232 (D. Colo. 1992), a Colorado district court case, considered the question of whether the drug alphaethyltryptamine (AET) was a controlled substance analogue in the United States. The controlled drugs to which it was alleged that AET was substantially similar were the tryptamine analogues dimethyltryptamine (DMT) and diethyltryptamine (DET). In this case, the court ruled that AET was not substantially similar to DMT or DET, on the grounds that (i) AET is a primary amine while DMT and DET are tertiary amines, (ii) AET cannot be synthesized from either DMT or DET, and (iii) the hallucinogenic or stimulant effects of AET are not substantially similar to the effects of DMT or DET. 
Furthermore, the court ruled that the definition of controlled substance analogue given in the Federal Analogue Act was unconstitutionally vague, in that "Because the definition of 'analogue' as applied here provides neither fair warning nor effective safeguards against arbitrary enforcement, it is void for vagueness." The common law principle that the people should have the right to know what the law is means that the wording of laws should be sufficiently clear and precise that it is possible to give a definitive answer as to whether a particular course of action is legal or illegal. However, despite this ruling the Federal Analogue Act was not revised, and instead AET was specifically scheduled to avoid any future discrepancies. As a district court decision, this case is not binding precedent. United States v. Washam United States v. Washam, 312 F.3d 926, 930 (8th Cir. 2002), was an appellate decision of the Eighth Circuit in which it was considered whether the drug 1,4-butanediol (1,4-B) was a controlled substance analogue in the United States. The controlled drug to which it was alleged 1,4-B was substantially similar was gamma-hydroxybutyrate (GHB). In this case the court ruled that 1,4-B was substantially similar to GHB, on the grounds that (i) "1,4-Butanediol and GHB are both linear compounds containing four carbons and that there is only one difference between the substances on one side of their molecules", and, more importantly, (ii) that 1,4-B is metabolized into GHB by the body and so produces substantially similar physiological effects. It was raised in defense that 1,4-B and GHB contain different functional groups, but these were not held to be grounds to consider 1,4-B not substantially similar to GHB. It was also raised in the case of Washam that the Federal Analogue Act was unconstitutionally vague, but in this case the court rejected this argument on the grounds that the defendant's actions in concealing her activities and lying to DEA agents showed that she knew her actions were illegal, and furthermore that "…a person of common intelligence has sufficient notice under the statute that 1,4-Butanediol is a controlled substance analogue." The court in Washam construed the Analogue Act to require parts A(i) and either A(ii) or A(iii), and concluded the Act was constitutionally permissible upon this construction. As a result of Washam, the Federal Analogue Act has been upheld (at least for the states and territories comprising the Eighth Circuit) and can be considered valid at the present time. However, a jury in Federal District Court in Chicago in a different case found 1,4-butanediol not to be an analog of GHB under federal law, and the Seventh Circuit Court of Appeals upheld that verdict, so 1,4-butanediol is currently not a controlled substance analogue in that circuit. See also DEA list of chemicals, aka the "DEA Watchlist" Operation Web Tryp References External links Section 813. Treatment of Controlled Substance Analogues. Drug Enforcement Administration Section 802. Definitions. Drug Enforcement Administration Appendix A - Controlled Substance Analogue Enforcement Act of 1986 - P.L. 99-570. Subtitle E, Title I. on Erowid United States federal controlled substances legislation History of drug control in the United States Regulation in the United States 1986 in American law 99th United States Congress Regulation of chemicals
Federal Analogue Act
[ "Chemistry" ]
1,455
[]
973,239
https://en.wikipedia.org/wiki/Ethion
Ethion (C9H22O4P2S4) is an organophosphate insecticide. It is known to affect the neural enzyme acetylcholinesterase and disrupt its function. History Ethion was first registered in the US as an insecticide in the 1950s. Annual usage of ethion since then has varied depending on overall crop yields and weather conditions. For example, 1999 was a very dry year; since the drought reduced yields, the use of ethion was less economically rewarding. Since 1998, risk assessment studies have been conducted by (among others) the EPA (United States Environmental Protection Agency). Risk assessments for ethion were presented at a July 14, 1999 briefing with stakeholders in Florida, which was followed by an opportunity for public comment on risk management for this pesticide. Regulatory review Ethion was one of many substances approved for use based on data from Industrial Bio-Test Laboratories, which was later discovered to have engaged in extensive scientific misconduct and fraud, prompting the Food and Agriculture Organization and World Health Organization to recommend ethion's reanalysis in 1982. Synthesis Ethion is produced under controlled pH conditions by reacting dibromomethane with diethyl dithiophosphoric acid in ethanol. Other methods of synthesis include the reaction of methylene bromide and sodium diethyl phosphorodithioate or the reaction of diethyl dithiophosphoric acid and formaldehyde. Reactivity and mechanism Ethion is a small lipophilic molecule. This promotes rapid passive absorption across cell membranes. Thus absorption through skin, lungs, and the gut into the blood occurs via passive diffusion. Ethion is metabolized in the liver via desulfurization, producing the metabolite ethion monoxon. This transformation leads to liver damage. Ethion monoxon is an inhibitor of the neuroenzyme cholinesterase (ChE), which normally facilitates nerve impulse transmission; secondary damage thus occurs in the brain. Because the chemical structure of ethion monoxon is similar to that of other organophosphates, its mechanism of poisoning is thought to be the same. See the figure, "Inhibition of cholinesterase by ethion monoxon." The figure depicts enzyme inhibition as a two-step process. Here, a hydroxyl group (OH) from a serine residue in the active site of ChE is phosphorylated by an organophosphate, causing enzyme inhibition and preventing the serine hydroxyl group from participating in the hydrolysis of the neurotransmitter acetylcholine (ACh). The phosphorylated form of the enzyme is highly stable, and depending on the R and R′ groups attached to phosphorus, this inhibition can be either reversible or irreversible. Metabolism Goats exposed to ethion showed clear distinctions in excretion, absorption half-life and bioavailability. These differences depend on the method of administration. Intravenous injection resulted in a half-life time of 2 hours, while oral administration resulted in a half-life time of 10 hours. Dermal administration led to a half-life time of 85 hours. These differences in half-life times can be correlated with differences in bioavailability. The bioavailability of ethion via oral administration was less than 5%, whereas the bioavailability via dermal administration of ethion was 20%. In a study conducted among rats, it was found that ethion is readily metabolized after oral administration. Rat urine samples contained four to six polar water-soluble ethion metabolites. A study among chickens revealed more about ethion distribution in the body. 
In a representative study, liver, muscle, and fat tissues were examined after 10 days of ethion exposure. In all three cases, ethion or ethion derivatives were present, indicating that it is widely spread in the body. Chicken eggs were also investigated, and it was found that the egg white reaches a steady ethion derivative concentration after four days, while the concentration in yolk was still rising after ten days. In the investigated chickens, about six polar water-soluble metabolites were also found to be present. In a study performed on goats, heart and kidney tissues were investigated after a period of ethion exposure, and in these tissues, ethion-derivatives were found. This study indicates that the highest levels were found in the liver and kidneys, and the lowest levels in fat. Derivatives were also detected in the goats' milk. Biotransformation Biotransformation of ethion occurs in the liver, where it undergoes desulfurisation to form the active metabolite ethion monoxon. The enzyme cytochrome P-450 catalyzes this step. Because it contains an active oxygen, ethion monoxon is an inhibitor of the neuroenzyme cholinesterase (ChE). ChE can dephosphorylate organophosphate, so in the next step of the biotransformation, ethion monoxon is dephosphorylated and ChE is phosphorylated. The subsequent step in the biotransformation process is not yet completely known, yet it is understood that this happens via esterases in the blood and liver (1). Besides the dephosphorylation of ethion monoxon by ChE, it is likely that the ethion monoxon is partially oxidized toward ethion dioxon. After solvent partitioning of urine from rats that had been fed ethion, it became clear that the metabolites found in the urine were 99% dissolved in the aqueous phase. This means that only non-significant levels (<1 %) were present in the organic phase and that the metabolites are very hydrophilic. In a parallel study in goats, radioactive labeled ethion with incorporated 14C was used. After identification of the 14C residues in organs of the goats, such as the liver, heart, kidneys, muscles and fat tissue, it appeared that 0.03 ppm or less of the 14C compounds present was non-metabolized ethion. The metabolites ethion monoxon and ethion dioxon were also not detected in any samples with a substantial threshold (0.005-0.01 ppm). In total, 64% to 75% of the metabolites from the tissues were soluble in methanol. After the addition of a protease, another 17% to 32% were solubilized. In the aqueous phase, at least four different radioactive metabolites were found. However, characterization of these compounds was repeatedly unsuccessful due to their high volatility. One compound was trapped in the kidney and was identified as formaldehyde. This is an indication that the 14C of ethion is used in the formation of natural products. Toxicity Summary of toxicity Exposure to ethion can happen by ingestion, absorption via the skin, and inhalation. Exposure can lead to vomiting, diarrhea, headache, sweating, and confusion. Severe poisoning might lead to fatigue, involuntary muscle contractions, loss of reflexes and slurred speech. In even more severe cases, death will be the result of respiratory failure or cardiac arrest. When being exposed through skin contact, the lowest dose to kill a rat was found to be 150 mg/kg for males and 50 mg/kg for females. The minimum survival time was 6 hours for female rats and 3 hours for male rats, and the maximum time of death was 3 days for females and 7 days for males. 
The LD50 was 245 mg/kg for male rats and 62 mg/kg for female rats. When being exposed through ingestion, 10 mg/kg/day and 2 mg/kg/day showed no histopathological effect on the respiratory tract of rats, nor did 13-week testing on dogs (8.25 mg/kg/day). The LD50 value for pure ethion in rats is 208 mg/kg, and for technical-grade ethion it is 21 to 191 mg/kg. Other reported oral LD50 values are 40 mg/kg in mice and guinea pigs. Furthermore, ethion is very toxic by inhalation: one study of technical-grade ethion found an LC50 of 2.31 mg/m3 in male rats and of 0.45 mg/m3 in female rats. Other data reported a 4-hour LC50 in rats of 0.864 mg/L. Acute toxicity Ethion causes toxic effects following absorption via the skin, ingestion, and inhalation, and may cause burns when skin is exposed to it. According to Extoxnet, any form of exposure could result in the following symptoms: pallor, nausea, vomiting, diarrhea, abdominal cramps, headache, dizziness, eye pain, blurred vision, constriction or dilation of the eye pupils, tears, salivation, sweating, and confusion, which may develop within 12 hours. Severe poisoning may result in distorted coordination, loss of reflexes, slurred speech, fatigue and weakness, tremors of the tongue and eyelids, and involuntary muscle contractions, and can also lead to paralysis and respiratory problems. In more severe cases, ethion poisoning can lead to involuntary discharge of urine or feces, irregular heart beats, psychosis, loss of consciousness, and, in some cases, coma or death. Death is the result of respiratory failure or cardiac arrest. Hypothermia, AV heart blocks and arrhythmias are also found to be possible consequences of ethion poisoning. Ethion may also lead to delayed symptoms like those seen with other organophosphates. Skin exposure In rabbits receiving 250 mg/kg of technical-grade ethion for 21 days, the dermal exposure led to increased cases of erythema and desquamation. It also led to inhibition of brain acetylcholinesterase at 1 mg/kg/day, and the NOAEL was determined to be 0.8 mg/kg/day. In guinea pigs, ethion also led to a slight erythema that cleared in 48 hours, and it was determined that the compound was not a skin sensitizer. In a study determining the LD50 of ethion, 80 male and 60 female adult rats were dermally exposed to ethion dissolved in xylene. The lowest dose to kill a rat was found to be 150 mg/kg for males and 50 mg/kg for females. The minimum survival time was 6 hours for females and 3 hours for males, while the maximum time of death was 3 days for females and 7 days for males. The LD50 was 245 mg/kg for males and 62 mg/kg for females. Skin contact with organophosphates, in general, may cause localized sweating and involuntary muscle contractions. Other studies found the LD50 via the dermal route to be 915 mg/kg in guinea pigs and 890 mg/kg in rabbits. Ethion can also cause slight redness and inflammation to the eye and skin that will clear within 48 hours. It is also known to cause blurred vision, pupil constriction and pain. Ingestion A six-month-old boy experienced shallow respiratory excursions and intercostal retractions after accidentally ingesting 15.7 mg/kg ethion. The symptoms started one hour after ingestion, and were treated. Five hours after ingestion, respiratory arrest occurred and mechanical ventilation was needed for three hours. Follow-up examinations after one week, one month and one year suggested that full recovery was made. 
The same boy also showed tachycardia, frothy saliva (1 hour after ingestion), watery bowel movements (90 minutes after ingestion), increased white blood cell counts in urine, inability to control his head and limbs, occasional twitching, pupils non-reactive to light, purposeless eye movements, and a palpable liver and spleen, and there were some symptoms of paralysis. Testing on rats with 10 mg/kg/day and 2 mg/kg/day showed no histopathological effect on the respiratory tract, nor did 13-week testing on dogs (8.25 mg/kg/day). Oral LD50 values are 208 mg/kg for pure ethion in rats and 21 to 191 mg/kg for technical-grade ethion. Other reported oral LD50 values (for the technical product) are 40 mg/kg in mice and guinea pigs. In a group of six male volunteers, no differences in blood pressure or pulse rate were noted, nor were any seen in mice or dogs. Diarrhea did occur in mice orally exposed to ethion, and severe signs of neurotoxicity were also present. The effects were consistent with cholinergic overstimulation of the gastrointestinal tract. No hematological effects were reported in an experiment with six male volunteers, nor in rats or dogs. The volunteers did not show differences in muscle tone after intermediate-duration oral exposure, nor did the test animals at other exposure levels. It is, however, known that ethion can result in muscle tremors and fasciculations. The animal-testing studies on rats and dogs showed no effect on the kidneys and liver, but a different study showed an increased incidence of orange-colored urine. The animal-testing studies on rats and dogs also did not show dermal or ocular effects. Rabbits receiving 2.5 mg/kg/day of ethion showed a decrease in body weight, but no effects were seen at 0.6 mg/kg/day. The decreased body weight, combined with reduced food consumption, was also observed in rabbits receiving 9.6 mg/kg/day. Male and female dogs receiving 0.71 mg/kg/day did not show a change in body weight, but dogs receiving 6.9 and 8.25 mg/kg/day showed reduced food consumption and reduced body weight. In a study with human volunteers, a decrease of plasma cholinesterase was observed during 0.075 mg/kg/day (16% decrease), 0.1 mg/kg/day (23% decrease) and 0.15 mg/kg/day (31% decrease) treatment periods. This partially recovered after 7 days, and fully recovered after 12 days. No effect on erythrocyte acetylcholinesterase was observed, nor were there signs of adverse neurological effects. Another study showed severe neurological effects after a single oral exposure in rats. For male rats, salivation, tremors, nose bleeding, urination, diarrhea, and convulsions occurred at 100 mg/kg, and for female rats, at 10 mg/kg. In a study with albino rats, it was observed that brain acetylcholinesterase was inhibited by 22%, erythrocyte acetylcholinesterase by 87%, and plasma cholinesterase by 100% in male rats after being fed 9 mg/kg/day of ethion for 93 days. After 14 days of recovery, plasma cholinesterase recovered completely, and erythrocyte acetylcholinesterase recovered 63%. There were no observed effects at 1 mg/kg/day. In another study in rats, researchers observed no effects on erythrocyte acetylcholinesterase at 0, 0.1, 0.2, and 2 mg/kg/day of ethion. In a 90-day study on dogs, in which the males received 6.9 mg/kg/day and the females 8.25 mg/kg/day, ataxia, emesis, miosis, and tremors were observed. Brain and erythrocyte acetylcholinesterase were inhibited (61-64% and 93-94%, respectively). At 0.71 mg/kg/day in male dogs, the reduction in brain acetylcholinesterase was 23%. 
There were no observed effects at 0.06 and 0.01 mg/kg/day. Based on these findings, a minimal risk level of 0.002 mg/kg/day for oral exposure of acute and intermediate duration was established. Researchers also calculated a chronic-duration minimal risk level of 0.0004 mg/kg/day. In one study, in which rats received a maximum of 1.25 mg/kg/day, no effects on reproduction were observed. In a study on pregnant rats receiving 2.5 mg/kg/day, it was observed that the fetuses had an increased incidence of delayed ossification of the pubes. Another study found that the fetuses of pregnant rabbits receiving 9.6 mg/kg/day had an increased incidence of fused sternal centers. Inhalation Ethion is highly toxic, and can be lethal, via inhalation. One study, looking at technical-grade ethion, found an LC50 of 2.31 mg/m3 in male rats and of 0.45 mg/m3 in female rats. Other data reported a 4-hour LC50 in rats of 0.864 mg/L. As stated earlier, ethion can also lead to pupillary constriction, muscle cramps, excessive salivation, sweating, nausea, dizziness, labored breathing, convulsions, and unconsciousness. A sensation of tightness in the chest and rhinorrhea are also very common after inhalation. Carcinogenic effects There are no indications that ethion is carcinogenic in rats and mice. When rats and mice were fed ethion for two years, the animals did not develop cancer any faster than the control group of animals that were not given ethion. Ethion has not been classified for carcinogenicity by the United States Department of Health and Human Services (DHHS), the International Agency for Research on Cancer (IARC) or the EPA. Treatment When orally exposed, gastric lavage shortly after exposure can be used to reduce the peak absorption. It is also suspected that treatment with activated charcoal could be effective in reducing peak absorption. Safety guidelines also encourage inducing vomiting to reduce oral exposure, if the victim is still conscious. In case of skin exposure, it is advised to wash and rinse with plenty of water and soap to reduce exposure. In case of inhalation, fresh air is advised to reduce exposure. Ethion exposure itself is treated in the same way as exposure to other organophosphates. The main danger lies in respiratory problems; if symptoms are present, then artificial respiration with an endotracheal tube is used as a treatment. The effect of ethion on muscles or nerves is counteracted with atropine. Pralidoxime can be used to act against organophosphate poisoning; it must be given as soon as possible after the ethion poisoning, because its efficacy is reduced by the chemical change ("aging") of the ethion-enzyme complex that occurs in the body over time. Effects on animals Ethion has an influence on the environment as it is persistent and thus might accumulate in plants and animals. Ethion is very toxic to songbirds; the LD50 in red-winged blackbirds is 45 mg/kg. However, it is moderately toxic to medium-sized birds like the bobwhite quail (LD50 is 128.8 mg/kg) and starlings (LD50 is greater than 304 mg/kg). For larger upland game birds (like the ring-necked pheasant) and waterfowl (like the mallard duck), ethion ranges from barely toxic to nontoxic. Ethion, however, is very toxic to aquatic organisms like freshwater and marine fish, and is extremely toxic to freshwater invertebrates, with LC50 values ranging from 0.056 μg/L to 0.0077 mg/L. The LC50 for marine and estuarine invertebrates is 0.04 to 0.05 mg/L. 
In a chronic toxicity study, rats were fed 0, 0.1, 0.2 or 2 mg/kg/day of ethion for 18 months, and no severe toxic effects were observed. The only significant change was a decrease of cholinesterase levels in the group with the highest dose. Therefore, the NOEL of this study was 0.2 mg/kg/day. The oral LD50 for pure ethion in rats is 208 mg/kg. The dermal LD50 is 62 mg/kg in rats, 890 mg/kg in rabbits, and 915 mg/kg in guinea pigs. For rats, the 4-hour inhalation LC50 is 0.864 mg/L of ethion. Detection Methods Insecticides such as ethion can be detected by using a variety of chemical analysis methods. Some analysis methods, however, are not specific for this substance. In a recently introduced method, the interaction of silver nanoparticles (AgNPs) with ethion results in the quenching of the resonance Rayleigh scattering (RRS) intensity. The change in RRS was shown to be linearly correlated to the concentration of ethion (range: 10.0–900 mg/L). Another advantage of this method over general detection methods is that ethion can be measured in just 3 minutes with no requirement for pretreatment of the sample. From interference tests, it was shown that this method achieves good selectivity for ethion. The limit of detection (LOD) was 3.7 mg/L and the limit of quantification (LOQ) was 11.0 mg/L. Relative standard deviations (RSDs) for samples containing 15.0 and 60.0 mg/L of ethion in water were 4.1% and 0.2%, respectively. Microbial degradation Ethion remains a major environmental contaminant in Australia, among other locations, because of its former usage in agriculture. However, there are some microbes that can convert ethion into less toxic compounds. Some Pseudomonas and Azospirillum bacteria were shown to degrade ethion when cultivated in minimal salts medium, where ethion was the only source of carbon. Analysis of the compounds present in the medium after bacterial digestion of ethion demonstrated that no abiotic hydrolytic degradation products of ethion (e.g., ethion dioxon or ethion monoxon) were present. The biodigestion of ethion is likely used to support the rapid growth of these bacteria. References External links Acetylcholinesterase inhibitors Organophosphate insecticides Phosphorodithioates Ethyl esters
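The AgNP-based approach described under Detection Methods above amounts to a linear calibration between RRS quenching and ethion concentration, from which unknown samples are read off the fitted line. The sketch below only illustrates that generic calibration workflow; the calibration points, variable names, and the measured value are hypothetical and are not taken from the cited study.

```python
import numpy as np

# Hypothetical calibration points: ethion concentration (mg/L) versus
# RRS quenching signal (arbitrary units). Invented values, for illustration only.
concentration = np.array([10.0, 50.0, 100.0, 300.0, 600.0, 900.0])
quenching = np.array([0.8, 4.1, 8.0, 24.2, 47.9, 72.1])

# Least-squares fit of quenching = slope * concentration + intercept.
slope, intercept = np.polyfit(concentration, quenching, deg=1)

def estimate_concentration(measured_quenching: float) -> float:
    """Invert the calibration line to estimate an unknown ethion concentration."""
    return (measured_quenching - intercept) / slope

if __name__ == "__main__":
    print(f"slope = {slope:.4f} per mg/L, intercept = {intercept:.4f}")
    print(f"estimated concentration: {estimate_concentration(20.0):.1f} mg/L")
```

In practice, figures of merit such as the LOD and LOQ quoted above are typically derived from the scatter of blank or low-level measurements relative to the slope of such a calibration line, rather than from the fit alone.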
Ethion
[ "Chemistry" ]
4,764
[ "Functional groups", "Phosphorodithioates" ]
973,296
https://en.wikipedia.org/wiki/Tong%20Dizhou
Tong Dizhou (May 28, 1902 – March 30, 1979) was a Chinese embryologist known for his contributions to the field of cloning. He was a vice president of the Chinese Academy of Sciences. Biography Born in Yinxian, Zhejiang province, Tong graduated from Fudan University in 1924 with a degree in biology, and received a PhD in zoology in 1930 from the Free University of Brussels. In 1963, Tong inserted the DNA of a male carp into the egg of a female carp and became the first to successfully clone a fish. He is regarded as the father of cloning in China. Tong was also an academician at the Chinese Academy of Sciences and the first director of its Institute of Oceanology from its founding in 1950 until 1978. Tong died on 30 March 1979 at Beijing Hospital in Beijing. References 1902 births 1979 deaths 20th-century Chinese biologists 20th-century Chinese scientists Biologists from Zhejiang Cloning Educators from Ningbo Free University of Brussels (1834–1969) alumni Fudan University alumni Academic staff of Fudan University Members of Academia Sinica Members of the Chinese Academy of Sciences Academic staff of the National Central University Scientists from Ningbo Academic staff of Tongji University Vice Chairpersons of the National Committee of the Chinese People's Political Consultative Conference
Tong Dizhou
[ "Engineering", "Biology" ]
257
[ "Cloning", "Genetic engineering" ]
973,372
https://en.wikipedia.org/wiki/IBM%20Future%20Systems%20project
The Future Systems project (FS) was a research and development project undertaken in IBM in the early 1970s to develop a revolutionary line of computer products, including new software models which would simplify software development by exploiting modern powerful hardware. The new systems were intended to replace the System/370 in the market some time in the late 1970s. There were two key components to FS. The first was the use of a single-level store that allows data stored on secondary storage like disk drives to be referred to within a program as if it was data stored in main memory; variables in the code could point to objects in storage and they would invisibly be loaded into memory, eliminating the need to write code for file handling. The second was to include instructions corresponding to the statements in high-level programming languages, allowing the system to directly run programs without the need for a compiler to convert from the language to machine code. One could, for instance, write a program in a text editor and the machine would be able to run that directly. Combining the two concepts in a single system in a single step proved to be an impossible task. This concern was pointed out from the start by the engineers, but it was ignored by management and project leaders for many reasons. Officially started in the fall of 1971, by 1974 the project was moribund, and formally cancelled in February 1975. The single-level store was implemented in the System/38 and moved to other systems in the lineup after that, but the concept of a machine that directly ran high-level languages has never appeared in an IBM product. History 370 The System/360 was announced in April 1964. Only six months later, IBM began a study project on what trends were taking place in the market and how these should be used in a series of machines that would replace the 360 in the future. One significant change was the introduction of useful integrated circuits (ICs), which would allow the many individual components of the 360 to be replaced with a smaller number of ICs. This would allow a more powerful machine to be built for the same price as existing models. By the mid-1960s, the 360 had become a massive best-seller. This influenced the design of the new machines, as it led to demands that the machines have complete backward compatibility with the 360 series. When the machines were announced in 1970, now known as the System/370, they were essentially 360s using small-scale ICs for logic, much larger amounts of internal memory and other relatively minor changes. A few new instructions were added and others cleaned up, but the system was largely identical from the programmer's point of view. The recession of 1969–1970 led to slowing sales in the 1970-71 time period and much smaller orders for the 370 compared to the rapid uptake of the 360 five years earlier. For the first time in decades, IBM's growth stalled. While some in the company began efforts to introduce useful improvements to the 370 as soon as possible to make them more attractive, others felt nothing short of a complete reimagining of the system would work in the long term. Replacing the 370 Two months before the announcement of the 370s, the company once again started considering changes in the market and how that would influence future designs. In 1965, Gordon Moore predicted that integrated circuits would see exponential growth in the number of circuits they supported, today known as Moore's Law. IBM's Jerrier A. 
Haddad wrote a memo on the topic, suggesting that the cost of logic and memory was going to zero faster than it could be measured. An internal Corporate Technology Committee (CTC) study concluded a 30-fold reduction in the price of memory would take place in the next five years, and another 30 in the five after that. If IBM was going to maintain its sales figures, it was going to have to sell 30 times as much memory in five years, and 900 times as much five years later. Similarly, hard disk cost was expected to fall ten times in the next ten years. To maintain their traditional 15% year-over-year growth, by 1980 they would have to be selling 40 times as much disk space and 3600 times as much memory. In terms of the computer itself, if one followed the progression from the 360 to the 370 and onto some hypothetical System/380, the new machines would be based on large-scale integration and would be dramatically reduced in complexity and cost. There was no way they could sell such a machine at their current pricing, if they tried, another company would introduce far less expensive systems. They could instead produce much more powerful machines at the same price points, but their customers were already underutilizing their existing systems. To provide a reasonable argument to buy a new high-end machine, IBM had to come up with reasons for their customers to need this extra power. Another strategic issue was that while the cost of computing was steadily going down, the costs of programming and operations, being made of personnel costs, were steadily going up. Therefore, the part of the customer's IT budget available for hardware vendors would be significantly reduced in the coming years, and with it the base for IBM revenue. It was imperative that IBM, by addressing the cost of application development and operations in its future products, would at the same time reduce the total cost of IT to the customers and capture a larger portion of that cost. AFS In 1969, Bob O. Evans, president of the IBM System Development Division which developed their largest mainframes, asked Erich Bloch of the IBM Poughkeepsie Lab to consider how the company might use these much cheaper components to build machines that would still retain the company's profits. Bloch, in turn, asked Carl Conti to outline such systems. Having seen the term "future systems" being used, Evans referred to the group as Advanced Future Systems. The group met roughly biweekly. Among the many developments initially studied under AFS, one concept stood out. At the time, the first systems with virtual memory (VM) were emerging, and the seminal Multics project had expanded on this concept as the basis for a single-level store. In this concept, all data in the system is treated as if it is in main memory, and if the data is physically located on secondary storage, the VM system automatically loads it into memory when a program calls for it. Instead of writing code to read and write data in files, the programmer simply told the operating system they would be using certain data, which then appeared as objects in the program's memory and could be manipulated like any other variable. The VM system would ensure that the data was synchronized with storage when needed. This was seen as a particularly useful concept at the time, as the emergence of bubble memory suggested that future systems would not have separate core memory and disk drives, instead everything would be stored in a large amount of bubble memory. 
Physically, systems would be single-level stores, so the idea of having another layer for "files" which represented separate storage made no sense, and having pointers into a single large memory would not only mean one could simply refer to any data as if it were local, but also eliminate the need for separate application programming interfaces (APIs) for the same data depending on whether it was loaded or not. HLS Evans also asked John McPherson at IBM's Armonk headquarters to chair another group to consider how IBM would offer these new designs across their many divisions. A group of twelve participants spread across three divisions produced the "Higher Level System Report", or HLS, which was delivered on 25 February 1970. A key component of HLS was the idea that programming was more expensive than hardware. If a system could greatly reduce the cost of development, then that system could be sold for more money, as the overall cost of operation would still be lower than that of the competition. The basic concept of the System/360 series was that a single instruction set architecture (ISA) would be defined that offered every possible instruction the assembly language programmer might desire. Whereas previous systems might be dedicated to scientific programming or currency calculations and had instructions for that sort of data, the 360 offered instructions for both of these and practically every other task. Individual machines were then designed that targeted particular workloads and ran those instructions directly in hardware and implemented the others in microcode. This meant any machine in the 360 family could run programs from any other, just faster or slower depending on the task. This proved enormously successful, as a customer could buy a low-end machine and always upgrade to a faster one in the future, knowing all their applications would continue to run. Although the 360's instruction set was large, those instructions were still low-level, representing single operations that the central processing unit (CPU) would perform, like "add two numbers" or "compare this number to zero". Programming languages and their links to the operating system allowed users to type in programs using high-level concepts like "open file" or "add these arrays". The compilers would convert these higher-level abstractions into a series of machine code instructions. For HLS, the instructions would instead represent those higher-level tasks directly. That is, there would be instructions in the machine code for "open file". If a program called this instruction, there was no need to convert this into lower-level code; the machine would do this internally in microcode or even a direct hardware implementation. This worked hand-in-hand with the single-level store; to implement HLS, every bit of data in the system was paired with a descriptor, a record that contained the type of the data, its location in memory, and its precision and size. As descriptors could point to arrays and record structures as well, this allowed the machine language to process these as atomic objects. By representing these much higher-level objects directly in the system, user programs would be much smaller and simpler. For instance, to add two arrays of numbers held in files in traditional languages, one would generally open the two files, read one item from each, add them, and then store the value to a third file. In the HLS approach, one would simply open the files and call add. 
The underlying operating system would map these into memory, create descriptors showing them both to be arrays and then the add instruction would see they were arrays and add all the values together. Assigning that value into a newly created array would have the effect of writing it back to storage. A program that might take a page or so of code was now reduced to a few lines. Moreover, as this was the natural language of the machine, the command shell was itself programmable in the same way, there would be no need to "write a program" for a simple task like this, it could be entered as a command. The report concluded: Compatible concerns Until the end of the 1960s, IBM had been making most of its profit on hardware, bundling support software and services along with its systems to make them more attractive. Only hardware carried a price tag, but those prices included an allocation for software and services. Other manufacturers had started to market compatible hardware, mainly peripherals such as tape and disk drives, at a price significantly lower than IBM, thus shrinking the possible base for recovering the cost of software and services. IBM responded by refusing to service machines with these third-party add-ons, which led almost immediately to sweeping anti-trust investigations and many subsequent legal remedies. In 1969, the company was forced to end its bundling arrangements and announced they would sell software products separately. Gene Amdahl saw an opportunity to sell compatible machines without software; the customer could purchase a machine from Amdahl and the operating system and other software from IBM. If IBM refused to sell it to them, they would be breaching their legal obligations. In early 1970, Amdahl quit IBM and announced his intention to introduce System/370 compatible machines that would be faster than IBM's high-end offerings but cost less to purchase and operate. At first, IBM was unconcerned. They made most of their money on software and support, and that money would still be going to them. But to be sure, in early 1971 an internal IBM task force, Project Counterpoint, was formed to study the concept. They concluded that the compatible mainframe business was indeed viable and that the basis for charging for software and services as part of the hardware price would quickly vanish. These events created a desire within the company to find some solution that would once again force the customers to purchase everything from IBM but in a way that would not violate antitrust laws. If IBM followed the suggestions of the HLS report, this would mean that other vendors would have to copy the microcode implementing the huge number of instructions. As this was software, if they did, those companies would be subject to copyright violations. At this point, the AFS/HLS concepts gained new currency within the company. Future Systems In May–June 1971, an international task force convened in Armonk under John Opel, then a vice-president of IBM. Its assignment was to investigate the feasibility of a new line of computers which would take advantage of IBM's technological advantages in order to render obsolete all previous computers - compatible offerings but also IBM's own products. The task force concluded that the project was worth pursuing, but that the key to acceptance in the marketplace was an order-of-magnitude reduction in the costs of developing, operating and maintaining application software. 
The major objectives of the FS project were consequently stated as follows: make obsolete all existing computing equipment, including IBM's, by fully exploiting the newest technologies, diminish greatly the costs and efforts involved in application development and operations, provide a technically sound basis for re-bundling as much as possible of IBM's offerings (hardware, software and services) It was hoped that a new architecture making heavier use of hardware resources, the cost of which was going down, could significantly simplify software development and reduce costs for both IBM and customers. Technology Data access One design principle of FS was a "single-level store" which extended the idea of virtual memory (VM) to cover persistent data. In traditional designs, programs allocate memory to hold values that represent data. This data would normally disappear if the machine is turned off, or the user logs out. In order to have this data available in the future, additional code is needed to write it to permanent storage like a hard drive, and then read it back in the future. To ease these common operations, a number of database engines emerged in the 1960s that allowed programs to hand data to the engine which would then save it and retrieve it again on demand. Another emerging technology at the time was the concept of virtual memory. In early systems, the amount of memory available to a program to allocate for data was limited by the amount of main memory in the system, which might vary based on such factors as it is moved from one machine to another, or if other programs were allocating memory of their own. Virtual memory systems addressed this problem by defining a maximum amount of memory available to all programs, typically some very large number, much more than the physical memory in the machine. In the case that a program asks to allocate memory that is not physically available, a block of main memory is written out to disk, and that space is used for the new allocation. If the program requests data from that offloaded ("paged" or "spooled") memory area, it is invisibly loaded back into main memory again. A single-level store is essentially an expansion of virtual memory to all memory, internal or external. VM systems invisibly write memory to a disk, which is the same task as the file system, so there is no reason it cannot be used as the file system. Instead of programs allocating memory from "main memory" which is then perhaps sent to some other backing store by VM, all memory is immediately allocated by the VM. This means there is no need to save and load data, simply allocating it in memory will have that effect as the VM system writes it out. When the user logs back in, that data, and the programs that were running it as they are also in the same unified memory, are immediately available in the same state they were before. The entire concept of loading and saving is removed, programs, and entire systems, pick up where they were even after a machine restart. This concept had been explored in the Multics system but proved to be very slow, but that was a side-effect of available hardware where the main memory was implemented in core with a far slower backing store in the form of a hard drive or drum. With the introduction of new forms of non-volatile memory, most notably bubble memory, that worked at speeds similar to core but had memory density similar to a hard disk, it appeared a single-level store would no longer have any performance downside. 
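A loose modern analogy for the single-level store described above is a memory-mapped file: the program manipulates persistent data as if it were ordinary memory, and the operating system pages it between RAM and disk transparently. The sketch below is only that analogy, not the FS design itself; the file name and sizes are hypothetical.

```python
import mmap
import os

PATH = "counters.bin"  # hypothetical persistent data file

# Traditional "two-level" style: explicit seek, read, modify, write back.
def increment_with_file_io(index: int) -> None:
    with open(PATH, "r+b") as f:
        f.seek(index)
        value = f.read(1)[0]
        f.seek(index)
        f.write(bytes([(value + 1) % 256]))  # wrap to keep within one byte

# Single-level-store style: the file is mapped into the address space and
# updated like an ordinary in-memory array; the OS handles paging and writeback.
def increment_with_mapping(index: int) -> None:
    with open(PATH, "r+b") as f, mmap.mmap(f.fileno(), 0) as mem:
        mem[index] = (mem[index] + 1) % 256  # looks like a plain array update

if __name__ == "__main__":
    if not os.path.exists(PATH):
        with open(PATH, "wb") as f:
            f.write(bytes(16))  # sixteen zeroed one-byte counters
    increment_with_file_io(3)
    increment_with_mapping(3)
```

The contrast is the point the FS designers were making: in the second function there is no file-handling code in the update itself, only what looks like a memory reference.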
Future Systems planned on making the single-level store the key concept in its new operating systems. Instead of having a separate database engine that programmers would call, there would simply be calls in the system's application programming interface (API) to retrieve memory. And those API calls would be based on particular hardware or microcode implementations, which would only be available on IBM systems, thereby achieving IBM's goal of tightly tying the hardware to the programs that ran on it. Processor Another principle was the use of very high-level complex instructions to be implemented in microcode. As an example, one of the instructions, CreateEncapsulatedModule, was a complete linkage editor. Other instructions were designed to support the internal data structures and operations of programming languages such as FORTRAN, COBOL, and PL/I. In effect, FS was designed to be the ultimate complex instruction set computer (CISC). Another way of presenting the same concept was that the entire collection of functions previously implemented as hardware, operating system software, data base software and more would now be considered as making up one integrated system, with each and every elementary function implemented in one of many layers including circuitry, microcode, and conventional software. More than one layer of microcode and code were contemplated, sometimes referred to as picocode or millicode. Depending on the people one was talking to, the very notion of a "machine" therefore ranged between those functions which were implemented as circuitry (for the hardware specialists) to the complete set of functions offered to users, irrespective of their implementation (for the systems architects). The overall design also called for a "universal controller" to handle primarily input-output operations outside of the main processor. That universal controller would have a very limited instruction set, restricted to those operations required for I/O, pioneering the concept of a reduced instruction set computer (RISC). Meanwhile, John Cocke, one of the chief designers of early IBM computers, began a research project to design the first reduced instruction set computer (RISC). In the long run, the IBM 801 RISC architecture, which eventually evolved into IBM's POWER, PowerPC, and Power architectures, proved to be vastly cheaper to implement and capable of achieving much higher clock rate. Development Project start The FS project was officially started in September 1971, following the recommendations of a special task force assembled in the second quarter of 1971. In the course of time, several other research projects in various IBM locations merged into the FS project or became associated with it. Project management During its entire life, the FS project was conducted under tight security provisions. The project was broken down into many subprojects assigned to different teams. The documentation was similarly broken down into many pieces, and access to each document was subject to verification of the need-to-know by the project office. Documents were tracked and could be called back at any time. In Sowa's memo (see External Links, below) he noted The avowed aim of all this red tape is to prevent anyone from understanding the whole system; this goal has certainly been achieved. As a consequence, most people working on the project had an extremely limited view of it, restricted to what they needed to know in order to produce their expected contribution. 
Some teams were even working on FS without knowing. This explains why, when asked to define FS, most people give a very partial answer, limited to the intersection of FS with their field of competence. Planned product lines Three implementations of the FS architecture were planned: the top-of-line model was being designed in Poughkeepsie, NY, where IBM's largest and fastest computers were built; the next model down was being designed in Endicott, NY, which had responsibility for the mid-range computers; the model below that was being designed in Böblingen, Germany, and the smallest model was being designed in Hursley, UK. A continuous range of performance could be offered by varying the number of processors in a system at each of the four implementation levels. Early 1973, overall project management and the teams responsible for the more "outside" layers common to all implementations were consolidated in the Mohansic ASDD laboratory (halfway between the Armonk/White Plains headquarters and Poughkeepsie). Project end The FS project was terminated in 1975. The reasons given for terminating the project depend on the person asked, each of whom puts forward the issues related to the domain with which they were familiar. In reality, the success of the project was dependent on a large number of breakthroughs in all areas from circuit design and manufacturing to marketing and maintenance. Although each single issue, taken in isolation, might have been resolved, the probability that they could all be resolved in time and in mutually compatible ways was practically zero. One symptom was the poor performance of its largest implementation, but the project was also marred by protracted internal arguments about various technical aspects, including internal IBM debates about the merits of RISC vs. CISC designs. The complexity of the instruction set was another obstacle; it was considered "incomprehensible" by IBM's own engineers and there were strong indications that the system wide single-level store could not be backed up in part, foretelling the IBM AS/400's partitioning of the System/38's single-level store. Moreover, simulations showed that the execution of native FS instructions on the high-end machine was slower than the System/370 emulator on the same machine. The FS project was finally terminated when IBM realized that customer acceptance would be much more limited than originally predicted because there was no reasonable application migration path for 360 architecture customers. In order to leave maximum freedom to design a truly revolutionary system, ease of application migration was not one of the primary design goals for the FS project, but was to be addressed by software migration aids taking the new architecture as a given. In the end, it appeared that the cost of migrating the mass of user investments in COBOL and assembly language based applications to FS was in many cases likely to be greater than the cost of acquiring a new system. Results Although the FS project as a whole was terminated, a simplified version of the architecture for the smallest of the three machines continued to be developed in Rochester. It was finally released as the IBM System/38, which proved to be a good design for ease of programming, but it was woefully underpowered. The AS/400 inherited the same architecture, but with performance improvements. 
In both machines, the high-level instruction set generated by compilers is not interpreted, but translated into a lower-level machine instruction set and executed; the original lower-level instruction set was a CISC instruction set with some similarities to the System/360 instruction set. In later machines the lower-level instruction set was an extended version of the PowerPC instruction set, which evolved from John Cocke's RISC machine. The dedicated hardware platform was replaced in 2008 by the IBM Power Systems platform running the IBM i operating system. Besides System/38 and the AS/400, which inherited much of the FS architecture, bits and pieces of Future Systems technology were incorporated in the following parts of IBM's product line: the IBM 3081 mainframe computer, which was essentially the top-of-the line machine designed in Poughkeepsie, using the System/370 emulator microcode, and with the FS microcode removed and used the 3800 laser printer, and some machines that would lead to the IBM 3279 terminal and GDDM the IBM 3850 automatic magnetic tape library the IBM 8100 mid-range computer, which was based on a CPU called the Universal Controller, which had been intended for FS input/output processing network enhancements concerning VTAM and NCP References Citations Bibliography External links An internal memo by John F. Sowa. This outlines the technical and organizational problems of the FS project in late 1974. Overview of IBM Future Systems Computing platforms Future Systems project Information technology projects
IBM Future Systems project
[ "Technology", "Engineering" ]
5,135
[ "Information technology", "Computing platforms", "Information technology projects" ]
973,479
https://en.wikipedia.org/wiki/Pseudo-differential%20operator
In mathematical analysis a pseudo-differential operator is an extension of the concept of differential operator. Pseudo-differential operators are used extensively in the theory of partial differential equations and quantum field theory, e.g. in mathematical models that include ultrametric pseudo-differential equations in a non-Archimedean space. History The study of pseudo-differential operators began in the mid 1960s with the work of Kohn, Nirenberg, Hörmander, Unterberger and Bokobza. They played an influential role in the second proof of the Atiyah–Singer index theorem via K-theory. Atiyah and Singer thanked Hörmander for assistance with understanding the theory of pseudo-differential operators. Motivation Linear differential operators with constant coefficients Consider a linear differential operator with constant coefficients, which acts on smooth functions with compact support in Rn. This operator can be written as a composition of a Fourier transform, a simple multiplication by the polynomial function (called the symbol) and an inverse Fourier transform, in the form: Here, is a multi-index, are complex numbers, and is an iterated partial derivative, where ∂j means differentiation with respect to the j-th variable. We introduce the constants to facilitate the calculation of Fourier transforms. Derivation of formula () The Fourier transform of a smooth function u, compactly supported in Rn, is and Fourier's inversion formula gives By applying P(D) to this representation of u and using one obtains formula (). Representation of solutions to partial differential equations To solve the partial differential equation we (formally) apply the Fourier transform on both sides and obtain the algebraic equation If the symbol P(ξ) is never zero when ξ ∈ Rn, then it is possible to divide by P(ξ): By Fourier's inversion formula, a solution is Here it is assumed that: P(D) is a linear differential operator with constant coefficients, its symbol P(ξ) is never zero, both u and ƒ have a well defined Fourier transform. The last assumption can be weakened by using the theory of distributions. The first two assumptions can be weakened as follows. In the last formula, write out the Fourier transform of ƒ to obtain This is similar to formula (), except that 1/P(ξ) is not a polynomial function, but a function of a more general kind. Definition of pseudo-differential operators Here we view pseudo-differential operators as a generalization of differential operators. We extend formula (1) as follows. A pseudo-differential operator P(x,D) on Rn is an operator whose value on the function u(x) is the function of x: where is the Fourier transform of u and the symbol P(x,ξ) in the integrand belongs to a certain symbol class. For instance, if P(x,ξ) is an infinitely differentiable function on Rn × Rn with the property for all x,ξ ∈Rn, all multiindices α,β, some constants Cα, β and some real number m, then P belongs to the symbol class of Hörmander. The corresponding operator P(x,D) is called a pseudo-differential operator of order m and belongs to the class Properties Linear differential operators of order m with smooth bounded coefficients are pseudo-differential operators of order m. The composition PQ of two pseudo-differential operators P, Q is again a pseudo-differential operator and the symbol of PQ can be calculated by using the symbols of P and Q. The adjoint and transpose of a pseudo-differential operator is a pseudo-differential operator. 
If a differential operator of order m is (uniformly) elliptic (of order m) and invertible, then its inverse is a pseudo-differential operator of order −m, and its symbol can be calculated. This means that one can solve linear elliptic differential equations more or less explicitly by using the theory of pseudo-differential operators. Differential operators are local in the sense that one only needs the value of a function in a neighbourhood of a point to determine the effect of the operator. Pseudo-differential operators are pseudo-local, which means informally that when applied to a distribution they do not create a singularity at points where the distribution was already smooth. Just as a differential operator can be expressed in terms of D = −id/dx in the form for a polynomial p in D (which is called the symbol), a pseudo-differential operator has a symbol in a more general class of functions. Often one can reduce a problem in analysis of pseudo-differential operators to a sequence of algebraic problems involving their symbols, and this is the essence of microlocal analysis. Kernel of pseudo-differential operator Pseudo-differential operators can be represented by kernels. The singularity of the kernel on the diagonal depends on the degree of the corresponding operator. In fact, if the symbol satisfies the above differential inequalities with m ≤ 0, it can be shown that the kernel is a singular integral kernel. See also Differential algebra for a definition of pseudo-differential operators in the context of differential algebras and differential rings. Fourier transform Fourier integral operator Oscillatory integral operator Sato's fundamental theorem Operational calculus Footnotes References . Further reading Nicolas Lerner, Metrics on the phase space and non-selfadjoint pseudo-differential operators. Pseudo-Differential Operators. Theory and Applications, 3. Birkhäuser Verlag, Basel, 2010. Michael E. Taylor, Pseudodifferential Operators, Princeton Univ. Press 1981. M. A. Shubin, Pseudodifferential Operators and Spectral Theory, Springer-Verlag 2001. Francois Treves, Introduction to Pseudo Differential and Fourier Integral Operators, (University Series in Mathematics), Plenum Publ. Co. 1981. F. G. Friedlander and M. Joshi, Introduction to the Theory of Distributions, Cambridge University Press 1999. André Unterberger, Pseudo-differential operators and applications: an introduction. Lecture Notes Series, 46. Aarhus Universitet, Matematisk Institut, Aarhus, 1976. External links Lectures on Pseudo-differential Operators by Mark S. Joshi on arxiv.org. Differential operators Microlocal analysis Functional analysis Harmonic analysis Generalized functions Partial differential equations
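For concreteness, the objects discussed in the Motivation and Definition sections above are conventionally written as follows. This is a standard textbook rendering and an assumption about the intended notation here; normalizations, such as where the factor of 2π is placed, vary between authors.

```latex
% Constant-coefficient differential operator and its symbol:
\[
P(D)u(x) = \sum_{|\alpha| \le m} a_\alpha \, D^\alpha u(x), \qquad
P(\xi)   = \sum_{|\alpha| \le m} a_\alpha \, \xi^\alpha, \qquad
D^\alpha = (-i\,\partial_1)^{\alpha_1} \cdots (-i\,\partial_n)^{\alpha_n}.
\]

% Pseudo-differential operator with symbol P(x, xi), acting on u through the
% Fourier transform \hat{u}:
\[
P(x,D)\,u(x) = \frac{1}{(2\pi)^n}
  \int_{\mathbb{R}^n} e^{\,i x \cdot \xi}\, P(x,\xi)\, \hat{u}(\xi)\, d\xi .
\]

% Hörmander symbol estimate defining the class S^m_{1,0}: for all
% multi-indices alpha and beta there are constants C such that
\[
\bigl| \partial_\xi^{\alpha} \partial_x^{\beta} P(x,\xi) \bigr|
  \le C_{\alpha,\beta}\, (1 + |\xi|)^{\,m - |\alpha|} .
\]
```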
Pseudo-differential operator
[ "Mathematics" ]
1,267
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical objects", "Mathematical relations", "Differential operators" ]
973,601
https://en.wikipedia.org/wiki/Charles%20Bassett
Charles Arthur "Charlie" Bassett II (December 30, 1931 – February 28, 1966), (Major, USAF), was an American electrical engineer and United States Air Force test pilot. He went to Ohio State University for two years and later graduated from Texas Tech University with a Bachelor of Science degree in Electrical Engineering. He joined the Air Force as a pilot and graduated from both the Air Force's Experimental Test Pilot School and the Aerospace Research Pilot School. Bassett was married and had two children. He was selected as a NASA astronaut in 1963 and was assigned to Gemini 9. He died in an airplane crash during training for his first spaceflight. He is memorialized on the Space Mirror Memorial; The Astronaut Monument; and the Fallen Astronaut memorial plaque, which was placed on the Moon during the Apollo 15 mission. Early life and education Bassett was born on December 30, 1931, in Dayton, Ohio, to Charles Arthur "Pete" Bassett (1897–1957) and Fannie Belle Milby Bassett ( James; 1907–1993). Bassett was active in the Boy Scouts of America, where he achieved its second highest rank, Life Scout. During high school, Bassett was a model plane aficionado. He belonged to a club that built gasoline-powered models and flew them in the school gym. Bassett's interest in model airplanes translated to real aircraft; he made his first solo flight at age 16. He worked odd jobs at the airport to earn money for flying lessons and earned his private pilot license at age seventeen. After graduating from Berea High School, in Berea, in 1950, he attended Ohio State University, in Columbus, from 1950 to 1952. Midway through college in 1952, Bassett enrolled in Air Force ROTC; he entered the U.S. Air Force as an aviation cadet in October of that year. He attended Texas Technological College, now Texas Tech University, from 1958 to 1960. He received a Bachelor of Science degree with honors in electrical engineering from Texas Tech and did graduate work at University of Southern California (USC) in Los Angeles. Military service He started his career with training at Stallings Air Base, North Carolina, and Bryan Air Force Base, Texas. Bassett graduated from Bryan in December 1953 and was commissioned in the Air Force. He arrived for additional training in Nellis Air Force Base, Nevada, as a second lieutenant. There, he flew trainer aircraft, such as the T-6, the T-28, and the T-33, and flew the jet fighter F-86 Sabre in 1954. He went to Korea with the 8th Fighter Bomber Group and flew a F-86 Sabre. Bassett was too late to fly any combat missions, and said, "If you don't have any challenge, you never know how good you are." Bassett was promoted to first lieutenant in May 1955. He returned from Korea in 1955 and was assigned to Suffolk County Air Force Base, in New York, flying aircraft such as the F-86D, the F-102, and the C-119. In November 1960, Bassett went to Maxwell Air Force Base, in Alabama, to attend Squadron Officer School. He also graduated from the Air Force Experimental Test Pilot School (Class 62A) and the Aerospace Research Pilot School (Class III) and was promoted to captain. Bassett was an experimental test pilot and engineering test pilot in the Fighter Projects Office at Edwards Air Force Base, California, and logged over 3,600 hours of flying time, including over 2,900 hours in a jet aircraft. NASA career Bassett was one of NASA's third group of astronauts, named in October 1963. 
In addition to participating in the overall astronaut training program, he had specific responsibilities related to training and simulators. On November 8, 1965, he was selected as pilot of the Gemini 9 mission with Elliot See as command pilot. Bassett was scheduled to make an untethered ninety-minute spacewalk, which was undertaken by Gene Cernan on Gemini 9A. According to chief astronaut Deke Slayton's autobiography, he chose Bassett for Gemini 9 because he was "strong enough to carry" both himself and See. Slayton had also assigned Bassett as command module pilot for the second backup Apollo crew, alongside Frank Borman and William Anders. Personal life On June 22, 1955, Bassett married Jeannie Martin. They had two children. Death Bassett and Elliot See died on February 28, 1966, when their T-38 trainer jet, piloted by See, crashed into McDonnell Aircraft Building 101, known as the McDonnell Space Center, from Lambert Field airport in St. Louis, Missouri. Building 101 was where the Gemini spacecraft was built, and the two astronauts were going there that Monday morning to train for two weeks in a simulator. They died within of their spacecraft. Both astronauts died instantly from trauma sustained in the crash. See was thrown clear of the cockpit and was found in the parking lot still strapped to his ejection seat with the parachute partially open. Bassett was decapitated on impact; his severed head was found later in the day in the rafters of the damaged assembly building. Both men's remains were buried in Arlington National Cemetery on Friday, March 4. During funeral services in Texas two days earlier, astronauts Jim McDivitt and Jim Lovell and civilian pilot Jere Cobb flew the missing man formation in Bassett's honor, while Buzz Aldrin, Bill Anders, and Walter Cunningham did the same to honor See. A NASA investigative panel later concluded that pilot error, caused by poor visibility due to bad weather, was the principal cause of the accident. The panel concluded that See was flying too low to the ground during his second approach, probably because of the poor visibility. Memorials Bassett is honored at the Kennedy Space Center Visitor Center's Space Mirror Memorial, alongside 24 other NASA astronauts who died in the pursuit of space exploration. His name also appears on the Fallen Astronaut memorial plaque at Hadley Rille on the Moon, placed by the Apollo 15 mission in 1971. Texas Tech University dedicated an Electrical Engineering Research Laboratory building in Bassett's honor in November 1996. See also List of spaceflight-related accidents and incidents References Bibliography External links Astronauts memorial foundation website (a different archived version from 2011) Astronautix biography of Charles Bassett Arlington National Cemetery 1931 births 1966 deaths Accidental deaths in Missouri American electrical engineers American test pilots Aviators from Ohio Aviators killed in aviation accidents or incidents in the United States Burials at Arlington National Cemetery Deaths by decapitation Engineers from Ohio Military personnel from Dayton, Ohio Ohio State University alumni Space program fatalities Texas Tech University alumni 20th-century American engineers United States Air Force astronauts United States Air Force officers U.S. Air Force Test Pilot School alumni USC Viterbi School of Engineering alumni Victims of aviation accidents or incidents in 1966
Charles Bassett
[ "Engineering" ]
1,367
[ "Space program fatalities", "Space programs" ]
973,614
https://en.wikipedia.org/wiki/Slighting
Slighting is the deliberate damage of high-status buildings to reduce their value as military, administrative or social structures. This destruction of property is sometimes extended to the contents of buildings and the surrounding landscape. It is a phenomenon with complex motivations and was often used as a tool of control. Slighting spanned cultures and periods, with especially well-known examples from the English Civil War in the 17th century. Meaning and use Slighting is the act of deliberately damaging a high-status building, especially a castle or fortification, which could include its contents and the surrounding area. The first recorded use of the word slighting to mean a form of destruction was in 1613. Castles are complex structures combining military, social, and administrative uses, and the decision to slight them took these various roles into account. The purpose of slighting was to reduce the value of the building, whether military, social, or administrative. Destruction often went beyond what was needed to prevent an enemy from using the fortification, indicating the damage was important symbolically. When Eccleshall Castle in Staffordshire was slighted as a result of the English Civil War, the act was politically motivated. In some cases, it was used as a way of punishing the king's rebels or was used to undermine the authority of the owner by demonstrating his inability to protect his property. As part of the peace negotiations bringing The Anarchy of 1135–1154 to an end, both sides agreed to dismantle fortifications built since the start of the conflict. Similarly, in 1317 Edward II ordered the dismantling of Harbottle Castle in Northumberland in England as part of a treaty with Robert the Bruce. In England, Scotland, and Wales, it was uncommon for someone to slight his own fortifications but not unknown; during the First War of Scottish Independence, Robert the Bruce systematically slighted Scottish castles, often after capturing them from English control. More than a century earlier, John, King of England, ordered the demolition of Château de Montrésor in France, during his war with the French king over control of Normandy. In the Levant, Muslim rulers adopted a policy of slighting castles and fortified towns and cities to deny them to Crusaders; Sultan Baybars, for example, instigated the destruction of fortifications at Jaffa in 1267, Antioch in 1268, and Ashkelon in 1270. Methods of destruction Castles were demolished with a range of methods, each affecting the buildings in different ways. Fire might be used, especially against timber structures; digging underneath stone structures (known as undermining) could cause them to collapse; dismantling a structure by hand was sometimes done, but was time- and labour-intensive, as was filling ditches and digging away earthworks; and in later periods gunpowder was sometimes used. Manually dismantling a castle ("picking") can be split into two categories: primary damage where the intention was to slight the castle; and secondary damage which was incidental through activity such as retrieving reusable materials. Undermining involved digging underneath a wall or removing stones at its base. When successful, the tunnel or cavity would collapse, making it difficult to identify through archaeology. Archaeological investigations have identified 61 castles that were slighted in the Middle Ages, and only five were undermined. 
While surviving mines are rare, one was discovered in the 1930s during excavations at Bungay Castle in Suffolk. It probably dates from around 1174 when the owner rebelled against Henry II. The effect of slighting Dismantling a castle was a skilled process, and stone, metal, and glass were sometimes removed for sale or reuse. After the castle at Papowo Biskupie in Poland was slighted, some of the materials from the castle were used to build a seminary at nearby Chełmża. The impact of slighting ranged from almost complete destruction of a site, as can be seen at Deganwy Castle, to a token gesture, for example damaging elements such as arrowslits. In 1268, the court of King Louis IX of France gave orders to slight a new fortification near Étampes, specifying that the bailiff carrying out orders should "destroy the arrow-slits and so to break them through that it may be abundantly clear that the fortification has been slighted". Destruction was often carefully targeted rather than indiscriminate, even when carried out on a large scale. In cases of medieval slighting, domestic areas such as free-standing halls and chapels were typically excluded from the destruction. When King Władysław II Jagiełło of Poland gave the order to slight the castle at Mała Nieszawka, after negotiation with the Teutonic Order who owned the castle, one of the conditions was that the buildings in the outer bailey would be left intact while the walls were reduced in height. In 1648, Parliament gave orders to slight Bolsover Castle but that "so much only be done to it as to make it untenable as a garrison and that it may not be unnecessarily spoiled and defaced." When a castle had a keep, it was usually the most visible part of the castle and a focus of symbolism. This would sometimes attract the attention of people carrying out slighting. Kenilworth was one of many castles to be slighted during the English Civil War, and the side of the keep most visible to people outside the castle was demolished. Documentary sources for the medieval period typically have little information on what slighting involved, so archaeology helps to understand which areas of buildings were targeted and how they were demolished. For the English Civil War, destruction accounts are rare but there are some instances such as Sheffield Castle where detailed records survive. At Sheffield military and social concerns combined: there may have been a desire to prevent the Royalist owner from using the fortification against Parliament, and the destruction undermined the owner's authority. Despite this, the profits from the demolition went to the owner, contrasting with Pontefract Castle, where the money went to the townspeople. When castles were slighted in the Middle Ages this often led to their complete abandonment, but some were repaired and others reused. This was also the case with places slighted as a result of the English Civil War. In 1650, Parliament gave orders to slight Wressle Castle in East Yorkshire; the south part of the castle was left standing so that the owner could still use it as a manor house. Berkeley Castle in Gloucestershire was also slighted in the same period – meaning that a small but significant part of the curtain wall was demolished, but the remaining structure was left intact, and the castle remains inhabited to this day. The use of destruction both to control and to subvert control spans periods and cultures. 
Slighting was prevalent in the Middle Ages and the 17th century; notable episodes include The Anarchy, the English Civil War, and France in the 16th and 17th centuries, as well as Japan. The ruins left by the destruction of castles in 17th-century England and Wales encouraged the later Romantic movement. See also Notes Bibliography Further reading External links Fortifications Medieval studies Destruction of buildings Military tactics Castles Wars of Scottish Independence English Civil War
Slighting
[ "Engineering" ]
1,455
[ "Fortifications", "Destruction of buildings", "Military engineering", "Architecture" ]
973,701
https://en.wikipedia.org/wiki/Baggage%20reclaim
In airport terminals, a baggage reclaim area is an area where arriving passengers claim checked-in baggage after disembarking from an airline flight. The alternative term baggage claim is used at airports in the US and some other airports internationally. Similar systems are also used at train stations served by companies that offer checked bags, such as Amtrak in the United States. Overview A typical baggage claim area contains baggage carousels or conveyor systems that deliver checked baggage to the passenger. The baggage claim area generally contains the airline's customer service counter for claiming oversized baggage or reporting missing or damaged baggage. Some airports require that passengers display their baggage receipt obtained at check-in so that it can be positively matched against the bag they are trying to remove from baggage reclaim. Many airports still recommend the baggage receipt is checked against the bag tag of the bag reclaimed. This serves two purposes: first, it reduces baggage theft, and second, it helps to prevent passengers from accidentally leaving the airport with another passenger's bag that bears resemblance to their own. For international arrivals, the baggage reclaim area is a restricted area, after passport and visa control and before clearing customs, so that all baggage can be inspected by customs agents, but the passenger does not have to handle heavy baggage while moving through the passport booth. In the United States and Canada, and also in some airports in Asia, all arriving international passengers' baggage is reclaimed here and can be re-checked by the airline for connecting flights on the other side of customs (for connection from international to domestic flights in most countries, all passengers must reclaim their baggage). In most other countries passengers transferring to an onward flight do not need to collect their bags unless their airline does not offer to through-check their bags to their final destination. This is required in American and some Canadian airports because international terminals are not enclosed (the only exit is through customs) and often serve domestic flights. The same rule applies in the case of airports that have U.S. border preclearance facilities. This means that passengers continuing onto the U.S. from other cities must retrieve their checked baggage first, then re-check them in after clearing U.S. Customs. Depending on the airport, the domestic baggage reclaims area may be located next to or shared with the international reclaim area, or sometimes located in the public part of the airport alongside car rental desks and airport exits, and only passengers at their final destination claim their bags here. In most large airports in the United States and in some small ones as well, the domestic baggage reclaims are located on a different floor than the ticket counter, usually lower. Efficiency of baggage reclaim units The efficiency of baggage reclaim units can be measured in a number of ways including the amount of time a unit is in use for a given flight or the amount of baggage a unit can hold. 
A number of factors can independently affect the efficiency of a particular unit: Aircraft seating capacity Proportion of passengers with checked luggage Proportion of passengers who are terminating at a given destination Average number of luggage pieces per passenger Average traveling party size Average number of people at baggage reclaim Average rate at which luggage are unloaded from the flight (this also depends on the physical properties of checked luggage) See also Duty-free shop References Airport infrastructure Luggage Aircraft ground handling
Baggage reclaim
[ "Engineering" ]
662
[ "Airport infrastructure", "Aerospace engineering" ]
973,778
https://en.wikipedia.org/wiki/Pedantry
Pedantry is an excessive concern with formalism, minor details, and rules that are not important. Etymology Pedantry derives from the 1580s English word pedant, which at the time meant a male schoolteacher. The word pedant originated from the French word for "schoolmaster," pédant, in the 1560s, or from the Italian word for "teacher, schoolmaster," pedante. Both of these words are likely alterations of the Late Latin word "paedagogantem", meaning a "person who trumpets minor points of learning, one who overrates learning or lays undue stress on exact knowledge of details...as compared with large matters or general principles." In ancient Rome, a paedagogus was a slave entrusted with teaching young boys. Analysis The distinction between pedantry and perfectionism is that pedantry typically focuses on highlighting the trivial, unimportant details of others' work and is associated with a desire for attention or superiority, whereas perfectionism typically focuses on oneself and on the desire for success and achievement. Pedantry is therefore generally regarded as an annoying, ill-mannered trait, whereas perfectionism is viewed more positively. Ultimately, pedantry can be viewed as an attempt to display superiority by appearing more intelligent, through acts as simple as correcting a peer's grammar online. In modern times, pedantry is also often used, intentionally or not, in a way that distracts from larger issues by focusing on minor details instead. For instance, a pedant might dismiss or invalidate a comprehensive, logical argument because of a few minor grammatical errors. Fowler's Concise Dictionary of Modern English (1926) recognised that the term pedantry was "relative" and subjective, stating "my pedantry is your scholarship, his reasonable accuracy, her irreducible minimum of education, and someone else's ignorance". See also Perfectionism (psychology) Anti-intellectualism References Human behavior Pejorative terms for people
Pedantry
[ "Biology" ]
422
[ "Behavior", "Human behavior" ]
973,790
https://en.wikipedia.org/wiki/Infinite%20conjugacy%20class%20property
In mathematics, a group is said to have the infinite conjugacy class property, or to be an ICC group, if the conjugacy class of every group element but the identity is infinite. The von Neumann group algebra of a group is a factor if and only if the group has the infinite conjugacy class property. It will then be, provided the group is nontrivial, of type II1, i.e. it will possess a unique, faithful, tracial state. Examples of ICC groups are the group of permutations of an infinite set that leave all but a finite subset of elements fixed, and free groups on two generators. In abelian groups, every conjugacy class consists of only one element, so ICC groups are, in a way, as far from being abelian as possible. References Infinite group theory Properties of groups
Infinite conjugacy class property
[ "Mathematics" ]
181
[ "Algebra stubs", "Mathematical structures", "Properties of groups", "Algebraic structures", "Algebra" ]
973,828
https://en.wikipedia.org/wiki/Relativistic%20Euler%20equations
In fluid mechanics and astrophysics, the relativistic Euler equations are a generalization of the Euler equations that account for the effects of general relativity. They have applications in high-energy astrophysics and numerical relativity, where they are commonly used for describing phenomena such as gamma-ray bursts, accretion phenomena, and neutron stars, often with the addition of a magnetic field. Note: for consistency with the literature, this article makes use of natural units, namely the speed of light and the Einstein summation convention. Motivation For most fluids observable on Earth, traditional fluid mechanics based on Newtonian mechanics is sufficient. However, as the fluid velocity approaches the speed of light or moves through strong gravitational fields, or the pressure approaches the energy density (), these equations are no longer valid. Such situations occur frequently in astrophysical applications. For example, gamma-ray bursts often feature speeds only less than the speed of light, and neutron stars feature gravitational fields that are more than times stronger than the Earth's. Under these extreme circumstances, only a relativistic treatment of fluids will suffice. Introduction The equations of motion are contained in the continuity equation of the stress–energy tensor : where is the covariant derivative. For a perfect fluid, Here is the total mass-energy density (including both rest mass and internal energy density) of the fluid, is the fluid pressure, is the four-velocity of the fluid, and is the metric tensor. To the above equations, a statement of conservation is usually added, usually conservation of baryon number. If is the number density of baryons this may be stated These equations reduce to the classical Euler equations if the fluid three-velocity is much less than the speed of light, the pressure is much less than the energy density, and the latter is dominated by the rest mass density. To close this system, an equation of state, such as an ideal gas or a Fermi gas, is also added. Equations of motion in flat space In the case of flat space, that is and using a metric signature of , the equations of motion are, Where is the energy density of the system, with being the pressure, and being the four-velocity of the system. Expanding out the sums and equations, we have, (using as the material derivative) Then, picking to observe the behavior of the velocity itself, we see that the equations of motion become Note that taking the non-relativistic limit, we have . This says that the energy of the fluid is dominated by its rest energy. In this limit, we have and , and can see that we return the Euler Equation of . Derivation In order to determine the equations of motion, we take advantage of the following spatial projection tensor condition: We prove this by looking at and then multiplying each side by . Upon doing this, and noting that , we have . Relabeling the indices as shows that the two completely cancel. This cancellation is the expected result of contracting a temporal tensor with a spatial tensor. Now, when we note that where we have implicitly defined that , we can calculate that and thus Then, let's note the fact that and . Note that the second identity follows from the first. Under these simplifications, we find that and thus by , we have We have two cancellations, and are thus left with See also Relativistic heat conduction Equation of state (cosmology) References Euler equations Equations of fluid dynamics
Relativistic Euler equations
[ "Physics", "Chemistry" ]
715
[ "Equations of fluid dynamics", "Equations of physics", "Special relativity", "Relativity stubs", "Theory of relativity", "Fluid dynamics" ]
973,852
https://en.wikipedia.org/wiki/Addition%20theorem
In mathematics, an addition theorem is a formula such as that for the exponential function: e^(x + y) = e^x · e^y, that expresses, for a particular function f, f(x + y) in terms of f(x) and f(y). Slightly more generally, as is the case with the trigonometric functions sin and cos, several functions may be involved; this is more apparent than real, in that case, since cos is an algebraic function of sin (in other words, we usually take both functions as defined on the unit circle). The scope of the idea of an addition theorem was fully explored in the nineteenth century, prompted by the discovery of the addition theorem for elliptic functions. To "classify" addition theorems it is necessary to put some restriction on the type of function G admitted, such that F(x + y) = G(F(x), F(y)). In this identity one can assume that F and G are vector-valued (have several components). An algebraic addition theorem is one in which G can be taken to be a vector of polynomials, in some set of variables. The conclusion of the mathematicians of the time was that the theory of abelian functions essentially exhausted the interesting possibilities: considered as a functional equation to be solved with polynomials, or indeed rational functions or algebraic functions, there were no further types of solution. In more contemporary language this appears as part of the theory of algebraic groups, dealing with commutative groups. The connected, projective variety examples are indeed exhausted by abelian functions, as is shown by a number of results characterising an abelian variety by rather weak conditions on its group law. The so-called quasi-abelian functions are all known to come from extensions of abelian varieties by commutative affine group varieties. Therefore, the old conclusions about the scope of global algebraic addition theorems can be said to hold. A more modern aspect is the theory of formal groups. See also Timeline of abelian varieties Addition theorem for spherical harmonics Mordell–Weil theorem References Theorems in algebraic geometry Theorems in algebra
Addition theorem
[ "Mathematics" ]
430
[ "Theorems in algebraic geometry", "Theorems in algebra", "Theorems in geometry", "Mathematical problems", "Mathematical theorems", "Algebra" ]
973,888
https://en.wikipedia.org/wiki/Security%20Assertion%20Markup%20Language
Security Assertion Markup Language (SAML, pronounced SAM-el, ) is an open standard for exchanging authentication and authorization data between parties, in particular, between an identity provider and a service provider. SAML is an XML-based markup language for security assertions (statements that service providers use to make access-control decisions). SAML is also: A set of XML-based protocol messages A set of protocol message bindings A set of profiles (utilizing all of the above) An important use case that SAML addresses is web-browser single sign-on (SSO). Single sign-on is relatively easy to accomplish within a security domain (using cookies, for example) but extending SSO across security domains is more difficult and resulted in the proliferation of non-interoperable proprietary technologies. The SAML Web Browser SSO profile was specified and standardized to promote interoperability. In practice, SAML SSO is most commonly used for authentication into cloud-based business software. Overview The SAML specification defines three roles: the principal (typically a human user), the identity provider (IdP) and the service provider (SP). In the primary use case addressed by SAML, the principal requests a service from the service provider. The service provider requests and obtains an authentication assertion from the identity provider. On the basis of this assertion, the service provider can make an access control decision, that is, it can decide whether to perform the service for the connected principal. At the heart of the SAML assertion is a subject (a principal within the context of a particular security domain) about which something is being asserted. The subject is usually (but not necessarily) a human. As in the SAML 2.0 Technical Overview, the terms subject and principal are used interchangeably in this document. Before delivering the subject-based assertion from IdP to the SP, the IdP may request some information from the principal—such as a user name and password—in order to authenticate the principal. SAML specifies the content of the assertion that is passed from the IdP to the SP. In SAML, one identity provider may provide SAML assertions to many service providers. Similarly, one SP may rely on and trust assertions from many independent IdPs. SAML does not specify the method of authentication at the identity provider. The IdP may use a username and password, or some other form of authentication, including multi-factor authentication. A directory service such as RADIUS, LDAP, or Active Directory that allows users to log in with a user name and password is a typical source of authentication tokens at an identity provider. The popular Internet social networking services also provide identity services that in theory could be used to support SAML exchanges. History The Organization for the Advancement of Structured Information Standards (OASIS) Security Services Technical Committee (SSTC), which met for the first time in January 2001, was chartered "to define an XML framework for exchanging authentication and authorization information." 
To this end, the following intellectual property was contributed to the SSTC during the first two months of that year: Security Services Markup Language (S2ML) from Netegrity AuthXML from Securant XML Trust Assertion Service Specification (X-TASS) from VeriSign Information Technology Markup Language (ITML) from Jamcracker Building on these initial contributions, in November 2002 OASIS announced the Security Assertion Markup Language (SAML) 1.0 specification as an OASIS Standard. Meanwhile, the Liberty Alliance, a large consortium of companies, non-profit and government organizations, proposed an extension to the SAML standard called the Liberty Identity Federation Framework (ID-FF). Like its SAML predecessor, Liberty ID-FF proposed a standardized, cross-domain, web-based, single sign-on framework. In addition, Liberty described a circle of trust where each participating domain is trusted to accurately document the processes used to identify a user, the type of authentication system used, and any policies associated with the resulting authentication credentials. Other members of the circle of trust could then examine these policies to determine whether to trust such information. While Liberty was developing ID-FF, the SSTC began work on a minor upgrade to the SAML standard. The resulting SAML 1.1 specification was ratified by the SSTC in September 2003. Then, in November of that same year, Liberty contributed ID-FF 1.2 to OASIS, thereby sowing the seeds for the next major version of SAML. In March 2005, SAML 2.0 was announced as an OASIS Standard. SAML 2.0 represents the convergence of Liberty ID-FF and proprietary extensions contributed by the Shibboleth project, as well as early versions of SAML itself. Most SAML implementations support v2.0 while many still support v1.1 for backward compatibility. By January 2008, deployments of SAML 2.0 became common in government, higher education, and commercial enterprises worldwide. Versions SAML has undergone one minor and one major revision since 1.0. SAML 1.0 was adopted as an OASIS Standard in November 2002 SAML 1.1 was ratified as an OASIS Standard in September 2003 SAML 2.0 became an OASIS Standard in March 2005 The Liberty Alliance contributed its Identity Federation Framework (ID-FF) to the OASIS SSTC in September 2003: ID-FF 1.1 was released in April 2003 ID-FF 1.2 was finalized in November 2003 Versions 1.0 and 1.1 of SAML are similar even though small differences exist., however, the differences between SAML 2.0 and SAML 1.1 are substantial. Although the two standards address the same use case, SAML 2.0 is incompatible with its predecessor. Although ID-FF 1.2 was contributed to OASIS as the basis of SAML 2.0, there are some important differences between SAML 2.0 and ID-FF 1.2. In particular, the two specifications, despite their common roots, are incompatible. Design SAML is built upon a number of existing standards: Extensible Markup Language (XML): Most SAML exchanges are expressed in a standardized dialect of XML, which is the root for the name SAML (Security Assertion Markup Language). XML Schema (XSD): SAML assertions and protocols are specified (in part) using XML Schema. XML Signature: Both SAML 1.1 and SAML 2.0 use digital signatures (based on the XML Signature standard) for authentication and message integrity. XML Encryption: Using XML Encryption, SAML 2.0 provides elements for encrypted name identifiers, encrypted attributes, and encrypted assertions (SAML 1.1 does not have encryption capabilities). 
XML Encryption is reported to have severe security concerns. Hypertext Transfer Protocol (HTTP): SAML relies heavily on HTTP as its communications protocol. Simple Object Access Protocol (SOAP): SAML specifies the use of SOAP, specifically SOAP 1.1 . SAML defines XML-based assertions and protocols, bindings, and profiles. The term SAML Core refers to the general syntax and semantics of SAML assertions as well as the protocol used to request and transmit those assertions from one system entity to another. SAML protocol refers to what is transmitted, not how (the latter is determined by the choice of binding). So SAML Core defines "bare" SAML assertions along with SAML request and response elements. A SAML binding determines how SAML requests and responses map onto standard messaging or communications protocols. An important (synchronous) binding is the SAML SOAP binding. A SAML profile is a concrete manifestation of a defined use case using a particular combination of assertions, protocols and bindings. Assertions A SAML assertion contains a packet of security information: <saml:Assertion ...> .. </saml:Assertion> Loosely speaking, a relying party interprets an assertion as follows: Assertion A was issued at time t by issuer R regarding subject S provided conditions C are valid. SAML assertions are usually transferred from identity providers to service providers. Assertions contain statements that service providers use to make access-control decisions. Three types of statements are provided by SAML: Authentication statements Attribute statements Authorization decision statements Authentication statements assert to the service provider that the principal did indeed authenticate with the identity provider at a particular time using a particular method of authentication. Other information about the authenticated principal (called the authentication context) may be disclosed in an authentication statement. An attribute statement asserts that a principal is associated with certain attributes. An attribute is simply a name–value pair. Relying parties use attributes to make access-control decisions. An authorization decision statement asserts that a principal is permitted to perform action A on resource R given evidence E. The expressiveness of authorization decision statements in SAML is intentionally limited. More-advanced use cases are encouraged to use XACML instead. Protocols A SAML protocol describes how certain SAML elements (including assertions) are packaged within SAML request and response elements, and gives the processing rules that SAML entities must follow when producing or consuming these elements. For the most part, a SAML protocol is a simple request-response protocol. The most important type of SAML protocol request is called a query. A service provider makes a query directly to an identity provider over a secure back channel. Thus query messages are typically bound to SOAP. Corresponding to the three types of statements, there are three types of SAML queries: Authentication query Attribute query Authorization decision query The result of an attribute query is a SAML response containing an assertion, which itself contains an attribute statement. See the SAML 2.0 topic for an example of attribute query/response. Beyond queries, SAML 1.1 specifies no other protocols. SAML 2.0 expands the notion of protocol considerably. 
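As an illustrative sketch only (plain Python standard library; it assumes the assertion has already been received and its XML Signature verified, and uses the SAML 2.0 assertion namespace for the element names), a relying party could collect the name–value pairs carried by an assertion's attribute statements like this:

import xml.etree.ElementTree as ET

SAML = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def attributes_from_assertion(assertion_xml):
    # Map each attribute Name to the list of its string values. Signature checking
    # and condition/audience validation are deliberately out of scope for this sketch.
    root = ET.fromstring(assertion_xml)
    attributes = {}
    for attr in root.findall(".//saml:AttributeStatement/saml:Attribute", SAML):
        values = [v.text or "" for v in attr.findall("saml:AttributeValue", SAML)]
        attributes[attr.get("Name")] = values
    return attributes

A service provider would typically feed such attributes into its access-control decision, after validating the assertion's signature, conditions and audience.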
The following protocols are described in detail in SAML 2.0 Core: Assertion Query and Request Protocol Authentication Request Protocol Artifact Resolution Protocol Name Identifier Management Protocol Single Logout Protocol Name Identifier Mapping Protocol Most of these protocols are new in SAML 2.0. Bindings A SAML binding is a mapping of a SAML protocol message onto standard messaging formats and/or communications protocols. For example, the SAML SOAP binding specifies how a SAML message is encapsulated in a SOAP envelope, which itself is bound to an HTTP message. SAML 1.1 specifies just one binding, the SAML SOAP Binding. In addition to SOAP, implicit in SAML 1.1 Web Browser SSO are the precursors of the HTTP POST Binding, the HTTP Redirect Binding, and the HTTP Artifact Binding. These are not defined explicitly, however, and are only used in conjunction with SAML 1.1 Web Browser SSO. The notion of binding is not fully developed until SAML 2.0. SAML 2.0 completely separates the binding concept from the underlying profile. In fact, there is a brand new binding specification in SAML 2.0 that defines the following standalone bindings: SAML SOAP Binding (based on SOAP 1.1) Reverse SOAP (PAOS) Binding HTTP Redirect (GET) Binding HTTP POST Binding HTTP Artifact Binding SAML URI Binding This reorganization provides tremendous flexibility: taking just Web Browser SSO alone as an example, a service provider can choose from four bindings (HTTP Redirect, HTTP POST and two flavors of HTTP Artifact), while the identity provider has three binding options (HTTP POST plus two forms of HTTP Artifact), for a total of twelve possible deployments of the SAML 2.0 Web Browser SSO Profile. Profiles A SAML profile describes in detail how SAML assertions, protocols, and bindings combine to support a defined use case. The most important SAML profile is the Web Browser SSO Profile. SAML 1.1 specifies two forms of Web Browser SSO, the Browser/Artifact Profile and the Browser/POST Profile. The latter passes assertions by value whereas Browser/Artifact passes assertions by reference. As a consequence, Browser/Artifact requires a back-channel SAML exchange over SOAP. In SAML 1.1, all flows begin with a request at the identity provider for simplicity. Proprietary extensions to the basic IdP-initiated flow have been proposed (by Shibboleth, for example). The Web Browser SSO Profile was completely refactored for SAML 2.0. Conceptually, SAML 1.1 Browser/Artifact and Browser/POST are special cases of SAML 2.0 Web Browser SSO. The latter is considerably more flexible than its SAML 1.1 counterpart due to the new "plug-and-play" binding design of SAML 2.0. Unlike previous versions, SAML 2.0 browser flows begin with a request at the service provider. This provides greater flexibility, but SP-initiated flows naturally give rise to the so-called Identity Provider Discovery problem, the focus of much research today. 
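As a concrete sketch of the HTTP Redirect binding (plain Python standard library; the endpoint URL and request document below are placeholders, and signing of the query string via the SigAlg and Signature parameters is omitted), a service provider could encode a <samlp:AuthnRequest> message into a redirect URL as follows:

import base64
import urllib.parse
import zlib

def redirect_binding_url(idp_sso_url, authn_request_xml, relay_state=None):
    # The Redirect binding carries the message as SAMLRequest: the XML is
    # raw-DEFLATE compressed, base64 encoded, then URL encoded.
    deflater = zlib.compressobj(9, zlib.DEFLATED, -15)   # -15 selects raw DEFLATE
    deflated = deflater.compress(authn_request_xml.encode("utf-8")) + deflater.flush()
    params = {"SAMLRequest": base64.b64encode(deflated).decode("ascii")}
    if relay_state is not None:
        params["RelayState"] = relay_state
    return idp_sso_url + "?" + urllib.parse.urlencode(params)

# Hypothetical usage with the example endpoint that appears in the Use section below:
# redirect_binding_url("https://idp.example.org/SAML2/SSO/Redirect", "<samlp:AuthnRequest ...>...</samlp:AuthnRequest>")

The identity provider reverses the encoding (base64-decode, then inflate) to recover the request, as described in the Use section below.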
In addition to Web Browser SSO, SAML 2.0 introduces numerous new profiles: SSO Profiles Web Browser SSO Profile Enhanced Client or Proxy (ECP) Profile Identity Provider Discovery Profile Single Logout Profile Name Identifier Management Profile Artifact Resolution Profile Assertion Query/Request Profile Name Identifier Mapping Profile SAML Attribute Profiles Aside from the SAML Web Browser SSO Profile, some important third-party profiles of SAML include: OASIS Web Services Security (WSS) Technical Committee Liberty Alliance OASIS eXtensible Access Control Markup Language (XACML) Technical Committee Security The SAML specifications recommend, and in some cases mandate, a variety of security mechanisms: TLS 1.0+ for transport-level security XML Signature and XML Encryption for message-level security Requirements are often phrased in terms of (mutual) authentication, integrity, and confidentiality, leaving the choice of security mechanism to implementers and deployers. Use The primary SAML use case is called Web Browser Single Sign-On (SSO). A user utilizes a user agent (usually a web browser) to request a web resource protected by a SAML service provider. The service provider, wishing to know the identity of the requesting user, issues an authentication request to a SAML identity provider through the user agent. The resulting protocol flow is depicted in the following diagram. 1. Request the target resource at the SP (SAML 2.0 only) The principal (via an HTTPs user agent) requests a target resource at the service provider: <nowiki>https://sp.example.com/myresource</nowiki> The service provider performs a security check on behalf of the target resource. If a valid security context at the service provider already exists, skip steps 2–7. 2. Redirect to the SSO Service at the IdP (SAML 2.0 only)The service provider determines the user's preferred identity provider (by unspecified means) and redirects the user agent to the SSO Service at the identity provider: <nowiki>https://idp.example.org/SAML2/SSO/Redirect?SAMLRequest=request</nowiki> The value of the SAMLRequest parameter (denoted by the placeholder request above) is the Base64 encoding of a deflated <samlp:AuthnRequest> element. 3. Request the SSO Service at the IdP (SAML 2.0 only) The user agent issues a GET request to the SSO service at the URL from step 2. The SSO service processes the AuthnRequest (sent via the SAMLRequest URL query parameter) and performs a security check. If the user does not have a valid security context, the identity provider identifies the user (details omitted). 4. Respond with an XHTML form The SSO service validates the request and responds with a document containing an XHTML form: <form method="post" action="https://sp.example.com/SAML2/SSO/POST" ...> <input type="hidden" name="SAMLResponse" value="response" /> ... <input type="submit" value="Submit" /> </form> The value of the SAMLResponse element (denoted by the placeholder response above) is the base64 encoding of a <samlp:Response> element. 5. Request the Assertion Consumer Service at the SP The user agent issues a POST request to the assertion consumer service at the service provider. The value of the SAMLResponse parameter is taken from the XHTML form at step 4. 6. Redirect to the target resource The assertion consumer service processes the response, creates a security context at the service provider and redirects the user agent to the target resource. 7. 
Request the target resource at the SP again The user agent requests the target resource at the service provider (again): <nowiki>https://sp.example.com/myresource</nowiki> 8. Respond with requested resource Since a security context exists, the service provider returns the resource to the user agent. In SAML 1.1, the flow begins with a request to the identity provider's inter-site transfer service at step 3. In the example flow above, all depicted exchanges are front-channel exchanges, that is, an HTTP user agent (browser) communicates with a SAML entity at each step. In particular, there are no back-channel exchanges or direct communications between the service provider and the identity provider. Front-channel exchanges lead to simple protocol flows where all messages are passed by value using a simple HTTP binding (GET or POST). Indeed, the flow outlined in the previous section is sometimes called the Lightweight Web Browser SSO Profile. Alternatively, for increased security or privacy, messages may be passed by reference. For example, an identity provider may supply a reference to a SAML assertion (called an artifact) instead of transmitting the assertion directly through the user agent. Subsequently, the service provider requests the actual assertion via a back channel. Such a back-channel exchange is specified as a SOAP message exchange (SAML over SOAP over HTTP). In general, any SAML exchange over a secure back channel is conducted as a SOAP message exchange. On the back channel, SAML specifies the use of SOAP 1.1. The use of SOAP as a binding mechanism is optional, however. Any given SAML deployment will choose whatever bindings are appropriate. See also SAML 2.0 SAML metadata SAML-based products and services Identity management Identity management systems Federated identity Information card WS-Federation OAuth OpenID Connect References External links OASIS Security Services Technical Committee Cover Pages: Security Assertion Markup Language (SAML) How to Study and Learn SAML Demystifying SAML First public SAML 2.0 identity provider XML-based standards Computer access control Identity management Federated identity Identity management systems Metadata standards
Security Assertion Markup Language
[ "Technology", "Engineering" ]
3,934
[ "Computer standards", "Cybersecurity engineering", "XML-based standards", "Computer access control" ]
973,934
https://en.wikipedia.org/wiki/TickIT
TickIT is a certification program for companies in the software development and computer industries, supported primarily by the United Kingdom and Swedish industries through UKAS and SWEDAC respectively. Its general objective is to improve software quality. History In the 1980s, the UK government's CCTA organisation promoted the use of IT standards in the UK public sector, with work on BS5750 (Quality Management) leading to the publishing of the Quality Management Library and the inception of the TickIT assessment scheme with DTI, MoD and participation of software development companies. The TickIT Guide TickIT also includes a guide. This provides guidance in understanding and applying ISO 9001 in the IT industry. It gives a background to the TickIT scheme, including its origins and objectives. Furthermore, it provides detailed information on how to implement a Quality System and the expected structure and content relevant to software activities. The TickIT guide also assists in defining appropriate measures and/or metrics. Various TickIT Guides have been issued, including "Guide to Software Quality Management and Certification using EN29001". References Bamford, Robert; Deibler, William (2003). ISO 9001: 2000 for Software and Systems Providers: An Engineering Approach (1st ed.). CRC-Press. External links TickITplus ISO 9001 Certification for Small and Midsize Businesses Information assurance standards Information technology governance Information technology organisations based in the United Kingdom Software quality
TickIT
[ "Technology" ]
288
[ "Computer standards", "Information assurance standards" ]
974,016
https://en.wikipedia.org/wiki/Clerk%20of%20works
A clerk of works or clerk of the works (CoW) is employed by an architect or a client on a construction site. The role is primarily to represent the interests of the client in regard to ensuring that the quality of both materials and workmanship are in accordance with the design information such as specification and engineering drawings, in addition to recognized quality standards. The role is defined in standard forms of contract such as those published by the Joint Contracts Tribunal. Clerks of works are also the most highly qualified non-commissioned tradesmen in the Royal Engineers. The qualification can be held in three specialisms: electrical, mechanical and construction. Historically, the clerk of works was employed by the architect on behalf of a client, or by local authorities to oversee public works. The clerks of works can also be employed by the client (state body/local authority/private client) to monitor design and build projects where the traditional role of the architect is within the design and build project team. Maître d'oeuvre (master of work) is a term used in many Francophone jurisdictions for the office that carries out this job in major projects; the Channel Tunnel project had such an office. In Italy, the term used is direttore dei lavori (manager of the works). Origins of the title The job title is believed to derive from the 13th century when monks and priests (i.e. "clerics" or "clerks") were accepted as being more literate than the builders of the age and took on the responsibility of supervising the works associated with the erection of churches and other religious property. As craftsmen and masons became more educated they in turn took on the role, but the title did not change. By the 19th century the role had expanded to cover the majority of building works, and the clerk of works was drawn from experienced tradesmen who had wide knowledge and understanding of the building process. The role The role, to this day, is based on the impartiality of the clerk of works in ensuring that value for money for the client - rather than the contractor - is achieved through rigorous and detailed inspection of materials and workmanship throughout the building process. In many cases, the traditional title has been discarded to comply with modern trends, such as site inspector, architectural inspector and quality inspector, but the requirement for the role remains unchanged since the origins of the title. The clerk of works is a very isolated profession on site. He/she is the person that must ensure quality of both materials and workmanship and, to this end, must be absolutely impartial and independent in decisions and judgments. He/she cannot normally, by virtue of the quality role, be employed by the contractor - only the client, normally by the architect on behalf of the client. His/her role is not to judge, but simply to report all occurrences that are relevant to the role. Clerks of works are either on site all the time or make regular visits. They must be vigilant in their inspections of a large range of technical aspects of the work. 
This involves: making sure that work is carried out to the client's standards, specification, correct materials, workmanship and schedule becoming familiar with all the relevant drawings and written instructions, checking them and using them as a reference when inspecting work making visual inspections taking measurements and samples on site to make sure that the work and the materials meet the specifications and quality standards being familiar with legal requirements and checking that the work complies with them. having a working knowledge of health and safety legislation and bringing any shortfalls observed to the attention of the resident engineer. advising the contractor about certain aspects of the work, particularly when something has gone wrong, but this advice should not be interpreted as an instruction Notable clerks of works Geoffrey Chaucer (1343–1400) was an English author, poet, philosopher, bureaucrat, courtier, diplomat and Clerk of the King's Works. John Louth was appointed first clerk of works of the Board of Ordnance by Henry V in 1414 along with Nicholas Merbury, Master of Ordnance. The Royal Artillery, Royal Engineers and Royal Army Ordnance Corps can all trace their origins to this date. William of Wykeham, Lord Chancellor and Bishop of Winchester (1323–1404) was Clerk of the King's Works. William Dickinson, Clerk of the King's Works from 1660 to his death in 1702 and Controller Clerk at Windsor Castle. His son, William Dickinson, was architect and Deputy Surveyor of Westminster Abbey under Sir Christopher Wren. James Nedeham or Needham, was appointed Clerk of the King's Works on 30 April 1530, and during that and the two following years devised and superintended the building alterations at Esher, York Place, and Westminster Palace. In September 1532, he was engaged in the "re-edifying" of St. Thomas's Tower within the Tower of London, and was occupied on that and other works in the Tower during the next three years. In April 1533, he was appointed by grant Clerk and Overseer of the King's Works in England. The Institute of Clerks of Works and Construction Inspectorate of Great Britain Incorporated The ICWCI - motto: Potestate, Probitate et Vigilantia (Ability, Integrity and Vigilance) - is the professional body that supports quality construction through systematic inspection. As a membership organisation, it provides a support network of meeting centres, technical advice, publications and events to help keep members up to date with the ever-changing construction industry. Post-nominals for members are FICWCI (Fellow), MICWCI (Member) and LICWCI (Licentiate). History The institute was founded in 1882 as the Clerk of Works Association, becoming the Incorporated Clerk of Works Association of Great Britain in 1903. In 1947, its name was amended again to the Institute of Clerks of Works of Great Britain Incorporated, a title it retained until 2009 when it was expanded to the Institute of Clerks of Works and Construction Inspectorate of Great Britain Incorporated. The organisation was founded to allow those who were required to operate in isolation on site, a central organisation to look after the interests of their chosen profession, be it through association with other professional bodies, educational means or simply through social intercourse among their own peers and contemporaries. Essential to this, as the Institute developed, was the development of a central body that could lobby Parliament in relation to their profession, and the quality issues that it stands for. 
Though the means of construction, training of individuals, and the way individuals are employed have changed dramatically over the years, the principles the Institute originally formed for remain sacrosanct. Experience in the many facets of the building trade is essential and, in general terms, most practitioners have come from the tools, though further third-level education in the built environment is essential. 'Building on Quality' Awards The Institute of Clerks of Works and Construction Inspectorate hold the biannual Building on Quality Awards, and nominations are accepted from all involved in quality site inspection regardless of whether they are members of the institute. Judging is based on the Clerk of Works' ability, his/her contribution to the projects he/she is involved with, his/her record keeping and reports, and his/her commitment to the role of Clerk of Works. Awards given in each category are Overall Winner, Highly Commended and Commended. The Overall Winner is chosen from all categories and is widely considered the highest accolade that can be awarded to a clerk of works in recognition of his work. Newly introduced in 2013 was the Peter Wilson Memorial trophy, which has now had two deserving recipients. The trophy was donated by the Cumbria and North Lancashire Chapter to the ICWCI in memoriam of Vice President Peter Wilson FICWCI. 2019 Award Winners Overall Winner - David Pugh MICWCI (Audley St George's Place Retirement Village, Edgbaston) Peter Wilson Memorial Award - Jon Tucker (Russell Hotel, London) Rex S. Reynolds Memorial Award - David Bristow FICWCI (Littleport Academy, Ely) 2017 Award Winners Overall Winner - Joel Trimby MICWCI (Ulster Hospital redevelopment project, Northern Ireland) Peter Wilson Memorial Award - William Tarling FICWCI (The National Heritage Centre for Horseracing and Sporting Art, Newmarket, England) 2015 Award Winners Overall Winner - Trevor King MICWCI (Boldrewood Campus, University of Southampton) Peter Wilson Memorial Award - Roy Burke MICWCI (Stratford Halo, London) 2013 Award Winners Overall Winner - Frank Miller MICWCI (Twin Sails Bridge, Poole, Dorset) Peter Wilson Memorial Award - Anthony Smith FICWCI (Taff Bargoed Park and Lakes, Merthyr Tydfil) 2011 Award Winners: Overall Winner - Brian Duncan MICWCI (Hanover Lodge Outer Circle, Regent's Park, London) New Build / Refurbishment - Tony Hood (Mossley Hill, Newtownabbey) New Build - Mark Heggs MICWCI (University of Leicester) 2009 Award Winners: Overall Winner - Les Howard MICWCI (New Eircom Headquarters, Dublin) New Build – Peter McGuone FICWCI (Altnagelvin Hospital, Derry) Refurbishment – Peter Airey MICWCI (Eden Court Theatre, Inverness) New Build / Refurbishment – Allan Sherwood MICWCI (The Spa, Bridlington) Civil Engineering – Mike Readman FICWCI (A590 High and Low Newton Bypass, Cumbria) Special Judges Award – Carol Heidschuster MICWCI (Lincoln Cathedral) ICWCI meeting centres Cumbria and North Lancashire, Deeside, Devon and Cornwall, Dublin, East Anglia, East Midlands, East of Scotland, Gibraltar, Home Counties North, Hong Kong, Isle of Man, London (North and South), Merseyside, North Cheshire, North East, Northern, Northern Ireland, Nottingham, Scotland, South Wales, Southern, Staffordshire and District & Western Counties Clerks of works in Canada The earliest record of a clerk of works in Canada was John Mactaggart who was the clerk of works for the Rideau Canal project in 1826. 
John Mactaggart was a British civil engineer and the chief clerk of works in charge of the project, reporting to Lieutenant-Colonel John By. John Morris was a notable clerk of works in the mid-1800s, completing several notable projects such as University College, University of Toronto, (1856–59), Parliament Building, the Departmental Buildings, and Government House. During the renovations to Pembroke City Hall, W. J. Moore was clerk of the works for the addition in 1912 and J. L. Morris for the alterations in 1914. See also Quality control Quality management system Quality assurance References External links Clerk of Works position gives peace of mind on Projects The Institute of Clerks of Works and Construction Inspectorate Clerk of Works.ca Biography – MACTAGGART, JOHN Biography - MORRIS, JOHN Construction trades workers Civil engineering Quality control Product certification Architecture occupations
Clerk of works
[ "Engineering" ]
2,229
[ "Construction", "Architecture occupations", "Civil engineering", "Architecture" ]
974,084
https://en.wikipedia.org/wiki/List%20of%20manifolds
This is a list of particular manifolds, by Wikipedia page. See also list of geometric topology topics. For categorical listings see :Category:Manifolds and its subcategories. Generic families of manifolds Euclidean space, Rn n-sphere, Sn n-torus, Tn Real projective space, RPn Complex projective space, CPn Quaternionic projective space, HPn Flag manifold Grassmann manifold Stiefel manifold Lie groups provide several interesting families. See Table of Lie groups for examples. See also: List of simple Lie groups and List of Lie group topics. Manifolds of a specific dimension 1-manifolds Circle, S1 Long line Real line, R Real projective line, RP1 ≅ S1 2-manifolds Cylinder, S1 × R Klein bottle, RP2 # RP2 Klein quartic (a genus 3 surface) Möbius strip Real projective plane, RP2 Sphere, S2 Surface of genus g Torus Double torus 3-manifolds 3-sphere, S3 3-torus, T3 Poincaré homology sphere SO(3) ≅ RP3 Solid Klein bottle Solid torus Whitehead manifold Meyerhoff manifold Weeks manifold For more examples see 3-manifold. 4-manifolds Complex projective plane Del Pezzo surface E8 manifold Enriques surface Exotic R4 Hirzebruch surface K3 surface For more examples see 4-manifold. Special types of manifolds Manifolds related to spheres Brieskorn manifold Exotic sphere Homology sphere Homotopy sphere Lens space Spherical 3-manifold Special classes of Riemannian manifolds Einstein manifold Ricci-flat manifold G2 manifold Kähler manifold Calabi–Yau manifold Hyperkähler manifold Quaternionic Kähler manifold Riemannian symmetric space Spin(7) manifold Categories of manifolds Manifolds definable by a particular choice of atlas Affine manifold Analytic manifold Complex manifold Differentiable (smooth) manifold Piecewise linear manifold Lipschitz manifold Topological manifold Manifolds with additional structure Almost complex manifold Almost symplectic manifold Calibrated manifold Complex manifold Contact manifold CR manifold Finsler manifold Hermitian manifold Hyperkähler manifold Kähler manifold Lie group Pseudo-Riemannian manifold Riemannian manifold Sasakian manifold Spin manifold Symplectic manifold Infinite-dimensional manifolds Banach manifold Fréchet manifold Hilbert manifold See also References Manifolds
List of manifolds
[ "Mathematics" ]
488
[ "Topological spaces", "Topology", "Manifolds", "Space (mathematics)" ]
974,140
https://en.wikipedia.org/wiki/Better%20dead%20than%20red
"Better dead than red" and the reverse "better red than dead" are dueling slogans regarding communism, and generally socialism, the former a anti-communist slogan ("rather dead than a communist"), and the latter a pro-communist slogan ("rather a communist than dead"). The slogans are interlingual with a variety of variants amongst them. Etymology Red is the emblematic color of communism and has thus become a synonym for "communist" (plural reds). Thus "better dead than red" means that 'one would rather die or be dead than to become or be a communist', and vice versa. History The slogans became widespread during the Cold War, first gaining currency in the United States during the late 1950s, amid debates about anti-communism and nuclear disarmament. The first phrase, "better red than dead", is often credited to British philosopher Bertrand Russell, but in his 1961 Has Man a Future? he attributes it to "West German friends of peace". In any event, Russell agreed with the sentiment, having written in 1958 that if "no alternatives remain except Communist domination or extinction of the human race, the former alternative is the lesser of two evils", and the slogan was adopted by the Campaign for Nuclear Disarmament, which he helped found. The first known English-language use of either term came in 1930, long before their widespread popularity. In an editorial criticizing John Edgerton, a Tennessee businessman who had mandated morning prayers in his factories to help keep out "dangerous ideas", The Nation sarcastically wrote: The first known use of "better red than dead" came in August 1958, when the Oakland Tribune wrote: "The popular phrase 'better red than dead' has lost what appeal it ever had." As anti-communist fever took hold in mid-century, the version "better dead than red" became popular in the United States, especially during the McCarthy era. The quote was also used by Oleg Troyanovsky, the Permanent Representative of the Soviet Union to the United Nations in 1980 when a dissident Marxist group threw red paint on him and US ambassador William vanden Heuvel in the United Nations Security Council chamber. With the end of the Cold War, the phrases have increasingly been repurposed as their original meanings have waned. For example, "better dead than red" is sometimes used as a schoolyard taunt aimed at redhaired children or Chinese American children. Some American alt-right groups such as Patriot Front have also used the phrase in their propaganda, in particular against Chinese Americans during the COVID-19 pandemic in the United States. Other languages The phrases may have been invented or inspired by Germans. Folklorist Mac E. Barrick linked it to Lewwer duad üs Slaav ("better dead than a slave"), a phrase used by Prussian poet Detlev von Liliencron in his ballad . Later, in Nazi Germany, Slav replaced Slaav, giving the anti-Slavic "better dead than a Slav". Also during the Nazi period, lieber tot als rot ("better dead than red") was used as a slogan. It is unclear whether it was the inspiration for either of the English phrases. The opposite slogan, lieber rot als tot ("better red than dead"), was popular among German speakers during the Cold War as well. In the strong pacifist movement in France in 1937, Jean Giono, a leading spokesman, asked, "What's the worst that can happen if Germany invades France? Become Germans? For my part, I prefer being a living German to being a dead Frenchman." 
Another version of the phrase took hold in Francoist Spain, adapted to Antes roja que rota ("better red than broken"), in reference to the threat posed by separatist groups in the regions of Catalonia and the Basque Country. During the Romanian revolt of 1990, the song Imnul golanilor by Cristian Pațurcă was written and became an anthem of the revolt. It contains the words Mai bine mort, decât comunist, meaning "better dead than communist". See also Liever Turks dan Paaps ("Rather Turkish than Papist") – slogan used during the 16th-century Dutch Revolt References 1920s neologisms 1920s quotations 1930s neologisms 1940s neologisms 1950s neologisms Anti-communism Anti-Marxism Communism Marxism Political catchphrases Harassment and bullying
Better dead than red
[ "Biology" ]
926
[ "Harassment and bullying", "Behavior", "Aggression" ]
974,148
https://en.wikipedia.org/wiki/Automorphic%20function
In mathematics, an automorphic function is a function on a space that is invariant under the action of some group, in other words a function on the quotient space. Often the space is a complex manifold and the group is a discrete group. Factor of automorphy In mathematics, the notion of factor of automorphy arises for a group acting on a complex-analytic manifold. Suppose a group G acts on a complex-analytic manifold X. Then, G also acts on the space of holomorphic functions from X to the complex numbers. A function f is termed an automorphic form if the following holds: f(g·x) = j_g(x) f(x), where j_g(x) is an everywhere nonzero holomorphic function. Equivalently, an automorphic form is a function whose divisor is invariant under the action of G. The factor of automorphy for the automorphic form f is the function j. An automorphic function is an automorphic form for which j is the identity. Some facts about factors of automorphy: Every factor of automorphy is a cocycle for the action of G on the multiplicative group of everywhere nonzero holomorphic functions. The factor of automorphy is a coboundary if and only if it arises from an everywhere nonzero automorphic form. For a given factor of automorphy, the space of automorphic forms is a vector space. The pointwise product of two automorphic forms is an automorphic form corresponding to the product of the corresponding factors of automorphy. Relation between factors of automorphy and other notions: Let Γ be a lattice in a Lie group G. Then, a factor of automorphy for Γ corresponds to a line bundle on the quotient group G/Γ. Further, the automorphic forms for a given factor of automorphy correspond to sections of the corresponding line bundle. The specific case of a subgroup of SL(2, R), acting on the upper half-plane, is treated in the article on automorphic factors. Examples Kleinian group Elliptic modular function Modular function Complex torus References Automorphic forms Discrete groups Types of functions Complex manifolds
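The case of a subgroup of SL(2, R) acting on the upper half-plane, mentioned at the end of the section above, admits a completely explicit factor of automorphy. The following LaTeX sketch records the standard weight-k convention as an illustration; the symbol j, the weight k and the matrix entries a, b, c, d are the conventional choices and are not notation fixed by the article.

% Classical factor of automorphy on the upper half-plane H = {tau : Im(tau) > 0}.
% gamma = (a b; c d) in SL(2,R) acts by a Moebius transformation:
\[
  \gamma \cdot \tau = \frac{a\tau + b}{c\tau + d},
  \qquad
  j(\gamma, \tau) = (c\tau + d)^{k}, \quad k \in \mathbb{Z}.
\]
% A weight-k automorphic form for a discrete subgroup Gamma then satisfies
\[
  f(\gamma \cdot \tau) = j(\gamma, \tau)\, f(\tau) \qquad \text{for all } \gamma \in \Gamma,
\]
% and j satisfies the cocycle identity required of a factor of automorphy:
\[
  j(\gamma_1 \gamma_2, \tau) = j(\gamma_1, \gamma_2 \cdot \tau)\; j(\gamma_2, \tau).
\]

The cocycle identity follows from multiplying out the bottom rows of the two matrices, and it is exactly the compatibility needed for the automorphy condition to be consistent under composition of group elements.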
Automorphic function
[ "Mathematics" ]
432
[ "Mathematical objects", "Functions and mappings", "Types of functions", "Mathematical relations" ]
974,163
https://en.wikipedia.org/wiki/Backstaff
The backstaff is a navigational instrument that was used to measure the altitude of a celestial body, in particular the Sun or Moon. When observing the Sun, users kept the Sun to their back (hence the name) and observed the shadow cast by the upper vane on a horizon vane. It was invented by the English navigator John Davis, who described it in his book Seaman's Secrets in 1594. Types of backstaffs Backstaff is the name given to any instrument that measures the altitude of the sun by the projection of a shadow. It appears that the idea for measuring the sun's altitude using back observations originated with Thomas Harriot. Many types of instruments evolved from the cross-staff that can be classified as backstaffs. Only the Davis quadrant remains dominant in the history of navigation instruments. Indeed, the Davis quadrant is essentially synonymous with backstaff. However, Davis was neither the first nor the last to design such an instrument and others are considered here as well. Davis quadrant Captain John Davis invented a version of the backstaff in 1594. Davis was a navigator who was quite familiar with the instruments of the day such as the mariner's astrolabe, the quadrant and the cross-staff. He recognized the inherent drawbacks of each and endeavoured to create a new instrument that could reduce those problems and increase the ease and accuracy of obtaining solar elevations. One early version of the quadrant staff is shown in Figure 1. It had an arc affixed to a staff so that it could slide along the staff (the shape is not critical, though the curved shape was chosen). The arc (A) was placed so that it would cast its shadow on the horizon vane (B). The navigator would look along the staff and observe the horizon through a slit in the horizon vane. By sliding the arc so that the shadow aligned with the horizon, the angle of the sun could be read on the graduated staff. This was a simple quadrant, but it was not as accurate as one might like. The accuracy in the instrument is dependent on the length of the staff, but a long staff made the instrument more unwieldy. The maximum altitude that could be measured with this instrument was 45°. The next version of his quadrant is shown in Figure 2. The arc on the top of the instrument in the previous version was replaced with a shadow vane placed on a transom. This transom could be moved along a graduated scale to indicate the angle of the shadow above the staff. Below the staff, a 30° arc was added. The horizon, seen through the horizon vane on the left, is aligned with the shadow. The sighting vane on the arc is moved until it aligns with the view of the horizon. The angle measured is the sum of the angle indicated by the position of the transom and the angle measured on the scale on the arc. The instrument that is now identified with Davis is shown in Figure 3. This form evolved by the mid-17th century. The quadrant arc has been split into two parts. The smaller radius arc, with a span of 60°, was mounted above the staff. The longer radius arc, with a span of 30° was mounted below. Both arcs have a common centre. At the common centre, a slotted horizon vane was mounted (B). A moveable shadow vane was placed on the upper arc so that its shadow was cast on the horizon vane. A moveable sight vane was mounted on the lower arc (C). It is easier for a person to place a vane at a specific location than to read the arc at an arbitrary position. This is due to Vernier acuity, the ability of a person to align two line segments accurately. 
Thus an arc with a small radius, marked with relatively few graduations, can be used to place the shadow vane accurately at a specific angle. On the other hand, moving the sight vane to the location where the line to the horizon meets the shadow requires a large arc. This is because the position may be at a fraction of a degree and a large arc allows one to read smaller graduations with greater accuracy. The large arc of the instrument, in later years, was marked with transversals to allow the arc to be read to greater accuracy than the main graduations allow. Thus Davis was able to optimize the construction of the quadrant to have both a small and a large arc, allowing the effective accuracy of a single arc quadrant of large radius without making the entire instrument so large. This form of the instrument became synonymous with the backstaff. It was one of the most widely used forms of the backstaff. Continental European navigators called it the English Quadrant. A later modification of the Davis quadrant was to use a Flamsteed glass in place of the shadow vane; this was suggested by John Flamsteed. This placed a lens on the vane that projected an image of the sun on the horizon vane instead of a shadow. It was useful under conditions where the sky was hazy or lightly overcast; the dim image of the sun was shown more brightly on the horizon vane where a shadow could not be seen. Usage In order to use the instrument, the navigator would place the shadow vane at a location anticipating the altitude of the sun. Holding the instrument in front of him, with the sun at his back, he holds the instrument so that the shadow cast by the shadow vane falls on the horizon vane at the side of the slit. He then moves the sight vane so that he observes the horizon in a line from the sight vane through the horizon vane's slit while simultaneously maintaining the position of the shadow. This permits him to measure the angle between the horizon and the sun as the sum of the angle read from the two arcs. Since the shadow's edge represents the limb of the sun, he must correct the value for the semidiameter of the sun. Instruments that derived from the Davis quadrant The Elton's quadrant derived from the Davis quadrant. It added an index arm with spirit levels to provide an artificial horizon. Demi-cross The demi-cross was an instrument that was contemporary with the Davis quadrant. It was popular outside England. The vertical transom was like a half-transom on a cross-staff, hence the name demi-cross. It supported a shadow vane (A in Figure 4) that could be set to one of several heights (three according to May, four according to de Hilster). By setting the shadow vane height, the range of angles that could be measured was set. The transom could be slid along the staff and the angle read from one of the graduated scales on the staff. The sight vane (C) and horizon vane (B) were aligned visually with the horizon. With the shadow vane's shadow cast on the horizon vane and aligned with the horizon, the angle was determined. In practice, the instrument was accurate but more unwieldy than the Davis quadrant. Plough The plough was the name given to an unusual instrument that existed for a short time. It was part cross-staff and part backstaff. In Figure 5, A is the transom that casts its shadow on the horizon vane at B. It functions in the same manner as the staff in Figure 1. C is the sighting vane. The navigator uses the sighting vane and the horizon vane to align the instrument horizontally. 
The sighting vane can be moved left to right along the staff. D is a transom just as one finds on a cross-staff. This transom has two vanes on it that can be moved closer or farther from the staff to emulate different-length transoms. The transom can be moved on the staff and used to measure angles. Almucantar staff The Almucantar staff is a device specifically used for measuring the altitude of the sun at low altitudes. Cross-staff The cross-staff was normally a direct observation instrument. However, in later years it was modified for use with back observations. Quadrant There was a variation of the quadrant – the Back observation quadrant – that was used for measuring the sun's altitude by observing the shadow cast on a horizon vane. Thomas Hood cross-staff Thomas Hood invented this cross-staff in 1590. It could be used for surveying, astronomy or other geometric problems. It consists of two components, a transom and a yard. The transom is the vertical component and is graduated from 0° at the top to 45° at the bottom. At the top of the transom, a vane is mounted to cast a shadow. The yard is horizontal and is graduated from 45° to 90°. The transom and yard are joined by a special fitting (the double socket in Figure 6) that permits independent adjustments of the transom vertically and the yard horizontally. It was possible to construct the instrument with the yard at the top of the transom rather than at the bottom. Initially, the transom and yard are set so that the two are joined at their respective 45° settings. The instrument is held so that the yard is horizontal (the navigator can view the horizon along the yard to assist in this). The socket is loosened so that the transom is moved vertically until the shadow of the vane is cast at the yard's 90° setting. If the movement of just the transom can accomplish this, the altitude is given by the transom's graduations. If the sun is too high for this, the yard horizontal opening in the socket is loosened and the yard is moved to allow the shadow to land on the 90° mark. The yard then yields the altitude. It was a fairly accurate instrument, as the graduations were well spaced compared to a conventional cross-staff. However, it was a bit unwieldy and difficult to handle in wind. Benjamin Cole quadrant A late addition to the collection of backstaves in the navigation world, this device was invented by Benjamin Cole in 1748. The instrument consists of a staff with a pivoting quadrant on one end. The quadrant has a shadow vane, which can optionally take a lens like the Davis quadrant's Flamsteed glass, at the upper end of the graduated scale (A in Figure 7). This casts a shadow or projects an image of the sun on the horizon vane (B). The observer views the horizon through a hole in the sight vane (D) and a slit in the horizon vane to ensure the instrument is level. The quadrant component is rotated until the horizon and the sun's image or shadow are aligned. The altitude can then be read from the quadrant's scale. In order to refine the reading, a circular vernier is mounted on the staff (C). The fact that such an instrument was introduced in the middle of the 18th century shows that the quadrant was still a viable instrument even in the presence of the octant. English scientist George Adams created a very similar backstaff at the same time. Adam's version ensured that the distance between the Flamsteed glass and horizon vane was the same as the distance from the vane to the sight vane. 
Cross bow quadrant Edmund Gunter invented the cross bow quadrant, also called the mariner's bow, around 1623. It gets its name from the similarity to the archer's crossbow. This instrument is interesting in that the arc is 120° but is only graduated as a 90° arc. As such, the angular spacing of a degree on the arc is slightly greater than one degree. Examples of the instrument can be found with a 0° to 90° graduation or with two mirrored 0° to 45° segments centred on the midpoint of the arc. The instrument has three vanes, a horizon vane (A in Figure 8) which has an opening in it to observe the horizon, a shadow vane (B) to cast a shadow on the horizon vane and a sighting vane (C) that the navigator uses to view the horizon and shadow at the horizon vane. This serves to ensure the instrument is level while simultaneously measuring the altitude of the sun. The altitude is the difference in the angular positions of the shadow and sighting vanes. With some versions of this instrument, the sun's declination for each day of the year was marked on the arc. This permitted the navigator to set the shadow vane to the date and the instrument would read the altitude directly. References Ephraim Chambers, Cyclopædia, The First Volume, 1728 explaining the use of a backstaff Maurice Daumas, Scientific Instruments of the Seventeenth and Eighteenth Centuries and Their Makers, Portman Books, London 1989 Gerard L'Estrange Turner, Antique Scientific Instruments, Blandford Press Ltd. 1980 Notes External links "Backstaff" at answers.com – Good diagram of how a backstaff is held in use. Attribution Navigational equipment Measuring instruments Astronomical instruments Celestial navigation Historical scientific instruments
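The Davis quadrant reading described in the Usage section above amounts to simple arithmetic: the altitude is the sum of the shadow-vane setting on the small arc and the sight-vane reading on the large arc, corrected for the Sun's semidiameter because the shadow edge corresponds to the Sun's limb rather than its centre. The short Python sketch below records that bookkeeping; the function name, the default 16-arcminute semidiameter and the convention of passing the correction with an explicit sign (the sign depends on which edge of the shadow is used) are illustrative assumptions rather than details taken from the article.

# Sketch: combining the two arc readings of a Davis quadrant (illustrative only).

def davis_altitude(shadow_arc_deg, sight_arc_deg, limb_correction_arcmin=16.0):
    """Return the Sun's altitude in degrees.

    shadow_arc_deg         -- setting of the shadow vane on the small (60 degree) arc
    sight_arc_deg          -- reading of the sight vane on the large (30 degree) arc
    limb_correction_arcmin -- signed semidiameter correction; the sign depends on
                              which edge of the shadow is aligned with the horizon
    """
    observed = shadow_arc_deg + sight_arc_deg          # sum of the two arc readings
    return observed + limb_correction_arcmin / 60.0    # correct from limb to centre

# Example: shadow vane set to 20 degrees, sight vane reading 23.5 degrees.
print(round(davis_altitude(20.0, 23.5), 2))  # about 43.77 degrees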
Backstaff
[ "Astronomy", "Technology", "Engineering" ]
2,620
[ "Celestial navigation", "Measuring instruments", "Astronomical instruments" ]
974,169
https://en.wikipedia.org/wiki/Algebraic%20function
In mathematics, an algebraic function is a function that can be defined as the root of an irreducible polynomial equation. Algebraic functions are often algebraic expressions using a finite number of terms, involving only the algebraic operations addition, subtraction, multiplication, division, and raising to a fractional power. Examples of such functions are f(x) = 1/x and f(x) = √(1 + x^3). Some algebraic functions, however, cannot be expressed by such finite expressions (this is the Abel–Ruffini theorem). This is the case, for example, for the Bring radical, which is the function implicitly defined by y^5 + y + x = 0. In more precise terms, an algebraic function of degree n in one variable x is a function y = f(x) that is continuous in its domain and satisfies a polynomial equation of positive degree a_n(x) y^n + a_{n-1}(x) y^{n-1} + ... + a_0(x) = 0, where the coefficients a_i(x) are polynomial functions of x, with integer coefficients. It can be shown that the same class of functions is obtained if algebraic numbers are accepted for the coefficients of the a_i(x)'s. If transcendental numbers occur in the coefficients the function is, in general, not algebraic, but it is algebraic over the field generated by these coefficients. The value of an algebraic function at a rational number, and more generally, at an algebraic number is always an algebraic number. Sometimes, coefficients that are polynomial over a ring R are considered, and one then talks about "functions algebraic over R". A function which is not algebraic is called a transcendental function, as is for example the case of the exponential function e^x. A composition of transcendental functions can give an algebraic function: f(x) = cos(arcsin(x)) = √(1 − x^2). As a polynomial equation of degree n has up to n roots (and exactly n roots over an algebraically closed field, such as the complex numbers), a polynomial equation does not implicitly define a single function, but up to n functions, sometimes also called branches. Consider for example the equation of the unit circle: y^2 + x^2 = 1. This determines y, except only up to an overall sign; accordingly, it has two branches: y = √(1 − x^2) and y = −√(1 − x^2). An algebraic function in m variables is similarly defined as a function which solves a polynomial equation in m + 1 variables: p(y, x1, ..., xm) = 0. It is normally assumed that p should be an irreducible polynomial. The existence of an algebraic function is then guaranteed by the implicit function theorem. Formally, an algebraic function in m variables over the field K is an element of the algebraic closure of the field of rational functions K(x1, ..., xm). Algebraic functions in one variable Introduction and overview The informal definition of an algebraic function provides a number of clues about their properties. To gain an intuitive understanding, it may be helpful to regard algebraic functions as functions which can be formed by the usual algebraic operations: addition, multiplication, division, and taking an nth root. This is something of an oversimplification; because of the fundamental theorem of Galois theory, algebraic functions need not be expressible by radicals. First, note that any polynomial function y = p(x) is an algebraic function, since it is simply the solution y to the equation y − p(x) = 0. More generally, any rational function y = p(x)/q(x) is algebraic, being the solution to q(x) y − p(x) = 0. Moreover, the nth root of any polynomial is an algebraic function, solving the equation y^n − p(x) = 0. Surprisingly, the inverse function of an algebraic function is an algebraic function. For supposing that y is a solution to a_n(x) y^n + ... + a_0(x) = 0, for each value of x, then x is also a solution of this equation for each value of y. 
Indeed, interchanging the roles of x and y and gathering terms, b_m(y) x^m + b_{m-1}(y) x^{m-1} + ... + b_0(y) = 0. Writing x as a function of y gives the inverse function, also an algebraic function. However, not every function has an inverse. For example, y = x^2 fails the horizontal line test: it fails to be one-to-one. The inverse is the algebraic "function" x = ±√y. Another way to understand this is that the set of branches of the polynomial equation defining our algebraic function is the graph of an algebraic curve. The role of complex numbers From an algebraic perspective, complex numbers enter quite naturally into the study of algebraic functions. First of all, by the fundamental theorem of algebra, the complex numbers are an algebraically closed field. Hence any polynomial relation p(y, x) = 0 is guaranteed to have at least one solution (and in general a number of solutions not exceeding the degree of p in y) for y at each point x, provided we allow y to assume complex as well as real values. Thus, problems to do with the domain of an algebraic function can safely be minimized. Furthermore, even if one is ultimately interested in real algebraic functions, there may be no means to express the function in terms of addition, multiplication, division and taking nth roots without resorting to complex numbers (see casus irreducibilis). For example, consider the algebraic function determined by the equation Using the cubic formula, we get For the square root is real and the cubic root is thus well defined, providing the unique real root. On the other hand, for the square root is not real, and one has to choose, for the square root, either non-real square root. Thus the cubic root has to be chosen among three non-real numbers. If the same choices are done in the two terms of the formula, the three choices for the cubic root provide the three branches shown, in the accompanying image. It may be proven that there is no way to express this function in terms of nth roots using real numbers only, even though the resulting function is real-valued on the domain of the graph shown. On a more significant theoretical level, using complex numbers allows one to use the powerful techniques of complex analysis to discuss algebraic functions. In particular, the argument principle can be used to show that any algebraic function is in fact an analytic function, at least in the multiple-valued sense. Formally, let p(x, y) be a complex polynomial in the complex variables x and y. Suppose that x0 ∈ C is such that the polynomial p(x0, y) of y has n distinct zeros. We shall show that the algebraic function is analytic in a neighborhood of x0. Choose a system of n non-overlapping discs Δi containing each of these zeros. Then by the argument principle (1/2πi) ∮_{∂Δi} (∂p/∂y)(x0, y) / p(x0, y) dy = 1. By continuity, this also holds for all x in a neighborhood of x0. In particular, p(x, y) has only one root in Δi, given by the residue theorem: fi(x) = (1/2πi) ∮_{∂Δi} y (∂p/∂y)(x, y) / p(x, y) dy, which is an analytic function. Monodromy Note that the foregoing proof of analyticity derived an expression for a system of n different function elements fi(x), provided that x is not a critical point of p(x, y). A critical point is a point where the number of distinct zeros is smaller than the degree of p, and this occurs only where the highest degree term of p or the discriminant vanishes. Hence there are only finitely many such points c1, ..., cm. A close analysis of the properties of the function elements fi near the critical points can be used to show that the monodromy cover is ramified over the critical points (and possibly the point at infinity). 
Thus the holomorphic extension of the fi has at worst algebraic poles and ordinary algebraic branchings over the critical points. Note that, away from the critical points, we have p(x, y) = a_n(x) (y − f1(x)) (y − f2(x)) ... (y − fn(x)), since the fi are by definition the distinct zeros of p. The monodromy group acts by permuting the factors, and thus forms the monodromy representation of the Galois group of p. (The monodromy action on the universal covering space is a related but different notion in the theory of Riemann surfaces.) History The ideas surrounding algebraic functions go back at least as far as René Descartes. The first discussion of algebraic functions appears to have been in Edward Waring's 1794 An Essay on the Principles of Human Knowledge in which he writes: let a quantity denoting the ordinate, be an algebraic function of the abscissa x, by the common methods of division and extraction of roots, reduce it into an infinite series ascending or descending according to the dimensions of x, and then find the integral of each of the resulting terms. See also Algebraic expression Analytic function Complex function Elementary function Function (mathematics) Generalized function List of special functions and eponyms List of types of functions Polynomial Rational function Special functions Transcendental function References External links Definition of "Algebraic function" in the Encyclopedia of Math Definition of "Algebraic function" in David J. Darling's Internet Encyclopedia of Science Analytic functions Functions and mappings Meromorphic functions Special functions Types of functions Polynomials Algebraic number theory
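As a small worked illustration of the branch, critical point and monodromy language used above, the following LaTeX sketch treats the simplest ramified case; the polynomial y^2 − x is chosen here for concreteness and is not an example drawn from the article itself.

% Branches and monodromy of p(x, y) = y^2 - x (illustrative example).
\[
  p(x, y) = y^{2} - x, \qquad
  f_{1}(x) = \sqrt{x}, \quad f_{2}(x) = -\sqrt{x}.
\]
% The only critical point is x = 0, where the discriminant 4x vanishes and the
% two zeros collide. Continuing sqrt(x) analytically along a loop encircling 0,
% log x increases by 2*pi*i, so
\[
  \sqrt{x} = e^{\tfrac{1}{2}\log x} \;\longmapsto\; e^{\tfrac{1}{2}(\log x + 2\pi i)} = -\sqrt{x},
\]
% and the monodromy exchanges f_1 and f_2. The monodromy group is the symmetric
% group S_2, which is also the Galois group of y^2 - x over C(x), illustrating the
% statement that the monodromy representation realises the Galois group of p.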
Algebraic function
[ "Mathematics" ]
1,721
[ "Functions and mappings", "Mathematical analysis", "Special functions", "Algebra", "Polynomials", "Mathematical objects", "Combinatorics", "Mathematical relations", "Algebraic number theory", "Types of functions", "Number theory" ]
974,191
https://en.wikipedia.org/wiki/Bulkhead%20%28partition%29
A bulkhead is an upright wall within the hull of a ship, within the fuselage of an airplane, or within the body of a car. Other kinds of partition elements within a ship are decks and deckheads. Etymology The word bulki meant "cargo" in Old Norse. During the 15th century sailors and builders in Europe realized that walls within a vessel would prevent cargo from shifting during passage. In shipbuilding, any vertical panel was called a head. So walls installed abeam (side-to-side) in a vessel's hull were called "bulkheads". Now, the term bulkhead applies to every vertical panel aboard a ship, except for the hull itself. History Bulkheads were known to the ancient Greeks, who employed bulkheads in triremes to support the back of rams. By the Athenian trireme era (500 BC), the hull was strengthened by enclosing the bow behind the ram, forming a bulkhead compartment. Instead of using bulkheads to protect ships against rams, Greeks preferred to reinforce the hull with extra timber along the waterline, making larger ships almost resistant to ramming by smaller ones. Bulkhead partitions are considered to have been a feature of Chinese junks, a type of ship. Song dynasty author Zhu Yu (fl. 12th century) wrote in his book of 1119 that the hulls of Chinese ships were built with bulkheads. The 5th-century book Garden of Strange Things by Liu Jingshu mentioned that a ship could allow water to enter the bottom without sinking. Archaeological evidence of bulkhead partitions has been found on a 24 m (78 ft) long Song dynasty ship dredged from the waters off the southern coast of China in 1973; the hull of the ship, dated to about 1277, was divided into twelve walled compartmental sections built watertight. Texts written by writers such as Marco Polo (1254–1324), Ibn Battuta (1304–1369), Niccolò Da Conti (1395–1469), and Benjamin Franklin (1706–1790) describe the bulkhead partitions of East Asian shipbuilding. An account of the early fifteenth century describes Indian ships as being built in compartments so that even if one part was damaged, the rest remained intact, a forerunner of the modern-day watertight compartments using bulkheads. As wood began to be replaced by iron in European ships in the 18th century, new structures, like bulkheads, started to become prevalent. Bulkhead partitions became widespread in Western shipbuilding during the early 19th century. Benjamin Franklin wrote in a 1787 letter that "as these vessels are not to be laden with goods, their holds may without inconvenience be divided into separate apartments, after the Chinese manner, and each of these apartments caulked tight so as to keep out water." A 19th-century book on shipbuilding attributes the introduction of watertight bulkheads to Charles Wye Williams, known for his steamships. Purpose Bulkheads in a ship serve several purposes: they increase the structural rigidity of the vessel, divide functional areas into rooms and create watertight compartments that can contain water in the case of a hull breach or other leak. Some bulkheads and decks are fire-resistance rated to achieve compartmentalisation, a passive fire protection measure; see firewall (construction). Not all bulkheads are intended to be watertight; in modern ships the bottom floor is supported against the hull by transverse walls (bulkheads) and longitudinal walls, and it is common to use bulkheads with lightening holes. On an aircraft, bulkheads divide the cabin into multiple areas. On passenger aircraft a common application is for physically dividing cabins used for different classes of service (e.g. 
economy and business.) On combination cargo/passenger, or "combi" aircraft, bulkhead walls are inserted to divide areas intended for passenger seating and cargo storage. Requirements of bulkheads Fire-resistance Openings in fire-resistance rated bulkheads and decks must be firestopped to restore the fire-resistance ratings that would otherwise be compromised if the openings were left unsealed. The authority having jurisdiction for such measures varies depending upon the flag of the ship. Merchant vessels are typically subject to the regulations and inspections of the coast guards of the flag country. Combat ships are subject to the regulations set out by the navy of the country that owns the ship. Prevention of electromagnetic damage Bulkheads and decks of warships may be fully electrically grounded as a countermeasure against damage from electromagnetic interference and electromagnetic pulse due to nearby nuclear or electromagnetic bomb detonations, which could severely damage the vital electronic systems on a ship. In the case of firestops, cable jacketing is usually removed within the seal and firestop rubber modules are internally fitted with copper shields, which contact the cables' armour to ground the seal. Automotive Most passenger vehicles and some freight vehicles will have a bulkhead which separates the engine compartment from the passenger compartment or cab; the automotive use is analogous to the nautical term in that the bulkhead is an internal wall which separates different parts of the vehicle. Some passenger vehicles (particularly sedan/saloon-type vehicles) will also have a rear bulkhead, which separates the passenger compartment from the trunk/boot. Other uses of the term The term was later applied to other vehicles, such as railroad cars, hopper cars, trams, automobiles, aircraft or spacecraft, as well as to containers, intermediate bulk containers and fuel tanks. In some of these cases bulkheads are airtight to prevent air leakage or the spread of a fire. The term may also be used for the "end walls" of bulkhead flatcars. Mechanically, a partition or panel through which connectors pass, or a connector designed to pass through a partition. In architecture the term is frequently used to denote any boxed in beam or other downstand from a ceiling and by extension even the vertical downstand face of an area of lower ceiling beyond. This usage presumably derives from experience on boats where to maintain the structural function personnel openings through bulkheads always retain a portion of the bulkhead crossing the head of the opening. Head strikes on these downstand elements are commonplace, hence in architecture any overhead downstand element comes to be referred to as a bulkhead. Bulkhead also refers to a moveable structure often found in an Olympic-size swimming pool, as a means to set the pool into a "double-ended short course" configuration, or long-course, depending on the type of event being run. Pool bulkheads are usually air-fillable, but power driven solutions do exist. The term is also used to refer to large retroactively installed pressure barriers for temporary or permanent use, often during maintenance or construction activities. 
See also References External links Britannica definition Merriam-Webster definition WIPO Bulkhead for motor vehicle Canadian Armed Forces Glossary, see Fire Zone, page 5 of 14 Det Norske Veritas Type Approval for a fire damper inside and A60 bulkhead Subject-related patent by Free Patents Online An example treatise on the use of A60 bulkheads onboard tankers. Shipbuilding Nautical terminology Chinese inventions Ship compartments
Bulkhead (partition)
[ "Engineering" ]
1,439
[ "Shipbuilding", "Marine engineering" ]
974,207
https://en.wikipedia.org/wiki/Ring%20circuit
In electricity supply design, a ring circuit is an electrical wiring technique in which sockets and the distribution point are connected in a ring. It is contrasted with the usual radial circuit, in which sockets and the distribution point are connected in a line with the distribution point at one end. Ring circuits are also known as ring final circuits and often incorrectly as ring mains, a term used historically, or informally simply as rings. It is used primarily in the United Kingdom, where it was developed, and to a lesser extent in Ireland and Hong Kong. This design enables the use of smaller-diameter wire than would be used in a radial circuit of equivalent total current capacity. The reduced diameter conductors in the flexible cords connecting an appliance to the plug intended for use with sockets on a ring circuit are individually protected by a fuse in the plug. Its advantages over radial circuits are therefore reduced quantity of copper used, and greater flexibility of appliances and equipment that can be connected. Ideally, the ring circuit acts like two radial circuits proceeding in opposite directions around the ring, the dividing point between them dependent on the distribution of load in the ring. If the load is evenly split across the two directions, the current in each direction is half of the total, allowing the use of wire with half the total current-carrying capacity. In practice, the load does not always split evenly, so thicker wire is used. Description The ring starts at the consumer unit (also known as fuse box, distribution board, or breaker box), visits each socket in turn, and then returns to the consumer unit. The ring is fed from a fuse or circuit breaker in the consumer unit. Ring circuits are commonly used in British wiring with socket-outlets taking fused plugs to BS 1363. Because the breaker rating is much higher than that of any one socket outlet, the system can only be used with fused plugs or fused appliance outlets. They are generally wired with 2.5 mm2 cable and protected by a 30 A fuse, an older 30 A circuit breaker, or a European harmonised 32 A circuit breaker. Sometimes 4 mm2 cable is used if very long cable runs (to help reduce voltage drop) or derating factors such as very thick thermal insulation are involved. 1.5 mm2 mineral-insulated copper-clad cable (known as pyro) may also be used (as mineral insulated cable can withstand heat more effectively than normal PVC) though more care must be taken with regard to voltage drop on longer runs. The protection devices for the fixed wiring need to be rated higher than would protect flexible appliance cords, so BS 1363 requires that all plugs and connection units incorporate fuses appropriate to the appliance cord. History and use The ring circuit and the associated BS 1363 plug and socket system were developed in Britain during 1942–1947. They are commonly used in the United Kingdom and to a lesser extent in the Republic of Ireland. They are also found in the United Arab Emirates, Singapore, Hong Kong, Beijing, Indonesia and many places where the UK had a strong influence, including for example Cyprus and Uganda. Pre-World War II practice was to use various sizes of plugs and sockets to suit the current requirement of the appliance, and these were connected to suitably fused radial circuits; the ratings of those fuses were appropriate to protect both the fixed wiring and the flexible cord attached to the plug. 
The Electrical Installations Committee which was convened in 1942 as part of the Post War Building Studies programme determined, amongst other things, that the ring final circuit offered a more efficient and lower cost system which would safely support a greater number of sockets. The scheme was specified to use 13 A socket-outlets and fused plugs; several designs for the plugs and sockets were considered. The design chosen as the British Standard was the flat pin system now known as BS 1363. Other designs of 13 A fused plugs and socket-outlets, notably the Wylex and Dorman & Smith systems, which did not conform to the chosen standard, were used into the 1950s, but by the 1960s BS 1363 had become the single standard for new installations. The committee mandated the ring circuit both to increase consumer safety and to combat the anticipated post-war copper shortage. The committee estimated that using ring-circuit and single-pole fusing would reduce raw materials requirements by approximately 25% compared with pre-war regulations. The ring circuit is still the most common mains wiring configuration in the UK, although both 20 A and 30 A radial circuits are also permitted by the Wiring Regulations, with a recommendation based on the floor area served (20 A for area up to 25 m2, 30 A for up to 100 m2). Installation rules Rules for ring circuits provide that the cable rating must be no less than two thirds of the rating of the protective device. This means that the risk of sustained overloading of the cable can be considered minimal. In practice, however, it is extremely uncommon to encounter a ring with a protective device other than a 30 A fuse, 30 A breaker, or 32 A breaker, and a cable size other than those mentioned above. Because the BS 1363 plug contains a fuse not exceeding 13A, the load at any one point on the ring is limited. The IET Wiring Regulations (BS 7671) permit an unlimited number of 13A socket outlets (at any point unfused single or double, or any number fused) to be installed on a ring circuit, provided that the floor area served does not exceed 100 m2. In practice, most small and medium houses have one ring circuit per storey, with larger premises having more. An installation designer may determine if additional circuits are required for areas of high demand. For example, it is common practice to put kitchens on their own ring circuit or sometimes a ring circuit shared with a utility room to avoid putting a heavy load at one point on the main downstairs ring circuit. Since any load on a ring is fed by the ring conductors on either side of it, it is desirable to avoid a concentrated load placed very near the consumer unit, since the shorter conductors will have less resistance and carry a disproportionate share of the load. Unfused spurs from a ring wired in the same cable as the ring are allowed to run one socket (single or double) or one fused connection unit (FCU). Before 1970 the use of two single sockets on one spur was allowed, but has since been disallowed because of their conversion to double sockets. Spurs may either start from a socket or be joined to the ring cable with a junction box or other approved method of joining cables. BS 1363 compliant triple and larger sockets are always fused at 13A and therefore can also be placed on a spur. Since 1970 it is permitted to have more spurs than sockets on the ring, but it is considered poor practice by many electricians to have too many unfused spurs in a new installation. 
Where loads other than BS 1363 sockets are connected to a ring circuit or it is desired to place more than one socket for low power equipment on a spur, a BS 1363 fused connection unit (FCU) is used. In the case of fixed appliances this will be a switched fused connection unit (SFCU) to provide a point of isolation for the appliance, but in other cases such as feeding multiple lighting points (putting lighting on a ring though is generally considered bad practice in new installation but is often done when adding lights to an existing property) or multiple sockets, an unswitched one is often preferable. Fixed appliances with a power rating of 3 kW or more (for example, water heaters and some electric cookers) or with a non-trivial power demand for long periods (for example, immersion heaters) may be connected to a ring circuit, but it is strongly recommended that instead they are connected to their own dedicated circuit. However, there are plenty of older installations with such loads on a ring circuit. Advantages Proponents of the ring circuit point out that, when correctly installed, there are also a number of advantages to be considered. Area served For rooms that are square or circular, a ring circuit can deliver more power per unit of floor area for a given cable size than a simple radial circuit, and the source impedance and therefore voltage drop to the furthest point is lower. Alternatively, to deliver the same power to the same building with radial circuits would require more final circuits or a heavier cable. High integrity earthing As all fittings on the ring are earthed from both sides, two independent faults are needed to create an 'off earth' fault. Continuous continuity verification from any point The continuity of each conductor right round all the points on the ring can be verified from any point, and if this needs to be done as part of live installation monitoring, it can be verified by current clamp injection with the system energised. Criticism The ring final circuit concept has been criticized in a number of ways compared to radials, and some of these concerns could explain the lack of widespread adoption outside the United Kingdom. Fault conditions are not apparent when in use Ring circuits may continue to operate without the user being aware of any problem if there are certain types of fault condition or installation errors. This gives both robustness against failure and a potential for danger. Safety tests are complex At least one author claims that testing ring circuits may take 5–6 times longer than testing radial circuits. The installation tests required for the safe operation of a ring circuit are more time-consuming than those for a radial circuit, and DIY installers or electricians qualified in other countries may not be familiar with them. Load balance required Regulation 433-02-04 of BS 7671 requires that the installed load must be distributed around the ring such that no part of the cable exceeds its rated capacity. In some cases this requirement is difficult to guarantee, and may be largely ignored in practice, as loads are often co-located (e.g., washing machine, tumble dryer, dish washer all next to kitchen sink) at a point not necessarily near the centre of the ring. However, the fact that the cable rating is 67% that of the circuit breaker, not 50%, means that a ring has to be significantly out of balance to cause a problem. 
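The load-balance point can be made concrete with a little arithmetic: for uniform cable the resistance of each leg is proportional to its length, so a load connected a fraction d of the way around the ring returns (1 − d) of its current through the near leg and d through the far leg. The short Python sketch below illustrates this; the 32 A protective device matches the figure quoted earlier, while the 27 A cable rating and the function name are placeholders (real current-carrying capacities depend on cable type and installation method and must be taken from the wiring regulations).

# Sketch: current split in a ring final circuit for a single concentrated load.
# Assumes uniform cable, so conductor resistance is proportional to length.

def ring_leg_currents(load_current_a, position_fraction):
    """Split a load current between the two legs of a ring.

    position_fraction -- distance of the load around the ring from the
                         consumer unit, as a fraction of the total ring length
                         (0.5 is the electrical midpoint of the ring).
    """
    near_leg = load_current_a * (1.0 - position_fraction)
    far_leg = load_current_a * position_fraction
    return near_leg, far_leg

BREAKER_RATING_A = 32.0   # protective device, as discussed in the text
CABLE_RATING_A = 27.0     # placeholder rating; depends on cable and installation method

# A full 32 A load at the midpoint splits evenly; the same load placed close to
# the consumer unit loads the near leg disproportionately.
for d in (0.5, 0.2, 0.1):
    near, far = ring_leg_currents(BREAKER_RATING_A, d)
    print(f"d={d:.1f}: near leg {near:.1f} A, far leg {far:.1f} A, "
          f"near leg overloaded: {near > CABLE_RATING_A}")

In this sketch only the extreme case (the load at 10% of the way around the ring) exceeds the placeholder cable rating, which matches the observation above that a ring has to be significantly out of balance before the cable is at risk.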
In a ring circuit, if any poor joint causes a high resistance on one branch of the ring, current will be unevenly distributed, possibly overloading the remaining conductor of the ring. See also Electrical wiring in the United Kingdom References External links Ring Circuit wiring guide Electrical wiring Electricity supply circuits
Ring circuit
[ "Physics", "Engineering" ]
2,151
[ "Electrical systems", "Building engineering", "Physical systems", "Electricity supply circuits", "Electrical engineering", "Electrical wiring" ]
974,383
https://en.wikipedia.org/wiki/Bulkhead%20line
A bulkhead line is an officially set line along a shoreline, usually beyond the dry land, used to demarcate the territory that may be treated as dry land, to separate the jurisdictions of dry land and water authorities for construction and riparian activities, and to establish limits to the allowable obstructions to navigation and other waterfront uses. In particular, it may limit the construction of piers in the absence of an official pierhead line. Various jurisdictions may define it in different ways. A formal definition may read as follows: A geographic line along a reach of navigable water that has been adopted by a municipal ordinance and approved by the Department of Natural Resources, and which allows limited filling between this bulkhead line and the original ordinary high water mark, except where such filling is prohibited by the floodway provisions. (Several municipalities in Wisconsin use wording closely approximating this sample.) References Coastal geography
Bulkhead line
[ "Environmental_science" ]
179
[ "Hydrology", "Hydrology stubs" ]
974,405
https://en.wikipedia.org/wiki/IBM%20Director
IBM Systems Director is an element management system (EMS) (sometimes referred to as a "workgroup management system") first introduced by IBM in 1993 as NetFinity Manager. The software was originally written to run on OS/2 2.0. It has subsequently gone through a number of name changes in the interim. It was changed in 1996 to IBM PC SystemView. Later that same year, it was renamed to TME 10 NetFinity. The following year, it reverted to a slightly altered version of its original name: IBM Netfinity Manager (note the lowercase 'f'). In 1999, IBM announced Netfinity Director; a new product based on Tivoli IT Director. It was intended as a replacement for IBM Netfinity Manager. When IBM renamed its Netfinity line of enterprise servers to xSeries, the name was changed to IBM Director. With the release of version 6.1, the product was renamed from IBM Director to IBM Systems Director. IBM Director consists of 3 components: an agent, a console and a server. To take full advantage of IBM Director's capabilities, the IBM Director Agent must be installed on the monitored system. Inventory and management data are stored in an SQL database (Oracle, SQL Server, IBM DB2 Universal Database or PostgreSQL) which can be separate or on the same server where IBM Director Server resides. Smaller deployments can also utilize Microsoft Jet or MSDE. The server is configured and managed using the IBM Director Console from any Linux or Microsoft Windows workstation. IBM Systems Director has been removed from marketing and is scheduled to reach end of service in April 2018. IBM Director is composed of these major tasks Asset ID BladeCenter Management CIM Browser Command Automation Configure SNMP Agent Data Capture Policy Manager Event Action Plans Event Log External Application Launch File Transfer Hardware Status Inventory JMX Browser Microsoft Cluster Browser Network Configuration Process Management Rack Manager Remote Control Remote Session Retail Peripheral Management (needs Retail Extensions) RMA Software Distribution Resource Monitors Scheduler Server Configuration Manager Service and Support Management SNMP Browser Software Distribution System Accounts Update Manager User Administration Major releases IBM Systems Director 6.3 (out of service) IBM Systems Director 6.2 (out of service) IBM Systems Director 6.1 (out of service) IBM Director 5.20.3 (out of service) IBM Director 5.20.2 (out of service) IBM Director 5.20.1 (out of service) IBM Director 5.20.0 (out of service) IBM Director 5.10.3 (out of service) IBM Director 5.10.2 (out of service) IBM Director 5.10.1 (out of service) IBM Director 5.10.0 (out of service) IBM Director 4.22 (out of service) IBM Director 4.21 (out of service) IBM Director 4.20 (out of service) IBM Director 4.12 (out of service) IBM Director 4.11 (out of service) IBM Director 4.10 (out of service) IBM Director 3.1.1 (out of service) See also IBM Systems Director Console for AIX System Center Operations Manager Oracle Enterprise Manager References IBM Official Director Forums External links IBM Director home page IBM Systems Director download page An Introduction to Using IBM Systems Director to Manage IBM i IBM Systems Director 6.3.x IBM Systems Director Release Notes IBM Software Information Center IBM Systems Director 6.2.x IBM Systems Director Release Notes IBM Software Information Center Director Network management
IBM Director
[ "Engineering" ]
715
[ "Computer networks engineering", "Network management" ]
974,707
https://en.wikipedia.org/wiki/Beggars%20in%20Spain
Beggars in Spain is a 1993 science fiction novel by American writer Nancy Kress. It was originally published as a novella with the same title in Isaac Asimov's Science Fiction Magazine and as a limited edition paperback by Axolotl Press in 1991. Kress expanded it, adding three additional parts to the novel, and eventually two sequels, Beggars and Choosers (1994) and Beggars Ride (1996). It is held to be an important work, and is often hailed for its predictions of emerging technologies and society. The original novella won the Hugo Award and Nebula Award. The novel was also nominated for both the Hugo Award and the Nebula Award, but did not win. Plot introduction Beggars in Spain and its sequels take place in a future where genetic engineering has become a reality, and society and culture face the consequences of genetic modifications (genemods), particularly in the United States. The story revolves around the existence of the "Sleepless": individuals genetically modified to not need sleep, who have greater potential for intelligence and accomplishment than ordinary humans, called "Sleepers". The world of Beggars in Spain is also powered by cold fusion, named "Y-energy" after its pioneer Kenzo Yagai. Yagai also founded "Yagaiism", a moral worldview Kress based on objectivism, in which dignity is solely the product of what a person can achieve through his or her own efforts, and the contract is the basis of society. As a corollary, the weak and unproductive are not owed anything. The novel's title comes from its primary moral question, as presented by character Tony Indivino: what do productive and responsible members of society owe the "beggars in Spain", the unproductive masses who have nothing to offer except need? This is underscored by the rift between the Sleepers and the Sleepless; the Sleepless are superior in mind and body, and easily capable of outperforming their normal cousins. All men are not created equal. Where, then, is the line between equality and excellence? How far should any superior minority hold themselves back for fear of engendering feelings of inadequacy in their inferiors?—especially if this minority is not hated and feared, but rather the elite? This question is explored, but not elaborated on by the novel. Nancy Kress has explained that the book, and the trilogy generally, grapples with the conflicting principles of Ayn Rand on one hand and Ursula K. Le Guin's picture of communist-like community on the other. Plot summary Book I (the original novella) Leisha Camden, born in 2008, is the twenty-first human being to have the genemod for sleeplessness. She is the daughter of one of Yagai's most noted sponsors, financier Roger Camden, who felt he had wasted far too much of his life in sleep, and his wife Elizabeth Camden, an Englishwoman who wanted a normal child. Sleeplessness confers a number of secondary benefits—higher IQ and a sunnier disposition most notably, as well as 50% more productive time (vs the time the unmodified spend asleep); Sleepless not only don't need sleep, they cannot sleep (though they can be knocked unconscious). By the age of fifteen Leisha has become a part of the community of Sleepless, few though there are in the world; she, like all of them, is several grades ahead of her age; the oldest, Kevin Baker, has already become the most wealthy computer software designer since Bill Gates at the age of 16 (in 2020). The first she meets, Richard Keller, becomes her lover; the others become friends and confidants. Two, however, trouble her. 
One is Tony Indivino, whose mother had problems adjusting to his Sleepless ways and forced him to live as a "Sleeper." Tony advocates a banding-together of all Sleepless in a sort of socio-economic fortress. He predicts that the Sleepers will soon begin to discriminate against Sleepless, and is quickly proved right: a Sleepless athlete is barred from the Olympics, for instance, because her 16-hour practice days are impossible for other competitors to compete with. Likewise some cities forbid Sleepless from running "24-hour" convenience stores. Tony is eventually jailed (for illegal actions on behalf of the Sleepless community), though not before attracting the attention of Jennifer Sharifi, the other person who makes Leisha nervous. The Sleepless daughter of a movie star and an Arab oil tycoon, Jennifer's money purchases land in upstate New York (Cattaraugus County, specifically) to create a Sleepless-only community known as Sanctuary. Finally, Leisha faces rocky relations with her twin sister Alice. By sheer chance, Elizabeth conceived a natural daughter at the same time Leisha was implanted in vitro, resulting in fraternal twins, only one of whom is Sleepless. Alice is constantly in her sister's shadow: "Whatever was yours was yours, and whatever wasn't yours was yours, too. That's the way Daddy set it up. The way he hard-wired it into our genes." When Leisha is approaching her bar exams at the age of 22, her father dies of old age. On the drive home from the funeral, Leisha's surrogate mother Susan Melling (not only Roger Camden's second wife, but the genetic researcher who devised Sleeplessness) reports some startling news. Bernie Kuhn, a Sleepless in Seattle, has died due to a road accident at the age of 17. Autopsy reveals every one of his organs is in pristine condition. Evidently Sleeplessness unlocks a heretofore-unknown cell regeneration system. The bottom line is that Sleepless will not physically age. Their estimated lifespan is totally unknown. They might be immortal. Leisha passes her exams, but shortly thereafter she is informed by Richard that various acts of prejudice and violence against Sleepless have culminated in the murder of Tony Indivino by fellow inmates. The Sleepless have no choice but to retreat to Sanctuary. However, Leisha is sent out on one last errand of mercy: a Sleepless child, Stella Bevington, is being abused by her parents. Alice turns the tables by masterminding and almost singlehandedly carrying out the kidnapping, not only saving both Leisha and Stella but proving that even the most privileged and elite can be beggars too. Leisha is left with the revelation that trade is not linear, but rather an ecology, and that today's beggar may be tomorrow's savior. Book II (2051) The book opens on Jordan Watrous, Alice's son (born 2025), an employee at a We-Sleep factory. The "We-Sleep" movement is an attempt by founder Calvin Hawke to rejuvenate working-class pride by buying and selling only products made by Sleepers; despite the fact that the products themselves are often shoddy and over-priced, revenues have been lucrative. He shepherds his aunt Leisha on a tour of the factory; afterwards she meets with Hawke to "rail against stupidity;" since America is founded on the premise that all men should be treated equally, encouraging class hatred will only lead to destruction. Leisha then receives an unusual client at her law firm: a genetic researcher, Dr. Adam Walcott, who claims to have discovered a post-partum gene therapy to turn Sleepers into Sleepless. 
Unfortunately for him, his research has been stolen from the safe-deposit box in which he left it; even worse, his patents have already been filed ... In the name of Sanctuary, Incorporated. Thankfully, the research is incomplete, but evidently Sanctuary is concerned about keeping its edge. Leisha asks Susan Melling to attempt to complete it and determine its legitimacy. Leisha also discovers that Sanctuary Council leader-for-life Jennifer Sharifi has decided to institute a loyalty oath, in which all Sleepless swear to place the needs of Sanctuary above their own. Jennifer has always been convinced of the need to protect her people from the Sleepers, but her husband, Richard Keller, has his own reservations about the paranoid atmosphere his children are being fostered in. He doesn't think his wife is capable of murder, though ... Until Jennifer is indicted for the murder, via sabotage and destruction of his vehicle (a We-Sleep scooter), of Dr. Walcott's primary research partner. The People vs. Jennifer Fatima Sharifi is a circus. Though the sabotage was clearly performed by a Sleepless, a piece of jewelry that serves as Sanctuary's equivalent of a garage-door opener was found on the scene, which no Sleepless would be sloppy enough to leave behind. Meanwhile, Leisha's life is slowly unraveling: Sanctuary has voted in the oath of solidarity and, furthermore, voted to ban Leisha for life; her partner Kevin Baker chooses to take the oath and abandon her; Stella Bevington, the closest thing she has to a daughter, is considering the same; and Susan is dying of an incurable brain condition. Fortunately, Alice comes to save the day, knowing (evidently through twin ESP) that her sister needs her; Stella confesses that the pendant is hers, which was stolen from her at a party; and Susan discovers that Walcott's research is a sham, completely infeasible. With that information, Leisha now knows who has orchestrated the entire campaign: Calvin Hawke. He stole the pendant from Stella at a house-warming party Alice threw; he propagated the research, which he knew to be false, hoping that Sanctuary would react as it did; and, for reasons that remain unspecified, he had Walcott's assistant killed. The volume ends with Leisha on retreat with Susan and Alice, and Jennifer informing her children that she will keep them safe: Sanctuary is moving into space. Book III (2075) In the year of America's tricentennial, all is placid. America has re-stratified itself into a three-tiered society. At the bottom are the "Livers," an under-educated but well-fed 80% of the population who enjoy a life of leisure. Above them (or below them) are the "donkeys," the genemod white-collar force who run the infrastructure and are elected into office by the Livers, earning votes via bread and circuses. Finally, the Sleepless are the source of just about all technological, genetic and scientific advances. Two new faces swiftly turn the tables. One arrives at Leisha's Susan Melling Foundation in New Mexico, a ten-year-old Liver named Drew Arlen. He is intent on enrolling in the Foundation, which (in Leisha's words) "asks beggars why they're beggars and provides funding for those who want to be something else." Drew has charisma and a harmless nature, but runs afoul of Eric Bevington-Watrous, second son of Jordan and Stella; a fistfight between the two leaves Drew paralyzed from the waist down. 
Attending various private schools, Drew finds a flair for artistic expression, but consistently fails or flunks out of each of them; by nineteen he has become a delinquent. Eric forces him into an experimental therapy in which the pathways between the limbic system and the neocortex are strengthened, supposedly forcing the brain to cope with its more primitive, bestial nature. In Drew, the treatment backfires, and he gains access to a sort of genetic collective unconscious, which he perceives in visual terms (in the next book, in which Arlen is a first-person narrator, he constantly describes people, things, concepts and emotions as having shape, texture, color and so on). Drew learns to project these shapes in holographic form and becomes the Lucid Dreamer, a performance artist who places his viewers in a waking dream, the contents of which are determined by the holograms. The other new face is born at Sanctuary Orbital: Miranda Serena Sharifi, the first of the "Superbrights." Her genemods cause her brain to operate at three or four times the speed of a standard Sleepless, at the cost of muscle control (she and all the other Supers, including her brother Tony, twitch, jerk and vibrate with "manic vitality"). Within the first few years of Miri's life, it becomes clear that she and all the other Supers think differently than do normal Sleepless; their thoughts take the form of "strings," which are entire piles of data arranged in geometric shapes and involving analogy and cross-reference. Her growth is set against a Sanctuary becoming even more careful and even more suspicious of the earth-bound Sleeper haters. Five children have been conceived that, through regression to the mean, lack the dominant Sleeplessness gene, and Jennifer is obsessed with declaring Sanctuary independent of America. To that end, Sharifi Enterprises begins research into an airborne, instantly-fatal biological weapon which can be used as a deterrent. Book IV (2091) In 2080 the United States loses its exclusive patents on Y-energy, leading to a massive economic depression. In October 2091, a new sliding-scale tax package is proposed to take advantage of the huge revenues going to Sanctuary Inc. and all associated businesses, which are, after all, incorporated in America. (To be specific, Sanctuary Inc. will be taxed a staggering 92% of total income.) With this in mind, Jennifer and the Sanctuary Council prepare to bid for their independence. Miri starts the volume with a trauma: her beloved younger brother Tony suffers a neural injury in a playground accident. Whatever the total damage to his person and faculties, he will doubtless need to sleep for at least a portion of the day. Miri flies into a rage when Jennifer reminds her of Sanctuary's Yagaiist, community-first philosophy, and must be sedated; when she wakes, she is told that Tony has died of his injuries. Regardless, she and the other Supers band together for defense, recognizing that the Sleepless of Sanctuary have become so nervous of outsiders that even the Supers, created by the community and to serve it, constitute a threat due to their sheer alienness. Miri names the group "the Beggars." Miri's thought-strings—indeed, the thought-strings of every Super—have had structural flaws from the beginning, gaps where information ought to go that they don't have.
Miri fills this gap when she is introduced to one of Drew Arlen's Lucid Dreaming concerts; the ability to tap into their unconscious allows the Supers to make a number of technological, medical and conceptual breakthroughs, including allowing Miri to cure the twitching and stuttering. The Beggars decide to install defensive overrides throughout Sanctuary's systems so that they can take over if necessary, discovering in the process the Sharifi Enterprises bioweapon. Packets of the organism have been secreted in several cities across the United States and it can be deployed at the touch of a button. On 1 January 2092 Sanctuary declares independence from the United States of America. The Internal Revenue Service decides to wait for the taxes to go unpaid on January 15 and then seize the orbital as collateral, but before then Sanctuary demonstrates its bioweapon on a cattle-ranching space station purchased solely for the purpose (all human tenants were evicted prior to the demonstration). The stand-off is averted when Miri and the Beggars use their overrides to force Sanctuary to stand down. Jennifer refuses Miri's offer to surrender in exchange for immunity to the rest of the Council, proving that "all of Sanctuary's political philosophy ... comes down to [Jennifer's] personal needs." The novel ends with Miri and the Superbrights moving to the Susan Melling Foundation complex in New Mexico, and Leisha deciding to act as the counsel for the defense in Jennifer Sharifi's trial. There are, after all, no permanent beggars in Spain. Translations German: Heyne Verlag, Munich, 1997. Russian ("Spanish beggars"): 1997. Spanish: Bolsillo Byblos, 1997. Polish: Prószyński i S-ka, 1996. References External links 1993 science fiction novels Hard science fiction Works by Nancy Kress Biological weapons in popular culture Hugo Award for Best Novella–winning works Nebula Award for Best Novella–winning works Novels about genetic engineering Postcyberpunk novels Works originally published in Asimov's Science Fiction
Beggars in Spain
[ "Biology" ]
3,462
[ "Biological weapons in popular culture", "Biological warfare" ]
974,736
https://en.wikipedia.org/wiki/Cloxacillin
Cloxacillin is an antibiotic useful for the treatment of several bacterial infections. This includes impetigo, cellulitis, pneumonia, septic arthritis, and otitis externa. It is not effective for methicillin-resistant Staphylococcus aureus (MRSA). It can be used by mouth and by injection. Side effects include nausea, diarrhea, and allergic reactions including anaphylaxis. Clostridioides difficile diarrhea may also occur. It is not recommended in people who have previously had a penicillin allergy. Use during pregnancy appears to be relatively safe. Cloxacillin is in the penicillin family of medications. Cloxacillin was patented in 1960 and approved for medical use in 1965. It is on the World Health Organization's List of Essential Medicines. It is not commercially available in the United States. Mechanism of action It is semisynthetic and in the same class as penicillin. Cloxacillin is used against staphylococci that produce beta-lactamase, due to its large R chain, which does not allow the beta-lactamases to bind. This drug has a weaker antibacterial activity than benzylpenicillin, and is devoid of serious toxicity except for allergic reactions. Society and culture Cloxacillin was discovered and developed by Beecham (now GlaxoSmithKline). It is sold under a number of trade names, including Cloxapen, Cloxacap, Tegopen and Orbenin. See also Dicloxacillin Flucloxacillin Nafcillin Oxacillin References External links 2-Chlorophenyl compounds Enantiopure drugs Isoxazoles Penicillins World Health Organization essential medicines Wikipedia medicine articles ready to translate
Cloxacillin
[ "Chemistry" ]
393
[ "Stereochemistry", "Enantiopure drugs" ]
974,761
https://en.wikipedia.org/wiki/Water%20memory
Water memory is the purported ability of water to retain a memory of substances previously dissolved in it even after an arbitrary number of serial dilutions. It has been claimed to be a mechanism by which homeopathic remedies work, even when they are diluted to the point that no molecule of the original substance remains, though no plausible physical mechanism for such an effect has been proposed. Water memory is pseudoscientific in nature; it contradicts the scientific understanding of physical chemistry and is generally not accepted by the scientific community. In 1988, Jacques Benveniste and colleagues published a study supporting a water memory effect amid controversy in Nature, accompanied by an editorial by Nature's editor John Maddox urging readers to "suspend judgement" until the results could be replicated. In the years after publication, multiple supervised experiments were carried out by Benveniste's team, the United States Department of Defense, BBC's Horizon programme, and other researchers, but no one has ever reproduced Benveniste's results under controlled conditions. Benveniste's study Jacques Benveniste was a French immunologist who sought to demonstrate the plausibility of homeopathic remedies "independently of homeopathic interests" in a major scientific journal. To that end, Benveniste and his team at Institut National de la Santé et de la Recherche Médicale (INSERM, French for National Institute of Health and Medical Research) diluted a solution of human antibodies in water to such a degree that there was virtually no possibility that a single molecule of the antibody remained in the water solution. Nonetheless, they reported, human basophils responded to the solutions just as though they had encountered the original antibody (part of the allergic reaction). The effect was reported only when the solution was shaken violently during dilution. Benveniste stated: "It's like agitating a car key in the river, going miles downstream, extracting a few drops of water, and then starting one's car with the water." At the time, Benveniste offered no theoretical explanation for the effect, for which a journalist reporting on the study later coined the term "water memory". Implications While Benveniste's study, if valid, would have suggested a mechanism by which homeopathic remedies could operate, any such mechanism defied the scientific understanding of physical chemistry. A paper about hydrogen bond dynamics is mentioned by some secondary sources in connection with the implausibility of water memory. Publication in Nature Benveniste submitted his research to the prominent science journal Nature for publication. There was concern on the part of Nature's editorial oversight board that the material, if published, would lend credibility to homeopathic practitioners even if the effects were not replicable. There was equal concern that the research was simply wrong, given the changes that it would demand of the known laws of physics and chemistry. The editor of Nature, John Maddox, stated that, "Our minds were not so much closed as unready to change our whole view of how science is constructed." Rejecting the paper on any objective grounds was deemed unsupportable, as there were no methodological flaws apparent at the time. In the end, a compromise was reached. The paper was published in Nature Vol.
333 on 30 June 1988, but it was accompanied by an editorial by Maddox that noted "There are good and particular reasons why prudent people should, for the time being, suspend judgement" and described some of the fundamental laws of chemistry and physics which it would violate, if shown to be true. Additionally, Maddox demanded that the experiments be re-run under the supervision of a hand-picked group of what became known as "ghostbusters", including Maddox, famed magician and paranormal researcher James Randi, and Walter W. Stewart, a chemist and freelance debunker at the U.S. National Institutes of Health. Post-publication supervised experiments Under supervision of Maddox and his team, Benveniste and his team of researchers followed the original study's procedure and produced results similar to those of the first published data. Maddox, however, noted that during the procedure, the experimenters were aware of which test tubes originally contained the antibodies and which did not. Benveniste's team then started a second, blinded experimental series with Maddox and his team in charge of the double-blinding: notebooks were photographed, the lab videotaped, and vials juggled and secretly coded. Randi even went so far as to wrap the labels in newspaper, seal them in an envelope, and then stick them on the ceiling. This was done so that Benveniste and his team could not read them. The blinded experimental series showed no water memory effect. Maddox's team published a report on the supervised experiments in the next issue (July 1988) of Nature. Maddox's team concluded "that there is no substantial basis for the claim that anti-IgE at high dilution (by factors as great as 10^120) retains its biological effectiveness, and that the hypothesis that water can be imprinted with the memory of past solutes is as unnecessary as it is fanciful." Maddox's team initially speculated that someone in the lab "was playing a trick on Benveniste", but later concluded that, "We believe the laboratory has fostered and then cherished a delusion about the interpretation of its data." Maddox also pointed out that two of Benveniste's researchers were being paid by the French homeopathic company Boiron. Aftermath In a response letter published in the same July issue of Nature, Benveniste lashed out at Maddox and complained about the "ordeal" that he had endured at the hands of the Nature team, comparing it to "Salem witchhunts or McCarthy-like prosecutions". Both in the Nature response and during a later episode of Quirks and Quarks, Benveniste especially complained about Stewart, who he claimed acted as if they were all frauds and treated them with disdain, complaining about his "typical know-it-all attitude". In his Nature letter, Benveniste also implied that Randi was attempting to hoodwink the experimental run by doing magic tricks, "distracting the technician in charge of its supervision!" He was more apologetic on Quirks and Quarks, re-phrasing his mention of Randi to imply that he had kept the team amused with his tricks and that his presence was generally welcomed. He also pointed out that although it was true two of his team members were being paid by a homeopathic company, the same company had paid Maddox's team's hotel bill. Maddox was unapologetic, stating "I'm sorry we didn't find something more interesting."
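The scale of the dilution factor cited above is the heart of the implausibility argument. As a minimal sketch (assuming, purely for illustration, a 1 millimolar starting solution and a 1 mL sample; neither figure comes from the Nature paper), the expected number of antibody molecules surviving a given number of tenfold dilutions can be estimated in a few lines of Python:

# Rough illustration: expected solute molecules left after repeated tenfold dilutions.
AVOGADRO = 6.022e23          # molecules per mole
start_concentration = 1e-3   # mol/L, assumed starting concentration (hypothetical)
sample_volume = 1e-3         # litres (1 mL), assumed sample size (hypothetical)

start_molecules = start_concentration * sample_volume * AVOGADRO  # roughly 6e17 molecules

for tenfold_dilutions in (10, 20, 30, 60, 120):
    expected = start_molecules / 10 ** tenfold_dilutions
    print(f"after a 10^{tenfold_dilutions} dilution: ~{expected:.3g} molecules expected")

Once the expected count falls far below one, essentially every sample at that dilution contains no antibody molecules at all, which is why any residual biological activity would require a mechanism outside standard physical chemistry.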
On the same Quirks and Quarks show, he dismissed Benveniste's complaints, stating that, because of the possibility that the results would be unduly promoted by the homeopathy community, an immediate re-test was necessary. The failure of the tests demonstrated that the initial results were likely due to the experimenter effect. He also pointed out that the entire test procedure, that Benveniste later complained about, was one that had been agreed upon in advance by all parties. It was only after the test had failed that Benveniste disputed its appropriateness. The debate continued in the letters section of Nature for several issues before being ended by the editorial board. It continued in the French press for some time, and in September Benveniste appeared on the British television discussion programme After Dark to debate the events live with Randi and others. In spite of all the arguing over the retests, it had done nothing to stop what Maddox worried about: even in light of the tests' failure, they were still being used to claim that the experiments "prove" that homeopathy works. One of Benveniste's co-authors on the Nature paper, Francis Beauvais, later stated that while unblinded experimental trials usually yielded "correct" results (i.e. ultradiluted samples were biologically active, controls were not), "the results of blinded samples were almost always at random and did not fit the expected results: some 'controls' were active and some 'active' samples were without effect on the biological system." Subsequent research In the cold fusion or polywater controversies, many scientists started replications immediately, because the underlying theories did not go directly against scientific fundamental principles and could be accommodated with a few tweaks to those principles. But Benveniste's experiment went directly against several principles, causing most researchers to outright reject the results as errors or fabrication, with only a few researchers willing to perform replications or experiments that could validate or reject his hypotheses. After the Nature controversy, Benveniste gained the public support of Brian Josephson, a Nobel laureate physicist with a reputation for openness to paranormal claims. Experiments continued along the same basic lines, culminating with a 1997 paper claiming the effect could be transmitted over phone lines. This was followed by two additional papers in 1999 and another from 2000, in the controversial non-peer reviewed Medical Hypotheses, on remote-transmission, by which time it was claimed that it could also be sent over the Internet. Time magazine reported in 1999 that, in response to skepticism from physicist Robert Park, Josephson had challenged the American Physical Society (APS) to oversee a replication by Benveniste. This challenge was to be "a randomized double-blind test", of his claimed ability to transfer the characteristics of homeopathically altered solutions over the Internet:[Benveniste's] latest theory, and the cause of the current flap, is that the "memory" of water in a homeopathic solution has an electromagnetic "signature." This signature, he says, can be captured by a copper coil, digitized and transmitted by wire—or, for extra flourish, over the Internet—to a container of ordinary water, converting it to a homeopathic solution.The APS accepted the challenge and offered to cover the costs of the test. 
When he heard of this, Randi offered to throw in the long-standing $1 million prize for any positive demonstration of the paranormal, to which Benveniste replied "Fine to us" in his DigiBio NewsLetter. Randi later noted that Benveniste and Josephson did not follow up on their challenge, mocking their silence on the topic as if they were missing persons. An independent test of the 2000 remote-transmission experiment was carried out in the US by a team funded by the United States Department of Defense. Using the same experimental devices and setup as the Benveniste team, they failed to find any effect when running the experiment. Several "positive" results were noted, but only when a particular one of Benveniste's researchers was running the equipment. "We did not observe systematic influences such as pipetting differences, contamination, or violations in blinding or randomization that would explain these effects from the Benveniste investigator. However, our observations do not exclude these possibilities." Benveniste admitted to having noticed this himself. "He stated that certain individuals consistently get digital effects and other individuals get no effects or block those effects." Third-party attempts to replicate the Benveniste experiment have so far failed to produce positive results. In 1993, Nature published a paper describing a number of follow-up experiments that failed to find a similar effect, and an independent study published in Experientia in 1992 showed no effect. An international team led by Madeleine Ennis of Queen's University of Belfast claimed in 1999 to have replicated the Benveniste results. Randi then forwarded the $1 million challenge to the BBC Horizon program to prove the "water memory" theory following Ennis's experimental procedure. In response, experiments were conducted with the vice-president of the Royal Society, John Enderby, overseeing the proceedings. The challenge ended with no memory effect observed by the Horizon team. For a piece on homeopathy, the ABC program 20/20 also attempted, unsuccessfully, to reproduce Ennis's results. Ennis has claimed that these tests did not follow her own experiment protocols. Other scientists In 2003, Louis Rey, a chemist from Lausanne, reported that frozen samples of lithium and sodium chloride solutions prepared according to homeopathic prescriptions showed – after being exposed to radiation – different thermoluminescence peaks compared with pure water. Rey claimed that this suggested that the networks of hydrogen bonds in homeopathic dilutions were different. These results have never been replicated and are not generally accepted; even Benveniste criticised them, pointing out that they were not blinded. In January 2009, Luc Montagnier, the Nobel Laureate virologist who led the team that discovered the human immunodeficiency virus (HIV), claimed (in a paper published in a journal that he set up, which seems to have avoided conventional peer review as it was accepted three days after submission) that the DNA of pathogenic bacteria and viruses massively diluted in water emits radio waves that he could detect. The device used to detect these signals was developed by Jacques Benveniste, and was independently tested, with the co-operation of the Benveniste team, at the request of the United States Defense Advanced Research Projects Agency. That investigation was unable to replicate any effects of digital signals using the device.
In 2010, at the age of 78, Montagnier announced that he would take on the leadership of a new research institute at Jiaotong University in Shanghai, where he plans to continue this work. He claims that the findings "are very reproducible and we are waiting for confirmation by other labs", but said, in an interview with Science, "There is a kind of fear around this topic in Europe. I am told that some people have reproduced Benveniste's results, but they are afraid to publish it because of the intellectual terror from people who don't understand it." Montagnier had called Benveniste "a modern Galileo", but the problem was that "his results weren't 100% reproducible". Homeopathic coverage To most scientists, the "memory of water" is not something that deserves serious consideration; the only evidence is the flawed Benveniste work. By contrast, the notion of "memory of water" has been taken seriously among homeopaths. For them, it seemed to explain how some of their remedies might work. An overview of the issues surrounding the memory of water was the subject of a special issue of Homeopathy. In an editorial, the editor of Homeopathy, Peter Fisher, acknowledged that Benveniste's original method does not yield reproducible results and declared "...the memory of water is a bad memory: it casts a long shadow over homeopathy and is just about all that many scientists recall about the scientific investigation of homeopathy, equating it with poor or even fraudulent science." The issue was an attempt to restore some credibility to the notion with articles proposing various, very different theories of water memory, such as electromagnetic exchange of information between molecules, breaking of temporal symmetry, thermoluminescence, entanglement described by a new quantum theory, formation of hydrogen peroxide, clathrate formation, etc. Some of the proposed mechanisms would require overthrowing much of 20th-century physics. See also Hexagonal water DNA teleportation List of experimental errors and frauds in physics List of topics characterized as pseudoscience Pathological science Pseudoscience Scientific misconduct Masaru Emoto Homeopathic dilutions References Homeopathy Pseudoscience Water chemistry controversies
Water memory
[ "Chemistry" ]
3,224
[ "Water chemistry controversies" ]
974,908
https://en.wikipedia.org/wiki/Messier%2096
Messier 96 (also known as M96 or NGC 3368) is an intermediate spiral galaxy about 31 million light-years away in the constellation Leo. Observational history and appearance It was discovered by French astronomer Pierre Méchain in 1781. After Méchain communicated his finding, French astronomer Charles Messier confirmed it four days later and added it to his catalogue of nebulous objects. Finding this object is difficult with large binoculars. Ideal minimum resolution, in a good sky, is via a telescope of aperture, to reveal its halo with a brighter core region. This complex galaxy is inclined by an angle of about 53° to the line of sight from the Earth, which is oriented at a position angle of 172°. Properties It is categorized as a double-barred spiral galaxy with a small inner bulge through the core along with an outer bulge. The nucleus displays a weak level of activity of the LINER2 type. Variations in ultraviolet emission from the core suggest the presence of a supermassive black hole. Estimates for the mass of this object range from to solar masses (). On May 9, 1998, a supernova was discovered in this galaxy by Mirko Villi. Designated SN 1998bu, this was a Type Ia supernova explosion. It reached maximum brightness on May 21 at about magnitude 11.6, then steadily faded. Observations of the ejecta a year later showed creation of 0.4 solar masses of iron. The spectrum of the supernova remnant also confirmed the presence of radioactive 56Co, which decays into 56Fe. Messier 96 is about the same mass and size as the Milky Way. It is a very asymmetric galaxy; its dust and gas are unevenly spread throughout its weak spiral arms, and its core is just offset from the midpoint of its extremes. Its arms are also asymmetrical, thought to have been influenced by the gravitational pull of other galaxies within its group. Messier 96 is being studied as part of a survey of 50 nearby galaxies known as the Legacy ExtraGalactic UV Survey (LEGUS), providing an unprecedented view of star formation within the local universe. M96 group M96 is the brightest galaxy within the M96 Group, a group of galaxies in Leo, the other Messier objects of which are M95 and M105. The group also includes at least nine other galaxies. This is the nearest group to the Local Group to combine bright spirals and a bright elliptical galaxy (Messier 105). See also List of Messier objects References External links NOAO: M96 SEDS: Spiral Galaxy M96 Intermediate spiral galaxies Messier 096 Messier 096 096 Messier 096 05882 32192 17810320 Discoveries by Pierre Méchain
Messier 96
[ "Astronomy" ]
564
[ "Leo (constellation)", "Constellations" ]
974,931
https://en.wikipedia.org/wiki/Owl%20Nebula
The Owl Nebula (also known as Messier 97, M97 or NGC 3587) is a planetary nebula approximately 2,030 light years away in the constellation Ursa Major. Estimated to be about 8,000 years old, it is approximately circular in cross-section with a faint internal structure. It was formed from the outflow of material from the stellar wind of the central star as it evolved along the asymptotic giant branch. The nebula is arranged in three concentric shells, with the outermost shell being about 20–30% larger than the inner shell. The owl-like appearance of the nebula is the result of an inner shell that is not circularly symmetric, but instead forms a barrel-like structure aligned at an angle of 45° to the line of sight. The nebula holds about 0.13 solar masses () of matter, including hydrogen, helium, nitrogen, oxygen, and sulfur; all with a density of less than 100 particles per cubic centimeter. Its outer radius is around and it is expanding with velocities in the range of 27–39 km/s into the surrounding interstellar medium. The 14th magnitude central star has passed the turning point in its evolution and is condensing to form a white dwarf. It has 55–60% of solar mass, is 41 to 148 times solar luminosity (), and has an effective temperature of 123,000 K. The star has been successfully resolved by the Spitzer Space Telescope as a point source that does not show the infrared excess characteristic of a circumstellar disk. History The Owl Nebula was discovered by French astronomer Pierre Méchain on February 16, 1781. Pierre Méchain was Charles Messier's observing colleague, and the nebula was observed by Messier himself a few weeks following the initial sighting. Thus, the object was named Messier 97, and included in his catalog on March 24, 1781. Of the object, he noted:Nebula in the great Bear, near Beta: It is difficult to see, reports M. Méchain, especially when one illuminates the micrometer wires: its light is faint, without a star. M. Méchain saw it the first time on Feb 16, 1781, & the position is that given by him. Near this nebula he has seen another one, [the position of] which has not yet been determined [Messier 108], and also a third which is near Gamma of the Great Bear [Messier 109]. (diam. 2′).In 1844, Admiral William H. Smyth classified the object as a planetary nebula. When William Parsons, 3rd Earl of Rosse, observed the nebula in Ireland in 1848, his hand-drawn illustration resembled an owl's head. In his notes, the object was described as "Two stars considerably apart in the central region, dark penumbra round each spiral arrangement, with stars as apparent centres of attraction. Stars sparkling in it; resolvable." It has been known as the Owl Nebula ever since. More recent developments in the late 1900s include the discovery of a giant red halo of wind extended around its inner shells, and the mapping of the nebula's structure. Observing Although the Owl Nebula can not be seen with the naked eye, a faint image of it can be observed under remarkably good conditions with a small telescope or 20×80 binoculars. To make out the nebula's more distinctive owl like eye features, a telescope with an aperture 10" or better is required. To locate the nebula in the night sky, look to the southwest corner of the Big Dipper's bowl, marked by the star Beta Ursae Majoris. 
From there, M97 lies just over 2.5 degrees to the southeast, towards the star positioned opposite Beta Ursae Majoris in the other bottom corner of the Big Dipper's bowl, Gamma Ursae Majoris, which marks the constellation's southwest corner. M97, together with Alpha Ursae Majoris, points the way to Polaris. Gallery See also Messier object List of Messier objects New General Catalogue List of planetary nebulae References External links The Owl Nebula @ SEDS Messier pages The Owl Nebula at Calar Alto Observatory NightSkyInfo.com – M97, the Owl Nebula The Owl Nebula (M97) at Constellation Guide Messier 97: Owl Nebula at Messier Objects Messier objects NGC objects Planetary nebulae Ursa Major 17810216 Discoveries by Pierre Méchain
Owl Nebula
[ "Astronomy" ]
924
[ "Ursa Major", "Constellations" ]
974,942
https://en.wikipedia.org/wiki/Messier%2098
Messier 98, M98 or NGC 4192, is an intermediate spiral galaxy about 44.4 million light-years away in the slightly northerly constellation of Coma Berenices, about 6° to the east of the bright star Denebola (Beta Leonis). It was discovered by French astronomer Pierre Méchain in 1781, along with nearby M99 and M100, and was catalogued by his compatriot Charles Messier 29 days later in his catalogue. It has a blueshift: ignoring its other motion (its vectors of proper motion), it is approaching at about 140 km/s. The morphological classification of this galaxy is SAB(s)ab, which indicates it is a spiral galaxy that displays mixed barred and non-barred features with intermediate to tightly wound arms and no ring. It is highly inclined to the line of sight at an angle of 74° and has a maximum rotation velocity of 236 km/s. The combined mass of the stars in this galaxy is an estimated 76 billion () times the mass of the Sun. It contains about 4.3 billion solar masses of neutral hydrogen and 85 million solar masses in dust. The nucleus is active, displaying characteristics of a "transition" type object. That is, it shows properties of a LINER-type galaxy intermixed with an H II region around the nucleus. Messier 98 is a member of the Virgo Cluster, which is a large cluster of galaxies, part of the local supercluster. About 750 million years ago, it may have interacted with the large spiral galaxy Messier 99. These are now separated by . See also List of Messier objects Messier 86, another blueshifted galaxy References External links Spiral Galaxy M98 @ SEDS Messier pages Messier Object 98 Intermediate spiral galaxies Messier 098 Messier 098 098 Messier 098 07231 39028 17810413 Discoveries by Pierre Méchain
Messier 98
[ "Astronomy" ]
400
[ "Coma Berenices", "Constellations" ]
974,952
https://en.wikipedia.org/wiki/Messier%2099
Messier 99 or M99, also known as NGC 4254 or St. Catherine's Wheel, is a grand design spiral galaxy in the northern constellation Coma Berenices approximately from the Milky Way. It was discovered by Pierre Méchain on 17 March 1781. The discovery was then reported to Charles Messier, who included the object in the Messier Catalogue of comet-like objects. It was one of the first galaxies in which a spiral pattern was seen. This pattern was first identified by Lord Rosse in the spring of 1846. This galaxy has a morphological classification of SA(s)c, indicating a pure spiral shape with loosely wound arms. It has a peculiar shape with one normal looking arm and an extended arm that is less tightly wound. The galaxy is inclined by 42° to the line-of-sight with a major axis position angle of 68°. A bridge of neutral hydrogen gas links NGC 4254 with VIRGOHI21, an HI region and a possible dark galaxy. The gravity from the latter may have distorted M99 and drawn out the gas bridge, as the two galaxy-sized objects may have had a close encounter before parting greatly. However, VIRGOHI21 may instead be tidal debris from an interaction with the lenticular galaxy NGC 4262 some 280 million years ago. It is expected that the drawn out arm will relax to match the normal arm once the encounter is over. While not classified as a starburst galaxy, M99 has a star formation activity three times larger than other galaxies of similar Hubble type that may have been triggered by the encounter. M99 is likely entering the Virgo Cluster for the first time bound to the periphery of the cluster at a projected separation of 3.7°, or around one megaparsec, from the cluster center at Messier 87. The galaxy is undergoing ram-pressure stripping of much of its interstellar medium as it moves through the intracluster medium. Supernovae Four supernovae have been observed in M99: SN 1967H (type II, mag. 14.0) was discovered by Fritz Zwicky on 1 July 1967. SN 1972Q (type unknown, mag. 15.8) was discovered by Leonida Rosino on 14 December 1972. SN 1986I (type II, mag. 14) was discovered by Carlton Pennypacker et al. on 17 May 1986. SN 2014L (type Ic, mag. 17.2) was discovered by the THU-NAOC Transient Survey (TNTS) on 26 January 2014. See also List of Messier objects Messier 83 – a similar face-on spiral galaxy Pinwheel Galaxy – a similar face-on spiral galaxy References External links SEDS: Spiral Galaxy M99 UniverseToday: Dark Matter Galaxy? PPARC: New evidence for a Dark Matter Galaxy Unbarred spiral galaxies Messier 099 Messier 099 099 Messier 099 07345 39578 17810317 Discoveries by Pierre Méchain
Messier 99
[ "Astronomy" ]
624
[ "Coma Berenices", "Constellations" ]
974,971
https://en.wikipedia.org/wiki/Exterior%20Gateway%20Protocol
The Exterior Gateway Protocol (EGP) was a routing protocol used to connect different autonomous systems on the Internet from the mid-1980s until the mid-1990s, when it was replaced by Border Gateway Protocol (BGP). History EGP was developed by Bolt, Beranek and Newman in the early 1980s. It was first described in RFC 827 and formally specified in RFC 904. RFC 1772 outlined a migration path from EGP to BGP. References See also Interior gateway protocol Internet protocols Internet Standards Routing protocols
Exterior Gateway Protocol
[ "Technology" ]
113
[ "Computing stubs", "Computer network stubs" ]
974,980
https://en.wikipedia.org/wiki/Messier%2093
Messier 93 or M93, also known as NGC 2447 or the Critter Cluster, is an open cluster in the modestly southern constellation Puppis, the imagined poop deck of the legendary Argo. Observational history and appearance It was discovered by Charles Messier, who then added it to his catalogue of comet-like objects in 1781. Caroline Herschel, the younger sister of William Herschel, independently discovered it in 1783, thinking it had not yet been catalogued by Messier. Walter Scott Houston (died 1993) described its appearance: Some observers mention the cluster as having the shape of a starfish. With a fair-sized telescope, this is its appearance on a dull night, but [a four-inch refractor] shows it as a typical star-studded galactic cluster. Properties It has a Trumpler class of , indicating it is strongly concentrated (I) with a large range in brightness (3) and is rich in stars (r). M93 is about 3,380 light-years from the Sun and has a spatial radius of 10 light-years, a tidal radius of , and a core radius of . Its age is estimated at 387.3 million years. It is nearly on the galactic plane and has an orbit that varies between from the Galactic Center over a period of Myr. Fifty-four variable stars have been found in M93, including one slowly pulsating B-type star, one rotating ellipsoidal variable, seven Delta Scuti variables, six Gamma Doradus variables, and one hybrid δ Sct/γ Dor pulsator. Four spectroscopic binary systems within the cluster include a yellow straggler component. Gallery See also List of Messier objects References External links SEDS: Open Star Cluster M93 Messier 093 Orion–Cygnus Arm Messier 093 093 Messier 093 Astronomical objects discovered in 1781 Discoveries by Charles Messier
Messier 93
[ "Astronomy" ]
398
[ "Puppis", "Constellations" ]
975,003
https://en.wikipedia.org/wiki/Messier%20105
Messier 105 or M105, also known as NGC 3379, is an elliptical galaxy 36.6 million light-years away in the equatorial constellation of Leo. It is the biggest elliptical galaxy in the Messier catalogue that is not in the Virgo cluster. It was discovered by Pierre Méchain in 1781, just a few days after he discovered the nearby galaxies Messier 95 and Messier 96. This galaxy is one of a few objects that Messier himself never verified, so it was omitted from the editions of his catalogue published in his era. It was appended to the catalogue after Helen S. Hogg found a letter by Méchain locating and describing the object, details which matched the object first published under the name NGC 3379. It has a morphological classification of E1, indicating a standard elliptical galaxy with a flattening of 10%. The major axis is aligned along a position angle of 71°. Isophotes of the galaxy are near perfect ellipses, twisting no more than 5° out of alignment, with changes in ellipticity of no more than 0.06. There is no fine structure apparent in the isophotes, such as ripples. Observations of giant stars in the halo indicate there are two general populations: a dominant metal-rich subpopulation and a weaker metal-poor group. Messier 105 is known to have a supermassive black hole at its core whose mass is estimated to be between and . The galaxy has a weak active galactic nucleus of the LINER type with a spectral class of L2/T2, meaning no broad Hα line and intermediate emission line ratios between a LINER and a H II region. The galaxy also contains a few young stars and stellar clusters, suggesting some elliptical galaxies still form new stars, but very slowly. This galaxy, along with its companion the barred lenticular galaxy NGC 3384, is surrounded by an enormous ring of neutral hydrogen with a radius of and a mass of where star formation has been detected. Messier 105 is one of several galaxies within the M96 Group (also known as the Leo I Group), a group of galaxies in the constellation Leo, the other Messier objects of which are M95 and M96. It is one of the richest groups of galaxies in the Local Volume, and unlike the Local Group, it is dominated by not one but several galaxies. See also List of Messier objects References and footnotes External links "StarDate: M105 Fact Sheet" SEDS: Elliptical Galaxy M105 ESA/Hubble image of M105 Elliptical galaxies M96 Group Leo (constellation) 105 Messier 105 05902 32256 17810324 Discoveries by Pierre Méchain
Messier 105
[ "Astronomy" ]
550
[ "Leo (constellation)", "Constellations" ]
975,020
https://en.wikipedia.org/wiki/Messier%20106
Messier 106 (also known as NGC 4258) is an intermediate spiral galaxy in the constellation Canes Venatici. It was discovered by Pierre Méchain in 1781. M106 is at a distance of about 22 to 25 million light-years away from Earth. M106 contains an active nucleus classified as a Type 2 Seyfert, and the presence of a central supermassive black hole has been demonstrated from radio-wavelength observations of the rotation of a disk of molecular gas orbiting within the inner light-year around the black hole. NGC 4217 is a possible companion galaxy of Messier 106. Besides the two visible arms, it has two "anomalous arms" detectable using an X-ray telescope. Characteristics M106 has a water vapor megamaser (the equivalent of a laser operating in microwave instead of visible light and on a galactic scale) that is seen by the 22-GHz line of ortho-H2O that evidences dense and warm molecular gas. Water masers are useful for observing nuclear accretion disks in active galaxies. The water masers in M106 enabled the first case of a direct measurement of the distance to a galaxy, thereby providing an independent anchor for the cosmic distance ladder. M106 has a slightly warped, thin, almost edge-on Keplerian disc which is on a subparsec scale. It surrounds a central area with mass . It is one of the largest and brightest nearby galaxies, similar in size and luminosity to the Andromeda Galaxy. The supermassive black hole at the core has a mass of . M106 has also played an important role in calibrating the cosmic distance ladder. Before, Cepheid variables from other galaxies could not be used to measure distances since they cover ranges of metallicities different from the Milky Way's. M106 contains Cepheid variables similar to both the metallicities of the Milky Way and other galaxies' Cepheids. By measuring the distance of the Cepheids with metallicities similar to our galaxy, astronomers are able to recalibrate the other Cepheids with different metallicities, a key fundamental step in improving quantification of distances to other galaxies in the universe. Supernovae Two supernovae have been observed in M106: SN 1981K (type II, mag. 17) was reported by E. Hummel and verified by Paul Wild by examining archival photos dated 3 November 1981. SN 2014bc (type II, mag. 14.8) was discovered by the PS1 Science Consortium 3Pi survey on 19 May 2014. See also List of Messier objects Canes II Group References External links StarDate: M106 Fact Sheet Spiral Galaxy M106 at SEDS Messier pages NGC 4258: Mysterious Arms Revealed Spiral Galaxy Messier 106 (NGC 4258) at the astro-photography site of Takayuki Yoshida Messier 106 at Constellation Guide Intermediate spiral galaxies Seyfert galaxies Canes II Group Canes Venatici 106 NGC objects 07353 39600 Astronomical objects discovered in 1781 Discoveries by Pierre Méchain
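The Cepheid recalibration described above rests on standard relations rather than anything unique to M106. As a sketch (textbook formulae, not drawn from the M106 papers themselves), the geometric maser distance fixes the distance modulus of the galaxy's Cepheids, which in turn pins down the zero point of the period–luminosity law:

% Distance modulus: apparent magnitude m, absolute magnitude M, distance d in parsecs
\[ m - M = 5 \log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right) \]
% Generic Cepheid period--luminosity relation with slope a and zero point b;
% a geometric distance to M106 lets b be determined from the observed m and period P of its Cepheids
\[ M = a \log_{10} P + b \]

With the zero point anchored this way, the same relation can be applied to Cepheids of differing metallicity in other galaxies, which is the sense in which M106 helps calibrate the cosmic distance ladder.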
Messier 106
[ "Astronomy" ]
647
[ "Canes Venatici", "Constellations" ]
975,035
https://en.wikipedia.org/wiki/Messier%20108
Messier 108 (also known as NGC 3556, nicknamed the Surfboard Galaxy) is a barred spiral galaxy about 46 million light-years away from Earth in the northern constellation Ursa Major. It was discovered by Pierre Méchain in 1781 or 1782. From the Earth, this galaxy is seen almost edge-on. This galaxy is an isolated member of the Ursa Major Cluster of galaxies in the local supercluster. It has a morphological classification of type SBbc in the de Vaucouleurs system, which means it is a barred spiral galaxy with somewhat loosely wound arms. The maximum angular size of the galaxy in the optical band is 11′.1 × 4′.6, and it is inclined 75° to the line of sight. This galaxy has an estimated mass of 125 billion solar masses () and bears about 290 ± 80 globular clusters. Examination of the distribution of neutral hydrogen in this galaxy shows discrete shells of expanding gas extending for several kiloparsecs, known as H1 supershells. These may be driven by currents of dark matter, dust and gas contributing to large star formation, having caused supernovae explosions. Alternatively they may result from an infall from the intergalactic medium or arise from radio jets. Observations with the Chandra X-ray Observatory have identified 83 X-ray sources, including a source at the nucleus. The brightest of these is consistent with an intermediate-mass black hole accreting matter. The galaxy is also emitting a diffuse soft X-ray radiation within 2.6 arcminutes of the optical galaxy. The spectrum of the source at the core is consistent with an active galactic nucleus, but an examination with the Spitzer Space Telescope showed no indication of activity. The supermassive black hole at the core has an estimated mass of 24 million solar masses (). Supernovae Three supernovae have been observed in M108: SN 1969B (type unknown, mag. 16) was discovered by Paul Wild on 6 February 1969. It reached a brightness of mag. 13.9. SPIRITS 16tn was discovered by the Spitzer Space Telescope in August 2016. The supernova was only visible in infrared light, because it was heavily obscured by dust. Its extinction was estimated to be 8–9 mag, making it one of the most heavily obscured supernovae ever observed. SN 2023dbc (Type Ic, mag. 17) was discovered by the Zwicky Transient Facility on 13 March 2023. 2023dbc is likely a stripped-envelope supernova as there is no evidence for hydrogen in these spectra beyond narrow emission associated with the underlying HII region. It is among the nearest type Ic supernovae discovered to date. See also List of Messier objects NGC 2403 - a similar spiral galaxy NGC 4631 - a similar spiral galaxy NGC 7793 - a similar spiral galaxy Notes References External links SEDS: Spiral Galaxy M108 Barred spiral galaxies Ursa Major 108 Messier 108 06225 34030 Astronomical objects discovered in 1781 Discoveries by Pierre Méchain Ursa Major Cluster
Messier 108
[ "Astronomy" ]
634
[ "Ursa Major", "Constellations" ]
975,051
https://en.wikipedia.org/wiki/Messier%20109
Messier 109 (also known as NGC 3992 or the Vacuum Cleaner Galaxy) is a barred spiral galaxy exhibiting a weak inner ring structure around the central bar approximately away in the northern constellation Ursa Major. M109 can be seen south-east of the star Phecda (γ UMa, Gamma Ursae Majoris). History Messier 109 was discovered by Pierre Méchain in 1781. Two years later Charles Messier catalogued the object as an addendum to his publication. From the 1920s through the 1950s, Messier objects numbered above 103 were considered unofficial, but the later additions, further objects reported by Méchain, became more widely accepted. David H. Levy mentions the modern 110-object catalog while Sir Patrick Moore places the limit at 104 objects but has M105 to 109 listed as addenda. By the late 1970s all 110 objects were commonly used among astronomers, and they remain so. General information This galaxy is the most distant object in the Messier Catalog, followed by M91. M109 has three satellite galaxies (UGC 6923, UGC 6940 and UGC 6969) and possibly more. Detailed hydrogen line observations have been obtained from M109 and its satellites. M109's H I (H one) distribution is regular with a low-level radial extension outside the stellar disc, while within the bar there is a central hole in the H I gas distribution. Possibly the gas has been transported inwards by the bar, and because of the emptiness of the hole no large accretion events can have happened in the recent past. M109 is the brightest galaxy in the M109 Group, a large group of galaxies in the constellation Ursa Major that may number over 50. Supernova One supernova has been observed in M109: SN 1956A (type Ia, mag. 12.3) was discovered by H. S. Gates on 8 March 1956, using the 18-inch Schmidt telescope at the Palomar Observatory. It was located 67" east and 9" south of the center of the galaxy. Gallery See also List of Messier objects NGC 1300 – a similar barred spiral galaxy References External links M109 Barred spiral galaxies M109 Group Ursa Major 109 NGC objects 06937 Astronomical objects discovered in 1781 037617 +09-20-044 11549+5339 Discoveries by Pierre Méchain
Messier 109
[ "Astronomy" ]
499
[ "Ursa Major", "Constellations" ]
975,069
https://en.wikipedia.org/wiki/Messier%20110
Messier 110, or M110, also known as NGC 205, is a dwarf elliptical galaxy that is a satellite of the Andromeda Galaxy in the Local Group. Early observational history Charles Messier never included the galaxy in his list, but it was depicted by him, together with M32, on his drawing of "Nébuleuse D'Andromède", later known as the Andromeda Galaxy. A label of the drawing indicates that Messier first saw the object in 1773. M110 was independently discovered by Caroline Herschel on August 27, 1783; her brother William Herschel described her discovery in 1785. The suggestion to assign the galaxy a Messier number was made by Kenneth Glyn Jones in 1967, making it the last member of the Messier List. Properties This galaxy has a morphological classification of pec dE5, indicating a dwarf elliptical galaxy with a flattening of 50%. It is designated peculiar (pec) due to patches of dust and young blue stars near its center. This is unusual for dwarf elliptical galaxies in general, and the reason is unclear. Unlike M32, M110 lacks evidence for a supermassive black hole at its center. The interstellar dust in M110 has a mass of with a temperature of , and the interstellar gas has . The inner region has sweeping deficiencies in its interstellar medium IM, most likely expelled by supernova explosions. Tidal interactions with M31 may have stripped away a significant fraction of the expelled gas and dust, leaving the galaxy as a whole, as it presents, deficient in its IM density. Novae have been detected in this galaxy, including one discovered in 1999, and another in 2002. The latter, designated EQ J004015.8+414420, had also been captured in images taken by the Sloan Digital Sky Survey (SDSS) that October. Local context About half of the Andromeda's satellite galaxies are orbiting it along a highly flattened plane, with 14 out of 16 following the same sense of rotation. One theory proposes that these 16 once belonged to a subhalo surrounding M110, then the group was broken up by tidal forces during a close encounter with Andromeda. See also List of Messier objects Notes References External links Messier 110 Data Sheed and additional information – Telescopius. (Deep Sky Objects Browser has been renamed and reformatted – the old links below no longer work correctly) Messier 110 data sheet, altitude charts, sky map and related objects – Deep Sky Objects Browser Messier 110 amateur astrophotography – Deep Sky Objects Browser SEDS: Elliptical Galaxy M110 Dwarf elliptical galaxies Peculiar galaxies Local Group Andromeda Subgroup Andromeda (constellation) 110 NGC objects 00426 002429 002429 17730810 Discoveries by Caroline Herschel +07-02-014
Messier 110
[ "Astronomy" ]
585
[ "Andromeda (constellation)", "Constellations" ]
975,090
https://en.wikipedia.org/wiki/January%20effect
The January effect is a hypothesis that there is a seasonal anomaly in the financial market where securities' prices increase in the month of January more than in any other month. This calendar effect would create an opportunity for investors to buy stocks for lower prices before January and sell them after their value increases. As with all calendar effects, if true, it would suggest that the market is not efficient, as market efficiency would suggest that this effect should disappear. The effect was first observed around 1942 by investment banker Sidney B. Wachtel. He noted that since 1925 small stocks had outperformed the broader market in the month of January, with most of the disparity occurring before the middle of the month. It has also been noted that when combined with the four-year US presidential cycle, historically the largest January effect occurs in year three of a president's term. The most common theory explaining this phenomenon is that individual investors, who are income tax-sensitive and who disproportionately hold small stocks, sell stocks for tax reasons at year end (such as to claim a capital loss) and reinvest after the first of the year. Another cause is the payment of year-end bonuses in January. Some of this bonus money is used to purchase stocks, driving up prices. The January effect does not always materialize; for example, small stocks underperformed large stocks in 1982, 1987, 1989 and 1990. Criticism Burton Malkiel asserts that seasonal anomalies such as the January effect are transient and do not present investors with reliable arbitrage opportunities. He sums up his critique of the January effect: "Wall Street traders now joke that the January effect is more likely to occur on the previous Thanksgiving. Moreover, these nonrandom effects (even if they were dependable) are very small relative to the transaction costs involved in trying to exploit them. They do not appear to offer arbitrage opportunities that would enable investors to make excess risk-adjusted returns." See also Financial market efficiency July effect Limits to arbitrage Market timing Sell in May Santa Claus rally References Behavioral finance Calendar effect
January effect
[ "Biology" ]
424
[ "Behavioral finance", "Behavior", "Human behavior" ]
15,918,153
https://en.wikipedia.org/wiki/Oleoylethanolamide
Oleoylethanolamide (OEA) is an endogenous peroxisome proliferator-activated receptor alpha (PPAR-α) agonist. It is a naturally occurring ethanolamide lipid that regulates feeding and body weight in vertebrates ranging from mice to pythons. OEA is a shorter, monounsaturated analogue of the endocannabinoid anandamide, but unlike anandamide it acts independently of the cannabinoid pathway, regulating PPAR-α activity to stimulate lipolysis. OEA is produced by the small intestine following feeding in two steps. First an N-acyl transferase (NAT) activity joins the free amino terminus of phosphatidylethanolamine (PE) to the oleoyl group (one variety of acyl group) derived from sn-1-oleoyl-phosphatidylcholine, which contains the fatty acid oleic acid at the sn-1 position. This produces an N-acylphosphatidylethanolamine, which is then split (hydrolyzed) by N-acyl phosphatidylethanolamine-specific phospholipase D (NAPE-PLD) into phosphatidic acid and OEA. The biosynthesis of OEA and other bioactive lipid amides is modulated by bile acids. OEA has been demonstrated to bind to the novel cannabinoid receptor GPR119. OEA has been suggested to be the receptor's endogenous ligand. OEA has been hypothesized to play a key role in the inhibition of food-seeking behavior and in the lipolysis of brown bears (Ursus arctos) during the hibernation season, together with the alteration of the endocannabinoid system required for the metabolic changes of hibernation. OEA has been reported to lengthen the life span of the roundworm Caenorhabditis elegans through interactions with lysosomal molecules. OEA is mainly known for its anorexigenic effects; however, it also has neuroprotective properties. In this sense, recent research has demonstrated that OEA reduces neuronal death in a murine model of aggressive neurodegeneration. This neuroprotective effect is triggered by a stabilization of microtubule dynamics and by the modulation of neuroinflammation. References External links Science Magazine BBC: Fatty foods 'offer memory boost' Neurotransmitters Fatty acid amides Endocannabinoids
Oleoylethanolamide
[ "Chemistry" ]
544
[ "Neurochemistry", "Neurotransmitters" ]
15,918,280
https://en.wikipedia.org/wiki/Carbon%20diselenide
Carbon diselenide is an inorganic compound with the chemical formula . It is a yellow-orange oily liquid with pungent odor. It is the selenium analogue of carbon disulfide () and carbon dioxide (). This light-sensitive compound is insoluble in water and soluble in organic solvents. Synthesis, structure and reactions Carbon diselenide is a linear molecule with D∞h symmetry. It is produced by reacting selenium powder with dichloromethane vapor near 550 °C. It was first reported by Grimm and Metzger, who prepared it by treating hydrogen selenide with carbon tetrachloride in a hot tube. Like carbon disulfide, carbon diselenide polymerizes under high pressure. The structure of the polymer is thought to be a head-to-head structure with a backbone in the form of . The polymer is a semiconductor with a room-temperature conductivity of 50 S/cm. In addition, carbon diselenide is a precursor to tetraselenafulvalenes, the selenium analogue of tetrathiafulvalene, which can be further used to synthesize organic conductors and organic superconductors. Carbon diselenide reacts with secondary amines to give : Safety Carbon diselenide has high vapor pressure. It has a moderate toxicity and presents an inhalation hazard. It may be dangerous due to its easy membrane transport. It decomposes slowly in storage (about 1% per month at –30 °C). When obtained commercially, its cost is high. Pure distilled carbon diselenide has an odor very similar to that of carbon disulfide, but mixed with air, it creates extremely offensive odors (corresponding to new, highly toxic reaction products). Its smell forced an evacuation of a nearby village when it was first synthesized in 1936. Because of the odor, synthetic pathways have been developed to avoid its use. References Selenides Inorganic carbon compounds Foul-smelling chemicals Dichalcogenides IV-VI semiconductors
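The gas-phase preparation from selenium and dichloromethane described above is consistent with the following overall stoichiometry (a sketch inferred from the reactants named in the text, assuming hydrogen chloride is the chlorine-bearing by-product; the original reports may give further detail):

CH2Cl2 + 2 Se → CSe2 + 2 HCl (near 550 °C)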
Carbon diselenide
[ "Chemistry" ]
425
[ "Semiconductor materials", "Inorganic carbon compounds", "Inorganic compounds", "IV-VI semiconductors" ]
15,919,284
https://en.wikipedia.org/wiki/Improved%20water%20source
An improved water source (or improved drinking-water source or improved water supply) is a term used to categorize certain types or levels of water supply for monitoring purposes. It is defined as a type of water source that, by nature of its construction or through active intervention, is likely to be protected from outside contamination, in particular from contamination with fecal matter. The term was coined by the Joint Monitoring Program (JMP) for Water Supply and Sanitation of UNICEF and WHO in 2002 to help monitor the progress towards Goal Number 7 of the Millennium Development Goals (MDGs). The opposite of "improved water source" has been termed "unimproved water source" in the JMP definitions. The same terms are used to monitor progress towards Sustainable Development Goal 6 (Target 6.1, Indicator 6.1.1) from 2015 onwards. Here, they are a component of the definition for "safely managed drinking water service". Definitions During SDG period (2015 to 2030) Indicator 6.1.1 of SDG 6 is "Proportion of population using safely managed drinking water services". The term "safely managed drinking water services" is defined as: "Drinking water from an improved water source that is located on premises, available when needed and free from fecal and priority chemical contamination". In 2017, the JMP defined a new term: "basic water service". This is defined as the drinking water coming from an improved source, and provided the collection time is not more than 30 minutes for a round trip. A lower level of service is now called "limited water service" which is the same as basic service but the collection time is longer than 30 minutes. Service levels are defined as (from lowest to highest): Surface water, unimproved, limited, basic, safely managed. During MDG period (2000 until 2015) To allow for international comparability of estimates for monitoring the Millennium Development Goals (MDGs), the World Health Organization/UNICEF Joint Monitoring Program (JMP) for Water Supply and Sanitation defines "improved" drinking water sources as follows: Piped water into dwelling Piped water into yard/plot Public tap/standpipes Tubewell/boreholes Protected dug wells Protected springs (normally part of a spring supply) Rainwater collection Bottled water, if the secondary source used by the household for cooking and personal hygiene is improved Water sources that are not considered as "improved" are: Unprotected dug wells Unprotected springs Vendor provided water Cart with small tank/drum Bottled water, if the secondary source used by the household for cooking and personal hygiene is unimproved Tanker-truck Surface water See also Human right to water and sanitation References Water supply
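The JMP service levels described above amount to a small decision rule. The sketch below is a minimal illustration only (the function and field names are hypothetical, not part of any official JMP tool, and the criteria are simplified to those stated in this article):

def jmp_service_level(improved_source, on_premises, available_when_needed,
                      free_of_contamination, round_trip_minutes,
                      surface_water=False):
    # Classify a household drinking-water supply using the ladder above:
    # surface water < unimproved < limited < basic < safely managed.
    if surface_water:
        return "surface water"
    if not improved_source:
        return "unimproved"
    if on_premises and available_when_needed and free_of_contamination:
        return "safely managed"
    return "basic" if round_trip_minutes <= 30 else "limited"

For example, an improved source with a 45-minute round trip would be classified as "limited", while the same source collected within 30 minutes would be "basic".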
Improved water source
[ "Chemistry", "Engineering", "Environmental_science" ]
553
[ "Hydrology", "Water supply", "Environmental engineering" ]
15,919,460
https://en.wikipedia.org/wiki/Social%20network%20analysis%20software
Social network analysis (SNA) software is software which facilitates quantitative or qualitative analysis of social networks, by describing features of a network either through numerical or visual representation. Overview Networks can consist of anything from families, project teams, classrooms, sports teams, legislatures, nation-states, disease vectors, membership on networking websites like Twitter or Facebook, or even the Internet. Networks can consist of direct linkages between nodes or indirect linkages based upon shared attributes, shared attendance at events, or common affiliations. Network features can be at the level of individual nodes, dyads, triads, ties and/or edges, or the entire network. For example, node-level features can include network phenomena such as betweenness and centrality, or individual attributes such as age, sex, or income. SNA software generates these features from raw network data formatted in an edgelist, adjacency list, or adjacency matrix (also called sociomatrix), often combined with (individual/node-level) attribute data. Though the majority of network analysis software uses a plain text ASCII data format, some software packages contain the capability to utilize relational databases to import and/or store network features. Features Visual representations of social networks are important to understand network data and convey the result of the analysis. Visualization often also facilitates qualitative interpretation of network data. With respect to visualization, network analysis tools are used to change the layout, colors, size and other properties of the network representation. Some SNA software can perform predictive analysis. This includes using network phenomena such as a tie to predict individual level outcomes (often called peer influence or contagion modeling), using individual-level phenomena to predict network outcomes such as the formation of a tie/edge (often called homophily models) or particular type of triad, or using network phenomena to predict other network phenomena, such as using a triad formation at time 0 to predict tie formation at time 1. Collection of social network analysis tools and libraries See also Comparison of research networking tools and research profiling systems Social network Social network analysis Social networking Organizational Network Analysis References Notes Barnes, J. A. "Class and Committees in a Norwegian Island Parish", Human Relations 7:39-58 Borgatti, S. (2002). NetDraw Software for Network Visualization. Lexington, KY: Analytic Technologies. Borgatti, S. E. (2002). Ucinet for Windows: Software for Social Network Analysis. Harvard, MA: Analytic Technologies. Berkowitz, S. D. 1982. An Introduction to Structural Analysis: The Network Approach to Social Research. Toronto: Butterworth. Brandes, Ulrik, and Thomas Erlebach (Eds.). 2005. Network Analysis: Methodological Foundations Berlin, Heidelberg: Springer-Verlag. Breiger, Ronald L. 2004. "The Analysis of Social Networks." Pp. 505–526 in Handbook of Data Analysis, edited by Melissa Hardy and Alan Bryman. London: Sage Publications. Excerpts in pdf format Burt, Ronald S. (1992). Structural Holes: The Structure of Competition. Cambridge, MA: Harvard University Press. Carrington, Peter J., John Scott and Stanley Wasserman (Eds.). 2005. Models and Methods in Social Network Analysis. New York: Cambridge University Press. Christakis, Nicholas and James H. 
Fowler "The Spread of Obesity in a Large Social Network Over 32 Years," New England Journal of Medicine 357 (4): 370-379 (26 July 2007) Doreian, Patrick, Vladimir Batagelj, and Anuska Ferligoj. (2005). Generalized Blockmodeling. Cambridge: Cambridge University Press. Freeman, Linton C. (2004) The Development of Social Network Analysis: A Study in the Sociology of Science. Vancouver: Empirical Press. Hansen, William B. and Reese, Eric L. 2009. Network Genie Users Manual. Greensboro, NC: Tanglewood Research. Hill, R. and Dunbar, R. 2002. "Social Network Size in Humans." Human Nature, Vol. 14, No. 1, pp. 53–72.Google pdf Huisman, M. and Van Duijn, M. A. J. (2005). Software for Social Network Analysis. In P J. Carrington, J. Scott, & S. Wasserman (Editors), Models and Methods in Social Network Analysis (pp. 270–316). New York: Cambridge University Press. Krebs, Valdis (2002) Uncloaking Terrorist Networks, First Monday, volume 7, number 4 (Application of SNA software to terror nets Web Reference.) Krebs, Valdis (2008) A Brief Introduction to Social Network Analysis (Common metrics in most SNA software Web Reference.) Krebs, Valdis (2008) Various Case Studies & Projects using Social Network Analysis software Web Reference . Lin, Nan, Ronald S. Burt and Karen Cook, eds. (2001). Social Capital: Theory and Research. New York: Aldine de Gruyter. Mullins, Nicholas. 1973. Theories and Theory Groups in Contemporary American Sociology. New York: Harper and Row. Müller-Prothmann, Tobias (2006): Leveraging Knowledge Communication for Innovation. Framework, Methods and Applications of Social Network Analysis in Research and Development, Frankfurt a. M. et al.: Peter Lang, . Moody, James, and Douglas R. White (2003). "Structural Cohesion and Embeddedness: A Hierarchical Concept of Social Groups." American Sociological Review 68(1):103-127. Nohria, Nitin and Robert Eccles (1992). Networks in Organizations. second ed. Boston: Harvard Business Press. Nooy, Wouter d., A. Mrvar and Vladimir Batagelj. (2005). Exploratory Social Network Analysis with Pajek. Cambridge: Cambridge University Press. Scott, John. (2000). Social Network Analysis: A Handbook. 2nd Ed. Newberry Park, CA: Sage. Tilly, Charles. (2005). Identities, Boundaries, and Social Ties. Boulder, CO: Paradigm press. Valente, Thomas. (1995). Network Models of the Diffusion of Innovation. Cresskill, NJ: Hampton Press. Wasserman, Stanley, & Faust, Katherine. (1994). Social Networks Analysis: Methods and Applications. Cambridge: Cambridge University Press. Watkins, Susan Cott. (2003). "Social Networks." Pp. 909–910 in Encyclopedia of Population. rev. ed. Edited by Paul Demeny and Geoffrey McNicoll. New York: Macmillan Reference. Watts, Duncan. (2004). Six Degrees: The Science of a Connected Age. W. W. Norton & Company. Wellman, Barry (1999). Networks in the Global Village. Boulder, CO: Westview Press. Wellman, Barry and Berkowitz, S.D. (1988). Social Structures: A Network Approach. Cambridge: Cambridge University Press. White, Harrison, Scott Boorman and Ronald Breiger. 1976. "Social Structure from Multiple Networks: I Blockmodels of Roles and Positions." American Journal of Sociology 81: 730–80. External links International Network for Social Network Analysis (INSNA) list of software packages and libraries: Computer Programs for Social Network Analysis page. 
2010 : A comparative study of social network analysis tools by Combe, Largeron, Egyed-Zsigmond and Géry: Social networks Comparisons of mathematical software Data analysis software Social network analysis
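As a concrete illustration of the node-level features and data formats described above (degree and betweenness centrality computed from an edge list), the following sketch uses the open-source NetworkX library for Python; the edge list is a made-up example, and NetworkX is only one of many packages exposing these measures:

import networkx as nx

# A small made-up collaboration network supplied as an edge list.
edges = [("Ann", "Bob"), ("Bob", "Cai"), ("Cai", "Dee"), ("Bob", "Dee"), ("Dee", "Eve")]
G = nx.Graph(edges)

# Node-level features: degree and betweenness centrality for every node.
print(dict(G.degree()))
print(nx.betweenness_centrality(G))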
Social network analysis software
[ "Mathematics" ]
1,574
[ "Comparisons of mathematical software", "Mathematical software" ]
15,922,421
https://en.wikipedia.org/wiki/Platelet-derived%20growth%20factor%20receptor%20A
Platelet-derived growth factor receptor A, also termed CD140a, is a receptor located on the surface of a wide range of cell types. The protein is encoded in humans by the PDGFRA gene. This receptor binds to certain isoforms of platelet-derived growth factors (PDGFs) and thereby becomes active in stimulating cell signaling pathways that elicit responses such as cellular growth and differentiation. The receptor is critical for the embryonic development of certain tissues and organs, and for their maintenance, particularly hematologic tissues, throughout life. Mutations in PDGFRA are associated with an array of clinically significant neoplasms, notably ones of the clonal hypereosinophilia class of malignancies, as well as gastrointestinal stromal tumors (GISTs). Overall structure This gene encodes a typical receptor tyrosine kinase, which is a transmembrane protein consisting of an extracellular ligand binding domain, a transmembrane domain and an intracellular tyrosine kinase domain. The molecular mass of the mature, glycosylated PDGFRα protein is approximately 170 kDa. It is a cell surface tyrosine kinase receptor for members of the platelet-derived growth factor family. Modes of activation Activation of PDGFRA requires de-repression of the receptor's kinase activity. The ligand for PDGFRα (PDGF) accomplishes this in the course of assembling a PDGFRα dimer. Four of the five PDGF isoforms activate PDGFRα (PDGF-A, PDGF-B, PDGF-AB and PDGF-C). The activated receptor phosphorylates itself and other proteins, and thereby engages intracellular signaling pathways that trigger cellular responses such as migration and proliferation. There are also PDGF-independent modes of de-repressing PDGFRα's kinase activity and hence activating it. For instance, the receptor can be activated by forcing PDGFRα molecules into close proximity with each other through overexpression or with antibodies directed against the extracellular domain. Alternatively, mutations in the kinase domain that stabilize a kinase active conformation result in constitutive activation. Finally, growth factors outside of the PDGFR family (non-PDGFs) activate PDGFRα indirectly. Non-PDGFs bind to their own receptors that trigger intracellular events that de-repress the kinase activity of PDGFRα monomers. The intracellular events by which non-PDGFs indirectly activate PDGFRα include elevation of reactive oxygen species that activate Src family kinases, which phosphorylate PDGFRα. The mode of activation determines the duration that PDGFRα remains active. The PDGF-mediated mode, which dimerizes PDGFRα, accelerates internalization and degradation of activated PDGFRα such that the half-life of PDGF-activated PDGFRα is approximately 5 min. Enduring activation of PDGFRα (half-life greater than 120 min) occurs when PDGFRα monomers are activated. Role in physiology/pathology The importance of PDGFRA during development is apparent from the observation that the majority of mice lacking a functional Pdgfra gene develop a plethora of embryonic defects, some of which are lethal; the mutant mice exhibit defects in kidney glomeruli because of a lack of mesangial cells, but also suffer an ill-defined blood defect characterized by thrombocytopenia, a bleeding tendency, and severe anemia which could be due to blood loss. The mice die at or shortly before birth. PDGF-A and PDGF-C seem to be the important activators of PDGFRα during development because mice lacking functional genes for both these PDGFRA-activating ligands, i.e. Pdgfa/Pdgfc double null mice, show similar defects to Pdgfra null mice.
Mice genetically engineered to express a constitutively (i.e. continuously) activated PDGFRα mutant receptor eventually develop fibrosis in the skin and multiple internal organs. The studies suggest that PDGFRA plays fundamental roles in the development and function of mesodermal tissues, e.g., blood cells, connective tissue, and mesangial cells. Clinical significance PDGFRA mutations Myeloid and lymphoid cells Somatic mutations that cause the fusion of the PDGFRA gene with certain other genes occur in hematopoietic stem cells and cause a hematological malignancy in the clonal hypereosinophilia class of malignancies. These mutations create fused genes which encode chimeric proteins that possess continuously active PDGFRA-derived tyrosine kinase. They thereby continuously stimulate cell growth and proliferation and lead to the development of leukemias, lymphomas, and myelodysplastic syndromes that are commonly associated with hypereosinophilia and therefore regarded as a sub-type of clonal eosinophilia. In the most common of these mutations, the PDGFRA gene on human chromosome 4 at position q12 (notated as 4q12) fuses with the FIP1L1 gene also located at position 4q12. This interstitial (i.e. on the same chromosome) fusion creates a FIP1L1-PDGFRA fusion gene while usually losing intervening genetic material, typically including either the CHIC2 or LNX gene. The fused gene encodes a FIP1L1-PDGFRA protein that causes: a) chronic eosinophilia which progresses to chronic eosinophilic leukemia; b) a form of myeloproliferative neoplasm/myeloblastic leukemia associated with little or no eosinophilia; c) T-lymphoblastic leukemia/lymphoma associated with eosinophilia; d) myeloid sarcoma with eosinophilia (see FIP1L1-PDGFRA fusion genes); or e) mixtures of these presentations. Variations in the type of malignancy formed likely reflect the specific type(s) of hematopoietic stem cells that bear the mutation. The PDGFRA gene may also mutate through any one of several chromosome translocations to create fusion genes which, like the FIP1L1-PDGFRA fusion gene, encode a fusion protein that possesses continuously active PDGFRA-related tyrosine kinase and causes myeloid and/or lymphoid malignancies. These mutations, including the FIP1L1-PDGFRA mutation, along with the chromosomal location of PDGFRA's partner and the notation used to identify the fused gene, are given in the following table. Patients afflicted with any one of these translocation mutations, similar to those afflicted with the interstitial PDGFRA-FIP1L1 fusion gene: a) present with findings of chronic eosinophilia, hypereosinophilia, the hypereosinophilic syndrome, or chronic eosinophilic leukemia; myeloproliferative neoplasm/myeloblastic leukemia; a T-lymphoblastic leukemia/lymphoma; or myeloid sarcoma; b) are diagnosed cytogenetically, usually by analyses that detect breakpoints in the short arm of chromosome 4 using fluorescence in situ hybridization; and c) where treated (many of the translocations are extremely rare and have not been fully tested for drug sensitivity), respond well or are anticipated to respond well to imatinib therapy as described for the treatment of diseases caused by FIP1L1-PDGFRA fusion genes. Gastrointestinal tract Activating mutations in PDGFRA are also involved in the development of 2–15% of gastrointestinal stromal tumors (GISTs), which are the most common mesenchymal neoplasms of the gastrointestinal tract.
GIST tumors are sarcomas derived from the GI tract's connective tissue whereas most GI tract tumors are adenocarcinomas derived from the tract's epithelial cells. GIST tumors occur throughout the GI tract but most (66%) occur in the stomach and, when developing there, have a lower malignant potential than GIST tumors found elsewhere in the GI tract. The most common PDGFRA mutations found in GIST tumors occur in exon 18 and are thought to stabilize PDGFRA's tyrosine kinase in an activated conformation. A single mutation, D842V, in this exon accounts for >70% of GIST tumors. The next most common GIST tumor mutation occurs in exon 18, accounts for <1% of GIST tumors, and is a deletion of codons 842 to 845. Exon 12 is the second most commonly mutated PDGFRA exon in GIST, being found in ~1% of GIST tumors. Mutations in PDGFRA's exon 14 are found in <1% of GIST tumors. While some PDGFRA mutation-induced GIST tumors are sensitive to the tyrosine kinase inhibitor imatinib, the most common mutation, D842V, as well as some very rare mutations, are resistant to this drug: median overall survival is reported to be only 12.8 months in patients whose tumors bear the D842V mutation compared to 48–60 months in large series of imatinib-treated patients with other types of GIST mutations. Consequently, it is critical to define the exact nature of PDGFR-induced mutant GIST tumors in order to select appropriate therapy, particularly because a novel PDGFRA-selective kinase inhibitor, crenolanib, is under investigation for treating D842V-induced and other imatinib-resistant GIST tumors. A randomized trial testing the efficacy of crenolanib in patients with GIST tumors bearing the D842V mutation is under recruitment. Olaratumab (LY3012207) is a human IgG1 monoclonal antibody designed to bind to human PDGFRα with high affinity and block PDGF-AA, PDGF-BB, and PDGF-CC ligands from binding to the receptor. Numerous studies using it to treat soft tissue sarcomas including GIST are ongoing. Studies on GIST have focused on inoperable, metastatic, and/or recurrent disease and have tested olaratumab with doxorubicin versus doxorubicin alone. The US FDA granted approval for the use of olaratumab-doxorubicin therapy of soft tissue sarcomas under its Accelerated Approval Program based on the results of the phase II trial (NCT01185964). In addition, the European Medicines Agency granted conditional approval for olaratumab in this indication in November 2016 following a review under the EMA's Accelerated Assessment Program. Nervous system Gain-of-function H3K27M mutations in protein histone H3 lead to inactivation of polycomb repressive complex 2 (PRC2) methyltransferase and result in global hypomethylation of H3K27me3 and transcriptional derepression of potential oncogenes. About 40% of these mutations are associated with gain-of-function mutations or amplifications of the PDGFRA gene in cases of pediatric diffuse gliomas of the pons. It appears that the initial histone H3 mutations alone are insufficient but rather require cooperating secondary mutations such as PDGFRA-activating mutations or PDGFRA amplifications to develop this type of brain tumor. In a small non-randomized trial, imatinib therapy in patients with glioblastoma selected on the basis of having imatinib-inhibitable tyrosine kinases in biopsy tissue caused marginal disease improvement compared to similar treatment of patients with unselected recurrent glioblastoma.
This suggests that patient sub-populations with excessive PDGFRA-related or other tyrosine kinase-related over-activity might benefit from imatinib therapy. Several phase I and Phase II clinical glioma/glioblastoma studies have been conducted using imatinib but no decisive follow-up phase III studies have been reported. Interactions PDGFRA has been shown to interact with: CRK, Caveolin 1, Cbl gene, PDGFC, PDGFR-β, PLCG1, and Sodium-hydrogen antiporter 3 regulator 1. Notes See also Platelet-derived growth factor receptor Clonal eosinophilia References Further reading Tyrosine kinase receptors
Platelet-derived growth factor receptor A
[ "Chemistry" ]
2,662
[ "Tyrosine kinase receptors", "Signal transduction" ]
15,922,429
https://en.wikipedia.org/wiki/Scattered%20order
In mathematical order theory, a scattered order is a linear order that contains no densely ordered subset with more than one element. A characterization due to Hausdorff states that the class of all scattered orders is the smallest class of linear orders that contains the singleton orders and is closed under well-ordered and reverse well-ordered sums. Laver's theorem (generalizing a conjecture of Roland Fraïssé on countable orders) states that the embedding relation on the class of countable unions of scattered orders is a well-quasi-order. The order topology of a scattered order is scattered. The converse implication does not hold, as witnessed by the lexicographic order on . References Order theory
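As a brief worked example of the definition above (an illustration under the standard definitions, not drawn from a specific source): the order ω of the natural numbers is scattered, because between any two naturals only finitely many others lie, so no subset with more than one element can be densely ordered; the same holds for its reverse ω*. By Hausdorff's characterization, well-ordered sums of such orders, for example ω + ω* or ω·ω, are again scattered. The rational numbers, by contrast, are densely ordered, so any linear order containing a copy of the rationals is not scattered.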
Scattered order
[ "Mathematics" ]
141
[ "Mathematical logic stubs", "Mathematical logic", "Order theory" ]
15,922,484
https://en.wikipedia.org/wiki/Ryanodine%20receptor%201
Ryanodine receptor 1 (RYR-1), also known as skeletal muscle calcium release channel or skeletal muscle-type ryanodine receptor, is one of a class of ryanodine receptors and a protein found primarily in skeletal muscle. In humans, it is encoded by the RYR1 gene. Function RYR1 functions as a calcium release channel in the sarcoplasmic reticulum, as well as a connection between the sarcoplasmic reticulum and the transverse tubule. RYR1 is associated with the dihydropyridine receptor (L-type calcium channels) within the sarcolemma of the T-tubule, which opens in response to depolarization, effectively meaning that the RYR1 channel opens in response to depolarization of the cell. RYR1 plays a signaling role during embryonic skeletal myogenesis. A correlation exists between RYR1-mediated Ca2+ signaling and the expression of multiple molecules involved in key myogenic signaling pathways. Of these, more than 10 differentially expressed genes belong to the Wnt family, which is essential for differentiation. This coincides with the observation that without RYR1 present, muscle cells appear in smaller groups, are underdeveloped, and lack organization. Fiber type composition is also affected, with fewer type 1 muscle fibers when there are decreased amounts of RYR1. These findings demonstrate that RYR1 has a non-contractile role during muscle development. RYR1 is mechanically linked to neuromuscular junctions for the process of calcium-induced calcium release. While nerve-derived signals are required for acetylcholine receptor cluster distribution, there is evidence to suggest that RYR1 activity is an important mediator in the formation and patterning of these receptors during embryological development. The signals from the nerve and RYR1 activity appear to counterbalance each other. When RYR1 is eliminated, the acetylcholine receptor clusters appear in an abnormally narrow pattern, yet without signals from the nerve, the clusters are scattered and broad. Although its direct role is still unknown, RYR1 is required for proper distribution of acetylcholine receptor clusters. Clinical significance Mutations in the RYR1 gene are associated with malignant hyperthermia susceptibility, central core disease, minicore myopathy with external ophthalmoplegia and samaritan myopathy, a benign congenital myopathy. Alternatively spliced transcripts encoding different isoforms have been demonstrated. Dantrolene may be the only known drug that is effective during cases of malignant hyperthermia. Interactions RYR1 has been shown to interact with: calmodulin FKBP1A HOMER1 HOMER2 HOMER3 and TRDN. See also Ryanodine receptor References Further reading External links GeneReviews/NIH/UW entry on Multiminicore Disease GeneReviews/NCBI/NIH/UW entry on Malignant Hyperthermia Susceptibility RYR1 Variation Database Ion channels Muscle stabilizers
Ryanodine receptor 1
[ "Chemistry" ]
630
[ "Neurochemistry", "Ion channels" ]
11,725,057
https://en.wikipedia.org/wiki/Mobbing%20%28animal%20behavior%29
Mobbing in animals is an anti-predator adaptation in which individuals of prey species cooperatively attack or harass a predator, usually to protect their offspring. A simple definition of mobbing is an assemblage of individuals around a potentially dangerous predator. This is most frequently seen in birds, though it is also known to occur in many other animals such as the meerkat and some bovines. While mobbing has evolved independently in many species, it only tends to be present in those whose young are frequently preyed upon. This behavior may complement cryptic adaptations in the offspring themselves, such as camouflage and hiding. Mobbing calls may be used to summon nearby individuals to cooperate in the attack. Konrad Lorenz, in his book On Aggression (1966), attributed mobbing among birds and animals to instincts rooted in the Darwinian struggle to survive. In his view, humans are subject to similar innate impulses but capable of bringing them under rational control (see mobbing). In birds Birds that breed in colonies such as gulls are widely seen to attack intruders, including encroaching humans. In North America, the birds that most frequently engage in mobbing include mockingbirds, crows and jays, chickadees, terns, and blackbirds. Behavior includes flying about the intruder, dive bombing, loud squawking and defecating on the predator. Mobbing can also be used to obtain food, by driving larger birds and mammals away from a food source, or by harassing a bird with food. One bird might distract while others quickly steal food. Scavenging birds such as gulls frequently use this technique to steal food from humans nearby. A flock of birds might drive a powerful animal away from food. Costs of mobbing behavior include the risk of engaging with predators, as well as energy expended in the process. The black-headed gull is a species which aggressively engages intruding predators, such as carrion crows. Classic experiments on this species by Hans Kruuk involved placing hen eggs at intervals from a nesting colony, and recording the percentage of successful predation events as well as the probability of the crow being subjected to mobbing. The results showed decreasing mobbing with increased distance from the nest, which was correlated with increased predation success. Mobbing may function by reducing the predator's ability to locate nests (as a distraction) since predators cannot focus on locating eggs while they are under attack. Besides the ability to drive the predator away, mobbing also draws attention to the predator, making stealth attacks impossible. Mobbing plays a critical role in the identification of predators and inter-generational learning about predator identification. Reintroduction of species is often unsuccessful, because the established population lacks this cultural knowledge of how to identify local predators. Scientists are exploring ways to train populations to identify and respond to predators before releasing them into the wild. Adaptationist hypotheses regarding why an organism should engage in such risky behavior have been suggested by Eberhard Curio, including advertising their physical fitness and hence uncatchability (much like stotting behavior in gazelles), distracting predators from finding their offspring, warning their offspring, luring the predator away, allowing offspring to learn to recognize the predator species, directly injuring the predator or attracting a predator of the predator itself. 
The much lower frequency of attacks between nesting seasons suggests such behavior may have evolved due to its benefit for the mobber's young. Niko Tinbergen argued that the mobbing was a source of confusion to gull chick predators, distracting them from searching for prey. Indeed, an intruding carrion crow can only avoid incoming attacks by facing its attackers, which prevents it from locating its target. Besides experimental research, the comparative method can also be employed to investigate hypotheses such as those given by Curio above. For example, not all gull species show mobbing behavior. The kittiwake nests on sheer cliffs that are almost completely inaccessible to predators, meaning its young are not at risk of predation like other gull species. This is an example of divergent evolution. Another hypothesis for mobbing behavior is known as the “attract the mightier hypothesis.” Within this hypothesis, prey species produce a mobbing call in order to attract stronger secondary predator to address the threat of the present primary predator. A study conducted by Fang et al., showed significant findings for this unproved functional thesis, utilizing three different call types for the prey species light-vented bulbuls, Pycnonotus sinensis: the typical call (TC, the control treatment), a mobbing call to a collared scops owl (the MtO treatment) and a mobbing call to a crested goshawk, Accipiter trivirgatus (the superior predator; the MtH treatment). Looking at variation in the behavioural responses of 22 different passerine species to a potential predator, the Eurasian Pygmy Owl, extent of mobbing was positively related with a species prevalence in the owls' diet. Furthermore, the intensity of mobbing was greater in autumn than spring. Mobbing is thought to carry risks to roosting predators, including potential harm from the mobbing birds, or attracting larger, more dangerous predators. Birds at risk of mobbing such as owls have cryptic plumage and hidden roosts which reduces this danger. Effect of environment on mobbing behavior Environment has an effect on mobbing behavior as seen in a study conducted by Dagan & Izhaki (2019), wherein mobbing behavior was examined particularly observing the effects of Pine Forest structure. Their findings showed that mobbing behavior varied by season, i.e., high responses in the winter, and moderate response in the fall. Additionally, the presence of a forest understory had a significant impact on mobbing behavior, i.e., the denser the understory vegetation, the more birds responded to mobbing calls. That is to say, the presence of cover in the forest highly contributes to willingness to respond to the aforementioned call. In other animals Another way the comparative method can be used here is by comparing gulls with distantly related organisms. This approach relies on the existence of convergent evolution, where distantly related organisms evolve the same trait due to similar selection pressures. As mentioned, many bird species such as the swallows also mob predators, however more distantly related groups including mammals have been known to engage in this behavior. One example is the California ground squirrel, which distracts predators such as the rattlesnake and gopher snake from locating their nest burrows by kicking sand into the snake's face, thus disrupting its sensory organs; for crotaline snakes, this includes the heat-detecting organs in the loreal pits. This social species also uses alarm calls. 
Some fish engage in mobbing; for example, bluegills sometimes attack snapping turtles. Bluegills, which form large nesting colonies, were seen to attack both released and naturally occurring turtles, which may advertise their presence, drive the predator from the area, or aid in the transmission of predator recognition. Similarly, humpback whales are known to mob killer whales when the latter are attacking other species, including other cetacean species, seals, sea lions, and fish. There is a distinction though, between mobbing in animals, and fight-or-flight response. The former relies heavily on group dynamics, whereas the latter’s central focus conceptually is on that of the individual and its offspring in some cases. A study conducted by Adamo & McKee (2017) examining the cricket Gryllus texensis showcases this by activating high predation risk repeatedly to examine how animals in general perceive such risks. Based on perceived threat, crickets took action to save themselves or attempted to preserve their offspring. Mobbing calls Mobbing calls are signals made by the mobbing species while harassing a predator. These differ from alarm calls, which allow con-specifics to escape from the predator. The great tit, a European songbird, uses such a signal to call on nearby birds to harass a perched bird of prey, such as an owl. This call occurs in the 4.5kHz range, and carries over long distances. However, when prey species are in flight, they employ an alarm signal in the 7–8 kHz range. This call is less effective at traveling great distances, but is much more difficult for both owls and hawks to hear (and detect the direction from which the call came). In the case of the alarm call, it could be disadvantageous to the sender if the predator picks up on the signal, hence selection has favored those birds able to hear and employ calls in this higher frequency range. Furthermore, bird vocalizations vary acoustically as a byproduct of adapting to the environment, according to the acoustic adaptation hypothesis. In a study by Billings (2018) examining, specifically the low-frequency acoustic structure of mobbing calls across habitat types (closed, open, and urban) in three passerine families (Corvidae, Icteridae, Turdidae), it was discovered that the size of the bird was a factor in the variation of mobbing calls. Additionally, species in closed and urban habitats had lower energy and lower low frequencies in their mobbing calls, respectively. Mobbing calls may also be part of an animal's arsenal in harassing the predator. Studies of Phainopepla mobbing calls indicate it may serve to enhance the swooping attack on the predators, including scrub jays. In this species, the mobbing call is smoothly upsweeping, and is made when swooping down in an arc beside the predator. This call was also heard during agonistic behavior interactions with conspecifics, and may serve additionally or alternatively as an alarm call to their mate. Evolution The evolution of mobbing behavior can be explained using evolutionarily stable strategies, which are in turn based on game theory. Mobbing involves risks (costs) to the individual and benefits (payoffs) to the individual and others. The individuals themselves are often genetically related, and mobbing is increasingly studied with the gene-centered view of evolution by considering inclusive fitness (the carrying on of one's genes through one's family members), rather than merely benefit to the individual. 
Mobbing behavior varies in intensity depending on the perceived threat of a predator according to a study done by Dutour et al. (2016). However, particularly in terms of its surfacing in avian species, it is accepted to be the byproduct of mutualism, rather than reciprocal altruism according to Russell & Wright (2009). By cooperating to successfully drive away predators, all individuals involved increase their chances of survival and reproduction. An individual stands little chance against a larger predator, but when a large group is involved, the risk to each group member is reduced or diluted. This so-called dilution effect proposed by W. D. Hamilton is another way of explaining the benefits of cooperation by selfish individuals. Lanchester's laws also provide an insight into the advantages of attacking in a large group rather than individually. Another interpretation involves the use of signalling theory, and possibly the handicap principle. Here the idea is that a mobbing bird, by apparently putting itself at risk, displays its status and health so as to be preferred by potential partners. References External links Interspecific reciprocity explains mobbing behaviour of the breeding chaffinches, Fringilla coelebs Paper by Indrikis Krams and Tatjana Krama (PDF) Nature Photography – Using mobbing behavior in photography Birds mob Puff Adder – paper in ejournal Ornithological Observations Antipredator adaptations Evolutionary game theory Animal communication Bird behavior
Mobbing (animal behavior)
[ "Mathematics", "Biology" ]
2,415
[ "Behavior by type of animal", "Behavior", "Evolutionary game theory", "Biological defense mechanisms", "Game theory", "Antipredator adaptations", "Bird behavior" ]
11,726,298
https://en.wikipedia.org/wiki/Linear%20extension
In order theory, a branch of mathematics, a linear extension of a partial order is a total order (or linear order) that is compatible with the partial order. As a classic example, the lexicographic order of totally ordered sets is a linear extension of their product order. Definitions Linear extension of a partial order A partial order is a reflexive, transitive and antisymmetric relation. Given any partial orders ≤ and ≤* on a set X, ≤* is a linear extension of ≤ exactly when ≤* is a total order, and for every x and y in X, if x ≤ y then x ≤* y. It is that second property that leads mathematicians to describe ≤* as extending ≤. Alternatively, a linear extension may be viewed as an order-preserving bijection from a partially ordered set to a chain on the same ground set. Linear extension of a preorder A preorder is a reflexive and transitive relation. The difference between a preorder and a partial order is that a preorder allows two different items to be considered "equivalent", that is, both x ≤ y and y ≤ x hold, while a partial order allows this only when x = y. A relation ≤* is called a linear extension of a preorder ≤ if: ≤* is a total preorder, and for every x and y, if x ≤ y then x ≤* y, and for every x and y, if x < y then x <* y. Here, x < y means "x ≤ y and not y ≤ x". The difference between these definitions is only in condition 3. When the extension is a partial order, condition 3 need not be stated explicitly, since it follows from condition 2. Proof: suppose that x ≤ y and not y ≤ x. By condition 2, x ≤* y. By reflexivity, "not y ≤ x" implies that x ≠ y. Since ≤* is a partial order, x ≤* y and x ≠ y imply "not y ≤* x". Therefore, x <* y. However, for general preorders, condition 3 is needed to rule out trivial extensions. Without this condition, the preorder by which all elements are equivalent (x ≤* y and y ≤* x hold for all pairs x,y) would be an extension of every preorder. Order-extension principle The statement that every partial order can be extended to a total order is known as the order-extension principle. A proof using the axiom of choice was first published by Edward Marczewski (Szpilrajn) in 1930. Marczewski writes that the theorem had previously been proven by Stefan Banach, Kazimierz Kuratowski, and Alfred Tarski, again using the axiom of choice, but that the proofs had not been published. There is an analogous statement for preorders: every preorder can be extended to a total preorder. This statement was proved by Hansson. In modern axiomatic set theory the order-extension principle is itself taken as an axiom, of comparable ontological status to the axiom of choice. The order-extension principle is implied by the Boolean prime ideal theorem or the equivalent compactness theorem, but the reverse implication does not hold. Applying the order-extension principle to a partial order in which every two elements are incomparable shows that (under this principle) every set can be linearly ordered. This assertion that every set can be linearly ordered is known as the ordering principle, OP, and is a weakening of the well-ordering theorem. However, there are models of set theory in which the ordering principle holds while the order-extension principle does not. Related results The order extension principle is constructively provable for finite sets using topological sorting algorithms, where the partial order is represented by a directed acyclic graph with the set's elements as its vertices. Several algorithms can find an extension in linear time.
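As a sketch of the constructive approach just mentioned (an illustrative example rather than a specific published algorithm; the four-element poset at the end is made up), a Kahn-style topological sort of the corresponding directed acyclic graph produces one linear extension in time linear in the number of elements and comparable pairs:

from collections import deque

def linear_extension(elements, less_than):
    # less_than: a set of pairs (x, y) meaning x < y in the partial order.
    # Kahn's algorithm: repeatedly emit an element with no remaining predecessors.
    preds = {e: set() for e in elements}
    succs = {e: set() for e in elements}
    for x, y in less_than:
        preds[y].add(x)
        succs[x].add(y)
    queue = deque(e for e in elements if not preds[e])
    order = []
    while queue:
        x = queue.popleft()
        order.append(x)
        for y in succs[x]:
            preds[y].discard(x)
            if not preds[y]:
                queue.append(y)
    return order  # a total order compatible with the given partial order

# Example: the divisibility order on {1, 2, 3, 6}; one valid output is [1, 2, 3, 6].
print(linear_extension([1, 2, 3, 6], {(1, 2), (1, 3), (1, 6), (2, 6), (3, 6)}))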
Despite the ease of finding a single linear extension, the problem of counting all linear extensions of a finite partial order is #P-complete; however, it may be estimated by a fully polynomial-time randomized approximation scheme. Among all partial orders with a fixed number of elements and a fixed number of comparable pairs, the partial orders that have the largest number of linear extensions are semiorders. The order dimension of a partial order is the minimum cardinality of a set of linear extensions whose intersection is the given partial order; equivalently, it is the minimum number of linear extensions needed to ensure that each critical pair of the partial order is reversed in at least one of the extensions. Antimatroids may be viewed as generalizing partial orders; in this view, the structures corresponding to the linear extensions of a partial order are the basic words of the antimatroid. This area also includes one of order theory's most famous open problems, the 1/3–2/3 conjecture, which states that in any finite partially ordered set that is not totally ordered there exists a pair of elements of for which the linear extensions of in which number between 1/3 and 2/3 of the total number of linear extensions of An equivalent way of stating the conjecture is that, if one chooses a linear extension of uniformly at random, there is a pair which has probability between 1/3 and 2/3 of being ordered as However, for certain infinite partially ordered sets, with a canonical probability defined on its linear extensions as a limit of the probabilities for finite partial orders that cover the infinite partial order, the 1/3–2/3 conjecture does not hold. Algebraic combinatorics Counting the number of linear extensions of a finite poset is a common problem in algebraic combinatorics. This number is given by the leading coefficient of the order polynomial multiplied by Young tableau can be considered as linear extensions of a finite order-ideal in the infinite poset and they are counted by the hook length formula. References Order theory
Linear extension
[ "Mathematics" ]
1,119
[ "Order theory" ]
11,726,394
https://en.wikipedia.org/wiki/PG%201159%20star
A PG 1159 star, often also called a pre-degenerate, is a star with a hydrogen-deficient atmosphere that is in transition between being the central star of a planetary nebula and being a hot white dwarf. These stars are hot, with surface temperatures between 75,000 K and 200,000 K, and are characterized by atmospheres with little hydrogen and absorption lines for helium, carbon and oxygen. Their surface gravity is typically between 10⁴ and 10⁶ meters per second squared. Some PG 1159 stars are still fusing helium. The PG 1159 stars are named after their prototype, PG 1159-035. This star, found in the Palomar-Green survey of ultraviolet-excess stellar objects, was the first PG 1159 star discovered. It is thought that the atmospheric composition of PG 1159 stars is odd because, after they have left the asymptotic giant branch, they have reignited helium fusion. As a result, a PG 1159 star's atmosphere is a mixture of material which was between the hydrogen- and helium-burning shells of its AGB star progenitor. They are believed to eventually lose mass, cool, and become DO white dwarfs. Some PG 1159 stars have varying luminosities. These stars vary slightly (5–10%) in brightness due to non-radial gravity wave pulsations within themselves. They vibrate in a number of modes simultaneously, with typical periods between 300 and 3,000 seconds. The first known star of this type is also PG 1159-035, which was found to be variable in 1979, and was given the variable star designation GW Vir in 1985. These stars are called GW Vir stars, after their prototype, or the class may be split into DOV and PNNV stars. See also Planetary nebula White dwarf References Star types White dwarfs
PG 1159 star
[ "Astronomy" ]
417
[ "Star types", "Astronomical classification systems" ]
11,726,660
https://en.wikipedia.org/wiki/Canadian%20Society%20for%20Biomechanics
Canadian Society for Biomechanics / Société canadienne de biomécanique (CSB/SCB) was formed in 1973. The CSB is an Affiliated Society with the International Society of Biomechanics (ISB). The purpose of the Society is to foster research and the interchange of information on the biomechanics of human physical activity. Biomechanics research is being performed more and more by people from diverse disciplinary and professional backgrounds. CSB/SCB is attempting to enhance interdisciplinary communication and thereby improve the quality of biomechanics research and facilitate application of findings by bringing together therapists, physicians, engineers, sport researchers, ergonomists, and others who are using the same pool of basic biomechanics techniques but studying different human movement problems. External links Canadian Society for Biomechanics Official Site International Society of Biomechanics Official Site Canadian Society for Biomechanics podcast Biomechanics Professional associations based in Canada
Canadian Society for Biomechanics
[ "Physics" ]
201
[ "Biomechanics", "Mechanics" ]
11,726,833
https://en.wikipedia.org/wiki/Welding%20Procedure%20Specification
A Welding Procedure Specification (WPS) is a formal document describing welding procedures. It is an internal document used by welding companies to instruct welders (or welding operators) on how to achieve quality production welds that meet all relevant code requirements. Each company typically develops their own WPS for each material alloy and for each welding type used. Specific codes and/or engineering societies are often the driving force behind the development of a company's WPS. A WPS is supported by a Procedure Qualification Record (PQR or WPQR), a formal record of a test weld performed and rigorously tested to ensure that the procedure will produce a good weld. Individual welders are certified with a qualification test documented in a Welder Qualification Test Record (WQTR) that shows they have the understanding and demonstrated ability to work within the specified WPS. Introduction The following are definitions for WPS and PQR found in various codes and standards: According to the American Welding Society (AWS), a WPS provides in detail the required welding variables for specific application to assure repeatability by properly trained welders. The AWS defines welding PQR as a record of welding variables used to produce an acceptable test weldment and the results of tests conducted on the weldment to qualify a Welding Procedure Specification. For steel construction (civil engineering structures) AWS D1.1 is a widely used standard. It specifies either a pre-qualification option (chapter 3) or a qualification option (chapter 4) for approval of welding processes. The American Society of Mechanical Engineers (ASME) similarly defines a WPS as a written document that provides direction to the welder or welding operator for making production welds in accordance with Code requirements. ASME also defines welding PQR as a record of variables recorded during the welding of the test coupon. The record also contains the test results of the tested specimens. The Canadian Welding Bureau, through CSA Standards W47.1, W47.2 and W186, specifies both a WPS and a Welding Procedure Data Sheet (WPDS) to provide direction to the welding supervisor, welders and welding operators. The WPS provides general information on the welding process and material grouping being welded, while the WPDS provides specific welding variables/parameter/conditions for the specific weldment. All WPS and WPDS must be independently reviewed and accepted by the Canadian Welding Bureau prior to use. These CSA standards also define requirements for procedure qualification testing (PQT) to support the acceptance of the WPDS. A record of the procedure qualification test and the results must be documented on a procedure qualification record (PQR). All PQTs are independently witnessed by the Canadian Welding Bureau. In Europe, the European Committee for Standardization (CEN) has adopted the ISO standards on welding procedure qualification (ISO 15607 to ISO 15614), which replaced the former European standard EN 288. EN ISO 15607 defines a WPS as "A document that has been qualified by one of the methods described in clause 6 and provides the required variables of the welding procedure to ensure repeatability during production welding". The same standard defines a Welding Procedure Qualification Record (WPQR) as "Record comprising all necessary data needed for qualification of a preliminary welding procedure specification". 
In addition to the standard WPS qualification procedure specified in ISO 15614, the ISO 156xx series of standards provides also for alternative WPS approval methods. These include: Tested welding consumables (ISO 15610), Previous welding experience (ISO 15611), Standard welding procedure (ISO 15612) and Preproduction welding test (ISO 15613). In the oil and gas pipeline sector, the American Petroleum Institute API 1104 standard is used almost exclusively worldwide. API 1104 accepts the definitions of the American Welding Society code AWS A3.0. WPS is of two types- Prequalified WPS(pWPS) & qualified WPS. See also List of welding codes Welder certification Welding How to write a welding procedure specification (ISO 15614-1) Writing compliant welding procedure specifications ISO 15614-1 (2017) Changes and Updates from previous versions References List of standards ASME Boiler and Pressure Vessel Code section IX: "Qualification Standard for welding and brazing procedures, welders, brazers and welding and brazing operators" EN ISO 15607: "Specification and qualification of welding procedures for metallic materials - General rules (ISO 15607:2003)" EN ISO 15609: "Specification and qualification of welding procedures for metallic materials - Welding procedure specification (ISO 15609)", five parts. EN ISO 15614: "Specification and qualification of welding procedures for metallic materials - Welding procedure test (ISO 15614)", 13 parts. API 1104: "Welding of pipelines and related facilities", parts 5 (procedures) Welding
Welding Procedure Specification
[ "Engineering" ]
1,011
[ "Welding", "Mechanical engineering" ]
11,727,322
https://en.wikipedia.org/wiki/Telescoping%20%28mechanics%29
Telescoping in mechanics describes the movement of one part sliding out from another, lengthening an object (such as a telescope or the lift arm of an aerial work platform) from its rest state. In modern equipment this can be achieved by hydraulics, but pulleys are generally used for simpler designs such as extendable ladders and amateur radio antennas. See also Telescoping bolt Telescopic cylinder Telescoping (rail cars) Mechanics Simple machines References
Telescoping (mechanics)
[ "Physics", "Technology", "Engineering" ]
99
[ "Machines", "Classical mechanics stubs", "Classical mechanics", "Physical systems", "Mechanics", "Mechanical engineering", "Simple machines" ]
11,727,583
https://en.wikipedia.org/wiki/Macropod%20hybrid
Macropod hybrids are hybrids of animals within the family Macropodidae, which includes kangaroos and wallabies. Several macropod hybrids have been experimentally bred. Some hybrids between similar species have been achieved by housing males of one species and females of the other together to limit the choice of a mate. To create a "natural" macropod hybrid, young animals of one species have been transferred to the pouch of another so as to imprint the other species on them. In-vitro fertilization has also been used and the fertilized egg implanted into a female of either species. References Macropods Mammal hybrids Intergeneric hybrids
Macropod hybrid
[ "Biology" ]
132
[ "Intergeneric hybrids", "Hybrid organisms" ]
11,728,075
https://en.wikipedia.org/wiki/Inclusion%20order
In the mathematical field of order theory, an inclusion order is the partial order that arises as the subset-inclusion relation on some collection of objects. In a simple way, every poset P = (X,≤) is (isomorphic to) an inclusion order (just as every group is isomorphic to a permutation group – see Cayley's theorem). To see this, associate to each element x of X the set X≤(x) = { y ∈ X : y ≤ x }; then the transitivity of ≤ ensures that, for all a and b in X, we have X≤(a) ⊆ X≤(b) precisely when a ≤ b. There can be sets S of cardinality less than |X| such that P is isomorphic to the inclusion order on S. The size of the smallest possible S is called the 2-dimension of P. Several important classes of poset arise as inclusion orders for some natural collections, like the Boolean lattice Qn, which is the collection of all 2^n subsets of an n-element set, the interval-containment orders, which are precisely the orders of order dimension at most two, and the dimension-n orders, which are the containment orders on collections of n-boxes anchored at the origin. Other containment orders that are interesting in their own right include the circle orders, which arise from disks in the plane, and the angle orders. See also Birkhoff's representation theorem Intersection graph Interval order References Order theory
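As a worked illustration of the down-set construction just described, the short sketch below (not part of the original article) realizes a small poset as an inclusion order; the particular poset used, divisibility on {1, 2, 3, 6}, is an arbitrary choice for demonstration.

# Realizing a finite poset as an inclusion order via principal down-sets.
# The poset used here (divisibility on {1, 2, 3, 6}) is just an example.

X = [1, 2, 3, 6]
def leq(a, b):
    return b % a == 0          # a <= b  iff  a divides b

# Map each element x to its principal down-set {y in X : y <= x}.
down = {x: frozenset(y for y in X if leq(y, x)) for x in X}

# Check: a <= b in the poset exactly when down(a) is a subset of down(b).
for a in X:
    for b in X:
        assert leq(a, b) == (down[a] <= down[b])

print(down)   # e.g. 6 is sent to the set {1, 2, 3, 6}

The assertion is exactly the statement that the map sending x to X≤(x) is an order isomorphism onto its image ordered by inclusion.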
Inclusion order
[ "Mathematics" ]
273
[ "Algebra stubs", "Order theory", "Algebra" ]
11,728,427
https://en.wikipedia.org/wiki/Marlock
A marlock or moort is a shrubby or small-tree form of Eucalyptus found in Western Australia. Unlike the mallee, it is single-stemmed and lacks a lignotuber. It has a dense canopy of leaves which often extends to near ground level. Marlock species include: Bald Island marlock (Eucalyptus conferruminata or Eucalyptus lehmannii) black marlock, black-barked marlock (Eucalyptus redunca) Comet Vale marlock (Eucalyptus comitae-vallis) flowering marlock, long-flowered marlock, long-leaved marlock (Eucalyptus macrandra) Forrest's marlock (Eucalyptus forrestiana) limestone marlock (Eucalyptus decipiens) Quoin Head marlock (Eucalyptus mcquoidii) Moorts are a form of marlock with smooth, grey bark, including the following species: moort or round-leaved moort (Eucalyptus platypus) red-flowered moort (Eucalyptus nutans) Stoate's moort (Eucalyptus stoatei) References Plant morphology Eucalyptus Myrtales of Australia
Marlock
[ "Biology" ]
228
[ "Plant morphology", "Plants" ]
11,728,902
https://en.wikipedia.org/wiki/Fixation%20%28psychology%29
Fixation () is a concept (in human psychology) that was originated by Sigmund Freud (1905) to denote the persistence of anachronistic sexual traits. The term subsequently came to denote object relationships with attachments to people or things in general persisting from childhood into adult life. Freud In Three Essays on the Theory of Sexuality (1905), Freud distinguished the fixations of the libido on an incestuous object from a fixation upon a specific, partial aim, such as voyeurism. Freud theorized that some humans may develop psychological fixation due to one or more of the following: A lack of proper gratification during one of the psychosexual stages of development. Receiving a strong impression from one of these stages, in which case the person's personality would reflect that stage throughout adult life. "An excessively strong manifestation of these instincts at a very early age [which] leads to a kind of partial fixation, which then constitutes a weak point in the structure of the sexual function". As Freud's thought developed, so did the range of possible 'fixation points' he saw as significant in producing particular neuroses. However, he continued to view fixation as "the manifestation of very early linkages which it is hard to dissolve between instincts and impressions and the objects involved in those impressions". Psychoanalytic therapy involved producing a new transference fixation in place of the old one. The new fixation – for example, a father-transference onto the analyst – may be very different from the old, but will absorb its energies and enable them eventually to be released for non-fixated purposes. Objections Whether a particularly obsessive attachment is a fixation or a defensible expression of love is at times debatable. Fixation to intangibles (i.e., ideas, ideologies, etc.) can also occur. The obsessive factor of fixation is also found in symptoms pertaining to obsessive compulsive disorder, which psychoanalysts linked to a mix of early (pregenital) frustrations and gratifications. Fixation has been compared to psychological imprinting at an early and sensitive period of development. Others object that Freud was attempting to stress the looseness of the ties between libido and object, and the need to find a specific cause for any given (perverse or neurotic) fixation. Post-Freudians Melanie Klein saw fixation as inherently pathological – a blocking of potential sublimation by way of repression. Erik H. Erikson distinguished fixation to zone – oral or anal, for example – from fixation to mode, such as taking in, as with his instance of the man who "may eagerly absorb the 'milk of wisdom' where he once desired more tangible fluids from more sensuous containers". Eric Berne developed his insight further as part of transactional analysis, suggesting that "particular games and scripts, and their accompanying physical symptoms, are based in appropriate zones and modes". Heinz Kohut saw the grandiose self as a fixation upon a normal childhood stage, while other post-Freudians explored the role of fixation in aggression and criminality. In popular culture Coleridge's Christabel has been seen as using witchcraft as a vehicle to explore psychological fixation. Tennyson has been considered to show a romantic fixation on days of old. See also References External links Claude Smadja, "Fixation" Fixation Freudian psychology Psychoanalytic terminology Sexology 1900s neologisms
Fixation (psychology)
[ "Biology" ]
732
[ "Behavioural sciences", "Behavior", "Sexology" ]
11,729,408
https://en.wikipedia.org/wiki/Hibbertia%20scandens
Hibbertia scandens, sometimes known by the common names snake vine, climbing guinea flower and golden guinea vine, is a species of flowering plant in the family Dilleniaceae and is endemic to eastern Australia. It is climber or scrambler with lance-shaped or egg-shaped leaves with the narrower end towards the base, and yellow flowers with more than thirty stamens arranged around between three and seven glabrous carpels. Description Hibbertia scandens is a climber or scrambler with stems long. The leaves are lance-shaped or egg-shaped with the narrower end towards the base, long and wide, sessile and often stem-clasping with the lower surface silky-hairy. The flowers are arranged in leaf axils, each flower on a peduncle long. The sepals are long and the petals are yellow, long with more than thirty stamens surrounding the three to seven glabrous carpels. Flowering occurs in most months and the fruit is an orange aril. Plants near the coast tend to be densely hairy with spatula-shaped leaves and have flowers with six or seven carpels, whilst those further inland are usually more or less glabrous with tapering leaves and flowers with three or four carpels. The flowers have been reported as having an unpleasant odour variously described as similar to mothballs or animal urine or sweet but with "a pronounced faecal element". Taxonomy Snake vine was first formally described in 1799 by German botanist Carl Willdenow who gave it the name Dillenia scandens in Species Plantarum. In 1805, Swedish botanist Jonas Dryander transferred the species into the genus Hibbertia as H. scandens in the Annals of Botany. The specific epithet (scandens) is derived from Latin, and means "climbing". Three varieties of H. scandens have been described and the names are accepted by the Australian Plant Census but not by the National Herbarium of New South Wales: Hibbertia scandens var. glabra (Maiden) C.T.White; Hibbertia scandens var. oxyphylla Domin; Hibbertia scandens (Willd.) Dryand. var. scandens. Distribution and habitat Hibbertia scandens grows on coastal sand dunes, in open forest and at rainforest margins in an area extending from Proserpine in north-eastern Queensland to the far south coast of New South Wales. The species also occurs as an uncommon weed in Auckland, New Zealand. Ecology Some pollination surveys place beetles (from the Scarabaeidae, Chrysomelidae and Curculionidae) as the main pollinators of Hibbertia scandens, as well as Hibbertia hypericoides , and other species from the Dilleniaceae family, they also place bees and flies as secondary importance (such as Keighery 1975). Use in horticulture This species is common in cultivation and adapts to a wide range of growing conditions, including where it is exposed to salt-laden winds. Although it readily grows in semi-shaded areas, it flowers best in full sun and prefers well-drained soil. As it is only hardy down to it requires winter protection in temperate regions. In the United Kingdom it has gained the Royal Horticultural Society's Award of Garden Merit. In popular culture Hibbertia scandens appeared on an Australian postage stamp in 1999. See also List of flora on stamps of Australia References External links scandens Flora of New South Wales Flora of Queensland Plants described in 1806 Taxa named by Carl Ludwig Willdenow Plants that can bloom all year round
Hibbertia scandens
[ "Biology" ]
748
[ "Plants that can bloom all year round", "Plants" ]
11,730,273
https://en.wikipedia.org/wiki/GRAVES%20%28system%29
GRAVES () is a French radar-based space surveillance system, akin to the United States Space Force Space Surveillance System. Space surveillance system Using radar measurements, the French Air and Space Force is able to spot satellites orbiting the Earth and determine their orbit. The GRAVES system took 15 years to develop, and became operational in November, 2005. GRAVES is also a contributing system to the European Space Agency's Space Situational Awareness Programme (SSA). GRAVES is a bistatic radar system using Doppler and directional information to derive the orbits of the detected satellites. Its operating frequency is 143.050 MHz, with the transmitter being located on a decommissioned airfield near Broye-lès-Pesmes at and the receiver at a former missile site near Revest du Bion on the Plateau d'Albion at . Data processing and generation of satellite orbital elements is performed at the Balard Air Complex in Paris, . References External links Official website Military radars of France Bistatic and multistatic radars Space Situational Awareness Programme
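Because the article describes GRAVES as deriving orbits from Doppler and directional measurements at 143.050 MHz, a rough worked example of the Doppler shifts involved may be helpful. The radial velocities below are assumed illustrative values for a low-Earth-orbit pass, not GRAVES data, and the simple one-way formula used here ignores the bistatic geometry, in which the observed shift depends on the combined range rates toward the transmitter and the receiver.

# Rough illustration of Doppler shifts at the GRAVES operating frequency.
# Radial velocities are assumed example values, not measurements.

F_TX = 143.050e6       # transmit frequency, Hz
C = 299_792_458.0      # speed of light, m/s

def doppler_shift(radial_velocity_m_s):
    """Approximate one-way Doppler shift in Hz."""
    return F_TX * radial_velocity_m_s / C

for v in (1_000.0, 4_000.0, 7_500.0):    # m/s
    print(f"{v:7.0f} m/s -> {doppler_shift(v):7.1f} Hz")

Even at orbital speeds the shift is only a few kilohertz; in practice the shape of the Doppler curve over a pass, together with direction information, is what constrains the orbit.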
GRAVES (system)
[ "Environmental_science" ]
209
[ "Space Situational Awareness Programme" ]
11,730,351
https://en.wikipedia.org/wiki/LSE%20%28programming%20language%29
LSE () is a programming language developed at Supélec and Télémécanique from the late 1960s to the mid-1970s. It is similar to BASIC, except with French-language instead of English-language keywords. It was derived from an earlier language called LSD, also developed at Supélec. It is most commonly said to be an acronym for Langage Symbolique d'Enseignement (Symbolic Teaching Language), but other expansions are also known (e.g. Langage de Sup-Élec, or the more cynical Langage Sans Espoir (hopeless language)). LSE originally flourished because being "interpreted", the "tokens" used were common to all languages and with a nationalized "editor", tokenized programs could be listed in any language. Obviously, the support from the French Ministry of National Education, was very important, but it declined as the ministry lost interest. It went through a number of revisions; earlier versions of LSE lacked full support for structured programming, later versions such as LSE-83 (aka LSE-1983) by Jacques Arsac added structured programming support, along with exception handling. Even later revisions, such as LSE-2000, added more functionality, new types, new operators (NI, ET QUE, OU QUE and SELON-DANS-SINON), flow control commands, etc. Code examples 99 Bottles (AFNOR Z 65-020) 1*CHANSON DES 99 BOUTEILLES DE BIERE 2*PASCAL BOURGUIGNON, 2003 10 FAIRE 20 POUR N←99 PAS -1 JUSQUA 1 20 &STROF(N) 30 AFFICHER['IL EST TEMPS D’’ALLER AU MAGASIN.',/] 40 TERMINER 100 PROCEDURE &STROF(N) LOCAL S1,S0;CHAINE S1,S0;S1←"S";S0←"S" 110 SI N=2 ALORS S0←"" SINON SI N=1 ALORS DEBUT S1←"";S0←"" FIN 120 AFFICHER[U,' BOUTEILLE',U,' DE BIERE SUR LE MUR.',/]N,S1 130 AFFICHER[U,' BOUTEILLE',U,' DE BIERE.',/]N,S1 140 AFFICHER['EN PRENDRE UNE, LA FAIRE PASSER.',/] 150 AFFICHER[U,' BOUTEILLE',U,' DE BIERE SUR LE MUR.',2/]N-1,S0 160 RETOUR Anagrams (LSE-1983) Example from Jacques Arsac in LSE83: 1 CHAINE A,B,BP 5 FAIRE 10 AFFICHER 'A = ' ;LIRE A ; SI A=’’ ALORS FINI IS 11 AFFICHER 'B = ' ;LIRE B ; BP ← B 12 15 R SI LGR(A) # LGR(B) ALORS .FAUX. SINON &ANAG(A,B) IS 20 SI R ALORS AFFICHER A, 'EST ANAGRAMME DE ',BP 21 SINON AFFICHER A, 'N’’EST PAS ANAGRAMME DE 1, BP 22 IS 25 BOUCLER 29 30 TERMINER 31 50 FONCTION &ANAG(U,V) LOCAL J {lgr(u)=lgr(v)} 51 SI U=' ' ALORS RESULTAT .VRAI. IS 52 SI J = 0 ALORS RESULTAT .FAUX. IS 54 RESULTAT &ANAG(SCH(U,2, ' '),MCH(V,J,l, ' ')) $55 &ANAG $99 Largest common divisor, Euclid's algorithm (LSE2000) (* ** MÉTHODE D'EUCLIDE POUR TROUVER LE PLUS GRAND DIVISEUR COMMUN D'UN ** NUMÉRATEUR ET D'UN DÉNOMINATEUR. ** L. Goulet 2010 *) PROCÉDURE &PGDC(ENTIER U, ENTIER V) : ENTIER LOCAL U, V ENTIER T TANT QUE U > 0 FAIRE SI U< V ALORS T←U U←V V←T FIN SI U ← U - V BOUCLER RÉSULTAT V FIN PROCÉDURE PROCÉDURE &DEMO(ENTIER U, ENTIER V) LOCAL U, V AFFICHER ['Le PGDC de ',U,'/',U,' est ',U,/] U, V, &PGDC(U,V) FIN PROCÉDURE &DEMO(9,12) References External links 99 Bottles program written in LSE An implementation of L.S.E. Procedural programming languages Non-English-based programming languages BASIC programming language family Programming languages created in the 1970s
LSE (programming language)
[ "Technology" ]
1,040
[ "Non-English-based programming languages", "Natural language and computing" ]
11,730,386
https://en.wikipedia.org/wiki/Canna%20leaf%20roller
Canna leaf roller refers to two different Lepidoptera species that are pests of cultivated cannas. Caterpillars of the Brazilian skipper butterfly (Calpodes ethlius), also known as the larger canna leaf roller, cut the leaves and roll them over to live inside while pupating and eating the leaf. In addition, caterpillars of the lesser canna leaf roller (Geshna cannalis), a grass moth, will sew the leaves shut before they can unfurl by spinning a silk thread around the leaf. The resultant leaf damage can be distressing to a gardener. References External links Galveston County Master Gardener Association Butterflies and Moths of North America: Brazilian Skipper Garden pests Insect common names
Canna leaf roller
[ "Biology" ]
152
[ "Pests (organism)", "Garden pests" ]
11,730,514
https://en.wikipedia.org/wiki/Geographical%20centre%20of%20Norway
The geographical centre of Norway has been identified as a spot in the mountains at the southeastern end of the Ogndalen valley in the southeastern part of Steinkjer Municipality in Trøndelag county, located at . A monument marking the significance of the spot was unveiled in a ceremony on 3 September 2006, with the hope that it would become a tourist attraction. The site lies just to the west of the large lake Skjækervatnet. Method of calculation Harald Stavestrand at the Norwegian Mapping and Cadastre Authority looked for the balancing point of mainland Norway with its islands, not including sea area, the overseas areas of Svalbard and Jan Mayen, or considering elevation. Stavestrand had feared that the centre would turn out to be in Sweden due to the curved shape of Norway, but it ultimately ended up within the borders of Norway. Other locations Several other places have been claimed to be the centre of Norway, using differing methods. They include: Harran in Grong Municipality (halving the mainland's latitude "length") the village of Vilhelmina in Vilhelmina Municipality in Sweden (halving the great-circle distance) Grane Church in Grane Municipality (found by halving longer great circle distances) Alstahaug Church in Alstahaug Municipality (found by halving longer great circle distances) Mosjøen in Vefsn Municipality (midpoint along the road from Lindesnes at the southern tip of Norway to the North Cape at the northern tip of Norway) References External links Statens kartverk: Norges midtpunkt Centre Geography of Trøndelag Norway Steinkjer
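The "balancing point" method described above is essentially an area centroid calculation. The sketch below shows the underlying idea for a single polygon using the shoelace formula; the coordinates are made up, and a real computation for Norway would need projected map coordinates for the mainland and all islands, plus careful treatment of the Earth's curvature.

# Area centroid of a simple polygon via the shoelace formula.
# Coordinates are invented; this only illustrates the balancing-point idea.

def polygon_centroid(points):
    """Return the centroid (cx, cy) of a simple polygon given its vertices."""
    a = cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6.0 * a), cy / (6.0 * a)

print(polygon_centroid([(0, 0), (4, 0), (4, 2), (0, 2)]))   # -> (2.0, 1.0)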
Geographical centre of Norway
[ "Physics", "Mathematics" ]
343
[ "Point (geometry)", "Geometric centers", "Geographical centres", "Symmetry" ]
11,730,667
https://en.wikipedia.org/wiki/Oleamide
Oleamide is an organic compound with the formula C18H35NO. It is the amide derived from the fatty acid oleic acid. It is a colorless waxy solid and occurs in nature. Sometimes labeled as a fatty acid primary amide (FAPA), it is biosynthesized from N-oleoylglycine. Biochemical and medical aspects In terms of natural occurrence, oleamide was first detected in human plasma. It was later shown to accumulate in the cerebrospinal fluid during sleep deprivation and to induce sleep in animals. It has been considered as a treatment for mood and sleep disorders, as well as cannabinoid-regulated depression. In terms of its sleep inducing effects, it is speculated that oleamide interacts with multiple neurotransmitter systems. Some in-vitro studies show that cis-oleamide is an agonist for the cannabinoid receptor CB-1 with an affinity around 8 micromolar. However, given oleamide's relatively low affinity for CB-1 and uncertainty about the concentration and biological role of oleamide in-vivo, it has been argued that it is premature to classify oleamide as an endocannabinoid. At larger doses oleamide can lower the body temperature of mice by about 2 degrees, with the effect lasting about two hours. The mechanism for this remains unknown. Oleamide has been found to enhance a PPARα-dependent increase in doublecortin, a marker of neurogenesis in the hippocampus. Oleamide is rapidly metabolized by fatty acid amide hydrolase (FAAH), the same enzyme that metabolizes anandamide. It has been postulated that some effects of oleamide are caused by increased concentrations of anandamide brought about through the inhibition of FAAH. It has been claimed that oleamide increases the activity of choline acetyltransferase, an enzyme that is critical in the production of acetylcholine. Other occurrences Oleamide has been found in Ziziphus jujuba, also known as Jujube fruit. Synthetic oleamide has a variety of industrial uses, including as a lubricant. Oleamide was found to be leaching out of polypropylene plastics in laboratory experiments, affecting experimental results. Since polypropylene is used in a wide number of food containers such as those for yogurt, the problem is being studied. Oleamide is "one of the most frequent non-cannabinoid ingredients associated with Spice products." Analysis of 44 synthetic cannabinoid products revealed oleamide in 7 of the products tested. See also Anandamide Fatty acid amide hydrolase Virodhamine References Eicosanoids Endocannabinoids Fatty acid amides Hypnotics Lipids Lubricants
Oleamide
[ "Chemistry", "Biology" ]
591
[ "Biomolecules by chemical classification", "Behavior", "Hypnotics", "Organic compounds", "Sleep", "Lipids" ]
11,730,865
https://en.wikipedia.org/wiki/Noninvasive%20glucose%20monitor
Noninvasive glucose monitoring (NIGM), called Noninvasive continuous glucose monitoring when used as a CGM technique, is the measurement of blood glucose levels, required by people with diabetes to prevent both chronic and acute complications from the disease, without drawing blood, puncturing the skin, or causing pain or trauma. The search for a successful technique began about 1975 and has continued to the present without a clinically or commercially viable product. Early history , only one such product had been approved for sale by the FDA, based on a technique for electrically pulling glucose through intact skin, and it was withdrawn after a short time owing to poor performance and occasional damage to the skin of users. Hundreds of millions of dollars have been invested in companies who have sought the solution to this long-standing problem. Approaches that have been tried include near-infrared spectroscopy (NIRS, measuring glucose through the skin using light of slightly longer wavelengths than the visible region), transdermal measurement (attempting to pull glucose through the skin using either chemicals, electricity or ultrasound), measuring the amount that polarized light is rotated by glucose in the front chamber of the eye (containing the aqueous humor), and many others. A 2012 study reviewed ten technologies: bioimpedance spectroscopy, microwave/RF sensing, fluorescence technology, mid-infrared spectroscopy, near-infrared spectroscopy, optical coherence tomography, optical polarimetry, Raman spectroscopy, reverse iontophoresis, and ultrasound technology, concluding with the observation that none of these had produced a commercially available, clinically reliable device and that therefore, much work remained to be done. , disregarding the severe shortcomings mentioned above, at least one non-invasive glucose meter was being marketed in a number of countries. Still, as the mean absolute deviation of this device was nearly 30% in clinical trials, "further research efforts were desired to significantly improve the accuracy [...]". While multiple technologies have been tried, Raman spectroscopy has gained traction as one promising technology for measuring glucose in interstitial fluid. Early attempts include C8 Medisensors and the Laser Biomedical Research Center at Massachusetts Institute of Technology (MIT) which have been working on a Raman spectroscopy sensor for more than 20 years and conducting clinical investigations in collaboration with the Clinical Research Center at University of Missouri, Columbia, US. In 2018 a paper in PLOS ONE showed independent validation data from a clinical investigation comprising 15 subjects with diabetes mellitus type 1 with a mean absolute relative difference (MARD) of 25.8%. The system used, was a custom-built confocal Raman setup. In 2019 researchers at the Samsung Advanced Institute of Technology (SAIT), Samsung Electronics, in collaboration with the Laser Biomedical Research Center MIT developed a new approach based on Raman spectroscopy that allowed them to see the glucose signal directly. The researchers tested the system in pigs and could get accurate glucose readings for up to an hour after initial calibration. In 2020, German Institute for Diabetes-Technology published data from 15 subjects with type 1 diabetes on a new prototype GlucoBeam based on Raman spectroscopy from RSP Systems Denmark, showing a MARD of 23.6% on independent validation in out-patient setup with up till 8 days without recalibration. 
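Since accuracy figures in this article are quoted as MARD values, a minimal sketch of how that metric is computed from paired sensor and reference readings may be useful; the numbers below are invented for illustration and are not data from any of the devices discussed.

# Mean absolute relative difference (MARD) for paired glucose readings.
# The sample values are invented, purely to illustrate the formula.

def mard(sensor, reference):
    """MARD in percent over paired sensor/reference readings."""
    diffs = [abs(s - r) / r for s, r in zip(sensor, reference)]
    return 100.0 * sum(diffs) / len(diffs)

sensor_mg_dl    = [110, 152, 95, 201, 130]
reference_mg_dl = [100, 160, 90, 210, 140]
print(f"MARD = {mard(sensor_mg_dl, reference_mg_dl):.1f} %")

Lower MARD means better agreement with the reference method; published CGM studies also report error-grid statistics because MARD alone does not show whether errors fall in clinically dangerous ranges.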
With the accuracy of marketed BGM devices in the US between 5.6% and 20.8%, a NIGM solution would likely need to have an accuracy with a MARD below 20% to be widely accepted. The number of clinical trials of non-invasive glucose monitors has grown throughout the 21st century. While the National Institutes of Health recorded only 4 clinical investigations of the technology from 2000 to 2015, there were 16 from 2016 to 2020. Wave of new research and development (2020-) From approximately 2020 onwards, there has been increased R&D activity in the space of new NIGM solutions (particularly CGM ones), with renewed focus on approaches that had already been explored, and new ones altogether. This includes both large tech companies, such as Apple and Samsung, and startup companies. Optical sensing techniques Optical spectroscopy methods in continuous glucose monitoring (CGM) utilize light to measure glucose levels in the interstitial fluid or blood. These methods typically involve shining a specific wavelength of light (near-infrared, mid-infrared, or Raman) onto the skin, where it interacts with the glucose molecules. The light either gets absorbed or scattered by the glucose, and the resulting changes in the light's properties are detected and analyzed. Mid-Infrared spectroscopy DiaMonTech AG is a Berlin, Germany-based privately-held company developing the D-Pocket, a medical device that uses infrared laser technology to scan the tissue fluid in the skin and detect glucose molecules. Short pulses of infrared light are sent to the skin, which are absorbed by the glucose molecules. This generates heat waves that are detected using its patented IRE-PTD method. The company claims a high selectivity for its method; results of a first study have been published in the Journal of Diabetes Science and Technology. In this study, a Median Absolute Relative Difference of 11.3% is claimed. DiaMonTech has announced that its envisioned follow-up product, the D-Sensor, will feature continuous measurements, making it a CGM, though no release date has been given. Near-Infrared spectroscopy Apple has been working on a noninvasive CGM combining silicon photonics and optical absorption spectroscopy that it seeks to integrate into its Apple Watch. In March 2023 it was reported to have established proof-of-concept of a noninvasive CGM. Another company working on noninvasive CGM is Masimo, which sued Apple for patent infringement in this area in 2020. Masimo has also filed new patents through its subsidiary Cercacor (pending as of September 2023) covering a joint continuous glucose monitoring and pump-closed loop delivery system. U.S. company Rockley Photonics is building a Near-Infrared system. This approach integrates Rockley's short-wave infrared (SWIR) spectroscopy technology into its miniaturized photonic integrated circuit (PIC) chips, with the resulting mechanism aiming to be embedded into a smartwatch-style wearable device. Lithuanian company BROLIS is another emerging NIGM player using NIR spectroscopy. Based on news reports, it developed a fully functioning prototype in 2019. Raman spectroscopy Samsung announced that it would be incorporating glucose monitoring into its smartwatch with a targeted release year of 2025. It is not clear whether the watch will integrate readings from an external CGM such as Dexcom's or Abbott's, or work standalone.
The company in 2020 published literature regarding the aforementioned non-invasive method it had developed with MIT scientists to engage in continuous glucose monitoring using spectroscopy. The company has filed patents related to this technology. In January 2024, Samsung gave an update affirming its NIGM ambitions but did not give a release date. Seoul, South Korea-based start-up Apollon commenced work on a Raman CGM and secured a partnership with MIT in 2023. In 2024, it secured 2.3B won (approximately $1.5M) in pre-Series A funding for development of needle-free glucose monitoring. Liom (formerly called Spiden) is a Swiss startup building a multi-biomarker and drug level monitoring noninvasive smartwatch wearable (using Raman spectroscopy) with continuous glucose monitoring capability as its first application. It had not attained regulatory approval as of October 2023. In January 2024, Liom declared it had developed a prototype, with a claimed MARD (mean absolute relative difference) of approximately 9% against a reference glucose measurement. In 2023, RSP Systems Denmark published data from prospective measurements with at least 15 days following calibration. The study was a home-based clinical study involving 160 subjects with diabetes, the largest study of its kind to date. Data from a subset of subjects with type 2 diabetes showed 99.8% of measurements within A + B zones in the consensus error grid and a mean absolute relative difference of 14.3%. The full clinical study involved 160 subjects, 137 with type 1 on intensive insulin therapy or insulin pumps. The measurements from the type 1 diabetes subgroup showed 96.5% of the points in zones A + B, while the typical indices of accuracy, the mean absolute relative difference (MARD) and RMSE, over the 15 days were 19.9% and 1.9 mmol/L, respectively. With 12,374 paired data points, the size of the dataset demonstrates the robustness of the Raman spectroscopy-based approach. Electromagnetic sensing techniques Electromagnetic sensing for non-invasive glucose monitoring utilizes the interaction between electromagnetic waves and the glucose molecules present in the body. These techniques typically involve applying a specific radio frequency or microwave signal to the skin, which then penetrates the underlying tissues. The presence of glucose alters the dielectric properties (permittivity and conductivity) of the tissue, leading to changes in the amplitude, phase, or other characteristics of the transmitted or reflected electromagnetic waves. Electrochemical glucose monitoring is based on the glucose oxidation reaction. Glucose oxidase is the enzyme that is specific to glucose. Glucose is oxidized by oxygen in the presence of glucose oxidase and water to make gluconolactone and hydrogen peroxide. Hydrogen peroxide is further oxidized at the electrode, producing free electrons, resulting in an electrical current proportional to the glucose concentration in an area of interest. By measuring this current, the sensor can accurately determine the glucose level. Radiofrequency-based approaches Haifa, Israel-based company HAGAR completed a study of its GWave non-invasive CGM, reporting high accuracy. This sensor uses radiofrequency waves to measure glucose levels in the blood. The device had not received regulatory approval anywhere as of August 2023.
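As a toy illustration of the electrochemical sensing principle described earlier in this section (a sensor current proportional to glucose concentration), the sketch below converts a measured current into a glucose estimate using an assumed linear calibration; the sensitivity and baseline values are invented and do not describe any real sensor.

# Toy amperometric calibration: glucose concentration from sensor current.
# Sensitivity and baseline are assumed illustrative values only.

SENSITIVITY_NA_PER_MM = 2.5    # assumed response, nA per mmol/L of glucose
BASELINE_NA = 0.4              # assumed background current, nA

def glucose_mmol_per_l(current_na):
    """Estimate glucose concentration from a measured current."""
    return max(current_na - BASELINE_NA, 0.0) / SENSITIVITY_NA_PER_MM

for i_na in (5.4, 13.0, 25.0):
    print(f"{i_na:5.1f} nA -> {glucose_mmol_per_l(i_na):5.2f} mmol/L")

Real sensors additionally need temperature compensation and periodic recalibration because the sensitivity drifts over time.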
One of the criticisms of radiofrequency technology as a way of measuring glucose is that studies in 2019 found that glucose can only be detected in the far infrared (nanometer wavelengths), rather than radiofrequencies even in the centimeter and millimeter wavelength range, putting into question the viability of radio frequencies for measuring glucose. A second study (performed in Israel) reported a GWave prototype showed a MARD of 6.7% though Food and Drug Administration (FDA) comparator standards were not applied (the study determined accuracy (MARD) by comparing with a regular Abbott Blood Glucose Monitoring/fingerstick device as a comparator, which measures capillary blood glucose levels, not venous ones as required for FDA CGM approval). KnowLabs is a Seattle, U.S-based company building a CGM called the Bio-RFID sensor, which works by sending radio waves through the skin to measure molecular signatures in the blood, which Know Labs' machine learning algorithms use to compute the user's blood sugar levels. The company reported that it had built a prototype, but had not attained regulatory approval as of August 2023. In March 2024, news outlets reported that the company's sensor had attained a MARD of 11.1%. The BioXensor developed by British company BioRX uses patented radio frequency technology, alongside a multiple sensor (also capturing blood oxygen levels, ECG, respiration rate, heart rate and body temperature) approach. The company claims this enables the measurement of blood glucose levels every minute reliably, accurately, and non-invasively. BioXensor had not received regulatory approval . Afon Technology, based in Wales, is developing Glucowear, a non-invasive continuous glucose monitor (CGM) using radiofrequency (RF) technology. This device, worn under a smartwatch, has the goal to monitor blood glucose in real-time. Their approach uses RF signals to detect glucose levels beneath the skin, differing from optical sensor-based methods. Synex Medical (based in Boston, US and Toronto, Canada) uses portable magnetic resonance spectroscopy (MRS) for non-invasive glucose monitoring. Their compact devices aim to measure blood metabolites like glucose in real-time by analyzing the magnetic properties of hydrogen atoms in glucose molecules. Another noninvasive system was attempted to be built by US company Movano Health. It envisioned a small ring placed on the arm. Movano said in 2021 that it was building the smallest ever custom radio frequency (RF)-enabled sensor designed for simultaneous blood pressure and glucose monitoring. Movano is listed as MOVE on NASDAQ. By August 2023 Movano had shifted to building sensor rings for other parameters, such as heart rate, blood oxygen levels, respiration rate, skin temperature variability, and menstrual symptom tracking. Reverse iontophoresis (Electromagnetic sensing in sweat) SugarBeat, built by Nemaura Medical, is a wireless non-invasive blood glucose monitoring system using a disposable skin patch. The patch connects to a rechargeable transmitter which detects blood sugar and transfers the data to a mobile app every five minutes. The patch can be used for 24 hours. Electronic currents are used to draw interstitial fluid to the surface to analyse the glucose level. SugarBeat has achieved regulatory approval in Saudi Arabia and Europe, though market penetration rates remain very low. The company declared US$503,906 in revenue for the fiscal year ending March 2022, which compares to Dexcom's more than $3 billion. 
it had submitted a US FDA premarket approval application for sugarBEAT. Nemaura was listed on NASDAQ since January 2018 as NMRD. However, due to poor performance (a below than $35m market cap) and low trading volumes it was threatened with delisting from NASDAQ (in April 2023). It was delisted from NASDAQ January 4, 2024 and is currently trading on OTC. Magnetohydrodynamic approaches Glucomodicum is based in Helsinki, Finland and was founded as a spin out of the University of Helsinki. Their attempted solution uses interstitial fluid to non-invasively measure glucose levels continuously. It does not have regulatory approval. Its device is a combination of magnetohydrodynamic (MHD) technology, advanced algorithms and highly-sensitive biosensors which link to a smartphone app for data collection and reporting. It works by sending a small amount of energy through the skin to the interstitial fluid between the cells, bringing the fluid to the surface of the skin for non-invasive sample capture. Eye scanning Occuity, a Reading, UK-based startup is taking a different approach to noninvasive glucose monitoring, by using the eye. The company states it is developing the Occuity Indigo, which aims to measure the change in refractive index of the eye to determine the concentration of glucose in the blood. Breath analysis BOYDSense is a French-based startup developing a noninvasive glucose monitoring device that analyzes breath-based volatile organic compounds (VOCs). The company's device, Lassie, measures specific VOCs in the breath, which are metabolic byproducts of glucose usage in the body. Early clinical trials have demonstrated that these VOCs can reliably indicate blood glucose levels in individuals with type 2 diabetes. BOYDSense's goal is to provide a compact, affordable alternative to traditional CGMs, which rely on blood samples. The technology is currently in clinical trials, with ongoing research to refine its accuracy and algorithm. References Blood tests Diabetes-related supplies and medical equipment Medical monitoring equipment
Noninvasive glucose monitor
[ "Chemistry" ]
3,220
[ "Blood tests", "Chemical pathology" ]
11,730,924
https://en.wikipedia.org/wiki/Merit%20order
The merit order is a way of ranking available sources of energy, especially electrical generation, based on ascending order of price (which may reflect the order of their short-run marginal costs of production) and sometimes pollution, together with the amount of energy that will be generated. In a centralized management scheme, the ranking is such that those with the lowest marginal costs are the first sources to be brought online to meet demand, and the plants with the highest marginal costs are the last to be brought on line. Dispatching power generation in this way, known as economic dispatch, minimizes the cost of production of electricity. Sometimes generating units must be started out of merit order, due to transmission congestion, system reliability or other reasons. In environmental dispatch, additional considerations concerning reduction of pollution further complicate the power dispatch problem. The basic constraints of the economic dispatch problem remain in place but the model is optimized to minimize pollutant emission in addition to minimizing fuel costs and total power loss. The effect of renewable energy on merit order The high demand for electricity during peak periods pushes up the bidding price for electricity, and the often relatively inexpensive baseload power supply mix is supplemented by 'peaking power plants', which produce electrical power at higher cost, and therefore are priced higher for their electrical output. Increasing the supply of renewable energy tends to lower the average price per unit of electricity because wind energy and solar energy have very low marginal costs: they do not have to pay for fuel, and the sole contributor to their marginal cost is operations and maintenance. With cost often reduced by feed-in-tariff revenue, their electricity is, as a result, less costly on the spot market than that from coal or natural gas, and transmission companies typically buy from them first. Solar and wind electricity therefore substantially reduce the amount of highly priced peak electricity that transmission companies need to buy, during the times when solar/wind power is available, reducing the overall cost. A study by the Fraunhofer Institute ISI found that this "merit order effect" had allowed solar power to reduce the price of electricity on the German energy exchange by 10% on average, and by as much as 40% in the early afternoon, in 2007; as more solar electricity is fed into the grid, peak prices may come down even further. By 2006, the "merit order effect" indicated that the savings in electricity costs to German consumers, on average, more than offset the support payments paid by customers for renewable electricity generation. A 2013 study estimated the merit order effect of both wind and photovoltaic electricity generation in Germany between the years 2008 and 2012. For each additional GWh of renewables fed into the grid, the price of electricity in the day-ahead market was reduced by 0.11–0.13¢/kWh. The total merit order effect of wind and photovoltaics ranged from 0.5¢/kWh in 2010 to more than 1.1¢/kWh in 2012. The near-zero marginal cost of wind and solar energy does not, however, translate into zero marginal cost of peak load electricity in a competitive open electricity market system as wind and solar supply alone often cannot be dispatched to meet peak demand without incurring marginal transmission costs and potentially the costs of batteries.
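The dispatch logic described at the start of this article, generators brought online in ascending order of marginal cost with the marginal unit setting the price, can be sketched in a few lines; the generator fleet and the demand figure below are invented illustrative numbers.

# Sketch of merit-order dispatch: stack plants by marginal cost, dispatch
# until demand is met, and let the marginal unit set the clearing price.
# The fleet and demand are invented numbers for illustration only.

plants = [                      # (name, capacity MW, marginal cost $/MWh)
    ("wind",    800,    0.0),
    ("nuclear", 1000,  10.0),
    ("coal",    1200,  35.0),
    ("ccgt",     900,  60.0),
    ("ocgt",     400, 120.0),   # open-cycle gas turbine peaker
]

def dispatch(demand_mw):
    schedule, price, remaining = [], 0.0, demand_mw
    for name, cap, cost in sorted(plants, key=lambda p: p[2]):
        if remaining <= 0:
            break
        take = min(cap, remaining)
        schedule.append((name, take))
        price = cost                # marginal unit sets the system price
        remaining -= take
    if remaining > 0:
        raise ValueError("demand exceeds available capacity")
    return schedule, price

sched, price = dispatch(3200)
print(sched)
print(f"system marginal price: {price} $/MWh")

Adding a low-marginal-cost plant such as wind to this stack pushes the most expensive units out of the dispatched set at a given demand, which is exactly the merit order effect discussed above.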
The purpose of the merit order dispatching paradigm was to enable the lowest net cost electricity to be dispatched first thus minimising overall electricity system costs to consumers. Intermittent wind and solar is sometimes able to supply this economic function. If peak wind (or solar) supply and peak demand both coincide in time and quantity, the price reduction is larger. On the other hand, solar energy tends to be most abundant at noon, whereas peak demand is late afternoon in warm climates, leading to the so-called duck curve. A 2008 study by the Fraunhofer Institute ISI in Karlsruhe, Germany found that windpower saves German consumers €5billion a year. It is estimated to have lowered prices in European countries with high wind generation by between 3 and 23€/MWh. On the other hand, renewable energy in Germany increased the price for electricity, consumers there now pay 52.8 €/MWh more only for renewable energy (see German Renewable Energy Sources Act), average price for electricity in Germany now is increased to 26¢/kWh. Increasing electrical grid costs for new transmission, market trading and storage associated with wind and solar are not included in the marginal cost of power sources, instead grid costs are combined with source costs at the consumer end. Economic dispatch Economic dispatch is the short-term determination of the optimal output of a number of electricity generation facilities, to meet the system load, at the lowest possible cost, subject to transmission and operational constraints. The Economic Dispatch Problem can be solved by specialized computer software which should satisfy the operational and system constraints of the available resources and corresponding transmission capabilities. In the US Energy Policy Act of 2005, the term is defined as "the operation of generation facilities to produce energy at the lowest cost to reliably serve consumers, recognising any operational limits of generation and transmission facilities". The main idea is that, in order to satisfy the load at a minimum total cost, the set of generators with the lowest marginal costs must be used first, with the marginal cost of the final generator needed to meet load setting the system marginal cost. This is the cost of delivering one additional MWh of energy onto the system. Due to transmission constraints, this cost can vary at different locations within the power grid - these different cost levels are identified as "locational marginal prices" (LMPs). The historic methodology for economic dispatch was developed to manage fossil fuel burning power plants, relying on calculations involving the input/output characteristics of power stations. Basic mathematical formulation The following is based on an analytical methodology following Biggar and Hesamzadeh (2014) and Kirschen (2010). The economic dispatch problem can be thought of as maximising the economic welfare of a power network whilst meeting system constraints. For a network with buses (nodes), suppose that is the rate of generation, and is the rate of consumption at bus . Suppose, further, that is the cost function of producing power (i.e., the rate at which the generator incurs costs when producing at rate ), and is the rate at which the load receives value or benefits (expressed in currency units) when consuming at rate . 
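With individual generator cost functions of the kind being introduced above, the classical lossless case of economic dispatch (inelastic demand, no network losses or line limits) reduces to running all committed units at the same incremental cost. The sketch below solves that case in closed form for two generators with invented quadratic cost curves; it is a simplified illustration, not the full network-constrained formulation that follows.

# Two-generator economic dispatch, ignoring losses and line limits.
# Cost curves C_i(g) = a_i*g^2 + b_i*g; coefficients and demand are invented.

a1, b1 = 0.010, 20.0    # generator 1 cost coefficients
a2, b2 = 0.020, 15.0    # generator 2 cost coefficients
D = 500.0               # total demand, MW

# Equal incremental cost (dC1/dg1 = dC2/dg2) with g1 + g2 = D gives:
g1 = (2 * a2 * D + b2 - b1) / (2 * (a1 + a2))
g2 = D - g1
lam = 2 * a1 * g1 + b1  # common incremental cost, $/MWh

print(f"g1 = {g1:.1f} MW, g2 = {g2:.1f} MW, lambda = {lam:.2f} $/MWh")
assert abs((2 * a1 * g1 + b1) - (2 * a2 * g2 + b2)) < 1e-9

The common incremental cost plays the role of the system marginal price; in the full formulation it becomes the Lagrange multiplier on the power balance constraint.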
The total welfare is then The economic dispatch task is to find the combination of rates of production and consumption () which maximise this expression subject to a number of constraints: The first constraint, which is necessary to interpret the constraints that follow, is that the net injection at each bus is equal to the total production at that bus less the total consumption: The power balance constraint requires that the sum of the net injections at all buses must be equal to the power losses in the branches of the network: The power losses depend on the flows in the branches and thus on the net injections as shown in the above equation. However it cannot depend on the injections on all the buses as this would give an over-determined system. Thus one bus is chosen as the Slack bus and is omitted from the variables of the function . The choice of Slack bus is entirely arbitrary, here bus is chosen. The second constraint involves capacity constraints on the flow on network lines. For a system with lines this constraint is modeled as: where is the flow on branch , and is the maximum value that this flow is allowed to take. Note that the net injection at the slack bus is not included in this equation for the same reasons as above. These equations can now be combined to build the Lagrangian of the optimization problem: where π and μ are the Lagrangian multipliers of the constraints. The conditions for optimality are then: where the last condition is needed to handle the inequality constraint on line capacity. Solving these equations is computationally difficult as they are nonlinear and implicitly involve the solution of the power flow equations. The analysis can be simplified using a linearised model called a DC power flow. There is a special case which is found in much of the literature. This is the case in which demand is assumed to be perfectly inelastic (i.e., unresponsive to price). This is equivalent to assuming that for some very large value of and inelastic demand . Under this assumption, the total economic welfare is maximised by choosing . The economic dispatch task reduces to: Subject to the constraint that and the other constraints set out above. Environmental dispatch In environmental dispatch, additional considerations concerning reduction of pollution further complicate the power dispatch problem. The basic constraints of the economic dispatch problem remain in place but the model is optimized to minimize pollutant emission in addition to minimizing fuel costs and total power loss. Due to the added complexity, a number of algorithms have been employed to optimize this environmental/economic dispatch problem. Notably, a modified bees algorithm implementing chaotic modeling principles was successfully applied not only in silico, but also on a physical model system of generators. Other methods used to address the economic emission dispatch problem include Particle Swarm Optimization (PSO) and neural networks Another notable algorithm combination is used in a real-time emissions tool called Locational Emissions Estimation Methodology (LEEM) that links electric power consumption and the resulting pollutant emissions. The LEEM estimates changes in emissions associated with incremental changes in power demand derived from the locational marginal price (LMP) information from the independent system operators (ISOs) and emissions data from the US Environmental Protection Agency (EPA). 
LEEM was developed at Wayne State University as part of a project aimed at optimizing water transmission systems in Detroit, MI starting in 2010 and has since found a wider application as a load profile management tool that can help reduce generation costs and emissions. References External links Economic Dispatch: Concepts, Practices and Issues See also Electricity market Bid-based, security-constrained, economic dispatch with nodal prices Unit commitment problem in electrical power production German Renewable Energy Sources Act Electric power Energy production Energy in the United Kingdom Energy economics
Merit order
[ "Physics", "Engineering", "Environmental_science" ]
2,061
[ "Government by algorithm", "Physical quantities", "Energy economics", "Automation", "Power (physics)", "Electric power", "Electrical engineering", "Environmental social science" ]
11,731,171
https://en.wikipedia.org/wiki/Cs%C3%A1sz%C3%A1r%20polyhedron
In geometry, the Császár polyhedron () is a nonconvex toroidal polyhedron with 14 triangular faces. This polyhedron has no diagonals; every pair of vertices is connected by an edge. The seven vertices and 21 edges of the Császár polyhedron form an embedding of the complete graph K7 onto a topological torus. Of the 35 possible triangles from vertices of the polyhedron, only 14 are faces. Complete graph The tetrahedron and the Császár polyhedron are the only two known polyhedra (having a manifold boundary) without any diagonals: every two vertices of the polyhedron are connected by an edge, so there is no line segment between two vertices that does not lie on the polyhedron boundary. That is, the vertices and edges of the Császár polyhedron form a complete graph. The combinatorial description of this polyhedron has been described earlier by Möbius. Three additional different polyhedra of this type can be found in a paper by . If the boundary of a polyhedron with v vertices forms a surface with h holes, in such a way that every pair of vertices is connected by an edge, it follows by some manipulation of the Euler characteristic that h = (v - 3)(v - 4)/12. This equation is satisfied for the tetrahedron with h = 0 and v = 4, and for the Császár polyhedron with h = 1 and v = 7. The next possible solution, h = 6 and v = 12, would correspond to a polyhedron with 44 faces and 66 edges, but it is not realizable as a polyhedron. It is not known whether such a polyhedron exists with a higher genus. More generally, this equation can be satisfied only when v is congruent to 0, 3, 4, or 7 modulo 12. History and related polyhedra The Császár polyhedron is named after Hungarian topologist Ákos Császár, who discovered it in 1949. The dual to the Császár polyhedron, the Szilassi polyhedron, was discovered later, in 1977, by Lajos Szilassi; it has 14 vertices, 21 edges, and seven hexagonal faces, each sharing an edge with every other face. Like the Császár polyhedron, the Szilassi polyhedron has the topology of a torus. There are other known polyhedra such as the Schönhardt polyhedron for which there are no interior diagonals (that is, all diagonals are outside the polyhedron) as well as non-manifold surfaces with no diagonals. References External links Császár’s polyhedron in virtual reality in NeoTrie VR. Nonconvex polyhedra Toroidal polyhedra Articles containing video clips
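The genus relation quoted above can be checked numerically; the short sketch below verifies h = (v - 3)(v - 4)/12 for the tetrahedron and Császár cases and lists the vertex counts for which the formula yields an integer, matching the stated congruence condition.

# Check of h = (v - 3)(v - 4) / 12 for surfaces in which every pair of
# vertices is joined by an edge.

from fractions import Fraction

def holes(v):
    return Fraction((v - 3) * (v - 4), 12)

assert holes(4) == 0     # tetrahedron
assert holes(7) == 1     # Csaszar polyhedron
assert holes(12) == 6    # next candidate, not realizable as a polyhedron

# Vertex counts up to 40 giving an integer number of holes:
valid = [v for v in range(4, 41) if holes(v).denominator == 1]
print(valid)             # [4, 7, 12, 15, 16, 19, 24, 27, 28, 31, 36, 39, 40]
assert all(v % 12 in (0, 3, 4, 7) for v in valid)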
Császár polyhedron
[ "Mathematics" ]
560
[ "Toroidal polyhedra", "Topology" ]
11,731,300
https://en.wikipedia.org/wiki/Rubberized%20asphalt
Rubberized asphalt concrete (RAC), also known as asphalt rubber or just rubberized asphalt, is a noise-reducing pavement material that consists of regular asphalt concrete mixed with crumb rubber made from recycled tires. Asphalt rubber is the largest single market for ground rubber in the United States, consuming an estimated , or approximately 12 million tires annually. Use of rubberized asphalt as a pavement material was pioneered by the city of Phoenix, Arizona in the 1960s because of its high durability. Since then it has garnered interest for its ability to reduce road noise. In 2003 the Arizona Department of Transportation began a three-year, $34-million Quiet Pavement Pilot Program, in cooperation with the Federal Highway Administration, to determine if sound walls can be replaced by rubberized asphalt to reduce noise alongside highways. After about one year it was determined that asphalt rubber overlays resulted in up to 12 decibels of road noise reduction, with a typical reduction of 7 to 9 decibels. Arizona has been the leader in using rubberized asphalt, but California, Florida, Texas, and South Carolina are also using asphalt rubber. Tests are currently underway in other parts of the United States to determine the durability of rubberized asphalt in northern climates, including a 1.3 mile stretch of Interstate 405 in Bellevue and Kirkland, Washington and a handful of local roads in the city of Colorado Springs, Colorado. In 2012, the State of Georgia issued a specification for the use of rubber-modified asphalt as a replacement for polymer-modified asphalt. In Belgium, tests have been carried out on the Brussels ring road and at the Formula One circuit of Francorchamps (see the film by Jean-Marie Piquint, Rubberized Asphalt, for Esso Belgium). Two quality control requirements are necessary when using asphalt rubber: (a) crumb rubber tends to separate and settle down in the asphalt cement and therefore asphalt rubber needs to be agitated continuously to keep the rubber particles in suspension and (b) crumb rubber is prone to degradation (devulcanization and depolymerization) and thus loses its elasticity if asphalt rubber is maintained at high temperatures for more than 6–8 hours. This means asphalt rubber must be used within 8 hours after production. Porous Elastic Road Surfaces Porous Elastic Road Surfaces (PERS) or poroelastic road surfaces improve RAC by incorporating voids and channels, making the pavement porous and further reducing traffic noise. References External links Arizona Department of Transportation Quiet Pavement Pilot Program Asphalt Rubber Usage Guide, from California Department of Transportation Recycled building materials Asphalt Pavements Concrete
Rubberized asphalt
[ "Physics", "Chemistry", "Engineering" ]
518
[ "Structural engineering", "Unsolved problems in physics", "Chemical mixtures", "Concrete", "Asphalt", "Amorphous solids" ]
11,731,664
https://en.wikipedia.org/wiki/Vacuum%20arc%20remelting
Vacuum arc remelting (VAR) is a secondary melting process for production of metal ingots with elevated chemical and mechanical homogeneity for highly demanding applications. The VAR process has revolutionized the specialty traditional metallurgical techniques industry, and has made possible tightly controlled materials used in biomedical, aviation and aerospace. Overview VAR is used most frequently in high value applications. It is an additional processing step to improve the quality of metal. Because it is time consuming and expensive, a majority of commercial alloys do not employ the process. Nickel, titanium, and specialty steels are materials most often processed with this method. The conventional path for production of titanium alloys includes single, double or even triple VAR processing. Use of this technique over traditional methods presents several advantages: The solidification rate of molten material can be tightly controlled. This allows a high degree of control over the microstructure as well as the ability to minimize segregation The gases dissolved in liquid metal during melting metals in open furnaces, such as nitrogen, oxygen and hydrogen are considered to be detrimental to the majority of steels and alloys. Under vacuum conditions these gases escape from liquid metal. Elements with high vapor pressure such as carbon, sulfur, and magnesium (frequently contaminants) are lowered in concentration. Centerline porosity and segregation are eliminated. Certain metals and alloys, such as Ti, cannot be melted in open air furnaces Process description The alloy to undergo VAR is formed into a cylinder typically by vacuum induction melting (VIM) or ladle refining (airmelt). This cylinder, referred to as an electrode is then put into a large cylindrical enclosed crucible and brought to a metallurgical vacuum (). At the bottom of the crucible is a small amount of the alloy to be remelted, which the top electrode is brought close to prior to starting the melt. Several kiloamperes of DC current are used to start an arc between the two pieces, thus a continuous melt is derived. The crucible (typically made of copper) is surrounded by a water jacket to cool the melt and control the solidification rate. To prevent arcing between the electrode and the crucible walls, the diameter of the crucible is larger than the electrode. As a result, the electrode must be lowered as the melt consumes it. Control of the current, cooling water, and electrode gap is essential to effective control of the process and production of defect-free material. Ideally, the melt rate stays constant throughout the process cycle, but monitoring and control of the vacuum arc remelting process is not simple. This is because there is a complex heat transfer occurring involving conduction, radiation, convection within the liquid metal, and advection caused by the Lorentz force. Ensuring the consistency of the melt process in terms of pool geometry, and melt rate is crucial in ensuring the best possible properties of the alloy. Materials and applications The VAR process is used on many different materials. Certain applications almost always use a material that has been VAR treated. 
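The melt rate discussed above is ultimately governed by an energy balance between arc power and the heat needed to raise and melt the electrode material. The back-of-the-envelope sketch below uses assumed values for arc voltage, current, thermal efficiency and melting enthalpy; none of these numbers describe a specific furnace or alloy, and real VAR control is far more involved, as the heat-transfer discussion above indicates.

# Rough VAR melt-rate estimate from an energy balance (assumed values only).

ARC_VOLTAGE_V = 23.0         # assumed arc voltage
ARC_CURRENT_A = 6000.0       # assumed DC arc current ("several kiloamperes")
EFFICIENCY = 0.5             # assumed fraction of arc power that melts metal
ENTHALPY_J_PER_KG = 1.3e6    # assumed heat to bring the alloy to the molten state

arc_power_w = ARC_VOLTAGE_V * ARC_CURRENT_A
melt_rate_kg_s = EFFICIENCY * arc_power_w / ENTHALPY_J_PER_KG
print(f"arc power ~ {arc_power_w / 1e3:.0f} kW")
print(f"melt rate ~ {melt_rate_kg_s:.3f} kg/s (~{melt_rate_kg_s * 3600:.0f} kg/h)")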
Materials that may be VAR treated include: Stainless Steel 15-5 13-8 17-4 304 316 Alloy Steel 9310 4340 & 4330+V 300M AF1410 Aermet 100 M50 BG42 Nitralloy 16NCD13 35NCD16 HY-100 HY-180 HY-TUF D6AC Maraging steels UT-18 HP 9-4-30 Titanium Ti-6Al-4V Ti-10V-2Al-3Fe Ti-5Al-5V-5Mo-3Cr Invar Nitinol Nickel superalloys Inconel alloys Hastelloy alloys Rene alloys RR1000 Zirconium Niobium Platinum Tantalum Rhodium Note that pure titanium and most titanium alloys are double or triple VAR processed. Nickel-based superalloys for aerospace applications are usually VAR processed. Zirconium and niobium alloys used in the nuclear industry are routinely VAR processed. Pure platinum, tantalum, and rhodium may be VAR processed. See also Electro-slag remelting Vacuum arc Vacuum metallurgy References Further reading Britannica Steelmaking Metallurgical processes
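The link between arc power and melt rate described in the process section above can be illustrated with a back-of-envelope energy balance. This Python sketch is not the control model used in real VAR furnaces; the arc voltage, current, thermal efficiency and material properties are nominal assumptions chosen only for illustration of the idea that melt rate scales with useful arc power divided by the enthalpy needed to heat and melt the electrode material:

```python
def melt_rate_kg_per_h(arc_voltage_v, arc_current_a, efficiency,
                       cp_j_per_kg_k, delta_t_k, latent_heat_j_per_kg):
    """Estimate a steady-state melt rate from a simple energy balance:
    (useful arc power) / (enthalpy to heat and melt 1 kg of electrode)."""
    power_w = arc_voltage_v * arc_current_a * efficiency
    enthalpy_j_per_kg = cp_j_per_kg_k * delta_t_k + latent_heat_j_per_kg
    return power_w / enthalpy_j_per_kg * 3600.0

# Assumed, illustrative values for a steel electrode: ~23 V arc, 6 kA current,
# half the arc power reaching the electrode, ~1500 K of heating, 270 kJ/kg fusion.
print(f"{melt_rate_kg_per_h(23.0, 6000.0, 0.5, 700.0, 1500.0, 2.7e5):.0f} kg/h")
```

With these assumed numbers the estimate comes out to a few hundred kilograms per hour, which is the right order of magnitude for the process, and it makes clear why controlling the arc current is the primary lever on melt rate.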
Vacuum arc remelting
[ "Chemistry", "Materials_science" ]
862
[ "Metallurgical processes", "Steelmaking", "Metallurgy" ]
11,732,012
https://en.wikipedia.org/wiki/Hy-V
The Hy-V Scramjet Flight Experiment is a research project led by the University of Virginia, the goal of which is to better understand dual-mode scramjet combustion by analyzing and comparing wind tunnel and flight data. The work is being conducted with industrial, academic and government collaborators. Overview The goal of the Hy-V project is to conduct a flight experiment of a scramjet engine at Mach 5 using a sounding rocket. Currently the project is in its design and test phase, with a flight test planned for the future. In a dual-mode scramjet, combustion can occur at subsonic speeds, at supersonic speeds, or in a mixture of the two. The experiment will be performed at a speed of Mach 5 because significant effects on mode transition occur at this speed. In this particular case, the mode transition will occur inside the scramjet when the vehicle's airspeed reaches Mach 5. Teams of students, both undergraduate and graduate, and faculty from the member universities of the Virginia Space Grant Consortium are involved in the project and are collaborating with the aerospace industry, NASA, and the Department of Defense. Payload Design The current payload design will enable researchers to conduct two separate experiments by having two separate scramjet ducts. One will resemble the geometry of the University of Virginia's supersonic wind tunnel and the other will be a variation of this geometry. The data recorded in flight will be used to better understand dual-mode scramjet (DMSJ) combustion and to develop better numerical methods for predicting mode-transition processes. It will also provide a set of comparative data to help understand and isolate differences from wind tunnel data. UVA Wind Tunnel The University of Virginia's supersonic wind tunnel was built in the late 1980s inside the Aerospace Research Laboratory (ARL). Previously housing classified gas centrifuge research, the ARL was endorsed purely for research by the National Space Council in 1989. Shortly thereafter, the wind tunnel was constructed to aid in the development of the National Aero-Space Plane, also known as the X-30, which was projected to fly at Mach 25. The wind tunnel is known not only for its supersonic combustion capabilities, but also for its unique design. The air is heated electrically, rather than through combustion processes, thus eliminating contaminants introduced by combustion. Additionally, the wind tunnel is capable of operating for an indefinite period of time, allowing unlimited-duration scramjet testing. In order to recreate the conditions inside the DMSJ combustor, the geometry of the experimental scramjet will be a full-scale copy of that of the DMSJ combustor. External links Hy-V Website NASA's "What's a Scramjet?" Virginia Space Grant Consortium Similar Projects University of Queensland's HyShot References Aircraft engines Jet engines Scramjet-powered aircraft Spacecraft propulsion Single-stage-to-orbit Space access
Hy-V
[ "Technology" ]
600
[ "Jet engines", "Engines", "Aircraft engines" ]
11,732,172
https://en.wikipedia.org/wiki/OpenEpi
OpenEpi is a free, web-based, open-source, operating-system-independent series of programs for use in epidemiology, biostatistics, public health, and medicine, providing a number of epidemiologic and statistical tools for summary data. OpenEpi was developed in JavaScript and HTML, and can be run in modern web browsers. The program can be run from the OpenEpi website or downloaded and run without a web connection. The source code and documentation are downloadable and freely available for use by other investigators. OpenEpi has been reviewed both by media organizations and in research journals. The OpenEpi developers have had extensive experience in the development and testing of Epi Info, a program developed by the Centers for Disease Control and Prevention (CDC) and widely used around the world for data entry and analysis. OpenEpi was developed to perform the analyses found in the StatCalc and EpiTable modules of the DOS version of Epi Info, to improve upon the types of analyses provided by these modules, and to provide a number of tools and calculations not currently available in Epi Info. It is the first step toward an entirely web-based set of epidemiologic software tools. OpenEpi can be thought of as an important companion to Epi Info and to other programs such as SAS, PSPP, SPSS, Stata, SYSTAT, Minitab, Epidata, and R (the R programming language). Another functionally similar Windows-based program is Winpepi. See also list of statistical packages and comparison of statistical packages. Both OpenEpi and Epi Info were developed with the goal of providing tools for low- and moderate-resource areas of the world. The initial development of OpenEpi was supported by a grant from the Bill and Melinda Gates Foundation to Emory University. Types The types of calculations currently performed by OpenEpi include: Various confidence intervals for proportions, rates, standardized mortality ratio, mean, median, percentiles 2x2 crude and stratified tables for count and rate data Matched case-control analysis Test for trend with count data Independent t-test and one-way ANOVA Diagnostic and screening test analyses with receiver operating characteristic (ROC) curves Sample size for proportions, cross-sectional surveys, unmatched case-control, cohort, randomized controlled trials, and comparison of two means Power calculations for proportions (unmatched case-control, cross-sectional, cohort, randomized controlled trials) and for the comparison of two means Random number generator For epidemiologists and other health researchers, OpenEpi performs a number of calculations based on tables not found in most epidemiologic and statistical packages. For example, for a single 2x2 table, in addition to the results presented in other programs, OpenEpi provides estimates for: Etiologic or prevented fraction in the population and in the exposed, with confidence intervals, based on risk, odds, or rate data The cross-product and MLE odds ratio estimates Mid-p exact p-values and confidence limits for the odds ratio Calculations of rate ratios and rate differences with confidence intervals and statistical tests. For stratified 2x2 tables with count data, OpenEpi provides: Mantel-Haenszel (MH) and precision-based estimates of the risk ratio and odds ratio Precision-based adjusted risk difference Tests for interaction for the risk ratio, odds ratio, and risk difference Four different confidence limit methods for the odds ratio. 
Similar to Epi Info, in a stratified analysis both crude and adjusted estimates are provided so that the assessment of confounding can be made. With rate data, OpenEpi provides adjusted rate ratios and rate differences, and tests for interaction. Finally, with count data, OpenEpi also performs a test for trend, for both crude data and stratified data. In addition to being used to analyze data by health researchers, OpenEpi has been used as a training tool for teaching epidemiology to students at: Emory University, University of Massachusetts, University of Michigan, University of Minnesota, Morehouse College, Columbia University, University of Wisconsin, San Jose State University, University of Medicine and Dentistry of New Jersey, University of Washington, and elsewhere. This includes campus-based and distance-learning courses. Because OpenEpi is easy to use, requires no programming experience, and can be run on the internet, students can use the program and focus on the interpretation of results. Users can run the program in English, French, Spanish, Portuguese or Italian. Comments and suggestions for improvements are welcomed and the developers respond to user queries. The developers encourage others to develop modules that could be added to OpenEpi and provide a developer's tool at the website. Planned future developments include improvements to existing modules, development of new modules, translation into other languages, and adding the ability to cut and paste data and/or read data files. See also Free statistical software Web based simulation References External links Biostatistics Epidemiology Free statistical software Software using the MIT license
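To illustrate the kind of 2x2-table output described above, here is a hedged Python sketch that computes the cross-product odds ratio with a Woolf (log-based) 95% confidence interval for a single table, and a Mantel-Haenszel summary odds ratio across strata. It is not OpenEpi's code, it does not reproduce the mid-p exact or precision-based methods mentioned in the article, and the example counts are invented for illustration:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table (exposed cases a, exposed controls b,
    unexposed cases c, unexposed controls d) with a Woolf confidence interval."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

def mantel_haenszel_or(strata):
    """Mantel-Haenszel summary odds ratio over a list of (a, b, c, d) strata."""
    numerator = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    denominator = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return numerator / denominator

# Invented example: one crude table, then the same data split into two strata.
print(odds_ratio_ci(20, 80, 10, 90))                         # crude OR and 95% CI
print(mantel_haenszel_or([(12, 30, 4, 40), (8, 50, 6, 50)])) # adjusted (MH) OR
```

Comparing the crude and stratum-adjusted odds ratios in this way is exactly the kind of confounding assessment the stratified-analysis output above is intended to support.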
OpenEpi
[ "Environmental_science" ]
1,044
[ "Epidemiology", "Environmental social science" ]
11,732,979
https://en.wikipedia.org/wiki/Michigan%20Alternative%20and%20Renewable%20Energy%20Center
The Michigan Alternative and Renewable Energy Center (MAREC) was a facility located in Muskegon, Michigan that promoted research, education and business development in alternative and renewable energy technologies. In May 2016, the Center was renamed the Muskegon Innovation Center and the organization refocused on supporting innovation and entrepreneurship. History and development Development and planning for the center began in 1999 when a group of Grand Valley State University faculty and Muskegon business people proposed a research and development facility focused on alternative energy. Subsequent partnerships between the business, community, and private sectors resulted in groundbreaking for the center in late 2002, and its completion in 2003. The building The facility was powered, in part, by a fuel cell and a microturbine, which turned natural gas into electricity. In addition, the building's photovoltaic solar roof tiles harnessed solar energy to produce electricity. MAREC then used a nickel metal hydride battery system to store some of the energy produced by these sources for use during peak energy consumption periods. The building was also constructed using many alternative and renewable building materials, including flooring surfaces produced from fast-growing bamboo and recycled tires, and rigid wall surfaces made from pressed wheat. These materials were used to conserve and recycle valuable natural resources. Economic development MAREC was part of the Muskegon Lakeshore SmartZone, a joint venture with the Michigan Economic Development Corporation, the city of Muskegon, and Grand Valley State University. The Muskegon Lakeshore SmartZone is intended to promote and attract high-technology business development in Muskegon and the region. MAREC had space devoted to incubating businesses that would research and develop alternative energy sources and uses. The focus on alternative energy was expected to be a catalyst for economic development and job growth in the area. Research and development initiatives were also intended to fuel business expansion at Edison Landing, the SmartZone being transformed into a multi-use office, retail, and residential center. Overall SmartZone development was expected to complement the array of human, physical, and capital investments made at MAREC. References Grand Valley State University, MAREC informational pamphlet External links Official site Tribune article Research institutes in Michigan Grand Valley State University Buildings and structures in Muskegon, Michigan Energy research institutes Renewable energy organizations based in the United States 2003 establishments in Michigan
Michigan Alternative and Renewable Energy Center
[ "Engineering" ]
476
[ "Energy research institutes", "Energy organizations" ]
11,733,018
https://en.wikipedia.org/wiki/Ishimoda%20Sh%C5%8D
Ishimoda Shō, born in Sapporo, was a Japanese historian specializing in ancient Japanese history, with a particular interest in the nature of the structural transition from the ancient to the medieval period. As an orthodox materialist, he was a lifetime member of the Communist Party, and an influential Marxist scholar in the analyses of Japanese history conducted by members of the post-war Rekiken group. In the 1950s, after the success of the Chinese Communist Revolution in 1949, he espoused that model as the Asian alternative to Westernization, which had failed in Japan. Life Born in his mother's family house in Hokkaidō, Ishimoda was raised in what is now Ishinomaki city, Miyagi Prefecture, where his father was mayor. He enrolled in the faculty of philosophy at Tokyo Imperial University, but switched to Japanese history. On graduation he became a journalist for the Asahi Shimbun, then a professor at Hosei University. In 1973 he was diagnosed as suffering from Parkinson's disease. Works His first major work, "The Formation of the Medieval World", was written before the war, but the manuscript was destroyed when his house went up in flames during a wartime incendiary bombing raid over Tokyo. A legend often mentioned by prominent academics to their students has it that, soon after the war's end, he returned to what remained of his house, shut himself in for a summer and rewrote the whole work. According to the afterword by Ishii Susumu appended to a popular reprint of this work, however, he secluded himself in a room of his home in October 1944, and, with the curtains drawn, wrote out the whole 700-page manuscript within just one month. Recently Ishimoda's historical materialism has come in for criticism. However, it cannot be denied that he prompted and quickened the reconstruction of the discipline in the postwar world of Japanese historical studies, which at the time was caught up in the chaos and stagnation caused by the collapse of a historicism centered on Japan's Imperial institution. Bibliography Aoki Kazuo (青木和夫) is now editing the Collected Works of Ishimoda Shō (Ishimoda Shō Chosakushū), published by Iwanami Shoten in 16 volumes. References 1912 births 1986 deaths Historians of Japan Japanese Marxists People from Sapporo People from Miyagi Prefecture Japanese communists People with Parkinson's disease Academic staff of Hosei University University of Tokyo alumni Maoists Materialists 20th-century Japanese historians The Asahi Shimbun people Japanese medievalists
Ishimoda Shō
[ "Physics" ]
514
[ "Materialism", "Matter", "Materialists" ]
11,733,139
https://en.wikipedia.org/wiki/Friction%20torque
In mechanics, friction torque is the torque caused by the frictional force that occurs when two objects in contact move relative to each other. Like all torques, it is a rotational force that may be measured in newton meters or pound-feet. Engineering Friction torque can be disruptive in engineering, and there are a variety of measures engineers may take to minimize these disruptions. Ball bearings are an example of an attempt to minimize the friction torque. Friction torque can also be an asset in engineering. Bolts, nuts, and screws are often designed to be fastened with a given amount of torque, where the friction is adequate during use or operation for the bolt, nut, or screw to remain safely fastened. This is true for such applications as lug nuts retaining wheels on vehicles, or equipment subjected to vibration whose bolts, nuts, or screws are tightened sufficiently to prevent the vibration from shaking them loose. Examples When a cyclist applies the brake to the front wheel, the bicycle tips forward due to the frictional torque between the wheel and the ground. When a golf ball hits the ground it begins to spin, in part because of the friction torque arising from contact between the ball and the ground. References See also Torque Force Engineering Mechanics Moment (physics)
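To make the bolted-joint case concrete, here is a hedged Python sketch using the common short-form torque-tension relation T = K·F·d, where K is an empirical "nut factor" often quoted around 0.2 for dry steel threads. The bolt size, torque and nut factor below are illustrative assumptions, not values from the article:

```python
def clamp_force_newtons(tightening_torque_nm, nut_factor, bolt_diameter_m):
    """Approximate bolt preload F from the short-form relation T = K * F * d."""
    return tightening_torque_nm / (nut_factor * bolt_diameter_m)

# Illustrative example: an M10 bolt (d = 10 mm) tightened to 40 N*m with K ~ 0.2.
preload = clamp_force_newtons(40.0, 0.2, 0.010)
print(f"Estimated preload: {preload / 1000:.0f} kN")  # roughly 20 kN
```

Most of that tightening torque is consumed by thread and under-head friction rather than stored as bolt tension, which is precisely why the resulting friction keeps the fastener from loosening in service.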
Friction torque
[ "Physics", "Mathematics", "Engineering" ]
255
[ "Physical quantities", "Quantity", "Mechanics", "Mechanical engineering", "Moment (physics)" ]
11,733,219
https://en.wikipedia.org/wiki/Tin%28II%29%20iodide
Tin(II) iodide, also known as stannous iodide, is an ionic tin salt of iodine with the formula SnI2. It has a formula weight of 372.519 g/mol. It is a red to red-orange solid. Its melting point is 320 °C, and its boiling point is 714 °C. Tin(II) iodide can be synthesised by heating metallic tin with iodine in 2 M hydrochloric acid. Sn + I2 → SnI2 References Tin(II) compounds Iodides Metal halides Reducing agents
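The quoted formula weight can be checked directly from standard atomic masses. This is a minimal Python sketch using rounded standard values, not data taken from the article's references:

```python
ATOMIC_MASS = {"Sn": 118.710, "I": 126.904}  # g/mol, standard values (rounded)

def formula_weight(composition):
    """Sum atomic masses weighted by the number of atoms of each element."""
    return sum(ATOMIC_MASS[element] * count for element, count in composition.items())

print(formula_weight({"Sn": 1, "I": 2}))  # SnI2: about 372.52 g/mol
```

The result of roughly 372.52 g/mol agrees with the 372.519 g/mol stated above.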
Tin(II) iodide
[ "Chemistry" ]
126
[ "Redox", "Inorganic compounds", "Salts", "Reducing agents", "Metal halides" ]
11,733,250
https://en.wikipedia.org/wiki/Peter%20L.%20Hurd
Peter L. Hurd is an academic specialising in biology. He is an Associate Professor at the University of Alberta within the Department of Psychology's Biocognition Unit and the University's Centre for Neuroscience. His research primarily focuses on the study of the evolution of aggressive behaviour, including investigation of aggression, communication and other social behaviour which takes place between animals with conflicting interests. The major tools for this research are mathematical models, principally game theory and genetic algorithms. He is also interested in how the process of sexual differentiation produces individual differences in social behaviour. Research Evolution of animal signalling Some of Hurd's most cited papers deal with the evolution of mating displays, including the idea that sexually selected traits have evolved to exploit previously existing biases in the sensory, or recognition, systems of their receivers, rather than being handicap displays. Hurd has argued against the handicap principle view of animal communication, demonstrating the evolutionary stability of conventional (non-handicap) threat displays using game theoretical models. Adding empirical support to this theoretical work, Hurd has also argued that threat displays in birds and headbob displays in the lizard Anolis carolinensis are conventional signals, rather than handicaps. Hurd attributes the preponderance of handicap models in biology to the use of simple signalling games which are incapable of modelling conventional signalling. Aggressiveness Hurd has classified models of fighting behaviour into those driven by: 1) fighting ability (aka resource holding potential), 2) perceived value of winning, and 3) aggressiveness, and argues that if variation in the last trait (aggressiveness) exists in a biologically meaningful way, it ought to be fixed for life at an early stage of development. Many studies on both human and non-human animals suggest that inter-individual variation in adult aggressiveness is largely organised by prenatal exposure to androgens. Digit ratio (2D:4D, the ratio of index to ring finger length) is widely used as a proxy measure for prenatal testosterone exposure. Hurd demonstrated that men with more feminine-typical digit ratios showed a lower aggressive tendency than men with more masculine-typical digit ratios. Digit ratio Hurd conducted a study on digit ratios suggesting a positive correlation in males between aggressive tendency and the ratio of the length of the ring finger to that of the index finger. These findings gathered significant media attention, being reported on the BBC, in The New York Times, Discover Magazine, Scientific American Mind, National Geographic and on Jay Leno. Hurd has demonstrated that, while there is no difference in digit ratio between the sexes in most laboratory mice, pups which gestated next to brothers have higher digit ratios than those whose uterine neighbours were sisters, and that the large differences in digit ratios between populations may be explained by Allen's rule and Bergmann's rule. Academic history Strongly influenced as a youth by the anarcho-punk movement and by such influences as Jonathan Kozol and A. S. Neill's Summerhill School, Hurd was an enthusiastic member of a student-run free school group while unenthusiastically attending Colonel By Secondary School. He then completed a BSc at Carleton University, Canada in 1990, followed by an MSc in 1993 from Simon Fraser University. 
He moved to Sweden to undertake a PhD at Stockholm University (awarded in 1997) before taking up an initial postdoctoral fellowship with Mike Ryan at the University of Texas. Hurd then became a lecturer at the University of Texas in 2000, remaining until 2001, when he moved to the University of Alberta, Canada, as an Assistant Professor. Hurd was promoted to Associate Professor in 2007. See also Aggression Digit ratio Signalling theory References External links Pete Hurd homepage from the University of Alberta Living people Academic staff of the University of Alberta University of Texas at Austin faculty Carleton University alumni Simon Fraser University alumni 21st-century Canadian biologists Game theorists Year of birth missing (living people)
Peter L. Hurd
[ "Mathematics" ]
781
[ "Game theorists", "Game theory" ]
11,733,267
https://en.wikipedia.org/wiki/Tin%28IV%29%20iodide
Tin(IV) iodide, also known as stannic iodide, is the chemical compound with the formula SnI4. This tetrahedral molecule crystallizes as a bright orange solid that dissolves readily in nonpolar solvents such as benzene. Preparation The compound is usually prepared by the reaction of iodine and tin: Sn + 2 I2 → SnI4 Chemical properties The compound hydrolyses in water. In aqueous hydroiodic acid, it reacts to form a rare example of a hexaiodometallate: SnI4 + 2 I− → [SnI6]2− Physical properties Tin(IV) iodide is an orange solid under standard conditions. It has a cubic crystal structure with the space group Pa3̄ (space group no. 205), the lattice parameter a = 1226 pm and eight formula units per unit cell. This corresponds approximately to a cubic close packing of iodine atoms in which 1/8 of all tetrahedral gaps are occupied by tin atoms. This leads to discrete tetrahedral SnI4 molecules. See also Tin(II) iodide Tin(IV) chloride References Tin(IV) compounds Iodides Metal halides
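The crystallographic data above determine the X-ray density through the standard relation rho = Z·M / (N_A·a³). The Python check below is a hedged estimate derived only from the quoted lattice parameter and standard atomic masses, not a measured density from the article:

```python
AVOGADRO = 6.022e23                 # atoms per mole
M_SNI4 = 118.710 + 4 * 126.904      # molar mass of SnI4 in g/mol
Z = 8                               # formula units per unit cell (from the text)
a_cm = 1226e-12 * 100               # lattice parameter: 1226 pm expressed in cm

density = Z * M_SNI4 / (AVOGADRO * a_cm ** 3)   # g/cm^3
print(f"{density:.2f} g/cm^3")                  # roughly 4.5 g/cm^3
```

The resulting value of about 4.5 g/cm³ is consistent with the compound being a dense, heavy-halide solid.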
Tin(IV) iodide
[ "Chemistry" ]
242
[ "Inorganic compounds", "Metal halides", "Salts" ]
11,733,308
https://en.wikipedia.org/wiki/Friedel%27s%20law
Friedel's law, named after Georges Friedel, is a property of Fourier transforms of real functions. Given a real function f(x), its Fourier transform F(k) has the following properties: F(k) = F*(−k), where F*(−k) is the complex conjugate of F(−k). Centrosymmetric points (k, −k) are called Friedel's pairs. The squared amplitude (|F|²) is centrosymmetric: |F(k)|² = |F(−k)|². The phase of F is antisymmetric: φ(k) = −φ(−k). Friedel's law is used in X-ray diffraction, crystallography and scattering from a real potential within the Born approximation. Note that a twin operation (French: opération de maclage) is equivalent to an inversion centre and the intensities from the individuals are equivalent under Friedel's law. References Fourier analysis Crystallography
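The symmetry stated above can be checked numerically with a discrete Fourier transform of a real-valued array. This is a hedged illustration using NumPy, not code from any crystallographic package; the array is random and the index reordering simply maps k to −k modulo the array length:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=64)            # a real-valued function sampled on a grid
F = np.fft.fft(f)                  # its discrete Fourier transform

# Friedel's law: F(-k) equals the complex conjugate of F(k), so amplitudes
# are centrosymmetric and phases are antisymmetric.
F_minus_k = np.roll(F[::-1], 1)    # reorder so index k holds F(-k mod N)
assert np.allclose(F_minus_k, np.conj(F))
assert np.allclose(np.abs(F_minus_k), np.abs(F))
print("Friedel pairs match:", True)
```

Both assertions pass for any real input, which is the discrete analogue of the statement that diffraction intensities from Friedel pairs are equal.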
Friedel's law
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
148
[ "Materials science stubs", "Materials science", "Crystallography stubs", "Crystallography", "Condensed matter physics" ]
11,734,586
https://en.wikipedia.org/wiki/Info-Mac
Info-Mac is an online community, news aggregator and shareware file hosting service covering Apple Inc. products, including the iPhone, iPod and especially the Macintosh. Established in 1984 as an electronic mailing list, Info-Mac is notable as being the first online community for Apple's then-new Macintosh computer. Info-Mac was the dominant Internet resource for Mac OS software and community-based support throughout the 1980s and early 1990s. Original format Info-Mac consisted of two distinct services: the Info-Mac Archive, a user-submitted collection of nearly all contemporary freeware and shareware available for the Macintosh, and the Info-Mac Digest, an electronic mailing list open to public participation. Both the Info-Mac Archive and Info-Mac Digest were operated by volunteers. Info-Mac Digest The Info-Mac Digest was published daily via Stanford University servers, and was itself archived on the Info-Mac Archive. At its height, the Info-Mac Digest was read daily by several thousand people, and was mirrored in the Usenet group comp.sys.mac.digest. The Info-Mac Digest was published in "volumes" that covered the period of one calendar year, with some exceptions. Info-Mac Archive The Info-Mac Archive was the centralized collection of Macintosh software with over 100 mirror sites located around the world. At the time, disk space on a server was cost-prohibitive and hard to come by. Free public archives such as Info-Mac were often the only means for shareware authors to deliver their product over the Internet. Some early commercial software download sites, like CNET's Shareware.com, were originally mirrors of the Info-Mac Archive. Due to the low-bandwidth connections accessible by early Internet users, which made downloading large files an onerous task, Info-Mac partnered with Pacific HiTech to periodically publish CD-ROMs containing selected shareware and freeware from the archive. These CDs were sold through Mac-related magazines and publications. Licensing issues required software authors to specifically allow their contributions to be included on the CD-ROM through a statement in the file's abstract. The CDs allowed wider distribution to users who did not have network access or could not spare the long download times associated with software applications. As the software was encoded in BinHex or MacBinary format it could be stored on non-Mac file systems such as a BBS or FTP server. Starting with the Info-Mac VI CD-ROM, the discs included the utility "Spelunker" which allowed users to search the archive in a user-friendly manner. Starting with the Info-Mac VIII CD-ROM, the package included two discs to offer twice the shareware and freeware. Decline and 2007 relaunch The popularity of Info-Mac services in their original format waned in the late 1990s. As the growing popularity of the World Wide Web and web hosting services allowed software authors to distribute their own software, and for users to communicate on online message boards, demand for Info-Mac's services grew beyond the capability of an all-volunteer staff to provide and maintain it at an acceptable level. Unable to maintain relevance on the rapidly evolving Internet, the Info-Mac Digest was discontinued in November 2002, while the Info-Mac Archive stopped accepting new file submissions in December 2005. In December 2007, Info-Mac was redesigned and relaunched with a Web 2.0 interface, combining previous Info-Mac Digest and Info-Mac Archive content with a modernized forum-based community and news aggregator. 
Today, Info-Mac has expanded to cover all Apple product lines. A new, opt-in Info-Mac Digest, automatically generated from forum content, is published daily. Info-Mac also distributes an iOS app called iForum on the App Store. References External links Info-Mac Archive Info-Mac Digest File hosting Internet properties established in 1984 Internet forums Macintosh websites Macintosh operating systems
Info-Mac
[ "Technology" ]
791
[ "Macintosh websites", "Computing websites" ]
11,734,649
https://en.wikipedia.org/wiki/Prince%20Interactive
Prince Interactive is an interactive multimedia CD-ROM video game. It was released in 1994, based on the musician Prince and his Paisley Park Studios recording complex. Gameplay The disc contains a video game, songs, music videos, a virtual tour through Paisley Park Studios, and other multimedia resources. Complete gameplay can last for hours. The video game is a graphic adventure with gameplay mechanics similar to Myst, requiring the player to explore the many different rooms in Paisley Park Studios and solve puzzles to collect the five pieces of Prince's symbol. It features six complete songs, including several which were previously unreleased, 52 song clips, four full-length music videos, 31 video clips, and nine morphs. There is an interactive mixing board for adjusting music. The private club contains clips of musicians, including Eric Clapton, Little Richard, George Clinton, and Miles Davis discussing Prince's career. Reception Ty Burr of Entertainment Weekly rated it as B+, describing it as "dopey but fun" and "imbued with [Prince's] goofball carnality", but more "marketing than entertainment". The gameplay was described as "meaningless scavenger hunts" and a "pointless" mixing board function. He summarized it as "a blast" to show it to friends but with no replay value. He cynically lamented this example of the entire two-year-old medium of the "craven new world of multimedia ... encrusted with cliches" within the "copycat mentality that rules pop music [with] a stable of aging rock & roll acts with time on their hands and a desperate need to seem relevant again". References External links Screenshots of the program Solution to the puzzle in the Studio at justadventure.com 1994 video games Multimedia works Interactive Video games based on musicians Classic Mac OS games Video games developed in the United States Windows games Band-centric video games Single-player video games
Prince Interactive
[ "Technology" ]
398
[ "Multimedia", "Multimedia works" ]
11,734,700
https://en.wikipedia.org/wiki/Pistonless%20pump
A pistonless pump is a type of pump designed to move fluids without any moving parts other than three chamber valves. The pump contains a chamber which has a valved inlet from the fluid to be pumped and a valved outlet, both at the bottom of the pump, and a pressurant inlet at the top of the pump. A pressurant, such as steam or pressurized helium, is used to drive the fluid through the pump. Introduction NASA has developed a low-cost rocket-fuel pump which has comparable performance to a turbopump at 80–90% lower cost. Perhaps the most difficult barrier to entry in the liquid rocket business is the turbopump. A turbopump design requires a large engineering effort and is expensive to manufacture and test. Starting a turbopump-fed rocket engine is a complex process, requiring careful synchronisation of many valves and subsystems. In fact, Beal Aerospace tried to avoid the issue entirely by building a huge pressure-fed booster. Their booster never flew, but the engineering behind it was sound and, with a low-cost pump at their disposal, they might have competed against Boeing. This pump saves up to 90% of the mass of the tanks as compared to a pressure-fed system. By reducing tank mass, the pump allows the rocket to carry less dead weight and achieve higher velocity. Working cycle The cycle is as follows: The fluid enters and fills the chamber from the inlet valve. The outlet and pressurant valves are closed. The inlet valve closes, and the outlet and pressurant valves open. The pressurant forces the fluid through the outlet valve. As the chamber empties, the pressurant valve closes and the inlet valve opens, followed by the outlet valve closing. The cycle is repeated. Pumping rate Rocket engines require a tremendous amount of fuel at high pressure. Often the pump costs more than the thrust chamber. One way to supply fuel is to use the expensive turbopump mentioned above; another is to pressurize the fuel tank. Pressurizing a large fuel tank requires a heavy, expensive tank. However, suppose that instead of pressurizing the entire tank, the main tank is drained into a small pump chamber which is then pressurized. To achieve steady flow, the pump system consists of two pump chambers such that each one supplies fuel for half of each cycle. For each half of the pump system, a chamber is filled from the main tank under low pressure and at a high flow rate, then the chamber is pressurized, and then the fluid is delivered to the engine at a moderate flow rate under high pressure. The chamber is then vented and the cycle repeats. The system is designed so that the inlet flow rate is higher than the outlet flow rate. This allows time for one chamber to be vented, refilled and pressurized while the other is being emptied. A breadboard pump has been tested successfully. A larger version has been designed and built and pumps at 20 gpm and 550 psi. Application in rocketry It is most commonly used to supply propellants to rocket engines. In this configuration there are often two pumps working in opposite cycles to ensure a constant flow of propellants to the engine. The pump has the advantage over a pressure-fed system in that the tanks can be much lighter. Compared to a turbopump, the pistonless pump is a much simpler design and has less stringent design tolerances. 
Advantages Nearly all of the hardware in this pump consists of pressure vessels, so the weight is low. There are fewer than 10 moving parts, and no lubrication issues which might cause problems with other pumps. The design and construction of this pump is straightforward and no precision parts are required. This device has an advantage over standard turbopumps in that the weight is about the same, the unit, engineering and test costs are lower, and the chance of catastrophic failure is lower. This pump has the advantage over pressure-fed designs in that the weight of the complete rocket is much less, and the rocket is much safer because the tanks of rocket fuel do not need to be at high pressure. The pump can be started with high reliability after being stored for an extended period. It can be used to replace turbopumps in rocket boosters, or to replace high-pressure tanks for deep-space propulsion. It can also be used for satellite orbit changes and station keeping. Disadvantages The pistonless pump also has some disadvantages: It cannot pump to a higher pressure than the drive gas (the area ratio is 1:1). It cannot use either a staged combustion or an expander cycle, and a gas generator cycle is also difficult to integrate with the pistonless pump. The generated gas must be chemically compatible with both propellants. This gas generator lowers the ignition start period of the engine. See also Thomas Savery Pulsometer steam pump Hydraulic ram Cyclic pump References External links Flometrics pistonless pump page Pistonless Pump Documentation Pumps
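A back-of-envelope check of the quoted operating point (20 gpm at 550 psi) can be made in terms of the ideal hydraulic power delivered to the fluid, P = Q·Δp. The Python below only performs unit conversions and ignores pump losses, so it is a hedged estimate rather than a performance figure from the source:

```python
GPM_TO_M3_S = 3.785411784e-3 / 60.0   # US gallons per minute to m^3/s
PSI_TO_PA = 6894.757                  # pounds per square inch to pascal

def hydraulic_power_kw(flow_gpm, delta_p_psi):
    """Ideal hydraulic power P = Q * delta_p, ignoring pump losses."""
    q = flow_gpm * GPM_TO_M3_S
    dp = delta_p_psi * PSI_TO_PA
    return q * dp / 1000.0

print(f"{hydraulic_power_kw(20.0, 550.0):.1f} kW")  # roughly 4.8 kW
```

Roughly 5 kW of hydraulic power from a device with almost no moving parts illustrates why the concept is attractive for small rocket stages where a turbopump would dominate cost and complexity.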
Pistonless pump
[ "Physics", "Chemistry" ]
1,060
[ "Physical systems", "Hydraulics", "Turbomachinery", "Pumps" ]