Dataset fields (column schema):
id: int64 (values 39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (values 3 to 71.8k)
subcategories: list (lengths 0 to 30)
14,477,343
https://en.wikipedia.org/wiki/Post-acute-withdrawal%20syndrome
Post-acute withdrawal syndrome (PAWS) is a hypothesized set of persistent impairments that occur after withdrawal from alcohol, opiates, benzodiazepines, antidepressants, and other substances. Infants born to mothers who used substances of dependence during pregnancy may also experience a PAWS. While PAWS has been frequently reported by those withdrawing from opiate and alcohol dependence, the research has limitations. Protracted benzodiazepine withdrawal has been observed to occur in some individuals prescribed benzodiazepines. Drug use, including alcohol and prescription drugs, can induce symptomatology which resembles mental illness. This can occur both in the intoxicated state and during the withdrawal state. In some cases these substance-induced psychiatric disorders can persist long after detoxification from amphetamine, cocaine, opioid, and alcohol use, causing prolonged psychosis, anxiety or depression. A protracted withdrawal syndrome can occur with symptoms persisting for months to years after cessation of substance use. Benzodiazepines, opioids, alcohol, and any other drug may induce prolonged withdrawal and have similar effects, with symptoms sometimes persisting for years after cessation of use. Psychosis including severe anxiety and depression are commonly induced by sustained alcohol, opioid, benzodiazepine, and other drug use which in most cases abates with prolonged abstinence. Any continued use of drugs or alcohol may increase anxiety, psychosis, and depression levels in some individuals. In almost all cases drug-induced psychiatric disorders fade away with prolonged abstinence, although permanent damage to the brain and nervous system may be caused by continued substance use. Signs and symptoms Symptoms can sometimes come and go with wave-like re-occurrences or fluctuations in severity of symptoms. Common symptoms include impaired cognition, irritability, depressed mood, and anxiety; all of which may reach severe levels which can lead to relapse. The protracted withdrawal syndrome from benzodiazepines, opioids, alcohol and other addictive substances can produce symptoms identical to generalized anxiety disorder as well as panic disorder. Due to the sometimes prolonged nature and severity of benzodiazepine, opioid and alcohol withdrawal, abrupt cessation is not advised. Hypothesized symptoms of PAWS are: Psychosocial dysfunction Anhedonia Depression Impaired interpersonal skills Obsessive-compulsive behaviour Feelings of guilt Autonomic disturbances Pessimistic thoughts Impaired attentional control Lack of initiative Craving Inability to think clearly Memory problems Emotional overreactions or numbness Sleep disturbances Extreme fatigue Physical coordination problems Stress sensitivity Increased sensitivity to pain Panic disorder Psychosis Generalized anxiety disorder Sleep disturbance (dreams of using, behaviors associated with the life style) Mourning (the change in lifestyle) Symptoms occur intermittently, but are not always present. They are made worse by stress or other triggers and may arise at unexpected times and for no apparent reason. They may last for a short while or longer. Any of the following may trigger a temporary return or worsening of the symptoms of PAWS: Stressful and/or frustrating situations Multitasking Feelings of anxiety, fearfulness or anger Social conflicts Unrealistic expectations of oneself Post-acute benzodiazepine withdrawal Disturbances in mental function can persist for several months or years after withdrawal from benzodiazepines. 
Psychotic depression persisting for more than a year following benzodiazepine withdrawal has been documented in the medical literature. The patient had no prior psychiatric history. The symptoms reported in the patient included major depressive disorder with psychotic features, including persistent depressed mood, poor concentration, decreased appetite, insomnia, anhedonia, anergia and psychomotor retardation. The patient also experienced paranoid ideation (believing she was being poisoned and persecuted by co-employees), accompanied by sensory hallucinations. Symptoms developed after abrupt withdrawal of chlordiazepoxide and persisted for 14 months. Various psychiatric medications were trialed but were unsuccessful in alleviating the symptomatology. Symptoms were completely relieved when chlordiazepoxide was recommenced for irritable bowel syndrome 14 months later. Another case report noted a similar phenomenon in a female patient who abruptly reduced her diazepam dosage from 30 mg to 5 mg per day. She developed electric shock sensations, depersonalization, anxiety, dizziness, left temporal lobe EEG spiking activity, hallucinations, and visual perceptual and sensory distortions which persisted for years. In a clinical trial of patients taking the benzodiazepine alprazolam (Xanax) for eight weeks, protracted memory deficits were still present up to eight weeks after cessation of alprazolam. Dopamine agonist protracted withdrawal After long-term use of dopamine agonists, a withdrawal syndrome may occur during dose reduction or discontinuation with the following possible side effects: anxiety, panic attacks, dysphoria, depression, agitation, irritability, suicidal ideation, fatigue, orthostatic hypotension, nausea, vomiting, diaphoresis, generalized pain, and drug cravings. For some individuals these withdrawal symptoms are short-lived and a full recovery follows; for others a protracted withdrawal syndrome may occur, with withdrawal symptoms persisting for months or years. Cause The syndrome may be in part due to persisting physiological adaptations in the central nervous system manifested in the form of continuing but slowly reversible tolerance, disturbances in neurotransmitters and resultant hyperexcitability of neuronal pathways. However, data support "neuronal and overwhelming cognitive normalization" with regard to chronic amphetamine use and PAWS. Stressful situations arise in early recovery, and the symptoms of post-acute withdrawal syndrome produce further distress. It is important to avoid or to deal with the triggers that make post-acute withdrawal syndrome worse. The types of symptomatology and the severity, frequency, and duration of impairments associated with the condition vary depending on the drug used. Treatment The condition gradually improves over a period of time which can range from six months to several years in more severe cases. Flumazenil was found to be more effective than placebo in reducing feelings of hostility and aggression in patients who had been free of benzodiazepines for 4 to 266 weeks. This may suggest a role for flumazenil in treating protracted benzodiazepine withdrawal symptoms. Acamprosate has been found to be effective in alleviating some of the post-acute withdrawal symptoms of alcohol withdrawal. Carbamazepine or trazodone may also be effective in the treatment of post-acute withdrawal syndrome related to alcohol use.
Cognitive behavioral therapy can also help with post-acute withdrawal syndrome, especially when cravings are a prominent feature. See also Alcohol withdrawal syndrome Antidepressant discontinuation syndrome Benzodiazepine withdrawal syndrome Opioid use disorder References Adverse effects of psychoactive drugs Withdrawal syndromes
Post-acute-withdrawal syndrome
[ "Chemistry" ]
1,445
[ "Drug safety", "Adverse effects of psychoactive drugs" ]
14,477,647
https://en.wikipedia.org/wiki/Quack.com
Quack.com was an early voice portal company. The domain name later was used for Quack, an iPad search application from AOL. History It was founded in 1998 by Steven Woods, Jeromy Carriere and Alex Quilici as a Pittsburgh, Pennsylvania, USA, based voice portal infrastructure company named Quackware. Quack was the first company to try to create a voice portal: a consumer-based destination "site" in which consumers could not only access information by voice alone, but also complete transactions. Quackware launched a beta phone service in 1999 that allowed consumers to purchase books from sites such as Amazon and CDs from sites such as CDNow by answering a short set of questions. Quack followed with a set of information services from movie listings (inspired by, but expanding upon, Moviefone) to news, weather and stock quotes. This concept introduced a series of lookalike startups including Tellme Networks which raised more money than any Internet startup in history on a similar concept. Quack received its first venture funding from HDL Capital in 1999 and moved operations to Mountain View in Silicon Valley, California in 1999. A deal with Lycos was announced in May 2000. In September 2000 Quack was acquired for $200 million by America Online (AOL) and moved onto the Netscape campus with what was left of the Netscape team. Quack was attacked in the Canadian press for being representative of the Canadian "brain drain" to the US during the Internet bubble, focusing its recruiting efforts on the University of Waterloo, hiring more than 50 engineers from Waterloo in less than 10 months. Quack competitor Tellme Networks raised enormous funds in what became a highly competitive market in 2000, with the emergence of more than a dozen additional competitors in a 12-month period. Following its acquisition by America Online in an effort led by Ted Leonsis to bring Quack into AOL Interactive, the Quack voice service became AOLbyPhone as one of AOL's "web properties" along with MapQuest, Moviefone and others. Quack secured several patents that underlie the technical challenges of delivering interactive voice services. Constructing a voice portal required integrations and innovations not only in speech recognition and speech generation, but also in databases, application specification, constraint-based reasoning and artificial intelligence and computational linguistics. "Quack"'s name derived from the company goal of providing not only voice-based services, but more broadly "Quick Ubiquitous Access to Consumer Knowledge". The patents assigned to Quack.com include: System and method for voice access to Internet-based information, System and method for advertising with an Internet Voice Portal and recognizing the axiom that in interactive voice systems one must "know the set of possible answers to a question before asking it". System and method for determining if one web site has the same information as another web site. Quack.com was spoofed in The Simpsons in March 2002 in the episode "Blame It on Lisa" in which a "ComQuaak" sign is replaced by another equally crazy telecom company name. 2010 onwards In July 2010, quack.com became the focus of a new AOL iPad application, that was a web search experience. The product delivers web results and blends in picture, video and Twitter results. It enables you to preview the web results before you go to the site, search within each result, and flip through the results pages, making full use of the iPad's touch screen features. 
The iPad app was free via iTunes, but support discontinued in 2012. See also List of speech recognition software References External links iTunes App Link Speech recognition Computational linguistics Speech synthesis Speech processing Companies based in Mountain View, California Telecommunications companies of the United States Telecommunications companies established in 1998
Quack.com
[ "Technology" ]
762
[ "Natural language and computing", "Computational linguistics" ]
14,477,915
https://en.wikipedia.org/wiki/CHRNA7
Neuronal acetylcholine receptor subunit alpha-7, also known as nAChRα7, is a protein that in humans is encoded by the CHRNA7 gene. The protein encoded by this gene is a subunit of certain nicotinic acetylcholine receptors (nAchR). Function The nicotinic acetylcholine receptors (nAChRs) are members of a superfamily of ligand-gated ion channels that mediate fast signal transmission at synapses. The nAChRs are thought to be hetero-pentamers composed of homologous subunits. The proposed structure for each subunit is a conserved N-terminal extracellular domain followed by three conserved transmembrane domains, a variable cytoplasmic loop, a fourth conserved transmembrane domain, and a short C-terminal extracellular region. The protein encoded by this gene forms a homo-oligomeric channel, displays marked permeability to calcium ions and is a major component of brain nicotinic receptors that are blocked by, and highly sensitive to, alpha-bungarotoxin. Once this receptor binds acetylcholine, it undergoes an extensive change in conformation that affects all subunits and leads to opening of an ion-conducting channel across the plasma membrane. This gene is located in a region identified as a major susceptibility locus for juvenile myoclonic epilepsy and a chromosomal location involved in the genetic transmission of schizophrenia. An evolutionarily recent partial duplication event in this region results in a hybrid containing sequence from this gene and a novel FAM7A gene. Disruption of alpha-7 nicotinic receptors in schizophrenia is believed to contribute at least in part to the abnormally high prevalence of extremely heavy smoking in those affected by the disease. This observed particularly high nicotine intake compared to the average smoker is hypothesized to be a subconscious effort to activate the low-affinity alpha-7 receptors. Interactions CHRNA7 has been shown to interact with FYN. Gene expression The CHRNA7 gene is primarily expressed in the posterior amygdalar nucleus and the field CA3 of Ammon's horn in the mouse, and in the mammillary body in humans. Gene expression patterns from the Allen Brain Atlases can be seen here. See also Alpha-7 nicotinic receptor Nicotinic acetylcholine receptor Acetylcholine receptor References Further reading External links Ion channels Nicotinic acetylcholine receptors
CHRNA7
[ "Chemistry" ]
519
[ "Neurochemistry", "Ion channels" ]
14,477,930
https://en.wikipedia.org/wiki/Urban%20anthropology
Urban anthropology is a subset of anthropology concerned with issues of urbanization, poverty, urban space, social relations, and neoliberalism. The field has become consolidated in the 1960s and 1970s. Ulf Hannerz quotes a 1960s remark that traditional anthropologists were "a notoriously agoraphobic lot, anti-urban by definition". Various social processes in the Western World as well as in the "Third World" (the latter being the habitual focus of attention of anthropologists) brought the attention of "specialists in 'other cultures'" closer to their homes. Overview Urban anthropology is heavily influenced by sociology, especially the Chicago School of Urban Sociology. The traditional difference between sociology and anthropology was that the former was traditionally conceived as the study of civilized populations, whilst anthropology was approached as the study of primitive populations. There were, in addition, methodological differences between these two disciplines—sociologists would normally study a large population sample while anthropologists relied on fewer informants with deeper relations. As interest in urban societies increased, methodology between these two fields and subject matters began to blend, leading some to question the differences between urban sociology and urban anthropology (Prato and Pardo 2013). The lines between the two fields have blurred with the interchange of ideas and methodology, to the advantage and advancement of both disciplines. While for a long-time urban anthropology has not been officially acknowledged in the mainstream discipline, anthropologists have been conducting work in the area for a long time. Anthropologists, like sociologists, have attempted to define exactly what the city is and pinpoint the ways in which urbanism sets apart modern city lifestyles from what used to be regarded as the "primitive society" (Wirth 1933, Redfield and Singer 1954, Pocock 1960, Leeds 1972, Fox 1977). It is increasingly acknowledged in urban anthropology that, although there are significant differences in the characteristics and forms of organization of urban and non-urban communities, there are also important similarities, insofar as the city can also be conceived in anthropological studies as a form of community. Urban anthropology is an expansive and continuously evolving area of research. With a different playing field, anthropologists have had to modify their methods (Pardo and Prat 2012) and even readdress traditional ethics in order to adjust to different obstacles and expectations. Several for-profit and non-profit organizations now do work in the field of urban anthropology. Perhaps the best known of these is the non-profit organization called Urban Anthropology. Numerous universities now teach urban anthropology. History of the discipline In its early stages during the 19th century, anthropology was principally concerned with the comparative study of foreign (i.e. non-Western) cultures, which were frequently regarded as exotic and primitive. The attitude of ethnographers towards the subject of study was one of supposed scientific detachment, as they undertook the – self-serving and Eurocentric – mission of identifying, classifying and arranging cultural groups worldwide into clearly defined socio-cultural evolutionist stages of human development. During the 20th century, several factors began leading more anthropologists away from the bipolar notions of foreign savagery versus Western civilization and more towards the study of urban cultures in general. 
A strong influence in this direction was the discovery of vast regions of the world thanks to a significant increase in human mobility, which had been brought about, among other factors, by the fast expansion of the rail network and the popularisation of travel in the late Victorian era. This meant that, by the mid 20th century, it was generally perceived that there were relatively few undiscovered “exotic” cultures left to study through “first contact” encounters. Moreover, after World War I, a number of developing nations began to emerge. Some anthropologists were attracted to the study of these “peasant societies”, which were essentially different from the “folk societies” that ethnographers had traditionally researched. Robert Redfield was a prominent anthropologist who studied both folk and peasant societies. While researching peasant societies of developing nations, such as India, he discovered that these communities were dissimilar to folk societies in that they were not self-contained. For example, peasant societies were economically linked to forces outside of their own community. In other words, they were part of a bigger society — the city. This realisation opened the door to more anthropologists focusing their study of societies (regardless of whether they were Western or non-Western) from the perspective of the city (conceived as a structuring element). This crossover was instrumental in the development of urban anthropology as an independent field. Clearly, this was not the first occasion on which the social sciences had expressed an interest in the study of the city. Archaeology, for instance, already placed a strong emphasis on the exploration of the origins of urbanism, and anthropology itself had adopted the notion of the city as a referent in the study of what was referred to as pre-industrial society. Their efforts, however, were largely unrelated. A significant development in the anthropological study of the city was the research conducted by the Chicago School of Urban Ecology. As early as the 1920s, the school defined the city, in terms of urban ecology, as “made up of adjacent ecological niches accompanied by human groups in... rings surrounding the core.” The Chicago School became a main referent in urban anthropology, setting theoretical trends that have influenced the discipline until the present day. Among the various individual scholars who contributed to laying the foundations for what urban anthropology has become today (i.e. the study of the city conceived as a community) was the sociologist Louis Wirth. His essay “Urbanism as a Way of Life” proved to be essential in distinguishing urbanism as a unique form of society that could be studied from three perspectives: “a physical structure, as a system of social organization, and as a set of attitudes and ideas.” Another notable academic in the field of urban anthropology, Lloyd Warner, led the “Community Study” approach and was one of the first anthropologists to unequivocally transition from the exploration of primitive cultures (the aborigines in his case) to studying urban cities using similar anthropological methods. The Community Study approach was an important influence leading to the study of the city as a community. William Whyte later expanded Warner’s methods for small urban centres in his study of larger neighbourhoods. Methods, techniques and ethics Anthropologists typically have one significant difference from their affiliated field of science: their method of gathering information. 
Scientists prefer research design, where defined independent and dependent variables are used. Anthropologists, however, prefer the ethnographic method Pardo 1996, Pardo and Prato eds. 2012), which is broader and does not oversimplify a case. With urban anthropology, the subject is exactingly broad as it is, there needs to be a degree and channel of control. For this reason, urban anthropologists find it easier to incorporate research design in their methods and usually define the city as either the independent variable or the dependent variable. So, the study would be conducted on either the city as the factor on some measure, such as immigration, or the city as something that is responding to some measure. A common technique used by anthropologists is “myth debunking.” In this process, anthropologists present a specific question and conduct a study to either verify or negate its validity. Research design is actually an important part of this process, allowing anthropologists to present a specific question and answer it. Being able to hone into such a broad subject specifically while remaining holistic is largely the reason why this technique is popular among anthropologists. Another technique is based on how anthropologists conduct their studies; they either use single case studies or controlled comparisons. By using case studies, they present and analyze a single urban society. The more sophisticated method is using controlled comparisons, where different societies are compared with controlled variables so that the associations are more valid and not merely correlations. In order to conduct either type of study, the anthropologist must define a basic unit, which is the ethnographic target population. The target population can be central to the research question, but not necessarily; for example, when studying migrant immigration, the people are being studied, not the neighbourhoods. Common ways to define target populations that are central to the research design are by spatial boundaries, common cultures, or common work. Ethics largely remain the same for all anthropologists. Still, working in an urban setting and a more complex society raises new issues. The societies that anthropologists are now studying are more similar to their own, and familiarity raises issues concerning objectivity. The best idea is for an anthropologist to identify his or her own values explicitly and adapt to a society based on what he or she is studying. With primitive societies, it would have been acceptable for an anthropologist to enter the society and explain at the beginning their intentions of studying the society. In urban cultures, however, they are not in what are considered alien cultures. Therefore, an anthropologist finds that a more detailed explanation of their intentions is needed and often finds that their intent must be explained multiple times throughout the study. Main areas of study There are two main ways to go about researching urban anthropology: by examining the types of cities or examining the social issues within the cities. These two methods are overlapping and dependent of each other. By defining different types of cities, one would use social factors as well as economic and political factors to categorize the cities. By directly looking at the different social issues, one would also be studying how they affect the dynamic of the city. There are four central approaches to the anthropological study of cities. 
The first is the urban ecology model in which the community and family network are central. The second is based on power and knowledge, specifically of how the city is planned. The third approach is studying local and supralocal and the link between the two degrees of units in the city. The last approach focuses on cities where political economy is central to the city’s infrastructure. Low uses several prominent studies from urban anthropologists to compile a list of the different types of cities that do not fall into only one category, and what factors individualize them. These types of cities include those focused on religious, economic, and social processes. An example of the religious city is what Low calls the “sacred city” in which religion is central to the daily life processes of the city. An example of an economic-centered city image is the “Deindustrialized city”. In America, this type of city is usually found in areas where coal mining was the main industry in the city, and once coal mines were shut down, the city became a ghost city rampant with unemployment and displaced workers. Globalization has been studied as a force that severely affects these areas, and anthropological studies have greatly increased the knowledge of the implications. Other types of cities include, but are certainly not limited to the contested city, in which urban resistance is a key image; the gendered city, dominant in urbanizing areas such as Africa where women find themselves newly employed in low-wage labour; postmodern city, that is centred on capitalism; and fortress city, where different populations within the city are separated, usually based on socioeconomic factors. The main reasons for the current studies focusing on types of cities are to understand the patterns in which cities are now developing in, to study theoretical cities that may come about in the future based on these current trends, and to increase the implications of anthropological studies. Anthropological studies have serious implications on the understanding of urban society: with the rapid rate of globalization, many peasant societies are quickly attempting to modernize their cities and populations, but at an expense of the interests of the people within the cities. Studies can illustrate these negative effects and project how the overall city will fare poorly in the future. The other method of studying urban anthropology is by studying various factors, such as social, economic, and political processes, within the general city. Focuses on these factors include studies on rural-urban migration, kinship in the city, problems that arise from urbanism, and social stratification. These studies are largely comparative between how these relations function in an urban setting versus how they function in a rural setting. When studying kinship, anthropologists have been focusing on the importance of extended family for urban natives versus migrants. Studies have shown, generally, that the more “native” one becomes with the urban city, the less importance is placed on maintaining familial relations. Another important and commonly studied aspect of the urban society is poverty, which is believed to be a problem that arises out of urbanism. Urban anthropologists study several aspects individually and attempt to tie different aspects together, such as the relationship between poverty and social stratification. See also Rural-Urban gradient Urban Sociology Urban vitality Notes References Basham, Richard (1978) "Urban Anthropology. 
The Cross-Cultural Study of Complex Societies", Mayfield Publishing Company. Fox, Richard G. (1977) "Urban Anthropology. Cities in their Cultural Settings". Englewood Cliffs, NJ: Prentice-Hall. Ulf Hannerz (1980) Exploring the City: Inquiries Toward an Urban Anthropology, Gregory Eliyu Guldin, Aidan William Southall (eds.) (1993) Urban Anthropology in China, Jacqueline Knörr (2007) Kreolität und postkoloniale Gesellschaft. Integration und Differenzierung in Jakarta, Frankfurt & New York: Campus Verlag, Eames, Edwin. Anthropology of the City, An Introduction to Urban Anthropology. Englewood Cliffs, NJ: Prentice-Hall. Gmelch, George. Urban Life: Readings in the Anthropology of the City. 4th ed. Waveland Press, 2002. Leeds, Anthony. (1972) Urban anthropology and urban studies. Urban Anthropology Newsletter, 1 (1): 4-5. Low, Setha. (2005) Theorizing the City: The New Urban Anthropology Reader. Rutgers University Press. Pardo, Italo. (1996) Managing Existence in Naples: Morality, Action, and Structure. Cambridge: Cambridge University Press. Pardo, Italo and Prato, Giuliana B. eds. (2012) Anthropology in the City: Methodology and Theory. London: Routledge. Pocock, D. (1960) Sociologies – Urban and Rural. Contributions to Urban Sociology, 4: 63-81. Prato, Giuliana B. and Pardo, Italo. ‘Urban Anthropology’. Urbanities-Journal of Urban Ethnography, Vol. 3 • No 2 • November 2013, pp 80–110, https://www.anthrojournal-urbanities.com/docs/tableofcontents_5/7-Discussions%20and%20Comments.pdf Pardo, Italo and Prato, Giuliana B. (2017) The Palgrave Handbook of Urban Ethnography. New York: Palgrave Macmillan. https://link.springer.com/book/10.1007/978-3-319-64289-5 Wirth, Louis (1938) Urbanism as a way of life. American Journal of Sociology, 44:1-24. Anthropology Urban planning
Urban anthropology
[ "Engineering" ]
3,088
[ "Urban planning", "Architecture" ]
14,478,153
https://en.wikipedia.org/wiki/Stars%20and%20bars%20%28combinatorics%29
In combinatorics, stars and bars (also called "sticks and stones", "balls and bars", and "dots and dividers") is a graphical aid for deriving certain combinatorial theorems. It can be used to solve many simple counting problems, such as how many ways there are to put indistinguishable balls into distinguishable bins. The solution to this particular problem is given by the binomial coefficient , which is the number of subsets of size that can be formed from a set of size . If, for example, there are two balls and three bins, then the number of ways of placing the balls is . The table shows the six possible ways of distributing the two balls, the strings of stars and bars that represent them (with stars indicating balls and bars separating bins from one another), and the subsets that correspond to the strings. As two bars are needed to separate three bins and there are two balls, each string contains two bars and two stars. Each subset indicates which of the four symbols in the corresponding string is a bar. Statements of theorems The stars and bars method is often introduced specifically to prove the following two theorems of elementary combinatorics concerning the number of solutions to an equation. Theorem one For any pair of positive integers and , the number of -tuples of positive integers whose sum is is equal to the number of -element subsets of a set with elements. For example, if and , the theorem gives the number of solutions to (with ) as the binomial coefficient where is the number of combinations of elements taken at a time. This corresponds to compositions of an integer. Theorem two For any pair of positive integers and , the number of -tuples of non-negative integers whose sum is is equal to the number of multisets of size taken from a set of size , or equivalently, the number of multisets of size taken from a set of size , and is given by For example, if and , the theorem gives the number of solutions to (with ) as where the multiset coefficient is the number of multisets of size , with elements taken from a set of size . This corresponds to weak compositions of an integer. With fixed, the numbers for are those in the st diagonal of Pascal's triangle. For example, when the th number is the st triangular number, which falls on the second diagonal, 1, 3, 6, 10, …. Proofs via the method of stars and bars Theorem one proof The problem of enumerating k-tuples whose sum is n is equivalent to the problem of counting configurations of the following kind: let there be n objects to be placed into k bins, so that all bins contain at least one object. The bins are distinguished (say they are numbered 1 to k) but the n objects are not (so configurations are only distinguished by the number of objects present in each bin). A configuration is thus represented by a k-tuple of positive integers. The n objects are now represented as a row of n stars; adjacent bins are separated by bars. The configuration will be specified by indicating the boundary between the first and second bin, the boundary between the second and third bin, and so on. Hence bars need to be placed between stars. Because no bin is allowed to be empty, there is at most one bar between any pair of stars. There are gaps between stars and hence positions in which a bar may be placed. A configuration is obtained by choosing of these gaps to contain a bar; therefore there are configurations. 
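The counts in the two theorems are easy to verify numerically. Below is a minimal sketch (my own illustration, not part of the article): it enumerates k-tuples by brute force and checks them against the closed forms C(n-1, k-1) for positive solutions (Theorem one) and C(n+k-1, k-1) for non-negative solutions (Theorem two).

```python
from itertools import product
from math import comb

def brute_force_count(n, k, minimum):
    """Count k-tuples of integers >= minimum that sum to n (slow, for small n and k)."""
    return sum(1 for t in product(range(minimum, n + 1), repeat=k) if sum(t) == n)

for n, k in [(7, 3), (5, 4), (10, 4)]:
    positive = brute_force_count(n, k, 1)    # Theorem one: k positive parts
    nonneg = brute_force_count(n, k, 0)      # Theorem two: k non-negative parts
    assert positive == comb(n - 1, k - 1)    # choose k-1 of the n-1 gaps to hold a bar
    assert nonneg == comb(n + k - 1, k - 1)  # bars may repeat and may sit at the ends
    print(f"n={n}, k={k}: {positive} positive, {nonneg} non-negative solutions")
```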
Example With and , start by placing seven stars in a line: Now indicate the boundaries between the bins: In general two of the six possible bar positions must be chosen. Therefore there are such configurations. Theorem two proof In this case, the weakened restriction of non-negativity instead of positivity means that we can place multiple bars between stars and that one or more bars also be placed before the first star and after the last star. In terms of configurations involving objects and bins, bins are now allowed to be empty. Rather than a -set of bar positions taken from a set of size as in the proof of Theorem one, we now have a -multiset of bar positions taken from a set of size (since bar positions may repeat and since the ends are now allowed bar positions). An alternative interpretation in terms of multisets is the following: there is a set of bin labels from which a multiset of size is to be chosen, the multiplicity of a bin label in this multiset indicating the number of objects placed in that bin. The equality can also be understood as an equivalence of different counting problems: the number of -tuples of non-negative integers whose sum is equals the number of -tuples of non-negative integers whose sum is , which follows by interchanging the roles of bars and stars in the diagrams representing configurations. To see the expression directly, observe that any arrangement of stars and bars consists of a total of symbols, of which are stars and of which are bars. Thus, we may lay out slots and choose of these to contain bars (or, equivalently, choose n of the slots to contain stars). Example When and , the tuple (4, 0, 1, 2, 0) may be represented by the following diagram: If possible bar positions are labeled 1, 2, 3, 4, 5, 6, 7, 8 with label corresponding to a bar preceding the th star and following any previous star and 8 to a bar following the last star, then this configuration corresponds to the -multiset , as described in the proof of Theorem two. If bins are labeled 1, 2, 3, 4, 5, then it also corresponds to the -multiset , also as described in the proof of Theorem two. Relation between Theorems one and two Theorem one can be restated in terms of Theorem two, because the requirement that each variable be positive can be imposed by shifting each variable by −1, and then requiring only that each variable be non-negative. For example: with is equivalent to: with where for each . Further examples Example 1 If one wishes to count the number of ways to distribute seven indistinguishable one dollar coins among Amber, Ben, and Curtis so that each of them receives at least one dollar, one may observe that distributions are essentially equivalent to tuples of three positive integers whose sum is 7. (Here the first entry in the tuple is the number of coins given to Amber, and so on.) Thus Theorem 1 applies, with and , and there are ways to distribute the coins. Example 2 If , , and the bin labels are , then ★|★★★||★ could represent either the 4-tuple , or the multiset of bar positions , or the multiset of bin labels . The solution of this problem should use Theorem 2 with stars and bars to give configurations. Example 3 In the proof of Theorem two there can be more bars than stars, which cannot happen in the proof of Theorem one. So, for example, 10 balls into 7 bins gives configurations, while 7 balls into 10 bins gives configurations, and 6 balls into 11 bins gives configurations. 
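For the worked examples above, the corresponding binomial coefficients can be evaluated directly; the snippet below (my own check, not from the article) prints the counts for Examples 1 to 3.

```python
from math import comb

# Example 1: seven coins among Amber, Ben and Curtis, each gets at least one (Theorem one)
print(comb(7 - 1, 3 - 1))        # 15

# Example 2: the diagram *|***||* has 5 stars and 3 bars, i.e. n = 5, k = 4 (Theorem two)
print(comb(5 + 4 - 1, 4 - 1))    # 56

# Example 3: Theorem two allows more bars than stars
print(comb(10 + 7 - 1, 7 - 1))   # 10 balls into 7 bins  -> 8008
print(comb(7 + 10 - 1, 10 - 1))  # 7 balls into 10 bins  -> 11440
print(comb(6 + 11 - 1, 11 - 1))  # 6 balls into 11 bins  -> 8008
```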
Example 4 The graphical method was used by Paul Ehrenfest and Heike Kamerlingh Onnes—with symbol ε (quantum energy element) in place of a star and the symbol 0 in place of a bar—as a simple derivation of Max Planck's expression for the number of "complexions" for a system of "resonators" of a single frequency. By complexions (microstates) Planck meant distributions of energy elements ε over resonators. The number of complexions is The graphical representation of each possible distribution would contain copies of the symbol ε and copies of the symbol 0. In their demonstration, Ehrenfest and Kamerlingh Onnes took and (i.e., combinations). They chose the 4-tuple (4, 2, 0, 1) as the illustrative example for this symbolic representation: εεεε0εε00ε. Relation to generating functions The enumerations of Theorems one and two can also be found using generating functions involving simple rational expressions. The two cases are very similar; we will look at the case when , that is, Theorem two first. There is only one configuration for a single bin and any given number of objects (because the objects are not distinguished). This is represented by the generating function The series is a geometric series, and the last equality holds analytically for , but is better understood in this context as a manipulation of formal power series. The exponent of indicates how many objects are placed in the bin. Each additional bin is represented by another factor of ; the generating function for bins is , where the multiplication is the Cauchy product of formal power series. To find the number of configurations with objects, we want the coefficient of (denoted by prefixing the expression for the generating function with ), that is, . This coefficient can be found using binomial series and agrees with the result of Theorem two, namely . This Cauchy product expression is justified via stars and bars: the coefficient of in the expansion of the product is the number of ways of obtaining the th power of by multiplying one power of from each of the factors. So the stars represent s and a bar separates the s coming from one factor from those coming from the next factor. For the case when , that is, Theorem one, no configuration has an empty bin, and so the generating function for a single bin is . The Cauchy product is therefore , and the coefficient of is found using binomial series to be . See also Gaussian binomial coefficient Partition (number theory) Twelvefold way Dirichlet-multinomial distribution References Further reading Stars and bars (probability) Stars and bars (probability) Articles containing proofs
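The generating-function derivation in the last section above can also be verified mechanically: multiplying k truncated copies of the series 1 + x + x^2 + ... and reading off the coefficient of x^n reproduces the Theorem-two count. A minimal sketch with hypothetical helper names:

```python
from math import comb

def poly_mul(p, q, max_deg):
    """Cauchy product of two coefficient lists, truncated at degree max_deg."""
    out = [0] * (max_deg + 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            if i + j <= max_deg:
                out[i + j] += a * c
    return out

def weak_compositions_via_gf(n, k):
    """Coefficient of x^n in (1 + x + ... + x^n)^k, one factor per bin."""
    single_bin = [1] * (n + 1)   # the series 1/(1-x), truncated at degree n
    prod = [1] + [0] * n         # the constant polynomial 1
    for _ in range(k):
        prod = poly_mul(prod, single_bin, n)
    return prod[n]

assert weak_compositions_via_gf(7, 3) == comb(7 + 3 - 1, 3 - 1)  # 36
print(weak_compositions_via_gf(7, 3))
```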
Stars and bars (combinatorics)
[ "Mathematics" ]
2,051
[ "Discrete mathematics", "Applied probability", "Applied mathematics", "Combinatorics", "Articles containing proofs" ]
14,479,423
https://en.wikipedia.org/wiki/Sagittarius%20Window%20Eclipsing%20Extrasolar%20Planet%20Search
The Sagittarius Window Eclipsing Extrasolar Planet Search, or SWEEPS, was a 2006 astronomical survey project using the Hubble Space Telescope's Advanced Camera for Surveys - Wide Field Channel to monitor 180,000 stars for seven days to detect extrasolar planets via the transit method. Area examined The stars that were monitored in this astronomical survey were all located in the Sagittarius-I Window. The Sagittarius Window is a rare view to the Milky Way's central bulge stars: our view to most of the galaxy's central stars is generally blocked by lanes of dust. These stars in the galaxy's central bulge region are approximately 27,000 light years from Earth. Planets discovered Sixteen candidate planets were discovered with orbital periods ranging from 0.6 to 4.2 days. Planets with orbital periods less than 1.2 days have not previously been detected, and have been dubbed "ultra-short period planets" (USPPs) by the search team. USPPs were discovered only around low-mass stars, suggesting that larger stars destroyed any planets orbiting so closely or that planets were unable to migrate as far inward around larger stars. Planets were found with roughly the same frequency of occurrence as in the local neighborhood of Earth. SWEEPS-4 and SWEEPS-11 orbited stars that were sufficiently visually distinct from their neighbors that follow-up observations using the radial velocity method were possible, allowing their masses to be determined. This table is constructed from information obtained from the Extrasolar Planets Encyclopedia and SIMBAD databases that reference the Nature article as their source. See also Baade's Window Optical Gravitational Lensing Experiment or OGLE also examines the galactic bulge for planets. References External links News Release Number: STScI-2006-34 Hubble Finds Extrasolar Planets Far Across Galaxy Sagittarius (constellation) Astronomical surveys Exoplanet search projects Hubble Space Telescope
Sagittarius Window Eclipsing Extrasolar Planet Search
[ "Astronomy" ]
381
[ "Exoplanet search projects", "Astronomical surveys", "Works about astronomy", "Constellations", "Astronomy projects", "Sagittarius (constellation)", "Astronomical objects" ]
14,479,711
https://en.wikipedia.org/wiki/Self-concordant%20function
A self-concordant function is a function satisfying a certain differential inequality, which makes it particularly easy for optimization using Newton's method A self-concordant barrier is a particular self-concordant function, that is also a barrier function for a particular convex set. Self-concordant barriers are important ingredients in interior point methods for optimization. Self-concordant functions Multivariate self-concordant function Here is the general definition of a self-concordant function. Let C be a convex nonempty open set in Rn. Let f be a function that is three-times continuously differentiable defined on C. We say that f is self-concordant on C if it satisfies the following properties: 1. Barrier property: on any sequence of points in C that converges to a boundary point of C, f converges to ∞. 2. Differential inequality: for every point x in C, and any direction h in Rn, let gh be the function f restricted to the direction h, that is: gh(t) = f(x+t*h). Then the one-dimensional function gh should satisfy the following differential inequality: . Equivalently: Univariate self-concordant function A function is self-concordant on if: Equivalently: if wherever it satisfies: and satisfies elsewhere. Examples Linear and convex quadratic functions are self-concordant, since their third derivative is zero. Any function where is defined and convex for all and verifies , is self concordant on its domain which is . Some examples are for for for any function satisfying the conditions, the function with also satisfies the conditions. Some functions that are not self-concordant: Self-concordant barriers Here is the general definition of a self-concordant barrier (SCB). Let C be a convex closed set in Rn with a non-empty interior. Let f be a function from interior(C) to R. Let M>0 be a real parameter. We say that f is a M-self-concordant barrier for C if it satisfies the following: 1. f is a self-concordant function on interior(C). 2. For every point x in interior(C), and any direction h in Rn, let gh be the function f restricted to the direction h, that is: gh(t) = f(x+t*h). Then the one-dimensional function gh should satisfy the following differential inequality:. Constructing SCBs Due to the importance of SCBs in interior-point methods, it is important to know how to construct SCBs for various domains. In theory, it can be proved that every closed convex domain in Rn has a self-concordant barrier with parameter O(n). But this “universal barrier” is given by some multivariate integrals, and it is too complicated for actual computations. Hence, the main goal is to construct SCBs that are efficiently computable. SCBs can be constructed from some basic SCBs, that are combined to produce SCBs for more complex domains, using several combination rules. Basic SCBs Every constant is a self-concordant barrier for all Rn, with parameter M=0. It is the only self-concordant barrier for the entire space, and the only self-concordant barrier with M < 1. [Note that linear and quadratic functions are self-concordant functions, but they are not self concordant barriers]. For the positive half-line (), is a self-concordant barrier with parameter . This can be proved directly from the definition. Substitution rule Let G be a closed convex domain in Rn, and g an M-SCB for G. Let x = Ay+b be an affine mapping from Rk to Rn with its image intersecting the interior of G. Let H be the inverse image of G under the mapping: H = {y in Rk | Ay+b in G}. Let h be the composite function h(y) := g(Ay+b). 
Then, h is an M-SCB for H. For example, take n=1, G the positive half-line, and . For any k, let a be a k-element vector and b a scalar. Let H = {y in Rk | aTy+b ≥ 0} = a k-dimensional half-space. By the substitution rule, is a 1-SCB for H. A more common format is H = {x in Rk | aTx ≤ b}, for which the SCB is . The substitution rule can be extended from affine mappings to a certain class of "appropriate" mappings, and to quadratic mappings. Cartesian product rule For all i in 1,...,m, let Gi be a closed convex domains in Rni, and let gi be an Mi-SCB for Gi. Let G be the cartesian product of all Gi. Let g(x1,...,xm) := sumi gi(xi). Then, g is a SCB for G, with parameter sumi Mi. For example, take all Gi to be the positive half-line, so that G is the positive orthant . Let is an m-SCB for G. We can now apply the substitution rule. We get that, for the polytope defined by the linear inequalities ajTx ≤ bj for j in 1,...,m, if it satisfies Slater's condition, then is an m-SCB. The linear functions can be replaced by quadratic functions. Intersection rule Let G1,...,Gm be closed convex domains in Rn. For each i in 1,...,m, let gi be an Mi-SCB for Gi, and ri a real number. Let G be the intersection of all Gi, and suppose its interior is nonempty. Let g := sumi ri*gi. Then, g is a SCB for G, with parameter sumi ri*Mi. Therefore, if G is defined by a list of constraints, we can find a SCB for each constraint separately, and then simply sum them to get a SCB for G. For example, suppose the domain is defined by m linear constraints of the form ajTx ≤ bj, for j in 1,...,m. Then we can use the Intersection rule to construct the m-SCB (the same one that we previously computed using the Cartesian product rule). SCBs for epigraphs The epigraph of a function f(x) is the area above the graph of the function, that is, . The epigraph of f is a convex set if and only if f is a convex function. The following theorems present some functions f for which the epigraph has an SCB. Let g(t) be a 3-times continuously-differentiable concave function on t>0, such that is bounded by a constant (denoted 3*b) for all t>0. Let G be the 2-dimensional convex domain: Then, the function f(x,t) = -ln(f(t)-x) - max[1,b2]*ln(t) is a self-concordant barrier for G, with parameter (1+max[1,b2]). Examples: Let g(t) = t1/p, for some p≥1, and b=(2p-1)/(3p). Then has a 2-SCB. Similarly, has a 2-SCB. Using the Intersection rule, we get that has a 4-SCB. Let g(t)=ln(t) and b=2/3. Then has a 2-SCB. We can now construct a SCB for the problem of minimizing the p-norm: , where vj are constant scalars, uj are constant vectors, and p>0 is a constant. We first convert it into minimization of a linear objective: , with the constraints: for all j in [m]. For each constraint, we have a 4-SCB by the affine substitution rule. Using the Intersection rule, we get a (4n)-SCB for the entire feasible domain. Similarly, let g be a 3-times continuously-differentiable convex function on the ray x>0, such that: for all x>0. Let G be the 2-dimensional convex domain: closure({ (t,x) in R2: x>0, t ≥ g(x) }). Then, the function f(x,t) = -ln(t-f(x)) - max[1,b2]*ln(x) is a self-concordant barrier for G, with parameter (1+max[1,b2]). Examples: Let g(x) = x−p, for some p>0, and b=(2+p)/3. Then has a 2-SCB. Let g(x)=x ln(x) and b=1/3. Then has a 2-SCB. SCBs for cones For the second order cone , the function is a self-concordant barrier. For the cone of positive semidefinite of m*m symmetric matrices, the function is a self-concordant barrier. 
For the quadratic region defined by where where is a positive semi-definite symmetric matrix, the logarithmic barrier is self-concordant with For the exponential cone , the function is a self-concordant barrier. For the power cone , the function is a self-concordant barrier. History As mentioned in the "Bibliography Comments" of their 1994 book, self-concordant functions were introduced in 1988 by Yurii Nesterov and further developed with Arkadi Nemirovski. As explained in their basic observation was that the Newton method is affine invariant, in the sense that if for a function we have Newton steps then for a function where is a non-degenerate linear transformation, starting from we have the Newton steps which can be shown recursively . However, the standard analysis of the Newton method supposes that the Hessian of is Lipschitz continuous, that is for some constant . If we suppose that is 3 times continuously differentiable, then this is equivalent to for all where . Then the left hand side of the above inequality is invariant under the affine transformation , however the right hand side is not. The authors note that the right hand side can be made also invariant if we replace the Euclidean metric by the scalar product defined by the Hessian of defined as for . They then arrive at the definition of a self concordant function as . Properties Linear combination If and are self-concordant with constants and and , then is self-concordant with constant . Affine transformation If is self-concordant with constant and is an affine transformation of , then is also self-concordant with parameter . Convex conjugate If is self-concordant, then its convex conjugate is also self-concordant. Non-singular Hessian If is self-concordant and the domain of contains no straight line (infinite in both directions), then is non-singular. Conversely, if for some in the domain of and we have , then for all for which is in the domain of and then is linear and cannot have a maximum so all of is in the domain of . We note also that cannot have a minimum inside its domain. Applications Among other things, self-concordant functions are useful in the analysis of Newton's method. Self-concordant barrier functions are used to develop the barrier functions used in interior point methods for convex and nonlinear optimization. The usual analysis of the Newton method would not work for barrier functions as their second derivative cannot be Lipschitz continuous, otherwise they would be bounded on any compact subset of . Self-concordant barrier functions are a class of functions that can be used as barriers in constrained optimization methods can be minimized using the Newton algorithm with provable convergence properties analogous to the usual case (but these results are somewhat more difficult to derive) to have both of the above, the usual constant bound on the third derivative of the function (required to get the usual convergence results for the Newton method) is replaced by a bound relative to the Hessian Minimizing a self-concordant function A self-concordant function may be minimized with a modified Newton method where we have a bound on the number of steps required for convergence. We suppose here that is a standard self-concordant function, that is it is self-concordant with parameter . We define the Newton decrement of at as the size of the Newton step in the local norm defined by the Hessian of at Then for in the domain of , if then it is possible to prove that the Newton iterate will be also in the domain of . 
This is because, based on the self-concordance of , it is possible to give some finite bounds on the value of . We further have Then if we have then it is also guaranteed that , so that we can continue to use the Newton method until convergence. Note that for for some we have quadratic convergence of to 0 as . This then gives quadratic convergence of to and of to , where , by the following theorem. If then with the following definitions If we start the Newton method from some with then we have to start by using a damped Newton method defined by For this it can be shown that with as defined previously. Note that is an increasing function for so that for any , so the value of is guaranteed to decrease by a certain amount in each iteration, which also proves that is in the domain of . References Functions and mappings
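As a concrete illustration of the damped Newton scheme sketched above (my own code, using a small hypothetical polytope rather than anything from the article), the following minimizes the self-concordant logarithmic barrier f(x) = -sum_j ln(b_j - a_j^T x), computing the Newton decrement lambda(x) = (grad f(x)^T [Hess f(x)]^{-1} grad f(x))^{1/2} and taking damped steps of length 1/(1+lambda):

```python
import numpy as np

# Hypothetical small polytope {x : A x <= b} with a strictly feasible point at the origin.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, 1.0])

def barrier(x):
    """Logarithmic barrier f(x) = -sum_j ln(b_j - a_j^T x), with gradient and Hessian."""
    s = b - A @ x                              # slacks, must stay positive
    f = -np.sum(np.log(s))
    grad = A.T @ (1.0 / s)
    hess = A.T @ np.diag(1.0 / s**2) @ A
    return f, grad, hess

def damped_newton(x, tol=1e-10, max_iter=100):
    for _ in range(max_iter):
        _, g, H = barrier(x)
        step = np.linalg.solve(H, g)
        lam = np.sqrt(g @ step)                # Newton decrement in the local Hessian norm
        if lam < tol:
            break
        x = x - step / (1.0 + lam)             # damped step; keeps the iterate feasible
    return x

print(damped_newton(np.zeros(2)))              # converges to the analytic center of the polytope
```

The damped step length 1/(1+lambda) is what the theory above guarantees keeps the next iterate inside the domain; once the decrement is small enough, full Newton steps with quadratic convergence can be used instead, as described in the section.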
Self-concordant function
[ "Mathematics" ]
2,889
[ "Mathematical analysis", "Mathematical objects", "Functions and mappings", "Mathematical relations" ]
14,479,902
https://en.wikipedia.org/wiki/Monogenic%20system
In classical mechanics, a physical system is termed a monogenic system if the force acting on the system can be modelled in a particular, especially convenient mathematical form. The systems that are typically studied in physics are monogenic. The term was introduced by Cornelius Lanczos in his book The Variational Principles of Mechanics (1970). In Lagrangian mechanics, the property of being monogenic is a necessary condition for certain different formulations to be mathematically equivalent. If a physical system is both a holonomic system and a monogenic system, then it is possible to derive Lagrange's equations from d'Alembert's principle; it is also possible to derive Lagrange's equations from Hamilton's principle. Mathematical definition In a physical system, if all forces, with the exception of the constraint forces, are derivable from a generalized scalar potential, and this generalized scalar potential is a function of generalized coordinates, generalized velocities, or time, then this system is a monogenic system. Expressed using equations, the exact relationship between the generalized force $\mathcal{F}_i$ and the generalized potential $\mathcal{U}(q, \dot{q}, t)$ is as follows:
$$\mathcal{F}_i = -\frac{\partial \mathcal{U}}{\partial q_i} + \frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial \mathcal{U}}{\partial \dot{q}_i}\right),$$
where $q_i$ is a generalized coordinate, $\dot{q}_i$ is a generalized velocity, and $t$ is time. If the generalized potential in a monogenic system depends only on generalized coordinates, and not on generalized velocities and time, then this system is a conservative system. The relationship between generalized force and generalized potential is then:
$$\mathcal{F}_i = -\frac{\partial \mathcal{U}}{\partial q_i}.$$
See also Scleronomous References Mechanics Classical mechanics Lagrangian mechanics Hamiltonian mechanics Dynamical systems
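As a worked illustration of the two formulas above (my own example, not taken from the article), consider a single coordinate $q$ with the hypothetical velocity-dependent potential $\mathcal{U}(q, \dot{q}) = \tfrac{1}{2} k q^2 - c\, q \dot{q}$. The monogenic formula still yields a perfectly ordinary force, because the velocity-dependent piece is a total time derivative:
$$\mathcal{F} = -\frac{\partial \mathcal{U}}{\partial q} + \frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial \mathcal{U}}{\partial \dot{q}}\right) = -\left(k q - c \dot{q}\right) + \frac{\mathrm{d}}{\mathrm{d}t}\left(-c q\right) = -k q + c \dot{q} - c \dot{q} = -k q .$$
Had $\mathcal{U}$ depended only on $q$ (drop the $c$ term), the second term would vanish and the conservative-system formula $\mathcal{F} = -\partial \mathcal{U} / \partial q$ would apply directly.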
Monogenic system
[ "Physics", "Mathematics", "Engineering" ]
313
[ "Theoretical physics", "Lagrangian mechanics", "Classical mechanics", "Hamiltonian mechanics", "Mechanics", "Mechanical engineering", "Dynamical systems" ]
14,481,299
https://en.wikipedia.org/wiki/Ana%20Mar%C3%ADa%20L%C3%B3pez%20Colom%C3%A9
Ana María López Colomé is a distinguished Mexican biochemist who won the 2002 L'Oréal-UNESCO Award for Women in Science – Latin America for her studies on the human retina and the prevention of retinitis pigmentosa and several retinopathies. López Colomé is a former head of the Department of Biochemistry at the Faculty of Medicine and a researcher at the Institute of Cellular Physiology of the National Autonomous University of Mexico (UNAM). She holds a bachelor's degree in biology, a master's degree in chemistry and a doctorate degree in biochemistry. Awards and honors SNI Nivel III (This award is given by Mexico's National Science Foundation based on a person's publication record and impact.) “Mexicanos Notables”. Canal 11. 2009 (This award is given by Mexico's state funded TV channel 11 and is given only to noteworthy Mexicans.) Award Ciudad Capital: Heberto Castillo Martínez. Denominación “Thalía Harmony Baillet”, in the area of health. 2008 Distinción “Mujer Líder 2008”. Consorcio “Mundo Ejecutivo” (Empresarial). 2008 “Sor Juana Inés de la Cruz” Award UNAM. 2006 (This award is given only to one woman in each school of UNAM and is a prestigious award to recognize women leaders in the university.) UNAM Award, Research in Natural Sciences, 2002 “Mujer del Año” (Woman of the Year). Patronato Nacional de La Mujer del Año, 2002 Recognized as the Smartest Woman of Mexico with the “Laureana Wright Award given by the Mexican Society in Geography and Statistics. 2003 Award for conducting the best basic research in her first University year. Mexico's National Academy of Medicine, 2003 Recognized as the Woman of the Year by the Rotary Club of the Pedregal, 2003 “Hartley” Award. University of Southampton, UK, 1985 “Gabino Barreda” Award 1985. UNAM (This prestigious award is given to the student with the highest GPA of each generation at UNAM). References Year of birth missing (living people) Living people Mexican biochemists Women biochemists Mexican women chemists Academic staff of the National Autonomous University of Mexico Mexican women scientists L'Oréal-UNESCO Awards for Women in Science laureates 21st-century Mexican women scientists 21st-century Mexican scientists 20th-century Mexican scientists 20th-century Mexican women scientists
Ana María López Colomé
[ "Chemistry" ]
505
[ "Biochemists", "Women biochemists" ]
14,481,648
https://en.wikipedia.org/wiki/Binder%20parameter
The Binder parameter or Binder cumulant in statistical physics, also known as the fourth-order cumulant $U_L = 1 - \frac{\langle s^4 \rangle}{3\langle s^2 \rangle^2}$, is defined from the kurtosis of the order parameter, $s$, and was introduced by Austrian theoretical physicist Kurt Binder. It is frequently used to determine accurately phase transition points in numerical simulations of various models. The phase transition point is usually identified by comparing the behavior of $U_L$ as a function of the temperature $T$ for different values of the system size $L$. The transition temperature is the unique point where the different curves cross in the thermodynamic limit. This behavior is based on the fact that in the critical region, $T \approx T_c$, the Binder parameter obeys the finite-size-scaling form $U_L = f\!\left((T - T_c)\,L^{1/\nu}\right)$, where $\nu$ is the critical exponent of the correlation length, so that at $T = T_c$ the cumulant becomes independent of $L$. Accordingly, the cumulant may also be used to identify the universality class of the transition by determining the value of the critical exponent of the correlation length. In the thermodynamic limit, at the critical point, the value of the Binder parameter depends on boundary conditions, the shape of the system, and anisotropy of correlations. References Statistical mechanics
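As a concrete sketch (not part of the original article), the cumulant can be estimated directly from Monte Carlo samples of the order parameter. The function name and the placeholder data below are assumptions made only for illustration, not a real simulation.

```python
import numpy as np

def binder_cumulant(s):
    """Estimate the fourth-order Binder cumulant U = 1 - <s^4> / (3 <s^2>^2)
    from an array of order-parameter samples (e.g. magnetization per spin)."""
    s = np.asarray(s, dtype=float)
    m2 = np.mean(s ** 2)
    m4 = np.mean(s ** 4)
    return 1.0 - m4 / (3.0 * m2 ** 2)

# Hypothetical usage: samples at several system sizes L. In practice, curves of
# U(T; L) for different L are plotted against T and their crossing locates T_c.
rng = np.random.default_rng(0)
for L in (8, 16, 32):
    samples = rng.normal(0.0, 1.0 / L, size=10_000)  # placeholder data
    print(L, binder_cumulant(samples))
```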
Binder parameter
[ "Physics" ]
212
[ "Statistical mechanics stubs", "Statistical mechanics" ]
14,481,951
https://en.wikipedia.org/wiki/GHD%20Group
GHD Group Pty Ltd (formerly known as Gutteridge Haskins & Davey) is a global employee-owned multinational technical professional services firm providing advisory, architecture and design, buildings, digital, energy and resources, environmental, geosciences, project management, transportation and water services. GHD employs more than 11,000 people—engineers, architects, planners, scientists, project managers and economists— operating in over 160 offices across five continents serving clients in water, energy and resources, environment, property and buildings, and transportation markets. GHD has delivered projects in over 135 countries. History GHD was founded as a private practice in Melbourne, Australia in 1928 by Alan Gordon Gutteridge who operated as a consulting engineer with focuses on water and sewerage. The partnership of Gerald Haskins and Geoffrey Innes Davey joined with Gutteridge's practice in 1939, establishing the formal partnership of Gutteridge Haskins & Davey. During the 1950s and 60s GHD grew to more than 400 employees while expanding into transportation, manufacturing plants, building and civil works, energy, mining and dams. A notable project of the 1960s was the extension of potable water and sewage infrastructure across Tasmania. GHD expanded globally in the 1970s with a joint venture in Malaysia. During the 1990s GHD expanded its services into architecture, environmental and business consulting while expanding its presence in Southeast Asia. During the 2000s GHD continued to grow through a series of mergers and acquisitions in the US, Canada, Europe, Australia, New Zealand, the Middle East, China, Chile and Malaysia. By 2013 GHD had grown to more than 1000 employees in North America. In 2014, GHD merged with Canadian firm Conestoga-Rovers & Associates in one of the largest private stock transactions in the engineering and environmental consulting industry, creating a combined company of 8,500 employees. At the time of the merger Conestoga-Rovers had about 3,000 employees mostly in North America and the United Kingdom, while GHD had 5,500 employees across five continents. The combined company became the sixth-largest employee-owned engineering consultancy in the world, with $1.5 billion in combined revenue. Also in 2014, GHD acquired the brand and business of Australian architecture firm Woodhead, later renamed GHD Woodhead. In 2018 GHD opened a new North American headquarters facility in Waterloo, Ontario. At that time, the company said that the North American region accounted for over half of GHD's global revenue. GHD ranks #9 in international design firms operating in the US and #8 in Canada according to Engineering News-Record’s 2021 annual survey of key market segments. On 2 April 2024, Jim Giannopoulos was appointed CEO. Market sectors In FY2022 28% of revenues were in the Environment sector, 25% in transportation, 17% in water, 17% in property and buildings, and 13% in Energy and resources. GHD saw 6.3% organic growth on FY2021 with revenues of 2.322 Billion AUD (1.617 Billion USD). Water – As of 2018, eighteen percent of GHD’s revenues were derived from water-related design services. Past projects have included desalination projects for the City of Carlsbad, Camp Pendleton, City of Huntington Beach and South Orange County in the US, the Brisbane and Christchurch rebuilding efforts, Manila Sewerage implementation, Codelco Colon Processing Plant in Chile and the Oakura Sewerage scheme in New Zealand. 
GHD designed and administered the contract to upgrade the Hespeler trunk sanitary sewer line for the city of Cambridge, Ontario, without digging a 2 km trench through an environmentally sensitive area. It also designed a tunnel aqueduct for Manila Water to provide water to approximately 7 million people. It designed and constructed two Ultraviolet Disinfection surface water treatment facilities for Westchester County, New York. It constructed a water treatment system using membrane bioreactor technology to provide 100,000 gallons of reclaimed water per day for Wickenburg, Arizona. Energy & Resources – As of 2018, sixteen percent of GHD’s revenues were derived from energy and resources. Past projects of note include the Hawsons Magnetite Iron Ore Mine in Australia, QCLNG Export Pipeline in Australia and the Taysan Copper Mine in the Philippines. Within Australia the company is working with both renewable and baseload generators to help navigate from a coal-reliant power grid to one balanced with wind and solar power. Recent projects include Front End Engineering Design for a project developed by Tourian Renewables Limited aimed at turning waste plastic into fuels, oils and chemicals; and the expansion of the Sales de Jujuy lithium carbonate plant in Argentina. Environment – As of 2018, twenty-seven percent of GHD’s revenues were generated from environmental services. Past projects of note include the HydroAysen Transmission System in Chile, Minimbah Bank Third Track Biodiversity project and the Townsville Marine Precinct, both in Australia. Recent projects include the installation of an articulated concrete block mat and sand-water slurry to remediate industrial contamination of Bayou d'Inde in Louisiana; advisement for the North East Link Authority on its road infrastructure project in Victoria; and conversion of a landfill in the Mariana Islands to a park, including the installation of drainage and liner systems reducing the amount of contaminated water and protecting against erosion in adherence to EPA regulations. Property & Buildings – As of 2018, seventeen percent of GHD’s revenues were generated from property and buildings. Past projects of note include the Al Wakra Hospital in Qatar, the Kerikeri Police Station in New Zealand and Richlands Rail Station in Australia. Recent projects include construction of the Barwon Water head office in Geelong, Victoria; restoration of Grantley Hall in North Yorkshire; and engineering design services for the redevelopment of Qasr Al Hosn in Abu Dhabi. Transportation – As of 2018, twenty-two percent of GHD’s revenues were generated from transportation. Past projects of note include the Ahuriri to Napier Transportation Link in New Zealand, the San Antonio Port Expansion in Chile and South Road Superway in South Australia. Recent projects include the implementation of a multi-lane roundabout on Highway 68 in Monterey, California, including the project management, traffic analysis, and specialty engineering design; business case development and economic assessment, including highway and bridge concept designs, for New Zealand State Highway 3 connecting the Manawatū-Whanganui and Hawke’s Bay regions; and the design and construction of 1 km of acoustic barriers along New Zealand State Highway 1 to mitigate noise and air pollution. Digital – GHD employs over 500 technology professionals who provide data analytics, location intelligence, cyber security, and virtual and augmented reality technology services.
Recent projects include the creation of a connected site for the Level Crossing Removal Authority to facilitate data handling and sharing across a complex rail infrastructure upgrade; and preparation of a smart city framework, roadmap and governance process for Glenelg Shire, Victoria. Achievements In 2018 GHD was ranked 16th in Financial Review’s annual top 500 private companies in Australia list and 29th in Engineering News-Record’s annual Top 150 Global Design Firms. The International Water Association awarded GHD's Birmingham Resilience Project the bronze medal for Exceptional Project Execution and Delivery at its 2018 Innovation Awards. The American Council of Engineering Companies of California honored GHD for Comprehensive Large-Scale Habitat Restoration for the wetland mitigation work the firm did for the Border Coast Regional Airport Authority. GHD’s Ellerslie Acoustic Barrier project for the New Zealand Transport Agency received the Excellence in Concrete for the Community award from Concrete3. The Australian Institute of Project Management recognized GHD for the firm's Pesticide Container Management in the Pacific. GHD received an Award of Merit, Environmental from Engineering News-Record for the firm's involvement in the closure of the Puerto Rico Dump and the construction of Eloy S. Inos Peace Park. GHD Woodhead received the Harry Seidler Award for Commercial Architecture and a national award for sustainable architecture from the Australian Institute of Architects for the design of the Barwon Water HQ in Geelong, Australia. GHD has been appointed as the Fund Coordinator for the Australian Government’s Water for Women Fund, an initiative of the Australian Government to improve the health, gender equality and wellbeing of Asian and Pacific communities through socially-inclusive and sustainable Water, Sanitation and Hygiene (WASH) programs. Survey by GHD following COVID-19 In April 2021, GHD conducted a survey across the UK that revealed that 40% of people in the UK are considering moving to another location as a result of the COVID-19 pandemic. References Construction and civil engineering companies of Australia International engineering consulting firms Engineering consulting firms of Australia Construction and civil engineering companies established in 1928 Australian companies established in 1928 Privately held companies of Australia
GHD Group
[ "Engineering" ]
1,797
[ "Engineering consulting firms", "International engineering consulting firms" ]
14,482,131
https://en.wikipedia.org/wiki/Excavator%20controls
Excavator controls describes the ways in which a human operator controls the digging components (i.e. swing, boom, stick, bucket) of a piece of heavy machinery, such as a backhoe or an excavator. ISO controls The most commonly used control pattern throughout the world is the ISO control pattern. In the ISO control pattern, the left hand joystick controls Swing (left & right) and the Stick Boom (away & close), and the right hand joystick controls the Main Boom (up & down) and Bucket motions (close & dump). This control pattern is standardised in ISO 10968 and SAE J1177. Left hand left = Swing left. Left hand right = Swing right. Left hand forward = Stick Boom (Dipper) away. Left hand back = Stick Boom (Dipper) close. Right hand left = Bucket curl in (closed). Right hand right = Bucket curl out (dump). Right hand forward = Main Boom down. Right hand back = Main Boom up. SAE controls Besides ISO, the SAE control pattern is one of the most common control patterns in the United States. It differs from the ISO control pattern only in that the SAE controls exchange the hands that control the boom and the stick. This control pattern is standardized in SAE J1814. In the SAE control pattern, the left hand joystick controls Swing (left & right) and the Main Boom (up & down), and the right hand joystick controls the Stick Boom (away & close) and Bucket motions (close & dump). Left hand left = Swing left. Left hand right = Swing right. Left hand forward = Main Boom down. Left hand back = Main Boom up. Right hand left = Bucket curl in (closed). Right hand right = Bucket curl out (dump). Right hand forward = Stick Boom (Dipper) away. Right hand back = Stick Boom (Dipper) close. Some excavators allow the operator to switch between the ISO and SAE operating modes. See also Society of Automotive Engineers External links ISO 10968:2020 - Earth-moving machinery Operator's controls SAE J1177 Hydraulic Excavator Operator Controls SAE J1814 Operator Controls - Off-Road Machines http://www.digbits.co.uk/technical.htm https://mobile-automation.eu/products/excavator-control Excavator bucket grade control system Engineering vehicles Standards Construction standards
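Because the two patterns differ only in which hand drives the main boom and the stick, the mapping can be captured in a small lookup table. The sketch below is illustrative only; the axis and function names are hypothetical conventions, not part of either standard.

```python
# Illustrative encoding of the ISO and SAE excavator control patterns described above.
ISO_PATTERN = {
    ("left", "x"): "swing",        # left/right -> swing left/right
    ("left", "y"): "stick_boom",   # forward/back -> dipper away/close
    ("right", "x"): "bucket",      # left/right -> bucket curl in/dump
    ("right", "y"): "main_boom",   # forward/back -> main boom down/up
}

# SAE swaps the hands that control the main boom and the stick (dipper).
SAE_PATTERN = {
    ("left", "x"): "swing",
    ("left", "y"): "main_boom",
    ("right", "x"): "bucket",
    ("right", "y"): "stick_boom",
}

def function_for(pattern, hand, axis):
    """Return the machine function driven by a given joystick hand and axis."""
    return pattern[(hand, axis)]

print(function_for(ISO_PATTERN, "left", "y"))  # stick_boom
print(function_for(SAE_PATTERN, "left", "y"))  # main_boom
```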
Excavator controls
[ "Engineering" ]
517
[ "Construction", "Engineering vehicles", "Construction standards" ]
14,482,403
https://en.wikipedia.org/wiki/Johan%20Sebastiaan%20Ploem
Johan Sebastiaan Ploem (born 25 August 1927) is a Dutch microscopist and digital artist. He made significant contributions to the field of fluorescence microscopy, and invented reflection interference contrast microscopy. Early life and education Ploem was born on 25 August 1927 in Sawahlunto in West Sumatra, then part of the Dutch East Indies. When he was two years old his family moved to the Netherlands, where he remained for the rest of his youth. Ploem received an MD at the University of Utrecht in 1962. He also worked as an intern in the Broussais Hospital in Paris with Louis Pasteur Vallery-Radot. In 1963, Ploem was elected a Fulbright Fellow for study at the Harvard University School of Public Health, where he received an MPH cum laude in 1964. He obtained a Ph.D. in 1967 from the University of Amsterdam with a thesis titled Enkele methoden voor toxiciteitsonderzoek met behulp van weefselkweekcellen. Scientific career Ploem has been employed by a number of academic institutions, including the University of Miami, Harvard University, the University of Amsterdam and the University of Leiden. At Leiden he served as a professor at the Faculty of Medicine in the Department of Cytochemistry and Cytometry from 1980 to 1992, after which he became a Professor Emeritus there. Ploem has also served as visiting lecturer or professor at various universities, including the University of Dundee, University of Florida, Monash University, University of Beijing and Free University of Brussels. Fluorescence microscopy Ploem is best known for inventing the epi-illumination cube used in fluorescence microscopy. Around 1962 Ploem started work in collaboration with Schott on the development of dichroic beam splitters for reflection of blue and green light for fluorescence microscopy using epi illumination. At the time of his first publication on fluorescence microscopy using epi illumination with narrow-band blue and green light, he was not aware of the development of a dichroic beam splitter for UV excitation with incident light by Brumberg and Krylova. Ploem's prototype fluorescence epi-illuminators and microscopes form a part of the permanent exposition of the Dutch National Museum for the history of Science and Medicine. Reflection contrast microscopy In 1973, at the Second Conference on Mononuclear Phagocytes held in Leiden, Ploem introduced an improvement to Interference Reflection Microscopy (IRM), which he called Reflection Contrast Microscopy (RCM). He also wrote a book chapter in the associated conference proceedings edited by Ralph van Furth. The improvement is the addition of crossed polarizers and a so-called "anti-flex objective", the combination of which further reduces stray light in an IRM microscope, allowing even better interference contrast. RCM is more commonly known as Reflection Interference Contrast Microscopy (RICM) today. Honors and awards Various awards and honors have been bestowed on Ploem: Fellow of the Papanicolaou Cancer Research Institute, Miami, Florida, 1977 Fellowship to the Institute for Cell Analysis at the University of Miami, Florida, 1979 C. E.
Alken Foundation award, co-recipient, Switzerland, 1982 Ernst Abbe Medal and Award of the New York Microscopical Society, 1998 Erica Wachtel medal from the British Society for Clinical Cytology, 1993 (held the Erica Wachtel Medal Lecture) The first Honorary member of the International Society for Analytical Cytology, 1993 Honorary fellow of the Royal Microscopical Society (FRMS), 1976, Oxford Honorary fellow of the New York Microscopical Society Honorary fellow of the Polish Society of Surgery Honorary fellow of the German Society of Surgery In 1994, the European Society for Analytical Cellular Pathology established a Conference Keynote "Ploem" Lecture for invited scientists at its future general meetings The International Society of Analytical Cytology invited Professor Ploem to present its inaugural "Robert Hooke" lecture. In 1995, he was invited by the Royal Microscopical Society to give the inaugural CYTO lecture. Digital painting Ploem started painting as a small boy and was educated in drawing and painting in Maastricht, the Netherlands. While still at secondary school he attended an evening course in drawing and painting at the Kunstnijverheidsschool Maastricht. Ploem's presence in Paris was important for his knowledge and interest in art since he could regularly visit his cousins in Paris, the painter Frits Klein and his son Yves Klein. He visited the Kleins when Yves was making his first monochromes. In the last years of his activities at the faculty of medicine at Leiden University, he concentrated on research in image analysis. He was asked to participate in a European project with the aim of automating cancer cell recognition using computer analysis. It concerned a collaborative project with the German optical company Leitz/Leica Microsystems, and the Institute for Mathematical Morphology in Fontainebleau, France. Together with a team, Professor Jean Serra at this institute had developed an image analysis method, now internationally known as mathematical morphology. With his experience as an analogue painter, Ploem saw the possibility of also applying the methods of mathematical morphology to the creation of digital art. At the International Symposium on Mathematical Morphology in Amsterdam (1998), Ploem presented a paper on the creation of computer graphics with Mathematical Morphology, using for the first time, the transforming algorithms from the Fontainebleau group for the creation of digital art. He wrote about it in the chapter of a book () published on that occasion. His first digital graphics of nature scenes were shown in his exposition at a regional art centre in the Pyrenees (Ossega, June 1997). He was invited for a symposium on Art et Science at the University of Caen, France (April 2001). At the art exposition connected with this symposium, he presented 6 digital graphics that were dominated by chaotic transformations of rock art themes. A similar invitation was made by the University of Basel in Switzerland (April 2002). References External links Website Leiden Professors 1927 births Living people Microscopists Dutch digital artists Academic staff of Leiden University People from Heerlen Fellows of the Royal Microscopical Society University of Amsterdam alumni Utrecht University alumni Harvard T.H. Chan School of Public Health alumni Dutch people of the Dutch East Indies
Johan Sebastiaan Ploem
[ "Chemistry" ]
1,295
[ "Microscopists", "Microscopy" ]
14,482,581
https://en.wikipedia.org/wiki/Long%20number
A long number (e.g. +44 7624 800555 in international notation or 07624 800555 in UK national notation), also known as a virtual mobile number (VMN), dedicated phone number (MSISDN) or long code, is a reception mechanism used by businesses to receive SMS messages and voice calls. As well as being internationally available, long numbers enable businesses to have their own number, rather than short codes, which are generally shared across many brands. Long numbers allow a wide range of industries, such as wireless application service providers, mobile virtual network operators, mobile virtual network enablers, SMS aggregators, e-sellers, advertising agencies, media channels and mobile infrastructure providers, to generate large amounts of mobile-originated SMS from subscribers. Long numbers vs. short codes Both long numbers and short codes have their advantages and disadvantages. International accessibility is useful for global organizations that wish to run international campaigns. Limited to national borders, short codes have to be activated in each country where the campaign will take place, which might be expensive and time-consuming. For long-term campaigns or any other assignment, long numbers are also a good solution, as the number can be assigned exclusively for a long term. Long numbers can be obtained directly from an SMS provider with SS7 access, which is the most direct route to SMS reception. Alternatively, long numbers can be obtained from SMS aggregators or SMS providers. To have access to a short code, service providers must enter a bilateral agreement with the mobile network operator that actually owns the number. This process can take time, and potentially cause delays in implementing campaigns. Alternatively, service providers can rent short codes from aggregators, creating another middleman in the value chain. Premium messaging services are not possible on long numbers; those require short codes and operator agreements. Application of long numbers Competitions and voting initiated by TV and radio shows Product feedback, campaigns and promotions (the number can be printed on a product package) Globally available number for international companies and events Reception of SMS for companies wishing to interact with consumers Reply path to online tools, software packages, etc. 2-way communication with service engineers, sales forces and suppliers Reception of SMS to be forwarded to computer or user account SMS-to-email applications SMS multi-party chat services Feedback SMS for mass mailings or promotional activities References Telephone numbers
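The two notations in the example above differ only in the leading trunk prefix ('0') versus the country code ('+44'). A minimal conversion sketch is shown below; the function name is hypothetical and no numbering-plan validation is attempted.

```python
import re

def uk_national_to_international(number: str) -> str:
    """Convert a UK national-notation number (leading '0') to international
    notation with the +44 country code. Illustrative sketch only."""
    digits = re.sub(r"\D", "", number)   # keep digits only
    if digits.startswith("0"):
        digits = "44" + digits[1:]       # replace trunk prefix with country code
    return "+" + digits

print(uk_national_to_international("07624 800555"))  # -> "+447624800555"
```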
Long number
[ "Mathematics" ]
485
[ "Mathematical objects", "Numbers", "Telephone numbers" ]
14,482,695
https://en.wikipedia.org/wiki/Drush
Drush (DRUpal SHell) is a computer software shell-based application used to control, manipulate, and administer Drupal websites. Details Drush was originally developed by Arto Bendiken for Drupal 4.7. In May 2007, it was partly rewritten and redesigned for Drupal 5 by Franz Heinzmann. Drush is maintained by Moshe Weitzman with the support of Owen Barton, greg.1.anderson, jonhattan, Mark Sonnabaum, Jonathan Hedstrom and Christopher Gervais. External links References Web applications Free software programmed in PHP Free and open-source software Software using the GNU General Public License Command-line software
Drush
[ "Technology" ]
140
[ "Command-line software", "Computing commands" ]
14,482,760
https://en.wikipedia.org/wiki/Loewe%20%28electronics%29
Loewe Technology GmbH, doing business as Loewe, is a German company that develops, designs, manufactures, and sells consumer electronics and electromechanical products and systems. The company was founded in Berlin in 1923 by brothers Siegmund and David L. Loewe. Since 1948, the company has based its headquarters and production facilities in the Bavarian town of Kronach, Upper Franconia. History The company was started in 1923 in Berlin, when Siegmund Loewe and his brother David Ludwig Loewe established a radio manufacturing company named Radiofrequenz GmbH. Siegmund Loewe belonged to a circle which promoted public broadcasting in Germany and did his best to initiate what later became known as the radio boom. His work with the young physicist Manfred von Ardenne in 1926 led to the development of the Loewe 3NF, an early attempt to combine several functions into one electronic device, similar to the modern integrated circuit. It was the basis for the broadcast receiver OE 333 that Loewe produced in his factory from 1926 on, of which, for the first time in Germany, several hundred thousand sets were sold. Television development began at Loewe in 1929. The company worked together with British television pioneer John Logie Baird. In 1931, Manfred von Ardenne presented the world's first fully electronic television to the public on the Loewe stand at the 8th Berlin Radio Show. The New York Times reported on the invention on its front page. Between 1930 and 1935, Loewe registered the most television patents worldwide. When Adolf Hitler came to power in Germany, Siegmund Loewe was forced to emigrate to the US in 1938, where he developed a friendship with Albert Einstein. From 1939 Loewe mainly produced radio technology for the German Luftwaffe and in 1940 came into the possession of the Reich Aviation Ministry. In 1949, Siegmund Loewe regained possession of the company's property and took over as chairman of the supervisory board. In the 1950s, Loewe began producing the Optaphon, an early cassette tape recorder with an auto-reverse function, but it was not a commercial success. In contrast, the start of radio and television production at the current site in Kronach was very successful, and Loewe was able to increase its turnover from 10 to 169 million Deutsche Mark between 1949 and 1960. In 1961, Loewe launched the Optacord 500, the first European video recorder for professional use. In 1962, the family company tradition ended with the death of Siegmund Loewe. Subsidiaries of the Philips group took over the majority of shares. Under this management, which continued until 1985, the company specialised increasingly in the development and production of televisions. In 1963, Loewe launched the Optaport, a portable television. For the first time, it had a 25 cm-wide screen and a built-in FM radio. The first Loewe colour televisions were launched along with the introduction of colour television in Germany in 1967. In 1979, Loewe began production of the fully integrated chassis television, which secured the future of the company. In February 1981, Loewe presented Europe's first stereo sound television to the press. In 1985, a management buyout (MBO) made Loewe independent again after Philips sold its shares. A new automotive electronics division was successfully launched in cooperation with BMW. In 1991, the Japanese group Matsushita (Panasonic) acquired a stake in Loewe and also took over the BMW share in 1993; however, Matsushita sold its shares in 1997. The company subsequently went public.
Also in 1985, Loewe designer Heinz Jünger created the Art 1 television, laying the foundation for Loewe's rise to become an internationally renowned premium brand with a clear design strategy. While Loewe had previously repeatedly attracted attention with its independent product designs (e.g. Opta 537, Palette, line 2001, Loewe MCS), it was only now that it developed its own profile. With the success of the Art 1 behind it, a separate corporate design was developed and the Loewe design department was systematically transformed into a design management department. Numerous well-known designers such as Hubertus Carl Frey alias hace, who designed the Loewe brand, as well as industrial design agencies such as Phoenix, Neumeister and Design3 worked as external designers for Loewe during these years. 1998 marked two more milestones in the company history: the launch of the Xelos @Media, a television with internet access, and that of the Spheros, the first Loewe flat-screen television. In the following year, Loewe AG had its IPO, led by Rainer Hecker (CEO) and Burkhard Bamberger (CFO). By 2002, Loewe had established a market position as a supplier of high-quality and design-oriented CRT televisions. However, due to the triumph of flat-screen televisions, Loewe's sales of picture tube televisions in the premium segment collapsed in 2003. Loewe responded with a reorganization program, switched the television range completely to flat screens and revitalized the brand by pursuing an even stricter premium course. In 2008, Loewe was honored with the German Brand Award 2008 in the Best Brand Relaunch category because the company had mastered the brand crisis with a consistent premium strategy and achieved a turnaround. Following financial hardships, in July 2013 the company filed for bankruptcy protection, but on 1 October 2013, the Loewe Group entered into a self-administration process. In March 2014, major assets from Loewe AG were taken by the Munich-based investor Stargate Capital GmbH. In December 2019, Skytec Group Ltd took 100% ownership of the brand, creating Loewe Technology GmbH and associated subsidiaries. In 2021, Loewe acquired 65,000 m² of land and buildings from the town of Kronach to secure its location for the long term future. The plans consider step-by-step renovation of the complete area with erection of new office and administrative facilities. During 2021, Loewe introduced a new sub-brand: We.by.Loewe. French football star Kylian Mbappe bought a stake in Loewe on 30 September 2024, which the company expects will lead to an increase in sales and possibly an IPO. Bibliography 75 Jahre Loewe (1923–1998). Und die Zukunft geht weiter, author's edition 1998 Oskar Blumtritt: The flying spot scanner, Manfred von Ardenne and the telecinema, in: Presenting Pictures. NMSI Trading Ltd, Science Museum, London 2004. p. 84-115. Frank Keuper, Jürgen Kindervater, Heiko Dertinger, Andreas Heim (Ed.): Das Diktat der Markenführung. 11 Thesen zur nachhaltigen Markenführung und -implementierung. Mit einem umfassenden Fallbeispiel der Loewe AG, Gabler Fachverlage, Wiesbaden 2009, Speidel, Markus: Netzwerke, Kooperationen und Management-Buy-Out. Die Geschichte des Unternehmens Loewe zwischen 1962 und 1985 (in German). Klartext Verlag, Essen 2012, Kilian Steiner: Ortsempfänger, Volksfernseher und Optaphon. Die Entwicklung der deutschen Radio- und Fernsehindustrie und das Unternehmen Loewe 1923–1962. Klartext Verlag, Essen 2005, Kilian Steiner: Loewe. 100 Jahre Designgeschichte. Loewe. 
100 years design history (in German and English). Stuttgart: avedition, Stuttgart 2023 References External links German brands Electronics companies of Germany Companies based in Bavaria Electronics companies established in 1923 1923 establishments in Germany Companies acquired from Jews under Nazi rule Radio manufacturers
Loewe (electronics)
[ "Engineering" ]
1,662
[ "Radio electronics", "Radio manufacturers" ]
14,483,033
https://en.wikipedia.org/wiki/Aurora%20%28protocol%29
The Aurora Protocol is a link layer communications protocol for use on point-to-point serial links. Developed by Xilinx, it is intended for use in high-speed (gigabits/second and more) connections internally in a computer or in an embedded system. It uses either 8b/10b encoding or 64b/66b encoding. External links Official Document (8b/10b) Official Document (64b/66b) Serial buses Link protocols
Aurora (protocol)
[ "Technology" ]
98
[ "Computing stubs", "Computer network stubs" ]
14,483,315
https://en.wikipedia.org/wiki/Bonnor%E2%80%93Ebert%20mass
In astrophysics, the Bonnor–Ebert mass is the largest mass that an isothermal gas sphere embedded in a pressurized medium can have while still remaining in hydrostatic equilibrium. Clouds of gas with masses greater than the Bonnor–Ebert mass must inevitably undergo gravitational collapse to form much smaller and denser objects. As the gravitational collapse of an interstellar gas cloud is the first stage in the formation of a protostar, the Bonnor–Ebert mass is an important quantity in the study of star formation. For a gas cloud embedded in a medium with a gas pressure $P_0$, the Bonnor–Ebert mass is given by $$M_{BE} = \frac{c_{BE}\, c_s^4}{P_0^{1/2}\, G^{3/2}},$$ where $G$ is the gravitational constant and $c_s$ is the isothermal sound speed ($c_s = \sqrt{k_B T / \mu}$) with $\mu$ as the molecular mass. $c_{BE}$ is a dimensionless constant which varies based on the density distribution of the cloud, taking different values for a uniform mass density and for a centrally peaked density. See also Jeans mass References Interstellar media Equations of astronomy
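As a rough numerical sketch (not part of the original article), the expression above can be evaluated directly. The coefficient, mean molecular mass, and cloud parameters below are assumptions chosen only for the example; in particular, 1.18 is the coefficient commonly quoted for a critically stable, centrally concentrated isothermal sphere.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23    # Boltzmann constant, J K^-1
M_SUN = 1.989e30   # solar mass, kg

def bonnor_ebert_mass(T, P0, mu=3.9e-27, c_BE=1.18):
    """Bonnor-Ebert mass M_BE = c_BE * c_s**4 / (sqrt(P0) * G**1.5).

    T    : gas temperature in K
    P0   : external gas pressure in Pa
    mu   : mean molecular mass in kg (default ~2.33 amu, an assumed typical value)
    c_BE : dimensionless coefficient (assumed value for a centrally peaked sphere)
    """
    c_s = math.sqrt(K_B * T / mu)   # isothermal sound speed
    return c_BE * c_s ** 4 / (math.sqrt(P0) * G ** 1.5)

# Illustrative numbers only: a 10 K cloud bounded by an external pressure of ~1.4e-13 Pa.
print(bonnor_ebert_mass(10.0, 1.4e-13) / M_SUN, "solar masses")
```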
Bonnor–Ebert mass
[ "Physics", "Astronomy" ]
186
[ "Interstellar media", "Outer space", "Plasma physics", "Concepts in astronomy", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Plasma physics stubs", "Equations of astronomy", "Outer space stubs" ]
14,483,784
https://en.wikipedia.org/wiki/Eagle%20Cap%20Wilderness
Eagle Cap Wilderness is a wilderness area located in the Wallowa Mountains of northeastern Oregon (United States), within the Wallowa–Whitman National Forest. The wilderness was established in 1940. In 1964, it was included in the National Wilderness Preservation System. A boundary revision in 1972 added and the Wilderness Act of 1964 added resulting in its current size of , making Eagle Cap by far Oregon's largest wilderness area. Eagle Cap Wilderness is named after a peak in the Wallowa Mountains, which were once called the Eagle Mountains. At Eagle Cap was incorrectly thought to be the highest peak in the range. Topography The Eagle Cap Wilderness is characterized by high alpine lakes and meadows, bare granite peaks and ridges, and U-shaped glacial valleys. Thick timber is found in the lower valleys and scattered alpine timber on the upper slopes. Elevations in the wilderness range from approximately in lower valleys to at the summit of Sacajawea Peak with 30 other summits exceeding . The wilderness is home to Legore Lake, the highest true lake in Oregon at , as well as more than 60 named alpine lakes and tarns (12 of which are above 8,000 feet), and more than of streams. History The Eagle Cap Wilderness and surrounding country in the Wallowa–Whitman National Forest was first occupied by the ancestors of the Nez Perce Indian tribe around 1400 AD, and later by the Cayuse, the Shoshone, and Bannocks. The wilderness was used as hunting grounds for bighorn sheep and deer and to gather huckleberries. It was the summer home to the Joseph Band of the Nez Perce tribe. 1860 marked the year the first settlers moved into the Wallowa Valley. In 1930, the Eagle Cap was established as a primitive area and in 1940 earned wilderness designation. Wildlife Eagle Cap Wilderness is home to a variety of wildlife, including black bears, cougars, Rocky Mountain bighorn sheep, and mountain goats. In the summer white-tailed deer, mule deer, and Rocky Mountain elk roam the wilderness. Smaller mammals that inhabit the area year-round include the pika, pine martens, badgers, squirrels, and marmots. Birds include peregrine falcons, bald eagles, golden eagles, ferruginous hawks, and gray-crowned rosy finch. Trout can be found in many of the lakes and streams in the wilderness. The Oregon State record golden trout was caught in the wilderness in 1987, by Douglas White. The lake where it was caught was not named. Moose have recently returned to the wilderness; the herd now numbers about 40. There is possible evidence that grizzly bears and wolverines are returning as well. Sheep and cattle graze throughout Eagle Cap Wilderness, especially the surroundings of Mount Nebo. Shortly after World War II with the impact of the wool industry, the number of sheep nearly disappeared in the Eagle Cap Wilderness, while at the beginning of the 1900, their numbers exceeded the carrying capacity of the wilderness. Wolves Wolves have returned to Eagle Cap Wilderness with no reported encounters with humans, although some losses of sheep and cattle have been attributed to wolves in the area. In 2012, a trail-cam recorded a female black wolf. Tracking of the wolf revealed at least three total wolves in an area east of Minam River. Further surveys by the end of 2012 showed a count of at least seven wolves in a pack within the Upper Minam River area. The Oregon Department of Fish and Wildlife reported in 2013 a total of six known packs with 46 total wolves. All animals belonged to the same pack and are designated Minam Pack. 
The first grey wolf trapped and radio-collared by the ODFW was a female, and it marked the twentieth radio-collared wolf in Oregon. Another radio-collared female dispersed from the Minam Pack and was found traveling with a male wolf within the Minam area and into the Keating Unit. Through 2019 the Minam Pack produced litters annually within the Eagle Cap Wilderness. One of the females from the Minam Pack formed a pair bond in 2014 with a male member of the Snake River Pack forming a new pack within the Eagle Cap Wilderness, designated the Catherine Pack. The adult female was found deceased in 2019 although the pack remained classified as a breeding pack through 2019. Vegetation Plant communities in the Eagle Cap Wilderness range from low elevation grasslands and ponderosa pine forest to alpine meadows. Engelmann spruce, larch, mountain hemlock, sub-alpine fir, and whitebark pine can be found in the higher elevations. Varieties of Indian paintbrush, sego lilies, elephanthead, larkspur, shooting star, and bluebells are abundant in the meadows. The wilderness does contain some small groves of old growth forest. Recreation As Oregon's largest wilderness area, Eagle Cap offers many recreational activities, including hiking, backpacking, horseback riding, hunting, fishing, camping, and wildlife watching. Winter brings backcountry skiing and snowshoeing opportunities. Several Alpine Huts and campsites are located throughout the McCully Basin, which are used as a base camp in the winter for telemark skiing. There are 47 trailheads and approximately of trails in Eagle Cap, accessible from Wallowa, Union, and Baker Counties, and leading to all areas of the wilderness. Wild and Scenic Rivers Four designated Wild and Scenic Rivers originate in Eagle Cap Wilderness—the Lostine, Eagle Creek, Minam, and Imnaha. Lostine River of the Lostine from its headwaters in the wilderness to the Wallowa–Whitman National Forest boundary are designated Wild and Scenic. Established in 1988, of the river are designated "wild" and are designated "recreational." A small portion of the river is on private property. Eagle Creek of Eagle Creek from its output at Eagle Lake in the wilderness to the Wallowa–Whitman National Forest boundary at Skull Creek are designated Wild and Scenic. In 1988, of the river were designated "wild," are designated "scenic," and are designated "recreational." Minam of the Minam River from its headwaters at the south end of Minam Lake to the wilderness boundary, one-half mile downstream from Cougar Creek, are designated Wild and Scenic. In 1988, all were designated "wild." Imnaha of the Imnaha River from its headwaters are designated Wild and Scenic. The designation comprises the main stem from the confluence of the North and South Forks of the Imnaha River to its mouth, and the South Fork from its headwaters to the confluence with the main stem. In 1988, were designated "wild," were designated "scenic," and were designated "recreational," though only a portion of the Wild and Scenic Imnaha is located within Eagle Cap Wilderness. Lakes See also List of U.S.
Wilderness Areas List of old growth forests References External links Eagle Cap Wilderness - Wallowa–Whitman National Forest Eagle Cap Wilderness - Wilderness.net EagleCapWilderness.com Eagle Cap Wilderness - JosephOregon.com Protected areas of Baker County, Oregon IUCN Category Ib Protected areas of Union County, Oregon Protected areas of Wallowa County, Oregon Wilderness areas of Oregon Old-growth forests 1940 establishments in Oregon
Eagle Cap Wilderness
[ "Biology" ]
1,453
[ "Old-growth forests", "Ecosystems" ]
14,484,306
https://en.wikipedia.org/wiki/Proof%20mining
In proof theory, a branch of mathematical logic, proof mining (or proof unwinding) is a research program that studies or analyzes formalized proofs, especially in analysis, to obtain explicit bounds, ranges or rates of convergence from proofs that, when expressed in natural language, appear to be nonconstructive. This research has led to improved results in analysis obtained from the analysis of classical proofs. References Further reading Ulrich Kohlenbach and Paulo Oliva, "Proof Mining: A systematic way of analysing proofs in mathematics", Proc. Steklov Inst. Math, 242:136–164, 2003 Paulo Oliva, "Proof Mining in Subsystems of Analysis", BRICS PhD thesis citeseer Proof theory
Proof mining
[ "Mathematics" ]
156
[ "Mathematical logic stubs", "Mathematical logic", "Proof theory" ]
14,484,389
https://en.wikipedia.org/wiki/Network%20equipment%20provider
Network equipment providers (NEPs) – sometimes called telecommunications equipment manufacturers (TEMs) – sell products and services to communication service providers such as fixed or mobile operators as well as to enterprise customers. NEP technology allows for calls on mobile phones, Internet surfing, joining conference calls, or watching video on demand through IPTV (internet protocol TV). The history of the NEPs goes back to the mid-19th century when the first telegraph networks were set up. Some of these players still exist today. Telecommunications equipment manufacturers The terminology of the traditional telecommunications industry has rapidly evolved during the Information Age. The terms "Network" and "Telecoms" are often used interchangeably. The same is true for "provider" and "manufacturer". Historically, NEPs have sold integrated hardware/software systems to carriers such as NTT-DoCoMo, AT&T, Sprint, and so on. They purchase hardware from TEMs (telecom equipment manufacturers), such as Vertiv, Kontron, and NEC, to name a few. TEMs are responsible for manufacturing the hardware, devices, and equipment the telecommunications industry requires. The distinction between NEP and TEM is sometimes blurred, because all the following phrases may imply NEP: Telecommunications equipment provider Telecommunications equipment industry Telecommunications equipment company Telecommunications equipment manufacturer (TEM) Telecommunications equipment technology Network equipment provider (NEP) Network equipment industry Network equipment companies Network equipment manufacturer Network equipment technology Services This is a highly competitive industry that includes telephone, cable, and data services segments. Products and services include: Mobile networks like GSM (Global System for Mobile Communications), Enhanced Data Rates for GSM Evolution (EDGE) or GPRS (General Packet Radio Service). Networks of this kind are typically also known as 2G and 2.5G networks. The 3G mobile networks are based on UMTS (Universal Mobile Telecommunications System), which allows much higher data rates than 2G or 2.5G. Fixed networks which are typically based on PSTN (Public Switched Telephone Network). Enterprise networks, like Unified Communication infrastructure Internet infrastructures, like routers and switches Companies Some providers in each customer segment are: Majority of revenues from service providers: Alcatel-Lucent Ericsson Huawei Samsung TP-Link D-Link Juniper Networks NEC Nokia Networks Ciena ZTE Majority of revenues from enterprise customers: Avaya Cisco Motorola Unify The NEPs have recently undergone significant consolidation and M&A activity, for example, the joint venture of Nokia and Siemens (Nokia Siemens Networks), the acquisition of Marconi by Ericsson, the merger between Alcatel and Lucent, and numerous acquisitions by Cisco. A look at the financial performance of these players according to the segment they serve creates a diverse picture. Power balance in the NEP ecosystem NEPs face high pressure from old & new rivals and a stronger, more consolidated customer base. Threat of New entrants: The growing importance of software applications has led to the entry of new players like system integrators and other ISVs. (For some NEPs, SIs are being considered as competitors for selected network services, i.e.
application, services, and control layers of the network) In the area of managed and hosted services, NEPs are likely to face competition from new players like Google due to lower entry barriers Bargaining Power of Suppliers: Increasing standardization and commoditization of network components leads to more competition among component suppliers, thus lowering their bargaining position. Overcapacities have led to lower bargaining power of semiconductor suppliers As more standardized network components are expected to be used for NGNs, a shift in the current supplier structure may balance the bargaining between suppliers and NEPs Bargaining Power of Buyers: Consolidation among communication service providers due to convergence leads to greater dependence on a few large clients, which means higher bargaining strength of customers Due to pressures on their profitability, service providers are increasingly looking at lowering their operating costs and capital expenditures (lowering cost per subscriber), and this is putting pressure on NEPs' margins. Enterprises increasingly demand end-to-end solutions through a single vendor for their Unified Communication needs Threat of Substitution: Switch from PSTN to Next-Generation Network Increasing use of standardized network components (COTS) compared to more proprietary equipment Software to increasingly replace traditional network components Open Source Age The SCOPE Alliance was a non-profit and influential network equipment provider (NEP) industry group aimed at standardizing "carrier-grade" systems for telecom in the Information Age, and it was successful in accelerating the NEP transformation towards carrier-grade open-source hardware, OS, middleware, virtualization, and cloud. NFV, SDN, 5G, Cloud transformation Age From 2010 onwards, telecom carriers (NEP customers) wanted direct involvement in driving transformation. The NEP-only SCOPE Alliance was retired, as the industry combined forces on Service Availability, ETSI Network function virtualization standardization, Software-defined networking adoption, and 5G network slicing initiatives. References External links IBM study related to the NEP industry Computer networking
Network equipment provider
[ "Technology", "Engineering" ]
1,018
[ "Computer networking", "Computer science", "Computer engineering" ]
14,484,954
https://en.wikipedia.org/wiki/PC%20System%20Design%20Guide
The PC System Design Guide (also known as the PC-97, PC-98, PC-99, or PC 2001 specification) is a series of hardware design requirements and recommendations for IBM PC compatible personal computers, compiled by Microsoft and Intel Corporation during 1997–2001. They were aimed at helping manufacturers provide hardware that made the best use of the capabilities of the Microsoft Windows operating system, and to simplify setup and use of such computers. Every part of a standard computer and the most common kinds of peripheral devices are defined with specific requirements. Systems and devices that meet the specification should be automatically recognized and configured by the operating system. Versions Four versions of the PC System Design Guide were released. In PC-97, a distinction was made between the requirements of a Basic PC, a Workstation PC and an Entertainment PC. In PC-98, the Mobile PC was added as a category. In PC 2001, the Entertainment PC was dropped. PC-97 Required: 120 MHz Pentium, MIPS R4x00, Digital Alpha 21064 (EV4) or IBM PowerPC architecture (latter three only under Windows NT) 16 MB RAM Initial version. Introduced color code for PS/2 keyboard (purple) and PS/2 mouse (green) connectors PC-98 (Not to be confused with NEC's incompatible PC-98 series) Aimed at systems to be used with Windows 98 or Windows 2000. Required: 200 MHz Pentium processor with MMX technology (or equivalent performance) 256 KB L2 cache 32 MB RAM (recommended: 64 MB of 66 MHz DRAM) ACPI 1.0 (including power button behavior) Fast BIOS power-up (limited RAM test, no floppy test, minimal startup display, etc.) BIOS Y2K compliance PXE preboot environment It was published as . PC-99 Required: 300 MHz CPU 64 MB RAM USB Comprehensive color-coding scheme for ports and connectors (see below) Strongly discouraged: Non plug-and-play hardware ISA slots It was published as . PC 2001 Required: 667 MHz CPU 64 MB RAM Final version. First to require IO-APICs to be enabled on all desktop systems. Places a greatly increased emphasis on legacy-reduced and legacy-free systems. Some "legacy" items such as ISA expansion slots and device dependence on MS-DOS are forbidden entirely, while others are merely strongly discouraged. PC 2001 removes compatibility for the A20 line: "If A20M# generation logic is still present in the system, this logic must be terminated such that software writes to I/O port 92, bit 1, do not result in A20M# being asserted to the processor." Color-coding scheme for connectors and ports Perhaps the most end-user visible and lasting impact of PC 99 was that it introduced a color code for the various standard types of plugs and connectors used on PCs. As many of the connectors look very similar, particularly to a novice PC user, this made it far easier for people to connect peripherals to the correct ports on a PC. This color code was gradually adopted by almost all PC and motherboard manufacturers. Some of the color codes have also been widely adopted by peripheral manufacturers. See also ATX Legacy-free PC Multimedia PC Sound card IBM PC–compatible PoweredUSB (proprietary high-power USB extension using other color-coded ports) References External links Legacy PC Design Guides – Microsoft Download Center PDF versions: PC-97 System Design Guide PC-98 System Design Guide PC-99 System Design Guide PC 2001 System Design Guide Color codes Computer standards IBM PC compatibles
PC System Design Guide
[ "Technology" ]
732
[ "Computer standards" ]
14,485,522
https://en.wikipedia.org/wiki/NGC%2087
NGC 87 is a diffuse, highly disorganized barred irregular galaxy, part of Robert's Quartet, a group of four interacting galaxies. One supernova has been observed in NGC 87: SN 1994Z (type II, mag. 14.6) was discovered by Alexander Wassilieff on 2 October 1994. See also Robert's Quartet List of NGC objects (1–1000) References External links NGC 87 http://www.astro.pef.zcu.cz/ Barred irregular galaxies Phoenix (constellation) Robert's Quartet
NGC 87
[ "Astronomy" ]
127
[ "Phoenix (constellation)", "Constellations" ]
14,485,655
https://en.wikipedia.org/wiki/List%20of%20members%20of%20the%20National%20Academy%20of%20Sciences%20%28Computer%20and%20information%20sciences%29
Computer and information sciences National Academy of Sciences (Computer and information sciences) Lists of computer scientists
List of members of the National Academy of Sciences (Computer and information sciences)
[ "Technology" ]
20
[ "Computing-related lists", "Lists of computer scientists" ]
14,485,830
https://en.wikipedia.org/wiki/List%20of%20members%20of%20the%20National%20Academy%20of%20Sciences%20%28Human%20environmental%20sciences%29
Human environmental sciences
List of members of the National Academy of Sciences (Human environmental sciences)
[ "Environmental_science" ]
6
[ "American environmental scientists", "Environmental scientists" ]
14,485,857
https://en.wikipedia.org/wiki/Taft%20equation
The Taft equation is a linear free energy relationship (LFER) used in physical organic chemistry in the study of reaction mechanisms and in the development of quantitative structure–activity relationships for organic compounds. It was developed by Robert W. Taft in 1952 as a modification to the Hammett equation. While the Hammett equation accounts for how field, inductive, and resonance effects influence reaction rates, the Taft equation also describes the steric effects of a substituent. The Taft equation is written as: $$\log\left(\frac{k_s}{k_{CH_3}}\right) = \rho^*\sigma^* + \delta E_s,$$ where $\log(k_s/k_{CH_3})$ is the ratio of the rate of the substituted reaction compared to the reference reaction, ρ* is the sensitivity factor for the reaction to polar effects, σ* is the polar substituent constant that describes the field and inductive effects of the substituent, δ is the sensitivity factor for the reaction to steric effects, and Es is the steric substituent constant. Polar substituent constants, σ* Polar substituent constants describe the way a substituent will influence a reaction through polar (inductive, field, and resonance) effects. To determine σ* Taft studied the hydrolysis of methyl esters (RCOOMe). The use of ester hydrolysis rates to study polar effects was first suggested by Ingold in 1930. The hydrolysis of esters can occur through either acid- or base-catalyzed mechanisms, both of which proceed through a tetrahedral intermediate. In the base catalyzed mechanism the reactant goes from a neutral species to a negatively charged intermediate in the rate determining (slow) step, while in the acid catalyzed mechanism a positively charged reactant goes to a positively charged intermediate. Due to the similar tetrahedral intermediates, Taft proposed that under identical conditions any steric factors should be nearly the same for the two mechanisms and therefore would not influence the ratio of the rates. However, because of the difference in charge buildup in the rate determining steps it was proposed that polar effects would only influence the reaction rate of the base catalyzed reaction since a new charge was formed. He defined the polar substituent constant σ* as: $$\sigma^* = \frac{1}{2.48}\left[\log\left(\frac{k_s}{k_{CH_3}}\right)_B - \log\left(\frac{k_s}{k_{CH_3}}\right)_A\right],$$ where $\log(k_s/k_{CH_3})_B$ is the ratio of the rate of the base catalyzed reaction compared to the reference reaction, $\log(k_s/k_{CH_3})_A$ is the ratio of the rate of the acid catalyzed reaction compared to the reference reaction, and ρ* is a reaction constant that describes the sensitivity of the reaction series. For the definition reaction series, ρ* was set to 1 and R = methyl was defined as the reference reaction (σ* = zero). The factor of 1/2.48 is included to make σ* similar in magnitude to the Hammett σ values. Steric substituent constants, Es Although the acid catalyzed and base catalyzed hydrolysis of esters gives transition states for the rate determining steps that have differing charge densities, their structures differ only by two hydrogen atoms. Taft thus assumed that steric effects would influence both reaction mechanisms equally. Due to this, the steric substituent constant Es was determined solely from the acid catalyzed reaction, as this would not include polar effects. Es was defined as: $$E_s = \log\left(\frac{k_s}{k_{CH_3}}\right)_A,$$ where $k_s$ is the rate of the studied reaction and $k_{CH_3}$ is the rate of the reference reaction (R = methyl). δ is a reaction constant that describes the susceptibility of a reaction series to steric effects. For the definition reaction series δ was set to 1 and Es for the reference reaction was set to zero. This equation is combined with the equation for σ* to give the full Taft equation.
From comparing the Es values for methyl, ethyl, isopropyl, and tert-butyl, it is seen that the value increases with increasing steric bulk. However, because context will have an effect on steric interactions, some Es values can be larger or smaller than expected. For example, the value for phenyl is much larger than that for tert-butyl. When comparing these groups using another measure of steric bulk, axial strain values, the tert-butyl group is larger. Other steric parameters for LFERs In addition to Taft's steric parameter Es, other steric parameters that are independent of kinetic data have been defined. Charton has defined values v that are derived from van der Waals radii. Using molecular mechanics, Meyers has defined Va values that are derived from the volume of the portion of the substituent that is within 0.3 nm of the reaction center. Sensitivity factors Polar sensitivity factor, ρ* Similar to ρ values for Hammett plots, the polar sensitivity factor ρ* for Taft plots will describe the susceptibility of a reaction series to polar effects. When the steric effects of substituents do not significantly influence the reaction rate the Taft equation simplifies to a form of the Hammett equation: $$\log\left(\frac{k_s}{k_{CH_3}}\right) = \rho^*\sigma^*.$$ The polar sensitivity factor ρ* can be obtained by plotting the ratio of the measured reaction rates ($k_s$) compared to the reference reaction ($k_{CH_3}$) versus the σ* values for the substituents. This plot will give a straight line with a slope equal to ρ*. Similar to the Hammett ρ value: If ρ* > 1, the reaction accumulates negative charge in the transition state and is accelerated by electron withdrawing groups. If 1 > ρ* > 0, negative charge is built up and the reaction is mildly sensitive to polar effects. If ρ* = 0, the reaction is not influenced by polar effects. If 0 > ρ* > −1, positive charge is built up and the reaction is mildly sensitive to polar effects. If −1 > ρ*, the reaction accumulates positive charge and is accelerated by electron donating groups. Steric sensitivity factor, δ Similar to the polar sensitivity factor, the steric sensitivity factor δ for a new reaction series will describe to what magnitude the reaction rate is influenced by steric effects. When a reaction series is not significantly influenced by polar effects, the Taft equation reduces to: $$\log\left(\frac{k_s}{k_{CH_3}}\right) = \delta E_s.$$ A plot of the ratio of the rates versus the Es value for the substituent will give a straight line with a slope equal to δ. Similarly to the Hammett ρ value, the magnitude of δ will reflect to what extent a reaction is influenced by steric effects: A very steep slope will correspond to high steric sensitivity, while a shallow slope will correspond to little to no sensitivity. Since Es values are large and negative for bulkier substituents, it follows that: If δ is positive, increasing steric bulk decreases the reaction rate and steric effects are greater in the transition state. If δ is negative, increasing steric bulk increases the reaction rate and steric effects are lessened in the transition state. Reactions influenced by polar and steric effects When both steric and polar effects influence the reaction rate the Taft equation can be solved for both ρ* and δ through the use of standard least squares methods for determining a bivariate regression plane. Taft outlined the application of this method to solving the Taft equation in a 1957 paper.
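As an illustration of that bivariate fit (not from the source), ρ* and δ can be recovered by ordinary least squares on log(ks/kCH3) against σ* and Es. The substituent values below are placeholders invented for the example, not literature data.

```python
import numpy as np

# Placeholder substituent data: sigma*, Es, and measured log(ks/kCH3).
sigma_star = np.array([0.00, 0.49, 1.05, -0.19, -0.30])
Es         = np.array([0.00, -0.24, -0.90, -0.07, -1.54])
log_rel_k  = np.array([0.00, 0.75, 1.90, -0.35, 0.20])

# Design matrix for log(ks/kCH3) = rho* * sigma* + delta * Es (no intercept,
# since the reference substituent R = CH3 defines the zero point).
X = np.column_stack([sigma_star, Es])
(rho_star, delta), residuals, rank, _ = np.linalg.lstsq(X, log_rel_k, rcond=None)

print(f"rho* = {rho_star:.2f}, delta = {delta:.2f}")
```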
In a recent example, Sandri and co-workers have used Taft plots in studies of polar effects in the aminolysis of β-lactams. They have looked at the binding of β-lactams to a poly(ethyleneimine) polymer, which functions as a simple mimic for human serum albumin (HSA). The formation of a covalent bond between penicillins and HSA as a result of aminolysis with lysine residues is believed to be involved in penicillin allergies. As a part of their mechanistic studies Sandri and co-workers plotted the rate of aminolysis versus calculated σ* values for 6 penicillins and found no correlation, suggesting that the rate is influenced by other effects in addition to polar and steric effects. See also Free-energy relationship Hammett equation Quantitative structure–activity relationship References Physical organic chemistry Equations
Taft equation
[ "Chemistry", "Mathematics" ]
1,694
[ "Equations", "Mathematical objects", "Physical organic chemistry" ]
14,485,938
https://en.wikipedia.org/wiki/Transmodel
Transmodel, also known as Reference Data Model For Public Transport (EN 12896), is a European Standard for modelling and exchanging public transport information. It provides a standard data model and specialised data structures to uniformly represent common public transport concepts, facilitating the use of data in a wide variety of public transport information systems, including for timetabling, fares, operational management, real-time data, and journey planning. As of 2021, the current version of Transmodel is 6.0. Scope Transmodel provides a comprehensive conceptual model for public transport information systems, covering multiple subdomains including transport network infrastructure and topology, schedules, journey planning, fares, fare validation, real-time passenger information, and operational systems. Transmodel is an entity-relationship model in Unified Modeling Language (UML), accompanied by detailed descriptions of the concepts, elements and attributes needed to represent transport information. It uses modern information architecture principles to separate different concerns into independent information layers, using node and link concepts to describe individual transport layers. It supports the reuse of transport information entities for different applications. It can represent multi-modal, multi-operator transport systems and complex fare models, bringing together data from many different organisations with different standards and practices. The Transmodel standard also establishes consistent terminology for public transport concepts, providing definitive equivalents for use in the national languages of each participant nation. In cases where vernacular words related to public transport could have more than one possible meaning or overlap in meaning, it establishes a precise and unambiguous technical term for use in information systems. For example, the terms 'trip', 'journey', and 'service' are overlapping concepts that in Transmodel have specific usages. History Transmodel was originally developed within a range of European projects under several European Programmes (Drive I, Drive II, TAP) with the support of the European Commission (DGXIII), and national public institutions, in particular the French Ministry of Transport (Direction des Transports Terrestres), as well as several private companies. Initial development & first generation uses Transmodel originated in the Cassiope project (Computer Aided System for Scheduling Information and Operation of Public Transport in Europe, 1989-1991), carried out under the initial EEC DRIVE programme. The results of Cassiope were then developed further by the EuroBus and Harpist (Drive II) projects. This produced Transmodel V4.1 ENV 12896 with an E/R “Oracle” formalism. The Telematics Applications Programme project TITAN (1996-1998) continued to validate and enhance Transmodel, implementing it in three European pilot sites. TITAN accompanied the standardisation process of Transmodel, which was chosen in 1997 as the European Experimental Norm ENV 12896. This led to Transmodel V5.0, with multi-modality, real-time control, layers, and data versioning. The Système d'Information pour le Transport Public (SITP) (Information System for Public Transport), which began in 1999 under the sponsorship of the French Ministry of Transport, developed Transmodel 5.1, adding a UML formalism. In 2006, version 5.1 of Transmodel was formally adopted by the European Committee for Standardization (CEN) as the European Standard EN 12896. 
Second generation (Transmodel v5.1) Transmodel has been fundamental to the development of a number of concrete national data models and European Standards, including both European standards Service Interface for Real Time Information (SIRI: 2001-2005, now a CEN technical specification) for real-time data exchange for buses, and Identification of Fixed Objects In Public Transport (2006-2007), now assimilated into Transmodel v6.0 Part 2, and national standards such as TransXChange (2001-2005, now the UK standard for bus PT timetables), and the French Trident standard (1999-2003). Its provision of a uniform conceptual framework, consistent terminology and well grounded abstractions makes it especially valuable for comparing, harmonising and modernising legacy standards and systems and for international cooperation. Third generation (Transmodel v6.0) Transmodel based applications are now in widespread use through many parts of Europe for exchanging timetable and real-time data. A revised V6.0 version of Transmodel incorporating additional capabilities and breaking the specification into eight separate modules is under development. Parts 1, 2 & 3, covering respectively common concepts, network descriptions, and timetables, were published in 2015. Parts 4 to 8, covering operational actions, fares, passenger information services, driver management, and management information and statistics, were published in 2019. NeTEx (NETwork EXchange) is a concrete XML schema implementing the central components of the Transmodel model as a modular W3C schema. It was developed as a standard by CEN/TC 278/WG 3 (public transport working group) between 2009 and 2014 as a format for exchanging inter-modal stop, timetable, and fare data for public transport Europe-wide. In 2017, under the Intelligent Transport Systems Priority Action A Directive (2010/40/EU), the European Commission recognised NeTEx as a strategic standard for the cross-border exchange of data to enable the provision of EU-wide multi-modal travel information services. It aims to make data available in NeTEx format at National Access Points (government-designated open databases) in all European countries. 
See also TransXChange Transport Direct Transport standards organisations NeTEx Identification of Fixed Objects In Public Transport (IFOPT) Service Interface for Real Time Information (SIRI) References Bibliography Comité Européen de Normalisation (CEN), Reference Data Model For Public Transport, EN12896 EN 12896:2006, Public Transport Reference Data Model (“Transmodel v5.1”) EN 12896-1:2016, Public Transport Reference Data Model - Part 1: Common Concepts (“Transmodel v6”) EN 12896-2:2016, Public Transport Reference Data Model - Part 2: Public Transport Network (“Transmodel v6”) EN 12896-3:2016, Public Transport Reference Data Model - Part 3: Timing Information and Vehicle Scheduling (“Transmodel v6”) EN 12896-4:2019, Public Transport Reference Data Model – Part 4: Operations Monitoring and Control (“Transmodel v6”) EN 12896-5:2019, Public Transport Reference Data Model – Part 5: Fare Management (“Transmodel v6”) EN 12896-6:2019, Public Transport Reference Data Model – Part 6: Passenger Information (“Transmodel v6”) EN 12896-7:2019, Public Transport Reference Data Model – Part 7: Driver Management (“Transmodel v6”) EN 12896-8:2019, Public Transport Reference Data Model – Part 8: Management Information & Statistics (“Transmodel v6”) PD CEN/TR 12896-9:2019, Public Transport Reference Data Model – Informative documentation External links Transmodel NeTEx OpRa Public transport information systems Travel technology Transport in Europe
Transmodel
[ "Technology" ]
1,477
[ "Public transport information systems", "Information systems" ]
14,486,300
https://en.wikipedia.org/wiki/Identification%20of%20Fixed%20Objects%20in%20Public%20Transport
IFOPT (Identification of Fixed Objects in Public Transport) is a CEN Technical Specification that provides a Reference Data Model for describing the main fixed objects required for public access to Public transport, that is to say Transportation hubs (such as airports, stations, bus stops, ports, and other destination places and points of interest, as well as their entrances, platforms, concourses, internal spaces, equipment, facilities, accessibility etc.). Such a model is a fundamental component of the modern Public transport information systems needed both to operate Public transport and to inform passengers about services. IFOPT has been revised and incorporated into Transmodel v6 – Part 2. Scope IFOPT is itself built upon the CEN Transmodel standard and defines four related sub models. Stop Place Model: Describes the detailed structure of a Stop Place (that is, stations, airports, ferry ports, bus stops, coach stations, etc., providing a point of access to public transport) including Entrances, pathways, and accessibility limitations. Point of Interest Model: Describes the structure of a point of interest (that is, tourist attractions, leisure facilities, stadia, public buildings, parks, prisons, etc., to which people may wish to travel by public transport), including physical points of access, i.e. Entrances. Gazetteer Topographical Model: Provides a topographical representation of the settlements (cities, towns, villages etc.) between which people travel. It is used to associate Stop and Station elements with the appropriate topographic names and concepts to support the functions of journey planning, stop finding, etc. Administrative Model: Provides an organisational model for assigning responsibility to create and maintain data as a collaborative process involving distributed stakeholders. Includes namespace management to manage the decentralised issuing of unique identifiers. Stop Places The Stop Place model defines a conceptual model and identification principles for places of access (Stop Places) for all modes of transport (including airports, stations, ports, bus stops, coach stations, taxi ranks, etc.). It distinguishes all physical points of access to transport such as platforms, gates, quays, bays, stances, taxi ranks, and also other areas of an interchange such as booking halls, concourses, waiting rooms, etc. It describes the navigation paths between such points, allowing routing by journey planners. It can represent detailed accessibility data about access for wheelchair users, the visually impaired, and other categories of users with special needs, etc. It can also represent likely points of delay due to processes such as check-in, security, etc. Stop Places and their component elements can be assigned the names, labels and codes needed to identify them to the public in different contexts. Components can be associated with elements of other information layers such as the Road and Path Network to allow for integrated journey routing. See also Transmodel NaPTAN Transportation hub Intermodal Journey Planner History IFOPT was originally developed between 2008 and 2011 as an extension to the Transmodel model and included both a conceptual model expressed in Unified Modeling Language and a W3C XML Schema. It developed a detailed access model for stations and points of interest. 
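To give a feel for the kind of structure the Stop Place model describes, the deliberately simplified sketch below uses Python dataclasses; the class and field names are illustrative inventions for this example and do not reproduce the normative IFOPT or NeTEx element names:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Quay:
    # A physical point of access such as a platform, bay or stance.
    id: str
    name: str
    wheelchair_accessible: bool = False

@dataclass
class Entrance:
    # An entrance to the stop place, possibly with accessibility limitations.
    id: str
    name: str
    step_free: bool = False

@dataclass
class StopPlace:
    # A station, airport, ferry port, bus or coach station, etc.
    id: str
    name: str
    quays: List[Quay] = field(default_factory=list)
    entrances: List[Entrance] = field(default_factory=list)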
Between 2001 and 2012, a new, more general Transmodel-based schema, NeTEx, was developed, which incorporated and extended the features of the IFOPT schema as a uniform part of a data model for public transport stops and timetables. Starting in 2014, a programme to update Transmodel commenced, and the IFOPT conceptual model was integrated into Transmodel Part 2, describing transport networks including stops, points of interest and other IFOPT concerns. Part 2 was published in 2016. References prCEN Technical Specification Identification of Fixed Objects In Public Transport. External links Public transport information systems Travel technology Transport in Europe
Identification of Fixed Objects in Public Transport
[ "Technology" ]
760
[ "Public transport information systems", "Information systems" ]
14,486,776
https://en.wikipedia.org/wiki/Contact%20analysis
In cryptanalysis, contact analysis is the study of the frequency with which certain symbols precede or follow other symbols. The method is used as an aid to breaking classical ciphers. Contact analysis is based on the fact that, in any sample of any written language, certain symbols appear adjacent to other symbols with varying frequencies. Moreover, these frequencies are roughly the same for almost all samples of that language, even when the distribution of the symbols themselves differs significantly from normal. This is true regardless of whether the symbols being used are words or letters. In some ciphers, these properties of the natural language plaintext are preserved in the ciphertext, and have the potential to be exploited in a ciphertext-only attack. Although in a sense contact analysis can be considered a type of frequency analysis, most discussions of frequency analysis concern themselves with the simple probabilities of the symbols in the text: P(x) or P(x, y). Contact analysis is based on the conditional probability that certain letters will precede or succeed other letters: P(x | y), or P(y | x), or even P(A | B), where A and B are subsets of the alphabet being used. Where frequency analysis is based on first-order statistics, contact analysis is based on second or third-order statistics. References External links Statistical Distributions of English Text Cryptographic attacks
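As a rough illustration of the difference between simple symbol frequencies and contact frequencies (a minimal sketch; the sample sentence and the restriction to lowercase letters are arbitrary choices made for this example), the conditional probabilities can be tallied as follows:

from collections import Counter, defaultdict

def contact_table(text, alphabet="abcdefghijklmnopqrstuvwxyz"):
    # Estimate P(next | current) from the adjacent-letter pairs in a sample text.
    letters = [c for c in text.lower() if c in alphabet]
    pair_counts = Counter(zip(letters, letters[1:]))
    totals = defaultdict(int)
    for (first, _), n in pair_counts.items():
        totals[first] += n
    return {(a, b): n / totals[a] for (a, b), n in pair_counts.items()}

table = contact_table("attack the east wall of the castle at dawn")
print(round(table[("t", "h")], 2))  # how often 't' is followed by 'h' in this sample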
Contact analysis
[ "Technology" ]
251
[ "Cryptographic attacks", "Computer security exploits" ]
14,486,847
https://en.wikipedia.org/wiki/Mycena%20galopus
Mycena galopus, commonly known as the milky mycena, milking bonnet or milk-drop mycena, is an inedible species of fungus in the family Mycenaceae of the order Agaricales. It produces small mushrooms that have grayish-brown, bell-shaped, radially-grooved caps up to wide. The gills are whitish to gray, widely spaced, and squarely attached to the stem. The slender stems are up to long, and pale gray at the top, becoming almost black at the hairy base. The stem will ooze a whitish latex if it is injured or broken. The variety nigra has a dark gray cap, while the variety candida is white. All varieties of the mushroom occur during summer and autumn on leaf litter in coniferous and deciduous woodland. Mycena galopus is found in North America and Europe. The saprobic fungus is an important leaf litter decomposer, and able to utilize all the major constituents of plant litter. It is especially adept at attacking cellulose and lignin, the latter of which is the second most abundant renewable organic compound in the biosphere. The mushroom latex contains chemicals called benzoxepines, which are thought to play a role in a wound-activated chemical defense mechanism against yeasts and parasitic fungi. Taxonomy The mushroom was first described as Agaricus galopus by Christian Hendrik Persoon in 1800, and later transferred to the genus Mycena by Paul Kummer in 1871. An Australian taxon formerly considered a variety, Mycena galopus var. mellea, was raised to species level and renamed M. thunderboltensis in 1998. The variety candida was described by Jakob Emanuel Lange in 1914 based on specimens he found in Denmark; variety nigra was named by Carleton Rea in 1922. Mycena galopoda is an orthographical variant spelling. The specific epithet galopus is derived from the Greek γαλα "milk", and πονς "foot". The mushroom is commonly known as the "milking bonnet", or the "milk-drop Mycena". The varieties candida and nigra are the white and black milking bonnets, respectively. Description The cap of M. galopus is egg-shaped when young, later becoming conic to somewhat bell-shaped, and eventually reaching a diameter of . In age it often has a margin curved inward, and a prominent umbo. The cap surface has a hoary sheen (remnants of the universal veil that once covered the immature fruit body) that soon sloughs off, leaving it naked and smooth. The cap margin, which is initially pressed against the stem, is translucent when moist, so that the outline of the gills underneath the cap may be seen, and has deep narrow grooves when dry. The color is largely fuscous-black except for the whitish margin that fades to pale gray; the umbo remains blackish or becomes dark gray, sometimes with a very pale ashy gray over all when moist, and opaque and ashy gray after drying. The flesh is thin, soft, and fragile, without any distinctive odor and taste. The gills are subdistantly spaced, narrow, ascending-adnate, whitish to gray, usually darker in age, with edges that are pallid or grayish. The stem is (rarely up to 12 cm) long, 1–2 mm thick, equal in length throughout, smooth, and fragile. The lower portion of the stem is dark blackish-brown to a dark ashy color. The apex of the stem is pallid, and the whitish base covered with coarse, stiff hairs. When broken it exudes a white milk-like liquid. The variety candida is similar in appearance to the main variety, except its fruit body is completely white. Variety nigra has a dark or blackish-gray cap, and gills that are initially whitish before turning gray. 
Although not poisonous, M. galopus and the varieties candida and nigra are inedible. Microscopic characteristics The spores are 9–13 by 5–6.5 μm, smooth, ellipsoid, occasionally somewhat pear-shaped, and very weakly amyloid. The basidia are four-spored. The pleurocystidia and cheilocystidia are similar and very abundant, and measure 70–90 by 9–15 μm. They are narrowly fusoid-ventricose and usually have abruptly pointed tips, sometimes forked or branched near the apex, hyaline, and smooth. The flesh of the gill is homogeneous, and stains dark vinaceous-brown in iodine. The flesh of the cap has a thin but clearly differentiated pellicle, a well-developed hypoderm (the tissue layer immediately underneath the pellicle), and the remainder is filamentous. All but the pellicle stain vinaceous-brown in iodine. Similar species The "red edge bonnet", Mycena rubromarginata, is also grayish-brown, but it has gill edges that are red, and it does not ooze latex when broken. It has amyloid, pip-shaped to roughly spherical spores that measure 9.2–13.4 by 6.5–9.4 μm. Ecology, habitat, and distribution Mycena galopus is a saprobic fungus, and plays an important role in forest ecosystems as a decomposer of leaf litter. It has been estimated in the UK to account for a large portion of the decomposition of the autumn leaf litter in British woodlands. It is able to break down the lignin and cellulose components of leaf litter. Grown in axenic culture in the laboratory, the fungus mycelium has been shown to degrade (in addition to lignin and cellulose) hemicelluloses, protein, soluble carbohydrates, and purified xylan and pectin using enzymes such as polyphenol oxidases, cellulases, and catalase. It is particularly adept at breaking down lignin, which is the second most abundant renewable organic compound in the biosphere, after cellulose. Research also suggests that the fungus weathers soil minerals, making them more available to mycorrhizal plants. Phosphorus, an important macronutrient influencing plant growth, typically occurs in primary minerals like apatite, or other organic complexes, and its low solubility often results in low phosphorus availability in soil. The biological activity of M. galopus mycelium can increase the availability of phosphorus and other nutrients, both as a result of soil acidification due to cation uptake and via the release of weathering agents such low molecular mass organic acids. Studies have shown that the fungus is sensitive to low concentrations of sulphite (SO32−), a byproduct of sulphur dioxide pollution, suggesting that this pollution can be toxic to the growth of the fungus (and the subsequent decomposition of leaf litter) at environmentally relevant concentrations. The fruit bodies of Mycena galopus grow in groups to scattered on humus under hardwoods or conifers. In the United States, it is very abundant along the Pacific Coast from Washington to California, and also in Tennessee and North Carolina; its northern distribution extends to Canada (Nova Scotia). In Europe, it has been collected from Britain, Germany, Ireland, and Norway. Chemistry In 1999, Wijnberg and colleagues reported the presence of several structurally related antifungal compounds called benzoxepines in the latex of Mycena galopus. One of these compounds, 6-hydroxypterulone, is a derivative of pterulone, a potent antifungal metabolite first isolated from submerged cultures of Pterula species in 1997. 
The antifungal activity of pterulone is based on selective inhibition of the NADH dehydrogenase enzyme of the electron transport chain. A 2008 publication reported that fatty acid esters of benzoxepine serve as precursors to wound-activated chemical defense. When the fruit body is injured and the latex is exposed, an esterase enzyme (an enzyme that splits esters into an acid and an alcohol in a chemical reaction with water called hydrolysis) presumably cleaves the inactive esterified benzoxepines into their active forms, where they can help defend the mushroom against yeasts and parasitic fungi. In nature, the mushroom is rarely attacked by parasitic fungi, however, it is prone to infection by the "bonnet mold" Spinellus fusiger, which is insensitive to the benzoxepines of M. galopus. In an English field study, where the two fungi M. galopus and Marasmius androsaceus made up over 99% of the fruit bodies in a site under Sitka spruce, the fungivorous collembolan arthropod Onychiurus latus preferred to graze on the mycelium of M. androsaceus. This selective grazing influences the vertical distribution of the two fungi in the field. See also List of bioluminescent fungi Footnotes Cited text External links Mycena galopus photo Bioluminescent fungi Fungi described in 1800 galopus Fungi of Europe Fungi of North America Taxa named by Christiaan Hendrik Persoon Fungus species
Mycena galopus
[ "Biology" ]
1,933
[ "Fungi", "Fungus species" ]
14,487,095
https://en.wikipedia.org/wiki/NGC%2088
NGC 88 is a barred spiral galaxy exhibiting an inner ring structure located about 160 million light years from the Earth in the Phoenix constellation. NGC 88 is interacting with the galaxies NGC 92, NGC 87 and NGC 89. It is part of a family of galaxies called Robert's Quartet discovered by astronomer John Herschel in the 1830s. References NGC 88 External links Phoenix (constellation) 0088 01370 Robert's Quartet Barred spiral galaxies 18340930
NGC 88
[ "Astronomy" ]
93
[ "Phoenix (constellation)", "Constellations" ]
14,487,205
https://en.wikipedia.org/wiki/NGC%2089
NGC 89 is a barred spiral or lenticular galaxy, part of Robert's Quartet, a group of four interacting galaxies. This member has a Seyfert 2 nucleus with extra-planar features emitting H-alpha radiation. There are filamentary features on each side of the disk, including a jet-like structure extending about 4 kpc in the NE direction. It may have lost its neutral hydrogen (H1) gas due to interactions with the other members of the group—most likely NGC 92. References External links NGC 89 0089 01374 Phoenix (constellation) Robert's Quartet Barred spiral galaxies 194-11 18340930
NGC 89
[ "Astronomy" ]
137
[ "Phoenix (constellation)", "Constellations" ]
14,487,338
https://en.wikipedia.org/wiki/NGC%2092
NGC 92 is a highly warped interacting unbarred spiral galaxy in Robert's Quartet; it is interacting with three neighbouring galaxies NGC 87, NGC 88 and NGC 89. References External links 0092 01388 Phoenix (constellation) Robert's Quartet Unbarred spiral galaxies 194-G012 18340930
NGC 92
[ "Astronomy" ]
67
[ "Phoenix (constellation)", "Constellations" ]
14,488,084
https://en.wikipedia.org/wiki/Eltrombopag
Eltrombopag, sold under the brand name Promacta among others, is a medication used to treat thrombocytopenia (abnormally low platelet counts) and severe aplastic anemia. Eltrombopag is sold under the brand name Revolade outside the US and is marketed by Novartis. It is a thrombopoietin receptor agonist. It is taken by mouth. Eltrombopag was discovered as a result of research collaboration between GlaxoSmithKline and Ligand Pharmaceuticals and is transferred to Novartis Pharmaceuticals. Medical uses Eltrombopag was approved by the US Food and Drug Administration (FDA) in November 2008, for the treatment of thrombocytopenia in people with chronic immune (idiopathic) thrombocytopenic purpura who have had an insufficient response to corticosteroids, immunoglobulin therapy, or splenectomy. In August 2015, the FDA approved eltrombopag (Promacta for oral suspension) for the treatment of thrombocytopenia in children one year of age and older with idiopathic thrombocytopenia who have had an insufficient response to corticosteroids, immunoglobulins, or splenectomy. Development In preclinical studies, the compound was shown to interact selectively with the thrombopoietin receptor, leading to activation of the JAK-STAT signaling pathway and increased proliferation and differentiation of megakaryocytes. Animal studies confirmed that it increased platelet counts. In 73 healthy volunteers, higher doses of eltrombopag caused larger increases in the number of circulating platelets without tolerability problems. Clinical trials Eltrombopag has been shown to be effective in two major clinical syndromes: idiopathic thrombocytopenic purpura (ITP) and cirrhosis due to hepatitis C (in which low platelet counts may be a contraindication for interferon treatment). After six weeks of therapy in a phase III trial, eltrombopag 50 mg/day was associated with a significantly higher response rate than placebo in adult patients with chronic idiopathic thrombocytopenic purpura (ITP). History Eltrombopag received breakthrough therapy designation from the US Food and Drug Administration (FDA) in February 2014, for people with aplastic anemia for which immunosuppression has not been successful. In 2017, the NIH made Eltrombopag a standard of care in aplastic anemia. Society and culture Legal status In October 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Eltrombopag Viatris, intended for the treatment of people with primary immune thrombocytopenia (ITP) and thrombocytopenia associated with chronic hepatitis C. The applicant for this medicinal product is Viatris Limited. Eltrombopag Viatris was authorized in December 2024. Research It has been shown to produce a trilineage hematopoiesis in some people with aplastic anemia, resulting in increased platelet counts, along with red and white blood cell counts. References External links Biphenyls Carboxylic acids Drugs acting on the blood and blood forming organs Drugs developed by GSK plc Hydrazines Drugs developed by Novartis Orphan drugs Thrombopoietin receptor agonists
Eltrombopag
[ "Chemistry" ]
747
[ "Carboxylic acids", "Functional groups", "Hydrazines" ]
14,488,117
https://en.wikipedia.org/wiki/Electromechanical%20film
Electromechanical Film (EMFI, EMFIT, trademarks of Emfit Ltd) is a thin, flexible film that can function as a sensor or actuator. It is composed of a charged polymer coated with two conductive layers, making it an electret. It was invented and first made by Finnish inventor Kari Kirjavainen. Its voided internal structure and high resistivity allow it to hold a high electric charge and make the film very sensitive to force. Changes in the film's thickness create an electric charge and make it operate as a sensor, or when an electric voltage is applied, it can function as an actuator. This gives the film applications in different fields of technology, including, but not limited to, mechanical vibration and ultrasound sensors, microphones, loudspeaker panels, keyboards, and physiological touch sensors. Other than being cheap, its main advantage is its versatility; it can be cut, reshaped, and resized depending on its surface of application. Manufacturing and structure The base film is first made by bi-axially orienting a polypropylene film. It is created through a "film-blowing" process, in which the plastic is extruded using a film blowing machine in the shape of a tube. Through the process of foaming, gaseous bubbles can be formed at a fixed density in the tube, which gives rise to EMFi's "voided internal structure". It is then expanded in two different directions depending on the desired thickness and orientation (bi-axial orientation). The tube is then coated with some electrically conductive material and then cut open into a film. This film is then charged using the Corona Treatment, and the electrically conductive layers create electrodes. The EMFIT sensor film has three layers, two of which are homogeneous and act as electrodes as mentioned above, and a middle layer that is filled with flat, disk-shaped voids. When the film receives charge from the Corona method, electrical breakdowns occur and the surfaces of the voids are permanently charged. There is one basic type of EMFIT sensor film manufactured currently, with a thickness of 70 μm. Operation Sensor The film can be used as a sensor. As the film is charged, it creates an electric field. When pressure is applied to the film, the film's thickness is reduced and changes in the shapes of the individual voids in its structure occur. Any electric charges residing in these voids will move and create mirror charges at the electrode surfaces of the film. These charges are proportional to the force applied to the film, which is given by the equation: Δq = kΔF, where ΔF is the dynamic force, Δq is the charge generated, and k is the sensitivity factor. Actuator The same sensor film can also be used as an actuator. Changes in thickness can be induced by applying a voltage on the film; compression and expansion of the film depend on the polarity of the voltage, and occur when the two outer surfaces of the film either attract or repel each other. The attractive force between the surfaces while the film is uncharged is given by the equation: F = CU²/(2x), where C is the capacitance of the film, U is the applied voltage, and x is the film's thickness. Applications EMFIT sensor film has a diverse range of applications due to it being flexible, durable, and sensitive to a wide range of frequencies. These properties are attributed to its base material: cellular voided Ferro-electret film. 
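For orientation, the two operating relations given in the Operation section above can be written as small numerical helpers (a minimal sketch; the function names and unit conventions are choices made for this example, not part of any EMFIT datasheet):

def sensor_charge(delta_force, sensitivity):
    # Sensor mode: charge generated by a change in force, dq = k * dF.
    # delta_force in newtons, sensitivity k in coulombs per newton; returns coulombs.
    return sensitivity * delta_force

def actuator_force(capacitance, voltage, thickness):
    # Actuator mode: electrostatic attraction between the film's electrode surfaces,
    # F = C * U**2 / (2 * x), with capacitance in farads, voltage in volts and
    # thickness in metres; returns newtons.
    return capacitance * voltage ** 2 / (2 * thickness)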
Due to these properties, in conjunction with the two modes of operation, it has already seen use in vandalism-proof keyboards, guitar pickups, flat speakers, microphones, and vital signs ballistocardiography sensors. In active noise cancellation, part of a sensor product can be used in sensor mode to detect sound signals, while another part can be used as an actuator to produce sound signals that cancel out the first. EMFIT sensors have been implemented in physiological bio-signal sensors where no direct contact with the skin is required, such as a BCG, as its application is non-invasive. Limits Due to the thermal constraints of using polypropylene as the base material, in applications where high sensitivity is needed long-term temperatures should be kept below 70 °C, which limits its scope in terms of some potential applications such as the automotive industry. The air voids present in the structure become smaller and higher in pressure as force is applied to the film. This means that the film becomes harder to compress as it goes under more load, meaning that in the sensor mode, the charge output is non-linear, which can make calibrating the sensor difficult. References Chemical engineering
Electromechanical film
[ "Chemistry", "Engineering" ]
979
[ "Chemical engineering", "nan" ]
14,488,460
https://en.wikipedia.org/wiki/Mycena%20cyanorrhiza
Mycena cyanorrhiza is a small white mushroom that shows blue coloration. Unlike in hallucinogenic mushrooms, this blue color is not related to psilocin polymerization. It grows on wood in forests and has a white spore print. Gallery References External links Mycena cyanorrhiza description and photo cyanorrhiza Fungi of Europe Fungi of North America Taxa named by Lucien Quélet Fungus species
Mycena cyanorrhiza
[ "Biology" ]
86
[ "Fungi", "Fungus species" ]
14,488,528
https://en.wikipedia.org/wiki/Turbojet%20train
A turbojet train is a train powered by turbojet engines. Like a jet aircraft, but unlike a gas turbine locomotive, the train is propelled by the jet thrust of the engines, rather than by its wheels. Only a handful of jet-powered trains have been built, for experimental research in high-speed rail. Turbojet engines have been built with the engine incorporated into a railcar combining both propulsion and passenger accommodation rather than as separate locomotives hauling passenger coaches. As turbojet engines are most efficient at high speeds, the experimental research has focused in applications for high-speed passenger services, rather than the heavier trains (with more frequent stops) used for freight services. M-497 The first attempt to use turbojet engines on a railroad was made in 1966 by the New York Central Railroad (NYCR), a company with operations throughout the Great Lakes region. They streamlined a Budd Rail Diesel Car, added two General Electric J47-19 jet engines, and nicknamed it the M-497 Black Beetle. Testing was performed on a length of the normal NYCR system – a virtually arrow-straight layout of regular existing track between Butler, Indiana, and Stryker, Ohio. On July 23, 1966, the train reached a speed of . LIMRV In the early 1970s, the U.S. Federal Railroad Administration developed the Linear Induction Motor Research Vehicle (LIMRV), meant to test the use of linear induction motors. The LIMRV was a specialized wheeled vehicle, running on standard-gauge railroad track. Speed was limited due to the length of the track and vehicle acceleration rates. One stage of research saw the addition of two Pratt & Whitney J52 jet engines to propel the LIMRV. Once the LIMRV had accelerated to desired velocity, the engines were throttled back so that the thrust equaled their drag. On 14 August 1974, using the jet engines, the LIMRV achieved a world record speed of for vehicles on conventional rail. SVL In 1970, researchers in the USSR developed the (SVL) turbojet train. The SVL was able to reach a speed of . The researchers placed jet engines on an ER22 railcar, normally part of an electric-powered multiple unit train. The SVL had a mass of 54.4 tonnes (including 7.4 tonnes of fuel) and was long. If the research had been successful, there was a plan to use the turbojet powered vehicle to pull a "Russian troika" express service. As of 2014, the train still exists in a dilapidated and unmaintained state, while the research project has been honoured with a monument made from the front of the railcar, outside a railcar factory in Tver, a city in western Russia. See also Aérotrain – a contemporary French hovercraft train, also powered by a jet engine Aerowagon Schienenzeppelin – a German propeller-driven railcar of 1929 Turboshaft Notes References External links A collection of photographs of the ER22 turbojet locomotive Gas turbine multiple units Jet engines Railcars of Russia High-speed trains of Russia Experimental locomotives
Turbojet train
[ "Technology" ]
634
[ "Jet engines", "Engines" ]
14,489,318
https://en.wikipedia.org/wiki/Autoinducer
In biology, an autoinducer is a signaling molecule that enables detection and response to changes in the population density of bacterial cells. Synthesized when a bacterium reproduces, autoinducers pass outside the bacterium and into the surrounding medium. They are a key component of the phenomenon of quorum sensing: as the density of quorum-sensing bacterial cells increases, so does the concentration of the autoinducer. A bacterium’s detection of an autoinducer above some minimum threshold triggers altered gene expression. Performed by both Gram-negative and Gram-positive bacteria, detection of autoinducers allows them to sense one another and to regulate a wide variety of physiological activities, including symbiosis, virulence, motility, production of antibiotics, and formation of biofilms. Autoinducers take a number of different forms depending on the species of bacteria, but their effect is in many cases similar. They allow bacteria to communicate both within and between species, and thus to mount coordinated responses to their environments in a manner that is comparable to behavior and signaling in higher organisms. Not surprisingly, it has been suggested that quorum sensing may have been an important evolutionary milestone that ultimately gave rise to multicellular life forms. Discovery The term autoinduction was first coined in 1970, when it was observed that the bioluminescent marine bacterium Vibrio fischeri produced a luminescent enzyme (luciferase) only when cultures had reached a threshold population density. At low cell concentrations, V. fischeri did not express the luciferase gene. However, during the cultures’ exponential growth phase, the luciferase gene was rapidly activated. This phenomenon was called autoinduction because it involved a molecule (the autoinducer) produced by the bacteria themselves that accumulated in the growth medium and induced the synthesis of components of the luminescence system. Subsequent research revealed that the actual autoinducer used by V. fischeri is an acylated homoserine lactone (AHL) signaling molecule. Mechanism In the most simplified quorum sensing systems, bacteria only need two components to make use of autoinducers. They need a way to produce a signal and a way to respond to that signal. These cellular processes are often tightly coordinated and involve changes in gene expression. The production of autoinducers generally increases as bacterial cell densities increase. Most signals are produced intracellularly and are subsequently secreted in the extracellular environment. Detection of autoinducers often involves diffusion back into cells and binding to specific receptors. Usually, binding of autoinducers to receptors does not occur until a threshold concentration of autoinducers is achieved. Once this has occurred, bound receptors alter gene expression either directly or indirectly. Some receptors are transcription factors themselves, while others relay signals to downstream transcription factors. In many cases, autoinducers participate in forward feedback loops, whereby a small initial concentration of an autoinducer amplifies the production of that same chemical signal to much higher levels. Classes Acylated homoserine lactones Primarily produced by Gram-negative bacteria, acylated homoserine lactones (AHLs) are a class of small neutral lipid molecules composed of a homoserine lactone ring with an acyl chain. 
AHLs produced by different species of Gram-negative bacteria vary in the length and composition of the acyl side chain, which often contains 4 to 18 carbon atoms. AHLs are synthesized by AHL synthases. They diffuse in and out of cells by both passive transport and active transport mechanisms. Receptors for AHLs include a number of transcriptional regulators called "R proteins," which function as DNA binding transcription factors or sensor kinases. Peptides Gram-positive bacteria that participate in quorum sensing typically use secreted oligopeptides as autoinducers. Peptide autoinducers usually result from posttranslational modification of a larger precursor molecule. In many Gram-positive bacteria, secretion of peptides requires specialized export mechanisms. For example, some peptide autoinducers are secreted by ATP-binding cassette transporters that couple proteolytic processing and cellular export. Following secretion, peptide autoinducers accumulate in extracellular environments. Once a threshold level of signal is reached, a histidine sensor kinase protein of a two-component regulatory system detects it and a signal is relayed into the cell. As with AHLs, the signal ultimately ends up altering gene expression. Unlike some AHLs, however, most oligopeptides do not act as transcription factors themselves. Furanosyl borate diester The free-living bioluminescent marine bacterium, Vibrio harveyi, uses another signaling molecule in addition to an acylated homoserine lactone. This molecule, termed Autoinducer-2 (or AI-2), is a furanosyl borate diester. AI-2, which is also produced and used by a number of Gram-negative and Gram-positive bacteria, is believed to be an evolutionary link between the two major types of quorum sensing circuits. In gram-negative bacteria As mentioned, Gram-negative bacteria primarily use acylated homoserine lactones (AHLs) as autoinducer molecules. The minimum quorum sensing circuit in Gram-negative bacteria consists of a protein that synthesizes an AHL and a second, different protein that detects it and causes a change in gene expression. First identified in V. fischeri, these two such proteins are LuxI and LuxR, respectively. Other Gram-negative bacteria use LuxI-like and LuxR-like proteins (homologs), suggesting a high degree of evolutionary conservation. However, among Gram-negatives, the LuxI/LuxI-type circuit has been modified in different species. Described in more detail below, these modifications reflect bacterial adaptations to grow and respond to particular niche environments. Vibrio fischeri: bioluminescence Ecologically, V. fischeri is known to have symbiotic associations with a number of eukaryotic hosts, including the Hawaiian Bobtail Squid (Euprymna scolopes). In this relationship, the squid host maintains the bacteria in specialized light organs. The host provides a safe, nutrient rich environment for the bacteria and in turn, the bacteria provide light. Although bioluminescence can be used for mating and other purposes, in E. scolopes it is used for counter illumination to avoid predation. The autoinducer molecule used by V. fischeri is N-(3-oxohexanoyl)-homoserine lactone. This molecule is produced in the cytoplasm by the LuxI synthase enzyme and is secreted through the cell membrane into the extracellular environment. As is true of most autoinducers, the environmental concentration of N-(3-oxohexanoyl)-homoserine lactone is the same as the intracellular concentration within each cell. 
N-(3-oxohexanoyl)-homoserine lactone eventually diffuses back into cells where it is recognized by LuxR once a threshold concentration (~10 μg/ml) has been reached. LuxR binds the autoinducer and directly activates transcription of the luxICDABE operon. This results in an exponential increase in both the production of autoinducer and in bioluminescence. LuxR bound by autoinducer also inhibits the expression of luxR, which is thought to provide a negative feedback compensatory mechanism to tightly control levels of the bioluminescence genes. Pseudomonas aeruginosa: virulence and antibiotic production P. aeruginosa is an opportunistic human pathogen associated with cystic fibrosis. In P. aeruginosa infections, quorum sensing is critical for biofilm formation and pathogenicity. P. aeruginosa contains two pairs of LuxI/LuxR homologs, LasI/LasR and RhlI, RhlR. LasI and RhlI are synthase enzymes that catalyze the synthesis of N-(3-oxododecanoyl)-homoserine lactone and N-(butyryl)-homoserine lactone, respectively. The LasI/LasR and the RhlI/RhlR circuits function in tandem to regulate the expression of a number of virulence genes. At a threshold concentration, LasR binds N-(3-oxododecanoyl)-homoserine lactone. Together this bound complex promotes the expression of virulence factors that are responsible for early stages of the infection process. LasR bound by its autoinducer also activates the expression of the RhlI/RhlR system in P. aeruginosa. This causes the expression of RhlR which then binds its autoinducer, N-(butryl)-homoserine lactone. In turn, autoinducer-bound RhlR activates a second class of genes involved in later stages of infection, including genes needed for antibiotic production. Presumably, antibiotic production by P. aeruginosa is used to prevent opportunistic infections by other bacterial species. N-(3-oxododecanoyl)-homoserine lactone prevents binding between N-(butryl)-homoserine lactone and its cognate regulator, RhlR. It is believed that this control mechanism allows P. aeruginosa to initiate the quorum-sensing cascades sequentially and in the appropriate order so that a proper infection cycle can ensue. Other gram-negative autoinducers P. aeruginosa also uses 2-heptyl-3-hydroxy-4-quinolone (PQS) for quorum sensing. This molecule is noteworthy because it does not belong to the homoserine lactone class of autoinducers. PQS is believed to provide an additional regulatory link between the Las and Rhl circuits involved in virulence and infection. Agrobacterium tumefaciens is a plant pathogen that induces tumors on susceptible hosts. Infection by A. tumefaciens involves the transfer of an oncogenic plasmid from the bacterium to the host cell nucleus, while quorum sensing controls the conjugal transfer of plasmids between bacteria. Conjugation, on the other hand, requires the HSL autoinducer, N-(3-oxooctanoyl)-homoserine lactone. Erwinia carotovora is another plant pathogen that causes soft-rot disease. These bacteria secrete cellulases and pectinases, which are enzymes that degrade plant cell walls. ExpI/ExpR are LuxI/LuxR homologs in E. carotovora believed to control secretion of these enzymes only when a high enough local cell density is achieved. The autoinducer involved in quorum sensing in E. carotovora is N-(3-oxohexanoyl)-L-homoserine lactone. In gram-positive bacteria Whereas Gram-negative bacteria primarily use acylated homoserine lactones, Gram-positive bacteria generally use oligopeptides as autoinducers for quorum sensing. 
These molecules are often synthesized as larger polypeptides that are cleaved post-translationally to produce "processed" peptides. Unlike AHLs that can freely diffuse across cell membranes, peptide autoinducers usually require specialized transport mechanisms (often ABC transporters). Additionally, they do not freely diffuse back into cells, so bacteria that use them must have mechanisms to detect them in their extracellular environments. Most Gram-positive bacteria use a two-component signaling mechanism in quorum sensing. Secreted peptide autoinducers accumulate as a function of cell density. Once a quorum level of autoinducer is achieved, its interaction with a sensor kinase at the cell membrane initiates a series of phosphorylation events that culminate in the phosphorylation of a regulator protein intracellularly. This regulator protein subsequently functions as a transcription factor and alters gene expression. Similar to Gram-negative bacteria, the autoinduction and quorum sensing system in Gram-positive bacteria is conserved, but again, individual species have tailored specific aspects for surviving and communicating in unique niche environments. Streptococcus pneumoniae: competence S. pneumoniae is human pathogenic bacterium in which the process of genetic transformation was first described in the 1930s. In order for a bacterium to take up exogenous DNA from its surroundings, it must become competent. In S. pneumoniae, a number of complex events must occur to achieve a competent state, but it is believed that quorum sensing plays a role. Competence stimulating peptide (CSP) is a 17-amino acid peptide autoinducer required for competency and subsequent genetic transformation. CSP is produced by proteolytic cleavage of a 41-amino acid precursor peptide (ComC); is secreted by an ABC transporter (ComAB); and is detected by a sensor kinase protein (ComD) once it has reached a threshold concentration. Detection is followed by autophosphorylation of ComD, which in turn, phosphorylates ComE. ComE is a response regulator responsible for activating transcription of comX, the product of which is required to activate transcription of a number of other genes involved in the development of competence. Bacillus subtilis: competence & sporulation B. subtilis is a soil-dwelling microbe that uses quorum sensing to regulate two different biological processes: competence and sporulation. During stationary growth phase when B. subtilis are at high cell density, approximately 10% of the cells in a population are induced to become competent. It is believed that this subpopulation becomes competent to take up DNA that could potentially be used for the repair of damaged (mutated) chromosomes. ComX (also known as competence factor) is a 10-amino acid peptide that is processed from a 55-amino acid peptide precursor. Like most autoinducers, ComX is secreted and accumulates as a function of cell density. Once a threshold extracellular level is achieved, ComX is detected by a two-component ComP/ComA sensor kinase/response regulator pair. Phosphorylation of ComA activates the expression of comS gene, ComS inhibits the degradation of ComK, and finally ComK activates the expression of a number of genes required for competence. Sporulation, on the other hand, is a physiological response of B. subtilis to depletion of nutrients within a particular environment. It is also regulated by extracellular signaling. When B. subtilis populations sense waning conditions, they respond by undergoing asymmetric cell division. 
This ultimately produces spores that are adapted for dispersal and survival in unfavorable conditions. Sporulation in B. subtilis is mediated by CSF (sporulation factor), a pentapeptide cleaved from the precursor peptide PhrC. CSF is secreted into the extracellular environment and is taken back up into cells via the ABC transporter Opp where it acts intracellularly. While low internal concentrations of CSF contribute to competence, high concentrations induce sporulation. CSF inhibits a phosphatase, RapB, which increases the activity of Spo0A, favoring a switch in commitment from competence to the sporulation pathway. References Signal transduction
Autoinducer
[ "Chemistry", "Biology" ]
3,242
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
14,489,504
https://en.wikipedia.org/wiki/Trans-1%2C2-Diaminocyclohexane
trans-1,2-Diaminocyclohexane is an organic compound with the formula C6H10(NH2)2. This diamine is a building block for C2-symmetric ligands that are useful in asymmetric catalysis. A mixture of all three stereoisomers of 1,2-diaminocyclohexane is produced by the hydrogenation of o-phenylenediamine. It is also a side product in the hydrogenation of adiponitrile. The racemic trans isomer (1:1 mixture of (1R,2R)-1,2-diaminocyclohexane and (1S,2S)-1,2-diaminocyclohexane) can be separated into the two enantiomers using enantiomerically pure tartaric acid as the resolving agent. Derived ligands Representative ligands prepared from (1R,2R)- or (1S,2S)-1,2-diaminocyclohexane are diaminocyclohexanetetraacetic acid (CyDTAH4), the Trost ligand, and the salen analogue used in the Jacobsen epoxidation. References Diamines Chelating agents Cyclohexanes
Trans-1,2-Diaminocyclohexane
[ "Chemistry" ]
279
[ "Chelating agents", "Process chemicals" ]
14,489,936
https://en.wikipedia.org/wiki/Coenzyme%20F420
Coenzyme F420 is a family of coenzymes involved in redox reactions in a number of bacteria and archaea. It is derived from coenzyme FO (7,8-didemethyl-8-hydroxy-5-deazariboflavin) and differs by having an oligoglutamyl tail attached via a 2-phospho-L-lactate bridge. F420 is so named because it is a flavin derivative with an absorption maximum at 420 nm. F420 was originally discovered in methanogenic archaea and in Actinomycetota (especially in Mycobacterium). It is now known to be used also by Cyanobacteria and by soil Proteobacteria, Chloroflexi and Firmicutes. Eukaryotes including the fruit fly Drosophila melanogaster and the algae Ostreococcus tauri also use Coenzyme FO. F420 is structurally similar to FMN, but catalytically it is similar to NAD and NADP: it has low redox potential and always transfers a hydride. As a result, it is not only a versatile cofactor in biochemical reactions, but is also being eyed for potential use as an industrial catalyst. Similar to FMN, it has two states: one reduced state, notated as F420-H2, and one oxidized state, written as just F420. FO has largely similar redox properties, but cannot carry an electric charge and as a result probably slowly leaks out of the cellular membrane. A number of F420 molecules, differing by the length of the oligoglutamyl tail, are possible; F420-2, for example, refers to the version with two glutamyl units attached. Lengths from 4 to 9 are typical. Biosynthesis Coenzyme F420 is synthesized via a multi-step pathway: 7,8-didemethyl-8-hydroxy-5-deazariboflavin synthase (FbiC) produces Coenzyme FO (also written F0), itself a cofactor of DNA photolyase (antenna). This is the head portion of the molecule. 2-phospho-L-lactate transferase (FbiA) produces Coenzyme F420-0, the portion containing the head, the diphosphate bridge, and ending with a carboxylic acid group. Coenzyme F420-0:L-glutamate ligase (one part of FbiB) puts a glutamate residue at the -COOH end, producing Coenzyme F420-1. Coenzyme F420-1:gamma-L-glutamate ligase (other part of FbiB) puts a gamma-glutamate residue at the -COOH end, producing Coenzyme F420-2, the final compound (in its oxidized form); it is also responsible for adding additional units. Oxidized F420 can be converted to reduced F420-H2 by multiple enzymes such as Glucose-6-phosphate dehydrogenase (coenzyme-F420) (Fgd1). Function The coenzyme is a substrate for coenzyme F420 hydrogenase, 5,10-methylenetetrahydromethanopterin reductase and methylenetetrahydromethanopterin dehydrogenase. A long list of other enzymes use F420 to oxidize (dehydrogenate) or F420-H2 to reduce substrates. F420 plays a central role in redox reactions across diverse organisms, including archaea and bacteria, by participating in methanogenesis, antibiotic biosynthesis, DNA repair and the activation of antitubercular drugs. Its ability to carry out hydride transfer reactions is enabled by its low redox potential, which is optimized for specific biochemical pathways. Clinical relevance Delamanid, a drug used to treat multi-drug-resistant tuberculosis (MDRTB) in combination with other antituberculosis medications, is activated in the mycobacterium by deazaflavin-dependent nitroreductase (Ddn), an enzyme which uses dihydro-F420 (reduced form). The activated form of the drug is highly reactive and attacks cell wall synthesis enzymes such as DprE2. Pretomanid works in the same way. Clinical isolates resistant to these two drugs tend to have mutations in the biosynthetic pathway for F420. 
See also Coenzyme M Coenzyme B Methanofuran Tetrahydromethanopterin References External links KEGG: FO F420-0 F420-1 Reduced F420 Oxidised F420 Coenzymes Flavins
Coenzyme F420
[ "Chemistry" ]
1,059
[ "Organic compounds", "Coenzymes" ]
1,527,537
https://en.wikipedia.org/wiki/Bevacizumab
Bevacizumab, sold under the brand name Avastin among others, is a monoclonal antibody medication used to treat a number of types of cancers and a specific eye disease. For cancer, it is given by slow injection into a vein (intravenous) and used for colon cancer, lung cancer, ovarian cancer, glioblastoma, hepatocellular carcinoma, and renal-cell carcinoma. In many of these diseases it is used as a first-line therapy. For age-related macular degeneration it is given by injection into the eye (intravitreal). Common side effects when used for cancer include nose bleeds, headache, high blood pressure, and rash. Other severe side effects include gastrointestinal perforation, bleeding, allergic reactions, blood clots, and an increased risk of infection. When used for eye disease side effects can include vision loss and retinal detachment. Bevacizumab is a monoclonal antibody that functions as an angiogenesis inhibitor. It works by slowing the growth of new blood vessels by inhibiting vascular endothelial growth factor A (VEGF-A), in other words anti–VEGF therapy. Bevacizumab was approved for medical use in the United States in 2004. It is on the World Health Organization's List of Essential Medicines. Medical uses Colorectal cancer Bevacizumab was approved in the United States in February 2004, for use in metastatic colorectal cancer when used with standard chemotherapy treatment (as first-line treatment). In June 2006, it was approved with 5-fluorouracil-based therapy for second-line metastatic colorectal cancer. It was approved by the European Medicines Agency (EMA) in January 2005, for use in colorectal cancer. Bevacizumab has also been examined as an add on to other chemotherapy drugs in people with non-metastatic colon cancer. The data from two large randomized studies showed no benefit in preventing the cancer from returning and a potential to cause harm in this setting. In the EU, bevacizumab in combination with fluoropyrimidine-based chemotherapy is indicated for treatment of adults with metastatic carcinoma of the colon or rectum. Lung cancer In 2006, the US Food and Drug Administration (FDA) approved bevacizumab for use in first-line advanced nonsquamous non-small cell lung cancer in combination with carboplatin/paclitaxel chemotherapy. The approval was based on the pivotal study E4599 (conducted by the Eastern Cooperative Oncology Group), which demonstrated a two-month improvement in overall survival in patients treated with bevacizumab (Sandler, et al. NEJM 2004). A preplanned analysis of histology in E4599 demonstrated a four-month median survival benefit with bevacizumab for people with adenocarcinoma (Sandler, et al. JTO 2010); adenocarcinoma represents approximately 85% of all non-squamous cell carcinomas of the lung. A subsequent European clinical trial, AVAiL, was first reported in 2009 and confirmed the significant improvement in progression-free survival shown in E4599 (Reck, et al. Ann. Oncol. 2010). An overall survival benefit was not demonstrated in patients treated with bevacizumab; however, this may be due to the more limited use of bevacizumab as maintenance treatment in AVAiL versus E4599 (this differential effect is also apparent in the European vs US trials of bevacizumab in colorectal cancer: Tyagi and Grothey, Clin Colorectal Cancer, 2006). As an anti-angiogenic agent, there is no mechanistic rationale for stopping bevacizumab before disease progression. 
Stated another way, the survival benefits achieved with bevacizumab can only be expected when used in accordance with the clinical evidence: continued until disease progression or treatment-limiting side effects. Another large European-based clinical trial with bevacizumab in lung cancer, AVAPERL, was reported in October 2011 (Barlesi, et al. ECCM 2011). First-line patients were treated with bevacizumab plus cisplatin/pemetrexed for four cycles, and then randomized to receive maintenance treatment with either bevacizumab/pemetrexed or bevacizumab alone until disease progression. Maintenance treatment with bevacizumab/pemetrexed demonstrated a 50% reduction in risk of progression vs bevacizumab alone (median PFS: 10.2 vs 6.6 months). Maintenance treatment with bevacizumab/pemetrexed did not confer a significant increase in overall survival vs bevacizumab alone on follow up analysis. In the EU, bevacizumab, in addition to platinum-based chemotherapy, is indicated for first-line treatment of adults with unresectable advanced, metastatic or recurrent non-small cell lung cancer other than predominantly squamous cell histology. Bevacizumab, in combination with erlotinib, is indicated for first-line treatment of adults with unresectable advanced, metastatic or recurrent non-squamous non-small cell lung cancer with Epidermal Growth Factor Receptor (EGFR) activating mutations. Breast cancer In December 2010, the US Food and Drug Administration (FDA) notified its intention to remove the breast cancer indication from bevacizumab, saying that it had not been shown to be safe and effective in breast cancer patients. The combined data from four different clinical trials showed that bevacizumab neither prolonged overall survival nor slowed disease progression sufficiently to outweigh the risk it presents to patients. This only prevented Genentech from marketing bevacizumab for breast cancer. Doctors are free to prescribe bevacizumab off label, although insurance companies are less likely to approve off-label treatments. In June 2011, an FDA panel unanimously rejected an appeal by Roche. A panel of cancer experts ruled for a second time that Avastin should no longer be used in breast cancer patients, clearing the way for the US government to remove its endorsement from the drug. The June 2011 meeting of the FDA's oncologic drug advisory committee was the last step in an appeal by the drug's maker. The committee concluded that breast cancer clinical studies of patients taking Avastin have shown no advantage in survival rates, no improvement in quality of life, and significant side effects. In the EU, bevacizumab in combination with paclitaxel is indicated for first-line treatment of adults with metastatic breast cancer. Bevacizumab in combination with capecitabine is indicated for first-line treatment of adults with metastatic breast cancer in whom treatment with other chemotherapy options including taxanes or anthracyclines is not considered appropriate. Kidney cancer In certain kidney cancers, bevacizumab improves the progression free survival time but not survival time. In 2009, the FDA approved bevacizumab for use in metastatic renal cell cancer (a form of kidney cancer). following earlier reports of activity EU approval was granted in 2007. In the EU, bevacizumab in combination with interferon alfa-2a is indicated for first-line treatment of adults with advanced and/or metastatic renal cell cancer. 
Brain cancers Bevacizumab slows tumor growth but does not affect overall survival in people with glioblastoma. The FDA granted accelerated approval for the treatment of recurrent glioblastoma multiforme in May 2009. A 2018 Cochrane review deemed there to not be good evidence for its use in recurrences either. Macular degeneration In the EU, bevacizumab gamma (Lytenava) is indicated for the treatment of neovascular (wet) age-related macular degeneration (nAMD). Ovarian cancer In 2018, the US Food and Drug Administration (FDA) approved bevacizumab in combination with chemotherapy for stage III or IV of ovarian cancer after initial surgical operation, followed by single-agent bevacizumab. The approval was based on a study of the addition of bevacizumab to carboplatin and paclitaxel. Progression-free survival was increased to 18 months from 13 months. In the EU, bevacizumab, in combination with carboplatin and paclitaxel is indicated for the front-line treatment of adults with advanced (International Federation of Gynecology and Obstetrics (FIGO) stages IIIB, IIIC and IV) epithelial ovarian, fallopian tube, or primary peritoneal cancer. Bevacizumab, in combination with carboplatin and gemcitabine or in combination with carboplatin and paclitaxel, is indicated for treatment of adults with first recurrence of platinum-sensitive epithelial ovarian, fallopian tube or primary peritoneal cancer who have not received prior therapy with bevacizumab or other VEGF inhibitors or VEGF receptor-targeted agents. In May 2020, the FDA expanded the indication of olaparib to include its combination with bevacizumab for first-line maintenance treatment of adults with advanced epithelial ovarian, fallopian tube, or primary peritoneal cancer who are in complete or partial response to first-line platinum-based chemotherapy and whose cancer is associated with homologous recombination deficiency positive status defined by either a deleterious or suspected deleterious BRCA mutation, and/or genomic instability. Cervical cancer In the EU, bevacizumab, in combination with paclitaxel and cisplatin or, alternatively, paclitaxel and topotecan in people who cannot receive platinum therapy, is indicated for the treatment of adults with persistent, recurrent, or metastatic carcinoma of the cervix. Adverse effects Bevacizumab inhibits the growth of blood vessels, which is part of the body's normal healing and maintenance. The body grows new blood vessels in wound healing, and as collateral circulation around blocked or atherosclerotic blood vessels. One concern is that bevacizumab will interfere with these normal processes, and worsen conditions like coronary artery disease or peripheral artery disease. The main side effects are hypertension and heightened risk of bleeding. Bowel perforation has been reported. Fatigue and infection are also common. In advanced lung cancer, less than half of patients qualify for treatment. Nasal septum perforation and renal thrombotic microangiopathy have been reported. In December 2010, the FDA warned of the risk of developing perforations in the body, including in the nose, stomach, and intestines. In 2013, Hoffmann-La Roche announced that the drug was associated with 52 cases of necrotizing fasciitis from 1997 to 2012, of which 17 patients died. About 2/3 of cases involved patients with colorectal cancer, or patients with gastrointestinal perforations or fistulas. 
These effects are largely avoided in ophthalmological use since the drug is introduced directly into the eye thus minimizing any effects on the rest of the body. Neurological adverse events include reversible posterior encephalopathy syndrome. Ischemic and hemorrhagic strokes are also possible. Protein in the urine occurs in approximately 20% of people. This does not require permanent discontinuation of the drug. Nonetheless, the presence of nephrotic syndrome necessitates permanent discontinuation of bevacizumab. Mechanism of action Bevacizumab is a recombinant humanized monoclonal antibody that blocks angiogenesis by inhibiting vascular endothelial growth factor A (VEGF-A). VEGF-A is a growth factor protein that stimulates angiogenesis in a variety of diseases, especially in cancer. By binding VEGF-A, bevacizumab should act outside the cell, but in some cases (cervical and breast cancer) it is taken up by cells through constitutive endocytosis. It also is taken up by retinal photoreceptor cells after intravitreal injection. Chemistry Bevacizumab was originally derived from a mouse monoclonal antibody generated from mice immunized with the 165-residue form of recombinant human vascular endothelial growth factor. It was humanized by retaining the binding region and replacing the rest with a human full light chain and a human truncated IgG1 heavy chain, with some other substitutions. The resulting plasmid was transfected into Chinese hamster ovary cells which are grown in industrial fermentation systems. History Bevacizumab is a recombinant humanized monoclonal antibody and in 2004, it became the first clinically used angiogenesis inhibitor. Its development was based on the discovery of human vascular endothelial growth factor (VEGF), a protein that stimulated blood vessel growth, in the laboratory of Genentech scientist Napoleone Ferrara. Ferrara later demonstrated that antibodies against VEGF inhibit tumor growth in mice. His work validated the hypothesis of Judah Folkman, proposed in 1971, that stopping angiogenesis might be useful in controlling cancer growth. Approval It received its first approval in the United States in 2004, for combination use with standard chemotherapy for metastatic colon cancer. It has since been approved for use in certain lung cancers, renal cancers, ovarian cancers, and glioblastoma multiforme of the brain. In 2008, bevacizumab was approved for breast cancer by the FDA, but the approval was revoked on 18 November 2011 because, although there was evidence that it slowed progression of metastatic breast cancer, there was no evidence that it extended life or improved quality of life, and it caused adverse effects including severe high blood pressure and hemorrhaging. In 2008, the FDA gave bevacizumab provisional approval for metastatic breast cancer, subject to further studies. The FDA's advisory panel had recommended against approval. In July 2010, after new studies failed to show a significant benefit, the FDA's advisory panel recommended against the indication for advanced breast cancer. Genentech requested a hearing, which was granted in June 2011. The FDA ruled to withdraw the breast cancer indication in November 2011. FDA approval is required for Genentech to market a drug for that indication. Doctors may sometimes prescribe it for that indication, although insurance companies are less likely to pay for it. The drug remains approved for breast cancer use in other countries, including Australia. 
It has been funded by the English NHS Cancer Drugs Fund, but in January 2015 it was proposed to remove it from the approved list. It remains on the Cancer Drugs Fund as of March 2023. Society and culture Use for macular degeneration In 2015, there was a fierce debate in the UK and other European countries concerning the choice of prescribing bevacizumab or ranibizumab (Lucentis) for wet AMD. In the UK, part of the tension was between on the one hand, both the European Medicines Agency and the Medicines and Healthcare products Regulatory Agency which had approved Lucentis but not Avastin for wet AMD, and their interest in ensuring that doctors do not use medicines off-label when there are other, approved medications for the same indication, and on the other hand, NICE in the UK, which sets treatment guidelines, and has been unable so far to appraise Avastin as a first-line treatment, in order to save money for the National Health Service. Novartis and Roche (which respectively have marketing rights and ownership rights for Avastin) had not conducted clinical trials to get approval for Avastin for wet AMD and had no intention of doing so. Further, both companies lobbied against treatment guidelines that would make Avastin a first-line treatment, and when government-funded studies comparing the two drugs were published, they published papers emphasizing the risks of using Avastin for wet AMD. In March 2024, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Lytenava (bevacizumab gamma), intended for treatment of neovascular (wet) age-related macular degeneration (nAMD). The applicant for this medicinal product is Outlook Therapeutics Limited. Lytenava was approved for medical use in the European Union in May 2024. Breast cancer approval In March 2007, the European Commission approved bevacizumab in combination with paclitaxel for the first-line treatment of metastatic breast cancer. In 2008, the FDA approved bevacizumab for use in breast cancer. A panel of outside advisers voted 5 to 4 against approval, but their recommendations were overruled. The panel expressed concern that data from the clinical trial did not show any increase in quality of life or prolonging of life for patients—two important benchmarks for late-stage cancer treatments. The clinical trial did show that bevacizumab reduced tumor volumes and showed an increase in progression free survival time. It was based on this data that the FDA chose to overrule the recommendation of the panel of advisers. This decision was lauded by patient advocacy groups and some oncologists. Other oncologists felt that granting approval for late-stage cancer therapies that did not prolong or increase the quality of life for patients would give license to pharmaceutical companies to ignore these important benchmarks when developing new late-stage cancer therapies. In 2010, before the FDA announcement, The National Comprehensive Cancer Network (NCCN) updated the NCCN Clinical Practice Guidelines for Oncology (NCCN Guidelines) for Breast Cancer to affirm the recommendation regarding the use of bevacizumab in the treatment of metastatic breast cancer. In 2011, the US Food and Drug Administration removed bevacizumab indication for metastatic breast cancer after concluding that the drug has not been shown to be safe and effective. 
The specific indication that was withdrawn was for the use of bevacizumab in metastatic breast cancer, with paclitaxel for the treatment of people who have not received chemotherapy for metastatic HER2-negative breast cancer. Counterfeit In February 2012, Roche and its US biotech unit Genentech announced that counterfeit Avastin had been distributed in the United States. The investigation is ongoing, but differences in the outer packaging make identification of the bogus drugs simple for medical providers. Roche analyzed three bogus vials of Avastin and found they contained salt, starch, citrate, isopropanol, propanediol, t-butanol, benzoic acid, di-fluorinated benzene ring, acetone and phthalate moiety, but no active ingredients of the cancer drug. According to Roche, the levels of the chemicals were not consistent; whether the chemicals were at harmful concentrations could not therefore be determined. The counterfeit Avastin has been traced back to Egypt, and it entered legitimate supply chains via Europe to the United States. Biosimilars In July 2014, two pharming companies, PlantForm and PharmaPraxis, announced plans to commercialize a biosimilar version of bevacizumab made using a tobacco expression system in collaboration with the Fraunhofer Center for Molecular Biology. In September 2017, the US FDA approved Amgen's biosimilar (generic name bevacizumab-awwb, product name Mvasi) for six cancer indications. In January 2018, Mvasi was approved for use in the European Union. In February 2019, Zirabev was approved for use in the European Union. Zirabev was approved for medical use in the United States in June 2019, and in Australia in November 2019. In June 2020, Mvasi was approved for medical use in Australia. In August 2020, Aybintio was approved for use in the European Union. In September 2020, Equidacent was approved for use in the European Union. In January 2021, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Alymsys, intended for the treatment of carcinoma of the colon or rectum, breast cancer, non-small cell lung cancer, renal cell cancer, epithelial ovarian, fallopian tube or primary peritoneal cancer, and carcinoma of the cervix. Alymsys was approved for medical use in the European Union in March 2021. In January 2021, Onbevzi was approved for medical use in the European Union. In June 2019, and June 2021, Zirabev was approved for medical use in Canada. Oyavas was approved for medical use in the European Union in March 2021. Abevmy was approved for medical use in the European Union in April 2021, and in Australia in September 2021. In September 2021, Bambevi was approved for medical use in Canada. Bevacip and Bevaciptin were approved for medical use in Australia in November 2021. In November 2021, Abevmy and Aybintio were approved for medical use in Canada. In April 2022, bevacizumab-maly (Alymsys) was approved for medical use in the United States. In August 2022, Vegzelma was approved for medical use in the European Union. In September 2022, bevacizumab-adcd (Vegzelma) was approved for medical use in the United States. In June 2023, Enzene Biosciences launched its bevacizumab biosimilar in India. Bevacizumab-tnjn (Avzivi) was approved for medical use in the United States in December 2023. 
In May 2024, the CHMP adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Avzivi, intended for the treatment of carcinoma of the colon or rectum, breast cancer, non-small cell lung cancer, renal cell cancer, epithelial ovarian, fallopian tube or primary peritoneal cancer and carcinoma of the cervix. The applicant for this medicinal product is FGK Representative Service GmbH. Avzivi was approved for medical use in the European Union in July 2024. Research A study released in April 2009, found that bevacizumab is not effective at preventing recurrences of non-metastatic colon cancer following surgery. Bevacizumab has been tested in ovarian cancer where it has shown improvement in progression-free survival but not in overall survival. and glioblastoma multiforme where it failed to improve overall survival. Bevacizumab has been investigated as a possible treatment of pancreatic cancer, as an addition to chemotherapy, but studies have shown no improvement in survival. It may also cause higher rates of high blood pressure, bleeding in the stomach and intestine, and intestinal perforations. The drug has also undergone trials as an addition to established chemotherapy protocols and surgery in the treatment of pediatric osteosarcoma, and other sarcomas, such as leiomyosarcoma. Bevacizumab has been studied as a treatment for cancers that grow from the nerve connecting the ear and the brain. References Further reading External links Angiogenesis inhibitors Drugs developed by Genentech Drugs developed by Hoffmann-La Roche Monoclonal antibodies for tumors Ophthalmology drugs Orphan drugs Specialty drugs World Health Organization essential medicines Wikipedia medicine articles ready to translate
Bevacizumab
[ "Biology" ]
5,014
[ "Angiogenesis", "Specialty drugs", "Angiogenesis inhibitors" ]
1,527,574
https://en.wikipedia.org/wiki/Rotamer
In chemistry, rotamers are chemical species that differ from one another primarily due to rotations about one or more single bonds. Various arrangements of atoms in a molecule that differ by rotation about single bonds can also be referred to as different conformations. Conformers/rotamers differ little in their energies, so they are almost never separable in a practical sense. Rotations about single bonds are subject to small energy barriers. When the time scale for interconversion is long enough for isolation of individual rotamers (usually arbitrarily defined as a half-life of interconversion of 1000 seconds or longer), the species are termed atropisomers (see: atropisomerism). The ring-flip of substituted cyclohexanes constitutes a common form of conformers. The study of the energetics of bond rotation is referred to as conformational analysis. In some cases, conformational analysis can be used to predict and explain product selectivity, mechanisms, and rates of reactions. Conformational analysis also plays an important role in rational, structure-based drug design. Types Rotating their carbon–carbon bonds, the molecules ethane and propane have three local energy minima. They are structurally and energetically equivalent, and are called the staggered conformers. For each molecule, the three substituents emanating from each carbon–carbon bond are staggered, with each H–C–C–H dihedral angle (and H–C–C–CH3 dihedral angle in the case of propane) equal to 60° (or approximately equal to 60° in the case of propane). The three eclipsed conformations, in which the dihedral angles are zero, are transition states (energy maxima) connecting two equivalent energy minima, the staggered conformers. The butane molecule is the simplest molecule for which single bond rotations result in two types of nonequivalent structures, known as the anti- and gauche-conformers (see figure). For example, butane has three conformers relating to its two methyl (CH3) groups: two gauche conformers, which have the methyls ±60° apart and are enantiomeric, and an anti conformer, where the four carbon centres are coplanar and the substituents are 180° apart (refer to free energy diagram of butane). The energy difference between gauche and anti is 0.9 kcal/mol associated with the strain energy of the gauche conformer. The anti conformer is, therefore, the most stable (≈ 0 kcal/mol). The three eclipsed conformations with dihedral angles of 0°, 120°, and 240° are transition states between conformers. Note that the two eclipsed conformations have different energies: at 0° the two methyl groups are eclipsed, resulting in higher energy (≈ 5 kcal/mol) than at 120°, where the methyl groups are eclipsed with hydrogens (≈ 3.5 kcal/mol). While simple molecules can be described by these types of conformations, more complex molecules require the use of the Klyne–Prelog system to describe the different conformers. More specific examples of conformations are detailed elsewhere: Ring conformation Cyclohexane conformations, including with chair and boat conformations among others. Cycloalkane conformations, including medium rings and macrocycles Carbohydrate conformation, which includes cyclohexane conformations as well as other details. Allylic strain – energetics related to rotation about the single bond between an sp2 carbon and an sp3 carbon. Atropisomerism – due to restricted rotation about a bond. Folding, including the secondary and tertiary structure of biopolymers (nucleic acids and proteins). 
Akamptisomerism – due to restricted inversion of a bond angle. Equilibrium of conformers Conformers generally exist in a dynamic equilibrium Three isotherms are given in the diagram depicting the equilibrium distribution of two conformers at different temperatures. At a free energy difference of 0 kcal/mol, this gives an equilibrium constant of 1, meaning that two conformers exist in a 1:1 ratio. The two have equal free energy; neither is more stable, so neither predominates compared to the other. A negative difference in free energy means that a conformer interconverts to a thermodynamically more stable conformation, thus the equilibrium constant will always be greater than 1. For example, the ΔG° for the transformation of butane from the gauche conformer to the anti conformer is −0.47 kcal/mol at 298 K. This gives an equilibrium constant is about 2.2 in favor of the anti conformer, or a 31:69 mixture of gauche:anti conformers at equilibrium. Conversely, a positive difference in free energy means the conformer already is the more stable one, so the interconversion is an unfavorable equilibrium (K < 1). Even for highly unfavorable changes (large positive ΔG°), the equilibrium constant between two conformers can be increased by increasing the temperature, so that the amount of the less stable conformer present at equilibrium increases (although it always remains the minor conformer). Population distribution of conformers The fractional population distribution of different conformers follows a Boltzmann distribution: The left hand side is the proportion of conformer i in an equilibrating mixture of M conformers in thermodynamic equilibrium. On the right side, Ek (k = 1, 2, ..., M) is the energy of conformer k, R is the molar ideal gas constant (approximately equal to 8.314 J/(mol·K) or 1.987 cal/(mol·K)), and T is the absolute temperature. The denominator of the right side is the partition function. Factors contributing to the free energy of conformers The effects of electrostatic and steric interactions of the substituents as well as orbital interactions such as hyperconjugation are responsible for the relative stability of conformers and their transition states. The contributions of these factors vary depending on the nature of the substituents and may either contribute positively or negatively to the energy barrier. Computational studies of small molecules such as ethane suggest that electrostatic effects make the greatest contribution to the energy barrier; however, the barrier is traditionally attributed primarily to steric interactions. In the case of cyclic systems, the steric effect and contribution to the free energy can be approximated by A values, which measure the energy difference when a substituent on cyclohexane in the axial as compared to the equatorial position. In large (>14 atom) rings, there are many accessible low-energy conformations which correspond to the strain-free diamond lattice. Observation of conformers The short timescale of interconversion precludes the separation of conformer in most cases. Atropisomers are conformational isomers which can be separated due to restricted rotation. The equilibrium between conformational isomers can be observed using a variety of spectroscopic techniques. Protein folding also generates conformers which can be observed. The Karplus equation relates the dihedral angle of vicinal protons to their J-coupling constants as measured by NMR. 
The equation aids in the elucidation of protein folding as well as the conformations of other rigid aliphatic molecules. Protein side chains exhibit rotamers, whose distribution is determined by their steric interaction with different conformations of the backbone. This is evident from statistical analysis of the conformations of protein side chains in the Backbone-dependent rotamer library. Spectroscopy Conformational dynamics can be monitored by variable temperature NMR spectroscopy. The technique applies to barriers of 8–14 kcal/mol, and species exhibiting such dynamics are often called "fluxional". For example, in cyclohexane derivatives, the two chair conformers interconvert rapidly at room temperature. The ring-flip proceeds at a rates of approximately 105 ring-flips/sec, with an overall energy barrier of 10 kcal/mol (42 kJ/mol). This barrier precludes separation at ambient temperatures. However, at low temperatures below the coalescence point one can directly monitor the equilibrium by NMR spectroscopy and by dynamic, temperature dependent NMR spectroscopy the barrier interconversion. Besides NMR spectroscopy, IR spectroscopy is used to measure conformer ratios. For the axial and equatorial conformer of bromocyclohexane, νCBr differs by almost 50 cm−1. Conformation-dependent reactions Reaction rates are highly dependent on the conformation of the reactants. In many cases the dominant product arises from the reaction of the less prevalent conformer, by virtue of the Curtin-Hammett principle. This is typical for situations where the conformational equilibration is much faster than reaction to form the product. The dependence of a reaction on the stereochemical orientation is therefore usually only visible in Configurational analysis, in which a particular conformation is locked by substituents. Prediction of rates of many reactions involving the transition between sp2 and sp3 states, such as ketone reduction, alcohol oxidation or nucleophilic substitution is possible if all conformers and their relative stability ruled by their strain is taken into account. One example where the rotamers become significant is elimination reactions, which involve the simultaneous removal of a proton and a leaving group from vicinal or antiperiplanar positions under the influence of a base. The mechanism requires that the departing atoms or groups follow antiparallel trajectories. For open chain substrates this geometric prerequisite is met by at least one of the three staggered conformers. For some cyclic substrates such as cyclohexane, however, an antiparallel arrangement may not be attainable depending on the substituents which might set a conformational lock. Adjacent substituents on a cyclohexane ring can achieve antiperiplanarity only when they occupy trans diaxial positions (that is, both are in axial position, one going up and one going down). One consequence of this analysis is that trans-4-tert-butylcyclohexyl chloride cannot easily eliminate but instead undergoes substitution (see diagram below) because the most stable conformation has the bulky t-Bu group in the equatorial position, therefore the chloride group is not antiperiplanar with any vicinal hydrogen (it is gauche to all four). The thermodynamically unfavored conformation has the t-Bu group in the axial position, which is higher in energy by more than 5 kcal/mol (see A value). As a result, the t-Bu group "locks" the ring in the conformation where it is in the equatorial position and substitution reaction is observed. 
On the other hand, cis-4-tert-butylcyclohexyl chloride undergoes elimination because antiperiplanarity of Cl and H can be achieved when the t-Bu group is in the favorable equatorial position. The repulsion between an axial t-butyl group and hydrogen atoms in the 1,3-diaxial position is so strong that the cyclohexane ring will revert to a twisted boat conformation. The strain in cyclic structures is usually characterized by deviations from ideal bond angles (Baeyer strain), ideal torsional angles (Pitzer strain) or transannular (Prelog) interactions. Alkane stereochemistry Alkane conformers arise from rotation around sp3 hybridised carbon–carbon sigma bonds. The smallest alkane with such a chemical bond, ethane, exists as an infinite number of conformations with respect to rotation around the C–C bond. Two of these are recognised as energy minimum (staggered conformation) and energy maximum (eclipsed conformation) forms. The existence of specific conformations is due to hindered rotation around sigma bonds, although a role for hyperconjugation is proposed by a competing theory. The importance of energy minima and energy maxima is seen by extension of these concepts to more complex molecules for which stable conformations may be predicted as minimum-energy forms. The determination of stable conformations has also played a large role in the establishment of the concept of asymmetric induction and the ability to predict the stereochemistry of reactions controlled by steric effects. In the example of staggered ethane in Newman projection, a hydrogen atom on one carbon atom has a 60° torsional angle or torsion angle with respect to the nearest hydrogen atom on the other carbon so that steric hindrance is minimised. The staggered conformation is more stable by 12.5 kJ/mol than the eclipsed conformation, which is the energy maximum for ethane. In the eclipsed conformation the torsional angle is minimised. In butane, the two staggered conformations are no longer equivalent and represent two distinct conformers:the anti-conformation (left-most, below) and the gauche conformation (right-most, below). 150px Both conformations are free of torsional strain, but, in the gauche conformation, the two methyl groups are in closer proximity than the sum of their van der Waals radii. The interaction between the two methyl groups is repulsive (van der Waals strain), and an energy barrier results. A measure of the potential energy stored in butane conformers with greater steric hindrance than the 'anti'-conformer ground state is given by these values: Gauche, conformer – 3.8 kJ/mol Eclipsed H and CH3 – 16 kJ/mol Eclipsed CH3 and CH3 – 19 kJ/mol. The eclipsed methyl groups exert a greater steric strain because of their greater electron density compared to lone hydrogen atoms. The textbook explanation for the existence of the energy maximum for an eclipsed conformation in ethane is steric hindrance, but, with a C-C bond length of 154 pm and a Van der Waals radius for hydrogen of 120 pm, the hydrogen atoms in ethane are never in each other's way. The question of whether steric hindrance is responsible for the eclipsed energy maximum is a topic of debate to this day. One alternative to the steric hindrance explanation is based on hyperconjugation as analyzed within the Natural Bond Orbital framework. In the staggered conformation, one C-H sigma bonding orbital donates electron density to the antibonding orbital of the other C-H bond. 
The energetic stabilization of this effect is maximized when the two orbitals have maximal overlap, occurring in the staggered conformation. There is no overlap in the eclipsed conformation, leading to a disfavored energy maximum. On the other hand, an analysis within quantitative molecular orbital theory shows that 2-orbital-4-electron (steric) repulsions are dominant over hyperconjugation. A valence bond theory study also emphasizes the importance of steric effects. Nomenclature Naming alkanes per standards listed in the IUPAC Gold Book is done according to the Klyne–Prelog system for specifying angles (called either torsional or dihedral angles) between substituents around a single bond: a torsion angle between 0° and ±90° is called syn (s) a torsion angle between ±90° and 180° is called anti (a) a torsion angle between 30° and 150° or between −30° and −150° is called clinal (c) a torsion angle between 0° and ±30° or ±150° and 180° is called periplanar (p) a torsion angle between 0° and ±30° is called synperiplanar (sp), also called syn- or cis- conformation a torsion angle between 30° to 90° and −30° to −90° is called synclinal (sc), also called gauche or skew a torsion angle between 90° and 150° or −90° and −150° is called anticlinal (ac) a torsion angle between ±150° and 180° is called antiperiplanar (ap), also called anti- or trans- conformation Torsional strain or "Pitzer strain" refers to resistance to twisting about a bond. Special cases In n-pentane, the terminal methyl groups experience additional pentane interference. Replacing hydrogen by fluorine in polytetrafluoroethylene changes the stereochemistry from the zigzag geometry to that of a helix due to electrostatic repulsion of the fluorine atoms in the 1,3 positions. Evidence for the helix structure in the crystalline state is derived from X-ray crystallography and from NMR spectroscopy and circular dichroism in solution. See also Anomeric effect Backbone-dependent rotamer library Cycloalkane Cyclohexane Cyclohexane conformations. Gauche effect Klyne–Prelog system Macrocyclic stereocontrol Molecular configuration Molecular modelling Steric effects Strain (chemistry) References Physical organic chemistry Stereochemistry
Rotamer
[ "Physics", "Chemistry" ]
3,619
[ "Stereochemistry", "Space", "nan", "Physical organic chemistry", "Spacetime" ]
1,527,578
https://en.wikipedia.org/wiki/Critical%20variable
Critical variables are defined, for example in thermodynamics, in terms of the values of variables at the critical point. On a PV diagram, the critical point is an inflection point. Thus: For the van der Waals equation, the above yields: References Thermodynamic properties Conformal field theory
Critical variable
[ "Physics", "Chemistry", "Mathematics" ]
68
[ "Thermodynamic properties", "Quantity", "Thermodynamics", "Physical quantities" ]
1,527,625
https://en.wikipedia.org/wiki/Khabarovsk%20war%20crimes%20trials
The Khabarovsk war crimes trials were the Soviet hearings of twelve Japanese Kwantung Army officers and medical staff charged with the manufacture and use of biological weapons, and human experimentation, during World War II. The war crimes trials were held between 25 and 31 December 1949 in the Soviet industrial city of Khabarovsk (Хабаровск), the largest in the Russian Far East. Both Soviet Union and United States allegedly gathered data from the Unit after the fall of Japan. While twelve Unit 731 researchers arrested by Soviet forces were tried at the December 1949 Khabarovsk war crimes trials, they were sentenced lightly to the Siberian labor camp from two to 25 years, seemingly in exchange for the information they held. Those captured by the US military were secretly given immunity, while being covered up with stipends to the perpetrators. The US was purported to had co-opted the researchers' bioweapons information and experience for use in their own warfare program (resembling Operation Paperclip), so did the Soviet Union in building their bioweapons facility in Sverdlovsk using documentation captured from the Unit in Manchuria. In 1956, those still serving their sentences were released and repatriated to Japan. History During the trials, the accused, including Major General Kiyoshi Kawashima, testified that as early as 1941, some 40 members of Unit 731 air-dropped plague-contaminated fleas on Changde, China, causing epidemic plague outbreaks. Judges found all twelve accused war criminals guilty, sentencing them to terms ranging from two to twenty-five years in labour camps. In 1956, those still serving their sentences were released and repatriated to Japan. In 1950, the Soviet Union published official trial materials in English, titled Materials on the Trial of Former Servicemen of the Japanese Army Charged with Manufacturing and Employing Bacteriological Weapons. These included documents from the preliminary investigation (the indictment, some of the documentary evidence, and some interrogation records), testimony from both the accused and witnesses, final pleas of the accused, some expert findings, and speeches from the state prosecutor and defense counsel, verbatim. Published by state-run Foreign Languages Publishing House, the Soviet publication has long been out of print. But in November 2015, Google Books determined it was now in the public domain and published a facsimile of it online, also offering it for sale as an ebook. Trial controversies Speaking to the overall judicial integrity of the proceedings, bioethics expert Jing-Bao Nie said the following: Despite its strong ideological tone and many obvious shortcomings such as the lack of international participation, the trial established beyond reasonable doubt that the Japanese army had prepared and deployed bacteriological weapons and that Japanese researchers had conducted cruel experiments on living human beings. However, the trial, together with the evidence presented to the court and its major findings—which have proved remarkably accurate—was dismissed as communist propaganda and totally ignored in the West until the 1980s. Historian Sheldon Harris described the trial in his history of Unit 731: Evidence introduced during the hearings was based on eighteen volumes of interrogations and documentary material gathered in investigations over the previous four years. Some of the volumes included more than four hundred pages of depositions.... 
Unlike the Moscow Show Trials of the 1930s, the Japanese confessions made in the Khabarovsk trial were based on fact and not the fantasy of their handlers. Yet the very wealth of trial documentation that tended to confirm that the Khabarovsk proceedings were no mere show trial also led Harris to question the relatively light punishment meted out there. All of defendants (aside from one who died in prison and another who committed suicide) had been freed by 1956, a mere seven years after the trial took place. Chief trial translator Georgy Permyakov alleged that Soviet leader Joseph Stalin may have initially feared that Japan would execute Soviet prisoners of war if the Khabarovsk defendants were hanged. But Harris also claimed that "the Soviets made a deal with the Japanese similar to the one completed by the Americans: Information [in exchange] for... extremely light sentences":The Soviets and their successors never released the interrogation reports of the Japanese, some 18 volumes. This leads me to believe that the Japanese did arrange a deal, did yield some information, and the Soviets settled for the best goodies they could get.Harris also noted other controversies unleashed by the trial, which linked Emperor Hirohito to the Japanese biological warfare program, as well as allegations that Japanese biological warfare experiments had also been conducted on Allied prisoners of war. One of the experts called upon by Soviet prosecutors during the trial, N. N. Zhukov-Verezhnikov, later served on the panel of scientists, led by Joseph Needham, investigating Chinese and North Korean allegations of US biological warfare in the Korean War. Accused and their sentences 25 years imprisonment: General Otozō Yamada (born 1881), former Commander-in-Chief of the Kwantung Army (released from prison in 1956) Lieutenant General Kajitsuka Ryuji (born 1888), former Chief of Medical Administration (released from prison in 1956) Lieutenant General Takahashi Takaatsu (born 1888), former Chief of Veterinary Service (died in prison in 1951) Major General Kawashima Kiyoshi (born 1893), former Chief of Unit 731 (released from prison in 1956) 20 years imprisonment: Major General Sato Shunji (born 1896), former Chief of Medical Service, 5th Army (released from prison in 1956) Major Karasawa Tomio (born 1911), former chief of a section of Unit 731 (killed himself in prison in 1956) 18 years imprisonment: Lieutenant Colonel Nishi Toshihide (born 1904), former chief of a division of Unit 731 (released from prison in 1956) 15 years imprisonment: Senior Sergeant Mitomo Kazuo (born 1924), former member of Unit 100 (released from prison in 1956) 12 years imprisonment: Major Onoue Masao (born 1910), former chief of a branch of Unit 731 (released from prison in 1956) 10 years imprisonment: Lieutenant Hirazakura Zensaku (born 1916), former researcher of Unit 100 (released from prison in 1956) 3 years imprisonment: Kurushima Yuji (born 1923), former lab orderly of Branch 162 of Unit 731 (released in 1952) 2 years imprisonment: Corporal Kikuchi Norimitsu (born 1922), former medical orderly of Branch 643 of Unit 731 (released in 1951) See also Japanese war crimes International Military Tribunal for the Far East Military history of the Soviet Union Notes References Boris G. 
Yudin, Research on humans at the Khabarovsk War Crimes Trial, in: Japan's Wartime Medical Atrocities: Comparative Inquiries in Science, History, and Ethics (Asia's Transformations), Jing Bao Nie, Nanyan Guo, Mark Selden, Arthur Kleinman (Editors); Routledge, 2010, Materials on the Trial of Former Servicemen of the Japanese Army Charged with Manufacturing and Employing Bacteriological Weapons, Foreign Languages Publishing House, 1950, 535 pp. (No ISBN) Biological warfare Military history of the Soviet Union War crimes trials in the Soviet Union Japan–Soviet Union relations 1949 in the Soviet Union World War II war crimes trials Trials in Russia Japanese biological weapons program Khabarovsk
Khabarovsk war crimes trials
[ "Biology" ]
1,508
[ "Biological warfare" ]
1,527,655
https://en.wikipedia.org/wiki/Implicit%20surface
In mathematics, an implicit surface is a surface in Euclidean space defined by an equation An implicit surface is the set of zeros of a function of three variables. Implicit means that the equation is not solved for or or . The graph of a function is usually described by an equation and is called an explicit representation. The third essential description of a surface is the parametric one: , where the -, - and -coordinates of surface points are represented by three functions depending on common parameters . Generally the change of representations is simple only when the explicit representation is given: (implicit), (parametric). Examples: The plane The sphere The torus A surface of genus 2: (see diagram). The surface of revolution (see diagram wineglass). For a plane, a sphere, and a torus there exist simple parametric representations. This is not true for the fourth example. The implicit function theorem describes conditions under which an equation can be solved (at least implicitly) for , or . But in general the solution may not be made explicit. This theorem is the key to the computation of essential geometric features of a surface: tangent planes, surface normals, curvatures (see below). But they have an essential drawback: their visualization is difficult. If is polynomial in , and , the surface is called algebraic. Example 5 is non-algebraic. Despite difficulty of visualization, implicit surfaces provide relatively simple techniques to generate theoretically (e.g. Steiner surface) and practically (see below) interesting surfaces. Formulas Throughout the following considerations the implicit surface is represented by an equation where function meets the necessary conditions of differentiability. The partial derivatives of are . Tangent plane and normal vector A surface point is called regular if and only if the gradient of at is not the zero vector , meaning . If the surface point is not regular, it is called singular. The equation of the tangent plane at a regular point is and a normal vector is Normal curvature In order to keep the formula simple the arguments are omitted: is the normal curvature of the surface at a regular point for the unit tangent direction . is the Hessian matrix of (matrix of the second derivatives). The proof of this formula relies (as in the case of an implicit curve) on the implicit function theorem and the formula for the normal curvature of a parametric surface. Applications of implicit surfaces As in the case of implicit curves it is an easy task to generate implicit surfaces with desired shapes by applying algebraic operations (addition, multiplication) on simple primitives. Equipotential surface of point charges The electrical potential of a point charge at point generates at point the potential (omitting physical constants) The equipotential surface for the potential value is the implicit surface which is a sphere with center at point . The potential of point charges is represented by For the picture the four charges equal 1 and are located at the points . The displayed surface is the equipotential surface (implicit surface) . Constant distance product surface A Cassini oval can be defined as the point set for which the product of the distances to two given points is constant (in contrast, for an ellipse the sum is constant). In a similar way implicit surfaces can be defined by a constant distance product to several fixed points. 
In the diagram metamorphoses the upper left surface is generated by this rule: With the constant distance product surface is displayed. Metamorphoses of implicit surfaces A further simple method to generate new implicit surfaces is called metamorphosis or homotopy of implicit surfaces: For two implicit surfaces (in the diagram: a constant distance product surface and a torus) one defines new surfaces using the design parameter : In the diagram the design parameter is successively . Smooth approximations of several implicit surfaces -surfaces can be used to approximate any given smooth and bounded object in whose surface is defined by a single polynomial as a product of subsidiary polynomials. In other words, we can design any smooth object with a single algebraic surface. Let us denote the defining polynomials as . Then, the approximating object is defined by the polynomial where stands for the blending parameter that controls the approximating error. Analogously to the smooth approximation with implicit curves, the equation represents for suitable parameters smooth approximations of three intersecting tori with equations (In the diagram the parameters are ) Visualization of implicit surfaces There are various algorithms for rendering implicit surfaces, including the marching cubes algorithm. Essentially there are two ideas for visualizing an implicit surface: One generates a net of polygons which is visualized (see surface triangulation) and the second relies on ray tracing which determines intersection points of rays with the surface. The intersection points can be approximated by sphere tracing, using a signed distance function to find the distance to the surface. External Links Implicit surface software Free implicit surface software Open-source or free software supporting algebraic implicit surface modelling: K3DSurf — A program to visualize and manipulate Mathematical models in 3-6 dimensions. K3DSurf supports Parametric equations and Isosurfaces CGAL (Computational Geometry Algorithms Library), written in C++, has strong support for implicit surface modeling (Boolean operations on implicit surfaces, Surface meshing for visualization, Implicit curve arrangements). PyVista, a Python wrapper around VTK for easier handling of implicit surfaces. Simplified API for rendering and manipulating implicit surfaces. It can integrate with numpy. Some Blender add-ons (metaballs and volumetric modeling for implicit surfaces, and scripting support for custom implicit functions). SculptsFEM (for solving PDEs on implicit surfaces, Implicit curve generation) ImpliSolid (open-source), supports sharp edges. Houdini (supports implicit surface modeling using SDFs and procedural techniques). Houdini Apprentice License is free. POV-Ray (Persistence of Vision Raytracer) has built-in support for defining complex implicit surfaces. Vision-based surface reconstruction use implicit functions for statistical modelling of surfaces: SDFStudio, Geo-Neus , PointSDF, etc. Various other software exist for polygonization of implicit surfaces, in context of Marching cubes, and in general Image-based meshing and Mesh generation, but they are not necessary based on an algebraic close-form field. Industrial or commercial software using implicit surface software Altair Inspire Studio RM, a Geologic modelling software by Datamine Software. Maple has a library for plotting implicit surfaces. 
See also Implicit curve References Further reading Gomes, A., Voiculescu, I., Jorge, J., Wyvill, B., Galbraith, C.: Implicit Curves and Surfaces: Mathematics, Data Structures and Algorithms, 2009, Springer-Verlag London, Thorpe: Elementary Topics in Differential Geometry, Springer-Verlag, New York, 1979, Surfaces Computer-aided design Geometry processing Implicit surface modeling
Implicit surface
[ "Physics", "Engineering" ]
1,402
[ "Computer-aided design", "Mesh generation", "Design engineering", "Tessellation", "Symmetry" ]
1,527,720
https://en.wikipedia.org/wiki/Plowshare
In agriculture, a plowshare (US) or ploughshare (UK; ) is a component of a plow (or plough). It is the cutting or leading edge of a moldboard which closely follows the coulter (one or more ground-breaking spikes) when plowing. The plowshare itself is often a hardened blade dressed into an integral moldboard (by the blacksmith) so making a unified combination of plowshare and moldboard, the whole being responsible for entering the cleft in the earth (made by the coulter's first cutting-through) and turning the earth over. In well-tilled terrain the plowshare may do duty without a preceding coulter. In modern plows both coulter and plowshare are detachable for easy replacement when worn or broken. History Triangular-shaped stone plowshares are found at the sites of Chinese Majiabang culture dated to 3500 BC around Lake Tai. Plowshares have also been discovered at the nearby Liangzhu and Maqiao sites roughly dated to the same period. The British archaeologist David R. Harris says this indicates that more intensive cultivation in fixed, probably bunded, fields had developed by this time. According to Mu Yongkang and Song Zhaolin's classification and methods of use, the triangular plow assumed many kinds and were the departure from the Hemudu and Luojiajiao spade, with the Songze small plow in mid-process. The post-Liangzhu plows used draft animals. In heraldry Plowshares are often used in heraldry. In ancient cultures The ancient phrase from the biblical Book of Isaiah, "to turn swords to ploughshares," is still in common use today. These plowshares represent peaceful use of wartime capabilities. On the other hand, the Book of Joel uses the phrase in reverse, "Beat your plowshares into swords". However, in classical antiquity during the Battle of Marathon, many Persians were slain by a deadly plowshare-wielding ally who appeared suddenly on the side of the ancient Athenians. After their victory and his disappearance, an oracle told the Athenians to worship the hero under the name Echetlaeus: the hero with the "echetlon", or plowshare. References External links The Rotherham Plow - the first commercially successful iron plow Animal equipment Agricultural machinery Chinese inventions Heraldic charges Ploughs
Plowshare
[ "Biology" ]
508
[ "Animal equipment", "Animals" ]
1,528,061
https://en.wikipedia.org/wiki/Omitted-variable%20bias
In statistics, omitted-variable bias (OVB) occurs when a statistical model leaves out one or more relevant variables. The bias results in the model attributing the effect of the missing variables to those that were included. More specifically, OVB is the bias that appears in the estimates of parameters in a regression analysis, when the assumed specification is incorrect in that it omits an independent variable that is a determinant of the dependent variable and correlated with one or more of the included independent variables. In linear regression Intuition Suppose the true cause-and-effect relationship is given by: with parameters a, b, c, dependent variable y, independent variables x and z, and error term u. We wish to know the effect of x itself upon y (that is, we wish to obtain an estimate of b). Two conditions must hold true for omitted-variable bias to exist in linear regression: the omitted variable must be a determinant of the dependent variable (i.e., its true regression coefficient must not be zero); and the omitted variable must be correlated with an independent variable specified in the regression (i.e., cov(z,x) must not equal zero). Suppose we omit z from the regression, and suppose the relation between x and z is given by with parameters d, f and error term e. Substituting the second equation into the first gives If a regression of y is conducted upon x only, this last equation is what is estimated, and the regression coefficient on x is actually an estimate of (b + cf ), giving not simply an estimate of the desired direct effect of x upon y (which is b), but rather of its sum with the indirect effect (the effect f of x on z times the effect c of z on y). Thus by omitting the variable z from the regression, we have estimated the total derivative of y with respect to x rather than its partial derivative with respect to x. These differ if both c and f are non-zero. The direction and extent of the bias are both contained in cf, since the effect sought is b but the regression estimates b+cf. The extent of the bias is the absolute value of cf, and the direction of bias is upward (toward a more positive or less negative value) if cf > 0 (if the direction of correlation between y and z is the same as that between x and z), and it is downward otherwise. Detailed analysis As an example, consider a linear model of the form where xi is a 1 × p row vector of values of p independent variables observed at time i or for the i th study participant; β is a p × 1 column vector of unobservable parameters (the response coefficients of the dependent variable to each of the p independent variables in xi) to be estimated; zi is a scalar and is the value of another independent variable that is observed at time i or for the i th study participant; δ is a scalar and is an unobservable parameter (the response coefficient of the dependent variable to zi) to be estimated; ui is the unobservable error term occurring at time i or for the i th study participant; it is an unobserved realization of a random variable having expected value 0 (conditionally on xi and zi); yi is the observation of the dependent variable at time i or for the i th study participant. 
We collect the observations of all variables subscripted i = 1, ..., n, and stack them one below another, to obtain the matrix X and the vectors Y, Z, and U: and If the independent variable z is omitted from the regression, then the estimated values of the response parameters of the other independent variables will be given by the usual least squares calculation, (where the "prime" notation means the transpose of a matrix and the -1 superscript is matrix inversion). Substituting for Y based on the assumed linear model, On taking expectations, the contribution of the final term is zero; this follows from the assumption that U is uncorrelated with the regressors X. On simplifying the remaining terms: The second term after the equal sign is the omitted-variable bias in this case, which is non-zero if the omitted variable z is correlated with any of the included variables in the matrix X (that is, if X′Z does not equal a vector of zeroes). Note that the bias is equal to the weighted portion of zi which is "explained" by xi. Effect in ordinary least squares The Gauss–Markov theorem states that regression models which fulfill the classical linear regression model assumptions provide the most efficient, linear and unbiased estimators. In ordinary least squares, the relevant assumption of the classical linear regression model is that the error term is uncorrelated with the regressors. The presence of omitted-variable bias violates this particular assumption. The violation causes the OLS estimator to be biased and inconsistent. The direction of the bias depends on the estimators as well as the covariance between the regressors and the omitted variables. A positive covariance of the omitted variable with both a regressor and the dependent variable will lead the OLS estimate of the included regressor's coefficient to be greater than the true value of that coefficient. This effect can be seen by taking the expectation of the parameter, as shown in the previous section. See also Confounding variable References Regression analysis Experimental bias
Omitted-variable bias
[ "Mathematics" ]
1,133
[ "Experimental bias", "Statistical concepts" ]
1,528,195
https://en.wikipedia.org/wiki/Gamma%20Cygni
Gamma Cygni (γ Cygni, abbreviated Gamma Cyg, γ Cyg), officially named Sadr, is a star in the northern constellation of Cygnus, forming the intersection of an asterism of five stars called the Northern Cross. Based upon parallax measurements obtained during the Hipparcos mission, it is approximately 1,800 light-years (560 parsecs) from the Sun. It forms the primary or 'A' component of a multiple star system designated WDS J20222+4015 (the secondary or 'BCD' component is WDS J20222+4015BCD, a close triplet of stars 41" away from γ Cygni).

Nomenclature

γ Cygni (Latinised to Gamma Cygni) is the star's Bayer designation. WDS J20222+4015A is its designation in the Washington Double Star Catalog. It bore the traditional name Sadr (also rendered Sadir or Sador), derived from the Arabic صدر ṣadr "chest", the same word which gave rise to the star Schedar (Alpha Cassiopeiae). In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Sadr for this star (WDS J20222+4015A) on 21 August 2016 and it is now so included in the List of IAU-approved Star Names. In the catalogue of stars in the Calendarium of Al Achsasi al Mouakket, this star was designated Sadr al Dedjadjet (صدر الدجاجة / ṣadr al-dajāja), which was translated into Latin as Pectus Gallinæ, meaning the hen's chest. In Chinese, 天津 (Tiān Jīn), meaning Celestial Ford, refers to an asterism consisting of Gamma Cygni, Delta Cygni, 30 Cygni, Alpha Cygni, Nu Cygni, Tau Cygni, Upsilon Cygni, Zeta Cygni and Epsilon Cygni. Consequently, the Chinese name for Gamma Cygni itself is 天津一 (Tiān Jīn yī, the First Star of Celestial Ford).

Properties

With an apparent visual magnitude of 2.23, Gamma Cygni is among the brighter stars visible in the night sky. The stellar classification of this star is F8 Iab, indicating that it has reached the supergiant stage of its stellar evolution. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. Compared to the Sun this is an enormous star, with 14.5 times the Sun's mass and about 180 times the Sun's radius. It is emitting over 33,000 times as much energy as the Sun, at an effective temperature of 5,790 K in its outer envelope. This temperature is what gives the star the characteristic yellow-white hue of an F-type star. Massive stars such as this consume their nuclear fuel much more rapidly than the Sun, so the estimated age of this star is only about 12 million years. The spectrum of this star shows some unusual dynamic features, including variations in radial velocity occurring on a time scale of 100 days or more. Indeed, on the Hertzsprung–Russell diagram, Gamma Cygni lies close to the instability strip and its spectrum is markedly like that of a Cepheid variable. This star is surrounded by a diffuse nebula called IC 1318, or the Gamma Cygni region.

Notes

References

External links Sadr by Jim Kaler. http://www.atlasoftheuniverse.com/nebulae/ic1318.html Astronomy Picture of the Day: Supergiant Star Gamma Cygni, NASA, July 9, 2013

Cygni, Gamma Cygnus (constellation) F-type supergiants Sadr Cygni, 37 7796 194093 100453 Durchmusterung objects Suspected variables
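The quoted radius, effective temperature, and luminosity can be cross-checked with the Stefan–Boltzmann law, L = 4πR²σT⁴. The short Python sketch below is illustrative only; the solar reference values are approximate assumptions and are not taken from the article. It shows that about 180 solar radii at 5,790 K does correspond to roughly 33,000 solar luminosities.

```python
import math

# Approximate solar reference values (assumed, not from the article).
R_SUN = 6.957e8         # m
T_SUN = 5772.0          # K
SIGMA = 5.670374419e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

def luminosity(radius_m, temp_k):
    """Blackbody luminosity in watts: L = 4*pi*R^2 * sigma * T^4."""
    return 4.0 * math.pi * radius_m**2 * SIGMA * temp_k**4

L_sun = luminosity(R_SUN, T_SUN)
L_sadr = luminosity(180 * R_SUN, 5790.0)   # radius and temperature quoted in the text

print(f"L / L_sun ~ {L_sadr / L_sun:,.0f}")   # roughly 3.3e4, consistent with the quoted 33,000
```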
Gamma Cygni
[ "Astronomy" ]
837
[ "Cygnus (constellation)", "Constellations" ]
1,528,346
https://en.wikipedia.org/wiki/Totally%20bounded%20space
In topology and related branches of mathematics, total-boundedness is a generalization of compactness for circumstances in which a set is not necessarily closed. A totally bounded set can be covered by finitely many subsets of every fixed “size” (where the meaning of “size” depends on the structure of the ambient space). The term precompact (or pre-compact) is sometimes used with the same meaning, but precompact is also used to mean relatively compact. These definitions coincide for subsets of a complete metric space, but not in general.

In metric spaces

A metric space M is totally bounded if and only if for every real number ε > 0, there exists a finite collection of open balls of radius ε whose centers lie in M and whose union contains M. Equivalently, the metric space M is totally bounded if and only if for every ε > 0, there exists a finite cover such that the radius of each element of the cover is at most ε. This is equivalent to the existence of a finite ε-net. A metric space is totally bounded if and only if every sequence admits a Cauchy subsequence; in complete metric spaces, a set is compact if and only if it is closed and totally bounded.

Each totally bounded space is bounded (as the union of finitely many bounded sets is bounded). The reverse is true for subsets of Euclidean space (with the subspace topology), but not in general. For example, an infinite set equipped with the discrete metric is bounded but not totally bounded: every ball of radius 1/2 or less is a singleton, and no finite union of singletons can cover an infinite set.

Uniform (topological) spaces

A metric appears in the definition of total boundedness only to ensure that each element of the finite cover is of comparable size, and the notion can be weakened to that of a uniform structure. A subset S of a uniform space is totally bounded if and only if, for any entourage E, there exists a finite cover of S by subsets of S each of whose Cartesian squares is a subset of E. (In other words, the entourage E replaces the "size" ε, and a subset is of size E if its Cartesian square is a subset of E.)

The definition can be extended still further, to any category of spaces with a notion of compactness and Cauchy completion: a space is totally bounded if and only if its (Cauchy) completion is compact.

Examples and elementary properties

Every compact set is totally bounded, whenever the concept is defined.
Every totally bounded set is bounded.
A subset of the real line, or more generally of finite-dimensional Euclidean space, is totally bounded if and only if it is bounded.
The unit ball in a Hilbert space, or more generally in a Banach space, is totally bounded (in the norm topology) if and only if the space has finite dimension.
Equicontinuous bounded functions on a compact set are precompact in the uniform topology; this is the Arzelà–Ascoli theorem.
A metric space is separable if and only if it is homeomorphic to a totally bounded metric space.
The closure of a totally bounded subset is again totally bounded.

Comparison with compact sets

In metric spaces, a set is compact if and only if it is complete and totally bounded; without the axiom of choice only the forward direction holds. Precompact sets share a number of properties with compact sets.
Like compact sets, a finite union of totally bounded sets is totally bounded.
Unlike compact sets, every subset of a totally bounded set is again totally bounded.
The continuous image of a compact set is compact. The uniformly continuous image of a precompact set is precompact.
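As a concrete illustration of the metric-space definition, the Python sketch below (purely illustrative; the grid construction, the choice of ε, and the random sampling are our own, not from the article) builds a finite ε-net for the unit square in the plane and checks that every point of a random sample lies within ε of some net point. This is exactly the finite cover by ε-balls that total boundedness requires, and it exists because the square is a bounded subset of finite-dimensional Euclidean space.

```python
import math
import random

def grid_net(eps):
    """Return a finite eps-net for the unit square [0,1]^2: grid points spaced
    closely enough that every point of the square lies within eps of one of them."""
    n = math.ceil(1.0 / eps)                  # grid spacing 1/n <= eps
    return [(i / n, j / n) for i in range(n + 1) for j in range(n + 1)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

eps = 0.1
net = grid_net(eps)
sample = [(random.random(), random.random()) for _ in range(10_000)]

# Total boundedness in action: finitely many balls of radius eps cover the square.
assert all(min(dist(p, c) for c in net) <= eps for p in sample)
print(f"{len(net)} balls of radius {eps} cover the unit square")
```

The same construction fails for an infinite set with the discrete metric, since no finite collection of radius-1/2 balls (each a singleton) can cover it.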
In topological groups Although the notion of total boundedness is closely tied to metric spaces, the greater algebraic structure of topological groups allows one to trade away some separation properties. For example, in metric spaces, a set is compact if and only if complete and totally bounded. Under the definition below, the same holds for any topological vector space (not necessarily Hausdorff nor complete). The general logical form of the definition is: a subset of a space is totally bounded if and only if, given any size there exists a finite cover of such that each element of has size at most is then totally bounded if and only if it is totally bounded when considered as a subset of itself. We adopt the convention that, for any neighborhood of the identity, a subset is called () if and only if A subset of a topological group is () if it satisfies any of the following equivalent conditions: : For any neighborhood of the identity there exist finitely many such that For any neighborhood of there exists a finite subset such that (where the right hand side is the Minkowski sum ). For any neighborhood of there exist finitely many subsets of such that and each is -small. For any given filter subbase of the identity element's neighborhood filter (which consists of all neighborhoods of in ) and for every there exists a cover of by finitely many -small subsets of is : for every neighborhood of the identity and every countably infinite subset of there exist distinct such that (If is finite then this condition is satisfied vacuously). Any of the following three sets satisfies (any of the above definitions of) being (left) totally bounded: The closure of in This set being in the list means that the following characterization holds: is (left) totally bounded if and only if is (left) totally bounded (according to any of the defining conditions mentioned above). The same characterization holds for the other sets listed below. The image of under the canonical quotient which is defined by (where is the identity element). The sum The term usually appears in the context of Hausdorff topological vector spaces. In that case, the following conditions are also all equivalent to being (left) totally bounded: In the completion of the closure of is compact. Every ultrafilter on is a Cauchy filter. The definition of is analogous: simply swap the order of the products. Condition 4 implies any subset of is totally bounded (in fact, compact; see above). If is not Hausdorff then, for example, is a compact complete set that is not closed. Topological vector spaces Any topological vector space is an abelian topological group under addition, so the above conditions apply. Historically, statement 6(a) was the first reformulation of total boundedness for topological vector spaces; it dates to a 1935 paper of John von Neumann. This definition has the appealing property that, in a locally convex space endowed with the weak topology, the precompact sets are exactly the bounded sets. For separable Banach spaces, there is a nice characterization of the precompact sets (in the norm topology) in terms of weakly convergent sequences of functionals: if is a separable Banach space, then is precompact if and only if every weakly convergent sequence of functionals converges uniformly on Interaction with convexity The balanced hull of a totally bounded subset of a topological vector space is again totally bounded. The Minkowski sum of two compact (totally bounded) sets is compact (resp. totally bounded). 
In a locally convex (Hausdorff) space, the convex hull and the disked hull of a totally bounded set is totally bounded if and only if is complete. See also Compact space Locally compact space Measure of non-compactness Orthocompact space Paracompact space Relatively compact subspace References Bibliography Uniform spaces Metric geometry Topology Functional analysis Compactness (mathematics)
Totally bounded space
[ "Physics", "Mathematics" ]
1,543
[ "Functions and mappings", "Functional analysis", "Uniform spaces", "Mathematical objects", "Space (mathematics)", "Topological spaces", "Topology", "Mathematical relations", "Space", "Geometry", "Spacetime" ]
1,528,353
https://en.wikipedia.org/wiki/Flora%20of%20the%20Marquesas%20Islands
The Marquesas Islands have a diverse flora, with a high rate of endemism. They are in the floristic Polynesian subkingdom of the Oceanian realm. Food Plants Most of the food plants are not endemic, and include: Avocados Bananas Breadfruit (mei) from which "mā" is made. Cashews Coconuts Jambul Grapefruits Guavas Lemons Mangos Pandanus Papayas Pineapples Plantains Soursops Sugar apples Taro (tao) from which "poke", similar to poi, is made. Vanilla Other plants Frangipani Hibiscus Mape Nono Tiara Pelagodoxa henryana, the only species in the genus Pelagodoxa, is a palm tree that is endemic to the Marquesas Islands. See also Marquesan Nature Reserves References External links Botany.si.edu: Flora of the Marquesas Islands website F Marquesas Islands Marquesas
Flora of the Marquesas Islands
[ "Biology" ]
205
[ "Lists of biota", "Lists of plants", "Plants" ]
1,528,467
https://en.wikipedia.org/wiki/Copper%20coulometer
The copper coulometer is one application of the copper–copper(II) sulfate electrode. Such a coulometer consists of two identical copper electrodes immersed in a slightly acidic, pH-buffered solution of copper(II) sulfate. Passing current through the cell leads to anodic dissolution of the metal at the anode and the simultaneous deposition of copper on the cathode. These reactions have 100% efficiency over a wide range of current density.

Calculation

The amount of electric charge (quantity of electricity) passed through the cell can easily be determined by measuring the change in mass of either electrode and calculating:

Q = (m · z · F) / M,

where:
Q is the quantity of electricity (coulombs)
m is the mass transported (grams)
z is the charge of the copper ions, equal to +2
F is the Faraday constant (96485.3383 coulombs per mole)
M is the atomic weight of copper, equal to 63.546 grams per mole.

Although this apparatus is interesting from a theoretical and historical point of view, present-day electronic measurement of time and electric current yields the number of coulombs passed (as their product) more easily, with greater precision, and in a shorter time than is possible by weighing the electrodes.

See also Mercury coulometer Coulometry

References

Physical chemistry Electroanalytical chemistry devices Coulometer Coulometers
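A worked example of the relation above may help; the measured mass and the current in this Python sketch are invented for illustration and do not come from the article.

```python
FARADAY = 96485.3383   # C per mole of electrons
M_CU = 63.546          # g per mole of copper
Z_CU = 2               # charge of the Cu(2+) ion

def charge_from_mass(delta_mass_g):
    """Charge in coulombs corresponding to a given mass of copper
    deposited on (or dissolved from) an electrode: Q = m*z*F/M."""
    return delta_mass_g * Z_CU * FARADAY / M_CU

# Hypothetical measurement: the cathode gained 0.100 g of copper.
q = charge_from_mass(0.100)
print(f"charge passed: {q:.1f} C")                 # about 303.7 C
print(f"time at 60 mA: {q / 0.060 / 60:.1f} min")  # about 84 minutes at a steady 60 mA
```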
Copper coulometer
[ "Physics", "Chemistry" ]
276
[ "Applied and interdisciplinary physics", "Electroanalytical chemistry", "Electroanalytical chemistry devices", "nan", "Physical chemistry", "Physical chemistry stubs" ]
1,528,572
https://en.wikipedia.org/wiki/Interactor
An interactor is a person who interacts with the members of an audience, or, in evolutionary biology, an entity that natural selection acts upon.

Definition

Interactor is a concept commonly used in the field of evolutionary biology. A widely accepted theory of evolution is that of Charles Darwin. He states, in short, that in a population there is often variation in heritable traits among individuals, and one form of a trait may be more beneficial than the other form(s). Because of this difference, individuals carrying the beneficial form have a higher chance of producing offspring that are well adjusted to the environment. The process by which the environment selects on the traits of organisms is called natural selection. On this view, natural selection acts on traits of individuals, which evolutionary biologists call the interactor. Stated differently, an interactor is a part of an organism that natural selection acts upon.

Replicators and vehicles

Replicators

Other terms that are often mentioned in the same context as interactors are replicators and vehicles. Replicators are things that pass on their entire structure through successive replications, such as genes. A replicator is not the same as an interactor: interactors are things that interact with their environment and that natural selection can act upon. Through this interaction with the environment, interactors cause differential replication. However, some things (for example genes) can be both replicators and interactors.

Vehicles

Vehicle is often used as a synonym of interactor, but the word suggests that vehicles "drive" natural selection, as if they could steer it in a particular direction. Because the term "vehicle" carries that connotation, some authors (such as Hull) prefer the word "interactor" to "vehicle" for the same concept. An example of an interactor is the shell colour of snails (see below).

Research on common garden snails as an illustration of natural selection and interactors

A study on common garden snails showed how natural selection on an interactor works. This species is highly suitable for evolutionary research because its phenotype is easy to score and the phenotypic variation is caused by a very straightforward genotype. Phenotypic variation among common garden snails is found in their shell colour and banding, and both colouring and banding are regulated by a single gene. The shells vary in colour, namely brown, pink and yellow, with brown dominant over pink and yellow. Banding variation can be described as unbanded or banded, with banded individuals differing from one another in the number of bands. One conclusion drawn from this research is that in grasslands, yellow individuals had a higher survival rate and were more abundant. This means that natural selection acted on shell colour, which makes shell colour the interactor in this example. The brown individuals, by contrast, were more abundant and had a higher survival rate in woodlands than the yellow individuals. Moreover, a specific form of natural selection called thermal selection showed that shell colour mediates the interaction with the environment: yellow shells, which reflect heat better, were more abundant in warmer places.
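The snail example can be made concrete with a toy simulation. The Python sketch below is purely illustrative: the survival probabilities, population size, and simplified haploid inheritance are invented assumptions, not figures from the cited study. It shows how differential survival acting on shell colour, the interactor, shifts the frequency of the underlying colour variants over generations.

```python
import random

# Hypothetical per-generation survival probabilities in a grassland habitat
# (invented values, for illustration only).
SURVIVAL = {"yellow": 0.60, "brown": 0.45}
POP_SIZE = 1000
GENERATIONS = 20

# Start with equal numbers of yellow and brown snails (haploid toy model).
population = ["yellow"] * (POP_SIZE // 2) + ["brown"] * (POP_SIZE // 2)

for gen in range(GENERATIONS):
    # Selection: each snail survives with a colour-dependent probability.
    survivors = [colour for colour in population if random.random() < SURVIVAL[colour]]
    # Reproduction: survivors repopulate to constant size; offspring inherit the parent's colour.
    population = [random.choice(survivors) for _ in range(POP_SIZE)]

freq_yellow = population.count("yellow") / POP_SIZE
print(f"frequency of yellow shells after {GENERATIONS} generations: {freq_yellow:.2f}")
```

In this toy model the yellow variant rises toward fixation, mirroring the observed abundance of yellow shells in grasslands; swapping the survival values reproduces the woodland pattern favouring brown.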
References Science and Selection, David Hull, 2001 (http://assets.cambridge.org/97805216/43399/sample/9780521643399ws.pdf) Replication and Reproduction, David Hull, 2001 (https://plato.stanford.edu/entries/replication/) Color polymorphism in a land snail Cepaea nemoralis (Pulmonata: Helicidae) as viewed by potential avian predators, Adrian Surmacki & Agata Ożarowska-Nowicka & Zuzanna M. Rosin, 2013 (https://link.springer.com/content/pdf/10.1007/s00114-013-1049-y.pdf) On the Origin of Species, Charles Darwin, 1859 External links The Role of Behavior in Evolution Interactors is also an IT Company Evolutionary biology
Interactor
[ "Biology" ]
833
[ "Evolutionary biology" ]
1,528,751
https://en.wikipedia.org/wiki/High%20Energy%20Astronomy%20Observatory%201
HEAO-1 was an X-ray telescope launched in 1977. HEAO-1 surveyed the sky in the X-ray portion of the electromagnetic spectrum (0.2 keV – 10 MeV), providing nearly constant monitoring of X-ray sources near the ecliptic poles and more detailed studies of a number of objects by observations lasting 3–6 hours. It was the first of NASA's three High Energy Astronomy Observatories, HEAO 1, launched August 12, 1977 aboard an Atlas rocket with a Centaur upper stage, operated until 9 January 1979. During that time, it scanned the X-ray sky almost three times HEAO included four X-ray and gamma-ray astronomy instruments, known as A1, A2, A3, and A4, respectively (before launch, HEAO 1 was known as HEAO A). The orbital inclination was about 22.7 degrees. HEAO 1 re-entered the Earth's atmosphere on 15 March 1979. A1: Large-Area Sky Survey instrument The A1, or Large-Area Sky Survey (LASS) instrument, covered the 0.25–25 keV energy range, using seven large proportional counters. It was designed, operated, and managed at the Naval Research Laboratory (NRL) under the direction of Principal Investigator Dr. Herbert D. Friedman, and the prime contractor was TRW. The HEAO A-1 X-Ray Source Catalog included 842 discrete X-ray sources. A2: Cosmic X-ray Experiment The A2, or Cosmic X-ray Experiment (CXE), from the Goddard Space Flight Center, covered the 2–60 keV energy range with high spatial and spectral resolution. The Principal Investigators were Dr. Elihu A. Boldt and Dr. Gordon P. Garmire. A3: Modulation Collimator instrument The A3, or Modulation Collimator (MC) instrument, provided high-precision positions of X-ray sources, accurate enough to permit follow-up observations to identify optical and radio counterparts. It was provided by the Center for Astrophysics (Smithsonian Astrophysical Observatory and the Harvard College Observatory, SAO/HCO). Principal Investigators were Dr. Daniel A. Schwartz of SAO and Dr. Hale V. Bradt of MIT. A4: Hard X-Ray / Low-Energy Gamma-ray experiment The A4, or Hard X-ray / Low Energy Gamma-ray Experiment, used sodium iodide (NaI) scintillation counters to cover the energy range from about 20 keV to 10 MeV. It consisted of seven clustered modules, of three distinct designs, in a roughly hexagonal array. Each detector was actively shielded by surrounding CsI scintillators, in active-anti-coincidence, so that an extraneous particle or gamma-ray event from the side or rear would be vetoed electronically, and rejected. (It was discovered in early balloon flight by experimenters in the 1960s that passive collimators or shields, made of materials such as lead, actually increase the undesired background rate, due to the intense showers of secondary particles and photons produced by the extremely high energy (GeV) particles characteristic of the space radiation environment.) A plastic anti-coincidence scintillation shield, essentially transparent to gamma-ray photons, protected the detectors from high-energy charged particles entering from the front. For all seven modules, the unwanted background effects of particles or photons entering from the rear was suppressed by a "phoswich" design, in which the active NaI detecting element was optically coupled to a layer of CsI on its rear surface, which was in turn optically coupled to a single photomultiplier tube for each of the seven units. 
Because the NaI has a much faster response time (~0.25 μs) than the CsI (~1 μs), electronic pulse shape discriminators could distinguish good events in the NaI from mixed events accompanied by a simultaneous interaction in the CsI. The largest, or High Energy Detector (HED), occupied the central position and covered the upper range from ~120 keV to 10 MeV, with a field-of-view (FOV) collimated to 37° FWHM. Its NaI detector was in diameter by thick. The extreme penetrating power of photons in this energy range made it necessary to operate the HED in electronic anti-coincidence with the surrounding CsI and also the six other detectors of the hexagon. Two Low Energy Detectors (LEDs) were located in positions 180° apart on opposite side of the hexagon. They had thin ~3 mm thick NaI detectors, also in diameter, covering the energy range from ~10–200 keV. Their FOV was defined to fan-shaped beams of 1.7° x 20° FWHM by passive, parallel slat-plate collimators. The slats of the two LEDs were inclined to ±30° to the nominal HEAO scanning direction, crossing each other at 60°. Thus, working together, they covered a wide field of view, but could localize celestial sources with a precision determined by their 1.7° narrow fields. The four Medium Energy Detectors (MEDs), with a nominal energy range of 80 keV — 3 MeV, had dia by thick NaI detector crystals, and occupied the four remaining positions in the hexagon of modules. They had circular FOVs with a 17° FWHM. The primary data from A4 consisted of "event-by-event" telemetry, listing each good (i.e., un-vetoed) event in the NaI detectors. The experiment had the flexibility to tag each event with its pulse height (proportional to its energy), and a one or two byte time tag, allowing precision timing of objects such as gamma-ray bursts and pulsars. Results of the experiment included a catalog of the positions and intensities of hard X-ray (10–200 keV) sources, a strong observational basis for extremely strong magnetic fields (of order 1013 G) on the rotating neutron stars associated with Her X-1 and 4U 0115+634, a definitive diffuse component spectrum between 13 and 200 keV, discovery of the power-law shape of the Cygnus X-1 power density spectrum, and discovery of slow intensity cycles in the X-Ray sources SMC X-1 and LMC X-4, resulting in approximately 15 Ph.D theses and ~100 scientific publications. The A4 instrument was provided and managed by the University of California at San Diego, under the direction of Prof. Laurence E. Peterson, in collaboration with the X-ray group at MIT, where the initial A4 data reduction was performed under the direction of Prof. Walter H. G. Lewin. See also Einstein Observatory (HEAO 2) HEAO Program High Energy Astronomy Observatory 3 Timeline of artificial satellites and space probes References External links 1st High Energy Astrophysics Observatory (HEAO 1. GSFC. NASA ) on the internet The Star Splitters by Wallace H. Tucker, 1984 Space telescopes X-ray telescopes Gamma-ray telescopes 1977 in spaceflight August 1977 events in the United States Spacecraft launched in 1977
High Energy Astronomy Observatory 1
[ "Astronomy" ]
1,492
[ "Space telescopes" ]
1,528,764
https://en.wikipedia.org/wiki/Advanced%20Satellite%20for%20Cosmology%20and%20Astrophysics
The Advanced Satellite for Cosmology and Astrophysics (ASCA, formerly named ASTRO-D) was the fourth cosmic X-ray astronomy mission by JAXA, and the second for which the United States provided part of the scientific payload. The satellite was successfully launched on 20 February 1993. The first eight months of the ASCA mission were devoted to performance verification. Having established the quality of performance of all ASCA's instruments, the spacecraft provided science observations for the remainder of the mission. In this phase the observing program was open to astronomers based at Japanese and U.S. institutions, as well as those located in member states of the European Space Agency. X-ray astronomy mission ASCA was the first X-ray astronomy mission to combine imaging capability with a broad passband, good spectral resolution, and a large effective area. The mission also was the first satellite to use CCDs for X-ray astronomy. With these properties, the primary scientific purpose of ASCA was the X-ray spectroscopy of astrophysical plasmas, especially the analysis of discrete features such as emission lines and absorption edges. ASCA carried four large-area X-ray telescopes. At the focus of two of the telescopes is a gas imaging spectrometer (GIS), while a solid-state imaging spectrometer (SIS) is at the focus of the other two. The GIS is a gas-imaging scintillation proportional counter and is based on the GSPC that flew on the second Japanese X-ray astronomy mission, Tenma. The two identical charge-coupled device (CCD) cameras were provided for the two SISs by a hardware team from MIT, Osaka University and ISAS. Significant contributions The ASCA was launched by ISAS (Institute of Space and Astronautical Sciences), Japan. The sensitivity of ASCA's instruments allowed for the first detailed, broad-band spectra of distant quasars to be derived. In addition, ASCA's suite of instruments provided the best opportunity at the time for identifying the sources whose combined emission makes up the cosmic X-ray background. It performed over 3000 observations, and produced over 1000 publications in refereed journals so far. The ASCA archive contains significant amounts of data for future analyses. Furthermore, the mission is termed highly successful when reflecting on what scientists in many counties have accomplished using ASCA data up to this time. The U.S. contributed significantly to ASCA's scientific payloads. In return, 40% of ASCA observing time was made available to U.S. scientists. (ISAS also opened up 10% of the time to ESA scientists as a good-will gesture.) In addition, all ASCA data enter the public domain after a suitable period (1 year for U.S. data, 18 months for Japanese data) and become available to scientists worldwide. The design of ASCA was optimized for X-ray spectroscopy; thus it complemented ROSAT (optimized for X-ray imaging) and RXTE (optimized for timing studies). Finally, ASCA results cover almost the entire range of objects, from nearby stars to the most distant objects in the universe. Mission end The mission operated successfully for over 7 years until attitude control was lost on 14 July 2000 during a geomagnetic storm, after which no scientific observations were performed. ASCA reentered the atmosphere on 2 March 2001 after more than 8 years in orbit. The primary responsibility of the U.S. ASCA GOF was to enable U.S. astronomers to make the best use of the ASCA mission, in close collaboration with the Japanese ASCA team. 
References External links ASCA website by JAXA ASCA website by NASA 1993 in spaceflight Satellites of Japan Space program of Japan Space telescopes X-ray telescopes
Advanced Satellite for Cosmology and Astrophysics
[ "Astronomy" ]
776
[ "Space telescopes" ]
1,528,776
https://en.wikipedia.org/wiki/Parish%20register
A parish register, alternatively known as a parochial register, is a handwritten volume, normally kept in the parish church of an ecclesiastical parish in which certain details of religious ceremonies marking major events such as baptisms (together with the dates and often names of the parents), marriages (with the names of both partners), and burials (within the parish) are recorded. Along with these events, church goods, the parish's business, and notes on various happenings in the parish may also be recorded. These records exist in England because they were required by law and for the purpose of preventing bigamy and consanguineous marriage. The information recorded in registers was also considered significant for secular governments’ own recordkeeping, resulting in the churches supplying the state with copies of all parish register entries. A good register permits the family structure of the community to be reconstituted as far back as the sixteenth century. Thus, these records can be distilled for the definitive study of the history of several nations’ populations. They also provide insight into the lives and interrelationships of parishioners. Historically, a parish's churchwarden was responsible for certifying the parish register and submitting it alongside the churchwarden's accounts for annual examination by the bishop. History England and Wales Parish registers were formally introduced in England and Wales on 5 September 1538 shortly after the formal split with Rome in 1534, when Thomas Cromwell, chief minister to Henry VIII, acting as his Vicar General, issued an injunction requiring that in each parish of the Church of England registers of all baptisms, marriages, and burials be kept. Before this, a few Roman Catholic religious houses and parish priests had kept informal notes on the baptisms, marriages, and burials of the prominent local families and obituaries of holy persons. This injunction was addressed to the rector or vicar of every church parish in England. By contrast, surviving Roman Catholic communities were discouraged from keeping similar records, as they needed their names to remain hidden in a country now hostile to the Church of Rome. Cromwell's order had, however, nothing to do with religious doctrine or the papacy, but rather indicated the desire of the central government to have better knowledge of the population of the country. Church historian Diarmaid MacCulloch has suggested that the measure may have been introduced as a means to identify infiltration into England by members of the outlawed Anabaptist sects: their adherents did not baptise infants, due to their doctrine that only active believers could be baptised, thereby excluding "dumb" or "unmindful" children. The book was to be kept in a "sure coffer" with two locks and keys, one held by the parish priest and one by the churchwardens. A fine of 3 shillings, 4 pence was to be levied for failure to comply. Many parishes ignored this order as it was commonly thought that it presaged a further tax. Finally, in 1597, both Queen Elizabeth I and the Church of England's Convocation reaffirmed the injunction, adding that the registers were of permagnus usus and must be kept in books of parchment leaves. They mandated the keeping of duplicate registers or bishop's transcripts, ordering that annually copies of every parish's records of baptism, marriage, and burial be sent into the diocesan bishop's registrar. 
These records survive sporadically from this date and may make up for some gaps in the regular parish register due to war, carelessness, and loss due to other causes (fire, etc.). At the same time, all previous parish records (most found in a less durable form) had to be copied into the new sturdier books. The parish clerk was paid to copy the old records into a new parchment book in order to keep the record up to date. During the English Civil War (1643–1647), and in the following periods of the Commonwealth and Protectorate, when the Church of England was suppressed and bishops abolished and replaced by Calvinist ministers under the Directory, records were poorly kept and many went missing after being destroyed (bored by beetles, chewed by rats or rendered illegible by damp) or hidden by the displaced Anglican clergy. Instead, for a brief period a civil official, confusingly also called the parish register, was elected locally and approved by two local justices of the peace. Often a semi-literal layman of Puritan hue, he was charged with keeping civil records of birth, marriage, and death in each parish for the balance of the Interregnum, and, in some cases, he even wrote his records into the old parish register. In the course of this passage from Anglican safekeeping to civil hands, however, many records were lost. The old format was re-adopted by the restored Church of England when the monarchy was restored in May 1660. Centuries later, this parsimony and neglect was belatedly remedied by depositing the surviving registers in county record offices where they were better safeguarded, conserved, and made accessible mostly on microfilm as that technology became available. On the other hand, the accurate parish registers of New France were rarely damaged by external events such as war, revolution, and fire. Thus, 300,000 entries were available for the time period 1621 to 1760. In England, the Parochial Registers Act 1812, an "Act for the better regulating and preserving Parish and other Registers of Birth, Baptisms, Marriages, and Burials, in England" was passed It stated that "amending the Manner and Form of keeping and of preserving Registers of Baptisms, Marriages, and Burials of His Majesty's Subjects in the several Parishes and Places in England, will greatly facilitate the Proof of Pedigrees of Persons claiming to be entitled to Real or Personal Estates, and otherwise of great public Benefit and Advantage". Separate, printed registers were to be supplied by the King's Printer, and used for baptisms, marriages and burials. These are more or less unchanged to this day. United States In the United States, at least the parishes in the Roman Catholic dioceses maintained a similar practice of recording baptisms, marriages, burials, and often also confirmations and first communions. From the earliest pioneer churches ministered by itinerant priests, the records were written in ecclesiastical Latin. But after the Second Vatican Council and its reforms that included translating the Mass into local languages, most register entries gradually came to be written in English. In Protestant communions with stronger similarities to Roman Catholicism, parish registers are also important sources that document baptisms, marriages, and funerals. In Protestant and Evangelical churches, individual ministers often kept records of faith-related events among the congregation, but under much less guidance from any central governing body. 
Italy The parish register became mandatory in Italy for baptisms and marriages in 1563 after the Council of Trent and in 1614 for burials when its rules of compilation were as well normalised by the Church. Prior to 1563, the oldest registers of baptisms are preserved since 1379 in Gemona del Friuli, 1381 in Siena, 1428 in Florence or 1459 in Bologna. France In France, parish registers have been in use since the Middle Ages. The oldest surviving registers date back to 1303 and are posted in Givry. Other existing registers prior to orders of civil legislation in 1539 reside in Roz-Landrieux 1451, Paramé 1453, Lanloup 1467, Trans-la-Forêt 1479 and Signes 1500. The parish register became mandatory in France for baptisms with the Ordinance of Villers-Cotterêts signed into law by Francis I of France on August 10, 1539, then for marriages and burials with the Ordinance of Blois in 1579. They had to be sent every year to the bailiwick or sénéchaussée in the south of France. In April 1667, the Ordinance of Saint-Germain-en-Laye ordered a copy to be kept by the parish clergy as before the ordinance. By decree of the National Assembly of September 20, 1792, the keeping of the civil registers was given to mayors and the old parish registers went then to the public records of the archives communales, and the old bailiwick registers to the created in 1796. But from 1795, the parish again kept some private registers, like the registres de catholicité for the Catholic Church which are also made in duplicate, one for the parish and one for the diocesan archives. The legalization of these documents, functioning both as a means of census as well as civil documentation, has in some cases been used to restore official acts of civil status such as after the downfall of the Paris commune and the reconstruction of Le Palais de Justice after the fires of 1871. New France The first Europeans to settle in North America continued the practice of establishing parish registers. Shortly after the establishment of Habitation, the arrival of Jesuit priests in 1615 facilitated the earliest beginnings of the parish register in New France.These earliest accounts entered into the register were recorded primarily within the Jesuits personal logs, and accounted exclusively for the number of deaths in the early settlement period of Quebec. However, over time the growing French population propagated the development and detailing of the parish register. Entries detailing births, marriages, baptisms and deaths were recorded and kept in the church of Notre Dame-de-la-Recouvrance. Unfortunately, in 1640 the church burned along with all parish records from 1620 to 1640. After the church burned, the parish priest commissioned at Notre Dame-de-la-Recouverance reconstructed the destroyed register entries from memory by recording the rather limited number of births, baptisms and marriages to take place within the colony during this 20-year period. Deaths however, were not recorded in the reconstructed registers and as a consequence there is no recorded account of the death of Samuel de Champlain who died in 1635. Although the creating and maintaining parish registers in Europe had been in practice since the Middle Ages, legislation regarding the widespread and legal use of parish registers in France was officially passed into law with the signing of the Ordnance of Villers-Cotterets in 1539. 
However, it was not until 1666 where after perceiving the immense advantages to be gained through civil registration that King Louis XIV revitalized the parish registration system in France and her colonies. This edict, set forth by the king, made it compulsory for individuals to register within their parish communities. Moreover, in 1667 the king revealed the Ordonnance de Saint Germain en Laye, a piece of legislation which required parish priests to produce a duplicate of all registers so that all copies may be stored in emerging records offices. In New France, these duplicates were stored in Quebec and Montreal’s Courts of Justice official records office and listed New France’s Roman Catholic population exclusively. It was only until after cession and the British conquest of New France in 1760 that parish registers began to more openly include Protestants within the registry, and as civil subjects of Quebec. Sweden Parish registers have been kept for each parish by the Church of Sweden for some Swedish counties (Västmanland and Dalarna) since the 1620s, and generally for the whole Sweden since the 1670s. The church was ordered to keep even more detailed church books in king Charles XI's Church Law from 1686. The primary motivation was to keep track of the number of soldiers that were taken out from each parish, and that were financed by each parish, through the allotment system that was introduced in 1682. Another motivation was to keep track of religious knowledge, literacy and health among the population. The church books constitute of birth, death, marriage and moving in/out records, all of which were linked to the parish catechetical book, which was replaced in 1895 by the parish book. In country side parishes, each village or industrial town had its own section in the catechetical book, each farmyard its own page, and each person its own row. For city parishes, the book was divided into districts. The majority of church records are still preserved in the state archives, and available electronically over the Internet. Contents and examples from England The contents have changed over time, not being standardised in England until the Acts of 1753 and 1812. The following are among what you can expect to find in later registers, though in the earlier ones it is quite common to find only names recorded. Early entries will be in some form of Latin, often abbreviated. Baptisms Date of baptism Date of birth (but this is often not recorded) Child's forename Child's surname (though normally omitted as father's name is assumed) Father's name — blank if illegitimate Mother's name (but this is often not recorded) Father's occupation or rank Place of birth (for large parishes) Examples: Baptised 21 August 1632 William son of Francis Knaggs Baptism 5 January 1783 Richard son of Thomas Knaggs, farmer, and his wife Mary, born 6 December 1782 Marriages Date of marriage For both man and woman Forename and Surname Whether bachelor or spinster, widower or widow Age Whether of-this-parish or of some other place Occupation (normally man only) Father's forename, surname and occupation or rank Signature Whether by Banns or by Licence Witness(es) signature(s) Note: from 1837, the information contained in parish records is the same as that on a civil marriage certificate. 
Examples: Married 2 May 1635 Francis Ducke and Anne Knaggs Married 16 May 1643 Leonard Huntroids yeoman of Brafferton and Lucy Knaggs widow of this parish [1643 Marriages] Married 11 August 1836 Richard Knaggs the younger, age 20, bachelor, farmer of Kilham and Elizabeth Wilson, age 25, spinster of this parish, by licence and with the consent of those whose consent is required Burials Date of burial Name of deceased Age of deceased Occupation, rank or relationship of deceased Normal place of abode of deceased Examples: Buried 6 January 1620 Richard Knags Buried 4 November 1653 stillborn daughter of Raiph Knaggs of Ugthorpe Buried 25th Dec 1723 Mr George Knaggs, gent of Pollington, aged 74 Buried 19 July 1762 Thomas Knaggs, son of Thomas tailor of Byers Green and Elizabeth, age 13, drowned, double fees Dade and Barrington Registers Dade and Barrington Registers are detailed registers that contain more information than standard contemporary baptism and burial registers. They usually commence in the late eighteenth century, but come to an end in 1812, when they were superseded by the requirements of George Rose's 1812 Act, which required more information to be recorded than in normal registers, but actually required less information to be recorded than in Dade and Barrington Registers. There are examples of a few parishes continuing to keep Dade or Barrington Registers after 1813. In some cases, two registers were kept, for example in the Co Durham parish of Whickham both Barrington and Rose Registers were kept for the period 1813–1819, after which the former were discontinued. William Dade, a Yorkshire clergyman of the 18th century, was ahead of his time, in seeing the value of including as much information on individuals in the parish register as possible. In 1777 Archbishop William Markham decided that Dade's scheme of registration forms should be introduced throughout his diocese. The resulting registers, and some that are related, are now known as "Dade registers". The baptismal registers were to include child's name, seniority (e.g. first son), father's name, profession, place of abode and descent (i.e. names, professions and places of abode of the father's parents), similar information about the mother, and mother's parents, the infant's date of birth and baptism. Registers of this period are a gold-mine for genealogists, but the scheme was so much work for the parish priests that it did not last long. In 1770 Dade wrote in the parish register of St. Helen's, York: "This scheme if properly put in execution will afford much clearer intelligence to the researches of posterity than the imperfect method hitherto generally pursued." His influence spread and the term Dade register has come to describe any parish registers that include more detail than expected for the time. The application of this system was somewhat haphazard and many clergymen, particularly in more populated areas, resented the extra work involved in making these lengthy entries. The thought of duplicating them for the Bishop's Transcripts put many of them off and some refused to follow the new rules. Barrington Registers From about 1783, as Lord Bishop of Salisbury, the Rt Rev. Shute Barrington instigated a similar system somewhat simpler than Dade's, and followed this in Northumberland and Durham from 1798, when he was transferred to the diocese of Durham. Transcriptions and indices Most registers in the world have been deposited in diocesan archives or county record offices. 
Where these have been filmed, copies are available to scan from the Church of Jesus Christ of Latter-day Saints through the Family History Library. Microfiche copies of parish registers, along with transcriptions, are usually available at larger local libraries and county record offices. England Since Victorian times, amateur genealogists have transcribed and indexed parish registers. Some societies have also produced printed transcripts and indexes — notably the Parish Register Society, the Harleian Society and Phillimore & Co. The Society of Genealogists, in London, has a very large selection of such transcripts and indexes. The Family History Library in Salt Lake City also has a vast collection of films of original registers. The Church of Jesus Christ of Latter-day Saints has also produced an index (the IGI), of very many register entries — mostly baptisms and marriages. The IGI is available as an online database and on microform matter at local "Family History Centers". Like all transcripts and indexes, the IGI should be used with caution, as errors can occur in legibility of the original or microfilm of the original, in reading the original handwriting, and in entering the material to the transcription. "Batch entries" are generally more reliable than "individual submissions." See also Civil registry Family register Nonconformist register Parish and Civil Registers in Paris Footnotes Bibliography Delsalle, Paul. 2009. Histoires de familles: les registres paroissiaux et d'état civil, du Moyen Âge à nos jours : démographie et généalogie. Besançon: Presses universitaires de Franche-Comté. Greer, Allan. 1997. The People of New France. Toronto: University of Toronto Press. Isbled, Bruno. “Le Premier Registre de Baptemes de France: Roz-Landrieux (1451)”. Place Public. April, 2011. http://www.placepublique-rennes.com/article/Le-premier-registre-de-baptemes-de-France-Roz-Landrieux-1451- Law, James Thomas. "The ecclesiastical statutes at large, extr. and arranged by J.T. Law." Milza, Pierre. 2009. "L'année terrible". [2], [2]. "L'année Terrible". Paris: Perrin. Pounds, N.J.G., 2000. A History of the English Parish: The Culture of Religion from Augustine to Victoria. Cambridge University. "Parochial Registers". Catholic Encyclopedia. New York: Robert Appleton Company. 1913. Parrot, Paul. “History of Civil Registration in Quebec”. Canadian Public Health Journal 21, no. 11 (1930) 529 –40. R.B. Outhwaite, Clandestine Marriage in England, 1500–1850". 1998. Tijdschrift Voor Rechtsgeschiedenis / Revue D'Histoire Du Droit / The Legal History Review. 66 (1-2): 191–192. Rose's Act http://freepages.genealogy.rootsweb.ancestry.com/~framland/acts/1812Act.htm Sheils, William Joseph. "Dade, William". Oxford Dictionary of National Biography (online ed.). Oxford University Press. Genealogy Catholic liturgy Catholic canonical documents Marriage in the Catholic Church Catholic matrimonial canon law Sacramental law
Parish register
[ "Biology" ]
4,172
[ "Phylogenetics", "Genealogy" ]
1,528,806
https://en.wikipedia.org/wiki/Array%20of%20Low%20Energy%20X-ray%20Imaging%20Sensors
The Array of Low Energy X-ray Imaging Sensors (ALEXIS, also known as P89-1B, COSPAR 1993-026A, SATCAT 22638) X-ray telescope featured curved mirrors whose multilayer coatings reflected and focused low-energy X-rays or extreme ultraviolet (EUV) light the way optical telescopes focus visible light. The satellite and payloads were funded by the United States Department of Energy and built by Los Alamos National Laboratory (LANL) in collaboration with Sandia National Laboratories and the University of California-Space Sciences Lab. The satellite bus was built by AeroAstro, Inc. of Herndon, VA. The Launch was provided by the United States Air Force Space Test Program on a Pegasus Booster on April 25, 1993. The mission was entirely controlled from a small groundstation at LANL. Features ALEXIS scanned half the sky with its three paired sets of EUV telescopes, although it could not locate any events with high resolution. Ground-based optical astronomers could look for visual counterparts to the EUV transients seen by ALEXIS by comparing observations made at two different times. Large telescopes, with their small fields of view, cannot quickly scan a large enough piece of the sky to effectively observe transients seen by ALEXIS, but amateur equipment is well suited to the task. Participants in the ALEXIS project combed the ALEXIS data for the coordinates of a likely current transient, then trained their telescopes and observe the area. There were six EUV telescopes which were arranged in three co-aligned pairs which cover three overlapping 33° fields-of-view. At each rotation of the satellite, ALEXIS monitored the entire anti-solar hemisphere. Each telescope consisted of a spherical mirror with a Mo-Si layered synthetic microstructure (LSM) or Multilayer coating, a curved profile microchannel plate detector located at the telescope's prime focus, a UV background-rejecting filter, electron rejecting magnets at the telescope aperture, and image processing readout electronics. The geometric collecting area of each telescope was about 25 cm2, with spherical aberration limiting resolution to about 0.25°s. Analysis of the pre-flight x-ray throughput calibration data indicated that the peak on-axis effective collecting area for each telescope's response function ranges from 0.25 to 0.05 cm2. The peak area-solid angle product response function of each telescope ranged from 0.04 to 0.015 cm2-sr. The spacing of the molybdenum and silicon layers on each telescope's mirror was the primary determinant of the telescope's photon energy response function. The ALEXIS multilayer mirrors also employed a "wavetrap" feature to significantly reduce the mirror's reflectance for He II 304 Angstrom geocoronal radiation which can be a significant background source for space borne EUV telescopes. These mirrors, produced by Ovonyx, Inc., were highly curved yet have been shown to have very uniform multilayer coatings and hence have very uniform EUV reflecting properties over their entire surfaces. The efforts in designing, producing and calibrating the ALEXIS telescope mirrors have been previously described in Smith et al., 1990. ALEXIS weighed 100 pounds, used 45 watts, and produced 10 kilobits/second of data. Position and time of arrival were recorded for each detected photon. ALEXIS was always in a survey-monitor mode, with no individual source pointings. It was suited for simultaneous observations with ground-based observers who prefer to observe sources at opposition. 
Coordinated observations needed not be arranged before the fact, because most sources in the anti-Sun hemisphere were observed and archived. ALEXIS was tracked from a single ground station in Los Alamos. Between ground station passes, data was stored in an on-board solid state memory of 78 Megabytes. ALEXIS, with its wide fields-of-view and well-defined wavelength bands, complemented the scanners on NASA's Extreme Ultraviolet Explorer (EUVE) and the ROSAT EUV Wide Field Camera (WFC), which were sensitive, narrow field-of-view, broad-band survey experiments. ALEXIS's results also highly complemented the data from EUVE's spectroscopy instrument. ALEXIS's scientific goals were to: Map the diffuse background in three emission line bands with the highest angular resolution to date, Perform a narrow-band survey of point sources, Search for transient phenomena in the ultrasoft X-ray band, and Provide synoptic monitoring of variable ultrasoft X-ray sources such as cataclysmic variables and flare stars. End of mission On 29 April 2005, after 12 years in orbit, the ALEXIS satellite reached the end of its mission and was decommissioned. The satellite exceeded expectations by operating well past its one year design life. See also 1993 in spaceflight References Spacecraft launched in 1993 Derelict satellites orbiting Earth Space telescopes X-ray telescopes Spacecraft launched by Pegasus rockets
Array of Low Energy X-ray Imaging Sensors
[ "Astronomy" ]
1,004
[ "Space telescopes" ]
1,528,827
https://en.wikipedia.org/wiki/330%20Adalberta
330 Adalberta (prov. designation: ) is a stony asteroid from the inner regions of the asteroid belt, approximately 9.5 kilometers in diameter. It is likely named for either Adalbert Merx or Adalbert Krüger. It was discovered by Max Wolf in 1910. In the 1980s, the asteroid's permanent designation was reassigned from the non-existent object . Discovery Adalberta was discovered on 2 February 1910, by German astronomer Max Wolf at Heidelberg Observatory in southern Germany. Previously, on 18 March 1892, another body discovered by Max Wolf with the provisional designation was originally designated , but was subsequently lost and never recovered (also see Lost minor planet). In 1982, it was determined that Wolf erroneously measured two images of stars, not asteroids. As it was a false positive and the body never existed, the name Adalberta and number "330" was then reused for this asteroid, . MPC citation was published on 6 June 1982 (). Orbit and classification The S-type asteroid orbits the Sun in the inner main-belt at a distance of 1.8–3.1 AU once every 3 years and 11 months (1,416 days). Its orbit has an eccentricity of 0.25 and an inclination of 7° with respect to the ecliptic. Adalbertas observation arc begins with its official discovery observation at Heidelberg in 1910. Naming This minor planet was named in honor of the discoverer's father-in-law, Adalbert Merx (after whom another minor planet 808 Merxia is also named). However it is also possible that it was named for Adalbert Krüger (1832–1896), a German astronomer and editor of the Astronomische Nachrichten, which was one of the first international journals in the field of astronomy. The naming citation was first mentioned in The Names of the Minor Planets by Paul Herget in 1955 (). Physical characteristics Rotation period In 2013, a rotational lightcurve of Adalberta was obtained from photometric observations at Los Algarrobos Observatory in Uruguay. Light-curve analysis gave a well-defined rotation period of hours with a brightness variation of 0.44 magnitude (). Diameter and albedo According to the survey carried out by NASA's Wide-field Infrared Survey Explorer with its subsequent NEOWISE mission, Adalberta measures 9.11 kilometers in diameter, and its surface has an albedo of 0.256, while the Collaborative Asteroid Lightcurve Link assumes a standard albedo for stony asteroids of 0.20 and calculates a diameter of 9.84 kilometers using an absolute magnitude of 12.4. Notes References External links Lightcurve Database Query (LCDB), at www.minorplanet.info Dictionary of Minor Planet Names, Google books Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center 000330 Discoveries by Max Wolf Named minor planets 19100202 Recovered astronomical objects
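The diameter quoted from the Collaborative Asteroid Lightcurve Link follows from the standard relation between diameter, geometric albedo p, and absolute magnitude H, namely D ≈ (1329 km / √p) · 10^(−H/5). The Python sketch below is illustrative; the 1329 km constant is the conventional value used in such conversions and is not stated in the article. It reproduces the 9.84 km figure derived from an assumed albedo of 0.20 and an absolute magnitude of 12.4.

```python
import math

def diameter_km(albedo, abs_magnitude):
    """Standard asteroid size estimate: D ~ (1329 km / sqrt(p)) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_magnitude / 5.0)

# Values quoted in the article for the Collaborative Asteroid Lightcurve Link estimate.
print(f"D = {diameter_km(0.20, 12.4):.2f} km")   # about 9.84 km, as stated in the text
```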
330 Adalberta
[ "Astronomy" ]
624
[ "Recovered astronomical objects", "Astronomical objects" ]
1,528,853
https://en.wikipedia.org/wiki/Automated%20Patrol%20Telescope
The Automated Patrol Telescope (APT) was a wide-field CCD imaging telescope, operated by the University of New South Wales at Siding Spring Observatory, Australia. The telescope began operating in June 1989. This was one of four ROTSE telescopes around the world to detect gamma-ray bursts, with telescopes positioned in Australia, Namibia, Turkey, and Texas. The telescope was designed for robotic use, with a 45 cm aperture. The telescope was converted for computer-controlled operation and CCD imaging from an older retired Baker-Nunn camera, a type of modified Schmidt camera. The telescope had a field of view of 5 degrees by 5 degrees. See also List of telescopes of Australia Lists of telescopes References Further reading Preliminary results from the Automated Patrol Telescope CCD camera (1990) External links Automated Patrol Telescope (APT) on the internet Telescopes Siding Spring Observatory
Automated Patrol Telescope
[ "Astronomy" ]
174
[ "Telescopes", "Astronomical instruments" ]
1,528,864
https://en.wikipedia.org/wiki/Broad%20Band%20X-ray%20Telescope
The Broad Band X-ray Telescope (BBXRT) was flown on the Space Shuttle Columbia (STS-35) from December 2 through December 11, 1990, as part of the ASTRO-1 payload. The flight of BBXRT marked the first opportunity for performing X-ray observations over a broad energy range (0.3–12 keV) with a moderate energy resolution (typically 90 eV and 150 eV at 1 and 6 keV, respectively). BBXRT was co-mounted with the three ultraviolet telescopes HUT, WUPPE, and UIT for Astro-1 in 1990. According to NASA, it was "the first focusing X-ray telescope operating over a broad energy range 0.3-12 keV with a moderate energy resolution (90 eV at 1 keV and 150 eV at 6 keV)." Hardware See also Spacelab X-ray astronomy List of X-ray space telescopes References External links Broad Band X-ray Telescope (BBXRT, GSFC, NASA) on the internet Space telescopes X-ray telescopes Crewed space observatories Space Shuttle program
Broad Band X-ray Telescope
[ "Astronomy" ]
226
[ "Space telescopes", "Crewed space observatories" ]
1,528,889
https://en.wikipedia.org/wiki/6%2C6%27-Dibromoindigo
6,6'-Dibromoindigo is an organic compound with the formula (BrC6H3C(O)CNH)2. A deep purple solid, the compound is also known as Tyrian purple, a dye of historic significance. Presently, it is only a curiosity, although the related compound indigo is of industrial significance. It is produced by molluscs of the family Muricidae. The pure compound has semiconductor properties in the thin-film phase, which is potentially useful for wearable electronics, and it has better performance than the parent indigo in this context. Biosynthesis Biosynthesis of the molecule proceeds through the intermediate tyrindoxyl sulphate. The molecule consists of a pair of monobrominated indolin-3-one rings linked by a carbon-carbon double bond. Dibromoindigo can also be produced enzymatically in vitro from the amino acid tryptophan. The sequence begins with bromination of the benzo ring, followed by conversion to 6-bromoindole. A flavin-containing monooxygenase then couples two of these indole units to give the dye. Chemical synthesis The main chemical constituent of the Tyrian dye was discovered by Paul Friedländer in 1909 to be 6,6′-dibromoindigo, a derivative of indigo dye, which had been synthesized in 1903. Although the first chemical synthesis was reported in 1914, unlike indigo, it has never been synthesized on a commercial scale. An efficient protocol for laboratory synthesis of dibromoindigo was developed in 2010. References Indigo structure dyes Bromoarenes Organic semiconductors
6,6'-Dibromoindigo
[ "Chemistry" ]
333
[ "Semiconductor materials", "Molecular electronics", "Organic semiconductors" ]
1,528,972
https://en.wikipedia.org/wiki/Through-hole%20technology
In electronics, through-hole technology (also spelled "thru-hole") is a manufacturing scheme in which leads on the components are inserted through holes drilled in printed circuit boards (PCB) and soldered to pads on the opposite side, either by manual assembly (hand placement) or by the use of automated insertion mount machines. History Through-hole technology almost completely replaced earlier electronics assembly techniques such as point-to-point construction. From the second generation of computers in the 1950s until surface-mount technology (SMT) became popular in the mid 1980s, every component on a typical PCB was a through-hole component. PCBs initially had tracks printed on one side only, later both sides, then multi-layer boards were in use. Through holes became plated-through holes (PTH) in order for the components to make contact with the required conductive layers. Plated-through holes are no longer required with SMT boards for making the component connections, but are still used for making interconnections between the layers and in this role are more usually called vias. Leads Axial and radial leads Components with wire leads are generally used on through-hole boards. Axial leads protrude from each end of a typically cylindrical or elongated box-shaped component, on the geometrical axis of symmetry. Axial-leaded components resemble wire jumpers in shape, and can be used to span short distances on a board, or even otherwise unsupported through an open space in point-to-point wiring. Axial components do not protrude much above the surface of a board, producing a low-profile or flat configuration when placed "lying down" or parallel to the board. Radial leads project more or less in parallel from the same surface or aspect of a component package, rather than from opposite ends of the package. Originally, radial leads were defined as more-or-less following a radius of a cylindrical component (such as a ceramic disk capacitor). Over time, this definition was generalized in contrast to axial leads, and took on its current form. When placed on a board, radial components "stand up" perpendicular, occupying a smaller footprint on sometimes-scarce "board real estate", making them useful in many high-density designs. The parallel leads projecting from a single mounting surface gives radial components an overall "plugin nature", facilitating their use in high-speed automated component insertion ("board-stuffing") machines. When needed, an axial component can be effectively converted into a radial component, by bending one of its leads into a "U" shape so that it ends up close to and parallel with the other lead. Extra insulation with heat-shrink tubing may be used to prevent shorting out on nearby components. Conversely, a radial component can be pressed into service as an axial component by separating its leads as far as possible, and extending them into an overall length-spanning shape. These improvisations are often seen in breadboard or prototype construction, but are deprecated for mass production designs. This is because of difficulties in use with automated component placement machinery, and poorer reliability because of reduced vibration and mechanical shock resistance in the completed assembly. Multiple lead devices For electronic components with two or more leads, for example, diodes, transistors, ICs, or resistor packs, a range of standard-sized semiconductor packages are used, either directly onto the PCB or via a socket. 
Characteristics While through-hole mounting provides strong mechanical bonds when compared to SMT techniques, the additional drilling required makes the boards more expensive to produce. They also limit the available routing area for signal traces on layers immediately below the top layer on multilayer boards since the holes must pass through all layers to the opposite side. To that end, through-hole mounting techniques are now usually reserved for bulkier or heavier components such as electrolytic capacitors or semiconductors in larger packages such as the TO-220 that require the additional mounting strength, or for components such as plug connectors or electromechanical relays that require great strength in support. Design engineers often prefer the larger through-hole rather than surface mount parts when prototyping, because they can be easily used with breadboard sockets. However, high-speed or high-frequency designs may require SMT technology to minimize stray inductance and capacitance in wire leads, which would impair circuit function. Ultra-compact designs may also dictate SMT construction, even in the prototype phase of design. Through-hole components are ideal for prototyping circuits with breadboards using microprocessors such as Arduino or PICAXE. These components are large enough to be easy to use and solder by hand. See also Point-to-point construction Board-to-board connector Surface-mount technology Via (electronics) References Further reading External links Chip carriers Printed circuit board manufacturing
Through-hole technology
[ "Engineering" ]
998
[ "Electrical engineering", "Electronic engineering", "Printed circuit board manufacturing" ]
1,529,023
https://en.wikipedia.org/wiki/Constellation-X%20Observatory
The Constellation-X Observatory (Con-X or HTXS) was a mission concept for an X-ray space observatory to be operated by NASA; in 2008 it was merged with ESA and JAXA efforts in the same direction to produce the International X-ray Observatory project, announced on 24 July 2008. The intention of the Con-X project was to provide enough X-ray collecting area to be able to feed a spectroscope of substantially higher resolution than the previous generation (XMM-Newton, Chandra X-ray Observatory and Suzaku) of space-based X-ray telescopes; this would allow the resolution of individual hot-spots at the event horizon of black holes, of warm intergalactic matter (by seeing absorption lines at various redshifts superposed onto the spectra of background quasars) and of dynamics within galaxy clusters. Technology for Con-X The project intended to have separate low-energy and high-energy X-ray telescopes, covering the spectrum from 100 eV to 40 keV. The collecting area requirements would have been achieved using a segmented-mirror technique based on slumping thin (400 μm) glass sheets onto mandrels, which avoids the handling problems of dealing with whole thin shells. Dispersive optics for the spectrometer were developed, as well as a microcalorimeter-array detector providing energy resolution per pixel of about 5 eV. The International X-ray Observatory (IXO) In May 2008, ESA and NASA established a coordination group involving ESA, NASA and JAXA, with the intent of exploring a joint mission merging the ongoing XEUS and Constellation-X efforts. The coordination group met twice, first in May 2008 at the European Space Research and Technology Centre (ESTEC), then in June 2008 at the Center for Astrophysics. As a result of these meetings, a joint understanding was reached by the coordination group on a proposal to proceed towards the goal of developing an International X-ray Observatory (IXO). The coordination group proposed the start of a joint study of IXO. A single merged set of top-level science goals and derived key science measurement requirements were established. References Space telescopes X-ray telescopes Cancelled spacecraft
Constellation-X Observatory
[ "Astronomy" ]
446
[ "Space telescopes" ]
1,529,072
https://en.wikipedia.org/wiki/European%20Space%20Astronomy%20Centre
The European Space Astronomy Centre (ESAC) near Madrid in Spain is a research centre of the European Space Agency (ESA). ESAC is the lead institution for space science (astronomy, Solar System exploration and fundamental physics) using ESA missions. It hosts the science operation centres for all ESA astronomy and planetary missions and their scientific data archives. ESA's Cebreros Station deep-space communication antennas are located nearby. Location ESAC is located within the municipal limits of Villanueva de la Cañada, 30 km west of Madrid in the Guadarrama Valley. The site is surrounded by light woodland and is adjacent to the ruins of the 15th-century . Missions Past and present missions handled from ESAC include (in alphabetical order): Akari, BepiColombo, Cassini–Huygens, Cluster, Exomars, Gaia, Herschel Space Observatory, Hubble Space Telescope, ISO, INTEGRAL, IUE, James Webb Space Telescope, LISA Pathfinder, Mars Express, Planck, Rosetta, SOHO, Solar Orbiter, Venus Express, and XMM-Newton. Future missions include: Athena, Euclid, JUICE, and Plato. In addition to deep space and solar system exploration, ESAC hosts the data processing of SMOS, an Earth-observing satellite, and the CESAR educational programme. ESAC is also involved in ESA missions conducted in collaboration with other space agencies. One example is Akari, a Japanese-led mission to carry out an infrared sky survey, launched on 21 February 2006. Collaborative programmes include the NASA-led James Webb Space Telescope, the successor to the Hubble Space Telescope. Communications An ESA radio ground station for communication with spacecraft is located in Cebreros, Avila, about 90 km from Madrid and 65 km from ESAC. This installation provides essential support to the activities of ESAC. Inaugurated in September 2005, Cebreros has a 35-metre antenna used to communicate with distant missions to Mercury, Venus, Mars and beyond. The Madrid Deep Space Communications Complex, operated by the Instituto Nacional de Técnica Aeroespacial, is also located nearby. It is a station of the Deep Space Network used primarily for NASA missions, but sometimes supplements Cebreros in communicating with ESA spacecraft. It has a 70-metre antenna, six 34-m antennae and one 26-m antenna. Two 15-metre radio antennae are located on the ESAC site, but were decommissioned in 2017. Other facilities ESAC also hosts a branch of the Spanish Astrobiology Center (CAB). See also Facilities of the European Space Agency References External links European Space Astronomy Centre website Astronomy in Europe European Space Agency facilities Space telescopes
European Space Astronomy Centre
[ "Astronomy" ]
545
[ "Space telescopes" ]
1,529,074
https://en.wikipedia.org/wiki/EXOSAT
The European X-ray Observatory Satellite (EXOSAT), originally named HELOS, was an X-ray telescope operational from May 1983 until April 1986 and in that time made 1780 observations in the X-ray band of most classes of astronomical object including active galactic nuclei, stellar coronae, cataclysmic variables, white dwarfs, X-ray binaries, clusters of galaxies, and supernova remnants. This European Space Agency (ESA) satellite for direct-pointing and lunar-occultation observation of X-ray sources beyond the Solar System was launched into a highly eccentric orbit (apogee 200,000 km, perigee 500 km) almost perpendicular to that of the Moon on 26 May 1983. The instrumentation includes two low-energy imaging telescopes (LEIT) with Wolter I X-ray optics (for the 0.04–2 keV energy range), a medium-energy experiment using Ar/CO2 and Xe/CO2 detectors (for 1.5–50 keV), a Xe/He gas scintillation spectrometer (GSPC) (covering 2–80 keV), and a reprogrammable onboard data-processing computer. Exosat was capable of observing an object (in the direct-pointing mode) for up to 80 hours and of locating sources to within at least 10 arcsec with the LEIT and about 2 arcsec with GSPC. History of Exosat During the period from 1967 to 1969, the European Space Research Organisation (ESRO) studied two separate missions: a European X-ray observatory satellite, as a combined X- and gamma-ray observatory (Cos-A), and a gamma-ray observatory (Cos-B). Cos-A was dropped after the initial study, and Cos-B was proceeded with. Later in 1969 a separate satellite (the Highly Eccentric Lunar Occultation Satellite - Helos) was proposed. The Helos mission was to determine accurately the location of bright X-ray sources using the lunar occultation technique. In 1973 the observatory part of the mission was added, and mission approval from the European Space Agency Council was given for Helos, now renamed Exosat. It was decided that the observatory should be made available to a wide community, rather than be restricted to instrument developers, as had been the case for all previous ESA (ESRO) scientific programmes. For the first time in an ESA project, this led to the approach of payload funding and management by the Agency. Instrument design and development became a shared responsibility between ESA and hardware groups. In July 1981 ESA released the first Announcement of Opportunity (AO) for participation in the Exosat observation programme to the scientific community of its Member States. By 1 November 1981, the closing of the AO window, some 500 observing proposals had been received. Of these, 200 were selected for the first nine months of operation. Exosat was the first ESA spacecraft to carry on board a digital computer (OBC), with its main purpose being scientific data processing. Spacecraft monitoring and control were secondary. To provide the data handling subsystem with an exceptional flexibility of operation, the OBC and Central Terminal Unit were in-flight reprogrammable. This flexibility far exceeded any other ESA spacecraft built up to then. Satellite operations Each of the three axes were stabilized and the optical axes of the three scientific instruments were coaligned. The entrance apertures of the scientific instruments were all located on one face of the central body. Once in orbit the flaps which cover the entrances to the ME and LEIT were swung open to act as thermal and stray-light shields for the telescopes and star trackers, respectively. 
The orbit of Exosat was different from any previous X-ray astronomy satellite. To maximize the number of sources occulted by the Moon, a highly eccentric orbit (e ~ 0.93) with a 90.6 hr period and an inclination of 73° was chosen. The initial apogee was 191,000 km and perigee 350 km. To be outside the Earth's radiation belts, the scientific instruments were operated above ~50,000 km, giving up to ~76 hr per 90 hr orbit. There was no need for any onboard data storage as Exosat was visible from the ground station at Villafranca, Spain for practically the entire time the scientific instruments were operated. References External links ESA's X-ray Observatory (EXOSAT at ESTEC, ESA) on the internet Data archive at NASA High Energy Astrophysics Science Archive Center (HEASARC) European Space Agency satellites X-ray telescopes Space telescopes 1983 in spaceflight Spacecraft launched in 1983
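The initial orbital elements quoted above (apogee about 191,000 km, perigee about 350 km, eccentricity ~0.93, period 90.6 hr) can be cross-checked with two-body formulas. The sketch below assumes a mean Earth radius of 6371 km and GM = 398,600 km³/s²; these constants and the rounding are assumptions, so treat the result as a rough consistency check only:

```python
# Rough consistency check of EXOSAT's initial orbit (assumed constants below).
import math

R_EARTH = 6371.0        # km, assumed mean Earth radius
MU_EARTH = 398600.0     # km^3/s^2, Earth's standard gravitational parameter

r_apo = 191000.0 + R_EARTH   # geocentric apogee distance, km
r_per = 350.0 + R_EARTH      # geocentric perigee distance, km

a = (r_apo + r_per) / 2.0                        # semi-major axis
e = (r_apo - r_per) / (r_apo + r_per)            # eccentricity
period_hr = 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH) / 3600.0

print(f"e = {e:.2f}, period = {period_hr:.1f} hr")   # ~0.93 and roughly 90 hr
```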
EXOSAT
[ "Astronomy" ]
954
[ "Space telescopes" ]
1,529,082
https://en.wikipedia.org/wiki/JEM-EUSO
The Extreme Universe Space Observatory onboard Japanese Experiment Module (JEM-EUSO) is the first space mission concept devoted to the investigation of cosmic rays and neutrinos of extreme energy (). Using the Earth's atmosphere as a giant detector, the detection is performed by looking at the streak of fluorescence produced when such a particle interacts with the Earth's atmosphere. EUSO EUSO was a mission of the European Space Agency, designed to be hosted on the International Space Station as an external payload of the Columbus. EUSO successfully completed the "Phase A" study, however in 2004, ESA decided not to proceed with the mission because of programmatic and financial constraints. The mission was then re-oriented as a payload to be hosted on board the JEM module of the Japanese KIBO facility of the ISS. The mission was then renamed JEM-EUSO. JEM-EUSO JEM-EUSO is currently (2013) studied by RIKEN and JAXA, in collaboration with 95 other institutions from 16 countries aiming for a flight after 2020. The proposed instrument consists of a set of three large Fresnel lenses of 2.65-metre diameter (with top and bottom cut off to reduce the minimum diameter to 1.9-metre so that they fit in the HTV resupply vehicle in which the instrument is to be launched) feeding a detector consisting of 137 modules each a 48 x 48 array of photomultipliers. The imaging takes place in the 300 nm-450 nm band (low-energy UV through deep-blue), and photons are time-tagged with 2.5-microsecond precision. Orbital debris detection In addition to its main, science mission, EUSO might also be used to detect orbiting space junk that could pose a threat to ISS, that is too small to be spotted by astronomers (1 to 10 cm). The ISS is shielded adequately against particles that are smaller than 1 cm. Particles in this range, or larger, can inflict serious damage, especially to other objects in orbit, since many of them are traveling at speeds of about 36,000 km/h. Nearly 3,000 tons of space debris resides in low Earth orbit; more than 700,000 pieces of debris larger than 1 cm now orbit Earth. A laser might then be used to deflect dangerous particles. The project could be ready to implement after about 2017–2018, using better lasers. Other projects under the EUSO framework EUSO-TA (Extreme Universe Space Observatory-Telescope Array): a ground-based telescope designed to prove the technology of EUSO telescopes. Was installed at Black Rock Mesa, Utah, United States at one of the Telescope Array fluorescence detectors in March 2013 (first observations in 2015). The experiment was on-going in 2018. The experiment has detected some UHECR-events (Ultra High Energy Cosmic Ray). EUSO-Balloon: a balloon-based EUSO telescope meant to further validate the technology. The balloon flight took place in 2014 in Canada and lasted 5 hours. The telescope observed laser-simulated cosmic ray events. EUSO-SPB (EUSO-Super Pressure Balloon): a high-altitude heavy-lift balloon EUSO telescope. Launched in 2017 from New Zealand (EUSO-SPB1-mission). The flight took 13 days, but was cut substantially shorter than the planned 100 days. Second mission (EUSO-SPB2) is planned for 2021. TUS (Tracking Ultraviolet Setup): a Russian mission on board the Lomonosov-satellite (launched 2016); included in the EUSO program as of 2018 (originally was not part of EUSO program). Mini-EUSO: an ultraviolet telescope operated at the ISS. 
The telescope serves as a pathfinder mission for UHECR missions in space and maps the ultraviolet background produced by the Earth's atmosphere. The mapping of the UV background is important for the follow-up missions K-EUSO and JEM-EUSO. The mission started as a co-operation between the Italian Space Agency and the Russian Space Agency. The Mini-EUSO telescope was launched to the ISS on 22 August 2019. K-EUSO (KLYPVE-EUSO; KLYPVE is a Russian acronym for extreme energy cosmic rays): a Russian Space Agency project to place a UHECR telescope in the Russian segment of the ISS. The project builds upon the TUS experiment on the Russian Lomonosov satellite. In 2017, the launch was scheduled for 2022. JEM-EUSO (Japanese Experiment Module-EUSO): the final goal of the JEM-EUSO program is to have the JEM-EUSO telescope installed on the ISS. POEMMA (Probe Of Multi-Messenger Astrophysics): a dedicated satellite mission (2 satellites) to observe UHECR events in the atmosphere. As of 2018, it is a NASA-sponsored concept study. References External links JEM-EUSO webpage ESA EUSO webpage High energy particle telescopes Components of the International Space Station International Space Station experiments Space telescopes
JEM-EUSO
[ "Astronomy" ]
1,034
[ "Space telescopes" ]
1,529,092
https://en.wikipedia.org/wiki/Einstein%20Observatory
Einstein Observatory (HEAO-2) was the first fully imaging X-ray telescope put into space and the second of NASA's three High Energy Astrophysical Observatories. Named HEAO B before launch, the observatory's name was changed to honor Albert Einstein upon its successfully attaining orbit. Project conception and design The High Energy Astronomy Observatory (HEAO) program originated in the late 1960's within the Astronomy Missions Board at NASA, which recommended the launch of a series of satellite observatories dedicated to high-energy astronomy. In 1970, NASA requested proposals for experiments to fly on these observatories, and a team organized by Riccardo Giacconi, Herbert Gursky, George W. Clark, Elihu Boldt, and Robert Novick responded in October 1970 with a proposal for an X-ray telescope. NASA approved four missions in the HEAO program, with the X-ray telescope planned to be the third mission. One of the three missions of the HEAO program was cancelled in February 1973, due to budgetary pressures within NASA that briefly resulted in the cancellation of the entire program, and the x-ray observatory was moved up to become the second mission of the program, receiving the designation HEAO B (later HEAO-2), and scheduled to launch in 1978. HEAO-2 was constructed by TRW Inc. and shipped to Marshall Space Flight Center in Huntsville, AL for testing in 1977. History HEAO-2 was launched on November 13, 1978, from Cape Canaveral, Florida, on an Atlas-Centaur SLV-3D booster rocket into a near-circular orbit at an altitude of approximately 470 km and orbital inclination of 23.5 degrees. The satellite was renamed Einstein upon achieving orbit, in honor of the centenary of the scientist's birth. Einstein ceased operations on April 26 1981, when the exhaustion of the satellite's thruster fuel supply rendered the telescope inoperable. The satellite reentered Earth's atmosphere and burned up on March 25, 1982. Instrumentation Einstein carried a single large grazing-incidence focusing X-ray telescope that provided unprecedented levels of sensitivity. It had instruments sensitive in the 0.15 to 4.5 keV energy range. Four instruments were installed in the satellite, mounted on a carousel arrangement that could be rotated into the focal plane of the telescope: The High Resolution Imaging camera (HRI) was a digital x-ray camera covering the central 25 arcmin of the focal plane. The HRI was sensitive to x-ray emissions between 0.15 and 3 keV and capable of ~2 arcsec spatial resolution. The Imaging Proportional Counter (IPC) was a proportional counter covering the entire focal plane. The IPC was sensitive to x-ray emissions between 0.4 and 4 keV and capable of ~1 arcmin spatial resolution. The Solid State Spectrometer (SSS) was a cryogenically cooled silicon drift detector. The SSS was sensitive to x-ray emissions between 0.5 and 4.5 keV. The cryogen keeping the SSS at its operational temperature ran out, as expected, in October 1979. Bragg Focal Plane Crystal Spectrometer (FPCS) was a Bragg crystal spectrometer. The FPCS was sensitive to x-ray emissions between 0.42 and 2.6 keV. Additionally, the Monitor Proportional Counter (MPC) was a non-focal plane, coaxially-mounted proportional counter that monitored the x-ray flux of the source being observed by the active focal plane instrument. Two filters could be used with the imaging detectors: The Broad Band Filter Spectrometer consisted of aluminum and beryllium filters than could be placed into the x-ray beam to change the spectral sensitivity. 
The Objective Grating Spectrometer consisted of transmission gratings that could be placed into the x-ray beam for dispersive spectroscopy. Riccardo Giacconi was the principal investigator for all of the experiments on board Einstein. Scientific results Einstein discovered approximately five thousand sources of x-ray emission during its operation and was the first x-ray experiment able to resolve an image of the observed sources. X-ray background Surveys by early x-ray astronomy experiments showed a uniform diffuse background of x-ray radiation across the sky. The uniformity of this background radiation indicated that it originated outside of the Milky Way Galaxy, with the most popular hypotheses being a hot gas spread uniformly throughout space, or numerous distant point sources of x-rays (such as quasars) that appear to blend together due to their great distance. Observations with Einstein showed that a large portion of this x-ray background originated from distant point sources, and observations with later x-ray experiments have confirmed and refined this conclusion. Stellar x-ray emissions Observations with Einstein showed that all stars emit x-rays. Main sequence stars emit only a small portion of their total radiation in the x-ray spectrum, primarily from their corona, while neutron stars emit a very large portion of their total radiation in the x-ray spectrum. Einstein data also indicated that coronal x-ray emissions in main sequence stars are stronger than was expected at the time. Galaxy clusters The Uhuru satellite discovered x-ray emissions from a hot, thin gas pervading distant clusters of galaxies. Einstein was able to observe this gas in greater detail. Einstein data indicated that the containment of this gas within these clusters by gravity could not be explained by the visible matter within those clusters, which provided further evidence for studies of dark matter. Observations by Einstein also helped to determine the frequency of irregularly-shaped clusters compared to round, uniform clusters. Galactic jets Einstein detected jets of x-rays emanating from Centaurus A and M87 that were aligned with previously-observed jets in the radio spectrum. See also Timeline of artificial satellites and space probes List of things named after Albert Einstein Sources References External links Einstein Observatory (HEAO-2) 1978 in spaceflight Space telescopes Spacecraft launched in 1978 TRW Inc. X-ray telescopes
Einstein Observatory
[ "Astronomy" ]
1,220
[ "Space telescopes" ]
1,529,106
https://en.wikipedia.org/wiki/Full-sky%20Astrometric%20Mapping%20Explorer
Full-sky Astrometric Mapping Explorer (or FAME) was a NASA proposed astrometric satellite designed to determine with unprecedented accuracy the positions, distances, and motions of 40 million stars within our galactic neighborhood (distances by stellar parallax possible). This database was to allow astronomers to accurately determine the distance to all of the stars on this side of the Milky Way galaxy, detect large planets and planetary systems around stars within 1,000 light years of the Sun, and measure the amount of dark matter in the galaxy from its influence on stellar motions. It was to be a collaborative effort between the United States Naval Observatory (USNO) and several other institutions. FAME would have measured stellar positions to less than 50 microarcseconds. The NASA MIDEX mission was scheduled for launch in 2004. In January 2002, however, NASA abruptly cancelled this mission, mainly due to concerns about costs, which had grown from US$160 million initially to US$220 million. This would have been an improvement over the High Precision Parallax Collecting Satellite (Hipparcos) which operated 1989–1993 and produced various star catalogs. Astrometric parallax measurements form part of the cosmic distance ladder, and can also be measured by other space telescopes such as Hubble (HST) or ground-based telescopes to varying degrees of precision. Compared to the FAME accuracy of 50 microarcseconds, the Gaia mission is planning 10 microarcseconds accuracy, for mapping stellar parallax up to a distance of tens of thousands of light-years from Earth. See also Explorer program Gaia (spacecraft) Nano-JASMINE References Explorers Program Space telescopes Space astrometry missions
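FAME's 50-microarcsecond goal can be put in context with the usual parallax relation d [pc] = 1/p [arcsec]. The short sketch below, using an assumed light-year-to-parsec conversion, estimates the relative distance error such a measurement would give for a star about 1,000 light years away, the planet-detection horizon mentioned above; the numbers are illustrative only:

```python
# Illustrative: what a 50-microarcsecond parallax accuracy means at ~1,000 light years.
LY_PER_PARSEC = 3.2616        # assumed conversion factor

distance_pc = 1000.0 / LY_PER_PARSEC       # ~306.6 pc
parallax_uas = 1e6 / distance_pc           # true parallax in microarcseconds (~3,260)
sigma_uas = 50.0                           # FAME's stated measurement accuracy

rel_error = sigma_uas / parallax_uas       # fractional parallax (~ distance) error
print(f"parallax ~ {parallax_uas:.0f} uas, relative distance error ~ {100 * rel_error:.1f}%")
```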
Full-sky Astrometric Mapping Explorer
[ "Astronomy" ]
341
[ "Space telescopes", "Space astrometry missions" ]
1,529,118
https://en.wikipedia.org/wiki/Ginga%20%28satellite%29
ASTRO-C, renamed Ginga (Japanese for 'galaxy'), was an X-ray astronomy satellite launched from the Kagoshima Space Center on 5 February 1987 using the M-3SII launch vehicle. The primary instrument for observations was the Large Area Counter (LAC). Ginga was the third Japanese X-ray astronomy mission, following Hakucho and Tenma (the Hinotori satellite, which preceded Ginga, also carried X-ray sensors, but is better regarded as a heliophysics mission than an X-ray astronomy mission). Ginga reentered the Earth's atmosphere on 1 November 1991. Instruments Large Area Proportional Counter (LAC 1.5-37 keV) All-Sky Monitor (ASM 1-20 keV) Gamma-ray Burst Detector (GBD 1.5-500 keV) Highlights Discovery of transient black hole candidates and study of their spectral evolution. Discovery of weak transients in the galactic ridge. Detection of cyclotron features in three X-ray pulsars: 4U1538-522, V0332+53, and Cep X-4. Evidence for emission and absorption Fe features in Seyfert galaxies, probing reprocessing by cold matter. Discovery of intense 6-7 keV iron line emission from the Galactic Center region. External links NASA/GSFC information on Ginga (ex Astro-C) Space telescopes X-ray telescopes Satellites of Japan Satellites formerly orbiting Earth 1987 in spaceflight Spacecraft launched in 1987
Ginga (satellite)
[ "Astronomy" ]
311
[ "Space telescopes" ]
1,529,163
https://en.wikipedia.org/wiki/LiTraCon
LiTraCon is a translucent concrete building material. The name is short for "light-transmitting concrete". The material is made of 96% concrete and 4% by weight of optical fibers. It was developed in 2001 by Hungarian architect Áron Losonczi working with scientists at the Technical University of Budapest. LiTraCon is manufactured by the inventor's company, LiTraCon Bt, which was founded in spring 2004. The head office and workshop are near the town of Csongrád. All LiTraCon products have been produced by LiTraCon Bt. The concrete comes in precast blocks of different sizes. The most notable installation of it to date is Europe Gate, a 4 m high sculpture made of LiTraCon blocks, erected in 2004 in observance of the entry of Hungary into the European Union. The product won the German "Red Dot 2005 Design Award" for 'highest design qualities'. Though expensive, LiTraCon appeals to architects because it is stronger than glass yet translucent, unlike ordinary concrete. It was considered as a possible sheathing for New York's One World Trade Center. References External links LiTraCon European Patent Concrete
LiTraCon
[ "Engineering" ]
235
[ "Structural engineering", "Concrete" ]
1,529,187
https://en.wikipedia.org/wiki/INTEGRAL
The INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) is a space telescope for observing gamma rays of energies up to 8 MeV. It was launched by the European Space Agency (ESA) into Earth orbit in 2002, and is designed to provide imaging and spectroscopy of cosmic sources. In the MeV energy range, it is the most sensitive gamma ray observatory in space. It is sensitive to higher energy photons than X-ray instruments such as NuSTAR, the Neil Gehrels Swift Observatory, XMM-Newton, and lower than other gamma-ray instruments such Fermi and HESS. Photons in INTEGRAL's energy range are emitted by relativistic and supra-thermal particles in violent sources, radioactivity from unstable isotopes produced during nucleosynthesis, X-ray binaries, and astronomical transients of all types, including gamma-ray bursts. The spacecraft's instruments have very wide fields of view, which is particularly useful for detecting gamma-ray emission from transient sources as they can continuously monitor large parts of the sky. INTEGRAL is an ESA mission with additional contributions from European member states including Italy, France, Germany, and Spain. Cooperation partners are the Russian Space Agency with IKI (military CP Command Punkt KW) and NASA. As of June 2023, INTEGRAL continues to operate despite the loss of its thrusters through the use of its reaction wheels and solar radiation pressure. Mission Radiation more energetic than optical light, such as ultraviolet, X-rays, and gamma rays, cannot penetrate Earth's atmosphere, and direct observations must be made from space. INTEGRAL is an observatory, scientists can propose for observing time of their desired target regions, data are public after a proprietary period of up to one year. INTEGRAL was launched from the Russian Baikonur spaceport, in Kazakhstan. The 2002 launch aboard a Proton-DM2 rocket achieved a 3-day elliptical orbit with an apogee of nearly 160,000 km and a perigee of above 2,000 km, hence mostly beyond radiation belts which would otherwise lead to high instrumental backgrounds from charged-particle activation. The spacecraft and instruments are controlled from ESOC in Darmstadt, Germany, ESA's control centre, through ground stations in Belgium (Redu) and California (Goldstone). 2015: Fuel usage is much lower than predictions. INTEGRAL has far exceeded its 2+3-year planned lifetime, and is set to enter Earth atmosphere in 2029 as a definite end of the mission. Its orbit was adjusted in Jan/Feb 2015 to cause such a safe (southern) reentry (due to lunar/solar perturbations, predicted for 2029), using half the remaining fuel then. In July 2020 INTEGRAL put itself in safe-mode, and it seemed the thrusters had failed. Since then alternative algorithms to slew and unload the reaction wheels have been developed and tested. In September 2021 a single event upset triggered a sequence of events that put INTEGRAL into an uncontrolled tumbling state, considered to be a 'mission critical anomaly'. The operations team used the reaction wheels to recover attitude control. In March 2023, INTEGRAL science operations were extended to the end of 2024, which will be followed by a two-year post-operations phase and further monitoring of the spacecraft until its estimated reentry in February 2029. Also in March 2023, a new software based safe mode was tested that would use reaction wheels (rather than the failed thrusters). Spacecraft The spacecraft body ("service module") is a copy of the XMM-Newton body. 
This saved development costs and simplified integration with infrastructure and ground facilities. An adapter was necessary to mate with the different launch vehicle, though. However, the denser instruments used for gamma rays and hard X-rays make INTEGRAL the heaviest scientific payload ever flown by ESA. The body is constructed largely of composites. Propulsion is by a hydrazine monopropellant system, containing 544 kg of fuel in four exposed tanks. The titanium tanks were charged with gas to 24 bar (2.4 MPa) at 30 °C, and have tank diaphragms. Attitude control is via a star tracker, multiple Sun sensors (ESM), and multiple momentum wheels. The dual solar arrays, spanning 16 meters when deployed and producing 2.4 kW at beginning of life (BoL), are backed up by dual nickel-cadmium battery sets. The instrument structure ("payload module") is also composite. A rigid base supports the detector assemblies, and an H-shaped structure holds the coded masks approximately 4 meters above their detectors. The payload module can be built and tested independently from the service module, reducing cost. Alenia Spazio (now Thales Alenia Space Italia) was the spacecraft prime contractor. Instruments Four instruments with large fields-of-view are co-aligned on this platform, to study targets across such a wide energy range of almost two orders of magnitude in energy (other astronomy instruments in X-rays or optical cover much smaller ranges of factors of a few at most). Imaging is achieved by coded masks casting a shadowgram onto pixelised cameras; the tungsten masks were provided by the University of Valencia, Spain. The INTEGRAL imager, IBIS (Imager on-Board the INTEGRAL Satellite) observes from 15 keV (hard X-rays) to 10 MeV (gamma rays). Angular resolution is 12 arcmin, enabling a bright source to be located to better than 1 arcmin. A 95 x 95 mask of rectangular tungsten tiles sits 3.2 meters above the detectors. The detector system contains a forward plane of 128 x 128 Cadmium-Telluride tiles (ISGRI- Integral Soft Gamma-Ray Imager), backed by a 64 x 64 plane of Caesium-Iodide tiles (PICsIT- Pixellated Caesium-Iodide Telescope). ISGRI is sensitive up to 1 MeV, while PICsIT extends to 10 MeV. Both are surrounded by passive shields of tungsten and lead. IBIS was provided by PI institutes in Rome/Italy and Paris/France. The spectrometer aboard INTEGRAL is SPI, the SPectrometer of INTEGRAL. It was conceived and assembled by the French Space Agency CNES, with PI institutes in Toulouse/France and Garching/Germany. It observes radiation between 20 keV and 8 MeV. SPI has a coded mask of hexagonal tungsten tiles, above a detector plane of 19 germanium crystals (also packed hexagonally). The high energy resolution of 2 keV at 1 MeV is capable to resolve all candidate gamma-ray lines. The Ge crystals are actively cooled with a mechanical system of Stirling coolers to about 80K. IBIS and SPI use active detectors to detect and veto charged particles that lead to background radiation. The SPI ACS (AntiCoincidence Shield) consists of a BGO scintillator blocks surrounding the camera and aperture, detecting all charged particles, and photons exceeding an energy of about 75 keV, that would hit the instrument from directions different from the aperture. A thin layer of plastic scintillator behind the tungsten tiles serves as additional charged-particle detector within the aperture. The large effective area of the ACS turned out to be useful as an instrument in its own right. 
Its all-sky coverage and sensitivity make it a natural gamma-ray burst detector, and a valued component of the IPN (InterPlanetary Network). Dual JEM-X units provide additional information on sources at soft and hard X-rays, from 3 to 35 keV. Aside from broadening the spectral coverage, imaging is more precise due to the shorter wavelength. Detectors are gas scintillators (xenon plus methane) in a microstrip layout, below a mask of hexagonal tiles. INTEGRAL includes an Optical Monitor (OMC) instrument, sensitive from 500 to 580 nm. It acts as both a framing aid, and can note the activity and state of some brighter targets, e.g. it had been useful to monitor supernova light over months from SN2014J. The spacecraft also includes a radiation monitor, INTEGRAL Radiation Environment Monitor (IREM), to note the orbital background for calibration purposes. IREM has an electron and a proton channel, though radiation up to cosmic rays can be sensed. Should the background exceed a preset threshold, IREM can shut down the instruments. Scientific results INTEGRAL contributes to multi-messenger astronomy, detecting gamma rays from the first merger of two neutron stars observed in gravitational waves, and from a fast radio burst. By 2018, approximately 5,600 scientific papers had been published, averaging one every 29 hours since the launch. See also BOOTES List of X-ray space telescopes References External links INTEGRAL at ESA (archived in 2013) INTEGRAL overview at CNES (French Space Agency) Integral operations page at ESA. Says "the currently planned end of mission is December 2014" ! INTEGRAL at the ISDC (INTEGRAL Science Data Centre) INTEGRAL Mission Profile by NASA's Solar System Exploration NSSDC overview page SPI/INTEGRAL more information on SPI the spectrometer for INTEGRAL A Catalogue of INTEGRAL Sources INTEGRAL Sources identified through optical and near-infrared spectroscopy INTEGRAL article on eoPortal by ESA European Space Agency satellites Space telescopes X-ray telescopes Gamma-ray telescopes Spacecraft launched in 2002 Explorers Program
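IBIS, SPI and JEM-X all image by the coded-mask principle described in the Instruments section: each sky direction projects a shifted copy of the mask onto the detector, and the sky is recovered by correlating the recorded shadowgram with the mask pattern. The one-dimensional toy below illustrates only that principle; the random mask, the array sizes and the simple balanced decoder are assumptions made for the demo and are not INTEGRAL's actual mask patterns or analysis software:

```python
# Toy 1-D coded-aperture imaging demo (principle only).
import numpy as np

rng = np.random.default_rng(1)
n = 211
mask = rng.integers(0, 2, n).astype(float)   # open (1) / opaque (0) mask elements
sky = np.zeros(n)
sky[60] = 100.0                              # a single point source at position 60

# Each source direction casts a shifted copy of the mask: circular convolution.
shadowgram = np.real(np.fft.ifft(np.fft.fft(sky) * np.fft.fft(mask)))

# Decode by cross-correlating the shadowgram with a balanced (+1/-1) mask.
decoder = 2.0 * mask - 1.0
sky_estimate = np.real(np.fft.ifft(np.fft.fft(shadowgram) * np.conj(np.fft.fft(decoder))))

print(int(np.argmax(sky_estimate)))          # peaks at the source position, 60
```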
INTEGRAL
[ "Astronomy" ]
1,932
[ "Space telescopes" ]
1,529,485
https://en.wikipedia.org/wiki/Neighbourhood%20%28mathematics%29
In topology and related areas of mathematics, a neighbourhood (or neighborhood) is one of the basic concepts in a topological space. It is closely related to the concepts of open set and interior. Intuitively speaking, a neighbourhood of a point is a set of points containing that point where one can move some amount in any direction away from that point without leaving the set. Definitions Neighbourhood of a point If X is a topological space and p is a point in X, then a neighbourhood of p is a subset V of X that includes an open set U containing p, that is, p ∈ U ⊆ V ⊆ X. This is equivalent to the point p belonging to the topological interior of V in X. The neighbourhood V need not be an open subset of X. When V is open (resp. closed, compact, etc.) in X, it is called an open neighbourhood (resp. closed neighbourhood, compact neighbourhood, etc.). Some authors require neighbourhoods to be open, so it is important to note their conventions. A set that is a neighbourhood of each of its points is open, since it can be expressed as the union of open sets containing each of its points. A closed rectangle is not a neighbourhood of all its points; points on the edges or corners of the rectangle are not contained in any open set that is contained within the rectangle. The collection of all neighbourhoods of a point is called the neighbourhood system at the point. Neighbourhood of a set If S is a subset of a topological space X, then a neighbourhood of S is a set V that includes an open set U containing S, that is, S ⊆ U ⊆ V ⊆ X. It follows that a set V is a neighbourhood of S if and only if it is a neighbourhood of all the points in S. Furthermore, V is a neighbourhood of S if and only if S is a subset of the interior of V. A neighbourhood of S that is also an open subset of X is called an open neighbourhood of S. The neighbourhood of a point is just a special case of this definition. In a metric space In a metric space M = (X, d), a set V is a neighbourhood of a point p if there exists an open ball with center p and radius r > 0, B(p; r) = {x ∈ X : d(x, p) < r}, such that B(p; r) is contained in V. V is called a uniform neighbourhood of a set S if there exists a positive number r such that for all elements p of S, B(p; r) is contained in V. Under the same condition, for r > 0, the r-neighbourhood S_r of a set S is the set of all points in X that are at distance less than r from S (or equivalently, S_r is the union of all the open balls of radius r that are centered at a point in S): S_r = ⋃_{p ∈ S} B(p; r). It directly follows that an r-neighbourhood is a uniform neighbourhood, and that a set is a uniform neighbourhood if and only if it contains an r-neighbourhood for some value of r. Examples Given the set of real numbers R with the usual Euclidean metric and a subset V defined as V := ⋃_{n ∈ N} B(n; 1/n), then V is a neighbourhood of the set N of natural numbers, but is not a uniform neighbourhood of this set. Topology from neighbourhoods The above definition is useful if the notion of open set is already defined. There is an alternative way to define a topology, by first defining the neighbourhood system, and then open sets as those sets containing a neighbourhood of each of their points. A neighbourhood system on X is the assignment of a filter N(x) of subsets of X to each x in X, such that the point x is an element of each U in N(x), and each U in N(x) contains some V in N(x) such that for each y in V, U is in N(y). One can show that both definitions are compatible, that is, the topology obtained from the neighbourhood system defined using open sets is the original one, and vice versa when starting out from a neighbourhood system. 
Uniform neighbourhoods In a uniform space S = (X, Φ), V is called a uniform neighbourhood of P if there exists an entourage U ∈ Φ such that V contains all points of X that are U-close to some point of P; that is, U[x] ⊆ V for all x ∈ P. Deleted neighbourhood A deleted neighbourhood of a point p (sometimes called a punctured neighbourhood) is a neighbourhood of p without {p}. For instance, the interval (−1, 1) = {y : −1 < y < 1} is a neighbourhood of p = 0 in the real line, so the set (−1, 0) ∪ (0, 1) = (−1, 1) \ {0} is a deleted neighbourhood of 0. A deleted neighbourhood of a given point is not in fact a neighbourhood of the point. The concept of deleted neighbourhood occurs in the definition of the limit of a function and in the definition of limit points (among other things). See also Notes References General topology Mathematical analysis
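The Examples paragraph above asserts that V = ⋃_{n∈N} B(n; 1/n) is a neighbourhood of the natural numbers but not a uniform one. Below is a minimal sketch of the standard argument, taking the natural numbers as {1, 2, 3, …}; the symbols r, x and m are introduced only for this verification:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $V = \bigcup_{n \in \mathbb{N}} B(n; \tfrac{1}{n})$.

\emph{Neighbourhood.} Every $n \in \mathbb{N}$ lies in the open set
$B(n; \tfrac{1}{n}) \subseteq V$, so $V$ is a neighbourhood of each point of
$\mathbb{N}$, hence of the set $\mathbb{N}$.

\emph{Not uniform.} Suppose some radius $r$ with $0 < r < 1$ (shrinking $r$ if
necessary) satisfied $B(n; r) \subseteq V$ for every $n$. Pick $n > 2/r$ and set
$x = n + \tfrac{r}{2} \in B(n; r)$. Then $d(x, n) = \tfrac{r}{2} > \tfrac{1}{n}$;
for $m < n$, $d(x, m) \ge 1 + \tfrac{r}{2} > 1 \ge \tfrac{1}{m}$; and for $m > n$,
$d(x, m) \ge 1 - \tfrac{r}{2} > \tfrac{1}{2} > \tfrac{1}{m}$. So $x$ lies in no
ball $B(m; \tfrac{1}{m})$, i.e.\ $x \notin V$, a contradiction. Hence $V$ is not
a uniform neighbourhood of $\mathbb{N}$.
\end{document}
```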
Neighbourhood (mathematics)
[ "Mathematics" ]
799
[ "General topology", "Mathematical analysis", "Topology" ]
1,529,560
https://en.wikipedia.org/wiki/LEGRI
The Low Energy Gamma-Ray Imager (LEGRI) was a payload for the first mission of the Spanish MINISAT platform, and was active from 1997 to 2002. The objective of LEGRI was to demonstrate the viability of HgI2 detectors for space astronomy, providing imaging and spectroscopic capabilities in the 10–100 keV range. LEGRI was successfully launched on April 21, 1997, on a Pegasus XL rocket. The instrument was activated on May 19, 1997. It was active until February 2002. The LEGRI system included the Detector Unit, Mask Unit, Power Supply, Digital Processing Unit, Star Sensor, and Ground Support Unit. The LEGRI consortium included: University of Valencia University of Southampton University of Birmingham Rutherford Appleton Laboratory Centro de Investigaciones Energéticas Medioambientales y Tecnológicas (Ciemat) INTA References External links Low Energy Gamma-Ray Imager (LEGRI) on the internet Space telescopes Gamma-ray telescopes 1997 in spaceflight
LEGRI
[ "Astronomy" ]
205
[ "Space telescopes" ]
1,529,621
https://en.wikipedia.org/wiki/Sixth%20Cambridge%20Survey%20of%20Radio%20Sources
The 6C Survey of Radio Sources (6C) is an astronomical catalogue of celestial radio sources as measured at 151-MHz. It was published between 1985 and 1993 by the Radio Astronomy Group of the University of Cambridge. The research that led to the catalogue's production also led to improvements in radio telescope design and, in due course, to the 7C survey of radio sources. A similar survey of the Southern Hemisphere was made by the Mauritius Radio Telescope. References 6
Sixth Cambridge Survey of Radio Sources
[ "Astronomy" ]
96
[ "Astronomical catalogue stubs", "Astronomy stubs" ]
1,529,775
https://en.wikipedia.org/wiki/Planck%20%28spacecraft%29
Planck was a space observatory operated by the European Space Agency (ESA) from 2009 to 2013. It was an ambitious project that aimed to map the anisotropies of the cosmic microwave background (CMB) at microwave and infrared frequencies, with high sensitivity and angular resolution. The mission was highly successful and substantially improved upon observations made by the NASA Wilkinson Microwave Anisotropy Probe (WMAP). The Planck observatory was a major source of information relevant to several cosmological and astrophysical issues. One of its key objectives was to test theories of the early Universe and the origin of cosmic structure. The mission provided significant insights into the composition and evolution of the Universe, shedding light on the fundamental physics that governs the cosmos. Planck was initially called COBRAS/SAMBA, which stands for the Cosmic Background Radiation Anisotropy Satellite/Satellite for Measurement of Background Anisotropies. The project started in 1996, and it was later renamed in honor of the German physicist Max Planck (1858–1947), who is widely regarded as the originator of quantum theory by deriving the formula for black-body radiation. Built at the Cannes Mandelieu Space Center by Thales Alenia Space, Planck was created as a medium-sized mission for ESA's Horizon 2000 long-term scientific program. The observatory was launched in May 2009 and reached the Earth/Sun L2 point by July 2009. By February 2010, it had successfully started a second all-sky survey. On 21 March 2013, the Planck team released its first all-sky map of the cosmic microwave background. The map was of exceptional quality and allowed researchers to measure temperature variations in the CMB with unprecedented accuracy. In February 2015, an expanded release was published, which included polarization data. The final papers by the Planck team were released in July 2018, marking the end of the mission. At the end of its mission, Planck was put into a heliocentric graveyard orbit and passivated to prevent it from endangering any future missions. The final deactivation command was sent to Planck in October 2013. The mission was a remarkable success and provided the most precise measurements of several key cosmological parameters. Planck's observations helped determine the age of the universe, the average density of ordinary matter and dark matter in the Universe, and other important characteristics of the cosmos. Objectives The mission had a wide variety of scientific aims, including: high resolution detections of both the total intensity and polarization of primordial CMB anisotropies, creation of a catalogue of galaxy clusters through the Sunyaev–Zel'dovich effect, observations of the gravitational lensing of the CMB, as well as the integrated Sachs–Wolfe effect, observations of bright extragalactic radio (active galactic nuclei) and infrared (dusty galaxy) sources, observations of the Milky Way, including the interstellar medium, distributed synchrotron emission and measurements of the Galactic magnetic field, and studies of the Solar System, including planets, asteroids, comets and the zodiacal light. Planck had a higher resolution and sensitivity than WMAP, allowing it to probe the power spectrum of the CMB to much smaller scales (×3). It also observed in nine frequency bands rather than WMAP's five, with the goal of improving the astrophysical foreground models. 
It is expected that most Planck measurements have been limited by how well foregrounds can be subtracted, rather than by the detector performance or length of the mission, a particularly important factor for the polarization measurements. The dominant foreground radiation depends on frequency, but could include synchrotron radiation from the Milky Way at low frequencies, and dust at high frequencies. Instruments The spacecraft carries two instruments: the Low Frequency Instrument (LFI) and the High Frequency Instrument (HFI). Both instruments can detect both the total intensity and polarization of photons, and together cover a frequency range of nearly 830 GHz (from 30 to 857 GHz). The cosmic microwave background spectrum peaks at a frequency of 160.2 GHz. Planck passive and active cooling systems allow its instruments to maintain a temperature of , or 0.1 °C above absolute zero. From August 2009, Planck was the coldest known object in space, until its active coolant supply was exhausted in January 2012. NASA played a role in the development of this mission and contributes to the analysis of scientific data. Its Jet Propulsion Laboratory built components of the science instruments, including bolometers for the high-frequency instrument, a 20-kelvin cryocooler for both the low- and high-frequency instruments, and amplifier technology for the low-frequency instrument. Low Frequency Instrument The LFI has three frequency bands, covering the range of 30–70 GHz, covering the microwave to infrared regions of the electromagnetic spectrum. The detectors use high-electron-mobility transistors. High Frequency Instrument The HFI was sensitive between 100 and 857 GHz, using 52 bolometric detectors, manufactured by JPL/Caltech, optically coupled to the telescope through cold optics, manufactured by Cardiff University's School of Physics and Astronomy, consisting of a triple horn configuration and optical filters, a similar concept to that used in the Archeops balloon-borne experiment. These detection assemblies are divided into 6 frequency bands (centred at 100, 143, 217, 353, 545 and 857 GHz), each with a bandwidth of 33%. Of these six bands, only the lower four have the capability to measure the polarisation of incoming radiation; the two higher bands do not. On 13 January 2012, it was reported that the on-board supply of helium-3 used in Planck dilution refrigerator had been exhausted, and that the HFI would become unusable within a few days. By this date, Planck had completed five full scans of the CMB, exceeding its target of two. The LFI (cooled by helium-4) was expected to remain operational for another six to nine months. Service module A common service module (SVM) was designed and built by Thales Alenia Space in its Turin plant, for both the Herschel Space Observatory and Planck missions, combined into one single program. The overall cost is estimated to be for the Planck and for the Herschel mission. Both figures include their mission's spacecraft and payload, (shared) launch and mission expenses, and science operations. Structurally, the Herschel and Planck SVMs are very similar. Both SVMs are octagonal in shape and each panel is dedicated to accommodate a designated set of warm units, while taking into account the dissipation requirements of the different warm units, of the instruments, as well as the spacecraft. 
On both spacecraft, a common design was used for the avionics, attitude control and measurement (ACMS), command and data management (CDMS), power, and tracking, telemetry and command (TT&C) subsystems. All units on the SVM are redundant. Power Subsystem On each spacecraft, the power subsystem consists of a solar array, employing triple-junction solar cells, a battery and the power control unit (PCU). The PCU is designed to interface with the 30 sections of each solar array, to provide a regulated 28 volt bus, to distribute this power via protected outputs, and to handle the battery charging and discharging. For Planck, the circular solar array is fixed on the bottom of the satellite, always facing the Sun as the satellite rotates on its vertical axis. Attitude and Orbit Control This function is performed by the attitude control computer (ACC), which is the platform for the attitude control and measurement subsystem (ACMS). It was designed to fulfil the pointing and slewing requirements of the Herschel and Planck payloads. The Planck satellite rotates at one revolution per minute, with an aim of an absolute pointing error less than 37 arc-minutes. As Planck is also a survey platform, there is the additional requirement for pointing reproducibility error less than 2.5 arc-minutes over 20 days. The main line-of-sight sensor in both Herschel and Planck is the star tracker. Launch and orbit The satellite was successfully launched, along with the Herschel Space Observatory, at 13:12:02 UTC on 14 May 2009 aboard an Ariane 5 ECA heavy launch vehicle from the Guiana Space Centre. The launch placed the craft into a very elliptical orbit (perigee: , apogee: more than ), bringing it near the Lagrangian point of the Earth-Sun system, from the Earth. The manoeuvre to inject Planck into its final orbit around was successfully completed on 3 July 2009, when it entered a Lissajous orbit with a radius around the Lagrangian point. The temperature of the High Frequency Instrument reached just a tenth of a degree above absolute zero (0.1 K) on 3 July 2009, placing both the Low Frequency and High Frequency Instruments within their cryogenic operational parameters, making Planck fully operational. Decommissioning In January 2012 the HFI exhausted its supply of liquid helium, causing the detector temperature to rise and rendering the HFI unusable. The LFI continued to be used until science operations ended on 3 October 2013. The spacecraft performed a manoeuvre on 9 October to move it away from Earth and its , placing it into a heliocentric orbit, while payload deactivation occurred on 19 October. Planck was commanded on 21 October to exhaust its remaining fuel supply; passivation activities were conducted later, including battery disconnection and the disabling of protection mechanisms. The final deactivation command, which switched off the spacecraft's transmitter, was sent to Planck on 23 October 2013 at 12:10:27 UTC. Results Planck started its First All-Sky Survey on 13 August 2009. In September 2009, the European Space Agency announced the preliminary results from the Planck First Light Survey, which was performed to demonstrate the stability of the instruments and the ability to calibrate them over long periods. The results indicated that the data quality is excellent. On 15 January 2010 the mission was extended by 12 months, with observation continuing until at least the end of 2011. After the successful conclusion of the First Survey, the spacecraft started its Second All Sky Survey on 14 February 2010. 
The last observations for the Second All-Sky Survey were made on 28 May 2010. Some planned pointing list data from 2009 have been released publicly, along with a video visualization of the surveyed sky. On 17 March 2010, the first Planck photos were published, showing dust concentrations within 500 light-years of the Sun. On 5 July 2010, the Planck mission delivered its first all-sky image. The first public scientific result of Planck was the Early-Release Compact-Source Catalogue, released during the January 2011 Planck conference in Paris. On 5 May 2014 a map of the galaxy's magnetic field created using Planck was published. The Planck team and principal investigators Nazzareno Mandolesi and Jean-Loup Puget shared the 2018 Gruber Prize in Cosmology. Puget was also awarded the 2018 Shaw Prize in Astronomy. 2013 data release On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background. This map suggests the Universe is slightly older than previously thought: according to the map, subtle fluctuations in temperature were imprinted on the deep sky when the Universe was about 370,000 years old. The imprint reflects ripples that arose as early in the existence of the Universe as the first nonillionth (10−30) of a second. It is theorised that these ripples gave rise to the present vast cosmic web of galactic clusters and dark matter. According to the team, the Universe is about 13.8 billion years old, and contains roughly 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. The Hubble constant was also measured, at approximately 67.8 km/s per megaparsec. 2015 data release Results from an analysis of Planck's full mission were made public on 1 December 2014 at a conference in Ferrara, Italy. A full set of papers detailing the mission results was released in February 2015. Some of the results include: Closer agreement with previous WMAP results on parameters such as the density and distribution of matter in the Universe, as well as more accurate results with less margin of error. Confirmation of the Universe having a 26% content of dark matter. These results also raise related questions about the positron excess over electrons detected by the Alpha Magnetic Spectrometer, an experiment on the International Space Station. Previous research suggested that positrons could be created by the collision of dark matter particles, which could only occur if the probability of dark matter collisions is significantly higher now than in the early Universe. Planck data suggests that the probability of such collisions must remain constant over time to account for the structure of the Universe, negating the previous theory. Validation of the simplest models of inflation, thus giving the Lambda-CDM model stronger support. That there are likely only three types of neutrinos, with a fourth proposed sterile neutrino unlikely to exist. Project scientists also worked with BICEP2 scientists to release joint research in 2015 addressing whether a signal detected by BICEP2 was evidence of primordial gravitational waves or simply background noise from dust in the Milky Way galaxy. Their results suggest the latter. 
2018 final data release See also DustPedia Lambda-CDM model List of cosmological computation software Observational cosmology Physical cosmology Terahertz radiation References Further reading External links ESA Planck mission website Planck science website Planck operations website Planck science results website NASA Planck mission website NASA/IPAC Planck archive European Space Agency space probes Cosmic microwave background experiments Space telescopes Infrared telescopes Submillimetre telescopes Derelict satellites in heliocentric orbit Spacecraft using Lissajous orbits Space probes launched in 2009 2013 disestablishments Max Planck
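The 160.2 GHz peak quoted in the Instruments section above can be reproduced from the blackbody law expressed per unit frequency. The short sketch below is illustrative only; it assumes the standard CMB temperature of about 2.725 K, which is not stated in this article.

import math

h = 6.62607015e-34   # Planck constant, J·s
k = 1.380649e-23     # Boltzmann constant, J/K
T = 2.725            # assumed CMB temperature, K

# The peak of the blackbody spectrum per unit frequency occurs where
# x = h*nu/(k*T) satisfies 3*(1 - exp(-x)) = x, i.e. x ≈ 2.821.
x = 3.0
for _ in range(50):                    # simple fixed-point iteration
    x = 3.0 * (1.0 - math.exp(-x))

nu_peak_ghz = x * k * T / h / 1e9
print(f"x ≈ {x:.3f}, peak frequency ≈ {nu_peak_ghz:.1f} GHz")   # ≈ 160.2 GHz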
Planck (spacecraft)
[ "Astronomy" ]
2,848
[ "Space telescopes" ]
1,529,781
https://en.wikipedia.org/wiki/Palomar%20Testbed%20Interferometer
The Palomar Testbed Interferometer (PTI) was a near infrared, long-baseline stellar interferometer located at Palomar Observatory in north San Diego County, California, United States. It was built by Caltech and the Jet Propulsion Laboratory and was intended to serve as a testbed for developing interferometric techniques to be used at the Keck Interferometer. It began operations in 1995 and achieved routine operations in 1998, producing more than 50 refereed papers in a variety of scientific journals covering topics from high precision astrometry to stellar masses, stellar diameters and shapes. PTI concluded operations in 2008 and has since been dismantled. PTI was notable for being equipped with a "dual-star" system, making it possible to simultaneously observe pairs of stars; this cancels some of the atmospheric effects of astronomical seeing and makes very high precision measurements possible. A groundbreaking study with the Palomar Testbed Interferometer revealed that the star Altair is not spherical, but is rather flattened at the poles due to its high rate of rotation. See also List of astronomical interferometers at visible and infrared wavelengths List of observatories References External links Palomar Testbed Interferometer (PTI) at NASA Exoplanet Science Institute. Palomar Testbed Interferometer (PTI) at Caltech Astronomy. Telescopes Interferometric telescopes Palomar Observatory de:Palomar-Observatorium#Palomar-Testbed-Interferometer
Palomar Testbed Interferometer
[ "Astronomy" ]
302
[ "Telescopes", "Astronomical instruments" ]
1,529,808
https://en.wikipedia.org/wiki/Farrington%20Daniels
Farrington Daniels (March 8, 1889 – June 23, 1972) was an American physical chemist who is considered one of the pioneers of the modern direct use of solar energy. Biography Daniels was born in Minneapolis, Minnesota on March 8, 1889. Daniels began day school in 1895 at the Kenwood School and then moved on to the Douglas School. As a boy, he was fascinated with Thomas Edison, Samuel F. B. Morse, Alexander Graham Bell, and John Charles Fields. He decided early that he wanted to be an electrician and inventor. He attended Central and East Side high schools. By this point he liked chemistry and physics, but equally enjoyed "Manual Training." In 1906, he entered the University of Minnesota, majoring in chemistry and adding to the usual mathematics and analytical courses some courses in botany and scientific German. He was initiated into the Beta Chapter of Alpha Chi Sigma in 1908. He sometimes worked summers as a railroad surveyor. He took his degree in chemistry in 1910. The following year he spent half his time in teaching and received an MS for graduate work in physical chemistry. He entered Harvard in 1911, paying for his studies partly through a teaching fellowship, and received a PhD in 1914. His doctoral research on the electrochemistry of thallium alloys was supervised by Theodore William Richards. In the summer of 1912, Daniels had visited England and Europe. After he earned his PhD, Harvard would have sent him on a traveling fellowship in Europe, but World War I broke out. So instead he accepted a position as instructor at the Worcester Polytechnic Institute, where, besides teaching, he found he had considerable time for research in calorimetry, for which he received a grant from the American Academy of Arts and Sciences. He joined the University of Wisconsin as an assistant professor in 1920, and remained until his retirement in 1959 as chairman of the chemistry department. During World War II, Daniels joined the staff of the Metallurgical Laboratory, a part of the Manhattan Project effort by the United States to develop the first nuclear weapons. He served first as associate director of the laboratory's chemistry division from the summer of 1944 before becoming overall director of the laboratory on July 1, 1945, a post he held until May 1946. He was active in the planning of the laboratory's immediate successor, the Argonne National Laboratory, serving as first chairman of its Board of Governors from 1946 until 1948. It was in that role, in 1947, that Daniels conceived the pebble bed reactor, a reactor design in which helium rises through fissioning uranium oxide or carbide pebbles, cooling them by carrying away heat for power production. The "Daniels' pile" was an early version of the high-temperature gas-cooled reactor; it was developed further at Oak Ridge National Laboratory (ORNL) without success, but the concept was later developed into a nuclear power reactor by Rudolf Schulten. After the war, Daniels became concerned with limiting or stopping the nuclear arms race. In that regard, he became a board member of the Bulletin of the Atomic Scientists. Daniels is also known for writing several textbooks on physical chemistry, including Mathematical preparation for physical chemistry (1928), Experimental physical chemistry, co-authored with J. Howard Mathews and John Warren (1934), Chemical Kinetics (1938), and Physical Chemistry, co-authored with Robert Alberty (1957). Some of these books went through many subsequent editions until about 1980. 
He was elected a Fellow of the American Association for the Advancement of Science (AAAS) in 1928. He was elected to the United States National Academy of Sciences in 1947 and the American Philosophical Society in 1948. He was awarded the Priestley Medal and elected to the American Academy of Arts and Sciences in 1957. Daniels died on June 23, 1972, from complications of liver cancer. He was survived by his wife, four children, and twelve grandchildren. He was inducted posthumously into the Alpha Chi Sigma Hall of Fame in 1982. Involvement with solar energy Daniels became a leading American expert on the principles involved in the practical utilization of solar energy. He pursued an understanding of the heat and convection that can be derived from solar energy, as well as the electrical energy that can be generated from it. As Director of the University of Wisconsin–Madison's Solar Energy Laboratory, he explored such areas of practical application as cooking, space heating, agricultural and industrial drying, distillation, cooling and refrigeration, and photo- and thermo-electric conversion, and he was also interested in energy storage. In particular, he believed there were many practical applications of solar energy for ready use in the developing world. Daniels was active with the Association for Applied Solar Energy in the mid-1950s. He suggested that AFASE embark upon the publication of a scientific journal, and the first issue of The Journal of Solar Energy Science and Engineering appeared in January 1957. Later, as Professor Emeritus of Chemistry of the University of Wisconsin–Madison, he led a group of solar scientists who proposed that AFASE be reorganized, that its directors and officers be elected by the membership, and that the name be changed to The Solar Energy Society – all of which was done. He supported solar energy because, as he said in 1955, "We realize, as never before, that our fossil fuels – coal, oil, and gas – will not last forever." One of his classic books is Direct Use of the Sun's Energy, published by Yale University Press in 1964. The book was reprinted in a mass market edition in 1974 by Ballantine Books, after the 1973 oil crisis, and was described as "The best book on solar energy that I know of" by the Whole Earth Catalog's Steve Baer. References External links National Academy of Sciences Biographical Memoir 1889 births 1972 deaths American Congregationalists 20th-century American engineers American physical chemists Harvard University alumni University of Minnesota College of Liberal Arts alumni University of Wisconsin–Madison faculty Manhattan Project people Worcester Polytechnic Institute faculty People associated with renewable energy Presidents of the Geochemical Society Fellows of the American Association for the Advancement of Science Members of the American Philosophical Society 20th-century American chemists
Farrington Daniels
[ "Chemistry" ]
1,224
[ "Geochemists", "Presidents of the Geochemical Society" ]
1,529,885
https://en.wikipedia.org/wiki/Uniform%20property
In the mathematical field of topology a uniform property or uniform invariant is a property of a uniform space that is invariant under uniform isomorphisms. Since uniform spaces come as topological spaces and uniform isomorphisms are homeomorphisms, every topological property of a uniform space is also a uniform property. This article is (mostly) concerned with uniform properties that are not topological properties. Uniform properties Separated. A uniform space X is separated if the intersection of all entourages is equal to the diagonal in X × X. This is actually just a topological property, and equivalent to the condition that the underlying topological space is Hausdorff (or simply T0 since every uniform space is completely regular). Complete. A uniform space X is complete if every Cauchy net in X converges (i.e. has a limit point in X). Totally bounded (or Precompact). A uniform space X is totally bounded if for each entourage E ⊂ X × X there is a finite cover {Ui} of X such that Ui × Ui is contained in E for all i. Equivalently, X is totally bounded if for each entourage E there exists a finite subset {xi} of X such that X is the union of all E[xi]. In terms of uniform covers, X is totally bounded if every uniform cover has a finite subcover. Compact. A uniform space is compact if it is complete and totally bounded. Despite the definition given here, compactness is a topological property and so admits a purely topological description (every open cover has a finite subcover). Uniformly connected. A uniform space X is uniformly connected if every uniformly continuous function from X to a discrete uniform space is constant. Uniformly disconnected. A uniform space X is uniformly disconnected if it is not uniformly connected. See also Topological property References Uniform spaces
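A standard worked example, not taken from the article above, makes the distinction between uniform and topological properties concrete: the open interval and the real line are homeomorphic, yet their usual metric uniformities differ in completeness and total boundedness.

% Illustrative example (standard, not from the article): uniform properties can
% distinguish spaces that are homeomorphic but not uniformly isomorphic.
\[
(0,1) \cong \mathbb{R} \quad \text{(homeomorphic as topological spaces)},
\]
\[
(0,1) \text{ is totally bounded but not complete: } x_n = \tfrac{1}{n} \text{ is Cauchy with no limit in } (0,1),
\]
\[
\mathbb{R} \text{ is complete but not totally bounded: no finite family of sets of diameter } < 1 \text{ covers } \mathbb{R},
\]
\[
[0,1] \text{ is both complete and totally bounded, and hence compact.}
\]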
Uniform property
[ "Mathematics" ]
372
[ "Uniform spaces", "Topological spaces", "Topology", "Space (mathematics)" ]
1,529,966
https://en.wikipedia.org/wiki/Zone%20Routing%20Protocol
Zone Routing Protocol, or ZRP is a hybrid wireless networking routing protocol that uses both proactive and reactive routing protocols when sending information over the network. ZRP was designed to speed up delivery and reduce processing overhead by selecting the most efficient type of protocol to use throughout the route. How ZRP works If a packet's destination is in the same zone as the origin, the proactive protocol using an already stored routing table is used to deliver the packet immediately. If the route extends outside the packet's originating zone, a reactive protocol takes over to check each successive zone in the route to see whether the destination is inside that zone. This reduces the processing overhead for those routes. Once a zone is confirmed as containing the destination node, the proactive protocol, or stored route-listing table, is used to deliver the packet. In this way packets with destinations within the same zone as the originating zone are delivered immediately using a stored routing table. Packets delivered to nodes outside the sending zone avoid the overhead of checking routing tables along the way by using the reactive protocol to check whether each zone encountered contains the destination node. Thus ZRP reduces the control overhead for longer routes that would be necessary if using proactive routing protocols throughout the entire route, while eliminating the delays for routing within a zone that would be caused by the route-discovery processes of reactive routing protocols. Details What is called the Intra-zone Routing Protocol (IARP), or a proactive routing protocol, is used inside routing zones. What is called the Inter-zone Routing Protocol (IERP), or a reactive routing protocol, is used between routing zones. IARP uses a routing table. Since this table is already stored, this is considered a proactive protocol. IERP uses a reactive protocol. Any route to a destination that is within the same local zone is quickly established from the source's proactively cached routing table by IARP. Therefore, if the source and destination of a packet are in the same zone, the packet can be delivered immediately. Most existing proactive routing algorithms can be used as the IARP for ZRP. In ZRP a zone is defined around each node, called the node's k-neighborhood, which consists of all nodes within k hops of the node. Border nodes are nodes which are exactly k hops away from a source node. For routes beyond the local zone, route discovery happens reactively. The source node sends a route request to the border nodes of its zone, containing its own address, the destination address and a unique sequence number. Each border node checks its local zone for the destination. If the destination is not a member of this local zone, the border node adds its own address to the route request packet and forwards the packet to its own border nodes. If the destination is a member of the local zone, it sends a route reply on the reverse path back to the source. The source node uses the path saved in the route reply packet to send data packets to the destination. References Haas, Z. J., 1997 (ps). A new routing protocol for the reconfigurable wireless networks. Retrieved 2011-05-06. The ZRP internet-draft The BRP internet-draft Wireless Networking Wireless networking Ad hoc routing protocols
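The hybrid behaviour described above can be summarised in a small simulation. The sketch below is an illustrative simplification under several assumptions, not an implementation of the actual internet-draft: node and function names are invented, the "bordercast" is modelled as simple recursive forwarding to border nodes, and the accumulated route lists successive border nodes, with hops inside each zone left to the local IARP table.

from collections import deque

def k_hop_zone(adjacency, source, k):
    # Proactive (IARP) view: breadth-first search bounded to k hops gives the
    # node's routing zone; nodes exactly k hops away are its border nodes.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if dist[node] == k:
            continue
        for neighbour in adjacency[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    zone = set(dist)
    border = {n for n, d in dist.items() if d == k}
    return zone, border

def zrp_route(adjacency, source, dest, k, path=None, visited=None):
    # Reactive (IERP) step: answer immediately if dest lies in the local zone,
    # otherwise forward the route request to border nodes not yet queried.
    path = (path or []) + [source]
    visited = visited or set()
    visited.add(source)
    zone, border = k_hop_zone(adjacency, source, k)
    if dest in zone:
        return path + [dest]
    for border_node in border - visited:
        found = zrp_route(adjacency, border_node, dest, k, path, visited)
        if found:
            return found
    return None

# Toy six-node chain topology with zone radius k = 2.
net = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
       "D": ["C", "E"], "E": ["D", "F"], "F": ["E"]}
print(zrp_route(net, "A", "F", k=2))   # ['A', 'C', 'E', 'F'] (successive border nodes)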
Zone Routing Protocol
[ "Technology", "Engineering" ]
664
[ "Wireless networking", "Computer networks engineering" ]
1,530,008
https://en.wikipedia.org/wiki/Haraldur%20Sigur%C3%B0sson
Haraldur Sigurðsson or Haraldur Sigurdsson (born May 31, 1939) is an Icelandic volcanologist and geochemist. Education Sigurdsson was born in Stykkishólmur in western Iceland. He studied geology and geochemistry in the United Kingdom, where he obtained a Bachelor of Science (BSc) degree from Queen's University, Belfast, followed by a PhD under the supervision of George Malcolm Brown from Durham University in 1970. Career and research Sigurdsson worked on monitoring and research of the volcanoes of the Caribbean until 1974, when he was appointed professor at the Graduate School of Oceanography, University of Rhode Island. He is best known for his work on the reconstruction of major volcanic eruptions of the past, including the eruption of Vesuvius in 79 AD in Italy and the consequent destruction of the Roman cities of Pompeii and Herculaneum. In 1991, Sigurdsson discovered tektite glass spherules at the Cretaceous–Paleogene boundary (K–T boundary) in Haiti, providing evidence for a meteorite impact at the time of the extinction of the dinosaurs. In 2004 he discovered the lost town of Tambora in Indonesia, which was buried by the colossal 1815 explosive eruption of Tambora volcano. In 1999, Sigurdsson published a scholarly account of the history of volcanology. He was also editor-in-chief of the Encyclopedia of Volcanoes, published the same year. He was awarded the Coke Medal of the Geological Society of London in 2004. Sigurdsson was a key scientist in uncovering the causes of the lake overturns that took the lives of entire villages near Lake Monoun and Lake Nyos in Cameroon. His story was popularized by the YouTuber MrBallen in an episode in January 2023. Active blogs Sigurdsson has in recent years been active in blogging in Icelandic on various issues related to his science, geology and geochemistry. There he has also been active in criticizing the US government, world capitalism and the activities of Chinese companies in the Arctic. He openly supports the left movement in the United States. Sigurdsson has written on the Solarsilicon Project being developed in Iceland by the US company Silicor Materials Inc. and its pollution, as well as other environmental issues including global warming. Publications References Icelandic volcanologists Icelandic geochemists Alumni of Queen's University Belfast 20th-century Icelandic scientists 21st-century Icelandic scientists
Haraldur Sigurðsson
[ "Chemistry" ]
523
[ "Geochemists", "Icelandic geochemists" ]
1,530,070
https://en.wikipedia.org/wiki/NT%20%28cassette%29
NT (sometimes marketed under the name Scoopman) is a digital memo recording system introduced by Sony in 1992. The NT system was introduced to compete with the Microcassette, introduced by Olympus, and the Mini-Cassette, by Philips. Design The system is based on R-DAT technology and stores memos using helical scan on special microcassettes with a tape width of 2.5 mm and a recording capacity of up to 120 minutes, similar to Digital Audio Tape. The cassettes are offered in three versions: the Sony NTC-60, -90, and -120, each describing the length of time (in minutes) the cassette can record. NT stands for Non-Tracking, meaning the head does not precisely follow the tracks on the tape. Instead, the head moves over the tape at approximately the correct angle and speed, but performs more than one pass over each track. The data in each track is stored on the tape in blocks with addressing information that enables reconstruction in memory from several passes. This considerably reduced the required mechanical precision, reducing the complexity, size, and cost of the recorder. Another feature of NT cassettes is Non-Loading, which means instead of having a mechanism to pull the tape out of the cassette and wrap it around the drum, the drum is pushed inside the cassette to achieve the same effect. This also significantly reduces the complexity, size, and cost of the mechanism. Audio sampling is in stereo at 32 kHz with 12-bit nonlinear quantization, corresponding to 17-bit linear quantization. Data written to the tape is packed into data blocks and encoded with LDM-2 low deviation modulation. Uses The Sony NT-1 Digital Micro Recorder, introduced in 1992, features a real-time clock that records a time signal on the digital track along with the sound data, making it useful for journalism, police and legal work. Due to the machine's buffer memory, it is capable of automatically reversing the tape direction at the end of the reel without an interruption in the sound. The recorder uses a single "AA"-size cell for primary power, plus a separate CR-1220 lithium cell to provide continuous power to the real-time clock. The Sony NT-2, an improved successor to the Sony NT-1 Digital Micro Recorder, introduced in 1996, was the final machine in the series. NT cassettes were used in the film industry and law enforcement, as the recording quality was superior to that of most portable audio recorders of the period. The data embedded in the recording, together with the use of a proprietary tape in a proprietary format, made it an excellent choice for law enforcement. As digital technology evolved and became accepted in US court systems, the NT2 was replaced by devices that recorded to internal drives and removable digital media. The new media were much more cost-effective and yielded premium-quality audio recordings. It was also easier to make court-admissible copies using media other than NT2 cassettes. Rebranded NT cassettes were used as the storage medium in the Datasonix Pereos backup system from 1994, claiming a capacity of up to 1.25 gigabytes per tape. Due to overhead and variable data compression ratios, the actual amount of data stored could be significantly below a gigabyte. See also References Digital electronics Sony products Audiovisual introductions in 1992 Discontinued media formats
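As a rough illustration of the figures quoted above (a back-of-envelope calculation, not a specification from this article), the raw audio payload of an NT tape can be estimated from the stated sampling parameters; formatting overhead, error correction and the Pereos system's compression mean the result is not directly comparable to the 1.25-gigabyte backup figure.

# Back-of-envelope payload arithmetic from the NT sampling parameters above.
channels = 2            # stereo
sample_rate = 32_000    # Hz
bits_per_sample = 12    # nonlinear quantization

audio_bit_rate = channels * sample_rate * bits_per_sample    # 768,000 bit/s
tape_seconds = 120 * 60                                      # 120-minute cassette
audio_megabytes = audio_bit_rate * tape_seconds / 8 / 1e6    # ~691 MB

print(f"audio bit rate: {audio_bit_rate / 1000:.0f} kbit/s")
print(f"raw audio on a 120-minute tape: {audio_megabytes:.0f} MB")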
NT (cassette)
[ "Engineering" ]
690
[ "Electronic engineering", "Digital electronics" ]
1,530,353
https://en.wikipedia.org/wiki/Mutation%20rate
In genetics, the mutation rate is the frequency of new mutations in a single gene, nucleotide sequence, or organism over time. Mutation rates are not constant and are not limited to a single type of mutation; there are many different types of mutations. Mutation rates are given for specific classes of mutations. Point mutations are a class of mutations which are changes to a single base. Missense, nonsense, and synonymous mutations are three subtypes of point mutations. The rate of these types of substitutions can be further subdivided into a mutation spectrum, which describes the influence of the genetic context on the mutation rate. There are several natural units of time for each of these rates, with rates being characterized either as mutations per base pair per cell division, per gene per generation, or per genome per generation. The mutation rate of an organism is an evolved characteristic and is strongly influenced by the genetics of each organism, in addition to the environment. The upper and lower limits to which mutation rates can evolve are the subject of ongoing investigation. However, the mutation rate does vary over the genome. An increased mutation rate in humans can lead to certain health risks, for example cancer and other hereditary diseases. Knowledge of mutation rates is therefore vital to understanding the development of cancers and many hereditary diseases. Background Different genetic variants within a species are referred to as alleles; therefore, a new mutation can create a new allele. In population genetics, each allele is characterized by a selection coefficient, which measures the expected change in an allele's frequency over time. The selection coefficient can either be negative, corresponding to an expected decrease, positive, corresponding to an expected increase, or zero, corresponding to no expected change. The distribution of fitness effects of new mutations is an important parameter in population genetics and has been the subject of extensive investigation. Although measurements of this distribution have been inconsistent in the past, it is now generally thought that the majority of mutations are mildly deleterious, that many have little effect on an organism's fitness, and that a few can be favorable. Because of natural selection, unfavorable mutations will typically be eliminated from a population, while favorable changes are generally kept for the next generation and neutral changes accumulate at the rate they are created by mutations. This process operates through reproduction: in a given generation, the 'best fit' survive with higher probability and pass their genes to their offspring. The sign of the change in this probability defines mutations as beneficial, neutral or harmful to organisms. Measurement An organism's mutation rates can be measured by a number of techniques. One way to measure the mutation rate is by the fluctuation test, also known as the Luria–Delbrück experiment. This experiment demonstrated that bacterial mutations arise in the absence of selection rather than in response to it. This is important for measuring mutation rates because it shows experimentally that mutations can occur without selection playing a role; mutation and selection are distinct evolutionary forces. Different DNA sequences can have different propensities to mutation (see below), so mutations may not occur randomly across the genome. 
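A concrete way to see how the fluctuation test yields a rate is the so-called P0 (null-class) estimator: if mutations arise at random during growth, the number of mutational events per culture is Poisson-distributed, so the fraction of parallel cultures containing no mutants gives the expected number of events, and dividing by the final population size gives an approximate per-cell, per-division rate. The numbers below are hypothetical and for illustration only.

import math

cultures_total = 40        # hypothetical number of parallel cultures
cultures_with_zero = 11    # hypothetical cultures showing no resistant mutants
final_cells = 2.0e8        # hypothetical final cell count per culture

p0 = cultures_with_zero / cultures_total
m = -math.log(p0)                    # expected mutational events per culture
mutation_rate = m / final_cells      # approximate rate per cell per division
print(f"m ≈ {m:.2f} events per culture, rate ≈ {mutation_rate:.1e}")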
The most commonly measured class of mutations are substitutions, because they are relatively easy to measure with standard analyses of DNA sequence data. However, substitutions have a substantially different rate of mutation (10−8 to 10−9 per generation for most cellular organisms) than other classes of mutation, which are frequently much higher (~10−3 per generation for satellite DNA expansion/contraction). Substitution rates Many sites in an organism's genome may admit mutations with small fitness effects. These sites are typically called neutral sites. Theoretically, mutations under no selection become fixed between organisms at precisely the mutation rate. Fixed synonymous mutations, i.e. synonymous substitutions, are changes to the sequence of a gene that do not change the protein produced by that gene. They are often used as estimates of that mutation rate, despite the fact that some synonymous mutations have fitness effects. As an example, mutation rates have been directly inferred from the whole genome sequences of experimentally evolved replicate lines of Escherichia coli B. Mutation accumulation lines A particularly labor-intensive way of characterizing the mutation rate is the mutation accumulation line. Mutation accumulation lines have been used to characterize mutation rates with the Bateman–Mukai method and direct sequencing of well-studied experimental organisms ranging from intestinal bacteria (E. coli) and roundworms (C. elegans) to yeast (S. cerevisiae), fruit flies (D. melanogaster), and small ephemeral plants (A. thaliana). Variation in mutation rates Mutation rates differ between species and even between different regions of the genome of a single species. Mutation rates can also differ even between genotypes of the same species; for example, bacteria have been observed to evolve hypermutability as they adapt to new selective conditions. These different rates of nucleotide substitution are measured in substitutions (fixed mutations) per base pair per generation. For example, mutations in intergenic, or non-coding, DNA tend to accumulate at a faster rate than mutations in DNA that is actively in use in the organism (gene expression). That is not necessarily due to a higher mutation rate, but to lower levels of purifying selection. A region which mutates at a predictable rate is a candidate for use as a molecular clock. If the rate of neutral mutations in a sequence is assumed to be constant (clock-like), and if most differences between species are neutral rather than adaptive, then the number of differences between two different species can be used to estimate how long ago the two species diverged (see molecular clock). In fact, the mutation rate of an organism may change in response to environmental stress. For example, UV light damages DNA, which may result in error-prone attempts by the cell to perform DNA repair. The human mutation rate is higher in the male germ line (sperm) than the female (egg cells), but estimates of the exact rate have varied by an order of magnitude or more. This means that a human genome accumulates around 64 new mutations per generation because each full generation involves a number of cell divisions to generate gametes. Human mitochondrial DNA has been estimated to have mutation rates of ~3× or ~2.7×10−5 per base per 20-year generation (depending on the method of estimation); these rates are considered to be significantly higher than rates of human genomic mutation at ~2.5×10−8 per base per generation. 
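The figure of roughly 64 new mutations per human generation quoted above is essentially the per-site substitution rate multiplied by the number of sites in a diploid genome. The short calculation below uses representative round numbers rather than values measured in this article, and adds the analogous molecular-clock arithmetic for divergence dating.

per_site_rate = 1.0e-8      # representative substitutions per site per generation
diploid_sites = 6.4e9       # roughly two copies of a ~3.2 Gb haploid genome

new_mutations = per_site_rate * diploid_sites
print(f"expected new mutations per generation ≈ {new_mutations:.0f}")   # ~64

# Molecular-clock style estimate: two lineages each accumulate neutral
# substitutions at rate mu, so divergence time ≈ d / (2 * mu) generations.
mu = 1.0e-8                 # per site per generation
d = 0.012                   # hypothetical per-site sequence divergence
print(f"divergence time ≈ {d / (2 * mu):.1e} generations")              # 6.0e5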
Using data available from whole genome sequencing, the human genome mutation rate is similarly estimated to be ~1.1×10−8 per site per generation. The rate for other forms of mutation also differs greatly from that of point mutations. An individual microsatellite locus often has a mutation rate on the order of 10−4, though this can differ greatly with length. Some sequences of DNA may be more susceptible to mutation. For example, stretches of DNA in human sperm which lack methylation are more prone to mutation. In general, the mutation rate in unicellular eukaryotes (and bacteria) is roughly 0.003 mutations per genome per cell generation. However, some species, especially ciliates of the genus Paramecium, have an unusually low mutation rate. For instance, Paramecium tetraurelia has a base-substitution mutation rate of ~2 × 10−11 per site per cell division. This is the lowest mutation rate observed in nature so far, being about 75× lower than in other eukaryotes with a similar genome size, and even 10× lower than in most prokaryotes. The low mutation rate in Paramecium has been explained by its transcriptionally silent germ-line nucleus, consistent with the hypothesis that replication fidelity is higher at lower gene expression levels. The highest per base pair per generation mutation rates are found in viruses, which can have either RNA or DNA genomes. DNA viruses have mutation rates between 10−6 and 10−8 mutations per base per generation, and RNA viruses have mutation rates between 10−3 and 10−5 per base per generation. Mutation spectrum A mutation spectrum is a distribution of rates or frequencies for the mutations relevant in some context, based on the recognition that rates of occurrence are not all the same. In any context, the mutation spectrum reflects the details of mutagenesis and is affected by conditions such as the presence of chemical mutagens or genetic backgrounds with mutator alleles or damaged DNA repair systems. The most fundamental and expansive concept of a mutation spectrum is the distribution of rates for all individual mutations that might happen in a genome. From this full de novo spectrum, for instance, one may calculate the relative rate of mutation in coding vs non-coding regions. Typically the concept of a spectrum of mutation rates is simplified to cover broad classes such as transitions and transversions, i.e., different mutational conversions across the genome are aggregated into classes, and there is an aggregate rate for each class. In many contexts, a mutation spectrum is defined as the observed frequencies of mutations identified by some selection criterion, e.g., the distribution of mutations associated clinically with a particular type of cancer, or the distribution of adaptive changes in a particular context such as antibiotic resistance. Whereas the spectrum of de novo mutation rates reflects mutagenesis alone, this kind of spectrum may also reflect effects of selection and ascertainment biases. Evolution The theory on the evolution of mutation rates identifies three principal forces involved: the generation of more deleterious mutations with higher mutation rates, the generation of more advantageous mutations with higher mutation rates, and the metabolic costs and reduced replication rates that are required to prevent mutations. Different conclusions are reached based on the relative importance attributed to each force. 
The optimal mutation rate of organisms may be determined by a trade-off between the costs of a high mutation rate, such as deleterious mutations, and the metabolic costs of maintaining systems to reduce the mutation rate (such as increasing the expression of DNA repair enzymes or, as reviewed by Bernstein et al., having increased energy use for repair, coding for additional gene products, and/or having slower replication). Secondly, higher mutation rates increase the rate of beneficial mutations, and evolution may prevent a lowering of the mutation rate in order to maintain optimal rates of adaptation. As such, hypermutation enables some cells to rapidly adapt to changing conditions and so prevent the entire population from becoming extinct. Finally, natural selection may fail to optimize the mutation rate because of the relatively minor benefits of lowering the mutation rate, and thus the observed mutation rate is the product of neutral processes. Studies have shown that treating RNA viruses such as poliovirus with ribavirin produces results consistent with the idea that the viruses mutated too frequently to maintain the integrity of the information in their genomes. This is termed error catastrophe. The characteristically high mutation rate of HIV (Human Immunodeficiency Virus), about 3×10−5 per base per generation, coupled with its short replication cycle, leads to high antigenic variability, allowing it to evade the immune system. See also Mutation Critical mutation rate Mutation frequency Dysgenics Allele frequency Rate of evolution Genetics Cancer References External links Mutation Evolutionary biology Temporal rates
Mutation rate
[ "Physics", "Biology" ]
2,322
[ "Temporal quantities", "Evolutionary biology", "Temporal rates", "Physical quantities" ]
1,530,478
https://en.wikipedia.org/wiki/Bioturbation
Bioturbation is defined as the reworking of soils and sediments by animals or plants. It includes burrowing, ingestion, and defecation of sediment grains. Bioturbating activities have a profound effect on the environment and are thought to be a primary driver of biodiversity. The formal study of bioturbation began in the 1800s with Charles Darwin's experiments in his garden. The disruption of aquatic sediments and terrestrial soils through bioturbating activities provides significant ecosystem services. These include the alteration of nutrients in aquatic sediment and overlying water, shelter for other species in the form of burrows in terrestrial and water ecosystems, and soil production on land. Bioturbators are deemed ecosystem engineers because they alter resource availability to other species through the physical changes they make to their environments. This type of ecosystem change affects the evolution of cohabitating species and the environment, which is evident in trace fossils left in marine and terrestrial sediments. Other bioturbation effects include altering the texture of sediments (diagenesis), bioirrigation, and displacement of microorganisms and non-living particles. Bioturbation is sometimes confused with the process of bioirrigation; however, these processes differ in what they mix: bioirrigation refers to the mixing of water and solutes in sediments and is an effect of bioturbation. Walruses, salmon, and pocket gophers are examples of large bioturbators. Although the activities of these large macrofaunal bioturbators are more conspicuous, the dominant bioturbators are small invertebrates, such as earthworms, polychaetes, ghost shrimp, mud shrimp, and midge larvae. The activities of these small invertebrates, which include burrowing and ingestion and defecation of sediment grains, contribute to mixing and the alteration of sediment structure. Functional groups Bioturbators have been organized by a variety of functional groupings based on either ecological characteristics or biogeochemical effects. While the prevailing categorization is based on the way bioturbators transport and interact with sediments, the various groupings likely stem from the relevance of a categorization mode to a field of study (such as ecology or sediment biogeochemistry) and an attempt to concisely organize the wide variety of bioturbating organisms in classes that describe their function. Examples of categorizations include those based on feeding and motility, feeding and biological interactions, and mobility modes. The most common set of groupings are based on sediment transport and are as follows: Gallery-diffusers create complex tube networks within the upper sediment layers and transport sediment through feeding, burrow construction, and general movement throughout their galleries. Gallery-diffusers are heavily associated with burrowing polychaetes, such as Nereis diversicolor and Marenzelleria spp. Biodiffusers transport sediment particles randomly over short distances as they move through sediments. Animals mostly attributed to this category include bivalves such as clams, and amphipod species, but can also include larger vertebrates, such as bottom-dwelling fish and rays that feed along the sea floor. Biodiffusers can be further divided into two subgroups, which include epifaunal (organisms that live on the surface sediments) biodiffusers and surface biodiffusers. This subgrouping may also include gallery-diffusers, reducing the number of functional groups. 
Upward-conveyors are oriented head-down in sediments, where they feed at depth and transport sediment through their guts to the sediment surface. Major upward-conveyor groups include burrowing polychaetes like the lugworm, Arenicola marina, and thalassinid shrimps. Downward-conveyor species are oriented with their heads towards the sediment-water interface and defecation occurs at depth. Their activities transport sediment from the surface to deeper sediment layers as they feed. Notable downward-conveyors include those in the peanut worm family, Sipunculidae. Regenerators are categorized by their ability to release sediment to the overlying water column, which is then dispersed as they burrow. After regenerators abandon their burrows, water flow at the sediment surface can push in and collapse the burrow. Examples of regenerator species include fiddler and ghost crabs. Ecological roles The evaluation of the ecological role of bioturbators has largely been species-specific. However, their ability to transport solutes, such as dissolved oxygen, enhance organic matter decomposition and diagenesis, and alter sediment structure has made them important for the survival and colonization by other macrofaunal and microbial communities. Microbial communities are greatly influenced by bioturbator activities, as increased transport of more energetically favorable oxidants, such as oxygen, to typically highly reduced sediments at depth alters the microbial metabolic processes occurring around burrows. As bioturbators burrow, they also increase the surface area of sediments across which oxidized and reduced solutes can be exchanged, thereby increasing the overall sediment metabolism. This increase in sediment metabolism and microbial activity further results in enhanced organic matter decomposition and sediment oxygen uptake. In addition to the effects of burrowing activity on microbial communities, studies suggest that bioturbator fecal matter provides a highly nutritious food source for microbes and other macrofauna, thus enhancing benthic microbial activity. This increased microbial activity by bioturbators can contribute to increased nutrient release to the overlying water column. Nutrients released from enhanced microbial decomposition of organic matter, notably limiting nutrients, such as ammonium, can have bottom-up effects on ecosystems and result in increased growth of phytoplankton and bacterioplankton. Burrows offer protection from predation and harsh environmental conditions. For example, termites (Macrotermes bellicosus) burrow and create mounds that have a complex system of air ducts and evaporation devices that create a suitable microclimate in an unfavorable physical environment. Many species are attracted to bioturbator burrows because of their protective capabilities. The shared use of burrows has enabled the evolution of symbiotic relationships between bioturbators and the many species that utilize their burrows. For example, gobies, scale-worms, and crabs live in the burrows made by innkeeper worms. Social interactions provide evidence of co-evolution between hosts and their burrow symbionts. This is exemplified by shrimp-goby associations. Shrimp burrows provide shelter for gobies and gobies serve as a scout at the mouth of the burrow, signaling the presence of potential danger. In contrast, the blind goby Typhlogobius californiensis lives within the deep portion of Callianassa shrimp burrows where there is not much light. 
The blind goby is an example of a species that is an obligate commensalist, meaning their existence depends on the host bioturbator and its burrow. Although newly hatched blind gobies have fully developed eyes, their eyes become withdrawn and covered by skin as they develop. They show evidence of commensal morphological evolution because it is hypothesized that the lack of light in the burrows where the blind gobies reside is responsible for the evolutionary loss of functional eyes. Bioturbators can also inhibit the presence of other benthic organisms by smothering, exposing other organisms to predators, or resource competition. While thalassinidean shrimps can provide shelter for some organisms and cultivate interspecies relationships within burrows, they have also been shown to have strong negative effects on other species, especially those of bivalves and surface-grazing gastropods, because thalassinidean shrimps can smother bivalves when they resuspend sediment. They have also been shown to exclude or inhibit polychaetes, cumaceans, and amphipods. This has become a serious issue in the northwestern United States, as ghost and mud shrimp (thalassinidean shrimp) are considered pests to bivalve aquaculture operations. The presence of bioturbators can have both negative and positive effects on the recruitment of larvae of conspecifics (those of the same species) and those of other species, as the resuspension of sediments and alteration of flow at the sediment-water interface can affect the ability of larvae to burrow and remain in sediments. This effect is largely species-specific, as species differences in resuspension and burrowing modes have variable effects on fluid dynamics at the sediment-water interface. Deposit-feeding bioturbators may also hamper recruitment by consuming recently settled larvae. Biogeochemical effects Since its onset around 539 million years ago, bioturbation has been responsible for changes in ocean chemistry, primarily through nutrient cycling. Bioturbators played, and continue to play, an important role in nutrient transport across sediments. For example, bioturbating animals are hypothesized to have affected the cycling of sulfur in the early oceans. According to this hypothesis, bioturbating activities had a large effect on the sulfate concentration in the ocean. Around the Cambrian-Precambrian boundary (539 million years ago), animals begin to mix reduced sulfur from ocean sediments to overlying water causing sulfide to oxidize, which increased the sulfate composition in the ocean. During large extinction events, the sulfate concentration in the ocean was reduced. Although this is difficult to measure directly, seawater sulfur isotope compositions during these times indicates bioturbators influenced the sulfur cycling in the early Earth. Bioturbators have also altered phosphorus cycling on geologic scales. Bioturbators mix readily available particulate organic phosphorus (P) deeper into ocean sediment layers which prevents the precipitation of phosphorus (mineralization) by increasing the sequestration of phosphorus above normal chemical rates. The sequestration of phosphorus limits oxygen concentrations by decreasing production on a geologic time scale. This decrease in production results in an overall decrease in oxygen levels, and it has been proposed that the rise of bioturbation corresponds to a decrease in oxygen levels of that time. 
The negative feedback of animals sequestering phosphorus in the sediments and subsequently reducing oxygen concentrations in the environment limits the intensity of bioturbation in this early environment. Organic contaminants Bioturbation can either enhance or reduce the flux of contaminants from the sediment to the water column, depending on the mechanism of sediment transport. In polluted sediments, bioturbating animals can mix the surface layer and cause the release of sequestered contaminants into the water column. Upward-conveyor species, like polychaete worms, are efficient at moving contaminated particles to the surface. Invasive animals can remobilize contaminants previously considered to be buried at a safe depth. In the Baltic Sea, the invasive Marenzelleria species of polychaete worms can burrow to 35-50 centimeters which is deeper than native animals, thereby releasing previously sequestered contaminants. However, bioturbating animals that live in the sediment (infauna) can also reduce the flux of contaminants to the water column by burying hydrophobic organic contaminants into the sediment. Burial of uncontaminated particles by bioturbating organisms provides more absorptive surfaces to sequester chemical pollutants in the sediments. Ecosystem impacts Nutrient cycling is still affected by bioturbation in the modern Earth. Some examples in the terrestrial and aquatic ecosystems are below. Terrestrial Plants and animals utilize soil for food and shelter, disturbing the upper soil layers and transporting chemically weathered rock called saprolite from the lower soil depths to the surface. Terrestrial bioturbation is important in soil production, burial, organic matter content, and downslope transport. Tree roots are sources of soil organic matter, with root growth and stump decay also contributing to soil transport and mixing. Death and decay of tree roots first delivers organic matter to the soil and then creates voids, decreasing soil density. Tree uprooting causes considerable soil displacement by producing mounds, mixing the soil, or inverting vertical sections of soil. Burrowing animals, such as earth worms and small mammals, form passageways for air and water transport which changes the soil properties, such as the vertical particle-size distribution, soil porosity, and nutrient content. Invertebrates that burrow and consume plant detritus help produce an organic-rich topsoil known as the soil biomantle, and thus contribute to the formation of soil horizons. Small mammals such as pocket gophers also play an important role in the production of soil, possibly with an equal magnitude to abiotic processes. Pocket gophers form above-ground mounds, which moves soil from the lower soil horizons to the surface, exposing minimally weathered rock to surface erosion processes, speeding soil formation. Pocket gophers are thought to play an important role in the downslope transport of soil, as the soil that forms their mounds is more susceptible to erosion and subsequent transport. Similar to tree root effects, the construction of burrows-even when backfilled- decreases soil density. The formation of surface mounds also buries surface vegetation, creating nutrient hotspots when the vegetation decomposes, increasing soil organic matter. Due to the high metabolic demands of their burrow-excavating subterranean lifestyle, pocket gophers must consume large amounts of plant material. 
Though this has a detrimental effect on individual plants, the net effect of pocket gophers is increased plant growth from their positive effects on soil nutrient content and physical soil properties. Freshwater Important sources of bioturbation in freshwater ecosystems include benthivorous (bottom-dwelling) fish, macroinvertebrates such as worms, insect larvae, crustaceans and molluscs, and seasonal influences from anadromous (migrating) fish such as salmon. Anadromous fish migrate from the sea into fresh-water rivers and streams to spawn. Macroinvertebrates act as biological pumps for moving material between the sediments and water column, feeding on sediment organic matter and transporting mineralized nutrients into the water column. Both benthivorous and anadromous fish can affect ecosystems by decreasing primary production through sediment re-suspension, the subsequent displacement of benthic primary producers, and recycling nutrients from the sediment back into the water column. Lakes and ponds The sediments of lake and pond ecosystems are rich in organic matter, with higher organic matter and nutrient contents in the sediments than in the overlying water. Nutrient re-regeneration through sediment bioturbation moves nutrients into the water column, thereby enhancing the growth of aquatic plants and phytoplankton (primary producers). The major nutrients of interest in this flux are nitrogen and phosphorus, which often limit the levels of primary production in an ecosystem. Bioturbation increases the flux of mineralized (inorganic) forms of these elements, which can be directly used by primary producers. In addition, bioturbation increases the water column concentrations of nitrogen and phosphorus-containing organic matter, which can then be consumed by fauna and mineralized. Lake and pond sediments often transition from the aerobic (oxygen containing) character of the overlaying water to the anaerobic (without oxygen) conditions of the lower sediment over sediment depths of only a few millimeters, therefore, even bioturbators of modest size can affect this transition of the chemical characteristics of sediments. By mixing anaerobic sediments into the water column, bioturbators allow aerobic processes to interact with the re-suspended sediments and the newly exposed bottom sediment surfaces. Macroinvertebrates including chironomid (non-biting midges) larvae and tubificid worms (detritus worms) are important agents of bioturbation in these ecosystems and have different effects based on their respective feeding habits. Tubificid worms do not form burrows, they are upward conveyors. Chironomids, on the other hand, form burrows in the sediment, acting as bioirrigators and aerating the sediments and are downward conveyors. This activity, combined with chironomid's respiration within their burrows, decrease available oxygen in the sediment and increase the loss of nitrates through enhanced rates of denitrification. The increased oxygen input to sediments by macroinvertebrate bioirrigation coupled with bioturbation at the sediment-water interface complicates the total flux of phosphorus . While bioturbation results in a net flux of phosphorus into the water column, the bio-irrigation of the sediments with oxygenated water enhances the adsorption of phosphorus onto iron-oxide compounds, thereby reducing the total flux of phosphorus into the water column. 
The presence of macroinvertebrates in sediment can initiate bioturbation due to their status as an important food source for benthivorous fish such as carp. Of the bioturbating, benthivorous fish species, carp in particular are important ecosystem engineers and their foraging and burrowing activities can alter the water quality characteristics of ponds and lakes. Carp increase water turbidity by the re-suspension of benthic sediments. This increased turbidity limits light penetration and, coupled with increased nutrient flux from the sediment into the water column, inhibits the growth of macrophytes (aquatic plants), favoring the growth of phytoplankton in the surface waters. Surface phytoplankton colonies benefit from both increased suspended nutrients and from recruitment of buried phytoplankton cells released from the sediments by the fish bioturbation. Macrophyte growth has also been shown to be inhibited by displacement from the bottom sediments due to fish burrowing. Rivers and streams River and stream ecosystems show similar responses to bioturbation activities, with chironomid larvae and tubificid worm macroinvertebrates remaining as important benthic agents of bioturbation. These environments can also be subject to strong seasonal bioturbation effects from anadromous fish. Salmon function as bioturbators at both the sediment scale, by moving and re-working gravel- to sand-sized sediments during the construction of redds (gravel depressions or "nests" containing eggs buried under a thin layer of sediment) in rivers and streams, and the nutrient scale, by mobilizing nutrients. The construction of salmon redds increases the hydraulic conductivity (ease of fluid movement) and porosity of the stream bed. In select rivers, if salmon congregate in large enough concentrations in a given area of the river, the total sediment transport from redd construction can equal or exceed the sediment transport from flood events. The net effect on sediment movement is the downstream transfer of gravel, sand and finer materials and enhancement of water mixing within the river substrate. The construction of salmon redds increases sediment and nutrient fluxes through the hyporheic zone (the area between surface water and groundwater) of rivers and affects the dispersion and retention of marine-derived nutrients (MDN) within the river ecosystem. MDN are delivered to river and stream ecosystems by the fecal matter of spawning salmon and the decaying carcasses of salmon that have completed spawning and died. Numerical modeling suggests that the residence time of MDN within a salmon spawning reach is inversely proportional to the amount of redd construction within the river. Measurements of respiration within a salmon-bearing river in Alaska further suggest that salmon bioturbation of the river bed plays a significant role in mobilizing MDN and limiting primary productivity while salmon spawning is active. The river ecosystem was found to switch from a net autotrophic to a heterotrophic system in response to decreased primary production and increased respiration. The decreased primary production in this study was attributed to the loss of benthic primary producers that were dislodged by bioturbation, while the increased respiration was thought to be due to increased respiration of organic carbon, also attributed to sediment mobilization from salmon redd construction. 
While marine derived nutrients are generally thought to increase productivity in riparian and freshwater ecosystems, several studies have suggested that temporal effects of bioturbation should be considered when characterizing salmon influences on nutrient cycles. Marine Major marine bioturbators range from small infaunal invertebrates to fish and marine mammals. In most marine sediments, however, they are dominated by small invertebrates, including polychaetes, bivalves, burrowing shrimp, and amphipods. Shallow and coastal Coastal ecosystems, such as estuaries, are generally highly productive, which results in the accumulation of large quantities of detritus (organic waste). These large quantities, in addition to typically small sediment grain size and dense populations, make bioturbators important in estuarine respiration. Bioturbators enhance the transport of oxygen into sediments through irrigation and increase the surface area of oxygenated sediments through burrow construction. Bioturbators also transport organic matter deeper into sediments through general reworking activities and production of fecal matter. This ability to replenish oxygen and other solutes at sediment depth allows for enhanced respiration by both bioturbators as well as the microbial community, thus altering estuarine elemental cycling. The effects of bioturbation on the nitrogen cycle are well-documented. Coupled denitrification and nitrification are enhanced due to increased oxygen and nitrate delivery to deep sediments and increased surface area across which oxygen and nitrate can be exchanged. The enhanced nitrification-denitrification coupling contributes to greater removal of biologically available nitrogen in shallow and coastal environments, which can be further enhanced by the excretion of ammonium by bioturbators and other organisms residing in bioturbator burrows. While both nitrification and denitrification are enhanced by bioturbation, the effects of bioturbators on denitrification rates have been found to be greater than that on rates of nitrification, further promoting the removal of biologically available nitrogen. This increased removal of biologically available nitrogen has been suggested to be linked to increased rates of nitrogen fixation in microenvironments within burrows, as indicated by evidence of nitrogen fixation by sulfate-reducing bacteria via the presence of nifH (nitrogenase) genes. Bioturbation by walrus feeding is a significant source of sediment and biological community structure and nutrient flux in the Bering Sea. Walruses feed by digging their muzzles into the sediment and extracting clams through powerful suction. By digging through the sediment, walruses rapidly release large amounts of organic material and nutrients, especially ammonium, from the sediment to the water column. Additionally, walrus feeding behavior mixes and oxygenates the sediment and creates pits in the sediment which serve as new habitat structures for invertebrate larvae. Deep sea Bioturbation is important in the deep sea because deep-sea ecosystem functioning depends on the use and recycling of nutrients and organic inputs from the photic zone. In low energy regions (areas with relatively still water), bioturbation is the only force creating heterogeneity in solute concentration and mineral distribution in the sediment. 
It has been suggested that higher benthic diversity in the deep sea could lead to more bioturbation which, in turn, would increase the transport of organic matter and nutrients to benthic sediments. Through the consumption of surface-derived organic matter, animals living on the sediment surface facilitate the incorporation of particulate organic carbon (POC) into the sediment where it is consumed by sediment dwelling animals and bacteria. Incorporation of POC into the food webs of sediment dwelling animals promotes carbon sequestration by removing carbon from the water column and burying it in the sediment. In some deep-sea sediments, intense bioturbation enhances manganese and nitrogen cycling. Mathematical modelling The role of bioturbators in sediment biogeochemistry makes bioturbation a common parameter in sediment biogeochemical models, which are often numerical models built using ordinary and partial differential equations. Bioturbation is typically represented as DB, or the biodiffusion coefficient, and is described by a diffusion and, sometimes, an advective term. This representation and subsequent variations account for the different modes of mixing by functional groups and bioirrigation that results from them. The biodiffusion coefficient is usually measured using radioactive tracers such as Pb210, radioisotopes from nuclear fallout, introduced particles including glass beads tagged with radioisotopes or inert fluorescent particles, and chlorophyll a. Biodiffusion models are then fit to vertical distributions (profiles) of tracers in sediments to provide values for DB. Parameterization of bioturbation, however, can vary, as newer and more complex models can be used to fit tracer profiles. Unlike the standard biodiffusion model, these more complex models, such as expanded versions of the biodiffusion model, random walk, and particle-tracking models, can provide more accuracy, incorporate different modes of sediment transport, and account for more spatial heterogeneity. Evolution The onset of bioturbation had a profound effect on the environment and the evolution of other organisms. Bioturbation is thought to have been an important co-factor of the Cambrian Explosion, during which most major animal phyla appeared in the fossil record over a short time. Predation arose during this time and promoted the development of hard skeletons, for example bristles, spines, and shells, as a form of armored protection. It is hypothesized that bioturbation resulted from this skeleton formation. These new hard parts enabled animals to dig into the sediment to seek shelter from predators, which created an incentive for predators to search for prey in the sediment (see Evolutionary Arms Race). Burrowing species fed on buried organic matter in the sediment which resulted in the evolution of deposit feeding (consumption of organic matter within sediment). Prior to the development of bioturbation, laminated microbial mats were the dominant biological structures of the ocean floor and drove much of the ecosystem functions. As bioturbation increased, burrowing animals disturbed the microbial mat system and created a mixed sediment layer with greater biological and chemical diversity. This greater biological and chemical diversity is thought to have led to the evolution and diversification of seafloor-dwelling species. An alternate, less widely accepted hypothesis for the origin of bioturbation exists. 
The trace fossil Nenoxites is thought to be the earliest record of bioturbation, predating the Cambrian Period. The fossil is dated to 555 million years ago, which places it in the Ediacaran Period. The fossil indicates a 5-centimeter depth of bioturbation in muddy sediments by a burrowing worm. This is consistent with food-seeking behavior, as there tended to be more food resources in the mud than in the water column. However, this hypothesis requires more precise geological dating to rule out an early Cambrian origin for this specimen. The evolution of trees during the Devonian Period enhanced soil weathering and increased the spread of soil due to bioturbation by tree roots. Root penetration and uprooting also enhanced soil carbon storage by enabling mineral weathering and the burial of organic matter. Fossil record Patterns or traces of bioturbation are preserved in lithified rock. The study of such patterns is called ichnology, or the study of "trace fossils", which, in the case of bioturbators, are fossils left behind by digging or burrowing animals. This can be compared to the footprint left behind by these animals. In some cases bioturbation is so pervasive that it completely obliterates sedimentary structures, such as laminated layers or cross-bedding. Thus, it affects the disciplines of sedimentology and stratigraphy within geology. The study of bioturbator ichnofabrics uses the depth of the fossils, the cross-cutting of fossils, and the sharpness (or how well defined) of the fossils to assess the activity that occurred in old sediments. Typically, the deeper the fossil, the better preserved and more well defined the specimen. Important trace fossils from bioturbation have been found in tidal, coastal and deep-sea marine sediments. In addition, sand dune, or Eolian, sediments are important for preserving a wide variety of fossils. Evidence of bioturbation has been found in deep-sea sediment cores, including in long records, although the act of extracting the core can disturb the signs of bioturbation, especially at shallower depths. Arthropods in particular are important to the geologic record of bioturbation of Eolian sediments. Dune records show traces of burrowing animals as far back as the lower Mesozoic (250 million years ago), although bioturbation in other sediments has been seen as far back as 550 Ma. Research history Bioturbation's importance for soil processes and geomorphology was first realized by Charles Darwin, who devoted his last scientific book to the subject (The Formation of Vegetable Mould through the Action of Worms). Darwin spread chalk dust over a field to observe changes in the depth of the chalk layer over time. Excavations 30 years after the initial deposit of chalk revealed that the chalk was buried 18 centimeters under the sediment, which indicated a burial rate of 6 millimeters per year. Darwin attributed this burial to the activity of earthworms in the sediment and determined that these disruptions were important in soil formation. In 1891, the geologist Nathaniel Shaler expanded Darwin's concept to include soil disruption by ants and trees. The term "bioturbation" was later coined by Rudolf Richter in 1952 to describe structures in sediment caused by living organisms. Since the 1980s, the term has been widely used in soil and geomorphology literature to describe the reworking of soil and sediment by plants and animals. 
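As a rough illustration of the biodiffusion modelling described in the Mathematical modelling section above, the following minimal sketch estimates a biodiffusion coefficient from an excess lead-210 tracer profile. It assumes a steady-state profile, a constant DB and no advection or burial, and the depth and activity values are hypothetical; the expanded biodiffusion, random walk and particle-tracking models mentioned above relax these assumptions.

# Estimate a biodiffusion coefficient D_B from an excess Pb-210 profile.
# Steady state with constant D_B and no advection:
#   D_B * d^2C/dz^2 = lambda * C   =>   C(z) = C0 * exp(-z * sqrt(lambda / D_B))
# so ln(C) falls linearly with depth and D_B = lambda / slope^2.
import numpy as np

decay_const = np.log(2) / 22.3                      # Pb-210 decay constant, 1/yr
depth = np.array([0.5, 1.5, 2.5, 3.5, 4.5])         # cm (hypothetical profile)
activity = np.array([20.0, 12.0, 7.5, 4.6, 2.8])    # dpm/g excess Pb-210 (hypothetical)

slope, _ = np.polyfit(depth, np.log(activity), 1)   # slope of ln(activity) vs depth
d_b = decay_const / slope**2                        # biodiffusion coefficient, cm^2/yr
print(f"estimated D_B = {d_b:.2f} cm^2/yr")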
See also Argillipedoturbation Bioirrigation Zoophycos References External links Nereis Park (the World of Bioturbation) Worm Cam Biological oceanography Aquatic ecology Limnology Pedology Physical oceanography Sedimentology
Bioturbation
[ "Physics", "Biology" ]
6,150
[ "Aquatic ecology", "Ecosystems", "Applied and interdisciplinary physics", "Physical oceanography" ]
1,530,481
https://en.wikipedia.org/wiki/Banburismus
Banburismus was a cryptanalytic process developed by Alan Turing at Bletchley Park in Britain during the Second World War. It was used by Bletchley Park's Hut 8 to help break German Kriegsmarine (naval) messages enciphered on Enigma machines. The process used sequential conditional probability to infer information about the likely settings of the Enigma machine. It gave rise to Turing's invention of the ban as a measure of the weight of evidence in favour of a hypothesis. This concept was later applied in Turingery and all the other methods used for breaking the Lorenz cipher. Overview The aim of Banburismus was to reduce the time required of the electromechanical Bombe machines by identifying the most likely right-hand and middle wheels of the Enigma. Hut 8 performed the procedure continuously for two years, stopping only in 1943 when sufficient bombe time became readily available. Banburismus was a development of the "clock method" invented by the Polish cryptanalyst Jerzy Różycki. Hugh Alexander was regarded as the best of the Banburists. He and I. J. Good considered the process more an intellectual game than a job. It was "not easy enough to be trivial, but not difficult enough to cause a nervous breakdown". History In the first few months after arriving at Bletchley Park in September 1939, Alan Turing correctly deduced that the message-settings of Kriegsmarine Enigma signals were enciphered on a common Grundstellung (starting position of the rotors), and were then super-enciphered with a bigram and a trigram lookup table. These trigram tables were in a book called the Kenngruppenbuch (K book). However, without the bigram tables, Hut 8 were unable to start attacking the traffic. A breakthrough was achieved after the Narvik pinch, in which the disguised armed trawler Polares, which was on its way to Narvik in Norway, was seized in the North Sea on 26 April 1940. The Germans did not have time to destroy all their cryptographic documents, and the captured material revealed the precise form of the indicating system, supplied the plugboard connections and Grundstellung for 23 and 24 April, and included the operators' log, which gave a long stretch of paired plaintext and enciphered message for the 25th and 26th. The bigram tables themselves were not part of the capture, but Hut 8 were able to use the settings-lists to read, retrospectively, all the Kriegsmarine traffic that had been intercepted from 22 to 27 April. This allowed them to make a partial reconstruction of the bigram tables and start the first attempt to use Banburismus to attack Kriegsmarine traffic, from 30 April onwards. Eligible days were those where at least 200 messages were received and for which the partial bigram-tables deciphered the indicators. The first day to be broken was 8 May 1940, thereafter celebrated as "Foss's Day" in honour of Hugh Foss, the cryptanalyst who achieved the feat. This task took until November that year, by which time the intelligence was very out of date, but it did show that Banburismus could work. It also allowed much more of the bigram tables to be reconstructed, which in turn allowed 14 April and 26 June to be broken. However, the Kriegsmarine had changed the bigram tables on 1 July. By the end of 1940, much of the theory of the Banburismus scoring system had been worked out. The First Lofoten pinch from the trawler Krebs on 3 March 1941 provided the complete keys for February – but no bigram tables or K book. 
The consequent decrypts allowed the statistical scoring system to be refined so that Banburismus could become the standard procedure against Kriegsmarine Enigma until mid-1943. Principles Banburismus utilised a weakness in the indicator procedure (the encrypted message settings) of Kriegsmarine Enigma traffic. Unlike the German Army and Airforce Enigma procedures, the Kriegsmarine used a Grundstellung provided by key lists, and so it was the same for all messages on a particular day (or pair of days). This meant that the three-letter indicators were all enciphered with the same rotor settings so that they were all in depth with each other. Normally, the indicators for two messages were never the same, but it could happen that, part-way through a message, the rotor positions became the same as the starting position of the rotors for another message, the parts of the two messages that overlapped in this way were in depth. The principle behind Banburismus is relatively simple (and seems to be rather similar to the Index of Coincidence). If two sentences in English or German are written down one above the other, and a count is made of how often a letter in one message is the same as the corresponding letter in the other message; there will be more matches than would occur if the sentences were random strings of letters. For a random sequence, the repeat rate for single letters is expected to be 1 in 26 (around 3.8%), and for the German Navy messages it was shown to be 1 in 17 (5.9%). If the two messages were in depth, then the matches occur just as they did in the plaintexts. However, if the messages were not in depth, then the two ciphertexts will compare as if they were random, giving a repeat rate of about 1 in 26. This allows an attacker to take two messages whose indicators differ only in the third character, and slide them against each other looking for the giveaway repeat pattern that shows where they align in depth. The comparison of two messages to look for repeats was made easier by punching the messages onto thin cards about high by several metres (yards) wide, depending on the length of message. A hole at the top of a column on the card represented an 'A' at that position, a hole at the bottom represented a 'Z'. The two message-cards were laid on top of each other on a light-box and where the light shone through, there was a repeat. This made it much simpler to detect and count the repeats. The cards were printed in Banbury in Oxfordshire. They became known as 'banburies' at Bletchley Park, and hence the procedure using them: Banburismus. The application of the scritchmus procedure (see below) gives a clue as to the possible right-hand rotor. Example Message with indicator "": Message with indicator "": Hut 8 would punch these onto banburies and count the repeats for all valid offsets −25 letters to +25 letters. There are two promising positions: XCYBGDSLVWBDJLKWIPEHVYGQZWDTHRQXIKEESQSSPZXARIXEABQIRUCKHGWUEBPF YNSCFCCPVIPEMSGIZWFLHESCIYSPVRXMCFQAXVXDVUQILBJUABNLKMKDJMENUNQ - -- - - - - -- This offset of eight letters shows nine repeats, including two bigrams, in an overlap of 56 letters (16%). The other promising position looks like this: XCYBGDSLVWBDJLKWIPEHVYGQZWDTHRQXIKEESQSSPZXARIXEABQIRUCKHGWUEBPF YNSCFCCPVIPEMSGIZWFLHESCIYSPVRXMCFQAXVXDVUQILBJUABNLKMKDJMENUNQ --- This offset of seven shows just a single trigram in an overlap of 57 letters. 
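As a minimal illustration of the sliding-and-counting step, the following sketch counts single-letter coincidences at each offset and converts each count into a deciban score using the 1-in-17 and 1-in-26 repeat rates quoted above. It is only a sketch with randomly generated placeholder messages, and the actual Bletchley Park scoring, described next, also weighted bigram, trigram and tetragram repeats.

# Count letter coincidences between two ciphertexts at every offset and
# score each overlap in decibans (10 * log10 of the likelihood ratio),
# using match probabilities of 1/17 (in depth) versus 1/26 (random).
import math
import random
import string

def score_offset(msg1, msg2, offset):
    """Repeats and deciban score when msg2 is slid `offset` letters to the right."""
    pairs = list(zip(msg1[max(offset, 0):], msg2[max(-offset, 0):]))
    matches = sum(a == b for a, b in pairs)
    overlap = len(pairs)
    score = (matches * 10 * math.log10((1 / 17) / (1 / 26))
             + (overlap - matches) * 10 * math.log10((16 / 17) / (25 / 26)))
    return matches, overlap, score

# Placeholder messages; the two intercepted ciphertexts would be substituted here.
random.seed(0)
msg1 = "".join(random.choices(string.ascii_uppercase, k=64))
msg2 = "".join(random.choices(string.ascii_uppercase, k=63))

best = max(range(-25, 26), key=lambda k: score_offset(msg1, msg2, k)[2])
print(best, score_offset(msg1, msg2, best))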
Turing's method of accumulating a score of a number of decibans allows the calculation of which of these situations is most likely to represent messages in depth. As might be expected, the former is the winner with odds of 5:1 on, the latter is only 2:1 on. Turing calculated the scores for the number of single repeats in overlaps of so many letters, and the number of bigrams and trigrams. Tetragrams often represented German words in the plaintext and their scores were calculated according to the type of message (from traffic analysis), and even their position within the message. These were tabulated and the relevant values summed by Banburists in assessing pairs of messages to see which were likely to be in depth. Bletchley Park used the convention that the indicator plaintext of "VFX", being eight characters ahead of "VFG", or (in terms of just the third, differing, letter) that "X = G+8". Scritchmus Scritchmus was the part of the Banburismus procedure that could lead to the identification of the right-hand (fast) wheel. The Banburist might have evidence from various message-pairs (with only the third indicator letter differing) showing that "X = Q−2", "H = X−4" and "B = G+3". He or she would search the deciban sheets for all distances with odds of better than 1:1 (i.e. with scores ≥ +34). An attempt was then made to construct the 'end wheel alphabet' by forming 'chains' of end-wheel letters out of these repeats. They could then construct a "chain" as follows: G--B-H---X-Q If this is then compared at progressive offsets with the known letter-sequence of an Enigma rotor, quite a few possibilities are discounted due to violating either the "reciprocal" property or the "no-self-ciphering" property of the Enigma machine: G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is possible G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (G enciphers to B, yet B enciphers to E) G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (H apparently enciphers to H) G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (G enciphers to D, yet B enciphers to G) G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (B enciphers to H, yet H enciphers to J) G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (Q apparently enciphers to Q) G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (G apparently enciphers to G) G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (G enciphers to H, yet H enciphers to M) G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is possible G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is possible G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is possible G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (H enciphers to Q, yet Q enciphers to W) G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (X enciphers to V, yet Q enciphers to X) G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (B enciphers to Q, yet Q enciphers to Y) G--B-H---X-Q ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (X enciphers to X) Q G--B-H---X-> ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is possible -Q G--B-H---X-> ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (Q enciphers to B, yet B enciphers to T) X-Q G--B-H---> ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is possible -X-Q G--B-H--> ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (X enciphers to B, yet B enciphers to V) --X-Q G--B-H-> ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... 
is possible ---X-Q G--B-H-> ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (X enciphers to D, yet B enciphers to X) H---X-Q G--B-> ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (Q enciphers to G, yet G enciphers to V) -H---X-Q G--B-> ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (H enciphers to B, yet Q enciphers to H) B-H---X-Q G--> ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is possible (note the G enciphers to X, X enciphers to G property) -B-H---X-Q G-> ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is impossible (B enciphers to B) --B-H---X-Q G-> ABCDEFGHIJKLMNOPQRSTUVWXYZ ......... is possible The so-called "end-wheel alphabet" is already limited to just nine possibilities, merely by establishing a letter-chain of five letters derived from a mere four message-pairs. Hut 8 would now try fitting other letter-chains — ones with no letters in common with the first chain — into these nine candidate end-wheel alphabets. Eventually they will hope to be left with just one candidate, maybe looking like this: NUP F----A--D---O --X-Q G--B-H-> ABCDEFGHIJKLMNOPQRSTUVWXYZ Not only this, but such an end-wheel alphabet forces the conclusion that the end wheel is in fact "Rotor I". This is because "Rotor II" would have caused a mid-wheel turnover as it stepped from "E" to "F", yet that's in the middle of the span of the letter-chain "F----A--D---O". Likewise, all the other possible mid-wheel turnovers are precluded. Rotor I does its turnover between "Q" and "R", and that's the only part of the alphabet not spanned by a chain. That the different Enigma wheels had different turnover points was, presumably, a measure by the designers of the machine to improve its security. However, this very complication allowed Bletchley Park to deduce the identity of the end wheel. Middle wheel Once the end wheel is identified, these same principles can be extended to handle the middle rotor, though with the added complexity that the search is for overlaps in message-pairs sharing just the first indicator letter, and that the overlaps could therefore occur at up to 650 characters apart. The workload of doing this is beyond manual labour, so BP punched the messages onto 80-column cards and used Hollerith machines to scan for tetragram repeats or better. That told them which banburies to set up on the light boxes (and with what overlap) to evaluate the whole repeat pattern. Armed with a set of probable mid-wheel overlaps, Hut 8 could compose letter-chains for the middle wheel much in the same way as was illustrated above for the end wheel. That in turn (after Scritchmus) would give at least a partial middle wheel alphabet, and hopefully at least some of the possible choices of rotor for the middle wheel could be eliminated from turnover knowledge (as was done in identifying the end wheel). Taken together, the probable right hand and middle wheels would give a set of bombe runs for the day, that would be significantly reduced from the 336 possible. See also Sequential analysis References Bibliography MacKay, David J. C. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003. . This on-line textbook includes a chapter discussing information theory aspects of Banburismus. Further reading External links The 1944 Bletchley Park Cryptographic Dictionary "All You Ever Wanted to Know About Banburismus but were Afraid to Ask" — the whole procedure researched in detail, with a worked example. Bletchley Park Alan Turing Banbury Statistical algorithms Cryptographic attacks
Banburismus
[ "Technology" ]
4,008
[ "Cryptographic attacks", "Computer security exploits" ]
1,530,548
https://en.wikipedia.org/wiki/Planning%20Domain%20Definition%20Language
The Planning Domain Definition Language (PDDL) is an attempt to standardize Artificial Intelligence (AI) planning languages. It was first developed by Drew McDermott and his colleagues in 1998 mainly to make the 1998/2000 International Planning Competition (IPC) possible, and then evolved with each competition. The standardization provided by PDDL has the benefit of making research more reusable and easily comparable, though at the cost of some expressive power, compared to domain-specific systems. Overview PDDL is a human-readable format for problems in automated planning that gives a description of the possible states of the world, a description of the set of possible actions, a specific initial state of the world, and a specific set of desired goals. Action descriptions include the prerequisites of the action and the effects of the action. PDDL separates the model of the planning problem into two major parts: (1) a domain description of those elements that are present in every problem of the problem domain, and (2) the problem description which determines the specific planning problem. The problem description includes the initial state and the goals to be accomplished. The example below gives a domain definition and a problem description instance for the automated planning of a robot with two gripper arms. PDDL becomes the input to planner software, which is usually a domain-independent Artificial Intelligence (AI) planner. PDDL does not describe the output of the planner software, but the output is usually a totally or partially ordered plan, which is a sequence of actions, some of which may be executed in parallel. The PDDL language was inspired by the Stanford Research Institute Problem Solver (STRIPS) and the Action description language (ADL), among others. The PDDL language uses principles from knowledge representation languages which are used to author ontologies, an example is the Web Ontology Language (OWL). Ontologies are a formal way to describe taxonomies and classification networks, essentially defining the structure of knowledge for various domains: the nouns representing classes of objects and the verbs representing relations between the objects. The latest version of PDDL is described in a BNF (Backus–Naur Form) syntax definition of PDDL 3.1. Several online resources of how to use PDDL are available, and also a book. De facto official versions of PDDL PDDL1.2 This was the official language of the 1st and 2nd IPC in 1998 and 2000 respectively. It separated the model of the planning problem in two major parts: (1) domain description and (2) the related problem description. Such a division of the model allows for an intuitive separation of those elements, which are (1) present in every specific problem of the problem-domain (these elements are contained in the domain-description), and those elements, which (2) determine the specific planning-problem (these elements are contained in the problem-description). Thus several problem-descriptions may be connected to the same domain-description (just as several instances may exist of a class in OOP (Object Oriented Programming) or in OWL (Web Ontology Language) for example). Thus a domain and a connecting problem description forms the PDDL-model of a planning-problem, and eventually this is the input of a planner (usually domain-independent AI planner) software, which aims to solve the given planning-problem via some appropriate planning algorithm. 
The output of the planner is not specified by PDDL, but it is usually a totally or partially ordered plan (a sequence of actions, some of which may be executed even in parallel sometimes). Now lets take a look at the contents of a PDDL1.2 domain and problem description in general...(1) The domain description consisted of a domain-name definition, definition of requirements (to declare those model-elements to the planner which the PDDL-model is actually using), definition of object-type hierarchy (just like a class-hierarchy in OOP), definition of constant objects (which are present in every problem in the domain), definition of predicates (templates for logical facts), and also the definition of possible actions (operator-schemas with parameters, which should be grounded/instantiated during execution). Actions had parameters (variables that may be instantiated with objects), preconditions and effects. The effects of actions could be also conditional (when-effects).(2) The problem description consisted of a problem-name definition, the definition of the related domain-name, the definition of all the possible objects (atoms in the logical universe), initial conditions (the initial state of the planning environment, a conjunction of true/false facts), and the definition of goal-states (a logical expression over facts that should be true/false in a goal-state of the planning environment). Thus eventually PDDL1.2 captured the "physics" of a deterministic single-agent discrete fully accessible planning environment. PDDL2.1 This was the official language of the 3rd IPC in 2002. It introduced numeric fluents (e.g. to model non-binary resources such as fuel-level, time, energy, distance, weight, ...), plan-metrics (to allow quantitative evaluation of plans, and not just goal-driven, but utility-driven planning, i.e. optimization, metric-minimization/maximization), and durative/continuous actions (which could have variable, non-discrete length, conditions and effects). Eventually PDDL2.1 allowed the representation and solution of many more real-world problems than the original version of the language. PDDL2.2 This was the official language of the deterministic track of the 4th IPC in 2004. It introduced derived predicates (to model the dependency of given facts from other facts, e.g. if A is reachable from B, and B is reachable from C, then A is reachable from C (transitivity)), and timed initial literals (to model exogenous events occurring at given time independently from plan-execution). Eventually PDDL2.2 extended the language with a few important elements, but wasn't a radical evolution compared to PDDL2.1 after PDDL1.2. PDDL3.0 This was the official language of the deterministic track of the 5th IPC in 2006. It introduced state-trajectory constraints (hard-constraints in form of modal-logic expressions, which should be true for the state-trajectory produced during the execution of a plan, which is a solution of the given planning problem) and preferences (soft-constraints in form of logical expressions, similar to hard-constraints, but their satisfaction wasn't necessary, although it could be incorporated into the plan-metric e.g. to maximize the number of satisfied preferences, or to just measure the quality of a plan) to enable preference-based planning. Eventually PDDL3.0 updated the expressiveness of the language to be able to cope with recent, important developments in planning. 
PDDL3.1 This was the official language of the deterministic track of the 6th and 7th IPC in 2008 and 2011 respectively. It introduced object-fluents (i.e. functions' range now could be not only numerical (integer or real), but it could be any object-type also). Thus PDDL3.1 adapted the language even more to modern expectations with a syntactically seemingly small, but semantically quite significant change in expressiveness. Current situation The latest version of the language is PDDL3.1. The BNF (Backus–Naur Form) syntax definition of PDDL3.1 can be found among the resources of the IPC-2011 homepage or the IPC-2014 homepage. Successors/variants/extensions of PDDL PDDL+ This extension of PDDL2.1 from around 2002–2006 provides a more flexible model of continuous change through the use of autonomous processes and events. The key this extension provides is the ability to model the interaction between the agent's behaviour and changes that are initiated by the agent's environment. Processes run over time and have a continuous effect on numeric values. They are initiated and terminated either by the direct action of the agent or by events triggered in the environment. This 3-part structure is referred to as the start-process-stop model. Distinctions are made between logical and numeric states: transitions between logical states are assumed to be instantaneous whilst occupation of a given logical state can endure over time. Thus in PDDL+ continuous update expressions are restricted to occur only in process effects. Actions and events, which are instantaneous, are restricted to the expression of discrete change. This introduces the before mentioned 3-part modelling of periods of continuous change: (1) an action or event starts a period of continuous change on a numeric variable expressed by means of a process; (2) the process realizes the continuous change of the numeric variable; (3) an action or event finally stops the execution of the process and terminates its effect on the numeric variable. Comment: the goals of the plan might be achieved before an active process is stopped. NDDL NDDL (New Domain Definition Language) is NASA's response to PDDL from around 2002. Its representation differs from PDDL in several respects: 1) it uses a variable/value representation (timelines/activities) rather than a propositional/first-order logic, and 2) there is no concept of states or actions, only of intervals (activities) and constraints between those activities. In this respect, models in NDDL look more like schemas for SAT encodings of planning problems rather than PDDL models. Because of the mentioned differences planning and execution of plans (e.g. during critical space missions) may be more robust when using NDDL, but the correspondence to standard planning-problem representations other than PDDL may be much less intuitive than in case of PDDL. MAPL MAPL (Multi-Agent Planning Language, pronounced "maple") is an extension of PDDL2.1 from around 2003. It is a quite serious modification of the original language. It introduces non-propositional state-variables (which may be n-ary: true, false, unknown, or anything else). It introduces a temporal model given with modal operators (before, after, etc.). Nonetheless, in PDDL3.0 a more thorough temporal model was given, which is also compatible with the original PDDL syntax (and it is just an optional addition). 
MAPL also introduces actions whose duration will be determined in runtime and explicit plan synchronization which is realized through speech act based communication among agents. This assumption may be artificial, since agents executing concurrent plans shouldn't necessarily communicate to be able to function in a multi-agent environment. Finally, MAPL introduces events (endogenous and exogenous) for the sake of handling concurrency of actions. Thus events become part of plans explicitly, and are assigned to agents by a control function, which is also part of the plan. OPT OPT (Ontology with Polymorphic Types) was a profound extension of PDDL2.1 by Drew McDermott from around 2003–2005 (with some similarities to PDDL+). It was an attempt to create a general-purpose notation for creating ontologies, defined as formalized conceptual frameworks for planning domains about which planning applications are to reason. Its syntax was based on PDDL, but it had a much more elaborate type system, which allowed users to make use of higher-order constructs such as explicit λ-expressions allowing for efficient type inference (i.e. not only domain objects had types (level 0 types), but also the functions/fluents defined above these objects had types in the form of arbitrary mappings (level 1 types), which could be generic, so their parameters (the domain and range of the generic mapping) could be defined with variables, which could have an even higher level type (level 2 type) not to speak of that the mappings could be arbitrary, i.e. the domain or range of a function (e.g. predicate, numeric fluent) could be any level 0/1/2 type. For example, functions could map from arbitrary functions to arbitrary functions...). OPT was basically intended to be (almost) upwardly compatible with PDDL2.1. The notation for processes and durative actions was borrowed mainly from PDDL+ and PDDL2.1, but beyond that OPT offered many other significant extensions (e.g. data-structures, non-Boolean fluents, return-values for actions, links between actions, hierarchical action expansion, hierarchy of domain definitions, the use of namespaces for compatibility with the semantic web). PPDDL PPDDL (Probabilistic PDDL) 1.0 was the official language of the probabilistic track of the 4th and 5th IPC in 2004 and 2006 respectively. It extended PDDL2.1 with probabilistic effects (discrete, general probability distributions over possible effects of an action), reward fluents (for incrementing or decrementing the total reward of a plan in the effects of the actions), goal rewards (for rewarding a state-trajectory, which incorporates at least one goal-state), and goal-achieved fluents (which were true, if the state-trajectory incorporated at least one goal-state). Eventually these changes allowed PPDDL1.0 to realize Markov Decision Process (MDP) planning, where there may be uncertainty in the state-transitions, but the environment is fully observable for the planner/agent. APPL APPL (Abstract Plan Preparation Language) is a newer variant of NDDL from 2006, which is more abstract than most existing planning languages such as PDDL or NDDL. The goal of this language was to simplify the formal analysis and specification of planning problems that are intended for safety-critical applications such as power management or automated rendezvous in future manned spacecraft. 
APPL used the same concepts as NDDL with the extension of actions, and also some other concepts, but still its expressive power is much less than PDDL's (in hope of staying robust and formally verifiable). RDDL RDDL (Relational Dynamic influence Diagram Language) was the official language of the uncertainty track of the 7th IPC in 2011. Conceptually it is based on PPDDL1.0 and PDDL3.0, but practically it is a completely different language both syntactically and semantically. The introduction of partial observability is one of the most important changes in RDDL compared to PPDDL1.0. It allows efficient description of Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs) by representing everything (state-fluents, observations, actions, ...) with variables. This way RDDL departs from PDDL significantly. Grounded RDDL corresponds to Dynamic Bayesian Networks (DBNs) similarly to PPDDL1.0, but RDDL is more expressive than PPDDL1.0. MA-PDDL MA-PDDL (Multi Agent PDDL) is a minimalistic, modular extension of PDDL3.1 introduced in 2012 (i.e. a new :multi-agent requirement) that allows planning by and for multiple agents. The addition is compatible with all the features of PDDL3.1 and addresses most of the issues of MAPL. It adds the possibility to distinguish between the possibly different actions of different agents (i.e. different capabilities). Similarly different agents may have different goals and/or metrics. The preconditions of actions now may directly refer to concurrent actions (e.g. the actions of other agents) and thus actions with interacting effects can be represented in a general, flexible way (e.g. suppose that at least 2 agents are needed to execute a lift action to lift a heavy table into the air, or otherwise the table would remain on the ground (this is an example of constructive synergy, but destructive synergy can be also easily represented in MA-PDDL)). Moreover, as kind of syntactic sugar, a simple mechanism for the inheritance and polymorphism of actions, goals and metrics was also introduced in MA-PDDL (assuming :typing is declared). Since PDDL3.1 assumes that the environment is deterministic and fully observable, the same holds for MA-PDDL, i.e. every agent can access the value of every state fluent at every time-instant and observe every previously executed action of each agent, and also the concurrent actions of agents unambiguously determine the next state of the environment. This was improved later by the addition of partial-observability and probabilistic effects (again, in form of two new modular requirements, :partial-observability and :probabilistic-effects, respectively, the latter being inspired by PPDDL1.0, and both being compatible with all the previous features of the language, including :multi-agent). Example This is the domain definition of a STRIPS instance for the automated planning of a robot with two gripper arms. 
(define (domain gripper-strips)
  (:predicates (room ?r)
               (ball ?b)
               (gripper ?g)
               (at-robby ?r)
               (at ?b ?r)
               (free ?g)
               (carry ?o ?g))

  (:action move
    :parameters (?from ?to)
    :precondition (and (room ?from) (room ?to) (at-robby ?from))
    :effect (and (at-robby ?to)
                 (not (at-robby ?from))))

  (:action pick
    :parameters (?obj ?room ?gripper)
    :precondition (and (ball ?obj) (room ?room) (gripper ?gripper)
                       (at ?obj ?room) (at-robby ?room) (free ?gripper))
    :effect (and (carry ?obj ?gripper)
                 (not (at ?obj ?room))
                 (not (free ?gripper))))

  (:action drop
    :parameters (?obj ?room ?gripper)
    :precondition (and (ball ?obj) (room ?room) (gripper ?gripper)
                       (carry ?obj ?gripper) (at-robby ?room))
    :effect (and (at ?obj ?room)
                 (free ?gripper)
                 (not (carry ?obj ?gripper)))))

And this is the problem definition that instantiates the previous domain definition with a concrete environment with two rooms and two balls.

(define (problem strips-gripper2)
  (:domain gripper-strips)
  (:objects rooma roomb ball1 ball2 left right)
  (:init (room rooma) (room roomb)
         (ball ball1) (ball ball2)
         (gripper left) (gripper right)
         (at-robby rooma)
         (free left) (free right)
         (at ball1 rooma) (at ball2 rooma))
  (:goal (at ball1 roomb)))

References Automated planning and scheduling Articles with example code Computer languages
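One plan that solves this problem is the three-action sequence (pick ball1 rooma left), (move rooma roomb), (drop ball1 roomb left). The following minimal sketch, using illustrative helper names and the usual STRIPS add/delete semantics, checks that this plan reaches the goal from the initial state above.

# Check a plan for the gripper problem above using STRIPS add/delete semantics.
# Facts are tuples; each action returns (preconditions, add effects, delete effects).

def move(src, dst):
    return ({("room", src), ("room", dst), ("at-robby", src)},
            {("at-robby", dst)},
            {("at-robby", src)})

def pick(obj, room, gripper):
    return ({("ball", obj), ("room", room), ("gripper", gripper),
             ("at", obj, room), ("at-robby", room), ("free", gripper)},
            {("carry", obj, gripper)},
            {("at", obj, room), ("free", gripper)})

def drop(obj, room, gripper):
    return ({("ball", obj), ("room", room), ("gripper", gripper),
             ("carry", obj, gripper), ("at-robby", room)},
            {("at", obj, room), ("free", gripper)},
            {("carry", obj, gripper)})

# Initial state from the problem definition strips-gripper2.
state = {("room", "rooma"), ("room", "roomb"),
         ("ball", "ball1"), ("ball", "ball2"),
         ("gripper", "left"), ("gripper", "right"),
         ("at-robby", "rooma"), ("free", "left"), ("free", "right"),
         ("at", "ball1", "rooma"), ("at", "ball2", "rooma")}

plan = [pick("ball1", "rooma", "left"),
        move("rooma", "roomb"),
        drop("ball1", "roomb", "left")]

for pre, add, delete in plan:
    assert pre <= state                      # preconditions must hold in the current state
    state = (state - delete) | add           # apply delete effects, then add effects

print(("at", "ball1", "roomb") in state)     # True: the goal is satisfied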
Planning Domain Definition Language
[ "Technology" ]
4,036
[ "Computer science", "Computer languages" ]
1,530,575
https://en.wikipedia.org/wiki/Fungal%20infection
Fungal infection, also known as mycosis, is a disease caused by fungi. Different types are traditionally divided according to the part of the body affected; superficial, subcutaneous, and systemic. Superficial fungal infections include common tinea of the skin, such as tinea of the body, groin, hands, feet and beard, and yeast infections such as pityriasis versicolor. Subcutaneous types include eumycetoma and chromoblastomycosis, which generally affect tissues in and beneath the skin. Systemic fungal infections are more serious and include cryptococcosis, histoplasmosis, pneumocystis pneumonia, aspergillosis and mucormycosis. Signs and symptoms range widely. There is usually a rash with superficial infection. Fungal infection within the skin or under the skin may present with a lump and skin changes. Pneumonia-like symptoms or meningitis may occur with a deeper or systemic infection. Fungi are everywhere, but only some cause disease. Fungal infection occurs after spores are either breathed in, come into contact with skin or enter the body through the skin such as via a cut, wound or injection. It is more likely to occur in people with a weak immune system. This includes people with illnesses such as HIV/AIDS, and people taking medicines such as steroids or cancer treatments. Fungi that cause infections in people include yeasts, molds and fungi that are able to exist as both a mold and yeast. The yeast Candida albicans can live in people without producing symptoms, and is able to cause both superficial mild candidiasis in healthy people, such as oral thrush or vaginal yeast infection, and severe systemic candidiasis in those who cannot fight infection themselves. Diagnosis is generally based on signs and symptoms, microscopy, culture, sometimes requiring a biopsy and the aid of medical imaging. Some superficial fungal infections of the skin can appear similar to other skin conditions such as eczema and lichen planus. Treatment is generally performed using antifungal medicines, usually in the form of a cream or by mouth or injection, depending on the specific infection and its extent. Some require surgically cutting out infected tissue. Fungal infections have a world-wide distribution and are common, affecting more than one billion people every year. An estimated 1.7 million deaths from fungal disease were reported in 2020. Several, including sporotrichosis, chromoblastomycosis and mycetoma are neglected. A wide range of fungal infections occur in other animals, and some can be transmitted from animals to people. Classification Mycoses are traditionally divided into superficial, subcutaneous, or systemic, where infection is deep, more widespread and involving internal body organs. They can affect the nails, vagina, skin and mouth. Some types such as blastomycosis, cryptococcus, coccidioidomycosis and histoplasmosis, affect people who live in or visit certain parts of the world. Others such as aspergillosis, pneumocystis pneumonia, candidiasis, mucormycosis and talaromycosis, tend to affect people who are unable to fight infection themselves. Mycoses might not always conform strictly to the three divisions of superficial, subcutaneous and systemic. Some superficial fungal infections can cause systemic infections in people who are immunocompromised. Some subcutaneous fungal infections can invade into deeper structures, resulting in systemic disease. 
Candida albicans can live in people without producing symptoms, and is able to cause both mild candidiasis in healthy people and severe invasive candidiasis in those who cannot fight infection themselves. ICD-11 codes ICD-11 codes include: 1F20 Aspergillosis 1F21 Basidiobolomycosis 1F22 Blastomycosis 1F23 Candidosis 1F24 Chromoblastomycosis 1F25 Coccidioidomycosis 1F26 Conidiobolomycosis 1F27 Cryptococcosis 1F28 Dermatophytosis 1F29 Eumycetoma 1F2A Histoplasmosis 1F2B Lobomycosis 1F2C Mucormycosis 1F2D Non-dermatophyte superficial dermatomycoses 1F2E Paracoccidioidomycosis 1F2F Phaeohyphomycosis 1F2G Pneumocystosis 1F2H Scedosporiosis 1F2J Sporotrichosis 1F2K Talaromycosis 1F2L Emmonsiosis Superficial mycoses Superficial mycoses include candidiasis in healthy people, common tinea of the skin, such as tinea of the body, groin, hands, feet and beard, and malassezia infections such as pityriasis versicolor. Subcutaneous Subcutaneous fungal infections include sporotrichosis, chromoblastomycosis, and eumycetoma. Systemic Systemic fungal infections include histoplasmosis, cryptococcosis, coccidioidomycosis, blastomycosis, mucormycosis, aspergillosis, pneumocystis pneumonia and systemic candidiasis. Systemic mycoses due to primary pathogens originate normally in the lungs and may spread to other organ systems. Organisms that cause systemic mycoses are inherently virulent.. Systemic mycoses due to opportunistic pathogens are infections of people with immune deficiencies who would otherwise not be infected. Examples of immunocompromised conditions include AIDS, alteration of normal flora by antibiotics, immunosuppressive therapy, and metastatic cancer. Examples of opportunistic mycoses include Candidiasis, Cryptococcosis and Aspergillosis. Signs and symptoms Most common mild mycoses often present with a rash. Infections within the skin or under the skin may present with a lump and skin changes. Less common deeper fungal infections may present with pneumonia like symptoms or meningitis. Causes Mycoses are caused by certain fungi; yeasts, molds and some fungi that can exist as both a mold and yeast. They are everywhere and infection occurs after spores are either breathed in, come into contact with skin or enter the body through the skin such as via a cut, wound or injection. Candida albicans is the most common cause of fungal infection in people, particularly as oral or vaginal thrush, often following taking antibiotics. Risk factors Fungal infections are more likely in people with weak immune systems. This includes people with illnesses such as HIV/AIDS, and people taking medicines such as steroids or cancer treatments. People with diabetes also tend to develop fungal infections. Very young and very old people, also, are groups at risk. Individuals being treated with antibiotics are at higher risk of fungal infections. Children whose immune systems are not functioning properly (such as children with cancer) are at risk of invasive fungal infections. COVID-19 During the COVID-19 pandemic some fungal infections have been associated with COVID-19. Fungal infections can mimic COVID-19, occur at the same time as COVID-19 and more serious fungal infections can complicate COVID-19. A fungal infection may occur after antibiotics for a bacterial infection which has occurred following COVID-19. The most common serious fungal infections in people with COVID-19 include aspergillosis and invasive candidiasis. 
COVID-19–associated mucormycosis is generally less common, but in 2021 was noted to be significantly more prevalent in India. Mechanism Fungal infections occur after spores are either breathed in, come into contact with skin or enter the body through a wound. Diagnosis Diagnosis is generally by signs and symptoms, microscopy, biopsy, culture and sometimes with the aid of medical imaging. Differential diagnosis Some tinea and candidiasis infections of the skin can appear similar to eczema and lichen planus. Pityriasis versicolor can look like seborrheic dermatitis, pityriasis rosea, pityriasis alba and vitiligo. Some fungal infections such as coccidioidomycosis, histoplasmosis, and blastomycosis can present with fever, cough, and shortness of breath, thereby resembling COVID-19. Prevention Keeping the skin clean and dry, as well as maintaining good hygiene, will help prevent larger topical mycoses. Because some fungal infections are contagious, it is important to wash hands after touching other people or animals. Sports clothing should also be washed after use. Treatment Treatment depends on the type of fungal infection, and usually requires topical or systemic antifungal medicines. Pneumocystosis that does not respond to antifungals is treated with co-trimoxazole. Sometimes, infected tissue needs to be surgically cut away. Epidemiology Worldwide, fungal infections affect more than one billion people every year. An estimated 1.6 million deaths from fungal disease were reported in 2017. The figure has been rising, with an estimated 1.7 million deaths from fungal disease reported in 2020. Fungal infections also constitute a significant cause of illness and mortality in children. According to the Global Action Fund for Fungal Infections, every year there are over 10 million cases of fungal asthma, around 3 million cases of long-term aspergillosis of the lungs, 1 million cases of blindness due to fungal keratitis, more than 200,000 cases of meningitis due to cryptococcus, 700,000 cases of invasive candidiasis, 500,000 cases of pneumocystosis of the lungs, 250,000 cases of invasive aspergillosis, and 100,000 cases of histoplasmosis. History Around 500 BC, an apparent account by Hippocrates of ulcers in the mouth may have described thrush. David Gruby, a Hungarian microscopist based in Paris, first reported in the early 1840s that human disease could be caused by fungi. SARS 2003 During the 2003 SARS outbreak, fungal infections were reported in 14.8–33% of people affected by SARS, and were the cause of death in 25–73.7% of people with SARS. Other animals A wide range of fungal infections occur in other animals, and some can be transmitted from animals to people, such as Microsporum canis from cats. See also Actinomycosis Climate change and infectious diseases References Tropical diseases Animal fungal diseases Fungal diseases
Fungal infection
[ "Biology" ]
2,189
[ "Fungi", "Fungal diseases" ]
1,530,649
https://en.wikipedia.org/wiki/Surface-conduction%20electron-emitter%20display
A surface-conduction electron-emitter display (SED) is a display technology for flat panel displays developed by a number of companies. SEDs use nanoscopic-scale electron emitters to energize colored phosphors and produce an image. In a general sense, a SED consists of a matrix of tiny cathode-ray tubes, each "tube" forming a single sub-pixel on the screen, grouped in threes to form red-green-blue (RGB) pixels. SEDs combine the advantages of CRTs, namely their high contrast ratios, wide viewing angles, and very fast response times, with the packaging advantages of LCD and other flat panel displays. After considerable time and effort in the early and mid-2000s, SED efforts started winding down in 2009 as LCD became the dominant technology. In August 2010, Canon announced they were shutting down their joint effort to develop SEDs commercially, signaling the end of development efforts. SEDs were closely related to another developing display technology, the field-emission display, or FED, differing primarily in the details of the electron emitters. Sony, the main backer of FED, has similarly backed off from its development efforts. Description A conventional cathode-ray tube (CRT) is powered by an electron gun, essentially an open-ended vacuum tube. At one end of the gun, electrons are produced by "boiling" them off a metal filament, which requires relatively high currents and consumes a large proportion of the CRT's power. The electrons are then accelerated and focused into a fast-moving beam, flowing forward towards the screen. Electromagnets surrounding the gun end of the tube are used to steer the beam as it travels forward, allowing the beam to be scanned across the screen to produce a 2D display. When the fast-moving electrons strike the phosphor on the back of the screen, light is produced. Color images are produced by painting the screen with spots or stripes of three colored phosphors, one each for red, green, and blue (RGB). When viewed from a distance, the spots, known as "sub-pixels," blend together in the eye to produce a single picture element known as a pixel. The SED replaces the single gun of a conventional CRT with a grid of nanoscopic emitters, one for each sub-pixel of the display. The emitter apparatus consists of a thin slit across which electrons jump when powered with high-voltage gradients. Due to the nanoscopic size of the slits, the required field can correspond to a potential on the order of tens of volts. A small fraction of the electrons, on the order of 3%, strike the slit material on the far side and are scattered out of the emitter surface. A second field, applied externally, accelerates these scattered electrons towards the screen. Production of this field requires kilovolt potentials, but it is a constant field requiring no switching, so the electronics that produce it are relatively simple. Each emitter is aligned behind a colored phosphor dot. The accelerated electrons strike the dot and cause it to give off light in a fashion identical to that of a conventional CRT. Since each dot on the screen is lit by a single emitter, there is no need to steer or direct the beam as there is in a CRT. The quantum tunneling effect, which emits electrons across the slits, is highly non-linear, and the emission process tends to be fully on or off for any given voltage. This allows the selection of particular emitters by powering a single horizontal row on the screen and then powering all the needed vertical columns simultaneously, thereby powering the selected emitters. 
The half-power received by the rest of the emitters on the row is too small to cause emission, even when combined with voltage leaking from active emitters beside them. This allows SED displays to work without an active matrix of thin-film transistors that LCDs and similar displays require to precisely select every sub-pixel, and further reduces the complexity of the emitter array. However, this also means that changes in voltage cannot be used to control the brightness of the resulting pixels. Instead, the emitters are rapidly turned on and off using pulse-width modulation, so that the total brightness of a spot at any given time can be controlled. SED screens consist of two glass sheets separated by a few millimeters, the rear layer supporting the emitters and the front the phosphors. The front is easily prepared using methods similar to existing CRT systems; the phosphors are painted onto the screen using a variety of silkscreen or similar technologies and then covered with a thin layer of aluminum to make the screen visibly opaque and provide an electrical return path for the electrons once they strike the screen. In the SED, this layer also serves as the front electrode that accelerates the electrons toward the screen, held at a constant high voltage relative to the switching grid. As is the case with modern CRTs, a dark mask is applied to the glass before the phosphor is painted on to give the screen a dark charcoal gray color and improve the contrast ratio. Creating the rear layer with the emitters is a multistep process. First, a matrix of silver wires is printed on the screen to form the rows or columns, an insulator is added, and then the columns or rows are deposited on top of that. Electrodes are added into this array, typically using platinum, leaving a gap of about 60 micrometers between the columns. Next, square pads of palladium oxide (PdO) only 20 nanometers thick are deposited into the gaps between the electrodes, connecting them to supply power. A small slit is cut into the pad in the middle by repeatedly pulsing high currents through them. The resulting erosion causes a gap to form. The gap in the pad forms the emitter. The width of the gap has to be tightly controlled to work correctly, which proved challenging to control in practice. Modern SEDs add another step that greatly eases production. The pads are deposited with a much larger gap between them, as much as 50 nm, which allows them to be added directly using technology adapted from inkjet printers. The entire screen is then placed in an organic gas, and pulses of electricity are sent through the pads. Carbon in the gas is pulled onto the edges of the slit in the PdO squares, forming thin films that extend vertically off the tops of the gaps and grow toward each other at a slight angle. This process is self-limiting; if the gap gets too small, the pulses erode the carbon, so the gap width can be controlled to produce a fairly constant 5 nm slit between them. Since the screen needs to be held in a vacuum to work, there is a large inward force on the glass surfaces due to the surrounding atmospheric pressure. Because the emitters are laid out in vertical columns, there is a space between each column where there is no phosphor, normally above the column power lines. SEDs use this space to place thin sheets or rods on top of the conductors, which keep the two glass surfaces apart. 
A series of these is used to reinforce the screen over its entire surface, which significantly reduces the needed strength of the glass itself. A CRT has no place for similar reinforcements, so the glass at the front screen must be thick enough to support all the pressure. SEDs are thus much thinner and lighter than CRTs. SEDs can have a 100,000:1 contrast ratio. History Canon began SED research in 1986. Their early research used PdO electrodes without the carbon films on top, but controlling the slit width proved difficult. At the time there were a number of flat-screen technologies in early development, and the only one close to commercialization was the plasma display panel (PDP), which had numerous disadvantages – manufacturing cost and energy use among them. LCDs were not suitable for larger screen sizes due to low yields and complex manufacturing. In 2004 Canon signed an agreement with Toshiba to create a joint venture to continue development of SED technology, forming "SED Ltd." Toshiba introduced new technology to pattern the conductors underlying the emitters using technologies adapted from inkjet printers. At the time both companies claimed that production was slated to begin in 2005. Both Canon and Toshiba started displaying prototype units at trade shows during 2006, including 55" and 36" units from Canon, and a 42" unit from Toshiba. They were widely lauded in the press for their image quality, saying it was "something that must be seen to believe[d]." However, by this point Canon's SED introduction date had already slipped several times. It was first claimed it would go into production in 1999. This was pushed back to 2005 after the joint agreement, and then again into 2007 after the first demonstrations at CES and other shows. In October 2006, Toshiba's president announced the company plans to begin full production of 55-inch SED TVs in July 2007 at its recently built SED volume-production facility in Himeji. In December 2006, Toshiba President and Chief Executive Atsutoshi Nishida said Toshiba was on track to mass-produce SED TV sets in cooperation with Canon by 2008. He said the company planned to start small-output production in the fall of 2007, but they do not expect SED displays to become a commodity and will not release the technology to the consumer market because of its expected high price, reserving it solely for professional broadcasting applications. Also, in December 2006 it was revealed that one reason for the delay was a lawsuit brought against Canon by Applied Nanotech. On 25 May 2007, Canon announced that the prolonged litigation would postpone the launch of SED televisions, and a new launch date would be announced at some date in the future. Applied Nanotech, a subsidiary of Nano-Proprietary, holds a number of patents related to FED and SED manufacturing. They had sold Canon a perpetual license for a coating technology used in their newer carbon-based emitter structure. Applied Nanotech claimed that Canon's agreement with Toshiba amounted to an illegal technology transfer, and a separate agreement would have to be reached. They first approached the problem in April 2005. Canon responded to the lawsuit with several actions. On 12 January 2007 they announced that they would buy all of Toshiba's shares in SED Inc. in order to eliminate Toshiba's involvement in the venture. They also started re-working their existing RE40,062 patent filing in order to remove any of Applied Nanotech's technologies from their system. 
The modified patent was issued on 12 February 2008. On 22 February 2007, the U.S. District Court for the Western District of Texas, a district widely known for agreeing with patent holders in intellectual property cases, ruled in a summary judgment that Canon had violated its agreement by forming a joint television venture with Toshiba. However, on 2 May 2007 a jury ruled that no additional damages beyond the $5.5m fee for the original licensing contract were due. On 25 July 2008, the U.S. Court of Appeals for the 5th Circuit reversed the lower court's decision, holding that Canon's "irrevocable and perpetual" non-exclusive licence was still enforceable and covered Canon's restructured subsidiary SED. On 2 December 2008, Applied Nanotech dropped the lawsuit, stating that continuing the lawsuit "would probably be a futile effort". In spite of this legal success, Canon announced at the same time that the Great Recession was making introduction of the sets far from certain, going so far as to say it would not be launching the product at that time "because people would laugh at them". Canon also had an ongoing OLED development process that started in the midst of the lawsuit. In 2007 they announced a joint deal to form "Hitachi Displays Ltd.", with Matsushita and Canon each taking a 24.9% share of Hitachi's existing subsidiary. Canon later announced that they were purchasing Tokki Corp, a maker of OLED fabrication equipment. In April 2009 during NAB 2009, Peter Putman was quoted as saying "I was asked on more than one occasion about the chances of Canon's SED making a comeback, something I would not have bet money on after the Nano Technologies licensing debacle. However, a source within Canon told me at the show that the SED is still very much alive as a pro monitor technology. Indeed, a Canon SED engineer from Japan was quietly making the rounds in the Las Vegas Convention Center to scope out the competition." Canon officially announced on 25 May 2010 the end of the development of SED TVs for the home consumer market, but indicated that they would continue development for commercial applications like medical equipment. On 18 August 2010, Canon decided to liquidate SED Inc., a consolidated subsidiary of Canon Inc. developing SED technology, citing difficulties in securing appropriate profitability and effectively ending hopes of one day seeing SED TVs in the living room. See also Comparison of display technology Field-emission display Organic light-emitting diode Quantum dot display Notes Bibliography Patents U.S. Patent RE40,062, "Display device with electron-emitting device with electron-emitting region", Seishiro Yoshioka et al./Canon Kabushiki Kaisha, Filed 2 June 2000, Re-issued 12 Feb 2008 Further reading "Funding for organic-LED technology, patent disputes, and more", Nature Photonics, Volume 1 Number 5 (2007), pg. 278 External links Technical comparison between SED and FED Display technology Japanese inventions
Surface-conduction electron-emitter display
[ "Engineering" ]
2,824
[ "Electronic engineering", "Display technology" ]
1,530,689
https://en.wikipedia.org/wiki/Complementarity%20%28physics%29
In physics, complementarity is a conceptual aspect of quantum mechanics that Niels Bohr regarded as an essential feature of the theory. The complementarity principle holds that certain pairs of complementary properties cannot all be observed or measured simultaneously, for example position and momentum, or wave and particle properties. In contemporary terms, complementarity encompasses both the uncertainty principle and wave-particle duality. Bohr considered one of the foundational truths of quantum mechanics to be the fact that setting up an experiment to measure one quantity of a pair, for instance the position of an electron, excludes the possibility of measuring the other, yet understanding both experiments is necessary to characterize the object under study. In Bohr's view, the behavior of atomic and subatomic objects cannot be separated from the measuring instruments that create the context in which the measured objects behave. Consequently, there is no "single picture" that unifies the results obtained in these different experimental contexts, and only the "totality of the phenomena" together can provide a completely informative description. History Background Complementarity as a physical model derives from Niels Bohr's 1927 presentation in Como, Italy, at a scientific celebration of the work of Alessandro Volta a century earlier. Bohr's subject was complementarity, the idea that measurements of quantum events provide complementary information through seemingly contradictory results. While Bohr's presentation was not well received, it did crystallize the issues ultimately leading to the modern wave-particle duality concept. The contradictory results that triggered Bohr's ideas had been building up over the previous 20 years. This contradictory evidence came both from light and from electrons. The wave theory of light, broadly successful for over a hundred years, had been challenged by Planck's 1901 model of blackbody radiation and Einstein's 1905 interpretation of the photoelectric effect. These theoretical models use discrete energy, a quantum, to describe the interaction of light with matter. Despite confirmation by various experimental observations, the photon theory (as it came to be called later) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light. The experimental evidence of particle-like momentum seemingly contradicted other experiments demonstrating the wave-like interference of light. The contradictory evidence from electrons arrived in the opposite order. Many experiments by J. J. Thomson, Robert Millikan, and Charles Wilson, among others, had shown that free electrons had particle properties. However, in 1924, Louis de Broglie proposed that electrons had an associated wave and Schrödinger demonstrated that wave equations accurately account for electron properties in atoms. Again some experiments showed particle properties and others wave properties. Bohr's resolution of these contradictions is to accept them. In his Como lecture he says: "our interpretation of the experimental material rests essentially upon the classical concepts." Direct observation being impossible, observations of quantum effects are necessarily classical. Whatever the nature of quantum events, our only information will arrive via classical results. If experiments sometimes produce wave results and sometimes particle results, that is the nature of light and of the ultimate constituents of matter. 
Bohr's lectures Niels Bohr apparently conceived of the principle of complementarity during a skiing vacation in Norway in February and March 1927, during which he received a letter from Werner Heisenberg regarding an as-yet-unpublished result, a thought experiment about a microscope using gamma rays. This thought experiment implied a tradeoff between uncertainties that would later be formalized as the uncertainty principle. To Bohr, Heisenberg's paper did not make clear the distinction between a position measurement merely disturbing the momentum value that a particle carried and the more radical idea that momentum was meaningless or undefinable in a context where position was measured instead. Upon returning from his vacation, by which time Heisenberg had already submitted his paper for publication, Bohr convinced Heisenberg that the uncertainty tradeoff was a manifestation of the deeper concept of complementarity. Heisenberg duly appended a note to this effect to his paper, before its publication, stating: Bohr has brought to my attention [that] the uncertainty in our observation does not arise exclusively from the occurrence of discontinuities, but is tied directly to the demand that we ascribe equal validity to the quite different experiments which show up in the [particulate] theory on one hand, and in the wave theory on the other hand. Bohr publicly introduced the principle of complementarity in a lecture he delivered on 16 September 1927 at the International Physics Congress held in Como, Italy, attended by most of the leading physicists of the era, with the notable exceptions of Einstein, Schrödinger, and Dirac. However, these three were in attendance one month later when Bohr again presented the principle at the Fifth Solvay Congress in Brussels, Belgium. The lecture was published in the proceedings of both of these conferences, and was republished the following year in Naturwissenschaften (in German) and in Nature (in English). In his original lecture on the topic, Bohr pointed out that just as the finitude of the speed of light implies the impossibility of a sharp separation between space and time (relativity), the finitude of the quantum of action implies the impossibility of a sharp separation between the behavior of a system and its interaction with the measuring instruments and leads to the well-known difficulties with the concept of 'state' in quantum theory; the notion of complementarity is intended to capture this new situation in epistemology created by quantum theory. Physicists F.A.M. Frescura and Basil Hiley have summarized the reasons for the introduction of the principle of complementarity in physics as follows: Debate following the lectures Complementarity was a central feature of Bohr's reply to the EPR paradox, an attempt by Albert Einstein, Boris Podolsky and Nathan Rosen to argue that quantum particles must have position and momentum even without being measured and so quantum mechanics must be an incomplete theory. The thought experiment proposed by Einstein, Podolsky and Rosen involved producing two particles and sending them far apart. The experimenter could choose to measure either the position or the momentum of one particle. Given that result, they could in principle make a precise prediction of what the corresponding measurement on the other, faraway particle would find. To Einstein, Podolsky and Rosen, this implied that the faraway particle must have precise values of both quantities whether or not that particle is measured in any way. 
Bohr argued in response that the deduction of a position value could not be transferred over to the situation where a momentum value is measured, and vice versa. Later expositions of complementarity by Bohr include a 1938 lecture in Warsaw and a 1949 article written for a festschrift honoring Albert Einstein. It was also covered in a 1953 essay by Bohr's collaborator Léon Rosenfeld. Mathematical formalism For Bohr, complementarity was the "ultimate reason" behind the uncertainty principle. All attempts to grapple with atomic phenomena using classical physics were eventually frustrated, he wrote, leading to the recognition that those phenomena have "complementary aspects". But classical physics can be generalized to address this, and with "astounding simplicity", by describing physical quantities using non-commutative algebra. This mathematical expression of complementarity builds on the work of Hermann Weyl and Julian Schwinger, starting with Hilbert spaces and unitary transformation, leading to the theorems of mutually unbiased bases. In the mathematical formulation of quantum mechanics, physical quantities that classical mechanics had treated as real-valued variables become self-adjoint operators on a Hilbert space. These operators, called "observables", can fail to commute, in which case they are called "incompatible": two observables A and B are incompatible when AB ≠ BA. Incompatible observables cannot have a complete set of common eigenstates; there can be some simultaneous eigenstates of A and B, but not enough in number to constitute a complete basis. The canonical commutation relation [x, p] = iħ implies that this applies to position and momentum. In a Bohrian view, this is a mathematical statement that position and momentum are complementary aspects. Likewise, an analogous relationship holds for any two of the spin observables defined by the Pauli matrices; measurements of spin along perpendicular axes are complementary. The Pauli spin observables are defined for a quantum system described by a two-dimensional Hilbert space; mutually unbiased bases generalize these observables to Hilbert spaces of arbitrary finite dimension. Two bases {|aj⟩} and {|bk⟩} for an N-dimensional Hilbert space are mutually unbiased when |⟨aj|bk⟩| = 1/√N for every j and k. Here the basis vector |aj⟩, for example, has the same overlap with every |bk⟩; there is equal transition probability between a state in one basis and any state in the other basis. Each basis corresponds to an observable, and the observables for two mutually unbiased bases are complementary to each other. This leads to the description of complementarity as a statement about quantum kinematics. The concept of complementarity has also been applied to quantum measurements described by positive-operator-valued measures (POVMs). Continuous complementarity While the concept of complementarity can be discussed via two experimental extremes, continuous tradeoff is also possible. In 1979 Wootters and Zurek introduced an information-theoretic treatment of the double-slit experiment providing one approach to a quantitative model of complementarity. The wave-particle relation, introduced by Daniel Greenberger and Allaine Yasin in 1988, and since then refined by others, quantifies the trade-off between measuring particle path distinguishability, D, and wave interference fringe visibility, V: D² + V² ≤ 1. The values of D and V can vary between 0 and 1 individually, but any experiment that combines particle and wave detection will limit one or the other, or both. 
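The two ingredients above, incompatible observables with mutually unbiased eigenbases and the distinguishability-visibility trade-off, lend themselves to a quick numerical check. The sketch below is only an illustration: the which-path marker states and the angle theta are arbitrary assumptions, not values taken from Wootters and Zurek or from Greenberger and Yasin.

```python
import numpy as np

# Mutually unbiased bases in dimension N = 2: the eigenbases of Pauli Z and Pauli X.
z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]
for zi in z_basis:
    for xj in x_basis:
        # every overlap equals 1/sqrt(N) = 1/sqrt(2)
        assert np.isclose(abs(np.vdot(zi, xj)), 1 / np.sqrt(2))

# The corresponding observables do not commute.
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
print("ZX - XZ =\n", Z @ X - X @ Z)           # nonzero commutator

# Two-path interferometer with a which-path marker.  The overlap c = <m1|m2>
# of the marker states controls how much path information is carried away.
theta = 0.4                                    # assumed, purely illustrative
m1 = np.array([1.0, 0.0])
m2 = np.array([np.cos(theta), np.sin(theta)])
c = np.vdot(m1, m2)

# Fringe visibility V read off the interference pattern on the screen.
phi = np.linspace(0.0, 2.0 * np.pi, 2001)
intensity = 0.5 * (1.0 + np.real(c * np.exp(1j * phi)))
V = (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())

# Path distinguishability D = trace distance between the two marker states.
rho1, rho2 = np.outer(m1, m1), np.outer(m2, m2)
D = 0.5 * np.abs(np.linalg.eigvalsh(rho1 - rho2)).sum()

print(f"V = {V:.4f}, D = {D:.4f}, D^2 + V^2 = {D*D + V*V:.4f}")
```

For a pure marker state the printed sum equals 1 to numerical precision, the limiting case of the inequality; mixed or noisy markers push D² + V² below 1.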
The detailed definitions of the two terms vary among applications, but the relation expresses the verified constraint that efforts to detect particle paths will result in less visible wave interference. Modern role While many of the early discussions of complementarity discussed hypothetical experiments, advances in technology have allowed increasingly refined tests of this concept. Experiments like the quantum eraser verify the key ideas in complementarity, and modern exploration of quantum entanglement builds directly on complementarity. In his Nobel lecture, physicist Julian Schwinger linked complementarity to quantum field theory. The consistent histories interpretation of quantum mechanics takes a generalized form of complementarity as a key defining postulate. See also Copenhagen interpretation Canonical coordinates Conjugate variables Interpretations of quantum mechanics Wave–particle duality References Further reading Berthold-Georg Englert, Marlan O. Scully & Herbert Walther, Quantum Optical Tests of Complementarity, Nature, Vol 351, pp 111–116 (9 May 1991) and (same authors) The Duality in Matter and Light, Scientific American, pp 56–61 (December 1994). Niels Bohr, Causality and Complementarity: supplementary papers edited by Jan Faye and Henry J. Folse. The Philosophical Writings of Niels Bohr, Volume IV. Ox Bow Press. 1998. External links Discussions with Einstein on Epistemological Problems in Atomic Physics Einstein's Reply to Criticisms Quantum mechanics Niels Bohr Dichotomies Scientific laws
Complementarity (physics)
[ "Physics", "Mathematics" ]
2,302
[ "Theoretical physics", "Mathematical objects", "Quantum mechanics", "Equations", "Scientific laws" ]
1,530,838
https://en.wikipedia.org/wiki/Ozokerite
Ozokerite or ozocerite, archaically referred to as earthwax or earth wax, is a naturally occurring odoriferous mineral wax or paraffin found in many localities. Lacking a definite composition and crystalline structure, it is not considered a mineral but only a mineraloid. The name was coined from Greek elements Όζω ozο, to stink, and κηρός keros, wax. Sources Specimens have been obtained from Scotland, Northumberland, Wales, as well as from about thirty different countries. Of these occurrences the ozokerite of the island (now peninsula) of Cheleken, near Türkmenbaşy Bay, parts of the Himalayas in India and the deposits of Utah in the United States, deserve mention, though the latter have been largely worked out. The sole sources of commercial supply are in Galicia, at Boryslav, Dzwiniacz and Starunia, though the mineraloid is found at other points on both flanks of the Carpathians. Ozokerite deposits are believed to have originated in much the same way as mineral veins, the slow evaporation and oxidation of petroleum having resulted in the deposition of its dissolved paraffin in the fissures and crevices previously occupied by the liquid. As found native, ozokerite varies from a very soft wax to a black mass as hard as gypsum. Properties Its specific gravity ranges from 0.85 to 0.95, and its melting point from . It is soluble in ether, petroleum, benzene, turpentine, chloroform, carbon disulfide and others. Galician ozokerite varies in color from light yellow to dark brown, and frequently appears green owing to dichroism. It usually melts at . Chemically, ozokerite consists of a mixture of various hydrocarbons, containing 85–87% by weight of carbon and 14.3% of hydrogen. Mining The mining of ozokerite began in Galicia in the 1880s, and was formerly carried on by means of hand-labor, but in the ozokerite mines owned by the Boryslaw Actien Gesellschaft and the Galizische Kreditbank, the workings of which extend to a depth of , and , respectively, electrical power is employed for hauling, pumping and ventilating. In these mines there are the usual main shafts and galleries, the ozokerite being reached by levels driven along the strike of the deposit. The wax, as it reaches the surface, varies in purity, and, in new workings especially, only hand-picking is needed to separate the pure material. In other cases much earthy matter is mixed with the material, and then the rock or shale having been eliminated by hand-picking, the "wax-stone" is boiled with water in large coppers, when the pure wax rises to the surface. This is again melted without water, and the impurities are skimmed off, the material being then run into slightly conical cylindrical moulds, and thus made into blocks for the market. The crude ozokerite is refined by treatment first with sulfuric acid, and subsequently with charcoal, when the ceresine or cerasin of commerce is obtained. The refined ozokerite or ceresine, which usually has a melting-point of , is largely used as an adulterant of beeswax, and is frequently colored artificially to resemble that product in appearance. On distillation in a current of superheated steam, ozokerite yields a candle-making material resembling the paraffin obtained from petroleum and shale-oil but of higher melting-point, and therefore of greater value if the candles made from it are to be used in hot climates. There are also obtained in the distillation light oils and a product resembling vaseline. 
The residue in the stills consists of a hard, black, waxy substance, which in admixture with India-rubber was employed under the name of okonite as an electrical insulator. From the residue a form of the material known as heel-ball, used to impart a polished surface to the heels and soles of boots, was also manufactured. Mining of ozokerite diminished after 1940 due to competition from paraffins manufactured from petroleum. It has a higher melting point than most petroleum waxes, and is favored for some applications, such as electrical insulators and candles. See also Zietrisikite References Frank, Alison Fleig (2005). Oil Empire: Visions of Prosperity in Austrian Galicia (Harvard Historical Studies). Harvard University Press. p. 98. . Waxes Hydrocarbons
Ozokerite
[ "Physics", "Chemistry" ]
944
[ "Hydrocarbons", "Organic compounds", "Materials", "Matter", "Waxes" ]
1,530,988
https://en.wikipedia.org/wiki/Created%20kind
In creationism, a religious view based on a literal reading of the Book of Genesis and other biblical texts, created kinds are purported to be the original forms of life as they were created by God. They are also referred to in creationist literature as kinds, original kinds, Genesis kinds, and baramins (baramin is a neologism coined by combining the Hebrew words () and ()). The idea is promulgated by Young Earth creationists and biblical literalists to support their belief in the literal truth of the Genesis creation narrative and the Genesis flood narrative during which, they contend, the ancestors of all land-based life on Earth were housed in Noah's Ark. Old Earth creationists also employ the concept, rejecting the fact of universal common descent while not necessarily accepting a literal interpretation of a global flood or a six-day creation in the last ten thousand years. Both groups accept that some lower-level microevolutionary change occurs within the biblically created kinds. Creationists believe that not all creatures on Earth are genealogically related, and that living organisms were created by God in a finite number of discrete forms with genetic boundaries to prevent interbreeding. This viewpoint claims that the created kinds or baramins are genealogically discrete and are incapable of interbreeding and have no evolutionary (i.e., higher-level macroevolutionary) relationship to one another. Definitions The concept of the "kind" originates from a literal reading of Genesis 1:12–24: There is some uncertainty about what exactly the Bible means when it talks of "kinds". Creationist Brian Nelson claimed "While the Bible allows that new varieties may have arisen since the creative days, it denies that any new species have arisen." However, Russell Mixter, another creationist writer, said that "One should not insist that "kind" means species. The word "kind" as used in the Bible may apply to any animal which may be distinguished in any way from another, or it may be applied to a large group of species distinguishable from another group[...] there is plenty of room for differences of opinion on what are the kinds of Genesis." Frank Lewis Marsh coined the term baramin in his book Fundamental Biology (1941) and expanded on the concept in Evolution, Creation, and Science (), in which he stated that the ability to hybridize and create viable offspring was a sufficient condition for being members of the same baramin. However, he said that it was not a necessary condition, acknowledging that observed speciation events among Drosophila fruitflies had been shown to cut off hybridization. Marsh also originated "discontinuity systematics", the idea that there are boundaries between different animals that cannot be crossed with the consequence that there would be discontinuities in the history of life and limits to common ancestry. Baraminology In 1990, Kurt Wise introduced baraminology as an adaptation of Marsh's and Walter ReMine's ideas that was more in keeping with young Earth creationism. Wise advocated using the Bible as a source of systematic data. Baraminology and its associated concepts have been criticized by scientists and creationists for lacking formal structure. Consequently, in 2003 Wise and other creationists proposed a refined baramin concept in the hope of developing a broader creationist model of biology. Alan Gishlick, reviewing the work of baraminologists in 2006, found it to be surprisingly rigorous and internally consistent, but concluded that the methods did not work. 
Walter ReMine specified four groupings: holobaramins, monobaramins, apobaramins, and polybaramins. These are, respectively, all things of one kind; some things of the same kind; groups of kinds; and any mixed grouping of things. These groups correspond to the concepts of holophyly, monophyly, paraphyly, and polyphyly used in cladistics. Methods Baraminology employs many of the same methods used in evolutionary systematics, including cladistics and Analysis of Pattern (ANOPA). However, instead of identifying continuity between groups of organisms based on shared similarities, baraminology uses these methods to search for morphological and genetic gaps between groups. Baraminologists have also developed their own creationist systematics software, known as BDIST, to measure distance between groups. Criticism The methods of baraminology are not universally accepted among young-Earth creationists. Other creationists have criticized these methods as having the same problems as traditional cladistics, as well as for occasionally producing results that they feel contradict the Bible. Baraminology has been heavily criticized for its lack of rigorous tests and post-study rejection of data to make it better fit the desired findings. By denying general common descent, it tends to produce inconsistent results that also conflict with evidence discovered by biology. Created kinds have been compared to other attempts at "alternate research" to produce artificial, pseudoscientific "evidence" that supports preconceived conclusions, similarly to how advocacy was done by the tobacco industry. The US National Academy of Sciences and numerous other scientific and scholarly organizations recognize creation science as pseudoscience. Some techniques employed in Baraminology have been used to demonstrate evolution, thereby calling baraminological conclusions into question. See also Antediluvian Creatio ex nihilo Flood geology Garden of Eden Pre-Adamites Notes Explanatory notes Citations External links The Definition of 'kinds' Creationism Pseudoscience Genesis creation narrative
Created kind
[ "Biology" ]
1,133
[ "Creationism", "Biology theories", "Obsolete biology theories" ]
1,531,001
https://en.wikipedia.org/wiki/Tetracosane
Tetracosane, also called tetrakosane, is an alkane hydrocarbon with the structural formula H(CH2)24H. As with other alkanes, its name is derived from Greek for the number of carbon atoms, 24, in the molecule. It has 14,490,245 constitutional isomers, and 252,260,276 stereoisomers. n-Tetracosane is found in mineral form, called evenkite, in the Evenki Region on Lower Tunguska River in Siberia and the Bucnik quarry near Konma in eastern Moravia, Czech Republic. Evenkite is found as colourless flakes and is reported to fluoresce yellow-orange. See also List of alkanes References External links NIST Entry Alkanes
Tetracosane
[ "Chemistry" ]
165
[ "Organic compounds", "Alkanes" ]
1,531,162
https://en.wikipedia.org/wiki/Digital%20permanence
Digital permanence addresses the history and development of digital storage techniques, specifically quantifying the expected lifetime of data stored on various digital media and the factors which influence the permanence of digital data. It is often a mix of ensuring the data itself can be retained on a particular form of media and that the technology remains viable. Where possible, as well as describing expected lifetimes, factors affecting data retention will be detailed, including potential technology issues. Since the inception of automatic computers, a key difference between them and other calculating machines has been their ability to store information. Over the years, various hardware devices have been designed to store ever larger quantities of data. With the development of the Internet the quantity of information available appears to continue to grow at an ever-increasing rate often characterised as an information explosion. As information is increasingly being stored on electronic media as opposed to traditional media such as hand-written documents, printed books, and photographic images, humanity's social and cultural legacy to future generations will depend increasingly on the permanence of these new media. However, not all of this information is worth saving; sometimes its value can be short-lived. Other data, such as legal contracts, literature, scientific studies, are expected to last for centuries. This article describes how reliable different types of storage media are at storing data over time and factors affecting this reliability. Librarians and archivists responsible for large repositories of information take a deeper view of electronic archives. Given that individuals' personal data has been growing at a rapid rate in the 21st century, these archiving issues affecting professional repositories will soon be manifest in small organisations and even the home. Types of storage Solid-state memory devices Digital computers, in particular, make use of two forms of memory known as RAM or ROM and although the most common form today is RAM, designed to retain data while the computer is powered on, this was not always the case. Nor is active memory the only form used; passive memory devices are now in common use in digital cameras. Magnetic, or ferrite core, data retention is dependent on the magnetic properties of iron and its compounds. PROM, or programmable read-only memory, stores data in a fixed form during the manufacturing process, with data retention dependent on the life expectancy of the device itself. EPROM, or erasable programmable read-only memory, is similar to PROM but can be cleared by exposure to ultraviolet light. EEPROM, or electrically erasable programmable read-only memory, is the format used by flash memory devices and can be erased and rewritten electronically. Magnetic media Magnetic tapes consist of narrow bands of a magnetic medium bonded in paper or plastic. The magnetic medium passes across a semi-fixed head which reads or writes data. Typically, magnetic media has a maximum lifetime of about 50 years although this assumes optimal storage conditions; life expectancy can decrease rapidly depending on storage conditions and the resilience and reliability of hardware components. magnetic tape reels magnetic stripe cards magnetic cards cassette tapes video cassette tapes Magnetic disks and drums include a rotating magnetic medium combined with a movable read/write head. 
floppy disks zip drives hard disks and drums Non-magnetic media punched paper-tape punched cards optical media (rotating media combined with a movable read/write head comprising a laser), such as: pressed CD-ROMs and DVD-ROMs Write once read many (WORM) media such as CD-R, DVD±R, BD-R. Rewriteable media such as CD-RW, DVD±RW, BD-RE. Some disc types can have multiple data layers for greater storage capacity. Printing technology Printing hard-copies of documents and images is a popular means of representing digital data and possibly acquires the qualities associated with original documents, especially their potential for endurance. More recent advances in printer technology have raised the quality of photographic images in particular. Unfortunately, the permanence of printed documents cannot be easily discerned from the documents themselves. wet-ribbon inked printers heat-sensitive papers, such as FAX rolls NCR and other carbon technologies ink-jet printers wax-based inks e.g. DataProducts SI810 water-based inks other bases mono laser printers colour laser printers Financial Driven Resources A way of preserving digital content through means of financial trusts. The data is driven with financial investments typically assigned to a Trust Company which pay traditional storage providers to house data for long periods of time with the interest gained on the principal. In 2008 a series of companies such as LivingStory.com and Orysa.com started offering these services to store point in time accounting data and provide consumer archive services. Soft storage technology The short-comings of some storage media is already well recognised and various attempts have been made to supplement the permanence of an under-lying technology. These "soft storage technologies" enhance their base technology by applying software or system techniques often within quite narrow fields of data storage and not always with the explicit intention of improving digital permanence. RAID systems Distributed systems, such as BitTorrent networked backup services public archive repositories web-site archives financial trust resources See also Preservation (library and archival science) Print permanence References External links UK Digital Archive Strategy Digital technology
Digital permanence
[ "Technology" ]
1,078
[ "Information and communications technology", "Digital technology" ]
1,531,173
https://en.wikipedia.org/wiki/Wien%20bridge%20oscillator
A Wien bridge oscillator is a type of electronic oscillator that generates sine waves. It can generate a large range of frequencies. The oscillator is based on a bridge circuit originally developed by Max Wien in 1891 for the measurement of impedances. The bridge comprises four resistors and two capacitors. The oscillator can also be viewed as a positive gain amplifier combined with a bandpass filter that provides positive feedback. Automatic gain control, intentional non-linearity and incidental non-linearity limit the output amplitude in various implementations of the oscillator. The circuit shown to the right depicts a once-common implementation of the oscillator, with automatic gain control using an incandescent lamp. Under the condition that R1=R2=R and C1=C2=C, the frequency of oscillation is given by f = 1/(2πRC), and the condition of stable oscillation is that the feedback resistors satisfy Rf = 2Rb. Background There were several efforts to improve oscillators in the 1930s. Linearity was recognized as important. The "resistance-stabilized oscillator" had an adjustable feedback resistor; that resistor would be set so the oscillator just started (thus setting the loop gain to just over unity). The oscillations would build until the vacuum tube's grid would start conducting current, which would increase losses and limit the output amplitude. Automatic amplitude control was investigated. Frederick Terman states, "The frequency stability and wave-shape form of any common oscillator can be improved by using an automatic-amplitude-control arrangement to maintain the amplitude of oscillations constant under all conditions." In 1937, Larned Meacham described using a filament lamp for automatic gain control in bridge oscillators. Also in 1937, Hermon Hosmer Scott described audio oscillators based on various bridges including the Wien bridge. Terman, at Stanford University, was interested in Harold Stephen Black's work on negative feedback, so he held a graduate seminar on negative feedback. Bill Hewlett attended the seminar. Scott's February 1938 oscillator paper came out during the seminar. Fred Terman explains: "To complete the requirements for an Engineer's degree at Stanford, Bill had to prepare a thesis. At that time I had decided to devote an entire quarter of my graduate seminar to the subject of 'negative feedback.' I had become interested in this then new technique because it seemed to have great potential for doing many useful things. I would report on some applications I had thought up on negative feedback, and the boys would read recent articles and report to each other on current developments. This seminar was just well started when a paper came out that looked interesting to me. It was by a man from General Radio and dealt with a fixed-frequency audio oscillator in which the frequency was controlled by a resistance-capacitance network, and was changed by means of push-buttons. Oscillations were obtained by an ingenious application of negative feedback." In June 1938, Terman, R. R. Buss, Hewlett and F. C. Cahill gave a presentation about negative feedback at the IRE Convention in New York; in August 1938, there was a second presentation at the IRE Pacific Coast Convention in Portland, OR; the presentation became an IRE paper. One topic was amplitude control in a Wien bridge oscillator. The oscillator was demonstrated in Portland. Hewlett, along with David Packard, co-founded Hewlett-Packard, and Hewlett-Packard's first product was the HP200A, a precision Wien bridge oscillator. 
The first sale was in January 1939. Hewlett's June 1939 engineer's degree thesis used a lamp to control the amplitude of a Wien bridge oscillator. Hewlett's oscillator produced a sinusoidal output with a stable amplitude and low distortion. Oscillators without automatic gain control The conventional oscillator circuit is designed so that it will start oscillating ("start up") and that its amplitude will be controlled. The oscillator at the right uses diodes to add a controlled compression to the amplifier output. It can produce total harmonic distortion in the range of 1-5%, depending on how carefully it is trimmed. For a linear circuit to oscillate, it must meet the Barkhausen conditions: its loop gain must be one and the phase around the loop must be an integer multiple of 360 degrees. The linear oscillator theory doesn't address how the oscillator starts up or how the amplitude is determined. The linear oscillator can support any amplitude. In practice, the loop gain is initially larger than unity. Random noise is present in all circuits, and some of that noise will be near the desired frequency. A loop gain greater than one allows the amplitude of frequency to increase exponentially each time around the loop. With a loop gain greater than one, the oscillator will start. Ideally, the loop gain needs to be just a little bigger than one, but in practice, it is often significantly greater than one. A larger loop gain makes the oscillator start quickly. A large loop gain also compensates for gain variations with temperature and the desired frequency of a tunable oscillator. For the oscillator to start, the loop gain must be greater than one under all possible conditions. A loop gain greater than one has a down side. In theory, the oscillator amplitude will increase without limit. In practice, the amplitude will increase until the output runs into some limiting factor such as the power supply voltage (the amplifier output runs into the supply rails) or the amplifier output current limits. The limiting reduces the effective gain of the amplifier (the effect is called gain compression). In a stable oscillator, the average loop gain will be one. Although the limiting action stabilizes the output voltage, it has two significant effects: it introduces harmonic distortion and it affects the frequency stability of the oscillator. The amount of distortion is related to the extra loop gain used for startup. If there's a lot of extra loop gain at small amplitudes, then the gain must decrease more at higher instantaneous amplitudes. That means more distortion. The amount of distortion is also related to final amplitude of the oscillation. Although an amplifier's gain is ideally linear, in practice it is nonlinear. The nonlinear transfer function can be expressed as a Taylor series. For small amplitudes, the higher order terms have little effect. For larger amplitudes, the nonlinearity is pronounced. Consequently, for low distortion, the oscillator's output amplitude should be a small fraction of the amplifier's dynamic range. Meacham's bridge stabilized oscillator Larned Meacham disclosed the bridge oscillator circuit shown to the right in 1938. The circuit was described as having very high frequency stability and very pure sinusoidal output. Instead of using tube overloading to control the amplitude, Meacham proposed a circuit that set the loop gain to unity while the amplifier is in its linear region. Meacham's circuit included a quartz crystal oscillator and a lamp in a Wheatstone bridge. 
In Meacham's circuit, the frequency determining components are in the negative feed back branch of the bridge and the gain controlling elements are in the positive feed back branch. The crystal, Z4, operates in series resonance. As such it minimizes the negative feedback at resonance. The particular crystal exhibited a real resistance of 114 ohms at resonance. At frequencies below resonance, the crystal is capacitive and the gain of the negative feedback branch has a negative phase shift. At frequencies above resonance, the crystal is inductive and the gain of the negative feedback branch has a positive phase shift. The phase shift goes through zero at the resonant frequency. As the lamp heats up, it decreases the positive feedback. The Q of the crystal in Meacham's circuit is given as 104,000. At any frequency different from the resonant frequency by more than a small multiple of the bandwidth of the crystal, the negative feedback branch dominates the loop gain and there can be no self-sustaining oscillation except within the narrow bandwidth of the crystal. In 1944 (after Hewlett's design), J. K. Clapp modified Meacham's circuit to use a vacuum tube phase inverter instead of a transformer to drive the bridge. A modified Meacham oscillator uses Clapp's phase inverter but substitutes a diode limiter for the tungsten lamp. Hewlett's oscillator William R. Hewlett's Wien bridge oscillator can be considered as a combination of a differential amplifier and a Wien bridge, connected in a positive feedback loop between the amplifier output and differential inputs. At the oscillating frequency, the bridge is almost balanced and has very small transfer ratio. The loop gain is a product of the very high amplifier gain and the very low bridge ratio. In Hewlett's circuit, the amplifier is implemented by two vacuum tubes. The amplifier's inverting input is the cathode of tube V1 and the non-inverting input is the control grid of tube V2. To simplify analysis, all the components other than R1, R2, C1 and C2 can be modeled as a non-inverting amplifier with a gain of 1+Rf/Rb and with a high input impedance. R1, R2, C1 and C2 form a bandpass filter which is connected to provide positive feedback at the frequency of oscillation. Rb self heats and increases the negative feedback which reduces the amplifier gain until the point is reached that there is just enough gain to sustain sinusoidal oscillation without over driving the amplifier. If R1 = R2 and C1 = C2 then at equilibrium Rf/Rb = 2 and the amplifier gain is 3. When the circuit is first energized, the lamp is cold and the gain of the circuit is greater than 3 which ensures start up. The dc bias current of vacuum tube V1 also flows through the lamp. This does not change the principles of the circuit's operation, but it does reduce the amplitude of the output at equilibrium because the bias current provides part of the heating of the lamp. Hewlett's thesis made the following conclusions: A resistance-capacity oscillator of the type just described should be well suited for laboratory service. It has the ease of handling of a beat-frequency oscillator and yet few of its disadvantages. In the first place the frequency stability at low frequencies is much better than is possible with the beat-frequency type. There need be no critical placements of parts to insure small temperature changes, nor carefully designed detector circuits to prevent interlocking of oscillators. As a result of this, the overall weight of the oscillator may be kept at a minimum. 
An oscillator of this type, including a 1 watt amplifier and power supply, weighed only 18 pounds, in contrast to 93 pounds for the General Radio beat-frequency oscillator of comparable performance. The distortion and constancy of output compare favorably with the best beat-frequency oscillators now available. Lastly, an oscillator of this type can be laid out and constructed on the same basis as a commercial broadcast receiver, but with fewer adjustments to make. It thus combines quality of performance with cheapness of cost to give an ideal laboratory oscillator. Wien bridge Bridge circuits were a common way of measuring component values by comparing them to known values. Often an unknown component would be put in one arm of a bridge, and then the bridge would be nulled by adjusting the other arms or changing the frequency of the voltage source (see, for example, the Wheatstone bridge). The Wien bridge is one of many common bridges. Wien's bridge is used for precision measurement of capacitance in terms of resistance and frequency. It was also used to measure audio frequencies. The Wien bridge does not require equal values of R or C. The phase of the signal at Vp relative to the signal at Vout varies from almost 90° leading at low frequency to almost 90° lagging at high frequency. At some intermediate frequency, the phase shift will be zero. At that frequency the ratio of Z1 to Z2 will be purely real (zero imaginary part). If the ratio of Rb to Rf is adjusted to the same ratio, then the bridge is balanced and the circuit can sustain oscillation. The circuit will oscillate even if Rb / Rf has a small phase shift and even if the inverting and non-inverting inputs of the amplifier have different phase shifts. There will always be a frequency at which the total phase shift of each branch of the bridge will be equal. If Rb / Rf has no phase shift and the phase shifts of the amplifier's inputs are zero then the bridge is balanced when ω² = 1/(R1R2C1C2), where ω is the radian frequency, and when Rf/Rb equals the purely real ratio Z2/Z1 at that frequency. If one chooses R1 = R2 and C1 = C2 then Rf = 2 Rb. In practice, the values of R and C will never be exactly equal, but the equations above show that for fixed values in the Z1 and Z2 impedances, the bridge will balance at some ω and some ratio of Rb/Rf. Analysis Analyzed from loop gain According to Schilling, the loop gain of the Wien bridge oscillator, under the condition that R1=R2=R and C1=C2=C, is a function of the frequency-dependent gain of the op-amp (note, the component names in Schilling have been replaced with the component names in the first figure). Schilling further says that the condition of oscillation is T = 1, which, for an ideal amplifier, is satisfied by ω = 1/(RC) together with Rf = 2Rb. Other analyses give particular attention to frequency stability and selectivity. Frequency determining network Let R = R1 = R2 and C = C1 = C2, and normalize to CR = 1. The frequency determining network then has the normalized transfer function s/(s² + 3s + 1), with a zero at 0 and poles at s = (−3 ± √5)/2, or −2.6180 and −0.38197. Amplitude stabilization The key to the Wien bridge oscillator's low distortion oscillation is an amplitude stabilization method that does not use clipping. The idea of using a lamp in a bridge configuration for amplitude stabilization was published by Meacham in 1938. The amplitude of electronic oscillators tends to increase until clipping or other gain limitation is reached. This leads to high harmonic distortion, which is often undesirable. Hewlett used an incandescent bulb as a power detector, low pass filter and gain control element in the oscillator feedback path to control the output amplitude. 
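The numbers in the analysis above are straightforward to reproduce. The following sketch is only an illustration, with assumed example values for R and C that are not taken from any circuit described in this article; it evaluates the Wien network's feedback fraction and confirms the peak of 1/3 with zero phase shift at f = 1/(2πRC), as well as the pole locations of the normalized network.

```python
import numpy as np

# Wien frequency-selective network: a series R-C arm feeding a parallel R-C arm.
# beta(w) = Z_par / (Z_ser + Z_par) is the fraction of the output fed back to
# the amplifier's non-inverting input.
R = 10e3        # ohms, assumed example value
C = 16e-9       # farads, assumed example value

f0 = 1.0 / (2.0 * np.pi * R * C)           # predicted oscillation frequency
w = 2.0 * np.pi * np.logspace(2, 5, 4001)  # sweep 100 Hz .. 100 kHz

Z_ser = R + 1.0 / (1j * w * C)             # series arm
Z_par = R / (1.0 + 1j * w * R * C)         # parallel arm
beta = Z_par / (Z_ser + Z_par)

k = np.argmax(np.abs(beta))
print(f"f0 from 1/(2*pi*R*C)  : {f0:.1f} Hz")
print(f"f at |beta| maximum   : {w[k] / (2.0 * np.pi):.1f} Hz")
print(f"|beta| at the maximum : {abs(beta[k]):.4f}   (expected 1/3)")
print(f"phase at the maximum  : {np.degrees(np.angle(beta[k])):.2f} deg (expected 0)")

# Poles of the normalized network s / (s^2 + 3 s + 1)
print("normalized poles      :", np.roots([1.0, 3.0, 1.0]))  # ~ -2.618 and -0.382
```

The 1/3 peak is why the amplifier must supply a gain of 3 (Rf = 2Rb); setting the gain just above 3 lets oscillation build at this frequency, and the lamp described next pulls it back to exactly 3.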
The resistance of the light bulb filament (see resistivity article) increases as its temperature increases. The temperature of the filament depends on the power dissipated in the filament and some other factors. If the oscillator's period (an inverse of its frequency) is significantly shorter than the thermal time constant of the filament, then the temperature of the filament will be substantially constant over a cycle. The filament resistance will then determine the amplitude of the output signal. If the amplitude increases, the filament heats up and its resistance increases. The circuit is designed so that a larger filament resistance reduces loop gain, which in turn will reduce the output amplitude. The result is a negative feedback system that stabilizes the output amplitude to a constant value. With this form of amplitude control, the oscillator operates as a near ideal linear system and provides a very low distortion output signal. Oscillators that use limiting for amplitude control often have significant harmonic distortion. At low frequencies, as the time period of the Wien bridge oscillator approaches the thermal time constant of the incandescent bulb, the circuit operation becomes more nonlinear, and the output distortion rises significantly. Light bulbs have their disadvantages when used as gain control elements in Wien bridge oscillators, most notably a very high sensitivity to vibration due to the bulb's microphonic nature amplitude modulating the oscillator output, a limitation in high frequency response due to the inductive nature of the coiled filament, and current requirements that exceed the capability of many op-amps. Modern Wien bridge oscillators have used other nonlinear elements, such as diodes, thermistors, field effect transistors, or photocells for amplitude stabilization in place of light bulbs. Distortion as low as 0.0003% (3 ppm) can be achieved with modern components unavailable to Hewlett. Wien bridge oscillators that use thermistors exhibit extreme sensitivity to ambient temperature due to the low operating temperature of a thermistor compared to an incandescent lamp. Automatic gain control dynamics Small perturbations in the value of Rb cause the dominant poles to move back and forth across the jω (imaginary) axis. If the poles move into the left half plane, the oscillation dies out exponentially to zero. If the poles move into the right half plane, the oscillation grows exponentially until something limits it. If the perturbation is very small, the magnitude of the equivalent Q is very large so that the amplitude changes slowly. If the perturbations are small and reverse after a short time, the envelope follows a ramp. The envelope is approximately the integral of the perturbation. The perturbation to envelope transfer function rolls off at 6 dB/octave and causes −90° of phase shift. The light bulb has thermal inertia so that its power to resistance transfer function exhibits a single pole low pass filter. The envelope transfer function and the bulb transfer function are effectively in cascade, so that the control loop has effectively a low pass pole and a pole at zero and a net phase shift of almost −180°. This would cause poor transient response in the control loop due to low phase margin. The output might exhibit squegging. Bernard M. 
Oliver showed that slight compression of the gain by the amplifier mitigates the envelope transfer function so that most oscillators show good transient response, except in the rare case where non-linearity in the vacuum tubes canceled each other producing an unusually linear amplifier. References Other references ; Speaks of Terman's inspiration by Black and his late 1930s graduate seminar about negative feedback and fixed-frequency audio oscillators; Hewlett finishing masters and looking for engineers thesis; hiring San Francisco patent attorney in 1939. . Frequency and amplitude stabilization of an oscillator with no tube overloading. Uses tungsten lamp to balance bridge. . Shows that amplifier non-linearity is needed for fast amplitude settling of the Wien bridge oscillator. ; Wien, briged-T, twin-T oscillators ; Hewlett graduated from Stanford and spent a year doing research; then he goes to MIT to get his masters. Hewlett joins the army, but is discharged in 1936. (diode limiting) External links Model 200A Audio Oscillator, 1939, HP Virtual Museum. Wien Bridge Oscillator, including SPICE simulation. The "Wien bridge oscillator" in the simulation is not a low distortion design with amplitude stabilization; it is a more conventional oscillator with a diode limiter. Online Simulator of Wien Bridge Oscillator – Gives online simulation of Wien bridge oscillator. Bill Hewlett and his Magic Lamp, Clifton Laboratories (Acks Edward L. Ginzton at end of paper.) (Presented 16 June 1938 at 13th Annual Convention, Manuscript received 22 November 1938, abridged 1 August 1939); Meacham presented at 13th Annual Convention on 16 June 1938, too. See BSTJ. Also presented at Pacific Coast Convention, Portland, OR, 11 August 1938. , §Resistance-stabilized Oscillators Employing Negative Feedback, state "For a discussion of ordinary resistance-stabilized oscillators see pages 283–289 of F. E. Terman, 'Measurements in Radio Engineering,' McGraw-Hill Book Company, New York, N.Y., (1935)." (diode limiting) state, "This oscillator [Hewlett's] somewhat resembles that described by H. H. Scott, in the paper 'A new type of selective circuit and some applications,' Proc. I.R.E., vol 26, pp. 226–236; February, (1938), although differing in a number of respects, such as being provided with amplitude control and having the frequency adjusted by variable condensers rather than variable resistors. The latter feature makes the impedance from a to ground constant as the capacitance is varied to change the frequency, and so greatly simplifies the design of the amplifier circuits." http://www.radiomuseum.org/forum/single_pentode_wien_bridge_oscillator.html http://www.americanradiohistory.com/Archive-Bell-Laboratories-Record/40s/Bell-Laboratories-Record-1945-12.pdf has Black bio; "Stabilized feedback amplifier" won prize in 1934. Later (31 December 1940) Meacham patent about multi-frequency bridge-stabilized oscillators using series resonant circuits. Electronic oscillators Analog circuits Electronic test equipment
Wien bridge oscillator
[ "Technology", "Engineering" ]
4,494
[ "Analog circuits", "Electronic engineering", "Electronic test equipment", "Measuring instruments" ]
1,531,306
https://en.wikipedia.org/wiki/Rhizoid
Rhizoids are protuberances that extend from the lower epidermal cells of bryophytes and algae. They are similar in structure and function to the root hairs of vascular land plants. Similar structures are formed by some fungi. Rhizoids may be unicellular or multicellular. Evolutionary development Plants originated in aquatic environments and gradually migrated to land during their long course of evolution. In water or near it, plants could absorb water from their surroundings, with no need for any special absorbing organ or tissue. Additionally, in the primitive states of plant development, tissue differentiation and division of labor were minimal, thus specialized water-absorbing tissue was not required. The development of specialized tissues to absorb water efficiently and anchor the plant body to the ground enabled the spread of plants onto land. Description Rhizoids absorb water mainly by capillary action in which water moves up between threads of rhizoids; this is in contrast to roots in which water moves up through a single root. However, some species of bryophytes do have the ability to take up water inside their rhizoids. Land plants In land plants, rhizoids are trichomes that anchor the plant to the ground. In the liverworts, they are absent or unicellular, but they are multicellular in mosses. In vascular plants, they are often called root hairs and may be unicellular or multicellular. Algae In certain algae, there is an extensive rhizoidal system that allows the alga to anchor itself to a sandy substrate from which it can absorb nutrients. Microscopic free-floating species, however, do not have rhizoids at all. Fungi In fungi, rhizoids are small branching hyphae that grow downwards from the stolons and anchor the fungus to the substrate, where they release digestive enzymes and absorb digested organic material. See also Rhizine, the equivalent structure in lichens References Further reading External links Fungal morphology and anatomy Bryophytes Plant anatomy he:מורפולוגיה של הצמח - מונחים#איברים בצמחים פרימיטיביים
Rhizoid
[ "Biology" ]
468
[ "Bryophytes", "Plants" ]
1,531,369
https://en.wikipedia.org/wiki/Phase%20problem
In physics, the phase problem is the problem of loss of information concerning the phase that can occur when making a physical measurement. The name comes from the field of X-ray crystallography, where the phase problem has to be solved for the determination of a structure from diffraction data. The phase problem is also met in the fields of imaging and signal processing. Various approaches of phase retrieval have been developed over the years. Overview Light detectors, such as photographic plates or CCDs, measure only the intensity of the light that hits them. This measurement is incomplete (even when neglecting other degrees of freedom such as polarization and angle of incidence) because a light wave has not only an amplitude (related to the intensity), but also a phase (related to the direction), and polarization which are systematically lost in a measurement. In diffraction or microscopy experiments, the phase part of the wave often contains valuable information on the studied specimen. The phase problem constitutes a fundamental limitation ultimately related to the nature of measurement in quantum mechanics. In X-ray crystallography, the diffraction data when properly assembled gives the amplitude of the 3D Fourier transform of the molecule's electron density in the unit cell. If the phases are known, the electron density can be simply obtained by Fourier synthesis. This Fourier transform relation also holds for two-dimensional far-field diffraction patterns (also called Fraunhofer diffraction) giving rise to a similar type of phase problem. Phase retrieval There are several ways to retrieve the lost phases. The phase problem must be solved in x-ray crystallography, neutron crystallography, and electron crystallography. Not all of the methods of phase retrieval work with every wavelength (x-ray, neutron, and electron) used in crystallography. Direct (ab initio) methods If the crystal diffracts to high resolution (<1.2 Å), the initial phases can be estimated using direct methods. Direct methods can be used in x-ray crystallography, neutron crystallography, and electron crystallography. A number of initial phases are tested and selected by this method. The other is the Patterson method, which directly determines the positions of heavy atoms. The Patterson function gives a large value in a position which corresponds to interatomic vectors. This method can be applied only when the crystal contains heavy atoms or when a significant fraction of the structure is already known. For molecules whose crystals provide reflections in the sub-Ångström range, it is possible to determine phases by brute force methods, testing a series of phase values until spherical structures are observed in the resultant electron density map. This works because atoms have a characteristic structure when viewed in the sub-Ångström range. The technique is limited by processing power and data quality. For practical purposes, it is limited to "small molecules" and peptides because they consistently provide high-quality diffraction with very few reflections. Molecular replacement (MR) Phases can also be inferred by using a process called molecular replacement, where a similar molecule's already-known phases are grafted onto the intensities of the molecule at hand, which are observationally determined. 
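The consequence of losing the phases, and what the Patterson function recovers from intensities alone, can be made concrete with a short numerical sketch. This is only a toy illustration; the grid size, atom positions and weights below are arbitrary assumptions, not data from any real structure.

```python
import numpy as np

# One-dimensional toy "crystal": three point atoms on a 64-sample grid.
n = 64
density = np.zeros(n)
for position, weight in [(5, 1.0), (20, 2.0), (33, 1.5)]:
    density[position] = weight

F = np.fft.fft(density)        # complex structure factors (amplitude and phase)
intensity = np.abs(F) ** 2     # what the diffraction experiment actually measures

# The inverse transform of the intensities alone is the Patterson function, i.e.
# the (cyclic) autocorrelation of the density.  Its peaks sit at the interatomic
# vectors 13, 15, 28 (and their negatives 51, 49, 36 modulo 64), not at the
# atomic positions 5, 20, 33.
patterson = np.real(np.fft.ifft(intensity))
print("largest Patterson peaks:", np.argsort(patterson)[::-1][:7])

# With the true phases the density is recovered exactly; throwing the phases
# away (keeping only the amplitudes) gives a quite different, wrong "density".
with_phases = np.real(np.fft.ifft(F))
without_phases = np.real(np.fft.ifft(np.abs(F)))
print("peaks with true phases :", np.argsort(with_phases)[::-1][:3])
print("peaks with phases lost :", np.argsort(without_phases)[::-1][:3])
```

Recovering atomic positions from measured intensities therefore requires either estimating the phases directly, exploiting the interatomic-vector peaks as the Patterson method does, or importing phases from a related structure as in molecular replacement.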
These phases can be obtained experimentally from a homologous molecule or, if the phases are known for the same molecule but in a different crystal, by simulating the molecule's packing in the crystal and obtaining theoretical phases. Generally, these techniques are less desirable since they can severely bias the solution of the structure. They are useful, however, for ligand binding studies, or between molecules with small differences and relatively rigid structures (for example derivatizing a small molecule).

Isomorphous replacement
Multiple isomorphous replacement (MIR)
Multiple isomorphous replacement (MIR), where heavy atoms are inserted into the structure (usually by synthesizing proteins with analogs or by soaking).

Anomalous scattering
Single-wavelength anomalous dispersion (SAD).

Multi-wavelength anomalous dispersion (MAD)
A powerful solution is the multi-wavelength anomalous dispersion (MAD) method. In this technique, atoms' inner electrons absorb X-rays of particular wavelengths, and reemit the X-rays after a delay, inducing a phase shift in all of the reflections, known as the anomalous dispersion effect. Analysis of this phase shift (which may be different for individual reflections) results in a solution for the phases. Since X-ray fluorescence techniques (like this one) require excitation at very specific wavelengths, it is necessary to use synchrotron radiation when using the MAD method.

Phase improvement
Refining initial phases
In many cases, an initial set of phases is determined, and the electron density map for the diffraction pattern is calculated. Then the map is used to determine portions of the structure, which are used to simulate a new set of phases; this process is known as refinement. These phases are reapplied to the original amplitudes, and an improved electron density map is derived, from which the structure is corrected. This process is repeated until an error term (usually an R-factor such as R_free) has stabilized to a satisfactory value. Because of the phenomenon of phase bias, it is possible for an incorrect initial assignment to propagate through successive refinements, so satisfactory conditions for a structure assignment are still a matter of debate. Indeed, some spectacular incorrect assignments have been reported, including a protein where the entire sequence was threaded backwards.

Density modification (phase improvement)
Solvent flattening
Histogram matching
Non-crystallographic symmetry averaging
Partial structure
Phase extension

See also
Coherent diffraction imaging
Ptychography
Phase retrieval

External links
An example of phase bias
An appropriate use of 'molecular replacement'
Learning crystallography

References

Crystallography
Inverse problems
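The iterative loop described under "Refining initial phases" (phases to map, density modification, new phases reapplied to the measured amplitudes) can be illustrated with a toy calculation. The following sketch is not the crystallographic procedure itself: it fabricates a synthetic 2D "density", simulates measured amplitudes, and uses simple positivity as a stand-in for density modification, in the spirit of Gerchberg-Saxton/Fienup error reduction. All names and parameters are illustrative assumptions.

```python
# Toy illustration of iterative phase improvement (error-reduction style).
# This is a sketch only: real crystallographic phasing uses experimental
# phase information and much stronger density-modification constraints,
# and this loop is not guaranteed to converge to the true map.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "electron density": a few positive blobs on a 64x64 grid.
n = 64
true_density = np.zeros((n, n))
y, x = np.ogrid[:n, :n]
for _ in range(5):
    cx, cy = rng.integers(8, n - 8, size=2)
    true_density += np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 6.0)

# The "experiment" measures only the amplitudes |F|; the phases are lost.
measured_amplitudes = np.abs(np.fft.fft2(true_density))

# Start from random phases and iterate the refine/modify cycle.
phases = np.exp(2j * np.pi * rng.random((n, n)))
density = np.real(np.fft.ifft2(measured_amplitudes * phases))

for _ in range(200):
    # "Density modification": enforce positivity (a crude stand-in for
    # solvent flattening and related constraints).
    density = np.clip(density, 0.0, None)
    # Recompute phases from the modified map ...
    phases = np.exp(1j * np.angle(np.fft.fft2(density)))
    # ... and reapply them to the measured amplitudes.
    density = np.real(np.fft.ifft2(measured_amplitudes * phases))

# Crude figure of merit against the (normally unknown) true map; because of
# origin and inversion ambiguities this error can stay large even when the
# recovered map looks sensible.
error = np.linalg.norm(density - true_density) / np.linalg.norm(true_density)
print(f"relative map error after refinement: {error:.3f}")
```

The sketch mirrors the cycle described above (reapply the measured amplitudes, modify the map, recompute phases), including the way an incorrect starting assignment can survive many cycles, which is the phase-bias problem mentioned in the article.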
Phase problem
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,189
[ "Applied mathematics", "Materials science", "Crystallography", "Condensed matter physics", "Inverse problems" ]
1,531,404
https://en.wikipedia.org/wiki/Split%20exact%20sequence
In mathematics, a split exact sequence is a short exact sequence in which the middle term is built out of the two outer terms in the simplest possible way.

Equivalent characterizations
A short exact sequence of abelian groups or of modules over a fixed ring, or more generally of objects in an abelian category,

0 → A → B → C → 0 (with maps a : A → B and b : B → C),

is called split exact if it is isomorphic to the exact sequence where the middle term is the direct sum of the outer ones:

0 → A → A ⊕ C → C → 0.

The requirement that the sequence is isomorphic means that there is an isomorphism f : B → A ⊕ C such that the composite f ∘ a is the natural inclusion A → A ⊕ C and such that the composite of f with the natural projection A ⊕ C → C equals b. This can be summarized by a commutative diagram. The splitting lemma provides further equivalent characterizations of split exact sequences.

Examples
A trivial example of a split short exact sequence is

0 → M → M ⊕ N → N → 0,

where M and N are R-modules, the first map is the canonical injection and the second is the canonical projection.

Any short exact sequence of vector spaces is split exact. This is a rephrasing of the fact that any set of linearly independent vectors in a vector space can be extended to a basis.

The exact sequence 0 → Z → Z → Z/2Z → 0 (where the first map is multiplication by 2) is not split exact.

Related notions
Pure exact sequences can be characterized as the filtered colimits of split exact sequences.

References

Sources

Abstract algebra
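The displayed formulas that the text above refers to were evidently lost in extraction. The following LaTeX fragment is a reconstruction using the standard notation of the splitting lemma; the letters A, B, C and the map names are assumptions of this reconstruction rather than text recovered from the article.

```latex
% A short exact sequence
\[
0 \longrightarrow A \xrightarrow{\;a\;} B \xrightarrow{\;b\;} C \longrightarrow 0
\]
% is split exact when it is isomorphic to the sequence with the direct sum in the middle:
\[
0 \longrightarrow A \xrightarrow{\;i\;} A \oplus C \xrightarrow{\;p\;} C \longrightarrow 0 ,
\]
% i.e. there is an isomorphism f : B \to A \oplus C with f \circ a = i and p \circ f = b.
%
% The non-split example mentioned above: multiplication by 2 on the integers,
\[
0 \longrightarrow \mathbb{Z} \xrightarrow{\;\times 2\;} \mathbb{Z} \longrightarrow \mathbb{Z}/2\mathbb{Z} \longrightarrow 0 ,
\]
% which cannot split because \mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z} has an element
% of order 2 while \mathbb{Z} is torsion-free.
```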
Split exact sequence
[ "Mathematics" ]
252
[ "Abstract algebra", "Algebra" ]
1,531,409
https://en.wikipedia.org/wiki/Homeomorphism%20group
In mathematics, particularly topology, the homeomorphism group of a topological space is the group consisting of all homeomorphisms from the space to itself with function composition as the group operation. Homeomorphism groups are important to the theory of topological spaces, generally exemplary of automorphism groups and topologically invariant in the group isomorphism sense.

Properties and examples
There is a natural group action of the homeomorphism group of a space on that space. Let X be a topological space and denote the homeomorphism group of X by Homeo(X). The action is defined as follows:

Homeo(X) × X → X, (f, x) ↦ f(x).

This is a group action since for all f, g ∈ Homeo(X) and x ∈ X,

(f ∘ g) · x = f(g(x)) = f · (g · x),

where · denotes the group action, and the identity element of Homeo(X) (which is the identity function on X) sends points to themselves. If this action is transitive, then the space is said to be homogeneous.

Topology
As with other sets of maps between topological spaces, the homeomorphism group can be given a topology, such as the compact-open topology. In the case of regular, locally compact spaces the group multiplication is then continuous. If the space is compact and Hausdorff, the inversion is continuous as well and Homeo(X) becomes a topological group. If X is Hausdorff, locally compact and locally connected this holds as well. Some locally compact separable metric spaces exhibit an inversion map that is not continuous, resulting in Homeo(X) not forming a topological group.

Mapping class group
In geometric topology especially, one considers the quotient group obtained by quotienting Homeo(X) out by isotopy, called the mapping class group:

MCG(X) = Homeo(X) / Homeo₀(X),

where Homeo₀(X) is the subgroup of homeomorphisms isotopic to the identity. The MCG can also be interpreted as the 0th homotopy group, MCG(X) = π₀(Homeo(X)). This yields the short exact sequence:

1 → Homeo₀(X) → Homeo(X) → MCG(X) → 1.

In some applications, particularly surfaces, the homeomorphism group is studied via this short exact sequence, and by first studying the mapping class group and group of isotopically trivial homeomorphisms, and then (at times) the extension.

See also
Mapping class group

References

Group theory
Topology
Topological groups
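As a concrete illustration of the homogeneity remark in the section above (this example is an addition, not text recovered from the article): the real line is homogeneous because translations are homeomorphisms that act transitively.

```latex
% Translations are homeomorphisms of the real line, and for any two points
% x and y the translation by y - x carries x to y, so the natural action of
% Homeo(R) on R is transitive and R is a homogeneous space.
\[
t_a \colon \mathbb{R} \to \mathbb{R}, \qquad t_a(x) = x + a, \qquad
t_{\,y-x}(x) = y \quad \text{for all } x, y \in \mathbb{R}.
\]
```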
Homeomorphism group
[ "Physics", "Mathematics" ]
392
[ "Space (mathematics)", "Group theory", "Topological spaces", "Fields of abstract algebra", "Topology", "Space", "Geometry", "Topological groups", "Spacetime" ]
1,531,487
https://en.wikipedia.org/wiki/Behavior%20modification%20facility
A behavior modification facility (or youth residential program) is a residential educational and treatment total institution enrolling adolescents who are perceived as displaying antisocial behavior, in an attempt to alter their conduct. Due to irregular licensing rules across countries and states, as well as ambiguity regarding the labels that facilities use themselves, it is hard to gauge how widespread the facilities are. The facilities are part of what has been called the Troubled Teen Industry.

Programs in the United States have been controversial due to widespread allegations of abuse and trauma imposed on the adolescents who are enrolled, as well as deceptive marketing practices aimed at parents. Critics say the facilities do not use evidence-based treatments.

Methodologies used in such programs
Practices and service quality in such programs vary greatly. The behavior modification methodologies used vary, but a combination of positive and negative reinforcement is typically used. Often these methods are delivered in a contingency management format such as a point system or level system. Such methodology has been found to be highly effective in the treatment of disruptive disorders (see the meta-analysis of Chen & Ma, 2007). Positive reinforcement mechanisms include points, rewards and signs of status, while punishment procedures may include time-outs, point deductions, reversal of status, prolonged stays at a facility, physical restraint, or even corporal punishment. Research showed that time-out length was not a factor, and suggestions were made to limit time-outs to five-minute durations. A newer approach uses graduated sanctions. Staff appear to be easily trained in behavioral interventions; such training is maintained, leads to improved consumer outcomes, and reduces turnover. More restrictive punishment procedures in general are less appealing to staff and administrators. Behavioral programs were found to lessen the need for medication. Several studies have found that gains made in residential treatment programs are maintained from 1–5 years post discharge.

Therapeutic boarding schools are boarding schools based on the therapeutic community model that offer an educational program together with specialized structure and supervision for students with emotional and behavioral problems, substance abuse problems, or learning difficulties. Some schools are accredited as residential treatment centers.

Behavioral residential treatment became so popular in the 1970s and 1980s that a journal was formed called Behavioral Residential Treatment, which later changed its name to Behavioral Interventions. The journal continues to be published today.

History
In the late 1960s, behavior modification, or the practice referred to as applied behavior analysis, began to move rapidly into residential treatment facilities. The goal was to redesign the behavioral architecture around delinquent teens to lessen the chances of recidivism and improve academics. Harold Cohen and James Filipczak (1971) published a book hailing the successes of such programs in doubling learning rates and reducing recidivism. This book even contained an introduction from the leading behaviorist at the time, B. F. Skinner, hailing the achievements.
Independent analysis of multiple sites with thousands of adolescents found behavior modification to be more effective than treatment as usual and a therapeutic milieu, and as effective as more psychologically intensive programs such as transactional analysis, with better outcomes on behavioral measures; however, these authors found that behavior modification was more prone to leading to poor relationships with the clients. Over time, interest faded in Cohen's CASE project. Other studies found that improper supervision of staff in behavior modification facilities could lead to greater use of punishment procedures.

Under the leadership of Montrose Wolf, Achievement Place, the first Teaching-Family home, became the prototype for behavioral programs. Achievement Place opened in 1967. Each home has six to eight boys in it, with two "parents" trained in behavior modification principles. The token system for the program was divided into three levels. Outcome studies have found that Achievement Place and other Teaching-Family homes reduce recidivism and increase pro-social behavior, as well as self-esteem. While initial research suggested the effects of the program only lasted for one year post discharge, recent review of the data suggests the program lasts longer in effect.

Gradually, behavior modification/applied behavior analysis within the penal system, including residential facilities for delinquent youth, lost popularity in the 1970s–1980s due to a large number of abuses (see Cautilli & Weinberg, 2007), but recent trends in the increase in U.S. crime and a recent focus on the reduction of recidivism have given such programs a second look. Indeed, because of societal needs, the number of youth residential facilities has grown over recent years, to close to 39,950 in 2000. The use of functional analysis has been shown to be teachable to staff and able to reduce the use of punishment procedures. Rutherford's (2009) review, drawing on interviews and archival materials, documents the decline of behavior-analytic treatment with criminal justice populations. These facilities are part of what has been described as the Troubled Teen Industry.

Some model programs
Studies of successful graduates have shown that boot camp programs as an alternative to prison time are particularly successful in reducing criminality, but these studies are limited to successful graduates of state correctional and prison-alternative programs managed by current and former military service members. Programs such as teaching family homes based on the Teaching-Family Model have been researched by industry-funded organizations and show positive gains. Research shows that they can be used to reduce delinquency while adolescents are in the home and post-release (see Kingsley, 2006). In general, these types of programs take a behavioral engineering approach to reducing problem behavior and building skills.

In general, behavior modification programs, including military-style boot camps that follow a modern curriculum, whether used in facilities or in the natural environment, have a large effect size and lead to an estimated 15 to 40% reduction in recidivism. While this reduction appears to be modest, it holds potential in the U.S. given the large number of people in the prison system. Increasingly, behavior modification models based on the principles of applied behavior analysis, cognitive behavioral therapy, and dialectical behavior therapy are being developed to model and reduce delinquency and are being integrated into programs of all types.
Controversy
This industry is not without controversy, however. The U.S. Surgeon General (1999) discussed the need to clarify admission criteria to residential treatment programs. Included in the same report was a call for more up-to-date research, as most of the residential research had been completed in the 1960s and 1970s. Disability rights organizations, such as the Bazelon Center for Mental Health Law, oppose placement in such programs and call into question the appropriateness and efficacy of such group placements, the failure of such programs to address problems in the child's home and community environment, the limited or nonexistent mental health services offered, and substandard educational programs. Bazelon promotes community-based services on the basis that it considers them more effective and less costly than residential placement. While behavior modification programs can be delivered as easily in residential programs as in community-based programs, community-based programs overall continue to lack empirical support, especially with respect to long-term outcomes for severe cases, with the notable exception of Hinckley and Ellis (1985). Even with this said, in 1999 the Surgeon General clearly stated that "...it is premature to endorse the effectiveness of residential treatment for adolescents."

From late 2007 through 2008, a broad coalition of grassroots efforts and prominent medical and psychological organizations, including members of the Alliance for the Safe, Therapeutic and Appropriate use of Residential Treatment (ASTART) and the Community Alliance for the Ethical Treatment of Youth (CAFETY), provided testimony and support that led to the creation of the Stop Child Abuse in Residential Programs for Teens Act of 2008 by the United States Congress Committee on Education and Labor. Jon Martin-Crawford and Kathryn Whitehead of CAFETY testified at a hearing of the United States Congress Committee on Education and Labor on April 24, 2008, where they described abusive practices they had experienced at the Family Foundation School and Mission Mountain School, both therapeutic boarding schools.

One recent acknowledgement has been that long-term care does not equate with better outcomes. To reduce the tendency for abuse, a strong push has occurred to certify or license behavior modifiers, or to have such practices limited to licensed psychologists, in particular psychologists with behavioral training; the American Psychological Association offers a diplomate (a post-Ph.D., licensed certification) in behavioral psychology. Often the practice of behavior modification in facilities comes into question (see recent interest in the Judge Rotenberg Educational Center, Aspen Education Group, and the World Wide Association of Specialty Programs and Schools). Often these types of restrictive issues are discussed as part of ethical and legal standards (see Professional practice of behavior analysis). Recent research has identified some best practices for use in such facilities. In general, policies in such facilities require the presence of a treatment team to ensure that abuses do not occur, especially if facilities are attempting to use punishment programs.

Regulations
In the U.S., residential treatment programs are all monitored at the state level and many are JCAHO-accredited. States vary in requirements to open such centers.
Due to the absence of regulation of these programs by the federal government, and because many are not subject to state licensing or monitoring, the Federal Trade Commission has issued a guide for parents considering such placement. Due to irregular licensing practices and differences in the kinds of labels that facilities use themselves, it is unclear how many facilities exist in the United States.

Organizations
Residential therapists who are behavior modifiers should join professional organizations and be professionally affiliated. Many organizations exist for behavior therapists around the world. The World Association for Behavior Analysis offers a certification in behavior therapy. In the United States, the American Psychological Association's Division 25 is the division for behavior analysis. The Association for Contextual Behavioral Science (ACBS) is another professional organization; ACBS is home to many clinicians with specific interest in third-generation behavior therapy. The Association for Behavioral and Cognitive Therapies (formerly the Association for the Advancement of Behavior Therapy) is for those with a more cognitive orientation. Internationally, most behavior therapists find a core intellectual home in the International Association for Behavior Analysis (ABA:I).

See also
Gooning
Therapeutic boarding school
Teaching-Family Model
Residential treatment center
Troubled teen industry

References

External links
Considering a Private Residential Treatment Program for a Troubled Teen? Questions for Parents and Guardians to Ask, U.S. Federal Trade Commission
US State Dept. page on offshore BMFs
TeenLiberty.org, a site which cites many complaints against BMFs
"Exploitation in the Name of 'Specialty Schooling'" by Allison Pinto, Ph.D., Robert M. Friedman, Ph.D. and Monica Epstein, Ph.D., Louis de la Parte Florida Mental Health Institute, University of South Florida, American Psychological Association: Children, Youth and Families News, Summer 2005, retrieved June 28, 2006
Bazelon Center for Mental Health Law
Alliance for the Safe, Therapeutic and Appropriate use of Residential Treatment
Community Alliance for the Ethical Treatment of Youth
National Youth Rights Association forum on BMFs
The Parent Help Center
Child discipline boot camps for troubled youth - Summer Success Behavior Camp, Weekend Success Camp, and Online Empowered Parent Conference.

Behavior modification
Behaviorism
Youth rights
Behavior modification facility
[ "Biology" ]
2,213
[ "Behavior modification", "Human behavior", "Behavior", "Behaviorism" ]
1,531,568
https://en.wikipedia.org/wiki/General-purpose%20input/output
A general-purpose input/output (GPIO) is an uncommitted digital signal pin on an integrated circuit or electronic circuit board (e.g. on MCUs/MPUs) which may be used as an input or output, or both, and is controllable by software.

GPIOs have no predefined purpose and are unused by default. If used, the purpose and behavior of a GPIO is defined and implemented by the designer of higher assembly-level circuitry: the circuit board designer in the case of integrated circuit GPIOs, or the system integrator in the case of board-level GPIOs.

Integrated circuit GPIOs
Integrated circuit (IC) GPIOs are implemented in a variety of ways. Some ICs provide GPIOs as a primary function whereas others include GPIOs as a convenient "accessory" to some other primary function. Examples of the former include the Intel 8255, which interfaces 24 GPIOs to a parallel communication bus, and various GPIO expander ICs, which interface GPIOs to serial communication buses such as I²C and SMBus. An example of the latter is the Realtek ALC260 IC, which provides eight GPIOs along with its main function of audio codec.

Microcontroller ICs usually include GPIOs. Depending on the application, a microcontroller's GPIOs may comprise its primary interface to external circuitry or they may be just one type of I/O used among several, such as analog signal I/O, counter/timer, and serial communication.

In some ICs, particularly microcontrollers, a GPIO pin may be capable of other functions than GPIO. Often in such cases it is necessary to configure the pin to operate as a GPIO (vis-à-vis its other functions) in addition to configuring the GPIO's behavior. Some microcontroller devices (e.g., the Microchip dsPIC33 family) incorporate internal signal routing circuitry that allows GPIOs to be programmatically mapped to device pins. Field-programmable gate arrays (FPGAs) extend this ability by allowing GPIO pin mapping, instantiation and architecture to be programmatically controlled.

Board-level GPIOs
Many circuit boards expose board-level GPIOs to external circuitry through integrated electrical connectors. Usually, each such GPIO is accessible via a dedicated connector pin. Like IC-based GPIOs, some boards merely include GPIOs as a convenient, auxiliary resource that augments the board's primary function, whereas in other boards the GPIOs are the central, primary function of the board. Some boards, which are usually classified as multi-function I/O boards, are a combination of both; such boards provide GPIOs along with other types of general-purpose I/O. GPIOs are also found on embedded controller boards and single-board computers such as the Arduino, BeagleBone, and Raspberry Pi.

Board-level GPIOs are often given abilities which IC-based GPIOs usually lack. For example, Schmitt-trigger inputs, high-current output drivers, optical isolators, or combinations of these, may be used to buffer and condition the GPIO signals and to protect board circuitry. Also, higher-level functions are sometimes implemented, such as input debounce, input signal edge detection, and pulse-width modulation (PWM) output.

Usage
GPIOs are used in a diverse variety of applications, limited only by the electrical and timing specifications of the GPIO interface and the ability of software to interact with GPIOs in a sufficiently timely manner. GPIOs usually employ standard logic levels and cannot supply significant current to output loads.
When followed by an appropriate high-current output buffer (or mechanical or solid-state relay), a GPIO may be used to control high-power devices such as lights, solenoids, heaters, and motors (e.g., fans and blowers). Similarly, an input buffer, relay or opto-isolator is often used to translate an otherwise incompatible signal (e.g., high voltage) to the logic levels required by a GPIO.

Integrated circuit GPIOs are commonly used to control or monitor other circuitry (including other ICs) on a board. Examples of this include enabling and disabling the operation of (or power to) other circuitry, reading the states of on-board switches and configuration shunts, and driving light-emitting diode (LED) status indicators. In the latter case, a GPIO can, in many cases, supply enough output current to directly power an LED without using an intermediate buffer.

Multiple GPIOs are sometimes used together as a bit banging communication interface. For example, two GPIOs may be used to implement a serial communication bus such as Inter-Integrated Circuit (I²C), and four GPIOs can be used to implement a Serial Peripheral Interface (SPI) bus; these are usually used to facilitate serial communication with ICs and other devices which have compatible serial interfaces, such as sensors (e.g., temperature sensors, pressure sensors, accelerometers) and motor controllers. Taken to the extreme, this method may be used to implement an entire parallel bus, thus allowing communication with bus-oriented ICs or circuit boards.

Although GPIOs are fundamentally digital in nature, they are often used to control analog processes. For example, a GPIO may be used to control motor speed, light intensity, or temperature. Usually, this is done via PWM, in which the duty cycle of the GPIO output signal determines the effective magnitude of the process control signal. For example, when controlling light intensity, the light may be dimmed by reducing the GPIO duty cycle. Some analog processes require an analog control voltage. In such cases, it may be feasible to connect a GPIO, which is operated as a PWM output, to an RC filter to create a simple, low-cost digital-to-analog converter.

Implementation
GPIO interfaces vary widely. In some cases, they are simple: a group of pins that can switch as a group to either input or output. In others, each pin can be set up to accept or source different logic voltages, with configurable drive strengths and pull-ups/pull-downs. Input and output voltages are usually, but not always, limited to the supply voltage of the device with the GPIOs, which may be damaged by greater voltages. A GPIO pin's state may be exposed to the software developer through one of a number of different interfaces, such as a memory-mapped I/O peripheral, or through dedicated I/O port instructions. Some GPIOs have 5 V tolerant inputs: even when the device has a low supply voltage (such as 2 V), the device can accept 5 V without damage. A GPIO port is a group of GPIO pins (often 8 pins, but it may be fewer) arranged in a group and controlled as a group.
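As a concrete illustration of the PWM-based control just described, the sketch below ramps the duty cycle of one board-level GPIO, which dims an LED directly or, through an RC filter, approximates an analog output voltage. It assumes a Raspberry Pi with the RPi.GPIO library; the BCM pin number and the 1 kHz carrier are arbitrary choices for the example.

```python
# Software PWM on a single GPIO pin (Raspberry Pi with RPi.GPIO assumed).
import time

import RPi.GPIO as GPIO

LED_PIN = 18  # BCM numbering; illustrative choice, wired through a suitable resistor/driver

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

pwm = GPIO.PWM(LED_PIN, 1000)  # 1 kHz PWM carrier
pwm.start(0)                   # begin at 0 % duty cycle (off)

try:
    # Ramp the duty cycle up and down: with an LED this brightens and dims,
    # with an RC filter on the pin it approximates a slowly varying analog voltage.
    while True:
        for duty in list(range(0, 101, 5)) + list(range(100, -1, -5)):
            pwm.ChangeDutyCycle(duty)
            time.sleep(0.05)
except KeyboardInterrupt:
    pass
finally:
    pwm.stop()
    GPIO.cleanup()
```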
GPIO abilities may include:
GPIO pins can be configured to be input or output
GPIO pins can be enabled/disabled
Input values are readable (usually high or low)
Output values are writable/readable
Input values can often be used as IRQs (usually for wakeup events)

See also
Programmed input/output
SGPIO
Special input/output

References

External links
GPIO framework for FreeBSD
FreeBSD gpio(3) API manual
FreeBSD gpioctl(8) manual
FreeBSD gpio(4) manual
ALSA Development List
Linux Kernel Doc on GPIO
LinuxTV GPIO Pins Info

Computer buses
Integrated circuits
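The pin-level abilities listed above (configuring direction, reading and writing values) map directly onto the legacy Linux sysfs GPIO interface. A minimal sketch follows, assuming a Linux system that still exposes /sys/class/gpio and a pin number chosen purely for illustration; newer kernels favor the libgpiod character-device interface instead.

```python
# Read a GPIO input through the legacy Linux sysfs interface (/sys/class/gpio).
import os
import time

GPIO_PIN = "17"  # illustrative pin number
GPIO_ROOT = "/sys/class/gpio"
PIN_DIR = os.path.join(GPIO_ROOT, "gpio" + GPIO_PIN)

# Export the pin if it is not already visible in sysfs.
if not os.path.isdir(PIN_DIR):
    with open(os.path.join(GPIO_ROOT, "export"), "w") as f:
        f.write(GPIO_PIN)

# Configure the pin as an input ...
with open(os.path.join(PIN_DIR, "direction"), "w") as f:
    f.write("in")

# ... and poll its value a few times (the file contains "0" or "1").
for _ in range(5):
    with open(os.path.join(PIN_DIR, "value")) as f:
        print("pin", GPIO_PIN, "reads", f.read().strip())
    time.sleep(0.5)
```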
General-purpose input/output
[ "Technology", "Engineering" ]
1,580
[ "Computer engineering", "Integrated circuits" ]
1,531,588
https://en.wikipedia.org/wiki/Amifostine
Amifostine (ethiofos) is a cytoprotective adjuvant used in cancer chemotherapy and radiotherapy involving DNA-binding chemotherapeutic agents. It is marketed by Clinigen Group under the trade name Ethyol.

Indications
Amifostine is used therapeutically to reduce the incidence of neutropenia-related fever and infection induced by DNA-binding chemotherapeutic agents including alkylating agents (e.g. cyclophosphamide) and platinum-containing agents (e.g. cisplatin). It is also used to decrease the cumulative nephrotoxicity associated with platinum-containing agents. Amifostine is also indicated to reduce the incidence of xerostomia in patients undergoing radiotherapy for head and neck cancer.

Amifostine was originally indicated to reduce the cumulative renal toxicity from cisplatin in non-small cell lung cancer. However, while nephroprotection was observed, the probability that amifostine could protect tumors could not be excluded. Additional data have shown that amifostine-mediated tumor protection, in any clinical scenario, is unlikely.

Pharmacokinetics
Amifostine is an organic thiophosphate prodrug which is hydrolysed in vivo by alkaline phosphatase to the active cytoprotective thiol metabolite, WR-1065. The selective protection of non-malignant tissues is believed to be due to higher alkaline phosphatase activity, higher pH, and vascular permeation of normal tissues. Amifostine can be administered intravenously or subcutaneously after reconstitution with normal saline. Infusions lasting less than 15 minutes decrease the risk of adverse effects. The patient should be well-hydrated prior to administration.

Mechanism of action
Inside cells, amifostine detoxifies reactive metabolites of platinum and alkylating agents, as well as scavenges free radicals. Other possible effects include accelerated DNA repair, induction of cellular hypoxia, inhibition of apoptosis, alteration of gene expression and modification of enzyme activity. Amifostine is believed to radioprotect normal tissue via Warburg-type effects.

Adverse effects
Common side effects of amifostine include hypocalcemia, diarrhea, nausea, vomiting, sneezing, somnolence, and hiccups. Serious side effects include: hypotension (found in 62% of patients), erythema multiforme, Stevens–Johnson syndrome and toxic epidermal necrolysis, immune hypersensitivity syndrome, erythroderma, anaphylaxis, and loss of consciousness (rare).

Contraindications
Contraindications to receiving amifostine include hypersensitivity to amifostine and aminothiol compounds like WR-1065. Ethyol contains mannitol.

References

Chemotherapeutic adjuvants
Prodrugs
Amines
Amifostine
[ "Chemistry" ]
641
[ "Functional groups", "Prodrugs", "Amines", "Chemicals in medicine", "Bases (chemistry)" ]
1,531,615
https://en.wikipedia.org/wiki/Combination%20therapy
Combination therapy or polytherapy is therapy that uses more than one medication or modality. Typically, the term refers to using multiple therapies to treat a single disease, and often all the therapies are pharmaceutical (although it can also involve non-medical therapy, such as the combination of medications and talk therapy to treat depression). 'Pharmaceutical' combination therapy may be achieved by prescribing/administering separate drugs, or, where available, dosage forms that contain more than one active ingredient (such as fixed-dose combinations).

Polypharmacy is a related term, referring to the use of multiple medications (without regard to whether they are for the same or separate conditions/diseases). Sometimes "polymedicine" is used to refer to pharmaceutical combination therapy. Most of these kinds of terms lack a universally consistent definition, so caution and clarification are often advisable.

Uses
Conditions treated with combination therapy include tuberculosis, leprosy, cancer, malaria, and HIV/AIDS. One major benefit of combination therapies is that they reduce the development of drug resistance, since a pathogen or tumor is less likely to have resistance to multiple drugs simultaneously. Artemisinin-based monotherapies for malaria are explicitly discouraged to avoid the problem of developing resistance to the newer treatment. Combination therapy may seem costlier than monotherapy in the short term, but when it is used appropriately, it causes significant savings: lower treatment failure rates, lower case-fatality ratios, fewer side-effects than monotherapy, slower development of resistance, and thus less money needed for the development of new drugs.

In oncology
Combination therapy has gained momentum in oncology in recent years, with various studies demonstrating higher response rates with combinations of drugs compared to monotherapies, and the FDA recently approving therapeutic combination regimens that demonstrated superior safety and efficacy to monotherapies. In a recent study about solid cancers, Martin Nowak, Bert Vogelstein, and colleagues showed that in most clinical cases, combination therapies are needed to avoid the evolution of resistance to targeted drugs. Furthermore, they find that the simultaneous administration of multiple targeted drugs minimizes the chance of relapse when no single mutation confers cross-resistance to both drugs. Various systems biology methods must be used to discover combination therapies to overcome drug resistance in select cancer types.

Recent precision medicine approaches have focused on targeting multiple biomarkers found in individual tumors by using combinations of drugs. However, with 300 FDA-approved cancer drugs on the market, there are almost 45,000 possible two-drug combinations and almost 4.5 million three-drug combinations to choose from. That level of complexity is one of the primary impediments to the growth of combination therapy in oncology. The National Cancer Institute has recently highlighted combination therapy as a top research priority in oncology.

Bacterial infections
Combination therapy with two or more antibiotics is often used in an effort to treat multi-drug resistant Gram-negative bacteria.

Contrast to monotherapy
Monotherapy, or the use of a single therapy, can be applied to any therapeutic approach, but it is most commonly used to describe the use of a single medication. Normally, monotherapy is selected because a single medication is adequate to treat the medical condition.
However, monotherapy may also be preferred when combining treatments would cause unwanted side effects or dangerous drug interactions.

See also
Polypill, a medication which contains a combination of multiple active ingredients
Combination drug

References

External links
Drug combination database, which covers information on more than 1,300 drug combinations in either clinical use or different testing stages.
Perturbation biology method for the discovery of anti-resistance drug combinations with network pharmacology.

Medical treatments
Medical terminology
Pharmacology
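The two-drug and three-drug combination counts quoted in the oncology section above are ordinary binomial coefficients; a quick check, assuming exactly 300 approved drugs:

```python
# Unordered k-drug combinations that can be formed from 300 candidate drugs.
from math import comb

n_drugs = 300
print(comb(n_drugs, 2))  # 44850   -> "almost 45,000" two-drug combinations
print(comb(n_drugs, 3))  # 4455100 -> "almost 4.5 million" three-drug combinations
```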
Combination therapy
[ "Chemistry" ]
753
[ "Pharmacology", "Medicinal chemistry" ]