Dataset columns — id: int64 (values 580 to 79M); url: string (lengths 31 to 175); text: string (lengths 9 to 245k); source: string (lengths 1 to 109); categories: string (160 classes); token_count: int64 (values 3 to 51.8k)
15,074,341
https://en.wikipedia.org/wiki/CHMP1B
Charged multivesicular body protein 1b is a protein that in humans is encoded by the CHMP1B gene.
CHMP1B
Chemistry
30
7,994,183
https://en.wikipedia.org/wiki/Zicam
Zicam is a branded series of products marketed for cold and allergy relief whose original formulations included the element zinc. The Zicam name is derived from a portmanteau of the words "zinc" and "ICAM-1" (the receptor to which a rhinovirus binds in order to infect cells). It is labelled as an "unapproved homeopathic" product and, as such, has no demonstrated evidence of effectiveness. Zicam was invented and developed by Charles B. Hensley and Robert Steven Davidson in the mid-1990s, building on the ICAM-1 synthesis success of the Hafdua Laboratory in Haifa, Israel, under the direction of Mich Segal and Avram Satz, and is produced, marketed and sold by Zicam, LLC, a wholly owned subsidiary of Matrixx Initiatives, Inc., an American company. In 2009, the U.S. Food and Drug Administration (FDA) and Health Canada advised consumers to avoid intranasal versions of Zicam Cold Remedy because of a risk of damage to the sense of smell, leading the manufacturer to withdraw these versions from the U.S. market. In recent years, however, the products have returned to market in nasal swab, dissolving/chewable tablet, nasal spray and oral mist forms, some with zinc and some without. In 2020, the brand was purchased by Church & Dwight for $530 million. Ingredients and use Because this product is a "homeopathic" over-the-counter drug, it is exempt from a number of the requirements ordinarily applicable to OTC drug products, provided it conforms to the standards of the Homeopathic Pharmacopeia of the United States (HPUS) and is labeled as a homeopathic product. The only biologically active ingredients present in Zicam Cold Remedy are zinc acetate (2X = 1/100 dilution) and zinc gluconate (1X = 1/10 dilution). Other sources list the ionic zinc content as "33 mmol/L of zincum gluconium". Zicam is marketed as a homeopathic product which the maker claims can shorten the duration of a cold and may reduce the severity of common cold symptoms. It is marketed in accordance with the Homeopathic Pharmacopoeia of the United States, a private organization not linked to nor regulated by any part of any government. Various non-scientific private studies done by and for the homeopathic industry support its cold-reduction claims, for example from the Center of Integrative Medicine and Department of Infectious Diseases, Cleveland Clinic Foundation, Cleveland, OH. Some of the homeopathic ingredients used in the preparation of Zicam are Galphimia glauca, histamine dihydrochloride (homeopathic name histaminum hydrochloricum), Luffa operculata, and sulfur. Safety concerns Litigation In 2006, Matrixx Initiatives paid $12 million to settle 340 lawsuits from Zicam users who said that the product destroyed their sense of smell (medically termed anosmia), although the company did not admit fault. As of 2009, "hundreds more such suits have since been filed." In 2005, the International Brotherhood of Electrical Workers pension fund sued Matrixx Initiatives for securities fraud, alleging that the company had misled investors by not reporting the risks of Zicam. In Matrixx Initiatives, Inc. v. Siracusano, the U.S. Supreme Court ruled that the union's suit could go forward. In 2014, Yesenia Melgar commenced an action entitled Melgar v. Zicam LLC, et al. Melgar claimed that Zicam deceived customers by falsely representing that Zicam products "reduce the duration and severity of a cold." The court allowed the case to become a class action suit that included a variety of Zicam products. In 2018, a settlement was reached. 
Zicam agreed to pay $16,000,000 to people who had purchased Zicam products between Feb. 15, 2011 and June 5, 2018. NAD claims In April 2013, the National Advertising Division recommended that Matrixx Initiatives cease advertising claims suggesting "its homeopathic Zicam Cold Remedy products prevent users from catching a cold." However, the NAD concluded that imagery of the "cold monster" was unlikely to imply that taking Zicam would, in fact, reduce the severity of a cold. The NAD also noted the advertiser's voluntary discontinuance of the language "concentrated formula" from its Zicam ULTRA advertising and product packaging, and found that Zicam provided a reasonable basis for the use of "Ultra" for Zicam products that contain more of the active ingredient per dosage unit than their original counterparts and require consumers to take fewer doses per day. FDA warning and product recall On June 16, 2009, the FDA advised consumers to discontinue use of three nasally administered versions of Zicam Cold Remedy—Zicam Cold Remedy Nasal Gel, Zicam Cold Remedy Nasal Swabs, and Zicam Cold Remedy Swabs, Kids Size (a discontinued product)—because the FDA had associated a serious risk of anosmia with them. The advisory did not implicate other Zicam products. The FDA indicated that it had received reports of a loss of smell from approximately 130 Zicam Cold Remedy users since 1999. The FDA voiced concern that the loss of smell may be long-lasting or permanent, while the condition for which these Zicam products are marketed—the common cold—typically resolves on its own without lasting problems. The manufacturer stated that it had received an additional 800 reports of a loss of smell, but did not turn those over to the FDA because it did not believe it was required to do so. The FDA disagreed, and requested copies of any reports that had associated anosmia with intranasal Zicam Cold Remedy. The FDA also issued a Warning Letter to Matrixx, stating that the products cannot be marketed without FDA approval. The company initially refused to recall the products but later said that it would withdraw them from sale and that, "based on the FDA's recommendation, consumers should discard any unused product or contact Zicam ... to request a refund." On June 24, 2009, Matrixx recalled all affected products. The company maintained that most cases of anosmia are due to the common cold itself, and that complaints of anosmia among Zicam Cold Remedy users are unlikely to be more numerous than those expected among the general population. In contrast, the FDA had reported that cases of anosmia associated with intranasal Zicam Cold Remedy products were in excess of those seen with other nasal remedies for the common cold, and that cases associated with intranasal zinc presented more rapidly, and with different symptoms, than did unrelated cases. In addition, the FDA's warning letter prompted the Securities and Exchange Commission to investigate the company. Through Freedom of Information Act (FOIA) filings, Matrixx asked the FDA to provide the research and evidence that led it to request the withdrawal of Zicam swabs. The company said that "fundamental fairness" required a clear explanation of the FDA's methodology and analysis. On June 19, 2009, Health Canada, in a foreign product alert, also issued a similar warning based on the U.S. FDA information.
Zicam
Chemistry
1,555
2,347,496
https://en.wikipedia.org/wiki/Ambient%20network
Ambient networks is a network integration design that seeks to solve problems relating to switching between networks so that devices can maintain contact with the outside world. The project aims to develop a software-driven network infrastructure that will run on top of all current or future physical network infrastructures, providing a way for devices to connect to each other, and through each other to the outside world. The concept of Ambient Networks comes from the IST Ambient Network project, a research project sponsored by the European Commission within the Sixth Framework Programme (FP6). The Ambient Networks Project Ambient Networks was a collaborative project within the European Union's Sixth Framework Programme that investigated future communications systems beyond fixed and 3rd-generation mobile networks. It was part of the Wireless World Initiative. The project worked on a new concept called Ambient Networking, to provide suitable mobile networking technology for the future mobile and wireless communications environment. Ambient Networks aimed to provide a unified networking concept that can adapt to the very heterogeneous environment of different radio technologies and service and network environments. Special focus was put on facilitating both competition and cooperation of various market players by defining interfaces which allow the instant negotiation of agreements. This approach went beyond the interworking of well-defined protocols and was expected to have a long-term effect on the business landscape in the wireless world. Central to the project was the concept of composition of networks, an approach to address the dynamic nature of the target environment, based on an open framework for network control functionality which can be extended with new capabilities as well as operating over existing connectivity infrastructure. Phase 1 of the project (2004–2005) laid the conceptual foundations. The Deliverable D1-5 "Ambient Networks Framework Architecture" summarizes the work from phase 1 and provides links to other relevant material. Phase 2 (2006–2007) focused on validation aspects. One key result of phase 2 is an integrated prototype that was used to study the feasibility of the Ambient Networks concept for a number of typical network scenarios. The ACS prototype was used to iteratively test the components developed by the project in a real implementation. In parallel, the top-down work continued, leading to a refined System Specification. This document, referred to as the System Description, is available on the Ambient Networks website. Furthermore, standardization of the composition concept is addressed in 3GPP. Interfaces and their use The ACS (Ambient Control Space) is the internal control space of an ambient network. It hosts the functions that can be accessed and is in full control of the resources of the network. The Ambient Networks infrastructure does not deal with nodes; instead it deals with networks, though at the beginning each "network" might consist of just one node: these "networks" need to merge to form a network in the original sense of the word. A composition establishment consists of the negotiation and then the realization of a Composition Agreement. This merging can be fully automatic. The decision whether or not to merge is made using pre-configured policies. There are three interfaces for communicating with an ACS. These are: ANI: Ambient Network Interface. If a network wants to join in, it has to do so through this interface. 
ASI: Ambient Service Interface. If a function inside the ACS needs to be accessed, this interface is used. ARI: Ambient Resource Interface. If a resource inside a network needs to be accessed (e.g. the volume of the traffic), this interface is used. Interfaces are used to hide the internal structures of the underlying network. If two networks meet and decide to merge, a new ACS will be formed of the two (though the two networks retain their own ACSs, along with their interfaces, inside this new global ACS). The newly composed ACS will of course have its own ANI, ASI and ARI, and will use these interfaces to merge with other Ambient Networks. Other options for composition are to not merge the two Ambient Networks (Network Interworking) or to establish a new virtual ACS that exercises joint control over a given set of shared resources (Control Sharing). ACS Functional Entities Functions are divided into Functional Entities (FEs). The ACS provides a flexible and extensible framework to run these FEs as a distributed system. Examples are the Composition Functional Entity (controlling composition of ANs), Bearer Management FEs and Overlay Management FEs. More information on FEs is contained in the Ambient Networks Framework Architecture and the latest version of the System Description. Example situation Alice has a PAN, a Personal Area Network, on her body: she is carrying a Bluetooth-enabled PDA, mobile phone and laptop, all currently turned on and forming a network. Her laptop also has the ability to connect using an available WLAN, and her mobile phone can connect through GPRS, though GPRS is slower and much more costly for Alice to use. She is now on the move, and her laptop is downloading her emails using the GPRS connection on the mobile: Laptop → (Bluetooth) → Mobile → (GPRS) → Mobile phone network While walking, she passes into an area covered by a free WLAN hotspot: her PAN immediately initiates a connection with the hotspot. This is called "merging" of the networks (that of the hotspot and that of her PAN). Once this merging is complete, the downloading of her email continues unaffected, but instead of using the expensive and slow GPRS connection, it now uses the newly established WLAN connection. If she now wants to browse the web with her PDA, the PDA will also use the WLAN connection of the laptop: PDA → (Bluetooth) → Laptop → (WLAN) → Hotspot
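The composition concept above lends itself to a small illustration. The following Python sketch is a toy model, not the project's actual ACS prototype; all class, method and resource names are hypothetical. It shows how two Ambient Networks might negotiate a Composition Agreement through their ANIs under pre-configured policies, and how the composed network again exposes its own interfaces to merge further:

```python
# Toy sketch of Ambient Network composition (illustrative names only).
from dataclasses import dataclass, field

@dataclass
class AmbientNetwork:
    name: str
    resources: dict = field(default_factory=dict)   # reachable via ARI
    policy: callable = lambda other: True           # pre-configured merge policy

    def ani_negotiate(self, other: "AmbientNetwork"):
        """Ambient Network Interface: negotiate a Composition Agreement."""
        if self.policy(other) and other.policy(self):
            return {"parties": (self.name, other.name), "type": "merge"}
        return None

    def compose(self, other: "AmbientNetwork"):
        """Realize the agreement: form a new composed network (or decline)."""
        if self.ani_negotiate(other) is None:
            return None  # stay separate; interworking would be an alternative
        return AmbientNetwork(
            name=f"{self.name}+{other.name}",
            resources={**self.resources, **other.resources},
        )

# Alice's PAN meets a WLAN hotspot; the composed network offers both bearers.
pan = AmbientNetwork("PAN", {"gprs_kbps": 40})
hotspot = AmbientNetwork("Hotspot", {"wlan_kbps": 11000})
composed = pan.compose(hotspot)
if composed:
    # Traffic now uses the best available bearer of the composed network.
    best = max(composed.resources, key=composed.resources.get)
    print(composed.name, "uses", best)
```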
Ambient network
Engineering
1,217
3,107,845
https://en.wikipedia.org/wiki/Klee%27s%20measure%20problem
In computational geometry, Klee's measure problem is the problem of determining how efficiently the measure of a union of (multidimensional) rectangular ranges can be computed. Here, a d-dimensional rectangular range is defined to be a Cartesian product of d intervals of real numbers, which is a subset of R^d. The problem is named after Victor Klee, who gave an algorithm for computing the length of a union of intervals (the case d = 1) which was later shown to be optimally efficient in the sense of computational complexity theory. The computational complexity of computing the area of a union of 2-dimensional rectangular ranges is now also known, but the case d ≥ 3 remains an open problem. History and algorithms In 1977, Victor Klee considered the following problem: given a collection of n intervals in the real line, compute the length of their union. He then presented an algorithm to solve this problem with computational complexity (or "running time") O(n log n) — see Big O notation for the meaning of this statement. This algorithm, based on sorting the intervals, was later shown by Michael Fredman and Bruce Weide (1978) to be optimal. Later in 1977, Jon Bentley considered a 2-dimensional analogue of this problem: given a collection of n rectangles, find the area of their union. He also obtained an O(n log n) algorithm, now known as Bentley's algorithm, based on reducing the problem to n 1-dimensional problems: this is done by sweeping a vertical line across the area. Using this method, the area of the union can be computed without explicitly constructing the union itself. Bentley's algorithm is now also known to be optimal (in the 2-dimensional case), and is used in computer graphics, among other areas. These two problems are the 1- and 2-dimensional cases of a more general question: given a collection of n d-dimensional rectangular ranges, compute the measure of their union. This general problem is Klee's measure problem. When generalized to the d-dimensional case, Bentley's algorithm has a running time of O(n^(d−1) log n). This turns out not to be optimal, because it only decomposes the d-dimensional problem into n (d−1)-dimensional problems, and does not further decompose those subproblems. In 1981, Jan van Leeuwen and Derek Wood improved the running time of this algorithm to O(n^(d−1)) for d ≥ 3 by using dynamic quadtrees. In 1988, Mark Overmars and Chee Yap proposed an O(n^(d/2) log n) algorithm for d ≥ 3. Their algorithm uses a particular data structure similar to a kd-tree to decompose the problem into 2-dimensional components and aggregate those components efficiently; the 2-dimensional problems themselves are solved efficiently using a trellis structure. Although asymptotically faster than Bentley's algorithm, its data structures use significantly more space, so it is only used in problems where either n or d is large. In 1998, Bogdan Chlebus proposed a simpler algorithm with the same asymptotic running time for the common special cases where d is 3 or 4. In 2013, Timothy M. Chan developed a simpler algorithm that avoids the need for dynamic data structures and eliminates the logarithmic factor, lowering the best known running time for d ≥ 3 to O(n^(d/2)). Known bounds The only known lower bound for any d is Ω(n log n), and optimal algorithms with this running time are known for d = 1 and d = 2. The Chan algorithm provides an upper bound of O(n^(d/2)) for d ≥ 3, so for d ≥ 3, it remains an open question whether faster algorithms are possible, or alternatively whether tighter lower bounds can be proven. 
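To make the 1D case concrete, here is a minimal sketch of the sort-and-sweep approach behind Klee's O(n log n) algorithm; the function name and data layout are illustrative, not from any reference implementation:

```python
# Minimal sketch of the 1D case: sort intervals by left endpoint, then sweep
# once, merging overlapping runs and accumulating the length of the union.

def union_length(intervals):
    """Total measure of a union of 1D intervals given as (left, right) pairs."""
    total = 0.0
    cur_left = cur_right = None
    for left, right in sorted(intervals):          # O(n log n) sort dominates
        if cur_right is None or left > cur_right:  # disjoint: close current run
            if cur_right is not None:
                total += cur_right - cur_left
            cur_left, cur_right = left, right
        else:                                      # overlapping: extend the run
            cur_right = max(cur_right, right)
    if cur_right is not None:
        total += cur_right - cur_left
    return total

print(union_length([(1, 3), (2, 5), (7, 8)]))  # 5.0, the measure of [1,5] ∪ [7,8]
```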
In particular, it remains open whether the algorithm's running time must depend on d. In addition, the question of whether there are faster algorithms that can deal with special cases (for example, when the input coordinates are integers within a bounded range) remains open. The 1D Klee's measure problem (union of intervals) can be solved in O(n log p) time, where p denotes the number of piercing points required to stab all intervals (the union of the intervals pierced by a common point can be calculated in linear time by computing the extrema). Parameter p is an adaptive parameter that depends on the input configuration, and the piercing algorithm yields an adaptive algorithm for Klee's measure problem. See also Convex volume approximation, an efficient algorithm for convex bodies References and further reading Franco P. Preparata and Michael I. Shamos (1985). Computational Geometry (Springer-Verlag, Berlin). Klee's Measure Problem, from Professor Jeff Erickson's list of open problems in computational geometry. (Accessed November 8, 2005, when the last update was July 31, 1998.)
Klee's measure problem
Mathematics
978
68,175,242
https://en.wikipedia.org/wiki/Gang%20stalking
Gang stalking or group-stalking is a set of persecutory beliefs in which those affected believe they are being followed, stalked, and harassed by a large number of people. The term is associated with the virtual community formed by people who consider themselves "targeted individuals" ("T.I."), claiming their lives are disrupted from being stalked by organized groups intent on causing them harm. Terminology The concept of stalking arose in the 1980s following increased legal equity for women and prosecution of domestic violence. Generally, stalking has a single perpetrator, who may sometimes recruit others to act vicariously on their behalf, usually unwittingly. Beginning in the early 2000s, the term gang stalking became popularized to describe a different experience of repeated harassment which instead comes from multiple people who organize around a shared purpose, with no one person solely responsible. Online communities A 2016 article in The New York Times estimated that more than 10,000 people were participating in online communities "organized around the conviction that its members are victims of a sprawling conspiracy to harass thousands of everyday Americans with mind-control weapons and armies of so-called gang stalkers". The article identified a 2015 paper by Sheridan and James entitled "Complaints of group stalking ('gang stalking'): an exploratory study of their nature and impact on complainants" as the only scientific study of the topic at the time. Hundreds of these communities exist online. News reports have described how groups of Internet users have cooperated to exchange detailed conspiracy theories involving gang stalking. Kershaw & Weinberger say, "Web sites that amplify reports of mind control and group stalking" are "an extreme community that may encourage delusional thinking" and represent "a dark side of social networking. They may reinforce the troubled thinking of the mentally ill and impede treatment." A 2020 study established a framework to classify and examine the phenomenon of individuals with the subjective experience of being gang stalked. The study confirmed the subsequent "serious" sequelae of their experience and recommended further research. Persecutory delusion Those who believe they are victims report that they believe the motivation for the gang stalking is to disrupt every part of their lives. The activities involved are described as including electronic harassment, the use of "psychotronic weapons", directed-energy weapons, cyberstalking, hypnotic suggestion transmitted through remotely-accessed electronic devices, and other alleged mind control techniques. These have been reported by external observers as being examples of belief systems as opposed to reports of objective phenomena. Among the community of targeted individuals, gang stalking is described as a shared experience where the gang stalkers all coordinate to harass individuals, and the individuals share their victim experiences with each other. A study from Australia and the United Kingdom by Lorraine Sheridan and David James compared 128 self-defined victims of gang stalking with a randomly selected group of 128 self-declared victims of stalking by an individual. All 128 "victims" of gang stalking were judged to be delusional, compared with only 5 victims of individual stalking. 
There were highly significant differences between the two samples on depressive symptoms, post-traumatic symptomatology and adverse impact on social and occupational function, with the self-declared victims of gang stalking being more severely affected. The authors concluded that "group stalking appears to be delusional in basis, but complainants suffer marked psychological and practical sequelae. This is important in the assessment of risk in stalking cases, early referral to psychiatric services and allocation of police resources." While the great majority of those who claim to be targeted individuals pose no danger to others, one report found that some have acted out with violence, sometimes extreme. In 2022, a reported believer in gang stalking was accused of killing four people in Ohio; he uploaded a video before the shooting in which he said that he wanted to "help other targeted individuals" and that he would conduct "the first counterattack against mind control in history". A manifesto was found on his computer, in which he wrote that his neighbors were mind-controlling terrorists. Notable claimants James Tilly Matthews (1770–1815), English businessman Francis E. Dec (1926–1996), American lawyer Gloria Naylor (1950–2016), American novelist Isaac Brock (born 1975), American musician See also Cyberstalking Mass surveillance Psychosis The Truman Show delusion Stalking#Stalking by groups, for real-world stalking by groups
Gang stalking
Biology
911
19,154,280
https://en.wikipedia.org/wiki/Evolutionary%20trap
The term evolutionary trap has acquired several definitions associated with different biological disciplines. Evolutionary biology Within evolutionary biology, the term has been used sporadically to refer to situations in which a pre-existing (and presumably well adapted and successful) trait has become obsolete or maladaptive due to a changing biophysical environment and/or competition, but evolved complexities accumulated by prior adaptations now preclude any effective re-adaptation. Organisms can only modify or "patch up" existing traits (which essentially have become inherited "baggage") rather than devolving, removing or "redesigning" a trait (i.e. Dollo's law of irreversibility), leaving the species hosting the trait struggling to keep up with natural selection and thus vulnerable to competitive disadvantage, extirpation or even extinction. In the 1991 BBC lecture series Growing Up in the Universe, British evolutionary biologist Richard Dawkins analogized the concept to a mountaineer climbing blindly upward (because "evolution has no foresight") who is not allowed to turn back downhill, and so ends up trapped on one summit, unable to reach anywhere higher. Ecology Within behavioral and ecological sciences, evolutionary traps occur when rapid environmental change triggers organisms to make maladaptive behavioral decisions. While these traps may take place within any type of behavioral context (e.g. mate selection, navigation, nest-site selection), the most empirically and theoretically well-understood type of evolutionary trap is the ecological trap, which represents maladaptive habitat selection behavior. Witherington demonstrated an interesting case of a "navigational trap". Over evolutionary time, hatchling sea turtles have evolved the tendency to migrate toward the light of the moon upon emerging from their sand nests. In the modern world, however, this has resulted in them orienting towards bright beach-front lighting, which is a more intense light source than the moon. As a result, the hatchlings migrate up the beach and away from the ocean, where they die of exhaustion, dehydration or predation. Habitat selection is an extremely important process in the lifespan of most organisms. That choice affects nearly all of an individual's subsequent choices, so it may not be particularly surprising that the type of evolutionary trap with the best empirical support is the ecological trap. Even so, traps may be relatively difficult to detect, and so the lack of evidence for other types of evolutionary trap may be a result of the paucity of researchers looking for them, coupled with the demanding evidence required to demonstrate their existence. See also Coextinction Ecological trap Evolutionary anachronism Evolutionary tradeoff Mass extinction Perceptual trap Evolutionary mismatch
Evolutionary trap
Biology
555
1,415,899
https://en.wikipedia.org/wiki/Suburbanization
Suburbanization (American English), also spelled suburbanisation (British English), is a population shift from historic core cities or rural areas into suburbs. Most suburbs are built in a formation of (sub)urban sprawl. As a consequence of the movement of households and businesses away from city centers, low-density, peripheral urban areas grow. Proponents of curbing suburbanization argue that sprawl leads to urban decay and a concentration of lower-income residents in the inner city, in addition to environmental harm. Suburbanization can be a progressive process, as a growing population pushes the zones of the concentric zone model outward, with residents moving out to escape the increasing density of inner areas. For example, Kings County, New York served New York City as farmland in the 18th century, with boats carrying produce across the East River. The steam ferry later made Brooklyn Heights a commuter town for Wall Street. Streetcar suburbs spread through the county, and as elevated railways further extended its reach, the City of Brooklyn grew to fill the county. Areas along the river became industrialized, and apartment buildings filled the places where factories did not replace the scattered houses. As a result, much of Brooklyn transformed into a suburban economy and later into an urban economy entirely. Many other suburbs have followed this same cycle. History United States Post–World War II economic expansion in the United States included a sudden boom in housing construction as developers raced to address housing shortages across the country. As veterans returned from war, their GI Bill benefits made it especially easy to buy homes in these new, cost-efficient neighborhoods, populating them quickly with young couples and new families. Racially discriminatory housing policies in many areas prevented people of color from buying homes in the new suburbs, making them largely white-dominated spaces. The nationwide mass migration of white homeowners into the suburbs became known as "white flight". Throughout the years, the desire to separate work life and home life has grown, causing an increase in suburban populations. Suburbs are often built around certain industries such as restaurants, shopping, and entertainment, which allows suburban residents to travel less and interact more within the suburban area. In the early 21st century, the spread of communication services, such as broadband, e-mail, and practical home video conferencing, has enabled more people to work from home rather than commute. Increased connectivity and digitization of office-based work, especially in response to the COVID-19 pandemic, have improved the ability of suburban residents to work from home. Similarly, the rise of modern delivery logistics in postal services, which take advantage of computerization and the availability of efficient transportation networks, eliminates some of the advantages that were once to be had from locating a business in the city. Industrial, warehousing, and factory land uses have also moved to suburban areas. This removes the need for company headquarters to be within quick courier distance of warehouses and ports. Urban areas often suffer from traffic congestion, which creates extra driving costs for a company that could be reduced by locating in a suburban area near a highway instead. Lower property taxes and low land prices encourage selling industrial land for profitable brownfield redevelopment. 
Suburban areas also offer more land to use as a buffer between industrial areas and residential and retail spaces. Keeping residential and retail areas away from industrial spaces may avoid the NIMBY sentiments and gentrification pressure that arise when such areas are adjacent in an urban setting. Suburban municipalities can offer tax breaks, specialized zoning, and regulatory incentives to attract industrial land users to their area, such as City of Industry, California. The overall effect of these developments is that both businesses and individuals now see an advantage to relocating to the suburbs, where the cost of buying land, renting space, and running their operations is cheaper than in the city. This continuing dispersal from a single city center has led to the advent of edge cities and exurbs, which arise out of clusters of office buildings built in suburban commercial areas, shopping malls, and other high-density developments. With more jobs for suburbanites in these areas rather than in the main city core from which the suburbs grew, traffic patterns have become more complex, with the volume of intra-suburban traffic increasing. By the year 2000, nearly half of the US population had relocated to suburban areas. Eastern Europe In many Eastern European countries, cities have the reputation of being dangerous or very expensive areas to live, while the suburbs are often viewed as safer and more conducive to raising a family. There have, however, also been periods of urbanization. During the mid-to-late 20th century, most socialist countries in the Eastern Bloc were characterized by under-urbanization, which meant that industrial growth occurred well in advance of urban growth and was sustained by rural-urban commuting. City growth, residential mobility, land, and housing development were under tight political control. Consequently, suburbanization in post-socialist Europe is not only a recent but also a particular phenomenon. The creation of housing and land markets, together with the state's withdrawal from housing provision, has led to the development of privatized modes of housing production and consumption, with an increasing role for private actors, and particularly for households. Yet the regulatory and institutional frameworks indispensable to a market-driven housing system – including housing finance – have remained underdeveloped, particularly in south-eastern Europe. This environment has undoubtedly stimulated housing self-provision. Clearly, different forces have shaped different outcomes. Long-suppressed urbanization and a dramatic housing backlog resulted in extensive peri-urban growth in Tirana (Albania), which during the 1990s doubled the size of the city, whereas war refugees put pressure on the cities of former Yugoslavia. Elsewhere processes of suburbanization seemed dominant, but their pace differed according to housing shortages, available finances, preferences, and the degree of 'permitted' informality. The process was slow in Prague during the 1990s and more apparent after 2000, when housing affordability improved. Conversely, Slovenian and Romanian suburban developments visibly surrounded cities and towns during the 1990s. Nonetheless, socialist legacies of underdeveloped infrastructure and the affordability crisis of transition differentiate post-socialist suburbs from their Western counterparts. 
Various degrees of informality characterized suburban housing, ranging from illegal occupation of public land (Tirana) and illegal construction on agricultural private land (Belgrade) to the unauthorized but later legalized developments in Romania. Suburban housing displayed a chaotic, unplanned character, especially in south-eastern Europe, where the state retains a degree of illegitimacy. Apart from scattered for-profit housing, much of the new detached suburban housing appears to be self-developed. Owner-building has reportedly become a household strategy for adapting to recession and high, volatile inflation, cutting construction costs, and bridging access to housing. The predominantly owner-built character of most suburban housing, with the land often obtained at no cost through restitution policies or illegal occupation, allowed a mix of low- and middle-income households within these developments. Psychological effects Social isolation Historically, it was believed that living in highly urban areas resulted in social isolation, disorganization, and psychological problems, while living in the suburbs was supposed to improve overall happiness, due to lower population density, lower crime, and a more stable population. A study based on data from 1974, however, found this not to be the case: people living in the suburbs had neither greater satisfaction with their neighborhood nor greater satisfaction with the quality of their lives than people living in urban areas. Drug abuse Pre-existing disparities in the demographic composition of suburbs pose problems of drug consumption and abuse, because drug addiction is disconnected from the biased outward perception of suburban health and safety. The difference in drug mortality rates between suburban and urban spaces is sometimes fueled by the relationship between the general public, medical practitioners, and the pharmaceutical industry. Affluent individuals living in the suburbs often have greater means of obtaining otherwise expensive and potent drugs, such as opioids and narcotics, through valid prescriptions. In the United States, the combination of demographic and economic features created as a result of suburbanization has increased the risk of drug abuse in suburban communities. Heroin use in suburban communities has increased in incidence, as new heroin users in the United States are predominantly white suburban men and women in their early twenties. Adolescents and young adults are at an increased risk of drug abuse in suburban spaces due to the enclosed social and economic enclaves that suburbanization propagates. The New England Study of Suburban Youth found that upper-middle-class suburban cohorts displayed increased drug use compared with the national average. The shift in demographics and economic statuses related to suburbanization has increased the risk of drug abuse in affluent American communities and changed the approach to drug abuse public health initiatives. When addressing public health concerns of drug abuse with patients directly, suburban health care providers and medical practitioners have the advantage of treating a demographic of drug abuse patients that is better educated and equipped with resources to recover from addiction and overdose. The disparity of treatment and initiatives between suburban and urban environments in regard to drug abuse and overdose is a public health concern. 
Although suburban healthcare providers may have more resources to address drug addiction, abuse, and overdose, preconceived ideas about suburban lifestyles may prevent them from providing proper treatment to patients. Given the increasing incidence of drug abuse in suburban environments, the contextual factors that affect certain demographics must also be considered to better understand the prevalence of drug abuse in suburbs; for example, adolescents' relationships with social groups in school, and other socializing forces that arise from suburbanization, affect the incidence of drug abuse. Economic impacts The economic impacts of suburbanization have become very evident since the trend began in the 1950s. Changes in infrastructure, industry, real estate development costs, fiscal policies, and the diversity of cities have been readily apparent, as "making it to the suburbs", mainly in order to own a home and escape the chaos of urban centers, became a goal of many American citizens. These impacts have many benefits as well as side effects and are becoming increasingly important in the planning and revitalization of modern cities. Impact on urban industry The days of industry dominating the urban cores of cities are diminishing as population decentralization of urban centers increases. Companies increasingly look to build industrial parks in less populated areas, largely for more modern buildings and ample parking, as well as to appease the popular desire to work in less congested areas. Government economic policies that provide incentives for companies to build new structures, and the lack of incentives to build on brownfield land, also contribute to the flight of industrial development from major cities to surrounding suburban areas. As suburban industrial development becomes increasingly profitable, it becomes less financially attractive to build in high-density areas. Another impact of industry leaving the city is the reduction of buffer zones separating metropolitan areas, industrial parks and surrounding suburban residential areas. As this land becomes more economically relevant, the value of such properties very often increases, causing many owners of undeveloped land to sell. Consequences for infrastructure As America continues to sprawl, the required water lines, sewer lines, and roads could cost more than $21,000 per residential and non-residential development unit, costing the American government $1.12 trillion between 2005 and 2030. Along with the cost of new infrastructure, existing infrastructure suffers, as most of the government money dedicated to improving infrastructure goes to paying for new necessities in areas farther from the urban core. As a result, the government will often forgo maintenance on previously built infrastructure. Real estate development costs In the United States, prospective home buyers will often drive farther into the suburbs until they can find an area in which they can afford a home. This concept is colloquially known as "drive until you qualify." Suburban lots are typically larger than urban lots. Bigger lots often mean fewer lots, so suburbanization can lead to less dense real estate development. Fiscal impact Public deficits can often grow as a result of suburbanization, mainly because property taxes tend to be lower in less densely populated areas. 
Also, because of decentralization, the lack of variety in housing types, and the greater distances between homes, real estate development and public service costs tend to increase, which in turn increases the deficits of upper levels of government. Suburbanization has also often resulted in lower tax revenues for cities, leading to a reduction in the quality of public services due to the exodus of wealthier populations. Effect on urban diversity As the trend of suburbanization took hold in the United States, many of the people who left the city for the suburbs were white. As a result, there was a rise in Black home ownership in central cities. As white households left for the suburbs, housing prices in transitional neighborhoods fell, which often lowered the cost of home ownership for Black households. This trend was stronger in older and denser cities, especially in the Northeast and Midwest, because new construction was generally more difficult there. As of the 2010 census, minorities such as African Americans, Asian Americans and Indo-Americans have become an increasingly large factor in recent suburbanization, and many suburbs now have large minority communities in suburban and commuter cities. Environmental impacts The growth of suburbanization and the spread of people living outside the city can cause negative environmental impacts. Suburbanization has been linked to increases in vehicle mileage, land use, and residential energy consumption, which in turn degrade air quality, increase the use of natural resources such as water and oil, and raise greenhouse gas emissions. Increased use of vehicles to commute to and from the workplace consumes more oil and gas and increases emissions, which pollute the air and degrade local air quality. Growing suburbanization drives housing development, which increases land consumption and reduces the land still available. Suburbanization has also been linked to increased use of natural resources such as water to meet residents' demands and to maintain suburban lawns, and, as residents' technology use and consumption rise, to increased household electricity consumption. Social impacts Suburbanization has negative social impacts on many groups of people, including children, adolescents, and the elderly. Children affected by suburbanization, or urban sprawl, are occasionally referred to as "cul-de-sac kids." Because children living in suburbs typically cannot go anywhere without a parent, they are less able to practice independence. Teenagers without independence can experience boredom, isolation, and frustration. These feelings have even led to an increase in rates of teenage suicide and school shootings in suburban areas. The elderly in suburbia may also experience more social isolation after losing the ability to drive. Both the wealthy elderly and those who still live in suburbs are largely separated from all other groups of society. See also Urbanization Counterurbanization Transport divide References Notes Bibliography Burchell, Downs, McCann, Mukherji (2005). Sprawl Costs: Economic Impacts of Unchecked Development. London: Island Press. Boustan and Margo. "White Suburbanization and African-American Home Ownership, 1940–1980". National Bureau of Economic Research, January 2011. Fishman, Robert. 
(1987). Bourgeois Utopias: The Rise and Fall of Suburbia. New York: Basic Books. Garreau, Joel (1992). Edge City: Life on the New Frontier. New York: Anchor Books. Hayden, Dolores (2004). Building Suburbia: Green Fields and Urban Growth, 1820–2000. New York: Vintage. Wiese, Andrew (2006). "African American Suburban Development in Atlanta". Southern Spaces. http://southernspaces.org/2006/african-american-suburban-development-atlanta Wiese, Andrew (2005). Places of Their Own: African American Suburbanization in the Twentieth Century. Chicago: University of Chicago Press. Soule, David (2006). Urban Sprawl: A Comprehensive Reference Guide. London: Greenwood Press.
Suburbanization
Engineering
3,295
21,921,749
https://en.wikipedia.org/wiki/Ridge%20%28biology%29
Ridges (regions of increased gene expression) are domains of the genome with high gene expression; the opposite of ridges are antiridges. The term was first used by Caron et al. in 2001. Characteristics of ridges are: high gene density; many C and G nucleobases; genes with short introns; high SINE repeat density; and low LINE repeat density. Discovery Clustering of genes in prokaryotes has been known for a long time. Their genes are grouped in operons, and genes within an operon share a common promoter unit. These genes are mostly functionally related. The genome of prokaryotes is relatively simple and compact. In eukaryotes the genome is huge and only a small fraction of it consists of functional genes; furthermore, the genes are not arranged in operons (except in nematodes and trypanosomes, although their operons differ from prokaryotic operons). In eukaryotes each gene has a transcription regulation site of its own, so genes do not have to be in close proximity to be co-expressed. It was therefore long assumed that eukaryotic genes were randomly distributed across the genome due to the high rate of chromosome rearrangements. But once the complete sequences of genomes became available, it became possible to locate a gene absolutely and measure its distance to other genes. The first eukaryote genome ever sequenced was that of Saccharomyces cerevisiae, or budding yeast, in 1996. Half a year after that, Velculescu et al. (1997) published a study in which they integrated SAGE data with the now available genome map. During a cell cycle different genes are active in a cell, so they used SAGE data from three moments of the cell cycle (log phase, S phase-arrested and G2/M phase-arrested cells). Because in yeast all genes have a promoter unit of their own, genes were not expected to cluster near each other, but they did. Clusters were present on all 16 yeast chromosomes. A year later Cho et al. also reported (although in more detail) that certain genes are located near each other in yeast. Characteristics and function Co-expression Cho et al. were the first to determine that clustered genes have similar expression levels. They identified transcripts that show cell cycle-dependent periodicity. Of those genes, 25% were located in close proximity to other genes transcribed in the same cell-cycle phase. Cohen et al. (2000) also identified clusters of co-expressed genes. Caron et al. (2001) made a human transcriptome map of 12 different tissues (cancer cells) and concluded that genes are not randomly distributed across the chromosomes. Instead, genes tend to cluster, in groups of sometimes 39 genes in close proximity. Clusters were not only gene dense: they identified 27 clusters of genes with very high expression levels and called them RIDGEs. A typical RIDGE contains 6 to 30 genes per centiray. However, there were notable exceptions: 40 to 50% of the RIDGEs were not that gene dense, and just as in yeast these RIDGEs were located in the telomere regions. Lercher et al. (2002) pointed to some weaknesses in Caron's approach. Clusters of genes in close proximity with high transcription levels can easily be generated by tandem duplication. Genes can generate duplicates of themselves which are incorporated into their neighborhood. These duplicates can either become a functional part of the pathway of their parent gene, or (because they are no longer favored by natural selection) gain deleterious mutations and turn into pseudogenes. 
Because these duplicates are false positives in the search for gene clusters, they have to be excluded. Lercher excluded neighboring genes with high resemblance to each other, and then searched with a sliding window for regions of 15 neighboring genes. It was clear that gene-dense regions existed, and there was a striking correlation between gene density and high GC content. Some clusters indeed had high expression levels, but most of the highly expressed regions consisted of housekeeping genes: genes that are highly expressed in all tissues because they code for basal mechanisms. Only a minority of the clusters contained genes whose expression was restricted to specific tissues. Versteeg et al. (2003) tried, with a better human genome map and better SAGE data, to determine the characteristics of RIDGEs more precisely. Overlapping genes were treated as one gene, and genes without introns were rejected as pseudogenes. They determined that RIDGEs are very gene dense and have high gene expression, short introns, high SINE repeat density and low LINE repeat density. Clusters containing genes with very low transcription levels had characteristics that were the opposite of RIDGEs, and were therefore called antiridges. LINE repeats are junk DNA containing an endonuclease cleavage site (TTTTA). Their scarcity in RIDGEs can be explained by the fact that natural selection favors the scarcity of LINE repeats in ORFs, because their endonuclease sites can cause deleterious mutations in the genes. Why SINE repeats are abundant is not yet understood. Versteeg et al. also concluded that, contrary to Lercher's analysis, the transcription levels of many genes in RIDGEs (for example a cluster on chromosome 9) can vary strongly between different tissues. Lee et al. (2003) analyzed the trend of gene clustering across different species. They compared Saccharomyces cerevisiae, Homo sapiens, Caenorhabditis elegans, Arabidopsis thaliana and Drosophila melanogaster, and found degrees of clustering, as the fraction of genes in loose clusters, of 37%, 50%, 74%, 52% and 68%, respectively. They concluded that pathways whose genes are clustered across many species are rare. They found seven universally clustered pathways: glycolysis, aminoacyl-tRNA biosynthesis, ATP synthase, DNA polymerase, hexachlorocyclohexane degradation, cyanoamino acid metabolism, and photosynthesis (ATP synthesis in non-plant species). Not surprisingly, these are basic cellular pathways. Lee et al. used very diverse groups of organisms. Within these groups clustering is conserved; for example, the clustering motifs of Homo sapiens and Mus musculus are more or less the same. Spellman and Rubin (2002) made a transcriptome map of Drosophila. Of all assayed genes, 20% were clustered. Clusters consisted of 10 to 30 genes over a group size of about 100 kilobases. The members of the clusters were not functionally related, and the location of clusters did not correlate with known chromatin structures. This study also showed that within clusters the expression levels of on average 15 genes were much the same across the many experimental conditions used. These similarities were so striking that the authors reasoned that the genes in the clusters are not individually regulated by their own promoters but that changes in the chromatin structure were involved. A similar co-regulation pattern was published in the same year by Roy et al. (2002) in C. elegans. 
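As a rough illustration of the sliding-window scans described above, the following Python sketch (not any paper's actual pipeline; the window size, threshold and toy expression values are hypothetical) flags runs of consecutive genes whose median expression is unusually high:

```python
# Illustrative sliding-window scan for RIDGE-like regions: slide a fixed-size
# window of consecutive genes along a chromosome and flag windows whose median
# expression exceeds a threshold, then merge overlapping hits.
from statistics import median

def find_ridges(expression, window=15, threshold=10.0):
    """Return (start, end) gene-index ranges whose windows of `window`
    consecutive genes have median expression above `threshold`."""
    hits = []
    for i in range(len(expression) - window + 1):
        if median(expression[i:i + window]) > threshold:
            hits.append((i, i + window))
    # Merge overlapping windows into maximal candidate regions.
    merged = []
    for start, end in hits:
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], end)
        else:
            merged.append((start, end))
    return merged

# Toy chromosome: 40 genes with a highly expressed block in the middle.
expr = [2.0] * 15 + [25.0] * 12 + [3.0] * 13
print(find_ridges(expr))  # one candidate region spanning the expressed block
```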
Many genes that are grouped into clusters show the same expression profiles in human invasive ductal breast carcinomas. Roughly 20% of the genes show a correlation with their neighbors. Clusters of co-expressed genes were separated by regions with less correlation between genes. These clusters could cover an entire chromosome arm. Contrary to the previously discussed reports, Johnidis et al. (2005) discovered that (at least some) genes within clusters are not co-regulated. Aire is a transcription factor which up- and down-regulates various genes. It functions in the negative selection, by medullary cells, of thymocytes that respond to the organism's own epitopes. The genes that were controlled by aire clustered: 53 of the genes most activated by aire had an aire-activated neighbor within 200 kb, and 32 of the genes most repressed by aire had an aire-repressed neighbor within 200 kb; this is more than expected by chance. They did the same screening for the transcriptional regulator CIITA. These transcription regulators did not have the same effect on all genes in the same cluster: genes that were activated, repressed or unaffected were sometimes present in the same cluster. In this case, the aire-regulated genes cannot have clustered because they were all co-regulated. So it is not very clear whether domains are co-regulated or not. A very effective way to test this is to insert synthetic genes into RIDGEs, antiridges and/or random places in the genome, determine their expression, and compare the resulting expression levels with each other. Gierman et al. (2007) were the first to prove co-regulation using this approach. As an insertion construct they used a fluorescent GFP gene driven by the ubiquitously expressed human phosphoglycerate kinase (PGK) promoter. They integrated this construct at 90 different positions in the genome of human HEK293 cells. They found that the expression of the construct in RIDGEs was indeed higher than in antiridges (while all constructs had the same promoter). They investigated whether these differences in expression were due to genes in the direct neighborhood of the constructs or to the domain as a whole. They found that constructs next to highly expressed genes were slightly more expressed than others. But when they enlarged the window size to the surrounding 49 genes (domain level), they saw that constructs located in domains with an overall high expression had a more than 2-fold higher expression than those located in domains with a low expression level. They also checked whether the construct was expressed at similar levels to neighboring genes, and whether that tight co-expression was present solely within RIDGEs. They found that the expression levels were highly correlated within RIDGEs, and almost uncorrelated near the ends of and outside the RIDGEs. These observations and the research of Gierman et al. proved that the activity of a domain has a great impact on the expression of the genes located in it, and that the genes within a RIDGE are co-expressed. However, the constructs used by Gierman et al. were driven by a constitutively active promoter, whereas the genes in the study of Johnidis et al. depended on the presence of the aire transcription factor. The aberrant expression of the aire-regulated genes could partly have been caused by differences in expression and conformation of the aire transcription factor itself. Functional relation It was known before the genomic era that clustered genes tend to be functionally related. 
Abderrahim et al. (1994) had shown that all the genes of the major histocompatibility complex are clustered on chromosome 6p21. Roy et al. (2002) showed that in the nematode C. elegans, genes that are expressed solely in muscle tissue during the larval stage tend to cluster in small groups of 2–5 genes; they identified 13 such clusters. Yamashita et al. (2004) showed that genes related to specific functions in organs tend to cluster. Six liver-related domains contained genes for xenobiotic, lipid and alcohol metabolism. Five colon-related domains had genes for apoptosis, cell proliferation, ion transport and mucin production. These clusters were very small and expression levels were low. Brain- and breast-related genes did not cluster. This shows that at least some clusters consist of functionally related genes. However, there are notable exceptions: Spellman and Rubin have shown that there are clusters of co-expressed genes that are not functionally related. It seems that clusters appear in very different forms. Regulation Cohen et al. found that of a pair of co-expressed genes, only one promoter has an Upstream Activating Sequence (UAS) associated with that expression pattern. They suggested that UASs can activate genes that are not in immediate adjacency to them. This could explain the co-expression of small clusters, but many clusters contain too many genes to be regulated by a single UAS. Chromatin changes are a plausible explanation for the co-regulation seen in clusters. Chromatin consists of the DNA strand and the histones that are attached to the DNA. Regions where chromatin is very tightly packed are called heterochromatin. Heterochromatin very often consists of remains of viral genomes, transposons and other junk DNA. Because of the tight packing, the DNA is almost unreachable for the transcription machinery; covering deleterious DNA with proteins is a way in which the cell can protect itself. Chromatin containing functional genes often has an open structure in which the DNA is accessible. However, most genes do not need to be expressed all the time, and DNA with genes that are not needed can be covered with histones. When a gene must be expressed, special proteins can alter the chemical groups that are attached to the histones (histone modifications), causing the histones to open the structure. When the chromatin of one gene is opened, the chromatin of the adjacent genes is opened as well, until the modification meets a boundary element. In that way genes in close proximity are expressed at the same time. So, genes are clustered in "expression hubs". Consistent with this model, Gilbert et al. (2004) showed that RIDGEs are mostly present in open chromatin structures. However, Johnidis et al. (2005) have shown that genes in the same cluster can be very differently expressed. How eukaryotic gene regulation, and the associated chromatin changes, precisely work is still very unclear and there is no consensus about it. In order to get a clear picture of the mechanism behind gene clusters, the workings of chromatin and gene regulation first need to be illuminated. Furthermore, most papers that identified clusters of co-regulated genes focused on transcription levels, whereas few focused on clusters regulated by the same transcription factors; Johnidis et al. discovered strange phenomena when they did. Origins The first models that tried to explain the clustering of genes focused, of course, on operons, because operons were discovered before eukaryotic gene clusters. 
In 1999 Lawrence proposed a model for the origin of operons. This selfish operon model suggests that individual genes were grouped together by vertical and horizontal transfer and were preserved as a single unit because that was beneficial for the genes, not necessarily for the organism. This model predicts that gene clusters should be conserved between species. This is not the case for many operons and gene clusters seen in eukaryotes. According to Eichler and Sankoff, the two main processes in eukaryotic chromosome evolution are (1) rearrangement of chromosomal segments and (2) localized duplication of genes. Clustering could be explained by reasoning that all genes in a cluster originated as tandem duplicates of a common ancestor. If all co-expressed genes in a cluster evolved from a common ancestral gene, they would be expected to be co-expressed, because they all have comparable promoters. However, gene clustering is a very common trait of genomes and it is not clear how this duplication model could explain all of the clustering. Furthermore, many genes that are present in clusters are not homologous. How did evolutionarily unrelated genes come into close proximity in the first place? Either there is a force that brings functionally related genes near to each other, or the genes came near by chance. Singer et al. proposed that genes came into close proximity by random recombination of genome segments; when functionally related genes ended up in close proximity to each other, this proximity was conserved. They determined all possible recombination sites between genes of human and mouse. After that, they compared the clustering of the mouse and human genomes and looked at whether recombination had occurred at the potential recombination sites. It turned out that recombination between genes of the same cluster was very rare. So, as soon as a functional cluster is formed, recombination within it is suppressed. On sex chromosomes, the number of clusters is very low in both human and mouse. The authors reasoned this was due to the low rate of chromosomal rearrangements of sex chromosomes. Open chromatin regions are active regions, and it is more likely that genes will be transferred to these regions; genes from organelle and virus genomes are inserted more often in these regions. In this way non-homologous genes can be pressed together in a small domain. It is possible that some regions in the genome are better suited for important genes. It is important for the cell that genes that are responsible for basal functions are protected from recombination. It has been observed in yeast and worms that essential genes tend to cluster in regions with a low recombination rate. It is also possible that genes simply came into close proximity by chance. Other models have been proposed, but none of them can explain all observed phenomena. It is clear that as soon as clusters are formed they are conserved by natural selection. However, a precise model of how genes came into close proximity is still lacking. The bulk of the present clusters must have formed relatively recently, because only seven clusters of functionally related genes are conserved between phyla. Some of these differences can be explained by the fact that gene expression is regulated very differently in different phyla. For example, in vertebrates and plants DNA methylation is used, whereas it is absent in yeast and flies. See also Chromatin DNA sequence Transcription factor Notes Gene expression Genomics
Ridge (biology)
Chemistry,Biology
3,630
5,054,772
https://en.wikipedia.org/wiki/Mitotic%20recombination
Mitotic recombination is a type of genetic recombination that may occur in somatic cells during their preparation for mitosis in both sexual and asexual organisms. In asexual organisms, the study of mitotic recombination is one way to understand genetic linkage because it is the only source of recombination within an individual. Additionally, mitotic recombination can result in the expression of recessive alleles in an otherwise heterozygous individual. This expression has important implications for the study of tumorigenesis and lethal recessive alleles. Mitotic homologous recombination occurs mainly between sister chromatids subsequent to replication (but prior to cell division). Inter-sister homologous recombination is ordinarily genetically silent. During mitosis the incidence of recombination between non-sister homologous chromatids is only about 1% of that between sister chromatids. Discovery The discovery of mitotic recombination came from the observation of twin spotting in Drosophila melanogaster. This twin spotting, or mosaic spotting, was observed in D. melanogaster as early as 1925, but it was only in 1936 that Curt Stern explained it as a result of mitotic recombination. Prior to Stern's work, it was hypothesized that twin spotting happened because certain genes had the ability to eliminate the chromosome on which they were located. Later experiments uncovered when mitotic recombination occurs in the cell cycle and the mechanisms behind recombination. Occurrence Mitotic recombination can happen at any locus but is observable in individuals that are heterozygous at a given locus. If a crossover event between non-sister chromatids affects that locus, then both homologous chromosomes will have one chromatid containing each genotype. The resulting phenotype of the daughter cells depends on how the chromosomes line up on the metaphase plate. If the chromatids containing different alleles line up on the same side of the plate, then the resulting daughter cells will appear heterozygous and be undetectable, despite the crossover event. However, if chromatids containing the same alleles line up on the same side, the daughter cells will be homozygous at that locus. This results in twin spotting, where one cell presents the homozygous recessive phenotype and the other cell has the homozygous wild type phenotype. If those daughter cells go on to replicate and divide, the twin spots will continue to grow and reflect the differential phenotype. Mitotic recombination takes place during interphase. It has been suggested that recombination takes place during G1, when the DNA is in its 2-strand phase, and is then replicated during DNA synthesis. It is also possible for the DNA break leading to mitotic recombination to happen during G1, but for the repair to happen after replication. Response to DNA damage In the budding yeast Saccharomyces cerevisiae, mutations in several genes needed for mitotic (and meiotic) recombination cause increased sensitivity to inactivation by radiation and/or genotoxic chemicals. For example, the gene rad52 is required for mitotic recombination as well as meiotic recombination. Rad52 mutant yeast cells have increased sensitivity to killing by X-rays, methyl methanesulfonate and the DNA crosslinking agent 8-methoxypsoralen-plus-UV light, suggesting that mitotic recombinational repair is required for removal of the various types of DNA damage caused by these agents. Mechanisms The mechanisms behind mitotic recombination are similar to those behind meiotic recombination.
These include sister chromatid exchange and mechanisms related to DNA double strand break repair by homologous recombination, such as single-strand annealing, synthesis-dependent strand annealing (SDSA), and gene conversion through a double Holliday junction intermediate or SDSA. In addition, non-homologous mitotic recombination is a possibility and can often be attributed to non-homologous end joining. Method There are several theories on how mitotic crossover occurs. In the simple crossover model, the two homologous chromosomes overlap on or near a common chromosomal fragile site (CFS). This leads to a double-strand break, which is then repaired using one of the two strands. This can lead to the two chromatids switching places. In another model, two overlapping sister chromatids form a double Holliday junction at a common repeat site and are later sheared in such a way that they switch places. In either model, the chromosomes are not guaranteed to trade evenly, or even to rejoin on opposite sides; thus most patterns of cleavage do not result in any crossover event. Uneven trading introduces many of the deleterious effects of mitotic crossover. Alternatively, a crossover can occur during DNA repair if, due to extensive damage, the homologous chromosome is chosen as the template over the sister chromatid. This leads to gene conversion, since one copy of the allele is copied across from the homologous chromosome and then synthesized into the breach on the damaged chromosome. The net effect of this would be one heterozygous chromosome and one homozygous chromosome. Advantages and disadvantages Mitotic crossover is known to occur in D. melanogaster, some asexually reproducing fungi and in normal human cells, where the event may allow normally recessive cancer-causing alleles to be expressed and thus predispose the cell in which it occurs to the development of cancer. Alternatively, a cell may become a homozygous mutant for a tumor-suppressing gene, leading to the same result. For example, Bloom's syndrome is caused by a mutation in RecQ helicase, which plays a role in DNA replication and repair. This mutation leads to high rates of mitotic recombination in mice, and this recombination rate is in turn responsible for causing tumor susceptibility in those mice. At the same time, mitotic recombination may be beneficial: it may play an important role in repairing double stranded breaks, and it may be beneficial to the organism if having homozygous dominant alleles is more functional than the heterozygous state. For use in experimentation with genomes in model organisms such as Drosophila melanogaster, mitotic recombination can be induced via X-ray and the FLP-FRT recombination system. References Griffiths et al. 1999. Modern Genetic Analysis. W. H. Freeman and Company. Cellular processes Modification of genetic information Molecular genetics
Mitotic recombination
Chemistry,Biology
1,429
6,962,132
https://en.wikipedia.org/wiki/Grotrian%20diagram
A Grotrian diagram, or term diagram, shows the allowed electronic transitions between the energy levels of atoms. They can be used for one-electron and multi-electron atoms. They take into account the specific selection rules related to changes in angular momentum of the electron. The diagrams are named after Walter Grotrian, who introduced them in his 1928 book Graphische Darstellung der Spektren von Atomen und Ionen mit ein, zwei und drei Valenzelektronen ("Graphical representation of the spectra of atoms and ions with one, two and three valence electrons"). See also Jablonski diagram (for molecules) References External links Hyperphysics: Atomic Energy Level Diagrams Volumes with Grotrian diagrams of most elements Atomic energy-level and Grotrian diagrams by Stanley Bashkin and John O. Stoner Jr. Volume I: Hydrogen - Phosphorus Volume I: Hydrogen - Phosphorus (Addenda) Volume III Volume IV Grotrian diagrams Electronic structure of atoms NIST Atomic Spectra Database Lines Form Grotrian Diagrams Diagrams Spectroscopy Atomic physics Photochemistry
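A Grotrian diagram draws only the transitions permitted by the selection rules mentioned above. As a small illustration, here is a sketch (a simplification assuming LS coupling and electric-dipole transitions; the term tuples and all names are invented for this example) that checks whether a line between two terms may be drawn as allowed:

```python
# Terms are modeled as (S, L, J, parity) tuples, parity being +1 or -1.
def dipole_allowed(lower, upper):
    S1, L1, J1, p1 = lower
    S2, L2, J2, p2 = upper
    if S1 != S2:                                    # Delta S = 0
        return False
    if abs(L1 - L2) > 1 or (L1 == 0 and L2 == 0):   # Delta L = 0, +/-1; not 0 -> 0
        return False
    if abs(J1 - J2) > 1 or (J1 == 0 and J2 == 0):   # Delta J = 0, +/-1; not 0 -> 0
        return False
    if p1 == p2:                                    # parity must change (Laporte rule)
        return False
    return True

# Example: the sodium D line 3s 2S_1/2 -> 3p 2P_3/2
ground  = (0.5, 0, 0.5, +1)   # 2S_1/2, even parity
excited = (0.5, 1, 1.5, -1)   # 2P_3/2, odd parity
print(dipole_allowed(ground, excited))   # True
```

Plotting the allowed pairs of terms against their energies would reproduce the skeleton of a Grotrian diagram.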
Grotrian diagram
Physics,Chemistry,Astronomy
225
22,122,235
https://en.wikipedia.org/wiki/Topology%20control
Topology control is a technique used in distributed computing to alter the underlying network (modeled as a graph) in order to reduce the cost of distributed algorithms run over the resulting graphs. It is a basic technique in distributed algorithms. For instance, a (minimum) spanning tree is used as a backbone to reduce the cost of broadcast from O(m) to O(n), where m and n are the number of edges and vertices in the graph, respectively. The term "topology control" is used mostly by the wireless ad hoc and sensor networks research community. The main aim of topology control in this domain is to save energy, reduce interference between nodes and extend the lifetime of the network. However, recently the term has also been gaining traction with regard to control of the network structure of electric power systems. Topology construction and maintenance Topology control algorithms have lately been divided into two subproblems: topology construction, in charge of the initial reduction, and topology maintenance, in charge of maintaining the reduced topology so that characteristics like connectivity and coverage are preserved. Topology construction is the first stage of a topology control protocol. Once the initial topology is deployed, especially when the location of the nodes is random, the administrator has no control over the design of the network; for example, some areas may be very dense, showing a high number of redundant nodes, which will increase the number of message collisions and will provide several copies of the same information from similarly located nodes. However, the administrator has control over some parameters of the network: transmission power of the nodes, state of the nodes (active or sleeping), role of the nodes (clusterhead, gateway, regular), etc. By modifying these parameters, the topology of the network can be changed. From the moment a topology is reduced and the network starts serving its purpose, the selected nodes start spending energy: the reduced topology starts losing its optimality as soon as full network activity evolves. After some time being active, some nodes will start to run out of energy. Especially in wireless sensor networks with multihopping, intensive packet forwarding causes nodes that are closer to the sink to spend higher amounts of energy than nodes that are farther away. Topology control therefore has to be executed periodically in order to preserve desired properties such as connectivity, coverage and density. Topology construction algorithms There are many ways to perform topology construction: Optimizing the node locations during the deployment phase Changing the transmission range of the nodes Turning off nodes from the network Creating a communication backbone Clustering Adding new nodes to the network to preserve connectivity (Federated Wireless sensor networks) Some examples of topology construction algorithms are: Tx range-based Geometry-based: Gabriel graph (GG), Relative neighborhood graph (RNG), Voronoi diagram Spanning Tree Based: LMST, iMST Direction Based: Yao graph and Nearest neighbor graph, Cone Based Topology Control (CBTC), Distributed RNG Neighbor based: KNeigh, XTC Routing based: COMPOW Hierarchical CDS-based: A3, EECDS, CDS-Rule K Cluster-based: Low Energy Adaptive Clustering Hierarchy (LEACH), HEED (a sketch of the geometry-based Gabriel graph rule is given below) Graphical examples Topology maintenance algorithms In the same manner as topology construction, there are many ways to perform topology maintenance: Global vs. Local Dynamic vs. Static vs. Hybrid Triggered by time, energy, density, random, etc.
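Before the maintenance examples, here is the promised sketch of the geometry-based Gabriel graph rule listed above (coordinates and names are illustrative; a real protocol would evaluate the rule locally from neighbor positions). Two nodes are linked exactly when no third node lies inside the circle whose diameter is the segment between them:

```python
from itertools import combinations

def gabriel_graph(points):
    """points: list of (x, y) positions; returns the set of Gabriel edges (i, j)."""
    def d2(a, b):  # squared Euclidean distance
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    edges = set()
    for i, j in combinations(range(len(points)), 2):
        mid = ((points[i][0] + points[j][0]) / 2, (points[i][1] + points[j][1]) / 2)
        r2 = d2(points[i], points[j]) / 4  # squared radius of the diametral circle
        if all(d2(points[k], mid) >= r2 for k in range(len(points)) if k not in (i, j)):
            edges.add((i, j))
    return edges

nodes = [(0, 0), (2, 0), (1, 1), (4, 3)]
print(gabriel_graph(nodes))
```

Because the Gabriel graph contains the Euclidean minimum spanning tree, pruning links this way preserves connectivity while discarding long, energy-hungry edges.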
Some examples of topology maintenance algorithms are: Global DGTRec (Dynamic Global Topology Recreation): Periodically, wake up all inactive nodes, reset the existing reduced topology in the network and apply a topology construction protocol. SGTRot (Static Global Topology Rotation): Initially, the topology construction protocol must create more than one reduced topology (ideally as disjoint as possible). Then, periodically, wake up all inactive nodes and change the current active reduced topology to the next, like in a Christmas tree. HGTRotRec (Hybrid Global Topology Rotation and Recreation): Works like SGTRot, but when the current active reduced topology detects a certain level of disconnection, resets the reduced topology and invokes the topology construction protocol to recreate that particular reduced topology. Local DL-DSR (Dynamic Local DSR-based TM): This protocol, based on the Dynamic Source Routing (DSR) routing algorithm, recreates the paths of disconnected nodes when a node fails. All of the above protocols are implemented in Atarraya, in two versions with different triggers: one by time, and the other by energy. In addition, Atarraya allows the pairing of all the topology construction and topology maintenance protocols in order to test the optimal maintenance policy for a particular construction protocol; it is important to mention that many papers on topology construction have not performed any study in this regard. Further reading Many books and papers have been written on the topic: Topology Control for Wireless Sensor Networks. ACM MobiCom 2003. Topology Control in Wireless Sensor Networks: with a companion simulation tool for teaching and research. Miguel Labrador and Pedro Wightman. Springer. 2009. Topology Control in Wireless Ad Hoc and Sensor Networks. Paolo Santi. Wiley. 2005. Protocols and Architectures for Wireless Sensor Networks. Holger Karl and Andreas Willig. Wiley-Interscience. 2007. Capacity-Optimized Topology Control for MANETs with Cooperative Communications. 2011. Robust Topology control for indoor wireless sensor networks. 2008. Simulation of topology control There are many networking simulation tools; however, there is one specifically designed for the testing, design and teaching of topology control algorithms: Atarraya. Atarraya is an event-driven simulator developed in Java that presents a new framework for designing and testing topology control algorithms. It is an open source application, distributed under the GNU V.3 license. It was developed by Pedro Wightman, a Ph.D. candidate at the University of South Florida, with the collaboration of Dr. Miguel Labrador. A paper with the detailed description of the simulator was presented at SIMUTools 2009. References Network topology Wireless sensor network
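As an illustration of the SGTRot protocol described above, here is a minimal sketch (the Node class and all names are invented for the example; a real protocol must also handle radio scheduling and connectivity checks):

```python
# Static Global Topology Rotation (SGTRot), sketched: precomputed reduced
# topologies are activated in turn, round-robin.

class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.awake = True

    def wake(self):
        self.awake = True

    def sleep(self):
        self.awake = False

class SGTRot:
    def __init__(self, reduced_topologies):
        # reduced_topologies: list of sets of node ids, ideally pairwise disjoint
        self.topologies = reduced_topologies
        self.current = 0

    def rotate(self, nodes):
        """Wake every node, then put to sleep all nodes outside the next topology."""
        active = self.topologies[self.current]
        for node in nodes:
            node.wake()
            if node.id not in active:
                node.sleep()
        self.current = (self.current + 1) % len(self.topologies)

nodes = [Node(i) for i in range(6)]
rotation = SGTRot([{0, 1, 2}, {3, 4, 5}])
rotation.rotate(nodes)                     # first reduced topology active
print([n.id for n in nodes if n.awake])    # [0, 1, 2]
```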
Topology control
Mathematics,Technology
1,233
76,366,904
https://en.wikipedia.org/wiki/Pileolaria%20%28fungus%29
Pileolaria is a genus of autoecious rust fungi. They are considered plant pathogens and preferentially infect members of the sumac family. Selected species There are about 20 species in Pileolaria. Pileolaria brevipes Pileolaria cotini-coggygyriae Pileolaria terebinthi References Pucciniales Fungal plant pathogens and diseases Basidiomycota genera
Pileolaria (fungus)
Biology
87
52,380,785
https://en.wikipedia.org/wiki/Transphosphorylation
Transphosphorylation is a chemical reaction in which a phosphate group or a phosphono group is transferred between a substrate and a receptor. There are various phosphate esters in the living body, including nucleic acids, and the phosphorylation reactions involved in their synthesis and interconversion are fundamental to biochemistry. In most cases, ATP serves as the donor of the phosphate group, and the enzyme that catalyzes these reactions is referred to as a kinase. References Chemical reactions
Transphosphorylation
Chemistry
103
21,265,986
https://en.wikipedia.org/wiki/Delafloxacin
Delafloxacin, sold under the brand name Baxdela among others, is a fluoroquinolone antibiotic developed by Melinta Therapeutics and used to treat acute bacterial skin and skin structure infections. Medical use Delafloxacin is indicated to treat adults with acute bacterial skin and skin structure infections (ABSSSI) caused by designated susceptible bacteria or adults with community-acquired bacterial pneumonia (CABP) caused by designated susceptible bacteria. Susceptible bacteria for ABSSSI are: Gram-positive organisms: Staphylococcus aureus (including methicillin-resistant [MRSA] and methicillin-susceptible [MSSA] isolates), Staphylococcus haemolyticus, Staphylococcus lugdunensis, Streptococcus agalactiae, Streptococcus anginosus group (including Streptococcus anginosus, Streptococcus intermedius, and Streptococcus constellatus), Streptococcus pyogenes, and Enterococcus faecalis Gram-negative organisms: Escherichia coli, Enterobacter cloacae, Klebsiella pneumoniae, and Pseudomonas aeruginosa. Susceptible bacteria for CABP are: Streptococcus pneumoniae, Staphylococcus aureus (methicillin-susceptible [MSSA] isolates only), Klebsiella pneumoniae, Escherichia coli, Pseudomonas aeruginosa, Haemophilus influenzae, Haemophilus parainfluenzae, Chlamydia pneumoniae, Legionella pneumophila, and Mycoplasma pneumoniae. It has not been tested in pregnant women. In the European Union, it is indicated for the treatment of acute bacterial skin and skin structure infections (ABSSSI) in adults when it is considered inappropriate to use other antibacterial agents that are commonly recommended for the initial treatment of these infections. Adverse effects Like other drugs in the fluoroquinolone class, delafloxacin carries a black box warning about the risk of tendinitis, tendon rupture, peripheral neuropathy, central nervous system effects, and exacerbation of myasthenia gravis. The label also warns against the risk of hypersensitivity reactions and Clostridioides difficile-associated diarrhea. Adverse effects occurring in more than 2% of clinical trial subjects included nausea, diarrhea, headache, elevated transaminases, and vomiting. Interactions Like other fluoroquinolones, delafloxacin chelates metal cations including aluminum, magnesium, iron, and zinc; using this drug together with antacids, sucralfate, some dietary supplements, or drugs buffered with any of these ions (such as buffered didanosine) will interfere with the available amount of delafloxacin. Pharmacology The half-life is around 8 hours at normal doses. Excretion is 65% through urine, mostly in unmetabolized form, and 28% via feces. Clearance is reduced in people with severe kidney disease. Delafloxacin is more active (lower MIC90) than other quinolones against Gram-positive bacteria such as methicillin-resistant Staphylococcus aureus (MRSA). In contrast to most approved fluoroquinolones, which are zwitterionic, delafloxacin has an anionic character, which results in a 10-fold increase in delafloxacin accumulation in both bacteria and cells at acidic pH. This property is believed to confer to delafloxacin an advantage for the eradication of Staphylococcus aureus in acidic environments, including intracellular infections and biofilms. Chemistry The chemical name is 1-deoxy-1 (methylamino)-D-glucitol, 1-(6-amino-3,5-difluoropyridin-2-yl)-8-chloro-6-fluoro-7-(3-hydroxyazetidin-1-yl)-4-oxo-1,4-dihydroquinoline-3-carboxylate (salt).
The injectable form of delafloxacin is sold as the meglumine salt of the active ingredient and its United States Adopted Name, delafloxacin meglumine, reflects that; the injection formulation also includes EDTA and sulfobutylether-β-cyclodextrin. The tablet is made of delafloxacin, citric acid anhydrous, crospovidone, magnesium stearate, microcrystalline cellulose, povidone, sodium bicarbonate, and sodium phosphate monobasic monohydrate. History Delafloxacin was known as ABT-492, RX-3341, and WQ-3034 while it was under development. Rib-X Pharmaceuticals acquired delafloxacin from Wakunaga Pharmaceutical in 2006. Rib-X was renamed to Melinta Therapeutics in 2013. It was developed and marketed by Melinta Therapeutics (formerly Rib-X Pharmaceuticals), which subsequently merged with Cempra. Key clinical trials for delafloxacin have been performed by Melinta regarding indications for skin and skin structure infections as well as complicated bacterial infections and uncomplicated gonorrhea. The trial on gonorrhea was terminated before data was released. Delafloxacin was approved by the FDA in June 2017, after it was found noninferior to vancomycin plus aztreonam in two trials on 1042 patients with acute bacterial skin and skin structure infection. New Drug Applications (NDA) for delafloxacin (Baxdela) 450 mg tablets and 300 mg injections were approved by the FDA in June 2017. The FDA obligated Melinta to conduct further studies as follows: a 5-year surveillance study to determine if resistance emerges, with the final report due in December 2022; and a study of the IV form in pregnant rats to determine distribution to the reproductive tract, due June 2018, with further studies required if there is significant distribution. Melinta merged with Cempra in August 2017. Melinta has entered into commercialization and distribution agreements with both Menarini Therapeutics (March 2017) and Eurofarma Laboratórios (January 2015) for international commercialization of delafloxacin. The agreement with Menarini allows them to commercialize and distribute in 68 countries, including Europe, China, and South Korea among others. A similar agreement with Eurofarma allows for commercialization in Brazil. References Azetidines Chloroarenes Fluoroquinolone antibiotics Pyridines Carboxylic acids
Delafloxacin
Chemistry
1,467
73,053,543
https://en.wikipedia.org/wiki/HD%20174474
HD 174474, also designated as HR 7095 or rarely 35 G. Telescopii, is a solitary white-hued star located in the southern constellation Telescopium. It has an apparent magnitude of 6.17, placing it near the limit for naked eye visibility. The object is located relatively close at a distance of 244 light years based on Gaia DR3 parallax measurements but is drifting closer with a heliocentric radial velocity of . At its current distance, HD 174474's brightness is diminished by 0.26 magnitudes due to interstellar dust. It has an absolute magnitude of +1.61. This is an ordinary A-type main-sequence star with a stellar classification of A2 V. It has double the mass of the Sun and 1.89 times the Sun's radius. It radiates 18.1 times the luminosity of the Sun from its photosphere at an effective temperature of . HD 174474 is slightly metal deficient with an iron abundance 22% below solar levels ([Fe/H] = −0.11). It is estimated to be 630 million years old based on stellar evolution models from David & Hillenbrand (2015). References A-type main-sequence stars 174474 092676 7095 CD-48 12769 Telescopium Telescopii, 35
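As a rough consistency check on the values quoted above (a back-of-envelope computation, not from the source; the small residual against the quoted +1.61 comes from rounding the distance and the adopted extinction), the standard distance-modulus relation gives

M = m + 5 - 5 log10(d_pc) - A ≈ 6.17 + 5 - 5 log10(74.8) - 0.26 ≈ +1.5,

using d = 244 ly ≈ 74.8 pc and A = 0.26 mag for the interstellar extinction.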
HD 174474
Astronomy
292
66,648,270
https://en.wikipedia.org/wiki/Jennifer%20E.%20Smith%20%28biologist%29
Jennifer Elaine Smith (Jenn Smith) is a behavioral ecologist and evolutionary biologist. She is an associate professor of biology at the University of Wisconsin–Eau Claire. Previously, she was an associate professor and chair of biology at Mills College, in Oakland, California, prior to its merger with Northeastern University. Her research focuses primarily on the social lives of mammals, based on insights gained from long-term studies of marked individuals and on comparative approaches. Early life and education Smith was born in the small coastal town of Cushing, Maine. She holds a B.A. in biology with a concentration in environmental science from Colby College and an M.S. in integrative biology from the University of Illinois at Urbana–Champaign. She went on to complete dual PhDs in zoology and the Ecology, Evolution, and Behavior (EEB) Program at Michigan State University. Her dissertation research with Kay E. Holekamp involved extensive fieldwork in Kenya and focused on the evolutionary and ecological forces shaping patterns of cooperation among spotted hyenas. Before joining the faculty at Mills College, she was an American Association of University Women postdoctoral fellow with Daniel T. Blumstein at the department of ecology and evolutionary biology, as well as in the Institute for Society and Genetics, at the University of California, Los Angeles. Work and academic contributions Smith is known for her contributions to our understanding of sociality in free-living mammals. Among her most prominent contributions are those focused on animal social networks, comparative social evolution, the fission–fusion society of, and coalition formation in, spotted hyenas, leadership in mammalian societies, explaining large-scale patterns of collective animal behavior, intergroup conflict, and intragroup coalitions across mammalian societies. Since 2013, she has managed her own Long-term Study on the Behavioral Ecology of the California ground squirrel at Briones Regional Park. This project on marked individual ground squirrels is revealing new insights into the nexus among behavioral type (i.e., personality traits), stress physiology, parasite loads, microbial diversity, and social networks in a changing world. More broadly, Smith is also interested in applying comparative approaches to understand patterns of (in)equality in nature. References External links Smith Behavioral Ecology Lab Evolutionary biologists Ethologists Living people Colby College alumni Michigan State University alumni Year of birth missing (living people) University of Illinois Urbana-Champaign alumni University of California, Los Angeles faculty University of Wisconsin–Eau Claire faculty Mills College faculty
Jennifer E. Smith (biologist)
Biology
492
19,645,922
https://en.wikipedia.org/wiki/Palierne%20equation
Palierne equation connects the dynamic modulus of emulsions with the dynamic modulus of the two phases, the size of the droplets and the interphase surface tension. The equation can also be used for suspensions of viscoelastic solid particles in viscoelastic fluids. The equation is named after French rheologist Jean-François Palierne, who proposed the equation in 1991. For dilute emulsions the Palierne equation reads, to first order in the volume fraction, G*(ω) = G*_m(ω) (1 + 5φH(ω)), where G*(ω) is the dynamic modulus of the emulsion, G*_m(ω) is the dynamic modulus of the continuous phase (matrix), φ is the volume fraction of the disperse phase and H(ω) is given as H(ω) = [4(α/R)(2G*_m + 5G*_d) + (G*_d − G*_m)(16G*_m + 19G*_d)] / [40(α/R)(G*_m + G*_d) + (2G*_d + 3G*_m)(16G*_m + 19G*_d)], where G*_d(ω) is the dynamic modulus of the disperse phase, α is the surface tension between the phases and R is the radius of the droplets. For the suspension of solid particles the interfacial term drops out and the value of H is given as H(ω) = (G*_d − G*_m) / (2G*_d + 3G*_m). The Palierne equation is usually extended for finite volume concentrations of the disperse phase as G*(ω) = G*_m(ω) (1 + 3φH(ω)) / (1 − 2φH(ω)). References Non-Newtonian fluids Colloidal chemistry Composite materials
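A minimal numerical sketch of the finite-concentration form written above (single frequency, SI units; the function and variable names are chosen here for illustration):

```python
def palierne_modulus(G_m, G_d, phi, alpha, R):
    """Complex modulus of the emulsion at one frequency.
    G_m, G_d: complex dynamic moduli of matrix and disperse phase [Pa]
    phi: volume fraction of the disperse phase
    alpha: interfacial tension [N/m]; R: droplet radius [m]
    """
    s = alpha / R
    H = ((4 * s * (2 * G_m + 5 * G_d) + (G_d - G_m) * (16 * G_m + 19 * G_d))
         / (40 * s * (G_m + G_d) + (2 * G_d + 3 * G_m) * (16 * G_m + 19 * G_d)))
    return G_m * (1 + 3 * phi * H) / (1 - 2 * phi * H)

# Sanity check: rigid inclusions (G_d >> G_m) at small phi should recover the
# Einstein result G ~ G_m * (1 + 2.5 * phi), since H -> 1/2 in that limit.
print(palierne_modulus(1.0 + 0.5j, 1e9, 0.01, 0.0, 1e-6))
```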
Palierne equation
Physics,Chemistry
201
5,178,038
https://en.wikipedia.org/wiki/Jean%20Bartik
Jean Bartik (née Betty Jean Jennings; December 27, 1924 – March 23, 2011) was an American computer programmer who was one of the original six programmers of the ENIAC computer. Bartik studied mathematics in school, then began work at the University of Pennsylvania, first manually calculating ballistics trajectories and then using ENIAC to do so. The other five ENIAC programmers were Betty Holberton, Ruth Teitelbaum, Kathleen Antonelli, Marlyn Meltzer, and Frances Spence. Bartik and her colleagues developed and codified many of the fundamentals of programming while working on the ENIAC, since it was the first computer of its kind. After her work on ENIAC, Bartik went on to work on BINAC and UNIVAC, and spent time at a variety of technical companies as a writer, manager, engineer and programmer. She spent her later years as a real estate agent and died in 2011 from congestive heart failure complications. Content-management framework Drupal's default theme, Bartik, is named in her honor. Early life and education Born Betty Jean Jennings in Gentry County, Missouri in 1924, she was the sixth of seven children. Her father, William Smith Jennings (1893–1971), was from Alanthus Grove, where he was a schoolteacher as well as a farmer. Her mother, Lula May Spainhower (1887–1988), was from Alanthus. Jennings had three older brothers, William (January 10, 1915), Robert (March 15, 1918), and Raymond (January 23, 1922); two older sisters, Emma (August 11, 1916) and Lulu (August 22, 1919); and one younger sister, Mable (December 15, 1928). In her childhood, she would ride on horseback to visit her grandmother, who bought the young girl a newspaper to read every day and became a role model for the rest of her life. She began her education at a local one-room school, and gained local attention for her softball skill. In order to attend high school, she lived with her older sister in the neighboring town, where the school was located, and then began to drive every day despite being only 14. She graduated from Stanberry High School in 1941, aged 16, as salutatorian of her class. She attended Northwest Missouri State Teachers College, now known as Northwest Missouri State University, majoring in mathematics with a minor in English, and graduated in 1945. Jennings was awarded the only mathematics degree in her class. Although she had originally intended to study journalism, she decided to change to mathematics because she had a bad relationship with her adviser. Later in her life, she earned a master's degree in English at the University of Pennsylvania in 1967 and was awarded an honorary doctorate degree from Northwest Missouri State University in 2002. Career In 1945, the United States Army was recruiting mathematicians from universities to aid in the war effort; despite a warning by her adviser that she would be "a cog in a wheel" with the Army, and encouragement to become a mathematics teacher instead, Bartik decided to become a human computer. Bartik's calculus professor encouraged her to take the job at the University of Pennsylvania because it had a differential analyzer. She applied to both IBM and the University of Pennsylvania at the age of 20. Although rejected by IBM, Jennings was hired by the University of Pennsylvania to work for Army Ordnance at Aberdeen Proving Ground, calculating ballistics trajectories by hand. While working there, Bartik met her future husband, William Bartik, who was an engineer working on a Pentagon project at the University of Pennsylvania.
They married in December 1946. When the Electronic Numeric Integrator and Computer (ENIAC) was developed for the purpose of calculating the ballistic trajectories that human computers like Bartik had been doing by hand, she applied to become a part of the project and was eventually selected to be one of its first programmers. Bartik was asked to set up problems for the ENIAC without being taught any techniques. Bartik and five other women (Betty Holberton, Marlyn Wescoff, Kathleen McNulty, Ruth Teitelbaum, and Frances Spence) were chosen to be the main programmers for the ENIAC. They were known as the "Sensational Six." Many other women who are often unrecognized contributed to the ENIAC during a period of wartime male labor shortage. Bartik, who became the co-lead programmer (with Betty Holberton), and the other four original programmers became extremely adept at running the ENIAC; with no manual to rely on, the group reviewed diagrams of the device, interviewed the engineers who had built it, and used this information to teach themselves the skills they needed. Initially, they were not allowed to see the ENIAC's hardware at all since it was still classified and they had not received security clearance; they had to learn how to program the machine solely through studying schematic diagrams. The six-woman team was also not initially given space to work together, so they found places to work where they could, in abandoned classrooms and fraternity houses. While the six women worked on ENIAC, they developed subroutines, nesting, and other fundamental programming techniques, and arguably invented the discipline of programming digital computers. Bartik and the other ENIAC female programmers learned to physically modify the machine, moving switches and rerouting cables, in order to program it. In addition to performing the original ballistic trajectories they were hired to compute, the six female programmers soon became operators on the Los Alamos nuclear calculations, and generally expanded the programming repertoire of the machine. Bartik's programming partner on the important trajectory program for the military that would prove that the ENIAC worked to specification was Betty Holberton, known at the time as Betty Snyder. Bartik and Holberton's program was chosen to introduce the ENIAC to the public and the larger scientific community. That demonstration occurred on February 15, 1946, and was a tremendous success: the ENIAC proved that it operated faster than the Mark I, a well known electromechanical machine at Harvard, and also showed that work that would take a "human computer" 40 hours to complete could be done in 20 seconds. However, most of the congratulations on the demonstration's turnout were given to its engineers, John Mauchly and John Eckert. Following the demonstration, in March 1946, she received a front-page feature in the Gentry County-based Stanberry Headlight, where it was written that, "[t]o acquaintances here of Miss Jennings, it is no great surprise to know that she is holding such an important position", due to her academic achievements. Bartik was later asked to form and lead a group of programmers to convert the ENIAC into a stored program computer, working closely with John von Neumann, Dick Clippinger, and Adele Goldstine. Bartik converted the ENIAC into a stored program computer by March 1948.
As head of this process, Bartik was charged with the conversion that turned the ENIAC into a rudimentary stored program computer to assist with Clippinger's wind tunnel programs, allowing the ENIAC to operate more quickly, efficiently, and accurately. Authors Thomas Haigh and Mark Priestley later discovered letters between Bartik and Adele Goldstine from the time of the project, and found that much of the 60-order code was in Bartik's handwriting. After the end of the war, Bartik went on to work with the ENIAC designers John Eckert and John Mauchly, and helped them develop the BINAC and UNIVAC I computers. BINAC was the first computer to use magnetic tape instead of punch cards to store data and the first computer to utilize the twin unit concept. BINAC was purchased by Northrop Aircraft to guide the Snark missile, but the BINAC proved to be too large for their purposes. However, according to a Northrop Aircraft programmer, claims that the BINAC did not work once it was moved to Northrop Aircraft were erroneous, and the BINAC was working well into the mid-1950s. Besides BINAC, Bartik's more important work involved designing the UNIVAC's logic circuits, among other UNIVAC programming and design tasks. Bartik also co-programmed, with her life-long friend Betty Holberton, the first generative programming system (SORT/MERGE) for a computer. Recalling her time working with Eckert and Mauchly on these projects, she described their close group of computer engineers as a "technical Camelot". In the early 1950s, once the Eckert-Mauchly Corporation was sold to Remington Rand, Bartik went on to help train the staff of the first six UNIVACs sold on how to program and use the machine, including the programmers at the United States Census Bureau (the first UNIVAC sold) and the Atomic Energy Commission. Later, Bartik moved to Philadelphia when her husband, William (Bill) Bartik, took a job with Remington Rand. Due to a company policy at the time about husbands and wives working together, Jean was asked to resign from the company. Between 1951 and 1954, prior to her first child's birth, Jean did mostly freelance programming assignments for John Mauchly and was a helpmate to her husband. Once her son was born, Jean walked away from her career in computing to concentrate on raising a family, during which time she had two other children with her husband. It was sometime during this 1950s period that Bartik began going by the name "Jean" rather than her birth first name "Betty", which is what she had been known as during her ENIAC, UNIVAC and Remington-Rand years. Even though Bartik played an integral part in developing ENIAC, her work at the University of Pennsylvania and on the ENIAC remained obscure until her pioneering work was documented by Kathy Kleiman and the ENIAC Programmers Project. In 1986, Kleiman first met and identified the women who worked with the ENIAC. Kleiman worked with PBS producer David Roland to record their oral histories and with documentary producers Jon Palfreman and Kate McMahon to produce the award-winning documentary The Computers (premiere 2014). The women's work was also popularized by columnist Tom Petzinger in articles for the Wall Street Journal on Bartik and Holberton in 1996. Later life After getting her master's degree from the University of Pennsylvania in 1967 and making the decision to divorce her husband, Bartik joined the Auerbach Corporation, writing and editing technical reports on minicomputers.
Bartik remained with Auerbach for eight years, then moved among positions with a variety of other companies for the rest of her career as a manager, writer, and engineer. Jean Bartik and William Bartik divorced by 1968. Bartik ultimately retired from the computing industry in 1986 when her final employer, Data Decisions (a publication of Ziff-Davis), was sold; Bartik spent the following 25 years as a real estate agent. Bartik died from congestive heart failure in a Poughkeepsie, New York nursing home on March 23, 2011. She was 86. Legacy Starting in 1996, once the importance of their role in the development of computing was re-discovered, Bartik, along with Betty Holberton and Bartik's other friend of over 60 years, Kathleen Antonelli (ENIAC programmer and wife of ENIAC co-inventor John Mauchly), finally began to receive the acknowledgement and honors for their pioneering work in the early field of computing. Bartik and Antonelli became invited speakers both at home and abroad to share their experiences working with the ENIAC, BINAC and UNIVAC. Bartik especially went on to receive many honors and awards for her pioneering role programming the ENIAC, BINAC and UNIVAC, the latter of which helped to launch the commercial computer industry, and for turning the ENIAC into the world's first stored program computer. In 2010, a documentary, Top Secret Rosies: The Female "Computers" of WWII, was released. The film centered on in-depth interviews with three of the six female programmers, focusing on the commendable patriotic contributions they made during World War II. The ENIAC was responsible for calculating bullet trajectories during the war. The ENIAC team is also the subject of the 2013 short documentary film The Computers. This documentary, created by Kathy Kleiman and the ENIAC Programmers Project, combines actual footage of the ENIAC team from the 1940s with interviews with the female team members as they reflect on their time working together on the ENIAC. The Computers is the first part of a three-part documentary series, titled Great Unsung Women of Computing: The Computers, The Coders, and The Future Makers. Bartik wrote her autobiography, Pioneer Programmer: Jean Jennings Bartik and the Computer that Changed the World, prior to her death in 2011 with the help of long-time colleagues Dr. Jon T. Rickman and Kim D. Todd. The autobiography was published in 2013 by Truman State Press to positive reviews. One of the best pieces of advice Bartik ever received was: "Don't ever let anyone tell you that you can't do something because they think you can't. You can do anything, achieve anything, if you think you can and you educate yourself to succeed." Encouraging girls and women to follow their dreams, she said, "If my life has proved anything, it is that women (and girls) should never be afraid to take risks and try new things." The Jean Jennings Bartik Computing Museum at Northwest Missouri State University in Maryville, Missouri is dedicated to the history of computing and Bartik's career. The default theme of the content-management framework Drupal was named Bartik in her honor for over a decade. Awards and honors Inductee, Women in Technology International Hall of Fame (1997).
Fellow, Computer History Museum (2008) IEEE Computer Pioneer Award, IEEE Computer Society (2008) Korenman Award from the Multinational Center for Development of Women in Technology (2009) See also Adele Goldstine Betty Holberton Frances Spence Ruth Teitelbaum Marlyn Wescoff Kathleen Antonelli List of pioneers in computer science Timeline of women in science References External links ENIAC Programmers documentary Oral history from Bartik at the UNIVAC conference, Charles Babbage Institute Jean Jennings Bartik Computing Museum at NWMSU Bartik receives the Computer Pioneer Award Oral history given by Bartik to the Computer History Museum in 2008 1924 births 2011 deaths People from Gentry County, Missouri American computer programmers University of Pennsylvania School of Arts and Sciences alumni Northwest Missouri State University alumni American women computer scientists American computer scientists Human computers 20th-century American women scientists American real estate brokers Mathematicians from Missouri Scientists from Missouri 21st-century American scientists 21st-century American women scientists 20th-century American scientists
Jean Bartik
Technology
3,090
14,644,287
https://en.wikipedia.org/wiki/Pfaffian%20function
In mathematics, Pfaffian functions are a certain class of functions whose derivative can be written in terms of the original function. They were originally introduced by Askold Khovanskii in the 1970s, but are named after German mathematician Johann Pfaff. Basic definition Some functions, when differentiated, give a result which can be written in terms of the original function. Perhaps the simplest example is the exponential function, f(x) = e^x. If we differentiate this function we get e^x again, that is f′(x) = f(x). Another example of a function like this is the reciprocal function, g(x) = 1/x. If we differentiate this function we will see that g′(x) = −g(x)^2. Other functions may not have the above property, but their derivative may be written in terms of functions like those above. For example, if we take the function h(x) = e^x log x then we see h′(x) = h(x) + f(x)g(x). Functions like these form the links in a so-called Pfaffian chain. Such a chain is a sequence of functions, say f_1, f_2, f_3, etc., with the property that if we differentiate any of the functions in this chain then the result can be written in terms of the function itself and all the functions preceding it in the chain (specifically as a polynomial in those functions and the variables involved). So with the functions above we have that f, g, h is a Pfaffian chain. A Pfaffian function is then just a polynomial in the functions appearing in a Pfaffian chain and the function argument. So with the Pfaffian chain just mentioned, functions such as F(x) = x^3 f(x)^2 − 2g(x)h(x) are Pfaffian. Rigorous definition Let U be an open domain in R^n. A Pfaffian chain of order r ≥ 0 and degree α ≥ 1 in U is a sequence of real analytic functions f_1, ..., f_r in U satisfying the differential equations ∂f_i/∂x_j = P_{i,j}(x_1, ..., x_n, f_1(x), ..., f_i(x)) for i = 1, ..., r and j = 1, ..., n, where the P_{i,j} ∈ R[x_1, ..., x_n, y_1, ..., y_i] are polynomials of degree ≤ α. A function f on U is called a Pfaffian function of order r and degree (α, β) if f(x) = P(x_1, ..., x_n, f_1(x), ..., f_r(x)), where P ∈ R[x_1, ..., x_n, y_1, ..., y_r] is a polynomial of degree at most β ≥ 1. The numbers r, α, and β are collectively known as the format of the Pfaffian function, and give a useful measure of its complexity. Examples The most trivial examples of Pfaffian functions are the polynomial functions. Such a function will be a polynomial in a Pfaffian chain of order r = 0, that is the chain with no functions. Such a function will have α = 0 and β equal to the degree of the polynomial. Perhaps the simplest nontrivial Pfaffian function is f(x) = e^x. This is Pfaffian with order r = 1 and α = β = 1 due to the differential equation f′ = f. Recursively, one may define f_1(x) = exp(x) and f_{m+1}(x) = exp(f_m(x)) for 1 ≤ m < r. Then f_m′ = f_1 f_2 ··· f_m. So this is a Pfaffian chain of order r and degree α = r. All of the algebraic functions are Pfaffian on suitable domains, as are the hyperbolic functions. The trigonometric functions on bounded intervals are Pfaffian, but they must be formed indirectly. For example, the function cos(x) is a polynomial in the Pfaffian chain tan(x/2), cos^2(x/2) on the interval (−π, π). In fact all the elementary functions and Liouvillian functions are Pfaffian.
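As a worked verification of the last example (a routine check using only the chain rule and the definitions above): with f_1(x) = tan(x/2) and f_2(x) = cos^2(x/2) on (−π, π),

f_1′ = (1/2)(1 + f_1^2),    f_2′ = −f_1 f_2,

so each derivative is a polynomial in the chain functions up to that point, making (f_1, f_2) a Pfaffian chain, and cos x = 2 cos^2(x/2) − 1 = 2f_2 − 1 is indeed a polynomial in the chain, hence Pfaffian.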
In model theory Consider the structure R = (R, +, −, ·, <, 0, 1), the ordered field of real numbers. In the 1960s Andrei Gabrielov proved that the structure obtained by starting with R and adding a function symbol for every analytic function restricted to the unit box [0, 1]^m is model complete. That is, any set definable in this structure R_an is just the projection of some higher-dimensional set defined by identities and inequalities involving these restricted analytic functions. In the 1990s, Alex Wilkie showed that one has the same result if instead of adding every restricted analytic function, one just adds the unrestricted exponential function to R to get the ordered real field with exponentiation, R_exp, a result known as Wilkie's theorem. Wilkie also tackled the question of which finite sets of analytic functions could be added to R to get a model-completeness result. It turned out that adding any Pfaffian chain restricted to the box [0, 1]^m would give the same result. In particular one may add all Pfaffian functions to R to get the structure R_Pfaff, as a variant of Gabrielov's result. The result on exponentiation is not a special case of this result (even though exp is a Pfaffian chain by itself), as it applies to the unrestricted exponential function. This result of Wilkie's proved that the structure R_Pfaff is an o-minimal structure. Noetherian functions The equations above that define a Pfaffian chain are said to satisfy a triangular condition, since the derivative of each successive function in the chain is a polynomial in one extra variable. Thus if they are written out in turn a triangular shape appears: f_1′ = P_1(x, f_1), f_2′ = P_2(x, f_1, f_2), f_3′ = P_3(x, f_1, f_2, f_3), and so on. If this triangularity condition is relaxed so that the derivative of each function in the chain is a polynomial in all the other functions in the chain, then the chain of functions is known as a Noetherian chain, and a function constructed as a polynomial in this chain is called a Noetherian function. So, for example, a Noetherian chain of order three is composed of three functions f_1, f_2, f_3 satisfying the equations f_1′ = P_1(x, f_1, f_2, f_3), f_2′ = P_2(x, f_1, f_2, f_3), f_3′ = P_3(x, f_1, f_2, f_3). The name stems from the fact that the ring generated by the functions in such a chain is Noetherian. Any Pfaffian chain is also a Noetherian chain (the extra variables in each polynomial are simply redundant in this case), but not every Noetherian chain is Pfaffian; for example, if we take f_1(x) = sin x and f_2(x) = cos x then we have the equations f_1′ = f_2 and f_2′ = −f_1, and these hold for all real numbers x, so f_1, f_2 is a Noetherian chain on all of R. But there is no polynomial P(x, y) such that the derivative of sin x can be written as P(x, sin x), and so this chain is not Pfaffian. Notes References Functions and mappings Types of functions
Pfaffian function
Mathematics
1,442
11,268,328
https://en.wikipedia.org/wiki/Hay%20Swamp
Hay Swamp (Hay Swamp Management Area) is a provincially significant wetland complex, 1839 hectares (4544 acres) in size, located in parts of the central land areas of the municipalities of Bluewater and South Huron, in southwestern Ontario, Canada. Approximately in length and in width at its widest point, it consists of 15 extensively forested individual wetlands, situated on either side of sections of both the upper drainage of the Ausable River and its tributary, Black Creek. Hay Swamp is situated at the northern limit of the Carolinian Biotic Province and is categorized as consisting of 98% swamp and 2% marshland. Apart from the Ausable and the Black, its primary source of water is considered to be the local Wyoming Moraine aquifer. Hay Swamp is an important regional habitat for wildlife populations including white-tailed deer, great blue heron, ducks and geese, as well as a significant beaver presence. The swamp is also home to several plant species at risk, including green dragon and Riddell's goldenrod. Endangered fish and mussel species present in Hay Swamp include eastern sand darter, greenside darter, northern riffleshell, snuffbox, wavy-rayed lampmussel, rainbow mussel and kidneyshell. Historic logging operations, as well as continuing intensive drainage practices to obtain access to the rich underlying organic soils of the swamp, have affected its overall size and health as a natural habitat. Since the Ausable River only partially drains the swamp on an ongoing basis, Hay Swamp acts as a natural water 'storage basin', guarding against large-scale flooding in other areas downstream and helping to maintain and moderate baseflow conditions during the summer months. By doing so it both improves the overall water quality in the Ausable system and aids local fish, bird and wildlife populations during the times of the year when water sources available for local wildlife are at a premium. The swamp contains several sites of abandoned 19th and early 20th century farms, including the location of the former community of Sodom. Hay Swamp (Hay Swamp Management Area) is administered by the Ausable Bayfield Conservation Authority. External links Ausable Bayfield Conservation Authority Ontario Ministry of Natural Resources Description Page Ontario Ministry of Natural Resources page for Hay Swamp Earth Sciences ANSI Biotic Health "Report Card" for Hay Swamp and Vicinity (pdf doc.) Carolinian Canada Protected areas of Huron County, Ontario Wetlands of Ontario Aquatic ecology
Hay Swamp
Biology
493
30,744,059
https://en.wikipedia.org/wiki/Advances%20in%20Difference%20Equations
Advances in Difference Equations is a peer-reviewed mathematics journal covering research on difference equations, published by Springer Open. The journal was established in 2004 and publishes articles on the theory, methodology, and application of difference and differential equations. Originally published by Hindawi Publishing Corporation, the journal was acquired by Springer Science+Business Media in early 2011. The editors-in-chief are Ravi Agarwal, Martin Bohner, and Elena Braverman. Abstracting and indexing The journal is abstracted and indexed by the Science Citation Index Expanded, Current Contents/Physical, Chemical & Earth Sciences, and Zentralblatt MATH. According to the Journal Citation Reports, the journal has a 2021 impact factor of 2.803. From July 1, the journal has been transitioning to a new title that opens its scope to broader developments in the theory and applications of models. Under the new title, Advances in Continuous and Discrete Models: Theory and Modern Applications, the journal will cover developments in machine learning, data driven modeling, differential equations, numerical analysis, scientific computing, control, optimization, and computing. References External links Algebra journals Academic journals established in 2004 English-language journals Springer Science+Business Media academic journals Open access journals
Advances in Difference Equations
Mathematics
247
43,191,756
https://en.wikipedia.org/wiki/Thermal%20tails
Thermal tails are an effect found in amplifiers, typically in op-amps, emitter-followers, and differential pairs. The effect is a slow drift of the amplifier's output away from its ideal value over time. The cause is a mismatch in temperature between the different transistors on the substrate. Differential pairs have poor recovery from temperature mismatch. The extent of the problem can be determined from the thermal conductivity of the substrate: the lower it is, the more of an issue this will be. Thermal tails can also be caused by non-shallow trench isolation. When the power dissipation of a BJT changes, its temperature changes; this causes a change in Vbe, thus affecting the output. Thermal tails generally start on the output transistor, as it is the one delivering the most power. A thermal tail can be quantified either by loading the output stage of the amplifier or by conducting an AC frequency plot and looking for a rolloff not attributable to parasitic components. References External links http://www.analog.com/library/analogdialogue/archives/29-2/qanda.html Electronic amplifiers
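For a sense of scale, here is a back-of-envelope estimate using the common textbook figure of about −2 mV/°C for the temperature coefficient of Vbe (an assumption, not a number from this article): a transient 0.5 °C temperature mismatch between the two halves of a differential pair shifts the input offset by roughly

ΔVos ≈ (2 mV/°C) × (0.5 °C) = 1 mV,

which the circuit's gain then scales up at the output, and which decays only as the die temperature re-equalizes; this slow settling is the "tail" observed after a step in output loading.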
Thermal tails
Technology
242
16,482,142
https://en.wikipedia.org/wiki/Microcarrier
A microcarrier is a support matrix that allows for the growth of adherent cells in bioreactors. Instead of on a flat surface, cells are cultured on the surface of spherical microcarriers, so that each particle carries several hundred cells and expansion capacity can therefore be multiplied several times over. This provides a straightforward way to scale up culture systems for industrial production of cell- or protein-based therapies, or for research purposes. These solid or porous spherical matrices range between 100 and 300 μm in diameter, which allows sufficient surface area while retaining enough cell adhesion and support, and their density is slightly above that of water (1 g/ml) so that they remain in suspension in a stirred tank. They can be composed of either synthetic materials such as acrylamide or natural materials such as gelatin. The advantages of microcarrier technology in the biotech industry include (a) ease of scale-up, (b) ability to precisely control cell growth conditions in sophisticated, computer-controlled bioreactors, (c) an overall reduction in the floor space and incubator volume required for a given-sized manufacturing operation, (d) a drastic reduction in technician labor, and (e) a more natural environment for cell culture that promotes differentiation. Microcarrier composition Synthetic and natural microcarriers There are several types of microcarriers that can be used, the selection of which is crucial for optimal performance in the application. Early in microcarrier development history, synthetic materials were overwhelmingly used, as they allowed for easy control of mechanical properties and reproducible results for the evaluation of their performance. These materials include DEAE-dextran, glass, polystyrene plastic, and acrylamide. In 1967, microcarrier development began when van Wezel found that the material could support the growth of anchorage-dependent cells; he used diethylaminoethyl–Sephadex microcarriers. However, synthetic polymers prevent sufficient cell interactions with the environment and stunt cell growth. Cells may not differentiate properly without feedback from their environment, and attachment levels would be low. Therefore, the second generation of microcarrier development involved the use of natural polymers such as gelatin, collagen, chitin and its derivatives, and cellulose. Not only are these materials easily obtained, but the natural materials provide attachment sites for cells and a similar microenvironment that provides the cell signaling pathways necessary for their proper differentiation. Furthermore, as these are biocompatible, the resulting suspension can be used for delivery of cell therapies in vivo. Solid and porous microcarriers Although liquid microcarriers have been developed, a large majority of commercially available microcarriers are solid particles, synthesized through suspension polymerization. However, cells grown on solid microcarriers risk damage from external forces and collisions with other particles and the tank. Therefore, extra precaution must be taken in determining the stir speed and mechanism, so that the resulting fluid dynamic forces are not strong enough to adversely affect the culture. The development of porous microcarriers greatly expanded the capabilities of this technology, as it further increased the number of cells that the material can hold, but more importantly, it shielded those within the particle from external forces.
These include drag and frictional forces of the suspension fluid, pressure gradients, and shear stresses. The 1980s saw a wave of microcarrier development following the breakthrough of porous particles. Surface modifications Microcarriers of the same material can differ in their porosity, specific gravity, optical properties, presence of animal components, and surface chemistries. Surface chemistries can include extracellular matrix proteins, functional groups, recombinant proteins, peptides, and positively or negatively charged molecules, added through conjugation, co-polymerization, plasma treatment or grafting. These may serve to provide higher attachment levels of cells to the particles, to provide controlled release for isolation, or to make the particles more thermally and physically resistant, among other purposes. Several types of microcarriers are available commercially, including alginate-based (GEM, Global Cell Solutions), dextran-based (Cytodex, GE Healthcare), collagen-based (Cultispher, Percell), and polystyrene-based (SoloHill Engineering) microcarriers. Advantages over traditional cell culture Expansion capacity A prominent advantage of microcarrier suspensions for the culture of cells over traditional two-dimensional plates is the capacity to hold more cells in smaller volumes. A hallmark of regular cell-culture lab protocol is continual passaging, as the cells reach confluence on plates fairly quickly, a bottleneck in biologics production. Multilayer vessels, stacked plates, hollow fibers, and packed-bed reactors were other technologies developed to combat this capacity limit of plate cell culture. Although they were an improvement, cell numbers produced through these methods still did not reach the threshold for clinical applications. Microcarrier cell culture, however, was the breakthrough required for cell culture to reach industrial and clinical significance. Studies have shown that microcarrier suspensions, compared to multi-layer vessel culture, improve cell yield 80-fold at only ten percent of the Good Manufacturing Practice space and only sixty percent of the original cost. Without the need for continual passaging, there is less risk of bacterial contamination, and labor costs are minimized as well. Homogeneity Two-dimensional culture also suffers from poor diffusivity of nutrients and gases, requiring added media and supplements to be distributed evenly by hand, and may yield irreproducible data. Microcarrier cell suspensions in stirred-tank bioreactors allow an even distribution through homogeneous stirring. Parameters such as pH, oxygen pressure, and media supplement concentrations can be continually monitored within a bioreactor, as opposed to manually testing small samples from plates. However, high stir speeds can cause damaging collisions between particles and against the reactor, while too low a speed can inhibit cell growth by causing an accumulation of particles in a "dead zone" and preventing an even distribution of essential nutrients. Therefore, minimum and maximum velocity gradients must be calculated so as to keep the suspension homogeneous but also sheltered from unnecessary forces. Often the most efficient mechanism for this is an axial stirrer within the bioreactor, which allows efficient mixing at minimal stir speeds. The homogeneous nature of well-functioning bioreactors also allows simple sampling and monitoring procedures, compared to two-dimensional culture, which often involves tedious sampling procedures.
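As a rough, illustrative sketch of the scale-up and agitation arithmetic discussed above (all parameter values below are hypothetical assumptions chosen for the example, not vendor or literature data):

# Back-of-the-envelope checks for a stirred microcarrier culture.
# All numbers are illustrative assumptions, not vendor data.
import math

# --- Growth area: beads versus flasks -------------------------------
bead_diameter_m = 170e-6        # 170 um, within the 100-300 um range above
bead_density_kg_m3 = 1030.0     # slightly above water, as noted above
bead_load_g_per_L = 10.0        # assumed bead loading
culture_volume_L = 1.0

bead_volume_m3 = (math.pi / 6.0) * bead_diameter_m ** 3   # sphere volume
bead_area_m2 = math.pi * bead_diameter_m ** 2             # sphere surface
n_beads = (bead_load_g_per_L * culture_volume_L / 1000.0) / (
    bead_density_kg_m3 * bead_volume_m3)
total_area_cm2 = n_beads * bead_area_m2 * 1e4
print(f"{n_beads:.2e} beads -> {total_area_cm2:.0f} cm^2 of growth area per litre")
print(f"roughly {total_area_cm2 / 175.0:.0f} T-175 flasks replaced per litre")

# --- Shear check: Kolmogorov eddy size versus bead size -------------
# Common rule of thumb: keep the smallest turbulent eddies larger than
# the beads so local shear does not strip cells off the carriers.
specific_power_W_kg = 1e-3      # assumed gentle specific power input
kinematic_visc_m2_s = 1e-6      # water at ~20 C
eddy_m = (kinematic_visc_m2_s ** 3 / specific_power_W_kg) ** 0.25
verdict = "acceptable" if eddy_m > bead_diameter_m else "too harsh"
print(f"Kolmogorov eddy ~{eddy_m * 1e6:.0f} um vs bead {bead_diameter_m * 1e6:.0f} um -> {verdict}")

With these invented numbers, ten grams of beads per litre supply on the order of twenty flasks' worth of surface area, and the chosen power input keeps the smallest eddies just above the bead diameter, illustrating why the stir speed must be bounded on both sides.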
Physiological microenvironment Furthermore, the three-dimensional, high-density suspension environment promotes natural cell morphology and differentiation through mechanical stimulation. Two-dimensional plate culture, on the other hand, tends to de-differentiate cells over several passages, and therefore the total passage number must be limited. Industrial translation Microcarrier suspensions are also easily scaled up, through larger concentrations of microparticles in larger stirred-tank reactors, while the laboratory space used for culture can still be kept to a minimum. However, scale-up of the microcarrier platform also entails certain challenges in the downstream production process, including a reworking of the cell detachment and isolation processes. Larger volumes of suspension liquid must be removed from larger bioreactor vats, and therefore more equipment must be purchased to handle tens to hundreds of liters of solution instead of standard milliliter volumes. Biocompatibility Microcarriers are being investigated as vehicles to deliver cells for targeted tissue engineering. Hepatocytes, chondrocytes, fibroblasts and more have been successfully delivered using biocompatible microcarriers to in vivo targets for the repair of damaged tissues. Microcarriers can also be used to deliver small molecules and proteins for the same purpose. Application A liquid-based assembly method was developed by P. Chen et al. for assembling cell-seeded microcarriers into diverse structures. Neuron-seeded microcarriers were assembled to form 3D neural networks with controlled global shape. This method is potentially useful for tissue engineering and neuroscience. External links References Biotechnology
Microcarrier
Biology
1,666
25,429,544
https://en.wikipedia.org/wiki/Sleep%20onset
Sleep onset is the transition from wakefulness into sleep. Sleep onset usually transitions into non-rapid eye movement sleep (NREM sleep), but under certain circumstances (e.g. narcolepsy) it is possible to transition from wakefulness directly into rapid eye movement sleep (REM sleep). History During the 1920s an obscure disorder that caused encephalitis and attacked the part of the brain that regulates sleep spread through Europe and North America. Although the virus that caused this disorder was never identified, the psychiatrist and neurologist Constantin von Economo decided to study this disease and identified a key component of sleep–wake regulation. He identified the pathways that regulate wakefulness and sleep onset by studying the parts of the brain that were affected by the disease and the consequences this had on the circadian rhythm. He stated that the pathways that regulate sleep onset are located between the brain stem and the basal forebrain. His discoveries were not appreciated until the last two decades of the 20th century, when the sleep pathways were found to reside in exactly the place that Constantin von Economo had stated. Neural circuit Sleep electrophysiological measurements can be made by attaching electrodes to the scalp to measure the electroencephalogram (EEG) and to the chin to monitor muscle activity, recorded as the electromyogram (EMG). Electrodes attached around the eyes monitor eye movements, recorded as the electro-oculogram (EOG). Pathways Von Economo, in his studies, noticed that lesions in the connection between the midbrain and the diencephalon caused prolonged sleepiness, and he therefore proposed the idea of an ascending arousal system. In the decades since, the major ascending pathways have been mapped, along with their constituent neurons and respective neurotransmitters. The pathway divides into two branches: one that ascends to the thalamus and activates the thalamic relay neurons, and another that activates neurons in the lateral part of the hypothalamus and the basal forebrain, and throughout the cerebral cortex. This is the ascending reticular activating system (cf. reticular formation). The cell group involved in the first branch is an acetylcholine-producing cell group comprising the pedunculopontine and laterodorsal tegmental nuclei. These neurons play a crucial role in relaying information between the thalamus and the cerebral cortex. They show high activation during wakefulness and REM sleep and low activation during NREM sleep. The second branch originates from monoaminergic neurons. These neurons are located in the locus coeruleus, the dorsal and median raphe nuclei, the ventral periaqueductal grey matter, and the tuberomammillary nucleus. Each group produces a different neurotransmitter: the neurons of the locus coeruleus produce noradrenaline, while those of the dorsal and median raphe nuclei, the ventral periaqueductal grey matter, and the tuberomammillary nucleus produce serotonin, dopamine and histamine, respectively. They then project onto hypothalamic peptidergic neurons, which contain melanin-concentrating hormone or orexin, and onto basal forebrain neurons, which contain GABA and acetylcholine. These neurons in turn project onto the cerebral cortex. It has also been discovered that lesions to this part of the brain cause prolonged sleep or may produce coma.
Lesions Some light was thrown on the mechanisms of sleep onset by the discovery that lesions in the preoptic area and anterior hypothalamus lead to insomnia, while those in the posterior hypothalamus lead to sleepiness. Further research has shown that the hypothalamic region called the ventrolateral preoptic nucleus produces the inhibitory neurotransmitter GABA, which inhibits the arousal system during sleep onset. Direct mechanism Sleep onset is induced by sleep-promoting neurons located in the ventrolateral preoptic nucleus (VLPO). The sleep-promoting neurons are believed to release GABA and galanin, two inhibitory transmitters, onto arousal-promoting neurons, such as the histaminergic, serotonergic, orexinergic, noradrenergic, and cholinergic neurons mentioned above. Levels of acetylcholine, norepinephrine, serotonin, and histamine decrease with the onset of sleep, as they are all wakefulness-promoting neurotransmitters. Therefore, it is believed that the activation of sleep-promoting neurons causes the inhibition of arousal-promoting neurons, which leads to sleep. Evidence has shown that during the sleep–wake cycle, sleep-promoting neurons and arousal-promoting neurons have reciprocal discharges, and that during NREM sleep, GABA receptors increase in the arousal-promoting neurons. This has led some to believe that the increase of GABA receptors in the arousal-promoting neurons is another pathway of inducing sleep. Adenosine is also known as the sleep-promoting nucleoside neuromodulator. Astrocytes maintain a small stock of nutrients in the form of glycogen. In times of increased brain activity, such as during the daytime, this glycogen is converted into fuel for neurons; thus, prolonged wakefulness causes a decrease in the level of glycogen in the brain. A fall in the level of glycogen causes an increase in the level of extracellular adenosine, which has an inhibitory effect on neural activity. This accumulation of adenosine serves as a sleep-promoting substance. The majority of sleep neurons are located in the ventrolateral preoptic area (vlPOA). These sleep neurons are silent until an individual shows a transition from waking to sleep. The sleep neurons in the preoptic area receive inhibitory inputs from some of the same regions they inhibit, including the tuberomammillary nucleus, the raphe nuclei, and the locus coeruleus. Thus, they are inhibited by histamine, serotonin, and norepinephrine. This mutual inhibition may provide the basis for establishing periods of sleep and waking. A reciprocal inhibition also characterizes an electronic circuit known as the flip-flop. A flip-flop can assume one of two states, usually referred to as on or off. Thus, either the sleep neurons are active and inhibit the wakefulness neurons, or the wakefulness neurons are active and inhibit the sleep neurons. Because these regions are mutually inhibitory, it is impossible for neurons in both sets of regions to be active at the same time. This flip-flop, switching from one state to another quickly, can be unstable.
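The flip-flop analogy above can be made concrete with a toy firing-rate model of two mutually inhibitory populations. This is purely an illustrative sketch: the parameters are arbitrary rather than physiological, and the "sleep drive" input stands in for the homeostatic adenosine pressure described earlier.

# Toy "flip-flop" model: a sleep-promoting (s) and a wake-promoting (w)
# population inhibit each other. All parameters are arbitrary, chosen
# only to illustrate bistability; this is not a physiological model.
import math

def rate(x):
    # sigmoidal firing-rate nonlinearity
    return 1.0 / (1.0 + math.exp(-8.0 * (x - 0.5)))

def settle(sleep_drive, wake_drive=0.6, inhibition=1.2,
           dt=0.01, steps=20_000):
    s, w = 0.1, 0.9                     # start in the "awake" state
    for _ in range(steps):              # simple Euler integration
        ds = -s + rate(sleep_drive - inhibition * w)
        dw = -w + rate(wake_drive - inhibition * s)
        s, w = s + dt * ds, w + dt * dw
    return s, w

for drive in (0.4, 0.8, 1.2):           # rising homeostatic sleep pressure
    s, w = settle(drive)
    print(f"sleep drive {drive:.1f}: s={s:.2f} w={w:.2f} "
          f"-> {'sleep' if s > w else 'wake'}")

With these arbitrary numbers the circuit stays in the wake state at low and intermediate drive and flips abruptly into the sleep state at high drive; because each side suppresses the other, intermediate mixed states do not persist, mirroring the all-or-nothing switching described above.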
Stage 1 The sleep cycle is normally defined in stages. When an individual first begins to sleep, stage 1 is entered, marked by the presence of some theta activity, which indicates that the firing of neurons in the neocortex is becoming more synchronized, as well as by alpha-wave activity (smooth electrical activity of 8–12 Hz recorded from the brain, generally associated with a state of relaxation). This stage is a transition between sleep and wakefulness and is classified as non-REM sleep. See also Sleep onset latency Hypnagogia References Onset Unsolved problems in neuroscience
Sleep onset
Biology
1,516
50,267,904
https://en.wikipedia.org/wiki/GAT100
GAT100 is a negative allosteric modulator of the cannabinoid CB1 receptor. See also Org 27569 PSNCBAM-1 ZCZ-011 References Cannabinoids CB1 receptor negative allosteric modulators Isothiocyanates
GAT100
Chemistry
60
13,132,016
https://en.wikipedia.org/wiki/Micromonas
Micromonas is a genus of green algae in the family Mamiellaceae. Micromonas is a widespread prasinophyte alga that is very small in size, motile, and phototactic. Before the characterization and naming of a second species, Micromonas commoda, through genome analysis, Micromonas pusilla was considered to be the only species in the genus. This led to a disproportionate amount of research discussing a single species and to the suggestion that it was the dominant photosynthetic picoeukaryote in some marine ecosystems. Unlike many marine algae, this single species was thought to be distributed widely in both warm and cold waters, but genome sequencing confirmed indications from single-gene studies that its apparently global distribution really reflected the presence of multiple species occupying different niches in the ocean. Some studies have divided Micromonas pusilla into 3 to 5 different clades despite their similarity in morphology and habitat. Varying ratios of clades contribute to the M. pusilla population throughout the marine ecosystem, leading to the hypothesis that clades arise based on niche occupation and susceptibility to virus infection. Other studies have established the presence of at least seven phylogenetically distinct species, for which global sequence analyses are beginning to delineate clear differences in the ocean regions they inhabit, with only some of the species actually co-occurring in the same environment. Discovery Micromonas pusilla is considered the first picoplankton studied; it was discovered and named Chromulina pusilla in the 1950s by R. Butcher. Later, electron micrographs by the English scientists Irene Manton and Mary Parke in the 1960s provided further details on M. pusilla. Cell morphology and structure Micromonas is a group of small unicellular pear-shaped micro-algae that do not have a visible cell wall. Like other members of the class, they have a single mitochondrion and a single chloroplast, which covers almost half of the cell. They are able to swim owing to a scale-less flagellum. The axonemal structure of the flagellum in this genus is unusual in that the peripheral microtubules do not extend up to the termination of the central pair of microtubules, allowing direct observation of the motion of the central pair. In Micromonas, the central pair constantly rotates in an anti-clockwise direction, independently of the motion of the other components of the flagellum. While the cell size, shape and the location of insertion of the flagellum into the cell are similar among strains and genetic clades, variation in the respective hair-point length results in different flagellar lengths within the genus. Antibiotic Antibiotic susceptibility was determined using a single strain of M. pusilla, with the aim of producing axenic cultures to be used in studies and experiments. The strain of M. pusilla was tested against a range of antibiotics to determine the possible effects of each. Resistant: benzylpenicillin, gentamicin, kanamycin, neomycin, streptomycin. Sensitive: chloramphenicol, polymyxin B. For M. pusilla, sensitivity towards an antibiotic is likely defined by the impairment of growth, rather than a lethal effect, when subjected to bactericidal levels of that antibiotic. The susceptibility of other strains of M. pusilla towards this set of antibiotics is expected to be similar. Genetics Evolutionary history Micromonas diverged early on from the lineage that led to all modern terrestrial plants.
Individual species have very similar 18S ribosomal RNA gene sequences, a comparison often used to delineate microbial species; however, fewer than 90% of genes are shared between the two genome-sequenced Micromonas species. They show more notable differences in the V1–V2 region of the 16S ribosomal RNA genes (located in the chloroplast genome). More recent analyses show just how divergent they are in relation to other green-lineage members, specifically land plants and chlorophyte green algae. Although Micromonas pusilla was thought to represent a single species, genetic studies have shown that Micromonas lineages diverged from each other as early as 65 million years ago, accumulating a large number of genetic differences. The lack of morphological differentiation means that Micromonas pusilla may be considered a cryptic species complex. Strain isolation The original Micromonas reference genomes were created from strain CCMP1545, isolated from the North Atlantic and deposited in a culture collection in the 1980s, and strain CCMP2709 (RCC299 prior to being rendered axenic and clonal), isolated in 1998 from an Equatorial Pacific sample. These strains had been cultured for decades and are available from the National Center for Marine Algae and Microbiota (NCMA, US) and the Roscoff Culture Collection (RCC, FR). Cellular mechanisms Cell growth and division Micromonas reproduces asexually through fission. It has been observed that M. pusilla shows variability in optical characteristics, for example cell size and light scattering, throughout the day. These measurements increase during the light period and decrease during the dark period. This coincides with the finding that proteomic profiles change over the diel cycle, with increased expression of proteins related to cell proliferation and to lipid and cell membrane restructuring in the dark, when cells start dividing and become smaller. However, expression levels of genes and proteins can still vary within the same metabolic pathway. It has also been suggested that the structure of the 3' UTR may play a role in the regulatory system. Light-harvesting system Micromonas species share the same collection of photosynthetic pigments as other members of the class Mamiellophyceae, including the common pigments chlorophyll a and chlorophyll b as well as prasinoxanthin (xanthophyll K), the first algal carotenoid assigned a structure with a γ-end group. Most of its xanthophylls are in the oxidized state and show similarities to those possessed by other important marine plankton such as diatoms, golden and brown algae, and dinoflagellates. In addition, another pigment, called Chl cCS-170, can be found in some strains of Micromonas and Ostreococcus living in deeper parts of the ocean, which may indicate a potential adaptation of organisms that reside under low light intensity; however, at least for Ostreococcus, these strains are found throughout the water column in open-ocean gyres, including in surface waters. The light-harvesting complexes of Micromonas are distinguishable from those of other green algae in terms of pigment composition and stability under unfavorable conditions. It has been shown that these proteins use three different pigments for light harvesting, and that they are resistant to high temperature and the presence of detergent.
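As a toy illustration of the marker-gene comparisons mentioned in the genetics section above, percent identity between two aligned sequences reduces to a simple fraction; the sequences below are invented and far shorter than a real 18S rRNA gene:

# Percent identity between two aligned sequences (toy example).
# The sequences are invented; real analyses align full-length genes.
def percent_identity(a: str, b: str) -> float:
    """Share of aligned, non-gap positions at which the bases match."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return 100.0 * sum(x == y for x, y in pairs) / len(pairs)

seq1 = "ACGGTTAGCCGTA-GCTTAGA"   # hypothetical aligned fragment
seq2 = "ACGGTCAGCCGTAAGCTTGGA"   # hypothetical aligned fragment
print(f"{percent_identity(seq1, seq2):.1f}% identity")   # -> 90.0% identity

Real studies build on full-length alignments and model-based phylogenetics, but the quantity behind statements such as "very similar 18S sequences" is essentially this simple fraction.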
Peptidoglycan biosynthesis Even though the chloroplasts of Micromonas, which are thought to have originated from cyanobacteria via endosymbiosis, do not have a surrounding peptidoglycan layer, the peptidoglycan biosynthesis pathway is complete in M. pusilla and partial in M. commoda, where only some of the relevant enzymes are present. While the role of this pathway in Micromonas is still under investigation, this observation links the different species of Micromonas with the glaucophyte algae, which still have their chloroplasts covered with peptidoglycan. Ecological significance Micromonas makes up a significant amount of picoplanktonic biomass and productivity in both oceanic and coastal regions. The abundance of Micromonas has increased over the past decade. Evidence shows these spikes in numbers are induced by climate change, which has been felt more drastically in the Arctic. Many green algal species have been considered solely photosynthetic, and this appears to be the case for Micromonas. Some years ago a study indicated that Micromonas had a predatory mixotrophic lifestyle that might have large impacts on prokaryotic populations within the Arctic. Because of the large consumption of prokaryotes this would imply, that study and others building on it suggested it might explain why photosynthetic picoeukaryotes appear to be increasing in the Arctic. However, the authors of that study lost the strain used, and two subsequent studies by other laboratories were unable to replicate the results, concluding that Micromonas, including M. polaris, is not a predatory mixotroph. Viral infection Viruses are important in the balance of marine ecosystems, regulating the composition of microbial communities, but their behavior can be affected by several factors including temperature, mode of infection and host condition. An increasing number of Micromonas-infecting viruses are being discovered and studied, including studies of transcriptional responses to infection under differing nutrient conditions. Micromonas pusilla virus There are currently 45 identified viral strains that coexist with M. pusilla populations. Virus infectivity depends on the host strain, light availability and virus adsorption. On average, viral lysis is estimated to kill about 2 to 10% of the M. pusilla population per day. Micromonas pusilla reovirus (MpRV) was the first reovirus isolated that infects a protist; it is larger than other members of the family. Micromonas polaris virus is the first phycodnavirus isolated from polar ocean waters. It can infect M. polaris, the polar ecotype of Micromonas that has adapted to low-temperature waters. Evidence suggests that the increase in temperature due to climate change may shift the clonal composition of both the virus and its host. Metabolic engineering With the world's growing population, there is an increased demand for wild fish and algae as sources of polyunsaturated fatty acids (PUFAs), which are required for growth and development, as well as the maintenance of health in humans. Recent research is investigating an alternative route to PUFA production, expressing acyl-CoA Δ6-desaturase, an enzyme present in M. pusilla, in plants. The M. pusilla acyl-CoA Δ6-desaturase is highly effective in the polyunsaturated fatty acid synthesis pathway because of its strong binding preference for omega-3 substrates when expressed in land plants.
References External links Genes from tiny algae shed light on big role managing carbon in world's oceans Chlorophyta genera Mamiellophyceae Marine microorganisms
Micromonas
Biology
2,271
11,720,315
https://en.wikipedia.org/wiki/Hilbert%27s%20theorem%20%28differential%20geometry%29
In differential geometry, Hilbert's theorem (1901) states that there exists no complete regular surface $S$ of constant negative Gaussian curvature $K$ immersed in $\mathbb{R}^3$. This theorem answers the question for the negative case of which surfaces in $\mathbb{R}^3$ can be obtained by isometrically immersing complete manifolds with constant curvature. History Hilbert's theorem was first treated by David Hilbert in "Über Flächen von konstanter Krümmung" (Trans. Amer. Math. Soc. 2 (1901), 87–99). A different proof was given shortly after by E. Holmgren in "Sur les surfaces à courbure constante négative" (1902). A far-reaching generalization was obtained by Nikolai Efimov in 1975. Proof The proof of Hilbert's theorem is elaborate and requires several lemmas. The idea is to show the nonexistence of an isometric immersion $\varphi = \psi \circ \exp_p : S' \to \mathbb{R}^3$ of a plane $S'$ into the real space $\mathbb{R}^3$. This proof is basically the same as in Hilbert's paper, although based on the books of do Carmo and Spivak. Observations: In order to have a more manageable treatment, but without loss of generality, the curvature may be considered equal to minus one, $K = -1$. There is no loss of generality, since the theorem deals with constant curvatures, and similarities of $\mathbb{R}^3$ rescale $K$ by a constant. The exponential map $\exp_p : T_p(S) \to S$ is a local diffeomorphism (in fact a covering map, by the Cartan–Hadamard theorem); therefore, it induces an inner product on the tangent space $T_p(S)$ of $S$ at $p$. Furthermore, $S'$ denotes the geometric surface $T_p(S)$ with this inner product. If $\psi : S \to \mathbb{R}^3$ is an isometric immersion, the same holds for $\varphi = \psi \circ \exp_p : S' \to \mathbb{R}^3$. The first lemma is independent from the other ones, and will be used at the end as the counter statement to reject the results from the other lemmas. Lemma 1: The area of $S'$ is infinite. Proof's sketch: The idea of the proof is to create a global isometry between the hyperbolic plane $H$ and $S'$. Then, since $H$ has an infinite area, $S'$ will have it too. The fact that the hyperbolic plane has an infinite area comes by computing the surface integral with the corresponding coefficients of the first fundamental form. To obtain these, the hyperbolic plane can be defined as the plane $\mathbb{R}^2$ with the inner product given, around a point $q$ with coordinates $(u, v)$, by $E = \langle \partial_u, \partial_u \rangle = 1$, $F = \langle \partial_u, \partial_v \rangle = 0$ and $G = \langle \partial_v, \partial_v \rangle = e^{2u}$. Since the hyperbolic plane is unbounded, the limits of the integral are infinite, and the area can be calculated through $\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{u} \, du \, dv = \infty$. Next it is needed to create a map which will show that the global information from the hyperbolic plane can be transferred to the surface $S'$, i.e. a global isometry. $\varphi : H \to S'$ will be the map whose domain is the hyperbolic plane and whose image is the 2-dimensional manifold $S'$, which carries the inner product from the surface $S$ with negative curvature. $\varphi$ is defined via the exponential map, its inverse, and a linear isometry between the tangent spaces, $A : T_p(H) \to T_{p'}(S')$. That is, $\varphi = \exp_{p'} \circ A \circ \exp_p^{-1}$, where $p \in H$ and $p' \in S'$. That is to say, the starting point goes to the tangent plane of $H$ at $p$ through the inverse of the exponential map; it then travels from one tangent plane to the other through the isometry $A$, and then down to the surface $S'$ with another exponential map. The following step involves the use of polar coordinates, $(\rho, \theta)$ and $(\rho', \theta')$, around $p$ and $p'$ respectively. The requirement is that the axes are mapped to each other, that is $\theta = 0$ goes to $\theta' = 0$. Then $\varphi$ preserves the first fundamental form. In a geodesic polar system, the Gaussian curvature $K$ can be expressed as $K = -\frac{(\sqrt{G})_{\rho\rho}}{\sqrt{G}}$. In addition, $K$ is constant and $\sqrt{G}$ fulfills the differential equation $(\sqrt{G})_{\rho\rho} + K \sqrt{G} = 0$. Since $H$ and $S'$ have the same constant Gaussian curvature, they are locally isometric (Minding's theorem). That means that $\varphi$ is a local isometry between $H$ and $S'$. Furthermore, from Hadamard's theorem it follows that $\varphi$ is also a covering map. Since $S'$ is simply connected, $\varphi$ is a homeomorphism, and hence a (global) isometry. Therefore, $H$ and $S'$ are globally isometric, and because $H$ has an infinite area, $S' = T_p(S)$ has an infinite area as well.
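As a short verification of the last step (a standard computation, not spelled out in the article): with $K = -1$ and the usual geodesic polar initial conditions, the differential equation above becomes

\[
(\sqrt{G})_{\rho\rho} - \sqrt{G} = 0, \qquad \sqrt{G}(0,\theta) = 0, \qquad (\sqrt{G})_{\rho}(0,\theta) = 1,
\]

whose unique solution is $\sqrt{G}(\rho,\theta) = \sinh\rho$, for $H$ and for $S'$ alike. The two surfaces therefore carry identical metric coefficients in geodesic polar coordinates, which is exactly why Minding's theorem applies; the same coefficient also recovers the infinite area, since $\int_0^{2\pi}\!\int_0^{\infty} \sinh\rho \, d\rho \, d\theta$ diverges.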
Lemma 2: For each $p' \in S'$ there exists a parametrization $x : U \subset \mathbb{R}^2 \to S'$, $p' \in x(U)$, such that the coordinate curves of $x$ are asymptotic curves of $x(U) = V'$ and form a Tchebyshef net. Lemma 3: Let $V' \subset S'$ be a coordinate neighborhood of $S'$ such that the coordinate curves are asymptotic curves in $V'$. Then the area $A$ of any quadrilateral formed by the coordinate curves is smaller than $2\pi$. The next goal is to show that $x$ is a parametrization of the whole of $S'$. Lemma 4: For a fixed $t$, the curve $x(s, t)$ is an asymptotic curve with $s$ as arc length. The following two lemmas, together with Lemma 8, will demonstrate the existence of a parametrization $x : \mathbb{R}^2 \to S'$. Lemma 5: $x$ is a local diffeomorphism. Lemma 6: $x$ is surjective. Lemma 7: On $S'$ there are two differentiable linearly independent vector fields which are tangent to the asymptotic curves of $S'$. Lemma 8: $x$ is injective. Proof of Hilbert's theorem: First, it will be assumed that an isometric immersion from a complete surface with negative curvature exists: $\psi : S \to \mathbb{R}^3$. As stated in the observations, the tangent plane $T_p(S)$ is endowed with the metric induced by the exponential map. Moreover, $\varphi = \psi \circ \exp_p : S' \to \mathbb{R}^3$ is an isometric immersion, and Lemmas 5, 6 and 8 show the existence of a parametrization $x : \mathbb{R}^2 \to S'$ of the whole of $S'$, such that the coordinate curves of $x$ are the asymptotic curves of $S'$; this result was provided by Lemma 4. Therefore, $S'$ can be covered by a union of "coordinate" quadrilaterals $Q_n$ with $Q_n \subset Q_{n+1}$. By Lemma 3, the area of each quadrilateral is smaller than $2\pi$. On the other hand, by Lemma 1, the area of $S'$ is infinite and therefore has no bounds. This is a contradiction and the proof is concluded. See also Nash embedding theorem, which states that every Riemannian manifold can be isometrically embedded into some Euclidean space. References do Carmo, Manfredo, Differential Geometry of Curves and Surfaces, Prentice Hall, 1976. Spivak, Michael, A Comprehensive Introduction to Differential Geometry, Publish or Perish, 1999. Hyperbolic geometry Theorems in differential geometry Articles containing proofs
Hilbert's theorem (differential geometry)
Mathematics
1,239
73,695,367
https://en.wikipedia.org/wiki/Mobile%20barrage%20squad
A mobile barrage squad (MBS) is an element of a combat or operational order in the form of a temporary military formation created from units of engineering troops and army aviation. MBS is the abbreviation used for this temporary formation of troops or forces in service documents. The main purpose of an MBS is to lay mine-explosive barrages during combat and to destroy transport infrastructure on behalf of friendly forces. Until July 1943 such formations were referred to simply as barrage squads. History The theoretical foundation for the practical application of MBS was laid in the work "Разрушения и заграждения" (1931) by the Soviet military engineer Dmitry Karbyshev. During the Second World War (1939–1945), and especially on the Eastern Front (1941–1945), mines and explosive barrages found wide use in all types of combat. To lay them during the Battle of Moscow, Soviet troops in 1941 used barrage squads for the first time; these were later called mobile barrage squads and were subsequently used successfully in other operations of the Red Army of the Soviet Union. After the Battle of Kursk (1943), on the basis of the experience gained, it was concluded that army commands needed a permanent specialized reserve of engineering units possessing mechanized mine-laying equipment and large quantities of mines and explosives of various types. As a consequence, the MBS became a mandatory element of the operational structure of Soviet troops, and in 1942–1943 the tactics of MBS in offensive and defensive operations were worked out. In the course of the war on the Eastern Front, the Red Army laid more than 70,000,000 mines of various types, including about 30,000,000 anti-tank mines. Basic provisions The composition and equipment of a squad are determined by its objectives in the combat or operation at hand, the available forces and equipment, the composition of the enemy's troops, and the conditions on the ground. When assigned a mission, the MBS receives data on its staging area, movement routes, mine-laying lines, and possible courses of action. On the defensive, the MBS is held behind the first echelon of friendly troops on the most likely axis of the enemy's main strike, in full readiness to move out to the deployment areas. The primary purposes of the MBS in defensive operations are considered to be: rapid emplacement of mine-explosive barrages and the organization of demolitions on the axes of an enemy breakthrough into the depth of the defense, in landing zones, and the like; covering the joints and flanks of friendly troops with engineering barrages, as well as the deployment lines for counterattacks and counterstrikes; increasing the density of barrages in critical sectors of the first-echelon defense. On the offensive, the MBS follows the first echelon of troops in readiness to set up barrages on the lines indicated to it. On the offensive, the tasks of the MBS usually are: organizing barrages on the likely axes of enemy counterattacks and counterstrikes; covering the flanks of strike groups and the commitment lines of the second echelons with engineering barrages; setting up barrages in front of first-echelon units while they consolidate on captured positions. An MBS can be established not only in the ground forces but also, for certain tasks, within other branches of the armed forces and arms of service. For example, in the Navy, the tasks of laying barrages on the high seas are performed by barrage ship units.
In the Strategic Rocket Forces of Russia, such units are formed to cover the approaches to facilities of the position area and to block enemy reconnaissance groups and landing zones. While conventional units and formations usually organize one MBS, troops defending a stretch of coastline create an additional MBS with watercraft or aircraft for laying anti-landing mines in the water. In larger military formations, two or three MBSs are created, one of which is equipped with helicopters. The organizational structure of the armies of NATO member states does not provide for the creation and use of MBS. See also Fougasse Notes Bibliography Руббо Д., Григорьев Б. Подвижные отряды заграждений в битве под Москвой // Армейский сборник : Научно-методический журнал МО РФ. — М.: Редакционно-издательский центр МО РФ, 2016. — No. 11. — p. 25. — ISSN 1560-036X. External links Подвижный отряд заграждений. Энциклопедия. Ministry of Defence (Russia). Пономарёв А. А. Совершенствование инженерного обеспечения боевых действий войск Красной Армии в битвах под Москвой и Сталинградом. Information site «Военно-политическое обозрение» (2013.02.20). Хасанов Ш. Ф. Бои на подступах к рубежу. Archived November 7, 2016 at the Wayback Machine. Military terminology Military engineering Army aviation
Mobile barrage squad
Engineering
1,241
49,015,956
https://en.wikipedia.org/wiki/Copper%28I%29%20thiocyanate
Copper(I) thiocyanate (or cuprous thiocyanate) is a coordination polymer with formula CuSCN. It is an air-stable, white solid used as a precursor for the preparation of other thiocyanate salts. Structure At least two polymorphs have been characterized by X-ray crystallography. They both feature copper(I) in a characteristic tetrahedral coordination geometry. The sulfur end of the SCN− ligand is triply bridging, so that the coordination sphere of copper is CuS3N. Synthesis Copper(I) thiocyanate forms from the spontaneous decomposition of black copper(II) thiocyanate, releasing thiocyanogen, especially when heated. It is also formed from copper(II) thiocyanate under water, releasing (among other products) thiocyanic acid and the highly poisonous hydrogen cyanide. It is conveniently prepared from relatively dilute solutions of copper(II) in water, such as copper(II) sulphate. To a copper(II) solution sulphurous acid is added, and then a soluble thiocyanate is added (preferably slowly, while stirring). Copper(I) thiocyanate precipitates as a white powder. Alternatively, a thiosulfate solution may be used as the reducing agent. Double salts Copper(I) thiocyanate forms one double salt with the group 1 elements, CsCu(SCN)2. The double salt only forms from concentrated solutions of CsSCN, into which CuSCN dissolves. From less concentrated solutions, solid CuSCN separates, reflecting its low solubility. When brought together with potassium, sodium or barium thiocyanate and brought to crystallisation by concentrating the solution, mixed salts will crystallise out. These are not considered true double salts. As with CsCu(SCN)2, copper(I) thiocyanate separates out when these mixed salts are redissolved or their solutions diluted. Uses Copper(I) thiocyanate is a hole conductor, a semiconductor with a wide band gap (3.6 eV, therefore transparent to visible and near-infrared light). It is used in photovoltaics in some third-generation cells as a hole-transfer layer. It acts as a p-type semiconductor and as a solid-state electrolyte. It is often used in dye-sensitized solar cells. Its hole conductivity is, however, relatively poor (0.01 S·m−1). This can be improved by various treatments, e.g. exposure to gaseous chlorine or doping with (SCN)2. CuSCN and NiO act synergistically as a smoke-suppressant additive in polyvinyl chloride (PVC). CuSCN precipitated on a carbon support can be used for the conversion of aryl halides to aryl thiocyanates. Copper thiocyanate is used in some anti-fouling paints. Advantages compared to cuprous oxide include that the compound is white and is a more efficient biocide. References Copper(I) compounds Thiocyanates Semiconductor materials Coordination polymers
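Referring back to the sulphurous-acid route in the synthesis section above, the overall reduction and precipitation can be summarized by the following balanced equation (a standard redox balance written with sulfite, not given explicitly in the article):

\[
2\,\mathrm{Cu^{2+}} + 2\,\mathrm{SCN^{-}} + \mathrm{SO_3^{2-}} + \mathrm{H_2O} \longrightarrow 2\,\mathrm{CuSCN}\downarrow + \mathrm{SO_4^{2-}} + 2\,\mathrm{H^{+}}
\]

Sulfur(IV) is oxidized to sulfur(VI), supplying the two electrons that reduce two Cu(II) ions to Cu(I), which precipitates immediately as the poorly soluble thiocyanate.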
Copper(I) thiocyanate
Chemistry
676
8,121,926
https://en.wikipedia.org/wiki/European%20Physical%20Journal%20C
The European Physical Journal C (EPJ C) is a biweekly peer-reviewed, open access scientific journal covering theoretical and experimental physics. It is part of the SCOAP3 initiative. See also European Physical Journal References Physics journals Springer Science+Business Media academic journals Academic journals established in 1998 English-language journals Semi-monthly journals EDP Sciences academic journals Particle physics journals
European Physical Journal C
Physics
78
54,840
https://en.wikipedia.org/wiki/Eutrophication
Eutrophication is a general term describing a process in which nutrients accumulate in a body of water, resulting in an increased growth of organisms that may deplete the oxygen in the water. Eutrophication may occur naturally or as a result of human actions. Man-made, or cultural, eutrophication occurs when sewage, industrial wastewater, fertilizer runoff, and other nutrient sources are released into the environment. Such nutrient pollution usually causes algal blooms and bacterial growth, resulting in the depletion of dissolved oxygen in water and causing substantial environmental degradation. Approaches for prevention and reversal of eutrophication include minimizing point-source pollution from sewage and agriculture as well as other nonpoint pollution sources. Additionally, the introduction of bacteria- and algae-inhibiting organisms such as shellfish and seaweed can also help reduce nitrogen pollution, which in turn controls the growth of cyanobacteria, the main source of harmful algal blooms. History and terminology The term "eutrophication" comes from the Greek eutrophos, meaning "well-nourished". Water bodies with very low nutrient levels are termed oligotrophic and those with moderate nutrient levels are termed mesotrophic. Advanced eutrophication may also be referred to as dystrophic and hypertrophic conditions. Thus, eutrophication has been defined as "degradation of water quality owing to enrichment by nutrients which results in excessive plant (principally algae) growth and decay." Eutrophication was recognized as a water pollution problem in European and North American lakes and reservoirs in the mid-20th century. Breakthrough research carried out at the Experimental Lakes Area (ELA) in Ontario, Canada, in the 1970s provided the evidence that freshwater bodies are phosphorus-limited. ELA uses a whole-ecosystem approach and long-term, whole-lake investigations of fresh water focusing on cultural eutrophication. Causes Eutrophication is caused by excessive concentrations of nutrients, most commonly phosphates and nitrates, although this varies with location. Prior to being phased out in the 1970s, phosphate-containing detergents contributed to eutrophication. Since then, sewage and agriculture have emerged as the dominant phosphate sources. The main sources of nitrogen pollution are agricultural runoff containing fertilizers and animal wastes, sewage, and atmospheric deposition of nitrogen originating from combustion or animal waste. The limitation of productivity in any aquatic system varies with the rate of supply (from external sources) and removal (flushing out) of nutrients from the body of water. This means that some nutrients are more prevalent in certain areas than others, and different ecosystems and environments have different limiting factors. Phosphorus is the limiting factor for plant growth in most freshwater ecosystems; because phosphate adheres tightly to soil particles and sinks in areas such as wetlands and lakes, more and more phosphorus is accumulating inside freshwater bodies. In marine ecosystems, nitrogen is the primary limiting nutrient; the deposition of nitrogen oxides (created by the combustion of fossil fuels) from the atmosphere has led to an increase in nitrogen levels and to heightened eutrophication in the ocean.
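To make the limiting-nutrient idea above concrete, a common first-pass check compares the molar ratio of dissolved inorganic nitrogen to phosphorus in a sample against the Redfield ratio of about 16:1; the sketch below is illustrative only, and the sample concentrations are invented:

# First-pass limiting-nutrient check against the Redfield ratio (~16:1 N:P
# by moles). Sample concentrations are invented for illustration.
REDFIELD_N_TO_P = 16.0

def limiting_nutrient(din_umol_per_L: float, dip_umol_per_L: float) -> str:
    """Guess the limiting nutrient from dissolved inorganic N and P."""
    ratio = din_umol_per_L / dip_umol_per_L
    nutrient = "nitrogen" if ratio < REDFIELD_N_TO_P else "phosphorus"
    return f"N:P = {ratio:.1f} -> {nutrient}-limited"

print(limiting_nutrient(8.0, 1.0))    # N:P =  8.0 -> nitrogen-limited
print(limiting_nutrient(40.0, 0.5))   # N:P = 80.0 -> phosphorus-limited

A ratio well below 16 suggests nitrogen limitation (typical of coastal and marine waters), while a ratio well above 16 suggests phosphorus limitation (typical of fresh waters); in practice, limitation is confirmed with bioassays rather than ratios alone.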
Cultural eutrophication Cultural or anthropogenic eutrophication is eutrophication caused by human activity. The problem became more apparent following the introduction of chemical fertilizers in agriculture (the green revolution of the mid-1900s). Phosphorus and nitrogen are the two main nutrients that cause cultural eutrophication, as they enrich the water, allowing some aquatic plants, especially algae, to grow rapidly and bloom in high densities. Algal blooms can shade out benthic plants, thereby altering the overall plant community. When algae die off, their degradation by bacteria removes oxygen, potentially generating anoxic conditions. This anoxic environment kills off aerobic organisms (e.g. fish and invertebrates) in the water body. This also affects terrestrial animals, restricting their access to affected water (e.g. as drinking sources). Selection for algal and aquatic plant species that can thrive in nutrient-rich conditions can cause structural and functional disruption to entire aquatic ecosystems and their food webs, resulting in loss of habitat and species biodiversity. There are several sources of excessive nutrients from human activity, including run-off from fertilized fields, lawns, and golf courses, untreated sewage and wastewater, and nitrogen pollution from the internal combustion of fuels. Cultural eutrophication can occur in fresh water and salt water bodies, with shallow waters being the most susceptible. In shorelines and shallow lakes, sediments are frequently resuspended by wind and waves, which can release nutrients from the sediments into the overlying water, enhancing eutrophication. The deterioration of water quality caused by cultural eutrophication can therefore negatively impact human uses, including potable supply, industrial uses and recreation. Natural eutrophication Eutrophication can be a natural process that occurs through the gradual accumulation of sediment and nutrients. Natural eutrophication is usually caused by the accumulation of nutrients from dissolved phosphate minerals and dead plant matter in water. Natural eutrophication has been well characterized in lakes. Paleolimnologists now recognise that climate change, geology, and other external influences are also critical in regulating the natural productivity of lakes. A few artificial lakes also demonstrate the reverse process (meiotrophication), becoming less nutrient-rich with time as nutrient-poor inputs slowly dilute the nutrient-richer water mass of the lake. This process may be seen in artificial lakes and reservoirs, which tend to be highly eutrophic on first filling but may become more oligotrophic with time. The main difference between natural and anthropogenic eutrophication is that the natural process is very slow, occurring on geological time scales. Effects Ecological effects Eutrophication can have the following ecological effects: increased biomass of phytoplankton, changes in macrophyte species composition and biomass, dissolved oxygen depletion, increased incidence of fish kills, and loss of desirable fish species. Decreased biodiversity When an ecosystem experiences an increase in nutrients, primary producers reap the benefits first. In aquatic ecosystems, species such as algae experience a population increase (called an algal bloom). Algal blooms limit the sunlight available to bottom-dwelling organisms and cause wide swings in the amount of dissolved oxygen in the water. Oxygen is required by all aerobically respiring plants and animals, and it is replenished in daylight by photosynthesizing plants and algae.
Under eutrophic conditions, dissolved oxygen greatly increases during the day but is greatly reduced after dark by the respiring algae and by microorganisms that feed on the increasing mass of dead algae. When dissolved oxygen levels decline to hypoxic levels, fish and other marine animals suffocate. As a result, creatures such as fish, shrimp, and especially immobile bottom dwellers die off. In extreme cases, anaerobic conditions ensue, promoting the growth of bacteria. Zones where this occurs are known as dead zones. New species invasion Eutrophication may cause competitive release by making abundant a normally limiting nutrient. This process causes shifts in the species composition of ecosystems. For instance, an increase in nitrogen might allow new, competitive species to invade and out-compete the original inhabitant species. This has been shown to occur in New England salt marshes. In Europe and Asia, the common carp frequently lives in naturally eutrophic or hypereutrophic areas and is adapted to living in such conditions. The eutrophication of areas outside its natural range partially explains the fish's success in colonizing these areas after being introduced. Toxicity Some harmful algal blooms resulting from eutrophication are toxic to plants and animals. Freshwater algal blooms can pose a threat to livestock. When the algae die or are eaten, neuro- and hepatotoxins are released which can kill animals and may pose a threat to humans. An example of algal toxins working their way into humans is the case of shellfish poisoning. Biotoxins created during algal blooms are taken up by shellfish (mussels, oysters), leading to these human foods acquiring the toxicity and poisoning humans. Examples include paralytic, neurotoxic, and diarrhoetic shellfish poisoning. Other marine animals can be vectors for such toxins, as in the case of ciguatera, where it is typically a predator fish that accumulates the toxin and then poisons humans. Economic effects Eutrophication and harmful algal blooms can have economic impacts due to increasing water treatment costs, commercial fishing and shellfish losses, recreational fishing losses (reductions in harvestable fish and shellfish), and reduced tourism income (decreases in the perceived aesthetic value of the water body). Water treatment costs can be increased by decreases in water transparency (increased turbidity). There can also be issues with color and smell during drinking water treatment. Health impacts Human health effects of eutrophication derive from two main issues: excess nitrate in drinking water and exposure to toxic algae. Nitrates in drinking water can cause blue baby syndrome in infants and can react with chemicals used to treat water to create disinfection by-products in drinking water. Direct contact with toxic algae through swimming or drinking can cause rashes, stomach or liver illness, and respiratory or neurological problems. Causes and effects for different types of water bodies Freshwater systems One response to added amounts of nutrients in aquatic ecosystems is the rapid growth of microscopic algae, creating an algal bloom. In freshwater ecosystems, floating algal blooms are commonly formed by nitrogen-fixing cyanobacteria (blue-green algae). This outcome is favored when soluble nitrogen becomes limiting and phosphorus inputs remain significant. Nutrient pollution is a major cause of algal blooms and excess growth of other aquatic plants, leading to overcrowding and competition for sunlight, space, and oxygen.
Increased competition for the added nutrients can cause potential disruption to entire ecosystems and food webs, as well as a loss of habitat and species biodiversity. When the overproduced macrophytes and algae die in eutrophic water, their decomposition further consumes dissolved oxygen. The depleted oxygen levels in turn may lead to fish kills and a range of other effects reducing biodiversity. Nutrients may become concentrated in an anoxic zone, often in deeper waters cut off by stratification of the water column, and may only be made available again during the autumn turn-over in temperate areas or under conditions of turbulent flow. The dead algae and the organic load carried by water inflows into a lake settle to the bottom and undergo anaerobic digestion, releasing greenhouse gases such as methane and CO2. Some of the methane gas may be oxidised by anaerobic methane-oxidising bacteria such as Methylococcus capsulatus, which in turn may provide a food source for zooplankton. Thus a self-sustaining biological process can take place to generate a primary food source for the phytoplankton and zooplankton, depending on the availability of adequate dissolved oxygen in the water body. Enhanced growth of aquatic vegetation, phytoplankton and algal blooms disrupts the normal functioning of the ecosystem, causing a variety of problems such as a lack of the oxygen needed for fish and shellfish to survive. The growth of dense algae in surface waters can shade the deeper water and reduce the viability of benthic shelter plants, with resultant impacts on the wider ecosystem. Eutrophication also decreases the recreational and aesthetic value of rivers and lakes. Health problems can occur where eutrophic conditions interfere with drinking water treatment. Phosphorus is often regarded as the main culprit in cases of eutrophication in lakes subjected to "point source" pollution from sewage pipes. The concentration of algae and the trophic state of lakes correspond well to phosphorus levels in water. Studies conducted in the Experimental Lakes Area in Ontario have shown a relationship between the addition of phosphorus and the rate of eutrophication. Later stages of eutrophication lead to blooms of nitrogen-fixing cyanobacteria limited solely by the phosphorus concentration. Phosphorus-based eutrophication in freshwater lakes has been addressed in several cases.
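One widely used way to quantify the phosphorus–trophic state relationship noted above is Carlson's Trophic State Index, whose total-phosphorus form is TSI = 14.42 ln(TP) + 4.15 with TP in µg/L; the sketch below applies it to invented sample values, with the conventional class boundaries taken as an assumption of this example:

import math

def tsi_phosphorus(tp_ug_per_L: float) -> float:
    # Carlson's Trophic State Index from total phosphorus (ug/L)
    return 14.42 * math.log(tp_ug_per_L) + 4.15

def trophic_class(tsi: float) -> str:
    if tsi < 40:
        return "oligotrophic"
    if tsi < 50:
        return "mesotrophic"
    if tsi < 70:
        return "eutrophic"
    return "hypereutrophic"

for tp in (8.0, 20.0, 120.0):    # invented total-phosphorus measurements
    tsi = tsi_phosphorus(tp)
    print(f"TP {tp:5.1f} ug/L -> TSI {tsi:4.1f} ({trophic_class(tsi)})")

With these invented inputs the index spans the oligotrophic, mesotrophic and hypereutrophic classes, illustrating how a single phosphorus measurement maps onto the trophic vocabulary used throughout this article.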
Coastal waters Eutrophication is a common phenomenon in coastal waters, where nitrogenous sources are the main culprit. In coastal waters, nitrogen is commonly the key limiting nutrient of marine waters (unlike the freshwater systems, where phosphorus is often the limiting nutrient). Therefore, nitrogen levels are more important than phosphorus levels for understanding and controlling eutrophication problems in salt water. Estuaries, as the interface between fresh water and salt water, can be both phosphorus- and nitrogen-limited and commonly exhibit symptoms of eutrophication. Eutrophication in estuaries often results in bottom-water hypoxia or anoxia, leading to fish kills and habitat degradation. Upwelling in coastal systems also promotes increased productivity by conveying deep, nutrient-rich waters to the surface, where the nutrients can be assimilated by algae. Examples of anthropogenic sources of nitrogen-rich pollution to coastal waters include sea-cage fish farming and discharges of ammonia from the production of coke from coal. In addition to runoff from land, wastes from fish farming and industrial ammonia discharges, atmospheric fixed nitrogen can be an important nutrient source in the open ocean. This could account for around one third of the ocean's external (non-recycled) nitrogen supply, and up to 3% of the annual new marine biological production. Coastal waters embrace a wide range of marine habitats, from enclosed estuaries to the open waters of the continental shelf. Phytoplankton productivity in coastal waters depends on both nutrient and light supply, with the latter an important limiting factor in waters near to shore, where sediment resuspension often limits light penetration. Nutrients are supplied to coastal waters from land via rivers and groundwater, and also via the atmosphere. There is also an important source from the open ocean, via mixing of relatively nutrient-rich deep ocean waters. Nutrient inputs from the ocean are little changed by human activity, although climate change may alter the water flows across the shelf break. By contrast, inputs of nitrogen and phosphorus from land to coastal zones have been increased by human activity globally. The extent of the increase varies greatly from place to place depending on human activities in the catchments. A third key nutrient, dissolved silicon, is derived primarily from sediment weathering to rivers and from offshore, and is therefore much less affected by human activity. Effects of coastal eutrophication These increasing nitrogen and phosphorus nutrient inputs exert eutrophication pressures on coastal zones. These pressures vary geographically depending on the catchment activities and the associated nutrient load. The geographical setting of the coastal zone is another important factor, as it controls dilution of the nutrient load and oxygen exchange with the atmosphere. The effects of these eutrophication pressures can be seen in several different ways: There is evidence from satellite monitoring that the amounts of chlorophyll, as a measure of overall phytoplankton activity, are increasing in many coastal areas worldwide due to increased nutrient inputs. The phytoplankton species composition may change due to increased nutrient loadings and changes in the proportions of key nutrients. In particular, the increases in nitrogen and phosphorus inputs, along with much smaller changes in silicon inputs, alter the ratio of nitrogen and phosphorus to silicon. These changing nutrient ratios drive changes in phytoplankton species composition, particularly disadvantaging silica-dependent phytoplankton species like diatoms compared to other species. This process leads to the development of nuisance algal blooms in areas such as the North Sea (see also OSPAR Convention) and the Black Sea. In some cases nutrient enrichment can lead to harmful algal blooms (HABs). Such blooms can occur naturally, but there is good evidence that they are increasing as a result of nutrient enrichment, although the causal linkage between nutrient enrichment and HABs is not straightforward. Oxygen depletion has existed in some coastal seas, such as the Baltic, for thousands of years. In such areas the density structure of the water column severely restricts mixing and the associated oxygenation of deep water. However, increases in the inputs of bacterially degradable organic matter to such isolated deep waters can exacerbate this oxygen depletion. These areas of lower dissolved oxygen have increased globally in recent decades.
They are usually connected with nutrient enrichment and the resulting algal blooms. Climate change will generally tend to increase water column stratification and so exacerbate this oxygen depletion problem. An example of such coastal oxygen depletion is in the Gulf of Mexico, where an area of seasonal anoxia of more than 5,000 square miles has developed since the 1950s. The increased primary production driving this anoxia is fueled by nutrients supplied by the Mississippi River. A similar process has been documented in the Black Sea. Hypolimnetic oxygen depletion can lead to summer "kills". During summer stratification, inputs of organic matter and the sedimentation of primary producers can increase rates of respiration in the hypolimnion. If oxygen depletion becomes extreme, aerobic organisms (such as fish) may die, resulting in what is known as a "summer kill". Extent of the problem Surveys have shown that 54% of lakes in Asia are eutrophic; in Europe, 53%; in North America, 48%; in South America, 41%; and in Africa, 28%. In South Africa, a study by the CSIR using remote sensing showed that more than 60% of the reservoirs surveyed were eutrophic. The World Resources Institute has identified 375 hypoxic coastal zones in the world, concentrated in coastal areas of Western Europe, the Eastern and Southern coasts of the US, and East Asia, particularly Japan. Prevention There are certain steps society can take to minimize eutrophication and thereby reduce its harmful effects on humans and other living organisms. Minimizing pollution from sewage There are multiple ways to address cultural eutrophication when raw sewage is a point source of pollution. For example, sewage treatment plants can be upgraded for biological nutrient removal so that they discharge much less nitrogen and phosphorus to the receiving water body. However, even with good secondary treatment, most final effluents from sewage treatment works contain substantial concentrations of nitrogen as nitrate, nitrite or ammonia. Removal of these nutrients is an expensive and often difficult process. Laws regulating the discharge and treatment of sewage have led to dramatic nutrient reductions in surrounding ecosystems. Because untreated domestic sewage is a major contributor to the nonpoint-source nutrient loading of water bodies, it is necessary to provide treatment facilities to highly urbanized areas, particularly in developing countries, where treatment of domestic wastewater is scarce. The technology to safely and efficiently reuse wastewater, both from domestic and industrial sources, should be a primary concern for policy regarding eutrophication. Minimizing nutrient pollution by agriculture There are many ways to help fix cultural eutrophication caused by agriculture. Some recommendations issued by the U.S. Department of Agriculture include: Nutrient management techniques - anyone using fertilizers should apply fertilizer in the correct amount, at the right time of year, with the right method and placement. Organically fertilized fields can "significantly reduce harmful nitrate leaching" compared to conventionally fertilized fields. However, eutrophication impacts are in some cases higher from organic production than from conventional production. In Japan, the amount of nitrogen produced by livestock is adequate to serve the fertilizer needs of the agriculture industry.
Year-round ground cover - a cover crop will prevent periods of bare ground, thus eliminating erosion and runoff of nutrients even after the growing season has passed. Planting field buffers - Planting trees, shrubs and grasses along the edges of fields can help catch the runoff and absorb some nutrients before the water makes it to a nearby water body. Riparian buffer zones are interfaces between a flowing body of water and land, and have been created near waterways in an attempt to filter pollutants; sediment and nutrients are deposited here instead of in water. Creating buffer zones near farms and roads is another possible way to prevent nutrients from traveling too far. Conservation tillage - Reducing the frequency and intensity of tilling enhances the chance of nutrients absorbing into the ground. Policy The United Nations framework for Sustainable Development Goals recognizes the damaging effects of eutrophication for marine environments. It has established a timeline for creating an Index of Coastal Eutrophication and Floating Plastic Debris Density (ICEP) within Sustainable Development Goal 14 (life below water). SDG 14 specifically has a target to: "by 2025, prevent and significantly reduce marine pollution of all kinds, in particular from land-based activities, including marine debris and nutrient pollution". Policy and regulations are a set of tools to minimize causes of eutrophication. Nonpoint sources of pollution are the primary contributors to eutrophication, and their effects can be minimized through common agricultural practices. Reducing the amount of pollutants that reach a watershed can be achieved through the protection of its forest cover, reducing the amount of erosion leaching into a watershed. Also, through the efficient, controlled use of land using sustainable agricultural practices to minimize land degradation, the amount of soil runoff and nitrogen-based fertilizers reaching a watershed can be reduced. Waste disposal technology constitutes another factor in eutrophication prevention. Because a body of water can have an effect on a range of people reaching far beyond that of the watershed, cooperation between different organizations is necessary to prevent the intrusion of contaminants that can lead to eutrophication. Agencies ranging from state governments to water resource management bodies and non-governmental organizations, down to the local population, are responsible for preventing eutrophication of water bodies. In the United States, the best-known interstate effort to prevent eutrophication is that for the Chesapeake Bay. Reversal and remediation Reducing nutrient inputs is a crucial precondition for restoration. Still, there are two caveats: Firstly, it can take a long time, mainly because of the storage of nutrients in sediments. Secondly, restoration may need more than a simple reversal of inputs since there are sometimes several stable but very different ecological states. Recovery of eutrophicated lakes is slow, often requiring several decades. In environmental remediation, nutrient removal technologies include biofiltration, which uses living material to capture and biologically degrade pollutants. Examples include green belts, riparian areas, natural and constructed wetlands, and treatment ponds. Algae bloom forecasting The National Oceanic and Atmospheric Administration (NOAA) in the United States has created a forecasting tool for regions such as the Great Lakes, the Gulf of Maine, and the Gulf of Mexico.
Shorter-term predictions can help to show the intensity, location, and trajectory of blooms in order to warn the communities most directly affected. Longer-term forecasts for specific regions and water bodies help to predict larger-scale factors, such as the scale of future blooms and conditions that could lead to more adverse effects. Nutrient bioextraction Nutrient bioextraction is bioremediation involving cultured plants and animals. Nutrient bioextraction or bioharvesting is the practice of farming and harvesting shellfish and seaweed to remove nitrogen and other nutrients from natural water bodies. Shellfish in estuaries It has been suggested that nitrogen removal by oyster reefs could generate net benefits for sources facing nitrogen emission restrictions, similar to other nutrient trading scenarios. Specifically, if oysters maintain nitrogen levels in estuaries below thresholds, then oysters effectively stave off an enforcement response and the compliance costs that parties responsible for nitrogen emissions would otherwise incur. Several studies have shown that oysters and mussels can dramatically impact nitrogen levels in estuaries. Filter feeding activity is considered beneficial to water quality by controlling phytoplankton density and sequestering nutrients, which can be removed from the system through shellfish harvest, buried in the sediments, or lost through denitrification. Foundational work toward the idea of improving marine water quality through shellfish cultivation was conducted by Odd Lindahl et al., using mussels in Sweden. In the United States, shellfish restoration projects have been conducted on the East, West and Gulf coasts. Seaweed farming Studies have demonstrated the potential of seaweed to reduce nitrogen levels. Seaweed aquaculture offers an opportunity to mitigate, and adapt to, climate change. Seaweed, such as kelp, also absorbs phosphorus and nitrogen and is thus helpful in removing excessive nutrients from polluted parts of the sea. Some cultivated seaweeds have very high productivity and could absorb large quantities of N, P, and CO2, producing large amounts of O2 and having an excellent effect on decreasing eutrophication. It is believed that large-scale seaweed cultivation could be a good solution to the eutrophication problem in coastal waters. Geo-engineering Another technique for combating hypoxia/eutrophication in localized situations is direct injection of compressed air, a technique used in the restoration of the Salford Docks area of the Manchester Ship Canal in England. For smaller-scale waters such as aquaculture ponds, pump aeration is standard. Chemical removal of phosphorus Removing phosphorus can remediate eutrophication. Of the several phosphate sorbents, alum (aluminium sulfate) is of practical interest. Many materials have been investigated. The phosphate sorbent is commonly applied at the surface of the water body; it sinks to the bottom of the lake, binding phosphate. Such sorbents have been applied worldwide to manage eutrophication and algal bloom (for example under the commercial name Phoslock). In a large-scale study, 114 lakes were monitored for the effectiveness of alum at phosphorus reduction. Across all lakes, alum effectively reduced the phosphorus for 11 years. While there was variety in longevity (21 years in deep lakes and 5.7 years in shallow lakes), the results demonstrate the effectiveness of alum at controlling phosphorus within lakes. Alum treatment is less effective in deep lakes, as well as lakes with substantial external phosphorus loading.
Finnish phosphorus removal measures started in the mid-1970s and have targeted rivers and lakes polluted by industrial and municipal discharges. These efforts have had a 90% removal efficiency. Still, some targeted point sources did not show a decrease in runoff despite reduction efforts. See also External links International Nitrogen Initiative References Nutrient pollution Water pollution Environmental chemistry Environmental issues with water Aquatic ecology
Eutrophication
Chemistry,Biology,Environmental_science
5,470
14,907
https://en.wikipedia.org/wiki/Inverse%20function
In mathematics, the inverse function of a function f (also called the inverse of f) is a function that undoes the operation of f. The inverse of f exists if and only if f is bijective, and if it exists, is denoted by f⁻¹. For a function f: X → Y, its inverse f⁻¹: Y → X admits an explicit description: it sends each element y ∈ Y to the unique element x ∈ X such that f(x) = y. As an example, consider the real-valued function of a real variable given by f(x) = 5x − 7. One can think of f as the function which multiplies its input by 5 then subtracts 7 from the result. To undo this, one adds 7 to the input, then divides the result by 5. Therefore, the inverse of f is the function f⁻¹: R → R defined by f⁻¹(y) = (y + 7)/5. Definitions Let f be a function whose domain is the set X, and whose codomain is the set Y. Then f is invertible if there exists a function g from Y to X such that g(f(x)) = x for all x ∈ X and f(g(y)) = y for all y ∈ Y. If f is invertible, then there is exactly one function g satisfying this property. The function g is called the inverse of f, and is usually denoted as f⁻¹, a notation introduced by John Frederick William Herschel in 1813. The function f is invertible if and only if it is bijective. This is because the condition g(f(x)) = x for all x ∈ X implies that f is injective, and the condition f(g(y)) = y for all y ∈ Y implies that f is surjective. The inverse function f⁻¹ to f can be explicitly described as the function sending each y ∈ Y to the unique x ∈ X such that f(x) = y. Inverses and composition Recall that if f is an invertible function with domain X and codomain Y, then f⁻¹(f(x)) = x for every x ∈ X and f(f⁻¹(y)) = y for every y ∈ Y. Using the composition of functions, this statement can be rewritten to the following equations between functions: f⁻¹ ∘ f = idX and f ∘ f⁻¹ = idY, where idX is the identity function on the set X; that is, the function that leaves its argument unchanged. In category theory, this statement is used as the definition of an inverse morphism. Considering function composition helps to understand the notation f⁻¹. Repeatedly composing a function f: X → X with itself is called iteration. If f is applied n times, starting with the value x, then this is written as fⁿ(x); so f²(x) = f(f(x)), etc. Since f⁻¹(f(x)) = x, composing f⁻¹ and fⁿ yields fⁿ⁻¹, "undoing" the effect of one application of f. Notation While the notation f⁻¹(x) might be misunderstood, (f(x))⁻¹ certainly denotes the multiplicative inverse of f(x) and has nothing to do with the inverse function of f. The notation f⟨⁻¹⟩ might be used for the inverse function to avoid ambiguity with the multiplicative inverse. In keeping with the general notation, some English authors use expressions like sin⁻¹(x) to denote the inverse of the sine function applied to x (actually a partial inverse; see below). Other authors feel that this may be confused with the notation for the multiplicative inverse of sin(x), which can be denoted as (sin(x))⁻¹. To avoid any confusion, an inverse trigonometric function is often indicated by the prefix "arc" (for Latin arcus). For instance, the inverse of the sine function is typically called the arcsine function, written as arcsin(x). Similarly, the inverse of a hyperbolic function is indicated by the prefix "ar" (for Latin area). For instance, the inverse of the hyperbolic sine function is typically written as arsinh(x). The expressions like sin⁻¹(x) can still be useful to distinguish the multivalued inverse from the partial inverse. Other inverse special functions are sometimes prefixed with the prefix "inv", if the ambiguity of the notation should be avoided. Examples Squaring and square root functions The function f: R → R given by f(x) = x² is not injective because (−x)² = x² for all x. Therefore, f is not invertible. If the domain of the function is restricted to the nonnegative reals, that is, we take the function with the same rule as before, then the function is bijective and so, invertible.
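As a minimal illustrative sketch (the function names and the use of Python are ours, not the article's), the linear example above and the restricted squaring function can both be checked numerically:

import math

def f(x: float) -> float:
    # multiply the input by 5, then subtract 7
    return 5 * x - 7

def f_inv(y: float) -> float:
    # undo f: add 7, then divide by 5
    return (y + 7) / 5

assert f_inv(f(3.0)) == 3.0   # f_inv undoes f
assert f(f_inv(8.0)) == 8.0   # f undoes f_inv

def square_nonneg(x: float) -> float:
    # squaring restricted to the nonnegative reals, where it is bijective
    if x < 0:
        raise ValueError("domain restricted to x >= 0")
    return x * x

assert math.sqrt(square_nonneg(4.0)) == 4.0   # the positive square root inverts it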
The inverse function here is called the (positive) square root function and is denoted by x ↦ √x. Standard inverse functions The following table shows several standard functions and their inverses: Formula for the inverse Many functions given by algebraic formulas possess a formula for their inverse. This is because the inverse f⁻¹ of an invertible function f has an explicit description as the map sending each y to the unique solution x of the equation f(x) = y. This allows one to easily determine inverses of many functions that are given by algebraic formulas. For example, if f is the function f(x) = (2x + 8)³, then to determine f⁻¹(y) for a real number y, one must find the unique real number x such that (2x + 8)³ = y. This equation can be solved: y = (2x + 8)³, so 2x + 8 = ∛y, and hence x = (∛y − 8)/2. Thus the inverse function f⁻¹ is given by the formula f⁻¹(y) = (∛y − 8)/2. Sometimes, the inverse of a function cannot be expressed by a closed-form formula. For example, if f is the function f(x) = x − sin x, then f is a bijection, and therefore possesses an inverse function f⁻¹. The formula for this inverse has an expression as an infinite sum. Properties Since a function is a special type of binary relation, many of the properties of an inverse function correspond to properties of converse relations. Uniqueness If an inverse function exists for a given function f, then it is unique. This follows since the inverse function must be the converse relation, which is completely determined by f. Symmetry There is a symmetry between a function and its inverse. Specifically, if f is an invertible function with domain X and codomain Y, then its inverse f⁻¹ has domain Y and image X, and the inverse of f⁻¹ is the original function f. In symbols, for functions f: X → Y and f⁻¹: Y → X, f⁻¹ ∘ f = idX and f ∘ f⁻¹ = idY. This statement is a consequence of the implication that for f to be invertible it must be bijective. The involutory nature of the inverse can be concisely expressed by (f⁻¹)⁻¹ = f. The inverse of a composition of functions is given by (g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹. Notice that the order of g and f have been reversed; to undo f followed by g, we must first undo g, and then undo f. For example, let f(x) = 3x and let g(x) = x + 5. Then the composition g ∘ f is the function that first multiplies by three and then adds five, (g ∘ f)(x) = 3x + 5. To reverse this process, we must first subtract five, and then divide by three, (g ∘ f)⁻¹(x) = (x − 5)/3. This is the composition f⁻¹ ∘ g⁻¹. Self-inverses If X is a set, then the identity function on X is its own inverse: idX⁻¹ = idX. More generally, a function f: X → X is equal to its own inverse, if and only if the composition f ∘ f is equal to idX. Such a function is called an involution. Graph of the inverse If f is invertible, then the graph of the function y = f⁻¹(x) is the same as the graph of the equation x = f(y). This is identical to the equation y = f(x) that defines the graph of f, except that the roles of x and y have been reversed. Thus the graph of f⁻¹ can be obtained from the graph of f by switching the positions of the x and y axes. This is equivalent to reflecting the graph across the line y = x. Inverses and derivatives By the inverse function theorem, a continuous function of a single variable f: A → R (where A is a subset of R) is invertible on its range (image) if and only if it is either strictly increasing or decreasing (with no local maxima or minima). For example, the function f(x) = x³ + x is invertible, since the derivative f′(x) = 3x² + 1 is always positive. If the function f is differentiable on an interval I and f′(x) ≠ 0 for each x ∈ I, then the inverse f⁻¹ is differentiable on f(I). If y = f(x), the derivative of the inverse is given by the inverse function theorem, (f⁻¹)′(y) = 1/f′(x). Using Leibniz's notation the formula above can be written as dx/dy = 1/(dy/dx). This result follows from the chain rule (see the article on inverse functions and differentiation). The inverse function theorem can be generalized to functions of several variables.
Specifically, a continuously differentiable multivariable function f: Rⁿ → Rⁿ is invertible in a neighborhood of a point p as long as the Jacobian matrix of f at p is invertible. In this case, the Jacobian of f⁻¹ at f(p) is the matrix inverse of the Jacobian of f at p. Real-world examples Let f be the function that converts a temperature in degrees Celsius to a temperature in degrees Fahrenheit, f(C) = (9/5)C + 32; then its inverse function converts degrees Fahrenheit to degrees Celsius, f⁻¹(F) = (5/9)(F − 32), since f⁻¹(f(C)) = C for every Celsius temperature C. Suppose f assigns each child in a family its birth year. An inverse function would output which child was born in a given year. However, if the family has children born in the same year (for instance, twins or triplets, etc.) then the output cannot be known when the input is the common birth year. As well, if a year is given in which no child was born then a child cannot be named. But if each child was born in a separate year, and if we restrict attention to the three years in which a child was born, then we do have an inverse function. Let f be the function that leads to an x percentage rise of some quantity, and g be the function producing an x percentage fall. Applied to $100 with x = 10%, we find that applying the first function followed by the second does not restore the original value of $100, demonstrating the fact that, despite appearances, these two functions are not inverses of each other. The formula to calculate the pH of a solution is pH = −log₁₀[H⁺]. In many cases we need to find the concentration of acid from a pH measurement. The inverse function [H⁺] = 10^(−pH) is used. Generalizations Partial inverses Even if a function f is not one-to-one, it may be possible to define a partial inverse of f by restricting the domain. For example, the function f(x) = x² is not one-to-one, since x² = (−x)². However, the function becomes one-to-one if we restrict to the domain x ≥ 0, in which case f⁻¹(y) = √y. (If we instead restrict to the domain x ≤ 0, then the inverse is the negative of the square root of y.) Alternatively, there is no need to restrict the domain if we are content with the inverse being a multivalued function: f⁻¹(y) = ±√y. Sometimes, this multivalued inverse is called the full inverse of f, and the portions (such as √x and −√x) are called branches. The most important branch of a multivalued function (e.g. the positive square root) is called the principal branch, and its value at y is called the principal value of f⁻¹(y). For a continuous function on the real line, one branch is required between each pair of local extrema. For example, the inverse of a cubic function with a local maximum and a local minimum has three branches. These considerations are particularly important for defining the inverses of trigonometric functions. For example, the sine function is not one-to-one, since sin(x + 2π) = sin(x) for every real x (and more generally sin(x + 2πn) = sin(x) for every integer n). However, the sine is one-to-one on the interval [−π/2, π/2], and the corresponding partial inverse is called the arcsine. This is considered the principal branch of the inverse sine, so the principal value of the inverse sine is always between −π/2 and π/2. The following table describes the principal branch of each inverse trigonometric function: Left and right inverses Function composition on the left and on the right need not coincide. In general, the conditions "There exists g such that g(f(x)) = x" and "There exists g such that f(g(x)) = x" imply different properties of f. For example, let f: R → [0, ∞) denote the squaring map, such that f(x) = x² for all x in R, and let g: [0, ∞) → R denote the square root map, such that g(x) = √x for all x ≥ 0. Then f(g(x)) = x for all x in [0, ∞); that is, g is a right inverse to f. However, g is not a left inverse to f, since, e.g., g(f(−1)) = 1 ≠ −1.
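The squaring/square-root example lends itself to a quick numerical check; this is an illustrative sketch in Python (our choice of language), not part of the article:

import math

def f(x: float) -> float:
    # the squaring map f: R -> [0, inf)
    return x * x

def g(y: float) -> float:
    # the square root map g: [0, inf) -> R
    return math.sqrt(y)

assert f(g(9.0)) == 9.0     # f(g(x)) = x on [0, inf): g is a right inverse of f
assert g(f(-1.0)) == 1.0    # g(f(-1)) = 1, not -1: g is not a left inverse of f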
Left inverses If f: X → Y, a left inverse for f (or retraction of f) is a function g: Y → X such that composing f with g from the left gives the identity function: g ∘ f = idX. That is, the function g satisfies the rule If f(x) = y, then g(y) = x. The function g must equal the inverse of f on the image of f, but may take any values for elements of Y not in the image. A function f with nonempty domain is injective if and only if it has a left inverse. An elementary proof runs as follows: If g is the left inverse of f, and f(x) = f(y), then x = g(f(x)) = g(f(y)) = y. If nonempty f: X → Y is injective, construct a left inverse g: Y → X as follows: for all y ∈ Y, if y is in the image of f, then there exists x ∈ X such that f(x) = y. Let g(y) = x; this definition is unique because f is injective. Otherwise, let g(y) be an arbitrary element of X. For all x ∈ X, f(x) is in the image of f. By construction, g(f(x)) = x, the condition for a left inverse. In classical mathematics, every injective function f with a nonempty domain necessarily has a left inverse; however, this may fail in constructive mathematics. For instance, a left inverse of the inclusion of the two-element set {0, 1} in the reals violates indecomposability by giving a retraction of the real line to the set {0, 1}. Right inverses A right inverse for f (or section of f) is a function h: Y → X such that f ∘ h = idY. That is, the function h satisfies the rule If h(y) = x, then f(x) = y. Thus, h(y) may be any of the elements of X that map to y under f. A function f has a right inverse if and only if it is surjective (though constructing such an inverse in general requires the axiom of choice). If h is the right inverse of f, then f is surjective. For all y ∈ Y, there is x = h(y) such that f(x) = y. If f is surjective, f has a right inverse h, which can be constructed as follows: for all y ∈ Y, there is at least one x ∈ X such that f(x) = y (because f is surjective), so we choose one to be the value of h(y). Two-sided inverses An inverse that is both a left and right inverse (a two-sided inverse), if it exists, must be unique. In fact, if a function has a left inverse and a right inverse, they are both the same two-sided inverse, so it can be called the inverse. If g is a left inverse and h a right inverse of f, for all y ∈ Y, g(y) = g(f(h(y))) = h(y). A function has a two-sided inverse if and only if it is bijective. A bijective function f is injective, so it has a left inverse (if f is the empty function, f: ∅ → ∅ is its own left inverse). f is surjective, so it has a right inverse. By the above, the left and right inverse are the same. If f has a two-sided inverse g, then g is a left inverse and right inverse of f, so f is injective and surjective. Preimages If f: X → Y is any function (not necessarily invertible), the preimage (or inverse image) of an element y ∈ Y is defined to be the set of all elements of X that map to y: f⁻¹(y) = {x ∈ X : f(x) = y}. The preimage of y can be thought of as the image of y under the (multivalued) full inverse of the function f. The notion can be generalized to subsets of the range. Specifically, if S is any subset of Y, the preimage of S, denoted by f⁻¹(S), is the set of all elements of X that map to S: f⁻¹(S) = {x ∈ X : f(x) ∈ S}. For example, take the function f: R → R; x ↦ x². This function is not invertible as it is not bijective, but preimages may be defined for subsets of the codomain, e.g. f⁻¹({1, 4, 9, 16}) = {−4, −3, −2, −1, 1, 2, 3, 4}. The original notion and its generalization are related by the identity f⁻¹(y) = f⁻¹({y}). The preimage of a single element y ∈ Y – a singleton set {y} – is sometimes called the fiber of y. When Y is the set of real numbers, it is common to refer to f⁻¹({y}) as a level set. See also Lagrange inversion theorem, gives the Taylor series expansion of the inverse function of an analytic function Integral of inverse functions Inverse Fourier transform Reversible computing Notes References Bibliography Further reading External links Basic concepts in set theory Unary operations
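A hedged sketch of the preimage computation above, restricted to a finite sampled domain so that it is directly computable (the helper name is ours, not the article's):

def preimage(f, domain, S):
    # all points of the sampled domain that f maps into S
    return {x for x in domain if f(x) in S}

print(preimage(lambda x: x * x, range(-5, 6), {1, 4, 9, 16}))
# -> {-4, -3, -2, -1, 1, 2, 3, 4}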
Inverse function
Mathematics
3,011
1,779,362
https://en.wikipedia.org/wiki/Plantronics
Plantronics, Inc. is an American electronics company producing audio communications equipment for business and consumers. Its products support unified communications, mobile use, gaming and music. Plantronics is headquartered in Santa Cruz, California, and most of its products are produced in China and Mexico. On March 18, 2019, Plantronics announced that it would change its name to Poly following its acquisition of Polycom, although it continues to trade on the New York Stock Exchange as Plantronics, Inc. (POLY; listed as PLT until May 24, 2021). On March 28, 2022, HP Inc. announced its intent to acquire Poly for $1.7 billion in cash as it looks to bolster its hybrid work offerings, such as headsets and videoconferencing hardware. Including debt, the deal was valued at $3.3 billion and closed in August 2022. History In the early 1960s, airline headsets were so large and cumbersome that many pilots had switched back to the use of handheld microphones for communications. The speed and complexity of jet airliners caused a need for the introduction of small, lightweight headsets into the cockpit. In 1961, United Airlines solicited new designs from anyone who was interested. Courtney Graham, a United Airlines pilot, was one of the many who thought the heavy headsets should be replaced by something lighter. He collaborated with his pilot friend Keith Larkin to create a small, functional design which was robust enough to pass airline standards. (Larkin had been working for a small company called Plane-Aids, a Japanese import company which offered spectacles and sunglasses that contained transistor radios in their temple pieces.) The final design, incorporating two small hearing aid-style transducers attached to a headband, was submitted to United Airlines for approval. UAL's approval of the innovative design prompted Graham and Larkin to incorporate as Pacific Plantronics (now called Plantronics, Inc.) on May 18, 1961. They introduced the first lightweight communications headset, the MS-50, to the commercial marketplace in 1962. In the mid-1960s, the Federal Aviation Agency selected Plantronics as the sole supplier of headsets for air traffic controllers, and the company was thereafter selected to supply headsets to the operators of the Bell System. SPENCOMM and NASA In 1961, NASA astronaut Wally Schirra contacted Courtney Graham, a fellow pilot, to discuss creating a design for a small, lightweight headset to be used in the Mercury spacecraft. Pacific Plantronics assembled its Space Environmental Communications (SPENCOMM) division to begin working on a reliable solution. SPENCOMM personnel traveled to NASA's Manned Spacecraft Center (now Johnson Space Center) and Kennedy Space Center to meet with and get design feedback from Schirra and several other astronauts, including Gordon Cooper. Together, SPENCOMM and NASA spent only 11 days creating a working microphone design for space communications, and Schirra was the first to use the new communication technology during the Mercury-Atlas 8 mission. Significant redundancy was built into these headsets, as each microphone circuit had two transducers and each receiver had five transducers; in addition, the headsets were used in pairs. The use of these SPENCOMM-NASA headsets in astronaut space suits continued through the remainder of the Mercury program, the Apollo program and on to this day. The words spoken by U.S. astronaut Neil Armstrong as he stepped on the Moon were transmitted through a Plantronics headset.
MS-50 Following the Pacific Plantronics partnership with NASA in the Space Program, the MS-50 headset gained recognition in the communication marketplace. The FAA, Western Electric, and companies with telephone call centers adopted the MS-50 as a replacement for existing headsets. On May 18, 1965, U.S. Pat. No. 3,184,556 was issued to W. K. Larkin. StarSet In 1970, Ken Hutchings, an engineer who had joined Pacific Plantronics, patented a device which was marketed as the "StarSet"; the design is described in United States Patent 3,548,118. Wireless products In the 1980s, Plantronics created a line of cordless products using infrared technology. Though the technology utilized was the same one being used by television remote controls, the link did not require Federal Communications Commission (FCC) telecommunications approval. One of the first products used the infrared beam to create a communications link between a small transmitter and a base unit which was connected to the telephone network. This product was the first "echo-free" speakerphone for use in conference rooms. The small transmitter could be handheld or clipped to clothing to ensure a good pickup of the speaker's voice. Wireless office headsets In 2003, Plantronics introduced the CS50 wireless headset for use on office phones. Since that time, Plantronics has manufactured other wireless headsets, including the CS70N, CS500 Series, and Savi 700 Series. In recent years there has also been a strong focus on unified communications headsets and speakers. Mobile and Bluetooth mobile headsets Plantronics manufactures mobile headsets, including a line of Bluetooth headsets for mobile phones. The Pulsar 590, for example, is designed for use with Bluetooth- and A2DP-enabled cellphones. Computer and gaming headsets Plantronics manufactured headsets for PC audio and online and console gaming via its GameCom, .Audio, and RIG gaming labels. Plantronics entered the multimedia headset market in 1999 with the release of the HS1 and the DSP-500 headsets, the latter featuring a built-in digital signal processing card. In 2002, Plantronics and Microsoft created the headset for the Xbox Communicator, the first headset to enable voice communication with Xbox Live. The company created a special headset for the Xbox as a tie-in with the videogame Halo 2 in 2004. Plantronics exited the gaming and consumer markets in 2019, focusing on enterprise collaboration with its Poly brand. Corporate expansion and acquisitions Plantronics has expanded into other segments of the audio equipment market through acquisitions. Clarity In 1986, Plantronics acquired Walker Equipment of Ringgold, Georgia, a manufacturer of amplified handsets and telephones. The Clarity products were created to enhance telephone usability for those with hearing impairment. Walker later acquired Ameriphone in 2002, and became Walker Ameriphone before changing its name to Clarity; Clarity is now a US supplier of amplified telephones. In February 2015, Plantronics released the Clarity 340, a handset-style phone for unified communications. Altec Lansing In 2005, Plantronics acquired computer speaker manufacturer Altec Lansing for approximately $166 million. In spite of a corporate makeover the brand continued to struggle and was acquired by Prophet Equity in October 2009 for approximately $18 million. Volume Logic Plantronics later acquired Octiv, Inc. in March 2005 as one of its brands and renamed it the Volume Logic division. Octiv produced an audio toolset for creating 5.1 surround sound soundtracks.
Although the Volume Logic series of applications has since been discontinued, the underlying technology has been adapted for use in Plantronics telephony products. Polycom On March 28, 2018, Plantronics announced it would acquire Polycom for approximately $2 billion. 2005 to present In October 2016, long-time chief executive S. Kenneth Kannappan retired and was replaced by Joe Burton, who had joined in 2011. On February 10, 2020, Plantronics announced the appointment of Robert Hagerty as interim CEO, replacing Joe Burton. Gallery See also Plantronics Colorplus References Further reading External links 1961 establishments in California American companies established in 1961 Audio equipment manufacturers of the United States Bluetooth Companies based in Santa Cruz, California Companies formerly listed on the New York Stock Exchange Electronics companies established in 1961 Headphones manufacturers Manufacturing companies based in California 2022 mergers and acquisitions Hewlett-Packard acquisitions Computer companies of the United States Software companies of the United States
Plantronics
Technology
1,642
2,445,044
https://en.wikipedia.org/wiki/Deep%20reactive-ion%20etching
Deep reactive-ion etching (DRIE) is a special subclass of reactive-ion etching (RIE). It enables a highly anisotropic etch process used to create deep penetration, steep-sided holes and trenches in wafers/substrates, typically with high aspect ratios. It was developed for microelectromechanical systems (MEMS), which require these features, but is also used to excavate trenches for high-density capacitors for DRAM and more recently for creating through-silicon vias (TSVs) in advanced 3D wafer level packaging technology. In DRIE, the substrate is placed inside a reactor, and several gases are introduced. A plasma is struck in the gas mixture which breaks the gas molecules into ions. The ions are accelerated towards, and react with, the surface of the material being etched, forming a volatile gaseous product. This is known as the chemical part of the reactive ion etching. There is also a physical part: if ions have enough energy, they can knock atoms out of the material to be etched without chemical reaction. There are two main technologies for high-rate DRIE: cryogenic and Bosch, although the Bosch process is the only recognised production technique. Both Bosch and cryogenic processes can fabricate 90° (truly vertical) walls, but often the walls are slightly tapered, e.g. 88° ("reentrant") or 92° ("retrograde"). Another mechanism is sidewall passivation: SiOxFy functional groups (which originate from sulphur hexafluoride and oxygen etch gases) condense on the sidewalls, and protect them from lateral etching. As a combination of these processes, deep vertical structures can be made. Cryogenic process In cryogenic-DRIE, the wafer is chilled to −110 °C (163 K). The low temperature slows down the chemical reaction that produces isotropic etching. However, ions continue to bombard upward-facing surfaces and etch them away. This process produces trenches with highly vertical sidewalls. The primary issues with cryo-DRIE are that the standard masks on substrates crack under the extreme cold and that etch by-products tend to deposit on the nearest cold surface, i.e. the substrate or electrode. Bosch process The Bosch process, named after the German company Robert Bosch GmbH which patented the process, also known as pulsed or time-multiplexed etching, alternates repeatedly between two modes to achieve nearly vertical structures: A standard, nearly isotropic plasma etch. The plasma contains some ions, which attack the wafer from a nearly vertical direction. Sulfur hexafluoride [SF6] is often used for silicon. Deposition of a chemically inert passivation layer. (For instance, octafluorocyclobutane [C4F8] source gas yields a substance similar to Teflon.) Each phase lasts for several seconds. The passivation layer protects the entire substrate from further chemical attack and prevents further etching. However, during the etching phase, the directional ions that bombard the substrate attack the passivation layer at the bottom of the trench (but not along the sides). They collide with it and sputter it off, exposing the substrate to the chemical etchant. These etch/deposit steps are repeated many times over, resulting in a large number of very small isotropic etch steps taking place only at the bottom of the etched pits. To etch through a 0.5 mm silicon wafer, for example, 100–1000 etch/deposit steps are needed. The two-phase process causes the sidewalls to undulate with an amplitude of about 100–500 nm.
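As a back-of-the-envelope sketch (all numbers are illustrative assumptions, not process specifications), the cycle counts quoted above follow directly from the depth removed per etch/deposit cycle:

import math

def bosch_cycles(target_depth_um: float, etch_per_cycle_um: float) -> int:
    # number of etch/deposit cycles needed to reach the target depth
    return math.ceil(target_depth_um / etch_per_cycle_um)

# A 0.5 mm (500 um) wafer at an assumed 0.5-5 um removed per cycle reproduces
# the 100-1000 step range quoted above.
print(bosch_cycles(500.0, 5.0))   # -> 100
print(bosch_cycles(500.0, 0.5))   # -> 1000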
The cycle time can be adjusted: short cycles yield smoother walls, and long cycles yield a higher etch rate. Applications Etching depth typically depends on the application: in DRAM memory circuits, capacitor trenches may be 10–20 μm deep; in MEMS, DRIE is used for anything from a few micrometers to 0.5 mm; in irregular chip dicing, DRIE is used with a novel hybrid soft/hard mask to achieve sub-millimeter etching to dice silicon dies into lego-like pieces with irregular shapes; in flexible electronics, DRIE is used to make traditional monolithic CMOS devices flexible by reducing the thickness of silicon substrates to a few to tens of micrometers. DRIE is distinguished from RIE by its etch depth. Practical etch depths for RIE (as used in IC manufacturing) would be limited to around 10 μm at a rate up to 1 μm/min, while DRIE can etch much deeper features, up to 600 μm or more, with rates up to 20 μm/min or more in some applications. DRIE of glass requires high plasma power, which makes it difficult to find suitable mask materials for truly deep etching. Polysilicon and nickel are used for 10–50 μm etched depths. In DRIE of polymers, a Bosch process with alternating steps of SF6 etching and C4F8 passivation takes place. Metal masks can be used; however, they are expensive since several additional photo and deposition steps are always required. Metal masks are not necessary, however, on various substrates (Si [up to 800 μm], InP [up to 40 μm] or glass [up to 12 μm]) if using chemically amplified negative resists. Gallium ion implantation can be used as an etch mask in cryo-DRIE. A combined nanofabrication process of focused ion beam and cryo-DRIE was first reported by N. Chekurov et al. in their article "The fabrication of silicon nanostructures by local gallium implantation and cryogenic deep reactive ion etching". Precision machinery DRIE has enabled the use of silicon mechanical components in high-end wristwatches. According to an engineer at Cartier, "There is no limit to geometric shapes with DRIE." With DRIE it is possible to obtain an aspect ratio of 30 or more, meaning that a surface can be etched with a vertical-walled trench 30 times deeper than its width. This has allowed for silicon components to be substituted for some parts which are usually made of steel, such as the hairspring. Silicon is lighter and harder than steel, which carries benefits but makes the manufacturing process more challenging. See also Microelectromechanical systems References Semiconductor device fabrication Microtechnology Etching (microfabrication)
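A trivial helper (with assumed values, for illustration only) makes the aspect-ratio statement above concrete:

def max_trench_depth_um(width_um: float, aspect_ratio: float = 30.0) -> float:
    # depth achievable for a trench of the given width at the given aspect ratio
    return width_um * aspect_ratio

print(max_trench_depth_um(10.0))   # a 10 um wide trench could reach ~300 um deep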
Deep reactive-ion etching
Materials_science,Engineering
1,361
14,725,606
https://en.wikipedia.org/wiki/INSL3
Insulin-like 3 is a protein that in humans is encoded by the INSL3 gene. Function The protein encoded by this gene is an insulin-like hormone produced mainly in gonadal tissues in males and females. Studies of the mouse counterpart suggest that this gene may be involved in the development of the urogenital tract and in female fertility. INSL3 initiates meiotic progression in follicle-enclosed oocytes by mediating a reduction in intra-oocyte cAMP concentration through activation of leucine-rich repeat-containing G protein-coupled receptor 8 (LGR8). It may also act as a hormone to regulate the growth and differentiation of the gubernaculum, thus mediating intra-abdominal testicular descent. Mutations in this gene may lead to cryptorchidism, although they are not a frequent cause of it. References Further reading
INSL3
Chemistry
176
46,617,163
https://en.wikipedia.org/wiki/Penicillium%20marinum
Penicillium marinum is a species in the genus Penicillium which produces patulin and roquefortine C. Further reading References marinum Fungi described in 2004 Fungus species
Penicillium marinum
Biology
40
315,459
https://en.wikipedia.org/wiki/Endoderm
Endoderm is the innermost of the three primary germ layers in the very early embryo. The other two layers are the ectoderm (outside layer) and mesoderm (middle layer). Cells migrating inward along the archenteron form the inner layer of the gastrula, which develops into the endoderm. The endoderm consists at first of flattened cells, which subsequently become columnar. It forms the epithelial lining of multiple systems. In plant biology, endoderm corresponds to the innermost part of the cortex (bark) in young shoots and young roots often consisting of a single cell layer. As the plant becomes older, more endoderm will lignify. Production The following chart shows the tissues produced by the endoderm. The embryonic endoderm develops into the interior linings of two tubes in the body, the digestive and respiratory tube. Liver and pancreas cells are believed to derive from a common precursor. In humans, the endoderm can differentiate into distinguishable organs after 5 weeks of embryonic development. Additional images See also Hypoblast of primitive endoderm Ectoderm Germ layer Histogenesis Mesoderm Organogenesis Endodermal sinus tumor Gastrulation Cell differentiation Triploblasty List of human cell types derived from the germ layers References Germ layers Developmental biology Embryology Gastrulation
Endoderm
Biology
292
5,416,952
https://en.wikipedia.org/wiki/Associazione%20Friulana%20di%20Astronomia%20e%20Meteorologia
The Associazione Friulana di Astronomia e Meteorologia (AFAM; in English, the Friulian Association of Astronomy and Meteorology) is a non-profit cultural association whose goal is the promotion of astronomy and meteorology to the public and the development of scientific research activities, often in collaboration with professional scientists. Established in 1969, AFAM now has its own facilities in Remanzacco (Friuli, Italy). AFAM is a member of the Unione Astrofili Italiani (the Italian union of amateur astronomers). The Association has its own library, a conference room, and a permanent astronomical observatory with optical instruments for visual observation and CCD sensors for research. Members Luca Donato, president Giovanni Sostero See also List of astronomical societies References External links Official site of the Associazione Friulana di Astronomia e Meteorologia Astronomy organizations 1969 establishments in Italy Scientific organizations established in 1969 Astronomy in Italy
Associazione Friulana di Astronomia e Meteorologia
Astronomy
196
43,092,207
https://en.wikipedia.org/wiki/Formal%20ball
In topology, a branch of mathematics, a formal ball is an extension of the notion of ball to allow unbounded and negative radius. The concept of formal ball was introduced by Weihrauch and Schreiber in 1981 and the negative radius case (the generalized formal ball) by Tsuiki and Hattori in 2008. Specifically, if (X, d) is a metric space, then an element of X × R₊ is a formal ball, where R₊ is the set of nonnegative real numbers. Elements of X × R are known as generalized formal balls. Formal balls possess a partial order defined by (x, r) ≤ (y, s) if d(x, y) ≤ r − s. Generalized formal balls are interesting because this partial order works just as well for X × R as for X × R₊, even though a generalized formal ball with negative radius does not correspond to a subset of X. Formal balls possess the Lawson topology and the Martin topology. References K. Weihrauch and U. Schreiber 1981. "Embedding metric spaces into CPOs". Theoretical computer science, 16:5-24. H. Tsuiki and Y. Hattori 2008. "Lawson topology of the space of formal balls and the hyperbolic topology of a metric space". Theoretical computer science, 405:198-205. Y. Hattori 2010. "Order and topological structures of posets of the formal balls on metric spaces". Memoirs of the Faculty of Science and Engineering, Shimane University, Series B 43:13-26. Topology
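A minimal sketch of the order on formal balls as reconstructed above, (x, r) ≤ (y, s) iff d(x, y) ≤ r − s; the use of the real line with the absolute-value metric is an illustrative assumption, not part of the definition:

from dataclasses import dataclass

def d(x: float, y: float) -> float:
    # the usual metric on the real line
    return abs(x - y)

@dataclass(frozen=True)
class FormalBall:
    center: float
    radius: float   # may be negative for generalized formal balls

    def __le__(self, other: "FormalBall") -> bool:
        return d(self.center, other.center) <= self.radius - other.radius

assert FormalBall(0.0, 2.0) <= FormalBall(1.0, 1.0)    # a larger ball sits below a smaller ball it contains
assert FormalBall(0.0, -1.0) <= FormalBall(0.0, -2.0)  # the same order still works for negative radii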
Formal ball
Physics,Mathematics
282
52,133,580
https://en.wikipedia.org/wiki/List%20of%20drugs%20by%20year%20of%20discovery
The following is a table of drugs organized by their year of discovery. Naturally occurring chemicals in plants, including alkaloids, have been used since pre-history. In the modern era, plant-based drugs have been isolated, purified and synthesised anew. Synthesis of drugs has led to novel drugs, including those that have not existed before in nature, particularly drugs based on known drugs which have been modified by chemical or biological processes. Antiquity Prehistory Archaeological evidence indicates that the use of medicinal plants dates back to the Paleolithic age. 4th millennium BCE In ancient Egypt, herbs are mentioned in Egyptian medical papyri, depicted in tomb illustrations, or on rare occasions found in medical jars containing trace amounts of herbs. Medical recipes from 4000 BCE were for liquid preparations rather than solids. In the 4th millennium BCE, Soma (drink) and Haoma are named, but it is not clear what ingredients were used to prepare them. 3rd millennium BCE 2nd millennium BCE Written around 1600 BCE, the Edwin Smith Papyrus describes the use of many herbal drugs. The Ebers Papyrus – one of the most important medical papyri of ancient Egypt – was written around 1550 BCE, and covers more than 700 drugs, mainly of plant origin. The first references to pills were found on papyri in ancient Egypt; the pills contained bread dough, honey, or grease. Medicinal ingredients such as plant powders or spices were mixed in and formed by hand to make little balls, or pills. The papyri also describe how to prepare herbal teas, poultices, ointments, eye drops, suppositories, enemas, laxatives, etc. Aloe vera was used in the 2nd millennium BCE. 1st millennium BCE In Greece, Theophrastus of Eresos wrote Historia Plantarum in the 4th century BCE. Seeds likely used for herbalism have been found in archaeological sites of Bronze Age China dating from the Shang dynasty (c. 1600 BCE–c. 1046 BCE). Over a hundred of the 224 drugs mentioned in the Huangdi Neijing – an early Chinese medical text – are herbs. Herbs also commonly featured in the medicine of ancient India, where the principal treatment for diseases was diet. Opioids are among the world's oldest known drugs. Use of the opium poppy for medical, recreational, and religious purposes can be traced to the 4th century BCE, when Hippocrates wrote about it for its analgesic properties, stating, "Divinum opus est sedare dolores." ("Divine work is the easing of pain") 1st century CE In ancient Greece, pills were known as katapotia ("something to be swallowed"). Pliny the Elder, who lived from 23–79 CE, first gave a name to what we now call pills, calling them pilulae. Pliny also wrote Naturalis Historia, a collection of 37 books and the first pharmacopoeia. Pedanius Dioscorides wrote De Materia Medica (c. 40 – 90 CE); this book dominated the area of drug knowledge for some 1500 years until the 1600s. Jojoba was used in the 1st millennium CE. 2nd century CE Aelius Galenus wrote more than 11 books about drugs and also used terra sigillata with kaolinite and goat's blood to produce tablets. Post-classical to Early modern Drugs developed in the post-classical (circa 500 to 1450) or early modern eras (circa 1453 to 1789). 6th–11th century CE In the Middle Ages, ointments were a common dosage form. 11th century CE Avicenna separated medicine and pharmacy and in 1025 published his book The Canon of Medicine, an encyclopedia of medicine formed of five books. Drugs mentioned by Avicenna include agaric, scammony and euphorbium.
The latex of Euphorbia resinifera contains resiniferatoxin, an ultra-potent capsaicin analog. Desensitization to resiniferatoxin has been tested in clinical trials to treat neuropathic pain. 16th century CE Paracelsus expounded the concept of dose response in his Third Defense, where he stated that "Solely the dose determines that a thing is not a poison." This was used to defend his use of inorganic substances in medicine, as outsiders frequently criticized Paracelsus' chemical agents as too toxic to be used as therapeutic agents. Paracelsus discovered that the alkaloids in opium are far more soluble in alcohol than water. Having experimented with various opium concoctions, Paracelsus came across a specific tincture of opium that was of considerable use in reducing pain. He called this preparation laudanum. For over a thousand years South American indigenous peoples have chewed Erythroxylon coca leaves, which contain alkaloids such as cocaine. Coca leaf remains have been found with ancient Peruvian mummies. There is also evidence coca leaves were used as an anesthetic. In 1569, Spanish botanist Nicolás Monardes described the indigenous peoples' practice of chewing a mixture of tobacco and coca leaves to induce "great contentment". 1400s Nicotine (Tobacco) 18th century CE In 1778 John Mudge created the first inhaler devices. In 1747, James Lind, surgeon of HMS Salisbury, conducted the first clinical trial ever recorded, in which he studied how citrus fruit could cure scurvy. Modern 19th century CE In the 1830s chemist Justus von Liebig began the synthesis of organic molecules, stating that "The production of all organic substances no longer belongs just to living organisms." In 1832 he produced chloral hydrate, the first synthetic sleeping drug. In 1833 French chemist Anselme Payen was the first to discover an enzyme, diastase. In 1834, François Mothes and Joseph Dublanc created a method to produce a single-piece gelatin capsule that was sealed with a drop of gelatin solution. In 1853 Alexander Wood was the first physician to use a hypodermic needle to administer drugs by injection. In 1858 Dr. M. Sales Giron invented the first pressurized inhaler. Amphetamine was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu, who named it phenylisopropylamine; its stimulant effects remained unknown until 1927, when it was independently resynthesized by Gordon Alles and reported to have sympathomimetic properties. Shortly after amphetamine, methamphetamine was synthesized from ephedrine in 1893 by Japanese chemist Nagai Nagayoshi. Three decades later, in 1919, methamphetamine hydrochloride was synthesized by pharmacologist Akira Ogata via reduction of ephedrine using red phosphorus and iodine. 20th century CE In 1901 Jōkichi Takamine isolated and synthesized the first hormone, adrenaline. In 1907 Alfred Bertheim synthesized arsphenamine, the first man-made antibiotic. In 1927 Erik Rotheim patented the first aerosol spray can. In 1933 Robert Pauli Scherer created a method to develop softgels. William Roberts's studies of penicillin were continued by Alexander Fleming, who in 1928 concluded that penicillin had an antibiotic effect. In 1944 Howard Florey and Ernst Boris Chain mass-produced penicillin. In 1948 Raymond P. Ahlquist published his seminal work dividing adrenoceptors into α- and β-adrenoceptor subtypes, which allowed a better understanding of drugs' mechanisms of action. In 1987, after the Montreal Protocol, CFC inhalers were phased out and HFA inhalers replaced them.
In 1987 the CRISPR sequences were discovered by Yoshizumi Ishino; in the following century they would be used for genome editing. 21st century CE The 21st century began with the first complete sequences of individual human genomes by the Human Genome Project, on 12 February 2001. This allowed a switch in drug development and research from the traditional approach of drug discovery, which isolated molecules from plants or animals or created new molecules and tested whether they could be useful in treating human illness, to pharmacogenomics, that is, the study and knowledge of how genes respond to drugs. Another field that benefited from the Human Genome Project is pharmacogenetics, the study of inherited genetic differences in drug metabolic pathways that can affect individual responses to drugs, in terms of both therapeutic effect and adverse effects. Human genome studies have also made it possible to identify which genes are responsible for illnesses, to develop drugs for rare diseases, and to treat illness through gene therapy. In 2015 a simplified form of CRISPR editing using Cas9 was applied in humans, along with an even simpler method, Cas12a, which prevents genetic damage from viruses. These advances are improving personalized medicine and allowing precision medicine. (Legend for the drugs table: MA = monoclonal antibody; SM = small molecule; ACT = adoptive cell transfer.) See also List of drugs Lists of molecules History of medicine List of pharmaceutical laboratories by year of foundation Lists of diseases by year of discovery Discovery and development of beta2 agonists Pharmacopoeia Edwin Smith Papyrus De Materia Medica Shennong Ben Cao Jing The Canon of Medicine The Book of Healing References External links The Canon of Medicine (text) Year Medical history-related lists
List of drugs by year of discovery
Chemistry,Biology
1,900
361,609
https://en.wikipedia.org/wiki/Moduli%20space
In mathematics, in particular algebraic geometry, a moduli space is a geometric space (usually a scheme or an algebraic stack) whose points represent algebro-geometric objects of some fixed kind, or isomorphism classes of such objects. Such spaces frequently arise as solutions to classification problems: If one can show that a collection of interesting objects (e.g., the smooth algebraic curves of a fixed genus) can be given the structure of a geometric space, then one can parametrize such objects by introducing coordinates on the resulting space. In this context, the term "modulus" is used synonymously with "parameter"; moduli spaces were first understood as spaces of parameters rather than as spaces of objects. A variant of moduli spaces is formal moduli. Bernhard Riemann first used the term "moduli" in 1857. Motivation Moduli spaces are spaces of solutions of geometric classification problems. That is, the points of a moduli space correspond to solutions of geometric problems. Here different solutions are identified if they are isomorphic (that is, geometrically the same). Moduli spaces can be thought of as giving a universal space of parameters for the problem. For example, consider the problem of finding all circles in the Euclidean plane up to congruence. Any circle can be described uniquely by giving three points, but many different sets of three points give the same circle: the correspondence is many-to-one. However, circles are uniquely parameterized by giving their center and radius: this is two real parameters and one positive real parameter. Since we are only interested in circles "up to congruence", we identify circles having different centers but the same radius, and so the radius alone suffices to parameterize the set of interest. The moduli space is, therefore, the positive real numbers. Moduli spaces often carry natural geometric and topological structures as well. In the example of circles, for instance, the moduli space is not just an abstract set, but the absolute value of the difference of the radii defines a metric for determining when two circles are "close". The geometric structure of moduli spaces locally tells us when two solutions of a geometric classification problem are "close", but generally moduli spaces also have a complicated global structure as well. For example, consider how to describe the collection of lines in R2 that intersect the origin. We want to assign to each line L of this family a quantity that can uniquely identify it—a modulus. An example of such a quantity is the positive angle θ(L) with 0 ≤ θ < π radians. The set of lines L so parametrized is known as P1(R) and is called the real projective line. We can also describe the collection of lines in R2 that intersect the origin by means of a topological construction. To wit: consider the unit circle S1 ⊂ R2 and notice that every point s ∈ S1 gives a line L(s) in the collection (which joins the origin and s). However, this map is two-to-one, so we want to identify s ~ −s to yield P1(R) ≅ S1/~ where the topology on this space is the quotient topology induced by the quotient map S1 → P1(R). Thus, when we consider P1(R) as a moduli space of lines that intersect the origin in R2, we capture the ways in which the members (lines in this case) of the family can modulate by continuously varying 0 ≤ θ < π. Basic examples Projective space and Grassmannians The real projective space Pn is a moduli space that parametrizes the space of lines in Rn+1 which pass through the origin. 
Similarly, complex projective space is the space of all complex lines in Cn+1 passing through the origin. More generally, the Grassmannian G(k, V) of a vector space V over a field F is the moduli space of all k-dimensional linear subspaces of V. Projective space as moduli of very ample line bundles generated by global sections Whenever there is an embedding of a scheme X into the universal projective space Pn, the embedding is given by a line bundle L on X and sections s0, ..., sn of L which do not all vanish at the same time. This means that, given a point x in X, there is an associated point [s0(x) : ... : sn(x)] in Pn, given by composing the sections with evaluation at x. Then, two line bundles with sections (L, s0, ..., sn) and (L', s0', ..., sn') are equivalent iff there is an isomorphism φ : L → L' such that φ(si) = si' for all i. This means the associated moduli functor sends a scheme S to the set of such equivalence classes of line bundles with sections over S. Showing this is true can be done by running through a series of tautologies: any projective embedding i : X → Pn gives the globally generated sheaf i*O(1) with sections i*x0, ..., i*xn. Conversely, an ample line bundle globally generated by n + 1 sections gives an embedding as above. Chow variety The Chow variety Chow(d,P3) is a projective algebraic variety which parametrizes degree d curves in P3. It is constructed as follows. Let C be a curve of degree d in P3, then consider all the lines in P3 that intersect the curve C. This is a degree d divisor DC in G(2, 4), the Grassmannian of lines in P3. When C varies, by associating C to DC, we obtain a parameter space of degree d curves as a subset of the space of degree d divisors of the Grassmannian: Chow(d,P3). Hilbert scheme The Hilbert scheme Hilb(X) is a moduli scheme. Every closed point of Hilb(X) corresponds to a closed subscheme of a fixed scheme X, and every closed subscheme is represented by such a point. A simple example of a Hilbert scheme is the Hilbert scheme parameterizing degree d hypersurfaces of projective space Pn. This is given by the projectivization P(Γ(O(d))) of the space of degree d forms, with universal family given by the incidence scheme {(x, f) : f(x) = 0} in Pn × P(Γ(O(d))), where V(f) is the associated projective scheme for the degree d homogeneous polynomial f. Definitions There are several related notions of things we could call moduli spaces. Each of these definitions formalizes a different notion of what it means for the points of space M to represent geometric objects. Fine moduli spaces This is the standard concept. Heuristically, if we have a space M for which each point m ∊ M corresponds to an algebro-geometric object Um, then we can assemble these objects into a tautological family U over M. (For example, the Grassmannian G(k, V) carries a rank k bundle whose fiber at any point [L] ∊ G(k, V) is simply the linear subspace L ⊂ V.) M is called a base space of the family U. We say that such a family is universal if any family of algebro-geometric objects T over any base space B is the pullback of U along a unique map B → M. A fine moduli space is a space M which is the base of a universal family. More precisely, suppose that we have a functor F from schemes to sets, which assigns to a scheme B the set of all suitable families of objects with base B. A space M is a fine moduli space for the functor F if M represents F, i.e., there is a natural isomorphism τ : F → Hom(−, M), where Hom(−, M) is the functor of points. This implies that M carries a universal family; this family is the family on M corresponding to the identity map 1M ∊ Hom(M, M). Coarse moduli spaces Fine moduli spaces are desirable, but they do not always exist and are frequently difficult to construct, so mathematicians sometimes use a weaker notion, the idea of a coarse moduli space.
A space M is a coarse moduli space for the functor F if there exists a natural transformation τ : F → Hom(−, M) and τ is universal among such natural transformations. More concretely, M is a coarse moduli space for F if any family T over a base B gives rise to a map φT : B → M and any two objects V and W (regarded as families over a point) correspond to the same point of M if and only if V and W are isomorphic. Thus, M is a space which has a point for every object that could appear in a family, and whose geometry reflects the ways objects can vary in families. Note, however, that a coarse moduli space does not necessarily carry any family of appropriate objects, let alone a universal one. In other words, a fine moduli space includes both a base space M and a universal family U → M, while a coarse moduli space only has the base space M. Moduli stacks It is frequently the case that interesting geometric objects come equipped with many natural automorphisms. This in particular makes the existence of a fine moduli space impossible. (Intuitively, the idea is that if L is some geometric object, the trivial family L × [0,1] can be made into a twisted family on the circle S1 by identifying L × {0} with L × {1} via a nontrivial automorphism. Now if a fine moduli space X existed, the map S1 → X should not be constant, but would have to be constant on any proper open set by triviality.) Even when no fine moduli space exists, one can still sometimes obtain a coarse moduli space. However, this approach is not ideal, as such spaces are not guaranteed to exist, they are frequently singular when they do exist, and they miss details about some non-trivial families of objects they classify. A more sophisticated approach is to enrich the classification by remembering the isomorphisms. More precisely, on any base B one can consider the category of families on B with only isomorphisms between families taken as morphisms. One then considers the fibred category which assigns to any space B the groupoid of families over B. The use of these categories fibred in groupoids to describe a moduli problem goes back to Grothendieck (1960/61). In general, they cannot be represented by schemes or even algebraic spaces, but in many cases, they have a natural structure of an algebraic stack. Algebraic stacks and their use to analyze moduli problems appeared in Deligne-Mumford (1969) as a tool to prove the irreducibility of the (coarse) moduli space of curves of a given genus. The language of algebraic stacks essentially provides a systematic way to view the fibred category that constitutes the moduli problem as a "space", and the moduli stack of many moduli problems is better-behaved (such as smooth) than the corresponding coarse moduli space. Further examples Moduli of curves The moduli stack Mg classifies families of smooth projective curves of genus g, together with their isomorphisms. When g > 1, this stack may be compactified by adding new "boundary" points which correspond to stable nodal curves (together with their isomorphisms). A curve is stable if it has only a finite group of automorphisms. The resulting stack is denoted M̄g. Both moduli stacks carry universal families of curves. One can also define coarse moduli spaces representing isomorphism classes of smooth or stable curves. These coarse moduli spaces were actually studied before the notion of moduli stack was invented. In fact, the idea of a moduli stack was invented by Deligne and Mumford in an attempt to prove the projectivity of the coarse moduli spaces. 
In recent years, it has become apparent that the stack of curves is actually the more fundamental object. Both stacks above have dimension 3g−3; hence a stable nodal curve can be completely specified by choosing the values of 3g−3 parameters, when g > 1. In lower genus, one must account for the presence of smooth families of automorphisms, by subtracting their number. There is exactly one complex curve of genus zero, the Riemann sphere, and its group of isomorphisms is PGL(2). Hence, the dimension of M0 is dim(space of genus zero curves) − dim(group of automorphisms) = 0 − dim(PGL(2)) = −3. Likewise, in genus 1, there is a one-dimensional space of curves, but every such curve has a one-dimensional group of automorphisms. Hence, the stack M1 has dimension 0. The coarse moduli spaces have the same dimension 3g−3 as the stacks when g > 1, because curves of genus g > 1 have only a finite group of automorphisms, i.e. dim(group of automorphisms) = 0. Finally, in genus zero, the coarse moduli space has dimension zero, and in genus one, it has dimension one. One can also enrich the problem by considering the moduli stack of genus g nodal curves with n marked points. Such marked curves are said to be stable if the subgroup of curve automorphisms which fix the marked points is finite. The resulting moduli stacks of smooth (or stable) genus g curves with n marked points are denoted Mg,n (or M̄g,n), and have dimension 3g − 3 + n (these counts are checked in the short sketch at the end of this group of examples). A case of particular interest is the moduli stack of genus 1 curves with one marked point. This is the stack of elliptic curves, and is the natural home of the much-studied modular forms, which are meromorphic sections of bundles on this stack. Moduli of varieties In higher dimensions, moduli of algebraic varieties are more difficult to construct and study. For instance, the higher-dimensional analogue of the moduli space of elliptic curves discussed above is the moduli space of abelian varieties, such as the Siegel modular variety. This is the problem underlying Siegel modular form theory. See also Shimura variety. Using techniques arising out of the minimal model program, moduli spaces of varieties of general type were constructed by János Kollár and Nicholas Shepherd-Barron, now known as KSB moduli spaces. Using techniques arising out of differential geometry and birational geometry simultaneously, the construction of moduli spaces of Fano varieties has been achieved by restricting to a special class of K-stable varieties. In this setting important results about boundedness of Fano varieties proven by Caucher Birkar are used, for which he was awarded the 2018 Fields Medal. The construction of moduli spaces of Calabi-Yau varieties is an important open problem, and only special cases such as moduli spaces of K3 surfaces or abelian varieties are understood. Moduli of vector bundles Another important moduli problem is to understand the geometry of (various substacks of) the moduli stack Vectn(X) of rank n vector bundles on a fixed algebraic variety X. This stack has been most studied when X is one-dimensional, and especially when n equals one. In this case, the coarse moduli space is the Picard scheme, which like the moduli space of curves, was studied before stacks were invented. When the bundles have rank 1 and degree zero, the study of the coarse moduli space is the study of the Jacobian variety. In applications to physics, the number of moduli of vector bundles and the closely related problem of the number of moduli of principal G-bundles has been found to be significant in gauge theory. 
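As a quick sanity check of the dimension counts for curves discussed above, here is a minimal Python sketch (our own illustration; the function name and the use of the standard stability inequality 2g − 2 + n > 0 are assumptions of the sketch, not taken from the text):

def dim_moduli_curves(g, n=0):
    # Dimension 3g - 3 + n of the moduli stack of stable genus-g curves
    # with n marked points, defined when the standard stability
    # condition 2g - 2 + n > 0 holds.
    if 2 * g - 2 + n <= 0:
        raise ValueError("no stable curves: need 2g - 2 + n > 0")
    return 3 * g - 3 + n

assert dim_moduli_curves(2) == 3       # genus 2, unmarked: 3g - 3 = 3
assert dim_moduli_curves(1, n=1) == 1  # elliptic curves with one marked point
assert dim_moduli_curves(0, n=3) == 0  # genus 0 is rigidified by 3 marked points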
Volume of the moduli space Mirzakhani's work relates the counting of simple geodesics to the Weil-Petersson volumes of moduli spaces of bordered Riemann surfaces (see the Mirzakhani reference below). Methods for constructing moduli spaces The modern formulation of moduli problems and definition of moduli spaces in terms of the moduli functors (or more generally the categories fibred in groupoids), and spaces (almost) representing them, dates back to Grothendieck (1960/61), in which he described the general framework, approaches, and main problems using Teichmüller spaces in complex analytical geometry as an example. The talks, in particular, describe the general method of constructing moduli spaces by first rigidifying the moduli problem under consideration. More precisely, the existence of non-trivial automorphisms of the objects being classified makes it impossible to have a fine moduli space. However, it is often possible to consider a modified moduli problem of classifying the original objects together with additional data, chosen in such a way that the identity is the only automorphism that also respects the additional data. With a suitable choice of the rigidifying data, the modified moduli problem will have a (fine) moduli space T, often described as a subscheme of a suitable Hilbert scheme or Quot scheme. The rigidifying data is moreover chosen so that it corresponds to a principal bundle with an algebraic structure group G. Thus one can move back from the rigidified problem to the original by taking the quotient by the action of G, and the problem of constructing the moduli space becomes that of finding a scheme (or more general space) that is (in a suitably strong sense) the quotient T/G of T by the action of G. The last problem, in general, does not admit a solution; however, it is addressed by the groundbreaking geometric invariant theory (GIT), developed by David Mumford in 1965, which shows that under suitable conditions the quotient indeed exists. To see how this might work, consider the problem of parametrizing smooth curves of genus g > 2. A smooth curve together with a complete linear system of degree d > 2g is equivalent to a closed one-dimensional subscheme of the projective space Pd−g. Consequently, the moduli space of smooth curves and linear systems (satisfying certain criteria) may be embedded in the Hilbert scheme of a sufficiently high-dimensional projective space. This locus H in the Hilbert scheme has an action of PGL(n) which mixes the elements of the linear system; consequently, the moduli space of smooth curves is then recovered as the quotient of H by the projective general linear group. Another general approach is primarily associated with Michael Artin. Here the idea is to start with an object of the kind to be classified and study its deformation theory. This means first constructing infinitesimal deformations, then appealing to prorepresentability theorems to put these together into an object over a formal base. Next, an appeal to Grothendieck's formal existence theorem provides an object of the desired kind over a base which is a complete local ring. This object can be approximated via Artin's approximation theorem by an object defined over a finitely generated ring. The spectrum of this latter ring can then be viewed as giving a kind of coordinate chart on the desired moduli space. By gluing together enough of these charts, we can cover the space, but the map from our union of spectra to the moduli space will, in general, be many-to-one. 
We, therefore, define an equivalence relation on the former; essentially, two points are equivalent if the objects over each are isomorphic. This gives a scheme and an equivalence relation, which is enough to define an algebraic space (actually an algebraic stack if we are being careful) if not always a scheme. In physics The term moduli space is sometimes used in physics to refer specifically to the moduli space of vacuum expectation values of a set of scalar fields, or to the moduli space of possible string backgrounds. Moduli spaces also appear in physics in topological field theory, where one can use Feynman path integrals to compute the intersection numbers of various algebraic moduli spaces. See also Construction tools Hilbert scheme Quot scheme Deformation theory GIT quotient Artin's criterion, general criterion for constructing moduli spaces as algebraic stacks from moduli functors Moduli spaces Moduli of algebraic curves Moduli stack of elliptic curves Moduli spaces of K-stable Fano varieties Modular curve Picard functor Moduli of semistable sheaves on a curve Kontsevich moduli space Moduli of semistable sheaves References Notes Moduli theory Moduli stacks in p-adic modular forms and Langlands program Research articles Fundamental papers Mumford, David, Geometric invariant theory, Ergebnisse der Mathematik und ihrer Grenzgebiete, Neue Folge, Band 34, Springer-Verlag, Berlin-New York, 1965, vi+145 pp. Mumford, David; Fogarty, J.; Kirwan, F., Geometric invariant theory, third edition, Ergebnisse der Mathematik und ihrer Grenzgebiete (2) (Results in Mathematics and Related Areas (2)), 34, Springer-Verlag, Berlin, 1994, xiv+292 pp. Early applications Other references Papadopoulos, Athanase, ed. (2007), Handbook of Teichmüller theory. Vol. I, IRMA Lectures in Mathematics and Theoretical Physics, 11, European Mathematical Society (EMS), Zürich. Papadopoulos, Athanase, ed. (2009), Handbook of Teichmüller theory. Vol. II, IRMA Lectures in Mathematics and Theoretical Physics, 13, European Mathematical Society (EMS), Zürich. Papadopoulos, Athanase, ed. (2012), Handbook of Teichmüller theory. Vol. III, IRMA Lectures in Mathematics and Theoretical Physics, 17, European Mathematical Society (EMS), Zürich. Other articles and sources Maryam Mirzakhani (2007), "Simple geodesics and Weil-Petersson volumes of moduli spaces of bordered Riemann surfaces", Inventiones Mathematicae. External links Moduli theory Invariant theory
Moduli space
Physics
4,425
18,436,197
https://en.wikipedia.org/wiki/General%20Electric%20EdgeLab
edgelab (typically expressed with a leading lowercase "e") was an applied academic research lab, established in 2000 as a partnership between General Electric and the University of Connecticut. edgelab approached GE businesses three times per year, identifying key strategic initiatives for program execution. Fifteen projects were selected each year (five per semester), and students, faculty, and on-site GE staff worked on these projects full-time for the 13-week session. The edgelab program was discontinued in Spring 2011. The School of Business and General Electric have announced their plans to continue the relationship by creating a new joint venture. References Computing and society General Electric Research institutes in Connecticut University of Connecticut Economy of Stamford, Connecticut
General Electric EdgeLab
Technology
142
384,743
https://en.wikipedia.org/wiki/Cyclopropane
Cyclopropane is the cycloalkane with the molecular formula (CH2)3, consisting of three methylene groups (CH2) linked to each other to form a triangular ring. The small size of the ring creates substantial ring strain in the structure. Cyclopropane itself is mainly of theoretical interest but many of its derivatives - cyclopropanes - are of commercial or biological significance. Cyclopropane was used as a clinical inhalational anesthetic from the 1930s through the 1980s. The substance's high flammability poses a risk of fire and explosions in operating rooms due to its tendency to accumulate in confined spaces, as its density is higher than that of air. History Cyclopropane was discovered in 1881 by August Freund, who also proposed the correct structure for the substance in his first paper. Freund treated 1,3-dibromopropane with sodium, causing an intramolecular Wurtz reaction leading directly to cyclopropane. The yield of the reaction was improved by Gustavson in 1887 with the use of zinc instead of sodium. Cyclopropane had no commercial application until Henderson and Lucas discovered its anaesthetic properties in 1929; industrial production had begun by 1936. In modern anaesthetic practice, it has been superseded by other agents. Anaesthesia Cyclopropane was introduced into clinical use by the American anaesthetist Ralph Waters who used a closed system with carbon dioxide absorption to conserve this then-costly agent. Cyclopropane is a relatively potent, non-irritating and sweet-smelling agent with a minimum alveolar concentration of 17.5% and a blood/gas partition coefficient of 0.55. This meant that induction of anaesthesia by inhalation of cyclopropane and oxygen was rapid and not unpleasant. However, at the conclusion of prolonged anaesthesia, patients could suffer a sudden decrease in blood pressure, potentially leading to cardiac dysrhythmia: a reaction known as "cyclopropane shock". For this reason, as well as its high cost and its explosive nature, it was latterly used only for the induction of anaesthesia, and has not been available for clinical use since the mid-1980s. Cylinders and flow meters were colored orange. Pharmacology Cyclopropane is inactive at the GABAA and glycine receptors, and instead acts as an NMDA receptor antagonist. It also inhibits the AMPA receptor and nicotinic acetylcholine receptors, and activates certain K2P channels. Structure and bonding The triangular structure of cyclopropane requires the bond angles between carbon-carbon covalent bonds to be 60°. The molecule has D3h molecular symmetry. The C-C distances are 151 pm, versus 153-155 pm for typical C-C single bonds in alkanes. Despite their shortness, the C-C bonds in cyclopropane are about 34 kcal/mol weaker than ordinary C-C bonds. In addition to ring strain, the molecule also has torsional strain due to the eclipsed conformation of its hydrogen atoms. The C-H bonds in cyclopropane are stronger than ordinary C-H bonds as reflected by NMR coupling constants. Bonding between the carbon centres is generally described in terms of bent bonds. In this model the carbon-carbon bonds are bent outwards so that the inter-orbital angle is 104°. The unusual structural properties of cyclopropane have spawned many theoretical discussions. 
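The geometric origin of the ring strain is easy to verify numerically. The following small Python sketch (an added illustration, not from the source) places the three carbons at the vertices of an equilateral triangle with the 151 pm C-C distance quoted above and confirms the 60° C-C-C angle, roughly 49° below the ideal tetrahedral angle of about 109.5°.

import math

CC = 151.0  # C-C bond length in picometres, as quoted above

# Vertices of an equilateral triangle with side CC, centred at the origin;
# the circumradius of an equilateral triangle with side s is s / sqrt(3).
r = CC / math.sqrt(3.0)
carbons = [(r * math.cos(2 * math.pi * k / 3),
            r * math.sin(2 * math.pi * k / 3)) for k in range(3)]

def angle_at(b, a, c):
    # Angle in degrees at vertex a, formed by the segments to b and c.
    v1 = (b[0] - a[0], b[1] - a[1])
    v2 = (c[0] - a[0], c[1] - a[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

print(angle_at(carbons[1], carbons[0], carbons[2]))  # 60.0
print(109.47 - 60.0)  # deviation from the tetrahedral angle, ~49 degrees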
One theory invokes σ-aromaticity: the stabilization afforded by delocalization of the six electrons of cyclopropane's three C-C σ bonds to explain why the strain of cyclopropane is "only" 27.6 kcal/mol as compared to cyclobutane (26.2 kcal/mol) with cyclohexane as reference with Estr=0 kcal/mol, in contrast to the usual π aromaticity, that, for example, has a highly stabilizing effect in benzene. Other studies do not support the role of σ-aromaticity in cyclopropane and the existence of an induced ring current; such studies provide an alternative explanation for the energetic stabilization and abnormal magnetic behaviour of cyclopropane. Synthesis Cyclopropane was first produced via a Wurtz coupling, in which 1,3-dibromopropane was cyclised using sodium. The yield of this reaction can be improved by the use of zinc as the dehalogenating agent and sodium iodide as a catalyst. BrCH2CH2CH2Br + 2 Na → (CH2)3 + 2 NaBr The preparation of cyclopropane rings is referred to as cyclopropanation. Reactions Owing to the increased π-character of its C-C bonds, cyclopropane is often assumed to add bromine to give 1,3-dibromopropane, but this reaction proceeds poorly. Hydrohalogenation with hydrohalic acids gives linear 1-halopropanes. Substituted cyclopropanes also react, following Markovnikov's rule. Cyclopropane and its derivatives can oxidatively add to transition metals, in a process referred to as C–C activation. Safety Cyclopropane is highly flammable. However, despite its strain energy it does not exhibit explosive behavior substantially different from other alkanes. See also Tetrahedrane contains four fused cyclopropane rings that form the faces of a tetrahedron Propellane contains three cyclopropane rings that share a single central carbon-carbon bond. Spiropentane is two cyclopropane rings fused at a vertex Cyclopropene Methylenecyclopropane References External links Synthesis of Cyclopropanes and related compounds Carbon triangle General anesthetics NMDA receptor antagonists Nicotinic antagonists AMPA receptor antagonists Gases
Cyclopropane
Physics,Chemistry
1,255
2,179,639
https://en.wikipedia.org/wiki/Obstruction%20theory
In mathematics, obstruction theory is a name given to two different mathematical theories, both of which yield cohomological invariants. In the original work of Stiefel and Whitney, characteristic classes were defined as obstructions to the existence of certain fields of linearly independent vectors. Obstruction theory turns out to be an application of cohomology theory to the problem of constructing a cross-section of a bundle. In homotopy theory The older meaning for obstruction theory in homotopy theory relates to the procedure, inductive with respect to dimension, for extending a continuous mapping defined on a simplicial complex, or CW complex. It is traditionally called Eilenberg obstruction theory, after Samuel Eilenberg. It involves cohomology groups with coefficients in homotopy groups to define obstructions to extensions. For example, with a mapping from a simplicial complex X to another, Y, defined initially on the 0-skeleton of X (the vertices of X), an extension to the 1-skeleton will be possible whenever the images of the endpoints of each edge belong to the same path-connected component of Y. Extending from the 1-skeleton to the 2-skeleton means defining the mapping on each solid triangle from X, given the mapping already defined on its boundary edges. Likewise, then extending the mapping to the 3-skeleton involves extending the mapping to each solid 3-simplex of X, given the mapping already defined on its boundary. At some point, say extending the mapping from the (n-1)-skeleton of X to the n-skeleton of X, this procedure might be impossible. In that case, one can assign to each n-simplex the homotopy class of the mapping already defined on its boundary (at least one of which will be non-zero). These assignments define an n-cochain with coefficients in πn−1(Y). Amazingly, this cochain turns out to be a cocycle and so defines a cohomology class in the nth cohomology group of X with coefficients in πn−1(Y). When this cohomology class is equal to 0, it turns out that the mapping may be modified within its homotopy class on the (n-1)-skeleton of X so that the mapping may be extended to the n-skeleton of X. If the class is not equal to zero, it is called the obstruction to extending the mapping over the n-skeleton, given its homotopy class on the (n-1)-skeleton. Obstruction to extending a section of a principal bundle Construction Suppose that B is a simply connected simplicial complex and that p : E → B is a fibration with fiber F. Furthermore, assume that we have a partially defined section σ on the n-skeleton of B. For every (n+1)-simplex Δ in B, σ can be restricted to the boundary ∂Δ (which is a topological n-sphere). Because p sends each σ(x) back to x, σ maps ∂Δ into p−1(Δ), defining a map from the n-sphere to p−1(Δ). Because fibrations satisfy the homotopy lifting property, and Δ is contractible, p−1(Δ) is homotopy equivalent to F. So this partially defined section assigns an element of πn(F) to every (n+1)-simplex. This is precisely the data of a πn(F)-valued simplicial cochain of degree n+1 on B, i.e. an element of Cn+1(B; πn(F)). This cochain is called the obstruction cochain because its being zero means that all of these elements of πn(F) are trivial, which means that our partially defined section can be extended to the (n+1)-skeleton by using the homotopy between (the partially defined section on the boundary of each Δ) and the constant map. The fact that this cochain came from a partially defined section (as opposed to an arbitrary collection of maps from all the boundaries of all the (n+1)-simplices) can be used to prove that this cochain is a cocycle. 
If one started with a different partially defined section that agreed with the original on the (n−1)-skeleton, then one can also prove that the resulting cocycle would differ from the first by a coboundary. Therefore we have a well-defined element of the cohomology group Hn+1(B; πn(F)) such that if a partially defined section on the (n+1)-skeleton exists that agrees with the given choice on the (n−1)-skeleton, then this cohomology class must be trivial. The converse is also true if one allows such things as homotopy sections, i.e. a map σ : B → E such that p ∘ σ is homotopic (as opposed to equal) to the identity map on B. Thus it provides a complete invariant of the existence of sections up to homotopy on the (n+1)-skeleton. Applications By inducting over n, one can construct a first obstruction to a section as the first of the above cohomology classes that is non-zero. This can be used to find obstructions to trivializations of principal bundles. Because any map can be turned into a fibration, this construction can be used to see if there are obstructions to the existence of a lift (up to homotopy) of a map into B to a map into E even if the map E → B is not a fibration. It is crucial to the construction of Postnikov systems. In geometric topology In geometric topology, obstruction theory is concerned with when a topological manifold has a piecewise linear structure, and when a piecewise linear manifold has a differential structure. In dimension at most 2 (Radó), and 3 (Moise), the notions of topological manifolds and piecewise linear manifolds coincide. In dimension 4 they are not the same. In dimensions at most 6 the notions of piecewise linear manifolds and differentiable manifolds coincide. In surgery theory The two basic questions of surgery theory are whether a topological space with n-dimensional Poincaré duality is homotopy equivalent to an n-dimensional manifold, and also whether a homotopy equivalence of n-dimensional manifolds is homotopic to a diffeomorphism. In both cases there are two obstructions for n>9: a primary topological K-theory obstruction to the existence of a vector bundle; if this vanishes there exists a normal map, allowing the definition of the secondary surgery obstruction in algebraic L-theory to performing surgery on the normal map to obtain a homotopy equivalence. See also Kirby–Siebenmann class Wall's finiteness obstruction References Homotopy theory Differential topology Surgery theory Theories
Obstruction theory
Mathematics
1,281
2,765,297
https://en.wikipedia.org/wiki/Insect%20trap
Insect traps are used to monitor or directly reduce populations of insects or other arthropods, by trapping individuals and killing them. They typically use food, visual lures, chemical attractants and pheromones as bait and are installed so that they do not injure other animals or humans or result in residues in foods or feeds. Visual lures use light, bright colors and shapes to attract pests. Chemical attractants or pheromones may attract only a specific sex. Insect traps are sometimes used in pest management programs instead of pesticides but are more often used to look at seasonal and distributional patterns of pest occurrence. This information may then be used in other pest management approaches. The trap mechanism or bait can vary widely. Flies and wasps are attracted by proteins. Mosquitoes and many other insects are attracted by bright colors, carbon dioxide, lactic acid, floral or fruity fragrances, warmth, moisture and pheromones. Synthetic attractants like methyl eugenol are very effective with tephritid flies. Trap types Insect traps vary widely in shape, size, and construction, often reflecting the behavior or ecology of the target species. Some common varieties are described below. Light traps Light traps, with or without ultraviolet light, attract certain insects. Light sources may include fluorescent lamps, mercury-vapor lamps, black lights, or light-emitting diodes. Designs differ according to the behavior of the insects being targeted. Light traps are widely used to survey nocturnal moths. Total species richness and abundance of trapped moths may be influenced by several factors such as night temperature, humidity and lamp type. Grasshoppers and some beetles are attracted to lights at long range but are repelled by them at short range. Farrow's light trap has a large base so that it captures insects that may otherwise fly away from regular light traps. Light traps can attract flying and terrestrial insects, and lights may be combined with other methods described below. Adhesive traps Sticky traps may be simple flat panels or enclosed structures, often baited, that ensnare insects with an adhesive substance. Baitless ones are nicknamed "blunder" traps, as pests might blunder into them while wandering or exploring. Sticky traps are widely used in agricultural and indoor pest monitoring. Shelter traps, or artificial cover traps, take advantage of an insect's tendencies to seek shelter in loose bark, crevices, or other sheltered places. Baited shelter traps such as "Roach Motels" and similar enclosures often have adhesive material inside to trap insects. Flying insect traps These traps are designed to catch flying or wind-blown insects. Flight interception traps are net-like or transparent structures that impede flying insects and funnel them into collecting containers. Barrier traps consist of a simple vertical sheet or wall that channels insects down into collection containers. The Malaise trap, a more complex type, is a mesh tent-like trap that captures insects that tend to fly up rather than down when impeded. Pan traps (also called water pan traps) are simple shallow dishes filled with soapy water or a preservative and killing agent such as antifreeze. Pan traps are used to monitor aphids, wasps, and some other small insects. Pan traps are often yellow, but different colors including blue, white, red, and clear can be used to target different species. 
Bucket traps and bottle traps, often supplemented with a funnel, are inexpensive versions that use a bait or attractant to lure insects into a bucket or bottle filled with soapy water or antifreeze. Many types of moth traps are bucket-type traps. Bottle traps are widely used, often used to sample wasp or pest beetle populations. Terrestrial arthropod traps Pitfall traps are used for ground-foraging and flightless arthropods such as Carabid beetles and spiders. Pitfall traps consist of a bucket or container buried in soil or other substrate so that its lip is flush with the substrate. A grain probe is a type of trap used to monitor pests of stored grain, consisting of a long cylindrical tube with multiple holes along its length that can be inserted at various depths within grain. Soil emergence traps, consisting of an inverted cone or funnel with collecting jar on top, are employed to capture insects with a subterranean pupal stage. Emergence traps have been used to monitor important disease-vectors such as Phlebotomine sandflies. Aquatic arthropod traps Aquatic interception traps typically involve mesh funnels or conical structures that guide insects into a jar or bottle for collecting. Aquatic emergence traps are cage-like or tent-like structures used to capture aquatic insects such as chironomids, caddisflies, mosquitoes, and odonates upon their transition from aquatic nymphs or pupae to terrestrial adults. Aquatic emergence traps may be free floating on the water's surface, submerged, or attached to a post near shore. See also Insect collecting Integrated pest management Biological pest control Insecticidal soap Organic farming References Further reading Weinzierl, R., et al. Insect Attractants and Traps. ENY277. University of Florida IFAS. Published 1995. Revised 2005. Kronkright, D. P. Insect Traps in Conservation Surveys. WAAC Newsletter January, 1991. 13(1) 21–23. External links Tereshkin, A. Devices for collecting wasps. Animal trapping Entomology equipment Garden pests Organic gardening Sustainable agriculture
Insect trap
Biology
1,104
32,179,176
https://en.wikipedia.org/wiki/Cache%20domain
In molecular biology, the cache domain is an extracellular protein domain that is predicted to have a role in small-molecule recognition in a wide range of proteins, including the animal dihydropyridine-sensitive voltage-gated Ca2+ channel alpha-2delta subunit, and various bacterial chemotaxis receptors. The name Cache comes from CAlcium channels and CHEmotaxis receptors. This domain consists of an N-terminal part with three predicted strands and an alpha-helix, and a C-terminal part with a strand dyad followed by a relatively unstructured region. The N-terminal portion of the (unpermuted) Cache domain contains three predicted strands that could form a sheet analogous to that present in the core of the PAS domain structure. Cache domains are particularly widespread in bacteria such as Vibrio cholerae. The animal calcium channel alpha-2delta subunits might have acquired a part of their extracellular domains from a bacterial source. The Cache domain appears to have arisen from the GAF-PAS fold despite their divergent functions. References Protein domains
Cache domain
Biology
224
955,672
https://en.wikipedia.org/wiki/IEEE%20802.16
IEEE 802.16 is a series of wireless broadband standards written by the Institute of Electrical and Electronics Engineers (IEEE). The IEEE Standards Board established a working group in 1999 to develop standards for broadband wireless metropolitan area networks. The Workgroup is a unit of the IEEE 802 local area network and metropolitan area network standards committee. Although the 802.16 family of standards is officially called WirelessMAN in IEEE, it has been commercialized under the name "WiMAX" (from "Worldwide Interoperability for Microwave Access") by the WiMAX Forum industry alliance. The Forum promotes and certifies compatibility and interoperability of products based on the IEEE 802.16 standards. The 802.16e-2005 amendment was implemented and deployed around the world. The version IEEE 802.16-2009 was amended by IEEE 802.16j-2009. Standards Projects publish draft and proposed standards with the letter "P" prefixed. Once a standard is ratified and published, that "P" gets dropped and replaced by a trailing dash and suffix year of publication. Projects 802.16e-2005 Technology The 802.16 standard essentially standardizes two aspects of the air interface – the physical layer (PHY) and the media access control (MAC) layer. This section provides an overview of the technology employed in these two layers in the mobile 802.16e specification. PHY 802.16e uses scalable OFDMA to carry data, supporting channel bandwidths between 1.25 MHz and 20 MHz, with up to 2048 subcarriers. It supports adaptive modulation and coding, so that in conditions of good signal, a highly efficient 64 QAM coding scheme is used, whereas when the signal is poorer, a more robust BPSK coding mechanism is used. In intermediate conditions, 16 QAM and QPSK can also be employed. Other PHY features include support for multiple-input multiple-output (MIMO) antennas in order to provide good non-line-of-sight propagation (NLOS) characteristics (or higher bandwidth) and hybrid automatic repeat request (HARQ) for good error correction performance. Although the standards allow operation in any band from 2 to 66 GHz, mobile operation is best in the lower bands which are also the most crowded, and therefore most expensive. MAC The 802.16 MAC describes a number of Convergence Sublayers which describe how wireline technologies such as Ethernet, Asynchronous Transfer Mode (ATM) and Internet Protocol (IP) are encapsulated on the air interface, and how data is classified, etc. It also describes how secure communications are delivered, by using secure key exchange during authentication, and encryption using Advanced Encryption Standard (AES) or Data Encryption Standard (DES) during data transfer. Further features of the MAC layer include power saving mechanisms (using sleep mode and idle mode) and handover mechanisms. A key feature of 802.16 is that it is a connection-oriented technology. The subscriber station (SS) cannot transmit data until it has been allocated a channel by the base station (BS). This allows 802.16e to provide strong support for quality of service (QoS). QoS Quality of service (QoS) in 802.16e is supported by allocating each connection between the SS and the BS (called a service flow in 802.16 terminology) to a specific QoS class. In 802.16e, there are 5 QoS classes: UGS (Unsolicited Grant Service), rtPS (Real-time Polling Service), ertPS (Extended Real-time Polling Service), nrtPS (Non-real-time Polling Service), and BE (Best Effort). The BS and the SS use a service flow with an appropriate QoS class (plus other parameters, such as bandwidth and delay) to ensure that application data receives QoS treatment appropriate to the application. 
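To illustrate how these five classes differ in practice, here is a schematic Python sketch; the class names follow the standard, but the grant-policy descriptions, mappings to example applications, and the function itself are a simplified illustration, not the actual 802.16 MAC interface.

# Schematic sketch of how an 802.16e base station might treat the five
# QoS classes when granting uplink bandwidth. Class names follow the
# standard; the dispatch logic below is a simplified illustration only.
GRANT_POLICY = {
    "UGS":   "fixed periodic grants, no polling (e.g. constant-rate VoIP)",
    "ertPS": "periodic grants whose size may be renegotiated (e.g. VoIP with silence suppression)",
    "rtPS":  "periodic unicast polls for variable-rate real-time traffic (e.g. streaming video)",
    "nrtPS": "less frequent polls for delay-tolerant traffic (e.g. FTP)",
    "BE":    "contention-based requests only, no throughput guarantee (e.g. web browsing)",
}

def schedule(service_flow_class):
    # Return the (simplified) grant policy for a service flow's QoS class.
    if service_flow_class not in GRANT_POLICY:
        raise ValueError("unknown 802.16e QoS class: " + service_flow_class)
    return GRANT_POLICY[service_flow_class]

print(schedule("UGS"))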
Certification Because the IEEE only sets specifications but does not test equipment for compliance with them, the WiMAX Forum runs a certification program wherein members pay for certification. WiMAX certification by this group is intended to guarantee compliance with the standard and interoperability with equipment from other manufacturers. The mission of the Forum is to promote and certify compatibility and interoperability of broadband wireless products. See also WiBro WiMAX WiBAS WiMAX MIMO Wireless mesh network 4G LTE References External links IEEE Std 802.16-2004 IEEE Std 802.16e-2005 IEEE Std 802.16-2009 IEEE Std 802.16-2012 IEEE Std 802.16.1-2012 IEEE Std 802.16.1-2017 The WiMAX Forum The implications of WiMAX for competition and regulation A paper of the OECD, Organisation for Economic Co-operation and Development IEEE 802.16m Technology Introduction IEEE 802 WiMAX Wireless networking standards
IEEE 802.16
Technology
941
2,078,963
https://en.wikipedia.org/wiki/History%20of%20quantum%20field%20theory
In particle physics, the history of quantum field theory starts with its creation by Paul Dirac, when he attempted to quantize the electromagnetic field in the late 1920s. Major advances in the theory were made in the 1940s and 1950s, leading to the introduction of renormalized quantum electrodynamics (QED). The field theory behind QED was so accurate and successful in predictions that efforts were made to apply the same basic concepts to the other forces of nature. Beginning in 1954, the parallel was found by way of gauge theory, leading, by the late 1970s, to quantum field models of the strong and weak nuclear forces, united in the modern Standard Model of particle physics. Efforts to describe gravity using the same techniques have, to date, failed. The study of quantum field theory is still flourishing, as are applications of its methods to many physical problems. It remains one of the most vital areas of theoretical physics today, providing a common language to several different branches of physics. Early developments Quantum field theory originated in the 1920s from the problem of creating a quantum mechanical theory of the electromagnetic field. In particular, de Broglie in 1924 introduced the idea of a wave description of elementary systems in the following way: "we proceed in this work from the assumption of the existence of a certain periodic phenomenon of a yet to be determined character, which is to be attributed to each and every isolated energy parcel". In 1925, Werner Heisenberg, Max Born, and Pascual Jordan constructed just such a theory by expressing the field's internal degrees of freedom as an infinite set of harmonic oscillators, and by then applying the canonical quantization procedure to these oscillators; their paper was published in 1926. This theory assumed that no electric charges or currents were present and today would be called a free field theory. The first reasonably complete theory of quantum electrodynamics, which included both the electromagnetic field and electrically charged matter as quantum mechanical objects, was created by Paul Dirac in 1927. This quantum field theory could be used to model important processes such as the emission of a photon by an electron dropping into a quantum state of lower energy, a process in which the number of particles changes—one atom in the initial state becomes an atom plus a photon in the final state. It is now understood that the ability to describe such processes is one of the most important features of quantum field theory. The final crucial step was Enrico Fermi's theory of β-decay (1934). In it, fermion species nonconservation was shown to follow from second quantization: creation and annihilation of fermions came to the fore and quantum field theory was seen to describe particle decays. (Fermi's breakthrough was somewhat foreshadowed in the abstract studies of Soviet physicists, Viktor Ambartsumian and Dmitri Ivanenko, in particular the Ambarzumian–Ivanenko hypothesis of creation of massive particles (1930). The idea was that not only the quanta of the electromagnetic field, photons, but also other particles might emerge and disappear as a result of their interaction with other particles.) Incorporating special relativity It was evident from the beginning that a proper quantum treatment of the electromagnetic field had to somehow incorporate Einstein's relativity theory, which had grown out of the study of classical electromagnetism. 
This need to put together relativity and quantum mechanics was the second major motivation in the development of quantum field theory. Pascual Jordan and Wolfgang Pauli showed in 1928 that quantum fields could be made to behave in the way predicted by special relativity during coordinate transformations (specifically, they showed that the field commutators were Lorentz invariant). A further boost for quantum field theory came with the discovery of the Dirac equation, which was originally formulated and interpreted as a single-particle equation analogous to the Schrödinger equation, but unlike the Schrödinger equation, the Dirac equation satisfies both the Lorentz invariance, that is, the requirements of special relativity, and the rules of quantum mechanics. The Dirac equation accommodated the spin-1/2 value of the electron and accounted for its magnetic moment as well as giving accurate predictions for the spectra of hydrogen. The attempted interpretation of the Dirac equation as a single-particle equation could not be maintained long, however, and finally it was shown that several of its undesirable properties (such as negative-energy states) could be made sense of by reformulating and reinterpreting the Dirac equation as a true field equation, in this case for the quantized "Dirac field" or the "electron field", with the "negative-energy solutions" pointing to the existence of anti-particles. This work was performed first by Dirac himself with the invention of hole theory in 1930 and by Wendell Furry, Robert Oppenheimer, Vladimir Fock, and others. Erwin Schrödinger, during the same period that he discovered his equation in 1926, also independently found the relativistic generalization of it known as the Klein–Gordon equation but dismissed it since, without spin, it predicted impossible properties for the hydrogen spectrum. (See Oskar Klein and Walter Gordon.) All relativistic wave equations that describe spin-zero particles are said to be of the Klein–Gordon type. Uncertainty, again A subtle and careful analysis in 1933 by Niels Bohr and Léon Rosenfeld showed that there is a fundamental limitation on the ability to simultaneously measure the electric and magnetic field strengths that enter into the description of charges in interaction with radiation, imposed by the uncertainty principle, which must apply to all canonically conjugate quantities. This limitation is crucial for the successful formulation and interpretation of a quantum field theory of photons and electrons (quantum electrodynamics), and indeed, any perturbative quantum field theory. The analysis of Bohr and Rosenfeld explains fluctuations in the values of the electromagnetic field that differ from the classically "allowed" values distant from the sources of the field. Their analysis was crucial to showing that the limitations and physical implications of the uncertainty principle apply to all dynamical systems, whether fields or material particles. Their analysis also convinced most physicists that any notion of returning to a fundamental description of nature based on classical field theory, such as what Einstein aimed at with his numerous and failed attempts at a classical unified field theory, was simply out of the question. Fields had to be quantized. Second quantization The third thread in the development of quantum field theory was the need to handle the statistics of many-particle systems consistently and with ease. 
In 1927, Pascual Jordan tried to extend the canonical quantization of fields to the many-body wave functions of identical particles using a formalism which is known as statistical transformation theory; this procedure is now sometimes called second quantization. Dirac is also credited with the invention, as he introduced the key ideas in a 1927 paper. In 1928, Jordan and Eugene Wigner found that the quantum field describing electrons, or other fermions, had to be expanded using anti-commuting creation and annihilation operators due to the Pauli exclusion principle (see Jordan–Wigner transformation). This thread of development was incorporated into many-body theory and strongly influenced condensed matter physics and nuclear physics. The problem of infinities Despite its early successes quantum field theory was plagued by several serious theoretical difficulties. Basic physical quantities, such as the self-energy of the electron, the energy shift of electron states due to the presence of the electromagnetic field, gave infinite, divergent contributions—a nonsensical result—when computed using the perturbative techniques available in the 1930s and most of the 1940s. The electron self-energy problem was already a serious issue in the classical electromagnetic field theory, where the attempt to attribute to the electron a finite size or extent (the classical electron-radius) led immediately to the question of what non-electromagnetic stresses would need to be invoked, which would presumably hold the electron together against the Coulomb repulsion of its finite-sized "parts". The situation was dire, and had certain features that reminded many of the "Rayleigh–Jeans catastrophe". What made the situation in the 1940s so desperate and gloomy, however, was the fact that the correct ingredients (the second-quantized Maxwell–Dirac field equations) for the theoretical description of interacting photons and electrons were well in place, and no major conceptual change was needed analogous to that which was necessitated by a finite and physically sensible account of the radiative behavior of hot objects, as provided by the Planck radiation law. Renormalization procedures Improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, now known as the Lamb shift and magnetic moment of the electron. These experiments exposed discrepancies which the theory was unable to explain. A first indication of a possible way out was given by Hans Bethe in 1947, after attending the Shelter Island Conference. While he was traveling by train from the conference to Schenectady he made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford. Despite the limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result in good agreement with experiments. This procedure was named renormalization. This "divergence problem" was solved in the case of quantum electrodynamics through the procedure known as renormalization in 1947–49 by Hans Kramers, Hans Bethe, Julian Schwinger, Richard Feynman, and Shin'ichiro Tomonaga; the procedure was systematized by Freeman Dyson in 1949. 
Great progress was made after realizing that all infinities in quantum electrodynamics are related to two effects: the self-energy of the electron/positron, and vacuum polarization. Renormalization requires paying very careful attention to just what is meant by, for example, the very concepts "charge" and "mass" as they occur in the pure, non-interacting field-equations. The "vacuum" is itself polarizable and, hence, populated by virtual particle (on shell and off shell) pairs, and, hence, is a seething and busy dynamical system in its own right. This was a critical step in identifying the source of "infinities" and "divergences". The "bare mass" and the "bare charge" of a particle, the values that appear in the free-field equations (non-interacting case), are abstractions that are simply not realized in experiment (in interaction). What we measure, and hence, what we must take account of with our equations, and what the solutions must account for, are the "renormalized mass" and the "renormalized charge" of a particle. That is to say, the "shifted" or "dressed" values these quantities must have when due systematic care is taken to include all deviations from their "bare values" is dictated by the very nature of quantum fields themselves. Quantum electrodynamics The first approach that bore fruit is known as the "interaction representation" (see the article Interaction picture), a Lorentz-covariant and gauge-invariant generalization of time-dependent perturbation theory used in ordinary quantum mechanics, and developed by Tomonaga and Schwinger, generalizing earlier efforts of Dirac, Fock and Boris Podolsky. Tomonaga and Schwinger invented a relativistically covariant scheme for representing field commutators and field operators intermediate between the two main representations of a quantum system, the Schrödinger and the Heisenberg representations. Within this scheme, field commutators at separated points can be evaluated in terms of "bare" field creation and annihilation operators. This allows for keeping track of the time-evolution of both the "bare" and "renormalized", or perturbed, values of the Hamiltonian and expresses everything in terms of the coupled, gauge invariant "bare" field-equations. Schwinger gave the most elegant formulation of this approach. The next development was due to Richard Feynman, with his rules for assigning a graph to the terms in the scattering matrix (see S-matrix and Feynman diagrams). These directly corresponded (through the Schwinger–Dyson equation) to the measurable physical processes (cross sections, probability amplitudes, decay widths and lifetimes of excited states) one needs to be able to calculate. This revolutionized how quantum field theory calculations are carried out in practice. Two classic text-books from the 1960s, James D. Bjorken, Sidney David Drell, Relativistic Quantum Mechanics (1964) and J. J. Sakurai, Advanced Quantum Mechanics (1967), thoroughly developed the Feynman graph expansion techniques using physically intuitive and practical methods following from the correspondence principle, without worrying about the technicalities involved in deriving the Feynman rules from the superstructure of quantum field theory itself. 
Although both Feynman's heuristic and pictorial style of dealing with the infinities, as well as the formal methods of Tomonaga and Schwinger, worked extremely well, and gave spectacularly accurate answers, the true analytical nature of the question of "renormalizability", that is, whether ANY theory formulated as a "quantum field theory" would give finite answers, was not worked-out until much later, when the urgency of trying to formulate finite theories for the strong and electro-weak (and gravitational) interactions demanded its solution. Renormalization in the case of QED was largely fortuitous due to the smallness of the coupling constant, the fact that the coupling has no dimensions involving mass, the so-called fine-structure constant, and also the zero-mass of the gauge boson involved, the photon, rendered the small-distance/high-energy behavior of QED manageable. Also, electromagnetic processes are very "clean" in the sense that they are not badly suppressed/damped and/or hidden by the other gauge interactions. By 1965 James D. Bjorken and Sidney David Drell observed: "Quantum electrodynamics (QED) has achieved a status of peaceful coexistence with its divergences ...". The unification of the electromagnetic force with the weak force encountered initial difficulties due to the lack of accelerator energies high enough to reveal processes beyond the Fermi interaction range. Additionally, a satisfactory theoretical understanding of hadron substructure had to be developed, culminating in the quark model. Thanks to the somewhat brute-force, ad hoc and heuristic early methods of Feynman, and the abstract methods of Tomonaga and Schwinger, elegantly synthesized by Freeman Dyson, from the period of early renormalization, the modern theory of quantum electrodynamics (QED) has established itself. It is still the most accurate physical theory known, the prototype of a successful quantum field theory. Quantum electrodynamics is an example of what is known as an abelian gauge theory. It relies on the symmetry group U(1) and has one massless gauge field, the U(1) gauge symmetry, dictating the form of the interactions involving the electromagnetic field, with the photon being the gauge boson. Yang-Mills theory In the 1950s Yang and Mills, following the previous lead of Hermann Weyl, explored the impact of symmetries and invariances on field theory. All field theories, including QED, were generalized to a class of quantum field theories known as gauge theories. That symmetries dictate, limit and necessitate the form of interaction between particles is the essence of the "gauge theory revolution". Yang and Mills formulated the first explicit example of a non-abelian gauge theory, Yang–Mills theory, with an attempted explanation of the strong interactions in mind. The strong interactions were then (incorrectly) understood in the mid-1950s, to be mediated by the pi-mesons, the particles predicted by Hideki Yukawa in 1935, based on his profound reflections concerning the reciprocal connection between the mass of any force-mediating particle and the range of the force it mediates. This was allowed by the uncertainty principle. In the absence of dynamical information, Murray Gell-Mann pioneered the extraction of physical predictions from sheer non-abelian symmetry considerations, and introduced non-abelian Lie groups to current algebra and so the gauge theories that came to supersede it. 
The 1960s and 1970s saw the formulation of a gauge theory now known as the Standard Model of particle physics, which systematically describes the elementary particles and the interactions between them. The strong interactions are described by quantum chromodynamics (QCD), based on "color" SU(3). The weak interactions require the additional feature of spontaneous symmetry breaking, elucidated by Yoichiro Nambu and the adjunct Higgs mechanism, considered next. Electroweak unification The electroweak interaction part of the Standard Model was formulated by Sheldon Glashow, Abdus Salam, and John Clive Ward in 1959, with their discovery of the SU(2)xU(1) group structure of the theory. In 1967, Steven Weinberg invoked the Higgs mechanism for the generation of the W and Z masses (the intermediate vector bosons responsible for the weak interactions and neutral-currents) and keeping the mass of the photon zero. The Goldstone and Higgs idea for generating mass in gauge theories was sparked in the late 1950s and early 1960s when a number of theoreticians (including Yoichiro Nambu, Steven Weinberg, Jeffrey Goldstone, François Englert, Robert Brout, G. S. Guralnik, C. R. Hagen, Tom Kibble and Philip Warren Anderson) noticed a possibly useful analogy to the (spontaneous) breaking of the U(1) symmetry of electromagnetism in the formation of the BCS ground-state of a superconductor. The gauge boson involved in this situation, the photon, behaves as though it has acquired a finite mass. There is a further possibility that the physical vacuum (ground-state) does not respect the symmetries implied by the "unbroken" electroweak Lagrangian from which one arrives at the field equations (see the article Electroweak interaction for more details). The electroweak theory of Weinberg and Salam was shown to be renormalizable (finite) and hence consistent by Gerardus 't Hooft and Martinus Veltman. The Glashow–Weinberg–Salam theory (GWS theory), in certain applications, gives an accuracy on a par with quantum electrodynamics. Quantum chromodynamics In the case of the strong interactions, progress concerning their short-distance/high-energy behavior was much slower and more frustrating. For strong interactions with the electro-weak fields, there were difficult issues regarding the strength of coupling, the mass generation of the force carriers as well as their non-linear, self interactions. Although there has been theoretical progress toward a grand unified quantum field theory incorporating the electro-magnetic force, the weak force and the strong force, empirical verification is still pending. Superunification, incorporating the gravitational force, is still very speculative, and is under intensive investigation by many of the best minds in contemporary theoretical physics. Gravitation is a tensor field description of a spin-2 gauge-boson, the "graviton", and is further discussed in the articles on general relativity and quantum gravity. Quantum gravity From the point of view of the techniques of (four-dimensional) quantum field theory, and as the numerous efforts to formulate a consistent quantum gravity theory attests, gravitational quantization has been the reigning champion for bad behavior. There are technical problems underlain by the fact that the Newtonian constant of gravitation has dimensions involving inverse powers of mass, and, as a simple consequence, it is plagued by perturbatively badly behaved non-linear self-interactions. 
Gravity is itself a source of gravity, analogously to gauge theories (whose couplings are, by contrast, dimensionless), leading to uncontrollable divergences at increasing orders of perturbation theory. Moreover, gravity couples to all energy equally strongly, as per the equivalence principle, which makes the notion of ever really "switching off", "cutting off" or separating the gravitational interaction from the other interactions ambiguous, since, with gravitation, we are dealing with the very structure of space-time itself. Furthermore, it has not been established that a theory of quantum gravity is necessary (see Quantum field theory in curved spacetime). Contemporary framework of renormalization Parallel breakthroughs in the understanding of phase transitions in condensed matter physics led to novel insights based on the renormalization group. They involved the work of Leo Kadanoff (1966) and Kenneth Geddes Wilson–Michael Fisher (1972)—extending the work of Ernst Stueckelberg–André Petermann (1953) and Murray Gell-Mann–Francis Low (1954)—which led to the seminal reformulation of quantum field theory by Kenneth Geddes Wilson in 1975. This reformulation provided insights into the evolution of effective field theories with scale, which classified all field theories, renormalizable or not. The remarkable conclusion is that, in general, most observables are "irrelevant", i.e., the macroscopic physics is dominated by only a few observables in most systems. During the same period, Leo Kadanoff (1969) introduced an operator algebra formalism for the two-dimensional Ising model, a widely studied mathematical model of ferromagnetism in statistical physics. This development suggested that quantum field theory describes its scaling limit. Later, there developed the idea that a finite number of generating operators could represent all the correlation functions of the Ising model. The existence of a much stronger symmetry for the scaling limit of two-dimensional critical systems was suggested by Alexander Belavin, Alexander Markovich Polyakov and Alexander Zamolodchikov in 1984; this eventually led to the development of conformal field theory, a special case of quantum field theory, which is presently utilized in different areas of particle physics and condensed matter physics. The renormalization group spans a set of ideas and methods to monitor changes of the behavior of the theory with scale, providing a deep physical understanding which sparked what has been called the "grand synthesis" of theoretical physics, uniting the quantum field theoretical techniques used in particle physics and condensed matter physics into a single powerful theoretical framework. The gauge field theory of the strong interactions, quantum chromodynamics, relies crucially on this renormalization group for its distinguishing characteristic features, asymptotic freedom and color confinement. See also History of quantum mechanics History of string theory QED vacuum References Further reading Pais, Abraham; Inward Bound – Of Matter & Forces in the Physical World, Oxford University Press (1986). Written by a former Einstein assistant at Princeton, this is a beautiful detailed history of modern fundamental physics, from 1895 (discovery of X-rays) to 1983 (discovery of vector bosons at CERN). Weinberg, Steven; The Quantum Theory of Fields – Foundations (vol. I), Cambridge University Press (1995), pp. 608. The first chapter (pp. 1–40) of Weinberg's monumental treatise gives a brief history of Q.F.T.
Weinberg, Steven; The Quantum Theory of Fields – Modern Applications (vol. II), Cambridge University Press: Cambridge, U.K. (1996), pp. 489. Weinberg, Steven; The Quantum Theory of Fields – Supersymmetry (vol. III), Cambridge University Press: Cambridge, U.K. (2000), pp. 419. Schweber, Silvan S.; QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga, Princeton University Press (1994). Ynduráin, Francisco José; Quantum Chromodynamics: An Introduction to the Theory of Quarks and Gluons, Springer Verlag, New York (1983). Miller, Arthur I.; Early Quantum Electrodynamics: A Sourcebook, Cambridge University Press (1995). Schwinger, Julian; Selected Papers on Quantum Electrodynamics, Dover Publications, Inc. (1958). O'Raifeartaigh, Lochlainn; The Dawning of Gauge Theory, Princeton University Press (1997). Cao, Tian Yu; Conceptual Developments of 20th Century Field Theories, Cambridge University Press (1997). Darrigol, Olivier; La genèse du concept de champ quantique ("The genesis of the quantum field concept"), Annales de Physique (France) 9 (1984), pp. 433–501. Text in French, adapted from the author's Ph.D. thesis. Quantum field theory
History of quantum field theory
Physics
5,208
15,457,327
https://en.wikipedia.org/wiki/Ouvrage%20Barbonnet
Ouvrage Barbonnet is a work (gros ouvrage) of the Maginot Line's Alpine extension, the Alpine Line, also called the Little Maginot Line. The ouvrage consists of one entry block and one artillery block facing Italy. The ouvrage was built somewhat behind the main line of fortifications on the old Fort Suchet, which was already armed with two obsolete Mougin 155 mm gun turrets. Fort Suchet was built between 1883 and 1888 at 850 metres altitude two kilometres to the south of Sospel, dominating the road from Nice to the Col de Tende. This corridor represented the main invasion route to Nice from the north. Fort Suchet and Ouvrage Barbonnet operated separately, the former manned in 1940 by elements of the 157th and 158th Régiments d'Artillerie de Position (RAP) and the latter by the 95th Brigade Alpin de Forteresse (BAF), which also provided infantry support on the surface. The entire position was commanded by Captain Imbault. The Maginot fort's kitchens were used by the garrisons of both fortifications, but the mess halls were separate. Ouvrage Barbonnet description Barbonnet has only two blocks, an entry block and an artillery block, and, like all Maginot fortifications, is entirely subterranean. The Mougin battery is not linked to the Maginot fort. A link had been contemplated and a fully integrated design was prepared in 1929, but the arrangement of Suchet's magazines and concerns about structure and cost prevented work on a link from taking place. In particular, the magazines of Fort Suchet were not considered proof against modern artillery. Block 2 is just to the south of the old fort, outside its walls and facing south, with its galleries, usine and magazines running under the east side of Suchet, at an elevation of 748 metres. Block 1 (entry): one machine gun cloche and three machine gun embrasures. Block 2 (artillery): one machine gun cloche, one grenade launcher cloche, three machine gun embrasures, two 75mm/29cal guns and two 81mm mortars. Two flanking infantry blocks were proposed but not carried out, one to the south with two heavy twin machine gun positions, a GFM cloche and an observation cloche, and a detached position to the north with a GFM cloche. A small blockhouse and casemate are located to the south of the main fortification. Casemate Barbonnet Sud was equipped with one FM machine gun and two automatic rifle positions. Barbonnet's Maginot fortifications were built between November 1931 and February 1935 by a contractor named Borie, at a cost of 10.8 million francs. Observation posts Four observation posts are associated with Barbonnet, including Avellan and Petit Ventabren. Fort Suchet description Fort Suchet was built as part of the Séré de Rivières system fortifications that were designed to respond to the rapid development of artillery in the late 19th century. Built between 1883 and 1886, Suchet is a rough trapezoid with a wall and ditch around its perimeter, defended by caponiers. It crowns a prominent peak above the surrounding valley, giving the peak a sawn-off appearance. The fort's primary armament was four 155mm guns in Mougin twin turrets, named "Jeanne d'Arc" and "Bayard." In 1888 the fort also mounted two reserve 155mm guns, ten 95mm guns, one 32mm mortar and several smaller weapons. At the time of its completion, Fort Suchet was one of the three strongest forts in France. A third Mougin turret outside the fort was proposed in 1903, along with two machine gun turrets.
None were built, but the existing turrets were reinforced with concrete in 1913–1914, along with minor improvements to other features. Electricity was provided at this time. More concrete was added to the north caponier in 1928, with ventilation improvements for the turrets in 1930. An aerial tram was proposed for access, but not pursued. A 1934 project to install a deeply buried magazine under the Mougin turrets caused cracking in the fort's masonry, and the project was abandoned. A 1938 project to link to the Maginot fortification was likewise not pursued. The Mougin guns were used in June 1940 to fire on Italian positions. The guns were replaced after the war with similar weapons taken from forts in the northeast of France, the Fort de Frouard (Jeanne d'Arc turret) and the Fort de Villey-le-Sec (Bayard turret), near Nancy and Toul, respectively. In 1963 the fort was deactivated and the Bayard guns were returned to Villey-le-Sec, where the turret has been restored to operating condition. Present status The Maginot and Séré de Rivières works may be visited in the summer months, and house a museum. See also List of Alpine Line ouvrages References Bibliography Allcorn, William. The Maginot Line 1928-45. Oxford: Osprey Publishing, 2003. Kaufmann, J.E. and Kaufmann, H.W. Fortress France: The Maginot Line and French Defenses in World War II, Stackpole Books, 2006. Kaufmann, J.E., Kaufmann, H.W., Jancovič-Potočnik, A. and Lang, P. The Maginot Line: History and Guide, Pen and Sword, 2011. Mary, Jean-Yves; Hohnadel, Alain; Sicard, Jacques. Hommes et Ouvrages de la Ligne Maginot, Tome 1. Paris, Histoire & Collections, 2001. Mary, Jean-Yves; Hohnadel, Alain; Sicard, Jacques. Hommes et Ouvrages de la Ligne Maginot, Tome 4 – La fortification alpine. Paris, Histoire & Collections, 2009. Mary, Jean-Yves; Hohnadel, Alain; Sicard, Jacques. Hommes et Ouvrages de la Ligne Maginot, Tome 5. Paris, Histoire & Collections, 2009. External links Barbonnet (gros ouvrage) at fortiff.be Barbonnet (fort du mont) at fortiff.be Fort du Barbonnet/Fort du Suchet at fortiffsere.fr Fort du Suchet/Barbonnet at Chemins de mémoire Ouvrage Barbonnet at Subterranea Britannica BARB Maginot Line Alpine Line Séré de Rivières system World War II museums in France
Ouvrage Barbonnet
Engineering
1,370
337,489
https://en.wikipedia.org/wiki/Karpman%20drama%20triangle
The Karpman drama triangle is a social model of human interaction proposed by San Francisco psychiatrist, Stephen B. Karpman in 1968. The triangle maps a type of destructive interaction that can occur among people in conflict. The drama triangle model is a tool used in psychotherapy, specifically transactional analysis. The triangle of actors in the drama are persecutors, victims, and rescuers. Karpman described how in some cases these roles were not undertaken in an honest manner to resolve the presenting problem, but rather were used fluidly and switched between by the actors in a way that achieved unconscious goals and agendas. The outcome in such cases was that the actors would be left feeling justified and entrenched, but there would often be little or no change to the presenting problem, and other more fundamental problems giving rise to the situation remaining unaddressed. Use Through popular usage, and the work of Stephen Karpman and others, Karpman's triangle has been adapted for use in structural analysis and transactional analysis. Theory Karpman used triangles to map conflicted or drama-intense relationship transactions. The Karpman Drama Triangle models the connection between personal responsibility and power in conflicts, and the destructive and shifting roles people play. He defined three roles in the conflict; Persecutor, Rescuer (the one up positions) and Victim (one down position). Karpman placed these three roles on an inverted triangle and referred to them as being the three aspects, or faces of drama. The Victim: The Victim in this model is not intended to represent an actual victim, but rather someone feeling or acting like one. The Victim seeks to convince him or herself and others that he or she cannot do anything, nothing can be done, all attempts are futile, despite trying hard. One payoff for this stance is avoiding real change or acknowledgement of one's true feelings, which may bring anxiety and risk, while feeling one is doing all one can to escape it. As such, the Victim's stance is "Poor me!" The Victim feels persecuted, oppressed, helpless, hopeless, powerless, ashamed, and seems unable to make decisions, solve problems, take pleasure in life or achieve insight. The Victim will remain with a Persecutor or, if not being persecuted, will set someone else up in the role of Persecutor. The Victim will also seek help, creating one or more Rescuers to save the day, who will in reality perpetuate the Victim's negative feelings and leave the situation broadly unchanged. The Rescuer: The Rescuer's line is "Let me help you." A classic enabler, the Rescuer feels guilty if they do not go to the rescue, and ultimately becomes angry (and becomes a Persecutor) as their help fails to achieve change. Yet the Rescuer's rescuing has negative effects: it keeps the Victim dependent and doesn't allow the Victim permission to fail and experience the consequences of his or her choices. The rewards derived from this rescue role are that the focus is taken away from the Rescuer, who can also feel good for having tried, and justified in their negative feelings (to the other actor/s) upon failing. When one focuses one's energy on another, it enables one to ignore one's own anxiety and troubles. This rescue role is also pivotal because one's actual primary interest is really an avoidance of one's own problems disguised as concern for the Victim's needs. The Persecutor: (a.k.a. Villain) The Persecutor insists, "It's all your fault." 
The Persecutor is controlling, blaming, critical, oppressive, angry, authoritarian, rigid and superior. But if blamed in turn, the Persecutor may become defensive and may switch roles to become a Victim if attacked forcefully by the Rescuer and/or Victim, in which case the Victim may also switch roles to become a Persecutor. Initially a drama triangle arises when a person takes on the role of a victim or persecutor. This person then feels the need to enlist other players into the conflict. As often happens, a rescuer is encouraged to enter the situation. These enlisted players take on roles of their own that are not static, and therefore various scenarios can occur. The victim might turn on the rescuer, for example, while the rescuer then switches to persecution. The reason that the situation persists is that each participant has their (frequently unconscious) psychological wishes/needs met without having to acknowledge the broader dysfunction or harm done in the situation as a whole. Each participant is acting upon his or her own selfish needs, rather than acting in a genuinely responsible or altruistic manner. Any character might "ordinarily come on like a plaintive victim; it is now clear that the one can switch into the role of Persecutor providing it is 'accidental' and the one apologizes for it". The motivations of the rescuer are the least obvious. In the terms of the triangle, the rescuer has a mixed or covert motive and benefits egoically in some way from being "the one who rescues". The rescuer has a surface motive of resolving the problem and appears to make great efforts to solve it, but also has a hidden motive to not succeed, or to succeed in a way in which he or she benefits. The rescuer may get a self-esteem boost, for example, or receive respected rescue status, or derive enjoyment by having someone depend on or trust him or her and act in a way that ostensibly seems to be trying to help, but at a deeper level plays upon the victim in order to continue getting a payoff. The relationship between the victim and the rescuer may be one of codependency. The rescuer keeps the victim dependent by encouraging his or her victimhood. The victim gets his or her needs met by being taken care of by the rescuer. Participants generally tend to have a primary or habitual role (victim, rescuer, persecutor) when they enter into drama triangles. Participants first learn their habitual roles in their respective families of origin. Even though participants each have a role with which they most identify, once on the triangle, participants rotate through all the three positions. Each triangle has a "payoff" for those playing it. The "antithesis" of a drama triangle lies in discovering how to deprive the actors of their payoff. Historical context Family therapy movement After World War II, therapists observed that while many battle-torn veteran patients readjusted well after returning to their families, some patients did not; some even regressed when they returned to their home environment. Researchers felt that they needed an explanation for this and began to explore the dynamics of family life – and thus began the family therapy movement. Prior to this time, psychiatrists and psychoanalysts focused on the patient's already-developed psyche and downplayed outside detractors. Intrinsic factors were addressed and extrinsic reactions were considered as emanating from forces within the person. 
Transactions analysis In the 1950s, Eric Berne developed transactional analysis, a method for studying interactions between individuals. This approach was profoundly different from that of Freud. While Freud relied on asking patients about themselves, Berne felt that a therapist could learn by observing what was communicated (words, body language, facial expressions) in a transaction. So instead of directly asking the patient questions, Berne would frequently observe the patient in a group setting, noting all of the transactions that occurred between the patient and other individuals. Triangles/triangulation The theory of triangulation was originally published in 1966 by Murray Bowen as one of eight parts of Bowen's family systems theory. Murray Bowen, a pioneer in family systems theory, began his early work with schizophrenics at the Menninger Clinic, from 1946 to 1954. Triangulation is the “process whereby a two-party relationship that is experiencing tension will naturally involve third parties to reduce tension”. Simply put, when people find themselves in conflict with another person, they will reach out to a third person. The resulting triangle is more comfortable, as it can hold much more tension, because the tension is being shifted around three people instead of two. Bowen studied the dyad of the mother and her schizophrenic child while he had them both living in a research unit at the Menninger clinic. Bowen then moved to the National Institute of Mental Health (NIMH), where he resided from 1954 to 1959. At the NIMH Bowen extended his hypothesis to include the father-mother-child triad. Bowen considered differentiation and triangles the crux of his theory, Bowen Family Systems Theory. Bowen intentionally used the word triangle rather than triad. In Bowen Family Systems Theory, the triangle is an essential part of the relationship. Couples left to their own resources oscillate between closeness and distance. Two people having this imbalance often have difficulty resolving it by themselves. To stabilize the relationship, the couple often seek the aid of a third party to help re-establish closeness. A triangle is the smallest possible relationship system that can restore balance in a time of stress. The third person assumes an outside position. In periods of stress, the outside position is the most comfortable and desired position. The inside position is plagued by anxiety, along with its emotional closeness. The outsider serves to preserve the inside couple's relationship. Bowen noted that not all triangles are constructive – some are destructive. Pathological/perverse triangles In 1968, Nathan Ackerman conceptualized a destructive triangle. Ackerman stated "we observe certain constellations of family interactions which we have epitomized as the pattern of family interdependence, roles those of destroyer or persecutor, the victim of the scapegoating attack, and the family healer or the family doctor." Ackerman also recognized the pattern of attack, defense, and counterattack, as shifting roles. Karpman triangle and Eric Berne In 1968, Stephen Karpman, who had an interest in acting and was a member of the Screen Actors Guild, chose "drama triangle" rather than "conflict triangle" as, here, the Victim in his model is not intended to represent an actual victim, but rather someone feeling or acting like one. He first published his theory in an article entitled "Fairy Tales and Script Drama Analysis". 
His article, in part, examined the fairy tale "Little Red Riding Hood" to illustrate its points. Karpman was, at the time, a recent graduate of Duke University School of Medicine and was doing postgraduate studies under Berne. Berne, who founded the field of transactional analysis, encouraged Karpman to publish what Berne referred to as "Karpman's triangle". Karpman's article was published in 1968. In 1972, Karpman received the Eric Berne Memorial Scientific Award for the work. Transactional analysis Eric Berne, a Canadian-born psychiatrist, created the theory of transactional analysis, in the middle of the 20th century, as a way of explaining human behavior. Berne's theory of transactional analysis was based on the ideas of Freud but was distinctly different. Freudian psychotherapists focused on talk therapy as a way of gaining insight into their patients' personalities. Berne believed that insight could be better discovered by analyzing patients' social transactions. Games in transactional analysis refer to a series of transactions that is complementary (reciprocal), ulterior, and proceeds towards a predictable outcome. In this context, the Karpman Drama Triangle is a "game". Games are often characterized by a switch in roles of players towards the end. The number of players may vary. Games in this sense are devices used (often unconsciously) by people to create a circumstance where they can justifiably feel certain resulting feelings (such as anger or superiority) or justifiably take or avoid taking certain actions where their own inner wishes differ from societal expectations. They are always a substitute for a more genuine and full adult emotion and response which would be more appropriate. Three quantitative variables are often useful to consider for games: Flexibility: "The ability of the players to change the currency of the game (that is, the tools they use to play it). 'Some games...can be played properly with only one kind of currency, while others, such as exhibitionistic games, are more flexible", so that the focus of power may shift from words, to money, to parts of the body. Tenacity: "Some people give up their games easily, others are more persistent", referring to the way people stick to their games and their resistance to breaking away from them. Intensity: "Some people play their games in a relaxed way, others are more tense and aggressive. Games so played are known as easy and hard games, respectively". The consequences of games may vary from small paybacks to paybacks built up over a long period to a major level. Based on the degree of acceptability and potential harm, games are classified into three categories: first-degree games, which are socially acceptable; second-degree games, which are undesirable but not irreversibly damaging; and third-degree games, which may result in drastic harm. The Karpman triangle was an adaptation of a model that was originally conceived to analyze the play-action pass and the draw play in American football, and later adapted as a way to analyze movie scripts. Karpman is reported to have doodled thirty or more diagram types before settling on the triangle. Karpman credits the movie Valley of the Dolls as being a testbed for refining the model into what Berne coined as the Karpman Drama Triangle. Karpman now has many variables of the Karpman triangle in his fully developed theory, besides role switches.
These include space switches (private-public, open-closed, near-far) which precede, cause, or follow role switches, and script velocity (number of role switches in a given unit of time). These include the Question Mark triangle, False Perception triangle, Double Bind triangle, The Indecision triangle, the Vicious Cycle triangle, Trapping triangle, Escape triangle, Triangles of Oppression, and Triangles of Liberation, Switching in the triangle, and the Alcoholic Family triangle. While transactional analysis is the method for studying interactions between individuals, one researcher postulates that drama-based leaders can instill an organizational culture of drama. Persecutors are more likely to be in leadership positions and a persecutor culture goes hand in hand with cutthroat competition, fear, blaming, manipulation, high turnover and an increased risk of lawsuits. There are also victim cultures which can lead to low morale and low engagement as well as an avoidance of conflict, and rescuer cultures which can be characterized as having a high dependence on the leader, low initiative and low innovation. Therapeutic models The Winner's Triangle was published by Acey Choy in 1990 as a therapeutic model for showing patients how to alter social transactions when entering a triangle at any of the three entry points. Choy recommends that anyone feeling like a victim think more in terms of being vulnerable and caring, that anyone cast as a persecutor adopt an assertive posture, and anyone recruited to be a rescuer should react by being "caring". Vulnerable – a victim should be encouraged to accept their vulnerability, problem solve, and be more self-aware. Assertive – a persecutor should be encouraged to ask for what they want, be assertive, and cultivate self-compassion. Caring – a rescuer should be encouraged to show concern and be caring, but not over-reach and problem solve for others. The Power of TED*, first published in 2009, recommends that the "victim" adopt the alternative role of creator, view the persecutor as a challenger, and enlist a coach instead of a rescuer. Creator – victims are encouraged to be outcome-oriented as opposed to problem-oriented and take responsibility for choosing their response to life challenges. They should focus on resolving "dynamic tension" (the difference between current reality and the envisioned goal or outcome) by taking incremental steps toward the outcomes they are trying to achieve. Challenger – a victim is encouraged to see a persecutor as a person (or situation) that forces the creator to clarify their needs, and focus on their learning and growth. Coach – a rescuer should be encouraged to ask questions that are intended to help the individual to make informed choices. The key difference between a rescuer and a coach is that the coach sees the creator as capable of making choices and of solving their own problems. A coach asks questions that enable the creator to see the possibilities for positive action, and to focus on what they do want instead of what they don't want. See also References Further reading Books Emerald, David (2016). The Power of TED* (*The Empowerment Dynamic). Bainbridge Island: Polaris Publishing Group. Emerald, David (2019). 3 Vital Questions: Transforming Workplace Drama. Bainbridge Island: Polaris Publishing Group. Karpman, Stephen (2014). A Game Free Life. Self published. Zimberoff, Diane (1989). Breaking Free from the Victim Trap. Nazareth: Wellness Press. Harris, Thomas (1969). I'm OK, You're OK. New York: Galahad Books. 
Berne, Eric (1966). Games People Play. New York: Ballantine Books. West, Chris (2020). The Karpman Drama Triangle Explained. London: CWTK Publishing. Articles Johnson, R. Skip (2015). Escaping Conflict and the Karpman Drama Triangle. BPDFamily Forrest, Lynne (2008). The Three Faces of Victim — An Overview of the Drama Triangle. Transforming Victim Consciousness Choy, Acey (1990). The Winner's Triangle Transactional Analysis Journal 20(1):40 Gurowitz, Edward (1978). Energy Considerations in Treatment of the Drama Triangle. Transactional Analysis Journal January 1978 vol. 8 no. 1: 16–18 Behavioral concepts Transactional analysis Triangles Eponyms
Karpman drama triangle
Biology
3,742
2,215,177
https://en.wikipedia.org/wiki/Stagnation%20temperature
In thermodynamics and fluid mechanics, stagnation temperature is the temperature at a stagnation point in a fluid flow. At a stagnation point, the speed of the fluid is zero and all of the kinetic energy has been converted to internal energy and is added to the local static enthalpy. In both compressible and incompressible fluid flow, the stagnation temperature equals the total temperature at all points on the streamline leading to the stagnation point. See gas dynamics. Derivation Adiabatic Stagnation temperature can be derived from the first law of thermodynamics. Applying the steady flow energy equation and ignoring the work, heat and gravitational potential energy terms, we have: $h_0 = h + \frac{V^2}{2}$ where: $h_0$ = mass-specific stagnation (or total) enthalpy at a stagnation point; $h$ = mass-specific static enthalpy at the point of interest along the stagnation streamline; $V$ = velocity at the point of interest along the stagnation streamline. Substituting for enthalpy by assuming a constant specific heat capacity at constant pressure ($h = c_p T$) we have: $T_0 = T + \frac{V^2}{2 c_p}$ or $\frac{T_0}{T} = 1 + \frac{\gamma - 1}{2} M^2$ where: $c_p$ = specific heat capacity at constant pressure; $T_0$ = stagnation (or total) temperature at a stagnation point; $T$ = temperature (or static temperature) at the point of interest along the stagnation streamline; $V$ = velocity at the point of interest along the stagnation streamline; $M$ = Mach number at the point of interest along the stagnation streamline; $\gamma$ = ratio of specific heats ($c_p/c_v$), ~1.4 for air at ~300 K. Flow with heat addition $T_0 = T + \frac{V^2}{2 c_p} + \frac{q}{c_p}$ where: $q$ = heat per unit mass added into the system. Strictly speaking, enthalpy is a function of both temperature and density. However, invoking the common assumption of a calorically perfect gas, enthalpy can be converted directly into temperature as given above, which enables one to define a stagnation temperature in terms of the more fundamental property, stagnation enthalpy. Stagnation properties (e.g., stagnation temperature, stagnation pressure) are useful in jet engine performance calculations. In engine operations, stagnation temperature is often called total air temperature. A bimetallic thermocouple is frequently used to measure stagnation temperature, but allowances for thermal radiation must be made. Solar thermal collectors Performance testing of solar thermal collectors utilizes the term stagnation temperature to indicate the maximum achievable collector temperature with a stagnant fluid (no motion), an ambient temperature of 30 °C, and incident solar radiation of 1,000 W/m². The aforementioned figures are 'worst case scenario values' that allow collector designers to plan for potential overheat scenarios in the event of collector system malfunctions. See also Stagnation point Stagnation pressure Total air temperature References Fluid dynamics
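As a quick numerical illustration of the adiabatic relations above — a minimal sketch, assuming air treated as a calorically perfect gas (the property values used are standard textbook figures, not taken from this article):

import math

def stagnation_temperature(T, V, cp=1005.0):
    # T0 = T + V^2/(2*cp); T in kelvins, V in m/s,
    # cp in J/(kg*K) (~1005 for air near room temperature)
    return T + V**2 / (2.0 * cp)

def stagnation_ratio(M, gamma=1.4):
    # T0/T = 1 + (gamma - 1)/2 * M^2 for a calorically perfect gas
    return 1.0 + 0.5 * (gamma - 1.0) * M**2

# Air at T = 300 K moving at Mach 2: V = M * sqrt(gamma*R*T)
T, gamma, R = 300.0, 1.4, 287.0
V = 2.0 * math.sqrt(gamma * R * T)      # about 694 m/s
print(stagnation_temperature(T, V))     # about 540 K
print(T * stagnation_ratio(2.0))        # same result via the Mach-number form

Both routes give the same stagnation temperature, as they must: the Mach-number form is just the enthalpy form rewritten using the speed of sound.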
Stagnation temperature
Chemistry,Engineering
568
2,144,488
https://en.wikipedia.org/wiki/Speech%20Recognition%20Grammar%20Specification
Speech Recognition Grammar Specification (SRGS) is a W3C standard for how speech recognition grammars are specified. A speech recognition grammar is a set of word patterns, and tells a speech recognition system what to expect a human to say. For instance, if you call an auto-attendant application, it will prompt you for the name of a person (with the expectation that your call will be transferred to that person's phone). It will then start up a speech recognizer, giving it a speech recognition grammar. This grammar contains the names of the people in the auto attendant's directory and a collection of sentence patterns that are the typical responses from callers to the prompt. SRGS specifies two alternate but equivalent syntaxes, one based on XML, and one using augmented BNF format. In practice, the XML syntax is used more frequently. Both the ABNF and XML form have the expressive power of a context-free grammar. A grammar processor that does not support recursive grammars has the expressive power of a finite-state machine or regular expression language. If the speech recognizer returned just a string containing the actual words spoken by the user, the voice application would have to do the tedious job of extracting the semantic meaning from those words. For this reason, SRGS grammars can be decorated with tag elements, which, when executed, build up the semantic result. SRGS does not specify the contents of the tag elements: this is done in a companion W3C standard, Semantic Interpretation for Speech Recognition (SISR). SISR is based on ECMAScript, and ECMAScript statements inside the SRGS tags build up an ECMAScript semantic result object that is easy for the voice application to process. Both SRGS and SISR are W3C Recommendations, the final stage of the W3C standards track. The W3C VoiceXML standard, which defines how voice dialogs are specified, depends heavily on SRGS and SISR. Examples Here is an example of the augmented BNF of SRGS, as it could be used in an auto attendant application:

#ABNF 1.0 ISO-8859-1;

// Default grammar language is US English
language en-US;

// Single language attachment to tokens
// Note that "fr-CA" (Canadian French) is applied to only
// the word "oui" because of precedence rules
$yes = yes | oui!fr-CA;

// Single language attachment to an expansion
$people1 = (Michel Tremblay | André Roy)!fr-CA;

// Handling language-specific pronunciations of the same word
// A capable speech recognizer will listen for Mexican Spanish and
// US English pronunciations.
$people2 = Jose!en-US | Jose!es-MX;

/**
 * Multi-lingual input possible
 * @example may I speak to André Roy
 * @example may I speak to Jose
 */
public $request = may I speak to ($people1 | $people2);

Here is the same SRGS example, using the XML form:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN"
                  "http://www.w3.org/TR/speech-grammar/grammar.dtd">

<!-- the default grammar language is US English -->
<grammar xmlns="http://www.w3.org/2001/06/grammar"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://www.w3.org/2001/06/grammar
                             http://www.w3.org/TR/speech-grammar/grammar.xsd"
         xml:lang="en-US" version="1.0">

  <!-- single language attachment to tokens
       "yes" inherits US English language
       "oui" is Canadian French language -->
  <rule id="yes">
    <one-of>
      <item>yes</item>
      <item xml:lang="fr-CA">oui</item>
    </one-of>
  </rule>

  <!-- Single language attachment to an expansion -->
  <rule id="people1">
    <one-of xml:lang="fr-CA">
      <item>Michel Tremblay</item>
      <item>André Roy</item>
    </one-of>
  </rule>

  <!-- Handling language-specific pronunciations of the same word
       A capable speech recognizer will listen for Mexican Spanish and
       US English pronunciations. -->
  <rule id="people2">
    <one-of>
      <item xml:lang="en-US">Jose</item>
      <item xml:lang="es-MX">Jose</item>
    </one-of>
  </rule>

  <!-- Multi-lingual input is possible -->
  <rule id="request" scope="public">
    <example> may I speak to André Roy </example>
    <example> may I speak to Jose </example>

    may I speak to
    <one-of>
      <item><ruleref uri="#people1"/></item>
      <item><ruleref uri="#people2"/></item>
    </one-of>
  </rule>
</grammar>

See also SISR VoiceXML Pronunciation Lexicon Specification (PLS) Natural Language Semantics Markup Language JSGF External links SRGS Specification (W3C Recommendation) SISR Specification (W3C Recommendation) VoiceXML Forum World Wide Web Consortium standards XML-based standards
Speech Recognition Grammar Specification
Technology
1,246
4,958,097
https://en.wikipedia.org/wiki/55%20Cancri%20e
55 Cancri e (abbreviated 55 Cnc e, also known as Janssen ) is an exoplanet orbiting a Sun-like host star, 55 Cancri A. The mass of the exoplanet is about eight Earth masses and its diameter is about twice that of the Earth. 55 Cancri e was discovered on 30 August 2004, thus making it the first super-Earth discovered around a main sequence star, predating Gliese 876 d by a year. It is the innermost planet in its planetary system, taking less than 18 hours to complete an orbit. However, until the 2010 observations and recalculations, this planet had been thought to take about 2.8 days to orbit the star. Due to its proximity to its star, 55 Cancri e is extremely hot, with temperatures on the day side exceeding 3,000 Kelvin. The planet's thermal emission is observed to be variable, possibly as a result of volcanic activity. It has been proposed that 55 Cancri e could be a carbon planet. The atmosphere of 55 Cancri e has been extensively studied, with varying results. Initial studies suggested an atmosphere rich in hydrogen and helium, but later studies failed to confirm this, instead supporting an atmosphere composed of heavier molecules, possibly only a thin atmosphere of vaporized rock. Most recently as of 2024, JWST observations have ruled out the rock vapor atmosphere scenario and provided evidence for a substantial atmosphere rich in carbon dioxide or carbon monoxide. Name In July 2014 the International Astronomical Union (IAU) launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the winning name was Janssen for this planet. The winning name was submitted by the Royal Netherlands Association for Meteorology and Astronomy of the Netherlands. It honors the spectacle maker Zacharias Janssen who is sometimes associated with the invention of the telescope. Discovery Like the majority of extrasolar planets found prior to the Kepler mission, 55 Cancri e was discovered by detecting variations in its star's radial velocity. This was achieved by making sensitive measurements of the Doppler shift of the spectrum of 55 Cancri A. At the time of its discovery, three other planets were known orbiting the star. After accounting for these planets, a signal at around 2.8 days remained, which could be explained by a planet of at least 14.2 Earth masses in a very close orbit. The same measurements were used to confirm the existence of the uncertain planet 55 Cancri c. 55 Cancri e was one of the first extrasolar planets with a mass comparable to that of Neptune to be discovered. It was announced at the same time as Gliese 436 b, another "hot Neptune" orbiting the red dwarf star Gliese 436. Planet challenged In 2005, the existence of planet e was questioned by Jack Wisdom in a reanalysis of the data. He suggested that the 2.8-day planet was an alias and, separately, that there was a 260-day planet in orbit around 55 Cancri. In 2008, Fischer et al. published a new analysis that appeared to confirm the existence of the 2.8-day planet and the 260-day planet. However, the 2.8-day planet was shown to be an alias by Dawson and Fabrycky in 2010; its true period was 0.7365 days. Transit The planet's transit of its host star was announced on 27 April 2011, based on two weeks of nearly continuous photometric monitoring with the MOST space telescope. The transits occur with the period (0.74 days) and phase that had been predicted by Dawson and Fabrycky. 
This is one of the few planetary transits to be confirmed around a well-known star, and allowed investigations into the planet's composition. Orbit and rotation 55 Cancri e orbits very close to its parent star; with average orbital distance of 0.01544 ± 0.00005 AU, it takes only 18 hours to complete an orbit. Analysis of its transits reveal that its orbital inclination is about 83.6°, and appears to be close to being aligned with the rotation of its parent star, with obliquity of 23 , favouring dynamically gentle inward migration scenarios for this planet. 55 Cancri e may also be coplanar with the next planet in the system, 55 Cancri b. Due to its old age and proximity to the star, the planet is extremely likely to be tidally locked, meaning that one hemisphere, referred to as dayside, permanently faces the star, while the other, the nightside, always faces away from it. Characteristics 55 Cancri e receives more radiation than Gliese 436 b. The side of the planet facing its star has temperatures more than 2,000 Kelvin (approximately 1,700 degrees Celsius or 3,100 Fahrenheit), hot enough to melt iron. Infrared mapping with the Spitzer Space Telescope indicated an average day-side temperature of and an average night-side temperature of around . Reanalysis of the Spitzer data in 2022 found a hotter day-side temperature of and set an upper limit of on the night-side temperature. It was initially unknown whether 55 Cancri e was a small gas giant like Neptune or a large rocky terrestrial planet. In 2011, a transit of the planet was confirmed, allowing scientists to calculate its density. At first it was suspected to be a water planet. As initial observations showed no hydrogen in its Lyman-alpha signature during transit, Ehrenreich speculated that its volatile materials might be carbon dioxide instead of water or hydrogen. An alternative possibility is that 55 Cancri e is a solid planet made of carbon-rich material rather than the oxygen-rich material that makes up the terrestrial planets in the Solar System. In this case, roughly a third of the planet's mass would be carbon, much of which may be in the form of diamond as a result of the temperatures and pressures in the planet's interior. Further observations are necessary to confirm the nature of the planet. A third argument is that the tidal forces, together with the orbital and rotational centrifugal forces, can partially confine a hydrogen-rich atmosphere on the nightside. Assuming an atmosphere dominated by volcanic species and a large hydrogen component, the heavier molecules could be confined within latitudes < 80° while the volatile hydrogen is not. Because of this disparity, the hydrogen would have to slowly diffuse out into the dayside where X-ray and ultraviolet irradiation would destroy it. In order for this mechanism to have taken effect, it is necessary for 55 Cancri e to have become tidally locked before losing the totality of its hydrogen envelope. This model is consistent with spectroscopic measurements claiming to have discovered the presence of hydrogen and with other studies which were unable to discover a significant hydrogen-destruction rate. In February 2016, it was announced that NASA's Hubble Space Telescope had detected hydrogen cyanide, but no water vapor, in the atmosphere of 55 Cancri e, which is only possible if the atmosphere is predominantly hydrogen or helium. This is the first time the atmosphere of a super-Earth exoplanet was analyzed successfully. 
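As an illustrative consistency check of the orbital figures quoted above — a minimal sketch; the stellar mass of roughly 0.9 solar masses is an assumption introduced here for the calculation, not a value stated in this article:

import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11         # astronomical unit, m
M_SUN = 1.989e30      # solar mass, kg

a = 0.01544 * AU      # semi-major axis from the text
M_star = 0.9 * M_SUN  # assumed mass of 55 Cancri A (hypothetical input)

# Kepler's third law: P = 2*pi*sqrt(a^3 / (G*M))
P = 2.0 * math.pi * math.sqrt(a**3 / (G * M_star))
print(P / 3600.0)     # about 17.7 hours, consistent with "less than 18 hours"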
In November 2017, it was announced that infrared observations with the Spitzer Space Telescope indicated the presence of a global lava ocean obscured by an atmosphere with a pressure of about 1.4 bar, slightly thicker than that of Earth. The atmosphere may contain chemicals similar to those in Earth's atmosphere, such as nitrogen and possibly oxygen, which would account for the infrared data observed by Spitzer. In contradiction to the February 2016 findings, a spectroscopic study in 2012 failed to detect escaping hydrogen from the atmosphere, and a spectroscopic study in 2020 failed to detect escaping helium, indicating that the planet probably has no primordial atmosphere. Atmospheres made of heavier molecules such as oxygen and nitrogen are not ruled out by these data. A study published in May 2024 used observations from the James Webb Space Telescope's Near-InfraRed Camera and Mid-Infrared Instrument to produce a thermal emission spectrum of 55 Cancri e within the range of 4 to 12 μm. These measurements ruled out the hypothesis that the planet is a lava world covered by a "tenuous atmosphere made of vaporized rock", as previously proposed, and indicated a "bona fide volatile atmosphere likely rich in CO2 or CO". The authors stated that 55 Cancri e's magma ocean could be outgassing and sustaining this atmosphere. Volcanism Large surface-temperature variations on 55 Cancri e have been attributed to possible volcanic activity releasing large clouds of dust which blanket the planet and block thermal emissions. By 2022, observations had shown a large variability in the planetary transit depths, which can be attributed to large-scale volcanism, or to the presence of a variable gas torus co-orbital with the planet. Search for Radio Emissions Since 55 Cancri e orbits less than 0.1 AU from its host star, some scientists hypothesized that it may cause stellar flaring synchronized to the orbital period of the exoplanet. A 2011 search for the coronal radio emissions that such magnetic star–planet interactions would produce detected no signal. See also Appearance of extrasolar planets Lists of exoplanets GJ 1132 b Mu Arae c Planetary system References External links Spitzer Detects a Steaming Super-Earth Eclipsing Its Star (JPL 09.26.11) Interactive visualisation of the 55 Cancri system 55 Cancri Cancer (constellation) Exoplanets discovered in 2004 Super-Earths Terrestrial planets Exoplanets detected by radial velocity Transiting exoplanets Exoplanets with proper names
55 Cancri e
Astronomy
2,032
67,182,867
https://en.wikipedia.org/wiki/Directional%20component%20analysis
Directional component analysis (DCA) is a statistical method used in climate science for identifying representative patterns of variability in space-time data-sets such as historical climate observations, weather prediction ensembles or climate ensembles. The first DCA pattern is a pattern of weather or climate variability that is both likely to occur (measured using likelihood) and has a large impact (for a specified linear impact function, and given certain mathematical conditions: see below). The first DCA pattern contrasts with the first PCA pattern, which is likely to occur, but may not have a large impact, and with a pattern derived from the gradient of the impact function, which has a large impact, but may not be likely to occur. DCA differs from other pattern identification methods used in climate research, such as EOFs, rotated EOFs and extended EOFs, in that it takes into account an external vector, the gradient of the impact. DCA provides a way to reduce large ensembles from weather forecasts or climate models to just two patterns. The first pattern is the ensemble mean, and the second pattern is the DCA pattern, which represents variability around the ensemble mean in a way that takes impact into account. DCA contrasts with other methods that have been proposed for the reduction of ensembles in that it takes impact into account in addition to the structure of the ensemble. Overview Inputs DCA is calculated from two inputs: a multivariate dataset of weather or climate data, such as historical climate observations or a weather or climate ensemble; and a linear impact function. The linear impact function is a function which defines a level of impact for every spatial pattern in the weather or climate data as a weighted sum of the values at different locations in the spatial pattern. An example is the mean value across the spatial pattern. The linear impact function can be generated as the first term in the multivariate Taylor series of a non-linear impact function. Formula Consider a space-time data set $X$, containing individual spatial pattern vectors $x$, where the individual patterns are each considered as single samples from a multivariate normal distribution with mean zero and covariance matrix $C$. We define a linear impact function of a spatial pattern as $e(x) = w^{T}x$, where $w$ is a vector of spatial weights. The first DCA pattern is given in terms of the covariance matrix and the weights by the proportional expression $x \propto Cw$. The pattern can then be normalized to any length as required. Properties If the weather or climate data is elliptically distributed (e.g., is distributed as a multivariate normal distribution or a multivariate t-distribution) then the first DCA pattern (DCA1) is defined as the spatial pattern with the following mathematical properties: DCA1 maximises probability density for a given value of impact DCA1 maximises impact for a given value of probability density DCA1 maximises the product of impact and probability density DCA1 is the conditional expectation, conditional on exceeding a certain level of impact DCA1 is the impact-weighted ensemble mean Any modification of DCA1 will lead to a pattern that is either less extreme, or has a lower probability density. Rainfall Example For instance, in a rainfall anomaly dataset, using an impact metric defined as the total rainfall anomaly, the first DCA pattern is the spatial pattern that has the highest probability density for a given total rainfall anomaly.
If the given total rainfall anomaly is chosen to have a large value, then this pattern combines being extreme in terms of the metric (i.e., representing large amounts of total rainfall) with being likely in terms of the pattern, and so is well suited as a representative extreme pattern. Comparison with PCA The main differences between Principal component analysis (PCA) and DCA are PCA is a function of just the covariance matrix, and the first PCA pattern is defined so as to maximise explained variance DCA is a function of the covariance matrix and a vector direction (the gradient of the impact function), and the first DCA pattern is defined so as to maximise probability density for a given value of the impact metric As a result, for unit vector spatial patterns: The first PCA spatial pattern always corresponds to a higher explained variance, but has a lower value of the impact metric (e.g., the total rainfall anomaly), except in degenerate cases The first DCA spatial pattern always corresponds to a higher value of the impact metric, but has a lower value of the explained variance, except in degenerate cases The degenerate cases occur when the PCA and DCA patterns are equal. Also, given the first PCA pattern, the DCA pattern can be scaled so that: The scaled DCA pattern has the same probability density as the first PCA pattern, but higher impact, or The scaled DCA pattern has the same impact as the first PCA pattern, but higher probability density. Two Dimensional Example Source: Figure 1 gives an example, which can be understood as follows: The two axes represent anomalies of annual mean rainfall at two locations, with the highest total rainfall anomaly values towards the top right corner of the diagram The joint variability of the rainfall anomalies at the two locations is assumed to follow a bivariate normal distribution The ellipse shows a single contour of probability density from this bivariate normal, with higher values inside the ellipse The red dot at the centre of the ellipse shows zero rainfall anomalies at both locations The blue parallel-line arrow shows the principal axis of the ellipse, which is also the first PCA spatial pattern vector In this case, the PCA pattern is scaled so that it touches the ellipse The diagonal straight line shows a line of constant positive total rainfall anomaly, assumed to be at some fairly extreme level The red dotted-line arrow shows the first DCA pattern, which points towards the point at which the diagonal line is tangent to the ellipse In this case, the DCA pattern is scaled so that it touches the ellipse From this diagram, the DCA pattern can be seen to possess the following properties: Of all the points on the diagonal line, it is the one with the highest probability density Of all the points on the ellipse, it is the one with the highest total rainfall anomaly It has the same probability density as the PCA pattern, but represents higher total rainfall (i.e., points further towards the top right hand corner of the diagram) Any change of the DCA pattern will reduce either the probability density (if it moves out of the ellipse) or reduce the total rainfall anomaly (if it moves along or into the ellipse) In this case the total rainfall anomaly of the PCA pattern is quite small, because of anticorrelations between the rainfall anomalies at the two locations. As a result, the first PCA pattern is not a good representative example of a pattern with large total rainfall anomaly, while the first DCA pattern is. 
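The two-dimensional example above can be reproduced numerically. The following is a minimal sketch, assuming NumPy is available; the covariance values are invented purely to illustrate the anticorrelated case just described:

import numpy as np

# Illustrative 2x2 covariance: equal variances, negative correlation
C = np.array([[1.0, -0.6],
              [-0.6, 1.0]])
w = np.array([1.0, 1.0])        # impact weights: total rainfall anomaly

# First PCA pattern: leading eigenvector of the covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(C)
pca1 = eigenvectors[:, np.argmax(eigenvalues)]

# First DCA pattern: proportional to C @ w, normalised to unit length
dca1 = C @ w
dca1 = dca1 / np.linalg.norm(dca1)

print(pca1, w @ pca1)   # loadings of opposite sign; near-zero total anomaly
print(dca1, w @ dca1)   # equal positive loadings; large total anomaly

With anticorrelation between the two locations, the leading eigenvector has loadings of opposite sign and hence almost no total-rainfall impact, while the DCA pattern loads both locations positively — exactly the contrast the figure describes.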
In $n$ dimensions the ellipse becomes an ellipsoid, the diagonal line becomes an $(n-1)$-dimensional plane, and the PCA and DCA patterns are vectors in $n$ dimensions. Applications Application to Climate Variability DCA has been applied to the CRU data-set of historical rainfall variability in order to understand the most likely patterns of rainfall extremes in the US and China. Application to Ensemble Weather Forecasts DCA has been applied to ECMWF medium-range weather forecast ensembles in order to identify the most likely patterns of extreme temperatures in the ensemble forecast. Application to Ensemble Climate Model Projections DCA has been applied to ensemble climate model projections in order to identify the most likely patterns of extreme future rainfall. Derivation of the First DCA Pattern Consider a space-time data-set $X$, containing individual spatial pattern vectors $x$, where the individual patterns are each considered as single samples from a multivariate normal distribution with mean zero and covariance matrix $C$. As a function of $x$, the log probability density is proportional to $-\frac{1}{2}x^{T}C^{-1}x$. We define a linear impact function of a spatial pattern as $e(x) = w^{T}x$, where $w$ is a vector of spatial weights. We then seek to find the spatial pattern that maximises the probability density for a given value of the linear impact function. This is equivalent to finding the spatial pattern that maximises the log probability density for a given value of the linear impact function, which is slightly easier to solve. This is a constrained maximisation problem, and can be solved using the method of Lagrange multipliers. The Lagrangian function is given by $L(x, \lambda) = -\frac{1}{2}x^{T}C^{-1}x + \lambda\,(w^{T}x - c)$. Differentiating by $x$ and setting the result to zero gives the solution $x \propto Cw$. Normalising so that $x$ is a unit vector gives $x = Cw/\sqrt{w^{T}C^{2}w}$. This is the first DCA pattern. Subsequent patterns can be derived which are orthogonal to the first, to form an orthonormal set and a method for matrix factorisation. References Climate and weather statistics Numerical climate and weather models Data analysis Multivariate statistics Climate
Directional component analysis
Physics
1,794
23,039,694
https://en.wikipedia.org/wiki/Phenylethylidenehydrazine
Phenylethylidenehydrazine (PEH), also known as 2-phenylethylhydrazone or β-phenylethylidenehydrazine, is a GABA transaminase inhibitor. It is a metabolite of the antidepressant phenelzine and is responsible for its elevation of GABA concentrations. PEH may contribute to phenelzine's anxiolytic effects. See also Phenelzine References Hydrazones GABA transaminase inhibitors Human drug metabolites
Phenylethylidenehydrazine
Chemistry
116
23,581,497
https://en.wikipedia.org/wiki/C3H9NO3S
{{DISPLAYTITLE:C3H9NO3S}} The molecular formula C3H9NO3S (molar mass: 139.173 g/mol, exact mass: 139.0303 u) may refer to: Homotaurine N-Methyltaurine Molecular formulas
C3H9NO3S
Physics,Chemistry
66
23,105,042
https://en.wikipedia.org/wiki/Primordial%20nuclide
In geochemistry, geophysics and nuclear physics, primordial nuclides, also known as primordial isotopes, are nuclides found on Earth that have existed in their current form since before Earth was formed. Primordial nuclides were present in the interstellar medium from which the solar system was formed, and were formed in, or after, the Big Bang, by nucleosynthesis in stars and supernovae followed by mass ejection, by cosmic ray spallation, and potentially from other processes. They are the stable nuclides plus the long-lived fraction of radionuclides surviving in the primordial solar nebula through planet accretion until the present; 286 such nuclides are known. Stability All of the known 251 stable nuclides, plus another 35 nuclides that have half-lives long enough to have survived from the formation of the Earth, occur as primordial nuclides. These 35 primordial radionuclides represent isotopes of 28 separate elements. Cadmium, tellurium, xenon, neodymium, samarium, osmium, and uranium each have two primordial radioisotopes (113Cd, 116Cd; 128Te, 130Te; 124Xe, 136Xe; 144Nd, 150Nd; 147Sm, 148Sm; 184Os, 186Os; and 235U, 238U). Because the age of the Earth is about 4.6×10^9 years (4.6 billion years), the half-life of the given nuclides must be greater than about 10^8 years (100 million years) for practical considerations. For example, for a nuclide with half-life 6×10^7 years (60 million years), this means 77 half-lives have elapsed, meaning that for each mole (6.022×10^23 atoms) of that nuclide being present at the formation of Earth, only 4 atoms remain today. The seven shortest-lived primordial nuclides (i.e., the nuclides with the shortest half-lives) to have been experimentally verified are 235U (7.0×10^8 years), 40K (1.25×10^9 years), 238U (4.5×10^9 years), 232Th (1.4×10^10 years), 176Lu (3.8×10^10 years), 187Re (4.2×10^10 years), and 87Rb (5.0×10^10 years). These are the seven nuclides with half-lives comparable to, or somewhat less than, the estimated age of the universe. (87Rb, 187Re, 176Lu, and 232Th have half-lives somewhat longer than the age of the universe.) For a complete list of the 35 known primordial radionuclides, including the next 28 with half-lives much longer than the age of the universe, see the complete list below. For practical purposes, nuclides with half-lives much longer than the age of the universe may be treated as if they were stable. 87Rb, 187Re, 176Lu, 232Th, and 238U have half-lives long enough that their decay is limited over geological time scales; 40K and 235U have shorter half-lives and are hence severely depleted, but are still long-lived enough to persist significantly in nature. The longest-lived isotope not proven to be primordial is 146Sm, which has a half-life of about 1.0×10^8 years, followed by 244Pu (8.0×10^7 years) and 92Nb (3.5×10^7 years). 244Pu was reported to exist in nature as a primordial nuclide in 1971, but this detection could not be confirmed by further studies in 2012 and 2022. Taking into account that all these nuclides must exist for at least 4.6×10^9 years, 146Sm must survive 45 half-lives (and hence be reduced by a factor of 2^45 ≈ 3.5×10^13), 244Pu must survive 57 (and be reduced by 2^57 ≈ 1.4×10^17), and 92Nb must survive 130 (and be reduced by 2^130 ≈ 1.4×10^39). Mathematically, considering the likely initial abundances of these nuclides, primordial 146Sm and 244Pu should persist somewhere within the Earth today, even if they are not identifiable in the relatively minor portion of the Earth's crust available to human assays, while 92Nb and all shorter-lived nuclides should not. Nuclides such as 92Nb that were present in the primordial solar nebula but have long since decayed away completely are termed extinct radionuclides if they have no other means of being regenerated.
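The survival arithmetic used in this section follows directly from the exponential decay law N/N0 = 2^(−t/t½); a minimal sketch of the mole example above:

# Fraction of a radionuclide surviving after time t, given half-life t_half
def surviving_fraction(t, t_half):
    return 2.0 ** (-t / t_half)

AGE_EARTH = 4.6e9      # years
AVOGADRO = 6.022e23    # atoms per mole

# Example from the text: half-life of 6e7 years
half_lives = AGE_EARTH / 6.0e7                         # about 77
atoms_left = AVOGADRO * surviving_fraction(AGE_EARTH, 6.0e7)
print(half_lives, atoms_left)   # ~76.7 half-lives, ~5 atoms per mole
# With exactly 77 half-lives the count is ~4 atoms, as stated in the text.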
As for 244Pu, calculations suggest that as of 2022, sensitivity limits were about one order of magnitude away from detecting it as a primordial nuclide. Because primordial chemical elements often consist of more than one primordial isotope, there are only 83 distinct primordial chemical elements. Of these, 80 have at least one observationally stable isotope and three additional primordial elements have only radioactive isotopes (bismuth, thorium, and uranium). Naturally occurring nuclides that are not primordial Some unstable isotopes which occur naturally (such as 14C, 3H, and 239Pu) are not primordial, as they must be constantly regenerated. This occurs by cosmic radiation (in the case of cosmogenic nuclides such as 14C and 3H), or (rarely) by such processes as geonuclear transmutation (neutron capture of uranium in the case of 237Np and 239Pu). Other examples of common naturally occurring but non-primordial nuclides are isotopes of radon, polonium, and radium, which are all radiogenic nuclide daughters of uranium decay and are found in uranium ores. The stable argon isotope 40Ar is actually more common as a radiogenic nuclide than as a primordial nuclide; it forms almost 1% of the Earth's atmosphere and is continually regenerated by the beta decay of the extremely long-lived radioactive primordial isotope 40K, whose half-life is on the order of a billion years and which has thus been generating argon since early in the Earth's existence. (Primordial argon was dominated by the alpha process nuclide 36Ar, which is significantly rarer than 40Ar on Earth.) A similar radiogenic series is derived from the long-lived radioactive primordial nuclide 232Th. These nuclides are described as geogenic, meaning that they are decay or fission products of uranium or other actinides in subsurface rocks. All such nuclides have shorter half-lives than their parent radioactive primordial nuclides. Some other geogenic nuclides do not occur in the decay chains of 232Th, 235U, or 238U but can still fleetingly occur naturally as products of the spontaneous fission of one of these three long-lived nuclides, such as 126Sn, which makes up about 10⁻¹⁴ of all natural tin. Another, 99Tc, has also been detected. There are five other long-lived fission products known. Primordial elements A primordial element is a chemical element with at least one primordial nuclide. There are 251 stable primordial nuclides and 35 radioactive primordial nuclides, but only 80 primordial stable elements—hydrogen through lead, atomic numbers 1 to 82, except for technetium (43) and promethium (61)—and three radioactive primordial elements—bismuth (83), thorium (90), and uranium (92). If plutonium (94) turns out to be primordial (specifically, the long-lived isotope 244Pu), then it would be a fourth radioactive primordial element, though practically speaking it would still be more convenient to produce it synthetically. Bismuth's half-life is so long that it is often classed with the 80 stable elements instead, since its radioactivity is not a cause for concern. The number of elements is smaller than the number of nuclides, because many of the primordial elements are represented by multiple isotopes. See chemical element for more information. Naturally occurring stable nuclides As noted, these number about 251. For a list, see the article list of elements by stability of isotopes. For a complete list noting which of the "stable" 251 nuclides may be in some respect unstable, see list of nuclides and stable nuclide.
These questions do not impact the question of whether a nuclide is primordial, since all "nearly stable" nuclides, with half-lives longer than the age of the universe, are also primordial. Radioactive primordial nuclides Though it is estimated that about 35 primordial nuclides are radioactive (list below), it becomes very hard to determine the exact total number of radioactive primordials, because the total number of stable nuclides is uncertain. There are many extremely long-lived nuclides whose half-lives are still unknown; in fact, all nuclides heavier than dysprosium-164 are theoretically radioactive. For example, it is predicted theoretically that all isotopes of tungsten, including those indicated by even the most modern empirical methods to be stable, must be radioactive and can alpha decay, but this could only be measured experimentally for 180W. Likewise, all four primordial isotopes of lead are expected to decay to mercury, but the predicted half-lives are so long (some exceeding 10¹⁰⁰ years) that such decays could hardly be observed in the near future. Nevertheless, the number of nuclides with half-lives so long that they cannot be measured with present instruments—and are considered from this viewpoint to be stable nuclides—is limited. Even when a "stable" nuclide is found to be radioactive, it merely moves from the stable to the unstable list of primordials, and the total number of primordial nuclides remains unchanged. For practical purposes, these nuclides may be considered stable for all purposes outside specialized research. List of 35 radioactive primordial nuclides and measured half-lives These 35 primordial radionuclides are isotopes of 28 elements (cadmium, neodymium, osmium, samarium, tellurium, uranium, and xenon each have two primordial radioisotopes). These nuclides are listed in order of decreasing stability. Many of them are so nearly stable that they compete for abundance with stable isotopes of their respective elements. For three elements (indium, tellurium, and rhenium) a very long-lived radioactive primordial nuclide is more abundant than a stable nuclide. The longest-lived radionuclide known, 128Te, has a half-life of 2.2×10²⁴ years: 1.6×10¹⁴ times the age of the Universe. Only four of these 35 nuclides have half-lives shorter than, or equal to, the age of the universe. Most of the other 31 have half-lives much longer. The shortest-lived primordial, 235U, has a half-life of 703.8 million years, about 1/6 the age of the Earth and Solar System. Many of these nuclides decay by double beta decay, though some like 209Bi decay by other means such as alpha decay. At the end of the list are two more nuclides: 146Sm and 244Pu. They have not been confirmed as primordial, but their half-lives are long enough that minute quantities should persist today. List legends See also Alpha nuclide Table of nuclides sorted by half-life Table of nuclides Isotope geochemistry Radionuclide Mononuclidic element Monoisotopic element Stable isotope List of nuclides List of elements by stability of isotopes Big Bang nucleosynthesis References Geochemistry Radiometric dating Isotopes Metrology
Primordial nuclide
Physics,Chemistry
2,368
58,320
https://en.wikipedia.org/wiki/Potential%20flow
In fluid dynamics, potential flow or irrotational flow refers to a description of a fluid flow with no vorticity in it. Such a description typically arises in the limit of vanishing viscosity, i.e., for an inviscid fluid and with no vorticity present in the flow. Potential flow describes the velocity field as the gradient of a scalar function: the velocity potential. As a result, a potential flow is characterized by an irrotational velocity field, which is a valid approximation for several applications. The irrotationality of a potential flow is due to the curl of the gradient of a scalar always being equal to zero. In the case of an incompressible flow the velocity potential satisfies Laplace's equation, and potential theory is applicable. However, potential flows also have been used to describe compressible flows and Hele-Shaw flows. The potential flow approach occurs in the modeling of both stationary as well as nonstationary flows. Applications of potential flow include: the outer flow field for aerofoils, water waves, electroosmotic flow, and groundwater flow. For flows (or parts thereof) with strong vorticity effects, the potential flow approximation is not applicable. In flow regions where vorticity is known to be important, such as wakes and boundary layers, potential flow theory is not able to provide reasonable predictions of the flow. Fortunately, there are often large regions of a flow where the assumption of irrotationality is valid, which is why potential flow is used for various applications, for instance in: flow around aircraft, groundwater flow, acoustics, water waves, and electroosmotic flow. Description and characteristics In potential or irrotational flow, the vorticity vector field is zero, i.e., ω ≡ ∇ × v = 0, where v is the velocity field and ω is the vorticity field. Like any vector field having zero curl, the velocity field can be expressed as the gradient of a certain scalar, say φ, which is called the velocity potential, since the curl of the gradient is always zero. We therefore have v = ∇φ. The velocity potential is not uniquely defined since one can add to it an arbitrary function of time, say f(t), without affecting the relevant physical quantity, which is v. The non-uniqueness is usually removed by suitably selecting appropriate initial or boundary conditions satisfied by φ, and as such the procedure may vary from one problem to another. In potential flow, the circulation Γ around any simply-connected contour is zero. This can be shown using the Stokes theorem, Γ = ∮ v · dl = ∫ (∇ × v) · dS = 0, where dl is the line element on the contour and dS is the area element of any surface bounded by the contour. In multiply-connected space (say, around a contour enclosing a solid body in two dimensions or around a contour enclosing a torus in three dimensions) or in the presence of concentrated vortices (say, in the so-called irrotational vortices or point vortices, or in smoke rings), the circulation need not be zero. In the former case, Stokes theorem cannot be applied, and in the latter case, ∇ × v is non-zero within the region bounded by the contour. Around a contour encircling an infinitely long solid cylinder, with which the contour loops n times, we have Γ = nκ, where κ is a cyclic constant; a short numerical check of this circulation property is sketched below. This example belongs to a doubly-connected space.
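To illustrate the cyclic constant just mentioned, here is a short numerical check (an illustration of my own, not from the article): the velocity field of a line vortex is irrotational away from its core, yet the circulation around any contour enclosing it once equals κ, and a contour looping twice picks up 2κ.

```python
import numpy as np

kappa = 2.0  # cyclic constant (arbitrary value for the demo)

def velocity(x, y):
    """Irrotational line-vortex field: v = kappa/(2*pi*r^2) * (-y, x)."""
    r2 = x**2 + y**2
    return -kappa * y / (2 * np.pi * r2), kappa * x / (2 * np.pi * r2)

def circulation(radius, n_loops=1, n_pts=20000):
    """Line integral of v . dl around a circle traversed n_loops times."""
    t = np.linspace(0.0, 2 * np.pi * n_loops, n_pts)
    x, y = radius * np.cos(t), radius * np.sin(t)
    u, v = velocity(x, y)
    dx, dy = np.gradient(x), np.gradient(y)
    return np.sum(u * dx + v * dy)

print(circulation(1.0))             # ~kappa, independent of contour radius
print(circulation(3.5))             # ~kappa again
print(circulation(1.0, n_loops=2))  # ~2*kappa: Gamma = n*kappa
```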
In an n-tuply connected space, there are n − 1 such cyclic constants, namely κ1, κ2, …, κn−1. Incompressible flow In case of an incompressible flow — for instance of a liquid, or a gas at low Mach numbers; but not for sound waves — the velocity v has zero divergence: ∇ · v = 0. Substituting v = ∇φ here shows that φ satisfies the Laplace equation ∇²φ = 0, where ∇² = ∇ · ∇ is the Laplace operator (sometimes also written Δ). Since solutions of the Laplace equation are harmonic functions, every harmonic function represents a potential flow solution. As evident, in the incompressible case, the velocity field is determined completely from its kinematics: the assumptions of irrotationality and zero divergence of flow. Dynamics, in connection with the momentum equations, only has to be applied afterwards, if one is interested in computing the pressure field: for instance for flow around airfoils through the use of Bernoulli's principle. In incompressible flows, contrary to common misconception, the potential flow indeed satisfies the full Navier–Stokes equations, not just the Euler equations, because the viscous term is identically zero. It is the inability of the potential flow to satisfy the required boundary conditions, especially near solid boundaries, that makes it invalid in representing the required flow field. If the potential flow satisfies the necessary conditions, then it is the required solution of the incompressible Navier–Stokes equations. In two dimensions, with the help of the harmonic function φ and its conjugate harmonic function ψ (the stream function), incompressible potential flow reduces to a very simple system that is analyzed using complex analysis (see below). Compressible flow Steady flow Potential flow theory can also be used to model irrotational compressible flow. The derivation of the governing equation for φ from Euler's equation is quite straightforward. The continuity and the (potential flow) momentum equations for steady flows are given by ∇ · (ρv) = 0 and (v · ∇)v = −(1/ρ)∇p = −(c²/ρ)∇ρ, where the last equation follows from the fact that entropy is constant for a fluid particle and that the square of the sound speed is c² = (∂p/∂ρ)s. Eliminating ∇ρ from the two governing equations results in c²∇ · v = v · [(v · ∇)v]. The incompressible version emerges in the limit c → ∞. Substituting v = ∇φ here results in c²∇²φ = ∇φ · [(∇φ · ∇)∇φ], where c is expressed as a function of the velocity magnitude v = |∇φ|. For a polytropic gas, c² = (γ − 1)(h0 − v²/2), where γ is the specific heat ratio and h0 is the stagnation enthalpy. In two dimensions, the equation simplifies to (c² − φx²)φxx + (c² − φy²)φyy − 2φxφyφxy = 0, where subscripts on φ denote partial derivatives. Validity: As it stands, the equation is valid for any inviscid potential flows, irrespective of whether the flow is subsonic or supersonic (e.g. Prandtl–Meyer flow). However in supersonic and also in transonic flows, shock waves can occur which can introduce entropy and vorticity into the flow, making the flow rotational. Nevertheless, there are two cases for which potential flow prevails even in the presence of shock waves, which are explained from the (not necessarily potential) momentum equation written in the following form: ∇(h + v²/2) − v × ω = T∇s, where h is the specific enthalpy, ω is the vorticity field, T is the temperature and s is the specific entropy. Since in front of the leading shock wave we have a potential flow, Bernoulli's equation shows that h + v²/2 is constant, which is also constant across the shock wave (Rankine–Hugoniot conditions), and therefore we can write v × ω = −T∇s. 1) When the shock wave is of constant intensity, the entropy discontinuity across the shock wave is also constant, i.e., ∇s = 0, and therefore vorticity production is zero. Shock waves at the pointed leading edge of a two-dimensional wedge or a three-dimensional cone (Taylor–Maccoll flow) have constant intensity.
2) For weak shock waves, the entropy jump across the shock wave is a third-order quantity in terms of the shock wave strength and therefore can be neglected. Shock waves in slender bodies lie nearly parallel to the body and they are weak. Nearly parallel flows: When the flow is predominantly unidirectional with small deviations, such as in flow past slender bodies, the full equation can be further simplified. Let U be the mainstream speed, directed along the x-axis, and consider small deviations from this velocity field. The corresponding velocity potential can be written as φ = xU + ϕ, where ϕ characterizes the small departure from the uniform flow and satisfies the linearized version of the full equation. This is given by (1 − M²)∂²ϕ/∂x² + ∂²ϕ/∂y² + ∂²ϕ/∂z² = 0, where M = U/c∞ is the constant Mach number corresponding to the uniform flow. This equation is valid provided M is not close to unity. When |M − 1| is small (transonic flow), we obtain a nonlinear equation whose nonlinearity is governed by α*, the critical value of the Landau derivative, together with the specific volume V. The transonic flow is completely characterized by the single parameter α*, which for a polytropic gas takes the value α* = (γ + 1)/2. Under hodograph transformation, the transonic equation in two dimensions becomes the Euler–Tricomi equation. Unsteady flow The continuity and the (potential flow) momentum equations for unsteady flows are given by ∂ρ/∂t + ∇ · (ρv) = 0 and ∂v/∂t + (v · ∇)v = −(1/ρ)∇p = −(c²/ρ)∇ρ. The first integral of the (potential flow) momentum equation is given by ∂φ/∂t + v²/2 + h = f(t), where f(t) is an arbitrary function. Without loss of generality, we can set f(t) = 0 since φ is not uniquely defined. Combining these equations, we obtain c²∇ · v = (∂/∂t + v · ∇)(∂φ/∂t + v²/2). Substituting v = ∇φ here results in c²∇²φ = ∂²φ/∂t² + 2∇φ · ∇(∂φ/∂t) + ∇φ · ∇(|∇φ|²/2). Nearly parallel flows: As before, for nearly parallel flows, we can write (after introducing a rescaled time τ = c∞t) (1 − M²)∂²ϕ/∂x² + ∂²ϕ/∂y² + ∂²ϕ/∂z² − 2M∂²ϕ/∂x∂τ − ∂²ϕ/∂τ² = 0, provided the constant Mach number M is not close to unity. When |M − 1| is small (transonic flow), the corresponding nonlinear transonic equation applies. Sound waves: In sound waves, the velocity magnitude (or the Mach number) is very small, although the unsteady term is now comparable to the other leading terms in the equation. Thus neglecting all quadratic and higher-order terms and noting that in the same approximation c is a constant (for example, in a polytropic gas c² = (γ − 1)h0), we have ∂²φ/∂t² = c²∇²φ, which is a linear wave equation for the velocity potential φ. Again the oscillatory part of the velocity vector v is related to the velocity potential by v = ∇φ, while as before ∇² is the Laplace operator, and c is the average speed of sound in the homogeneous medium. Note that also the oscillatory parts of the pressure p and density ρ each individually satisfy the wave equation, in this approximation. Applicability and limitations Potential flow does not include all the characteristics of flows that are encountered in the real world. Potential flow theory cannot be applied for viscous internal flows, except for flows between closely spaced plates. Richard Feynman considered potential flow to be so unphysical that the only fluid to obey the assumptions was "dry water" (quoting John von Neumann). Incompressible potential flow also makes a number of invalid predictions, such as d'Alembert's paradox, which states that the drag on any object moving through an infinite fluid otherwise at rest is zero. More precisely, potential flow cannot account for the behaviour of flows that include a boundary layer. Nevertheless, understanding potential flow is important in many branches of fluid mechanics. In particular, simple potential flows (called elementary flows) such as the free vortex and the point source possess ready analytical solutions. These solutions can be superposed to create more complex flows satisfying a variety of boundary conditions; a minimal numerical sketch of such a superposition follows below.
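As a concrete sketch of such a superposition (an added illustration; the values of U and R are arbitrary demo choices), combining a uniform stream with a doublet yields the classical potential flow past a circular cylinder, and the radial velocity on the surface r = R vanishes, so that streamline can be treated as a solid boundary:

```python
import numpy as np

U, R = 1.0, 1.0  # free-stream speed and cylinder radius (demo values)

def velocity(x, y):
    """Uniform flow + doublet of strength U*R^2: flow past a cylinder.

    phi = U*x + U*R**2 * x / (x**2 + y**2); velocity is the gradient of phi.
    """
    r2 = x**2 + y**2
    u = U + U * R**2 * (y**2 - x**2) / r2**2
    v = -U * R**2 * 2 * x * y / r2**2
    return u, v

# Check the impermeability condition v . n = 0 on the circle r = R.
theta = np.linspace(0, 2 * np.pi, 13)
x, y = R * np.cos(theta), R * np.sin(theta)
u, v = velocity(x, y)
v_radial = u * np.cos(theta) + v * np.sin(theta)
print(np.max(np.abs(v_radial)))  # ~0: streamline r = R acts as a solid wall
```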
These flows correspond closely to real-life flows over the whole of fluid mechanics; in addition, many valuable insights arise when considering the deviation (often slight) between an observed flow and the corresponding potential flow. Potential flow finds many applications in fields such as aircraft design. For instance, in computational fluid dynamics, one technique is to couple a potential flow solution outside the boundary layer to a solution of the boundary layer equations inside the boundary layer. The absence of boundary layer effects means that any streamline can be replaced by a solid boundary with no change in the flow field, a technique used in many aerodynamic design approaches. Another technique would be the use of Riabouchinsky solids. Analysis for two-dimensional incompressible flow Potential flow in two dimensions is simple to analyze using conformal mapping, by the use of transformations of the complex plane. However, use of complex numbers is not required, as for example in the classical analysis of fluid flow past a cylinder. It is not possible to solve a potential flow using complex numbers in three dimensions. The basic idea is to use a holomorphic (also called analytic) or meromorphic function f, which maps the physical domain (x, y) to the transformed domain (φ, ψ). While x, y, φ and ψ are all real valued, it is convenient to define the complex quantities z = x + iy and w = φ + iψ. Now, if we write the mapping as w = f(z), then, because f is a holomorphic or meromorphic function, it has to satisfy the Cauchy–Riemann equations ∂φ/∂x = ∂ψ/∂y and ∂φ/∂y = −∂ψ/∂x. The velocity components (u, v), in the (x, y) directions respectively, can be obtained directly from f by differentiating with respect to z. That is df/dz = u − iv. So the velocity field is specified by u = ∂φ/∂x = ∂ψ/∂y and v = ∂φ/∂y = −∂ψ/∂x. Both φ and ψ then satisfy Laplace's equation: ∇²φ = 0 and ∇²ψ = 0. So φ can be identified as the velocity potential and ψ is called the stream function. Lines of constant ψ are known as streamlines and lines of constant φ are known as equipotential lines (see equipotential surface). Streamlines and equipotential lines are orthogonal to each other, since ∇φ · ∇ψ = 0. Thus the flow occurs along the lines of constant ψ and at right angles to the lines of constant φ. ∇²ψ = 0 is also satisfied, this relation being equivalent to ∇ × v = 0. So the flow is irrotational. The automatic condition ∂²ψ/∂x∂y = ∂²ψ/∂y∂x then gives the incompressibility constraint ∇ · v = 0. Examples of two-dimensional incompressible flows Any differentiable function may be used for f. The examples that follow use a variety of elementary functions; special functions may also be used. Note that multi-valued functions such as the natural logarithm may be used, but attention must be confined to a single Riemann surface. Power laws In case the following power-law conformal map is applied, from z = x + iy to w = φ + iψ: w = Azⁿ, then, writing z in polar coordinates as z = x + iy = re^(iθ), we have φ = Arⁿ cos nθ and ψ = Arⁿ sin nθ. In the figures to the right examples are given for several values of n. The black line is the boundary of the flow, while the darker blue lines are streamlines, and the lighter blue lines are equi-potential lines. Some interesting powers n are: n = 1/2: this corresponds with flow around a semi-infinite plate, n = 2/3: flow around a right corner, n = 1: a trivial case of uniform flow, n = 2: flow through a corner, or near a stagnation point, and n = −1: flow due to a source doublet. The constant A is a scaling parameter: its absolute value |A| determines the scale, while its argument arg(A) introduces a rotation (if non-zero). Power laws with n = 1: uniform flow If w = Az¹, that is, a power law with n = 1, the streamlines (i.e. lines of constant ψ) are a system of straight lines parallel to the x-axis.
This is easiest to see by writing in terms of real and imaginary components: f = A(x + iy) = Ax + iAy, thus giving φ = Ax and ψ = Ay. This flow may be interpreted as uniform flow parallel to the x-axis. Power laws with n = 2 If n = 2, then w = Az² and the streamlines corresponding to a particular value of ψ are those points satisfying ψ = Ar² sin 2θ, which is a system of rectangular hyperbolae. This may be seen by again rewriting in terms of real and imaginary components. Noting that sin 2θ = 2 sin θ cos θ and rewriting sin θ = y/r and cos θ = x/r, it is seen (on simplifying) that the streamlines are given by ψ = 2Axy. The velocity field is given by ∇φ, or (u, v) = (2Ax, −2Ay). In fluid dynamics, the flowfield near the origin corresponds to a stagnation point. Note that the fluid at the origin is at rest (this follows on differentiation of f(z) = z² at z = 0). The streamline ψ = 0 is particularly interesting: it has two (or four) branches, following the coordinate axes, i.e. x = 0 and y = 0. As no fluid flows across the x-axis, it (the x-axis) may be treated as a solid boundary. It is thus possible to ignore the flow in the lower half-plane, where y < 0, and to focus on the flow in the upper half-plane. With this interpretation, the flow is that of a vertically directed jet impinging on a horizontal flat plate. The flow may also be interpreted as flow into a 90 degree corner if the regions specified by (say) x < 0 and y < 0 are ignored. Power laws with n = 3 If n = 3, the resulting flow is a sort of hexagonal version of the n = 2 case considered above. Streamlines are given by ψ = A(3x²y − y³), and the flow in this case may be interpreted as flow into a 60° corner. Power laws with n = −1: doublet If n = −1, the streamlines are given by ψ = −(A/r) sin θ. This is more easily interpreted in terms of real and imaginary components: ψ = −Ay/(x² + y²), so that x² + (y + A/(2ψ))² = (A/(2ψ))². Thus the streamlines are circles that are tangent to the x-axis at the origin. The circles in the upper half-plane thus flow clockwise, those in the lower half-plane flow anticlockwise. Note that the velocity components are proportional to 1/r², and their values at the origin are infinite. This flow pattern is usually referred to as a doublet, or dipole, and can be interpreted as the combination of a source-sink pair of infinite strength kept an infinitesimally small distance apart. The velocity field is given by (u, v) = (A(y² − x²)/(x² + y²)², −2Axy/(x² + y²)²), or in polar coordinates: (vr, vθ) = (−(A/r²) cos θ, −(A/r²) sin θ). Power laws with n = −2: quadrupole If n = −2, the streamlines are given by ψ = −(A/r²) sin 2θ. This is the flow field associated with a quadrupole. Line source and sink A line source or sink of strength m (m > 0 for source and m < 0 for sink) is given by the potential w = (m/2π) ln z, where m in fact is the volume flux per unit length across a surface enclosing the source or sink. The velocity field in polar coordinates is (vr, vθ) = (m/(2πr), 0), i.e., a purely radial flow. Line vortex A line vortex of strength Γ is given by w = (Γ/2πi) ln z, where Γ is the circulation around any simple closed contour enclosing the vortex. The velocity field in polar coordinates is (vr, vθ) = (0, Γ/(2πr)), i.e., a purely azimuthal flow. Analysis for three-dimensional incompressible flows For three-dimensional flows, a complex potential cannot be obtained. Point source and sink The velocity potential of a point source or sink of strength m (m > 0 for source and m < 0 for sink) in spherical polar coordinates is given by φ = −m/(4πr), where m in fact is the volume flux across a closed surface enclosing the source or sink. The velocity field in spherical polar coordinates is (vr, vθ, vφ) = (m/(4πr²), 0, 0). See also Potential flow around a circular cylinder Aerodynamic potential-flow code Conformal mapping Darwin drift Flownet Laplacian field Laplace equation for irrotational flow Potential theory Stream function Velocity potential Helmholtz decomposition Notes References Further reading External links — Java applets for exploring conformal maps Potential Flow Visualizations - Interactive WebApps Fluid dynamics
Potential flow
Chemistry,Engineering
3,560
65,743,208
https://en.wikipedia.org/wiki/Diffusion%20Inhibitor
The Diffusion Inhibitor is the first known attempt to build a working fusion power device. It was designed and built at the National Advisory Committee for Aeronautics' (NACA) Langley Memorial Aeronautical Laboratory beginning in the spring of 1938. The basic concept was developed by Arthur Kantrowitz and his boss, Eastman Jacobs. They deliberately picked a misleading name to avoid the project being detected by NACA's headquarters in Washington, D.C., as they believed it would immediately be cancelled if their superiors learned of it. In overall terms, the device was very similar to the toroidal magnetic confinement fusion reactor designs that emerged in the 1950s and 60s, with a strong physical resemblance to the z-pinch and tokamak devices. The major difference was that it used radio waves to heat the plasma while using the magnetic field for confinement alone, not compression. After several early experiments which showed no sign of high-energy releases, NACA director George William Lewis happened into the lab and immediately shut it down. History In 1936, Arthur Kantrowitz, a recent physics graduate from Columbia University, joined NACA's Langley Memorial Aeronautical Laboratory. In early 1938 he read an article that noted Westinghouse had recently purchased a Van de Graaff generator and concluded the company was beginning research into nuclear power, following the footsteps of Mark Oliphant who demonstrated fusion of hydrogen isotopes in 1932 using a particle accelerator. His direct supervisor, Eastman Jacobs, also expressed an interest in the concept when Kantrowitz showed him the article. Kantrowitz began canvassing the literature and came across Hans Bethe's paper in Reviews of Modern Physics about the known types of nuclear reactions and Bethe's speculations on the ones taking place in stars, work that would lead to the Nobel Prize in Physics. This led Kantrowitz to consider the concept of heating hydrogen to the temperatures seen inside stars, with the expectation that one could build a fusion reactor. The easiest reaction in the list was deuterium-deuterium, but having only been discovered in 1932, the supply of deuterium was extremely limited. A pure hydrogen-hydrogen reaction was selected instead, although this would require much higher temperatures to work. Kantrowitz's idea was to use radio frequency signals to heat a plasma, in the same way that a microwave oven uses radio signals to heat food. The system did not have to use microwave frequencies, however, as the charged particles in a plasma will efficiently absorb a wide range of frequencies. This allowed Kantrowitz to use a conventional radio transmitter as the source, building a 150 W oscillator for the purpose. In order to produce any detectable level of fusion reactions, the system would have to heat the plasma to about 10 million degrees Celsius, a temperature that would melt any physical container. At these temperatures, even the atoms of the fuel itself break up into a fluid of separate nuclei and electrons, a state known as a plasma. Kantrowitz concluded the simplest solution was to use magnetic fields to confine the plasma because plasmas are electrically charged so their movement can be controlled by magnetic fields. When placed within a magnetic field, the electrons and protons of a hydrogen plasma will orbit around the magnetic lines of force. 
This means that if the plasma were within a solenoid, the field would keep the particles confined away from the walls but they would be free to travel along the lines and out the ends of the solenoid. At fusion temperatures, the particles are moving at the equivalent of thousands of miles per hour, so this would happen almost instantly. Kantrowitz came to the conclusion that many others did: The simple solution is to bend the solenoid around into a circle so the particles would flow around the resulting ring-shaped toroidal enclosure. Jacobs approached the lab's director, George W. Lewis, to arrange a small amount of funding, explaining that such a system might one day be used for aircraft propulsion. To disguise the actual purpose from NACA leadership, they called it the "Diffusion Inhibitor". Lewis agreed to provide $5,000 (). The torus was wound with copper magnet cables which were cooled by water, and for a power source, they connected it to the motor circuits of a wind tunnel Jacobs had built. The idea was to measure the resulting fusion reactions by their X-rays, which are emitted from very hot objects. Because the city's power supply was limited, the wind tunnel was only allowed to operate late at night or early morning and for no more than half an hour at maximum power. Using film developed for taking dental x-rays as their detector, the two fired up the machine but found no signal. Believing the problem was that the radio oscillator didn't have enough power, they tried again while manually holding in the circuit breakers to supply more current. Again, nothing appeared on the film. They concluded that something was causing the plasma to be lost from the center of the reactor, but did not have an obvious solution. No further experiments were carried out. Shortly after the first runs, Lewis visited the lab, listened to Jacobs's explanation of the system, and immediately shut it down. It would later be understood that the simple torus design does not correctly confine a plasma. When a solenoid is bent around into a circle, the magnets ringing the container end up being spread apart from each other on the outside circumference. That results in the field being weaker on the outside of the container than the inside. This asymmetry causes the plasma to drift away from the center, eventually hitting the walls. References Citations Bibliography Fusion power Fusion reactors
Diffusion Inhibitor
Physics,Chemistry
1,148
77,899,535
https://en.wikipedia.org/wiki/Diasoma
Diasoma is a proposed clade of mollusks uniting the classes Scaphopoda and Bivalvia. Whether scaphopods and bivalves are each other's closest living relatives among mollusks is disputed, leaving the monophyly of Diasoma in doubt. Diasoma was originally proposed on morphological grounds by Bruce Runnegar and John Pojeta Jr., in 1974. The name means "through-body", referring to the relatively straight gut with a mouth at the anterior end and anus at the posterior end, contrasting with gastropods and cephalopods, in which the gut is more curved and the mouth and anus are usually much closer together. The grouping was accepted by many studies in the 1980s and 1990s, but a phylogenetic analysis of 18s rDNA conducted by Gerhard Steiner and Hermann Dreyer in 2003 found scaphopods to be more closely related to cephalopods than bivalves. A 2020 phylogenetic analysis by Kevin Kocot and colleagues found scaphopods to be more closely related to gastropods than bivalves. However, a molecular phylogenetic analysis published by Hao Song and colleagues in 2023 supports the monophyly of Diasoma. The extinct rostroconchs, a group possibly ancestral to the Scaphopoda, are also considered to belong to Diasoma. Song and colleagues inferred that Diasoma originated approximately 520 million years ago, during the Cambrian period, and considered the earlier fossil genera Anabarella, Watsonella, and Mellopegma to be members of the diasome stem group. References Controversial taxa Mollusc taxonomy
Diasoma
Biology
336
51,654
https://en.wikipedia.org/wiki/Soliton
In mathematics and physics, a soliton is a nonlinear, self-reinforcing, localized wave packet that is strongly stable, in that it preserves its shape while propagating freely, at constant velocity, and recovers it even after collisions with other such localized wave packets. Its remarkable stability can be traced to a balanced cancellation of nonlinear and dispersive effects in the medium. (Dispersive effects are a property of certain systems where the speed of a wave depends on its frequency.) Solitons were subsequently found to provide stable solutions of a wide class of weakly nonlinear dispersive partial differential equations describing physical systems. The soliton phenomenon was first described in 1834 by John Scott Russell who observed a solitary wave in the Union Canal in Scotland. He reproduced the phenomenon in a wave tank and named it the "Wave of Translation". The Korteweg–de Vries equation was later formulated to model such waves, and the term soliton was coined by Zabusky and Kruskal to describe localized, strongly stable propagating solutions to this equation. The name was meant to characterize the solitary nature of the waves, with the 'on' suffix recalling the usage for particles such as electrons, baryons or hadrons, reflecting their observed particle-like behaviour. Definition A single, consensus definition of a soliton is difficult to find. Drazin and Johnson ascribe three properties to solitons: They are of permanent form; They are localized within a region; They can interact with other solitons, and emerge from the collision unchanged, except for a phase shift. More formal definitions exist, but they require substantial mathematics. Moreover, some scientists use the term soliton for phenomena that do not quite have these three properties (for instance, the 'light bullets' of nonlinear optics are often called solitons despite losing energy during interaction). Explanation Dispersion and nonlinearity can interact to produce permanent and localized wave forms. Consider a pulse of light traveling in glass. This pulse can be thought of as consisting of light of several different frequencies. Since glass shows dispersion, these different frequencies travel at different speeds and the shape of the pulse therefore changes over time. However, the nonlinear Kerr effect also occurs: the refractive index of a material at a given frequency depends on the light's amplitude or strength. If the pulse has just the right shape, the Kerr effect exactly cancels the dispersion effect and the pulse's shape does not change over time. Thus, the pulse is a soliton. See soliton (optics) for a more detailed description. Many exactly solvable models have soliton solutions, including the Korteweg–de Vries equation, the nonlinear Schrödinger equation, the coupled nonlinear Schrödinger equation, and the sine-Gordon equation. The soliton solutions are typically obtained by means of the inverse scattering transform, and owe their stability to the integrability of the field equations. The mathematical theory of these equations is a broad and very active field of mathematical research. Some types of tidal bore, a wave phenomenon of a few rivers including the River Severn, are 'undular': a wavefront followed by a train of solitons. Other solitons occur as the undersea internal waves, initiated by seabed topography, that propagate on the oceanic pycnocline.
Atmospheric solitons also exist, such as the morning glory cloud of the Gulf of Carpentaria, where pressure solitons traveling in a temperature inversion layer produce vast linear roll clouds. The recent and not widely accepted soliton model in neuroscience proposes to explain the signal conduction within neurons as pressure solitons. A topological soliton, also called a topological defect, is any solution of a set of partial differential equations that is stable against decay to the "trivial solution". Soliton stability is due to topological constraints, rather than integrability of the field equations. The constraints arise almost always because the differential equations must obey a set of boundary conditions, and the boundary has a nontrivial homotopy group, preserved by the differential equations. Thus, the differential equation solutions can be classified into homotopy classes. No continuous transformation maps a solution in one homotopy class to another. The solutions are truly distinct, and maintain their integrity, even in the face of extremely powerful forces. Examples of topological solitons include the screw dislocation in a crystalline lattice, the Dirac string and the magnetic monopole in electromagnetism, the Skyrmion and the Wess–Zumino–Witten model in quantum field theory, the magnetic skyrmion in condensed matter physics, and cosmic strings and domain walls in cosmology. History In 1834, John Scott Russell described his wave of translation: Scott Russell spent some time making practical and theoretical investigations of these waves. He built wave tanks at his home and noticed some key properties: The waves are stable, and can travel over very large distances (normal waves would tend to either flatten out, or steepen and topple over) The speed depends on the size of the wave, and its width on the depth of water. Unlike normal waves they will never merge – so a small wave is overtaken by a large one, rather than the two combining. If a wave is too big for the depth of water, it splits into two, one big and one small. Scott Russell's experimental work seemed at odds with Isaac Newton's and Daniel Bernoulli's theories of hydrodynamics. George Biddell Airy and George Gabriel Stokes had difficulty accepting Scott Russell's experimental observations because they could not be explained by the existing water wave theories. Additional observations were reported by Henry Bazin in 1862 after experiments carried out in the canal de Bourgogne in France. Their contemporaries spent some time attempting to extend the theory but it would take until the 1870s before Joseph Boussinesq and Lord Rayleigh published a theoretical treatment and solutions. In 1895 Diederik Korteweg and Gustav de Vries provided what is now known as the Korteweg–de Vries equation, including solitary wave and periodic cnoidal wave solutions. In 1965 Norman Zabusky of Bell Labs and Martin Kruskal of Princeton University first demonstrated soliton behavior in media subject to the Korteweg–de Vries equation (KdV equation) in a computational investigation using a finite difference approach. They also showed how this behavior explained the puzzling earlier work of Fermi, Pasta, Ulam, and Tsingou. In 1967, Gardner, Greene, Kruskal and Miura discovered an inverse scattering transform enabling analytical solution of the KdV equation. The work of Peter Lax on Lax pairs and the Lax equation has since extended this to solution of many related soliton-generating systems. 
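As an added illustration of the KdV behaviour described above (a sketch of mine, not part of the original article), the canonical one-soliton profile of the KdV equation ut + 6uux + uxxx = 0 can be verified symbolically; the speed parameter c also sets the amplitude, echoing Russell's observation that bigger waves travel faster:

```python
# Added illustration: check that the standard one-soliton profile solves
# the KdV equation u_t + 6*u*u_x + u_xxx = 0 (a common normalization of
# the equation studied by Zabusky and Kruskal, up to rescaling).
import sympy as sp

x, t, c, a = sp.symbols('x t c a', positive=True)
u = c / 2 * sp.sech(sp.sqrt(c) / 2 * (x - c * t - a))**2  # one-soliton profile

residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(residual.rewrite(sp.exp)))  # prints 0
```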
Solitons are, by definition, unaltered in shape and speed by a collision with other solitons. So solitary waves on a water surface are near-solitons, but not exactly – after the interaction of two (colliding or overtaking) solitary waves, they have changed a bit in amplitude and an oscillatory residual is left behind. Solitons are also studied in quantum mechanics, thanks to the fact that they could provide a new foundation of it through de Broglie's unfinished program, known as "Double solution theory" or "Nonlinear wave mechanics". This theory, developed by de Broglie in 1927 and revived in the 1950s, is the natural continuation of his ideas developed between 1923 and 1926, which extended the wave–particle duality introduced by Albert Einstein for the light quanta, to all the particles of matter. The observation of accelerating surface gravity water wave soliton using an external hydrodynamic linear potential was demonstrated in 2019. This experiment also demonstrated the ability to excite and measure the phases of ballistic solitons. In fiber optics Much experimentation has been done using solitons in fiber optics applications. Solitons in a fiber optic system are described by the Manakov equations. Solitons' inherent stability make long-distance transmission possible without the use of repeaters, and could potentially double transmission capacity as well. In biology Solitons may occur in proteins and DNA. Solitons are related to the low-frequency collective motion in proteins and DNA. A recently developed model in neuroscience proposes that signals, in the form of density waves, are conducted within neurons in the form of solitons. Solitons can be described as almost lossless energy transfer in biomolecular chains or lattices as wave-like propagations of coupled conformational and electronic disturbances. In material physics Solitons can occur in materials, such as ferroelectrics, in the form of domain walls. Ferroelectric materials exhibit spontaneous polarization, or electric dipoles, which are coupled to configurations of the material structure. Domains of oppositely poled polarizations can be present within a single material as the structural configurations corresponding to opposing polarizations are equally favorable with no presence of external forces. The domain boundaries, or “walls”, that separate these local structural configurations are regions of lattice dislocations. The domain walls can propagate as the polarizations, and thus, the local structural configurations can switch within a domain with applied forces such as electric bias or mechanical stress. Consequently, the domain walls can be described as solitons, discrete regions of dislocations that are able to slip or propagate and maintain their shape in width and length.   In recent literature, ferroelectricity has been observed in twisted bilayers of van der Waal materials such as molybdenum disulfide and graphene. The moiré superlattice that arises from the relative twist angle between the van der Waal monolayers generates regions of different stacking orders of the atoms within the layers. These regions exhibit inversion symmetry breaking structural configurations that enable ferroelectricity at the interface of these monolayers. The domain walls that separate these regions are composed of partial dislocations where different types of stresses, and thus, strains are experienced by the lattice. 
It has been observed that soliton or domain wall propagation across a moderate length of the sample (order of nanometers to micrometers) can be initiated with applied stress from an AFM tip on a fixed region. The soliton propagation carries the mechanical perturbation with little loss in energy across the material, which enables domain switching in a domino-like fashion. It has also been observed that the type of dislocations found at the walls can affect propagation parameters such as direction. For instance, STM measurements showed four types of strains of varying degrees of shear, compression, and tension at domain walls depending on the type of localized stacking order in twisted bilayer graphene. Different slip directions of the walls are achieved with different types of strains found at the domains, influencing the direction of the soliton network propagation. Nonidealities such as disruptions to the soliton network and surface impurities can influence soliton propagation as well. Domain walls can meet at nodes and get effectively pinned, forming triangular domains, which have been readily observed in various ferroelectric twisted bilayer systems. In addition, closed loops of domain walls enclosing multiple polarization domains can inhibit soliton propagation and thus, switching of polarizations across it. Also, domain walls can propagate and meet at wrinkles and surface inhomogeneities within the van der Waal layers, which can act as obstacles obstructing the propagation. In magnets In magnets, there also exist different types of solitons and other nonlinear waves. These magnetic solitons are an exact solution of classical nonlinear differential equations — magnetic equations, e.g. the Landau–Lifshitz equation, continuum Heisenberg model, Ishimori equation, nonlinear Schrödinger equation and others. In nuclear physics Atomic nuclei may exhibit solitonic behavior. Here the whole nuclear wave function is predicted to exist as a soliton under certain conditions of temperature and energy. Such conditions are suggested to exist in the cores of some stars in which the nuclei would not react but pass through each other unchanged, retaining their soliton waves through a collision between nuclei. The Skyrme Model is a model of nuclei in which each nucleus is considered to be a topologically stable soliton solution of a field theory with conserved baryon number. Bions The bound state of two solitons is known as a bion, or in systems where the bound state periodically oscillates, a breather. The interference-type forces between solitons could be used in making bions. However, these forces are very sensitive to their relative phases. Alternatively, the bound state of solitons could be formed by dressing atoms with highly excited Rydberg levels. The resulting self-generated potential profile features an inner attractive soft-core supporting the 3D self-trapped soliton, an intermediate repulsive shell (barrier) preventing solitons’ fusion, and an outer attractive layer (well) used for completing the bound state resulting in giant stable soliton molecules. In this scheme, the distance and size of the individual solitons in the molecule can be controlled dynamically with the laser adjustment. In field theory bion usually refers to the solution of the Born–Infeld model. The name appears to have been coined by G. W. 
Gibbons in order to distinguish this solution from the conventional soliton, understood as a regular, finite-energy (and usually stable) solution of a differential equation describing some physical system. The word regular means a smooth solution carrying no sources at all. However, the solution of the Born–Infeld model still carries a source in the form of a Dirac-delta function at the origin. As a consequence it displays a singularity in this point (although the electric field is everywhere regular). In some physical contexts (for instance string theory) this feature can be important, which motivated the introduction of a special name for this class of solitons. On the other hand, when gravity is added (i.e. when considering the coupling of the Born–Infeld model to general relativity) the corresponding solution is called EBIon, where "E" stands for Einstein. Alcubierre drive Erik Lentz, a physicist at the University of Göttingen, has theorized that solitons could allow for the generation of Alcubierre warp bubbles in spacetime without the need for exotic matter, i.e., matter with negative mass. See also Compacton, a soliton with compact support Dissipative soliton Freak waves may be a Peregrine soliton related phenomenon involving breather waves which exhibit concentrated localized energy with non-linear properties. Instantons Nematicons Non-topological soliton, in quantum field theory Nonlinear Schrödinger equation Oscillons Pattern formation Peakon, a soliton with a non-differentiable peak Q-ball a non-topological soliton Sine-Gordon equation Soliton (optics) Soliton (topological) Soliton distribution Soliton hypothesis for ball lightning, by David Finkelstein Soliton model of nerve impulse propagation Topological quantum number Vector soliton Notes References Further reading External links Related to John Scott Russell John Scott Russell and the solitary wave John Scott Russell biography Photograph of soliton on the Scott Russell Aqueduct Other Heriot–Watt University soliton page Helmholtz solitons, Salford University Short didactic review on optical solitons 1834 introductions 1834 in science Fluid dynamics Integrable systems Partial differential equations Quasiparticles Wave mechanics
Soliton
Physics,Chemistry,Materials_science,Engineering
3,246
42,160,128
https://en.wikipedia.org/wiki/CFAP206
Cilia and Flagella Associated Protein 206 (CFAP206) is a gene that in humans encodes a protein termed "DUF3508". The function of this protein is not currently well understood. Other known aliases are "dJ382I10.1" and "UPF0704 Protein C6orf165". In humans, the gene coding sequence is 56,501 base pairs long, with an mRNA of 2,215 base pairs, and a protein sequence of 622 amino acids. The C6orf165 gene is conserved in chimpanzee, rhesus monkey, dog, cow, mouse, rat, chicken, zebrafish, mosquito, frog, and others. C6orf165 is rarely expressed in humans, with relatively high expression in the brain, lungs (trachea) and testis. The molecular weight of UPF0704 is 71,193 Da and the pI is 6.38. Gene Locus The CFAP206 gene is located on chromosome 6, from 88,119,558 to 88,173,965 (6q15). It contains 12 exons. The genomic DNA is 54,407 base pairs long, while the longest mRNA that it produces is 2,215 bp long. Homology and Evolution Orthologs This protein is well conserved across a series of distantly related organisms including mammals, birds, amphibians, tunicates, bony fish, lancelets, insects, and sea urchins. Paralogs C6orf165 has no known paralogs. Phylogeny A rooted phylogenetic tree of these orthologs has been constructed. Protein The protein that is produced by the C6orf165 gene is termed DUF3508 and is 622 amino acids long. The protein has a predicted molecular weight of 71.20 kDa and an isoelectric point of 6.38. Domains The C6orf165 gene protein product contains a well-conserved domain, DUF3508. This presumed domain is functionally uncharacterized. It is found in eukaryotes and is about 280 amino acids in length. Motifs This domain has two conserved sequence motifs: GFC and GLL. Post-translational modifications Phosphorylation is the only post-translational modification predicted for this protein by the tools in the post-translational modification category on expasy.org. Three phosphorylation sites are predicted with scores over 0.8: phosphorylation of Ser 176, Thr 232 and Ser 310 is annotated on the conceptual translation. Secondary structure The consensus of the prediction software PELE is that the UPF0704 protein is dominated by alpha helices with interspersed regions of random coil. PSORT II analysis predicts a coiled-coil region from residues 88 to 117, with sequence MNYTNRVEFLEEHHRVLESRLGSVTREITD. Location PSORT II analysis trained on yeast data predicts that the subcellular location of this protein is most likely the cytoplasm (56%). Less likely possibilities are the mitochondria (21%), the nucleus (17%) or vacuoles (4%). Gene expression Gene expression data According to the UniGene EST profile, expression of this gene in humans is weak; the ratio of gene ESTs to total ESTs in the pool is below 0.01%. This low-level expression occurs in the brain, connective tissue, kidney, lungs, parathyroid, pharynx, placenta, testis and trachea. In mouse, the expression of C6orf165 is even lower; the gene is expressed in only two body parts, the ovary and testis. In chicken, weak expression is seen in two body parts, the brain and testis. In zebrafish, gene expression is still low, with very weak expression in the eye, kidney and reproductive system. In the sea squirt, expression is seen in the gonad, heart and neural complex. In summary, expression of C6orf165 is conserved in the testis across species and partially conserved in the brain or neural complex. Promoter The promoter region for human c6orf165 is identified by ElDorado (at Genomatix). In addition, the start codon is located in the second exon of the mRNA, indicating that the first exon is non-coding and is spliced out of the translated region. The sequence-derived protein parameters quoted above can be computed as sketched below.
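A minimal sketch of such a computation with Biopython's ProtParam module, assuming Biopython is installed; the 30-residue coiled-coil segment quoted above is used purely as demonstration input, so the printed values will not match the full-protein figures (the full 622-residue sequence would be needed to reproduce 71,193 Da and pI 6.38):

```python
# Illustrative sketch (not from the article): molecular weight and
# isoelectric point from a protein sequence, here the coiled-coil
# segment given above rather than the full 622-residue protein.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

segment = "MNYTNRVEFLEEHHRVLESRLGSVTREITD"
analysis = ProteinAnalysis(segment)
print(f"MW: {analysis.molecular_weight():.0f} Da")
print(f"pI: {analysis.isoelectric_point():.2f}")
```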
Transcript variants In humans, the c6orf165 gene produces 4 different transcripts, 2 of which form a protein product (of the other two, one undergoes nonsense-mediated decay and the other retains an intron). The main transcript in humans is transcript ID ENST00000369562, or C6ORF165-001; it has 13 exons, 12 of which are coding, and the translation is 622 residues long. The second protein-coding transcript in humans is transcript ID ENST00000480123, or C6ORF165-002; it contains 7 exons, only 6 of which are protein-coding, and the translation is 252 residues long. Interactions Two-hybrid experiments revealed interacting proteins such as Myogenic repressor I-mf. This repressor is highly expressed in sclerotome. It inhibits the transactivation activity of the MyoD family and represses myogenesis. Protein complex co-immunoprecipitation (Co-IP) experiments revealed the interacting protein NRF1 (nuclear respiratory factor 1). This gene encodes a protein that homodimerizes and functions as a transcription factor which activates the expression of some key metabolic genes regulating cellular growth and nuclear genes required for respiration, heme biosynthesis, and mitochondrial DNA transcription and replication. Two-hybrid experiments also revealed the interacting protein RNF138 (ring finger protein 138), an E3 ubiquitin protein ligase. Affinity capture–western experiments revealed an interacting protein called TP73 (tumor protein p73), which is a protein related to the p53 tumor protein. Clinical significance C6orf165 has no currently known disease associations or mutations. References Uncharacterized proteins
CFAP206
Biology
1,280
744,084
https://en.wikipedia.org/wiki/French%20aircraft%20carrier%20PA2
PA2 (, "Aircraft Carrier 2") was a planned aircraft carrier under development by Thales Naval France and DCNS for the French Navy. The design was based on the aircraft carriers developed for the Royal Navy. The project was cancelled in the 2013 French White Paper on Defence and National Security. Background The previous French carriers, and , were completed in 1961 and 1963 respectively. The requirement for a replacement was identified in the mid-1970s, which became the 40,600 tonne nuclear-powered , laid down in April 1989 at the DCNS Brest naval shipyard. This carrier was launched in May 1994, but not officially commissioned until 2001 due to a large number of problems, which included the need to lengthen the flight deck after aircraft trials, a broken propeller and vibration and noise problems. The French Navy was understood to be unwilling to proceed with another carrier of the same design and by 2003 the possibility of sharing the Royal Navy design emerged to fulfill the French requirement for a second carrier. The requirement for the carriers was confirmed by Jacques Chirac in 2004 for the centennial of the Entente Cordiale and on 26 January 2006 the defence ministers of France and Britain reached an agreement regarding cooperation on the design of their future carriers. France agreed to pay the UK for access to the design due to the investment made to date. These payments were £30 million in January 2006, £25 million in July 2006 and a further £45 million if France decides to proceed with the project. The FY2008 French defence budget included the necessary funding, €3 billion, for the ship. However, in April 2008 French Defence Minister Herve Morin cast doubt over plans for a second aircraft carrier, citing a cash crunch and the fact that rising oil prices put the question of the propulsion back on the table, and said a decision would be taken soon. Further doubts were cast on the project on 21 June 2008 when then French President Nicolas Sarkozy decided to suspend co-operation with Britain on the aircraft carrier. Sarkozy stated that a final decision on France building a second carrier would be taken by 2012. British plans for two aircraft carriers went ahead as planned despite the French withdrawal, as the original project had in any case been a British one and not dependent on French involvement. On 3 February 2009, the French government ordered studies about another architecture and design casting even more doubt on the likelihood of the French Navy using the current British design. The option of nuclear propulsion was also put back on the table, and if selected would have required a completely different approach. An option similar to the Azimuth thruster used on the ships was also considered. Design considerations The French carrier would have been built by an alliance of Thales and DCNS using the proposed design of a long, 75,000 tonne variant of the Queen Elizabeth class. While the UK had chosen to continue to use STOVL configuration for its new carriers, the design could also be reconfigured to a CATOBAR configuration for French requirements. The French variant would have most likely operated the Dassault Rafale, the E-2C Hawkeye and the NH-90 fixed-wing and rotary aircraft. Being a CATOBAR design, PA2 would have been equipped with the same long C13-2 steam catapults as those installed on the aircraft carriers of the United States Navy. 
The crew of PA2 was expected to be about 1,650, a significant decrease from the 1,950 crew of Charles de Gaulle, indicating the high level of automation integrated into the ship's systems. The ship would have had two islands: one devoted to ship navigation, and the other to air operations. Allowing optimal placement of bridges for both tasks; navigation calls for a bridge placed forward (as on Charles De Gaulle), while air operations are made easier with a bridge placed aft (as seen on the US Nimitz class). The original design had to meet the Royal Navy's requirements, so nuclear propulsion was not an option: the British government rejected nuclear propulsion as too costly. Before cancellation the carrier's propulsion system was expected to be an integrated full electric propulsion (IFEP) based on two Rolls-Royce MT30 gas turbines. The carrier would have had a range of approximately . Construction considerations The hull was likely to be built by Chantiers de l'Atlantique at Saint Nazaire, and fitted out by DCN at Brest. The ship was likely to be based at Toulon naval base whose two dry docks can accommodate even the larger Nimitz-class aircraft carriers. Name considerations At the time, it had been proposed to name the aircraft carrier Richelieu, after Cardinal Richelieu, which was the name originally intended for Charles de Gaulle. See also Future French aircraft carrier References Aircraft carriers of France Proposed aircraft carriers Cancelled aircraft carriers
French aircraft carrier PA2
Engineering
979
69,559,697
https://en.wikipedia.org/wiki/Bareiss%20Pr%C3%BCfger%C3%A4tebau%20GmbH
Bareiss (full name: Bareiss Prüfgerätebau GmbH) is a German materials testing company founded in 1954 by Heinrich Bareiss. The company specialises in material testing equipment and is headquartered in Oberdischingen, Germany. Testing instruments Bareiss manufactures durometers, automatic hardness testers, temperature-controlled hardness testers, density testers, ball rebound testers, rheology equipment such as rubber process analyzers, and an automated optical inspection system. Most of the product components are manufactured in-house. Bareiss was also the first DKD calibration laboratory (today: DAkkS laboratory) in Europe accredited under DIN EN ISO 17025 for hardness calibration, covering Shore hardness to DIN ISO 48-4 and DIN ISO 48-2. History Bareiss was founded in 1954 by Heinrich Bareiss to produce mechanical hardness testers. In 1961, its first product, the BS-61, was released. Brigitte Wirth and Peter Strobel took over the company in 1993. A few years later, in 1996, Strobel initiated the accreditation of Bareiss as an official DKD calibration laboratory. Katrin Shen and Oliver Wirth currently lead the company. Bareiss has a branch in Shanghai, China, which opened in 2012. In 2018, Bareiss USA was founded. Bareiss opened a third branch in Taiwan in 2020, and in 2021 Bareiss North America was founded in Toronto, Canada to manage North American operations. Bareiss was certified by the German Accreditation Body (formerly the German National Test Authority) to calibrate material testing machines and issue the corresponding DAkkS calibration certificates. References External links Materials testing
Bareiss Prüfgerätebau GmbH
Materials_science,Engineering
369
47,671
https://en.wikipedia.org/wiki/Toxic%20heavy%20metal
A toxic heavy metal is a common but misleading term for a metal-like element noted for its potential toxicity. Not all heavy metals are toxic and some toxic metals are not heavy. Elements often discussed as toxic include cadmium, mercury and lead, all of which appear in the World Health Organization's list of 10 chemicals of major public concern. Other examples include chromium, nickel, thallium, bismuth, arsenic, antimony and tin. These toxic elements are found naturally in the earth. They become concentrated as a result of human activities and can enter plant and animal (including human) tissues via inhalation, diet, and manual handling. They can then bind to and interfere with the functioning of vital cellular components. The toxic effects of arsenic, mercury, and lead were known to the ancients, but methodical studies of the toxicity of some heavy metals appear to date from only 1868. In humans, heavy metal poisoning is generally treated by the administration of chelating agents. Some elements otherwise regarded as toxic heavy metals are essential, in small quantities, for human health. Controversial terminology The International Union of Pure and Applied Chemistry (IUPAC), which standardizes nomenclature, says the term "heavy metals" is both "meaningless and misleading". The IUPAC report focuses on the legal and toxicological implications of describing "heavy metals" as toxins when there is no scientific evidence to support a connection. The density implied by the adjective "heavy" has almost no biological consequences, and pure metals are rarely the biologically active substance. This characterization has been echoed by numerous reviews. The most widely used toxicology textbook, Casarett and Doull's Toxicology, uses "toxic metal", not "heavy metal". Nevertheless, many scientific and science-related articles continue to use "heavy metal" as a term for toxic substances. Major and minor metal toxins Metals with multiple toxic effects include arsenic (As), beryllium (Be), cadmium (Cd), chromium (Cr), lead (Pb), mercury (Hg), and nickel (Ni). Elements that are nutritionally essential for animal or plant life but which are considered toxic metals in high doses or other forms include cobalt (Co), copper (Cu), iron (Fe), magnesium (Mg), manganese (Mn), molybdenum (Mo), selenium (Se), and zinc (Zn). Contamination sources Toxic metals are found naturally in the earth, and become concentrated as a result of human activities or, in some cases, geochemical processes, such as accumulation in peat soils that are then released when drained for agriculture. Common sources include fertilisers, aging water supply infrastructure, and microplastics floating in the world's oceans. Arsenic contamination is also thought to arise from its use in coloring dyes. Rat poison used in grain and mash stores may be another source of arsenic. The geographical extent of sources may be very large. For example, up to one-sixth of China's arable land might be affected by heavy metal contamination. Lead is the most prevalent heavy metal contaminant. As a component of tetraethyl lead, Pb(C2H5)4, it was used extensively in gasoline during the 1930s–1970s. Lead levels in the aquatic environments of industrialised societies have been estimated to be two to three times those of pre-industrial levels. Although the use of leaded gasoline was largely phased out in North America by 1996, soils next to roads built before this time retain high lead concentrations.
Lead (from lead(II) azide or lead styphnate used in firearms) gradually accumulates at firearms training grounds, contaminating the local environment and exposing range employees to a risk of lead poisoning. Entry routes Toxic metals enter plant, animal and human tissues via air inhalation, diet, and manual handling. Welding, galvanizing, brazing, and soldering expose workers to fumes that may be inhaled and result in metal fume fever. Motor vehicle emissions are a major source of airborne contaminants including arsenic, cadmium, cobalt, nickel, lead, antimony, vanadium, zinc, platinum, palladium and rhodium. Water sources (groundwater, lakes, streams and rivers) can be polluted by toxic metals leaching from industrial and consumer waste; acid rain can exacerbate this process by releasing toxic metals trapped in soils. Transport through soil can be facilitated by the presence of preferential flow paths (macropores) and dissolved organic compounds. Plants are exposed to toxic metals through the uptake of water; animals eat these plants; ingestion of plant- and animal-based foods is the largest source of toxic metals in humans. Absorption through skin contact, for example from contact with soil or metal-containing toys and jewelry, is another potential source of toxic metal contamination. Toxic metals can bioaccumulate in organisms, as they are hard to metabolize. Detrimental effects Toxic metals "can bind to vital cellular components, such as structural proteins, enzymes, and nucleic acids, and interfere with their functioning". Symptoms and effects can vary according to the metal or metal compound, and the dose involved. Broadly, long-term exposure to toxic heavy metals can have carcinogenic, central and peripheral nervous system, and circulatory effects. For humans, typical presentations associated with exposure to any of the "classical" toxic heavy metals, or chromium (another toxic heavy metal) or arsenic (a metalloid), are shown in the table. History The toxic effects of arsenic, mercury and lead were known to the ancients, but methodical studies of the overall toxicity of heavy metals appear to date from only 1868. In that year, Wanklyn and Chapman speculated on the adverse effects of the heavy metals "arsenic, lead, copper, zinc, iron and manganese" in drinking water. They noted an "absence of investigation" and were reduced to "the necessity of pleading for the collection of data". In 1884, Blake described an apparent connection between toxicity and the atomic weight of an element. The following sections provide historical thumbnails for the "classical" toxic heavy metals (arsenic, mercury and lead) and some more recent examples (chromium and cadmium). Arsenic Arsenic, as realgar (As4S4) and orpiment (As2S3), was known in ancient times. Strabo (c. 64 BCE – c. 24 CE), a Greek geographer and historian, wrote that only slaves were employed in realgar and orpiment mines, since they would inevitably die from the toxic effects of the fumes given off from the ores. Arsenic-contaminated beer poisoned over 6,000 people in the Manchester area of England in 1900, and is thought to have killed at least 70 victims. Clare Boothe Luce, American ambassador to Italy from 1953 to 1956, suffered from arsenic poisoning. Its source was traced to flaking arsenic-laden paint on the ceiling of her bedroom. She may also have eaten food contaminated by arsenic in flaking ceiling paint in the embassy dining room. Ground water contaminated by arsenic, as of 2014, "is still poisoning millions of people in Asia".
Mercury The first emperor of unified China, Qin Shi Huang, is reported to have died from ingesting mercury pills that were intended to give him eternal life. The phrase "mad as a hatter" is likely a reference to mercury poisoning among milliners (so-called "mad hatter disease"), as mercury-based compounds were once used in the manufacture of felt hats in the 18th and 19th centuries. Historically, gold amalgam (an alloy with mercury) was widely used in gilding, leading to numerous casualties among the workers. It is estimated that during the construction of Saint Isaac's Cathedral alone, 60 workers died from the gilding of the main dome. Outbreaks of methylmercury poisoning occurred in several places in Japan during the 1950s due to industrial discharges of mercury into rivers and coastal waters. The best-known instances were in Minamata and Niigata. In Minamata alone, more than 600 people died due to what became known as Minamata disease. More than 21,000 people filed claims with the Japanese government, of which almost 3,000 became certified as having the disease. In 22 documented cases, pregnant women who consumed contaminated fish showed mild or no symptoms but gave birth to infants with severe developmental disabilities. Since the Industrial Revolution, mercury levels have tripled in many near-surface seawaters, especially around Iceland and Antarctica. Lead The adverse effects of lead were known to the ancients. In the 2nd century BC, the Greek botanist Nicander described the colic and paralysis seen in lead-poisoned people. Dioscorides, a Greek physician who is thought to have lived in the 1st century CE, wrote that lead "makes the mind give way". Lead was used extensively in Roman aqueducts from about 500 BC to 300 AD. Julius Caesar's engineer, Vitruvius, reported, "water is much more wholesome from earthenware pipes than from lead pipes. For it seems to be made injurious by lead, because white lead is produced by it, and this is said to be harmful to the human body." During the Mongol period in China (1271–1368 AD), lead pollution from silver smelting in the Yunnan region exceeded contamination levels from modern mining activities by nearly four times. In the 17th and 18th centuries, people in Devon were afflicted by a condition referred to as Devon colic; this was discovered to be due to the imbibing of lead-contaminated cider. In 2013, the World Health Organization estimated that lead poisoning resulted in 143,000 deaths, and "contribute[d] to 600,000 new cases of children with intellectual disabilities", each year. In the U.S. city of Flint, Michigan, lead contamination in drinking water has been an issue since 2014. The source of the contamination has been attributed to "corrosion in the lead and iron pipes that distribute water to city residents". In 2015, the lead concentration of drinking water in north-eastern Tasmania, Australia, reached a level over 50 times the prescribed national drinking water guidelines. The source of the contamination was attributed to "a combination of dilapidated drinking water infrastructure, including lead jointed pipelines, end-of-life polyvinyl chloride pipes and household plumbing". Chromium Chromium(III) compounds and chromium metal are not considered a health hazard, while the toxicity and carcinogenic properties of chromium(VI) have been known since at least the late 19th century. In 1890, Newman described the elevated cancer risk of workers in a chromate dye company. Chromate-induced dermatitis was reported in aircraft workers during World War II.
In 1963, an outbreak of dermatitis, ranging from erythema to exudative eczema, occurred amongst 60 automobile factory workers in England. The workers had been wet-sanding chromate-based primer paint that had been applied to car bodies. In Australia, chromium was released from the Newcastle Orica explosives plant on August 8, 2011. Up to 20 workers at the plant were exposed, as were 70 nearby homes in Stockton. The town was only notified three days after the release, and the accident sparked a major public controversy, with Orica criticised for playing down the extent and possible risks of the leak, and the state government attacked for its slow response to the incident. Cadmium Cadmium exposure is a phenomenon of the early 20th century onwards. In Japan in 1910, the Mitsui Mining & Smelting Company began discharging cadmium into the Jinzū River as a byproduct of mining operations. Residents in the surrounding area subsequently consumed rice grown in cadmium-contaminated irrigation water, and experienced softening of the bones and kidney failure. The origin of these symptoms was not clear; possibilities raised at the time included "a regional or bacterial disease or lead poisoning". In 1955, cadmium was identified as the likely cause, and in 1961 the source was directly linked to mining operations in the area. In February 2010, cadmium was found in Walmart-exclusive Miley Cyrus jewelry. Walmart continued to sell the jewelry until May, when covert testing organised by the Associated Press confirmed the original results. In June 2010, cadmium was detected in the paint used on promotional drinking glasses for the movie Shrek Forever After, sold by McDonald's restaurants, triggering a recall of 12 million glasses. Remediation Human In humans, heavy metal poisoning is generally treated by the administration of chelating agents. These are chemical compounds, such as calcium disodium ethylenediaminetetraacetate (CaNa2EDTA), that convert heavy metals to chemically inert forms that can be excreted without further interaction with the body. Chelates are not without side effects and can also remove beneficial metals from the body. Vitamin and mineral supplements are sometimes co-administered for this reason. Environment Soils contaminated by heavy metals can be remediated by one or more of the following technologies: isolation; immobilization; toxicity reduction; physical separation; or extraction. Isolation involves the use of caps, membranes or below-ground barriers in an attempt to quarantine the contaminated soil. Immobilization aims to alter the properties of the soil so as to hinder the mobility of the heavy contaminants. Toxicity reduction attempts to oxidise or reduce the toxic heavy metal ions, via chemical or biological means, into less toxic or mobile forms. Physical separation involves the removal of the contaminated soil and the separation of the metal contaminants by mechanical means. Extraction is an on- or off-site process that uses chemicals, high-temperature volatilization, or electrolysis to extract contaminants from soils. The process or processes used will vary according to contaminant and the characteristics of the site. Benefits Some elements otherwise regarded as toxic heavy metals are essential, in small quantities, for human health. These elements include vanadium, manganese, iron, cobalt, copper, zinc, selenium, strontium and molybdenum. A deficiency of these essential metals may increase susceptibility to heavy metal poisoning.
Selenium is the most toxic of the heavy metals that are essential for mammals. Selenium is normally excreted and only becomes toxic when the intake exceeds the excretory capacity. See also Bento Rodrigues dam disaster Heavy metal detoxification Kingston Fossil Plant coal fly ash slurry spill Light metal Metal toxicity Citations General references Sets of chemical elements Toxicology
Toxic heavy metal
Environmental_science
3,007
6,264,793
https://en.wikipedia.org/wiki/Friendly%20Floatees%20spill
Friendly Floatees are plastic bath toys (including rubber ducks) marketed by The First Years and made famous by the work of Curtis Ebbesmeyer, an oceanographer who models ocean currents on the basis of flotsam movements. Ebbesmeyer studied the movements of a consignment of 28,800 Friendly Floatees—yellow ducks, red beavers, blue turtles, and green frogs—that were washed into the Pacific Ocean in 1992. Some of the toys landed along Pacific Ocean shores, such as Hawaii. Others traveled much farther, floating over the site where the Titanic sank, and spent years frozen in Arctic ice before reaching the U.S. Eastern Seaboard as well as British and Irish shores fifteen years later, in 2007. Oceanography A consignment of Friendly Floatee toys, manufactured in China for The First Years Inc., departed from Hong Kong on a container ship, the Evergreen Ever Laurel, destined for Tacoma, Washington. On 10 January 1992, during a storm in the North Pacific Ocean close to the International Date Line, twelve 40-foot (12 m) intermodal containers were washed overboard. One of these containers held 28,800 Floatees, a child's bath toy which came in a number of forms: red beavers, green frogs, blue turtles and yellow ducks. At some point the container opened (possibly because it collided with other containers or the ship itself) and the Floatees were released. Although each toy was mounted in a cardboard housing attached to a backing card, subsequent tests showed that the cardboard quickly degraded in sea water, allowing the Floatees to escape. Unlike many bath toys, Friendly Floatees have no holes in them, so they do not take on water. Seattle oceanographers Curtis Ebbesmeyer and James Ingraham, who were working on an ocean surface current model, began to track their progress. The mass release of 28,800 objects into the ocean at one time offered significant advantages over the standard method of releasing 500–1,000 drift bottles. The recovery rate of objects from the Pacific Ocean is typically around 2%, so rather than the 10 to 20 recoveries typically seen with a drift bottle release, the two scientists expected numbers closer to 600. They were already tracking various other spills of flotsam, including 61,000 Nike running shoes that had been lost overboard in 1990. Ten months after the incident, the first Floatees began to wash up along the Alaskan coast. The first discovery consisted of ten toys found by a beachcomber near Sitka, Alaska, on 16 November 1992, far from their starting point. Ebbesmeyer and Ingraham contacted beachcombers, coastal workers, and local residents to locate hundreds of the beached Floatees along a long stretch of shoreline. Another beachcomber discovered twenty of the toys on 28 November 1992, and in total 400 were found along the eastern coast of the Gulf of Alaska in the period up to August 1993. This represented a 1.4% recovery rate. The landfalls were logged in Ingraham's computer model OSCUR (Ocean Surface Currents Simulation), which uses measurements of air pressure from 1967 onwards to calculate the direction and speed of wind across the oceans, and the consequent surface currents. Ingraham's model was built to help fisheries, but it is also used to predict flotsam movements or the likely locations of those lost at sea.
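The recovery arithmetic quoted above is easy to verify. Below is a minimal Python sketch using only the figures given in this article (the 2% typical Pacific recovery rate and the counts of released and recovered toys); it is a back-of-envelope check, not part of the OSCUR model itself:

```python
# Expected recoveries at the ~2% rate typical for objects adrift in the Pacific.
RECOVERY_RATE = 0.02

for released in (500, 1_000, 28_800):
    expected = released * RECOVERY_RATE
    print(f"{released:>6} objects released -> ~{expected:.0f} expected recoveries")

# The ~400 Floatees actually logged along the Gulf of Alaska by August 1993
# imply the realised recovery rate stated in the article:
print(f"realised recovery rate: {400 / 28_800:.1%}")  # ~1.4%
```

A 28,800-object release at a 2% recovery rate yields roughly 576 expected finds, which is the "closer to 600" figure the two oceanographers anticipated.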
Using the models they had developed, the oceanographers correctly predicted further landfalls of the toys in Washington state in 1996, and theorized that many of the remaining Floatees would have traveled to Alaska, westward to Japan, back to Alaska, and then drifted northwards through the Bering Strait and become trapped in the Arctic pack ice. Moving slowly with the ice across the Pole, they predicted it would take five or six years for the toys to reach the North Atlantic, where the ice would thaw and release them. Between July and December 2003, The First Years Inc. offered a $100 US savings bond reward to anybody who recovered a Floatee in New England, Canada or Iceland. More of the toys were recovered in 2004 than in any of the preceding three years. Still more of the toys were predicted to have headed eastward past Greenland and to make landfall on the southwestern shores of the United Kingdom in 2007. In July 2007, a retired teacher found a plastic duck on the Devon coast, and British newspapers mistakenly announced that the Floatees had begun to arrive. But the day after breaking the story, the Western Morning News, the local Devon newspaper, reported that Dr. Simon Boxall of the National Oceanography Centre in Southampton had examined the toy and determined that the duck was not in fact a Floatee. Bleached by sun and seawater, the ducks and beavers had faded to white, but the turtles and frogs had kept their original colors. Legacy At least two children's books have been inspired by the Floatees. In 1997, Clarion Books published Ducky, written by Eve Bunting and illustrated by Caldecott Medal winner David Wisniewski. Hans Christian Andersen Award winner Eric Carle wrote 10 Little Rubber Ducks (Harper Collins, 2005). In 1997, Black Swan published That Awkward Age (Transworld, 1997), a comedy written by Mary Selby, in which several of the ducks are found off the Isle of Lewis, one then being purchased at auction and treated as a metaphor for perseverance. In 2003, Rich Eilbert wrote a song "Yellow Rubber Ducks" commemorating the ducks' journey. In 2011, he published the song as a YouTube video, Yellow Rubber Ducks. In 2011, Donovan Hohn published Moby-Duck: The True Story of 28,800 Bath Toys Lost at Sea and of the Beachcombers, Oceanographers, Environmentalists, and Fools, Including the Author, Who Went in Search of Them (Viking, 2011). On 19 February 2013, the BBC mystery series Death in Paradise featured the spill as a plot point in the seventh episode of series 2. On 20 June 2014, The Disney Channel and Disney Junior aired Lucky Duck, a Canadian-American animated TV movie that is loosely based on and inspired by the Friendly Floatees. In his 2014 poem collection The Cartographer Tries to Map a Way to Zion, poet Kei Miller dedicates a poem to the Friendly Floatees: "When Considering the Long, Long Journey of 28,000 Rubber Ducks". The spill was referenced in the 2022 game "Placid Plastic Duck Simulator" as an "accidental duck experiment", which can be heard on the radio in between music. The toys themselves have become collector's items, fetching prices as high as $1,000. See also Drifter (floating device) Great Pacific Garbage Patch Hansa Carrier Marine debris Message in a bottle Rye Riptides Footnotes References Hohn, Donovan, Moby-Duck: The True Story of 28,800 Bath Toys Lost at Sea and of the Beachcombers, Oceanographers, Environmentalists, and Fools, Including the Author, Who Went in Search of Them. Viking, New York, NY 2011. External links Keith C.
Heidorn, 'Of Shoes And Ships And Rubber Ducks And A Message In A Bottle', The Weather Doctor (17 March 1999). Jane Standley, 'Ducks' odyssey nears end', BBC News (12 July 2003). Duck ahoy, The Age (7 August 2003). Marsha Walton, 'How Nikes, toys and hockey gear help ocean science', CNN.com (26 May 2003). "Journey of the Floatees", Spiegel magazine (1 July 2007). "Timeline of Rubber Duck Voyage", Rubaduck.com. Donovan Hohn, "Moby-Duck: Or, The Synthetic Wilderness of Childhood," Harper's Magazine, January (2007), pp. 39–62. Moby Duck: The True Story of 28,800 Bath Toys Lost at Sea and of the Beachcombers, Oceanographers, Environmentalists, and Fools, Including the Author, Who Went in Search of Them – follow-up non-fiction book based on two years of research after the Harper's Magazine article. Rich Eilbert, Yellow Rubber Ducks, YouTube.com (March 2011). Water pollution Waste disposal incidents Physical oceanography Ocean currents 1990s toys 1992 in the environment Plastic toys Evergreen Group
Friendly Floatees spill
Physics,Chemistry,Environmental_science
1,711
3,993,712
https://en.wikipedia.org/wiki/ALPAC
ALPAC (Automatic Language Processing Advisory Committee) was a committee of seven scientists led by John R. Pierce, established in 1964 by the United States government in order to evaluate progress in computational linguistics in general and machine translation in particular. Its report, issued in 1966, gained notoriety for being very skeptical of the machine translation research done up to that point and for emphasizing the need for basic research in computational linguistics; this eventually caused the U.S. government to reduce its funding of the topic dramatically. This marked the beginning of the first AI winter. The ALPAC was set up in April 1964 with John R. Pierce as the chairman. The committee consisted of: John R. Pierce, who at the time worked for Bell Telephone Laboratories; John B. Carroll, a psychologist from Harvard University; Eric P. Hamp, a linguist from the University of Chicago; David G. Hays, a machine translation researcher from the RAND Corporation; Charles F. Hockett, a linguist from Cornell University; Anthony Oettinger, a machine translation researcher from Harvard University; and Alan Perlis, an artificial intelligence researcher from the Carnegie Institute of Technology. Testimony was heard from: Paul Garvin of the Bunker-Ramo Corporation; Gilbert King of the Itek Corporation, previously of IBM; Winfred P. Lehmann of the University of Texas at Austin; and Jules Mersel of the Bunker-Ramo Corporation. ALPAC's final recommendations (p. 34) were that research should be supported on: practical methods for evaluation of translations; means for speeding up the human translation process; evaluation of quality and cost of various sources of translations; investigation of the utilization of translations, to guard against production of translations that are never read; study of delays in the over-all translation process, and means for eliminating them, both in journals and in individual items; evaluation of the relative speed and cost of various sorts of machine-aided translation; adaptation of existing mechanized editing and production processes in translation; the over-all translation process; and production of adequate reference works for the translator, including the adaptation of glossaries that now exist primarily for automatic dictionary look-up in machine translation. See also Georgetown–IBM experiment AN/GSQ-16 ("Automatic Language Translator", system introduced 1959) History of artificial intelligence History of machine translation AI winter Lighthill report References John R. Pierce, John B. Carroll, et al., Language and Machines — Computers in Translation and Linguistics. ALPAC report, National Academy of Sciences, National Research Council, Washington, DC, 1966. ALPAC Report, Language and Machines — Computers in Translation and Linguistics. A Report by the Automatic Language Processing Advisory Committee, Washington, DC, 1966 External links The report accessible on-line ALPAC: the (in)famous report — summary of the report (PDF) Computational linguistics Machine translation History of artificial intelligence
ALPAC
Technology
562
19,209,115
https://en.wikipedia.org/wiki/COUP%20transcription%20factor
COUP transcription factor may refer to: COUP-TFI, a protein that in humans is encoded by the NR2F1 gene COUP-TFII, a protein that in humans is encoded by the NR2F2 gene Transcription factors
COUP transcription factor
Chemistry,Biology
50
2,797,553
https://en.wikipedia.org/wiki/R%20Apodis
R Apodis (HD 131109; HR 5540; 18 G. Apodis) is a solitary star in the constellation Apus. It is faintly visible to the naked eye as an orange-hued point of light with an apparent magnitude of 5.36. Parallax measurements imply a distance of 413 light-years, and it is drifting closer, as indicated by its negative heliocentric radial velocity. At its current distance, R Apodis' brightness is diminished by an interstellar extinction of 0.26 magnitudes, and it has an absolute magnitude of −0.22. HD 131109 was the first star in the constellation observed to be variable; its variability was first reported in 1873 by Benjamin Apthorp Gould. It was later hastily given the variable star designation R Apodis in a 1907 variable star catalogue, despite being only a suspected variable star at the time. However, observations conducted in a 1952 field star survey revealed that R Apodis was not variable at all. Keenan & Pitts (1980) found that it varied between magnitudes 5.5 and 6.1, but this was never confirmed. Hipparcos photometric data confirmed that R Apodis has a constant brightness. It has since been listed as class CST: (constant) in the General Catalogue of Variable Stars. R Apodis has a stellar classification of K4 III:, indicating that it is an evolved K-type giant that has ceased hydrogen fusion at its core and left the main sequence; however, there is uncertainty about the luminosity class. It has a mass comparable to the Sun's at 1.1 solar masses but, at the age of 5.68 billion years, it has expanded to 23 times the radius of the Sun. It radiates 293 times the luminosity of the Sun from its enlarged photosphere. R Apodis is metal-deficient, with an iron abundance roughly half of the Sun's, and it spins slowly, with a low projected rotational velocity. References 131109 Apus K-type giants Apodis, R 5540 073223 CD-76 688 J14575300-7639454 IRAS catalogue objects
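The absolute magnitude quoted above follows from the standard distance-modulus relation, M = m − 5·log10(d_pc) + 5 − A. A minimal Python sketch of that relation follows; plugging in the article's rounded values gives a figure close to, but not exactly, the published −0.22, since the published value rests on the adopted parallax solution and its corrections:

```python
import math

LY_PER_PC = 3.2616  # light-years per parsec

def absolute_magnitude(m_apparent: float, distance_ly: float,
                       extinction: float = 0.0) -> float:
    """Extinction-corrected absolute magnitude via the distance modulus:
    M = m - 5*log10(d_pc) + 5 - A."""
    d_pc = distance_ly / LY_PER_PC
    return m_apparent - 5 * math.log10(d_pc) + 5 - extinction

# Values quoted above for R Apodis: m = 5.36, d = 413 ly, A = 0.26.
print(round(absolute_magnitude(5.36, 413, 0.26), 2))  # about -0.4
```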
R Apodis
Astronomy
452
2,870,440
https://en.wikipedia.org/wiki/Tau%20Aquilae
Tau Aquilae, Latinized from τ Aquilae, is the Bayer designation for a star in the equatorial constellation of Aquila. Its apparent visual magnitude of 5.7 indicates it is a faint star that is nonetheless visible to the naked eye from suburban skies, at least according to the Bortle Dark-Sky Scale. The annual orbital motion of the Earth causes a small parallax shift, from which the distance to this star can be derived. The magnitude of the star is diminished by 0.28 from extinction caused by interstellar gas and dust. It is drifting closer to the Sun with a radial velocity of −29 km/s. The spectrum of Tau Aquilae matches a stellar classification of K0 III, with the luminosity class of III suggesting this is an evolved giant star that has exhausted the supply of hydrogen at its core and left the main sequence of stars like the Sun. It has 21 times the radius of the Sun and is radiating 208 times the Sun's luminosity. The outer envelope is radiating energy into space with an effective temperature of 4,660 K, giving it the orange-hued glow of a K-type star. References External links Image Tau Aquilae K-type giants Aquila (constellation) Aquilae, Tau Durchmusterung objects Aquilae, 63 190327 098823 7669
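The quoted luminosity can be sanity-checked against the quoted radius and effective temperature using the Stefan–Boltzmann law, L/L☉ = (R/R☉)²(T/T☉)⁴. A minimal Python sketch; the rounded inputs land in the same range as, but not exactly on, the published 208 L☉:

```python
T_SUN = 5772.0  # K, IAU nominal solar effective temperature

def luminosity_solar(radius_solar: float, t_eff_k: float) -> float:
    """Luminosity in solar units via the Stefan-Boltzmann law:
    L/Lsun = (R/Rsun)**2 * (T/Tsun)**4."""
    return radius_solar ** 2 * (t_eff_k / T_SUN) ** 4

# Values quoted above for Tau Aquilae: 21 solar radii at 4,660 K.
# This yields ~187 Lsun; the gap to the quoted 208 Lsun reflects
# rounding in the published radius and temperature.
print(round(luminosity_solar(21, 4660)))
```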
Tau Aquilae
Astronomy
283
35,786,690
https://en.wikipedia.org/wiki/Counterdependency
Counterdependency is the state of refusal of attachment, the denial of personal need and dependency, and may extend to the omnipotence and refusal of dialogue found in destructive narcissism, for example. Developmental origins The roots of counterdependency can be found in the age-appropriate negativism of two-year-olds and teens, where it serves the temporary purpose of distancing one from the parental figure[s]. As Selma Fraiberg put it, the two-year-old "says 'no' with splendid authority to almost any question addressed to him...as if he establishes his independence, his separateness from his mother, by being opposite". Where the mother has difficulty accepting the child's need for active distancing, the child may remain stuck in the counterdependent phase of development because of developmental trauma. In similar fashion, the teenager needs to be able to establish to their parents the fact of having a separate mind, even if only through a sustained state of cold rejection; and again, unresolved adolescent issues can lead to a mechanical counterdependence and unruly assertiveness in later life. Adult manifestations The counterdependent personality has been described as being addicted to activity and suffering from grandiosity, as acting strong and pushing others away. Out of a fear of being crowded, they avoid contact with others, something which can lead through emotional isolation to depression. The counterdependent male in particular may pride himself on being 'manly' – not needing affection, support or warmth, and being tough, independent and normal instead – something still reinforced by gender socialisation. Where a woman takes on the counterdependent position, it may take on the attributes of a false self or androcentric persona. The apparently independent behavior of the counterdependent can act as a powerful lure for the co-dependent – though once a couple has formed, the two partners – codependent and counterdependent – are sometimes found to switch roles. In therapy, the counterdependent personality often wishes to flee treatment, as a defense against the possibility of regression. By keeping the therapist at arm's length, and avoiding reference to feelings as far as possible, they may attempt to control the therapist so as to preserve their sense of independence. Existential views Existential therapists distinguish between interdependency on the one hand and, on the other, both dependency and an escapist form of rebellious counterdependence. Transference Counterdependency can present itself in a clinical situation in the form of a negative transference. In George Kelly's personal construct theory, the term is used in another sense, to describe the therapist's transference of dependency onto the client: counterdependent transference. See also Attachment in adults Autonomy Counterphobic attitude Couples therapy Karpman drama triangle Ludwig Binswanger Mind your own business Oppositional defiant disorder Schizoid avoidant behavior References Narcissism Interpersonal relationships Personal development
Counterdependency
Biology
625
41,748,357
https://en.wikipedia.org/wiki/Hepatitis%20C%20virus%20nonstructural%20protein%204B
Nonstructural protein 4B (NS4B) is a viral protein found in the hepatitis C virus. It has a mass of 27 kDa and is probably involved in the formation of intracellular membrane structures that allow virus replication. References Viral nonstructural proteins Hepatitis C virus
Hepatitis C virus nonstructural protein 4B
Biology
60
42,091,354
https://en.wikipedia.org/wiki/Digitalose
Digitalose is a deoxy sugar that is a component of various cardiac glycosides, including thevetin and emicymarin. It was first reported in 1892 as being obtained by the hydrolysis of Digitalinum verum. The chemical structure was first elucidated in 1943 by the German chemist Otto Schmidt. Chemically, it is a methyl ether of D-fucose (3-O-methyl-D-fucose). See also Sarmentose, a related deoxy sugar References Deoxy sugars Ethers Methoxy compounds
Digitalose
Chemistry
109
33,035,222
https://en.wikipedia.org/wiki/LETM1-like%20protein%20family
LETM1-like is a family of evolutionarily related proteins. This is a group of mainly hypothetical eukaryotic proteins. Putative features found in LETM1, such as a transmembrane domain and CK2 and PKC phosphorylation sites, are relatively conserved throughout the family. Deletion of LETM1 is thought to be involved in the development of Wolf–Hirschhorn syndrome in humans. A member of this family is known to be expressed in the mitochondria of Drosophila melanogaster, suggesting that this may be a group of mitochondrial proteins. Examples Human genes encoding members of this family include: LETM1, LETM2, LETMD1 References Protein families
LETM1-like protein family
Biology
158
1,683,613
https://en.wikipedia.org/wiki/Corps%20of%20Royal%20New%20Zealand%20Engineers
The Corps of Royal New Zealand Engineers is the administrative corps of the New Zealand Army responsible for military engineering. The role of the Engineers is to assist in maintaining friendly forces' mobility, deny freedom of movement to the enemy, and provide general engineering support. The corps has been involved in numerous conflicts over the course of its history, including World War I, World War II, the Korean War, the Vietnam War and the war in Afghanistan. The corps consists of a single regiment, 2nd Engineer Regiment, primarily based at Linton Military Camp near Palmerston North. History Early history and formation The first New Zealand European military engineering unit was an 82-man militia detachment employed as pioneers during the Flagstaff War in 1845–1846. It would be twenty years until the concept of military engineering was revisited by the colonial forces, with the formation of the Volunteer Force in 1865. By the 1880s there were five volunteer engineer corps, including a torpedo corps ("torpedo" referred to undersea mines at this time). The engineers were disbanded in 1883, as adequate training could not be provided, but the Russian Scare of 1885 placed a new emphasis on coastal fortifications and the engineer corps were revived. In 1887 the military component of the armed constabulary was converted into the Permanent Militia, establishing the first New Zealand regular military force. The Permanent Militia was much smaller than the Volunteer Force and in 1888 consisted of only two companies: the Permanent Artillery and the Torpedo Corps. The Torpedo Corps became the Submarine Mining Branch in 1896 and then No. 2 Service Company in 1897. It was finally retitled as the Corps of Royal New Zealand Engineers on 7 January 1903 (backdated to 15 October 1902). This first rendition of the Royal New Zealand Engineers was short-lived, and on 26 March 1908 the engineers were absorbed into the electric light section of the Royal New Zealand Artillery. The New Zealand Engineer Volunteers continued to exist until 5 October 1911, when they became the Corps of New Zealand Engineers as part of the conversion of the Volunteer Force into the Territorial Force. The New Zealand Railway Corps and the New Zealand Post and Telegraph Corps were both formed as independent corps in October 1911, but were brought under the Corps of New Zealand Engineers umbrella in July 1913. First World War The first units of the New Zealand Engineers to be sent overseas went with the Samoa Expeditionary Force; they included a company of railway engineers, two sections of field engineers, and 26 signallers. Field engineers would be sent to Gallipoli with the New Zealand and Australian Division and then to the Western Front as part of the New Zealand Division. A total of four field engineer companies were raised during the war. In principle one field company was attached to each infantry brigade, but the companies were for the most part under the control of the divisional CRE (Commander Royal Engineers). A small number of field engineers also served in the Sinai and Palestine Campaign. These sappers served in D Troop (later NZ Troop) of the 1st Field Squadron of the Australian Engineers. As part of the Australian and New Zealand Mounted Division, they initially provided an engineering capability to the 2nd Light Horse Brigade, but were later assigned to the New Zealand Mounted Rifles Brigade. The field engineers' role involved constructing and repairing trenches, fortifications and bridges, and digging wells.
The Battle of the Somme in 1916 had shown that road transport was inadequate to move supplies and ammunition to the front line and to evacuate wounded. The Engineers were therefore required to build a light railway system close to the front line, and in 1917 the 5th Light Railway Operating Company was formed to specialise in these tasks. The New Zealand Tunnelling Company was also raised in 1915 and was the first New Zealand unit deployed to the Western Front, arriving in March 1916. It was initially involved in counter-mining at Vimy Ridge and later dug out tunnels at Arras. During the Hundred Days Offensive the tunnelling company was retasked with bridge building, which included the construction of a 240-foot bridge across the Canal du Nord. Signals units, which were part of the Corps of New Zealand Engineers at this time, were attached to most units of the New Zealand Expeditionary Force. The Divisional Signal Company served with the New Zealand infantry, while the mounted signal troop was assigned to the New Zealand Mounted Rifles Brigade. The 1st ANZAC Wireless Signal Squadron also contained a single New Zealand wireless troop and was part of Indian Expeditionary Force D. The wireless troop was the only New Zealand unit to serve in the Mesopotamia Campaign. A number of other units were raised during the First World War with similar roles to, but not part of, the New Zealand Engineers. The New Zealand (Māori) Pioneer Battalion provided a general labour force for construction and entrenching work. Attempts were made to convert the battalion into an engineering unit, but this proved to be impractical due to a shortage of adequately educated Māori officers. Three entrenching battalions were also formed in February 1918 from the recently disbanded 4th Infantry Brigade. The entrenching battalions were a reserve manpower pool for the remaining infantry brigades, but also provided a general labour force to the engineers. During the course of the war the New Zealand Engineers suffered around 400 fatalities. Two members of the corps, Cyril Bassett (Divisional Signal Company) and Samuel Forsyth (attached to 2nd Battalion, Auckland Infantry Regiment), were awarded the Victoria Cross. Following the war the Corps of New Zealand Engineers was restructured. In 1921 the New Zealand Post and Telegraph Corps became a separate corps, the New Zealand Corps of Signals, and the railway battalions were disbanded. In the same year the Corps of New Zealand Engineers was retitled as the Regiment of New Zealand Engineers, but reverted to the former name in 1923. Second World War During the Second World War the Corps of New Zealand Engineers provided engineering support to the 2nd New Zealand Expeditionary Force. Three field companies, one for each brigade, were formed as part of the 2nd New Zealand Division. The field companies first saw action in 1941 during the battles of Greece and Crete, and were mostly involved in the demolition of infrastructure to try to slow the German advance. During Operation Crusader the engineers mostly operated as infantry, but following the Axis counter-attack in 1942 they were employed in the construction of minefields at the El Alamein line. During the Second Battle of El Alamein, the engineers played a vital role in clearing German minefields for the allied forces to advance through. The primary role of the engineers continued to be mine clearing during the allied advance across the Western Desert and into Tunisia in late 1942 and early 1943.
Other non-divisional engineer companies were also formed to support logistics and transportation. By 1940 seven railway companies had been formed and were involved in the construction and operation of railways in Egypt and Libya. In 1942 the New Zealand engineers laid 400 km of new track across the Western Desert in 265 days and operated the first train to cross the El Alamein line following the breakout. Three forestry companies were formed in 1940 and were sent to England to fell and mill timber. By September 1942 the output of the New Zealand Forestry Group exceeded that of all the other forestry groups (British, Canadian and Australian) combined. Two of the forestry companies were disbanded in 1943 and the remaining one was sent to Algeria and then Italy, before also being disbanded in 1944. The 2nd New Zealand Division was deployed to Italy in 1943, and the new environment required the field companies to take on a new role as bridge builders. The New Zealand Engineers were soon proficient in the rapid construction of both pontoon bridges and modular Bailey bridges. The construction of these bridges was critical to the advance of allied forces and instrumental in the crossing of major rivers such as the Sangro, Senio, Santerno and Po. In March 1945 an armoured engineer squadron was also formed. The squadron was equipped with a range of specially modified Sherman and Valentine tanks used for bridge laying, and supported the advance of the 4th New Zealand Armoured Brigade. The 3rd New Zealand Division, which served in the Pacific, also contained three field companies, even though the division's third brigade was never fully formed. These units were generally engaged in the construction of infrastructure behind the front line, although they did support the landing at the Battle of the Green Islands, where they suffered their only combat casualties of the war. A small number of officers were also seconded to the British Indian Army and took part in the Burma Campaign. A large number of engineering units were formed in New Zealand to defend against a potential Japanese invasion. A total of 13 companies were formed and attached to the 1st, 4th and 5th divisions. A further 19 companies were formed by mobilising the Public Works Department as a military organisation called the Defence Engineering Service Corps. The Corps of New Zealand Engineers suffered around 310 fatalities during the Second World War. Cold War In 1947 the various administrative corps of the New Zealand Military Forces were granted the prefix "Royal". It was argued by some generals that the earlier Corps of Royal New Zealand Engineers had technically not been disbanded in 1908 and could be resurrected by simply transferring the personnel of the New Zealand Engineers to it. This proposal was, however, rejected by the Army Board, which determined that the RNZE had indeed been disbanded. The New Zealand Engineers were therefore granted the royal title on 12 July 1947, but due to a clerical error were listed by the abbreviated name "New Zealand Engineers" (omitting "Corps of"), and subsequently became the Royal New Zealand Engineers. The error was rectified in 1953 and the formal name was changed to the Corps of Royal New Zealand Engineers. Throughout the Cold War the RNZE were deployed overseas alongside New Zealand and other Commonwealth forces.
A company of engineers served with Jayforce as part of the British Commonwealth Occupation Force in Japan, and during the Korean War an engineer section was attached to the 28th Engineer Regiment of the 1st Commonwealth Division. Engineers were also attached to the battalions of the New Zealand Regiment stationed in Malaya during the 1960s, and supported various units of the 1st Australian Task Force during the Vietnam War. The engineers were also stationed in Singapore as part of a forward presence in Asia. The New Zealand engineers were initially part of the 28th ANZUK Field Squadron in the early 1970s, but were later attached to 1st Battalion, Royal New Zealand Infantry Regiment, stationed in Singapore until 1989. The primary unit of the RNZE based in New Zealand during the 1950s was the 1st Field Engineer Regiment, which was to support the division-sized 3rd New Zealand Expeditionary Force. With the end of compulsory military training in 1958 and the downsizing of the RNZE, the regiment was disbanded in 1962. The RNZE were organised as independent squadrons until the formation of 2nd Engineer Regiment in 1993. Recent history Since the 1980s the RNZE has been primarily deployed on peacekeeping and disaster relief missions. An engineer section was attached to the New Zealand company group deployed to Bosnia to quell ethnic conflict from 1994 until 1996. The engineers continued to be deployed to Bosnia until as late as 2001 to support reconstruction. In response to the 1999 East Timorese crisis New Zealand deployed a battalion group, which contained an engineer troop, to East Timor as part of INTERFET. Following renewed unrest in 2006, the engineers were once again deployed to East Timor, eventually leaving in 2012. In 2003 New Zealand deployed the provincial reconstruction team to Afghanistan. Despite the name, the provincial reconstruction team was intended to provide security to Bamyan Province and thereby enable reconstruction by other organizations. The engineers deployed as part of the provincial reconstruction team did not have any construction capability and only oversaw work by contractors from other governments and agencies. The provincial reconstruction team was withdrawn from Afghanistan in 2013. Although New Zealand did not join the American-led coalition which invaded Iraq in 2003, RNZE sappers were deployed to Iraq in 2004 to provide humanitarian and reconstruction support. An RNZE troop was attached to the 38th Engineer Regiment, Royal Engineers, and repaired bridges, schools and water treatment plants in Basra. Over the last three decades RNZE sappers have deployed to a large number of Pacific island nations, including the Cook Islands, Fiji, Samoa, Tonga and Vanuatu, to support disaster relief following cyclones. Additionally, the corps deployed to Tuvalu and Tokelau during the 2011 drought and set up water filtration and reverse osmosis systems. The RNZE has also been active in disaster relief within New Zealand. The engineers were deployed to Christchurch within two hours of the 2011 earthquake. The RNZE were immediately tasked with repairing the city's water supply, but also supported the stabilisation, repair and demolition of buildings and other infrastructure. The corps also assisted in clearing slips along State Highway 70 following the 2016 Kaikōura earthquake. Current Role The primary role of the Corps of Royal New Zealand Engineers is to provide mobility and counter-mobility capabilities to the New Zealand Army.
More generally, the corps provides military engineering support including construction, water purification and reticulation, CBRN defence, bridging, firefighting and demolitions. When not fulfilling an engineering role, sappers act as infantry. To fulfil these duties the corps is equipped with a variety of engineering vehicles. A total of six JCB High Mobility Engineer Excavators have been acquired by the New Zealand Army; these include an armoured cab, enabling the corps to clear roads and obstacles in a combat environment. Bridging can be achieved using the rapidly emplaced bridging system, which is mounted on an 8x8 HX-77 MAN truck. The system can bridge a twelve-metre gap in ten minutes and is strong enough to support the weight of an NZLAV. In the late 2000s 2nd Engineer Regiment operated a troop of NZLAVs to support the then-mechanised 1st Battalion, Royal New Zealand Infantry Regiment. The NZLAVs were transferred to Queen Alexandra's Mounted Rifles in the early 2010s when 1st Battalion was converted to light infantry, but the engineers continue to have access to engineering NZLAVs when necessary. Organisation The Corps of Royal New Zealand Engineers currently consists of a single regiment, 2nd Engineer Regiment, based at Linton Military Camp, and contains both regular and reserve components. It is organised as follows: 2nd Engineer Regiment Headquarters Squadron 1st Field Squadron 2nd Field Squadron 3rd Field Squadron 25th Engineer Support Squadron Emergency Response Squadron The 2nd Field and 3rd Field Squadrons provide combat engineering support to the 1st and 2/1st Battalions of the Royal New Zealand Infantry Regiment, respectively. The 3rd Field Squadron is based at Burnham Military Camp, while the Emergency Response Squadron has one troop based at each of the Linton, Burnham and Waiouru camps. The emergency response troops were formerly the camp fire brigades and provide emergency services to the military camps and the surrounding area. 25th Engineer Support Squadron provides disaster relief and civil support. The School of Military Engineering is based at Linton Camp and contains the Technical Training Wing and the Combat Engineer Wing. Since 1995 the school has also provided firefighting training to personnel from the Royal New Zealand Air Force. Although not part of the RNZE organisation, the Engineer Corps Memorial Centre, Library and Chapel are also based at Linton Camp. Traditions Sappers The most junior enlisted rank of the Royal New Zealand Engineers is Sapper, rather than private, which is used in most other corps. Additionally, any member of the corps can be informally referred to as a sapper. Motto The official motto of the Royal New Zealand Engineers is "ubique quo fas et gloria ducunt" (everywhere, where right and glory lead). In practice, however, the phrase is split into two separate mottos, "ubique" and "quo fas et gloria ducunt". The motto was originally granted to the Royal Engineers in 1832 and later adopted by the New Zealand Engineers. Uniforms and insignia The badge of the New Zealand Engineers was a simple circle bearing the acronym "NZE" and the motto "quo fas et gloria ducunt", surmounted by the Royal crest. After attaining royal status in 1947, a cap badge identical to that of the Royal Engineers was adopted, except with the scroll inscribed with "Royal N.Z. Engineers" in place of "Royal Engineers".
The badge contains the Royal cypher, "ER", standing for "Elizabeth II Regina", encircled by a garter adorned with the motto "honi soit qui mal y pense" (shame on him who thinks evil of it), taken from the Order of the Garter. The collar badge worn by the Royal New Zealand Engineers is a grenade with a scroll inscribed with "ubique". The New Zealand Tunnelling Company instead used the Māori translation of the motto, "inga whai katoa", on their collar badges. The grenade badge has nine flames, in contrast to the very similar seven-flame badge of the Royal New Zealand Artillery. The corps colours are purple navy and post office red, which were reputedly the colours of the Board of Ordnance. They are also interpreted as representing the blue tunics worn by the Royal Engineers prior to 1813 and the red tunics which replaced them. The colours are reflected in the corps stable belt, which is red with two blue stripes, and the corps flag, which is similarly coloured and embroidered with the corps badge. Colonel-in-Chief The Colonel-in-Chief is the ceremonial head and patron of the corps. The position was first held by Lord Kitchener, who served in the role from 1911 until his death in 1916. Kitchener was himself a former Royal Engineer, and some of the RNZE regimental silver comes from the Kitchener estate. The second Colonel-in-Chief was Prince George, Duke of Kent, who held the position from 1938 until his death in 1942. The third and most recent Colonel-in-Chief was Queen Elizabeth II, who held the position from 1953 until her death in 2022. Alliances The Corps of Royal New Zealand Engineers is allied with: – Corps of Royal Engineers – Corps of Royal Australian Engineers Freedoms The Corps of Royal New Zealand Engineers has been granted the freedoms of: Levin (1959) Various sub-units have also been granted freedoms, including: Greymouth (2nd Works Section, 1971) Akaroa (3rd Field Squadron, 1974) Petone (6th Independent Field Squadron, 1985) Banks Peninsula (3rd Field Squadron, 1994) Order of precedence Notes Footnotes Citations References Administrative corps of New Zealand Military engineer corps Military units and formations established in 1902 Organisations based in New Zealand with royal patronage
Corps of Royal New Zealand Engineers
Engineering
3,678
4,179,425
https://en.wikipedia.org/wiki/Royal%20Netherlands%20Meteorological%20Institute
The Royal Netherlands Meteorological Institute (Koninklijk Nederlands Meteorologisch Instituut; KNMI) is the Dutch national weather forecasting service, which has its headquarters in De Bilt, in the province of Utrecht, central Netherlands. The primary tasks of KNMI are weather forecasting, monitoring of climate change and monitoring of seismic activity. KNMI is also the national research and information centre for climate, climate change and seismology. History KNMI was established by royal decree of King William III on 21 January 1854 under the title "Royal Meteorological Observatory". Professor C. H. D. Buys Ballot was appointed as the first Director. The year before, Professor Buys Ballot had moved the Utrecht University Observatory to the decommissioned fort at Sonnenborgh. It was only later, in 1897, that the headquarters of the KNMI moved to the Koelenberg estate in De Bilt. The "Royal Meteorological Observatory" originally had two divisions, the land branch under Dr. Frederik Wilhelm Christiaan Krecke and the marine branch under navy Lt. Marin H. Jansen. Like Robert FitzRoy, who founded the Meteorological Office in Britain the same year, Buys Ballot was disenchanted with the non-scientific weather reports found in European newspapers at the time. Like the Met Office, the KNMI also pioneered daily weather predictions, for which Buys Ballot coined the new Dutch compound "weervoorspelling" (weather prognostication). Research Applied research at KNMI is focused on three areas: Research aimed at improving the quality, usefulness and accessibility of meteorological and oceanographical data in support of operational weather forecasting and other applications of such data. Climate-related research on oceanography; atmospheric boundary layer processes, clouds and radiation; the chemical composition of the atmosphere (e.g. ozone); climate variability research; the analysis of climate, climate variability and climatic change; modelling support and policy support to the Dutch Government with respect to climate and climatic change. Seismological research as well as monitoring of seismic activity (earthquakes). Development of atmospheric dispersion models KNMI's applied research also encompasses the development and operational use of atmospheric dispersion models. Whenever a disaster occurs within Europe which causes the emission of toxic gases or radioactive material into the atmosphere, it is of utmost importance to quickly determine where the atmospheric plume of toxic material is being transported by the prevailing winds and other meteorological factors. At such times, KNMI activates a special calamity service. For this purpose, a group of seven meteorologists is constantly on call day and night. KNMI's role in supplying information during emergencies is included in municipal and provincial disaster management plans. Civil services, fire departments and the police can be provided with weather and other relevant information directly by the meteorologist on duty, through dedicated telephone connections. KNMI has two atmospheric dispersion models available for use by its calamity service: PUFF - In cooperation with the Netherlands National Institute for Public Health and the Environment (Dutch: Rijksinstituut voor Volksgezondheid en Milieuhygiëne, or simply RIVM), KNMI has developed the dispersion model PUFF. It has been designed to calculate the dispersion of air pollution on European scales. The model was originally tested using measurements of the dispersion of radioactivity caused by the accident at the Chernobyl nuclear power plant in 1986.
A few years later, in 1994, a dedicated dispersion experiment called ETEX (European Tracer EXperiment) was carried out, which also provided useful data for further testing of PUFF. CALM - CALM is a CALamity Model designed for the calculation of air pollution dispersion on small spatial scales, within the Netherlands. The algorithms and parameters contained in the CALM model are practically identical to those of the PUFF model. However, the meteorological input can only be supplied manually in CALM. The user provides both observed and predicted values for wind velocity at the 10 metre height level, the atmospheric stability classification and the mixing height. After the model calculations have been performed, a map is created and displayed with the derived trajectories of the pollution plume and an indication of how and where the cloud will disperse. Storm naming In 2019, KNMI decided to join the western European storm-naming group to raise awareness of the dangers of storms; the first named storm was Storm Ciara on 9 February 2020. See also Atmospheric dispersion modeling List of atmospheric dispersion models National Center for Atmospheric Research NERI, the National Environmental Research Institute of Denmark NILU, the Norwegian Institute for Air Research Roadway air dispersion modeling Swedish Meteorological and Hydrological Institute TA Luft UK Atmospheric Dispersion Modelling Liaison Committee UK Dispersion Modelling Bureau University Corporation for Atmospheric Research References External links KNMI website (in Dutch) KNMI website (in English) KNMI atmospheric dispersion models RIVM website (in English) Atmospheric dispersion modeling Organisations based in De Bilt Governmental meteorological agencies in Europe Independent government agencies of the Netherlands Organisations based in the Netherlands with royal patronage Research institutes in the Netherlands
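The inputs listed above for CALM (wind velocity at the 10 metre level, an atmospheric stability classification, and a mixing height) are the classic ingredients of a Gaussian plume estimate. The following is a rough illustrative sketch only, not KNMI's actual PUFF or CALM code, whose algorithms are not reproduced here; the power-law dispersion coefficients a and b are invented placeholders rather than any operational parameterisation.

```python
import math

def gaussian_plume(q, u, x, y, z, h, a=0.08, b=0.06):
    """Steady-state Gaussian plume concentration (g/m^3).

    q : source strength (g/s), u : wind speed (m/s),
    x : downwind, y : crosswind, z : height above ground (m),
    h : effective release height (m).
    a, b : crude power-law dispersion parameters (placeholders).
    """
    sigma_y = a * x ** 0.9   # crosswind spread grows with downwind distance
    sigma_z = b * x ** 0.85  # vertical spread grows with downwind distance
    # Ground reflection is modelled with an image source at height -h.
    vert = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
            + math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))
    cross = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * cross * vert

# Concentration 2 km downwind, on the plume axis, at ground level,
# for an assumed 1 kg/s release at 50 m height in a 5 m/s wind:
print(gaussian_plume(q=1000.0, u=5.0, x=2000.0, y=0.0, z=0.0, h=50.0))
```

Operational models such as PUFF differ in using time-varying, three-dimensional wind fields and puff tracking rather than a single steady-state plume.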
Royal Netherlands Meteorological Institute
Chemistry,Engineering,Environmental_science
1,022
18,908,678
https://en.wikipedia.org/wiki/Kolakoski%20sequence
In mathematics, the Kolakoski sequence, sometimes also known as the Oldenburger–Kolakoski sequence, is an infinite sequence of symbols {1,2} that is the sequence of run lengths in its own run-length encoding. It is named after the recreational mathematician William Kolakoski (1944–97), who described it in 1965, but it was previously discussed by Rufus Oldenburger in 1939. Definition The initial terms of the Kolakoski sequence are: 1,2,2,1,1,2,1,2,2,1,2,2,1,1,2,1,1,2,2,1,2,1,1,2,1,2,2,1,1,... Each symbol occurs in a "run" (a sequence of equal elements) of either one or two consecutive terms, and writing down the lengths of these runs gives exactly the same sequence: 1,2,2,1,1,2,1,2,2,1,2,2,1,1,2,1,1,2,2,1,2,1,1,2,1,2,2,1,1,2,1,1,2,1,2,2,1,2,2,1,1,2,1,2,2,... 1, 2 , 2 ,1,1, 2 ,1, 2 , 2 ,1, 2 , 2 ,1,1, 2 ,1,1, 2 , 2 ,1, 2 ,1,1, 2 ,1, 2 , 2 ,1,1, 2 ,... The description of the Kolakoski sequence is therefore reversible. If K stands for "the Kolakoski sequence", description #1 logically implies description #2 (and vice versa): 1. The terms of K are generated by the runs (i.e., run-lengths) of K 2. The runs of K are generated by the terms of K Accordingly, one can say that each term of the Kolakoski sequence generates a run of one or two future terms. The first 1 of the sequence generates a run of "1", i.e. itself; the first 2 generates a run of "22", which includes itself; the second 2 generates a run of "11"; and so on. Each number in the sequence is the length of the next run to be generated, and the element to be generated alternates between 1 and 2: 1,2 (length of sequence l = 2; sum of terms s = 3) 1,2,2 (l = 3, s = 5) 1,2,2,1,1 (l = 5, s = 7) 1,2,2,1,1,2,1 (l = 7, s = 10) 1,2,2,1,1,2,1,2,2,1 (l = 10, s = 15) 1,2,2,1,1,2,1,2,2,1,2,2,1,1,2 (l = 15, s = 23) As can be seen, the length of the sequence at each stage is equal to the sum of the terms in the previous stage. These self-generating properties, which remain if the sequence is written without the initial 1, mean that the Kolakoski sequence can be described as a fractal, or mathematical object that encodes its own representation on other scales. Bertran Steinsky has created a recursive formula for the i-th term of the sequence, but the sequence is conjectured to be aperiodic, that is, its terms do not have a general repeating pattern (cf. irrational numbers like π and √2). Research Density It seems plausible that the density of 1s in the Kolakoski {1,2}-sequence is 1/2, but this conjecture remains unproved. Václav Chvátal has proved that the upper density of 1s is less than 0.50084. Nilsson has used the same method with far greater computational power to obtain the bound 0.500080. Although calculations of the first 3×10⁸ values of the sequence appeared to show its density converging to a value slightly different from 1/2, later calculations that extended the sequence to its first 10¹³ values show the deviation from a density of 1/2 growing smaller, as one would expect if the limiting density actually is 1/2. Connection with tag systems The Kolakoski sequence can also be described as the result of a simple cyclic tag system. However, as this system is a 2-tag system rather than a 1-tag system (that is, it replaces pairs of symbols with other sequences of symbols, rather than operating on a single symbol at a time) it lies in the region of parameters for which tag systems are Turing complete, making it difficult to use this representation to reason about the sequence.
Algorithms The Kolakoski sequence may be generated by an algorithm that, in the i-th iteration, reads the value x_i that has already been output as the i-th value of the sequence (or, if no such value has been output yet, sets x_i = i). Then, if i is odd, it outputs x_i copies of the number 1, while if i is even, it outputs x_i copies of the number 2. Thus, the first few steps of the algorithm are: The first value has not yet been output, so set x_1 = 1, and output 1 copy of the number 1 The second value has not yet been output, so set x_2 = 2, and output 2 copies of the number 2 The third value x_3 was output as 2 in the second step, so output 2 copies of the number 1. The fourth value x_4 was output as 1 in the third step, so output 1 copy of the number 2. Etc. This algorithm takes linear time, but because it needs to refer back to earlier positions in the sequence it needs to store the whole sequence, taking linear space. An alternative algorithm that generates multiple copies of the sequence at different speeds, with each copy of the sequence using the output of the previous copy to determine what to do at each step, can be used to generate the sequence in linear time and only logarithmic space. See also Golomb sequence — another self-generating sequence based on run-length Gijswijt's sequence Look-and-say sequence Notes Further reading External links Kolakoski Constant to 25000 digits as computed by Olivier Gerard in April 1998 Integer sequences Parity (mathematics) Fractals
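A direct transcription of the linear-space algorithm just described, as a minimal Python sketch (the function name and the in-memory representation are choices of this illustration, not of the cited sources):

```python
def kolakoski(n):
    """Return the first n terms of the Kolakoski {1,2}-sequence.

    Term i (here 0-indexed) gives the length of the next run to
    append; the symbol appended alternates 1, 2, 1, 2, ...
    If term i has not been output yet, x_i = i (1-indexed).
    """
    seq = []
    i = 0        # index of the term that drives the next run
    symbol = 1
    while len(seq) < n:
        run_length = seq[i] if i < len(seq) else i + 1
        seq.extend([symbol] * run_length)
        symbol = 3 - symbol  # alternate between 1 and 2
        i += 1
    return seq[:n]

print(kolakoski(29))
# [1, 2, 2, 1, 1, 2, 1, 2, 2, 1, 2, 2, 1, 1, 2,
#  1, 1, 2, 2, 1, 2, 1, 1, 2, 1, 2, 2, 1, 1]
```

The output matches the initial terms listed in the Definition section above.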
Kolakoski sequence
Mathematics
1,401
249,617
https://en.wikipedia.org/wiki/System%20of%20equations
In mathematics, a set of simultaneous equations, also known as a system of equations or an equation system, is a finite set of equations for which common solutions are sought. An equation system is usually classified in the same manner as single equations, namely as a: System of linear equations, System of nonlinear equations, System of bilinear equations, System of polynomial equations, System of differential equations, or a System of difference equations See also Simultaneous equations model, a statistical model in the form of simultaneous linear equations Elementary algebra, for elementary methods Equations Broad-concept articles
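As a concrete toy illustration of a "common solution" (the system below is invented for this example, not taken from the article), two simultaneous linear equations can be solved numerically:

```python
import numpy as np

# The simultaneous linear equations
#   x + 2y = 5
#   3x -  y = 1
# written in matrix form A @ [x, y] = b:
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
b = np.array([5.0, 1.0])

solution = np.linalg.solve(A, b)  # the common solution of both equations
print(solution)  # [1. 2.]  i.e. x = 1, y = 2 satisfies both equations
```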
System of equations
Mathematics
128
994,800
https://en.wikipedia.org/wiki/Titanium%20tetrachloride
Titanium tetrachloride is the inorganic compound with the formula TiCl4. It is an important intermediate in the production of titanium metal and the pigment titanium dioxide. TiCl4 is a volatile liquid. Upon contact with humid air, it forms thick clouds of titanium dioxide (TiO2) and hydrochloric acid, a reaction that was formerly exploited for use in smoke machines. It is sometimes referred to as "tickle" or "tickle 4", as a phonetic representation of the symbols of its molecular formula (TiCl4). Properties and structure TiCl4 is a dense, colourless liquid, although crude samples may be yellow or even red-brown. It is one of the rare transition metal halides that is a liquid at room temperature, VCl4 being another example. This property reflects the fact that molecules of TiCl4 weakly self-associate. Most metal chlorides are polymers, wherein the chloride atoms bridge between the metals. Its melting point is similar to that of CCl4. Ti4+ has a "closed" electronic shell, with the same number of electrons as the noble gas argon. The tetrahedral structure for TiCl4 is consistent with its description as a d0 metal centre (Ti4+) surrounded by four identical ligands. This configuration leads to highly symmetrical structures, hence the tetrahedral shape of the molecule. TiCl4 adopts similar structures to TiBr4 and TiI4; the three compounds share many similarities. TiCl4 and TiBr4 react to give mixed halides TiClxBr4−x, where x = 0, 1, 2, 3, 4. Magnetic resonance measurements also indicate that halide exchange is rapid between these tetrahalides. TiCl4 is soluble in toluene and chlorocarbons. Certain arenes form complexes of the type [(arene)TiCl3]+. TiCl4 reacts exothermically with donor solvents such as THF to give hexacoordinated adducts. Bulkier ligands (L) give pentacoordinated adducts TiCl4L. Production TiCl4 is produced by the chloride process, which involves the reduction of titanium oxide ores, typically ilmenite (FeTiO3), with carbon under flowing chlorine at 900 °C. Impurities are removed by distillation. The coproduction of FeCl3 is undesirable, which has motivated the development of alternative technologies. Instead of directly using ilmenite, "rutile slag" is used. This material, an impure form of TiO2, is derived from ilmenite by removal of iron, either using carbon reduction or extraction with sulfuric acid. Crude TiCl4 contains a variety of other volatile halides, including vanadyl chloride (VOCl3), silicon tetrachloride (SiCl4), and tin tetrachloride (SnCl4), which must be separated. Applications Production of titanium metal The world's supply of titanium metal, about 250,000 tons per year, is made from TiCl4. The conversion involves the reduction of the tetrachloride with magnesium metal. This procedure is known as the Kroll process: 2 Mg + TiCl4 → Ti + 2 MgCl2. In the Hunter process, liquid sodium is the reducing agent instead of magnesium. Production of titanium dioxide Around 90% of TiCl4 production is used to make the pigment titanium dioxide (TiO2). The conversion involves hydrolysis of TiCl4, a process that forms hydrogen chloride: TiCl4 + 2 H2O → TiO2 + 4 HCl. In some cases, TiCl4 is oxidised directly with oxygen: TiCl4 + O2 → TiO2 + 2 Cl2. Smoke screens It has been used to produce smoke screens since it produces a heavy, white smoke that has little tendency to rise. "Tickle" was the standard means of producing on-set smoke effects for motion pictures, before being phased out in the 1980s due to concerns about hydrated HCl's effects on the respiratory system. Chemical reactions Titanium tetrachloride is a versatile reagent that forms diverse derivatives, including those illustrated below. Alcoholysis and related reactions A characteristic reaction of TiCl4 is its easy hydrolysis, signaled by the release of HCl vapors and the formation of titanium oxides and oxychlorides.
Titanium tetrachloride has been used to create naval smokescreens, as the hydrochloric acid aerosol and the titanium dioxide that are formed scatter light very efficiently. This smoke is corrosive, however. Alcohols react with TiCl4 to give alkoxides with the formula [Ti(OR)4]n (R = alkyl, n = 1, 2, 4). As indicated by their formula, these alkoxides can adopt complex structures ranging from monomers to tetramers. Such compounds are useful in materials science as well as organic synthesis. A well known derivative is titanium isopropoxide, which is a monomer. Titanium bis(acetylacetonate)dichloride results from treatment of titanium tetrachloride with excess acetylacetone: TiCl4 + 2 Hacac → TiCl2(acac)2 + 2 HCl. Organic amines react with TiCl4 to give complexes containing amido (NR2-containing) and imido (NR-containing) complexes. With ammonia, titanium nitride is formed. An illustrative reaction is the synthesis of tetrakis(dimethylamido)titanium, Ti(NMe2)4, a yellow, benzene-soluble liquid: 4 LiNMe2 + TiCl4 → 4 LiCl + Ti(NMe2)4. This molecule is tetrahedral, with planar nitrogen centres. Complexes with simple ligands TiCl4 is a Lewis acid, as implicated by its tendency to hydrolyze. With the ether THF, TiCl4 reacts to give yellow crystals of TiCl4(THF)2. With chloride salts, TiCl4 reacts to form sequentially [Ti2Cl9]−, [Ti2Cl10]2− (see figure above), and [TiCl6]2−. The reaction of chloride ions with TiCl4 depends on the counterion: bulky cations favour the pentacoordinate complex [TiCl5]−, whereas smaller cations give the more highly coordinated [TiCl6]2−. These reactions highlight the influence of electrostatics on the structures of compounds with highly ionic bonding. Redox Reduction of TiCl4 with aluminium results in one-electron reduction. The trichloride (TiCl3) and tetrachloride have contrasting properties: the trichloride is a coloured solid, being a coordination polymer, and is paramagnetic. When the reduction is conducted in THF solution, the Ti(III) product converts to the light-blue adduct TiCl3(THF)3. Organometallic chemistry The organometallic chemistry of titanium typically starts from TiCl4. An important reaction involves sodium cyclopentadienide to give titanocene dichloride, (C5H5)2TiCl2. This compound and many of its derivatives are precursors to Ziegler–Natta catalysts. Tebbe's reagent, useful in organic chemistry, is an aluminium-containing derivative of titanocene that arises from the reaction of titanocene dichloride with trimethylaluminium. It is used for "olefination" reactions. Arenes, such as hexamethylbenzene, react to give the piano-stool complexes [Ti(C6R6)Cl3]+ (R = H, CH3; see figure above). This reaction illustrates the high Lewis acidity of the TiCl3+ entity, which is generated by abstraction of chloride from TiCl4 by AlCl3. Reagent in organic synthesis TiCl4 finds occasional use in organic synthesis, capitalizing on its Lewis acidity, its oxophilicity, and the electron-transfer properties of its reduced titanium halides. It is used in the Lewis acid catalysed aldol addition. Key to this application is the tendency of TiCl4 to activate aldehydes (RCHO) by formation of adducts such as (RCHO)TiCl4. Toxicity and safety considerations Hazards posed by titanium tetrachloride generally arise from its reaction with water, which releases hydrochloric acid, which is severely corrosive itself and whose vapors are also extremely irritating. TiCl4 is a strong Lewis acid, which exothermically forms adducts with even weak bases such as THF and water. References General reading External links Titanium tetrachloride: Health Hazard Information NIST Standard Reference Database ChemSub Online: Titanium tetrachloride Titanium(IV) compounds Chlorides Titanium halides Reagents for organic chemistry
Titanium tetrachloride
Chemistry
1,523
63,570,283
https://en.wikipedia.org/wiki/NGC%20656
NGC 656 is a barred lenticular galaxy located in the Pisces constellation about 175 million light-years from the Milky Way. It was discovered by the Prussian astronomer Heinrich d'Arrest in 1865. See also List of NGC objects (1–1000) Heinrich d'Arrest References External links Barred lenticular galaxies 0656 Pisces (constellation) 006293
NGC 656
Astronomy
77
679,297
https://en.wikipedia.org/wiki/Specular%20reflection
Specular reflection, or regular reflection, is the mirror-like reflection of waves, such as light, from a surface. The law of reflection states that a reflected ray of light emerges from the reflecting surface at the same angle to the surface normal as the incident ray, but on the opposing side of the surface normal in the plane formed by the incident and reflected rays. This behavior was first described by Hero of Alexandria (AD c. 10–70). Later, Alhazen gave a complete statement of the law of reflection. He was the first to state that the incident ray, the reflected ray, and the normal to the surface all lie in the same plane, perpendicular to the reflecting surface. Specular reflection may be contrasted with diffuse reflection, in which light is scattered away from the surface in a range of directions. Law of reflection When light encounters a boundary of a material, it is affected by the optical and electronic response functions of the material to electromagnetic waves. Optical processes, which comprise reflection and refraction, are expressed by the difference of the refractive index on both sides of the boundary, whereas reflectance and absorption are the real and imaginary parts of the response due to the electronic structure of the material. The degree of participation of each of these processes in the transmission is a function of the frequency, or wavelength, of the light, its polarization, and its angle of incidence. In general, reflection increases with increasing angle of incidence, and with increasing absorptivity at the boundary. The Fresnel equations describe the physics at the optical boundary. Reflection may occur as specular, or mirror-like, reflection and diffuse reflection. Specular reflection reflects all light which arrives from a given direction at the same angle, whereas diffuse reflection reflects light in a broad range of directions. The distinction may be illustrated with surfaces coated with glossy paint and matte paint. Matte paints exhibit essentially complete diffuse reflection, while glossy paints show a larger component of specular behavior. A surface built from a non-absorbing powder, such as plaster, can be a nearly perfect diffuser, whereas polished metallic objects can specularly reflect light very efficiently. The reflecting material of mirrors is usually aluminum or silver. Light propagates in space as a wave front of electromagnetic fields. A ray of light is characterized by the direction normal to the wave front (wave normal). When a ray encounters a surface, the angle that the wave normal makes with respect to the surface normal is called the angle of incidence and the plane defined by both directions is the plane of incidence. Reflection of the incident ray also occurs in the plane of incidence. The law of reflection states that the angle of reflection of a ray equals the angle of incidence, and that the incident direction, the surface normal, and the reflected direction are coplanar. When the light is incident perpendicularly to the surface, it is reflected straight back in the source direction. The phenomenon of reflection arises from the diffraction of a plane wave on a flat boundary. When the boundary size is much larger than the wavelength, then the electromagnetic fields at the boundary are oscillating exactly in phase only for the specular direction. Vector formulation The law of reflection can also be equivalently expressed using linear algebra. The direction of a reflected ray is determined by the vector of incidence and the surface normal vector.
Given an incident direction d_i from the light source to the surface and the surface normal direction n, the specularly reflected direction d_r (all unit vectors) is: d_r = d_i − 2(d_i · n)n, where d_i · n is a scalar obtained with the dot product. Different authors may define the incident and reflection directions with different signs. Assuming these Euclidean vectors are represented in column form, the equation can be equivalently expressed as a matrix-vector multiplication: d_r = R d_i, where R is the so-called Householder transformation matrix, defined as: R = I − 2nnᵀ, in terms of the identity matrix I and twice the outer product of n. Reflectivity Reflectivity is the ratio of the power of the reflected wave to that of the incident wave. It is a function of the wavelength of radiation, and is related to the refractive index of the material as expressed by Fresnel's equations. In regions of the electromagnetic spectrum in which absorption by the material is significant, it is related to the electronic absorption spectrum through the imaginary component of the complex refractive index. The electronic absorption spectrum of an opaque material, which is difficult or impossible to measure directly, may therefore be indirectly determined from the reflection spectrum by a Kramers-Kronig transform. The polarization of the reflected light depends on the symmetry of the arrangement of the incident probing light with respect to the absorbing transitions' dipole moments in the material. Measurement of specular reflection is performed with normal or varying incidence reflection spectrophotometers (reflectometers) using a scanning variable-wavelength light source. Lower quality measurements using a glossmeter quantify the glossy appearance of a surface in gloss units. Consequences Internal reflection When light is propagating in a material and strikes an interface with a material of lower index of refraction, some of the light is reflected. If the angle of incidence is greater than the critical angle, total internal reflection occurs: all of the light is reflected. The critical angle can be shown to be given by θ_c = arcsin(n2/n1), where n1 is the refractive index of the medium carrying the light and n2 that of the less dense medium. Polarization When light strikes an interface between two materials, the reflected light is generally partially polarized. However, if the light strikes the interface at Brewster's angle, the reflected light is completely linearly polarized parallel to the interface. Brewster's angle is given by θ_B = arctan(n2/n1). Reflected images The image in a flat mirror has these features: It is the same distance behind the mirror as the object is in front. It is the same size as the object. It is the right way up (erect). It is reversed. It is virtual, meaning that the image appears to be behind the mirror, and cannot be projected onto a screen. The reversal of images by a plane mirror is perceived differently depending on the circumstances. In many cases, the image in a mirror appears to be reversed from left to right. If a flat mirror is mounted on the ceiling it can appear to reverse up and down if a person stands under it and looks up at it. Similarly a car turning left will still appear to be turning left in the rear view mirror for the driver of a car in front of it. The reversal of directions, or lack thereof, depends on how the directions are defined. More specifically, a mirror changes the handedness of the coordinate system, one axis of the coordinate system appears to be reversed, and the chirality of the image may change. For example, the image of a right shoe will look like a left shoe.
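A minimal numerical check of the vector and Householder forms given above; the function name and the test vectors are illustrative choices, not part of the source:

```python
import numpy as np

def reflect(d_i, n):
    """Reflect incident direction d_i about the unit surface normal n.

    Implements d_r = d_i - 2 (d_i . n) n, the vector form of the
    law of reflection; both inputs are assumed to be unit vectors.
    """
    return d_i - 2.0 * np.dot(d_i, n) * n

# A ray travelling downward at 45 degrees onto a horizontal surface
# whose normal points along +z:
d_i = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)
n = np.array([0.0, 0.0, 1.0])
print(reflect(d_i, n))  # [0.707..., 0, 0.707...]: same angle, other side

# Equivalent Householder form: d_r = (I - 2 n n^T) d_i
R = np.eye(3) - 2.0 * np.outer(n, n)
print(R @ d_i)          # identical result
```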
Examples A classic example of specular reflection is a mirror, which is specifically designed for specular reflection. In addition to visible light, specular reflection can be observed in the ionospheric reflection of radio waves and the reflection of radio- or microwave radar signals by flying objects. The measurement technique of x-ray reflectivity exploits specular reflectivity to study thin films and interfaces with sub-nanometer resolution, using either modern laboratory sources or synchrotron x-rays. Non-electromagnetic waves can also exhibit specular reflection, as in acoustic mirrors which reflect sound, and atomic mirrors, which reflect neutral atoms. For the efficient reflection of atoms from a solid-state mirror, very cold atoms and/or grazing incidence are used in order to provide significant quantum reflection; ridged mirrors are used to enhance the specular reflection of atoms. Neutron reflectometry uses specular reflection to study material surfaces and thin film interfaces in an analogous fashion to x-ray reflectivity. See also Geometric optics Hamiltonian optics Reflection coefficient Reflection (mathematics) Specular highlight Specularity Notes References Optical phenomena
Specular reflection
Physics
1,608
63,067,469
https://en.wikipedia.org/wiki/Lactarius%20albolutescens
Lactarius albolutescens is a member of the large genus Lactarius (order Russulales), known as milk-caps. Found in North America, the species was first described in 1957 by American mycologist Harry D. Thiers. See also List of Lactarius species References albolutescens Fungi of North America Fungi described in 1957 Fungus species
Lactarius albolutescens
Biology
80
66,313,605
https://en.wikipedia.org/wiki/Diiodine%20oxide
Diiodine oxide, also known as iodo hypoiodite, is an oxide of iodine that is equivalent to an acid anhydride of hypoiodous acid. This substance is unstable and very difficult to isolate. Preparation Diiodine oxide can be prepared by reacting iodine with potassium iodate (KIO3) in 96% sulfuric acid and then extracting it into chlorinated solvents. Reactions Diiodine oxide reacts with water to form hypoiodous acid: I2O + H2O → 2 HOI. References Iodine compounds Oxides
Diiodine oxide
Chemistry
116
53,897,387
https://en.wikipedia.org/wiki/Natrinema%20longum
Haloterrigena longa is a species of archaea in the family Natrialbaceae. It was isolated from Aibi Salt Lake in Xinjiang, China in 2006. References External links Type strain of Haloterrigena longa at BacDive - the Bacterial Diversity Metadatabase Halobacteria Archaea described in 2006
Natrinema longum
Biology
70
1,110,450
https://en.wikipedia.org/wiki/N-Acetylmuramic%20acid
N-Acetylmuramic acid (NAM or MurNAc) is an organic compound with the chemical formula C11H19NO8. It is a monomer of peptidoglycan in most bacterial cell walls, which is built from alternating units of N-acetylglucosamine (GlcNAc) and N-acetylmuramic acid, cross-linked by oligopeptides at the lactic acid residue of MurNAc. Formation of NAM NAM is an addition product of phosphoenolpyruvate and N-acetylglucosamine. This addition happens exclusively in the cell cytoplasm. Clinical significance N-Acetylmuramic acid (MurNAc) is part of the peptidoglycan polymer of bacterial cell walls. MurNAc is covalently linked to N-acetylglucosamine and may also be linked through the hydroxyl on carbon number 4 to the carbon of L-alanine. A pentapeptide composed of L-alanyl-D-isoglutaminyl-L-lysyl-D-alanyl-D-alanine is added to the MurNAc in the process of making the peptidoglycan strands of the cell wall. Synthesis of NAM is inhibited by fosfomycin. NAG and NAM cross-linking can be inhibited by antibiotics to prevent pathogens from growing within the body. Therefore, both NAG and NAM are valuable polymers in medicinal research. References See also Amino sugar Glucosamine Amino sugars Monosaccharide derivatives Monosaccharides Membrane biology
N-Acetylmuramic acid
Chemistry
341
30,678,006
https://en.wikipedia.org/wiki/G%C3%A9rard%20Vergnaud
Gérard Vergnaud (8 February 1933 – 6 June 2021) was a French mathematician, philosopher, educator, and psychologist. He earned his doctorate from the International Center for Genetic Epistemology in Geneva under the supervision of Jean Piaget. Vergnaud was a professor emeritus of the Centre national de la recherche scientifique in Paris, where he was a researcher in mathematics. Among his most significant work has been the development of the Theory of Conceptual Fields, which describes how children develop an understanding of mathematics. Gérard Vergnaud graduated from HEC Paris in 1956 and from the University of Geneva in 1968. References External links Official website Gérard Vergnaud – Association pour la Recherche en Didactique des Mathématiques Ciencia al Día – "Horror a las matemáticas" (accessed 29 January 2011). 1933 births 2021 deaths HEC Paris alumni 20th-century French psychologists French mathematicians 20th-century French philosophers French male non-fiction writers Philosophers of mathematics People from Maine-et-Loire
Gérard Vergnaud
Mathematics
208
4,602,149
https://en.wikipedia.org/wiki/Loudspeaker%20measurement
Loudspeaker measurement is the practice of determining the behaviour of loudspeakers by measuring various aspects of performance. This measurement is especially important because loudspeakers, being transducers, have a higher level of distortion than other audio system components used in playback or sound reinforcement. Anechoic measurement One way to test a loudspeaker requires an anechoic chamber, with an acoustically transparent floor-grid. The measuring microphone is normally mounted on an unobtrusive boom (to avoid reflections) and positioned 1 metre in front of the drive units on the axis with the high-frequency driver. While this can produce repeatable results, such a 'free-space' measurement is not representative of performance in a room, especially a small room. For valid results at low frequencies, a very large anechoic chamber is needed, with large absorbent wedges on all sides. Most anechoic chambers are not designed for accurate measurement down to 20 Hz and most are not capable of measuring below 80 Hz. Tetrahedral chamber A tetrahedral chamber is capable of measuring the low frequency limit of the driver without the large footprint required by an anechoic chamber. This compact measurement system for loudspeaker drivers is defined in IEC 60268-21:2018, IEC 60268-22:2020 and AES73id-2019. Half-space measurement An alternative is to simply lay the speaker on its back pointing at the sky on open grass. Ground reflection will still interfere but will be greatly reduced in the mid-range because most speakers are directional, and only radiate very low frequencies backward. Putting absorbent material around the speaker will reduce mid-range ripple by absorbing rear radiation. At low frequencies, the ground reflection is always in-phase, so that the measured response will have increased bass, but this is what generally happens in a room anyway, where the rear wall and the floor both provide a similar effect. There is a good case, therefore, for using such half-space measurements, and aiming for a flat half-space response. Speakers that are equalised to give a flat free-space response will always sound very bass-heavy indoors, which is why monitor speakers tend to incorporate half-space, and quarter-space (for corner use) settings which bring in attenuation below about 400 Hz. Digging a hole and burying the speaker flush with the ground allows far more accurate half-space measurement, creating the loudspeaker equivalent of the boundary effect microphone (all reflections precisely in-phase) but any rear port must remain unblocked, and any rear-mounted amplifier must be allowed cooling air. Diffraction from the edges of the enclosure is reduced, creating a repeatable and accurate, but not very representative, response curve. Room measurements At low frequencies, most rooms have resonances at a series of frequencies where a room dimension corresponds to a multiple of half wavelengths. Sound travels at about 1,100 feet (335 m) per second, so a room 20 feet (6.1 m) long will have resonances from 27.5 Hz upwards. These resonant modes cause large peaks and dips in the sound level of a constant signal as the frequency of that signal varies from low to high. Additionally, reflections, dispersion, absorption, etc. all strongly alter the perceived sound, though this is not necessarily consciously noticeable for either music or speech, at frequencies above those dominated by room modes.
These alterations depend on speaker locations with respect to reflecting, dispersing, or absorbing surfaces (including changes in speaker orientation) and on the listening position. In unfortunate situations, a slight movement of any of these, or of the listener, can cause considerable differences. Complex effects, such as stereo (or multiple channel) aural integration into a unified perceived "sound stage" can be lost easily. There is limited understanding of how the ear and brain process sound to produce such perceptions, and so no measurement, or combination of measurements, can assure successful perceptions of, for instance, the "sound stage" effect. Thus, there is no assured procedure that will maximize speaker performance in any listening space (with the exception of the sonically unpleasant anechoic chamber). Some parameters, such as reverberation time (in any case, really applicable only to larger volumes), and overall room "frequency response" can be somewhat adjusted by addition or subtraction of reflecting, diffusing, or absorbing elements, but, though this can be remarkably effective (with the right additions or subtractions and placements), it remains something of an art and a matter of experience. In some cases, no such combination of modifications has been found to be very successful. Microphone positioning All multi-driver speakers (unless they are coaxial) are difficult to measure correctly if the measuring microphone is placed close to the loudspeaker and slightly above or below the optimum axis because the different path length from two drivers producing the same frequency leads to phase cancellation. It is useful to remember that, as a rule of thumb, 1 kHz has a wavelength of about 34 cm in air, and 10 kHz a wavelength of only about 3.4 cm. Published results are often only valid for very precise positioning of the microphone to within a centimetre or two. Measurements made at 2 or 3 m, in the actual listening position between two speakers, can reveal something of what is actually going on in a listening room. Horrendous though the resulting curve generally appears to be (in comparison to other equipment), it provides a basis for experimentation with absorbent panels. Driving both speakers is recommended, as this stimulates low-frequency room 'modes' in a representative fashion. This means the microphone must be positioned precisely equidistant from the two speakers if 'comb-filter' effects (alternate peaks and dips in the measured room response at that point) are to be avoided. Positioning is best done by moving the mic from side to side for maximum response on a 1 kHz tone, then a 3 kHz tone, then a 10 kHz tone. While the very best modern speakers can produce a frequency response flat to ±1 dB from 40 Hz to 20 kHz in anechoic conditions, measurements at 2 m in a real listening room are generally considered good if they are within ±12 dB. Nearfield measurements Room acoustics have a much smaller effect on nearfield measurements, so these can be appropriate when anechoic chamber analysis cannot be done. Measurements should be done at much shorter distances from the speaker than the speaker (or the sound source, like horn, vent) overall diameter, where the half-wavelength of the sound is smaller than the speaker overall diameter. These measurements yield direct speaker efficiency, or the average sensitivity, without directional information.
For a multiple sound source speaker system, the measurement should be carried out for all sound sources (woofer, bass-reflex vent, midrange speaker, tweeter...). These measurements are easy to carry out, can be done in almost any room, are more precise than in-box measurements, and predict half-space measurements, though without directivity information. Frequency response measurement Frequency response measurements are only meaningful if shown as a graph, or specified in terms of ±3 dB limits (or other limits). A weakness of most quoted figures is a failure to state the maximum SPL available, especially at low frequencies. A power bandwidth measurement is, therefore, most useful, in addition to frequency response, this being a plot of maximum SPL out for a given distortion figure across the audible frequency range. Distortion measurement Distortion measurements on loudspeakers can only go as low as the distortion of the measurement microphone itself at the level tested. The microphone should ideally have a clipping level of 120 to 140 dB SPL if high-level distortion is to be measured. A typical top-end speaker, driven by a typical 100 watt power amplifier, cannot produce peak levels much above 105 dB SPL at 1 m (which translates roughly to 105 dB at the listening position from a pair of speakers in a typical listening room). Achieving truly realistic reproduction requires speakers capable of much higher levels than this, ideally around 130 dB SPL. Even though the level of live music measured on a (slow responding and RMS reading) sound level meter might be in the region of 100 dB SPL, programme level peaks on percussion will far exceed this. Most speakers give around 3% distortion measured as 468-weighted 'distortion residue', reducing slightly at low levels. Electrostatic speakers can have lower harmonic distortion but suffer higher intermodulation distortion. 3% distortion residue corresponds to 1 or 2% total harmonic distortion. Professional monitors may maintain modest distortion up to around 110 dB SPL at 1 m, but almost all domestic speaker systems distort badly above 100 dB SPL. Colouration analysis Loudspeakers differ from most other items of audio equipment in suffering from colouration, the tendency of various parts of the speaker — the cone, its surround, the cabinet, the enclosed space — to carry on moving when the signal ceases. All forms of resonance cause this, by storing energy, and resonances with high Q factor are especially audible. Much of the work that has gone into improving speakers in recent years has been about reducing colouration, and Fast Fourier Transform, or FFT, measuring equipment was introduced in order to measure the delayed output from speakers and display it as a time vs. frequency waterfall plot or spectrogram plot. Initially, an analysis was performed using impulse response testing, but this 'spike' suffers from having very low energy content if the stimulus is to remain within the peak ability of the speaker. Later equipment uses correlation on other stimuli, such as a maximum length sequence system analyser (MLSSA). Using multiple sine wave tones as a stimulus signal and analyzing the resultant output, Spectral Contamination testing provides a measure of a loudspeaker's 'self-noise' distortion component. This 'picket fence' type of signal can be optimized for any frequency range, and the results correlate exceptionally well with sound quality listening tests.
See also Audio power Audio noise measurement Audio quality measurement Bandwidth extension Directional sound Isobaric loudspeaker Loudspeaker acoustics Parabolic loudspeaker Speaker driver Spherical coordinate system Studio monitor References External links MLSSA acoustical measurement system Praxis loudspeaker measurement system CONEQ loudspeaker measurement and correction system TTC Tetrahedral Test Chambers Audio & Loudspeaker Technologies International (ALTI) Loudspeaker technology Sound measurements
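To make the room-mode arithmetic in the 'Room measurements' section above concrete, here is a short sketch; the room dimensions are assumed for illustration, and only axial modes are listed (tangential and oblique modes are ignored):

```python
C_SOUND = 343.0  # speed of sound in air, m/s (about 1,100 ft/s)

def axial_modes(length_m, count=4):
    """First `count` axial room-mode frequencies for one room dimension.

    A dimension L resonates where it spans whole multiples of a
    half wavelength: f_n = n * c / (2 * L).
    """
    return [n * C_SOUND / (2.0 * length_m) for n in range(1, count + 1)]

# An assumed 6.1 m x 4.1 m x 2.7 m room (6.1 m is roughly the
# 20-foot length used in the text, giving the 27.5 Hz lowest mode):
for name, dim in [("length", 6.1), ("width", 4.1), ("height", 2.7)]:
    print(name, [round(f, 1) for f in axial_modes(dim)])
```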
Loudspeaker measurement
Physics,Mathematics
2,139
38,398,925
https://en.wikipedia.org/wiki/Animal%20Coloration%20%28book%29
Animal Coloration, or in full Animal Coloration: An Account of the Principal Facts and Theories Relating to the Colours and Markings of Animals, is a book by the English zoologist Frank Evers Beddard, published by Swan Sonnenschein in 1892. It formed part of the ongoing debate amongst zoologists about the relevance of Charles Darwin's theory of natural selection to the observed appearance, structure, and behaviour of animals, and vice versa. Beddard states in the book that it contains little that is new, intending instead to give a clear overview of the subject. The main topics covered are camouflage, then called 'protective coloration'; mimicry; and sexual selection. Arguments for and against these aspects of animal coloration are intensively discussed in the book. The book was reviewed in 1892 by the major journals including The Auk, Nature, and Science. The scientist reviewers Joel Asaph Allen, Edward Bagnall Poulton and Robert Wilson Shufeldt took up different positions on the book and accordingly praised or criticized Beddard's work. Modern evaluation of the book is from a variety of perspectives, including the history of Darwinism, the history of the Thayer debate on the purpose of camouflage, the mechanisms of camouflage, sexual selection, and mimicry. Beddard is seen as having covered a wide swath of modern biology with both theory and experiment. Context Beddard (1858–1925) was an English zoologist specializing in Annelid worms, but writing much more widely on topics including mammals and zoogeography. He also contributed articles on earthworms, leeches and nematode worms to the 1911 Encyclopædia Britannica. His decision to write an accessible book on animal coloration falls into this pattern. Beddard wrote Animal Coloration at a time when scientists' confidence in Charles Darwin's theory of evolution by natural selection was at a low ebb. Beddard's book was part of an ongoing debate among zoologists about how far natural selection affected animals, and how far other forces – such as the direct action of light – might be the causes of observed features such as the colours of animals. Edward Bagnall Poulton's far more strongly pro-Darwinian book The Colours of Animals had appeared just two years earlier in 1890. Approach Beddard explains in his preface that the book grew from his 1890 Davis Lectures given for the public at London Zoo. The book "contains hardly anything novel, but professes to give some account of the principal phenomena of coloration exhibited by animals." He also notes that since Poulton's recent book "deal[s] with colour almost entirely from the point of view of natural selection, I have attempted to lay some stress upon other aspects of the question." Similarly, because Poulton treated insects in some detail, Beddard chooses to give more attention to other groups, though "it is impossible not to devote a good deal of space to insects". The examples are mainly from Beddard's own observation of "animals that may be usually seen in the Zoological Society's Gardens", though he also introduces and quotes the work of other scientists, including Henry Walter Bates and Alfred Russel Wallace. Illustrations The book has four colour plates by Peter Smit, who both drew and prepared the chromolithographic plates. Plate 1 is stated in the List of Illustrations "To face page 108", but as bound in the first edition it is used as a Frontispiece, facing the title page. 
There are also 36 woodcuts (in black and white) in the text, though one of these, "Eolis and Dendronotus", is intentionally repeated as figures 10 and 19 to accompany the text in two places. The woodcuts vary from small line drawings on a simple white background (as in the diagrammatic figure 28 of Psyche helix, and figure 34 of the winter moth) to page-width illustrations like figure 2, which shows ermines in winter pelage, in a realistic depiction with a detailed snowy scene in the background. The woodcuts are certainly by a number of different artists; many are unsigned, but figures 5 and 26 are signed "E.A. Brockhaus X.A" lower right (X=cut, A=Artist), while figure 29 is signed "GM" lower left, and figures 35 and 36 are signed "ES" lower left. Figure 2 bears a monogram "FR", lower left, and figure 7, of the penguin Aptenodytes patagonica, is stated to be "from Brehm" (Brehms Tierleben). Structure Animal Coloration has a simple structure of six chapters in its 288 pages. 1. Introductory Beddard distinguishes colour, when an animal has just one, from coloration, when there is some kind of pattern of two or more colours. He discusses the mechanisms of colour production, both structural coloration and pigments, and the reasons for coloration, including the red of haemoglobin used to carry oxygen. Non-adaptive coloration is considered, and a section argues that "the action of natural selection in producing colour changes must be strictly limited". 2. Coloration affected by the environment In this chapter Beddard continues to explore the possible direct effect of the environment, i.e. with "no possible relation to natural selection". The effects of different foods, temperature and humidity are discussed. Beddard argues against Poulton's view that natural selection has removed the pigment from cave-dwelling animals, agreeing rather with Wallace that pigment is produced as a by-product. Beddard grants that the change to white of arctic animals in winter looks like natural selection, rather than a direct effect of the environment, but argues that some animals do not change, including the musk ox, which he describes as "comparatively defenceless". 3. Protective coloration "Protection" is a shorthand in Beddard's vocabulary for camouflage necessitated by natural selection, whether of prey for defence against predators hunting by sight, or of predators concealing themselves for attack on watchful prey. He mentions that Wallace includes the green of tree-frequenting animals and the tawny of desert animals under "General Protective Resemblance", and mentions his own experiments which agree with Poulton's observation that lizards "do pass over and leave unnoticed protectively coloured caterpillars". However, Beddard continually tests the validity of this explanation: He observes that "Every naturalist traveller appears to have some instance to relate of how he was taken in by a protectively-coloured insect. These stories are told with a curiously exaggerated delight at the deception...", giving as example how Professor Drummond in his book Tropical Africa thought a mantid was a wisp of hay. He picks up on the casually mentioned fact that Drummond's African companion was not deceived, writing that we should not judge camouflage "from the human standpoint".
On the other hand, Beddard writes that people who had only seen the giraffe, zebra, and jaguar in the zoo would think them "among the most conspicuously coloured of the Mammalia", but that seen "in their native countries" they are "most difficult to detect". The chapter ends with a discussion of animals that can change colour, including fish like the sole, the chameleon, the horned lizards and the tree frogs including the European species Hyla arborea. He cites Poulton's suggestion that the tree frog's camouflage may be both defensive (protecting from predators) and aggressive (facilitating the hunting of insects). 4. Warning coloration In this chapter Beddard discusses the warning coloration (aposematism) of animals, which he notes "have a precisely opposite tendency" to camouflage, "viz., to render their possessor conspicuous". He at once says that the explanation was "first devised by Mr. Wallace" for insects. The chapter therefore begins with the insects, often using English species as examples. He examines critically whether eye-like markings and other warnings actually work. He discusses experiments by Poulton on the elephant hawk-moth, where a sand lizard is only briefly startled, and his own at the London zoo using a range of predators and different insects. Beddard is only partially convinced, flirting with Dr. Eisig's theory that the pigments creating the colours of caterpillars are inherently distasteful, and hence that "the brilliant colours (i.e. the abundant secretion of pigment) have caused the inedibility of the species, rather than that the inedibility has necessitated the production of bright colours as an advertisement." So Beddard suggests that "the advent of bird-life proved a disastrous event for these animals, and compelled them to undergo various modifications", except when they were already by luck warning coloured and distasteful. 5. Protective mimicry This chapter discusses Batesian mimicry, also mentioning observations and opinions of Fritz Müller and Wallace. Beddard grants that Bates's theory is very strongly supported by the observations that Bates made in South America, especially on butterflies, though again he tests the evolutionary explanation in different cases. He cites Wallace's rules of mimicry, such as that the imitators are always the more defenceless, and always less numerous, than their models, as covering all the examples he has given. However, he then states various objections, including that "the Danaidae, themselves an uneatable race of butterflies and models for mimicry, resemble in South America the uneatable Heliconiidae". He points out that this does not meet any of Wallace's rules so it is "not a case of true mimicry", but is "supposed rather to be like that which is seen between various other unpalatable animals". Müllerian mimicry is not mentioned explicitly in the book, though Beddard does write that this example "tends to the advantage of the insects, for their enemies have to learn fewer colours and patterns, and thus are less likely to make mistakes, than if the lesson to be learnt were an excessively complicated one." By the end, Beddard concludes that "Nevertheless, cases of mimicry that do occur—particularly among Lepidoptera—are often so striking that no other explanation ... seems to account for the finishing touches, at least, of the resemblance". He remains sceptical of cases "which are to be appreciated only by insects", as he considers that insects might not have good enough vision for mimicry to work. 6. 
Sexual coloration The final chapter begins with examples of sexual dimorphism, such as "the antlers of the stag, the spurs of the cock... and the gorgeous plumes found in the males of the birds of paradise", with other examples chosen from across the animal kingdom. Darwin's theory of sexual selection is explained; Beddard then states the objection that female birds must be supposed to have "a highly-developed aesthetic sense" to choose between similar-looking males, and worse, that females of closely related species must have "immense[ly]" different tastes. He concludes, though, that the question cannot be answered by what we consider improbable, but requires "actual observation". He calls Poulton's arguments for sexual selection "very ingenious", but writes that Wallace's two different (non-selective) explanations "might both be accepted". He concludes that "it is quite possible that sexual selection may have played a subordinate part" in producing sexually dimorphic coloration. Reception Contemporary The Auk The American zoologist and ornithologist Joel Asaph Allen reviewed Animal Coloration in The Auk in 1893. Allen notes Beddard's remark that the book contains hardly anything novel, so that it is mainly a review of previous theories, but welcomes it as a review of the state of knowledge together with Beddard's critical commentary. Allen notes that Beddard could have gone further in criticising Weismann and Poulton on colour changes, but is "glad to see [that Beddard] is willing to grant that the influence of an animal's surroundings may exercise a direct influence upon its coloration without the intervention of the agency of 'natural selection.'" Allen praises Beddard's "commendable conservatism" in his discussion of camouflage, which he compares to the "credulous spirit" of other authors. Reviewing the chapter on warning coloration, Allen remarks that the great horned owl is known to prey on the skunk, showing that even such a disagreeably pungent animal can be subject to predation. On mimicry, Allen is critical of Bates's theory, arguing that edible mimics (such as flies) are often not protected by resembling distasteful models (such as wasps). Allen notes that Beddard deals with many special cases "as of .. spiders mimicking ants, etc." and finds the arguments against any selective advantage from Batesian mimicry, and so against natural selection, somewhat conclusive. Finally, reviewing the chapter on sexual selection, Allen (knowing that Wallace largely rejected sexual selection) makes some remarks praising Beddard for the "fine vein of irony" that he uses. Nature The zoologist Edward Bagnall Poulton, whose work is referred to throughout Beddard's book, reviewed Animal Coloration in Nature in 1892. Poulton is critical of Beddard and other authors, defending Darwin's theory of natural selection as "the most generally accepted explanation of organic evolution" and insisting that in "case after case" the Darwinian explanation turns out to be correct. Science The white supremacist scientist Robert Wilson Shufeldt reviewed Animal Coloration in Science in 1892, praising it as a concise and useful summary of the subject. He admires Macmillan Publishers' handling of the book with its attractive wood-cuts and coloured lithographic plates. He is pleased to find many Americans in the index. He quotes Beddard's distinction between colour and coloration. He considers that the book brings readers fully up to date and even adds a few new ideas.
He recommends the book to all working American naturalists. Popular Science Monthly The anonymous reviewer in Popular Science Monthly in December 1892 writes that Beddard has "made a book interesting to both the zoologist and the general reader." On protective coloration, "he raises the question whether as a matter of fact animals are concealed from their foes by their protective resemblances, and shows that there is much evidence on the negative side", and further that such colours are sometimes produced "more simply and directly than by the operation of natural selection." On warning colours, the reviewer notes that Beddard gives "much weight" to Eisig's theory that "the usual bright pigments" in caterpillars (accidentally) cause inedibility, "instead of being produced to advertise it" and that Beddard cautions against assuming that "the sight or taste of animals were the same as that of man". Modern Beddard's Animal Coloration is cited and discussed both by historians of science, and by practising scientists from a number of different fields. For example, the book illuminates the progress of Darwinism, camouflage research, sexual selection, mimicry and the debate on the purpose of animal coloration triggered by Abbott Thayer. These areas are described in turn below. Darwinism The historian Robinson M. Yost explains that Darwinism went into eclipse during the 1890s. At that time, most zoologists felt that natural selection could not be the main cause of biological adaptation, and sought alternative explanations. As a result, many zoologists rejected both Batesian mimicry and Müllerian mimicry. Beddard, writes Yost, explained some problems in the theory of mimicry including that, given how many insect species there are, resemblances between species could arise by chance, and that mimicry was sometimes either useless or actually harmful. In Yost's view, Beddard wanted more evidence that natural selection really was responsible. Yost cites the staunch Darwinist Poulton's hostile review of 1892, which asserts the pre-eminence of Darwin's theory. But, writes Yost, Beddard was not alone in being wary of natural selection. Camouflage The zoologist Martin Stevens and colleagues, in 2006, write that "almost all early discussions of camouflage were of the background-matching type", citing Wallace, Poulton, and Beddard, "until the pioneering work of Thayer (1909) and Cott (1940)", which added disruptive coloration. Cott however both makes use of Beddard as an authority (for the fact that the Hudson's Bay lemming turns white in winter whereas the Scandinavian lemming does not, and for his experiments on the effectiveness of prey coloration on predators) and is critical of him for the "extreme and illogical" opinion held by Beddard and other authors that keeping perfectly still is vital to camouflage. Cott pointed out on that subject that a cryptic colour scheme makes an animal harder to track and to recognize, even while it is moving. Sexual selection The ornithologist Geoffrey Edward Hill, writing in 2002, notes that both Poulton and Beddard discuss sexual selection, and both agreed that "sexual selection by female choice is a likely explanation for the bright coloration of at least some species of birds". In contrast, Hill observes, Cott's detailed 1940 book does not mention it at all; like other zoologists including Wallace and Huxley, Cott preferred explanations "firmly rooted in natural selection". 
Mimicry The American evolutionary zoologists Jane Van Zandt Brower and Lincoln Pierson Brower followed up the experiments described in the book (pp. 153–159). Beddard, they write, observed the results of feeding the drone fly Eristalis tenax, a harmless but intimidating Batesian mimic of honeybees, to various predators. A chameleon, a green lizard, and a sand skink eagerly consumed the flies, whereas a thrush and a great spotted woodpecker did not. However, they — like Cott before them, they note — were unable to replicate Beddard's claim that toads would eat insects of any kind, including stinging bees and wasps. They describe their own experimental investigations of bees and their drone fly mimics, like Beddard using toads as the predators, concluding that the Batesian mimicry of the honeybee by the drone fly was "highly effective". The Thayer debate The historian of science Sharon Kingsland, in a 1978 paper on Abbott Thayer and the protective coloration debate, uses Beddard repeatedly to illuminate the different strands of the argument. She quotes Beddard (p. 94) on how difficult the question of animal coloration seemed in the 1890s. Thayer — an artist, not a scientist — had dived head-first into the debate. One of the protagonists, notes Kingsland, was Allen, who had reviewed Beddard's book, and who believed that the environment directly influenced animal coloration — Kingsland cites Beddard p. 54 here — so natural selection seemed to him an unlikely factor, and he pointed out that blending inheritance would dilute the effect of selection. Furthermore, argues Kingsland, again citing Beddard (p. 148), another major protagonist, Alfred Russel Wallace, was emphasizing the problem of conspicuous markings, which could be selected for as warning coloration. Wallace went so far as to argue, notes Kingsland, that bright colours in sexual dimorphism "resulted from a surplus of vital energy", citing Beddard p. 263 ff. Thayer, on the other hand, had exactly one explanation for everything: natural selection for protective coloration, in particular camouflage by countershading, which radically departed from earlier explanations such as Allen's environmental influences (colours might be affected by light) or Beddard's suggestion that dolphins might have dark backs and light bellies as camouflage when seen from above and from below (Kingsland cites Beddard, p. 115). References Primary These references indicate where in Beddard's book the quotations come from. Secondary Bibliography Beddard, Frank Evers (1892). Animal Coloration, An Account of the Principal Facts and Theories Relating to the Colours and Markings of Animals. Swan Sonnenschein, London. Cott, Hugh Bamford (1940). Adaptive Coloration in Animals. Methuen, London. Darwin, Charles (1874). The Descent of Man. Heinemann, London. Darwin, Charles (1859). On the Origin of Species. John Murray, London. Reprinted 1985, Penguin Classics, Harmondsworth. Poulton, Sir Edward Bagnall (1890). The Colours of Animals. Kegan Paul, Trench & Trübner, London. Thayer, Abbott Handerson and Thayer, Gerald H. (1909). Concealing-Coloration in the Animal Kingdom. New York. 1892 non-fiction books Camouflage books Mimicry Sexual selection Zoology books Natural history books
Animal Coloration (book)
Biology
4,339
6,699,882
https://en.wikipedia.org/wiki/Troika%20%28ride%29
The Troika is an amusement park ride designed and manufactured by HUSS Park Attractions in the mid-1970s. The name Troika means "group of three" in Russian, a reference to its three-armed design. There are several variations on the design. Design HUSS Park Attractions designed and manufactured the first Troika ride in the mid-1970s. It is named after the Russian word meaning "group of three", a reference to its three-armed design. Description and operation A Troika consists of three arms radiating from a central column. At the end of each arm is a wheel-like assembly (star) holding seven gondolas, each of which seats 2 people side by side. When the ride is activated, the central column rotates clockwise, while the star at the end of each arm rotates counterclockwise. Hydraulic cylinders then raise the arms to an angle of 40°. The gondolas do have some capacity to rock from side to side, but this is minimal. At the end of the ride cycle, the arms are lowered, and the rotation stopped. Because it is a relatively mild thrill ride, it is a good ride for younger children or beginning riders who aren't up to riding more extreme attractions, like Huss's own Enterprise or Top Spin. The main safety restraint is a buzz bar system, which locks into one of five positions. Huss recommends that riders be at least 42 inches tall with an adult or over 50 inches tall to ride alone. Variants and modern adaptations The ride is available in both transportable and permanent forms, although due to the total weight and size of the ride (35 tons, plus an additional 27 tons for the temporary base and platform), transportable Troikas are unpopular and uncommon. Portable versions of the Troika can be disassembled, but due to the size and weight of the machinery and platform, pack onto no fewer than three trailers. AirBoat Huss now makes a modern version of the Troika, called the AirBoat. It is smaller and can only carry 24 people (with 3 pods and 4 cars per pod). It runs at about the same speed. One example that is no longer running was the Gator Bait ride at Six Flags New Orleans; this ride closed due to damage from Hurricane Katrina in 2005. Scorpion The Tivoli Scorpion is based on the same pattern as the Troika, but a much smaller gondola wheel diameter results in greater speed and centripetal force experienced by the riders. The Scorpion is significantly lighter than a Troika, making it more popular for operation at carnivals and fairs. Troika Installations See also Twist (ride) References Amusement rides Amusement rides introduced in 1975 Articles containing video clips
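The compound motion described above is easy to illustrate numerically. The following minimal Python sketch models a gondola's horizontal path as the superposition of the column's rotation and the star's counter-rotation; the radii and angular speeds are assumed placeholder values chosen for illustration, not HUSS specifications.

```python
import numpy as np

# Minimal kinematic sketch of a Troika-style gondola path (illustration only;
# the radii and angular speeds below are assumed values, not HUSS specs).
R_ARM = 6.0    # m, centre column to star hub (assumed)
R_STAR = 2.5   # m, star hub to gondola seat (assumed)
W_ARM = 0.8    # rad/s, column rotation (one sense)
W_STAR = -2.4  # rad/s, star rotation (opposite sense, as described above)

def gondola_xy(t):
    """Horizontal position of one gondola at time t, arms assumed level."""
    hub = R_ARM * np.array([np.cos(W_ARM * t), np.sin(W_ARM * t)])
    seat = R_STAR * np.array([np.cos(W_STAR * t), np.sin(W_STAR * t)])
    return hub + seat  # superposition of the two circular motions

ts = np.linspace(0.0, 60.0, 2000)
radii = np.hypot(*np.array([gondola_xy(t) for t in ts]).T)
print(f"radius range: {radii.min():.2f} to {radii.max():.2f} m")  # ~3.50 to 8.50
```

Because the two rotations run in opposite senses, the seat traces a looping, epitrochoid-like curve between the two printed radii rather than a simple circle.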
Troika (ride)
Physics,Technology
570
948,117
https://en.wikipedia.org/wiki/Nebracetam
Nebracetam is an investigational drug of the racetam family that is an M1 acetylcholine receptor agonist in rats. Based on a human leukemic T cell experiment in 1991, it is believed to act as an agonist for human M1-muscarinic receptors. It is also believed to act as a nootropic, like many other racetam drugs. A chemoenzymatic method of synthesis was reported in 2008. Human trials have not yet been conducted. See also Piracetam Aniracetam Racetam References Amines Experimental drugs M1 receptor agonists Racetams
Nebracetam
Chemistry
130
460,700
https://en.wikipedia.org/wiki/Fractional%20ideal
In mathematics, in particular commutative algebra, the concept of fractional ideal is introduced in the context of integral domains and is particularly fruitful in the study of Dedekind domains. In some sense, fractional ideals of an integral domain are like ideals where denominators are allowed. In contexts where fractional ideals and ordinary ring ideals are both under discussion, the latter are sometimes termed integral ideals for clarity. Definition and basic results Let $R$ be an integral domain, and let $K$ be its field of fractions. A fractional ideal of $R$ is an $R$-submodule $I$ of $K$ such that there exists a non-zero $r \in R$ such that $rI \subseteq R$. The element $r$ can be thought of as clearing out the denominators in $I$, hence the name fractional ideal. The principal fractional ideals are those $R$-submodules of $K$ generated by a single nonzero element of $K$. A fractional ideal $I$ is contained in $R$ if and only if it is an (integral) ideal of $R$. A fractional ideal $I$ is called invertible if there is another fractional ideal $J$ such that $IJ = R$, where $IJ = \{a_1 b_1 + \cdots + a_n b_n : a_i \in I,\ b_i \in J\}$ is the product of the two fractional ideals. In this case, the fractional ideal $J$ is uniquely determined and equal to the generalized ideal quotient $(R : I) = \{x \in K : xI \subseteq R\}.$ The set of invertible fractional ideals forms an abelian group with respect to the above product, where the identity is the unit ideal $(1) = R$ itself. This group is called the group of fractional ideals of $R$. The principal fractional ideals form a subgroup. A (nonzero) fractional ideal is invertible if and only if it is projective as an $R$-module. Geometrically, this means an invertible fractional ideal can be interpreted as a rank 1 vector bundle over the affine scheme $\operatorname{Spec}(R)$. Every finitely generated $R$-submodule of $K$ is a fractional ideal, and if $R$ is noetherian these are all the fractional ideals of $R$. Dedekind domains In Dedekind domains, the situation is much simpler. In particular, every non-zero fractional ideal is invertible. In fact, this property characterizes Dedekind domains: An integral domain is a Dedekind domain if and only if every non-zero fractional ideal is invertible. The set of fractional ideals over a Dedekind domain forms a group under the ideal product. Its quotient by the subgroup of principal fractional ideals is an important invariant of a Dedekind domain called the ideal class group. Number fields For the special case of number fields $K$ (such as $\mathbb{Q}(\zeta_n)$, where $\zeta_n = \exp(2\pi i/n)$) there is an associated ring denoted $\mathcal{O}_K$ called the ring of integers of $K$. For example, $\mathcal{O}_{\mathbb{Q}(\sqrt{d})} = \mathbb{Z}[\sqrt{d}]$ for $d$ square-free and congruent to $2$ or $3$ modulo $4$. The key property of these rings is they are Dedekind domains. Hence the theory of fractional ideals can be described for the rings of integers of number fields. In fact, class field theory is the study of such groups of class rings. Associated structures For the ring of integers $\mathcal{O}_K$ of a number field, the group of fractional ideals forms a group denoted $\mathcal{I}_K$ and the subgroup of principal fractional ideals is denoted $\mathcal{P}_K$. The ideal class group is the group of fractional ideals modulo the principal fractional ideals, so $\mathcal{C}_K = \mathcal{I}_K/\mathcal{P}_K$, and its class number is the order of the group, $h_K = |\mathcal{C}_K|$. In some ways, the class number is a measure of how "far" the ring of integers $\mathcal{O}_K$ is from being a unique factorization domain (UFD). This is because $h_K = 1$ if and only if $\mathcal{O}_K$ is a UFD. Exact sequence for ideal class groups There is an exact sequence $$0 \to \mathcal{O}_K^\times \to K^\times \to \mathcal{I}_K \to \mathcal{C}_K \to 0$$ associated to every number field. Structure theorem for fractional ideals One of the important structure theorems for fractional ideals of a number field states that every fractional ideal $I$ decomposes uniquely up to ordering as $$I = (\mathfrak{p}_1 \cdots \mathfrak{p}_n)(\mathfrak{q}_1 \cdots \mathfrak{q}_m)^{-1}$$ for prime ideals $\mathfrak{p}_i, \mathfrak{q}_j$ in the spectrum of $\mathcal{O}_K$.
For example, the fractional ideal $\tfrac{12}{5}\mathbb{Z}$ factors as $(2)^2(3)(5)^{-1}$. Also, because fractional ideals over a number field are all finitely generated, we can clear denominators by multiplying by some $\alpha$ to get an ideal $J$. Hence $I = \tfrac{1}{\alpha}J$. Another useful structure theorem is that integral fractional ideals are generated by up to 2 elements. We call a fractional ideal which is a subset of $\mathcal{O}_K$ integral. Examples $\tfrac{5}{4}\mathbb{Z}$ is a fractional ideal over $\mathbb{Z}$. For $K = \mathbb{Q}(i)$, the ideal $(5)$ splits in $\mathcal{O}_K = \mathbb{Z}[i]$ as $(2+i)(2-i)$. For $K = \mathbb{Q}(\zeta_3)$ we have the factorization $(3) = (2\zeta_3 + 1)^2$. This is because if we multiply it out, we get $$(2\zeta_3+1)^2 = 4\zeta_3^2 + 4\zeta_3 + 1 = 4(\zeta_3^2 + \zeta_3) + 1 = -3.$$ Since $\zeta_3$ satisfies $\zeta_3^2 + \zeta_3 + 1 = 0$, our factorization makes sense. For $K = \mathbb{Q}$ we can multiply the fractional ideals $\tfrac{3}{2}\mathbb{Z}$ and $\tfrac{2}{3}\mathbb{Z}$ to get the ideal $\mathbb{Z}$. Divisorial ideal Let $\tilde{I}$ denote the intersection of all principal fractional ideals containing a nonzero fractional ideal $I$. Equivalently, $$\tilde{I} = (R : (R : I)),$$ where as above $(R : I) = \{x \in K : xI \subseteq R\}$. If $\tilde{I} = I$ then $I$ is called divisorial. In other words, a divisorial ideal is a nonzero intersection of some nonempty set of fractional principal ideals. If $I$ is divisorial and $J$ is a nonzero fractional ideal, then $(I : J)$ is divisorial. Let $R$ be a local Krull domain (e.g., a Noetherian integrally closed local domain). Then $R$ is a discrete valuation ring if and only if the maximal ideal of $R$ is divisorial. An integral domain that satisfies the ascending chain conditions on divisorial ideals is called a Mori domain. See also Divisorial sheaf Dedekind-Kummer theorem Notes References Ideals (ring theory) Algebraic number theory
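As a worked illustration of the definitions above (an added example, not taken from a source), invertibility and prime factorization can be checked directly in the simplest case $R = \mathbb{Z}$, $K = \mathbb{Q}$:

```latex
% Added worked example (not from the article): over R = Z with K = Q,
% the principal fractional ideal I = (3/2)Z is invertible.
\[
  I = \tfrac{3}{2}\mathbb{Z}, \qquad
  (R : I) = \{\, x \in \mathbb{Q} : x \cdot \tfrac{3}{2}\mathbb{Z} \subseteq \mathbb{Z} \,\}
          = \tfrac{2}{3}\mathbb{Z},
\]
\[
  I \cdot (R : I) = \tfrac{3}{2} \cdot \tfrac{2}{3}\,\mathbb{Z} = \mathbb{Z} = R.
\]
% Its unique decomposition into prime ideals with integer exponents:
\[
  \tfrac{3}{2}\mathbb{Z} = (3)\,(2)^{-1}.
\]
```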
Fractional ideal
Mathematics
1,101
528,867
https://en.wikipedia.org/wiki/Surface%20integral
In mathematics, particularly multivariable calculus, a surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double integral analogue of the line integral. Given a surface, one may integrate over this surface a scalar field (that is, a function of position which returns a scalar as a value), or a vector field (that is, a function which returns a vector as a value). If a region R is not flat, then it is called a surface. Surface integrals have applications in physics, particularly with the theories of classical electromagnetism. Surface integrals of scalar fields Assume that $f$ is a scalar, vector, or tensor field defined on a surface $S$. To find an explicit formula for the surface integral of $f$ over $S$, we need to parameterize $S$ by defining a system of curvilinear coordinates on $S$, like the latitude and longitude on a sphere. Let such a parameterization be $\mathbf{r}(s, t)$, where $(s, t)$ varies in some region $T$ in the plane. Then, the surface integral is given by $$\iint_S f \, \mathrm{d}S = \iint_T f(\mathbf{r}(s,t)) \left\| \frac{\partial \mathbf{r}}{\partial s} \times \frac{\partial \mathbf{r}}{\partial t} \right\| \mathrm{d}s \, \mathrm{d}t,$$ where the expression between bars on the right-hand side is the magnitude of the cross product of the partial derivatives of $\mathbf{r}(s, t)$, and is known as the surface element (which would, for example, yield a smaller value near the poles of a sphere, where the lines of longitude converge more dramatically, and latitudinal coordinates are more compactly spaced). The surface integral can also be expressed in the equivalent form $$\iint_S f \, \mathrm{d}S = \iint_T f(\mathbf{r}(s,t)) \sqrt{g} \, \mathrm{d}s \, \mathrm{d}t,$$ where $g$ is the determinant of the first fundamental form of the surface mapping $\mathbf{r}(s, t)$. For example, if we want to find the surface area of the graph of some scalar function, say $z = f(x, y)$, we have $$A = \iint_S \mathrm{d}S = \iint_T \left\| \frac{\partial \mathbf{r}}{\partial x} \times \frac{\partial \mathbf{r}}{\partial y} \right\| \mathrm{d}x \, \mathrm{d}y,$$ where $\mathbf{r} = (x, y, f(x,y))$. So that $\frac{\partial \mathbf{r}}{\partial x} = (1, 0, f_x(x,y))$, and $\frac{\partial \mathbf{r}}{\partial y} = (0, 1, f_y(x,y))$. So, $$A = \iint_T \left\| (1, 0, f_x) \times (0, 1, f_y) \right\| \mathrm{d}x \, \mathrm{d}y = \iint_T \left\| (-f_x, -f_y, 1) \right\| \mathrm{d}x \, \mathrm{d}y = \iint_T \sqrt{f_x^2 + f_y^2 + 1} \, \mathrm{d}x \, \mathrm{d}y,$$ which is the standard formula for the area of a surface described this way. One can recognize the vector in the second-last line above as the normal vector to the surface. Because of the presence of the cross product, the above formulas only work for surfaces embedded in three-dimensional space. This can be seen as integrating a Riemannian volume form on the parameterized surface, where the metric tensor is given by the first fundamental form of the surface. Surface integrals of vector fields Consider a vector field $\mathbf{v}$ on a surface $S$, that is, for each $\mathbf{r}$ in $S$, $\mathbf{v}(\mathbf{r})$ is a vector. The integral of $\mathbf{v}$ on $S$ was defined in the previous section. Suppose now that it is desired to integrate only the normal component of the vector field over the surface, the result being a scalar, usually called the flux passing through the surface. For example, imagine that we have a fluid flowing through S, such that $\mathbf{v}(\mathbf{r})$ determines the velocity of the fluid at $\mathbf{r}$. The flux is defined as the quantity of fluid flowing through S per unit time. This illustration implies that if the vector field is tangent to S at each point, then the flux is zero because the fluid just flows in parallel to S, and neither in nor out. This also implies that if v does not just flow along S, that is, if v has both a tangential and a normal component, then only the normal component contributes to the flux. Based on this reasoning, to find the flux, we need to take the dot product of v with the unit surface normal n to S at each point, which will give us a scalar field, and integrate the obtained field as above.
In other words, we have to integrate v with respect to the vector surface element $\mathrm{d}\mathbf{S} = \mathbf{n} \, \mathrm{d}S$, which is the vector normal to $S$ at the given point, whose magnitude is $\left|\mathrm{d}\mathbf{S}\right| = \mathrm{d}S.$ We find the formula $$\iint_S \mathbf{v} \cdot \mathrm{d}\mathbf{S} = \iint_S (\mathbf{v} \cdot \mathbf{n}) \, \mathrm{d}S = \iint_T \mathbf{v}(\mathbf{r}(s,t)) \cdot \left( \frac{\partial \mathbf{r}}{\partial s} \times \frac{\partial \mathbf{r}}{\partial t} \right) \mathrm{d}s \, \mathrm{d}t.$$ The cross product on the right-hand side of this expression is a (not necessarily unital) surface normal determined by the parametrisation. This formula defines the integral on the left (note the dot and the vector notation for the surface element). We may also interpret this as a special case of integrating 2-forms, where we identify the vector field with a 1-form, and then integrate its Hodge dual over the surface. This is equivalent to integrating $\langle \mathbf{v}, \mathbf{n} \rangle \, \mathrm{d}S$ over the immersed surface, where $\mathrm{d}S$ is the induced volume form on the surface, obtained by interior multiplication of the Riemannian metric of the ambient space with the outward normal of the surface. Surface integrals of differential 2-forms Let $$f = f_z \, \mathrm{d}x \wedge \mathrm{d}y + f_x \, \mathrm{d}y \wedge \mathrm{d}z + f_y \, \mathrm{d}z \wedge \mathrm{d}x$$ be a differential 2-form defined on a surface S, and let $$\mathbf{r}(s, t) = (x(s,t), y(s,t), z(s,t))$$ be an orientation preserving parametrization of S with $(s, t)$ in D. Changing coordinates from $(x, y)$ to $(s, t)$, the differential forms transform as $$\mathrm{d}x = \frac{\partial x}{\partial s}\mathrm{d}s + \frac{\partial x}{\partial t}\mathrm{d}t, \qquad \mathrm{d}y = \frac{\partial y}{\partial s}\mathrm{d}s + \frac{\partial y}{\partial t}\mathrm{d}t.$$ So $\mathrm{d}x \wedge \mathrm{d}y$ transforms to $\frac{\partial(x,y)}{\partial(s,t)}\,\mathrm{d}s \wedge \mathrm{d}t$, where $\frac{\partial(x,y)}{\partial(s,t)}$ denotes the determinant of the Jacobian of the transition function from $(s, t)$ to $(x, y)$. The transformation of the other forms is similar. Then, the surface integral of f on S is given by $$\iint_D \left[ f_z(\mathbf{r}(s,t)) \frac{\partial(x,y)}{\partial(s,t)} + f_x(\mathbf{r}(s,t)) \frac{\partial(y,z)}{\partial(s,t)} + f_y(\mathbf{r}(s,t)) \frac{\partial(z,x)}{\partial(s,t)} \right] \mathrm{d}s \, \mathrm{d}t,$$ where $$\frac{\partial \mathbf{r}}{\partial s} \times \frac{\partial \mathbf{r}}{\partial t} = \left( \frac{\partial(y,z)}{\partial(s,t)}, \frac{\partial(z,x)}{\partial(s,t)}, \frac{\partial(x,y)}{\partial(s,t)} \right)$$ is the surface element normal to S. Let us note that the surface integral of this 2-form is the same as the surface integral of the vector field which has as components $f_x$, $f_y$ and $f_z$. Theorems involving surface integrals Various useful results for surface integrals can be derived using differential geometry and vector calculus, such as the divergence theorem, magnetic flux, and its generalization, Stokes' theorem. Dependence on parametrization Let us notice that we defined the surface integral by using a parametrization of the surface S. We know that a given surface might have several parametrizations. For example, if we move the locations of the North Pole and the South Pole on a sphere, the latitude and longitude change for all the points on the sphere. A natural question is then whether the definition of the surface integral depends on the chosen parametrization. For integrals of scalar fields, the answer to this question is simple; the value of the surface integral will be the same no matter what parametrization one uses. For integrals of vector fields, things are more complicated because the surface normal is involved. It can be proven that given two parametrizations of the same surface, whose surface normals point in the same direction, one obtains the same value for the surface integral with both parametrizations. If, however, the normals for these parametrizations point in opposite directions, the value of the surface integral obtained using one parametrization is the negative of the one obtained via the other parametrization. It follows that given a surface, we do not need to stick to any unique parametrization, but, when integrating vector fields, we do need to decide in advance in which direction the normal will point and then choose any parametrization consistent with that direction. Another issue is that sometimes surfaces do not have parametrizations which cover the whole surface. The obvious solution is then to split that surface into several pieces, calculate the surface integral on each piece, and then add them all up.
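Both of the reconstructed formulas above can be sanity-checked numerically. The Python sketch below is a minimal illustration, assuming the unit sphere with the standard spherical parameterization and finite-difference partial derivatives; it recovers the area $4\pi$ from the scalar formula and, for the radial field $\mathbf{v}(\mathbf{r}) = \mathbf{r}$, the flux $4\pi$ from the vector formula (consistent with the divergence theorem, since $\nabla\cdot\mathbf{v} = 3$ over a ball of volume $4\pi/3$).

```python
import numpy as np

# Numerical check on the unit sphere r(s,t), s in [0, pi], t in [0, 2*pi):
#   area(S) = integral of |r_s x r_t| ds dt        (scalar field f = 1)
#   flux(v) = integral of v . (r_s x r_t) ds dt    (vector field v = r)
def r(s, t):
    return np.array([np.sin(s) * np.cos(t), np.sin(s) * np.sin(t), np.cos(s)])

def cross_element(s, t, h=1e-6):
    # partial derivatives of r by central differences
    rs = (r(s + h, t) - r(s - h, t)) / (2 * h)
    rt = (r(s, t + h) - r(s, t - h)) / (2 * h)
    return np.cross(rs, rt)  # points along the outward normal here

n = 200
ds, dt = np.pi / n, 2 * np.pi / n
area = flux = 0.0
for s in np.linspace(ds / 2, np.pi - ds / 2, n):      # midpoint rule
    for t in np.linspace(dt / 2, 2 * np.pi - dt / 2, n):
        cr = cross_element(s, t)
        area += np.linalg.norm(cr) * ds * dt
        flux += np.dot(r(s, t), cr) * ds * dt
print(area, flux, 4 * np.pi)  # all three ~12.566
```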
This is indeed how things work, but when integrating vector fields, one needs to again be careful how to choose the normal-pointing vector for each piece of the surface, so that when the pieces are put back together, the results are consistent. For the cylinder, this means that if we decide that for the side region the normal will point out of the body, then for the top and bottom circular parts, the normal must point out of the body too. Last, there are surfaces which do not admit a surface normal at each point with consistent results (for example, the Möbius strip). If such a surface is split into pieces, on each piece a parametrization and corresponding surface normal is chosen, and the pieces are put back together, we will find that the normal vectors coming from different pieces cannot be reconciled. This means that at some junction between two pieces we will have normal vectors pointing in opposite directions. Such a surface is called non-orientable, and on this kind of surface, one cannot talk about integrating vector fields. See also Area element Divergence theorem Stokes' theorem Line integral Line element Volume element Volume integral Cartesian coordinate system Volume and surface area elements in spherical coordinate systems Volume and surface area elements in cylindrical coordinate systems Holstein–Herring method References External links Surface Integral — from MathWorld Surface Integral — Theory and exercises Multivariable calculus Area Integral
Surface integral
Physics,Mathematics
1,638
17,776,831
https://en.wikipedia.org/wiki/Don%20VandenBerg
Dr. Don VandenBerg is Professor Emeritus of astronomy (Ph.D. Australian National University) in the Department of Physics and Astronomy at the University of Victoria, British Columbia, Canada. He is internationally acclaimed for his work on modelling stars of different sizes and compositions. Using basic input physics, such as nuclear reaction rates and opacities, VandenBerg uses computer models to help understand the structure and evolution of stars. These models, which are tightly constrained by observations, provide insight into stellar populations and will ultimately be used to synthesize the stellar populations of distant galaxies. VandenBerg has the most-cited research papers of any astronomer in Canada. The stellar isochrones resulting from his models are widely used throughout the world. References Notes Bibliography Maclean's Magazine Science Magazine article ISI Highly Cited Researchers Don VandenBerg official site 20th-century Canadian astronomers Academic staff of the University of Victoria Living people Year of birth missing (living people) 21st-century Canadian astronomers
Don VandenBerg
Astronomy
199
40,763,130
https://en.wikipedia.org/wiki/Cladophialophora%20bantiana
Cladophialophora bantiana (C. bantiana) is a melanin-producing mold known to cause brain abscesses in humans. It is one of the most common causes of systemic phaeohyphomycosis in mammals. Cladophialophora bantiana is a member of the Ascomycota and has been isolated from soil samples from around the world. Etymology Cladophialophora bantiana was first isolated from a brain abscess in 1911 by Guido Banti and was described by Pier Andrea Saccardo in 1912 as Torula bantiana. In 1960, the fungus was reclassified by Borelli as Cladosporium bantianum. A morphologically similar species, Cladosporium trichodes, was described by Emmons et al. in 1952. Cladosporium trichodes was widely believed to be a different species until 1995 when de Hoog et al. showed it to be conspecific with C. bantiana based on phylogenetic analysis. Morphology Cladophialophora bantiana exhibits predominantly hyphal growth both in vivo and in vitro. The normal morphology consists of dark coloured, largely unbranched, wavy chains of conidia, individually 5–10 μm in length. The dark colour is due to the presence of the dark pigment melanin. Hyphae are septate, as is the case for species belonging to the phylum Ascomycota. In samples isolated from cerebral tissue compared to cultured samples, a predominance of unbranched conidial chains and absence of conidiophores has been reported. In culture, the colony is black with a velvety texture or dark grey in colour, depending on the type of agar medium it is grown on. Cladophialophora bantiana has been reported to grow in culture under temperatures ranging from 14–42 °C with optimal growth around 30 °C. Cladophialophora bantiana grows slowly in vitro, taking ~15 days to mature when grown at 25–30 °C. Cladophialophora bantiana can be distinguished from other species of the genus Cladophialophora by the presence of the enzyme urease. Infection Non-human Cladophialophora bantiana can cause infection in several species of animals including cats, dogs, and humans. However, it is very rare to find it in non-mammalian species. In one case in a dog, C. bantiana was identified as the causative agent of eumycetoma. It has been known to cause systemic phaeohyphomycosis in both cats and dogs. Human Cladophialophora bantiana is known to cause a cerebral phaeohyphomycosis affecting the central nervous system in humans. It is hypothesized that the predilection of this species for the central nervous system is due to the presence of melanin, which may be able to cross the blood–brain barrier. However, this is unlikely since fungal melanin is structurally and biochemically different from human melanin and other species of highly pigmented fungi do not show neurotropism. It has also been suggested that the presence of introns in the 18S rDNA subunit of Cladophialophora may be related to the preference of C. bantiana for the CNS; however, more research is required to determine the mechanism of this. In a review of 101 cases of phaeohyphomycosis by Revankar et al., C. bantiana was the causal agent responsible for 48% of cases. It most often manifests as brain abscesses in immunocompetent people; however, meningitis and myelitis were observed in a limited number of cases. Although the majority of the patients were immunocompetent (73%), infection is also commonly seen in immunocompromised patients. Clinical symptoms of infection are varied and can include headache, seizure, arm pain, and ataxia.
The mortality rate is about 70%, with better outcomes observed in patients who underwent complete excision of the abscess. Since infection is very rare, there is no standard therapy for treatment of C. bantiana phaeohyphomycosis; however, a combination of amphotericin B, flucytosine, and itraconazole has been associated with improved outcomes. Since the majority of patients infected were immunocompetent, the means of exposure to the fungus is still unclear. However, inhalation is the likely route of entrance. Cases of infection are most commonly found in subtropical regions with high average humidity, although cases have also been identified in the US, Canada and the UK. Cases from regions with hot, arid climates are rare. It has also been suggested that occupations with high exposure to dust and dirt, such as farming and gardening, are associated with a higher risk of infection. References Eurotiomycetes Animal fungal diseases Fungus species
Cladophialophora bantiana
Biology
1,049
32,341,352
https://en.wikipedia.org/wiki/Quantum%20pendulum
The quantum pendulum is fundamental in understanding hindered internal rotations in chemistry, quantum features of scattering atoms, as well as numerous other quantum phenomena. Though a pendulum not subject to the small-angle approximation has an inherent nonlinearity, the Schrödinger equation for the quantized system can be solved relatively easily. Schrödinger equation Using Lagrangian mechanics from classical mechanics, one can develop a Hamiltonian for the system. A simple pendulum has one generalized coordinate (the angular displacement $\theta$) and two constraints (the length of the string and the plane of motion). The kinetic and potential energies of the system can be found to be $$T = \tfrac{1}{2} m \ell^2 \dot{\theta}^2, \qquad U = -m g \ell \cos\theta,$$ where $m$ is the pendulum mass, $\ell$ the length of the string, and $g$ the gravitational acceleration. This results in the Hamiltonian $$\hat{H} = \frac{\hat{p}_\theta^2}{2 m \ell^2} - m g \ell \cos\theta.$$ The time-dependent Schrödinger equation for the system is $$i\hbar \frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2 m \ell^2} \frac{\partial^2 \Psi}{\partial \theta^2} - m g \ell \cos\theta \, \Psi.$$ One must solve the time-independent Schrödinger equation to find the energy levels and corresponding eigenstates. This is best accomplished by changing the independent variable as follows: $$\theta = 2x + \pi, \qquad \frac{\mathrm{d}^2 \psi}{\mathrm{d}x^2} + \left( \frac{8 m \ell^2 E}{\hbar^2} - \frac{8 m^2 g \ell^3}{\hbar^2} \cos 2x \right) \psi = 0.$$ This is simply Mathieu's differential equation $$\frac{\mathrm{d}^2 \psi}{\mathrm{d}x^2} + \left( a - 2q \cos 2x \right) \psi = 0, \qquad a = \frac{8 m \ell^2 E}{\hbar^2}, \quad q = \frac{4 m^2 g \ell^3}{\hbar^2},$$ whose solutions are Mathieu functions. Solutions Energies Given $q$, for countably many special values of $a$, called characteristic values, the Mathieu equation admits solutions that are periodic with period $2\pi$. The characteristic values of the Mathieu cosine, sine functions respectively are written $a_n(q)$, $b_n(q)$, where $n$ is a natural number. The periodic special cases of the Mathieu cosine and sine functions are often written $\operatorname{ce}_n(x, q)$, $\operatorname{se}_n(x, q)$ respectively, although they are traditionally given a different normalization (namely, that their $L^2$ norm equals $\pi$). The boundary conditions in the quantum pendulum imply that $\psi(x) = \psi(x + \pi)$, so the admissible characteristic values are as follows for a given $q$: $a = a_n(q)$ or $a = b_n(q)$ with $n$ even. The energies of the system, for even/odd solutions respectively, are quantized based on the characteristic values found by solving the Mathieu equation: $$E_{\text{even}} = \frac{\hbar^2}{8 m \ell^2} \, a_n(q), \qquad E_{\text{odd}} = \frac{\hbar^2}{8 m \ell^2} \, b_n(q).$$ The effective potential depth can be defined in terms of the dimensionless parameter $q = \frac{4 m^2 g \ell^3}{\hbar^2}$. A deep potential yields the dynamics of a particle in an independent potential. In contrast, in a shallow potential, Bloch waves, as well as quantum tunneling, become of importance. General solution The general solution of the above differential equation for a given value of $a$ and $q$ is a set of linearly independent Mathieu cosines and Mathieu sines, which are even and odd solutions respectively. In general, the Mathieu functions are aperiodic; however, for characteristic values $a_n(q)$, $b_n(q)$, the Mathieu cosine and sine become periodic with a period of $2\pi$. Eigenstates For positive values of $q$, the following is true: $$C(a, q, x) = \frac{\operatorname{ce}(a, q, x)}{\operatorname{ce}(a, q, 0)}, \qquad S(a, q, x) = \frac{\operatorname{se}(a, q, x)}{\operatorname{se}'(a, q, 0)}.$$ The first few periodic Mathieu cosine functions resemble their trigonometric counterparts; $\operatorname{ce}_1$, for example, resembles a cosine function, but with flatter hills and shallower valleys. See also Quantum harmonic oscillator Bibliography Muhammad Ayub, Atom Optics Quantum Pendulum, 2011, Islamabad, Pakistan, https://arxiv.org/abs/1012.6011 Quantum models Pendulums
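The quantization rule above can be evaluated with SciPy, whose scipy.special module provides the Mathieu characteristic values mathieu_a(n, q) and mathieu_b(n, q). The sketch below uses hypothetical physical parameters chosen only so that the dimensionless depth q comes out of order one; it illustrates the energy formula rather than reproducing any published values.

```python
import numpy as np
from scipy.special import mathieu_a, mathieu_b

# Energy levels of the quantum pendulum from Mathieu characteristic values:
#   E = (hbar^2 / (8 m l^2)) * a_n(q) or b_n(q),  q = 4 m^2 g l^3 / hbar^2.
# The parameters below are hypothetical toy values (roughly a proton on a
# 5-micrometre arm), chosen only to make q of order one.
hbar = 1.054571817e-34          # J s
m, l, g = 1.67e-27, 5.0e-6, 9.81  # kg, m, m/s^2 (assumed)

q = 4 * m**2 * g * l**3 / hbar**2
scale = hbar**2 / (8 * m * l**2)
print(f"q = {q:.3g}")  # ~1.2 for these toy values

# Under the substitution theta = 2x + pi used above, the wavefunction must
# be pi-periodic in x, which selects the even-order Mathieu functions.
for n in (0, 2, 4):
    print(f"even a_{n}: E = {scale * mathieu_a(n, q):.3e} J")
for n in (2, 4, 6):
    print(f"odd  b_{n}: E = {scale * mathieu_b(n, q):.3e} J")
```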
Quantum pendulum
Physics
563
68,198,111
https://en.wikipedia.org/wiki/Western%20meadow%20vole
The western meadow vole (Microtus drummondii) is a species of North American vole found in western North America, the midwestern United States, western Ontario, Canada, and formerly in Mexico. It was previously considered conspecific with the eastern meadow vole (M. pennsylvanicus), but genetic studies indicate that it is a distinct species. It is sometimes called the field mouse or meadow mouse, although these common names can also refer to other species. Distribution It ranges from Ontario west to Alaska, and south to Missouri, north-central Nebraska, the northern half of Wyoming, and central Washington south through Idaho into north-central Utah. A disjunct subset of its range occurs from central Colorado to northwestern New Mexico. An isolated population was formerly found in Chihuahua, Mexico, but has since been extirpated. The United States portion of the Souris River is alternately known as the Mouse River because of the large numbers of field mice that lived along its banks. Plant communities In eastern Washington and northern Idaho, meadow voles are found in relative abundance in sedge (Carex sp.) fens, but not in adjacent cedar (Thuja sp.)-hemlock (Tsuga sp.), Douglas-fir (Pseudotsuga menziesii), or ponderosa pine (Pinus ponderosa) forests. Meadow voles are also absent from fescue (Festuca sp.)-snowberry (Symphoricarpos sp.) associations. Moisture may be a major factor in habitat use; possibly the presence of free water is a deciding factor. In southeastern Montana, western meadow voles were the second-most abundant small mammal (after deer mice, Peromyscus maniculatus) in riparian areas within big sagebrush (Artemisia tridentata)-buffalo grass (Bouteloua dactyloides) habitats. Western meadow voles are listed as riparian-dependent vertebrates in the Snake River drainage of Wyoming. In a compilation of 11 studies on small mammals, western meadow voles were reported in only three of 29 sites in subalpine forests of the central Rocky Mountains. Their range extensions were likely to be related to irrigation practices. They are now common in hayfields, pastures, and along ditches in the Rocky Mountain states. In Pipestone National Monument, Minnesota, western meadow voles were present in riparian shrublands, tallgrass prairie, and other habitats. Habitat In an Iowa prairie restoration project, meadow voles experienced a population increase during the initial stage of vegetation succession (old field dominated by foxtail grass (Setaria spp.), red clover (Trifolium pratense), annual ragweed (Ambrosia artemisiifolia), alfalfa (Medicago sativa), and thistles (Cirsium spp.)). However, populations reached their peak abundance during the perennial grass stage of succession from old field to tallgrass prairie. Meadow vole habitat is devoid of tree cover, with grasses dominating the herb layer. The meadow vole is a species with low tolerance for habitat variation (i.e., a species that is intolerant of variations in habitat, is restricted to few habitats, and/or uses habitats less evenly than tolerant species). In most areas, meadow voles clearly prefer habitat with dense vegetation. In tallgrass prairie at Pipestone National Monument, they were positively associated with dense vegetation and litter. The variables important to meadow vole habitat in Virginia include vegetative cover reaching a height of 8 to 16 inches (20–41 cm) and presence of litter. Meadow voles appeared to be randomly distributed within a grassland habitat in southern Quebec.
Grant and Morris were not able to establish any association of meadow vole abundance with particular plant species. They were also unable to distinguish between food and cover as the determining factor in meadow vole association with dense vegetation. In South Dakota, meadow voles prefer grasslands to Rocky Mountain juniper (Juniperus scopulorum) woodlands. In New Mexico, meadow voles were captured in stands of grasses, wild rose (Rosa sp.), prickly pear (Opuntia sp.), and various forbs; meadow voles were also captured in wet areas with tall marsh grasses. Open habitat with a thick mat of perennial grass favors voles. In west-central Illinois, they were the most common small mammals on Indian grass (Sorghastrum nutans)-dominated and switchgrass (Panicum virgatum)-dominated study plots. They were present in very low numbers on orchard grass (Dactylis glomerata)-dominated plots. The most stable population occurred on unburned big bluestem (Andropogon gerardii)-dominated plots. In Ontario, meadow voles and white-footed mice (Peromyscus leucopus) occur together in ecotones. Meadow voles were the most common small mammals in oak savanna/tallgrass prairie dominated by northern pin oak (Quercus ellipsoidalis) and grasses including bluejoint reedgrass (Calamagrostis canadensis), prairie cordgrass (Sporobolus michauxianus), big bluestem, switchgrass, and Indian grass. In Michigan, strip clearcuts in a conifer swamp resulted in an increase in the relative abundance of meadow voles. They were most abundant in clearcut strip interiors and least abundant in uncut strip interiors. Slash burning did not appear to affect meadow vole numbers about 1.5 years after treatment. Predators Birds not usually considered predators of mice do take voles; examples include gulls (Larus sp.), northern shrikes (Larius borealis), black-billed magpies (Pica hudsonica), common ravens (Corvus corax), American crows (C. brachyrhynchos), great blue herons (Ardea herodias), and American bitterns (Botaurus lentiginosus). Major mammalian predators include the badger (Taxidea taxus), striped skunk (Mephitis mephitis), weasels (Mustela and Neogale sp.), martens (Martes americana and M. caurina), domestic dogs (Canis familiaris), domestic cats (Felis catus) and mountain lions (Puma concolor). Other animals reported to have ingested voles include trout (Salmo sp.), Pacific giant salamanders (Dicampton ensatus), garter snakes (Thamnophis sp.), yellow-bellied racers (Coluber constrictor flaviventris), gopher snakes (Pituophis melanoleucas), plains rattlesnakes (Crotalus viridis), and rubber boas (Charina bottae). In northern prairie wetlands, meadow voles are a large portion of the diets of red foxes (Vulpes vulpes), American mink (Neogale vison), short-eared owls (Asio flammeus), and northern harriers (Circus cyaneus). Voles are frequently taken by racers (Coluber sp.) since both often use the same burrows. Management In forest plantations in British Columbia, an apparently abundant (not measured) meadow vole population was associated with a high rate of "not sufficient regeneration"; damage to tree seedlings was attributed to meadow voles and lemmings (Synaptomys sp.). The cycle of meadow vole abundance is an important proximate factor affecting the life histories of its major predators. Meadow voles are usually the most abundant small mammals in northern prairie wetlands, often exceeding 40% of all individual small mammals present. 
Numbers of short-eared owls, northern harriers, rough-legged hawks (Buteo lagopus), coyotes (Canis latrans), and red foxes were directly related to large numbers of meadow voles in a field in Wisconsin. Predator numbers are positively associated with meadow vole abundance. Threats The species depends heavily on mesic habitats, and in areas on the periphery of its range, which contain distinctive and divergent subspecies, populations may be lost if the wetness of the habitats changes. A distinct Pleistocene relict subspecies, M. d. chihuahuensis, the Chihuahuan vole, was also found in Chihuahua, Mexico, but has not been recorded since 1988 after its habitat was degraded by recreational activities and especially overgrazing, and eventually the marsh was completely drained by the early 2000s. This subspecies displayed notable divergence from other populations and was highly isolated from any others, and would be considered a distinctive subspecies. In addition, two other populations in New Mexico appear to have been extirpated in recent times, likely as a consequence of climate change-induced drying and overgrazing. Due to the heavy association between meadow voles and mesic habitats, they are especially at risk from drying trends in areas at the peripheries of their range, leaving many of these populations at heavy risk of extirpation. References Western meadow Rodents of Canada Rodents of the United States Bioindicators Fauna of Alaska Fauna of the Plains-Midwest (United States) Mammals described in 1854 Taxa named by John James Audubon Taxa named by John Bachman
Western meadow vole
Chemistry,Environmental_science
1,927
74,932,282
https://en.wikipedia.org/wiki/Ebro%20Hydrographic%20Confederation
The Ebro Hydrographic Confederation (in Spanish: Confederación Hidrográfica del Ebro, CHE) is the organization that manages, regulates and maintains the water and irrigation of the Ebro hydrographic basin (northeastern Spain). The organization's headquarters are in Zaragoza and it was the first institution created in the world with the objective of managing an entire river basin in a unitary manner. History In 1913, the First National Irrigation Congress was held in Zaragoza, exposing the idea of setting up a community group of an economic and supra-regional nature through the federation of the agricultural, commercial and industrial associations of the whole area subject to the influence of the Ebro. In 1926, during the dictatorship of Primo de Rivera, the Confederaciones Hidrográficas were created under the name of Confederaciones Sindicales Hidrográficas. Article 1 of the founding Royal Decree states that: In all the hydrographic basins in which the Administration declares it convenient or in which at least 70% of its agricultural and industrial wealth, affected by the use of its flowing waters, requests it, the Confederación Sindical Hidrográfica will be formed. The Confederación Sindical Hidrográfica del Ebro was the first to be set up, by Royal Decree of March 5, 1926, and its first Technical Director was the engineer Manuel Lorenzo Pardo, a follower of the ideas of Joaquín Costa, the great instigator of its founding. In 1931, the government of the Republic restructured the Ebro Hydrographic Confederation, renaming it the Mancomunidad Hidrográfica del Ebro (Ebro Hydrographic Commonwealth). Manuel Lorenzo Pardo was dismissed and replaced by Félix de los Ríos. In March 1936 he was replaced by Nicolás Liria Almor. In 1932 the Mancomunidad Hidrográfica del Ebro was renamed as Delegación de Servicios Hidráulicos del Ebro and in 1934 it was again renamed Confederación Hidrográfica del Ebro. General information Nowadays, the CHE is an autonomous agency under the Ministry of Ecological Transition. The functions of this agency are regulated in Article 25 of Royal Decree 927/1988, which approves the Regulations of the Public Administration of Water and Hydrological Planning. These functions are the following: The preparation of the basin hydrological plan, as well as its monitoring and revision. The administration and control of the hydraulic public domain. The administration and control of uses of general interest or affecting more than one Autonomous Community. The design, construction and operation of works financed from the Agency's own funds, and of those entrusted to it by the State. Those deriving from agreements with Autonomous Communities, local corporations and other public or private entities, or from those signed with individuals. The Ebro Confederation is also responsible for economic and ecological problems, such as zebra mussels and other introduced animal and plant species, and for providing users with information on the measures that can be taken in the use of boats, contained in the navigation regulations. Scope The Ebro river basin is located in the NE quadrant of the Iberian Peninsula and covers a total surface area of 85,362 km2, of which 445 km2 are in Andorra, 502 km2 in France and the rest in Spain.
It is the largest river basin in Spain, representing 17.3% of the Spanish peninsular territory. Its natural limits are: to the north the Cantabrian Mountains and the Pyrenees, to the southeast the Iberian System and to the east the Coastal-Catalan chain. It is drained by the Ebro river which, with a total length of 910 km, runs NW-SE, from the Cantabrian Mountains to the Mediterranean, where it flows into a magnificent delta. On its way it collects water from the Pyrenees and Cantabrian mountains on its left bank through important tributaries, such as the Aragón, Gállego, Segre, etc. and on its right bank it receives tributaries from the Sistema Ibérico, normally less abundant, such as the Oja, Iregua, Jalón or Guadalope. The scope of action is very complex, affecting numerous communities and even interacting with administrations of countries such as France or Andorra. For example, the Segre river, one of the main tributaries, rises in the French Alta Cerdanya, crosses mountainous areas with numerous lagoons and springs and in turn receives tributaries such as the Valltoba, the Llosa river, the Quer, the Noguera Pallaresa, the Noguera Ribagorzana and the Cinca. Even at the local level, the saline streams and lagoons are sometimes remarkable, as shown by the helophytic vegetation that surrounds them. In total, there are about 12,000 km of main river network. In the basin there are numerous seasonal flood lakes, lakes, and lagoons. The most famous lakes and lagoons are mainly in the mountainous areas, the so-called ibones or estanys of the Pyrenees, small in size but of great beauty. Surface area: 85 362 km2. Main rivers: 347 Length of rivers: 12 000 km Inhabitants of the basin according to 2005 census: 3 019 176 Estimated surface water supply under the natural regime from 1940/41 to 1985/86: Maximum 29 726 hm³ Average 18 217 hm³ Minimum 8 393 hm³ This large and varied territory is home to some 3 019 176 inhabitants, which represents a population density of 33 inhabitants/km2, well below the Spanish average (78 inhabitants/km2). Almost half of the population is concentrated in the cities of Zaragoza, Vitoria, Logroño, Pamplona, Huesca and Lérida. There is a concentration of population in the riparian areas in the center of the valley and large areas empty of human population in the Iberian System, the interior steppes, the interior pre-Pyrenees and the Pyrenees.
The most famous and largest in the area of the Ebro Hydrographic Confederation is the Gallocanta lagoon, located in an endorheic basin of 541 km2 with no external outlet; it forms a single lagoon (when it is completely full) or three smaller ones, depending on the amount of rainfall it receives. The endorheic lagoons that still persist are the remains of the Cenozoic seas or Pliocene residual lakes and usually have a very characteristic and rare endemic fauna and flora, with some large species such as the crane, the flamingo, or the stone-curlew (alcaraván). Among the wetland projects are the restoration and conditioning of the El Cañizar lake in Villarquemado (Teruel), and that of Bayas, in Miranda de Ebro (Burgos), completed in 2010; the improvement of the Ojos de Pontil, in Rueda de Jalón (Zaragoza) and the conditioning of the wetland environment in La Sima, in Rubielos de la Cérida (Teruel), both completed in 2011. The organization is also carrying out the environmental restoration of the wetland of the Guaso riverbank on the right bank of the Ara river, in Aínsa (Huesca), the improvement and conservation of the Larralde pond (Zaragoza) and the restoration of the riverbed of the Queiles river in Los Fayos (Zaragoza). But the great wetland of the basin is located on the Mediterranean coast: the Ebro Delta, of 7,736 hectares. It is a Ramsar Convention site, a ZEPA area and part of the Tierras del Ebro Biosphere Reserve. Confederation reservoirs Mequinenza reservoir: 1,530 hm³ (1965), lower Ebro section Ebro reservoir: 540 hm³ (1952), headwaters of the Ebro river Yesa reservoir: 446.86 hm³ (1959) - Yesa dam enlarged 1 100 hm³, Aragón river. Mediano reservoir: 436.35 hm³ (1973), Cínca river. Itoiz reservoir: 418 hm³ (2010), Iratí river, tributary of the Aragón. Rialb reservoir: 402 hm³ (2000), Segre river. El Grado I reservoir: 399.48 hm³ (1969), Cínca river. Santa Ana reservoir: 236.60 hm³ (1961), Noguera-Ribagorzana. La Sotonera reservoir: 189.38 hm³ (1963), Astón-Sotón river with waters of the Gállego. Oliana reservoir: 101.10 hm³ (1959), Segre river. Joaquín Costa reservoir or Barasona reservoir: 91.70 hm³ (1932), Ésera river. La Tranquera reservoir: 84.17 hm³ (1960), Piedra river, tributary of the Jalón. Caspe reservoir: 81.62 hm³ (1991), Guadalope river. Ribarroja: lower stretch of the Ebro. Mansilla reservoir: 68 hm³ (1960). Najerilla river. La Rioja Pajares reservoir: 35.29 hm³ (1995), Lumbreras river. La Rioja González Lacasa or Ortigosa reservoir: 33 hm³ (1962). Albercos river. La Rioja El Val reservoir: 24 hm³ (1997), Val river with waters of the Queiles. Headquarters From October 1926 Regino Borobio Ojeda was the consulting architect of the CHE, for which he carried out in the Ebro basin projects of agricultural farms, garages, schools, housing and offices in reservoirs. In 1929 he designed and built the Ebro Hydrographic Confederation Pavilion for the Barcelona Universal Exposition. Initially the CHE was installed in premises distributed in 7 different buildings. From 1928 it was installed in a building at number 20, Paseo de Sagasta, according to a project by Pascual Bravo. In 1933 it needed an extension of 6,000 m2. On February 4, 1933, the competition for preliminary projects for the CHE headquarters was announced. On April 13, 1933, the jury decided and the work was awarded to Regino Borobio Ojeda and José Borobio Ojeda. Work began in April 1936 and was completed in December 1946. The building is functional. It is located at Paseo de Sagasta, 24–26.
See also Guadalquivir Hydrographic Confederation Júcar Hydrographic Confederation Tagus Hydrographic Confederation Ebro valley Ebro sedimentary basin References Bibliography (in Spanish) UTRERA CARO, Sebastián Félix, La incidencia ambiental de las obras hidráulicas: régimen jurídico, Librería-Editorial Dykinson, 2002, 310 pp. ISBN 848155913X, 9788481559132 (in Spanish) PNILLA NAVARRO, Vicente, Gestión y usos del agua en la cuenca del Ebro en el siglo XX, Universidad de Zaragoza, 2008, 759 pp. ISBN 847733997X, 9788477339977 (in Spanish) LORENZO PARDO, Manuel, Por el Pantano del Ebro: un convencido más, 1918, 24 pp. (in Spanish) LORENZO PARDO, Manuel, Uriarte: recuerdos de la vida de un gran ingeniero, Tipografía del Heraldo, 1919, 237 pp. (in Spanish) LORENZO PARDO, Manuel, Aforo de corrientes, Espasa-Calpe, 1926, 35 pp. (in Spanish) LORENZO PARDO, Manuel, Nueva política hidráulica: la Confederación del Ebro, Campañía ibero-Americana de publicaciones, 1930, 214 pp. (in Spanish) LORENZO PARDO, Manuel, Manuel Lorenzo Pardo (1881-1953): escritos publicados en la Revista de Obras Públicas, Colegio de Ingenieros de Caminos, Canales y Puertos, 2003, 151 pp. ISBN 8438002471, 9788438002476 (in Spanish) BOROBIO OJEDA, Regino, BOROBIO OJEDA, José, Edificio de la Confederación hidrográgica del Ebro: Zaragoza 1933, Servicio Publicaciones ETSA, 1999, 63 pp. ISBN 8489713200, 9788489713208 (in Spanish) External links Libro Digital del Agua (in Spanish) Visor Geográfico del Sistema Integrado de Información del Agua (in Spanish) (in Spanish) Página de la Confederación Hidrográfica del Ebro (in Spanish) Unión de entidades para el cumplimiento de la Directiva de Aguas en la cuenca del Ebro (CuencaAzul) (in Spanish) Confederación Hidrográfica del Ebro en la Gran Enciclopedia Aragonesa (in Spanish) Ebro basin Hydrography Zaragoza European drainage basins of the Mediterranean Sea Rivers of Aragon Rivers of Burgos Rivers of La Rioja (Spain)
Ebro Hydrographic Confederation
Environmental_science
3,038
49,158,253
https://en.wikipedia.org/wiki/Thonny
Thonny is a free and open-source integrated development environment for Python that is designed for beginners. It was created by Aivar Annamaa, an Estonian programmer. It supports different ways of stepping through code, step-by-step expression evaluation, detailed visualization of the call stack and a mode for explaining the concepts of references and heap. Features Line numbers Statement stepping without breakpoints Live variables during debugging Stepping through evaluation of the expressions (expressions get replaced by their values) Separate windows for executing function calls (for explaining local variables and call stack) Variables and memory can be explained either by using a simplified model (name → value) or by using a more realistic model (name → address/id → value) Simple pip GUI Support for CPython and MicroPython Support for running and managing files on a remote machine via SSH Possibility to log user actions for replaying or analyzing the programming process Availability The program works on Windows, macOS and Linux. It is available as a binary bundle including a recent Python interpreter, or as a pip-installable package. It can be installed via the operating-system package manager on Debian, Raspberry Pi, Ubuntu, and Fedora. Reception Thonny has received favorable reviews from Python and computer science education communities. It has been a recommended tool in several programming MOOCs. Since June 2017 it has been included by default in the Raspberry Pi's official operating system distribution Raspberry Pi OS. See also List of integrated development environments for Python programming language Toolbox Kojo JUDO BASIC-256 Microsoft Small Basic References External links Development site Computer science education Free integrated development environments for Python Pedagogic integrated development environments Python (programming language) software Software using the MIT license
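To make the "expressions get replaced by their values" feature concrete, here is the kind of tiny program Thonny's expression stepper is designed to visualize. The program itself is an invented example; the substitution steps in the comment paraphrase the feature description above rather than Thonny's exact on-screen rendering.

```python
# A tiny program of the sort Thonny's expression stepper visualizes.
# Stepping through the call below, each sub-expression is replaced by
# its value, roughly:  area(3, 4)  ->  3 * 4 / 2  ->  12 / 2  ->  6.0
def area(base, height):
    """Area of a triangle."""
    return base * height / 2

print(area(3, 4))  # 6.0
```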
Thonny
Technology
359
50,234,312
https://en.wikipedia.org/wiki/Virstatin
Virstatin is a small molecule that inhibits the activity of the cholera protein ToxT. Its activity in cholera was first published in 2005 in a paper that described a phenotypic screen of a chemical library and the subsequent testing of one of the hits in infected mice. The compound is an isoquinoline alkaloid and can be synthesized by a simple two-step synthesis. References Cholera Carboxylic acids Imides
Virstatin
Chemistry
91
16,250,540
https://en.wikipedia.org/wiki/Hillyard%2C%20Inc.
Hillyard, Inc. (earlier known as Hillyard Disinfectant Company and Hillyard Chemical Company) is a privately owned cleaning products company in St. Joseph, Missouri with a specialty in providing products for the cleaning and maintenance of wood basketball courts. The company fielded two Amateur Athletic Union national champion basketball teams in the 1920s and was instrumental in the founding of the Basketball Hall of Fame (where an exhibit celebrates its contributions to the sport). In 2007 the company had an estimated $120 million in sales and employed 600 people. Newton S. Hillyard founded the company in 1907 as a cleaning supplies manufacturer. Hillyard's son Marvin asked him to sponsor a basketball team. N.S. then developed the company's signature cleaning supplies that made the floors "less oily." In 1920 the company moved to a new headquarters building that included a 90 x 140 foot wood gymnasium floor (claimed to be the largest west of the Mississippi River), where the company tested gym seals and finishes. The company then sponsored the Hillyard Shine Alls basketball team that won the Amateur Athletic Union national championships in 1926 and 1927 (and also played in two other AAU national championships in 1923 and 1925). The team was led by Forrest DeBernardi. During the Hillyard teams' domination, charges surfaced that Hillyard was paying its amateur players or that the players had no-show jobs at the plant. The controversy passed without any formal action taken against the company. Elliot C. Spratt, a Hillyard son-in-law, was the founding president of the Basketball Hall of Fame. The National Association of Basketball Coaches gives the Newton S. Hillyard Memorial Award to its outgoing president. Other Hillyard family members including Haskell Hillyard have received the John W. Bunn Lifetime Achievement Award at the Basketball Hall of Fame. Hillyard plays a major role at Missouri Western State University. Spratt Stadium is named for Elliot C. Spratt. The Hillyard Tip Off Classic is a basketball tournament at the school. References External links hillyard.com Companies based in Missouri Cleaning products Naismith Memorial Basketball Hall of Fame inductees Chemical companies established in 1907 1907 establishments in Missouri
Hillyard, Inc.
Chemistry
444