https://en.wikipedia.org/wiki/Green%20growth
Green growth is a concept in economic theory and policymaking used to describe paths of economic growth that are environmentally sustainable. It is based on the understanding that, as long as economic growth remains a predominant goal, a decoupling of economic growth from resource use and adverse environmental impacts is required. As such, green growth is closely related to the concepts of green economy and low-carbon or sustainable development. A main driver of green growth is the transition towards sustainable energy systems. Advocates of green growth policies argue that well-implemented green policies can create opportunities for employment in sectors such as renewable energy, green agriculture, or sustainable forestry.

Several countries and international organizations, such as the Organisation for Economic Co-operation and Development (OECD), the World Bank, and the United Nations, have developed strategies on green growth; others, such as the Global Green Growth Institute (GGGI), are specifically dedicated to the issue. The term green growth has been used to describe national or international strategies, for example as part of economic recovery from the COVID-19 recession, often framed as a green recovery.

Critics of green growth highlight how green growth approaches do not fully account for the underlying change in economic systems needed to address the climate crisis, the biodiversity crisis and other environmental degradation. They point instead to alternative frameworks for economic change, such as a circular economy, a steady-state economy, degrowth, doughnut economics and others.

Terminology

Green growth and related concepts stem from the observation that the economic growth of the past 250 years has come largely at the expense of the environment upon which economic activities rely. The concept of green growth assumes that economic growth and development can continue while the associated negative impacts on the environment, including climate change, are reduced – and while the natural environment continues to provide ecosystem services – meaning that a decoupling takes place. On the subject of decoupling, a distinction is made between relative and absolute decoupling: relative decoupling occurs when environmental pressure still grows, but more slowly than the gross domestic product (GDP); with absolute decoupling, an absolute reduction in resource use or emissions occurs while the economy grows (see the sketch after this list). Further distinctions are made based on what is taken into account:
- decoupling economic growth from resource use (resource decoupling) or from environmental pressure (impact decoupling),
- different indicators for economic growth and environmental pressures (e.g. resource use, emissions, biodiversity loss),
- only the domestic level or also impacts along the global value chain,
- the entire economy or individual sectors (e.g. energy, agriculture),
- temporary vs. permanent decoupling, and
- decoupling sufficient to reach certain targets (e.g. limiting global warming to 1.5 °C or staying within planetary boundaries).
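To make the relative/absolute distinction concrete, the following minimal sketch classifies a period from annual GDP and resource-use growth rates. It simply encodes the definitions above; the function name and example figures are invented for illustration and do not reflect any established measurement methodology.

```python
# Minimal sketch: classifying decoupling from annual growth rates.

def classify_decoupling(gdp_growth: float, resource_growth: float) -> str:
    """Rates are fractional per year, e.g. 0.03 means 3% growth."""
    if gdp_growth <= 0:
        return "economy not growing - decoupling not defined in this sense"
    if resource_growth < 0:
        return "absolute decoupling (resource use falls while GDP grows)"
    if resource_growth < gdp_growth:
        return "relative decoupling (resource use grows more slowly than GDP)"
    return "no decoupling"

print(classify_decoupling(0.03, 0.01))   # relative decoupling
print(classify_decoupling(0.03, -0.02))  # absolute decoupling
```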
History

While the related concepts of green growth, green economy and low-carbon development have received increasing international attention in recent years, the debate on growing environmental degradation in the face of economic growth dates back several decades. It was, for example, discussed in the 1972 report The Limits to Growth by the Club of Rome and reflected in the I = PAT equation developed in the early 1970s. The consequent understanding of the need for sustainable development was the focus of the 1987 Brundtland Report as well as the United Nations Conference on Environment and Development (UNCED), or Earth Summit, in Rio de Janeiro in 1992. The environmental Kuznets curve (EKC), which theorizes that environmental pressure from economic growth first increases and then automatically decreases, due in part to tertiarization, is disputed. Further influential developments include work by the economists Nicholas Stern and William Nordhaus, making the case for integrating environmental concerns into economic activities: the 2006 Stern Review on the Economics of Climate Change assessed the economic costs and risks of climate change and concluded that "the benefits of strong and early action far outweigh the economic costs of not acting".

The term "green growth" originates from the Asia-Pacific region and first emerged at the Fifth Ministerial Conference on Environment and Development (MCED) in Seoul, South Korea in 2005, where the Seoul Initiative Network on Green Growth was founded. Several international organisations have since turned their attention to green growth, in part as a way out of the financial crisis of 2007–2008: at the request of countries, the OECD published a Green Growth Strategy in 2011, and in 2012 the World Bank, UNEP, OECD and GGGI launched the Green Growth Knowledge Platform (GGKP). The related concepts of green growth, green economy and low-carbon development are sometimes used differently by different organisations, but are also used interchangeably. Some organisations also include social aspects in their definitions.

Employment

The report "Growth Within: A Circular Economy Vision for a Competitive Europe" predicts that there are many opportunities in recycling, producing longer-lasting products and offering maintenance services from the manufacturer. According to the International Labour Organization, a shift to a greener economy could create 24 million new jobs globally by 2030, if the right policies are put in place. Conversely, if a transition to a green economy does not take place, 72 million full-time jobs may be lost by 2030 due to heat stress, as temperature increases will shorten available work hours, particularly in agriculture. According to a 2020 report by the Green Alliance, the job-creation schemes with the best value for money in the UK are retrofitting buildings and creating cycle lanes, followed by electric ferries, battery factories and reforestation; these would create more jobs than proposed road-building schemes. The report also says that new investment in nature recovery could quickly create 10,000 new jobs.

Metrics

One metric commonly used to measure the resource use of economies is domestic material consumption (DMC). The European Union, for example, uses the DMC to measure its resource productivity. Based on this metric, it has been claimed that some developed countries have achieved relative or even absolute decoupling of material use from economic growth. The DMC, however, does not capture the shift of resource use that results from global supply chains, which is why another metric, the material footprint (MF), has been proposed. The MF aims to encompass resource use from the beginning of a production chain to its end, that is, from where raw materials are extracted to where the product or service is consumed. Research based on the MF indicates that resource use might be growing similarly to GDP in a number of countries, for example in the EU-27 or the member countries of the OECD.
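As a rough illustration of the difference between the two metrics, the sketch below computes DMC in the usual way (domestic extraction plus physical imports minus physical exports) and contrasts it with a material footprint that counts the upstream raw materials embodied in trade. All figures and the simple raw-material-equivalent (RME) values are invented for illustration.

```python
# Illustrative comparison of DMC and material footprint (MF), million tonnes.
# DMC counts traded goods at their shipped weight; MF counts the raw material
# equivalents (RME) embodied along the whole supply chain. Numbers are made up.

domestic_extraction = 500.0
imports, exports = 120.0, 80.0
rme_imports, rme_exports = 300.0, 150.0   # embodied upstream materials

dmc = domestic_extraction + imports - exports            # 540.0
mf = domestic_extraction + rme_imports - rme_exports     # 650.0

print(f"DMC = {dmc} Mt, MF = {mf} Mt")
# A country outsourcing material-intensive production can show a falling DMC
# while its MF, and hence its global resource use, keeps rising.
```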
Green growth as a policy strategy

Organizational efforts on green growth
- IEA: In 2020, the IEA published a strategy towards a "Clean Energy New Deal", which has been strongly promoted by executive director Fatih Birol.
- IMF: In 2020, Kristalina Georgieva, the head of the IMF, urged governments to invest emergency loans in green sectors, scrap subsidies to fossil fuels and tax carbon.
- UNESCAP: In 2012, the United Nations Economic and Social Commission for Asia and the Pacific released the Low Carbon Green Growth Roadmap for Asia and the Pacific to explore the opportunities that a low-carbon green growth path offers to the region. The roadmap articulates five tracks on which to drive the economic system change necessary to pursue low-carbon green growth as a new economic development path.
- OECD: In 2011, the OECD published a strategy towards green growth. In 2012, it also published a report on green growth and developing countries.
- UNEP: In 2008, the United Nations Environment Programme (UNEP) led the Green Economy Initiative.
- World Bank: In 2012, the World Bank published its report "Inclusive Green Growth: The Pathway to Sustainable Development".
- International Chamber of Commerce (ICC): In 2010, the ICC launched a global business Task Force on Green Economy, resulting in the Green Economy Roadmap, a guide for business, policymakers and society published in 2012.

Organizations devoted to green growth
- Global Green Growth Institute: GGGI was first launched as a think tank in 2010 by Korean President Lee Myung-bak and was later converted into an international treaty-based organization in 2012 at the Rio+20 Summit in Brazil.
- Green Growth Knowledge Platform: In January 2012, the Global Green Growth Institute, the Organisation for Economic Co-operation and Development (OECD), the United Nations Environment Programme (UNEP), and the World Bank signed a Memorandum of Understanding to formally launch the Green Growth Knowledge Platform (GGKP). The GGKP's mission is to enhance and expand efforts to identify and address major knowledge gaps in green growth theory and practice, and to help countries design and implement policies to move towards a green economy.

National green growth efforts

China: Since at least 2006 (with its 11th Five-Year Plan), China has been committed to achieving a green economy. Emissions growth in recent years has decelerated sharply, underpinned by tighter environmental regulations and massive green investments, including in renewable energy and electric vehicle infrastructure. China's national emissions trading system (ETS), rolled out to the power sector in 2020, could help facilitate the shift to cleaner energy. For price signals to be effective, however, power producers need to compete, allowing less polluting and more efficient ones to trade freely and expand their market share (which was not yet the case in 2020). China also influences the implementation of environmental technologies throughout Asia via its Belt and Road Initiative International Green Development Coalition.

EU: In 2010, the EU adopted the Europe 2020 strategy for "smart, sustainable and inclusive growth" for the 10-year period 2010–2020.
In 2019, the European Green Deal was launched as "Europe's new growth strategy", with the aim of making the continent's economy sustainable. Eastern European businesses currently lag behind their Southern European counterparts in the average quality of their green management practices, notably in setting specific energy-consumption and emissions objectives.

South Korea: Green growth was being discussed in the National Assembly in 2020.

United Kingdom: Green growth was strongly advocated in 2020 by the Committee on Climate Change.

United States: President Barack Obama took several steps toward green growth, arguing that investing in clean energy production would not only reduce dependency on foreign energy sources but also create jobs in a "clean-energy economy". Obama set goals of installing 10 gigawatts of renewable projects by 2020, doubling wind and solar energy production by 2025, and developing policies to help shape the nation's green economy. A 2014 report by the Center for American Progress quantified the levels of investment necessary for the US to attain green growth while meeting the levels of emission reduction spelled out by the Intergovernmental Panel on Climate Change (IPCC). In 2019, Democratic members of Congress introduced the Green New Deal resolution to create an umbrella for future government programs.

Japan: In 2021, the Ministry of Economy, Trade and Industry proposed the "Green Growth Strategy Through Achieving Carbon Neutrality in 2050", a plan to achieve carbon neutrality by 2050. The strategy identifies 14 growth sectors, categorized into three main groups: energy-related industries, transportation/manufacturing-related industries, and home/office-related industries. The strategy also established a Green Innovation Fund worth 2 trillion JPY (18.2 billion USD) to finance research and development and social implementation, and to encourage private companies to invest in their own green growth R&D.

Green growth in developing countries

Developing countries tend to have economies that are more reliant on exploiting the environment's natural resources, and green technologies and sustainable development are less affordable and accessible to them. At the same time, they are less able to protect themselves from the adverse effects of climate change and environmental degradation; they can face, for example, adverse health effects from polluted air and water. Green growth could therefore help improve the livelihoods and wellbeing of people in developing countries by protecting the environment while fostering economic growth. In 2012, the Organisation for Economic Co-operation and Development (OECD) drafted a report on green growth and developing countries as a summary for policymakers. This report outlines a policy framework that developing countries can use to achieve environmental and socio-economic goals. It also notes some concerns about green growth held by developing countries, such as its ability to address poverty in practice and possible high cost barriers to green technologies.

Requirements of green growth

Energy sources that meet the requirements of green growth must fit the criteria of efficient use of natural resources, affordability, access, prevention of environmental degradation, low health impacts, and high energy security.
Renewable and other low-carbon energy sources, including nuclear power, increase the power supply options for current and future populations and meet sustainable development requirements. While solar, wind, and nuclear energy have nearly no negative interactions with the environment when generating electricity, there are wastes and emissions connected to material extraction, manufacturing, and construction. Overall, such energy sources are a fundamental part of a nation's green growth strategy: nuclear, wind, and solar energy can all be beneficial and used together to combat climate change and kickstart green growth.

Limits

There are several limits to green growth. As described by the European Environmental Bureau (EEB), seven barriers could make green growth wishful thinking (the rebound effect is illustrated in the sketch after this list):
- Rising energy costs. The more natural resources are needed, the more expensive it becomes to extract them.
- Rebound effects. Improved efficiency is often accompanied by the same or higher consumption of a given good or service.
- Displacement of the problem. All technological solutions lead to environmental externalities.
- Underestimated impact of services. The service economy is based on the material economy, so it adds a footprint rather than replacing it.
- Limited recycling potential.
- Insufficient and inappropriate technological change. Technological progress is not disruptive and does not target the factors of production that matter for ecological sustainability.
- Cost shifting. Decoupling phenomena have emerged, but they are characterised by the externalisation of environmental impacts from high-consumption countries to low-consumption countries.
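A minimal sketch of the rebound effect named above: an efficiency gain lowers the resource cost per unit of service, but if demand rises in response because the service becomes cheaper, total resource use falls by less than the efficiency gain, or can even increase. The elasticity and all other figures here are invented for illustration.

```python
# Illustrative rebound-effect arithmetic (all numbers made up).

baseline_units = 100.0     # units of service consumed per year
resource_per_unit = 1.0    # resource needed per unit of service

efficiency_gain = 0.20     # 20% less resource per unit of service
demand_response = 0.15     # demand grows 15% as the service gets cheaper

new_use = (baseline_units * (1 + demand_response)
           * resource_per_unit * (1 - efficiency_gain))
print(new_use)  # 92.0 - only an 8% saving despite a 20% efficiency gain
```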
Criticism

A 2020 two-part systematic review published in Environmental Research Letters analyzed the full texts of 835 papers on the relationship between GDP, resource use (materials and energy) and greenhouse gas emissions. The first part found that "the vast majority of studies [...] approach the topic from a statistical-econometric point of view, while hardly acknowledging thermodynamic principles on the role of energy and materials for socio-economic activities. A potentially fundamental incompatibility between economic growth and systemic societal changes to address the climate crisis is rarely considered." The second part concluded "that large rapid absolute reductions of resource use and GHG emissions cannot be achieved through observed decoupling rates, hence decoupling needs to be complemented by sufficiency-oriented strategies and strict enforcement of absolute reduction targets."

A 2020 paper by Jason Hickel and Giorgos Kallis published in New Political Economy concludes that "there is no empirical evidence that absolute decoupling from resource use can be achieved on a global scale against a background of continued economic growth" and that "absolute decoupling from carbon emissions is highly unlikely to be achieved at a rate rapid enough to prevent global warming over 1.5°C or 2°C, even under optimistic policy conditions." It thus suggests looking for alternative strategies.

The degrowth movement is opposed to all forms of productivism (the belief that economic productivity and growth are the purpose of human organization) and is therefore also opposed to green growth concepts. Another 2020 study shows that the pursuit of green growth would increase inequality and unemployment unless accompanied by radical social policies.

See also
- Agrowth
- Alternative fuels
- Biobased economy
- Bright green environmentalism
- Circular economy
- Protecting and restoring degraded high-carbon ecosystems
- Divestment
- Ecological economics
- Eco-economic decoupling
- Free-market environmentalism
- Fossil fuel phase-out
- Georgism
- Green capitalism
- Green economy
- Greenwashing
- Low-carbon tenders
- Green recovery
- Hydrogen economy
- Natural resource economics
- Industrial mass production in the renewable energy sector
- Peak oil (reached in 2020 according to the BP Energy Outlook 2020)
- Reforestation
- Small-scale agriculture
- Sustainable development
- Trillion Tree Campaign
- Urban planning
- The Blue Economy
- Prosperity Without Growth
- War economy
https://en.wikipedia.org/wiki/Carbon%20monoxide%20detector
A carbon monoxide detector or CO detector is a device that detects the presence of carbon monoxide (CO) gas in order to prevent carbon monoxide poisoning. In the late 1990s, Underwriters Laboratories changed the designation of a single-station CO detector with a sounding device to "carbon monoxide (CO) alarm". This applies to all CO safety alarms that meet the UL 2034 standard; for passive indicators and system devices that meet UL 2075, however, UL retains the term "carbon monoxide detector". Most CO detectors use a sensor with a defined, limited lifespan and will not work indefinitely.

CO is a colorless, tasteless, and odorless gas produced by incomplete combustion of carbon-containing materials. It is often referred to as the "silent killer" because it is virtually undetectable by human senses. In a study by Underwriters Laboratories, "Sixty percent of Americans could not identify any potential signs of a CO leak in the home". Elevated levels of CO can be dangerous to humans depending on the amount present and the length of exposure: smaller concentrations can be harmful over longer periods, while increasing concentrations require diminishing exposure times to be harmful. Those living in all-electric homes generally do not need CO detectors, unless there is an attached garage housing a combustion-engine vehicle or a backup generator is run too close to living quarters during a power outage.

CO detectors are designed to measure CO levels over time and sound an alarm before dangerous levels of CO accumulate in an environment, giving people adequate warning to safely ventilate the area or evacuate. Some system-connected detectors also alert a monitoring service that can dispatch emergency services if necessary. While CO detectors do not serve as smoke detectors and vice versa, combined smoke/CO detectors are also sold. In the home, common sources of CO include open flames, space heaters, water heaters, blocked chimneys, and running a car or grill inside a garage.

Installation

The devices can be either battery-operated or AC-powered (with or without a battery backup). Battery-powered devices advertise a battery lifetime of up to 10 years. The gas sensors in CO alarms have a limited lifespan, typically two to five years; newer models are designed to signal a need for replacement after a set period. CO detectors have "test" buttons like smoke detectors, but the test buttons only test the battery, electronic circuit, and buzzer, not the alarm's ability to sense gas. According to the carbon monoxide guidelines of the National Fire Protection Association, CO detectors should be installed in each sleeping area of a dwelling, and each detector should be located "on the wall, ceiling or other location as specified in the installation instructions that accompany the unit". CO detectors are available as stand-alone models or as system-connected devices that can be monitored remotely.

Function

The primary purpose of a CO detector is to sound an alarm to warn occupants of an enclosed space of a dangerous level of carbon monoxide. The alarm should sound within 60 minutes if the concentration rises to 70 PPM, within 10 minutes at 150 PPM, within 4 minutes at 400 PPM, and immediately at 500 PPM or greater. The alarm should not sound too quickly, as brief false alarms may prompt users to disable the alarm, leaving them unprotected. Some alarm devices may display the CO level. There are also measuring instruments designed to display CO concentrations down to low, non-dangerous levels, rather than to detect and warn of dangerous levels.
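The response-time requirements above can be summarised as a simple concentration-to-deadline mapping. The sketch below encodes only the figures stated in the text; the actual UL 2034 standard is more detailed (it also sets minimum times so alarms do not sound too quickly), and the function name is invented.

```python
# A minimal sketch of the alarm-time requirements described above
# (figures taken from the text; the real UL 2034 limits are more detailed).

def max_alarm_time_minutes(ppm: float) -> float | None:
    """Longest time a compliant alarm may wait before sounding, in minutes."""
    if ppm >= 500:
        return 0        # sound immediately
    if ppm >= 400:
        return 4
    if ppm >= 150:
        return 10
    if ppm >= 70:
        return 60
    return None         # below the alarm threshold; must not alarm too readily

print(max_alarm_time_minutes(150))  # 10
```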
Some detectors without UL certification have been found not to sound at the specified thresholds, or to sound within seconds. A flashing red indicator without an accompanying sound may signify a different condition, or simply inform the user of a malfunction. Wireless home safety devices are available that link carbon monoxide detectors to vibrating pillow pads, strobes, or a remote warning handset. Several carbon monoxide detection methods are used and documented in industry specifications published by Underwriters Laboratories. Alerting methods include:
- Audible tones (four short beeps), varying between 3,000 and 3,500 Hz depending on brand and model, at 105 dB loudness at 3 feet
- Spoken voice alert
- Emergency light for illumination

Sensors

Early designs used a chemical detector consisting of a white pad that faded to a brownish or blackish color in the presence of carbon monoxide. Such detectors are cheap but only give a visual warning. As carbon monoxide-related deaths increased during the 1990s, audible alarms became standard. The alarm points of carbon monoxide detectors are not a simple alarm level (as in smoke detectors) but a concentration-time function. At lower concentrations, e.g. 100 parts per million (PPM), the detector does not sound an alarm for many tens of minutes. At 400 PPM, the alarm sounds within a few minutes. This concentration-time function is intended to mimic the uptake of carbon monoxide in the body while also preventing false alarms due to brief bursts of carbon monoxide from relatively common sources such as cigarette smoke. Four types of sensors are available, varying in cost, accuracy, and speed of response. Most detectors do not have replaceable sensors.

Opto-chemical type

The detector consists of a pad of a colored chemical that changes color upon reaction with carbon monoxide. Such detectors provide only a qualitative warning of the gas. Their main advantage is that they are the lowest cost; the downside is that they also offer the lowest level of protection. One reaction used for carbon monoxide detection is the catalytic oxidation by potassium disulfitopalladate(II):

CO + K2Pd(SO3)2 → Pd + CO2 + SO2 + K2SO3

As the reaction progresses, the release of palladium causes the color to change from yellow to brown to black.

Biomimetic type

A biomimetic sensor works in a fashion similar to hemoglobin, darkening in proportion to the amount of carbon monoxide in the surrounding environment. It uses cyclodextrins, a chromophore, and a number of metal salts. The darkening can either be observed directly or monitored with an infrared photon source, such as an IR LED, and a photodiode. Battery lifespan is usually two to three years with conventional alkaline batteries, but a lithium battery will last the life of the product. The biotechnology-based sensors have a useful operational life of six years. These products were the first to enter the mass market, but because they cost more than other sensors they are mostly used in higher-end applications and RVs. The technology has been improved and is the most reliable technology, according to a report from Lawrence Berkeley National Laboratory.

Electrochemical type

The electrochemical detector uses the principle of a fuel cell to generate an electrical current when the gas to be detected undergoes a chemical reaction. The generated current is precisely related to the amount of carbon monoxide in the immediate environment of the sensor.
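Because the cell's output current is linear in CO concentration, converting a reading is a one-line calculation once the sensor's sensitivity is known. The sketch below is a hedged illustration of that linearity: the 50 nA/ppm sensitivity is an invented placeholder, not a figure from the text; real sensors ship with their own calibration constants.

```python
# Hedged sketch: converting an electrochemical cell's output current to a CO
# reading, assuming a linear response. The sensitivity is a hypothetical
# placeholder; real sensors provide a calibrated value on their datasheet.

SENSITIVITY_NA_PER_PPM = 50.0   # invented calibration constant (nA per ppm)

def co_ppm_from_current(current_na: float, zero_offset_na: float = 0.0) -> float:
    """Linear conversion: ppm = (I - I_zero) / sensitivity."""
    return max(0.0, (current_na - zero_offset_na) / SENSITIVITY_NA_PER_PPM)

print(co_ppm_from_current(3500.0))  # 70.0 ppm for a 3.5 uA reading
```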
Essentially, the electrochemical cell consists of a container, two electrodes, connection wires, and an electrolyte, typically sulfuric acid. Carbon monoxide is oxidized at one electrode to carbon dioxide while oxygen is consumed at the other electrode. For carbon monoxide detection, the electrochemical cell has advantages over other technologies: it has a highly accurate and linear output with respect to carbon monoxide concentration, requires minimal power as it operates at room temperature, and has a long lifetime, typically five to ten years. This technology has become the dominant technology in the United States and Europe. Test buttons only indicate the operational effectiveness of the battery, circuit, and buzzer; the only way to fully test the operation of a CO alarm using an electrochemical cell is with a known source of calibrated test gas, delivered in a shroud to maintain the concentration level for the test period.

Semiconductor type

Thin wires of the semiconductor tin dioxide on an insulating ceramic base provide a sensor monitored by an integrated circuit. The sensing element must be heated to approximately 400 °C for operation. Oxygen increases the resistance of the tin dioxide, while carbon monoxide reduces it; the integrated circuit monitors the resistance of the sensing element. Lifespans are approximately five years, and alarms need testing on installation and at least annually with a test gas. Due to the large power demand of this sensor, it is usually powered from the mains; a battery-powered, pulsed sensor is available with a lifetime of months. This technology is widely used in Japan and elsewhere in the Far East, with some market penetration in the USA, but the superior performance of electrochemical cell technology is beginning to displace it.

Concentration readout

Although all home detectors use an audible alarm signal as the primary indicator, some versions also offer a digital readout of the CO concentration, in parts per million (PPM). Typically, they can display both the current reading and a peak reading from memory of the highest level measured over some period. These advanced models cost somewhat more but are otherwise similar to the basic models. Models with a display have the advantages of indicating levels below the alarm threshold, reporting levels that may have occurred during an absence, and assessing the degree of hazard if the alarm sounds. They may also aid emergency responders in evaluating the level of past or ongoing exposure or danger.

Portable

Portable detectors are designed for aircraft, cars and trucks, and warn vehicle occupants of any CO hazard.

CO measurement instruments

Portable meters that display CO concentrations down to a few PPM are more sensitive than home safety CO detectors and correspondingly much more expensive. They are used by industrial hygienists and first responders, and for maintenance and tracing of CO leaks. These devices measure low levels of CO in seconds, rather than in minutes or hours like residential alarms. Like other test equipment, professional CO meters must be tested and recalibrated periodically.

Legislation

United States

In the US, 32 states have enacted statutes regarding carbon monoxide detectors, and another 11 have promulgated regulations on CO detectors, as have Washington, D.C. and New York City. In Canada, CO alarm requirements came into effect in Ontario on October 15, 2014, and there is a strong movement in Alberta to make CO detectors mandatory in all homes.
More and more states are legislating to make their installation mandatory. House builders in Colorado are required to install carbon monoxide detectors in new homes under a bill signed into law in March 2009. House Bill 1091 requires the installation of the detectors near bedrooms in new and resold homes, as well as in rented apartments and homes; it took effect on July 1, 2009. The legislation was introduced after the death of Denver investment banker Parker Lofgren and his family, who were found dead of carbon monoxide poisoning in their home near Aspen, Colorado on November 27, 2008.

In New York State, "Amanda's Law" (A6093A/C.367) requires one- and two-family residences with fuel-burning appliances to have at least one carbon monoxide alarm installed on the lowest story having a sleeping area, effective February 22, 2010. Although homes built before January 1, 2008 are allowed to have battery-powered alarms, homes built after that date must have hard-wired alarms. In addition, New York State contractors must install a carbon monoxide detector when replacing a fuel-burning water heater or furnace if the home is without an alarm. The law is named for Amanda Hansen, a teenager who died of carbon monoxide poisoning from a defective boiler while at a sleepover at a friend's house.

Alaska House Bill 351 requires a carbon monoxide detector to be installed in dwelling units that contain, or are serviced by, a carbon-based fuel appliance or other device that produces by-products of combustion.

In July 2011, California required the installation of carbon monoxide detectors in existing single-family homes, with multifamily homes following in 2013. A 2015 California law requires all newly installed smoke and CO alarms to be of the 10-year, non-serviceable type. Existing alarms may not need to be replaced; homeowners should consult local codes. Required alarm locations also vary by local enforcing agency.

In Maine, all rental units must have carbon monoxide detectors. In non-rental homes they are recommended but not required.

Standards

North America

The Canada Mortgage and Housing Corporation reports: "The standards organizations of Canada (CSA) and the United States (Underwriters Laboratories or UL) have coordinated the writing of CO standards and product testing. The standards as of 2010 prohibit showing CO levels of less than 30 PPM on digital displays. The most recent standards also require the alarm to sound at higher levels of CO than with previous editions of the standard. The reasoning behind these changes is to reduce calls to fire stations, utilities and emergency response teams when the levels of CO are not life threatening. This change will also reduce the number of calls to these agencies due to detector inaccuracy or the presence of other gases. Consequently, new alarms will not sound at CO concentrations up to 70 PPM. Note that these concentrations are significantly in excess of the Canadian health guidelines" (and also in excess of the US Occupational Safety and Health Administration (OSHA) permissible exposure limit, which is 50 PPM).
https://en.wikipedia.org/wiki/Fuzzy-trace%20theory
Fuzzy-trace theory (FTT) is a theory of cognition originally proposed by Valerie F. Reyna and Charles Brainerd to explain cognitive phenomena, particularly in memory and reasoning. FTT posits two types of memory processes (verbatim and gist) and is therefore often referred to as a dual-process theory of memory. According to FTT, retrieval of verbatim traces (recollective retrieval) is characterized by mental reinstatement of the contextual features of a past event, whereas retrieval of gist traces (nonrecollective retrieval) is not. Gist processes form representations of an event's semantic features rather than of its surface details, the latter being a property of verbatim processes. The theory has been used in areas such as cognitive psychology, human development, and social psychology to explain, for instance, false memory and its development, probability judgments, medical decision making, risk perception and estimation, and biases and fallacies in decision making. FTT can explain phenomena involving both true memories (i.e., memories of events that actually happened) and false memories (i.e., memories of events that never happened).

History

FTT was initially proposed in the 1990s as an attempt to unify findings from the memory and reasoning domains that could not be predicted or explained by earlier approaches to cognition and its development (e.g., constructivism and information processing). One such challenge was the statistical independence between memory and reasoning: memory for the background facts of problem situations is often unrelated to accuracy in reasoning tasks. Such findings called for a rethinking of the memory-reasoning relation, which in FTT took the form of a dual-process theory linking basic concepts from psycholinguistic and Gestalt theory to memory and reasoning. More specifically, FTT posits that people form two types of mental representations of a past event, called verbatim and gist traces. Gist traces are fuzzy representations of a past event (e.g., its bottom-line meaning), hence the name fuzzy-trace theory, whereas verbatim traces are detailed representations of a past event. Although people are capable of processing both verbatim and gist information, they prefer to reason with gist traces rather than verbatim ones. This implies, for example, that even if people are capable of understanding ratio concepts such as probabilities and prevalence rates, which are the standard for the presentation of health- and risk-related data, their choices in decision situations will usually be governed by the bottom-line meaning of the information (e.g., "the risk is high" or "the risk is low"; "the outcome is bad" or "the outcome is good") rather than by the actual numbers. More importantly, in FTT, memory-reasoning independence can be explained in terms of the preferred modes of processing when one performs a memory task (e.g., retrieval of verbatim traces) relative to when one performs a reasoning task (e.g., preference for reasoning with gist traces).

In 1999, a similar approach was applied to human vision. It suggested that human vision involves two types of processing: one that aggregates local spatial receptive fields, and one that parses the local receptive field. People use prior experience, gists, to decide which process dominates a perceptual decision. The work attempted to link Gestalt theory and psychophysics (i.e., independent linear filters).
This approach was further developed into fuzzy image processing and used in information processing technology and edge detection.

Memory

As noted above, FTT posits two types of memory processes, verbatim and gist, and is therefore often referred to as a dual-process theory of memory: retrieval of verbatim traces (recollective retrieval) involves mental reinstatement of the contextual features of a past event, whereas retrieval of gist traces (nonrecollective retrieval) does not, with gist processes representing an event's semantic features and verbatim processes its surface details. In the memory domain, FTT's notion of verbatim and gist representations has been influential in explaining true memories (i.e., memories of events that actually happened) as well as false memories (i.e., memories of events that never happened). The following five principles have been used to predict and explain true and false memory phenomena.

Principles

Process independence

Parallel storage

The principle of parallel storage asserts that the encoding and storage of verbatim and gist information operate in parallel rather than serially. For instance, suppose that a person is presented with the word "apple" printed in red. According to the principle of parallel storage, verbatim features of the target item (e.g., the word was apple, it was presented in red, printed in boldface and italic, and all but the first letter were lowercase) and gist features (e.g., the word was a type of fruit) are encoded and stored simultaneously via distinct pathways. Conversely, if verbatim and gist traces were stored serially, the gist features of the target item (the word was a type of fruit) would be derived from its verbatim features, and the formation of gist traces would therefore depend on the encoding and storage of verbatim traces. The latter idea was often assumed by early memory models. However, despite the intuitive appeal of the serial processing approach, research suggests that the encoding and storage of gist traces do not depend on verbatim ones. Several studies have converged on the finding that the meaning of target items is encoded independently of, and even prior to, the encoding of the surface form of the same items. Ankrum and Palmer, for example, found that when participants are presented with a familiar word (e.g., apple) for a very brief period (100 milliseconds), they are better able to identify the word itself ("was it apple?") than its letters ("did it contain the letter L?").

Dissociated retrieval

Similar to the principle of parallel storage, retrieval of verbatim and gist traces also occurs via dissociated pathways. According to the principle of dissociated retrieval, recollective and nonrecollective retrieval processes are independent of each other. Consequently, this principle allows verbatim and gist processes to be differentially influenced by factors such as the type of retrieval cue and the availability of each form of representation. In connection with Tulving's encoding specificity principle, items that were actually presented in the past are better cues for verbatim traces than items that were not. Similarly, items that were not presented in the past but preserve the meaning of presented items are usually better cues for gist traces.
Suppose, for example, that participants in an experiment are presented with a word list containing several dog breeds, such as poodle, bulldog, greyhound, doberman, beagle, collie, boxer, mastiff, husky, and terrier, and that during a recognition test the words poodle, spaniel, and chair are presented. According to the principle of dissociated retrieval, retrieval of verbatim and gist traces do not depend on each other, and different types of test probe may serve as better cues for one type of trace than another. In this example, test probes such as poodle (targets, or studied items) will be better retrieval cues for verbatim traces than for gist, whereas test probes such as spaniel (related distractors: non-studied items related to targets) will be better retrieval cues for gist traces than for verbatim. Chair, on the other hand, is a good cue for neither verbatim nor gist traces, because it was not presented and is not related to dogs. If verbatim and gist processes were dependent, factors that affect one process would also affect the other in the same direction. However, several experiments showing, for example, differential forgetting rates for memory for surface details versus memory for the bottom-line meaning of past events favor the notion of dissociated retrieval of verbatim and gist traces. In the case of forgetting rates, those experiments have shown that, over time, verbatim traces become inaccessible at a faster rate than gist traces. Brainerd, Reyna, and Kneer, for instance, found that delay drives true recognition rates (supported by both verbatim and gist traces) and false recognition rates (supported by gist traces and suppressed by verbatim traces) in opposite directions: true memory decays over time while false memory increases.

Opponent processes in false memory

The principle of opponent processes describes the interaction between verbatim and gist processes in creating true and false memories. Whereas true memory is supported by both verbatim and gist processes, false memory is supported by gist processes and suppressed by verbatim processes. In other words, verbatim and gist processes work in opposition to one another when it comes to false memories. Suppose, for example, that one is presented with a word list such as lemon, apple, pear, and citrus, and that during a recognition test the items lemon (target), orange (related distractor), and fan (unrelated distractor) are shown. In this case, retrieval of a gist trace (fruits) supports acceptance of both the test probe lemon (true memory) and orange (false memory), whereas retrieval of a verbatim trace (lemon) supports acceptance only of the test probe lemon. In addition, retrieval of an exclusory verbatim trace ("I saw only the words lemon, apple, pear, and citrus") suppresses acceptance of false but related items such as orange, through an operation known as recollection rejection. If neither verbatim nor gist traces are retrieved, one might accept any test probe on the basis of response bias. This principle plays a key role in FTT's explanation of experimental dissociations between true and false memories (e.g., when a variable affects one type of memory without affecting the other, or when it produces opposite effects on them). The time of exposure of each word during study and the number of repetitions have been shown to produce such dissociations. More specifically, while true memory follows a monotonically increasing function when plotted against presentation duration, false memory rates exhibit an inverted-U pattern as a function of presentation duration. Similarly, repetition is monotonically related to true memory (true memory increases with the number of repetitions) and non-monotonically related to false memory (repetition produces an inverted-U relation with false memory).
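The opponent-processes logic can be illustrated with a toy model. The sketch below is not Brainerd and Reyna's actual measurement model; it is a minimal illustration, with invented retrieval probabilities, of how gist support plus verbatim suppression reproduces the delay dissociation described above (verbatim traces fade faster than gist).

```python
# Toy illustration of the opponent-processes principle (parameters invented).
# v = probability of retrieving a verbatim trace, g = probability for gist.

def p_accept_target(v: float, g: float) -> float:
    # A studied item is accepted if its verbatim trace is retrieved,
    # or, failing that, if its gist is retrieved.
    return v + (1 - v) * g

def p_accept_related_distractor(v: float, g: float) -> float:
    # A related distractor is accepted when gist is retrieved and no
    # exclusory verbatim trace ("I saw only ...") rejects it.
    return g * (1 - v)

# Immediately after study vs. after a delay (verbatim fades, gist persists):
print(p_accept_target(0.7, 0.6), p_accept_related_distractor(0.7, 0.6))  # 0.88 0.18
print(p_accept_target(0.2, 0.6), p_accept_related_distractor(0.2, 0.6))  # 0.68 0.48
# True recognition falls while false recognition rises, as described above.
```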
Retrieval phenomenology

Retrieval phenomenologies are spontaneous mental experiences associated with the act of remembering. They were first systematically characterized by E. K. Strong in the early 1900s. Strong identified two distinct types of introspective phenomena associated with memory retrieval, which have since been termed recollection (or remembrance) and familiarity. Whereas the former is characterized as retrieval associated with recollection of past experiences, the latter lacks such association. The two forms of experience can be illustrated by everyday expressions such as "I remember that!" (recollection) and "That seems familiar..." (familiarity). In FTT, retrieval of verbatim traces often produces recollective phenomenology and is thus frequently referred to as recollective retrieval. However, one feature of FTT is that recollective phenomenology is not particular to one type of memory process, as posited by other dual-process theories of memory. Instead, FTT posits that retrieval of gist traces can also produce recollective phenomenology under some circumstances. When the gist resemblance between a false item and memory is high and compelling, this gives rise to a phenomenon called phantom recollection: a vivid, but false, memory deemed to be true.

Developmental variability in dual processes

The principle of developmental variability in dual processes posits that verbatim and gist processes vary across the lifespan. More specifically, verbatim and gist processes have been shown to improve between early childhood and young adulthood. Regarding verbatim processes, older children are better at retrieving verbatim traces than younger children, although even very young children (4-year-olds) are able to retrieve verbatim information above chance level. For instance, source memory accuracy increases greatly between 4-year-olds and 6-year-olds, and memory for nonsense words (i.e., words without meaning, such as neppez) has been shown to increase between 7- and 10-year-olds. Gist processes also improve with age. For example, semantic clustering in free recall increases from 8-year-olds to 14-year-olds, and meaning connection across words and sentences has been shown to improve between 6- and 9-year-olds. In particular, the notion that gist memory improves with age plays a central role in FTT's prediction of age increases in false memory, a counterintuitive pattern that has been called developmental reversal. Regarding old age, several studies suggest that verbatim memory declines between early and late adulthood, while gist memory remains fairly stable. Experiments indicate that older adults perform worse than younger adults on tasks that require retrieval of surface features of studied items. In addition, results with measurement models that quantify verbatim and gist processes indicate that older adults are less able than younger adults to use verbatim traces during recall.
False memories

When people try to remember past events (e.g., a birthday party or their last dinner), they often commit two types of errors: errors of omission and errors of commission. The former is known as forgetting, while the latter is better known as false memory. False memories can be separated into spontaneous and implanted false memories. Spontaneous false memories result from endogenous (internal) processes, such as meaning processing, while implanted false memories result from exogenous (external) processes, such as the suggestion of false information by an outside source (e.g., an interviewer asking misleading questions). Research had first suggested that younger children are more susceptible to suggestion of false information than adults. However, research has since indicated that younger children are much less likely to form false memories than older children and adults. Moreover, contrary to common sense, true memories are not more stable than false ones; studies have shown that false memories are actually more persistent than true memories. According to FTT, this pattern arises because false memories are supported by memory traces (gist traces) that are less susceptible to interference and forgetting than the traces that suppress false memories and also support true memories (verbatim traces).

FTT is not narrowly a model of false memory but rather a model of how memory interacts with higher reasoning processes. Essentially, the gist and verbatim traces of whatever the subject experiences have a major effect on the information that the subject falsely remembers. Verbatim and gist traces shape memory performance, which draws on both kinds of trace and depends on the available retrieval cues, on the accessibility of these kinds of memories, and on forgetting. Although not solely a model of false memory, FTT is able to predict true and false memories associated with narratives and sentences, which is especially relevant to eyewitness testimony. Five explanatory principles underlie FTT's account of false memory, laying out the differences between experiences involving gist and verbatim traces:
- Parallel storage of verbatim and gist traces. The surface content and the meaning content of an experience are stored in parallel: the surface forms of directly experienced events are represented in verbatim traces, while gist traces are stored at many levels of generality.
- Dissociated retrieval of gist and verbatim traces. Retrieval cues work best with verbatim traces when they reinstate events the subject actually experienced, whereas events that were not explicitly experienced work best as retrieval cues for gist traces.
- Differential forgetting. Surface memories held in verbatim traces typically decline more rapidly than memories that deal with meaning.
- Dual-opponent processes in false memory. Effects on false memory typically differ between retrieval cues for verbatim and gist traces: gist traces support false memory, because the meaning an item has for the subject makes it seem familiar, whereas verbatim processes suppress false memory by undoing that familiarity. An exception arises when a false memory is presented to the subject as a suggestion: in that case, retrieval cues for both gist and verbatim traces can support the false memory, while verbatim traces of the originally experienced events still suppress it.
- Variability in development. Retrieval of both gist and verbatim memory varies developmentally; both improve during development into adulthood. This holds especially for gist traces: as people get older, connecting meaning across different items and events improves.
- Retrieval phenomenology. Gist and verbatim processes assist with remembering an event vividly in different ways: recollected thoughts are more generic when based on gist traces, and involve conscious re-experiencing when based on verbatim traces.

Differences between true and false memories are also laid out by FTT. The theory predicts the associations and dissociations between true and false memories, with particular associations and dissociations observed under different kinds of conditions. Dissociation emerges in situations that involve reliance on verbatim traces; memories, whether true or false, are then based on different kinds of representations. FTT may also help explain the effects of false memories, misinformation, and false recognition in children, and how these may vary across developmental changes. While many false memories may be perceived as "dumb", recent studies on FTT have shown that the theory can also account for "smart" false memories, which are created from awareness of the meaning of certain experiences. While false memory research is still developing, the application of FTT to false memory has carried over to real-world settings, and FTT has been effective in explaining multiple false memory phenomena. In explaining false memories, FTT rejects the idea that offhand false memories are deemed true, and describes how gist and verbatim traces embed false memories.

Reasoning and decision-making

FTT, as it applies to reasoning, is adapted from dual-process models of human cognition. It differs from the traditional dual-process model in that it distinguishes impulsivity from intuition (which are combined in System 1 according to traditional dual-process theories) and claims that expertise and advanced cognition rely on intuition. The distinction between intuition and analysis depends on what kind of representation is used to process information. The mental representations described by FTT are categorized as either gist or verbatim representations: gist representations are bottom-line understandings of the meaning of information or experience, and are used in intuitive gist processing; verbatim representations are precise and detailed representations of the exact information or experience, and are used in analytic verbatim processing. Generally, most adults display what is called a "fuzzy processing preference": they rely on the least precise gist representations necessary to make a decision, despite parallel processing of both gist and verbatim representations. Both processes improve with age, though the verbatim process develops sooner than the gist process and is thus more heavily relied upon in adolescence. In this regard, the theory expands on research that has illustrated the role of memory representations in reasoning processes, an intersection that had previously been underexplored. However, in certain circumstances, FTT predicts independence between memory and reasoning, specifically between reasoning tasks that rely on gist representations and memory tests that rely on verbatim representations.
An example of this is research on the risky-choice framing task and working memory, in which better working memory is not associated with a reduction in bias. FTT thus explains inconsistencies or biases in reasoning as dependent on retrieval cues that access stored values and principles, themselves gist representations, which can be filtered through experience and cultural, affective, and developmental factors. This dependence on gist makes reasoning vulnerable to processing interference from overlapping classes of events, but it can also explain expert reasoning, in that a person can treat superficially different reasoning problems in the same way if the problems share an underlying gist.

Risk perception and probability judgments

FTT posits that when people are presented with statistical information, they extract representations of the gist of the information (qualitatively) as well as the exact verbatim information (quantitatively). The gist that is encoded is often a basic categorical distinction between no risk and some risk. However, in situations where both choices involve some level of uncertainty or risk, another level of precision is required, e.g., low risk versus high risk. An illustration of this principle can be found in FTT's explanation of the common framing effect.

Framing effects

Framing effects occur when linguistically different descriptions of equivalent options lead to inconsistent choices. A famous example of a risky-choice framing task is the Asian disease problem. This task requires participants to imagine that their country is about to face a disease expected to kill 600 people, and to choose between two programs to combat the disease. Subjects are presented with options framed as either gains (lives saved) or losses (lives lost). The possible options, together with the categorical gists that FTT posits are encoded, are displayed below:

Gain frame:
- Program A: 200 people will be saved. (Gist: some people are saved.)
- Program B: There is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved. (Gist: some people are saved or no one is saved.)

Loss frame:
- Program C: 400 people will die. (Gist: some people die.)
- Program D: There is a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die. (Gist: no one dies or some people die.)

It is commonly found that people prefer the sure option when the options are framed as gains (program A) and the risky option when they are framed as losses (program D), despite the fact that the expected values of all the programs are equivalent. This contrasts with the normative view that respondents who prefer the sure option in the positive frame should also prefer the sure option in the negative frame. The explanation for this effect, according to FTT, is that people tend to operate on the simplest gist that permits a decision. In this framing question, the gain frame presents a situation in which people prefer the gist of some people being saved over the possibility that some are saved or no one is saved; conversely, the possibility of some people dying or no one dying is preferable to the option that some people will surely die. Critical tests have been conducted that support this explanation over other theoretical explanations (e.g., prospect theory) by presenting a modified version of the task that eliminates mathematically redundant wording; e.g., program B would instead state only that "If program B is adopted, there is a 1/3 probability that 600 people will be saved." FTT predicts that eliminating the additional gist (the explicitly possible deaths in program B) would result in indifference and eliminate the framing effect, which is indeed what was found.
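As a quick check that the four programs are numerically equivalent, the expected number of deaths (out of 600) can be worked out directly; this is a worked calculation added for clarity, not part of the original task description:

$$
\begin{aligned}
\mathrm{EV}(A) &= 600 - 200 = 400 \text{ deaths} \\
\mathrm{EV}(B) &= \tfrac{1}{3}(0) + \tfrac{2}{3}(600) = 400 \text{ deaths} \\
\mathrm{EV}(C) &= 400 \text{ deaths} \\
\mathrm{EV}(D) &= \tfrac{1}{3}(0) + \tfrac{2}{3}(600) = 400 \text{ deaths}
\end{aligned}
$$

All four programs have the same expected outcome (200 saved, 400 dead); only the framing differs, which is why any systematic preference reversal between frames is a framing effect rather than a response to the numbers.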
Probability judgments and risk

The dual-process assumption of FTT has also been used to explain common biases of probability judgment, including the conjunction and disjunction fallacies. The conjunction fallacy occurs when people mistakenly judge a specific set of circumstances to be more probable than a more general set that includes the specific set. This fallacy is famously demonstrated by the Linda problem: given a description of a woman named Linda as an outspoken philosophy major concerned about discrimination and social justice, people judge "Linda is a bank teller and is active in the feminist movement" to be more probable than "Linda is a bank teller", despite the fact that the latter statement entirely includes the former. FTT explains this phenomenon as not being a matter of encoding, given that priming participants to understand the inclusive nature of the categories tends not to reduce the bias. Instead, it results from the salience of relational gist, which contributes to a tendency to judge relative numerosity instead of applying the principle of class inclusion. Errors of probability perception are also associated with the theory's predictions of contradictory relationships between risk perception and risky behavior: endorsement of accurate principles of objective risk is actually associated with greater risk-taking, whereas measures that assess global, gist-based judgments of risk have a protective effect (consistent with other predictions from FTT in the field of medical decision making). Since gist processing develops after verbatim processing as people age, this finding helps explain the increase in risk-taking that occurs during adolescence.
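The class-inclusion logic behind the Linda problem can be stated in one line. Writing $T$ for "Linda is a bank teller" and $F$ for "Linda is active in the feminist movement":

$$
P(T \wedge F) = P(T)\,P(F \mid T) \le P(T), \quad \text{since } P(F \mid T) \le 1.
$$

Any judgment that ranks the conjunction above the single event therefore violates the probability calculus, no matter how strongly the description of Linda suggests feminism.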
These findings are predicted by FTT (and related models), which suggest that people reason on the basis of simplified representations rather than on the literal information available. Medical decision-making Like other people, clinicians apply cognitive heuristics and fall into systematic errors which affect decisions in everyday life. Research has shown that patients and their physicians have difficulty understanding a host of numerical concepts, especially risks and probabilities, difficulties which often reflect problems with numeracy, or mathematical proficiency. For example, physicians and patients both demonstrate great difficulty understanding the probabilities of certain genetic risks and were prone to the same errors, despite vast differences in medical knowledge. Though traditional dual process theory generally predicts that decisions made by computation are superior to those made by intuition, FTT assumes the opposite: that intuitive processing is more sophisticated and capable of making better decisions, and that increases in expertise are accompanied by reliance on intuitive, gist-based reasoning rather than on literal, verbatim reasoning. FTT predicts that simply educating people with statistics regarding risk factors can hinder prevention efforts. Due to the low prevalence of HIV or cancer, for example, people tend to overestimate their risks, and consequently interventions stressing the actual numbers may move people toward complacency as opposed to risk reduction. When women learn that their actual risks for breast cancer are lower than they thought, they return for screening at a lower rate. Also, some interventions to discourage adolescent drug use by presenting the risks have been shown to be ineffective or can even backfire. The conclusion drawn from this evidence is that health-care professionals and health policymakers need to package, present, and explain information in more meaningful ways that facilitate forming an appropriate gist. Such strategies would include explaining quantities qualitatively, displaying information visually, and tailoring the format to trigger the appropriate gist and to cue the retrieval of health-related knowledge and values. Web-based interventions have been designed using these principles and have been found to increase patients' willingness to escalate care, as well as their knowledge and ability to make an informed choice. Implications Theory-driven research using principles from FTT provides empirically supported recommendations that can be applied in many fields. For example, it provides specific recommendations regarding interventions aimed at reducing adolescent risk taking. Moreover, according to FTT, precise information does not necessarily work to communicate health-related information, which has obvious implications for public policy and for procedures to improve treatment adherence in particular. Specifically, FTT principles suggest examples of how to display risk proportions so as to be comprehensible to both patients and health care professionals: Explain quantities qualitatively. Do not rely solely on numbers when presenting information. Explain quantities, percentages, and probabilities verbally, stressing conceptual understanding (the bottom-line meaning of information) over precise memorization of verbatim facts or numbers (e.g., a 20% chance of breast cancer is actually a "high" risk). Provide verbal guidance in disentangling classes and class-inclusion relationships. Display information visually.
When it is necessary to present information numerically, arrange numbers so that meaningful patterns or relationships among them are obvious. Make use of graphical displays which help people extract the relevant gist. Useful formats for conveying relative risks and other comparative information include simple bar graphs and risk ladders. Pie charts are good for representing relative proportions. Line graphs are optimal for conveying the gist of a linear trend, such as survival and mortality curves or the effectiveness of a drug over time. Stacked bar graphs are useful for showing absolute risks; and Venn diagrams, two-by-two grids, and 100-square grids are useful for disentangling numerators and denominators and for eliminating errors from probability judgments. Avoid distracting gists. Class-inclusion confusion is especially likely to produce errors when visually or emotionally salient details, a story, or a stereotype draws attention away from the relevant data in the direction of extraneous information. For example, given a display of seven cows and three horses, children are asked whether there are more cows or more animals. Until the age of ten, children often respond that there are more cows than animals, even after correctly counting the number in each class aloud. However, young children in the previous example are more likely to answer the problem correctly when they are not shown a picture with the visually hard-to-ignore detail, that is, the several figures of cows. Facilitate reexamination of problems. Encourage people to reexamine problems and edit their initial judgments. Although gist for quantities tends to be more available than the verbatim numbers, people can and do attend to the numbers to correct their first gist-based impressions when cued to do so and when they are given the time and opportunity, which can help reduce errors. In addition, memory principles in FTT provide recommendations for eyewitness testimony. Children are often called upon to testify in courts, most commonly in cases of maltreatment, divorce, and child custody. Contrary to common sense, FTT posits that children can be reliable witnesses as long as they are encouraged to report verbatim memories and their reports are protected from the suggestion of false information. More specifically: Children should be interviewed as soon as possible after the target event to reduce exposure to false suggestions and to facilitate retrieval of verbatim memories before their rapid decay. When reminding a witness of a target event, interviewers should present pictures or photos rather than words to describe it. Pictures of the actual target event help to increase retrieval of true memories, as they are better cues to verbatim memories than words. Avoid repeated questioning. FTT predicts, for example, that the repetition of questions that restate the gist of false information can increase the probability of false memories during subsequent interviews. Do not give children negative feedback about their performance during an interview. This procedure prompts children to provide additional information that is often false rather than true. See also Behavioral economics Cognitive development Decision-making Developmental psychology Framing Reason Risk References Cognitive psychology Applied probability Decision theory
Fuzzy-trace theory
Mathematics,Biology
6,762
285,622
https://en.wikipedia.org/wiki/Scalar%20curvature
In the mathematical field of Riemannian geometry, the scalar curvature (or the Ricci scalar) is a measure of the curvature of a Riemannian manifold. To each point on a Riemannian manifold, it assigns a single real number determined by the geometry of the metric near that point. It is defined by a complicated explicit formula in terms of partial derivatives of the metric components, although it is also characterized by the volume of infinitesimally small geodesic balls. In the context of the differential geometry of surfaces, the scalar curvature is twice the Gaussian curvature, and completely characterizes the curvature of a surface. In higher dimensions, however, the scalar curvature only represents one particular part of the Riemann curvature tensor. The definition of scalar curvature via partial derivatives is also valid in the more general setting of pseudo-Riemannian manifolds. This is significant in general relativity, where the scalar curvature of a Lorentzian metric is one of the key terms in the Einstein field equations. Furthermore, this scalar curvature is the Lagrangian density for the Einstein–Hilbert action, the Euler–Lagrange equations of which are the Einstein field equations in vacuum. The geometry of Riemannian metrics with positive scalar curvature has been widely studied. On noncompact spaces, this is the context of the positive mass theorem proved by Richard Schoen and Shing-Tung Yau in the 1970s, and reproved soon after by Edward Witten with different techniques. Schoen and Yau, and independently Mikhael Gromov and Blaine Lawson, developed a number of fundamental results on the topology of closed manifolds supporting metrics of positive scalar curvature. In combination with their results, Grigori Perelman's construction of Ricci flow with surgery in 2003 provided a complete characterization of these topologies in the three-dimensional case. Definition Given a Riemannian metric $g$, the scalar curvature $S$ (also written $\operatorname{Scal}$) is defined as the trace of the Ricci curvature tensor with respect to the metric: $S = \operatorname{tr}_g \operatorname{Ric}.$ The scalar curvature cannot be computed directly from the Ricci curvature since the latter is a (0,2)-tensor field; the metric must be used to raise an index to obtain a (1,1)-tensor field in order to take the trace. In terms of local coordinates one can write, using the Einstein notation convention, that: $S = g^{ij} R_{ij},$ where $R_{ij}$ are the components of the Ricci tensor in the coordinate basis, and where $g^{ij}$ are the inverse metric components, i.e. the components of the inverse of the matrix $(g_{ij})$ of metric components. Based upon the Ricci curvature being a sum of sectional curvatures, it is possible to also express the scalar curvature as $S(p) = \sum_{i \neq j} K(e_i, e_j),$ where $K$ denotes the sectional curvature and $e_1, \ldots, e_n$ is any orthonormal frame at $p$. By similar reasoning, the scalar curvature is twice the trace of the curvature operator. Alternatively, given the coordinate-based definition of Ricci curvature in terms of the Christoffel symbols, it is possible to express scalar curvature as $S = g^{ij}\left(\partial_k \Gamma^k_{ij} - \partial_j \Gamma^k_{ik} + \Gamma^k_{kl}\Gamma^l_{ij} - \Gamma^k_{jl}\Gamma^l_{ik}\right),$ where $\Gamma^k_{ij}$ are the Christoffel symbols of the metric, and $\partial_\sigma$ denotes the partial derivative in the σ-coordinate direction. The above definitions are equally valid for a pseudo-Riemannian metric. The special case of Lorentzian metrics is significant in the mathematical theory of general relativity, where the scalar curvature and Ricci curvature are the fundamental terms in the Einstein field equation.
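As a concrete check on the coordinate formula above, here is a minimal sketch (assuming the SymPy library is available; an illustration added here, not drawn from the article's references) that computes the scalar curvature of the round 2-sphere of radius r from the metric components via the Christoffel symbols, recovering S = 2/r² in agreement with the value quoted later for the 2-sphere:

```python
import sympy as sp

theta, phi, r = sp.symbols('theta phi r', positive=True)
x = [theta, phi]
# Metric of the round 2-sphere of radius r: ds^2 = r^2 dθ^2 + r^2 sin^2θ dφ^2
g = sp.Matrix([[r**2, 0], [0, r**2 * sp.sin(theta)**2]])
g_inv = g.inv()
n = 2

# Christoffel symbols Γ^k_ij = (1/2) g^{kl} (∂_i g_jl + ∂_j g_il - ∂_l g_ij)
Gamma = [[[sum(g_inv[k, l] * (sp.diff(g[j, l], x[i]) + sp.diff(g[i, l], x[j])
                              - sp.diff(g[i, j], x[l])) for l in range(n)) / 2
           for j in range(n)] for i in range(n)] for k in range(n)]

# Ricci tensor R_ij = ∂_k Γ^k_ij - ∂_j Γ^k_ik + Γ^k_kl Γ^l_ij - Γ^k_jl Γ^l_ik
def ricci(i, j):
    return sp.simplify(sum(
        sp.diff(Gamma[k][i][j], x[k]) - sp.diff(Gamma[k][i][k], x[j])
        + sum(Gamma[k][k][l] * Gamma[l][i][j] - Gamma[k][j][l] * Gamma[l][i][k]
              for l in range(n))
        for k in range(n)))

# Scalar curvature S = g^{ij} R_ij
S = sp.simplify(sum(g_inv[i, j] * ricci(i, j) for i in range(n) for j in range(n)))
print(S)  # prints 2/r**2, i.e. twice the Gaussian curvature 1/r^2
```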
However, unlike the Riemann curvature tensor or the Ricci tensor, the scalar curvature cannot be defined for an arbitrary affine connection, for the reason that the trace of a (0,2)-tensor field is ill-defined. However, there are other generalizations of scalar curvature, including in Finsler geometry. Traditional notation In the context of tensor index notation, it is common to use the letter $R$ to represent three different things: the Riemann curvature tensor, $R^d{}_{abc}$ or $R_{abcd}$; the Ricci tensor, $R_{ab}$; and the scalar curvature, $R$. These three are then distinguished from each other by their number of indices: the Riemann tensor has four indices, the Ricci tensor has two indices, and the Ricci scalar has zero indices. Other notations used for scalar curvature include $\mathrm{Sc}$, $\kappa$, $s$, and $\tau$. Those not using an index notation usually reserve $R$ for the full Riemann curvature tensor. Alternatively, in a coordinate-free notation one may use Riem for the Riemann tensor, Ric for the Ricci tensor and R for the scalar curvature. Some authors instead define Ricci curvature and scalar curvature with a normalization factor, so that $\operatorname{Ric}' = \frac{1}{n-1}\operatorname{Ric} \quad\text{and}\quad S' = \frac{1}{n(n-1)}\,S.$ The purpose of such a choice is that the Ricci and scalar curvatures become average values (rather than sums) of sectional curvatures. Basic properties It is a fundamental fact that the scalar curvature is invariant under isometries. To be precise, if $\varphi$ is a diffeomorphism from a space $M$ to a space $N$, the latter being equipped with a (pseudo-)Riemannian metric $g$, then the scalar curvature of the pullback metric $\varphi^* g$ on $M$ equals the composition of the scalar curvature of $g$ with the map $\varphi$, i.e. $S_{\varphi^* g} = S_g \circ \varphi$. This amounts to the assertion that the scalar curvature is geometrically well-defined, independent of any choice of coordinate chart or local frame. More generally, as may be phrased in the language of homotheties, the effect of scaling the metric by a constant factor $c$ is to scale the scalar curvature by the inverse factor $c^{-1}$. Furthermore, the scalar curvature is (up to an arbitrary choice of normalization factor) the only coordinate-independent function of the metric which, as evaluated at the center of a normal coordinate chart, is a polynomial in derivatives of the metric and has the above scaling property. This is one formulation of the Vermeil theorem. Bianchi identity As a direct consequence of the Bianchi identities, any (pseudo-)Riemannian metric has the property that $g^{ij}\nabla_i R_{jk} = \frac{1}{2}\,\partial_k S.$ This identity is called the contracted Bianchi identity. It has, as an almost immediate consequence, the Schur lemma stating that if the Ricci tensor is pointwise a multiple of the metric, then the metric must be Einstein (unless the dimension is two). Moreover, this says that (except in two dimensions) a metric is Einstein if and only if the Ricci tensor and scalar curvature are related by $\operatorname{Ric} = \frac{S}{n}\,g,$ where $n$ denotes the dimension. The contracted Bianchi identity is also fundamental in the mathematics of general relativity, since it identifies the Einstein tensor as a fundamental quantity. Ricci decomposition Given a (pseudo-)Riemannian metric on a space of dimension $n$, the scalar curvature part of the Riemann curvature tensor is the (0,4)-tensor field $\frac{S}{n(n-1)}\left(g_{ik}g_{jl} - g_{il}g_{jk}\right).$ (This follows the sign convention for the Riemann tensor under which the round sphere has positive sectional curvatures.) This tensor is significant as part of the Ricci decomposition; it is orthogonal to the difference between the Riemann tensor and itself. The other two parts of the Ricci decomposition correspond to the components of the Ricci curvature which do not contribute to scalar curvature, and to the Weyl tensor, which is the part of the Riemann tensor which does not contribute to the Ricci curvature.
Put differently, the above tensor field is the only part of the Riemann curvature tensor which contributes to the scalar curvature; the other parts are orthogonal to it and make no such contribution. There is also a Ricci decomposition for the curvature of a Kähler metric. Basic formulas The scalar curvature of a conformally changed metric can be computed: for $\tilde{g} = e^{2f} g$ in dimension $n$, $\tilde{S} = e^{-2f}\left(S - 2(n-1)\,\Delta f - (n-1)(n-2)\,|df|^2\right),$ using the convention $\Delta = \operatorname{div}\operatorname{grad}$ for the Laplace–Beltrami operator. Alternatively, writing the conformal factor as $\tilde{g} = u^{4/(n-2)} g$ for $n \geq 3$, $\tilde{S} = u^{-\frac{n+2}{n-2}}\left(-\frac{4(n-1)}{n-2}\,\Delta u + S u\right).$ Under an infinitesimal change $h$ of the underlying metric, one has $\left.\frac{d}{dt}\right|_{t=0} S_{g+th} = \operatorname{div}_g \operatorname{div}_g h - \Delta_g(\operatorname{tr}_g h) - \langle h, \operatorname{Ric}\rangle_g.$ This shows in particular that the principal symbol of the differential operator which sends a metric to its scalar curvature is given by $\sigma_\xi(h) = |\xi|_g^2\,\operatorname{tr}_g h - h(\xi^\sharp, \xi^\sharp).$ Furthermore, the adjoint of the linearized scalar curvature operator is $(DS_g)^*(f) = \operatorname{Hess}_g f - (\Delta_g f)\,g - f \operatorname{Ric}_g,$ and it is an overdetermined elliptic operator in the case of a Riemannian metric. It is a straightforward consequence of the first variation formulas that, to first order, a Ricci-flat Riemannian metric on a closed manifold cannot be deformed so as to have either positive or negative scalar curvature. Also to first order, an Einstein metric on a closed manifold cannot be deformed under a volume normalization so as to increase or decrease scalar curvature. Relation between volume and Riemannian scalar curvature When the scalar curvature is positive at a point, the volume of a small geodesic ball about the point is smaller than that of a ball of the same radius in Euclidean space. On the other hand, when the scalar curvature is negative at a point, the volume of a small ball is larger than it would be in Euclidean space. This can be made more quantitative, in order to characterize the precise value of the scalar curvature S at a point p of a Riemannian n-manifold $(M, g)$. Namely, the ratio of the n-dimensional volume of a ball of radius ε in the manifold to that of a corresponding ball in Euclidean space is given, for small ε, by $\frac{\operatorname{vol}\bigl(B_\varepsilon(p) \subset M\bigr)}{\operatorname{vol}\bigl(B_\varepsilon \subset \mathbb{R}^n\bigr)} = 1 - \frac{S}{6(n+2)}\,\varepsilon^2 + O(\varepsilon^4).$ Thus, the second derivative of this ratio, evaluated at radius ε = 0, is exactly minus the scalar curvature divided by 3(n + 2). Boundaries of these balls are (n − 1)-dimensional spheres of radius ε; their hypersurface measures ("areas") satisfy the following equation: $\frac{\operatorname{area}\bigl(\partial B_\varepsilon(p) \subset M\bigr)}{\operatorname{area}\bigl(\partial B_\varepsilon \subset \mathbb{R}^n\bigr)} = 1 - \frac{S}{6n}\,\varepsilon^2 + O(\varepsilon^4).$ These expansions generalize certain characterizations of Gaussian curvature from dimension two to higher dimensions. Special cases Surfaces In two dimensions, scalar curvature is exactly twice the Gaussian curvature. For an embedded surface in Euclidean space R3, this means that $S = \frac{2}{\rho_1 \rho_2},$ where $\rho_1, \rho_2$ are the principal radii of the surface. For example, the scalar curvature of the 2-sphere of radius r is equal to 2/r2. The 2-dimensional Riemann curvature tensor has only one independent component, and it can be expressed in terms of the scalar curvature and metric area form. Namely, in any coordinate system, one has $2R_{1212} = S \det(g_{ij}) = S\left(g_{11}g_{22} - g_{12}^2\right).$ Space forms A space form is by definition a Riemannian manifold with constant sectional curvature. Space forms are locally isometric to one of the following types: Euclidean space, with vanishing scalar curvature; the n-sphere of radius r, with sectional curvature 1/r2 and scalar curvature n(n − 1)/r2; hyperbolic space of curvature −1/r2, with scalar curvature −n(n − 1)/r2. The scalar curvature is also constant when given a Kähler metric of constant holomorphic sectional curvature. Products The scalar curvature of a product M × N of Riemannian manifolds is the sum of the scalar curvatures of M and N. For example, for any smooth closed manifold M, M × S2 has a metric of positive scalar curvature, simply by taking the 2-sphere to be small compared to M (so that its curvature is large). This example might suggest that scalar curvature has little relation to the global geometry of a manifold. In fact, it does have some global significance, as discussed below.
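To make the product example concrete, here is a short worked computation (a sketch using only the product formula and the sphere value quoted above):

```latex
% Scalar curvature of a product with a small round 2-sphere:
% the product formula gives, at every point of M x S^2(r),
S_{M \times S^2(r)} \;=\; S_M + \frac{2}{r^2}.
% Since S_M attains a finite minimum on the closed manifold M,
% choosing r small enough that 2/r^2 > -\min_M S_M makes the
% total scalar curvature positive everywhere.
```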
In both mathematics and general relativity, warped product metrics are an important source of examples. For example, the general Robertson–Walker spacetime, important to cosmology, is the Lorentzian metric $-dt^2 + a(t)^2\,\bar{g}$ on $I \times M$, where $\bar{g}$ is a constant-curvature Riemannian metric on a three-dimensional manifold $M$ and $I$ is an interval. The scalar curvature of the Robertson–Walker metric is given by $S = 6\,\frac{a(t)\,a''(t) + a'(t)^2 + k}{a(t)^2},$ where $k$ is the constant curvature of $\bar{g}$. Scalar-flat spaces It is automatic that any Ricci-flat manifold has zero scalar curvature; the best-known spaces in this class are the Calabi–Yau manifolds. In the pseudo-Riemannian context, this also includes the Schwarzschild spacetime and Kerr spacetime. There are metrics with zero scalar curvature but nonvanishing Ricci curvature. For example, there is a complete Riemannian metric on the tautological line bundle over real projective space, constructed as a warped product metric, which has zero scalar curvature but nonzero Ricci curvature. This may also be viewed as a rotationally symmetric Riemannian metric of zero scalar curvature on the cylinder $\mathbb{R} \times S^{n-1}$. Yamabe problem The Yamabe problem was resolved in 1984 by the combination of results found by Hidehiko Yamabe, Neil Trudinger, Thierry Aubin, and Richard Schoen. They proved that every smooth Riemannian metric on a closed manifold can be multiplied by some smooth positive function to obtain a metric with constant scalar curvature. In other words, every Riemannian metric on a closed manifold is conformal to one with constant scalar curvature. Riemannian metrics of positive scalar curvature For a closed Riemannian 2-manifold M, the scalar curvature has a clear relation to the topology of M, expressed by the Gauss–Bonnet theorem: the total scalar curvature of M is equal to 4π times the Euler characteristic of M. For example, the only closed surfaces with metrics of positive scalar curvature are those with positive Euler characteristic: the sphere S2 and RP2. Also, those two surfaces have no metrics with scalar curvature ≤ 0. Nonexistence results In the 1960s, André Lichnerowicz found that on a spin manifold, the difference between the square of the Dirac operator and the tensor Laplacian (as defined on spinor fields) is given exactly by one-quarter of the scalar curvature: $D^2 = \nabla^*\nabla + \tfrac{1}{4}S.$ This is a fundamental example of a Weitzenböck formula. As a consequence, if a Riemannian metric on a closed manifold has positive scalar curvature, then there can exist no harmonic spinors. It is then a consequence of the Atiyah–Singer index theorem that, for any closed spin manifold with dimension divisible by four and of positive scalar curvature, the Â genus must vanish. This is a purely topological obstruction to the existence of Riemannian metrics with positive scalar curvature. Lichnerowicz's argument using the Dirac operator can be "twisted" by an auxiliary vector bundle, with the effect of only introducing one extra term into the Lichnerowicz formula. Then, following the same analysis as above except using the families version of the index theorem and a refined version of the Â genus known as the α-genus, Nigel Hitchin proved that in certain dimensions there are exotic spheres which do not have any Riemannian metrics of positive scalar curvature. Gromov and Lawson later extensively employed these variants of Lichnerowicz's work. One of their resulting theorems introduces the homotopy-theoretic notion of enlargeability and says that an enlargeable spin manifold cannot have a Riemannian metric of positive scalar curvature.
As a corollary, a closed manifold with a Riemannian metric of nonpositive curvature, such as a torus, has no metric with positive scalar curvature. Gromov and Lawson's various results on the nonexistence of Riemannian metrics with positive scalar curvature support a conjecture on the vanishing of a wide variety of topological invariants of any closed spin manifold with positive scalar curvature. This (in a precise formulation), in turn, would be a special case of the strong Novikov conjecture for the fundamental group, which deals with the K-theory of C*-algebras. This in turn is a special case of the Baum–Connes conjecture for the fundamental group. In the special case of four-dimensional manifolds, the Seiberg–Witten equations have been usefully applied to the study of scalar curvature. Similarly to Lichnerowicz's analysis, the key is an application of the maximum principle to prove that solutions to the Seiberg–Witten equations must be trivial when the scalar curvature is positive. Also in analogy to Lichnerowicz's work, index theorems can guarantee the existence of nontrivial solutions of the equations. Such analysis provides new criteria for the nonexistence of metrics of positive scalar curvature. Claude LeBrun pursued such ideas in a number of papers. Existence results By contrast to the above nonexistence results, Lawson and Yau constructed Riemannian metrics of positive scalar curvature from a wide class of nonabelian effective group actions. Later, Schoen–Yau and Gromov–Lawson (using different techniques) proved the fundamental result that the existence of Riemannian metrics of positive scalar curvature is preserved by topological surgery in codimension at least three, and in particular is preserved by the connected sum. This establishes the existence of such metrics on a wide variety of manifolds. For example, it immediately shows that the connected sum of an arbitrary number of copies of spherical space forms and generalized cylinders has a Riemannian metric of positive scalar curvature. Grigori Perelman's construction of Ricci flow with surgery has, as an immediate corollary, the converse in the three-dimensional case: a closed orientable 3-manifold with a Riemannian metric of positive scalar curvature must be such a connected sum. Based upon the surgery allowed by the Gromov–Lawson and Schoen–Yau construction, Gromov and Lawson observed that the h-cobordism theorem and analysis of the cobordism ring can be directly applied. They proved that, in dimension greater than four, any non-spin simply connected closed manifold has a Riemannian metric of positive scalar curvature. Stephan Stolz completed the existence theory for simply-connected closed manifolds in dimension greater than four, showing that, as long as the α-genus is zero, there is a Riemannian metric of positive scalar curvature. According to these results, for closed manifolds, the existence of Riemannian metrics of positive scalar curvature is completely settled in the three-dimensional case and in the case of simply-connected manifolds of dimension greater than four. Kazdan and Warner's trichotomy theorem The sign of the scalar curvature has a weaker relation to topology in higher dimensions. Given a smooth closed manifold M of dimension at least 3, Kazdan and Warner solved the prescribed scalar curvature problem, describing which smooth functions on M arise as the scalar curvature of some Riemannian metric on M.
Namely, M must be of exactly one of the following three types: (1) every function on M is the scalar curvature of some metric on M; (2) a function on M is the scalar curvature of some metric on M if and only if it is either identically zero or negative somewhere; (3) a function on M is the scalar curvature of some metric on M if and only if it is negative somewhere. Thus every manifold of dimension at least 3 has a metric with negative scalar curvature, in fact of constant negative scalar curvature. Kazdan–Warner's result focuses attention on the question of which manifolds have a metric with positive scalar curvature, that being equivalent to property (1). The borderline case (2) can be described as the class of manifolds with a strongly scalar-flat metric, meaning a metric with scalar curvature zero such that M has no metric with positive scalar curvature. Akito Futaki showed that strongly scalar-flat metrics (as defined above) are extremely special. For a simply connected Riemannian manifold M of dimension at least 5 which is strongly scalar-flat, M must be a product of Riemannian manifolds with holonomy group SU(n) (Calabi–Yau manifolds), Sp(n) (hyperkähler manifolds), or Spin(7). In particular, these metrics are Ricci-flat, not just scalar-flat. Conversely, there are examples of manifolds with these holonomy groups, such as the K3 surface, which are spin and have nonzero α-invariant, hence are strongly scalar-flat. See also Basic introduction to the mathematics of curved spacetime Yamabe invariant Kretschmann scalar Notes References Further reading Curvature tensors Riemannian geometry Trace theory
Scalar curvature
Engineering
4,061
9,933,030
https://en.wikipedia.org/wiki/SEDAT
SEDAT ("Space Environment DATa System") provides access to near-original satellite data on the space environment in order to perform analyses and queries needed for evaluation of space environment hazards. History The development was performed between 1999 and 2001 by the Rutherford Appleton Laboratory (RAL) and funded by the European Space Agency via its Space Environments and Effects Section. Description The aim of the SEDAT project is to develop a new approach to the engineering analysis of the spacecraft charged-particle environments. The project assembled a database containing a large and comprehensive set of data about that environment as measured in-situ by a number of space plasma missions. The user is able to select a set of space environment data appropriate to the engineering problem under study. The project developed a set of software tools, which can operate on the data retrieved from the SEDAT database. These tools allow the user to carry out a wide range of engineering analyses. This approach differs from traditional space environment engineering studies. In the latter the space environment is characterised by a model that is a synthesis of previous observations. However, in SEDAT the environment is characterised directly by the observations. This approach offers several advantages to the engineering analyst: The data used in the study can be tailored more precisely to the engineering problem under study. The analysis is not constrained by selection effects within the model used. The analyst may tailor the processing of the data to the problem under study. The analysis is not constrained by binning or other processing effects that were used to generate the model. New data are readily incorporated in the database and thus made available for engineering analyses. The traditional approach would require the production, validation and dissemination of an updated model, which is a far more time-consuming activity. The SEDAT concept foresees access to distributed datasets, capture of processing methods and openness in analysis tools. SEDAT implementation The implementation of SEDAT is divided into three main parts: Construction of the SEDAT database, based in the STPDF. Production of the analysis tools to be used in conjunction with the SEDAT database, based on IDL routines. Execution of four small exercises, using the SEDAT database and tools, to demonstrate that these functions operate correctly. Four demonstrations of the SEDAT system were performed in the original study: Update of solar proton model (RAL-SED-TN-0301) Radiation dose calculation for interplanetary mission (RAL-SED-TN-0302) Correlation of electron fluxes with spacecraft anomalies (RAL-SED-TN-0303) Electron fluorescence on XMM (RAL-SED-TN-0304) References External links Spaceweather SEDAT Project Description Environmental science databases European Space Agency Research institutes in Oxfordshire Space science
SEDAT
Astronomy,Environmental_science
571
13,916,379
https://en.wikipedia.org/wiki/Location%20routing%20number
A location routing number (LRN) is an identification for a telephone switch for the purpose of routing telephone calls through the public switched telephone network (PSTN) in the United States. This identification has the format of a telephone number, in accordance with the North American Numbering Plan (NANP). The association of a location routing number with a telephone number is required for local number portability. Function In the US, the location routing number is a ten-digit number following the specifications of the North American Numbering Plan. The LRN, which identifies a switching port for a local telephone exchange, is stored in a database called a Service Control Point (SCP). When a telephone number has been dialed, the local telephone exchange queries, or "dips", a routing database, usually the SCP, for the LRN associated with the subscriber number. The LRN removes the need for the public telephone number to identify the local exchange carrier. If a subscriber changes to another telephone service provider, the current telephone number can be retained, and only the LRN needs to be changed. In addition to supporting service provider telephone number portability, an LRN also supports the possibility of two other types of number portability: service portability (for example, ordinary service to ISDN) and geographic portability. History In 1996, the US Congress mandated a change in local telephone service that allows any carrier to enter a local market. The new regulation provided for local number portability (LNP), which permitted telephone numbers to be serviced from wire centers other than those indicated by the NPA-NXX prefixes of the numbers. In practice, a subscriber can keep a telephone number when moving to another exchange area by a process called porting a telephone number. Every ported telephone number has an LRN assigned. Virginia-based NeuStar was contracted to develop and maintain the Number Portability Administration Center (NPAC) to support the implementation of local number portability. On March 26, 2015, the FCC approved the recommendation of the North American Numbering Council (NANC) to award the contract to Telcordia Technologies, doing business as iconectiv, as the next Local Number Portability Administrator (LNPA), after 18 years of management by Neustar.[4][5] The reasons for the change were cited as cost savings.[6] With commission oversight, North American Portability Management, LLC (NAPM) negotiated the terms of a Master service agreement (MSA) with iconectiv.[4][6] The MSA was submitted to the FCC for review and approval in March 2016.[7] The iconectiv contract was finalized in August 2016.[8] iconectiv officially became the administrator of the NPAC on May 25, 2018.[9] See also Signalling System No. 7 References External links http://www.nanc-chair.org/docs/nowg/Nov03_LRN_Cites_Document.doc Telephone numbers
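The "dip" described in the Function section is, in essence, a lookup performed before routing on the dialed number's prefix. A minimal Python sketch of that logic (the numbers, the ported-number table, and the helper names are hypothetical illustrations, not an actual NPAC or SCP interface):

```python
# Hypothetical ported-number table: dialed number -> LRN of the switch
# that now serves it. Numbers not in the table are routed on their own
# NPA-NXX prefix, as they were before local number portability.
ported = {
    "3015550101": "2025550000",  # subscriber kept this number after switching carriers
}

def route(dialed: str) -> str:
    """Return the ten-digit number whose NPA-NXX prefix decides the route."""
    lrn = ported.get(dialed)       # the LNP database "dip"
    return lrn if lrn else dialed  # route on the LRN if ported, else on the number

print(route("3015550101"))  # "2025550000": routed to the new serving switch
print(route("3015550199"))  # "3015550199": non-ported, routed by its own prefix
```

If the subscriber later moves to yet another carrier, only the LRN stored in the table changes; the dialed number itself stays the same, which is the point of the mechanism.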
Location routing number
Mathematics
626
49,321,089
https://en.wikipedia.org/wiki/Satellogic
Satellogic Inc. is a company specializing in Earth-observation satellites, founded in 2010 by Emiliano Kargieman and Gerardo Richarte. Satellogic began launching its Aleph-1 constellation of ÑuSat satellites in May 2016. On 19 December 2019, Satellogic announced that it had received US$50 million in funding in its latest funding round. In January 2022, the company went public through a merger with a special-purpose acquisition company (CF Acquisition Corp. V). Satellogic is a publicly traded company on the Nasdaq exchange. History In the summer of 2010, after spending some time at the Ames Research Center in Mountain View, California, Emiliano Kargieman started developing the concepts that would become Satellogic. He realized there was a great opportunity: to bring to the satellite services industry many of the lessons learned during the last two decades of working with information technology, and to build a platform that provides spatial information services without major investments in infrastructure. Together with his friend and colleague Gerardo Richarte, he started Satellogic. Since 2010, the company has grown from a small start-up to a multinational company with customers around the globe. Satellogic made Argentina's first two nanosatellites, CubeBug-1 (nickname El Capitán Beto, COSPAR 2013-018D, launched 26 April 2013 on a Long March 2D launch vehicle) and CubeBug-2 (nickname Manolito, also known as LUSAT-OSCAR 74 or LO 74, COSPAR 2013-066AA, launched 21 November 2013 on a Dnepr launch vehicle). Their third satellite, BugSat 1, launched in June 2014. CubeBug-1, CubeBug-2 and BugSat 1 all served as technology tests and demonstrations for the ÑuSat satellites. They also had amateur radio payloads. The CubeBug project was sponsored by the Argentinian Ministry of Science, Technology and Productive Innovation. Satellogic began launching its Aleph-1 constellation of ÑuSat satellites in May 2016. On 19 December 2019, Satellogic announced that it had received US$50 million in funding in its latest funding round. In January 2022, the company went public through a merger with a special-purpose acquisition company (CF Acquisition Corp. V). In connection with the closing of the business combination and other transactions, Satellogic received gross proceeds of approximately $262 million to fund its satellite constellation. Satellogic planned to have 202 satellites in orbit by 2025 and expected revenue of $480 million in 2025. Former US Secretary of the Treasury Steven Mnuchin and Cantor Fitzgerald CEO Howard Lutnick invested in the SPAC merging with Satellogic and became major investors. Satellogic announced a partnership with Palantir Technologies in 2022. As of June 2024, Satellogic had 26 satellites in operation in space and a staff of about 140 people. Its revenues for all of 2022 were $6-8 million, and for 2023, $10 million. After President-elect Donald Trump announced in November 2024 that he would make Howard Lutnick the new US Secretary of Commerce, Lutnick resigned from the Satellogic board of directors. At the same time, his company Cantor Fitzgerald increased its stake in Satellogic. Technology Satellogic is building a 200+ satellite constellation as a scalable Earth observation platform with the ability to remap the entire planet weekly at high resolution, providing affordable geospatial insights for daily decision making. Satellogic created a small, light, and inexpensive system that can be produced at scale.
Each commercial satellite carries two payloads: one for high-resolution multispectral imaging and another for a hyperspectral camera of 30 m GSD and 150 km swath (at a 470 km altitude). Satellite specifications Satellogic's satellites are built to a common set of specifications. Products and services Dedicated satellite constellations Satellogic markets "Dedicated Satellite Constellations" (DSC) as an opportunity for customers to develop a national geospatial imaging program at unmatched frequency, resolution and cost. This program is aimed at municipal, state and national governments eager to gain exclusive control of a fleet of satellites over an area of interest. It can be used to support key decisions, to manage policy impact, to measure investment and socio-economic progress, and to serve as an open environment to foster collaboration and data and information sharing. DSC satellites are registered and flagged by the operating entity. With complete control of the satellites over the designated area of interest, the operator can directly task the satellites from its own ground station, allowing frequent remapping and the ability to revisit specific points of interest several times per day. Total control of imagery download and private cloud archiving guarantees prompt and secure data management by the operator's own team. In 2019, Satellogic signed its first agreement to deliver a dedicated satellite constellation for exclusive geospatial analytics in Henan Province, China. DSC was nominated for Via Satellite's "2019 Satellite Technology of the Year" award. Data services Satellogic offers 1-meter resolution multispectral imagery and 30-meter resolution hyperspectral satellite imagery. Geospatial analytics Satellogic's data science and AI team converts images into layers available as data services in its online platform, including object identification, classification, semantic change detection and predictive models, within a broad range of industries including agriculture, forestry, energy, finance and insurance, as well as applications for the civilian area of governments, such as cartography, environmental monitoring and critical infrastructure, among others. Offices Satellogic's R&D facilities are located in Buenos Aires and Córdoba, Argentina. The AIT facility is located in Montevideo, Uruguay. The data-technology center is in Barcelona, Spain; there is a finance office in Charlotte, United States, and a business development center in Miami, United States. Satellite launches Satellogic has launched 46 satellites from the US (with SpaceX), China, Russia and French Guiana. While the first three spacecraft were early prototypes, the following 43 satellites corresponded to four consecutive iterations and incremental versions of Satellogic's ÑuSat design (Mark I to Mark V). Since 2018, Satellogic has had a tradition of naming its spacecraft after important women scientists. On 19 January 2021, it was announced that SpaceX would become its preferred rideshare vendor, with the first launch due in June 2021. In May 2022, a new multi-launch agreement with SpaceX for the next ~60 satellites was announced.
See also ÑuSat References Commercial spaceflight Companies listed on the Nasdaq Special-purpose acquisition companies Technology companies established in 2010 Remote sensing companies Geospatial intelligence Satellites Earth observation satellites Earth observation projects Earth observation Satellites in geosynchronous orbit Nanosatellites Satellite constellations Imaging reconnaissance satellites Prototypes SpaceX Spaceflight
Satellogic
Astronomy
1,424
6,573,614
https://en.wikipedia.org/wiki/Mesoamerican%20architecture
Mesoamerican architecture is the set of architectural traditions produced by the pre-Columbian cultures and civilizations of Mesoamerica, traditions which are best known in the form of public, ceremonial and urban monumental buildings and structures. The distinctive features of Mesoamerican architecture encompass a number of different regional and historical styles, which are nevertheless significantly interrelated. These styles developed throughout the different phases of Mesoamerican history as a result of the intensive cultural exchange between the different cultures of the Mesoamerican culture area over thousands of years. Mesoamerican architecture is mostly noted for its pyramids, which are the largest such structures outside of Ancient Egypt. One interesting and widely researched topic is the relation between cosmovision, religion, geography, and architecture in Mesoamerica. Much evidence suggests that many traits of Mesoamerican architecture were governed by religious and mythological ideas. For example, the layout of most Mesoamerican cities seems to be influenced by the cardinal directions and their mythological and symbolic meanings in Mesoamerican culture. Another part of Mesoamerican architecture is its iconography. The monumental architecture of Mesoamerica was decorated with images of religious and cultural significance, and also, in many cases, with writing in some of the Mesoamerican writing systems. Iconographic decorations and texts on buildings are important contributors to the overall current knowledge of pre-Columbian Mesoamerican society, history and religion. Chronology The following tables show the different phases of Mesoamerican architecture and archeology and correlate them with the cultures, cities, styles and specific buildings that are notable from each period. Urban planning and cosmovision Cosmos and its replication Symbolism An important part of the Mesoamerican religious system was replicating their beliefs in concrete, tangible forms, in effect making the world an embodiment of their beliefs. This meant that the Mesoamerican city was constructed to be a microcosm, manifesting the same division that existed in the religious, mythical geography: a division between the underworld and the human world. The underworld was represented by the direction north, and many structures and buildings related to the underworld, such as tombs, are often found in a city's northern half. The southern part represented life, sustenance, and rebirth, and often contained structures related to the continuity and daily function of the city-state, such as monuments depicting the noble lineages, or residential quarters, markets, etc. Between the two halves of the north/south axis was the plaza, often containing stelae resembling the world tree, the Mesoamerican axis mundi, and a ballcourt, which served as a crossing point between the two worlds. Some Mesoamericanists argue that in religious symbolism the pyramids of Mesoamerican monumental architecture were mountains, stelae were trees, and wells, ballcourts and cenotes were caves that provided access to the underworld. Orientation Mesoamerican architecture is often designed to align to specific celestial events. Some pyramids, temples, and other structures were designed to achieve special lighting effects on particular days important in the Mesoamerican cosmovision. A famous example is the "El Castillo" pyramid at Chichen Itza, where a light-and-shadow effect can be observed during several weeks around the equinoxes.
Contrary to common opinion, however, there is no evidence that this phenomenon was the result of a purposeful design intended to commemorate the equinoxes. Much Mesoamerican architecture is also aligned roughly 15° east of north. Vincent H. Malmstrom has argued that this is because of a general wish to align the pyramids to face the sunset on August 13, which was the beginning date of the Maya Long Count calendar. However, recent research has shown that the earliest orientations marking sunsets on August 13 (and April 30) occur outside of the Maya area. Their purpose must have been to record the dates separated by a period of 260 days (from August 13 to April 30), equivalent to the length of the sacred Mesoamerican calendrical count. In general, the orientations in Mesoamerican architecture tend to mark dates separated by multiples of 13 and 20 days, i.e. of the basic periods of the calendrical system. The distribution of these dates in the year suggests that the orientations allowed the use of observational calendars that facilitated the prediction of agriculturally significant dates. These conclusions are supported by the results of systematic research accomplished in various Mesoamerican regions, including central Mexico, the Maya Lowlands, Oaxaca, the Gulf Coast lowlands, and western and northern Mesoamerica. While solar orientations prevail, some prominent buildings were aligned to Venus extremes, a notable example being the Governor's Palace at Uxmal. Orientations to lunar standstill positions on the horizon have also been documented; they are particularly common along the northeast coast of the Yucatán peninsula, where the worship of the goddess Ixchel, associated with the Moon, is known to have had an outstanding importance during the Postclassic period. The Plaza Nearly every known ancient Mesoamerican city had one or more formal public plazas. They are typically large, impressive spaces, surrounded by tall pyramids, temples, and other important buildings. Activities that would take place in these plazas included private rituals, periodic markets, mass spectator ceremonies, participatory public ceremonies, feasts, and other popular celebrations. The size of the main plazas in Mesoamerican cities differed greatly, the largest being located in Tenochtitlan with an estimated size of 115,000 square meters. This plaza is an outlier due to the population of the city being so large. The next largest estimated plaza is located on the Gulf Coast in the city of Cempoala (or Zempoala), measuring 48,088 square meters. Most plazas average around 3,000 square meters, the smallest being at the site of Paxte, which is 528 square meters. Some cities contain many smaller plazas throughout, while some focus their attention on a significantly large main plaza. Tenochtitlan Tenochtitlan was an Aztec city that thrived from 1325 to 1521. The city was built on an island, surrounded on all sides by Lake Texcoco. It consisted of an elaborate system of canals, aqueducts, and causeways allowing the city to supply its residents. The island was about 12 square kilometers and had a population of approximately 125,000 people, making it the largest Mesoamerican city ever recorded. The main plaza of Tenochtitlan was approximately 115,000 square meters, or roughly 28 acres. The main temple of Tenochtitlan, known as the Templo Mayor or the Great Temple, was 100 meters by 80 meters at its base, and 60 meters tall. The city ultimately fell in 1521, when it was destroyed by the Spanish conquistador Hernán Cortés.
Cortés and the Spaniards raided the city for its gold supply and artifacts, leaving little behind of the Aztecs' civilization. At the monumental Templo Mayor of Tenochtitlan, archaeologists discovered that the Aztec enlarged the temple seven times, with five extra façades, but always kept intact the basic dual symbolism of the rain god Tlaloc and the tribute/war god Huitzilopochtli. The Mexican archaeologist Eduardo Matos Moctezuma has shown that the symbolic and ritual life of this imperial shrine unified the patterns of forced tributary payments from hundreds of communities with the agricultural and hydraulic subsystems of food production. Pyramids Often the most important religious temples sat atop the towering pyramids, presumably as the closest place to the heavens. While recent discoveries point toward the extensive use of pyramids as tombs, the temples themselves seem to rarely, if ever, contain burials. Residing atop the pyramids, some over two hundred feet tall, such as that at El Mirador, the temples were impressive and decorated structures themselves. Commonly topped with a roof comb, or superficial grandiose wall, these temples might have served as a type of propaganda. Pyramid of the Sun The Pyramid of the Sun is the largest structure created in the city of Teotihuacan and one of the largest structures in the entire Western Hemisphere. It stands about 216 feet tall and measures roughly 720 feet (220 m) along each side at its base. The pyramid is located on the east side of the Avenue of the Dead, which runs almost directly down the center of the city of Teotihuacan. After archaeologists discovered animal remains, masks, figurines (including one of the Aztec god Huehueteotl), and shards of clay pots in the pyramid, it was agreed that the pyramid was likely a ritual temple at one point. Temple of the Feathered Serpent The Temple of the Feathered Serpent was constructed after the Pyramid of the Sun and the Pyramid of the Moon had been completed. The temple marks one of the first uses of the architectural style of talud-tablero. Its surfaces were illustrated with murals, just like those of many temples built at the same time and by the same people. The tableros featured large serpent heads complete with elaborate headdresses. The feathered serpent refers to the Aztec god Quetzalcoatl. Ballcourts The Mesoamerican ballgame ritual was a symbolic journey between the underworld and the world of the living, and many ball courts are found in the mid-part of the city, functioning as a connection between the northern and southern halves of the city. All but the earliest ball courts are masonry structures. Over 1300 ball courts have been identified, and although there is a tremendous variation in size, they all have the same general shape: a long narrow alley flanked by two walls with horizontal, sloping, and sometimes vertical faces. The later vertical faces, such as those at Chichen Itza and El Tajin, are often covered with complex iconography and scenes of human sacrifice. Although the alleys in early ball courts were open-ended, later ball courts had enclosed end-zones, giving the structure an I-shape when viewed from above. The playing alley may be at ground level, or the ball court may be "sunken". Ball courts were no mean feats of engineering. One of the sandstone blocks on El Tajin's South Ball court is 11 m long and weighs more than 10 tons. Residential quarters and elite residences Large and often highly decorated, the palaces usually sat close to the center of a city and housed the population's elite.
Any exceedingly large royal palace, or one consisting of many chambers on different levels, might be referred to as an acropolis. However, often these were one-story structures consisting of many small chambers and typically at least one interior courtyard; these structures appear to take into account the functionality required of a residence, as well as the decoration required for their inhabitants' stature. Archaeologists seem to agree that many palaces are home to various tombs. At Copán, beneath over four hundred years of later remodeling, a tomb for one of the ancient rulers has been discovered, and the North Acropolis at Tikal appears to have been the site of numerous burials during the Terminal Pre-classic and Early Classic periods. Building materials The most surprising aspect of the great Mesoamerican structures is their lack of many advanced technologies that would seem to be necessary for such constructions. Lacking metal tools, Mesoamerican builders required one thing in abundance: manpower. Yet, beyond this enormous requirement, the remaining materials seem to have been readily available. They most often utilized limestone, which remained pliable enough to be worked with stone tools while being quarried, and only hardened once removed from its bed. In addition to the structural use of limestone, much of the mortar consisted of crushed, burnt, and mixed limestone that mimicked the properties of cement and was used just as widely for stucco finishing as it was for mortar. However, later improvements in quarrying techniques reduced the necessity for this limestone-stucco, as the stones began to fit quite perfectly; yet it remained a crucial element in some post and lintel roofs. A common building material in central Mexico was tezontle (a light, volcanic rock). It was common for palaces and monumental structures to be made of this rough stone and then covered with stucco or with a cantera veneer. Very large and ornate architectural ornaments were fashioned from a very enduring stucco (kalk), especially in the Maya region, where a type of hydraulic limestone cement or concrete was also used. In the case of the common houses, wooden framing, adobe, and thatch were used to build homes over stone foundations. However, instances of what appear to be common houses of limestone have been discovered as well. Buildings were typically finished with high slanted roofs, usually built of wood or thatch, although stone roofs in this steeply slanted fashion were also occasionally used. Styles Megalithic An architectural construction technique that employs large dry-laid limestone blocks (c. 1 m × 50 cm × 30 cm) covered with a thick layer of stucco. This style was common in the northern Maya lowlands from the Preclassic until the early parts of the Early Classic. Talud-tablero Pyramids in Mesoamerica were platformed pyramids, and many used a style called talud-tablero, which first became common in Teotihuacan. This style consists of a platform structure, or the "tablero", on top of a sloped "talud". Many different variants on the talud-tablero style arose throughout Mesoamerica, developing and manifesting differently among the various cultures. Classic Period Maya styles Palenque, Tikal, Copán, Tonina, the corbeled arch. "Toltec" Style Chichén Itzá, Tula Hidalgo, chacmools, Atlantean figures, Quetzalcoatl designs.
Puuc So named after the Puuc hills in which this style developed and flourished during the latter portion of the Late Classic and throughout the Terminal Classic in the northern Maya lowlands, Puuc architecture consists of veneer facing stones applied to a concrete core. Two façades were typically built, partitioned by a ridge of stone. The blank lower façade is formed by flat cut stones and punctuated by doorways. The upper partition is richly decorated with repeating geometric patterns and iconographic elements, especially the distinctive curved-nosed Chaac masks. Carved columnettes are also common. Technology Corbelled arch Mesoamerican cultures never invented the keystone, and so were unable to build true arches; instead, all of their architecture made use of the "false" or corbelled arch. These arches are built without centering and can be built without support, by regularly corbelling the horizontal courses of the wall masonry. This type of arch supports much less weight than a true arch. However, recent work by the engineer James O'Kon suggests the Mesoamerican "arch" is technically not a corbelled arch at all but a trapezium truss system. Moreover, unlike a corbelled arch, it does not rely on overlapping layers of blocks but on cast-in-place concrete, often supported by timber thrust beams. Computer analysis reveals this to be structurally superior to a curved arch. True arch Scholars such as David Eccott and Gordon Ekholm argue that true arches were known in pre-Columbian times in Mesoamerica; they point to various examples of true arches at a Maya site in La Muneca, the facade of Temple A at Nukum, two low domes at Tajin in Veracruz, a sweat bath at Chichen Itza, and an arch at Oztuma. In 2010, a robot discovered a long arch-roofed passageway, dated to around 200 AD, underneath the Pyramid of Quetzalcoatl, which stands in the ancient city of Teotihuacan north of Mexico City. UNESCO World Heritage Sites A number of important archeological sites representing Mesoamerican architecture have been categorized as "World Heritage Sites" by UNESCO. El Salvador Maya site of Joya de Cerén Honduras Maya Site of Copán Guatemala Tikal National Park Archaeological Park and Ruins of Quirigua Mexico Pre-Hispanic City and National Park of Palenque Pre-Hispanic Town of Uxmal Pre-Hispanic City of Teotihuacan Historic Centre of Oaxaca and Archaeological Site of Monte Albán Pre-Hispanic City of Chichen Itza Archaeological Monuments Zone of Xochicalco, Morelos El Tajin, Pre-Hispanic City of Veracruz Ancient Maya City of Calakmul, Campeche Historic Centre of Mexico City and Xochimilco (with the Templo Mayor and adjacent temples) Agave Landscape and Ancient Industrial Facilities of Tequila (with the Guachimontones and nearby Teuchitlán culture sites) Prehistoric Caves of Yagul and Mitla in the Central Valley of Oaxaca (with the Yagul archeological site) See also Maya architecture Mayan Revival architecture Maya city Buildings and structures in Mesoamerica Triadic pyramid Notes Further reading Leibsohn, Dana, and Barbara E. Mundy, "The Mechanics of the Art World", Vistas: Visual Culture in Spanish America, 1520–1820 (2015). External links Architectural history Architectural styles Architecture in Mexico Central American architecture
Mesoamerican architecture
Engineering
3,559
1,361,076
https://en.wikipedia.org/wiki/Klein%20transformation
In quantum field theory, the Klein transformation is a redefinition of the fields to amend the spin-statistics theorem. Bose–Einstein Suppose φ and χ are fields such that, if x and y are spacelike-separated points and i and j represent the spinor/tensor indices, $[\phi_i(x), \phi_j(y)] = [\chi_i(x), \chi_j(y)] = \{\phi_i(x), \chi_j(y)\} = 0.$ Also suppose the theory is invariant under the Z2 parity (nothing to do with spatial reflections!) mapping χ to −χ but leaving φ invariant. Free field theories always satisfy this property. Then, the Z2 parity of the number of χ particles is well defined and is conserved in time. Let's denote this parity by the operator Kχ, which maps χ-even states to themselves and χ-odd states into their negatives. Then Kχ is involutive, Hermitian and unitary. The fields φ and χ above don't have the proper statistics relations for either a boson or a fermion. This means that they are bosonic with respect to themselves but fermionic with respect to each other. Their statistical properties, when each field is viewed on its own, are exactly those of Bose–Einstein statistics, because $[\phi_i(x), \phi_j(y)] = [\chi_i(x), \chi_j(y)] = 0.$ Define two new fields φ' and χ' as follows: $\phi_i'(x) = K_\chi\, \phi_i(x)$ and $\chi_i'(x) = \chi_i(x).$ This redefinition is invertible (because Kχ is). The spacelike commutation relations become $[\phi_i'(x), \phi_j'(y)] = [\chi_i'(x), \chi_j'(y)] = [\phi_i'(x), \chi_j'(y)] = 0.$ Fermi–Dirac Consider the example where $\{\phi_i(x), \phi_j(y)\} = \{\chi_i(x), \chi_j(y)\} = [\phi_i(x), \chi_j(y)] = 0$ (spacelike-separated as usual). Assume you have a Z2 conserved parity operator Kχ acting upon χ alone. Let $\phi_i'(x) = K_\chi\, \phi_i(x)$ and $\chi_i'(x) = \chi_i(x).$ Then $\{\phi_i'(x), \phi_j'(y)\} = \{\chi_i'(x), \chi_j'(y)\} = \{\phi_i'(x), \chi_j'(y)\} = 0.$ References See also Jordan–Schwinger transformation Jordan–Wigner transformation Bogoliubov–Valatin transformation Holstein–Primakoff transformation Quantum field theory
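As a quick consistency check on the Bose–Einstein redefinition above (a sketch using only the stated properties that Kχ commutes with φ, anticommutes with χ, and squares to the identity):

```latex
% The mixed fields now commute at spacelike separation:
\phi'(x)\,\chi'(y)
  = K_\chi\, \phi(x)\, \chi(y)    % definition of the primed fields
  = -K_\chi\, \chi(y)\, \phi(x)   % \phi and \chi anticommute at spacelike separation
  = \chi(y)\, K_\chi\, \phi(x)    % K_\chi anticommutes with \chi
  = \chi'(y)\, \phi'(x).
% The self-commutators are unchanged, since K_\chi^2 = 1 and [K_\chi, \phi] = 0.
```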
Klein transformation
Physics
340
4,826,789
https://en.wikipedia.org/wiki/Magnesium%20alloy
Magnesium alloys are mixtures of magnesium (the lightest structural metal) with other metals (called an alloy), often aluminium, zinc, manganese, silicon, copper, rare earths and zirconium. Magnesium alloys have a hexagonal lattice structure, which affects the fundamental properties of these alloys. Plastic deformation of the hexagonal lattice is more complicated than in cubic latticed metals like aluminium, copper and steel; therefore, magnesium alloys are typically used as cast alloys, but research of wrought alloys has been more extensive since 2003. Cast magnesium alloys are used for many components of modern cars and have been used in some high-performance vehicles; die-cast magnesium is also used for camera bodies and components in lenses. The commercially dominant magnesium alloys contain aluminium (3 to 13 percent). Another important alloy contains Mg, Al, and Zn. Some are hardenable by heat treatment. All the alloys may be used for more than one product form, but alloys AZ63 and AZ92 are most used for sand castings, AZ91 for die castings, and AZ92 generally employed for permanent mold castings (while AZ63 and A10 are sometimes also used in the latter application). For forgings, AZ61 is most used; alloy M1 is employed where low strength is required and AZ80 for highest strength. For extrusions, a wide range of shapes, bars, and tubes are made from M1 alloy where low strength suffices or where welding to M1 castings is planned. Alloys AZ31, AZ61 and AZ80 are employed for extrusions in the order named, where increase in strength justifies their increased relative costs. Magnox (alloy), whose name is an abbreviation for "magnesium non-oxidizing", is 99% magnesium and 1% aluminium, and is used in the cladding of fuel rods in magnox nuclear power reactors. Magnesium alloys are referred to by short codes (defined in ASTM B275) which denote approximate chemical compositions by weight. For example, AS41 has 4% aluminium and 1% silicon; AZ81 is 7.5% aluminium and 0.7% zinc. If aluminium is present, a manganese component is almost always also present at about 0.2% by weight, which serves to improve grain structure; if aluminium and manganese are absent, zirconium is usually present at about 0.8% for this same purpose. Magnesium is a flammable material and must be handled carefully. Designation By ASTM specification B951-11(2018), magnesium alloys are represented by two letters followed by two, three, or four numbers and a serial letter. The letters denote the main alloying elements (for example A for aluminium, Z for zinc, M for manganese, S for silicon, K for zirconium, and E for rare earths). Numbers indicate the nominal integer percentages of the main alloying elements, from most to least abundant. The serial letter is chosen arbitrarily in order to disambiguate between two alloys with the same designation. The marking AZ91A, for example, conveys a magnesium alloy with roughly 9 weight percent aluminium (between 8.6 and 9.4) and 1 weight percent zinc (between 0.6 and 1.4); the final A means it was the first alloy with this composition at the time of registration. Exact composition should be confirmed from reference standards. Aluminium, zinc, zirconium, and thorium promote precipitation hardening; manganese improves corrosion resistance; and tin improves castability. Aluminium is the most common alloying element. The numerals correspond to the rounded-off percentages of the two main alloy elements, and the serial letters proceed alphabetically as compositions become standard.
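To make the decoding rule concrete, here is a small illustrative parser. It is our own sketch, not standard tooling: the letter table is abbreviated to the common codes named above, and single-digit nominal percentages are assumed, as in the designations quoted in this entry.

```python
import re

# Illustrative decoder for ASTM B275-style designations such as "AZ91A".
# Sketch only: abbreviated letter table, single-digit nominal percentages
# assumed. Exact composition limits come from the registration standard.
ELEMENT_CODES = {
    "A": "aluminium", "Z": "zinc", "M": "manganese", "S": "silicon",
    "K": "zirconium", "E": "rare earths", "H": "thorium",
    "Q": "silver", "W": "yttrium", "C": "copper",
}

def decode(designation: str):
    """Split e.g. 'AZ91A' into nominal element percentages and serial letter."""
    m = re.fullmatch(r"([A-Z]{2})(\d{2})([A-Z]?)", designation.upper())
    if not m:
        raise ValueError(f"unrecognized designation: {designation!r}")
    letters, digits, serial = m.groups()
    # One digit per letter, most abundant element first.
    elements = {ELEMENT_CODES.get(c, c): int(d) for c, d in zip(letters, digits)}
    return elements, (serial or None)

print(decode("AZ91A"))  # ({'aluminium': 9, 'zinc': 1}, 'A')
print(decode("AS41"))   # ({'aluminium': 4, 'silicon': 1}, None)
```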
Temper designation is much the same as in the case of aluminium, using –F, -O, -H1, -T4, -T5, and –T6. Sand, permanent-mold, and die casting are all well developed for magnesium alloys, die casting being the most popular. Although magnesium is about twice as expensive as aluminium, its hot-chamber die-casting process is easier, more economical, and 40% to 50% faster than the cold-chamber process required for aluminium. Forming behavior is poor at room temperature, but most conventional processes can be performed when the material is heated to temperatures of . As these temperatures are easily attained and generally do not require a protective atmosphere, many formed and drawn magnesium products are manufactured. The machinability of magnesium alloys is the best of any commercial metal, and in many applications, the savings in machining costs more than compensate for the increased cost of the material. It is necessary, however, to keep the tools sharp and to provide ample space for the chips. Magnesium alloys can be spot-welded nearly as easily as aluminium, but scratch brushing or chemical cleaning is necessary before the weld is formed. Fusion welding is carried out most easily by processes using an inert shielding atmosphere of argon or helium gas. Considerable misinformation exists regarding the fire hazard in processing magnesium alloys. It is true that magnesium alloys are highly combustible when in a finely divided form, such as powder or fine chips, and this hazard should never be ignored. Above , a non-combustible, oxygen-free atmosphere is required to suppress burning, and casting operations often require additional precautions because of the reactivity of magnesium with sand and water; in sheet, bar, extruded, or cast form, however, magnesium alloys present no real fire hazard. Thorium-containing alloys are not usually used, since a thorium content of more than 2% requires that a component be handled as a radioactive material, although thoriated magnesium known as Mag-Thor was used in military and aerospace applications in the 1950s. Similarly, uranium-containing alloys have declined in use to the point where the ASTM B275 "G" designation is no longer in the standard. Magnesium alloys are used for both cast and forged components, with the aluminium-containing alloys usually used for casting and the zirconium-containing ones for forgings; the zirconium-based alloys can be used at higher temperatures and are popular in aerospace. Magnesium+yttrium+rare-earth+zirconium alloys such as WE54 and WE43 (the latter with composition Mg 93.6%, Y 4%, Nd 2.25%, Zr 0.15%) can operate without creep at up to 300 °C and are reasonably corrosion-resistant. Trade names have sometimes been associated with magnesium alloys. Examples are: Elektron Magnox Magnuminium Mag-Thor Metal 12 Birmabright Magnalium Cast alloys Magnesium casting proof stress is typically 75–200 MPa, tensile strength 135–285 MPa and elongation 2–10%. Typical density is 1.8 g/cm3 and Young's modulus is 42 GPa. Most common cast alloys are: AZ63 AZ81 AZ91 AM50 AM60 ZK51 ZK61 ZE41 ZC63 HK31 HZ32 QE22 QH21 WE54 WE43 Elektron 21 Wrought alloys Magnesium wrought alloy proof stress is typically 160–240 MPa, tensile strength is 180–440 MPa and elongation is 7–40%. The most common wrought alloys are: AZ31 AZ61 AZ80 Elektron 675 ZK60 M1A HK31 HM21 ZE41 ZC71 ZM21 AM40 AM50 AM60 K1A M1 ZK10 ZK20 ZK30 ZK40 Wrought magnesium alloys have a special feature: their compressive proof strength is smaller than their tensile proof strength.
After forming, wrought magnesium alloys have a stringy texture in the deformation direction, which increases the tensile proof strength. In compression, the proof strength is smaller because of crystal twinning, which happens more easily in compression than in tension in magnesium alloys because of the hexagonal lattice structure. Extrusions of rapidly solidified powders reach tensile strengths of up to 740 MPa due to their amorphous character; this is twice the strength of the strongest traditional magnesium alloys and comparable to the strongest aluminium alloys. Compositions table Characteristics Magnesium's particular merits are similar to those of aluminium alloys: low specific gravity with satisfactory strength. Magnesium provides advantages over aluminium in being of even lower density (≈ 1.8 g/cm3) than aluminium (≈ 2.8 g/cm3). The mechanical properties of magnesium alloys tend to be below those of the strongest of the aluminium alloys. The strength-to-weight ratio of the precipitation-hardened magnesium alloys is comparable with that of the strong alloys of aluminium or with the alloy steels. Magnesium alloys, however, have a lower density, stand greater column loading per unit weight and have a higher specific modulus. They are also used when great strength is not necessary, but where a thick, light form is desired, or when higher stiffness is needed. Examples are complicated castings, such as housings or cases for aircraft, and parts for rapidly rotating or reciprocating machines. Such applications can induce cyclic crystal twinning and detwinning that lowers yield strength under loading direction change. The strength of magnesium alloys is reduced at elevated temperatures; temperatures as low as 93 °C (200 °F) produce considerable reduction in the yield strength. Improving the high-temperature properties of magnesium alloys is an active research area with promising results. Magnesium alloys show strong anisotropy and poor formability at room temperature stemming from their hexagonal close-packed crystal structure, limiting practical processing modes. At room temperature, basal-plane dislocation slip and mechanical crystal twinning are the only operating deformation mechanisms; twinning additionally requires specific loading conditions to be favorable. For these reasons, processing of magnesium alloys must be done at high temperatures to avoid brittle fracture. The high-temperature properties of magnesium alloys are relevant for automotive and aerospace applications, where slowing creep plays an important role in material lifetime. Magnesium alloys generally have poor creep properties; this shortcoming is attributed to the solute additions rather than the magnesium matrix, since pure magnesium shows similar creep life to pure aluminium, but magnesium alloys show decreased creep life compared to aluminium alloys. Creep in magnesium alloys occurs mainly by dislocation slip, activated cross slip, and grain boundary sliding. Addition of small amounts of zinc in Mg-RE alloys has been shown to increase creep life by 600% by stabilizing precipitates on both basal and prismatic planes through localized bond stiffening. These developments have allowed magnesium alloys to be used in automotive and aerospace applications at relatively high temperatures. Microstructural changes at high temperatures are also influenced by dynamic recrystallization in fine-grained magnesium alloys.
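As a rough illustration of the strength-to-weight comparisons above, specific strength (strength divided by density) can be computed directly from the nominal figures quoted in this entry; this is our own illustrative arithmetic, not data from the compositions table.

```python
# Rough specific-strength comparison built from figures quoted above
# (nominal upper-end values from this entry, not design data).

def specific_strength(tensile_mpa: float, density_g_cm3: float) -> float:
    """Specific strength in kN*m/kg: MPa divided by density in g/cm^3."""
    return tensile_mpa / density_g_cm3

# Upper-end tensile strengths quoted above, at the quoted density 1.8 g/cm3.
print(f"cast Mg alloy:       {specific_strength(285, 1.8):.0f} kN*m/kg")
print(f"wrought Mg alloy:    {specific_strength(440, 1.8):.0f} kN*m/kg")
print(f"RS powder extrusion: {specific_strength(740, 1.8):.0f} kN*m/kg")
```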
Individual contributions of gadolinium and yttrium to age hardening and high-temperature strength of magnesium alloys containing both elements have been investigated using alloys with Gd:Y mole ratios of 1:0, 1:1, 1:3, and 0:1 and a constant Y+Gd content of 2.75 mol%. All investigated alloys exhibit remarkable age hardening by precipitation of β phase with DO19 crystal structure and β phase with BCO crystal structure, even at aging temperatures higher than 200 °C. Both precipitates are observed in peak-aged specimens. The precipitates contributing to age hardening are fine, and their amount increases as Gd content increases; this results in increased peak hardness, tensile strength and 0.2% proof stress but decreased elongation. On the other hand, higher Y content increases the elongation of the alloys but results in decreased strength. Despite its reactivity (magnesium ignites at 630 °C and burns in air), magnesium and its alloys have good resistance to corrosion in air at STP. The rate of corrosion is slow compared with the rusting of mild steel in the same atmosphere. Immersion in salt water is problematic, but a great improvement in resistance to salt-water corrosion has been achieved, especially for wrought materials, by reducing some impurities, particularly nickel and copper, to very low proportions or by using appropriate coatings. Fabrication Hot and cold working Magnesium alloys harden rapidly with any type of cold work, and therefore cannot be extensively cold formed without repeated annealing. Sharp bending, spinning, or drawing must be done at about , although gentle bending around large radii can be done cold. Slow forming gives better results than rapid shaping. Press forging is preferred to hammer forging, because the press allows greater time for metal flow. The plastic forging range is . Metal worked outside this range is easily broken due to lack of available deformation mechanisms. Casting Magnesium alloys, especially precipitation-hardened alloys, are used in casting. Sand, permanent mold and die casting methods are used, but plaster-of-Paris casting has not yet been perfected. Sand casting in green-sand molds requires a special technique, because the magnesium reacts with moisture in the sand, forming magnesium oxide and liberating hydrogen. The oxide forms blackened areas called burns on the surface of the casting, and the liberated hydrogen may cause porosity. Inhibitors such as sulfur, boric acid, ethylene glycol, or ammonium fluoride are mixed with the damp sand to prevent the reaction. All gravity-fed molds require an extra-high column of molten metal to make the pressure great enough to force gas bubbles out of the casting and make the metal take the detail of the mold. The thickness of the casting wall should be at least 5/32 in. under most conditions. Extra-large fillets must be provided at all re-entrant corners, since stress concentrations in magnesium castings are particularly dangerous. Permanent mold castings are made from the same alloys and have about the same physical properties as sand castings. Since the solidification shrinkage of magnesium is about the same as that of aluminium, aluminium molds can often be adapted to make magnesium-alloy castings (although it may be necessary to change the gating). Pressure cold-chamber castings are used for quantity production of small parts. The rapid solidification caused by contact of the fluid metal with the cold die produces a casting of dense structure with excellent physical properties.
The finish and dimensional accuracy are very good, and machining is necessary only where extreme accuracy is required. Usually these castings are not heat treated. Welding, soldering, and riveting Many standard magnesium alloys are easily welded by gas or resistance-welding equipment, but cannot be cut with an oxygen torch. Magnesium alloys are not welded to other metals, because brittle intermetallic compounds may form, or because the combination of metals may promote corrosion. Where two or more parts are welded together, their compositions must be the same. Soldering of magnesium alloys is feasible only for plugging surface defects in parts. The solders are even more corrosive than with aluminium, and the parts should never be required to withstand stress. Riveted joints in magnesium alloy structures usually employ aluminium or aluminium-magnesium alloy rivets. Magnesium rivets are not often used because they must be driven when hot. The rivet holes should be drilled, especially in heavy sheet and extruded sections, since punching tends to give a rough edge to the hole and to cause stress concentrations. Machining A particular attraction of magnesium alloys lies in their extraordinarily good machining properties, in which respect they are superior even to screwing brass. The power required in cutting them is small, and extremely high speeds (5000 ft per min in some cases) may be used. The best cutting tools have special shapes, but the tools for machining other metals can be used, although somewhat lower efficiency results. When magnesium is cut at high speed, the tools should be sharp and should be cutting at all times. Dull, dragging tools operating at high speed may generate enough heat to ignite fine chips. Since chips and dust from grinding can be a fire hazard, grinding should be done with a coolant, or with a device to concentrate the dust under water. The magnesium grinder should not be used also for ferrous metals, since a spark might ignite the accumulated dust. If a magnesium fire should start, it can be smothered with cast-iron turnings or dry sand, or with other materials prepared especially for the purpose. Water or liquid extinguishers should never be used, because they tend to scatter the fire. Actually, it is much more difficult to ignite magnesium chips and dust than is usually supposed, and for that reason they do not present great machining difficulties. The special techniques that must be used in fabricating magnesium (working, casting, and joining) add considerably to the manufacturing cost. In selecting between aluminium and magnesium for a given part, the base cost of the metal may not give much advantage to either, but usually the manufacturing operations make magnesium more expensive. There is, perhaps, no group of alloys where extrusion is more important than it is to these, since the comparatively coarse-grained structure of the cast material makes most of them too susceptible to cracking to work by other means until sufficient deformation has been imparted to refine the grain. Therefore, except for one or two soft alloys, extrusion is invariably a preliminary step before other shaping processes. Hot extrusion Not much pure magnesium is extruded, for it has somewhat poor properties, especially as regards its proof stress.
The alloying elements of chief concern at present are aluminium, zinc, cerium and zirconium; manganese is usually also present since, though it has little effect on the strength, it has a valuable function in improving corrosion resistance. One important binary alloy, containing up to 2.0% manganese, is used extensively for the manufacture of rolled sheet. It is comparatively soft and easier to extrude than other alloys, and is also one of the few that can be rolled directly without pre-extrusion. In the UK, extrusions are made from billets of dia. on presses varying in power over the range 600–3,500 tons; normal maximum pressures on the billet are 30–50 tons/sq in. In the U.S., the Dow Chemical Company has recently installed a 13,200-ton press capable of handling billets up to 32 in. Extrusion technique is generally similar to that for aluminium-base alloys but, according to Wilkinson and Fox, die design requires special consideration and, in their opinion, should incorporate short bearing lengths and sharp die entries. Tube extrusion in alloys AM503, ZW2, and ZW3 is now made with bridge dies. (The aluminium-bearing alloys do not weld satisfactorily.) In contrast to the previous practice of using bored billets, mandrel piercing is now used in the extrusion of large-diameter tubes in ZW3 alloy. The stiffness of the alloys towards extrusion increases in proportion to the amount of hardening elements they contain, and the temperature employed is generally higher the greater the quantity of these. Billet temperatures are also affected by the size of the sections, being higher for heavy reductions, but are usually in the range . Container temperatures should be identical with, or only slightly higher than, the billet temperature. Pre-heating of the billets must be carried out uniformly to promote as far as possible a homogeneous structure by absorption of compounds, such as Mg4Al, present in the alloys. Fox points out (and this is also applicable to aluminium alloys) that the initial structure of the billet is important: casting methods that lead to fine grain are worthwhile. In coarse material, larger particles of the compounds are present, which are less readily dissolved and tend to cause a solution gradient. In magnesium alloys this causes internal stress, since solution is accompanied by a small contraction, and it can also influence the evenness of response to later heat treatment. The binary magnesium-manganese alloy (AM503) is readily extruded at low pressures in the temperature range , the actual temperature used depending upon the reduction and billet length rather than the properties desired, which are relatively insensitive to extrusion conditions. Good surface condition of the extrusion is achieved only with high speeds, of the order of per minute. With the aluminium- and zinc-containing alloys, and particularly those with the higher aluminium contents such as AZM and AZ855, difficulties arise at high speeds due to hot-shortness. Under conditions approaching equilibrium, magnesium is capable of dissolving about 12 per cent aluminium, but in cast billets 4–5 wt.% usually represents the limit of solubility. Alloys containing 6 wt.% Al or more therefore contain Mg4Al3, which forms a eutectic melting at 435 °C. The extrusion temperature may vary from , but at the higher values speeds are restricted to about per minute. Continuous casting improves the homogeneity of these alloys, and water cooling of the dies or taper heating of the billets further facilitates their extrusion.
Introduction of the magnesium-zinc-zirconium alloys, ZW2 and ZW3, represents a considerable advance in magnesium alloy technology for a number of reasons. They are high strength but, since they do not contain aluminium, the cast billet contains only small quantities of the second phase. Since the solidus temperature is raised by about , the risk of hot-shortness at relatively high extrusion speeds is much reduced. However, the mechanical properties are sensitive to billet preheating time, temperature and extrusion speed. Long preheating times and high temperatures and speeds produce properties similar to those of the older aluminium-containing alloys; heating times must be short and temperatures and speeds low to produce high properties. Increasing the zinc content to 5 or 6 wt.%, as in the American alloys ZK60 and ZK61, reduces sensitivity to extrusion speed in respect of mechanical properties. Alloying of zirconium-bearing materials has been a major problem in their development. It is usual to add the zirconium from a salt, and careful control can produce good results. Dominion Magnesium Limited in Canada has developed a method of adding it in the conventional manner through a master alloy. The explanation for the low extrusion rates necessary to successfully extrude some magnesium alloys does not lie outside the reasons put forward for other metals. Altwicker considers that the most significant cause is connected with the degree of recovery from crystal deformation, which is less complete when work is applied quickly, causing higher stresses and exhausting the capacity for slip in the crystals. This is worthy of consideration, for the speed of recrystallization varies from one metal to another and according to temperature. It is also a fact that a metal worked in what is considered its working range can frequently be made to show marked work hardening if quenched immediately after deformation, showing that temporary loss of plasticity can easily accompany rapid working. Further alloy development Scandium and gadolinium have been tried as alloying elements; an alloy with 1% manganese, 0.3% scandium and 5% gadolinium offers almost perfect creep resistance at 350 °C. The physical composition of these multi-component alloys is complicated, with plates of intermetallic compounds such as Mn2Sc forming. Addition of zinc to Mg-RE alloys has been shown to greatly increase creep life by stabilizing RE precipitates. Erbium has also been considered as an additive. Magnesium–lithium alloys Adding 10% of lithium to magnesium produces an alloy that can be used as an improved anode in batteries with a manganese-dioxide cathode. Magnesium-lithium alloys are generally soft and ductile, and their density of 1.4 g/cm3 is appealing for space applications. Non-combustible magnesium alloys Adding 2% of calcium by weight to magnesium alloy AM60 results in the non-combustible magnesium alloy AMCa602. The higher oxidation reactivity of calcium causes a coat of calcium oxide to form before the magnesium ignites. The ignition temperature of the alloy is elevated by 200–300 K. An oxygen-free atmosphere is not necessary for machining operations. Magnesium alloys for biomedical application Among all biocompatible metals, Mg has the closest elastic modulus to that of natural bone. Mg ranks as the fourth most plentiful cation in the human body, is an essential element for metabolism, and is primarily stored in bone tissue.
A diet containing Mg stimulates the growth of bone cells and speeds the recovery of bone tissue. The addition of biocompatible alloying elements can have a significant impact on the mechanical behavior of Mg. Creating a solid solution, a form of alloying, is an effective method of increasing the strength of metals. References Aluminium–magnesium alloys
Magnesium alloy
Chemistry
5,081
51,081,465
https://en.wikipedia.org/wiki/Chemical%20protective%20clothing
Chemical protective clothing (CPC) is clothing worn to shield those who work with chemicals from the effects of chemical hazards that can cause injuries on the job. It provides a last line of defense for chemical safety; it does not replace more proactive measures like engineering controls. Clothing selection factors There are several limitations to consider with chemical protective clothing. For instance, no clothing is "impervious": all clothing eventually lets chemicals seep through. CPC also prevents evaporation, causing skin temperature to increase and potentially increasing the permeability of skin. CPC that has not been tested for the specific operating condition it is used in may not provide adequate protection. The same material, even at the same thickness, may provide different levels of protection depending on the manufacturer, since different manufacturers use different processes and may add different additives. Finally, while test data will provide information on individual chemicals based on "worst-case scenario" continuous-contact testing, most industrial exposures are not continuous and are in fact mixtures of chemicals, for which permeation rates are different. Several factors must be taken into account when selecting chemical protective clothing, and a risk assessment is often conducted to help ensure that the right garments are chosen. When selecting the appropriate chemical protective clothing, it is recommended to determine: The chemicals being used and their hazards The state of those chemicals; for example, if they are vaporous, they could be more hazardous Whether contact is a result of occasional splashing or a result of more continuous contact Whether the worker can be exposed from handling contaminated CPC Environmental Conditions (Weather, location) Duration the worker will be wearing the protective clothing The room temperature where the chemical is being handled The parts of the body that the chemical could potentially contact Whether the CPC resists physical wear and tear commensurate with the type of work being done Whether the CPC interferes with the work, for instance by limiting dexterity From there, it is recommended that candidate garments be selected and subjected to appropriate testing. Testing is also considered necessary to make sure the material is suitable for the specific condition it will be used in, as opposed to the generic, worst-case scenarios it ordinarily undergoes. Once a garment is selected, it should undergo a limited evaluation with worker training. Once the garment is regularly used, it should be regularly evaluated. Clothing ensemble Chemical protective clothing ensembles are not a one-size-fits-all approach. The level of protection needed and the hazards associated with the chemical play a major role in which pieces of the ensemble are needed to fully protect the worker. When purchasing chemical protective clothing, careful consideration should be taken to make sure that all pieces of the ensemble are compatible with each other. Pieces of the ensemble may include: Protective Suit (Fully Encapsulating, Splash Suit) Respiratory Protection (Self Contained Breathing Apparatus, Respirator) Head Protection (Helmet) Hearing Protection (Ear Plugs) Eye Protection (Safety Goggles / Face Shield) Gloves (Inner and Outer) Boots Gloves When using solvents, an improper glove selection may allow the solvent to leak through the gloves, leading to skin contact.
Appropriate gloves protect workers from hazards such as burns, cuts, electrical shocks, amputation, and chemical absorption or contact. Improper selection of gloves gives employees a false sense of security, since chemicals can penetrate the "protection" without showing any signs of failure. There are standards that protective gloves have to meet, and manufacturers' data should be used in selecting appropriate fabric properties. Not just any fabric will do; materials such as nylon, polyester, or leather, for example, offer no chemical resistance and should not be used. Appropriate properties to look for include chemical resistance, thermal protection, cut and puncture resistance, and electrical non-conductivity. When working with chemicals, workers should have the ability to remove gloves and protective clothing in a simple way that avoids skin or other contamination to themselves or others. Protective gloves should be sized properly to prevent restricted worker movement or tearing when working with different types of materials. Chemical-resistant gloves can be made of different kinds of rubber or plastics. These materials can be laminated or blended to create better performance. Thicker gloves improve the protection but may be clumsier to use, which can reduce safety. Examples of chemical-resistant gloves: Butyl gloves: Made of synthetic rubber; resistant to oxidation, ozone corrosion, and abrasion. Do not perform well with aliphatic or aromatic hydrocarbons and halogenated solvents. Protect against a wide variety of dangerous chemicals. Natural latex rubber gloves: One of the most popular general-purpose gloves, mainly because of comfort. Have great tensile strength, which helps against abrasions that can be caused by grinding and polishing of materials. Protect from most water solutions of acids, alkalis, salts, and ketones. Some people are allergic to latex, which causes irritation, so latex is not suitable for all employees. Good alternatives that can be offered to employees are hypoallergenic gloves, glove liners, and powderless gloves. Neoprene gloves: Made of synthetic rubber, which helps with dexterity; have a higher density and good tear resistance. Able to protect against many fluids such as hydraulic fluid, gasoline, certain alcohols, organic acids, and alkalis. Nitrile gloves: Made of a copolymer that provides protection from chlorinated solvents. Nitrile gloves hold up well after prolonged exposure to substances that cause other brands of gloves to deteriorate. Not recommended for use with strong oxidizing agents, aromatic solvents, ketones, and acetates. Offer good protection when working with oils, greases, acids, and alcohols. Levels of protection The EPA categorizes chemical protective clothing into four levels, with Level A being the highest level of protection and Level D being the lowest. These levels are based on the amount of protection for the user's skin and respiratory system. Level A – The highest level of both respiratory and skin protection. Consists of a fully encapsulating suit that is vapor-tight, with respiratory protection consisting of either a self-contained breathing apparatus (SCBA) or a supplied-air respirator. Used when protection from both vapor and liquid is needed. Ensemble may also consist of internal radio communication, head protection, boots, and gloves. Level B – The highest level of respiratory protection with reduced skin protection.
Consists of chemical-resistant clothing that may or may not be fully encapsulating, paired with either a self-contained breathing apparatus (SCBA) or a supplied-air respirator. Used when there is a reduced risk of vapor exposure but there are concerns about exposure of the respiratory tract. Ensemble may also consist of radio communication, head protection, face shield, boots, and gloves. Level C – Reduced respiratory protection along with reduced skin protection. Consists of a liquid-splash protection suit (coveralls) paired with an air-purifying respirator. Used when there is reduced risk of skin exposure to chemicals but there are concerns about contaminants in the air. Ensemble may also consist of radio communication, head protection, face shield, boots, and gloves. Level D – The lowest level of protection required. Can be used where there is no chance of chemical being splashed onto the worker and there are no contaminants that would harm the respiratory tract. Ensemble consists of standard work coveralls, face shield or safety glasses, gloves, and boots. NFPA standards Over the years, the roles and responsibilities of first responders have drastically changed. To protect the best interests of those first responders, standards have been developed to assist agencies with selecting the appropriate level of protection. These standards also ensure that the chemical protective clothing has been tested and certified to meet a minimum set of specifications. The standards cover not only the protective suit, but also all other components, such as respiratory protection, gloves, boots, and all other garments that complete the ensemble. The NFPA 1991 standard covers the requirements for ensembles that offer the highest level of protection. These types of suits would be classified on the EPA scale as Level A suits. They are fully encapsulating and air-tight (vapor-resistant). The NFPA 1992 standard covers the requirements for ensembles that are liquid/splash protective. These types of suits would be classified on the EPA scale as Level B suits. These suits are resistant to liquids and are not rated for any type of vapor protection. The NFPA 1994 standard is broken down into four classes. NFPA 1994 Class 1 and 2 ensembles are intended to protect the user in an environment that requires a self-contained breathing apparatus and where vapors or liquids are expected to make contact with the user's skin. These liquids or vapors may include those of chemical warfare, bloodborne pathogens, or industrial chemicals. NFPA 1994 Class 3 ensembles must also be rated to protect the user from the same potential exposures as Class 1 and 2; however, these ensembles require only the use of an air-purifying respirator. NFPA 1994 Class 4 ensembles are rated to protect the user from bloodborne pathogens and biological agents and offer no protection against industrial chemicals or chemical warfare. Class 4 ensembles are rated to be used with air-purifying respirators as well. The NFPA 1999 standard covers the requirements for ensembles that are single-use or multi-use for protection against bloodborne pathogens or potential exposures to infectious diseases. These ensembles are rated to be used with an air-purifying respirator. See also Chemical safety Personal protective equipment References Chemical safety Occupational safety and health
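The EPA level descriptions above amount to a small decision rule. The sketch below makes that logic concrete; the function and its boolean inputs are invented simplifications for illustration, and real garment selection requires a documented risk assessment by a qualified professional.

```python
# Illustrative mapping from hazard findings to EPA protection level,
# following the Level A-D descriptions above. The boolean inputs are
# hypothetical simplifications of a real risk assessment.

def epa_level(vapor_hazard: bool, splash_hazard: bool,
              airborne_contaminants: bool) -> str:
    if vapor_hazard:
        # Vapor-tight encapsulation plus SCBA or supplied-air respirator.
        return "Level A"
    if splash_hazard:
        # Liquid/splash suit plus SCBA or supplied-air respirator.
        return "Level B"
    if airborne_contaminants:
        # Splash coveralls plus air-purifying respirator.
        return "Level C"
    # Standard work coveralls, safety glasses, gloves, and boots.
    return "Level D"

print(epa_level(vapor_hazard=False, splash_hazard=True,
                airborne_contaminants=True))  # -> Level B
```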
Chemical protective clothing
Chemistry
1,965
15,227,101
https://en.wikipedia.org/wiki/ZBED1
Zinc finger BED domain-containing protein 1 is a protein that in humans is encoded by the ZBED1 gene. ZBED1 regulates the expression of several genes involved in cell proliferation, chromatin remodeling, protein metabolism, and cell differentiation. ZBED1 was at one time thought to be a gene similar to Ac transposable elements, but this classification was later revised when no transposase activity was found. Function ZBED1 is located in the pseudoautosomal region 1 (PAR1) of the X and Y chromosomes. ZBED1 localizes to the nucleus and functions as a transcription factor, binding DNA elements found in the promoter regions of several genes related to cell proliferation. Histone H1 is one such target, giving ZBED1 the opportunity to regulate genes related to cell proliferation. ZBED1 also has spliced transcript variants with numerous types of 5' untranslated regions. Structure Images released on November 23, 2005 show the solution structure of the zinc finger BED domain of ZBED1, deposited as PDB entry 2ct5; a zinc ion is visible on each side of the structure. The multimeric state appears to be monomeric, so energy and binding statistics cannot be computed for this assembly. The deposited chain (chain A) is 73 amino acids long, with a theoretical weight of 8.07 kDa. No expression system was provided for the entry; the source organism is Homo sapiens (human). The protein has been reported to function as a ubiquitin-like modifier (SUMO) ligase that sumoylates CHD3/Mi2-alpha, which is then released from DNA, and to be involved in positive regulation of transcription from RNA polymerase II gene promoters; associated genes include histone H1-5 and the ribosomal proteins RPS6, RPL10A, and RPL12. GeneCards shows three-dimensional structures of ZBED1, either representative (PDB) or predicted (AlphaFold), colored by model confidence (pLDDT): navy blue for very high (pLDDT > 90), sky blue for confident (90 > pLDDT > 70), yellow for low (70 > pLDDT > 50), and orange for very low (pLDDT < 50). Clinical significance ZBED1 is associated with two diseases: fibrosclerosis of breast and Sotos syndrome. Fibrosclerosis of breast is heavily related to breast disease and non-proliferative fibrocystic change of the breast. Other related diseases include breast cancer, mastitis, gynecomastia, breast fibroadenoma, papilloma, diabetic mastopathy, vascular disease, and systemic scleroderma, with relatedness scores ranging from 10.0 to 9.7; non-proliferative fibrocystic change of the breast has the highest gene affiliation. This condition is a non-proliferative fibrocystic change of the breast resulting from scar tissue.
Sotos syndrome is globally known as a genetic disease, a rare disease, and in some cases a fetal disease. Sotos syndrome is related to many other diseases, including overgrowth syndrome, Sotos syndrome 2, normokalemic periodic paralysis, hereditary essential tremor, hypokalemic periodic paralysis (type 2), potassium-aggravated myotonia, myotonia, congenital myasthenic syndrome 16, and torticollis, with relatedness scores varying from 31.6 (highest) to 10.2 (lowest). References Further reading
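The pLDDT confidence bands quoted in the Structure section can be expressed as a simple threshold function; this sketch is ours, with thresholds taken from the color scheme described above.

```python
# Map an AlphaFold per-residue confidence score (pLDDT, 0-100) to the
# confidence bands described in the Structure section above.

def plddt_band(plddt: float) -> str:
    if plddt > 90:
        return "very high (navy blue)"
    if plddt > 70:
        return "confident (sky blue)"
    if plddt > 50:
        return "low (yellow)"
    return "very low (orange)"

print(plddt_band(92.5))  # very high (navy blue)
print(plddt_band(64.0))  # low (yellow)
```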
ZBED1
Chemistry
930
43,577,815
https://en.wikipedia.org/wiki/Santa%20Maria%20Island%20Station
Santa Maria Island Station (also known as SMA or Montes das Flores, Hill of Flowers) is an ESTRACK satellite ground station in the Azores, from the town of Vila do Porto on the island of Santa Maria. The station currently operates a 5.5 m S-band antenna capable of receiving signals in the 2200–2300 MHz range, the first one in the ESTRACK network with launch tracking capability. It covers a large portion of the Atlantic Ocean, and during Ariane 5 launches it acquires signals until the upper-stage engine cut-off. Future upgrades for SMA will include an X-band antenna working in the range of 8025–8400 MHz. Construction of the station was completed in January 2008 under the Ariane Development Programme, in an agreement between ESA and the Portuguese government. A reason for building an additional station was the tracking of medium-inclination Ariane 5 launches and of the upcoming Vega and Soyuz launches from the Guiana Space Centre. The first launch tracked by the newly built site was Ariane 5 ES flight V-181, lifting the Automated Transfer Vehicle Jules Verne in March 2008. When not used for launch tracking, the station is used in the CleanSeaNet and MARISS services for the Copernicus Programme. References ESTRACK facilities Buildings and structures in Vila do Porto
Santa Maria Island Station
Astronomy
257
30,156,261
https://en.wikipedia.org/wiki/NASA%20Astronaut%20Corps
The NASA Astronaut Corps is a unit of the United States National Aeronautics and Space Administration (NASA) that selects, trains, and provides astronauts as crew members for U.S. and international space missions. It is based at Johnson Space Center in Houston, Texas. History The first U.S. astronaut candidates were selected by NASA in 1959, for its Project Mercury with the objective of orbiting astronauts around the Earth in single-man capsules. The military services were asked to provide a list of military test pilots who met specific qualifications. After stringent screening, NASA announced its selection of the "Mercury Seven" as its first astronauts. Since then, NASA has selected 22 more groups of astronauts, opening the corps to civilians, scientists, doctors, engineers, and school teachers. As of the 2009 Astronaut Class, 61% of the astronauts selected by NASA have come from military service. NASA selects candidates from a diverse pool of applicants with a wide variety of backgrounds. From the thousands of applications received, only a few are chosen for the intensive astronaut candidate training program. Including the "Original Seven", 339 candidates have been selected to date. Organization The Astronaut Corps is based at the Lyndon B. Johnson Space Center in Houston, although members may be assigned to other locations based on mission requirements, e.g. Soyuz training at Star City, Russia. The Chief of the Astronaut Office is the most senior leadership position for active astronauts in the Corps. The Chief Astronaut serves as head of the Corps and is the principal adviser to the NASA Administrator on astronaut training and operations. The first Chief Astronaut was Deke Slayton, appointed in 1962. The current Chief Astronaut is Joe Acaba. Salary Salaries for newly hired civilian astronauts are based on the federal government's General Schedule pay scale for grades GS-11 through GS-14. The astronaut's grade is based on the astronaut's academic achievements and experience. Astronauts can be promoted up to grade GS-15. As of 2015, astronauts based at the Johnson Space Center in Houston, Texas, earn between $66,026 (GS-11 step 1) and $158,700 (GS-15 step 8 and above). As of the new astronaut candidate class announcement of 2024, astronaut candidates will be removed from the GS pay scale and be paid on an AD ("Administratively Determined") scale. Military astronauts are detailed to the Johnson Space Center and remain on active duty for pay, benefits, leave, and similar military matters. Qualifications There are no age restrictions for the NASA Astronaut Corps. Astronaut candidates have ranged between the ages of 26 and 46, with the average age being 34. Candidates must be U.S. citizens to apply for the program. There are three broad categories of qualifications: education, work experience, and medical. Candidates must have a master's degree from an accredited institution in engineering, biological science, physical science or mathematics. The degree must be followed by at least two to three years of related, progressively responsible, professional experience (graduate work or studies) or at least 1,000 hours of pilot-in-command time in jet aircraft. An advanced degree is desirable and may be substituted for experience, such as a doctoral degree (which counts as the two years of experience). Teaching experience, including experience at the K–12 levels, is considered to be qualifying experience.
Candidates must have the ability to pass the NASA long-duration space flight physical, which includes the following specific requirements: Distant and near visual acuity: Must be correctable to 20/20, each eye separately (corrective lenses such as glasses are allowed) The refractive surgical procedures of the eye, PRK and LASIK, are allowed, provided at least 1 year has passed since the date of the procedure with no permanent adverse after-effects. Blood pressure not to exceed 140/90 measured in a sitting position Standing height between 62 and 75 inches Members Active astronauts , the corps has 47 "active" astronauts, consisting of 20 women and 27 men. The highest number of active astronauts at one time was in 2000, when there were 149. All of the current astronaut corps are from the classes of 1996 (Group 16) or later. Missions in italics are scheduled and subject to change. There are currently 19 international active astronauts, assigned to duties at the Johnson Space Center, who were selected by their home agencies to train as part of a NASA Astronaut Group and serve alongside their NASA counterparts. While the international astronauts, Payload Specialists, and Spaceflight Participants go through training with the NASA Astronaut Corps, they are not considered members of the corps. Management astronauts , the corps has 12 "management" astronauts, who remain NASA employees but are no longer eligible for flight assignment. The management astronauts include personnel chosen to join the corps as early as 1987 (Group 12) and as recently as 2009 (Group 20). Astronaut candidates The term "Astronaut Candidate" (informally "ASCAN") refers to individuals who have been selected by NASA as candidates for the NASA Astronaut Corps and are currently undergoing a candidacy training program at the Johnson Space Center. The most recent class of astronaut candidates was selected in 2021. Only three astronaut candidates have resigned before completing training: Brian O'Leary and Anthony Llewellyn, both from the 1967 Selection Group, and Robb Kulin of the 2017 group. O'Leary resigned in April 1968 after additional Apollo missions were cancelled, Llewellyn resigned in August 1968 after failing to qualify as a jet pilot, and Kulin resigned in August 2018 for unspecified personal reasons. Another astronaut candidate, Stephen Thorne, died in an airplane accident before he could finish astronaut training. Former members Selection as an astronaut candidate and subsequent promotion to astronaut does not guarantee the individual will eventually fly in space. Some have voluntarily resigned or been medically disqualified after becoming astronauts before being selected for flights. Civilian candidates are expected to remain with the corps for at least five years after initial training; military candidates are assigned for specific tours. After these time limits, members of the Astronaut Corps may resign or retire at any time. Three members of the Astronaut Corps (Gus Grissom, Edward White, and Roger B. Chaffee) were killed during a ground test accident while preparing for the Apollo 1 mission. Eleven were killed during spaceflight, on Space Shuttle missions STS-51-L and STS-107. Another four (Elliot See, Charles Bassett, Theodore Freeman, and Clifton Williams) were killed in T-38 plane crashes during training for space flight during the Gemini and Apollo programs. Another was killed in a 1967 automobile accident, and another died in a 1991 commercial airliner crash while traveling on NASA business.
Two members of the corps have been involuntarily dismissed: Lisa Nowak and William Oefelein. Both were returned to service with the US Navy. A James Adamson – STS-28, STS-43 Thomas Akers – STS-41, STS-49, STS-61, STS-79 Buzz Aldrin – Gemini 12, Apollo 11 Andrew Allen – STS-46, STS-62, STS-75 Joseph Allen – STS-5, STS-51-A Scott Altman – STS-90, STS-106, STS-109, STS-125 William Anders – Apollo 8 Clayton Anderson – STS-117/STS-120 (Expedition 15), STS-131 Michael Anderson – STS-89, STS-107 Dominic Antonelli – STS-119, STS-132 Jerome Apt – STS-37, STS-47, STS-59, STS-79 Lee Archambault – STS-117, STS-119 Neil Armstrong – Gemini 8, Apollo 11 Richard Arnold – STS-119, Soyuz MS-08 (Expedition 55/56) Jeffrey Ashby – STS-93, STS-100, STS-112 Serena Auñón-Chancellor – Soyuz MS-09 (Expedition 56/57) B James Bagian – STS-29, STS-40 Ellen Baker – STS-34, STS-50, STS-71 Michael Baker – STS-43, STS-52, STS-68, STS-81 Daniel Barry – STS-72, STS-96, STS-105 Charles Bassett Alan Bean – Apollo 12, Skylab 3 Robert Behnken – STS-123, STS-130, SpaceX Demo-2 (Expedition 63) John Blaha – STS-29, STS-33, STS-43, STS-58, STS-79/STS-81 (Mir EO-22) Michael Bloomfield – STS-86, STS-97, STS-110 Guion Bluford – STS-8, STS-61-A, STS-39, STS-53 Karol Bobko – STS-6, STS-51-D, STS-51-J Charles Bolden – STS-61-C, STS-31, STS-45, STS-60 Frank Borman – Gemini 7, Apollo 8 Kenneth Bowersox – STS-50, STS-61, STS-73, STS-82, STS-113/Soyuz TMA-1 (Expedition 6) Charles Brady – STS-78 Vance Brand – Apollo-Soyuz Test Project, STS-5, STS-41B, STS-35 Daniel Brandenstein – STS-8, STS-51-G, STS-32, STS-49 Roy Bridges – STS-51-F Curtis Brown – STS-47, STS-66, STS-77, STS-85, STS-95, STS-103 David Brown – STS-107 Mark Brown – STS-28, STS-48 James Buchli – STS-51-C, STS-61-A, STS-29, STS-48 John Bull Daniel Burbank – STS-106, STS-115, Soyuz TMA-22 (Expedition 29/30) Daniel Bursch – STS-51, STS-68, STS-77, STS-108/STS-111 (Expedition 4) C Robert Cabana – STS-41, STS-53, STS-65, STS-88 Yvonne Cagle Fernando Caldeiro Charles Camarda – STS-114 Kenneth Cameron – STS-37, STS-56, STS-74 Duane Carey – STS-109 Scott Carpenter – Mercury-Atlas 7 Gerald Carr – Skylab 4 Sonny Carter – STS-33 John Casper – STS-36, STS-54, STS-62, STS-77 Christopher Cassidy – STS-127, Soyuz TMA-08M (Expedition 35/36), Soyuz MS-16 (Expedition 62/63) Josh Cassada, SpaceX Crew-5 (Expedition 68) Gene Cernan – Gemini 9A, Apollo 10, Apollo 17 Roger Chaffee – Apollo 1 Gregory Chamitoff – STS-124/STS-126 (Expedition 17/18), STS-134 Franklin Chang-Diaz – STS-61-C, STS-34, STS-46, STS-60, STS-75, STS-91, STS-111 Philip Chapman Kalpana Chawla – STS-87, STS-107 Leroy Chiao – STS-65, STS-72, STS-92, Soyuz TMA-5 (Expedition 10) Kevin Chilton – STS-49, STS-59, STS-76 Laurel Clark – STS-107 Mary Cleave – STS-61-B, STS-30 Michael Clifford – STS-53, STS-59, STS-76 Michael Coats – STS-41-D, STS-29, STS-39 Kenneth Cockrell – STS-56, STS-69, STS-80, STS-98, STS-111 Catherine Coleman – STS-73, STS-93, Soyuz TMA-20 (Expedition 26/27) Eileen Collins – STS-63, STS-84, STS-93, STS-114 Michael Collins – Gemini 10, Apollo 11 Pete Conrad – Gemini 5, Gemini 11, Apollo 12, Skylab 2 Gordon Cooper – Mercury-Atlas 9, Gemini 5 Richard Covey – STS-51-I, STS-26, STS-38, STS-61 Timothy Creamer – Soyuz TMA-17 (Expedition 22/23) John Creighton – STS-51-G, STS-36, STS-48 Robert Crippen – STS-1, STS-7, STS-41-C, STS-41-G Frank Culbertson – STS-38, STS-51, STS-105/STS-108 (Expedition 3) Walter Cunningham – Apollo 7 Robert Curbeam – STS-85, STS-98, STS-116 Nancy Currie – STS-57, STS-70, STS-88, STS-109 D Jan Davis 
– STS-47, STS-60, STS-85 Alvin Drew – STS-118, STS-133 Brian Duffy – STS-45, STS-57, STS-72, STS-92 Charles Duke – Apollo 16 Bonnie Dunbar – STS-61-A, STS-32, STS-50, STS-71, STS-89 James Dutton – STS-131 E Joe Edwards – STS-89 Donn Eisele – Apollo 7 Anthony England – STS-51-F Joe Engle – ALT, STS-2, STS-51I Ronald Evans – Apollo 17 F John Fabian – STS-7, STS-51-G Christopher Ferguson – STS-115, STS-126, STS-135 Jack Fischer – Soyuz MS-04 (Expedition 52/53) Anna Fisher – STS-51-A William Fisher – STS-51-I Michael Foale – STS-45, STS-56, STS-63, STS-84/STS-86 (Mir EO-23/24), STS-103, Soyuz TMA-3 (Expedition 8) Kevin Ford – STS-128, Soyuz TMA-06M (Expedition 33/34) Michael Foreman – STS-123, STS-129 Patrick Forrester – STS-105, STS-117, STS-128 Michael Fossum – STS-121, STS-124, Soyuz TMA-02M (Expedition 28/29) Theodore Freeman Stephen Frick – STS-110, STS-122 C. Gordon Fullerton – ALT, STS-3, STS-51-F G Ronald Garan – STS-124, Soyuz TMA-21 (Expedition 27/28) Dale Gardner – STS-8, STS-51-A Guy Gardner – STS-27, STS-35 Owen Garriott – Skylab 3, STS-9 Charles Gemar – STS-38, STS-48, STS-62 Michael Gernhardt – STS-69, STS-83, STS-94, STS-104 Edward Gibson – Skylab 4 Robert Gibson – STS-41-B, STS-61-C, STS-27, STS-47, STS-71 Edward Givens John Glenn – Mercury-Atlas 6, STS-95 Linda Godwin – STS-37, STS-59, STS-76, STS-108 Michael Good – STS-125, STS-132 Richard Gordon – Gemini 11, Apollo 12 Dominic Gorie – STS-91, STS-99, STS-108, STS-123 Ronald Grabe – STS-51-J, STS-30, STS-42, STS-57 Duane Graveline Frederick Gregory – STS-51-B, STS-33, STS-44 William Gregory – STS-67 S. David Griggs – STS-51-D Gus Grissom – Mercury-Redstone 4, Gemini 3, Apollo 1 John Grunsfeld – STS-67, STS-81, STS-103, STS-109, STS-125 Sidney Gutierrez – STS-40, STS-59 H Fred Haise – Apollo 13, ALT James Halsell – STS-65, STS-74, STS-83, STS-94, STS-101 Kenneth Ham – STS-124, STS-132 Blaine Hammond – STS-39, STS-64 Gregory Harbaugh – STS-39, STS-54, STS-71, STS-82 Bernard Harris – STS-55, STS-63 Terry Hart – STS-41-C Henry Hartsfield – STS-4, STS-41-D, STS-61-A Frederick Hauck – STS-7, STS-51-A, STS-26 Steven Hawley – STS-41-D, STS-61-C, STS-31, STS-82, STS-93 Susan Helms – STS-54, STS-64, STS-78, STS-101, STS-102/STS-105 (Expedition 2) Karl Henize – STS-51-F Terence Henricks – STS-44, STS-55, STS-70, STS-78 Jose Hernandez – STS-128 John Herrington – STS-113 Richard Hieb – STS-39, STS-49, STS-65 Joan Higginbotham – STS-116 David Hilmers – STS-51-J, STS-26, STS-36, STS-42 Kathryn Hire – STS-90, STS-130 Charles Hobaugh – STS-104, STS-118, STS-129 Jeffrey Hoffman – STS-51-D, STS-35, STS-46, STS-61, STS-75 Donald Holmquest Michael Hopkins – Soyuz TMA-10M (Expedition 37/38), SpaceX Crew-1 (Expedition 64) Scott Horowitz – STS-75, STS-82, STS-101, STS-105 Douglas Hurley – STS-127, STS-135, SpaceX Demo-2 (Expedition 63) Rick Husband – STS-96, STS-107 I James Irwin – Apollo 15 Marsha Ivins – STS-32, STS-46, STS-62, STS-81, STS-98 J Mae Jemison – STS-47 Tamara Jernigan – STS-40, STS-52, STS-67, STS-80, STS-96 Brent Jett – STS-72, STS-81, STS-97, STS-115 Gregory C. Johnson – STS-125 Gregory H. 
Johnson – STS-123, STS-134 Thomas Jones – STS-59, STS-68, STS-80, STS-98 K Janet Kavandi – STS-91, STS-99, STS-104 James Kelly – STS-102, STS-114 Mark Kelly – STS-108, STS-121, STS-124, STS-134 Scott Kelly – STS-103, STS-118, Soyuz TMA-01M (Expedition 25/26), Soyuz TMA-16M/Soyuz TMA-18M (Expedition 43/44/45/46) Joseph Kerwin – Skylab 2 Robert Kimbrough – STS-126, Soyuz MS-02 (Expedition 49/50), SpaceX Crew-2 (Expedition 65/66) Timothy Kopra – STS-127/STS-128 (Expedition 20), Soyuz TMA-19M (Expedition 46/47) Kevin Kregel – STS-70, STS-78, STS-87, STS-99 L Wendy Lawrence – STS-67, STS-86, STS-91, STS-114 Mark Lee – STS-30, STS-47, STS-64, STS-82 David Leestma – STS-41-G, STS-28, STS-45 William Lenoir – STS-5 Don Lind – STS-51-B Steven Lindsey – STS-87, STS-95, STS-104, STS-121, STS-133 Jerry Linenger – STS-64, STS-81/STS-84 (Mir EO-22/23) Richard Linnehan – STS-78, STS-90, STS-109, STS-123 Paul Lockhart – STS-111, STS-113 Michael Lopez-Alegria – STS-73, STS-92, STS-113, Soyuz TMA-9 (Expedition 14), Axiom Mission 1 Christopher Loria John Lounge – STS-51-I, STS-26, STS-35 Jack Lousma – Skylab 3, STS-3 Stanley Love – STS-122 Jim Lovell – Gemini 7, Gemini 12, Apollo 8, Apollo 13 G. David Low – STS-32, STS-43, STS-57 Edward Lu – STS-84, STS-106, Soyuz TMA-2 (Expedition 7) Shannon Lucid – STS-51-G, STS-34, STS-43, STS-58, STS-76/STS-79 (Mir EO-21/22) M Sandra Magnus – STS-112, STS-126/STS-119 (Expedition 18), STS-135 Thomas Marshburn – STS-127, Soyuz TMA-07M (Expedition 34/35), SpaceX Crew-3 (Expedition 66/67) Michael Massimino – STS-109, STS-125 Richard Mastracchio – STS-106, STS-118, STS-131, Soyuz TMA-11M (Expedition 38/39) Ken Mattingly – Apollo 16, STS-4, STS-51-C William McArthur – STS-58, STS-74, STS-92, Soyuz TMA-7 (Expedition 12) Jon McBride – STS-41-G Bruce McCandless – STS-41-B, STS-31 William McCool – STS-107 Michael McCulley – STS-34 James McDivitt – Gemini 4, Apollo 9 Donald McMonagle – STS-39, STS-54, STS-66 Ronald McNair – STS-41-B, STS-51-L Carl Meade – STS-38, STS-50, STS-64 Bruce Melnick – STS-41, STS-49 Pamela Melroy – STS-92, STS-112, STS-120 Leland Melvin – STS-122, STS-129 Dorothy Metcalf-Lindenburger – STS-131 Curt Michel Edgar Mitchell – Apollo 14 Barbara Morgan – STS-118 Lee Morin – STS-110 Mike Mullane – STS-41-D, STS-27, STS-36 Story Musgrave – STS-6, STS-51F, STS-33, STS-44, STS-61, STS-80 N Steven Nagel – STS-51-G, STS-61-A, STS-37, STS-55 George Nelson – STS-41-C, STS-61-C, STS-26 James Newman – STS-51, STS-69, STS-88, STS-109 Carlos Noriega – STS-84, STS-97 Lisa Nowak – STS-121 Karen Nyberg – STS-124, Soyuz TMA-09M (Expedition 36/37) O Ellen Ochoa – STS-56, STS-66, STS-96, STS-110 Bryan O'Connor – STS-61-B, STS-40 William Oefelein – STS-116 John Olivas – STS-117, STS-128 Ellison Onizuka – STS-51-C, STS-51-L Stephen Oswald – STS-42, STS-56, STS-67 Robert Overmyer – STS-5, STS-51-B P Scott Parazynski – STS-66, STS-86, STS-95, STS-100, STS-120 Robert Parker – STS-9, STS-35 Nicholas Patrick – STS-116, STS-130 Donald Peterson – STS-6 John Phillips – STS-100, Soyuz TMA-6 (Expedition 11), STS-119 William Pogue – Skylab 4 Alan Poindexter – STS-122, STS-131 Mark Polansky – STS-98, STS-116, STS-127 Charles Precourt – STS-55, STS-71, STS-84, STS-91 R William Readdy – STS-42, STS-51, STS-79 Kenneth Reightler – STS-48, STS-60 James Reilly – STS-89, STS-104, STS-117 Garrett Reisman – STS-123/STS-124 (Expedition 16/17), STS-132 Judith Resnik – STS-41-D, STS-51-L Paul Richards – STS-102 Richard Richards – STS-28, STS-41, STS-50, STS-64 Sally Ride – STS-7, STS-41-G Patricia
Robertson Stephen Robinson – STS-85, STS-95, STS-114, STS-130 Kent Rominger – STS-73, STS-80, STS-85, STS-96, STS-100 Stuart Roosa – Apollo 14 Jerry Ross – STS-61-B, STS-27, STS-37, STS-55, STS-74, STS-88, STS-110 Mario Runco – STS-44, STS-54, STS-77 S Robert Satcher – STS-129 Wally Schirra – Mercury-Atlas 8, Gemini 6A, Apollo 7 Harrison Schmitt – Apollo 17 Russell Schweickart – Apollo 9 Francis Scobee – STS-41-C, STS-51-L David Scott – Gemini 8, Apollo 9, Apollo 15 Winston Scott – STS-72, STS-87 Richard Searfoss – STS-58, STS-76, STS-90 Margaret Rhea Seddon – STS-51-D, STS-40, STS-58 Elliot See Ronald Sega – STS-60, STS-76 Piers Sellers – STS-112, STS-121, STS-132 Brewster Shaw – STS-9, STS-61-B, STS-28 Alan Shepard – Mercury-Redstone 3, Apollo 14 William Shepherd – STS-27, STS-41, STS-52, Soyuz TM-31/STS-102 (Expedition 1) Loren Shriver – STS-51-C, STS-31, STS-46 Deke Slayton – Apollo-Soyuz Test Project Michael Smith – STS-51-L Steven Smith – STS-68, STS-82, STS-103, STS-110 Sherwood Spring – STS-61-B Robert Springer – STS-29, STS-38 Thomas P. Stafford – Gemini 6A, Gemini 9A, Apollo 10, Apollo-Soyuz Test Project Heidemarie Stefanyshyn-Piper – STS-115, STS-126 Robert Stewart – STS-41-B, STS-51-J Susan Still-Kilrain – STS-83, STS-94 Nicole Stott – STS-128/STS-129 (Expedition 20/21), STS-133 Frederick Sturckow – STS-88, STS-105, STS-117, STS-128 Kathryn Sullivan – STS-41-G, STS-31, STS-45 Steven Swanson – STS-117, STS-119, Soyuz TMA-12M (Expedition 39/40) Jack Swigert – Apollo 13 T Daniel Tani – STS-108, STS-120/STS-122 (Expedition 16) Norman Thagard – STS-7, STS-51-B, STS-30, STS-42, Soyuz TM-21/STS-71 (Mir EO-18) Joseph Tanner – STS-66, STS-82, STS-97, STS-115 Andy Thomas – STS-77, STS-89/STS-91 (Mir EO-24/25), STS-102, STS-114 Donald Thomas – STS-65, STS-70, STS-83, STS-94 Kathryn Thornton – STS-33, STS-49, STS-61, STS-73 William Thornton – STS-8, STS-51-B Pierre Thuot – STS-36, STS-49, STS-62 Richard Truly – ALT, STS-2, STS-8 V James Van Hoften – STS-41-C, STS-51-I Charles Veach – STS-39, STS-52 Terry Virts – STS-130, Soyuz TMA-15M (Expedition 42/43) James Voss – STS-44, STS-53, STS-69, STS-101, STS-102/STS-105 (Expedition 2) Janice Voss – STS-57, STS-63, STS-83, STS-94, STS-99 W Rex Walheim – STS-110, STS-122, STS-135 David Walker – STS-51-A, STS-30, STS-53, STS-69 Carl Walz – STS-51, STS-65, STS-79, STS-108/STS-111 (Expedition 4) Mary Weber – STS-70, STS-101 Paul Weitz – Skylab 2, STS-6 James Wetherbee – STS-32, STS-52, STS-63, STS-86, STS-102, STS-113 Ed White – Gemini 4, Apollo 1 Peggy Whitson – STS-111/STS-113 (Expedition 5), Soyuz TMA-11 (Expedition 16), Soyuz MS-03/Soyuz MS-04 (Expedition 50/51/52) Terrence Wilcutt – STS-68, STS-79, STS-89, STS-106 Clifton Williams Donald Williams – STS-51-D, STS-34 Jeffrey Williams – STS-101, Soyuz TMA-8 (Expedition 13), Soyuz TMA-16 (Expedition 21/22), Soyuz TMA-20M (Expedition 47/48) Peter Wisoff – STS-57, STS-68, STS-81, STS-92 David Wolf – STS-58, STS-86/STS-89 (Mir EO-24), STS-112, STS-127 Neil Woodward Alfred Worden – Apollo 15 Y John Young – Gemini 3, Gemini 10, Apollo 10, Apollo 16, STS-1, STS-9 Z George Zamka – STS-120, STS-130 Selection groups 1959 Group 1 – "The Mercury Seven" 1962 Group 2 – "The New Nine" 1963 Group 3 – "The Fourteen" 1965 Group 4 – "The Scientists" 1966 Group 5 – "The Original 19" 1967 Group 6 – "The Excess Eleven (XS-11)" 1969 Group 7 – USAF MOL Transfer, no official nickname (Astronauts selected from the Manned Orbiting Laboratory program) 1978 Group 8 – "Thirty-Five New Guys (TFNG)" (class included first female 
candidates) 1980 Group 9 – "19+80" 1984 Group 10 – "The Maggots" 1985 Group 11 – no official nickname 1987 Group 12 – "The GAFFers" 1990 Group 13 – "The Hairballs" 1992 Group 14 – "The Hogs" 1994 Group 15 – "The Flying Escargot" 1996 Group 16 – "The Sardines" (largest class to date, 35 NASA candidates and nine international astronauts) 1998 Group 17 – "The Penguins" 2000 Group 18 – "The Bugs" 2004 Group 19 – "The Peacocks" 2009 Group 20 – "The Chumps" 2013 Group 21 – "The 8-Balls" (composed of four male and four female candidates; highest percentage of females) 2017 Group 22 – "The Turtles" 2021 Group 23 – "The Flies" See also Other astronaut corps: Canadian Astronaut Corps European Astronaut Corps JAXA Astronaut Corps (Japan) Roscosmos Cosmonaut Corps (Russia) People's Liberation Army Astronaut Corps (China) List of astronauts by selection Human spaceflight History of spaceflight United States Astronaut Hall of Fame Notes References External links NASA Astronaut Candidate Program Brochure Current NASA Astronaut Corps Members Former NASA Astronaut Corps Members * Astronaut Candidate Program Lists of astronauts NASA lists NASA astronauts Human spaceflight programs
NASA Astronaut Corps
Engineering
6,466
31,260,516
https://en.wikipedia.org/wiki/PKA%20%28irradiation%29
In condensed-matter physics, a primary knock-on atom (PKA) is an atom that is displaced from its lattice site by irradiation; it is, by definition, the first atom that an incident particle encounters in the target. After it is displaced from its initial lattice site, the PKA can induce the subsequent lattice site displacements of other atoms if it possesses sufficient energy (above the threshold displacement energy), or come to rest in the lattice at an interstitial site if it does not (an interstitial defect). Most of the displaced atoms resulting from electron irradiation and some other types of irradiation are PKAs, since these usually have energies below the threshold displacement energy and therefore do not have sufficient energy to displace more atoms. In other cases, like fast neutron irradiation, most of the displacements result from higher-energy PKAs colliding with other atoms as they slow down to rest. Collision Models Atoms can only be displaced if, upon bombardment, the energy they receive exceeds a threshold energy E_d. Likewise, when a moving atom collides with a stationary atom, both atoms will have energy greater than E_d after the collision only if the original moving atom had an energy exceeding 2E_d. Thus, only PKAs with an energy greater than 2E_d can continue to displace more atoms and increase the total number of displaced atoms. In cases where the PKA does have sufficient energy to displace further atoms, the same holds true for any subsequently displaced atom. In any scenario, the majority of displaced atoms leave their lattice sites with energies no more than two or three times E_d. Such an atom will collide with another atom approximately every mean interatomic distance traveled, losing half of its energy during the average collision. Assuming that an atom that has slowed down to a kinetic energy of 1 eV becomes trapped in an interstitial site, displaced atoms will typically be trapped no more than a few interatomic distances away from the vacancies they leave behind. There are several possible scenarios for the energy of PKAs, and these lead to different forms of damage. In the case of electron or gamma ray bombardment, the PKA usually does not have sufficient energy to displace more atoms. The resulting damage consists of a random distribution of Frenkel defects, usually with a distance of no more than four or five interatomic distances between the interstitial and vacancy. When PKAs receive energy greater than 2E_d from bombarding electrons, they are able to displace more atoms, and some of the Frenkel defects become groups of interstitial atoms with corresponding vacancies, within a few interatomic distances of each other. In the case of bombardment by fast-moving atoms or ions, groups of vacancies and interstitial atoms widely separated along the track of the atom or ion are produced. As the atom slows down, the cross section for producing PKAs increases, resulting in groups of vacancies and interstitials concentrated at the end of the track.
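The counting argument above, in which each further displacement costs on average about 2E_d of cascade energy, is essentially the classic Kinchin–Pease estimate of cascade size. The following is a minimal illustrative sketch, not taken from the article; the 40 eV threshold is an assumed value typical of metals:

```python
def kinchin_pease_displacements(e_pka, e_d):
    """Estimate the number of displaced atoms produced by a PKA of energy
    e_pka (eV), using the Kinchin-Pease cascade model with threshold e_d (eV)."""
    if e_pka < e_d:
        return 0                      # the PKA itself is not permanently displaced
    if e_pka < 2 * e_d:
        return 1                      # PKA is displaced but cannot displace others
    return int(e_pka / (2 * e_d))     # each displacement costs ~2*e_d on average

# Example: a 10 keV PKA in a metal with an assumed threshold of 40 eV
print(kinchin_pease_displacements(10_000, 40))  # -> 125 displaced atoms
```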
Damage Models A thermal spike is a region in which a moving particle heats up the material surrounding its track through the solid for times of the order of 10⁻¹² s. In its path, a PKA can produce effects similar to those of heating and rapidly quenching a metal, resulting in Frenkel defects. A thermal spike does not last long enough to permit annealing of the Frenkel defects. A different model, called the displacement spike, was proposed for fast neutron bombardment of heavy elements. With high-energy PKAs, the region affected is heated to temperatures above the material's melting point, and instead of considering individual collisions, the entire volume affected could be considered to “melt” for a short period of time. The words “melt” and “liquid” are used loosely here because it is not clear whether the material at such high temperatures and pressures would be a liquid or a dense gas. Upon melting, former interstitials and vacancies become “density fluctuations,” since the surrounding lattice points no longer exist in the liquid. In the case of a thermal spike, the temperature is not high enough to maintain the liquid state long enough for density fluctuations to relax and interatomic exchange to occur. A rapid “quenching” effect results in vacancy-interstitial pairs that persist throughout melting and resolidification. Towards the end of the path of a PKA, the rate of energy loss becomes high enough to heat up the material well above its melting point. While the material is melted, atomic interchange occurs as a result of random motion of the atoms initiated by the relaxation of local strains from the density fluctuations. This releases stored energy from these strains that raises the temperature even higher, maintaining the liquid state briefly after most of the density fluctuations disappear. During this time, the turbulent motions continue, so that upon resolidification, most of the atoms will occupy new lattice sites. Such regions are called displacement spikes, which, unlike thermal spikes, do not retain Frenkel defects. Based on these theories, there should be two different regions, each retaining a different form of damage, along the path of a PKA. A thermal spike should occur in the earlier part of the path, and this high-energy region retains vacancy-interstitial pairs. There should be a displacement spike towards the end of the path, a low-energy region where atoms have been moved to new lattice sites but no vacancy-interstitial pairs are retained. Cascade Damage The structure of cascade damage is strongly dependent on PKA energy, so the PKA energy spectrum should be used as the basis for evaluating microstructural changes under cascade damage. In thin gold foil, at lower bombardment doses, the interactions of cascades are insignificant, and both visible vacancy clusters and invisible vacancy-rich regions are formed by cascade collision sequences. The interaction of cascades at higher doses was found to produce new clusters near existing groups of vacancy clusters, apparently converting invisible vacancy-rich regions to visible vacancy clusters. These processes are dependent on PKA energy, and from three PKA spectra obtained from fission neutrons, 21 MeV self-ions, and fusion neutrons, the minimum PKA energy required to produce new visible clusters by interaction was estimated to be 165 keV. References See also Vacancy defect Interstitial defect Atoms
PKA (irradiation)
Physics
1,280
74,810,935
https://en.wikipedia.org/wiki/Miti%20hue
Miti hue is a traditional sauce in Polynesian cuisine made from coconut flesh and salt water mixed together and fermented. It is prepared from a young coconut at the stage where the flesh of the green coconut starts to harden and begins losing its water. The flesh is cut into pieces and placed in a calabash vessel with salt water and the heads of freshwater prawns, and the mixture is left in the sun for a few days to ferment. Miti hue is served as an accompaniment to traditional Tahitian dishes, most notably the fermented fish dish Fafaru. The preparation of Tai monomono is similar to that of Miti hue, though crushed crustaceans are entirely absent from the recipe. Flavourings like lemon, lime and chilli can also be added to Tai monomono; the version with chilli is known by its own name. Fermented coconut sauce is also eaten in Tonga, the Samoan islands and the Polynesian island of Rotuma, but the process differs from Miti hue: the sauce is a byproduct of converting coconut shells into containers, a practice that was common in the West Polynesian islands. A mature coconut has a hole drilled into it; the water inside the nut is removed and replaced with sea water. A stopper is placed into the hole and the nut is left to ferment for a few weeks, resulting in the inner flesh breaking down into a gruel. Names Cook Islands: French Polynesia: Rotuma: Samoa and American Samoa: Tonga: See also Taioro – A fermented paste made from coconut meat, eaten in Oceania. References Condiments Fermented foods French Polynesian cuisine Cook Islands cuisine Fijian cuisine Samoan cuisine Tongan cuisine Polynesian cuisine Foods containing coconut
Miti hue
Biology
360
936,289
https://en.wikipedia.org/wiki/Groombridge%201618
Groombridge 1618 is a star in the northern constellation Ursa Major. With an apparent visual magnitude of +6.6, it lies at or below the threshold of stars visible to the naked eye for an average observer. It is relatively close to Earth, at a distance of about 4.9 parsecs (16 light-years), as implied by its parallax. This is a main sequence star of spectral type K7.5 Ve, having just 67% of the Sun's mass. Properties This star was first identified as entry 1618 in the work A Catalog of Circumpolar Stars by Stephen Groombridge, published posthumously in 1838. Its large proper motion across the sky suggested that it was relatively nearby and made it an early candidate for parallax measurements. In 1884 the parallax angle was measured; the result was larger than the modern value of 0″.205. Groombridge 1618 has a stellar classification of K8 V, which means it is a K-type main sequence star that is generating energy by fusing hydrogen at its core. It has 67% of the mass of the Sun and 61% of the Sun's radius, but radiates only 15% of the Sun's energy and only 4.6% of the Sun's energy in the visible light spectrum. The effective surface temperature of the star's photosphere is about 4,000 K, giving it an orange hue. It is a BY Draconis variable with a surface magnetic field strength of 750 G. The chromosphere is relatively inactive and produces starspots comparable to sunspots. However, like UV Ceti, it has been observed to undergo increases in luminosity as a flare star. Search for planets A search for excess infrared emission from this star by the Infrared Space Observatory came up negative, implying that Groombridge 1618 does not possess a nearby debris disk (such as Vega does). However, observations using the Herschel Space Observatory showed a small excess suggesting a low-temperature debris disk. The data can be modeled by a ring of coarse, highly reflective dust at a temperature below 22 K orbiting at least 51 AU from the host star. If this star does have a companion, astrometric measurements appear to place an upper bound of 3–12 times the mass of Jupiter on such a hypothetical object (for orbital periods in the range of 5–50 years). Observations collated by Marcy & Benitz (1989) tend towards a single notable object with a periodicity of 122 days as a planetary object with minimum mass 4 times that of Jupiter. This candidate planet was never confirmed, and the signal the authors had found could have been due to intrinsic stellar activity from the star's young age. If confirmed, the planet would be within the star's habitable zone. An examination of this system in 2010 using the MMT telescope fitted with adaptive optics failed to detect a planetary companion. The habitable zone for this star, defined as where liquid water could be present on an Earth-like planet, is at a radius of 0.26–0.56 AU, where 1 AU is the average distance from the Earth to the Sun. The star is among five nearby K-type stars in a 'sweet spot' between Sun-analog stars and M stars for the likelihood of evolved life, per analysis by Giada Arney from NASA's Goddard Space Flight Center. See also Stephen Groombridge List of nearest stars and brown dwarfs Notes References External links K-type main-sequence stars Local Bubble Ursa Major Flare stars Hypothetical planetary systems 088230 049908 0380 1618 BD+50 1725 TIC objects
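Two of the quantities above can be cross-checked with elementary formulas: the distance follows directly from the parallax, and a simple habitable-zone estimate scales the Sun's values by the square root of the luminosity. The sketch below is illustrative only; the 0.95 and 1.37 AU coefficients are one common choice of inner and outer bounds and differ from the more detailed model behind the 0.26–0.56 AU figure quoted above.

```python
import math

LY_PER_PC = 3.2616

def distance_from_parallax(p_arcsec):
    """Distance in parsecs is the reciprocal of the parallax in arcseconds."""
    d_pc = 1.0 / p_arcsec
    return d_pc, d_pc * LY_PER_PC

def simple_habitable_zone(l_star):
    """Scale the Sun's HZ edges (assumed here as 0.95 and 1.37 AU) by sqrt(L)."""
    return 0.95 * math.sqrt(l_star), 1.37 * math.sqrt(l_star)

print(distance_from_parallax(0.205))   # ~ (4.9 pc, 15.9 ly)
print(simple_habitable_zone(0.15))     # ~ (0.37 AU, 0.53 AU)
```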
Groombridge 1618
Astronomy
723
43,101,983
https://en.wikipedia.org/wiki/Gliophorus%20lilacipes
Gliophorus lilacipes is a species of agaric fungus in the family Hygrophoraceae. Found in New Zealand, it was described by Egon Horak in 1973. References External links Hygrophoraceae Fungi described in 1973 Fungi of New Zealand Taxa named by Egon Horak Fungus species
Gliophorus lilacipes
Biology
69
37,369,491
https://en.wikipedia.org/wiki/List%20of%20DNA%20nanotechnology%20research%20groups
This list of DNA nanotechnology research groups gives a partial overview of academic research organisations in the field of DNA nanotechnology, sorted geographically. Any sufficiently notable research group (which can generally be taken to mean any group that has published in well-regarded, high-impact-factor journals) should be listed here, along with a brief description of their research. North America Asia Europe References DNA nanotechnology DNA Research groups
List of DNA nanotechnology research groups
Materials_science
83
753,145
https://en.wikipedia.org/wiki/Del%20in%20cylindrical%20and%20spherical%20coordinates
This is a list of some vector calculus formulae for working with common curvilinear coordinate systems. Notes This article uses the standard notation ISO 80000-2, which supersedes ISO 31-11, for spherical coordinates (other sources may reverse the definitions of θ and φ): The polar angle is denoted by θ: it is the angle between the z-axis and the radial vector connecting the origin to the point in question. The azimuthal angle is denoted by φ: it is the angle between the x-axis and the projection of the radial vector onto the xy-plane. The function atan2(y, x) can be used instead of the mathematical function arctan(y/x) owing to its domain and image. The classical arctan function has an image of (−π/2, π/2), whereas atan2 is defined to have an image of (−π, π]. Coordinate conversions Note that the arctan operation in the conversion formulae must be interpreted as the two-argument inverse tangent, atan2. Unit vector conversions Del formula This page uses θ for the polar angle and φ for the azimuthal angle, which is common notation in physics. The source that is used for these formulae uses θ for the azimuthal angle and φ for the polar angle, which is common mathematical notation. In order to get the mathematics formulae, switch θ and φ in the formulae shown in the table above. Defined in Cartesian coordinates as . An alternative definition is . Defined in Cartesian coordinates as . An alternative definition is . Calculation rules (Lagrange's formula for del) Cartesian derivation The expressions for and are found in the same way. Cylindrical derivation Spherical derivation Unit vector conversion formula The unit vector of a coordinate parameter u is defined in such a way that a small positive change in u causes the position vector to change in the direction of that unit vector. Therefore, ∂r/∂u = (∂s/∂u) û, where s is the arc length parameter. For two sets of coordinate systems and , according to chain rule, Now, we isolate the th component. For , let . Then divide on both sides by to get: See also Del Orthogonal coordinates Curvilinear coordinates Vector fields in cylindrical and spherical coordinates References External links Maxima Computer Algebra system scripts to generate some of these operators in cylindrical and spherical coordinates. Vector calculus Coordinate systems
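As a concrete companion to the coordinate-conversion notes above, here is a minimal sketch (illustrative, using this page's physics convention of θ as the polar angle and φ as the azimuth) showing why the azimuth must be computed with atan2 rather than a plain arctan:

```python
import math

def cartesian_to_spherical(x, y, z):
    """Physics (ISO 80000-2) convention: theta = polar angle from +z,
    phi = azimuthal angle from +x in the xy-plane, computed with atan2."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0   # polar angle in [0, pi]
    phi = math.atan2(y, x)                       # azimuth in (-pi, pi]
    return r, theta, phi

def spherical_to_cartesian(r, theta, phi):
    sin_t = math.sin(theta)
    return (r * sin_t * math.cos(phi),
            r * sin_t * math.sin(phi),
            r * math.cos(theta))

# Round trip for a point with x < 0, where plain arctan(y/x) picks the wrong quadrant
print(cartesian_to_spherical(-1.0, 1.0, 1.0))
print(spherical_to_cartesian(*cartesian_to_spherical(-1.0, 1.0, 1.0)))
```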
Del in cylindrical and spherical coordinates
Mathematics
439
21,252,539
https://en.wikipedia.org/wiki/Frank%20Philip%20Bowden
Frank Philip Bowden CBE FRS (2 May 1903 – 3 September 1968) was an Australian physicist. Early life He was born in Hobart, Tasmania, the son of telegraph engineer Frank Prosser Bowden. Bowden received his Bachelor of Science degree from the University of Tasmania in Australia in 1925, a Master of Science degree there in 1927 and a Doctor of Science (D.Sc) degree in 1933, by which time he was working at the University of Cambridge in England. He gained his PhD from Cambridge in 1929. Career Between 1931 and 1939, Bowden worked as a lecturer in physical chemistry at the University of Cambridge. He moved back to Australia in 1939 to work at the Commonwealth Scientific and Industrial Research Organisation. He returned to Britain in 1946 as a reader in physical chemistry. In 1957, Bowden became Reader of Physics at Cambridge, and in 1966 became the Professor of Surface Physics. He made significant contributions to the field of tribology and he received the International Award from the Society of Tribologists and Lubrication Engineers in 1955. He was also named as one of 23 "Men of Tribology" by Duncan Dowson. Much of Bowden's tribology research was performed alongside David Tabor, with whom he published his popular book 'The Friction and Lubrication of Solids'. Bowden died on 3 September 1968. Private life He married Tasmanian Margot Hutchison in London in 1931. They had 3 sons and a daughter. Honours and awards 1938 Awarded Beilby Medal and Prize by the Royal Society of Chemistry 1948 Elected a Fellow of the Royal Society. 1955 Awarded the Franklin Institute's Elliott Cresson Medal. 1955 Awarded the International Award from the STLE. 1956 Awarded CBE in the 1956 Birthday Honours. 1956 Awarded the Rumford Medal of the Royal Society "In recognition of his distinguished work on the nature of friction". 1968 Awarded the Glazebrook Medal of the Institute of Physics 1968 Awarded the Bernard Lewis Gold Medal of the Combustion Institute References 1903 births 1968 deaths 20th-century British physicists 20th-century British engineers British mechanical engineers Tribologists Commanders of the Order of the British Empire Academics of the University of Cambridge Fellows of the Royal Society University of Tasmania alumni Alumni of Gonville and Caius College, Cambridge Presidents of the Cambridge Philosophical Society
Frank Philip Bowden
Materials_science
464
30,861,073
https://en.wikipedia.org/wiki/Fog%20collection
Fog collection is the harvesting of water from fog using large pieces of vertical mesh netting to induce the fog droplets to flow down towards a trough below. The setup is known as a fog fence, fog collector or fog net. Through condensation, atmospheric water vapour condenses on cold surfaces into droplets of liquid water known as dew. The phenomenon is most observable on thin, flat, exposed objects including plant leaves and blades of grass. As the exposed surface cools by radiating its heat to the sky, atmospheric moisture condenses at a rate greater than that at which it can evaporate, resulting in the formation of water droplets. Water condenses onto the array of parallel wires and collects at the bottom of the net. This requires no external energy and is facilitated naturally through temperature fluctuation, making it attractive for deployment in less developed areas. The term 'fog fence' comes from its long rectangular shape resembling a fence, but fog collectors are not confined just to this structural style. The efficiency of the fog collector is based on the net material, the size of the holes and filament, and the chemical coating. Fog collectors can harvest from 2% up to 10% of the moisture in the air, depending on their efficiency. An ideal location is a high-altitude arid area near cold offshore currents, where fog is common and, therefore, the fog collector can produce the highest yield. Historical origin The organized collection of dew or condensation through natural or assisted processes is an ancient practice, from the small-scale drinking of pools of condensation collected in plant stems (still practiced today by survivalists), to large-scale natural irrigation without rain falling, such as in the Atacama and Namib deserts. The first man-made fog collectors stretch back as far as the Inca Empire, where buckets were placed under trees to take advantage of condensation. Several man-made devices such as antique stone piles in Ukraine, medieval dew ponds in southern England and volcanic stone covers on the fields of Lanzarote have all been thought to be possible dew-catching devices. One of the first recorded projects of fog collection was in 1969 in South Africa, as a water source for an air force base. The structure consisted of two fences, each 100 m² (1000 sq. ft.). Between the two, 11 litres (2½ gallons) of water was produced on average per day over the 14-month study, which is 110 ml of water for every square meter (⅓ fl. oz. per sq. ft.). The next large study was performed by the National Catholic University of Chile and the International Development Research Centre in Canada in 1987. One hundred 48 m² (520 sq. ft.) fog fences were assembled in northern Chile. The project was able to yield on average 0.5 litre of water for every square meter (1½ fl. oz. per sq. ft), or 33 L (8 gallons) for each of the 300 villagers, each day. In nature Fog collection was first seen in nature as a technique for gathering water used by some insects and foliage. Namib Desert beetles live off water that condenses on their wings due to a pattern of alternating hydrophilic (water attracting) and hydrophobic (water repelling) regions. Redwood forests are able to survive on limited rainfall due to the addition of condensation on needles, which drips into the trees' root systems. Parts of a fog collector The fog collector is made up of three major parts: the frame, the mesh netting, and the trough or basin.
The frame supports the mesh netting and can be made from a wide array of materials, from stainless steel poles to bamboo. The frame can vary in shape. Proposed geometries include linear (similar to a fence) and cylindrical. Linear frames are rectangles with the vertical endpoints embedded into the ground. They have rope supports connected at the top and staked into the ground to provide stability. The mesh netting is where the condensation of water droplets appears. It consists of filaments knitted together with small openings, coated with a chemical to increase condensation. Shade cloth is used for the mesh structure because it can be locally sourced in underdeveloped countries. The filaments are coated to be hydrophilic and hydrophobic, which attracts and repels water to increase condensation. This can retrieve 2% of the moisture in the air. Efficiency increases as the size of the filaments and the holes decreases. The optimal mesh netting is made from stainless steel filaments the size of three to four human hairs, with holes that are twice as big as the filament. The netting is coated in a chemical that decreases the water droplets' contact angle hysteresis, which allows more small droplets to form. This type of netting can capture 10% of the moisture in the air. Below the mesh netting of a fog fence there is a small trough for the water to be collected in. The water runs from the trough to some type of storage container or irrigation system for use. If the fog collector is circular, the water will be deposited into a basin placed at the bottom of the netting. Principle Fog typically contains from 0.05 to 1 gram of water per cubic meter (⅗ to 12 grains per cu. yd.), with droplets from 1 to 40 micrometres in diameter. It settles slowly and is carried by wind. Therefore, an efficient fog fence must be placed facing the prevailing winds, and must be a fine mesh, as wind would flow around a solid wall and take the fog with it. The water droplets in the fog deposit on the mesh. A second mesh rubbing against the first causes the droplets to coalesce and run to the bottom of the meshes, where the water may be collected and led away. Advantages and disadvantages Advantages Water can be collected in any environment, including extremely arid environments such as the Atacama Desert, one of the driest places on earth. The harvested water can be safer to drink than ground water. Fog collection is considered low maintenance because it requires no exterior energy and only an occasional brushing of the nets to keep them clean. Parts can sometimes be sourced locally in underdeveloped countries, which allows for the collector to be fixed if broken and not sit in disrepair. No in-depth training is necessary for repairing the collector. Fog collectors are low cost to implement compared to other water alternatives. Disadvantages Fog fences are limited in quantity by the regional climate and topography and cannot produce more water on demand. Their yields are not consistent year round and are affected by local weather and global weather fluctuations (such as El Niño). Their water supply can still be contaminated by windborne dust, birds, and insects. The moisture collected can promote growth of mold and other possibly toxic microorganisms on the mesh.
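As a rough illustration of the figures given under Principle above, a collector's daily yield can be estimated as the liquid water carried through the mesh by the wind, multiplied by the captured fraction. The sketch and all input values below are illustrative assumptions, not measurements from the article:

```python
def fog_yield_liters_per_day(area_m2, wind_m_s, lwc_g_m3, efficiency):
    """Estimate daily yield: liquid-water flux through the mesh area
    times the fraction of moisture actually captured."""
    grams_per_second = lwc_g_m3 * wind_m_s * area_m2 * efficiency
    return grams_per_second * 86_400 / 1000.0   # g/s -> litres/day

# Assumed example: a 48 m^2 fence, 5 m/s wind, 0.2 g/m^3 of liquid water,
# and a mesh that captures 5% of the moisture passing through it:
print(round(fog_yield_liters_per_day(48, 5.0, 0.2, 0.05)))  # ~207 litres/day
```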
Modern methods In the mid-1980s, the Meteorological Service of Canada (MSC) began constructing and deploying large fog collecting devices on Mont Sutton in Quebec. These simple tools consisted of a large piece of canvas (generally 12 metres (40') long and 4 metres (13') high) stretched between two 6-metre (20') wooden poles held up by guy wires, with a long trough underneath. Water would condense out of the fog onto the canvas, coalesce into droplets, and then slide down to drip off the bottom of the canvas and into the collecting trough below. Chilean project The intent of the Canadian project was simply to use fog collection devices to study the constituents of the fog that they collected. However, their success sparked the interest of scientists in Chile's National Forest Corporation (CONAF) and the Catholic University of Chile to exploit the camanchaca, or coastal fog clouds, which blanket the northern Chilean coast in the southern hemisphere winter. With funding from the International Development Research Centre (IDRC), the MSC collaborated with the Chileans to begin testing different designs of collection facilities on El Tofo Mountain in northern Chile. Once perfected, approximately 50 of the systems were erected and used to irrigate seedlings on the hillside in an attempt at reforestation. Once vegetation became established, it should have begun collecting fog for itself, like the many cloud forests in South America, in order to flourish as a self-sustaining system. The success of the reforestation project is unclear, but approximately five years after the beginning of the project, the nearby village of Chungungo began to push for a pipeline to be sent down the mountain into the town. Though this was not in the scope of CONAF, which pulled out at this point, it was agreed to expand the collection facility to 94 nylon mesh collectors with a reserve tank and piping in order to supply the 300 inhabitants of Chungungo with water. The IDRC reports that ten years later, in 2002, only nine of the devices remained and the system overall was in very poor shape. Conversely, the MSC states in its article that the facility was still fully functional in 2003, but provides no details behind this statement. In June 2003 the IDRC reported that plans existed to revive the site on El Tofo. Dar Si Hmad In March 2015 Dar Si Hmad (DSH), a Moroccan NGO, built a large fog-collection and distribution system in the Anti-Atlas Mountains. The region DSH worked in is water-poor, but abundant fog drapes the area six months out of the year. DSH's system included technology that monitored the water system via SMS messages. These capabilities were crucial in dealing with the effects of fog collection on the social fabric of these rural areas. According to MIT researchers, the fog collection methods implemented by DSH have "improved the fog-collecting efficiency by about five hundred per cent." International use Despite the apparent failure of the fog collection project in Chungungo, the method has caught on in some localities around the world. The International Organization for Dew Utilization is working on foil-based effective condensers for regions where rain or fog cannot cover water needs throughout the year. Shortly after the initial success of the project, researchers from the participating organizations formed the nonprofit organization FogQuest, which has set up operational facilities in Yemen and central Chile, while still others are under evaluation in Guatemala, Haiti, and Nepal, this time with much more emphasis on the continuing involvement of the communities in the hopes that the projects will last well into the future. 
Villages in a total of 25 countries worldwide now operate fog collection facilities. There is potential for the systems to be used to establish dense vegetation on previously arid grounds. It appears that the inexpensive collectors will continue to flourish. There have been several attempts to set up fog catchers in Peru, with varying success. See also References Sources International Development Research Centre article on the fog collection project Meteorological Service of Canada article on fog collection project Further reading External links Fog Harvesting, chapter from Source Book of Alternative Technologies for Freshwater Augmentation in Latin America and the Caribbean, UNEP International Environmental Technology Centre FogQuest: Sustainable Water Solutions, Canadian organization, historical information on fog collection projects in developing countries Standard Fog Collector, at USGS installation in Hawaii The Fog Collectors: Harvesting Water From Thin Air Water supply Hydrology Appropriate technology
Fog collection
Chemistry,Engineering,Environmental_science
2,319
8,969,359
https://en.wikipedia.org/wiki/Engineering%20research
Engineering research is research oriented towards achieving a specific goal that would be useful, employing the powerful tools already developed in engineering as well as in non-engineering sciences such as physics, mathematics, computer science, chemistry and biology. Often, some of the knowledge required to develop such tools is nonexistent or simply not good enough, and the engineering research then takes the form of a non-engineering science. Since engineering is extensive, it comprises specialised areas such as bioengineering, mechanical engineering, chemical engineering, electrical and computer engineering, civil and environmental engineering, and agricultural engineering. The largest professional organisation is the IEEE, which today encompasses much more than the original electrical and electronic engineering. Major contributors to engineering research around the world include governments, private business, and academia. The results of engineering research can emerge in journal articles, at academic conferences, and in the form of new products on the market. Much engineering research in the United States of America takes place under the aegis of the Department of Defense. Military-related research into science and technology has led to "dual-use" applications, with the adaptation of weaponry, communications and other defense systems for the military and other applications for civilian use. Programmable digital computers and the Internet which connects them, the GPS satellite network, fiber-optic cable, radar and lasers provide examples. See also List of engineering schools Engineer's degree Engineering studies Engineering education research References Research by field Engineering disciplines
Engineering research
Engineering
308
3,521,038
https://en.wikipedia.org/wiki/Isothermal%E2%80%93isobaric%20ensemble
The isothermal–isobaric ensemble (constant temperature and constant pressure ensemble) is a statistical mechanical ensemble that maintains constant temperature T and constant applied pressure P. It is also called the NPT-ensemble, where the number of particles N is also kept as a constant. This ensemble plays an important role in chemistry, as chemical reactions are usually carried out under constant pressure conditions. The NPT ensemble is also useful for measuring the equation of state of model systems whose virial expansion for pressure cannot be evaluated, or systems near first-order phase transitions. In the ensemble, the probability of a microstate i is Z⁻¹ e^{−β(E(i) + PV(i))}, where Z is the partition function, E(i) is the internal energy of the system in microstate i, and V(i) is the volume of the system in microstate i. The probability of a macrostate is Z⁻¹ e^{−β(E + PV − TS)} = Z⁻¹ e^{−βG}, where G is the Gibbs free energy. Derivation of key properties The partition function for the NPT-ensemble can be derived from statistical mechanics by beginning with a system of N identical atoms described by a Hamiltonian of the form Σ_i p_i²/2m + U(r^N) and contained within a box of volume V = L³. This system is described by the partition function of the canonical ensemble in 3 dimensions: Z(N, V, T) = 1/(Λ^{3N} N!) ∫_V dr^N exp(−βU(r^N)), where Λ = √(h²β/(2πm)), the thermal de Broglie wavelength (β = 1/(k_B T) and k_B is the Boltzmann constant), and the factor 1/N! (which accounts for indistinguishability of particles) both ensure normalization of entropy in the quasi-classical limit. It is convenient to adopt a new set of coordinates defined by L s_i = r_i, such that the partition function becomes Z(N, V, T) = V^N/(Λ^{3N} N!) ∫ ds^N exp(−βU(s^N)). If this system is then brought into contact with a bath of volume V₀ at constant temperature and pressure containing an ideal gas with total particle number M such that M − N ≫ N, the partition function of the whole system is simply the product of the partition functions of the subsystems: Z^{tot}(N, V, T) = (V^N (V₀ − V)^{M−N})/(Λ^{3M} N! (M−N)!) ∫ ds^N exp(−βU(s^N)). The integral over the bath coordinates s^{M−N} is simply 1. In the limit that V₀ → ∞ and M → ∞, while (M − N)/V₀ stays constant, a change in volume of the system under study will not change the pressure P of the whole system. Taking V/V₀ → 0 allows for the approximation (V₀ − V)^{M−N} = V₀^{M−N} (1 − V/V₀)^{M−N} ≈ V₀^{M−N} e^{−(M−N)V/V₀}. For an ideal gas, (M − N)/V₀ = βP gives a relationship between density and pressure. Substituting this into the above expression for the partition function, multiplying by a factor βP (see below for justification for this step), and integrating over the volume V then gives Δ^{tot}(N, P, T) = (βP V₀^{M−N})/(Λ^{3M} N! (M−N)!) ∫ dV V^N e^{−βPV} ∫ ds^N exp(−βU(s^N)). The partition function for the bath is simply Z^{bath} = V₀^{M−N}/(Λ^{3(M−N)} (M−N)!). Separating this term out of the overall expression gives the partition function for the NPT-ensemble: Δ(N, P, T) = βP/(Λ^{3N} N!) ∫ dV V^N e^{−βPV} ∫ ds^N exp(−βU(s^N)). Using the above definition of Z(N, V, T), the partition function can be rewritten as Δ(N, P, T) = βP ∫ dV e^{−βPV} Z(N, V, T), which can be written more generally as a weighted sum over the partition function for the canonical ensemble: Δ(N, P, T) = ∫ Z(N, V, T) e^{−βPV} C dV. The quantity C is simply some constant with units of inverse volume, which is necessary to make the integral dimensionless. In this case, C = βP, but in general it can take on multiple values. The ambiguity in its choice stems from the fact that volume is not a quantity that can be counted (unlike e.g. the number of particles), and so there is no “natural metric” for the final volume integration performed in the above derivation. This problem has been addressed in multiple ways by various authors, leading to values for C with the same units of inverse volume. The differences vanish (i.e. the choice of C becomes arbitrary) in the thermodynamic limit, where the number of particles goes to infinity. The NPT-ensemble can also be viewed as a special case of the Gibbs canonical ensemble, in which the macrostates of the system are defined according to an external temperature T and external forces acting on the system J. Consider such a system containing N particles. 
The Hamiltonian of the system is then given by E − J·x, where E is the system's Hamiltonian in the absence of external forces and x are the conjugate variables of J. The microstates μ of the system then occur with probability defined by Pr(μ) = exp(−βE(μ) + βJ·x(μ))/W, where the normalization factor W is defined by W = Σ_μ exp(−βE(μ) + βJ·x(μ)). This distribution is called the generalized Boltzmann distribution by some authors. The NPT-ensemble can be found by taking J = −P and x = V. Then the normalization factor becomes W = Σ_μ exp(−β(E(μ) + PV(μ))), where the Hamiltonian has been written in terms of the particle momenta p_i and positions r_i. This sum can be taken to an integral over both V and the microstates μ. The measure for the latter integral is the standard measure of phase space for identical particles: dΓ = 1/(h^{3N} N!) Π_{i=1}^{N} d³p_i d³r_i. The integral over the momenta is a Gaussian integral, and can be evaluated explicitly as ∫ Π_i (d³p_i/h³) exp(−β Σ_i p_i²/2m) = Λ^{−3N}. Inserting this result into W gives a familiar expression: W = 1/(Λ^{3N} N!) ∫ dV ∫ dr^N exp(−β(U(r^N) + PV)). This is almost the partition function for the NPT-ensemble, but it has units of volume, an unavoidable consequence of taking the above sum over volumes into an integral. Restoring the constant C yields the proper result for Δ(N, P, T). From the preceding analysis it is clear that the characteristic state function of this ensemble is the Gibbs free energy, G(N, P, T) = −k_B T ln Δ(N, P, T). This thermodynamic potential is related to the Helmholtz free energy (the logarithm of the canonical partition function), F(N, V, T) = −k_B T ln Z(N, V, T), in the following way: G = F + PV. Applications Constant-pressure simulations are useful for determining the equation of state of a pure system. Monte Carlo simulations using the NPT-ensemble are particularly useful for determining the equation of state of fluids at pressures of around 1 atm, where they can achieve accurate results with much less computational time than other ensembles. Zero-pressure NPT-ensemble simulations provide a quick way of estimating vapor-liquid coexistence curves in mixed-phase systems. NPT-ensemble Monte Carlo simulations have been applied to study the excess properties and equations of state of various models of fluid mixtures. The NPT-ensemble is also useful in molecular dynamics simulations, e.g. to model the behavior of water at ambient conditions. References Statistical ensembles
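To make the Monte Carlo use mentioned above concrete, the following is a minimal toy sketch rather than a production algorithm: for an ideal gas (U = 0) only volume moves are needed, the sampling weight is V^N e^{−βPV}, and the exact mean volume ⟨V⟩ = (N + 1)k_B T/P is known, so the output can be checked.

```python
import math
import random

def npt_ideal_gas_mean_volume(n, pressure, kT, n_steps=200_000, dv=5.0):
    """Toy NPT Monte Carlo for an ideal gas (U = 0): volume moves only.
    Target weight is V^N * exp(-beta*P*V); exact mean volume is (N+1)*kT/P."""
    beta = 1.0 / kT
    v = n * kT / pressure                     # start near the expected volume
    total = 0.0
    for _ in range(n_steps):
        v_new = v + random.uniform(-dv, dv)   # symmetric volume proposal
        if v_new > 0:
            # ln of the Metropolis acceptance ratio for the V^N e^{-beta*P*V} weight
            ln_acc = n * math.log(v_new / v) - beta * pressure * (v_new - v)
            if random.random() < math.exp(min(0.0, ln_acc)):
                v = v_new
        total += v
    return total / n_steps

print(npt_ideal_gas_mean_volume(50, pressure=1.0, kT=1.0))  # ~ 51
```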
Isothermal–isobaric ensemble
Physics
1,087
41,759,436
https://en.wikipedia.org/wiki/Cathal%20Gurrin
Cathal Gurrin is an Irish Professor and lifelogger. He is the Head of the Adapt Centre at Dublin City University, a Funded Investigator of the Insight Centre, and the director of the Human Media Archives research group. He was previously the deputy head of the School of Computing. His interests include personal analytics and lifelogging. He publishes in information retrieval (IR) with a particular focus on how people access information from pervasive computing devices. He has captured a continuous personal digital memory since 2006 using a wearable camera and logged hundreds of millions of other sensor readings. Early life Cathal attended primary school in Scoil Lorcáin, Kilbarrack, Dublin, and secondary school in St. Fintan's High School, Sutton. He graduated from Dublin City University with a PhD in developing web search engines, including the first Irish language search engine. Research Gurrin has worn a wearable camera since 2006 which takes several still photographs every minute. He is likely the longest wearer of such a device in the world. He also records his location (using GPS) and various other sources of biometric data. Gurrin generated a database of over 18 million images, and produces about a terabyte of personal data a year. Gurrin and his researchers use information retrieval algorithms to segment his personal image archive into "events" such as eating, driving, etc. New events are recognised on a daily basis using machine learning. In an interview Gurrin said that "If I need to remember where I left my keys, or where I parked my car, or what wine I drank at an event two years ago... the answers should all be there." While searching by date and time is easy, more complex searches within images such as looking for brand names and objects with complex form factors, such as keys, is more difficult. One aim of Gurrin's research is to create search engines to allow complex searches of such image databases, and to develop assistive technology. He is the founder of the annual ACM Lifelog Search Challenge, which attracts a worldwide participant list annually. References External links Adapt Centre 1975 births Alumni of Dublin City University Living people People educated at St. Fintan's High School Lifelogging
Cathal Gurrin
Technology
462
59,953,471
https://en.wikipedia.org/wiki/DNA-templated%20organic%20synthesis
DNA‐templated organic synthesis (DTS) is a way to control the reactivity of synthetic molecules by using nature's molarity‐based approach. Historically, DTS was used as a model of prebiotic nucleic acid replication. Now, however, it is capable of translating DNA sequences into complex small‐molecule and polymer products of multistep organic synthesis. Base Editors The DNA base editors, developed at Harvard University under David Liu, allow the alteration of individual bases in genomic DNA. The base editors include BE3, BE4 and ABE7. BE3 and its later version, BE4, allow the conversion of the nucleobase C to T (and, on the complementary strand, of G to A). ABE7 allows the conversion of A–T base pairs into G–C base pairs. The system works by rearranging the atoms in the target base pair and then tricking cells into fixing the other DNA strand to make the change permanent. References Biological engineering Biotechnology Genome editing Molecular biology
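As a toy illustration of the sequence-level outcome described above — not of the actual protein machinery — the sketch below applies a BE3/BE4-style C-to-T conversion inside an editing window; the example sequence and window coordinates are arbitrary assumptions:

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def simulate_c_to_t_edit(seq, start, end):
    """Toy model of a cytosine base editor: every C whose index lies in the
    window [start, end) is rewritten as T on the edited strand."""
    return "".join(
        "T" if (start <= i < end and base == "C") else base
        for i, base in enumerate(seq)
    )

def reverse_complement(seq):
    return "".join(COMPLEMENT[b] for b in reversed(seq))

target = "GACCATTGGC"                        # arbitrary example sequence
edited = simulate_c_to_t_edit(target, 2, 6)  # arbitrary editing window
print(edited)                       # GATTATTGGC: the Cs in the window became Ts
print(reverse_complement(edited))   # on the other strand, the paired Gs read as As
```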
DNA-templated organic synthesis
Chemistry,Engineering,Biology
199
40,765,261
https://en.wikipedia.org/wiki/Point%20process%20notation
In probability and statistics, point process notation comprises the range of mathematical notation used to symbolically represent random objects known as point processes, which are used in related fields such as stochastic geometry, spatial statistics and continuum percolation theory and frequently serve as mathematical models of random phenomena, representable as points, in time, space or both. The notation varies due to the histories of certain mathematical fields and the different interpretations of point processes, and borrows notation from mathematical areas of study such as measure theory and set theory. Interpretation of point processes The notation, as well as the terminology, of point processes depends on their setting and interpretation as mathematical objects which under certain assumptions can be interpreted as random sequences of points, random sets of points or random counting measures. Random sequences of points In some mathematical frameworks, a given point process may be considered as a sequence of points with each point randomly positioned in d-dimensional Euclidean space R^d as well as some other more abstract mathematical spaces. In general, whether or not a random sequence is equivalent to the other interpretations of a point process depends on the underlying mathematical space, but this holds true for the setting of finite-dimensional Euclidean space R^d. Random set of points A point process is called simple if no two (or more) points coincide in location with probability one. Given that often point processes are simple and the order of the points does not matter, a collection of random points can be considered as a random set of points. The theory of random sets was independently developed by David Kendall and Georges Matheron. In terms of being considered as a random set, a sequence of random points is a random closed set if the sequence has no accumulation points with probability one. A point process is often denoted by a single letter, for example N, and if the point process is considered as a random set, then the corresponding notation x ∈ N is used to denote that a random point x is an element of (or belongs to) the point process N. The theory of random sets can be applied to point processes owing to this interpretation, which alongside the random sequence interpretation has resulted in a point process being written as N = {x₁, x₂, ...}, which highlights its interpretation as either a random sequence or a random closed set of points. Furthermore, sometimes an uppercase letter denotes the point process, while a lowercase letter denotes a point from the process, so, for example, the point x belongs to or is a point of the point process X, or with set notation, x ∈ X. Random measures To denote the number of points of N located in some Borel set B, it is sometimes written N(B), where N(B) is a random variable and N is a counting measure, which gives the number of points in some set. In this mathematical expression the point process is denoted by N, while N(B) represents the number of points of N in B. In the context of random measures, one can write N(B) = n to denote that there is a set B that contains n points of N. In other words, a point process can be considered as a random measure that assigns some non-negative integer-valued measure to sets. 
This interpretation has motivated a point process being considered just another name for a random counting measure, and the techniques of random measure theory offering another way to study point processes, which also induces the use of the various notations used in integration and measure theory. Dual notation The different interpretations of point processes as random sets and counting measures is captured with the often-used notation in which: N denotes a set of random points, and N(B) denotes a random variable that gives the number of points of N in the Borel set B (hence it is a random counting measure). Denoting the counting measure again with N, this dual notation implies N(B) = #(N ∩ B), the number of points of the random set N falling in the set B. Sums If f is some measurable function on R^d, then the sum of f(x) over all the points x in N can be written in a number of ways such as f(x₁) + f(x₂) + ⋯, which has the random sequence appearance, or with set notation as Σ_{x ∈ N} f(x), or, equivalently, with integration notation as ∫_{R^d} f(x) N(dx), which puts an emphasis on the interpretation of N as a random counting measure. An alternative integration notation may be used to write this integral as ∫_{R^d} f dN. The dual interpretation of point processes is illustrated when writing the number of points in a set B as N(B) = Σ_{x ∈ N} 1_B(x), where the indicator function 1_B(x) is 1 if the point x is in B and zero otherwise; in this setting 1_B is also known as a Dirac measure. In this expression the random measure interpretation is on the left-hand side while the random set notation is used on the right-hand side. Expectations The average or expected value of a sum of functions over a point process is written as E[Σ_{x ∈ N} f(x)] = ∫ Σ_{x ∈ N} f(x) P(dN), where (in the random measure sense) P is an appropriate probability measure defined on the space of counting measures. The expected value of N(B) can be written as E[N(B)] = ∫ N(B) P(dN), which is also known as the first moment measure of N. The expectation of such a random sum, known as a shot noise process in the theory of point processes, can be calculated with Campbell's theorem. Uses in other fields Point processes are employed in other mathematical and statistical disciplines, hence the notation may be used in fields such as stochastic geometry, spatial statistics or continuum percolation theory, and areas which use the methods and theory from these fields. See also Mathematical Alphanumeric Symbols Mathematical notation Notation in probability Table of mathematical symbols Notes References
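To illustrate the dual notation in a computable setting, the sketch below (an illustrative example, not from the article) samples a homogeneous Poisson point process on the unit square as a set of random points and then evaluates the counting measure N(B) on a box B:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_poisson_process(lam, width=1.0, height=1.0):
    """Homogeneous Poisson point process on [0, width] x [0, height]:
    the point count is Poisson(lam * area); locations are uniform."""
    n = rng.poisson(lam * width * height)
    return rng.uniform([0.0, 0.0], [width, height], size=(n, 2))

def count_in_box(points, x0, y0, x1, y1):
    """The counting-measure view: N(B) for the box B = [x0, x1] x [y0, y1]."""
    inside = ((points[:, 0] >= x0) & (points[:, 0] <= x1) &
              (points[:, 1] >= y0) & (points[:, 1] <= y1))
    return int(inside.sum())

pts = sample_poisson_process(lam=100)         # the random-set view: {x1, x2, ...}
print(count_in_box(pts, 0.0, 0.0, 0.5, 0.5))  # E[N(B)] = 100 * 0.25 = 25
```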
Point process notation
Mathematics
1,041
5,350,525
https://en.wikipedia.org/wiki/Manhattanhenge
Manhattanhenge, also called the Manhattan Solstice, is an event during which the setting sun or the rising sun is aligned with the east–west streets of the main street grid of Manhattan, New York City. The astrophysicist Neil deGrasse Tyson claims to have coined the term, by analogy with Stonehenge. The sunsets and sunrises each align twice a year, on dates evenly spaced around the summer solstice and winter solstice. The sunset alignments occur around May 28 and July 13. The sunrise alignments occur around December 5 and January 8. Manhattan has a phenomenon of this kind due to its extensive urban canyons and its rectilinear street grid that is rotated 29° clockwise from true east–west. Many streets align with the view of Manhattanhenge, including 14th, 23rd, 34th, 42nd, and 57th Streets. Explanation and details The term Manhattanhenge is a reference to Stonehenge, a prehistoric monument located in Wiltshire, England, which was constructed so that the rising sun, seen from the center of the monument at the time of the summer solstice, aligns with the outer "Heel Stone". The phenomenon (but not the term "Manhattanhenge") was described by Neil deGrasse Tyson, an astrophysicist at the American Museum of Natural History and a native New Yorker, in 1997 in the magazine Natural History. In a later interview, Tyson stated that he coined the term, and that it was inspired by a childhood visit to Stonehenge on an expedition headed by Gerald Hawkins, an astronomer who was the first to propose Stonehenge's purpose as an ancient astronomical observatory used to predict movements of sun and stars, as outlined in his 1965 book Stonehenge Decoded. In accordance with the Commissioners' Plan of 1811, the street grid for most of Manhattan is rotated 29° clockwise from true east–west. Thus, when the azimuth for sunset is 299° (i.e., 29° north of due west), the sunset aligns with the streets on that grid. This rectilinear grid design runs from north of Houston Street in Lower Manhattan to south of 155th Street in Upper Manhattan. A more impressive visual spectacle, and the one commonly referred to as Manhattanhenge, occurs a couple of days after the first such date of the year, and a couple of days before the second date, when a pedestrian looking down the center line of the street westward toward New Jersey can see the full solar disk slightly above the horizon and in between the profiles of the buildings. The dates shift because the sunset time is taken as the moment when the last of the sun just disappears below the horizon. The precise dates of Manhattanhenge depend on the date of the summer solstice, which varies from year to year, but remains close to June 21. In 2014, the "full sun" Manhattanhenge occurred on May 30 at 8:18 p.m., and on July 11 at 8:24 p.m. The event has attracted increasing attention in recent years. The dates on which sunrise aligns with the streets on the Manhattan grid are evenly spaced around the winter solstice and correspond approximately to December 5 and January 8. Occurrences In the following table, "full sun" refers to occurrences of the full solar disk just above the horizon, while "half sun" refers to occurrences of the solar disk partially hidden below the horizon. 
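The alignment dates can be roughly cross-checked from the grid rotation alone. The sketch below is illustrative: it uses the no-refraction sunset relation cos(azimuth) = sin(declination)/cos(latitude) and an approximate sinusoidal declination curve, not the precise ephemeris calculations behind the published dates:

```python
import math

def alignment_declination(latitude_deg, sunset_azimuth_deg):
    """Solar declination for which the setting sun's azimuth (clockwise
    from north) equals the street-grid azimuth, ignoring refraction."""
    sin_dec = (math.cos(math.radians(sunset_azimuth_deg)) *
               math.cos(math.radians(latitude_deg)))
    return math.degrees(math.asin(sin_dec))

def days_with_declination(dec_deg):
    """Approximate days of the year when the Sun reaches that declination,
    using declination ~ 23.44 * sin(2*pi*(day - 80)/365)."""
    x = math.asin(dec_deg / 23.44)
    return (80 + 365 * x / (2 * math.pi),
            80 + 365 * (math.pi - x) / (2 * math.pi))

# Manhattan: latitude ~40.78 N; grid-aligned sunset azimuth is 299 degrees
dec = alignment_declination(40.78, 299.0)
print(round(dec, 1))                                   # ~21.5 degrees
print([round(d) for d in days_with_declination(dec)])  # ~day 148 (late May), ~day 195 (mid-July)
```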
Related phenomena in other cities The same phenomenon happens in other cities with a uniform street grid and an unobstructed view of the horizon. If the streets on the grid were rigorously north-south and east–west, then both sunrise and sunset would be aligned on the days of the vernal and autumnal equinoxes (which occur around March 20 and September 23 respectively). In Baltimore, for instance, sunrise aligns on March 25 and September 18 and sunset on March 12 and September 29. In Chicago, where the street grid aligns with the cardinal directions, the setting sun lines up with the street canyons near the spring and autumn equinoxes, March 20 and September 25, a phenomenon dubbed Chicagohenge. In Toronto, the setting sun lines up with the east–west streets on February 16 and October 25, a phenomenon now known locally as Torontohenge. In Montreal, there is a Montrealhenge each year around June 12. When the architects designing the city centre of Milton Keynes, in the United Kingdom, discovered that its main street almost framed the rising sun on Midsummer Day and the setting sun on Midwinter Day, they consulted Greenwich Observatory to obtain the exact angle required at their latitude, and persuaded their engineers to shift the grid of roads a few degrees. In Cambridge, Massachusetts, MIThenge occurs about January 29 and November 11, when the setting sun may be seen across the length of the "Infinite Corridor" at the Massachusetts Institute of Technology. In Strasbourg, the Strasbourghenge occurs in October, when the rising sun seen from the A351 motorway lines up with the spire of the cathedral. In San Francisco, the sunrise lines up and falls perfectly above the San Francisco–Oakland Bay Bridge when viewed down California Street from Gough Street, twice a year (spring and fall). This has been called "California Henge" at times. Also in San Francisco, there is a “crack of light” between two very close buildings at 1698 Sanchez Street on the summer solstice every year. Variously over the years there has been a white or yellow line painted on the sidewalk to mark the place where the light shines through the crack on the solstice. In San Diego, the sunset can be seen underneath the Ellen Browning Scripps Memorial Pier at Scripps Institution of Oceanography twice a year to form Scrippshenge. See also Stonehenge replicas and derivatives References External links Media Flickr photos tagged with Manhattanhenge Video interpretation of Manhattanhenge Video on Science Friday website Manhattanhenge, NOVA scienceNOW, first broadcast September 14, 2006 Discussion Hayden Planetarium discussion Images and maps Manhattanhenge images on Yahoo! news July 12, 2011 Interactive map showing Manhattanhenge visibility by time of year Astronomical events of the Solar System Culture of Manhattan 2000s neologisms May July Geography of Manhattan Neil deGrasse Tyson Solar alignment Solar phenomena Summer solstice Winter solstice
Manhattanhenge
Physics,Astronomy
1,303
45,668,279
https://en.wikipedia.org/wiki/Kepler-432b
Kepler-432b (also known by its Kepler Object of Interest designation KOI-1299.01) is a hot super-Jupiter (or "warm" super-Jupiter) exoplanet orbiting the giant star Kepler-432 A, the innermost of two such planets discovered by NASA's Kepler spacecraft. It is located about 2,830 light-years (870 parsecs, or about 2.7×10¹⁶ km) from Earth in the constellation Cygnus. The exoplanet was found using the transit method, in which the dimming effect that a planet causes as it crosses in front of its star is measured. Characteristics Mass, radius and temperature Kepler-432b is a hot super-Jupiter, an exoplanet with a radius and mass larger than those of the planet Jupiter and an extremely high temperature. It has a mass of 5.41 times that of Jupiter and a radius of about 1.145 times Jupiter's, giving it a relatively high density for such a planet, at 4.46 g/cm³. Host star The planet orbits a K-type giant star named Kepler-432 A, which has exhausted the hydrogen in its core and begun expanding into a red giant. The star has a mass of 1.32 times that of the Sun and a radius of 4.06 times the Sun's. It has a surface temperature of 4995 K and is 4.2 billion years old. In comparison, the Sun is about 4.6 billion years old and has a surface temperature of 5778 K. The star's apparent magnitude, or how bright it appears from Earth's perspective, is 13; it is too dim to be seen with the naked eye. Orbit Kepler-432b orbits its host star, which has about 920% of the Sun's luminosity (9.2 solar luminosities), about every 52 days at a distance of 0.30 AU (close to the orbital distance of Mercury from the Sun, which is 0.38 AU). It has an eccentric orbit, with an eccentricity of 0.5134. Stellar interactions and remaining lifetime Observations of Kepler-432b reveal that its host star is gradually causing the planet's orbit to decay via tidal interactions. As Kepler-432 A ascends the red giant branch (RGB), it will continue to expand past the orbit of Kepler-432b, likely engulfing it completely. Drag between the stellar photosphere and the gas giant would cause its orbit to spiral inward until ablation and vaporization destroy the planet. In this way, the system helps in studying how similar interactions will eventually cause the Earth to be engulfed by the Sun as a red giant, some 7 billion years from now. Discovery In 2009, NASA's Kepler spacecraft was completing observations of stars with its photometer, the instrument it uses to detect transit events, in which a planet crosses in front of and dims its host star for a brief and roughly regular period of time. During these observations, Kepler observed stars in the Kepler Input Catalog, including Kepler-432; the preliminary light curves were sent to the Kepler science team for analysis, and obvious planetary companions were chosen from the bunch for follow-up at observatories. Observations of the potential exoplanet candidates took place between 13 May 2009 and 17 March 2012. After observing the respective transits, which for Kepler-432b occurred every 52 days, it was eventually concluded that a planetary companion was responsible. The discovery of the oddball planet was announced on 24 January 2015. See also Kepler-1520 b – similar planet about to be engulfed by its host star. References External links NASA – Kepler Mission. NASA – Kepler Discoveries – Summary Table. NASA – Kepler-432b at The NASA Exoplanet Archive. NASA – Kepler-432b at The Exoplanet Data Explorer. NASA – Kepler-432b at The Extrasolar Planets Encyclopaedia.
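As a rough cross-check of the figures quoted above, the short sketch below (an illustration, not from the source) recomputes the planet's bulk density from its quoted mass and radius and estimates the transit depth Kepler would have measured. The physical constants are standard reference values, and Jupiter's equatorial radius is assumed as the reference radius.

```python
import math

# Standard physical constants (reference values, not from the source).
M_JUP = 1.898e27   # Jupiter mass, kg
R_JUP = 7.1492e7   # Jupiter equatorial radius, m
R_SUN = 6.957e8    # solar radius, m

m_planet = 5.41 * M_JUP    # quoted mass of Kepler-432b
r_planet = 1.145 * R_JUP   # quoted radius of Kepler-432b
r_star = 4.06 * R_SUN      # quoted radius of the giant host star

# Bulk density from mass over the volume of a sphere.
volume = 4.0 / 3.0 * math.pi * r_planet ** 3
density = m_planet / volume  # kg/m^3
print(f"bulk density ~ {density / 1000:.2f} g/cm^3")  # ~4.5, matching 4.46

# Transit depth: the fractional dimming equals the ratio of disk areas.
depth = (r_planet / r_star) ** 2
print(f"transit depth ~ {depth * 100:.3f}% of the star's light")  # ~0.084%
```

The roughly 0.08% dip is small because the host is a swollen giant, but it is still well within the photometric precision that allowed the planet to be flagged from the light curve.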
Exoplanets discovered by the Kepler space telescope Giant planets Exoplanets discovered in 2015 Transiting exoplanets Cygnus (constellation)
Kepler-432b
Astronomy
864
44,979,777
https://en.wikipedia.org/wiki/East%20Marmara%20region
The East Marmara Region (Turkish: Doğu Marmara Bölgesi) (TR4) is a statistical region in Turkey. Subregions and provinces Bursa Subregion (TR41) Bursa Province (TR411) Eskişehir Province (TR412) Bilecik Province (TR413) Kocaeli Subregion (TR42) Kocaeli Province (TR421) Sakarya Province (TR422) Düzce Province (TR423) Bolu Province (TR424) Yalova Province (TR425) Age groups Internal immigration State register location of East Marmara residents Marital status of 15+ population by gender Education status of 15+ population by gender See also NUTS of Turkey References External links TURKSTAT Sources ESPON Database Statistical regions of Turkey
East Marmara region
Mathematics
167
45,377,269
https://en.wikipedia.org/wiki/Camalexin
Camalexin (3-thiazol-2-yl-indole) is a simple indole alkaloid found in the plant Arabidopsis thaliana and other crucifers. The secondary metabolite functions as a phytoalexin to deter bacterial and fungal pathogens. Structure The base structure of camalexin consists of an indole ring derived from tryptophan. The ethanamine moiety attached to the 3-position of the indole ring is subsequently rearranged into a thiazole ring. Biosynthesis While the biosynthesis of camalexin in planta has not been fully elucidated, most of the enzymes involved in the pathway are known and are organized in a metabolon complex. The pathway starts with the precursor tryptophan, which is oxidized to indole-3-acetaldoxime by two cytochrome P450 enzymes. The indole-3-acetaldoxime is then converted to indole-3-acetonitrile by another cytochrome P450, CYP71A13. Conjugation with glutathione, followed by the action of a still-unknown enzyme, is needed to form dihydrocamalexic acid. A final decarboxylation step by the cytochrome P450 CYP71B15, also called PHYTOALEXIN DEFICIENT 3 (PAD3), results in the final product, camalexin. Biological activity Camalexin is cytotoxic against aggressive prostate cancer cell lines in vitro. References Indole alkaloids 2-Thiazolyl compounds
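For readers who want to work with the structure described above computationally, here is a minimal sketch using RDKit (an assumption; the source does not mention any software). The SMILES string encodes the 3-thiazol-2-yl-indole skeleton: an indole substituted at the 3-position with a thiazole attached through its 2-carbon.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

# 3-thiazol-2-yl-indole: indole bearing a thiazol-2-yl group at C3.
camalexin = Chem.MolFromSmiles("c1csc(-c2c[nH]c3ccccc23)n1")

print(Descriptors.MolWt(camalexin))  # ~200.3 g/mol (C11H8N2S)
print(Chem.MolToSmiles(camalexin))   # canonical SMILES
```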
Camalexin
Chemistry
336
54,990,692
https://en.wikipedia.org/wiki/Lesinurad/allopurinol
Lesinurad/allopurinol (trade name Duzallo) is a fixed-dose combination drug for the treatment of gout. It contains 200 mg of lesinurad and 300 mg of allopurinol. In August 2017, the US Food and Drug Administration approved it for the treatment of hyperuricemia associated with gout in patients for whom target serum uric acid levels have not been achieved with allopurinol alone. It was approved for medical use in the European Union in August 2018. In February 2019, it was discontinued by its manufacturer for business reasons and is no longer available. References Antigout agents Combination drugs Drugs developed by AstraZeneca Withdrawn drugs
Lesinurad/allopurinol
Chemistry
147
63,050
https://en.wikipedia.org/wiki/Eosinophilia%E2%80%93myalgia%20syndrome
Eosinophilia–myalgia syndrome is a rare, sometimes fatal neurological condition linked to the ingestion of the dietary supplement L-tryptophan. The risk of developing EMS increases with larger doses of tryptophan and increasing age. Some research suggests that certain genetic polymorphisms may be related to the development of EMS. The presence of eosinophilia is a core feature of EMS, along with unusually severe myalgia (muscle pain). Signs and symptoms The initial, acute phase of EMS, which lasts for three to six months, presents as trouble with breathing and muscle problems, including soreness and spasms that may be intense. Muscle weakness is not a feature of this phase, but some people experience muscle stiffness. Additional features can include cough, fever, fatigue, joint pain, edema, and numbness or tingling, usually in the limbs, hands and feet. The chronic phase follows the acute phase. Eosinophilic fasciitis may develop, primarily in the limbs. Central nervous system signs may appear, including numbness, increased sensation, muscle weakness, and sometimes cardiac or digestive dysfunction. Fatigue is present to some degree, while the muscle pain (which may be extremely intense) and dyspnea continue in this phase. Causes Epidemiological studies suggested that EMS was linked to specific batches of L-tryptophan supplied by a single large Japanese manufacturer, Showa Denko. It eventually became clear that recent batches of Showa Denko's L-tryptophan were contaminated by trace impurities, which were subsequently thought to be responsible for the 1989 EMS outbreak. The L-tryptophan was produced by a bacterium grown in open vats in a Showa Denko fertilizer factory. While a total of 63 trace contaminants were eventually identified, only six of them could be associated with EMS. The compound EBT (1,1'-ethylidene-bis-L-tryptophan, also known as "Peak E") was the only contaminant identifiable by initial analysis, but further analysis revealed PAA (3-(phenylamino)-L-alanine, also known as "UV-5") and peak 200 (2-[3-indolylmethyl]-L-tryptophan). Two of the remaining uncharacterized peaks associated with EMS were later determined to be 3a-hydroxy-1,2,3,3a,8,8a-hexahydropyrrolo-[2,3-b]-indole-2-carboxylic acid (peak C) and 2-(2-hydroxy indoline)-tryptophan (peak FF). These were characterized using accurate-mass LC–MS, LC–MS/MS and multistage mass spectrometry (MSn). The last of the six contaminants (peak AAA/"UV-28"), "the contaminant most significantly associated with EMS", has been characterized as two related chain isomers: peak AAA1 ((S)-2-amino-3-(2-((S,E)-7-methylnon-1-en-1-yl)-1H-indol-3-yl)propanoic acid, a condensation product between L-tryptophan and 7-methylnonanoic acid) and peak AAA2 ((S)-2-amino-3-(2-((E)-dec-1-en-1-yl)-1H-indol-3-yl)propanoic acid, a condensate between L-tryptophan and decanoic acid). No consistent relationship has ever been firmly established between any specific trace impurity or impurities identified in these batches and the effects of EMS. While EBT in particular has been frequently implicated as the culprit, there is no statistically significant association between EBT levels and EMS. Of the 63 trace contaminants, only the two AAA compounds displayed a statistically significant association with cases of EMS (with a p-value of 0.0014). As most research has focused on attempts to associate individual contaminants with EMS, there is a comparative lack of detailed research on other possible causal or contributing factors.
Tryptophan itself has been implicated as a potentially major contributory factor in EMS. While critics of this theory have argued that this hypothesis fails to explain the near-absence of reports of EMS prior to and following the EMS outbreak, this fails to take into account the sudden rapid increase in tryptophan's usage immediately prior to the 1989 outbreak, and ignores the strong influence of the EMS outbreak's legacy and the extended FDA ban on later usage of tryptophan. Crucially, this also ignores the existence of a number of cases of EMS that developed both prior to and after the primary epidemic, including at least one case where the tryptophan was tested and found to lack the contaminants found in the contaminated lots of Showa Denko's tryptophan, as well as cases in which other supplements induced EMS, and even a case of EMS induced by excessive dietary L-tryptophan intake via overconsumption of cashew nuts. A major Canadian analysis located a number of patients who met the CDC criteria for EMS but had never been exposed to tryptophan, which "brings causal interpretations of earlier studies into question". Other studies have highlighted numerous major flaws in many of the epidemiological studies on the association of tryptophan with EMS, which cast serious doubt on the validity of their results. As the FDA concluded, "other brands of L-tryptophan, or L-tryptophan itself, regardless of the levels or presence of impurities, could not be eliminated as causal or contributing to the development of EMS". Even animal studies have suggested that tryptophan itself, "when ingested by susceptible individuals either alone or in combination with some other component in the product, results in the pathological features in EMS". At the time of the outbreak, Showa Denko had recently made alterations to its manufacturing procedures that were thought to be linked to the possible origin of the contaminants detected in the affected lots of tryptophan. A key change was the reduction of the amount of activated charcoal used to purify each batch from >20 kg to 10 kg. A portion of the contaminated batches had also bypassed another filtration step that used reverse osmosis to remove certain impurities. Additionally, the bacterial culture used to synthesize tryptophan was a strain of Bacillus amyloliquefaciens that had been genetically engineered to increase tryptophan production. Although the prior four generations of the genetically engineered strain had been used without incident, the fifth generation used for the contaminated batches was thought to be a possible source of the impurities that were detected. This has been used to argue that the genetic engineering itself was the primary cause of the contamination, a stance that was heavily criticized for overlooking the other known non-GMO causes of contamination, as well as for its use by anti-GMO activists as a way to threaten the development of biotechnology with false information. The reduction in the amount of activated carbon used and the introduction of the fifth-generation Bacillus amyloliquefaciens strain were both associated with the development of EMS, but due to the high overlap of these changes, the precise independent contribution of each change could not be determined (although the bypass of the reverse-osmosis filtration for certain lots was determined to be not significantly associated with the contaminated lots of tryptophan).
While Showa Denko claimed a purity of 99.6%, it was noted that "the quantities of the known EMS associated contaminants, EBT and PAA, were remarkably small, of the order of 0.01%, and could easily escape detection". Regulatory response The FDA loosened its restrictions on sales and marketing of tryptophan in February 2001, but continued to limit the importation of tryptophan not intended for an exempted use until 2005. Treatment Treatment is withdrawal of products containing L-tryptophan and the administration of glucocorticoids. Most patients recover fully, remain stable, or show slow recovery, but the disease is fatal in up to 5% of patients. History The first case of eosinophilia–myalgia syndrome was reported to the Centers for Disease Control and Prevention (CDC) in November 1989, although some cases had occurred as early as 2–3 years before this. In total, more than 1,500 cases of EMS were reported to the CDC, as well as at least 37 EMS-associated deaths. After preliminary investigation revealed that the outbreak was linked to intake of tryptophan, the U.S. Food and Drug Administration (FDA) recalled tryptophan supplements in 1989 and banned most public sales in 1990, with other countries following suit. This FDA restriction was loosened in 2001, and fully lifted in 2005. Since the initial ban on L-tryptophan, a normal metabolite of the compound in mammals, 5-hydroxytryptophan (5-HTP), has become a popular replacement dietary supplement. See also Toxic oil syndrome References Systemic connective tissue disorders Connective tissue diseases Syndromes Drug safety Adulteration
Eosinophilia–myalgia syndrome
Chemistry
1,987
1,493,799
https://en.wikipedia.org/wiki/Biocomplexity%20Institute%20of%20Virginia%20Tech
The Biocomplexity Institute of Virginia Tech (formerly the Virginia Bioinformatics Institute) was a research institute specializing in bioinformatics, computational biology, and systems biology. The institute had more than 250 personnel, including over 50 tenured and research faculty. Research at the institute involved collaboration in diverse disciplines such as mathematics, computer science, biology, plant pathology, biochemistry, systems biology, statistics, economics, synthetic biology and medicine. The institute developed -omic and bioinformatic tools and databases that could be applied to the study of human, animal and plant diseases as well as the discovery of new vaccine, drug and diagnostic targets. The institute's programs were supported by a variety of government and private agencies including the National Institutes of Health, National Science Foundation, U.S. Department of Defense, U.S. Department of Agriculture, and U.S. Department of Energy. Since its inception, the Biocomplexity Institute received over $179 million in extramural support and built a research portfolio totaling $68 million in grants and contracts. The institute's executive director was Chris Barrett. In 2019, the institute was absorbed into the Fralin Life Sciences Institute at Virginia Tech after many faculty members, including Barrett, were hired away to form the Biocomplexity Institute and Initiative of the University of Virginia. History The institute opened in July 2000 in space in the Virginia Tech Corporate Research Center; it was hosted briefly in Building XI, then Building X, until it moved to Building XV in 2002, which was designed to host the institute. In January 2005, it moved into a new building on Virginia Tech's main campus, called "Bioinformatics Facility Phase I and II", but retained its existing space in the CRC. In 2011, the institute moved its National Capital Region office into the Virginia Tech building in Arlington, Virginia. In 2015, the Virginia Bioinformatics Institute was quietly renamed and rebranded as the "Biocomplexity Institute". In November 2016, the home of the institute on Virginia Tech's main campus was dedicated as Steger Hall, after former Virginia Tech president Charles Steger. Major research divisions The Advanced Computing and Informatics Laboratories is dedicated to "Policy Informatics" and includes the Network Dynamics and Simulation Science Laboratory. It pursues research and development in interaction-based modeling, simulation, and associated analysis, experimental design, and decision support tools for understanding large biological, information, social, and technological systems. It includes the Comprehensive National Incident Management System project, which develops a system to provide the United States military with detailed operational information about the populations being affected by a possible crisis. It also includes the project "Modeling Disease Dynamics on Large, Detailed, Co-Evolving Networks," which supports work to develop high-performance computer models for the study of very large networks. The Cyberinfrastructure Division develops methods, infrastructure, and resources primarily for infectious disease research. The "Pathosystems Resource Integration Center – Bioinformatics Resource Center for Bacterial Diseases" aims to integrate information on pathogens and to provide resources and tools to analyze genomic, proteomic and other data arising from infectious disease research.
It is part of the Middle-Atlantic Regional Center of Excellence for Biodefense and Emerging Infectious Diseases Research, which focuses on research to enable rapid defense against bioterror and emerging infectious diseases. Specific diseases and disease-causing agents under investigation include anthrax, West Nile virus, smallpox, and cryptosporidiosis. The division collaborates with Georgetown University and Social and Scientific Systems on the Administrative Center of the National Institute of Allergy and Infectious Diseases-funded Proteomics Research Resource Center (PRC) for Biodefense Proteomics Research project. The team helps design, develop, and maintain a publicly accessible Web site containing data and technology protocols generated by each PRC, as well as a catalog that lists reagents and products available for public distribution. The Biological Systems Division develops computational methods for studying biochemical networks using experimental data. It developed COPASI (Complex Pathway Simulator), an open-source software package that allows users with limited experience in mathematics to construct models and simulations of biochemical networks. It also developed GenoCAD, a web-based computer-assisted design environment for synthetic biology. The Medical Informatics & Systems Division focuses on human genetics and disease, especially cancer and neurological disorders. It collaborates with Carilion Clinics, Virginia Tech Carilion School of Medicine and Research Institute, and other universities and government agencies. Major research laboratories The Network Dynamics and Simulation Science Laboratory at ACDIL pursues programs for interaction-based modeling, simulation, and associated analysis, experimental design, and decision support tools for understanding large and complex systems. Extremely detailed, high-resolution, multi-scale computer simulations allow formal and experimental investigation of these systems. The Social and Decision Analytics Laboratory focuses on the use and development of analytical technology in the areas of public health policy, national and international security policy, and public and social policy. The Nutritional Immunology and Molecular Medicine Laboratory was founded in 2002 to investigate fundamental mechanisms of gut enteric immunity and to identify biomarkers and therapeutic targets for inflammatory and immune-mediated diseases. The laboratory has discovered the mechanism of action underlying the anti-inflammatory actions of conjugated linoleic acid in inflammatory bowel disease, and the insulin-sensitizing and anti-inflammatory effects of abscisic acid. Its Center for Modeling Immunity to Enteric Pathogens program applies high-performance computing techniques to model and simulate human immune systems and help immunologists conduct quick in silico experiments to narrow down experimental designs, validate their hypotheses and save significant time and laboratory cost. This laboratory also collaborates with the Center for Global Health at the University of Virginia, the Department of Gastroenterology at the University of North Carolina at Chapel Hill and other medical schools, and leads several human clinical trials on safer therapies for inflammatory and immune-mediated diseases. It has recently established a partnership with the Division of Gastroenterology at the Carilion Clinic to launch a joint translational research program in inflammatory bowel diseases.
Core facilities and services The institute occupies space on the Virginia Tech campus, including laboratory space designed for flexibility and to house computing and laboratory facilities, as well as space in Alexandria, Virginia, as part of the Virginia Tech National Capital Region. The institute's infrastructure includes core facilities that integrate high-throughput data generation and data analysis capabilities. The Core Computational Facility has three data centers, with over 250 servers totaling over 10.5 terabytes of random access memory, distributed over more than 2,650 processor cores. It has a storage area network with over 1 petabyte of disk and 3 petabytes of tape, expandable to 50 petabytes. The Genomics Research Laboratory has laboratory space located at the institute's main building. It possesses state-of-the-art Roche GS-FLX, Illumina and Ion Torrent genome sequencers, and it includes the Affymetrix National Custom Array Center for custom microarray design, sample processing and analytical services. The Data Analysis Core offers turnkey service to analyze -omics and other data, from raw data in to manuscript-ready figures and text out. It also provides next-gen sequence assembly and annotation; microarray design, analysis and interpretation; mass spectrometry data analysis; data quality control; hypothesis generation; experimental design; and statistical data analysis. Education and outreach K–12 programs include "Kids' Tech University" (an educational research program for sparking interest in science, technology, engineering, and mathematics disciplines), the Climate Change Student Summit for teachers and students, and high school summer internships. Undergraduate programs include Research Experiences for Undergraduates in microbiology and in systems biology, and a Summer Research Institute for foreign and local students. The institute is the home of the Genomics, Bioinformatics, Computational Biology Graduate Program at Virginia Tech, and accommodates students in various Virginia Tech departments. References External links 2000 establishments in Virginia 2019 disestablishments in Virginia Virginia Tech Bioinformatics organizations Genetics or genomics research institutions Research institutes in Virginia Research institutes established in 2000 Research institutes disestablished in 2019
Biocomplexity Institute of Virginia Tech
Biology
1,683
7,590,849
https://en.wikipedia.org/wiki/List%20of%20Canadian%20plants%20by%20family%20R
Main page: List of Canadian plants by family Families: A | B | C | D | E | F | G | H | I J K | L | M | N | O | P Q | R | S | T | U V W | X Y Z Radulaceae Radula auriculata Radula complanata Radula obconica Radula obtusiloba Radula prolifera Radula tenax Ranunculaceae Aconitum columbianum — Columbia monkshood Aconitum delphiniifolium — larkspur-leaf monkshood Aconitum x bicolor Actaea elata — tall bugbane Actaea pachypoda — white baneberry Actaea racemosa — black bugbane Actaea rubra — red baneberry Actaea x ludovici Anemone canadensis — Canada anemone Anemone cylindrica — long-fruited anemone Anemone drummondii — Drummond's anemone Anemone lithophila — Little Belt Mountain thimbleweed Anemone lyallii — little mountain thimbleweed Anemone multiceps — Porcupine River thimbleweed Anemone multifida — Hudson Bay anemone Anemone narcissiflora — narcissus thimbleweed Anemone parviflora — smallflower anemone Anemone piperi — Piper's anemone Anemone quinquefolia — wood anemone Anemone richardsonii — yellow anemone Anemone virginiana — Virginia anemone Aquilegia brevistyla — smallflower columbine Aquilegia canadensis — wild columbine Aquilegia flavescens — yellow columbine Aquilegia formosa — crimson columbine Aquilegia jonesii — Jones' columbine Caltha leptosepala — slender-sepal marsh-marigold Caltha natans — floating marsh-marigold Caltha palustris — marsh-marigold Clematis columbiana — Columbian virgin's-bower Clematis hirsutissima — clustered leather-flower Clematis ligusticifolia — western virgin's-bower Clematis occidentalis — purple clematis Clematis virginiana — Virginia virgin's-bower Coptis aspleniifolia — spleenwort-leaf goldthread Coptis trifolia — goldthread Delphinium bicolor — flathead larkspur Delphinium brachycentrum — northern larkspur Delphinium burkei — meadow larkspur Delphinium carolinianum — Carolina larkspur Delphinium distichum — two-spike larkspur Delphinium glareosum — rockslide larkspur Delphinium glaucum — pale larkspur Delphinium menziesii — Puget Sound larkspur Delphinium nuttallianum — Nuttall's larkspur Delphinium sutherlandii — Sutherland's larkspur Delphinium x occidentale — duncecap larkspur Enemion biternatum — false rue-anemone Enemion savilei — Queen Charlotte Island false rue-anemone Anemone hepatica — roundlobe hepatica Hydrastis canadensis — goldenseal Kumlienia cooleyae — Cooley's buttercup Myosurus apetalus — bristly mousetail Myosurus minimus — eastern mousetail Pulsatilla occidentalis — western pasqueflower Pulsatilla nuttalliana — American pasqueflower Ranunculus abortivus — kidneyleaf buttercup Ranunculus acris — tall buttercup Ranunculus alismifolius — water-plantain buttercup Ranunculus allenii — Allen's buttercup Ranunculus ambigens — water-plantain spearwort Ranunculus aquatilis — white water buttercup Ranunculus californicus — California buttercup Ranunculus cardiophyllus — heartleaf buttercup Ranunculus cymbalaria — seaside crowfoot Ranunculus eschscholtzii — Eschscholtz' buttercup Ranunculus eximius — tundra buttercup Ranunculus fascicularis — early buttercup Ranunculus ficaria — figroot buttercup Ranunculus flabellaris — yellow water-crowfoot Ranunculus flammula — lesser spearwort Ranunculus glaberrimus — sagebrush buttercup Ranunculus gmelinii — small yellow water-crowfoot Ranunculus hexasepalus — Queen Charlotte Island buttercup Ranunculus hispidus — hispid buttercup Ranunculus hyperboreus — arctic buttercup Ranunculus inamoenus — graceful buttercup Ranunculus karelinii — Karelin's arctic buttercup Ranunculus lapponicus — Lapland buttercup 
Ranunculus longirostris — eastern white water-crowfoot Ranunculus macounii — Macoun's buttercup Ranunculus nivalis — snowy buttercup Ranunculus occidentalis — western buttercup Ranunculus orthorhynchus — bird's-food buttercup Ranunculus pallasii — Pallas' buttercup Ranunculus pedatifidus — northern buttercup Ranunculus pensylvanicus — bristly crowfoot Ranunculus pygmaeus — dwarf buttercup Ranunculus recurvatus — hooked crowfoot Ranunculus rhomboideus — prairie buttercup Ranunculus sabinei — Sardinian buttercup Ranunculus sceleratus — cursed crowfoot Ranunculus suksdorfii — Suksdorf's buttercup Ranunculus sulphureus — sulphur buttercup Ranunculus trichophyllus — northeastern white water-crowfoot Ranunculus turneri — Turner's buttercup Ranunculus uncinatus — woodland buttercup Ranunculus verecundus — timberline buttercup Ranunculus x spitzbergensis Thalictrum alpinum — alpine meadowrue Thalictrum dasycarpum — purple meadowrue Thalictrum dioicum — early meadowrue Thalictrum occidentale — western meadowrue Thalictrum pubescens — tall meadowrue Thalictrum revolutum — waxleaf meadowrue Thalictrum sparsiflorum — few-flower meadowrue Thalictrum thalictroides — windflower Thalictrum venulosum — veined meadowrue Trautvetteria caroliniensis — Carolina tassel-rue Trollius laxus — spreading globeflower Rhamnaceae Ceanothus americanus — New Jersey tea Ceanothus herbaceus — prairie redroot Ceanothus sanguineus — Oregon-tea Ceanothus velutinus — tobacco ceanothus Frangula purshiana — Cascara false buckthorn Rhamnus alnifolia — alderleaf buckthorn Rhytidiaceae Rhytidium rugosum — golden glade-moss Ricciaceae Riccia beyrichiana Riccia bifurca Riccia cavernosa Riccia fluitans Riccia frostii Riccia sorocarpa Riccia sullivantii Ricciocarpos natans Rosaceae Agrimonia gryposepala — tall hairy groovebur Agrimonia parviflora — swamp agrimony Agrimonia pubescens — soft groovebur Agrimonia striata — woodland agrimony Alchemilla alpina — mountain lady's-mantle Alchemilla filicaulis — thinstem lady's-mantle Alchemilla glomerulans — clustered lady's-mantle Amelanchier alnifolia — saskatoonberry Amelanchier arborea — downy serviceberry Amelanchier bartramiana — Bartram's shadbush Amelanchier canadensis — oblongleaf serviceberry Amelanchier fernaldii — Fernald's serviceberry Amelanchier humilis — running serviceberry Amelanchier interior — shadbush Amelanchier laevis — Allegheny serviceberry Amelanchier sanguinea — roundleaf shadbush Amelanchier stolonifera — running serviceberry Amelanchier × intermedia Amelanchier × neglecta Amelanchier × quinti-martii Argentina anserina — silverweed Argentina egedei — Eged's cinquefoil Aruncus dioicus — common goat's-beard Chamaerhodos erecta — rose chamærhodos Comarum palustre — marsh cinquefoil Crataegus beata — Dunbar's hawthorn Crataegus brainerdii — Brainerd's hawthorn Crataegus calpodendron — pear hawthorn Crataegus chrysocarpa — fineberry hawthorn Crataegus compacta — clustered hawthorn Crataegus compta — adorned hawthorn Crataegus crus-galli — cockspur hawthorn Crataegus dilatata — broadleaf hawthorn Crataegus dissona — northern hawthorn Crataegus dodgei — Dodge's hawthorn Crataegus douglasii — Douglas' hawthorn Crataegus flabellata — fanleaf hawthorn Crataegus fluviatilis Crataegus fulleriana — Fuller's hawthorn Crataegus holmesiana — Holmes' hawthorn Crataegus intricata — Copenhagen hawthorn Crataegus iracunda — stolon-bearing hawthorn Crataegus irrasa — Blanchard's hawthorn Crataegus jonesae — Miss Jones' hawthorn Crataegus knieskerniana — Knieskern's hawthorn Crataegus 
lemingtonensis — Lemington hawthorn Crataegus lumaria — roundleaf hawthorn Crataegus macrosperma — bigstem hawthorn Crataegus margarettiae — Margarett's hawthorn Crataegus mollis — downy hawthorn Crataegus nitida — glossy hawthorn Crataegus nitidula — Ontario hawthorn Crataegus okennonii — O'Kennon's hawthorn Crataegus pedicellata — scarlet hawthorn Crataegus pennsylvanica — Pennsylvania hawthorn Crataegus perjucunda — pearthorn Crataegus persimilis — plumleaf hawthorn Crataegus phippsii — Phipps' hawthorn Crataegus pringlei — Pringle's hawthorn Crataegus prona — Illinois hawthorn Crataegus pruinosa — waxy-fruit hawthorn Crataegus punctata — dotted hawthorn Crataegus robinsonii — Robinson's hawthorn Crataegus scabrida — rough hawthorn Crataegus schuettei — Schuette's hawthorn Crataegus submollis — Québec hawthorn Crataegus suborbiculata — Caughuawaga hawthorn Crataegus succulenta — fleshy hawthorn Crataegus suksdorfii — Suksdorf's hawthorn Crataegus x anomala Crataegus x kingstonensis Dalibarda repens — robin-run-away Dasiphora fruticosa — shrubby cinquefoil Dryas drummondii — yellow mountain-avens Dryas integrifolia — entire-leaved mountain-avens Dryas octopetala — eight-petal mountain-avens Dryas x sundermannii Dryas x wyssiana Drymocallis fissa — bigflower cinquefoil Filipendula rubra — queen-of-the-prairie Fragaria chiloensis — Chilean strawberry Fragaria crinita — Pacific strawberry Fragaria vesca — woodland strawberry Fragaria virginiana — Virginia strawberry Fragaria x ananassa Geum aleppicum — yellow avens Geum calthifolium — caltha-leaf avens Geum canadense — white avens Geum glaciale — glacier avens Geum laciniatum — rough avens Geum macrophyllum — largeleaf avens Geum peckii — mountain avens Geum rivale — purple avens Geum rossii — Ross' avens Geum triflorum — prairie-smoke Geum vernum — spring avens Geum virginianum — pale avens Geum x aurantiacum Geum x macranthum Geum x pulchrum Holodiscus discolor — creambush oceanspray Luetkea pectinata — segmented lütkea Malus coronaria — sweet crabapple Malus fusca — Pacific crabapple Malus glaucescens — sweet crabapple Oemleria cerasiformis — osoberry Aronia floribunda — purple chokeberry Aronia melanocarpa — black chokeberry Aronia pyrifolia — red chokeberry Physocarpus capitatus — Pacific ninebark Physocarpus malvaceus — mallowleaf ninebark Physocarpus opulifolius — eastern ninebark Potentilla arguta — tall cinquefoil Potentilla biennis — biennial cinquefoil Potentilla biflora — two-flower cinquefoil Potentilla bipinnatifida — tansy cinquefoil Potentilla canadensis — Canada cinquefoil Potentilla concinna — red cinquefoil Potentilla diversifolia — mountain meadow cinquefoil Potentilla drummondii — Drummond's cinquefoil Potentilla effusa — branched cinquefoil Potentilla elegans — elegant cinquefoil Potentilla flabellifolia — fanleaf cinquefoil Potentilla flabelliformis Potentilla glandulosa — sticky cinquefoil Potentilla gracilis — fanleaf cinquefoil Potentilla hippiana — horse cinquefoil Potentilla hookeriana — Hooker's cinquefoil Potentilla macounii — Macoun's cinquefoil Potentilla multifida — divided cinquefoil Potentilla nana — arctic cinquefoil Potentilla neumanniana — spring cinquefoil Potentilla nivea — snow cinquefoil Potentilla norvegica — Norwegian cinquefoil Potentilla ovina — sheep cinquefoil Potentilla paradoxa — bushy cinquefoil Potentilla pectinisecta — combleaf cinquefoil Potentilla pensylvanica — Pennsylvania cinquefoil Potentilla plattensis — Platte River cinquefoil Potentilla pulchella — pretty cinquefoil Potentilla 
pulcherrima — soft cinquefoil Potentilla rivalis — brook cinquefoil Potentilla rubricaulis — Rocky Mountain cinquefoil Potentilla simplex — common cinquefoil Potentilla subjuga — Colorado cinquefoil Potentilla tabernaemontani — spotted cinquefoil Potentilla uniflora — one-flower cinquefoil Potentilla vahliana — Vahl's cinquefoil Potentilla villosa — northern cinquefoil Prunus americana — American plum Prunus emarginata — bitter cherry Prunus nigra — Canada plum Prunus pensylvanica — fire cherry Prunus pumila — sand cherry Prunus serotina — wild black cherry Prunus virginiana — choke cherry Purshia tridentata — antelope bitterbrush Rosa acicularis — prickly rose Rosa arkansana — prairie rose Rosa blanda — smooth rose Rosa carolina — Carolina rose Rosa gymnocarpa — wood rose Rosa nitida — shining rose Rosa nutkana — Nootka rose Rosa palustris — swamp rose Rosa pisocarpa — clustered rose Rosa setigera — prairie rose Rosa virginiana — Virginia rose Rosa woodsii — Woods' rose Rosa x dulcissima Rubus adenocaulis — glandstem dewberry Rubus adjacens — peaty dewberry Rubus alaskensis — Alaska blackberry Rubus allegheniensis — Allegheny blackberry Rubus alumnus — blackberry Rubus arcticus — nagoonberry Rubus arcuans — wand dewberry Rubus arenicola — sand-dwelling dewberry Rubus baileyanus — Bailey's dewberry Rubus bellobatus — Kittatinny blackberry Rubus biformispinus — pasture dewberry Rubus canadensis — smooth blackberry Rubus chamaemorus — cloudberry Rubus elegantulus — showy blackberry Rubus flagellaris — northern dewberry Rubus fraternalis — northeastern dewberry Rubus frondisentis — leafy blackberry Rubus frondosus — Yankee blackberry Rubus glandicaulis — glandstem blackberry Rubus heterophyllus — ecotone blackberry Rubus hispidus — bristly dewberry Rubus idaeus — American red raspberry Rubus jacens — spreading dewberry Rubus junceus — herbaceous blackberry Rubus kennedyanus — Kennedy's blackberry Rubus lasiococcus — hairy-fruit smooth dewberry Rubus leucodermis — white-stemmed raspberry Rubus mananensis — Grand Manan dewberry Rubus michiganensis — Michigan dewberry Rubus multiformis — variable blackberry Rubus navus — grand lake blackberry Rubus nivalis — snow dwarf bramble Rubus novocaesarius — Tuckahoe dewberry Rubus occidentalis — black raspberry Rubus odoratus — purple-flowering raspberry Rubus ortivus — Mt. Desert Island blackberry Rubus paganus — St. 
Lawrence dewberry Rubus particeps — Kingston dewberry Rubus parviflorus — thimbleberry Rubus pedatus — five-leaf dwarf bramble Rubus pensilvanicus — Pennsylvania blackberry Rubus pergratus — upland blackberry Rubus pervarius — Westminster dewberry Rubus plicatifolius — plaitleaf dewberry Rubus provincialis — groundberry Rubus pubescens — dwarf red raspberry Rubus pugnax — pugnacious blackberry Rubus recurvans — recurved blackberry Rubus recurvicaulis — arching dewberry Rubus regionalis — Wisconsin dewberry Rubus roribaccus — velvet-leaved dewberry Rubus russeus — Halifax blackberry Rubus segnis — Nova Scotia dewberry Rubus semisetosus — New England blackberry Rubus setosus — small bristleberry Rubus severus — harsh dewberry Rubus signatus — sphagnum dewberry Rubus spectabilis — salmonberry Rubus suppar — New Glasgow dewberry Rubus tardatus — wet-thicket dewberry Rubus trifrons — dewberry Rubus ursinus — California blackberry Rubus uvidus — Kalamazoo dewberry Rubus vermontanus — Green Mountain blackberry Rubus weatherbyi — Weatherby's dewberry Rubus wheeleri — Wheeler's blackberry Rubus x fraseri Rubus x paracaulis Sanguisorba annua — prairie burnet Sanguisorba canadensis — Canada burnet Sanguisorba menziesii — Menzies' burnet Sanguisorba occidentalis — annual burnet Sanguisorba officinalis — great burnet Sibbaldia procumbens — Arizona cinquefoil Sibbaldiopsis tridentata — three-toothed cinquefoil Sorbus americana — American mountain-ash Sorbus decora — northern mountain-ash Sorbus groenlandica — Greenland mountain-ash Sorbus scopulina — Greene's mountain-ash Sorbus sitchensis — Sitka mountain-ash Spiraea alba — narrowleaf white meadowsweet Spiraea betulifolia — white meadowsweet Spiraea douglasii — Douglas' spiræa Spiraea septentrionalis — northern meadowsweet Spiraea splendens — rose meadowsweet Spiraea stevenii — Steven's spiræa Spiraea tomentosa — hardhack spiræa Spiraea x pyramidata — pyramidal spiræa Waldsteinia fragarioides — barren strawberry x Sorbaronia arsenii x Sorbaronia jackii Rubiaceae Cephalanthus occidentalis — common buttonbush Galium aparine — catchweed bedstraw Galium asprellum — rough bedstraw Galium bifolium — low mountain bedstraw Galium boreale — northern bedstraw Galium brevipes — limestone swamp bedstraw Galium circaezans — licorice bedstraw Galium concinnum — shining bedstraw Galium kamtschaticum — boreal bedstraw Galium labradoricum — bog bedstraw Galium lanceolatum — Torrey's wild licorice Galium mexicanum — Mexican bedstraw Galium multiflorum — many-flower bedstraw Galium obtusum — bluntleaf bedstraw Galium palustre — marsh bedstraw Galium pilosum — hairy bedstraw Galium tinctorium — stiff marsh bedstraw Galium trifidum — small bedstraw Galium triflorum — sweetscent bedstraw Houstonia caerulea — Quaker-ladies Houstonia canadensis — Canada bluets Houstonia longifolia — longleaf bluets Mitchella repens — partridge-berry Ruppiaceae Ruppia cirrhosa — widgeon-grass Ruppia maritima — ditch-grass Rutaceae Ptelea trifoliata — common hoptree Zanthoxylum americanum — northern prickly-ash Canada,family,R
List of Canadian plants by family R
Biology
4,626
10,106,544
https://en.wikipedia.org/wiki/Cophenetic
In the clustering of biological information such as data from microarray experiments, the cophenetic similarity or cophenetic distance of two objects is a measure of how similar those two objects have to be in order to be grouped into the same cluster. The cophenetic distance between two objects is the height of the dendrogram at which the two branches that include the two objects merge into a single branch. Expressed without reference to a dendrogram, it is the distance between the two largest clusters that separately contain the two objects at the point when they are merged into a single cluster containing both. See also Cophenetic correlation References External links University of Ohio lecture Microarrays
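As a concrete illustration, the sketch below reads cophenetic distances off a hierarchical clustering using SciPy (an assumption; the source does not mention any software, and the four sample points are made up).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist, squareform

X = np.array([[0.0], [0.4], [3.0], [3.5]])  # four 1-D observations (made up)
Z = linkage(X, method="average")            # build the dendrogram

# cophenet(Z) returns, in condensed form, the merge height for every pair:
# the cophenetic distance is the height at which their branches first join.
print(squareform(cophenet(Z)))

# Comparing cophenetic distances with the original pairwise distances
# yields the cophenetic correlation coefficient.
c, _ = cophenet(Z, pdist(X))
print(c)
```

With these points, objects 0 and 1 merge at height 0.4 and objects 2 and 3 at 0.5, while every cross pair (e.g. objects 0 and 2) gets cophenetic distance 3.05, the height of the final merge, regardless of the pair's original distance.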
Cophenetic
Chemistry,Materials_science,Biology
137
1,460,525
https://en.wikipedia.org/wiki/Oncovirus
An oncovirus or oncogenic virus is a virus that can cause cancer. This term originated from studies of acutely transforming retroviruses in the 1950–60s, when the term oncornaviruses was used to denote their RNA virus origin. With the letters RNA removed, it now refers to any virus with a DNA or RNA genome causing cancer and is synonymous with tumor virus or cancer virus. The vast majority of human and animal viruses do not cause cancer, probably because of longstanding co-evolution between the virus and its host. Oncoviruses have been important not only in epidemiology, but also in investigations of cell cycle control mechanisms such as the retinoblastoma protein. The World Health Organization's International Agency for Research on Cancer estimated that in 2002, infection caused 17.8% of human cancers, with 11.9% caused by one of seven viruses. A 2020 study of 2,658 samples from 38 different types of cancer found that 16% were associated with a virus. These cancers might be easily prevented through vaccination (e.g., papillomavirus vaccines), diagnosed with simple blood tests, and treated with less-toxic antiviral compounds. Causality Generally, tumor viruses cause little or no disease after infection in their hosts, or cause non-neoplastic diseases such as acute hepatitis for hepatitis B virus or mononucleosis for Epstein–Barr virus. A minority of persons (or animals) will go on to develop cancers after infection. This has complicated efforts to determine whether or not a given virus causes cancer. The well-known Koch's postulates, 19th-century constructs developed by Robert Koch to establish the likelihood that Bacillus anthracis will cause anthrax disease, are not applicable to viral diseases. Firstly, this is because viruses cannot truly be isolated in pure culture—even stringent isolation techniques cannot exclude undetected contaminating viruses with similar density characteristics, and viruses must be grown on cells. Secondly, asymptomatic virus infection and carriage is the norm for most tumor viruses, which violates Koch's third principle. Relman and Fredericks have described the difficulties in applying Koch's postulates to virus-induced cancers. Finally, the host restriction for human viruses makes it unethical to experimentally transmit a suspected cancer virus. Other measures, such as A. B. Hill's criteria, are more relevant to cancer virology but also have some limitations in determining causality. Tumor viruses come in a variety of forms: Viruses with a DNA genome, such as adenovirus, and viruses with an RNA genome, like the hepatitis C virus (HCV), can cause cancers, as can retroviruses having both DNA and RNA genomes (Human T-lymphotropic virus and hepatitis B virus, which normally replicates as a mixed double and single-stranded DNA virus but also has a retroviral replication component). In many cases, tumor viruses do not cause cancer in their native hosts but only in dead-end species. For example, adenoviruses do not cause cancer in humans but are instead responsible for colds, conjunctivitis and other acute illnesses. They only become tumorigenic when infected into certain rodent species, such as Syrian hamsters. Some viruses are tumorigenic when they infect a cell and persist as circular episomes or plasmids, replicating separately from host cell DNA (Epstein–Barr virus and Kaposi's sarcoma-associated herpesvirus). 
Other viruses, such as polyomaviruses and papillomaviruses, are only carcinogenic when they integrate into the host cell genome as part of a biological accident. Oncogenic viral mechanism A direct oncogenic viral mechanism involves either insertion of additional viral oncogenic genes into the host cell or the enhancement of already existing oncogenic genes (proto-oncogenes) in the genome. For example, it has been shown that vFLIP and vCyclin interfere with the TGF-β signaling pathway indirectly by inducing the oncogenic host miR17-92 cluster. Indirect viral oncogenicity involves chronic nonspecific inflammation occurring over decades of infection, as is the case for HCV-induced liver cancer. These two mechanisms differ in their biology and epidemiology: direct tumor viruses must have at least one virus copy in every tumor cell expressing at least one protein or RNA that is causing the cell to become cancerous. Because foreign virus antigens are expressed in these tumors, persons who are immunosuppressed, such as AIDS or transplant patients, are at higher risk for these types of cancers. Chronic indirect tumor viruses, on the other hand, can be lost (at least theoretically) from a mature tumor that has accumulated sufficient mutations and growth conditions (hyperplasia) from the chronic inflammation of viral infection. In this latter case, it is controversial but at least theoretically possible that an indirect tumor virus could undergo "hit-and-run", so that the virus would be lost from the clinically diagnosed tumor. In practical terms, this is an uncommon occurrence if it does occur. DNA oncoviruses DNA oncoviruses typically impair two families of tumor suppressor proteins: the tumor protein p53 and the retinoblastoma proteins (Rb). It is evolutionarily advantageous for viruses to inactivate p53 because p53 can trigger cell cycle arrest or apoptosis in infected cells when the virus attempts to replicate its DNA. Similarly, Rb proteins regulate many essential cell functions, including but not limited to a crucial cell cycle checkpoint, making them a target for viruses attempting to interrupt regular cell function. While several DNA oncoviruses have been discovered, three have been studied extensively. Adenoviruses can lead to tumors in rodent models but do not cause cancer in humans; however, they have been exploited as delivery vehicles in gene therapy for diseases such as cystic fibrosis and cancer. Simian virus 40 (SV40), a polyomavirus, can cause tumors in rodent models but is not oncogenic in humans. This phenomenon has been one of the major controversies of oncogenesis in the 20th century because an estimated 100 million people were inadvertently exposed to SV40 through polio vaccines. The human papillomavirus-16 (HPV-16) has been shown to lead to cervical cancer and other cancers, including head and neck cancer. These three viruses have parallel mechanisms of action, forming an archetype for DNA oncoviruses. All three of these DNA oncoviruses are able to integrate their DNA into the host cell's genome, where it is transcribed and transforms cells by bypassing the G1/S checkpoint of the cell cycle. Integration of viral DNA DNA oncoviruses transform infected cells by integrating their DNA into the host cell's genome. The DNA is believed to be inserted during transcription or replication, when the two annealed strands are separated. This event is relatively rare and generally unpredictable; there seems to be no deterministic predictor of the site of integration.
After integration, the host's cell cycle loses regulation from Rb and p53, and the cell begins cloning itself to form a tumor. G1/S checkpoint Rb and p53 regulate the transition between G1 and S phase, arresting the cell cycle before DNA replication until the appropriate checkpoint inputs, such as DNA damage repair, are completed. p53 regulates the p21 gene, which produces a protein that binds to the Cyclin D-Cdk4/6 complex. This prevents Rb phosphorylation and prevents the cell from entering S phase. In mammals, when Rb is active (unphosphorylated), it inhibits the E2F family of transcription factors, which regulate the Cyclin E-Cdk2 complex; Cyclin E-Cdk2 in turn inhibits Rb, forming a positive feedback loop that keeps the cell in G1 until the input crosses a threshold. To drive the cell into S phase prematurely, the viruses must inactivate p53, which plays a central role in the G1/S checkpoint, as well as Rb, which, though downstream of it, is typically kept active by a positive feedback loop. Inactivation of p53 Viruses employ various methods of inactivating p53. The adenovirus E1B protein (55K) prevents p53 from regulating genes by binding to the site on p53 which binds to the genome. In SV40, the large T antigen (LT) is an analogue; LT also binds to several other cellular proteins, such as p107 and p130, on the same residues. LT binds to p53's binding domain on the DNA (rather than on the protein), again preventing p53 from appropriately regulating genes. HPV instead degrades p53: the HPV protein E6 binds to a cellular protein called the E6-associated protein (E6-AP, also known as UBE3A), forming a complex which causes the rapid and specific ubiquitination of p53. Inactivation of Rb Rb is inactivated (thereby allowing the G1/S transition to progress unimpeded) by different but analogous viral oncoproteins. The adenovirus early region 1A (E1A) is an oncoprotein which binds to Rb and can stimulate transcription and transform cells. SV40 uses the same protein that inactivates p53, LT, to inactivate Rb. HPV contains a protein, E7, which can bind to Rb in much the same way. Rb can be inactivated by phosphorylation, by being bound to a viral oncoprotein, or by mutations; mutations which prevent oncoprotein binding are also associated with cancer. Variations DNA oncoviruses typically cause cancer by inactivating p53 and Rb, thereby allowing unregulated cell division and creating tumors. There may be many different mechanisms which have evolved separately; in addition to those described above, for example, the human papillomavirus inactivates p53 by sequestering it in the cytoplasm. SV40 has been well studied and does not cause cancer in humans, but a more recently discovered analogue called Merkel cell polyomavirus has been associated with Merkel cell carcinoma, a form of skin cancer. The Rb-binding feature is believed to be the same between the two viruses. RNA oncoviruses In the 1960s, the replication process of RNA viruses was believed to be similar to that of other single-stranded RNA viruses. Single-stranded RNA replication involves RNA-dependent RNA synthesis, which meant that virus-coded enzymes would make partial double-stranded RNA. This belief was shown to be incorrect because no double-stranded RNA was found in retrovirus-infected cells. In 1964, Howard Temin proposed a provirus hypothesis, and shortly afterwards reverse transcription in the retrovirus genome was discovered. Description of virus All retroviruses have three major coding domains: gag, pol and env.
The gag region directs the synthesis of the internal virion proteins that make up the matrix, capsid and nucleocapsid. In pol, the information for the reverse transcription and integration enzymes is stored. The env region encodes the surface and transmembrane components of the viral envelope protein. There is a fourth coding domain which is smaller, but exists in all retroviruses: pro, the domain that encodes the virion protease. Retrovirus enters host cell The retrovirus begins the journey into a host cell by attaching a surface glycoprotein to the cell's plasma membrane receptor. Once inside the cell, the retrovirus goes through reverse transcription in the cytoplasm and generates a double-stranded DNA copy of the RNA genome. Reverse transcription also produces identical structures known as long terminal repeats (LTRs). Long terminal repeats are at the ends of the DNA strands and regulate viral gene expression. The viral DNA is then translocated into the nucleus, where one strand of the retroviral genome is inserted into the chromosomal DNA with the help of the virion integrase. At this point the retrovirus is referred to as a provirus. Once in the chromosomal DNA, the provirus is transcribed by the cellular RNA polymerase II. Transcription yields both spliced and full-length mRNAs as well as full-length progeny virion RNA. Virion proteins and progeny RNA assemble in the cytoplasm and leave the cell, whereas other copies are translated into viral proteins in the cytoplasm. Classification DNA viruses Human papillomavirus (HPV), a DNA virus, causes transformation in cells through interfering with tumor suppressor proteins such as p53. Interfering with the action of p53 allows a cell infected with the virus to move into a different stage of the cell cycle, enabling the virus genome to be replicated. Forcing the cell into the S phase of the cell cycle could cause the cell to become transformed. Human papillomavirus infection is a major cause of cervical cancer, vulvar cancer, vaginal cancer, penis cancer, anal cancer, and HPV-positive oropharyngeal cancers. There are nearly 200 distinct human papillomaviruses (HPVs), and many HPV types are carcinogenic. Hepatitis B virus (HBV) is associated with hepatocellular carcinoma. Epstein–Barr virus (EBV or HHV-4) is associated with four types of cancers. Human cytomegalovirus (CMV or HHV-5) is associated with mucoepidermoid carcinoma and possibly other malignancies. Kaposi's sarcoma-associated herpesvirus (KSHV or HHV-8) is associated with Kaposi's sarcoma, a type of skin cancer. Merkel cell polyomavirus, a polyomavirus, is associated with the development of Merkel cell carcinoma. RNA viruses Not all oncoviruses are DNA viruses. Some RNA viruses have also been associated with cancer, such as the hepatitis C virus as well as certain retroviruses, e.g., human T-lymphotropic virus (HTLV-1) and Rous sarcoma virus (RSV). Overview table Estimated percent of new cancers attributable to each virus worldwide in 2002. NA indicates not available. The association of other viruses with human cancer is continually under research. Main viruses associated with human cancer The main viruses associated with human cancers are the human papillomavirus, the hepatitis B and hepatitis C viruses, the Epstein–Barr virus, the human T-lymphotropic virus, the Kaposi's sarcoma-associated herpesvirus (KSHV) and the Merkel cell polyomavirus.
Experimental and epidemiological data imply a causative role for viruses, and they appear to be the second most important risk factor for cancer development in humans, exceeded only by tobacco usage. The mode of virally induced tumors can be divided into two classes: acutely transforming and slowly transforming. In acutely transforming viruses, the viral particles carry a gene that encodes an overactive oncogene called a viral oncogene (v-onc), and the infected cell is transformed as soon as v-onc is expressed. In contrast, in slowly transforming viruses, the virus genome is inserted near a proto-oncogene in the host genome (viral genome insertion is an obligatory part of the retrovirus life cycle). The viral promoter or other transcription regulation elements in turn cause overexpression of that proto-oncogene, which in turn induces uncontrolled cellular proliferation. Because viral genome insertion is not specific to proto-oncogenes and the chance of insertion near a proto-oncogene is low, slowly transforming viruses have very long tumor latency compared to acutely transforming viruses, which already carry the viral oncogene. Hepatitis viruses, including hepatitis B and hepatitis C, can induce a chronic viral infection that leads to liver cancer in 0.47% of hepatitis B patients per year (especially in Asia, less so in North America), and in 1.4% of hepatitis C carriers per year. Liver cirrhosis, whether from chronic viral hepatitis infection or alcoholism, is associated with the development of liver cancer, and the combination of cirrhosis and viral hepatitis presents the highest risk of liver cancer development. Worldwide, liver cancer is one of the most common, and most deadly, cancers due to a huge burden of viral hepatitis transmission and disease. Through advances in cancer research, vaccines designed to prevent cancer have been created. The hepatitis B vaccine is the first vaccine that has been established to prevent cancer (hepatocellular carcinoma) by preventing infection with the causative virus. In 2006, the U.S. Food and Drug Administration approved a human papillomavirus vaccine, called Gardasil. The vaccine protects against four HPV types, which together cause 70% of cervical cancers and 90% of genital warts. In March 2007, the US Centers for Disease Control and Prevention (CDC) Advisory Committee on Immunization Practices (ACIP) officially recommended that females aged 11–12 receive the vaccine, and indicated that females as young as age 9 and as old as age 26 are also candidates for immunization. History The history of cancer virus discovery is intertwined with the history of cancer research and the history of virology. The oldest surviving record of a human cancer is in the Babylonian Code of Hammurabi (dated ca. 1754 BC), but scientific oncology could only emerge in the 19th century, when tumors were studied at the microscopic level with the help of the compound microscope and achromatic lenses. 19th-century microbiology accumulated evidence that implicated bacteria, yeasts, fungi, and protozoa in the development of cancer. In 1926 the Nobel Prize was awarded for documenting that a nematode worm could provoke stomach cancer in rats. But it was not recognized that cancer could have infectious origins until much later, as viruses had first been discovered by Dmitri Ivanovsky and Martinus Beijerinck only at the close of the 19th century.
History of non-human oncoviruses

The theory that cancer could be caused by a virus began with the experiments of Oluf Bang and Vilhelm Ellerman in 1908 at the University of Copenhagen. Bang and Ellerman demonstrated that avian sarcoma leukosis virus could be transmitted between chickens after cell-free filtration and subsequently cause leukemia. This was subsequently confirmed for solid tumors in chickens in 1910–1911 by Peyton Rous. Rous, at the Rockefeller University, extended Bang and Ellerman's experiments to show cell-free transmission of a solid tumor sarcoma to chickens (now known as Rous sarcoma). The reasons why chickens are so receptive to such transmission may involve unusual characteristics of stability or instability as they relate to endogenous retroviruses. Charlotte Friend later confirmed Bang and Ellerman's findings for liquid tumors in mice. In 1933 Richard Shope and Edward Weston Hurst showed that warts from wild cottontail rabbits contained the Shope papilloma virus. In 1936 John Joseph Bittner identified the mouse mammary tumor virus, an "extrachromosomal factor" (i.e. virus) that could be transmitted between laboratory strains of mice by breast feeding.

By the early 1950s, it was known that viruses could remove and incorporate genes and genetic material in cells. It was suggested that such types of viruses could cause cancer by introducing new genes into the genome. Genetic analysis of mice infected with Friend virus confirmed that retroviral integration could disrupt tumor suppressor genes, causing cancer. Viral oncogenes were subsequently discovered and identified to cause cancer. Ludwik Gross identified the first mouse leukemia virus (murine leukemia virus) in 1951, and in 1953 reported on a component of mouse leukemia extract capable of causing solid tumors in mice. This agent was subsequently identified as a virus by Sarah Stewart and Bernice Eddy at the National Cancer Institute, after whom it was once called "SE polyoma". In 1957 Charlotte Friend discovered the Friend virus, a strain of murine leukemia virus capable of causing cancers in immunocompetent mice. Though her findings received significant backlash, they were eventually accepted by the field and cemented the validity of viral oncogenesis.

In 1961 Eddy discovered the simian vacuolating virus 40 (SV40). Merck Laboratory also confirmed the existence of this rhesus macaque virus contaminating cells used to make the Salk and Sabin polio vaccines. Several years later, it was shown to cause cancer in Syrian hamsters, raising concern about possible human health implications. Scientific consensus now strongly holds that SV40 is unlikely to cause human cancer.

History of human oncoviruses

In 1964 Anthony Epstein, Bert Achong and Yvonne Barr identified the first human oncovirus from Burkitt's lymphoma cells. A herpesvirus, this virus is formally known as human herpesvirus 4 but more commonly called Epstein–Barr virus or EBV. In the mid-1960s Baruch Blumberg first physically isolated and characterized hepatitis B while working at the National Institutes of Health (NIH) and later the Fox Chase Cancer Center. Although this agent was the clear cause of hepatitis and might contribute to the liver cancer hepatocellular carcinoma, this link was not firmly established until epidemiologic studies were performed in the 1980s by R. Palmer Beasley and others.
In 1980 the first human retrovirus, human T-lymphotropic virus 1 (HTLV-I), was discovered by Bernard Poiesz and Robert Gallo at the NIH, and independently by Mitsuaki Yoshida and coworkers in Japan. But it was not certain whether HTLV-I promoted leukemia. In 1981 Yorio Hinuma and his colleagues at Kyoto University reported visualization of retroviral particles produced by a leukemia cell line derived from patients with adult T-cell leukemia/lymphoma. This virus turned out to be HTLV-1, and the research established the causal role of the HTLV-1 virus in ATL.

Between 1984 and 1986 Harald zur Hausen and Lutz Gissmann discovered HPV16 and HPV18; together these two papillomavirus (HPV) types are responsible for approximately 70% of the human papillomavirus infections that cause cervical cancers. For the discovery that HPV can cause human cancer, zur Hausen was awarded the 2008 Nobel Prize in Physiology or Medicine.

In 1987 the hepatitis C virus (HCV) was discovered by panning a cDNA library made from diseased tissues for foreign antigens recognized by patient sera. This work was performed by Michael Houghton at Chiron, a biotechnology company, and Daniel W. Bradley at the Centers for Disease Control and Prevention (CDC). HCV was subsequently shown to be a major contributor to hepatocellular carcinoma (liver cancer) worldwide.

In 1994 Patrick S. Moore and Yuan Chang at Columbia University, working together with Ethel Cesarman, isolated Kaposi's sarcoma-associated herpesvirus (KSHV or HHV8) using representational difference analysis. This search was prompted by work from Valerie Beral and colleagues, who inferred from the epidemic of Kaposi's sarcoma among patients with AIDS that this cancer must be caused by another infectious agent besides HIV, and that this was likely to be a second virus. Subsequent studies revealed that KSHV is the "KS agent" and is responsible for the epidemiologic patterns of KS and related cancers.

In 2008 Yuan Chang and Patrick S. Moore developed a new method to identify cancer viruses based on computer subtraction of human sequences from a tumor transcriptome, called digital transcriptome subtraction (DTS). DTS was used to isolate DNA fragments of Merkel cell polyomavirus from a Merkel cell carcinoma, and it is now believed that this virus causes 70–80% of these cancers.

See also

Infectious causes of cancer
Carcinogen
Oncogenic
Oncogene
Adult T-cell leukemia/lymphoma
Cancer bacteria
Oncolytic virus, a virus that infects and kills cancer cells
Gag-onc fusion protein
List of infectious diseases

References

External links

Carcinogenesis Virology Viruses Infectious causes of cancer
Gbcast (also known as group broadcast) is a reliable multicast protocol that provides ordered, fault-tolerant (all-or-none) message delivery in a group of receivers within a network of machines that experience crash failure. The protocol is capable of solving Consensus in a network of unreliable processors, and can be used to implement state machine replication. Gbcast can be used in a standalone manner, or can support the virtual synchrony execution model, in which case Gbcast is normally used for group membership management while other, faster, protocols are often favored for routine communication tasks. History Introduced in 1985, Gbcast was the first widely deployed reliable multicast protocol to implement state machine replication with dynamically reconfigurable membership. Although this problem had been treated theoretically under various models in prior work, Gbcast innovated by showing that the same multicasts used to update replicated data within the state machine can also be used to dynamically reconfigure the group membership, which can then evolve to permit members to join and leave at will, in addition to being removed upon failure. This functionality, together with a state transfer mechanism used to initialize joining members, represents the basis of the virtual synchrony process group execution model. The term state machine replication was first suggested by Leslie Lamport and was widely adopted after publication of a survey paper written by Fred B. Schneider. The model covers any system in which some deterministic object (a state machine) is replicated in such a way that a series of commands can be applied to the replicas fault-tolerantly. A reconfigurable state machine is one that can vary its membership, adding new members or removing old ones. Some state machine protocols can also ride out the temporary unavailability of a subset of the current members without requiring reconfiguration when such situations arise, including Gbcast and also Paxos, Lamport's widely cited protocol for state machine replication. State machine replication is closely related to the distributed Consensus problem, in which a collection of processes must agree upon some decision outcome, such as the winner of an election. In particular, it can be shown that any solution to the state machine replication problem would also be capable of solving distributed consensus. As a consequence, impossibility results for distributed consensus apply to solutions to the state machine replication problem. Implications of this finding are discussed under liveness. Gbcast is somewhat unusual in that most solutions to the state machine replication problem are closely integrated with the application being replicated. Gbcast, in contrast, is designed as a multicast API and implemented by a library that delivers messages to group members. Lamport, Malkhi and Zhou note that few reliable multicast protocols have the durability properties required to correctly implement the state machine model. Gbcast does exhibit the necessary properties. The Gbcast protocol was first described in a 1985 publication that discussed infrastructure supporting the virtual synchrony model in the Isis Toolkit. Additional details were provided in a later 1987 journal article, and an open-source version of the protocol was released by the Cornell developers in November of that year. Isis used the protocol primarily for maintaining the membership of process groups but also offered an API that could be called directly by end-users. 
The technology became widely used starting in 1988, when the Isis system was commercialized and support became available. Commercial support for the system ended in 1998 when Stratus Computer, then the parent of Isis Distributed Systems, refocused purely on hardware solutions for the telecommunications industry. Examples of systems that used Isis in production settings include the New York Stock Exchange, where it was employed for approximately a decade to manage a configurable, fault-tolerant and self-healing reporting infrastructure for the trading floor, relaying quotes and trade reports from the "back office" systems used by the exchange to overhead displays. The French Air Traffic Control System continues to use Isis; since 1996 the system has been employed to create fault-tolerant workstation clusters for use by air traffic controllers and to reliably relay routing updates between air traffic control centers; over time the French technology has also been adopted by other European ATC systems. The US Navy AEGIS system has used Isis since 1993 to support a reliable and self-healing communication infrastructure. Isis also had several hundred other production users in the financial, telecommunications, process control, SCADA and other critical infrastructure domains. More details can be found in the references.

Problem statement

The fundamental problem solved by Gbcast is this: we are given an initial set of group members and wish to support a multicast abstraction, permitting members of the group to send messages that encode various commands or requests. The protocol must agree on the messages to deliver, and on their ordering, so that if any member of the group sends a message, every member of the group that doesn't fail will receive that message, and in the same order with respect to other delivered messages. The set of group members changes each time a member fails or joins, and Gbcast is also used to maintain group membership by means of special multicasts that are delivered to the application as "new view" events, but that also adjust the group membership list maintained by the Gbcast protocol library. The application thus sees a series of membership views that start with an "initial view" when a particular group member joins, and then evolve over time, and that are ordered with respect to other view-changing events and multicast messages. These multicasts are delivered to all the non-failed members listed in the view during which delivery is scheduled, a property referred to as virtual synchrony.

Network partitions can split a group into two or more disjoint subgroups, creating the risk of split brain behavior, in which some group members take a decision (perhaps, to launch a rocket) without knowing that some other partition of the group has taken a different, conflicting decision. Gbcast offers protection against this threat: the protocol ensures that progress occurs only in a single primary partition of the group. Thus, should a network partition arise, at most one subgroup of members will continue operations, while the other is certain to stall and shut down. Should a failed member recover (or if a partitioning failure caused some member to be incorrectly sensed as faulty and hence dropped from the view), after communication is restored, that member can rejoin. An incarnation number is used to avoid ambiguity: a counter that will be incremented each time a process joins the group, and is treated as part of the process identifier.
Any given (processor-id, process-id, incarnation-number) tuple joins the group at most once, then remains in the group until it fails, or is forced to leave because a time out occurred. Any dynamically reconfigurable system, including both Gbcast and Paxos, can enter states from which no further progress is possible. For example, this could happen if operational processes are wrongly removed from the configuration, and then too many real crashes occur within the remaining members of the view. In such situations, the data center management infrastructure is responsible for restarting the entire application. This is in contrast to the behavior of non-reconfigurable (vanilla) Paxos, which can tolerate disruptions of unlimited duration and then will resume once enough group members are accessible, without intervention of the management infrastructure. The following terms are used in the detailed protocol description. Processes Processes run on processors that operate at arbitrary speed. Processes may experience crash (halting) failures. A process is uniquely identified by a three-tuple: (processor-id, process-id, incarnation-number). Processes with stable storage may re-join the protocol after failures (following a crash-recovery failure model), after incrementing the incarnation number. Processes do not collude, lie, or otherwise attempt to subvert the protocol. (That is, Byzantine failures don't occur.) Network All processes in the system can send messages to all other processes in the system. Messages are sent asynchronously: there is no time bound on message delivery. Messages may be lost, reordered, or duplicated. Messages are delivered without corruption. These are weak assumptions: a network that never delivers any messages would satisfy them (we would say that such a network is experiencing a complete and permanent partitioning failure). The network conditions required for Gbcast to guarantee progress are discussed below. In practice Gbcast is normally used within data centers; these have networks that can experience transient failures, but in which partitioning failures are rare, and generally impact just small subsets of the nodes. Thus for purposes of analysis we assume a harsher networking environment than would arise in actual deployments. To simplify the presentation, we assume that a TCP-like acknowledgement / retransmission scheme is employed, creating the illusion of a reliable, sequenced, non-repeating message channel between each pair of processes. A timeout occurs if this channel abstraction retries repeatedly and is unable to obtain an acknowledgement for some message. Using the same TCP-like channels, we can also support a 1-to-all capability, whereby a single process sends some message over its channels to all the other members of some view of some group. This is done by mapping the 1-to-all request into multiple 1-to-1 messages. Notice that these 1-to-all channels lack any atomicity guarantee: if the sender fails while a message is being sent, it might reach just some of the destinations. Process Groups and Views Gbcast is defined with respect to a "process group:" a set of processes. In a deployed system such a group might have a name (like a file name), a way to initially contact the group, and other attributes such as flow-control parameters. However, those kinds of details are omitted here for brevity. 
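The identity and shunning machinery just described can be made concrete with a short sketch. The Python fragment below is a minimal illustration under the assumptions stated above; the class and method names (ProcessId, Endpoint, receive) are invented for exposition and are not part of any published Isis interface.

from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessId:
    processor_id: str
    process_id: int
    incarnation: int      # incremented on every rejoin after a (suspected) failure

class Endpoint:
    # Sketch of the shunning behavior layered over the reliable channels.
    def __init__(self, me):
        self.me = me
        self.failed = set()                         # identities this process shuns

    def receive(self, sender, message, reported_failures):
        self.failed |= set(reported_failures)       # gossip: adopt piggybacked failures
        if sender in self.failed:
            return ("REJECTED", "you have failed")  # shun the stale incarnation
        return ("OK", message)

In this sketch, every incoming message carries the sender's reported failure list, so a single rejection is enough to force a wrongly suspected process to rejoin under a higher incarnation number.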
The term membership view is a list of members, rank-ordered by age (determined by the view in which each member most recently joined the group) and with ties broken by a lexicographic ordering rule. The initial membership of the group is specified by an external agent and defines the first membership view of the group. Subsequent membership views arise by applying add and remove commands and are identified by sequence number. New views are reported to the processes belonging to the view by means of "new view" events. The application is notified via an upcall (a call from the library into a handler defined by the application program). Multicast Messages Members of a view can request that multicast messages be sent to a process group without knowledge of the membership that will apply at the time of delivery. The Gbcast protocol carries out these operations with a series of guarantees, discussed below. Delivery is by upcall to the application, which can perform whatever action the message requests. Roles Gbcast is best understood in terms of a set of roles. Application An application corresponds to a program which can be launched on one or more processors. Each application process then joins one or more process groups. An application process belonging to a group initiates new multicasts by invoking Gbcast. The protocol is considered to have terminated when all members of the target group have either acknowledged delivery of the message, or have been detected as faulty, via a mechanism explained below. Incoming Gbcast messages are delivered via upcalls, as are view change notifications. As noted earlier, the members of a group observe the same sequence of upcalls starting when they initially join: an initial view and then a sequence of new views and multicast messages. All members of a group receive any particular multicast in the same view, and the multicast is delivered to all non-failed members of that view. Leader The leader of a group is defined with respect to some view of the group, and is the member with lowest rank in the view. As noted, the rank is age-ordered (with older members having lower rank), and ties are broken using a lexicographic sort. Failure detection All components of the system are permitted to participate in the role of "detecting" failures. Detection is distinct from the reporting of the failure (which occurs through a new view and is ordered with respect to message deliveries). The channel abstraction supported by the network layer senses failures by timeouts. (Notice that under the network model, a process that attempts to send a message to a crashed target process will always experience a timeout, but it is also possible that the channel abstraction could misreport an operational process as faulty if messages are delayed because of a transient partitioning failure). Any process that experiences a timeout can declare that the endpoint of the associated channel has failed. If a process learns of a failure for some (processor-id, process-id, incarnation-number) tuple, it includes that information on the next outgoing message on all channels. A process that considers some other process to have failed will reject messages from the failed incarnation, responding "you have failed". (That is, processes gossip about failures, and shun failed group members). An incoming message from a new incarnation of a failed process is treated as a message from a "new" process. 
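The ranking rule and leader designation can be sketched in the same illustrative style; again, the names are assumptions chosen for exposition, not an actual API.

class View:
    def __init__(self, view_id, aged_members):
        # aged_members: (join_view_id, process_id) pairs; rank is by age (the
        # view in which the member joined), with lexicographic tie-breaking.
        self.view_id = view_id
        self.members = [pid for _, pid in sorted(aged_members)]

    def leader(self):
        return self.members[0]            # lowest rank = oldest member

    def rank(self, pid):
        return self.members.index(pid)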
Failed process

Any member of the current view that has been detected as failed is considered to be a failed process. An operational process that learns that it is considered to have failed (by attempting to communicate with some other process that rejects the message, thereby "shunning" it) might exit from the system, or can increase its incarnation number and rejoin.

New Leader

If every lower-ranked process in the current view is a failed process, then the lowest-ranked non-failed process is designated as the new leader. The new leader must run a protocol, discussed below, to become the leader.

Quorums

Quorums are used to guarantee the safety properties of Gbcast by ensuring that there is a single globally agreed-upon sequence of group views and multicast messages, and by preventing progress in more than one partition if a group becomes fragmented into two or more partitions (disjoint subsets of members that can communicate with other members of their subsets, but not with members of other subsets). Quorums are defined for a specific view. Given view i with n members {A, B, C, ...}, a quorum of the view is any majority subset of the members of that view. Notice that this is in contrast to the way the term is defined in systems that have a static underlying membership: for Gbcast, the quorum size will change over time as the membership of a group changes and new views become defined.

Safety and liveness properties

In order to guarantee safety, Gbcast defines three safety properties and ensures they hold, regardless of the pattern of failures:

Non-triviality
Only multicasts actually sent by some group member are delivered. If a process receives a message from a group member that it considers to have failed, it will reject that message.

Consistency
If any member of a view delivers a multicast (or reports a new view) in some order relative to other multicasts, then all other members of the same view that deliver the same message (or report the same view) will do so in the same order.

Conditional liveness
If a multicast M is sent in some view and the sender remains operational, then eventually all members of that view (with the exception of any that crash) will deliver M. Liveness cannot be guaranteed under all conditions, hence we impose a further condition: we require this property only while sufficiently many processes remain non-faulty (we'll discuss this further below).

Basic Gbcast

This protocol is the one used under normal conditions. Recall that in Gbcast, each operational process has a current view, and each view defines a leader. Only a process that believes itself to be the leader in the current view can initiate a new multicast; other members must relay multicasts by sending them to the leader, over 1-to-1 connections, and then waiting for the leader to run the protocol. Should the leader fail while some member that is not the leader is attempting to relay a multicast, the sender must determine the status of its pending request. This is accomplished as follows: members observe the delivery of their own multicasts. Accordingly, if a new view becomes defined in which the old leader has failed, either the multicast has been delivered (in which case the sender knows this because it was one of the receivers), or the delivery of the new view allows it to conclude that the leader failed to relay the pending message, and that it should be resent by asking the new leader to relay it (non-triviality).
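The quorum rule and the new-leader test can be added to the View sketch above; the simplified fragment below (names assumed, as before) is reused in the protocol sketches that follow.

def quorum(view):
    # A quorum is any majority of THIS view's membership; the threshold
    # therefore changes as views evolve.
    return len(view.members) // 2 + 1

def should_take_over(view, me, failed):
    # A member becomes the new-leader candidate once every process ranked
    # below it (i.e. every older member) is considered to have failed.
    return all(p in failed for p in view.members[:view.rank(me)])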
Prepare step

The leader proposes some sequence of one or more multicast messages by using the 1-to-all reliable network layer to send the message(s) to the members of the most current view, identifying each by means of an integer sequence number. The sequence numbers reset to 1 as each new view is defined (via a special kind of multicast, as explained below). A leader "talks to itself", participating in the protocol just as do other members. During recovery (discussed below), a new leader might re-propose some previously proposed view or message, as the new leader attempts to complete protocols that the old leader might have started but failed to complete. When this occurs, the new leader will respect the original sequencing and will re-propose the identical view or message.

Promise step

Each recipient retains a copy of the message(s) and responds with a promise to deliver them (such a promise will be fulfilled so long as the recipient itself remains a member of the group view, but if the recipient fails, the promise might not be carried out). During recovery, a recipient might receive a duplicated prepare request for the same message. If some message is re-proposed with the same sequence number, a recipient simply repeats its promise.

Commit step

The leader collects promise messages until, for each member of the group, it either has a promise message or a timeout has occurred causing the leader to suspect the corresponding member as faulty (recall that in this latter case, the leader will shun the suspected member, and because the message-sending subsystem piggybacks this information on the next messages it sends, any process receiving a subsequent message from the leader will also begin to shun these newly suspected members). If the leader receives promises from a quorum of members, as defined with respect to the view in which it is running the protocol, it sends a commit request. If the leader lacks a quorum, and hence suspects more than a majority of group members, it will never again be able to make progress, and the leader therefore terminates (the application program may rejoin the group using a new process name, but further progress by this process in the old view, under the old process name, is impossible).

Notice that the leader may also have learned of failures during the prepare phase or the promise phase. In the prepare phase, some view members may have failed to acknowledge the propose request, in which case the leader's channel to those members will have experienced timeouts. The leader will have marked them as failed members. Additionally, it may be the case that by receiving the promise messages in the promise phase, the leader has learned of failed members that were detected by other group members. Thus, at the start of the commit phase, the leader has a quorum of promises together with a possibly empty list of failed view members. The leader therefore sends the "Commit" message to the non-failed members of the view, together with a proposal for a view change event that will remove the failed member(s) from the view, thereby combining a commit step and a propose step into a single action. Recall that after any failure detection occurs, the first message to each member in the group will piggyback that failure detection information, and that members shun failed members. Thus members that learn of a failure instantly begin to shun failed members, and the leader takes the further step of starting a view change protocol (which will then take some time to complete).
If a proposal changed the view by adding members, the leader sends the new view to the joining members; it becomes their initial view, and they can then participate in any subsequent runs of the protocol. During recovery, a participant might receive a duplicated commit for a previously committed message. If so, it enters the delivery phase but does not redeliver the message or view to the application.

Delivery step

If a member receives a Commit message, it delivers the associated message(s) or new view(s) to the application, in the order that they were proposed by the leader. The leader learns that this step has succeeded when the acknowledgements used by the reliable 1-to-1 channel are received.

Message flow: Basic Gbcast, simplest case

 (Quorum size = 2, view1={A,B,C})
 Member Leader Members Application Layer
 A A B C A B C
 | | | | | | | |
 X-------->| | | | | | | Request that the leader send a multicast M
 | X--------->|->|->| | | | Propose(1.1: M) (View 1, sequence 1, message M)
 | |<---------X--X--X | | | Promise(1.1)
 | X--------->|->|->| | | | Commit(1.1)
 | |<---------X--X--X------>M->M->M Committed(1.1); Delivers M
 | | | | | | | |

Error cases in basic Gbcast

The simplest error cases are those in which one or more members fail, but a quorum remains active. In the example below, the group consists of {A,B,C} with A playing the leader role. C fails during the promise phase and a timeout occurs within the reliable channel from the leader to process C. The leader therefore commits the delivery of M, but simultaneously initiates a protocol to remove C from the group, which commits, creating the new view {A,B}. If C has not actually failed, it can now rejoin the group but with a new incarnation number: in effect, C must rejoin as C'. Any messages from C to A or B will be rejected from the instant that each learns of the apparent failure: C will be shunned by A and B.

Message flow: Basic Gbcast, failure of member other than the Leader

 (Quorum size = 2, view1={A,B,C})
 Member Leader Members Application Layer
 A A B C A B C
 | | | | | | | |
 X-------->| | | | | | | Request(M)
 | X--------->|->|->| | | | Propose(1.1: M)
 | | | | * | | * !! C FAILS !!
 | |<---------X--X | | Promise(1.1)
 | X--------->|->| | | Commit(1.1); Propose(1.2: "remove C")
 | |<---------X--X--------->M->M Committed(M); Delivers M; Promise(1.2)
 | X--------->|->|->| | | Commit(1.2);
 | |<---------X--X--X------>V->V Committed(1.2); Delivers view2={A,B}
 | | | | | |

Notice that the Commit and the new Proposal (and the piggybacked failure notification) are combined into a single message. This ensures that any process that commits an action after a new failure has been sensed simultaneously learns of that failure and will shun the associated process, and that the process will quickly be removed from the view. If C hasn't crashed, it can rejoin by incrementing its incarnation number (so it is now named C') and then requesting that it be added back into the group by the leader. It will be appended to the membership list with its new name, and will have the highest rank (because it is the youngest member) among members of the view.
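The leader's side of one such round can be summarized in the same sketch style. The message formats, the channel helper and the error handling below are assumptions introduced for exposition, and quorum is the helper sketched earlier; this is an illustration of the steps described above, not the Isis implementation.

def run_round(view, seqno, events, channel, timeout):
    # One Basic Gbcast round, seen from the leader.
    suspects, promises = [], 0
    for member in view.members:
        ok = channel.send_and_ack(member,
                                  ("PROPOSE", view.view_id, seqno, events), timeout)
        if ok:
            promises += 1            # the member promised to deliver
        else:
            suspects.append(member)  # timed out: suspect and shun this member
    if promises < quorum(view):
        # More than a majority suspected: this leader can never make progress.
        raise SystemExit("quorum lost; rejoin under a new incarnation")
    survivors = [m for m in view.members if m not in suspects]
    # The commit is combined with a proposal for a new view without suspects.
    for member in survivors:
        channel.send_and_ack(member, ("COMMIT", view.view_id, seqno,
                                      ("PROPOSE-VIEW", survivors)), timeout)
    return survivors, suspects

Note how the commit carries the next view proposal, mirroring the combined commit-and-propose message shown in the second diagram above.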
Message flow: Basic Gbcast, add members {D,E,F}, failure of member other than the Leader

In the example shown below, a group that initially contains members {A,B,C} is asked to add {D,E,F}, but member C fails during the protocol. Membership change requests are treated as a special kind of multicast and the sequence of events is the same. The example is thus nearly identical to the prior one, but now a series of new view events are delivered to the application.

 (Quorum size = 2, view1={A,B,C})
 Member Leader Members Application Layer
 A A B C D E F A B C D E F
 | | | | | | | | | | |
 X-------->| | | | | | | | | | Request("add D,E,F")
 | X--------->|->|->| | | | | | | Propose(1.1: "add D,E,F")
 | | | | * | | * | | | !! C FAILS !!
 | |<---------X--X | | | | | Promise(1.1)
 | X--------->|->| | | | | | Commit(1.1); Propose(2.1: "remove C")
 | |<---------X--X-----X--X--X------>V->V---->V->V->V Committed(1.1); Deliver view2={A,B,C,D,E,F}; Promise(2.1)
 | X--------->|->|---->|->|->| | | | | | Commit(2.1)
 | |<---------X--X-----X--X--X------>V->V---->V->V->V Committed(2.1); Deliver view3={A,B,D,E,F}
 | | | | | | | | | | | |

At the end of the protocol, the new active view is view3={A,B,D,E,F} and the new quorum size is 3. But notice that there was an "intermediate" view, view2={A,B,C,D,E,F}, with a quorum size of 4. Had the leader not received 4 promises to the proposal phase that removed C, it would not have been able to run the commit phase for view3. This illustrates a basic policy: the quorum required to commit a new view is always based on the size of the prior view.

Takeover protocol, used when the leader fails

The next failure case is when a leader fails, resulting in a new leader. To take over as the leader, the new leader first runs a takeover protocol, and then the new leader can run basic Gbcast as above. The takeover protocol is as follows:

Inquiry Step
The new leader sends a 1-to-n message interrogating non-failed members to learn of any messages they have promised to deliver.

Promise-List Step
Each recipient sends the current list of promised messages to the leader. If a recipient lacks its initial view, it sends a request for an initial view to the leader. The new leader waits until it has either received a promise-list from each of the members it contacted, or has timed out. If a timeout occurs, the new leader suspects the member in question, and will shun it, as will any other members that it contacts. It will eventually propose a view that excludes these shunned members, as explained further below.

Repeat If Necessary
The new leader examines the promise-list, looking for membership-change messages that add new members. If any are present, it iterates the inquiry phase and promise-list collection phase, sending inquiries to the new members. This in turn could lead to the discovery of additional proposals that add still further members. The process terminates when every member (current or proposed to be added) has responded with a promise-list or been suspected by the new leader.

Check for Quorums
At the end of the inquiry phase, the leader has received promise-list responses from some of the processes it contacted; any unresponsive members will now be suspected. The new leader constructs a list of proposed views. To advance to the next step of the take-over proposal, the new leader must have received a quorum of responses from each of the committed or proposed views on this list. If it has failed to receive a quorum of responses for any committed or proposed view on the list, the new leader has failed to take over as leader and will never succeed. It terminates the protocol and must rejoin the system as a new member, using a new process incarnation number.

Start as New Leader
Having successfully checked for quorums, the new leader becomes the leader. It can now run the basic protocol. It re-proposes any promised messages or view-changes, in the order it learned them from the promise-lists, following them with a new view-change command that removes the old leader and any other members that failed to respond during the inquiry phase. If any member responded, during the promise-list phase, that it lacks its initial view, the new leader sends the appropriate initial view to that member.
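A compact sketch of the inquiry phase follows, under the same illustrative assumptions as the earlier fragments; the proposal encoding (dicts with an "adds" field) and the channel.ask helper are invented for exposition.

def take_over(me, view, channel, timeout):
    # Inquiry phase run by a new leader.
    promise_lists, suspects = {}, set()
    pending, contacted = set(view.members) - {me}, {me}
    while pending:
        m = pending.pop()
        contacted.add(m)
        reply = channel.ask(m, ("INQUIRE", me), timeout)   # "I am taking over"
        if reply is None:
            suspects.add(m)          # unresponsive: suspect and shun
            continue
        promise_lists[m] = reply     # reply is the member's promised list
        for proposal in reply:       # a promised "add" widens the inquiry
            for joiner in proposal.get("adds", []):
                if joiner not in contacted:
                    pending.add(joiner)
    # The caller must now verify a quorum of replies for every committed or
    # proposed view mentioned in the promise-lists; failing that, this process
    # terminates and may rejoin only under a new incarnation.
    return promise_lists, suspects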
Dueling Leaders

It is possible that the promise-lists include two or more distinct proposals for the same slot. This happens (only) if a first leader A became partitioned from the system, but nonetheless made a proposal X that was seen only by a small (non-quorum) set of members. A new leader B then took over successfully, but didn't learn of A's proposal (which cannot have become committed). B now proposes Y, again at a small minority of members. Now B is believed to have failed and C takes over. It is possible for C to learn of both proposals X and Y for the same slot. C should ignore the proposal associated with the older leader, A, but retain the proposal associated with the newer leader, B: in this situation, proposal X cannot have achieved a quorum and hence cannot have become committed, whereas proposal Y, made by the more recent leader, could have become committed (had X reached a quorum, B would have learned of and hence repeated proposal X; thus, because B didn't learn of X, X must not have received a quorum). Note that C's take-over protocol uses a deterministic ordering among leaders A and B to determine that proposal X is doomed, because leader B must have shunned A in order to become leader. Conversely, C must assume that proposal Y may become committed, even if A suspected that B has failed, because proposal Y intersected with C's take-over step. The rule is implemented by numbering the leaders sequentially and including the leader-number in the proposal. During the inquire step, a new leader can then use the proposal from the leader with the larger number if it receives conflicting proposals for the same slot.
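The slot-conflict rule can be rendered as a few lines operating on the promise-lists gathered during the inquiry; as before, the field names ("slot", "leader_num") are assumptions made for illustration.

def resolve_conflicts(promise_lists):
    # Keep, for each slot, only the proposal from the highest-numbered leader;
    # proposals from superseded leaders cannot have committed.
    winners = {}
    for plist in promise_lists.values():
        for p in plist:
            best = winners.get(p["slot"])
            if best is None or p["leader_num"] > best["leader_num"]:
                winners[p["slot"]] = p
    return [winners[s] for s in sorted(winners)]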
Failure Suspicions Piggyback on Outgoing Messages

Notice that the new leader believes the old leader to have failed, and may also believe that other members have failed. Thus, the inquiry phase and/or the new propose phase may also carry piggybacked failure messages for one or more members. This is a central requirement for the protocol, because it ensures that those members will subsequently be shunned: if further communication is received from a shunned member, the receiver will reject those messages. It follows that if any member executes the promise-list phase for an old leader L, no further propose or commit messages from L will be processed by that member. From this we can see that the promise-list collected by the new leader will be complete, containing all promised messages that could possibly have achieved a quorum in the current view. It may also contain some additional promised messages that have not yet achieved a quorum.

Message flow: Basic Gbcast, failure of Leader, TakeOver, Basic Gbcast by the new leader

 (Quorum size = 2, view 1={A,B,C})
 Member Leader Members Application Layer
 A B A B C A B C
 | | | | | | | |
 X----->| | | | | | | Request(M)
 | X------------>|->| | | | | Propose(1.1: M) !! Leader fails during send, Propose doesn't reach C !!
 | *<------------X--X | | | | Promise(1.1)
 | | * | | * | | !! A (THE LEADER) HAS FAILED !!
 | | | | | | !! NEW LEADER: B !!
 | ?------------>|->| | | Inquire("B is taking over because A has failed")
 | |<------------X--X | | PromiseLists(1.1: M)
 | X------------>|->| | | Propose(1.1: M); Propose(1.2: "remove A")
 | |<------------X--X--------->| | Promise(1.1); Promise(1.2)
 | X------------>|->|--------->| | Commit(1.1); Commit(1.2);
 | |<------------X--X-------->M;V->M;V Committed(1.1); Committed(1.2); Delivers(M). Delivers view2={B,C}

Message flow: Basic Gbcast, Add members {D,E,F}, failure of the Leader

As an example of a more complex case, here the leader fails in the middle of a commit that increases the size of the view.

 (Quorum size = 2, view 1={A,B,C})
 Member Leader Members Application Layer
 A B A B C D E F A B C D E F
 | | | | | | | | | | | | | |
 X----->| | | | | | | | | | | | | Request("add D, E, F")
 | X------------>|->| | | | | | | | | | | Propose(1.1) !! Leader fails during send, Propose doesn't reach C !!
 | *<------------X--X | | | | | | | | | | Promise(1.1)
 | | * | | | | | * | | | | | !! A (THE LEADER) HAS FAILED !!
 | | | | | | | | | | | | !! NEW LEADER: B !!
 | ?------------>|->| | | | | | | | | Inquire("B is taking over because A has failed")
 | |<------------X--X | | | | | | | | PromiseLists(1.1: "add D, E, F");
 | ?-------------|--|->|->|->| | | | | | Iterated Inquire("B is taking over because A has failed")
 | |<------------|--|--X--X--X | | | | | PromiseLists(1.1: "add D, E, F");
 | X------------>|->|->|->|->| | | | | | Propose(1.1: "add D, E, F"); Propose(2.1: "remove A")
 | |<------------X--X--X--X--X | | | | | Promise(1.1); Promise(2.1);
 | X------------>|->|->|->|->| | | | | | Commit(1.1); Commit(2.1);
 | |<------------X--X->X->X->X -------->V->V->V->V->V Committed(1.1); Committed(2.1); Delivers view2={A,B,C,D,E,F}. Delivers view3={B,C,D,E,F}

In this example we see the inquiry iteration "in action": B learns of the protocol that adds {D,E,F} in a first phase of the inquiry, hence it repeats the inquiry, this time contacting D, E and F. There is no need to repeat the inquiry at C, since this would simply return the same information previously obtained. In this example, the final commit actually causes two views to be delivered in succession at members B and C. Even though the two proposals were sent concurrently, the commit for view2 requires a promise from a quorum of view1, whereas the commit for view3 requires a quorum response from the members of view2. Although the sending of initial views isn't explicitly shown in the diagram, the joining members don't participate in the 1.1 protocol because they don't join the group until view2. Notice that at members B and C a pipelining effect arises: events associated with view2 are already being proposed even as events in view1 are still being committed.

Correctness

To show that Gbcast satisfies non-triviality, we start by tracing backwards from an arbitrary delivery action to the point at which a client requested the corresponding event; clearly, only messages that were legitimately sent will be delivered. However, non-triviality for this protocol goes further: we must also show that messages from a given member are delivered only while that member is still a live participant in some view. Accordingly, we look at the case in which the leader initiates some multicast but then fails before it is delivered.
Here, the new leader either discovers the pending proposal, and will order it before the view-change event, or the new leader fails to discover the pending proposal, in which case all members of the new view will shun any late-arriving incoming message from the old leader. Thus either a multicast message is delivered while the view in which it was sent is still pending, or it will not be delivered at all.

To establish consistency we begin by analysis of the case in which there is just a single leader that never fails or loses connectivity with a quorum. Since the leader sequences the events and includes each member starting with the first view that contains that member, all members deliver the identical messages starting from the view in which they were added to the system. When a new leader takes over, the inquiry is required to reach a quorum of members for the most recent committed view. This quorum necessarily will include at least one process that received any proposal that the old leader could have committed. Thus the new leader will learn of any potentially committed proposal and include it as a prefix to its own new proposals. It follows that if any process delivers any event, then if the system makes progress, every surviving member will eventually deliver that same event and in the same order.

We can show that a joining member will receive its initial view by analysis of the two relevant cases. If the leader doesn't fail, it sends the initial view on an eventually reliable channel. If the leader does fail and some member lacks its initial view, the new leader sends that view after receipt of the "promise-list" response to its inquiry-phase message.

A logical partitioning of the group is impossible because of the shunning rule. In order to commit any new view, the old leader must obtain promises from a quorum of the current view. A new leader, taking over, will learn of any view that could have become committed. To commit its own proposed next view, it will thus be required to interact with a quorum of that intermediary view, if any. In a scenario that could lead to partitioning, the leader, A, might have timed out on B and gone on to create a sequence of new views and events that excluded B. But in this case a majority of the old or of the intermediary view members will have learned that A believes B to have failed, and will shun B when it inquires. In either case, B is prevented from obtaining a quorum and hence cannot make progress. A symmetric argument shows that if B succeeds in defining a new view that excludes A, A would be unable to obtain a quorum for any other new view that it might attempt to propose.

Liveness

The Gbcast protocol will make progress provided that at all times in the execution, if view v holds at time t, then less than a quorum of the members of v fail (or are suspected as failing) within some subset of the members of the view. To maximize progress, it is important that excluded but still live members rejoin the group, so that erroneous failure detections don't cause the view to shrink in a persistent manner. However, the protocol will not recover and make progress if, at any time, every process suspects more than a quorum of members of the current view of having failed. This property is similar to but "stronger" than <>W, the "weakest failure detector" for achieving consensus, as defined by Chandra and Toueg.
To see this, consider a run in which a mutually suspecting quorum arises "too quickly" for processes that have been wrongly excluded from the view to rejoin it. Gbcast will not make progress and, indeed, the group will need to shut down and restart. Arguably, such runs would be unlikely in the kinds of data centers where Gbcast is typically used, but clearly they can be constructed in an adversarial manner.

Discussion: Failure sensing

The Gbcast protocol presumes that the probability of incorrect failure suspicions will be low; the scheme breaks down if failure suspicions occur frequently and operational processes are often suspected as faulty. By analogy, consider the TCP protocol, in which the failure to receive an acknowledgement will eventually cause a connection to break. TCP is used nearly universally, and a tremendous disruption to the Web would result if TCP connections frequently broke when neither endpoint has failed. Thus timeouts are set conservatively. A similar assumption is required for systems that use Gbcast. In contrast, there are other failure detection schemes, such as the one explored by Chandra and Toueg, that can yield high rates of incorrect failure suspicions. Some protocols, including Paxos, are able to tolerate incorrect failure suspicions without any costly consequence. Whether one approach is inherently better than the other is beyond the scope of this discussion. We simply underscore that the approaches differ, and that Gbcast would be ineffective if timeouts are set overly aggressively.

One extreme scenario is worthy of further mention: network partitioning events. Modern data centers and networks often experience events in which a single machine, and all the processes on it, becomes transiently partitioned from a larger pool of machines that remain connected to one another. Such cases are treated as failures in Gbcast, but if the surviving, connected members include a sufficiently large number of processes, the majority portion of the system will simply reconfigure itself to exclude the disconnected member. It can reconnect and rejoin the group later, when the partition heals.

A more extreme kind of partitioning is sometimes seen in data centers: in this situation, a network switch might fail, causing a collection of machines (perhaps a whole rack or even an entire container) to become disconnected from the Internet and from the remainder of the data center. In such cases one could imagine a group in which all members begin to suspect all other members; Gbcast will not make progress in this case and the management infrastructure would need to relaunch the entire application. On the other hand, in most large data centers, the operating systems of the machines experiencing such a failure would also shut down, restarting only when connectivity is restored. Thus in practice, the restart of the system is unavoidable. This said, there are protocols, such as Paxos, that could ride out such an outage if the machines themselves were to remain operational and later regained adequate connectivity.

The Transis system explored extensions to the Gbcast protocol that permit multiple partitions to form, to make independent progress, and then to remerge. This topic, however, is beyond the scope of the present discussion.

Discussion: Dueling leaders

In the Paxos protocol, a situation can arise in which two or more leaders "duel" by proposing different commands for the same slot. This can also occur in Gbcast.
In the normal sequence of events, one leader takes over because the prior leader has failed, learns of any proposals the prior leader made during the inquiry phase, and then repeats those same proposals, extended with new ones. Thus no duel over the content of slots arises, because the same proposals are repeated in the same slots. The closest situation to a duel is seen if the old leader has become partitioned from the majority and the new leader, taking over, is unable to contact some set of members (but does obtain the required quorum during the INQUIRE phase). Here the new leader may be unaware of some proposals that the old leader made, or might still issue, if those reach only the members the new leader didn't contact.

The shunning mechanism resolves such duels. When the new leader obtained a quorum during the INQUIRE phase, it also blocked the old leader from ever again achieving a quorum for any new PROPOSE it might initiate: a majority of members are now shunning the old leader. Thus if any proposal is missed by the new leader, it necessarily is a proposal that didn't reach a quorum of members, and won't reach a quorum in the future. Moreover, members aware of such a proposal will be shunned by the new leader, since (when it gave up waiting for them to respond to its INQUIRE) it considers them to have failed. Any member learning of new proposals from the new leader will shun them as well.

Shunning of leaders in Gbcast occurs in the pre-determined order of leader ranks: a higher-ranking leader only shuns a lower-ranking leader when it tries to take over its place. The Paxos ballots mechanism serves precisely the same purpose, but differs in allowing participants to attempt to take over repeatedly, each time assuming a new ballot ("rank"). The result is that, on the one hand, Paxos leader demotion is reversible, and on the other, dueling leaders could theoretically continue forever.

Bi-simulation equivalence to Paxos

Although superficially quite different, upon close study Gbcast is seen to be surprisingly similar to Paxos. Indeed, Paxos can be "transformed" into Gbcast with the following (reversible) sequence of steps. For brevity we describe these steps informally and omit a detailed proof. Note that this transformation does not address durability. Gbcast treats durable state as a property of the application, not the protocol, whereas Paxos logs events to a set of durable command logs, and hence can still recover its state even after the whole service is shut down and restarted. The equivalent behavior with Gbcast involves having the application log all received messages, but that case will not be considered here.

Start with the basic Paxos protocol. Add a process incarnation number to distinguish a rejoining process from one that has been continuously a member of the view. Impose an age-based ordering on the members of the group, and designate the oldest member (breaking ties lexicographically) as the leader. Non-leaders issue requests through the leader. Both protocols permit batching of requests: Basic Paxos has a concurrency parameter, alpha: a leader can concurrently run a maximum of alpha instances of the protocol. Gbcast permits the leader to propose multiple events in a single protocol instance, which could be message deliveries or view events. Paxos does not normally require reliable, ordered communication. Modify the protocol to run over the reliable one-to-one channel abstraction (a one-to-many message would be sent by Paxos over a set of one-to-one channels).
We can now assume that any message sent will either be received and delivered in order, or that a timeout will occur at the sender side. The Paxos slot number will become the Gbcast sequence number. The Paxos ballot number is, in effect, transformed into the proposing leader-number used to discriminate between conflicting proposals during the inquire step. Define a category of view-modifying commands that operate by adding or removing processes from the group membership. Introduce a failure detection mechanism as used in Gbcast, asking the leader to remove any timed-out members. A member removed from the group that reestablishes connectivity to the group should rejoin with a new incarnation number. Report views by upcalls to the application.

Basic Paxos can propose a multicast to just a quorum of group members, hence a typical member may have gaps in its command list. This is why, in Paxos, a learner must read a quorum of members and merge their command lists. In our modified protocol, any multicast is proposed to all non-failed members, while failed members are dropped from the view. Thus, unlike Paxos, our modified protocol has the property that any single live member has the full committed event list. In effect, the protocol has a write quorum equal to the current membership view size, and a read quorum of 1. This can be convenient when building applications that maintain the actual state of a database or object and for which it is inconvenient to represent state as a series of updates in command lists that must be merged to learn the actual sequence of events.

The same quorum mechanisms that define Paxos, including the inquiry used when a new Paxos leader takes over, are now seen to correspond precisely to the steps of Gbcast. The ballot mechanism, generally viewed as the hallmark of Paxos protocols, reduces to a counter that tracks the order of succession of leaders. This simplification is fundamentally due to the guarantee that once a leader is suspected, it will be removed from the view and would need to rejoin before participating in the protocol.

It follows that Gbcast and Paxos can be transformed, each to the other, without changing assumptions and with identical correctness properties. Obviously, the protocols don't look very similar, but they have a deep connection. Indeed, one can make a stronger claim: any sequence of delivery events exhibited by Gbcast can also arise in some run of Paxos, and vice versa: any sequence of learned events from Paxos can also arise in some run of Gbcast. The type of proof outlined above is formally called a bi-simulation: one shows that any (input-sequence, output-behavior) pair that one protocol can exhibit is also possible with the other protocol. Notice that in carrying out a bisimulation, features that one protocol supports but the other lacks can be ignored if they are not considered to be part of the "behavior" being studied. For example, the Gbcast reporting of new views (events that Paxos lacks) is not treated as part of the output behavior here.
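One way to keep the transformation straight is a side-by-side summary of the correspondences named above. The mapping below is an informal reading aid written as a Python dictionary, not an artifact of either protocol's implementation.

# Informal summary of the Paxos-to-Gbcast correspondence described above.
PAXOS_TO_GBCAST = {
    "slot number":        "message sequence number (reset in each new view)",
    "ballot number":      "leader succession number (order of take-overs)",
    "reconfiguration":    "view-changing multicast delivered as a 'new view' event",
    "learner merge of a quorum of logs": "read quorum of 1 (every member holds the full sequence)",
    "alpha concurrent instances":        "single propose carrying a batch of events",
}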
Summary of differences between Paxos and Gbcast

Gbcast has no durable state: the protocol does not maintain a log of events on disk, and durability is treated as an application-specific property. In contrast, Paxos guarantees durability: after recovering from a complete shutdown of the system, a Paxos application will still be able to learn the full log of received messages.

In the propose phase, Gbcast must wait for responses from all participants (or for the maximal timeout, and then suspect the remaining ones), instead of making progress with the fastest quorum. In Gbcast, the cost of a failure suspicion is high and the protocol may cease to make progress if too many failures are suspected, forcing a management layer to restart the entire application group. Thus, in practice, Gbcast requires conservative timeout settings relative to Paxos. With Gbcast, if an error does occur (e.g. an operational process is suspected and shunned), that process must drop out (it can rejoin under a different name). With Paxos, if f>0, should a process be unable to participate in a protocol instance, it can continue to participate in subsequent protocol instances without error.

Operational members of a view will never have gaps in their command lists with Gbcast (every member has the complete state). Operational members can have gaps in their command lists when using Paxos (learners merge a quorum of lists in Paxos to "fill" these gaps).

With Paxos, to propose multiple commands we use alpha>1, but in this case commands can be committed in a different order from the order in which they were initiated (one case in which this problematic scenario is seen involves dueling leaders: leader A proposes commands a1 and a2, and leader B proposes commands b1 and b2; both then fail and leader C, taking over, ends up committing b2 and then a1, an outcome that might not be desired by the applications that initiated the requests). With Gbcast, the leader can initiate multiple commands by issuing a single propose that describes a series of actions. The batch will be committed all at once, hence the order of initiation will be respected.

With Gbcast, a command is delivered in the view in which it was initiated. Reconfigurable Paxos can commit a command in a slot associated with a membership view prior to the active membership view at the time when the commit occurs. Thus, in Paxos, if an application is in some way view sensitive, commands must carry a view identifier, so that recipients can determine whether or not the command is still executable.

Gbcast does not require that the protocol be halted when changing configurations: the rate of new proposals can be constant even across membership changes. For many implementations of reconfigurable Paxos, this would not be the case. With both Gbcast and Paxos, reconfiguration is only possible if a quorum of the prior view is accessible and can acknowledge the new view. However, in Paxos, the requirement also extends to learning the outcomes of commands proposed for slots associated with the old view. In practice, this can cause the Paxos reconfiguration computation to extend over a longer period than for Gbcast, in which any state is stored within the application, not a long-lived command list: Paxos cannot discard the state associated with an old view until the new view is active and all replicas have learned the old state.

Gbcast does not require a garbage collection protocol because, as each message or view is committed and reported to the application, it can be discarded. Paxos maintains state using a quorum scheme in the command logs at its acceptors, and requires a garbage collection protocol to free these command slots once the outcome is committed and all learners (replicas) have learned the outcome.

Liveness comparison

Both Paxos and Gbcast are subject to the FLP impossibility result.
Liveness comparison Both Paxos and Gbcast are subject to the FLP impossibility result. Thus neither protocol can be guaranteed live under all possible conditions. At best we can talk about the conditions under which liveness is guaranteed, expressed as predicates on the failure detection mechanism: if the condition for liveness holds, then the protocol will be live. The liveness conditions of Basic Paxos and Gbcast are similar, but not identical. In Gbcast, progress will never resume if a cycle of mutual suspicions arises, as noted above: once a quorum of mutually shunning processes forms, the shunning mechanism makes it impossible for any leader to obtain a quorum of promises. With an (unmodified) Paxos protocol, this problem will not arise: once the excessive level of mutual suspicion ends, progress resumes. Thus Paxos makes progress with any failure-detection mechanism satisfying the ◇W condition, even if periods arise during which more than a quorum of mutual suspicions occur. For example, if we start with a group containing {A,B,C} and cause an extended network partition, Paxos would resume when the network partition resolves, but Gbcast will shut down permanently, and some form of management infrastructure may need to restart the system. If it is necessary to preserve group state across the failure, such an infrastructure would identify the last member to fail and restart the group using some form of checkpoint stored by that last member. In Paxos deployments, it is common to require human operator intervention for reconfiguration. In such settings, Gbcast may be able to make progress during periods when Paxos cannot. Suppose that a group's membership slowly drops below a quorum of the original group size. Gbcast can continue to operate with even a single member. Paxos would cease to make progress during periods when fewer than a quorum of the processes in its view are active. Need for state transfer Systems such as Isis that implement Gbcast typically provide a state transfer mechanism: at the instant the new view showing some joining member is delivered, some existing member makes a checkpoint of its copy of the group state. This is then copied to the new member, which loads the checkpoint as the initial group state as of the instant it joined. (Various out-of-band copying schemes can be used to pre-load some of the state prior to the join, for cases where the state is too large to transfer at the last moment in this way.) State transfer is needed because in Gbcast, once a member is dropped from a group, it will no longer receive updates. Gbcast is typically used by applications that maintain their state in memory and apply updates one by one as received, hence once a gap arises, a replica is no longer useful. Notice that this is in contrast to Paxos. In that protocol, gaps can arise as a consequence of the basic quorum update scheme, which doesn't guarantee that every member will see every update and can run over unreliable message-passing layers that might never deliver some messages. The Paxos learner algorithm reads multiple histories and combines them to fill such gaps. Thus Paxos will normally ride out transient failures, continuing to operate without actually dropping the failed member from the group. The failed member misses updates, yet state transfer is not needed unless the group is being reconfigured.
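The state transfer pattern just described can be sketched as follows. The names (Replica, checkpoint, apply) are hypothetical and the sketch ignores concurrency: the point is only that the joiner starts from a snapshot taken at view delivery and then applies updates one by one, never needing to merge command lists.

```python
class Replica:
    """In-memory replica that applies updates one by one (Gbcast style)."""
    def __init__(self, state=None):
        self.state = dict(state or {})

    def apply(self, update):
        key, value = update
        self.state[key] = value

    def checkpoint(self):
        """Snapshot taken at the instant the new view is delivered."""
        return dict(self.state)

# An existing member has applied updates u1, u2 before the join.
member = Replica()
member.apply(("x", 1))
member.apply(("y", 2))

# The joining member initializes from the checkpoint, then applies only
# updates delivered in the new view; no command-list merge is ever needed.
joiner = Replica(state=member.checkpoint())
joiner.apply(("z", 3))
print(joiner.state)   # {'x': 1, 'y': 2, 'z': 3}
```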
Which dynamically reconfigurable state machine replication protocol came first? The Gbcast protocol was published early in a period when several state machine protocols capable of managing their own membership were introduced: Gbcast, Viewstamped Replication (Oki and Liskov), Basic Paxos (Lamport), the partial synchrony protocol of Dwork, Lynch and Stockmeyer, etc. Among these, Gbcast was the first to be published, in papers that appeared in 1985 and 1987; the others were published starting in 1988. One could thus argue that Gbcast was really the first Paxos protocol. Such a statement, however, treats "Paxos" as a fairly broad term covering a family of protocols that all implement state machine replication, all support dynamic reconfiguration of their membership, and have identical correctness properties but vary in their liveness conditions. Under this definition, Gbcast is a Paxos protocol. If equivalence is instead formalized using bisimulation, in which any run that one protocol can exhibit is also exhibited by the other, and in which the assumptions made and the conditions for progress are identical, the comparison becomes more complex. Under this definition, Gbcast is not a Paxos protocol: although each can exhibit the same runs as the other (viewed purely in terms of requests from the application and notifications to the application), they have similar, but not identical, liveness conditions. However, this sort of stringent definition poses a different problem: if one adopts it, some versions of Paxos are not Paxos protocols. For example, "Cheap Paxos" and "Vertical Paxos" are not bisimulation-equivalent to Basic Paxos. Thus the question has no answer unless one makes it more specific, and has a different answer depending upon the definition of equivalence one uses. See also Paxos (computer science) Virtual synchrony Atomic broadcast Consensus (computer science) Reliable multicast Safety and liveness properties References Distributed algorithms Fault-tolerant computer systems
Gbcast
Technology,Engineering
12,995
2,359,746
https://en.wikipedia.org/wiki/Electrorotation
Electrorotation is the circular movement of an electrically polarized particle. Similar to the slip of an electric motor, it arises from a phase lag between an applied rotating electric field and the polarization it induces in the particle. Because this lag reflects the particle's dielectric relaxation processes, electrorotation may be used to investigate those processes or, if they are known or can be accurately described by models, to determine particle properties. The method is popular in cellular biophysics, as it allows cellular properties such as the conductivity and permittivity of cellular compartments and their surrounding membranes to be measured. See also Dielectric relaxation Dielectrophoresis Membrane potential Biophysics Electric and magnetic fields in matter
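The phase-lag mechanism described above is commonly quantified through the Clausius–Mossotti factor of the particle–medium system. The standard expression below is added for context (it is not stated in the text above and assumes a homogeneous spherical particle); the particle radius cancels because torque and viscous drag both scale with its cube:

```latex
% Steady electrorotation rate (homogeneous sphere assumed):
%   eps_m, eta : permittivity and viscosity of the suspending medium
%   f_CM       : Clausius-Mossotti factor of particle (p) in medium (m)
\Omega(\omega) = -\,\frac{\varepsilon_m\,\operatorname{Im}\!\big[f_{\mathrm{CM}}(\omega)\big]}{2\eta}\,E^{2},
\qquad
f_{\mathrm{CM}}(\omega) = \frac{\varepsilon_p^{*}-\varepsilon_m^{*}}{\varepsilon_p^{*}+2\varepsilon_m^{*}},
\qquad
\varepsilon^{*} = \varepsilon - \mathrm{i}\,\frac{\sigma}{\omega}.
```

Fitting the measured rotation spectrum Ω(ω) to shelled variants of f_CM is what yields the membrane and compartment conductivities and permittivities mentioned above.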
Electrorotation
Physics,Chemistry,Materials_science,Engineering,Biology
124
24,162,194
https://en.wikipedia.org/wiki/Noncommutative%20measure%20and%20integration
Noncommutative measure and integration refers to the theory of weights, states, and traces on von Neumann algebras (Takesaki 1979, v. 2, p. 141). References I. E. Segal. A noncommutative extension of abstract integration. Ann. of Math. (2), 57:401–457, 1953. MR 14,991f. Operator algebras Noncommutative geometry
Noncommutative measure and integration
Mathematics
100
7,430,232
https://en.wikipedia.org/wiki/Project%20Highwater
Project Highwater was an experiment carried out as part of two of the test flights of NASA's Saturn I launch vehicle (flown with dummy "battleship" upper stages), each launched successfully on a sub-orbital trajectory from Cape Canaveral, Florida. The Highwater experiment sought to determine the effect of a large volume of water suddenly released into the ionosphere. The project answered questions about the diffusion of propellants in the event that a rocket was destroyed at high altitude. The first flight, SA-2, took place on April 25, 1962. After the flight test of the rocket was complete and first-stage shutdown had occurred, explosive charges on the dummy upper stages destroyed the rocket and released its ballast water into the upper atmosphere, where the cloud coasted on to its apex. The second flight, SA-3, launched on November 16, 1962, and carried the same payload. This time the ballast water was explosively released at the flight's peak altitude. For both experiments, the resulting ice clouds expanded to several miles in diameter, and lightning-like radio disturbances were recorded. See also High-altitude nuclear explosion – other high-altitude explosive tests References Further reading 1962 in spaceflight NASA programs Military projects of the United States Water and the environment Spacecraft launched by Saturn rockets Saturn I
Project Highwater
Engineering
261
1,508,709
https://en.wikipedia.org/wiki/Idrialin
Idrialin is a mineral wax which can be distilled from the mineral idrialite. According to G. Goldschmidt of the Chemical Society of London, it can be extracted with xylene, amyl alcohol or turpentine, and also obtained without decomposition by distillation in a current of hydrogen or of carbon dioxide. It is a white crystalline body that fuses only with difficulty, boiling above 440 °C (824 °F). Oxidation of its solution in glacial acetic acid with chromic acid yielded a red powdery solid and a fatty acid melting at 62 °C and exhibiting all the characteristics of a mixture of palmitic acid and stearic acid. References Waxes
Idrialin
Physics
151
16,022,841
https://en.wikipedia.org/wiki/Shaft%20passer
A shaft passer is a device that allows a spoked wheel to rotate despite having a shaft (such as the axle of another wheel) passing between its spokes. The device is usually mentioned as a joke between nerds, in the manner of a fool's errand; however, examples do exist. Around 100 CE, Heron described a horse statue whose neck was connected to its body with a shaft passer: a sword (acting as the "shaft") could slice through the neck, but the head would not detach. In 2023, Blonder created two- and three-dimensional shaft passers that allow a wire mesh cube to penetrate a mesh screen under its own weight. One of the earliest modern references to these devices was made by Richard Feynman, who was told by a colleague at Frankford Arsenal in Philadelphia that the cable-passing version of the device had been used during both world wars on German naval mine mooring cables, to prevent the mines from being caught by British cables swept along the sea bottom. The device was supposed to work using a spoked, rimless wheel that allows cables to pass through as it rotates. The ends of the spokes are widened, and the cable is held together by a short curved sleeve through which these spoke ends slide. External links , with diagram of the device. Tesseract cube, Blonder, 2023 References Practical jokes Wheels Mechanisms (engineering)
Shaft passer
Engineering
291
43,763,081
https://en.wikipedia.org/wiki/Logic%20for%20Programming%2C%20Artificial%20Intelligence%20and%20Reasoning
The International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR) is an academic conference aimed at discussing cutting-edge results in the fields of automated reasoning, computational logic, programming languages and their applications. It grew out of the Russian Conferences on Logic Programming of 1990 and 1991; the idea of organizing the conference was largely due to Robert Kowalski, who proposed creating the Russian Association for Logic Programming. The conference was renamed in 1992 to "Logic Programming and Automated Reasoning" (LPAR) to reflect its extended scope, due to considerable interest in automated reasoning in the former Soviet Union. After a break from 1995 to 1998, LPAR continued in 1999 under the name "Logic for Programming and Automated Reasoning", to indicate an extension of its logic part beyond logic programming. In 2001, the name changed to "Logic for Programming, Artificial Intelligence and Reasoning". The LPAR steering committee consists of Matthias Baaz, Chris Fermüller, Geoff Sutcliffe, and Andrei Voronkov (chair). Following its slogan "To boldly go where no reasonable conference has gone before", LPAR typically takes place in locations that are unusual or difficult to reach. Overview of conference events References External links — accounting for 1st to 15th conference (1990–1994, 1999–2008) 17th LPAR's home page (2010) 18th LPAR's home page (2012) 19th LPAR's home page (2013) LPAR page at DBLP Theoretical computer science conferences Logic conferences
Logic for Programming, Artificial Intelligence and Reasoning
Technology
302
70,444,808
https://en.wikipedia.org/wiki/Aigialus%20mangrovis
Aigialus mangrovis is a species of fungus in the genus Aigialus. It was first isolated from the mangrove Rhizophora mucronata in Maharashtra, India. References Fungi described in 1987 Fungus species Pleosporales
Aigialus mangrovis
Biology
61
68,756,030
https://en.wikipedia.org/wiki/List%20of%20fungi%20of%20South%20Africa%20%E2%80%93%20L
This is an alphabetical list of the fungal taxa as recorded from South Africa. Currently accepted names have been appended. La Order: Laboulbeniales Family: Laboulbeniaceae Genus: Laboulbenia Laboulbenia anomala Thaxt. Laboulbenia bilabiata Thaxt. Laboulbenia dryptae Thaxt. Genus: Laccaria Laccaria laccata Berk. & Br. Genus: Lacellina Lacellina graminicola Petch. Genus: Lachnea Lachnea capensis v.d.Byl. Lachnea capensis Lloyd Lachnea hemispherica Gill. Lachnea lusatiae Cooke Genus: Lachnocladium Lachnocladium cristatum Lloyd Lachnocladium furcellatum Lev. Lachnocladium semivestitum Berk. & Curt. Lachnocladium zenkeri P.Henn. Genus: Lactarium Lactarium deliciosus S.F.Gray Lactarium piperatus S.F.Gray Lactarium scrobiculatus Fr. Genus: Lagenula Lagenula fructicola Amaud. Genus: Lamprospora Lamprospora leiocarpa Seaver. Genus: Lanopila Lanopila capensis Lloyd Lanopila radloffiana Verw. Lanopila wahlbergii Fr. Genus: Laschia Laschia auriscalpium Mont. Laschia cucullata Bres. Laschia duthiei Lloyd Laschia friesiana P.Henn. Laschia rubella Sacc. Laschia tenerrima Kalchbr. Laschia thwaitesii Berk. & Br. Laschia sp. Genus: Lasiobolus Lasiobolus equinus Karst. Genus: Lasiosphaeria Lasiosphaeria capensis Kalchbr. & Cooke Lasiosphaeria hispida Fuck. Genus: Lasmeniella Lasmeniella globulifera Petrak & Syd. Lasmeniella pterocarpi Petrak. Genus: Laternea Laternea angolensis Welw. & Curr. Le Family: Lecanactidaceae Genus: Lecanactis (Lichens) Lecanactis bullata Zahlbr. Lecanactis develans Nyl. Lecanactis diversa Nyl. Lecanactis emersa Stizenb. Lecanactis ulcerata Zahlbr. Genus: Lecania (Lichens) Lecania arenaria Flagey Lecania cyrtella Th.Fr. Lecania fructuosa Zahlbr. Lecania punicea Müll.Arg. Genus: Lecanora (Lichens) Lecanora aequata Stizenb. Lecanora albella Ach. Lecanora albella f. angulosa Nyl. Lecanora albospersa Stizenb. Lecanora allophana Rohl. Lecanora allophana var. glabrata Steiner. Lecanora amphidoxa Stizenb. Lecanora angulosa Ach. Lecanora arenaria Nyl. Lecanora armstrongiae Stizenb. Lecanora asperella Nyl. Lecanora aspersa Stizenb. Lecanora atra Ach. Lecanora atra var. americana Fee. Lecanora atraeformis Vain. Lecanora atrorimata Nyl. Lecanora atrosulphurea Ach. Lecanora atrosulphurea f. leptococca Stizenb. Lecanora atrosulphurea f. livens Stizenb. Lecanora aurantiaca Flotow. Lecanora aurantiaca var. erythrella Nyl. Lecanora aurantiaca var. fulva Nyl. Lecanora aurantiaca var. placidium Stizenb. Lecanora aureola Stirt. Lecanora badia (Hoffm.) Ach. (1810) var. cinerascens Flot. (1849), accepted as Protoparmelia badia (Hoffm.) Hafellner (1984) Lecanora benguellensis Nyl. Lecanora bicincta Ram. Lecanora blanda Nyl. Lecanora bogotana Nyl. Lecanora breuteliana Massal. Lecanora bylii Vain. Lecanora bylii Zahlbr. Lecanora caesiopallens Vain. Lecanora caesiorubella Ach. Lecanora cancriformis Vain. Lecanora candidata Stizenb. Lecanora cameoflava Müll.Arg. Lecanora carpinea Vain. Lecanora cateileoides Vain. Lecanora cervina Ach. Lecanora chlarona Nyl. Lecanora chlarona f. geographica Nyl. Lecanora chlarona f. pinastri Cromb. Lecanora chlarona var. bogotana Vain. Lecanora chlarotera Nyl. Lecanora chondroplaca Zahlbr. Lecanora cinefacta Stizenb. Lecanora cinerea Rohl. Lecanora cinereocamea Stizenb. Lecanora cinnabarina Ach. Lecanora cinnabarina var. haematodes Stizenb. Lecanora cinnabarina var. pallidior Stizenb. Lecanora cinnabarina var. opaca Stizenb. Lecanora cinnabarina var. perminiata Nyl. Lecanora cinnabariza Nyl. Lecanora clavulus Stizenb. Lecanora coarctata Ach. Lecanora coarctata f. 
cotaria Ach. Lecanora coarctata f. fulgiana Zahlbr. Lecanora coarctata var. argilliseda Duf. Lecanora coarctata var. fossulans Stizenb. Lecanora coccinella Stizenb. Lecanora coilocarpa Nyl. Lecanora confluens Stizenb. Lecanora confragulosa Nyl. Lecanora conspersa Stizenb. Lecanora constans Nyl. Lecanora crassildbra Müll.Arg. Lecanora cruda Stizenb. Lecanora deminuta Stizenb. Lecanora deminutula Stizenb. Lecanora detecta Stizenb. Lecanora diffusilis Nyl. Lecanora dispersa Rohl. f. nana Vain. Lecanora dispersa f. testacea Vain. Lecanora domingensis Ach. Lecanora elaeophaea Nyl. Lecanora elapheia Stizenb. Lecanora elegantissima Nyl. Lecanora epichlora Vain. Lecanora erythrella Ach. Lecanora erythroleuca var. subcerina Nyl. Lecanora eudoxa Stizenb. Lecanora euelpis Stizenb. Lecanora exigua Rohl. Lecanora expallens Ach. Lecanora expallens var. lutescens Nyl. Lecanora fenzliana Stizenb. Lecanora ferruginea Link. Lecanora ferruginea f. erysibe Stizenb. Lecanora fibrosa Stizenb. Lecanora ficta Stizenb. Lecanora flava Stizenb. Lecanora flavocrea Nyl. Lecanora flavorubens Stizenb. Lecanora flavovirens Fee. Lecanora flexuosa Stizenb. Lecanora fructuosa Stizenb. Lecanora frustulosa Ach. Lecanora galactiniza Nyl. Lecanora gibbosa Nyl. var. subdepressa Nyl. Lecanora glaucolivescens Nyl. Lecanora glaucoma Ach. Lecanora granulosa Wedd. Lecanora helva Stizenb. Lecanora homaloplaca Nyl. Lecanora hufferiana Stizenb. Lecanora hypocrocina Nyl. Lecanora imponens Stizenb. Lecanora impressa Zahlbr. Lecanora labiosa Stizenb. Lecanora laciniosa Nyl. Lecanora lamprocheila Nyl. Lecanora leprosa Fee. Lecanora leptoplaca Zahlbr. Lecanora leucoxantha Müll.Arg. Lecanora leueoxantha Stizenb. Lecanora leucoxanthalla Stizenb. Lecanora lithagogo Nyl. Lecanora lugens Stizenb. Lecanora massula Stizenb. Lecanora microlepida Stizenb. Lecanora microps Stizenb. Lecanora murorum Ach. Lecanora murorum var. pusilla Wedd. Lecanora nidulans Stizenb. Lecanora nubila Stizenb. Lecanora obvirescens Stizenb. Lecanora ochracea Nyl. var. parvula Stizenb. Lecanora odoardi Stizenb. Lecanora oveina Ach. Lecanora orichalcea Stizenb. Lecanora ostracoderma Ach. Lecanora pallescens Rohl. Lecanora pallida Rabenh. Lecanora parella Ach. Lecanora perexigua Stiz. Lecanora phlogina Nyl. Lecanora placodina Zahlbr. Lecanora poliotera Nyl. Lecanora polytypa Vain. Lecanora porinoides Stizenb. Lecanora praemicans Nyl. Lecanora prosecha Ach. var. homaloplaca Vain. Lecanora psaromela Nyl. Lecanora punicea Ach. Lecanora punicea var. brevicula Stizenb. Lecanora punicea var. collata Stirt. Lecanora pyracea Nyl. f. picta Nyl. Lecanora pyracea f. pyrithroma Nyl. Lecanora pyracea var. picta Stizenb. Lecanora pyropoecila Nyl. Lecanora rehmannii Stizenb. Lecanora robiginans Stizenb. Lecanora roboris Nyl. Lecanora rupicola Zahlbr. Lecanora scorigena Nyl. Lecanora scoriophila Stizenb. Lecanora seductrir Stizenb. Lecanora smaragdula Nyl. Lecanora sophodes Nyl. Lecanora sophodes var. atroalbida Nyl. Lecanora sophodes var. roboris Duf. Lecanora sphinctrina Nyl. Lecanora subcarnea Ach. Lecanora subcarnosa Ach. Lecanora subdepressa Nyl. Lecanora subfulgescens Nyl. Lecanora subfusca Ach. Lecanora subfusca var. allophana Ach. Lecanora subfusca var. campestris Rabenh. Lecanora subfusca var. cinereocarnea Müll.Arg. Lecanora subfusca var. glabrata Sch. Lecanora subfusca var. subcrenulata Nyl. Lecanora subfusca var. subgranulata Nyl. Lecanora subgranulata Nyl. Lecanora subpunicea Stizenb. Lecanora subsoluta Nyl. Lecanora subunicolor Nyl. Lecanora sylvestris Stizenb. 
Lecanora teichophiloides Stizenb. Lecanora tersa Nyl. Lecanora thaeodes Stizenb. Lecanora thiocheila Stizenb. Lecanora varia Ach. Lecanora vascesia Stizenb. Lecanora vincentina Nyl. Lecanora vitellina Ach. Lecanora vulpina Nyl. Lecanora xanthophana Nyl. Lecanora zambesica Stizenb. Family: Lecanoraceae (Lichens) Genus: Lecidea (Lichens) Lecidea acervata Stizenb. Lecidea achristella Vain. Lecidea aemula Stizenb. Lecidea aeneola Vain. var. fuscoatrata Zahlbr. Lecidea aethalea Nyl. Lecidea aethaloessa Stizenb. Lecidea affine Merrill. Lecidea afra Stizenb. Lecidea africana Tuck. Lecidea albinea Stizenb. Lecidea albocoerulescens Arn. Lecidea albocoerulescens var. flavocoerulescens Schaer. Lecidea albula Nyl. Lecidea ambusta Stizenb. Lecidea anatalodia Krempelb. Lecidea angolensis Müll.Arg. Lecidea anomala Ach. Lecidea anteposita Nyl. Lecidea aporetica Stizenb. Lecidea armstrongiae Jones. Lecidea atroalha Ach. Lecidea atroalbella Nyl. Lecidea alrovirens Ach. Lecidea aurantiaca Ach. Lecidea aureola Tuck. Lecidea aurigera Fee. Lecidea breviuscula Nyl. Lecidea brugierae Vain. Lecidea bumamma Nyl. Lecidea buxea Stiz. Lecidea caesiopallida Nyl. Lecidea caledonica Zahlbr. Lecidea callaina Stizenb. Lecidea capensis Zahlbr. Lecidea capreolina Stizenb. Lecidea carneola Ach. Lecidea caruncula Stizenb. Lecidea caudata Nyl. Lecidea chalybeia Borr. Lecidea chlorophaeata Nyl. Lecidea ckloropoliza Nyl. Lecidea chlorotica Nyl. Lecidea cinnamomea Stizenb. Lecidea coccinella Hue. Lecidea coeruleata Stizenb. Lecidea confluens Hue. Lecidea contingens Nyl. Lecidea coroniformis Krempelh. Lecidea crassa Stizenb. Lecidea crenata Stizenb. Lecidea crenata var. coroniformis Zahlbr. Lecidea crenata var. speirea Stizenb. Lecidea crustulata Sprengl. Lecidea cyanocentra Nyl. Lecidea cyrtocheila Stizenb. Lecidea deceptoria Nyl. Lecidea decipiens Ach. Lecidea decrustulosa Vain. Lecidea disciformis Nyl. Lecidea disciformis var. sanguinea Stizenb. Lecidea discolor Stizenb. Lecidea dispersula Stizenb. Lecidea distrata Nyl. Lecidea domingensis Nyl. Lecidea domingensis var.inexplicata Nyl. Lecidea elaeochroma Ach. Lecidea elaeochroma f. flavicans Th.Fr. Lecidea elaeochroma f. geographica Zahlbr. Lecidea elaeochroma var. hyalina Zahlbr. Lecidea endoleuca Nyl. Lecidea endoleucella Stizenb. Lecidea enteroleuca Ach. Lecidea enteroleuca var. geographica Bagl. Lecidea elginensis Zahlbr. Lecidea epichromatica Zahlbr. Lecidea esuriens Zahlbr. Lecidea euelpis Hue. Lecidea exigua Chaub. Lecidea exiguella Vain. Lecidea finckei Zahlbr. Lecidea flavocrocea Nyl. Lecidea fucina Stizenb. Lecidea fumosa Ach. Lecidea fumosa var. mosigii Ach. Lecidea fuscoatra Ach. Lecidea fuscoatrata Nyl. Lecidea fuscorubella Rohl. Lecidea fuscorubescens Nyl. Lecidea fuscolutea Ach. Lecidea fuscotabulata Stizenb. Lecidea geina Stizenb. Lecidea geographica Rebent. Lecidea geographica f. intermedia Stizenb. Lecidea glebaria Stizenb. Lecidea glencairnensis Zahlbr. Lecidea glomerulosa Steud. Lecidea goniophila Floerke. Lecidea gouritzensis Vain. Lecidea graniferna Wain. Lecidea granulosula Nyl. Lecidea grisella Floerke f. mosigii Zahlbr. Lecidea griseofusciuscula Vain. Lecidea guamensis Vain. Lecidea halonia Ach. Lecidea hereroensis Zahlbr. Lecidea hereroensis f. genuina Zahlbr. Lecidea hereroensis f. depauperata Zahlbr. Lecidea howickensis Vain. Lecidea hysbergensis Vain. Lecidea icmadophila Ach. Lecidea imponens Hue. Lecidea impressa Krempelh. Lecidea inconsequens Nyl. Lecidea inconveniens Nyl. Lecidea incretata Stizenb. Lecidea incuriosa Nyl. Lecidea inquilina Stizenb. 
Lecidea inscripta Stizenb. Lecidea insculpta Flotow. Lecidea insculpta f. oxydata Flotow. Lecidea intermedia Nyl. Lecidea intermixta f. cyanocentra Nyl. Lecidea italica Wedd. Lecidea italica var. debanensis Stizenb. Lecidea italica var. recobarina Stizenb. Lecidea lactaria Stizenb. Lecidea lactea Floerke. Lecidea lactens Stizenb. Lecidea langbaanensis Vain. Lecidea laurocerasi Nyl. var. amylothelia Wain Lecidea lenticularis var. nigroclavata Stizenb. Lecidea leptobola Nyl. Lecidea leucina Stizenb. Lecidea leucostephana Stizenb. Lecidea leucoxantha Spreng. Lecidea lithagogo Wain. Lecidea lutata Stizenb. Lecidea lutea Tayl. Lecidea luteola Ach. Lecidea luteola var. chlorotica Ach. Lecidea luteola f. conspondens Nyl. Lecidea massula Hue. Lecidea medialis Tuck. Lecidea meiospora Nyl. Lecidea melampepla Tuck. Lecidea melanthina Stizenb. Lecidea millegrana Nyl. Lecidea minutula Nyl. Lecidea montaqnei Flotow. Lecidea mortualis Stizenb. Lecidea mossamedana Wain. Lecidea mutabilis Fee. Lecidea myriocarpa Rohl. Lecidea myriocarpa f. marcidula Nyl. Lecidea nanosperma Stizenb. Lecidea natalensis Nyl. Lecidea nesiotis Stizenb. Lecidea nigrella Stizenb. Lecidea nigropallida Nyl. Lecidea nitidula Fr. Lecidea norrlinii Lamy. Lecidea obumbrata Nvl. Lecidea ocellata Floerke. Lecidea ochriodea Stizenb. Lecidea ochroplaca Zahlbr. var. intermedia Zahlbr. Lecidea ochroplaca var. leprosa Zahlbr. Lecidea ochroplaca var. polita Zahlbr. Lecidea ochroxantha Nyl. f. aethiopica Stizenb. Lecidea oligocheila Zahlbr. Lecidea olivacea Massal. Lecidea olivacea var. ambigua Lettau. Lecidea olivacea Stizenb. Lecidea opacata Stizenb. Lecidea opalina Stizenb. Lecidea orbiculata Stizenb. Lecidea orichalcea Hue. Lecidea owaniana Müll.Arg. Lecidea pachnodes Stizenb. Lecidea pachycarpa Fr. Lecidea pallidonigra Ach. Lecidea palmeti Stizenb. Lecidea pantherina Ach. Lecidea parasema Ach. Lecidea parasema var. areolata Duf. Lecidea parasema var. areolata Merrill. Lecidea parasema var. atropurpurea Flotow. Lecidea parasema var. elaeochroma Ach. Lecidea parasema var. exigua Nyl. Lecidea paraspeirea Stizenb. Lecidea parmeliarum Sommerf. Lecidea parvifolia Pers. Lecidea parvifolia var. fibrillifera Nyl. Lecidea parvifoliella Nyl. Lecidea patellaria Stizenb. Lecidea peltasta Stirt. Lecidea peltoloma Müll.Arg. Lecidea peltulidea Stirt. Lecidea perforans Stizenb. Lecidea perigrapta Stizenb. Lecidea permodica Stizenb. Lecidea phalerata Stizenb. Lecidea placodina Nyl. Lecidea porphyrea Mey. Lecidea praelata Stizenb. Lecidea praemicans Hue. Lecidea procellarum Stizenb. Lecidea promontorii Zahlbr. Lecidea proposita Nyl. Lecidea punieea Hue. Lecidea quartzina Stizenb. Lecidea remota Vain. Lecidea rhynsdorpensis Zahlbr. Lecidea rhyparoleuca Stizenb. Lecidea rivulosa Ach. Lecidea rudis Stizenb. Lecidea rufata Stizenb. Lecidea russula Ach. Lecidea rusticorum Stizenb. Lecidea sabuletorum Ach. Lecidea santensis Tuck. Lecidea schinziana Stizenb. Lecidea speirea Ach. Lecidea spuria Schaer. Lecidea spuria var. insulans Stizenb. Lecidea squamifera Stizenb. Lecidea squamifera var. Bylii Zahlbr. Lecidea stellans Stizenb. Lecidea stellenboschiana Vain. Lecidea stellulata Tayl. Lecidea stellulata f. albosparsa Stizenb. Lecidea stellulata f. hybrida Stizenb. Lecidea stellulata f. murina Stizenb. Lecidea stenospora Nyl. var. acutata Stizenb. Lecidea stictella Stirt. Lecidea stupparia Stizenb. Lecidea styloumena Stirt. Lecidea subalbicans Nyl. Lecidea subalbula Nyl. Lecidea subattingens Merrill. Lecidea subceresina Zahlbr. Lecidea subdisciformis Leight. 
Lecidea subexigua Vain. Lecidea subexiguella Vain. Lecidea subfuscata Nyl. Lecidea subinquinans Nyl. Lecidea sublucida Stizenb. Lecidea subluteola Nyl. Lecidea subspadicea Stizenb. Lecidea subsquamifera Zahlbr. Lecidea subrussula Steiner. Lecidea substylosa Zahlbr. Lecidea subtristis Nyl. Lecidea sulfurosula Stizenb. Lecidea tenebrieosa Nyl. Lecidea terrena Nyl. Lecidea thaleriza Stirt. Lecidea theichroa Vain. Lecidea theiphoriodes Vain. Lecidea tragorum Zahlbr. Lecidea transvaalica Stizenb. Lecidea trichiliae Zahlbr. Lecidea trifaria Stizenb. Lecidea triphragmia Nyl. Lecidea tuberculosa Fee. Lecidea tuberculosa f. geotropa Stizenb. Lecidea valida Stizenb. Lecidea vasquesia Hue. Lecidea vemalis Ach. Lecidea vernalis S.F.Gray. Lecidea versicolor Nyl. Lecidea vesicularis Ach. Lecidea vestita Nyl. Lecidea viridans Lamy var. nigrella Steiner. Lecidea viridiatra Stizenb. Lecidea volvarioides Stizenb. Lecidea vorticosa Korb. Lecidea vulgata Zahlbr. Lecidea vulpina Tuck. Lecidea woodii Stizenb. Lecidea zeyheri Zahlbr. Family: Lecideaceae(lichens) Genus: Lecidella(lichens) Lecidella nigrella Massal. Genus: Lecideola Lecideola flavescens Massal. Genus: Lembosia Lembosia congesta Wint. Lembosia durbana v.d.Byl. Lembosia natalensis Doidge Lembosia phillipsii Doidge Lembosia piriensis Doidge Lembosia radiata Doidge Lembosia wageri Doidge Genus: Lembosina Lembosina Rawsoniae Doidge Genus: Lembosiopsis Lembosiopsis eucalyptina Petrak & Syd. Genus: Lentinus Lentinus albidus Berk. Lentinus capronatus Berk. Lentinus cirrosus Fr. Lentinus dactyliophorus Lev. Lentinus fastuosus Kalchbr. & MacOwan Lentinus flabelliformis Fr. Lentinus hyracinus Kalchbr. Lentinus lecomtei Fr. Lentinus lepideus Fr. Lentinus miserculus Kalchbr. Lentinus murrayi Kalchbr. & MacOwan. Lentinus natalensis v.d.Byl. Lentinus nigripes Fr. Lentinus phillipsii v.d.Byl. Lentinus sajor-caju Fr. Lentinus sajor-caju var. sparsifolius Pilat. Lentinus sajor-caju var. typicus Pilat. Lentinus strigosus Fr. Lentinus stupeus Klotzsch. Lentinus tigrinus Fr. Lentinus tuber-regium Fr. Lentinus ursinus Fr. Lentinus velutinus Fr. Lentinus villosus Klotzsch. Lentinus villosus var. zeyheri Pilat. Lentinus woodii Kalchbr. Lentinus zeyheri Berk. Genus: Lenzites Lenzites abietina Fr. Lenzites alborepanda Lloyd. Lenzites applanata Fr. Lenzites aspera Klotszch. Lenzites betulina (L.) Fr., (1838), accepted as Trametes betulina (L.) Pilát (1939) Lenzites deplanata Fr. Lenzites guineensis Fr. Lenzites junghuhnii Lev. Lenzites ochracea Lloyd Lenzites palisoti Fr. Lenzites repanda Fr. Lenzites trabea (Pers.) Fr. (1838), accepted as Gloeophyllum trabeum (Pers.) Murrill (1908) Lenzites tricolor Fr. Genus: Leotia Leotia elegantula Kalchbr. Genus: Lepiota Lepiota acutesquamosa Gill. Lepiota africana Kalchbr. Lepiota atricapilla Sacc. Lepiota cinereo-bubella Kalchbr. & MacOwan Lepiota cristata Quél. Lepiota cuculliformis Sacc. Lepiota excoriata Quél. Lepiota flava Beeli. Lepiota goossensiae Beeli. Lepiota gracilenta Quél. Lepiota hispida Gill. Lepiota ianthina Mass. Lepiota kunzei Sacc. Lepiota lutea (Bolton) Godfrin, (1897), accepted as Leucocoprinus birnbaumii (Corda) Singer, (1962) Lepiota magnannulata Sacc. Lepiota montagnei Sacc. Lepiota morgani Peck. Lepiota naucina Quél. (sic) could be Lepiota naucina var. cinerascens (Quél.) Konrad & Maubl. (194) accepted as Leucoagaricus cinerascens (Quél.) Bon & Boiffard, in Gams 1978, or Lepiota naucina (Fr.) P. Kumm. (1871), accepted as Leucoagaricus leucothites (Vittad.) Wasser (1977) Lepiota nympharum Kalchbr. 
Lepiota polysarca Sacc. Lepiota procera S.F.Grey. (sic) could be Lepiota procera (Scop.) Gray (1821) accepted as Macrolepiota procera (Scop.) Singer (1948) Lepiota pteropa Sacc. Lepiota purpurata Kalchbr. Lepiota sulfurella Sacc. Lepiota varians Sacc. Lepiota zeyheri Sacc. Lepiota zeyheri var. elegantula Sacc. Lepiota Lepiota zeyheri var. telosa Sacc. Lepiota zeyheri var. verrucellosa Sacc. Lepiota sp. Genus: Lepra Lepra citrina Schaer. Lepra lactea DC. Lepra sulphurea Ehrht. Genus: Lepraria (Lichens) Lepraria alba Ach. Lepraria candelaris Fr. Lepraria citrina Schaer. Lepraria crassa Nees. Lepraria flava Ach. Lepraria glaucella Ach. Lepraria xanthina Vain. Genus: Leproloma Leproloma lanuqinosum Nyl. Genus: Leptogiopsis Leptogiopsis brebissonii Müll.Arg. Leptogiopsis chloromeloides Müll.Arg. Genus: Leptogium (Lichens) Leptogium adpressum Nyl. Leptogium africanum Zahlbr. Leptogium azureum Mont. Leptogium brebissonii Mont. Leptogium bullatum Mont. Leptogium bullatum var. dactylinoideum Nyl. Leptogium burgesii Mont. Leptogium chloromeloides Nyl. Leptogium chloromelum Nyl. Leptogium chloromelum var. caespitosum Zahlbr. Leptogium chloromelum var. crassius Nyl. Leptogium daedaleum Nyl. Leptogium hildenbrandii Nyl. Leptogium kraussii Zahlbr. Leptogium marginellum S.F.Gray. Leptogium menziesii Mont. Leptogium menziesii f. fuliginosum Müll.Arg. Leptogium moluccanum Wain. Leptogium moluccanum var. simplicata Vain. Leptogium phyllocarpum Mont. Leptogium phyllocarpum var. coralloideum Hue. Leptogium phyllocarpum var. daedaleum Nyl. Leptogium phyllocarpum var. isidiosum Nyl. Leptogium phyllocarpum var. macrocarpum Nyl. Leptogium saturninum Nyl. Leptogium tremelloides S.F.Gray. Leptogium tremelloides var. azureum Nyl. Genus: Leptosphaerella Leptosphaerella helichrysi Cooke. Genus: Leptosphaeria Leptosphaeria anceps Sacc. Leptosphaeria caffra Thuem. Leptosphaeria cervispora Sacc. Leptosphaeria cinnamomi Shir. & Hara. Leptosphaeria coniothyrium Sacc. Leptosphaeria helichrysi Sacc. Leptosphaeria owaniae Sacc. Leptosphaeria protearum Syd. Leptosphaeria pterocelastri Doidge Leptosphaeria sacchari van Breda. Leptosphaeria salvinii Catt., (1879), accepted as Magnaporthe salvinii (Catt.) R.A. Krause & R.K. Webster, (1972) Leptosphaeria verwoerdiana du Pless. Family: Leptostromataceae Genus: Leptostromella Leptostromella acaciae Syd. Genus: Leptothyrium Leptothyrium evansii Syd. Leptothyrium pomi (Mont. & Fr.) Sacc. (1880),accepted as Schizothyrium pomi (Mont. & Fr.) Arx, (1959) Genus: Leptotrema (Lichens) Leptotrema endoxanthellum Zahlbr. Leptotrema microglaenoides Zahlbr. Genus: Leveillina Leveillina arduinae Theiss. & Syd. Genus: Leveillula Leveillula taurica Am. Genus: Levieuxia Levieuxia natalensis Fr. Li Genus: Libertella Libertella rhois Kalchbr. Genus: Lichen Lichen albus Roth. Lichen atrovirens Linn. Lichen barbatus Linn. Lichen capensis Linn.f. Lichen ceranoides Lam. Lichen crispus Linn. Lichen crocatus Linn. Lichen divaricatus Thunb. Lichen excavatus Thunb. Lichen fastigiatus Pers. Lichen fimbriatus Linn. Lichen flammeus Linn.f. Lichen flavicans Sw. Lichen fragilis Wither. Lichen fraxineus Linn. Lichen gilvus Ach. Lichen helopherus Ach. Lichen hepaticus (= Endocarpon thunbergii Lam.) Lichen hotentottus Ach. Lichen incarnatus Thunb. Lichen monocarpus Thunb. Lichen opegraphus Lam. Lichen pallidoniger Ach. Lichen peltatus Lam. Lichen peltatus (= Lecanora ostracoderma Lam.) Lichen perforatus Wulf. Lichen pertusus Thunb. Lichen physodes Linn. Lichen pulmonarius Linn. Lichen pyxidatus Linn. 
Lichen rangiferinus Linn. Lichen roccella Linn. Lichen rubiginosus Thunb. Lichen scriptus Linn. Lichen squarrosus Lam. Lichen tabularis Thunb. Lichen thunbergii Ach. Lichen tomentosus Sw. Lichen torulosus Thunb. Lichen usnea Linn. Lichen verruciger Gmel. Lichen verrucosus Linn.f. Lichen viridis Linn.f. Lichenes Imperfectae Genus: Limacinia Limacinia nuxiae Doidge Limacinia transvaalensis Doidge Genus: Linderiella Linderiella columnata G.H.Cunn. Genus: Linochora Linochora doidgei Syd. Genus: Linochorella Linochorella striiformis Syd. Genus: Lithographa (Lichens) Lithographa cerealis Stizenb. Lithographa fumida Nyl. Ll Genus: Lloydella Lloydella retiruga Bres. Lo Genus: Lobaria (Lichens) Lobaria interversans Wain. Lobaria isidiosa Wain. Lobaria meridionalis Wain. Lobaria patinifera Hue. Lobaria pulnumacea Shirley. Lobaria pulmonacea var. hypomela Stizenb. Lobaria prdmonacea var. pleurocarpa Ach. Lobaria pulmonaria Hoffm. Lobaria pulmonaria f. hypomela Cromb. Lobaria pulmonaria f. papillaris Hue. Lobaria pulmonaria f. pleurocarpa Cromb. Lobaria quercizans Michx. Lobaria retigera Trevis. Lobaria verrucosa Holfm. Genus: Lobarina Lobarina retigera Nyl. Lobarina retigera f. isidiosa Stizenb. Lobarina scrobiculata Nyl. Genus: Longia Longia natalensis Syd. Genus: Lopadium Lopadium fuscoluteum Mudd. Lopadium leucoxanthum Zahlbr. Lopadium mariae Zahlbr. Lopadium vulpinum Zahlbr. Lopadium woodii Zahlbr. Genus: Lopharia Lopharia javanica P. Henn. & E.Nym. Lopharia lirellosa Kalchbr. & MacOwan. Lopharia mirabilis Pat. Family: Lophiostomataceae Genus: Lophodermium Lophodermium pinastri Chev. Ly Genus: Lycogala (amoebozoa) Lycogala epidendrum Fr. Lycogala flavo-fuscum Rost. Lycogala rufo-cinnamomeum Mass. Family: Lycogalaceae Family: Lycoperdaceae Order: Lycoperdales Family: Lycoperdeae Genus: Lycoperdon Lycoperdon asperrimum Welw. & Curr. Lycoperdon asperum de Toni. Lycoperdon atroviolaceum Kalchbr. Lycoperdon bicolor Welw. & Curr. Lycoperdon bovista Linn. Lycoperdon caespitosum Welw. & Curr. Lycoperdon caffrorum Kalchbr. & Cooke Lycoperdon capense Cooke & Mass. Lycoperdon capense Fr. Lycoperdon carcinomale Linn.f. Lycoperdon cepaeforme Mass. Lycoperdon curreyi Mass. Lycoperdon curtisii Berk. Lycoperdon cyatihiforme Bose. Lycoperdon djurense P.Henn. Lycoperdon duthiei Bottomley. Lycoperdon endotephrum Pat. Lycoperdon eylesii Verw. Lycoperdon flavum Mass. Lycoperdon furfuraceum Schaeff. ex de Toni. Lycoperdon gardneri Berk. Lycoperdon gemmatum Batsch. Lycoperdon glabellum Peck. Lycoperdon gunnii Berk. Lycoperdon hyemale Vitt. Lycoperdon laetum Berk. Lycoperdon lilacinum Mass. Lycoperdon multiseptum Lloyd. Lycoperdon natalense Cooke & Mass. Lycoperdon natalense Fr. Lycoperdon oblongisporum Berk. & Curt. Lycoperdon perlatum Pers. Lycoperdon polymorphum Vitt. Lycoperdon pratense Pers. Lycoperdon pusillum Batsch ex Pers. Lycoperdon qudenii Bottomley. Lycoperdon radicatum Welw. & Curr. Lycoperdon retis Lloyd Lycoperdon rhodesianum Verw. Lycoperdon saccatum Vahl. Lycoperdon subincamatum Peck. Lycoperdon umbrinum Pers. Lycoperdon welwitschii de Toni. Genus: Lysurus Lysurus borealis P.Henn. Lysurus corallocephalus Welw. & Curr. Lysurus gardneri Berk. Lysurus woodii Lloyd. 
References Sources See also List of bacteria of South Africa List of Oomycetes of South Africa List of slime moulds of South Africa List of fungi of South Africa List of fungi of South Africa – A List of fungi of South Africa – B List of fungi of South Africa – C List of fungi of South Africa – D List of fungi of South Africa – E List of fungi of South Africa – F List of fungi of South Africa – G List of fungi of South Africa – H List of fungi of South Africa – I List of fungi of South Africa – J List of fungi of South Africa – K List of fungi of South Africa – L List of fungi of South Africa – M List of fungi of South Africa – N List of fungi of South Africa – O List of fungi of South Africa – P List of fungi of South Africa – Q List of fungi of South Africa – R List of fungi of South Africa – S List of fungi of South Africa – T List of fungi of South Africa – U List of fungi of South Africa – V List of fungi of South Africa – W List of fungi of South Africa – X List of fungi of South Africa – Y List of fungi of South Africa – Z Further reading Kinge TR, Goldman G, Jacobs A, Ndiritu GG, Gryzenhout M (2020) A first checklist of macrofungi for South Africa. MycoKeys 63: 1-48. https://doi.org/10.3897/mycokeys.63.36566 South Africa Fungi L
List of fungi of South Africa – L
Biology
9,537
19,147,084
https://en.wikipedia.org/wiki/Teprenone
Teprenone (or geranylgeranylacetone), sold under the brand name Selbex, is a medication used for the treatment of gastric ulcers. References Drugs acting on the gastrointestinal system and metabolism Terpenes and terpenoids Ketones Alkene derivatives
Teprenone
Chemistry
65
69,809,487
https://en.wikipedia.org/wiki/Pio%20Emanuelli
Pio Emanuelli (3 November 1889 – 2 July 1946) was an Italian astronomer, historian and popularizer of astronomy. He worked for many years at the Vatican Observatory and also taught at the University of Rome. Emanuelli was born in Rome, the son of a Vatican clerk. He was only ten when his father died. He took an interest in astronomy from a very young age, attending lectures by Elia Millosevich and writing articles in magazines and newspapers. Even as a young boy he corresponded with astronomers such as Giovanni Schiaparelli and Camille Flammarion. In 1910 he joined the Vatican Observatory under Father Johann Georg Hagen and worked on the Star Catalog for years, except for a break due to conscription during the war between 1915 and 1919. In 1922 he became a lecturer in astronomy at the University of Rome, and in 1938 he became a professor of the history of astronomy. He wrote numerous popular articles and books. He served as secretary of the Italian Astronomical Society between 1924 and 1928 and was a corresponding member of the Pontifical Academy of the Nuovi Lincei from 1925. In 1940 he was recalled to army duty at the meteorological station of Vigna di Valle (near Bracciano) with the rank of major, but he continued to teach. Emanuelli died unexpectedly from an illness, leaving behind a large number of incomplete and unpublished works which are now held at the Domus Galileana in Pisa. The asteroid 11145 Emanuelli was named in his memory in 1997. A street in Rome is also named after him. References 1889 births 1946 deaths 20th-century Italian astronomers Historians of astronomy Scientists from Rome
Pio Emanuelli
Astronomy
326
67,970
https://en.wikipedia.org/wiki/Lycophyte
The lycophytes, when broadly circumscribed, are a group of vascular plants that include the clubmosses. They are sometimes placed in a division Lycopodiophyta or Lycophyta or in a subdivision Lycopodiophytina. They are one of the oldest lineages of extant (living) vascular plants; the group contains extinct plants that have been dated from the Silurian (ca. 425 million years ago). Lycophytes were some of the dominating plant species of the Carboniferous period, and included the tree-like Lepidodendrales, some of which grew to great heights, although extant lycophytes are relatively small plants. The scientific names and the informal English names used for this group of plants are ambiguous. For example, "Lycopodiophyta" and the shorter "Lycophyta" as well as the informal "lycophyte" may be used to include the extinct zosterophylls or to exclude them. Description Lycophytes reproduce by spores and have alternation of generations in which (like other vascular plants) the sporophyte generation is dominant. Some lycophytes are homosporous while others are heterosporous. When broadly circumscribed, the lycophytes represent a line of evolution distinct from that leading to all other vascular plants, the euphyllophytes, such as ferns, gymnosperms and flowering plants. They are defined by two synapomorphies: lateral rather than terminal sporangia (often kidney-shaped or reniform), and exarch protosteles, in which the protoxylem is outside the metaxylem rather than vice versa. The extinct zosterophylls have at most only flap-like extensions of the stem ("enations") rather than leaves, whereas extant lycophyte species have microphylls, leaves that have only a single vascular trace (vein), rather than the much more complex megaphylls of other vascular plants. The extinct genus Asteroxylon represents a transition between these two groups: it has a vascular trace leaving the central protostele, but this extends only to the base of the enation. Zosterophylls and extant lycophytes are all relatively small plants, but some extinct species, such as the Lepidodendrales, were tree-like, and formed extensive forests that dominated the landscape and contributed to the formation of coal. Taxonomy Classification In the broadest circumscription of the lycophytes, the group includes the extinct zosterophylls as well as the extant (living) lycophytes and their closest extinct relatives. The names and ranks used for this group vary considerably. Some sources use the names "Lycopodiophyta" or the shorter "Lycophyta" to include zosterophylls as well as extant lycophytes and their closest extinct relatives, while others use these names to exclude zosterophylls. The name "Lycopodiophytina" has also been used in the inclusive sense. English names, such as "lycophyte", "lycopodiophyte" or "lycopod", are similarly ambiguous, and may refer to the broadly defined group or only to the extant lycophytes and their closest extinct relatives. The consensus classification produced by the Pteridophyte Phylogeny Group in 2016 (PPG I) places all extant (living) lycophytes in the class Lycopodiopsida. There are around 1,290 to 1,340 such species. Phylogeny A major cladistic study of land plants was published in 1997 by Kenrick and Crane. In 2004, Crane et al. published some simplified cladograms, based on a number of figures in Kenrick and Crane (1997).
Their cladogram for the lycophytes (presented with some branches collapsed into 'basal groups' to reduce the size of the diagram) treats the "zosterophylls" as a paraphyletic group, ranging from forms like Hicklingia, which had bare stems, to forms like Sawdonia and Nothia, whose stems are covered with unvascularized spines or enations. The genus Renalia illustrates the problems in classifying early land plants. It has characteristics both of the non-lycophyte rhyniophytes – terminal rather than lateral sporangia – and of the zosterophylls – kidney-shaped sporangia opening along the distal margin. A rather different view is presented in a 2013 analysis by Hao and Xue. Their preferred cladogram shows the zosterophylls and associated genera basal to both the lycopodiopsids and the euphyllophytes, so that there is no clade corresponding to the broadly defined group of lycophytes used by other authors. Some extinct orders of lycophytes fall into the same group as the extant orders. Different sources use varying numbers and names of the extinct orders, and phylograms have been proposed showing likely relationships between some of the Lycopodiopsida orders. Evolution of microphylls Within the broadly defined lycophyte group, species placed in the class Lycopodiopsida are distinguished from species placed in the Zosterophyllopsida by the possession of microphylls. Some zosterophylls, such as the Devonian Zosterophyllum myretonianum, had smooth stems (axes). Others, such as Sawdonia ornata, had flap-like extensions on the stems ("enations"), but without any vascular tissue. Asteroxylon, identified as an early lycopodiopsid, had vascular traces that extended to the base of the enations. Species in the genus Leclercqia had fully vascularized microphylls. These are considered to be stages in the evolution of microphylls. References External links Lycophytes Fossil Groves Paleo Plants (archived 15 January 2005) Cryptogams Plant divisions Wenlock first appearances Extant Silurian first appearances
Lycophyte
Biology
1,348
35,263,738
https://en.wikipedia.org/wiki/Vasant%20Dhar
Vasant Dhar is a professor at the Stern School of Business and the Center for Data Science at New York University, former editor-in-chief of the journal Big Data and the founder of SCT Capital, one of the first machine-learning-based hedge funds in New York City in the 1990s. His research focuses on building scalable decision-making systems from large sources of data using techniques and principles from the disciplines of artificial intelligence and machine learning. Early life and education Dhar is a graduate of The Lawrence School, Sanawar, which he considers one of the best presents his parents gave him without realizing it. He graduated from the Indian Institute of Technology Delhi in 1978 with a B.Tech. in chemical engineering. He subsequently attended the University of Pittsburgh, where he received an M.Phil. and a Ph.D. in 1984. After he earned his doctorate, he joined the faculty at New York University. He worked at Morgan Stanley between 1994 and 1997, where he created the Data Mining Group that focused on predicting financial markets and customer behavior. Career highlights Dhar is an artificial intelligence researcher and data scientist whose research addresses the question of when we should trust AI systems with decision making. The question is particularly relevant to present-day autonomous machine-learning-based systems that learn and adapt with ongoing data. His research has been motivated by building predictive models in a number of domains, most notably finance, as well as areas including healthcare, sports, education and business, and asks why we are willing to trust machines in some areas and not others. His view is that there is a discontinuity when we give complete decision-making control to a machine that learns from ongoing data. This discontinuity introduces risks, specifically those around the errors made by such systems, which directly affect our degree of trust in them. Dhar's research breaks down trust along two risk-based dimensions: predictability, that is, how frequently a system makes mistakes (the X-axis), and the cost of those mistakes (the Y-axis). The research demonstrates the existence of a "frontier" that expresses a trade-off between how often a system will be wrong and the consequences of such mistakes. Trust, and hence our willingness to cede control of decision making to the machine, increases with increasing predictability and lower error costs. In other words, we are willing to trust machines if they do not make too many mistakes and their costs are tolerable. As mistakes increase, we require that their consequences be less costly. The automation frontier provides a natural way to think about the future of work. With more and better data and algorithms, parts of existing processes become automated due to increased predictability, and cross the automation frontier into the "trust the machine" zone, whereas the parts with high error costs remain under human control. The model provides a way to think about the changing responsibilities of humans and machines as machines, equipped with more data and better algorithms, come to make some decisions better than humans. Dhar also uses the framework to frame policy issues around the risks of AI-based social media platforms and issues of privacy and ethical uses and governance of data. He writes regularly in the media on artificial intelligence, societal risks of AI platforms, data governance, privacy, ethics, and trust. He is a frequent speaker in academic as well as industrial forums.
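A minimal sketch of the two-dimensional trust model described above follows. The frontier function and all the numbers are illustrative assumptions, not taken from Dhar's publications; the point is only the decision rule: cede control to the machine when error frequency and error cost together fall inside the frontier.

```python
def trust_machine(error_rate, error_cost, budget=1.0):
    """Illustrative automation-frontier rule (assumed form, not Dhar's).

    error_rate: expected fraction of decisions that are wrong (X-axis)
    error_cost: cost of a single mistake, arbitrary units (Y-axis)
    budget:     maximum tolerable expected cost per decision

    The frontier is modelled as the curve error_rate * error_cost = budget:
    as mistakes become costlier, they must become rarer for trust to hold.
    """
    return error_rate * error_cost <= budget

# Hypothetical tasks, placed at different points in the (rate, cost) plane.
tasks = {
    "spam filtering":     (0.05, 1),       # frequent but cheap mistakes
    "loan approval":      (0.05, 50),      # same rate, costlier mistakes
    "autonomous driving": (0.0001, 5000),  # rare but potentially severe
}
for task, (rate, cost) in tasks.items():
    zone = "trust the machine" if trust_machine(rate, cost) else "keep a human"
    print(f"{task:20s} -> {zone}")
```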
Dhar teaches courses on systematic investing, prediction, data science and the foundations of FinTech. He has written over 100 research articles, funded by grants from industry and government agencies such as the National Science Foundation. See also Data science Predictive analytics References External links New York University Stern faculty page New York University Stern School of Business faculty Living people IIT Delhi alumni University of Pittsburgh alumni Year of birth missing (living people) Information systems researchers
Vasant Dhar
Technology
759
51,528,677
https://en.wikipedia.org/wiki/Meizu%20M3%20Max
The Meizu M3 Max is a smartphone designed and produced by the Chinese manufacturer Meizu, which runs on Flyme OS, Meizu's modified Android operating system. It is a phablet model of the M series. It was unveiled on September 5, 2016, in Beijing. History In August 2016, rumors about a new Meizu phablet appeared after the company released teasers indicating that the product name of a new device would contain "Max". At the same time, invitations containing a Nokia device had been sent out for a Meizu launch event on September 5, 2016. On August 26, 2016, several leaked photos of the upcoming phablet were released. Release As announced, the M3 Max was released in Beijing on September 5, 2016. Pre-orders for the M3 Max began after the launch event on the same day. Features Flyme The Meizu M3 Max was released with an updated version of Flyme OS, a modified operating system based on Android Marshmallow. It features an alternative, flat design and improved one-handed usability. Hardware and design The Meizu M3 Max features a MediaTek Helio P10 system-on-a-chip with an array of eight ARM Cortex-A53 CPU cores, an ARM Mali-T860 MP2 GPU and 3 GB of RAM. The M3 Max is available in four different colors (grey, silver, champagne gold and rose gold) and comes with 3 GB of RAM and 32 GB of internal storage. The Meizu M3 Max has a full-metal body in a slate form factor: it is rectangular with rounded corners and has only one central physical button at the front. Unlike most other Android smartphones, the M3 Max has neither capacitive buttons nor on-screen buttons. The functionality of these keys is implemented using a technology called mBack, which makes use of gestures with the physical button. The M3 Max further extends this button with a fingerprint sensor called mTouch. The M3 Max features a fully laminated 6-inch IPS multi-touch capacitive touchscreen display with an FHD resolution of 1080 by 1920 pixels, giving a pixel density of about 367 ppi. In addition to the touchscreen input and the front key, the device has volume/zoom control buttons and the power/lock button on the right side, a 3.5 mm TRS audio jack on the top and a microUSB (Micro-B type) port on the bottom for charging and connectivity. The Meizu M3 Max has two cameras. The rear camera has a resolution of 13 MP, a ƒ/2.2 aperture, a 5-element lens, phase-detection autofocus and an LED flash. The front camera has a resolution of 5 MP, a ƒ/2.0 aperture and a 4-element lens. See also Comparison of smartphones References External links Official product page Meizu Phablets Android (operating system) devices Mobile phones introduced in 2016 M3 Max Discontinued smartphones
Meizu M3 Max
Technology
633
2,686,634
https://en.wikipedia.org/wiki/Polyphenism
A polyphenic trait is a trait for which multiple, discrete phenotypes can arise from a single genotype as a result of differing environmental conditions. It is therefore a special case of phenotypic plasticity. There are several types of polyphenism in animals, from having sex determined by the environment to the castes of honey bees and other social insects. Some polyphenisms are seasonal, as in some butterflies, which have different wing patterns over the course of the year, and in some Arctic animals like the snowshoe hare and Arctic fox, which are white in winter. Other animals have predator-induced or resource polyphenisms, allowing them to exploit variations in their environment. Some nematode worms can develop either into adults or into resting dauer larvae according to resource availability. Definition A polyphenism is the occurrence of several phenotypes in a population, the differences between which are not the result of genetic differences. For example, crocodiles possess a temperature-dependent sex-determining polyphenism, in which sex is the trait influenced by variations in nest temperature. When polyphenic forms exist at the same time in the same panmictic (interbreeding) population, they can be compared to genetic polymorphism. With polyphenism, the switch between morphs is environmental, but with genetic polymorphism the determination of morph is genetic. These two cases have in common that more than one morph is part of the population at any one time. This is rather different from cases where one morph predictably follows another during, for instance, the course of a year. In essence the latter is normal ontogeny, in which young forms can and do have different forms, colours and habits from adults. The discrete nature of polyphenic traits differentiates them from traits like weight and height, which are also dependent on environmental conditions but vary continuously across a spectrum. When a polyphenism is present, an environmental cue causes the organism to develop along a separate pathway, resulting in distinct morphologies; thus, the response to the environmental cue is "all or nothing." The nature of these environmental conditions varies greatly, and includes seasonal cues like temperature and moisture, pheromonal cues, kairomonal cues (signals released from one species that can be recognized by another), and nutritional cues. Types Sex determination Sex-determining polyphenisms allow a species to benefit from sexual reproduction while permitting an unequal sex ratio. This can be beneficial to a species because a large female-to-male ratio maximizes reproductive capacity. However, temperature-dependent sex determination (as seen in crocodiles) limits the range in which a species can exist, and makes the species susceptible to endangerment by changes in weather patterns. Temperature-dependent sex determination has been proposed as an explanation for the extinction of the dinosaurs. Population-dependent and reversible sex determination, found in animals such as the blue wrasse, has less potential for failure. In the blue wrasse, only one male is found in a given territory: larvae within the territory develop into females, and adult males will not enter the same territory. If a male dies, one of the females in his territory becomes male, replacing him.
While this system ensures that there will always be a mating couple when two animals of the same species are present, it could potentially decrease genetic variance in a population, for example if the females remain in a single male's territory. Insect castes The caste system of insects enables eusociality, the division of labor between non-breeding and breeding individuals. A series of polyphenisms determines whether larvae develop into queens, workers, or, in some cases, soldiers. In the case of the ant P. morrisi, an embryo must develop under certain temperature and photoperiod conditions in order to become a reproductively active queen. This allows for control of the mating season but, like sex determination, limits the spread of the species into certain climates. In bees, royal jelly provided by worker bees causes a developing larva to become a queen. Royal jelly is only produced when the queen is aging or has died. This system is less subject to influence by environmental conditions, yet prevents unnecessary production of queens. Seasonal Polyphenic pigmentation is adaptive for insect species that undergo multiple mating seasons each year. Different pigmentation patterns provide appropriate camouflage throughout the seasons, as well as altering heat retention as temperatures change. Because insects cease growth and development after eclosion, their pigment pattern is invariable in adulthood: thus, a polyphenic pigment adaptation would be less valuable for species whose adult form survives longer than one year. Birds and mammals are capable of continued physiological changes in adulthood, and some display reversible seasonal polyphenisms, as in the Arctic fox, which becomes all white in winter as snow camouflage. Predator-induced Predator-induced polyphenisms allow a species to develop in a more reproductively successful way in a predator's absence, but to otherwise assume a more defensible morphology. However, this can fail if the predator evolves to stop producing the kairomone to which the prey responds. For example, the midge larvae (Chaoborus) that feed on Daphnia cucullata (a water flea) release a kairomone that Daphnia can detect. When the midge larvae are present, Daphnia grow large helmets that protect them from being eaten. However, when the predator is absent, Daphnia have smaller heads and are therefore more agile swimmers. Resource Organisms with resource polyphenisms show alternative phenotypes that allow differential use of food or other resources. One example is the western spadefoot toad, which maximizes its reproductive capacity in temporary desert ponds. While the water is at a safe level, the tadpoles develop slowly on a diet of other opportunistic pond inhabitants. However, when the water level is low and desiccation is imminent, the tadpoles develop a morphology (wide mouth, strong jaw) that permits them to cannibalize. Cannibalistic tadpoles receive better nutrition and thus metamorphose more quickly, avoiding death as the pond dries up. Among invertebrates, the nematode Pristionchus pacificus has one morph that primarily feeds on bacteria and a second morph that produces large teeth, enabling it to feed on other nematodes, including competitors for bacterial food. In this species, cues of starvation and crowding by other nematodes, as sensed by pheromones, trigger a hormonal signal that ultimately activates a developmental switch gene that specifies formation of the predatory morph.
Density-dependent Density-dependent polyphenism allows a species to show different phenotypes depending on the population density at which individuals are reared. In Lepidoptera, African armyworm larvae exhibit one of two appearances: the gregarious or the solitary phase. Under crowded or "gregarious" conditions, the larvae have black bodies and yellow stripes along their bodies. However, under solitary conditions, they have green bodies with a brown stripe down their backs. The different phenotypes emerge during the third instar and remain until the last instar. Dauer diapause in nematodes Under conditions of stress such as crowding and high temperature, L2 larvae of some free-living nematodes such as Caenorhabditis elegans can switch development to the so-called dauer larva state, instead of going through the normal molts into a reproductive adult. These dauer larvae are a stress-resistant, non-feeding, long-lived stage, enabling the animals to survive harsh conditions. On return to favorable conditions, the animal resumes reproductive development from the L3 stage onwards. Evolution A mechanism has been proposed for the evolutionary development of polyphenisms: A mutation results in a novel, heritable trait. The trait's frequency spreads in the population, creating a population on which selection can act. Pre-existing (background) genetic variation in other genes results in phenotypic differences in expression of the new trait. These phenotypic differences undergo selection; as genotypic differences narrow, the trait becomes either genetically fixed (non-responsive to environmental conditions) or polyphenic (responsive to environmental conditions). Evolution of novel polyphenisms through this mechanism has been demonstrated in the laboratory. Suzuki and Nijhout used an existing mutation (black) in a monophenic green hornworm (Manduca sexta) that causes a black phenotype. They found that if larvae from an existing population of black mutants were raised at 20 °C, then all the final-instar larvae were black; but if the larvae were instead raised at 28 °C, the final-instar larvae ranged in color from black to green. By selecting for larvae that were black if raised at 20 °C but green if raised at 28 °C, they produced a polyphenic strain after thirteen generations. This fits the model described above because a new mutation (black) was required to reveal pre-existing genetic variation and to permit selection. Furthermore, the production of a polyphenic strain was only possible because of background variation within the species: two alleles, one temperature-sensitive and one stable, were present for a single gene upstream of black (in the pigment production pathway) before selection occurred. The temperature-sensitive allele was not observable because at high temperatures it caused an increase in green pigment in hornworms that were already bright green. However, introduction of the black mutant caused the temperature-dependent changes in pigment production to become obvious. The researchers could then select for larvae with the temperature-sensitive allele, resulting in a polyphenism. See also Phenotypic switching References External links "Seasonal Polyphenism in Butterfly Wings", article in DevBio, a companion to Developmental Biology, 9th edition, by Scott F. Gilbert Evolutionarily significant biological phenomena Population ecology Polymorphism (biology) Genetics
Polyphenism
Biology
2,048
233,944
https://en.wikipedia.org/wiki/External%20combustion%20engine
An external combustion engine (EC engine) is a reciprocating heat engine where a working fluid, contained internally, is heated by combustion in an external source, the heat reaching it through the engine wall or via a heat exchanger. The fluid then, by expanding and acting on the mechanism of the engine, produces motion and usable work. The fluid is then dumped (open cycle), or cooled, compressed and reused (closed cycle). In these types of engine, the combustion is used primarily as a heat source, and the engine can work equally well with other types of heat source. Combustion "Combustion" refers to burning fuel with an oxidizer, to supply the heat. Engines of similar (or even identical) configuration and operation may use a supply of heat from other sources such as nuclear, solar, geothermal or exothermic reactions not involving combustion; they are then not strictly classed as external combustion engines, but as external thermal engines. Working fluid The working fluid can be of any composition and the system may be single-phase (liquid only or gas only) or dual-phase (liquid/gas). Single phase A gas is used in a Stirling engine. A single-phase liquid may sometimes be used. Dual phase Dual-phase external combustion engines use a phase transition to convert heat into usable work, for example the transition from liquid to a (generally much larger) volume of gas. This type of engine follows variants of the Rankine cycle. Steam engines are a common example of dual-phase engines. Another example is engines that use the organic Rankine cycle. See also Organic Rankine cycle Steam engines Stirling engines Trochilic engine Internal combustion engine (ICE) Nuclear power Solar thermal rocket (an externally heated rocket) Naphtha engine, a variant of the steam engine, using a petroleum liquid as both fuel and working fluid. References Engines
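To illustrate how a closed-cycle EC engine turns externally supplied heat into work through a contained working fluid, here is a back-of-the-envelope sketch of an ideal Stirling cycle (isothermal expansion and compression of a fixed charge of gas with perfect regeneration). All numbers are illustrative assumptions, not data for any real engine.

```python
import math

R = 8.314                      # gas constant [J/(mol K)]
n = 0.05                       # moles of working gas sealed in the engine (assumed)
T_hot, T_cold = 900.0, 300.0   # heater / cooler temperatures [K] (assumed)
V1, V2 = 1e-4, 3e-4            # minimum / maximum cylinder volume [m^3] (assumed)

# Ideal Stirling cycle with perfect regeneration: heat is absorbed at T_hot
# during isothermal expansion and rejected at T_cold during isothermal
# compression; each isothermal leg does W = n*R*T*ln(V2/V1) of work.
Q_in = n * R * T_hot * math.log(V2 / V1)
Q_out = n * R * T_cold * math.log(V2 / V1)
W_net = Q_in - Q_out

print(f"heat in per cycle : {Q_in:.1f} J")
print(f"net work per cycle: {W_net:.1f} J")
# With perfect regeneration the ideal Stirling efficiency equals the Carnot limit.
print(f"efficiency        : {W_net / Q_in:.2%} (Carnot limit {1 - T_cold / T_hot:.2%})")
```

The heat input Q_in could equally be supplied by solar, nuclear or geothermal heating, which is the point of the external-combustion versus external-thermal distinction made above.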
External combustion engine
Physics,Technology
368
43,065,863
https://en.wikipedia.org/wiki/Holmberg%20II
Holmberg II is an irregular dwarf galaxy in the constellation Ursa Major. Its apparent magnitude is 11.1 and it is 11 million light-years away from Earth. The galaxy is dominated by huge glowing gas bubbles, which are regions of star formation. Holmberg II also hosts an ultraluminous X-ray source. One hypothesis suggests that it is caused by an intermediate-mass black hole that is pulling in surrounding material. Holmberg II was discovered by Erik Bertil Holmberg. References External links Ursa Major Dwarf galaxies M81 Group
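The apparent magnitude and distance quoted above fix the galaxy's absolute magnitude through the standard distance modulus M = m − 5 log₁₀(d / 10 pc). A quick sketch (the unit conversions are standard; the rounding is mine):

```python
import math

m = 11.1              # apparent magnitude (from the article)
d_ly = 11e6           # distance in light-years (from the article)
d_pc = d_ly / 3.2616  # ~3.2616 light-years per parsec

# Distance modulus mu = 5*log10(d/10pc); absolute magnitude M = m - mu.
mu = 5 * math.log10(d_pc / 10)
M = m - mu
print(f"distance ~ {d_pc / 1e6:.2f} Mpc")
print(f"distance modulus ~ {mu:.2f} mag")
print(f"absolute magnitude M ~ {M:.1f}")  # ~ -16.5, a typical dwarf-galaxy luminosity
```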
Holmberg II
Astronomy
116
47,329,491
https://en.wikipedia.org/wiki/One-to-many%20%28data%20model%29
In systems analysis, a one-to-many relationship is a type of cardinality that refers to the relationship between two entities (see also entity–relationship model). For example, take a car and an owner of the car. The car can only be owned by one owner at a time or not owned at all, while an owner could own zero, one, or multiple cars: one owner could have many cars, hence one-to-many. In a relational database, a one-to-many relationship exists when one record in a table is related to many records in another table. A one-to-many relationship is not a property of the data, but rather of the relationship itself. One-to-many often refers to a primary key to foreign key relationship between two tables, where a record in the first table can relate to multiple records in the second table. The foreign key sits on the "many" side of the relationship: each of its rows references the primary key of a single row in the first table. Declaring it as a foreign key constraint keeps data from being duplicated and keeps relationships within the database reliable as more information is added. Many-to-many relationships cannot be represented directly in relational databases and must be decomposed into one-to-many relationships, typically via a junction table. Both one-to-many and one-to-one relationships occur in relational databases, but schemas are normally built mostly from one-to-many relationships. The converse of a one-to-many relationship is a many-to-one relationship. Entity relationship diagram (ERD) notations One notation as described in entity–relationship modeling is Chen notation, or formally Chen ERD notation, created originally by Peter Chen in 1976, where a one-to-many relationship is notated as 1:N, with N representing the cardinality, which can be 0 or higher. A many-to-one relationship is sometimes notated as N:1. See also One-to-one (data model) Many-to-many (data model) References Data modeling
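To make the foreign-key mechanics described above concrete, here is a minimal sketch of the owner–car example using Python's built-in sqlite3 module. The schema, table names and column names are illustrative assumptions, not taken from any reference.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FK constraints only when enabled

# The "one" side: each owner row is identified by a primary key.
conn.execute("CREATE TABLE owner (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

# The "many" side: each car carries a foreign key referencing exactly one owner;
# owner_id is NULLable, so a car may also be unowned, as in the example above.
conn.execute("""
    CREATE TABLE car (
        id INTEGER PRIMARY KEY,
        plate TEXT NOT NULL,
        owner_id INTEGER REFERENCES owner(id)
    )
""")

conn.execute("INSERT INTO owner (id, name) VALUES (1, 'Alice')")
conn.executemany(
    "INSERT INTO car (plate, owner_id) VALUES (?, ?)",
    [("ABC-123", 1), ("XYZ-789", 1), ("NOP-000", None)],  # one owner, many cars
)

# Joining follows the key relationship: many car rows map onto one owner row.
for row in conn.execute(
    "SELECT owner.name, car.plate FROM car JOIN owner ON car.owner_id = owner.id"
):
    print(row)  # ('Alice', 'ABC-123'), ('Alice', 'XYZ-789')

# The constraint rejects a car pointing at a nonexistent owner.
try:
    conn.execute("INSERT INTO car (plate, owner_id) VALUES ('BAD-001', 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

A many-to-many case (cars co-owned by several people, say) would add a junction table carrying two such foreign keys, i.e., two one-to-many relationships.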
One-to-many (data model)
Engineering
436
63,502,168
https://en.wikipedia.org/wiki/Zoombombing
Zoombombing or Zoom raiding is the unwanted, disruptive intrusion, generally by Internet trolls, into a video-conference call. In a typical Zoombombing incident, a teleconferencing session is hijacked by the insertion of material that is lewd, obscene, or offensive in nature, typically resulting in the shutdown of the session or the removal of the troll. The term is especially associated with, and derived from, the name of the Zoom videoconferencing software program; however, it has also been used to refer to the phenomenon on other video conferencing platforms. The term became popularized in 2020, when the COVID-19 pandemic forced many people to stay at home and videoconferencing came to be used on a large scale by businesses, schools, and social groups. Zoombombing has caused significant issues, in particular for schools, companies, and organizations worldwide. Such incidents have resulted in increased scrutiny of Zoom as well as restrictions on usage of the platform by educational, corporate, and governmental institutions globally. In response, Zoom, citing the sudden influx of new users due to the COVID-19 pandemic, took measures to increase the security of its teleconferencing application. Incidents of Zoombombing have prompted law enforcement officers in various countries to investigate such cases and file criminal charges against those responsible. Etymology The term Zoombombing is a neologism derived from the teleconferencing application Zoom and influenced by the word photobombing. The term appeared in mid-March 2020 on technology and news websites. Zoombombing has also been used in reference to similar incidents on other teleconferencing platforms, such as WebEx or Skype. Methods The increased use of Zoom during the COVID-19 pandemic as an alternative to face-to-face meetings resulted in widespread exposure to hackers and Internet trolls, who exploit and work around the application's security features. In various forums such as Discord and Reddit, efforts have been coordinated to disrupt Zoom sessions, while certain Twitter accounts advertise meeting IDs and passwords or meeting links (allowing users to instantly join a Zoom meeting instead of entering the credentials required to access a meeting) for sessions that are vulnerable to being joined without authorization. At educational institutions, some students were "actively asking strangers to Zoombomb or 'Zoom raid' their virtual classrooms to spice up their isolated lessons" and facilitating the raids by sharing passwords with the raiders. CNET pointed out that simple Google searches for URLs that include "Zoom.us" could bring up conferences that are not password protected, and that links within public pages can allow anyone to join. Hackers and trolls also look for easy targets such as unprotected or underprotected "check-in" meetings in which organizations meet with their employees or clients remotely. While a Zoom session is in progress, unfamiliar users show up and hijack the session by saying or showing things that are lewd, obscene, or racist in nature. The compromised Zoom session is then typically shut down by the host. Many of those successful in disrupting sessions have posted video footage of those incidents to social media and video-sharing platforms such as TikTok and YouTube. While it is believed that Zoombombing attacks are mainly orchestrated by external hackers and trolls, many are also orchestrated internally from within the respective organization or entity.
Some view Zoombombing as a continuation of cyberbullying by teenagers, particularly after schools were shut down due to the pandemic. Responses Zoombombing incidents frequently made the local news because of how disruptive they were. The trolling has caused a number of problems for schools and educators, with unwanted participants posting lewd content to interrupt learning sessions. Some schools had to suspend using video conferencing altogether. The University of Southern California called Zoombombing a type of trolling and apologized for "vile" events that interrupted "lectures and learning." Zoombombing has prompted colleges and universities to publish guides and resources to educate their students and staff and to bring awareness to the phenomenon. Zoombombing has left online lectures vulnerable to the intrusion of people looking to inflict harm. These crimes have brought attention not only to the lack of security on videoconferencing platforms, but also to the lack of it at the universities themselves. According to an article from The Guardian, the University of Warwick, in the midst of a rape-chat scandal, received criticism for its weak cybersecurity. Zoombombing affected twelve-step programs such as Alcoholics Anonymous and Narcotics Anonymous and other substance abuse and addiction recovery programs that were forced to switch to online meetings. Concerns arise from causing undue stress to an already vulnerable population, and from video recording, which can break anonymity. Some bombers reference the drug of choice of recovery members, such as alcohol, in an attempt to emotionally trigger the participants of the meeting. The problem reached such prominence that the United States Federal Bureau of Investigation (FBI) warned of video-teleconferencing and online classroom hijacking, which it called "Zoom-bombing." The FBI advised users of teleconferencing software to keep meetings private, to require passwords or other forms of access control such as "waiting rooms" to limit access only to specific people, and to limit screen-sharing access to the meeting host only. Given the number of incidents of Zoombombing, New York's attorney general initiated an inquiry into Zoom's data privacy and security policies. U.S. Senator Sherrod Brown (D-OH) asked the Federal Trade Commission to investigate the matter, accusing Zoom of engaging in deceptive practices regarding user privacy and security. Amid concerns about Zoombombing, various organizations banned the use of Zoom. In April 2020, Google banned the use of Zoom on its corporate computers, directing employees to instead use its video chat app Google Duo. The use of Zoom was also banned by SpaceX, Smart Communications, NASA, and the Australian Defence Force. The Taiwanese and Canadian governments banned Zoom for all government use. The New York City Department of Education prohibited all its teachers from using the platform with students, and the Clark County School District in Nevada disabled access to Zoom for its staff. Singapore's Ministry of Education briefly banned all teachers within the country from using Zoom before lifting the ban three days later, after adding extra security features. Some Zoombombers have shared their side of the story, claiming they are not trying to cause harm. Some claim it is a form of protest in response to the extensive amount of work assigned by teachers. Not all incidents are malicious: some intruders have shared pop culture material, such as memes and TikTok videos, to bring some relief and fun during the pandemic.
Zoom CEO Eric Yuan made a public apology, saying that the teleconferencing company had not anticipated the sudden influx of new consumer users and stating that "this is a mistake and lesson learned." In response to the concerns, Zoom published a guide on its blog on how to avoid these types of incidents. On April 7, 2020, Zoom implemented user experience and security updates to the application. Such updates include a more visible "Security" icon for users to see and use, suppression of meeting ID numbers, and a change in the default settings to require passwords and waiting rooms for sessions. On April 8, 2020, Zoom announced that it had formed a council of chief information security officers from other companies to share ideas on best practices, and that it had hired Alex Stamos, former chief security officer of Facebook, as an adviser. Zoom released its 5.0 version in April 2020 with security features that include AES 256-bit GCM encryption, passwords by default, and a feature to report suspicious users to its Trust and Safety Team for possible misuse. In May 2020, Zoom announced it had temporarily disabled its Giphy integration (frequently used as a tactic in Zoombombing) until security concerns could be properly and fully addressed. On July 1, 2020, Zoom stated it had released 100 new safety features over the previous 90 days, including end-to-end encryption for all users, turning on meeting passwords by default, giving users the ability to choose which data centers calls are routed from, consulting with security experts, forming a CISO council, an improved bug bounty program, and working with third parties to help test security. Criminal use National authorities worldwide warned of possible charges against people engaging in Zoombombing. On April 8, 2020, a teen in Madison, Connecticut, was arrested for computer crime, conspiracy, and disturbing the peace following a Zoombombing incident involving online classes at Daniel Hand High School; police also identified another teen involved in the incident. In San Francisco, a man was arrested after being traced to pornographic videos that were streamed on Zoom. As of May 2020, the FBI had received 195 reports of Zoombombing incidents involving child abuse, while the United Kingdom's National Crime Agency had reported more than 120 such cases. Notable incidents St. Paulus Lutheran Church in San Francisco filed a class-action lawsuit against Zoom after one of its Bible study classes was "Zoombombed" on May 6, 2020. The church alleged that Zoom "did nothing" when it tried to reach out to the company. In November 2020, a Dutch journalist for RTL Nieuws managed to gain access to a secret Zoom meeting of European Union defence ministers. The EU's foreign affairs representative Josep Borrell told him that it was a criminal offense and that he should sign off before the police arrived. The Zoombomb was revealed to have been the result of the Dutch defence minister Ank Bijleveld posting a picture of herself that showed the login details and part of the PIN. In 2022, an online event hosted by the Italian Senate's Movimento 5 Stelle and broadcast live to the Senato della Repubblica was interrupted by roughly a minute of a 3D-animated Final Fantasy VII pornographic parody, displaying the character Tifa Lockhart in the middle of sexual intercourse. Overlapping the content's original audio was a man speaking English with a thick Italian accent stating, "I used to be a sex offender, but now I am a kindergarten teacher."
Brian Adams, a man from Paintsville, Kentucky, faced multiple federal charges after he interrupted an elementary school's video conference class during the COVID-19 pandemic with a digital racist threat. He allegedly crashed a class Zoom conference on October 14, 2020, and targeted the Laureate Academy Charter School, whose student population is about 67% Black, because of its racial demographics. In 2020, livestreamer Muudea Sedik, better known as twomad, gained popularity for his Zoom bombings. Sedik would request Zoom meeting links or passwords from his followers on social media, and would broadcast the subsequent invasions live. Sedik's antics made him a popular subject for various internet memes, particularly among Generation Z. See also Photobombing Email bomb Text message bomb Google bombing Griefing Trolling References Criticisms of software and websites Hacking in the 2020s Internet memes introduced in 2020 Internet trolling Online obscenity controversies Videotelephony 2020 neologisms 2020s neologisms Zoom (software)
Zoombombing
Technology
2,323
31,855,224
https://en.wikipedia.org/wiki/Rodion%20Kuzmin
Rodion Osievich Kuzmin (9 November 1891, Riabye village in the Haradok district – 24 March 1949, Leningrad) was a Soviet mathematician, known for his works in number theory and analysis. His name is sometimes transliterated as Kusmin. He was an Invited Speaker of the ICM in 1928 in Bologna. Selected results In 1928, Kuzmin solved the following problem due to Gauss (see Gauss–Kuzmin distribution): if x is a random number chosen uniformly in (0, 1), and

x = 1/(k₁ + 1/(k₂ + ⋯))

is its continued fraction expansion, find a bound for

Δₙ = sup_z |P(xₙ < z) − log₂(1 + z)|,

where

xₙ = 1/(kₙ₊₁ + 1/(kₙ₊₂ + ⋯)).

Gauss showed that Δₙ tends to zero as n goes to infinity; however, he was unable to give an explicit bound. Kuzmin showed that

Δₙ ≤ C e^(−α√n),

where C, α > 0 are numerical constants. In 1929, the bound was improved to Δₙ ≤ C·0.7ⁿ by Paul Lévy. In 1930, Kuzmin proved that numbers of the form a^b, where a is an algebraic number other than 0 or 1 and b is a real quadratic irrational, are transcendental. In particular, this result implies that the Gelfond–Schneider constant 2^√2 is transcendental. See Gelfond–Schneider theorem for later developments. He is also known for the Kusmin–Landau inequality: if f is continuously differentiable with monotonic derivative satisfying ‖f′(x)‖ ≥ λ > 0 on a finite interval I, where ‖·‖ denotes the distance to the nearest integer, then

|∑_{n∈I} e^(2πi f(n))| ≪ λ⁻¹.

Notes External links (The chronology there is apparently wrong, since J. V. Uspensky lived in USA from 1929.) 1891 births 1949 deaths People from Gorodoksky Uyezd Soviet mathematicians Number theorists Mathematical analysts Academic staff of Perm State University
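The Gauss–Kuzmin statement above is easy to probe numerically: sampling x uniformly, applying the Gauss map n times, and comparing the empirical distribution of the tail xₙ with log₂(1 + z). A sketch, with function names and parameters chosen purely for illustration:

```python
import random
import math

def gauss_map_tail(x, n):
    """Apply the Gauss map x -> 1/x - floor(1/x) n times; the result is the
    continued-fraction tail x_n = 1/(k_{n+1} + 1/(k_{n+2} + ...))."""
    for _ in range(n):
        if x == 0.0:  # a rational x terminates early; treat its tail as 0
            return 0.0
        x = 1.0 / x
        x -= math.floor(x)
    return x

random.seed(0)
n, samples, z = 5, 200_000, 0.5
tails = (gauss_map_tail(random.random(), n) for _ in range(samples))
empirical = sum(t < z for t in tails) / samples
print(f"empirical P(x_{n} < {z}) = {empirical:.4f}")
print(f"Gauss-Kuzmin log2(1+z)  = {math.log2(1 + z):.4f}")  # ~0.585
```

Even for n as small as 5 the empirical frequency sits close to the limiting value, consistent with the exponential-type bounds of Kuzmin and Lévy.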
Rodion Kuzmin
Mathematics
336
42,287,747
https://en.wikipedia.org/wiki/Acoustic%20tweezers
Acoustic tweezers (also known as acoustical tweezers) are a set of tools that use sound waves to manipulate the position and movement of very small objects. Strictly speaking, only a single-beam based configuration can be called acoustical tweezers. However, the broad concept of acoustical tweezers involves two configurations of beams: single beam and standing waves. The technology works by controlling the position of acoustic pressure nodes that draw objects to specific locations of a standing acoustic field. The target object must be considerably smaller than the wavelength of sound used, and the technology is typically used to manipulate microscopic particles. Acoustic waves have been proven safe for biological objects, making them ideal for biomedical applications. Recently, applications for acoustic tweezers have been found in manipulating sub-millimetre objects, such as flow cytometry, cell separation, cell trapping, single-cell manipulation, and nanomaterial manipulation. The use of one-dimensional standing waves to manipulate small particles was first reported in the 1982 research article "Ultrasonic Inspection of Fiber Suspensions". Method In a standing acoustic field, objects experience an acoustic-radiation force that moves them to specific regions of the field. Depending on an object's properties, such as density and compressibility, it can be induced to move to either acoustic pressure nodes (minimum pressure regions) or pressure antinodes (maximum pressure regions). As a result, by controlling the position of these nodes, the precise movement of objects using sound waves is feasible. Acoustic tweezers do not require expensive equipment or complex experimental setups. Fundamental theory Particles in an acoustic field can be moved by forces originating from the interaction among the acoustic waves, fluid, and particles. These forces (including acoustic radiation force, secondary field force between particles, and Stokes drag force) create the phenomena of acoustophoresis, which is the foundation of the acoustic tweezers technology. Acoustic radiation force When a particle is suspended in the field of a sound wave, an acoustic radiation force arising from the scattering of the acoustic waves is exerted on the particle. This was first modeled and analyzed for incompressible particles in an ideal fluid by Louis King in 1934. Yosioka and Kawasima calculated the acoustic radiation force on compressible particles in a plane wave field in 1955. Gorkov summarized the previous work and proposed equations to determine the average force acting on a particle in an arbitrary acoustical field when its size is much smaller than the wavelength of the sound. Recently, Bruus revisited the problem and gave a detailed derivation for the acoustic radiation force. As shown in Figure 1, the acoustic radiation force on a small particle results from a non-uniform flux of momentum in the near-field region around the particle, which is caused by the incoming acoustic waves and the scattering on the surface of the particle when acoustic waves propagate through it. For a compressible spherical particle with a diameter much smaller than the wavelength of acoustic waves in an ideal fluid, the acoustic radiation force can be calculated as F = −∇U, where U is a scalar quantity, also called the acoustic potential energy.
The acoustic potential energy is expressed as:

U = V_p [ f₁ ⟨p²⟩ / (2 ρ_f c_f²) − f₂ 3 ρ_f ⟨v²⟩ / 4 ],

where V_p is the particle volume, p is the acoustic pressure, v is the velocity of the acoustic particles, ρ_f is the fluid mass density, c_f is the speed of sound of the fluid, and ⟨ ⟩ denotes the time-average term. The coefficients f₁ and f₂ can be calculated by

f₁ = 1 − ρ_f c_f² / (ρ_p c_p²) and f₂ = 2 (ρ_p − ρ_f) / (2 ρ_p + ρ_f),

where ρ_p is the mass density of the particle and c_p is the speed of sound of the particle. Acoustic radiation force in standing waves The standing waves can form a stable acoustic potential energy field, so they are able to create a stable acoustic radiation force distribution, which is desirable for many acoustic tweezers applications. For one-dimensional planar standing waves, the acoustic fields are given by:

ξ(x, t) = −(p_a / (ρ_f c_f ω)) sin(kx) cos(ωt),
v(x, t) = (p_a / (ρ_f c_f)) sin(kx) sin(ωt),
p(x, t) = p_a cos(kx) cos(ωt),

where ξ is the displacement of the acoustic particle, p_a is the acoustic pressure amplitude, ω is the angular velocity, and k is the wave number. With these fields, the time-average terms can be obtained. These are:

⟨p²⟩ = (p_a² / 2) cos²(kx), ⟨v²⟩ = (p_a² / (2 ρ_f² c_f²)) sin²(kx).

Thus, the acoustic potential energy is:

U = V_p (p_a² / (4 ρ_f c_f²)) [ f₁ cos²(kx) − (3 f₂ / 2) sin²(kx) ].

Then, the acoustic radiation force is found by differentiation:

F = −∂U/∂x = 4π Φ k a³ E_ac sin(2kx), E_ac = p_a² / (4 ρ_f c_f²), Φ = f₁/3 + f₂/2,

where a is the particle radius, E_ac is the acoustic energy density, and Φ is the acoustophoretic contrast factor. The sin(2kx) term shows that the radiation force period is one-half of the pressure period. Also, the contrast factor can be positive or negative depending on the properties of particles and fluid. For a positive value of Φ, the radiation force points from the pressure antinodes to the pressure nodes, as shown in Figure 2, and the particles will be pushed to the pressure nodes. Secondary acoustic forces When multiple particles in a suspension are exposed to a standing wave field, they will not only experience acoustic radiation force, but also secondary acoustic forces caused by waves scattered by other particles. The inter-particle forces are sometimes called Bjerknes forces. A simplified equation for the inter-particle force between identical particles is:

F(θ, d) = 4π a⁶ [ (ρ_p − ρ_f)² (3 cos²θ − 1) v²(x) / (6 ρ_f d⁴) − ω² ρ_f (κ_p − κ_f)² p²(x) / (9 d²) ],

where a is the radius of the particle, d is the distance between the particles, θ is the angle between the central line of the particles and the direction of propagation of the incident acoustic wave, and κ_p and κ_f are the compressibilities of the particle and fluid, respectively. The sign of the force represents its direction: a negative sign for an attractive force, and a positive sign for a repulsive force. The left term of the equation depends on the acoustic particle velocity amplitude v(x) and the right term depends on the acoustic pressure amplitude p(x). The velocity-dependent term is repulsive when particles are aligned with wave propagation (θ = 0°), and negative when perpendicular to wave propagation (θ = 90°). The pressure-dependent term is unaffected by the particle orientation and is always attractive. In the case of a positive contrast factor, the velocity-dependent term diminishes as particles are driven to the velocity node (pressure antinode), as in the case of air bubbles and lipid vesicles. In a similar way, the pressure-dependent term diminishes as particles are driven towards the pressure node (velocity antinode), as are most solid particles in aqueous solutions. In addition to the scattering-related secondary acoustic forces, the flow field resulting from the interactions of the various acoustic streaming fields, generated by the acoustic boundary layer of each particle (sometimes called microstreaming), can induce additional viscous shear forces on each of the particles' surfaces, which then results in an additional contribution to the secondary acoustic forces in its fully viscous formulations.
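As a numerical illustration of the standing-wave expressions above, the following sketch evaluates the contrast factor and the radiation force profile for a polystyrene-like particle in water. The material parameters are rough, assumed textbook values, used purely for illustration:

```python
import math

# Fluid (water) and particle (polystyrene-like) parameters - assumed values.
rho_f, c_f = 1000.0, 1480.0   # fluid density [kg/m^3], sound speed [m/s]
rho_p, c_p = 1050.0, 2350.0   # particle density [kg/m^3], sound speed [m/s]
a = 5e-6                      # particle radius [m]
p_a = 1e5                     # pressure amplitude [Pa]
f = 2e6                       # drive frequency [Hz]

k = 2 * math.pi * f / c_f                        # wave number
f1 = 1 - (rho_f * c_f**2) / (rho_p * c_p**2)     # compressibility coefficient
f2 = 2 * (rho_p - rho_f) / (2 * rho_p + rho_f)   # density coefficient
phi = f1 / 3 + f2 / 2                            # acoustophoretic contrast factor
E_ac = p_a**2 / (4 * rho_f * c_f**2)             # acoustic energy density

def radiation_force(x):
    """Primary radiation force on the particle at position x in the 1D standing wave."""
    return 4 * math.pi * phi * k * a**3 * E_ac * math.sin(2 * k * x)

print(f"contrast factor = {phi:.3f} (positive -> pushed to pressure nodes)")
wavelength = c_f / f
for frac in (0.125, 0.25, 0.375):
    x = frac * wavelength
    print(f"x = {frac:5.3f} lambda : F = {radiation_force(x):+.3e} N")
```

With these values the contrast factor is positive and the force is on the piconewton scale; it vanishes at the pressure node x = λ/4 (where the particle is collected) and is largest halfway between node and antinode.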
The viscous effects on the secondary acoustic force can become significant when compared to the perfect fluid formulation exemplified above, and even dominant in certain limit cases, yielding both quantitatively and qualitatively different results than what is predicted by inviscid theory. The relevance of the viscous contributions varies greatly depending on the specific case being investigated, and thus care needs to be taken in selecting an appropriate secondary acoustic force model for the given scenario. The influence of the secondary forces is usually very weak, and only has an effect when the distance between particles is very small. It becomes important in aggregation and sedimentation applications, where particles are initially gathered in nodes by the acoustic radiation force. As inter-particle distances become smaller, the secondary forces assist in further aggregation until the clusters become heavy enough for sedimentation to begin. Acoustic streaming Acoustic streaming is a steady flow generated by the nonlinear component of the oscillations in an acoustic field. Depending on the mechanism, acoustic streaming can be categorized into two general types, Eckart streaming and Rayleigh streaming. Eckart streaming is driven by a time-average momentum flux created when high-amplitude acoustic waves propagate and attenuate in a fluid. Rayleigh streaming, also called "boundary driven streaming", is forced by Reynolds stresses in the viscous boundary layer. Both driving mechanisms come from a time-average nonlinear effect. A perturbation approach is used to analyze the phenomenon of nonlinear acoustic streaming. The governing equations for this problem are the mass conservation and Navier–Stokes equations:

∂ρ/∂t + ∇·(ρv) = 0,
ρ [ ∂v/∂t + (v·∇)v ] = −∇p + μ∇²v + βμ∇(∇·v),

where ρ is the density of the fluid, v is the velocity of the fluid particle, p is the pressure, μ is the dynamic viscosity of the fluid, and β is the viscosity ratio. The perturbation series can be written as

ρ = ρ₀ + ρ₁ + ρ₂, p = p₀ + p₁ + p₂, v = v₁ + v₂,

which are diminishing series with the higher-order terms much smaller than the lower-order ones. The liquid is quiescent and homogeneous at its zero-order state. Substituting the perturbation series into the mass conservation and Navier–Stokes equation and using the relation p₁ = c₀²ρ₁, the first-order equations can be obtained by collecting the first-order terms,

∂ρ₁/∂t + ρ₀∇·v₁ = 0,
ρ₀ ∂v₁/∂t = −∇p₁ + μ∇²v₁ + βμ∇(∇·v₁).

Similarly, the second-order equations can be found as well,

∂ρ₂/∂t + ρ₀∇·v₂ = −∇·(ρ₁v₁),
ρ₀ ∂v₂/∂t + ρ₁ ∂v₁/∂t + ρ₀ (v₁·∇)v₁ = −∇p₂ + μ∇²v₂ + βμ∇(∇·v₂).

For the first-order equations, taking the time derivative of the Navier–Stokes equation and inserting the mass conservation, a combined equation can be found:

∂²p₁/∂t² = c₀² [ 1 + (1 + β)(μ/(ρ₀c₀²)) ∂/∂t ] ∇²p₁.

This is an acoustic wave equation with viscous attenuation. Physically, p₁ and v₁ can be interpreted as the acoustic pressure and the velocity of the acoustic particle. The second-order equations can be considered as governing equations used to describe the motion of a fluid with mass source −∇·⟨ρ₁v₁⟩ and force source ⟨ρ₁ ∂v₁/∂t⟩ + ρ₀⟨(v₁·∇)v₁⟩. Generally, the acoustic streaming is a steady mean flow, where the time scale of the acoustic vibration is much smaller than the response time scale of the flow. The time-average term is normally used to present the acoustic streaming. By applying the time average ⟨ ⟩, under which ⟨∂ρ₂/∂t⟩ = 0, the time-average second-order equations can be obtained:

ρ₀∇·⟨v₂⟩ = −∇·⟨ρ₁v₁⟩,
μ∇²⟨v₂⟩ + βμ∇(∇·⟨v₂⟩) − ∇⟨p₂⟩ = ⟨ρ₁ ∂v₁/∂t⟩ + ρ₀⟨(v₁·∇)v₁⟩.

It is important to note that the time-averaging of pure first-order terms leads to their cancellation, since they are by definition harmonic. This means that they are pure sine waves, and thus have a mean of 0, which leads to the cancellation of any term that contains them. Second-order terms are, however, not harmonic, and do not get cancelled out by time-averaging.
This is most important for understanding acoustic streaming: first-order terms, related to simple oscillatory motion, have much larger magnitudes than second-order terms, and thus are dominant on the oscillation time scale. Those first-order terms, however, being pure sines, repeat after each oscillation cycle in a quasi-steady state, yielding no net fluid flow. Second-order terms, instead, are not harmonic, and thus can have a cumulative effect which, despite being smaller, can add up over many oscillation cycles, leading to the development of the net steady-state flow we identify as acoustic streaming. In determining the acoustic streaming, the second-order equations are thus most important. Since the Navier–Stokes equations can only be analytically solved for simple cases, numerical methods are typically used, with the finite element method (FEM) the most common technique. It can be employed to simulate the acoustic streaming phenomena. Figure 3 is one example of acoustic streaming around a solid circular pillar, which is calculated by FEM. As mentioned, acoustic streaming is driven by mass and force sources originating from the acoustic attenuation. However, these are not the only driving forces for acoustic streaming. The boundary vibration may also contribute, especially to "boundary driven streaming". For these cases, the boundary condition should also be processed by the perturbation approach and be imposed on the equations at the two orders accordingly. Particle motion The motion of a suspended particle whose gravity is balanced by the buoyancy force in an acoustic field is determined by two forces: the acoustic radiation force and the Stokes drag force. By applying Newton's law, the motion can be described as

m_p (du_p/dt) = F_rad + F_drag,
F_drag = 6πμa (u_f − u_p),

where u_f is the fluid velocity and u_p is the velocity of the particle. For applications in a static flow, the fluid velocity comes from the acoustic streaming. The magnitude of the acoustic streaming depends on the power and frequency of the input and the properties of the fluid media. For typical acoustic-based microdevices, the operating frequency spans a wide range, and the vibration amplitude is in a range of 0.1 nm to 1 μm. Assuming the fluid used is water, the estimated magnitude of acoustic streaming is in the range of 1 μm/s to 1 mm/s. Thus, the acoustic streaming should be smaller than the main flow for most continuous-flow applications. The drag force is mainly induced by the main flow in those applications. Applications Cell separation Cells with different densities and compression strengths can theoretically be separated with acoustic force. It has been suggested that acoustic tweezers could be used to separate lipid particles from red blood cells. This is a problem during cardiac surgery supported by a heart-lung machine, for which current technologies are insufficient. According to the proposal, acoustic force applied to blood plasma passing through a channel will cause red blood cells to gather in the pressure node in the center and the lipid particles to gather in antinodes at the sides (see Figure 4). At the end of the channel, the separated cells and particles exit through separate outlets. The acoustic method might also be used to separate particles of different sizes. According to the equation of the primary acoustic radiation force, larger particles experience larger forces than smaller particles. Shi et al.
reported using interdigital transducers (IDTs) to generate a standing surface acoustic wave (SSAW) field with pressure nodes in the middle of a microfluidic channel, separating microparticles with different diameters. When a mixture of particles with different sizes is introduced from the edge of the channel, larger particles will migrate toward the middle more quickly and be collected at the center outlet. Smaller particles will not be able to migrate to the center outlet before they are collected from the side outlets. This experimental setup has also been used to separate blood components, bacteria, and hydrogel particles. 3D cell focusing Fluorescence-activated cell sorters (FACS) can sort cells by focusing a fluid stream containing the cells, detecting fluorescence from individual cells, and separating the cells of interest from other cells. They have high throughput but are expensive to purchase and maintain, and are bulky with a complex configuration. They also affect cell physiology with high shear pressure, impact forces and electromagnetic forces, which may result in cellular and genetic damage. Acoustic forces are not dangerous to cells, and there has been progress in integrating acoustic tweezers with optical/electrical modules for simultaneous cell analysis and sorting, in a smaller and less expensive machine. Acoustic tweezers have been developed to achieve 3D focusing of cells/particles in microfluidics. A pair of interdigital transducers (IDTs) is deposited on a piezoelectric substrate, and a microfluidic channel is bonded with the substrate and positioned between the two IDTs. Microparticle solutions are infused into the microfluidic channel by a pressure-driven flow. Once an RF signal is applied to both IDTs, two series of surface acoustic waves (SAW) propagate in opposite directions toward the particle suspension solution inside the microchannel. The constructive interference of the two SAWs results in the formation of an SSAW. Leakage waves in the longitudinal mode are generated inside the channel, causing pressure fluctuations that act laterally on the particles. As a result, the suspended particles inside the channel will be forced toward either the pressure nodes or antinodes, depending on the density and compressibility of the particles and the medium. When the channel width covers only one pressure node (or antinode), the particles will be focused in that node. In addition to focusing in a horizontal direction, cells/particles can also be focused in the vertical direction. Once the SSAW is on, the randomly distributed particles are focused into a single-file stream in the vertical direction. By integrating a standing surface acoustic wave (SSAW)-based microdevice capable of 3D particle/cell focusing with a laser-induced fluorescence (LIF) detection system, acoustic tweezers have been developed into a microflow cytometer for high-throughput single-cell analysis. The tunability offered by chirped interdigital transducers renders it capable of precisely sorting cells into a number (e.g., five) of outlet channels in a single step. This is a major advantage over most existing sorting methods, which typically only sort cells into two outlet channels. Noninvasive cell trapping and patterning In one demonstrated device, a glass reflector with etched fluidic channels is clamped to the PCB holding the transducer. Cells infused into the chip are trapped in the ultrasonic standing wave formed in the channel.
The acoustic forces focus the cells into clusters in the center of the channel. Since the trapping occurs close to the transducer surface, the actual trapping sites are given by the near-field pressure distribution. Cells will be trapped in clusters around the local pressure minima, creating different patterns depending on the number of cells trapped. Manipulation of single cells, particles, or organisms Manipulating single cells is important to many biological studies, such as in controlling the cellular microenvironment and isolating specific cells of interest. Acoustic tweezers have been demonstrated to manipulate individual cells with micrometer-level resolution. Cells generally have a diameter of 10–20 μm. To meet the resolution requirements of manipulating single cells, short-wavelength acoustic waves should be employed. In this case, a surface acoustic wave (SAW) is preferred to a bulk acoustic wave (BAW), because it allows using shorter-wavelength acoustic waves (normally less than 200 μm). Ding et al. reported an SSAW microdevice that is able to manipulate single cells along prescribed paths. Figure 6 records a demonstration that the movement of single cells can be finely controlled with acoustic tweezers. The working principle of the device lies in the controlled movement of pressure nodes in an SSAW field. Ding et al. employed chirped interdigital transducers (IDTs) that are able to generate SSAWs with adjustable positions of pressure nodes by changing the input frequency. They also showed that the millimeter-sized organism C. elegans can be manipulated in the same manner. They also examined cell metabolism and proliferation after acoustic treatment, and found no significant differences compared to the control group, indicating the non-invasive nature of acoustic-based manipulation. In addition to using chirped IDTs, phase-shift-based single particle/cell manipulation has also been reported. Manipulation of single biomolecules Sitters et al. have shown that acoustics can be used to manipulate single biomolecules such as DNA and proteins. This method, which the inventors call acoustic force spectroscopy, allows measuring the force response of single molecules. This is achieved by attaching small microspheres to the molecules at one side and attaching them to a surface at the other. By pushing the microspheres away from the surface with a standing acoustic wave, the molecules are effectively stretched out. Manipulation of organic nano-materials Polymer-dispersed liquid crystal (PDLC) displays can be switched from opaque to transparent using acoustic tweezers. A SAW-driven PDLC light shutter has been demonstrated by integrating a cured PDLC film and a pair of interdigital transducers (IDTs) onto a piezoelectric substrate. Manipulation of inorganic nano-materials Acoustic tweezers provide a simple approach for tuneable nanowire patterning. In this approach, SSAWs are generated by interdigital transducers, which induce a periodic alternating current (AC) electric field on the piezoelectric substrate and consequently pattern metallic nanowires in suspension. The patterns can be deposited onto the substrate after the liquid evaporates. By controlling the distribution of the SSAW field, metallic nanowires are assembled into different patterns including parallel and perpendicular arrays.
The spacing of the nanowire arrays can be tuned by controlling the frequency of the surface acoustic waves. Selective manipulation While most acoustic tweezers manipulate a large number of objects collectively, a complementary function is to manipulate a single particle within a cluster without moving adjacent objects. To achieve this goal, the acoustic trap must be localized spatially. A first approach consists of using highly focused acoustic beams. Since many particles of interest are attracted to the nodes of an acoustic field and thus expelled from the focal point, trapping this type of particle requires specific wave structures that combine strong focusing with a minimum of the pressure amplitude at the focal point, surrounded by a ring of high intensity that creates the trap. These specific conditions are met by Bessel beams of topological order larger than zero, also called "acoustical vortices". With this kind of wave structure, the 2D and 3D selective manipulation of particles has been demonstrated with an array of transducers driven by programmable electronics. Alternatively, another approach to localizing the acoustic energy relies on the use of nanosecond-scale pulsed fields to generate localized acoustic standing waves. High-frequency tweezers and holographic interdigital transducers (IDTs) The individual selective manipulation of micro-objects requires synthesizing complex acoustic fields such as acoustic vortices (see the previous section) at sufficiently high frequency to reach the necessary spatial resolution (typically, the wavelength must be comparable to the size of the manipulated object for the trap to be selective). Many holographic methods have been developed to synthesize complex wavefields, including transducer arrays, 3D-printed holograms, metamaterials and diffraction gratings. Nevertheless, all these methods are limited to relatively low frequencies, with insufficient resolution to address micrometric particles, cells or microorganisms individually. On the other hand, interdigital transducers (IDTs) are a well-established technique for synthesizing acoustic wavefields up to GHz frequencies. See also Acoustic levitation Acoustic contrast factor Holographic direct sound printing References External links Fast acoustic tweezers—YouTube video illustrating how acoustic tweezers work Acoustics
Acoustic tweezers
Physics
4,468
24,510,097
https://en.wikipedia.org/wiki/SSR-180%2C711
SSR180711 is a drug that acts as a potent and selective partial agonist for the α7 subtype of neural nicotinic acetylcholine receptors. In animal studies, it shows nootropic effects and may be useful in the treatment of schizophrenia. References Nicotinic agonists Stimulants Nootropics Organobromides Carbamates Nitrogen heterocycles
SSR-180,711
Chemistry
86
12,953,417
https://en.wikipedia.org/wiki/IEEE%20Transactions%20on%20Communications
IEEE Transactions on Communications is a monthly peer-reviewed scientific journal published by the IEEE Communications Society that focuses on all aspects of telecommunication technology, including telephone, telegraphy, facsimile, and point-to-point television by electromagnetic propagation. The editor-in-chief is George K. Karagiannidis (Aristotle University of Thessaloniki). According to the Journal Citation Reports, the journal has a 2022 impact factor of 8.3. History The journal traces back to the establishment of the Transactions of the American Institute of Electrical Engineers in 1884. The journal has gone through several name changes and splits over the years. 1884–1951: Transactions of the American Institute of Electrical Engineers 1952–1963: Transactions of the American Institute of Electrical Engineers, Part I: Communication and Electronics 1953–1955: Transactions of the IRE Professional Group on Communications Systems 1956–1962: IRE Transactions on Communications Systems 1963–1964: IEEE Transactions on Communications Systems 1964: IEEE Transactions on Communication and Electronics 1964–1971: IEEE Transactions on Communication Technology 1972–present: IEEE Transactions on Communications See also IEEE Transactions on Green Communications and Networking References External links Transactions on Communications Monthly journals Academic journals established in 1972 English-language journals Telecommunications engineering journals
IEEE Transactions on Communications
Engineering
245
195,984
https://en.wikipedia.org/wiki/Hermite%20polynomials
In mathematics, the Hermite polynomials are a classical orthogonal polynomial sequence. The polynomials arise in: signal processing as Hermitian wavelets for wavelet transform analysis probability, such as the Edgeworth series, as well as in connection with Brownian motion; combinatorics, as an example of an Appell sequence, obeying the umbral calculus; numerical analysis as Gaussian quadrature; physics, where they give rise to the eigenstates of the quantum harmonic oscillator; and they also occur in some cases of the heat equation (when the term is present); systems theory in connection with nonlinear operations on Gaussian noise. random matrix theory in Gaussian ensembles. Hermite polynomials were defined by Pierre-Simon Laplace in 1810, though in scarcely recognizable form, and studied in detail by Pafnuty Chebyshev in 1859. Chebyshev's work was overlooked, and they were named later after Charles Hermite, who wrote on the polynomials in 1864, describing them as new. They were consequently not new, although Hermite was the first to define the multidimensional polynomials. Definition Like the other classical orthogonal polynomials, the Hermite polynomials can be defined from several different starting points. Noting from the outset that there are two different standardizations in common use, one convenient method is as follows: The "probabilist's Hermite polynomials" are given by while the "physicist's Hermite polynomials" are given by These equations have the form of a Rodrigues' formula and can also be written as, The two definitions are not exactly identical; each is a rescaling of the other: These are Hermite polynomial sequences of different variances; see the material on variances below. The notation and is that used in the standard references. The polynomials are sometimes denoted by , especially in probability theory, because is the probability density function for the normal distribution with expected value 0 and standard deviation 1. The first eleven probabilist's Hermite polynomials are: The first eleven physicist's Hermite polynomials are: Properties The th-order Hermite polynomial is a polynomial of degree . The probabilist's version has leading coefficient 1, while the physicist's version has leading coefficient . Symmetry From the Rodrigues formulae given above, we can see that and are even or odd functions depending on : Orthogonality and are th-degree polynomials for . These polynomials are orthogonal with respect to the weight function (measure) or i.e., we have Furthermore, and where is the Kronecker delta. The probabilist polynomials are thus orthogonal with respect to the standard normal probability density function. Completeness The Hermite polynomials (probabilist's or physicist's) form an orthogonal basis of the Hilbert space of functions satisfying in which the inner product is given by the integral including the Gaussian weight function defined in the preceding section An orthogonal basis for is a complete orthogonal system. For an orthogonal system, completeness is equivalent to the fact that the 0 function is the only function orthogonal to all functions in the system. Since the linear span of Hermite polynomials is the space of all polynomials, one has to show (in physicist case) that if satisfies for every , then . One possible way to do this is to appreciate that the entire function vanishes identically. The fact then that for every real means that the Fourier transform of is 0, hence is 0 almost everywhere. 
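The mathematical markup in this article did not survive plain-text extraction. Purely for reference, the standard forms of the definitions and orthogonality relations referred to in the section above are the following (these are the usual textbook statements, not a reconstruction of the article's exact markup):

```latex
% Probabilist's and physicist's Hermite polynomials (Rodrigues-type forms)
\operatorname{He}_n(x) = (-1)^n e^{x^2/2} \frac{d^n}{dx^n} e^{-x^2/2},
\qquad
H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} e^{-x^2}.

% Rescaling between the two conventions
H_n(x) = 2^{n/2}\,\operatorname{He}_n\!\left(\sqrt{2}\,x\right),
\qquad
\operatorname{He}_n(x) = 2^{-n/2}\, H_n\!\left(x/\sqrt{2}\right).

% Orthogonality with respect to the Gaussian weights
\int_{-\infty}^{\infty} \operatorname{He}_m(x)\operatorname{He}_n(x)\, e^{-x^2/2}\,dx
  = \sqrt{2\pi}\; n!\,\delta_{mn},
\qquad
\int_{-\infty}^{\infty} H_m(x)\,H_n(x)\, e^{-x^2}\,dx
  = \sqrt{\pi}\; 2^n n!\,\delta_{mn}.
```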
Variants of the above completeness proof apply to other weights with exponential decay. In the Hermite case, it is also possible to prove an explicit identity that implies completeness (see section on the Completeness relation below). An equivalent formulation of the fact that Hermite polynomials are an orthogonal basis for consists in introducing Hermite functions (see below), and in saying that the Hermite functions are an orthonormal basis for . Hermite's differential equation The probabilist's Hermite polynomials are solutions of the differential equation where is a constant. Imposing the boundary condition that should be polynomially bounded at infinity, the equation has solutions only if is a non-negative integer, and the solution is uniquely given by , where denotes a constant. Rewriting the differential equation as an eigenvalue problem the Hermite polynomials may be understood as eigenfunctions of the differential operator . This eigenvalue problem is called the Hermite equation, although the term is also used for the closely related equation whose solution is uniquely given in terms of physicist's Hermite polynomials in the form , where denotes a constant, after imposing the boundary condition that should be polynomially bounded at infinity. The general solutions to the above second-order differential equations are in fact linear combinations of both Hermite polynomials and confluent hypergeometric functions of the first kind. For example, for the physicist's Hermite equation the general solution takes the form where and are constants, are physicist's Hermite polynomials (of the first kind), and are physicist's Hermite functions (of the second kind). The latter functions are compactly represented as where are Confluent hypergeometric functions of the first kind. The conventional Hermite polynomials may also be expressed in terms of confluent hypergeometric functions, see below. With more general boundary conditions, the Hermite polynomials can be generalized to obtain more general analytic functions for complex-valued . An explicit formula of Hermite polynomials in terms of contour integrals is also possible. Recurrence relation The sequence of probabilist's Hermite polynomials also satisfies the recurrence relation Individual coefficients are related by the following recursion formula: and , , . For the physicist's polynomials, assuming we have Individual coefficients are related by the following recursion formula: and , , . The Hermite polynomials constitute an Appell sequence, i.e., they are a polynomial sequence satisfying the identity An integral recurrence that is deduced and demonstrated in is as follows: Equivalently, by Taylor-expanding, These umbral identities are self-evident and included in the differential operator representation detailed below, In consequence, for the th derivatives the following relations hold: It follows that the Hermite polynomials also satisfy the recurrence relation These last relations, together with the initial polynomials and , can be used in practice to compute the polynomials quickly. 
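The recurrence relations just mentioned are indeed the practical route for computing these polynomials. A minimal sketch, assuming the standard three-term recurrences H₀ = 1, H₁(x) = 2x, H_{n+1}(x) = 2x·Hₙ(x) − 2n·H_{n−1}(x) and He₀ = 1, He₁(x) = x, He_{n+1}(x) = x·Heₙ(x) − n·He_{n−1}(x):

```python
import math

def hermite_phys(n, x):
    """Physicist's Hermite polynomial H_n(x) via the three-term recurrence
    H_{k+1} = 2*x*H_k - 2*k*H_{k-1}, starting from H_0 = 1, H_1 = 2x."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def hermite_prob(n, x):
    """Probabilist's Hermite polynomial He_n(x) via
    He_{k+1} = x*He_k - k*He_{k-1}, starting from He_0 = 1, He_1 = x."""
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

# Spot checks against the closed forms H_3(x) = 8x^3 - 12x and He_3(x) = x^3 - 3x:
x = 0.7
assert abs(hermite_phys(3, x) - (8 * x**3 - 12 * x)) < 1e-12
assert abs(hermite_prob(3, x) - (x**3 - 3 * x)) < 1e-12
# Rescaling identity H_n(x) = 2^(n/2) * He_n(sqrt(2) x):
n = 5
assert abs(hermite_phys(n, x) - 2**(n / 2) * hermite_prob(n, math.sqrt(2) * x)) < 1e-9
print("recurrence checks passed")
```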
Turán's inequalities are Moreover, the following multiplication theorem holds: Explicit expression The physicist's Hermite polynomials can be written explicitly as These two equations may be combined into one using the floor function: The probabilist's Hermite polynomials have similar formulas, which may be obtained from these by replacing the power of with the corresponding power of and multiplying the entire sum by : Inverse explicit expression The inverse of the above explicit expressions, that is, those for monomials in terms of probabilist's Hermite polynomials are The corresponding expressions for the physicist's Hermite polynomials follow directly by properly scaling this: Generating function The Hermite polynomials are given by the exponential generating function This equality is valid for all complex values of and , and can be obtained by writing the Taylor expansion at of the entire function (in the physicist's case). One can also derive the (physicist's) generating function by using Cauchy's integral formula to write the Hermite polynomials as Using this in the sum one can evaluate the remaining integral using the calculus of residues and arrive at the desired generating function. Expected values If is a random variable with a normal distribution with standard deviation 1 and expected value , then The moments of the standard normal (with expected value zero) may be read off directly from the relation for even indices: where is the double factorial. Note that the above expression is a special case of the representation of the probabilist's Hermite polynomials as moments: Asymptotic expansion Asymptotically, as , the expansion holds true. For certain cases concerning a wider range of evaluation, it is necessary to include a factor for changing amplitude: which, using Stirling's approximation, can be further simplified, in the limit, to This expansion is needed to resolve the wavefunction of a quantum harmonic oscillator such that it agrees with the classical approximation in the limit of the correspondence principle. A better approximation, which accounts for the variation in frequency, is given by A finer approximation, which takes into account the uneven spacing of the zeros near the edges, makes use of the substitution with which one has the uniform approximation Similar approximations hold for the monotonic and transition regions. Specifically, if then while for with complex and bounded, the approximation is where is the Airy function of the first kind. Special values The physicist's Hermite polynomials evaluated at zero argument are called Hermite numbers. which satisfy the recursion relation . In terms of the probabilist's polynomials this translates to Relations to other functions Laguerre polynomials The Hermite polynomials can be expressed as a special case of the Laguerre polynomials: Relation to confluent hypergeometric functions The physicist's Hermite polynomials can be expressed as a special case of the parabolic cylinder functions: in the right half-plane, where is Tricomi's confluent hypergeometric function. Similarly, where is Kummer's confluent hypergeometric function. Hermite polynomial expansion Similar to Taylor expansion, some functions are expressible as an infinite sum of Hermite polynomials. Specifically, if , then it has an expansion in the physicist's Hermite polynomials. Given such , the partial sums of the Hermite expansion of converges to in the norm if and only if . 
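Again for reference, since the formulas were lost in extraction: the standard explicit sum and the exponential generating functions discussed in the sections above read as follows in the usual conventions (the probabilist's explicit expression follows by the rescaling noted earlier):

```latex
% Explicit expression (physicist's convention)
H_n(x) = n! \sum_{m=0}^{\lfloor n/2 \rfloor}
         \frac{(-1)^m}{m!\,(n-2m)!}\,(2x)^{n-2m}.

% Exponential generating functions
e^{2xt - t^2} = \sum_{n=0}^{\infty} H_n(x)\,\frac{t^n}{n!},
\qquad
e^{xt - t^2/2} = \sum_{n=0}^{\infty} \operatorname{He}_n(x)\,\frac{t^n}{n!}.
```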
Differential-operator representation The probabilist's Hermite polynomials satisfy the identity where represents differentiation with respect to , and the exponential is interpreted by expanding it as a power series. There are no delicate questions of convergence of this series when it operates on polynomials, since all but finitely many terms vanish. Since the power-series coefficients of the exponential are well known, and higher-order derivatives of the monomial can be written down explicitly, this differential-operator representation gives rise to a concrete formula for the coefficients of that can be used to quickly compute these polynomials. Since the formal expression for the Weierstrass transform is , we see that the Weierstrass transform of is . Essentially the Weierstrass transform thus turns a series of Hermite polynomials into a corresponding Maclaurin series. The existence of some formal power series with nonzero constant coefficient, such that , is another equivalent to the statement that these polynomials form an Appell sequence. Since they are an Appell sequence, they are a fortiori a Sheffer sequence. Contour-integral representation From the generating-function representation above, we see that the Hermite polynomials have a representation in terms of a contour integral, as with the contour encircling the origin. Generalizations The probabilist's Hermite polynomials defined above are orthogonal with respect to the standard normal probability distribution, whose density function is which has expected value 0 and variance 1. Scaling, one may analogously speak of generalized Hermite polynomials of variance , where is any positive number. These are then orthogonal with respect to the normal probability distribution whose density function is They are given by Now, if then the polynomial sequence whose th term is is called the umbral composition of the two polynomial sequences. It can be shown to satisfy the identities and The last identity is expressed by saying that this parameterized family of polynomial sequences is known as a cross-sequence. (See the above section on Appell sequences and on the differential-operator representation, which leads to a ready derivation of it. This binomial type identity, for , has already been encountered in the above section on #Recursion relations.) "Negative variance" Since polynomial sequences form a group under the operation of umbral composition, one may denote by the sequence that is inverse to the one similarly denoted, but without the minus sign, and thus speak of Hermite polynomials of negative variance. For , the coefficients of are just the absolute values of the corresponding coefficients of . These arise as moments of normal probability distributions: The th moment of the normal distribution with expected value and variance is where is a random variable with the specified normal distribution. A special case of the cross-sequence identity then says that Hermite functions Definition One can define the Hermite functions (often called Hermite-Gaussian functions) from the physicist's polynomials: Thus, Since these functions contain the square root of the weight function and have been scaled appropriately, they are orthonormal: and they form an orthonormal basis of . This fact is equivalent to the corresponding statement for Hermite polynomials (see above). The Hermite functions are closely related to the Whittaker function : and thereby to other parabolic cylinder functions. 
The Hermite functions satisfy the differential equation This equation is equivalent to the Schrödinger equation for a harmonic oscillator in quantum mechanics, so these functions are the eigenfunctions. Recursion relation Following recursion relations of Hermite polynomials, the Hermite functions obey and Extending the first relation to the arbitrary th derivatives for any positive integer leads to This formula can be used in connection with the recurrence relations for and to calculate any derivative of the Hermite functions efficiently. Cramér's inequality For real , the Hermite functions satisfy the following bound due to Harald Cramér and Jack Indritz: Hermite functions as eigenfunctions of the Fourier transform The Hermite functions are a set of eigenfunctions of the continuous Fourier transform . To see this, take the physicist's version of the generating function and multiply by . This gives The Fourier transform of the left side is given by The Fourier transform of the right side is given by Equating like powers of in the transformed versions of the left and right sides finally yields The Hermite functions are thus an orthonormal basis of , which diagonalizes the Fourier transform operator. Wigner distributions of Hermite functions The Wigner distribution function of the th-order Hermite function is related to the th-order Laguerre polynomial. The Laguerre polynomials are leading to the oscillator Laguerre functions For all natural integers , it is straightforward to see that where the Wigner distribution of a function is defined as This is a fundamental result for the quantum harmonic oscillator discovered by Hip Groenewold in 1946 in his PhD thesis. It is the standard paradigm of quantum mechanics in phase space. There are further relations between the two families of polynomials. Partial Overlap Integrals It can be shown that the overlap between two different Hermite functions () over a given interval has the exact result: Combinatorial interpretation of coefficients In the Hermite polynomial of variance 1, the absolute value of the coefficient of is the number of (unordered) partitions of an -element set into singletons and (unordered) pairs. Equivalently, it is the number of involutions of an -element set with precisely fixed points, or in other words, the number of matchings in the complete graph on vertices that leave vertices uncovered (indeed, the Hermite polynomials are the matching polynomials of these graphs). The sum of the absolute values of the coefficients gives the total number of partitions into singletons and pairs, the so-called telephone numbers 1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496,... . This combinatorial interpretation can be related to complete exponential Bell polynomials as where for all . These numbers may also be expressed as a special value of the Hermite polynomials: Completeness relation The Christoffel–Darboux formula for Hermite polynomials reads Moreover, the following completeness identity for the above Hermite functions holds in the sense of distributions: where is the Dirac delta function, the Hermite functions, and represents the Lebesgue measure on the line in , normalized so that its projection on the horizontal axis is the usual Lebesgue measure. 
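In the usual normalization, the Hermite functions and the facts cited above take the form

\psi_n(x) = \left(2^n n!\,\sqrt{\pi}\right)^{-1/2} e^{-x^2/2} H_n(x),
\qquad
\psi_n''(x) + (2n + 1 - x^2)\,\psi_n(x) = 0,

and, with the unitary convention \hat{f}(k) = (2\pi)^{-1/2}\int_{-\infty}^{\infty} f(x)\,e^{-ikx}\,dx, each \psi_n is an eigenfunction of the Fourier transform with eigenvalue (-i)^n, that is, \hat{\psi}_n = (-i)^n\,\psi_n.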
This distributional identity follows by taking in Mehler's formula, valid when : which is often stated equivalently as a separable kernel, The function is the bivariate Gaussian probability density on , which is, when is close to 1, very concentrated around the line , and very spread out on that line. It follows that when and are continuous and compactly supported. This yields that can be expressed in Hermite functions as the sum of a series of vectors in , namely, In order to prove the above equality for , the Fourier transform of Gaussian functions is used repeatedly: The Hermite polynomial is then represented as With this representation for and , it is evident that and this yields the desired resolution of the identity result, using again the Fourier transform of Gaussian kernels under the substitution See also Hermite transform Legendre polynomials Mehler kernel Parabolic cylinder function Romanovski polynomials Turán's inequalities Notes References Oeuvres complètes 12, pp.357-412, English translation . - 2000 references of Bibliography on Hermite polynomials. External links GNU Scientific Library — includes C version of Hermite polynomials, functions, their derivatives and zeros (see also GNU Scientific Library) Orthogonal polynomials Polynomials Special hypergeometric functions
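For reference, Mehler's formula invoked above reads, in its standard physicist's form (valid for |\rho| < 1),

\sum_{n=0}^{\infty} \frac{H_n(x)\,H_n(y)}{n!}\left(\frac{\rho}{2}\right)^n
= \frac{1}{\sqrt{1-\rho^2}}\,\exp\!\left(\frac{2xy\rho - (x^2 + y^2)\rho^2}{1-\rho^2}\right),

and the completeness identity it yields for the Hermite functions is \sum_{n=0}^{\infty} \psi_n(x)\,\psi_n(y) = \delta(x - y) in the sense of distributions.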
Hermite polynomials
Mathematics
3,589
30,963,255
https://en.wikipedia.org/wiki/Underwood%27s%20septa
In anatomy, Underwood's septa (or maxillary sinus septa, singular septum) are fin-shaped projections of bone that may exist in the maxillary sinus, first described in 1910 by Arthur S. Underwood, an anatomist at King's College in London. The presence of septa at or near the floor of the sinus is of interest to the dental clinician when proposing or performing sinus floor elevation procedures because of an increased likelihood of surgical complications, such as tearing of the Schneiderian membrane. The prevalence of Underwood's septa in relation to the floor of the maxillary sinus has been reported at nearly 32%. Location of septa in the sinus Underwood divided the maxillary sinus into three regions relating to zones of distinct tooth eruption activity: anterior (corresponding to the premolars), middle (corresponding to the first molar) and posterior (corresponding to the second molar). Thus, he asserted, these septa always arise between teeth and never opposite the middle of a tooth. Different studies reveal a different predisposition for the presence of septa based on sinus region: Anterior: Ulm, et al., Krennmair et al. Middle: Velásquez-Plata et al., Kim et al. and González-Santana et al. Posterior: Underwood Primary vs. secondary septa Recent studies have classified two types of maxillary sinus septa: primary and secondary. Primary septa are those initially described by Underwood and that form as a result of the floor of the sinus sinking along with the roots of erupting teeth; these primary septa are thus generally found in the sinus corresponding to the space between teeth, as explained by Underwood. Conversely, secondary septa form as a result of irregular pneumatization of the sinus following loss of maxillary posterior teeth. Sinus pneumatization is a poorly understood phenomenon that results in an increased volume of the maxillary sinus, generally following maxillary posterior tooth loss, at the expense of the bone which used to house the roots of the maxillary posterior teeth. References Dentistry Anatomy
Underwood's septa
Biology
443
17,731,917
https://en.wikipedia.org/wiki/Tricritical%20point
A tricritical point is a point where a second-order phase transition curve meets a first-order phase transition curve. The notion was first introduced by Lev Landau in 1937, who referred to the tricritical point as the critical point of the continuous transition. The first example of a tricritical point was shown by Robert B. Griffiths in a helium-3–helium-4 mixture. In condensed matter physics, dealing with the macroscopic physical properties of matter, a tricritical point is a point in the phase diagram of a system at which three-phase coexistence terminates. This definition is clearly parallel to the definition of an ordinary critical point as the point at which two-phase coexistence terminates. A point of three-phase coexistence is termed a triple point for a one-component system, since, from Gibbs' phase rule, this condition is only achieved for a single point in the phase diagram (F = 2 − 3 + 1 = 0). For tricritical points to be observed, one needs a mixture with more components. It can be shown that three is the minimum number of components for which these points can appear. In this case, one may have a two-dimensional region of three-phase coexistence (F = 2 − 3 + 3 = 2) (thus, each point in this region corresponds to a triple point). This region will terminate in two critical lines of two-phase coexistence; these two critical lines may then terminate at a single tricritical point. This point is therefore "twice critical", since it belongs to two critical branches. Indeed, its critical behavior is different from that of a conventional critical point: the upper critical dimension is lowered from d = 4 to d = 3, so the classical exponents turn out to apply for real systems in three dimensions (but not for systems whose spatial dimension is 2 or lower). Solid state It seems more convenient experimentally to consider mixtures with four components for which one thermodynamic variable (usually the pressure or the volume) is kept fixed. The situation then reduces to the one described for mixtures of three components. Historically, it was for a long time unclear whether a superconductor undergoes a first- or a second-order phase transition. The question was finally settled in 1982. If the Ginzburg–Landau parameter that distinguishes type-I and type-II superconductors is large enough, vortex fluctuations become important which drive the transition to second order. The tricritical point lies slightly below the value of the parameter at which a type-I superconductor goes over into a type-II superconductor. The prediction was confirmed in 2002 by Monte Carlo computer simulations. References Phase transitions Critical phenomena
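Written out in the standard Gibbs form F = C − P + 2 (C components, P coexisting phases), the two counts used above are

F = 1 - 3 + 2 = 0 \quad \text{(one component, three phases: an isolated triple point)},

F = 3 - 3 + 2 = 2 \quad \text{(three components, three phases: a two-dimensional coexistence region)}.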
Tricritical point
Physics,Chemistry,Materials_science,Mathematics
565
23,093,011
https://en.wikipedia.org/wiki/Strength%20of%20a%20graph
In graph theory, the strength of an undirected graph corresponds to the minimum ratio of edges removed/components created in a decomposition of the graph in question. It is a method to compute partitions of the set of vertices and detect zones of high concentration of edges, and is analogous to graph toughness, which is defined similarly for vertex removal. Definitions The strength of an undirected simple graph G = (V, E) admits the following three definitions: Let be the set of all partitions of , and be the set of edges crossing over the sets of the partition , then . Also if is the set of all spanning trees of G, then And by linear programming duality, Complexity Computing the strength of a graph can be done in polynomial time, and the first such algorithm was discovered by Cunningham (1985). The algorithm with the best complexity for computing the strength exactly is due to Trubin (1993) and uses the flow decomposition of Goldberg and Rao (1998), in time . Properties If is one partition that maximizes, and for , is the restriction of G to the set , then . The Tutte–Nash-Williams theorem: is the maximum number of edge-disjoint spanning trees that can be contained in G. Contrary to the graph partition problem, the partitions output by computing the strength are not necessarily balanced (i.e. of almost equal size). References W. H. Cunningham. Optimal attack and reinforcement of a network, J of ACM, 32:549–561, 1985. A. Schrijver. Chapter 51. Combinatorial Optimization, Springer, 2003. V. A. Trubin. Strength of a graph and packing of trees and branchings, Cybernetics and Systems Analysis, 29:379–384, 1993. Graph connectivity Graph invariants
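For reference, the three definitions sketched above can be written out in the standard formulation (notation supplied here). With Π the set of partitions of V into at least two classes and ∂π the set of edges whose endpoints lie in different classes of π,

\sigma(G) = \min_{\pi \in \Pi} \frac{|\partial \pi|}{|\pi| - 1};

with \mathcal{T} the set of spanning trees of G,

\sigma(G) = \max\left\{ \sum_{T \in \mathcal{T}} \lambda_T \;:\; \lambda_T \ge 0,\ \sum_{T \ni e} \lambda_T \le 1 \text{ for every } e \in E \right\};

and, by linear programming duality,

\sigma(G) = \min\left\{ \sum_{e \in E} y_e \;:\; y_e \ge 0,\ \sum_{e \in T} y_e \ge 1 \text{ for every } T \in \mathcal{T} \right\}.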
Strength of a graph
Mathematics
376
13,106,156
https://en.wikipedia.org/wiki/List%20of%20HTTP%20header%20fields
HTTP header fields are a list of strings sent and received by both the client program and server on every HTTP request and response. These headers are usually invisible to the end-user and are only processed or logged by the server and client applications. They define how information sent/received through the connection is encoded (as in Content-Encoding), the session verification and identification of the client (as in browser cookies, IP address, user-agent) or their anonymity (VPN or proxy masking, user-agent spoofing), how the server should handle data (as in Do-Not-Track or Global Privacy Control), the age (the time it has resided in a shared cache) of the document being downloaded, amongst others. General format In HTTP version 1.x, header fields are transmitted after the request line (in case of a request HTTP message) or the response line (in case of a response HTTP message), which is the first line of a message. Header fields are colon-separated key-value pairs in clear-text string format, terminated by a carriage return (CR) and line feed (LF) character sequence. The end of the header section is indicated by an empty field line, resulting in the transmission of two consecutive CR-LF pairs. In the past, long lines could be folded into multiple lines; continuation lines are indicated by the presence of a space (SP) or horizontal tab (HT) as the first character on the next line. This folding was deprecated in RFC 7230. HTTP/2 and HTTP/3 instead use a binary protocol, where headers are encoded in a single HEADERS and zero or more CONTINUATION frames using HPACK (HTTP/2) or QPACK (HTTP/3), which both provide efficient header compression. The request or response line from HTTP/1 has also been replaced by several pseudo-header fields, each beginning with a colon (:). Field names A core set of fields is standardized by the Internet Engineering Task Force (IETF) in . The Field Names, Header Fields and Repository of Provisional Registrations are maintained by the IANA. Additional field names and permissible values may be defined by each application. Header field names are case-insensitive. This is in contrast to HTTP method names (GET, POST, etc.), which are case-sensitive. HTTP/2 makes some restrictions on specific header fields (see below). Non-standard header fields were conventionally marked by prefixing the field name with X- but this convention was deprecated in June 2012 because of the inconveniences it caused when non-standard fields became standard. An earlier restriction on use of Downgraded- was lifted in March 2013. Field values A few fields can contain comments (e.g., in User-Agent, Server, Via fields), which can be ignored by software. Many field values may contain a quality (q) key-value pair separated by an equals sign, specifying a weight to use in content negotiation. For example, a browser may indicate that it accepts information in German or English, with German as preferred by setting the q value for de higher than that of en, as follows: Accept-Language: de; q=1.0, en; q=0.5 Size limits The standard imposes no limits on the size of each header field name or value, or on the number of fields. However, most servers, clients, and proxy software impose some limits for practical and security reasons. For example, the Apache 2.3 server by default limits the size of each field to 8,190 bytes, and there can be at most 100 header fields in a single request.
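The q-value mechanism above is simple enough to sketch in code. The following Python function (an illustrative sketch, not part of any standard library) parses a header value of the form shown and sorts the alternatives by weight:

def parse_qvalues(header_value):
    """Parse e.g. 'de; q=1.0, en; q=0.5' into (tag, weight) pairs."""
    items = []
    for part in header_value.split(","):
        fields = part.split(";")
        tag = fields[0].strip()
        q = 1.0  # a missing q parameter means a weight of 1
        for param in fields[1:]:
            name, _, value = param.partition("=")
            if name.strip().lower() == "q":
                q = float(value)
        items.append((tag, q))
    # Highest weight first: the server should prefer 'de' over 'en' here.
    return sorted(items, key=lambda item: item[1], reverse=True)

print(parse_qvalues("de; q=1.0, en; q=0.5"))  # [('de', 1.0), ('en', 0.5)]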
Request fields Standard request fields Common non-standard request fields Response fields Standard response fields Common non-standard response fields Effects of selected fields Avoiding caching If a web server responds with Cache-Control: no-cache then a web browser or other caching system (intermediate proxies) must not use the response to satisfy subsequent requests without first checking with the originating server (this process is called validation). This header field is part of HTTP version 1.1, and is ignored by some caches and browsers. It may be simulated by setting the Expires HTTP version 1.0 header field value to a time earlier than the response time. Notice that no-cache is not instructing the browser or proxies about whether or not to cache the content. It just tells the browser and proxies to validate the cache content with the server before using it (this is done by using If-Modified-Since, If-Unmodified-Since, If-Match, If-None-Match attributes mentioned above). Sending a no-cache value thus instructs a browser or proxy to not use the cache contents merely based on "freshness criteria" of the cache content. Another common way to prevent old content from being shown to the user without validation is Cache-Control: max-age=0. This instructs the user agent that the content is stale and should be validated before use. The header field Cache-Control: no-store is intended to instruct a browser application to make a best effort not to write it to disk (i.e. not to cache it). The request that a resource should not be cached is no guarantee that it will not be written to disk. In particular, the HTTP/1.1 definition draws a distinction between history stores and caches. If the user navigates back to a previous page, a browser may still show a page that has been stored on disk in the history store. This is correct behavior according to the specification. Many user agents show different behavior in loading pages from the history store or cache depending on whether the protocol is HTTP or HTTPS. The Cache-Control: no-cache HTTP/1.1 header field is also intended for use in requests made by the client. It is a means for the browser to tell the server and any intermediate caches that it wants a fresh version of the resource. The Pragma: no-cache header field, defined in the HTTP/1.0 spec, has the same purpose. It, however, is only defined for the request header. Its meaning in a response header is not specified. The behavior of Pragma: no-cache in a response is implementation specific. While some user agents do pay attention to this field in responses, the HTTP/1.1 RFC specifically warns against relying on this behavior. See also HTTP header injection HTTP ETag List of HTTP status codes References External links Headers: Permanent Message Header Field Names : IETF HTTP State Management Mechanism : HTTP Semantics : HTTP Caching : HTTP/1.1 : HTTP/2 : HTTP/3 : Forwarded HTTP Extension : Prefer Header for HTTP HTTP/1.1 headers from a web server point of view Internet Explorer and Custom HTTP Headers - EricLaw's IEInternals - Site Home - MSDN Blogs HTTP header fields
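As a concrete illustration of the caching headers discussed above, the following minimal Python server (a sketch using only the standard library; the handler name is arbitrary) sends Cache-Control: no-cache together with the HTTP/1.0-era Pragma fallback:

from http.server import BaseHTTPRequestHandler, HTTPServer

class NoCacheHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"fresh content"
        self.send_response(200)
        # HTTP/1.1: caches must revalidate with the origin before reuse.
        self.send_header("Cache-Control", "no-cache")
        # HTTP/1.0 fallback; only well-defined in requests, but widely sent.
        self.send_header("Pragma", "no-cache")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 8080), NoCacheHandler).serve_forever()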
List of HTTP header fields
Technology
1,446
10,107,406
https://en.wikipedia.org/wiki/Double%20inverted%20pendulum
A double inverted pendulum is the combination of the inverted pendulum and the double pendulum. The double inverted pendulum is unstable, meaning that it will fall down unless it is controlled in some way. The two main methods of controlling a double inverted pendulum are moving the base, as with the inverted pendulum, or applying a torque at the pivot point between the two pendulums. See also Inverted pendulum Inertia wheel pendulum Furuta pendulum Tuned mass damper References External links A dynamical simulation of a double inverted pendulum on an oscillatory base Pendulums Control engineering
Double inverted pendulum
Physics,Engineering
115
73,829,144
https://en.wikipedia.org/wiki/Chuntex%20Electronic
Chuntex Electronic Co., Ltd., also known as CTX International, is a Taiwanese computer display manufacturer. History Chuntex Electronic Co., Ltd. was founded in 1981. Initially only a domestic manufacturer of cathode-ray-tube computer monitors within Taiwan, Chuntex expanded globally in 1986, establishing CTX International—their United States and primary international export subsidiary—that year, placing its headquarters in the City of Industry, California. In Europe, meanwhile, Chuntex established offices in the Netherlands and the United Kingdom (Watford), employing 75 between them in 2004. Between the late 1980s and the late 1990s, the company acquired several overseas companies in the field of computer monitors and hardware, helping CTX grow to become one of the largest brands and OEM suppliers of monitors. In the early 1990s, they established their Opto subsidiary, which manufactured LCD monitors and projectors. Chuntex's largest export market in 1995 was the United States (62 percent), compared with Asia (19 percent) and Europe (15 percent). Between fall 1992 and fall 1993, sales of CTX's wares grew from US$15.5 million to $27.2 million. The company earned US$11.5 million in profit on sales of roughly $250 million in 1998. By 1999, the company had 5,000 employees globally. In August 1994, Chuntex purchased a 51-percent stake in Veridata Electronics, a Taiwanese computer company, with Chuntex seeking the latter's laptop-manufacturing factory lines and workforce. After acquiring an even larger stake in Veridata, Chuntex then began selling computers branded under their own CTX name, as well as for other computer vendors, such as CompUSA in 1996, on an OEM basis. Though CTX was a relatively small name in the personal computer market at the time, the company initially earned a respectable profit from these systems, which included the sub-brands EzNote for their laptops and Nutopia for their desktop computers. However, in April 1999, the company reported losses equal to roughly half of their market capitalization, which the company attributed in large part to their laptop business. These losses put CTX in the red; in the process, Chuntex became the first major Taiwanese company to go bankrupt in 1999. Chuntex shortly after filed for reorganization protection in Taiwan. A few months later, the company announced that they would abandon manufacturing complete computer systems, in favor of focusing solely on monitor production while still selling some systems, albeit built by other companies and rebadged as CTX machines. CTX remains active in Taiwan. References 1981 establishments in Taiwan 1986 establishments in California Companies based in Taipei Computer companies established in 1981 Taiwanese brands Computer monitors Computer companies of Taiwan Computer hardware companies
Chuntex Electronic
Technology
575
47,070,737
https://en.wikipedia.org/wiki/Zetapapillomavirus
Zetapapillomavirus is a genus of viruses in the family Papillomaviridae. Horses serve as natural hosts. There is only one species in this genus: Zetapapillomavirus 1. Diseases associated with this genus include cutaneous lesions. Structure Viruses in Zetapapillomavirus are non-enveloped, with icosahedral geometries and T=7 symmetry. The diameter is around 52–55 nm. Genomes are circular, around 7 kb in length. Life cycle Viral replication is nuclear. Entry into the host cell is achieved by attachment of the viral proteins to host receptors, which mediates endocytosis. Replication follows the dsDNA bidirectional replication model. DNA-templated transcription, with some alternative splicing mechanism, is the method of transcription. The virus exits the host cell by nuclear envelope breakdown. Horses serve as the natural host. The transmission route is contact. References External links ICTV Report Papillomaviridae Viralzone: Zetapapillomavirus Papillomavirus Virus genera
Zetapapillomavirus
Biology
219
52,685,710
https://en.wikipedia.org/wiki/Karachi%20Institute%20of%20Radiotherapy%20and%20Nuclear%20Medicine
The Karachi Institute of Radiotherapy and Nuclear Medicine (KIRAN) is a cancer hospital in Karachi, Pakistan, under the administrative control of the Pakistan Atomic Energy Commission. KIRAN is one of nineteen medical centers in Pakistan providing patients access to diagnostic and treatment facilities either free of charge or at subsidized rates. Services KIRAN was initially planned to have state-of-the-art radiotherapy facilities. Subsequently, oncology and chemotherapy facilities were established to provide services at low cost to poor cancer patients in Karachi and rural areas of Sindh and Baluchistan. The hospital currently provides services in clinical oncology, nuclear medicine, radiology, and pharmacy. Besides conducting research on cancer diagnosis and biopsies, KIRAN organizes annual activities to mark World Cancer Day. Research From 2002 to 2006, a study was carried out to evaluate the benefits of screening prostate cancer patients for prostate-specific antigen. Findings were published in the Journal of the Pakistan Medical Association. Treatment charges The costs of treatment at KIRAN are lower than at other nuclear medicine centers in Karachi, such as the AKUH, Baqai Medical University, and Liaquat National Hospital. See also Nuclear medicine in Pakistan References Nuclear technology in Pakistan Nuclear medicine organizations Hospitals in Karachi Cancer hospitals in Pakistan
Karachi Institute of Radiotherapy and Nuclear Medicine
Engineering
245
43,500,794
https://en.wikipedia.org/wiki/NGC%20462
NGC 462 is an elliptical galaxy located in the Pisces constellation. It was discovered by Albert Marth on 23 October 1864. Dreyer, creator of the New General Catalogue, originally described it as "extremely faint, very small, stellar". The word stellar clearly suggests an initial misidentification of NGC 462 as a star. See also Elliptical galaxy List of NGC objects (1–1000) Pisces (constellation) References External links SEDS Elliptical galaxies Pisces (constellation) 0462 18641023 Discoveries by Albert Marth 004667
NGC 462
Astronomy
119
12,602,791
https://en.wikipedia.org/wiki/Java%20stingaree
The Java stingaree (Urolophus javanicus) was a species of stingray in the family Urolophidae, known only from a single female specimen long caught off Jakarta, Indonesia. This species is characterized by an oval-shaped pectoral fin disc longer than wide, and a tail with a dorsal fin in front of the stinging spine and a caudal fin. It is brown above, with darker and lighter spots. The International Union for Conservation of Nature has listed the Java stingaree as Extinct; it has not been recorded since its discovery over 150 years ago, and its range is subject to heavy fishing pressure and habitat degradation. Taxonomy In July 1862, German zoologist Eduard von Martens purchased the sole known specimen of the Java stingaree at a fish market in Jakarta. He described it as Trygonoptera javanica in an 1864 volume of the scientific journal Monatsberichte der Akademie der Wissenschaft zu Berlin (Monthly Report of the Academy of Sciences, Berlin). Subsequent authors moved this species to the genus Urolophus. Distribution and habitat The Java stingaree has only been found in the Java Sea, perhaps in the vicinity of Jakarta. Its exact range, and depth and habitat preferences, are unknown but probably very restricted. Description The Java stingaree has an oval pectoral fin disc slightly longer than wide; the leading margins are gently convex and converge at a blunt angle on the snout. The eyes are followed by larger, comma-shaped spiracles. The nostrils are crescent-like, and between them is a curtain of skin with a minutely fringed posterior margin. The mouth is bow-shaped, and contains three papillae (nipple-like structures) on the floor. The teeth are closely arranged with a quincunx pattern; each is small with a transverse ridge on the crown. The five pairs of gill slits are short. The pelvic fins are almost square, with rounded corners. The tail is shorter than the disc and bears a prominent dorsal fin about halfway along its length; immediately posterior to the dorsal fin is a serrated stinging spine. The tail ends in a leaf-shaped caudal fin, whose dorsal origin lies behind the ventral origin. The skin is devoid of dermal denticles, though there are tiny white bumps on the upper central portion of the disc. This species is dark brown above, with many indistinct darker and lighter spots, and pale below. The sole specimen measures long. Biology and ecology Very little is known of the natural history of the Java stingaree. It is presumably aplacental viviparous with a small litter size, as in other stingarees. Human interactions No new Java stingaree specimens have emerged since the first was discovered over 150 years ago, and it is feared to be extinct. There is heavy fishing activity within its range, as well as habitat degradation from the proximity of major population centers. While it is possible that captured specimens have gone unrecognized, if this species still survives its population would almost certainly be gravely imperilled; this has led to the International Union for Conservation of Nature (IUCN) assessing it as Extinct as of 31 March 2023. References Java stingaree Fish of Indonesia Endemic fauna of Java Critically endangered fish Critically endangered fauna of Asia Taxonomy articles created by Polbot Taxa named by Eduard von Martens Java stingaree Critically endangered fauna of Indonesia
Java stingaree
Biology
703
5,397,363
https://en.wikipedia.org/wiki/Utility%20room
A utility room is a room where equipment not used in day-to-day activities is kept. "Utility" refers to an item which is designed for usefulness or practical use, so in turn most of the items kept in this room have functional attributes. A utility room is generally the area where laundry is done, and is the descendant of the scullery. Utility room is more commonly used in British English, while North American English generally refers to this room as a laundry room, except in the American Southeast. In Australian English, laundry is the usual term. Uses The utility room has several uses but typically functions as an area to do laundry. This room contains laundry equipment such as a washing machine, tumble dryer, ironing boards and clothes iron. The room is also used for closet organization and storage. The room would normally contain a second coat closet which is used to store seasonal clothing such as winter coats or clothing which is no longer used daily. Storage spaces would contain other appliances which would generally be in the kitchen if they were in daily use. Furnaces and water heaters are sometimes incorporated into the room as well. Shelving and trash bins may sometimes be placed in this area so as not to congest the other parts of the house. History The utility room is a modern spin-off of the scullery, where important kitchen items were kept during its usage in England; the term was further defined around the 14th century as a household department where kitchen items are taken care of. The term utility room was mentioned in 1760, when a cottage was built in a rural location in the United Kingdom that was accessible through Penarth and Cardiff. A utility room for general purposes could also serve as a guest room in case of immediate need. A 1944 Scottish housing and planning report recommended new state-built homes for families could provide a utility room as a general purpose workroom for the home (for washing clothes, cleaning boots and jobbing repairs). An American publication, the Pittsburgh Post-Gazette, on July 24, 1949 reported that utility rooms had become more popular than basements in new constructions. On June 28, 1959, in a report of a typical American house being built in Moscow, Russia, the house was described as having a utility room immediately at the right side after the entrance. The Chicago Tribune reported that the laundry room was then commonly being referred to as the utility room in a September 30, 1970, publication. See also Furnace room Mechanical room Root cellar Scullery Storage room Technical room References Rooms Laundry places
Utility room
Engineering
510
11,552,840
https://en.wikipedia.org/wiki/Cloud%20Nine%20%28sphere%29
Cloud Nine is the name Buckminster Fuller gave to his proposed airborne habitats created from giant geodesic spheres, which might be made to levitate by slightly heating the air inside above the ambient temperature. Geodesic spheres become stronger (and relative to the volume enclosed, lighter) as they become bigger, because of how they distribute stress over their surfaces. As a sphere gets bigger, the volume it encloses grows much faster than the mass of the enclosing structure itself. Fuller suggested that the mass of a mile-wide geodesic sphere would be negligible compared to the mass of the air trapped within it. He suggested that if the air inside such a sphere were heated even by one degree higher than the ambient temperature of its surroundings, the sphere could become airborne. He calculated that such a balloon could lift a considerable mass, and hence that 'mini-cities' or airborne towns of thousands of people could be built in this way. A Cloud Nine could be tethered, or free-floating, or maneuverable so that it could migrate in response to climatic and environmental conditions, such as providing emergency shelters. In fiction Maiden Flight, a science fiction novel by Eric Vinicoff, postulates a post-nuclear apocalyptic world in which humanity has either retreated underground or taken to the skies in 'windriders', essentially large (approximately one-mile diameter) tensegrity bubbles that employ a hot-air lift mechanism. These windriders may also qualify as arcologies as they are, for all practical purposes, self-sustaining. See also Aerospace architecture Arcology Colonization of Venus Floating cities and islands in fiction Tensegrity References Aerostats Buckminster Fuller House types Fictional aircraft Proposed arcologies
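Fuller's lift claim is easy to check with a back-of-envelope calculation. The Python sketch below assumes sea-level air (density 1.225 kg/m³, ambient 288 K), a one-mile-diameter sphere, and 1 K of heating; the figures are illustrative, not Fuller's own:

import math

R = 1609.344 / 2      # radius of a one-mile-diameter sphere, in metres
T = 288.0             # ambient temperature, K
RHO = 1.225           # sea-level air density, kg/m^3
DT = 1.0              # heating above ambient, K

volume = 4.0 / 3.0 * math.pi * R**3
# At constant pressure an ideal gas thins as it warms, so the net lift is
# the density difference times the enclosed volume.
lift_kg = volume * RHO * DT / (T + DT)
print(f"volume {volume:.3e} m^3, net lift {lift_kg:.3e} kg")
# Roughly 9e6 kg -- thousands of tonnes of lift from a single kelvin.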
Cloud Nine (sphere)
Technology
384
22,385,246
https://en.wikipedia.org/wiki/Numeric%20std
numeric_std is a library package defined for VHDL. It provides arithmetic functions for vectors. Overrides of std_logic_vector are defined for signed and unsigned arithmetic. It defines numeric types and arithmetic functions for use with synthesis tools. Two numeric types are defined: UNSIGNED (represents an UNSIGNED number in vector form) and SIGNED (represents a SIGNED number in vector form). The base element type is type STD_LOGIC. The leftmost bit is treated as the most significant bit. Signed vectors are represented in two's complement form. This package contains overloaded arithmetic operators on the SIGNED and UNSIGNED types. The package also contains useful type conversion functions. It is typically included at the top of a design unit: library ieee; use ieee.std_logic_1164.all; -- standard unresolved logic UX01ZWLH- use ieee.numeric_std.all; -- for the signed, unsigned types and arithmetic ops The alternative numeric package ieee.std_logic_arith should not be used for new designs. This package does not provide overrides for mixing signed and unsigned functions. This package includes definitions for the following (not all of which are synthesizable): Operators and functions Sign changing operators abs - Arithmetic operators + - * / rem mod Note: the second argument of /, rem, or mod must be nonzero. Comparison operators > < <= >= = /= Shift and rotate functions SHIFT_LEFT SHIFT_RIGHT ROTATE_LEFT ROTATE_RIGHT sll srl rol ror Resize function RESIZE(v,n) Note: when increasing the size of a signed vector the leftmost bits are filled with the sign bit, while truncation retains the sign bit along with the (n-1) rightmost bits. For an unsigned vector, a size increase fills the leftmost bits with zero, while truncation retains the n rightmost bits. Conversion functions TO_INTEGER TO_UNSIGNED TO_SIGNED Note: the latter two functions each require a second argument specifying the length of the resulting vector. Logical operators not and or nand nor xor xnor Match function STD_MATCH Note: compares argument vectors element by element, but treats any bit with the value '-' as matching any other STD_ULOGIC value. Returns false if any argument bit is 'U', 'X', 'W', or 'Z'. Special translation function TO_01 Note: 'H' is translated to '1' and 'L' is translated to '0'; this function takes an optional second argument XMAP, which can be any of the std_logic values, but defaults to '0'. Any value besides 01LH in the input argument results in all bits being set to XMAP, with a warning issued. References Hardware description languages Articles with underscores in the title
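The RESIZE semantics described in the note above can be modelled outside VHDL. The Python sketch below (an illustrative model with bit-vectors as plain lists, most significant bit first; the function names are not part of numeric_std) shows the sign-extension and truncation rules:

def resize_unsigned(bits, n):
    """UNSIGNED resize: zero-fill on the left, or keep the n rightmost bits."""
    if n <= len(bits):
        return bits[-n:]
    return [0] * (n - len(bits)) + bits

def resize_signed(bits, n):
    """SIGNED resize: sign-extend on the left, or keep the sign bit
    plus the (n-1) rightmost bits when truncating."""
    if n <= len(bits):
        return [bits[0]] + bits[len(bits) - (n - 1):] if n > 1 else [bits[0]]
    return [bits[0]] * (n - len(bits)) + bits

print(resize_signed([1, 0, 1, 1], 6))  # [1, 1, 1, 0, 1, 1] (sign-extended)
print(resize_signed([1, 0, 1, 1], 3))  # [1, 1, 1] (sign bit + 2 LSBs)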
Numeric std
Engineering
610
14,757,547
https://en.wikipedia.org/wiki/GABRB2
The GABAA beta-2 subunit is a protein that in humans is encoded by the GABRB2 gene. It combines with other subunits to form the ionotropic GABAA receptors. The GABA (γ-aminobutyric acid) system is the major inhibitory system in the brain, and its dominant GABAA receptor subtype is composed of α1, β2, and γ2 subunits with the stoichiometry of 2:2:1, which accounts for 43% of all GABAA receptors. Alternative splicing of the GABRB2 gene leads to at least four isoforms, viz. β2-long (β2L) and β2-short (β2S, β2S1, and β2S2). Alternatively spliced variants displayed similar but non-identical electrophysiological properties. GABRB2 is subjected to positive selection and known to be both an alternative splicing and a recombination hotspot; it is regulated via epigenetic regulation including imprinting and gene and promoter methylation. GABRB2 has been associated with a number of neuropsychiatric disorders, and found to display altered expression in cancer. Structure GABRB2 encodes the GABAA receptor beta-2 subunit. It is highly expressed in the brain with dominance in the gray matter. In humans, it is located on chromosome 5q34, with 11 exons and 10 introns spanning more than 260 kb, and a promoter region ranging from 1000 bp upstream to 689 bp downstream of exon 1. Alternative splicing of the gene product yields at least four isoforms, viz. β2-long (β2L), β2-short (β2S) and two additional short isoforms β2S1 and β2S2. These isoforms, composed of 512, 474, 313, and 372 amino acids respectively, display dissimilar electrophysiological properties. In mice, the corresponding Gabrb2 gene on chromosome 11A5 comprises 12 exons and 11 introns, and the two isoforms β2L and β2S from alternative splicing consisted of 512 and 474 amino acids respectively. The β-2 subunit is a component of the ligand-gated chloride GABAA receptors which belongs to the Cys-loop superfamily. Like all subunits of this family, it consists of an extracellular N-terminal domain containing a Cys-loop of 13 amino acids, four membrane-spanning domains (TM1-4) with a large intracellular loop between TM3 and TM4, and an extracellular C-terminal domain. Five subunits from varied families (α1-6, β1-3, γ1-3, δ, ε, π, θ, ρ1-3) combine to form the heteropentameric GABAA receptor. TM2 from each subunit participates in the formation of the ion pore of the receptor, and α1β2γ2 is the major subtype in the brain that accounts for 43% of all GABAA receptors. Regulation Phosphorylation is an important mechanism for the modulation of GABAA receptor function. GABRB2 includes a consensus sequence for a calmodulin-dependent protein kinase II within exon 10 which is only expressed by β2L. As a result, upon repetitive stimulation, the β2L isoform-containing GABAA receptors are more vulnerable to run-down than those containing the short isoforms. Accordingly, ATP depletion reduces the inhibitory transmission of the GABAergic system due to GABAA receptor rundown through β2. Since this rundown occasioned by the presence of β2L would lead to improved maintenance of survival-favoring activities such as hunting and food gathering in the face of energy deprivation, it could be selected as an evolutionary advantage over the shorter isoforms. Multiple lines of evidence confirmed the epigenetic regulation of GABRB2 gene expression via methylation and imprinting. GABRB2 mRNA expression level varied with germline genotypes, and with the gender of the parent in accord with the process of imprinting.
Function GABRB2 is highly expressed in the brain where it plays its major role. In the immature brain, GABAA receptors participate in excitatory transmission, which is important to synaptogenesis, neurogenesis, and the formation of the glutamatergic system. In the mature brain, GABAA receptors fulfill their conventional inhibitory role, with the β2 subunits participating in some of the fastest inhibitory transmissions that prevent hyperexcitability, regulate the stress response of the hypothalamic-pituitary-adrenal axis, as well as pain signals mediated by the thalamus. Moreover, GABRB2 is associated with cognitive function, energy regulation, time perception, and the maintenance of efferent synaptic terminals in the mature ear. Clinical significance GABRB2 is associated with a spectrum of neuropsychiatric disorders, and displays differential gene expression between tumor and non-tumor tissues. Psychiatric disorders Schizophrenia Single nucleotide polymorphisms (SNPs) in GABRB2 were first associated with schizophrenia (SCZ) in Han Chinese, and confirmed subsequently for German, Portuguese, and Japanese SCZ patients. Furthermore, their significant associations have been extended to cognitive function, psychosis, and neuroleptic-induced tardive dyskinesia in schizophrenics. Recurrent copy number variations (CNVs) in GABRB2 were likewise associated with schizophrenia. GABRB2 expression was decreased in genotype and age-dependent manners, with reduced β2L/β2S ratios in schizophrenics serving as a key determinant of the response of receptor function to the energy status. The regulation of its expression by methylation and imprinting, as well as the N-glycosylation of the β2-subunit, were altered in SCZ. That GABRB2 is both a recombination hotspot and subject to positive selection could be an important factor in the widespread occurrence of SCZ. Gabrb2-knockout mice displayed schizophrenia-like behavior including prepulse inhibition deficit and antisocial behavior that were ameliorated by the antipsychotic risperidone, strongly supporting the proposal based on postmortem SCZ brains that GABRB2 represents the key genetic factor in SCZ etiology. Other psychiatric disorders GABRB2 was significantly associated with bipolar disorder, with a genotype-dependent decrease in GABRB2 mRNA levels weaker than that observed in SCZ. In major depressive disorder, the expressions of GABAA subunit genes were altered, and the expression of GABRB2 was significantly decreased in the anterior cingulate cortex, in the postmortem brains of patients. The expression of GABRB2 was significantly increased in the internet gaming disorder group, and GABRB2 was the downstream target for two circulating microRNAs, viz. hsa-miR-26b-5p and hsa-miR-652-3p, which were significantly downregulated in these subjects. The GABAergic system was suggested to be a factor in the physiopathology of premenstrual dysphoric disorder (PMDD). GABA levels were altered in the brain of PMDD patients. Two highly recurrent copy number variations in GABRB2 were associated with PMDD in Chinese and German patients, providing thereby a possible explanation of part of the complex psychological symptoms of PMDD. Drug dependence SNPs in GABRB2 were significantly associated with alcohol dependence and consumption in Southwestern Native American, Finnish, Scottish, and Sydney populations. Chronic alcohol administration induced an increase in the expression of Gabrb2 in a rat model, and sleep time was decreased in Gabrb2-knockout mice.
SNPs in GABRB2 were significantly associated with heroin addiction in African American subjects. Haplotypes in GABRB2 yielded a significant association with heroin dependence in the Chinese population. Neurological disorders Epilepsy Numerous de novo mutations in GABRB2 were associated with infantile and early childhood epileptic encephalopathy (IECEE). As well, SNPs in GABRB2 were significantly associated with epilepsy in the North Indian population. Moreover, Gabrb2-knockout mice displayed audiogenic epilepsy, which further confirmed the contribution of GABRB2 to the etiology of epilepsy. Autism spectrum disorder The density of GABAA receptors showed a significant reduction in autistic brains, and SNPs in GABRB2 were significantly associated with autism. De novo pathogenic mutations in the GABRB2 gene contribute to the physiopathology of Rett syndrome. β2 subunit mRNA expression level was subjected to significant upregulation in a mouse model of Rett syndrome. Neurodegenerative disorders Deficits in the GABAergic system and decreased levels of GABA were reported in Alzheimer's disease (AD). An SNP near GABRB2 was associated with AD. Two SNPs in GABRB2 were significantly associated with frontotemporal dementia (FTD) risk, and GABRB2 was downregulated in a cellular system of FTD and a mouse model of tauopathy. Cancer Genomic classifiers including GABRB2 could differentiate correctly between malignant and benign nodules, and GABRB2 alone or in combination with other genes correctly distinguished between malignant and benign tumors. GABRB2 was upregulated and hypomethylated in papillary thyroid carcinoma. The downregulation of GABRB2 enhanced the apoptotic cell death and decreased proliferation, migration, and invasiveness of thyroid cancer cells. GABRB2 was upregulated in adrenocortical carcinoma and salivary gland cancer, but downregulated in patients with colorectal cancer, brain tumors, kidney renal clear cell carcinoma, and lung cancer. Therapeutic implications The β2 subunit-containing GABAA receptors are more sensitive to GABA. Tyrosine and proline residues in the Cys-loop of this subunit were important elements in the binding and response to GABA, and the subunit also mediated the receptor binding of alcohol and anesthetics, anticonvulsive activity of loreclezole, hypothermic response to etomidate, as well as the sedative effects of both etomidate and loreclezole. It was identified as a target for the endocannabinoid 2-arachidonylglycerol, and Gabrb2 expression was upregulated by the antiepileptic drug qingyangshenylycosides and downregulated by the opioid oxycodone. The wide-ranging involvement of GABRB2 and its gene products in neuropsychiatric pharmacology is in accord with their central roles in inhibitory signaling in the brain. See also GABAA receptor Notes References External links Ion channels
GABRB2
Chemistry
2,346
60,330,183
https://en.wikipedia.org/wiki/Structural%20reliability
Structural reliability is about applying reliability engineering theories to buildings and, more generally, structural analysis. Reliability is also used as a probabilistic measure of structural safety. The reliability of a structure is defined as the complement of the probability of failure. Failure occurs when the total applied load is larger than the total resistance of the structure. Structural reliability has become known as a design philosophy in the twenty-first century, and it might replace traditional deterministic ways of design and maintenance. Theory In structural reliability studies, both loads and resistances are modeled as probabilistic variables. Using this approach, the probability of failure of a structure is calculated. When loads and resistances are explicit and have their own independent distribution functions, the probability of failure could be formulated as follows. where is the probability of failure, is the cumulative distribution function of resistance (R), and is the probability density of load (S). However, in most cases, the distributions of loads and resistances are not independent and the probability of failure is defined via the following more general formula. where X is the vector of the basic variables, and G(X), called the limit state function, defines the failure domain over whose boundary the integral is taken; this limit state surface may be a line, a surface, or a volume. Solution approaches Analytical solutions In some cases when load and resistance are explicitly expressed (as in the first case above) and their distributions are normal, the integral has a closed-form solution as follows. Simulation In most cases load and resistance are not normally distributed. Therefore, solving the integrals above analytically is impossible. Using Monte Carlo simulation is an approach that could be used in such cases. References Reliability analysis Reliability engineering Structural engineering
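In the standard notation of the field, the independent-variable case, the general case, and the normal closed form read

P_f = \int_{-\infty}^{\infty} F_R(s)\, f_S(s)\, ds,
\qquad
P_f = \int_{G(x) \le 0} f_X(x)\, dx,
\qquad
P_f = \Phi(-\beta), \;\; \beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}},

where Φ is the standard normal distribution function and β is the reliability index. A Monte Carlo estimate of the first case takes a few lines of Python (an illustrative sketch; the parameter values are made up for the example):

import random

def mc_failure_probability(mu_r, sd_r, mu_s, sd_s, n=1_000_000):
    """Estimate P(R < S) for independent normal resistance R and load S."""
    failures = sum(
        random.gauss(mu_r, sd_r) < random.gauss(mu_s, sd_s)
        for _ in range(n)
    )
    return failures / n

# Resistance ~ N(5, 0.5), load ~ N(3, 0.8): beta = 2 / sqrt(0.89) ~ 2.12,
# so the exact answer Phi(-beta) is about 0.017.
print(mc_failure_probability(5.0, 0.5, 3.0, 0.8))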
Structural reliability
Engineering
350
45,717,359
https://en.wikipedia.org/wiki/Protocoleoptera
The Protocoleoptera are a paraphyletic group of extinct beetles, containing the earliest and most primitive lineages of beetles. They represented the dominant group of beetles during the Permian, but were largely replaced by modern beetle groups during the following Triassic. Protocoleopterans typically possess prognathous (horizontal) heads, distinctive elytra with regular window punctures, cuticles with tubercles or scales, as well as a primitive pattern of ventral sclerites, similar to the modern archostematan families Ommatidae and Cupedidae. They are thought to have been xylophagous and wood boring. Nomenclature Protocoleoptera was originally proposed by Robert John Tillyard in 1924 for the extinct genus Protocoleus, assigned to the family Protocoleidae. Protocoleus is now considered a member of the extinct order Protelytroptera (a stem-group of the modern Dermaptera, the earwigs), which would make Protocoleoptera in this sense a synonym of the order. Roy Crowson later reused the name "Protocoleoptera" to refer to Early Permian beetles such as the Tshekardocoleidae, while establishing the Archecoleoptera for Late Permian beetles. Because Protocoleoptera in its original sense is a synonym of Protelytroptera, Cai et al. (2022) proposed a replacement name, Alphacoleoptera. Some authors reject the use of Protocoleoptera and Archecoleoptera and include their members within a broad concept of the suborder Archostemata instead. Taxonomy The taxonomic naming scheme of early beetles currently has no consensus, with several separate classification schemes proposed for higher-level clades within the stem-group. It is generally agreed that Tshekardocoleidae is the earliest diverging group among the major families. Coleoptera †Coleopsis Kirejtshuk et al., 2014 †Tshekardocoleidae (Early Permian) †Permocupedidae (Early Permian–Middle Triassic) †Taldycupedidae (Middle–Late Permian) †Ademosynidae (Late Jurassic–Early Cretaceous) †Permosynidae (Permian–Cretaceous, form family?) †Rhombocoleidae (Permian–Cretaceous) †Triadocupedidae (Triassic) †Asiocoleidae (Permian–Jurassic) †Peltosynidae (Middle–Late Triassic) References Insect suborders Paraphyletic groups
Protocoleoptera
Biology
524
6,649,191
https://en.wikipedia.org/wiki/Bombesin%20receptor
The bombesin receptors are a group of G-protein coupled receptors which bind bombesin. Three bombesin receptors are currently known: BB1, previously known as Neuromedin B receptor BB2, previously known as Gastrin-releasing peptide receptor BB3, previously known as Bombesin-like receptor 3 External links G protein-coupled receptors
Bombesin receptor
Chemistry
76
2,866,667
https://en.wikipedia.org/wiki/Molecular%20Biology%20%28journal%29
Molecular Biology is a scientific journal which covers a wide scope of problems related to molecular, cell, and computational biology including genomics, proteomics, bioinformatics, molecular virology and immunology, molecular development biology, and molecular evolution. Molecular Biology publishes reviews, mini-reviews, experimental, and theoretical works, short communications and hypotheses. In addition, the journal publishes book reviews and meeting reports. The journal also publishes special issues devoted to most rapidly developing branches of physical-chemical biology and to the most outstanding scientists on the occasion of their anniversary birthdays. The journal is published in English and Russian versions by Nauka. External links Molecular and cellular biology journals Multilingual journals Publications with year of establishment missing Nauka academic journals English-language journals Russian-language journals Springer Science+Business Media academic journals
Molecular Biology (journal)
Chemistry
170
17,017,917
https://en.wikipedia.org/wiki/Software%20studies
Software studies is an emerging interdisciplinary research field, which studies software systems and their social and cultural effects. The implementation and use of software has been studied in recent fields such as cyberculture, Internet studies, new media studies, and digital culture, yet prior to software studies, software was rarely ever addressed as a distinct object of study. To study software as an artifact, software studies draws upon methods and theory from the digital humanities and from computational perspectives on software. Methodologically, software studies usually differs from the approaches of computer science and software engineering, which concern themselves primarily with software in information theory and in practical application; however, these fields all share an emphasis on computer literacy, particularly in the areas of programming and source code. This emphasis on analysing software sources and processes (rather than interfaces) often distinguishes software studies from new media studies, which is usually restricted to discussions of interfaces and observable effects. History The conceptual origins of software studies include Marshall McLuhan's focus on the role of media in themselves, rather than the content of media platforms, in shaping culture. Early references to the study of software as a cultural practice appear in Friedrich Kittler's essay, "Es gibt keine Software", Lev Manovich's Language of New Media, and Matthew Fuller's Behind the Blip: Essays on the Culture of Software. Much of the impetus for the development of software studies has come from video game studies, particularly platform studies, the study of video games and other software artifacts in their hardware and software contexts. New media art, software art, motion graphics, and computer-aided design are also significant software-based cultural practices, as is the creation of new protocols and platforms. The first conference events in the emerging field were Software Studies Workshop 2006 and SoftWhere 2008. In 2008, MIT Press launched a Software Studies book series with an edited volume of essays (Fuller's Software Studies: A Lexicon), and the first academic program was launched, (Lev Manovich, Benjamin H. Bratton, and Noah Wardrip-Fruin's "Software Studies Initiative" at U. California San Diego). In 2011, a number of mainly British researchers established Computational Culture, an open-access peer-reviewed journal. The journal provides a platform for "inter-disciplinary enquiry into the nature of the culture of computational objects, practices, processes and structures." Related fields Software studies is closely related to a number of other emerging fields in the digital humanities that explore functional components of technology from a social and cultural perspective. Software studies' focus is at the level of the entire program, specifically the relationship between interface and code. Notably related are critical code studies, which is more closely attuned to the code rather than the program, and platform studies, which investigates the relationships between hardware and software. See also Cultural studies Digital sociology References Footnotes Bibliography Further reading External links Software studies bibliography at Monoskop.org Computing culture Cultural studies Digital humanities Science and technology studies Software Technological change
Software studies
Technology,Engineering
612
4,917,988
https://en.wikipedia.org/wiki/Iota%20Leonis
Iota Leonis, Latinized from ι Leonis, is a triple star system in the constellation Leo. The system is fairly close to the Sun, at only 79 light-years (24.2 parsecs) away, based on its parallax. The system has a combined apparent magnitude of 4.00, making it faintly visible to the naked eye. It is moving closer to the Sun with a radial velocity of −10 km/s. Iota Leonis has a spectral type of F3 IV, matching that of an F-type subgiant star. It is a spectroscopic binary, which means it is a binary star with components that are too close together to be able to resolve individually through a telescope. In this case, light from only the primary star can be detected, and it is considered single-lined. The third component in the star system is designated Iota Leonis B. It orbits the central pair almost every 200 years, and with its periastron passage in 1948, the separation between the two is steadily growing. Iota Leonis B has a mass approximately 8% greater than that of the Sun. It is a G-type main-sequence star, like the Sun. Name In Chinese, (), meaning Right Wall of Supreme Palace Enclosure, refers to an asterism consisting of ι Leonis, β Virginis, σ Leonis, θ Leonis and δ Leonis. Consequently, the Chinese name for ι Leonis itself is (, .), representing (), meaning The Second Western General. 西次將 (Xīcìjiāng), spelled Tsze Tseang by R.H. Allen, means "the Second General" See also List of stars in Leo Chinese star names References F-type subgiants Triple star systems Spectroscopic binaries Suspected variables Leonis, Iota Leo (constellation) Durchmusterung objects Leonis, 78 055642 99028 4399
Iota Leonis
Astronomy
403
56,007,650
https://en.wikipedia.org/wiki/NGC%20502
NGC 502, also occasionally referred to as PGC 5034 or UGC 922, is a lenticular galaxy in the constellation Pisces. It is located approximately 113 million light-years from the Solar System and was discovered on 25 September 1862 by German astronomer Heinrich Louis d'Arrest. When the Morphological Catalogue of Galaxies was published between 1962 and 1974, the identifications of NGC 502 and NGC 505 were reversed. In reality, NGC 502 is equal to MCG +01-04-041 and not MCG +01-04-043 as noted in the catalogue. Observation history D'Arrest discovered NGC 502 using an 11-inch refractor in Copenhagen. His position, which he measured on four separate nights, matches both UGC 922 and PGC 5034. John Louis Emil Dreyer, creator of the New General Catalogue, described the galaxy as "considerably bright, small, round, brighter middle and nucleus". See also Elliptical galaxy List of NGC objects (1–1000) Pisces (constellation) References External links SEDS Lenticular galaxies Pisces (constellation) 0502 00922 +01-04-041 005034 J01225553+0902570 Astronomical objects discovered in 1862 Discoveries by Heinrich Louis d'Arrest
NGC 502
Astronomy
264
35,778,963
https://en.wikipedia.org/wiki/Cypionic%20acid
Cypionic acid, also known as cyclopentylpropionic acid, is an aliphatic carboxylic acid with the molecular formula C8H14O2. Its salts and esters are known as cypionates or cipionates. The primary use of cypionic acid is in pharmaceutical formulations. Cypionic acid is used to prepare ester prodrugs which have increased half-lives relative to the parent compound. The lipophilicity of the cypionate group allows the prodrug to be sequestered in fat depots after intramuscular injection. The ester group is slowly hydrolyzed by metabolic enzymes, releasing steady doses of the active ingredient. Examples include testosterone cypionate, estradiol cypionate, hydrocortisone cypionate, oxabolone cipionate, and mesterolone cypionate. References Carboxylic acids Cyclopentyl compounds
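The pharmacokinetic idea above (a slowly emptying depot setting the apparent half-life of the active drug) can be illustrated with a minimal first-order model. This is a sketch only: the two rate constants are invented for illustration and are not values from the article.

```python
# First-order depot model for an ester prodrug (illustrative rates only):
# the injected depot empties at k_release (slow hydrolysis/absorption from
# the fat depot) and the released free drug is cleared at k_elim.
import math

K_RELEASE = 0.10  # 1/day, assumed slow release from the depot
K_ELIM = 1.00     # 1/day, assumed fast elimination of the free drug

def free_drug_fraction(t: float) -> float:
    """Fraction of the dose present as free drug at time t (days),
    from the standard Bateman equation for an A -> B -> out cascade."""
    return (K_RELEASE / (K_ELIM - K_RELEASE)) * (
        math.exp(-K_RELEASE * t) - math.exp(-K_ELIM * t)
    )

for day in (1, 7, 14, 28):
    print(day, round(free_drug_fraction(day), 3))
# After the initial rise, levels decay at the slow release rate ("flip-flop"
# kinetics): the depot, not elimination, sets the effective half-life.
```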
Cypionic acid
Chemistry
203
2,574,661
https://en.wikipedia.org/wiki/Bergius%20process
The Bergius process is a method of production of liquid hydrocarbons for use as synthetic fuel by hydrogenation of high-volatile bituminous coal at high temperature and pressure. It was first developed by Friedrich Bergius in 1913. In 1931 Bergius was awarded the Nobel Prize in Chemistry for his development of high-pressure chemistry. Process The coal is finely ground and dried in a stream of hot gas. The dry product is mixed with heavy oil recycled from the process. A catalyst is typically added to the mixture. A number of catalysts have been developed over the years, including tungsten or molybdenum disulfide, tin or nickel oleate, and others. Alternatively, iron sulfide present in the coal may have sufficient catalytic activity for the process, which was the original Bergius process. The mixture is pumped into a reactor. The reaction occurs at between 400 and 500 °C and 20 to 70 MPa hydrogen pressure. The reaction produces heavy oils, middle oils, gasoline, and gases. The overall reaction can be summarized as follows: CnH2n+2−2x + x H2 → CnH2n+2 (where x = degrees of unsaturation). The immediate product from the reactor must be stabilized by passing it over a conventional hydrotreating catalyst. The product stream is high in cycloalkanes and aromatics, low in alkanes (paraffins) and very low in alkenes (olefins). The different fractions can be passed to further processing (cracking, reforming) to output synthetic fuel of desirable quality. If passed through a process such as platforming, most of the cycloalkanes are converted to aromatics and the recovered hydrogen recycled to the process. The liquid product from platforming will contain over 75% aromatics and has a Research Octane Number (RON) of over 105. Overall, about 97% of input carbon fed directly to the process can be converted into synthetic fuel. However, any carbon used in generating hydrogen will be lost as carbon dioxide, so reducing the overall carbon efficiency of the process. There is a residue of unreactive tarry compounds mixed with ash from the coal and catalyst. To minimise the loss of carbon in the residue stream, it is necessary to have a low-ash feed. Typically the coal should be <10% ash by weight. The hydrogen required for the process can also be produced from coal or the residue by steam reforming. A typical hydrogen demand is ~80 kg hydrogen per ton of dry, ash-free coal. Generally, this process is similar to hydrogenation. The output is at three levels: heavy oil, middle oil, gasoline. The middle oil is hydrogenated in order to get more gasoline and the heavy oil is mixed with the coal again and the process restarts. In this way, heavy oil and middle oil fractions are also reused in this process. The most recent evolution of Bergius' work is the 2-stage hydroliquefaction plant at Wilsonville, AL, which operated during 1981–85. Here a coal extract was prepared under heat and hydrogen pressure using finely pulverized coal and recycled donor solvent. As the coal molecule is broken down, free radicals are formed which are immediately stabilized by absorption of H atoms from the donor solvent. Extract then passes to a catalytic ebullated-bed hydrocracker (H-Oil unit) fed by additional hydrogen, forming lower molecular weight hydrocarbons and splitting off sulfur, oxygen and nitrogen originally present in the coal. Part of the liquid product is hydrogenated donor solvent which is returned to Stage I. The balance of liquid product is fractionated by distillation yielding various boiling range products and an ashy residue. 
Ashy residue goes to a Kerr-McGee critical solvent deashing unit which yields additional liquid product and a high-ash material containing unreacted coal and heavy residuum, which in a commercial plant would be gasified to make the H2 needed to feed the process. Parameters can be adjusted to avoid directly gasifying any of the coal entering the plant. Alternative versions of the plant configuration could use L-C Fining and/or an antisolvent deashing unit. Typical species in the donor solvent are fused-ring aromatics (tetrahydronaphthalene and up) or the analogous heterocycles. History Friedrich Bergius developed the process during his habilitation. A technique for the high-pressure and high-temperature chemistry of carbon-containing substrates resulted in a patent in 1913. In this process, liquid hydrocarbons used as synthetic fuel are produced by hydrogenation of lignite (brown coal). He developed the process well before the commonly known Fischer–Tropsch process. Karl Goldschmidt invited him to build an industrial plant at his factory, Th. Goldschmidt AG (now known as Evonik Industries), in 1914. The production began only in 1919, after World War I ended, when the need for fuel was already declining. The technical problems, inflation and the constant criticism of Franz Joseph Emil Fischer, which changed to support after a personal demonstration of the process, made progress slow, and Bergius sold his patent to BASF, where Carl Bosch worked on it. Before World War II several plants were built with an annual capacity of 4 million tons of synthetic fuel. These plants were extensively used during World War II to supply Germany with fuel and lubricants. Use Coal hydrogenation is not used commercially any more. The Bergius process was extensively used by Brabag, a cartel firm of Nazi Germany. Plants that used the process were targeted for bombing during the Oil Campaign of World War II. At present there are no plants operating the Bergius process or its derivatives commercially. The largest demonstration plant was the 200 ton per day plant at Bottrop, Germany, operated by Ruhrkohle, which ceased operation in 1993. There are reports of a Chinese company constructing a plant with a capacity of 4,000 tons per day. It was expected to become operational in 2007, but there has been no confirmation that this was achieved. Towards the end of World War II, the United States began heavily financing research into converting coal to gasoline, including money to build a series of pilot plants. The project was enormously helped by captured German technology. One plant using the Bergius process was built in Louisiana, Missouri and began operation about 1946. Located along the Mississippi river, this plant was producing gasoline in commercial quantities by 1948. The Louisiana process method produced automobile gasoline at a price slightly higher than, but comparable to, that of petroleum-based gasoline, and of higher quality. The facility was shut down in 1953 by the Eisenhower administration, allegedly after intense lobbying by the oil industry. See also Synthetic Liquid Fuels Program Fischer–Tropsch process Karrick process Coal-water slurry fuel References External links The Early Days of Coal Research, U.S. Department of Energy webpage Coal Catalysis Synthetic fuel technologies German inventions 1913 in science
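The carbon-efficiency remark above can be made concrete with a rough mass balance. Only the 80 kg/t hydrogen demand and the 97% direct conversion come from the text; the 80% carbon content of dry, ash-free coal and the net gasification stoichiometry (C + 2 H2O → CO2 + 2 H2) are illustrative assumptions:

```python
# Rough mass balance for the carbon-efficiency point above.
# Assumptions (illustrative, not from the article): daf coal is ~80% carbon
# by mass, and hydrogen is made by net gasification C + 2 H2O -> CO2 + 2 H2,
# i.e. 12 kg of carbon yield 4 kg of H2 (3 kg C sacrificed per kg H2).
COAL_KG = 1000.0          # dry, ash-free coal fed to liquefaction
CARBON_FRACTION = 0.80    # assumed carbon content of daf coal
H2_DEMAND_KG = 80.0       # article: ~80 kg H2 per ton of daf coal
DIRECT_CONVERSION = 0.97  # article: ~97% of directly fed carbon to fuel
C_PER_KG_H2 = 3.0         # from the assumed gasification stoichiometry

carbon_fed = COAL_KG * CARBON_FRACTION        # 800 kg C liquefied
carbon_for_h2 = H2_DEMAND_KG * C_PER_KG_H2    # 240 kg C burned to CO2
overall = DIRECT_CONVERSION * carbon_fed / (carbon_fed + carbon_for_h2)
print(f"overall carbon efficiency ~ {overall:.0%}")  # ~75%, not 97%
```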
Bergius process
Chemistry
1,410
3,447,756
https://en.wikipedia.org/wiki/Carbon-based%20life
Carbon is a primary component of all known life on Earth, and represents approximately 45–50% of all dry biomass. Carbon compounds occur naturally in great abundance on Earth. Complex biological molecules consist of carbon atoms bonded with other elements, especially oxygen and hydrogen and frequently also nitrogen, phosphorus, and sulfur (collectively known as CHNOPS). Because carbon atoms are lightweight and relatively small, carbon-based molecules are easy for enzymes to manipulate. Carbonic anhydrase is part of this process. Carbon has an atomic number of 6 on the periodic table. The carbon cycle is a biogeochemical cycle that is important in maintaining life on Earth over a long time span. The cycle includes carbon sequestration and carbon sinks. Plate tectonics are needed for life over a long time span, and carbon-based life is important in the plate tectonics process. Iron- and sulfur-based anoxygenic photosynthetic life forms that lived from 3.80 to 3.85 billion years ago on Earth produced an abundance of black shale deposits. These shale deposits increase heat flow and crust buoyancy, especially on the sea floor, helping to increase plate tectonics. Talc is another mineral that helps drive plate tectonics. Inorganic processes also help drive plate tectonics. Carbon-based photosynthetic life caused a rise in oxygen on Earth. This increase of oxygen helped plate tectonics form the first continents. It is frequently assumed in astrobiology that if life exists elsewhere in the Universe, it will also be carbon-based. Critics, like Carl Sagan in 1973, refer to this assumption as carbon chauvinism. Characteristics Carbon is capable of forming a vast number of compounds, more than any other element, with almost ten million compounds described to date, and yet that is but a fraction of the number of compounds that are theoretically possible under standard conditions. The enormous diversity of carbon compounds, known as organic compounds, has led to a distinction between them and the inorganic compounds that do not contain carbon. The branch of chemistry that studies organic compounds is known as organic chemistry. Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass, after hydrogen, helium, and oxygen. Carbon's widespread abundance, its ability to form stable bonds with numerous other elements, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enable it to serve as a common element of all known living organisms. In a 2018 study, carbon was found to make up approximately 550 billion tons of the mass of all life on Earth. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen. The most important characteristics of carbon as a basis for the chemistry of cellular life are that each carbon atom is capable of forming up to four valence bonds with other atoms simultaneously, and that the energy required to make or break a bond with a carbon atom is at an appropriate level for building large and complex molecules which may be both stable and reactive. Carbon atoms bond readily to other carbon atoms; this allows the building of arbitrarily long macromolecules and polymers in a process known as catenation (Oxford English Dictionary, 1st edition (1889), s.v. 
'chain', definition 4g). "What we normally think of as 'life' is based on chains of carbon atoms, with a few other atoms, such as nitrogen or phosphorus", per Stephen Hawking in a 2008 lecture, "carbon [...] has the richest chemistry." Norman Horowitz was the head of the Jet Propulsion Laboratory's bioscience section for the first U.S. mission, the 1976 Viking lander, to successfully land an unmanned probe on the surface of Mars. He considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival on other planets. However, the results of this mission indicated that Mars was presently extremely hostile to carbon-based life. He also considered that, in general, there was only a remote possibility that non-carbon life forms would be able to evolve with genetic information systems capable of self-replication and adaptation. Key molecules The most notable classes of biological macromolecules used in the fundamental processes of living organisms include: Proteins, which are the building blocks from which the structures of living organisms are constructed (this includes almost all enzymes, which catalyse organic chemical reactions). Amino acids, which make up proteins and are specified by the genetic code of life. Nucleic acids, which carry genetic information. Ribonucleic acid (RNA), used in the production of proteins. Deoxyribonucleic acid (DNA), the nucleic acid that carries genetic information. Peptides, building blocks of proteins. Lipids, which also store energy, but in a more concentrated form, and which may be stored for extended periods in the bodies of animals. Phospholipids, used in cell membranes. Carbohydrates, which store energy in a form that can be used by living cells. Lectins, proteins that bind carbohydrates. Monosaccharides, simple sugars, including glucose and fructose. Disaccharides, sugars soluble in water, including lactose, maltose, and sucrose. Starch, made of amylose and amylopectin, the energy storage of plants. Glycogen, energy storage in animals. Cellulose, a biopolymer, found in the cell walls of plants. Fatty acids, of two types, saturated and unsaturated (oils), which are stored energy. Essential fatty acids, needed but not synthesized by the human body. Steroids, used as hormones and in cell membranes. Neurotransmitters, which are signaling molecules. Cholesterol, used in the brain and spinal cord of animals. Waxes, found in beeswax and lanolin; plant waxes are used for protection. Water Liquid water is essential for carbon-based life. Chemical bonding of carbon molecules requires liquid water. Water has the chemical properties needed to pair as a solvent with carbon compounds. Water provides the reversible hydration of carbon dioxide, which is needed in carbon-based life. All life on Earth uses the same biochemistry of carbon. Water is important to the activity of carbonic anhydrase, which mediates the interaction between carbon dioxide and water. The carbonic anhydrases are a family of carbon-based enzymes needed for the hydration of carbon dioxide and for acid–base homeostasis, which regulates pH levels in life. In plant life, liquid water is needed for photosynthesis, the biological process plants use to convert light energy and carbon dioxide into chemical energy. Water makes up 55% to 60% of the human body by weight. Other candidates A few other elements have been proposed as candidates for supporting biological systems and processes as fundamentally as carbon does, for example, processes such as metabolism. The most frequently suggested alternative is silicon. 
Silicon, with an atomic number of 14 and more than twice the size of carbon, shares a group in the periodic table with carbon, can also form four valence bonds, and also bonds to itself readily, though generally in the form of crystal lattices rather than long chains. Despite these similarities, silicon is considerably more electropositive than carbon, and silicon compounds do not readily recombine into different permutations in a manner that would plausibly support lifelike processes. Silicon is abundant on Earth, but as it is more electropositive, in a water-based environment it forms Si–O bonds rather than Si–Si bonds. Boron does not react with acids and does not form chains naturally. Thus boron is not a candidate for life. Arsenic is toxic to life, and its possible candidacy has been rejected. In the past (1960s–1970s) other candidates for life were considered plausible, but with time and more research, only carbon has been shown to have the complexity and stability to make the large molecules and polymers essential for life. Fiction Speculations about the chemical structure and properties of hypothetical non-carbon-based life have been a recurring theme in science fiction. Silicon is often used as a substitute for carbon in fictional lifeforms because of its chemical similarities. In cinematic and literary science fiction, when man-made machines cross from non-living to living, this new form is often presented as an example of non-carbon-based life. Since the advent of the microprocessor in the early 1970s, such machines are often classed as "silicon-based life". Other examples of fictional "silicon-based life" can be seen in the 1967 episode "The Devil in the Dark" from Star Trek: The Original Series, in which a living rock creature's biochemistry is based on silicon, and in the 1994 The X-Files episode "Firewalker", in which a silicon-based organism is discovered in a volcano. In the 1984 film adaptation of Arthur C. Clarke's 1982 novel 2010: Odyssey Two, a character argues, "Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect." In JoJolion, the eighth part of the larger JoJo's Bizarre Adventure series, a mysterious race of silicon-based lifeforms known as "Rock Humans" serve as the primary antagonists. See also Carbon source (biology) Cell biology CHONPS, a mnemonic acronym for the order of the most common elements in living organisms: carbon, hydrogen, oxygen, nitrogen, phosphorus, and sulfur Habitable zone for complex life References External links Astrobiology Biology and pharmacology of chemical elements Carbon Life
Carbon-based life
Chemistry,Astronomy,Biology
1,980
38,324,409
https://en.wikipedia.org/wiki/DNA%20digital%20data%20storage
DNA digital data storage is the process of encoding and decoding binary data to and from synthesized strands of DNA. While DNA as a storage medium has enormous potential because of its high storage density, its practical use is currently severely limited because of its high cost and very slow read and write times. In June 2019, scientists reported that all 16 GB of text from the English Wikipedia had been encoded into synthetic DNA. In 2021, scientists reported that a custom DNA data writer had been developed that was capable of writing data into DNA at 1 Mbps. Encoding methods Many methods for encoding data in DNA are possible. The optimal methods are those that make economical use of DNA and protect against errors. If the message DNA is intended to be stored for a long period of time, for example, 1,000 years, it is also helpful if the sequence is obviously artificial and the reading frame is easy to identify. Encoding text Several simple methods for encoding text have been proposed. Most of these involve translating each letter into a corresponding "codon", consisting of a unique small sequence of nucleotides in a lookup table. Some examples of these encoding schemes include Huffman codes, comma codes, and alternating codes. Encoding arbitrary data To encode arbitrary data in DNA, the data is typically first converted into ternary (base 3) data rather than binary (base 2) data. Each digit (or "trit") is then converted to a nucleotide using a lookup table. To prevent homopolymers (repeating nucleotides), which can cause problems with accurate sequencing, the result of the lookup also depends on the preceding nucleotide. Using the example lookup table (previous nucleotide A: trits 0, 1, 2 encode C, G, T; previous C: G, T, A; previous G: T, A, C; previous T: A, C, G), if the previous nucleotide in the sequence is T (thymine), and the trit is 2, the next nucleotide will be G (guanine); a runnable sketch of this scheme is shown below. Various systems may be incorporated to partition and address the data, as well as to protect it from errors. One approach to error correction is to regularly intersperse synchronization nucleotides between the information-encoding nucleotides. These synchronization nucleotides can act as scaffolds when reconstructing the sequence from multiple overlapping strands. In vivo The genetic code within living organisms can potentially be co-opted to store information. Furthermore, synthetic biology can be used to engineer cells with "molecular recorders" to allow the storage and retrieval of information stored in the cell's genetic material. CRISPR gene editing can also be used to insert artificial DNA sequences into the genome of the cell. For encoding developmental lineage data (molecular flight recorder), roughly 30 billion cell nuclei per mouse × 60 recording sites per nucleus × 7–15 bits per site yields about 2 terabytes per mouse written (but only very selectively read). In-vivo light-based direct image and data recording A proof-of-concept in-vivo direct DNA data recording system was demonstrated through incorporation of optogenetically regulated recombinases as part of an engineered "molecular recorder", which allows for direct encoding of light-based stimuli into engineered E. coli cells. This approach can also be parallelized to store and write text or data in 8-bit form through the use of physically separated individual cell cultures in cell-culture plates. This approach leverages the editing of a "recorder plasmid" by the light-regulated recombinases, allowing for identification of cell populations exposed to different stimuli. 
This approach allows for the physical stimulus to be directly encoded into the "recorder plasmid" through recombinase action. Unlike other approaches, this approach does not require manual design, insertion and cloning of artificial sequences to record the data into the genetic code. In this recording process, each individual cell population in each culture well of a cell-culture plate can be treated as a digital "bit", functioning as a biological transistor capable of recording a single bit of data. History The idea of DNA digital data storage dates back to 1959, when the physicist Richard P. Feynman, in "There's Plenty of Room at the Bottom: An Invitation to Enter a New Field of Physics", outlined the general prospects for the creation of artificial objects similar to objects of the microcosm (including biological) and having similar or even more extensive capabilities. In 1964–65, the Soviet physicist Mikhail Samoilovich Neiman published three articles about microminiaturization in electronics at the molecular-atomic level, which independently presented general considerations and some calculations regarding the possibility of recording, storage, and retrieval of information on synthesized DNA and RNA molecules. After the publication of Neiman's first paper, and after the editor had received the manuscript of his second paper (on 8 January 1964, as indicated in that paper), an interview with the cybernetician Norbert Wiener was published. Wiener expressed ideas about the miniaturization of computer memory that were close to those proposed independently by Neiman, who mentioned Wiener's ideas in the third of his papers. This story has been described in detail. One of the earliest uses of DNA storage occurred in a 1988 collaboration between artist Joe Davis and researchers from Harvard University. The image, stored in a DNA sequence in E. coli, was organized in a 5 x 7 matrix that, once decoded, formed a picture of an ancient Germanic rune representing life and the female Earth. In the matrix, ones corresponded to dark pixels while zeros corresponded to light pixels. In 2007 a device was created at the University of Arizona using addressing molecules to encode mismatch sites within a DNA strand. These mismatches could then be read out by performing a restriction digest, thereby recovering the data. In 2011, George Church, Sri Kosuri, and Yuan Gao carried out an experiment encoding a 659 kb book co-authored by Church. To do this, the research team used a two-to-one correspondence in which a binary zero was represented by either an adenine or cytosine and a binary one was represented by a guanine or thymine. After examination, 22 errors were found in the DNA. In 2012, George Church and colleagues at Harvard University published an article in which DNA was encoded with digital information that included an HTML draft of a 53,400 word book written by the lead researcher, eleven JPEG images and one JavaScript program. Multiple copies were added for redundancy, and 5.5 petabits can be stored in each cubic millimeter of DNA. The researchers used a simple code where bits were mapped one-to-one with bases, which had the shortcoming that it led to long runs of the same base, the sequencing of which is error-prone. This result showed that besides its other functions, DNA can also be another type of storage medium such as hard disk drives and magnetic tapes. 
In 2013, an article led by researchers from the European Bioinformatics Institute (EBI) and submitted at around the same time as the paper of Church and colleagues detailed the storage, retrieval, and reproduction of over five million bits of data. All the DNA files reproduced the information with an accuracy between 99.99% and 100%. The main innovations in this research were the use of an error-correcting encoding scheme to ensure the extremely low data-loss rate, as well as the idea of encoding the data in a series of overlapping short oligonucleotides identifiable through a sequence-based indexing scheme. Also, the sequences of the individual strands of DNA overlapped in such a way that each region of data was repeated four times to avoid errors. Two of these four strands were constructed backwards, also with the goal of eliminating errors. The costs per megabyte were estimated at $12,400 to encode data and $220 for retrieval. However, it was noted that the exponential decrease in DNA synthesis and sequencing costs, if it continues into the future, should make the technology cost-effective for long-term data storage by 2023. In 2013, software called DNACloud was developed by Manish K. Gupta and co-workers to encode computer files to their DNA representation. It implements a memory-efficient version of the algorithm proposed by Goldman et al. to encode (and decode) data to DNA (.dnac files). The long-term stability of data encoded in DNA was reported in February 2015, in an article by researchers from ETH Zurich. The team added redundancy via Reed–Solomon error correction coding and by encapsulating the DNA within silica glass spheres via sol-gel chemistry. In 2016, research by Church and Technicolor Research and Innovation was published in which 22 MB of an MPEG-compressed movie sequence were stored in and recovered from DNA. The recovery of the sequence was found to have zero errors. In March 2017, Yaniv Erlich and Dina Zielinski of Columbia University and the New York Genome Center published a method known as DNA Fountain that stored data at a density of 215 petabytes per gram of DNA. The technique approaches the Shannon capacity of DNA storage, achieving 85% of the theoretical limit. The method was not ready for large-scale use, as it costs $7,000 to synthesize 2 megabytes of data and another $2,000 to read it. In March 2018, the University of Washington and Microsoft published results demonstrating storage and retrieval of approximately 200 MB of data. The research also proposed and evaluated a method for random access of data items stored in DNA. In March 2019, the same team announced they had demonstrated a fully automated system to encode and decode data in DNA. Research published by Eurecom and Imperial College in January 2019 demonstrated the ability to store structured data in synthetic DNA. The research showed how to encode structured or, more specifically, relational data in synthetic DNA and also demonstrated how to perform data processing operations (similar to SQL) directly on the DNA as chemical processes. In April 2019, through a collaboration with TurboBeads Labs in Switzerland, Mezzanine by Massive Attack was encoded into synthetic DNA, making it the first album to be stored in this way. In June 2019, scientists reported that all 16 GB of Wikipedia have been encoded into synthetic DNA. In 2021, CATALOG reported that they had developed a custom DNA writer capable of writing data at 1 Mbps into DNA. 
The first article describing data storage on native DNA sequences via enzymatic nicking was published in April 2020. In the paper, scientists demonstrate a new method of recording information in the DNA backbone which enables bit-wise random access and in-memory computing. In 2021, a research team at Newcastle University led by N. Krasnogor implemented a stack data structure using DNA, allowing for last-in, first-out (LIFO) data recording and retrieval. Their approach used hybridization and strand displacement to record DNA signals in DNA polymers, which were then released in reverse order. The study demonstrated that data structure-like operations are possible in the molecular realm. The researchers also explored the limitations and future improvements for dynamic DNA data structures, highlighting the potential for DNA-based computational systems. Davos Bitcoin Challenge On January 21, 2015, Nick Goldman from the European Bioinformatics Institute (EBI), one of the original authors of the 2013 Nature paper, announced the Davos Bitcoin Challenge at the World Economic Forum annual meeting in Davos. During his presentation, DNA tubes were handed out to the audience, with the message that each tube contained the private key of exactly one bitcoin, all coded in DNA. The first one to sequence and decode the DNA could claim the bitcoin and win the challenge. The challenge was set for three years and would close if nobody claimed the prize before January 21, 2018. Almost three years later, on January 19, 2018, the EBI announced that a Belgian PhD student, Sander Wuyts, of the University of Antwerp and Vrije Universiteit Brussel, was the first one to complete the challenge. Alongside the instructions on how to claim the bitcoin (stored as a plain text and PDF file), the logo of the EBI, the logo of the company that printed the DNA (CustomArray), and a sketch of James Joyce were retrieved from the DNA. The Lunar Library The Lunar Library, launched on the Beresheet lander by the Arch Mission Foundation, carries information encoded in DNA, which includes 20 famous books and 10,000 images. DNA was chosen as one of the storage media because it can last a long time; the Arch Mission Foundation suggests that it can still be read after billions of years. The lander crashed on 11 April 2019 and was lost. DNA of things The concept of the DNA of Things (DoT) was introduced in 2019 by a team of researchers from Israel and Switzerland, including Yaniv Erlich and Robert Grass. DoT encodes digital data into DNA molecules, which are then embedded into objects. This gives the ability to create objects that carry their own blueprint, similar to biological organisms. In contrast to the Internet of things, which is a system of interrelated computing devices, DoT creates self-contained storage objects that are completely off-grid. As a proof of concept for DoT, the researchers 3D-printed a Stanford bunny that contains its own blueprint in the plastic filament used for printing. By clipping off a tiny bit of the ear of the bunny, they were able to read out the blueprint, multiply it and produce a next generation of bunnies. In addition, the ability of DoT to serve steganographic purposes was shown by producing indistinguishable lenses which contain a YouTube video integrated into the material. See also DNA computing DNA nanotechnology Nanobiotechnology Natural computing Plant-based digital data storage 5D optical data storage References Further reading DNA Sequencing Caught in Deluge of Data. 
The New York Times (NYTimes.com). DNA Molecular biology Storage media Computational biology
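A minimal sketch of the trit-to-nucleotide scheme described in the "Encoding arbitrary data" section above. The rotating lookup table is reconstructed from the article's own example (previous nucleotide T, trit 2 → G); it is consistent with that example but is otherwise illustrative rather than any particular published scheme's exact table:

```python
# Trit-to-nucleotide lookup reconstructed from the example in the text
# (previous nucleotide T, trit 2 -> G): each trit selects one of the three
# bases that differ from the previous base, so homopolymers cannot occur.
LOOKUP = {"A": "CGT", "C": "GTA", "G": "TAC", "T": "ACG"}

def encode_trits(trits, start="A"):
    """Encode a sequence of trits (0/1/2) into a nucleotide string."""
    prev, out = start, []
    for t in trits:
        prev = LOOKUP[prev][t]
        out.append(prev)
    return "".join(out)

def decode_bases(dna, start="A"):
    """Invert the encoding: recover the trits from a nucleotide string."""
    prev, trits = start, []
    for base in dna:
        trits.append(LOOKUP[prev].index(base))
        prev = base
    return trits

seq = encode_trits([0, 1, 2, 2, 0])
print(seq)  # CTGCG: no base is ever repeated back to back
assert decode_bases(seq) == [0, 1, 2, 2, 0]
```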
DNA digital data storage
Chemistry,Biology
2,879
34,694,497
https://en.wikipedia.org/wiki/Bar%20screen
A bar screen is a mechanical filter used to remove large objects, such as rags and plastics, from wastewater. It is part of the primary filtration flow and typically is the first, or preliminary, level of filtration, being installed at the influent to a wastewater treatment plant. They typically consist of a series of vertical steel bars spaced between 1 and 3 inches apart. Bar screens come in many designs. Some employ automatic cleaning mechanisms using electric motors and chains, some must be cleaned manually by means of a heavy rake. Items removed from the influent are called screenings and are collected in dumpsters and disposed of in landfills. As a bar screen collects objects, the water level will rise, and so they must be cleared regularly to prevent overflow. References Water treatment
Bar screen
Chemistry,Engineering,Environmental_science
163
14,333,272
https://en.wikipedia.org/wiki/Dominance-based%20rough%20set%20approach
The dominance-based rough set approach (DRSA) is an extension of rough set theory for multi-criteria decision analysis (MCDA), introduced by Greco, Matarazzo and Słowiński. The main change compared to the classical rough sets is the substitution of the indiscernibility relation by a dominance relation, which permits one to deal with inconsistencies typical to consideration of criteria and preference-ordered decision classes. Multicriteria classification (sorting) Multicriteria classification (sorting) is one of the problems considered within MCDA and can be stated as follows: given a set of objects evaluated by a set of criteria (attributes with preference-ordered domains), assign these objects to some pre-defined and preference-ordered decision classes, such that each object is assigned to exactly one class. Due to the preference ordering, improvement of evaluations of an object on the criteria should not worsen its class assignment. The sorting problem is very similar to the problem of classification, however, in the latter, the objects are evaluated by regular attributes and the decision classes are not necessarily preference ordered. The problem of multicriteria classification is also referred to as ordinal classification problem with monotonicity constraints and often appears in real-life applications when ordinal and monotone properties follow from the domain knowledge about the problem. As an illustrative example, consider the problem of evaluation in a high school. The director of the school wants to assign students (objects) to three classes: bad, medium and good (notice that class good is preferred to medium and medium is preferred to bad). Each student is described by three criteria: level in Physics, Mathematics and Literature, each taking one of three possible values bad, medium and good. Criteria are preference-ordered and improving the level from one of the subjects should not result in a worse global evaluation (class). As a more serious example, consider classification of bank clients, from the viewpoint of bankruptcy risk, into classes safe and risky. This may involve such characteristics as "return on equity (ROE)", "return on investment (ROI)" and "return on sales (ROS)". The domains of these attributes are not simply ordered but involve a preference order since, from the viewpoint of bank managers, greater values of ROE, ROI or ROS are better for clients being analysed for bankruptcy risk. Thus, these attributes are criteria. Neglecting this information in knowledge discovery may lead to wrong conclusions. Data representation Decision table In DRSA, data are often presented using a particular form of decision table. Formally, a DRSA decision table is a 4-tuple S = ⟨U, Q, V, f⟩, where U is a finite set of objects, Q is a finite set of criteria, V = ⋃_{q∈Q} V_q, where V_q is the domain of the criterion q, and f: U × Q → V is an information function such that f(x, q) ∈ V_q for every (x, q) ∈ U × Q. The set Q is divided into condition criteria (set C ≠ ∅) and the decision criterion (class) d. Notice that f(x, q) is an evaluation of object x on criterion q ∈ C, while f(x, d) is the class assignment (decision value) of the object. An example of decision table is shown in Table 1 below. Outranking relation It is assumed that the domain of a criterion q ∈ Q is completely preordered by an outranking relation ⪰_q; x ⪰_q y means that x is at least as good as (outranks) y with respect to the criterion q. Without loss of generality, we assume that the domain of q is a subset of the reals, V_q ⊆ ℝ, and that the outranking relation is a simple order between real numbers such that the following relation holds: x ⪰_q y ⟺ f(x, q) ≥ f(y, q). 
This relation is straightforward for gain-type ("the more, the better") criterion, e.g. company profit. For cost-type ("the less, the better") criterion, e.g. product price, this relation can be satisfied by negating the values from V_q. Decision classes and class unions Let T = {1, ..., n}. The domain of the decision criterion, V_d, consists of n elements (without loss of generality we assume V_d = T) and induces a partition of U into n classes Cl = {Cl_t, t ∈ T}, where Cl_t = {x ∈ U : f(x, d) = t}. Each object x ∈ U is assigned to one and only one class Cl_t, t ∈ T. The classes are preference-ordered according to an increasing order of class indices, i.e. for all r, s ∈ T such that r > s, the objects from Cl_r are strictly preferred to the objects from Cl_s. For this reason, we can consider the upward and downward unions of classes, defined, respectively, as: Cl_t^≥ = ⋃_{s≥t} Cl_s and Cl_t^≤ = ⋃_{s≤t} Cl_s, for t ∈ T. Main concepts Dominance We say that x dominates y with respect to P ⊆ C, denoted by x D_P y, if x is better than y on every criterion from P, i.e. x ⪰_q y for all q ∈ P. For each P ⊆ C, the dominance relation D_P is reflexive and transitive, i.e. it is a partial pre-order. Given P ⊆ C and x ∈ U, let D_P^+(x) = {y ∈ U : y D_P x} and D_P^−(x) = {y ∈ U : x D_P y} represent the P-dominating set and the P-dominated set with respect to x, respectively. Rough approximations The key idea of the rough set philosophy is approximation of one knowledge by another knowledge. In DRSA, the knowledge being approximated is a collection of upward and downward unions of decision classes and the "granules of knowledge" used for approximation are P-dominating and P-dominated sets. The P-lower and the P-upper approximation of Cl_t^≥ with respect to P ⊆ C, denoted as P̲(Cl_t^≥) and P̄(Cl_t^≥), respectively, are defined as: P̲(Cl_t^≥) = {x ∈ U : D_P^+(x) ⊆ Cl_t^≥} and P̄(Cl_t^≥) = {x ∈ U : D_P^−(x) ∩ Cl_t^≥ ≠ ∅}. Analogously, the P-lower and the P-upper approximation of Cl_t^≤ with respect to P ⊆ C, denoted as P̲(Cl_t^≤) and P̄(Cl_t^≤), respectively, are defined as: P̲(Cl_t^≤) = {x ∈ U : D_P^−(x) ⊆ Cl_t^≤} and P̄(Cl_t^≤) = {x ∈ U : D_P^+(x) ∩ Cl_t^≤ ≠ ∅}. Lower approximations group the objects which certainly belong to class union Cl_t^≥ (respectively Cl_t^≤). This certainty comes from the fact that object x ∈ U belongs to the lower approximation P̲(Cl_t^≥) (respectively P̲(Cl_t^≤)) if no other object in U contradicts this claim, i.e. every object y ∈ U which P-dominates x also belongs to the class union Cl_t^≥ (respectively, every object y P-dominated by x also belongs to Cl_t^≤). Upper approximations group the objects which could belong to Cl_t^≥ (respectively Cl_t^≤), since object x ∈ U belongs to the upper approximation P̄(Cl_t^≥) (respectively P̄(Cl_t^≤)) if there exists another object y P-dominated by x from class union Cl_t^≥ (respectively P-dominating x from Cl_t^≤). The P-lower and P-upper approximations defined as above satisfy the following properties for all t ∈ T and for any P ⊆ C: P̲(Cl_t^≥) ⊆ Cl_t^≥ ⊆ P̄(Cl_t^≥) and P̲(Cl_t^≤) ⊆ Cl_t^≤ ⊆ P̄(Cl_t^≤). The P-boundaries (P-doubtful regions) of Cl_t^≥ and Cl_t^≤ are defined as: Bn_P(Cl_t^≥) = P̄(Cl_t^≥) − P̲(Cl_t^≥) and Bn_P(Cl_t^≤) = P̄(Cl_t^≤) − P̲(Cl_t^≤). Quality of approximation and reducts The ratio γ_P(Cl) = |U − ⋃_{t∈T} Bn_P(Cl_t^≥)| / |U| defines the quality of approximation of the partition Cl into classes by means of the set of criteria P. This ratio expresses the relation between all the P-correctly classified objects and all the objects in the table. Every minimal subset P ⊆ C such that γ_P(Cl) = γ_C(Cl) is called a reduct of Cl and is denoted by RED_Cl. A decision table may have more than one reduct. The intersection of all reducts is known as the core. Decision rules On the basis of the approximations obtained by means of the dominance relations, it is possible to induce a generalized description of the preferential information contained in the decision table, in terms of decision rules. The decision rules are expressions of the form if [condition] then [consequent], that represent a form of dependency between condition criteria and decision criteria. Procedures for generating decision rules from a decision table use an inductive learning principle. We can distinguish three types of rules: certain, possible and approximate. 
Certain rules are generated from lower approximations of unions of classes; possible rules are generated from upper approximations of unions of classes and approximate rules are generated from boundary regions. Certain rules have the following form: if f(x, q1) ≥ r1 and f(x, q2) ≥ r2 and … and f(x, qp) ≥ rp then x ∈ Cl_t^≥ if f(x, q1) ≤ r1 and f(x, q2) ≤ r2 and … and f(x, qp) ≤ rp then x ∈ Cl_t^≤ Possible rules have a similar syntax, however the consequent part of the rule has the form: x could belong to Cl_t^≥, or the form: x could belong to Cl_t^≤. Finally, approximate rules have the syntax: if f(x, q1) ≥ r1 and … and f(x, qk) ≥ rk and f(x, q_{k+1}) ≤ r_{k+1} and … and f(x, qp) ≤ rp then x ∈ Cl_s ∪ Cl_{s+1} ∪ … ∪ Cl_t The certain, possible and approximate rules represent certain, possible and ambiguous knowledge extracted from the decision table. Each decision rule should be minimal. Since a decision rule is an implication, by a minimal decision rule we understand such an implication that there is no other implication with an antecedent of at least the same weakness (in other words, a rule using a subset of elementary conditions or/and weaker elementary conditions) and a consequent of at least the same strength (in other words, a rule assigning objects to the same union or sub-union of classes). A set of decision rules is complete if it is able to cover all objects from the decision table in such a way that consistent objects are re-classified to their original classes and inconsistent objects are classified to clusters of classes referring to this inconsistency. We call minimal each set of decision rules that is complete and non-redundant, i.e. exclusion of any rule from this set makes it non-complete. One of three induction strategies can be adopted to obtain a set of decision rules: generation of a minimal description, i.e. a minimal set of rules, generation of an exhaustive description, i.e. all rules for a given data matrix, generation of a characteristic description, i.e. a set of rules covering relatively many objects each, however, all together not necessarily all objects from the decision table. The most popular rule induction algorithm for the dominance-based rough set approach is DOMLEM, which generates a minimal set of rules. Example Consider the following problem of high school students' evaluations: {| class="wikitable" style="text-align:center" border="1" |+ Table 1: Example—High School Evaluations ! object (student) !! q1 (Mathematics) !! q2 (Physics) !! q3 (Literature) !! d (global score) |- ! x1 |medium || medium || bad || bad |- ! x2 |good || medium || bad || medium |- ! x3 |medium || good || bad || medium |- ! x4 |bad || medium || good || bad |- ! x5 |bad || bad || medium || bad |- ! x6 |bad || medium || medium || medium |- ! x7 |good || good || bad || good |- ! x8 |good || medium || medium || medium |- ! x9 |medium || medium || good || good |- ! x10 |good || medium || good || good |} Each object (student) is described by three criteria q1, q2, q3, related to the levels in Mathematics, Physics and Literature, respectively. According to the decision attribute, the students are divided into three preference-ordered classes: Cl_1 = {bad}, Cl_2 = {medium} and Cl_3 = {good}. Thus, the following unions of classes were approximated: Cl_1^≤, i.e. the class of (at most) bad students, Cl_2^≤, i.e. the class of at most medium students, Cl_2^≥, i.e. the class of at least medium students, Cl_3^≥, i.e. the class of (at least) good students. Notice that evaluations of objects x4 and x6 are inconsistent, because x4 has evaluations not worse than those of x6 on all three criteria but a worse global score. Therefore, lower approximations of class unions consist of the following objects: P̲(Cl_1^≤) = {x1, x5}, P̲(Cl_2^≤) = {x1, x2, x3, x4, x5, x6, x8} = Cl_2^≤, P̲(Cl_2^≥) = {x2, x3, x7, x8, x9, x10}, P̲(Cl_3^≥) = {x7, x9, x10} = Cl_3^≥. Thus, only the unions Cl_1^≤ and Cl_2^≥ cannot be approximated precisely. 
Their upper approximations are as follows: P̄(Cl_1^≤) = {x1, x4, x5, x6} and P̄(Cl_2^≥) = {x2, x3, x4, x6, x7, x8, x9, x10}, while their boundary regions are: Bn_P(Cl_1^≤) = Bn_P(Cl_2^≥) = {x4, x6}. Of course, since Cl_2^≤ and Cl_3^≥ are approximated precisely, we have P̄(Cl_2^≤) = Cl_2^≤, P̄(Cl_3^≥) = Cl_3^≥ and Bn_P(Cl_2^≤) = Bn_P(Cl_3^≥) = ∅. The following minimal set of 10 rules can be induced from the decision table: if f(x, Physics) ≤ bad then x ∈ Cl_1^≤ if f(x, Mathematics) ≤ medium and f(x, Physics) ≤ medium and f(x, Literature) ≤ bad then x ∈ Cl_1^≤ if f(x, Mathematics) ≤ bad then x ∈ Cl_2^≤ if f(x, Physics) ≤ medium and f(x, Literature) ≤ medium then x ∈ Cl_2^≤ if f(x, Mathematics) ≤ medium and f(x, Literature) ≤ bad then x ∈ Cl_2^≤ if f(x, Mathematics) ≥ good and f(x, Physics) ≥ good then x ∈ Cl_3^≥ if f(x, Mathematics) ≥ medium and f(x, Literature) ≥ good then x ∈ Cl_3^≥ if f(x, Mathematics) ≥ good then x ∈ Cl_2^≥ if f(x, Physics) ≥ good then x ∈ Cl_2^≥ if f(x, Mathematics) ≤ bad and f(x, Physics) ≥ medium then x ∈ Cl_1 ∪ Cl_2 The last rule is approximate, while the rest are certain. Extensions Multicriteria choice and ranking problems The other two problems considered within multi-criteria decision analysis, multicriteria choice and ranking problems, can also be solved using the dominance-based rough set approach. This is done by converting the decision table into a pairwise comparison table (PCT). Variable-consistency DRSA The definitions of rough approximations are based on a strict application of the dominance principle. However, when defining non-ambiguous objects, it is reasonable to accept a limited proportion of negative examples, particularly for large decision tables. Such an extended version of DRSA is called the Variable-Consistency DRSA model (VC-DRSA). Stochastic DRSA In real-life data, particularly for large datasets, the notions of rough approximations were found to be excessively restrictive. Therefore, an extension of DRSA, based on a stochastic model (Stochastic DRSA), which allows inconsistencies to some degree, has been introduced. Once the probabilistic model for ordinal classification problems with monotonicity constraints has been stated, the concepts of lower approximations are extended to the stochastic case. The method is based on estimating the conditional probabilities using the nonparametric maximum likelihood method which leads to the problem of isotonic regression. Stochastic dominance-based rough sets can also be regarded as a sort of variable-consistency model. Software 4eMka2 is a decision support system for multiple criteria classification problems based on dominance-based rough sets (DRSA). JAMM is a much more advanced successor of 4eMka2. Both systems are freely available for non-profit purposes on the Laboratory of Intelligent Decision Support Systems (IDSS) website. See also Rough sets Granular computing Multicriteria Decision Analysis (MCDA) References Chakhar S., Ishizaka A., Labib A., Saad I. (2016). Dominance-based rough set approach for group decisions, European Journal of Operational Research, 251(1): 206-224 Li S., Li T., Zhang Z., Chen H., Zhang J. (2015). Parallel Computing of Approximations in Dominance-based Rough Sets Approach, Knowledge-based Systems, 87: 102-111 Li S., Li T. (2015). Incremental Update of Approximations in Dominance-based Rough Sets Approach under the Variation of Attribute Values, Information Sciences, 294: 348-361 Li S., Li T., Liu D. (2013). Dynamic Maintenance of Approximations in Dominance-based Rough Set Approach under the Variation of the Object Set, International Journal of Intelligent Systems, 28(8): 729-751 External links The International Rough Set Society Laboratory of Intelligent Decision Support Systems (IDSS) at Poznań University of Technology. Extensive list of DRSA references on the Roman Słowiński home page. Theoretical computer science Machine learning algorithms Multiple-criteria decision analysis
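The approximations in the example above are mechanical enough to recompute in a few lines. A sketch that mirrors Table 1 (criterion values encoded bad = 0 < medium = 1 < good = 2, classes Cl_1..Cl_3 as 1..3) and reproduces the lower approximations and the {x4, x6} boundary given above:

```python
# Recompute the DRSA approximations for Table 1.
data = {  # object: ((Mathematics, Physics, Literature), class index)
    "x1": ((1, 1, 0), 1), "x2": ((2, 1, 0), 2), "x3": ((1, 2, 0), 2),
    "x4": ((0, 1, 2), 1), "x5": ((0, 0, 1), 1), "x6": ((0, 1, 1), 2),
    "x7": ((2, 2, 0), 3), "x8": ((2, 1, 1), 2), "x9": ((1, 1, 2), 3),
    "x10": ((2, 1, 2), 3),
}

def dominates(x, y):  # x D_P y: x at least as good as y on every criterion
    return all(a >= b for a, b in zip(data[x][0], data[y][0]))

def d_plus(x):   # P-dominating set: objects that dominate x
    return {y for y in data if dominates(y, x)}

def d_minus(x):  # P-dominated set: objects that x dominates
    return {y for y in data if dominates(x, y)}

def upward(t):   # Cl_t^>=
    return {x for x, (_, c) in data.items() if c >= t}

def downward(t): # Cl_t^<=
    return {x for x, (_, c) in data.items() if c <= t}

def lower_up(t):   return {x for x in data if d_plus(x) <= upward(t)}
def upper_up(t):   return {x for x in data if d_minus(x) & upward(t)}
def lower_down(t): return {x for x in data if d_minus(x) <= downward(t)}
def upper_down(t): return {x for x in data if d_plus(x) & downward(t)}

key = lambda s: int(s[1:])  # sort x1..x10 numerically for readable output
print(sorted(lower_down(1), key=key))              # ['x1', 'x5']
print(sorted(lower_up(2), key=key))                # x2, x3, x7, x8, x9, x10
print(sorted(upper_up(2) - lower_up(2), key=key))  # boundary: ['x4', 'x6']
```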
Dominance-based rough set approach
Mathematics
2,882
2,121,650
https://en.wikipedia.org/wiki/Signal%20lamp
A signal lamp (sometimes called an Aldis lamp or a Morse lamp) is a visual signaling device for optical communication by flashes of a lamp, typically using Morse code. The idea of flashing dots and dashes from a lantern was first put into practice by Captain Philip Howard Colomb, of the Royal Navy, in 1867. Colomb's design used limelight for illumination, and his original code was not the same as Morse code. During World War I, German signalers used optical Morse transmitters called Blinkgerät, with a range of up to 8 km (5 miles) at night, using red filters for undetected communications. Modern signal lamps produce a focused pulse of light, either by opening and closing shutters mounted in front of the lamp, or by tilting a concave mirror. They continue to be used to the present day on naval vessels and for aviation light signals in air traffic control towers, as a backup device in case of a complete failure of an aircraft's radio. History Signal lamps were pioneered by the Royal Navy in the late 19th century. They were the second generation of signalling in the Royal Navy, after the flag signals most famously used to spread Nelson's rallying-cry, "England expects that every man will do his duty", before the Battle of Trafalgar. The idea of flashing dots and dashes from a lantern was first put into practice by Captain, later Vice Admiral, Philip Howard Colomb, of the Royal Navy, in 1867. Colomb's design used limelight for illumination. His original code was not identical to Morse code, but the latter was subsequently adopted. Another signalling lamp was the Begbie lamp, a kerosene lamp with a lens to focus the light over a long distance. During the trench warfare of World War I when wire communications were often cut, German signalers used three types of optical Morse transmitters called Blinkgerät, the intermediate type for distances of up to 4 km (2.5 miles) in daylight and of up to 8 km (5 miles) at night, using red filters for undetected communications. In 1944, British inventor Arthur Cyril Webb Aldis patented a small hand-held design, which featured an improved shutter. Design Modern signal lamps can produce a focused pulse of light. In large versions, this pulse is achieved by opening and closing shutters mounted in front of the lamp, either via a manually operated pressure switch, or, in later versions, automatically. With hand-held lamps, a concave mirror is tilted by a trigger to focus the light into pulses. The lamps were usually equipped with some form of optical sight, and were most commonly used on naval vessels and in air traffic control towers, using colour signals for stop or clearance. In manual signalling, a signalman would aim the light at the recipient ship and turn a lever, opening and closing the shutter over the lamp, to emit flashes of light to spell out text messages in Morse code. On the recipient ship, a signalman would observe the blinking light, often with binoculars, and translate the code into text. The maximum transmission rate possible via such flashing light apparatus is no more than 14 words per minute. Some signal lamps are mounted on the mastheads of ships while some small hand-held versions are also used. Other more powerful versions are mounted on pedestals. These larger ones use a carbon arc lamp as their light source. These can be used to signal to the horizon, even in conditions of bright sunlight. Modern use Signal lamps continue to be used to the present day on naval vessels. 
They provide handy, relatively secure communications, which are especially useful during periods of radio silence, such as for convoys operating during the Battle of the Atlantic. The Commonwealth navies and NATO forces use signal lamps when radio communications need to be silent or electronic "spoofing" is likely. Also, given the prevalence of night vision equipment in today's armed forces, signaling at night is usually done with lights that operate in the infrared (IR) portion of the electromagnetic spectrum, making them less likely to be detected. All modern forces have followed suit due to technological advances in digital communications. Signal lamps are still used today for aviation light signals in air traffic control towers as a backup device in case of a complete failure of an aircraft's radio. Light signals can be green, red, or white, and steady or flashing. Messages are limited to a handful of basic instructions, e.g., "land", "stop", etc.; they are not intended to be used for transmitting messages in Morse code. Aircraft can acknowledge signals by rocking their wings or flashing their landing lights. See also Colt Acetylene Flash Lantern Flag semaphore Heliograph VS-17 References External links An Aldis lamp in operation History of telecommunications Types of lamp Military communications Morse code Optical communications Semaphore
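The manual shutter signalling described above reduces to an on/off schedule in Morse timing. A sketch using the conventional timing (dot = 1 unit, dash = 3, gaps of 1/3/7 units) and a small illustrative subset of the International Morse table; none of these specifics come from the article itself:

```python
# Turn a message into the on/off shutter schedule a signalman would key.
# Conventional Morse timing: dot = 1 unit on, dash = 3 on; 1 unit off
# between elements, 3 between letters, 7 between words.
MORSE = {  # illustrative subset of International Morse code
    "A": ".-", "I": "..", "L": ".-..", "N": "-.", "S": "...", "T": "-",
}

def shutter_schedule(message: str):
    """Return (state, units) pairs: ('on', n) = shutter open for n units."""
    schedule = []
    for wi, word in enumerate(message.upper().split()):
        if wi:
            schedule.append(("off", 7))
        for li, letter in enumerate(word):
            if li:
                schedule.append(("off", 3))
            for ei, element in enumerate(MORSE[letter]):
                if ei:
                    schedule.append(("off", 1))
                schedule.append(("on", 1 if element == "." else 3))
    return schedule

print(shutter_schedule("AN"))
# [('on', 1), ('off', 1), ('on', 3), ('off', 3), ('on', 3), ('off', 1), ('on', 1)]
# At the standard 50 units per word, the article's 14 words-per-minute
# ceiling implies a unit (dot) of about 60 / (14 * 50) s, i.e. ~86 ms.
```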
Signal lamp
Engineering
990
679,161
https://en.wikipedia.org/wiki/Pothole
A pothole is a pot-shaped depression in a road surface, usually asphalt pavement, where traffic has removed broken pieces of the pavement. It is usually the result of water in the underlying soil structure and traffic passing over the affected area. Water first weakens the underlying soil; traffic then fatigues and breaks the poorly supported asphalt surface in the affected area. Continued traffic action ejects both asphalt and the underlying soil material to create a hole in the pavement. Formation According to the US Army Corps of Engineers's Cold Regions Research and Engineering Laboratory, pothole formation requires two factors to be present at the same time: water and traffic. Water weakens the soil beneath the pavement while traffic applies the loads that stress the pavement past the breaking point. Potholes form progressively from fatigue of the road surface, which can lead to a precursor failure pattern known as crocodile (or alligator) cracking. Eventually, chunks of pavement between the fatigue cracks gradually work loose, and may then be plucked or forced out of the surface by continued wheel loads to create a pothole. In areas subject to freezing and thawing, frost heaving can damage a pavement and create openings for water to enter. In the spring, thawing of pavements accelerates this process when the thawing of upper portions of the soil structure in a pavement cannot drain past still-frozen lower layers, thus saturating the supporting soil and weakening it. Potholes can grow to several feet in width, though they usually only develop to depths of a few inches. If they become large enough, damage to tires, wheels, and vehicle suspensions is liable to occur. Serious road accidents can occur as a direct result, especially on those roads where vehicle speeds are greater. Potholes may result from four main causes: Insufficient pavement thickness to support traffic during freeze/thaw periods without localized failures Insufficient drainage Failures at utility trenches and castings (manhole and drain casings) Pavement defects and cracks left unmaintained and unsealed so as to admit moisture and compromise the structural integrity of the pavement Prevention The following steps can be taken to avoid pothole formation in existing pavements: 1. Surveying of pavements for risk factors 2. Providing adequate drainage structures 3. Preventive maintenance 4. Utility cut management 5. Sealing asphalt cracks Survey of pavements At-risk pavements are more often local roads with lower structural standards and more complicating factors, like underground utilities, than major arteries. Pavement condition monitoring can lead to timely preventive action. Surveys address pavement distresses, which diminish the strength of the asphalt layer and admit water into the pavement, and the effective drainage of water from within and around the pavement structure. Drainage Drainage structures, including ditching and storm sewers, are essential for removing water from pavements. Avoiding other risk factors with good construction includes well-draining base and sub-base soils that avoid frost action and promote drying of the soil structure. Adequate crowns promote drainage to the sides. Good crack control prevents water penetration into the pavement soil structure. Preventive maintenance Preventive maintenance adds the maintenance of pavement structural integrity, with adequate thickness and continuity, to preventing water penetration and promoting water migration away from the roadway. 
Utility cut management Eaton et al. advocate a permitting process for utility cuts with specifications that avoid loss of structural continuity of pavements and flaws or failures that allow water penetration. Some municipalities require contractors to install utility repair tags to identify the parties responsible for deteriorated patches. Sealing asphalt cracks A US Air Force manual advocates semiannual inspection of pavement cracks, with crack sealing commencing on cracks that exceed a specified width. Repair Pothole patching methods may be either temporary or semi-permanent. Temporary patching is reserved for weather conditions that are not favorable to a more permanent solution and usually uses a cold mix asphalt patching compound placed in an expedient manner to temporarily restore pavement smoothness. Semi-permanent patching uses more care in reconstructing the perimeter of the failed area to blend with the surrounding pavement and usually employs a hot-mix asphalt fill above replacement of appropriate base materials. The Federal Highway Administration (FHWA) offers an overview of best practices which includes several repair techniques: throw-and-roll, semi-permanent, spray injection, and edge seal. The FHWA suggests the best patching techniques, at times other than winter, are spray injection, throw-and-roll, semi-permanent, or edge seal procedures. In winter, the throw-and-roll technique may be the only available option. The Council for Scientific and Industrial Research in South Africa offers similar methods for the repair of potholes. Materials Asphaltic patch materials consist of a binder and aggregate that come in two broad categories, hot mix and cold mix. Hot mixes are used by some agencies; they are produced at local asphalt plants. The FHWA manual cites three types of cold mixes: those produced by a local asphalt plant, either 1) using the available aggregate and binder or 2) according to specifications set by the agency that will use the mix. The third type is a proprietary cold mix, which is manufactured to an advertised standard. Throw-and-roll repair The FHWA manual cites the throw-and-roll method as the most basic method, best used as a temporary repair under conditions when it is difficult to control the placement of material, such as winter-time. It consists of: Placing the hot or cold patch material into a pothole Compacting the patch with a vehicle, such as a truck Achieving a crown on the compacted patch of between 3 and 6 mm This method is widely used due to its simplicity and speed, especially as an expedient method when the material is placed under unfavorable conditions of water or temperature. It can also be employed at times when the pothole is dry and clean with more lasting results. Eaton et al. noted that the failure rate of expedient repairs is high and that they can cost as much as five times the cost of properly done repairs. They advocate this type of repair only when weather conditions prevent proper techniques. Researchers from the University of Minnesota Duluth have tested mixing asphalt with iron ore containing magnetite, which is then heated using ferromagnetic resonance (microwaves at a specific frequency) to heat the mixed asphalt. The mixture used a compound of between 1% and 2% magnetite. The group discovered that the material could be heated to patching temperature in approximately ten minutes, which allowed a more effective repair and drove out moisture, improving adhesion. 
Semi-permanent repair The FHWA manual cites the semi-permanent repair method as one of the best for repairing potholes, short of full-depth roadway replacement. It consists of: 1. Removing water and debris from the pothole 2. Making clean cuts along the sides of the prospective patch area to assure that the vertical sides of the repair are in sound pavement (Eaton et al. recommend a bituminous tack coat in the open cavity, prior to placement of patch material.) 3. Placing the hot or cold patch mix material 4. Compacting the patch with a device that is smaller than the patch area, e.g. a vibratory roller or a vibratory plate While this repair procedure provides durable results, it requires more labor and is more equipment-intensive than the throw-and-roll or the spray-injection procedure. Spray-injection repair The FHWA manual cites the spray-injection procedure as an efficient alternative to semi-permanent repair. It requires specialized equipment, however. It consists of: 1. Blowing water and debris from the pothole 2. Spraying a tack coat of binder on the sides and bottom of the pothole 3. Blowing asphalt and aggregate into the pothole 4. Covering the patched area with a layer of aggregate This procedure requires no compaction after the cover aggregate has been placed. Edge seal repair The FHWA manual cites the edge seal method as an alternative to the above techniques. It consists of: 1. Following the "throw-and-roll" steps 2. After the repaired section has dried, placing a ribbon of asphaltic tack material on top of the patch edge, overlapping the pavement and the patch 3. Placing sand on the tack material to prevent tracking by vehicle tires In this procedure, waiting for any water to dry may require a second visit to place the tack coat. The tack material prevents water from getting through the edge of the patch and helps bond the patch to the surrounding pavement. Efficacy of repair methods An FHWA-sponsored study determined that the "throw-and-roll technique proved as effective as the semi-permanent procedure when the two procedures were compared directly, using similar materials". It also found the throw-and-roll procedure to be generally more cost-effective when quality materials were used. It further found that spray-injection repairs were as effective as control patches, depending on the expertise of the equipment operator. Costs to the public The American Automobile Association estimated that, in the five years prior to 2016, 16 million drivers in the United States suffered pothole damage to their vehicles, including tire punctures, bent wheels, and damaged suspensions, at a cost of $3 billion a year. In India, between 2015 and 2017, an average of 3,000 people per year were killed in accidents involving potholes. Britain has estimated that fixing all roads with potholes in the country would cost £12 billion. Reporting Some jurisdictions offer websites or mobile apps for pothole-reporting. These allow citizens to report potholes and other road hazards, optionally including a photograph and GPS coordinates. It is estimated there are 55 million potholes in the United States. The self-proclaimed pothole capital, Edmonton, Alberta, Canada, reportedly spent $4.8 million on 450,000 potholes annually as of 2015. India has historically lost over 3,000 people per year to accidents caused by potholes, a situation that has engendered citizen movements to address the problem.
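As an illustration of the reporting workflow just described, the sketch below shows the kind of record such an app might submit. The endpoint, field names, and example coordinates are hypothetical assumptions, not any particular city's API.

import json
from datetime import datetime, timezone

report = {
    "hazard_type": "pothole",
    "latitude": 53.5461,        # GPS coordinates of the defect
    "longitude": -113.4938,
    "photo_url": None,          # optional photograph
    "description": "Deep pothole in the curb lane",
    "reported_at": datetime.now(timezone.utc).isoformat(),
}

# Body that a client might POST to a city's (hypothetical) reporting endpoint.
print(json.dumps(report, indent=2))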
In the United Kingdom, more than half a million potholes were reported in 2017, an increase of 44% on the 2015 figure. There are processes in place to report potholes at different levels of jurisdiction. The process for claiming compensation varies by jurisdiction. In popular culture Potholes have been commented on in various media. Visual art Two artists, Jim Bachor of Chicago and Baadal Nanjundaswamy of Bangalore, India, have used artwork as a commentary on potholes by placing mosaics (depicting ice cream in various manifestations) or sculpture (in the form of a crocodile) in potholes. Elsewhere, activists in Russia used painted caricatures of local officials, with their mouths as potholes, to show their anger about the poor state of the roads. In Manchester, England, a graffiti artist painted images of penises around potholes, which often resulted in them being repaired within 48 hours. Song The Beatles song "A Day in the Life" references potholes. John Lennon wrote the song's final verse inspired by a Far & Near news brief in the same 17 January edition of the Daily Mail that had inspired the first two verses. Under the headline "The holes in our roads", the brief stated: "There are 4,000 holes in the road in Blackburn, Lancashire, or one twenty-sixth of a hole per person, according to a council survey. If Blackburn is typical, there are two million holes in Britain's roads and 300,000 in London." Television In the Seinfeld episode "The Pothole", George discovers that he has lost his keys, including a commemorative Phil Rizzuto keychain that says "Holy Cow" when activated. He then retraces his steps and returns to a street where he had jumped over a pothole, which is now filled in with asphalt. The "Holy Cow" phrase is heard when a car runs over it. See also Asphalt concrete Road surface Chicago rat hole References External links Federal Highway Administration Manual of Practice for Pothole Repair Road construction Road hazards Holes
Pothole
Technology,Engineering
2,428
2,527,272
https://en.wikipedia.org/wiki/Isotopes%20of%20tennessine
Tennessine (117Ts) is the most recently synthesized synthetic element, and much of the data is hypothetical. As for any synthetic element, a standard atomic weight cannot be given. Like all synthetic elements, it has no stable isotopes. The first (and so far only) isotopes to be synthesized were 293Ts and 294Ts in 2009. The longer-lived isotope is 294Ts, with a half-life of 51 ms.
List of isotopes
Nuclide | Z | N | Isotopic mass (Da) | Decay mode | Daughter isotope
293Ts | 117 | 176 | 293.20873(84)# | α | 289Mc
294Ts | 117 | 177 | 294.21084(64)# | α | 290Mc
(# indicates a mass value estimated from systematic trends rather than measured directly)
Isotopes and nuclear properties Nucleosynthesis Target-projectile combinations leading to Z=117 compound nuclei The below table contains various combinations of targets and projectiles that could be used to form compound nuclei with atomic number 117. Hot fusion 249Bk(48Ca,xn)297−xTs (x=3,4) Between July 2009 and February 2010, the team at the JINR (Flerov Laboratory of Nuclear Reactions) ran a 7-month-long experiment to synthesize tennessine using the reaction above. The expected cross-section was of the order of 2 pb. The expected evaporation residues, 293Ts and 294Ts, were predicted to decay via relatively long decay chains as far as isotopes of dubnium or lawrencium. The team published a paper in April 2010 (first results were presented in January 2010) reporting that six atoms of the isotopes 294Ts (one atom) and 293Ts (five atoms) had been detected. 294Ts decayed by six alpha decays down to the new isotope 270Db, which underwent apparent spontaneous fission. The lighter odd-even isotope underwent just three alpha decays, down to 281Rg, which underwent spontaneous fission. The reaction was run at two different excitation energies, 35 MeV (dose 2×10^19) and 39 MeV (dose 2.4×10^19). Initial decay data was published as a preliminary presentation on the JINR website. A further experiment in May 2010, aimed at studying the chemistry of the granddaughter of tennessine, nihonium, identified a further two atoms of 286Nh from decay of 294Ts. The original experiment was repeated successfully by the same collaboration in 2012 and by a joint German–American team in May 2014, confirming the discovery. Chronology of isotope discovery Theoretical calculations Evaporation residue cross sections The below table contains various target-projectile combinations for which calculations have provided estimates for cross-section yields from various neutron evaporation channels. The channel with the highest expected yield is given. DNS = di-nuclear system; σ = cross section Decay characteristics Theoretical calculations in a quantum tunneling model with mass estimates from a macroscopic-microscopic model predict the alpha-decay half-lives of isotopes of tennessine (namely, 289–303Ts) to be around 0.1–40 ms. References Tennessine Tennessine
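The decay chains described above follow directly from the bookkeeping of alpha decay, in which the atomic number Z drops by 2 and the mass number A by 4 at each step. The following Python sketch reproduces the two observed chains; it is a minimal illustration, not analysis code from the experiments.

SYMBOLS = {117: "Ts", 115: "Mc", 113: "Nh", 111: "Rg",
           109: "Mt", 107: "Bh", 105: "Db"}

def alpha_chain(z, a, steps):
    # Each alpha decay lowers Z by 2 and A by 4.
    chain = [(z, a)]
    for _ in range(steps):
        z, a = z - 2, a - 4
        chain.append((z, a))
    return [f"{mass}{SYMBOLS[num]}" for num, mass in chain]

# 294Ts: six alpha decays down to 270Db, which then fissioned spontaneously.
print(" -> ".join(alpha_chain(117, 294, 6)))
# 293Ts: three alpha decays down to 281Rg.
print(" -> ".join(alpha_chain(117, 293, 3)))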
Isotopes of tennessine
Chemistry
692
22,507,480
https://en.wikipedia.org/wiki/Vish%C4%81kh%C4%81
Vishākhā is a nakshatra in Indian astronomy, spread in Tula or Libra (the 7th house of natural Vedic astrology). In Hindu mythology, Vishākhā is a daughter of the king Daksha. She is one of the twenty-seven daughters of Daksha who married the moon-god Chandra. Vishākhā is the sixteenth nakshatra of the zodiac, ruled by the planet Jupiter (Brihaspati or Guru). It is also supposed to be the birth star of the goddess Sita. Notable people and entities named Vishākhā Vishakha Singh (born 1986), Indian film actress, producer and entrepreneur Vishakha Raut, a Shiv Sena politician from Mumbai and former mayor of the Brihanmumbai Municipal Corporation Vishakha N. Desai, an Asia scholar with a focus on art, culture, policy, and women's rights Visakhapatnam, city in Andhra Pradesh Visakha FC, Cambodian football club Visakha Stadium, Cambodian football stadium See also Gopi; Vishakha is also one of the main gopis (Krishna's muses) in Krishna lila in Goloka Vrndavana References Nakshatra
Vishākhā
Astronomy
243
64,187,070
https://en.wikipedia.org/wiki/Cryptic%20mimicry%20in%20plants
Cryptic mimicry is observed in animals as well as plants. In animals, crypsis may involve nocturnality, camouflage, a subterranean lifestyle, or mimicry. Plant herbivores are generally visually oriented, so a mimicking plant should strongly resemble its host; this can be achieved through visual and/or textural change. Previous criteria for mimicry include similarity of leaf dimensions, leaf presentation, and internodal distances between the host and the mimicking plant. Australian mistletoe and Boquila trifoliolata are well-known examples of this mimicry. Researchers hypothesize that crypsis is used to reduce the likelihood of vertebrate herbivory and thus improve the survivability and fitness of the mimicking plant. Mistletoe Mistletoe, or Viscum album, is an obligate hemi-parasite, meaning it attaches to its host tree and extracts water and nutrients. Australia is home to over 90 species of mistletoe, with 70 being native. Studies evaluating the role of crypsis in herbivory measure leaf quality, such as nitrogen and protein levels, water content, etc. Ehleringer et al. examined nitrogen levels, as an indicator of protein status, in mistletoe and their hosts (Acacia, Cassia, Casuarina, Ceriops, and Eucalyptus) to determine whether mimicry reduced herbivory in the plant. One hypothesis tested was that mistletoe showing mimicry would have higher nitrogen levels and gain a selective advantage through reduced herbivory. A second hypothesis tested was that mistletoe not demonstrating mimicry would have lower nitrogen levels, appear to be of lower nutritional status, and experience reduced herbivory as a result. In essence, the researchers expected that mimicry would be associated with nitrogen levels and that, depending on nutritional status and detectability, herbivory would be affected. Previous studies have shown that animals may prefer different food resources depending on their water content, vitamins, carbohydrates and energy needs, and appearance might play into an animal's perception and preference. Kjeldahl nitrogen in the leaves was recorded to measure reduced nitrogen levels in the mistletoe, which would affect amino acids, proteins, etc., and possibly the preference of herbivores. From their results, Ehleringer et al. found that the majority (17 of 22) of mimetic mistletoe had nitrogen levels that were either equal to or greater than those of their hosts. However, mistletoe that mimicked Eucalyptus species had nitrogen levels lower than or equal to their hosts. Eucalyptus typically has a high oil content, which is thought to be an anti-herbivory mechanism. One interpretation is that, as an additional means of avoiding herbivory, mistletoe with lower nitrogen levels and therefore lower nutritional value would be less favorable to herbivores than the host Eucalyptus. Of the non-mimetic mistletoe, 15 of 26 had significantly lower nitrogen levels than their hosts. The lowered nitrogen would reflect lower protein levels and potentially lower nutrition to potential herbivores. Ehleringer et al. could only make predictions about the mechanism of mimicry and herbivory rates, as no herbivory was actually studied or measured in the mistletoe and host plants. Boquila In their research, Gianoli and Carrasco-Urra demonstrate the effect cryptic mimicry can have on the herbivory of Boquila trifoliolata. Native to the temperate rainforest of South America, Boquila is a climbing vine. Compared to other cryptic plants, Boquila is unique in its ability to mimic several hosts despite having no parasitic relationships.
Like others, Gianoli and Carrasco-Urra set out to test whether Boquila mimicry results in protection against herbivory. Boquila leaf traits (such as size, shape, color, and orientation) were compared with those of its native host tree species to try to explain such wide morphological changes. For 9 of the 11 traits, there was a significant phenotypic association between the Boquila and host leaves. Leaves of unsupported vines growing on the ground did not differ from those of vines growing on leafless stems or trees, showing that when there is no leaf to mimic, climbing plants do not differ from unsupported plants. Herbivory was measured as being similar between climbing vines and their host trees. Herbivory was significantly higher in vines growing unsupported than in vines climbing on trees. Lastly, herbivory on vines climbing leafless hosts was higher than on unsupported vines. These results suggest that the act of climbing is not enough to avoid herbivory, but that additional mimicry of the host's leaves may reduce herbivory rates. An interesting note about Boquila is that leaf mimicry can occur even when there is no contact between the vine and its host. High phenotypic plasticity allows Boquila to mimic several hosts simultaneously, but it does not explain the mechanism behind its mimicry. Mechanism Currently, there is no confirmed explanation for leaf mimicry. In Boquila, because mimicry is observed despite the lack of physical contact between the vine and its host, hypotheses involving plant volatiles and horizontal gene transfer have been put forward. Volatile organic compounds have been shown to elicit defense responses both within plants and between plants. When attacked by herbivores, plants release a blend of volatiles that can initiate responses in systemic leaves as well as in neighboring plants. It has been observed that volatile signals increased the expression of genes related to plant defense and resulted in changes to the transcriptome. In mimicry, the response to volatiles could involve changes in gene regulation, altering the expression of certain genes and resulting in phenotypic change. In the Venus flytrap, stimulation of mechanoreceptors and calcium release trigger jasmonic acid synthesis. Proteins and enzymes have been shown to be involved in the transport and perception of volatiles. They could also play a role in the conversion of a volatile signal into a chemical product response, e.g. salicylic acid and methyl salicylate. With the mistletoe, a possible line of study could be measuring nitrogen level change after mimicry to see if nitrogen is involved in the perception of volatiles or if it changes as a result of perception. Horizontal gene transfer involves the movement of genetic material without it being passed down to offspring. It plays an important role in the evolution of many organisms. It is hypothesized that transfer is conducted through a vector or is a result of plant-plant parasitism. Little is known about how this method could be involved in plant mimicry, but it is mentioned by Gianoli and Carrasco-Urra as an explanation because Boquila mimicry is observed to depend on which host the plant is nearest, despite previous contact with others. Possible transfer within close distances would explain the varying amounts of mimicry seen in Boquila and its hosts. Plant-plant interactions Not much is known about the underlying mechanisms of how the mimicking plants and their hosts are able to communicate, or whether they do at all.
Kin recognition would be an area for further study, as it might reveal more about volatile communication between plants. Heil and Karban note that the use of volatiles can be costly to the emitter, which poses the question of competition and cost between the host and the mimicker. If the host were to somehow recognize the mimicker as kin, it would offer a potential rationale for the exchange between the two. This would be similar to a study by Crepy et al., in which plants that recognized kin shifted their leaf positions to benefit kin plants growing nearby. Close phenotypic association was observed and could be explored further. Others have suggested that, instead of communicating, Boquila and other plants eavesdrop on their hosts. It might also be that, rather than the mimicking plant benefitting alone, the host plant experiences reduced herbivory as well, since potential herbivores might mistake the Boquila for the host. This would suggest that the relationship between mimicker and host is not beneficial only to the mimicker. Further research into these plant-plant interactions would be needed in order to answer these questions. References Mimicry
Cryptic mimicry in plants
Biology
1,683
48,987,892
https://en.wikipedia.org/wiki/Isotropic%20position
In the fields of machine learning, the theory of computation, and random matrix theory, a probability distribution over vectors is said to be in isotropic position if its covariance matrix is equal to the identity matrix. Formal definitions Let D be a distribution over vectors in the vector space R^n. Then D is in isotropic position if, for a vector X sampled from the distribution, E[X X^T] = Id. A set of vectors is said to be in isotropic position if the uniform distribution over that set is in isotropic position. In particular, every orthonormal set of vectors is isotropic. As a related definition, a convex body K in R^n is called isotropic if it has volume |K| = 1, center of mass at the origin, and there is a constant L > 0 such that ∫_K ⟨x, y⟩² dx = L² ‖y‖² for all vectors y in R^n; here ‖·‖ stands for the standard Euclidean norm. See also Whitening transformation References Machine learning Random matrices
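As a minimal numerical illustration of the definition, the following Python sketch applies the whitening transformation mentioned in the see-also above to put an empirical sample into (approximately) isotropic position. The sample size and mixing matrix are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
# An anisotropic sample: standard normal vectors pushed through a mixing matrix.
X = rng.normal(size=(10_000, 3)) @ np.array([[2.0, 0.0, 0.0],
                                             [0.3, 1.0, 0.0],
                                             [0.0, 0.5, 0.7]])

X -= X.mean(axis=0)              # center at the origin
cov = X.T @ X / len(X)           # empirical E[X X^T]

# Whitening: multiply by cov^(-1/2), computed via the eigendecomposition.
eigval, eigvec = np.linalg.eigh(cov)
W = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
Y = X @ W

print(np.round(Y.T @ Y / len(Y), 2))   # approximately the 3x3 identity matrix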
Isotropic position
Physics,Mathematics,Engineering
175
62,596,418
https://en.wikipedia.org/wiki/Carryover%20credits
Carryover credits (Kyoto carryover credits) are a carbon accounting measure by which nations count historical emission reductions that exceeded previous international goals towards their current targets. In essence, carryover credits represent the volume of emissions a country could have released, but did not. When used in reference to the Paris Agreement, the term refers to a scheme under which unspent "Clean Development Mechanism credits" (CDM credits), introduced by the Kyoto Protocol, would be "carried over" to the new markets established by the agreement. As part of the Paris Agreement, CDM credits will be replaced by an international emissions trading market, whereby countries can sell their excess emissions credits to other countries. While most countries do not count their credits, several countries led by Australia, including Brazil, India, and Ukraine, are attempting to allow their credits to be carried over. The proposal has been criticized, with scientists estimating that if countries were to make full use of their excess credits, global temperatures could rise by an extra 0.1 °C. In addition, countries could use their excess credits to flood the market and greatly reduce the price of credits. References Greenhouse gas emissions
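A toy calculation makes the accounting described above concrete. All numbers below are invented for illustration; they are not actual national inventories or pledges.

kyoto_target = 500       # MtCO2-e permitted over the old commitment period (invented)
actual_emissions = 420   # MtCO2-e actually emitted (invented)
carryover = kyoto_target - actual_emissions    # 80 MtCO2-e of credits

paris_pledge = 100       # MtCO2-e of new reductions pledged (invented)
new_abatement_needed = paris_pledge - carryover

print(f"carryover credits:             {carryover} MtCO2-e")
print(f"new reductions still required: {new_abatement_needed} MtCO2-e")
# The critics' point in one line: 80 of the pledged 100 MtCO2-e could be
# met on paper without any new abatement.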
Carryover credits
Chemistry
228
31,712,770
https://en.wikipedia.org/wiki/Fresnel%20zone%20antenna
Fresnel zone antennas are antennas that focus the signal by using the phase-shifting property of the antenna surface or its shape. There are several types of Fresnel zone antennas, namely Fresnel zone plate antennas, offset Fresnel zone plate antennas, phase-correcting reflective array (or "reflectarray") antennas, and three-dimensional Fresnel antennas. They are a class of diffractive antennas and have been used from radio frequencies to X-rays. Fresnel antenna Fresnel zone antennas belong to the category of reflector and lens antennas. Unlike traditional reflector and lens antennas, however, the focusing effect in a Fresnel zone antenna is achieved by controlling the phase-shifting property of the surface, and this allows for flat or arbitrary antenna shapes. For historical reasons, a flat Fresnel zone antenna is termed a Fresnel zone plate antenna. An offset Fresnel zone plate can be flush-mounted to the wall or roof of a building, printed on a window, or made conformal to the body of a vehicle. The advantages of the Fresnel zone plate antenna are numerous. It is normally cheap to manufacture and install, easy to transport and package, and can achieve high gain. Owing to its flat nature, the wind loading force of a Fresnel zone plate can be as little as 1/8 of that of conventional solid or wire-meshed reflectors of similar size. When used at millimetre-wave frequencies, a Fresnel zone antenna can be integrated with a millimetre-wave monolithic integrated circuit (MMIC) and thus becomes even more competitive than a printed antenna array. The simplest Fresnel zone plate antenna is the circular half-wave zone plate invented in the nineteenth century. The basic idea is to divide a plane aperture into circular zones with respect to a chosen focal point on the basis that all radiation from each zone arrives at the focal point in phase within a ±π/2 range. If the radiation from alternate zones is suppressed or shifted in phase by π, an approximate focus is obtained and a feed can be placed there to collect the received energy effectively. Despite its simplicity, the half-wave zone plate remained mainly an optical device for a long time, primarily because its efficiency is too low (less than 20%) and the sidelobe level of its radiation pattern too high to compete with conventional reflector antennas. Compared with conventional reflector and lens antennas, reported research on microwave and millimetre-wave Fresnel zone antennas appears to be limited. In 1948, Maddaus published design and experimental work on stepped half-wave lens antennas operating at 23 GHz, achieving sidelobe levels of around −17 dB. In 1961, Buskirk and Hendrix reported an experiment on simple circular phase-reversal zone plate reflector antennas for radio frequency operation; unfortunately, the sidelobe level they achieved was as high as −7 dB. In 1987, Black and Wiltse published their theoretical and experimental work on the stepped quarter-wave zone plate at 35 GHz. A sidelobe level of about −17 dB was achieved. A year later a phase-reversal zone plate reflector operating at 94 GHz was reported by Huder and Menzel, and 25% efficiency and a −19 dB sidelobe level were obtained. An experiment on a similar antenna at 11.8 GHz was reported by NASA researchers in 1989; a 5% 3 dB bandwidth and a −16 dB sidelobe level were measured. Until the 1980s, the Fresnel zone plate antenna was regarded as a poor candidate for microwave applications.
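The half-wave zone construction described above fixes the zone boundary radii: radiation from radius r_n travels sqrt(f² + r_n²) to the focal point, and each boundary adds half a wavelength of path difference, giving r_n = sqrt(n·λ·f + (n·λ/2)²). The Python sketch below evaluates this at the 11.8 GHz frequency mentioned above; the focal length is an arbitrary assumption for illustration.

import math

def zone_radius(n, f, lam):
    # Boundary of the n-th half-wave zone on a flat plate with focal length f.
    return math.sqrt(n * lam * f + (n * lam / 2) ** 2)

c = 3.0e8                 # speed of light, m/s
freq = 11.8e9             # Hz, the frequency of the NASA experiment above
lam = c / freq            # wavelength, about 25.4 mm
f = 0.5                   # focal length in metres, an assumed value

for n in range(1, 6):
    print(f"zone {n}: r = {100 * zone_radius(n, f, lam):.1f} cm")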
Following the development of DBS services in the eighties, however, antenna engineers began to consider the use of Fresnel zone plates as candidate antennas for DBS reception, where antenna cost is an important factor. This, to some extent, provided a commercial push to the research on Fresnel zone antennas. Offset Fresnel antenna The offset Fresnel zone plate was first reported in. In contrast to the symmetrical Fresnel zone plate, which consists of a set of circular zones, the offset Fresnel zone plate consists of a set of elliptical zones, defined by (x − c)²/a² + y²/b² = 1, where a, b and c are determined by the offset angle, the focal length and the zone index. This feature introduces some new problems to the analysis of offset Fresnel zone plate antennas. The formulae and algorithms for predicting the radiation pattern of an offset Fresnel lens antenna are presented in, where some experimental results are also reported. Although a simple Fresnel lens antenna has low efficiency, it is a very attractive indoor candidate when a large window or an electrically transparent wall is available. In the application of direct broadcasting services (DBS), for example, an offset Fresnel lens can be produced by simply painting a zonal pattern on a window glass or a blind with conducting material. The satellite signal passing through the transparent zones is then collected by an indoor feed. Phase correcting antenna To increase the efficiency of Fresnel zone plate antennas, one can divide each Fresnel zone into several sub-zones, such as quarter-wave sub-zones, and provide an appropriate phase shift in each of them, resulting in a sub-zone phase-correcting zone plate. The problem with a dielectric-based zone plate lens antenna is that while the dielectric provides a phase shift to the transmitted wave, it inevitably reflects some of the energy back, so the efficiency of such a lens is limited. However, the low-efficiency problem is less severe for a zone plate reflector, as total reflection can be achieved by using a conducting reflector behind the zone plate. Based on focal field analysis, it has been demonstrated that high-efficiency zone plate reflectors can be obtained by employing the multilayer phase-correcting technique, which uses a number of dielectric slabs of low permittivity with different metallic zonal patterns printed on the different interfaces. The design of, and experiments on, circular and offset multilayer phase-correcting zone plate reflectors were presented in. A problem with the multilayer zone plate reflector is the complexity introduced, which might offset the advantage of using Fresnel zone plate antennas. One solution is to print an inhomogeneous array of conducting elements on a grounded dielectric plate, leading to the so-called single-layer printed flat reflector. This configuration has much in common with the printed array antenna, but it requires the use of a feed antenna instead of a corporate feed network. In contrast to the normal array antenna, the array elements differ from one another and are arranged in a pseudo-periodic manner. The theory and design method of single-layer printed flat reflectors incorporating conducting rings, and experimental results on such an antenna operating in the X-band, were given in. Naturally, this leads to a more general antenna concept, the phase-correcting reflective array. Reflectarray antenna A phase-correcting reflective array consists of an array of phase-shifting elements illuminated by a feed placed at the focal point.
The word "reflective" refers to the fact that each phase shifting element reflects back the energy in the incident wave with an appropriate phase shift. The phase shifting elements can be passive or active. Each phase shifting element can be designed to either produce a phase shift which is equal to that required at the element centre, or provide some quantised phase shifting values. Although the former does not seem to be commercially attractive, the latter proved to be practical antenna configuration. One potential advantage is that such an array can be reconfigured by changing the positions of the elements to produce different radiation patterns. A systematic theory of the phase efficiency of passive phase correcting array antennas and experimental results on an X-band prototype were reported in. In recent years, it became common to call this type of antennas "reflectarrays". Reference phase modulation It has been shown that the phase of the main lobe of a zone plate follows its reference phase, a constant path length or phase added to the formula for the zones, but that the phase of the side lobes is much less sensitive. So, when it is possible to modulate the signal by changing the material properties dynamically, the modulation of the side lobes is much less than that of the main lobe and so they disappear on demodulation, leaving a cleaner and more private signal. Beamsteering Fresnel antennas Beamsteering can be applied by amplitude/phase control or amplitude-only control of the elements of an antenna array positioned in the focal point of the lens as antenna feed. With amplitude-only control, no bandwidth-limiting phase shifters are needed, saving complexity and alleviating bandwidth constraints at the cost of limited beamsteering capability. Three-dimensional Fresnel antennas In order to increase the focusing, resolving and scanning properties and to create different shaped radiation patterns the Fresnel zone plate and antenna can be assembled conformable to a curvilinear natural or man-made formation and used as a diffractive antenna-Radome. Footnotes Radio frequency propagation Antennas (radio)
Fresnel zone antenna
Physics
1,828
32,878,156
https://en.wikipedia.org/wiki/Steam%20generator%20%28boiler%29
A steam generator is a form of low-water-content boiler, similar to a flash steam boiler. The usual construction is as a spiral coil of water-tube, arranged as a single, or monotube, coil. Circulation is once-through and pumped under pressure, as in a forced-circulation boiler. The narrow-tube construction, without any large-diameter drums or tanks, means that such boilers are safe from the effects of explosion, even if worked at high pressures. The pump flowrate is adjustable according to the quantity of steam required at the time. The burner output is throttled to maintain a constant working temperature. The burner output required varies according to the quantity of water being evaporated: it can be adjusted either by open-loop control according to the pump throughput, or by closed-loop control to maintain the measured temperature. Steam generators are used as auxiliary boilers on ships. Types Stone-Vapor One of the best-known designs is the Stone-Vapor. The inner casing of the boiler forms a vertical bell, with an outer airtight cylindrical casing. The oil or gas burner is mounted at the top, above the coils, and faces downwards. The heating element is a single tube, arranged into a number of helical cylinders. The first helices (in the flow direction) are of small-diameter tube, wound in large-diameter turns. Succeeding turns are coiled inside these, and the tube is of progressively increasing diameter, to allow for a constant flow rate as the water evaporates into steam and forms bubbles. The steam outlet is from the final turn, at the bottom of the inner helix. The outlet flow is approximately 90% steam (by mass), and residual water is separated by passing the flow through a steam-water separator. The exhaust gases turn upwards and flow over the outside of the bell, usually passing additional helices that are used as an initial feedwater heater. Clayton The Clayton steam generator is similar to the Stone-Vapor, but the burner and flow directions are reversed. The heating coil is mounted within a simple cylindrical casing. Rather than helical, cylindrical layers, the Clayton coils are arranged as layers of flat spirals. Water is pumped into the top layers and forced downwards. Again, the tube diameter increases in steps as evaporation takes place. The final turns form a single closely spaced helical cylinder around the burner, as a water-wall furnace, and are heated by radiant heat. The steam output is passed through a centrifugal separator, and a dry steam quality of 99.5% is claimed. See also Steam generator (railroad) Note References Steam boilers Boilers Marine steam propulsion Steam generators
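The two burner-control schemes described above can be sketched in a few lines of Python. This is a hedged illustration only: the setpoint, gains, and flow-to-power ratio are invented constants, and a real boiler controller would add integral action, output limits, and safety interlocks.

SETPOINT_C = 180.0      # desired working temperature, assumed
KW_PER_KG_S = 2500.0    # feedforward gain: burner kW per kg/s of feedwater, assumed
KP = 15.0               # proportional gain, kW per degree C of error, assumed

def burner_output(pump_flow_kg_s, measured_temp_c):
    feedforward = KW_PER_KG_S * pump_flow_kg_s    # open-loop: tracks steam demand
    trim = KP * (SETPOINT_C - measured_temp_c)    # closed-loop: holds temperature
    return max(0.0, feedforward + trim)

print(burner_output(0.8, 176.0))   # running cool: burner throttled up
print(burner_output(0.8, 184.0))   # running hot: burner throttled down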
Steam generator (boiler)
Chemistry
549
3,440,755
https://en.wikipedia.org/wiki/D-module
In mathematics, a D-module is a module over a ring D of differential operators. The major interest of such D-modules is as an approach to the theory of linear partial differential equations. Since around 1970, D-module theory has been built up, mainly as a response to the ideas of Mikio Sato on algebraic analysis, and expanding on the work of Sato and Joseph Bernstein on the Bernstein–Sato polynomial. Early major results were the Kashiwara constructibility theorem and Kashiwara index theorem of Masaki Kashiwara. The methods of D-module theory have always been drawn from sheaf theory and other techniques with inspiration from the work of Alexander Grothendieck in algebraic geometry. This approach is global in character, and differs from the functional analysis techniques traditionally used to study differential operators. The strongest results are obtained for over-determined systems (holonomic systems), and on the characteristic variety cut out by the symbols, which in the good case is a Lagrangian submanifold of the cotangent bundle of maximal dimension (involutive systems). The techniques were taken up from the side of the Grothendieck school by Zoghman Mebkhout, who obtained a general, derived category version of the Riemann–Hilbert correspondence in all dimensions. Modules over the Weyl algebra The first case of algebraic D-modules are modules over the Weyl algebra An(K) over a field K of characteristic zero. It is the algebra consisting of polynomials in the variables x1, ..., xn, ∂1, ..., ∂n, where the variables xi and the variables ∂j separately commute with each other, xi and ∂j commute for i ≠ j, and the commutator satisfies the relation [∂i, xi] = ∂ixi − xi∂i = 1. For any polynomial f(x1, ..., xn), this implies the relation [∂i, f] = ∂f / ∂xi, thereby relating the Weyl algebra to differential equations. An (algebraic) D-module is, by definition, a left module over the ring An(K). Examples of D-modules include the Weyl algebra itself (acting on itself by left multiplication), the (commutative) polynomial ring K[x1, ..., xn], where xi acts by multiplication and ∂j acts by partial differentiation with respect to xj, and, in a similar vein, the ring of holomorphic functions on Cn (functions of n complex variables). Given some differential operator P = an(x) ∂n + ... + a1(x) ∂1 + a0(x), where x is a complex variable and the ai(x) are polynomials, the quotient module M = A1(C)/A1(C)P is closely linked to the space of solutions of the differential equation P f = 0, where f is some holomorphic function in C, say. The vector space consisting of the solutions of that equation is given by the space of homomorphisms of D-modules Hom(M, O), where O denotes the D-module of holomorphic functions on C. D-modules on algebraic varieties The general theory of D-modules is developed on a smooth algebraic variety X defined over an algebraically closed field K of characteristic zero, such as K = C. The sheaf of differential operators DX is defined to be the OX-algebra generated by the vector fields on X, interpreted as derivations. A (left) DX-module M is an OX-module with a left action of DX on it. Giving such an action is equivalent to specifying a K-linear map ∇ : Vect(X) → EndK(M), v ↦ ∇v, satisfying ∇fv(m) = f∇v(m), ∇v(fm) = v(f)m + f∇v(m) (Leibniz rule), and ∇[v,w](m) = [∇v, ∇w](m). Here f is a regular function on X, v and w are vector fields, m a local section of M, and [−, −] denotes the commutator. Therefore, if M is in addition a locally free OX-module, giving M a D-module structure is nothing else than equipping the vector bundle associated to M with a flat (or integrable) connection. As the ring DX is noncommutative, left and right D-modules have to be distinguished.
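Before continuing, the module structure described above can be checked concretely in the simplest case. The following SymPy sketch realizes the action of A1 on the polynomial ring, with x acting by multiplication and ∂ by differentiation, and verifies that the commutator [∂, x] acts as the identity; it is a minimal illustration, not a tool used in D-module theory.

import sympy as sp

x = sp.symbols("x")

def d(p):       # action of ∂: differentiation
    return sp.diff(p, x)

def mx(p):      # action of x: multiplication
    return sp.expand(x * p)

p = 3 * x**4 + x + 7
commutator_applied = d(mx(p)) - mx(d(p))     # [∂, x] applied to p
print(sp.simplify(commutator_applied - p))   # prints 0, so [∂, x] = 1 on K[x]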
The two notions can, however, be exchanged, since there is an equivalence of categories between both types of modules, given by mapping a left module M to the tensor product M ⊗ ΩX, where ΩX is the line bundle given by the highest exterior power of differential 1-forms on X. This bundle has a natural right action determined by ω ⋅ v := −Liev(ω), where v is a differential operator of order one, that is to say a vector field, ω an n-form (n = dim X), and Lie denotes the Lie derivative. Locally, after choosing some system of coordinates x1, ..., xn (n = dim X) on X, which determine a basis ∂1, ..., ∂n of the tangent space of X, sections of DX can be uniquely represented as expressions ∑α fα ∂α (in multi-index notation), where the fα are regular functions on X. In particular, when X is the n-dimensional affine space, this DX is the Weyl algebra in n variables. Many basic properties of D-modules are local and parallel the situation of coherent sheaves. This builds on the fact that DX is a locally free sheaf of OX-modules, albeit of infinite rank, as the above-mentioned OX-basis shows. A DX-module that is coherent as an OX-module can be shown to be necessarily locally free (of finite rank). Functoriality D-modules on different algebraic varieties are connected by pullback and pushforward functors comparable to the ones for coherent sheaves. For a map f: X → Y of smooth varieties, the definitions are: DX→Y := OX ⊗f−1(OY) f−1(DY) This is equipped with a left DX action in a way that emulates the chain rule, and with the natural right action of f−1(DY). The pullback is defined as f∗(M) := DX→Y ⊗f−1(DY) f−1(M). Here M is a left DY-module, while its pullback is a left module over X. This functor is right exact; its left derived functor is denoted Lf∗. Conversely, for a right DX-module N, f∗(N) := f∗(N ⊗DX DX→Y) is a right DY-module. Since this mixes the right exact tensor product with the left exact pushforward, it is common to set instead f∗(N) := Rf∗(N ⊗LDX DX→Y). Because of this, much of the theory of D-modules is developed using the full power of homological algebra, in particular derived categories. Holonomic modules Holonomic modules over the Weyl algebra It can be shown that the Weyl algebra is a (left and right) Noetherian ring. Moreover, it is simple, that is to say, its only two-sided ideals are the zero ideal and the whole ring. These properties make the study of D-modules manageable. Notably, standard notions from commutative algebra such as Hilbert polynomial, multiplicity and length of modules carry over to D-modules. More precisely, An(K) is equipped with the Bernstein filtration, that is, the filtration such that FpAn(K) consists of K-linear combinations of differential operators xα∂β with |α| + |β| ≤ p (using multi-index notation). The associated graded ring is seen to be isomorphic to the polynomial ring in 2n indeterminates. In particular it is commutative. Finitely generated D-modules M are endowed with so-called "good" filtrations F∗M, which are ones compatible with F∗An(K), essentially parallel to the situation of the Artin–Rees lemma. The Hilbert polynomial is defined to be the numerical polynomial that agrees with the function n ↦ dimK FnM for large n. The dimension d(M) of an An(K)-module M is defined to be the degree of the Hilbert polynomial. It is bounded by the Bernstein inequality n ≤ d(M) ≤ 2n. A module whose dimension attains the least possible value, n, is called holonomic.
The A1(K)-module M = A1(K)/A1(K)P (see above) is holonomic for any nonzero differential operator P, but a similar claim for higher-dimensional Weyl algebras does not hold. General definition As mentioned above, modules over the Weyl algebra correspond to D-modules on affine space. The Bernstein filtration not being available on DX for general varieties X, the definition is generalized to arbitrary affine smooth varieties X by means of the order filtration on DX, defined by the order of differential operators. The associated graded ring gr DX is given by regular functions on the cotangent bundle T∗X. The characteristic variety is defined to be the subvariety of the cotangent bundle cut out by the radical of the annihilator of gr M, where again M is equipped with a suitable filtration (with respect to the order filtration on DX). As usual, the affine construction then glues to arbitrary varieties. The Bernstein inequality continues to hold for any (smooth) variety X. While the upper bound is an immediate consequence of the above interpretation of gr DX in terms of the cotangent bundle, the lower bound is more subtle. Properties and characterizations Holonomic modules have a tendency to behave like finite-dimensional vector spaces. For example, their length is finite. Also, M is holonomic if and only if all cohomology groups of the complex Li∗(M) are finite-dimensional K-vector spaces, where i is the closed immersion of any point of X. For any D-module M, the dual module is defined by D(M) := RHom(M, DX) ⊗ ΩX−1 [dim X]. Holonomic modules can also be characterized by a homological condition: M is holonomic if and only if D(M) is concentrated (seen as an object in the derived category of D-modules) in degree 0. This fact is a first glimpse of Verdier duality and the Riemann–Hilbert correspondence. It is proven by extending the homological study of regular rings (especially what is related to global homological dimension) to the filtered ring DX. Another characterization of holonomic modules is via symplectic geometry. The characteristic variety Ch(M) of any D-module M is, seen as a subvariety of the cotangent bundle T∗X of X, an involutive variety. The module is holonomic if and only if Ch(M) is Lagrangian. Applications One of the early applications of holonomic D-modules was the Bernstein–Sato polynomial. Kazhdan–Lusztig conjecture The Kazhdan–Lusztig conjecture was proved using D-modules. Riemann–Hilbert correspondence The Riemann–Hilbert correspondence establishes a link between certain D-modules and constructible sheaves. As such, it provided a motivation for introducing perverse sheaves. Geometric representation theory D-modules are also applied in geometric representation theory. A main result in this area is the Beilinson–Bernstein localization. It relates D-modules on flag varieties G/B to representations of the Lie algebra of a reductive group G. D-modules are also crucial in the formulation of the geometric Langlands program. Notes Bibliography External links Algebraic analysis Partial differential equations Sheaf theory
D-module
Mathematics
2,430
4,405,939
https://en.wikipedia.org/wiki/HD%20187085
HD 187085 is a yellow-hued star in the southern constellation of Sagittarius. It is too faint to be visible to the naked eye, having an apparent visual magnitude of +7.225. The star is located at a distance of approximately 148 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +18 km/s. This is an ordinary G-type main-sequence star with a stellar classification of G0V, which means it is generating energy through core hydrogen fusion. It is younger than the Sun, with an estimated age of 2.7 billion years, and is spinning with a leisurely rotation period of around 21 days. The star is 27% larger and 19% more massive than the Sun. It is radiating 2.3 times the luminosity of the Sun from its photosphere at an effective temperature of 6,117 K. In 2006, an extrasolar planet was announced orbiting HD 187085, with a minimum mass slightly below that of the planet Jupiter. It is orbiting the host star with a period of around . The orbit overlaps the habitable zone of this star. In 2009, the presence of an infrared excess was announced, suggesting a debris disk orbits the star. See also HD 188015 HD 20782 List of extrasolar planets References External links G-type main-sequence stars Planetary systems with one confirmed planet Circumstellar disks Sagittarius (constellation) Durchmusterung objects 187085 097546
HD 187085
Astronomy
314
25,164,793
https://en.wikipedia.org/wiki/Seed%20plant
A seed plant or spermatophyte, also known as a phanerogam (taxon Phanerogamae) or a phaenogam (taxon Phaenogamae), is any plant that produces seeds. It is a category of embryophyte (i.e. land plant) that includes most of the familiar land plants, including the flowering plants and the gymnosperms, but not ferns, mosses, or algae. The term phanerogam or phanerogamae is derived from the Greek phanerós, meaning "visible", in contrast to the term "cryptogam" or "cryptogamae" (from kryptós, "hidden", and gameîn, "to marry"). These terms distinguish those plants with hidden sexual organs (cryptogamae) from those with visible ones (phanerogamae). Description The extant spermatophytes form five divisions, the first four of which are classified as gymnosperms, plants that have unenclosed, "naked" seeds: Cycadophyta, the cycads, a subtropical and tropical group of plants; Ginkgophyta, which includes a single living species of tree in the genus Ginkgo; Pinophyta, the conifers, which are cone-bearing trees and shrubs; and Gnetophyta, the gnetophytes, various woody plants in the relict genera Ephedra, Gnetum, and Welwitschia. The fifth extant division is the flowering plants, also known as angiosperms or magnoliophytes, the largest and most diverse group of spermatophytes: angiosperms possess seeds enclosed in a fruit, unlike gymnosperms. In addition to the five living taxa listed above, the fossil record contains evidence of many extinct taxa of seed plants, among them: Pteridospermae, the so-called "seed ferns", which were one of the earliest successful groups of land plants; forests dominated by seed ferns were prevalent in the late Paleozoic. Glossopteris was the most prominent tree genus in the ancient southern supercontinent of Gondwana during the Permian period. By the Triassic period, seed ferns had declined in ecological importance, and representatives of modern gymnosperm groups were abundant and dominant through the end of the Cretaceous, when the angiosperms radiated. Evolutionary history A whole genome duplication event in the ancestor of seed plants occurred about . This gave rise to a series of evolutionary changes that resulted in the origin of modern seed plants. A middle Devonian (385-million-year-old) precursor to seed plants from Belgium has been identified, predating the earliest seed plants by about 20 million years. Runcaria, small and radially symmetrical, is an integumented megasporangium surrounded by a cupule. The megasporangium bears an unopened distal extension protruding above the multilobed integument. It is suspected that the extension was involved in anemophilous (wind) pollination. Runcaria sheds new light on the sequence of character acquisition leading to the seed. Runcaria has all of the qualities of seed plants except for a solid seed coat and a system to guide the pollen to the seed. Runcaria was followed shortly after by plants with a more condensed cupule, such as Spermasporites and Moresnetia. Seed-bearing plants had diversified substantially by the Famennian, the last stage of the Devonian. Examples include Elkinsia, Xenotheca, Archaeosperma, "Hydrasperma", Aglosperma, and Warsteinia. Some of these Devonian seeds are now classified within the order Lyginopteridales. Phylogeny Seed-bearing plants are a clade within the vascular plants (tracheophytes). Internal phylogeny The spermatophytes were traditionally divided into angiosperms, or flowering plants, and gymnosperms, which include the gnetophytes, cycads, ginkgo, and conifers.
Older morphological studies posited a close relationship between the gnetophytes and the angiosperms, in particular based on vessel elements. However, molecular studies (and some more recent morphological and fossil papers) have generally shown a clade of gymnosperms, with the gnetophytes in or near the conifers. For example, one commonly proposed set of relationships, known as the gne-pine hypothesis, places the gnetophytes as a sister group of the Pinaceae within the conifers. However, the relationships between these groups should not be considered settled. Other classifications Other classifications group all the seed plants in a single division, with classes for the five groups: Division Spermatophyta Cycadopsida, the cycads Ginkgoopsida, the ginkgo Pinopsida, the conifers ("Coniferopsida") Gnetopsida, the gnetophytes Magnoliopsida, the flowering plants, or Angiospermopsida A more modern classification ranks these groups as separate divisions (sometimes under the Superdivision Spermatophyta): Cycadophyta, the cycads Ginkgophyta, the ginkgo Pinophyta, the conifers Gnetophyta, the gnetophytes Magnoliophyta, the flowering plants Unassigned extinct spermatophyte orders, some of which qualify as "seed ferns": †Cordaitales †Calamopityales †Callistophytales †Caytoniales †Gigantopteridales †Glossopteridales †Lyginopteridales †Medullosales †Peltaspermales †Umkomasiales (corystosperms) †Czekanowskiales †Bennettitales †Erdtmanithecales †Pentoxylales †Petriellales Taylor et al. 1994 †Avatiaceae Anderson & Anderson 2003 †Axelrodiopsida Anderson & Anderson †Alexiales Anderson & Anderson 2003 †Hamshawviales Anderson & Anderson 2003 †Hexapterospermales Doweld 2001 †Hlatimbiales Anderson & Anderson 2003 †Matatiellales Anderson & Anderson 2003 †Arberiopsida Doweld 2001 †Iraniales E. Taylor et al. 2008 †Vojnovskyales E. Taylor et al. 2008 †Hermanophytales E. Taylor et al. 2008 †Dirhopalostachyaceae E. Taylor et al. 2008 References Further reading Thomas N. Taylor, Edith L. Taylor, and Michael Krings. 2008. Paleobotany: The Biology and Evolution of Fossil Plants, 2nd edition. Academic Press (an imprint of Elsevier): Burlington, MA; New York, NY; San Diego, CA, USA; London, UK. 1252 pages. . Superphyla Devonian first appearances Plants
Seed plant
Biology
1,459