https://en.wikipedia.org/wiki/Climate%20change%20and%20invasive%20species
Climate change and invasive species refers to the destabilization of the environment caused by climate change and its role in facilitating the spread of invasive species: species that are not historically found in a certain region and that often have a negative impact on that region's native species. This complex relationship is notable because climate change and invasive species are considered by the USDA to be two of the top four causes of global biodiversity loss. The interaction between climate change and invasive species is complex and not easy to assess. Climate change is likely to favour some invasive species and harm others, but few authors have identified specific consequences of climate change for invasive species. Consequences of climate change for invasive species are distinct from consequences for native species because of differences in characteristics (the traits and qualities associated with invasions), management and abundance, and they can be direct, through a species' survival, or indirect, through other factors such as pest or prey species. Human-caused climate change and the rise in invasive species are directly linked through the changing of ecosystems. The destabilization of climate factors in these ecosystems can create more hospitable habitat for invasive species, allowing them to spread beyond their original geographic boundaries. Climate change broadens the invasion pathway that enables the spread of species. Not all invasive species benefit from climate change, but most observations show an acceleration of invasive populations. Examples of invasive species that have benefited from climate change include insects (such as the western corn rootworm and other crop pests), pathogens (such as the cinnamon fungus), freshwater and marine species (such as the brook trout), and plants (such as the umbrella tree).
Measurably warmer or colder conditions create opportunities for non-native terrestrial and marine organisms to migrate to new zones and compete with established native species in the same habitat. Given their remarkable adaptability, non-native plants may then invade and take over the ecosystem into which they were introduced. So far, there have been more observations of climate change having a positive or accelerating effect on biological invasions than a negative one. However, most literature focuses on temperature only, and because of the complex nature of both climate change and invasive species, outcomes are difficult to predict. There are many ways to manage the impact of invasive species. Prevention, early detection, climate forecasting and genetic control are some of the ways communities can mitigate the risks of invasive species and climate change. Although the accuracy of models that study the complex patterns of species populations is difficult to assess, many predict range shifts for species as climates change.

Definitions

Invasive species

According to the International Union for Conservation of Nature (IUCN, 2017), invasive species are "animals, plants or other organisms that are introduced into places outside their natural range, negatively impacting native biodiversity, ecosystem services or human well-being." Climate change will also redefine which species are considered invasive. Some taxa formerly considered invasive may become less influential in an ecosystem changing with time, while other species formerly considered non-invasive may become invasive. At the same time, a considerable number of native species will undergo a range shift and migrate to new areas. Shifting ranges, and the changing impacts of invasive species, make the term "invasive species" difficult to define – it has become an example of a shifting baseline. Considering the changing dynamics mentioned above, Hellmann et al.
(2008) conclude that invasive species should be defined as "those taxa that have been introduced recently" and that exert a "substantial negative impact on native biota, economic values, or human health." Consequently, a native species gaining a larger range with a changing climate is not considered invasive, as long as it does not cause considerable damage. The taxa that have been introduced by humans throughout history have changed from century to century and decade to decade, and so has the rate of introductions. Studies of global rates of first records of alien species (counted as the number of first records of established alien species per time unit) show that during the period 1500–1800 the rates stayed at a low level, whereas they have been increasing constantly since the year 1800. 37% of all first records of alien species were registered as recently as the period 1970–2014. The invasion of alien species is one of the major drivers of biodiversity loss in general, and the second most common threat associated with complete species extinctions since the 16th century. Invasive alien species are also capable of reducing the resilience of natural habitats, as well as agricultural and urban areas, to climate change. Climate change, in turn, reduces the resilience of habitats to species invasions. Biological invasions and climate change are two of the key processes affecting global diversity, yet their effects are often examined separately, as multiple drivers interact in complex and non-additive ways. Some consequences of climate change, among which increasing temperature is one, are widely acknowledged to accelerate the expansion of alien species.

Invasion pathway

The way in which biological invasions occur is stepwise, and is referred to as the invasion pathway.
It includes four major stages: the introduction/transport stage, the colonization/casual stage, the establishment/naturalization stage, and the landscape spread/invasion stage. The concept of the invasion pathway describes the environmental filters a species needs to overcome in each stage in order to become invasive. There are a number of mechanisms affecting the outcome of each step, of which climate change is one. For the initial transport stage, the filter is of a geographic character. For the second colonization stage, the filter is constituted by abiotic conditions, and for the third establishment stage, by biotic interactions. For the last landscape spread stage, certain factors of the landscape make up the filter the species needs to pass through.

Interactions

The interaction between climate change and invasive species is complex and not easy to assess. Climate change is likely to favour some invasive species and harm others, but few authors have identified specific consequences of climate change for invasive species. As early as 1993, a climate/invasive species interaction was speculated for the alien tree species Maesopsis eminii, which spread in the East Usambara mountain forests of Tanzania. Temperature changes, extremes of precipitation and decreased mist were cited as potential factors promoting its invasion. Consequences of climate change for invasive species are distinct from consequences for native species because of differences in characteristics (the traits and qualities associated with invasions), management and abundance, and they can be direct, through a species' survival, or indirect, through other factors such as pest or prey species. So far, there have been more observations of climate change having a positive or accelerating effect on biological invasions than a negative one. However, most literature focuses on temperature only, and because of the complex nature of both climate change and invasive species, outcomes are difficult to predict.
Favorable conditions for the introduction of invasive species

Effects on invasion pathway stages

Climate change will interact with many existing stressors that affect the distribution, spread, abundance and impact of invasive species. Hence, in the relevant literature, the impacts of climate change on invasive species are often considered separately for each stage of the invasion pathway: (1) introduction/transport, (2) colonization/casual stage, (3) establishment/naturalization, (4) spread/invasion stage. Corresponding to these invasion stages, Hellmann identifies five non-exclusive consequences of climate change for invasive species:

Altered transport and introduction mechanisms
Altered climatic constraints on invasive species
Altered distribution of existing invasive species
Altered impact of existing invasive species
Altered effectiveness of management strategies

The first consequence, altered transport and introduction mechanisms, arises because invasions are often purposeful (e.g. biocontrol, sport fishing, agriculture) or accidental introductions with the help of humans, and climate change could alter the patterns of human transport. Changed recreational and commercial activities will change human transport and increase the propagule pressure of some non-native species from zero (e.g. by connecting new regions) or above a certain threshold that allows for establishment. Longer shipping seasons can increase the number of transports of non-native species and increase propagule pressure, supporting potential invaders such as the monkey goby. Additionally, introductions for recreation and conservation purposes could increase. Changing climatic conditions can reduce native species' ability to compete with non-native species, and some currently unsuccessful non-native species will be able to colonize new areas if conditions change towards those of their original range.
Multiple factors can increase the success of colonization, as described in more detail below. There is a wide range of climatic factors that affect the distribution of existing invasive species. Range limits due to cold or warm temperature constraints will change as a result of global warming, so that cold-temperature-constrained species will be less restricted in their upper-elevation and higher-latitude range limits, and warm-temperature-constrained species will be less restricted in their lower-elevation and lower-latitude range limits. Changing precipitation patterns, the frequency of stream flow and changes in salinity can also affect the hydrologic constraints on invasive species. As many invasive species have been selected for traits that facilitate long-distance dispersal, it is likely that shifts in suitable climatic zones favor invasive species. The impact on native species can be altered through the population densities of invasive species. Competition interactions and the abundance of native species or resources contribute to the relative impact of invasive species. The effectiveness of different management strategies is dependent on climate. For instance, mechanical control of invasive species by cold, hard freezes or ice cover can become less effective with increasing temperatures. Changes in the fate and behaviour of pesticides, and in their effectiveness in controlling invasive species, can also occur. Decoupling of the relationship between some biocontrol agents and their targets can support invasions. On the other hand, the effectiveness of other biocontrol agents could increase due to overlapping species ranges.

Effects on climatic conditions

Another way to look at how climate change creates conditions that facilitate invasions is to consider the changes in the environment that have an impact on species survival.
These changes in environmental conditions include temperature (terrestrial and marine), precipitation, chemistry (terrestrial and marine), ocean circulation and sea levels. Most of the available literature on climate-induced biological invasions deals with warming effects, so there is much more information on temperature effects on invasions than on precipitation patterns, extreme events and other climatic conditions. Global warming can cause droughts in drylands, which can kill plants that draw heavily on soil water. It can also shift water-requiring invasive species into these drylands, further depleting the water supply for the region's plants. All of these influences can cause physiological stress in organisms, thereby facilitating invasion and further degrading the native ecosystem.

Temperature

Several researchers have found that climate change alters environmental conditions in a way that benefits species' distributions by enabling them to expand their ranges into areas where they were previously not able to survive or reproduce. These range shifts are mainly attributed to the increased temperatures caused by climate change. Shifts of geographic distributions will also challenge the definition of invasive species, as mentioned earlier. In aquatic ecosystems, cold temperatures and winter hypoxia are currently the limiting factors for the survival of invasive species, and global warming will likely cause new species to become invasive. In each stage of the invasion pathway, temperature has potential impacts on the success of an invasive species; these are described in the section on the effects on invasion pathway stages. They include facilitating colonization and successful reproduction of invasive species that were previously unsuccessful in an area, changed competition interactions between native and invasive species, changed range limits regarding altitude and latitude, and changed management effectiveness.
Global warming can also modify human activity, such as transport, in ways that increase the chances of biological invasions.

Extreme weather events

Climate change can cause increases in extreme weather such as cold winters or storms, which can become problematic for current invasive species. Invasive species that are adapted to a warmer or more stable climate can be at a disadvantage when sudden seasonal changes, such as an especially cold winter, occur. Unpredictable extreme weather can therefore act as a reset mechanism for invasive species, reducing their numbers in the affected area. More extreme climatic events such as floods may also result in escapes of previously confined aquatic species, as well as the removal of existing vegetation and the creation of bare soil, which is then easier to colonize.

Invasive species benefiting from climate change

One important aspect of the success of invasive species under climate change is their advantage over native species. Invasive species often carry a set of traits that make them successful invaders (e.g. the ability to survive in adverse conditions, broad environmental tolerances, rapid growth rates and wide dispersal), as those traits are selected for in the invasion process. Those traits will often help them succeed in competition with native species under climate change. However, these traits are not exclusive to invasive species, nor do all invasive species carry them. Rather, some species will benefit from climate change while others will be more negatively affected by it. For example, despite an invasive species' ability to reach new environments, its presence could disrupt the food chain of the ecosystem, potentially causing large-scale death to other species and to itself. Invasive species are simply more likely than native species to carry traits that favour them in a changing environment, as a result of selection processes along the invasion pathway.
Some native species that depend on mutualistic relationships will see a reduction in their fitness and competitive ability as a result of climate change effects on the other species in the mutualism. As non-native species depend on mutualistic relationships more rarely, they will be less affected by this mechanism. Climate change also challenges the adaptability of native species through changes in environmental conditions, making it difficult for native species to survive and easy for invasive species to take over empty niches. Changes in the environment can also compromise native species' ability to compete with invaders, which are often generalists. Invasive species do not require climate change to damage ecosystems; however, climate change might exacerbate the damage they cause.

Decoupling of ecosystems

Food webs and food chains are two different ways to examine energy transfer and predation through a community. While food webs tend to be more realistic and easy to identify in environments, food chains highlight the importance of energy transfer between trophic levels. Air temperature greatly influences not only the germination of vegetative species but also the foraging and reproductive habits of animal species. In either way of approaching relationships between populations, it is important to realize that species likely cannot and will not adjust to climate change in the same way or at the same rate. This phenomenon is known as 'decoupling' and has detrimental effects on the successful functioning of affected environments. In the Arctic, caribou calves are beginning to largely miss out on food as vegetation begins growing earlier in the season as a result of rising temperatures. Specific examples of decoupling within an environment include the time lag between air warming and soil warming, and the relationship between temperature (as well as photoperiod) and heterotrophic organisms.
The former example results from the ability of soil to hold its temperature. Just as water's higher specific heat than air's results in ocean temperatures being warmest at the close of the summer season, soil temperature lags behind that of air. This results in a decoupling of above- and below-ground subsystems. This decoupling affects invasion because it increases the growth rates and distribution of invasive species. Invasive species typically have better tolerance of varied environmental conditions, increasing their survival rate when the climate changes. When native species die because they can no longer live in an ecosystem, the new organisms that move in can take it over.

Other effects

The current climate in many areas will change drastically, which can affect both current native species and invasive species. Current invasive coldwater species that are adapted to the present climate may be unable to persist under new climate conditions, showing that the interaction between climate change and invasive species does not need to favour the invader. If a specific habitat changes drastically due to climate change, a native species can become an invader in its own habitat. Such changes in the habitat can prevent the native species from completing its life cycle or force a range shift. Another result of the changed habitat is local extinction of the native species when it is unable to migrate.

Migration

Higher temperatures also mean longer growing seasons for plants and animals, which allows them to shift their ranges poleward. Poleward migration also changes the migration patterns of many species. Longer growing seasons change species' times of arrival, which changes the amount of food available at arrival, altering the species' reproductive success and survival.
There are also secondary effects of global warming on species, such as changes in the habitat, food sources and predators of an ecosystem, which could later lead to the local extinction of species or to migration to a new suitable area.

Examples

Insect pests

Insect pests have always been viewed as a nuisance, most often for their damaging effects on agriculture, their parasitism of livestock, and their impacts on human health. Influenced heavily by climate change and invasions, they have recently come to be seen as a significant threat to both biodiversity and ecosystem functionality. Forestry industries are also at risk of being affected. A plethora of factors contribute to existing concerns about the spread of insect pests, all of which stem from increasing air temperatures. Phenological changes, overwintering, increases in atmospheric carbon dioxide concentration, migration, and increasing rates of population growth all affect pests' presence, spread and impact, both directly and indirectly. Diabrotica virgifera virgifera, the western corn rootworm, migrated from North America to Europe. On both continents, the western corn rootworm has had significant impacts on corn production and therefore economic costs. Phenological changes and warming air temperatures have allowed this pest's upper boundary to expand further northward. In a similar sense to decoupling, the upper and lower limits of a species' spread are not always paired neatly with one another. Mahalanobis distance and multidimensional envelope analyses performed by Pedro Aragon and Jorge M. Lobo predict that even as the pest's range expands northward, currently invaded European communities will remain within the pest's favored range. In general, the global distribution of crop pests is expected to increase as an effect of climate change. This is expected for all kinds of crops, creating a threat both for agriculture and for other commercial uses of crops.
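The Mahalanobis-distance approach mentioned above can be illustrated with a minimal sketch: climate observations at sites where a pest is already established define an envelope, and a candidate site is scored by its Mahalanobis distance from the envelope's centroid (small distance means climatically similar, large means outside the favored range). All climate values below are invented for illustration and are not data from the Aragon and Lobo analysis.

```python
# Minimal Mahalanobis-distance climate envelope. Hypothetical data only.

def mean(xs):
    return sum(xs) / len(xs)

def mahalanobis_2d(point, samples):
    """Distance of `point` from the centroid of 2-D `samples`,
    scaled by the samples' covariance."""
    t = [s[0] for s in samples]
    p = [s[1] for s in samples]
    mt, mp = mean(t), mean(p)
    n = len(samples)
    # sample covariance matrix [[a, b], [b, c]]
    a = sum((x - mt) ** 2 for x in t) / (n - 1)
    c = sum((y - mp) ** 2 for y in p) / (n - 1)
    b = sum((x - mt) * (y - mp) for x, y in samples) / (n - 1)
    det = a * c - b * b
    inv = [[c / det, -b / det], [-b / det, a / det]]  # inverse covariance
    dx, dy = point[0] - mt, point[1] - mp
    d2 = dx * (inv[0][0] * dx + inv[0][1] * dy) \
       + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return d2 ** 0.5

# Hypothetical (mean summer temp in C, annual precip in mm) at invaded sites:
invaded = [(22, 600), (24, 650), (23, 700), (25, 620), (21, 680)]

print(mahalanobis_2d((23, 640), invaded))  # near the envelope centre: small
print(mahalanobis_2d((35, 200), invaded))  # far outside the envelope: large
```

In practice such analyses use many climate variables and gridded data; the two-variable version above only shows the scoring step.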
As the climate gets warmer, crop pests are predicted to spread towards the poles in latitude and upward in altitude. Dry or cold areas, with low current mean temperatures and current precipitation below 1100 mm/year, could potentially be more affected than other areas. The present climate in these areas is often unfavourable for the crop pests that currently live there, so an increase in temperature will bring advantages to the pests. With increased temperatures the life cycles of crop pests will be faster, and with winters above freezing, new crop pest species will be able to inhabit these areas. Precipitation has a lesser effect on crop pests than temperature, but it can still have an impact. Drought makes host plants more attractive to insects, and crop pests therefore increase during droughts. For example, the confused flour beetle is predicted to increase in the South American austral region with increased temperature: higher temperatures decrease both mortality and development time for the confused flour beetle. The confused flour beetle population is expected to increase the most at higher latitudes. Areas with a warmer climate or lower altitudes are predicted to experience a decrease in crop pests. The largest decline in crop pests is expected to occur in areas with a high mean temperature or precipitation above 1100 mm/year. Despite this decline, it is unlikely that climate change will result in the complete removal of the existing crop pest species in an area. Higher amounts of precipitation can also flush away the eggs and larvae of potential crop pests.

Pathogen impacts

While still limited in research scope, it is known that climate change and invasive species affect the presence of pathogens, and there is evidence that global warming will increase the abundance of plant pathogens specifically.
While particular weather changes will affect species differently, increased air moisture plays a significant role in rapid outbreaks of pathogens. In the small amount of research that has been completed on the incidence of plant pathogens in response to climate change, the majority of the work focuses on above-ground pathogens. This does not mean that soil-borne pathogens are exempt from the effects of climate change: Phytophthora cinnamomi, a soil-borne pathogen that causes oak tree decline, has increased in activity in response to climate change.

Freshwater and marine environments

Barriers between marine ecosystems are typically physiological in nature, as opposed to geographic (e.g. mountain ranges). These physiological barriers may take the form of changes in pH, water temperature, water turbidity, and more. Climate change and global warming have begun to affect these barriers, the most significant being water temperature. The warming of sea water has allowed crabs to invade Antarctica, and other durophagous predators are not far behind. As these invaders move in, species endemic to the benthic zone will have to adjust and begin to compete for resources, destroying the existing ecosystem. Freshwater systems are significantly affected by climate change. Extinction rates within freshwater bodies tend to be equal to or even higher than those of some terrestrial organisms. While species may experience range shifts in response to physiological changes, outcomes are species-specific and not predictable for all organisms. As water temperatures increase, organisms that inhabit warmer waters are positively affected, while cold-water organisms are negatively affected. Warmer temperatures also melt arctic ice, which raises sea levels. Because of this rise, many photosynthesizing species are no longer able to get enough light to survive.
Compared to terrestrial environments, freshwater ecosystems have very few geographical or physical barriers between different areas. Increased temperatures and shorter durations of cold will increase the probability of invasive species in an ecosystem, because the winter hypoxia that prevents those species' survival will be eliminated. This is the case with the brook trout, an invasive species in lakes and streams in Canada. The invasive brook trout has the capacity to eliminate the native bull trout and other native species in Canadian streams. The temperature of the water plays a large part in the brook trout's capacity to inhabit a stream, but other factors, such as stream flow and geology, are also important in how well established the brook trout becomes. The bull trout has positive population growth, or holds a competitive advantage, only in streams that do not exceed a certain temperature in the warmest months; the brook trout has a competitive and physiological advantage over the bull trout in warmer water. The winter period is also an important factor in the brook trout's capacity to inhabit a stream: brook trout may have a reduced survival rate if exposed to especially long and harsh winters. Because the range of the brook trout has been observed to depend on temperature, there is increasing concern that rising temperatures due to climate change will allow the brook trout to eliminate the bull trout even further, in colder waters. Climate change influences not only the temperature in lakes but also stream flows, and therefore other stream factors. These unknowns make it hard to predict how the brook trout and bull trout will react to climate change.

Management and prevention

Management strategies generally take a different approach to invasive species compared with most native species. In terms of climate change and native species, the most fundamental strategy is conservation.
The strategy for invasive species, however, is mainly about control and management. There are several different types of management and prevention strategies, such as the following.

Approaches

Prevention: This is generally the more environmentally desirable approach, but it is difficult to practice because of the difficulty of separating invasive from non-invasive species. Border control and quarantine measures are normally the first prevention mechanisms. Preventative measures include exchanging ballast water in the middle of the ocean, which is the main tool available to ships to limit the introduction of invasive species. Another method of prevention is public education, to increase understanding of how individual actions further the spread of invasive species and to promote awareness of strategies that reduce their introduction and spread. Invasion risk assessment can also aid preventative management, since it allows for early identification: potentially invasive species are identified through comparison of traits common to known invaders.

Monitoring and early detection: Samples can be taken in specific areas to see whether any new species are present. These samples are then run through a database to determine whether the species are invasive. This can be done using genetic tools such as environmental DNA (eDNA). eDNA samples taken in ecosystems are run through a database containing reference DNA sequences for known species. When the database matches a sequence from the sample's DNA, information about species that are or have been present in the studied area can be obtained. If a species is confirmed to be invasive, managers can then take precautions in the form of a rapid-response eradication method. The eDNA method is mainly used in marine environments, but there are ongoing studies on how to use it in terrestrial environments as well.
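The eDNA matching step described above can be sketched as a simple lookup of sample reads against a reference database, followed by a check against a watchlist of known invaders. Real pipelines use sequence-alignment tools and tolerate partial matches; the species assignments and short "barcodes" below are invented placeholders, not real reference sequences.

```python
# Toy eDNA identification: exact lookup of reads against reference barcodes.
# Barcodes are hypothetical placeholders, not real genetic data.

reference_db = {
    "ATCGGCTAAGC": "Salvelinus fontinalis (brook trout)",
    "GGCTTACCGAT": "Salvelinus confluentus (bull trout)",
}

invasive_watchlist = {"Salvelinus fontinalis (brook trout)"}

def identify(sample_reads):
    """Match each read exactly against the reference barcodes;
    return the set of species detected in the sample."""
    detected = set()
    for read in sample_reads:
        species = reference_db.get(read)
        if species:
            detected.add(species)
    return detected

# A water sample yields some matching reads and some unidentifiable ones:
reads = ["ATCGGCTAAGC", "TTTTTTTTTTT", "GGCTTACCGAT"]
found = identify(reads)
alerts = found & invasive_watchlist  # species that trigger a rapid response
print(sorted(found))
print(sorted(alerts))
```

The watchlist intersection is the decision point: only detections on the invasive list would trigger the rapid-response methods described next.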
Rapid response: Several methods of eradication are used to prevent the distribution and irreversible introduction of invasive species into new areas and habitats. There are several types of rapid response:

Mechanical/manual control: This is often done through human labor, such as hand pulling, mowing, cutting, mulching, flooding, digging and burning of invasive species. Burning often takes place in mid-spring, to prevent further damage to the area's ecosystem and harm to the managers who administer the fires. Manual control methods can kill or reduce the populations of non-native species. Mechanical controls are sometimes effective and generally do not attract public criticism; instead, they can often raise awareness and build public interest in, and support for, controlling invasive species.

Chemical control: Chemicals such as pesticides (e.g. DDT) and herbicides can be used to eradicate invasive species. Though chemicals may be effective at eliminating target species, they often create health hazards for non-target species and humans. Chemical control is therefore generally problematic when, for example, rare species are present in the area.

Biological control: This is a method in which organisms are used to control invasive species. One common strategy is to introduce a natural enemy of the invasive species into an area, with the aim of establishing the enemy so that it drives the invasive species' population down to a contracted range. One major complication with the biological method is that the introduction of an enemy species, which is itself in a sense an invasion, can sometimes affect non-target species negatively. This method has been criticized, for example when species in conservation areas have been affected or even driven to extinction by biocontrol species.

Restoration of ecosystems: Restoring ecosystems after the eradication of invasive species can build resilience against future introductions.
To some degree, ecological niche models predict contraction of some species' ranges. If the models are reasonably accurate, this creates opportunities for managers to alter the composition of native species to build resilience against future invasions.

Forecasting: Climate models can, to some extent, be used to project future range shifts of invasive species. Since the future climate itself cannot be determined, these models are often very limited. However, they can still be used by managers as indicators of general range shifts when planning management strategies.

Genetic control: New technology has presented a potential solution for invasive species management: genetic control. One form of genetic pest management targets the mating behavior of pests to introduce harm-reducing genetically engineered DNA into wild populations. Though not yet widely implemented for invasive species specifically, there is expanding interest in using genetic pest management for invasive species control. The induction of triploidy can also be used to manage invasive species, through the production of sterile males that biologically control the pest. Similarly, the Trojan Y technique uses sex markers and aims to bias the sex ratio of a population, typically of fish, towards males in order to drive the population to extinction. It specifically uses sex-reversed females carrying two Y chromosomes, known as "Trojan Y" females, to reduce breeding success in the population. A counterpart, the Trojan Female technique, releases "Trojan females" carrying mitochondrial DNA mutations that reduce female, rather than male, fertility. Gene drives are another technique for suppressing pest populations.

Predictions

The geographical ranges of invasive alien species, such as the brook trout (Salvelinus fontinalis), are expected to shift due to climate change.
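A minimal simulation can illustrate why biasing the sex ratio with Trojan Y females suppresses a population: stocked YY "females" compete for matings but produce only male offspring, so the share of matings that yield daughters shrinks each generation. This is a deliberately simplified model, assuming a fixed recruitment cap and constant restocking; none of the numbers come from a real stocking program.

```python
# Toy Trojan Y dynamics: YY "Trojan" females produce only male offspring.
# Fixed recruitment cap K and constant restocking are illustrative assumptions.

def simulate(generations, K=1000, females=500, stocked_trojans=600):
    """Track the number of normal breeding females per generation."""
    history = [females]
    for _ in range(generations):
        breeders = females + stocked_trojans  # Trojans restocked every generation
        if breeders == 0:
            history.append(0)
            continue
        # K caps total recruits; only matings with normal XX females
        # produce daughters (half their offspring), Trojan matings produce none
        females = int(K * (females / breeders) * 0.5)
        history.append(females)
    return history

trajectory = simulate(40)
print(trajectory[:6])   # female numbers fall generation by generation
print(trajectory[-1])   # breeding females are eventually driven to zero
```

In this model the female count declines whenever the stocked Trojan fraction is large enough, which is the qualitative mechanism behind driving the population to extinction.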
To forecast the future impact of climate change on the distribution of invasive species, there is ongoing research in modelling. These bioclimatic models, also known as ecological niche models or climate envelope models, aim to predict changes in species ranges and are an essential tool for developing effective management strategies and actions (e.g. eradication of invasive species and prevention of introduction) to reduce the future impact of invasive species on ecosystems and biodiversity. The models generally combine current distributions of species with predicted changes in climate to forecast future range shifts. Many species' ranges are predicted to expand. Yet studies also predict contractions of many species' future ranges, especially for vertebrates and plants at a large spatial scale. One possible reason for range contractions is that ranges shifting under climate change generally move poleward, so that at some point they reach the sea, which acts as a barrier to further spread. This is, however, the case only when some phases of the invasion pathway, e.g. transport and introduction, are not considered in the models. Most studies investigate predicted range shifts in terms of the actual spread and establishment phases of the invasion pathway, excluding the phases of transportation and introduction. Models have also investigated the impact of invasive species on local climate change, for example the acceleration of wetland expansion as a result of the loss of forest canopy. These models are useful for making predictions but remain quite limited: range shifts of invasive species are complex and difficult to predict because of the many variables affecting the invasion pathway.
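The basic logic of a climate envelope model can be sketched in a few lines: a species' known occurrences define a climatic envelope, and a future climate grid is reclassified against that envelope to project a range shift. The one-dimensional "transect", the temperatures and the uniform +2 °C warming below are invented illustrative values, not output of any published model.

```python
# Minimal climate-envelope sketch: suitability = temperature inside the
# min-max envelope of the temperatures at known occurrence sites.

def envelope(occurrence_temps):
    return min(occurrence_temps), max(occurrence_temps)

def suitable_cells(cell_temps, env):
    lo, hi = env
    return [i for i, t in enumerate(cell_temps) if lo <= t <= hi]

# Mean annual temperature along a south-to-north transect (cell 0 = south).
current = [18, 16, 14, 12, 10, 8, 6, 4, 2, 0]
env = envelope([9, 11, 13, 15])   # temperatures at known occurrences

now = suitable_cells(current, env)
future = suitable_cells([t + 2 for t in current], env)  # uniform +2 degC

print("current range:", now)     # -> [2, 3, 4]
print("projected range:", future)  # -> [3, 4, 5]  (shifted poleward)
```

Even this toy version reproduces the behaviour discussed in the text: under warming the suitable band moves poleward, and a range bounded by a coastline at the poleward end would contract rather than merely shift.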
Climate change, the most fundamental parameter in these models, cannot be determined since future levels of greenhouse gas emissions are uncertain. Additionally, climate variables that are directly linked to greenhouse gas emissions, such as alterations in temperature and precipitation, are likewise difficult to predict with certainty. How species' range shifts will respond to changes in climate, e.g. temperature and precipitation, is therefore largely unknown and very complex to understand and predict. Other factors that can limit range shifts, but that models often do not consider, include the presence of suitable habitat for the invading species and the availability of resources. The accuracy of these models is thus unknown, but they can to some extent be used as indicators that highlight and identify future hotspots for invasions at a larger scale. These hotspots could, for example, be summarized into risk maps that highlight areas with high suitability for invaders. This would be a beneficial tool for developing management and prevention strategies and for controlling spread.
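A risk map of the kind described above can be assembled by thresholding per-cell suitability scores for several candidate invaders and counting how many species could establish in each cell. The species, scores and threshold below are invented purely for illustration.

```python
# Sketch of a multi-species invasion risk map: cells where many
# candidate invaders exceed a suitability threshold are "hotspots".

suitability = {
    "species_a": [0.1, 0.4, 0.8, 0.9, 0.3],
    "species_b": [0.2, 0.7, 0.9, 0.6, 0.1],
    "species_c": [0.0, 0.3, 0.7, 0.8, 0.9],
}
THRESHOLD = 0.5  # arbitrary cut-off for "suitable"

hotspots = [
    sum(scores[cell] > THRESHOLD for scores in suitability.values())
    for cell in range(5)
]
print(hotspots)  # -> [0, 1, 3, 3, 1]: cells 2 and 3 are hotspots
```

Managers could then prioritise surveillance and prevention in the cells with the highest counts, which is the intended use of such risk maps.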
https://en.wikipedia.org/wiki/Society%20of%20Women%20Engineers
The Society of Women Engineers (SWE) is an international not-for-profit educational and service organization. Founded in 1950 and headquartered in the United States, the Society of Women Engineers is a major advocate for women in engineering and technology. SWE has over 47,000 members in nearly 100 professional sections, 300 collegiate sections, and 60 global affiliate groups throughout the world. Antecedents The SWE archives contain a series of letters from the Elsie Eaves Papers (bequeathed to the Society), which document the origins of the Society in the early 20th century. In 1919, a group of women at the University of Colorado helped establish a small community of women with an engineering or science background, called the American Society of Women Engineers and Architects. While this organization was only recognized within the campus community, it set the foundation for the development of the international Society of Women Engineers. This group included Lou Alta Melton, Hilda Counts, and Elsie Eaves. These women wrote letters to engineering schools across the nation asking for information on female engineering students and graduates. They received responses from 139 women throughout 23 universities. They also received many negative responses from schools that did not admit women into their engineering programs. From the University of North Carolina, Thorndike Saville, Associate Professor of Sanitary Engineering, wrote: "I would state that we have not now, have never had, and do not expect to have in the near future, any women students registered in our engineering department." Some responses were generally supportive of women in engineering, but not of a separate society in particular. North Carolina State University's Dean Blake R. Van Leer felt differently and encouraged future SWE president Katharine Stinson to be one of the first women to enroll, and his daughter Maryly would later open a branch.
Many of the women contacted as a result of the inquiries wrote about their support for such an organization. Besides the Hazel Quick letter from Michigan, there was a reply from Alice Goff, expressing her support of the idea of a society for women in engineering and architecture: "Undoubtedly an organization of such a nature would be of great benefit to all members, especially to those just entering the profession." The women in Michigan organized a group in 1914 called the T-Square Society. Although it was not clear if this group was a business, honorary, or social organization, it was proposed as a safe space for women to collaborate and share their ideas comfortably. History Though the Society of Women Engineers did not become a formal organization until 1950, its origins date to the late 1940s, when shortages of men due to World War II provided new opportunities for women to pursue employment in engineering. Female student groups at Drexel Institute of Technology in Philadelphia and at Cooper Union and City College of New York in New York City began forming local meetings and networking activities. On April 3, 1949, seventy students attended a conference at Drexel to start organizing. These seventy students traveled from 19 universities. National vice president Maryly Van Leer Peck and her mother, Ella Lillian Wall Van Leer, a women's rights activist, became heavily involved early on. They opened one of the first branches at Georgia Tech after her father, Blake R. Van Leer, successfully lobbied to allow women to attend. Blake also encouraged numerous women to join his engineering program while at NC State. The Van Leers would actively support the organization throughout their lives.
On the weekend of May 27–28, 1950, about fifty women representing the four original districts of the Society of Women Engineers – New York City, Philadelphia, Washington, D.C., and Boston – met for the first national meeting at The Cooper Union's Green Engineering Camp in northern New Jersey. During this first meeting, the society elected the first president of SWE, Beatrice Hicks. The first official annual meeting was held in 1951, in New York City. In 1957, the USSR launched Sputnik and widespread interest in technological research and development intensified. This caused many engineering schools to begin admitting women. Membership in SWE doubled to 1,200 and SWE moved its headquarters to the United Engineering Center, in New York City. Over the next decade, an increasing number of young women chose engineering as a profession, but few were able to rise to management-level positions. SWE inaugurated a series of conferences (dubbed the Henniker Conferences after the meeting site in New Hampshire) on the status of women in engineering, and in 1973 signed an agreement with the National Society of Professional Engineers in hopes of recruiting a larger percentage of working women and students to its ranks. At the same time, SWE increasingly became involved in the spirit and activities of the larger women's movement. In 1972, a number of representatives from women's scientific and technical committees and societies (including SWE) met to form an alliance and discuss equity for women in science and engineering. This inaugural meeting eventually led to the formation of the Business and Professional Women's Foundation (BPWF). In addition, SWE's council resolved in 1973 to endorse ratification of the Equal Rights Amendment, and a few years later, resolved not to hold national conventions in non-ERA-ratified states. The Equal Rights Amendment was first proposed by Alice Paul in the 1920s, after women gained the right to vote, and still has not been ratified to this day. 
By 1982, the Society had swelled to 13,000 graduate and student members spread out over 250 sections across the country. The Council of Section Representatives, which in partnership with an Executive Committee had governed the Society since 1959, had become so large that SWE adopted a regionalization plan designed to bring the leadership closer to the membership. Today, SWE has over 47,000 collegiate and professional members and continues its mission as a 501(c)(3) non-profit educational service organization. The Society of Women Engineers organization exists today largely because the gender balance in engineering does not proportionally reflect population breakdowns of men and women. Encouragement of female students and promotion of engineering as a field of study for women is a necessary and fundamental function of the organization. Engineering and related fields are heavily male-dominated, in part because of gender socialization and artificially reinforced gender norms. Theories such as the STEM pipeline seek to understand and balance how different science, math, and engineering fields tend to over- or under-represent different groups of people in the United States. The Van Leer family still supports the organization to this day. Mission The SWE's mission statement is to "Empower women to achieve full potential in careers as engineers and leaders, expand the image of the engineering and technology professions as a positive force in improving the quality of life, and demonstrate the value of diversity and inclusion." The organization is open to every gender and background in an effort to support and promote diversity. The Society of Women Engineers awards multiple scholarships each year to women in undergraduate and graduate STEM degree programs. In 2019, SWE disbursed nearly 260 new and renewed scholarships valued at more than $810,000. Because the Society is a not-for-profit organization, its scholarships are funded by private donations and corporate sponsors. 
SWE's CEO and executive director Karen Horting stated that SWE "could not have such a successful program without our corporate and foundation partners and generous individuals who support our scholarships, and our hope is to continue to grow the program and provide financial resources to those studying for a career in engineering and technology." Archives While developing the Society, the organizers assembled masses of information into archives. A committee for these archives was established in 1953, and the Society's archives themselves were established in 1957 by the Archives Committee, whose members voluntarily collected and maintained the Society's records. The archives are currently located at the Walter P. Reuther Library at Wayne State University in Detroit, Michigan. Some of the media includes information about a short-lived society involving both architects and engineers from 1920. The archives detail the history and creation of SWE as an organization and the history of women's involvement in engineering. Due to these collections of women's work on scientific projects, the archives show an alternate perspective on events such as the Space Race, which have traditionally been viewed as male-dominated endeavors but depended on the contributions of many women as well. In 1993, SWE designated the Walter P. Reuther Library as the official repository of its historical materials. Located within the Carey C. Shuart Women's Archive and Research Collection, the records of the Houston Area Section of the Society of Women Engineers contain correspondence, business and financial records, photographs, and publications of the organization. Current work SWE offers support at all levels, from K-12 outreach programs and collegiate sections to professional development in the workplace. Programs are in place to help collegiate and professional members interact with their local communities. Conferences The Society of Women Engineers is organized at the local and Society levels.
SWE hosts annual We Local regional events across the world. These events connect members in all stages of their careers and offer similar programming to the larger annual conference. SWE hosts one annual conference in a different location each year. As of 2024, the annual conference registration was over 20,000 attendees for the three-day conference, making it the largest event of its kind. This conference includes professional development workshops, inspirational speakers, and a career fair. Outreach Every year, SWE sections all over the world host events to help female students in primary and secondary school to understand and explore the possibilities of engineering as future careers. In the 2022 financial year, over 200 outreach events took place across the globe, impacting over 10,000 girls. SWE sponsors SWENext, a way for students aged 5–18 to join the SWE community. SWENext hosts many events including the Invent It. Build It. competition at the annual conference. In 2018, SWE launched a podcast series called SWE’s Diverse Podcasts, which focuses on women in engineering and technology. Awards SWE Achievement Award In 1952, the Society of Women Engineers conferred its first and highest tribute, the Achievement Award, to Dr. Mária Telkes for her “meritorious contributions to the utilization of solar energy”. The Achievement Award has been presented annually since then and recognizes a woman engineer for outstanding contributions over a significant period of time in any field of engineering. Additional notable Achievement Award recipients include: Elise Harmon, Rebecca Sparling and Frances Arnold.
Mission Awards recognize groups working to further the SWE mission. Awards are also offered for Collegiate Competitions and the K12 SWENext community. In 2021, SWE started organizing a virtual awards ceremony called The WE Awards Hall. In 2023, WE Local Awards and SWE's Individual Awards will merge to create two new categories: SWE Awards Program and SWE Recognition Program. SWE Awards Program These awards celebrate high levels of achievement among those who identify as women and allies in engineering, engineering technology, or science related to engineering at all career stages, separated by tracks and years of experience. SWE Recognition Program This awards category acknowledges additional achievements by sections/affiliates, those who identify as women, and allies in engineering, engineering technology, or science related to engineering. Publications In 1951, only a year after the society was first established, SWE began publishing the Journal of the Society of Women Engineers, which included both technical articles and society news. In 1954, the journal was superseded by the SWE Newsletter, a magazine format that focused primarily on SWE and industry news. In 1980, it was again renamed, this time to US Woman Engineer. In 1993, the title was changed yet again to SWE, which remains its current periodical title, with the subtitle "magazine of the Society of Women Engineers". The fifth volume of SWE was published in 2011 to celebrate the society’s 60th anniversary and to explore SWE's history in more depth using its archives. The current SWE Magazine is published five times per year. Past presidents Notable historical members See also Engineering Glossary of engineering Engineering ethics Sources SWE History Allaback, Sarah, "The First American Women Architects", University of Illinois Press, 2008, p.
34. Kindya, Marta Navia, "Four Decades of The Society of Women Engineers", Society of Women Engineers (1990) (ASIN: B0006E93SA) External links Engineering and Technology History Wiki SWE_Awards
https://en.wikipedia.org/wiki/Hantavirus%20vaccine
Hantavirus vaccine is a vaccine that protects humans against hantavirus infections causing hantavirus hemorrhagic fever with renal syndrome (HFRS) or hantavirus pulmonary syndrome (HPS). The vaccine is considered important as acute hantavirus infections are responsible for significant morbidity and mortality worldwide. It is estimated that about 1.5 million cases and 46,000 deaths occurred in China from 1950 to 2007. The number of cases is estimated at 32,000 in Finland from 2005 to 2010 and 90,000 in Russia from 1996 to 2006. The first hantavirus vaccine was developed in 1990, initially for use against the Hantaan River virus, which causes one of the most severe forms of HFRS. It is estimated that about two million doses of rodent brain or cell-culture derived vaccine are given in China every year. The wide use of this vaccine may be partly responsible for a significant decrease in the number of HFRS cases in China to fewer than 20,000 by 2007. Other hantaviruses for which the vaccine is used include the Seoul virus (SEOV). However, the vaccine is thought not to be effective against European hantaviruses, including the Puumala (PUUV) and Dobrava-Belgrade (DOBV) viruses. The pharmaceutical trade name for the vaccine is Hantavax. As of 2019, no hantavirus vaccine has been approved for use in Europe or the USA. A phase 2 study on a human HTNV/PUUV DNA hantavirus vaccine is ongoing. In addition to Hantavax, three more vaccine candidates have been studied in phase I–II clinical trials. They include a recombinant vaccine and vaccines derived from the HTNV and PUUV viruses. However, their prospects are unclear. See also List of vaccine topics Seoul virus Gou virus Vaccine-naive External links Serang virus strain details Natural reservoirs of hantaviruses CDC's Hantavirus Technical Information Index page Viralzone: Hantavirus Virus Pathogen Database and Analysis Resource (ViPR): Bunyaviridae
https://en.wikipedia.org/wiki/Biological%20functions%20of%20hydrogen%20sulfide
Hydrogen sulfide is produced in small amounts by some cells of the mammalian body and has a number of biological signaling functions. Only two other such gases are currently known: nitric oxide (NO) and carbon monoxide (CO). The gas is produced from cysteine by the enzymes cystathionine beta-synthase and cystathionine gamma-lyase. It acts as a relaxant of smooth muscle and as a vasodilator and is also active in the brain, where it increases the response of the NMDA receptor and facilitates long term potentiation, which is involved in the formation of memory. Eventually the gas is converted to sulfite in the mitochondria by thiosulfate reductase, and the sulfite is further oxidized to thiosulfate and sulfate by sulfite oxidase. The sulfates are excreted in the urine. Due to its effects similar to nitric oxide (without its potential to form peroxides by interacting with superoxide), hydrogen sulfide is now recognized as potentially protecting against cardiovascular disease. The cardioprotective effect of garlic is caused by catabolism of the polysulfide group in allicin to H2S, a reaction that could depend on reduction mediated by glutathione. Though both nitric oxide (NO) and hydrogen sulfide have been shown to relax blood vessels, their mechanisms of action are different: while NO activates the enzyme guanylyl cyclase, H2S activates ATP-sensitive potassium channels in smooth muscle cells. Researchers are not clear how the vessel-relaxing responsibilities are shared between nitric oxide and hydrogen sulfide. However, there exists some evidence to suggest that nitric oxide does most of the vessel-relaxing work in large vessels and hydrogen sulfide is responsible for similar action in smaller blood vessels. Recent findings suggest strong cellular crosstalk of NO and H2S, demonstrating that the vasodilatory effects of these two gases are mutually dependent.
Additionally, H2S reacts with intracellular S-nitrosothiols to form the smallest S-nitrosothiol (HSNO), and a role of hydrogen sulfide in controlling the intracellular S-nitrosothiol pool has been suggested. Like nitric oxide, hydrogen sulfide is involved in the relaxation of smooth muscle that causes erection of the penis, presenting possible new therapy opportunities for erectile dysfunction. Hydrogen sulfide (H2S) deficiency can be detrimental to the vascular function after an acute myocardial infarction (AMI). AMIs can lead to cardiac dysfunction through two distinct changes: increased oxidative stress via free radical accumulation and decreased NO bioavailability. Free radical accumulation occurs due to increased electron transport uncoupling at the active site of endothelial nitric oxide synthase (eNOS), an enzyme involved in converting L-arginine to NO. During an AMI, oxidative degradation of tetrahydrobiopterin (BH4), a cofactor in NO production, limits BH4 availability and limits NO production by eNOS. Instead, eNOS reacts with oxygen, another cosubstrate involved in NO production. The products of eNOS are reduced to superoxides, increasing free radical production and oxidative stress within the cells. An H2S deficiency impairs eNOS activity by limiting Akt activation and inhibiting Akt phosphorylation of the eNOSS1177 activation site. Instead, Akt activity is increased to phosphorylate the eNOST495 inhibition site, downregulating eNOS production of NO. H2S therapy uses a donor, such as diallyl trisulfide (DATS), to increase the supply of H2S to an AMI patient. H2S donors reduce myocardial injury and reperfusion complications. Increased H2S levels within the body will react with oxygen to produce sulfane sulfur, a storage intermediate for H2S. Sulfane sulfur pools in the body attract oxygen to react with excess H2S and eNOS to increase NO production.
With increased use of oxygen to produce more NO, less oxygen is available to react with eNOS to produce superoxides during an AMI, ultimately lowering the accumulation of reactive oxygen species (ROS). Furthermore, decreased accumulation of ROS lowers oxidative stress in vascular smooth muscle cells, decreasing oxidative degeneration of BH4. Increased BH4 cofactor contributes to increased production of NO within the body. Higher concentrations of H2S directly increase eNOS activity through Akt activation to increase phosphorylation of the eNOSS1177 activation site, and decrease phosphorylation of the eNOST495 inhibition site. This phosphorylation process upregulates eNOS activity, catalyzing more conversion of L-arginine to NO. Increased NO production enables soluble guanylyl cyclase (sGC) activity, leading to an increased conversion of guanosine triphosphate (GTP) to 3’,5’-cyclic guanosine monophosphate (cGMP). In H2S therapy immediately following an AMI, increased cGMP triggers an increase in protein kinase G (PKG) activity. PKG reduces intracellular Ca2+ in vascular smooth muscle to increase smooth muscle relaxation and promote blood flow. PKG also limits smooth muscle cell proliferation, reducing intima thickening following AMI injury, ultimately decreasing myocardial infarct size. In a certain rat model of Parkinson's disease, the brain's hydrogen sulfide concentration was found to be reduced, and administering hydrogen sulfide alleviated the condition. In trisomy 21 (Down syndrome) the body produces an excess of hydrogen sulfide. Hydrogen sulfide is also involved in the disease process of type 1 diabetes. The beta cells of the pancreas in type 1 diabetes produce an excess of the gas, leading to the death of these cells and to a reduced production of insulin by those that remain. In 2005, it was shown that mice can be put into a state of suspended animation-like hypothermia by applying a low dosage of hydrogen sulfide (81 ppm H2S) in the air.
The breathing rate of the animals sank from 120 to 10 breaths per minute and their temperature fell from 37 °C to just 2 °C above ambient temperature (in effect, they had become cold-blooded). The mice survived this procedure for 6 hours and afterwards showed no negative health consequences. In 2006, it was shown that the blood pressure of mice treated in this fashion with hydrogen sulfide did not significantly decrease. A similar process known as hibernation occurs naturally in many mammals and also in toads, but not in mice. (Mice can fall into a state called clinical torpor when food shortage occurs.) If the H2S-induced hibernation can be made to work in humans, it could be useful in the emergency management of severely injured patients, and in the conservation of donated organs. In 2008, hypothermia induced by hydrogen sulfide for 48 hours was shown to reduce the extent of brain damage caused by experimental stroke in rats. As mentioned above, hydrogen sulfide binds to cytochrome oxidase and thereby prevents oxygen from binding, which leads to a dramatic slowdown of metabolism. Animals and humans naturally produce some hydrogen sulfide in their body; researchers have proposed that the gas is used to regulate metabolic activity and body temperature, which would explain the above findings. Two recent studies cast doubt that the effect can be achieved in larger mammals. A 2008 study failed to reproduce the effect in pigs, concluding that the effects seen in mice were not present in larger mammals. Likewise, a paper by Haouzi et al. noted that there was no induction of hypometabolism in sheep, either. At the February 2010 TED conference, Mark Roth announced that hydrogen sulfide-induced hypothermia in humans had completed Phase I clinical trials. The clinical trials commissioned by the company he helped found, Ikaria, were however withdrawn or terminated by August 2011.
https://en.wikipedia.org/wiki/Kamptulicon
Kamptulicon, whose name was derived from the Greek kamptos ("flexible") + oulos ("thick"), was a floor covering made from powdered cork and natural rubber. Patented by Elijah Galloway in 1843, kamptulicon was first launched in public at the 1862 International Exhibition in London, where it caused a sensation. Its promoters compared it to thick, soft leather, and lauded its ease of cleaning, water resistance, warmth, and sound-deadening qualities. Critics, however, pointed out that its grey-brown colour was unattractive. Attempts were made to brighten it up by stencilling patterns on it with oil paint, but these suffered from a lack of durability. Kamptulicon was manufactured by sprinkling powdered cork on to thin bands of rubber; the mixture was then rolled and rerolled until thoroughly blended. It was then coated on one or both sides with linseed oil varnish or oil paint. Powdered sulphur was also sometimes mixed in, and the material then heated to produce a form of vulcanized kamptulicon. As well as a floor covering, kamptulicon was also used for cushions in stamping presses and as polishing wheels for metals. Within a few years, faced by stiff competition from the manufacturers of oilcloth coupled with huge increases in the price of natural rubber, kamptulicon production ceased. See also Linoleum
https://en.wikipedia.org/wiki/WaterGAP
The global freshwater model WaterGAP calculates flows and storages of water on all continents of the globe (except Antarctica), taking into account the human influence on the natural freshwater system by water abstractions and dams. It supports understanding the freshwater situation across the world's river basins during the 20th and the 21st centuries, and is applied to assess water scarcity, droughts and floods and to quantify the impact of human actions on e.g. groundwater, wetlands, streamflow and sea-level rise. Modelling results of WaterGAP have contributed to international assessments of the global environmental situation including the UN World Water Development Reports, the Millennium Ecosystem Assessment, the UN Global Environmental Outlooks as well as to reports of the Intergovernmental Panel on Climate Change. WaterGAP contributes to the Intersectoral Impact Model Intercomparison Project ISIMIP, where consistent ensembles of model runs by a number of global hydrological models are generated to assess the impact of climate change and other anthropogenic stressors on freshwater resources world-wide. WaterGAP (Water Global Assessment and Prognosis) has been developed at the University of Kassel (Germany) since 1996, while development later continued at Goethe University Frankfurt and Ruhr University Bochum. It consists of both the WaterGAP Global Hydrology Model (WGHM) and five water use models for the sectors irrigation, livestock, households, manufacturing and cooling of thermal power plants. An additional model component computes the fractions of total water use that are abstracted from either groundwater or surface waters (rivers, lakes and reservoirs). The model runs with a temporal resolution of 1 day; WaterGAP 2 has a spatial resolution of 0.5 degree geographical latitude × 0.5 degree geographical longitude (equivalent to 55 km × 55 km at the equator) and WaterGAP 3 a spatial resolution of 5 arc minutes × 5 arc minutes (9 km × 9 km).
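The quoted grid-size equivalences follow directly from Earth's circumference: at the equator, one degree of longitude spans roughly 40,000 km / 360. A quick check (using the mean Earth radius, an approximation):

```python
import math

# Verify that a 0.5 degree cell is ~55 km and a 5 arcmin cell ~9 km
# at the equator, as quoted in the text above.
EARTH_RADIUS_KM = 6371.0  # mean radius
km_per_degree = 2 * math.pi * EARTH_RADIUS_KM / 360  # ~111.2 km

print(f"0.5 degree cell: {0.5 * km_per_degree:.1f} km")      # ~55.6 km
print(f"5 arcmin cell:   {5 / 60 * km_per_degree:.1f} km")   # ~9.3 km
```

Away from the equator the east-west extent shrinks with the cosine of latitude, which is why the text qualifies the cell sizes with "at the equator".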
Model input includes time series of climate data (e.g. precipitation, temperature and radiation) and information such as characteristics of surface water bodies (lakes, reservoirs and wetlands), land cover, soil type, topography and irrigated area. WaterGAP Global Hydrology Model WGHM WGHM computes time-series of fast-surface and subsurface runoff, groundwater recharge and river discharge as well as storage variations of water in canopy, snow, soil, groundwater, lakes, wetlands and rivers. Thus, it quantifies the total renewable water resources as well as the renewable groundwater resources of a grid cell, river basin, or country. Precipitation on each grid cell is transported through the different storage compartments, where water can also evapotranspire. Location and size of wetlands, lakes and reservoirs are defined by the Global Lakes and Wetlands Database (GLWD) and the GRanD database of man-made reservoirs. Groundwater storage is affected by diffuse groundwater recharge through the soil and by point recharge from surface water bodies. Diffuse groundwater recharge is modeled as a function of total runoff, relief, soil texture, hydrogeology and the existence of permafrost or glaciers. Cell runoff is routed downstream until it reaches the ocean or an internal sink. To allow a plausible representation of the actual freshwater situation, version 2.2d of WGHM is tuned against observed long-term mean annual streamflow at 1319 gauging stations. Performance of WGHM with respect to streamflow observations has been compared in various studies to that of other global hydrological models for both Europe and the globe, while performance with respect to GRACE total water storage anomalies was compared globally and for U.S. aquifers.
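The routing step described above ("cell runoff is routed downstream until it reaches the ocean or an internal sink") can be sketched as a flow-accumulation pass over a drainage network. The five-cell "river" and runoff values below are invented for illustration and are not WaterGAP data or code.

```python
# Toy flow accumulation: each cell's runoff is carried down the
# drainage network, so discharge at a cell is the accumulated runoff
# of that cell plus everything upstream of it.

downstream = {0: 2, 1: 2, 2: 3, 3: 4, 4: None}  # cell 4 drains to the ocean
runoff = {0: 1.0, 1: 0.5, 2: 2.0, 3: 0.25, 4: 0.25}

discharge = {cell: 0.0 for cell in runoff}
for cell in runoff:
    node = cell
    while node is not None:          # carry this cell's runoff downstream
        discharge[node] += runoff[cell]
        node = downstream[node]

print(discharge[4])  # -> 4.0 (outlet discharge equals total runoff)
```

In a real model the routing additionally accounts for travel time, evaporation from rivers and lakes, and abstractions along the way, but the accumulation principle is the same.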
Water Use Models

In WaterGAP, modeling of water use refers to the computation of water withdrawals (abstractions) from either groundwater or surface water bodies (lakes, reservoirs and rivers), of consumptive water use (the fraction of the abstracted water that evapotranspires during use) and of the return flows to groundwater or surface water bodies. Consumptive irrigation water use is computed by the Global Irrigation Model as a function of irrigated area and climate in each grid cell. Livestock water use is calculated as a function of animal numbers and the water requirements of different livestock types. Domestic and manufacturing use are based on national values of water withdrawals at different points in time. The temporal development of national household water use is based on statistical data modeled as a function of technological and structural change (the latter as a function of gross domestic product), taking into account population change. The temporal development of manufacturing water use takes into account technological change and the development of manufacturing gross value added. National values of domestic and manufacturing water use are downscaled to the grid cells using population density and urban population density, respectively. Water use for cooling of thermal power plants takes into account the location and characteristics of thermal power plants. Time series of monthly values of irrigation water use are computed, while all other uses are assumed to be constant throughout the year and to vary only from year to year. Based on the sectoral water withdrawals and consumptive use computed by the five water use models, the model component GWSWUSE calculates the abstractions from and return flows to groundwater and surface water, as well as the total net abstraction from groundwater and from surface water in each grid cell.
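The GWSWUSE bookkeeping can be illustrated with a toy calculation. The function and the fixed return-flow split below are assumptions for illustration, not WaterGAP's actual parameterization:

```python
def net_abstractions(withdrawal_gw, withdrawal_sw, consumption,
                     return_to_gw_frac=0.2):
    """Illustrative split of sectoral water use into net abstractions.

    withdrawal_gw / withdrawal_sw: water abstracted from groundwater
    and from surface water; consumption: the part that evapotranspires
    during use. The remainder returns to the two compartments in a
    fixed (here: made-up) proportion. Net abstraction from a source is
    what is taken from it minus what flows back to it.
    """
    total_withdrawal = withdrawal_gw + withdrawal_sw
    return_flow = total_withdrawal - consumption
    return_gw = return_to_gw_frac * return_flow
    return_sw = return_flow - return_gw
    net_gw = withdrawal_gw - return_gw
    net_sw = withdrawal_sw - return_sw
    return net_gw, net_sw

# e.g. 30 units withdrawn from groundwater, 70 from surface water,
# 40 consumed: 60 return, of which 12 to groundwater, 48 to surface water
print(net_abstractions(30.0, 70.0, 40.0))  # (18.0, 22.0)
```

Note that the two net abstractions always sum to the consumptive use — water that is withdrawn but not consumed merely moves between compartments.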
Applications

WaterGAP has been applied to assess which areas of the world are and will be affected by water stress, and to estimate the world's freshwater balance. In many studies, WaterGAP has served to estimate the impact of climate change on the global freshwater system, e.g. on groundwater, wetlands, streamflow and irrigation requirements. Groundwater stress and the depletion of groundwater resources have been analyzed. In addition, the alteration of ecologically relevant river flow characteristics and wetland dynamics due to human water use and dams has been studied. Time series of WaterGAP total water storage anomalies have been used to process and interpret GRACE (Gravity Recovery and Climate Experiment) satellite measurements of the Earth's time-variable gravity, as over the continents the seasonal and longer-term gravity changes are to a large extent caused by changes in the water stored in groundwater, surface waters, soil and snow. These time series have also served to estimate the contribution of water storage variations on the continents to sea-level rise. WaterGAP results are also used in life-cycle assessments to take water stress at production sites into account.
https://en.wikipedia.org/wiki/Oxipurinol
Oxipurinol (INN; also oxypurinol, USAN) is an inhibitor of xanthine oxidase. It is an active metabolite of allopurinol and is cleared renally; in cases of renal disease, it can accumulate to toxic levels. By inhibiting xanthine oxidase, oxipurinol reduces uric acid production. High serum uric acid levels may result in gout, kidney stones and other medical conditions.
https://en.wikipedia.org/wiki/Jessen%E2%80%93Wintner%20theorem
In mathematics, the Jessen–Wintner theorem, introduced by Jessen and Wintner, asserts that a random variable of Jessen–Wintner type, meaning the sum of an almost surely convergent series of independent discrete random variables, is of pure type: its distribution is either purely discrete, purely absolutely continuous, or purely singular continuous.
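In symbols, a standard formulation of the result is the following:

```latex
% Jessen--Wintner theorem (standard formulation); "pure type" means
% exactly one of the three possibilities occurs, never a mixture.
\textbf{Theorem (Jessen--Wintner).}
Let $X_1, X_2, \ldots$ be independent discrete random variables and
suppose that $X = \sum_{n=1}^{\infty} X_n$ converges almost surely.
Then the law of $X$ is of \emph{pure type}: it is either purely
discrete, purely absolutely continuous, or purely singular continuous
with respect to Lebesgue measure.
```

The interesting content is the exclusion of mixtures: a priori, the distribution of such an infinite sum could have, say, both an atomic part and an absolutely continuous part, and the theorem rules this out.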
https://en.wikipedia.org/wiki/Bordetella
Bordetella is a genus of small (0.2–0.7 μm), Gram-negative coccobacilli of the phylum Pseudomonadota. Bordetella species, with the exception of B. petrii, are obligate aerobes, as well as highly fastidious, i.e. difficult to culture. All species can infect humans. The first three species to be described (B. pertussis, B. parapertussis, B. bronchiseptica) are sometimes referred to as the 'classical species'. Two of these (B. pertussis and B. bronchiseptica) are also motile. There are about 16 different species of Bordetella, likely descending from ancestors that lived in soil and/or water environments. B. pertussis and occasionally B. parapertussis cause pertussis (whooping cough) in humans, and some B. parapertussis strains colonize only sheep. It has also been known to cause bronchitis in cats and bronchopneumonia in pigs. B. bronchiseptica rarely infects healthy humans, though disease in immunocompromised patients has been reported. B. bronchiseptica causes several diseases in other mammals, including kennel cough in dogs and atrophic rhinitis in pigs. Other members of the genus cause similar diseases in other mammals and in birds (B. hinzii, B. avium). The genus Bordetella is named after Jules Bordet.

Pathogenesis

The three most common species of Bordetella are B. pertussis, B. parapertussis and B. bronchiseptica. These species are known to accumulate in the respiratory tracts of mammals, most familiarly in human infants as the illness known as whooping cough. The species responsible for this illness is B. pertussis, which is found only in humans. Even with extensive vaccination research on B. pertussis, whooping cough is still considered endemic in many countries. Because B. pertussis is found only in humans and shows little genetic variation from the other Bordetella species, it is thought to have diverged from a common ancestor relatively recently. B.
parapertussis can affect both humans and other mammals, primarily sheep. Like B. pertussis, it causes whooping cough in infants. However, strains isolated from sheep are clearly distinct from those found in humans, which suggests that the two lineages evolved independently of one another and that there is little to no transmission between the two reservoirs. The species B. bronchiseptica (Gram-negative, aerobic), by contrast, has a broader host range, causing similar symptoms in a wide range of animals while only occasionally affecting humans; these infections often manifest as chronic and asymptomatic respiratory infections. B. bronchiseptica is a small coccobacillus, approximately 0.5 μm in size. It has peritrichous flagella that enable it to be motile. On a petri dish, colonies of this species appear small, grayish-white, smooth and shiny. This species is typically associated with kennel cough (canine infectious respiratory disease, CIRD) in dogs. The most thoroughly studied of the Bordetella species are B. bronchiseptica, B. pertussis and B. parapertussis, and the pathogenesis of respiratory disease caused by these bacteria has been reviewed. Transmission occurs by direct contact, via respiratory aerosol droplets, or via fomites. Bacteria initially adhere to ciliated epithelial cells in the nasopharynx, and this interaction with epithelial cells is mediated by a series of protein adhesins. These include filamentous haemagglutinin, pertactin, fimbriae and pertussis toxin (though expression of pertussis toxin is unique to B. pertussis). As well as assisting in adherence to epithelial cells, some of these are also involved in attachment to immune effector cells.
The initial catarrhal phase of infection produces symptoms similar to those of the common cold, and during this period, large numbers of bacteria can be recovered from the pharynx. Thereafter, the bacteria proliferate and spread further into the respiratory tract, where the secretion of toxins causes ciliostasis and facilitates the entry of bacteria into tracheal/bronchial ciliated cells. One of the first toxins to be expressed is tracheal cytotoxin, a disaccharide-tetrapeptide derived from peptidoglycan. Unlike most other Bordetella toxins, tracheal cytotoxin is expressed constitutively, being a normal product of the breakdown of the bacterial cell wall. Other bacteria recycle this molecule back into the cytoplasm, but in Bordetella and Neisseria gonorrhoeae, it is released into the environment. Tracheal cytotoxin on its own can reproduce the paralysis of the ciliary escalator, inhibit DNA synthesis in epithelial cells and ultimately kill them. One of the most important of the regulated toxins is adenylate cyclase toxin, which aids in the evasion of innate immunity. The toxin is delivered to phagocytic immune cells upon contact; immune cell functions are then inhibited in part by the resulting accumulation of cyclic AMP. Recently discovered activities of adenylate cyclase toxin, including transmembrane pore formation and stimulation of calcium influx, may also contribute to the intoxication of phagocytes.

Virulence factors

The virulence factors identified in Bordetella are common to all three species. These include adhesins, such as filamentous hemagglutinin (FHA), pertactin, tracheal colonization factor and fimbriae, and toxins, such as adenylate cyclase-hemolysin, dermonecrotic toxin and tracheal cytotoxin. Expression of these factors is most often regulated by environmental stimuli. Differences in virulence factors relate to the loss of regulatory or control functions. Bordetella sp.
is typically found living within the host's respiratory tract and immune system and can transmit to new hosts. Bordetella pertussis also affects human adults, and even with 85% vaccination coverage, over 160,000 related deaths occur each year around the globe. There are few antimicrobial susceptibility testing methods, and no change or progress had been reported as of 2018. Most studies performed using Bordetella vaccines have many flaws and fail to come to an official conclusion.

Regulation of virulence factor expression

The expression of many Bordetella adhesins and toxins is controlled by the two-component regulatory system BvgAS. Much of what is known about this regulatory system is based on work with B. bronchiseptica, but BvgAS is present in B. pertussis, B. parapertussis and B. bronchiseptica and is responsible for phase variation, or phenotypic modulation. BvgS is a plasma membrane-bound sensor kinase that responds to stimulation by phosphorylating a cytoplasmic helix-turn-helix-containing protein, BvgA. When phosphorylated, BvgA has increased affinity for specific binding sites in Bvg-activated promoter sequences and is able to promote transcription in in vitro assays. Most of the toxins and adhesins under BvgAS control are expressed under Bvg+ conditions (high BvgA-Pi concentration). However, some genes are expressed solely in the Bvg− state, most notably the flagellin gene flaA. The regulation of Bvg-repressed genes is mediated by the product of a 624-bp open reading frame downstream of BvgA, the so-called Bvg-activated repressor protein, BvgR. BvgR binds to a consensus sequence present within the coding sequences of at least some Bvg-repressed genes; binding of this protein to the consensus sequence represses gene expression by reducing transcription.
It is not known what the physiological signals for BvgS are, but in vitro BvgAS can be inactivated by millimolar concentrations of magnesium sulfate or nicotinic acid, or by reducing the incubation temperature to ≤ 26 °C. The identification of a specific point mutation in the BvgS gene that locks B. bronchiseptica in an intermediate Bvg phase revealed a class of BvgAS-regulated genes that are exclusively transcribed at intermediate concentrations of BvgA-Pi. This intermediate (Bvgi) phenotype can be reproduced in wild-type B. bronchiseptica by growing the bacteria in a medium containing intermediate concentrations of the BvgAS modulator nicotinic acid. Under these conditions, some, but not all, of the virulence factors associated with the Bvg+ phase are expressed, suggesting this two-component regulatory system can give rise to a continuum of phenotypic states in response to the environment.

Vaccines

The Bordetella vaccine is non-essential, but highly recommended for dogs, especially if they are expected to come into contact with other dogs at dog parks, boarding facilities, dog shows, training classes, etc. In fact, it can be required for entry at certain facilities. The vaccine can also be given to cats, but this is less commonly done because infection appears to be uncommon in adult cats. However, it may be advisable to vaccinate a kitten in a high-risk environment (e.g. living with multiple other cats). The Bordetella vaccine specifically targets Bordetella bronchiseptica, the species typically responsible for kennel cough. The vaccine introduces the bacteria (live or killed) to the body in order to develop immunity. It is important to remember that the vaccine only protects against one species of Bordetella; it is therefore possible for a pet to become infected with another Bordetella species, or to contract kennel cough from another source such as the parainfluenza virus, even after being vaccinated for B. bronchiseptica.
The Bordetella vaccine is also only about 70% effective. There are three licensed ways to deliver the Bordetella vaccine to dogs: orally, intranasally and subcutaneously. The first two methods are administered using live bacteria, while the latter uses killed bacteria. A comparative study done in 2013 by the School of Veterinary Medicine in Madison, Wisconsin studied the effectiveness of these three methods by vaccinating beagle puppies. The 40 beagles were divided into four groups: one group to test each of the three methods, plus one unvaccinated control group. After 42 days, the dogs were exposed to Bordetella bronchiseptica. The study determined that the live intranasal Bordetella vaccine was more effective than the killed subcutaneous vaccine, and that the live oral vaccine works as well as the live intranasal vaccine.
https://en.wikipedia.org/wiki/La%20petite%20mort
La petite mort (French for "the little death") is an expression that refers to a brief loss or weakening of consciousness, and in modern usage refers specifically to a post-orgasm sensation likened to death. The first attested use of the expression in English was in 1572, with the meaning of "fainting fit". It later came to mean "nervous spasm" as well. The first attested use with the meaning of "orgasm" was in 1882. In modern usage, the term has generally been interpreted to describe the post-orgasmic state of unconsciousness that certain people perceive after having some sexual experiences. The term does not always apply to sexual experiences. It can also be used when some undesired thing has happened to a person and has affected them so much that "a part of them dies inside". A literary example is found in Thomas Hardy's Tess of the d'Urbervilles, where the phrase describes how Tess feels after she comes across a particularly gruesome omen after meeting with her own rapist: "She felt the petite mort at this unexpectedly gruesome information, and left the solitary man behind her." The term "little death", a direct translation of la petite mort, can also be used in English to essentially the same effect. Specifically, it is defined as "a state or event resembling or prefiguring death; a weakening or loss of consciousness, specifically in sleep or during an orgasm", a nearly identical definition to that of the original French. As with la petite mort, the earlier attested uses are not related to sex or orgasm.

See also: Georges Bataille; sexual headache; Dhat syndrome; post-coital tristesse; postorgasmic illness syndrome.
https://en.wikipedia.org/wiki/Zinc%20finger%20protein%20585b
Zinc finger protein 585B is a protein that in humans is encoded by the ZNF585B gene.
https://en.wikipedia.org/wiki/George%20Weinstock
George M. Weinstock (born February 6, 1949) is an American geneticist and microbiologist on the faculty of The Jackson Laboratory for Genomic Medicine, where he is a professor and the associate director for microbial genomics. Before joining The Jackson Laboratory, he taught at Washington University in St. Louis and served as associate director of The Genome Institute. Previously, Weinstock was co-director of the Human Genome Sequencing Center (HGSC) at Baylor College of Medicine in Houston, Texas, and Professor of Molecular and Human Genetics. He received his B.S. degree from the University of Michigan in 1970 and his Ph.D. from the Massachusetts Institute of Technology in 1977. He has spent most of his career taking genomic approaches to study fundamental biological processes. Weinstock's parents met during the Manhattan Project in Los Alamos, New Mexico, and he grew up meeting many of the participants in the atomic bomb project and their colleagues. He performed his PhD thesis under David Botstein at MIT, studying the structure of phage P22 chromosome. As a postdoctoral fellow with I. R. Lehman at Stanford University School of Medicine, Weinstock and Kevin McEntee discovered that the RecA protein of E. coli catalyzed strand transfer in genetic recombination. Later, as a faculty member at the University of Texas Health Science Center at Houston, he led one of the first bacterial genome projects, collaborating with The Institute for Genomic Research to sequence the entire genome of a bacterium, Treponema pallidum, the organism that causes syphilis. In 1999 he joined Richard Gibbs at the HGSC as one of the five main centers to work on the Human Genome Project. The HGSC produced sequences of human chromosomes 3, 12 and X. 
Weinstock was a principal investigator in projects producing genome sequences for rat, mouse, macaque, bovine, sea urchin, honey bee, fruit fly and many microbial genomes, as well as in one of the first personal genome projects, which sequenced James Watson's genome using next-generation sequencing technology. He was a leader of the Human Microbiome Project, studying the collection of microbes that colonize the human body.

Awards and honors

Fellow, American Association for the Advancement of Science
Fellow, American Academy of Microbiology
Editorial Board, Genome Biology
Editorial Board, Genome Biology and Evolution
Editorial Board, BMC Genomics
https://en.wikipedia.org/wiki/NetLabs
NetLabs was a software company that was founded in 1989 to address management of SNMP and CMOT (CMIP over TCP/IP) devices. CMOT was specified in RFC 1095, which was subsequently obsoleted by RFC 1189. RFC 1147 mentions the company and some of its products in a catalog of network management tools. The company was acquired by Seagate Technology in 1995 as part of its Seagate Software division.

History

The company was founded in Los Angeles, California by Unni Warrier, Anne Lam, Jon Biggar, and Dan Ketcham. Larry Wall, the inventor of the Perl programming language, joined the company. In 1991, the company relocated to Los Altos, California, and a number of employees moved from Los Angeles to the San Francisco Bay Area to continue with the company. Around this time, Unni Warrier and Anne Lam left the company, and Andre Schwager (as CEO) and Roselie Buonauro (head of marketing) joined. After being acquired by Seagate (announced March 20, 1995), the company moved to Cupertino, California. Subsequently, Seagate Software sold off the Network and Storage Management Group to Veritas Software. Veritas in turn sold off some of the software to OpenService, Inc., which later changed its name to LogMatrix.

Products

NetLabs products included:
NerveCenter - network management console with correlation engine
AssetManager - networked computing asset database with auto-discovery
Vision - WYSIWYG network element panel simulator

The network management marketplace during the years before the acquisition included HP (OpenView), Sun (SunNet Manager and Solstice Enterprise Manager), Cabletron (Spectrum) and others. NetLabs licensed software to Sun. It also released a version of its software that could coexist with and augment OpenView instead of directly competing. Ultimately, one of the products, NerveCenter, is still offered by LogMatrix.
https://en.wikipedia.org/wiki/List%20of%20SMTP%20server%20return%20codes
This is a list of Simple Mail Transfer Protocol (SMTP) response status codes. Status codes are issued by a server in response to a client's request made to the server. Unless otherwise stated, all status codes described here are part of the current SMTP standard. The message phrases shown are typical, but any human-readable alternative may be provided.

Basic status code

A "Basic Status Code" SMTP reply consists of a three-digit number (transmitted as three numeric characters) followed by some text. The number is for use by automata (e.g., email clients) to determine what state to enter next; the text ("Text Part") is for the human user. The first digit denotes whether the response is good, bad, or incomplete:

2yz (Positive Completion Reply): The requested action has been successfully completed.
3yz (Positive Intermediate Reply): The command has been accepted, but the requested action is being held in abeyance, pending receipt of further information.
4yz (Transient Negative Completion Reply): The command was not accepted, and the requested action did not occur. However, the error condition is temporary, and the action may be requested again.
5yz (Permanent Negative Completion Reply): The command was not accepted and the requested action did not occur. The SMTP client SHOULD NOT repeat the exact request (in the same sequence).

The second digit encodes responses in specific categories:

x0z (Syntax): These replies refer to syntax errors, syntactically correct commands that do not fit any functional category, and unimplemented or superfluous commands.
x1z (Information): These are replies to requests for information.
x2z (Connections): These are replies referring to the transmission channel.
x3z: Unspecified.
x4z: Unspecified.
x5z (Mail system): These replies indicate the status of the receiver mail system.
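The first digit alone is enough for a client to decide how to proceed. A minimal illustrative sketch (the function name and phrasing are my own, not from any SMTP library):

```python
def reply_category(code: int) -> str:
    """Map an SMTP reply code to the action a client should take,
    based on the first digit as defined by the SMTP standard."""
    first = code // 100
    return {
        2: "positive completion: proceed",
        3: "positive intermediate: send more input",
        4: "transient failure: may retry later",
        5: "permanent failure: do not repeat the request",
    }.get(first, "invalid reply code")

print(reply_category(250))  # positive completion: proceed
print(reply_category(354))  # positive intermediate: send more input
print(reply_category(421))  # transient failure: may retry later
print(reply_category(550))  # permanent failure: do not repeat the request
```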
Enhanced status code

The Basic Status Codes have been in SMTP from its beginnings in 1982, but were extended rather extensively, and haphazardly, so that by 2003 one specification rather grumpily noted: "SMTP suffers some scars from history, most notably the unfortunate damage to the reply code extension mechanism by uncontrolled use." A separate series of enhanced mail system status codes, intended to be better structured, was therefore defined, consisting of three numerical fields separated by ".", as follows:

class "." subject "." detail
class = "2" / "4" / "5"
subject = 1 to 3 digits
detail = 1 to 3 digits

The classes are defined as follows:

2.XXX.XXX Success: Report of a positive delivery action.
4.XXX.XXX Persistent Transient Failure: Message as sent is valid, but persistence of some temporary condition has caused abandonment or delay.
5.XXX.XXX Permanent Failure: Not likely to be resolved by resending the message in current form.

In general the class identifier MUST match the first digit of the Basic Status Code to which it applies. The subjects are defined as follows:

X.0.XXX Other or Undefined Status
X.1.XXX Addressing Status
X.2.XXX Mailbox Status
X.3.XXX Mail System Status
X.4.XXX Network and Routing Status
X.5.XXX Mail Delivery Protocol Status
X.6.XXX Message Content or Media Status
X.7.XXX Security or Policy Status

The meaning of the "detail" field depends on the class and the subject, and is listed in the defining documents. A server capable of replying with an Enhanced Status Code MUST preface (prepend) the Text Part of SMTP server responses with the Enhanced Status Code followed by one or more spaces. For example, the "221 Bye" reply (after the QUIT command) MUST be sent as "221 2.0.0 Bye" instead. The Internet Assigned Numbers Authority (IANA) maintains the official registry of these enhanced status codes.

Common status codes

This section lists some of the more commonly encountered SMTP status codes.
This list is not exhaustive, and the actual text message (outside of the 3-field Enhanced Status Code) might be different.

2yz Positive completion

211 System status, or system help reply
214 Help message (a response to the HELP command)
220 <domain> Service ready
221 <domain> Service closing transmission channel
221 2.0.0 Goodbye
235 2.7.0 Authentication succeeded
240 QUIT
250 Requested mail action okay, completed
251 User not local; will forward
252 Cannot verify the user, but it will try to deliver the message anyway

3yz Positive intermediate

334 (Server challenge - the text part contains the Base64-encoded challenge)
354 Start mail input

4yz Transient negative completion

"Transient negative" means the error condition is temporary, and the action may be requested again. The sender should return to the beginning of the command sequence (if any). The precise meaning of "transient" must be agreed upon between the two different sites (receiver- and sender-SMTP agents). Each reply in this category might have a different time value, but the SMTP client SHOULD try again.

421 Service not available, closing transmission channel (this may be a reply to any command if the service knows it must shut down)
432 4.7.12 A password transition is needed
450 Requested mail action not taken: mailbox unavailable (e.g., mailbox busy or temporarily blocked for policy reasons)
451 Requested action aborted: local error in processing
451 4.4.1 IMAP server unavailable
452 Requested action not taken: insufficient system storage
454 4.7.0 Temporary authentication failure
455 Server unable to accommodate parameters

5yz Permanent negative completion

The SMTP client SHOULD NOT repeat the exact request (in the same sequence). Even some "permanent" error conditions can be corrected, so the human user may want to direct the SMTP client to reinitiate the command sequence by direct action at some point in the future.
500 Syntax error, command unrecognized (this may include errors such as command line too long)
500 5.5.6 Authentication Exchange line is too long
501 Syntax error in parameters or arguments
501 5.5.2 Cannot Base64-decode Client responses
501 5.7.0 Client initiated Authentication Exchange (only when the SASL mechanism specified that client does not begin the authentication exchange)
502 Command not implemented
503 Bad sequence of commands
504 Command parameter is not implemented
504 5.5.4 Unrecognized authentication type
521 Server does not accept mail
523 Encryption Needed
530 5.7.0 Authentication required
534 5.7.9 Authentication mechanism is too weak
535 5.7.8 Authentication credentials invalid
538 5.7.11 Encryption required for requested authentication mechanism
550 Requested action not taken: mailbox unavailable (e.g., mailbox not found, no access, or command rejected for policy reasons)
551 User not local; please try <forward-path>
552 Requested mail action aborted: exceeded storage allocation
553 Requested action not taken: mailbox name not allowed
554 Transaction has failed (or, in the case of a connection-opening response, "No SMTP service here")
554 5.3.4 Message too big for system
556 Domain does not accept mail

Example

Below is an example SMTP connection, where a client "C" is sending to server "S":

S: 220 smtp.example.com ESMTP Postfix
C: HELO relay.example.com
S: 250 smtp.example.com, I am glad to meet you
C: MAIL FROM:<bob@example.com>
S: 250 Ok
C: RCPT TO:<alice@example.com>
S: 250 Ok
C: RCPT TO:<theboss@example.com>
S: 250 Ok
C: DATA
S: 354 End data with <CR><LF>.<CR><LF>
C: From: "Bob Example" <bob@example.com>
C: To: Alice Example <alice@example.com>
C: Cc: theboss@example.com
C: Date: Tue, 15 Jan 2008 16:02:43 -0500
C: Subject: Test message
C:
C: Hello Alice.
C: This is a test message with 5 header fields and 4 lines in the message body.
C: Your friend,
C: Bob
C: .
S: 250 Ok: queued as 12345
C: QUIT
S: 221 Bye
{The server closes the connection}

And below is an example of an SMTP connection in which the SMTP server supports the Enhanced Status Code:

S: 220 dbc.mtview.ca.us SMTP service ready
C: EHLO ymir.claremont.edu
S: 250-dbc.mtview.ca.us says hello
S: 250 ENHANCEDSTATUSCODES
C: MAIL FROM:<ned@ymir.claremont.edu>
S: 250 2.1.0 Originator <ned@ymir.claremont.edu> ok
C: RCPT TO:<mrose@dbc.mtview.ca.us>
S: 250 2.1.5 Recipient <mrose@dbc.mtview.ca.us> ok
C: RCPT TO:<nosuchuser@dbc.mtview.ca.us>
S: 550 5.1.1 Mailbox "nosuchuser" does not exist
C: RCPT TO:<remoteuser@isi.edu>
S: 551-5.7.1 Forwarding to remote hosts disabled
S: 551 5.7.1 Select another host to act as your forwarder
C: DATA
S: 354 Send message, ending in CRLF.CRLF.
...
C: .
S: 250 2.6.0 Message accepted
C: QUIT
S: 221 2.0.0 Goodbye
{The server closes the connection}
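Replies like those in the enhanced session above can be split into basic code, optional enhanced code, and text part. A small illustrative parser — the regex and names are my own, not a standard API:

```python
import re

# basic 3-digit code, then optionally an enhanced code "c.s.d"
# (class 2, 4 or 5), then the human-readable text part.
REPLY_RE = re.compile(
    r"^(?P<basic>\d{3})"
    r"(?:[ -](?P<enh>[245]\.\d{1,3}\.\d{1,3}))?"
    r"[ -]?(?P<text>.*)$"
)

def parse_reply(line: str):
    """Split one SMTP reply line into (basic, enhanced-or-None, text)."""
    m = REPLY_RE.match(line)
    if not m:
        raise ValueError("not an SMTP reply line")
    basic = int(m.group("basic"))
    enh = m.group("enh")
    # Per the enhanced-status-code rules, the class digit must match
    # the first digit of the basic code.
    if enh is not None and int(enh.split(".")[0]) != basic // 100:
        raise ValueError("enhanced class does not match basic code")
    return basic, enh, m.group("text")

print(parse_reply("250 2.1.0 Originator <ned@ymir.claremont.edu> ok"))
# (250, '2.1.0', 'Originator <ned@ymir.claremont.edu> ok')
print(parse_reply("221 Bye"))
# (221, None, 'Bye')
```

Note the ambiguity this sketch glosses over: a text part that happens to begin with digits and dots could be misread as an enhanced code, which is one reason real clients negotiate the ENHANCEDSTATUSCODES extension explicitly.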
https://en.wikipedia.org/wiki/Sergey%20Piletsky
Sergey Piletsky is a professor of bioanalytical chemistry and the research director for the School of Chemistry, University of Leicester, United Kingdom.

Education

Sergey graduated from Kyiv University, Ukraine, obtaining an MSc in chemistry in 1985, and researched the synthesis of polymers selective for nucleic acids, for which he was awarded a PhD in 1991. Cranfield University awarded him a DSc for his work on molecularly imprinted polymers for diagnostic applications.

Awards

Sergey is a recipient of a Royal Society Wolfson Research Merit Award, a Leverhulme Trust Fellowship, a DFG Fellowship from the Institute of Analytical Chemistry, an Award of the President of Ukraine, and a Japan Society for the Promotion of Science and Technology Fellowship.

Research

Sergey's work in molecular imprinting focuses on: (i) the fundamental study of the recognition properties of molecularly imprinted polymers; (ii) the development of sensors and assays for environmental and clinical analysis; and (iii) the development of molecularly imprinted polymer nanoparticles for theranostic applications. Sergey introduced computational design into the field of molecular imprinting, demonstrating non-covalent interaction between the template molecule and polymer through the technique known as 'bite and switch', wherein functional groups first bond non-covalently with the binding site, but during the rebinding step the polymer matrix forms irreversible covalent bonds with the target molecule. A number of research groups around the world follow his ideas in developing functional imprinted polymers for a variety of applications.
Notable publications Surface-grafted molecularly imprinted polymers for protein recognition, A Bossi, SA Piletsky, EV Piletska, PG Righetti, APF Turner, Analytical chemistry 73 (21), 5281-5286 Electrochemical sensor for catechol and dopamine based on a catalytic molecularly imprinted polymer-conducting polymer hybrid recognition element, Dhana Lakshmi, Alessandra Bossi, Michael J Whitcombe, Iva Chianella, Steven A Fowler, Sreenath Subrahmanyam, Elena V Piletska, Sergey A Piletsky, Analytical Chemistry 81 (9), 3576-3584 Piletsky S.A., Turner A.P.F. (2006). New generation of chemical sensors based on molecularly imprinted polymers, in: Molecular imprinting of polymers, S. Piletsky and A.P.F. Turner (eds.), Landes Bioscience, Georgetown, TX, USA Notable patents Rationally Designed Selective Binding Polymers (2010), Publication number: 20100009859, Inventors: Sergey A. Piletsky, Olena Piletska, Khalku Karim, Coulton H. Legge, Sreenath Subrahmanyam Electrochemical Sensor (2019) Publication number: 20210239643, Inventors: Sergey Piletsky, Omar Sheej Ahamad, Alvaro Garcia Cruz Polymerisation method, polymers and uses thereof (2006) Publication number: 20060122288, Inventors: Sergey Piletsky, Olena Piletska, Anthony Turner, Khalku Karim, Beining Chen Methods and Kits for determining binding sites (2020) Publication number: 20200033356, Inventors: Sergey Piletsky, Elena Piletska, Francesco Canfarotta, Don Jones Photoreactor and Process for Preparing MIP Nanoparticles (2014) Publication number: 20140228472, Inventors: Sergey Piletsky, Olena Piletska, Antonio Guerreiro, Michael Whitcombe, Alessandro Poma References External links Year of birth missing (living people) Living people British chemists Alumni of Cranfield University British inventors Molecular modelling Computational biology Computational chemistry Biosensors Receptors Biomimetics Sensors Bioinorganic chemistry Ukrainian expatriates in England Ukrainian chemists 21st-century Ukrainian scientists
Sergey Piletsky
Chemistry,Technology,Engineering,Biology
796
15,063,933
https://en.wikipedia.org/wiki/PRKRIR
52 kDa repressor of the inhibitor of the protein kinase is an enzyme that in humans is encoded by the PRKRIR gene. Interactions PRKRIR has been shown to interact with STK4 and DNAJC3. References Further reading
PRKRIR
Chemistry
52
2,165,654
https://en.wikipedia.org/wiki/Gel%20extraction
In molecular biology, gel extraction or gel isolation is a technique used to isolate a desired fragment of intact DNA from an agarose gel following agarose gel electrophoresis. After extraction, fragments of interest can be mixed, precipitated, and enzymatically ligated together in several simple steps. This process, usually performed on plasmids, is the basis for rudimentary genetic engineering. After DNA samples are run on an agarose gel, extraction involves four basic steps: identifying the fragments of interest, isolating the corresponding bands, isolating the DNA from those bands, and removing the accompanying salts and stain. To begin, UV light is shone on the gel in order to illuminate all the ethidium bromide-stained DNA. Care must be taken to avoid exposing the DNA to mutagenic radiation for longer than absolutely necessary. The desired band is identified and physically removed with a cover slip or razor blade. The removed slice of gel should contain the desired DNA inside. An alternative method, utilizing SYBR Safe DNA gel stain and blue-light illumination, avoids the DNA damage associated with ethidium bromide and UV light. Several strategies for isolating and cleaning the DNA fragment of interest exist. Spin Column Extraction Gel extraction kits are available from several major biotech manufacturers for a final cost of approximately US$1–2 per sample. Protocols included in these kits generally call for the dissolution of the gel slice in 3 volumes of chaotropic agent at 50 °C, followed by application of the solution to a spin column (the DNA remains in the column), a 70% ethanol wash (the DNA remains in the column, salt and impurities are washed out), and elution of the DNA in a small volume (30 μL) of water or buffer. Dialysis The gel fragment is placed in a dialysis tube that is permeable to fluids but impermeable to molecules the size of DNA, thus preventing the DNA from passing through the membrane when soaked in TE buffer. 
An electric field is established around the tubing (in a way similar to gel electrophoresis) long enough so that the DNA is removed from the gel but remains in the tube. The tube solution can then be pipetted out and will contain the desired DNA with minimal background. Traditional The traditional method of gel extraction involves creating a folded pocket of Parafilm wax paper and placing the agarose fragment inside. The agarose is physically compressed with a finger into a corner of the pocket, partially liquefying the gel and its contents. The liquid droplets can then be directed out of the pocket onto an exterior piece of Parafilm, where they are pipetted into a small tube. A butanol extraction removes the ethidium bromide stain, followed by a phenol/chloroform extraction of the cleaned DNA fragment. The disadvantage of gel isolation is that background can only be removed if it can be physically identified under UV light. If two bands are very close together, it can be hard to separate them without some contamination. In order to clearly identify the band of interest, further restriction digests may be necessary. Restriction sites unique to unwanted bands of similar size can aid in breaking up these potential contaminants. References Molecular biology Laboratory techniques
Gel extraction
Chemistry,Biology
677
5,605,137
https://en.wikipedia.org/wiki/Oligomycin
Oligomycins are macrolides created by Streptomyces that are strong antibacterial agents but are often poisonous to other organisms, including humans. Function Oligomycins have use as antibiotics. However, in humans, they have limited or no clinical use due to their toxic effects on mitochondria and ATP synthase. Oligomycin A is an inhibitor of ATP synthase. In oxidative phosphorylation research, it is used to prevent state 3 (phosphorylating) respiration. Oligomycin A inhibits ATP synthase by blocking its proton channel (FO subunit), which is necessary for oxidative phosphorylation of ADP to ATP (energy production). The inhibition of ATP synthesis by oligomycin A will significantly reduce electron flow through the electron transport chain; however, electron flow is not stopped completely due to a process known as proton leak or mitochondrial uncoupling. This process is due to facilitated diffusion of protons into the mitochondrial matrix through an uncoupling protein such as thermogenin, or UCP1. Administering oligomycin to rats can result in very high levels of lactate accumulating in the blood and urine. References Macrolide antibiotics Spiro compounds Diketones ATP synthase inhibitors
Oligomycin
Chemistry
293
64,586,299
https://en.wikipedia.org/wiki/Assault%20Engineering%20Brigades
Assault Engineering Brigades () or Storm Engineer-Sapper Brigades were formations of the Reserve of the Supreme High Command of the Red Army, notable for their service during the Second World War. These brigades were designed to storm settlements and to break through heavily fortified enemy lines. These units are commonly abbreviated as ShISBr (), and are occasionally referred to as "armoured infantry" or "cuirass infantry" (). History Sapper-engineering assault units were formed in 1943. By 30 May of that year, the formation of the first 15 brigades was completed. Most of these units were formed from existing combat battalions, well proven in battle. In August 1943, assault engineer-sapper brigades arrived at the front. Each was composed of:

Brigade Command (40 people)
Command Company (87 people)
Motorized Engineer-Scout Company (101 people)
5 Assault Engineer-Sapper Battalions (388 people each)
Light Bridging and Crossing Equipment Crew (36 people)

During the formation of the assault engineering brigades, all soldiers over 40 years of age were reassigned. The most distinctive piece of individual equipment used by soldiers of the assault engineering brigades was the SN-42 () steel breastplate. In December 1943, a procedure was developed for the combat utilization of assault formations. Assault brigades were sent into battle to facilitate key breakthroughs in fortified defensive lines by means of combat engineering and sapping. Success in battle hinged on close coordination with infantry, armoured, mechanized, and artillery units. As soldiers of the assault brigades were not equipped with heavy small arms or their own artillery, they were immediately withdrawn after a successful breakthrough of the enemy lines to limit casualties. In the spring of 1944 the assault engineering brigades were supplied with ROKS-3 flamethrowers. 
The 1st, 2nd, 4th, 10th, and 2nd Guards assault engineer-sapper brigades were supplemented with engineer-tank regiments including PT-3 () mine flails and OT-34 flamethrower tanks, each composed of three companies with 20 combat vehicles per company. By May 1945, the brigades had pushed through the city of Königsberg (now Kaliningrad), the city falling in a matter of days. Over the course of the Second World War, 20 assault engineer-sapper brigades were formed; they performed admirably in combat operations and especially distinguished themselves in the storming of cities, which was their intended purpose. Legacy Combat engineers are one of the branches of the Red Army that are given special reverence. In 2015 and 2020, a formation of soldiers dressed in the uniforms of engineer assault brigades took part in the Moscow Victory Day Parade on Red Square. During a national ceremony at the Capul de pod Șerpeni Memorial Complex in August 2019, Russian Defence Minister Sergey Shoigu ceremonially handed to Moldovan Defence Minister Pavel Voicu the military banners of the 14th Assault Engineering and Combat Brigade, which until that point had been kept at the Central Armed Forces Museum in Russia. 
Structure 1st Guards Assault Engineering Sapper Brigade 1st Assault Engineering and Sapper Brigade 2nd Assault Engineering and Combat Engineer Brigade 3rd Assault Engineering and Combat Engineer Brigade 4th Assault Engineering and Sapper Brigade 5th Assault Engineering and Combat Engineer Brigade 6th Assault Engineering and Combat Engineer Brigade 7th Assault Engineering and Sapper Brigade 8th Assault Engineering and Sapper Brigade 9th Assault Engineering and Combat Engineer Brigade 10th Assault Engineering and Combat Engineer Brigade 11th Assault Engineering and Sapper Brigade 12th Assault Engineering and Combat Engineer Brigade 13th Assault Engineering and Combat Engineer Brigade 14th Assault Engineering and Combat Engineer Brigade 15th Assault Engineering and Combat Engineer Brigade 16th Assault Engineering and Combat Engineer Brigade 17th Assault Engineering and Combat Engineer Brigade 18th Assault Engineering and Sapper Brigade 19th Assault Engineering and Sapper Brigade See also Sapper army Pososhniye lyudi References Red Army units and formations of World War II Engineering units and formations
Assault Engineering Brigades
Engineering
773
37,077,548
https://en.wikipedia.org/wiki/Rogelio%20Bernal%20Andreo
Rogelio Bernal Andreo (born 9 January 1969) is a Spanish-American astrophotographer, known for his photographs of deep-sky objects. He is a regular contributor to NASA's Astronomy Picture of the Day (APOD), where his work has been featured 80 times. Andreo's photography has been published in international magazines and periodicals, as well as on television networks including the BBC, National Geographic, and the Discovery Channel series Into the Universe with Stephen Hawking. Personal background Rogelio Bernal Andreo was born on 9 January 1969, in Murcia, Spain. When he was 20 years old, he moved to Boston, Massachusetts. In 1995, he earned a bachelor's degree in computer science from Harvard University and the Wentworth Institute of Technology. He has two children. Professional background After earning his bachelor's degree, Andreo moved to the San Francisco Bay Area, where he worked for Netscape Communications and eBay as a lead software engineer. In 2008, he started exploring astrophotography as a hobby and developed a personal style defined by deep wide-field images, which has led to international recognition and a meaningful influence on the discipline. His work has included post-processing techniques that were uncommon at the time of their introduction, and he has written about his use of multi-scale processing techniques. Andreo's work has appeared on NASA's Astronomy Picture of the Day, in publications such as Astronomy Magazine, Ciel et Espace, Sky and Telescope, and National Geographic, and on television networks such as the BBC, National Geographic, and the Discovery Channel series Into the Universe with Stephen Hawking. Two of his Orion wide-field images were used in the Orion flyby scene of the Hubble 3D motion picture. Rogelio's work was also used in the Cosmos: A Spacetime Odyssey series. His image Orion, from Head to Toes was selected by Discover Magazine's Bad Astronomy as the best astronomy picture of 2010. 
It was the first time this award was given to an amateur astronomer. Honors and awards 2009: Astronomy Magazine – Deep Sky category in the astroimaging contest (Winner) 2010: Advanced Imaging Conference Board of Directors – (Pleiades Award Winner) 2010: Discover Magazine's Bad Astronomy – The Top Astronomy Picture of 2010 (Winner) 2010: Royal Observatory Greenwich – Astronomy Photographer of the Year, Deep Space category (Winner) 2011: Royal Observatory Greenwich – Astronomy Photographer of the Year, Deep Space category (Highly Commended) 2011: Astronomical Association of Northern California – Outstanding contribution to Amateur Astronomy (Winner) 2011: SBIG's Hall of Fame – For excellence in astronomical imaging (Winner) 2012: Royal Observatory Greenwich – Astronomy Photographer of the Year, Deep Space category (Runner-up) 2013: Royal Observatory Greenwich – Astronomy Photographer of the Year, Deep Space category (Shortlisted) 2014: Royal Observatory Greenwich – Astronomy Photographer of the Year, Deep Space category (Highly Commended) Selected works References External links Orion in Gas, Dust, and Stars – A deep exposure showing the dark nebulae and star clusters of the Orion constellation – Orion constellation showing Betelgeuse, Rigel, Orion's belt and the Orion molecular cloud complex Dark River, Wide Field – Panorama stretching from Sagittarius to Scorpius showing the Dark Rift and the extraordinary starfield surrounding the Galactic Center. – Image highlighting the Taurus constellation from the Pleiades star cluster to the Hyades – Skyscape of the Cepheus constellation showing the large emission nebula known as the Elephant's Trunk nebula The Seagull and the Duck – Emission nebula in Canis Major spanning about 250 light years across 1969 births Living people People from Murcia Spanish emigrants to the United States Astrophotographers Amateur astronomers American software engineers People from Sunnyvale, California Wentworth Institute of Technology alumni
Rogelio Bernal Andreo
Astronomy
784
3,323,565
https://en.wikipedia.org/wiki/Cauchy%20stress%20tensor
In continuum mechanics, the Cauchy stress tensor (symbol $\boldsymbol\sigma$, named after Augustin-Louis Cauchy), also called the true stress tensor or simply the stress tensor, completely defines the state of stress at a point inside a material in the deformed state, placement, or configuration. The second-order tensor consists of nine components $\sigma_{ij}$ and relates a unit-length direction vector e to the traction vector T(e) across an imaginary surface perpendicular to e: $\mathbf T^{(\mathbf e)} = \mathbf e \cdot \boldsymbol\sigma$, or in index notation $T^{(\mathbf e)}_j = \sigma_{ij}\,e_i$. The SI base units of both stress tensor and traction vector are newton per square metre (N/m2) or pascal (Pa), corresponding to the stress scalar. The unit vector is dimensionless. The Cauchy stress tensor obeys the tensor transformation law under a change in the system of coordinates. A graphical representation of this transformation law is the Mohr's circle for stress. The Cauchy stress tensor is used for stress analysis of material bodies experiencing small deformations: it is a central concept in the linear theory of elasticity. For large deformations, also called finite deformations, other measures of stress are required, such as the Piola–Kirchhoff stress tensor, the Biot stress tensor, and the Kirchhoff stress tensor. According to the principle of conservation of linear momentum, if the continuum body is in static equilibrium it can be demonstrated that the components of the Cauchy stress tensor in every material point in the body satisfy the equilibrium equations (Cauchy's equations of motion for zero acceleration). At the same time, according to the principle of conservation of angular momentum, equilibrium requires that the summation of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, thus having only six independent stress components, instead of the original nine. However, in the presence of couple-stresses, i.e. moments per unit volume, the stress tensor is non-symmetric. 
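As a numeric illustration of the relation just stated between the stress tensor, a unit normal, and the traction vector, here is a minimal NumPy sketch; the stress values are arbitrary sample numbers, not taken from the article:

```python
import numpy as np

# Arbitrary sample Cauchy stress tensor (Pa); symmetric, as follows
# later from conservation of angular momentum.
sigma = np.array([[200.0,  50.0,   0.0],
                  [ 50.0, 100.0,  30.0],
                  [  0.0,  30.0, 150.0]])

# Unit normal of the imaginary cutting surface.
e = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)

# Traction vector across the surface perpendicular to e:
# T_j = sigma_ij e_i  (for a symmetric tensor, sigma @ e is equal).
T = sigma.T @ e
print(T)
```

Both the tensor components and the resulting traction carry units of pascals, while the unit normal is dimensionless, matching the units statement above.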
This also is the case when the Knudsen number is close to one, or when the continuum is a non-Newtonian fluid, which can lead to rotationally non-invariant fluids, such as polymers. There are certain invariants associated with the stress tensor, whose values do not depend upon the coordinate system chosen, or the area element upon which the stress tensor operates. These are the three eigenvalues of the stress tensor, which are called the principal stresses. Euler–Cauchy stress principle – stress vector The Euler–Cauchy stress principle states that upon any surface (real or imaginary) that divides the body, the action of one part of the body on the other is equivalent (equipollent) to the system of distributed forces and couples on the surface dividing the body, and it is represented by a field $\mathbf T^{(\mathbf n)}$, called the traction vector, defined on the surface and assumed to depend continuously on the surface's unit normal vector $\mathbf n$. To formulate the Euler–Cauchy stress principle, consider an imaginary surface $S$ passing through an internal material point $P$ dividing the continuous body into two segments, as seen in Figure 2.1a or 2.1b (one may use either the cutting plane diagram or the diagram with the arbitrary volume inside the continuum enclosed by the surface $S$). Following the classical dynamics of Newton and Euler, the motion of a material body is produced by the action of externally applied forces, which are assumed to be of two kinds: surface forces $\mathbf F$ and body forces $\mathbf b$. Thus, the total force $\mathcal F$ applied to a body or to a portion of the body can be expressed as $\mathcal F = \mathbf b + \mathbf F$. Only surface forces will be discussed in this article, as they are relevant to the Cauchy stress tensor. 
When the body is subjected to external surface forces or contact forces $\mathbf F$, following Euler's equations of motion, internal contact forces and moments are transmitted from point to point in the body, and from one segment to the other through the dividing surface $S$, due to the mechanical contact of one portion of the continuum onto the other (Figure 2.1a and 2.1b). On an element of area $\Delta S$ containing $P$, with normal vector $\mathbf n$, the force distribution is equipollent to a contact force $\Delta\mathbf F$ exerted at point P and a surface moment $\Delta\mathbf M$. In particular, the contact force is given by $\Delta\mathbf F = \mathbf T^{(\mathbf n)}\,\Delta S$, where $\mathbf T^{(\mathbf n)}$ is the mean surface traction. Cauchy's stress principle asserts that as $\Delta S$ becomes very small and tends to zero the ratio $\Delta\mathbf F/\Delta S$ becomes $\mathrm d\mathbf F/\mathrm dS$ and the couple stress vector $\Delta\mathbf M/\Delta S$ vanishes. In specific fields of continuum mechanics the couple stress is assumed not to vanish; however, classical branches of continuum mechanics address non-polar materials which do not consider couple stresses and body moments. The resultant vector $\mathrm d\mathbf F/\mathrm dS$ is defined as the surface traction, also called the stress vector, traction, or traction vector, $\mathbf T^{(\mathbf n)}$, at the point $P$ associated with a plane with a normal vector $\mathbf n$. This equation means that the stress vector depends on its location in the body and the orientation of the plane on which it is acting. This implies that the balancing action of internal contact forces generates a contact force density or Cauchy traction field that represents a distribution of internal contact forces throughout the volume of the body in a particular configuration of the body at a given time $t$. It is not a vector field because it depends not only on the position of a particular material point, but also on the local orientation of the surface element as defined by its normal vector $\mathbf n$. Depending on the orientation of the plane under consideration, the stress vector may not necessarily be perpendicular to that plane, i.e. 
parallel to $\mathbf n$, and can be resolved into two components (Figure 2.1c): one normal to the plane, called the normal stress $\sigma_\mathrm n = \mathrm dF_\mathrm n/\mathrm dS$, where $\mathrm dF_\mathrm n$ is the normal component of the force $\mathrm d\mathbf F$ to the differential area $\mathrm dS$; and the other parallel to this plane, called the shear stress $\tau = \mathrm dF_\mathrm s/\mathrm dS$, where $\mathrm dF_\mathrm s$ is the tangential component of the force $\mathrm d\mathbf F$ to the differential surface area $\mathrm dS$. The shear stress can be further decomposed into two mutually perpendicular vectors. Cauchy's postulate According to the Cauchy Postulate, the stress vector $\mathbf T^{(\mathbf n)}$ remains unchanged for all surfaces passing through the point $P$ and having the same normal vector $\mathbf n$ at $P$, i.e., having a common tangent at $P$. This means that the stress vector is a function of the normal vector $\mathbf n$ only, and is not influenced by the curvature of the internal surfaces. Cauchy's fundamental lemma A consequence of Cauchy's postulate is Cauchy's Fundamental Lemma, also called the Cauchy reciprocal theorem, which states that the stress vectors acting on opposite sides of the same surface are equal in magnitude and opposite in direction. Cauchy's fundamental lemma is equivalent to Newton's third law of motion of action and reaction, and is expressed as $\mathbf T^{(-\mathbf n)} = -\mathbf T^{(\mathbf n)}$. Cauchy's stress theorem—stress tensor The state of stress at a point in the body is then defined by all the stress vectors T(n) associated with all planes (infinite in number) that pass through that point. However, according to Cauchy's fundamental theorem, also called Cauchy's stress theorem, merely by knowing the stress vectors on three mutually perpendicular planes, the stress vector on any other plane passing through that point can be found through coordinate transformation equations. 
Cauchy's stress theorem states that there exists a second-order tensor field σ(x, t), called the Cauchy stress tensor, independent of n, such that T is a linear function of n: $\mathbf T^{(\mathbf n)} = \mathbf n\cdot\boldsymbol\sigma$, or $T^{(\mathbf n)}_j = \sigma_{ij}\,n_i$. This equation implies that the stress vector T(n) at any point P in a continuum associated with a plane with normal unit vector n can be expressed as a function of the stress vectors on the planes perpendicular to the coordinate axes, i.e. in terms of the components σij of the stress tensor σ. To prove this expression, consider a tetrahedron with three faces oriented in the coordinate planes, and with an infinitesimal area dA oriented in an arbitrary direction specified by a normal unit vector n (Figure 2.2). The tetrahedron is formed by slicing the infinitesimal element along an arbitrary plane with unit normal n. The stress vector on this plane is denoted by T(n). The stress vectors acting on the faces of the tetrahedron are denoted as T(e1), T(e2), and T(e3), and are by definition the components σij of the stress tensor σ. This tetrahedron is sometimes called the Cauchy tetrahedron. The equilibrium of forces, i.e. Euler's first law of motion (Newton's second law of motion), gives $\mathbf T^{(\mathbf n)}\,\mathrm dA - \mathbf T^{(\mathbf e_1)}\,\mathrm dA_1 - \mathbf T^{(\mathbf e_2)}\,\mathrm dA_2 - \mathbf T^{(\mathbf e_3)}\,\mathrm dA_3 = \rho\left(\tfrac{h}{3}\,\mathrm dA\right)\mathbf a$, where the right-hand side represents the product of the mass enclosed by the tetrahedron and its acceleration: ρ is the density, a is the acceleration, and h is the height of the tetrahedron, considering the plane n as the base. The area of the faces of the tetrahedron perpendicular to the axes can be found by projecting dA into each face (using the dot product), $\mathrm dA_i = \left(\mathbf n\cdot\mathbf e_i\right)\mathrm dA = n_i\,\mathrm dA$, and then substituting into the equation to cancel out dA: $\mathbf T^{(\mathbf n)} = \mathbf T^{(\mathbf e_1)}n_1 + \mathbf T^{(\mathbf e_2)}n_2 + \mathbf T^{(\mathbf e_3)}n_3 + \rho\left(\tfrac{h}{3}\right)\mathbf a$. To consider the limiting case as the tetrahedron shrinks to a point, h must go to 0 (intuitively, the plane n is translated along n toward O). 
As a result, the acceleration term of the equation approaches 0, so $\mathbf T^{(\mathbf n)} = \mathbf T^{(\mathbf e_1)}n_1 + \mathbf T^{(\mathbf e_2)}n_2 + \mathbf T^{(\mathbf e_3)}n_3$. Assuming a material element (see figure at the top of the page) with planes perpendicular to the coordinate axes of a Cartesian coordinate system, the stress vectors associated with each of the element planes, i.e. T(e1), T(e2), and T(e3), can be decomposed into a normal component and two shear components, i.e. components in the direction of the three coordinate axes. For the particular case of a surface with normal unit vector oriented in the direction of the x1-axis, denote the normal stress by σ11, and the two shear stresses as σ12 and σ13: $\mathbf T^{(\mathbf e_1)} = \sigma_{11}\mathbf e_1 + \sigma_{12}\mathbf e_2 + \sigma_{13}\mathbf e_3$. In index notation this is $T^{(\mathbf e_i)}_j = \sigma_{ij}$. The nine components σij of the stress vectors are the components of a second-order Cartesian tensor called the Cauchy stress tensor, which can be used to completely define the state of stress at a point and is given by $\boldsymbol\sigma = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix}$, where σ11, σ22, and σ33 are normal stresses, and σ12, σ13, σ21, σ23, σ31, and σ32 are shear stresses. The first index i indicates that the stress acts on a plane normal to the Xi-axis, and the second index j denotes the direction in which the stress acts (for example, σ12 implies that the stress is acting on the plane that is normal to the 1st axis, i.e. X1, and acts along the 2nd axis, i.e. X2). A stress component is positive if it acts in the positive direction of the coordinate axes, and if the plane where it acts has an outward normal vector pointing in the positive coordinate direction. Thus, using the components of the stress tensor, $\mathbf T^{(\mathbf n)} = \sigma_{ij}\,n_i\,\mathbf e_j$ or, equivalently, $T^{(\mathbf n)}_j = \sigma_{ij}\,n_i$. Alternatively, in matrix form we have $\left[T^{(\mathbf n)}_1\;\; T^{(\mathbf n)}_2\;\; T^{(\mathbf n)}_3\right] = \left[n_1\;\; n_2\;\; n_3\right]\begin{bmatrix}\sigma_{11}&\sigma_{12}&\sigma_{13}\\ \sigma_{21}&\sigma_{22}&\sigma_{23}\\ \sigma_{31}&\sigma_{32}&\sigma_{33}\end{bmatrix}$. The Voigt notation representation of the Cauchy stress tensor takes advantage of the symmetry of the stress tensor to express the stress as a six-dimensional vector of the form $\left[\sigma_{11}\;\;\sigma_{22}\;\;\sigma_{33}\;\;\sigma_{23}\;\;\sigma_{13}\;\;\sigma_{12}\right]^\mathsf T$. The Voigt notation is used extensively in representing stress–strain relations in solid mechanics and for computational efficiency in numerical structural mechanics software. 
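The component layout and the Voigt packing described above can be sketched in NumPy as follows; the sample values are arbitrary:

```python
import numpy as np

def to_voigt(sigma):
    """Pack a symmetric 3x3 Cauchy stress tensor into the
    six-dimensional Voigt vector [s11, s22, s33, s23, s13, s12]."""
    assert np.allclose(sigma, sigma.T), "stress tensor must be symmetric"
    return np.array([sigma[0, 0], sigma[1, 1], sigma[2, 2],
                     sigma[1, 2], sigma[0, 2], sigma[0, 1]])

sigma = np.array([[200.0,  50.0,   0.0],
                  [ 50.0, 100.0,  30.0],
                  [  0.0,  30.0, 150.0]])   # sample values (Pa)
print(to_voigt(sigma))   # [200. 100. 150.  30.   0.  50.]
```

The symmetry check matters because the Voigt vector only stores six of the nine components; for a non-symmetric tensor (couple-stress media) the packing would lose information.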
Transformation rule of the stress tensor It can be shown that the stress tensor is a contravariant second-order tensor, which is a statement of how it transforms under a change of the coordinate system. From an xi-system to an xi'-system, the components σij in the initial system are transformed into the components σij' in the new system according to the tensor transformation rule (Figure 2.4) $\sigma'_{ij} = a_{ik}\,a_{jl}\,\sigma_{kl}$, where A is a rotation matrix with components aij. In matrix form this is $\boldsymbol\sigma' = \mathbf A\,\boldsymbol\sigma\,\mathbf A^\mathsf T$. Expanding the matrix operation and simplifying terms using the symmetry of the stress tensor gives the individual component equations; the Mohr circle for stress is a graphical representation of this transformation of stresses. Normal and shear stresses The magnitude of the normal stress component σn of any stress vector T(n) acting on an arbitrary plane with normal unit vector n at a given point, in terms of the components σij of the stress tensor σ, is the dot product of the stress vector and the normal unit vector: $\sigma_\mathrm n = \mathbf T^{(\mathbf n)}\cdot\mathbf n = \sigma_{ij}\,n_i\,n_j$. The magnitude of the shear stress component τn, acting orthogonal to the vector n, can then be found using the Pythagorean theorem: $\tau_\mathrm n = \sqrt{\left(T^{(\mathbf n)}\right)^2 - \sigma_\mathrm n^2}$, where $\left(T^{(\mathbf n)}\right)^2 = T^{(\mathbf n)}_i\,T^{(\mathbf n)}_i$. Balance laws – Cauchy's equations of motion Cauchy's first law of motion According to the principle of conservation of linear momentum, if the continuum body is in static equilibrium it can be demonstrated that the components of the Cauchy stress tensor in every material point in the body satisfy the equilibrium equations $\sigma_{ji,j} + F_i = 0$, where $F_i$ are the body forces per unit volume. For example, for a hydrostatic fluid in equilibrium conditions, the stress tensor takes on the form $\sigma_{ij} = -p\,\delta_{ij}$, where $p$ is the hydrostatic pressure, and $\delta_{ij}$ is the Kronecker delta. Derivation of equilibrium equations: Consider a continuum body (see Figure 4) occupying a volume $V$, having a surface area $S$, with defined traction or surface forces $\mathbf T^{(\mathbf n)}$ per unit area acting on every point of the body surface, and body forces $\mathbf F$ per unit of volume on every point within the volume $V$. Thus, if the body is in equilibrium the resultant force acting on the volume is zero, thus $\int_S T^{(\mathbf n)}_i\,\mathrm dS + \int_V F_i\,\mathrm dV = 0$. By definition the stress vector is $T^{(\mathbf n)}_i = \sigma_{ji}\,n_j$, then $\int_S \sigma_{ji}\,n_j\,\mathrm dS + \int_V F_i\,\mathrm dV = 0$. Using Gauss's divergence theorem to convert the surface integral to a volume integral gives $\int_V \left(\sigma_{ji,j} + F_i\right)\mathrm dV = 0$. For an arbitrary volume the integrand vanishes, and we have the equilibrium equations $\sigma_{ji,j} + F_i = 0$. Cauchy's second law of motion According to the principle of conservation of angular momentum, equilibrium requires that the summation of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, $\sigma_{ij} = \sigma_{ji}$, thus having only six independent stress components, instead of the original nine. Derivation of symmetry of the stress tensor: Summing moments about point O (Figure 4), the resultant moment is zero as the body is in equilibrium. Thus, $\int_S \left(\mathbf r\times\mathbf T^{(\mathbf n)}\right)\mathrm dS + \int_V \left(\mathbf r\times\mathbf F\right)\mathrm dV = 0$, where $\mathbf r$ is the position vector. Knowing that $T^{(\mathbf n)}_i = \sigma_{ji}\,n_j$ and using Gauss's divergence theorem to change from a surface integral to a volume integral, the resulting expression splits into two volume integrals; the second integral is zero as it contains the equilibrium equations. This leaves the first integral, therefore $\int_V \varepsilon_{ijk}\,\sigma_{jk}\,\mathrm dV = 0$. For an arbitrary volume V, we then have $\varepsilon_{ijk}\,\sigma_{jk} = 0$, which is satisfied at every point within the body. Expanding this equation we have $\sigma_{12} = \sigma_{21}$, $\sigma_{23} = \sigma_{32}$, and $\sigma_{13} = \sigma_{31}$, or in general $\sigma_{ij} = \sigma_{ji}$. This proves that the stress tensor is symmetric. However, in the presence of couple-stresses, i.e. moments per unit volume, the stress tensor is non-symmetric. This also is the case when the Knudsen number is close to one, or when the continuum is a non-Newtonian fluid, which can lead to rotationally non-invariant fluids, such as polymers. 
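The transformation rule and the normal/shear decomposition of the traction vector described in this section can be checked numerically; a small NumPy sketch with arbitrary sample values:

```python
import numpy as np

sigma = np.array([[200.0,  50.0,   0.0],
                  [ 50.0, 100.0,  30.0],
                  [  0.0,  30.0, 150.0]])   # sample values (Pa)

# Rotation by 30 degrees about the x3-axis.
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
A = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

# Tensor transformation rule: sigma' = A sigma A^T.
sigma_prime = A @ sigma @ A.T

# Invariant quantities such as the trace are unchanged by the rotation.
print(np.trace(sigma), np.trace(sigma_prime))

# Normal and shear components of the traction on a plane with normal n.
n = np.array([0.0, 0.0, 1.0])
T = sigma @ n                            # traction vector (symmetric sigma)
sigma_n = T @ n                          # normal stress: n . sigma . n
tau_n = np.sqrt(T @ T - sigma_n**2)      # shear stress (Pythagorean theorem)
print(sigma_n, tau_n)
```

For this plane normal to x3, the normal stress is simply σ33 = 150 and the shear magnitude comes from the single off-diagonal entry σ23 = 30, which is an easy hand check of the two formulas.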
Principal stresses and stress invariants At every point in a stressed body there are at least three planes, called principal planes, with normal vectors $\mathbf n$, called principal directions, where the corresponding stress vector is perpendicular to the plane, i.e., parallel to, or in the same direction as, the normal vector $\mathbf n$, and where there are no normal shear stresses $\tau_\mathrm n$. The three stresses normal to these principal planes are called principal stresses. The components of the stress tensor depend on the orientation of the coordinate system at the point under consideration. However, the stress tensor itself is a physical quantity and as such, it is independent of the coordinate system chosen to represent it. There are certain invariants associated with every tensor which are also independent of the coordinate system. For example, a vector is a simple tensor of rank one. In three dimensions, it has three components. The value of these components will depend on the coordinate system chosen to represent the vector, but the magnitude of the vector is a physical quantity (a scalar) and is independent of the Cartesian coordinate system chosen to represent the vector (so long as it is orthonormal). Similarly, every second-rank tensor (such as the stress and the strain tensors) has three independent invariant quantities associated with it. One set of such invariants are the principal stresses of the stress tensor, which are just the eigenvalues of the stress tensor. Their direction vectors are the principal directions or eigenvectors. A stress vector parallel to the normal unit vector $\mathbf n$ is given by $\mathbf T^{(\mathbf n)} = \lambda\,\mathbf n$, where $\lambda$ is a constant of proportionality, and in this particular case corresponds to the magnitudes of the normal stress vectors or principal stresses. Knowing that $T^{(\mathbf n)}_i = \sigma_{ij}\,n_j$ and $n_i = \delta_{ij}\,n_j$, we have $\left(\sigma_{ij} - \lambda\,\delta_{ij}\right)n_j = 0$. This is a homogeneous system, i.e. equal to zero, of three linear equations where the $n_j$ are the unknowns. To obtain a nontrivial (non-zero) solution for $n_j$, the determinant of the coefficient matrix must be equal to zero, i.e. 
the system is singular. Thus, $\left|\sigma_{ij} - \lambda\,\delta_{ij}\right| = 0$. Expanding the determinant leads to the characteristic equation $-\lambda^3 + I_1\,\lambda^2 - I_2\,\lambda + I_3 = 0$, where $I_1 = \sigma_{11} + \sigma_{22} + \sigma_{33} = \sigma_{kk}$, $I_2 = \sigma_{11}\sigma_{22} + \sigma_{22}\sigma_{33} + \sigma_{33}\sigma_{11} - \sigma_{12}^2 - \sigma_{23}^2 - \sigma_{31}^2$, and $I_3 = \det\left(\sigma_{ij}\right)$. The characteristic equation has three real roots $\lambda_i$, i.e. not imaginary due to the symmetry of the stress tensor. The roots $\sigma_1$, $\sigma_2$, and $\sigma_3$ are the principal stresses, functions of the eigenvalues $\lambda_i$. The eigenvalues are the roots of the characteristic polynomial. The principal stresses are unique for a given stress tensor. Therefore, from the characteristic equation, the coefficients $I_1$, $I_2$, and $I_3$, called the first, second, and third stress invariants, respectively, always have the same value regardless of the coordinate system's orientation. For each eigenvalue, there is a non-trivial solution for $n_j$ in the equation $\left(\sigma_{ij} - \lambda\,\delta_{ij}\right)n_j = 0$. These solutions are the principal directions or eigenvectors defining the plane where the principal stresses act. The principal stresses and principal directions characterize the stress at a point and are independent of the orientation. A coordinate system with axes oriented to the principal directions implies that the normal stresses are the principal stresses and the stress tensor is represented by a diagonal matrix: $\boldsymbol\sigma = \begin{bmatrix} \sigma_1 & 0 & 0 \\ 0 & \sigma_2 & 0 \\ 0 & 0 & \sigma_3 \end{bmatrix}$. The principal stresses can be combined to form the stress invariants: $I_1 = \sigma_1 + \sigma_2 + \sigma_3$, $I_2 = \sigma_1\sigma_2 + \sigma_2\sigma_3 + \sigma_3\sigma_1$, and $I_3 = \sigma_1\sigma_2\sigma_3$. The first and third invariant are the trace and determinant, respectively, of the stress tensor. Because of its simplicity, the principal coordinate system is often useful when considering the state of the elastic medium at a particular point. Principal stresses are often expressed in the following equation for evaluating stresses in the x and y directions or axial and bending stresses on a part: $\sigma_{1,2} = \frac{\sigma_x + \sigma_y}{2} \pm \sqrt{\left(\frac{\sigma_x - \sigma_y}{2}\right)^2 + \tau_{xy}^2}$. The principal normal stresses can then be used to calculate the von Mises stress and ultimately the safety factor and margin of safety. Using just the part of the equation under the square root gives the maximum and minimum shear stress for plus and minus. 
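The eigenvalue view of the principal stresses and the coordinate-independence of the invariants can be verified numerically; a NumPy sketch with arbitrary sample values:

```python
import numpy as np

sigma = np.array([[200.0,  50.0,   0.0],
                  [ 50.0, 100.0,  30.0],
                  [  0.0,  30.0, 150.0]])   # sample values (Pa)

# Principal stresses are the eigenvalues of the symmetric stress
# tensor; eigvalsh returns them in ascending order.
s3, s2, s1 = np.linalg.eigvalsh(sigma)

# The three stress invariants from the characteristic equation
# det(sigma - lambda I) = -lambda^3 + I1 lambda^2 - I2 lambda + I3.
I1 = np.trace(sigma)
I2 = 0.5 * (np.trace(sigma)**2 - np.trace(sigma @ sigma))
I3 = np.linalg.det(sigma)

# The same invariants recomputed from the principal stresses agree,
# regardless of the coordinate system's orientation.
assert np.isclose(I1, s1 + s2 + s3)
assert np.isclose(I2, s1*s2 + s2*s3 + s3*s1)
assert np.isclose(I3, s1 * s2 * s3)
print(s1, s2, s3)
```

Using `eigvalsh` (rather than the general `eig`) exploits the symmetry of the tensor and guarantees real, sorted eigenvalues, matching the statement that the characteristic equation has three real roots.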
This is shown as:
$$\tau_{\max},\ \tau_{\min} = \pm\sqrt{\left(\frac{\sigma_x - \sigma_y}{2}\right)^2 + \tau_{xy}^2}$$

Maximum and minimum shear stresses

The maximum shear stress or maximum principal shear stress is equal to one-half the difference between the largest and smallest principal stresses, and acts on the plane that bisects the angle between the directions of the largest and smallest principal stresses, i.e. the plane of the maximum shear stress is oriented $45^\circ$ from the principal stress planes. The maximum shear stress is expressed as
$$\tau_{\max} = \tfrac{1}{2}\left|\sigma_{\max} - \sigma_{\min}\right|$$
Assuming $\sigma_1 \ge \sigma_2 \ge \sigma_3$, then
$$\tau_{\max} = \tfrac{1}{2}\left|\sigma_1 - \sigma_3\right|$$
When the stress tensor is non-zero, the normal stress component acting on the plane of maximum shear stress is non-zero and is equal to
$$\sigma_\mathrm{n} = \tfrac{1}{2}\left(\sigma_1 + \sigma_3\right)$$

Derivation of the maximum and minimum shear stresses

The normal stress can be written in terms of principal stresses as
$$\sigma_\mathrm{n} = \sigma_{ij}n_in_j = \sigma_1 n_1^2 + \sigma_2 n_2^2 + \sigma_3 n_3^2$$
Knowing that $\left(T^{(\mathbf{n})}\right)^2 = \sigma_{ij}\sigma_{ik}n_jn_k = \sigma_1^2 n_1^2 + \sigma_2^2 n_2^2 + \sigma_3^2 n_3^2$, the shear stress in terms of principal stress components is expressed as
$$\tau_\mathrm{n}^2 = \sigma_1^2 n_1^2 + \sigma_2^2 n_2^2 + \sigma_3^2 n_3^2 - \left(\sigma_1 n_1^2 + \sigma_2 n_2^2 + \sigma_3 n_3^2\right)^2$$
The maximum shear stress at a point in a continuum body is determined by maximizing $\tau_\mathrm{n}^2$ subject to the condition that
$$n_1^2 + n_2^2 + n_3^2 = 1$$
This is a constrained maximization problem, which can be solved using the Lagrangian multiplier technique to convert the problem into an unconstrained optimization problem. Thus, the stationary values (maximum and minimum values) of $\tau_\mathrm{n}^2$ occur where the gradient of $\tau_\mathrm{n}^2$ is parallel to the gradient of the constraint function. The Lagrangian function for this problem can be written as
$$F\left(n_1, n_2, n_3, \lambda\right) = \tau_\mathrm{n}^2 + \lambda\left(n_in_i - 1\right)$$
where $\lambda$ is the Lagrangian multiplier (which is different from the $\lambda$ used to denote eigenvalues). The extreme values of these functions are thence
$$\frac{\partial F}{\partial n_1} = 2n_1\left(\sigma_1^2 - 2\sigma_1\sigma_\mathrm{n} + \lambda\right) = 0$$
$$\frac{\partial F}{\partial n_2} = 2n_2\left(\sigma_2^2 - 2\sigma_2\sigma_\mathrm{n} + \lambda\right) = 0$$
$$\frac{\partial F}{\partial n_3} = 2n_3\left(\sigma_3^2 - 2\sigma_3\sigma_\mathrm{n} + \lambda\right) = 0$$
These three equations together with the condition $n_in_i = 1$ may be solved for $\lambda$, $n_1$, $n_2$ and $n_3$. By multiplying the first three equations by $n_1$, $n_2$ and $n_3$, respectively, we obtain
$$n_1^2\left(\sigma_1^2 - 2\sigma_1\sigma_\mathrm{n} + \lambda\right) = 0, \quad n_2^2\left(\sigma_2^2 - 2\sigma_2\sigma_\mathrm{n} + \lambda\right) = 0, \quad n_3^2\left(\sigma_3^2 - 2\sigma_3\sigma_\mathrm{n} + \lambda\right) = 0$$
Adding these three equations, and knowing that $\sigma_i^2n_i^2 = \tau_\mathrm{n}^2 + \sigma_\mathrm{n}^2$, $\sigma_in_i^2 = \sigma_\mathrm{n}$ and $n_in_i = 1$, we get
$$\tau_\mathrm{n}^2 + \sigma_\mathrm{n}^2 - 2\sigma_\mathrm{n}^2 + \lambda = 0 \qquad\Longrightarrow\qquad \lambda = \sigma_\mathrm{n}^2 - \tau_\mathrm{n}^2$$
This result can be substituted into each of the first three equations to obtain
$$n_1\left[\left(\sigma_1 - \sigma_\mathrm{n}\right)^2 - \tau_\mathrm{n}^2\right] = 0$$
Doing the same for the other two equations we have
$$n_2\left[\left(\sigma_2 - \sigma_\mathrm{n}\right)^2 - \tau_\mathrm{n}^2\right] = 0, \qquad n_3\left[\left(\sigma_3 - \sigma_\mathrm{n}\right)^2 - \tau_\mathrm{n}^2\right] = 0$$
A first approach to solve these last three equations is to consider the trivial solution $n_1 = n_2 = n_3 = 0$. However, this option does not fulfill the constraint $n_in_i = 1$.
Considering the solution where $n_2 = 0$ and $n_3 = 0$, it is determined from the condition $n_in_i = 1$ that $n_1 = \pm 1$; then from the original equation for $\tau_\mathrm{n}^2$ it is seen that $\tau_\mathrm{n} = 0$. The other two possible values can be obtained similarly by assuming $n_1 = n_3 = 0$ and $n_1 = n_2 = 0$. Thus, one set of solutions for these four equations is:
$$n_1 = \pm 1,\ n_2 = 0,\ n_3 = 0, \qquad \tau_\mathrm{n} = 0$$
$$n_1 = 0,\ n_2 = \pm 1,\ n_3 = 0, \qquad \tau_\mathrm{n} = 0$$
$$n_1 = 0,\ n_2 = 0,\ n_3 = \pm 1, \qquad \tau_\mathrm{n} = 0$$
These correspond to minimum values for $\tau_\mathrm{n}$ and verify that there are no shear stresses on planes normal to the principal directions of stress, as shown previously.

A second set of solutions is obtained by assuming $n_1 = 0$, $n_2 \ne 0$ and $n_3 \ne 0$. Thus we have
$$\left(\sigma_2 - \sigma_\mathrm{n}\right)^2 = \tau_\mathrm{n}^2, \qquad \left(\sigma_3 - \sigma_\mathrm{n}\right)^2 = \tau_\mathrm{n}^2$$
To find the values for $n_2$ and $n_3$, note that these two conditions require $\sigma_2 - \sigma_\mathrm{n}$ and $\sigma_3 - \sigma_\mathrm{n}$ to be of equal magnitude and opposite sign; adding the corresponding equations gives
$$\sigma_2 + \sigma_3 - 2\sigma_\mathrm{n} = 0 \qquad\Longrightarrow\qquad \sigma_\mathrm{n} = \tfrac{1}{2}\left(\sigma_2 + \sigma_3\right)$$
Knowing that, for $n_1 = 0$, $\sigma_\mathrm{n} = \sigma_2 n_2^2 + \sigma_3 n_3^2$ and $n_2^2 + n_3^2 = 1$, we have
$$n_2^2 = n_3^2 = \tfrac{1}{2} \qquad\Longrightarrow\qquad n_2 = \pm\tfrac{1}{\sqrt{2}},\ n_3 = \pm\tfrac{1}{\sqrt{2}}$$
Then solving for $\tau_\mathrm{n}$ we have
$$\tau_\mathrm{n} = \pm\tfrac{1}{2}\left(\sigma_2 - \sigma_3\right)$$
The other two possible values for $\tau_\mathrm{n}$ can be obtained similarly by assuming $n_2 = 0$ and $n_3 = 0$. Therefore, the second set of solutions, representing a maximum for $\tau_\mathrm{n}$, is
$$n_1 = 0,\ n_2 = \pm\tfrac{1}{\sqrt{2}},\ n_3 = \pm\tfrac{1}{\sqrt{2}}, \qquad \tau_\mathrm{n} = \pm\tfrac{1}{2}\left(\sigma_2 - \sigma_3\right)$$
$$n_1 = \pm\tfrac{1}{\sqrt{2}},\ n_2 = 0,\ n_3 = \pm\tfrac{1}{\sqrt{2}}, \qquad \tau_\mathrm{n} = \pm\tfrac{1}{2}\left(\sigma_1 - \sigma_3\right)$$
$$n_1 = \pm\tfrac{1}{\sqrt{2}},\ n_2 = \pm\tfrac{1}{\sqrt{2}},\ n_3 = 0, \qquad \tau_\mathrm{n} = \pm\tfrac{1}{2}\left(\sigma_1 - \sigma_2\right)$$
Therefore, assuming $\sigma_1 \ge \sigma_2 \ge \sigma_3$, the maximum shear stress is expressed by
$$\tau_{\max} = \tfrac{1}{2}\left|\sigma_1 - \sigma_3\right|$$
and it can be stated as being equal to one-half the difference between the largest and smallest principal stresses, acting on the plane that bisects the angle between the directions of the largest and smallest principal stresses.

Stress deviator tensor

The stress tensor $\sigma_{ij}$ can be expressed as the sum of two other stress tensors: a mean hydrostatic stress tensor or volumetric stress tensor or mean normal stress tensor, $\pi\delta_{ij}$, which tends to change the volume of the stressed body; and a deviatoric component called the stress deviator tensor, $s_{ij}$, which tends to distort it. So
$$\sigma_{ij} = s_{ij} + \pi\delta_{ij}$$
where $\pi$ is the mean stress given by
$$\pi = \frac{\sigma_{kk}}{3} = \frac{\sigma_{11} + \sigma_{22} + \sigma_{33}}{3} = \tfrac{1}{3}I_1$$
Pressure ($p$) is generally defined as negative one-third the trace of the stress tensor minus any stress the divergence of the velocity contributes with, i.e.
$$p = -\tfrac{1}{3}\sigma_{kk} - \lambda\,\nabla\cdot\vec{u} = -\tfrac{1}{3}\sigma_{kk} - \lambda\frac{\partial u_k}{\partial x_k}$$
where $\lambda$ is a proportionality constant (viz. the first of the Lamé parameters), $\nabla\cdot$ is the divergence operator, $x_k$ is the k:th Cartesian coordinate, $\vec{u}$ is the flow velocity and $u_k$ is the k:th Cartesian component of $\vec{u}$.
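The quantities developed in this section (principal stresses and invariants, the maximum shear stress, and the hydrostatic/deviatoric split) can be checked numerically. A minimal sketch with NumPy; the stress components below are arbitrary example values, not taken from the text:

```python
import numpy as np

# Arbitrary symmetric Cauchy stress tensor (components in MPa)
sigma = np.array([[50.0, 30.0, 20.0],
                  [30.0, -20.0, -10.0],
                  [20.0, -10.0, 10.0]])

# Principal stresses are the eigenvalues of the symmetric stress tensor;
# eigvalsh returns them in ascending order.
s3, s2, s1 = np.linalg.eigvalsh(sigma)        # sigma_1 >= sigma_2 >= sigma_3

# Stress invariants from the characteristic equation ...
I1 = np.trace(sigma)
I2 = 0.5 * (np.trace(sigma) ** 2 - np.trace(sigma @ sigma))
I3 = np.linalg.det(sigma)

# ... agree with their principal-stress expressions
assert np.isclose(I1, s1 + s2 + s3)
assert np.isclose(I2, s1 * s2 + s2 * s3 + s3 * s1)
assert np.isclose(I3, s1 * s2 * s3)

# Maximum shear stress: half the spread of the principal stresses
tau_max = 0.5 * (s1 - s3)

# Hydrostatic/deviatoric split: sigma_ij = s_ij + pi * delta_ij
pi_mean = I1 / 3.0
dev = sigma - pi_mean * np.eye(3)
assert np.isclose(np.trace(dev), 0.0)         # the deviator is trace-free

# Equivalent (von Mises) stress from the second deviatoric invariant
J2 = 0.5 * np.sum(dev * dev)
sigma_vm = np.sqrt(3.0 * J2)
assert np.isclose(sigma_vm,
                  np.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2)))
```

The assertions confirm, for this one example, that the invariants and the von Mises stress computed from tensor components match the same quantities computed from the eigenvalues.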
The deviatoric stress tensor can be obtained by subtracting the hydrostatic stress tensor from the Cauchy stress tensor:
$$s_{ij} = \sigma_{ij} - \frac{\sigma_{kk}}{3}\delta_{ij}$$

Invariants of the stress deviator tensor

As it is a second order tensor, the stress deviator tensor also has a set of invariants, which can be obtained using the same procedure used to calculate the invariants of the stress tensor. It can be shown that the principal directions of the stress deviator tensor $s_{ij}$ are the same as the principal directions of the stress tensor $\sigma_{ij}$. Thus, the characteristic equation is
$$\lambda^3 - J_1\lambda^2 - J_2\lambda - J_3 = 0$$
where $J_1$, $J_2$ and $J_3$ are the first, second, and third deviatoric stress invariants, respectively. Their values are the same (invariant) regardless of the orientation of the coordinate system chosen. These deviatoric stress invariants can be expressed as a function of the components of $s_{ij}$ or its principal values $s_1$, $s_2$, and $s_3$, or alternatively, as a function of $\sigma_{ij}$ or its principal values $\sigma_1$, $\sigma_2$, and $\sigma_3$. Thus,
$$J_1 = s_{kk} = 0$$
$$J_2 = \tfrac{1}{2}s_{ij}s_{ji} = \tfrac{1}{2}\left(s_1^2 + s_2^2 + s_3^2\right) = \tfrac{1}{3}I_1^2 - I_2 = \tfrac{1}{6}\left[\left(\sigma_1 - \sigma_2\right)^2 + \left(\sigma_2 - \sigma_3\right)^2 + \left(\sigma_3 - \sigma_1\right)^2\right]$$
$$J_3 = \det\left(s_{ij}\right) = s_1s_2s_3 = \tfrac{2}{27}I_1^3 - \tfrac{1}{3}I_1I_2 + I_3$$
Because $s_{kk} = 0$, the stress deviator tensor is in a state of pure shear.

A quantity called the equivalent stress or von Mises stress is commonly used in solid mechanics. The equivalent stress is defined as
$$\sigma_\mathrm{vM} = \sqrt{3J_2} = \sqrt{\tfrac{1}{2}\left[\left(\sigma_1 - \sigma_2\right)^2 + \left(\sigma_2 - \sigma_3\right)^2 + \left(\sigma_3 - \sigma_1\right)^2\right]}$$

Octahedral stresses

Considering the principal directions as the coordinate axes, a plane whose normal vector makes equal angles with each of the principal axes (i.e. having direction cosines equal to $1/\sqrt{3}$ in magnitude) is called an octahedral plane. There are a total of eight octahedral planes (Figure 6). The normal and shear components of the stress tensor on these planes are called octahedral normal stress $\sigma_\mathrm{oct}$ and octahedral shear stress $\tau_\mathrm{oct}$, respectively. The octahedral plane passing through the origin is known as the π-plane (π not to be confused with the mean stress denoted by π in the above section). On the π-plane, $\sigma_1 + \sigma_2 + \sigma_3 = 0$. Knowing that the stress tensor of point O (Figure 6) in the principal axes is
$$\sigma_{ij} = \begin{bmatrix} \sigma_1 & 0 & 0 \\ 0 & \sigma_2 & 0 \\ 0 & 0 & \sigma_3 \end{bmatrix}$$
the stress vector on an octahedral plane is then given by:
$$\mathbf{T}_\mathrm{oct}^{(\mathbf{n})} = \sigma_{ij}n_j\mathbf{e}_i = \tfrac{1}{\sqrt{3}}\left(\sigma_1\mathbf{e}_1 + \sigma_2\mathbf{e}_2 + \sigma_3\mathbf{e}_3\right)$$
The normal component of the stress vector at point O associated with the octahedral plane is
$$\sigma_\mathrm{oct} = T_i^{(\mathbf{n})}n_i = \tfrac{1}{3}\left(\sigma_1 + \sigma_2 + \sigma_3\right) = \tfrac{1}{3}I_1$$
which is the mean normal stress or hydrostatic stress.
This value is the same in all eight octahedral planes. The shear stress on the octahedral plane is then
$$\tau_\mathrm{oct} = \tfrac{1}{3}\sqrt{\left(\sigma_1 - \sigma_2\right)^2 + \left(\sigma_2 - \sigma_3\right)^2 + \left(\sigma_3 - \sigma_1\right)^2} = \sqrt{\tfrac{2}{3}J_2}$$
See also Cauchy momentum equation Critical plane analysis Stress–energy tensor Notes References Tensor physical quantities Solid mechanics Continuum mechanics Structural analysis
Cauchy stress tensor
Physics,Mathematics,Engineering
5,031
50,501,391
https://en.wikipedia.org/wiki/AIX%20Toolbox%20for%20Linux%20Applications
The AIX Toolbox for Linux Applications is a collection of GNU tools for IBM AIX. These tools are available for installation using Red Hat's RPM format. Licensing Each of these packages includes its own licensing information, and while IBM has made the code available to AIX users, the code is provided as-is and has not been thoroughly tested. The Toolbox is meant to provide a core set of the most common development tools and libraries, along with the more popular GNU packages. References External links AIX Toolbox for Open Source Software - Overview Programming tools Free software programmed in C System administration Red Hat UNIX System V IBM AIX Power ISA operating systems PowerPC operating systems
AIX Toolbox for Linux Applications
Technology
132
298,486
https://en.wikipedia.org/wiki/Barrage%20balloon
A barrage balloon is a type of airborne barrage, a large uncrewed tethered balloon used to defend ground targets against aircraft attack, by raising aloft steel cables which pose a severe risk of collision with hostile aircraft, making the attacker's approach difficult and hazardous. Early barrage balloons were often spherical. The kite balloon, having a shape and cable bridling that stabilizes the balloon and reduces drag, could be operated at higher wind speeds than a spherical balloon. Some examples carried small explosive charges that would be pulled up against the aircraft to ensure its destruction. Barrage balloons are not practical at higher altitudes due in large part to the cable's weight. First World War France, Germany, Italy, and the United Kingdom used barrage balloons in the First World War. While the French and German forces developed kite balloons, early British barrage balloons were spherical. Sometimes, especially around London, several balloons were used to lift a "barrage net": a steel cable was strung between the balloons, and more cables hung from it. These nets could be raised to an altitude comparable to the operational ceiling of the bombers of the time. By 1918 the barrage balloon defences around London stretched for many miles, and captured German pilots expressed great fear of them. Second World War In 1938, the British Balloon Command was established to protect cities and key targets such as industrial areas, ports, and harbors. Balloons were intended to defend against low-flying dive bombers, forcing them to fly higher and into the range of concentrated anti-aircraft fire: anti-aircraft guns could not traverse fast enough to attack aircraft flying at low altitude and high speed. By the middle of 1940, there were 1,400 balloons, a third of them over the London area.
While dive-bombing was a devastatingly effective tactic against undefended targets, such as Guernica and Rotterdam, dive-bombers were very vulnerable to attack by fighter aircraft when pulling up after having completed a bombing dive. Due to the effectiveness of the Royal Air Force fighters' tactic of waiting for a dive bomber to complete its dive and then pouncing when it was pulling up - a moment when it was slow and vulnerable - the use of dive bombers against the UK was discontinued by Nazi Germany. Balloons proved to be of little use against the German high-level bombers with which the dive-bombers were replaced, but continued to be manufactured nonetheless until there were almost 3,000 in 1944. They proved to be effective against the V-1 flying bomb, which usually flew at low altitude but had wire-cutters on its wings to counter balloons. 231 V-1s are officially claimed to have been destroyed by balloons. The British added two refinements to their balloons, "Double Parachute Link" (DPL) and "Double Parachute/Ripping" (DP/R). The former was triggered by the shock of an enemy bomber snagging the cable, causing that section of cable to be explosively released complete with parachutes at either end, the combined weight and drag bringing down the aircraft. The latter was intended to render the balloon safe if it broke free accidentally. The heavy mooring cable would separate at the balloon and fall to the ground under a parachute; at the same time a panel would be ripped away from the balloon, causing it to deflate and fall independently to the ground. The 320th Barrage Balloon Battalion, a Very Low Altitude barrage balloon battalion of the United States Army, participated in the June 1944 Normandy landings, raising barrage balloons on Omaha Beach and Utah Beach. It remained stationed at Normandy until October 1944.
In January 1945, during Royal Navy Fleet Air Arm raids on the Palembang oil refineries, the British aircrews were surprised by the massive use of barrage balloons in the Japanese defences. These were spherical and smaller than the British type. One Grumman Avenger was destroyed, and its crew killed, from striking a balloon cable. Barrage balloons were partly filled with highly flammable hydrogen. "The top of the balloon was filled with hydrogen, the bottom half was left empty, so when it was put up at a certain height it filled with natural air", according to Dorothy Brannan, barrage balloon volunteer in Portsmouth, England. Power line disruption In 1942, Canadian and American forces began joint operations to protect the sensitive locks and shipping channel at Sault Ste. Marie along their common border among the Great Lakes against possible air attack. During severe storms in August and October 1942 some barrage balloons broke loose, and the trailing cables short-circuited power lines, causing some localised disruption to mining and manufacturing. In particular, metals production was disrupted. Canadian military historical records indicate that one of the more serious incidents, known as "The October Incident", caused an estimated loss of 400 tonnes of steel and 10 tonnes of ferro-alloys. As a result, balloons were stored during the winter months and training was improved. Lessons learned from breakaway balloons led to Operation Outward, intentional release of balloons trailing conductive cables to disrupt power supplies on the occupied European mainland. Target identification On the road to Aachen in west Germany in 1944, the British 2nd Tactical Air Force floated barrage balloons along the American First Army sector front line (a.k.a. "bomb line") to designate the location of friendly troops during the air assault preceding the advance of ground forces, which took Aachen on October 21, 1944. 
Conversely, during the First Army advance past Aachen to nearby Düren, barrage balloons were floated eastward to mark the location of enemy troops to be bombed. Post-war nuclear weapon tests After the war, some surplus barrage balloons were used as tethered shot balloons for nuclear weapon tests throughout most of the period when nuclear weapons were tested in the atmosphere. The weapon or shot was carried to the required altitude slung underneath the barrage balloon, allowing test shots in controlled conditions at much higher altitudes than test towers could provide. Several of the tests in the Operation Plumbbob series were lifted to altitude using barrage balloons. See also References External links Barrage Balloon Reunion Club Popular Science, August 1943, British Barrage Balloon Secrets BBC's WW2 People's War: Barrage Balloons RAF Barrage Balloon Squadrons Barrage Balloon in the WWII Balloons (aeronautics) Civil defense
Barrage balloon
Engineering
1,255
55,774,252
https://en.wikipedia.org/wiki/Affinity%20capture
Affinity capture is a technique in molecular biology used to isolate desired compounds based on their chemical properties and a solid substrate. Commonly, plates made of solid materials such as glass are coated with various reagents to allow for covalent bonding of a capturing molecule such as an antibody. Afterwards, a solvent containing the compound targeted for isolation is poured onto the plate, and the compound binds to the capturing molecules on the plate (hence the "capture" of the compound). Washing the plate and removing the desired compound completes the purification process. Applications Affinity capture has been used to isolate proteins by means of binding a peptide sequence to the solid substrate, thus allowing for protein capture. The process has also been examined for potential automation, but the unique circumstances of any given experiment may impede reproducibility. See also Chem-seq References Molecular biology techniques
Affinity capture
Chemistry,Biology
169
72,693,689
https://en.wikipedia.org/wiki/Association%20193
Association 193 is an anti-nuclear non-governmental organisation in French Polynesia. The association is named for the 193 nuclear weapons tests conducted by France at Moruroa and Fangataufa between 1966 and 1996. It was established in 2014 to preserve the historical memory of nuclear testing and campaign for the French government to tell the truth about its impacts and compensate victims. The association initially called for 2 July - the date of the first French nuclear test in Polynesia - to be made a formal date of commemoration. In January 2016 it launched its first major campaign, a petition for a referendum on the nuclear issue and on compensation. By February 2016 the petition had more than 30,000 signatures. It also worked with Mururoa e Tatou to organise a series of demonstrations around the visit of French President François Hollande. In July 2016 it organised an exhibition and public demonstration to mark the 50th anniversary of the first nuclear test. In October 2016 it successfully opposed plans for potentially contaminated gravel from Hao atoll to be used in road construction on Rikitea. In January 2017 it created a unit to assist test victims to claim compensation from the French government. In August 2017 the association celebrated its third anniversary and announced its support for a campaign by the Maohi Protestant Church to pursue France for crimes against humanity in the International Criminal Court. In March 2020 the association denounced changes to France's nuclear compensation law which would make it more difficult for victims to obtain compensation. It also denounced an attempt to further limit compensation via a clause slipped into COVID-19 legislation. References Political organizations based in French Polynesia 2014 establishments in French Polynesia Anti-nuclear organizations Indigenous rights organizations
Association 193
Engineering
336
30,048
https://en.wikipedia.org/wiki/Tantalum
Tantalum is a chemical element; it has symbol Ta and atomic number 73. It is named after Tantalus, a figure in Greek mythology. Tantalum is a very hard, ductile, lustrous, blue-gray transition metal that is highly corrosion-resistant. It is part of the refractory metals group, which are widely used as components of strong high-melting-point alloys. It is a group 5 element, along with vanadium and niobium, and it always occurs in geologic sources together with the chemically similar niobium, mainly in the mineral groups tantalite, columbite and coltan. The chemical inertness and very high melting point of tantalum make it valuable for laboratory and industrial equipment such as reaction vessels and vacuum furnaces. It is used in tantalum capacitors for electronic equipment such as computers. It is being investigated for use as a material for high-quality superconducting resonators in quantum processors. Tantalum is considered a technology-critical element by the European Commission. History Tantalum was discovered in Sweden in 1802 by Anders Ekeberg, in two mineral samples – one from Sweden and the other from Finland. One year earlier, Charles Hatchett had discovered columbium (now niobium). In 1809, the English chemist William Hyde Wollaston compared the oxides of columbium and tantalum, columbite and tantalite. Although the two oxides had different measured densities of 5.918 g/cm3 and 7.935 g/cm3, he concluded that they were identical and kept the name tantalum. After Friedrich Wöhler confirmed these results, it was thought that columbium and tantalum were the same element. This conclusion was disputed in 1846 by the German chemist Heinrich Rose, who argued that there were two additional elements in the tantalite sample, and he named them after the children of Tantalus: niobium (from Niobe), and pelopium (from Pelops). 
The supposed element "pelopium" was later identified as a mixture of tantalum and niobium, and it was found that the niobium was identical to the columbium already discovered in 1801 by Hatchett. The differences between tantalum and niobium were demonstrated unequivocally in 1864 by Christian Wilhelm Blomstrand, and Henri Etienne Sainte-Claire Deville, as well as by Louis J. Troost, who determined the empirical formulas of some of their compounds in 1865. Further confirmation came from the Swiss chemist Jean Charles Galissard de Marignac, in 1866, who proved that there were only two elements. These discoveries did not stop scientists from publishing articles about the so-called ilmenium until 1871. De Marignac was the first to produce the metallic form of tantalum in 1864, when he reduced tantalum chloride by heating it in an atmosphere of hydrogen. Early investigators had only been able to produce impure tantalum, and the first relatively pure ductile metal was produced by Werner von Bolton in Charlottenburg in 1903. Wires made with metallic tantalum were used for light bulb filaments until tungsten replaced it in widespread use. The name tantalum was derived from the name of the mythological Tantalus, the father of Niobe in Greek mythology. In the story, he had been punished after death by being condemned to stand knee-deep in water with perfect fruit growing above his head, both of which eternally tantalized him. (If he bent to drink the water, it drained below the level he could reach, and if he reached for the fruit, the branches moved out of his grasp.) Anders Ekeberg wrote "This metal I call tantalum ... partly in allusion to its incapacity, when immersed in acid, to absorb any and be saturated." 
For decades, the commercial technology for separating tantalum from niobium involved the fractional crystallization of potassium heptafluorotantalate away from potassium oxypentafluoroniobate monohydrate, a process that was discovered by Jean Charles Galissard de Marignac in 1866. This method has been supplanted by solvent extraction from fluoride-containing solutions of tantalum. Characteristics Physical properties Tantalum is dark (blue-gray), dense, ductile, very hard, easily fabricated, and highly conductive of heat and electricity. The metal is highly resistant to corrosion by acids: at temperatures below 150 °C tantalum is almost completely immune to attack by the normally aggressive aqua regia. It can be dissolved with hydrofluoric acid or acidic solutions containing the fluoride ion and sulfur trioxide, as well as with molten potassium hydroxide. Tantalum's high melting point of 3017 °C (boiling point 5458 °C) is exceeded among the elements only by tungsten, rhenium and osmium for metals, and carbon. Tantalum exists in two crystalline phases, alpha and beta. The alpha phase is stable at all temperatures up to the melting point and has body-centered cubic structure with lattice constant a = 0.33029 nm at 20 °C. It is relatively ductile, has Knoop hardness 200–400 HN and electrical resistivity 15–60 μΩ⋅cm. The beta phase is hard and brittle; its crystal symmetry is tetragonal (space group P42/mnm, a = 1.0194 nm, c = 0.5313 nm), Knoop hardness is 1000–1300 HN and electrical resistivity is relatively high at 170–210 μΩ⋅cm. The beta phase is metastable and converts to the alpha phase upon heating to 750–775 °C. Bulk tantalum is almost entirely alpha phase, and the beta phase usually exists as thin films obtained by magnetron sputtering, chemical vapor deposition or electrochemical deposition from a eutectic molten salt solution. Isotopes Natural tantalum consists of two stable isotopes: 180mTa (0.012%) and 181Ta (99.988%). 
180mTa (m denotes a metastable state) is predicted to decay in three ways: isomeric transition to the ground state of 180Ta, beta decay to 180W, or electron capture to 180Hf. However, radioactivity of this nuclear isomer has never been observed, and only a lower limit on its half-life of 2.9 × 10^17 years has been set. The ground state of 180Ta has a half-life of only 8 hours. 180mTa is the only naturally occurring nuclear isomer (excluding radiogenic and cosmogenic short-lived nuclides). It is also the rarest primordial isotope in the Universe, taking into account the elemental abundance of tantalum and the isotopic abundance of 180mTa in the natural mixture of isotopes (and again excluding radiogenic and cosmogenic short-lived nuclides). Tantalum has been examined theoretically as a "salting" material for nuclear weapons (cobalt is the better-known hypothetical salting material). An external shell of 181Ta would be irradiated by the intensive high-energy neutron flux from a hypothetical exploding nuclear weapon. This would transmute the tantalum into the radioactive isotope 182Ta, which has a half-life of 114.4 days and produces gamma rays with approximately 1.12 million electron-volts (MeV) of energy apiece, which would significantly increase the radioactivity of the nuclear fallout from the explosion for several months. Such "salted" weapons have never been built or tested, as far as is publicly known, and certainly never used as weapons. Tantalum can be used as a target material for accelerated proton beams for the production of various short-lived isotopes including 8Li, 80Rb, and 160Yb. Chemical compounds Tantalum forms compounds in oxidation states −III to +V. Most commonly encountered are oxides of Ta(V), which includes all minerals. The chemical properties of Ta and Nb are very similar. In aqueous media, Ta exhibits only the +V oxidation state.
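The 114.4-day half-life of 182Ta quoted above implies straightforward exponential decay of any induced activity; a small illustrative calculation in Python (the time spans chosen are arbitrary examples):

```python
import math

TA182_HALF_LIFE_DAYS = 114.4   # half-life of Ta-182, as given above

def fraction_remaining(days: float) -> float:
    """Fraction of an initial Ta-182 population remaining after `days` days."""
    return 0.5 ** (days / TA182_HALF_LIFE_DAYS)

# One half-life leaves exactly half; after a year roughly a tenth remains,
# consistent with fallout staying enhanced "for several months".
assert math.isclose(fraction_remaining(114.4), 0.5)
print(f"fraction after 1 year: {fraction_remaining(365.25):.3f}")
```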
Like niobium, tantalum is barely soluble in dilute solutions of hydrochloric, sulfuric, nitric and phosphoric acids due to the precipitation of hydrous Ta(V) oxide. In basic media, Ta can be solubilized due to the formation of polyoxotantalate species. Oxides, nitrides, carbides, sulfides Tantalum pentoxide (Ta2O5) is the most important compound from the perspective of applications. Oxides of tantalum in lower oxidation states are numerous, including many defect structures, and are lightly studied or poorly characterized. Tantalates, compounds containing [TaO4]3− or [TaO3]−, are numerous. Lithium tantalate (LiTaO3) adopts a perovskite structure. Lanthanum tantalate (LaTaO4) contains isolated tetrahedra. As in the cases of other refractory metals, the hardest known compounds of tantalum are nitrides and carbides. Tantalum carbide, TaC, like the more commonly used tungsten carbide, is a hard ceramic that is used in cutting tools. Tantalum(III) nitride is used as a thin film insulator in some microelectronic fabrication processes. The best studied chalcogenide is tantalum sulfide (TaS2), a layered semiconductor, as seen for other transition metal dichalcogenides. A tantalum-tellurium alloy forms quasicrystals. Halides Tantalum halides span the oxidation states of +5, +4, and +3. Tantalum pentafluoride (TaF5) is a white solid with a melting point of 97.0 °C. The anion [TaF7]2− is used for its separation from niobium. The chloride TaCl5, which exists as a dimer, is the main reagent in the synthesis of new Ta compounds. It hydrolyzes readily to an oxychloride. The lower halides TaX4 and TaX3 feature Ta-Ta bonds. Organotantalum compounds Organotantalum compounds include pentamethyltantalum, mixed alkyltantalum chlorides, alkyltantalum hydrides, alkylidene complexes as well as cyclopentadienyl derivatives of the same. Diverse salts and substituted derivatives are known for the hexacarbonyl [Ta(CO)6]− and related isocyanides.
Occurrence Tantalum is estimated to make up about 1 ppm or 2 ppm of the Earth's crust by weight. There are many species of tantalum minerals, only some of which are so far being used by industry as raw materials: tantalite (a series consisting of tantalite-(Fe), tantalite-(Mn) and tantalite-(Mg)), microlite (now a group name), wodginite, euxenite (actually euxenite-(Y)), and polycrase (actually polycrase-(Y)). Tantalite (Fe, Mn)Ta2O6 is the most important mineral for tantalum extraction. Tantalite has the same mineral structure as columbite (Fe, Mn)(Ta, Nb)2O6; when there is more tantalum than niobium it is called tantalite, and when there is more niobium than tantalum it is called columbite (or niobite). The high density of tantalite and other tantalum-containing minerals makes gravitational separation the best method. Other minerals include samarskite and fergusonite. Australia was the main producer of tantalum prior to the 2010s, with Global Advanced Metals (formerly known as Talison Minerals) being the largest tantalum mining company in that country. It operates two mines in Western Australia, Greenbushes in the southwest and Wodgina in the Pilbara region. The Wodgina mine was reopened in January 2011 after mining at the site was suspended in late 2008 due to the global financial crisis. Less than a year after it reopened, Global Advanced Metals announced that, due again to "... softening tantalum demand ..." and other factors, tantalum mining operations were to cease at the end of February 2012. Wodgina produces a primary tantalum concentrate which is further upgraded at the Greenbushes operation before being sold to customers. Whereas the large-scale producers of niobium are in Brazil and Canada, the ore there also yields a small percentage of tantalum. Some other countries such as China, Ethiopia, and Mozambique mine ores with a higher percentage of tantalum, and they produce a significant percentage of the world's output of it.
Tantalum is also produced in Thailand and Malaysia as a by-product of the tin mining there. During gravitational separation of the ores from placer deposits, not only is cassiterite (SnO2) found, but a small percentage of tantalite is also included. The slag from the tin smelters then contains economically useful amounts of tantalum, which is leached from the slag. World tantalum mine production has undergone an important geographic shift since the start of the 21st century, when production was predominantly from Australia and Brazil. Beginning in 2007 and through 2014, the major sources of tantalum production from mines dramatically shifted to the Democratic Republic of the Congo, Rwanda, and some other African countries. Future sources of supply of tantalum, in order of estimated size, are being explored in Saudi Arabia, Egypt, Greenland, China, Mozambique, Canada, Australia, the United States, Finland, and Brazil. Status as a conflict resource Tantalum is considered a conflict resource. Coltan, the industrial name for a columbite–tantalite mineral from which niobium and tantalum are extracted, can also be found in Central Africa, which is why tantalum is being linked to warfare in the Democratic Republic of the Congo (formerly Zaire). According to an October 23, 2003 United Nations report, the smuggling and exportation of coltan has helped fuel the war in the Congo, a crisis that has resulted in approximately 5.4 million deaths since 1998, making it the world's deadliest documented conflict since World War II. Ethical questions have been raised about responsible corporate behavior, human rights, and endangering wildlife, due to the exploitation of resources such as coltan in the armed conflict regions of the Congo Basin. The United States Geological Survey reports in its yearbook that this region produced a little less than 1% of the world's tantalum output in 2002–2006, peaking at 10% in 2000 and 2008.
USGS data published in January 2021 indicated that close to 40% of the world's tantalum mine production came from the Democratic Republic of the Congo, with another 18% coming from neighboring Rwanda and Burundi. Production and fabrication Several steps are involved in the extraction of tantalum from tantalite. First, the mineral is crushed and concentrated by gravity separation. This is generally carried out near the mine site. Refining The refining of tantalum from its ores is one of the more demanding separation processes in industrial metallurgy. The chief problem is that tantalum ores contain significant amounts of niobium, which has chemical properties almost identical to those of Ta. A large number of procedures have been developed to address this challenge. In modern times, the separation is achieved by hydrometallurgy. Extraction begins with leaching the ore with hydrofluoric acid together with sulfuric acid or hydrochloric acid. This step allows the tantalum and niobium to be separated from the various non-metallic impurities in the rock. Although Ta occurs as various minerals, it is conveniently represented as the pentoxide, since most oxides of tantalum(V) behave similarly under these conditions. A simplified equation for its extraction is thus: Ta2O5 + 14 HF → 2 H2[TaF7] + 5 H2O Completely analogous reactions occur for the niobium component, but the hexafluoride is typically predominant under the conditions of the extraction. Nb2O5 + 12 HF → 2 H[NbF6] + 5 H2O These equations are simplified: it is suspected that bisulfate (HSO4−) and chloride compete as ligands for the Nb(V) and Ta(V) ions, when sulfuric and hydrochloric acids are used, respectively. The tantalum and niobium fluoride complexes are then removed from the aqueous solution by liquid-liquid extraction into organic solvents, such as cyclohexanone, octanol, and methyl isobutyl ketone. This simple procedure allows the removal of most metal-containing impurities (e.g. 
iron, manganese, titanium, zirconium), which remain in the aqueous phase in the form of their fluorides and other complexes. Separation of the tantalum from niobium is then achieved by lowering the ionic strength of the acid mixture, which causes the niobium to dissolve in the aqueous phase. It is proposed that the oxyfluoride H2[NbOF5] is formed under these conditions. Subsequent to removal of the niobium, the solution of purified H2[TaF7] is neutralised with aqueous ammonia to precipitate hydrated tantalum oxide as a solid, which can be calcined to tantalum pentoxide (Ta2O5). Instead of hydrolysis, the H2[TaF7] can be treated with potassium fluoride to produce potassium heptafluorotantalate: H2[TaF7] + 2 KF → K2[TaF7] + 2 HF Unlike H2[TaF7], the potassium salt is readily crystallized and handled as a solid. K2[TaF7] can be converted to metallic tantalum by reduction with sodium, at approximately 800 °C in molten salt. K2[TaF7] + 5 Na → Ta + 5 NaF + 2 KF In an older method, called the Marignac process, the mixture of H2[TaF7] and H2[NbOF5] was converted to a mixture of K2[TaF7] and K2[NbOF5], which was then separated by fractional crystallization, exploiting their different water solubilities. Electrolysis Tantalum can also be refined by electrolysis, using a modified version of the Hall–Héroult process. Instead of requiring the input oxide and output metal to be in liquid form, tantalum electrolysis operates on non-liquid powdered oxides. The initial discovery came in 1997, when Cambridge University researchers immersed small samples of certain oxides in baths of molten salt and reduced the oxide with electric current. The cathode uses powdered metal oxide. The anode is made of carbon. The molten salt serves as the electrolyte. The first refinery has enough capacity to supply 3–4% of annual global demand.
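The sodium-reduction equation above fixes the reagent mass balance; a quick stoichiometric sketch in Python, using standard atomic masses (rounded):

```python
# K2[TaF7] + 5 Na -> Ta + 5 NaF + 2 KF
# Standard atomic masses in g/mol (rounded)
M_K, M_TA, M_F, M_NA = 39.10, 180.95, 19.00, 22.99

M_SALT = 2 * M_K + M_TA + 7 * M_F        # molar mass of K2[TaF7]

# Per kilogram of tantalum metal produced, the equation consumes
# one mole of the double salt and five moles of sodium per mole of Ta:
salt_kg = M_SALT / M_TA                  # kg of K2[TaF7] per kg Ta
sodium_kg = 5 * M_NA / M_TA              # kg of Na per kg Ta

print(f"{salt_kg:.2f} kg K2[TaF7] and {sodium_kg:.2f} kg Na per kg Ta")
```

With these rounded masses the reduction consumes roughly 2.2 kg of the double salt and 0.64 kg of sodium for every kilogram of tantalum produced.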
Fabrication and metalworking All welding of tantalum must be done in an inert atmosphere of argon or helium in order to shield it from contamination with atmospheric gases. Tantalum is not solderable. Grinding tantalum is difficult, especially so for annealed tantalum. In the annealed condition, tantalum is extremely ductile and can be readily formed as metal sheets. Applications Electronics The major use for tantalum, as the metal powder, is in the production of electronic components, mainly capacitors and some high-power resistors. Tantalum electrolytic capacitors exploit the tendency of tantalum to form a protective oxide surface layer, using tantalum powder, pressed into a pellet shape, as one "plate" of the capacitor, the oxide as the dielectric, and an electrolytic solution or conductive solid as the other "plate". Because the dielectric layer can be very thin (thinner than the similar layer in, for instance, an aluminium electrolytic capacitor), a high capacitance can be achieved in a small volume. Because of the size and weight advantages, tantalum capacitors are attractive for portable telephones, personal computers, automotive electronics and cameras. Alloys Tantalum is also used to produce a variety of alloys that have high melting points, strength, and ductility. Alloyed with other metals, it is also used in making carbide tools for metalworking equipment and in the production of superalloys for jet engine components, chemical process equipment, nuclear reactors, missile parts, heat exchangers, tanks, and vessels. Because of its ductility, tantalum can be drawn into fine wires or filaments, which are used for evaporating metals such as aluminium. Tantalum is inert against most acids except hydrofluoric acid and hot sulfuric acid, and hot alkaline solutions also cause tantalum to corrode. This property makes it a useful metal for chemical reaction vessels and pipes for corrosive liquids. 
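The capacitance advantage of thin anodic oxide layers described under Electronics above can be illustrated with a simple parallel-plate estimate. The relative permittivity used for Ta2O5 (~27) and the geometry below are assumed, illustrative values, not data from this article:

```python
# Why thin anodic Ta2O5 layers give high volumetric capacitance:
# parallel-plate estimate C = eps0 * eps_r * A / d.
EPS0 = 8.854e-12      # F/m, vacuum permittivity
EPS_R_TA2O5 = 27      # assumed typical relative permittivity of anodic Ta2O5

def plate_capacitance(area_m2: float, thickness_m: float) -> float:
    return EPS0 * EPS_R_TA2O5 * area_m2 / thickness_m

# A sintered tantalum-powder pellet exposes a large effective surface
# in a tiny volume; 100 cm^2 here is purely illustrative.
area = 1e-2  # m^2
for d_nm in (20, 100, 500):
    c = plate_capacitance(area, d_nm * 1e-9)
    print(f"d = {d_nm:4d} nm -> C ≈ {c * 1e6:.0f} µF")
```

Thinning the dielectric from 500 nm to 20 nm raises the capacitance 25-fold for the same electrode area, which is the size and weight advantage the text describes.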
Heat exchanging coils for the steam heating of hydrochloric acid are made from tantalum. Tantalum was extensively used in the production of ultra-high-frequency electron tubes for radio transmitters. Tantalum is capable of capturing oxygen and nitrogen by forming nitrides and oxides and therefore helped to sustain the high vacuum needed for the tubes when used for internal parts such as grids and plates. Surgical uses Medical researcher Gerald L. Burke at the Los Angeles Orthopaedic Hospital first discovered in 1938 that tantalum is bio-inert in human tissue and could be used safely as an orthopaedic implant material. Burke also demonstrated perhaps the most appreciated characteristic of tantalum in surgical procedures: tantalum permanently bonds to bone with no degradation of the surrounding bone. Later, Burke's team, working with a team from the California Institute of Technology led by John Norton Wilson, showed that tantalum, while hard enough to be fabricated into surgical tools, could also be fabricated in a form sufficiently ductile, yet still sufficiently strong, to be drawn into fine threads that could be used for non-scarring sutures. Burke's team in 1940 was the first to propose the use of tantalum for arthroplasty procedures, the repair of intertrochanteric fractures, and for jaw repairs and dental implants. Burke's initial biological research results were confirmed and credited in greater detail by the Harvard Medical School in a series of neurological experiments using powdered tantalum implants. More than 50 years later, researchers were still refining and documenting their understanding of the basic surgical procedures developed by Burke after his pioneering discoveries. Nowadays, in spite of the cost, tantalum is still widely used in making surgical instruments and implants, and new procedures continue to be developed. 
For example, porous tantalum coatings are used in the construction of titanium implants due to tantalum's exceptional ability to form a direct bond to hard tissue. Because tantalum is a non-ferrous, non-magnetic metal, tantalum implants are considered acceptable for patients undergoing MRI procedures. Other uses Tantalum was used by NASA to shield components of spacecraft, such as Voyager 1 and Voyager 2, from radiation. The high melting point and oxidation resistance led to the use of the metal in the production of vacuum furnace parts. Tantalum is extremely inert and is therefore formed into a variety of corrosion-resistant parts, such as thermowells, valve bodies, and tantalum fasteners. Due to its high density and high melting point, tantalum has been used for shaped charge and explosively formed penetrator liners, greatly increasing their armor penetration capabilities. It is also occasionally used in luxury watches, e.g. from Audemars Piguet, F.P. Journe, Hublot, Montblanc, Omega, and Panerai. Tantalum oxide is used to make special high-refractive-index glass for camera lenses. Spherical tantalum powder, produced by atomizing molten tantalum using gas or liquid, is commonly used in additive manufacturing due to its uniform shape, excellent flowability, and high melting point. Environmental issues Tantalum receives far less attention in the environmental field than it does in other geosciences. Values for the upper crust concentration (UCC) and for the Nb/Ta ratio in the upper crust and in minerals are available, as these measurements are useful as a geochemical tool. The latest value for upper crust concentration is 0.92 ppm, and the Nb/Ta (w/w) ratio stands at 12.7. Little data is available on tantalum concentrations in the different environmental compartments, especially in natural waters, where reliable estimates of 'dissolved' tantalum concentrations in seawater and freshwaters have not yet been produced. 
Some values on dissolved concentrations in oceans have been published, but they are contradictory. Values in freshwaters fare little better, but, in all cases, they are probably below 1 ng L−1, since 'dissolved' concentrations in natural waters are well below most current analytical capabilities. Analysis requires pre-concentration procedures that, for the moment, do not give consistent results. In any case, tantalum appears to be present in natural waters mostly as particulate matter rather than in dissolved form. Values for concentrations in soils, bed sediments and atmospheric aerosols are easier to come by. Values in soils are close to 1 ppm, and thus to UCC values, indicating a detrital origin. For atmospheric aerosols the available values are scattered and limited. When tantalum enrichment is observed, it is probably due to the loss of more water-soluble elements from aerosols in the clouds. Pollution linked to human use of the element has not been detected. Tantalum appears to be a very conservative element in biogeochemical terms, but its cycling and reactivity are still not fully understood. Precautions Compounds containing tantalum are rarely encountered in the laboratory. Because the metal is highly biocompatible and is used for body implants and coatings, attention may instead be focused on the other elements present or on the physical nature of the chemical compound. People can be exposed to tantalum in the workplace by breathing it in, skin contact, or eye contact. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for tantalum exposure in the workplace at 5 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 5 mg/m3 over an 8-hour workday and a short-term limit of 10 mg/m3. At levels of 2500 mg/m3, tantalum dust is immediately dangerous to life and health. 
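The 8-hour limits above are time-weighted averages, so a shift containing short excursions above 5 mg/m3 can still comply overall. A minimal sketch of the standard TWA calculation (the sampled intervals below are hypothetical):

```python
# Checking a worker's 8-hour time-weighted average (TWA) tantalum-dust
# exposure against the OSHA PEL of 5 mg/m3 cited above.
PEL_MG_M3 = 5.0

def twa(samples, workday_h=8.0):
    """samples: list of (duration_h, concentration_mg_m3) tuples."""
    return sum(t * c for t, c in samples) / workday_h

# Hypothetical shift: 2 h at 3 mg/m3, 4 h at 6 mg/m3, 2 h at 2 mg/m3.
shift = [(2.0, 3.0), (4.0, 6.0), (2.0, 2.0)]
exposure = twa(shift)
print(f"8-h TWA: {exposure:.2f} mg/m3 ->",
      "within PEL" if exposure <= PEL_MG_M3 else "exceeds PEL")
```

Here the 4-hour excursion to 6 mg/m3 is offset by the cleaner periods, giving a TWA of 4.25 mg/m3, below the PEL.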
https://en.wikipedia.org/wiki/Wide-field%20multiphoton%20microscopy
Wide-field multiphoton microscopy refers to an optical non-linear imaging technique tailored for ultrafast imaging in which a large area of the object is illuminated and imaged without the need for scanning. High intensities are required to induce non-linear optical processes such as two-photon fluorescence or second harmonic generation. In scanning multiphoton microscopes the high intensities are achieved by tightly focusing the light, and the image is obtained by beam scanning. In wide-field multiphoton microscopy the high intensities are best achieved using an optically amplified pulsed laser source to attain a large field of view (~100 μm). The image in this case is obtained as a single frame with a CCD without the need of scanning, making the technique particularly useful to visualize dynamic processes simultaneously across the object of interest. With wide-field multiphoton microscopy the frame rate can be increased up to a 1000-fold compared to multiphoton scanning microscopy. Wide-field multiphoton microscopes are not yet commercially available, but working prototypes exist in several optics laboratories. Introduction The main characteristic of the technique is the illumination of a wide area on the sample with a pulsed laser beam. In nonlinear optics the number of nonlinear photons (N) generated per second by a pulsed beam is proportional to E²f/(τA), where E is the energy of the beam in Joules, τ is the duration of the pulse in seconds, A is the illuminating area in square meters, and f is the repetition rate of the pulsed beam in Hertz. Increasing the illumination area thus reduces the number of generated nonlinear photons unless the energy is increased. Optical damage depends on the energy density, i.e. the peak intensity per area Ip = E/(τA). 
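The scaling argument can be checked numerically. The sketch below (with illustrative, assumed beam parameters) verifies that multiplying both the pulse energy and the illuminated area by 1000 leaves the peak intensity, and hence the damage risk, unchanged, while the nonlinear signal grows a thousandfold:

```python
# Scaling of two-photon signal with pulse parameters: N ∝ E^2 * f / (τ * A).
# Proportionality only — absolute photon counts would need cross-sections.
def signal(E, tau, A, f):
    return E**2 * f / (tau * A)

def peak_intensity(E, tau, A):
    return E / (tau * A)

# Illustrative oscillator-like parameters: 1 nJ, 100 fs, focal area, 80 MHz.
E, tau, A, f = 1e-9, 1e-13, 1e-10, 8e7  # J, s, m^2, Hz

# Amplified wide-field case: scale both pulse energy and area by 1000.
k = 1000
ratio_signal = signal(k * E, tau, k * A, f) / signal(E, tau, A, f)
ratio_peak = peak_intensity(k * E, tau, k * A) / peak_intensity(E, tau, A)
print(ratio_signal)  # ≈ 1000: thousandfold gain in nonlinear photons
print(ratio_peak)    # ≈ 1: unchanged peak intensity, no extra damage risk
```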
Therefore, both the area and energy can be easily increased without the risk of optical damage if the peak intensity per area is kept low, and yet a gain in the amount of generated nonlinear photons can be obtained because of the quadratic dependence. For example, increasing both the area and energy 1000-fold leaves the peak intensity unchanged but increases the generated nonlinear photons 1000-fold. These 1000 extra photons are indeed generated over a larger area. In imaging this means that the extra photons are spread over the image, which at first might not seem an advantage over multiphoton scanning microscopy. The advantage however becomes evident when the size of the image and the scanning time are considered. The amount of nonlinear photons per image frame per second generated by a wide-field multiphoton microscope compared to a scanning multiphoton microscope is given by the ratio n, when assuming that the same peak intensity is used in both systems. Here n is the number of scanning points, such that the wide-field illuminated area equals n times the focal area of the scanning system. Limitations The technique is not suitable for imaging deep in scattering tissue (e.g. brain), as the image quality rapidly degrades with increasing depth. The limit to which the energy can be increased depends on the laser system. Optical amplifiers, such as regenerative amplifiers, can typically yield millijoule-level energies at lower repetition rates compared to oscillator-based systems (e.g. Ti:sapphire lasers). Possible damage to the optics if the beam is inadvertently focused to a small area somewhere in the optical system. Different methods exist to achieve the required illumination without risk of damaging the optics (see Methods). Depth cross-sectioning may be missing. Advantages Ultrafast imaging. A single laser shot is needed to produce one image. The frame rate is thus limited to the repetition rate of the laser system or the frame rate of the CCD camera. Lower damage in cells. 
In aqueous systems (such as cells), medium to low repetition rates (1–200 kHz) allow for thermal diffusion to occur between illuminating pulses, so that the damage threshold is higher than with high repetition rates (80 MHz). The whole object can be observed simultaneously because of the wide-field illumination. Larger penetration depth in biological imaging compared to one-photon fluorescence due to the longer wavelengths required. Higher resolution than wide-field one-photon fluorescence microscopy. The optical resolution can be comparable to or better than that of multiphoton scanning microscopes. Methods A key technical difficulty is achieving a large illumination area without destroying the imaging optics. One approach is the so-called spatiotemporal focusing, in which the pulsed beam is spatially dispersed by a diffraction grating, forming a 'rainbow' beam that is subsequently focused by an objective lens. The effect of focusing the 'rainbow' beam while imaging the diffraction grating forces the different wavelengths to overlap at the focal plane of the objective lens. The different wavelengths then interfere only in the overlapping volume, if no further spatial or temporal dispersion is introduced, so that the intense pulsed illumination is retrieved and capable of yielding cross-sectioned images. The axial resolution is typically 2–3 μm, even with structured illumination techniques. The spatial dispersion generated by the diffraction grating ensures that the energy in the laser is spread over a wider area in the objective lens, hence reducing the possibility of damaging the lens itself. In contrast to what was initially thought, temporal focusing is remarkably robust to scattering. Its ability to penetrate through turbid media with minimal speckle was used in optogenetics, enabling photo-excitation of arbitrary light patterns through tissue. 
Temporal focusing was later combined with single-pixel detection to overcome the effect of scattering on fluorescent photons. This technique, called TRAFIX, enabled wide-field imaging through biological tissue at great depths with a higher signal-to-background ratio and lower photobleaching than standard point-scanning two-photon microscopy. Another, simpler method consists of two beams that are loosely focused and overlapped onto an area (~100 μm) on the sample. With this method it is possible to access all the elements of the tensor, thanks to the ability to change the polarisation of each beam independently.
https://en.wikipedia.org/wiki/RPRFamide
RPRFamide is a neurotoxin belonging to the conorfamide family of neuropeptides, which can be found in the venom of cone snails. Etymology and source RPRFamide is a toxin from the carnivorous marine cone snail Conus textile, a predatory species that mainly lives in tropical waters. The venom of marine cone snails contains a diverse variety of toxins, which include conotoxins. RPRFamide belongs to the family of conotoxins, more specifically to the conorfamide family or RFamide family which are peptides that target neuronal ion channels in their prey. Chemistry The sequence for this toxin is identified as RPRF (R = arginine, P = proline, and F = phenylalanine). An amide group (-NH2) is located at the terminal end (C-terminus). The presence of this group is paramount for its biological activity as it enhances its interaction with ion channels. The short length, the C-terminal Arg–Phe–NH2 (RFa) motif, and the lack of cysteines clearly distinguishes these peptides from conotoxins and categorises them as cono-RFamides. Target Two main molecular targets have been discovered for RPRFamide. Firstly, it targets the acid sensing ion channel 3 (ASIC3), involved in the pain pathway. Secondly, it can target and inhibit nicotinic acetylcholine receptors (nAChRs), specifically the alpha-7 subtype. Mode of action The RPRFamide peptide modulates ASIC3, a proton-gated ion channel that is sensitive to acidic conditions and involved in pain perception. This channel is a proton-gated sodium channel involved in nociception in response to acidic environments in a tissue, such as muscle fatigue. The toxin enhances ASIC3 currents, leading to increased pain signalling, particularly in response to acidic stimuli. This explains why RPRFamide can induce pain, particularly muscle pain, through the activation of ASIC3 channels. 
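As a quick illustration of the chemistry described above, the average molecular mass of the amidated tetrapeptide can be estimated from standard amino-acid residue masses (a back-of-the-envelope sketch, not a value reported in this article):

```python
# Average molecular mass of the amidated peptide RPRF-NH2 from standard
# average residue masses (Da).
RESIDUE_MASS = {"R": 156.19, "P": 97.12, "F": 147.18}
WATER = 18.02        # terminal H and OH of a linear peptide
AMIDATION = -0.98    # C-terminal -OH replaced by -NH2

def peptide_mass(seq: str, c_term_amide: bool = False) -> float:
    mass = sum(RESIDUE_MASS[aa] for aa in seq) + WATER
    return mass + AMIDATION if c_term_amide else mass

print(f"RPRF free acid: {peptide_mass('RPRF'):.1f} Da")
print(f"RPRFamide:      {peptide_mass('RPRF', c_term_amide=True):.1f} Da")
```

At under 600 Da and with no cysteines, the peptide is far smaller than the disulfide-rich conotoxins, consistent with its classification as a cono-RFamide.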
The peptide delays the desensitization of ASIC3 channels, keeping them open longer and allowing sustained ion flow, which increases sensitivity to pain stimuli and prolongs the nociceptive effect. Studies show that injecting cono-RFamide into mouse muscle leads to increased acid-induced pain. Additionally, studies showed that RPRFamide causes an increase in the excitability of dorsal root ganglion (DRG) neurons. RPRFamide also modulates nACh receptors by inhibiting them, specifically the alpha-7 and muscle-type nAChRs. These receptors are ligand-gated ion channels that mediate fast synaptic transmission in the nervous system and are involved in neuromuscular function. The toxin's inhibitory effect prevents the influx of ions that would normally result from acetylcholine binding, disrupting neurotransmission and, depending on the receptor subtype, impairing muscle contraction. Toxicity The toxicity of RPRFamide has yet to be assessed in humans. However, available literature suggests that ASIC3 channels are expressed in muscle pain receptors, leading to extreme, long-lasting pain when the toxin is injected into muscle tissue in mice, particularly when administered with an acidic solution. Treatment Currently, no available literature describes a method to counteract the neurotoxic activity of RPRFamide. Therapeutic use The therapeutic potential of RPRFamide has yet to be fully assessed. Some authors have discussed the neurotoxin's modulation of ASIC3 and nAChR receptors, suggesting that further research could explore its role in pain modulation, including potential treatments for chronic pain.
https://en.wikipedia.org/wiki/Cholera
Cholera is an infection of the small intestine by some strains of the bacterium Vibrio cholerae. Symptoms may range from none, to mild, to severe. The classic symptom is large amounts of watery diarrhea lasting a few days. Vomiting and muscle cramps may also occur. Diarrhea can be so severe that it leads within hours to severe dehydration and electrolyte imbalance. This may result in sunken eyes, cold skin, decreased skin elasticity, and wrinkling of the hands and feet. Dehydration can cause the skin to turn bluish. Symptoms start two hours to five days after exposure. Cholera is caused by a number of types of Vibrio cholerae, with some types producing more severe disease than others. It is spread mostly by unsafe water and unsafe food that has been contaminated with human feces containing the bacteria. Undercooked shellfish is a common source. Humans are the only known host for the bacteria. Risk factors for the disease include poor sanitation, insufficient clean drinking water, and poverty. Cholera can be diagnosed by a stool test, or a rapid dipstick test, although the dipstick test is less accurate. Prevention methods against cholera include improved sanitation and access to clean water. Cholera vaccines that are given by mouth provide reasonable protection for about six months, and confer the added benefit of protecting against another type of diarrhea caused by E. coli. In 2017, the US Food and Drug Administration (FDA) approved a single-dose, live, oral cholera vaccine called Vaxchora for adults aged 18–64 who are travelling to an area of active cholera transmission. It offers limited protection to young children. People who survive an episode of cholera have long-lasting immunity for at least three years (the period tested). The primary treatment for affected individuals is oral rehydration salts (ORS), the replacement of fluids and electrolytes by using slightly sweet and salty solutions. Rice-based solutions are preferred. 
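The design of ORS can be illustrated with the widely published WHO/UNICEF low-osmolarity formula; the ingredient list below is reproduced from general reference sources rather than from this article, and the script simply checks that the fully dissociated solution lands near the nominal 245 mOsm/L:

```python
# Osmolarity check for the WHO/UNICEF low-osmolarity ORS formula.
# Each entry: grams per litre, molar mass (g/mol), and osmotically active
# particles per formula unit assuming full dissociation.
ORS = {
    "glucose, anhydrous":           (13.5, 180.16, 1),
    "sodium chloride":              (2.6,  58.44,  2),
    "potassium chloride":           (1.5,  74.55,  2),
    "trisodium citrate, dihydrate": (2.9,  294.10, 4),
}

total = 0.0
for name, (g_per_l, mm, particles) in ORS.items():
    mosm = 1000 * g_per_l / mm * particles
    total += mosm
    print(f"{name:30s} {mosm:6.1f} mOsm/L")
print(f"{'total':30s} {total:6.1f} mOsm/L")  # close to the nominal 245
```

The glucose is not there for calories: it drives sodium (and hence water) uptake through the intestinal sodium–glucose cotransporter, which is why a slightly sweet and salty solution rehydrates where plain water does not.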
In children, zinc supplementation has also been found to improve outcomes. In severe cases, intravenous fluids, such as Ringer's lactate, may be required, and antibiotics may be beneficial. The choice of antibiotic is aided by antibiotic sensitivity testing. Cholera continues to affect an estimated 3–5 million people worldwide and causes 28,800–130,000 deaths a year. To date, seven cholera pandemics have occurred, with the most recent beginning in 1961, and continuing today. The illness is rare in high-income countries, and affects children most severely. Cholera occurs as both outbreaks and chronically in certain areas. Areas with an ongoing risk of disease include Africa and Southeast Asia. The risk of death among those affected is usually less than 5% given improved treatment, but may be as high as 50% without access to treatment. Descriptions of cholera are found as early as the 5th century BCE in Sanskrit literature. In Europe, cholera was a term initially used to describe any kind of gastroenteritis, and was not used for this disease until the early 19th century. The study of cholera in England by John Snow between 1849 and 1854 led to significant advances in the field of epidemiology because of his insights about transmission via contaminated water, and his map of the outbreak was among the first recorded instances of epidemiological tracking. Signs and symptoms The primary symptoms of cholera are profuse diarrhea and vomiting of clear fluid. These symptoms usually start suddenly, half a day to five days after ingestion of the bacteria. The diarrhea is frequently described as "rice water" in nature and may have a fishy odor. An untreated person with cholera may produce large volumes of watery diarrhea each day. Severe cholera, without treatment, kills about half of affected individuals. If the severe diarrhea is not treated, it can result in life-threatening dehydration and electrolyte imbalances. Estimates of the ratio of asymptomatic to symptomatic infections have ranged from 3 to 100. 
Cholera has been nicknamed the "blue death" because a person's skin may turn bluish-gray from extreme loss of fluids. Fever is rare and should raise suspicion for secondary infection. Patients can be lethargic and might have sunken eyes, dry mouth, cold clammy skin, or wrinkled hands and feet. Kussmaul breathing, a deep and labored breathing pattern, can occur because of acidosis from stool bicarbonate losses and lactic acidosis associated with poor perfusion. Blood pressure drops due to dehydration, peripheral pulse is rapid and thready, and urine output decreases with time. Muscle cramping and weakness, altered consciousness, seizures, or even coma due to electrolyte imbalances are common, especially in children. Cause Transmission Cholera bacteria have been found in shellfish and plankton. Transmission is usually through the fecal-oral route of contaminated food or water caused by poor sanitation. Most cholera cases in developed countries are a result of transmission by food, while in developing countries it is more often water. Food transmission can occur when people harvest seafood such as oysters in waters infected with sewage, as Vibrio cholerae accumulates in planktonic crustaceans and the oysters eat the zooplankton. People infected with cholera often have diarrhea, and disease transmission may occur if this highly liquid stool, colloquially referred to as "rice-water", contaminates water used by others. A single diarrheal event can cause a one-million fold increase in numbers of V. cholerae in the environment. The source of the contamination is typically other people with cholera when their untreated diarrheal discharge is allowed to get into waterways, groundwater or drinking water supplies. Drinking any contaminated water and eating any foods washed in the water, as well as shellfish living in the affected waterway, can cause a person to contract an infection. Cholera is rarely spread directly from person to person. V. 
cholerae also exists outside the human body in natural water sources, either by itself or through interacting with phytoplankton, zooplankton, or biotic and abiotic detritus. Drinking such water can also result in the disease, even without prior contamination through fecal matter. However, selective pressures exist in the aquatic environment that may reduce the virulence of V. cholerae. Specifically, animal models indicate that the transcriptional profile of the pathogen changes as it prepares to enter an aquatic environment. This transcriptional change results in a loss of ability of V. cholerae to be cultured on standard media, a phenotype referred to as 'viable but non-culturable' (VBNC) or, more conservatively, 'active but non-culturable' (ABNC). One study indicates that the culturability of V. cholerae drops 90% within 24 hours of entering the water, and furthermore that this loss in culturability is associated with a loss in virulence. Both toxic and non-toxic strains exist. Non-toxic strains can acquire toxicity through a temperate bacteriophage. Susceptibility About 100 million bacteria must typically be ingested to cause cholera in a normal healthy adult. This dose, however, is less in those with lowered gastric acidity (for instance those using proton pump inhibitors). Children are also more susceptible, with two- to four-year-olds having the highest rates of infection. Individuals' susceptibility to cholera is also affected by their blood type, with those with type O blood being the most susceptible. Persons with lowered immunity, such as persons with AIDS or malnourished children, are more likely to develop a severe case if they become infected. Any individual, even a healthy adult in middle age, can undergo a severe case, and each person's case should be measured by the loss of fluids, preferably in consultation with a professional health care provider. 
The cystic fibrosis genetic mutation known as delta-F508 in humans has been said to maintain a selective heterozygous advantage: heterozygous carriers of the mutation (who are not affected by cystic fibrosis) are more resistant to V. cholerae infections. In this model, the genetic deficiency in the cystic fibrosis transmembrane conductance regulator channel proteins interferes with bacteria binding to the intestinal epithelium, thus reducing the effects of an infection. Mechanism When consumed, most bacteria do not survive the acidic conditions of the human stomach. The few surviving bacteria conserve their energy and stored nutrients during the passage through the stomach by shutting down protein production. When the surviving bacteria exit the stomach and reach the small intestine, they must propel themselves through the thick mucus that lines the small intestine to reach the intestinal walls where they can attach and thrive. Once the cholera bacteria reach the intestinal wall, they no longer need the flagella to move. The bacteria stop producing the protein flagellin to conserve energy and nutrients by changing the mix of proteins that they express in response to the changed chemical surroundings. On reaching the intestinal wall, V. cholerae start producing the toxic proteins that give the infected person a watery diarrhea. This carries the multiplying new generations of V. cholerae bacteria out into the drinking water of the next host if proper sanitation measures are not in place. The cholera toxin (CTX or CT) is an oligomeric complex made up of six protein subunits: a single copy of the A subunit (part A), and five copies of the B subunit (part B), connected by a disulfide bond. The five B subunits form a five-membered ring that binds to GM1 gangliosides on the surface of the intestinal epithelium cells. The A1 portion of the A subunit is an enzyme that ADP-ribosylates G proteins, while the A2 chain fits into the central pore of the B subunit ring. 
Upon binding, the complex is taken into the cell via receptor-mediated endocytosis. Once inside the cell, the disulfide bond is reduced, and the A1 subunit is freed to bind with a human partner protein called ADP-ribosylation factor 6 (Arf6). Binding exposes its active site, allowing it to permanently ribosylate the Gs alpha subunit of the heterotrimeric G protein. This results in constitutive cAMP production, which in turn leads to the secretion of water, sodium, potassium, and bicarbonate into the lumen of the small intestine and rapid dehydration. The gene encoding the cholera toxin was introduced into V. cholerae by horizontal gene transfer. Virulent strains of V. cholerae carry a variant of a temperate bacteriophage called CTXφ. Microbiologists have studied the genetic mechanisms by which the V. cholerae bacteria turn off the production of some proteins and turn on the production of other proteins as they respond to the series of chemical environments they encounter, passing through the stomach, through the mucous layer of the small intestine, and on to the intestinal wall. Of particular interest have been the genetic mechanisms by which cholera bacteria turn on the protein production of the toxins that interact with host cell mechanisms to pump chloride ions into the small intestine, creating an ionic pressure which prevents sodium ions from entering the cell. The chloride and sodium ions create a salt-water environment in the small intestines, which through osmosis can pull up to six liters of water per day through the intestinal cells, creating the massive amounts of diarrhea. The host can become rapidly dehydrated unless treated properly. By inserting separate, successive sections of V. cholerae DNA into the DNA of other bacteria, such as E. coli that would not naturally produce the protein toxins, researchers have investigated the mechanisms by which V. 
cholerae responds to the changing chemical environments of the stomach, mucous layers, and intestinal wall. Researchers have discovered a complex cascade of regulatory proteins controls expression of V. cholerae virulence determinants. In responding to the chemical environment at the intestinal wall, the V. cholerae bacteria produce the TcpP/TcpH proteins, which, together with the ToxR/ToxS proteins, activate the expression of the ToxT regulatory protein. ToxT then directly activates expression of virulence genes that produce the toxins, causing diarrhea in the infected person and allowing the bacteria to colonize the intestine. Current research aims at discovering "the signal that makes the cholera bacteria stop swimming and start to colonize (that is, adhere to the cells of) the small intestine." Genetic structure Amplified fragment length polymorphism fingerprinting of the pandemic isolates of V. cholerae has revealed variation in the genetic structure. Two clusters have been identified: Cluster I and Cluster II. For the most part, Cluster I consists of strains from the 1960s and 1970s, while Cluster II largely contains strains from the 1980s and 1990s, based on the change in the clone structure. This grouping of strains is best seen in the strains from the African continent. Antibiotic resistance In many areas of the world, antibiotic resistance is increasing within cholera bacteria. In Bangladesh, for example, most cases are resistant to tetracycline, trimethoprim-sulfamethoxazole, and erythromycin. Rapid diagnostic assay methods are available for the identification of multi-drug resistant cases. New generation antimicrobials have been discovered which are effective against cholera bacteria in in vitro studies. Diagnosis A rapid dipstick test is available to determine the presence of V. cholerae. In those samples that test positive, further testing should be done to determine antibiotic resistance. 
In epidemic situations, a clinical diagnosis may be made by taking a patient history and doing a brief examination. Treatment via hydration and over-the-counter hydration solutions can be started without or before confirmation by laboratory analysis, especially where cholera is a common problem. Stool and swab samples collected in the acute stage of the disease, before antibiotics have been administered, are the most useful specimens for laboratory diagnosis. If an epidemic of cholera is suspected, the most common causative agent is V. cholerae O1. If V. cholerae serogroup O1 is not isolated, the laboratory should test for V. cholerae O139. However, if neither of these organisms is isolated, it is necessary to send stool specimens to a reference laboratory. Infection with V. cholerae O139 should be reported and handled in the same manner as that caused by V. cholerae O1. The associated diarrheal illness should be referred to as cholera and must be reported in the United States. Prevention The World Health Organization (WHO) recommends focusing on prevention, preparedness, and response to combat the spread of cholera. They also stress the importance of an effective surveillance system. Governments can play a role in all of these areas. Water, sanitation and hygiene Although cholera may be life-threatening, prevention of the disease is normally straightforward if proper sanitation practices are followed. In developed countries, due to their nearly universal advanced water treatment and sanitation practices, cholera is rare. For example, the last major outbreak of cholera in the United States occurred in 1910–1911. Cholera is mainly a risk in developing countries in those areas where access to WASH (water, sanitation and hygiene) infrastructure is still inadequate. Effective sanitation practices, if instituted and adhered to in time, are usually sufficient to stop an epidemic. 
There are several points along the cholera transmission path at which its spread may be halted: Sterilization: Proper disposal and treatment of all materials that may have come into contact with the feces of other people with cholera (e.g., clothing, bedding, etc.) are essential. These should be sanitized by washing in hot water, using chlorine bleach if possible. Hands that touch cholera patients or their clothing, bedding, etc., should be thoroughly cleaned and disinfected with chlorinated water or other effective antimicrobial agents. Sewage and fecal sludge management: In cholera-affected areas, sewage and fecal sludge need to be treated and managed carefully in order to stop the spread of this disease via human excreta. Provision of sanitation and hygiene is an important preventative measure. Open defecation, release of untreated sewage, or dumping of fecal sludge from pit latrines or septic tanks into the environment need to be prevented. In many cholera-affected zones, there is a low degree of sewage treatment. Therefore, the implementation of dry toilets, which do not contribute to water pollution because they do not flush with water, may be an attractive alternative to flush toilets. Sources: Warnings about possible cholera contamination should be posted around contaminated water sources, with directions on how to decontaminate the water (boiling, chlorination etc.) for possible use. Water purification: All water used for drinking, washing, or cooking should be sterilized by either boiling, chlorination, ozone water treatment, ultraviolet light sterilization (e.g., by solar water disinfection), or antimicrobial filtration in any area where cholera may be present. Chlorination and boiling are often the least expensive and most effective means of halting transmission. Cloth filters or sari filtration, though very basic, have significantly reduced the occurrence of cholera when used in poor villages in Bangladesh that rely on untreated surface water.
Better antimicrobial filters, like those present in advanced individual water treatment hiking kits, are most effective. Public health education and adherence to appropriate sanitation practices are of primary importance to help prevent and control transmission of cholera and other diseases. Handwashing with soap or ash after using a toilet and before handling food or eating is also recommended for cholera prevention by WHO Africa. Surveillance Surveillance and prompt reporting allow cholera epidemics to be contained rapidly. Cholera exists as a seasonal disease in many endemic countries, occurring annually mostly during rainy seasons. Surveillance systems can provide early alerts to outbreaks, leading to coordinated responses and assisting in the preparation of preparedness plans. Efficient surveillance systems can also improve the risk assessment for potential cholera outbreaks. Understanding the seasonality and location of outbreaks provides guidance for improving cholera control activities for the most vulnerable. For prevention to be effective, it is important that cases be reported to national health authorities. Vaccination Spanish physician Jaume Ferran i Clua developed the first successful cholera inoculation in 1885, the first to immunize humans against a bacterial disease. His vaccine and inoculation were rather controversial and were rejected by his peers and several investigation commissions, but they ultimately proved effective and earned recognition: of the 30,000 people he vaccinated, only 54 died. Russian-Jewish bacteriologist Waldemar Haffkine also developed a human cholera vaccine in July 1892. He conducted a massive inoculation program in British India. Persons who survive an episode of cholera have long-lasting immunity for at least 3 years (the period tested). A number of safe and effective oral vaccines for cholera are available.
The World Health Organization (WHO) has three prequalified oral cholera vaccines (OCVs): Dukoral, Shanchol, and Euvichol. Dukoral, an orally administered, inactivated whole-cell vaccine, has an overall efficacy of about 52% during the first year after being given and 62% in the second year, with minimal side effects. It is available in over 60 countries. However, it is not currently recommended by the Centers for Disease Control and Prevention (CDC) for most people traveling from the United States to endemic countries. The vaccine that the US Food and Drug Administration (FDA) recommends, Vaxchora, is a live attenuated oral vaccine that is effective for adults aged 18–64 as a single dose. One injectable vaccine was found to be effective for two to three years. The protective efficacy was 28% lower in children less than five years old. However, it has limited availability. Work is under way to investigate the role of mass vaccination. The WHO recommends immunization of high-risk groups, such as children and people with HIV, in countries where this disease is endemic. If people are immunized broadly, herd immunity results, with a decrease in the amount of contamination in the environment. WHO recommends that oral cholera vaccination be considered in areas where the disease is endemic (with seasonal peaks), as part of the response to outbreaks, or in a humanitarian crisis during which the risk of cholera is high. OCV has been recognized as an adjunct tool for prevention and control of cholera. The WHO has prequalified three cholera vaccines—Dukoral (SBL Vaccines), containing a non-toxic B-subunit of cholera toxin and providing protection against V. cholerae O1; and two vaccines developed using the same transfer of technology—Shanchol (Shantha Biotec) and Euvichol (EuBiologics Co.)—which are killed oral vaccines providing protection against both O1 and O139.
Oral cholera vaccination could be deployed in a diverse range of situations, from cholera-endemic areas to locations of humanitarian crises, but no clear consensus exists. Sari filtration Developed for use in Bangladesh, the "sari filter" is a simple and cost-effective appropriate technology method for reducing the contamination of drinking water. The method involves folding a sari four to eight times and filtering collected water through it. Used sari cloth is preferable, but other types of used cloth can be used with some effect, though the effectiveness will vary significantly. Used cloth is more effective than new cloth, as the repeated washing reduces the space between the fibers. Water collected in this way has a greatly reduced pathogen count—though it will not necessarily be perfectly safe, it is an improvement for poor people with limited options. In Bangladesh this practice was found to decrease rates of cholera by nearly half. Between uses the cloth should be rinsed in clean water and dried in the sun to kill any bacteria on it. A nylon cloth appears to work as well but is not as affordable. Treatment Continued eating speeds the recovery of normal intestinal function. The WHO recommends this generally for cases of diarrhea no matter what the underlying cause. A CDC training manual specifically for cholera states: "Continue to breastfeed your baby if the baby has watery diarrhea, even when traveling to get treatment. Adults and older children should continue to eat frequently." Fluids The most common error in caring for patients with cholera is to underestimate the speed and volume of fluids required. In most cases, cholera can be successfully treated with oral rehydration therapy (ORT), which is highly effective, safe, and simple to administer. Rice-based solutions are preferred to glucose-based ones due to greater efficiency. In severe cases with significant dehydration, intravenous rehydration may be necessary. Ringer's lactate is the preferred solution, often with added potassium.
Large volumes and continued replacement until diarrhea has subsided may be needed. Ten percent of a person's body weight in fluid may need to be given in the first two to four hours. This method was first tried on a mass scale during the Bangladesh Liberation War, and was found to have much success. Contrary to widespread belief, fruit juices and commercial fizzy drinks like cola are not ideal for rehydration of people with serious infections of the intestines, and their excessive sugar content may even harm water uptake. If commercially produced oral rehydration solutions are too expensive or difficult to obtain, solutions can be made at home. One such recipe calls for 1 liter of boiled water, 1/2 teaspoon of salt, 6 teaspoons of sugar, and added mashed banana for potassium and to improve taste. Electrolytes Because acidosis is frequently present at first, the potassium level may be normal even though large losses have occurred. As the dehydration is corrected, potassium levels may decrease rapidly, and thus need to be replaced. This is best done with oral rehydration solution (ORS). Antibiotics Antibiotic treatments for one to three days shorten the course of the disease and reduce the severity of the symptoms. Use of antibiotics also reduces fluid requirements. People will recover without them, however, if sufficient hydration is maintained. The WHO only recommends antibiotics in those with severe dehydration. Doxycycline is typically used first line, although some strains of V. cholerae have shown resistance. Testing for resistance during an outbreak can help determine appropriate future choices. Other antibiotics proven to be effective include cotrimoxazole, erythromycin, tetracycline, chloramphenicol, and furazolidone. Fluoroquinolones, such as ciprofloxacin, also may be used, but resistance has been reported. Antibiotics improve outcomes in those who are both severely and not severely dehydrated. Azithromycin and tetracycline may work better than doxycycline or ciprofloxacin.
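The fluid arithmetic mentioned above (roughly ten percent of body weight over the first two to four hours) can be sketched as a toy calculation. This is purely illustrative, not clinical guidance, and the function name is our own:

```python
# Illustrative arithmetic only -- not clinical guidance.
# Sketch of the rule of thumb above: a severely dehydrated patient may need
# fluid equal to ~10% of body weight in the first two to four hours
# (1 kg of body water corresponds to approximately 1 liter).

def initial_rehydration_volume_liters(body_weight_kg: float) -> float:
    """Fluid volume equal to roughly 10% of body weight, in liters."""
    return body_weight_kg / 10.0

# A hypothetical 60 kg adult with severe dehydration:
print(initial_rehydration_volume_liters(60))  # -> 6.0 liters over 2-4 hours
```

The point of the sketch is simply that the required volumes are large: several liters within hours, which is why underestimating fluid needs is described above as the most common error.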
Zinc supplementation In Bangladesh, zinc supplementation reduced the duration and severity of diarrhea in children with cholera when given with antibiotics and rehydration therapy as needed. It reduced the length of disease by eight hours and the amount of diarrhea stool by 10%. Supplementation also appears to be effective in both treating and preventing infectious diarrhea due to other causes among children in the developing world. Prognosis If people with cholera are treated quickly and properly, the mortality rate is less than 1%; however, with untreated cholera, the mortality rate rises to 50–60%. For certain genetic strains of cholera, such as the one present during the 2010 epidemic in Haiti and the 2004 outbreak in India, death can occur within two hours of becoming ill. Epidemiology Cholera affects an estimated 2.8 million people worldwide and causes approximately 95,000 deaths a year (uncertainty range: 21,000–143,000). This occurs mainly in the developing world. In the early 1980s, the annual death toll is believed to have still been higher than three million. It is difficult to calculate exact numbers of cases, as many go unreported due to concerns that an outbreak may have a negative impact on the tourism of a country. As of 2004, cholera remained both epidemic and endemic in many areas of the world. Recent major outbreaks include the 2010s Haiti cholera outbreak and the 2016–2022 Yemen cholera outbreak. In October 2016, an outbreak of cholera began in war-ravaged Yemen. WHO called it "the worst cholera outbreak in the world". In 2019, 93% of the reported 923,037 cholera cases were from Yemen (with 1,911 deaths reported).
Between September 2019 and September 2020, a global total of over 450,000 cases and over 900 deaths was reported; however, the accuracy of these numbers suffers from over-reporting by countries that report suspected cases (rather than laboratory-confirmed cases), as well as under-reporting by countries that do not report official cases (such as Bangladesh, India and the Philippines). Although much is known about the mechanisms behind the spread of cholera, researchers still do not have a full understanding of what makes cholera outbreaks happen in some places and not others. Lack of treatment of human feces and lack of treatment of drinking water greatly facilitate its spread. Bodies of water have been found to serve as a reservoir of infection, and seafood shipped long distances can spread the disease. Cholera had disappeared from the Americas for most of the 20th century, but it reappeared toward the end of that century, beginning with a severe outbreak in Peru. This was followed by the 2010s Haiti cholera outbreak and another outbreak of cholera in Haiti amid the 2018–2023 Haitian crisis. The disease is endemic in Africa and some areas of eastern and western Asia (Bangladesh, India and Yemen). Cholera is not endemic in Europe; all reported cases had a travel history to endemic areas. History of outbreaks The word cholera comes from Greek kholera, from χολή (kholē), "bile". Cholera likely has its origins in the Indian subcontinent, as evidenced by its prevalence in the region for centuries. References to cholera appear in the European literature as early as 1642, from the Dutch physician Jakob de Bondt's description in his De Medicina Indorum. (The "Indorum" of the title refers to the East Indies. He also gave the first European descriptions of other diseases.) But at the time, the word "cholera" was used by European physicians to refer to any gastrointestinal upset resulting in yellow diarrhea.
De Bondt thus used a common word already in regular use to describe the new disease. This was a frequent practice of the time. It was not until the 1830s that the name for severe yellow diarrhea changed in English from "cholera" to "cholera morbus" to differentiate it from what was then known as "Asiatic cholera", or that associated with origins in India and the East. Early outbreaks in the Indian subcontinent are believed to have been the result of crowded, poor living conditions, as well as the presence of pools of still water, both of which provide ideal conditions for cholera to thrive. The disease first spread by travelers along trade routes (land and sea) to Russia in 1817, later to the rest of Europe, and from Europe to North America and the rest of the world (hence the name "Asiatic cholera"). Seven cholera pandemics have occurred since the early 19th century; the first one did not reach the Americas. The seventh pandemic originated in Indonesia in 1961. The first cholera pandemic began in the Bengal region of India, near Calcutta, in 1817 and lasted through 1824. The disease dispersed from India to Southeast Asia, the Middle East, Europe, and Eastern Africa. The movement of British Army and Navy ships and personnel is believed to have contributed to the range of the pandemic, since the ships carried people with the disease to the shores of the Indian Ocean, from Africa to Indonesia, and north to China and Japan. The second pandemic lasted from 1826 to 1837 and particularly affected North America and Europe. Advancements in transport and global trade, and increased human migration, including soldiers, meant that more people were carrying the disease more widely. The third pandemic erupted in 1846, persisted until 1860, extended to North Africa, and reached North and South America. It was introduced to North America at Quebec, Canada, via Irish immigrants from the Great Famine. In this pandemic, Brazil was affected for the first time.
The fourth pandemic lasted from 1863 to 1875, spreading from India to Naples and Spain, and reaching the United States at New Orleans, Louisiana in 1873. It spread throughout the Mississippi River system on the continent. The fifth pandemic was from 1881 to 1896. It started in India and spread to Europe, Asia, and South America. The sixth pandemic ran from 1899 to 1923. These epidemics had fewer fatalities because physicians and researchers had a greater understanding of the cholera bacteria. Egypt, the Arabian peninsula, Persia, India, and the Philippines were hit hardest during these epidemics. Other areas, such as Germany in 1892 (primarily the city of Hamburg, where more than 8,600 people died) and Naples from 1910 to 1911, also had severe outbreaks. The seventh pandemic originated in 1961 in Indonesia and is marked by the emergence of a new strain, nicknamed El Tor, which still persists in developing countries. This pandemic had initially subsided about 1975 and was thought to have ended, but, as noted, it has persisted. Cases rose again in the 1990s and have continued since. Cholera became widespread in the 19th century. Since then it has killed tens of millions of people. In Russia alone, between 1847 and 1851, more than one million people died from the disease. It killed 150,000 Americans during the second pandemic. Between 1900 and 1920, perhaps eight million people died of cholera in India. Cholera officially became the first reportable disease in the United States due to the significant effects it had on health. In 1854, John Snow, in England, was the first to identify the importance of contaminated water as its source of transmission. Cholera is now no longer considered a pressing health threat in Europe and North America due to filtering and chlorination of water supplies, but it still strongly affects populations in developing countries. In the past, vessels flew a yellow quarantine flag if any crew members or passengers had cholera.
No one aboard a vessel flying a yellow flag would be allowed ashore for an extended period, typically 30 to 40 days. Historically many different claimed remedies have existed in folklore. Many of the older remedies were based on the miasma theory, that the disease was transmitted by bad air. Some believed that abdominal chilling made one more susceptible, and flannel and cholera belts were included in army kits. In the 1854–1855 outbreak in Naples, homeopathic camphor was used, following Hahnemann, who had laid down three main remedies for the disease: camphor in early and simple cases; cuprum in later stages with excessive cramping; and veratrum album with excessive evacuations and profuse cold sweat. These became known among homeopaths as the "trio" of cholera remedies. T. J. Ritter's Mother's Remedies book lists tomato syrup as a home remedy from northern America. Elecampane was recommended in the United Kingdom, according to William Thomas Fernie. The first effective human vaccine was developed in 1885, and the first effective antibiotic was developed in 1948. Cholera cases are much less frequent in developed countries where governments have helped to establish water sanitation practices and effective medical treatments. In the 19th century the United States, for example, had a severe cholera problem similar to those in some developing countries. It had three large cholera outbreaks in the 1800s, which can be attributed to Vibrio cholerae spread through interior waterways such as the Erie Canal and the extensive Mississippi River valley system, as well as the major ports along the Eastern Seaboard and their cities upriver. The island of Manhattan in New York City touches the Atlantic Ocean, where cholera from river waters and ship discharges collected just off the coast.
At this time, New York City did not have as effective a sanitation system as it developed in the later 20th century, so cholera spread through the city's water supply. Cholera morbus is a historical term that was used to refer to gastroenteritis rather than specifically to what is now defined as the disease of cholera. Research One of the major contributions to fighting cholera was made by the physician and pioneer medical scientist John Snow (1813–1858), who in 1854 found a link between cholera and contaminated drinking water. Snow proposed a microbial origin for epidemic cholera in 1849. In his major "state of the art" review of 1855, he proposed a substantially complete and correct model for the cause of the disease. In two pioneering epidemiological field studies, he was able to demonstrate that human sewage contamination was the most probable disease vector in two major epidemics in London in 1854. His model was not immediately accepted, but it was increasingly seen as plausible as medical microbiology developed over the next 30 years or so. For his work on cholera, John Snow is often regarded as the "Father of Epidemiology". The bacterium was isolated in 1854 by Italian anatomist Filippo Pacini, but its exact nature and his results were not widely known. In the same year, the Catalan Joaquim Balcells i Pascual discovered the bacterium. In 1856 António Augusto da Costa Simões and José Ferreira de Macedo Pinto, two Portuguese researchers, are believed to have done the same. Between the mid-1850s and the 1900s, cities in developed nations made massive investment in clean water supply and well-separated sewage treatment infrastructures. This eliminated the threat of cholera epidemics from the major developed cities in the world. In 1883, Robert Koch identified V. cholerae with a microscope as the bacillus causing the disease.
Hemendra Nath Chatterjee, a Bengali scientist, was the first to formulate and demonstrate the effectiveness of oral rehydration salt (ORS) to treat diarrhea. In his 1953 paper, published in The Lancet, he stated that promethazine could stop vomiting during cholera, after which oral rehydration was possible. The formulation of the fluid replacement solution was 4 g of sodium chloride, 25 g of glucose and 1000 ml of water. Indian medical scientist Sambhu Nath De discovered the cholera toxin and the animal model of cholera, and successfully demonstrated the method of transmission of the cholera pathogen Vibrio cholerae. Robert Allan Phillips, working at US Naval Medical Research Unit Two in Southeast Asia, evaluated the pathophysiology of the disease using modern laboratory chemistry techniques. He developed a protocol for rehydration. His research led the Lasker Foundation to award him its prize in 1967. More recently, in 2002, Alam et al. studied stool samples from patients at the International Centre for Diarrhoeal Disease in Dhaka, Bangladesh. From the various experiments they conducted, the researchers found a correlation between the passage of V. cholerae through the human digestive system and an increased infectivity state. Furthermore, the researchers found the bacterium creates a hyperinfectious state in which genes that control biosynthesis of amino acids, iron uptake systems, and formation of periplasmic nitrate reductase complexes are induced just before defecation. These induced characteristics allow the cholera vibrios to survive in the "rice water" stools, an environment of limited oxygen and iron, of patients with a cholera infection. Global Strategy In 2017, the WHO launched the "Ending Cholera: a global roadmap to 2030" strategy, which aims to reduce cholera deaths by 90% by 2030. The strategy was developed by the Global Task Force on Cholera Control (GTFCC), which develops country-specific plans and monitors progress.
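For context, Chatterjee's gram-based recipe above (4 g sodium chloride and 25 g glucose per 1000 ml of water) can be converted into approximate molar concentrations. This is a back-of-the-envelope sketch using standard molar masses; the helper function is our own:

```python
# Back-of-the-envelope conversion of Chatterjee's 1953 formulation
# (4 g NaCl and 25 g glucose per 1000 ml of water) into approximate
# molar concentrations, using standard molar masses.

NACL_G_PER_MOL = 58.44      # molar mass of sodium chloride
GLUCOSE_G_PER_MOL = 180.16  # molar mass of glucose

def mmol_per_liter(grams_per_liter: float, molar_mass_g_per_mol: float) -> float:
    """Convert a concentration in g/L to mmol/L."""
    return 1000.0 * grams_per_liter / molar_mass_g_per_mol

print(round(mmol_per_liter(4, NACL_G_PER_MOL)))      # ~68 mmol/L sodium chloride
print(round(mmol_per_liter(25, GLUCOSE_G_PER_MOL)))  # ~139 mmol/L glucose
```

The arithmetic is only meant to make the 1953 recipe comparable with later ORS formulations, which are usually specified in mmol/L rather than grams.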
The approach to achieve this goal combines surveillance, water sanitation, rehydration treatment and oral vaccines. Specifically, the control strategy focuses on three approaches: i) early detection of and response to outbreaks in order to contain them, ii) stopping cholera transmission through improved sanitation and vaccines in hotspots, and iii) a global framework for cholera control through the GTFCC. The WHO and the GTFCC do not consider global cholera eradication a viable goal. Even though humans are the only animal host of cholera, the bacterium can persist in the environment without a human host. While global eradication is not possible, elimination of human-to-human transmission may be possible. Local elimination is possible and has been underway most recently in Haiti following the 2010s cholera outbreak. Haiti aims to achieve certification of elimination by 2022. The GTFCC targets 47 countries, 13 of which have established vaccination campaigns. Society and culture Health policy In many developing countries, cholera still reaches its victims through contaminated water sources, and countries without proper sanitation techniques have greater incidence of the disease. Governments can play a role in this. In 2008, for example, the Zimbabwean cholera outbreak was due partly to the government's role, according to a report from the James Baker Institute. The Haitian government's inability to provide safe drinking water after the 2010 earthquake led to an increase in cholera cases as well. Similarly, South Africa's cholera outbreak was exacerbated by the government's policy of privatizing water programs. The wealthy elite of the country were able to afford safe water while others had to use water from cholera-infected rivers. According to Rita R. Colwell of the James Baker Institute, if cholera does begin to spread, government preparedness is crucial.
A government's ability to contain the disease before it extends to other areas can prevent a high death toll and the development of an epidemic or even pandemic. Effective disease surveillance can ensure that cholera outbreaks are recognized as soon as possible and dealt with appropriately. Oftentimes, this will allow public health programs to determine and control the cause of the cases, whether it is unsanitary water or seafood that has accumulated large numbers of Vibrio cholerae. Having an effective surveillance program contributes to a government's ability to prevent cholera from spreading. In the year 2000 in the state of Kerala in India, the Kottayam district was determined to be "cholera-affected"; this pronouncement led to task forces that concentrated on educating citizens, holding 13,670 information sessions about human health. These task forces promoted the boiling of water to obtain safe water, and provided chlorine and oral rehydration salts. Ultimately, this helped to control the spread of the disease to other areas and minimize deaths. On the other hand, researchers have shown that most of the citizens infected during the 1991 cholera outbreak in Bangladesh lived in rural areas and were not recognized by the government's surveillance program. This inhibited physicians' abilities to detect cholera cases early. According to Colwell, the quality and inclusiveness of a country's health care system affects the control of cholera, as it did in the Zimbabwean cholera outbreak. While sanitation practices are important, when governments respond quickly and have readily available vaccines, the country will have a lower cholera death toll. Affordability of vaccines can be a problem; if governments do not provide vaccinations, only the wealthy may be able to afford them and there will be a greater toll on the country's poor. The speed with which government leaders respond to cholera outbreaks is important.
Beyond contributing directly to the public health care system and water sanitation treatments, a government can have indirect effects on cholera control and the effectiveness of a response to cholera. A country's government can impact its ability to prevent disease and control its spread. A speedy government response backed by a fully functioning health care system and financial resources can prevent cholera's spread. This limits deaths, as well as the decline in education that occurs when children are kept out of school to minimize the risk of infection. Conversely, a poor government response can lead to civil unrest and cholera riots. Notable cases Tchaikovsky's death has traditionally been attributed to cholera, most probably contracted through drinking contaminated water several days earlier. Tchaikovsky's mother died of cholera, and his father became sick with cholera at this time but made a full recovery. Some scholars, however, including English musicologist and Tchaikovsky authority David Brown and biographer Anthony Holden, have theorized that his death was a suicide. 2010s Haiti cholera outbreak. Ten months after the 2010 earthquake, an outbreak swept over Haiti, traced to a United Nations base of peacekeepers from Nepal. This marks the worst cholera outbreak in recent history, as well as the best documented cholera outbreak in modern public health. Adam Mickiewicz, Polish poet and novelist, is thought to have died of cholera in Istanbul in 1855. Sadi Carnot, physicist, a pioneer of thermodynamics (d. 1832) Charles X, King of France (d. 1836) James K. Polk, eleventh president of the United States (d. 1849) Carl von Clausewitz, Prussian soldier and German military theorist (d.
1831) Elliot Bovill, Chief Justice of the Straits Settlements (1893) Nikola Tesla, Serbian-American inventor, engineer and futurist known for his contributions to the design of the modern alternating current (AC) electricity supply system, contracted cholera in 1873 at the age of 17. He was bedridden for nine months and near death multiple times, but survived and fully recovered. In popular culture Unlike tuberculosis ("consumption"), which in literature and the arts was often romanticized as a disease of denizens of the demimonde or those with an artistic temperament, cholera is a disease that almost entirely affects the poor living in unsanitary conditions. This, and the unpleasant course of the disease – which includes voluminous "rice-water" diarrhea, the hemorrhaging of liquids from the mouth, and violent muscle contractions which continue even after death – have discouraged the disease from being romanticized, or even being factually presented, in popular culture. The 1889 novel Mastro-don Gesualdo by Giovanni Verga presents the course of a cholera epidemic across the island of Sicily, but does not show the suffering of those affected. Cholera is a major plot device in The Painted Veil, a 1925 novel by W. Somerset Maugham. The story concerns a shy bacteriologist who discovers his young, pretty wife is having an adulterous affair. The doctor exacts revenge on his wife by inducing her to travel with him to mainland China, which is in the grip of a horrific cholera outbreak. The ravages of the disease are frankly described in the novel. In Thomas Mann's novella Death in Venice, first published in 1912 as Der Tod in Venedig, Mann "presented the disease as emblematic of the final 'bestial degradation' of the sexually transgressive author Gustav von Aschenbach." Contrary to the actual facts of how violently cholera kills, Mann has his protagonist die peacefully on a beach in a deck chair.
Luchino Visconti's 1971 film version also hid from the audience the actual course of the disease. Mann's novella was also made into an opera by Benjamin Britten in 1973, his last one, and into a ballet by John Neumeier for his Hamburg Ballet company, in December 2003. The Horseman on the Roof (orig. French Le Hussard sur le toit) is a 1951 adventure novel written by Jean Giono. It tells the story of Angelo Pardi, a young Italian carbonaro colonel of hussars, caught up in the 1832 cholera epidemic in Provence. In 1995, it was made into a film of the same name directed by Jean-Paul Rappeneau. In Gabriel García Márquez's 1985 novel Love in the Time of Cholera, cholera is "a looming background presence rather than a central figure requiring vile description." The novel was adapted in 2007 for the film of the same name directed by Mike Newell. In The Secret Garden, Mary Lennox's parents die from cholera. Country examples Zambia In Zambia, widespread cholera outbreaks have occurred since 1977, most commonly in the capital city of Lusaka. In 2017, an outbreak of cholera was declared in Zambia after laboratory confirmation of Vibrio cholerae O1, biotype El Tor, serotype Ogawa, from stool samples from two patients with acute watery diarrhea. There was a rapid increase in the number of cases, from several hundred cases in early December 2017 to approximately 2,000 by early January 2018. With intensification of the rains, new cases increased on a daily basis, reaching a peak in the first week of January 2018 with over 700 cases reported.
In collaboration with partners, the Zambia Ministry of Health (MoH) launched a multifaceted public health response that included increased chlorination of the Lusaka municipal water supply, provision of emergency water supplies, water quality monitoring and testing, enhanced surveillance, epidemiologic investigations, a cholera vaccination campaign, aggressive case management and health care worker training, and laboratory testing of clinical samples. The Zambian Ministry of Health implemented a reactive one-dose Oral Cholera Vaccine (OCV) campaign in April 2016 in three Lusaka compounds, followed by a pre-emptive second round in December. Nigeria In June 2024, the Nigeria Centre for Disease Control and Prevention (NCDC) announced a total of 1,141 suspected and 65 confirmed cases of cholera with 30 deaths across 96 Local Government Areas (LGAs) in 30 states of the country. NCDC, in its public health advisory, said Abia, Bayelsa, Bauchi, Cross River, Delta, Imo, Katsina, Lagos, Nasarawa and Zamfara were the 10 states that contributed 90 percent of the burden of cholera in the country at the time. India The city of Kolkata in the state of West Bengal, in the Ganges delta, has been described as the "homeland of cholera", with regular outbreaks and pronounced seasonality. In India, where the disease is endemic, cholera outbreaks occur every year between the dry and rainy seasons. India is also characterized by high population density, unsafe drinking water, open drains, and poor sanitation, which provide an optimal niche for the survival, sustenance and transmission of Vibrio cholerae. Democratic Republic of Congo In Goma in the Democratic Republic of Congo, cholera has left an enduring mark on human and medical history.
Cholera pandemics in the 19th and 20th centuries led to the growth of epidemiology as a science, and in recent years the disease has continued to drive advances in the concepts of disease ecology, basic membrane biology, and transmembrane signaling, and in the use of scientific information and treatment design. Explanatory notes References Further reading Bilson, Geoffrey. A Darkened House: Cholera in Nineteenth-Century Canada (U of Toronto Press, 1980). Gilbert, Pamela K. Cholera and Nation: Doctoring the Social Body in Victorian England (SUNY Press, 2008). Snowden, Frank M. Naples in the Time of Cholera, 1884–1911 (Cambridge UP, 1995). Vinten-Johansen, Peter, ed. Investigating Cholera in Broad Street: A History in Documents (Broadview Press, 2020), regarding 1850s England. Vinten-Johansen, Peter, et al. Cholera, Chloroform, and the Science of Medicine: A Life of John Snow (2003). External links Prevention and control of cholera outbreaks: WHO policy and recommendations Cholera – World Health Organization Cholera – Vibrio cholerae infection – Centers for Disease Control and Prevention Diarrhea Foodborne illnesses Gastrointestinal tract disorders Intestinal infectious diseases Tropical diseases Epidemics Pandemics Sanitation Waterborne diseases Vaccine-preventable diseases
Cholera
Biology
10,365
9,115,050
https://en.wikipedia.org/wiki/Department%20of%20Electrical%20Engineering%20and%20Computer%20Science%20at%20MIT
The Department of Electrical Engineering and Computer Science at MIT is an engineering department of the Massachusetts Institute of Technology in Cambridge, Massachusetts. It is regarded as one of the most prestigious in the world, and offers degrees of Master of Science, Master of Engineering, Doctor of Philosophy, and Doctor of Science. History The curriculum for the electrical engineering program was created in 1882, and was the first such program in the country. It was initially taught by the physics faculty. In 1902, the Institute set up a separate Electrical Engineering department. The department was renamed Electrical Engineering and Computer Science in 1975, to highlight the new addition of computer science to the program. Current faculty Professors Silvio Micali Harold Abelson Anant Agarwal Akintunde I. Akinwande Dimitri A. Antoniadis Arvind Arthur B. Baggeroer Hari Balakrishnan Dimitri P. Bertsekas Robert C. Berwick Duane S. Boning Louis D. Braida Rodney A. Brooks Vincent W. S. Chan Anantha P. Chandrakasan Shafi Goldwasser Paul E. Gray (S.B. 1954, S.M. 1955, Ph.D. 1960) Pablo A. Parrilo L. Rafael Reif Jerome H. Saltzer (Sc.D. 1966) Kenneth N. Stevens (Sc.D. 1952) Gerald J. Sussman (S.B. 1968, Ph.D. 1973, both in Mathematics) Patrick H. Winston Regina Barzilay Associate professors Saman P. Amarasinghe Krste Asanovic Marc Baldo Sangeeta Bhatia Vladimir Bulovic Isaac L. Chuang Michael Collins Karl K. Berggren Elfar Adalsteinsson Tomas Palacios Professors emeriti Michael Athans Abraham Bers Amar Bose (S.B. 1951, S.M. 1952, Sc.D. 1956) James D. Bruce Fernando J. Corbató Shaoul Ezekiel Robert Fano (S.B. 1941, Sc.D. 1947) Former faculty Leo Beranek Gordon S. Brown (S.B. 1931, S.M. 1934, Ph.D. 1938) Vannevar Bush (Eng.D. 1916) Jack Dennis (S.B. 1953, S.M. 1954, Sc.D. 1958) Harold Edgerton (S.M. 1927, Sc.D. 1931) Jay Wright Forrester (S.M. 1945) Irwin M. Jacobs (S.M. 1957, Sc.D. 1959) William B. Lenoir (S.B. 1961, S.M. 1962, Ph.D.
1965) John McCarthy Marvin Minsky Julius Stratton (S.B. 1923, S.M. 1926) Notable alumni References External links MIT Department of Electrical Engineering and Computer Science Electrical Engineering and Computer Science Department Computer science departments in the United States Science and technology in Massachusetts Electrical and computer engineering departments
Department of Electrical Engineering and Computer Science at MIT
Engineering
605
45,122,752
https://en.wikipedia.org/wiki/S%C3%B8ren%20Galatius
Søren Galatius (born 1 August 1976) is a Danish mathematician who works as a professor of mathematics at the University of Copenhagen. He works in algebraic topology, where one of his most important results concerns the homology of the automorphisms of free groups. He is also known for his joint work with Oscar Randal-Williams on moduli spaces of manifolds, comprising several papers. Life Galatius was born in Randers, Denmark. He earned his PhD from Aarhus University in 2004 under the supervision of Ib Madsen. He then joined the Stanford University faculty, first with a temporary position as a Szegő Assistant Professor and then two years later with a tenure-track position, eventually becoming full professor in 2011. He relocated to the University of Copenhagen in 2016. Recognition In 2010, Galatius won the Silver Medal of the Royal Danish Academy of Sciences and Letters. In 2012, he became one of the inaugural fellows of the American Mathematical Society. He was an invited speaker at the 2014 International Congress of Mathematicians, speaking about his joint work with Oscar Randal-Williams. In 2017, he won an Elite Research Prize from the Danish Government for his work. In 2022 he was awarded the Clay Research Award jointly with Oscar Randal-Williams. Selected publications References External links 1976 births Living people Danish mathematicians 21st-century American mathematicians Aarhus University alumni Stanford University Department of Mathematics faculty Fellows of the American Mathematical Society People from Randers Topologists Academic staff of the University of Copenhagen
Søren Galatius
Mathematics
299
35,899,362
https://en.wikipedia.org/wiki/Technicien%20sup%C3%A9rieur%20de%20l%27aviation
The degree of Technicien supérieur de l'aviation (TSA, in English Advanced Technician Degree in Aviation) is a certification created in 2010 from the Technicien supérieur des études et de l'exploitation de l'aviation civile certification. It is a title recognized by the CNCP, and registered at level III in the National Classification of Levels of Training. The degree is obtained after training at the École nationale de l'aviation civile (the French civil aviation university). Application Students can apply to this training by: A competitive examination organized by ENAC each year; A Validation des Acquis de l'Expérience procedure. Training at ENAC TSA students are admitted into one of two options: "TSA Fonctionnaires" ("civil servant TSA"); after the training they join the corps of the Technicien supérieur des études et de l'exploitation de l'aviation civile. "TSA Civils" ("civilian TSA"). The choice is made by ranking in the competitive examination. The two options have the same curriculum. The civil servants follow a dual education system before joining the TSEEAC corps. Jobs Civil servants: they join the corps of the Technicien supérieur des études et de l'exploitation de l'aviation civile at the Directorate General for Civil Aviation. Civilians: they can be employed at an airport or with an airline.
Possible areas include: Ground handling services: air freight or passengers; Air operations: flight preparation and management of flight crews; Other missions of training, audit and expertise in aviation Bibliography Ariane Gilotte, Jean-Philippe Husson & Cyril Lazerge, 50 ans d'Énac au service de l'aviation, Édition S.E.E.P.P, 1999 See also Technicien supérieur des études et de l'exploitation de l'aviation civile (TSEEAC) Technicien Supérieur de l'Aviation (civilian) (TSA civilian) External links TSA training on ENAC website References Air traffic control in France Aviation licenses and certifications École nationale de l'aviation civile Occupations in aviation Professional titles and certifications
Technicien supérieur de l'aviation
Engineering
452
10,874,936
https://en.wikipedia.org/wiki/Sudan%20Airways%20Flight%20139
Sudan Airways Flight 139 was a Sudan Airways passenger flight that crashed on 8 July 2003 at Port Sudan. The Boeing 737 aircraft was operating a domestic scheduled Port Sudan–Khartoum passenger service. Some 15 minutes after takeoff, the aircraft lost power in one of its engines, which prompted the crew to return to the airport for an emergency landing. In doing so, the pilots missed the airport runway, and the airplane descended until it hit the ground, disintegrating after impact. Of the 117 people aboard, 116 died. Aircraft and crew The aircraft involved in the accident was a Boeing 737-2J8C, c/n 21169, registered ST-AFK. Powered by two Pratt & Whitney JT8D-7 engines, it had its maiden flight on 29 August 1975, and was delivered new to Sudan Airways on 15 September 1975. At the time of the accident, the aircraft was almost 28 years old. The pilots involved were Captain Awad Jaber, First Officer Amir al-Nujumi, and Second Officer Walid Khair. Accident The airplane had departed Port Sudan at 4:00 am (UTC+3), bound for Khartoum. Captain Jaber radioed about ten minutes after take-off about a problem with one of the engines, and that he would return to the airport to make an emergency landing. However, the plane plummeted into the ground before returning to the airfield and immediately caught fire. All but one of the 117 occupants of the aircraft – most of them Sudanese – perished in the accident. There were three Indians, a Briton, a Chinese, an Emirati, and an Ethiopian among the dead as well. A two-year-old boy was the sole survivor. Then-Sudanese foreign minister Mustafa Osman Ismail cited the trade embargo imposed by the U.S. government in 1997 as a contributing factor to the accident, claiming the airline was unable to get spare parts for the maintenance of its fleet because of sanctions. The aircraft involved in the accident, in particular, had not been serviced for years.
See also Aviation accidents and incidents Sudan Airways Flight 109 References 2003 disasters in Sudan Sudan Airways accidents and incidents Aviation accidents and incidents in Sudan Aviation accidents and incidents in 2003 Accidents and incidents involving the Boeing 737 Original Airliner accidents and incidents caused by pilot error Airliner accidents and incidents caused by mechanical failure 2003 in Sudan July 2003 events in Africa
Sudan Airways Flight 139
Materials_science
488
288,400
https://en.wikipedia.org/wiki/Provenance
Provenance is the chronology of the ownership, custody or location of a historical object. The term was originally mostly used in relation to works of art, but is now used in similar senses in a wide range of fields, including archaeology, paleontology, archival science, economics, computing, and scientific inquiry in general. The primary purpose of tracing the provenance of an object or entity is normally to provide contextual and circumstantial evidence for its original production or discovery, by establishing, as far as practicable, its later history, especially the sequences of its formal ownership, custody and places of storage. The practice has a particular value in helping authenticate objects. Comparative techniques, expert opinions and the results of scientific tests may also be used to these ends, but establishing provenance is essentially a matter of documentation. The term dates to the 1780s in English. Provenance is conceptually comparable to the legal term chain of custody. For museums and the art trade, in addition to helping establish the authorship and authenticity of an object, provenance has become increasingly important in helping establish the moral and legal validity of a chain of custody, given the increasing amount of looted art. These issues first became a major concern regarding works that had changed hands in Nazi-controlled areas in Europe before and during World War II. Many museums began compiling pro-active registers of such works and their history. Recently the same concerns have come to prominence for works of African art, often exported illegally, and for antiquities from many parts of the world, currently especially Iraq and Syria. In archaeology and paleontology, the derived term provenience is used with a related but very particular meaning, to refer to the location (in modern research, recorded precisely in three dimensions) where an artifact or other ancient item was found.
Provenance covers an object's complete documented history. An artifact may thus have both a provenience and a provenance. Works of art and antiques The provenance of works of fine art, antiques and antiquities is of great importance, especially to their owner. There are a number of reasons why painting provenance is important, which mostly also apply to other types of fine art. A good provenance increases the value of a painting, and establishing provenance may help confirm the date, artist and, especially for portraits, the subject of a painting. It may confirm whether a painting is genuinely of the period it seems to date from. The provenance of paintings can help resolve ownership disputes. For example, provenance between 1933 and 1945 can determine whether a painting was looted by the Nazis. Many galleries are putting a great deal of effort into researching the provenance of paintings in their collections for which there is no firm provenance during that period. Documented evidence of provenance for an object can help to establish that it has not been altered and is not a forgery, a reproduction, stolen or looted art. Provenance helps assign the work to a known artist, and a documented history can be of use in helping to prove ownership. An example of a detailed provenance is given in the Arnolfini portrait. The quality of provenance of an important work of art can make a considerable difference to its selling price in the market. This is affected by the degree of certainty of the provenance, the status of past owners as collectors, and in many cases by the strength of evidence that an object has not been illegally excavated or exported from another country. The provenance of a work of art may vary greatly in length, depending on context or the amount that is known, from a single name to an entry in a scholarly catalogue some thousands of words long. An expert certification can mean the difference between an object having no value and being worth a fortune. 
Certifications themselves may be open to question. Jacques van Meegeren forged the work of his father Han van Meegeren, who had forged the work of Vermeer. Jacques sometimes produced a certificate with his forgeries, stating that a work was created by his father. John Drewe was able to pass off as genuine a large number of forged paintings that would easily have been recognised as such by scientific examination. He established an impressive but false provenance, and because of it galleries and dealers accepted the paintings as genuine. He created this false provenance by forging letters and other documents, including false entries in earlier exhibition catalogues. Sometimes provenance can be as simple as a photograph of the item with its original owner. Simple yet definitive documentation such as that can increase an item's value by an order of magnitude, but only if the owner was of high renown. Many items sold at auction have gone far past their estimates because of a photograph showing that item with a famous person. Some examples include antiques owned by politicians, musicians, artists, actors, etc. In the context of discussions about the restitution of cultural objects in museum collections of colonial origin, the AfricaMuseum in Belgium started to publicly present information about such objects in its permanent exhibition in 2021. Researching the provenance of paintings The objective of provenance research is to produce a complete list of owners (together, where possible, with the supporting documentary proof) from when the painting was commissioned or in the artist's studio through to the present time. In practice, there are likely to be gaps in the list and documents that are missing or lost. The documented provenance should also list when the painting has been part of an exhibition, and a bibliography of when it has been discussed or illustrated in print.
Where the research is proceeding backwards, to discover the previous provenance of a painting whose current ownership and location are known, it is important to record the physical details of the painting – style, subject, signature, materials, dimensions, frame, etc. The titles of paintings and the attribution to a particular artist may change over time. The size of the work and its description can be used to identify earlier references to the painting. The back of a painting can contain significant provenance information. There may be exhibition marks, dealer stamps, gallery labels and other indications of previous ownership. There may be shipping labels. In the BBC TV programme Fake or Fortune? the provenance of the painting Bords de la Seine à Argenteuil was investigated using a gallery sticker and shipping label on the back. Early provenance can sometimes be indicated by a cartellino, a trompe-l'œil representation of an inscribed label, added to the front of a painting. However, these can be forged, or can fade or be painted over. Auction records are an important resource to assist in researching the provenance of paintings. The Witt Library houses a collection of cuttings from auction catalogs which enables the researcher to identify occasions when a picture has been sold. The Heinz Library at the National Portrait Gallery, London maintains a similar collection, but restricted to portraits. The National Art Library at the Victoria and Albert Museum has a collection of UK sales catalogues. The University of York is establishing a web site with on-line resources for investigating art history in the period 1660–1735. This includes diaries, sales catalogues, bills, correspondence and inventories. The Getty Research Institute in Los Angeles has a Project for the Study of Collecting and Provenance (PSCP) which includes an on-line database, still being compiled, of auction and other records relating to painting provenance. 
The Frick Art Reference Library in New York has an extensive collection of auction and exhibition catalogues. The Netherlands Institute for Art History (RKD) has a number of databases related to artists from the Netherlands. If a painting has been in private hands for an extended period and on display in a stately home, it may be recorded in an inventory – for example, the Lumley inventory. The painting may also have been noticed by a visitor who subsequently wrote about it. It may have been mentioned in a will or a diary. Where the painting has been bought from a dealer, or changed hands in a private transaction, there may be a bill of sale or sales receipt that provides evidence of provenance. Where the artist is known, there may be a catalogue raisonné listing all the artist's known works and their location at the time of writing. A database of catalogues raisonnés is available at the International Foundation for Art Research. Historic photos of the painting may be discussed and illustrated in a more general work on the artist, period or genre. Similarly, a photograph of a painting may show inscriptions (or a signature) that subsequently became lost as a result of overzealous restoration. Conversely, a photograph may show that an inscription was not visible at an earlier date. One of the disputed aspects of the "Rice" portrait of Jane Austen concerns apparent inscriptions identifying artist and sitter. Archives Provenance – also known as custodial history – is a core concept within archival science and archival processing. The term refers to the individuals, groups, or organizations that originally created or received the items in an accumulation of records, and to the items' subsequent chain of custody.
The principle of provenance, also termed the principle of "archival integrity", and a major strand in the broader principle of respect des fonds, stipulates that records originating from a common source, or fonds, should be kept together – where practicable, physically, but in all cases intellectually, in the way in which they are catalogued and arranged in finding aids. Conversely, records of different provenance should be preserved and documented separately. In archival practice, proof of provenance is provided by the operation of control systems that document the history of records kept in archives, including details of amendments made to them. The authority of an archival document or set of documents of which the provenance is uncertain, because of gaps in the recorded chain of custody, will be considered to be severely compromised. The principles of archival provenance were developed in the 19th century by both French and Prussian archivists, and gained widespread acceptance on the basis of their formulation in the Manual for the Arrangement and Description of Archives by Dutch state archivists Samuel Muller, J. A. Feith, and R. Fruin, published in the Netherlands in 1898, often referred to as the "Dutch Manual". Seamus Ross has argued a case for adapting established principles and theories of archival provenance to the field of modern digital preservation and curation. Provenance is also the title of the journal published by the Society of Georgia Archivists. Books In the case of books, the study of provenance refers to the study of the ownership of individual copies of books. It is usually extended to include the study of the circumstances in which individual copies of books have changed ownership, and of evidence left in books that shows how readers interacted with them. Provenance studies may shed light on the books themselves, providing evidence of the role particular titles have played in social, intellectual and literary history. 
Such studies may also add to our knowledge of particular owners of books. For instance, looking at the books owned by a writer may help to show which works influenced him or her. Many provenance studies are historically focused, concentrating on books owned by writers, politicians and public figures. The recent ownership of books is studied, however, as is evidence of how ordinary or anonymous readers have interacted with books. Provenance can be studied both by examining the books themselves, for instance looking at inscriptions, marginalia, bookplates, book rhymes, and bindings, and by reference to external sources of information such as auction catalogues. Pianos Provenance for pianos is authenticated before a piano is inducted into a museum, sold at an auction, or appraised for an estate or legal action, when it has extraordinary value in connection to a composer, performer, event or location that has become famous. For example, the piano that Wolfgang Amadeus Mozart used during the final 10 years of his life is on display in the Mozarteum Museum in Salzburg, one of many historical pianos in museums around the world. The 300,000th Steinway piano, presented to President Franklin D. Roosevelt by Theodore Steinway on behalf of the Steinway family, is on display in the White House. It is one of many pianos with a provenance that have extraordinary value because of art, sculpture or design incorporated into the cabinet. It has legs carved into golden eagles and figures painted on the body of the piano. For a piano, provenance can be established by starting with the authentication of the brand of manufacture and serial number, which will usually identify age. Then bills of sale, tuning records, bills of lading, concert programs that identify a piano by serial number, letters, famous signatures inside or on the outside of a piano, statements under oath in a court of law and photographs can all help authenticate a piano's provenance.
Piano Provenance and Valuation Pianos can sell for millions of dollars when the provenance is significant enough to increase their value well beyond what they would be worth as musical instruments alone. When decisions need to be made in a court of law for a bankruptcy, or before a piano goes up for auction, or when an educational institution needs to establish a value for a deed of trust being established with the gift of a piano, experts are usually hired to authenticate the piano's provenance. Piano provenance has emerged as a field of study, with experts holding college degrees in a specialty connected to the piano or to art, combined with professional training and experience in the field. Most experts belong to some form of association. For example, Karen Earle Lile, niece of Tony Terran, and Kendall Ross Bean, members of the Preservations Artisans Guild, were chosen by Mercersburg Academy to research and authenticate the provenance of the Lennon-Ono-Green-Warhol piano before it was put up for sale to fund a Deed of Trust by the Shaool Family to Mercersburg Academy for future student scholarships. Because this piano was part of a famous lawsuit in 2000 and had extensive coverage as the "Lost Lennon Piano", when the provenance research done by Lile was revealed to the public by Alex Cooper Auctioneers, it became the subject of dozens of newspaper and magazine stories. In the case of sculpture or art incorporated into the piano's cabinet, experts might come from the field of art valuation and belong to an appraiser society such as the American Society of Appraisers or the International Society of Appraisers.
A documented history of wine cellar conditions is valuable in estimating the quality of an older vintage due to the fragile nature of wine. Recent technology developments have aided collectors in assessing the temperature and humidity history of the wine, two key components in establishing perfect provenance. For example, there are devices available that rest inside the wood case and can be read through the wood by waving a smartphone equipped with a simple app. These devices track the conditions the case has been exposed to for the duration of the battery life, which can be as long as 15 years, and send a graph and high/low readings to the smartphone user. This takes the trust issue out of the hands of the owner and gives it to a third party for verification. Science Archaeology, anthropology, and paleontology Archaeology and anthropology researchers use provenience to refer to the exact location or find spot of an artifact, a bone or other remains, a soil sample, or a feature within an ancient site, whereas provenance covers an object's complete documented history. Ideally, in modern excavations, the provenience is recorded in three dimensions on a site grid with great precision, and may also be recorded on video to provide additional proof and context. In older work, often undertaken by amateurs, only the general site or approximate area may be known, especially when an artifact was found outside a professional excavation and its specific position not recorded. The term provenience appeared in the 1880s, about a century after provenance. Outside of academic contexts, it has been used as a synonymous variant spelling of provenance, especially in American English. Any given antiquity may have both a provenience, where it was found, and a provenance, where it has been since it was found. A summary of the distinction is that "provenience is a fixed point, while provenance can be considered an itinerary that an object follows as it moves from hand to hand."
Another metaphor is that provenience is an artifact's "birthplace", while provenance is its "résumé". This can be imprecise. Many artifacts originated as trade goods created in one region, but were used and finally deposited in another. Aside from scientific precision, a need for the distinction in these fields has been described thus: In this context, the provenance can occasionally be the detailed history of where an object has been since its creation, as in art history contexts – not just since its modern finding. In some cases, such as where there is an inscription on the object, or an account of it in written materials from the same era, an object of study in archaeology or cultural anthropology may have an early provenance – a known history that predates modern research – then a provenience from its modern finding, and finally a continued provenance relating to its handling and storage or display after the modern acquisition. Evidence of provenance in the more general sense can be of importance in archaeology. Fakes are not unknown, and finds are sometimes removed from the context in which they were found without documentation, reducing their value to science. Even when apparently discovered in situ, archaeological finds are treated with caution. The provenience of a find may not be properly represented by the context in which it was found, e.g. due to stratigraphic layers being disturbed by erosion, earthquakes, or ancient reconstruction or other disturbance at a site. Artifacts can be moved through looting as well as trade, far from their place of origin and long before modern rediscovery. Many source nations have passed legislation forbidding the domestic trade in cultural heritage. Further research is often required to establish the true provenance and legal status of a find, and what the relationship is between the exact provenience and the overall provenance. 
In paleontology and paleoanthropology, it is recognized that fossils can also move from their primary context and are sometimes found, apparently in situ, in deposits to which they do not belong because they have been moved, for example, by the erosion of nearby but different outcrops. It is unclear how strictly paleontology maintains the provenience and provenance distinction. For example, a short glossary on an American Museum of Natural History website, aimed primarily at young students, treats the terms as synonymous, while scholarly paleontology works make frequent use of provenience in the same precise sense as used in archaeology and paleoanthropology. While exacting details of a find's provenience are primarily of use to scientific researchers, most natural history and archaeology museums also make strenuous efforts to record how the items in their collections were acquired. These records are often of use in helping to establish a chain of provenance. Data provenance Scientific research is generally held to be of good provenance when it is documented in detail sufficient to allow reproducibility. Scientific workflow systems assist scientists and programmers with tracking their data through all transformations, analyses, and interpretations. Data sets are reliable when the processes used to create them are reproducible and analyzable for defects. Security researchers are interested in data provenance because it can be used to analyze suspicious data and make large, opaque systems transparent. Current initiatives to effectively manage, share, and reuse ecological data are indicative of the increasing importance of data provenance. Examples of these initiatives are National Science Foundation Datanet projects, DataONE and Data Conservancy, as well as the U.S. Global Change Research Program. Some international academic consortia, such as the Research Data Alliance, have specific groups to tackle issues of provenance.
One such group is the Research Data Provenance Interest Group. Computer science Within computer science, informatics uses the term "provenance" to mean the lineage of data, as per data provenance, with research in the last decade extending the conceptual model of causality and relation to include processes that act on data and agents that are responsible for those processes. See, for example, the proceedings of the International Provenance Annotation Workshop (IPAW) and Theory and Practice of Provenance (TaPP). Semantic web standards bodies, including the World Wide Web Consortium in 2014, have ratified a standard data model for provenance representation known as PROV, which draws from many of the better-known provenance representation systems that preceded it, such as the Proof Markup Language and the Open Provenance Model. Interoperability is a design goal of most recent computer science provenance theories and models; for example, the Open Provenance Model (OPM) 2008 generation workshop aimed at "establishing inter-operability of systems" through information exchange agreements. Data models and serialisation formats for delivering provenance information typically reuse existing metadata models where possible to enable this. Both the OPM Vocabulary and the PROV Ontology make extensive use of metadata models such as Dublin Core and Semantic Web technologies such as the Web Ontology Language (OWL). Current practice is to rely on the W3C PROV data model, OPM's successor. There are several maintained, open-source provenance capture implementations at the operating system level, such as CamFlow, Progger for Linux and MS Windows, and SPADE for Linux, MS Windows, and MacOS. Operating-system-level provenance has gained interest in the security community, notably for developing novel intrusion detection techniques. Other implementations exist for specific programming and scripting languages, such as RDataTracker for R, and NoWorkflow for Python.
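As an illustration of the entity–activity–agent shape of models like PROV, here is a minimal in-memory sketch in Python. The class and relation names are simplified stand-ins for this example only, not the W3C data model or its serializations:

```python
from dataclasses import dataclass, field

# Simplified mirror of the three PROV-style core concepts:
# entities (data), activities (processes), and agents (responsible parties).
@dataclass
class ProvGraph:
    used: list = field(default_factory=list)              # (activity, entity) pairs
    was_generated_by: list = field(default_factory=list)  # (entity, activity) pairs
    was_associated_with: list = field(default_factory=list)  # (activity, agent) pairs

    def lineage(self, entity):
        """Trace back which activity and source entities produced an entity."""
        for ent, act in self.was_generated_by:
            if ent == entity:
                sources = [e for a, e in self.used if a == act]
                return {"activity": act, "sources": sources}
        return None

g = ProvGraph()
g.used.append(("clean-step", "raw.csv"))
g.was_generated_by.append(("clean.csv", "clean-step"))
g.was_associated_with.append(("clean-step", "alice"))

print(g.lineage("clean.csv"))  # {'activity': 'clean-step', 'sources': ['raw.csv']}
```

A real system would record this graph in the standardized PROV vocabulary so that independently produced provenance records remain interoperable.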
Whole-system provenance implementations for Linux:
PASS – closed source – not maintained – kernel v2.6.x
Hi-Fi – open source – not maintained – kernel v3.2.x
Flogger – closed source – not maintained – kernel v2.6.x
S2Logger – closed source – not maintained – kernel v2.6.x
LPM – open source – not maintained – kernel v2.6.x
Progger – open source – not maintained – kernel v2.6.x and kernel v4.14.x
CamFlow – open source – maintained – kernel v6.0.x
Petrology In the geologic use of the term, provenance instead refers to the origin or source area of particles within a rock, most commonly in sedimentary rocks. It does not refer to the circumstances of the collection of the rock. The provenance of sandstone, in particular, can be evaluated by determining the proportion of quartz, feldspar, and lithic fragments (see diagram). Seed provenance Seed provenance refers to the geographic location of a parent plant, from which seeds were collected. In the context of ecological restoration, seed provenancing refers to a seed-sourcing strategy that focuses on the geographic location of seed sources, as each provenance can describe the genetic material from that location. Local provenancing is a position maintained by ecologists that suggests that only seeds of local provenance should be planted in a particular area. However, this view depends on the adaptationist program – a view that populations are universally locally adapted. It is maintained that local seed is best adapted to local conditions, and that outbreeding depression will be avoided. Evolutionary biologists suggest that strict adherence to provenance collecting is not a wise decision because:
Local adaptation is not as common as assumed.
Background population maladaptation can be driven by natural processes.
Human actions of habitat fragmentation drive maladaptation up and adaptive potential down.
Natural selection is changing rapidly due to climate change and habitat fragmentation.
Population fragments are unlikely to have diverged by natural selection since fragmentation (< 500 years ago), which implies a low risk of outbreeding depression.
Provenance trials, in which material of different provenances is planted in a single place or at different locations spanning a range of environmental conditions, are a way to reveal genetic variation among provenances. They also contribute to an understanding of how different provenances respond to various climatic and environmental conditions, and can thus provide knowledge on how to strategically select provenances for climate change adaptation. Computers and law The term provenance is used when ascertaining the source of goods such as computer hardware to assess if they are genuine or counterfeit. Chain of custody is an equivalent term used in law, especially for evidence in criminal or commercial cases. Software provenance encompasses the origin of software and its licensing terms. For example, when incorporating a free, open source or proprietary software component in an application, one may wish to understand its provenance to ensure that licensing requirements are fulfilled and that other software characteristics can be understood. Data provenance covers the provenance of computerized data. There are two main aspects of data provenance: ownership of the data and data usage. Ownership will tell the user who is responsible for the source of the data, ideally including information on the originator of the data. Data usage gives details regarding how the data has been used and modified and often includes information on how to cite the data source or sources. Data provenance is of particular concern with electronic data, as data sets are often modified and copied without proper citation or acknowledgement of the originating data set.
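To make the ownership and usage aspects concrete, here is a minimal Python sketch of a provenance record for a derived data set. The field names are illustrative only, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class DataProvenance:
    """Illustrative record covering the two aspects described above:
    ownership (who is responsible for the source data) and usage
    (how the data was modified, and how to cite it)."""
    originator: str                                # ownership: who produced the source
    source: str                                    # where the original data came from
    usage_log: list = field(default_factory=list)  # usage: modifications applied

    def record_use(self, description: str) -> None:
        self.usage_log.append(description)

    def citation(self) -> str:
        mods = "; ".join(self.usage_log) or "none"
        return (f"Derived from {self.source} "
                f"(originator: {self.originator}); modifications: {mods}")

prov = DataProvenance(originator="Census Bureau", source="population.csv")
prov.record_use("dropped rows with missing ages")
prov.record_use("merged with income.csv")
print(prov.citation())
```

Keeping such a record alongside each derived data set is exactly the kind of documentation that the merging and copying described above tends to lose.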
Databases make it easy to select specific information from data sets and merge this data with other data sources without any documentation of how the data was obtained or how it was modified from the original data set or sets. The automated analysis of data provenance graphs has been described as a means to verify compliance with regulations regarding data usage, such as those introduced by the EU GDPR. Secure provenance refers to providing integrity and confidentiality guarantees to provenance information. In other words, secure provenance means ensuring that history cannot be rewritten, and that users can specify who else can look into their actions on the object. A simple method of ensuring data provenance in computing is to mark a file as read only. This allows the user to view the contents of the file, but not edit or otherwise modify it. Marking a file read only can also, in some cases, prevent the user from accidentally or intentionally deleting the file. See also Art discovery Certificate of origin Chronological dating Post-excavation analysis Chain of custody Traceability References Bibliography Art Book studies Nazi-Era provenance research External links The National Gallery of Art Washington gives brief provenances for most featured works EU Provenance Project - a technology project that sought to support the electronic certification of data provenance W3C Provenance Working Group W3C Provenance Outreach Information Archaeological artifacts Archaeological theory Archival science Art history Visual arts terminology Book terminology Collections care Museology Data collection Evidence law Library science terminology Scientific method Seeds Wine packaging and storage Wine terminology
Provenance
Technology
5,520
2,059,470
https://en.wikipedia.org/wiki/Chlorine%20pentafluoride
Chlorine pentafluoride is an interhalogen compound with formula ClF5. This colourless gas is a strong oxidant that was once a candidate oxidizer for rockets. The molecule adopts a square pyramidal structure with C4v symmetry, as confirmed by its high-resolution 19F NMR spectrum. It was first synthesized in 1963. Preparation Some of the earliest research on the preparation was classified. It was first prepared by fluorination of chlorine trifluoride at high temperatures and high pressures:
ClF3 + F2 → ClF5
ClF + 2 F2 → ClF5
Cl2 + 5 F2 → 2 ClF5
CsClF4 + F2 → CsF + ClF5
NiF2 catalyzes this reaction. Certain metal fluorides, MClF4 (i.e. KClF4, RbClF4, CsClF4), react with F2 to produce ClF5 and the corresponding alkali metal fluoride. Reactions In a highly exothermic reaction, ClF5 reacts with water to produce chloryl fluoride and hydrogen fluoride:
ClF5 + 2 H2O → ClO2F + 4 HF
It is also a strong fluorinating agent. At room temperature it reacts readily with all elements (including otherwise "inert" elements like platinum and gold) except noble gases, nitrogen, oxygen and fluorine. Uses Rocket propellant Chlorine pentafluoride was once considered for use as an oxidizer for rockets. As a propellant, it has a higher maximum specific impulse than ClF3, but with the same difficulties in handling. Due to the hazardous nature of chlorine pentafluoride, it has yet to be used in a large scale rocket propulsion system. See also Chlorine trifluoride Hypervalent molecule References External links National Pollutant Inventory - Fluoride and compounds fact sheet New Jersey Hazardous Substance Fact Sheet WebBook page for ClF5 Fluorides Inorganic chlorine compounds Interhalogen compounds Rocket oxidizers Fluorinating agents Oxidizing agents Chlorine(V) compounds Substances discovered in the 1960s
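The hydrolysis equation can be checked for balance by counting atoms on each side; a quick Python sketch (the formula parser below is a simplified helper written for this example and handles only parenthesis-free formulas):

```python
import re
from collections import Counter

def atom_counts(formula: str) -> Counter:
    """Count atoms in a simple formula like 'ClF5' or 'H2O'."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

def side(*terms):
    """Sum atom counts over (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in terms:
        for element, n in atom_counts(formula).items():
            total[element] += coeff * n
    return total

# ClF5 + 2 H2O -> ClO2F + 4 HF
left = side((1, "ClF5"), (2, "H2O"))
right = side((1, "ClO2F"), (4, "HF"))
assert left == right  # the equation is balanced
print(dict(left))  # {'Cl': 1, 'F': 5, 'H': 4, 'O': 2}
```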
Chlorine pentafluoride
Chemistry
450
12,389,663
https://en.wikipedia.org/wiki/Insuetophrynus
Insuetophrynus is a monotypic genus of frogs in the family Rhinodermatidae. The sole species is Insuetophrynus acarpicus, also known as Barrio's frog. It is endemic to Chile and only known from a few localities on the Valdivian Coast Range between Chanchán in the Los Ríos Region in the south and Queule (southernmost Araucanía Region) and Colequal Alto in the north; the fourth locality is Mehuín, which is the type locality. The altitudinal range is asl. Description Adult males measure and females in snout–vent length. The body is sturdy with muscular arms and legs (these frogs are powerful jumpers). The toes are partially webbed and thinner than the fingers, which are short, thick, and unwebbed. The head is wider than long, with a broad, rounded snout. The eyes are large, and the tympanum is visible but not large. The back is reddish brown with some whitish granulations. The hind legs have transverse, darker bands. The throat is pinkish yellow, and the stomach is pale. The skin is dorsally quite granular or warty. The ventral region is also very granular, apart from the throat, which is fairly smooth. Habitat and conservation Insuetophrynus acarpicus inhabits coastal streams in temperate forest. Adults hide under stones during the day, emerging at night to feed along the stream margins. Tadpoles can be found under stones in muddy areas with slow current. The species has a small area of distribution (its known range extends only 33 km along the coast) and its habitat is threatened by clear-cutting and afforestation. References Rhinodermatidae Monotypic amphibian genera EDGE species Amphibians of Chile Endemic fauna of Chile Amphibians Taxonomy articles created by Polbot
Insuetophrynus
Biology
386
8,418,529
https://en.wikipedia.org/wiki/Hermann%20Irving%20Schlesinger
Hermann Irving Schlesinger (October 11, 1882 – October 3, 1960) was an American inorganic chemist, working in boron chemistry. He and Herbert C. Brown discovered sodium borohydride in 1940, and both were involved in the further development of borohydride chemistry. Schlesinger studied chemistry at the University of Chicago from 1900 to 1905, where he received his Ph.D. for work with Julius Stieglitz. In the following two years, he worked with Walther Nernst at the University of Berlin; with Johannes Thiele at the University of Strasbourg; and with John Jacob Abel at Johns Hopkins University. From 1907 to 1960, he taught in the department of chemistry at the University of Chicago, rising through the ranks from instructor to full professor in 1922. He administered the department from 1922 to 1946, and retired in 1949. Schlesinger was honored by membership in the National Academy of Sciences and received the Priestley Medal, the highest honor of the American Chemical Society. Bibliography External links Biography Biographical Memoirs of the National Academy of Sciences 64 (1994), 369–394. 1882 births 1960 deaths 20th-century American chemists American inorganic chemists
Hermann Irving Schlesinger
Chemistry
246
71,605,294
https://en.wikipedia.org/wiki/UGC%202369
UGC 2369 is a pair of interacting galaxies in the constellation Aries, about 424 million light-years away. The two galaxies are called UGC 2369N and UGC 2369S. A tenuous bridge of gas, dust and stars can be seen connecting the two galaxies, created when they pulled material out into space across the diminishing divide between them. Interaction between galaxies is not an uncommon event; however, the merger of two similarly sized galaxies is rare. Images released by NASA show both galaxies distorting as they pull closer together. See also Mice Galaxies NGC 4302 Antennae Galaxies References Interacting galaxies 02369 Aries (constellation)
UGC 2369
Astronomy
172
3,223,727
https://en.wikipedia.org/wiki/Nipple%20%28plumbing%29
In plumbing and piping, a nipple is a fitting, consisting of a short piece of pipe, usually provided with a male pipe thread at each end, for connecting two other fittings. The length of the nipple is usually specified by the overall length with thread. It may have a hexagonal section in the center for a wrench to grasp (sometimes referred to as a "hex nipple"), or it may simply be made from a short piece of pipe (sometimes referred to as a "barrel nipple" or "pipe nipple"). A "close nipple" has no unthreaded area; when screwed tightly between two female fittings, very little of the nipple remains exposed. A close nipple can only be unscrewed by gripping one threaded end with a pipe wrench which will damage the threads and necessitate replacing the nipple, or by using a specialty tool known as a nipple wrench (or known as an internal pipe wrench) which grips the inside of the pipe, leaving the threads undamaged. When the ends are of two different sizes it is called a reducer or unequal nipple. Threads used on nipples are BSP, BSPT, NPT, NPSM and Metric. Chase nipple A chase nipple is a short pipe fitting, which creates a path for wires between two electrical boxes. A chase nipple has male threads on one end only. The other end is a hexagon. The chase nipple passes through the knockouts of two boxes, and is secured by an internally threaded ring called a lock nut. Chase-Shawmut Company, of Boston, is the company which first produced chase nipples. See also Coupling (piping) Piping and plumbing fitting Street elbow References Further reading ASTM A733-03 Standard Specification for Welded and Seamless Carbon Steel and Austenitic Stainless Steel Pipe Nipples. ASTM B687-99(2005)e1 Standard Specification for Brass, Copper, and Chromium-Plated Pipe Nipples. ASME B1.20.7 Hose Coupling Screw Threads, Inch. 
(Quote: The normal sequence of connections, in relation to the direction of flow, is from an externally threaded nipple into an internally threaded coupling) External links Plumbing Piping
Nipple (plumbing)
Chemistry,Engineering
479
16,969,599
https://en.wikipedia.org/wiki/Souring
Souring is a food preparation technique that causes a physical and chemical change in food by exposing it to an acid. This acid can be added explicitly (e.g., vinegar, lemon juice, lime juice, etc.), or can be produced within the food itself by a microbe, such as Lactobacillus. Souring is similar to pickling or fermentation, but souring typically occurs in minutes or hours, while pickling and fermentation can take much longer. Examples Dairy products produced by souring include: Clabber, Cheese, Crème fraîche, Cultured buttermilk, Curd, Filmjölk, Kefir, Paneer, Smetana, Soured milk, Sour cream, and Yogurt. Grain products include: Idli, Sourdough, and Sour mash. Other foods produced by souring include: Ceviche, Kinilaw, and Key lime pie. See also Fermented milk products Food preservation Marination References External links Buttermilk substitution Free lactic acid in sour milk A comparison of sourdough microflora Cultured milk products Cooking techniques Fermentation in food processing Culinary terminology
Souring
Chemistry
249
473,326
https://en.wikipedia.org/wiki/Poise%20%28unit%29
The poise (symbol P) is the unit of dynamic viscosity (absolute viscosity) in the centimetre–gram–second system of units (CGS). It is named after Jean Léonard Marie Poiseuille (see Hagen–Poiseuille equation). The centipoise (1 cP = 0.01 P) is more commonly used than the poise itself. Dynamic viscosity has dimensions of mass/(length × time); in CGS units, 1 P = 1 g⋅cm−1⋅s−1 = 1 dyn⋅s⋅cm−2. The analogous unit in the International System of Units is the pascal-second (Pa⋅s): 1 P = 0.1 Pa⋅s. The poise is often used with the metric prefix centi- because the viscosity of water at 20 °C (standard conditions for temperature and pressure) is almost exactly 1 centipoise. A centipoise is one hundredth of a poise, or one millipascal-second (mPa⋅s) in SI units (1 cP = 10−3 Pa⋅s = 1 mPa⋅s). The CGS symbol for the centipoise is cP. The abbreviations cps, cp, and cPs are sometimes seen. Liquid water has a viscosity of 0.00890 P at 25 °C at a pressure of 1 atmosphere (0.00890 P = 0.890 cP = 0.890 mPa⋅s). See also Poiseuille Viscosity References Centimetre–gram–second system of units Units of dynamic viscosity
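The CGS-to-SI conversion is a single factor of 10, as this small Python sketch shows (the function names are this example's own):

```python
def poise_to_pascal_second(viscosity_p: float) -> float:
    """Convert dynamic viscosity from poise (CGS) to pascal-seconds (SI).
    1 P = 0.1 Pa*s, so 1 cP = 1 mPa*s."""
    return viscosity_p * 0.1

def centipoise_to_pascal_second(viscosity_cp: float) -> float:
    return poise_to_pascal_second(viscosity_cp * 0.01)

# Water at 25 degrees C: 0.890 cP, i.e. about 0.00089 Pa*s (0.890 mPa*s)
print(centipoise_to_pascal_second(0.890))
```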
Poise (unit)
Mathematics
309
39,977,878
https://en.wikipedia.org/wiki/Rousettus%20bat%20coronavirus%20HKU9
Rousettus bat coronavirus HKU9 is a bat Betacoronavirus. See also RNA virus Animal viruses References Betacoronaviruses Animal viral diseases Bat diseases Bat virome
Rousettus bat coronavirus HKU9
Biology
39
73,252,323
https://en.wikipedia.org/wiki/Marginal%20sinus
The marginal sinus is a dural venous sinus surrounding the margin of the foramen magnum inside the skull, accommodated by the groove for the marginal sinus. It usually drains into either the sigmoid sinus or the jugular bulb. It communicates with the basilar venous plexus anteriorly, and the occipital sinus posteriorly (the posterior union of the left and the right marginal sinus usually representing the commencement of the occipital sinus); it may form extracranial communications with the internal vertebral venous plexuses, or deep cervical veins. Clinical significance Arteriovenous fistulas involving the marginal sinus have been described, often following basilar skull fractures. The marginal sinus must be traversed during surgical entry into the subdural space deep to the foramen magnum. References Veins of the head and neck Anatomy Human anatomy
Marginal sinus
Biology
187
24,621,384
https://en.wikipedia.org/wiki/Irrelevant%20ideal
In mathematics, the irrelevant ideal is the ideal of a graded ring generated by the homogeneous elements of degree greater than zero. It corresponds to the origin in the affine space, which cannot be mapped to a point in the projective space. More generally, a homogeneous ideal of a graded ring is called an irrelevant ideal if its radical contains the irrelevant ideal. The terminology arises from the connection with algebraic geometry. If R = k[x0, ..., xn] (a multivariate polynomial ring in n+1 variables over an algebraically closed field k) is graded with respect to degree, there is a bijective correspondence between projective algebraic sets in projective n-space over k and homogeneous, radical ideals of R not equal to the irrelevant ideal. More generally, for an arbitrary graded ring R, the Proj construction disregards all irrelevant ideals of R. Notes References Sections 1.5 and 1.8 of Commutative algebra Algebraic geometry
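In symbols, the definitions above can be written out as follows (a standard formulation, consistent with the description in this article):

```latex
% A graded ring decomposes into homogeneous pieces:
R = \bigoplus_{d \ge 0} R_d ,
% and its irrelevant ideal collects the positive-degree part:
R_+ = \bigoplus_{d > 0} R_d .
% For R = k[x_0, \dots, x_n] graded by total degree, this is
R_+ = (x_0, x_1, \dots, x_n) ,
% the ideal generated by the variables, whose zero locus in affine
% space is the origin -- the one point with no image in \mathbb{P}^n.
```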
Irrelevant ideal
Mathematics
195
54,940,860
https://en.wikipedia.org/wiki/Integrin-like%20receptors
Integrin-like receptors (ILRs) are found in plants and carry unique functional properties similar to true integrin proteins. True homologs of integrins exist in mammals, invertebrates, and some fungi but not in plant cells. Mammalian integrins are heterodimer transmembrane proteins that play a large role in bidirectional signal transduction. As transmembrane proteins, integrins connect the extracellular matrix (ECM) to the plasma membrane of the animal cell. The extracellular matrix of plant cells, fungi, and some protists is referred to as the cell wall. The plant cell wall is composed of a tough cellulose polysaccharide rather than the collagen fibers of the animal ECM. Even with these differences, research indicates that similar proteins involved in the interaction between the ECM and animal cells are also involved in the interaction of the cell wall and plant cells. Integrin-like receptors and integrin-linked kinases together have been implicated in surface adhesion, immune response, and ion accumulation in plant cells in a manner akin to the family of integrin proteins. Structure ILRs contain a transmembrane region with a large extracellular portion and a smaller intracellular section. Most commonly, ILRs resemble the β1 subunit found in integrin proteins. This structural similarity between ILRs and integrins was determined through various imaging techniques, SDS-PAGE, western blotting, and kinetic studies. These proteins are around 55 to 110 kDa, and some studies have found them to react with animal anti-β1 antibodies, suggesting a structural similarity between animal integrins and these plant integrin-like receptors. Some ILRs mimic the α-subunit of integrin proteins containing the ligand-binding region known as the I-domain. The I-domain functions primarily in the recognition and binding of a ligand. Conformational changes in the I-domain lead to ILR activation and are dependent on metal ion interaction at metal-ion-dependent adhesion sites (MIDAS).
Activation of these sites occurs in the presence of Mg2+, Mn2+, and Ca2+. The extracellular domain of most ILRs contains the highly conserved tripeptide sequence Arg-Gly-Asp (RGD). This sequence is commonly found in integrins and other molecules that attach to the extracellular matrix for cell adhesion. The discovery of the RGD sequence in many proteins suggests the same adhesive ability. While the RGD sequence is the most common, some ILRs have been found with sequences that are similar but differ in one amino acid. A plant protein with structural similarity to integrins contains the amino acid sequence Asn-Gly-Asp (NGD). Function Plants ILRs play a role in protein-protein interaction and are found in the plasma membrane of plant cells in the leaf, root and vasculature of plants. Plants produce a physiological response that is dependent on information obtained from the environment. The majority of this information is received through mechanical signals which include touch, sound, and gravity. Therefore, the interaction between the ECM and the internal cell response is critically important for receiving and interpreting information. The specific functionality of ILRs in plants is not well characterized, but in addition to mechanical signal transduction, they are believed to have some role in plant immune response, osmotic stress sensitivity, and ion regulation within the cell. Surface-Adhesion Some β1 integrin-like receptors on the root caps of tobacco plants are found to play a role in the plant's ability to detect gravitational pull and aid in root elongation in a process known as gravitropism. ILRs are found on the cellular membrane of plant protoplasts. The dispersion of the ILRs on these protoplasts can vary from species to species. The variation in the ILR surface placement has been correlated to species growth behavior.
For example, Rubus fruticosus cells have a uniform distribution of ILRs on their cellular membrane, while Arabidopsis thaliana contains ILRs that cluster, resulting in cell growth clusters. Immunology Integrin-like receptors have the capability to relay messages from inside the cell to the outside of the cell and vice versa. This is an important factor in the initiation and sustaining of an immunological response. A good body of research has found ILR proteins that model the glycoproteins vitronectin and fibronectin, two important molecules in membrane stability and homeostasis. These vitronectin-like and fibronectin-like proteins provide further support that compounds in the cell membrane of plant cells have important regulatory functions in the immune response, such as the activation of immune cells. The non-race-specific disease resistance-1 (NDR1) protein was primarily discovered to have a large function in plant immune response. This protein shares functional homology with mammalian integrins in that it connects the ECM to the intracellular matrix to both stabilize the cell structure and allow for signal exchange. NDR1 is also believed to be involved in cell wall adhesion to the plasma membrane and fluid retention of the cell. Fungi In addition to adhesive properties, integrin-like receptors with RGD-binding sites have special functions in fungi. Using peptides that inhibit the activity of proteins with RGD activation, ILRs in Magnaporthe oryzae were found to initiate fungal conidial adhesion and the appressorium formation needed for host infection. Candida albicans is an opportunistic fungus with an integrin-like receptor protein known as αInt1p. This protein maintains structural similarity and sequence homology to the α-subunits of human leukocyte integrins. The αInt1p protein contains an RGD extracellular binding site and allows the organism to attach to epithelial cells in the host organism to begin the infection process.
Once bound, the protein then assists in the morphogenesis of the fungus into a tube-like structure. Invertebrates In invertebrates, protein structures with the RGD-binding sequence assist in an array of different functions, such as the repairing of wounds and cell adhesion. Integrin-like receptors are found in mollusks and have a part in the spreading of hemocytes to damaged locations in the cellular system. Studies that block the RGD-binding site of these integrin-like receptors indicate a reduction in hemocyte aggregation and spreading, suggesting that the RGD-binding site on integrin-like receptors is a necessary component in the organismal immune response. Further support for this claim is that RGD-binding inhibition reduces nodule formation and encapsulation in the invertebrate immune response. References Transmembrane receptors
Integrin-like receptors
Chemistry
1,421
432,450
https://en.wikipedia.org/wiki/Round-off%20error
In computing, a roundoff error, also called rounding error, is the difference between the result produced by a given algorithm using exact arithmetic and the result produced by the same algorithm using finite-precision, rounded arithmetic. Rounding errors are due to inexactness in the representation of real numbers and the arithmetic operations done with them. This is a form of quantization error. When using approximation equations or algorithms, especially when using finitely many digits to represent real numbers (which in theory have infinitely many digits), one of the goals of numerical analysis is to estimate computation errors. Computation errors, also called numerical errors, include both truncation errors and roundoff errors. When a sequence of calculations with an input involving any roundoff error are made, errors may accumulate, sometimes dominating the calculation. In ill-conditioned problems, significant error may accumulate. In short, there are two major facets of roundoff errors involved in numerical calculations: The ability of computers to represent both magnitude and precision of numbers is inherently limited. Certain numerical manipulations are highly sensitive to roundoff errors. This can result from both mathematical considerations as well as from the way in which computers perform arithmetic operations. Representation error The error introduced by attempting to represent a number using a finite string of digits is a form of roundoff error called representation error. For example, the fraction 1/3 = 0.333… cannot be represented exactly by any finite number of decimal digits; truncated to five digits it becomes 0.33333, a representation error of 1/3 − 0.33333 ≈ 3.3 × 10−6. Increasing the number of digits allowed in a representation reduces the magnitude of possible roundoff errors, but any representation limited to finitely many digits will still cause some degree of roundoff error for uncountably many real numbers. Additional digits used for intermediary steps of a calculation are known as guard digits. Rounding multiple times can cause error to accumulate.
For example, if 9.945309 is rounded to two decimal places (9.95), then rounded again to one decimal place (10.0), the total error is 0.054691. Rounding 9.945309 to one decimal place (9.9) in a single step introduces less error (0.045309). This can occur, for example, when software performs arithmetic in x86 80-bit floating-point and then rounds the result to IEEE 754 binary64 floating-point. Floating-point number system Compared with the fixed-point number system, the floating-point number system is more efficient in representing real numbers, so it is widely used in modern computers. While the real numbers are infinite and continuous, a floating-point number system is finite and discrete. Thus, representation error, which leads to roundoff error, occurs under the floating-point number system. Notation of floating-point number system A floating-point number system is characterized by four integers: β, the base or radix; p, the precision; and [L, U], the exponent range, where L is the lower bound and U is the upper bound. Any number x in the system has the form x = ±(d_0.d_1d_2…d_(p−1))_β × β^E, where each digit d_i is an integer with 0 ≤ d_i ≤ β − 1 for i = 0, …, p − 1, and the exponent E is an integer with L ≤ E ≤ U. Normalized floating-number system A floating-point number system is normalized if the leading digit d_0 is always nonzero unless the number is zero. Since the significand is d_0.d_1d_2…d_(p−1), the significand of a nonzero number in a normalized system satisfies 1 ≤ d_0.d_1d_2…d_(p−1) < β. Thus, the normalized form of a nonzero IEEE floating-point number is ±(1.b_1b_2…b_(p−1))_2 × 2^E, where each b_i is 0 or 1. In binary, the leading digit is always 1, so it is not written out and is called the implicit bit. This gives an extra bit of precision so that the roundoff error caused by representation error is reduced. Since the floating-point number system is finite and discrete, it cannot represent all real numbers, which means infinitely many real numbers can only be approximated by some finite numbers through rounding rules. The floating-point approximation of a given real number x obtained by rounding can be denoted fl(x).
The total number of normalized floating-point numbers is 2(β − 1)β^(p−1)(U − L + 1) + 1, where
2 counts the choice of sign, being positive or negative,
(β − 1) counts the choice of the leading digit,
β^(p−1) counts the remaining significand digits,
(U − L + 1) counts the choice of exponents, and
1 counts the case when the number is 0.
IEEE standard In the IEEE standard the base is binary, i.e. β = 2, and normalization is used. The IEEE standard stores the sign, exponent, and significand in separate fields of a floating point word, each of which has a fixed width (number of bits). The two most commonly used levels of precision for floating-point numbers are single precision and double precision. Machine epsilon Machine epsilon can be used to measure the level of roundoff error in the floating-point number system. Here are two different definitions. The machine epsilon, denoted ε_mach, is the maximum possible absolute relative error in representing a nonzero real number x in a floating-point number system: ε_mach = max |x − fl(x)| / |x|. The machine epsilon, denoted ε_mach, is the smallest number ε such that fl(1 + ε) > 1. Thus, fl(1 + δ) = 1 whenever |δ| < ε_mach. Roundoff error under different rounding rules There are two common rounding rules, round-by-chop and round-to-nearest. The IEEE standard uses round-to-nearest. Round-by-chop: The base-β expansion of x is truncated after the (p − 1)-th digit. This rounding rule is biased because it always moves the result toward zero. Round-to-nearest: fl(x) is set to the nearest floating-point number to x. When there is a tie, the floating-point number whose last stored digit is even (also, the last digit, in binary form, is equal to 0) is used. For the IEEE standard, where the base β is 2, this means that when there is a tie it is rounded so that the last digit is equal to 0. This rounding rule is more accurate but more computationally expensive. Rounding so that the last stored digit is even when there is a tie ensures that it is not rounded up or down systematically. This is to try to avoid the possibility of an unwanted slow drift in long calculations due simply to a biased rounding.
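For IEEE double precision (β = 2, p = 53), the behavior of machine epsilon and of round-to-nearest with ties-to-even can be observed directly in Python; `sys.float_info.epsilon` reports 2^−52, the gap between 1.0 and the next representable double:

```python
import sys

# The gap between 1.0 and the next representable double is 2**-52.
eps = sys.float_info.epsilon
assert eps == 2.0 ** -52

# Adding half that gap lands exactly halfway between 1.0 and 1.0 + eps;
# round-to-nearest with ties-to-even picks 1.0, so the addition is absorbed.
assert 1.0 + eps / 2 == 1.0

# Anything past the halfway point rounds up to the next double.
assert 1.0 + eps > 1.0
assert 1.0 + 0.75 * eps == 1.0 + eps
```

Note the subtlety mentioned in the text: under round-to-nearest the smallest ε with fl(1 + ε) > 1 is just above eps/2, not eps itself, which is why the two definitions of machine epsilon are not quite equivalent for this rounding rule.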
The following example illustrates the level of roundoff error under the two rounding rules. The rounding rule, round-to-nearest, leads to less roundoff error in general. Calculating roundoff error in IEEE standard Suppose the usage of round-to-nearest and IEEE double precision. Example: the decimal number 9.4 = (1001.0110011001100110…)₂ can be rearranged into 1.0010110011001100110… × 2^3. Since the 53rd bit to the right of the binary point is a 1 and is followed by other nonzero bits, the round-to-nearest rule requires rounding up, that is, adding 1 to the 52nd bit. Thus, the normalized floating-point representation in IEEE standard of 9.4 is fl(9.4) = 9.4 − 0.4 × 2^(−48) + 2^(−49). Now the roundoff error can be calculated when representing 9.4 with fl(9.4). This representation is derived by discarding the infinite tail 0.4 × 2^(−48) from the right and then adding 2^(−49) in the rounding step. Then fl(9.4) = 9.4 − 0.4 × 2^(−48) + 2^(−49) = 9.4 + 0.2 × 2^(−49). Thus, the roundoff error is 0.2 × 2^(−49) ≈ 3.55 × 10^(−16). Measuring roundoff error by using machine epsilon The machine epsilon can be used to measure the level of roundoff error when using the two rounding rules above. Below are the formulas and corresponding proof. The first definition of machine epsilon is used here. Theorem Round-by-chop: ε_mach = β^(1−p) Round-to-nearest: ε_mach = (1/2)β^(1−p) Proof Let x = d_0.d_1d_2…d_(p−1)d_pd_(p+1)… × β^n, where d_0 ≠ 0, and let fl(x) be the floating-point representation of x. Since round-by-chop is being used, it is |x − fl(x)|/|x| = (0.d_pd_(p+1)… × β^(n−p+1))/(d_0.d_1d_2… × β^n). In order to determine the maximum of this quantity, there is a need to find the maximum of the numerator and the minimum of the denominator. Since d_0 ≠ 0 (normalized system), the minimum value of the denominator is β^n. The numerator is bounded above by β^(n−p+1). Thus, |x − fl(x)|/|x| ≤ β^(n−p+1)/β^n = β^(1−p). Therefore, ε_mach = β^(1−p) for round-by-chop. The proof for round-to-nearest is similar. Note that the first definition of machine epsilon is not quite equivalent to the second definition when using the round-to-nearest rule but it is equivalent for round-by-chop. Roundoff error caused by floating-point arithmetic Even if some numbers can be represented exactly by floating-point numbers (such numbers are called machine numbers), performing floating-point arithmetic may lead to roundoff error in the final result.
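The representation error for 9.4 can be verified exactly in Python with the fractions module, which recovers the precise rational value stored in an IEEE double (a minimal check, assuming round-to-nearest double precision as above):

```python
from fractions import Fraction

stored = Fraction(9.4)    # exact rational value of the double nearest 9.4
exact = Fraction(47, 5)   # the true value 9.4

err = stored - exact
# The roundoff error is +0.2 * 2**-49 = 1 / (5 * 2**49).
print(err)                # 1/2814749767106560
print(float(err))         # about 3.55e-16
```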
Addition Machine addition consists of lining up the decimal points of the two numbers to be added, adding them, and then storing the result again as a floating-point number. The addition itself can be done in higher precision but the result must be rounded back to the specified precision, which may lead to roundoff error. For example, adding 2^(−53) to 1 in IEEE double precision gives 1.00…0 × 2^0 + 1.00…0 × 2^(−53) = 1.00…01 × 2^0, whose significand needs 53 bits after the binary point. This is saved as 1.00…0 × 2^0 since round-to-nearest is used in the IEEE standard. Therefore, 1 + 2^(−53) is equal to 1 in IEEE double precision and the roundoff error is 2^(−53). This example shows that roundoff error can be introduced when adding a large number and a small number. The shifting of the decimal points in the significands to make the exponents match causes the loss of some of the less significant digits. The loss of precision may be described as absorption. Note that the addition of two floating-point numbers can produce roundoff error when their sum is an order of magnitude greater than that of the larger of the two. For example, consider a normalized floating-point number system with base β = 10 and precision p = 2. Then fl(62) = 62 and fl(41) = 41. Note that 62 + 41 = 103 but fl(103) = 100. There is a roundoff error of 103 − fl(103) = 3. This kind of error can occur alongside an absorption error in a single operation. Multiplication In general, the product of two p-digit significands contains up to 2p digits, so the result might not fit in the significand. Thus roundoff error will be involved in the result. For example, consider a normalized floating-point number system with the base β = 10 and at most 2 significand digits. Then fl(77) = 77 and fl(88) = 88. Note that 77 × 88 = 6776 but, rounding by chop, fl(6776) = 6700 since there are at most 2 significand digits. The roundoff error would be 6776 − fl(6776) = 76. Division In general, the quotient of 2p-digit significands may contain more than p digits. Thus roundoff error will be involved in the result. For example, if the normalized floating-point number system above is still being used, then 1/3 = 0.333… but, rounding by chop, fl(1/3) = 0.33. So, the tail 0.333… − 0.33 = 0.00333… is cut off. Subtraction Absorption also applies to subtraction.
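Absorption is easy to observe directly in Python. In IEEE double precision the spacing (ulp) between consecutive floats near 10^16 is 2, so adding 1.0 leaves the sum unchanged, just as 2^(−53) is lost when added to 1 (a minimal sketch):

```python
# Adding a number that is too small to affect the larger operand
# leaves it unchanged ("absorption").
print(1.0 + 2.0**-53 == 1.0)   # True: the tie rounds to even, back to 1.0

# ulp(1e16) is 2 in double precision, so adding 1.0 is absorbed,
# even when repeated many times:
total = 1e16
for _ in range(64):
    total += 1.0               # each 1.0 is absorbed individually
print(total == 1e16)           # True

print(1e16 + 64.0 == 1e16)     # False: 64 is large enough to survive
```

Summing the small terms first and adding them to the large one in a single operation avoids the repeated absorption, which is the idea behind compensated schemes such as Kahan summation mentioned in the See also list.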
For example, subtracting 2^(−60) from 1 in IEEE double precision gives 1.00…0 × 2^0 − 1.00…0 × 2^(−60) = 0.11…1 × 2^0, a value whose significand needs more than 52 bits. This is saved as 1.00…0 × 2^0 since round-to-nearest is used in the IEEE standard. Therefore, 1 − 2^(−60) is equal to 1 in IEEE double precision and the roundoff error is 2^(−60). The subtracting of two nearly equal numbers is called subtractive cancellation. When the leading digits are cancelled, the result may be too small to be represented exactly and it will just be represented as 0. For example, let |ε| < ε_mach/2, where the second definition of machine epsilon is used here. What is the solution to (1 + ε) − (1 − ε)? It is known that 1 + ε and 1 − ε are nearly equal numbers, and (1 + ε) − (1 − ε) = 2ε. However, in the floating-point number system, fl(fl(1 + ε) − fl(1 − ε)) = fl(1 − 1) = 0. Although 2ε is easily big enough to be represented, both instances of ε have been rounded away, giving 0. Even with a somewhat larger ε, the result is still significantly unreliable in typical cases. There is not much faith in the accuracy of the value because the most uncertainty in any floating-point number is in the digits on the far right. For example, 1.99999 × 10^2 − 1.99998 × 10^2 = 0.00001 × 10^2 = 1 × 10^(−3). The result 1 × 10^(−3) is clearly representable, but there is not much faith in it. This is closely related to the phenomenon of catastrophic cancellation, in which the two numbers are known to be approximations. Accumulation of roundoff error Errors can be magnified or accumulated when a sequence of calculations is applied on an initial input with roundoff error due to inexact representation. Unstable algorithms An algorithm or numerical process is called stable if small changes in the input only produce small changes in the output, and unstable if large changes in the output are produced. For example, the computation of f(x) = √(1 + x) − 1 using the "obvious" method is unstable near x = 0 due to the large error introduced in subtracting two similar quantities, whereas the equivalent expression f(x) = x/(√(1 + x) + 1) is stable. Ill-conditioned problems Even if a stable algorithm is used, the solution to a problem may still be inaccurate due to the accumulation of roundoff error when the problem itself is ill-conditioned.
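The contrast between the unstable and stable formulations of √(1 + x) − 1 can be demonstrated numerically (a minimal sketch; the tiny test value is chosen for illustration, and for small x the true value is approximately x/2):

```python
import math

def f_naive(x):
    # Subtracts two nearly equal quantities: catastrophic cancellation.
    return math.sqrt(1.0 + x) - 1.0

def f_stable(x):
    # Algebraically identical rearrangement with no cancellation.
    return x / (math.sqrt(1.0 + x) + 1.0)

x = 1e-17
print(f_naive(x))   # 0.0: all information lost, since fl(1 + x) = 1
print(f_stable(x))  # 5e-18, accurate to full precision
```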
The condition number of a problem is the ratio of the relative change in the solution to the relative change in the input. A problem is well-conditioned if small relative changes in input result in small relative changes in the solution. Otherwise, the problem is ill-conditioned. In other words, a problem is ill-conditioned if its condition number is "much larger" than 1. The condition number is introduced as a measure of the roundoff errors that can result when solving ill-conditioned problems. See also Precision (arithmetic) Truncation Rounding Loss of significance Floating point Kahan summation algorithm Machine epsilon Significant digits Wilkinson's polynomial References Further reading External links Roundoff Error at MathWorld. 20 Famous Software Disasters Numerical analysis
Round-off error
Mathematics
2,550
13,337,703
https://en.wikipedia.org/wiki/Whinstone
Whinstone is a term used in the quarrying industry to describe any hard dark-coloured rock. Examples include the igneous rocks basalt and dolerite, as well as the sedimentary rock chert. Etymology The Northern English/Scots term whin is first attested in the fourteenth century, and the compound whinstone from the sixteenth. The Oxford English Dictionary concludes that the etymology of whin is obscure, though it has been claimed, fancifully, that the term 'whin' derives from the sound it makes when struck with a hammer. Description Massive outcrops of whinstone occur at the Pentland Hills, Scotland, and the Whin Sill, England. It is used for road chippings and dry stone walls, but its natural angular shapes do not fit together well and are not easy to build with, and its hardness makes it a difficult material to work. A common use, in its ground by-product state called whindust, is in the laying of patios and driveways. References Rocks Quarrying
Whinstone
Physics
212
67,684,016
https://en.wikipedia.org/wiki/Concerted%20metalation%20deprotonation
Concerted metalation-deprotonation (CMD) is a mechanistic pathway through which transition-metal catalyzed C–H activation reactions can take place. In a CMD pathway, the C–H bond of the substrate is cleaved and the new C–Metal bond forms through a single transition state. This process does not go through a metal hydride species that is bound to the cleaved hydrogen atom. Instead, a carboxylate or carbonate base deprotonates the substrate. The first proposal of a concerted metalation deprotonation pathway was by S. Winstein and T. G. Traylor in 1955 for the acetolysis of diphenylmercury. It was found to be the lowest energy transition state in a number of computational studies, was experimentally confirmed through NMR experiments, and has been hypothesized to occur in mechanistic studies. While there are a number of different possible mechanisms for C–H activation, a CMD pathway is common for high valent, late transition metals like PdII, RhIII, IrIII, and RuII. The C–H bonds that have been found to undergo C–H activation through CMD include those that are aryl, alkyl, and alkenyl. Investigations into CMD paved the way for the development of many new C–H functionalization reactions, especially in the areas of direct arylation and alkylation by palladium and ruthenium. Mechanism CMD begins with a high valent, late transition metal like PdII that may or may not be bound to a carboxylate anion. In the initial stages, there is usually a coordination of the C–H bond with the metal to form a metal–hydrocarbon sigma complex. The computed transition state involves concerted partial formation of a carbon–metal bond and partial protonation of the carboxylate. At the same time, any anionic metal–carboxylate bond begins to break, as does the carbon–hydrogen bond that is being activated. Compared to other possible processes such as oxidative addition of the C–H bond to the metal, CMD is lower in energy in many cases. 
A transition state in which the carboxylate is bound to the metal can be referred to as either CMD or AMLA, which stands for "ambiphilic metal–ligand assistance," but the latter emphasizes that the carboxylate acts as a ligand during the transition state. History In 1955, S. Winstein and T. G. Traylor published a study of the mechanism of acetolysis of organomercury compounds. They propose a series of possible mechanisms for the process, which they rule out based on their kinetic data. A concerted metalation deprotonation is considered, and they are unable to rule it out through the data they collect. The metalation of organic C–H bonds was extended from mercury to palladium in 1968 by J. M. Davidson and C. Triggs, who identified that palladium acetate reacts with benzene in perchloric acid and acetic acid to give biphenyl, palladium(0), and 2 equivalents of acetic acid through an organopalladium intermediate. Early mechanistic studies found that palladium acetate was the best palladium precatalyst due to the presence of the acetate ligand. Mechanistic investigation has been ongoing since these initial discoveries, and infrared spectroscopy on the picosecond–millisecond time scale was used in 2021 to observe the states involved in proton transfer from acetic acid to a metalated ligand, which is the microscopic reverse of a concerted metalation deprotonation process. Examples Reaction systems that are less efficient or entirely inactive in the absence of carboxylic acids and carboxylate bases are likely to proceed through a concerted metalation deprotonation reaction pathway. An example of such a reaction with an sp3 C–H bond, reported in 2007 by Keith Fagnou and coworkers, is an intramolecular cyclization that uses a palladium catalyst. A notable example of a reaction that is catalyzed by ruthenium in which directed metalation occurs through CMD was reported by Igor Larrosa and coworkers in 2018.
The ruthenium catalyst is functional group tolerant and enables the late stage synthesis of pharmaceutically relevant biaryls. Importance of carboxylate Many C–H activation reactions, particularly those involving late transition metals, require carboxylate or carbonate bases. The need for this reaction component often suggests the occurrence of a CMD pathway. However, in order to be classified as CMD, the transition state does not need to involve the carboxylate as a ligand on the metal. Common sources of carboxylate include pivalate, acetate, and benzoate. References Organometallic chemistry Organic chemistry
Concerted metalation deprotonation
Chemistry
1,030
40,839,292
https://en.wikipedia.org/wiki/Local%20Environment
Local Environment: The International Journal of Justice and Sustainability is a peer-reviewed academic journal covering the fields of urban planning, environmental policy, and sustainable development with a focus on the intersection of social justice and sustainability in the local environment. The journal's audience and contributors include "researchers, activists, non-governmental organisations, students, teachers, policy makers and practitioners". It is published monthly by Routledge and was established in 1996. The editor-in-chief is Julian Agyeman of Tufts University. Associate editors are Stewart Barr of Exeter (UK), Michelle Thompson-Fawcett of Otago (NZ), and Robert Krueger of Worcester Polytechnic Institute (USA). According to the Journal Citation Reports, the journal has a 2017 impact factor of 1.928. It was ranked 13th among top urban studies and planning journals and 18th among sustainable development journals in 2018. References External links Environmental social science journals Geography journals Urban studies and planning journals Taylor & Francis academic journals Academic journals established in 1996 English-language journals Routledge academic journals Monthly journals
Local Environment
Environmental_science
212
53,281,588
https://en.wikipedia.org/wiki/Comet%20%28experiment%29
COMET (Coherent Muon to Electron Transition) is a nuclear physics experiment in J-PARC, Tokai, Japan. In contrast to the usual muon decay to an electron and two neutrinos, COMET seeks to look for neutrinoless muon to electron conversion, where the electron flies away with an energy of 104.8 MeV. Muon to electron conversion is not forbidden in the Standard Model, but the branching ratio predicted from neutrino oscillations alone is vanishingly small. In beyond the Standard Model approaches the rate of muon to electron conversion can be greatly enhanced, e.g. via supersymmetric interactions. COMET will be using a new beamline connecting the J-PARC main ring and the J-PARC Nuclear and Particle Physics Experimental Hall (NP hall). The current spokesperson is Kuno Yoshitaka alongside project manager Mihara Satoshi. The collaboration consists of universities from 15 countries. See also Mu2e experiment References External links SINDRUM MECO Particle experiments Physics beyond the Standard Model Science and technology in Japan
Comet (experiment)
Physics
211
58,138,832
https://en.wikipedia.org/wiki/Nadir%20and%20Occultation%20for%20Mars%20Discovery
Nadir and Occultation for MArs Discovery (NOMAD) is a 3-channel spectrometer on board the ExoMars Trace Gas Orbiter (TGO) launched to Mars orbit on 14 March 2016. NOMAD is designed to perform high-sensitivity orbital identification of atmospheric components, concentration and temperature, their sources, loss, and cycles. It measures the sunlight reflected from the surface and atmosphere of Mars, and it analyses its wavelength spectrum to identify the components of the Martian atmosphere that may suggest a biological source. The Principal Investigator is Ann Carine Vandaele, from the Belgian Institute for Space Aeronomy, Belgium. Overview NOMAD is one of four science instruments on board the European ExoMars TGO orbiter. This spectrometer consists of three separate channels: solar occultation (SO), limb nadir and occultation (LNO), and ultraviolet and visible spectrometer (UVIS). The first two channels work in the infrared (2.2 to 4.3 μm); the third channel (UVIS) works in the UV-visible range (0.2 to 0.65 μm), which is able to measure ozone and sulphuric acid, and perform aerosol studies. Measurements are carried out during solar occultation, i.e. the instrument points toward the sunset as the orbiter moves toward or away from the dark side of Mars. It also measures in nadir mode, i.e. looking directly at the sunlight reflected from the surface and atmosphere of Mars. Since 9 April 2018, NOMAD has been measuring the existing atmospheric concentrations of gases, their temperature and total densities. Atmospheric methane concentrations below 1 ppb can be detected. These measurements will also facilitate investigations in the production and loss processes for the cycles of water, carbon, and dust. NOMAD development and fabrication was carried out by OIP Sensor Systems in Belgium, in collaboration with partners in Spain, the United Kingdom, Italy, US, and Canada. Its development was based on the SPICAV spectrometer flown on Venus Express.
Objectives NOMAD will map the composition and distribution of Mars' atmospheric trace gases and isotopes in unprecedented detail. The specific objectives are: search for signs of past or present life on Mars. investigate how the water and geochemical environment varies investigate Martian atmospheric trace gases and their sources. study the surface environment and identify hazards to future crewed missions to Mars. investigate the planet's subsurface and deep interior to better understand the evolution and habitability of Mars. To achieve these objectives, NOMAD covers a spectral region from UV, visible, and infrared that reveals the signatures of the following molecules and isotopologues: CO2 (including 13CO2, 17OCO, 18OCO, C18O2), CO (including 13CO, C18O), H2O (including HDO), NO2, N2O, O3, CH4 (including 13CH4, CH3D), C2H2, C2H4, C2H6, H2CO, HCN, OCS, SO2, HCl, HO2, and H2S. In particular, the detection of the different methane (CH4) isotopologues (13CH4, CH3D) will be key to help determine whether they are of a geological (serpentinisation, clathrates) or a biological source. In addition, NOMAD can detect formaldehyde (H2CO), which is a photochemical product of methane, as well as nitrous oxide (N2O) and hydrogen sulfide (H2S), which are potential atmospheric biosignatures. SO2, a gas related to volcanism, may reveal present or recent volcanic activity on Mars. See also Astrobiology Compact Reconnaissance Imaging Spectrometer for Mars, a spectrometer on the Mars Reconnaissance Orbiter ExoMars programme Jovian Infrared Auroral Mapper, a spectrometer aboard Juno Jupiter orbiter Life on Mars Martian atmosphere Water on Mars References ExoMars Spacecraft instruments Astrobiology Space science experiments
Nadir and Occultation for Mars Discovery
Astronomy,Biology
821
28,058,626
https://en.wikipedia.org/wiki/Neil%20Kelleher%20%28scientist%29
Neil L. Kelleher is the Walter and Mary Elizabeth Glass Professor of Chemistry, Molecular Biosciences, and Medicine at Northwestern University. His research focuses on mass spectrometry, primarily its application to proteomics. He is known mainly for top-down proteomics and the development of the fragmentation technique of electron-capture dissociation with Roman Zubarev while in Fred McLafferty's lab at Cornell University. Early life and education B.S. Pacific Lutheran University M.S. and Ph.D. Cornell University Research interests Mass spectrometry Electron-capture dissociation Proteomics Top-down proteomics Awards Biemann Medal, 2009 Pittsburgh Conference Achievement Award, 2008 Pfizer Award in Enzyme Chemistry (American Chemical Society, Division of Biological Chemistry), 2006 A.F. Findeis Award in Measurement Science (American Chemical Society, Division of Analytical Chemistry), 2006 Beckman Fellow, 2002-2003 Presidential Early Career Award Alfred P. Sloan Fellow Packard Fellow NSF CAREER Award Lilly Analytical Chemistry Award Burroughs Wellcome Fund Young Investigator Searle Scholar Fulbright Scholar References External links Kelleher Group Website Pacific Lutheran University alumni Cornell University alumni Northwestern University faculty 21st-century American chemists Mass spectrometrists Living people 1970 births Harvard University people Harvard University alumni
Neil Kelleher (scientist)
Physics,Chemistry
268
32,790,202
https://en.wikipedia.org/wiki/Varespladib
Varespladib is an inhibitor of the IIa, V, and X isoforms of secretory phospholipase A2 (sPLA2). The molecule acts as an anti-inflammatory agent by disrupting the first step of the arachidonic acid pathway of inflammation. From 2006 to 2012, varespladib was under active investigation by Anthera Pharmaceuticals as a potential therapy for several inflammatory diseases, including acute coronary syndrome and acute chest syndrome. The trial was halted in March 2012 due to inadequate efficacy. The selective sPLA2 inhibitor varespladib (IC50 value 0.009 μM in a chromogenic assay, mole fraction 7.3 × 10^−6) was studied in the VISTA-16 randomized clinical trial (clinicaltrials.gov Identifier: NCT01130246) and the results were published in 2014. The sPLA2 inhibition by varespladib in this setting seemed to be potentially harmful, and thus not a useful strategy for reducing adverse cardiovascular outcomes from acute coronary syndrome. Since 2016, scientific research has focused on the use of varespladib as an inhibitor of snake venom toxins using various types of in vitro and in vivo models. Varespladib showed a significant inhibitory effect on snake venom PLA2, which makes it a potential first-line drug candidate in snakebite envenomation therapy. In 2019, the U.S. Food and Drug Administration (FDA) granted varespladib orphan drug status for its potential to treat snakebite. History Varespladib methyl was originally developed jointly by Eli Lilly and Company and Shionogi & Co., Ltd., and was acquired by Anthera Pharmaceuticals in 2006. A Phase II study demonstrated selective sPLA2 inhibition as well as statistically significant anti-inflammatory responses and reductions in LDL cholesterol levels. Two other Phase II trials, conducted in patients with coronary artery disease, found significant decreases in sPLA2 and LDL cholesterol levels, as well as C-reactive protein (CRP) and other inflammatory biomarkers.
Varespladib methyl has also been shown to further reduce LDL and inflammatory biomarker levels when administered in conjunction with a cholesterol lowering statin therapy. In 2010, a Phase III study entitled VISTA-16 was initiated to evaluate the safety and efficacy of short-term treatment with varespladib methyl in subjects with ACS. The trial was halted in March 2012 due to insufficient efficacy. On November 18, 2013, an excess of myocardial infarctions, and of the composite endpoint of cardiovascular mortality, myocardial infarctions and stroke, in the VISTA-16 study was reported. The first report on its efficacy as an antidote for snake venoms dates from 2016. Due to its oral bioavailability, it is considered a potential first-line field treatment for snakebite envenomation, which could be applied before provision of definitive medical care. Oral varespladib Varespladib methyl (also known as A-002, formerly LY333013 and S-3013) is a secretory phospholipase A2 (sPLA2) inhibitor formerly under development by Anthera Pharmaceuticals as a treatment for acute coronary syndrome (ACS). Varespladib methyl is an orally bioavailable prodrug of the molecule varespladib. From 2006 to 2012, varespladib methyl was under active investigation by Anthera Pharmaceuticals as a potential therapy for several inflammatory diseases, including acute coronary syndrome. In March 2012, Anthera halted further investigation of varespladib per a recommendation from an independent Data Safety Monitoring Board. Varespladib and varespladib methyl were characterised as effective molecules in neutralization of snake venoms and are under experimental evaluation. Intravenous varespladib Varespladib sodium (also known as A-001, previously LY315920 and S-5920) is a sodium salt of varespladib designed for intravenous delivery.
It was under evaluation by Anthera Pharmaceuticals as an anti-inflammatory sPLA2 inhibitor for the prevention of acute chest syndrome (ACS), the leading cause of death for patients with sickle-cell disease. Elevated serum levels of sPLA2 have been observed in sickle-cell patients preceding and during ACS episodes. This profound elevation in sPLA2 levels is not observed in sickle-cell patients at steady-state or during a vaso-occlusive crisis, or in patients with respiratory diseases such as pneumonia. A reduction in serum sPLA2 levels, for example through blood transfusion, reduces the risk of an ACS, suggesting that sPLA2 plays an important role in the onset of ACS. Anthera Pharmaceuticals acquired varespladib sodium from Lilly and Shionogi in 2006. In 2007, the U.S. Food and Drug Administration (FDA) granted varespladib sodium orphan drug status for its potential to treat patients with sickle-cell disease, which was later withdrawn. In 2009, Anthera Pharmaceuticals completed a Phase II study of varespladib sodium in subjects with sickle cell disease at risk for ACS. Inhibitory effect on snake venoms Snakebite envenomation can cause local tissue damage, with edema, hemorrhage, myonecrosis, and systemic toxic responses, including organ failure. In an early report on inhibition of snake venom toxicities, varespladib and its orally bioavailable prodrug methyl-varespladib (LY333013) showed strong inhibition of 28 types of svPLA2s from six continents. Varespladib treatment exerted a significant inhibitory effect on snake venom PLA2 both in vitro and in vivo. Hemorrhage and myonecrosis initiated by D. acutus, A. halys, N. atra, and B. multicinctus in an animal model were significantly reversed by varespladib. Furthermore, edema in gastrocnemius muscle was also attenuated.
The sPLA2 inhibitor, LY315920 (varespladib sodium), and its orally bioavailable prodrug, LY333013 (varespladib methyl), were highly effective in preventing lethality following experimental envenoming by M. fulvius in a porcine animal model. Considering that some of the toxins of snake venoms are enzymes, the search for low molecular weight enzyme inhibitors that could be safely administered immediately after a snakebite re-focused scientists' attention on varespladib. Its ability to neutralize the enzymatic and toxic activities of three isolated PLA2 toxins (from medically important snakes found in different regions around the world) of structural groups I (pseudexin) and II (crotoxin B and myotoxin I) was evaluated. The results obtained showed that varespladib was able to neutralize the in vitro cytotoxic and in vivo myotoxic activities of purified PLA2s of both the structural group I (pseudexin) and II (myotoxin-I and crotoxin B); however, further detailed analyses are needed. Varespladib also effectively inhibited the non-enzymatic myotoxic activity of the snake venom PLA2-like protein (MjTX-II). Co-crystallization of varespladib with the MjTX-II toxin revealed that the compound binds to a hydrophobic channel of the protein. Such interaction blocks fatty acid binding, thus inhibiting allosteric activation of the toxin. This leads to the toxin losing its ability to disrupt cell membranes. In 2019, the U.S. Food and Drug Administration (FDA) designated varespladib an orphan drug for its potential to treat snakebite, although it has not been FDA-approved for that orphan indication. Mechanism Prodrug activation Varespladib methyl, in contrast to varespladib, is orally bioavailable; after absorption from the GI tract, it undergoes rapid ester hydrolysis to the active molecule, varespladib.
sPLA2 inhibition Increased levels of sPLA2 have been observed in patients with cardiovascular disease, and may lead to both acute and chronic disease manifestations by promoting vascular inflammation. Plasma levels of sPLA2 can predict coronary events in patients who recently suffered an ACS as well as in those with stable coronary artery disease. Furthermore, sPLA2 remodels lipoproteins, notably low-density lipoproteins (LDL) and their receptors, which are responsible for removing cholesterol from the body. This remodeling can lead to increased deposition of LDL and cholesterol in the artery wall. In combination with chronic vascular inflammation, these deposits lead to atherosclerosis. Varespladib inhibits the IIA, V and X isoforms of sPLA2 to reduce inflammation, lower and modulate lipid levels, and reduce levels of C-reactive protein (CRP) and interleukin-6 (IL-6), both indicators of inflammation. Snake venom antidote activity sPLA2 is also present in snake venoms and implicated in their toxicity. It plays a role in the morbidity and mortality from snakebite envenomations, triggering induced cell lysis, disrupted hemostasis, and diminished oxygen transport, as well as myotoxicity and neurotoxicity which can lead to paralysis. Varespladib methyl, as well as varespladib, were found to be inhibitors of the sPLA2 of snake venoms. Varespladib methyl was less potent than varespladib. Both showed activity against a broad spectrum of different snake venoms originating from six continents. They protected rodents against neurotoxicity and hemostatic toxicity, increasing survival of envenomed animals. Varespladib also effectively inhibited in vitro and in vivo the non-enzymatic myotoxic activity of snake venom's PLA2-like protein (MjTX-II). Co-crystallization of varespladib with MjTX-II toxin (PDB code: 6PWH) revealed that the drug binds to a hydrophobic channel of the protein. 
This blocks fatty acids from binding there, thus inhibiting their allosteric activation of the toxin, thereby impairing its ability to disrupt cell membranes. References External links Varespladib methyl at Anthera.com Varespladib sodium at Anthera.com Varespladib methyl Phase III study at ClinicalTrials.gov Varespladib sodium Phase II study at ClinicalTrials.gov Tryptamines Carboxamides Carboxylic acids
Varespladib
Chemistry
2,313
67,578,063
https://en.wikipedia.org/wiki/Achnanthaceae
Achnanthaceae is a family of algae belonging to the order Achnanthales. Genera: Achnanthes Bory, 1822 Amphicocconeis M.De Stephano & D.Marino, 2002 Diatomella Haloroundia C.A.Díaz & N.I.Maidana, 2006 Platebaikalia Kulikovskiy, Glushchenko, Genkal & Kociolek, 2020 Platessa H.Lange-Bertalot, 2004 References Algae
Achnanthaceae
Biology
107
60,212,303
https://en.wikipedia.org/wiki/Amy%20Rosemond
Amy D. Rosemond is an American aquatic ecosystem ecologist, biogeochemist, and Distinguished Research Professor at the Odum School of Ecology at the University of Georgia. Rosemond studies how global change affects freshwater ecosystems, including effects of watershed urbanization, nutrient pollution, and changes in biodiversity on ecosystem function. She was elected an Ecological Society of America fellow in 2018, and served as president of the Society for Freshwater Science from 2019 to 2020. Education and early career Rosemond grew up in Florida in the 1970s, where her love of nature was confronted by increasing human pressures on the environment. Rosemond earned her Bachelor of Science degree in zoology from the University of North Carolina, Chapel Hill. She remained at the University of North Carolina, Chapel Hill to complete her Master of Arts degree in biology. Rosemond went on to earn a Ph.D. in biology at Vanderbilt University, where she was co-advised by Vanderbilt faculty Susan Brawley and Oak Ridge National Laboratory research scientist Patrick J. Mulholland. Rosemond conducted her dissertation research at the Oak Ridge National Lab, in Tennessee, USA, studying how both top-down predation and bottom-up nutrient availability affect periphyton in headwater streams. After completing her Ph.D. in 1993, Rosemond was awarded a National Science Foundation postdoctoral research fellowship in environmental biology. She completed her postdoc at the Institute of Ecology at the University of Georgia, during which she conducted research at La Selva Biological Station in Costa Rica examining the top-down and bottom-up effects of predatory fishes and shrimps and phosphorus, respectively, on leaf-litter breakdown and carbon processing. While working at La Selva, Rosemond also conducted research on landscape-scale variation in stream phosphorus concentrations, and its effects on stream detritivore food webs.
Career Rosemond worked as the assistant director of the Institute of Ecology at the University of Georgia from 1998 to 2005. She became an assistant professor at the University of Georgia in 2005 in the Odum School of Ecology, an associate professor in 2011, and a professor in 2017. As of 2019, she had advised or co-advised 17 graduate students and three postdocs at Georgia. Broadly, Rosemond and her lab members research the mechanisms and processes that underlie aquatic ecosystem health and function, and seek to understand how stream and river health is altered by human activities and global change. This involves studying how different stressors, including excess nutrients and land-use change through urbanization, affect ecosystem processes. Excess nutrients and stream ecosystem function Leveraging partnerships with the Coweeta Hydrologic Lab long-term ecological research site, Rosemond and her colleagues have used whole-ecosystem experiments to understand how stream carbon stocks, benthic macroinvertebrates, and higher trophic levels, including salamanders, respond to nitrogen and phosphorus pollution. Her research in this area focuses on how terrestrially derived detrital carbon, including leaves, sticks, and wood that fall into streams, is processed and transmitted through aquatic food webs that are exposed to excess nutrients. She has led research to test the relative importance of nitrogen and phosphorus limitation in stream carbon processing through whole-stream nutrient enrichment studies. Through this work, Rosemond and her collaborators have increased understanding of how nutrients affect energy flow in detritus-based food webs, as previous research on nutrient effects in streams often focused on photosynthetic, algal pathways. 
Awards Distinguished Research Professor, University of Georgia (2022) Fellow of the Ecological Society of America (2018) Creative Research Medal in Natural Sciences and Engineering, University of Georgia Office of Research (2018) National Science Foundation Postdoctoral Fellowship in Environmental Biology Publications Selected journal articles Rosemond, A.D., et al. 2015. Experimental nutrient additions accelerate terrestrial carbon loss from stream ecosystems. Science 347: 1142-1145. Rosemond, A.D., et al. 2010. Non-additive effects of litter mixing are suppressed in a nutrient-enriched stream. Oikos 119: 326-336. Rosemond, A.D., et al. 2008. Nitrogen versus phosphorus demand in a detritus-based headwater stream: what drives microbial to ecosystem response? Verhandlungen des Internationalen Verein Limnologie. 30: 651-655. Rosemond, A.D., et al. 2002. Landscape variation in phosphorus concentration and effects on detritus-based tropical streams. Limnology and Oceanography 47: 278-289. Rosemond, A.D., et al. 1993. Top-down and bottom-up control of stream periphyton: effects of nutrients and herbivores. Ecology 74: 1264-1280. References American ecologists Biogeochemists American women ecologists University of Georgia faculty University of North Carolina at Chapel Hill alumni Vanderbilt University alumni Oak Ridge National Laboratory people Fellows of the Ecological Society of America Living people Year of birth missing (living people) Scientists from Florida American limnologists Women limnologists Presidents of the Society for Freshwater Science
Amy Rosemond
Chemistry
1,035
29,214,414
https://en.wikipedia.org/wiki/Bell-gable
The bell gable is an architectural element crowning the upper end of the wall of church buildings, usually in lieu of a church tower. It consists of a gable end in stone, with small hollow semi-circular arches where the church bells are placed. It is a characteristic example of the simplicity of Romanesque architecture. Overview The bell-gables or espadañas are a feature of Romanesque architecture in Spain. They began replacing bell towers in the 12th century, partly because the Cistercian reform called for simpler and less ostentatious churches, but also for economic and practical reasons: as the Reconquista accelerated, an ever wider territory needed to be re-Christianized and supplied with new churches, and espadañas were cheaper and simpler to build. Today, they are a common sight in small village churches throughout Spain and Portugal. This simple and sober architectural element would later be brought to the Americas and the Philippines by the Iberian colonizers, where it would find widespread use, especially in the earliest structures. The bell gable usually rises over the front façade wall, but in some churches it may be located on top of any other wall or even on top of the toral arch in the midst of the roof. In the Spanish regions of Catalonia and the Valencian Community, bell-gables are also known as campanar de paret (wall bell tower) or campanar de cadireta (little-chair bell tower), because the shape recalls the back of a chair. In Écija, Spain, the bell tower of the church of Santa Bárbara was destroyed by a lightning strike in 1892 and was replaced by an espadaña, a more expedient solution than rebuilding the tower. A bell-cot is a similar structure, but may appear in places other than gables or building ends. 
Main types and styles See also Bell tower Zvonnitsa References External links Bamboo or Brick: The travails of building churches in Spanish Colonial Philippines by Jose Regalado Trota, Ayala Museum Church architecture Architectural elements Types of wall Articles containing video clips
Bell-gable
Technology,Engineering
421
367,577
https://en.wikipedia.org/wiki/Antipodal%20point
In mathematics, two points of a sphere (or n-sphere, including a circle) are called antipodal or diametrically opposite if they are the endpoints of a diameter, a straight line segment between two points on the sphere that passes through its center. Given any point on a sphere, its antipodal point is the unique point at greatest distance, whether measured intrinsically (great-circle distance on the surface of the sphere) or extrinsically (chordal distance through the sphere's interior). Every great circle on a sphere passing through a point also passes through its antipodal point, and there are infinitely many great circles passing through a pair of antipodal points (unlike the situation for any non-antipodal pair of points, which have a unique great circle passing through both). Many results in spherical geometry depend on choosing non-antipodal points, and degenerate if antipodal points are allowed; for example, a spherical triangle degenerates to an underspecified lune if two of the vertices are antipodal. The point antipodal to a given point is called its antipodes, from the Greek meaning "opposite feet". Sometimes the s is dropped, and this is rendered antipode, a back-formation. Higher mathematics The concept of antipodal points is generalized to spheres of any dimension: two points on an n-sphere are antipodal if they are opposite through the centre. Each line through the centre intersects the sphere in two points, one for each ray emanating from the centre, and these two points are antipodal. The Borsuk–Ulam theorem is a result from algebraic topology dealing with such pairs of points. It says that any continuous function from S^n to R^n maps some pair of antipodal points in S^n to the same point in R^n. Here, S^n denotes the n-dimensional sphere and R^n is n-dimensional real coordinate space. The antipodal map sends every point on the sphere to its antipodal point. 
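The claim that the antipode is the unique farthest point under both notions of distance can be checked numerically. A minimal Python sketch (function names are illustrative, not from the article):

```python
import math

def antipode(p):
    """Antipodal map: negate the displacement vector from the center."""
    return tuple(-c for c in p)

def chordal_distance(p, q):
    """Extrinsic distance: straight line through the sphere's interior."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def great_circle_distance(p, q, r=1.0):
    """Intrinsic distance along the surface of a sphere of radius r."""
    cos_angle = sum(a * b for a, b in zip(p, q)) / (r * r)
    return r * math.acos(max(-1.0, min(1.0, cos_angle)))

p = (0.0, 0.0, 1.0)   # a point on the unit 2-sphere ("north pole")
q = antipode(p)       # its antipode ("south pole")

print(chordal_distance(p, q))       # 2.0 -- the diameter
print(great_circle_distance(p, q))  # pi  -- half the circumference
```

For antipodal points on a unit sphere the chordal distance is the full diameter (2) while the great-circle distance is half the circumference (pi); both are maximal over all point pairs.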
If points on the n-sphere are represented as displacement vectors from the sphere's center in Euclidean (n+1)-dimensional space, then two antipodal points are represented by additive inverses v and -v, and the antipodal map can be defined as A(v) = -v. The antipodal map preserves orientation (is homotopic to the identity map) when n is odd, and reverses it when n is even. Its degree is (-1)^(n+1). If antipodal points are identified (considered equivalent), the sphere becomes a model of real projective space. See also Cut locus References External links Spherical geometry Point (geometry)
Antipodal point
Mathematics
518
195,700
https://en.wikipedia.org/wiki/List%20of%20electrical%20engineers
This is a list of electrical engineers (by no means exhaustive), people who have made notable contributions to electrical engineering or computer engineering. See also List of engineers - for lists of engineers from other disciplines List of Russian electrical engineers Engineers Electrical Engineers
List of electrical engineers
Technology,Engineering
51
46,737,135
https://en.wikipedia.org/wiki/SPEARpesticides
SPEARpesticides (Species At Risk) is a trait-based biological indicator system for streams which quantitatively links pesticide contamination to the composition of macroinvertebrate communities. The approach uses species traits that characterize the ecological requirements posed by pesticide contamination in running waters. It is therefore highly specific and only slightly influenced by other environmental factors. SPEARpesticides is linked to the quality classes of the EU Water Framework Directive (WFD). History SPEARpesticides was first developed for Central Germany and later updated. It has been adapted and validated for streams and mesocosms worldwide (Argentina, Australia, Denmark, Finland, France, Germany, Kenya, Switzerland, the USA and Russia) and provides the first ecotoxicological approach to specifically determine the ecological effects of pesticides on aquatic invertebrate communities. Calculation SPEARpesticides estimates pesticide effects and contamination. The calculation is based on monitoring data of invertebrate communities as ascertained for the EU Water Framework Directive (WFD). A simplified version of SPEARpesticides is included in the ASTERICS software for assessing the ecological quality of rivers. A detailed analysis is enabled by the free SPEAR Calculator, which provides the most recent information on species traits and allows specific user settings. The SPEARpesticides index is computed as the relative abundance of vulnerable 'SPecies At Risk' (SPEAR) expected to be affected by pesticides. Relevant species traits comprise the physiological sensitivity towards pesticides, generation time, migration ability and exposure probability. The indicator value of SPEARpesticides at a sampling site is calculated as follows: SPEARpesticides = 100 × [Σ(i=1..n) log(4x_i + 1) · y] / [Σ(i=1..n) log(4x_i + 1)], with n = number of taxa; x_i = abundance of taxon i; y = 1 if taxon i is classified as SPEAR-sensitive; y = 0 if taxon i is classified as SPEAR-insensitive. An application is available for the calculation. 
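The index calculation can be sketched in a few lines of Python. This is a minimal illustration, not the SPEAR Calculator itself; the log(4x+1) abundance weighting follows the published formulation of the index, and the example community below is hypothetical:

```python
import math

def spear_index(abundances, at_risk):
    """Sketch of the SPEARpesticides index: the log-abundance-weighted
    share (in percent) of taxa classified as 'species at risk'.

    abundances -- abundance x_i for each taxon i
    at_risk    -- y flags: 1 if taxon i is SPEAR-sensitive, else 0
    """
    weights = [math.log10(4 * x + 1) for x in abundances]
    total = sum(weights)
    if total == 0:
        return 0.0
    return 100.0 * sum(w * y for w, y in zip(weights, at_risk)) / total

# Hypothetical community of three taxa; the first two are SPEAR-sensitive.
# The abundant insensitive taxon dominates less than raw counts suggest,
# because abundances enter only logarithmically.
print(round(spear_index([10, 5, 100], [1, 1, 0]), 1))  # -> 53.0
```

The logarithmic weighting keeps a single very abundant taxon from swamping the index, which is why the at-risk share above stays near 50% even though the insensitive taxon outnumbers the others.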
Web address of SPEAR calculator References Bioindicators Water quality indicators Pesticides Water pollution
SPEARpesticides
Chemistry,Biology,Environmental_science
406
28,850,554
https://en.wikipedia.org/wiki/Mezepine
Mezepine is a tricyclic antidepressant (TCA) that was never marketed. See also Tricyclic antidepressant References Amines Dibenzazepines Tricyclic antidepressants Abandoned drugs
Mezepine
Chemistry
53
6,490,564
https://en.wikipedia.org/wiki/Legs%20%28Chinese%20constellation%29
The Legs mansion (奎宿, pinyin: Kuí Xiù) is one of the Twenty-eight mansions of the Chinese constellations. It is one of the western mansions of the White Tiger. Cultural significance In East Asian cultures, the Legs mansion (Kuí Xiù) represents wisdom, scholarship and literature. A notable example is a structure known as "Kuiwen Pavilion" (奎文閣) in the many Confucius temples in China and other East Asian countries. Asterisms See also Kui Xing Chinese constellations
Legs (Chinese constellation)
Astronomy
111
22,093,715
https://en.wikipedia.org/wiki/Foilbacks
Foilbacks, in vintage jewellery, refers to the practice of inserting metal foil behind gemstones or faux gemstones to enhance their sparkle and reflective properties. When this foil darkens or peels, the gemstones are often considered dead or lacking in sparkle. Modern jewellers seldom use foiling behind actual gemstones, but faux gems are still made in a similar fashion today. References Jewellery making Jewellery components
Foilbacks
Technology,Engineering
83
56,322,565
https://en.wikipedia.org/wiki/Kazan%20Soda%20Elektrik
The Kazan Soda Elektrik, full name Kazan Soda Elektrik Üretim A.Ş., is a chemical industry and electric energy company in Ankara Province, Turkey, producing natural soda ash and baking soda from trona. The company is a subsidiary of Ciner Holding. Background The trona ore deposits were owned by Rio Tinto Group, an Australian-British multinational and one of the world's largest metals and mining corporations. After survey activities, which lasted more than fifteen years, the company concluded that it would be unable to operate the mining of the trona ore reserves there, and sold the deposits to Ciner Holding in 2010. The construction of the soda products plant began in 2015, after five years of efforts to obtain bureaucratic permissions and financing. The investment budget of the project was US$1.5 billion. The financing of the project was provided by the Industrial and Commercial Bank of China (ICBC), Exim Bank of China and Deutsche Bank, backed by the China Export and Credit Insurance Corporation (Sinosure). Sberbank of Russia contributed financially during the groundbreaking phase. The construction of the facility was carried out by the China Tianchen Engineering Corporation (TCC). The facility was completed within two and a half years. The Kazan Soda Elektrik plant was inaugurated on January 15, 2018, in the presence of Turkish President Recep Tayyip Erdoğan, Minister of Energy and Natural Resources Berat Albayrak, Minister of Labour and Social Security Jülide Sarıeroğlu, Ambassador of China Yu Hongyang and many other high-profile politicians and officials. Plant and production The Kazan Soda Elektrik consists of three sections: mining, processing and cogeneration. While the mining area is located in Kahramankazan district, the production plant is situated within the Sincan district of Ankara Province, northwest of Ankara. It is about north of Ankara. 
The plant's mining section supplies the processing section with the trona solution (trisodium hydrogendicarbonate dihydrate), which is the primary source of soda ash. For this, the trona ore, lying underground at depth, is injected with hot water through drilled boreholes, and the dissolved trona is pumped up in the form of trona solution. The plant has five processing lines. The cogeneration facility produces 380 MWe of electric power and 400 tons of steam. The annual production capacity of the plant is 2.5 million tons of soda ash (sodium carbonate, Na2CO3) and 200,000 tons of baking soda (sodium bicarbonate, NaHCO3). If all the production were exported to Europe, it would increase the supply of this key glass raw material there by around 25%. Around 1,000 people are employed by the company. The trona ore reserve of Kazan Soda Elektrik is the world's second largest. The plant is the biggest soda ash production facility in Europe. With both Kazan Soda and Eti Soda, Ciner Holding makes Turkey a leading soda ash producer worldwide. The soda ash produced has a purity grade of 99.8%, the purest in the world. The total annual export value of the products from Kazan Soda Elektrik and Eti Soda will be US$800 million. Sustainability The company has published a report through CDP scoring its environmental impact. Its scope 1 and 2 emissions intensity in 2019 was 0.345 tonnes CO2e per tonne of product. However, the first implementation of the EU Carbon Border Adjustment Mechanism does not include soda. See also Eti Soda, Turkey Ciner Wyoming, United States References Ciner Glass and Chemicals Group Mining companies of Turkey Chemical companies of Turkey Chemical plants Industrial buildings in Turkey Industrial buildings completed in 2018 Chemical companies established in 2010 Non-renewable resource companies established in 2010 2010 establishments in Turkey Companies based in Ankara Kahramankazan District Sincan, Ankara 21st-century architecture in Turkey
Kazan Soda Elektrik
Chemistry
808
1,029,423
https://en.wikipedia.org/wiki/Megastructure
A megastructure is a very large artificial object, although the limits of precisely how large vary considerably. Some apply the term to any especially large or tall building. Some sources define a megastructure as an enormous self-supporting artificial construct. The products of megascale engineering or astroengineering are megastructures. Most megastructure designs could not be constructed with today's level of industrial technology. This makes their design examples of speculative (or exploratory) engineering. Those that could be constructed easily qualify as megaprojects. Megastructures are also an architectural concept popularized in the 1960s in which a city could be encased in a single building, or a relatively small number of interconnected buildings. Such arcology concepts are popular in science fiction. Megastructures often play a part in the plot or setting of science fiction movies and books, such as Rendezvous with Rama by Arthur C. Clarke. In 1968, Ralph Wilcoxen defined a megastructure as any structural framework into which rooms, houses, or other small buildings can later be installed, uninstalled, and replaced; and which is capable of "unlimited" extension. This type of framework allows the structure to adapt to the individual wishes of its residents, even as those wishes change with time. Other sources define a megastructure as "any development in which residential densities are able to support services and facilities essential for the development to become a self-contained community". Many architects have designed such megastructures. Some of the more notable such architects and architectural groups include the Metabolist Movement, Archigram, Cedric Price, Frei Otto, Constant Nieuwenhuys, Yona Friedman, and Buckminster Fuller. Proposed Atlantropa, a hydroelectric dam to be built across the Strait of Gibraltar, lowering the surface of the Mediterranean Sea by as much as 200 meters. 
Trans-Global Highway, a proposed highway system that would link all six of the inhabited continents on Earth. The highway would network new and existing bridges and tunnels, not only improving ground transportation but also potentially offering a conduit for utility pipelines. Cloud Nine is Buckminster Fuller's proposal for a tensegrity sphere a mile in radius, which would be large enough so that it would float in the sky if heated by only one degree above ambient temperature, creating habitats for mini cities of thousands of people in each "Cloud Nine". Fuller also proposed a marine analog consisting of a hollow terraced floating tetrahedron of reinforced concrete measuring one mile from vertex to vertex supporting a population of one million living in air-deployed residential modules on the exterior with the requisite infrastructure providing utilities (water, power, sewerage, etc.) inside. The modules would have standardized utility ports so as to be completely livable within minutes of arrival, and could be subsequently detached and moved to other such cities. The Line, a 170-kilometer-long linear settlement in Saudi Arabia, a smart city currently in the early stages of construction. Theoretical A number of theoretical structures have been proposed which may be considered megastructures. Stellar scale Most stellar scale megastructure proposals are designs to make use of the energy from a sun-like star while possibly still providing gravity or other attributes that would make it attractive for an advanced civilization. The Alderson disk is a theoretical structure in the shape of a disk, whose outer radius is equivalent to the orbit of Mars or Jupiter and whose thickness is several thousand kilometers. A civilization could live on either side, held by the gravity of the disk and still receive sunlight from a star bobbing up and down in the middle of the disk. 
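Fuller's one-degree claim for Cloud Nine can be sanity-checked with a back-of-envelope buoyancy estimate. In the Python sketch below, every figure other than the one-mile radius is a rough assumption of this example, not from Fuller's proposal:

```python
import math

# Rough buoyancy estimate for a mile-radius air-filled sphere whose interior
# is heated 1 K above ambient (sea-level conditions assumed).
R = 1609.0    # sphere radius in metres (one mile)
T = 288.0     # assumed ambient air temperature, K
dT = 1.0      # interior heated 1 K above ambient
rho = 1.225   # assumed sea-level air density, kg/m^3

volume = (4.0 / 3.0) * math.pi * R ** 3
# At constant pressure an ideal gas expands, so density drops by ~rho * dT / T.
delta_rho = rho * dT / T
lift_kg = delta_rho * volume  # net buoyant lift, kg

print(f"lift: roughly {lift_kg / 1e6:.0f} thousand tonnes")
```

The estimate comes out on the order of tens of thousands of tonnes of lift, which is the scale that makes Fuller's claim of an airborne mini city at least arithmetically plausible, provided the tensegrity shell itself is light enough.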
A Dyson sphere (also known as a Dyson shell) refers to a structure or mass of orbiting objects that completely surrounds a star to make full use of its solar energy. A Matrioshka brain is a collection of multiple concentric Dyson spheres which make use of a star's energy for computing. A Stellar engine either uses the temperature difference between a star and interstellar space to extract energy or serves as a Shkadov thruster. A Shkadov thruster accelerates an entire star through space by selectively reflecting or absorbing light on one side of it. Topopolis (also known as Cosmic Spaghetti) is a large tube that rotates to provide artificial gravity. A Ringworld (or Niven Ring) is an artificial ring encircling a star, rotating faster than orbital velocity to create artificial gravity on its inner surface. A non-rotating variant is a transparent ring of breathable gas, creating a continuous microgravity environment around the star, as in the eponymous Smoke Ring. Related structures which might not be classified as individual stellar megastructures, but occur on a similar scale: A Dyson swarm is a Dyson sphere made up of separately orbiting elements (including large habitats) rather than a single continuous shell. A Dyson bubble is a Dyson sphere in which the individual elements are statites, non-orbital objects held aloft by the pressure of sunlight. Planetary scale A Bishop Ring, Halo or Orbital is a space habitat similar to but much smaller than a Niven Ring. Instead of being centered on a star, it is in orbit around the star and its diameter is typically on the order of magnitude of a planet. By tilting the ring relative to its orbit, the inner surface would experience a nearly conventional day and night cycle. Due to its enormous scale, the habitat would not need to be fully enclosed like the Stanford torus; instead, its atmosphere would be retained solely by centripetal gravity and side walls, allowing an open sky. 
Globus Cassus is a hypothetical project for the transformation of Planet Earth into a much bigger, hollow, artificial world with the ecosphere on its inner surface. This model serves as a tool for understanding how the real world functions. Shellworlds or paraterraforming are inflated shells holding high-pressure air around an otherwise airless world to create a breathable atmosphere. The pressure of the contained air supports the weight of the shell. Completely hollow shell worlds can also be created on a planetary or larger scale by contained gas alone, also called gravitational balloons, as long as the outward pressure from the contained gas balances the gravitational contraction of the entire structure, resulting in no net force on the shell. The scale is limited only by the mass of gas enclosed; the shell can be made of any mundane material. The shell can have an additional atmosphere on the outside. It can also refer to terraformed or artificial planets with multiple concentric layers. Orbital structures An orbital ring is a dynamically elevated ring placed around the Earth that rotates at an angular rate that is faster than orbital velocity at that altitude; stationary platforms can be supported by the excess centripetal acceleration of the super-orbiting ring (similar in principle to a Launch loop), and ground-tethers can be supported from stationary platforms. The Bernal sphere is a proposal for a spherical space colony with a maximum diameter of 16 kilometers. It would have gravity at the equator, and gradually turn to zero G at the poles. Rotating wheel space stations, such as the Stanford torus, are wheel-like space stations which produce artificial gravity by rotation. Typical designs include transport spokes to a central hub used for docking and/or micro-gravity research. 
The related concepts, O'Neill and McKendree cylinders, are both pairs of counter-rotating cylinders containing habitable areas inside and creating 1g on their inner surfaces via centripetal acceleration. The scale of each concept came from estimating the largest 1g cylinder that could be built from steel (O'Neill) or carbon fiber (McKendree). Hollowed asteroids (or Bubble worlds or Terraria) are spun on their axis for simulated gravity and filled with air, allowing them to be inhabited on the inside. In some concepts, the asteroid is heated to molten rock and inflated into its final form. A stellaser is a star-powered laser or maser. Trans-orbital structures A skyhook is a very long tether that hangs down from orbit. A space elevator is a tether that is fixed to the ground, extending beyond geostationary orbital altitude, such that centripetal force exceeds gravitational force, leaving the structure under slight outward tension. A space fountain is a dynamically supported structure held up by the momentum of masses which are shot up to the top at high speeds from the ground. A launch loop (or Lofstrom loop) is a dynamically supported 2000 km long iron loop that projects up in an arc to 80 km that is ridden by maglev cars while achieving orbital velocity. StarTram Generation 2 is a maglev launch track extending from the ground to above 96% of the atmosphere's mass, supported by magnetic levitation. A rotovator is a rotating tether where the lower tip is moving in the opposite direction to the tether's orbital velocity, reducing the difference in velocity relative to the ground, and hence reducing the velocity of rendezvous; the upper tip is likewise moving at greater than orbital velocity, allowing propellantless transfer between orbits. Around an airless world, such as the moon, the lower tip can actually touch the ground with zero horizontal velocity. As with any momentum exchange tether, orbital energy is gained or lost in the transfer. 
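The rotating habitats described above all trade radius against spin rate: to simulate one g, the centripetal acceleration omega^2 * r at the rim must equal 9.81 m/s^2. A short Python sketch; the habitat radii are approximate figures used here as assumptions for illustration:

```python
import math

G0 = 9.81  # target surface acceleration, m/s^2

def spin_for_gravity(radius_m, g=G0):
    """Angular velocity (rad/s) and rpm so that omega^2 * r = g at the rim."""
    omega = math.sqrt(g / radius_m)
    rpm = omega * 60.0 / (2.0 * math.pi)
    return omega, rpm

# Illustrative radii (rounded published figures, treated as assumptions here):
for name, radius in [("Stanford torus", 900.0),
                     ("O'Neill cylinder", 3200.0),
                     ("Bishop Ring", 1_000_000.0)]:
    _, rpm = spin_for_gravity(radius)
    print(f"{name}: {rpm:.3f} rpm")
```

The pattern is the point: a roughly kilometre-scale torus needs about one rotation per minute, while a thousand-kilometre Bishop Ring spins so slowly that Coriolis effects at the rim become negligible for its inhabitants.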
Fictional A number of structures have appeared in fiction which may be considered megastructures. Stellar scale The Dyson sphere has appeared in many works of fiction, including the Star Trek universe. Larry Niven's series of novels beginning with Ringworld centered on, and originated, the concept of a ringworld, or Niven ring. A ringworld is an artificial ring with a radius roughly equal to the radius of the Earth's orbit (1 AU). A star is present in the center and the ring spins to create g-forces, with inner walls to hold in the atmosphere. The structure is unstable, and required the author to include workarounds in subsequent novels set on it. In the manga Blame! the megastructure is a vast and chaotic complex of metal, concrete, stone, etc., that covers the Earth and assimilates the Moon, and eventually expands to encompass a volume greater than the orbit of Jupiter. In White Light by William Barton and Michael Capobianco, a Topopolis is presented as taking over the entire universe. In the Heechee Saga series by Frederik Pohl, a race of pure energy beings called The Foe have constructed the Kugelblitz, a black hole made of energy and not matter. In the Xeelee series of books by Stephen Baxter, the eponymous alien race constructed the Ring, a megastructure made of cosmic strings, spanning over 10 million light years. In Freelancer, the Dom'Kavosh's Dyson shell is inhabited by a drone race created by the Dom'Kavosh, the Nomads. It is reached via a hyper gate built by the same creators as the Dyson sphere. The Saga of Cuckoo series novel Wall Around a Star mentions a proposal to build a super Dyson sphere, completely enclosing the Galactic Center. The title of the novel Helix by Eric Brown directly references a stellar-scale helical megastructure. Different types of environments and habitats are interspersed along the structure, while their varying distance from the central star affects the climate. 
The player's central quest in computer game Dyson Sphere Program is to construct a Dyson sphere. Gameplay focuses on constructing planetary scale factories as a means towards this end. The Quarg in the game Endless Sky are shown building a massive ring around one of their stars, which is most likely around one astronomical unit in diameter. A completed version of this can also be found in another location. In computer games Space Empires IV and Space Empires V, the player can construct sphereworlds and ringworlds around stars. Dennis E. Taylor's 2020 novel Heaven's River features a Topopolis built around an alien system. Different segments of the structure are built with artificial climate and weather. Planetary and orbital scale Several structures from the fictional Halo universe: The original twelve Halos, seen in Halo: Cryptum, were 30,000 kilometers in diameter; a separate array of six Halos are 10,000 kilometers in diameter, with one of the original twelve later being reduced to this size in Halo: Primordium. The Lesser Ark is a 127,530 km diameter structure from which the Halo Array can be activated and capable of building 10,000 km Halos. The "greater" Ark, seen in Cryptum and Primordium, is capable of producing 30,000 km Halos. Onyx is an artificial planet made entirely out of Forerunner Sentinels (advanced replicating robots). At its core is a "shield world", contained within slipstream space, that is approximately one astronomical unit in diameter. The much smaller Shield World 0459, (approximately 1,400 km in diameter), is the setting for the latter half of Halo Wars. A third shield world, Requiem, is the primary setting for Halo 4. Requiem is an artificial hollow planet encased in a kind of Dyson Sphere. Halo 5: Guardians introduces a fourth shield world, Genesis. High Charity, the Covenant's mobile planetoid station. 
In the Doctor Who episodes The Stolen Earth and Journey's End, a planet-sized space station known as The Crucible is built by the Daleks, a genocidal alien race, and facilitates the reality bomb, a weapon meant to erase the entire multiverse from existence. The Crucible also held enough Daleks to slaughter the universe they were in even without the bomb, according to The Doctor. In Sonic Adventure 2 and Shadow The Hedgehog, the Eclipse Cannon is a planet-destroying WMD built inside the Space Colony Ark. Buster Machine III from Gunbuster. The Culture Orbital from The Culture. In the 2013 CGI anime film Space Pirate Captain Harlock, the Jovian Accelerator is an ancient, Death Star-like weapon of mass destruction that uses energy from Jupiter's atmosphere to create a large beam of intense light strong enough to destroy an entire planet. Sidonia, the main ship and home of millions of humans 1000 years in the future in the Knights of Sidonia manga and anime series, was created after the destruction of the Earth, along with other unnamed seed ships. Trantor, the capital of an interstellar empire in Isaac Asimov's Foundation series, is an ecumenopolis, a planet entirely covered in one huge metal-clad building, with only one small green space: the Emperor's palace grounds. The Ori Supergate seen in a number of episodes of Stargate SG1 could be classed as a megastructure. In The Hitchhiker's Guide to the Galaxy series, Earth, as well as other planets, were artificial megastructures. Earth was intended to function as a gigantic computer and was built by a race of beings who made their living by manufacturing other planets. Mata-Nui in the BIONICLE franchise is classifiable as a megastructure. In the story, he is a massive robot as tall as a planet, and inside his body, every inhabitant of the BIONICLE Universe (Matoran, Toa, etc.) all live, unaware that they live inside a massive, space-traveling entity. 
In the Robotech Sentinels novels, Haydon IV is an artificially constructed cyber-planet with android citizens. In the Invader Zim episode "Planet Jackers", two aliens surround the Earth with a fake sky in order to throw it into their sun. In the 2017 video game Destiny 2, the fleet of Dominus Ghaul, the ruler of the Cabal Empire, features a massive super-weapon named the Almighty, whose wingspan is said to be as wide as the planet Mercury. The Almighty links itself to a solar system's star on the quantum level via an energy beam and breaks it down into usable fuel, warping to another system before the star collapses into a supernova. Nightmare's fortress from Kirby: Right Back at Ya! can be classified as a megastructure because it is the size of a small planet. In several works, Arthur C. Clarke writes about a colossal hollow tube, first described in Rendezvous with Rama (1973), and inhabited by different races. The Citadel in the Mass Effect universe is an enormous space station constructed by an ancient race of machines called the Reapers millions of years before the games in the series. At the time of Mass Effect 2, its population is 13.2 million. In the game Airforce Delta Strike a large Space Elevator called the Chiron Lift is used to send supplies out into outer space. In the game Half-Life 2, an alien empire, the Combine, invaded Earth through the border world Xen. After the Combine invaded Earth in an event named the Seven Hour War, they created a large tower 2.5 miles tall, the Combine Citadel. In the Warhammer 40,000 series, the Imperial Palace (site of the Golden Throne wherein the Emperor of Mankind is kept alive indefinitely) could be considered a megastructure. The palace is a complex of continent-wide structures with the Golden Throne being located in an area stretching across the whole of the Himalayan mountains. In the film Elysium, a luxury space station (a Bishop Ring) called Elysium houses the wealthy population of the human species. 
Large rotating space stations are a staple of science fiction, including Arthur C. Clarke's novel 2001: A Space Odyssey, the battle school from Ender's Game, and the eponymous Babylon 5. Hollowed asteroids feature in various fiction, such as Kim Stanley Robinson's novel 2312, Larry Niven's Known Space, and the work of Golden Age SF writers like Clarke and Asimov. In the 2022 film Moonfall, Earth's moon is knocked from its orbit and begins to circle closer to Earth. A conspiracy theorist believes the Moon is a Dyson sphere megastructure and turns out to be correct. Star Wars (1977–present, American sci-fi franchise) The Death Star from Star Wars is 160 km in diameter, followed by a second Death Star 200 km in diameter. Starkiller Base was constructed from the dwarf planet Ilum; depending on the source, its diameter was 660 to 830 km. The Centerpoint Station was a 350 km spherical space station at the Lagrangian point between the planets Talus and Tralus in the Corellia system. It was a gigantic and ancient hyperspace tractor beam with which an ancient race, known as the Celestials, created the Corellia star system. With the help of the tractor beam, whole planets could be moved through hyperspace and arranged into their actual orbits around the central star. On the other hand, the same technology could be used as a weapon to destroy even stars. On the inside of the main sphere, a huge living space called Hollowtown was home to many people, in a similar fashion as on the inside of a Dyson sphere. A second, smaller megastructure of near-identical design, called Sinkhole Station, was also built shortly after the construction of Centerpoint Station. Its purpose was to maintain the stability of The Maw, a black hole cluster constructed using Centerpoint Station. Coruscant is an ecumenopolis: the planet is entirely covered by, and essentially is, a city. It serves as the capital of first the Republic and then later the First Galactic Empire. 
The Galaxy Gun, a large space station designed to destroy entire planets from across the galaxy, could be considered a megastructure because it is more than seven kilometres long. The Star Forge from Star Wars: Knights of the Old Republic. Glavis Ringworld is a ring-shaped space station around a star in The Book of Boba Fett. The Core World Kuat was circled by an orbital ring used primarily as a shipyard. There are multiple instances of hollowed asteroids, such as Hammer Station and the Eye of Palpatine. Stellaris (2016 video game) Stellar-Scale Megastructures A Dyson Sphere is a megastructure added in the Utopia expansion, capable of producing massive amounts of energy at the cost of rendering the solar system uninhabitable, except for Habitats. A Ring World is a megastructure added in the Utopia expansion, offering a solar-system-sized habitat equivalent to four massive habitable planets. A Matter Decompressor is a megastructure added in the Megacorp expansion that allows the owner to harvest massive amounts of minerals from the cores of black holes. A Mega-Shipyard is a massive shipyard in orbit of a star, capable of producing ships much faster than average shipyards. The Aetherophasic Engine is a megastructure built by crisis aspirants, capable of destroying the entire galaxy as a side-product of allowing the race which constructed it to ascend to the "Shroud", an alternate dimension in the game composed of nearly pure energy. A Quantum Catapult is a large megastructure built around a pulsar or neutron star that is capable of sending fleets instantly across the galaxy. However, it is not completely accurate and can thus send fleets away from their intended destinations. Orbital/Planetary Scale Megastructures A Science Nexus is a massive orbital science laboratory which expands the empire's science production massively. A Sentry Array is a massive orbital station that gives you sight over the entire in-game galaxy. 
Habitats are orbital structures which serve the purpose of a small planet. A Mega Art Installation is a megastructure that improves your overall amenities and happiness in your empire. A Strategic Coordination Center is a megastructure that increases your naval capacity and ship speed, and adds several other bonuses. An Interstellar Assembly acts as a hub for the game's Galactic Community, increases your diplomatic weight, adds more envoys, and improves other empires' opinion of you. Gateways allow for near-instantaneous travel across the galaxy. In addition, there is a unique form of Gateway called "L-Gates" which link up to an extragalactic cluster of stars. Orbital Rings are massive ring structures built around planets that afford extra protection and increase the output of the planet. Hyper Relays are large structures that allow ships to jump to identical Hyper Relays in adjacent systems instead of using the existing hyperlane connections, thereby avoiding having to traverse systems at sublight speeds. In addition, many megastructures can also generate in "ruined" versions, which the player can later repair. See also Arcology Skyscraper References External links National Geographic Channel Megastructure.org Megastructure Art Stellaris Wiki Megastructures in Stellaris Exploratory engineering Science fiction themes
Megastructure
Technology
4,705
54,224,095
https://en.wikipedia.org/wiki/Inocybe%20violaceocaulis
Inocybe violaceocaulis is a species of mushroom native to Western Australia. Collections had previously been classified as I. geophylla var. lilacina. References violaceocaulis Fungi described in 2005 Fungi native to Australia Fungus species
Inocybe violaceocaulis
Biology
53
13,680,395
https://en.wikipedia.org/wiki/Named%20entity
In information extraction, a named entity is a real-world object, such as a person, location, organization, product, etc., that can be denoted with a proper name. It can be abstract or have a physical existence. Examples of named entities include Barack Obama, New York City, Volkswagen Golf, or anything else that can be named. Named entities can simply be viewed as entity instances (e.g., New York City is an instance of a city). From a historical perspective, the term Named Entity was coined during the MUC-6 evaluation campaign and covered ENAMEX (entity name expressions, e.g. persons, locations and organizations) and NUMEX (numerical expressions). A more formal definition can be derived from the notion of a rigid designator introduced by Saul Kripke. In the expression "Named Entity", the word "Named" aims to restrict the possible set of entities to only those for which one or many rigid designators stand for the referent. A designator is rigid when it designates the same thing in every possible world. By contrast, flaccid designators may designate different things in different possible worlds. As an example, consider the sentence, "Biden is the president of the United States". Both "Biden" and the "United States" are named entities since they refer to specific objects (Joe Biden and the United States). However, "president" is not a named entity since it can be used to refer to many different objects in different worlds (in different presidential periods referring to different persons, or even in different countries or organizations referring to different people). Rigid designators usually include proper names as well as certain natural terms like biological species and substances. There is also a general agreement in the Named Entity Recognition community to consider temporal and numerical expressions as named entities, such as amounts of money and other types of units, which may violate the rigid designator perspective. 
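The distinction can be made concrete with a toy recognizer. The sketch below is purely illustrative (the gazetteer entries and tag names are invented for this example, and real NER systems are statistical rather than lookup-based): only gazetteer-listed proper names are tagged, so the rigid designators "Biden" and "United States" are recognized, while the non-rigid "president" is not.

```python
# Toy illustration of the rigid-designator criterion: a tiny gazetteer
# lookup that tags only proper names. The gazetteer and tag set here
# are invented for the example, not taken from any real NER system.
GAZETTEER = {
    "Biden": "PERSON",
    "Barack Obama": "PERSON",
    "New York City": "LOCATION",
    "United States": "LOCATION",
    "Volkswagen Golf": "PRODUCT",
}

def tag_named_entities(text):
    """Return (surface form, type) pairs for gazetteer entries found in text."""
    # Try longest names first, so "New York City" wins over shorter matches.
    found = []
    for name in sorted(GAZETTEER, key=len, reverse=True):
        if name in text:
            found.append((name, GAZETTEER[name]))
    return found

sentence = "Biden is the president of the United States"
print(tag_named_entities(sentence))
```

Running it on the example sentence yields the two named entities and ignores "president", which designates different people in different contexts.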
The task of recognizing named entities in text is Named Entity Recognition while the task of determining the identity of the named entities mentioned in text is called Named Entity Disambiguation. Both tasks require dedicated algorithms and resources to be addressed. See also Named-entity recognition (also referred to as entity identification, entity chunking and entity extraction) Entity linking (also referred to as named entity linking (NEL), named entity disambiguation (NED), named entity recognition and disambiguation (NERD) or named entity normalization) Information extraction Knowledge extraction Text mining (also referred to as text data mining) Truecasing Apache OpenNLP spaCy General Architecture for Text Engineering Natural Language Toolkit References Natural language processing Computational linguistics
Named entity
Technology
555
51,484,404
https://en.wikipedia.org/wiki/Meizu%20MX4%20Ubuntu%20Edition
The Meizu MX4 Ubuntu Edition is a smartphone designed and produced by the Chinese manufacturer Meizu, which runs on Ubuntu Touch. It is a phablet model of the MX series, representing an alternative edition of the MX4. It is Meizu's first commercially available Ubuntu Touch device and the second commercially available phone with Ubuntu Touch overall. It was unveiled at the Mobile World Congress in March 2015. History In November 2014, Meizu and Canonical signed a cooperation agreement, which set the starting point for Meizu to release devices running on Ubuntu Touch. In February 2015, Meizu confirmed that there would be an Ubuntu Touch-based version of the Meizu MX4 and that it would be showcased at Mobile World Congress 2015. At the Mobile World Congress in March 2015, Meizu presented the Meizu MX4 Ubuntu Edition, an alternative version of the MX4 running on Ubuntu Touch, becoming the second commercially available device on this platform. Release The Meizu MX4 Ubuntu Edition was released through an invite-only system on the European market on June 25, 2015. Features Ubuntu Touch The MX4 Ubuntu Edition runs Ubuntu Touch, a mobile operating system based on the Ubuntu Linux distribution developed by Canonical. Its goal is to provide a free and open-source mobile operating system and deliver a different approach to user experience by focusing on so-called “scopes” instead of traditional apps. Hardware and design The technical specifications and outer appearance of the MX4 Ubuntu Edition are identical to those of the Meizu MX4. The Meizu MX4 Ubuntu Edition features a MediaTek MT6595 system-on-a-chip with an array of four ARM Cortex-A17 and four Cortex-A7 CPU cores, a PowerVR G6200 GPU and 3 GB of RAM. The MX4 Ubuntu Edition is only available with 16 GB of internal storage and a champagne gold body. The body of the MX4 Ubuntu Edition features a metal frame. 
It has a slate form factor, being rectangular with rounded corners. The MX4 Ubuntu Edition features a 5.36-inch AMOLED multi-touch capacitive touchscreen display with a resolution of 1152 by 1920 pixels. The pixel density of the display is 418 ppi. In addition to the touchscreen input and the front key, the device has volume/zoom control buttons and the power/lock button on the right side, a 3.5mm TRS audio jack on the top and a microUSB (Micro-B type) port on the bottom for charging and connectivity. The Meizu MX4 Ubuntu Edition has two cameras. The rear camera has a resolution of 20.7 MP, a ƒ/2.2 aperture, a 5-element lens, laser-aided phase-detection autofocus and an LED flash. The front camera has a resolution of 2 MP, a ƒ/2.0 aperture and a 4-element lens. Reception The MX4 Ubuntu Edition received generally positive reviews. ZDNet gave the MX4 a rating of 8.0 out of 10 possible points and mentioned that “the Meizu MX4 Ubuntu Edition offers a number of high-end hardware features at relatively low cost”. TechRadar stated that “Ubuntu Phone has plenty of potential” and praised the build quality, design and powerful specifications of the device. See also Meizu Meizu MX4 Meizu PRO 5 Ubuntu Edition Comparison of smartphones References External links Official product page Meizu Official product page Ubuntu Ubuntu Touch devices Mobile phones introduced in 2015 Meizu smartphones Mobile phones with 4K video recording Discontinued flagship smartphones
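The quoted pixel density follows directly from the panel's resolution and diagonal size; a quick arithmetic check using only the figures given above:

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_inches):
    """Pixel density = diagonal pixel count / diagonal length in inches."""
    return math.hypot(width_px, height_px) / diagonal_inches

# Meizu MX4 Ubuntu Edition panel: 1152 x 1920 pixels, 5.36-inch diagonal.
ppi = pixels_per_inch(1152, 1920, 5.36)
print(round(ppi))  # 418
```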
Meizu MX4 Ubuntu Edition
Technology
815
3,666,839
https://en.wikipedia.org/wiki/SMC%20protein
SMC complexes represent a large family of ATPases that participate in many aspects of higher-order chromosome organization and dynamics. SMC stands for Structural Maintenance of Chromosomes. Classification Eukaryotic SMCs Eukaryotes have at least six SMC proteins in individual organisms, and they form three distinct heterodimers with specialized functions: A pair of SMC1 and SMC3 constitutes the core subunits of the cohesin complexes involved in sister chromatid cohesion. SMC1 and SMC3 also have functions in the repair of DNA double-strand breaks in the process of homologous recombination. Likewise, a pair of SMC2 and SMC4 acts as the core of the condensin complexes implicated in chromosome condensation. SMC2 and SMC4 have the function of DNA repair as well. Condensin I plays a role in single-strand break repair but not in double-strand break repair. The opposite is true for Condensin II, which plays a role in homologous recombination. A dimer composed of SMC5 and SMC6 functions as part of a yet-to-be-named complex implicated in DNA repair and checkpoint responses. Each complex contains a distinct set of non-SMC regulatory subunits. Some organisms have variants of SMC proteins. For instance, mammals have a meiosis-specific variant of SMC1, known as SMC1β. The nematode Caenorhabditis elegans has an SMC4-variant that has a specialized role in dosage compensation. The following table shows the SMC protein names for several model organisms and vertebrates: Prokaryotic SMCs SMC proteins are conserved from bacteria to humans. Most bacteria have a single SMC protein in individual species that forms a homodimer. Recently, SMC proteins have been shown to act on daughter cells' DNA at the origin of replication to guarantee proper segregation. In a subclass of Gram-negative bacteria, including Escherichia coli, a distantly related protein known as MukB plays an equivalent role. Molecular structure Primary structure SMC proteins are 1,000–1,500 amino acids long. 
They have a modular structure composed of the following domains: a Walker A ATP-binding motif, coiled-coil region I, a hinge region, coiled-coil region II, and a Walker B ATP-binding motif together with the signature motif. Secondary and tertiary structure SMC dimers form a V-shaped molecule with two long coiled-coil arms. To make such a unique structure, an SMC protomer is self-folded through anti-parallel coiled-coil interactions, forming a rod-shaped molecule. At one end of the molecule, the N-terminal and C-terminal domains form an ATP-binding domain. The other end is called a hinge domain. Two protomers then dimerize through their hinge domains and assemble a V-shaped dimer. The coiled-coil arms are ~50 nm long. Such long "antiparallel" coiled coils are very rare and found only among SMC proteins (and their relatives such as Rad50). The ATP-binding domain of SMC proteins is structurally related to that of ABC transporters, a large family of transmembrane proteins that actively transport small molecules across cellular membranes. It is thought that the cycle of ATP binding and hydrolysis modulates the cycle of closing and opening of the V-shaped molecule. Still, the detailed mechanisms of action of SMC proteins remain to be determined. Aggregation of SMC The SMC proteins have the potential to form a larger ring-like structure. The ability to create different architectural arrangements allows for various regulations of functions. Some of the possible configurations are double rings, filaments, and rosettes. Double rings are 4 SMC proteins bound at the heads and hinge, forming a ring. Filaments are chains of alternating SMCs. Rosettes are rose-like structures with terminal segments in the inner region and hinges in the outer region. Genes The following human genes encode SMC proteins: SMC1A SMC1B SMC2 SMC3 SMC4 SMC5 SMC6 See also cohesin condensin Cornelia de Lange Syndrome References EC 3.6.3 Cell biology Mitosis Cell cycle
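The ~50 nm arm length is consistent with the protein's size, which can be checked with rough arithmetic. Assuming an axial rise of about 0.15 nm per residue (an approximate textbook value for α-helical coiled coils, not a figure stated above), a 50 nm arm needs on the order of 330 residues per strand, and because each protomer folds back on itself antiparallel, about twice that number of its residues lie in the coiled coil:

```python
# Back-of-the-envelope check of the ~50 nm SMC coiled-coil arm length.
# The ~0.15 nm-per-residue axial rise is an approximate textbook value
# for alpha-helical coiled coils, assumed here for the estimate.
RISE_NM_PER_RESIDUE = 0.15
ARM_LENGTH_NM = 50.0

residues_per_strand = ARM_LENGTH_NM / RISE_NM_PER_RESIDUE
# One protomer folds back on itself, so both strands of the arm's
# antiparallel coiled coil come from the same 1,000-1,500 aa chain.
residues_in_coiled_coil = 2 * residues_per_strand

print(round(residues_per_strand))      # 333
print(round(residues_in_coiled_coil))  # 667
```

Roughly 670 of the chain's 1,000–1,500 residues in the coiled coil leaves the remainder for the ATP-binding head and hinge domains, consistent with the modular layout described above.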
SMC protein
Biology
896
508,390
https://en.wikipedia.org/wiki/Brownian%20motor
Brownian motors are nanoscale or molecular machines that use chemical reactions to generate directed motion in space. The theory behind Brownian motors relies on the phenomenon of Brownian motion, the random motion of particles suspended in a fluid (a liquid or a gas) resulting from their collision with the fast-moving molecules in the fluid. On the nanoscale (1–100 nm), viscosity dominates inertia, and the extremely high degree of thermal noise in the environment makes conventional directed motion all but impossible, because the forces impelling these motors in the desired direction are minuscule when compared to the random forces exerted by the environment. Brownian motors operate specifically to utilise this high level of random noise to achieve directed motion, and as such are only viable on the nanoscale. The concept of the Brownian motor is recent, with the term coined only in 1995 by Peter Hänggi, but such motors may have existed in nature for a very long time, helping to explain crucial cellular processes that require movement at the nanoscale, such as protein synthesis and muscular contraction. If this is the case, Brownian motors may have implications for the foundations of life itself. In more recent times, humans have attempted to apply this knowledge of natural Brownian motors to solve human problems. The applications of Brownian motors are most obvious in nanorobotics due to its inherent reliance on directed motion. History 20th century The term "Brownian motor" was originally invented by Swiss theoretical physicist Peter Hänggi in 1995. The Brownian motor, like the phenomenon of Brownian motion that underpinned its underlying theory, was also named after 19th century Scottish botanist Robert Brown, who, while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water, famously described the random motion of the pollen particles in 1827. 
In 1905, almost eighty years later, theoretical physicist Albert Einstein published a paper where he modeled the motion of the pollen as being moved by individual water molecules, and this was verified experimentally by Jean Perrin in 1908, who was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter". These developments helped to create the fundamentals of the present theories of the nanoscale world. Nanoscience has traditionally long remained at the intersection of the physical sciences of physics and chemistry, but more recent developments in research increasingly position it beyond the scope of either of these two traditional fields. 21st century In 2002, Dean Astumian and Peter Hänggi published a seminal paper, "Brownian motors", in the American Institute of Physics magazine Physics Today. There, they proposed the then novel concept of Brownian motors and posited that "thermal motion combined with input energy gives rise to a channeling of chance that can be used to exercise control over microscopic systems". Astumian and Hänggi provide in their paper a copy of Wallace Stevens' 1919 poem "The Place of the Solitaires" to elegantly illustrate, from an abstract perspective, the ceaseless nature of noise. A year after the Astumian-Hänggi paper, David Leigh's organic chemistry group reported the first artificial molecular Brownian motors. In 2007 the same team reported a Maxwell's demon-inspired molecular information ratchet. Another important demonstration of nanoengineering and nanotechnology was the building of a practical artificial Brownian motor by IBM in 2018. Specifically, an energy landscape was created by accurately shaping a nanofluidic slit, and alternate potentials and an oscillating electric field were then used to "rock" nanoparticles to produce directed motion. 
The experiment successfully made the nanoparticles move along a track in the shape of the outline of the IBM logo and serves as an important milestone in the practical use of Brownian motors and other elements at the nanoscale. Additionally, various institutions around the world, such as the University of Sydney Nano Institute, headquartered at the Sydney Nanoscience Hub (SNH), and the Swiss Nanoscience Institute (SNI) at the University of Basel, exemplify the research activity emerging in the field of nanoscience. Brownian motors remain a central concept in both the understanding of natural molecular motors and the construction of useful nanoscale machines that involve directed motion. Theory The thermal noise on the nanoscale is so great that moving in a particular direction is as difficult as "walking in a hurricane" or "swimming in molasses". The theoretical operation of the Brownian motor can be explained by ratchet theory, wherein strong random thermal fluctuations are allowed to move the particle in the desired direction, while energy is expended to counteract forces that would produce motion in the opposite direction. This motion can be both linear and rotational. In nature, this input energy is chemical energy sourced from the molecule adenosine triphosphate (ATP). The Brownian ratchet is an apparent perpetual motion machine that appears to violate the second law of thermodynamics, but was later debunked upon more detailed analysis by Richard Feynman and other physicists. The difference between real Brownian motors and fictional Brownian ratchets is that only in Brownian motors is there an input of energy in order to provide the necessary force to hold the motor in place to counteract the thermal noise that tries to move the motor in the opposite direction. 
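The ratchet mechanism can be sketched numerically. The toy model below is purely illustrative (all parameter values are invented, dimensionless choices, not a model of any specific motor or experiment): an overdamped particle diffuses freely while an asymmetric sawtooth potential is switched off, then relaxes into the nearest potential well when it is switched on. Because the potential's peak sits close to one side of each well, the on/off cycle produces net drift in one direction even though the off-phase diffusion is completely unbiased.

```python
import math
import random

def flashing_ratchet_drift(n_particles=100, n_cycles=10, seed=0):
    """Mean displacement of overdamped Brownian particles in a
    periodically switched ("flashing") asymmetric sawtooth potential.
    All parameters are arbitrary illustration values."""
    rng = random.Random(seed)
    L, a, depth = 1.0, 0.2, 5.0      # spatial period, asymmetry, barrier height
    mobility, D, dt = 1.0, 0.1, 0.001
    steps_on, steps_off = 250, 450   # potential-on and potential-off phase lengths

    def force(x):
        # Sawtooth potential: steep rise on [0, a*L), gentle fall on [a*L, L).
        # Force = -dU/dx: strong push back toward the well on the steep side,
        # gentle push toward the next well on the shallow side.
        if (x % L) < a * L:
            return -depth / (a * L)
        return depth / ((1.0 - a) * L)

    kick = math.sqrt(2.0 * D * dt)   # thermal noise magnitude per time step
    total = 0.0
    for _ in range(n_particles):
        x = 0.0
        for _ in range(n_cycles):
            for _ in range(steps_off):   # potential off: unbiased diffusion
                x += kick * rng.gauss(0.0, 1.0)
            for _ in range(steps_on):    # potential on: relax into a well
                x += mobility * force(x) * dt + kick * rng.gauss(0.0, 1.0)
        total += x
    return total / n_particles

drift = flashing_ratchet_drift()
print(f"mean displacement after 10 on/off cycles: {drift:.2f} periods")
```

With the asymmetry chosen here the mean displacement comes out positive; mirroring the sawtooth reverses the direction, which is the hallmark of a ratchet rather than a simple constant bias.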
Because Brownian motors rely on the random nature of thermal noise to achieve directed motion, they are stochastic in nature, in that they can be analysed statistically but not predicted precisely. Examples in nature In biology, much of what we understand to be protein-based molecular motors may also in fact be Brownian motors. These molecular motors facilitate critical cellular processes in living organisms and, indeed, are fundamental to life itself. Researchers have made significant advances in terms of examining these organic processes to gain insight into their inner workings. For example, molecular Brownian motors in the form of several different types of protein exist within humans. Two common biomolecular Brownian motors are ATP synthase, a rotary motor, and myosin II, a linear motor. The motor protein ATP synthase produces rotational torque that facilitates the synthesis of ATP from Adenosine diphosphate (ADP) and inorganic phosphate (Pi) through the following overall reaction: ADP + Pi + 3H+out ⇌ ATP + H2O + 3H+in In contrast, the torque produced by myosin II is linear and is a basis for the process of muscle contraction. Similar motor proteins include kinesin and dynein, which all convert chemical energy into mechanical work by the hydrolysis of ATP. Many motor proteins within human cells act as Brownian motors by producing directed motion on the nanoscale, and some common proteins of this type are illustrated by the following computer-generated images. Applications Nanorobotics The relevance of Brownian motors to the requirement of directed motion in nanorobotics has become increasingly apparent to researchers from both academia and industry. Artificial replications of Brownian motors are informed by and differ from nature, and one specific type is the photomotor, wherein the motor switches states due to pulses of light and generates directed motion. 
These photomotors, in contrast to their natural counterparts, are inorganic and possess greater efficiency and average velocity, and are thus better suited to human use than existing alternatives, such as organic protein motors. Currently, one of the six "Grand Challenges" of the University of Sydney Nano Institute is to develop nanorobotics for health, a key aspect of which is a "nanoscale parts foundry" that can produce nanoscale Brownian motors for "active transport around the body". The institute predicts that among the implications of this research is a "paradigm shift" in healthcare "away from the 'break-fix' model to a focus on prevention and early intervention," such as in the case of heart disease. Professor Paul Bannon, an adult cardiothoracic surgeon of international standing and a leading medical researcher, has summarised the benefits of nanorobotics in health. See also Molecular machines Molecular motor Brownian motion Brownian ratchet Nanoengineering Nanorobotics Robert Brown Peter Hänggi Notes External links Brownian motor on arxiv.org Nanotechnology Thermodynamics
Brownian motor
Physics,Chemistry,Materials_science,Mathematics,Engineering
1,725
1,321,047
https://en.wikipedia.org/wiki/Commercial%20animal%20cloning
Commercial animal cloning is the cloning of animals for commercial purposes, including animal husbandry, medical research, competition camels and horses, pet cloning, and restoring populations of endangered and extinct animals. The practice was first demonstrated in 1996 with Dolly the sheep. Cloning methods Moving or copying all (or nearly all) genes from one animal to form a second, genetically nearly identical, animal is usually done using one of three methods: the Roslin technique, the Honolulu technique, or Artificial Twinning. The first two of these involve a process known as somatic cell nuclear transfer. In this process, an oocyte is taken from a surrogate mother and undergoes enucleation, a process that removes the nucleus from inside the oocyte. Somatic cells are then taken from the animal that is being cloned, transferred into the blank oocyte in order to provide genetic material, and fused with the oocyte using an electrical current. The oocyte is then activated and re-inserted into the surrogate mother. The result is the formation of an animal that is almost genetically identical to the animal the somatic cells were taken from. While somatic cell nuclear transfer was previously believed to only work using genetic material from somatic cells that were unfrozen or were frozen with cryoprotectant (to avoid cell damage caused by freezing), successful dog cloning in various breeds has now been shown using somatic cells from unprotected specimens that had been frozen for up to four days. The third method of cloning involves embryo splitting, the process of taking the blastomeres from a very early animal embryo and separating them before they become differentiated in order to create two or more separate organisms. When using embryo splitting, cloning must occur before the birth of the animal, and clones grow up at the same time (in a similar fashion to monozygotic twins). 
Livestock cloning The US Food and Drug Administration has concluded that "Food from cattle, swine, and goat clones is as safe to eat as food from any other cattle, swine, or goat." It has also been noted that the main use of agricultural clones is to produce breeding stock, not food. Clones allow farmers to upgrade the overall quality of their herds by producing more copies of the best animals in the herd. These animals are then used for conventional breeding, and the sexually reproduced offspring become the food-producing animals. The goals of cloning listed by the FDA include "disease resistance ... suitability to climate ... quality body type ... fertility ... and market preference (leanness, tenderness, color, size of various cuts, etc.)" Milk productivity is another desirable trait that cloning is used for, including in the case of cloned "supercows". Medical uses Organs from cloned pigs have been transplanted into human patients. (See Xenotransplantation) Cancer-sniffing dogs have also been cloned. A review concluded that "qualified elite working dogs can be produced by cloning a working dog that exhibits both an appropriate temperament and good health." Other working animals with high performance Cloning of super sniffer dogs for airports was reported in 2011, four years after the dog that served as their genetic donor retired. Cloning of a successful rescue dog was reported in 2009 and of a police dog in 2019. Endangered and extinct animals The only extinct animal to be cloned as of 2022 is a Pyrenean ibex, born on July 30, 2003, in Spain, which died minutes later due to physical defects in the lungs. Some animals have been cloned to add genetic diversity to endangered species with small remaining populations, thereby avoiding inbreeding depression. Centers performing this include ViaGen, aided by the San Diego Frozen Zoo, and Revive & Restore. This is also referred to as "conservation cloning". 
Two examples are the black-footed ferret and Przewalski's horse. In 2022, the world's first cloned Arctic wolf, "Maya", was born in Beijing by Sinogene. Although Arctic wolves are no longer listed by the IUCN Red List as an endangered species, the technique can be used to help other animals at risk of extinction, such as Mexican gray wolves and red wolves. The team at Sinogene plans to restore lost species or boost numbers in endangered animal populations. In a recent study using sturgeons, scientists have made improvements to the technique known as somatic cell nuclear transfer, with the ultimate goal being to save endangered species. Sturgeons are endangered due to high levels of poaching, habitat destruction, water pollution, and overfishing. Somatic cell nuclear transfer is a well-known cloning method that has been used for years but has focused on species that are thriving rather than endangered or extinct species. This technique usually uses a single somatic donor cell with a single manipulation and inserts it into a recipient egg of the species of interest. It has recently been found that the position at which that somatic cell is located inside the recipient egg is very important for successfully cloning a species. By adjusting the original method to insert multiple somatic donor cells into the recipient egg instead of a single one, the likelihood of a donor cell being in the crucial position on the egg increases tremendously. This increase then results in higher success rates with cloning. There is ongoing research using this improved method, but the data collected thus far suggest it is a promising way to keep species like the sturgeon from sliding further toward extinction. 
Cloning long-extinct animals using current methods is impossible because DNA begins to denature after death, meaning the entire genome of an extinct species is not available to be reproduced. However, new studies using genome editing have suggested it may be possible to "bring back" traits of extinct species by incorporating genes from the extinct species into the genome of a closely related living organism. Currently, George Church's lab at Harvard University's Wyss Institute is conducting research into genetically modifying Asian elephants to express genes from the extinct woolly mammoth. Their goals in doing this are to expand the habitat available to Asian elephants and reestablish the ecological interactions woolly mammoths played a role in prior to their extinction. History and commercialization ViaGen began by offering cloning to the livestock and equine industry in 2003, and later as ViaGen Pets included cloning of cats and dogs in 2016. ViaGen's subsidiary, stART Licensing, owns a cloning patent which is licensed to their only competitor as of 2018, who also offers animal cloning services. (ViaGen is a subsidiary of Precigen.) The first commercially cloned pet was a cat named Little Nicky, produced in 2004 by Genetic Savings & Clone for a north Texas woman for the fee of US$50,000. On May 21, 2008, BioArts International announced a limited commercial dog cloning service (through a program it called Best Friends Again) in partnership with a Korean company named Sooam Biotech. This program came after the announcement of the successful cloning of a family dog named Missy, an achievement widely publicized in the Missyplicity Project. In September 2009, BioArts announced the end of its dog cloning service. In July 2008, the Seoul National University (co-parents of Snuppy, reputedly the world's first cloned dog in 2005) created five clones of a dog named Booger for its Californian owner. The woman paid $50,000 for this service. 
Sooam Biotech continued developing proprietary techniques for cloning dogs based on a license from ViaGen's subsidiary, stART Licensing (which owned the original patent for the process of animal cloning). (Although the animal itself is not patentable, the process is protected by a patent.) Sooam created cloned puppies for owners whose dogs had died, charging $100,000 per clone. Sooam Biotech was reported to have cloned approximately 700 dogs by 2015 and to be producing 500 cloned embryos of various breeds a day in 2016. As of 2015, the longest period after the death of the original pet dog from which Sooam Biotech could still clone a puppy was 12 days. Sinogene Biotechnology created the first Chinese cloned dog in 2017 before commercializing the cloning service and entering the pet cloning market. In 2019, Sinogene successfully created the first Chinese cloned cat. In June 2022, "Zhuang Zhuang", a cloned horse, was created by the Beijing laboratory Sinogene. He is the first horse from the "warmblood" group of breeds to be born in China and to be officially approved by the China Horse Industry Association. Controversies Animal welfare The mortality rate for cloned animals is higher than for those born of natural processes. This includes discrepancies in survival rates and quality of life before, during, and after birth, leading to ethical concerns. Many of these discrepancies are thought to arise from maternal mRNA already present in the oocyte prior to the transfer of genetic material, as well as from DNA methylation, both of which contribute to the development of the animal in the womb of the surrogate.
Some common issues seen with cloned animals are shortened telomeres, the repetitive end sequences of DNA whose decreasing length over the lifespan of an organism has been associated with aging; large offspring syndrome, the abnormal size of cloned individuals due to epigenetic (gene expression) changes; and methylation patterns of genetic material that are so abnormal, compared to standard embryos of the species being cloned, as to be incompatible with life. Pet cloning While pet cloning is sometimes advertised as a prospective method for regaining a deceased companion animal, pet cloning does not result in animals that are exactly like the previous pet (in looks or personality). Although the animal in question is cloned, there are still phenotypic differences that may affect its appearance or health. This issue was brought to light in the cloning of a cat named Rainbow. Rainbow's clone, later named CC, was genetically identical to Rainbow, yet CC's coloring patterns were not the same due to the development of the kitten inside the womb as well as random genetic disparities in the clone, such as variable X-chromosome inactivation. Despite its controversies, the study of pet cloning holds the potential to contribute to scientific, veterinary, and medical knowledge, and it is a potential resource in efforts to preserve endangered cousins of the cat and dog. In 2005, California Assembly Member Lloyd Levine introduced a bill to ban the sale or transfer of pet clones in California. That bill was voted down. See also Biobank Cultivar: term used in botany to refer to specific breeds (made using selective cross breeding and sometimes genetic modification) that have distinct properties. Often reproduced using cloning to avoid properties being lost due to sexual propagation. List of animals that have been cloned Working animal References Pets Cloning
Commercial animal cloning
Engineering,Biology
2,266
7,356,525
https://en.wikipedia.org/wiki/REDCON
In the U.S. military, the term REDCON is short for Readiness Condition and is used to refer to a unit's readiness to respond to and engage in combat operations. There are five REDCON levels, as described below in this excerpt from Army Field Manual 71–1.

Overview

REDCON-1: Full alert; unit ready to move and fight. WMD alarms and hot loop equipment stowed; OPs pulled in. (A hot loop is a field telephone circuit between the subunits of a company.) All personnel alert and mounted on vehicles; weapons manned. Engines started. Company team is ready to move immediately.

REDCON-1.5: WMD alarms and hot loop equipment stowed; OPs pulled in. All personnel alert and mounted on vehicles; weapons manned. Company team is ready to move immediately.

REDCON-2: Full alert; unit ready to fight. Equipment stowed (except hot loop and WMD alarms). Precombat checks complete. All personnel alert and mounted in vehicles; weapons manned & charged, round in chamber, weapon on safe. (NOTE: Depending on the tactical situation and orders from the commander, dismounted OPs may remain in place.) All (100 percent) digital and FM communications links operational. Status reports submitted in accordance with task force SOP. Company team is ready to move within 15 minutes of notification.

REDCON-3: Reduced alert. Fifty percent of the unit executes work and rest plans. Remainder of the unit executes the security plan. Based on the commander's guidance and the enemy situation, some personnel executing the security plan may execute portions of the work plan. Company team is ready to move within 30 minutes of notification.

REDCON-4: Minimum alert. OPs manned; one soldier per platoon designated to monitor the radio and man turret weapons. Digital and FM links with task force and other company teams maintained. Company team is ready to move within one hour of notification.
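For illustration, the movement timelines in the FM 71-1 excerpt can be collected into a small lookup structure. This is a sketch in Python; the class and field names are my own, and only the level names, one-line summaries, and timings are taken from the excerpt (0 minutes stands in for "ready to move immediately").

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReadinessCondition:
    name: str
    summary: str
    move_within_minutes: int  # 0 means "ready to move immediately"

# Condensed from the FM 71-1 excerpt; details of each level are omitted.
REDCON_LEVELS = (
    ReadinessCondition("REDCON-1", "Full alert; unit ready to move and fight", 0),
    ReadinessCondition("REDCON-1.5", "Equipment stowed; ready to move immediately", 0),
    ReadinessCondition("REDCON-2", "Full alert; unit ready to fight", 15),
    ReadinessCondition("REDCON-3", "Reduced alert; fifty percent work/rest", 30),
    ReadinessCondition("REDCON-4", "Minimum alert", 60),
)

def levels_ready_within(minutes):
    """Names of the levels whose company teams can move within the given notice."""
    return [r.name for r in REDCON_LEVELS if r.move_within_minutes <= minutes]
```

A query such as `levels_ready_within(15)` then returns the levels whose company teams can move within 15 minutes of notification.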
See also Alert state DEFCON Force Protection Condition Redcon (2016 game) References External links REDCON levels from Army Field Manual 71-1 on GlobalSecurity.org Alert measurement systems Military life Military terminology of the United States
REDCON
Technology
441
76,504,351
https://en.wikipedia.org/wiki/Photoferroelectric%20imaging
Photoferroelectric imaging is the process of storing an image onto a piece of ferroelectric material with the aid of an applied electric pulse. Stored images are nonvolatile and selectively erasable. Photoferroelectric image storage devices have the advantage of being "extremely simple and easy to fabricate". Photoferroelectric imaging uses a ferroelectric material's photosensitivity in conjunction with its ferroelectric properties. One type of medium which has been used for photoferroelectric imaging is lead lanthanum zirconate titanate (PLZT) ceramics, which exhibit a good combination of properties for imaging: large electro-optic coefficients, high intrinsic and extrinsic photosensitivities, and nonvolatile memory. Process A description of a photoferroelectric imaging process (using PLZT material) is given in the McGraw-Hill Concise Encyclopedia of Science and Technology. In that process, a thin flat plate of transparent, optically polished PLZT material (around 0.25 mm thick) was sputter-coated with indium tin oxide (ITO) on both sides, serving as electrodes. Then, the image was exposed onto one of the ITO surfaces while a voltage pulse was simultaneously applied across the electrodes. The ferroelectric polarization thereby switched from one remanent state to another, and images were "stored both as spatial distributions of light-scattering centers in the bulk of the PLZT and as surface deformation strains which form a relief pattern of the image on the exposed surface." The image may then be viewed directly or indirectly. This photoferroelectric effect is a type of electro-optic effect. In the example process, the ceramic was poled to a saturation remanent polarization state by the light (charge carriers were photoexcited across the PLZT's bandgap). The polarization was then switched by the application of the electric field, a phenomenon called photoassisted domain switching.
Applications Photoferroelectric imaging may be useful in temporary image storage and display. It also has potential applications in data storage and holographic recording. References Notes See also Photorefractive effect Electro–optic effect Electrical phenomena Imaging
Photoferroelectric imaging
Physics
473
46,978,792
https://en.wikipedia.org/wiki/Nucleosome%20repeat%20length
The nucleosome repeat length (NRL) is the average distance between the centers of neighboring nucleosomes. NRL is an important physical property of chromatin that determines its biological function. NRL can be determined genome-wide for the chromatin in a given cell type and state, or locally for a large enough genomic region containing several nucleosomes. In chromatin, neighboring nucleosomes are separated by linker DNA and, in many cases, also by the linker histone H1 as well as non-histone proteins. Since the size of the nucleosome is typically fixed (146–147 base pairs), NRL is mostly determined by the size of the linker region between nucleosomes. Alternatively, partial DNA unwrapping from the histone octamer or partial disassembly of the histone octamer can decrease the effective nucleosome size and thus affect NRL. Past studies going back to the 1970s showed that, in general, NRL differs between species and even between cell types of the same organism. In addition, recent publications reported NRL variations across genomic regions of the same cell type. Recent works have compared the NRL around yeast transcription start sites (TSSs) in vivo with that of chromatin reconstituted on the same DNA sequences in vitro. It was shown that ordered nucleosome positioning arises only in the presence of ATP-dependent chromatin remodeling. Furthermore, it was reported that the NRL determined around yeast TSSs is an invariant value universal for a given wild-type yeast strain, although it can change when one of the chromatin remodelers is missing. In general, NRL depends on the DNA sequence, the concentrations of histones and non-histone proteins, and long-range interactions between nucleosomes. NRL determines the geometric properties of the nucleosome array, and therefore the higher-order packing of the DNA into the chromatin fiber, which might affect gene expression. References Molecular biology Molecular genetics DNA Epigenetics Nuclear substructures
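As a concrete illustration of the definition above, the NRL of a region can be estimated from a list of nucleosome center (dyad) coordinates as the mean spacing between neighbors. A minimal sketch in Python; the function names are mine, and the 147 bp core size is the typical value quoted above.

```python
def nucleosome_repeat_length(dyad_positions):
    """Average distance (in bp) between centers of neighboring nucleosomes."""
    pos = sorted(dyad_positions)
    if len(pos) < 2:
        raise ValueError("need at least two nucleosome positions")
    spacings = [b - a for a, b in zip(pos, pos[1:])]
    return sum(spacings) / len(spacings)

def mean_linker_length(nrl, core_size=147):
    """NRL minus the (typically fixed) nucleosome core size gives the linker."""
    return nrl - core_size
```

For regularly spaced nucleosomes at 0, 190, 380 and 570 bp this gives an NRL of 190 bp, and hence a mean linker of 43 bp on a 147 bp core.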
Nucleosome repeat length
Chemistry,Biology
436
70,245,162
https://en.wikipedia.org/wiki/Sierra%20Forest
Sierra Forest is the codename for sixth generation Xeon Scalable server processors designed by Intel, launched in June 2024. It is the first generation of Xeon processors to exclusively feature density-optimized E-cores. Sierra Forest processors are targeted towards cloud server customers with up to 288 Crestmont E-cores. Background On February 17, 2022, Intel announced that upcoming Xeon generations would be split into two tracks for those with P-cores exclusively and E-cores exclusively. These two tracks are intended to serve different market segments with P-core Xeon processors targeting high-performance computing while E-core Xeon processors target cloud customers who prioritize greater core density, energy efficiency and performance in heavily multi-threaded workloads over strong single-threaded usage. On March 29, 2023, Intel announced that Sierra Forest processors had powered on and displayed a processor running 144 E-cores, and announced a release timeline for H1 2024. On September 19, 2023, Intel announced at their Innovation event that a 288-core variant of Sierra Forest would be coming. On June 3, 2024, Intel released the Sierra Forest-SP line of SKUs, also known as the Xeon 6700E series. This product line included seven SKUs at launch, all using the LGA 4710 socket. The low-end SKU has 64 cores, and the high-end SKU has 144 cores. Branding During Intel's Vision event in April 2024, new branding for Xeon processors was unveiled. The Xeon Scalable branding that was introduced in 2017 would be retired in favor of a simplified "Xeon 6" brand for sixth generation Xeon processors. This change brings greater emphasis on processor generation numbers. The badge for the Xeon brand was changed to be more visually in line with the badge design used for Intel's Core Ultra processors since 2023. 
Architecture Sierra Forest uses only E-cores to achieve higher core counts in order to compete with AMD's Epyc server processors codenamed Bergamo, which feature up to 128 smaller Zen 4c cores. AMD's Zen 4c cores feature simultaneous multithreading (SMT), while the Crestmont E-cores in Sierra Forest processors support only one thread per core. The purpose of the Sierra Forest architecture design is to achieve ultra-high core counts for greater compute density that would benefit cloud and HPC server applications. Cloud service providers may not be as interested in HPC accelerators and instead prioritize greater ECU/vCPU integer and floating-point performance. Don Soltis is the principal engineer and chief architect for Xeon E-Core. Products Sierra Forest-SP Sierra Forest-SP (Scalable Performance) uses the Beechnut City platform with the smaller LGA 4710 socket, targeted towards mainstream servers. Sierra Forest-SP features up to 144 E-cores and eight-channel DDR5 memory support. TDPs up to 350W are supported on the Beechnut City platform. Sierra Forest-AP Sierra Forest-AP uses the Avenue City platform with the larger LGA 7529 socket for higher core count SKUs with up to 288 cores. It supports a higher number of PCIe lanes and 12-channel DDR5 memory. See also Process–architecture–optimization model, by Intel Tick–tock model, by Intel List of Intel CPU microarchitectures References Intel products Intel microprocessors
Sierra Forest
Technology
702
44,949,681
https://en.wikipedia.org/wiki/Chandler-Parsons%20Blacksmith%20Shop
The Chandler-Parsons Blacksmith Shop, now the Blacksmith Shop Museum, is a historic blacksmith shop at 107 Dawes Road in Dover-Foxcroft, Maine. Believed to be built in the early 1860s, it is one of a very small number of relatively unaltered rural 19th-century blacksmithies in the state. It is owned and operated by the local historical society as a museum, and was listed on the National Register of Historic Places in 1989. Description and history The smithy is a small single-story wood frame structure, with a gable roof and a wood shingle exterior. A shed-roofed addition extends across the western facade. The main facade, facing south, is four bays wide, with the main entrance in one of the center bays and sash windows in the other bays. The main double door is attached via heavy wrought iron hinges. The interior is very plain, with exposed framing, and houses a collection of original 19th-century blacksmithing tools. The shop was probably built in the early 1860s by Nicholas A. Chandler, whose family owned the land at the time, and who was listed in later business directories as a blacksmith, but was also known for breeding and training horses. The next owner, Henry Parsons, was Chandler's brother-in-law, and is believed to be responsible for a number of additions and alterations to the building, including changing its roofline and adding the shed-roof addition. Parsons operated the smithy as a business until about 1905, and the shop was thereafter used intermittently by area residents. It was acquired in 1964 by the Dover-Foxcroft Historical Society, which now operates the property as a museum. The society has added a modern building to the property for use as a demonstration area. The museum is open between Memorial Day and October. 
See also National Register of Historic Places listings in Piscataquis County, Maine References External links Blacksmith Shop Museum web site Commercial buildings on the National Register of Historic Places in Maine Commercial buildings completed in 1860 Buildings and structures in Piscataquis County, Maine Museums in Piscataquis County, Maine Metallurgical industry of the United States National Register of Historic Places in Piscataquis County, Maine Blacksmith shops
Chandler-Parsons Blacksmith Shop
Chemistry
451
61,947,436
https://en.wikipedia.org/wiki/Time%20in%20Antigua%20and%20Barbuda
Antigua and Barbuda Time (AST) is the official time in Antigua and Barbuda. It is four hours behind Coordinated Universal Time (UTC−04:00). Antigua and Barbuda has only one time zone and does not observe daylight saving time. IANA time zone database In the IANA time zone database Antigua and Barbuda has the following time zone: America/Antigua (AG) References Time by country Geography of Antigua and Barbuda
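Since the zone observes no daylight saving time, the IANA `America/Antigua` entry reports the same UTC−04:00 offset year-round. A quick check with Python's standard `zoneinfo` module (assuming tz data is available on the system):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

AST = ZoneInfo("America/Antigua")

# The offset is UTC-04:00 in both winter and summer: no DST transitions.
jan = datetime(2024, 1, 15, 12, 0, tzinfo=AST).utcoffset()
jul = datetime(2024, 7, 15, 12, 0, tzinfo=AST).utcoffset()
assert jan == jul == timedelta(hours=-4)
```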
Time in Antigua and Barbuda
Physics
98
5,636,920
https://en.wikipedia.org/wiki/Bjarne%20Tromborg
Bjarne Tromborg (born 1940) is a Danish physicist, best known for his work in particle physics and photonics. Biography Tromborg was born in Give, Denmark. In 1968, he received the M.Sc. degree in physics and mathematics from the Niels Bohr Institute in Copenhagen, Denmark. He was a university researcher studying high-energy particle physics from 1968 to 1978. In 1979, he joined the research laboratory of the Danish Teleadministrations in Copenhagen. He was Head of the Optical Communications Department at Tele Danmark Research, Hørsholm, Denmark, from 1987 to 1995. He was an adjunct professor at the Niels Bohr Institute from 1991 to 2001. In 1997, he took a leave of absence at the Technion - Israel Institute of Technology. Until his retirement at the end of June 2006, he was a research professor at COM•DTU, the Department of Communications, Optics and Materials (which became DTU Fotonik, Department of Photonics Engineering, in 2008 and DTU Electro, Department of Electrical and Photonics Engineering, in 2022), Technical University of Denmark. Research Tromborg co-authored a research monograph and approximately one hundred journal and conference publications, mostly on physics and optoelectronics. At the Niels Bohr Institute, he carried out research in elementary particle physics, particularly analytic S-matrix theory and electromagnetic corrections to hadron scattering. He coauthored a research monograph on dispersion theory. In the early 1980s, he switched to photonics. Tromborg was one of the first to develop advanced theoretical models for complex semiconductor laser structures, such as external laser cavities and distributed feedback lasers, in the beginning of the 1980s. Computer simulations and measurements confirmed the validity of the theoretical models and their predictions.
Several co-workers, including Henning Olesen, Gunnar Jacobsen, Jens Henrik Osmundsen, Finn Mogensen, Kristian Stubkjær, Jesper Mørk, Xing Pan, Hans Erik Lassen and Björn Jónsson, contributed to this work over a period of almost 15 years until 1995. At TeleDanmark Research in the late 1980s and early 1990s, Tromborg and colleagues worked to study the dynamics of active semiconductor materials in order to understand the physical relaxation processes at play, their strength, and their characteristic time scales. A pump-probe set-up employing femtosecond lasers was established and modeling efforts were initiated. Tromborg led the effort to identify this as a topic that would remain important for many years and argued that Denmark should work to lead in the field. He also proposed theoretical methods that could be used to estimate the size of these ultrafast dynamical effects and their role in understanding the origin of nonlinear gain suppression in semiconductor lasers. From 1999 to his retirement at the end of June 2006, Tromborg was with the Department of Communications, Optics and Materials (COM•DTU) at the Technical University of Denmark. In this period, he worked in both research and education, as well as in securing several European Union research projects for COM•DTU. Tromborg took up the field of photonic crystals and initiated, and contributed himself to, activities within the theory of photonic crystals. He also applied general techniques within stochastic theory and signal analysis to develop improved descriptions of noise spectra in nonlinear semiconductor optical amplifiers. Awards and recognition Tromborg received the Electro-prize from the Danish Society of Engineers in 1981. He was Chairman of the Danish Optical Society from 1999 to 2002. He has been an Associate Editor of the IEEE Journal of Quantum Electronics since 2003. At his retirement, a symposium on photonics was held in his honor on 22 June 2006 at the Technical University of Denmark.
References External links Publications 21st-century Danish physicists Particle physicists 1940 births Living people People from Vejle Municipality
Bjarne Tromborg
Physics
781
7,504,769
https://en.wikipedia.org/wiki/List%20of%20Canadian%20plants%20by%20family%20N
Main page: List of Canadian plants by family
Families: A | B | C | D | E | F | G | H | I J K | L | M | N | O | P Q | R | S | T | U V W | X Y Z

Najadaceae
Najas flexilis — slender naiad
Najas gracillima — threadlike naiad
Najas guadalupensis — southern naiad
Najas marina — holly-leaved naiad

Neckeraceae
Homalia trichomanoides — lime homalia
Metaneckera menziesii
Neckera complanata
Neckera douglasii
Neckera pennata
Neomacounia nitida — Macoun's shining moss

Nelumbonaceae
Nelumbo lutea — American lotus

Notothyladaceae
Notothylas orbicularis
Phaeoceros carolinianus

Nyctaginaceae
Abronia latifolia — yellow sand-verbena
Abronia umbellata — beach sand-verbena
Mirabilis hirsuta — hairy four-o'clock
Mirabilis linearis — narrow-leaved umbrellawort
Mirabilis nyctaginea — wild four-o'clock
Tripterocalyx micranthus — smallflower sand-verbena

Nymphaeaceae
Nuphar lutea — yellow pond-lily
Nuphar rubrodisca
Nymphaea leibergii — dwarf water-lily
Nymphaea loriana
Nymphaea odorata — American water-lily
Nymphaea tetragona — pygmy water-lily

Nyssaceae
Nyssa sylvatica — black tupelo
List of Canadian plants by family N
Biology
340
12,132,347
https://en.wikipedia.org/wiki/C3H2ClF5O
{{DISPLAYTITLE:C3H2ClF5O}} The molecular formula C3H2ClF5O (molar mass: 184.49 g/mol, exact mass: 183.9714 u) may refer to: Enflurane (2-chloro-1,1,2-trifluoroethyl difluoromethyl ether) Isoflurane
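The quoted molar mass can be reproduced by summing standard atomic weights over the formula. A small sketch in Python; the atomic-weight values are rounded IUPAC figures supplied by me, not taken from the text.

```python
# Rounded standard atomic weights, in g/mol.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "Cl": 35.45, "F": 18.998, "O": 15.999}

def molar_mass(composition):
    """Molar mass of a composition given as an {element: count} mapping."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())

# C3H2ClF5O, the formula shared by enflurane and isoflurane.
mass = molar_mass({"C": 3, "H": 2, "Cl": 1, "F": 5, "O": 1})  # ~184.49 g/mol
```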
C3H2ClF5O
Chemistry
92
68,496,223
https://en.wikipedia.org/wiki/Fireproof%20banknote
The fireproof banknote is a demonstration in which a banknote, previously soaked in a 50% (v/v) alcohol fuel solution, is put to a flame. The fire is lit and later extinguishes by itself without the banknote being burnt. This demonstration can be used to teach about the fire triangle and the classes of fire. Explanation A 50% (v/v) alcohol solution is composed of 50% alcohol and 50% water, in which water acts as a solvent. When a paper banknote completely soaked with the 50% alcohol solution is ignited, the alcohol (which is the fuel in the fire triangle) is combusted into carbon dioxide and water vapour. The water, by contrast, is heated up, with some of it evaporating as it absorbs energy from the combustion of the alcohol. The evaporating water helps cool down the system, so not all of the water is evaporated and the paper banknote is not burnt. The water-to-alcohol ratio should be 50% or higher; a lower ratio leads to the banknote being slightly burnt because there is not enough water to absorb the combustion energy and cool down the system. CnH2n+1OH + (3n/2) O2 → n CO2 + (n+1) H2O Common alcohol fuels for this experiment are methanol (n=1), ethanol (n=2) and both isomers of propanol (n=3). The fire lit in this scenario is categorized as a class B fire (fire from flammable liquids), while the fire from burning paper (the banknote) is categorized as class A. The alcohol-water flame can be hard to see, so sodium chloride can be added to give the flame an orange-yellow color. For safety purposes, a tray of water should be prepared for emergency use in case the banknote catches fire, and flammable and combustible materials should not be kept or put near the flame. Alternative materials or setups Other materials Euro banknotes are recommended since they are made of paper and it is legally permitted to artistically mutilate or burn them in small amounts. Moreover, there are no depictions of any persons on the banknotes.
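The general alcohol combustion equation given in this section can be checked for any chain length n by confirming that carbon, hydrogen and oxygen atoms balance on both sides. A short sketch in Python; the function names are mine.

```python
from fractions import Fraction

def combustion_coefficients(n):
    """Coefficients in CnH(2n+1)OH + (3n/2) O2 -> n CO2 + (n+1) H2O."""
    return {"O2": Fraction(3 * n, 2), "CO2": Fraction(n), "H2O": Fraction(n + 1)}

def is_balanced(n):
    """The alcohol CnH(2n+1)OH contains n C, 2n+2 H and one O atom."""
    c = combustion_coefficients(n)
    carbon_ok = n == c["CO2"]
    hydrogen_ok = 2 * n + 2 == 2 * c["H2O"]
    oxygen_ok = 1 + 2 * c["O2"] == 2 * c["CO2"] + c["H2O"]
    return carbon_ok and hydrogen_ok and oxygen_ok
```

For ethanol (n=2) this reproduces the familiar C2H5OH + 3 O2 → 2 CO2 + 3 H2O.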
Aside from banknotes, a similar experiment can be performed using towels, paper or exam paper. Other setups No material A solution of about 50% fuel alcohol and 50% water can catch fire and extinguish itself in a "burning water" demonstration. In contrast to the subsection above, this can be done in glassware without any absorbing material like banknotes, towels, or paper. No fuel Tap water may be added to a paper bag, which is then put on a stove to boil. The paper bag absorbs water, which cools down the system and prevents the paper bag from being burnt. Gallery References Banknotes Chemistry classroom experiments Articles containing video clips Fire protection
Fireproof banknote
Chemistry,Engineering
592
30,134,316
https://en.wikipedia.org/wiki/Momordenol
Momordenol (3β-hydroxy-stigmasta-5,14-dien-16-one) is a natural chemical compound, a sterol found in the fresh fruit of the bitter melon (Momordica charantia). The compound is soluble in ethyl acetate and methanol but not in pure chloroform or petrol. It crystallizes as fine needles that melt at 160–161 °C. It was isolated in 1997 by S. Begum and others. See also Momordicilin Momordicin I Momordicin-28 Momordicinin Momordol Stigmasterol References Phytosterols Ketones Sterols
Momordenol
Chemistry
146
59,742,809
https://en.wikipedia.org/wiki/Palmaria%20%28alga%29
Palmaria is a genus of algae. One of its most notable members is dulse, Palmaria palmata. References Florideophyceae Red algae genera Taxa named by John Stackhouse
Palmaria (alga)
Biology
42
47,360,092
https://en.wikipedia.org/wiki/Lasiodiplodia%20citricola
Lasiodiplodia citricola is an endophytic fungus. It was first isolated in northern Iran, and is named after its first known host, citrus plants. It has since been isolated in other plants in other continents, and is considered a plant pathogen. L. citricola is phylogenetically related to L. parva, but conidia of the former are longer and wider. Description Its conidiomata are stromatic and pycnidial; mycelium is uniloculate, up to in diameter, non-papillate and with a central ostiole. Its paraphyses are hyaline, cylindrical and thin-walled. Conidiophores are absent in this species. Conidiogenous cells are holoblastic and also hyaline. Conidia are aseptate, ellipsoid to ovoid and with longitudinal striations. References Further reading Chen, S. F., et al. "First report of Lasiodiplodia citricola and Neoscytalidium dimidiatum causing death of graft union of English walnut in California." Fungal Diversity 67 (2014): 157–179. Chen, S. F., et al. "First report of Lasiodiplodia citricola associated with stem canker of peach in California, USA." Journal of Plant Pathology 95.3 (2013). Van der Linde, Johannes Alwyn, et al. "Lasiodiplodia species associated with dying Euphorbia ingens in South Africa." Southern Forests: a Journal of Forest Science 73.3-4 (2011): 165–173. Marques, Marília W., et al. "Species of Lasiodiplodia associated with mango in Brazil." Fungal Diversity 61.1 (2013): 181–193. External links MycoBank Botryosphaeriaceae Fungal citrus diseases Fungi described in 2010 Fungus species
Lasiodiplodia citricola
Biology
413
11,545,402
https://en.wikipedia.org/wiki/Oregon%20Institute%20of%20Marine%20Biology
The Oregon Institute of Marine Biology (or OIMB) is the marine station of the University of Oregon. This marine station is located in Charleston, Oregon at the mouth of Coos Bay. Currently, OIMB is home to several permanent faculty members and a number of graduate students. OIMB is a member of the National Association of Marine Laboratories (NAML). In addition to graduate research, undergraduate classes are offered year round, including marine birds and mammals, estuarine biology, marine ecology, invertebrate zoology, molecular biology, biology of fishes, biological oceanography, and embryology. The Loyd and Dorothy Rippey Library, one of eight branches of the UO Libraries, was added to the campus in 1999. The Rippey Library is open to the public by appointment, and the Oregon Card Program allows Oregon residents 16 years old and over to borrow from the collection. The Charleston Marine Life Center (or CMLC) is a public museum and aquarium on the edge of the harbor in Charleston, OR, across the street from the OIMB campus. Displays aimed at visitors of all ages emphasize the diversity of animal and plant life in local marine ecosystems. Visitors learn where to interact with marine organisms in their natural environments and how local scientists study the life histories, evolution and ecology of underwater plants and animals. History The University of Oregon first established OIMB as a summer research and education program in 1924, operating out of tents along the beach of Sunset Bay. OIMB settled into its current location in 1931, when 100 acres of the Coos Head Military Reserve, including several buildings from the Army Corps of Engineers, was deeded to the University of Oregon. In 1937, OIMB was transferred to Oregon State College (now Oregon State University), and remained theirs until the federal government required the property during World War II. 
Following the war, OIMB was initially returned to Oregon State University, but the University of Oregon reclaimed it in 1955 as a summer research facility. Since 1966, OIMB has been expanding its educational programs and its campus, constructing more teaching and research facilities and developing year-round educational programs. Notably, the Loyd and Dorothy Rippey Library was built in 1999, and construction of the Charleston Marine Life Center began in 2012. The CMLC is entirely powered by a wind turbine that was erected in 2014. OIMB has functioned as a year-round research facility since 1966, and courses were developed for fall, spring, and summer soon after. Winter classes became available beginning in 2017. Research Vessels OIMB operates the R/V Pluteus, a 42-foot aluminum-hull trawler, the R/V Pugettia, a 20-foot aluminum (Woolridge) boat, and several small vessels, including flat-bottom aluminum boats, an inflatable zodiac, and a large kayak. OIMB also has its own 600 m Phantom ROV (Remotely Operated Vehicle). Courses The Oregon Institute of Marine Biology offers both undergraduate and graduate level courses each term, which are open to University of Oregon students as well as students from other institutions. 
Courses offered are as follows:

Invertebrate Zoology
Marine Ecology
Estuarine Biology
Deep-sea and Subtidal Ecology
Biological Oceanography
Marine Environmental Issues
Cell Biology
Marine Conservation Biology
Molecular Biology for Marine Sciences
Comparative Embryology and Larval Biology
Animal Behavior
Marine Birds and Mammals
Biology of Fishes
Introduction to Experimental Design and Statistics
Various weekend workshops during the summer, including:
Parasitology
Biological Illustrations
Seaweed Biology
Marine Biological Invasions
Marine Bioluminescence

See also Hatfield Marine Science Center, a similar research facility associated with Oregon State University and located in Newport, Oregon Hopkins Marine Station, a similar research facility run by Stanford University in Monterey, California References External links Oregon Institute of Marine Biology Follow OIMB on Twitter! University of Oregon Homepage Buildings and structures in Coos County, Oregon Marine biology Research institutes in Oregon University of Oregon Biological research institutes in the United States Education in Coos County, Oregon
Oregon Institute of Marine Biology
Biology
816
43,863,686
https://en.wikipedia.org/wiki/Corticolous%20lichen
A corticolous lichen is a lichen that grows on bark. This is contrasted with lignicolous lichen, which grows on wood that has had the bark stripped from it, and saxicolous lichen, which grows on rock. Examples of corticolous lichens include the crustose lichen Graphis plumierae, foliose lichen Melanohalea subolivacea and the fruticose Bryoria fuscescens. See also Phyllopsora ochroxantha References Lichenology
Corticolous lichen
Biology
118
2,198,300
https://en.wikipedia.org/wiki/Phosphaalkyne
In chemistry, a phosphaalkyne (IUPAC name: alkylidynephosphane) is an organophosphorus compound containing a triple bond between phosphorus and carbon with the general formula R-C≡P. Phosphaalkynes are the heavier congeners of nitriles, though, due to the similar electronegativities of phosphorus and carbon, they possess reactivity patterns reminiscent of alkynes. Due to their high reactivity, phosphaalkynes are not found naturally on earth, but the simplest phosphaalkyne, phosphaethyne (H-C≡P), has been observed in the interstellar medium. Synthesis From phosphine gas The first preparation of a phosphaalkyne was achieved in 1961 when Thurman Gier produced phosphaethyne by passing phosphine gas at low pressure over an electric arc produced between two carbon electrodes. Condensation of the gaseous products in a –196 °C (–321 °F) trap revealed that the reaction had produced acetylene, ethylene, and phosphaethyne, which was identified by infrared spectroscopy. By elimination reactions Elimination of hydrogen halides Following the initial synthesis of phosphaethyne, it was realized that the same compound can be prepared more expeditiously via the flash pyrolysis of methyldichlorophosphine (CH3PCl2), resulting in the loss of two equivalents of hydrogen chloride. This methodology has been utilized to synthesize numerous substituted phosphaalkynes, including the methyl, vinyl, chloride, and fluoride derivatives. Fluoromethylidynephosphane (F-C≡P) can also be prepared via the potassium hydroxide promoted dehydrofluorination of trifluoromethylphosphine (CF3PH2). It is speculated that these reactions generally proceed via an intermediate phosphaethylene with general structure RClC=PH. This hypothesis has found experimental support in the observation of F2C=PH by 31P NMR spectroscopy during the synthesis of F-C≡P. Elimination of chlorotrimethylsilane The high strength of silicon–halogen bonds can be leveraged toward the synthesis of phosphaalkynes.
Heating bis-trimethylsilylated methyldichlorophosphines ((SiMe3)2CRPCl2) under vacuum results in the expulsion of two equivalents of chlorotrimethylsilane and the ultimate formation of a new phosphaalkyne. This synthetic strategy has been applied in the synthesis of 2-phenylphosphaacetylene and 2-trimethylsilylphosphaacetylene. As with the synthetic routes reliant upon the elimination of a hydrogen halide, this route is suspected to involve an intermediate phosphaethylene species containing a C=P double bond, though such a species has not yet been observed. Elimination of hexamethyldisiloxane Like the preceding method, the most popular method for synthesizing phosphaalkynes relies upon the expulsion of products containing strong silicon–element bonds. Specifically, it is possible to synthesize phosphaalkynes via the elimination of hexamethyldisiloxane (HMDSO) from certain silylated phosphaalkenes with the general structure RO(SiMe3)C=PSiMe3. These phosphaalkenes are formed readily following the synthesis of the appropriate acyl bis-trimethylsilylphosphine, which undergoes a rapid [1,3]-silyl shift to produce the relevant phosphaalkene. This synthetic strategy is particularly appealing because the precursors (an acyl chloride and tris-trimethylsilylphosphine or bis-trimethylsilylphosphide) are either readily available or simple to synthesize. This method has been used to produce a variety of kinetically stable phosphaalkynes, including aryl, tertiary alkyl, secondary alkyl, and even primary alkyl phosphaalkynes in good yields. By rearrangement of a putative phospha-isocyanide Dihalophosphaalkenes of the general form R-P=CX2, where X is Cl, Br, or I, undergo lithium–halogen exchange with organolithium reagents to yield intermediates of the form R-P=CXLi.
These species then eject the corresponding lithium halide salt, LiX, putatively giving a phospha-isocyanide, which can rearrange, much in the same way as an isocyanide, to yield the corresponding phosphaalkyne. This rearrangement has been evaluated using the tools of computational chemistry, which show that the isomerization should proceed very rapidly, in line with current experimental evidence showing that phosphaisonitriles are unobservable intermediates, even at –85 °C (–121 °F). Other methods It has been demonstrated by Cummins and coworkers that thermolysis of compounds of the general form C14H10PC(=PPh3)R leads to the extrusion of C14H10 (anthracene), triphenylphosphine, and the corresponding substituted phosphaacetylene, R-C≡P. Unlike the previous method, which derives the phosphaalkyne substituent from an acyl chloride, this method derives the substituent from a Wittig reagent. Structure and bonding The carbon-phosphorus triple bond in phosphaalkynes represents an exception to the so-called "double bond rule", which would suggest that phosphorus tends not to form multiple bonds to carbon, and the nature of bonding within phosphaalkynes has therefore attracted much interest from synthetic and theoretical chemists. For simple phosphaalkynes such as H-C≡P and Me-C≡P, the carbon-phosphorus bond length is known from microwave spectroscopy, and for certain more complex phosphaalkynes, these bond lengths are known from single-crystal X-ray diffraction experiments. These bond lengths can be compared to the theoretical carbon-phosphorus triple-bond length of 1.54 Å predicted by Pekka Pyykkö. By bond length metrics, most structurally characterized alkyl- and aryl-substituted phosphaalkynes contain triple bonds between carbon and phosphorus, as their bond lengths are either equal to or less than the theoretical bond distance.
The carbon-phosphorus bond order in phosphaalkynes has also been the subject of computational inquiry, where quantum chemical calculations have been utilized to determine the nature of bonding in these molecules from first principles. In this context, natural bond orbital (NBO) theory has provided valuable insight into the bonding within these molecules. Lucas and coworkers have investigated the electronic structure of various substituted phosphaalkynes, including the cyaphide anion (C≡P–), using NBO, natural resonance theory (NRT), and the quantum theory of atoms in molecules (QTAIM) in an attempt to better describe the bonding in these molecules. For the simplest systems, C≡P– and H-C≡P, NBO analysis suggests that the only relevant resonance structure is that in which there is a triple bond between carbon and phosphorus. For more complex molecules, such as Me-C≡P and (Me)3C-C≡P, the triple-bonded resonance structure is still the most relevant, but accounts for only some of the overall electron density within the molecule (81.5% and 72.1%, respectively). This is due to interactions between the two carbon-phosphorus pi-bonds and the C-H or C-C sigma-bonds of the substituents, which can be visualized by inspecting the C-P pi-bonding molecular orbitals in these molecules. Reactivity Phosphaalkynes possess diverse reactivity profiles, and can be utilized in the synthesis of various phosphorus-containing saturated or unsaturated heterocyclic compounds. Cycloaddition reactivity One of the most developed areas of phosphaalkyne chemistry is that of cycloadditions. Like other multiply bonded molecular fragments, phosphaalkynes undergo myriad reactions such as [1+2] cycloadditions, [3+2] cycloadditions, and [4+2] cycloadditions. This reactivity is summarized in graphical format below, which includes some examples of 1,2-addition reactivity (which is not a form of cycloaddition).
Oligomerization The pi-bonds of phosphaalkynes are weaker than most carbon-phosphorus sigma bonds, rendering phosphaalkynes reactive with respect to the formation of oligomeric species containing more sigma bonds. These oligomerization reactions can be triggered thermally or catalyzed by transition or main-group metals. Uncatalyzed Phosphaalkynes with small substituents (H, F, Me, Ph, etc.) undergo decomposition at or below room temperature by way of polymerization/oligomerization to yield mixtures of products which are challenging to characterize. The same is largely true of kinetically stable phosphaalkynes, which undergo oligomerization reactions at elevated temperature. In spite of the challenges associated with isolating and identifying the products of these oligomerizations, cuboidal tetramers of tert-butylphosphaalkyne and tert-pentylphosphaalkyne have been isolated (albeit in low yield) and identified following heating of the respective phosphaalkyne. Computational chemistry has proved a valuable tool for studying these synthetically complex reactions, and it has been shown that while the formation of phosphaalkyne dimers is thermodynamically favorable, the formation of trimers, tetramers, and higher-order oligomeric species tends to be even more favorable, accounting for the generation of intractable mixtures upon inducing oligomerization of phosphaalkynes experimentally. Metal-mediated Unlike thermally initiated phosphaalkyne oligomerization reactions, transition metals and main-group metals are capable of oligomerizing phosphaalkynes in a controlled manner, and have enabled the isolation of phosphaalkyne dimers, trimers, tetramers, pentamers, and even hexamers. A nickel complex is capable of catalytically homocoupling tBu-C≡P to yield a diphosphatetrahedrane. See also Arsaalkyne Cyaphide References Functional groups Organophosphanes
Phosphaalkyne
Chemistry
2,312
19,897,702
https://en.wikipedia.org/wiki/Parseval%E2%80%93Gutzmer%20formula
In mathematics, the Parseval–Gutzmer formula states that, if $f$ is an analytic function on a closed disk of radius $r$ with Taylor series $f(z) = \sum_{k=0}^{\infty} a_k z^k$, then for $z = re^{i\theta}$ on the boundary of the disk, $\frac{1}{2\pi} \int_0^{2\pi} |f(re^{i\theta})|^2 \, d\theta = \sum_{k=0}^{\infty} |a_k|^2 r^{2k}$, which may also be written as $\frac{1}{2\pi} \int_0^{2\pi} |f(re^{i\theta})|^2 \, d\theta = \sum_{k=0}^{\infty} |a_k r^k|^2$. Proof The Cauchy integral formula for coefficients states that, for the above conditions, $a_k = \frac{1}{2\pi i} \oint_{\gamma} \frac{f(z)}{z^{k+1}} \, dz$, where $\gamma$ is defined to be the circular path around the origin of radius $r$. Also, for $m \neq n$ we have $\int_0^{2\pi} e^{i(m-n)\theta} \, d\theta = 0$, while for $m = n$ the integral equals $2\pi$. Applying both of these facts, expanding $|f(re^{i\theta})|^2 = f(re^{i\theta}) \overline{f(re^{i\theta})}$ as a double series and integrating term by term yields the formula. Further applications Using this formula, it is possible to show that $\sum_{k=0}^{\infty} |a_k|^2 r^{2k} \leq (M_r)^2$, where $M_r = \max \{ |f(z)| : |z| = r \}$. This is done by using the integral bound $\frac{1}{2\pi} \int_0^{2\pi} |f(re^{i\theta})|^2 \, d\theta \leq (M_r)^2$. References Theorems in complex analysis
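The term-by-term computation behind the proof can be written out explicitly (a standard derivation, reconstructed here rather than quoted from the article):

```latex
\begin{align*}
\frac{1}{2\pi}\int_0^{2\pi}\bigl|f(re^{i\theta})\bigr|^2\,d\theta
  &= \frac{1}{2\pi}\int_0^{2\pi}
     \left(\sum_{m=0}^{\infty} a_m r^m e^{im\theta}\right)
     \left(\sum_{n=0}^{\infty} \overline{a_n}\, r^n e^{-in\theta}\right) d\theta \\
  &= \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} a_m \overline{a_n}\, r^{m+n}
     \cdot \frac{1}{2\pi}\int_0^{2\pi} e^{i(m-n)\theta}\,d\theta
     && \text{(inner integral: $0$ for $m\neq n$, $2\pi$ for $m=n$)} \\
  &= \sum_{k=0}^{\infty} |a_k|^2\, r^{2k}.
\end{align*}
```

The bound in the applications section then follows by replacing the integrand with its maximum $(M_r)^2$.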
Parseval–Gutzmer formula
Mathematics
127
12,223,583
https://en.wikipedia.org/wiki/Gale%E2%80%93Church%20alignment%20algorithm
In computational linguistics, the Gale–Church algorithm is a method for aligning corresponding sentences in a parallel corpus. It works on the principle that equivalent sentences should roughly correspond in length; that is, longer sentences in one language should correspond to longer sentences in the other language. The algorithm was described in a 1993 paper by William A. Gale and Kenneth W. Church of AT&T Bell Laboratories. References External links Computational linguistics
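The length-correspondence principle can be illustrated with a dynamic-programming sketch. This is a simplified illustration, not Gale and Church's actual model: it assumes only 1-1, 1-0 and 0-1 alignment "beads", character lengths as input, and a squared-difference length cost with a fixed skip penalty, whereas the original algorithm uses a probabilistic (Gaussian) cost and also allows 2-1, 1-2 and 2-2 beads.

```python
# Simplified length-based sentence alignment in the spirit of Gale-Church.
# Assumptions (not from the article): only 1-1, 1-0 and 0-1 beads, a
# squared-difference length cost, and a fixed skip penalty.

def align(src_lens, tgt_lens, skip_penalty=100):
    """Return 1-1 sentence pairs (i, j) chosen by dynamic programming.

    src_lens, tgt_lens: character lengths of the sentences in each text.
    """
    n, m = len(src_lens), len(tgt_lens)
    INF = float("inf")
    # cost[i][j]: best cost of aligning the first i source and j target sentences
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            if i < n and j < m:  # 1-1 bead: penalize length mismatch
                c = cost[i][j] + (src_lens[i] - tgt_lens[j]) ** 2
                if c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1], back[i + 1][j + 1] = c, (i, j)
            if i < n and cost[i][j] + skip_penalty < cost[i + 1][j]:  # 1-0 bead
                cost[i + 1][j], back[i + 1][j] = cost[i][j] + skip_penalty, (i, j)
            if j < m and cost[i][j] + skip_penalty < cost[i][j + 1]:  # 0-1 bead
                cost[i][j + 1], back[i][j + 1] = cost[i][j] + skip_penalty, (i, j)
    pairs, i, j = [], n, m  # trace back, keeping only the 1-1 beads
    while (i, j) != (0, 0):
        pi, pj = back[i][j]
        if (i - pi, j - pj) == (1, 1):
            pairs.append((pi, pj))
        i, j = pi, pj
    return pairs[::-1]
```

For example, align([10, 100, 12], [11, 13]) pairs the first and last sentences and leaves the long middle source sentence unaligned, exactly the behavior the length heuristic predicts.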
Gale–Church alignment algorithm
Technology
86
52,187,779
https://en.wikipedia.org/wiki/NGC%20340
NGC 340 is a spiral galaxy in the constellation Cetus. It was discovered on September 27, 1864, by Albert Marth. It was described by Dreyer as "very faint, small, extended." References External links 0340 -01-03-055 003610 18640927 Cetus Spiral galaxies
NGC 340
Astronomy
67
1,855,357
https://en.wikipedia.org/wiki/Ecosystem%20service
Ecosystem services are the various benefits that humans derive from healthy ecosystems. These ecosystems, when functioning well, offer such things as provision of food, natural pollination of crops, clean air and water, decomposition of wastes, and flood control. Ecosystem services are grouped into four broad categories: provisioning services, such as the production of food and water; regulating services, such as the control of climate and disease; supporting services, such as nutrient cycles and oxygen production; and cultural services, such as spiritual and recreational benefits. Evaluations of ecosystem services may include assigning an economic value to them. For example, estuarine and coastal ecosystems are marine ecosystems that perform the four categories of ecosystem services in several ways. Firstly, their provisioning services include marine resources and genetic resources. Secondly, their supporting services include nutrient cycling and primary production. Thirdly, their regulating services include carbon sequestration (which helps with climate change mitigation) and flood control. Lastly, their cultural services include recreation and tourism. The Millennium Ecosystem Assessment (MA), carried out in the early 2000s, made this concept better known. Definition Ecosystem services or eco-services are defined as the goods and services provided by ecosystems to humans. Per the 2005 Millennium Ecosystem Assessment (MA), ecosystem services are "the benefits people obtain from ecosystems". The MA also delineated the four categories of ecosystem services as provisioning, regulating, supporting, and cultural. By 2010, various working definitions and descriptions of ecosystem services had evolved in the literature.
To prevent double-counting in ecosystem services audits, for instance, The Economics of Ecosystems and Biodiversity (TEEB) replaced "Supporting Services" in the MA with "Habitat Services" and "ecosystem functions", defined as "a subset of the interactions between ecosystem structure and processes that underpin the capacity of an ecosystem to provide goods and services". While Gretchen Daily's original definition distinguished between ecosystem goods and ecosystem services, Robert Costanza and colleagues' later work and that of the Millennium Ecosystem Assessment lumped all of these together as ecosystem services. Categories Four different types of ecosystem services are distinguished in the scientific literature: regulating services, provisioning services, cultural services and supporting services. An ecosystem does not necessarily offer all four types of services simultaneously, but given the intricate nature of any ecosystem, it is usually assumed that humans benefit from a combination of these services. The services offered by diverse types of ecosystems (forests, seas, coral reefs, mangroves, etc.) differ in nature and in consequence. In fact, some services directly affect the livelihood of neighboring human populations (such as fresh water, food or aesthetic value), while other services affect general environmental conditions by which humans are indirectly impacted (such as climate regulation, erosion regulation or natural hazard regulation). The 2005 Millennium Ecosystem Assessment report defined ecosystem services as the benefits people obtain from ecosystems and distinguished four categories of ecosystem services, in which the so-called supporting services are regarded as the basis for the services of the other three categories. Provisioning services Provisioning services consist of all "the products obtained from ecosystems".
The following services are also known as ecosystem goods:
- food (including seafood and game), crops, wild foods, and spices
- raw materials (including lumber, skins, fuelwood, organic matter, fodder, and fertilizer)
- genetic resources (including crop improvement genes and health care)
- biogenic minerals
- medicinal resources (including pharmaceuticals, chemical models, and test and assay organisms)
- energy (hydropower, biomass fuels)
- ornamental resources (including fashion, handicrafts, jewelry, pets, worship, decoration, and souvenirs like furs, feathers, ivory, orchids, butterflies, aquarium fish, shells, etc.)
Forests and forest management produce a large variety of timber products, including roundwood, sawnwood, panels, and engineered wood, e.g., cross-laminated timber, as well as pulp and paper. Besides the production of timber, forestry activities may also result in products that undergo little processing, such as fire wood, charcoal, wood chips and roundwood used in an unprocessed form. Global production and trade of all major wood-based products recorded their highest ever values in 2018. Production, imports and exports of roundwood, sawnwood, wood-based panels, wood pulp, wood charcoal and pellets reached their maximum quantities since 1947, when FAO started reporting global forest product statistics. In 2018, growth in production of the main wood-based product groups ranged from 1 percent (wood-based panels) to 5 percent (industrial roundwood). The fastest growth occurred in the Asia-Pacific, Northern American and European regions, likely due to positive economic growth in these areas. Over 40% of the territory in the European Union is covered by forests. This forest area has grown via afforestation by roughly 0.4% per year in recent decades. In the European Union, just 60% of the yearly forest growth is harvested. Forests also provide non-wood forest products, including fodder, aromatic and medicinal plants, and wild foods.
Worldwide, around 1 billion people depend to some extent on wild foods such as wild meat, edible insects, edible plant products, mushrooms and fish, which often contain high levels of key micronutrients. The value of forest foods as a nutritional resource is not limited to low- and middle-income countries; more than 100 million people in the European Union (EU) regularly consume wild food. Some 2.4 billion people – in both urban and rural settings – use wood-based energy for cooking. Regulating services Regulating services are the "benefits obtained from the regulation of ecosystem processes". These include:
- purification of water and air
- carbon sequestration (which contributes to climate change mitigation)
- waste decomposition and detoxification
- predation (regulation of prey populations)
- biological control (pest and disease control)
- pollination
- disturbance regulation, i.e. flood protection
Water purification An example of water purification as an ecosystem service is as follows: in New York City, where the quality of drinking water had fallen below standards required by the U.S. Environmental Protection Agency (EPA), authorities opted to restore the polluted Catskill Watershed that had previously provided the city with the ecosystem service of water purification. Once the input of sewage and pesticides to the watershed area was reduced, natural abiotic processes such as soil absorption and filtration of chemicals, together with biotic recycling via root systems and soil microorganisms, improved water quality to levels that met government standards. The cost of this investment in natural capital was estimated at $1–1.5 billion, which contrasted dramatically with the estimated $6–8 billion cost of constructing a water filtration plant plus the $300 million annual running costs. Pollination Pollination of crops by bees is required for 15–30% of U.S. food production; most large-scale farmers import non-native honey bees to provide this service.
A 2005 study of California's agricultural region found that wild bees alone could provide partial or complete pollination services or enhance the services provided by honey bees through behavioral interactions. However, intensified agricultural practices can quickly erode pollination services through the loss of species, and the remaining species are unable to compensate for the loss. The results of this study also indicate that the proportion of chaparral and oak-woodland habitat available for wild bees within 1–2 km of a farm can stabilize and enhance the provision of pollination services. The presence of such ecosystem elements functions almost like an insurance policy for farmers. Buffer zones Coastal and estuarine ecosystems act as buffer zones against natural hazards and environmental disturbances, such as floods, cyclones, tidal surges and storms. The role they play is to "[absorb] a portion of the impact and thus [lessen] its effect on the land". Wetlands (which include saltwater swamps, salt marshes, ...) and the vegetation they support – trees, root mats, etc. – retain large amounts of water (surface water, snowmelt, rain, groundwater) and then slowly release it back, decreasing the likelihood of floods. Mangrove forests protect coastal shorelines from tidal erosion or erosion by currents, a process that was studied after the 1999 cyclone that hit India: villages that were surrounded by mangrove forests suffered less damage than villages that were not protected by mangroves. Supporting services Supporting services are the services that allow the other ecosystem services to be present. They have indirect impacts on humans that last over a long period of time. Several services can be considered as being both supporting services and regulating/cultural/provisioning services. Supporting services include, for example, nutrient cycling, primary production, soil formation and habitat provision.
These services make it possible for the ecosystems to continue providing services such as food supply, flood regulation, and water purification. Nutrient cycling Nutrient cycling is the movement of nutrients through an ecosystem by biotic and abiotic processes. The ocean is a vast storage pool for these nutrients, such as carbon, nitrogen and phosphorus. The nutrients are absorbed by the basic organisms of the marine food web and are thus transferred from one organism to the other and from one ecosystem to the other. Nutrients are recycled through the life cycle of organisms as they die and decompose, releasing the nutrients into the neighboring environment. "The service of nutrient cycling eventually impacts all other ecosystem services as all living things require a constant supply of nutrients to survive". Primary production Primary production refers to the production of organic matter, i.e., chemically bound energy, through processes such as photosynthesis and chemosynthesis. The organic matter produced by primary producers forms the basis of all food webs. Further, it generates oxygen (O2), a molecule necessary to sustain animals and humans. On average, a human consumes about 550 liters of oxygen per day, whereas plants produce 1.5 liters of oxygen per 10 grams of growth. Cultural services Cultural services relate to the non-material world: they benefit recreational, aesthetic, cognitive and spiritual activities, which are not easily quantifiable in monetary terms. They include:
- cultural (including use of nature as motif in books, film, painting, folklore, national symbols, advertising, etc.)
- spiritual and historical (including use of nature for religious or heritage value)
- recreational experiences (including ecotourism, outdoor sports, and recreation)
- science and education (including use of natural systems for school excursions, and scientific discovery)
- therapeutic (including eco-therapy, social forestry and animal-assisted therapy)
As of 2012, there was discussion of how the concept of cultural ecosystem services could be operationalized, and how landscape aesthetics, cultural heritage, outdoor recreation, and spiritual significance should be defined so that they fit into the ecosystem services approach; some authors favor models that explicitly link ecological structures and functions with cultural values and benefits. Likewise, there has been a fundamental critique of the concept of cultural ecosystem services that builds on three arguments:
- Pivotal cultural values attaching to the natural/cultivated environment rely on an area's unique character that cannot be addressed by methods that use universal scientific parameters to determine ecological structures and functions.
- If a natural/cultivated environment has symbolic meanings and cultural values, the objects of these values are not ecosystems but shaped phenomena like mountains, lakes, forests, and, mainly, symbolic landscapes.
- Cultural values do not result from properties produced by ecosystems but are the product of a specific way of seeing within the given cultural framework of symbolic experience.
The Common International Classification of Ecosystem Services (CICES) is a classification scheme developed for accounting systems (such as national accounts), in order to avoid double-counting of supporting services with provisioning and regulating services. Recreation and tourism Sea sports are very popular among coastal populations: surfing, snorkeling, whale watching, kayaking, recreational fishing ...
Many tourists also travel to resorts close to the sea, rivers or lakes to experience these activities and relax near the water. The United Nations Sustainable Development Goal 14 also has targets aimed at enhancing the use of ecosystem services for sustainable tourism, especially in Small Island Developing States. Estuarine and coastal ecosystem services Estuarine and marine coastal ecosystems are both marine ecosystems. Together, these ecosystems perform the four categories of ecosystem services in a variety of ways: The provisioning services include forest products, marine products, fresh water, raw materials, and biochemical and genetic resources. Regulating services include carbon sequestration (contributing to climate change mitigation) as well as waste treatment, disease regulation and buffer zones. Supporting services of coastal ecosystems include nutrient cycling, biologically mediated habitats and primary production. Cultural services of coastal ecosystems include inspirational aspects, recreation and tourism, and science and education. Coasts and their adjacent areas on and offshore are an important part of a local ecosystem. The mixture of fresh water and salt water (brackish water) in estuaries provides many nutrients for marine life. Salt marshes, mangroves and beaches also support a diversity of plants, animals and insects crucial to the food chain. The high level of biodiversity creates a high level of biological activity, which has attracted human activity for thousands of years. Coasts also contain essential habitats for marine organisms, including estuaries, wetlands, seagrass beds, coral reefs, and mangroves, and provide habitats for migratory birds, sea turtles and marine mammals. Economics There are questions regarding the environmental and economic values of ecosystem services.
Some people may be unaware of the environment in general and humanity's interrelatedness with the natural environment, which may cause misconceptions. Although environmental awareness is rapidly improving in our contemporary world, ecosystem capital and its flow are still poorly understood, threats continue to mount, and we suffer from the so-called 'tragedy of the commons'. Many efforts to inform decision-makers of current versus future costs and benefits now involve organizing and translating scientific knowledge into economics, which articulates the consequences of our choices in comparable units of impact on human well-being. An especially challenging aspect of this process is that ecological information collected at one spatial-temporal scale cannot necessarily be applied at another; understanding the dynamics of ecological processes relative to ecosystem services is essential in aiding economic decisions. Weighting factors such as a service's irreplaceability or bundled services can also allocate economic value such that goal attainment becomes more efficient. The economic valuation of ecosystem services also involves social communication and information, areas that remain particularly challenging and are the focus of many researchers. In general, the idea is that although individuals make decisions for any variety of reasons, trends reveal the aggregated preferences of a society, from which the economic value of services can be inferred and assigned. The six major methods for valuing ecosystem services in monetary terms are:
- Avoided cost: services allow society to avoid costs that would have been incurred in the absence of those services (e.g. waste treatment by wetland habitats avoids health costs)
- Replacement cost: services could be replaced with human-made systems (e.g. restoration of the Catskill Watershed cost less than the construction of a water purification plant)
- Factor income: services provide for the enhancement of incomes (e.g. improved water quality increases the commercial take of a fishery and improves the income of fishers)
- Travel cost: service demand may require travel, whose costs can reflect the implied value of the service (e.g. the value of an ecotourism experience is at least what a visitor is willing to pay to get there)
- Hedonic pricing: service demand may be reflected in the prices people will pay for associated goods (e.g. coastal housing prices exceed those of inland homes)
- Contingent valuation: service demand may be elicited by posing hypothetical scenarios that involve some valuation of alternatives (e.g. visitors' willingness to pay for increased access to national parks)
A peer-reviewed study published in 1997 estimated the value of the world's ecosystem services and natural capital to be between US$16 and $54 trillion per year, with an average of US$33 trillion per year. However, Salles (2011) indicated 'The total value of biodiversity is infinite, so having debate about what is the total value of nature is actually pointless because we can't live without it'. As of 2012, many companies were not fully aware of the extent of their dependence and impact on ecosystems and the possible ramifications. Likewise, environmental management systems and environmental due diligence tools are more suited to handling "traditional" issues of pollution and natural resource consumption. Most focus on environmental impacts, not dependence.
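The Catskill figures quoted earlier make the replacement-cost comparison easy to reproduce. A minimal sketch, assuming a 30-year horizon and a 5% discount rate for the plant's running costs (both assumptions are illustrative, not from the text):

```python
# Rough replacement-cost comparison for the Catskill Watershed example.
# Dollar figures come from the text; the 30-year horizon and 5% discount
# rate are illustrative assumptions, not from the source.

def npv_of_annuity(annual_cost, years, rate):
    """Present value of a constant annual cost paid at the end of each year."""
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

watershed_restoration = 1.5e9  # one-off cost, upper estimate
filtration_plant = 8e9 + npv_of_annuity(300e6, years=30, rate=0.05)

print(f"restoration: ${watershed_restoration / 1e9:.1f}B")
print(f"plant (build + 30y running costs): ${filtration_plant / 1e9:.1f}B")
```

Even under such rough assumptions, the engineered substitute comes out several times more expensive than restoring the natural capital, which is the point this example is usually cited to make.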
Several tools and methodologies can help the private sector value and assess ecosystem services, including Our Ecosystem, the 2008 Corporate Ecosystem Services Review, the Artificial Intelligence for Environment & Sustainability (ARIES) project from 2007, the Natural Value Initiative (2012) and InVEST (Integrated Valuation of Ecosystem Services & Tradeoffs, 2012). To provide an example of a cost comparison: the land of the United States Department of Defense is said to provide substantial ecosystem services to local communities, including benefits to carbon storage, climate resilience, and endangered species habitat. As of 2020, the Eglin Air Force Base is said to provide about $110 million in ecosystem services per year, $40 million more than if no base were present. Payments Management and policy Although monetary pricing continues with respect to the valuation of ecosystem services, the challenges in policy implementation and management are significant. The administration of common pool resources has been a subject of extensive academic pursuit. From defining the problems to finding solutions that can be applied in practical and sustainable ways, there is much to overcome. Options must balance present and future human needs, and decision-makers must frequently work from valid but incomplete information. Existing legal policies are often considered insufficient, since they typically pertain to human health-based standards that are mismatched with the means necessary to protect ecosystem health and services. In 2000, to improve the information available, the implementation of an Ecosystem Services Framework (ESF) was suggested, which integrates the biophysical and socio-economic dimensions of protecting the environment and is designed to guide institutions through multidisciplinary information and jargon, helping to direct strategic choices.
As of 2005, local to regional collective management efforts were considered appropriate for services like crop pollination or resources like water. Another approach that became increasingly popular during the 1990s is the marketing of ecosystem services protection. Payment for and trading of services is an emerging worldwide small-scale solution in which one can acquire credits for activities such as sponsoring the protection of carbon sequestration sources or the restoration of ecosystem service providers. In some cases, banks for handling such credits have been established, and conservation companies have even gone public on stock exchanges, defining an ever closer link with economic endeavors and opportunities for tying into social perceptions. However, clearly defined land rights are crucial for implementation, and these are often lacking in many developing countries. In particular, many forest-rich developing countries suffering deforestation experience conflict between different forest stakeholders. In addition, concerns for such global transactions include inconsistent compensation for services or resources sacrificed elsewhere and misconceived warrants for irresponsible use. As of 2001, another approach focused on protecting ecosystem service biodiversity hotspots. Recognition that the conservation of many ecosystem services aligns with more traditional conservation goals (i.e. biodiversity) has led to the suggested merging of objectives for maximizing their mutual success. This may be particularly strategic when employing networks that permit the flow of services across landscapes, and might also facilitate securing the financial means to protect services through a diversification of investors. For example, as of 2013, there had been interest in the valuation of ecosystem services provided by shellfish production and restoration.
As a keystone species low in the food chain, bivalve shellfish such as oysters support a complex community of species by performing a number of functions essential to the diverse array of species that surround them. There is also increasing recognition that some shellfish species may impact or control many ecological processes; so much so that they are included on the list of "ecosystem engineers" – organisms that physically, biologically or chemically modify the environment around them in ways that influence the health of other organisms. Many of the ecological functions and processes performed or affected by shellfish contribute to human well-being by providing a stream of valuable ecosystem services over time, for example by filtering out particulate materials and potentially mitigating water quality issues by controlling excess nutrients in the water. As of 2018, the concept of ecosystem services had not yet been properly implemented into international and regional legislation. Notwithstanding, the United Nations Sustainable Development Goal 15 has a target to ensure the conservation, restoration, and sustainable use of ecosystem services. An estimated $125 trillion to $140 trillion is added to the economy each year by all ecosystem services. However, many of these services are at risk due to climate and other anthropogenic impacts. Climate-driven shifts in biome ranges are expected to cause a 9% decline in ecosystem services on average at the global scale by 2100. Ecosystem-based adaptation (EbA) Land use change decisions Ecosystem services decisions require making complex choices at the intersection of ecology, technology, society, and the economy. The process of making ecosystem services decisions must consider the interaction of many types of information, honor all stakeholder viewpoints (including regulatory agencies, proposal proponents, decision makers, residents and NGOs), and measure the impacts on all four parts of the intersection.
These decisions are usually spatial, always multi-objective, and based on uncertain data, models, and estimates. Often it is the combination of the best science with stakeholder values, estimates, and opinions that drives the process. One analytical study modeled the stakeholders as agents to support water resource management decisions in the Middle Rio Grande basin of New Mexico. This study focused on modeling the stakeholder inputs across a spatial decision, but ignored uncertainty. Another study used Monte Carlo methods to exercise econometric models of landowner decisions in a study of the effects of land-use change. Here the stakeholder inputs were modeled as random effects to reflect the uncertainty. A third study used a Bayesian decision support system both to model the uncertainty in the scientific information (using Bayes nets) and to assist in collecting and fusing the input from stakeholders. This study was about siting wave energy devices off the Oregon coast, but presents a general method for managing uncertain spatial science and stakeholder information in a decision-making environment. Remote sensing data and analyses can be used to assess the health and extent of land cover classes that provide ecosystem services, which aids in planning, management, monitoring of stakeholders' actions, and communication between stakeholders. In the Baltic countries, scientists, nature conservationists and local authorities are implementing an integrated planning approach for grassland ecosystems. They are developing an integrated planning tool, based on GIS (geographic information system) technology and made available online, that will help planners choose the best management solution for a specific grassland. It will look holistically at the processes in the countryside and help find the best grassland management solutions by taking into account both the natural and socioeconomic factors of the particular site. 
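The random-effects treatment of stakeholder input mentioned above can be illustrated with a minimal Monte Carlo sketch. All names, numbers, and distributions below are hypothetical and for illustration only; they are not taken from the cited studies:

```python
import random

def land_value(p_crop, p_forest, weight):
    """Toy model: parcel value as a weighted mix of crop revenue and
    forest ecosystem services (hypothetical numbers)."""
    return weight * p_crop + (1.0 - weight) * p_forest

def monte_carlo_value(trials=10_000, seed=1):
    """Propagate uncertainty in a stakeholder preference parameter,
    treated as a random effect, through the toy model."""
    rng = random.Random(seed)
    values = []
    for _ in range(trials):
        # Preference for cropping is sampled from a clipped normal
        # distribution rather than fixed at a single point estimate.
        weight = min(max(rng.gauss(0.6, 0.15), 0.0), 1.0)
        values.append(land_value(p_crop=100.0, p_forest=80.0, weight=weight))
    mean = sum(values) / trials
    return mean, (min(values), max(values))
```

Running this with different seeds yields a distribution of outcomes rather than a single point estimate, which is the essence of modeling stakeholder inputs as random effects.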
History While the notion of human dependence on Earth's ecosystems reaches back to the start of Homo sapiens' existence, the term 'natural capital' was first coined by E. F. Schumacher in 1973 in his book Small is Beautiful. Recognition of how ecosystems could provide complex services to humankind dates back to at least Plato (c. 400 BC), who understood that deforestation could lead to soil erosion and the drying of springs. Modern ideas of ecosystem services probably began when George Perkins Marsh challenged, in 1864, the idea that Earth's natural resources are unbounded by pointing out changes in soil fertility in the Mediterranean. It was not until the late 1940s that three key authors – Henry Fairfield Osborn, Jr., William Vogt, and Aldo Leopold – promoted recognition of human dependence on the environment. In 1956, Paul Sears drew attention to the critical role of the ecosystem in processing wastes and recycling nutrients. In 1970, Paul Ehrlich and Rosa Weigert called attention to "ecological systems" in their environmental science textbook and to "the most subtle and dangerous threat to man's existence ... the potential destruction, by man's own activities, of those ecological systems upon which the very existence of the human species depends". The term environmental services was introduced in a 1970 report of the Study of Critical Environmental Problems, which listed services including insect pollination, fisheries, climate regulation and flood control. In following years, variations of the term were used, but eventually 'ecosystem services' became the standard in scientific literature. The ecosystem services concept has continued to expand and now includes socio-economic and conservation objectives. 
See also Blue carbon Diversity-function debate Earth Economics Ecological goods and services Ecosystem-based disaster risk reduction Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services Natural capital Soil functions Nature Based Solutions References Sources External links Millennium Ecosystem Assessment Earth Economics Water Evaluation And Planning (WEAP) system for modeling impacts on aquatic ecosystem services GecoServ – Gulf of Mexico Ecosystem Services Valuation Database (includes studies from all over the world, but only coastal ecosystems relevant to the Gulf of Mexico) Ecological restoration Ecological economics Systems ecology Social ecology Human ecology Forestry and the environment Environmental social science concepts Ecological economics concepts
Ecosystem service
Chemistry,Engineering,Environmental_science
5,134
26,611,936
https://en.wikipedia.org/wiki/Penrose%20tiling
A Penrose tiling is an example of an aperiodic tiling. Here, a tiling is a covering of the plane by non-overlapping polygons or other shapes, and a tiling is aperiodic if it does not contain arbitrarily large periodic regions or patches. However, despite their lack of translational symmetry, Penrose tilings may have both reflection symmetry and fivefold rotational symmetry. Penrose tilings are named after mathematician and physicist Roger Penrose, who investigated them in the 1970s. There are several variants of Penrose tilings with different tile shapes. The original form of Penrose tiling used tiles of four different shapes, but this was later reduced to only two shapes: either two different rhombi, or two different quadrilaterals called kites and darts. The Penrose tilings are obtained by constraining the ways in which these shapes are allowed to fit together in a way that avoids periodic tiling. This may be done in several different ways, including matching rules, substitution tiling or finite subdivision rules, cut and project schemes, and coverings. Even constrained in this manner, each variation yields infinitely many different Penrose tilings. Penrose tilings are self-similar: they may be converted to equivalent Penrose tilings with different sizes of tiles, using processes called inflation and deflation. The pattern represented by every finite patch of tiles in a Penrose tiling occurs infinitely many times throughout the tiling. They are quasicrystals: implemented as a physical structure a Penrose tiling will produce diffraction patterns with Bragg peaks and five-fold symmetry, revealing the repeated patterns and fixed orientations of its tiles. The study of these tilings has been important in the understanding of physical materials that also form quasicrystals. Penrose tilings have also been applied in architecture and decoration, as in the floor tiling shown. 
Background and history Periodic and aperiodic tilings Covering a flat surface ("the plane") with some pattern of geometric shapes ("tiles"), with no overlaps or gaps, is called a tiling. The most familiar tilings, such as covering a floor with squares meeting edge-to-edge, are examples of periodic tilings. If a square tiling is shifted by the width of a tile, parallel to the sides of the tile, the result is the same pattern of tiles as before the shift. A shift (formally, a translation) that preserves the tiling in this way is called a period of the tiling. A tiling is called periodic when it has periods that shift the tiling in two different directions. The tiles in the square tiling have only one shape, and it is common for other tilings to have only a finite number of shapes. These shapes are called prototiles, and a set of prototiles is said to admit a tiling or tile the plane if there is a tiling of the plane using only these shapes. That is, each tile in the tiling must be congruent to one of these prototiles. A tiling that has no periods is non-periodic. A set of prototiles is said to be aperiodic if all of its tilings are non-periodic, and in this case its tilings are also called aperiodic tilings. Penrose tilings are among the simplest known examples of aperiodic tilings of the plane by finite sets of prototiles. Earliest aperiodic tilings The subject of aperiodic tilings received new interest in the 1960s when logician Hao Wang noted connections between decision problems and tilings. In particular, he introduced tilings by square plates with colored edges, now known as Wang dominoes or tiles, and posed the "Domino Problem": to determine whether a given set of Wang dominoes could tile the plane with matching colors on adjacent domino edges. He observed that if this problem were undecidable, then there would have to exist an aperiodic set of Wang dominoes. At the time, this seemed implausible, so Wang conjectured no such set could exist. 
Wang's student Robert Berger proved that the Domino Problem was undecidable (so Wang's conjecture was incorrect) in his 1964 thesis, and obtained an aperiodic set of 20,426 Wang dominoes. He also described a reduction to 104 such prototiles; the latter did not appear in his published monograph, but in 1968, Donald Knuth detailed a modification of Berger's set requiring only 92 dominoes. The color matching required in a tiling by Wang dominoes can easily be achieved by modifying the edges of the tiles like jigsaw puzzle pieces so that they can fit together only as prescribed by the edge colorings. Raphael Robinson, in a 1971 paper which simplified Berger's techniques and undecidability proof, used this technique to obtain an aperiodic set of just six prototiles. Development of the Penrose tilings The first Penrose tiling (tiling P1 below) is an aperiodic set of six prototiles, introduced by Roger Penrose in a 1974 paper, based on pentagons rather than squares. Any attempt to tile the plane with regular pentagons necessarily leaves gaps, but Johannes Kepler showed, in his 1619 work Harmonices Mundi, that these gaps can be filled using pentagrams (star polygons), decagons and related shapes. Kepler extended this tiling by five polygons and found no periodic patterns, conjecturing that every extension would introduce a new feature and hence create an aperiodic tiling. Traces of these ideas can also be found in the work of Albrecht Dürer. Acknowledging inspiration from Kepler, Penrose found matching rules for these shapes, obtaining an aperiodic set. These matching rules can be imposed by decorations of the edges, as with the Wang tiles. Penrose's tiling can be viewed as a completion of Kepler's finite Aa pattern. Penrose subsequently reduced the number of prototiles to two, discovering the kite and dart tiling (tiling P2 below) and the rhombus tiling (tiling P3 below). The rhombus tiling was independently discovered by Robert Ammann in 1976. 
Penrose and John H. Conway investigated the properties of Penrose tilings, and discovered that a substitution property explained their hierarchical nature; their findings were publicized by Martin Gardner in his January 1977 "Mathematical Games" column in Scientific American. In 1981, N. G. de Bruijn provided two different methods to construct Penrose tilings. De Bruijn's "multigrid method" obtains the Penrose tilings as the dual graphs of arrangements of five families of parallel lines. In his "cut and project method", Penrose tilings are obtained as two-dimensional projections from a five-dimensional cubic structure. In these approaches, the Penrose tiling is viewed as a set of points, its vertices, while the tiles are geometrical shapes obtained by connecting vertices with edges. A 1990 construction by Baake, Kramer, Schlottmann, and Zeidler derived the Penrose tiling and the related Tübingen triangle tiling in a similar manner from the four-dimensional 5-cell honeycomb. Penrose tilings The three types of Penrose tiling, P1–P3, are described individually below. They have many common features: in each case, the tiles are constructed from shapes related to the pentagon (and hence to the golden ratio), but the basic tile shapes need to be supplemented by matching rules in order to tile aperiodically. These rules may be described using labeled vertices or edges, or patterns on the tile faces; alternatively, the edge profile can be modified (e.g. by indentations and protrusions) to obtain an aperiodic set of prototiles. Original pentagonal Penrose tiling (P1) Penrose's first tiling uses pentagons and three other shapes: a five-pointed "star" (a pentagram), a "boat" (roughly 3/5 of a star) and a "diamond" (a thin rhombus). To ensure that all tilings are non-periodic, there are matching rules that specify how tiles may meet each other, and there are three different types of matching rule for the pentagonal tiles. 
Treating these three types as different prototiles gives a set of six prototiles overall. It is common to indicate the three different types of pentagonal tiles using three different colors, as in the figure above right. Kite and dart tiling (P2) Penrose's second tiling uses quadrilaterals called the "kite" and "dart", which may be combined to make a rhombus. However, the matching rules prohibit such a combination. Both the kite and dart are composed of two triangles, called Robinson triangles, after 1975 notes by Robinson. The kite is a quadrilateral whose four interior angles are 72, 72, 72, and 144 degrees. The kite may be bisected along its axis of symmetry to form a pair of acute Robinson triangles (with angles of 36, 72 and 72 degrees). The dart is a non-convex quadrilateral whose four interior angles are 36, 72, 36, and 216 degrees. The dart may be bisected along its axis of symmetry to form a pair of obtuse Robinson triangles (with angles of 36, 36 and 108 degrees), which are smaller than the acute triangles. The matching rules can be described in several ways. One approach is to color the vertices (with two colors, e.g., black and white) and require that adjacent tiles have matching vertices. Another is to use a pattern of circular arcs (as shown above left in green and red) to constrain the placement of tiles: when two tiles share an edge in a tiling, the patterns must match at these edges. These rules often force the placement of certain tiles: for example, the concave vertex of any dart is necessarily filled by two kites. The corresponding figure (center of the top row in the lower image on the left) is called an "ace" by Conway; although it looks like an enlarged kite, it does not tile in the same way. Similarly the concave vertex formed when two kites meet along a short edge is necessarily filled by two darts (bottom right). 
In fact, there are only seven possible ways for the tiles to meet at a vertex; two of these figures – namely, the "star" (top left) and the "sun" (top right) – have 5-fold dihedral symmetry (by rotations and reflections), while the remainder have a single axis of reflection (vertical in the image). Apart from the ace (top middle) and the sun, all of these vertex figures force the placement of additional tiles. Rhombus tiling (P3) The third tiling uses a pair of rhombuses (often referred to as "rhombs" in this context) with equal sides but different angles. Ordinary rhombus-shaped tiles can be used to tile the plane periodically, so restrictions must be made on how tiles can be assembled: no two tiles may form a parallelogram, as this would allow a periodic tiling, but this constraint is not sufficient to force aperiodicity, as figure 1 above shows. There are two kinds of tile, both of which can be decomposed into Robinson triangles. The thin rhomb t has four corners with angles of 36, 144, 36, and 144 degrees. The t rhomb may be bisected along its short diagonal to form a pair of acute Robinson triangles. The thick rhomb T has angles of 72, 108, 72, and 108 degrees. The T rhomb may be bisected along its long diagonal to form a pair of obtuse Robinson triangles; in contrast to the P2 tiling, these are larger than the acute triangles. The matching rules distinguish sides of the tiles, and entail that tiles may be juxtaposed in certain particular ways but not in others. Two ways to describe these matching rules are shown in the image on the right. In one form, tiles must be assembled such that the curves on the faces match in color and position across an edge. In the other, tiles must be assembled such that the bumps on their edges fit together. There are 54 cyclically ordered combinations of such angles that add up to 360 degrees at a vertex, but the rules of the tiling allow only seven of these combinations to appear (although one of these arises in two ways). 
The various combinations of angles and facial curvature allow construction of arbitrarily complex tiles, such as the Penrose chickens. Features and constructions Golden ratio and local pentagonal symmetry Several properties and common features of the Penrose tilings involve the golden ratio φ (approximately 1.618). This is the ratio of chord lengths to side lengths in a regular pentagon, and satisfies φ = 1 + 1/φ. Consequently, the ratio of the lengths of long sides to short sides in the (isosceles) Robinson triangles is φ:1. It follows that the ratio of long side lengths to short in both kite and dart tiles is also φ:1, as are the length ratios of sides to the short diagonal in the thin rhomb t, and of long diagonal to sides in the thick rhomb T. In both the P2 and P3 tilings, the ratio of the area of the larger Robinson triangle to the smaller one is φ:1, hence so are the ratios of the areas of the kite to the dart, and of the thick rhomb to the thin rhomb. (Both larger and smaller obtuse Robinson triangles can be found in the pentagon on the left: the larger triangles at the top – the halves of the thick rhomb – have linear dimensions scaled up by φ compared to the small shaded triangle at the base, and so the ratio of areas is φ²:1.) Any Penrose tiling has local pentagonal symmetry, in the sense that there are points in the tiling surrounded by a symmetric configuration of tiles: such configurations have fivefold rotational symmetry about the center point, as well as five mirror lines of reflection symmetry passing through the point, a dihedral symmetry group. This symmetry will generally preserve only a patch of tiles around the center point, but the patch can be very large: Conway and Penrose proved that whenever the colored curves on the P2 or P3 tilings close in a loop, the region within the loop has pentagonal symmetry, and furthermore, in any tiling, there are at most two such curves of each color that do not close up. 
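The golden-ratio relationships noted above can be checked numerically. The following sketch (assuming the standard angle assignments for the Robinson triangles: acute 36-72-72, obtuse 36-36-108) verifies the pentagon chord ratio, the identity φ = 1 + 1/φ, and the φ:1 area ratio of the acute to the obtuse triangle:

```python
import math

phi = (1 + math.sqrt(5)) / 2                 # golden ratio, ~1.618034

# The diagonal (chord) of a regular pentagon with unit sides has
# length 2*cos(36 degrees), which equals phi; phi satisfies phi = 1 + 1/phi.
assert math.isclose(2 * math.cos(math.pi / 5), phi)
assert math.isclose(phi, 1 + 1 / phi)

def isosceles_area(leg, base):
    """Area of an isosceles triangle from its leg and base lengths."""
    height = math.sqrt(leg ** 2 - (base / 2) ** 2)
    return base * height / 2

# Acute Robinson triangle (36-72-72): legs phi, base 1.
# Obtuse Robinson triangle (36-36-108): legs 1, base phi.
acute = isosceles_area(leg=phi, base=1.0)
obtuse = isosceles_area(leg=1.0, base=phi)
assert math.isclose(acute / obtuse, phi)     # area ratio is phi : 1
```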
There can be at most one center point of global fivefold symmetry: if there were more than one, then rotating each about the other would yield two closer centers of fivefold symmetry, which leads to a mathematical contradiction. There are only two Penrose tilings (of each type) with global pentagonal symmetry: for the P2 tiling by kites and darts, the center point is either a "sun" or "star" vertex. Inflation and deflation Many of the common features of Penrose tilings follow from a hierarchical pentagonal structure given by substitution rules: this is often referred to as inflation and deflation, or composition and decomposition, of tilings or (collections of) tiles. The substitution rules decompose each tile into smaller tiles of the same shape as those used in the tiling (and thus allow larger tiles to be "composed" from smaller ones). This shows that the Penrose tiling has a scaling self-similarity, and so can be thought of as a fractal, using the same process as the pentaflake. Penrose originally discovered the P1 tiling in this way, by decomposing a pentagon into six smaller pentagons (one half of a net of a dodecahedron) and five half-diamonds; he then observed that when he repeated this process the gaps between pentagons could all be filled by stars, diamonds, boats and other pentagons. By iterating this process indefinitely he obtained one of the two P1 tilings with pentagonal symmetry. Robinson triangle decompositions The substitution method for both P2 and P3 tilings can be described using Robinson triangles of different sizes. The Robinson triangles arising in P2 tilings (by bisecting kites and darts) are called A-tiles, while those arising in the P3 tilings (by bisecting rhombs) are called B-tiles. The smaller A-tile, denoted AS, is an obtuse Robinson triangle, while the larger A-tile, AL, is acute; in contrast, a smaller B-tile, denoted BS, is an acute Robinson triangle, while the larger B-tile, BL, is obtuse. 
Concretely, if AS has side lengths (1, 1, φ), then AL has side lengths (φ, φ, 1). B-tiles can be related to such A-tiles in two ways: If BS has the same size as AL, then BL is an enlarged version φAS of AS, with side lengths (φ, φ, φ² = 1 + φ) – this decomposes into an AL tile and an AS tile joined along a common side of length 1. If instead BL is identified with AS, then BS is a reduced version (1/φ)AL of AL, with side lengths (1/φ, 1/φ, 1) – joining a BS tile and a BL tile along a common side of length 1 then yields (a decomposition of) an AL tile. In these decompositions, there appears to be an ambiguity: Robinson triangles may be decomposed in two ways, which are mirror images of each other in the (isosceles) axis of symmetry of the triangle. In a Penrose tiling, this choice is fixed by the matching rules. Furthermore, the matching rules also determine how the smaller triangles in the tiling compose to give larger ones. It follows that the P2 and P3 tilings are mutually locally derivable: a tiling by one set of tiles can be used to generate a tiling by another. For example, a tiling by kites and darts may be subdivided into A-tiles, and these can be composed in a canonical way to form B-tiles and hence rhombs. The P2 and P3 tilings are also both mutually locally derivable with the P1 tiling (see figure 2 above). The decomposition of B-tiles into A-tiles may be written BS = AL, BL = AL + AS (assuming the larger size convention for the B-tiles), which can be summarized in a substitution matrix equation: \(\begin{pmatrix} B_L \\ B_S \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} A_L \\ A_S \end{pmatrix}\). Combining this with the decomposition of enlarged φA-tiles into B-tiles yields the substitution \(\begin{pmatrix} \varphi A_L \\ \varphi A_S \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} A_L \\ A_S \end{pmatrix}\), so that the enlarged tile φAL decomposes into two AL tiles and one AS tile. The matching rules force a particular substitution: the two AL tiles in a φAL tile must form a kite, and thus a kite decomposes into two kites and two half-darts, and a dart decomposes into a kite and two half-darts. Enlarged φB-tiles decompose into B-tiles in a similar way (via A-tiles). 
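A short sketch shows how iterating the substitution matrix [[2, 1], [1, 1]] produces Fibonacci-number tile counts whose ratio tends to the golden ratio. The choice of starting from a single kite is an assumption made for illustration:

```python
def deflate_counts(n, kites=1, darts=0):
    """Tile counts after n rounds of deflation, starting (by assumption)
    from a single kite. Each step applies the substitution matrix
    [[2, 1], [1, 1]]: a kite yields two kites and two half-darts
    (one dart), a dart yields one kite and two half-darts (one dart)."""
    for _ in range(n):
        kites, darts = 2 * kites + darts, kites + darts
    return kites, darts

def fib(n):
    """nth Fibonacci number (F_1 = F_2 = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

n = 10
kites, darts = deflate_counts(n)
# Matrix powers give Fibonacci entries: counts are F_(2n+1) and F_(2n).
assert (kites, darts) == (fib(2 * n + 1), fib(2 * n))
# kites / darts ~ 1.618..., approaching the golden ratio.
```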
Composition and decomposition can be iterated, so that, for example, \(\varphi^n A_L = F_{2n+1} A_L + F_{2n} A_S\). The number of kites and darts in the nth iteration of the construction is determined by the nth power of the substitution matrix: \(\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}^n = \begin{pmatrix} F_{2n+1} & F_{2n} \\ F_{2n} & F_{2n-1} \end{pmatrix}\), where Fn is the nth Fibonacci number. The ratio of numbers of kites to darts in any sufficiently large P2 Penrose tiling pattern therefore approximates the golden ratio φ. A similar result holds for the ratio of the number of thick rhombs to thin rhombs in the P3 Penrose tiling. Deflation for P2 and P3 tilings Starting with a collection of tiles from a given tiling (which might be a single tile, a tiling of the plane, or any other collection), deflation proceeds with a sequence of steps called generations. In one generation of deflation, each tile is replaced with two or more new tiles that are scaled-down versions of tiles used in the original tiling. The substitution rules guarantee that the new tiles will be arranged in accordance with the matching rules. Repeated generations of deflation produce a tiling of the original axiom shape with smaller and smaller tiles. This rule for dividing the tiles is a subdivision rule. The above table should be used with caution. The half kite and half dart deflation are useful only in the context of deflating a larger pattern as shown in the sun and star deflations. They give incorrect results if applied to single kites and darts. In addition, the simple subdivision rule generates holes near the edges of the tiling which are just visible in the top and bottom illustrations on the right. Additional forcing rules are useful. Consequences and applications Inflation and deflation yield a method for constructing kite and dart (P2) tilings, or rhombus (P3) tilings, known as up-down generation. The Penrose tilings, being non-periodic, have no translational symmetry – the pattern cannot be shifted to match itself over the entire plane. 
However, any bounded region, no matter how large, will be repeated an infinite number of times within the tiling. Therefore, no finite patch can uniquely determine a full Penrose tiling, nor even determine which position within the tiling is being shown. This shows in particular that the number of distinct Penrose tilings (of any type) is uncountably infinite. Up-down generation yields one method to parameterize the tilings, but other methods use Ammann bars, pentagrids, or cut and project schemes. Related tilings and topics Decagonal coverings and quasicrystals In 1996, German mathematician Petra Gummelt demonstrated that a covering (so called to distinguish it from a non-overlapping tiling) equivalent to the Penrose tiling can be constructed using a single decagonal tile if two kinds of overlapping regions are allowed. The decagonal tile is decorated with colored patches, and the covering rule allows only those overlaps compatible with the coloring. A suitable decomposition of the decagonal tile into kites and darts transforms such a covering into a Penrose (P2) tiling. Similarly, a P3 tiling can be obtained by inscribing a thick rhomb into each decagon; the remaining space is filled by thin rhombs. These coverings have been considered as a realistic model for the growth of quasicrystals: the overlapping decagons are 'quasi-unit cells' analogous to the unit cells from which crystals are constructed, and the matching rules maximize the density of certain atomic clusters. The aperiodic nature of the coverings can make theoretical studies of physical properties, such as electronic structure, difficult due to the absence of Bloch's theorem. However, spectra of quasicrystals can still be computed with error control. Related tilings The three variants of the Penrose tiling are mutually locally derivable. Selecting certain subsets of the vertices of a P1 tiling makes it possible to produce other non-periodic tilings. 
If the corners of one pentagon in P1 are labeled in succession by 1, 3, 5, 2, 4, an unambiguous tagging of all the pentagons is established, the order being either clockwise or counterclockwise. Points with the same label define a tiling by Robinson triangles, while points with the numbers 3 and 4 on them define the vertices of a Tie-and-Navette tiling. There are also other related inequivalent tilings, such as the hexagon-boat-star and Mikulla–Roth tilings. For instance, if the matching rules for the rhombus tiling are reduced to a specific restriction on the angles permitted at each vertex, a binary tiling is obtained. Its underlying symmetry is also fivefold, but it is not a quasicrystal. It can be obtained either by decorating the rhombs of the original tiling with smaller ones, or by applying substitution rules, but not by de Bruijn's cut-and-project method. Art and architecture The aesthetic value of tilings has long been appreciated, and remains a source of interest in them; hence the visual appearance (rather than the formal defining properties) of Penrose tilings has attracted attention. The similarity with certain decorative patterns used in North Africa and the Middle East has been noted; the physicists Peter J. Lu and Paul Steinhardt have presented evidence that a Penrose tiling underlies examples of medieval Islamic geometric patterns, such as the girih (strapwork) tilings at the Darb-e Imam shrine in Isfahan. Drop City artist Clark Richert used Penrose rhombs in artwork in 1970, derived by projecting the shadow of the rhombic triacontahedron onto a plane and observing the embedded "fat" rhombi and "skinny" rhombi which tile together to produce the non-periodic tessellation. Art historian Martin Kemp has observed that Albrecht Dürer sketched similar motifs of a rhombus tiling. In 1979, Miami University used a Penrose tiling executed in terrazzo to decorate the Bachelor Hall courtyard in its Department of Mathematics and Statistics. 
At the Indian Institute of Information Technology, Allahabad, academic buildings have been designed since the first phase of construction in 2001 on the basis of "Penrose Geometry", styled on tessellations developed by Roger Penrose. In many places in those buildings, the floor has geometric patterns composed of Penrose tiling. The floor of the atrium of the Bayliss Building at The University of Western Australia is tiled with Penrose tiles. The Andrew Wiles Building, the location of the Mathematics Department at the University of Oxford as of October 2013, includes a section of Penrose tiling as the paving of its entrance. The pedestrian part of the street Keskuskatu in central Helsinki is paved using a form of Penrose tiling. The work was finished in 2014. San Francisco's 2018 Salesforce Transit Center features perforations in its exterior's undulating white metal skin in the Penrose pattern. See also Girih tiles List of aperiodic sets of tiles Pinwheel tiling Pentagonal tiling Quaquaversal tiling Tübingen triangle Notes References Primary sources Secondary sources External links Discrete geometry Aperiodic tilings Mathematics and art Golden ratio
Penrose tiling
Physics,Mathematics
5,557
61,082,109
https://en.wikipedia.org/wiki/NGC%201345
NGC 1345 is a barred spiral galaxy in the constellation Eridanus. It was discovered by John Herschel on Dec 11, 1835. See also List of NGC objects (1001–2000) References External links Barred spiral galaxies Eridanus (constellation) 1345 012979
NGC 1345
Astronomy
60
21,139,310
https://en.wikipedia.org/wiki/Naegleria%20gruberi
Naegleria gruberi is a species of Naegleria. It is famous for its ability to change from an amoeba, which lacks a cytoplasmic microtubule cytoskeleton, to a flagellate, which has an elaborate microtubule cytoskeleton, including flagella. This "transformation" includes de novo synthesis of basal bodies (or centrioles). Background It was first characterized in 1899, and the genome sequence was published in 2010. Naegleria gruberi is a non-pathogenic biosafety level 1 organism, although it is related to the deadly Naegleria fowleri. Naegleria gruberi is a free-living organism that can be extracted from wet soil and freshwater. The strain NEG-M is the only Naegleria strain that has a fully sequenced genome. Naegleria belongs to the Jakobids, Euglenozoans, and Heteroloboseans (JEH) group. The Naegleria genome sequence has indicated that the amoeboflagellate contains actin and microtubule cytoskeletons, mitotic and meiotic machinery, and several transcription factors. Naegleria's mitochondrial genome encodes some components of a mitochondrial c and c1 maturation system. Naegleria's mitochondria resemble the evolutionary intermediate thought to have occurred within the ancestor of all eukaryotes, because of the presence of a mitochondrial Fe-hydrogenase alongside a complete aerobic respiration system. The Naegleria genome encodes the machinery to oxidize glucose, various amino acids and fatty acids through the Krebs cycle. The ancestor of existing eukaryotes has been thought to contain a fair number of introns. Nearly 36% of Naegleria genes are assumed to contain at least one intron and 17% contain multiple introns. The positions of the introns are conserved, indicating that they are ancient. Naegleria amoebae undergo a closed mitosis, in which the nuclear envelope does not break down but the cell still proceeds through the typical stages. The multitubulin hypothesis predicts that eukaryotes contain multiple tubulin genes with distinct properties. 
Naegleria uses different tubulins for mitosis and flagellar assembly. Observations suggest that Naegleria is primarily an asexual organism that reproduces by division of its amoebae to produce substantial clonal populations. However, analysis of the genome strain NEG-M revealed that it is a composite of two distinct haplotypes that arose from an interbreeding population. Therefore, Naegleria is likely to be able to undergo genetic exchange. The NEG-M strain is the heterozygous result of a past mating of two strains, and it appears genetically equipped to mate again. However, further studies still need to be performed. References External links Centers for Disease Control and Prevention (CDC) Naegleria Information Percolozoa Model organisms Discoba species
Naegleria gruberi
Biology
636
6,909,686
https://en.wikipedia.org/wiki/Donald%20Sadoway
Donald Robert Sadoway (born 7 March 1950) is professor emeritus of materials chemistry at the Massachusetts Institute of Technology. He is a noted expert on batteries and has done significant research on how to improve the performance and longevity of portable power sources. In parallel, he is an expert on the extraction of metals from their ores and the inventor of molten oxide electrolysis, which has the potential to produce crude steel without the use of a carbon reductant, thereby eliminating greenhouse gas emissions. Background Sadoway was born in Toronto, Ontario, Canada. He did both his undergraduate and graduate studies at the University of Toronto, receiving his PhD in 1977. There he focused his studies on chemical metallurgy. He also served on the National Executive of the Ukrainian Canadian Students' Union (SUSK) from 1972 to 1974. In 1977, he received a NATO postdoctoral fellowship from the National Research Council of Canada and came to MIT to conduct his postdoctoral research under Julian Szekely. Sadoway joined the MIT faculty in 1978. On 19 June 2013, Sadoway was awarded an honorary Doctorate of Engineering by the University of Toronto in recognition of his contributions to sustainable energy and sustainable metal production as well as to higher education both in curriculum and in teaching style. In 2014, Sadoway received an honorary doctorate from NTNU, the Norwegian University of Science and Technology. Research As a researcher, Sadoway has focused on environmentally sound ways to extract metals from their ores, as well as on producing more efficient batteries. His research has often been driven by the desire to reduce greenhouse gas emissions while improving quality and lowering costs. He is the co-inventor of a solid polymer electrolyte. This material, used in his "sLimcell", has the capability of allowing batteries to offer twice as much power per kilogram as is possible in current lithium-ion batteries. 
In August 2006, a team that he led demonstrated the feasibility of extracting iron from its ore through molten oxide electrolysis. When powered exclusively by renewable electricity, this technique has the potential to eliminate the carbon dioxide emissions that are generated through traditional methods. In 2009, Sadoway disclosed the liquid metal battery, comprising liquid layers of magnesium and antimony separated by a layer of molten salt, which could be used for stationary energy storage. Research on this concept was funded by ARPA-E and the French energy company Total. Experimental data showed a 69% DC-to-DC storage efficiency with good storage capacity and relatively low leakage current (self-discharge). In 2010, with funding from Bill Gates and Total, Sadoway and two others, David Bradwell and Luis Ortiz, co-founded a company called the Liquid Metal Battery Corporation (later, Ambri) in order to scale up and commercialize the technology. Teaching For 16 years Sadoway taught 3.091 Introduction to Solid State Chemistry at MIT, one of the largest classes at MIT. Sadoway's animated teaching style was popular with students, and freshman enrollment in the course steadily increased through 2010. In the fall of 2007, the number of students registering for 3.091 reached 570, over half the freshman class. The largest lecture hall available on campus seats 566 students. Sadoway much preferred teaching in one of the smaller lecture halls, seating only 450; as such, the institute had to take the unprecedented step of streaming digital video of the lecture into an overflow room to accommodate all the students interested in taking the course. In contrast, most classes at MIT are relatively small, with approximately 60% having fewer than 20 students. The popularity of this course has reached outside the MIT campus as a result of the MIT OpenCourseWare initiative. 
This is seen in a comment by Bill Gates, who told the Seattle Post-Intelligencer: "Everybody should watch chemistry lectures -- they're far better than you think. Don Sadoway, MIT -- best chemistry lessons anywhere. Unbelievable". Sadoway's lectures often included the history of science, especially with respect to the Nobel Prize. Sadoway gave out "library assignments" in which he asked students to research Nobel Prize–winning papers. He began his lectures by playing music that had some connection with the lecture's material. For example, for the lecture on hydrogen bonding he played Handel's Water Music; for one of the lectures on polymers he played Aretha Franklin's "Chain of Fools". He ended his lectures with five minutes on the topic of "chemistry and the world around us". Examples included automotive exhaust catalytic converters (technology), forensic examination of paintings (chemistry in the fine arts), the mistreatment of Rosalind Franklin in the quest to discover the structure of DNA (intellectual dishonesty), the metallurgical failure that sank the Titanic (greed and incompetence), and the clarification of champagne (viticulture). Media recognition On 29 February 2012, Sadoway gave a TED talk on his invention of the liquid metal battery for grid-scale storage. The talk is as much about the inventive process as it is about the technology. Sadoway was named one of Time magazine's 100 Most Influential People in the World in 2012 for accomplishments in energy storage as well as his approach to mentoring students (hire the novice instead of the expert). On 22 October 2012, Sadoway appeared as a guest on The Colbert Report to discuss his liquid metal battery technology and his view that electrochemistry is the key to world peace (batteries usher in the electric age, reducing dependence on petroleum, dropping its price, and thereby destabilizing dictatorships). Sadoway appeared in "MIT Gangnam Style". See also John F. 
Elliott – MIT has a chaired professorship named after Elliott. Since 1999, Sadoway has occupied that chair. References External links Donald Sadoway resume Introduction to Solid State Chemistry: Course description, from OCW.Mit.edu Don Sadoway Playlist Appearance on WMBR's Dinnertime Sampler (radio show), 2 October 2002. "Innovation in Energy Storage: What I Learned in 3.091 was All I Needed to Know", lecture by Donald R. Sadoway, 5 June 2010. "The missing link to renewable energy" (TED2012) 1950 births Living people American materials scientists MIT School of Engineering faculty Canadian emigrants to the United States Scientists from Toronto University of Toronto alumni Fellows of the Minerals, Metals & Materials Society Canadian materials scientists Solid state chemists
Donald Sadoway
Chemistry
1,329
22,136,672
https://en.wikipedia.org/wiki/George%20Oenslager
George Oenslager (September 25, 1873 – February 5, 1956) was a Goodrich chemist who discovered that a derivative of aniline accelerated the vulcanization of rubber with sulfur. He first introduced carbon black as a rubber reinforcing agent in 1912. Biography Oenslager attended Harrisburg & Phillips Exeter Academies, AB 1894, AM 1896. He first worked for the Warren Paper Co. in Maine from 1896 until 1905. He then worked for the Diamond & B.F. Goodrich Rubber Companies from 1905 until 1940. In 1912, Oenslager was working with David Spence at Diamond Rubber on additives to improve the vulcanization process. Building on Oenslager's aniline additives, Spence discovered that p-aminodimethylaniline was a far superior accelerator, vastly improving the tensile strength of the rubber. para-Aminodimethylaniline was adopted as the accelerator of choice by the Diamond Rubber Company in 1912. During World War I Oenslager inflated the first hydrogen balloon in the US. Oenslager received his Ph.D. from Harvard under Prof. Theodore William Richards. He was awarded the Perkin Medal in 1933 for his discovery of organic accelerators, specifically thiocarbanilide. This development was crucial to the commercialization of both natural and synthetic rubber. Oenslager was awarded the Charles Goodyear Medal in 1948. He was married to Ruth Alderfer Oenslager. References American chemists 1873 births 1956 deaths Polymer scientists and engineers Harvard University alumni
George Oenslager
Chemistry,Materials_science
317
34,884,919
https://en.wikipedia.org/wiki/Air-mixing%20plenum
An air-mixing plenum (or mixing box) is used in building services engineering and HVAC construction for mixing air from different ductwork systems. Usage Air streams are mixed to save energy and improve energy efficiency by partially recirculating conditioned air. The most common application for an air-mixing plenum is the mixing of return air (or extract air) with fresh air to provide a supply air mixture for onward distribution to the building or area which the ventilation system is serving. The air transferred from the return air stream to the supply air stream is termed recirculated air. All air not mixed is rejected to the atmosphere as exhaust air. Operation The mixing plenum normally combines two air streams and includes three sets of dampers: one for the fresh air, one for the exhaust air, and a mixing damper between the two air streams. The mix of fresh air and recirculated air can thus be adjusted to suit the needs of the building's occupants. Most systems use motorized dampers to control the air mixing, controlled by the building management system (BMS) or controls system. Typically, as the fresh air and exhaust air dampers are driven from 0% open to 100% open, the mixing damper is in turn driven from 100% open to 0% open, so as to always ensure a constant volume of supply and extract air. Energy efficiency Air supply to a building is generally performed by an air handling unit. The process may include filtering, heating, cooling, humidification, or dehumidification, all of which consume energy. Since building occupants demand less than 100% fresh air, only a fraction of that amount is admitted to the system, with an equal amount of treated air exhausted to the atmosphere; fresh air is mixed with conditioned air in a plenum. 
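The complementary damper behaviour described under Operation can be sketched as a simple control function. This is a minimal, hypothetical illustration (the function and field names are invented for the sketch); a real BMS sequence would add minimum fresh-air setpoints, CO2 limits, and free-cooling logic:

```python
def damper_positions(fresh_air_fraction):
    """Return damper positions (percent open) for a demanded fresh-air
    fraction between 0.0 and 1.0. The fresh and exhaust dampers track
    the demand; the mixing damper moves in opposition so the total
    supply and extract volumes stay constant."""
    if not 0.0 <= fresh_air_fraction <= 1.0:
        raise ValueError("fresh-air fraction must be between 0 and 1")
    fresh = exhaust = fresh_air_fraction * 100.0
    mixing = 100.0 - fresh
    return {"fresh": fresh, "exhaust": exhaust, "mixing": mixing}

# Full recirculation, e.g. for a morning pre-heat period:
print(damper_positions(0.0))  # {'fresh': 0.0, 'exhaust': 0.0, 'mixing': 100.0}
# Full fresh air, e.g. for free cooling in mid-season weather:
print(damper_positions(1.0))  # {'fresh': 100.0, 'exhaust': 100.0, 'mixing': 0.0}
```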
Enhanced controls systems may monitor the return air quality or carbon dioxide concentration in order to automatically modulate the air mix for optimum energy efficiency while maintaining desired fresh air requirements. Such systems work very well in buildings where the occupancy rate can vary greatly throughout the day, or seasonally. Additionally, when outside air conditions allow, typically in mid-season weather, ambient temperatures may be suitable for free cooling. In such conditions the mixing damper will be closed and the system will use full fresh air for optimum energy efficiency. Where fresh air is not required, such as during early morning pre-heat or pre-conditioning periods, the mixing damper can be automatically set to full recirculation, again for optimum energy efficiency. References Ventilation Heating, ventilation, and air conditioning Building biology Fluid dynamics
Air-mixing plenum
Chemistry,Engineering
544
5,698,359
https://en.wikipedia.org/wiki/Immunodermatology
Immunodermatology studies skin as an organ of immunity in health and disease. Several areas receive special attention, such as photo-immunology (effects of UV light on skin defense), inflammatory diseases such as hidradenitis suppurativa, allergic contact dermatitis and atopic eczema, presumably autoimmune skin diseases such as vitiligo and psoriasis, and finally the immunology of microbial skin diseases such as retrovirus infections and leprosy. New therapies in development for the immunomodulation of common immunological skin diseases include biologicals aimed at neutralizing TNF-alpha and chemokine receptor inhibitors. Testing sites Multiple universities currently conduct immunodermatology testing: University of Utah Health. University of North Carolina. See also Dermatology Immune response References Branches of immunology Dermatology
Immunodermatology
Biology
186
41,651,252
https://en.wikipedia.org/wiki/Automatic%20taxonomy%20construction
Automatic taxonomy construction (ATC) is the use of software programs to generate taxonomical classifications from a body of texts called a corpus. ATC is a branch of natural language processing, which in turn is a branch of artificial intelligence. A taxonomy (or taxonomical classification) is a scheme of classification, especially a hierarchical classification, in which things are organized into groups or types. Among other things, a taxonomy can be used to organize and index knowledge (stored as documents, articles, videos, etc.), such as in the form of a library classification system or a search engine taxonomy, so that users can more easily find the information they are searching for. Many taxonomies are hierarchies (and thus have an intrinsic tree structure), but not all are. Manually developing and maintaining a taxonomy is a labor-intensive task requiring significant time and resources, including familiarity with or expertise in the taxonomy's domain (scope, subject, or field), which drives up costs and limits the scope of such projects. Also, domain modelers have their own points of view, which inevitably, even if unintentionally, work their way into the taxonomy. ATC uses artificial intelligence techniques to generate a taxonomy for a domain quickly and automatically, avoiding these problems and removing these limitations. Approaches There are several approaches to ATC. One approach is to use rules to detect patterns in the corpus and use those patterns to infer relations such as hyponymy. Other approaches use machine learning techniques such as Bayesian inferencing and artificial neural networks. Keyword extraction One approach to building a taxonomy is to automatically gather the keywords from a domain using keyword extraction, then analyze the relationships between them (see Hyponymy, below), and then arrange them as a taxonomy based on those relationships. 
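The keyword-extraction step just described can be sketched naively by counting word frequencies. This is illustrative only: the stopword list and corpus are toy examples, and production systems use measures such as TF-IDF or graph-based ranking rather than raw counts:

```python
import re
from collections import Counter

# Toy stopword list for the sketch; real systems use much larger lists.
STOPWORDS = {"the", "a", "an", "of", "and", "is", "are", "such", "as", "to", "in"}

def extract_keywords(corpus, top_n=3):
    """Naive keyword extraction: rank words by frequency, ignoring stopwords."""
    words = re.findall(r"[a-z]+", corpus.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

corpus = "Dogs are mammals. Cats are mammals. Mammals are animals."
print(extract_keywords(corpus))  # ['mammals', 'dogs', 'cats']
```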
Hyponymy and "is-a" relations In ATC programs, one of the most important tasks is the discovery of hypernym and hyponym relations among words. One way to do that from a body of text is to search for certain phrases like "is a" and "such as". In linguistics, is-a relations are called hyponymy. Words that describe categories are called hypernyms and words that are examples of categories are hyponyms. For example, dog is a hypernym and Fido is one of its hyponyms. A word can be both a hyponym and a hypernym. So, dog is a hyponym of mammal and also a hypernym of Fido. Taxonomies are often represented as is-a hierarchies where each level is more specific than (in mathematical language "a subset of") the level above it. For example, a basic biology taxonomy would have concepts such as mammal, which is a subset of animal, and dogs and cats, which are subsets of mammal. This kind of taxonomy is called an is-a model because the specific objects are considered instances of a concept. For example, Fido is-a instance of the concept dog and Fluffy is-a cat. Applications ATC can be used to build taxonomies for search engines, to improve search results. ATC systems are a key component of ontology learning (also known as automatic ontology construction), and have been used to automatically generate large ontologies for domains such as insurance and finance. They have also been used to enhance existing large networks such as Wordnet to make them more complete and consistent. 
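The phrase-based search for is-a relations described above can be sketched with a regular expression over "such as" constructions. This is a simplified form of such lexical patterns (real systems use many more patterns plus linguistic parsing, and handle multi-word terms):

```python
import re

def find_hyponym_pairs(text):
    """Extract (hypernym, hyponym) pairs from 'X such as Y' constructions."""
    pattern = re.compile(r"(\w+)\s+such as\s+(\w+)")
    return [(m.group(1), m.group(2)) for m in pattern.finditer(text)]

text = "He studies mammals such as dogs and reptiles such as snakes."
print(find_hyponym_pairs(text))  # [('mammals', 'dogs'), ('reptiles', 'snakes')]
```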
ATC software Other names Other names for automatic taxonomy construction include: Automated outline building Automated outline construction Automated outline creation Automated outline extraction Automated outline generation Automated outline induction Automated outline learning Automated outlining Automated taxonomy building Automated taxonomy construction Automated taxonomy creation Automated taxonomy extraction Automated taxonomy generation Automated taxonomy induction Automated taxonomy learning Automatic outline building Automatic outline construction Automatic outline creation Automatic outline extraction Automatic outline generation Automatic outline induction Automatic outline learning Automatic taxonomy building Automatic taxonomy creation Automatic taxonomy extraction Automatic taxonomy generation Automatic taxonomy induction Automatic taxonomy learning Outline automation Outline building Outline construction Outline creation Outline extraction Outline generation Outline induction Outline learning Semantic taxonomy building Semantic taxonomy construction Semantic taxonomy creation Semantic taxonomy extraction Semantic taxonomy generation Semantic taxonomy induction Semantic taxonomy learning Taxonomy automation Taxonomy building Taxonomy construction Taxonomy creation Taxonomy extraction Taxonomy generation Taxonomy induction Taxonomy learning See also Document classification Information extraction References Further reading Automatic Taxonomy Construction from Keywords (2012) Domain taxonomy learning from text: The subsumption method versus hierarchical clustering from Data & Knowledge Engineering, Volume 83, January 2013, Pages 54–69 Learning taxonomic relations from a set of text documents Learning Taxonomic Relations from Heterogeneous Sources of Evidence A Metric-based Framework for Automatic Taxonomy Induction A New Method for Evaluating Automatically Learned Terminological Taxonomies Problematizing and Addressing the Article-as-Concept Assumption 
in Wikipedia Structured Learning for Taxonomy Induction with Belief Propagation Taxonomy Learning Using Word Sense Induction External links Taxonomy 101: The Basics and Getting Started with Taxonomies – shows where ATC fits in to the general activity of managing taxonomies for a business enterprise in need of knowledge management. Natural language processing Taxonomy
Automatic taxonomy construction
Technology
1,037
32,376,085
https://en.wikipedia.org/wiki/Consumer%20math
Consumer math comprises practical mathematical techniques used in commerce and everyday life. In the United States, consumer math is typically offered in high schools, some elementary schools, or in some colleges which grant associate's degrees. A U.S. consumer math course might include a review of elementary arithmetic, including fractions, decimals, and percentages. Elementary algebra is often included as well, in the context of solving practical business problems. The practical applications typically include: changing money, checking accounts, budgeting, price discounts, markups and markdowns, payroll calculations, investing (simple and compound interest), taxes, consumer and business credit, and mortgages. The emphasis in these courses is on computational skills and their practical application, with practical application being predominant. For instance, while computational formulas are covered in the material on interest and mortgages, the use of prepared tables based on those formulas is also presented and emphasized. See also Business mathematics Financial literacy References Bibliography Brechner, Robert. (2006). Contemporary Mathematics for Business and Consumers, Thomson South-Western. T. R. Ittelson, (2009), "Financial Statements", Career Press, 2009. Mathematics education Mathematical finance
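For instance, the interest calculations covered in such a course follow the standard formulas A = P(1 + rt) for simple interest and A = P(1 + r/n)^(nt) for compound interest; a short illustration with hypothetical figures:

```python
def simple_interest(principal, rate, years):
    """Future value under simple interest: A = P(1 + r*t)."""
    return principal * (1 + rate * years)

def compound_interest(principal, rate, years, periods_per_year=1):
    """Future value under compound interest: A = P(1 + r/n)**(n*t)."""
    return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

# $1,000 invested at 5% per year for 10 years:
print(simple_interest(1000, 0.05, 10))              # 1500.0
print(round(compound_interest(1000, 0.05, 10), 2))  # 1628.89
```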
Consumer math
Mathematics
244
31,466,181
https://en.wikipedia.org/wiki/Puccinia%20sessilis
Puccinia sessilis is a fungal species and plant pathogen, which is also known as arum rust or ramsons rust. It commonly infects Arum maculatum and Allium ursinum causing yellow to orange circular patches on leaves. On the underside of the leaves, it produces raised orange aecia commonly covered in spores. It is common in Eurasia in the spring. It was originally found on the leaves of Iris versicolor in New York, USA. Other plant species affected by this rust include Convallaria majalis, Dactylorhiza fuchsii, Dactylorhiza incarnata, Dactylorhiza majalis, Gymnadenia conopsea, Neottia ovata, Paris quadrifolia and Phalaris arundinacea. A specialised form, Puccinia sessilis f.sp. narcissi-orchidacearum (now called Aecidium narcissi) is a cause of rust in daffodils (Narcissus) and also on various wild Orchidaceae species. See also List of Puccinia species References Fungal plant pathogens and diseases Orchid diseases sessilis Fungi described in 1870 Fungus species
Puccinia sessilis
Biology
256