| id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
18,287,344 | https://en.wikipedia.org/wiki/Degrowth | Degrowth is an academic and social movement critical of the concept of growth in gross domestic product as a measure of human and economic development. The idea of degrowth is based on ideas and research from economic anthropology, ecological economics, environmental sciences, and development studies. It argues that modern capitalism's unitary focus on growth causes widespread ecological damage and is unnecessary for the further increase of human living standards. Degrowth theory has been met with both academic acclaim and considerable criticism.
Degrowth's main argument is that an infinite expansion of the economy is fundamentally contradictory to the finiteness of material resources on Earth. It argues that economic growth measured by GDP should be abandoned as a policy objective. Policy should instead focus on economic and social metrics such as life expectancy, health, education, housing, and ecologically sustainable work as indicators of both ecosystems and human well-being. Degrowth theorists posit that this would increase human living standards and ecological preservation even as GDP growth slows.
Degrowth theory is highly critical of free market capitalism, and it highlights the importance of extensive public services, care work, self-organization, commons, relational goods, community, and work sharing.
Degrowth theory partly orients itself as a critique of green capitalism or as a radical alternative to the market-based, sustainable development goal (SDG) model of addressing ecological overshoot and environmental collapse.
A 2024 review of degrowth studies covering the previous ten years found that most were of poor quality: almost 90% were opinion pieces rather than analyses, few used quantitative or qualitative data, and fewer still used formal modelling; those that did relied on small samples or focused on non-representative cases. Most studies also offered subjective policy advice but lacked policy evaluation and integration with insights from the literature on environmental and climate policy.
Background
The "degrowth" movement arose from concerns over the consequences of the productivism and consumerism associated with industrial societies (whether capitalist or socialist) including:
The reduced availability of energy sources (see peak oil);
The destabilization of Earth's ecosystems upon which all life on Earth depends (see Holocene Extinction, Anthropocene, global warming, pollution, current biodiversity loss);
The rise of negative societal side-effects (unsustainable development, poorer health, poverty); and
The ever-expanding use of resources by Global North countries to satisfy lifestyles that consume more food and energy, and produce greater waste, at the expense of the Global South (see neocolonialism).
A 2017 review of the research literature on degrowth found that it focused on three main goals: (1) reduction of environmental degradation; (2) redistribution of income and wealth locally and globally; (3) promotion of a social transition from economic materialism to participatory culture.
Decoupling
The concept of decoupling refers to separating economic growth, usually measured as growth in GDP, GDP per capita or GNI per capita, from the use of natural resources and greenhouse gas (GHG) emissions. Absolute decoupling refers to GDP growth coinciding with a reduction in natural resource use and GHG emissions, while relative decoupling describes resource use and GHG emissions growing more slowly than GDP. The degrowth movement heavily critiques this idea and argues that absolute decoupling is only possible for short periods, in specific locations, or with small mitigation rates. In 2021, the NGO European Environmental Bureau stated that "not only is there no empirical evidence supporting the existence of a decoupling of economic growth from environmental pressures on anywhere near the scale needed to deal with environmental breakdown", and that reported cases of eco-economic decoupling either depict relative decoupling and/or are observed only temporarily and/or only on a local scale, arguing that alternatives to eco-economic decoupling are needed. This is supported by several other studies which state that absolute decoupling is highly unlikely to be achieved fast enough to prevent global warming of over 1.5 °C or 2 °C, even under optimistic policy conditions.
A major criticism of this view is that degrowth is politically unpalatable, and that the more market-oriented green growth orthodoxy offers a set of solutions that is more politically tenable. Summarizing these criticisms, Ezra Klein of The New York Times argues that the problems with the SDG process are political rather than technical, and that degrowth has less plausibility than green growth as a democratic political platform. However, a 2023 review of progress toward the Sustainable Development Goals by the Council on Foreign Relations found that progress on roughly half of the SDG targets had stalled and that about 30% had reversed (that is, were getting worse rather than better). Thus, while degrowth may indeed be "a difficult sell" (per Klein) to introduce via democratic voluntarism, the critique of the SDGs and of decoupling under green capitalism levelled by degrowth theorists appears to have predictive power.
Resource depletion
Degrowth proponents argue that economic expansion is necessarily accompanied by a corresponding increase in resource consumption. Non-renewable resources, like petroleum, have a limited supply and can eventually be exhausted. Similarly, renewable resources can also be depleted if they are harvested at unsustainable rates for prolonged periods. An example of this depletion is evident in the case of caviar production in the Caspian Sea.
Supporters of degrowth contend that reducing demand is the only lasting way to close the gap between resource demand and supply. To sustain renewable resources, both demand and production must be regulated to levels that avert depletion and ensure environmental sustainability. Transitioning to a society less reliant on oil is crucial for averting societal collapse as non-renewable resources dwindle. Degrowth can also be interpreted as a plea for resource reallocation, aiming to halt unsustainable practices of transforming certain entities into resources, such as non-renewable natural resources. Instead, the focus shifts towards identifying and utilizing alternative resources, such as renewable human capabilities.
Ecological footprint
The ecological footprint measures human demand on the Earth's ecosystems by comparing human demand with the Earth's ecological capacity to regenerate. It represents the amount of biologically productive land and sea area required to regenerate the resources a human population consumes and to absorb and render harmless the corresponding waste.
According to a 2005 Global Footprint Network report, inhabitants of high-income countries live off of 6.4 global hectares (gHa), while those from low-income countries live off of a single gHa. For example, while each inhabitant of Bangladesh lives off of what they produce from 0.56 gHa, a North American requires 12.5 gHa. Each inhabitant of North America uses 22.3 times as much land as a Bangladeshi. According to the same report, the average number of global hectares per person was 2.1, while current consumption levels have reached 2.7 hectares per person. For the world's population to attain the living standards typical of European countries, the resources of between three and eight planet Earths would be required with current levels of efficiency and means of production. For world economic equality to be achieved with the currently available resources, proponents say rich countries would have to reduce their standard of living through degrowth. The constraints on resources would eventually lead to a forced reduction in consumption. A controlled reduction of consumption would reduce the trauma of this change, assuming no technological changes increase the planet's carrying capacity. Multiple studies now demonstrate that in many affluent countries per-capita energy consumption could be decreased substantially and quality living standards still be maintained.
Sustainable development
Degrowth ideology opposes all manifestations of productivism, which advocates that economic productivity and growth should be the primary objectives of human organization. Consequently, it stands in opposition to the prevailing model of sustainable development. While the concept of sustainability aligns with some aspects of degrowth philosophy, sustainable development, as conventionally understood, is based on mainstream development principles focused on augmenting economic growth and consumption. Degrowth therefore views sustainable development as contradictory: any development reliant on growth in a finite, ecologically strained world is deemed intrinsically unsustainable.
Critics of degrowth argue that a slowing of economic growth would result in increased unemployment, increased poverty, and decreased income per capita. Many who believe in negative environmental consequences of growth still advocate for economic growth in the South, even if not in the North. Slowing economic growth would fail to deliver the benefits of degrowth — self-sufficiency and material responsibility — and would indeed lead to decreased employment. Rather, degrowth proponents advocate the complete abandonment of the current (growth) economic model, suggesting that relocalizing and abandoning the global economy in the Global South would allow people of the South to become more self-sufficient and would end the overconsumption and exploitation of Southern resources by the North. Supporters of degrowth view it as a potential method to shield ecosystems from human exploitation. Within this concept, there is an emphasis on communal stewardship of the environment, fostering a symbiotic relationship between humans and nature. Degrowth recognizes ecosystems as valuable entities beyond their utility as mere sources of resources. During the Second International Conference on degrowth, discussions encompassed concepts like implementing a maximum wage and promoting open borders. Degrowth advocates an ethical shift that challenges the notion that high-resource consumption lifestyles are desirable. Additionally, alternative perspectives on degrowth include addressing perceived historical injustices perpetrated by the global North through centuries of colonization and exploitation, advocating for wealth redistribution. Determining the appropriate scale of action remains a focal point of debate within degrowth movements.
Some researchers believe that the world is poised to experience a Great Transformation, either through disastrous events or by intentional design. They maintain that ecological economics must incorporate Postdevelopment theories, Buen vivir, and degrowth to effect the change necessary to avoid these potentially catastrophic events.
A 2022 paper by Mark Diesendorf found that limiting global warming to 1.5 °C with no overshoot would require a reduction of energy consumption. It describes (chapters 4–5) degrowth toward a steady-state economy as possible and probably positive. The study ends with the words: "The case for a transition to a steady-state economy with low throughput and low emissions, initially in the high-income economies and then in rapidly growing economies, needs more serious attention and international cooperation."
"Rebound effect"
Technologies designed to reduce resource use and improve efficiency are often touted as sustainable or green solutions. Degrowth literature, however, warns about these technological advances due to the "rebound effect", also known as Jevons paradox. This concept is based on observations that when a less resource-exhaustive technology is introduced, behavior surrounding the use of that technology may change, and consumption of that technology could increase or even offset any potential resource savings. In light of the rebound effect, proponents of degrowth hold that the only effective "sustainable" solutions must involve a complete rejection of the growth paradigm and a move to a degrowth paradigm. There are also fundamental limits to technological solutions in the pursuit of degrowth, as all engagements with technology increase the cumulative matter-energy throughput. However, the convergence of digital commons of knowledge and design with distributed manufacturing technologies may arguably hold potential for building degrowth future scenarios.
Mitigation of climate change and determinants of 'growth'
Scientists report that degrowth scenarios, in which economic output "declines" either absolutely or in terms of contemporary economic metrics such as current GDP, have been neglected in considerations of 1.5 °C scenarios reported by the Intergovernmental Panel on Climate Change (IPCC), finding that the degrowth scenarios investigated "minimize many key risks for feasibility and sustainability compared to technology-driven pathways", with their core problem being feasibility in the context of contemporary political decision-making and of globalized rebound and relocation effects. However, a structural realignment of 'economic growth' and of the structures that determine socioeconomic activity may not be widely debated either in the degrowth community or in degrowth research, which largely focuses on reducing economic growth in general or on doing so without a structural alternative, relying instead on nonsystemic political interventions. Similarly, many green growth advocates suggest that contemporary socioeconomic mechanisms and metrics, including those for economic growth, can be continued with forms of nonstructural "energy-GDP decoupling". A study concluded that public services are associated with higher human need satisfaction and lower energy requirements, while contemporary forms of economic growth are linked with the opposite; it found the contemporary economic system to be fundamentally misaligned with the twin goals of meeting human needs and ensuring ecological sustainability, and suggested that prioritizing human well-being and ecological sustainability would be preferable to growth as measured by current economic metrics. The word 'degrowth' was mentioned 28 times in the United Nations IPCC Sixth Assessment Report by Working Group III, published in April 2022.
Open Localism
Open localism is a concept that has been promoted by the degrowth community when envisioning an alternative set of social relations and economic organization. It builds upon the political philosophies of localism and is based on values such as diversity, ecologies of knowledge, and openness. Open localism does not look to create an enclosed community but rather to circulate production locally in an open and integrative manner.
Open localism is a direct challenge to the acts of closure regarding identitarian politics. By producing and consuming as much as possible locally, community members enhance their relationships with one another and the surrounding environment.
Degrowth's ideas around open localism share similarities with ideas around the commons while also having clear differences. On the one hand, open localism promotes localized, common production in cooperative-like styles similar to some versions of how commons are organized. On the other hand, open localism does not impose any set of rules or regulations creating a defined boundary; rather, it favours a cosmopolitan approach.
Feminism
The degrowth movement builds on feminist economics that has criticized measures of economic growth like the GDP as it excludes work mainly done by women such as unpaid care work (the work performed to fulfill people's needs) and reproductive work (the work sustaining life), first argued by Marilyn Waring. Further, degrowth draws on the critique of socialist feminists like Silvia Federici and Nancy Fraser claiming that capitalist growth builds on the exploitation of women's work. Instead of devaluing it, degrowth centers the economy around care, proposing that care work should be organized as a commons.
Centering care goes hand in hand with changing society's time regimes. Degrowth scholars propose a working time reduction. As this does not necessarily lead to gender justice, the redistribution of care work has to be equally pushed. A concrete proposal by Frigga Haug is the 4-in-1 perspective that proposes 4 hours of wage work per day, freeing time for 4 hours of care work, 4 hours of political activities in a direct democracy, and 4 hours of personal development through learning.
Furthermore, degrowth draws on materialist ecofeminisms, which point to the parallel between the exploitation of women and the exploitation of nature in growth-based societies and propose a subsistence perspective, conceptualized by Maria Mies and Ariel Salleh. Synergies and opportunities for cross-fertilization between degrowth and feminism were proposed in 2022, through networks including the Feminisms and Degrowth Alliance (FaDA). FaDA argued that the 2023 launch of Degrowth Journal created "a convivial space for generating and exploring knowledge and practice from diverse perspectives".
Decolonialism
A relevant concept within the theory of degrowth is decolonialism, which refers to putting an end to the perpetuation of political, social, economic, religious, racial, gender, and epistemological relations of power, domination, and hierarchy of the global north over the global south.
The foundation of this relationship lies in the claim that the imminent socio-ecological collapse is caused by capitalism, which is sustained by economic growth. This economic growth, in turn, can only be maintained through colonialism and extractivism, perpetuating asymmetric power relationships between territories. Colonialism is understood as the appropriation of common goods, resources, and labor, which is antagonistic to degrowth principles.
Through colonial domination, capital depresses the prices of inputs, and this colonial cheapening occurs to the detriment of the oppressed countries. Degrowth criticizes these mechanisms of appropriation and the enclosure of one territory by another, and proposes that human needs be provided for through disaccumulation, de-enclosure, and decommodification. It also aligns itself with social movements and seeks recognition of the ecological debt owed to the Global South, whose catching up is postulated as impossible without decolonization.
In practice, decolonial practices close to degrowth are observed, such as the movement of Buen vivir or sumak kawsay by various indigenous peoples.
Policies
There is a wide range of policy proposals associated with degrowth. In 2022, Nick Fitzpatrick, Timothée Parrique and Inês Cosme conducted a comprehensive survey of degrowth literature from 2005 to 2020 and found 530 specific policy proposals with "50 goals, 100 objectives, 380 instruments". The survey found that the ten most frequently cited proposals were: universal basic incomes, work-time reductions, job guarantees with a living wage, maximum income caps, declining caps on resource use and emissions, not-for-profit cooperatives, holding deliberative forums, reclaiming the commons, establishing ecovillages, and housing cooperatives.
To address the common criticism that such policies are not realistically financeable, economic anthropologist Jason Hickel sees an opportunity to learn from modern monetary theory, which argues that monetary sovereign states can issue the money needed to pay for anything available in the national economy without the need to first tax their citizens for the requisite funds. Taxation, credit regulations and price controls could be used to mitigate the inflation this may generate, while also reducing consumption.
Origins of the movement
The contemporary degrowth movement can trace its roots back to the anti-industrialist trends of the 19th century, developed in Great Britain by John Ruskin, William Morris and the Arts and Crafts movement (1819–1900), in the United States by Henry David Thoreau (1817–1862), and in Russia by Leo Tolstoy (1828–1910).
Degrowth movements draw on the values of humanism, enlightenment, anthropology and human rights.
Club of Rome reports
In 1968, the Club of Rome, a think tank headquartered in Winterthur, Switzerland, asked researchers at the Massachusetts Institute of Technology for a report on the limits of our world system and the constraints it puts on human numbers and activity. The report, called The Limits to Growth, published in 1972, became the first significant study to model the consequences of economic growth.
The reports (also known as the Meadows Reports) are not strictly the founding texts of the degrowth movement, as these reports only advise zero growth, and have also been used to support the sustainable development movement. Still, they are considered the first studies explicitly presenting economic growth as a key reason for the increase in global environmental problems such as pollution, shortage of raw materials, and the destruction of ecosystems. The Limits to Growth: The 30-Year Update was published in 2004, and in 2012, a 40-year forecast from Jørgen Randers, one of the book's original authors, was published as 2052: A Global Forecast for the Next Forty Years. In 2021, Club of Rome committee member Gaya Herrington published an article comparing the proposed models' predictions against empirical data trends. The BAU2 ("Business as Usual 2") scenario, predicting "collapse through pollution", as well as the CT ("Comprehensive Technology") scenario, predicting exceptional technological development and gradual decline, were found to align most closely with data observed as of 2019. In September 2022, the Club of Rome released updated predictive models and policy recommendations in a general-audiences book titled Earth for All: A Survival Guide for Humanity.
Lasting influence of Georgescu-Roegen
The degrowth movement recognises Romanian American mathematician, statistician and economist Nicholas Georgescu-Roegen as the main intellectual figure inspiring the movement. In his 1971 work, The Entropy Law and the Economic Process, Georgescu-Roegen argues that economic scarcity is rooted in physical reality; that all natural resources are irreversibly degraded when put to use in economic activity; that the carrying capacity of Earth—that is, Earth's capacity to sustain human populations and consumption levels—is bound to decrease sometime in the future as Earth's finite stock of mineral resources is presently being extracted and put to use; and consequently, that the world economy as a whole is heading towards an inevitable future collapse.
Georgescu-Roegen's intellectual inspiration to degrowth dates back to the 1970s. When Georgescu-Roegen delivered a lecture at the University of Geneva in 1974, he made a lasting impression on the young, newly graduated French historian and philosopher, Jacques Grinevald, who had earlier been introduced to Georgescu-Roegen's works by an academic advisor. Georgescu-Roegen and Grinevald became friends, and Grinevald devoted his research to a closer study of Georgescu-Roegen's work. As a result, in 1979, Grinevald published a French translation of a selection of Georgescu-Roegen's articles entitled Demain la décroissance: Entropie – Écologie – Économie ('Tomorrow, the Decline: Entropy – Ecology – Economy'). Georgescu-Roegen, who spoke French fluently, approved the use of the term décroissance in the title of the French translation. The book gained influence in French intellectual and academic circles from the outset. Later, the book was expanded and republished in 1995 and once again in 2006; however, the word Demain ('tomorrow') was removed from the book's title in the second and third editions.
By the time Grinevald suggested the term décroissance to form part of the title of the French translation of Georgescu-Roegen's work, the term had already permeated French intellectual circles since the early 1970s to signify a deliberate political action to downscale the economy on a permanent and voluntary basis. Simultaneously, but independently, Georgescu-Roegen criticised the ideas of The Limits to Growth and Herman Daly's steady-state economy in his article, "Energy and Economic Myths", delivered as a series of lectures from 1972, but not published before 1975. In the article, Georgescu-Roegen stated the following:
When reading this particular passage of the text, Grinevald realised that no professional economist of any orientation had ever reasoned like this before. Grinevald also realised the congruence of Georgescu-Roegen's viewpoint and the French debates occurring at the time; this resemblance was captured in the title of the French edition. The translation of Georgescu-Roegen's work into French both fed on and gave further impetus to the concept of décroissance in France—and everywhere else in the francophone world—thereby creating something of an intellectual feedback loop.
By the 2000s, when décroissance was to be translated from French back into English as the catchy banner for the new social movement, the original term "decline" was deemed inappropriate and misdirected for the purpose: "Decline" usually refers to an unexpected, unwelcome, and temporary economic recession, something to be avoided or quickly overcome. Instead, the neologism "degrowth" was coined to signify a deliberate political action to downscale the economy on a permanent, conscious basis—as in the prevailing French usage of the term—something good to be welcomed and maintained, or so followers believe.
When the first international degrowth conference was held in Paris in 2008, the participants honoured Georgescu-Roegen and his work. In his manifesto Petit traité de la décroissance sereine (published in English as Farewell to Growth), the leading French champion of the degrowth movement, Serge Latouche, credited Georgescu-Roegen as the "main theoretical source of degrowth". Likewise, Italian degrowth theorist Mauro Bonaiuti considered Georgescu-Roegen's work to be "one of the analytical cornerstones of the degrowth perspective".
Schumacher and Buddhist economics
E. F. Schumacher's 1973 book Small Is Beautiful predates a unified degrowth movement but nonetheless serves as an important basis for degrowth ideas. In this book he critiques the neo-liberal model of economic development, arguing that an ever-increasing "standard of living" based on consumption is absurd as a goal of economic activity and development. Instead, under what he refers to as Buddhist economics, we should aim to maximize well-being while minimizing consumption.
Ecological and social issues
In January 1972, Edward Goldsmith and Robert Prescott-Allen—editors of The Ecologist—published A Blueprint for Survival, which called for a radical programme of decentralisation and deindustrialization to prevent what the authors referred to as "the breakdown of society and the irreversible disruption of the life-support systems on this planet".
In 2019, a summary for policymakers of the largest, most comprehensive study to date of biodiversity and ecosystem services was published by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. The report was finalised in Paris. The main conclusions:
Over the last 50 years, the state of nature has deteriorated at an unprecedented and accelerating rate.
The main drivers of this deterioration have been changes in land and sea use, exploitation of living beings, climate change, pollution and invasive species. These five drivers, in turn, are caused by societal behaviors, from consumption to governance.
Damage to ecosystems undermines 35 of 44 selected UN targets, including the UN General Assembly's Sustainable Development Goals for poverty, hunger, health, water, cities, climate, oceans and land. It can cause problems with food, water and humanity's air supply.
To fix the problem, humanity needs transformative change, including sustainable agriculture, reductions in consumption and waste, fishing quotas and collaborative water management. Page 8 of the report proposes "enabling visions of a good quality of life that do not entail ever-increasing material consumption" as one of the main measures. The report states that "Some pathways chosen to achieve the goals related to energy, economic growth, industry and infrastructure and sustainable consumption and production (Sustainable Development Goals 7, 8, 9 and 12), as well as targets related to poverty, food security and cities (Sustainable Development Goals 1, 2 and 11), could have substantial positive or negative impacts on nature and therefore on the achievement of other Sustainable Development Goals".
In a June 2020 paper published in Nature Communications, a group of scientists argue that "green growth" or "sustainable growth" is a myth: "we have to get away from our obsession with economic growth—we really need to start managing our economies in a way that protects our climate and natural resources, even if this means less, no or even negative growth." They conclude that a change in economic paradigms is imperative to prevent environmental destruction, and suggest a range of ideas from the reformist to the radical, with the latter consisting of degrowth, eco-socialism and eco-anarchism.
In June 2020, the official site of one of the organizations promoting degrowth published an article by Vijay Kolinjivadi, an expert in political ecology, arguing that the emergence of COVID-19 is linked to the ecological crisis.
The 2019 World Scientists' Warning of a Climate Emergency and its 2021 update have asserted that economic growth is a primary driver of the overexploitation of ecosystems, and to preserve the biosphere and mitigate climate change civilization must, in addition to other fundamental changes including stabilizing population growth and adopting largely plant-based diets, "shift from GDP growth and the pursuit of affluence toward sustaining ecosystems and improving human well-being by prioritizing basic needs and reducing inequality." In an opinion piece published in Al Jazeera, Jason Hickel states that this paper, which has more than 11,000 scientist cosigners, demonstrates that there is a "strong scientific consensus" towards abandoning "GDP as a measure of progress."
In a 2022 comment published in Nature, Hickel, Giorgos Kallis, Juliet Schor, Julia Steinberger and others say that both the IPCC and the IPBES "suggest that degrowth policies should be considered in the fight against climate breakdown and biodiversity loss, respectively".
Movement
Conferences
The movement has included international conferences promoted by the network Research & Degrowth (R&D). The First International Conference on Economic Degrowth for Ecological Sustainability and Social Equity in Paris (2008) was a discussion about the financial, social, cultural, demographic, and environmental crisis caused by the deficiencies of capitalism and an explanation of the main principles of degrowth. Further conferences were in Barcelona (2010), Montreal (2012), Venice (2012), Leipzig (2014), Budapest (2016), Malmö (2018), and Zagreb (2023). The 10th International Degrowth Conference will be held in Pontevedra in June 2024. Separately, two conferences have been organised as cross-party initiatives of Members of the European Parliament: the Post-Growth 2018 Conference and the Beyond Growth 2023 Conference, both held in the European Parliament in Brussels.
International Degrowth Network
The conferences have also been accompanied by informal degrowth assemblies since 2018, to build community between degrowth groups across countries. The 4th Assembly in Zagreb in 2023 discussed a proposal to create a more intentional organisational structure and led to the creation of the International Degrowth Network, which organised the 5th assembly in June 2024.
Relation to other social movements
The degrowth movement has a variety of relations to other social movements and alternative economic visions, which range from collaboration to partial overlap. The Konzeptwerk Neue Ökonomie (Laboratory for New Economic Ideas), which hosted the 2014 international Degrowth conference in Leipzig, has published a project entitled "Degrowth in movement(s)" in 2017, which maps relationships with 32 other social movements and initiatives. The relation to the environmental justice movement is especially visible.
Although not explicitly called degrowth, movements inspired by similar concepts and terminologies can be found around the world, including Buen Vivir in Latin America, the Zapatistas in Mexico, the Kurdish Rojava or Eco-Swaraj in India, and the sufficiency economy in Thailand. The Cuban economic situation has also been of interest to degrowth advocates because its limits on growth were socially imposed (although as a result of geopolitics), and has resulted in positive health changes.
Another set of movements the degrowth movement finds synergy with is the wave of initiatives and networks inspired by the commons, where resources are sustainably shared in a decentralised and self-managed manner, instead of through capitalist organization. For example, initiatives inspired by commons could be food cooperatives, open-source platforms, and group management of resources such as energy or water. Commons-based peer production also guides the role of technology in degrowth, where conviviality and socially useful production are prioritised over capital gain. This could happen in the form of cosmolocalism, which offers a framework for localising collaborative forms of production while sharing resources globally as digital commons, to reduce dependence on global value chains.
Criticisms, challenges and dilemmas
Critiques of degrowth concern the poor study quality of degrowth studies, negative connotation that the term "degrowth" imparts, the misapprehension that growth is seen as unambiguously bad, the challenges and feasibility of a degrowth transition, as well as the entanglement of desirable aspects of modernity with the growth paradigm.
Criticisms
According to a highly cited scientific paper of environmental economist Jeroen C. J. M. van den Bergh, degrowth is often seen as an ambiguous concept due to its various interpretations, which can lead to confusion rather than a clear and constructive debate on environmental policy. Many interpretations of degrowth do not offer effective strategies for reducing environmental impact or transitioning to a sustainable economy. Additionally, degrowth is unlikely to gain significant social or political support, making it an ineffective strategy for achieving environmental sustainability.
Ineffectiveness and better alternatives
In his scientific paper, Jeroen C. J. M. van den Bergh concludes that a degrowth strategy, which focuses on reducing the overall scale of the economy or consumption, tends to overlook the significance of changes in production composition and technological innovation.
Van den Bergh also highlights that a focus solely on reducing consumption (or consumption degrowth) may lead to rebound effects. For instance, reducing consumption of certain goods and services might result in an increase in spending on other items, as disposable income remains unchanged. Alternatively, it could lead to savings, which would provide additional funds for others to borrow and spend.
He emphasizes the importance of (global) environmental policies, such as pricing externalities through taxes or permits, which incentivize behavior changes that reduce environmental impact and which provide essential information for consumers and help manage rebound effects. Effective environmental regulation through pricing is crucial for transitioning from polluting to cleaner consumption patterns.
Study quality
A 2024 review of degrowth studies covering the previous ten years found that most were of poor quality: almost 90% were opinion pieces rather than analyses, few used quantitative or qualitative data, and fewer still used formal modelling; those that did relied on small samples or focused on non-representative cases. Most studies also offered subjective policy advice but lacked policy evaluation and integration with insights from the literature on environmental and climate policy.
Negative connotation
The use of the term "degrowth" is criticized for being detrimental to the degrowth movement because it could carry a negative connotation, in opposition to the positively perceived "growth". "Growth" is associated with the "up" direction and positive experiences, while "down" generates the opposite associations. Research in political psychology has shown that the initial negative association of a concept, such as of "degrowth" with the negatively perceived "down", can bias how the subsequent information on that concept is integrated at the unconscious level. At the conscious level, degrowth can be interpreted negatively as the contraction of the economy, although this is not the goal of a degrowth transition, but rather one of its expected consequences. In the current economic system, a contraction of the economy is associated with a recession and its ensuing austerity measures, job cuts, or lower salaries. Noam Chomsky commented on the use of the term: "When you say 'degrowth' it frightens people. It's like saying you're going to have to be poorer tomorrow than you are today, and it doesn't mean that."
Since "degrowth" contains the term "growth", there is also a risk of the term having a backfire effect, which would reinforce the initial positive attitude toward growth. "Degrowth" is also criticized for being a confusing term, since its aim is not to halt economic growth as the word implies. Instead, "a-growth" is proposed as an alternative concept that emphasizes that growth ceases to be an important policy objective, but that it can still be achieved as a side-effect of environmental and social policies.
Systems theoretical critique
In stressing the negative rather than the positive side(s) of growth, the majority of degrowth proponents remain focused on (de-)growth, thus giving continued attention to the issue of growth, leading to continued attention to the arguments that sustainable growth is possible. One way to avoid giving attention to growth might be extending from the economic concept of growth, which proponents of both growth and degrowth commonly adopt, to a broader concept of growth that allows for the observation of growth in other sociological characteristics of society. A corresponding "recoding" of "growth-obsessed", capitalist organizations was proposed by Steffen Roth.
Marxist critique
Traditional Marxists distinguish between two types of value creation: that which is useful to mankind, and that which only serves the purpose of accumulating capital. Traditional Marxists consider that it is the exploitative nature and control of the capitalist production relations that is the determinant and not the quantity. According to Jean Zin, while the justification for degrowth is valid, it is not a solution to the problem. Other Marxist writers have adopted positions close to the de-growth perspective. For example, John Bellamy Foster and Fred Magdoff, in common with David Harvey, Immanuel Wallerstein, Paul Sweezy and others focus on endless capital accumulation as the basic principle and goal of capitalism. This is the source of economic growth and, in the view of these writers, results in an unsustainable growth imperative. Foster and Magdoff develop Marx's own concept of the metabolic rift, something he noted in the exhaustion of soils by capitalist systems of food production, though this is not unique to capitalist systems of food production as seen in the Aral Sea. Many degrowth theories and ideas are based on neo-Marxist theory. Foster emphasizes that degrowth "is not aimed at austerity, but at finding a 'prosperous way down' from our current extractivist, wasteful, ecologically unsustainable, maldeveloped, exploitative, and unequal, class-hierarchical world."
Challenges
Lack of macroeconomics for sustainability
It is reasonable for society to worry about recession, as economic growth has been the near-universal policy goal around the globe in recent decades. However, in some advanced countries there are attempts to develop a model for a regrowth economy. For instance, the Cool Japan strategy has proven instructive for Japan, whose economy has been largely static for decades.
Political and social spheres
According to some scholars in Sociology, the growth imperative is deeply entrenched in market capitalist societies such that it is necessary for their stability. Moreover, the institutions of modern societies, such as the nation state, welfare, labor market, education, academia, law and finance, have co-evolved with growth to sustain them. A degrowth transition thus requires not only a change of the economic system but of all the systems on which it relies. As most people in modern societies are dependent on those growth-oriented institutions, the challenge of a degrowth transition also lies in individual resistance to move away from growth.
Land privatisation
Baumann, Alexander and Burdon suggest that "the Degrowth movement needs to give more attention to land and housing costs, which are significant barriers hindering true political and economic agency and any grassroots driven degrowth transition."
They claim that the privatisation of land (a basic necessity) creates an absolute determinant of economic growth. They point out that even someone fully committed to degrowth has no option but to participate in decades of market growth in order to pay rent or a mortgage. Because of this, land privatisation is a structural impediment that makes degrowth economically and politically unviable. They conclude that without addressing land privatisation (the market's inaugural privatisation, i.e., primitive accumulation) the degrowth movement's strategies cannot succeed. Just as land enclosure (privatisation) initiated capitalism (economic growth), degrowth must start with reclaiming the land commons.
Agriculture
When it comes to agriculture, a degrowth society would require a shift from industrial agriculture to less intensive and more sustainable agricultural practices such as permaculture or organic agriculture. Still, it is not clear if any of those alternatives could feed the current and projected global population. In the case of organic agriculture, Germany, for example, would not be able to feed its population under ideal organic yields over all of its arable land without meaningful changes to patterns of consumption, such as reducing meat consumption and food waste. Moreover, labour productivity of non-industrial agriculture is significantly lower due to the reduced use or absence of fossil fuels, which leaves much less labour for other sectors. Potential solutions to this challenge include scaling up approaches such as community-supported agriculture (CSA).
Dilemmas
Given that modernity has emerged with high levels of energy and material throughput, there is an apparent compromise between desirable aspects of modernity (e.g., social justice, gender equality, long life expectancy, low infant mortality) and unsustainable levels of energy and material use. Some researchers, however, argue that the decline in income inequality and rise in social mobility occurring under capitalism from the late 1940s to the 1960s was a product of the heavy bargaining power of labor unions and increased wealth and income redistribution during that time; while also pointing to the rise in income inequality in the 1970s following the collapse of labor unions and weakening of state welfare measures. Others also argue that modern capitalism maintains gender inequalities by means of advertising, messaging in consumer goods, and social media.
Another way of looking at the argument that the development of desirable aspects of modernity requires unsustainable energy and material use is through the lens of the Marxist tradition, which relates the superstructure (culture, ideology, institutions) and the base (material conditions of life, division of labor). A degrowth society, with its drastically different material conditions, could produce equally drastic changes in society's cultural and ideological spheres. The political economy of global capitalism has generated many social and environmental bads, such as socioeconomic inequality and ecological devastation, but it has also generated goods through individualization and increased spatial and social mobility. At the same time, some argue that the widespread individualization promulgated by a capitalist political economy is itself a bad, because it undermines solidarity (aligned with democracy as well as collective, secondary, and primary forms of caring) while encouraging mistrust of others, highly competitive interpersonal relationships, blaming failure on individual shortcomings, prioritization of one's self-interest, and peripheralization of the human work required to create and sustain people. In this view, the widespread individuation resulting from capitalism may impede degrowth measures, requiring a change in actions to benefit society rather than the individual self.
Some argue the political economy of capitalism has allowed social emancipation at the level of gender equality, disability, sexuality and anti-racism that has no historical precedent. However, others dispute social emancipation as being a direct product of capitalism or question the emancipation that has resulted. The feminist writer Nancy Holmstrom, for example, argues that capitalism's negative impacts on women outweigh the positive impacts, and women tend to be hurt by the system. In her examination of China following the Chinese Communist Revolution, Holmstrom notes that women were granted state-assisted freedoms to equal education, childcare, healthcare, abortion, marriage, and other social supports. Thus, whether the social emancipation achieved in Western society under capitalism may coexist with degrowth is ambiguous.
Doyal and Gough allege that the modern capitalist system is built on the exploitation of female reproductive labor as well as that of the Global South, and sexism and racism are embedded in its structure. Therefore, some theories (such as Eco-Feminism or political ecology) argue that there cannot be equality regarding gender and the hierarchy between the Global North and South within capitalism.
The structural properties of growth present another barrier to degrowth as growth shapes and is enforced by institutions, norms, culture, technology, identities, etc. The social ingraining of growth manifests in peoples' aspirations, thinking, bodies, mindsets, and relationships. Together, growth's role in social practices and in socio-economic institutions present unique challenges to the success of the degrowth movement. Another potential barrier to degrowth is the need for a rapid transition to a degrowth society due to climate change and the potential negative impacts of a rapid social transition including disorientation, conflict, and decreased well-being.
In the United States, a large barrier to the support of the degrowth movement is the modern education system, including both primary and higher learning institutions. Beginning in the second term of the Reagan administration, the education system in the US was restructured to enforce neoliberal ideology by means of privatization schemes such as commercialization and performance contracting, implementation of standards and accountability measures incentivizing schools to adopt a uniform curriculum, and higher education accreditation and curricula designed to affirm market values and current power structures and avoid critical thought concerning the relations between those in power, ethics, authority, history, and knowledge. The degrowth movement, based on the empirical assumption that resources are finite and growth is limited, clashes with the limitless growth ideology associated with neoliberalism and the market values affirmed in schools, and therefore faces a major social barrier in gaining widespread support in the US.
Nevertheless, the co-evolving aspects of global capitalism, liberal modernity, and market society are closely tied, and it will be difficult to separate them in a way that preserves liberal and cosmopolitan values in a degrowth society. At the same time, the goal of the degrowth movement is progression rather than regression, and researchers point out that neoclassical economic models indicate that neither negative nor zero growth would harm economic stability or full employment. Several assert that the main barriers to the movement are social and structural factors clashing with the implementation of degrowth measures.
Healthcare
It has been pointed out that there is an apparent trade-off between the ability of modern healthcare systems to treat individual bodies to their last breath and the broader global ecological risk of such an energy and resource intensive care. If this trade-off exists, a degrowth society must choose between prioritizing the ecological integrity and the ensuing collective health or maximizing the healthcare provided to individuals. However, many degrowth scholars argue that the current system produces both psychological and physical damage to people. They insist that societal prosperity should be measured by well-being, not GDP.
See also
A Blueprint for Survival
Agrowth
Anti-consumerism
Critique of political economy
Degrowth advocates (category)
Political ecology
Postdevelopment theory
Power Down: Options and Actions for a Post-Carbon World
Paradox of thrift
The Path to Degrowth in Overdeveloped Countries
Post-capitalism
Productivism
Prosperity Without Growth
Slow movement
Steady-state economy
Transition town
Uneconomic growth
References
Reference details
Further reading
External links
List of International Degrowth conferences on degrowth.info
Research and Degrowth
International Degrowth Network
Degrowth Journal
Degrowth Database
Planned Degrowth: Ecosocialism and Sustainable Human Development. Monthly Review issue on "Planned Degrowth". July 1, 2023.
Simple living
Sustainability
Green politics
Ecological economics
Environmental movements
Environmental ethics
Environmental economics
Environmental social science concepts | Degrowth | Environmental_science | 9,615 |
20,207,529 | https://en.wikipedia.org/wiki/Methimepip | Methimepip is a histamine agonist which is highly selective for the H3 subtype. It is the N-methyl derivative of immepip.
References
Imidazoles
Histamine agonists
Piperidines | Methimepip | Chemistry | 51 |
519,218 | https://en.wikipedia.org/wiki/Tridiagonal%20matrix | In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal/lower diagonal (the first diagonal below this), and the supradiagonal/upper diagonal (the first diagonal above the main diagonal). For example, the following matrix is tridiagonal:
1 4 0 0
3 4 1 0
0 2 3 4
0 0 1 3
The determinant of a tridiagonal matrix is given by the continuant of its elements.
An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos algorithm.
Properties
A tridiagonal matrix is a matrix that is both upper and lower Hessenberg matrix. In particular, a tridiagonal matrix is a direct sum of p 1-by-1 and q 2-by-2 matrices such that p + 2q = n (the dimension of the tridiagonal matrix). Although a general tridiagonal matrix is not necessarily symmetric or Hermitian, many of those that arise when solving linear algebra problems have one of these properties. Furthermore, if a real tridiagonal matrix A satisfies ak,k+1 ak+1,k > 0 for all k, so that the signs of its entries are symmetric, then it is similar to a Hermitian matrix, by a diagonal change of basis matrix. Hence, its eigenvalues are real. If we replace the strict inequality by ak,k+1 ak+1,k ≥ 0, then by continuity, the eigenvalues are still guaranteed to be real, but the matrix need no longer be similar to a Hermitian matrix.
The set of all n × n tridiagonal matrices forms a 3n − 2 dimensional vector space.
Many linear algebra algorithms require significantly less computational effort when applied to diagonal matrices, and this improvement often carries over to tridiagonal matrices as well.
Determinant
The determinant of a tridiagonal matrix A of order n can be computed from a three-term recurrence relation. Write f1 = |a1| = a1 (i.e., f1 is the determinant of the 1 by 1 matrix consisting only of a1), and let fn be the determinant of the leading principal n × n submatrix of A.
The sequence (fi) is called the continuant and satisfies the recurrence relation fn = an fn−1 − bn−1 cn−1 fn−2, where ai denotes the i-th diagonal entry of A and bi and ci denote its superdiagonal and subdiagonal entries,
with initial values f0 = 1 and f−1 = 0. The cost of computing the determinant of a tridiagonal matrix using this formula is linear in n, while the cost is cubic for a general matrix.
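To make the recurrence concrete, here is a minimal Python sketch (assuming the labelling ai for the diagonal and bi, ci for the super- and subdiagonal entries used above); the comparison with numpy.linalg.det is included only to sanity-check the sketch:

```python
import numpy as np

def tridiag_det(a, b, c):
    """Determinant of a tridiagonal matrix via the three-term (continuant) recurrence.

    a : diagonal entries (length n)
    b : superdiagonal entries (length n - 1)
    c : subdiagonal entries (length n - 1)
    """
    f_prev2, f_prev1 = 1.0, a[0]                  # f_0 = 1, f_1 = a_1
    for i in range(1, len(a)):
        # f_i = a_i * f_{i-1} - b_{i-1} * c_{i-1} * f_{i-2}
        f_prev2, f_prev1 = f_prev1, a[i] * f_prev1 - b[i - 1] * c[i - 1] * f_prev2
    return f_prev1

# Sanity check on a small random example (linear cost, vs. cubic for np.linalg.det).
rng = np.random.default_rng(0)
n = 6
a, b, c = rng.standard_normal(n), rng.standard_normal(n - 1), rng.standard_normal(n - 1)
T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
assert np.isclose(tridiag_det(a, b, c), np.linalg.det(T))
```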
Inversion
The inverse of a non-singular tridiagonal matrix T, with diagonal entries a1, ..., an, superdiagonal entries b1, ..., bn−1 and subdiagonal entries c1, ..., cn−1, is given entrywise by
(T−1)ij = (−1)^(i+j) (bi bi+1 ⋯ bj−1) θi−1 ϕj+1 / θn for i ≤ j, and
(T−1)ij = (−1)^(i+j) (cj cj+1 ⋯ ci−1) θj−1 ϕi+1 / θn for i > j,
where the θi satisfy the recurrence relation θi = ai θi−1 − bi−1 ci−1 θi−2 for i = 2, 3, ..., n,
with initial conditions θ0 = 1, θ1 = a1, and the ϕi satisfy ϕi = ai ϕi+1 − bi ci ϕi+2 for i = n − 1, ..., 1,
with initial conditions ϕn+1 = 1 and ϕn = an.
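The entrywise formula above can be turned into a short, illustrative Python routine. This is a sketch of the classical continuant-based construction (often attributed to Usmani) using the same a/b/c labelling; it is not a numerically robust inversion method, since θn can overflow or underflow for large n:

```python
import numpy as np

def tridiag_inverse(a, b, c):
    """Inverse of a non-singular tridiagonal matrix from the theta/phi recurrences."""
    n = len(a)
    theta = np.empty(n + 1)                     # theta[0..n]
    theta[0], theta[1] = 1.0, a[0]
    for i in range(2, n + 1):
        theta[i] = a[i - 1] * theta[i - 1] - b[i - 2] * c[i - 2] * theta[i - 2]

    phi = np.empty(n + 2)                       # phi[1..n+1] are used below
    phi[n + 1], phi[n] = 1.0, a[n - 1]
    for i in range(n - 1, 0, -1):
        phi[i] = a[i - 1] * phi[i + 1] - b[i - 1] * c[i - 1] * phi[i + 2]

    inv = np.empty((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i <= j:
                prod = np.prod(b[i - 1:j - 1])  # b_i ... b_{j-1}; empty product is 1
                inv[i - 1, j - 1] = (-1) ** (i + j) * prod * theta[i - 1] * phi[j + 1] / theta[n]
            else:
                prod = np.prod(c[j - 1:i - 1])  # c_j ... c_{i-1}
                inv[i - 1, j - 1] = (-1) ** (i + j) * prod * theta[j - 1] * phi[i + 1] / theta[n]
    return inv

# Quick check against a dense inverse on a small, comfortably non-singular example.
a = np.array([4.0, 3.0, 5.0, 4.0])
b = np.array([1.0, -2.0, 1.5])
c = np.array([2.0, 0.5, -1.0])
T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
assert np.allclose(tridiag_inverse(a, b, c), np.linalg.inv(T))
```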
Closed form solutions can be computed for special cases such as symmetric matrices with all diagonal and off-diagonal elements equal or Toeplitz matrices and for the general case as well.
In general, the inverse of a tridiagonal matrix is a semiseparable matrix and vice versa. The inverse of a symmetric tridiagonal matrix can be written as a single-pair matrix (a.k.a. generator-representable semiseparable matrix), i.e., a matrix whose (i, j) entry has the form ui vj for i ≤ j (and, by symmetry, uj vi for i > j), for suitable vectors u and v.
Solution of linear system
A system of equations Ax = b for an unknown vector x can be solved by an efficient form of Gaussian elimination when A is tridiagonal, called the tridiagonal matrix algorithm (Thomas algorithm), requiring only O(n) operations.
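A minimal sketch of this elimination in Python, with illustrative argument names; this simple version performs no pivoting, so it assumes the system can be eliminated without row interchanges (e.g., it is diagonally dominant):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve T x = d for tridiagonal T in O(n) operations (no pivoting).

    a : diagonal entries (length n)
    b : superdiagonal entries (length n - 1)
    c : subdiagonal entries (length n - 1)
    d : right-hand side (length n)
    """
    n = len(a)
    cp = np.empty(n - 1)                       # modified superdiagonal coefficients
    dp = np.empty(n)                           # modified right-hand side

    cp[0] = b[0] / a[0]
    dp[0] = d[0] / a[0]
    for i in range(1, n):                      # forward elimination
        m = a[i] - c[i - 1] * cp[i - 1]        # pivot after eliminating the subdiagonal
        if i < n - 1:
            cp[i] = b[i] / m
        dp[i] = (d[i] - c[i - 1] * dp[i - 1]) / m

    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: a diagonally dominant system, checked against numpy's dense solver.
a = np.array([4.0, 4.0, 4.0, 4.0])
b = np.array([1.0, 1.0, 1.0])
c = np.array([1.0, 1.0, 1.0])
d = np.array([5.0, 6.0, 6.0, 5.0])
T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
assert np.allclose(thomas_solve(a, b, c, d), np.linalg.solve(T, d))
```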
Eigenvalues
When a tridiagonal matrix is also Toeplitz, with diagonal value a, superdiagonal value b and subdiagonal value c, there is a simple closed-form solution for its eigenvalues, namely a + 2√(bc) cos(kπ/(n + 1)) for k = 1, ..., n.
A real symmetric tridiagonal matrix has real eigenvalues, and all the eigenvalues are distinct (simple) if all off-diagonal elements are nonzero. Numerous methods exist for the numerical computation of the eigenvalues of a real symmetric tridiagonal matrix to arbitrary finite precision, typically requiring O(n^2) operations for a matrix of size n × n, although fast algorithms exist which (without parallel computation) require only O(n log n).
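As a quick numerical illustration, the standard closed form for a tridiagonal Toeplitz matrix with diagonal value a and off-diagonal values b and c (bc > 0) is λk = a + 2√(bc)·cos(kπ/(n + 1)), k = 1, ..., n; the sketch below checks it against a dense eigenvalue solver for a symmetric example with illustrative numbers:

```python
import numpy as np

n, a, b, c = 8, 2.0, -1.0, -1.0          # classic 1-D Laplacian-like stencil
T = a * np.eye(n) + b * np.eye(n, k=1) + c * np.eye(n, k=-1)

k = np.arange(1, n + 1)
closed_form = a + 2.0 * np.sqrt(b * c) * np.cos(k * np.pi / (n + 1))

numerical = np.linalg.eigvalsh(T)        # T is symmetric here, so eigvalsh applies
assert np.allclose(np.sort(closed_form), np.sort(numerical))
```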
As a side note, an unreduced symmetric tridiagonal matrix is one whose off-diagonal (tridiagonal) elements are all non-zero; its eigenvalues are distinct, and its eigenvectors are unique up to a scale factor and mutually orthogonal.
Similarity to symmetric tridiagonal matrix
For unsymmetric or nonsymmetric tridiagonal matrices one can compute the eigendecomposition using a similarity transformation.
Given a real tridiagonal, nonsymmetric matrix T with diagonal entries ai, superdiagonal entries bi and subdiagonal entries ci, assume that each product of off-diagonal entries bici is strictly positive. Define a diagonal transformation matrix D whose entries are built from the square roots of the ratios of the off-diagonal entries (for example, δ1 = 1 and δi+1 = δi √(ci/bi)).
The similarity transformation J = D−1 T D then yields a symmetric tridiagonal matrix J with the same diagonal entries as T and with off-diagonal entries √(bici).
Note that T and J have the same eigenvalues.
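A short Python sketch of this symmetrization; the specific diagonal scaling used here (δ1 = 1, δi+1 = δi·√(ci/bi)) is one standard choice consistent with the construction described above, and the function and variable names are illustrative:

```python
import numpy as np

def symmetrize_tridiagonal(a, b, c):
    """Return (D, J) with J = inv(D) @ T @ D symmetric, assuming b[i] * c[i] > 0."""
    n = len(a)
    delta = np.ones(n)
    for i in range(n - 1):
        delta[i + 1] = delta[i] * np.sqrt(c[i] / b[i])
    D = np.diag(delta)
    T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
    J = np.linalg.inv(D) @ T @ D
    return D, J

a = np.array([2.0, 3.0, 2.5, 4.0])
b = np.array([1.0, 2.0, 0.5])            # superdiagonal
c = np.array([4.0, 0.5, 2.0])            # subdiagonal (each b[i] * c[i] > 0)
D, J = symmetrize_tridiagonal(a, b, c)
assert np.allclose(J, J.T)               # J is symmetric, off-diagonals sqrt(b_i c_i)

T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
assert np.allclose(np.sort(np.linalg.eigvals(T).real),
                   np.sort(np.linalg.eigvalsh(J)))   # same (real) eigenvalues
```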
Computer programming
A transformation that reduces a general matrix to Hessenberg form will reduce a Hermitian matrix to tridiagonal form. So, many eigenvalue algorithms, when applied to a Hermitian matrix, reduce the input Hermitian matrix to (symmetric real) tridiagonal form as a first step.
A tridiagonal matrix can also be stored more efficiently than a general matrix by using a special storage scheme. For instance, the LAPACK Fortran package stores an unsymmetric tridiagonal matrix of order n in three one-dimensional arrays, one of length n containing the diagonal elements, and two of length n − 1 containing the subdiagonal and superdiagonal elements.
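The idea behind this compact storage can be sketched in a few lines of Python with illustrative array names: only the three nonzero diagonals are kept (3n − 2 numbers instead of n²), and a dense matrix is rebuilt only when a dense routine requires it; LAPACK's tridiagonal solvers (e.g., dgtsv) consume the three arrays directly:

```python
import numpy as np

n = 5
d  = np.array([4.0, 4.0, 4.0, 4.0, 4.0])   # diagonal, length n
dl = np.array([1.0, 1.0, 1.0, 1.0])        # subdiagonal, length n - 1
du = np.array([2.0, 2.0, 2.0, 2.0])        # superdiagonal, length n - 1

# 3n - 2 stored numbers instead of n**2.
assert d.size + dl.size + du.size == 3 * n - 2

# Rebuild the dense matrix only when a dense routine needs it.
T = np.diag(d) + np.diag(dl, -1) + np.diag(du, 1)
```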
Applications
The discretization in space of the one-dimensional diffusion or heat equation
using second order central finite differences results in the semi-discrete system du(t)/dt = (1/Δx²) A u(t),
with discretization constant Δx (the grid spacing). The matrix A is tridiagonal, with diagonal entries −2 and super- and subdiagonal entries 1. Note: no boundary conditions are used here.
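A brief sketch constructing this semi-discrete system (interior stencil only, no boundary conditions, with an illustrative grid size and spacing), consistent with the −2 / 1 stencil described above:

```python
import numpy as np

n, dx = 6, 0.1                                         # illustrative grid
# Tridiagonal second-difference matrix: -2 on the diagonal, 1 off the diagonal.
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2

# Semi-discrete heat equation du/dt = A u, advanced by one explicit Euler step.
u = np.sin(np.pi * dx * np.arange(1, n + 1))
dt = 0.4 * dx**2                                       # within the explicit stability limit dt <= dx**2 / 2
u_next = u + dt * (A @ u)
```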
See also
Pentadiagonal matrix
Jacobi matrix (operator)
Notes
External links
Tridiagonal and Bidiagonal Matrices in the LAPACK manual.
High performance algorithms for reduction to condensed (Hessenberg, tridiagonal, bidiagonal) form
Tridiagonal linear system solver in C++
Sparse matrices | Tridiagonal matrix | Mathematics | 1,218 |
543,407 | https://en.wikipedia.org/wiki/Torque%20converter | A torque converter is a device, usually implemented as a type of fluid coupling, that transfers rotating power from a prime mover, like an internal combustion engine, to a rotating driven load. In a vehicle with an automatic transmission, the torque converter connects the prime mover to the automatic gear train, which then drives the load. It is thus usually located between the engine's flexplate and the transmission. The equivalent device in a manual transmission is the mechanical clutch.
A torque converter serves to increase transmitted torque when the output rotational speed is low. In the fluid coupling embodiment, it uses a fluid, driven by the vanes of an input impeller, and directed through the vanes of a fixed stator, to drive an output turbine in such a manner that torque on the output is increased when the output shaft is rotating more slowly than the input shaft, thus providing the equivalent of an adaptive reduction gear. This is a feature beyond what a simple fluid coupling provides, which can match rotational speed but does not multiply torque. Fluid-coupling–based torque converters also typically include a lock-up function to rigidly couple input and output and avoid the efficiency losses associated with transmitting torque by fluid flow when operating conditions permit.
Hydraulic systems
By far the most common form of torque converter in automobile transmissions is the hydrodynamic device described above. There are also hydrostatic systems which are widely used in small machines such as compact excavators.
Mechanical systems
There are also mechanical designs for torque converters, many of which are similar to mechanical continuously variable transmissions or capable of acting as such as well. They include the pendulum-based Constantinesco torque converter, the Lambert friction gearing disk drive transmission and the Variomatic with expanding pulleys and a belt drive.
Usage
Automatic transmissions on automobiles, such as cars, buses, and on/off highway trucks.
Forwarders and other heavy duty vehicles.
Marine propulsion systems.
Industrial power transmission such as conveyor drives, almost all modern forklifts, winches, drilling rigs, construction equipment, and diesel-hydraulic railway locomotives.
Function
Theory of operation
Torque converter equations of motion are governed by Leonhard Euler's eighteenth-century turbomachine equation, which states that the torque exerted on the fluid by a rotating element equals the mass flow rate multiplied by the change in the product of radius and tangential fluid velocity between the element's inlet and outlet.
The equation expands to include the fifth power of radius; as a result, torque converter properties are very dependent on the size of the device.
Mathematical formulations for the torque converter are available from several authors.
Hrovat derived the equations of the pump, turbine, stator, and conservation of energy. Four first-order differential equations can define the performance of the torque converter.
where the variables in these equations are the fluid density, the flow area, the pump, turbine and stator radii, the pump, turbine and stator exit angles, the inertia, and the fluid inertia length.
A simpler correlation is provided by Kotwicki.
Torque converter elements
A fluid coupling is a two-element drive that is incapable of multiplying torque, while a torque converter has at least one extra element—the stator—which alters the drive's characteristics during periods of high slippage, producing an increase in output torque.
In a torque converter there are at least three rotating elements: the impeller, which is mechanically driven by the prime mover; the turbine, which drives the load; and the stator, which is interposed between the impeller and turbine so that it can alter oil flow returning from the turbine to the impeller. The classic torque converter design dictates that the stator be prevented from rotating under any condition, hence the term stator. In practice, however, the stator is mounted on an overrunning clutch, which prevents the stator from counter-rotating with respect to the prime mover but allows forward rotation.
Modifications to the basic three element design have been periodically incorporated, especially in applications where higher than normal torque multiplication is required. Most commonly, these have taken the form of multiple turbines and stators, each set being designed to produce differing amounts of torque multiplication. For example, the Buick Dynaflow automatic transmission was a non-shifting design and, under normal conditions, relied solely upon the converter to multiply torque. The Dynaflow used a five-element converter to produce the wide range of torque multiplication needed to propel a heavy vehicle.
Although not strictly a part of classic torque converter design, many automotive converters include a lock-up clutch to improve cruising power transmission efficiency and reduce heat. The application of the clutch locks the turbine to the impeller, causing all power transmission to be mechanical, thus eliminating losses associated with fluid drive.
Operational phases
A torque converter has three stages of operation:
Stall. The prime mover is applying power to the impeller but the turbine cannot rotate. For example, in an automobile, this stage of operation would occur when the driver has placed the transmission in gear but is preventing the vehicle from moving by continuing to apply the brakes. At stall, the torque converter can produce maximum torque multiplication if sufficient input power is applied (the resulting multiplication is called the stall ratio). The stall phase actually lasts for a brief period when the load (e.g., vehicle) initially starts to move, as there will be a very large difference between pump and turbine speed.
Acceleration. The load is accelerating but there still is a relatively large difference between impeller and turbine speed. Under this condition, the converter will produce torque multiplication that is less than what could be achieved under stall conditions. The amount of multiplication will depend upon the actual difference between pump and turbine speed, as well as various other design factors.
Coupling. The turbine has reached approximately 90 percent of the speed of the impeller. Torque multiplication has essentially ceased and the torque converter is behaving in a manner similar to a simple fluid coupling. In modern automotive applications, it is usually at this stage of operation where the lock-up clutch is applied, a procedure that tends to improve fuel efficiency.
The key to the torque converter's ability to multiply torque lies in the stator. In the classic fluid coupling design, periods of high slippage cause the fluid flow returning from the turbine to the impeller to oppose the direction of impeller rotation, leading to a significant loss of efficiency and the generation of considerable waste heat. Under the same condition in a torque converter, the returning fluid will be redirected by the stator so that it aids the rotation of the impeller, instead of impeding it. The result is that much of the energy in the returning fluid is recovered and added to the energy being applied to the impeller by the prime mover. This action causes a substantial increase in the mass of fluid being directed to the turbine, producing an increase in output torque. Since the returning fluid is initially traveling in a direction opposite to impeller rotation, the stator will likewise attempt to counter-rotate as it forces the fluid to change direction, an effect that is prevented by the one-way stator clutch.
Unlike the radially straight blades used in a plain fluid coupling, a torque converter's turbine and stator use angled and curved blades. The blade shape of the stator is what alters the path of the fluid, forcing it to coincide with the impeller rotation. The matching curve of the turbine blades helps to correctly direct the returning fluid to the stator so the latter can do its job. The shape of the blades is important as minor variations can result in significant changes to the converter's performance.
During the stall and acceleration phases, in which torque multiplication occurs, the stator remains stationary due to the action of its one-way clutch. However, as the torque converter approaches the coupling phase, the energy and volume of the fluid returning from the turbine will gradually decrease, causing pressure on the stator to likewise decrease. Once in the coupling phase, the returning fluid will reverse direction and now rotate in the direction of the impeller and turbine, an effect which will attempt to forward-rotate the stator. At this point, the stator clutch will release and the impeller, turbine and stator will all (more or less) turn as a unit.
Unavoidably, some of the fluid's kinetic energy will be lost due to friction and turbulence, causing the converter to generate waste heat (dissipated in many applications by water cooling). This effect, often referred to as pumping loss, will be most pronounced at or near stall conditions. In modern designs, the blade geometry minimizes oil velocity at low impeller speeds, which allows the turbine to be stalled for long periods with little danger of overheating (as when a vehicle with an automatic transmission is stopped at a traffic signal or in traffic congestion while still in gear).
Efficiency and torque multiplication
A torque converter cannot achieve 100 percent coupling efficiency. The classic three element torque converter has an efficiency curve that resembles ∩: zero efficiency at stall, generally increasing efficiency during the acceleration phase and low efficiency in the coupling phase. The loss of efficiency as the converter enters the coupling phase is a result of the turbulence and fluid flow interference generated by the stator, and as previously mentioned, is commonly overcome by mounting the stator on a one-way clutch.
Even with the benefit of the one-way stator clutch, a converter cannot achieve the same level of efficiency in the coupling phase as an equivalently sized fluid coupling. Some loss is due to the presence of the stator (even though rotating as part of the assembly), as it always generates some power-absorbing turbulence. Most of the loss, however, is caused by the curved and angled turbine blades, which do not absorb kinetic energy from the fluid mass as well as radially straight blades. Since the turbine blade geometry is a crucial factor in the converter's ability to multiply torque, trade-offs between torque multiplication and coupling efficiency are inevitable. In automotive applications, where steady improvements in fuel economy have been mandated by market forces and government edict, the nearly universal use of a lock-up clutch has helped to eliminate the converter from the efficiency equation during cruising operation.
The maximum amount of torque multiplication produced by a converter is highly dependent on the size and geometry of the turbine and stator blades, and is generated only when the converter is at or near the stall phase of operation. Typical stall torque multiplication ratios range from 1.8:1 to 2.5:1 for most automotive applications (although multi-element designs as used in the Buick Dynaflow and Chevrolet Turboglide could produce more). Specialized converters designed for industrial, rail, or heavy marine power transmission systems are capable of as much as 5.0:1 multiplication. Generally speaking, there is a trade-off between maximum torque multiplication and efficiency—high stall ratio converters tend to be relatively inefficient around the coupling speed, whereas low stall ratio converters tend to provide less possible torque multiplication.
The characteristics of the torque converter must be carefully matched to the torque curve of the power source and the intended application. Changing the blade geometry of the stator and/or turbine will change the torque-stall characteristics, as well as the overall efficiency of the unit. For example, drag racing automatic transmissions often use converters modified to produce high stall speeds to improve off-the-line torque, and to get into the power band of the engine more quickly. Highway vehicles generally use lower stall torque converters to limit heat production, and provide a more firm feeling to the vehicle's characteristics.
A design feature once found in some General Motors automatic transmissions was the variable-pitch stator, in which the blades' angle of attack could be varied in response to changes in engine speed and load. The effect of this was to vary the amount of torque multiplication produced by the converter. At the normal angle of attack, the stator caused the converter to produce a moderate amount of multiplication but with a higher level of efficiency. If the driver abruptly opened the throttle, a valve would switch the stator pitch to a different angle of attack, increasing torque multiplication at the expense of efficiency.
Some torque converters use multiple stators and/or multiple turbines to provide a wider range of torque multiplication. Such multiple-element converters are more common in industrial environments than in automotive transmissions, but automotive applications such as Buick's Triple Turbine Dynaflow and Chevrolet's Turboglide also existed. The Buick Dynaflow utilized the torque-multiplying characteristics of its planetary gear set in conjunction with the torque converter for low gear and bypassed the first turbine, using only the second turbine as vehicle speed increased. The unavoidable trade-off with this arrangement was low efficiency and eventually these transmissions were discontinued in favor of the more efficient three speed units with a conventional three element torque converter.
It is also found that the efficiency of a torque converter is at its maximum at very low speeds.
Lock-up torque converters
As described above, impelling losses within the torque converter reduce efficiency and generate waste heat. In modern automotive applications, this problem is commonly avoided by use of a lock-up clutch that physically links the impeller and turbine, effectively changing the converter into a purely mechanical coupling. The result is no slippage, and virtually no power loss.
The first automotive application of the lock-up principle was Packard's Ultramatic transmission, introduced in 1949, which locked up the converter at cruising speeds, unlocking when the throttle was floored for quick acceleration or as the vehicle slowed. This feature was also present in some Borg-Warner transmissions produced during the 1950s. It fell out of favor in subsequent years due to its extra complexity and cost. In the late 1970s lock-up clutches started to reappear in response to demands for improved fuel economy, and are now nearly universal in automotive applications.
Capacity and failure modes
As with a basic fluid coupling, the theoretical torque capacity of a converter is proportional to ρN²D⁵, where ρ is the mass density of the fluid (kg/m3), N is the impeller speed (rpm), and D is the diameter (m). In practice, the maximum torque capacity is limited by the mechanical characteristics of the materials used in the converter's components, as well as the ability of the converter to dissipate heat (often through water cooling). As an aid to strength, reliability and economy of production, most automotive converter housings are of welded construction. Industrial units are usually assembled with bolted housings, a design feature that eases the process of inspection and repair, but adds to the cost of producing the converter.
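Because the exact proportionality constant depends on the detailed blade geometry, only the scaling can be illustrated generically; in the sketch below the constant k is an arbitrary placeholder, not a value for any real converter:

```python
def torque_capacity(rho, n_rpm, d_m, k=1.0):
    # Theoretical capacity scales as rho * N^2 * D^5; k is a geometry-dependent
    # placeholder constant, not data for any actual converter.
    return k * rho * n_rpm**2 * d_m**5

base   = torque_capacity(rho=840.0, n_rpm=2000.0, d_m=0.28)
larger = torque_capacity(rho=840.0, n_rpm=2000.0, d_m=0.56)
print(larger / base)   # 32.0: doubling the diameter raises capacity by 2**5
```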
In high performance, racing and heavy duty commercial converters, the pump and turbine may be further strengthened by a process called furnace brazing, in which molten brass is drawn into seams and joints to produce a stronger bond between the blades, hubs and annular ring(s). Because the furnace brazing process creates a small radius at the point where a blade meets with a hub or annular ring, a theoretical decrease in turbulence will occur, resulting in a corresponding increase in efficiency.
Overloading a converter can result in several failure modes, some of them potentially dangerous in nature:
Overheating: Continuous high levels of slippage may overwhelm the converter's ability to dissipate heat, resulting in damage to the elastomer seals that retain fluid inside the converter. A prime example in passenger cars would be getting stuck in snow or mud and having to rock the vehicle forward and backward to gain momentum by going back and forth from drive to reverse using significant power. The transmission fluid will quickly overheat, not to mention the repeated impacts on the stator clutch (next topic). Also, overheating transmission fluid causes it to lose viscosity and damage the transmission. Such abuse can in rare cases cause the torque converter to leak and eventually stop functioning due to lack of fluid.
Stator clutch seizure: The inner and outer elements of the one-way stator clutch become permanently locked together, thus preventing the stator from rotating during the coupling phase. Most often, seizure is precipitated by severe loading and subsequent distortion of the clutch components. Eventually, galling of the mating parts occurs, which triggers seizure. A converter with a seized stator clutch will exhibit very poor efficiency during the coupling phase, and in a motor vehicle, fuel consumption will drastically increase. Converter overheating under such conditions will usually occur if continued operation is attempted.
Stator clutch breakage: A very abrupt application of power, as in putting the transmission in neutral and increasing engine RPMs before engaging a gear (commonly called a "neutral start"), can cause shock loading of the stator clutch, resulting in breakage. If this occurs, the stator will freely counter-rotate in the direction opposite to that of the pump and almost no power transmission will take place. In an automobile, the effect is similar to a severe case of transmission slippage and the vehicle is all but incapable of moving under its own power.
Blade deformation and fragmentation: If subjected to abrupt loading or excessive heating of the converter, pump and/or turbine blades may be deformed, separated from their hubs and/or annular rings, or may break up into fragments. At the least, such a failure will result in a significant loss of efficiency, producing symptoms similar (although less pronounced) to those accompanying stator clutch failure. In extreme cases, catastrophic destruction of the converter will occur.
Ballooning: Prolonged operation under excessive loading, very abrupt application of load, or operating a torque converter at very high RPM may cause the shape of the converter's housing to be physically distorted due to internal pressure and/or the stress imposed by inertia. Under extreme conditions, ballooning will cause the converter housing to rupture, resulting in the violent dispersal of hot oil and metal fragments over a wide area.
Manufacturers
Current
Aisin AW, used in automobiles
Allison Transmission, used in bus, refuse, fire, construction, distribution, military and specialty applications
BorgWarner, used in automobiles
Exedy, used in automobiles
Isuzu, used in automobiles
Jatco, used in automobiles
LuK USA LLC, produces Torque Converters for Ford, GM, Allison Transmission, and Hyundai
Subaru, used in automobiles
Twin Disc, used in vehicle, marine and oilfield applications
Valeo, produces Torque converter for Ford, GM, Mazda, Subaru
Voith turbo transmissions, used in many diesel locomotives and diesel multiple units
ZF Friedrichshafen, automobiles, forestry machines, popular in city bus applications
Past
Lysholm-Smith, named after its inventor, Alf Lysholm, produced by Leyland Motors and used in buses from 1933 to 1939 and also some British Rail Derby Lightweight and Ulster Transport Authority diesel multiple units
Mekydro, used in British Rail Class 35 Hymek locomotives.
Packard, used in the Ultramatic automobile transmission system
Rolls-Royce (Twin Disc), used in some British United Traction diesel multiple units
Vickers-Coates
See also
Clutch
Fluid coupling
Servomechanism
Torque amplifier
Transmission (mechanics)
Water brake
References
External links
HowStuffWorks article on torque converters
YouTube video about torque converters
Variators
Mechanical power control
Mechanical power transmission
Continuously variable transmissions
Automotive transmission technologies
Converter | Torque converter | Physics | 3,947 |
5,445,552 | https://en.wikipedia.org/wiki/Poussin%20proof | In number theory, a branch of mathematics, the Poussin proof is the proof of an identity related to the fractional part of a ratio.
In 1838, Peter Gustav Lejeune Dirichlet proved an approximate formula for the average number of divisors of all the numbers from 1 to n:
(d(1) + d(2) + ⋯ + d(n)) / n ≈ ln n + 2γ − 1,
where d represents the divisor function, and γ represents the Euler-Mascheroni constant.
In 1898, Charles Jean de la Vallée-Poussin proved that if a large number n is divided by all the primes up to n, then the average fraction by which the quotient falls short of the next whole number tends to γ; equivalently,
(1/π(n)) Σ_{p ≤ n} {n/p} → 1 − γ as n → ∞,
where {x} represents the fractional part of x, and π represents the prime-counting function.
For example, if we divide 29 by 2, we get 14.5, which falls short of 15 by 0.5.
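The statement can be checked numerically; the sketch below (a simple sieve in Python) averages the shortfalls for the primes up to one million and compares the result with γ ≈ 0.5772:

```python
import math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

n = 10**6
ps = primes_up_to(n)
# Average amount by which n/p falls short of the next whole number
shortfall = sum(math.ceil(n / p) - n / p for p in ps) / len(ps)
print(shortfall)   # close to the Euler-Mascheroni constant 0.5772...
```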
References
Dirichlet, G. L. "Sur l'usage des séries infinies dans la théorie des nombres", Journal für die reine und angewandte Mathematik 18 (1838), pp. 259–274. Cited in MathWorld article "Divisor Function" below.
de la Vallée Poussin, C.-J. Untitled communication. Annales de la Societe Scientifique de Bruxelles 22 (1898), pp. 84–90. Cited in MathWorld article "Euler-Mascheroni Constant" below.
External links
Number theory | Poussin proof | Mathematics | 310 |
52,373,496 | https://en.wikipedia.org/wiki/Notakto | Notakto is a tic-tac-toe variant, also known as neutral or impartial tic-tac-toe. The game is a combination of the games tic-tac-toe and Nim, played across one or several boards with both of the players playing the same piece (an "X" or cross). The game ends when all the boards contain a three-in-a-row of Xs, at which point the player to have made the last move loses the game. Unlike tic-tac-toe, however, a game of Notakto can never end in a draw; one player always wins.
Notakto is an impartial game, where the allowable moves depend only on the state of the game and not on which player is taking their turn. When played across multiple boards it is a disjunctive game. The game is attributed to professor and backgammon player Bob Koca, who is said to have invented the game in 2010, when his five-year-old nephew suggested playing a game of tic-tac-toe with both players as "X".
Play
Notakto is played on a finite number of empty three-by-three boards. Then, each player takes turns placing an X on the board(s) in a vacant space (a space not occupied by an X already on the board). If a board has a three-in-a-row, the board is dead and it cannot be played on any more. When one player makes a three-in-a-row and there are no more boards to play on, that player loses.
Optimal strategy
The optimal strategy for a single-board game of Notakto allows the first player to force a win: the first player plays the center and then responds a knight's move (two squares vertically and one square horizontally, or vice versa) away from each of the opponent's plays. This strategy works because it creates a boot-like structure called the boot trap, from which the first player can force a win.
With two boards, the second player should on their first move play in the center square of the empty board (the one with no Xs in it). Then, the second player sacrifices one of the boards (by making a three-in-a-row) if it is possible. Now, the game is a 1-board game of Notakto so the second player uses the knight's move or boot trap strategies to win.
From these two strategies, any game with more than two boards can always be won by the first player (on an odd number of boards) or by the second player (on an even number of boards).
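The single-board claim can be verified by brute force; the sketch below (Python) encodes a board as a 9-bit mask and returns whether the player to move can force a win under the rule that the player who completes a three-in-a-row on the last live board loses:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def is_dead(board):
    # A board is dead once it contains any three-in-a-row of Xs.
    return any(all(board >> i & 1 for i in line) for line in LINES)

@lru_cache(maxsize=None)
def to_move_wins(board):
    # Single-board Notakto: a move that completes a line ends the game and
    # loses; a player with no non-losing move must still move, and loses.
    for i in range(9):
        if not board >> i & 1:
            nxt = board | (1 << i)
            if is_dead(nxt):
                continue            # this move loses immediately
            if not to_move_wins(nxt):
                return True         # a move leaving the opponent in a losing position
    return False

print(to_move_wins(0))   # True: the first player can force a win on one board
```

The same search extends directly to several boards by keeping one mask per board.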
See also
Misère
References
Mathematical games
Paper-and-pencil games
Combinatorial game theory
Tic-tac-toe
Tic-tac-toe variants | Notakto | Mathematics | 608 |
67,532,448 | https://en.wikipedia.org/wiki/Emergency%20Warning%20System | Emergency Warning System is a warning system the Japan Meteorological Agency (JMA) launched on 30 August 2013. Emergency Warnings are issued to alert people to the significant likelihood of catastrophes in association with natural phenomena of extraordinary magnitude. Residents should take all measures possible to protect themselves in the event that an Emergency Warning is issued.
Overview
JMA issues various warnings to alert people to possible catastrophes caused by extraordinary natural phenomena such as heavy rain, earthquakes, tsunami and storm surges. In addition to such warnings, advisories and other bulletins, JMA started issuing Emergency Warnings to alert people to the significant likelihood of catastrophes if phenomena are expected to be of a scale that will far exceed the warning criteria.
Emergency Warnings are intended for extraordinary phenomena such as the major tsunami caused by the 2011 Great East Japan Earthquake by which 18,000 people were killed or left missing, the 1959 storm surge in Ise Bay caused by Typhoon Vera, by which more than 5,000 people were killed or left missing, and the 2011 heavy rain caused by Typhoon Talas, by which around 100 people were killed or left missing.
The issuance of an Emergency Warning for an area indicates a level of exceptional risk of a magnitude observed only once every few decades. Residents should pay attention to their surroundings and relevant information such as municipal evacuation advisories and evacuations, and should take all steps necessary to protect life.
Relationship between Emergency Warnings and Warnings/Advisories
Emergency Warnings are issued if a phenomenon is expected to be of a scale that will far exceed the relevant warning criteria, such as the 2011 Great East Japan Earthquake and Typhoon Vera in 1959.
Emergency Warnings are intended for extraordinary phenomena expected to be of a scale that will far exceed the warning criteria. Warnings and Advisories continue to be issued in their current form even after the introduction of Emergency Warnings.
Residents should not let down their guard even if no Emergency Warning is currently in effect in the area. It is important to take action early wherever possible with reference to relevant weather bulletins, Advisories and Warnings, which are updated in response to the latest phenomenon observations or predictions.
The criteria for Emergency Warning issuance were determined in response to the views of local governments in charge of disaster management for their own areas. In regard to earthquakes, tsunami and volcanic eruptions, JMA maintains the system of warning nomenclature used until 29 August, 2013 but issues messages in the new classification of Emergency Warnings for high-risk conditions. These include Major Tsunami Warnings, Volcanic Warnings (Level 4 or more) and Earthquake Early Warnings (incorporating prediction of tremors measuring 6-lower or more on JMA's seismic intensity scale).
References
External links
Emergency Warning System - JMA
Emergency Warning System (A New Service to Protect Life) - JMA
Emergency Warning System Starting Shortly - JMA
Warning systems
Japan Meteorological Agency
Weather warnings and advisories | Emergency Warning System | Technology,Engineering | 571 |
27,557,852 | https://en.wikipedia.org/wiki/Stable%20nucleic%20acid%20lipid%20particle | Stable nucleic acid lipid particles (SNALPs) are microscopic particles approximately 120 nanometers in diameter, smaller than the wavelengths of visible light. They have been used to deliver siRNAs therapeutically to mammals in vivo. In SNALPs, the siRNA is surrounded by a lipid bilayer containing a mixture of cationic and fusogenic lipids, coated with diffusible polyethylene glycol.
Introduction
RNA interference (RNAi) is a process that occurs naturally within the cytoplasm, inhibiting gene expression at specific sequences. Regulation of gene expression through RNAi is possible by introducing small interfering RNAs (siRNAs), which effectively silence expression of a targeted gene. RNAi activates the RNA-induced silencing complex (RISC), which contains siRNA derived from cleaved dsRNA. The siRNA guides the RISC complex to a specific sequence on the mRNA, which is then cleaved by RISC, silencing the corresponding gene.
However, without modifications to the RNA backbone or inclusion of inverted bases at either end, siRNA instability in the plasma makes it extremely difficult to apply this technique in vivo. Pattern recognition receptors (PRRs), which can be grouped as endocytic PRRs or signaling PRRs, are expressed in all cells of the innate immune system. Signaling PRRs, in particular, include Toll-like receptors (TLRs) and are involved primarily with identifying pathogen-associated molecular patterns (PAMPs). For example, TLRs can recognize specific regions conserved in various pathogens, and this recognition stimulates an immune response with potentially devastating effects on the organism. In particular, TLR3 recognizes both dsRNA characteristic of viral replication and siRNA, which is also double-stranded. In addition to this instability, another limitation of siRNA therapy concerns the inability to target a tissue with any specificity.
SNALPs, though, may provide the stability and specificity required for this mode of RNAi therapy to be effective. Consisting of a lipid bilayer, SNALPs are able to provide stability to siRNAs by protecting them from nucleases within the plasma that would degrade them. In addition, delivery of siRNAs is subject to endosomal trafficking, potentially exposing them to TLR3 and TLR7, and can lead to activation of interferons and proinflammatory cytokines. However, SNALPs allow siRNA uptake into the endosome without activating Toll-like receptors and consequently stimulating an impeding immune response, thus enabling siRNA escape from the endosome.
Development of SNALP delivery of siRNA
Downregulation of gene expression via siRNA has been an important research tool in in vitro studies. Susceptibility of siRNAs to nuclease degradation, though, makes use of them in vivo problematic. In 2005, researchers working with hepatitis B virus (HBV) in rodents determined that certain modifications of the siRNA prevented degradation by nucleases within the plasma and led to increased gene silencing compared to unmodified siRNA. Modifications to the sense and antisense strands were made differentially. With respect to both sense and antisense strands, 2'-OH was substituted with 2'-fluoro at all pyrimidine positions. In addition, sense strands were modified at all purine positions with deoxyribose, and antisense strands were modified with 2'-O-methyl at the same positions. The 5' and 3' ends of the sense strand were capped with abasic inverted repeats, while a phosphorothioate linkage was incorporated at the 3' end of the antisense strand.
Although this research demonstrated a potential RNAi therapy using modified siRNA, the 90% reduction in HBV DNA in rodents resulted from a 30 mg/kg dosage with frequent administration. Because this is not a viable dosing regime, this same group looked at the effects of encapsulating the siRNA in a PEGylated lipid bilayer, or SNALP. Specifically, the lipid bilayer facilitates uptake into the cell and subsequent release from the endosome, the PEGylated outer layer providing stability during formulation due to the resulting hydrophilicity of the exterior. According to this 2005 study, researchers obtained 90% reduction in HBV DNA with a 3 mg/kg/day dose of siRNA for three days, a dose substantially lower than the earlier study. In addition, in contrast to unmodified or modified and non-encapsulated siRNA, administration of SNALP-delivered siRNA resulted in no detectable levels of interferons, such as IFN-a, or inflammatory cytokines associated with immunostimulation. Even so, researchers acknowledged that more work was necessary in order to reach a feasible dose and dosing regime.
In 2006, researchers working on silencing of apolipoprotein B (ApoB) in non-human primates achieved 90% silencing with a single dose of 2.5 mg/kg of SNALP-delivered APOB-specific siRNA. ApoB is a protein involved with the assembly and secretion of very-low-density lipoprotein (VLDL) and low-density lipoprotein (LDL), and it is expressed primarily in the liver and jejunum. Both VLDL and LDL are important in cholesterol transport and its metabolism. Not only was this degree of silencing observed very quickly, in about 24 hours post-administration, but the silencing effects were maintained for over 22 days after only a single dose. Researchers also tested a 1 mg/kg single dose, obtaining 68% silencing of the target gene, indicating dose-dependent silencing. This dose dependence was evident not only in the degree of silencing but also in its duration, with expression of the target gene recovering 72 hours post-administration.
Although SNALPs having a 100 nm diameter have been used effectively to target specific genes for silencing, there are a variety of systemic barriers that relate specifically to size. For example, diffusion into solid tumors is impeded by large SNALPs and, similarly, inflamed cells having enhanced permeation and retention make it difficult for large SNALPs to enter. In addition, reticuloendothelial elimination, blood–brain barrier size-selectivity and limitations of capillary fenestrae all necessitate a smaller SNALP in order to effectively deliver target-specific siRNA. In 2012, scientists in Germany developed what they termed "mono-NALPs" using a fairly simple solvent exchange method involving progressive dilution of a 50% isopropanol solution. What results is a very stable delivery system similar to traditional SNALPs, but one having only a diameter of 30 nm. The mono-NALPs developed here, however, are inactive, but can become active carriers by implementing specific targeting and release mechanisms used by similar delivery systems.
Applications
Zaire Ebola virus (ZEBOV)
In May 2010, an application of SNALPs to the Ebola Zaire virus made headlines, as the preparation was able to cure rhesus macaques when administered shortly after their exposure to a lethal dose of the virus, which can be up to 90% lethal to humans in sporadic outbreaks in Africa. The treatment used for rhesus macaques consisted of three siRNAs (staggered duplexes of RNA) targeting three viral genes. The SNALPs (around 81 nm in size here) were formulated by spontaneous vesiculation from a mixture of cholesterol, dipalmitoyl phosphatidylcholine, 3-N-[(ω-methoxy poly(ethylene glycol)2000)carbamoyl]-1,2-dimyristyloxypropylamine, and cationic 1,2-dilinoleyloxy-3-N,N-dimethylaminopropane.
In addition to the rhesus macaque application, SNALPs have also been proven to protect guinea pigs (Cavia porcellus) from viremia and death when administered shortly after exposure to ZEBOV. A SNALP delivery system carrying polymerase (L) gene-specific siRNAs was directed against the four genes associated with the viral genomic RNA in the ribonucleoprotein complex found within EBOV particles (three of which match the application above): NP, VP30, VP35, and the L protein. The SNALPs ranged from 71 to 84 nm in size and were composed of synthetic cholesterol, the phospholipid DSPC, the PEG lipid PEGC-DMA, and the cationic lipid DLinDMA at a molar ratio of 48:20:2:30. The results confirmed complete protection against viremia and death in guinea pigs when a SNALP-siRNA delivery system was administered after diagnosis of the Ebola virus, thus proving this technology to be an effective treatment. Future studies will focus mainly upon evaluating the effects of siRNA ‘cocktails’ on EBOV genes to increase antiviral effects.
Hepatocellular Carcinoma
In 2010, researchers developed an applicable targeting therapy for hepatocellular carcinoma (HCC) in humans. CSN5, the fifth subunit of the COP9 signalosome complex found in early HCC, was identified and used as a therapeutic target for siRNA induction. Systemic delivery of modified CSN5 siRNA encapsulated in SNALPs significantly inhibited hepatic tumor growth in the Huh7-luc+ orthotopic xenograft model of human liver cancer. SiRNA-mediated CSN5 knockdown was also shown to inhibit cell-cycle progression and increase the rate of apoptosis in HCC cells in vitro. Not only do these results demonstrate the role of CSN5 in liver cancer progression, they also indicate that CSN5 has an essential role in HCC pathogenesis. In conclusion, SNALPs have been proven to significantly reduce hepatocellular carcinoma tumor growth in human Huh7-luc+ cells through therapeutic silencing.
Tumors
In 2009, researchers developed siRNAs capable of targeting both polo-like kinase 1 (PLK1) and kinesin spindle protein (KSP). Both proteins are important to the cell cycle of tumor cells, with PLK1 involved in the phosphorylation of a variety of proteins and KSP integral to chromosome segregation during mitosis. Specifically, bipolar mitotic spindles are unable to form when KSP is inhibited, leading to arrest of the cell cycle and, eventually, apoptosis. Likewise, inhibition of PLK1 facilitates mitotic arrest and cell apoptosis. According to the study, a 2 mg/kg dose of PLK1-specific siRNA administered for 3 weeks to mice implanted with tumors resulted in increased survival times and obvious reduction of tumors. In fact, the median survival time of treated mice was 51 days as opposed to 32 days for the controls. Further, only 2 of the 6 mice treated had noticeable tumors around the implantation site. Even so, GAPDH, a tumor-derived signal, was present at low levels, indicating significant suppression of tumor growth but not complete elimination. Still, the results suggested minimal toxicity and no significant dysfunction of the bone marrow. Animals treated with KSP-specific siRNA, too, exhibited increased survival times of 28 days compared to 20 days in the controls.
References
Molecular biology
RNA interference | Stable nucleic acid lipid particle | Chemistry,Biology | 2,420 |
8,396,078 | https://en.wikipedia.org/wiki/Z-channel%20%28information%20theory%29 | In coding theory and information theory, a Z-channel or binary asymmetric channel is a communications channel used to model the behaviour of some data storage systems.
Definition
A Z-channel is a channel with binary input and binary output, where each 0 bit is transmitted correctly, but each 1 bit has probability p of being transmitted incorrectly as a 0, and probability 1–p of being transmitted correctly as a 1. In other words, if X and Y are the random variables describing the probability distributions of the input and the output of the channel, respectively, then the crossovers of the channel are characterized by the conditional probabilities Pr[Y = 0 | X = 0] = 1, Pr[Y = 1 | X = 0] = 0, Pr[Y = 0 | X = 1] = p, and Pr[Y = 1 | X = 1] = 1 − p.
Capacity
The channel capacity of the Z-channel with the crossover 1 → 0 probability p is
C = log2(1 + (1 − p) p^(p/(1 − p))).
This capacity is obtained when the input variable X has a Bernoulli distribution assigning probability 1 − α to the value 0 and α to the value 1, where
α = 1 / ((1 − p)(1 + 2^(H(p)/(1 − p))))
and H(p) = −p log2(p) − (1 − p) log2(1 − p) is the binary entropy function.
For small p, the capacity is approximated by
C ≈ 1 − H(p)/2,
as compared to the capacity 1 − H(p) of the binary symmetric channel with crossover probability p.
{| class="toccolours collapsible collapsed" width="80%" style="text-align:left"
!Calculation
|-
|
To find the maximum we differentiate
And we see the maximum is attained for
yielding the following value of as a function of p
|}
For any p > 0, the optimal α is less than 1/2 (i.e. more 0s should be transmitted than 1s), because transmitting a 1 introduces noise. As p → 1, the limiting value of α is 1/e.
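The closed form can be checked against a direct numerical maximization of the mutual information; in the sketch below (NumPy assumed) α denotes the probability of transmitting a 1:

```python
import numpy as np

def H(q):
    # Binary entropy in bits, with H(0) = H(1) = 0.
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

p = 0.3                                      # crossover probability 1 -> 0
closed_form = np.log2(1 + (1 - p) * p ** (p / (1 - p)))

alphas = np.linspace(0.0, 1.0, 200001)       # candidate values of P(X = 1)
mutual_info = [H(a * (1 - p)) - a * H(p) for a in alphas]
print(closed_form, max(mutual_info))         # the two values agree closely
```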
Bounds on the size of an asymmetric-error-correcting code
Define the following distance function on the words of length n transmitted via a Z-channel
Define the sphere of radius t around a word x of length n as the set of all the words at distance t or less from x.
A code of length n is said to be t-asymmetric-error-correcting if any two distinct codewords are at distance greater than t from each other. Denote by M(n, t) the maximum number of codewords in a t-asymmetric-error-correcting code of length n.
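A minimal sketch of these notions, assuming the asymmetric distance commonly used for Z-channel codes, namely d(x, y) = max(N(x, y), N(y, x)) with N(x, y) counting positions where x has a 1 and y has a 0 (this definition is an assumption, since the formula itself is not reproduced above):

```python
def N(x, y):
    # Number of positions in which x has a 1 and y has a 0.
    return sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 0)

def asym_distance(x, y):
    # Assumed asymmetric distance for codes used on the Z-channel.
    return max(N(x, y), N(y, x))

x = (1, 0, 1, 1, 0)
y = (1, 1, 0, 0, 0)
print(asym_distance(x, y))   # N(x, y) = 2 and N(y, x) = 1, so the distance is 2
```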
The Varshamov bound.
For n≥1 and t≥1,
The constant-weight code bound.
For n > 2t ≥ 2, let the sequence B0, B1, ..., Bn-2t-1 be defined as
for .
Then
Notes
References
Coding theory
Information theory
Inequalities | Z-channel (information theory) | Mathematics,Technology,Engineering | 511 |
227,086 | https://en.wikipedia.org/wiki/Waverider | A waverider is a hypersonic aircraft design that improves its supersonic lift-to-drag ratio by using the shock waves being generated by its own flight as a lifting surface, a phenomenon known as compression lift.
The waverider remains a well-studied design for high-speed aircraft in the Mach 5 and higher hypersonic regime, although no such design has yet entered production. The Boeing X-51 scramjet demonstration aircraft was tested from 2010 to 2013. In its final test flight, it reached a speed of Mach 5.1.
History
Early work
The waverider design concept was first developed by Terence Nonweiler of the Queen's University of Belfast, and first described in print in 1951 as a re-entry vehicle. It consisted of a delta-wing platform with a low wing loading to provide considerable surface area to dump the heat of re-entry. At the time, Nonweiler was forced to use a greatly simplified 2D model of airflow around the aircraft, which he realized would not be accurate due to spanwise flow across the wing. However, he also noticed that the spanwise flow would be stopped by the shockwave being generated by the aircraft, and that if the wing was positioned to deliberately approach the shock, the spanwise flow would be trapped under wing, increasing pressure, and thus increasing lift.
In the 1950s, the British started a space program based around the Blue Streak missile, which was, at some point, to include a crewed vehicle. Armstrong-Whitworth were contracted to develop the re-entry vehicle, and unlike the U.S. space program, they decided to stick with a winged vehicle instead of a ballistic capsule. Between 1957 and 1959, they contracted Nonweiler to develop his concepts further. This work produced a pyramid-shaped design with a flat underside and short wings. Heat was conducted through the wings to the upper cool surfaces, where it was dumped into the turbulent air on the top of the wing. In 1960, work on the Blue Streak was canceled as the missile was seen as being obsolete before it could have entered service. Work then moved to the Royal Aircraft Establishment (RAE), where it continued as a research program into high-speed (Mach 4 to 7) civilian airliners.
This work was discovered by engineers at North American Aviation during the early design studies of what would lead to the XB-70 bomber. They re-designed the original "classic" delta wing to incorporate drooping wing tips in order to trap the shock waves mechanically, rather than using a shock cone generated from the front of the aircraft. This mechanism also had two other beneficial effects; it reduced the amount of horizontal lifting surface at the rear of the aircraft, which helped offset a nose-down trim that occurs at high speeds, and it added more vertical surface which helped improve the directional stability, which decreased at high speed.
Caret wing
Nonweiler's original design used the shock wave generated by the aircraft as a way to control spanwise flow, and thereby increase the amount of air trapped under the wing in the same way as a wing fence. While working on these concepts, he noticed that it was possible to shape the wing in such a way that the shock wave generated off its leading edge would form a horizontal sheet under the craft. In this case, the airflow would not only be trapped horizontally, spanwise, but vertically as well. The only area the air above the shock wave could escape would be out the back of the sheet where the fuselage ended. Since the air was trapped between this sheet and the fuselage, a large volume of air would be trapped, much more than the more basic approach he first developed. Furthermore, since the shock surface was held at a distance from the craft, shock heating was limited to the leading edges of the wings, lowering the thermal loads on the fuselage.
In 1962 Nonweiler moved to Glasgow University to become Professor of Aerodynamics and Fluid Mechanics. That year his "Delta Wings of Shapes Amenable to Exact Shock-Wave Theory" was published by the Journal of the Royal Aeronautical Society, and earned him that society's Gold Medal. A craft generated using this model looks like a delta wing that has been broken down the center and the two sides folded downward. From the rear it looks like an upside-down V, or alternately, the "caret", ^, and such designs are known as "caret wings". Two to three years later the concept briefly came into the public eye, due to the airliner work at the RAE that led to the prospect of reaching Australia in 90 minutes. Newspaper articles led to an appearance on Scottish Television.
Hawker Siddeley examined the caret wing waverider in the later 1960s as a part of a three-stage lunar rocket design. The first stage was built on an expanded Blue Steel, the second a waverider, and the third a nuclear-powered crewed stage. This work was generalized in 1971 to produce a two-staged reusable spacecraft. The long first stage was designed as a classical waverider, with air-breathing propulsion for return to the launch site. The upper stage was designed as a lifting body, and would have carried an 8000-pound (3.6 t) payload to low Earth orbit.
Cone flow waveriders
Nonweiler's work was based on studies of planar 2D shocks due to the difficulty understanding and predicting real-world shock patterns around 3D bodies. As the study of hypersonic flows improved, researchers were able to study waverider designs that used different shockwave shapes, the simplest being the conical shock generated by a cone. In these cases, a waverider is designed to keep the rounded shockwave attached to its wings, not a flat sheet, which increases the volume of air trapped under the surface, and thereby increases lift.
Unlike the caret wing, the cone flow designs smoothly curve their wings, from near horizontal in the center, to highly drooped where they meet the shock. Like the caret wing, they have to be designed to operate at a specific speed to properly attach the shock wave to the wing's leading edge, but unlike them the entire body shape can be varied dramatically at the different design speeds, and sometimes have wingtips that curve upward to attach to the shockwave.
Further development of the conical sections, adding canopies and fuselage areas, led to the "osculating cones waverider", which develops several conical shock waves at different points on the body, blending them to produce a single shaped shock. The expansion to a wider range of compression surface flows allowed the design of waveriders with control of volume, upper surface shape, engine integration and centre of pressure position. Performance improvements and off-design analysis continued until 1970.
During this period at least one waverider was tested at the Woomera Rocket Range, mounted on the nose of an air-launched Blue Steel missile, and a number of airframes were tested in the wind tunnel at NASA's Ames Research Center. However, during the 1970s most work in hypersonics disappeared, and the waverider along with it.
Viscous optimized waveriders
One of the many differences between supersonic and hypersonic flight concerns the interaction of the boundary layer and the shock waves generated from the nose of the aircraft. Normally the boundary layer is quite thin compared to the streamline of airflow over the wing, and can be considered separately from other aerodynamic effects. However, as the speed increases and the shock wave increasingly approaches the sides of the craft, there comes a point where the two start to interact and the flowfield becomes very complex. Long before that point, the boundary layer starts to interact with the air trapped between the shock wave and the fuselage, the air that is being used for lift on a waverider.
Calculating the effects of these interactions was beyond the abilities of aerodynamics until the introduction of useful computational fluid dynamics starting in the 1980s. In 1981, Maurice Rasmussen at the University of Oklahoma started a waverider renaissance by publishing a paper on a new 3D underside shape using these techniques. These shapes have superior lifting performance and less drag. Since then, whole families of cone-derived waveriders have been designed using more and more complex conic shocks, based on more complex software. This work eventually led to a conference in 1989, the First International Hypersonic Waverider Conference, held at the University of Maryland.
These newest shapes, the "viscous optimized waveriders", look similar to conical designs as long as the angle of the shock wave on the nose is beyond some critical angle, about 14 degrees for a Mach 6 design for instance. The angle of the shock can be controlled by widening out the nose into a curved plate of specific radius, and reducing the radius produces a smaller shock cone angle. Vehicle design starts by selecting a given angle and then developing the body shape that traps that angle, then repeating this process for different angles. For any given speed, a single shape will generate the best results.
Design
During re-entry, hypersonic vehicles generate lift only from the underside of the fuselage. The underside, which is inclined to the flow at a high angle of attack, creates lift in reaction to the vehicle wedging the airflow downwards. The amount of lift is not particularly high, compared to a traditional wing, but more than enough to maneuver given the amount of distance the vehicle covers.
Most re-entry vehicles have been based on the blunt-nose reentry design pioneered by Theodore von Kármán. He demonstrated that a shock wave is forced to "detach" from a curved surface, forced out into a larger configuration that requires considerable energy to form. Energy expended in forming this shock wave is no longer available as heat, so this shaping can dramatically reduce the heat load on the spacecraft. Such a design has been the basis for almost every re-entry vehicle since, found on the blunt noses of the early ICBM warheads, the bottoms of the various NASA capsules, and the large nose of the Space Shuttle.
The problem with the blunt-nose system is that the resulting design creates very little lift, meaning the vehicle has problems maneuvering during re-entry. If the spacecraft is meant to be able to return to its point of launch "on command", then some sort of maneuvering will be required to counteract the fact that the Earth is turning under the spacecraft as it flies. After a single low Earth orbit, the launching point will be over to the east of the spacecraft by the time it has completed one full orbit. A considerable amount of research was dedicated to combining the blunt-nose system with wings, leading to the development of the lifting body designs in the U.S.
It was while working on one such design that Nonweiler developed the waverider. He noticed that the detachment of the shock wave over the blunt leading edges of the wings of the Armstrong-Whitworth design would allow the air on the bottom of the craft to flow spanwise and escape to the upper part of the wing through the gap between the leading edge and the detached shock wave. This loss of airflow reduced (by up to a quarter) the lift being generated by the waverider, which led to studies on how to avoid this problem and keep the flow trapped under the wing.
Nonweiler's resulting design is a delta-wing with some amount of negative dihedral — the wings are bent down from the fuselage towards the tips. When viewed from the front, the wing resembles a caret symbol (^) in cross section, and these designs are often referred to as carets. The more modern 3D version typically looks like a rounded letter 'M'. Theoretically, a star-shaped waverider with a frontal cross-section of a "+" or "×" could reduce drag by another 20%. The disadvantage of this design is that it has more area in contact with the shock wave and therefore has more pronounced heat dissipation problems.
Waveriders generally have sharp noses and sharp leading edges on their wings. The underside shock-surface remains attached to this. Air flowing in through the shock surface is trapped between the shock and the fuselage, and can only escape at the rear of the fuselage. With sharp edges, all the lift is retained.
Even though sharp edges get much hotter than rounded ones at the same air density, the improved lift means that waveriders can glide on re-entry at much higher altitudes where the air density is lower. A list ranking various space vehicles in order of heating applied to the airframe would have capsules at the top (re-entering quickly with very high heating loads), waveriders at the bottom (extremely long gliding profiles at high altitude), and the Space Shuttle somewhere in the middle.
Simple waveriders have substantial design problems. First, the obvious designs only work at a particular Mach number, and the amount of lift captured will change dramatically as the vehicle changes speed. Another problem is that the waverider depends on radiative cooling, possible as long as the vehicle spends most of its time at very high altitudes. However these altitudes also demand a very large wing to generate the needed lift in the thin air, and that same wing can become rather unwieldy at lower altitudes and speeds.
Because of these problems, waveriders have not found favor with practical aerodynamic designers, despite the fact that they might make long-distance hypersonic vehicles efficient enough to carry air freight.
Some researchers controversially claim that there are designs that overcome these problems. One candidate for a multi-speed waverider is a "caret wing", operated at different angles of attack. A caret wing is a delta wing with longitudinal conical or triangular slots or strakes. It strongly resembles a paper airplane or rogallo wing. The correct angle of attack would become increasingly precise at higher Mach numbers, but this is a control problem that is theoretically solvable. The wing is said to perform even better if it can be constructed of tight mesh, because that reduces its drag, while maintaining lift. Such wings are said to have the unusual attribute of operating at a wide range of Mach numbers in different fluids with a wide range of Reynolds numbers.
The temperature problem can be solved with some combination of a transpiring surface, exotic materials, and possibly heat-pipes. In a transpiring surface, small amounts of a coolant such as water are pumped through small holes in the aircraft's skin (see transpiration and perspiration). This design works for Mach 25 spacecraft re-entry shields, and therefore should work for any aircraft that can carry the weight of the coolant. Exotic materials such as carbon-carbon composite do not conduct heat but endure it, but they tend to be brittle. Heatpipes are not widely used at present. Like a conventional heat exchanger, they conduct heat better than most solid materials, but like a thermosiphon are passively pumped. The Boeing X-51A deals with external heating through the use of a tungsten nosecone and space shuttle-style heat shield tiles on its belly. Internal (engine) heating is absorbed by using the JP-7 fuel as a coolant prior to combustion. Other high temperature materials, referred to as SHARP materials (typically zirconium diboride and hafnium diboride) have been used on steering vanes for ICBM reentry vehicles since the 1970s, and are proposed for use on hypersonic vehicles. They are said to permit Mach 11 flight at altitudes and Mach 7 flight at sea level. These materials are more structurally rugged than the Reinforced Carbon Composite (RCC) used on the space shuttle nose and leading edges, have higher radiative and temperature tolerance properties, and do not suffer from oxidation issues that RCC needs to be protected against with coatings.
Surface material
A surface material for waverider and hypersonic (Mach 5 – 10) vehicles developed by scientists at the China Academy of Aerospace Aerodynamics (CAAA) in Beijing was tested during 2023.
An alternative developed by RTX Corporation uses a perspiring membrane, developed under work supported by the United States Air Force under Contract No. FA8650-20-C-7001.
See also
Index of aviation articles
References
External links
Hypersonic Waveriders from Aerospace.org
ASTRA Waverider from gbnet.net
Accurate Automation Corporation, a company that has built several model waveriders, including the LoFLYTE and the NASA X-43
Aerodynamics
Aerospace engineering
Aircraft configurations
Hypersonic aircraft | Waverider | Chemistry,Engineering | 3,362 |
61,475,383 | https://en.wikipedia.org/wiki/BEAMA | BEAMA, formerly the British Electrotechnical and Allied Manufacturers' Association, is a trade association for energy infrastructure companies in the United Kingdom.
History
The organisation was established in 1902 as the National Electrical Manufacturers' Association before changing its name to the British Electrotechnical and Allied Manufacturers' Association in 1911.
The first director was Daniel Nicol Dunlop, also chairman of the first World Power Conference (now the World Energy Council) in 1924.
A collection of the organisation's records for the period 1905 to 1986 is held in the Modern Records Centre of the University of Warwick covering the subjects of trolley buses, electric welding, electric railroads, the electric industries, and the training of electrical engineers.
Activities
BEAMA lobbies on behalf of energy infrastructure companies in the United Kingdom.
Selected publications
The Electrical Industry of Great Britain (1929)
See also
Hugh Quigley
References
External links
1902 establishments in the United Kingdom
Trade associations based in the United Kingdom
Business organisations based in London
Electrical engineering organizations | BEAMA | Engineering | 195 |
1,641,187 | https://en.wikipedia.org/wiki/Joseph%20Achille%20Le%20Bel | Joseph Achille Le Bel (21 January 1847, Pechelbronn – 6 August 1930, Paris, France) was a French chemist. He is best known for his work in stereochemistry. Le Bel was educated at the École Polytechnique in Paris. In 1874 he announced his theory outlining the relationship between molecular structure and optical activity. This discovery laid the foundation of the science of stereochemistry, which deals with the spatial arrangement of atoms in molecules. The same hypothesis was put forward in the same year by the Dutch physical chemist Jacobus Henricus van 't Hoff and is now known as the Le Bel–van 't Hoff rule. Le Bel wrote Cosmologie Rationelle (Rational Cosmology) in 1929.
Works
See also
Hexamethylbenzene
Optical rotation
References
Royal Society of Chemistry obituary
1847 births
1930 deaths
19th-century French chemists
Members of the French Academy of Sciences
Foreign members of the Royal Society
Stereochemists
People from Bas-Rhin | Joseph Achille Le Bel | Chemistry | 205 |
25,159,988 | https://en.wikipedia.org/wiki/L-Deoxyribose | {{DISPLAYTITLE:L-Deoxyribose}}
L-Deoxyribose is an organic compound with formula C5H10O4. It is a synthetic monosaccharide, a stereoisomer (mirror image) of the natural compound D-deoxyribose.
L-Deoxyribose can be synthesized from L-galactose. It has been used in chemical research, e.g. in the synthesis of mirror-image DNA.
References
Deoxy sugars
Aldopentoses | L-Deoxyribose | Chemistry | 111 |
428,053 | https://en.wikipedia.org/wiki/Herbert%20Boyer | Herbert Wayne "Herb" Boyer (born July 10, 1936) is an American biotechnologist, researcher and entrepreneur in biotechnology. Along with Stanley N. Cohen and Paul Berg, he discovered recombinant DNA, a method to coax bacteria into producing foreign proteins, which aided in jump-starting the field of genetic engineering. By 1969, he had performed studies on a couple of restriction enzymes of E. coli with especially useful properties.
He is recipient of the 1990 National Medal of Science, co-recipient of the 1996 Lemelson–MIT Prize, and a co-founder of Genentech. He was professor at the University of California, San Francisco (UCSF) and later served as vice president of Genentech from 1976 until his retirement in 1991.
Early life and education
Herbert Boyer was born in 1936 in Derry, Pennsylvania. He received his bachelor's degree in biology and chemistry from Saint Vincent College in Latrobe, Pennsylvania, in 1958. He married his wife Grace the following year. He received his PhD at the University of Pittsburgh in 1963 and participated as an activist in the civil rights movement.
Career
Boyer spent three years in postdoctoral work at Yale University in the laboratories of Professors Edward Adelberg and Bruce Carlton, and then became an assistant professor at the University of California, San Francisco and a professor of biochemistry from 1976 to 1991, where he discovered that genes from bacteria could be combined with genes from eukaryotes. In 1977, Boyer's laboratory and collaborators Keiichi Itakura and Arthur Riggs at City of Hope National Medical Center described the first-ever synthesis and expression of a peptide-coding gene. In August 1978, he produced synthetic insulin using his new transgenic genetically modified bacteria, followed in 1979 by a growth hormone.
In 1976, Boyer founded Genentech with venture capitalist Robert A. Swanson. Genentech's approach to the first synthesis of insulin won out over Walter Gilbert's approach at Biogen which used whole genes from natural sources. Boyer built his gene from its individual nucleotides.
In 1990, Boyer and his wife Grace gave the single largest donation ($10,000,000) bestowed on the Yale School of Medicine by an individual. The Boyer Center for Molecular Medicine was named after the Boyer family in 1991.
At the Class of 2007 Commencement, St. Vincent College announced that they had renamed the School of Natural Science, Mathematics, and Computing the Herbert W. Boyer School.
Among his professional activities, Boyer is on the board of directors of Scripps Research.
Awards
1980 the Albert Lasker Award for Basic Medical Research
1981 the Golden Plate Award of the American Academy of Achievement
1982 the Industrial Research Institute (IRI) Achievement Award
1989 the National Medal of Technology
1990 the National Medal of Science from President George H. W. Bush
1993 Helmut Horten Research Award
2000 Biotechnology Heritage Award with Robert A. Swanson, from the Biotechnology Industry Organization (BIO) and the Chemical Heritage Foundation
2004 Albany Medical Center Prize (shared with Stanley N. Cohen)
2004 Shaw Prize in Life Science and Medicine
2005 Winthrop-Sears Medal
2007 Perkin Medal
2009 CSHL Double Helix Medal Honoree
References
They Made America by Harold Evans (Little Brown, 2004) and in the subsequent WGBH television series.
1936 births
Living people
20th-century American biologists
American biotechnologists
American civil rights activists
American company founders
Businesspeople in the pharmaceutical industry
Genentech people
History of biotechnology
History of genetics
Lemelson–MIT Prize
Members of the United States National Academy of Sciences
National Medal of Science laureates
National Medal of Technology recipients
People from Westmoreland County, Pennsylvania
Recipients of the Albert Lasker Award for Basic Medical Research
Saint Vincent College alumni
University of California, San Francisco faculty
University of Pittsburgh alumni | Herbert Boyer | Biology | 763 |
60,458,468 | https://en.wikipedia.org/wiki/Instapoetry | Instapoetry is a style of written poetry that emerged after the advent of social media, especially on Instagram. The term has been used to describe poems written specifically for being shared online, most commonly on Instagram, but also other platforms including Twitter, Tumblr, and TikTok.
The style usually consists of short, direct lines in aesthetically pleasing fonts that are sometimes accompanied by an image or drawing, often without rhyme schemes or meter, and dealing with commonplace themes. Literary critics, poets, and writers have contended with Instapoetry's focus on brevity and plainness compared to traditional poetry, criticizing it for reproducing rather than subverting normative ideas on social media platforms that favor popularity and accessibility over craft and depth.
History
Instapoetry developed as a result of young, amateur poets sharing their output to expand their readership, who began using social media as their preferred method of distribution rather than traditional publishing methods. The term "instapoetry" was created by other writers trying to define and understand the new extension of instant poetry shared via social media, most prominently Instagram.
In its most basic form, Instapoetry usually consists of bite-sized verses that consider political and social subjects such as immigration, domestic violence, sexual assault, love, culture, feminism, gun violence, war, racism, LGBTQ rights, and other social justice topics. All of these elements are usually made to fit social media feeds that are easily accessible through applications on smartphones.
Scholarship
Despite the diversity of poetry on Instagram, the Brazilian linguist Bruna Osaki Fazano found that shared "aspects of the compositional form, theme and style" mean that it can be understood as a specific genre. Writing in Poetics Today, JuEunhae Knox combined quantitative and qualitative analysis to show that Instapoetry is a cohesive genre, in part because "the sheer volume and rapidity of content production in turn encourages posts that are not only visually appealing but also immediately recognizable as Instapoems".
Instapoetry has been seen as a practice that serves as a form of self-staging for poets and "[crafts] authenticity". One critic describes the work of a Norwegian poet as appearing to offer a "simple, almost direct access to the inner self". Vassenden writes that poems such as Rupi Kaur's "if you are not enough for yourself / you will never be enough / for someone else" are "authentic" to such an extent that they are not literary.
Scholars have also studied the work of specific Instapoets, such as Rupi Kaur, R.M. Drake, Aja Monet, Yrsa Daley-Ward, Nayyirah Waheed, Atticus, and Nikita Gill, among others.
Overview
Academics have shown appreciation for the way in which Instapoetry has stimulated interest in poetry in general. Meanwhile, it has been argued that since Instapoets avoid critical evaluations, academics, and the publishing industry, Instapoets qualify more as online celebrities than literary figures. Additionally, although Instapoetry has been characterized as anti-establishment, Alyson Miller noted traditional or even conservative views in the online posts of Instapoets in contrast with the activist views the style is associated with, and that there is a contradiction between "the extra-textual commentary surrounding Instapoetry, particularly by way of interviews and artistic statements, and the content of works which repeatedly reinscribe conservative, patriarchal, and heteronormative worldviews". Thom Young, a poet and high school English teacher, created a parody Instagram page as a way to mock Instapoets and their work, describing it as "fidget-spinner poetry. Like they're just scrolling on their devices, to read something instantly, while the libraries are empty. I think people today don't want to read anything that causes a whole lot of critical thinking." According to Johnathan Ford's piece in the Financial Times, as Instagram's algorithms have limited prospective Instapoets' reach-per-post, it has pushed them to pay to promote their material. Popular Instagram accounts will be promoted to the front of users' feeds, with the app's algorithm, in the view of critics, favoring the spread of bland, inauthentic, or clichéd content while preventing disciplined poetry from reaching new audiences.
Writers described as Instapoets
Rupi Kaur
Atticus
Amanda Lovelace
Tyler Knott Gregson
Najwa Zebian
Lang Leav
Nikita Gill
Winnie Nantongo
Upile Chisala
Donna Ashworth
See also
Hashtag activism
Virtue signalling
References
External links
Meet Rupi Kaur, Queen of the ‘Instapoets’, Rolling Stone (magazine) (December 2017)
Nida Mahmoed - Instaquotes: give peace a chance, Vogue México y Latinoamérica (July 2016)
Genres of poetry
Social media
Genres of electronic literature | Instapoetry | Technology | 1,038 |
11,901,730 | https://en.wikipedia.org/wiki/Gi%20alpha%20subunit | {{DISPLAYTITLE:Gi alpha subunit}}
Gi protein alpha subunit is a family of heterotrimeric G protein alpha subunits. This family is also commonly called the Gi/o (Gi/Go) family or Gi/o/z/t family to include closely related family members. Gi alpha subunits may be referred to as Gi alpha, Gαi, or Giα.
Family members
There are four distinct subtypes of alpha subunits in the Gi/o/z/t alpha subunit family that define four families of heterotrimeric G proteins:
Gi proteins: Gi1α, Gi2α, and Gi3α
Go protein: Goα (in mouse there is alternative splicing to generate Go1α and Go2α)
Gz protein: Gzα
Transducins (Gt proteins): Gt1α, Gt2α, Gt3α
Giα proteins
Gi1α
Gi1α is encoded by the gene GNAI1.
Gi2α
Gi2α is encoded by the gene GNAI2.
Gi3α
Gi3α is encoded by the gene GNAI3.
Goα protein
Go1α is encoded by the gene GNAO1.
Gzα protein
Gzα is encoded by the gene GNAZ.
Transducin proteins
Gt1α
Transducin/Gt1α is encoded by the gene GNAT1.
Gt2α
Transducin 2/Gt2α is encoded by the gene GNAT2.
Gt3α
Gustducin/Gt3α is encoded by the gene GNAT3.
Function
The general function of Gi/o/z/t is to activate intracellular signaling pathways in response to activation of cell surface G protein-coupled receptors (GPCRs). GPCRs function as part of a three-component system of receptor-transducer-effector. The transducer in this system is a heterotrimeric G protein, composed of three subunits: a Gα protein such as Giα, and a complex of two tightly linked proteins called Gβ and Gγ in a Gβγ complex. When not stimulated by a receptor, Gα is bound to GDP and to Gβγ to form the inactive G protein trimer. When the receptor binds an activating ligand outside the cell (such as a hormone or neurotransmitter), the activated receptor acts as a guanine nucleotide exchange factor to promote GDP release from and GTP binding to Gα, which drives dissociation of GTP-bound Gα from Gβγ. GTP-bound Gα and Gβγ are then freed to activate their respective downstream signaling enzymes.
Gi proteins primarily inhibit the cAMP dependent pathway by inhibiting adenylyl cyclase activity, decreasing the production of cAMP from ATP, which, in turn, results in decreased activity of cAMP-dependent protein kinase. Therefore, the ultimate effect of Gi is the inhibition of the cAMP-dependent protein kinase. The Gβγ liberated by activation of Gi and Go proteins is particularly able to activate downstream signaling to effectors such as G protein-coupled inwardly-rectifying potassium channels (GIRKs). Gi and Go proteins are substrates for pertussis toxin, produced by Bordetella pertussis, the infectious agent in whooping cough. Pertussis toxin is an ADP-ribosylase enzyme that adds an ADP-ribose moiety to a particular cysteine residue in Giα and Goα proteins, preventing their coupling to and activation by GPCRs, thus turning off Gi and Go cell signaling pathways.
Gz proteins also can link GPCRs to inhibition of adenylyl cyclase, but Gz is distinct from Gi/Go by being insensitive to inhibition by pertussis toxin.
Gt proteins function in sensory transduction. The Transducins Gt1 and Gt2 serve to transduce signals from G protein-coupled receptors that receive light during vision. Rhodopsin in dim light night vision in retinal rod cells couples to Gt1, and color photopsins in color vision in retinal cone cells couple to Gt2, respectively. Gt3/Gustducin subunits transduce signals in the sense of taste (gustation) in taste buds by coupling to G protein-coupled receptors activated by sweet or bitter substances.
Receptors
The following G protein-coupled receptors couple to Gi/o subunits:
5-HT1 and 5-HT5 serotonergic receptors
Acetylcholine M2 & M4 receptors
Adenosine A1 & A3 receptors
Adrenergic α2A, α2B, & α2C receptors
Apelin receptors
Calcium-sensing receptor
Cannabinoid receptors (CB1 and CB2)
Chemokine CXCR4 receptor
Dopamine D2, D3 and D4 receptors
GABAB receptor
Glutamate mGluR2, mGluR3, mGluR4, mGluR6, mGluR7, & mGluR8 receptors
Histamine H3 & H4 receptors
Melatonin MT1, MT2, & MT3 receptors
Hydroxycarboxylic acid receptors: HCA1, HCA2, & HCA3
Opioid δ, κ, μ, & nociceptin receptors
Prostaglandin EP1, EP3, FP, & TP receptors
Short chain fatty acid receptors: FFAR2 & FFAR3
Somatostatin sst1, sst2, sst3, sst4 & sst5 receptors
Trace amine-associated receptor 8
See also
Second messenger system
G protein-coupled receptor
Heterotrimeric G protein
Adenylyl cyclase
Protein kinase A
Gs alpha subunit
Gq alpha subunit
G12/G13 alpha subunits
Retina
Taste
References
External links
Peripheral membrane proteins | Gi alpha subunit | Chemistry | 1,219 |
9,632 | https://en.wikipedia.org/wiki/Ecosystem | An ecosystem (or ecological system) is a system formed by organisms in interaction with their environment. The biotic and abiotic components are linked together through nutrient cycles and energy flows.
Ecosystems are controlled by external and internal factors. External factors such as climate, parent material which forms the soil and topography, control the overall structure of an ecosystem but are not themselves influenced by the ecosystem. Internal factors are controlled, for example, by decomposition, root competition, shading, disturbance, succession, and the types of species present. While the resource inputs are generally controlled by external processes, the availability of these resources within the ecosystem is controlled by internal factors. Therefore, internal factors not only control ecosystem processes but are also controlled by them.
Ecosystems are dynamic entities—they are subject to periodic disturbances and are always in the process of recovering from some past disturbance. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, those that look at differences between ecosystems to elucidate how they work and direct manipulative experimentation. Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Biotic factors of the ecosystem are living things, such as plants, animals, and bacteria, while abiotic factors are non-living components, such as water, soil, and the atmosphere.
Plants allow energy to enter the system through photosynthesis, building up plant tissue. Animals play an important role in the movement of matter and energy through the system, by feeding on plants and on one another. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and microbes.
Ecosystems provide a variety of goods and services upon which people depend, and of which people may be a part. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species. These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered "collapsed". Ecosystem restoration can contribute to achieving the Sustainable Development Goals.
Definition
An ecosystem (or ecological system) consists of all the organisms and the abiotic pools (or physical environment) with which they interact. The biotic and abiotic components are linked together through nutrient cycles and energy flows.
"Ecosystem processes" are the transfers of energy and materials from one pool to another. Ecosystem processes are known to "take place at a wide range of scales". Therefore, the correct scale of study depends on the question asked.
Origin and development of the term
The term "ecosystem" was first used in 1935 in a publication by British ecologist Arthur Tansley. The term was coined by Arthur Roy Clapham, who came up with the word at Tansley's request. Tansley devised the concept to draw attention to the importance of transfers of materials between organisms and their environment. He later refined the term, describing it as "The whole system, ... including not only the organism-complex, but also the whole complex of physical factors forming what we call the environment". Tansley regarded ecosystems not simply as natural units, but as "mental isolates". Tansley later defined the spatial extent of ecosystems using the term "ecotope".
G. Evelyn Hutchinson, a limnologist who was a contemporary of Tansley's, combined Charles Elton's ideas about trophic ecology with those of Russian geochemist Vladimir Vernadsky. As a result, he suggested that mineral nutrient availability in a lake limited algal production. This would, in turn, limit the abundance of animals that feed on algae. Raymond Lindeman took these ideas further to suggest that the flow of energy through a lake was the primary driver of the ecosystem. Hutchinson's students, brothers Howard T. Odum and Eugene P. Odum, further developed a "systems approach" to the study of ecosystems. This allowed them to study the flow of energy and material through ecological systems.
Processes
External and internal factors
Ecosystems are controlled by both external and internal factors. External factors, also called state factors, control the overall structure of an ecosystem and the way things work within it, but are not themselves influenced by the ecosystem. On broad geographic scales, climate is the factor that "most strongly determines ecosystem processes and structure". Climate determines the biome in which the ecosystem is embedded. Rainfall patterns and seasonal temperatures influence photosynthesis and thereby determine the amount of energy available to the ecosystem.
Parent material determines the nature of the soil in an ecosystem, and influences the supply of mineral nutrients. Topography also controls ecosystem processes by affecting things like microclimate, soil development and the movement of water through a system. For example, ecosystems can be quite different if situated in a small depression on the landscape, versus one present on an adjacent steep hillside.
Other external factors that play an important role in ecosystem functioning include time and potential biota, the organisms that are present in a region and could potentially occupy a particular site. Ecosystems in similar environments that are located in different parts of the world can end up doing things very differently simply because they have different pools of species present. The introduction of non-native species can cause substantial shifts in ecosystem function.
Unlike external factors, internal factors in ecosystems not only control ecosystem processes but are also controlled by them. While the resource inputs are generally controlled by external processes like climate and parent material, the availability of these resources within the ecosystem is controlled by internal factors like decomposition, root competition or shading. Other factors like disturbance, succession or the types of species present are also internal factors.
Primary production
Primary production is the production of organic matter from inorganic carbon sources. This mainly occurs through photosynthesis. The energy incorporated through this process supports life on earth, while the carbon makes up much of the organic matter in living and dead biomass, soil carbon and fossil fuels. It also drives the carbon cycle, which influences global climate via the greenhouse effect.
Through the process of photosynthesis, plants capture energy from light and use it to combine carbon dioxide and water to produce carbohydrates and oxygen. The photosynthesis carried out by all the plants in an ecosystem is called the gross primary production (GPP). About half of the GPP is respired by plants in order to provide the energy that supports their growth and maintenance. The remainder, that portion of GPP that is not used up by respiration, is known as the net primary production (NPP). Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis.
Energy flow
Energy and carbon enter ecosystems through photosynthesis, are incorporated into living tissue, transferred to other organisms that feed on the living and dead plant matter, and eventually released through respiration. The carbon and energy incorporated into plant tissues (net primary production) is either consumed by animals while the plant is alive, or it remains uneaten when the plant tissue dies and becomes detritus. In terrestrial ecosystems, the vast majority of the net primary production ends up being broken down by decomposers. The remainder is consumed by animals while still alive and enters the plant-based trophic system. After plants and animals die, the organic matter contained in them enters the detritus-based trophic system.
Ecosystem respiration is the sum of respiration by all living organisms (plants, animals, and decomposers) in the ecosystem. Net ecosystem production is the difference between gross primary production (GPP) and ecosystem respiration. In the absence of disturbance, net ecosystem production is equivalent to the net carbon accumulation in the ecosystem.
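These definitions amount to simple carbon bookkeeping. A minimal worked example is sketched below in Python; the flux values are hypothetical and chosen purely for illustration.

```python
# Hypothetical annual carbon fluxes for a vegetated ecosystem, in grams of
# carbon per square metre per year (illustrative numbers only).
gpp = 2000.0                      # gross primary production (total photosynthesis)
plant_respiration = 1000.0        # autotrophic respiration by the plants themselves
heterotroph_respiration = 850.0   # respiration by animals and decomposers

npp = gpp - plant_respiration                        # net primary production
ecosystem_respiration = plant_respiration + heterotroph_respiration
nep = gpp - ecosystem_respiration                    # net ecosystem production

print(f"NPP = {npp} g C m^-2 yr^-1")   # 1000.0
print(f"NEP = {nep} g C m^-2 yr^-1")   # 150.0: carbon accumulates if undisturbed
```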
Energy can also be released from an ecosystem through disturbances such as wildfire or transferred to other ecosystems (e.g., from a forest to a stream to a lake) by erosion.
In aquatic systems, the proportion of plant biomass that gets consumed by herbivores is much higher than in terrestrial systems. In trophic systems, photosynthetic organisms are the primary producers. The organisms that consume their tissues are called primary consumers or secondary producers—herbivores. Organisms which feed on microbes (bacteria and fungi) are termed microbivores. Animals that feed on primary consumers—carnivores—are secondary consumers. Each of these constitutes a trophic level.
The sequence of consumption—from plant to herbivore, to carnivore—forms a food chain. Real systems are much more complex than this—organisms will generally feed on more than one form of food, and may feed at more than one trophic level. Carnivores may capture some prey that is part of a plant-based trophic system and others that are part of a detritus-based trophic system (a bird that feeds both on herbivorous grasshoppers and earthworms, which consume detritus). Real systems, with all these complexities, form food webs rather than food chains, which present a number of common, non-random properties in the topology of their network.
Decomposition
The carbon and nutrients in dead organic matter are broken down by a group of processes known as decomposition. This releases nutrients that can then be re-used for plant and microbial production and returns carbon dioxide to the atmosphere (or water) where it can be used for photosynthesis. In the absence of decomposition, the dead organic matter would accumulate in an ecosystem, and nutrients and atmospheric carbon dioxide would be depleted.
Decomposition processes can be separated into three categories—leaching, fragmentation and chemical alteration of dead material. As water moves through dead organic matter, it dissolves and carries with it the water-soluble components. These are then taken up by organisms in the soil, react with mineral soil, or are transported beyond the confines of the ecosystem (and are considered lost to it). Newly shed leaves and newly dead animals have high concentrations of water-soluble components and include sugars, amino acids and mineral nutrients. Leaching is more important in wet environments and less important in dry ones.
Fragmentation processes break organic material into smaller pieces, exposing new surfaces for colonization by microbes. Freshly shed leaf litter may be inaccessible due to an outer layer of cuticle or bark, and cell contents are protected by a cell wall. Newly dead animals may be covered by an exoskeleton. Fragmentation processes, which break through these protective layers, accelerate the rate of microbial decomposition. Animals fragment detritus as they hunt for food, as does passage through the gut. Freeze-thaw cycles and cycles of wetting and drying also fragment dead material.
The chemical alteration of the dead organic matter is primarily achieved through bacterial and fungal action. Fungal hyphae produce enzymes that can break through the tough outer structures surrounding dead plant material. They also produce enzymes that break down lignin, which allows them access to both cell contents and the nitrogen in the lignin. Fungi can transfer carbon and nitrogen through their hyphal networks and thus, unlike bacteria, are not dependent solely on locally available resources.
Decomposition rates
Decomposition rates vary among ecosystems. The rate of decomposition is governed by three sets of factors—the physical environment (temperature, moisture, and soil properties), the quantity and quality of the dead material available to decomposers, and the nature of the microbial community itself. Temperature controls the rate of microbial respiration; the higher the temperature, the faster the microbial decomposition occurs. Temperature also affects soil moisture, which affects decomposition. Freeze-thaw cycles also affect decomposition—freezing temperatures kill soil microorganisms, which allows leaching to play a more important role in moving nutrients around. This can be especially important as the soil thaws in the spring, creating a pulse of nutrients that become available.
Decomposition rates are low under very wet or very dry conditions. Decomposition rates are highest in wet, moist conditions with adequate levels of oxygen. Wet soils tend to become deficient in oxygen (this is especially true in wetlands), which slows microbial growth. In dry soils, decomposition slows as well, but bacteria continue to grow (albeit at a slower rate) even after soils become too dry to support plant growth.
Dynamics and resilience
Ecosystems are dynamic entities. They are subject to periodic disturbances and are always in the process of recovering from past disturbances. When a perturbation occurs, an ecosystem responds by moving away from its initial state. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Resilience thinking also includes humanity as an integral part of the biosphere where we are dependent on ecosystem services for our survival and must build and maintain their natural capacities to withstand shocks and disturbances. Time plays a central role over a wide range, for example, in the slow development of soil from bare rock and the faster recovery of a community from disturbance.
Disturbance also plays an important role in ecological processes. F. Stuart Chapin and coauthors define disturbance as "a relatively discrete event in time that removes plant biomass". This can range from herbivore outbreaks, treefalls, fires, hurricanes, floods, glacial advances, to volcanic eruptions. Such disturbances can cause large changes in plant, animal and microbe populations, as well as soil organic matter content. Disturbance is followed by succession, a "directional change in ecosystem structure and functioning resulting from biotically driven changes in resource supply."
The frequency and severity of disturbance determine the way it affects ecosystem function. A major disturbance like a volcanic eruption or glacial advance and retreat leave behind soils that lack plants, animals or organic matter. Ecosystems that experience such disturbances undergo primary succession. A less severe disturbance like forest fires, hurricanes or cultivation result in secondary succession and a faster recovery. More severe and more frequent disturbance result in longer recovery times.
From one year to another, ecosystems experience variation in their biotic and abiotic environments. A drought, a colder than usual winter, and a pest outbreak all are short-term variability in environmental conditions. Animal populations vary from year to year, building up during resource-rich periods and crashing as they overshoot their food supply. Longer-term changes also shape ecosystem processes. For example, the forests of eastern North America still show legacies of cultivation which ceased in 1850 when large areas were reverted to forests. Another example is the methane production in eastern Siberian lakes that is controlled by organic matter which accumulated during the Pleistocene.
Nutrient cycling
Ecosystems continually exchange energy and carbon with the wider environment. Mineral nutrients, on the other hand, are mostly cycled back and forth between plants, animals, microbes and the soil. Most nitrogen enters ecosystems through biological nitrogen fixation, is deposited through precipitation, dust, gases or is applied as fertilizer. Most terrestrial ecosystems are nitrogen-limited in the short term making nitrogen cycling an important control on ecosystem production. Over the long term, phosphorus availability can also be critical.
Macronutrients which are required by all plants in large quantities include the primary nutrients (which are most limiting as they are used in largest amounts): Nitrogen, phosphorus, potassium. Secondary major nutrients (less often limiting) include: Calcium, magnesium, sulfur. Micronutrients required by all plants in small quantities include boron, chloride, copper, iron, manganese, molybdenum, zinc. Finally, there are also beneficial nutrients which may be required by certain plants or by plants under specific environmental conditions: aluminum, cobalt, iodine, nickel, selenium, silicon, sodium, vanadium.
Until modern times, nitrogen fixation was the major source of nitrogen for ecosystems. Nitrogen-fixing bacteria either live symbiotically with plants or live freely in the soil. The energetic cost is high for plants that support nitrogen-fixing symbionts—as much as 25% of gross primary production when measured in controlled conditions. Many members of the legume plant family support nitrogen-fixing symbionts. Some cyanobacteria are also capable of nitrogen fixation. These are phototrophs, which carry out photosynthesis. Like other nitrogen-fixing bacteria, they can either be free-living or have symbiotic relationships with plants. Other sources of nitrogen include acid deposition produced through the combustion of fossil fuels, ammonia gas which evaporates from agricultural fields which have had fertilizers applied to them, and dust. Anthropogenic nitrogen inputs account for about 80% of all nitrogen fluxes in ecosystems.
When plant tissues are shed or are eaten, the nitrogen in those tissues becomes available to animals and microbes. Microbial decomposition releases nitrogen compounds from dead organic matter in the soil, where plants, fungi, and bacteria compete for it. Some soil bacteria use organic nitrogen-containing compounds as a source of carbon, and release ammonium ions into the soil. This process is known as nitrogen mineralization. Others convert ammonium to nitrite and nitrate ions, a process known as nitrification. Nitric oxide and nitrous oxide are also produced during nitrification. Under nitrogen-rich and oxygen-poor conditions, nitrates and nitrites are converted to nitrogen gas, a process known as denitrification.
Mycorrhizal fungi which are symbiotic with plant roots, use carbohydrates supplied by the plants and in return transfer phosphorus and nitrogen compounds back to the plant roots. This is an important pathway of organic nitrogen transfer from dead organic matter to plants. This mechanism may contribute to more than 70 Tg of annually assimilated plant nitrogen, thereby playing a critical role in global nutrient cycling and ecosystem function.
Phosphorus enters ecosystems through weathering. As ecosystems age this supply diminishes, making phosphorus-limitation more common in older landscapes (especially in the tropics). Calcium and sulfur are also produced by weathering, but acid deposition is an important source of sulfur in many ecosystems. Although magnesium and manganese are produced by weathering, exchanges between soil organic matter and living cells account for a significant portion of ecosystem fluxes. Potassium is primarily cycled between living cells and soil organic matter.
Function and biodiversity
Biodiversity plays an important role in ecosystem functioning. Ecosystem processes are driven by the species in an ecosystem, the nature of the individual species, and the relative abundance of organisms among these species. Ecosystem processes are the net effect of the actions of individual organisms as they interact with their environment. Ecological theory suggests that in order to coexist, species must have some level of limiting similarity—they must be different from one another in some fundamental way, otherwise, one species would competitively exclude the other. Despite this, the cumulative effect of additional species in an ecosystem is not linear: additional species may enhance nitrogen retention, for example. However, beyond some level of species richness, additional species may have little additive effect unless they differ substantially from species already present. This is the case for example for exotic species.
The addition (or loss) of species that are ecologically similar to those already present in an ecosystem tends to only have a small effect on ecosystem function. Ecologically distinct species, on the other hand, have a much larger effect. Similarly, dominant species have a large effect on ecosystem function, while rare species tend to have a small effect. Keystone species tend to have an effect on ecosystem function that is disproportionate to their abundance in an ecosystem.
An ecosystem engineer is any organism that creates, significantly modifies, maintains or destroys a habitat.
Study approaches
Ecosystem ecology
Ecosystem ecology is the "study of the interactions between organisms and their environment as an integrated system". The size of ecosystems can range up to ten orders of magnitude, from the surface layers of rocks to the surface of the planet.
The Hubbard Brook Ecosystem Study started in 1963 to study the White Mountains in New Hampshire. It was the first successful attempt to study an entire watershed as an ecosystem. The study used stream chemistry as a means of monitoring ecosystem properties, and developed a detailed biogeochemical model of the ecosystem. Long-term research at the site led to the discovery of acid rain in North America in 1972. Researchers documented the depletion of soil cations (especially calcium) over the next several decades.
Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, those that look at differences between ecosystems to elucidate how they work and direct manipulative experimentation. Studies can be carried out at a variety of scales, ranging from whole-ecosystem studies to studying microcosms or mesocosms (simplified representations of ecosystems). American ecologist Stephen R. Carpenter has argued that microcosm experiments can be "irrelevant and diversionary" if they are not carried out in conjunction with field studies done at the ecosystem scale. In such cases, microcosm experiments may fail to accurately predict ecosystem-level dynamics.
Classifications
Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Biomes are always defined at a very general level. Ecosystems can be described at levels that range from very general (in which case the names are sometimes the same as those of biomes) to very specific, such as "wet coastal needle-leafed forests".
Biomes vary due to global variations in climate. Biomes are often defined by their structure: at a general level, for example, tropical forests, temperate grasslands, and arctic tundra. There can be any degree of subcategories among ecosystem types that comprise a biome, e.g., needle-leafed boreal forests or wet tropical forests. Although ecosystems are most commonly categorized by their structure and geography, there are also other ways to categorize and classify ecosystems such as by their level of human impact (see anthropogenic biome), or by their integration with social processes or technological processes or their novelty (e.g. novel ecosystem). Each of these taxonomies of ecosystems tends to emphasize different structural or functional properties. None of these is the "best" classification.
Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Different approaches to ecological classifications have been developed in terrestrial, freshwater and marine disciplines, and a function-based typology has been proposed to leverage the strengths of these different approaches into a unified system.
Human interactions with ecosystems
Human activities are important in almost all ecosystems. Although humans exist and operate within ecosystems, their cumulative effects are large enough to influence external factors like climate.
Ecosystem goods and services
Ecosystems provide a variety of goods and services upon which people depend. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. They also include less tangible items like tourism and recreation, and genes from wild plants and animals that can be used to improve domestic species.
Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. While material from the ecosystem had traditionally been recognized as being the basis for things of economic value, ecosystem services tend to be taken for granted.
The Millennium Ecosystem Assessment is an international synthesis by over 1000 of the world's leading biological scientists that analyzes the state of the Earth's ecosystems and provides summaries and guidelines for decision-makers. The report identified four major categories of ecosystem services: provisioning, regulating, cultural and supporting services. It concludes that human activity is having a significant and escalating impact on the biodiversity of the world ecosystems, reducing both their resilience and biocapacity. The report refers to natural systems as humanity's "life-support system", providing essential ecosystem services. The assessment measures 24 ecosystem services and concludes that only four have shown improvement over the last 50 years, 15 are in serious decline, and five are in a precarious condition.
The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) is an intergovernmental organization established to improve the interface between science and policy on issues of biodiversity and ecosystem services. It is intended to serve a similar role to the Intergovernmental Panel on Climate Change.
Ecosystem services are limited and also threatened by human activities. To help inform decision-makers, many ecosystem services are being assigned economic values, often based on the cost of replacement with anthropogenic alternatives. The ongoing challenge of prescribing economic value to nature, for example through biodiversity banking, is prompting transdisciplinary shifts in how we recognize and manage the environment, social responsibility, business opportunities, and our future as a species.
Degradation and decline
As human population and per capita consumption grow, so do the resource demands imposed on ecosystems and the effects of the human ecological footprint. Natural resources are vulnerable and limited. The environmental impacts of anthropogenic actions are becoming more apparent. Problems for all ecosystems include: environmental pollution, climate change and biodiversity loss. For terrestrial ecosystems further threats include air pollution, soil degradation, and deforestation. For aquatic ecosystems threats also include unsustainable exploitation of marine resources (for example overfishing), marine pollution, microplastics pollution, the effects of climate change on oceans (e.g. warming and acidification), and building on coastal areas.
Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species.
These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered collapsed (see also IUCN Red List of Ecosystems). Ecosystem collapse could be reversible and in this way differs from species extinction. Quantitative assessments of the risk of collapse are used as measures of conservation status and trends.
Management
When natural resource management is applied to whole ecosystems, rather than single species, it is termed ecosystem management. Although definitions of ecosystem management abound, there is a common set of principles which underlie these definitions: A fundamental principle is the long-term sustainability of the production of goods and services by the ecosystem; "intergenerational sustainability [is] a precondition for management, not an afterthought". While ecosystem management can be used as part of a plan for wilderness conservation, it can also be used in intensively managed ecosystems (see, for example, agroecosystem and close to nature forestry).
Restoration and sustainable development
Integrated conservation and development projects (ICDPs) aim to address conservation and human livelihood (sustainable development) concerns in developing countries together, rather than separately as was often done in the past.
See also
Complex system
Earth science
Ecoregion
Ecological resilience
Ecosystem-based adaptation
Artificialization
Types
The following articles are types of ecosystems for particular types of regions or zones:
Aquatic ecosystem
Freshwater ecosystem
Lake ecosystem (lentic ecosystem)
River ecosystem (lotic ecosystem)
Marine ecosystem
Large marine ecosystem
Tropical salt pond ecosystem
Terrestrial ecosystem
Boreal ecosystem
Groundwater-dependent ecosystems
Montane ecosystem
Urban ecosystem
Ecosystems grouped by condition
Agroecosystem
Closed ecosystem
Depauperate ecosystem
Novel ecosystem
Reference ecosystem
Instances
Ecosystem instances in specific regions of the world:
Greater Yellowstone Ecosystem
Leuser Ecosystem
Longleaf pine Ecosystem
Tarangire Ecosystem
References
External links | Ecosystem | Biology | 5,994 |
71,469,504 | https://en.wikipedia.org/wiki/Okroy%20Cloud | The Okroy Cloud, is an intergalactic dust cloud near the Milky Way. Its intergalactic nature was first studied by Bogdan Wszolek and Solvia Massi in 1988.
See also
Local Group
Satellite galaxies of the Milky Way
References
Milky Way
Milky Way Subgroup
Virgo (constellation)
Stellar streams
Local Group | Okroy Cloud | Astronomy | 68 |
75,599,532 | https://en.wikipedia.org/wiki/Quantum%20computational%20chemistry | Quantum computational chemistry is an emerging field that exploits quantum computing to simulate chemical systems. Despite quantum mechanics' foundational role in understanding chemical behaviors, traditional computational approaches face significant challenges, largely due to the complexity and computational intensity of quantum mechanical equations. This complexity arises from the exponential growth of a quantum system's wave function with each added particle, making exact simulations on classical computers inefficient.
Efficient quantum algorithms for chemistry problems are expected to have run-times and resource requirements that scale polynomially with system size and desired accuracy. Experimental efforts have validated proof-of-principle chemistry calculations, though currently limited to small systems.
Brief History of Quantum Computational Chemistry
1929: Dirac noted the inherent complexity of quantum mechanical equations, underscoring the difficulties in solving these equations using classical computation.
1982: Feynman proposed using quantum hardware for simulations, addressing the inefficiency of classical computers in simulating quantum systems.
Common Methods in Quantum Computational Chemistry
While there are several common methods in quantum chemistry, the section below lists only a few examples.
Qubitization
Qubitization is a mathematical and algorithmic concept in quantum computing for the simulation of quantum systems via Hamiltonian dynamics. The core idea of qubitization is to encode the problem of Hamiltonian simulation in a way that is more efficiently processable by quantum algorithms.
Qubitization involves a transformation of the Hamiltonian operator, a central object in quantum mechanics representing the total energy of a system. In classical computational terms, a Hamiltonian can be thought of as a matrix describing the energy interactions within a quantum system. The goal of qubitization is to embed this Hamiltonian into a larger, unitary operator, which is a type of operator in quantum mechanics that preserves the norm of vectors upon which it acts.
Mathematically, the process of qubitization constructs a unitary operator U such that a specific projection (block) of U is proportional to the Hamiltonian H of interest. This relationship can often be represented as (⟨G| ⊗ I) U (|G⟩ ⊗ I) = H/λ, where |G⟩ is a specific quantum state prepared on ancilla qubits, ⟨G| is its conjugate transpose, and λ is a normalisation constant (for a Hamiltonian written as a linear combination of unitaries, the sum of the absolute values of its coefficients). The efficiency of this method comes from the fact that the unitary operator U can be implemented on a quantum computer with fewer resources (like qubits and quantum gates) than would be required for directly simulating H.
A key feature of qubitization is in simulating Hamiltonian dynamics with high precision while reducing the quantum resource overhead. This efficiency is especially beneficial in quantum algorithms where the simulation of complex quantum systems is necessary, such as in quantum chemistry and materials science simulations. Qubitization also develops quantum algorithms for solving certain types of problems more efficiently than classical algorithms. For instance, it has implications for the Quantum Phase Estimation algorithm, which is fundamental in various quantum computing applications, including factoring and solving linear systems of equations.
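As a concrete illustration of the block-encoding relation above, the following numpy sketch encodes a toy single-qubit Hamiltonian written as a linear combination of unitaries and checks the relation numerically. The Hamiltonian, its coefficients, and all variable names are arbitrary illustrative choices, not a production implementation.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Toy single-qubit "Hamiltonian" written as a linear combination of unitaries:
# H = 0.6 * Z + 0.8 * X   (coefficients and terms chosen arbitrarily)
coeffs = np.array([0.6, 0.8])
unitaries = [Z, X]
lam = coeffs.sum()                      # normalisation lambda = sum_i |a_i|

# PREPARE maps |0> on the ancilla to |G> = sum_i sqrt(a_i / lambda) |i>
g = np.sqrt(coeffs / lam)               # amplitudes of |G> (one ancilla qubit here)

# SELECT = sum_i |i><i| (ancilla) tensor U_i (system)
select = np.zeros((4, 4), dtype=complex)
for i, U in enumerate(unitaries):
    proj = np.zeros((2, 2))
    proj[i, i] = 1.0
    select += np.kron(proj, U)

# Block-encoding check: (<G| tensor I) SELECT (|G> tensor I) should equal H / lambda
bra_G = np.kron(g.conj().reshape(1, 2), I2)   # shape (2, 4)
ket_G = np.kron(g.reshape(2, 1), I2)          # shape (4, 2)
encoded_block = bra_G @ select @ ket_G

H = coeffs[0] * Z + coeffs[1] * X
assert np.allclose(encoded_block, H / lam)
print("The |G> block of SELECT reproduces H / lambda.")
```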
Applications of qubitization in chemistry
Gaussian orbital basis sets
In Gaussian orbital basis sets, the empirically observed cost of phase estimation algorithms has been reduced substantially through successive algorithmic improvements, where the relevant size parameter N is the number of basis functions. Advanced Hamiltonian simulation algorithms have further reduced the scaling, with the introduction of techniques like Taylor series methods and qubitization, providing more efficient algorithms with reduced computational requirements.
Plane wave basis sets
Plane wave basis sets, suitable for periodic systems, have also seen advancements in algorithm efficiency, with improvements in product formula-based approaches and Taylor series methods.
Quantum phase estimation in chemistry
Overview
Phase estimation, as proposed by Kitaev in 1996, identifies the lowest energy eigenstate |E₀⟩ and excited states |Eₖ⟩ of a physical Hamiltonian, as detailed by Abrams and Lloyd in 1999. In quantum computational chemistry, this technique is employed to encode fermionic Hamiltonians into a qubit framework.
Brief methodology
Initialization
The qubit register is initialized in a state |ψ⟩ which has a nonzero overlap with the Full Configuration Interaction (FCI) target eigenstate of the system. This state is expressed as a sum over the energy eigenstates of the Hamiltonian, |ψ⟩ = Σₖ cₖ |Eₖ⟩, where cₖ represents complex coefficients.
Application of Hadamard gates
Each ancilla qubit undergoes a Hadamard gate application, placing the ancilla register in a superposed state. Subsequently, controlled applications of powers of the time-evolution operator U = e^(−iĤt), conditioned on the ancilla qubits, modify this state.
Inverse quantum Fourier transform
This transform is applied to the ancilla qubits, revealing the phase information that encodes the energy eigenvalues.
Measurement
The ancilla qubits are measured in the Z basis, collapsing the main register into the corresponding energy eigenstate |Eₖ⟩ with probability |cₖ|².
Requirements
The algorithm requires ancilla qubits, with their number determined by the desired precision and success probability of the energy estimate. Obtaining a binary energy estimate precise to n bits with success probability 1 − ε necessitates n + ⌈log₂(2 + 1/(2ε))⌉ ancilla qubits. This phase estimation has been validated experimentally across various quantum architectures.
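For intuition, the procedure can be mimicked classically on a statevector. The sketch below assumes the system register is already prepared in an eigenstate, so only the ancilla register is simulated; the number of ancilla qubits and the phase value are arbitrary illustrative choices.

```python
import numpy as np

# Textbook phase-estimation sketch: estimate the phase phi of an eigenvalue
# e^{2*pi*i*phi} of a unitary, using n ancilla ("readout") qubits. In chemistry
# the phase encodes an energy eigenvalue via U = e^{-iHt}.
n = 6                      # number of ancilla qubits -> n-bit estimate
phi = 0.3125               # true phase, exactly representable with 6 bits

N = 2 ** n
# After Hadamards + controlled-U^(2^k), the ancilla register holds
# (1/sqrt(N)) * sum_j e^{2*pi*i*phi*j} |j>
ancilla = np.exp(2j * np.pi * phi * np.arange(N)) / np.sqrt(N)

# Inverse quantum Fourier transform (dense matrix; fine for a toy example)
iqft = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
ancilla = iqft @ ancilla

probs = np.abs(ancilla) ** 2
best = int(np.argmax(probs))
print(f"most likely outcome {best} -> phase estimate {best / N}")  # 0.3125
```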
Applications of QPEs in chemistry
Time evolution and error analysis
The total coherent time evolution required for the algorithm grows rapidly with the binary precision n (roughly doubling with each additional bit), and the procedure is expected to be repeated for an accurate ground state estimation. Errors in the algorithm include errors in the energy eigenvalue estimation, in the unitary evolutions, and in circuit synthesis, which can be quantified using techniques like the Solovay-Kitaev theorem.
The phase estimation algorithm can be enhanced or altered in several ways, such as using a single ancilla qubit for sequential measurements, increasing efficiency, parallelization, or enhancing noise resilience in analytical chemistry. The algorithm can also be scaled using classically obtained knowledge about energy gaps between states.
Limitations
Effective state preparation is needed, as a randomly chosen state would exponentially decrease the probability of collapsing to the desired ground state. Various methods for state preparation have been proposed, including classical approaches and quantum techniques like adiabatic state preparation.
Variational quantum eigensolver (VQE)
Overview
The variational quantum eigensolver is an algorithm in quantum computing, crucial for near-term quantum hardware. Initially proposed by Peruzzo et al. in 2014 and further developed by McClean et al. in 2016, VQE finds the lowest eigenvalue of Hamiltonians, particularly those in chemical systems. It employs the variational method of quantum mechanics, which guarantees that the expectation value of the Hamiltonian for any parameterized trial wave function is at least the lowest energy eigenvalue of that Hamiltonian. VQE is a hybrid algorithm that utilizes both quantum and classical computers. The quantum computer prepares and measures the quantum state, while the classical computer processes these measurements and updates the system. This synergy allows VQE to overcome some limitations of purely quantum methods.
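The hybrid loop can be sketched classically, with a statevector standing in for the quantum processor. The toy Hamiltonian, the single-parameter ansatz, and the choice of the COBYLA optimizer below are illustrative assumptions, not part of any published VQE recipe.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal VQE loop on a toy single-qubit Hamiltonian (statevector simulation,
# so the "quantum" expectation value is computed exactly rather than sampled).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * Z + 0.3 * X          # illustrative coefficients

def ansatz(theta):
    """Trial state R_y(theta)|0> -- a deliberately simple one-parameter ansatz."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(params):
    psi = ansatz(params[0])
    return float(np.real(psi.conj() @ H @ psi))

result = minimize(energy, x0=[0.1], method="COBYLA")   # classical outer loop
exact = np.linalg.eigvalsh(H)[0]
print(f"VQE energy  : {result.fun:.6f}")   # should approach the exact value
print(f"Exact ground: {exact:.6f}")
```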
Applications of VQEs in chemistry
1-RDM and 2-RDM calculations
The reduced density matrices (1-RDM and 2-RDM) can be used to extrapolate the electronic structure of a system.
Ground state energy extrapolation
In the Hamiltonian variational ansatz, the initial state |ψ₀⟩ is prepared to represent the ground state of the molecular Hamiltonian without electron correlations. The evolution of this state under the Hamiltonian, split into commuting segments H_k, is given by the equation below.

|ψ(θ)⟩ = ∏_d ∏_k exp(−i θ_{d,k} H_k) |ψ₀⟩

where θ_{d,k} are variational parameters optimized to minimize the energy, providing insights into the electronic structure of the molecule.
Measurement scaling
McClean et al. (2016) and Romero et al. (2019) proposed a formula to estimate the number of measurements ($M$) required to reach an energy precision $\epsilon$. The formula is given by $M \approx \left(\sum_\alpha |h_\alpha|\right)^2 / \epsilon^2$, where $h_\alpha$ are coefficients of each Pauli string in the Hamiltonian. This leads to a scaling of $\mathcal{O}(N^6/\epsilon^2)$ in a Gaussian orbital basis and $\mathcal{O}(N^4/\epsilon^2)$ in a plane wave dual basis. Note that $N$ is the number of basis functions in the chosen basis set.
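A back-of-the-envelope evaluation of this estimate; the Pauli-string coefficients and the 1 mHa precision target are invented for the example and do not come from the cited papers:

```python
# Rough measurement-count estimate M = (sum_alpha |h_alpha|)^2 / eps^2,
# using made-up coefficients and a made-up precision target.
h_coefficients = [0.7, -0.4, 0.25, 0.25, 0.1]   # hypothetical values
epsilon = 1e-3                                   # target precision (Hartree)

M = sum(abs(h) for h in h_coefficients) ** 2 / epsilon ** 2
print(f"Estimated number of measurements: {M:.2e}")   # ~2.9e6 here
```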
Fermionic level grouping
A method by Bonet-Monroig, Babbush, and O'Brien (2019) focuses on grouping terms at a fermionic level rather than a qubit level, leading to a measurement requirement of only $\mathcal{O}(N^2)$ circuits with an additional gate depth of $\mathcal{O}(N)$.
Limitations of VQE
While VQE's application in solving the electronic Schrödinger equation for small molecules has shown success, its scalability is hindered by two main challenges: the complexity of the quantum circuits required and the intricacies involved in the classical optimization process. These challenges are significantly influenced by the choice of the variational ansatz, which is used to construct the trial wave function. Modern quantum computers face limitations in running deep quantum circuits, especially when using the existing ansatzes for problems that exceed several qubits.
Jordan-Wigner encoding
Jordan-Wigner encoding is a method in quantum computing used for simulating fermionic systems like molecular orbitals and electron interactions in quantum chemistry.
Overview
In quantum chemistry, electrons are modeled as fermions with antisymmetric wave functions. The Jordan-Wigner encoding maps these fermionic orbitals to qubits, preserving their antisymmetric nature. Mathematically, this is achieved by associating each fermionic creation and annihilation operator with corresponding qubit operators through the Jordan-Wigner transformation:
$$a_j^\dagger = \left(\prod_{k<j} Z_k\right)\frac{X_j - iY_j}{2}, \qquad a_j = \left(\prod_{k<j} Z_k\right)\frac{X_j + iY_j}{2}$$
where $X_j$, $Y_j$, and $Z_j$ are Pauli matrices acting on the $j$-th qubit.
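A small numerical check of this mapping, assuming three fermionic modes (the mode count is arbitrary and chosen only to keep the matrices small); the operators are built as Kronecker products and tested against the canonical anticommutation relations:

```python
# Build Jordan-Wigner annihilation operators a_j = Z x ... x Z x (X+iY)/2 x I x ...
# for 3 modes and verify {a_p, a_q^dagger} = delta_pq * I.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilation(j, n_modes):
    sigma_minus = (X + 1j * Y) / 2                  # lowers qubit j
    ops = [Z] * j + [sigma_minus] + [I2] * (n_modes - j - 1)
    return kron_all(ops)

n_modes = 3
a = [annihilation(j, n_modes) for j in range(n_modes)]
dim = 2 ** n_modes
for p in range(n_modes):
    for q in range(n_modes):
        anti = a[p] @ a[q].conj().T + a[q].conj().T @ a[p]
        expected = np.eye(dim) if p == q else np.zeros((dim, dim))
        assert np.allclose(anti, expected)
print("Canonical anticommutation relations hold under Jordan-Wigner")
```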
Applications of Jordan-Wigner encoding in chemistry
Electron hopping
Electron hopping between orbitals, central to chemical bonding and reactions, is represented by terms like $a_p^\dagger a_q + a_q^\dagger a_p$. Under Jordan-Wigner encoding, these transform as follows:
$$a_p^\dagger a_q + a_q^\dagger a_p = \tfrac{1}{2}\left(X_p Z_{p+1}\cdots Z_{q-1} X_q + Y_p Z_{p+1}\cdots Z_{q-1} Y_q\right) \quad (p < q)$$
This transformation captures the quantum mechanical behavior of electron movement and interaction within molecules.
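A self-contained check of the hopping-term identity above, assuming three modes and a hop between orbitals p = 0 and q = 2, so a single Z operator sits on the intervening qubit:

```python
# Verify that a_0^dag a_2 + a_2^dag a_0 equals (X Z X + Y Z Y)/2 on 3 qubits.
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kron = lambda ops: reduce(np.kron, ops)

sigma_minus = (X + 1j * Y) / 2
a0 = kron([sigma_minus, I2, I2])        # annihilate mode 0 (no Z string)
a2 = kron([Z, Z, sigma_minus])          # annihilate mode 2 (Z string on 0, 1)

hop_fermionic = a0.conj().T @ a2 + a2.conj().T @ a0
hop_pauli = 0.5 * (kron([X, Z, X]) + kron([Y, Z, Y]))
print(np.allclose(hop_fermionic, hop_pauli))   # True
```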
Computational complexity in molecular systems
The complexity of simulating a molecular system using Jordan-Wigner encoding is influenced by the structure of the molecule and the nature of electron interactions. For a molecular system with $N$ orbitals, the number of required qubits scales linearly with $N$, but the complexity of gate operations depends on the specific interactions being modeled.
Limitations of Jordan–Wigner encoding
The Jordan-Wigner transformation encodes fermionic operators into qubit operators, but it introduces non-local string operators that can make simulations inefficient. The FSWAP gate is used to mitigate this inefficiency by rearranging the ordering of fermions (or their qubit representations), thus simplifying the implementation of fermionic operations.
Fermionic SWAP (FSWAP) network
FSWAP networks rearrange qubits to efficiently simulate electron dynamics in molecules. These networks are essential for reducing the gate complexity in simulations, especially for non-neighboring electron interactions.
When two fermionic modes (represented as qubits after the Jordan-Wigner transformation) are swapped, the FSWAP gate not only exchanges their states but also correctly updates the phase of the wavefunction to maintain fermionic antisymmetry. This is in contrast to the standard SWAP gate, which does not account for the phase change required in the antisymmetric wavefunctions of fermions.
The use of FSWAP gates can significantly reduce the complexity of quantum circuits for simulating fermionic systems. By intelligently rearranging the fermions, the number of gates required to simulate certain fermionic operations can be reduced, leading to more efficient simulations. This is particularly useful in simulations where fermions need to be moved across large distances within the system, as it can avoid the need for long chains of operations that would otherwise be required.
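A sketch of the FSWAP gate under one common convention (a SWAP followed by a -1 phase on the doubly occupied state |11>), together with a check that conjugating Jordan-Wigner mode operators by it really exchanges the two modes:

```python
# FSWAP = SWAP with an extra -1 phase on |11>. Conjugation by FSWAP maps
# a_0 -> a_1 and a_1 -> a_0 while preserving the fermionic sign structure.
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
FSWAP = SWAP @ np.diag([1, 1, 1, -1]).astype(complex)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
sigma_minus = (X + 1j * Y) / 2

a0 = np.kron(sigma_minus, I2)    # annihilate mode 0
a1 = np.kron(Z, sigma_minus)     # annihilate mode 1 (Z string on qubit 0)

print(np.allclose(FSWAP @ a0 @ FSWAP.conj().T, a1))   # True
print(np.allclose(FSWAP @ a1 @ FSWAP.conj().T, a0))   # True
```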
References
Quantum chemistry
Further reading | Quantum computational chemistry | Physics,Chemistry | 2,316 |
60,645,788 | https://en.wikipedia.org/wiki/ALG1-CDG | ALG1-CDG is an autosomal recessive congenital disorder of glycosylation caused by biallelic pathogenic variants in ALG1. The first cases of ALG1-CDG were described in 2004, and the causative gene was identified at the same time. This disorder was originally designated CDG-IK, under earlier nomenclature for congenital disorders of glycosylation. Clinically, individuals with ALG1-CDG have developmental delay, hypotonia, seizures and microcephaly. Fewer than 60 cases of ALG1-CDG have been confirmed in published literature. ALG1-CDG can be suspected based on clinical findings, and abnormal serum transferrin glycosylation test results. Confirmation of the diagnosis can be performed based on sequence analysis of ALG1. The analysis of ALG1 is complicated by the presence of a pseudogene. There are no specific treatments for ALG1-CDG, and most care consists of managing symptoms.
References
Congenital disorders of glycosylation
Autosomal recessive disorders | ALG1-CDG | Chemistry | 228 |
19,437,647 | https://en.wikipedia.org/wiki/Notes%20and%20Records | Notes and Records: the Royal Society Journal of the History of Science is an international, quarterly peer-reviewed academic journal which publishes original research in the history of science, technology, and medicine. The journal welcomes other forms of contribution including: research notes elucidating recent archival discoveries (in the collections of the Royal Society and elsewhere); news of research projects and online and other resources of interest to historians; book reviews, including essay reviews, on material relating primarily to the history of the Royal Society; recollections or autobiographical accounts written by Fellows and others recording important moments in science from the recent past. It is published by the Royal Society and the editor-in-chief is Anna Marie Roos supported by an eminent editorial board.
Notes and Records is fully compliant with the open access requirements of a range of funders including the HEFCE (REF 2020), AHRC, Scottish Funding Council, Wellcome Trust and European Commission. It is designated as "green" on the SHERPA/RoMEO website.
History
The journal was established in 1938 as the Notes and Records of the Royal Society, under the control of Henry Lyons with the help of the assistant secretary of the Royal Society.
It obtained its current name, Notes and Records: the Royal Society journal of the history of science in 2014.
Abstracting and indexing
The journal is abstracted and indexed in Scopus, Science Citation Index Expanded, Arts & Humanities Citation Index, Current Contents/Arts & Humanities, Current Mathematical Publications, and MathSciNet. According to the Journal Citation Reports, the journal has a 2022 impact factor of 0.4.
See also
Notes and Queries
References
External links
History of science journals
History of technology
History of medicine
Royal Society academic journals
Quarterly journals
Academic journals established in 1938
English-language journals | Notes and Records | Technology | 365 |
1,278,990 | https://en.wikipedia.org/wiki/List%20of%20the%2072%20names%20on%20the%20Eiffel%20Tower | On the Eiffel Tower, 72 names of French men (scientists, engineers, and mathematicians) are engraved in recognition of their contributions. Gustave Eiffel chose this "invocation of science" because of his concern over the protests against the tower, and chose names of those who had distinguished themselves since 1789. The engravings are found on the sides of the tower under the first balcony, in letters about tall, and were originally painted in gold. The engraving was painted over at the beginning of the 20th century and restored in 1986–87 by Société Nouvelle d'exploitation de la Tour Eiffel, the company that the city of Paris contracts to operate the Tower. The repainting of 2010–11 restored the letters to their original gold colour. There are also names of the engineers who helped build the Tower and design its architecture on a plaque on the top of the Tower, where a laboratory was built as well.
List
Location
The list is split in four parts (one for each side of the tower). The sides have been named after the parts of Paris that each side faces:
The North-East side (also known as La Bourdonnais side)
The South-East side (also known as the Military School side)
The South-West side (also known as the Grenelle side)
The North-West side (also known as the Trocadéro side)
Names
In the table below are all the names on the four sides.
Criticism
Women
The list contains no women. The list has been criticized for excluding the name of Sophie Germain, a noted French mathematician whose work on the theory of elasticity was used in the construction of the tower itself. In 1913, John Augustine Zahm suggested that Germain was excluded because she was a woman.
Hydraulic engineers and scholars
Fourteen hydraulic engineers and scholars are listed on the Eiffel Tower. Eiffel acknowledged most of the leading scientists in the field. Henri Philibert Gaspard Darcy is missing; some of his work did not come into wide use until the 20th century. Also missing are Antoine Chézy, who was less famous; Joseph Valentin Boussinesq, who was early in his career at the time; and mathematician Évariste Galois. Other famous French mathematicians are missing from the list: Joseph Liouville and Charles Hermite.
Notes
References
Further reading
Reprinted as
External links
Paris streets named for the 72 scientists
Eiffel
72 names on the Eiffel Tower
Eiffel Tower hall of fame
Eiffel Tower
Eiffel Tower list
Eiffel Tower
Eiffel Tower | List of the 72 names on the Eiffel Tower | Technology | 518 |
16,258,342 | https://en.wikipedia.org/wiki/Formalism%20%28philosophy%20of%20mathematics%29 | In the philosophy of mathematics, formalism is the view that holds that statements of mathematics and logic can be considered to be statements about the consequences of the manipulation of strings (alphanumeric sequences of symbols, usually as equations) using established manipulation rules. A central idea of formalism "is that mathematics is not a body of propositions representing an abstract sector of reality, but is much more akin to a game, bringing with it no more commitment to an ontology of objects or properties than ludo or chess." According to formalism, the truths expressed in logic and mathematics are not about numbers, sets, or triangles or any other coextensive subject matter — in fact, they are not "about" anything at all. Rather, mathematical statements are syntactic forms whose shapes and locations have no meaning unless they are given an interpretation (or semantics). In contrast to mathematical realism, logicism, or intuitionism, formalism's contours are less defined due to broad approaches that can be categorized as formalist.
Along with realism and intuitionism, formalism is one of the main theories in the philosophy of mathematics that developed in the late nineteenth and early twentieth century. Among formalists, David Hilbert was the most prominent advocate.
Early formalism
The early mathematical formalists attempted "to block, avoid, or sidestep (in some way) any ontological commitment to a problematic realm of abstract objects." German mathematicians Eduard Heine and Carl Johannes Thomae are considered early advocates of mathematical formalism. Heine and Thomae's formalism can be found in Gottlob Frege's criticisms in The Foundations of Arithmetic.
According to Alan Weir, the formalism of Heine and Thomae that Frege attacks can be "describe[d] as term formalism or game formalism." Term formalism is the view that mathematical expressions refer to symbols, not numbers. Heine expressed this view as follows: "When it comes to definition, I take a purely formal position, in that I call certain tangible signs numbers, so that the existence of these numbers is not in question."
Thomae is characterized as a game formalist who claimed that "[f]or the formalist, arithmetic is a game with signs which are called empty. That means that they have no other content (in the calculating game) than they are assigned by their behaviour with respect to certain rules of combination (rules of the game)."
Frege provides three criticisms of Heine and Thomae's formalism: "that [formalism] cannot account for the application of mathematics; that it confuses formal theory with metatheory; [and] that it can give no coherent explanation of the concept of an infinite sequence." Frege's criticism of Heine's formalism is that his formalism cannot account for infinite sequences. Dummett argues that more developed accounts of formalism than Heine's account could avoid Frege's objections by claiming they are concerned with abstract symbols rather than concrete objects. Frege objects to the comparison of formalism with that of a game, such as chess. Frege argues that Thomae's formalism fails to distinguish between game and theory.
Hilbert's formalism
A major figure of formalism was David Hilbert, whose program was intended to be a complete and consistent axiomatization of all of mathematics. Hilbert aimed to show the consistency of mathematical systems from the assumption that the "finitary arithmetic" (a subsystem of the usual arithmetic of the positive integers, chosen to be philosophically uncontroversial) was consistent (i.e. no contradictions can be derived from the system).
The way that Hilbert tried to show that an axiomatic system was consistent was by formalizing it using a particular language. In order to formalize an axiomatic system, a language must first be chosen in which operations can be expressed and performed within that system. This language must include five components:
It must include variables such as x, which can stand for some number.
It must have quantifiers, such as the symbol ∃ for the existence of an object.
It must include equality.
It must include connectives such as ↔ for "if and only if."
It must include certain undefined terms called parameters. For geometry, these undefined terms might be something like a point or a line, which we still choose symbols for.
By adopting this language, Hilbert thought that all theorems could be proven within any axiomatic system using nothing more than the axioms themselves and the chosen formal language.
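As an illustration (not drawn from Hilbert's own writings), a geometric statement such as "any two distinct points lie on a common line" could be written in such a language, using variables, quantifiers, equality, connectives, and the undefined terms Pt, Ln and On as parameters:

```latex
% Illustrative formalization only; the predicate names Pt, Ln, On are
% chosen here for readability and are not Hilbert's notation.
\forall x \,\forall y \,\Bigl( \bigl(\mathrm{Pt}(x) \land \mathrm{Pt}(y) \land \lnot (x = y)\bigr)
  \rightarrow \exists z \,\bigl(\mathrm{Ln}(z) \land \mathrm{On}(x,z) \land \mathrm{On}(y,z)\bigr) \Bigr)
```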
Gödel's conclusion in his incompleteness theorems was that one cannot prove consistency within any consistent axiomatic system rich enough to include classical arithmetic. On the one hand, only the formal language chosen to formalize this axiomatic system must be used; on the other hand, it is impossible to prove the consistency of this language in itself. Hilbert was originally frustrated by Gödel's work because it shattered his life's goal to completely formalize everything in number theory. However, Gödel did not feel that he contradicted everything about Hilbert's formalist point of view. After Gödel published his work, it became apparent that proof theory still had some use, the only difference is that it could not be used to prove the consistency of all of number theory as Hilbert had hoped.
Hilbert was initially a deductivist, but he considered certain metamathematical methods to yield intrinsically meaningful results and was a realist with respect to the finitary arithmetic. Later, he held the opinion that there was no other meaningful mathematics whatsoever, regardless of interpretation.
Further developments
Other formalists, such as Rudolf Carnap, considered mathematics to be the investigation of formal axiom systems.
Haskell Curry defines mathematics as "the science of formal systems." Curry's formalism is unlike that of term formalists, game formalists, or Hilbert's formalism. For Curry, mathematical formalism is about the formal structure of mathematics and not about a formal system. Stewart Shapiro describes Curry's formalism as starting from the "historical thesis that as a branch of mathematics develops, it becomes more and more rigorous in its methodology, the end-result being the codification of the branch in formal deductive systems."
Criticism
Kurt Gödel indicated one of the weak points of formalism by addressing the question of consistency in axiomatic systems.
Bertrand Russell has argued that formalism fails to explain what is meant by the linguistic application of numbers in statements such as "there are three men in the room".
See also
QED project
Mathematical formalism
Formalized mathematics
Formal system
References
External links
Philosophy of mathematics | Formalism (philosophy of mathematics) | Mathematics | 1,370 |
8,350,356 | https://en.wikipedia.org/wiki/Formstone | Formstone is a type of stucco commonly applied to brick rowhouses in many East Coast urban areas in the United States, although it is most strongly associated with Baltimore. As a form of simulated masonry, Formstone is commonly colored and shaped on the building to imitate various forms of masonry compound, creating the trompe-l'œil appearance of stone.
History and popularity
Formstone was patented by Lewis Albert Knight of the Baltimore-based Lasting Products Company in 1937, although a similar product named Permastone had been invented in Columbus, Ohio, eight years prior. The name Formstone was actually a brand name used by Knight. Permastone, Fieldstone, Dixie Stone, and Stone of Ages were names used for a product similar to Knight's Formstone, particularly in other cities.
Baltimore
Formstone was used widely in Baltimore city. Formstone was primarily used in remodeling but could also be used for new construction. Film director and Baltimore native John Waters described Formstone as "the polyester of brick." Baltimore became the "Formstone capital of the world."
The Baltimore-based Lasting Products Company, the parent company of Formstone, was later known as the Formstone Co. Not long after Formstone was invented, the Lasting Products Company disbanded during World War II. When business resumed, the majority of buildings faced in Formstone were single-family homes. It was not long until rowhouse homeowners in more urban and working-class areas of Baltimore wanted the clean and polished look of Formstone. The company opened successful franchise locations in various cities across the United States but Baltimore was the epicenter of the Formstone phenomenon. The company provided all of the tools and materials needed to complete a Formstone project and trained registered contractors on how to sell and apply Formstone. There were many competitors in the 1950s and 1960s, some with salesmen who advertised different materials under the Formstone product name. The competitors used a simulated stone product similar to Formstone but with different variations in patterns, colors, and application.
Formstone’s popularity came from the promise that it was inexpensive, maintenance-free, and more energy-efficient. It also gave Baltimore rowhouses a more modern look. A 1950 advertisement for Formstone revealed the “secret of its popularity: weatherproof and insulating forever; first cost is the last; no upkeep or repair…lasting beauty for exteriors or interiors; tried and proven; fully guaranteed.” Formstone was meant to be a maintenance-free alternative to the low-quality brick that many early Baltimore rowhouses were constructed with. These brick buildings required a lot of upkeep and frequent painting. But for the cost of three paint jobs, Formstone could be applied to the building’s exterior and eliminate much of the effort to maintain the exterior brick.
At the height of its popularity in the 1950s, Formstone was a sign of wealth and stability in the working class neighborhoods of Baltimore. But the longevity of Formstone was not living up to the company’s promises and the Formstone Co. went out of business in the late 1960s. Aluminum and vinyl siding, much cheaper ways to weather-proof buildings, became more popular and contributed to the decline of Formstone and other simulated stone products.
San Francisco
Formstone, described as "[a]n odd architectural fad" by urban design critic John King, appeared in San Francisco in the 1930s and '40s. While not particularly common, it is still found around the city. As in San Francisco, the debate over whether Formstone is historic or should be removed continues in Washington, DC, where it was used to mimic granite and other stone in a historic city famous for its real stone buildings.
Application
Formstone is mixed on-site and applied directly to a building’s exterior. Knight wanted to provide a process that used the tools of masonry and cement finishers so they could easily follow the application process. Formstone is applied in three layers, anchored by a perforated metal lath attached to the underlying brick with nails. Galvanized mesh was used in many instances to reduce the likelihood of rusting. The three layers are cement-based concrete. The first layer of cement mortar is 3/8” to 3/4” thick and it is scored before it dries. The second layer is between 1/4” and 3/8” thick. The top layer, or finish layer, is also between 1/4” and 3/8” thick and is applied while the second layer is still plastic. While the finish layer is still wet, it is hand-sculpted into the shape of stones. The finish layer contains the coloration used to imitate stone and is textured using waxed paper and an aluminum roller. Mica could also be sprayed on the surface to give the Formstone a sparkly, clean look. Mortar joints are then scored into the top layer to mimic natural stone construction.
Preservation issues
One major failure of Formstone is that the metal lath holding the faux stone to the building can start to pull away from the brick. Without a strong bond between the Formstone and the underlying brick, moisture is allowed to enter between the two materials and become trapped. Applying Formstone to rowhouses constructed with early brick from the eighteenth and nineteenth centuries caused many problems. This early brick was soft, porous, and susceptible to deterioration. Formstone prevents the historic brick from breathing and the accumulation of moisture causes cracks to form. This moisture combined with the freeze-thaw cycle can damage the Formstone material and, if left uncorrected, can lead to further deterioration and penetration of moisture into the underlying brick. This can lead to spalling, efflorescence, and loosened mortar joints on the brick façade. Formstone is only waterproof as long as it does not deteriorate and separate from the wall.
Another preservation issue stems from the application of the Formstone. When it was applied to the exterior façade of a building, historically significant architectural features were often covered up or removed. Features such as cornices, belt courses, lintels, and sills were not only decorative, they were necessary for diverting water away from the building, leading to even more damage from moisture intrusion.
When a building owner decides to remove the Formstone, historical fabric and significant features can be damaged during this process. When the metal lath is removed, it leaves the original poor quality brick surface pock-marked with holes in the mortar joints. This requires cleaning and repointing of the brick, and sometimes replacement of severely damaged brick, which can be very expensive.
Historical significance
There is debate over the historical significance of Formstone. Because it was usually applied to buildings long after their initial construction, Formstone is viewed by some as an inauthentic addition that detracts from the historical significance of the building. But some historic preservationists, architects, and citizens, particularly in the city of Baltimore, argue that Formstone has acquired its own historical significance as it has become a part of the Baltimore landscape and is representative of the history and evolution of the city’s working-class neighborhoods.
References
Works cited
Blasius, Elizabeth. "Permastone and Formstone: Modern Marvels or the Margarine of Architecture." MAS Context, 2024.
McKee, Ann Milkovich. "Stonewalling America: Simulated Stone Products", Twentieth Century Building Materials, McGraw-Hill, 1995.
Williams, Paul K. "The Faux Stone Follies", Old House Journal online, Washington DC, June 2003
Building materials
Architecture in the United States
Architecture in Maryland
Culture of Baltimore | Formstone | Physics,Engineering | 1,552 |
52,102,799 | https://en.wikipedia.org/wiki/G.%20A.%20Mansoori | Gholam Ali Mansoori (born in 1943), G. Ali Mansoori also known as "GA Mansoori" is an Iranian-American scientist known for his research within energy, nanotechnology and thermodynamics. He is a professor at the Departments of Bioengineering, Chemical Engineering and also Physics at University of Illinois at Chicago.
Life and education
Mansoori completed his PhD at the University of Oklahoma in 1969 with a dissertation on "A Variational Approach to the Equilibrium Thermodynamic Properties of Simple Liquids and Phase Transitions". Mansoori did post-doctoral work at Rice University.
Career
Mansoori contributed to over 550 publications including ten books, some of which became text-book references in thermodynamics and nanotechnology. The most cited work he has co-authored is Equilibrium thermodynamic properties of the mixture of hard spheres in The Journal of Chemical Physics in 1971 with more than 2000 citations as of February 2019.
Books
A list of Mansoori's books is:
Mansoori, G.A. and Enayati N., Agyarko L. B. (2015). Energy: Sources, Utilization, Legislation, Sustainability, Illinois as Model State. World Scientific
Mansoori, G.A. (2015) Principles of Nanotechnology, Molecular-Based Study of Condensed Matter in Small Systems. World Scientific
Mansoori, G.A., Barros de Araujo Patricia Lopes, Silvano de Araujo, Elmo (2012) Diamondoid Molecules: With Applications in Biomedicine, Materials Science, Nanotechnology & Petroleum Science.
Mansoori, G.A. and Haile, J.M. (1983). Molecular-based Study of Fluids (ch. 1: Molecular Study of Fluids: A Historical Survey, pgs. 1-28).
Mansoori, G.A. and Chom, Larry G. (1988). Advances in Thermodynamics, Vol. I: C7+ Fraction Characterization. New York: Taylor & Francis Pub. Co.,
Mansoori, G.A. and Matteoli, E. (1990). Advances in Thermodynamics, Vol. II: Fluctuation Theory of Mixtures. Taylor & Francis.
Mansoori, G.A., Sieniutycz, S. and Salamon, P. (1990). Advances in Thermodynamics, Vol. III: Nonequilibrium Theory and Extremum Principles. Taylor & Francis.
Mansoori, G.A. and Hoffman, E.J. (1991). Advances in Thermodynamics, Vol. V: Analytic Thermodynamics. Taylor & Francis.
Mansoori, G.A., Sieniutycz, S. and Salamon, P. (1992). Advances in Thermodynamics, Vol. VI: Diffusion and rate Processes. Taylor & Francis.
Mansoori, G.A., Sieniutycz, S. and Salamon, P. (1992). Advances in Thermodynamics, Vol. VII: Extended Thermodynamic Systems. Taylor & Francis.
References
External links
Professor's Mansoori webpage on UIC
1943 births
Living people
University of Illinois Chicago faculty
Iranian expatriate academics
Academics from Chicago
University of Oklahoma alumni
Rice University alumni
Thermodynamicists
American nanotechnologists | G. A. Mansoori | Physics,Chemistry | 729 |
26,016,875 | https://en.wikipedia.org/wiki/Bermuda%20National%20Grid | The Bermuda National Grid 2000 (BNG) is a kind of Transverse Mercator projection. It is not a Universal Transverse Mercator (UTM) projection, as it has an origin and other parameters that are different from those used in UTM.
Grid Parameters:
References
Online resources:
Convert from BNG to lat/lon in Google earth
Eye4Software's online converter.
Map projections
Geography of Bermuda | Bermuda National Grid | Mathematics | 90 |
2,583,446 | https://en.wikipedia.org/wiki/Giovanni%20Battista%20Brocchi | Giovanni Battista (or Giambattista) Brocchi (18 February 177225 September 1826) was an Italian naturalist, mineralogist and geologist.
Biography
Giovanni Battista Brocchi was born in Bassano del Grappa and studied jurisprudence at the University of Padua, but his attention was turned to mineralogy and botany. The Bassanese naturalist Antonio Gaidon, guided him towards his first scientific studies and was Brocchi's first master in the geological and mineralogical disciplines. Gaidon introduced Brocchi to the naturalists Giuseppe Olivi and Alberto Fortis, the latter accompanying Brocchi on geological excursions in the Bassano area. In 1802 he was appointed professor of botany in the new lyceum of Brescia; but he more especially devoted himself to geological researches in the adjacent districts. The fruits of these labors appeared in different publications, particularly in his Trattato mineralogico e chimico sulle miniere di ferro del dipartimento del Mella (1808) a treatise on the iron mines of the Mella traditional region. These researches procured him the office of inspector of mines in the recently established Kingdom of Italy, and enabled him to extend his investigations over a great part of the country.
In 1811 Brocchi produced a valuable essay entitled Memoria mineralogica sulla Valle di Fassa in Tirolo; but his most important work is the Conchiologia fossile subapennina con osservazioni geologiche sugli Apennini, e sul suolo adiacente (2 vols., Milan, 1814), containing accurate details of the structure of the Apennine range, and an account of the marine shell fossils of the Italian Tertiary strata compared with existing species. These subjects were further illustrated by his geognostic map, and his Catalogo ragionato di una raccolta di rocce, disposto con ordine geografico, per servire alla geognosia dell' Italia (Milan, 1817). His work Dello stato fisico del suolo di Roma (1820), with its accompanying map, is likewise noteworthy. In it he corrected the erroneous views of Scipione Breislak, who conceived that Rome occupies the site of a volcano, to which he ascribed the volcanic materials that cover the seven hills. Brocchi pointed out that these materials were derived either from Monte Albano, an extinct volcano, twelve miles from the city, or from the Monti Cimini, still farther to the north.
In 1814 Brocchi presented the thesis that species, like individuals, age and eventually die out — an idea that later influenced Charles Darwin.
Several papers by him, on mineralogical subjects, appeared in the Biblioteca Italiana from 1816 to 1823. In the latter year, Brocchi sailed for Egypt, in order to explore the geology of that country and report on its mineral resources. Every facility was granted by Mehemet Ali, who in 1825 appointed him one of a commission to examine the territory of the recently conquered Kingdom of Sennar; but Brocchi fell a victim to the climate, and died at Khartoum on the 25th of September 1826, possibly of dysentery. Much of his writings and collections are now housed in the Museo Civico di Bassano.
References
Further reading
Stefano Dominici; Niles Eldredge. (2010). Brocchi, Darwin, and Transmutation: Phylogenetics and Paleontology at the Dawn of Evolutionary Biology. Evo Edu Outreach 3: 576–584.
1772 births
1826 deaths
19th-century Italian botanists
Italian naturalists
19th-century Italian geologists
Italian paleontologists
People from Bassano del Grappa
Proto-evolutionary biologists
Republic of Venice scientists | Giovanni Battista Brocchi | Biology | 769 |
42,055,328 | https://en.wikipedia.org/wiki/Electric%20Avenue%20%28TV%20series%29 | Part of the BBC Computer Literacy Project, Electric Avenue was a late-night TV Series starting with an initial 10-episode series in 1988. The show followed Micro Live and was presented by Fred Harris.
Programmes
The first series was split into 10 programmes, each about 24 minutes long and dealing with a particular subject area. They were as follows (original air-dates in brackets):
The By-Product - (24 October 1988)
The Machine - (31 October 1988)
Well Connected - (7 November 1988)
What Next? - (14 November 1988)
New Directions - (28 November 1988)
Chips and Drumsticks - (5 December 1988)
Housewives Choice? - (12 December 1988)
Money Talks - (9 January 1989)
Safety First - (16 January 1989)
The Design Machine - (23 January 1989)
In 1990 a second series aired, with 5 further episodes:
Computing the President - (15 January 1990)
The Experts' Expert - (22 January 1990)
Computers Can't Go Wrong, Can They? - (29 January 1990)
Computers: A Cautionary Tale - (5 February 1990)
Home Bleep Home - (12 February 1990)
References
External links
Electric Avenue on BBC Computer Literacy Project
Computer television series
1988 British television series debuts
1990 British television series endings
British English-language television shows
BBC One original programming
BBC Two original programming | Electric Avenue (TV series) | Technology | 274 |
338,849 | https://en.wikipedia.org/wiki/The%20Last%20Starfighter | The Last Starfighter is a 1984 American space opera film directed by Nick Castle. The film tells the story of Alex Rogan (Lance Guest), a teenager who, after winning the high score in an arcade game that's secretly a simulation test, is recruited by an alien defense force to fight in an interstellar war. It also features Dan O'Herlihy, Catherine Mary Stewart, and Robert Preston in his final role in a theatrical film. The character of Centauri, a "lovable con-man", was written with him in mind and was a nod to his most famous role as Professor Harold Hill in The Music Man (1962).
The Last Starfighter was released on July 13, 1984 by Universal Pictures. It received $28.7 million in the worldwide box office, against a budget of $15 million, and positive reviews from critics. The film, along with Walt Disney Pictures' Tron (1982), has the distinction of being one of cinema's earliest films to use extensive "real-life" computer-generated imagery (CGI) to depict its many starships, environments, and battle scenes. There was a subsequent novelization of the film by Alan Dean Foster, as well as a video game based on the production. In 2004, it was also adapted as an off-Broadway musical.
Plot
Alex Rogan is a teenager living in a trailer park with his mother and younger brother Louis, spending most of his spare time as the park's ad hoc handyman. Aside from his girlfriend Maggie, Alex's only diversion from his mundane existence is an arcade game called Starfighter, in which the player is "recruited by the Star League to defend the Frontier against Xur and the Ko-Dan Armada" in a space battle. On the evening he breaks the game's record as its highest-scoring player, Alex becomes angry and depressed on learning his bank loan for a college tuition has been rejected.
The inventor of Starfighter, Centauri, arrives in a futuristic car with a proposition for Alex. Centauri is in fact a disguised alien and his car a spacecraft. Alex is taken to the planet Rylos while Beta, a doppelgänger android, is used to cover Alex's absence. Alex learns there is actually a real conflict between a Star League of peaceful worlds and the oppressive Ko-Dan Empire; the latter's armada, poised to invade Rylos, is led by Xur, a tyrannical Rylan traitor who has sabotaged the Frontier forcefield shielding Rylos and other worlds from the Ko-Dan. The last line of defense against the armada is a small fleet of Gunstar spacecraft, operated by "Navigators" paired with gunners called "Starfighters". Centauri's Starfighter arcade game is a recruiting tool designed to train Starfighters. Alex meets a friendly reptilian Navigator named Grig, and explains his unwillingness to take part in the coming conflict. Grig sympathizes with Alex while Centauri tries to persuade him to stay, touting him as a gifted Starfighter.
Xur contacts Starfighter Command as Alex watches. After publicly executing a Star League spy, Xur threatens Rylos with imminent invasion, and an unnerved Alex asks to be taken home. On Earth, a disappointed Centauri gives Alex a means to contact him should he change his mind. A saboteur eliminates Starfighter Command's defenses and the Ko-Dan attack, killing the Starfighters and destroying their Gunstars. The saboteur warns Xur of Alex's escape.
Alex discovers Beta and contacts Centauri to retrieve him. Centauri arrives just as Alex and Beta are attacked by a Zando-Zan, an alien assassin in Xur's service. Centauri is wounded protecting Alex, and he and Beta explain that more of them will be on their way to Earth; the only way for Alex to protect himself, his family, and his planet is to embrace his ability as a Starfighter. Alex agrees, and Centauri flies Alex back to Starfighter Command before succumbing to his injury. Alex and Grig take off in a prototype Gunstar which survived the earlier attack.
While Grig mentors Alex, Beta finds it difficult to maintain his impersonation, particularly with Maggie. When another Zando-Zan shoots Beta in front of Maggie, revealing to both that Beta is an android imposter, Beta tells her the truth. They steal a pickup truck and chase the Zando-Zan back to its ship as it attempts to warn Xur. Beta has Maggie jump out before sacrificing himself by crashing the truck into the ship, destroying both and preventing the assassin's warning from being sent.
The arrogant Xur assumes Alex has been eliminated and orders the armada to invade, but Alex and Grig ambush his command ship from behind. Ko-Dan Commander Kril orders Xur's arrest, but Alex's attack severely damages the command ship's weapons and communications with its fighters, and Xur escapes in the confusion. Alex and Grig attack the Ko-Dan fighters but are outnumbered and overwhelmed. Alex desperately activates a secret weapon that quickly destroys the remaining fighters. Kril attempts to ram them, but Alex cripples the command ship further, causing it to crash into Rylos' moon.
Alex is proclaimed the savior of Rylos, and is persuaded to stay and help rebuild the Star League's Starfighter legion by Grig, Rylan Ambassador Enduran, and a recovered Centauri. Alex and Grig briefly return to Earth, landing their Gunstar in the trailer park, where Grig tells its residents of Alex's heroism. Alex bids his family farewell and asks Maggie to come with him, and she agrees. Inspired, Louis begins playing the Starfighter game.
Cast
Lance Guest as Alex Rogan / Beta Alex Rogan
Robert Preston as Centauri
Dan O'Herlihy as Grig
Catherine Mary Stewart as Maggie Gordon
Norman Snow as Xur
Kay E. Kuter as Ambassador Enduran
Barbara Bosson as Jane Rogan
Chris Hebert as Louis Rogan
Dan Mason as Lord Kril
Vernon Washington as Otis
John O'Leary as Rylan Bursar
George McDaniel as Kodan 1st Officer
Charlene Nelson as Rylan Technician
John Maio as Friendly Alien
Al Berry as Rylan Spy
Scott Dunlop as Tentacle Alien
Peter Nelson as Jack Blake
Peggy Pope as Elvira
Meg Wyllie as Granny Gordon
Ellen Blake as Clara Potter
Britt Leach as Mr. Potter
Bunny Summers as Mrs. Boone
Owen Bush as Mr. Boone
Marc Alaimo as Hitchhiker
Wil Wheaton as Louis' Friend
Cameron Dye as Andy
Geoffrey Blake as Gary
Production
The Last Starfighter was shot in 38 days, mostly night shoots in Canyon Country. It was one of the earliest films to make extensive use of computer graphics for its special effects. In place of physical models, 3D rendered models were used to depict space ships and many other objects. The Gunstar and other spaceships were designed by artist Ron Cobb, who also worked on Dark Star, Alien, Star Wars and Conan the Barbarian.
Computer graphics for the film were rendered by Digital Productions (DP) on a Cray X-MP supercomputer. The company created 27 minutes of effects for the film. This was considered an enormous amount of computer generated imagery at the time. For the 300 scenes containing computer graphics in the film, each frame of the animation contained an average of 250,000 polygons and had a resolution of 3000 × 5000 36-bit pixels. Digital Productions estimated that using computer animation required only half the time and between a third and half of the cost of traditional special effects. The result was a cost of $14 million for a film that made close to $29 million at the box office.
DP used Fortran (the CFT77 compiler) for programming.
Not all special effects in the film were done with computer animation. The depiction of the Beta unit before it had taken Alex's form was a practical effect, created by makeup artist Lance Anderson. The Starcar, created by Gene Winfield and driven by Centauri, was a working vehicle based on Winfield's Spinner designs from Blade Runner.
Because the test audiences responded positively to the Beta Alex character, director Nick Castle added more scenes of Beta Alex interacting with the trailer park community. Because Lance Guest had cut his hair short after initial filming had been completed and he contracted an illness during the re-shoots, his portrayal of Beta Alex in the added scenes has him wearing a wig and heavy makeup. Wil Wheaton had a few lines of dialogue that were ultimately cut from the film, but he still is visible in the background of several scenes.
Music
Composer Craig Safan wanted to go "bigger than Star Wars" and therefore utilized a "Mahler-sized" orchestra, resulting in an unusual breadth of instruments, including "quadruple woodwinds" and "eight trumpets, [trombones], and horns!"
Reception
Critical response
At the review aggregator website Rotten Tomatoes, The Last Starfighter received an approval rating of 76%, based on 90 reviews, with an average rating of 6.5/10. The website's critical consensus reads: "While The Last Starfighter is clearly derivative of other sci-fi franchises, its boundary-pushing visual effects and lovably plucky tone make for an appealing adventure". Metacritic gave the film a score of 67 based on 8 reviews, indicating "generally favorable reviews". Over time, it has developed a cult following.
Roger Ebert of The Chicago Sun-Times gave the film two-and-a-half out of four stars. While the actors were good, particularly Preston and O'Herlihy, Ebert wrote The Last Starfighter was "not a terrifically original movie" but it was nonetheless "well-made". Colin Greenland reviewed The Last Starfighter for Imagine magazine, and stated that "apart from a mildly amusing little sub-plot with the android replica left on Earth to conceal his absence, Alex's adventure is strictly the movie of the video game: simple as can be, and pitched at a pre-teen audience who can believe Alex and Grig blasting a hundred alien ships and escaping without a scratch." Halliwell's Film Guide described the film as "a surprisingly pleasant variation on the Star Wars boom, with sharp and witty performances from two reliable character actors and some elegant gadgetry to offset the teenage mooning".
In 2017, Variety described it as having "a simple yet ingenious plot" and added that "the action is suitably fast and furious, but what makes the movie especially enjoyable are the quirky character touches given to Guest and his fellow players." Variety also noted that film critic Gene Siskel described The Last Starfighter as the best of all Star Wars imitators. Alan Jones awarded it three stars out of five for Radio Times, writing that it was a "glossy, space-age fairy tale" and "highly derivative — Star Trek-like aliens have Star Wars-inspired dog-fights against a computer-graphic backdrop — but the sensitive love story between Guest and Catherine Mary Stewart cuts through the cuteness and gives the intergalactic adventures a much-needed boost."
Adaptations
The Last Starfighter's popularity has resulted in several non-film adaptations of the storyline and uses of the name.
Musical
A musical adaptation was first produced at the Storm Theatre Off-Off Broadway in New York City in October 2004, with music and lyrics by Skip Kennon and book by Fred Landau. In November 2005, the original cast recording was released on the Kritzerland label.
Books
Alan Dean Foster wrote a novelization of the film shortly after it was released ().
Comics
The year the film was released, Marvel Comics published a comic book adaptation by writer Bill Mantlo and artists Bret Blevins and Tony Salmons in Marvel Super Special #31. The adaptation was also available as a three-issue limited series.
Games
In 1984, FASA, a sci-fi tabletop game maker, created The Last Starfighter: Tunnel Chase board game.
Video games
Arcade
A real The Last Starfighter arcade game by Atari, Inc. is promised in the end credits, but was never released. If released, the game would have been Atari's first 3D polygonal arcade game to use a Motorola 68000 as the CPU. Gameplay was to have been taken from game scenes and space battle scenes in the film, and used the same controller used on the first Star Wars arcade game. The game was abandoned once Atari representatives saw the film in post-production and decided it was not going to be a financial success.
Home
Home versions of the game for the Atari 2600 and Atari 5200 consoles and Atari 8-bit computers were also developed, but never commercially released under the Last Starfighter name. The home computer version was eventually renamed and released (with some minor changes) as Star Raiders II. A prototype exists for the Atari 2600 Last Starfighter game, which was in actuality a game already in development by Atari under the name Universe. The game was eventually released as Solaris.
In 1990, an NES game titled The Last Starfighter was released, but was actually a conversion of Uridium for Commodore 64, with modified sprites, title screen and soundtrack.
A freeware playable version of the game, based on what is seen in the film, was released for PC in 2007. This is a faithful reproduction of the arcade game from the film, featuring full sound effects and music from the game. Game creators Rogue Synapse also built a working arcade cabinet of the game.
Potential sequel
In February 2008, production company GPA Entertainment added "Starfighter – The sequel to the classic motion picture Last Starfighter" to its list of projects. But two months later, the project was reported to be "stuck in the pre-production phase", and it reportedly remained there for years. Hollywood directors including Seth Rogen and Steven Spielberg, as well as screenwriter Gary Whitta, expressed interest in creating a sequel or remake, but the potential sequel rights-holder, Jonathan R. Betuel, has allegedly indicated that he does not want another film made.
The rights to the film have not been clearly defined due to conflicting information. Multiple sources say Universal Pictures still owns the theatrical and home media distribution rights while Warner Bros., which absorbed Lorimar-Telepictures (Lorimar's successor) in 1990, has the international distribution rights. Another source states that Universal has the option to remake the film while Betuel has sequel rights. Further complicating the situation is a claim that both Universal and Warner Bros. each have remake and sequel rights.
In July 2015, it was reported that Betuel would write a TV reboot of the film.
On April 4, 2018, Whitta posted concept art for The Last Starfighter sequel on his Twitter account. In the same tweet, he also indicated that Betuel would be collaborating with him on the project. In a follow-up interview with Gizmodo, Whitta referred to the project as "a combination of reboot and sequel that we both think honors the legacy of the original film while passing the torch to a new generation."
On October 20, 2020, Betuel stated that, with Whitta, a script for a sequel was being written and the rights to the film had been recaptured.
On March 25, 2021, Whitta posted a sequel concept reel on YouTube called The Last Starfighters with concept art by Matt Allsopp and music by Chris Tilton and Craig Safan, featuring an audio clip from the original movie by Robert Preston.
See also
Ender's Game — a 1977 short story/novelette by Orson Scott Card
Armada — A 2015 novel by Ernest Cline with a similar premise
References
External links
Animation Timeline from Brown University
The Last Starfighter video game
Arcade game specifications by Atari
Podcast about The Last Starfighter by the Retroist
1984 films
1980s coming-of-age films
1980s science fiction action films
American coming-of-age films
American science fiction action films
American science fiction war films
American space adventure films
American space opera films
Films about computing
Films about video games
Films adapted into comics
Films directed by Nick Castle
Films scored by Craig Safan
Films set on fictional planets
Films set on spacecraft
Films set in California
Films shot in California
Fiction about flying cars
Universal Pictures films
1980s English-language films
1980s American films
1984 science fiction films
English-language science fiction action films | The Last Starfighter | Technology | 3,379 |
28,384,824 | https://en.wikipedia.org/wiki/Biochimie | Biochimie is a monthly peer-reviewed scientific journal covering the fields of biochemistry, biophysics, and molecular biology. It is published by Elsevier on behalf of the . The journal, started as a French language publication, is also open to English language articles since the late 2000s.
The editor-in-chief is Bertrand Friguet, who succeeded Richard H. Buckingham.
History
The journal was established in 1914 under the title Bulletin de la Société de Chimie Biologique, obtaining its current title in 1971.
Abstracting and indexing
The journal is abstracted and indexed in multiple bibliographic databases.
According to the Journal Citation Reports, the journal has a 2021 impact factor of 4.372.
References
External links
Société Française de Biochimie et de Biologie Moléculaire
Biochemistry journals
Elsevier academic journals
Publications established in 1914
Monthly journals
English-language journals | Biochimie | Chemistry | 172 |
33,287,071 | https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2019 | In molecular biology, Glycoside hydrolase family 19 is a family of glycoside hydrolases , which are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes. y[ _]9
Glycoside hydrolase family 19 (CAZY GH_19) comprises enzymes with only one known activity: chitinase (EC 3.2.1.14).
Chitinases are enzymes that catalyze the hydrolysis of the beta-1,4-N-acetyl-D-glucosamine linkages in chitin polymers. Chitinases belong to glycoside hydrolase families 18 or 19. Chitinases of family 19 (also known as classes IA or I and IB or II) are enzymes from plants that function in the defence against fungal and insect pathogens by destroying their chitin-containing cell wall. Class IA/I and IB/II enzymes differ in the presence (IA/I) or absence (IB/II) of a N-terminal chitin-binding domain. The catalytic domain of these enzymes consist of about 220 to 230 amino acid residues.
Active site
GH19 enzymes have a conserved sequence motif ([FHY]-G-R-G-[AP]-ζ-Q-[IL]-[ST]-[FHYW]-[HN]-[FY]-[NY], where ζ is a hydrophilic amino acid) in their active site.
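A small sketch of scanning a protein sequence for this motif with a regular expression; the residue set used for the hydrophilic position ζ and the example sequence are assumptions made for illustration only:

```python
# Scan a protein sequence for the GH19 active-site motif described above.
import re

HYDROPHILIC = "STNQDEKRH"   # assumed set of hydrophilic residues (illustrative)
motif = re.compile(rf"[FHY]GRG[AP][{HYDROPHILIC}]Q[IL][ST][FHYW][HN][FY][NY]")

sequence = "MKTAYHGRGASQISWHFNLLK"   # made-up example sequence
match = motif.search(sequence)
if match:
    print(f"Motif found at position {match.start()}: {match.group()}")
else:
    print("No GH19 active-site motif found")
```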
References
EC 3.2.1
Glycoside hydrolase families
Protein families | Glycoside hydrolase family 19 | Biology | 406 |
53,498,449 | https://en.wikipedia.org/wiki/Hajra%20Waheed | Hajra Waheed is a Montréal-based artist. Her multimedia practice includes works on paper, collage, sound, video, sculpture and installation. Waheed uses news accounts, extensive research and personal histories to critically examine multiple issues including: covert power, mass surveillance, cultural distortion and the traumas of displacement caused by colonialism and mass migration.
Waheed was born in 1980 in Canada. She has complex ties and relationships to North America, the Middle East and South Asia. She grew up within the gated compound of Saudi ARAMCO in Dhahran. She studied at the Art Institute of Chicago where she received her BFA in advanced painting and art history, in 2002. She moved to Montréal in 2005 and completed her MA at McGill University in 2007. At 34, Waheed received the Victor Martyn Lynch-Staunton Award for Outstanding Achievement as a Canadian Mid-Career Visual Artist. She was shortlisted for the Sobey Art Award in 2016.
Waheed's works are in the collections of the Museum of Modern Art, the British Museum, the Devi Art Foundation, Samdani Art Foundation, the Musée d'art contemporain de Montréal and the National Gallery of Canada.
Exhibitions
(In) The First Circle, Antoni Tàpies Foundation, Barcelona (2012)
Lines of Control, Herbert F. Johnson Museum of Art, Ithaca, (2012)
Field Notes and Other Backstories, Art Gallery of Windsor, Windsor, (2013)
Collages: Gesture and Fragments, Musée d'art contemporain de Montréal, Montréal, (2014)
Lines of Control, The Nasher Museum of Art, Duke University, (2014)
La Biennale de Montréal, Musée d'art contemporain de Montréal, Montréal, (2014)
Asylum in the Sea, Fonderie Darling, Montréal, 2015
Still Against the Sky, KW Institute for Contemporary Art, London, (2015)
The Missing One, Dhaka Art Summit, Dhaka, (2016)
Sobey Art Award Exhibition, National Gallery of Canada, Ottawa, (2016)
The Eighth Climate (What Does Art Do?), 11th Gwangju Biennale, Gwangju, (2016)
Sea Change - Chapter 1, Character 1: In the Rough, Mosaic Rooms, London (2016)
The Cyphers, BALTIC Centre for Contemporary Art, Gateshead, (2016)
Farewell Photography, Biennale für aktuelle Fotografie, Kunstverein Ludwigshafen, Ludwigshafen, (2017)
Turbulent Landings: NGC Canadian Biennial, Art Gallery of Alberta, Edmonton, (2017)
Viva Arte Viva, Venice Biennale, Venice, (2017)
The Video Installation Project, Musée d'art contemporain de Montréal, Montréal, (2017)
Hold Everything Dear, The Power Plant, Toronto, (2019)
References
External links
1980 births
Living people
School of the Art Institute of Chicago alumni
McGill University alumni
21st-century Canadian artists
Multimedia artists
Women multimedia artists | Hajra Waheed | Technology | 598 |
55,529,178 | https://en.wikipedia.org/wiki/International%20Ideographs%20Core | International Ideographs Core (IICore) is a subset of up to ten thousand CJK Unified Ideographs characters, which can be implemented on devices with limited memories and capability that make it not feasible to implement the full ISO 10646/Unicode standard.
History
The IICore subset was initially raised at the 21st meeting of the Ideographic Rapporteur Group (IRG) in Guilin from 17 to 20 November 2003, and was subsequently passed at the group's 22nd meeting in Chengdu in May 2004.
See also
Chinese character encoding
Han unification
References
External links
International Ideographs Core (IICORE) Comparison Utility
IICore development information
Chinese-language computing
Encodings of Japanese
Korean language
Unicode
Natural language and computing
Character encoding
Mobile computers | International Ideographs Core | Technology | 152 |
34,545,370 | https://en.wikipedia.org/wiki/Airflow | Airflow, or air flow, is the movement of air. Air behaves in a fluid manner, meaning particles naturally flow from areas of higher pressure to those where the pressure is lower. Atmospheric air pressure is directly related to altitude, temperature, and composition.
In engineering, airflow is a measurement of the amount of air per unit of time that flows through a particular device.
It can be described as a volumetric flow rate (volume of air per unit time) or a mass flow rate (mass of air per unit time). What relates both forms of description is the air density, which is a function of pressure and temperature through the ideal gas law. The flow of air can be induced through mechanical means (such as by operating an electric or manual fan) or can take place passively, as a function of pressure differentials present in the environment.
Types of airflow
Like any fluid, air may exhibit both laminar and turbulent flow patterns. Laminar flow occurs when air can flow smoothly, and exhibits a parabolic velocity profile; turbulent flow occurs when there is an irregularity (such as a disruption in the surface across which the fluid is flowing), which alters the direction of movement. Turbulent flow exhibits a flat velocity profile.
Velocity profiles of fluid movement describe the spatial distribution of instantaneous velocity vectors across a given cross section. The size and shape of the geometric configuration that the fluid is traveling through, the fluid properties (such as viscosity), physical disruptions to the flow, and engineered components (e.g. pumps) that add energy to the flow are factors that determine what the velocity profile looks like. Generally, in encased flows, instantaneous velocity vectors are larger in magnitude in the middle of the profile due to the effect of friction from the material of the pipe, duct, or channel walls on nearby layers of fluid. In tropospheric atmospheric flows, velocity increases with elevation from ground level due to friction from obstructions like trees and hills slowing down airflow near the surface. The level of friction is quantified by a parameter called the "roughness length."
Streamlines connect velocities and are tangential to the instantaneous direction of multiple velocity vectors. They can be curved and do not always follow the shape of the container. Additionally, they only exist in steady flows, i.e. flows whose velocity vectors do not change over time. In a laminar flow, all particles of the fluid are traveling in parallel lines which gives rise to parallel streamlines. In a turbulent flow, particles are traveling in random and chaotic directions which gives rise to curved, spiraling, and often intersecting streamlines.
The Reynolds number, a ratio indicating the relationship between viscous and inertial forces in a fluid, can be used to predict the transition from laminar to turbulent flow. Laminar flows occur at low Reynolds numbers, where viscous forces dominate, and turbulent flows occur at high Reynolds numbers, where inertial forces dominate. The range of Reynolds numbers that defines each type of flow depends on whether the air is moving through a pipe, wide duct, open channel, or around airfoils. The Reynolds number can also characterize an object (for example, a particle under the effect of gravitational settling) moving through a fluid. This number and related concepts can be applied to studying flow in systems of all scales. Transitional flow is a mixture of turbulence in the center of the velocity profile and laminar flow near the edges. Each of these three flow regimes has distinct mechanisms of frictional energy loss that give rise to different behavior. As a result, different equations are used to predict and quantify the behavior of each type of flow.
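The short sketch below shows one way to compute a Reynolds number for duct flow and classify the regime; the kinematic viscosity value and the pipe-flow thresholds of roughly 2300 and 4000 are conventional textbook figures supplied for illustration, not values taken from this article.

```python
# Illustrative sketch: Reynolds number Re = V * D / nu for flow in a circular
# duct, with a rough regime classification using conventional pipe-flow
# thresholds (~2300 laminar limit, ~4000 fully turbulent).

def reynolds_number(velocity_m_s: float, diameter_m: float, kinematic_viscosity_m2_s: float) -> float:
    """Dimensionless ratio of inertial to viscous forces."""
    return velocity_m_s * diameter_m / kinematic_viscosity_m2_s

def classify_pipe_flow(re: float) -> str:
    if re < 2300:
        return "laminar"
    if re < 4000:
        return "transitional"
    return "turbulent"

NU_AIR = 1.5e-5  # m^2/s, kinematic viscosity of air near room temperature (assumed)
re = reynolds_number(velocity_m_s=3.0, diameter_m=0.2, kinematic_viscosity_m2_s=NU_AIR)
print(re, classify_pipe_flow(re))  # 40000.0 turbulent
```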
The speed at which a fluid flows past an object varies with distance from the object's surface. The region surrounding an object where the air speed approaches zero is known as the boundary layer. It is here that surface friction most affects flow; irregularities in surfaces may affect boundary layer thickness, and hence act to disrupt flow.
Units
Typical units to express airflow are:
By volume
m3/min (cubic metres per minute)
m3/h (cubic metres per hour)
ft3/h (cubic feet per hour)
ft3/min (cubic feet per minute, a.k.a. CFM)
l/s (litres per second)
By mass
kg/s (kilograms per second)
Airflow can also be described in terms of air changes per hour (ACH), indicating full replacement of the volume of air filling the space in question. This unit is frequently used in the field of building science, with higher ACH values corresponding to leakier envelopes which are typical of older buildings that are less tightly sealed.
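As a small worked example of this unit, the sketch below converts a volumetric airflow given in CFM into air changes per hour for a room of assumed size.

```python
# Sketch: air changes per hour (ACH) from a flow in cubic feet per minute
# (CFM) and a room volume. The room dimensions are illustrative assumptions.

def ach_from_cfm(flow_cfm: float, room_volume_ft3: float) -> float:
    """ACH = (flow in ft^3/min * 60 min/h) / room volume in ft^3."""
    return flow_cfm * 60.0 / room_volume_ft3

room_volume = 12 * 10 * 8  # ft^3, a 12 ft x 10 ft room with an 8 ft ceiling
print(ach_from_cfm(80.0, room_volume))  # 5.0 air changes per hour
```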
Measurement
The instrument that measures airflow is called an airflow meter. Anemometers are also used to measure wind speed and indoor airflow.
There are a variety of types, including straight probe anemometers, designed to measure air velocity, differential pressure, temperature, and humidity; rotating vane anemometers, used for measuring air velocity and volumetric flow; and hot-sphere anemometers.
Anemometers may use ultrasound or resistive wire to measure the energy transfer between the measurement device and the passing particles. A hot-wire anemometer, for example, registers decreases in wire temperature, which can be translated into airflow velocity by analyzing the rate of change. Convective cooling is a function of airflow rate, and the electrical resistance of most metals is dependent upon the temperature of the metal, which is affected by the convective cooling. Engineers have taken advantage of these physical phenomena in the design and use of hot-wire anemometers. Some tools are capable of calculating air flow, wet bulb temperature, dew point, and turbulence.
Simulation
Air flow can be simulated using Computational Fluid Dynamics (CFD) modeling, or observed experimentally through the operation of a wind tunnel. This may be used to predict airflow patterns around automobiles, aircraft, and marine craft, as well as air penetration of a building envelope. Because CFD models "also track the flow of solids through a system," they can be used for analysis of pollution concentrations in indoor and outdoor environments. Particulate matter generated indoors generally comes from cooking with oil and combustion activities such as burning candles or firewood. In outdoor environments, particulate matter comes from direct sources such as internal combustion engine vehicles' (ICEVs) tailpipe emissions from burning fuel (petroleum products) and windblown dust and soil, and indirectly from atmospheric oxidation of volatile organic compounds (VOCs), sulfur dioxide (SO2), and nitrogen oxide (NOx) emissions.
Control
One type of equipment that regulates the airflow in ducts is called a damper. The damper can be used to increase, decrease or completely stop the flow of air. A more complex device that can not only regulate the airflow but also generate and condition it is an air handler. Fans also generate flows by "producing air flows with high volume and low pressure (although higher than ambient pressure)." This pressure differential induced by the fan is what causes air to flow. The direction of airflow is determined by the direction of the pressure gradient. Total or static pressure rise, and therefore by extension airflow rate, is determined primarily by the fan speed measured in revolutions per minute (RPM). To modulate the airflow rate in HVAC systems, one typically changes the fan speed, which often comes in three settings such as low, medium, and high.
Uses
Measuring the airflow is necessary in many applications such as ventilation (to determine how much air is being replaced), pneumatic conveying (to control the air velocity and phase of transport) and engines (to control the air–fuel ratio).
Aerodynamics is the branch of fluid dynamics (physics) that is specifically concerned with the measurement, simulation, and control of airflow. Managing airflow is of concern to many fields, including meteorology, aeronautics, medicine, mechanical engineering, civil engineering, environmental engineering and building science.
Airflow in buildings
In building science, airflow is often addressed in terms of its desirability, for example in contrasting ventilation and infiltration. Ventilation is defined as the desired flow of fresh outdoor supply air to another, typically indoor, space, along with the simultaneous expulsion of exhaust air from indoors to the outdoors. This may be achieved through mechanical means (i.e. the use of a louver or damper for air intake and a fan to induce flow through ductwork) or through passive strategies (also known as natural ventilation). While natural ventilation has economic benefits over mechanical ventilation because it typically requires far less operational energy consumption, it can only be utilized during certain times of day and under certain outdoor conditions. If there is a large temperature difference between the outdoor air and indoor conditioned air, the use of natural ventilation may cause unintentional heating or cooling loads on a space and increase HVAC energy consumption to maintain comfortable temperatures within ranges determined by the heating and cooling setpoint temperatures. Natural ventilation also has the flaw that its feasibility is dependent on outdoor conditions; if outdoor air is significantly polluted with ground-level ozone concentrations from transportation related emissions or particulate matter from wildfires for example, residential and commercial building occupants may have to keep doors and windows closed to preserve indoor environmental quality (IEQ). By contrast, air infiltration is characterized as the uncontrolled influx of air through an inadequately-sealed building envelope, usually coupled with unintentional leakage of conditioned air from the interior of a building to the exterior.
Buildings may be ventilated using mechanical systems, passive systems or strategies, or a combination of the two.
Airflow in mechanical ventilation systems (HVAC)
Mechanical ventilation uses fans to induce flow of air into and through a building. Duct configuration and assembly affect air flow rates through the system. Dampers, valves, joints and other geometrical or material changes within a duct can lead to flow pressure (energy) losses.
Passive strategies for maximizing airflow
Passive ventilation strategies take advantage of inherent characteristics of air, specifically thermal buoyancy and pressure differentials, to evacuate exhaust air from within a building. The stack effect uses chimneys or similar tall spaces with openings near the top to passively draw exhaust air up and out of the space, exploiting the fact that air rises as its temperature increases and its density decreases. Wind-driven passive ventilation relies on building configuration, orientation, and aperture distribution to take advantage of outdoor air movement. Cross-ventilation requires strategically-positioned openings aligned with local wind patterns.
Relationship of air movement to thermal comfort and overall Indoor Environmental Quality (IEQ)
Airflow is a factor of concern when designing to meet occupant thermal comfort standards (such as ASHRAE 55). Varying rates of air movement may positively or negatively impact individuals’ perception of warmth or coolness, and hence their comfort. Air velocity interacts with air temperature, relative humidity, radiant temperature of surrounding surfaces and occupants, and occupant skin conductivity, resulting in particular thermal sensations.
Sufficient, properly-controlled and designed airflow (ventilation) is important for overall Indoor Environmental Quality (IEQ) and Indoor Air Quality (IAQ), in that it provides the necessary supply of fresh air and effectively evacuates exhaust air.
See also
Air current
Volumetric flow rate
Air flow meter
Damper (flow)
Air handling unit
Fluid dynamics
Pressure gradient force
Atmosphere of Earth
Anemometer
Computational Fluid Dynamics
Ventilation (architecture)
Natural ventilation
Infiltration (HVAC)
Particle tracking velocimetry
Laminar flow
Turbulent flow
Wind
References
Heating, ventilation, and air conditioning
Mechanical engineering | Airflow | Physics,Engineering | 2,411 |
8,614,958 | https://en.wikipedia.org/wiki/Social%20connection | Social connection is the experience of feeling close and connected to others. It involves feeling loved, cared for, and valued, and forms the basis of interpersonal relationships.
"Connection is the energy that exists between people when they feel seen, heard and valued; when they can give and receive without judgement; and when they derive sustenance and strength from the relationship." (Brené Brown, Professor of social work at the University of Houston)
Increasingly, social connection is understood as a core human need, and the desire to connect as a fundamental drive. It is crucial to development; without it, social animals experience distress and face severe developmental consequences. In humans, one of the most social species, social connection is essential to nearly every aspect of health and well-being. Lack of connection, or loneliness, has been linked to inflammation, accelerated aging and cardiovascular health risk, suicide, and all-cause mortality.
Feeling socially connected depends on the quality and number of meaningful relationships one has with family, friends, and acquaintances. Going beyond the individual level, it also involves a feeling of connecting to a larger community. Connectedness on a community level has profound benefits for both individuals and society.
Related terms
Social support is the help, advice, and comfort that we receive from those with whom we have stable, positive relationships. Importantly, it is the perception, or feeling, of being supported, rather than the objective number of connections, that appears to buffer stress and affect our health and psychology most strongly.
Close relationships refer to those relationships between friends or romantic partners that are characterized by love, caring, commitment, and intimacy.
Attachment is a deep emotional bond between two or more people, a "lasting psychological connectedness between human beings." Attachment theory, developed by John Bowlby during the 1950s, is a theory that remains influential in psychology today.
Conviviality has many different interpretations and understandings, one of which denotes the idea of living together and enjoying each other's company. This understanding of the term is derived from the French convivialité, which can be traced back to Jean Anthelme Brillat-Savarin in the 19th century. Other interpretations of conviviality include the art of living in the company of others; everyday experiences of community cohesion and togetherness in diverse settings; and the capacity of individuals to interact creatively and autonomously with one another and their environment for the satisfaction of their needs. This third interpretation is rooted in the work of Ivan Illich from the 1970s onwards. Social connection is fundamental to all of these interpretations of conviviality.
A basic need
In his influential theory on the hierarchy of needs, Abraham Maslow proposed that our physiological needs are the most basic and necessary to our survival, and must be satisfied before we can move on to satisfying more complex social needs like love and belonging. However, research over the past few decades has begun to shift our understanding of this hierarchy. Social connection and belonging may in fact be a basic need, as powerful as our need for food or water. Mammals are born relatively helpless, and rely on their caregivers not only for affection, but for survival. This may be the evolutionary reason why mammals need and seek connection, and why they suffer prolonged distress and health consequences when that need is not met.
In 1965, Harry Harlow conducted his landmark monkey studies. He separated baby monkeys from their mothers, and observed which surrogate mothers the baby monkeys bonded with: a wire "mother" that provided food, or a cloth "mother" that was soft and warm. Overwhelmingly, the baby monkeys preferred to spend time clinging to the cloth mother, only reaching over to the wire mother when they became too hungry to continue without food. This study questioned the idea that food is the most powerful primary reinforcement for learning. Instead, Harlow's studies suggested that warmth, comfort, and affection (as perceived from the soft embrace of the cloth mother) are crucial to the mother-child bond, and may be a powerful reward that mammals may seek in and of itself. Although historically significant, it is important to acknowledge that this study does not meet current research standards for the ethical treatment of animals.
In 1995, Roy Baumeister proposed his influential belongingness hypothesis: that human beings have a fundamental drive to form lasting relationships, to belong. He provided substantial evidence that indeed, the need to belong and form close bonds with others is itself a motivating force in human behavior. This theory is supported by evidence that people form social bonds relatively easily, are reluctant to break social bonds, and keep the effect on their relationships in mind when they interpret situations. He also contends that our emotions are so deeply linked to our relationships that one of the primary functions of emotion may be to form and maintain social bonds, and that both partial and complete deprivation of relationships leads to not only painful but pathological consequences. Satisfying or disrupting our need to belong, our need for connection, has been found to influence cognition, emotion, and behavior.
In 2011, Roy Baumeister furthered this notion of belongingness by proposing the Need to Belong Theory, which asserts that humans have an inherent drive to maintain a minimum number of social relationships to foster a sense of belonging. Baumeister highlights the importance of satiation and substitution in driving human behavior and social connection. Motivational satiation is a phenomenon in which an individual may desire something but, at a certain point, has had enough and no longer wants or needs any more of it. This concept can be applied to the formation of friendships, where an individual may desire social connections, but may reach a point where they have enough friends and do not seek any more. However, Baumeister suggests that people still require a certain minimum amount of social connection, and to some extent, these bonds can substitute for each other. The need to belong is thus treated as a primary motivator of human behavior, and the theory provides a framework for understanding social relationships as a basic, fundamental need for psychological health and well-being.
Neurobiology
Brain areas
While it appears that social isolation triggers a "neural alarm system" of threat-related regions of the brain (including the amygdala, dorsal anterior cingulate cortex (dACC), anterior insula, and periaqueductal gray (PAG)), separate regions may process social connection. Two brain areas that are part of the brain's reward system are also involved in processing social connection and attention to loved ones: the ventromedial prefrontal cortex (VMPFC), a region that also responds to safety and inhibits threat responding, and the ventral striatum (VS) and septal area (SA), part of a neural system that is activated by taking care of one's own young.
Key neurochemicals
Opioids
In 1978, neuroscientist Jaak Panksepp observed that small doses of opiates reduced the distressed cries of puppies that were separated from their mothers. As a result, he developed the brain opioid theory of attachment, which posits that endogenous (internally produced) opioids underlie the pleasure that social animals derive from social connection, especially within close relationships. Extensive animal research supports this theory. Mice that have been genetically modified to lack mu-opioid receptors (mu-opioid receptor knockout mice), as well as sheep with their mu-receptors blocked temporarily following birth, do not recognize or bond with their mother. When separated from their mother and conspecifics, rats, chicks, puppies, guinea pigs, sheep, dogs, and primates emit distress vocalizations; however, giving them morphine (i.e. activating their opioid receptors) quiets this distress. Endogenous opioids appear to be produced when animals engage in bonding behavior, while inhibiting the release of these opioids results in signs of social disconnection. In humans, blocking mu-opioid receptors with the opioid antagonist naltrexone has been found to reduce feelings of warmth and affection in response to a film clip about a moment of bonding, and to increase feelings of social disconnection towards loved ones in daily life as well as in the lab in response to a task designed to elicit feelings of connection. Although the human research on opioids and bonding behavior is mixed and ongoing, this suggests that opioids may underlie feelings of social connection and bonding in humans as well.
Oxytocin
In mammals, oxytocin has been found to be released during childbirth, breastfeeding, sexual stimulation, bonding, and in some cases stress. In 1992, Sue Carter discovered that administering oxytocin to prairie voles would accelerate their monogamous pair-bonding behavior. Oxytocin has also been found to play many roles in the bonding between mother and child. In addition to pair-bonding and motherhood, oxytocin has been found to play a role in prosocial behavior and bonding in humans. Oxytocin, nicknamed the "love drug" or "cuddle chemical," increases in plasma following physical affection, and is linked to more trusting and generous social behavior, positively biased social memory, attraction, and anxiety and hormonal responses. Further supporting a nuanced role in adult human bonding, greater circulating oxytocin over a 24-hour period was associated with greater love and perceptions of partner responsiveness and gratitude, but was also linked to perceptions of a relationship being vulnerable and in danger. Thus oxytocin may play a flexible role in relationship maintenance, supporting both the feelings that bring us closer and the distress and instinct to fight for an intimate bond in peril.
Health
Consequences of disconnection
A wide range of mammals, including rats, prairie voles, guinea pigs, cattle, sheep, primates, and humans, experience distress and long-term deficits when separated from their parent. In humans, long-lasting health consequences result from early experiences of disconnection. In 1958, John Bowlby observed profound distress and developmental consequences when orphans lacked the warmth and love of their first and most important attachments: their parents. Loss of a parent during childhood was found to lead to altered cortisol and sympathetic nervous system reactivity even a decade later, and to affect stress response and vulnerability to conflict as a young adult.
In addition to the health consequences of lacking connection in childhood, chronic loneliness at any age has been linked to a host of negative health outcomes. In a meta-analytic review conducted in 2010, results from 308,849 participants across 148 studies found that people with strong social relationships had a 50% greater chance of survival. This effect on mortality is not only on par with one of the greatest risks, smoking, but exceeds many other risk factors such as obesity and physical inactivity. Loneliness has been found to negatively affect the healthy function of nearly every system in the body: the brain, immune system, circulatory and cardiovascular systems, endocrine system, and genetic expression.
Not only is social isolation harmful to health, but it is more and more common. As many as 80% of young people under 18 years old, and 40% of adults over the age of 65, report being lonely sometimes, and 15–30% of the general population feel chronic loneliness. These numbers appear to be on the rise, and researchers have called for social connection to be a public health priority.
Social immune system
One of the main ways social connection may affect our health is through the immune system. The immune system's primary activity, inflammation, is the body's first line of defense against injury and infection. However, chronic inflammation has been tied to atherosclerosis, Type II diabetes, neurodegeneration, and cancer, as well as compromised regulation of inflammatory gene expression by the brain. Research over the past few decades has revealed that the immune system not only responds to physical threats, but social ones as well. It has become clear that there is a bidirectional relationship between circulating biomarkers of inflammation (e.g. the cytokine IL-6) and feelings of social connection and disconnection; not only are feelings of social isolation linked to increased inflammation, but experimentally induced inflammation alters social behavior and induces feelings of social isolation. This has important health implications. Feelings of chronic loneliness appear to trigger chronic inflammation. However, social connection appears to inhibit inflammatory gene expression and increase antiviral responses. Performing acts of kindness for others was also found to have this effect, suggesting that helping others provides similar health benefits.
Why might our immune system respond to our perceptions of our social world? One theory is that it may have been evolutionarily adaptive for our immune system to "listen" in to our social world to anticipate the kinds of bacterial or microbial threats we face. In our evolutionary past, feeling socially isolated may have meant we were separated from our tribe, and therefore more likely to experience physical injury or wounds, requiring an inflammatory response to heal. On the other hand, feeling connected may have meant we were in the relative physical safety of community, but at greater risk of socially transmitted viruses. To meet these threats with greater efficiency, the immune system responds with anticipatory changes. A gene-expression profile that initiates this pattern of immune response to social adversity and stress (up-regulation of inflammation, down-regulation of antiviral activity) has been identified and is known as the Conserved Transcriptional Response to Adversity. The inverse of this pattern, associated with social connection, has been linked to positive health outcomes as well as eudaemonic well-being.
Positive pathways
Social connection and support have been found to reduce the physiological burden of stress and contribute to health and well-being through several other pathways as well, although these remain a subject of ongoing research. One way social connection reduces our stress response is by inhibiting activity in our pain and alarm neural systems. Brain areas that respond to social warmth and connection (notably, the septal area) have inhibitory connections to the amygdala, which have the structural capacity to reduce threat responding.
Another pathway by which social connection positively affects health is through the parasympathetic nervous system (PNS), the "rest and digest" system which parallels and offsets the "flight or fight" sympathetic nervous system (SNS). Flexible PNS activity, indexed by vagal tone, helps regulate the heart rate and has been linked to a healthy stress response as well as numerous positive health outcomes. Vagal tone has been found to predict both positive emotions and social connectedness, which in turn result in increased vagal tone, in an "upward spiral" of well-being. Social connection often occurs along with and causes positive emotions, which themselves benefit our health.
Measures
Social Connectedness Scale
This scale was designed to measure general feelings of social connectedness as an essential component of belongingness. Items on the Social Connectedness Scale reflect feelings of emotional distance between the self and others, and higher scores reflect more social connectedness.
UCLA Loneliness Scale
Measuring feelings of social isolation or disconnection can be helpful as an indirect measure of feelings of connectedness. This scale is designed to measure loneliness, defined as the distress that results when one feels disconnected from others.
Relationship Closeness Inventory (RCI)
This measure conceptualizes closeness in a relationship as a high level of interdependence in two people's activities, or how much influence they have over one another. It correlates moderately with self-reports of closeness, measured using the Subjective Closeness Index (SCI).
Liking and Loving Scales
These scales were developed to measure the difference between liking and loving another person—critical aspects of closeness and connection. Good friends were found to score highly on the liking scale, and only romantic partners scored highly on the loving scale. They support Zick Rubin's conceptualization of love as containing three main components: attachment, caring, and intimacy.
Personal Acquaintance Measure (PAM)
This measure identifies six components that can help determine the quality of a person's interactions and feelings of social connectedness with others:
Duration of relationship
Frequency of interaction with the other person
Knowledge of the other person's goals
Physical intimacy or closeness with the other person
Self-disclosure to the other person
Social network familiarity—how familiar is the other person with the rest of your social circle
Experimental manipulations
Social connection is a unique, elusive, person-specific quality of our social world. Yet, can it be manipulated? This is a crucial question for how it can be studied, and whether it can be intervened on in a public health context. There are at least two approaches that researchers have taken to manipulate social connection in the lab:
Social connection task
This task was developed at UCLA by Tristen Inagaki and Naomi Eisenberger to elicit feelings of social connection in the laboratory. It consists of collecting positive and neutral messages from 6 loved ones of a participant, and presenting them to the participant in the laboratory. Feelings of connection and neural activity in response to this task have been found to rely on endogenous opioid activity.
Closeness-generating procedure
Arthur Aron at the State University of New York at Stony Brook and collaborators designed a series of questions intended to generate interpersonal closeness between two individuals who have never met. It consists of 36 questions that subject pairs ask each other over a 45-minute period. It was found to generate a degree of closeness in the lab, and can be more carefully controlled than connection within existing relationships.
See also
Affection
Attachment theory
Friendship
Interpersonal relationships
Interpersonal ties
Interpersonal emotion regulation
Intimate relationships
Human bonding
Love
Social isolation
Social robot
Social support
References
Emotion
Interpersonal relationships
Social psychology concepts | Social connection | Biology | 3,613 |
22,625,701 | https://en.wikipedia.org/wiki/European%20Calcium%20Society | The European Calcium Society is a non-profit society that aims to develop relationships between different generations of scientists in Europe working in the field of calcium signaling and the proteins involved in the Calcium Toolkit.
Origin
The First European Symposium took place in 1989 and covered calcium binding proteins in normal and transformed cells. The symposium was the outcome of roughly 30 months of preparation.
The symposium filled a gap, given the lack of European fora in which young European researchers could participate (the International Symposium was held in Asilomar, California, in 1986, in Nagoya in 1988, in Banff, Canada, etc.).
A European Union grant called Stimulation Action was awarded to Roland Pochet in November 1986. Long discussions in 1988 between Pochet and Jacques Haiech at Mont Sainte-Odile, during which Haiech pointed out the importance of European researchers in calcium binding proteins (Hamoir, Liège, 1955; Pechère, Montpellier, 1965; Drabikowski, Warsaw, 1970), and the strong support received from Claus Heizmann also contributed to the symposium's creation.
History
1997 was important because the "European Calcium Society" was registered under E.U. guidelines; the E.U. had earlier rejected a proposal to finance the fourth symposium because of the group's lack of formal structure. In 1997 the group also created its first ECS web site, logo, newsletter and a set of statutes, published in the "Moniteur belge" as an "Arrêté Royal du 22 septembre 1997" signed by King Albert II.
1998-2005
1998-2005 was a consolidation period. Since 2000, ECS has been selected as an EU High-level Scientific Conference, allowing it to offer grants to young European researchers. The board was enlarged to include Volker Gerke and Steve Moss. ECS provided posters, prizes and, more recently, special grants for young researchers.
Youth emphasis
Since its creation, 30 to 35% of the participants at ECS symposia were young researchers (below 35 years old). Encouraging young researchers to participate has always been one of the main objectives.
Publication
Since 1992 Heizmann has sponsored the publication of significant articles in the scientific journal Biochimica et Biophysica Acta. A newsletter is also produced twice a year (May and November).
Workshops
In 2007, ECS launched its first workshop, in Ariège (France). The second took place in June 2009 in Smolenice (Slovakia).
Sources
Pan-European scientific societies
Physiology organizations
Calcium
Human homeostasis | European Calcium Society | Biology | 490 |
11,728,902 | https://en.wikipedia.org/wiki/Fixation%20%28psychology%29 | Fixation is a concept in human psychology originated by Sigmund Freud (1905) to denote the persistence of anachronistic sexual traits. The term subsequently came to denote object relationships with attachments to people or things in general persisting from childhood into adult life.
Freud
In Three Essays on the Theory of Sexuality (1905), Freud distinguished the fixations of the libido on an incestuous object from a fixation upon a specific, partial aim, such as voyeurism.
Freud theorized that some humans may develop psychological fixation due to one or more of the following:
A lack of proper gratification during one of the psychosexual stages of development.
Receiving a strong impression from one of these stages, in which case the person's personality would reflect that stage throughout adult life.
"An excessively strong manifestation of these instincts at a very early age [which] leads to a kind of partial fixation, which then constitutes a weak point in the structure of the sexual function".
As Freud's thought developed, so did the range of possible 'fixation points' he saw as significant in producing particular neuroses. However, he continued to view fixation as "the manifestation of very early linkages which it is hard to [dissolve] between instincts and impressions and the objects involved in those impressions".
Psychoanalytic therapy involved producing a new transference fixation in place of the old one. The new fixation (for example, a father-transference onto the analyst) may be very different from the old, but will absorb its energies and enable them eventually to be released for non-fixated purposes.
Objections
Whether a particularly obsessive attachment is a fixation or a defensible expression of love is at times debatable. Fixation to intangibles (i.e., ideas, ideologies, etc.) can also occur. The obsessive factor of fixation is also found in symptoms pertaining to obsessive compulsive disorder, which psychoanalysts linked to a mix of early (pregenital) frustrations and gratifications.
Fixation has been compared to psychological imprinting at an early and sensitive period of development. Others object that Freud was attempting to stress the looseness of the ties between libido and object, and the need to find a specific cause for any given (perverse or neurotic) fixation.
Post-Freudians
Melanie Klein saw fixation as inherently pathological – a blocking of potential sublimation by way of repression.
Erik H. Erikson distinguished fixation to zone – oral or anal, for example – from fixation to mode, such as taking in, as with his instance of the man who "may eagerly absorb the 'milk of wisdom' where he once desired more tangible fluids from more sensuous containers". Eric Berne developed this insight further as part of transactional analysis, suggesting that "particular games and scripts, and their accompanying physical symptoms, are based in appropriate zones and modes".
Heinz Kohut saw the grandiose self as a fixation upon a normal childhood stage; while other post-Freudians explored the role of fixation in aggression and criminality.
In popular culture
Coleridge's Christabel has been seen as using witchcraft as a vehicle to explore psychological fixation.
Tennyson has been considered to show a romantic fixation on days of old.
See also
References
External links
Claude Smadja, "Fixation"
Fixation
Freudian psychology
Psychoanalytic terminology
Sexology
1900s neologisms | Fixation (psychology) | Biology | 732 |
4,767,366 | https://en.wikipedia.org/wiki/Standard%20addition | The Standard addition method, often used in analytical chemistry, quantifies the analyte present in an unknown. This method is useful for analyzing complex samples where a matrix effect interferes with the analyte signal. In comparison to the calibration curve method, the standard addition method has the advantage of the matrices of the unknown and standards being nearly identical. This minimizes the potential bias arising from the matrix effect when determining the concentration.
Variations
Standard addition involves adding known amounts of analyte to an unknown sample, a process known as spiking. By increasing the number of spikes, the analyst can extrapolate back to the analyte concentration in the unspiked unknown. There are multiple approaches to standard addition. The following sections summarize each approach.
Single standard addition used in polarography
In classic polarography, the standard addition method involves creating two samples – one sample without any spikes, and another one with spikes. By comparing the current measured from the two samples, the amount of analyte in the unknown is determined. This approach was the first reported use of standard addition, and was introduced by a German mining chemist, Hans Hohn, in 1937. In his polarography practical book, titled Chemische Analysen mit dem Polarographen, Hohn referred to this method as Eichzusatz, which translates to "calibration addition" in English. Later in the German literature, this method was called Standardzugabe, meaning "standard addition" in English.
Modern polarography typically involves using three solutions: the standard solution, the unknown solution, and a mixture of the standard and unknown solution. By measuring any two of these solutions, the unknown concentration is calculated.
As polarographic standard addition involves using only one solution with the standard added (a two-level design), polarographers always refer to the method in the singular, as standard addition.
Successive addition of standards in constant sample and total volume
Outside the field of polarography, Harvey's book Spectrochemical Procedures was the next earliest reference book to mention standard addition. Harvey's approach, which involves the successive addition of standards, closely resembles the most commonly used method of standard addition today.
To apply this method, analysts prepare multiple solutions containing equal amounts of unknown and spike them with varying concentrations of the analyte. The amount of unknown and the total volume are the same across the standards, and the only difference between the standards is the amount of analyte spiked. This leads to a linear relationship between the analyte signal and the amount of analyte added, allowing the unknown's concentration to be determined by extrapolating to zero analyte signal. One disadvantage of this approach is that it requires a sufficient amount of the unknown. When working with a limited amount of sample, an analyst might need to make a single addition, but it is generally considered best practice to make at least two additions whenever possible.
Note that this is not limited to liquid samples. In atomic absorption spectroscopy, for example, standard additions are often used with solid as the sample.
In atomic emission spectroscopy, background signal cannot be resolved by standard addition. Thus, background signal must be subtracted from the unknown and standard intensities prior to extrapolating to zero signal.
As this approach involves varying amount of standards added, it is often referred in the plural form as standard additions.
Example
Suppose an analyst is determining the concentration of silver in samples of waste solution from photographic film processing by atomic absorption spectroscopy. Using the calibration curve method, the analyst can calibrate the spectrometer with pure aqueous silver solutions, and use the calibration graph to determine the amount of silver present in the waste samples. This method, however, assumes the pure aqueous solution of silver and a photographic waste sample have the same matrix and therefore that the waste samples are free of matrix effects.
Matrix effects occur even with methods such as plasma spectrometry, which have a reputation for being relatively free from interferences. As such, the analyst would use standard additions in this case.
For standard additions, equal volumes of the sample solutions are taken, and all are separately spiked with varying amounts of the analyte – 0, 1, 2, 3, 4, 5 mL, where 0 mL addition is a pure test sample solution. All solutions are then diluted to the same volume of 25 mL, by using the same solvent as the one used to prepare the spiking solutions. Each prepared solution is then analyzed using an atomic absorption spectrometer. The resulting signals and corresponding spiked silver concentrations are plotted, with concentration on the x-axis and the signal on the y-axis. A regression line is calculated through least squares analysis and the x-intercept of the line is determined by the ratio of the y-intercept and the slope of the regression line. This x-intercept represents the silver concentration of the test sample where there is no standard solution added.
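A minimal numerical sketch of this extrapolation is given below; the spiked concentrations and instrument signals are invented for illustration, and the unknown concentration in the measured solution is recovered as the ratio of the fitted y-intercept to the slope.

```python
# Hypothetical standard-addition example: fit a least-squares line to signal
# versus added analyte and take the magnitude of the x-intercept (intercept
# divided by slope) as the analyte concentration in the measured solution.
# All numbers are made up for illustration.

import numpy as np

added = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])      # added analyte (e.g. ug/mL)
signal = np.array([0.32, 0.41, 0.52, 0.60, 0.70, 0.79])   # instrument response

slope, intercept = np.polyfit(added, signal, 1)            # least-squares straight line
c_unknown = intercept / slope                              # magnitude of the x-intercept
print(f"analyte in the measured solution = {c_unknown:.1f} ug/mL")
```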
Error
While the standard addition method is effective in reducing the interference of most matrix effects on the analyte signal, it cannot correct for translational matrix effects. These effects are caused by other substances present in the unknown sample that are often independent of the analyte concentration. They are commonly referred to as 'background' and can impact the intercept of the regression line without affecting the slope. This introduces bias into the determined unknown concentration. In other words, standard addition will not correct for these backgrounds or other spectral interferences.
Analysts also need to evaluate the precision of the determined unknown concentration by calculating its standard deviation, $s_{x_E}$. A lower $s_{x_E}$ indicates greater precision of the measurement. A standard least-squares expression for the uncertainty of the extrapolated x-intercept is
$s_{x_E} = \dfrac{s_{y/x}}{|b|}\sqrt{\dfrac{1}{n} + \dfrac{\bar{y}^{2}}{b^{2}\sum_{i}\left(x_{i}-\bar{x}\right)^{2}}}$
where the calculation involves the following variables:
$s_{y/x}$, the standard deviation of the residuals,
$|b|$, the absolute value of the slope of the least-squares line,
$a$, the y-intercept of the line (which, with the slope, gives the point estimate $x_E = a/|b|$),
$n$, the number of standards,
$\bar{y}$, the average measurement of the standards,
$x_i$, the concentrations of the standards,
$\bar{x}$, the average concentration of the standards.
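Continuing the illustrative numbers from the sketch above, the following lines evaluate this expression for the standard deviation of the extrapolated x-intercept.

```python
# Continuation of the illustrative standard-addition sketch: uncertainty of
# the extrapolated x-intercept using the least-squares expression above.

import numpy as np

added = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
signal = np.array([0.32, 0.41, 0.52, 0.60, 0.70, 0.79])

n = len(added)
b, a = np.polyfit(added, signal, 1)              # slope b, intercept a
residuals = signal - (a + b * added)
s_yx = np.sqrt(np.sum(residuals**2) / (n - 2))   # standard deviation of the residuals

s_xE = (s_yx / abs(b)) * np.sqrt(
    1.0 / n + signal.mean() ** 2 / (b**2 * np.sum((added - added.mean()) ** 2))
)
print(f"x_E = {a / b:.1f} +/- {s_xE:.1f}")
```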
See also
Standard curve
Isotope dilution
Internal standard
References
Analytical chemistry
Laboratory techniques | Standard addition | Chemistry | 1,202 |
47,682,934 | https://en.wikipedia.org/wiki/Charles%20Clement%20Coe | Charles Clement Coe (8 February 1830 – 1 April 1921) was an English Unitarian minister and writer who advocated non-Darwinian evolution.
Coe was born in King's Lynn and was educated at Manchester College, Oxford. He was President of the Leicester Literary and Philosophical Society (1862-1863) and was Minister of the Unitarian Great Meeting chapel in Bond Street, Leicester. He was minister at Bank Street Unitarian Chapel in Bolton, Lancashire, from 1874 to 1895, when he moved to Bournemouth.
It was while at Bolton that Coe wrote a large volume, Nature Versus Natural Selection: An Essay on Organic Evolution (1895). He defended evolution but rejected natural selection. The biologist J. Arthur Thomson gave the book a positive review, commenting that it is a very interesting critique of natural selection written with much skill. It was also positively reviewed in The Lancet journal.
Coe was an early writer to use the term neo-Darwinism in 1889.
Publications
Outlines of a Christian Faith (1862)
The Law of Parsimony and the Argument of Design (1882)
General Gordon in a New Light: The Cause of War, and the Advocate of Peace (1885)
Darwinism and Neo-Darwinism (1889)
Nature Versus Natural Selection: An Essay on Organic Evolution (1895)
Notes
External links
1830 births
1921 deaths
19th-century English clergy
20th-century English clergy
Alumni of Harris Manchester College, Oxford
English Unitarian ministers
Non-Darwinian evolution
People from King's Lynn | Charles Clement Coe | Biology | 294 |
27,713,781 | https://en.wikipedia.org/wiki/Suicide%20epidemic | A suicide epidemic is a large number of suicides taking place over a period of time in a manner that resembles a disease epidemic. Such epidemics have occurred in the former Soviet Union in the 1990s, among police officers, on Indian reservations, and in Micronesia. The Werther effect occurs when suicides that are made publicly known encourage others to imitate them. It has been suggested that teaching stories such as Romeo and Juliet may help young people to be more open in discussing suicide.
See also
Mass suicide
Epidemiology of suicide
Copycat suicide
References
Epidemic
Epidemics | Suicide epidemic | Biology | 121 |
22,207,814 | https://en.wikipedia.org/wiki/Pathema | Pathema was one of the eight bioinformatics resource centers funded by the National Institute of Allergy and Infectious Diseases (NIAID), a component of the National Institutes of Health (NIH), which is an agency of the United States Department of Health and Human Services.
Pathema was funded for five years from 2004 through a contract to The J. Craig Venter Institute, and is currently led by PI Granger Sutton.
Pathema is the web resource for JCVI's NIAID-funded Bioinformatics Resource Center, and was one of eight such centers designed to support bio-defense and infectious disease research. The overarching goal of Pathema is to provide a core resource that will accelerate scientific progress towards understanding, detection, diagnosis and treatment of diseases caused by six clades of Category A-C pathogens (Bacillus anthracis, Clostridium botulinum, Burkholderia mallei, Burkholderia pseudomallei, Clostridium perfringens, and Entamoeba histolytica) involved in new and re-emerging infectious diseases. Pathema provides comprehensive curated datasets for the targeted pathogen clades, along with advanced bioinformatics capabilities geared specifically towards biodefense requirements, and the identification of potential targets for vaccine development, therapeutics and diagnostics.
Supported Pathema organisms
Pathema analysis tools
Pathema supports a suite of over 50 web-based single gene, whole-genome and multi-genome comparative tools to facilitate analyses of genomic sequence and annotation data of over 80 NIAID Category A-C prokaryotic pathogens. Tools available on the BRC resource are developed and customized to best meet the scientific needs of the pathogen research community based on feedback solicited through community outreach. Additionally, they are designed to facilitate scientific exploration in the areas of functional curation, pathogenicity, therapeutics, comparative analysis and functional genomics. While every tool has several applications, taken together they provide numerous opportunities for discovery and hypothesis generation.
The suite of Pathema analysis tools include:
Over 25 different search capabilities allow users to mine and retrieve data stored in the BRC comparative database. Search tools query genes, genomes, sequences or text, matching user-defined strings across gene loci, gene symbols and protein product names. Other queries include EC#, GenBank, SwissProt, and GO id searches. Common sequence search methods such as BLAST, HMM and protein motif searches are also available.
Over 20 different displays and analyses of whole-genome data. Whole-genome data can be displayed graphically as a linear representation of genes on regions of a chromosome or as a complete circle for an entire chromosome. Users can investigate biochemical pathways, codon usage tables, percent GC plots, computer generated 2D and restriction digest gels, and retrieve summary information such as average gene size or numbers of coding regions as viewable and downloadable tables and lists.
Over 15 different comparative analysis tools. The basis for Pathema's comparative tools is either pre-generated protein clusters or All vs. All searches. Incorporated are the most popular tools of the publicly available Sybil comparative analysis suite. Sybil uses pre-generated protein clusters as the underlying data for its synteny gradient and comparative genomic displays, with protein cluster ortholog, paralog and singleton data available to the user.
Individual gene pages highlighting annotation data and associated evidence as well as single gene analysis tools. Annotation data displayed and downloadable includes product name, gene symbol, EC number, GO ids, functional role category assignment, DNA sequence and protein sequence. Calculating the TmHMM profile, secondary structure and third position GC-Skew are just a few types of analyses users can perform. Links to other relevant resources such as Swiss-Prot, GenBank, Prosite, Pfam, etc., are also available.
References
External links
Pathema Publications
Pathema Presentations
NIAID home page
Bioinformatics Resource Centers The NIAID page describing the goals and activities of the BRCs
PathogenPortal - A hub site for the BRCs; provides Summary information
Bioinformatics organizations
Biological databases | Pathema | Biology | 858 |
10,849,414 | https://en.wikipedia.org/wiki/Lever%20rule | In chemistry, the lever rule is a formula used to determine the mole fraction (xi) or the mass fraction (wi) of each phase of a binary equilibrium phase diagram. It can be used to determine the fraction of liquid and solid phases for a given binary composition and temperature that is between the liquidus and solidus line.
In an alloy or a mixture with two phases, α and β, which themselves contain two elements, A and B, the lever rule states that the mass fraction of the α phase is
$w^{\alpha} = \dfrac{w_B^{\beta} - w_B}{w_B^{\beta} - w_B^{\alpha}}$
where
$w_B^{\alpha}$ is the mass fraction of element B in the α phase
$w_B^{\beta}$ is the mass fraction of element B in the β phase
$w_B$ is the mass fraction of element B in the entire alloy or mixture
all at some fixed temperature or pressure.
Derivation
Suppose an alloy at an equilibrium temperature T consists of a mass fraction $w_B$ of element B. Suppose also that at temperature T the alloy consists of two phases, α and β, for which the α phase consists of $w_B^{\alpha}$, and the β phase consists of $w_B^{\beta}$. Let the mass of the α phase in the alloy be $m^{\alpha}$, so that the mass of the β phase is $m^{\beta} = m - m^{\alpha}$, where $m$ is the total mass of the alloy.
By definition, then, the mass of element B in the α phase is $m^{\alpha} w_B^{\alpha}$, while the mass of element B in the β phase is $(m - m^{\alpha})\, w_B^{\beta}$. Together these two quantities sum to the total mass of element B in the alloy, which is given by $m\, w_B$. Therefore,
$m\, w_B = m^{\alpha} w_B^{\alpha} + (m - m^{\alpha})\, w_B^{\beta}$
By rearranging, one finds that
$\dfrac{m^{\alpha}}{m} = \dfrac{w_B^{\beta} - w_B}{w_B^{\beta} - w_B^{\alpha}}$
This final fraction is the mass fraction of the α phase in the alloy.
Calculations
Binary phase diagrams
Before any calculations can be made, a tie line is drawn on the phase diagram to determine the mass fraction of each element; on the phase diagram to the right it is line segment LS. This tie line is drawn horizontally at the composition's temperature from one phase to another (here the liquid to the solid). The mass fraction of element B at the liquidus is given by wBl (represented as wl in this diagram) and the mass fraction of element B at the solidus is given by wBs (represented as ws in this diagram). The mass fraction of solid and liquid can then be calculated using the following lever rule equations:
$w^{\text{solid}} = \dfrac{w_B - w_B^{l}}{w_B^{s} - w_B^{l}} \qquad\qquad w^{\text{liquid}} = \dfrac{w_B^{s} - w_B}{w_B^{s} - w_B^{l}}$
where wB is the mass fraction of element B for the given composition (represented as wo in this diagram).
The numerator of each equation is the length of the opposite lever arm: if you want the mass fraction of solid, take the difference between the original composition and the liquid composition. The denominator is the overall length of the tie line, i.e. the difference between the solid and liquid compositions. If you are having difficulty seeing why this is so, visualise what happens as wo approaches wl: the mass fraction of solid tends to zero and the mixture becomes almost entirely liquid.
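A short numerical sketch of this tie-line calculation, using made-up liquidus, solidus and overall compositions, is given below.

```python
# Illustrative lever-rule calculation: w_l and w_s are the liquidus and
# solidus mass fractions of B read off the tie line, and w_o is the overall
# alloy composition. All values are made up for the example.

def lever_rule(w_o: float, w_l: float, w_s: float) -> tuple[float, float]:
    """Return (mass fraction of solid, mass fraction of liquid)."""
    frac_solid = (w_o - w_l) / (w_s - w_l)
    return frac_solid, 1.0 - frac_solid

frac_s, frac_l = lever_rule(w_o=0.35, w_l=0.25, w_s=0.45)
print(f"solid: {frac_s:.2f}, liquid: {frac_l:.2f}")  # solid: 0.50, liquid: 0.50
```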
Eutectic phase diagrams
There is now more than one two-phase region. The tie line is drawn from the solid α phase to the liquid, and by dropping vertical lines down at its endpoints the composition of each phase (the mass fraction of the element on the x-axis) is read directly off the graph. The same equations can be used to find the mass fraction of the alloy in each of the phases, i.e. wl is the mass fraction of the whole sample in the liquid phase.
References
Metallurgy
Phase transitions
Materials science
Charts
Diagrams | Lever rule | Physics,Chemistry,Materials_science,Engineering | 670 |
18,974,136 | https://en.wikipedia.org/wiki/Mathematical%20beauty | Mathematical beauty is the aesthetic pleasure derived from the abstractness, purity, simplicity, depth or orderliness of mathematics. Mathematicians may express this pleasure by describing mathematics (or, at least, some aspect of mathematics) as beautiful, or by describing mathematics as an art form (a position taken by G. H. Hardy) or, at a minimum, as a creative activity.
Comparisons are made with music and poetry.
In method
Mathematicians commonly describe an especially pleasing method of proof as elegant. Depending on context, this may mean:
A proof that uses a minimum of additional assumptions or previous results.
A proof that is unusually succinct.
A proof that derives a result in a surprising way (e.g., from an apparently unrelated theorem or a collection of theorems).
A proof that is based on new and original insights.
A method of proof that can be easily generalized to solve a family of similar problems.
In the search for an elegant proof, mathematicians may search for multiple independent ways to prove a result, as the first proof that is found can often be improved. The theorem for which the greatest number of different proofs have been discovered is possibly the Pythagorean theorem, with hundreds of proofs published to date. Another theorem that has been proved in many different ways is the theorem of quadratic reciprocity. In fact, Carl Friedrich Gauss alone had eight different proofs of this theorem, six of which he published.
Conversely, results that are logically correct but involve laborious calculations, over-elaborate methods, highly conventional approaches or a large number of powerful axioms or previous results are usually not considered to be elegant, and may be even referred to as ugly or clumsy.
In results
Some mathematicians see beauty in mathematical results that establish connections between two areas of mathematics that at first sight appear to be unrelated. These results are often described as deep. While it is difficult to find universal agreement on whether a result is deep, some examples are more commonly cited than others. One such example is Euler's identity:
$e^{i\pi} + 1 = 0$
This elegant expression ties together arguably the five most important mathematical constants (e, i, π, 1, and 0) with the two most common mathematical symbols (+, =). Euler's identity is a special case of Euler's formula, which the physicist Richard Feynman called "our jewel" and "the most remarkable formula in mathematics". Modern examples include the modularity theorem, which establishes an important connection between elliptic curves and modular forms (work on which led to the awarding of the Wolf Prize to Andrew Wiles and Robert Langlands), and "monstrous moonshine", which connects the Monster group to modular functions via string theory (for which Richard Borcherds was awarded the Fields Medal).
Other examples of deep results include unexpected insights into mathematical structures. For example, Gauss's Theorema Egregium is a deep theorem that states that the gaussian curvature is invariant under isometry of the surface. Another example is the fundamental theorem of calculus (and its vector versions including Green's theorem and Stokes' theorem).
The opposite of deep is trivial. A trivial theorem may be a result that can be derived in an obvious and straightforward way from other known results, or which applies only to a specific set of particular objects such as the empty set. In some occasions, a statement of a theorem can be original enough to be considered deep, though its proof is fairly obvious.
In his 1940 essay A Mathematician's Apology, G. H. Hardy suggested that a beautiful proof or result possesses "inevitability", "unexpectedness", and "economy".
In 1997, Gian-Carlo Rota disagreed with unexpectedness as a sufficient condition for beauty and proposed a counterexample:
In contrast, Monastyrsky wrote in 2001:
This disagreement illustrates both the subjective nature of mathematical beauty and its connection with mathematical results: in this case, not only the existence of exotic spheres, but also a particular realization of them.
In experience
Interest in pure mathematics that is separate from empirical study has been part of the experience of various civilizations, including that of the ancient Greeks, who "did mathematics for the beauty of it". The aesthetic pleasure that mathematical physicists tend to experience in Einstein's theory of general relativity has been attributed (by Paul Dirac, among others) to its "great mathematical beauty". The beauty of mathematics is experienced when the physical reality of objects are represented by mathematical models. Group theory, developed in the early 1800s for the sole purpose of solving polynomial equations, became a fruitful way of categorizing elementary particles—the building blocks of matter. Similarly, the study of knots provides important insights into string theory and loop quantum gravity.
Some believe that in order to appreciate mathematics, one must engage in doing mathematics.
For example, Math Circles are after-school enrichment programs where students engage with mathematics through lectures and activities; there are also some teachers who encourage student engagement by teaching mathematics in kinesthetic learning. In a general Math Circle lesson, students use pattern finding, observation, and exploration to make their own mathematical discoveries. For example, mathematical beauty arises in a Math Circle activity on symmetry designed for 2nd and 3rd graders, where students create their own snowflakes by folding a square piece of paper and cutting out designs of their choice along the edges of the folded paper. When the paper is unfolded, a symmetrical design reveals itself. In a day to day elementary school mathematics class, symmetry can be presented as such in an artistic manner where students see aesthetically pleasing results in mathematics.
Some teachers prefer to use mathematical manipulatives to present mathematics in an aesthetically pleasing way. Examples of a manipulative include algebra tiles, cuisenaire rods, and pattern blocks. For example, one can teach the method of completing the square by using algebra tiles. Cuisenaire rods can be used to teach fractions, and pattern blocks can be used to teach geometry. Using mathematical manipulatives helps students gain a conceptual understanding that might not be seen immediately in written mathematical formulas.
Another example of beauty in experience involves the use of origami. Origami, the art of paper folding, has aesthetic qualities and many mathematical connections. One can study the mathematics of paper folding by observing the crease pattern on unfolded origami pieces.
Combinatorics, the study of counting, has artistic representations which some find mathematically beautiful. There are many visual examples which illustrate combinatorial concepts. Some of the topics and objects seen in combinatorics courses with visual representations include, among others, the four color theorem, Young tableaux, the permutohedron, graph theory, and partitions of a set.
Brain imaging experiments conducted by Semir Zeki and his colleagues show that the experience of mathematical beauty has, as a neural correlate, activity in field A1 of the medial orbito-frontal cortex (mOFC) of the brain and that this activity is parametrically related to the declared intensity of beauty. The location of the activity is similar to the location of the activity that correlates with the experience of beauty from other sources, such as music or joy or sorrow. Moreover, mathematicians seem resistant to revising their judgment of the beauty of a mathematical formula in light of contradictory opinion given by their peers.
In philosophy
Some mathematicians are of the opinion that the doing of mathematics is closer to discovery than invention, for example:
These mathematicians believe that the detailed and precise results of mathematics may be reasonably taken to be true without any dependence on the universe in which we live. For example, they would argue that the theory of the natural numbers is fundamentally valid, in a way that does not require any specific context. Some mathematicians have extrapolated this viewpoint that mathematical beauty is truth further, in some cases becoming mysticism.
In Plato's philosophy there were two worlds, the physical one in which we live and another abstract world which contained unchanging truth, including mathematics. He believed that the physical world was a mere reflection of the more perfect abstract world.
Hungarian mathematician Paul Erdős spoke of an imaginary book, in which God has written down all the most beautiful mathematical proofs. When Erdős wanted to express particular appreciation of a proof, he would exclaim "This one's from The Book!"
Twentieth-century French philosopher Alain Badiou claimed that ontology is mathematics. Badiou also believes in deep connections between mathematics, poetry and philosophy.
In many cases, natural philosophers and other scientists who have made extensive use of mathematics have made leaps of inference between beauty and physical truth in ways that turned out to be erroneous. For example, at one stage in his life, Johannes Kepler believed that the proportions of the orbits of the then-known planets in the Solar System have been arranged by God to correspond to a concentric arrangement of the five Platonic solids, each orbit lying on the circumsphere of one polyhedron and the insphere of another. As there are exactly five Platonic solids, Kepler's hypothesis could only accommodate six planetary orbits and was disproved by the subsequent discovery of Uranus.
Analysis of beauty in mathematics
G. H. Hardy analysed the beauty of mathematical proofs in terms of these six dimensions: general, serious, deep, unexpected, inevitable, economical (simple). Paul Ernest proposes seven dimensions for any mathematical object, including concepts, theorems, proofs and theories. These are
1. Economy, simplicity, brevity, succinctness, elegance;
2. Generality, abstraction, power;
3. Surprise, ingenuity, cleverness;
4. Pattern, structure, symmetry, regularity, visual design;
5. Logicality, rigour, tight reasoning and deduction, pure thought;
6. Interconnectedness, links, unification;
7. Applicability, modelling power, empirical generality.
He argues that individual mathematicians and communities of mathematicians will have preferred choices from this list. Some, like Hardy, will reject certain dimensions (Hardy claimed that applied mathematics is ugly). However, Rentuya Sa and colleagues compared the views of British mathematicians and undergraduates and Chinese mathematicians on the beauty of 20 well-known equations and found a strong measure of agreement between their views.
In information theory
In the 1970s, Abraham Moles and Frieder Nake analyzed links between beauty, information processing, and information theory. In the 1990s, Jürgen Schmidhuber formulated a mathematical theory of observer-dependent subjective beauty based on algorithmic information theory: the most beautiful objects among subjectively comparable objects have short algorithmic descriptions (i.e., Kolmogorov complexity) relative to what the observer already knows. Schmidhuber explicitly distinguishes between beautiful and interesting. The latter corresponds to the first derivative of subjectively perceived beauty: the observer continually tries to improve the predictability and compressibility of the observations by discovering regularities such as repetitions and symmetries and fractal self-similarity. Whenever the observer's learning process (possibly a predictive artificial neural network) leads to improved data compression such that the observation sequence can be described by fewer bits than before, the temporary interesting-ness of the data corresponds to the compression progress, and is proportional to the observer's internal curiosity reward.
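As a toy numerical illustration of this idea (not Schmidhuber's own formulation; it uses Python's zlib compressor, primed with a preset dictionary, as a crude stand-in for the observer's current knowledge), one can measure how many fewer bits an observation costs once its pattern has already been "learned":

```python
import zlib

def code_length_bits(data: bytes, prior: bytes = b"") -> int:
    """Bits zlib needs to encode `data`, optionally primed with `prior` as a
    preset dictionary -- a crude stand-in for what the observer already knows."""
    comp = zlib.compressobj(level=9, zdict=prior) if prior else zlib.compressobj(level=9)
    return 8 * len(comp.compress(data) + comp.flush())

# A regular observation costs far fewer bits once its pattern is already known.
obs = b"the quick brown fox jumps over the lazy dog"
before = code_length_bits(obs)                  # observer has no prior knowledge
after = code_length_bits(obs, prior=obs * 10)   # the pattern has already been learned
print(before, after, "compression progress:", before - after, "bits")
```

The drop in code length plays the role of compression progress, the quantity Schmidhuber identifies with interestingness.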
In the arts
Music
Examples of the use of mathematics in music include the stochastic music of Iannis Xenakis, the Fibonacci sequence in Tool's Lateralus, counterpoint of Johann Sebastian Bach, polyrhythmic structures (as in Igor Stravinsky's The Rite of Spring), the Metric modulation of Elliott Carter, permutation theory in serialism beginning with Arnold Schoenberg, and application of Shepard tones in Karlheinz Stockhausen's Hymnen. They also include the application of Group theory to transformations in music in the theoretical writings of David Lewin.
Visual arts
Examples of the use of mathematics in the visual arts include applications of chaos theory and fractal geometry to computer-generated art, symmetry studies of Leonardo da Vinci, projective geometries in development of the perspective theory of Renaissance art, grids in Op art, optical geometry in the camera obscura of Giambattista della Porta, and multiple perspective in analytic cubism and futurism.
Sacred geometry is a field of its own, giving rise to countless art forms including some of the best known mystic symbols and religious motifs, and has a particularly rich history in Islamic architecture. It also provides a means of meditation and contemplation, for example the study of the Kabbalah Sefirot (Tree of Life) and Metatron's Cube; and also the act of drawing itself.
The Dutch graphic artist M. C. Escher created mathematically inspired woodcuts, lithographs, and mezzotints. These feature impossible constructions, explorations of infinity, architecture, visual paradoxes and tessellations.
Some painters and sculptors create work distorted with the mathematical principles of anamorphosis, including South African sculptor Jonty Hurwitz.
British constructionist artist John Ernest created reliefs and paintings inspired by group theory. A number of other British artists of the constructionist and systems schools of thought also draw on mathematical models and structures as a source of inspiration, including Anthony Hill and Peter Lowe. Computer-generated art is based on mathematical algorithms.
Quotes by mathematicians
Bertrand Russell expressed his sense of mathematical beauty in these words:
Mathematics, rightly viewed, possesses not only truth, but supreme beauty—a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show. The true spirit of delight, the exaltation, the sense of being more than Man, which is the touchstone of the highest excellence, is to be found in mathematics as surely as poetry.
Paul Erdős expressed his views on the ineffability of mathematics when he said, "Why are numbers beautiful? It's like asking why is Beethoven's Ninth Symphony beautiful. If you don't see why, someone can't tell you. I know numbers are beautiful. If they aren't beautiful, nothing is".
See also
Argument from beauty
Cellular automaton
Descriptive science
Fluency heuristic
Golden ratio
Mathematics and architecture
Neuroesthetics
Normative science
Philosophy of mathematics
Processing fluency theory of aesthetic pleasure
Pythagoreanism
Theory of everything
Notes
References
Aigner, Martin, and Ziegler, Gunter M. (2003), Proofs from THE BOOK, 3rd edition, Springer-Verlag.
Chandrasekhar, Subrahmanyan (1987), Truth and Beauty: Aesthetics and Motivations in Science, University of Chicago Press, Chicago, IL.
Hadamard, Jacques (1949), The Psychology of Invention in the Mathematical Field, 1st edition, Princeton University Press, Princeton, NJ. 2nd edition, 1949. Reprinted, Dover Publications, New York, NY, 1954.
Hardy, G.H. (1940), A Mathematician's Apology, 1st published, 1940. Reprinted, C. P. Snow (foreword), 1967. Reprinted, Cambridge University Press, Cambridge, UK, 1992.
Hoffman, Paul (1992), The Man Who Loved Only Numbers, Hyperion.
Huntley, H.E. (1970), The Divine Proportion: A Study in Mathematical Beauty, Dover Publications, New York, NY.
Lang, Serge (1985). The Beauty of Doing Mathematics: Three Public Dialogues. New York: Springer-Verlag.
Loomis, Elisha Scott (1968), The Pythagorean Proposition, The National Council of Teachers of Mathematics. Contains 365 proofs of the Pythagorean Theorem.
Pandey, S.K. (2019), The Humming of Mathematics: Melody of Mathematics, Independently Published.
Peitgen, H.-O., and Richter, P.H. (1986), The Beauty of Fractals, Springer-Verlag.
Strohmeier, John, and Westbrook, Peter (1999), Divine Harmony, The Life and Teachings of Pythagoras, Berkeley Hills Books, Berkeley, CA.
Further reading
External links
Mathematics, Poetry and Beauty
Is Mathematics Beautiful? cut-the-knot.org
Justin Mullins.com
Edna St. Vincent Millay (poet): Euclid alone has looked on beauty bare
Terence Tao, What is good mathematics?
Mathbeauty Blog
A Mathematical Romance Jim Holt December 5, 2013 issue of The New York Review of Books review of Love and Math: The Heart of Hidden Reality by Edward Frenkel
Aesthetic beauty
Elementary mathematics
Philosophy of mathematics
Mathematical terminology
Mathematics and art | Mathematical beauty | Mathematics | 3,425 |
288,307 | https://en.wikipedia.org/wiki/Replica%20plating | Replica plating is a microbiological technique in which one or more secondary Petri plates containing different solid (agar-based) selective growth media (lacking nutrients or containing chemical growth inhibitors such as antibiotics) are inoculated with the same colonies of microorganisms from a primary plate (or master dish), reproducing the original spatial pattern of colonies. The technique involves pressing a velveteen-covered disk onto the colonies of the original plate and then imprinting the secondary plates with the cells that the material picks up. Generally, large numbers of colonies (roughly 30-300) are replica plated due to the difficulty in streaking each out individually onto a separate plate.
The purpose of replica plating is to be able to compare the master plate and any secondary plates, typically to screen for a desired phenotype. For example, when a colony that was present on the primary plate (or master dish) fails to appear on a secondary plate, it shows that the colony was sensitive to a substance on that particular secondary plate. Common screenable phenotypes include auxotrophy and antibiotic resistance.
Replica plating is especially useful for "negative selection". However, it is more correct to refer to "negative screening" instead of using the term 'selection'. For example, if one wanted to select colonies that were sensitive to ampicillin, the primary plate could be replica plated onto a secondary Amp+ agar plate. The sensitive colonies would fail to grow on the secondary plate, but their positions could still be deduced from the primary plate, since the two plates share the same spatial pattern of colonies; the sensitive colonies could then be picked off from the primary plate. Frequently the last plate will be non-selective. For example, a nonselective plate can be replica plated after the Amp+ plate to confirm that the absence of growth on the selective plate is due to the selection itself and not a problem with transferring cells. If one sees growth on the third (nonselective) plate but not the second one, the selective agent is responsible for the lack of growth. If the non-selective plate shows no growth, one cannot say whether viable cells were transferred at all, and no conclusions can be made about the presence or absence of growth on selective media. This is particularly useful if there are questions about the age or viability of the cells on the original plate.
By increasing the variety of secondary plates with different selective growth media, it is possible to rapidly screen a large number of individual isolated colonies for as many phenotypes as there are secondary plates.
The development of replica plating required two steps. The first step was to define the problem: a method of identifiably duplicating colonies. The second step was to devise a means to reliably implement the first step. Replica plating was first described by Esther Lederberg and Joshua Lederberg in 1952. Lederberg sought to use a fabric that was able to be sterilized and had a vertical fabric pile, akin to a 2D analog of the wire brush that had been classically used to transfer colonies. Paper was unsatisfactory as "its lateral capillarity and its compression of the colonies distorted and broke up the original growth pattern", and nylon velvet was too expensive and its stiffer fibers caused problems, leading to the choice and eventual standardization on cotton velveteen. While first demonstrated with bacteria, velveteen-based replica plating has also become a standard technique in the microbiology of eukaryotes, such as yeast.
References
Molecular biology
Microbiology techniques | Replica plating | Chemistry,Biology | 756 |
58,673 | https://en.wikipedia.org/wiki/Liquid%20hydrogen | Liquid hydrogen (LH2) is the liquid state of the element hydrogen. Hydrogen is found naturally in the molecular H2 form.
To exist as a liquid, H2 must be cooled below its critical point of 33 K. However, for it to be in a fully liquid state at atmospheric pressure, H2 needs to be cooled to about 20.3 K (−252.9 °C). A common method of obtaining liquid hydrogen involves a compressor resembling a jet engine in both appearance and principle. Liquid hydrogen is typically used as a concentrated form of hydrogen storage. Storing it as liquid takes less space than storing it as a gas at normal temperature and pressure. However, the liquid density is very low compared to other common fuels. Once liquefied, it can be maintained as a liquid for some time in thermally insulated containers.
There are two spin isomers of hydrogen; whereas room temperature hydrogen is mostly orthohydrogen, liquid hydrogen consists of 99.79% parahydrogen and 0.21% orthohydrogen.
Hydrogen requires a theoretical minimum of to liquefy, and including converting the hydrogen to the para isomer, but practically generally takes compared to a heating value of hydrogen.
History
In 1885, Zygmunt Florenty Wróblewski published hydrogen's critical temperature as ; critical pressure, ; and boiling point, .
Hydrogen was liquefied by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask. The first synthesis of the stable isomer form of liquid hydrogen, parahydrogen, was achieved by Paul Harteck and Karl Friedrich Bonhoeffer in 1929.
Spin isomers of hydrogen
The two nuclei in a dihydrogen molecule can have two different spin states.
Parahydrogen, in which the two nuclear spins are antiparallel, is more stable than orthohydrogen, in which the two are parallel. At room temperature, gaseous hydrogen is mostly in the ortho isomeric form due to thermal energy, but an ortho-enriched mixture is only metastable when liquified at low temperature. It slowly undergoes an exothermic reaction to become the para isomer, with enough energy released as heat to cause some of the liquid to boil. To prevent loss of the liquid during long-term storage, it is therefore intentionally converted to the para isomer as part of the production process, typically using a catalyst such as iron(III) oxide, activated carbon, platinized asbestos, rare earth metals, uranium compounds, chromium(III) oxide, or some nickel compounds.
Uses
Liquid hydrogen is a common liquid rocket fuel for rocketry application and is used by NASA and the U.S. Air Force, which operate a large number of liquid hydrogen tanks with an individual capacity up to 3.8 million liters (1 million U.S. gallons).
In most rocket engines fueled by liquid hydrogen, it first cools the nozzle and other parts before being mixed with the oxidizer, usually liquid oxygen, and burned to produce water with traces of ozone and hydrogen peroxide. Practical H2–O2 rocket engines run fuel-rich so that the exhaust contains some unburned hydrogen. This reduces combustion chamber and nozzle erosion. It also reduces the molecular weight of the exhaust, which can increase specific impulse, despite the incomplete combustion.
Liquid hydrogen can be used as the fuel for an internal combustion engine or fuel cell. Various submarines, including the Type 212 submarine, Type 214 submarine, and others, and concept hydrogen vehicles have been built using this form of hydrogen, such as the DeepC, BMW H2R, and others. Due to its similarity to liquefied natural gas (LNG), builders can sometimes modify and share equipment with systems designed for LNG. Liquid hydrogen is being investigated as a zero-carbon fuel for aircraft. Because of the lower volumetric energy, the hydrogen volumes needed for combustion are large. Unless direct injection is used, a severe gas-displacement effect also hampers maximum breathing and increases pumping losses.
Liquid hydrogen is also used to cool neutrons to be used in neutron scattering. Since neutrons and hydrogen nuclei have similar masses, kinetic energy exchange per interaction is maximum (elastic collision). Finally, superheated liquid hydrogen was used in many bubble chamber experiments.
The first thermonuclear bomb, Ivy Mike, used liquid deuterium, also known as hydrogen-2, for nuclear fusion.
Properties
The product of hydrogen combustion in a pure oxygen environment is solely water vapor. However, the high combustion temperatures and present atmospheric nitrogen can result in the breaking of N≡N bonds, forming toxic NOx if no exhaust scrubbing is done. Since water vapor is often considered harmless to the environment, an engine burning hydrogen in pure oxygen can be considered "zero emissions". In aviation, however, water vapor emitted in the atmosphere contributes to global warming (to a lesser extent than CO2). Liquid hydrogen also has a much higher specific energy than gasoline, natural gas, or diesel.
The density of liquid hydrogen is only 70.85 kg/m3 (at 20 K), a relative density of just 0.07. Although the specific energy is more than twice that of other fuels, this gives it a remarkably low volumetric energy density, many times lower than theirs.
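A rough back-of-the-envelope comparison illustrates the point; the heating values and gasoline density below are assumed round numbers for illustration, not figures from this article:

```python
# Assumed round-number heating values and densities (illustrative only):
# hydrogen ~120 MJ/kg (lower heating value), gasoline ~44 MJ/kg at ~750 kg/m^3.
rho_lh2 = 70.85            # kg/m^3, the liquid hydrogen density quoted above
e_h2 = 120e6               # J/kg, assumed lower heating value of hydrogen
e_gasoline, rho_gasoline = 44e6, 750.0   # J/kg and kg/m^3, assumed for gasoline

vol_lh2 = e_h2 * rho_lh2                  # ~8.5e9 J/m^3
vol_gasoline = e_gasoline * rho_gasoline  # ~3.3e10 J/m^3
print(vol_gasoline / vol_lh2)             # gasoline stores roughly 4x more energy per unit volume
```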
Liquid hydrogen requires cryogenic storage technology such as special thermally insulated containers and requires special handling common to all cryogenic fuels. This is similar to, but more severe than liquid oxygen. Even with thermally insulated containers it is difficult to keep such a low temperature, and the hydrogen will gradually leak away (typically at a rate of 1% per day). It also shares many of the same safety issues as other forms of hydrogen, as well as being cold enough to liquefy, or even solidify atmospheric oxygen, which can be an explosion hazard.
The triple point of hydrogen is at 13.81 K and 7.042 kPa.
Safety
Due to its cold temperatures, liquid hydrogen is a hazard for cold burns. Hydrogen itself is biologically inert; its main hazards to human health as a vapor are the displacement of oxygen, resulting in asphyxiation, and its very high flammability and ability to detonate when mixed with air. Because of its flammability, liquid hydrogen should be kept away from heat or flame unless ignition is intended. Unlike ambient-temperature gaseous hydrogen, which is lighter than air, hydrogen recently vaporized from liquid is so cold that it is heavier than air and can form flammable heavier-than-air air–hydrogen mixtures.
See also
Industrial gas
Liquefaction of gases
Hydrogen safety
Compressed hydrogen
Cryo-adsorption
Expansion ratio
Gasoline gallon equivalent
Slush hydrogen
Solid hydrogen
Metallic hydrogen
Hydrogen infrastructure
Hydrogen-powered aircraft
Liquid hydrogen tank car
Liquid hydrogen tanktainer
Hydrogen tanker
References
Hydrogen physics
Hydrogen technologies
Hydrogen storage
Liquid fuels
Rocket fuels
Coolants
Cryogenics
Hydrogen
Industrial gases
1898 in science | Liquid hydrogen | Physics,Chemistry | 1,407 |
58,297,912 | https://en.wikipedia.org/wiki/Peter%20Pauson | Prof Peter Ludwig Israel Pauson FRSE FRIC (1925–2013) was a German–Jewish emigrant who settled in Britain and who is remembered for his contributions to chemistry, most notably the Pauson–Khand reaction and as joint discoverer of ferrocene.
Life
He was born in Bamberg, Germany on 30 July 1925, the son of Stefan Pauson and his wife, Helene Dorothea Herzfelder. His parents escaped to England in 1939 with Peter and his two sisters to flee the Nazi persecution of Jews.
In 1942 the family moved to Glasgow and he began studying chemistry in the University of Glasgow under Thomas Stevens Stevens. After graduating in 1946, he moved to Sheffield University as a postgraduate, studying under Robert Downs Haworth and receiving his doctorate in 1949. He then went to Duquesne University in Pittsburgh, Pennsylvania and pursued research on tropolones and other aromatic non-benzenoid molecules. His discovery of ferrocene with his student, Thomas J. Kealy, arose from an attempt to dimerize cyclopentadienylmagnesium bromide using Iron(III) chloride; the orange-yellow solid with formula C10H10Fe was described as a "molecular sandwich" in Pauson's note which was published in Nature in 1951.
From 1951 to 1952 he studied at the University of Chicago under Morris Kharasch, then becoming a DuPont Fellow at Harvard University. He then gained practical experience at the DuPont Laboratories in Wilmington. Returning to Britain, he became a lecturer at Sheffield University and in 1959 became Professor of Organic Chemistry at Strathclyde University. In 1964 he was elected a Fellow of the Royal Society of Edinburgh.
Pauson and his postdoctoral assistant, Ihsan Khand, discovered the reaction now renowned as the Pauson–Khand reaction in 1971, though Pauson always referred to it as the "Khand reaction".
In 1994, the University of Strathclyde established the Merck Pauson Chair in Preparative Chemistry, funded by Merck, marking the contribution of Pauson to chemistry and to the university.
Pauson retired in 1995 and died peacefully at home on 10 December 2013. He was cremated at Clydebank Crematorium. In his obituary, he is described as "a gentleman of modesty, humility, and compassion … a fine man and a marvellous scientist".
Family
He married Lai-Ngau Mary (née Wong) (1928 – March 18, 2010), having met her at a party hosted by Enrico Fermi when Pauson was at the University of Chicago in the early 1950s. They went on to have two children, Hilary and Alfred.
Selected publications
Organometallic Chemistry (1967)
References
1925 births
2013 deaths
Jewish emigrants from Nazi Germany to the United Kingdom
German organic chemists
Alumni of the University of Sheffield
Academics of the University of Sheffield
Academics of the University of Strathclyde
Fellows of the Royal Society of Edinburgh
Duquesne University alumni | Peter Pauson | Chemistry | 603 |
12,703,254 | https://en.wikipedia.org/wiki/Custody%20transfer | Custody transfer in the oil and gas industry refers to transactions in which physical substances are transported from one operator to another. This includes the transferring of raw and refined petroleum between tanks and railway tank cars, onto ships, and other transactions. Custody transfer in fluid measurement is defined as a metering point (location) where the fluid is being measured for sale from one party to another. During custody transfer, accuracy is of great importance to both the company delivering the material and the eventual recipient.
The term "fiscal metering" is often interchanged with custody transfer, and refers to metering that is a point of a commercial transaction such as when a change in ownership takes place. Custody transfer takes place any time fluids are passed from the possession of one party to another. The use of the phrase "fiscal metering" does not necessary imply any single expectation of the quality of the instrumentation to be installed. "Fiscal" refers to the meter's service, not its quality. "Fiscal" usually means ‘concerned with government finance’.
Custody transfer generally involves:
Industry standards;
National metrology standards;
Contractual agreements between custody transfer parties; and
Government regulation and taxation.
Due to the high level of accuracy required during custody transfer applications, the flowmeters which are used to perform this are subject to approval by an organization such as the American Petroleum Institute (API).
Custody transfer operations can occur at a number of points along the way; these may include operations, transactions or transferring of oil from an oil production platform to a ship, barge, railcar, truck and also to the final destination point, such as a refinery.
To meet these standards and agreements and achieve maximum accuracy, all parties involved in fuel distribution processes (sellers and buyers, transport and storage services, and fiscal departments) must follow the custody transfer procedures, and the appropriate measurements and related documenting operations must be fully implemented. Custody transfer measurements involve measurements in pipelines, storage tanks, and transportation tanks (tankers, trailers or railway tanks) - the whole fuel distribution process must be traceable. Measurements can be made in volume or mass units (or both), so various metering methods are commonly used.
Current volume of a product stored in a tank can be calculated using a tank capacity table (sometimes called a "tank calibration table") and the current levels and temperatures of the product in the tank. A tank capacity table stores data about level and the corresponding volume in a tank and has a very high impact on the overall accuracy of the volume calculation. Typical accuracy of capacity tables for custody transfer operations is 0.05..0.1%. The initial installation of a tank, its accuracy and lifecycle changes (like inclination or sediments) affect the accuracy of the capacity table, so tables must be revised periodically. Some capacity tables are multidimensional and store additional data - like heel and trim for ships' tanks, or density of stored products - and/or are used in systems for automated volume/mass calculations.
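A minimal sketch of how such a capacity (strapping) table is used in practice follows; the function name, table entries, and simple linear interpolation scheme are illustrative assumptions, not taken from any particular standard:

```python
from bisect import bisect_right

def volume_from_level(capacity_table, level_mm):
    """Interpolate product volume (m^3) from level (mm) using a tank capacity
    (strapping) table given as a sorted list of (level_mm, volume_m3) pairs."""
    levels = [lvl for lvl, _ in capacity_table]
    vols = [v for _, v in capacity_table]
    if not (levels[0] <= level_mm <= levels[-1]):
        raise ValueError("level outside calibrated range")
    i = bisect_right(levels, level_mm) - 1
    if i == len(levels) - 1:
        return vols[-1]
    frac = (level_mm - levels[i]) / (levels[i + 1] - levels[i])
    return vols[i] + frac * (vols[i + 1] - vols[i])

# Hypothetical table entries: 0-3000 mm of level in 1000 mm steps.
table = [(0, 0.0), (1000, 78.5), (2000, 157.1), (3000, 235.6)]
print(volume_from_level(table, 1500))   # ~117.8 m^3
```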
Metering methods
Custody transfer is one of the most important applications for flow measurement. Many flow measurement technologies are used for custody transfer applications; these include differential pressure (DP) flowmeters, turbine flowmeters, positive displacement flowmeters, Coriolis flowmeters and ultrasonic flowmeters.
Differential pressure flowmeters
Differential pressure (DP) flowmeters are used for custody transfer to measure the flow of liquids, gases, and steam. The DP flowmeter consists of a differential pressure transmitter and a primary element. The primary element places a constriction in the flow stream, while the DP transmitter measures the difference in pressure upstream and downstream of the constriction.
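As a sketch of the square-root relationship between differential pressure and flow that such meters rely on, the general orifice-plate equation can be written as a short function; the discharge coefficient, expansibility factor, and example numbers below are assumptions for illustration only:

```python
import math

def orifice_mass_flow(dp_pa, rho, d_m, beta, C=0.61, eps=1.0):
    """Approximate mass flow (kg/s) through an orifice plate from the measured
    differential pressure, using the general orifice equation.
    dp_pa: differential pressure (Pa); rho: upstream fluid density (kg/m^3)
    d_m: orifice bore diameter (m); beta: bore-to-pipe diameter ratio
    C: discharge coefficient (assumed ~0.61 here); eps: expansibility (1.0 for liquids)"""
    area = math.pi / 4 * d_m ** 2
    return (C / math.sqrt(1 - beta ** 4)) * eps * area * math.sqrt(2 * dp_pa * rho)

# Illustrative numbers only: 50 kPa across a 50 mm bore in a 100 mm water line.
print(orifice_mass_flow(dp_pa=50e3, rho=1000.0, d_m=0.05, beta=0.5))  # ~12 kg/s
```

The square-root dependence on dp_pa is why turndown is limited and why sizing the primary element for a higher differential pressure improves accuracy at the cost of pressure loss.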
In many cases, pressure transmitters and primary elements are bought by the end-users from different suppliers. However, several vendors have integrated the pressure transmitter with the primary element to form a complete flowmeter. The advantage of this is that they can be calibrated with the primary element and DP transmitter already in place.
Standards and criteria for the use of DP flowmeters for custody transfer applications are specified by the American Gas Association (AGA) and the American Petroleum Institute (API).
An advantage of DP flowmeters is that they are the most studied and best understood type of flowmeter. A disadvantage of DP flowmeters is that they introduce a pressure drop into the flow line. This is a necessary result of the constriction in the line that is required to make the DP flow measurement.
One important development in the use of DP flowmeters for custody transfer applications has been the development of single and dual chamber orifice fittings.
Turbine flowmeters
The first turbine flowmeter was invented by Reinhard Woltman, a German engineer, in 1790. Turbine flowmeters consist of a rotor with propeller-like blades that spins as water or some other fluid passes over it. The rotor spins in proportion to flow rate (see turbine meters). There are many types of turbine meters, but many of those used for gas flow are called axial meters.
The turbine flowmeter is most useful when measuring clean, steady, high-speed flow of low-viscosity fluids. In comparison to other flowmeters, the turbine flowmeter has a significant cost advantage over ultrasonic flowmeters, especially in the larger line sizes, and it also has a favourable price compared to the prices of DP flowmeters, especially in cases where one turbine meter can replace several DP meters.
The disadvantage of turbine flowmeters is that they have moving parts that are subject to wear. To prevent wear and inaccuracy, durable materials are used, including ceramic ball bearings.
Positive displacement flowmeters
Positive displacement (PD) flowmeters are highly accurate meters that are widely used for custody transfer of commercial and industrial water, as well as for custody transfer of many other liquids. PD flowmeters have the advantage that they have been approved by a number of regulatory bodies for this purpose, and they have not yet been displaced from these applications by other technologies.
PD meters excel at measuring low flows, and also at measuring highly viscous flows, because a PD meter captures the flow in a container of known volume. Speed of flow doesn't matter when using a PD meter.
Coriolis flowmeters
Coriolis flowmeters have been around for more than 30 years and are preferred in process industries such as chemical and food and beverage. Coriolis technology offers accuracy and reliability in measuring material flow, and is often hailed as among the best flow measurement technologies due to direct mass flow, fluid density, temperature, and precise calculated volume flow rates. Coriolis meters do not have any moving parts and provide long-term stability, repeatability, and reliability. Because they are direct mass flow measurement devices, Coriolis meters can handle the widest range of fluids from gases to heavy liquids and are not impacted by viscosity or density changes that often affect velocity-based technologies (PD, turbine, ultrasonic). With the widest flow range capability of any flow technology, Coriolis can be sized for low pressure drop. This, combined with the fact that they are not flow-profile dependent, helps eliminate the need for straight runs and flow conditioning, which enables custody transfer systems to be designed with minimal pressure drop.
Any measurement instrument that relies on one measurement principle only will show a higher measurement uncertainty under two-phase flow conditions. Conventional measurement principles, like positive displacement, turbine meters, and orifice plates, will seemingly continue to measure, but will not be able to inform the user about the occurrence of two-phase flow. Modern principles based on the Coriolis effect or ultrasonic flow measurement, however, will inform the user by means of diagnostic functions.
Flow is measured with Coriolis meters by analyzing the changes in the Coriolis force of a flowing substance. The force is generated when a mass moves within a rotating frame of reference: the rotation produces an acceleration that is proportional to the linear velocity of the mass. For a fluid, the resulting Coriolis force is proportional to the mass flow rate of that fluid.
A Coriolis meter has two main components: an oscillating flow tube equipped with sensors and drivers, and an electronic transmitter that controls the oscillations, analyzes the results, and transmits the information. The Coriolis principle for flow measurement requires the oscillating section of a rotating pipe to be exploited. Oscillation produces the Coriolis force, which traditionally is sensed and analyzed to determine the rate of flow. Modern Coriolis meters utilize the phase difference measured at each end of the oscillating pipe.
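A minimal statement of the underlying relation (symbols assumed here for illustration, not quoted from a cited source): for fluid with mass flow rate \(\dot{m}\) passing through a tube element of length \(\mathrm{d}x\) that oscillates with instantaneous angular velocity \(\omega(t)\), the Coriolis force on that element is

\[ \mathrm{d}F_{\mathrm{c}} = 2\,\omega(t)\,\dot{m}\,\mathrm{d}x \]

so the resulting tube twist, observed as a phase or time difference between the inlet-side and outlet-side sensors, is to first order proportional to the mass flow rate and largely independent of the fluid's density and viscosity.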
Ultrasonic flowmeters
Ultrasonic flowmeters were first introduced into industrial markets in 1963 by Tokyo Keiki (now Tokimec) in Japan. Custody transfer measurements have been around for a long time, and over the past ten years, Coriolis and ultrasonic meters have become the flowmeters of choice for custody transfer in the oil and gas industry.
Ultrasonic meters provide volumetric flow rate. They typically use the transit-time method, where sounds waves transmitted in the direction of fluid flow travel faster than those travelling upstream. The transit time difference is proportional to fluid velocity. Ultrasonic flow meters have negligible pressure drop if recommended installation is followed, have high turndown capability, and can handle a wide range of applications. Crude oil production, transportation, and processing are typical applications for this technology.
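A minimal sketch of the transit-time relation for a single straight path follows; the path length, beam angle, and flow values are illustrative assumptions, and the derivation shows that the speed of sound cancels out of the expression:

```python
import math

def velocity_from_transit_times(t_up_s, t_down_s, path_len_m, angle_deg):
    """Mean axial fluid velocity (m/s) from upstream/downstream transit times on a
    single straight path of length L inclined at angle_deg to the pipe axis.
    The speed of sound cancels out of this expression."""
    return path_len_m / (2 * math.cos(math.radians(angle_deg))) * (1 / t_down_s - 1 / t_up_s)

# Illustrative numbers only: 0.2 m path at 45 degrees, ~1 m/s flow in water (c ~ 1480 m/s).
c, v, L, th = 1480.0, 1.0, 0.2, math.radians(45)
t_down = L / (c + v * math.cos(th))   # pulse travelling with the flow
t_up = L / (c - v * math.cos(th))     # pulse travelling against the flow
print(velocity_from_transit_times(t_up, t_down, L, 45))   # recovers ~1.0 m/s
```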
The use of ultrasonic flowmeters is continuing to grow for custody transfer. Unlike PD and turbine meters, ultrasonic flowmeters do not have moving parts. Pressure drop is much reduced with an ultrasonic meter when compared to PD, turbine, and DP meters. Installation of ultrasonic meters is relatively straightforward, and maintenance requirements are low.
In June 1998, The American Gas Association published a standard called AGA-9. This standard lays out the criteria for the use of ultrasonic flowmeters for Custody Transfer of Natural Gas.
Components
Custody transfer requires an entire metering system that is designed and engineered for the application, not just flowmeters. Components of a custody transfer system typically include:
Multiple meters/meter runs;
Flow computers;
Quality systems (gas chromatographs to measure energy content of natural gas and sampling systems for liquid);
Calibration using in-place or mobile provers for liquid, or master-meter for liquid or gas; and
Supporting automation.
A typical liquid custody transfer skid includes multiple flowmeters and meter provers. Provers are used to calibrate meters in-situ; proving runs are performed frequently, typically before, during, and after a batch transfer for metering assurance. A good example of this is a Lease Automatic Custody Transfer (LACT) unit in a crude oil production facility.
Accuracy
In the ISO 5725-1 standard, accuracy for measuring instruments is defined as "the closeness of agreement between a test result and the accepted reference value". The term "accuracy" includes both a random component and a systematic error or bias component. Each device has its manufacturer-stated accuracy specification and its tested accuracy. Uncertainty takes into account all the metering system factors that impact measurement accuracy. Flowmeters of the same accuracy could be used in two different metering systems that ultimately have different calculated uncertainties, due to other factors in the system that affect flow calculations. Uncertainty even includes such factors as the flow computer's A/D converter accuracy. The quest for accuracy in a custody transfer system requires meticulous attention to detail.
Custody transfer requirements
Custody transfer metering systems must meet requirements set by industry bodies such as AGA, API, or ISO, and national metrology standards such as OIML (International), NIST (U.S.), PTB (Germany), CMC (China), GOST (Russia), and DSTU (Ukraine), among others. These requirements can be of two types: Legal and Contract.
Legal
The national Weights & Measures codes and regulations control the wholesale and retail trade requirements to facilitate fair trade. The regulations and accuracy requirements vary widely between countries and commodities, but they all have one common characteristic - “traceability”. There is always a procedure that defines the validation process where the duty meter is compared to a standard that is traceable to the legal metrology agency of the respective region.
Contract
A contract is a written agreement between buyers and sellers that defines the measurement requirements. These are large-volume sales between operating companies where refined products and crude oils are transported by marine, pipeline or rail. Custody transfer measurement must be at the highest level of accuracy possible because a small error in measurement can amount to a large financial difference. Due to these critical natures of measurements, petroleum companies around the world have developed and adopted standards to meet the industry's needs.
In Canada, for instance, all measurement of a custody transfer nature falls under the purview of Measurement Canada. In the US, the Federal Energy Regulatory Commission (FERC) controls the standards which must be met for interstate trade.
Liquid custody transfer
Custody transfer liquid flow measurement follows guidelines set by the ISO. By industrial consensus, liquid flow measurement is defined as having an overall uncertainty of ±0.25% or better. The overall uncertainty is derived from an appropriate statistical combination of the component uncertainties in the measurement system.
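The "appropriate statistical combination" is commonly a root-sum-square of independent component uncertainties, as in the ISO Guide to the Expression of Uncertainty in Measurement; a minimal sketch with purely illustrative component values:

```python
import math

def combined_uncertainty(components_percent):
    """Root-sum-square combination of independent (uncorrelated) component
    uncertainties, each expressed in percent of reading."""
    return math.sqrt(sum(u ** 2 for u in components_percent))

# Illustrative component values only (meter, prover, density, temperature correction):
print(combined_uncertainty([0.15, 0.10, 0.12, 0.05]))  # ~0.22 %, inside the +/-0.25 % target
```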
Mode of measurement
Volume or mass measurement
Liquid flow measurements are usually made in volumetric or mass units. Volume is normally used for stand-alone field tanker loading operations, while mass is used for multi-field pipelines or offshore pipelines with an allocation requirement.
Mass measurement and reporting are achieved by
Measurement of volume flow rate (for example, by turbine or ultrasonic meter) and fluid density
Direct mass measurement by Coriolis meter
Sampling system
An automatic flow-proportional sampling system is used in flow measurement to determine the average water content, average density and for analysis purposes. Sampling systems should be broadly in accordance with ISO 3171.
The sampling system is a critical section during flow measurement. Any errors introduced through sampling error will generally have a direct, linear effect on the overall measurement.
Temperature and pressure measurement
Temperature and pressure measurement are important factors to consider when taking flow measurements of liquids. Temperature and pressure measurement points should be situated as close to the meter as possible, in reference to their conditions at the meter inlet. Temperature measurements that affect the accuracy of the metering system should have an overall loop accuracy of 0.5 °C or better, and the corresponding readout should have a resolution of 0.2 °C or better.
Temperature checks are performed with certified thermometers, with the aid of thermowells.
Pressure measurements that affect the accuracy of the metering system should have an overall loop accuracy of 0.5 bar or better and the corresponding readout should have a resolution of 0.1 bar or better.
Gaseous custody transfer
Custody transfer gaseous flow measurement follows guidelines set by international bodies. By industrial consensus, gaseous flow measurement is defined as mass flow measurement with an overall uncertainty of ±1.0% or better. The overall uncertainty is derived from an appropriate statistical combination of the component uncertainties in the measurement system.
Mode of measurement
Volume or mass unit
All gaseous flow measurement must be made on single-phase gas streams, having measurements in either volumetric or mass units.
Sampling
Sampling is an important aspect, as it helps to ensure accuracy. Appropriate facilities should be provided for the purpose of obtaining representative samples. The type of instrumentation and the measuring system may influence this requirement.
Gas density
Gas density at the meter may be determined either by:
Continuous direct measurement, by on-line densitometer
Calculation, using a recognised equation of state together with measurements of the gas temperature, pressure and composition.
Most industries prefer to use the continuous measurement of gas density. However, both methods may be used simultaneously, and the comparison of their respective results may provide additional confidence in the accuracy of each method.
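The calculation route mentioned above can be sketched with the real-gas relation ρ = pM/(ZRT); the function below is illustrative only, and the composition, operating conditions, and compressibility factor in the example are assumptions rather than values from this article:

```python
def gas_density(p_pa, t_k, molar_mass_kg_per_mol, z=1.0):
    """Density (kg/m^3) from the real-gas equation of state rho = p*M/(Z*R*T).
    z is the compressibility factor (1.0 for an ideal gas; in practice obtained
    from a recognised equation of state such as AGA-8 for natural gas)."""
    R = 8.314462618  # universal gas constant, J/(mol*K)
    return p_pa * molar_mass_kg_per_mol / (z * R * t_k)

# Illustrative only: pipeline gas approximated as pure methane (M = 0.01604 kg/mol)
# at 50 bar and 15 C, with an assumed compressibility factor of 0.90.
print(gas_density(50e5, 288.15, 0.01604, z=0.90))  # ~37 kg/m^3
```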
Best practices
In any custody transfer application, a truly random uncertainty has an equal chance of favouring either party, so in principle its net impact on both parties should be zero and accuracy and repeatability would not need to be highly valued. In practice, however, measurement accuracy and repeatability are of high value to most sellers and buyers, because many users install check meters.
The first step in designing any custody transfer system is to determine the mutual measurement performance expectations of the supplier and the user over the range of flow rates. This determination of mutual performance expectations should be made by individuals who have a clear understanding of all of the costs of measurement disputes caused by poor repeatability.
The second step is to quantify the operating conditions which are not controllable. For a flow measurement, these can include:
Expected ambient temperature variation;
Maximum static line pressure;
Static line pressure and temperature variation;
Maximum allowable permanent pressure loss;
Flow turndown; and
Expected frequency of flow variation and/or pulsation.
The third and final step is to select hardware, installation and maintenance procedures which will ensure that the measurement provides the required installed performance under the expected (uncontrollable) operating conditions. For example, the user can:
Select a static and/or differential pressure transmitter which has better or worse performance under the given real-world operating conditions.
Calibrate the transmitter(s) frequently or infrequently.
In the case of a DP flowmeter, size the primary element for a higher or lower differential pressure (higher DP's provide higher accuracy, at the expense of higher pressure loss).
Select a flowmeter and pressure transmitter with faster or slower response.
Use long or short interconnection (impulse) lines, or direct connect for fastest response.
While the first and second steps involve gathering data, the third step may require calculations and/or testing.
General formula for calculating energy transferred (LNG)
The formula for calculating the LNG transferred depends on the contractual sales conditions. These can relate to three types of sale contract as defined by Incoterms 2000: an FOB sale, a CIF sale or a DES sale.
In the case of an FOB (Free On Board) sale, the determination of the energy transferred and invoiced for will be made in the loading port.
In the case of a CIF (Cost Insurance & Freight) or a DES (Delivered Ex Ship) sale, the energy transferred and invoiced for will be determined in the unloading port.
In FOB contracts, the buyer is responsible to provide and maintain the custody transfer measurement systems on board the vessel for volume, temperature and pressure determination and the seller is responsible to provide and maintain the custody transfer measurement systems at the loading terminal such as the sampling and gas analysis. For CIF and DES contracts the responsibility is reversed.
Both buyer and seller have the right to verify the accuracy of each system that is provided, maintained and operated by the other party.
The determination of the transferred energy usually happens in the presence of one or more surveyors, the ship's cargo officer and a representative of the LNG terminal operator. A representative of the buyer can also be present.
In all cases, the transferred energy can be calculated with the following formula:
E = (VLNG × DLNG × GCVLNG) − Egas displaced ± Egas to ER (if applicable)
Where:
E = the total net energy transferred from the loading facilities to the LNG carrier, or from the LNG carrier to the unloading facilities.
VLNG= the volume of LNG loaded or unloaded in m3.
DLNG = the density of LNG loaded or unloaded in kg/m3.
GCVLNG = the gross calorific value of the LNG loaded or unloaded in million BTU/kg
E gas displaced = The net energy of the displaced gas, also in million BTU, which is either:
sent back onshore by the LNG carrier when loading (volume of gas in cargo tanks displaced by same volume of loaded LNG),
Or, gas received by the LNG carrier in its cargo tanks when unloading in replacement of the volume of discharged LNG.
E(gas to ER) = If applicable, the energy of the gas consumed in the LNG carrier's engine room during the time between opening and closing custody transfer surveys, i.e. used by the vessel at the port, which is:
+ For an LNG loading transfer or
- For an LNG unloading transfer
References
External links
Measurement Canada.
CMC
Tokyo KEIKI
API
Flow Research
GUIDANCE NOTES FOR PETROLEUM MEASUREMENT (Highly recommended)
ISO
Fluid mechanics | Custody transfer | Engineering | 4,177 |
202,522 | https://en.wikipedia.org/wiki/Ionizing%20radiation | Ionizing radiation (US, ionising radiation in the UK), including nuclear radiation, consists of subatomic particles or electromagnetic waves that have sufficient energy to ionize atoms or molecules by detaching electrons from them. Some particles can travel up to 99% of the speed of light, and the electromagnetic waves are on the high-energy portion of the electromagnetic spectrum.
Gamma rays, X-rays, and the higher energy ultraviolet part of the electromagnetic spectrum are ionizing radiation, whereas the lower energy ultraviolet, visible light, infrared, microwaves, and radio waves are non-ionizing radiation. Nearly all types of laser light are non-ionizing radiation. The boundary between ionizing and non-ionizing radiation in the ultraviolet area cannot be sharply defined, as different molecules and atoms ionize at different energies. The energy of ionizing radiation starts between 10 electronvolts (eV) and 33 eV.
Ionizing subatomic particles include alpha particles, beta particles, and neutrons. These particles are created by radioactive decay, and almost all are energetic enough to ionize. There are also secondary cosmic particles produced after cosmic rays interact with Earth's atmosphere, including muons, mesons, and positrons. Cosmic rays may also produce radioisotopes on Earth (for example, carbon-14), which in turn decay and emit ionizing radiation. Cosmic rays and the decay of radioactive isotopes are the primary sources of natural ionizing radiation on Earth, contributing to background radiation. Ionizing radiation is also generated artificially by X-ray tubes, particle accelerators, and nuclear fission.
Ionizing radiation is not immediately detectable by human senses, so instruments such as Geiger counters are used to detect and measure it. However, very high energy particles can produce visible effects on both organic and inorganic matter (e.g. the glow of water caused by Cherenkov radiation) or humans (e.g. acute radiation syndrome).
Ionizing radiation is used in a wide variety of fields such as medicine, nuclear power, research, and industrial manufacturing, but presents a health hazard if proper measures against excessive exposure are not taken. Exposure to ionizing radiation causes cell damage to living tissue and organ damage. In high acute doses, it will result in radiation burns and radiation sickness, and lower level doses over a protracted time can cause cancer. The International Commission on Radiological Protection (ICRP) issues guidance on ionizing radiation protection, and the effects of dose uptake on human health.
Directly ionizing radiation
Ionizing radiation may be grouped as directly or indirectly ionizing.
Any charged particle with mass can ionize atoms directly by fundamental interaction through the Coulomb force if it carries sufficient kinetic energy. Such particles include atomic nuclei, electrons, muons, charged pions, protons, and energetic charged nuclei stripped of their electrons. When moving at relativistic speeds (near the speed of light, c) these particles have enough kinetic energy to be ionizing, but there is considerable speed variation. For example, a typical alpha particle moves at about 5% of c, but an electron with 33 eV (just enough to ionize) moves at about 1% of c.
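The quoted speeds can be checked with a non-relativistic estimate, v = sqrt(2E/m); the 5 MeV alpha energy used below is an assumed typical decay energy, not a figure from this article:

```python
import math

c = 2.998e8            # speed of light, m/s
eV = 1.602e-19         # joules per electronvolt
m_e = 9.109e-31        # electron mass, kg
m_alpha = 6.645e-27    # alpha particle mass, kg

def classical_speed(energy_ev, mass_kg):
    """Non-relativistic speed from kinetic energy, v = sqrt(2E/m) -- adequate
    for the low-percentage-of-c cases quoted in the text."""
    return math.sqrt(2 * energy_ev * eV / mass_kg)

print(classical_speed(33, m_e) / c)        # ~0.011, i.e. about 1% of c
print(classical_speed(5e6, m_alpha) / c)   # ~0.05 of c for an assumed 5 MeV decay alpha
```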
Two of the first types of directly ionizing radiation to be discovered are alpha particles which are helium nuclei ejected from the nucleus of an atom during radioactive decay, and energetic electrons, which are called beta particles.
Natural cosmic rays are made up primarily of relativistic protons but also include heavier atomic nuclei like helium ions and HZE ions. In the atmosphere such particles are often stopped by air molecules, and this produces short-lived charged pions, which soon decay to muons, a primary type of cosmic ray radiation that reaches the surface of the earth. Pions can also be produced in large amounts in particle accelerators.
Alpha particles
Alpha particles consist of two protons and two neutrons bound together into a particle identical to a helium nucleus. Alpha particle emissions are generally produced in the process of alpha decay.
Alpha particles are a strongly ionizing form of radiation, but when emitted by radioactive decay they have low penetration power and can be absorbed by a few centimeters of air, or by the top layer of human skin. More powerful alpha particles from ternary fission are three times as energetic, and penetrate proportionately farther in air. The helium nuclei that form 10–12% of cosmic rays, are also usually of much higher energy than those produced by radioactive decay and pose shielding problems in space. However, this type of radiation is significantly absorbed by the Earth's atmosphere, which is a radiation shield equivalent to about 10 meters of water.
The alpha particle was named by Ernest Rutherford after the first letter in the Greek alphabet, α, when he ranked the known radioactive emissions in descending order of ionising effect in 1899. The symbol is α or α2+. Because they are identical to helium nuclei, they are also sometimes written as He2+ or 4He2+, indicating a helium ion with a +2 charge (missing its two electrons). If the ion gains electrons from its environment, the alpha particle can be written as a normal (electrically neutral) helium atom, He.
Beta particles
Beta particles are high-energy, high-speed electrons or positrons emitted by certain types of radioactive nuclei, such as potassium-40. The production of beta particles is termed beta decay. They are designated by the Greek letter beta (β). There are two forms of beta decay, β− and β+, which respectively give rise to the electron and the positron. Beta particles are much less penetrating than gamma radiation, but more penetrating than alpha particles.
High-energy beta particles may produce X-rays known as bremsstrahlung ("braking radiation") or secondary electrons (delta ray) as they pass through matter. Both of these can cause an indirect ionization effect. Bremsstrahlung is of concern when shielding beta emitters, as the interaction of beta particles with some shielding materials produces Bremsstrahlung. The effect is greater with material having high atomic numbers, so material with low atomic numbers is used for beta source shielding.
Positrons and other types of antimatter
The positron or antielectron is the antiparticle or the antimatter counterpart of the electron. When a low-energy positron collides with a low-energy electron, annihilation occurs, resulting in their conversion into the energy of two or more gamma ray photons (see electron–positron annihilation). As positrons are positively charged particles they can directly ionize an atom through Coulomb interactions.
Positrons can be generated by positron emission nuclear decay (through weak interactions), or by pair production from a sufficiently energetic photon. Positrons are common artificial sources of ionizing radiation used in medical positron emission tomography (PET) scans.
Charged nuclei
Charged nuclei are characteristic of galactic cosmic rays and solar particle events and except for alpha particles (charged helium nuclei) have no natural sources on earth. In space, however, very high energy protons, helium nuclei, and HZE ions can be initially stopped by relatively thin layers of shielding, clothes, or skin. However, the resulting interaction will generate secondary radiation and cause cascading biological effects. If just one atom of tissue is displaced by an energetic proton, for example, the collision will cause further interactions in the body. This is called "linear energy transfer" (LET), which utilizes elastic scattering.
LET can be visualized as a billiard ball hitting another in the manner of the conservation of momentum, sending both away with the energy of the first ball divided between the two unequally. When a charged nucleus strikes a relatively slow-moving nucleus of an object in space, LET occurs and neutrons, alpha particles, low-energy protons, and other nuclei will be released by the collisions and contribute to the total absorbed dose of tissue.
Indirectly ionizing radiation
Indirectly ionizing radiation is electrically neutral and does not interact strongly with matter, therefore the bulk of the ionization effects are due to secondary ionization.
Photon radiation
Even though photons are electrically neutral, they can ionize atoms indirectly through the photoelectric effect and the Compton effect. Either of those interactions will cause the ejection of an electron from an atom at relativistic speeds, turning that electron into a beta particle (secondary beta particle) that will ionize other atoms. Since most of the ionized atoms are due to the secondary beta particles, photons are indirectly ionizing radiation.
Radiated photons are called gamma rays if they are produced by a nuclear reaction, subatomic particle decay, or radioactive decay within the nucleus. They are called x-rays if produced outside the nucleus. The generic term "photon" is used to describe both.
X-rays normally have a lower energy than gamma rays, and an older convention was to define the boundary as a wavelength of 10−11 m (or a photon energy of 100 keV). That threshold was driven by historic limitations of older X-ray tubes and low awareness of isomeric transitions. Modern technologies and discoveries have shown an overlap between X-ray and gamma energies. In many fields they are functionally identical, differing for terrestrial studies only in the origin of the radiation. In astronomy, however, where radiation origin often cannot be reliably determined, the old energy division has been preserved, with X-rays defined as being between about 120 eV and 120 keV, and gamma rays as being of any energy above 100 to 120 keV, regardless of source. Most sources studied in gamma-ray astronomy are known not to originate in nuclear radioactive processes but, rather, result from processes like those that produce astronomical X-rays, except driven by much more energetic electrons.
Photoelectric absorption is the dominant mechanism in organic materials for photon energies below 100 keV, typical of classical X-ray tube originated X-rays. At energies beyond 100 keV, photons ionize matter increasingly through the Compton effect, and then indirectly through pair production at energies beyond 5 MeV. The accompanying interaction diagram shows two Compton scatterings happening sequentially. In every scattering event, the gamma ray transfers energy to an electron, and it continues on its path in a different direction and with reduced energy.
Definition boundary for lower-energy photons
The lowest ionization energy of any element is 3.89 eV, for caesium. However, US Federal Communications Commission material defines ionizing radiation as that with a photon energy greater than 10 eV (equivalent to a far ultraviolet wavelength of 124 nanometers). Roughly, this corresponds to both the first ionization energy of oxygen and the ionization energy of hydrogen, both about 14 eV. In some Environmental Protection Agency references, the ionization of a typical water molecule at an energy of 33 eV is referenced as the appropriate biological threshold for ionizing radiation: this value represents the so-called W-value, the colloquial name for the ICRU's mean energy expended in a gas per ion pair formed, which combines the ionization energy plus the energy lost to other processes such as excitation. For electromagnetic radiation, 33 eV corresponds to a wavelength of about 38 nanometers, which lies in the extreme ultraviolet, well below the energy of the conventional 10 nm wavelength transition between extreme ultraviolet and X-ray radiation, which occurs at about 125 eV. Thus, X-ray radiation is always ionizing, while only the higher-energy part of the extreme-ultraviolet range is ionizing under every definition.
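The energy-wavelength conversions used in this paragraph follow from E = hc/λ. The following minimal Python sketch reproduces the quoted thresholds; the rounded value of hc in eV·nm and the print formatting are the only assumptions beyond the text:

```python
# Photon energy (eV) to vacuum wavelength (nm) using E = h*c/lambda.
# h*c is approximately 1239.84 eV*nm; results are rounded, as in the text above.
H_C_EV_NM = 1239.84

def wavelength_nm(energy_ev: float) -> float:
    """Vacuum wavelength in nanometres for a photon of the given energy in eV."""
    return H_C_EV_NM / energy_ev

for energy in (10.0, 33.0, 125.0):
    print(f"{energy:6.1f} eV -> {wavelength_nm(energy):5.1f} nm")
# 10 eV  -> ~124 nm (FCC definition of ionizing radiation)
# 33 eV  -> ~37.6 nm (W-value of water, the "38 nanometers" above)
# 125 eV -> ~9.9 nm (conventional extreme-ultraviolet / X-ray boundary)
```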
Neutrons
Neutrons have no net electric charge and thus often do not directly cause ionization in a single step or interaction with matter. However, fast neutrons will interact with the protons in hydrogen via linear energy transfer, the energy that a particle transfers to the material it is moving through. This mechanism scatters the nuclei of the materials in the target area, causing direct ionization of the hydrogen atoms. When neutrons strike the hydrogen nuclei, proton radiation (fast protons) results. These protons are themselves ionizing because they are of high energy, are charged, and interact with the electrons in matter.
Neutrons that strike other nuclei besides hydrogen will transfer less energy to the other particle if linear energy transfer does occur. But, for many nuclei struck by neutrons, inelastic scattering occurs. Whether elastic or inelastic scatter occurs is dependent on the speed of the neutron, whether fast or thermal or somewhere in between. It is also dependent on the nuclei it strikes and its neutron cross section.
In inelastic scattering, neutrons are readily absorbed in a type of nuclear reaction called neutron capture, which results in the neutron activation of the nucleus. Neutron interactions with most types of matter in this manner usually produce radioactive nuclei. The abundant oxygen-16 nucleus, for example, undergoes neutron activation by absorbing a fast neutron and promptly emitting a proton, forming nitrogen-16, which decays back to oxygen-16. The short-lived nitrogen-16 decay emits a powerful beta ray. This process can be written as:
16O (n,p) 16N (fast neutron capture possible with >11 MeV neutron)
16N → 16O + β− (Decay t1/2 = 7.13 s)
This high-energy β− in turn interacts rapidly with other nuclei, emitting high-energy γ rays via bremsstrahlung.
While not a favorable reaction, the 16O (n,p) 16N reaction is a major source of X-rays emitted from the cooling water of a pressurized water reactor and contributes enormously to the radiation generated by a water-cooled nuclear reactor while operating.
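Because nitrogen-16 is so short-lived, its activity dies away within seconds of the coolant leaving the neutron flux. A minimal Python sketch of this exponential decay, using only the 7.13 s half-life quoted above (the chosen time points are arbitrary):

```python
import math

HALF_LIFE_S = 7.13  # half-life of nitrogen-16 quoted above

def remaining_fraction(t_seconds: float) -> float:
    """Fraction of the initial N-16 activity remaining after t seconds."""
    decay_constant = math.log(2) / HALF_LIFE_S
    return math.exp(-decay_constant * t_seconds)

for t in (7.13, 30.0, 60.0):
    print(f"after {t:5.2f} s: {remaining_fraction(t):.3%} of the initial activity remains")
# After roughly one minute (about eight half-lives) less than 0.5% remains,
# which is why the N-16 hazard is largely confined to coolant near the core.
```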
For the best shielding of neutrons, hydrocarbons that have an abundance of hydrogen are used.
In fissile materials, secondary neutrons may produce nuclear chain reactions, causing a larger amount of ionization from the daughter products of fission.
Outside the nucleus, free neutrons are unstable and have a mean lifetime of about 14 minutes, 42 seconds. Free neutrons decay by emission of an electron and an electron antineutrino to become a proton, a process known as beta decay: n → p + e− + ν̄e.
In the adjacent diagram, a neutron collides with a proton of the target material, and then becomes a fast recoil proton that ionizes in turn. At the end of its path, the neutron is captured by a nucleus in an (n,γ)-reaction that leads to the emission of a neutron capture photon. Such photons always have enough energy to qualify as ionizing radiation.
Physical effects
Nuclear effects
Neutron radiation, alpha radiation, and extremely energetic gamma (> ~20 MeV) can cause nuclear transmutation and induced radioactivity. The relevant mechanisms are neutron activation, alpha absorption, and photodisintegration. A large enough number of transmutations can change macroscopic properties and cause targets to become radioactive themselves, even after the original source is removed.
Chemical effects
Ionization of molecules can lead to radiolysis (breaking chemical bonds) and the formation of highly reactive free radicals. These free radicals may then react chemically with neighbouring materials even after the original radiation has stopped (e.g., ozone cracking of polymers by ozone formed by ionization of air). Ionizing radiation can also accelerate existing chemical reactions such as polymerization and corrosion by contributing to the activation energy required for the reaction. Optical materials deteriorate under the effect of ionizing radiation.
High-intensity ionizing radiation in air can produce a visible ionized air glow of telltale bluish-purple color. The glow can be observed, e.g., during criticality accidents, around mushroom clouds shortly after a nuclear explosion, or inside a damaged nuclear reactor, as during the Chernobyl disaster.
Monatomic fluids, e.g. molten sodium, have no chemical bonds to break and no crystal lattice to disturb, so they are immune to the chemical effects of ionizing radiation. Simple diatomic compounds with very negative enthalpy of formation, such as hydrogen fluoride, will re-form rapidly and spontaneously after ionization.
Electrical effects
The ionization of materials temporarily increases their conductivity, potentially permitting damaging current levels. This is a particular hazard in semiconductor microelectronics employed in electronic equipment, with subsequent currents introducing operation errors or even permanently damaging the devices. Devices intended for high radiation environments such as the nuclear industry and extra-atmospheric (space) applications may be made radiation hard to resist such effects through design, material selection, and fabrication methods.
Proton radiation found in space can also cause single-event upsets in digital circuits. The electrical effects of ionizing radiation are exploited in gas-filled radiation detectors, e.g. the Geiger-Muller counter or the ion chamber.
Health effects
Most adverse health effects of exposure to ionizing radiation may be grouped in two general categories:
deterministic effects (harmful tissue reactions) due in large part to killing or malfunction of cells following high doses from radiation burns.
stochastic effects, i.e., cancer and heritable effects involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells.
The most common impact is stochastic induction of cancer with a latent period of years or decades after exposure. For example, ionizing radiation is one cause of chronic myelogenous leukemia, although most people with CML have not been exposed to radiation. The mechanism by which this occurs is well understood, but quantitative models predicting the level of risk remain controversial.
The most widely accepted model, the Linear no-threshold model (LNT), holds that the incidence of cancers due to ionizing radiation increases linearly with effective radiation dose at a rate of 5.5% per sievert. If this is correct, then natural background radiation is the most hazardous source of radiation to general public health, followed by medical imaging as a close second. Other stochastic effects of ionizing radiation are teratogenesis, cognitive decline, and heart disease.
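As an illustration of how the LNT coefficient is applied, the sketch below multiplies an effective dose by the 5.5% per sievert figure quoted above. The example doses are assumed, illustrative values (only the 100 mSv five-year occupational figure appears later in this article), and the result is a population-level risk estimate under the LNT assumption, not an individual prognosis:

```python
LNT_RISK_PER_SV = 0.055  # 5.5% excess cancer incidence per sievert (LNT model)

def excess_cancer_risk(effective_dose_sv: float) -> float:
    """Excess lifetime cancer incidence attributed to a dose under the LNT model."""
    return LNT_RISK_PER_SV * effective_dose_sv

illustrative_doses_msv = {
    "small diagnostic exposure (assumed)": 0.1,
    "larger imaging procedure (assumed)": 10.0,
    "five-year occupational limit": 100.0,
}
for label, dose_msv in illustrative_doses_msv.items():
    risk = excess_cancer_risk(dose_msv / 1000.0)
    print(f"{label}: {dose_msv:6.1f} mSv -> excess risk ~ {risk:.4%}")
```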
Although DNA is always susceptible to damage by ionizing radiation, it may also be damaged by radiation with enough energy to excite certain molecular bonds and form pyrimidine dimers. This energy may be less than that needed for ionization, but close to it. A good example is ultraviolet light, whose photon energies begin at about 3.1 eV (400 nm): near this energy level, unprotected skin can be sunburned as a result of photoreactions in collagen and, in the UV-B range, damage to DNA (for example, pyrimidine dimers). Thus, the mid and lower ultraviolet spectrum is damaging to biological tissues through electronic excitation of molecules that falls short of ionization but produces similar non-thermal effects. To some extent, visible light and ultraviolet A (UVA), which is closest to visible energies, have been shown to produce reactive oxygen species in skin; these electronically excited molecules can inflict indirect reactive damage, although they do not cause sunburn (erythema). Like ionization damage, all these effects in skin go beyond those produced by simple heating.
Measurement of radiation
The table below shows radiation and dose quantities in SI and non-SI units.
Uses of radiation
Ionizing radiation has many industrial, military, and medical uses. Its usefulness must be balanced with its hazards, a compromise that has shifted over time. For example, at one time, assistants in shoe shops in the US used X-rays to check a child's shoe size, but this practice was halted when the risks of ionizing radiation were better understood.
Neutron radiation is essential to the working of nuclear reactors and nuclear weapons. The penetrating power of x-ray, gamma, beta, and positron radiation is used for medical imaging, nondestructive testing, and a variety of industrial gauges. Radioactive tracers are used in medical and industrial applications, as well as biological and radiation chemistry. Alpha radiation is used in static eliminators and smoke detectors. The sterilizing effects of ionizing radiation are useful for cleaning medical instruments, food irradiation, and the sterile insect technique. Measurements of carbon-14 can be used to date the remains of long-dead organisms (such as wood that is thousands of years old).
Sources of radiation
Ionizing radiation is generated through nuclear reactions, nuclear decay, by very high temperature, or via acceleration of charged particles in electromagnetic fields. Natural sources include the sun, lightning and supernova explosions. Artificial sources include nuclear reactors, particle accelerators, and x-ray tubes.
The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) itemized types of human exposures.
The International Commission on Radiological Protection manages the International System of Radiological Protection, which sets recommended limits for dose uptake.
Background radiation
Background radiation comes from both natural and human-made sources.
The global average exposure of humans to ionizing radiation is about 3 mSv (0.3 rem) per year, 80% of which comes from nature. The remaining 20% results from exposure to human-made radiation sources, primarily from medical imaging. Average human-made exposure is much higher in developed countries, mostly due to CT scans and nuclear medicine.
Natural background radiation comes from five primary sources: cosmic radiation, solar radiation, external terrestrial sources, radiation in the human body, and radon.
The background rate for natural radiation varies considerably with location, being as low as 1.5 mSv/a (1.5 mSv per year) in some areas and over 100 mSv/a in others. The highest level of purely natural radiation recorded on the Earth's surface is 90 μGy/h (about 0.8 Gy/a) on a Brazilian black beach composed of monazite. The highest background radiation in an inhabited area is found in Ramsar, Iran, primarily due to naturally radioactive limestone used as a building material. Some 2,000 of the most exposed residents receive an average radiation dose of 10 mGy per year (1 rad/yr), ten times the ICRP recommended limit for exposure to the public from artificial sources. Record levels were found in a house where the effective dose due to external radiation was 135 mSv/a (13.5 rem/yr) and the committed dose from radon was 640 mSv/a (64.0 rem/yr). This unique case is over 200 times the world average background radiation. Despite the high levels of background radiation that the residents of Ramsar receive, there is no compelling evidence that they experience a greater health risk. The ICRP recommendations are conservative limits and may overstate the actual health risk; radiation safety organizations generally recommend the most conservative limits on the assumption that it is best to err on the side of caution. This level of caution is appropriate, but it should not be taken to imply that background radiation is a major hazard: it is more likely a small overall risk compared with other factors in the environment.
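The dose-rate figures above are quoted in mixed units. The short sketch below converts the Brazilian beach reading to an annual dose and compares the Ramsar average with the ICRP public limit cited later in this article; it assumes a radiation weighting factor of 1 (appropriate for gamma radiation), so that 1 mGy is treated as roughly comparable to 1 mSv:

```python
HOURS_PER_YEAR = 24 * 365.25  # ~8766 hours

def ugy_per_hour_to_gy_per_year(rate_ugy_per_hour: float) -> float:
    """Convert a dose rate in microgray per hour to gray per year."""
    return rate_ugy_per_hour * HOURS_PER_YEAR / 1.0e6

beach_gy_per_year = ugy_per_hour_to_gy_per_year(90.0)
print(f"90 uGy/h is about {beach_gy_per_year:.2f} Gy per year")  # ~0.79, matching the quoted 0.8 Gy/a

ramsar_average_mgy_per_year = 10.0    # average quoted above
icrp_public_limit_msv_per_year = 1.0  # ICRP limit for artificial sources, quoted later
ratio = ramsar_average_mgy_per_year / icrp_public_limit_msv_per_year
print(f"Ramsar average is roughly {ratio:.0f}x the ICRP public limit")
```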
Cosmic radiation
The Earth, and all living things on it, are constantly bombarded by radiation from outside our solar system. This cosmic radiation consists of relativistic particles: positively charged nuclei (ions), from 1 amu protons (about 85% of it) up to iron nuclei (atomic number 26) and even beyond. (The high-atomic-number particles are called HZE ions.) The energy of this radiation can far exceed that which humans can create, even in the largest particle accelerators (see ultra-high-energy cosmic ray). This radiation interacts in the atmosphere to create secondary radiation that rains down, including x-rays, muons, protons, antiprotons, alpha particles, pions, electrons, positrons, and neutrons.
The dose from cosmic radiation is largely from muons, neutrons, and electrons, with a dose rate that varies in different parts of the world and based largely on the geomagnetic field, altitude, and solar cycle. The cosmic-radiation dose rate on airplanes is so high that, according to the United Nations UNSCEAR 2000 Report (see links at bottom), airline flight crew workers receive more dose on average than any other worker, including those in nuclear power plants. Airline crews receive more cosmic rays if they routinely work flight routes that take them close to the North or South pole at high altitudes, where this type of radiation is maximal.
Cosmic rays also include high-energy gamma rays, which are far beyond the energies produced by solar or human sources.
External terrestrial sources
Most materials on Earth contain some radioactive atoms, even if in small quantities. Most of the dose received from these sources is from gamma-ray emitters in building materials, or rocks and soil when outside. The major radionuclides of concern for terrestrial radiation are isotopes of potassium, uranium, and thorium. Each of these sources has been decreasing in activity since the formation of the Earth.
Internal radiation sources
All earthly materials that are the building blocks of life contain a radioactive component. As humans, plants, and animals consume food, air, and water, an inventory of radioisotopes builds up within the organism (see banana equivalent dose). Some radionuclides, like potassium-40, emit a high-energy gamma ray that can be measured by sensitive electronic radiation measurement systems. These internal radiation sources contribute to an individual's total radiation dose from natural background radiation.
Radon
An important source of natural radiation is radon gas, which seeps continuously from bedrock but can, because of its high density, accumulate in poorly ventilated houses.
Radon-222 is a gas produced by the α-decay of radium-226. Both are a part of the natural uranium decay chain. Uranium is found in soil throughout the world in varying concentrations. Radon is the largest cause of lung cancer among non-smokers and the second-leading cause overall.
Radiation exposure
There are three standard ways to limit exposure:
Time: For people exposed to radiation in addition to natural background radiation, limiting or minimizing the exposure time will reduce the dose from the source of radiation.
Distance: Radiation intensity decreases sharply with distance, according to an inverse-square law (in an absolute vacuum).
Shielding: Air or skin can be sufficient to substantially attenuate alpha radiation, while sheet metal or plastic is often sufficient to stop beta radiation. Barriers of lead, concrete, or water are often used to give effective protection from more penetrating forms of ionizing radiation such as gamma rays and neutrons. Some radioactive materials are stored or handled underwater or by remote control in rooms constructed of thick concrete or lined with lead. There are special plastic shields that stop beta particles, and air will stop most alpha particles. The effectiveness of a material in shielding radiation is determined by its half-value thicknesses, the thickness of material that reduces the radiation by half. This value is a function of the material itself and of the type and energy of ionizing radiation. Some generally accepted thicknesses of attenuating material are 5 mm of aluminum for most beta particles, and 3 inches of lead for gamma radiation.
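A minimal sketch combining the distance and shielding rules of thumb above: the dose rate falls with the inverse square of the distance from a point source and halves for each half-value thickness of shielding. The source strength, distance and thicknesses below are hypothetical, illustrative numbers, not values from this article:

```python
def attenuated_dose_rate(rate_at_1m, distance_m, shield_thickness, half_value_thickness):
    """Dose rate after moving away from a point source and adding shielding.
    Both thicknesses must use the same unit; point-source geometry in vacuum is assumed."""
    geometry_factor = (1.0 / distance_m) ** 2                        # inverse-square law
    shielding_factor = 0.5 ** (shield_thickness / half_value_thickness)
    return rate_at_1m * geometry_factor * shielding_factor

# Hypothetical gamma source giving 400 uSv/h at 1 m, viewed from 4 m away
# through two half-value thicknesses of shielding:
print(attenuated_dose_rate(400.0, 4.0, 2.0, 1.0), "uSv/h")           # 400 / 16 / 4 = 6.25
```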
These can all be applied to natural and human-made sources. For human-made sources, the use of containment is a major tool in reducing dose uptake and is effectively a combination of shielding and isolation from the open environment. Radioactive materials are confined in the smallest possible space and kept out of the environment, such as in a hot cell (for radiation) or glove box (for contamination). Radioactive isotopes for medical use, for example, are dispensed in closed handling facilities, usually gloveboxes, while nuclear reactors operate within closed systems with multiple barriers that keep the radioactive materials contained. Work rooms, hot cells and gloveboxes have slightly reduced air pressures to prevent escape of airborne material to the open environment.
In nuclear conflicts or civil nuclear releases, civil defense measures can help reduce population exposure by reducing the ingestion of isotopes and occupational exposure. One such measure is the issuing of potassium iodide (KI) tablets, which block the uptake of radioactive iodine (one of the major radioisotope products of nuclear fission) into the human thyroid gland.
Occupational exposure
Occupationally exposed individuals are controlled within the regulatory framework of the country they work in, and in accordance with any local nuclear licence constraints. These are usually based on the recommendations of the International Commission on Radiological Protection.
The ICRP recommends limiting artificial irradiation. For occupational exposure, the limit is 50 mSv in a single year with a maximum of 100 mSv in a consecutive five-year period.
The radiation exposure of these individuals is carefully monitored with the use of dosimeters and other radiological protection instruments which will measure radioactive particulate concentrations, area gamma dose readings and radioactive contamination. A legal record of dose is kept.
Examples of activities where occupational exposure is a concern include:
Airline crew (the most exposed population)
Industrial radiography
Medical radiology and nuclear medicine
Uranium mining
Nuclear power plant and nuclear fuel reprocessing plant workers
Research laboratories (government, university and private)
Some human-made radiation sources affect the body through direct radiation, known as effective dose, while others take the form of radioactive contamination and irradiate the body from within. The latter is known as committed dose.
Public exposure
Medical procedures, such as diagnostic X-rays, nuclear medicine, and radiation therapy are by far the most significant source of human-made radiation exposure to the general public. Some of the major radionuclides used are I-131, Tc-99m, Co-60, Ir-192, and Cs-137. The public is also exposed to radiation from consumer products, such as tobacco (polonium-210), combustible fuels (gas, coal, etc.), televisions, luminous watches and dials (tritium), airport X-ray systems, smoke detectors (americium), electron tubes, and gas lantern mantles (thorium).
Of lesser magnitude, members of the public are exposed to radiation from the nuclear fuel cycle, which includes the entire sequence from processing uranium to the disposal of the spent fuel. The effects of such exposure have not been reliably measured due to the extremely low doses involved. Opponents use a cancer per dose model to assert that such activities cause several hundred cases of cancer per year, an application of the widely accepted Linear no-threshold model (LNT).
The International Commission on Radiological Protection recommends limiting artificial irradiation to the public to an average of 1 mSv (0.001 Sv) of effective dose per year, not including medical and occupational exposures.
In a nuclear war, gamma rays from both the initial weapon explosion and fallout would be the sources of radiation exposure.
Spaceflight
Massive particles are a concern for astronauts outside the Earth's magnetic field, who would receive solar particles from solar proton events (SPEs) and galactic cosmic rays from cosmic sources. These high-energy charged nuclei are blocked by Earth's magnetic field but pose a major health concern for astronauts traveling to the Moon and to any distant location beyond Earth orbit. Highly charged HZE ions in particular are known to be extremely damaging, although protons make up the vast majority of galactic cosmic rays. Evidence indicates past SPE radiation levels that would have been lethal for unprotected astronauts.
Air travel
Air travel exposes people on aircraft to more radiation from space than at sea level, including cosmic rays and radiation from solar flare events. Software programs such as Epcard, CARI, SIEVERT, and PCAIRE attempt to simulate exposure for aircrews and passengers. An example of a measured dose (not a simulated dose) is 6 μSv per hour from London Heathrow to Tokyo Narita on a high-latitude polar route. However, dosages can vary, such as during periods of high solar activity. The United States FAA requires airlines to provide flight crew with information about cosmic radiation, and an International Commission on Radiological Protection recommendation for the general public is no more than 1 mSv per year. In addition, to comply with a European Directive, many airlines restrict flying by pregnant flight crew members. The FAA has a recommended limit of 1 mSv total for a pregnancy and no more than 0.5 mSv per month. This information is originally based on Fundamentals of Aerospace Medicine, published in 2008.
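Using the measured 6 μSv per hour figure quoted above, cumulative flight dose is simply dose rate times hours flown. The annual flight hours below are assumed, illustrative values, and real dose rates vary with altitude, latitude and solar activity:

```python
ROUTE_DOSE_RATE_USV_PER_H = 6.0  # measured value quoted above for a high-latitude route

def annual_flight_dose_msv(flight_hours_per_year: float) -> float:
    """Approximate annual dose in mSv from flying the given number of hours at this rate."""
    return ROUTE_DOSE_RATE_USV_PER_H * flight_hours_per_year / 1000.0

for hours in (12, 100, 600):  # one long-haul flight, a frequent flyer, illustrative crew hours
    print(f"{hours:4d} h/yr -> ~{annual_flight_dose_msv(hours):.2f} mSv/yr "
          f"(ICRP public recommendation: 1 mSv/yr)")
```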
Radiation hazard warning signs
Hazardous levels of ionizing radiation are signified by the trefoil sign on a yellow background. These are usually posted at the boundary of a radiation controlled area or in any place where radiation levels are significantly above background due to human intervention.
The red ionizing radiation warning symbol (ISO 21482) was launched in 2007, and is intended for IAEA Category 1, 2 and 3 sources defined as dangerous sources capable of death or serious injury, including food irradiators, teletherapy machines for cancer treatment and industrial radiography units. The symbol is to be placed on the device housing the source, as a warning not to dismantle the device or to get any closer. It will not be visible under normal use, only if someone attempts to disassemble the device. The symbol will not be located on building access doors, transportation packages or containers.
See also
European Committee on Radiation Risk
International Commission on Radiological Protection – manages the International System of Radiological Protection
Ionometer
Irradiated mail
National Council on Radiation Protection and Measurements – US national organisation
Nuclear safety
Nuclear semiotics
Radiant energy
Exposure (radiation)
Radiation hormesis
Radiation physics
Radiation protection
Radiation Protection Convention, 1960
Radiation protection of patients
Sievert
Treatment of infections after accidental or hostile exposure to ionizing radiation
References
Literature
External links
The Nuclear Regulatory Commission regulates most commercial radiation sources and non-medical exposures in the US:
NLM Hazardous Substances Databank – Ionizing Radiation
United Nations Scientific Committee on the Effects of Atomic Radiation 2000 Report Volume 1: Sources, Volume 2: Effects
Beginners Guide to Ionising Radiation Measurement
Free Radiation Safety Course
Health Physics Society Public Education Website
Oak Ridge Reservation Basic Radiation Facts
Carcinogens
Mutagens
Radioactivity
Radiobiology
Radiation health effects
Radiation protection | Ionizing radiation | Physics,Chemistry,Materials_science,Biology,Environmental_science | 6,995 |
25,198 | https://en.wikipedia.org/wiki/Quaternary | The Quaternary is the current and most recent of the three periods of the Cenozoic Era in the geologic time scale of the International Commission on Stratigraphy (ICS), as well as the current and most recent of the twelve periods of the Phanerozoic eon. It follows the Neogene Period and spans from 2.58 million years ago to the present. The Quaternary Period is divided into two epochs: the Pleistocene (2.58 million years ago to 11.7 thousand years ago) and the Holocene (11.7 thousand years ago to today); a proposed third epoch, the Anthropocene, was rejected in 2024 by IUGS, the governing body of the ICS.
The Quaternary is typically defined by the Quaternary glaciation, the cyclic growth and decay of continental ice sheets related to the Milankovitch cycles and the associated climate and environmental changes that they caused.
Research history
In 1759 Giovanni Arduino proposed that the geological strata of northern Italy could be divided into four successive formations or "orders" (). The term "quaternary" was introduced by Jules Desnoyers in 1829 for sediments of France's Seine Basin that clearly seemed to be younger than Tertiary Period rocks.
The Quaternary Period follows the Neogene Period and extends to the present. The Quaternary covers the time span of glaciations classified as the Pleistocene, and includes the present interglacial time-period, the Holocene.
This places the start of the Quaternary at the onset of Northern Hemisphere glaciation approximately 2.6 million years ago (mya). Prior to 2009, the Pleistocene was defined to be from 1.805 million years ago to the present, so the current definition of the Pleistocene includes a portion of what was, prior to 2009, defined as the Pliocene.
Quaternary stratigraphers usually worked with regional subdivisions. From the 1970s, the International Commission on Stratigraphy (ICS) tried to make a single geologic time scale based on GSSPs, which could be used internationally. The Quaternary subdivisions were defined based on biostratigraphy instead of paleoclimate.
This led to the problem that the proposed base of the Pleistocene was at 1.805 million years ago, long after the start of the major glaciations of the northern hemisphere. The ICS then proposed to abolish use of the name Quaternary altogether, which appeared unacceptable to the International Union for Quaternary Research (INQUA).
In 2009, it was decided to make the Quaternary the youngest period of the Cenozoic Era with its base at 2.588 mya and including the Gelasian Stage, which was formerly considered part of the Neogene Period and Pliocene Epoch. This was later revised to 2.58 mya.
The Anthropocene was proposed as a third epoch as a mark of the anthropogenic impact on the global environment starting with the Industrial Revolution, or about 200 years ago. The Anthropocene was rejected as a geological epoch in 2024 by the International Union of Geological Sciences (IUGS), the governing body of the ICS.
Geology
The 2.58 million years of the Quaternary represents the time during which recognisable humans existed. Over this geologically short time period there has been relatively little change in the distribution of the continents due to plate tectonics.
The Quaternary geological record is preserved in greater detail than that for earlier periods.
The major geographical changes during this time period included the emergence of the straits of Bosphorus and Skagerrak during glacial epochs, which respectively turned the Black Sea and Baltic Sea into fresh water lakes, followed by their flooding (and return to salt water) by rising sea level; the periodic filling of the English Channel, forming a land bridge between Britain and the European mainland; the periodic closing of the Bering Strait, forming the land bridge between Asia and North America; and the periodic flash flooding of Scablands of the American Northwest by glacial water.
The current extent of Hudson Bay, the Great Lakes and other major lakes of North America are a consequence of the Canadian Shield's readjustment since the last ice age; different shorelines have existed over the course of Quaternary time.
Climate
The climate was one of periodic glaciations, with continental glaciers moving as far from the poles as 40 degrees latitude. Glaciation took place repeatedly during the Quaternary ice age (a term coined by Schimper in 1839), which began with the start of the Quaternary about 2.58 Mya and continues to the present day.
In 1821, a Swiss engineer, Ignaz Venetz, presented an article in which he suggested the presence of traces of the passage of a glacier at a considerable distance from the Alps. This idea was initially disputed by another Swiss scientist, Louis Agassiz, but when he undertook to disprove it, he ended up affirming his colleague's hypothesis. A year later, Agassiz raised the hypothesis of a great glacial period that would have had long-reaching general effects. This idea gained him international fame and led to the establishment of the Glacial Theory.
In time, thanks to the refinement of geology, it has been demonstrated that there were several periods of glacial advance and retreat and that past temperatures on Earth were very different from today.
In particular, the Milankovitch cycles of Milutin Milankovitch are based on the premise that variations in incoming solar radiation are a fundamental factor controlling Earth's climate.
During this time, substantial glaciers advanced and retreated over much of North America and Europe, parts of South America and Asia, and all of Antarctica.
Flora and fauna
There was a major extinction of large mammals globally during the Late Pleistocene Epoch. Many forms such as sabre-toothed cats, mammoths, mastodons, glyptodonts, etc., became extinct worldwide. Others, including horses, camels and American cheetahs became extinct in North America.
The Great Lakes formed and giant mammals thrived in parts of North America and Eurasia not covered in ice. These mammals became extinct when the glacial period ended about 11,700 years ago. Modern humans evolved about 315,000 years ago. During the Quaternary Period, mammals, flowering plants, and insects dominated the land.
See also
List of Quaternary volcanic eruptions
References
External links
Subcommission on Quaternary Stratigraphy
Stratigraphical charts for the Quaternary
Version history of the global Quaternary chronostratigraphical charts (from 2004b)
Silva, P.G. C. Zazo, T. Bardají, J. Baena, J. Lario, A. Rosas, J. Van der Made. 2009, "Tabla Cronoestratigrafíca del Cuaternario en la Península Ibérica - V.2". [Versión PDF, 3.6 Mb]. Asociación Española para el Estudio del Cuaternario (AEQUA), Departamento de Geología, Universidad de Salamanca, Spain. (Correlation chart of European Quaternary and cultural stages and fossils)
Welcome to the XVIII INQUA-Congress, Bern, 2011
Quaternary (chronostratigraphy scale)
Geological periods
Physical geography
Physical oceanography | Quaternary | Physics | 1,542 |
18,926,097 | https://en.wikipedia.org/wiki/Clofilium | Clofilium is an antiarrhythmic agent.
Quaternary ammonium compounds
Antiarrhythmic agents
4-Chlorophenyl compounds | Clofilium | Chemistry | 37 |
11,322,659 | https://en.wikipedia.org/wiki/Coleosporium%20ipomoeae | Coleosporium ipomoeae is a plant pathogen. Specifically, it is a fungus that can develop on sweet potatoes.
Fungal plant pathogens and diseases
Eudicot diseases
Pucciniales
Fungi described in 1885
Fungus species | Coleosporium ipomoeae | Biology | 48 |
64,416,075 | https://en.wikipedia.org/wiki/Geocarto%20International | Geocarto International is an academic journal published by Taylor & Francis.
It focuses on remote sensing, GIS, geoscience, and environmental sciences. Its editor-in-chief is Kamlesh Lulla. Its 2019–2020 journal impact factor, updated in 2020, was 4.889.
References
Taylor & Francis academic journals
Remote sensing journals
Earth and atmospheric sciences journals
Geographic information systems | Geocarto International | Technology | 86 |
11,436,452 | https://en.wikipedia.org/wiki/Cercospora%20carotae | Cercospora carotae is a fungal plant pathogen.
References
carotae
Fungal plant pathogens and diseases
Fungus species | Cercospora carotae | Biology | 27 |
24,150,455 | https://en.wikipedia.org/wiki/C18H23NO4 | {{DISPLAYTITLE:C18H23NO4}}
The molecular formula C18H23NO4 (molar mass: 317.37 g/mol, exact mass: 317.1627 u) may refer to:
Arbutamine
Cocaethylene
Denopamine
14-Hydroxydihydrocodeine | C18H23NO4 | Chemistry | 71 |
1,824,845 | https://en.wikipedia.org/wiki/Cartesian%20tensor | In geometry and linear algebra, a Cartesian tensor uses an orthonormal basis to represent a tensor in a Euclidean space in the form of components. Converting a tensor's components from one such basis to another is done through an orthogonal transformation.
The most familiar coordinate systems are the two-dimensional and three-dimensional Cartesian coordinate systems. Cartesian tensors may be used with any Euclidean space, or more technically, any finite-dimensional vector space over the field of real numbers that has an inner product.
Use of Cartesian tensors occurs in physics and engineering, such as with the Cauchy stress tensor and the moment of inertia tensor in rigid body dynamics. Sometimes general curvilinear coordinates are convenient, as in high-deformation continuum mechanics, or even necessary, as in general relativity. While orthonormal bases may be found for some such coordinate systems (e.g. tangent to spherical coordinates), Cartesian tensors may provide considerable simplification for applications in which rotations of rectilinear coordinate axes suffice. The transformation is a passive transformation, since the coordinates are changed and not the physical system.
Cartesian basis and related terminology
Vectors in three dimensions
In 3D Euclidean space, , the standard basis is , , . Each basis vector points along the x-, y-, and z-axes, and the vectors are all unit vectors (or normalized), so the basis is orthonormal.
Throughout, when referring to Cartesian coordinates in three dimensions, a right-handed system is assumed and this is much more common than a left-handed system in practice, see orientation (vector space) for details.
For Cartesian tensors of order 1, a Cartesian vector can be written algebraically as a linear combination of the basis vectors , , :
where the coordinates of the vector with respect to the Cartesian basis are denoted , , . It is common and helpful to display the basis vectors as column vectors
when we have a coordinate vector in a column vector representation:
A row vector representation is also legitimate, although in the context of general curvilinear coordinate systems the row and column vector representations are used separately for specific reasons – see Einstein notation and covariance and contravariance of vectors for why.
The term "component" of a vector is ambiguous: it could refer to:
a specific coordinate of the vector such as (a scalar), and similarly for and , or
the coordinate scalar-multiplying the corresponding basis vector, in which case the "-component" of is (a vector), and similarly for and .
A more general notation is tensor index notation, which has the flexibility of numerical values rather than fixed coordinate labels. The Cartesian labels are replaced by tensor indices in the basis vectors , , and coordinates , , . In general, the notation , , refers to any basis, and , , refers to the corresponding coordinate system; although here they are restricted to the Cartesian system. Then:
It is standard to use the Einstein notation—the summation sign for summation over an index that is present exactly twice within a term may be suppressed for notational conciseness:
An advantage of the index notation over coordinate-specific notations is the independence of the dimension of the underlying vector space, i.e. the same expression on the right hand side takes the same form in higher dimensions (see below). Previously, the Cartesian labels x, y, z were just labels and not indices. (It is informal to say "i = x, y, z").
Second-order tensors in three dimensions
A dyadic tensor T is an order-2 tensor formed by the tensor product of two Cartesian vectors and , written . Analogous to vectors, it can be written as a linear combination of the tensor basis , , ..., (the right-hand side of each identity is only an abbreviation, nothing more):
Representing each basis tensor as a matrix:
then can be represented more systematically as a matrix:
See matrix multiplication for the notational correspondence between matrices and the dot and tensor products.
More generally, whether or not is a tensor product of two vectors, it is always a linear combination of the basis tensors with coordinates , , ..., :
while in terms of tensor indices:
and in matrix form:
Second-order tensors occur naturally in physics and engineering when physical quantities have directional dependence in the system, often in a "stimulus-response" way. This can be mathematically seen through one aspect of tensors – they are multilinear functions. A second-order tensor T which takes in a vector u of some magnitude and direction will return a vector v; of a different magnitude and in a different direction to u, in general. The notation used for functions in mathematical analysis leads us to write , while the same idea can be expressed in matrix and index notations (including the summation convention), respectively:
By "linear", if for two scalars and and vectors and , then in function and index notations:
and similarly for the matrix notation. The function, matrix, and index notations all mean the same thing. The matrix forms provide a clear display of the components, while the index form allows easier tensor-algebraic manipulation of the formulae in a compact manner. Both provide the physical interpretation of directions; vectors have one direction, while second-order tensors connect two directions together. One can associate a tensor index or coordinate label with a basis vector direction.
Second-order tensors are the minimum needed to describe changes in the magnitudes and directions of vectors: the dot product of two vectors is always a scalar, while the cross product of two vectors is always a pseudovector perpendicular to the plane defined by the vectors, so these products of vectors alone cannot produce a new vector of arbitrary magnitude in an arbitrary direction. (See also below for more on the dot and cross products.) The tensor product of two vectors is a second-order tensor, although this has no obvious directional interpretation by itself.
The previous idea can be continued: if takes in two vectors and , it will return a scalar . In function notation we write , while in matrix and index notations (including the summation convention) respectively:
The tensor T is linear in both input vectors. When vectors and tensors are written without reference to components, and indices are not used, sometimes a dot ⋅ is placed where summations over indices (known as tensor contractions) are taken. For the above cases:
motivated by the dot product notation:
More generally, a tensor of order which takes in vectors (where is between and inclusive) will return a tensor of order , see for further generalizations and details. The concepts above also apply to pseudovectors in the same way as to vectors. The vectors and tensors themselves can vary throughout space, in which case we have vector fields and tensor fields, and they can also depend on time.
Following are some examples:
For the electrical conduction example, the index and matrix notations would be:
while for the rotational kinetic energy :
See also constitutive equation for more specialized examples.
Vectors and tensors in dimensions
In -dimensional Euclidean space over the real numbers, , the standard basis is denoted , , , ... . Each basis vector points along the positive axis, with the basis being orthonormal. Component of is given by the Kronecker delta:
A vector in takes the form:
Similarly for the order-2 tensor above, for each vector a and b in :
or more generally:
Transformations of Cartesian vectors (any number of dimensions)
Meaning of "invariance" under coordinate transformations
The position vector in is a simple and common example of a vector, and can be represented in any coordinate system. Consider the case of rectangular coordinate systems with orthonormal bases only. It is possible to have a coordinate system with rectangular geometry if the basis vectors are all mutually perpendicular and not normalized, in which case the basis is orthogonal but not orthonormal. However, orthonormal bases are easier to manipulate and are often used in practice. The following results are true for orthonormal bases, not orthogonal ones.
In one rectangular coordinate system, as a contravector has coordinates and basis vectors , while as a covector it has coordinates and basis covectors , and we have:
In another rectangular coordinate system, as a contravector has coordinates and basis , while as a covector it has coordinates and basis , and we have:
Each new coordinate is a function of all the old ones, and vice versa for the inverse function:
and similarly each new basis vector is a function of all the old ones, and vice versa for the inverse function:
for all , .
A vector is invariant under any change of basis, so if coordinates transform according to a transformation matrix , the bases transform according to the matrix inverse , and conversely if the coordinates transform according to inverse , the bases transform according to the matrix . The difference between each of these transformations is shown conventionally through the indices as superscripts for contravariance and subscripts for covariance, and the coordinates and bases are linearly transformed according to the following rules:
where represents the entries of the transformation matrix (row number is and column number is ) and denotes the entries of the inverse matrix of the matrix .
If is an orthogonal transformation (orthogonal matrix), the objects transforming by it are defined as Cartesian tensors. This geometrically has the interpretation that a rectangular coordinate system is mapped to another rectangular coordinate system, in which the norm of the vector is preserved (and distances are preserved).
The determinant of is , which corresponds to two types of orthogonal transformation: () for rotations and () for improper rotations (including reflections).
There are considerable algebraic simplifications, the matrix transpose is the inverse from the definition of an orthogonal transformation:
From the previous table, orthogonal transformations of covectors and contravectors are identical. There is then no need to distinguish between raising and lowering indices, and in this context and in applications to physics and engineering the indices are usually all subscripted to avoid confusion with exponents. All indices will be lowered in the remainder of this article. One can determine the actual raised and lowered indices by considering which quantities are covectors or contravectors, and the relevant transformation rules.
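As a concrete check of the defining properties described above, the NumPy sketch below builds an arbitrary rotation about the z-axis and verifies that its transpose is its inverse, that its determinant is +1 (a proper rotation), and that vector norms are preserved. The rotation angle and test vector are arbitrary choices made for illustration:

```python
import numpy as np

theta = 0.3                                            # arbitrary rotation angle (radians)
L = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # rotation about the z-axis

x = np.array([1.0, -2.0, 0.5])                         # arbitrary vector components
x_new = L @ x                                          # transformed components

print(np.allclose(L @ L.T, np.eye(3)))                 # True: the transpose is the inverse
print(np.isclose(np.linalg.det(L), 1.0))               # True: det = +1 for a proper rotation
print(np.isclose(np.linalg.norm(x), np.linalg.norm(x_new)))  # True: the norm is preserved
```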
Exactly the same transformation rules apply to any vector , not only the position vector. If its components do not transform according to the rules, is not a vector.
Despite the similarity between the expressions above, for the change of coordinates such as , and the action of a tensor on a vector like , is not a tensor, but is. In the change of coordinates, is a matrix, used to relate two rectangular coordinate systems with orthonormal bases together. For the tensor relating a vector to a vector, the vectors and tensors throughout the equation all belong to the same coordinate system and basis.
Derivatives and Jacobian matrix elements
The entries of are partial derivatives of the new or old coordinates with respect to the old or new coordinates, respectively.
Differentiating with respect to :
so
is an element of the Jacobian matrix. There is a (partially mnemonic) correspondence between index positions attached to L and in the partial derivative: i at the top and j at the bottom in each case, although for Cartesian tensors the indices can be lowered.
Conversely, differentiating with respect to :
so
is an element of the inverse Jacobian matrix, with a similar index correspondence.
Many sources state transformations in terms of the partial derivatives:
and the explicit matrix equations in 3d are:
similarly for
Projections along coordinate axes
As with all linear transformations, depends on the basis chosen. For two orthonormal bases
projecting to the axes:
projecting to the axes:
Hence the components reduce to direction cosines between the and axes:
where and are the angles between the and axes. In general, is not equal to , because for example and are two different angles.
The transformation of coordinates can be written:
and the explicit matrix equations in 3d are:
similarly for
The geometric interpretation is the components equal to the sum of projecting the components onto the axes.
The numbers arranged into a matrix would form a symmetric matrix (a matrix equal to its own transpose) due to the symmetry in the dot products, in fact it is the metric tensor . By contrast or do not form symmetric matrices in general, as displayed above. Therefore, while the matrices are still orthogonal, they are not symmetric.
Apart from a rotation about any one axis, in which the and for some coincide, the angles are not the same as Euler angles, and so the matrices are not the same as the rotation matrices.
Transformation of the dot and cross products (three dimensions only)
The dot product and cross product occur very frequently, in applications of vector analysis to physics and engineering, examples include:
power transferred by an object exerting a force with velocity along a straight-line path:
tangential velocity at a point of a rotating rigid body with angular velocity :
potential energy of a magnetic dipole of magnetic moment in a uniform external magnetic field :
angular momentum for a particle with position vector and momentum :
torque acting on an electric dipole of electric dipole moment in a uniform external electric field :
induced surface current density in a magnetic material of magnetization on a surface with unit normal :
How these products transform under orthogonal transformations is illustrated below.
Dot product, Kronecker delta, and metric tensor
The dot product ⋅ of each possible pairing of the basis vectors follows from the basis being orthonormal. For perpendicular pairs we have
while for parallel pairs we have
Replacing Cartesian labels by index notation as shown above, these results can be summarized by
where are the components of the Kronecker delta. The Cartesian basis can be used to represent in this way.
In addition, each metric tensor component with respect to any basis is the dot product of a pairing of basis vectors:
For the Cartesian basis the components arranged into a matrix are:
so are the simplest possible for the metric tensor, namely the :
This is not true for general bases: orthogonal coordinates have diagonal metrics containing various scale factors (i.e. not necessarily 1), while general curvilinear coordinates could also have nonzero entries for off-diagonal components.
The dot product of two vectors and transforms according to
which is intuitive, since the dot product of two vectors is a single scalar independent of any coordinates. This also applies more generally to any coordinate systems, not just rectangular ones; the dot product in one coordinate system is the same in any other.
Cross product, Levi-Civita symbol, and pseudovectors
For the cross product () of two vectors, the results are (almost) the other way round. Again, assuming a right-handed 3d Cartesian coordinate system, cyclic permutations in perpendicular directions yield the next vector in the cyclic collection of vectors:
while parallel vectors clearly vanish:
and replacing Cartesian labels by index notation as above, these can be summarized by:
where , , are indices which take values . It follows that:
These permutation relations and their corresponding values are important, and there is an object coinciding with this property: the Levi-Civita symbol, denoted by . The Levi-Civita symbol entries can be represented by the Cartesian basis:
which geometrically corresponds to the volume of a cube spanned by the orthonormal basis vectors, with sign indicating orientation (and not a "positive or negative volume"). Here, the orientation is fixed by , for a right-handed system. A left-handed system would fix or equivalently .
The scalar triple product can now be written:
with the geometric interpretation of volume (of the parallelepiped spanned by , , ) and algebraically is a determinant:
This in turn can be used to rewrite the cross product of two vectors as follows:
Contrary to its appearance, the Levi-Civita symbol is not a tensor, but a pseudotensor, the components transform according to:
Therefore, the transformation of the cross product of and is:
and so transforms as a pseudovector, because of the determinant factor.
The tensor index notation applies to any object which has entities that form multidimensional arrays – not everything with indices is a tensor by default. Instead, tensors are defined by how their coordinates and basis elements change under a transformation from one coordinate system to another.
Note the cross product of two vectors is a pseudovector, while the cross product of a pseudovector with a vector is another vector.
Applications of the tensor and pseudotensor
Other identities can be formed from the tensor and pseudotensor, a notable and very useful identity is one that converts two Levi-Civita symbols adjacently contracted over two indices into an antisymmetrized combination of Kronecker deltas:
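The identity referred to here is usually written ε_ijk ε_klm = δ_il δ_jm − δ_im δ_jl, with summation over the adjacent index k. The brute-force NumPy sketch below checks it numerically over all remaining index values; the explicit construction of the ε array is the only machinery involved:

```python
import itertools
import numpy as np

eps = np.zeros((3, 3, 3))                      # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
delta = np.eye(3)                              # Kronecker delta

identity_holds = all(
    np.isclose(sum(eps[i, j, k] * eps[k, l, m] for k in range(3)),
               delta[i, l] * delta[j, m] - delta[i, m] * delta[j, l])
    for i, j, l, m in itertools.product(range(3), repeat=4)
)
print(identity_holds)                          # True
```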
The index forms of the dot and cross products, together with this identity, greatly facilitate the manipulation and derivation of other identities in vector calculus and algebra, which in turn are used extensively in physics and engineering. For instance, it is clear the dot and cross products are distributive over vector addition:
without resort to any geometric constructions – the derivation in each case is a quick line of algebra. Although the procedure is less obvious, the vector triple product can also be derived. Rewriting in index notation:
and because cyclic permutations of indices in the symbol does not change its value, cyclically permuting indices in to obtain allows us to use the above - identity to convert the symbols into tensors:
thus:
Note this is antisymmetric in and , as expected from the left hand side. Similarly, via index notation or even just cyclically relabelling , , and in the previous result and taking the negative:
and the difference in results show that the cross product is not associative. More complex identities, like quadruple products;
and so on, can be derived in a similar manner.
Transformations of Cartesian tensors (any number of dimensions)
Tensors are defined as quantities which transform in a certain way under linear transformations of coordinates.
Second order
Let and be two vectors, so that they transform according to , .
Taking the tensor product gives:
then applying the transformation to the components
and to the bases
gives the transformation law of an order-2 tensor. The tensor is invariant under this transformation:
More generally, for any order-2 tensor
the components transform according to;
and the basis transforms by:
If does not transform according to this rule – whatever quantity may be – it is not an order-2 tensor.
Any order
More generally, for any order tensor
the components transform according to;
and the basis transforms by:
For a pseudotensor of order , the components transform according to;
Pseudovectors as antisymmetric second order tensors
The antisymmetric nature of the cross product can be recast into a tensorial form as follows. Let be a vector, be a pseudovector, be another vector, and be a second order tensor such that:
As the cross product is linear in and , the components of can be found by inspection, and they are:
so the pseudovector can be written as an antisymmetric tensor. This transforms as a tensor, not a pseudotensor. For the mechanical example above for the tangential velocity of a rigid body, given by , this can be rewritten as where is the tensor corresponding to the pseudovector :
For an example in electromagnetism, while the electric field is a vector field, the magnetic field is a pseudovector field. These fields are defined from the Lorentz force for a particle of electric charge traveling at velocity :
and considering the second term containing the cross product of a pseudovector and velocity vector , it can be written in matrix form, with , , and as column vectors and as an antisymmetric matrix:
If a pseudovector is explicitly given by a cross product of two vectors (as opposed to entering the cross product with another vector), then such pseudovectors can also be written as antisymmetric tensors of second order, with each entry a component of the cross product. The angular momentum of a classical pointlike particle orbiting about an axis, defined by , is another example of a pseudovector, with corresponding antisymmetric tensor:
Although Cartesian tensors do not occur in the theory of relativity, the tensor form of orbital angular momentum enters the spacelike part of the relativistic angular momentum tensor, and the above tensor form of the magnetic field enters the spacelike part of the electromagnetic tensor.
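A short numerical sketch of the correspondence described in this section: a pseudovector ω is packed into an antisymmetric matrix Ω (one common sign convention is used below) so that matrix multiplication by Ω reproduces the cross product with ω. The particular numbers are arbitrary:

```python
import numpy as np

def antisymmetric_from_pseudovector(w):
    """Antisymmetric matrix Omega with (Omega @ u) equal to the cross product w x u."""
    return np.array([[0.0,   -w[2],  w[1]],
                     [w[2],   0.0,  -w[0]],
                     [-w[1],  w[0],  0.0]])

omega = np.array([0.2, -1.0, 0.5])     # e.g. an angular velocity pseudovector
u = np.array([3.0, 1.0, -2.0])         # an arbitrary vector

Omega = antisymmetric_from_pseudovector(omega)
print(np.allclose(Omega @ u, np.cross(omega, u)))   # True
```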
Vector and tensor calculus
The following formulae are only so simple in Cartesian coordinates – in general curvilinear coordinates there are factors of the metric and its determinant – see tensors in curvilinear coordinates for more general analysis.
Vector calculus
Following are the differential operators of vector calculus. Throughout, let be a scalar field, and
be vector fields, in which all scalar and vector fields are functions of the position vector and time .
The gradient operator in Cartesian coordinates is given by:
and in index notation, this is usually abbreviated in various ways:
This operator acts on a scalar field Φ to obtain the vector field directed in the maximum rate of increase of Φ:
The index notation for the dot and cross products carries over to the differential operators of vector calculus.
The directional derivative of a scalar field is the rate of change of along some direction vector (not necessarily a unit vector), formed out of the components of and the gradient:
The divergence of a vector field is:
Note the interchange of the components of the gradient and vector field yields a different differential operator
which could act on scalar or vector fields. In fact, if A is replaced by the velocity field of a fluid, this is a term in the material derivative (with many other names) of continuum mechanics, with another term being the partial time derivative:
which usually acts on the velocity field leading to the non-linearity in the Navier-Stokes equations.
As for the curl of a vector field , this can be defined as a pseudovector field by means of the symbol:
which is only valid in three dimensions, or an antisymmetric tensor field of second order via antisymmetrization of indices, indicated by delimiting the antisymmetrized indices by square brackets (see Ricci calculus):
which is valid in any number of dimensions. In each case, the order of the gradient and vector field components should not be interchanged as this would result in a different differential operator:
which could act on scalar or vector fields.
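A sketch of the two forms of the curl, in a common convention in which the antisymmetrization brackets carry a factor of 1/2:
\[ (\nabla\times\mathbf{A})_i = \varepsilon_{ijk}\,\partial_j A_k, \qquad (\nabla\wedge\mathbf{A})_{ij} = \partial_{[i}A_{j]} = \tfrac{1}{2}\left(\partial_i A_j - \partial_j A_i\right), \]
while the interchanged order corresponds to an operator of the form
\[ (\mathbf{A}\times\nabla)_i = \varepsilon_{ijk}\,A_j\,\partial_k , \]
which still awaits a field to its right.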
Finally, the Laplacian operator is defined in two ways: as the divergence of the gradient of a scalar field Φ:
or as the square of the gradient operator, which acts on a scalar field Φ or a vector field A:
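Both definitions can be written out as (notation as above):
\[ \nabla^2\Phi = \nabla\cdot(\nabla\Phi) = \frac{\partial^2 \Phi}{\partial x_i\,\partial x_i}, \qquad \nabla^2 = \partial_i\,\partial_i, \qquad (\nabla^2\mathbf{A})_j = \partial_i\,\partial_i\,A_j . \]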
In physics and engineering, the gradient, divergence, curl, and Laplacian operator arise inevitably in fluid mechanics, Newtonian gravitation, electromagnetism, heat conduction, and even quantum mechanics.
Vector calculus identities can be derived in a similar way to those of vector dot and cross products and combinations. For example, in three dimensions, the curl of a cross product of two vector fields A and B:
where the product rule was used, and throughout the differential operator was not interchanged with A or B. Thus:
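For reference, the resulting identity (a standard vector-calculus identity, obtainable with the ε–δ relation) is
\[ \nabla\times(\mathbf{A}\times\mathbf{B}) = (\mathbf{B}\cdot\nabla)\mathbf{A} - \mathbf{B}\,(\nabla\cdot\mathbf{A}) - (\mathbf{A}\cdot\nabla)\mathbf{B} + \mathbf{A}\,(\nabla\cdot\mathbf{B}), \]
or in index notation
\[ \varepsilon_{ijk}\,\partial_j\left(\varepsilon_{klm}\,A_l B_m\right) = B_j\,\partial_j A_i - B_i\,\partial_j A_j - A_j\,\partial_j B_i + A_i\,\partial_j B_j . \]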
Tensor calculus
One can continue the operations on tensors of higher order. Let T = T(r, t) denote a second-order tensor field, again dependent on the position vector r and time t.
For instance, the gradient of a vector field A in two equivalent notations ("dyadic" and "tensor", respectively) is:
which is a tensor field of second order.
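With one common index convention (assumed here), the two notations correspond to
\[ (\nabla\mathbf{A})_{ij} = (\nabla\otimes\mathbf{A})_{ij} = \frac{\partial A_j}{\partial x_i} = \partial_i A_j . \]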
The divergence of a tensor T is:
which is a vector field. This arises in continuum mechanics in Cauchy's laws of motion – the divergence of the Cauchy stress tensor is a vector field, related to body forces acting on the fluid.
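A sketch in index notation (conventions for which index is contracted differ between authors; for a symmetric tensor such as the Cauchy stress the two choices coincide):
\[ (\nabla\cdot\mathbf{T})_i = \frac{\partial T_{ji}}{\partial x_j} = \partial_j T_{ji} . \]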
Difference from the standard tensor calculus
Cartesian tensors are as in tensor algebra, but the Euclidean structure of the underlying space and the restriction to orthonormal bases bring some simplifications compared to the general theory.
The general tensor algebra consists of general mixed tensors of type (p, q):
with basis elements:
the components transform according to:
as for the bases:
For Cartesian tensors, only the order of the tensor matters in a Euclidean space with an orthonormal basis, and all indices can be lowered. A Cartesian basis does not exist unless the vector space has a positive-definite metric, and thus cannot be used in relativistic contexts.
History
Dyadic tensors were historically the first approach to formulating second-order tensors, similarly triadic tensors for third-order tensors, and so on. Cartesian tensors use tensor index notation, in which the variance may be glossed over and is often ignored, since the components remain unchanged by raising and lowering indices.
See also
Tensor algebra
Tensor calculus
Tensors in curvilinear coordinates
Rotation group
References
General references
Further reading and applications
External links
Cartesian Tensors
V. N. Kaliakin, Brief Review of Tensors, University of Delaware
R. E. Hunt, Cartesian Tensors, University of Cambridge
Linear algebra
Tensors
Applied mathematics | Cartesian tensor | Mathematics,Engineering | 5,191 |
4,986,733 | https://en.wikipedia.org/wiki/Hammett%20equation | In organic chemistry, the Hammett equation describes a linear free-energy relationship relating reaction rates and equilibrium constants for many reactions involving benzoic acid derivatives with meta- and para-substituents to each other with just two parameters: a substituent constant and a reaction constant. This equation was developed and published by Louis Plack Hammett in 1937 as a follow-up to qualitative observations in his 1935 publication.
The basic idea is that for any two reactions with two aromatic reactants only differing in the type of substituent, the change in free energy of activation is proportional to the change in Gibbs free energy. This notion does not follow from elemental thermochemistry or chemical kinetics and was introduced by Hammett intuitively.
The basic equation is:
where
K0 = Reference constant
σ = Substituent constant
ρ = Reaction rate constant
relating the equilibrium constant, K, for a given equilibrium reaction with substituent R and the reference constant K0 when R is a hydrogen atom to the substituent constant σ, which depends only on the specific substituent R, and the reaction rate constant ρ, which depends only on the type of reaction but not on the substituent used.
The equation also holds for reaction rates k of a series of reactions with substituted benzene derivatives:
In this equation k0 is the reference reaction rate of the unsubstituted reactant, and k that of a substituted reactant.
A plot of log(K/K0) for a given equilibrium versus log(k/k0) for a given reaction rate with many differently substituted reactants will give a straight line.
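The two forms described above can be written compactly as (the standard statement of the relation, with K and k for the substituted compound and K0 and k0 for the unsubstituted reference):
\[ \log\frac{K}{K_0} = \sigma\rho, \qquad \log\frac{k}{k_0} = \sigma\rho , \]
so that plotting either quantity against the substituent constants σ gives a straight line of slope ρ.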
Substituent constants
The starting point for the collection of the substituent constants is a chemical equilibrium for which the substituent constant is arbitrarily set to 0 and the reaction constant is set to 1: the deprotonation of benzoic acid or benzene carboxylic acid (R and R' both H) in water at 25 °C.
Having obtained a value for K0, a series of equilibrium constants (K) are now determined based on the same process, but now with variation of the para substituent. These values, combined in the Hammett equation with K0 and remembering that ρ = 1, give the para substituent constants compiled in table 1 for amine, methoxy, ethoxy, dimethylamino, methyl, fluorine, bromine, chlorine, iodine, nitro and cyano substituents. Repeating the process with meta-substituents affords the meta substituent constants. This treatment does not include ortho-substituents, which would introduce steric effects.
The σ values displayed in the table above reveal certain substituent effects. With ρ = 1, the group of substituents with increasing positive values, notably cyano and nitro, causes the equilibrium constant to increase compared to the hydrogen reference, meaning that the acidity of the carboxylic acid (depicted on the left of the equation) has increased. These substituents stabilize the negative charge on the carboxylate oxygen atom by an electron-withdrawing inductive effect (-I) and also by a negative mesomeric effect (-M).
The next set of substituents are the halogens, for which the substituent effect is still positive but much more modest. The reason for this is that while the inductive effect is still negative, the mesomeric effect is positive, causing partial cancellation. The data also show that for these substituents, the meta effect is much larger than the para effect, due to the fact that the mesomeric effect is greatly reduced in a meta substituent. With meta substituents a carbon atom bearing the negative charge is further away from the carboxylic acid group (structure 2b).
This effect is depicted in scheme 3, where, in a para substituted arene 1a, one resonance structure 1b is a quinoid with positive charge on the X substituent, releasing electrons and thus destabilizing the Y substituent. This destabilizing effect is not possible when X has a meta orientation.
Other substituents, like methoxy and ethoxy, can even have opposite signs for the substituent constant as a result of opposing inductive and mesomeric effect. Only alkyl and aryl substituents like methyl are electron-releasing in both respects.
Of course, when the sign for the reaction constant is negative (next section), only substituents with a likewise negative substituent constant will increase equilibrium constants.
The σp– and σp+ constants
Because the carbonyl group is unable to serve as a source of electrons for -M groups (in contrast to lone-pair donors like OH), for reactions involving phenol and aniline starting materials the σp values for electron-withdrawing groups will appear too small. For reactions where resonance effects are expected to have a major impact, a modified parameter and a modified set of σp– constants may give a better fit. This parameter is defined using the ionization constants of para-substituted phenols, via a scaling factor to match up the values of σp– with those of σp for "non-anomalous" substituents, so as to maintain comparable ρ values: for ArOH ⇄ ArO– + H+, σp– is defined accordingly (see below).
Likewise, the carbonyl carbon of a benzoic acid is at a nodal position and unable to serve as a sink for +M groups (in contrast to a carbocation at the benzylic position). Thus for reactions involving carbocations at the α-position, the σp values for electron-donating groups will appear insufficiently negative. Based on similar considerations, a set of σp+ constants give better fit for reactions involving electron-donating groups at the para position and the formation of a carbocation at the benzylic site. The σp+ are based on the rate constants of the SN1 reaction of cumyl chlorides in 90% acetone/water: for , we define . Note that the scaling factor is negative, since an electron-donating group speeds up the reaction. For a reaction whose Hammett plot is being constructed, these alternative Hammett constants may need to be tested to see if a better linearity could be obtained.
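Under the definitions sketched in the two preceding paragraphs, the modified constants take the form (the numerical scaling factor −4.54 for the cumyl chloride solvolysis is the commonly quoted value and is given here as an assumption):
\[ \sigma_p^{-} = \frac{1}{\rho_{\mathrm{ArOH}}}\,\log\frac{K_a(\text{4-X-phenol})}{K_a(\text{phenol})}, \qquad \sigma_p^{+} = \frac{1}{-4.54}\,\log\frac{k_{\mathrm{X}}}{k_{\mathrm{H}}}, \]
where ρArOH is the reaction constant fixed by the ionization of para-substituted phenols, and kX and kH are the solvolysis rate constants of the substituted and unsubstituted cumyl chlorides.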
Rho value
With knowledge of substituent constants it is now possible to obtain reaction constants for a wide range of organic reactions. The archetypal reaction is the alkaline hydrolysis of ethyl benzoate (R=R'=H) in a water/ethanol mixture at 30 °C. Measurement of the reaction rate k0, combined with that of many substituted ethyl benzoates, ultimately results in a reaction constant of +2.498.
Reaction constants are known for many other reactions and equilibria. Here is a selection of those provided by Hammett himself (with their values in parentheses):
the hydrolysis of substituted cinnamic acid ester in ethanol/water (+1.267)
the ionization of substituted phenols in water (+2.008)
the acid catalyzed esterification of substituted benzoic esters in ethanol (-0.085)
the acid catalyzed bromination of substituted acetophenones (Ketone halogenation) in an acetic acid/water/hydrochloric acid (+0.417)
the hydrolysis of substituted benzyl chlorides in acetone-water at 69.8 °C (-1.875).
The reaction constant, or sensitivity constant, ρ, describes the susceptibility of the reaction to substituents, compared to the ionization of benzoic acid. It is equivalent to the slope of the Hammett plot. Information on the reaction and the associated mechanism can be obtained based on the value obtained for ρ. If the value of:
ρ>1, the reaction is more sensitive to substituents than benzoic acid and negative charge is built during the reaction (or positive charge is lost).
0<ρ<1, the reaction is less sensitive to substituents than benzoic acid and negative charge is built (or positive charge is lost).
ρ=0, no sensitivity to substituents, and no charge is built or lost.
ρ<0, the reaction builds positive charge (or loses negative charge).
These relations can be exploited to elucidate the mechanism of a reaction. As the value of ρ is related to the charge during the rate determining step, mechanisms can be devised based on this information. If the mechanism for the reaction of an aromatic compound is thought to occur through one of two mechanisms, the compound can be modified with substituents with different σ values and kinetic measurements taken. Once these measurements have been made, a Hammett plot can be constructed to determine the value of ρ. If one of these mechanisms involves the formation of charge, this can be verified based on the ρ value. Conversely, if the Hammett plot shows that no charge is developed, i.e. a zero slope, the mechanism involving the building of charge can be discarded.
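As an illustrative calculation, using the value ρ = +2.498 for the alkaline hydrolysis of ethyl benzoates quoted above and taking σp ≈ 0.78 for a para-nitro group (a commonly tabulated value, assumed here):
\[ \log\frac{k_{\mathrm{NO_2}}}{k_{\mathrm{H}}} = \rho\,\sigma_p \approx 2.498 \times 0.78 \approx 1.95, \qquad \frac{k_{\mathrm{NO_2}}}{k_{\mathrm{H}}} \approx 10^{1.95} \approx 90 , \]
i.e. the para-nitro ester is hydrolysed roughly ninety times faster than the unsubstituted ester, consistent with negative charge building up in the rate-determining step.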
Hammett plots may not always be perfectly linear. For instance, a curve may show a sudden change in slope, or ρ value. In such a case, it is likely that the mechanism of the reaction changes upon adding a different substituent. Other deviations from linearity may be due to a change in the position of the transition state. In such a situation, certain substituents may cause the transition state to appear earlier (or later) in the reaction mechanism.
Dominating electronic effects
Three kinds of ground-state or static electrical influences predominate:
Resonance (mesomeric) effect
Inductive effect: electrical influence of a group which is transmitted primarily by polarization of the bonding electrons from one atom to the next
Direct electrostatic (field) effect: electrical influence of a polar or dipolar substituent which is transmitted primarily to the reactive group through space (including solvent, if any) according to the laws of classical electrostatics
The latter two influences are often treated together as a composite effect, but are treated here separately. Westheimer demonstrated that the electrical effects of π-substituted dipolar groups on the acidities of benzoic and phenylacetic acids can be quantitatively correlated, by assuming only direct electrostatic action of the substituent on the ionizable proton of the carboxyl group. Westheimer's treatment worked well except for those acids with substituents that have unshared electron pairs such as –OH and –OCH3, as these substituents interact strongly with the benzene ring.
Roberts and Moreland studied the reactivities of 4-substituted bicyclo[2.2.2]octane-1-carboxylic acids and esters. In such a molecule, transmission of electrical effects of substituents through the ring by resonance is not possible. Hence, this hints at the role of the π-electrons in the transmission of substituent effects through aromatic systems.
The reactivities of 4-substituted bicyclo[2.2.2]octane-1-carboxylic acids and esters were measured in three different processes, each of which had been previously used with the benzoic acid derivatives. A plot of log(k) against log(KA) showed a linear relationship. Such linear relationships correspond to linear free energy relationships, which strongly imply that the effect of the substituents is exerted through changes of potential energy and that the steric and entropy terms remain almost constant through the series. The linear relationship fits the Hammett equation well. For the 4-substituted bicyclo[2.2.2]octane-1-carboxylic acid derivatives, the substituent and reaction constants are designated σ' and ρ'.
Comparison of ρ and ρ’
Reactivity data indicate that the effects of substituent groups in determining the reactivities of substituted benzoic and bicyclo[2.2.2]octane-1-carboxylic acids are comparable. This implies that the aromatic π-electrons do not play a dominant role in the transmission of electrical effects of dipolar groups to the ionizable carboxyl group. The difference between ρ and ρ' for the reactions of the acids with diphenyldiazomethane is probably due to an inverse relation to the solvent dielectric constant.
Comparison of σ and σ’
For meta-directing groups (electron-withdrawing groups, EWG), σmeta and σpara are more positive than σ'. (The superscript, c, in the table denotes data from Hammett, 1940.) For ortho-para directing groups (electron-donating groups, EDG), σ' is more positive than σmeta and σpara. The difference between σpara and σ' (σpara – σ') is greater than that between σmeta and σ' (σmeta − σ'). This is expected, as electron resonance effects are felt more strongly at the para positions. The (σ – σ') values can be taken as a reasonable measure of the resonance effects.
Nonlinearity
The plot of the Hammett equation is typically seen as being linear, with either a positive or negative slope correlating to the value of rho. However, nonlinearity emerges in the Hammett plot when a substituent affects the rate of reaction or changes the rate-determining step or mechanism of the reaction. In the former case, new sigma constants have been introduced to accommodate the deviation from linearity otherwise seen resulting from the effect of the substituent. σ+ takes into account positive charge buildup occurring in the transition state of the reaction. Therefore, an electron-donating group (EDG) will accelerate the rate of the reaction by resonance stabilization and will give the following sigma plot with a negative rho value.
σ- is designated in the case where negative charge buildup in the transition state occurs, and the rate of the reaction is consequently accelerated by electron withdrawing groups (EWG). The EWG withdraws electron density by resonance and effectively stabilizes the negative charge that is generated. The corresponding plot will show a positive rho value.
In the case of a nucleophilic acyl substitution the effect of the substituent, X, of the non-leaving group can in fact accelerate the rate of the nucleophilic addition reaction when X is an EWG. This is attributed to the resonance contribution of the EWG to withdraw electron density thereby increasing the susceptibility for nucleophilic attack on the carbonyl carbon. A change in rate occurs when X is EDG, as is evidenced when comparing the rates between X = Me and X = OMe, and nonlinearity is observed in the Hammett plot.
The effect of the substituent may change the rate-determining step (rds) in the mechanism of the reaction. A certain electronic effect may accelerate a certain step so that it is no longer the rds.
A change in the mechanism of a reaction also results in nonlinearity in the Hammett plot. Typically, the model used for measuring the changes in rate in this instance is that of the SN2 reaction. However, it has been observed in some cases of an SN2 reaction that an EWG does not accelerate the reaction as would be expected and that the rate varies with the substituent. In fact, the sign of the charge and the degree to which it develops will be affected by the substituent in the case of the benzylic system.
For example, the substituent may determine the mechanism to be an SN1 type reaction over a SN2 type reaction, in which case the resulting Hammett plot will indicate a rate acceleration due to an EDG, thus elucidating the mechanism of the reaction.
Another deviation from the regular Hammett equation is explained by the charge of the nucleophile. Despite nonlinearity in benzylic SN2 reactions, electron-withdrawing groups could either accelerate or retard the reaction. If the nucleophile is negatively charged (e.g. cyanide), the electron-withdrawing group will increase the rate due to stabilization of the extra charge which is put on the carbon in the transition state. On the other hand, if the nucleophile is not charged (e.g. triphenylphosphine), an electron-withdrawing group will slow down the reaction by decreasing the electron density in the antibonding orbital of the leaving group in the transition state.
Hammett modifications
Other equations now exist that refine the original Hammett equation: the Swain–Lupton equation, the Taft equation, the Grunwald–Winstein equation, and the Yukawa–Tsuno equation. An equation that addresses stereochemistry in aliphatic systems has also been developed.
Estimation of Hammett sigma constants
Core-electron binding energy (CEBE) shifts correlate linearly with the Hammett substituent constants (σ) in substituted benzene derivatives.
Consider para-disubstituted benzene p-F-C6H4-Z, where Z is a substituent such as NH2, NO2, etc. The fluorine atom is para with respect to the substituent Z in the benzene ring. The image on the right shows four distinguished ring carbon atoms, C1(ipso), C2(ortho), C3(meta), C4(para), in the p-F-C6H4-Z molecule. The carbon with Z is defined as C1(ipso) and the fluorinated carbon as C4(para). This definition is followed even for Z = H. The left-hand side of this relation is called the CEBE shift, or ΔCEBE, and is defined as the difference between the CEBE of the fluorinated carbon atom in p-F-C6H4-Z and that of the fluorinated carbon in the reference molecule FC6H5.
The right-hand side is the product of a parameter κ and the Hammett substituent constant at the para position, σp. The parameter κ is defined in terms of the Hammett reaction constants of the neutral molecule and of the core-ionized molecule (see the summary at the end of this section).
ΔCEBEs of ring carbons in p-F-C6H4-Z were calculated with density functional theory to see how they correlate with Hammett σ-constants. Linear plots were obtained when the calculated CEBE shifts at the ortho, meta and para carbons were plotted against the Hammett σo, σm and σp constants respectively.
The calculated value of κ is approximately 1; hence there is approximate agreement, in numerical value and in sign, between the CEBE shifts and their corresponding Hammett σ constants.
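Collecting the relations described in this section (writing ρ and ρ* for the reaction constants of the neutral and of the core-ionized molecule, symbols assumed here for illustration), the correlation is commonly summarized as
\[ \Delta\mathrm{CEBE} \;\approx\; \kappa\,\sigma_p, \qquad \kappa = \rho^{*} - \rho , \]
so that, with κ ≈ 1, the CEBE shift of the fluorinated (para) carbon tracks σp almost one-to-one.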
See also
Bell–Evans–Polanyi principle
Craig plot
Free-energy relationship
pKa
Quantitative structure–activity relationship
Notes
References
Further reading
General
Thomas H. Lowry & Kathleen Schueller Richardson, 1987, Mechanism and Theory in Organic Chemistry, 3rd Edn., New York, NY, US: Harper & Row, , see , accessed 20 June 2015.
Francis A. Carey & Richard J. Sundberg, 2006, "Title Advanced Organic Chemistry: Part A: Structure and Mechanisms," 4th Edn., New York, NY, US: Springer Science & Business Media, , see , accessed 19 June 2015.
Michael B. Smith & Jerry March, 2007, "March's Advanced Organic Chemistry: Reactions, Mechanisms, and Structure," 6th Ed., New York, NY, US: Wiley & Sons, , see , accessed 19 June 2015.
Theory
L.P. Hammett, 1970, Physical Organic Chemistry, 2nd Edn., New York, NY, US: McGraw-Hill.
John Shorter, 1982, Correlation Analysis of Organic Reactivity, Chichester 1982.
Otto Exner, 1988, Correlation Analysis of Chemical Data, New York, NY, US: Plenum.
Surveys of descriptors
Roberto Todeschini, Viviana Consonni, Raimund Mannhold, Hugo Kubinyi & Hendrik Timmerman, 2008, "Entry: Electronic substituent constants (Hammet substituent constants, σ electronic constants)," in Handbook of Molecular Descriptors, Vol. 11 of Methods and Principles in Medicinal Chemistry (book series), pp. 144–157, New York, NY, US: John Wiley & Sons, , see , accessed 22 June 2015.
N. Chapman, 2012, Correlation Analysis in Chemistry: Recent Advances, New York, NY, US: Springer Science & Business, , see , accessed 22 June 2015.
History
John Shorter, 2000, "The prehistory of the Hammett equation," Chem. Listy, 94:210-214.
Frank Westheimer, 1997, "Louis Plack Hammett, 1894—1987: A Biographical Memoir," pp. 136–149, in Biographical Memoirs, Washington, DC, US: National Academies Press, see , accessed 22 June 2015.
Physical organic chemistry
Equations | Hammett equation | Chemistry,Mathematics | 4,478 |
50,649,514 | https://en.wikipedia.org/wiki/Zinc%20finger%20protein%20510 | Zinc finger protein 510 is a protein that in humans is encoded by the ZNF510 gene.
Function
This gene encodes a krueppel C2H2-type zinc-finger protein family member. The encoded protein is expressed in several cancer cell types and may be a biomarker for early diagnosis of these cancers.
References
Further reading | Zinc finger protein 510 | Chemistry | 72 |
60,055,007 | https://en.wikipedia.org/wiki/Lyapunov%20dimension | In the mathematics of dynamical systems, the concept of Lyapunov dimension was suggested by Kaplan and Yorke for estimating the Hausdorff dimension of attractors.
The concept was further developed and rigorously justified in a number of papers, and nowadays various approaches to the definition of the Lyapunov dimension are used. Note that attractors with noninteger Hausdorff dimension are called strange attractors. Since the direct numerical computation of the Hausdorff dimension of attractors is often a problem of high numerical complexity, estimates via the Lyapunov dimension have become widespread.
The Lyapunov dimension was named after the Russian mathematician Aleksandr Lyapunov because of the close connection with the Lyapunov exponents.
Definitions
Consider a dynamical system
, where is the shift operator along the solutions:
,
of ODE , ,
or difference equation , ,
with continuously differentiable vector-function .
Then is the fundamental matrix of solutions of linearized system
and denote by ,
singular values with respect to their algebraic multiplicity,
ordered by decreasing for any and .
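A typical way of making this setup explicit (notation assumed here for illustration) is
\[ \dot{x} = f(x), \quad x \in \mathbb{R}^n, \qquad \varphi^{t}(x_0) = x(t, x_0), \]
with Dφt(x0) the fundamental matrix of the linearization along the solution x(t, x0), and
\[ \sigma_1(t,x) \ \ge\ \sigma_2(t,x) \ \ge\ \dots\ \ge\ \sigma_n(t,x) \]
its singular values.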
Definition via finite-time Lyapunov dimension
The concept of the finite-time Lyapunov dimension and the related definition of the Lyapunov dimension, developed in the works of N. Kuznetsov, is convenient for numerical experiments where only finite time can be observed.
Consider an analog of the Kaplan–Yorke formula for the finite-time Lyapunov exponents:
with respect to the ordered set of finite-time Lyapunov exponents
at the point .
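In the notation above, the finite-time Lyapunov exponents and the Kaplan–Yorke-type expression can be sketched as
\[ \mathrm{LE}_i(t,x) = \frac{1}{t}\,\ln\sigma_i(t,x), \qquad \mathrm{LE}_1(t,x) \ \ge\ \dots\ \ge\ \mathrm{LE}_n(t,x), \]
\[ d^{\mathrm{KY}}\bigl(\{\mathrm{LE}_i(t,x)\}_{i=1}^{n}\bigr) = j(t,x) + \frac{\mathrm{LE}_1(t,x)+\dots+\mathrm{LE}_{j(t,x)}(t,x)}{\bigl|\mathrm{LE}_{j(t,x)+1}(t,x)\bigr|}, \]
where j(t,x) is the largest integer such that the sum of the first j exponents is non-negative.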
The finite-time Lyapunov dimension of the dynamical system with respect to an invariant set is defined as follows:
In this approach the use of the analog of the Kaplan–Yorke formula is rigorously justified by the Douady–Oesterlé theorem, which proves that, for any fixed t, the finite-time Lyapunov dimension for a closed bounded invariant set is an upper estimate of the Hausdorff dimension.
Looking for the best such estimate, the Lyapunov dimension is defined as follows:
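The definitions just described can be summarized as follows (a sketch, notation as above):
\[ \dim_{\mathrm{L}}(t, K) = \sup_{x \in K} d^{\mathrm{KY}}\bigl(\{\mathrm{LE}_i(t,x)\}_{i=1}^{n}\bigr), \qquad \dim_{\mathrm{H}} K \ \le\ \dim_{\mathrm{L}}(t, K) \ \ \text{for each fixed } t > 0, \]
\[ \dim_{\mathrm{L}} K = \inf_{t>0}\,\sup_{x \in K}\, d^{\mathrm{KY}}\bigl(\{\mathrm{LE}_i(t,x)\}_{i=1}^{n}\bigr). \]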
The possibilities of changing the order of the time limit and the supremum over the set are discussed, e.g., in the literature.
Note that the above defined Lyapunov dimension is invariant under Lipschitz diffeomorphisms.
Exact Lyapunov dimension
Let the Jacobian matrix at one of the equilibria have simple real eigenvalues,
then
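the local Lyapunov dimension at that equilibrium takes the closed form (with the eigenvalues ordered λ1 ≥ λ2 ≥ … ≥ λn; a standard statement, sketched here)
\[ \dim_{\mathrm{L}} x_{\mathrm{eq}} = j + \frac{\lambda_1 + \dots + \lambda_j}{|\lambda_{j+1}|}, \]
where j is the largest integer such that λ1 + … + λj ≥ 0.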
If the supremum of local Lyapunov dimensions on the global attractor, which involves all equilibria, is achieved at an equilibrium point, then this allows one to obtain an analytical formula for the exact Lyapunov dimension of the global attractor (see the corresponding Eden conjecture).
Definition via statistical physics approach and ergodicity
Following the statistical physics approach and assuming ergodicity, the Lyapunov dimension of an attractor is estimated by the limit value of the local Lyapunov dimension of a typical trajectory belonging to the attractor.
In this case
and .
From a practical point of view, the rigorous use of the ergodic Oseledec theorem, verification that the considered trajectory is a typical trajectory, and the use of the corresponding Kaplan–Yorke formula is a challenging task (see, e.g., the discussions in the literature).
The exact limit values of finite-time Lyapunov exponents, if they exist and are the same for all relevant initial points, are called the absolute ones and are used in the Kaplan–Yorke formula. Examples of the rigorous use of ergodic theory for the computation of the Lyapunov exponents and dimension can be found in the literature.
References
Dynamical systems | Lyapunov dimension | Physics,Mathematics | 758 |
67,522,937 | https://en.wikipedia.org/wiki/Gjitonia | The gjitonia is a consolidated form of social cooperation that today survives only to a limited extent across the historical Arbëreshë region.
It differs from the typical Italian rione (neighbourhood) in its urban architecture and its social role.
Etymology
The word gjitonìa is a complex term that carries the social values of the parental structure of the ancient Arbëreshë extended family. It derives from the Albanian gjitonia, gjit / ngjit, meaning 'neighborhood' or 'the neighbor'; in detail, it is a medieval Albanian word still used by the Arvanites, the Albanian populations settled for centuries in present-day Greece, while it is less widespread, or in some cases has disappeared, in Albania. It has also been glossed as the 'place of the five senses', and gjitonë has been described as "literally the opposite of the indigenous neighborhood" [citation needed]. The origins of the gjitonia in the villages of southern Italy are linked to three fundamental material elements: the enclosure that delimits the space shared by the family; the house, as meeting place and material refuge; and the botanical garden, the place of pharmacy.
Gjitonië is said to come from the union of the words gjindë tonë ('people of the same linguistic tonality'; in another version it would derive from gjithë tonë, 'all ours'), and is made up of two parts: gji, indicating 'the people', and tonië, meaning people who are able to make the tones of the five senses vibrate in their lived environments and to confirm their belonging through the idiomatic metric.
Architectural-urban aspect
Unlike ordinary districts, the gjitonie are small agglomerations of houses, often attached to each other, and are commonly understood as the smallest portion of the urban fabric (although, since the gjitonia represents a historically complex social phenomenon and a Mediterranean model, this definition is not accepted unanimously by enthusiasts and scholars) [citation needed]. Their peculiarity lies in the arrangement of small houses, usually built in a semicircle flanking a "mother" house, called the stately home, all overlooking a small square or open space. Unlike the neighbourhood, which unites people through the built environment, the gjitonia represents a social model that goes beyond what can be found in an ordinary neighbourhood. The social structure is foreign to the typical architecture of Italian centres. The various gjitonie are connected to each other by small, narrow alleys, so that it is difficult to reach them by car; in recent times this has penalized the historic centres in favour of large residential complexes.
For the Arbëreshë (Arbëri or Arbanon), the gjitonia represents the place of the root of the original extended-family stock. It originated as the group grew: the group would not exceed about twenty members, and once that number was reached it lapsed and the process was repeated each time the generated groups reached the partition number. The phenomenon thus continually generated a search for the original stock, and in order to become part of it one had to provide a series of elements of historical memory that could guarantee the originality of the dynasty. For this reason the gjitonia is the indefinite place within which the sounds and the nearness of a scattered identity are perceived, and where people meet to share the moments and the difficulties that life holds in store.
In the case of Lungro, the "gjitonie" and the urban center were the focus of a historical-urbanistic study.
Social aspect
The social aspect of the gjitonia refers mainly to an ancient, historical experience in which human relationships rested on values of hospitality and solidarity between the families of the neighbourhood, and in which social life was shared without distinction of social class. For the Italo-Albanian communities in particular, the gjitonia is a world where relationships were strong enough to create real family ties, so much so that the Arbëresh saying Gjitoni gjirì ("the neighbourhood as kinship") is typical.
Perhaps handed down from the Albanians of origin, the gjitonia is close to the values of the kanun ("Family, Individual and Hospitality"), albeit contaminated and altered over the years by local customs and usages. The gjitonia has therefore become a place of socialization and of the passing on of knowledge and anecdotes, where beginners prepared to learn a trade; it is also still a place where the sacred and the profane are intertwined, where religious songs alternate with fairy tales and popular songs.
Typically the gjitonie are also the places where votive bonfires for the saints are lit or where stops for religious processions are set up; these events are organized with the participation of the whole neighbourhood as a unit.
The use of the "half-door", the sharing of basic necessities, and the participation of the whole gjitonia in mourning and in joys are some of the most visible social aspects of this phenomenon, which has been fading over the years with the depopulation of the Arbëreshë communities. In Lungro, for example, to underline the meaning of the gjitonia, the neighbourhood used to (and in some cases still does) gather for a mate (an Argentinian drink imported by Lungrese emigrants).
Modern times
In recent years, for various reasons (depopulation and demographic decline first of all, hydrogeological instability, unappealing houses, architectural barriers, historic centres not accessible by car, and others), this phenomenon has slowly been fading; it is kept alive by the elderly, who continue with their daily rituals to preserve the traditional Arbëreshë gjitonia.
In the village of San Basile (CS), the municipal administration has since 2010 run the "A house in San Basile" initiative, the sale of houses in the historic centre at reasonable prices to encourage repopulation, promote the tourist vocation of the place and halt the loss of the gjitonie.
See also
Rione
References
Bibliography
Mattanò Vincenzo Maria, 2012, Il centro antico di Lungro. Un raro documento di rigore tipologico e di sofisticata strategia insediativa, Il Coscile.
Rennis Giovan Battista, 2000, La tradizione popolare della comunità arbëreshe di Lungro, Il Coscile.
Neighbourhoods in Italy
Urban planning
Arbëreshë culture | Gjitonia | Engineering | 1,350 |
17,367,123 | https://en.wikipedia.org/wiki/Scaly-foot%20gastropod | Chrysomallon squamiferum, commonly known as the scaly-foot gastropod, scaly-foot snail, sea pangolin, or volcano snail is a species of deep-sea hydrothermal-vent snail, a marine gastropod mollusc in the family Peltospiridae. This vent-endemic gastropod is known only from deep-sea hydrothermal vents in the Indian Ocean, where it has been found at depths of about . C. squamiferum differs greatly from other deep-sea gastropods, even the closely related neomphalines. In 2019, it was declared endangered on the IUCN Red List, the first species to be listed as such due to risks from deep-sea mining of its vent habitat.
The shell is of a unique construction, with three layers; the outer layer consists of iron sulphides, the middle layer is equivalent to the organic periostracum found in other gastropods, and the innermost layer is made of aragonite. The foot is also unusual, being armored at the sides with iron-mineralised sclerites.
The snail's oesophageal gland houses symbiotic gammaproteobacteria from which the snail appears to obtain its nourishment. This species is considered to be one of the most peculiar deep-sea hydrothermal-vent gastropods, and it is the only known extant animal that incorporates iron sulfide into its skeleton (into both its sclerites and into its shell as an exoskeleton). Its heart is, proportionately speaking, unusually large for any animal: the heart comprises approximately 4% of its body volume.
Taxonomy
This species was first discovered in April 2001, and has been referred to as the "scaly-foot" gastropod since 2001. It has been referred to as Chrysomallon squamiferum since 2003, but it was not formally described in the sense of the International Code of Zoological Nomenclature until Chen et al. named it in 2015. Type specimens are stored in the Natural History Museum, London. During the time when the name was not yet formalized, an incorrect spelling variant was "Crysomallon squamiferum".
Chrysomallon squamiferum is the type species and the sole species within the genus Chrysomallon. The generic name Chrysomallon is from the Ancient Greek language, and means "golden haired", because pyrite (a compound occurring in its shell) is golden in color. The specific name squamiferum is from the Latin language and means "scale-bearing", because of its sclerites. At first it was not known to which family this species belonged. Warén et al. classified this species in the family Peltospiridae, within the Neomphalina in 2003. Molecular analyses based on sequences of cytochrome-c oxidase I (COI) genes confirmed the placement of this species within the Peltospiridae. Morphotypes from two localities are dark; a morphotype from a third locality is white (see next section for explanation of localities). These different colored snails appear to be simply "varieties" of the same species, according to the results of genetic analysis.
Distribution
The scaly-foot gastropod is a vent-endemic gastropod known only from the deep-sea hydrothermal vents of the Indian Ocean, which are around in depth. The species was discovered in 2001, living on the bases of black smokers in the Kairei hydrothermal vent field, , on the Central Indian Ridge, just north of the Rodrigues Triple Point. The species has subsequently also been found in the Solitaire field, , Central Indian Ridge, within the Exclusive Economic Zone of Mauritius and Longqi (means "Dragon flag" in Chinese) field, , Southwest Indian Ridge. Longqi field was designated as the type locality; all type material originated from this vent field. The distance between Kairei and Solitaire is about . The distance between Solitaire and Longqi is about . These three sites belong to the Indian Ocean biogeographic province of hydrothermal vent systems sensu Rogers et al. (2012). The distance between sites is large, but the total distribution area is very small, less than .
Peltospiridae snails are mainly known to live in Eastern Pacific vent fields. Nakamura et al. hypothesized that the occurrence of the scaly-foot gastropod in the Indian Ocean suggests a relationship of the hydrothermal vent faunas between these two areas.
Research expeditions have included:
2000 – an expedition of the Japan Agency for Marine-Earth Science and Technology using the ship RV Kairei and ROV Kaikō discovered the Kairei vent field, but scaly-foot gastropods were not found at that time. This was the first vent field discovered in the Indian Ocean.
2001 – an expedition of the U.S. research vessel RV Knorr with ROV Jason discovered scaly-foot gastropods in the Kairei vent field.
2007 – an expedition of RV Da Yang Yi Hao discovered the Longqi vent field.
2009 – an expedition of RV Yokosuka with DSV Shinkai 6500 discovered the Solitaire field and sampled scaly-foot gastropods there.
2009 – an expedition of RV Da Yang Yi Hao visually observed scaly-foot gastropods at Longqi vent field.
2011 – an expedition of the British Royal Research Ship RRS James Cook with ROV Kiel 6000 sampled the Longqi vent field.
Description
Sclerites
In this species, the sides of the snail's foot are extremely unusual, being armoured with hundreds of iron-mineralised sclerites; these are composed of iron sulfides greigite and pyrite. Each sclerite has a soft epithelial tissue core, a conchiolin cover, and an uppermost layer containing pyrite and greigite. Prior to the discovery of the scaly-foot gastropod, it was thought that the only extant molluscs possessing scale-like structures were in the classes Caudofoveata, Solenogastres and Polyplacophora. Sclerites are not homologous to a gastropod operculum. The sclerites of the scaly-foot gastropod are also not homologous to the sclerites found in chitons (Polyplacophora). It has been hypothesized that the sclerites of Cambrian halwaxiids such as Halkieria may potentially be more analogous to the sclerites of this snail than are the sclerites of chitons or aplacophorans. As recently as 2015, detailed morphological analysis for testing this hypothesis had not been carried out.
The sclerites of C. squamiferum are mainly proteinaceous (conchiolin is a complex protein); in contrast, the sclerites of chitons are mainly calcareous. There are no visible growth lines of conchiolin in cross-sections of sclerites. No other extant or extinct gastropods possess dermal sclerites, and no other extant animal is known to use iron sulfides in this way, either in its skeleton, or exoskeleton.
The size of each sclerite is about 1 × 5 mm in adults. Juveniles have scales in few rows, while adults have dense and asymmetric scales. The Solitaire population of snails has white sclerites instead of black; this is due to a lack of iron in the sclerites. The sclerites are imbricated (overlapped in a manner reminiscent of roof tiles). The purpose of sclerites has been speculated to be protection or detoxification. The sclerites may help protect the gastropod from the vent fluid, so that its bacteria can live close to the source of electron donors for chemosynthesis. Or alternatively, the sclerites may result from deposition of toxic sulfide waste from the endosymbionts, and therefore represent a novel solution for detoxification. But the true function of sclerites is, as yet, unknown. The sclerites of the Kairei population, which have a layer of iron sulfide, are ferrimagnetic. The non-iron-sulfide-mineralized sclerite from the Solitaire morphotype showed greater mechanical strength of the whole structure in the three-point bending stress test (12.06 MPa) than did the sclerite from the Kairei morphotype (6.54 MPa).
In life, the external surfaces of sclerites host a diverse array of epibionts: Campylobacterota (formerly Epsilonproteobacteria) and Thermodesulfobacteriota (formerly part of Deltaproteobacteria). These bacteria probably provide their mineralization. Goffredi et al. (2004) hypothesized that the snail secretes some organic compounds that facilitate the attachment of the bacteria.
Shell
The shell of these species has three whorls. The shape of the shell is globose and the spire is compressed. The shell sculpture consists of ribs and fine growth lines. The shape of the aperture is elliptical. The apex of the shell is fragile and it is corroded in adults.
This is a very large peltospirid compared to the majority of other species, which are usually below in shell length. The width of the shell is ; the maximum width of the shell reaches . The average width of the shell of adult snails is 32 mm. The average shell width in the Solitaire population was slightly less than that in the Kairei population. The height of the shell is . The width of the aperture is . The height of the aperture is .
The shell structure consists of three layers. The outer layer is about 30 μm thick, black, and is made of iron sulfides, containing greigite Fe3S4. This species is the only extant animal known to feature this material in its skeleton. The middle layer (about 150 μm) is equivalent to the organic periostracum which is also found in other gastropods. The periostracum is thick and brown. The innermost layer is made of aragonite (about 250 μm thick), a form of calcium carbonate that is commonly found both in the shells of molluscs and in various corals. The color of the aragonite layer is milky white.
Each shell layer appears to contribute to the effectiveness of the snail's defence in different ways. The middle organic layer appears to absorb mechanical strain and energy generated by a squeezing attack (for example by the claws of a crab), making the shell much tougher. The organic layer also acts to dissipate heat. Features of this composite material are a focus of research for possible use in civilian and military protective applications.
Operculum
In this species, the shape of the operculum changes during growth, from a rounded shape in juveniles to a curved shape in adults. The relative size of the operculum decreases as individuals grow. About a half of all adult snails of this species possess an operculum among the sclerites at the rear of the animal. It seems likely that the sclerites gradually grow and fully cover the whole foot for protection, and the operculum loses its protective function as the animal grows.
External anatomy
The scaly-foot gastropod has a thick snout, which tapers distally to a blunt end. The mouth is a circular ring of muscles when contracted and closed. The two smooth cephalic tentacles are thick at the base and gradually taper to a fine point at their distal tips. This snail has no eyes. There is no specialised copulatory appendage. The foot is red and large, and the snail cannot withdraw the foot entirely into the shell. There is no pedal gland in the front part of the foot. There are also no epipodial tentacles.
Internal anatomy
In C. squamiferum, the soft parts of the animal occupy approximately two whorls of the interior of the shell. The shell muscle is horseshoe-shaped and large, divided in two parts on the left and right, and connected by a narrower attachment. The mantle edge is thick but simple without any distinctive features. The mantle cavity is deep and reaches the posterior edge of the shell. The medial to left side of the cavity is dominated by a very large bipectinate ctenidium. Ventral to the visceral mass, the body cavity is occupied by a huge esophageal gland, which extends to fill the ventral floor of the mantle cavity.
The digestive system is simple, and is reduced to less than 10% of the volume typical in gastropods. The radula is "weak", of the rhipidoglossan type, with a single pair of radular cartilages. The formula of the radula is ~50 + 4 + 1 + 4 + ~50. The radula ribbon is 4 mm long, 0.5 mm wide; the width to length ratio is approximately 1:10. There is no jaw, and no salivary glands. A part of the anterior oesophagus rapidly expands into a huge, hypertrophied, blind-ended esophageal gland, which occupies much of the ventral face of the mantle cavity (estimated 9.3% body volume). The esophageal gland grows isometrically with the snail, consistent with the snail depending on its endosymbiont microbes throughout its settled life. The oesophageal gland has a uniform texture, and is highly vascularised with fine blood vessels. The stomach has at least three ducts at its anterior right, connecting to the digestive gland. There are consolidated pellets in both the stomach and in the hindgut. These pellets are probably granules of sulfur produced by the endosymbiont as a way to detoxify hydrogen sulfide. The intestine is reduced, and only has a single loop. The extensive and unconsolidated digestive gland extends to the posterior, filling the shell apex of the shell. The rectum does not penetrate the heart, but passes ventral to it. The anus is located on the right side of the snail, above the genital opening.
In the excretory system, the nephridium is central, tending to the right side of the body, as a thin dark layer of glandular tissue. The nephridium is anterior and ventral of the digestive gland, and is in contact with the dorsal side of the foregut.
The respiratory system and circulatory system consist of a single left bipectinate ctenidium (gill), which is very large (15.5% of the body volume), and is supported by many large and mobile blood sinuses filled with haemocoel. On dissection, the blood sinuses and lumps of haemocoel material are a prominent feature throughout the body cavity. Although the circulatory system in Chrysomallon is mostly closed (meaning that haemocoel mostly does not leave blood sinuses), the prominent blood sinuses appear to be transient, and occur in different areas of the body in different individuals. There are thin gill filaments on either side of the ctenidium. The bipectinate ctenidium extends far behind the heart into the upper shell whorls; it is much larger than in Peltospira. Although this species has a similar shell shape and general form to other peltospirids, the ctenidium is proportional size to that of Hirtopelta, which has the largest gill among peltospirid genera that have been investigated anatomically so far.
The ctenidium provides oxygen for the snail, but the circulatory system is enlarged beyond the scope of other similar vent gastropods. There are no endosymbionts in or on the gill of C. squamiferum. The enlargement of the gill is probably to facilitate extracting oxygen in the low-oxygen conditions that are typical of hydrothermal-vent ecosystems.
At the posterior of the ctenidium is a remarkably large and well-developed heart. The heart is unusually large for any animal proportionally. Based on the volume of the single auricle and ventricle, the heart complex represents approximately 4% of the body volume (for example, the heart of humans is 1.3% of the body volume). The ventricle is 0.64 mm long in juveniles with a shell length of 2.2 mm, and grows to 8 mm long in adults. This proportionally giant heart primarily sucks blood through the ctenidium and supplies the highly vascularised oesophageal gland. In C. squamiferum the endosymbionts are housed in an esophageal gland, where they are isolated from the vent fluid. The host is thus likely to play a major role in supplying the endosymbionts with necessary chemicals, leading to increased respiratory needs. Detailed investigation of the haemocoel of C. squamiferum will reveal further information about its respiratory pigments.
The scaly-foot gastropod is a chemosymbiotic holobiont. It hosts thioautotrophic (sulfur-oxidising) gammaproteobacterial endosymbionts in a much enlarged oesophageal gland, and appears to rely on these symbionts for nutrition. The closest known relative of this endosymbiont is that one from Alviniconcha snails. In this species, the size of the oesophageal gland is about two orders of magnitude larger than the usual size. There is a significant embranchment within the oesophageal gland, where the blood pressure likely decreases to almost zero. The elaborate cardiovascular system most likely evolved to oxygenate the endosymbionts in an oxygen-poor environment, and/or to supply hydrogen sulfide to the endosymbionts. Thioautotrophic gammaproteobacteria have a full set of genes required for aerobic respiration, and are probably capable of switching between the more efficient aerobic respiration, and the less efficient anaerobic respiration, depending on oxygen availability. In 2014, the endosymbiont of the scaly-foot gastropod become the first endosymbiont of any gastropod for which the complete genome was known. C. squamiferum was previously thought to be the only species of Peltospiridae that has an enlarged oesophageal gland, but later it was discovered that both species of Gigantopelta also have an enlarged oesophageal gland. Chrysomallon and Gigantopelta are the only vent animals, except siboglinid tubeworms, that house endosymbionts within an enclosed part of the body not in direct contact with vent fluid.
The nervous system is large, and the brain is a solid neural mass without ganglia. The nervous system is reduced in complexity and enlarged in size compared to other neomphaline taxa. As is typical of gastropods, the nervous system is composed of an anterior oesophageal nerve ring and two pairs of longitudinal nerve cords, the ventral pair innervating the foot and the dorsal pair forming a twist via streptoneury. The frontal part of the oesophageal nerve ring is large, connecting two lateral swellings. The huge fused neural mass is directly adjacent to, and passes through, the oeosophageal gland, where the bacteria are housed. There are large tentacular nerves projecting into the cephalic tentacles. The sensory organs of the scaly-foot gastropod include statocysts surrounded by the oesophageal gland, each statocyst with a single statolith. There are also sensory ctenidial bursicles on the tip of the gill filaments; these are known to be present in most vetigastropods, and are present some neomphalines.
The reproductive system has some unusual features. The gonads of adult snails are not inside the shell; they are in the head-foot region on the right side of the body. There are no gonads present in juveniles with shell length of 2.2 mm. Adults possess both testis and ovary in different levels of development. The testis is placed ventrally; the ovary is placed dorsally, and the nephridium lies between them. There is a "spermatophore packaging organ" next to the testis. Gonoducts from the testis and ovary are initially separate, but apparently fuse to a single duct, and emerge as a single genital opening on the right of the mantle cavity. The animal has no copulatory organ.
It is hypothesized that the derived strategy of housing endosymbiotic microbes in an oesophageal gland, has been the catalyst for anatomical innovations that serve primarily to improve the fitness of the bacteria, over and above the needs of the snail. The great enlargement of the oesophageal gland, the snail's protective dermal sclerites, its highly enlarged respiratory and circulatory systems and its high fecundity are all considered to be adaptations which are beneficial to its endosymbiont microbes. These adaptations appear to be a result of specialisation to resolve energetic needs in an extreme chemosynthetic environment.
Ecology
Habitat
This species inhabits the hydrothermal vent fields of the Indian Ocean. It lives adjacent to both acidic and reducing vent fluid, on the walls of black-smoker chimneys, or directly on diffuse flow sites.
The depth of the Kairei field varies from , and its dimensions are approximately . The slope of the field is 10° to 30°. The substrate rock is troctolite and depleted mid-ocean ridge basalt. The Kairei-field scaly-foot gastropods live in the low-temperature diffuse fluids of a single chimney. The transitional zone, where these gastropods were found, is about in width, with temperature of 2–10 °C. The preferred water temperature for this species is about 5 °C. These snails live in an environment which has high concentrations of hydrogen sulfide, and low concentrations of oxygen.
The abundance of scaly-foot gastropods was lower in the Kairei field than in the Longqi field. The Kairei hydrothermal-vent community consists of 35 taxa, including sea anemones Marianactis sp., crustaceans Austinograea rodriguezensis, Rimicaris kairei, Mirocaris indica, Munidopsis sp., Neolepadidae genus and sp., Eochionelasmus sp., bivalves Bathymodiolus marisindicus, gastropods Lepetodrilus sp., Pseudorimula sp., Eulepetopsis sp., Shinkailepas sp., and Alviniconcha marisindica, Desbruyeresia marisindica, Bruceiella wareni, Phymorhynchus sp., Sutilizona sp., slit limpet sp. 1, slit limpet sp. 2, Iphinopsis boucheti, solenogastres Helicoradomenia? sp., annelids Amphisamytha sp., Archinome jasoni, Capitellidae sp. 1, Ophyotrocha sp., Hesionidae sp. 1, Hesionoidae sp. 2, Branchinotogluma sp., Branchipolynoe sp., Harmothoe? sp., Levensteiniella? sp., Prionospio sp., unidentified Nemertea and unidentified Platyhelminthes. Scaly-foot gastropods live in colonies with Alviniconcha marisindica snails, and there are colonies of Rimicaris kairei above them.
The Solitaire field is at a depth of , and its dimensions are approximately . The substrate rock is enriched mid-ocean ridge basalt. Scaly-foot gastropods live near the high-temperature diffuse fluids of chimneys in the vent field. The abundance of scaly-foot gastropods was lower in the Solitaire field than in the Longqi field. The Solitaire hydrothermal-vent community comprises 22 taxa, including: sea anemones Marianactis sp., crustaceans Austinograea rodriguezensis, Rimicaris kairei, Mirocaris indica, Munidopsis sp., Neolepadidae gen et sp., Eochionelasmus sp., bivalves Bathymodiolus marisindicus, gastropods Lepetodrilus sp., Eulepetopsis sp., Shinkailepas sp., Alviniconcha sp. type 3, Desbruyeresia sp., Phymorhynchus sp., annelids Alvinellidae genus and sp., Archinome jasoni, Branchinotogluma sp., echinoderm holothurians Apodacea gen et sp., fish Macrouridae genus and sp., unidentified Nemertea, and unidentified Platyhelminthes.
The Longqi vent field is in a depth of , and its dimensions are approximately . C. squamiferum was densely populated in the areas immediately surrounding the diffuse-flow venting. The Longqi hydrothermal-vent community include 23 macro- and megafauna taxa: sea anemones Actinostolidae sp., annelids Polynoidae n. gen. n. sp. “655”, Branchipolynoe n. sp. “Dragon”, Peinaleopolynoe n. sp. “Dragon”, Hesiolyra cf. bergi, Hesionidae sp. indet., Ophryotrocha n. sp. “F-038/1b”, Prionospio cf. unilamellata, Ampharetidae sp. indet., mussels Bathymodiolus marisindicus, gastropods Gigantopelta aegis, Dracogyra subfuscus, Lirapex politus, Phymorhynchus n. sp. “SWIR”, Lepetodrilus n. sp. “SWIR”, crustaceans Neolepas sp. 1, Rimicaris kairei, Mirocaris indica, Chorocaris sp., Kiwa n. sp. “SWIR”17, Munidopsis sp. and echinoderm holothurians Chiridota sp. The density of Lepetodrilus n. sp. “SWIR” and scaly-foot gastropods is over 100 snails per m2 in close distance from vent fluid sources at Longqi vent field.
Feeding habits
The scaly-foot gastropod is an obligate symbiotroph throughout post-settlement life. Throughout its post-larval life, the scaly-foot gastropod obtains all of its nutrition from the chemoautotrophy of its endosymbiotic bacteria. The scaly-foot gastropod is neither a filter-feeder nor uses other mechanisms for feeding. The radula and radula cartilage are small, respectively constituting only 0.4% and 0.8% of juveniles' body volume, compared to 1.4% and 2.6% in the mixotrophic juveniles of Gigantopelta chessoia.
For identification of trophic interactions in a habitat, where direct observation of feeding habits is complicated, carbon and nitrogen stable-isotope compositions can be measured. There are depleted values of δ13C in the oesophageal gland (relative to photosynthetically derived organic carbon). Chemoautotrophic symbionts were presumed as a source of such carbon. Chemoautotrophic origin of the stable carbon isotope 13C was confirmed experimentally.
Life cycle
This gastropod is a simultaneous hermaphrodite. It is the only species in the family Peltospiridae that is so far known to be a simultaneous hermaphrodite. It has a high fecundity. It lays eggs that are probably of lecithotrophic type. Eggs of the scaly-foot gastropod are negatively buoyant under atmospheric pressure. Neither the larvae nor the protoconch is known as of 2016, but it is thought that the species has a planktonic dispersal stage. The smallest C. squamiferum juvenile specimens ever collected had a shell length 2.2 mm. The results of statistical analyses revealed no genetic differentiation between the two populations in the Kairei and Solitaire fields, suggesting potential connectivity between the two vent fields. The Kairei population represents a potential source population for the two populations in the Central Indian Ridge. These snails are difficult to keep alive in an artificial environment; however, they survived in aquaria at atmospheric pressure for more than three weeks.
Conservation measures and threats
The scaly-foot gastropod is not protected. Its potential habitat across all Indian Ocean hydrothermal vent fields has been estimated to be at most , while the three known sites at which it has been found, between which only negligible migration occurs, add up to , or less than one-fifth of a football field.
The population at the Longqi vent field may be of particular concern. The Southwest Indian Ridge, within which it is located, is one of the slowest-spreading mid-ocean ridges, and the low rate of natural disturbances is associated with ecological communities that are likely more sensitive to and recover more slowly from disruptions. Slow-spreading centers may also create larger mineral deposits, making those sensitive areas primary targets for deep-sea mining. Furthermore, by genetic measures the population at Longqi is poorly connected to those at the Kairei and Solitaire vent fields, over 2000 km away within the Central Indian Ridge.
The Solitaire Vent Field falls within the exclusive economic zone of Mauritius, while the other two sites are within Areas Beyond National Jurisdiction (commonly known as the high seas) under the authority of the International Seabed Authority, which has granted commercial mining exploration licenses for both. The Kairei Vent Field is under a license to Germany (2015–2030), the Longqi Vent Field to China (2011–2026). As of 2017, no conservation measures are proposed or in place for any of the three sites.
It has been listed as an endangered species in the IUCN Red List of Threatened Species since July 4, 2019.
See also
Iron in biology
Notes
References
External links
Peltospiridae
Animals living on hydrothermal vents
Gastropods described in 2015
Chemosynthetic symbiosis | Scaly-foot gastropod | Biology | 6,414 |
5,584,334 | https://en.wikipedia.org/wiki/Vacuum%20flange | A vacuum flange is a flange at the end of a tube used to connect vacuum chambers, tubing and vacuum pumps to each other. Vacuum flanges are used for scientific and industrial applications to allow various pieces of equipment to interact via physical connections and for vacuum maintenance, monitoring, and manipulation from outside a vacuum's chamber. Several flange standards exist with differences in ultimate attainable pressure, size, and ease of attachment.
Vacuum flange types
Several vacuum flange standards exist, and the same flange types are called by different names by different manufacturers and standards organizations.
KF/QF
The ISO standard quick-release flange is known by the names Quick Flange (QF) or Kleinflansch (KF, German for "small flange"). The KF designation has been adopted by ISO, DIN, and Pneurop. KF flanges are made with a chamfered back surface that is attached with a circular clamp and an elastomeric o-ring (AS568 specification) mounted in a metal centering ring. Standard sizes are indicated by the nominal inner diameter in millimeters for flanges 10 through 50 mm in diameter. Sizes 10, 20 and 32 are less common (see Renard numbers). Some sizes share their flange dimensions with their respective larger neighbor and use the same clamp size. This means a DN10KF can mate to a DN16KF by using an adaptive centering ring. The same applies for DN20KF to DN25KF and DN32KF to DN40KF.
ISO
The ISO large flange standard is known as LF, LFB, MF or sometimes just ISO flange. As in KF flanges, the flanges are joined by a centering ring and an elastomeric o-ring. An extra spring-loaded circular clamp is often used around the large-diameter o-rings to prevent them from rolling off from the centering ring during mounting.
ISO large flanges come in two varieties. ISO-K (or ISO LF) flanges are joined with double-claw clamps, which clamp to a circular groove on the tubing side of the flange. ISO-F (or ISO LFB) flanges have holes for attaching the two flanges with bolts. Two tubes with ISO-K and ISO-F flanges can be joined together by clamping the ISO-K side with single-claw clamps, which are then bolted to the holes on the ISO-F side.
ISO large flanges are available in sizes from 63 to 500 mm nominal tube diameter:
CF (Conflat)
CF (ConFlat) flanges use an oxygen-free high thermal conductivity copper gasket and knife-edge flange to achieve an ultrahigh vacuum seal. The term "ConFlat" is a registered trademark of Varian, Inc., so "CF" is commonly used by other flange manufacturers. Each face of the two mating CF flanges has a knife edge, which cuts into the softer metal gasket, providing an extremely leak-tight, metal-to-metal seal. Deformation of the metal gasket fills small defects in the flange, allowing ConFlat flanges to operate down to 10−13 Torr (10−11 Pa) pressure. The knife edge is recessed in a groove in each flange. In addition to protecting the knife edge, the groove helps hold the gasket in place, which aligns the two flanges and also reduces gasket expansion during bake-out. For stainless-steel ConFlat flanges, baking temperatures of 450 °C can be achieved; the temperature is limited by the choice of gasket material. CF flanges are sexless and interchangeable. In North America, flange sizes are given by flange outer diameter in inches, while in Europe and Asia, sizes are given by tube inner diameter in millimeters. Despite the different naming conventions, the actual flanges are the same.
ConFlat gaskets were originally invented by William Wheeler and other engineers at Varian in an attempt to build a flange that would not leak after baking.
Wheeler
A Wheeler flange is a large wire-seal flange often used on large vacuum chambers.
American Standards Association (ASA)
A flange standard popularized in the United States is codified by the American National Standards Institute (ANSI), and is also sometimes named after the organization's previous name, the American Standards Association (ASA). These flanges have elastomeric o-ring seals and can be used for both vacuum and pressure applications. Flange sizes are indicated by tube nominal inner diameter (ANSI naming convention) or by flange outer diameter in inches (ASA naming convention).
Vacuum gaskets
To achieve a vacuum seal, a gasket is required. An elastomeric o-ring gasket can be made of Buna rubber, viton fluoropolymer, silicone rubber or teflon. O-rings can be placed in a groove or may be used in combination with a centering ring or as a "captured" o-ring that is held in place by separate metal rings. Metal gaskets are used in ultra-high-vacuum systems where outgassing of the elastomer could be a significant gas load. A copper ring gasket is used with ConFlat flanges. Metal wire gaskets made of copper, gold or indium can be used.
Vacuum feedthrough
A vacuum feedthrough is a flange that contains a vacuum-tight electrical, physical or mechanical connection to the vacuum chamber. An electrical feedthrough allows voltages to be applied to components under vacuum, for example a filament or heater. An example of a physical feedthrough is a vacuum-tight connection for cooling water. A mechanical feedthrough is used for rotation and translation of components under vacuum. A wobble stick is a mechanical feedthrough device that can be used to pick up, move and otherwise manipulate objects in a vacuum chamber.
See also
Vacuum engineering
Vacuum grease
References
External links
ISO 1609:1986 Vacuum technology - Flange dimensions
all vacuum flange manufacturers in vacuum-guide.com
Flange
Plumbing | Vacuum flange | Physics,Engineering | 1,338 |
2,526,861 | https://en.wikipedia.org/wiki/Isotopes%20of%20thorium | Thorium (90Th) has seven naturally occurring isotopes but none are stable. One isotope, 232Th, is relatively stable, with a half-life of 1.405×1010 years, considerably longer than the age of the Earth, and even slightly longer than the generally accepted age of the universe. This isotope makes up nearly all natural thorium, so thorium was considered to be mononuclidic. However, in 2013, IUPAC reclassified thorium as binuclidic, due to large amounts of 230Th in deep seawater. Thorium has a characteristic terrestrial isotopic composition and thus a standard atomic weight can be given.
Thirty-one radioisotopes have been characterized, with the most stable being 232Th, 230Th with a half-life of 75,380 years, 229Th with a half-life of 7,917 years, and 228Th with a half-life of 1.92 years. All of the remaining radioactive isotopes have half-lives that are less than thirty days and the majority of these have half-lives that are less than ten minutes. One isotope, 229Th, has a nuclear isomer (or metastable state) with a remarkably low excitation energy, recently measured to be about 8.36 eV. It has been proposed to perform laser spectroscopy of the 229Th nucleus and use the low-energy transition for the development of a nuclear clock of extremely high accuracy.
The known isotopes of thorium range in mass number from 207 to 238.
List of isotopes
|-id=Thorium-207
| 207Th
|
| style="text-align:right" | 90
| style="text-align:right" | 117
|
|
| α
| 203Ra
|
|
|
|-id=Thorium-208
| 208Th
|
| style="text-align:right" | 90
| style="text-align:right" | 118
| 208.017915(34)
| 2.4(12) ms
| α
| 204Ra
| 0+
|
|
|-id=Thorium-209
| 209Th
|
| style="text-align:right" | 90
| style="text-align:right" | 119
| 209.017998(27)
| 3.1(12) ms
| α
| 205Ra
| 13/2+
|
|
|-id=Thorium-210
| 210Th
|
| style="text-align:right" | 90
| style="text-align:right" | 120
| 210.015094(20)
| 16.0(36) ms
| α
| 206Ra
| 0+
|
|
|-id=Thorium-211
| 211Th
|
| style="text-align:right" | 90
| style="text-align:right" | 121
| 211.014897(92)
| 48(20) ms
| α
| 207Ra
| 5/2−#
|
|
|-id=Thorium-212
| 212Th
|
| style="text-align:right" | 90
| style="text-align:right" | 122
| 212.013002(11)
| 31.7(13) ms
| α
| 208Ra
| 0+
|
|
|-id=Thorium-213
| 213Th
|
| style="text-align:right" | 90
| style="text-align:right" | 123
| 213.0130115(99)
| 144(21) ms
| α
| 209Ra
| 5/2−
|
|
|-id=Thorium-213m
| style="text-indent:1em" | 213mTh
|
| colspan="3" style="text-indent:2em" | 1180.0(14) keV
| 1.4(4) μs
| IT
| 213Th
| (13/2)+
|
|
|-id=Thorium-214
| 214Th
|
| style="text-align:right" | 90
| style="text-align:right" | 124
| 214.011481(11)
| 87(10) ms
| α
| 210Ra
| 0+
|
|
|-id=Thorium-214m
| style="text-indent:1em" | 214mTh
|
| colspan="3" style="text-indent:2em" | 2181.0(27) keV
| 1.24(12) μs
| IT
| 214Th
| 8+#
|
|
|-id=Thorium-215
| 215Th
|
| style="text-align:right" | 90
| style="text-align:right" | 125
| 215.0117246(68)
| 1.35(14) s
| α
| 211Ra
| (1/2−)
|
|
|-id=Thorium-215m
| style="text-indent:1em" | 215mTh
|
| colspan="3" style="text-indent:2em" | 1471(50)# keV
| 770(60) ns
| IT
| 215Th
| 9/2+#
|
|
|-id=Thorium-216
| 216Th
|
| style="text-align:right" | 90
| style="text-align:right" | 126
| 216.011056(12)
| 26.28(16) ms
| α
| 212Ra
| 0+
|
|
|-id=Thorium-216m1
| rowspan=2 style="text-indent:1em" | 216m1Th
| rowspan=2|
| rowspan=2 colspan="3" style="text-indent:2em" | 2041(8) keV
| rowspan=2|135.4(29) μs
| IT (97.2%)
| 216Th
| rowspan=2|8+
| rowspan=2|
| rowspan=2|
|-
| α (2.8%)
| 212Ra
|-id=Thorium-216m2
| style="text-indent:1em" | 216m2Th
|
| colspan="3" style="text-indent:2em" | 2648(8) keV
| 580(26) ns
| IT
| 216Th
| (11−)
|
|
|-id=Thorium-216m3
| style="text-indent:1em" | 216m3Th
|
| colspan="3" style="text-indent:2em" | 3682(8) keV
| 740(70) ns
| IT
| 216Th
| (14+)
|
|
|-id=Thorium-217
| 217Th
|
| style="text-align:right" | 90
| style="text-align:right" | 127
| 217.013103(11)
| 248(4) μs
| α
| 213Ra
| 9/2+#
|
|
|-id=Thorium-217m1
| style="text-indent:1em" | 217m1Th
|
| colspan="3" style="text-indent:2em" | 673.3(1) keV
| 141(50) ns
| IT
| 217Th
| (15/2−)
|
|
|-id=Thorium-217m2
| style="text-indent:1em" | 217m2Th
|
| colspan="3" style="text-indent:2em" | 2307(32) keV
| 71(14) μs
| IT
| 217Th
| (25/2+)
|
|
|-id=Thorium-218
| 218Th
|
| style="text-align:right" | 90
| style="text-align:right" | 128
| 218.013276(11)
| 122(5) ns
| α
| 214Ra
| 0+
|
|
|-id=Thorium-219
| 219Th
|
| style="text-align:right" | 90
| style="text-align:right" | 129
| 219.015526(61)
| 1.023(18) μs
| α
| 215Ra
| 9/2+#
|
|
|-id=Thorium-220
| 220Th
|
| style="text-align:right" | 90
| style="text-align:right" | 130
| 220.015770(15)
| 10.2(3) μs
| α
| 216Ra
| 0+
|
|
|-id=Thorium-221
| 221Th
|
| style="text-align:right" | 90
| style="text-align:right" | 131
| 221.0181858(86)
| 1.75(2) ms
| α
| 217Ra
| 7/2+#
|
|
|-id=Thorium-222
| 222Th
|
| style="text-align:right" | 90
| style="text-align:right" | 132
| 222.018468(11)
| 2.24(3) ms
| α
| 218Ra
| 0+
|
|
|-id=Thorium-223
| 223Th
|
| style="text-align:right" | 90
| style="text-align:right" | 133
| 223.0208111(85)
| 0.60(2) s
| α
| 219Ra
| (5/2)+
|
|
|-id=Thorium-224
| 224Th
|
| style="text-align:right" | 90
| style="text-align:right" | 134
| 224.021466(10)
| 1.04(2) s
| α
| 220Ra
| 0+
|
|
|-id=Thorium-225
| rowspan=2|225Th
| rowspan=2|
| rowspan=2 style="text-align:right" | 90
| rowspan=2 style="text-align:right" | 135
| rowspan=2|225.0239510(55)
| rowspan=2|8.75(4) min
| α (~90%)
| 221Ra
| rowspan=2|3/2+
| rowspan=2|
| rowspan=2|
|-
| EC (~10%)
| 225Ac
|-id=Thorium-226
| rowspan=2|226Th
| rowspan=2|
| rowspan=2 style="text-align:right" | 90
| rowspan=2 style="text-align:right" | 136
| rowspan=2|226.0249037(48)
| rowspan=2|30.70(3) min
| α
| 222Ra
| rowspan=2|0+
| rowspan=2|
| rowspan=2|
|-
| CD (<%)
| 208Pb18O
|-id=Thorium-227
| 227Th
| Radioactinium
| style="text-align:right" | 90
| style="text-align:right" | 137
| 227.0277025(22)
| 18.693(4) d
| α
| 223Ra
| (1/2+)
| Trace
|
|-
| rowspan=2|228Th
| rowspan=2|Radiothorium
| rowspan=2 style="text-align:right" | 90
| rowspan=2 style="text-align:right" | 138
| rowspan=2|228.0287397(19)
| rowspan=2|1.9125(7) y
| α
| 224Ra
| rowspan=2|0+
| rowspan=2|Trace
| rowspan=2|
|-
| CD (1.13×10−11%)
| 208Pb20O
|-
| 229Th
|
| style="text-align:right" | 90
| style="text-align:right" | 139
| 229.0317614(26)
| 7916(17) y
| α
| 225Ra
| 5/2+
| Trace
|
|-
| style="text-indent:1em" | 229mTh
|
| colspan="3" style="text-indent:2em" | 8.355733554021(8) eV
| 7(1) μs
| IT
| 229Th+
| 3/2+
|
|
|-
| style="text-indent:1em" | 229mTh+
|
| colspan="3" style="text-indent:2em" | 8.355733554021(8) eV
| 29(1) min
| γ
| 229Th+
| 3/2+
|
|
|-
| rowspan=3|230Th
| rowspan=3|Ionium
| rowspan=3 style="text-align:right" | 90
| rowspan=3 style="text-align:right" | 140
| rowspan=3|230.0331323(13)
| rowspan=3|7.54(3)×104 y
| α
| 226Ra
| rowspan=3|0+
| rowspan=3|0.0002(2)
| rowspan=3|
|-
| CD (5.8×10−11%)
| 206Hg24Ne
|-
| SF (<4×10−12%)
| (Various)
|-
| 231Th
| Uranium Y
| style="text-align:right" | 90
| style="text-align:right" | 141
| 231.0363028(13)
| 25.52(1) h
| β−
| 231Pa
| 5/2+
| Trace
|
|-
| rowspan=4|232Th
| rowspan=4|Thorium
| rowspan=4 style="text-align:right" | 90
| rowspan=4 style="text-align:right" | 142
| rowspan=4|232.0380536(15)
| rowspan=4|1.40(1)×1010 y
| α
| 228Ra
| rowspan=4|0+
| rowspan=4|0.9998(2)
| rowspan=4|
|-
| SF (1.1×10−9%)
| (various)
|-
| CD (<2.78×10−10%)
| 208Hg24Ne
|-
| CD (<2.78×10−10%)
| 206Hg26Ne
|-
| 233Th
|
| style="text-align:right" | 90
| style="text-align:right" | 143
| 233.0415801(15)
| 21.83(4) min
| β−
| 233Pa
| 1/2+
| Trace
|
|-
| 234Th
| Uranium X1
| style="text-align:right" | 90
| style="text-align:right" | 144
| 234.0435998(28)
| 24.107(24) d
| β−
| 234mPa
| 0+
| Trace
|
|-id=Thorium-235
| 235Th
|
| style="text-align:right" | 90
| style="text-align:right" | 145
| 235.047255(14)
| 7.2(1) min
| β−
| 235Pa
| 1/2+#
|
|
|-id=Thorium-236
| 236Th
|
| style="text-align:right" | 90
| style="text-align:right" | 146
| 236.049657(15)
| 37.3(15) min
| β−
| 236Pa
| 0+
|
|
|-id=Thorium-237
| 237Th
|
| style="text-align:right" | 90
| style="text-align:right" | 147
| 237.053629(17)
| 4.8(5) min
| β−
| 237Pa
| 5/2+#
|
|
|-id=Thorium-238
| 238Th
|
| style="text-align:right" | 90
| style="text-align:right" | 148
| 238.05639(30)#
| 9.4(20) min
| β−
| 238Pa
| 0+
|
|
Uses
Thorium has been suggested for use in thorium-based nuclear power.
In many countries the use of thorium in consumer products is banned or discouraged because it is radioactive.
It is currently used in cathodes of vacuum tubes, for a combination of physical stability at high temperature and a low work function (the energy required to remove an electron from its surface).
It has, for about a century, been used in mantles of gas and vapor lamps such as gas lights and camping lanterns.
Low dispersion lenses
Thorium was also used in certain glass elements of Aero-Ektar lenses made by Kodak during World War II. Thus they are mildly radioactive. Two of the glass elements in the f/2.5 Aero-Ektar lenses are 11% and 13% thorium by weight. The thorium-containing glasses were used because they have a high refractive index with a low dispersion (variation of index with wavelength), a highly desirable property. Many surviving Aero-Ektar lenses have a tea colored tint, possibly due to radiation damage to the glass.
These lenses were used for aerial reconnaissance because the radiation level is not high enough to fog film over a short period. This would indicate the radiation level is reasonably safe. However, when not in use, it would be prudent to store these lenses as far as possible from normally inhabited areas, allowing the inverse-square relationship to attenuate the radiation.
Actinides vs. fission products
Notable isotopes
Thorium-228
228Th is an isotope of thorium with 138 neutrons. It was once named Radiothorium, due to its occurrence in the disintegration chain of thorium-232. It has a half-life of 1.9116 years. It undergoes alpha decay to 224Ra. Occasionally it decays by the unusual route of cluster decay, emitting a nucleus of 20O and producing stable 208Pb. It is a daughter isotope of 232U in the thorium decay series.
228Th has an atomic weight of 228.0287411 grams/mole.
Together with its decay product 224Ra it is used for alpha particle radiation therapy.
Thorium-229
229Th is a radioactive isotope of thorium that decays by alpha emission with a half-life of 7917 years.
229Th is produced by the decay of uranium-233, and its principal use is for the production of the medical isotopes actinium-225 and bismuth-213.
Thorium-229m
229Th has a nuclear isomer, 229mTh, with a remarkably low excitation energy of about 8.36 eV.
Due to this low energy, the lifetime of 229mTh very much depends on the electronic environment of the nucleus. In neutral 229Th, the isomer decays by internal conversion within a few microseconds. However, the isomeric energy is not enough to remove a second electron (thorium's second ionization energy is ), so internal conversion is impossible in Th+ ions. Radiative decay occurs with a half-life orders of magnitude longer, in excess of 1000 seconds. Embedded in ionic crystals, ionization is not quite 100%, so a small amount of internal conversion occurs, leading to a recently measured lifetime of ≈, which can be extrapolated to a lifetime for isolated ions of .
This excitation energy corresponds to a photon frequency of (wavelength ). Although this frequency lies in the vacuum ultraviolet range, it is possible to build a laser operating at it, giving the only known opportunity for direct laser excitation of a nuclear state, which could have applications such as a nuclear clock of very high accuracy or a qubit for quantum computing.
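As a rough illustration (the numerical values are not preserved in this text), the photon frequency and wavelength follow directly from E = hf and λ = hc/E; the sketch below uses the 8.3557 eV excitation energy quoted in the isotope table above.

```python
# Illustrative only: convert the 229mTh isomer excitation energy into the
# corresponding photon frequency and wavelength (E = h*f, lambda = h*c/E).
h = 6.62607015e-34      # Planck constant, J*s
c = 299792458.0         # speed of light, m/s
eV = 1.602176634e-19    # joules per electronvolt

E = 8.355733554021 * eV   # excitation energy; value taken from the isotope table above
frequency = E / h                   # Hz
wavelength_nm = (h * c / E) * 1e9   # nm

print(f"frequency  ≈ {frequency:.4e} Hz")      # ≈ 2.02e15 Hz (about 2 PHz)
print(f"wavelength ≈ {wavelength_nm:.1f} nm")  # ≈ 148.4 nm (vacuum ultraviolet)
```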
These applications were for a long time impeded by imprecise measurements of the isomeric energy, as laser excitation's exquisite precision makes it difficult to use to search a wide frequency range. There were many investigations, both theoretical and experimental, trying to determine the transition energy precisely and to specify other properties of the isomeric state of 229Th (such as the lifetime and the magnetic moment) before the frequency was accurately measured in 2024.
History
Early measurements were performed via gamma ray spectroscopy, producing the excited state of 229Th, and measuring the difference in emitted gamma ray energies as it decays to either the 229mTh (90%) or 229Th (10%) isomeric states. In 1976, Kroger and Reich sought to understand coriolis force effects in deformed nuclei, and attempted to match thorium's gamma-ray spectrum to theoretical nuclear shape models. To their surprise, the known nuclear states could not be reasonably classified into different total angular momentum quantization levels. They concluded that some states previously identified as 229Th actually arose from a spin- nuclear isomer, 229mTh, with a remarkably low excitation energy.
At that time the energy was inferred to be below 100 eV, purely based on the non-observation of the isomer's direct decay. However, in 1990, further measurements led to the conclusion that the energy is almost certainly below 10 eV, making it one of the lowest known isomeric excitation energies. In the following years, the energy was further constrained to , which was for a long time the accepted energy value.
Improved gamma ray spectroscopy measurements using an advanced high-resolution X-ray microcalorimeter were carried out in 2007, yielding a new value for the transition energy of , corrected to in 2009. This higher energy has two consequences which had not been considered by earlier attempts to observe emitted photons:
Because it is above thorium's first ionization energy, neutral 229mTh will decay radiatively with an extremely low likelihood, and
Because it is above the vacuum ultraviolet cutoff, the produced photons cannot travel through air.
But even knowing the higher energy, most of the searches in the 2010s for light emitted by the isomeric decay failed to observe any signal, pointing towards a potentially strong non-radiative decay channel. A direct detection of photons emitted in the isomeric decay was claimed in 2012 and again in 2018. However, both reports were subject to controversial discussions within the community.
A direct detection of electrons being emitted in the internal conversion decay channel of 229mTh was achieved in 2016. However, at the time the isomer's transition energy could only be weakly constrained to between 6.3 and 18.3 eV. Finally, in 2019, non-optical electron spectroscopy of the internal conversion electrons emitted in the isomeric decay allowed for a determination of the isomer's excitation energy to . However, this value appeared at odds with the 2018 preprint showing that a similar signal as an xenon VUV photon can be shown, but with about less energy and a (retrospectively correct) lifetime. In that paper, 229Th was embedded in SiO2, possibly resulting in an energy shift and altered lifetime, although the states involved are primarily nuclear, shielding them from electronic interactions.
In another 2018 experiment, it was possible to perform a first laser-spectroscopic characterization of the nuclear properties of 229mTh. In this experiment, laser spectroscopy of the 229Th atomic shell was conducted using a 229Th2+ ion cloud with 2% of the ions in the nuclear excited state. This allowed probing for the hyperfine shift induced by the different nuclear spin states of the ground and the isomeric state. In this way, a first experimental value for the magnetic dipole and the electric quadrupole moment of 229mTh could be inferred.
In 2019, the isomer's excitation energy was constrained to based on the direct detection of internal conversion electrons and a secure population of 229mTh from the nuclear ground state was achieved by excitation of the nuclear excited state via synchrotron radiation. Additional measurements by a different group in 2020 produced a figure of ( wavelength). Combining these measurements, the expected transition energy is .
In September 2022, spectroscopy on decaying samples determined the excitation energy to be .
In April 2024, two separate groups finally reported precision laser excitation Th4+ cations doped into ionic crystals (of CaF2 and LiSrAlF6 with additional interstitial F− anions for charge compensation), giving a precise (~1 part per million) measurement of the transition energy. A one-part-per-trillion () measurement soon followed in June 2024, and future high-precision lasers will measure the frequency up to the accuracy of the best atomic clocks.
Thorium-230
230Th is a radioactive isotope of thorium that can be used to date corals and determine ocean current flux. Ionium was a name given early in the study of radioactive elements to the 230Th isotope produced in the decay chain of 238U before it was realized that ionium and thorium are chemically identical. The symbol Io was used for this supposed element. (The name is still used in ionium–thorium dating.)
Thorium-231
231Th has 141 neutrons. It is the decay product of uranium-235. It is found in very small amounts on the earth and has a half-life of 25.5 hours. When it decays, it emits a beta ray and forms protactinium-231. It has a decay energy of 0.39 MeV. It has a mass of 231.0363043 u.
Thorium-232
232Th is the only primordial nuclide of thorium and makes up effectively all of natural thorium, with other isotopes of thorium appearing only in trace amounts as relatively short-lived decay products of uranium and thorium.
The isotope decays by alpha decay with a half-life of 1.405×1010 years, over three times the age of the Earth and approximately the age of the universe.
Its decay chain is the thorium series, eventually ending in lead-208. The remainder of the chain is quick; the longest half-lives in it are 5.75 years for radium-228 and 1.91 years for thorium-228, with all other half-lives totaling less than 15 days.
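The arithmetic behind the half-life figure quoted above is straightforward; the sketch below (illustrative only, with an assumed round value of 4.54 billion years for the age of the Earth) converts the half-life into a decay constant and the fraction of primordial 232Th still remaining.

```python
# Illustrative arithmetic for the 232Th half-life quoted above.
import math

half_life_years = 1.405e10                      # 232Th half-life (from the text)
decay_constant = math.log(2) / half_life_years  # decays per year (per nucleus)

# Fraction of primordial 232Th still remaining after ~4.54 billion years
# (an assumed figure used only for illustration):
age_of_earth_years = 4.54e9
fraction_remaining = math.exp(-decay_constant * age_of_earth_years)

print(f"decay constant     ≈ {decay_constant:.3e} per year")  # ≈ 4.93e-11
print(f"fraction remaining ≈ {fraction_remaining:.2f}")       # ≈ 0.80
```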
232Th is a fertile material able to absorb a neutron and undergo transmutation into the fissile nuclide uranium-233, which is the basis of the thorium fuel cycle.
In the form of Thorotrast, a thorium dioxide suspension, it was used as a contrast medium in early X-ray diagnostics. Thorium-232 is now classified as carcinogenic.
Thorium-233
233Th is an isotope of thorium that decays into protactinium-233 through beta decay. It has a half-life of 21.83 minutes. Traces occur in nature as the result of natural neutron activation of 232Th.
Thorium-234
234Th is an isotope of thorium whose nuclei contain 144 neutrons. 234Th has a half-life of 24.1 days, and when it decays, it emits a beta particle, and in doing so, it transmutes into protactinium-234. 234Th has a mass of 234.0436 atomic mass units, and it has a decay energy of about 270 keV. Uranium-238 usually decays into this isotope of thorium (although in rare cases it can undergo spontaneous fission instead).
References
Isotope masses from:
Isotopic compositions and standard atomic masses from:
Half-life, spin, and isomer data selected from the following sources.
Thorium
Thorium | Isotopes of thorium | Chemistry | 5,898 |
166,365 | https://en.wikipedia.org/wiki/Vorticity%20equation | The vorticity equation of fluid dynamics describes the evolution of the vorticity of a particle of a fluid as it moves with its flow; that is, the local rotation of the fluid (in terms of vector calculus this is the curl of the flow velocity). The governing equation is:where is the material derivative operator, is the flow velocity, is the local fluid density, is the local pressure, is the viscous stress tensor and represents the sum of the external body forces. The first source term on the right hand side represents vortex stretching.
The equation is valid in the absence of any concentrated torques and line forces for a compressible, Newtonian fluid. In the case of incompressible flow (i.e., low Mach number) and isotropic fluids, with conservative body forces, the equation simplifies to the vorticity transport equation given below, where ν is the kinematic viscosity and ∇² is the Laplace operator. Under the further assumption of two-dimensional flow, the equation simplifies further; both simplified forms are shown below.
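Standard forms of these two simplifications, reconstructed here for reference (not quoted verbatim from the source), are, for incompressible flow with conservative body forces,

$$\frac{D\boldsymbol{\omega}}{Dt} = (\boldsymbol{\omega}\cdot\nabla)\mathbf{u} + \nu\nabla^{2}\boldsymbol{\omega}$$

and, under the further assumption of two-dimensional flow (where the vortex-stretching term vanishes),

$$\frac{D\omega}{Dt} = \nu\nabla^{2}\omega$$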
Physical interpretation
The term Dω/Dt on the left-hand side is the material derivative of the vorticity vector ω. It describes the rate of change of vorticity of the moving fluid particle. This change can be attributed to unsteadiness in the flow (∂ω/∂t, the unsteady term) or to the motion of the fluid particle as it moves from one point to another ((u · ∇)ω, the convection term).
The term (ω · ∇)u on the right-hand side describes the stretching or tilting of vorticity due to the flow velocity gradients. Note that (ω · ∇)u is a vector quantity, as ω · ∇ is a scalar differential operator, while ∇u is a nine-element tensor quantity.
The term ω(∇ · u) describes stretching of vorticity due to flow compressibility. It follows from the continuity equation, ∂ρ/∂t + ∇ · (ρu) = 0, which can be written as ∇ · u = (1/v) Dv/Dt, where v = 1/ρ is the specific volume of the fluid element. One can think of ∇ · u as a measure of flow compressibility. Sometimes the negative sign is included in the term.
The term (1/ρ²) ∇ρ × ∇p is the baroclinic term. It accounts for the changes in the vorticity due to the intersection of density and pressure surfaces.
The term ∇ × (∇ · τ / ρ) accounts for the diffusion of vorticity due to the viscous effects.
The term ∇ × (B/ρ) provides for changes due to external body forces. These are forces that are spread over a three-dimensional region of the fluid, such as gravity or electromagnetic forces, as opposed to forces that act only over a surface (like drag on a wall) or a line (like surface tension around a meniscus).
Simplifications
In the case of conservative body forces, ∇ × B = 0.
For a barotropic fluid, ∇ρ × ∇p = 0. This is also true for a constant-density fluid (including an incompressible fluid), where ∇ρ = 0. Note that this is not the same as an incompressible flow, for which the barotropic term cannot be neglected.
There is a distinction between assuming an incompressible fluid (ρ = constant) and an incompressible flow. By the continuity equation, ∂ρ/∂t + ∇ · (ρu) = 0, a constant (non-zero) density implies ∇ · u = 0, whereas a divergence-free velocity field does not imply that ρ is constant: it only requires that the local rate of change of density be balanced by its advection, ∂ρ/∂t + u · ∇ρ = 0. By the ideal gas law, even an adiabatic, chemically homogeneous fluid can change density when the pressure changes, for example in Bernoulli flow, where viscous friction is unimportant at high Reynolds number.
For inviscid fluids, the viscous stress tensor τ is zero.
Thus for an inviscid, barotropic fluid with conservative body forces, the vorticity equation simplifies to the first form given below; alternately, for an incompressible, inviscid fluid with conservative body forces, it reduces to the second form.
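A reconstruction of these two simplified forms, provided for reference (not quoted verbatim from the source):

$$\frac{D\boldsymbol{\omega}}{Dt} = (\boldsymbol{\omega}\cdot\nabla)\mathbf{u} - \boldsymbol{\omega}\,(\nabla\cdot\mathbf{u}) \qquad\text{(inviscid, barotropic, conservative body forces)}$$

$$\frac{D\boldsymbol{\omega}}{Dt} = (\boldsymbol{\omega}\cdot\nabla)\mathbf{u} \qquad\text{(incompressible, inviscid, conservative body forces)}$$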
Additional cases and simplifications, as well as the vorticity equation in turbulence theory and in the context of flows in the oceans and atmosphere, are reviewed in the references.
Derivation
The vorticity equation can be derived from the Navier–Stokes equation for the conservation of angular momentum. In the absence of any concentrated torques and line forces, one obtains:
Now, vorticity is defined as the curl of the flow velocity vector; taking the curl of momentum equation yields the desired equation. The following identities are useful in derivation of the equation:
where is any scalar field.
Tensor notation
The vorticity equation can be expressed in tensor notation using Einstein's summation convention and the Levi-Civita symbol εijk, as shown below.
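The index form itself is not preserved in this text; a common reconstruction for the incompressible case with conservative body forces (an assumption made here for brevity, not a claim about the source's exact form) is:

$$\frac{\partial \omega_i}{\partial t} + u_j \frac{\partial \omega_i}{\partial x_j} = \omega_j \frac{\partial u_i}{\partial x_j} + \nu \frac{\partial^2 \omega_i}{\partial x_j \partial x_j}, \qquad \omega_i = \epsilon_{ijk}\frac{\partial u_k}{\partial x_j}$$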
In specific sciences
Atmospheric sciences
In the atmospheric sciences, the vorticity equation can be stated in terms of the absolute vorticity of air with respect to an inertial frame, or of the vorticity with respect to the rotation of the Earth. The absolute version is
Here, is the polar () component of the vorticity, is the atmospheric density, , , and w are the components of wind velocity, and is the 2-dimensional (i.e. horizontal-component-only) del.
See also
Vorticity
Barotropic vorticity equation
Vortex stretching
Burgers vortex
References
Further reading
Equations of fluid dynamics
Transport phenomena | Vorticity equation | Physics,Chemistry,Engineering | 1,115 |
21,787,394 | https://en.wikipedia.org/wiki/Pathogenomics | Pathogenomics is a field which uses high-throughput screening technology and bioinformatics to study encoded microbe resistance, as well as virulence factors (VFs), which enable a microorganism to infect a host and possibly cause disease. This includes studying genomes of pathogens which cannot be cultured outside of a host. In the past, researchers and medical professionals found it difficult to study and understand pathogenic traits of infectious organisms. With newer technology, pathogen genomes can be identified and sequenced in a much shorter time and at a lower cost, thus improving the ability to diagnose, treat, and even predict and prevent pathogenic infections and disease. It has also allowed researchers to better understand genome evolution events - gene loss, gain, duplication, rearrangement - and how those events impact pathogen resistance and ability to cause disease. This influx of information has created a need for bioinformatics tools and databases to analyze and make the vast amounts of data accessible to researchers, and it has raised ethical questions about the wisdom of reconstructing previously extinct and deadly pathogens in order to better understand virulence.
History
During the earlier times when genomics was being studied, scientists found it challenging to sequence genetic information. The field began to explode in 1977 when Fred Sanger, PhD, along with his colleagues, sequenced the DNA-based genome of a bacteriophage, using a method now known as the Sanger Method. The Sanger Method for sequencing DNA exponentially advanced molecular biology and directly led to the ability to sequence genomes of other organisms, including the complete human genome.
The Haemophilus influenzae genome was one of the first organism genomes sequenced, in 1995, by J. Craig Venter and Hamilton Smith using whole genome shotgun sequencing. Since then, newer and more efficient high-throughput sequencing methods, such as Next Generation Genomic Sequencing (NGS) and Single-Cell Genomic Sequencing, have been developed. While the Sanger method is able to sequence one DNA fragment at a time, NGS technology can sequence thousands of sequences at a time. With the ability to rapidly sequence DNA, new insights developed, such as the discovery that since prokaryotic genomes are more diverse than originally thought, it is necessary to sequence multiple strains in a species rather than only a few. E. coli was an example of why this is important, with genes encoding virulence factors in two strains of the species differing by at least thirty percent. Such knowledge, along with more thorough study of genome gain, loss, and change, is giving researchers valuable insight into how pathogens interact in host environments and how they are able to infect hosts and cause disease.
Pathogen Bioinformatics
With this high influx of new information, there has arisen a higher demand for bioinformatics so scientists can properly analyze the new data, and software and other tools have been developed for this purpose. Also, as of 2008, the amount of stored sequences was doubling every 18 months, making urgent the need for better ways to organize data and aid research. In response, many publicly accessible databases and other resources have been created, including the NCBI pathogen detection program, the Pathosystems Resource Integration Centre (PATRIC), Pathogenwatch, the Virulence Factor Database (VFDB) of pathogenic bacteria, and the Victors database of virulence factors in human and animal pathogens. As of 2022, the most-sequenced pathogens were Salmonella enterica and E. coli/Shigella. The sequencing technologies, bioinformatics tools, databases, statistics related to pathogen genomes, and applications in forensics, epidemiology, clinical practice and food safety have been extensively reviewed.
Microbe analysis
Pathogens may be prokaryotic (archaea or bacteria), single-celled eukarya or viruses. Prokaryotic genomes have typically been easier to sequence due to their smaller genome size compared to Eukarya. Due to this, there is a bias in reporting pathogenic bacterial behavior. Regardless of this bias in reporting, many of the dynamic genomic events are similar across all the types of pathogen organisms. Genomic evolution occurs via gene gain, gene loss, and genome rearrangement, and these "events" are observed in multiple pathogen genomes, with some bacterial pathogens experiencing all three. Pathogenomics does not focus exclusively on understanding pathogen-host interactions, however. Insight into individual or cooperative pathogen behavior provides knowledge of the development or inheritance of pathogen virulence factors. Through a deeper understanding of the small sub-units that cause infection, it may be possible to develop novel therapeutics that are efficient and cost-effective.
Cause and analysis of genomic diversity
Dynamic genomes with high plasticity are necessary to allow pathogens, especially bacteria, to survive in changing environments. With the assistance of high throughput sequencing methods and in silico technologies, it is possible to detect, compare and catalogue many of these dynamic genomic events. Genomic diversity is important when detecting and treating a pathogen since these events can change the function and structure of the pathogen. There is a need to analyze more than a single genome sequence of a pathogen species to understand pathogen mechanisms. Comparative genomics is a methodology which allows scientists to compare the genomes of different species and strains. There are several examples of successful comparative genomics studies, among them the analysis of Listeria and Escherichia coli. Some studies have attempted to address the difference between pathogenic and non-pathogenic microbes. This inquiry proves to be difficult, however, since a single bacterial species can have many strains, and the genomic content of each of these strains varies.
Evolutionary dynamics
Varying microbe strains and genomic content are caused by different forces, including three specific evolutionary events which have an impact on pathogen resistance and ability to cause disease: gene gain, gene loss, and genome rearrangement.
Gene loss and genome decay
Gene loss occurs when genes are deleted. The reason why this occurs is still not fully understood, though it most likely involves adaptation to a new environment or ecological niche. Some researchers believe gene loss may actually increase fitness and survival among pathogens. In a new environment, some genes may become unnecessary for survival, and so mutations are eventually "allowed" on those genes until they become inactive "pseudogenes." These pseudogenes are observed in organisms such as Shigella flexneri, Salmonella enterica, and Yersinia pestis. Over time, the pseudogenes are deleted, and the organisms become fully dependent on their host as either endosymbionts or obligate intracellular pathogens, as is seen in Buchnera, Mycobacterium leprae, and Chlamydia trachomatis. These deleted genes are also called Anti-virulence genes (AVG) since it is thought they may have prevented the organism from becoming pathogenic. In order to become more virulent, infect a host, and remain alive, the pathogen had to shed those AVGs. The reverse process can happen as well, as was seen during analysis of Listeria strains, which showed that a reduced genome size led to a non-pathogenic Listeria strain from a pathogenic strain. Systems have been developed to detect these pseudogenes/AVGs in a genome sequence.
Gene gain and duplication
One of the key forces driving gene gain is thought to be horizontal (lateral) gene transfer (LGT). It is of particular interest in microbial studies because these mobile genetic elements may introduce virulence factors into a new genome. A comparative study conducted by Gill et al. in 2005 postulated that LGT may have been the cause for pathogen variations between Staphylococcus epidermidis and Staphylococcus aureus. There still, however, remains skepticism about the frequency of LGT, its identification, and its impact. New and improved methodologies have been engaged, especially in the study of phylogenetics, to validate the presence and effect of LGT. Gene gain and gene duplication events are balanced by gene loss, such that despite their dynamic nature, the genome of a bacterial species remains approximately the same size.
Genome rearrangement
Mobile genetic insertion sequences can play a role in genome rearrangement activities. Pathogens that do not live in an isolated environment have been found to contain a large number of insertion sequence elements and various repetitive segments of DNA. The combination of these two genetic elements is thought to help mediate homologous recombination. Pathogens such as Burkholderia mallei and Burkholderia pseudomallei have been shown to exhibit genome-wide rearrangements due to insertion sequences and repetitive DNA segments. At this time, no studies demonstrate genome-wide rearrangement events directly giving rise to pathogenic behavior in a microbe. This does not mean it is not possible. Genome-wide rearrangements do, however, contribute to the plasticity of the bacterial genome, which may prime the conditions for other factors to introduce, or lose, virulence factors.
Single-nucleotide polymorphisms
Single Nucleotide Polymorphisms, or SNPs, allow for a wide array of genetic variation among humans as well as pathogens. They allow researchers to estimate a variety of factors: the effects of environmental toxins, how different treatment methods affect the body, and what causes someone's predisposition to illnesses. SNPs play a key role in understanding how and why mutations occur. SNPs also allows for scientists to map genomes and analyze genetic information.
Pan and core genomes
The most recent definition of a bacterial species comes from the pre-genomic era. In 1987, it was proposed that bacterial strains showing >70% DNA·DNA re-association and sharing characteristic phenotypic traits should be considered to be strains of the same species. The diversity within pathogen genomes makes it difficult to identify the total number of genes that are associated within all strains of a pathogen species. It has been thought that the total number of genes associated with a single pathogen species may be unlimited, although some groups are attempting to derive a more empirical value. For this reason, it was necessary to introduce the concept of pan-genomes and core genomes. Pan-genome and core genome literature also tends to have a bias towards reporting on prokaryotic pathogenic organisms. Caution may need to be exercised when extending the definition of a pan-genome or a core-genome to the other pathogenic organisms because there is no formal evidence of the properties of these pan-genomes.
A core genome is the set of genes found across all strains of a pathogen species. A pan-genome is the entire gene pool for that pathogen species, and includes genes that are not shared by all strains. Pan-genomes may be open or closed depending on whether comparative analysis of multiple strains reveals no new genes (closed) or many new genes (open) compared to the core genome for that pathogen species. In the open pan-genome, genes may be further characterized as dispensable or strain specific. Dispensable genes are those found in more than one strain, but not in all strains, of a pathogen species. Strain specific genes are those found only in one strain of a pathogen species. The differences in pan-genomes are reflections of the lifestyle of the organism. For example, Streptococcus agalactiae, which exists in diverse biological niches, has a broader pan-genome when compared with the more environmentally isolated Bacillus anthracis. Comparative genomics approaches are also being used to understand more about the pan-genome. Recent discoveries show that the number of new species continues to grow: with an estimated 10^31 bacteriophages on the planet infecting 10^24 bacteria per second, the continuous flow of genetic material being exchanged is difficult to imagine.
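To make the core/dispensable/strain-specific distinction concrete, here is a minimal illustrative sketch using made-up gene identifiers (not real data from any study):

```python
# Toy example: partition genes across strains into core, dispensable, and strain-specific sets.
strains = {
    "strain_A": {"geneA", "geneB", "geneC", "geneD"},
    "strain_B": {"geneA", "geneB", "geneC", "geneE"},
    "strain_C": {"geneA", "geneB", "geneF"},
}

pan_genome = set.union(*strains.values())          # every gene seen in any strain
core_genome = set.intersection(*strains.values())  # genes present in all strains

# Dispensable: in more than one strain but not all; strain-specific: in exactly one strain.
counts = {g: sum(g in s for s in strains.values()) for g in pan_genome}
dispensable = {g for g, c in counts.items() if 1 < c < len(strains)}
strain_specific = {g for g, c in counts.items() if c == 1}

print("pan:", sorted(pan_genome))
print("core:", sorted(core_genome))
print("dispensable:", sorted(dispensable))
print("strain-specific:", sorted(strain_specific))
```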
Virulence factors
Multiple genetic elements of human-affecting pathogens contribute to the transfer of virulence factors: plasmids, pathogenicity islands, prophages, bacteriophages, transposons, and integrative and conjugative elements. Pathogenicity islands and their detection are the focus of several bioinformatics efforts involved in pathogenomics. It is a common belief that "environmental bacterial strains" lack the capacity to harm or damage humans. However, recent studies show that bacteria from aquatic environments have given rise to pathogenic strains through evolution. This gives the bacteria a wider range of genetic traits and can pose a potential threat to humans, particularly where greater resistance towards antibiotics has developed.
Microbe-microbe interactions
Microbe-host interactions tend to overshadow the consideration of microbe-microbe interactions. Microbe-microbe interactions though can lead to chronic states of infirmity that are difficult to understand and treat.
Biofilms
Biofilms are an example of microbe-microbe interactions and are thought to be associated with up to 80% of human infections. Recently it has been shown that there are specific genes and cell surface proteins involved in the formation of biofilm. These genes and also surface proteins may be characterized through in silico methods to form an expression profile of biofilm-interacting bacteria. This expression profile may be used in subsequent analysis of other microbes to predict biofilm microbe behaviour, or to understand how to dismantle biofilm formation.
Host microbe analysis
Pathogens have the ability to adapt and manipulate host cells, taking full advantage of a host cell's cellular processes and mechanisms.
A microbe may be influenced by hosts to either adapt to its new environment or learn to evade it. Understanding these behaviours will provide beneficial insight for potential therapeutics. The most detailed outline of host-microbe interaction initiatives is provided by the Pathogenomics European Research Agenda. Its report emphasizes the following features:
Microarray analysis of host and microbe gene expression during infection. This is important for identifying the expression of virulence factors that allow a pathogen to survive a host's defense mechanism. Pathogens tend to undergo an assortment of changes in order to subvert the host's immune system, in some cases favoring a hypervariable genome state. The genomic expression studies will be complemented with protein-protein interaction network studies.
Using RNA interference (RNAi) to identify host cell functions in response to infections. Infection depends on the balance between the characteristics of the host cell and the pathogen cell. In some cases, there can be an overactive host response to infection, such as in meningitis, which can overwhelm the host's body. Using RNA, it will be possible to more clearly identify how a host cell defends itself during times of acute or chronic infection. This has also been applied successfully in Drosophila.
Not all microbe interactions in a host environment are malicious. Commensal flora, which exist in various environments in animals and humans, may actually help combat microbial infections. The human flora, such as that of the gut, is home to a myriad of microbes.
The diverse community within the gut has been heralded to be vital for human health. There are a number of projects under way to better understand the ecosystems of the gut. The sequence of commensal Escherichia coli strain SE11, for example, has already been determined from the faecal matter of a healthy human and promises to be the first of many studies. Through genomic analysis and also subsequent protein analysis, a better understanding of the beneficial properties of commensal flora will be investigated in hopes of understanding how to build a better therapeutic.
Eco-evo perspective
The "eco-evo" perspective on pathogen-host interactions emphasizes the influences ecology and the environment on pathogen evolution. The dynamic genomic factors such as gene loss, gene gain and genome rearrangement, are all strongly influenced by changes in the ecological niche where a particular microbial strain resides. Microbes may switch from being pathogenic and non-pathogenic due to changing environments. This was demonstrated during studies of the plague, Yersinia pestis, which apparently evolved from a mild gastrointestinal pathogen to a very highly pathogenic microbe through dynamic genomic events. In order for colonization to occur, there must be changes in biochemical makeup to aid survival in a variety of environments. This is most likely due to a mechanism allowing the cell to sense changes within the environment, thus influencing change in gene expression. Understanding how these strain changes occur from being low or non-pathogenic to being highly pathogenic and vice versa may aid in developing novel therapeutics for microbial infections.
Applications
Human health has greatly improved and the mortality rate has declined substantially since the Second World War because of improved hygiene arising from changing public health regulations, as well as more readily available vaccines and antibiotics. Pathogenomics will allow scientists to expand what they know about pathogenic and non-pathogenic microbes, thus allowing for new and improved vaccines. Pathogenomics also has wider implications, including preventing bioterrorism.
Reverse vaccinology
Reverse vaccinology is relatively new. While research is still being conducted, there have been breakthroughs with pathogens such as Streptococcus and the meningococcus. Traditional methods of vaccine production, such as biochemical and serological approaches, are laborious and unreliable, and they require the pathogens to be grown in vitro to be effective. New advances in genomics help predict nearly all variations of pathogens, thus enabling advances in vaccines. Protein-based vaccines are being developed to combat resistant pathogens such as Staphylococcus and Chlamydia.
Countering bioterrorism
In 2005, the sequence of the 1918 Spanish influenza virus was completed. Accompanied by phylogenetic analysis, it was possible to supply a detailed account of the virus' evolution and behavior, in particular its adaptation to humans. Following the sequencing, the pathogen was also reconstructed; when introduced into mice, it proved to be extremely deadly. The 2001 anthrax attacks showed that bioterrorism is a real rather than imagined threat, and bioterrorism was anticipated during the Iraq war, with soldiers being inoculated against a possible smallpox attack. Using technologies and insight gained from reconstruction of the Spanish influenza virus, it may be possible to prevent future deliberately planted outbreaks of disease. There is a strong ethical concern, however, as to whether the resurrection of old viruses is necessary and whether it does more harm than good. The best avenue for countering such threats is coordinating with organizations which provide immunizations; increased awareness and participation would greatly decrease the effectiveness of a potential epidemic. An addition to this measure would be to monitor natural water reservoirs as a basis for preventing an attack or outbreak. Overall, communication between laboratories and large organizations, such as the Global Outbreak Alert and Response Network (GOARN), can lead to early detection and prevent outbreaks.
See also
References
Microbiology
Pathogen genomics | Pathogenomics | Chemistry,Biology | 3,900 |
6,230,931 | https://en.wikipedia.org/wiki/Expander%20mixing%20lemma | The expander mixing lemma intuitively states that the edges of certain -regular graphs are evenly distributed throughout the graph. In particular, the number of edges between two vertex subsets and is always close to the expected number of edges between them in a random -regular graph, namely .
d-Regular Expander Graphs
Define an (n, d, λ)-graph to be a d-regular graph G on n vertices such that all of the eigenvalues of its adjacency matrix AG except one have absolute value at most λ. The d-regularity of the graph guarantees that its largest eigenvalue in absolute value is d. In fact, the all-1's vector is an eigenvector of AG with eigenvalue d, and the eigenvalues of the adjacency matrix will never exceed the maximum degree of G in absolute value.
If we fix d and λ, then (n, d, λ)-graphs form a family of expander graphs with a constant spectral gap.
Statement
Let G = (V, E) be an (n, d, λ)-graph. For any two subsets S, T ⊆ V, let e(S, T) be the number of edges between S and T (counting edges contained in the intersection of S and T twice). Then the bound below holds.
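The inequality itself is not preserved in this text; its standard form, reconstructed here for reference, is:

$$\left| e(S,T) - \frac{d\,|S|\,|T|}{n} \right| \le \lambda \sqrt{|S|\,|T|}$$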
Tighter Bound
Using similar techniques, we can in fact show the tighter bound given below.
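A reconstruction of the tighter bound, provided for reference (the precise form in the source is not preserved here):

$$\left| e(S,T) - \frac{d\,|S|\,|T|}{n} \right| \le \lambda \sqrt{|S|\,|T|\left(1 - \frac{|S|}{n}\right)\left(1 - \frac{|T|}{n}\right)}$$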
Biregular Graphs
For biregular graphs, we have the following variation, where we take to be the second largest eigenvalue.
Let be a bipartite graph such that every vertex in is adjacent to vertices of and every vertex in is adjacent to vertices of . Let with and . Let . Then
Note that is the largest eigenvalue of .
Proofs
Proof of First Statement
Let be the adjacency matrix of and let be the eigenvalues of (these eigenvalues are real because is symmetric). We know that with corresponding eigenvector , the normalization of the all-1's vector. Define and note that . Because is symmetric, we can pick eigenvectors of corresponding to eigenvalues so that forms an orthonormal basis of .
Let be the matrix of all 1's. Note that is an eigenvector of with eigenvalue and each other , being perpendicular to , is an eigenvector of with eigenvalue 0. For a vertex subset , let be the column vector with coordinate equal to 1 if and 0 otherwise. Then,
.
Let . Because and share eigenvectors, the eigenvalues of are . By the Cauchy-Schwarz inequality, we have that . Furthermore, because is self-adjoint, we can write
.
This implies that and .
Proof Sketch of Tighter Bound
To show the tighter bound above, we instead consider the vectors and , which are both perpendicular to . We can expand
because the other two terms of the expansion are zero. The first term is equal to , so we find that
We can bound the right hand side by using the same methods as in the earlier proof.
Applications
The expander mixing lemma can be used to upper bound the size of an independent set within a graph. In particular, the size of an independent set in an (n, d, λ)-graph is at most λn/d. This is proved by letting T = S in the statement above and using the fact that e(S, S) = 0 for an independent set S.
An additional consequence is that, if G is an (n, d, λ)-graph, then its chromatic number is at least d/λ. This is because, in a valid graph coloring, the set of vertices of a given color is an independent set. By the above fact, each independent set has size at most λn/d, so at least d/λ such sets are needed to cover all of the vertices.
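As a quick numerical illustration of the basic bound used in these applications (a sketch, not part of the source; it assumes the third-party networkx and numpy libraries and uses an arbitrary small random regular graph with arbitrary vertex subsets):

```python
# Check |e(S,T) - d|S||T|/n| <= lambda * sqrt(|S||T|) on a small random d-regular graph.
import networkx as nx
import numpy as np

n, d = 60, 6
G = nx.random_regular_graph(d, n, seed=0)
A = nx.to_numpy_array(G)

eigenvalues = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
lam = eigenvalues[1]                 # second-largest eigenvalue in absolute value

S = set(range(0, 20))                # arbitrary vertex subsets
T = set(range(15, 45))
e_ST = sum(A[i, j] for i in S for j in T)   # counts edges inside S ∩ T twice

expected = d * len(S) * len(T) / n
bound = lam * np.sqrt(len(S) * len(T))
print(abs(e_ST - expected), "<=", bound, ":", abs(e_ST - expected) <= bound)
```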
A second application of the expander mixing lemma is to provide an upper bound on the maximum possible size of an independent set within a polarity graph. Given a finite projective plane with a polarity the polarity graph is a graph where the vertices are the points a of , and vertices and are connected if and only if In particular, if has order then the expander mixing lemma can show that an independent set in the polarity graph can have size at most a bound proved by Hobart and Williford.
Converse
Bilu and Linial showed that a converse holds as well: if a -regular graph satisfies that for any two subsets with we have
then its second-largest (in absolute value) eigenvalue is bounded by .
Generalization to hypergraphs
Friedman and Wigderson proved the following generalization of the mixing lemma to hypergraphs.
Let be a -uniform hypergraph, i.e. a hypergraph in which every "edge" is a tuple of vertices. For any choice of subsets of vertices,
Notes
References
F.C. Bussemaker, D.M. Cvetković, J.J. Seidel. Graphs related to exceptional root systems, Combinatorics (Proc. Fifth Hungarian Colloq., Keszthely, 1976), volume 18 of Colloq. Math. Soc. János Bolyai (1978), 185-191.
Theoretical computer science
Lemmas in graph theory
Algebraic graph theory | Expander mixing lemma | Mathematics | 1,025 |
43,887,987 | https://en.wikipedia.org/wiki/Mobility%20analogy | The mobility analogy, also called admittance analogy or Firestone analogy, is a method of representing a mechanical system by an analogous electrical system. The advantage of doing this is that there is a large body of theory and analysis techniques concerning complex electrical systems, especially in the field of filters. By converting to an electrical representation, these tools in the electrical domain can be directly applied to a mechanical system without modification. A further advantage occurs in electromechanical systems: Converting the mechanical part of such a system into the electrical domain allows the entire system to be analysed as a unified whole.
The mathematical behaviour of the simulated electrical system is identical to the mathematical behaviour of the represented mechanical system. Each element in the electrical domain has a corresponding element in the mechanical domain with an analogous constitutive equation. All laws of circuit analysis, such as Kirchhoff's laws, that apply in the electrical domain also apply to the mechanical mobility analogy.
The mobility analogy is one of the two main mechanical–electrical analogies used for representing mechanical systems in the electrical domain, the other being the impedance analogy. The roles of voltage and current are reversed in these two methods, and the electrical representations produced are the dual circuits of each other. The mobility analogy preserves the topology of the mechanical system when transferred to the electrical domain whereas the impedance analogy does not. On the other hand, the impedance analogy preserves the analogy between electrical impedance and mechanical impedance whereas the mobility analogy does not.
Applications
The mobility analogy is widely used to model the behaviour of mechanical filters. These are filters that are intended for use in an electronic circuit, but work entirely by mechanical vibrational waves. Transducers are provided at the input and output of the filter to convert between the electrical and mechanical domains.
Another very common use is in the field of audio equipment, such as loudspeakers. Loudspeakers consist of a transducer and mechanical moving parts. Acoustic waves themselves are waves of mechanical motion: of air molecules or some other fluid medium.
Elements
Before an electrical analogy can be developed for a mechanical system, it must first be described as an abstract mechanical network. The mechanical system is broken down into a number of ideal elements each of which can then be paired with an electrical analogue. The symbols used for these mechanical elements on network diagrams are shown in the following sections on each individual element.
The mechanical analogies of lumped electrical elements are also lumped elements, that is, it is assumed that the mechanical component possessing the element is small enough that the time taken by mechanical waves to propagate from one end of the component to the other can be neglected. Analogies can also be developed for distributed elements such as transmission lines but the greatest benefits are with lumped-element circuits. Mechanical analogies are required for the three passive electrical elements, namely, resistance, inductance and capacitance. What these analogies are is determined by what mechanical property is chosen to represent voltage, and what property is chosen to represent current. In the mobility analogy the analogue of voltage is velocity and the analogue of current is force. Mechanical impedance is defined as the ratio of force to velocity, thus it is not analogous to electrical impedance. Rather, it is the analogue of electrical admittance, the inverse of impedance. Mechanical admittance is more commonly called mobility, hence the name of the analogy.
Resistance
The mechanical analogy of electrical resistance is the loss of energy of a moving system through such processes as friction. A mechanical component analogous to a resistor is a shock absorber and the property analogous to inverse resistance (conductance) is damping (inverse, because electrical impedance is the analogy of the inverse of mechanical impedance). A resistor is governed by the constitutive equation of Ohm's law, $$i = Gv.$$
The analogous equation in the mechanical domain is, $$F = R_\mathrm{m} u,$$
where,
G = 1/R is conductance
R is resistance
v is voltage
i is current
Rm is mechanical resistance, or damping
F is force
u is velocity induced by the force.
Electrical conductance represents the real part of electrical admittance. Likewise, mechanical resistance is the real part of mechanical impedance.
Inductance
The mechanical analogy of inductance in the mobility analogy is compliance. It is more common in mechanics to discuss stiffness, the inverse of compliance. A mechanical component analogous to an inductor is a spring. An inductor is governed by the constitutive equation, $$v = L\frac{di}{dt}.$$
The analogous equation in the mechanical domain is a form of Hooke's law, $$u = C_\mathrm{m}\frac{dF}{dt},$$
where,
L is inductance
t is time
Cm = 1/S is mechanical compliance
S is stiffness
The impedance of an inductor is purely imaginary and is given by, $$Z = j\omega L.$$
The analogous mechanical admittance is given by, $$Y_\mathrm{m} = j\omega C_\mathrm{m},$$
where,
Z is electrical impedance
j is the imaginary unit
ω is angular frequency
Ym is mechanical admittance.
Capacitance
The mechanical analogy of capacitance in the mobility analogy is mass. A mechanical component analogous to a capacitor is a large, rigid weight or a mechanical inerter.
A capacitor is governed by the constitutive equation, $$i = C\frac{dv}{dt}.$$
The analogous equation in the mechanical domain is Newton's second law of motion, $$F = M\frac{du}{dt},$$
where,
C is capacitance
M is mass
The impedance of a capacitor is purely imaginary and is given by, $$Z = \frac{1}{j\omega C}.$$
The analogous mechanical admittance is given by, $$Y_\mathrm{m} = \frac{1}{j\omega M}.$$
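Collecting the element correspondences from the three sections above (an illustrative helper only; the function and variable names are not from the source):

def mobility_analogy(mass=None, stiffness=None, damping=None):
    """Map lumped mechanical parameters (SI units) to their mobility-analogy
    electrical equivalents: mass M -> capacitance C, compliance 1/S -> inductance L,
    damping Rm -> conductance G (so resistance R = 1/Rm)."""
    elements = {}
    if mass is not None:
        elements["C_farads"] = mass              # C = M
    if stiffness is not None:
        elements["L_henries"] = 1.0 / stiffness  # L = Cm = 1/S
    if damping is not None:
        elements["R_ohms"] = 1.0 / damping       # R = 1/Rm
    return elements

# Example: a 10 g mass on a 400 N/m spring with 0.5 N.s/m of damping
print(mobility_analogy(mass=0.010, stiffness=400.0, damping=0.5))
# {'C_farads': 0.01, 'L_henries': 0.0025, 'R_ohms': 2.0}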
Inertance
A curious difficulty arises with mass as the analogy of an electrical element. It is connected with the fact that in mechanical systems the velocity of the mass (and more importantly, its acceleration) is always measured against some fixed reference frame, usually the earth. Considered as a two-terminal system element, the mass has one terminal at velocity u, analogous to electric potential. The other terminal is at zero velocity and is analogous to electric ground potential. Thus, mass cannot be used as the analogue of an ungrounded capacitor.
This led Malcolm C. Smith of the University of Cambridge in 2002 to define a new energy storing element for mechanical networks called inertance. A component that possesses inertance is called an inerter. The two terminals of an inerter, unlike a mass, are allowed to have two different, arbitrary velocities and accelerations. The constitutive equation of an inerter is given by, $$F = B\frac{d(\Delta u)}{dt},$$
where,
F is an equal and opposite force applied to the two terminals
B is the inertance
u1 and u2 are the velocities at terminals 1 and 2 respectively
Δu = u2 − u1
Inertance has the same units as mass (kilograms in the SI system) and the name indicates its relationship to inertia. Smith did not just define a network theoretic element, he also suggested a construction for a real mechanical component and made a small prototype. Smith's inerter consists of a plunger able to slide in or out of a cylinder. The plunger is connected to a rack and pinion gear which drives a flywheel inside the cylinder. There can be two counter-rotating flywheels in order to prevent a torque developing. Energy provided in pushing the plunger in will be returned when the plunger moves in the opposite direction, hence the device stores energy rather than dissipates it just like a block of mass. However, the actual mass of the inerter can be very small, an ideal inerter has no mass. Two points on the inerter, the plunger and the cylinder case, can be independently connected to other parts of the mechanical system with neither of them necessarily connected to ground.
Smith's inerter has found an application in Formula One racing where it is known as the J-damper. It is used as an alternative to the now banned tuned mass damper and forms part of the vehicle suspension. It may have been first used secretly by McLaren in 2005 following a collaboration with Smith. Other teams are now believed to be using it. The inerter is much smaller than the tuned mass damper and smoothes out contact patch load variations on the tyres. Smith also suggests using the inerter to reduce machine vibration.
The difficulty with mass in mechanical analogies is not limited to the mobility analogy. A corresponding problem also occurs in the impedance analogy, but in that case it is ungrounded inductors, rather than capacitors, that cannot be represented with the standard elements.
Resonator
A mechanical resonator consists of both a mass element and a compliance element. Mechanical resonators are analogous to electrical LC circuits consisting of inductance and capacitance. Real mechanical components unavoidably have both mass and compliance so it is a practical proposition to make resonators as a single component. In fact, it is more difficult to make a pure mass or pure compliance as a single component. A spring can be made with a certain compliance and mass minimised, or a mass can be made with compliance minimised, but neither can be eliminated altogether. Mechanical resonators are a key component of mechanical filters.
Generators
Analogues exist for the active electrical elements of the voltage source and the current source (generators). The mechanical analogue in the mobility analogy of the constant current generator is the constant force generator. The mechanical analogue of the constant voltage generator is the constant velocity generator.
An example of a constant force generator is the constant-force spring. An example of a practical constant velocity generator is a lightly loaded powerful machine, such as a motor, driving a belt. This is analogous to a real voltage source, such as a battery, which remains near constant-voltage with load provided that the load resistance is much higher than the battery internal resistance.
Transducers
Electromechanical systems require transducers to convert between the electrical and mechanical domains. They are analogous to two-port networks and like those can be described by a pair of simultaneous equations and four arbitrary parameters. There are numerous possible representations, but the form most applicable to the mobility analogy has the arbitrary parameters in units of admittance. In matrix form (with the electrical side taken as port 1) this representation is,
The element is the short circuit mechanical admittance, that is, the admittance presented by the mechanical side of the transducer when zero voltage (short circuit) is applied to the electrical side. The element , conversely, is the unloaded electrical admittance, that is, the admittance presented to the electrical side when the mechanical side is not driving a load (zero force). The remaining two elements, and , describe the transducer forward and reverse transfer functions respectively. They are both analogous to transfer admittances and are hybrid ratios of an electrical and mechanical quantity.
Transformers
The mechanical analogy of a transformer is a simple machine such as a pulley or a lever. The force applied to the load can be greater or less than the input force depending on whether the mechanical advantage of the machine is greater or less than unity respectively. Mechanical advantage is analogous to the inverse of transformer turns ratio in the mobility analogy. A mechanical advantage less than unity is analogous to a step-up transformer and greater than unity is analogous to a step-down transformer.
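A sketch of this correspondence in symbols (generic notation, not taken from the source): for an ideal lever or pulley with mechanical advantage $a$ and an ideal transformer with turns ratio $N$,
$$F_{\text{out}} = a\,F_{\text{in}}, \quad u_{\text{out}} = \frac{u_{\text{in}}}{a} \qquad\longleftrightarrow\qquad i_2 = \frac{i_1}{N}, \quad v_2 = N v_1,$$
so with force analogous to current and velocity analogous to voltage, the machine behaves as a transformer with $N = 1/a$: a mechanical advantage below unity gives $N > 1$ (step-up) and above unity gives $N < 1$ (step-down), as stated above.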
Power and energy equations
Examples
Simple resonant circuit
The figure shows a mechanical arrangement of a platform of mass M that is suspended above the substrate by a spring of stiffness S and a damper of resistance Rm. The mobility analogy equivalent circuit is shown to the right of this arrangement and consists of a parallel resonant circuit. This system has a resonant frequency, and may have a natural frequency of oscillation if not too heavily damped.
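A numerical sketch of this example (the component values below are invented for illustration): mapping the mass, spring and damper to the parallel RLC equivalent gives the same resonant frequency and quality factor in both domains.

import math

# Illustrative mechanical values (not from the source)
M  = 0.5     # platform mass, kg
S  = 2000.0  # spring stiffness, N/m
Rm = 4.0     # damper resistance, N.s/m

# Mobility-analogy equivalent parallel RLC elements
C = M          # farads
L = 1.0 / S    # henries
R = 1.0 / Rm   # ohms

f_mech = math.sqrt(S / M) / (2 * math.pi)        # natural frequency of the platform
f_elec = 1.0 / (2 * math.pi * math.sqrt(L * C))  # resonant frequency of the RLC circuit
Q_mech = math.sqrt(S * M) / Rm                   # mechanical quality factor
Q_elec = R * math.sqrt(C / L)                    # parallel-RLC quality factor

print(f"f: {f_mech:.2f} Hz vs {f_elec:.2f} Hz, Q: {Q_mech:.2f} vs {Q_elec:.2f}")  # identical pairs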
Advantages and disadvantages
The principal advantage of the mobility analogy over its alternative, the impedance analogy, is that it preserves the topology of the mechanical system. Elements that are in series in the mechanical system are in series in the electrical equivalent circuit and elements in parallel in the mechanical system remain in parallel in the electrical equivalent.
The principal disadvantage of the mobility analogy is that it does not maintain the analogy between electrical and mechanical impedance. Mechanical impedance is represented as an electrical admittance and a mechanical resistance is represented as an electrical conductance in the electrical equivalent circuit. Force is not analogous to voltage (generator voltages are often called electromotive force), but rather, it is analogous to current.
History
Historically, the impedance analogy was in use long before the mobility analogy. Mechanical admittance and the associated mobility analogy were introduced by F. A. Firestone in 1932 to overcome the issue of preserving topologies. W. Hähnle independently had the same idea in Germany. Horace M. Trent developed a treatment for analogies in general from a mathematical graph theory perspective and introduced a new analogy of his own.
References
Bibliography
Atkins, Tony; Escudier, Marcel, A Dictionary of Mechanical Engineering, Oxford University Press, 2013 .
Beranek, Leo Leroy; Mellow, Tim J., Acoustics: Sound Fields and Transducers, Academic Press, 2012 .
Busch-Vishniac, Ilene J., Electromechanical Sensors and Actuators, Springer Science & Business Media, 1999 .
Carr, Joseph J., RF Components and Circuits, Newnes, 2002 .
Debnath, M. C.; Roy, T., "Transfer scattering matrix of non-uniform surface acoustic wave transducers", International Journal of Mathematics and Mathematical Sciences, vol. 10, iss. 3, pp. 563–581, 1987.
De Groote, Steven, "J-dampers in Formula One", F1 Technical, 27 September 2008.
Eargle, John, Loudspeaker Handbook, Kluwer Academic Publishers, 2003 .
Fahy, Frank J.; Gardonio, Paolo, Sound and Structural Vibration: Radiation, Transmission and Response, Academic Press, 2007 .
Findeisen, Dietmar, System Dynamics and Mechanical Vibrations, Springer, 2000 .
Firestone, Floyd A., "A new analogy between mechanical and electrical systems", Journal of the Acoustical Society of America, vol. 4, pp. 249–267 (1932–1933).
Hähnle, W., "Die Darstellung elektromechanischer Gebilde durch rein elektrische Schaltbilder", Wissenschaftliche Veröffentlichungen aus dem Siemens-Konzern, vol. 1, iss. 11, pp. 1–23, 1932.
Kleiner, Mendel, Electroacoustics, CRC Press, 2013 .
Pierce, Allan D., Acoustics: an Introduction to its Physical Principles and Applications, Acoustical Society of America 1989 .
Pusey, Henry C. (ed), 50 years of shock and vibration technology, Shock and Vibration Information Analysis Center, Booz-Allen & Hamilton, Inc., 1996 .
Smith, Malcolm C., "Synthesis of mechanical networks: the inerter", IEEE Transactions on Automatic Control, vol. 47, iss. 10, pp. 1648–1662, October 2002.
Talbot-Smith, Michael, Audio Engineer's Reference Book, Taylor & Francis, 2013 .
Taylor, John; Huang, Qiuting, CRC Handbook of Electrical Filters, CRC Press, 1997 .
Trent, Horace M., "Isomorphisms between oriented linear graphs and lumped physical systems", The Journal of the Acoustical Society of America, vol. 27, pp. 500–527, 1955.
Electrical analogies
Electromechanical engineering
Electronic design | Mobility analogy | Engineering | 3,180 |
2,897,745 | https://en.wikipedia.org/wiki/Embedded%20value | The Embedded Value (EV) of a life insurance company is the present value of future profits plus adjusted net asset value. It is a construct from the field of actuarial science which allows insurance companies to be valued.
Background
Life insurance policies are long-term contracts, where the policyholder pays a premium to be covered against a possible future event (such as the death of the policyholder).
Future income for the insurer consists of premiums paid by policyholders whilst future outgoings comprise claims paid to policyholders as well as various expenses. The difference, combined with income on and release of statutory reserves, represents future profit.
Net asset value is the difference between the total assets and liabilities of an insurance company.
For companies, the net asset value is usually calculated at book value. This needs to be adjusted to market values for EV purposes. Furthermore, this value may be discounted to reflect the "lock in" of some of the assets by their nature. (An example of such a lock-in would be assets held within the with-profits fund)
Value of the insurer
EV measures the value of the insurer by adding today's value of the existing business (i.e. future profits) to the market value of net assets (i.e. accumulated past profits).
It is a conservative measure of the insurer's value in the sense that it only considers future profits from existing policies and so ignores the possibility that the insurer may sell new policies in future. It also excludes goodwill. As a result, the insurer is worth more than its EV.
Formula
Embedded Value is calculated as follows:
EV = PVFP + ANAV
where
EV = Embedded Value
PVFP = present value of future profits
ANAV = adjusted net asset value
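A purely illustrative numerical sketch of this formula (the profit projections, discount rate and net asset figure below are invented, not from the source): PVFP is obtained by discounting the projected profits from the existing book of business, and EV adds the adjusted net asset value.

# Hypothetical projected after-tax profits from existing policies, per year
future_profits = [120.0, 110.0, 95.0, 80.0, 60.0]   # currency units, years 1..5
discount_rate  = 0.08                                # risk discount rate (assumed)
anav           = 400.0                               # adjusted net asset value (assumed)

# Present value of future profits
pvfp = sum(p / (1 + discount_rate) ** t
           for t, p in enumerate(future_profits, start=1))

embedded_value = pvfp + anav
print(f"PVFP = {pvfp:.1f}, EV = {embedded_value:.1f}")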
Improvements
European embedded value (EEV) is a variation of EV which was set up by the CFO Forum which allows for a more formalised method of choosing the parameters and doing the calculations, to enable greater transparency and comparability.
Market Consistent Embedded Value is a more generalised methodology, of which EEV is one example.
References
External links
Embedded value definition from Investopedia
Actuarial science | Embedded value | Mathematics | 441 |
49,530,308 | https://en.wikipedia.org/wiki/Neutron%20magnetic%20imaging | Neutrons are spin 1/2 particles that interact with magnetic induction fields via the Zeeman interaction. This interaction is both rather large and simple to describe. Several neutron scattering techniques have been developed to use thermal neutrons to characterize magnetic micro and nanostructures.
Polarized small-angle neutron scattering (SANS)
Small-angle neutron scattering is a technique which is especially suited for the study of nanoparticles. It has for example been used extensively for the study of ferrofluids. More recently, polarized SANS has become more widely available and a wide range of studies has been performed. Polarized SANS allows one either to probe the internal structure of magnetic nanoparticles via the measurement of the magnetic form factor, or to probe the magnetic interactions between magnetic nanoparticles via the structure factor. In a few cases, polarized grazing-incidence SANS has been performed on magnetic systems.
A few polarized neutrons SANS spectrometers are available across the world:
D33 at the Institut Laue-Langevin (ILL) in Grenoble France
PA20 at CEA Laboratoire Léon Brillouin (LLB) in Saclay, France (CEA Saclay site)
SANS-I and KWS-1 and KWS-2 at the Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II) in Garching, Germany
V4 at Helmholtz Zentrum Berlin
Polarized neutron reflectometry
Polarized neutron reflectometry allows probing magnetic thin films and ultra-thin films. The polarized reflectivity measurements allow measuring the magnitude and directions of the magnetic induction in magnetic heterostructures with a depth resolution on the order of 2-3 nm for films with thicknesses ranging from 5 to 100 nm.
A number of polarized neutrons reflectometers are available across the world:
Platypus at ANSTO in Sydney, Australia
C5 spectrometer at NRC Canada Chalk River Labs in Chalk River, Canada.
D3 reflectometer at NRC Canada Chalk River Labs in Chalk River, Canada.
D17, SuperADAM at the Institut Laue-Langevin (ILL) in Grenoble, France
PRISM (alternate) at CEA Laboratoire Léon Brillouin (LLB) in Saclay, France
N-REX+, MIRA, TREFF@NoSpec and MARIA at the Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II) in Garching, Germany
REFLEX and REMUR at Joint Institute for Nuclear Research IBR-2 in Dubna, Russia
AMOR at the Paul Scherrer Institute (PSI) in Villigen, Switzerland
SURF, CRISP, INTER, Offspec and polREF at the ISIS neutron source (ISIS) in Oxfordshire, United Kingdom
NG1, NG7 at the NIST Center for Neutron Research (NCNR) in Gaithersburg, Maryland, United States
Magnetism at the Spallation Neutron Source (ORNL) in Oak Ridge, Tennessee, United States
A catalogue of neutron reflectometers is available at www.reflectometry.net.
Polarized Neutron Radiography and Tomography
Precession techniques
The neutron precession in an induction field is expressed as $\frac{d\boldsymbol{\mu}_n}{dt} = \gamma_n\, \boldsymbol{\mu}_n \times \mathbf{B}$, where $\boldsymbol{\mu}_n$ is the neutron magnetic moment, $\mathbf{B}$ is the local magnetic induction at the neutron position and $\gamma_n$ is the neutron gyromagnetic ratio, so that the precession (Larmor) frequency is $\omega_L = |\gamma_n| B$. For neutrons, the gyromagnetic ratio is $\gamma_n \simeq -1.83 \times 10^{8}\ \mathrm{rad\ s^{-1}\ T^{-1}}$ (note that for neutrons the g factor is negative and equal to -3.83).
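For orientation, an illustrative estimate (the wavelength, field and path length below are round numbers chosen by us, not values from the source): the spin rotation accumulated by a neutron crossing a field region is the precession frequency multiplied by the time of flight, which is how polarimetric radiography converts measured rotation angles into field maps.

import math

h       = 6.626e-34    # Planck constant, J.s
m_n     = 1.675e-27    # neutron mass, kg
gamma_n = 1.832e8      # magnitude of the neutron gyromagnetic ratio, rad s^-1 T^-1

wavelength = 4e-10     # 4 angstrom cold neutron (assumed)
B          = 1e-3      # field in the sample region, tesla (assumed)
path       = 0.01      # path length through the field, m (assumed)

v   = h / (m_n * wavelength)   # de Broglie velocity, roughly 990 m/s
t   = path / v                 # time of flight through the field
phi = gamma_n * B * t          # precession angle, radians

print(f"v = {v:.0f} m/s, precession angle = {math.degrees(phi):.1f} deg")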
Bulk systems
Neutron radiography can be used to map the distribution of an induction field in space. In order to perform such experiments, the neutron beam is initially polarized; it interacts with the induction field of interest, and the neutron precession is measured with a neutron analyzer in front of the 2D detector. The beam can be polarized either with supermirrors or with polarized ³He gas.
Thin film structures
The neutron precession in an induction field is rather small; thus, in the case of thin films (~1 μm thick), the neutron interaction is weak. In order to obtain a measurable signal, it has been proposed that a grazing incidence geometry could be used. In such a geometry, the interaction is enhanced since the neutron travels a longer path inside the induction field. Such measurements however assume that the planar structure of the system is homogeneous and that the induction varies only through the depth of the magnetic film. The magnetisation depth profile was measured in thick CoZr films in which the magnetic anisotropy field was "engineered" during deposition. A very thorough description of the measurement process can be found in the literature.
Phase imaging
Phase contrast (or dark field) imaging has recently been developed for neutron radiography and tomography. It has been applied to visualize magnetic domains in several types of systems:
soft magnetic alloys
magnetic vortices in low-Tc superconductors
Scanning magnetic neutron imaging
Magnetic neutron radiography is currently limited in spatial resolution by the need to analyze the neutron polarization. It has been proposed that neutron scanning imaging could be performed by using microbeams. It is however only possible to produce one-dimensional microbeams due to the intrinsic limitation in neutron flux, hence this technique can presently be applied only to one-dimensional problems.
See also
Tomography
Tomographic reconstruction
Neutron Tomography
References
Small-angle scattering
Neutron scattering
Imaging | Neutron magnetic imaging | Chemistry | 1,116 |
54,247,835 | https://en.wikipedia.org/wiki/Biosolarization | Biosolarization is an alternative technology to soil fumigation used in agriculture. It is closely related to biofumigation and soil solarization, or the use of solar power to control nematodes, bacteria, fungi and other pests that damage crops. In solarization, the soil is mulched and covered with a tarp to trap solar radiation and heat the soil to a temperature that kills pests. Biosolarization adds the use of organic amendments or compost to the soil before it is covered with plastic, which speeds up the solarization process by decreasing the soil treatment time through increased microbial activity. Research conducted in Spain on the use of biosolarization in strawberry fruit production has shown it to be a sustainable and cost-effective option. The practice of biosolarization is being used among small agricultural operations in California. Biosolarization is a growing practice in response to the need for methods for organic soil solarization. The option for more widespread use of biosolarization is being studied by researchers at the Western Center for Agricultural Health and Safety at the University of California at Davis in order to validate the effectiveness of biosolarization in commercial agriculture in California, where it has the potential to greatly reduce the use of conventional fumigants. Biosolarization can also be used as an organic waste management practice. Recent studies have shown the potential of food industrial residues as soil amendments that can improve the efficiency of biosolarization.
References
Soil science
Soil contamination
Biocides
Pest control techniques
Agricultural terminology | Biosolarization | Chemistry,Biology,Environmental_science | 312 |
1,630,953 | https://en.wikipedia.org/wiki/Buying%20center | A buying center, also called a decision-making unit (DMU), brings together "all those members of an organization who become involved in the buying process for a particular product or service".
The concept of a DMU was developed by Robinson, Farris and Wind (1967). A DMU consists of all the people of an organization who are involved in the buying decision. The decision to purchase involves those with purchasing and financial expertise and those with technical expertise, and (in some cases) an organization's top management. McDonald, Rogers and Woodburn (2000) state that identifying and influencing all the people involved in the buying decision is a prerequisite in the process of selling to an organization.
Modelling buying centers
The concept of a buying center (as a focus of business-to-business marketing, and as a core factor in creating customer value and influencing organisational efficiency and effectiveness) provides a framework for understanding purchasing decision-making in complex environments.
Some of the key factors influencing a buying center or DMU's activities include:
Buy class or situation. The "Buygrid" model developed by Robinson et al. in 1967 classified "buy classes" as "straight rebuy", "modified rebuy" or "new task", also referred to as "new task buying". Michelle Bunn extended this range to six basic buying situations in a 1993 article:
Casual purchasing involving no search or analysis
Routine low priority purchasing or rebuying
Simple modified rebuys where selection options are limited
Judgemental new purchasing tasks, e.g. for a special type of equipment
Complex modified rebuys requiring more structured processes for establishing and evaluating options, such as through a competitive tendering process
Strategic new tasks establishing long-term business partnerships and purchasing plans.
Product type (e.g. materials, components, plant and equipment, or maintenance, repair and operations (MRO)
Importance of the purchase.
In some cases the buying center is an informal ad hoc group, but in other cases, it is a formally sanctioned group with a specific mandate. American research undertaken by McWilliams in 1992 found that the mean size of a buying center was four people, with a range of three to five. The type of purchase and the stage of the buying process influence the size. More recent research found that the structure, including the size, of buying centers depends on the organizational structure, with centralization and formalization driving the development of large buying centers.
Decision-making process
When the DMU wants to purchase a certain product or service the following steps are taken inside the buying center:
Need or problem recognition: the recognition can start for two reasons. The first reason can be to solve a specific problem of the company. The other reason can be to improve a company's current operations/performance or to pursue new market opportunities.
Determining product specification: the specification sets out the characteristics and functionality which the product or service to be purchased must have.
Supplier and product search: this process involves the search for suppliers that can meet a company's product or service needs. First, a supplier that matches the company's specifications has to be found. The second condition is that the supplier can satisfy the organization's financial and supply requirements.
Evaluation of proposals and selection of suppliers: the different possible suppliers will be evaluated by the different departments of the company.
Selection of order routine: this stage starts after the selection of the supplier. It mainly consists of negotiating and agreeing on details with the supplier.
Performance feedback and evaluation: performance and quality of the purchased goods will be evaluated.
In this process of making decisions different roles can be given to certain members of the center or the unit depending on the importance of the part of the organization.
Robinson et al.'s "Buygrid Framework" saw new task activities, dealing with a problem which has not arisen before, as more complex than the other buy classes, and closer to achieving a general solution applicable in future rebuy activities. McQuiston in 1989 noted mixed empirical findings regarding the framework: "some studies have shown that participation and influence do vary according to the buygrid framework ... but other studies have shown that they do not". Co-author Yoram Wind, looking back at the Buygrid Model 25 years after its publication, held that the model had provided "a very useful framework" whose "underlying dimensions [were] valid", but "its generalizability under a variety of market situations [was] not yet completely understood".
Issues in buying center research
There are several conceptual and methodological issues concerning buying centers which in 1986 were thought to need additional research. These issues can be divided into:
Buying center boundaries and buying center domain
Distinguishing internal buying center processes from the influence of external environmental factors, also defining and delimiting the activities of a particular buying center. Webster and Wind (1972) list a number of environmental factors including physical, economic, legal and cultural aspects of the external environment, and identify physical, technological, economic and cultural aspects with "the [internal] organisational climate". Johnston and Bonoma used interaction theory in a 1981 paper to help analyse the distinction between internal and external factors.
Buying center structure
Understanding how organizational structures may differ from or may shape the structure of the buying center, and examining how a particular buying strategy may serve to mediate the effects of environmental uncertainty on the structure of the buying center.
Process considerations in buying center
Power and conflict issues within the buying center.
Decision making
One stream of research focuses on the number of decision phases and their timing and the other emphasizes the type of decision-making model (or choice routine) utilized.
Communications flow
The informal interactions that emerge during the buying process.
Application to small and medium-sized businesses
Andrews and Rogers noted in 2005 that very little academic discussion had taken place regarding buyer behaviour within small and medium-sized enterprises (SMEs). Thompson and Panayiotopoulos suggest that some purchasing decisions in SMEs, especially in a rebuy context, are made by one person and therefore not really a "group" activity, although in a new-buy situation, "the influence of other people may be greater".
See also
Procurement - formalised organizational procedures for purchasing
References
Business-to-business
Organizational behavior
Procurement | Buying center | Biology | 1,291 |
28,452,110 | https://en.wikipedia.org/wiki/Lucas%20Introna | Lucas D. Introna (born 1961) is Professor of Organisation, Technology and Ethics at the Lancaster University Management School. He is a scholar within the Social Study of Information Systems field. His research is focused on the phenomenon of technology. Within the area of technology studies he has made significant contributions to our understanding of the ethical and political implications of technology for society.
Work
Early on in his career Introna was concerned with the way managers incorporated information in support of managerial practices (such as planning, decision-making, etc.). In this work he provided an account of the manager as an always already involved and entangled actor (which is always to a greater or lesser extent already compromised and configured) in contrast to the traditional normative model of the manager as a rational objective free agent that can choose to act or not act in particular ways. Later on his work shifted to a more critical appraisal of technology itself. He, together with co-workers, published a number of critical evaluations of information technology including web search engines, ATMs, facial recognition systems, etc. His recent work focuses on the ethical and political aspects of technology as well as making contributions to a field that has become known as sociomateriality.
Management, Information and Power
In his book Management, Information and Power, Introna argued that most management education is normatively based (i.e. telling managers how they ought to act), yet managers' organisational reality is mostly based on the ongoing play of power and politics, as has been shown by Henry Mintzberg (See also his recent book Managing). Thus, instead of using information to inform rationality (as the traditional normative models assume) information is rather most often deployed as a resource in organisational politics. This fact, Introna argues, requires an understanding of the relationship between information and power (as suggested in the work of Michel Foucault) rather than information and rationality, as traditionally assumed in the mainstream management literature.
Phenomenological and technology
Drawing on phenomenology, especially the work of Martin Heidegger and Don Ihde, Introna together with Fernando Ilharco developed a phenomenological analysis of information technology—in particular a detailed account of the phenomenology of the screen. They argue that in the phenomenon screen, seeing is not merely being aware of a surface. The very watching of the screen, as a screen, implies that the screen has already soaked up our attention. In screening, screens already attract and hold our attention. They continue to hold our attention as they present what is supposedly relevant—this is exactly why they have the power to attract and hold our attention. This ongoing relevance has as its necessary condition an implicit agreement, not of content, but of a way of living and a way of doing—or rather a certain agreement about the possibilities of truth. As such they argue that screens are ontological entities.
The ethics and politics of technology
Introna (with a variety of co-workers) has developed a number of detailed empirical studies of the ethics and politics of technology, within the tradition of science and technology studies. For example, with Helen Nissenbaum he published a paper on the politics of web search engines. This research showed that the indexing and ranking algorithms of Google are producing a particular version of the internet, one which systematically excludes (in some cases by design and in some, accidentally) certain sites and certain types of sites in favour of others, systematically giving prominence to some at the expense of others. Introna also published similar political and ethical studies on facial recognition systems, automatic teller machines, and plagiarism detection systems, amongst others.
Sociomateriality and the ethics of things
More recently Introna has suggested that if we are cyborgs, as argued by Donna Haraway and others, then our ethical relationships with the inanimate material world need to be reconsidered in a fundamental way. According to him this can only be achieved if we humans abandon a human-centric ethical framework and opt for an ethical framework in which all beings are considered worthy of ethical consideration.
Selected publications
2023. Being-in-the-Screen: Phenomenological Reflections on Contemporary Screenhood, in Robson, G.J., & Tsou, J.Y. (eds). Technology Ethics: A Philosophical Introduction and Readings (1st ed.). Routledge. https://doi.org/10.4324/9781003189466
2021. “Touching Tactfully: The Impossible Community”, in Olsen, B., Burström, M., DeSilvey, C. and Pétursdóttir, Þ. (Eds.), After Discourse: Things, Affects, Ethics, 1st edition., Routledge, London & New York, pp. 207–218.
2019. Performativity and sociomaterial becoming: what technologies do? In S. Webb (Editor), The Routledge Handbook of Critical Social Work. Oxford ; New York: Routledge, 312-323
2017. On the making of sense in sensemaking: decentred sensemaking in the meshwork of life, Organization Studies, https://doi.org/10.1177/0170840618765579.
2016. Algorithms, Governance and Governmentality: On governing academic writing, Science, Technology and Human Values, 41(1):17-49
2014. Ethics and Flesh: Being touched by the otherness of things, In Olsen, Bjørnar and Þóra Pétursdóttir (eds.), Ruin Memories: Materialities, Aesthetics and the Archaeology of the Recent Past, Oxford: Routledge, p. 41-61. [ISBN 9781317695790]
2013. Afterword: Performativity and the becoming of sociomaterial assemblages. In de Vaujany, F-X., & Mitev, N. (Eds.), Materiality and Space: Organizations, Artefacts and Practices.(pp. 330–342).Palgrave Macmillan.
2013. Otherness and the letting-be of becoming: or, ethics beyond bifurcation. In Carlile, P., Nicolini, D., Langley, A., & Tsoukas, H. (Eds.), How matter matters. (pp. 260–287). Oxford: Oxford University Press.
2011. The Enframing of Code: Agency, originality and the plagiarist, Theory, Culture and Society, 28(6): 113-141.
2009. Ethics and the speaking of things, Theory, Culture and Society, 26(4): 398-419.
2008. Phenomenology, Organisation and Technology, Universidade Católica Editora, Lisbon. (with Fernando Ilharco and Eric Faÿ)
2007. Maintaining the Reversibility of Foldings: Making the ethics (politics) of information technology visible, Ethics and Information Technology, 9(1): 11-25
2006. The Meaning of Screens: Towards a phenomenological account of screenness, Human Studies, 29(1): 57-76. (with Fernando M. Ilharco)
2005. Disclosing the Digital Face: The ethics of facial recognition systems, Ethics and Information Technology, 7(2): 75-86
2002. The (im)possibility of ethics in the information age. Information and Organisation, 12(2):71-84.
2000. Shaping the Web: Why the politics of search engines matters, The Information Society, 16(3):169-185 (with Helen Nissenbaum )
1999. Privacy in the Information Age: Stakeholders, interests and values. Journal of Business Ethics, 22(1): 27-38 (with Nancy Poloudi)
1997. Privacy and the Computer: Why we Need Privacy in the Information Society. Metaphilosophy, 28(3): 259-275
1997. Management, Information and Power: A narrative of the involved manager, Macmillan, Basingstoke.
References
External links
Lucas Introna's webpage at Lancaster University
His publication archive
His Google Scholar citation page
Research Gate entry
1961 births
Living people
Academics of Lancaster University
Philosophers of technology
British ethicists
Information systems researchers | Lucas Introna | Technology | 1,696 |
73,052,423 | https://en.wikipedia.org/wiki/Double%20operator%20integral | In functional analysis, double operator integrals (DOI) are integrals of the form
where is a bounded linear operator between two separable Hilbert spaces,
are two spectral measures, where stands for the set of orthogonal projections over , and is a scalar-valued measurable function called the symbol of the DOI. The integrals are to be understood in the form of Stieltjes integrals.
Double operator integrals can be used to estimate the differences of two operators and have application in perturbation theory. The theory was mainly developed by Mikhail Shlyomovich Birman and Mikhail Zakharovich Solomyak in the late 1960s and 1970s, however they appeared earlier first in a paper by Daletskii and Krein.
Double operator integrals
The map $$T^{E,F}_\phi \colon x \mapsto \int_{\Omega_1}\int_{\Omega_2} \phi(\lambda,\mu)\, dE(\lambda)\; x\; dF(\mu)$$
is called a transformer. We simply write $T_\phi = T^{E,F}_\phi$, when it's clear which spectral measures we are looking at.
Originally Birman and Solomyak considered a Hilbert–Schmidt operator $x$ and defined a spectral measure $\mathcal{G}$ on the Hilbert space of Hilbert–Schmidt operators by $$\mathcal{G}(\Lambda \times \Delta)\, x = E(\Lambda)\; x\; F(\Delta)$$
for measurable sets $\Lambda \times \Delta \subseteq \Omega_1 \times \Omega_2$, then the double operator integral can be defined as $$T_\phi(x) = \left(\int_{\Omega_1 \times \Omega_2} \phi \; d\mathcal{G}\right)(x)$$
for bounded and measurable functions $\phi$. However one can look at more general operators $x$ as long as $T_\phi(x)$ stays bounded.
Examples
Perturbation theory
Consider the case where $H_1 = H_2 = H$ is a Hilbert space and let $A$ and $B$ be two bounded self-adjoint operators on $H$. Let $\Omega \subseteq \mathbb{R}$ and let $f$ be a function on $\Omega$, such that the spectra $\sigma(A)$ and $\sigma(B)$ are in $\Omega$. As usual, $1$ is the identity operator. Then by the spectral theorem $f(A) = \int_\Omega f(\lambda)\, dE(\lambda)$ and $f(B) = \int_\Omega f(\mu)\, dF(\mu)$ and $1 = \int_\Omega dE(\lambda) = \int_\Omega dF(\mu)$, hence
$$f(A) - f(B) = \int_\Omega\!\int_\Omega \big(f(\lambda) - f(\mu)\big)\, dE(\lambda)\; 1\; dF(\mu) = \int_\Omega\!\int_\Omega \frac{f(\lambda) - f(\mu)}{\lambda - \mu}\, dE(\lambda)\,(A - B)\, dF(\mu),$$
where $E$ and $F$ denote the corresponding spectral measures of $A$ and $B$.
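One standard consequence (sketched here with the divided-difference notation $f^{[1]}(\lambda,\mu) = \frac{f(\lambda)-f(\mu)}{\lambda-\mu}$, which is a common convention rather than necessarily that of the original papers): the identity above says $f(A) - f(B) = T^{E,F}_{f^{[1]}}(A - B)$, so whenever the transformer with symbol $f^{[1]}$ is bounded on the relevant operator ideal one obtains the perturbation estimate
$$\big\|f(A) - f(B)\big\| \;\leq\; \big\|T^{E,F}_{f^{[1]}}\big\|\; \big\|A - B\big\|,$$
which is the sense in which double operator integrals are used to estimate how much $f(A)$ changes when $A$ is perturbed to $B$.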
Literature
References
Functional analysis
Definitions of mathematical integration | Double operator integral | Mathematics | 315 |
48,951,615 | https://en.wikipedia.org/wiki/Syrian%20hamster%20behavior | Syrian hamster behavior refers to the ethology of the Syrian hamster (Mesocricetus auratus).
Sleeping habits
Syrian hamsters have a sleep cycle that lasts about 10 to 12 minutes.
In the laboratory, Syrian hamsters are observed to be nocturnal and in their natural circadian rhythm they wake and sleep on a consistent schedule. In all kinds of laboratory settings hamsters do 80% of their routine activities at night. Hamsters are most active early in the night, then become less active as the night passes.
A study of Syrian hamsters in the wild found that they were active almost exclusively in the daytime, which is a surprising difference from behavior in the laboratory. The sleeping behavior of wild hamsters is not well understood.
Reproduction
The female Syrian hamster has anatomic features that are unique from other animals. They mature between 8–10 weeks of age and have a 4-day estrous cycle.
Female Syrian hamsters show mate preference before they engage in copulation by displaying vaginal marking, known to solicit males. She often chooses to mate with an alpha male, who will flank mark (a scent-marking behaviour associated with aggression and competition) more frequently than any subordinate males present.
Male offspring are at higher risk than female offspring of enduring effects from maternal social stress. In the presence of a dominant pregnant female, subordinate pregnant female hamsters have the ability to reabsorb or spontaneously abort their young (most often males) in utero. The subordinate females produce smaller litters overall, and any male offspring they do produce will be smaller in size than those that were produced by the dominant female. After a mother hamster gives birth, normal behavior from the mother in the postpartum period can include establishing a maternal bond with the babies, the mother being aggressive to protect the babies, or infanticide of her young by the mother.
The male Syrian hamster has a requirement for both hormonal cues and chemosensory cues in order to engage in copulation. Further, the integration of steroid cues (i.e. testosterone) and odour cues (relayed through the olfactory bulb) is crucial for mating. It has also been shown that within the medial amygdala, the anterior and posterior regions work together to process the stimuli (odors), showing that their mating behaviour relies on the main olfactory system's communication to nuclei in the amygdala regions. Their behaviour has demonstrated this phenomenon, as they are attracted to the odor of female hamster's vaginal discharge. Males have even demonstrated mounting behaviour on other males who are scented with the female vaginal discharge.
When one male and two females are placed in the same environment, the male is likely to engage in copulation with both females as it provides him with a reproductive advantage. In all observed scenarios where there was one male and two females, he did not demonstrate preference for either female and engaged in copulation with both the females present. There has been no reproductive disadvantage to the female when another female is present, other than decreased stimulation as compared to a one-male one-female situation.
Interactions with others
Syrian hamsters acquire learned helplessness when they are bullied a few times by a larger animal. Syrian hamsters can regain lost confidence when some time passes without experiencing bullying.
Interactions between male and female Syrian hamsters are influenced by the estrous cycle - in addition, their behaviour changes over the course of the 4-day cycle. Parameters for interactions that have been studied include sniffing, approaching, leaving, and following each other (male/female pair). Specific to the male hamster, his response to the female can be measured by mounting behaviour, intromission and ejaculation.
Under semi-natural conditions, the mating behaviours of male and female hamsters were observed during the 4-day period of estrous. When they were allowed free interaction, females displayed lordosis in their own living area 93% of the time, where after 60 minutes of copulation the male would be driven out by the female while she retrieved his food supply and forced him into a corner farthest away from her nest via displays of aggressive behaviour.
When a Syrian hamster is introduced to a stranger hamster in its own cage, they perform a standard sequence of acts and postures (also known as a fixed action pattern) that are agonistic by nature. It has been observed that one hamster becomes the dominant and the other becomes submissive, as shown by their posture. The stranger hamster was observed to be the dominant in the majority of situations, and the resident hamster was the submissive.
Feeding behaviour
Food-anticipatory activity (FAA), meaning increased locomotion due to restricted feeding schedules (often found in laboratory settings), is a behaviour seen in many rodents. The Syrian hamster is one of only few exceptions to this activity. It has been found that the arcuate nucleus, ventromedial nucleus, and dorsomedial nucleus are all involved in the presence of FAA, and that Syrian hamsters in the laboratory do not demonstrate FAA because of the presence of light and the typical light cycles used in experiments.
In a study of their food-hoarding behaviour, Syrian hamsters were given a limited access to food and expected to consume more in each sitting than they typically would. Instead, they exhibited hoarding behaviour where they took the food during the given time period and continuously ate the food that they hoarded as though they were on a free-fed schedule. This allowed them to maintain typical body weight, and mimic the adaptive feeding strategies they may use in their natural habitats.
Females have shown signs of anorexia and anxiety when separated from social interactions. Social separation of hamsters has a bias toward females, thus providing a model for the differences between sexes when experiencing anorexia and anxiety in their adulthood.
Laboratory behaviour
Although almost all hamsters display wire-gnawing behaviour in all laboratory cage sizes, it has been shown that the more restricted the cage size, the more their gnawing behaviour increases. Additionally, hamsters in smaller cages used the roof of their house as a platform more often than those in a larger cage, which may suggest that they are trying to create more space for themselves within their cage.
In another study, the bedding depth of hamsters and its influence on their stress and wire-gnawing behaviour was tracked by assigning 3 groups different bedding depths - 10 cm, 40 cm, and 80 cm. This is due to the natural instinct that laboratory rodents have to dig. Hamsters who had the 10 cm deep bedding showed significantly more wire-gnawing than any others, and the 80 cm deep bedding group demonstrated no wire-gnawing behaviour. This research demonstrates the importance of having enough bedding for the hamsters to indulge their natural tendencies and have enough material to dig.
The behaviour and responses of Syrian hamsters have been observed and tested for a variety of medical-related studies as well, such as the development of the palate and incidence of cleft palate, the influence of retinoic acid on physical malformations in fetuses, immune responses to diseases like hookworm, and the effects of ingesting ethanol solution on liver composition and fatty acid accumulation.
References
Behavior
Mammal behavior
Ethology | Syrian hamster behavior | Biology | 1,496 |
53,990,911 | https://en.wikipedia.org/wiki/Gray%20molasses | Gray molasses is a method of sub-Doppler laser cooling of atoms. It employs principles from Sisyphus cooling in conjunction with a so-called "dark" state whose transition to the excited state is not addressed by the resonant lasers. Ultracold atomic physics experiments on atomic species with poorly-resolved hyperfine structure, like isotopes of lithium
and potassium,
often utilize gray molasses instead of Sisyphus cooling as a secondary cooling stage after the ubiquitous magneto-optical trap (MOT) to achieve temperatures below the Doppler limit. Unlike a MOT, which combines a molasses force with a confining force, a gray molasses can only slow but not trap atoms; hence, its efficacy as a cooling mechanism lasts only milliseconds before further cooling and trapping stages must be employed.
Overview
Like Sisyphus cooling, the cooling mechanism of gray molasses relies on a two-photon Raman-type transition between two hyperfine-split ground states mediated by an excited state. Orthogonal superpositions of these ground states constitute "bright" and "dark" states, so called since the former couples to the excited state via dipole transitions driven by the laser, and the latter is only accessible via spontaneous emission from the excited state. As neither are eigenstates of the kinetic energy operator, the dark state also evolves into the bright state with frequency proportional to the atom's external momentum. Gradients in the polarization of the molasses beam create a sinusoidal potential energy landscape for the bright state in which atoms lose kinetic energy by traveling "uphill" to potential energy maxima that coincide with circular polarizations capable of executing electric dipole transitions to the excited state. Atoms in the excited state are then optically pumped to the dark state and subsequently evolve back to the bright state to restart the cycle. Alternately, the pair of bright and dark ground states can be generated by electromagnetically-induced transparency (EIT).
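For orientation, a minimal sketch of how such bright and dark superpositions arise in a generic Λ-system (the state labels $|g_1\rangle$, $|g_2\rangle$, $|e\rangle$ and real Rabi frequencies $\Omega_1$, $\Omega_2$ are illustrative, not tied to a particular experiment): with two fields coupling the ground states to a common excited state, the combinations
$$|\psi_B\rangle = \frac{\Omega_1|g_1\rangle + \Omega_2|g_2\rangle}{\sqrt{\Omega_1^2 + \Omega_2^2}}, \qquad |\psi_D\rangle = \frac{\Omega_2|g_1\rangle - \Omega_1|g_2\rangle}{\sqrt{\Omega_1^2 + \Omega_2^2}}$$
act as the bright and dark states: $|\psi_B\rangle$ couples to $|e\rangle$ with effective Rabi frequency $\sqrt{\Omega_1^2 + \Omega_2^2}$, while the two excitation amplitudes from $|\psi_D\rangle$ interfere destructively and cancel.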
The net effect of many cycles from bright to excited to dark states is to subject atoms to Sisyphus-like cooling in the bright state and select the coldest atoms to enter the dark state and escape the cycle. The latter process constitutes velocity-selective coherent population trapping (VSCPT).
The combination of bright and dark states thus inspires the name "gray molasses."
History
In 1988, The NIST group in Washington led by William Phillips first measured temperatures below the Doppler limit in sodium atoms in an optical molasses, prompting the search for the theoretical underpinnings of sub-Doppler cooling.
The next year, Jean Dalibard and Claude Cohen-Tannoudji identified the cause as the multi-photon process of Sisyphus cooling,
and Steven Chu's group likewise modeled sub-Doppler cooling as fundamentally an optical pumping scheme.
As a result of their efforts, Phillips, Cohen-Tannoudji, and Chu jointly won the 1997 Nobel Prize in Physics. T.W. Hänsch, et al., first outlined the theoretical formulation of gray molasses in 1994,
and a four-beam experimental realization in cesium was achieved by G. Grynberg the next year.
It has since been regularly used to cool all the other alkali (hydrogenic) metals.
Comparison to Sisyphus Cooling
In Sisyphus cooling, the two Zeeman levels of an atomic ground state manifold experience equal and opposite AC Stark shifts from the near-resonant counter-propagating beams. The beams also effect a polarization gradient, alternating between linear and circular polarizations. The potential energy maxima of one sublevel coincide with pure circular polarization, which optically pumps atoms to the other sublevel, which experiences its minima in the same location. Over time, the atoms expend their kinetic energy traversing the potential energy landscape and transferring the potential energy difference between the crests and troughs of the AC-Stark-shifted ground state levels to emitted photons.
In contrast, gray molasses only has one sinusoidally light-shifted ground state; optical pumping at the peaks of this potential energy landscape takes atoms to the dark state, which can selectively evolve to the bright state and re-enter the cycle with sufficient momentum. Sisyphus cooling is difficult to implement when the excited state manifold is poorly-resolved (i.e. whose hyperfine spacing is comparable to or less than the constituent linewidths); in these atomic species, the Raman-type gray molasses is preferable.
Theory
Dressed-State Picture
Denote the two ground states and the excited state of the electron by $|g_1\rangle$, $|g_2\rangle$, and $|e\rangle$, respectively. The atom also has overall momentum, so the overall state of the atom is a product state of its internal state and its momentum. In the presence of counter-propagating beams of opposite polarization, the internal states experience the atom-light interaction Hamiltonian
where $\Omega$ is the Rabi frequency, approximated to be the same for both transitions. Using the definition of the translation operator in momentum space,
the effect of on the state is
This suggests the dressed state that couples to is a more convenient basis state of the two ground states. The orthogonal basis state defined below does not couple to at all.
The action of on these states is
Thus, and undergo Sisyphus-like cooling, identifying the former as the bright state. is optically inaccessible and constitutes the dark state. However, and are not eigenstates of the momentum operator, and thus motionally couple to one another via the kinetic energy term of the unperturbed Hamiltonian:
As a result of this coupling, the dark state evolves into the bright state with frequency proportional to the momentum, effectively selecting hotter atoms to re-enter the Sisyphus cooling cycle. This nonadiabatic coupling occurs predominantly at the potential minima of the light-shifted coupling state. Over time, atoms cool until they lack the momentum to traverse the sinusoidal light shift of the bright state and instead populate the dark state.
Raman Condition
The resonance condition of any $\Lambda$-type Raman process requires that the difference in the two photon energies match the difference in energy between the states at the "legs" of the $\Lambda$, here the ground states identified above. In experimental settings, this condition is realized when the detunings of the cycling and repumper frequencies with respect to the $|g_1\rangle \to |e\rangle$ and $|g_2\rangle \to |e\rangle$ transition frequencies, respectively, are equal.
Unlike most Doppler cooling techniques, light in the gray molasses must be blue-detuned from its resonant transition; the resulting Doppler heating is offset by polarization-gradient cooling. Qualitatively, this is because the choice of blue detuning means that the AC Stark shifts of the three levels have the same sign at any given position. Selecting the potential energy maxima as the sites of optical pumping to the dark state requires the overall light to be blue-detuned; in doing so, the atoms in the bright state traverse the maximum potential energy difference and thus dissipate the most kinetic energy. A full quantitative explanation of the molasses force with respect to detuning can be found in Hänsch's paper.
See also
Sisyphus cooling
Raman cooling
Notes
References
Atomic physics
Cooling technology
Laser applications
Thermodynamics | Gray molasses | Physics,Chemistry,Mathematics | 1,502 |
272,522 | https://en.wikipedia.org/wiki/FM%20broadcast%20band | The FM broadcast band is a range of radio frequencies used for FM broadcasting by radio stations. The range of frequencies used differs between different parts of the world. In Europe and Africa (defined as International Telecommunication Union (ITU) region 1) and in Australia and New Zealand, it spans from 87.5 to 108 megahertz (MHz) - also known as VHF Band II - while in the Americas (ITU region 2) it ranges from 88 to 108 MHz. The FM broadcast band in Japan uses 76 to 95 MHz, and in Brazil, 76 to 108 MHz. The International Radio and Television Organisation (OIRT) band in Eastern Europe is from 65.9 to 74.0 MHz, although these countries now primarily use the 87.5 to 108 MHz band, as in the case of Russia. Some other countries have already discontinued the OIRT band and have changed to the 87.5 to 108 MHz band.
Narrow-band frequency modulation was developed and demonstrated by Hanso Idzerda in 1919.
Wide-band frequency modulation radio originated in the United States during the 1930s; the system was developed by the American electrical engineer Edwin Howard Armstrong. However, FM broadcasting did not become widespread, even in North America, until the 1960s.
Frequency-modulated radio waves can be generated at any frequency. All the bands mentioned in this article are in the very high frequency (VHF) range, which extends from 30 to 300 MHz.
CCIR band plan
Center frequencies
While all countries use FM channel center frequencies ending in 0.1, 0.3, 0.5, 0.7, and 0.9 MHz, some countries also use center frequencies ending in 0.0, 0.2, 0.4, 0.6, and 0.8 MHz. A few others also use 0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, and 0.95 MHz.
An ITU conference in Geneva, Switzerland, on December 7, 1984, resolved to discontinue the use of 50 kHz channel spacings throughout Europe.
Most countries have used 100 kHz or 200 kHz channel spacings for FM broadcasting since this ITU conference in 1984.
Some digitally-tuned FM radios are unable to tune in 50 kHz or even 100 kHz increments. Therefore, when such radios are taken abroad or imported, stations broadcasting on frequencies that require these increments may not be received clearly. This problem does not affect reception on an analog-tuned radio.
A few countries, such as Italy, which have heavily congested FM bands, still allow a station on any multiple of 50 kHz wherever one can be squeezed in.
Such 50 kHz channel offsets help limit co-channel interference, taking advantage of FM's capture effect and receiver selectivity.
ITU Region 2 bandplan and channel numbering
The original frequency allocation in North America used by Edwin Armstrong used the frequency band from 42 through 50 MHz, but this allocation was changed to a higher band beginning in 1945. In Canada, the United States, Mexico, the Bahamas, etc., there are 101 FM channels numbered from 200 (center frequency 87.9 MHz) to 300 (center frequency 107.9 MHz), though these numbers are rarely used outside the fields of radio engineering and government.
The center frequencies of the FM channels are spaced in increments of 200 kHz. The frequency of 87.9 MHz, while technically part of TV channel 6 (82 to 88 MHz), is used by just two FM class-D stations in the United States. Portable radio tuners often tune down to 87.5 MHz, so that the same radios can be made and sold worldwide. Automobiles usually have FM radios that can tune down to 87.7 MHz, so that TV channel 6's audio at 87.75 MHz (±10 kHz) could be received while driving. This is largely no longer possible due to the 2009 digital television transition, though in 2023 the FCC authorized fourteen low-powered Channel 6 television stations to continue to operate radio services indefinitely.
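Because the 101 channels are numbered consecutively from 200 at 87.9 MHz in 200 kHz steps, channel numbers and center frequencies can be interconverted with simple arithmetic. A minimal Python sketch (the function names are illustrative, not part of any standard):

```python
def channel_to_mhz(channel: int) -> float:
    """ITU Region 2: channel 200 -> 87.9 MHz, ..., channel 300 -> 107.9 MHz."""
    if not 200 <= channel <= 300:
        raise ValueError("FM channels run from 200 to 300")
    return 87.9 + 0.2 * (channel - 200)

def mhz_to_channel(freq_mhz: float) -> int:
    """Inverse mapping; center frequencies are spaced every 200 kHz."""
    channel = 200 + round((freq_mhz - 87.9) / 0.2)
    if not 200 <= channel <= 300:
        raise ValueError("frequency outside the 87.9-107.9 MHz band")
    return channel

print(channel_to_mhz(200))   # 87.9
print(channel_to_mhz(300))   # 107.9
print(mhz_to_channel(91.9))  # 220, the top of the reserved band
```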
In the United States, the twenty-one channels with center frequencies of 87.9–91.9 MHz (channels 200 through 220) constitute the reserved band, exclusively for non-commercial educational (NCE) stations. The other channels, 92.1 MHz through 107.9 MHz (channels 221–300), may be used by both commercial and non-commercial stations. (Note that in Canada and in Mexico this reservation does not apply; Mexico introduced a reservation of 106.1–107.9 MHz for community and indigenous stations in 2014, though dozens of stations are grandfathered due to lack of space to relocate them.)
Originally, the American Federal Communications Commission (FCC) devised a bandplan in which FM radio stations would be assigned at intervals of four channels (800 kHz separation) for any one geographic area. Thus, in one area, stations might be at 88.1, 88.9, 89.7, etc., while in an adjacent area, stations might be at 88.3, 89.1, 89.9, 90.7 etc. Certain frequencies were designated for Class A only (see FM broadcasting), which had a limit of three kilowatts of effective radiated power (ERP) and an antenna height limit for the center of radiation of 300 feet (91.4 m) height above average terrain (HAAT). These frequencies were 92.1, 92.7, 93.5, 94.3, 95.3, 95.9, 96.7, 97.7, 98.3, 99.3, 100.1, 100.9, 101.7, 102.3, 103.1, 103.9, 104.9, 105.5, 106.3 and 107.1. On other frequencies, a station could be Class B (50 kW, 500 feet) or Class C (100 kW, 2,000 feet), depending on which zone it was in.
In the late 1980s, the FCC switched to a bandplan based on a distance separation table using currently operating stations, and subdivided the class table to create extra classes and change antenna height limits to meters. Class A power was doubled to six kilowatts, and the frequency restrictions noted above were removed. As of late 2004, a station can be "squeezed in" anywhere as long as the location and class conform to the rules in the FCC separation table. The rules for second-adjacent-channel spacing do not apply for stations licensed before 1964.
In 2017, Brazil laid the groundwork to reclaim channels 5 and 6 (76.1–87.5 MHz) for sound broadcasting use and required new radio receivers to be able to tune into the new extended band (abbreviated eFM). Five transmitters of public broadcaster Brazil Communication Company were the first extended-band stations to begin broadcasts on May 7, 2021.
In 2023, Chile announced the expansion of the FM band to 76-108 MHz as part of the analog TV shutdown, scheduled for April 2024.
Deviation and bandpass
Normally each channel is 200 kHz (0.2 MHz) wide, and can pass audio and subcarrier frequencies up to 100 kHz. Deviation is typically limited to 150 kHz total (±75 kHz) in order to prevent adjacent-channel interference on the band. Stations in the U.S. may go up to 10% over this limit if they use non-stereo subcarriers, increasing total modulation by 0.5% for each 1% used by the subcarriers. Some stations may be limited to ±50 kHz deviation in order to reduce transmitted bandwidth so that additional stations can be squeezed in.
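As a worked example of the subcarrier allowance described above, the following sketch assumes the U.S. rule as stated: total modulation may rise by 0.5% for each 1% of non-stereo subcarrier injection, capped at 10% above the ±75 kHz base (the function and variable names are illustrative only):

```python
BASE_DEVIATION_KHZ = 75.0          # +/-75 kHz corresponds to 100% modulation

def allowed_deviation_khz(subcarrier_injection_percent: float) -> float:
    """Total permitted deviation when non-stereo subcarriers are in use."""
    extra_percent = min(0.5 * subcarrier_injection_percent, 10.0)
    return BASE_DEVIATION_KHZ * (100.0 + extra_percent) / 100.0

print(allowed_deviation_khz(0))    # 75.0  -> plain +/-75 kHz
print(allowed_deviation_khz(10))   # 78.75 -> 105% modulation
print(allowed_deviation_khz(20))   # 82.5  -> the 110% ceiling
```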
OIRT bandplan
The OIRT FM broadcast band covers 65.8 to 74 MHz. It was used in the Soviet Union and most of the other Warsaw Pact member countries of the International Radio and Television Organisation in Eastern Europe (OIRT), with the exception of East Germany, which always used the 87.5 to 100 (later 104) MHz broadcast band—in line with Western Europe.
The lower portion of the VHF band behaves a bit like shortwave radio in that it has a longer reach than the upper portion of the VHF band. It was ideally suited for reaching vast and remote areas that would otherwise lack FM radio reception. In a way, FM suited this band because the capture effect of FM could mitigate interference from skywaves.
Transition to the 87.5 to 108 MHz band started as early as the 1980s in some Eastern European countries. Following the collapse of the communist governments, the transition accelerated markedly as private stations were established. It was also prompted by the lack of equipment for the OIRT band and by the modernisation of existing transmission networks.
Many countries have completely ceased broadcasting on the OIRT FM band, although use continues in others, mainly the former republics of the USSR. The future of broadcasting on the OIRT FM band is limited, due to the lack of new consumer receivers for this band outside of Russia.
Countries which still use the OIRT band are Russia (including Kaliningrad), Belarus, Moldova, Ukraine, and Turkmenistan.
In Czechoslovakia, the decision to use the 87.5 to 108 MHz band instead of the 65.9 to 74 MHz band was made at the beginning of the 1980s. A frequency plan was created and internationally coordinated at the Regional Administrative Conference for FM Sound Broadcasting in the VHF band in Geneva in 1984. The allocated frequencies are still valid and are used in the Czech Republic and Slovakia. The first transmitter was put into operation on 102.5 MHz near Prague in November 1984. Three years later, there were eleven transmitters in service across the country, including three in the Prague neighborhood of Žižkov. In 1988, the plan was to eventually set up 270 transmitters in 45 locations. The transition was finished in 1993.
In Poland all OIRT broadcast transmitters were closed down at the end of 1999.
Hungary closed down its remaining broadcast transmitters in 2007, and for thirty days in July of that year, several Hungarian amateur radio operators received a temporary experimental permit to perform propagation and interference experiments in the 70–70.5 MHz band.
In Belarus, only government-run public radio stations are still active on the OIRT band. All OIRT-band stations in Belarus mirror broadcasts from the regular FM band; their main purpose is compatibility with older equipment.
In 2014, Russia began replacing OIRT-band transmitters with CCIR-band (the "western") FM transmitters. The main reason for the change to CCIR FM is to reach more listeners.
Unlike Western practice, OIRT FM frequencies are based on 30 kHz rather than 50, 100 or 200 kHz multiples. This may have been to reduce co-channel interference caused by Sporadic E propagation and other atmospheric effects, which occur more often at these frequencies. However, multipath distortion effects are less annoying than on the CCIR band.
Stereo is generally achieved by sending the stereo difference signal using a process called polar modulation. Polar modulation uses a reduced subcarrier at 31.25 kHz with the audio on both sidebands. This gives the following signal structure: the L + R (sum) signal at baseband, plus the L − R (difference) signal carried on the reduced 31.25 kHz subcarrier.
The 4-meter band (70–70.5 MHz) amateur radio allocation used in many European countries is entirely within the OIRT FM band. Operators on this band and the 6-meter band (50–54 MHz) use the presence of broadcast stations as an indication that there is an "opening" into Eastern Europe or Russia. This can be a mixed blessing because the 4 meter amateur allocation is only 0.5 MHz or less, and a single broadcast station causes considerable interference to a large part of the band.
The System D television channels R4 and R5 lie wholly or partly within the 87.5–108 MHz FM audio broadcast band. Countries which still use System D therefore have to consider the re-organisation of TV broadcasting in order to make full use of this band for audio broadcasting.
Japanese bandplan
The FM band in Japan is 76–95 MHz (previously 76–90). The 90–108 MHz section was used for analog VHF TV Channels 1, 2 and 3 (each NTSC television channel is 6 MHz wide). The narrowness of the Japanese band (19 MHz compared to slightly more than 20 MHz for the CCIR band) limits the number of FM stations that can be accommodated on the dial with the result that many commercial radio stations are forced to use AM.
Many Japanese radios are capable of receiving both the Japanese FM band and the CCIR FM band, so that the same model can be sold within Japan or exported. The radio may cover 76 to 108 MHz, the frequency coverage may be selectable by the user, or during assembly the radio may be set to operate on one band by means of a specially placed diode or other internal component.
Conventional analog-tuned (dial and pointer) radios were formerly marked with "TV Sound" in the 76–88 MHz section. If these radios were sold in the US, for example, the 76–88 MHz section would be marked as TV sound for VHF channels 5 and 6 (two 6 MHz-wide NTSC TV channels), with the 88–108 MHz section as normal FM. The compatibility of "TV sound" with conventional FM radio ended with the U.S. digital TV transition in 2009, with the exception of the limited number of low-power stations on channel 6 that still used analog; these low-power stations were required to switch to digital in 2021.
Second-hand automobiles imported from Japan contain a radio designed for the Japanese FM band, and importers often fit a "converter" to down-convert the 87.5 to 107.9 MHz band to frequencies that the radio can accept. In addition to displaying an incorrect frequency, such converters have two other disadvantages that can degrade performance: the converter cannot down-convert the full international FM band (up to 20.5 MHz wide) into the only 14 MHz-wide Japanese band (unless it incorporates two user-switchable down-conversion modes), and the car's antenna may perform poorly on the higher FM band. Some converters simply down-convert the FM band by 12 MHz, leading to logical frequencies (e.g. 78.9 for 90.9, 82.3 for 94.3, etc.), but leaving off the 102–108 MHz band. RDS is not used in Japan, whereas most modern car radios available in Europe have this system. The converter may also not allow pass-through of the MW band, which is used for AM broadcasting. A better solution is to replace the radio and antenna with ones designed for the country where the car will be used.
Australia had a similar situation, with Australian TV channels 3, 4 and 5 lying between 88 and 108 MHz. It intended to follow Japan but in the end opted for the western bandplan, owing to the number of CCIR-band radios entering the country. Some radios covering 76 to 90 MHz were nonetheless sold in Australia.
Historic U.S. bandplan
In the 1930s, investigations began into establishing radio stations transmitting on "very high frequency" (VHF) assignments above 30 MHz. In October 1937, the Federal Communications Commission (FCC) announced new frequency allocations, which included a band of experimental and educational "Apex" stations consisting of 75 channels spanning 41.02 to 43.98 MHz. Like the existing AM band, these stations employed amplitude modulation; however, the 40 kHz spacing between adjacent frequencies was four times the 10 kHz spacing on the standard AM broadcast band, which reduced adjacent-frequency interference and provided more bandwidth for high-fidelity programming.
Also during the 1930s, Edwin Howard Armstrong developed a competing transmission technology, "wide-band frequency modulation", which was promoted as superior to AM transmission, in particular for its high fidelity and near immunity to static interference. In May 1940, largely as a result of Armstrong's efforts, the FCC decided to eliminate the Apex band and authorized an FM band effective January 1, 1941, operating on 40 channels spanning 42–50 MHz, with the first five channels reserved for educational stations. There was significant interest in the new FM band among station owners; however, construction restrictions put in place during World War II limited the growth of the new service.
Following the end of the war, the FCC moved to standardize its frequency allocations. One area of concern was the effect of tropospheric and sporadic-E propagation, which at times reflected station signals over great distances, causing mutual interference. A particularly controversial proposal, spearheaded by the Radio Corporation of America (RCA), headed by David Sarnoff, was that the FM band needed to be shifted to higher frequencies in order to avoid this potential problem. Armstrong charged that the reassignment had the covert goal of disrupting FM radio development; however, RCA's proposal prevailed, and on June 27, 1945 the FCC announced the reassignment of the FM band to 90 channels from 88–106 MHz, which was soon expanded to 100 channels from 88–108 MHz, with the first 20 channels reserved for educational stations. A period of allowing existing FM stations to broadcast on both the original "low" and new "high" FM bands followed, which ended at midnight on January 8, 1949, at which time all low-band transmissions had to end.
In 1978 one additional frequency reserved for educational stations, 87.9 MHz, was allocated. In March 2008, the FCC requested public comment on turning the bandwidth currently occupied by analog television channels 5 and 6 (76–88 MHz) over to extending the FM broadcast band when the digital television transition was to be completed in February 2009 (ultimately delayed to June 2009). This proposed allocation would have effectively assigned frequencies corresponding to the existing Japanese FM radio service (which begins at 76 MHz) for use as an extension to the existing North American FM broadcast band. Several low-power television stations colloquially known as "Franken-FMs" operated primarily as radio stations on channel 6, using the 87.7 MHz audio carrier of that channel as a radio station receivable on most FM receivers configured to cover the whole of Band II, from 2009 to 2021; since then, a reduced number have received special temporary authority to carry a special audio carrier on their ATSC 3.0 signals to continue the status quo.
FM radio switch-off
With the gradual adoption of digital radio broadcasting (e.g. HD Radio, DAB+), some countries have planned and started an FM radio switch-off. In January 2018, Norway became the first country to discontinue FM as a result.
See also
FM broadcasting
Frequency modulation
References
Bandplans
Broadcast engineering | FM broadcast band | Engineering | 3,873 |
37,780,355 | https://en.wikipedia.org/wiki/Hyundai%20Play%20X | The Hyundai Play X is an Android tablet.
The device has a 2048×1536 IPS screen with about 3.1 million total pixels and a pixel density of 264 ppi. The Hyundai Play X (X900) houses a dual-core Rockchip RK3066 processor, built on a low-power 40 nm process and clocked at up to 1.6 GHz. A quad-core graphics processor, together with 1 GB of RAM stacked using package-on-package (PoP) technology, drives the high-resolution screen.
Description
It runs Android 4.1 and has two cameras: a 2.0 MP front camera and a 5.0 MP rear camera. The screen supports 10-point multi-touch, which is useful for video games. It is equipped with a Li-ion battery that supports about eight hours of operation.
Following the original slim design style, it has a black front panel with a silver frame. The front of the device is a 9.7-inch retina-class screen with a resolution of 2048×1536 and a 4:3 aspect ratio. The tablet is thin, about 9.5 mm. The device is powered by a dual-core Cortex-A9 RK3066 CPU at 1.6 GHz and a GC-400 MP4 quad-core GPU, with 1 GB of RAM and 16 GB of internal storage. The contrast ratio of the 9.7-inch screen is up to 800:1 and the brightness is 440 cd/m2.
References
Android (operating system) devices
Hyundai
Tablet computers | Hyundai Play X | Technology | 315 |
2,521,046 | https://en.wikipedia.org/wiki/Free%20induction%20decay | In Fourier transform nuclear magnetic resonance spectroscopy, free induction decay (FID) is the observable nuclear magnetic resonance (NMR) signal generated by non-equilibrium nuclear spin magnetization precessing about the magnetic field (conventionally along z). This non-equilibrium magnetization can be created generally by applying a pulse of radio-frequency close to the Larmor frequency of the nuclear spins.
If the magnetization vector has a non-zero component in the XY plane, then the precessing magnetisation will induce a corresponding oscillating voltage in a detection coil surrounding the sample. This time-domain signal (a sinusoid) is typically digitised and then Fourier transformed in order to obtain a frequency spectrum of the NMR signal i.e. the NMR spectrum.
The duration of the NMR signal is ultimately limited by T2 relaxation, but mutual interference of the different NMR frequencies present also causes the signal to be damped more quickly. When NMR frequencies are well-resolved, as is typically the case in the NMR of samples in solution, the overall decay of the FID is relaxation-limited and the FID is approximately exponential (with a modified time constant, denoted T2*). FID durations are then of the order of seconds for nuclei such as 1H.
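A minimal numerical illustration of the two preceding paragraphs, as a sketch with invented frequencies and an assumed T2* of 1 s rather than data from any real spectrometer: the FID is a sum of exponentially damped complex sinusoids, and its Fourier transform yields peaks at the corresponding NMR frequencies.

```python
import numpy as np

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)               # 4 s acquisition
t2_star = 1.0                                  # effective decay constant, s

# two illustrative resonance offsets (50 Hz and 120 Hz) with different amplitudes
fid = (1.0 * np.exp(2j * np.pi * 50 * t) +
       0.5 * np.exp(2j * np.pi * 120 * t)) * np.exp(-t / t2_star)

spectrum = np.fft.fftshift(np.fft.fft(fid))
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, d=1.0 / fs))

# the magnitude spectrum shows Lorentzian-like peaks near 50 Hz and 120 Hz
for f in (50, 120):
    idx = np.argmin(np.abs(freqs - f))
    print(f, "Hz ->", round(abs(spectrum[idx]), 1))
```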
Particularly if only a limited number of frequency components are present, the FID may be analysed directly for quantitative determination of physical properties, such as the hydrogen content of aviation fuel or the solid-to-liquid ratio in dairy products (time-domain NMR).
Advances in the development of quantum-scale sensors, particularly NV centres, have enabled the observation of the FID of single nuclei. When measuring the precession of a single nucleus, quantum mechanical measurement back-action has to be considered; in this special case the measurement itself also contributes to the decay, as predicted by quantum mechanics.
References
Nuclear magnetic resonance | Free induction decay | Physics,Chemistry | 398 |
19,788,666 | https://en.wikipedia.org/wiki/Adaptive%20switching | An adaptive switch is a network switch designed to normally operate in cut-through mode but if a port's error rate jumps too high, the switch automatically reconfigures the port to run in store-and-forward mode. This optimizes the switch's performance by providing higher speed cut-through switching if error rates are low but higher throughput store-and-forward switching when error rates are high.
Adaptive switching is typically done on a port-by-port basis.
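The mode selection can be pictured as a small per-port state machine. The following Python sketch is illustrative only; the thresholds, hysteresis values, and names are invented for the example and are not taken from any vendor's implementation:

```python
from dataclasses import dataclass

@dataclass
class Port:
    mode: str = "cut-through"
    frames: int = 0
    errors: int = 0

ERROR_RATE_HIGH = 0.01   # switch to store-and-forward above 1% errored frames
ERROR_RATE_LOW = 0.001   # return to cut-through below 0.1% (hysteresis)

def update_mode(port: Port) -> None:
    """Re-evaluate a single port at the end of a sampling interval."""
    if port.frames == 0:
        return
    rate = port.errors / port.frames
    if port.mode == "cut-through" and rate > ERROR_RATE_HIGH:
        port.mode = "store-and-forward"   # buffer and CRC-check before forwarding
    elif port.mode == "store-and-forward" and rate < ERROR_RATE_LOW:
        port.mode = "cut-through"         # resume low-latency forwarding
    port.frames = port.errors = 0         # start the next sampling window

p = Port(frames=10_000, errors=250)
update_mode(p)
print(p.mode)   # store-and-forward
```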
References
Networking hardware | Adaptive switching | Technology,Engineering | 105 |
706,295 | https://en.wikipedia.org/wiki/Canonical%20commutation%20relation | In quantum mechanics, the canonical commutation relation is the fundamental relation between canonical conjugate quantities (quantities which are related by definition such that one is the Fourier transform of another). For example,
between the position operator and momentum operator in the direction of a point particle in one dimension, where is the commutator of and , is the imaginary unit, and is the reduced Planck constant , and is the unit operator. In general, position and momentum are vectors of operators and their commutation relation between different components of position and momentum can be expressed as
where is the Kronecker delta.
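The relation can be checked directly in the position representation, where the position operator multiplies by x and the momentum operator acts as −iħ d/dx. A minimal symbolic sketch using SymPy (the operator definitions are the standard textbook ones, written out here for illustration):

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
psi = sp.Function('psi')(x)

X = lambda f: x * f                          # position operator: multiply by x
P = lambda f: -sp.I * hbar * sp.diff(f, x)   # momentum operator: -i*hbar d/dx

# [x, p] acting on an arbitrary test function psi(x)
commutator = sp.simplify(X(P(psi)) - P(X(psi)))
print(commutator)   # I*hbar*psi(x), i.e. [x, p] = i*hbar on any test function
```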
This relation is attributed to Werner Heisenberg, Max Born and Pascual Jordan (1925), who called it a "quantum condition" serving as a postulate of the theory; it was noted by E. Kennard (1927) to imply the Heisenberg uncertainty principle. The Stone–von Neumann theorem gives a uniqueness result for operators satisfying (an exponentiated form of) the canonical commutation relation.
Relation to classical mechanics
By contrast, in classical physics, all observables commute and the commutator would be zero. However, an analogous relation exists, which is obtained by replacing the commutator with the Poisson bracket multiplied by ,
This observation led Dirac to propose that the quantum counterparts , of classical observables , satisfy
In 1946, Hip Groenewold demonstrated that a general systematic correspondence between quantum commutators and Poisson brackets could not hold consistently.
However, he further appreciated that such a systematic correspondence does, in fact, exist between the quantum commutator and a deformation of the Poisson bracket, today called the Moyal bracket, and, in general, quantum operators and classical observables and distributions in phase space. He thus finally elucidated the consistent correspondence mechanism, the Wigner–Weyl transform, that underlies an alternate equivalent mathematical representation of quantum mechanics known as deformation quantization.
Derivation from Hamiltonian mechanics
According to the correspondence principle, in certain limits the quantum equations of states must approach Hamilton's equations of motion. The latter state the following relation between the generalized coordinate q (e.g. position) and the generalized momentum p:
In quantum mechanics the Hamiltonian , (generalized) coordinate and (generalized) momentum are all linear operators.
The time derivative of a quantum state is represented by the operator (by the Schrödinger equation). Equivalently, since in the Schrödinger picture the operators are not explicitly time-dependent, the operators can be seen to be evolving in time (for a contrary perspective where the operators are time dependent, see Heisenberg picture) according to their commutation relation with the Hamiltonian:
In order for that to reconcile in the classical limit with Hamilton's equations of motion, must depend entirely on the appearance of in the Hamiltonian and must depend entirely on the appearance of in the Hamiltonian. Further, since the Hamiltonian operator depends on the (generalized) coordinate and momentum operators, it can be viewed as a functional, and we may write (using functional derivatives):
In order to obtain the classical limit we must then have
Weyl relations
The group generated by exponentiation of the 3-dimensional Lie algebra determined by the commutation relation is called the Heisenberg group. This group can be realized as the group of upper triangular matrices with ones on the diagonal.
According to the standard mathematical formulation of quantum mechanics, quantum observables such as and should be represented as self-adjoint operators on some Hilbert space. It is relatively easy to see that two operators satisfying the above canonical commutation relations cannot both be bounded. Certainly, if and were trace class operators, the relation gives a nonzero number on the right and zero on the left.
Alternately, if and were bounded operators, note that , hence the operator norms would satisfy
so that, for any n,
However, can be arbitrarily large, so at least one operator cannot be bounded, and the dimension of the underlying Hilbert space cannot be finite. If the operators satisfy the Weyl relations (an exponentiated version of the canonical commutation relations, described below) then as a consequence of the Stone–von Neumann theorem, both operators must be unbounded.
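A standard version of the estimate behind this argument, in the usual textbook form, runs as follows: the commutation relation gives, by induction, for every n ≥ 1,

```latex
[\hat{x}^{\,n}, \hat{p}] = i\hbar\, n\, \hat{x}^{\,n-1},
\qquad\text{hence}\qquad
n\hbar\,\|\hat{x}^{\,n-1}\| = \bigl\|[\hat{x}^{\,n},\hat{p}]\bigr\|
\le 2\,\|\hat{x}^{\,n-1}\|\,\|\hat{x}\|\,\|\hat{p}\| ,
```

so that, provided $\hat{x}^{\,n-1} \neq 0$, one gets $n\hbar \le 2\|\hat{x}\|\,\|\hat{p}\|$ for every n, which is impossible for bounded operators.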
Still, these canonical commutation relations can be rendered somewhat "tamer" by writing them in terms of the (bounded) unitary operators and . The resulting braiding relations for these operators are the so-called Weyl relations
These relations may be thought of as an exponentiated version of the canonical commutation relations; they reflect that translations in position and translations in momentum do not commute. One can easily reformulate the Weyl relations in terms of the representations of the Heisenberg group.
The uniqueness of the canonical commutation relations—in the form of the Weyl relations—is then guaranteed by the Stone–von Neumann theorem.
For technical reasons, the Weyl relations are not strictly equivalent to the canonical commutation relation . If and were bounded operators, then a special case of the Baker–Campbell–Hausdorff formula would allow one to "exponentiate" the canonical commutation relations to the Weyl relations. Since, as we have noted, any operators satisfying the canonical commutation relations must be unbounded, the Baker–Campbell–Hausdorff formula does not apply without additional domain assumptions. Indeed, counterexamples exist satisfying the canonical commutation relations but not the Weyl relations. (These same operators give a counterexample to the naive form of the uncertainty principle.) These technical issues are the reason that the Stone–von Neumann theorem is formulated in terms of the Weyl relations.
A discrete version of the Weyl relations, in which the parameters s and t range over , can be realized on a finite-dimensional Hilbert space by means of the clock and shift matrices.
Generalizations
It can be shown that
Using , it can be shown that by mathematical induction
generally known as McCoy's formula.
In addition, the simple formula
valid for the quantization of the simplest classical system, can be generalized to the case of an arbitrary Lagrangian . We identify canonical coordinates (such as in the example above, or a field in the case of quantum field theory) and canonical momenta (in the example above it is , or more generally, some functions involving the derivatives of the canonical coordinates with respect to time):
This definition of the canonical momentum ensures that one of the Euler–Lagrange equations has the form
The canonical commutation relations then amount to
where is the Kronecker delta.
Gauge invariance
Canonical quantization is applied, by definition, on canonical coordinates. However, in the presence of an electromagnetic field, the canonical momentum is not gauge invariant. The correct gauge-invariant momentum (or "kinetic momentum") is
$\hat{p}_{\text{kin}} = \hat{p} - q\hat{A}$ (SI units), $\quad \hat{p}_{\text{kin}} = \hat{p} - \frac{q}{c}\hat{A}$ (cgs units),
where is the particle's electric charge, is the vector potential, and is the speed of light. Although the quantity is the "physical momentum", in that it is the quantity to be identified with momentum in laboratory experiments, it does not satisfy the canonical commutation relations; only the canonical momentum does that. This can be seen as follows.
The non-relativistic Hamiltonian for a quantized charged particle of mass in a classical electromagnetic field is (in cgs units)
where is the three-vector potential and is the scalar potential. This form of the Hamiltonian, as well as the Schrödinger equation , the Maxwell equations and the Lorentz force law are invariant under the gauge transformation
where and is the gauge function.
The angular momentum operator is
and obeys the canonical quantization relations
defining the Lie algebra for so(3), where is the Levi-Civita symbol. Under gauge transformations, the angular momentum transforms as
The gauge-invariant angular momentum (or "kinetic angular momentum") is given by
which has the commutation relations
where is the magnetic field. The inequivalence of these two formulations shows up in the Zeeman effect and the Aharonov–Bohm effect.
Uncertainty relation and commutators
All such nontrivial commutation relations for pairs of operators lead to corresponding uncertainty relations, involving positive semi-definite expectation contributions by their respective commutators and anticommutators. In general, for two Hermitian operators and , consider expectation values in a system in the state , the variances around the corresponding expectation values being , etc.
Then
where is the commutator of and , and is the anticommutator.
This follows through use of the Cauchy–Schwarz inequality, since
, and ; and similarly for the shifted operators and . (Cf. uncertainty principle derivations.)
Substituting for and (and taking care with the analysis) yield Heisenberg's familiar uncertainty relation for and , as usual.
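Written out explicitly in the standard Robertson–Schrödinger form (quoted here for reference), the inequality and its best-known special case read:

```latex
\sigma_A^2\,\sigma_B^2 \;\ge\;
\left|\tfrac{1}{2}\bigl\langle\{\hat A,\hat B\}\bigr\rangle - \langle\hat A\rangle\langle\hat B\rangle\right|^2
+ \left|\tfrac{1}{2i}\bigl\langle[\hat A,\hat B]\bigr\rangle\right|^2,
\qquad\qquad
\sigma_x\,\sigma_p \;\ge\; \frac{\hbar}{2} .
```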
Uncertainty relation for angular momentum operators
For the angular momentum operators , etc., one has that
where is the Levi-Civita symbol and simply reverses the sign of the answer under pairwise interchange of the indices. An analogous relation holds for the spin operators.
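For concreteness, in standard conventions (with ħ written explicitly), the angular momentum commutators and the uncertainty bound they imply take the form:

```latex
[\hat J_i, \hat J_j] = i\hbar\,\varepsilon_{ijk}\,\hat J_k ,
\qquad\qquad
\sigma_{J_x}\,\sigma_{J_y} \;\ge\; \frac{\hbar}{2}\,\bigl|\langle \hat J_z\rangle\bigr| .
```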
Here, for and , in angular momentum multiplets , one has, for the transverse components of the Casimir invariant , the -symmetric relations
,
as well as .
Consequently, the above inequality applied to this commutation relation specifies
hence
and therefore
so, then, it yields useful constraints such as a lower bound on the Casimir invariant: , and hence , among others.
See also
Canonical quantization
CCR and CAR algebras
Conformastatic spacetimes
Lie derivative
Moyal bracket
Stone–von Neumann theorem
References
Quantum mechanics
Mathematical physics | Canonical commutation relation | Physics,Mathematics | 1,999
29,283,004 | https://en.wikipedia.org/wiki/Operation%20Damocles | Operation Damocles was a covert campaign of the Israeli intelligence agency, Mossad in August 1962 which targeted German scientists and technicians, formerly employed in Nazi Germany's rocket program, who were developing rockets for Egypt at a military site known as Factory 333. According to Otto Joklik, an Austrian scientist involved with the project, the rockets being developed were programmed to use radioactive waste.
The chief tactics were letter bombs and abduction. In March 1963, Israeli Prime Minister David Ben-Gurion demanded the resignation of then chief of Mossad, Isser Harel, over the operation, which effectively ended it. The operation and diplomatic pressure had driven the scientists out of Egypt by the end of 1963.
Egypt's rocket program
The Egyptian President Gamal Abdel Nasser did not want to rely upon the West or the Soviet Union for rockets, since such an arrangement would be inconsistent with Egypt's policy of Cold War non-alignment. An indigenous rocket program was thus the only way Egypt could match the military technology of Egypt's then enemy, Israel. At the time, rocket technology was scarce in the Middle East, so Egypt had to look to European countries for material and expertise. Hassan Sayed Kamil, an Egyptian-Swiss arms dealer, provided Egypt with material and recruits from West Germany and Switzerland, despite both countries having official laws prohibiting the provision of weapons to Middle Eastern countries. Many of the West German scientists had previously been involved in Nazi Germany's rocket program during World War II, working at Peenemünde to develop the V-2 rocket, and some had worked for France's rocket program in the aftermath of the war.
Egypt's rocket program came to the world's attention when it successfully test-fired a rocket in July 1962 and then paraded two new types of rocket through the streets of Cairo, causing worldwide interest and shock. The flow of rocket expertise from West Germany to Egypt damaged relations between Israel and West Germany, but did not stop the payment of reparations or the covert supply of arms to Israel by West Germany. Israel became increasingly concerned with the program after a disaffected Austrian scientist involved with it approached the Israeli secret service and claimed the Egyptians were attempting to equip the missiles with radioactive waste as well as procuring nuclear warheads. In mid-August, Mossad managed to obtain a document written by a German scientist, detailing certain aspects of Factory 333 – the number of rockets being built (900), and additional, weaker evidence that there were plans to develop chemical, biological and gas-filled warheads for these rockets. To gain the support of the Israeli population, the head of Mossad planted stories about sinister weapons being developed by the German scientists in Egypt.
Attacks on German scientists
The main tactics employed by Israel against the scientists were letter bombs and abductions. Their families were threatened with violence to persuade the scientists to return to Europe. Mossad provided a small operational unit, headed by future Israeli Prime Minister Yitzhak Shamir, but since it lacked an operational division at that time it mainly used units from the Shin Bet to carry out the attacks.
A parcel sent to rocket scientist Wolfgang Pilz exploded in his office when opened on 27 November 1962, killing five and injuring his secretary.
A parcel sent to the Heliopolis rocket factory killed five Egyptian workers.
A pistol was fired at a West German professor in the town of Lörrach who was researching electronics for Egypt. The bullet missed and the gunman escaped by car.
Heinz Krug, 49, the chief of a Munich company supplying military hardware to Egypt, disappeared in September 1962 and is believed to have been murdered. Krug was director of an Egyptian dummy company operating out of Munich that was involved in building missiles in Egypt. According to Dan Raviv and Yossi Melman, Krug was killed near Munich by former Nazi commando Otto Skorzeny. According to Ronen Bergman, Krug was kidnapped in Munich by a Mossad squad headed by Mossad chief Isser Harel himself and was killed in Israel after having been subjected to harsh interrogation.
Hans Kleinwachter, a rocket scientist who worked on the V-2 project was targeted in February 1963, but the assassination attempt failed due to a weapon malfunction.
Public exposure of the operation
Two Mossad agents, Joseph Ben-Gal, an Israeli, and Otto Joklik, an Austrian, were arrested in Switzerland for threatening Heidi Goercke, daughter of Paul-Jens Goercke, a West German electronic-guidance expert working at Factory 333. They ordered her to persuade Goercke to return to Germany, threatening their safety if he did not comply. They were arrested for coercion and illegal operation on behalf of a foreign state. Swiss investigations revealed that they were also involved in the abduction of Krug and the assassination attempt upon Kleinwachter. The arrests caused a public scandal for Israel. Israel publicly denied the claims, asserting that its agents only used methods of "peaceful persuasion".
Resignation of Isser Harel
Following the capture of Adolf Eichmann, Isser Harel became preoccupied by the Holocaust, which hardened his attitude towards the German scientists. He said when challenged about the operation, "There are people who are marked to die".
The campaign ended when Israeli Prime Minister David Ben-Gurion demanded that Mossad halt the attacks because he was worried about the consequences upon German–Israeli relations. Then-Foreign Minister Golda Meir and Israeli diplomats trying to build relations between West Germany and Israel were furious about the attacks. Harel was compelled to resign and Meir Amit, his successor as chief of Mossad, claimed that Harel had overestimated the danger to Israel posed by Egypt's weapon programs. Yitzhak Shamir and others resigned from the Mossad in protest at Harel's treatment. David Ben-Gurion quit his post three months later.
The combination of the death threats and diplomatic pressure drove the scientists away from Egypt by the end of 1963. By 1967, Egypt's rocket program had come to a standstill and Egypt turned to the Soviet Union, which supplied it with Scud B rockets.
References
Mossad operations
Egypt in the Arab–Israeli conflict
Assassination campaigns
Egypt–Germany relations
History of Egypt (1900–present)
1962 in Israel
1962 in Egypt
1963 in Egypt
1963 in Israel
Secret military programs
Political scandals in Israel
Espionage scandals and incidents
August 1962 events in Africa
Fascism in the Arab world
Letter and package bombings
Improvised explosive device bombings in the 1960s
Germany–Israel relations
Gamal Abdel Nasser
Improvised explosive device bombings in Egypt
Attacks on military installations in the 1960s
Attacks on military installations in Egypt
Attacks on buildings and structures in 1962 | Operation Damocles | Engineering | 1,347 |
61,929,977 | https://en.wikipedia.org/wiki/B%C3%BClent%20%C5%9E%C4%B1k | Bülent Şık is a Turkish food engineer, environmental and human rights activist and a whistleblower. He was convicted after disclosing the results from a government study on environmental pollution and carcinogens.
Early life and education
Career
Şık has worked at Akdeniz University in Antalya, where he was a deputy director of the Food Safety and Agricultural Research Center.
In the early 2010s, Şık worked on a five-year research project for Turkey's Ministry of Health investigating a possible relation between the high incidence of cancer in western Turkey (Kocaeli, Tekirdağ, Kırklareli, Edirne and Antalya) and toxicity in local soil, water, and food. Şık found dangerous levels of toxicity in a number of food and water samples, concluding that the water in several residential areas was unsafe to drink. In 2015, he reported his findings to the government.
In 2016, he was fired from his university position as assistant professor by a presidential decree-law after signing a petition "calling for peace between Turkish forces and Kurdish militants in southeast Turkey".
In April 2018, as no action had been taken on the water pollution for three years, Şık published his findings in the opposition newspaper Cumhuriyet. After the publication, the Turkish government claimed the newspaper article violated confidentiality clauses prohibiting the findings from being revealed without official approval, but it did not deny the accuracy of the information. Subsequently, the Ministry of Health sued Şık for "revealing confidential information as well as provoking outrage among the public".
On 26 September 2019, Şık was sentenced to 15 months in jail for "disclosing information about duty", while he was acquitted of "providing prohibited information". Amnesty International has criticized the trial, describing Şık as a whistleblower.
Private life
Bülent Şık is the brother of Ahmet Şık, a journalist and an opposition party member of Parliament.
References
Living people
20th-century births
Food engineers
Turkish engineering academics
Academic staff of Akdeniz University
Cancer researchers
Turkish environmentalists
Turkish human rights activists
Turkish whistleblowers
Turkish prisoners and detainees
Year of birth missing (living people) | Bülent Şık | Engineering | 449 |
25,280,408 | https://en.wikipedia.org/wiki/Online%20locator%20service | An online locator service (also known as location finder, store finder, or store locator, or similar) is a feature found on websites of businesses with multiple locations that allows visitors to the site to find locations of the business within proximity of an address or postal code or within a selected region. Types of businesses that often have this feature include chain retailers, hotels, restaurants, and other businesses that can be found in multiple metropolitan areas.
The locator also provides important information about each location, including its address, phone number, hours of operation, services provided, and sometimes directions to the location. On many sites, searches can be narrowed to locations providing certain services not provided at all locations (e.g. 24-hour operation, handicap accessibility, pharmacies).
Location finders often operate in conjunction with a well-known online map service, such as Google Maps, MapQuest, or Bing Maps, allowing the user to see on a map where the particular location is found.
Software
Store locators use locator software in order to allow visitors to find nearby stores and business locations. In a common form, a visitor inputs a ZIP code and the locator returns all locations in a database within a specified radius. It is often used in conjunction with mapping/driving direction software on brick and click corporate websites to help customers locate a physical business location.
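Behind such a radius search, the locator typically geocodes the visitor's input and then computes great-circle distances to each store in the database. A minimal, self-contained Python sketch (the store list, coordinates, and function names are invented for illustration):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

stores = [
    {"name": "Downtown", "lat": 40.7128, "lon": -74.0060},
    {"name": "Uptown",   "lat": 40.8610, "lon": -73.8900},
]

def find_nearby(lat, lon, radius_km):
    """Return (distance, name) pairs for stores within the radius, nearest first."""
    hits = [(haversine_km(lat, lon, s["lat"], s["lon"]), s["name"]) for s in stores]
    return sorted(h for h in hits if h[0] <= radius_km)

print(find_nearby(40.75, -74.0, radius_km=10))   # only "Downtown" falls inside 10 km
```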
Some online locators determine the user's location from the user's IP address, rather than asking the user to input a location. For example, one hardware retail chain uses IP geolocation software from Digital Element to power the "My Local Ace" section of its website. Based on a site visitor's location, the website can show the visitor how many stores are in their area, as well as a city-level locator map to help the customer find the store closest to their address.
References
Retail processes and techniques
Digital marketing
Geographic position | Online locator service | Mathematics | 393 |
43,302,095 | https://en.wikipedia.org/wiki/Lie%20group%E2%80%93Lie%20algebra%20correspondence | In mathematics, Lie group–Lie algebra correspondence allows one to correspond a Lie group to a Lie algebra or vice versa, and study the conditions for such a relationship. Lie groups that are isomorphic to each other have Lie algebras that are isomorphic to each other, but the converse is not necessarily true. One obvious counterexample is and (see real coordinate space and the circle group respectively) which are non-isomorphic to each other as Lie groups but their Lie algebras are isomorphic to each other. However, for simply connected Lie groups, the Lie group-Lie algebra correspondence is one-to-one.
In this article, a Lie group refers to a real Lie group. For the complex and p-adic cases, see complex Lie group and p-adic Lie group. In this article, manifolds (in particular Lie groups) are assumed to be second countable; in particular, they have at most countably many connected components.
Basics
The Lie algebra of a Lie group
There are various ways one can understand the construction of the Lie algebra of a Lie group G. One approach uses left-invariant vector fields. A vector field X on G is said to be invariant under left translations if, for any g, h in G,
where is defined by and is the differential of between tangent spaces.
Let be the set of all left-translation-invariant vector fields on G. It is a real vector space. Moreover, it is closed under the Lie bracket of vector fields; i.e., is a left-translation-invariant vector field if X and Y are. Thus, is a Lie subalgebra of the Lie algebra of all vector fields on G and is called the Lie algebra of G. One can understand this more concretely by identifying the space of left-invariant vector fields with the tangent space at the identity, as follows: Given a left-invariant vector field, one can take its value at the identity, and given a tangent vector at the identity, one can extend it to a left-invariant vector field. This correspondence is one-to-one in both directions, so is bijective. Thus, the Lie algebra can be thought of as the tangent space at the identity and the bracket of X and Y in can be computed by extending them to left-invariant vector fields, taking the bracket of the vector fields, and then evaluating the result at the identity.
There is also another incarnation of as the Lie algebra of primitive elements of the Hopf algebra of distributions on G with support at the identity element; for this, see #Related constructions below.
Matrix Lie groups
Suppose G is a closed subgroup of GL(n;C), and thus a Lie group, by the closed subgroups theorem. Then the Lie algebra of G may be computed as
For example, one can use the criterion to establish the correspondence for classical compact groups (cf. the table in "compact Lie groups" below.)
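Concretely, for a closed matrix group the criterion identifies the Lie algebra as the set of matrices X such that exp(tX) lies in G for all real t. A small numerical sketch for SO(3), assuming only NumPy and SciPy and an arbitrarily chosen skew-symmetric matrix, illustrating that exponentiating an element of the Lie algebra lands in the group:

```python
import numpy as np
from scipy.linalg import expm

# X in so(3): the real skew-symmetric 3x3 matrices, X^T = -X
X = np.array([[0.0, -0.3, 0.2],
              [0.3,  0.0, -0.1],
              [-0.2, 0.1,  0.0]])

for t in (0.5, 1.0, 2.0):
    g = expm(t * X)
    # g should be orthogonal with determinant 1, i.e. an element of SO(3)
    print(np.allclose(g.T @ g, np.eye(3)), np.isclose(np.linalg.det(g), 1.0))
```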
Homomorphisms
If
is a Lie group homomorphism, then its differential at the identity element
is a Lie algebra homomorphism (brackets go to brackets), which has the following properties:
for all X in Lie(G), where "exp" is the exponential map
.
If the image of f is closed, then and the first isomorphism theorem holds: f induces the isomorphism of Lie groups:
The chain rule holds: if and are Lie group homomorphisms, then .
In particular, if H is a closed subgroup of a Lie group G, then is a Lie subalgebra of . Also, if f is injective, then f is an immersion and so G is said to be an immersed (Lie) subgroup of H. For example, is an immersed subgroup of H. If f is surjective, then f is a submersion and if, in addition, G is compact, then f is a principal bundle with the structure group its kernel. (Ehresmann's lemma)
Other properties
Let be a direct product of Lie groups and projections. Then the differentials give the canonical identification:
If are Lie subgroups of a Lie group, then
Let G be a connected Lie group. If H is a Lie group, then any Lie group homomorphism is uniquely determined by its differential . Precisely, there is the exponential map (and one for H) such that and, since G is connected, this determines f uniquely. In general, if U is a neighborhood of the identity element in a connected topological group G, then coincides with G, since the former is an open (hence closed) subgroup. Now, defines a local homeomorphism from a neighborhood of the zero vector to the neighborhood of the identity element. For example, if G is the Lie group of invertible real square matrices of size n (general linear group), then is the Lie algebra of real square matrices of size n and .
The correspondence
The correspondence between Lie groups and Lie algebras includes the following three main results.
Lie's third theorem: Every finite-dimensional real Lie algebra is the Lie algebra of some simply connected Lie group.
The homomorphisms theorem: If is a Lie algebra homomorphism and if G is simply connected, then there exists a (unique) Lie group homomorphism such that .
The subgroups–subalgebras theorem: If G is a Lie group and is a Lie subalgebra of , then there is a unique connected Lie subgroup (not necessarily closed) H of G with Lie algebra .
In the second part of the correspondence, the assumption that G is simply connected cannot be omitted. For example, the Lie algebras of SO(3) and SU(2) are isomorphic, but there is no corresponding homomorphism of SO(3) into SU(2). Rather, the homomorphism goes from the simply connected group SU(2) to the non-simply connected group SO(3). If G and H are both simply connected and have isomorphic Lie algebras, the above result allows one to show that G and H are isomorphic. One method to construct f is to use the Baker–Campbell–Hausdorff formula.
For readers familiar with category theory the correspondence can be summarised as follows: First, the operation of associating to each connected Lie group its Lie algebra , and to each homomorphism of Lie groups the corresponding differential at the neutral element, is a (covariant) functor
from the category
of connected (real) Lie groups to the category of finite-dimensional (real) Lie-algebras. This functor has a left adjoint functor from (finite dimensional) Lie algebras to Lie groups
(which is necessarily unique up to canonical isomorphism). In other words, there is a natural isomorphism of bifunctors
is the (up to isomorphism unique) simply-connected Lie group with Lie algebra .
The associated natural unit morphisms of the adjunction are isomorphisms, which corresponds to being fully faithful (part of the second statement above).
The corresponding counit is the canonical projection from the simply connected covering; its surjectivity corresponds to being a faithful functor.
Proof of Lie's third theorem
Perhaps the most elegant proof of the first result above uses Ado's theorem, which says any finite-dimensional Lie algebra (over a field of any characteristic) is a Lie subalgebra of the Lie algebra of square matrices. The proof goes as follows: by Ado's theorem, we assume is a Lie subalgebra. Let G be the closed subgroup of generated by (without taking the closure, one can get a pathological dense example, as in the case of the irrational winding of the torus), and let be a simply connected covering of G; it is not hard to show that is a Lie group and that the covering map is a Lie group homomorphism. Since , this completes the proof.
Example: Each element X in the Lie algebra gives rise to the Lie algebra homomorphism
By Lie's third theorem, as and exp for it is the identity, this homomorphism is the differential of the Lie group homomorphism for some immersed subgroup H of G. This Lie group homomorphism, called the one-parameter subgroup generated by X, is precisely the exponential map and H its image. The preceding can be summarized to saying that there is a canonical bijective correspondence between and the set of one-parameter subgroups of G.
Proof of the homomorphisms theorem
One approach to proving the second part of the Lie group-Lie algebra correspondence (the homomorphisms theorem) is to use the Baker–Campbell–Hausdorff formula, as in Section 5.7 of Hall's book. Specifically, given the Lie algebra homomorphism from to , we may define locally (i.e., in a neighborhood of the identity) by the formula
where is the exponential map for G, which has an inverse defined near the identity. We now argue that f is a local homomorphism. Thus, given two elements near the identity and (with X and Y small), we consider their product . According to the Baker–Campbell–Hausdorff formula, we have , where
with indicating other terms expressed as repeated commutators involving X and Y. Thus,
because is a Lie algebra homomorphism. Using the Baker–Campbell–Hausdorff formula again, this time for the group H, we see that this last expression becomes , and therefore we have
Thus, f has the homomorphism property, at least when X and Y are sufficiently small. This argument is only local, since the exponential map is only invertible in a small neighborhood of the identity in G and since the Baker–Campbell–Hausdorff formula only holds if X and Y are small. The assumption that G is simply connected has not yet been used.
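For reference, the first few terms of the Baker–Campbell–Hausdorff series invoked in this argument are, in the usual normalization:

```latex
\log\!\left(e^{X} e^{Y}\right)
 = X + Y + \tfrac{1}{2}[X,Y]
 + \tfrac{1}{12}\bigl[X,[X,Y]\bigr]
 - \tfrac{1}{12}\bigl[Y,[X,Y]\bigr] + \cdots
```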
The next stage in the argument is to extend f from a local homomorphism to a global one. The extension is done by defining f along a path and then using the simple connectedness of G to show that the definition is independent of the choice of path.
Lie group representations
A special case of Lie correspondence is a correspondence between finite-dimensional representations of a Lie group and representations of the associated Lie algebra.
The general linear group is a (real) Lie group and any Lie group homomorphism
is called a representation of the Lie group G. The differential
is then a Lie algebra homomorphism called a Lie algebra representation. (The differential is often simply denoted by .)
The homomorphisms theorem (mentioned above as part of the Lie group-Lie algebra correspondence) then says that if is the simply connected Lie group whose Lie algebra is , every representation of comes from a representation of G. The assumption that G be simply connected is essential. Consider, for example, the rotation group SO(3), which is not simply connected. There is one irreducible representation of the Lie algebra in each dimension, but only the odd-dimensional representations of the Lie algebra come from representations of the group. (This observation is related to the distinction between integer spin and half-integer spin in quantum mechanics.) On the other hand, the group SU(2) is simply connected with Lie algebra isomorphic to that of SO(3), so every representation of the Lie algebra of SO(3) does give rise to a representation of SU(2).
The adjoint representation
An example of a Lie group representation is the adjoint representation of a Lie group G; each element g in a Lie group G defines an automorphism of G by conjugation: ; the differential is then an automorphism of the Lie algebra . This way, we get a representation , called the adjoint representation. The corresponding Lie algebra homomorphism is called the adjoint representation of and is denoted by . One can show , which in particular implies that the Lie bracket of is determined by the group law on G.
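The statement alluded to above is the standard relation Ad(exp X) = exp(ad X). It can be checked numerically for a matrix group; the sketch below (NumPy/SciPy only, with arbitrarily chosen skew-symmetric matrices) compares g Y g⁻¹ with the truncated exponential series of ad_X applied to Y:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, -0.4, 0.1],
              [0.4,  0.0, 0.3],
              [-0.1, -0.3, 0.0]])      # X in so(3)
Y = np.array([[0.0, 0.2, -0.5],
              [-0.2, 0.0, 0.1],
              [0.5, -0.1, 0.0]])       # Y in so(3)

g = expm(X)
Ad_g_Y = g @ Y @ np.linalg.inv(g)      # Ad(g)Y = g Y g^{-1}

# exp(ad_X) Y computed as the series sum of ad_X^k(Y) / k!
term, series = Y.copy(), Y.copy()
for k in range(1, 25):
    term = (X @ term - term @ X) / k   # ad_X(previous term) / k
    series = series + term

print(np.allclose(Ad_g_Y, series))     # True: Ad(exp X) = exp(ad_X)
```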
By Lie's third theorem, there exists a subgroup of whose Lie algebra is . ( is in general not a closed subgroup; only an immersed subgroup.) It is called the adjoint group of . If G is connected, it fits into the exact sequence:
where is the center of G. If the center of G is discrete, then Ad here is a covering map.
Let G be a connected Lie group. Then G is unimodular if and only if for all g in G.
Let G be a Lie group acting on a manifold X and Gx the stabilizer of a point x in X. Let . Then
If the orbit is locally closed, then the orbit is a submanifold of X and .
For a subset A of or G, let
be the Lie algebra centralizer and the Lie group centralizer of A. Then .
If H is a closed connected subgroup of G, then H is normal if and only if is an ideal and in such a case .
Abelian Lie groups
Let G be a connected Lie group. Since the Lie algebra of the center of G is the center of the Lie algebra of G (cf. the previous §), G is abelian if and only if its Lie algebra is abelian.
If G is abelian, then the exponential map is a surjective group homomorphism. The kernel of it is a discrete group (since the dimension is zero) called the integer lattice of G and is denoted by . By the first isomorphism theorem, induces the isomorphism .
By the rigidity argument, the fundamental group of a connected Lie group G is a central subgroup of a simply connected covering of G; in other words, G fits into the central extension
Equivalently, given a Lie algebra and a simply connected Lie group whose Lie algebra is , there is a one-to-one correspondence between quotients of by discrete central subgroups and connected Lie groups having Lie algebra .
For the complex case, complex tori are important; see complex Lie group for this topic.
Compact Lie groups
Let G be a connected Lie group with finite center. Then the following are equivalent.
G is compact.
(Weyl) The simply connected covering of G is compact.
The adjoint group is compact.
There exists an embedding as a closed subgroup.
The Killing form on is negative definite.
For each X in , is diagonalizable and has zero or purely imaginary eigenvalues.
There exists an invariant inner product on .
It is important to emphasize that the equivalence of the preceding conditions holds only under the assumption that G has finite center. Thus, for example, if G is compact with finite center, the universal cover is also compact. Clearly, this conclusion does not hold if G has infinite center, e.g., if . The last three conditions above are purely Lie algebraic in nature.
If G is a compact Lie group, then
where the left-hand side is the Lie algebra cohomology of and the right-hand side is the de Rham cohomology of G. (Roughly, this is a consequence of the fact that any differential form on G can be made left invariant by the averaging argument.)
Related constructions
Let G be a Lie group. The associated Lie algebra of G may be alternatively defined as follows. Let be the algebra of distributions on G with support at the identity element with the multiplication given by convolution. is in fact a Hopf algebra. The Lie algebra of G is then , the Lie algebra of primitive elements in . By the Milnor–Moore theorem, there is the canonical isomorphism between the universal enveloping algebra of and .
See also
Compact Lie algebra
Milnor–Moore theorem
Formal group
Malcev Lie algebra
Distribution on a linear algebraic group
Citations
References
External links
Notes for Math 261A Lie groups and Lie algebras
Formal Lie theory in characteristic zero, a blog post by Akhil Mathew
Differential geometry
Lie algebras
Lie groups
Manifolds | Lie group–Lie algebra correspondence | Mathematics | 3,258 |
366,726 | https://en.wikipedia.org/wiki/The%20Three%20Doctors%20%28Doctor%20Who%29 | The Three Doctors is the first serial of the tenth season of the British science fiction television series Doctor Who, first broadcast in four weekly parts on BBC1 from 30 December 1972 to 20 January 1973.
In the serial, the solar engineer Omega (Stephen Thorne), the creator of the experiments that allowed the Time Lords to travel in time, seeks revenge on the Time Lords after he was left for dead in a universe made of antimatter. The Time Lords recruit the time travellers the First Doctor (William Hartnell), the Second Doctor (Patrick Troughton), and the Third Doctor (Jon Pertwee) for help when Omega drains power throughout the universe, threatening all of existence.
The serial opened the tenth anniversary year of the series, and features the first three Doctors all appearing in the same serial. This makes it the first Doctor Who story in which an earlier incarnation of the Doctor returns to the show. It was also Hartnell's last appearance as the First Doctor prior to his death in 1975.
Plot
A superluminal signal is sent to Earth, carrying with it an energy blob that seems intent on capturing the Doctor, but has already mysteriously abducted two individuals: a local game warden and the scientific researcher Dr. Tyler. The homeworld of the Time Lords is also under siege; they are themselves trapped, with universal energy being drained through a black hole, threatening to unravel the fabric of time and space. Desperate to send help, the Time Lords break the First Law of Time by recruiting a previous incarnation of the Doctor from his own past. As the Second Doctor and the present Third Doctor cannot cope with each other's personalities, the Time Lords attempt to retrieve the First Doctor to "keep them in order", but he is trapped in a "time eddy", unable to fully materialise, communicating through a viewscreen. The Doctors investigate, while UNIT headquarters faces an attack by shapeless lumpy-globule-like creatures. The First Doctor assists both Doctors by correctly surmising that the previously-sent energy blob is a bridge to another universe. The Third Doctor attempts to go alone, but Jo is abruptly abducted with him. The Second Doctor later allows the TARDIS, with himself, the Brigadier, and Benton inside, to be taken by the blob, although this causes UNIT HQ to be stolen as well.
Jo, the Third Doctor, and Dr. Tyler, whom they discover there, assess their situation in this new mystery universe of antimatter, inside the black hole. The Third Doctor also deduces that a conversion has taken place for their assailants and themselves to somehow exist in each other's universes without annihilation. Before they can do any more, though, they are accosted by the shapeless creatures and taken to an unfamiliar location. When they arrive, they meet the legendary Time Lord Omega, who created the supernova method that powers Time Lord civilisation, but which also supposedly killed him. Omega seeks revenge on the Time Lords, who he assumes left him stranded alone for centuries in his universe, which he explains he willed into existence. Assuming he has been deceived once again by the Time Lords after discovering the Second Doctor and correctly deducing his identity, Omega imprisons both Doctors, Jo, Benton, and Tyler. After the two Doctors help everyone escape through their combined willpower, Omega discovers the getaway and challenges the Third Doctor to a battle of minds, nearly killing him.
The Second Doctor convinces Omega to stop, appealing to his desire to escape. Now calmer, Omega explains further to the two that he shaped this reality within the black hole both with his willpower and the power of its singularity. However, due to this, his will is the only support keeping this reality stable. He cannot freely leave without releasing control, but releasing control would collapse the antimatter universe instantly, annihilating everything in it; and so Omega's intention is for the Doctors to take his place maintaining it. As he prepares to leave with the Doctors' help, they are stunned to find that the extremely prolonged exposure to the singularity has destroyed Omega's physical body; his willpower now also maintains his essence, and he will cease to exist if he leaves. Suffering a nervous breakdown from the shock, Omega now seeks to destroy all creation.
Taking advantage of his neurosis, the two Doctors escape back to the TARDIS with all of the abductees. With the First Doctor, the two devise a way to defeat Omega, and also discover the Second Doctor's previously-lost recorder within the TARDIS' force field generator. The two meet with Omega again, claiming they can give him his freedom. Omega, though, retorts that he cannot be freed, and demands that they share his exile. The Doctors agree, on the condition that all of his abductees are sent safely back to Earth. Once done, the two present him with the generator. Omega knocks it over in rage at the paltry offer and the recorder falls out, annihilating everything it meets in the antimatter universe in a flash (having fallen into the force field during the abduction, it was protected from conversion, remaining as normal matter), creating a new universal source of energy, and ejecting the Doctors in the TARDIS, UNIT HQ, and all of its stolen structures and objects, back to their proper places in the normal universe. With the Time Lords' power restored, they return the First and Second Doctors to their respective time periods.
Forlorn, the Third Doctor implies to Jo that death was the only freedom anyone could offer Omega. Out of forgiveness, the Time Lords then send the Doctor a new dematerialisation circuit for the TARDIS and restore his knowledge of how to travel through space and time, lifting his exile.
Production
Working titles for this story included The Black Hole. The script was originally supposed to feature all three Doctors equally, but William Hartnell was too ill to be able to play the full role as envisioned. He was, therefore, reduced to a pre-recorded cameo role, appearing only on the TARDIS's scanner and the space-time viewer of the Time Lords. It would be the last time he played the Doctor and his last acting role before his death in 1975. Hartnell's scenes were filmed at BBC's Ealing Studios.
The only time that all three Doctors appeared together was in promotional photos for the story. One session of these took place in October 1972 at a photo studio in Battersea; this session produced the image that was used for the cover of the Radio Times magazine to promote the story.
The production team also planned for Frazer Hines to reprise his role of Jamie McCrimmon alongside the Second Doctor; however, Hines was not available, due to his work on the soap opera Emmerdale Farm. Much of the role originally intended for Jamie was reassigned to Sergeant Benton.
Casting notes
The Chancellor is portrayed by Clyde Pollitt, who had also played one of the Time Lords who tried and exiled the Second Doctor. Barry Letts states in the DVD commentary that this was intentional, as he meant for this to be the same character. Similarly, Graham Leaman reappears as a Time Lord, having been seen in the same role in Colony in Space (1971) discussing the Master's activities and the Time Lords' use of the exiled Doctor as an agent. The same DVD commentary and the on-screen production captions note that the unavailability of actor Richard Franklin led to a shifting of the roles by the UNIT supporting cast. Sergeant Benton took on the majority of the role written for Captain Yates and a new character, Corporal Palmer, took on most of the lines originally written for Benton.
Reception
Patrick Mulkern of Radio Times wrote that The Three Doctors "may not be the greatest story ever told" but it ended the Doctor's exile on Earth and brought back Troughton, though unfortunately Hartnell was not able to do much. The A.V. Club reviewer Christopher Bahn summarised that the serial "has some good ideas in it, but they're treated with such an unambitious lack of imagination that there's not enough actually happening here for the story to be offensively bad—just boring". He felt the "most enjoyable part" was the "comic squabbling" between Pertwee and Troughton, and also called the Brigadier a "saving grace". DVD Talk's Ian Jane gave the serial three out of five stars, noting that it was "slightly silly" and the production designs and special effects were "definitely not the best that the series has had to offer". He also felt that the story was wrapped up too quickly and was "fairly predictable". However, he praised Pertwee and Troughton's interplay, the fact that Jo was given more to do, and Stephen Thorne's performance as Omega. Alisdair Wilkins of io9 picked The Three Doctors as the worst Doctor Who story of the classic series, feeling that the Second Doctor and the Brigadier were written as too comical, the story had too much padding, and that Omega was a "shouting, one-dimensional villain".
Broadcast
The serial was repeated on BBC2 in November 1981, daily (Monday–Thursday) (23 November 1981 to 26 November 1981) at 17:40 as part of "The Five Faces of Doctor Who". The four episodes achieved ratings of 5.0, 4.5, 5.7 and 5.8 million viewers respectively.
Commercial releases
In print
A novelisation of this serial, written by Terrance Dicks, was published by Target Books in November 1975.
The novelisation provides a rationale for Omega's realm to be a quarry: Over the millennia, Omega has become weary of the mental effort required to generate a verdant landscape and now makes do with rock and soil. The Second Doctor is referred to throughout as Doctor Two. In the book, Mr Ollis is renamed Mr Hollis. It is stated that Omega is only the second Time Lord that the Doctor has come up against as an adversary, the first being The Master.
Home media
The Three Doctors was released twice on VHS, first in August 1991 and thereafter remastered and re-released in 2002 as part of the WHSmith's The Time Lord Collection boxed set. It was released on DVD in the UK in November 2003 as part of the Doctor Who 40th Anniversary Celebration releases, representing the Jon Pertwee years. Some copies came in a box set housing a limited edition Corgi model of "Bessie", the Third Doctor's vintage roadster. A special edition of the DVD, with new bonus features, was released in the UK on 13 February 2012 in the third of the ongoing Revisitations DVD box sets with additional bonus features.
In 2019, The Three Doctors was released as part of the Season Ten Boxed Set Blu-Ray collection. The story and its special features occupy one disc in the set, and include features from previous releases and specially-made content.
Tales of the TARDIS
A special edition of the episode aired on BBC iPlayer on 1 November 2023, in the spin-off Tales of the TARDIS.
References
External links
Target novelisation
First Doctor serials
Second Doctor serials
Third Doctor serials
Doctor Who multi-Doctor stories
Doctor Who serials novelised by Terrance Dicks
1972 British television episodes
1973 British television episodes
Anniversary television episodes
Television episodes written by Bob Baker (scriptwriter)
Fiction about black holes
Doctor Who anniversary specials
Television episodes set in England
Television episodes set in the 20th century | The Three Doctors (Doctor Who) | Physics | 2,378 |
22,788,074 | https://en.wikipedia.org/wiki/Sophie%20Germain%27s%20theorem | In number theory, Sophie Germain's theorem is a statement about the divisibility of solutions to the equation $x^p + y^p = z^p$ of Fermat's Last Theorem for odd prime exponent $p$.
Formal statement
Specifically, Sophie Germain proved that at least one of the numbers $x$, $y$, $z$ must be divisible by $p^2$ if an auxiliary prime $\theta$ can be found such that two conditions are satisfied:
No two nonzero $p$th powers differ by one modulo $\theta$; and
$p$ is itself not a $p$th power modulo $\theta$.
Conversely, the first case of Fermat's Last Theorem (the case in which $p$ does not divide $xyz$) must hold for every prime $p$ for which even one auxiliary prime can be found.
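As a quick illustrative check (a worked example added here, not part of Germain's original statement): take $p = 5$ and the auxiliary prime $\theta = 11 = 2 \cdot 5 + 1$. The units modulo 11 form a cyclic group of order 10, so the nonzero fifth powers modulo 11 form the subgroup of order 2:
$$x^5 \equiv 1 \text{ or } 10 \pmod{11} \quad \text{for } x \not\equiv 0.$$
No two of 1 and 10 differ by one modulo 11, and 5 is not itself a fifth power modulo 11, so both conditions are satisfied and the theorem applies to the exponent $p = 5$.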
History
Germain identified such an auxiliary prime for every prime less than 100. The theorem and its application to primes less than 100 were attributed to Germain by Adrien-Marie Legendre in 1823.
General proof of the theorem
The auxiliary prime $\theta$ has no direct role in the divisibility by $p^2$; in any purported counterexample to Fermat's Last Theorem it must, however, divide one of $x$, $y$ or $z$. It is most likely true, as conjectured, that for a given $p$ the auxiliary prime may be taken arbitrarily large, much as with the Mersenne primes. On that basis it has been suggested that Germain most likely proved the theorem in the general case through her considerations by infinite ascent: at least one of the numbers $x$, $y$ or $z$ would have to be divisible by infinitely many auxiliary primes and hence be arbitrarily large, and so, by the equality, no such numbers can exist.
Notes
References
Laubenbacher R, Pengelley D (2007) "Voici ce que j'ai trouvé": Sophie Germain's grand plan to prove Fermat's Last Theorem
Theorems in number theory
Fermat's Last Theorem | Sophie Germain's theorem | Mathematics | 346 |
40,679,472 | https://en.wikipedia.org/wiki/Event%20tree%20analysis | Event tree analysis (ETA) is a forward, top-down, logical modeling technique for both success and failure that explores responses through a single initiating event and lays a path for assessing probabilities of the outcomes and overall system analysis. This analysis technique is used to analyze the effects of functioning or failed systems given that an event has occurred.
ETA is a powerful tool that identifies all consequences of a system that have a probability of occurring after an initiating event, and it can be applied to a wide range of systems including nuclear power plants, spacecraft, and chemical plants. This technique may be applied to a system early in the design process to identify potential issues that may arise, rather than correcting the issues after they occur. With this forward logic process, use of ETA as a tool in risk assessment can help to prevent negative outcomes from occurring, by providing a risk assessor with the probability of occurrence. ETA uses a type of modeling technique called "event tree", which branches events from one single event using Boolean logic.
History
The name "Event Tree" was first introduced during the WASH-1400 nuclear power plant safety study (circa 1974), where the WASH-1400 team needed an alternate method to fault tree analysis due to the fault trees being too large. Though not using the name event tree, the UKAEA first introduced ETA in its design offices in 1968, initially to try to use whole plant risk assessment to optimize the design of a 500MW Steam-Generating Heavy Water Reactor. This study showed ETA condensed the analysis into a manageable form. ETA was not initially developed during WASH-1400, this was one of the first cases in which it was thoroughly used. The UKAEA study used the assumption that protective systems either worked or failed, with the probability of failure per demand being calculated using fault trees or similar analysis methods. ETA identifies all sequences which follow an initiating event. Many of these sequences can be eliminated from the analysis because their frequency or effect are too small to affect the overall result. A paper presented at a CREST symposium in Munich, Germany, in 1971 indicated how this was done. The conclusions of the US EPA study of the Draft WASH-1400 acknowledges the role of Ref 1 and its criticism of the Maximum Credible Accident approach used by AEC. MCA sets the reliability target for the containment but those for all other safety systems are set by smaller but more frequent accidents and would be missed by MCA.
In 2009 a risk analysis was conducted on underwater tunnel excavation under the Han River in Korea using an earth pressure balance type tunnel boring machine. ETA was used to quantify risk, by providing the probability of occurrence of an event, in the preliminary design stages of the tunnel construction in order to prevent injuries or fatalities, because tunnel construction in Korea has the highest injury and fatality rates within the construction category.
Theory
Performing a probabilistic risk assessment starts with a set of initiating events that change the state or configuration of the system. An initiating event is an event that starts a reaction, such as the way a spark (initiating event) can start a fire that could lead to other events (intermediate events) such as a tree burning down, and then finally an outcome, for example, the burnt tree no longer provides apples for food. Each initiating event leads to further events along a path, where each intermediate event's probability of occurrence may be calculated by using fault tree analysis, until an end state is reached (the outcome of a tree no longer providing apples for food). Intermediate events are commonly split into a binary outcome (success/failure or yes/no) but may be split into more than two outcomes as long as the events are mutually exclusive, meaning that they cannot occur at the same time. If a spark is the initiating event, there is a probability that the spark will start a fire or will not start a fire (binary yes or no), as well as the probability that the fire spreads to a tree or does not spread to a tree. End states are classified into groups according to whether they are successes or by the severity of their consequences. An example of a success would be that no fire started and the tree still provided apples for food, while an example of a severe consequence would be that a fire did start and the apples were lost as a source of food. Loss end states can be any state at the end of the pathway that is a negative outcome of the initiating event. The loss end state is highly dependent upon the system; for example, if you were measuring a quality process in a factory, a loss end state would be that the product has to be reworked or thrown in the trash. Some common loss end states:
Loss of Life or Injury/ Illness to personnel
Damage to or loss of equipment or property (including software)
Unexpected or collateral damage as a result of tests
Failure of mission
Loss of system availability
Damage to the environment
Methodology
The overall goal of event tree analysis is to determine the probability of possible negative outcomes that can cause harm and result from the chosen initiating event. It is necessary to use detailed information about a system to understand intermediate events, accident scenarios, and initiating events to construct the event tree diagram. The event tree begins with the initiating event, and the consequences of this event follow in a binary (success/failure) manner. Each event creates a path in which a series of successes or failures will occur, and the overall probability of occurrence for that path can be calculated. The probabilities of failure for intermediate events can be calculated using fault tree analysis, and the probability of success can be calculated from 1 = probability of success (ps) + probability of failure (pf). For example, if we know from fault tree analysis that pf = 0.1, then through simple algebra we can solve for ps: ps = 1 - pf = 1 - 0.1 = 0.9.
The event tree diagram models all possible pathways from the initiating event. The initiating event starts at the left side as a horizontal line that branches vertically. The vertical branch is representative of the success/failure of the initiating event. At the end of the vertical branch a horizontal line is drawn at both the top and the bottom, representing the success or failure of the first event, where a description (usually success or failure) is written with a tag that represents the path, such as 1s, where s denotes a success and 1 is the event number, and similarly 1f, where 1 is the event number and f denotes a failure (see attached diagram). This process continues until the end state is reached. When the event tree diagram has reached the end state for all pathways, the outcome probability equation is written.
Steps to perform an event tree analysis:
Define the system: Define what needs to be involved or where to draw the boundaries.
Identify the accident scenarios: Perform a system assessment to find hazards or accident scenarios within the system design.
Identify the initiating events: Use a hazard analysis to define initiating events.
Identify intermediate events: Identify countermeasures associated with the specific scenario.
Build the event tree diagram
Obtain event failure probabilities: If the failure probability can not be obtained use fault tree analysis to calculate it.
Identify the outcome risk: Calculate the overall probability of the event paths and determine the risk.
Evaluate the outcome risk: Evaluate the risk of each path and determine its acceptability.
Recommend corrective action: If the outcome risk of a path is not acceptable develop design changes that change the risk.
Document the ETA: Document the entire process on the event tree diagrams and update for new information as needed.
Mathematical concepts
1 = (probability of success) + (probability of failure)
The probability of success can be derived from the probability of failure.
Overall path probability = (probability of event 1) × (probability of event 2) × ... × (probability of event n)
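As a rough sketch of how these relations are applied (illustrative Python only; the event names and probabilities below are hypothetical, not taken from any cited study):

# Illustrative binary event tree: an initiating spark, then one intermediate event.
# The probability of a path is the product of the probabilities along it.

failure_probability = {
    "fire starts": 0.1,            # assumed, e.g. obtained from fault tree analysis
    "fire spreads to tree": 0.3,   # assumed
}

def success(p_fail):
    # 1 = (probability of success) + (probability of failure)
    return 1.0 - p_fail

paths = {
    "no fire (success)":
        success(failure_probability["fire starts"]),
    "fire starts but does not spread (success)":
        failure_probability["fire starts"] * success(failure_probability["fire spreads to tree"]),
    "fire starts and spreads, tree lost (loss end state)":
        failure_probability["fire starts"] * failure_probability["fire spreads to tree"],
}

for name, p in paths.items():
    print(f"{name}: {p:.3f}")

# For a well-formed tree the end-state probabilities sum to 1.
assert abs(sum(paths.values()) - 1.0) < 1e-9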
In risk analysis
Event tree analysis can be used in risk assessments by determining the probability of an event, which, when multiplied by the hazard of the event, gives the risk. Event tree analysis makes it easy to see which pathway creates the biggest probability of failure for a specific system. It is common to find single-point failures that do not have any intervening events between the initiating event and a failure. With event tree analysis, single-point failures can be targeted by including an intervening step that will reduce the overall probability of failure and thus reduce the risk of the system. The idea of adding an intervening event can be applied anywhere in the system, for any pathway that generates too great a risk; the added intermediate event can reduce the probability and thus reduce the risk.
Advantages
Enables the assessment of multiple, co-existing faults and failures
Functions simultaneously in cases of failure and success
No need to anticipate end events
Areas of single point failure, system vulnerability, and low payoff countermeasures may be identified and assessed to deploy resources properly
Paths in a system that lead to a failure can be identified and traced to display ineffective countermeasures.
Work can be computerized
Can be performed on various levels of details
Visual cause and effect relationship
Relatively easy to learn and execute
Models complex systems into an understandable manner
Follows fault paths across system boundaries
Combines hardware, software, environment, and human interaction
Permits probability assessment
Commercial software is available
Limitations
Addresses only one initiating event at a time.
The initiating challenge must be identified by the analyst
Pathways must be identified by the analyst
Level of loss for each pathway may not be distinguishable without further analysis
Success or failure probabilities are difficult to find.
Can overlook subtle system differences
Partial successes/failures are not distinguishable
Requires an analyst with practical training and experience
Software
Though ETA can be relatively simple, software can be used for more complex systems to build the diagram and perform calculations more quickly with a reduction of human errors in the process. There are many types of software available to assist in conducting an ETA. In the nuclear industry, the RiskSpectrum software, which provides both event tree analysis and fault tree analysis, is widely used. Professional-grade free software solutions are also widely available. SCRAM is an example open-source tool that implements the Open-PSA Model Exchange Format open standard for probabilistic safety assessment applications.
See also
Fault tree analysis
Failure modes and effect analysis
References
Data modeling
Risk analysis methodologies
Systems engineering | Event tree analysis | Engineering | 2,065 |
344,173 | https://en.wikipedia.org/wiki/Rotational%20energy | Rotational energy or angular kinetic energy is kinetic energy due to the rotation of an object and is part of its total kinetic energy. Looking at rotational energy separately around an object's axis of rotation, the following dependence on the object's moment of inertia is observed:
$$E_\text{rotational} = \tfrac{1}{2} I \omega^2$$
where
$\omega$ is the angular velocity,
$I$ is the moment of inertia around the axis of rotation,
$E_\text{rotational}$ is the kinetic energy of rotation.
The mechanical work required for or applied during rotation is the torque times the rotation angle. The instantaneous power of an angularly accelerating body is the torque times the angular velocity. For free-floating (unattached) objects, the axis of rotation is commonly around its center of mass.
Note the close relationship between the result for rotational energy and the energy held by linear (or translational) motion:
$$E_\text{translational} = \tfrac{1}{2} m v^2$$
In the rotating system, the moment of inertia, I, takes the role of the mass, m, and the angular velocity, $\omega$, takes the role of the linear velocity, v. The rotational energy of a rolling cylinder varies from one half of the translational energy (if it is solid) to the same as the translational energy (if it is hollow).
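A short worked check of that claim (added here for illustration): for a cylinder of mass m and radius r rolling without slipping, $v = \omega r$, so
$$E_\text{rot} = \tfrac{1}{2} I \omega^2 = \tfrac{1}{2} I \frac{v^2}{r^2},$$
which gives $E_\text{rot} = \tfrac{1}{4} m v^2 = \tfrac{1}{2} E_\text{trans}$ for a solid cylinder ($I = \tfrac{1}{2} m r^2$) and $E_\text{rot} = \tfrac{1}{2} m v^2 = E_\text{trans}$ for a thin hollow cylinder ($I = m r^2$).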
An example is the calculation of the rotational kinetic energy of the Earth. As the Earth has a sidereal rotation period of 23.93 hours, it has an angular velocity of about $7.29 \times 10^{-5}\ \text{rad/s}$. The Earth has a moment of inertia, $I = 8.04 \times 10^{37}\ \text{kg·m}^2$. Therefore, it has a rotational kinetic energy of about $2.14 \times 10^{29}\ \text{J}$.
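A minimal numerical sketch of that calculation (Python, using the standard values quoted above):

import math

# Values as quoted in the text above.
sidereal_period_s = 23.93 * 3600         # sidereal day in seconds
moment_of_inertia = 8.04e37              # Earth's moment of inertia in kg*m^2

omega = 2 * math.pi / sidereal_period_s  # angular velocity in rad/s, about 7.29e-5
rotational_energy = 0.5 * moment_of_inertia * omega ** 2

print(f"omega = {omega:.3e} rad/s")
print(f"E_rot = {rotational_energy:.3e} J")  # about 2.14e29 J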
Part of the Earth's rotational energy can also be tapped using tidal power. The friction of the two global tidal waves converts some of this rotational energy into heat, infinitesimally slowing down Earth's angular velocity ω. Due to the conservation of angular momentum, this process transfers angular momentum to the Moon's orbital motion, increasing its distance from Earth and its orbital period (see tidal locking for a more detailed explanation of this process).
See also
Flywheel
List of energy storage projects
Rigid rotor
Rotational spectroscopy
Notes
References
Resnick, R. and Halliday, D. (1966) PHYSICS, Section 12-5, John Wiley & Sons Inc.
Forms of energy
Rotation | Rotational energy | Physics | 409 |
24,107,652 | https://en.wikipedia.org/wiki/C15H11O4 |
The chemical compound C15H11O4 (or C15H11O4+, molar mass: 255.24 g/mol, exact mass: 255.065734 u) may refer to:
Apigeninidin, an anthocyanidin
Guibourtinidin, an anthocyanidin | C15H11O4 | Chemistry | 83 |
7,720 | https://en.wikipedia.org/wiki/Coprophagia | Coprophagia or coprophagy is the consumption of feces. The word is derived from the Ancient Greek κόπρος (kópros, "feces") and φαγεῖν (phageîn, "to eat"). Coprophagy refers to many kinds of feces-eating, including eating feces of other species (heterospecifics), of other individuals (allocoprophagy), or one's own (autocoprophagy). Feces may be already deposited or taken directly from the anus.
In humans, coprophagia has been described since the late 19th century in individuals with mental illnesses and in some sexual acts, such as the practices of anilingus and felching where sex partners insert their tongue into each other's anus and ingest biologically significant amounts of feces. Some animal species eat feces as a normal behavior, in particular lagomorphs, which do so to allow tough plant materials to be digested more thoroughly by passing twice through the digestive tract. Other species may eat feces under certain conditions.
Coprophagia by humans
In cuisine
The feces of the rock ptarmigan is used in Urumiit, which is a delicacy in some Inuit cuisine. Several beverages are made using the feces of animals, including but not limited to Kopi luwak, insect tea, and Black Ivory Coffee. Casu martzu is a cheese that uses the digestive processes of live maggots to help ferment and break down the cheese's fats.
As a cult practice
Members of a religious cult in Thailand routinely ate the feces and dead skin of their leader, whom they considered to be a holy man with healing powers.
As a paraphilia
According to the DSM-5, coprophilia is a paraphilia where the object of sexual interest is feces. This can involve coprophagia.
Coprophagia is sometimes depicted in pornography, typically under the term "scat" (from scatology), such as in the shock video 2 Girls 1 Cup. The 120 Days of Sodom, a 1785 novel by Marquis de Sade, prominently features depictions of erotic sadomasochistic coprophagia. The 1975 film of the same name also contains scenes of coprophilia and coprophagia.
As a supposed medical treatment
Ayurveda and Siddha medicine use animal excreta in various forms, with the most important being the dung and urine of the Zebu.
During the mid 16th century, physicians tasted their patients' feces to better judge their state and condition, according to François Rabelais. Rabelais studied medicine, but was also a writer of satirical and grotesque fiction, so the truth of this statement is unclear.
Lewin reported "... consumption of fresh, warm camel feces has been recommended by Bedouins as a remedy for bacterial dysentery; its efficacy (probably attributable to the antibiotic subtilisin from Bacillus subtilis) was anecdotally confirmed by German soldiers in Africa during World War II". However, this story is likely a myth, and independent research has been unable to verify these claims.
As a symptom
Coprophagia has also been observed in some people with schizophrenia and pica.
Coprophagia by nonhuman animals
By invertebrates
Coprophagous insects consume and redigest the feces of large animals. These feces contain substantial amounts of semidigested food, particularly in the case of herbivores, owing to the inefficiency of the large animals' digestive systems. Thousands of species of coprophagous insects are known, especially among the orders Diptera and Coleoptera. Examples of such flies are Scathophaga stercoraria and Sepsis cynipsea, dung flies commonly found in Europe around cattle droppings.
Among beetles, dung beetles are a diverse lineage, many of which feed on the microorganism-rich liquid component of mammals' dung, and lay their eggs in balls composed mainly of the remaining fibrous material. Group living and aggregation among common earwigs promotes allo-coprophagy (consuming the feces of other members of one's own species) to promote the growth of helpful gut bacteria and provide a food source when food is scarce.
Through proctodeal feeding, termites eat one another's feces as a means of obtaining their hindgut protists. Termites and protists have a symbiotic relationship (e.g. with the protozoan that allows the termites to digest the cellulose in their diet). For example, in one group of termites, a three-way symbiotic relationship exists; termites of the family Rhinotermitidae, cellulolytic protists of the genus Pseudotrichonympha in the guts of these termites, and intracellular bacterial symbionts of the protists.
By vertebrates
Lagomorphs (rabbits, hares, pikas) and some other mammals ferment fiber in their cecums, which is then expelled as cecotropes and eaten from the anus, a process called "cecotrophy". Then their food is processed through the gastrointestinal tract a second time, which allows them to absorb more nutrition. While cecotropes are expelled from the anus, they are not feces and thus eating them is not called coprophagia.
Domesticated and wild mammals are sometimes coprophagic.
Some dogs may lack critical digestive enzymes when they are only eating processed dried foods, so they gain these from consuming fecal matter. They only consume fecal matter that is less than two days old which supports this theory.
Cattle in the United States are often fed chicken litter. Concerns have arisen that the practice of feeding chicken litter to cattle could lead to bovine spongiform encephalopathy (mad-cow disease) because of the crushed bone meal in chicken feed. The U.S. Food and Drug Administration regulates this practice by attempting to prevent the introduction of any part of cattle brain or spinal cord into livestock feed. Chickens also eat their own feces. Other countries, such as Canada, have banned chicken litter for use as a livestock feed.
The young of elephants, giant pandas, koalas, and hippos eat the feces of their mothers or other animals in the herd, to obtain the bacteria required to properly digest vegetation found in their ecosystems. When such animals are born, their intestines are sterile and do not contain these bacteria. Without doing this, they would be unable to obtain any nutritional value from plants. Piglets with access to maternal feces early in life exhibited better performance.
Hamsters, guinea pigs, chinchillas, hedgehogs, and pigs eat their own droppings, which are thought to be a source of vitamins B and K, produced by gut bacteria. Sometimes, there is also the aspect of self-anointment while these creatures eat their droppings. On rare occasions gorillas have been observed consuming their feces, possibly out of boredom, a desire for warm food, or to reingest seeds contained in the feces.
Coprophagia by plants
Some carnivorous plants, such as pitcher plants of the genus Nepenthes, obtain nutrition from the feces of commensal animals. Notable examples include Nepenthes jamban, whose specific name is the Indonesian word for toilet. Manure is organic matter, mostly animal feces, that is used as organic fertilizer for plants in agriculture.
See also
Coprophilous fungi
Fecal bacteriotherapy
Faecal transplant
Fecal–oral route, a route of disease transmission
Gomutra
Kopi luwak
Panchagavya
Pig toilet
Scathophagidae
Scatophagidae
References
Further reading
External links
Eating behaviors
Ethology
Feces
Pica (disorder) | Coprophagia | Biology | 1,661 |
37,687,136 | https://en.wikipedia.org/wiki/C19H23ClN2 |
The molecular formula C19H23ClN2 (molar mass: 314.85 g/mol, exact mass: 314.1550 u) may refer to:
Clomipramine
Homochlorcyclizine
Molecular formulas | C19H23ClN2 | Physics,Chemistry | 65 |
37,397 | https://en.wikipedia.org/wiki/Epcot | EPCOT is a theme park at the Walt Disney World Resort in Bay Lake, Florida. It is owned and operated by The Walt Disney Company through its Disney Experiences division. The park opened on October 1, 1982, as EPCOT Center—the second of four theme parks built at the resort. Often referred to as a "permanent world's fair", EPCOT is dedicated to the celebration of human achievement, particularly technological innovation and international culture and is known for its iconic landmark Spaceship Earth, a geodesic sphere.
During early development of the Florida property, Walt Disney wanted to build an experimental planned community showcasing modern innovation, known as "EPCOT", an acronym for Experimental Prototype Community of Tomorrow. After Disney's death in 1966, the company felt his grand vision was impractical. However, it laid the groundwork for EPCOT Center, a theme park that retained the core spirit of Disney's vision. The park was divided into two distinct areas: Future World reprises the idea of showcasing modern innovation through educational entertainment attractions within avant-garde pavilions, while World Showcase highlights the diversity of human cultures from various nations. From the late 2010s to the early 2020s, the park underwent a major overhaul, adding new attractions and Future World was restructured into three areas: World Celebration, World Discovery and World Nature.
The park spans , more than twice the size of Magic Kingdom Park. In 2023, the park attracted 11.98 million guests, making it the eighth-most visited theme park in the world.
History
1960s: Experimental concept
The genesis for EPCOT was originally conceived as a utopian city of the future by Walt Disney in the 1960s. The concept was an acronym for Experimental Prototype Community of Tomorrow, often interchanging "city" and "community." In Walt Disney's words in 1966: "EPCOT will take its cue from the new ideas and new technologies that are now emerging from the creative centers of American industry. It will be a community of tomorrow that will never be completed but will always be introducing and testing, and demonstrating new materials and new systems. And EPCOT will always be a showcase to the world of the ingenuity and imagination of American free enterprise."
Walt Disney's original vision, sometimes called Progress City, would have been home to 20,000 residents and would be a living laboratory showcasing cutting-edge technology and urban planning. It was to be built in the shape of a circle with an urban city center in the center with community buildings, schools, and recreational complexes. It would be surrounded by rings of residential areas and industrial areas, all connected by monorail and PeopleMover lines. Automobile traffic would be kept underground, leaving pedestrians safe above ground. This radial plan concept is strongly influenced by British planner Ebenezer Howard and his Garden Cities of To-morrow.
Disney went as far as petitioning the Florida State Legislature for the creation of the Reedy Creek Improvement District (RCID), with the authority of a governmental body over the Walt Disney World land. The RCID was established in 1967. However, Walt Disney was not able to obtain funding and permission to start work on his Florida property until he agreed to first build the Magic Kingdom theme park. He died in 1966, nearly five years before Magic Kingdom opened.
1970s: Concept evolves into park
After Walt Disney's death, the company decided that it did not want to be in the business of running a city without Walt's guidance. The original plans for the park showed indecision over the park's purpose. Some Imagineers wanted it to represent the cutting edge of emerging technologies, while others wanted it to showcase international cultures and customs. At one point, a model of the futuristic park was pushed together against a model of a World's Fair international theme, and the two were combined.
The park was originally named EPCOT Center to reflect the ideals and values of the city. It was constructed for an estimated $800 million to $1.4 billion and took three years to build, at the time the largest construction project on Earth. The park spans , more than twice the size of Magic Kingdom. The parking lot serving the park is (including bus area) and can accommodate 11,211 vehicles.
1980s: Opening and operation
The grand opening festivities for EPCOT took place over three weeks in October 1982—supervised and directed by Disney Legend Bob Jani. The park officially opened to the public on October 1, with a dedication ceremony in front of Spaceship Earth that served as both the kick-off ceremony as well as the dedication of the Spaceship Earth attraction itself. Presiding over the ceremony was Walt Disney Productions chairman and CEO Card Walker, Florida Governor Bob Graham, and president of AT&T (the sponsor of Spaceship Earth at opening) William Ellinghaus.
On opening day, Future World featured six pavilions: Spaceship Earth, CommuniCore, Journey Into Imagination, The Land, Universe of Energy, and World of Motion. World Showcase featured nine pavilions: Mexico, China, Germany, Italy, The American Adventure, Japan, France, United Kingdom, and Canada.
Each pavilion had its own custom opening ceremony throughout the next three weeks—culminating in the three-day grand opening event. On October 24, 1982, EPCOT was officially dedicated by Walt Disney Productions executive chairman Donn Tatum and Card Walker. A 450-piece marching band made up of players from college bands all over the country performed several songs including "We've Just Begun to Dream" and "The World Showcase March"—the latter written exclusively for the opening events by the Sherman Brothers. Water was gathered from major rivers, lakes, and seas from across the globe and emptied into the park's Fountain of Nations to mark the opening.
During the 1980s, several additional pavilions opened: Horizons in 1983, Morocco in 1984, The Living Seas in 1986, Norway in 1988, and Wonders of Life in 1989.
1990s–2000s: Change in vision
Despite its initial success, EPCOT was constantly faced with the challenges of evolving with worldwide progress, an issue that caused the park to lose relevance and become outdated in the 1990s. To maintain attendance levels, Disney introduced seasonal events such as the International Flower & Garden Festival and the International Food & Wine Festival in 1994 and 1995, respectively.
It was during this era that Disney sought to differentiate the EPCOT theme park from Walt Disney's EPCOT concept by making the park's name a word rather than an acronym—spelling it in lowercase as a proper noun: "Epcot". Walt Disney World then added the current year to the park's name, emulating the naming scheme for expos and world's fairs like Expo 67. The park became Epcot '94 and Epcot '95 before Disney quietly abandoned the naming concept in 1996 and the park simply became Epcot.
In the mid-1990s, Disney also began to gradually phase out the park's edutainment attractions in favor of more modern and thrilling attractions. As a result, many of the attractions within the Future World pavilions, were either overhauled or replaced entirely. The Land pavilion saw its attractions replaced under new sponsor Nestlé between late 1993 and January 1995, and Spaceship Earth was updated with music by Edo Guidotti and narration from Jeremy Irons in 1994. Universe of Energy was reconfigured as Ellen's Energy Adventure in 1996. Journey Into Imagination closed in 1998 and was replaced with Journey into YOUR Imagination the following year, World of Motion was replaced with Test Track, and Horizons was demolished in 1999 and replaced with Mission: SPACE in 2003.
In 2000, Walt Disney World held the Millennium Celebration with the central focus of the event at EPCOT, and a 25-story "magic wand" structure was built next to Spaceship Earth. Millennium Village was closed on January 1, 2001, and was turned into the World Showplace festival center, which is frequently used for EPCOT festivals.
Attraction changes continued into the new millennium. Journey into YOUR Imagination closed in 2001 due to strong negative reception and was replaced with Journey into Imagination with Figment in 2002. The Living Seas was closed in 2005, and rethemed with the introduction of characters from Finding Nemo, as The Seas with Nemo & Friends. That same year, Soarin', a flight simulator ride originally developed for Disney California Adventure Park, was added to The Land (replacing Food Rocks) following its massive popularity in California. The Wonders of Life pavilion closed in 2007, with the pavilion being occasionally used for the park's annual festivals until permanent closure. The Mexico pavilion's El Rio del Tiempo attraction closed on January 2, and Gran Fiesta Tour Starring The Three Caballeros opened in its space a few months later. After the "magic wand" structure was removed from Spaceship Earth, the attraction's fourth version, narrated by Judi Dench, soft-opened on December 8. Kim Possible World Showcase Adventure, an interactive scavenger hunt, opened at EPCOT in 2009.
2010s–present: Transformation and redesign
Test Track was refurbished into a new version presented by Chevrolet in 2012, and Kim Possible World Showcase Adventure was rethemed to Agent P's World Showcase Adventure the same year. The Norway pavilion's Maelstrom attraction closed in 2014 and replaced two years later by Frozen Ever After. Soarin' was also temporarily closed while a new film was added to the attraction. In 2017, Mission: SPACE was divided into a new green/Earth mission, and the original orange/Mars mission.
In November 2016, Disney revealed that EPCOT would be receiving “a major transformation” that would help transition the park into being “more Disney, timeless, relevant, family-friendly”. In July 2017, the formal announcement came that EPCOT would undergo a multi-year redesign and expansion plan that would introduce Guardians of the Galaxy and Ratatouille attractions to Future World and World Showcase, respectively, as well as maintaining the original vision and spirit for the park. As part of the announcement, Ellen's Energy Adventure closed the following month, and the pavilion's show building was reused for Guardians of the Galaxy: Cosmic Rewind, while the EPCOT 35 Legacy Showcase exhibition opened in the Odyssey Pavilion. That same year, the park reported the first drop in overall attendance ranking among the four Walt Disney World Resort parks, dropping from second to third place, the first in its history.
On August 25, 2019, at the 2019 D23 Expo, Disney expanded on the plans for the improvements to EPCOT. One of the most significant changes announced was the creation of four distinct "neighborhoods"; the subdivision of Future World into three areas (World Celebration, World Discovery, and World Nature). Journey of Water—Inspired by Moana, a walkthrough attraction, was also announced. At the same expo, Disney also announced that Pinar Toprak would be composing a new musical anthem for the park. Toprak's "EPCOT Anthem" was eventually used in various nighttime shows, such as Harmonious and Luminous, as well as featured in ambient music within the entrance plaza and throughout World Celebration.
On October 1, 2019, it was announced that a new nighttime fireworks show, EPCOT Forever, and The EPCOT Experience Center, a preview space for the park's expansion project, would replace IllumiNations: Reflections of Earth and EPCOT 35 Legacy Showcase. In late 2019, EPCOT installed new directory signage in Seabase Alpha, restoring the former Living Seas logo, as the pavilion was renamed to The Seas Pavilion. Agent P's World Showcase Adventure closed on February 23, 2020; it was slated to be replaced with DuckTales World Showcase Adventure, which did not open until 2022.
In early 2020, Disney officially announced that the park's name would revert back to all-uppercase (from Epcot to EPCOT) as an homage to both the park's original name and Walt Disney's original concept—although the name is still not an acronym.
EPCOT was closed from March 16 to July 15, 2020, due to the COVID-19 pandemic in Florida. Modified operations were established, including a pause on concerts and fireworks, in order to promote sufficient physical distancing. Spaceship Earth: Our Shared Story, the attraction's fifth update, the Wondrous China film, the PLAY! pavilion in World Discovery, and the United Kingdom pavilion's Cherry Tree Lane expansion were indefinitely delayed due to the COVID-19 pandemic, and the CommuniCore Hall exhibit space and the CommuniCore Plaza festival stage were built instead of a three-level festival pavilion.
On September 29, 2021, the nighttime spectacular Harmonious replaced EPCOT Forever as part of the resort's 50th anniversary celebration. The show ended its run on March 31, 2023, in preparation for Luminous: The Symphony of Us which debuted later that year; EPCOT Forever returned during the interim period. Remy's Ratatouille Adventure (duplicated from Disneyland Paris) opened in the France pavilion on October 1 as part of the same celebrations. The EPCOT Experience Center closed in 2022, and Guardians of the Galaxy: Cosmic Rewind opened on May 27.
Journey of Water: Inspired by Moana opened in World Nature on October 16, 2023, and World Celebration Gardens, divided into five sections (Inspiration Gardens, CommuniCore Gardens, Connections Gardens, Creations Gardens, and Dreamers Point), opened on December 5 of that year. CommuniCore Hall and Plaza, named after the former Future World pavilion, opened to the general public on June 10, 2024. Test Track closed for refurbishment on June 17 to make way for the attraction's third iteration, marking the return of General Motors as sponsor; it is scheduled to reopen in late summer 2025. At D23 2024, it was announced that a new lounge will take the place of the former Siemens lounge attached to Spaceship Earth and will open in late spring 2025.
On November 21, 2024, it was announced that a second stage had been installed at CommuniCore Plaza, and that the stage would be the home of JOYFUL! A Celebration of the Season, as a seasonal entertainment offering during the 2024 annual EPCOT International Festival of the Holidays.
Park layout and attractions
EPCOT is divided into four themed areas, known as "neighborhoods": World Celebration, World Discovery, World Nature, and World Showcase.
The park consists of a variety of avant-garde pavilions that explore innovative aspects and applications including technology and science, with each pavilion featuring self-contained attractions and distinct architecture in its design. Currently, the park features ten major pavilions: Galaxy, Imagination, Journey, Land, Motion, Odyssey, Seas, Space, Spaceship Earth, and World Showcase, which itself has eleven individual nation pavilions.
World Celebration, Discovery, and Nature were originally grouped as one area called Future World, which debuted with six pavilions: Spaceship Earth, CommuniCore, Imagination!, The Land, Universe of Energy, and World of Motion. The Horizons pavilion opened the following year, and The Living Seas and Wonders of Life pavilions were added in 1986 and 1989, respectively, bringing the lineup to nine. CommuniCore, World of Motion, Horizons, Wonders of Life, Universe of Energy, and Innoventions closed in 1994, 1996, 1999, 2007, 2017, and 2019, respectively. The Fountain of Nations, a large circular musical fountain which debuted with the park, was removed in 2019 as well. Each pavilion was initially sponsored by a corporation which helped fund its construction and maintenance in return for the corporation's logos and some marketing elements appearing throughout the pavilion.
Additionally, each pavilion of Future World featured a unique circular logo designed by Norm Inouye (except for the Wonders of Life logo due to its later introduction), which was featured on park signage and throughout the attractions themselves. The pavilion logos were gradually phased out in the early 2000s, as the pavilions instead were identified by name and recognized by the main attraction(s) housed inside. Several homages remained scattered throughout the park, including merchandising. However, in 2019, the circular pavilion logos were revived as part of the park's transformation, with both classic logos reprised and refreshed and newer logos introduced.
World Celebration
World Celebration serves as the park's main entrance and a central hub that honors global human interaction and connection, including communication, imagination, creativity, and the visual and culinary arts. The neighborhood features four major pavilions—Spaceship Earth, Imagination, Odyssey, and CommuniCore—as well as additional attractions, shops, and restaurants.
Guests enter through the main entrance and walk underneath Spaceship Earth, an eighteen-story-tall geodesic sphere structure and the anchor pavilion, which also houses an eponymous dark ride attraction that depicts the history of communication. Directly behind Spaceship Earth are the World Celebration Gardens and Dreamers Point, featuring lush interactive gardens, lighting fixtures and Walt the Dreamer—a bronze statue commemorating Walt Disney. The Imagination! pavilion celebrates the concept of imagination and features Journey into Imagination with Figment, a dark ride starring Figment that explores the senses. CommuniCore Hall and Plaza is a multi-use pavilion used for exhibitions, gallery space, a mixology bar, a demonstration kitchen, and music performances, as well as meet-and-greets with Disney characters. The Odyssey Pavilion is an exhibition space during the park's annual festivals.
World Celebration is also home to Creations Shop, the park's main gift shop; Connections Eatery & Cafe, a quick-service restaurant and Starbucks themed to global food history; and Club Cool, a Coca-Cola-themed attraction and shop featuring complimentary samples of Coca-Cola soft drinks from around the world.
World Discovery
World Discovery centers on space, science, technology and intergalactic exploration. Lying on the east side of World Celebration, the Discovery neighborhood currently features three major pavilions in clockwise layout.
Guardians of the Galaxy: Cosmic Rewind, an enclosed spinning roller coaster based on the superhero team of the same name. The building originally opened as Universe of Energy.
Mission: SPACE is a centrifugal motion simulator thrill ride that replicates a space flight experience to Mars and a low orbit tour over the surface of Earth. Next to it is Space 220, a themed restaurant simulating dining aboard a space station located 220 miles above Earth. The building is located on the original plot site of Horizons.
Test Track is a high-speed slot car ride inspired by the automobile testing procedures that Chevrolet uses to evaluate concept cars. The Motion Pavilion was one of the original pavilions of EPCOT and has always housed an attraction sponsored by General Motors.
In between Guardians of the Galaxy: Cosmic Rewind and Mission: SPACE is one standing but unused pavilion that once housed Wonders of Life.
World Nature
World Nature focuses on understanding and preserving the beauty, awe and balance of the natural world. Located on the west side of World Celebration, the Nature neighborhood features three major pavilions in counterclockwise layout—inspired by human interaction with the Earth, specifically themes of ocean exploration, hydrology, agriculture, horticulture, ecology, ecotourism, and travel.
Based on ocean exploration and inspired by the Finding Nemo series, The Seas pavilion features the sixth-largest aquarium in the world with marine life exhibits; an Omnimover attraction inspired by Finding Nemo; and Turtle Talk with Crush, an interactive show hosted by Crush from Finding Nemo. Connected to the building is the Coral Reef Restaurant, a themed seafood restaurant that provides views into the aquarium. Nearby is Journey of Water, an outdoor walkthrough water attraction depicting the Earth's water cycle, inspired by Moana. Finally, the Land pavilion features three attractions; Soarin' Around the World, an attraction that simulates a hang gliding flight over various regions of the world; Living with the Land, a narrated boat tour through Audio-Animatronics scenes, a greenhouse and hydroponics lab; and Awesome Planet, a short documentary film presented in the pavilion's Harvest Theater about the Earth's biomes and the perils of climate change.
World Showcase
World Showcase is the park's largest neighborhood, dedicated to representing the culture, history, cuisine, architecture, and traditions of 11 nations from across four continents—North America, Europe, Asia, and Africa. Each nation pavilion features attractions, shops, restaurants, and landscaping that celebrate or portray authentic settings from each country—several pavilions also contain recreations inspired by existing buildings and landmarks, such as the Eiffel Tower, Itsukushima Shrine, Hampton Court Palace, Château Laurier, Gol Stave Church, St Mark's Campanile, and the Kutubiyya Mosque. Of the 11 pavilions, only Morocco and Norway were not present at the park's opening, as they were added in 1984 and 1988 respectively.
The nation pavilions surround the World Showcase Lagoon, a man-made lake located in the center of World Showcase with a perimeter of , which is the site of the park's nighttime fireworks display, Luminous: The Symphony of Us. In counter-clockwise order, the 11 pavilions are:
Canada
United Kingdom
France
Morocco
Japan
United States
Italy
Germany
China
Norway
Mexico
The American Adventure is the host pavilion of World Showcase, sharing its name with its marquee attraction: a stage show detailing American history and hosted by Audio-Animatronics versions of Benjamin Franklin and Mark Twain. The pavilion also includes the American Heritage Gallery, a changing exhibition space. On the shores of the lagoon is the America Gardens Theatre, an outdoor amphitheater that hosts the park's festival concerts.
The France Pavilion hosts Impressions de France in Palais du Cinéma, an 18-minute Cinerama-style film depicting the culture of France, and along with Beauty and the Beast: Sing-Along. Tucked behind the lagoon-facing portion of the pavilion is Remy's Ratatouille Adventure, a 3D dark ride inspired by Pixar’s Ratatouille.
The Canada and China Pavilions each host Circle-Vision 360° films—Canada Far and Wide and Reflections of China—that depict the diverse cultures and countrysides of their respective countries. Two dark boat rides reside within the Norway and Mexico Pavilions—Frozen Ever After and Gran Fiesta Tour Starring The Three Caballeros, respectively—inspired by Frozen and The Three Caballeros.
A secondary park gate is located between the France and United Kingdom pavilions of World Showcase and is known as the International Gateway. The International Gateway is directly accessible to guests arriving from the Disney Skyliner and from watercraft transport, and by walkways from the nearby EPCOT Area Resorts and Disney's Hollywood Studios.
Each pavilion contains themed architecture, landscapes, streetscapes, attractions, shops and restaurants representing the respective country's culture and cuisine. In an effort to maintain the authenticity of the represented countries, the pavilions are primarily staffed by citizens of the respective countries as part of the Cultural Representative Program through Q1 visa agreements. Some pavilions also contain themed rides, shows, and live entertainment representative of the respective country. The Morocco pavilion was directly sponsored by the Moroccan government until 2020, when Disney took ownership of the pavilion. The remaining pavilions are primarily sponsored by private companies with affiliations to the represented countries.
Originally, the showcase was to include partnerships with the governments of the different countries, an intention outlined in Disney's 1975 Annual Report.
Proposed pavilions and unused locations
There are currently seven undeveloped spots for countries around the World Showcase in between the locations of the current countries. Two sites are located on either side of the United Kingdom, one between France and Morocco, one between Morocco and Japan, one between Italy and Germany, and two between Germany and China.
In 1982, Disney announced three pavilions were "coming soon": Israel, Spain, and Equatorial Africa, the last blending elements of the cultures of countries such as Kenya and Zaire. A model of the Equatorial Africa pavilion was also shown on the opening day telecast. However, the pavilions were never built. Instead, a small African-themed refreshment shop known as the "Outpost" currently resides in the area between China and Germany, where the Equatorial Africa pavilion was to be located.
More than 50 nations, among them Brazil, Chile, India, Indonesia, Israel, New Zealand, Saudi Arabia, Sweden, and five African countries (Eritrea, Ethiopia, Kenya, Namibia, and South Africa), took part in the Millennium Village, a project that took place at EPCOT during the Millennium Celebration from 1999 to 2001. The Millennium Village was housed in a temporary structure built behind the United Kingdom pavilion, which remains in use as World ShowPlace.
Alcohol policy
Unlike Magic Kingdom, which did not serve alcohol until 2012, EPCOT serves and sells a variety of alcoholic beverages at most stores and restaurants, especially in World Showcase, including specialty drinks, craft beers, wines, and spirits reflective of the respective countries. The park also hosts the EPCOT International Food & Wine Festival, an annual event featuring food and drink samplings from all over the world, along with live entertainment and special exhibits.
Annual events
EPCOT hosts a number of special events during the year:
The EPCOT International Flower & Garden Festival, inaugurated in 1994, uses specially themed floral displays throughout the park, including topiary sculptures of Disney characters. Each event takes more than a full year to plan and requires more than 20,000 cast-member hours.
The EPCOT International Food & Wine Festival, inaugurated in 1995, draws amateur and professional gourmets to sample delicacies from all around the world, including nations that do not have a permanent presence in World Showcase. Celebrity chefs are often on-hand to host the events. In 2008, the festival featured the Bocuse d'Or USA, the American semifinal of the biennial Bocuse d'Or cooking competition.
The EPCOT International Festival of the Arts, inaugurated in 2017, is a festival showcasing visual, culinary, and performing arts. The first annual event took place on weekends from January 13 through February 20, 2017.
The EPCOT International Festival of the Holidays (known as Epcot Holidays Around the World from 1996 to 2016 and renamed in 2017) is the park's annual holiday celebration. The World Showcase pavilions feature storytellers describing their nation's holiday traditions, and there are three nightly performances of the Candlelight Processional, featuring an auditioned mass choir and a celebrity guest narrating the story of Christmas. Kiosks throughout the World Showcase offer holiday dishes. On New Year's Eve, the park offers a variety of additional entertainment, including live DJ dance areas throughout the park.
Attendance
The Walt Disney Company generally does not publish attendance figures for its theme parks, so industry groups such as the Themed Entertainment Association estimate these figures.
See also
List of EPCOT attractions
EPCOT Resort Area
WestCOT
References
Further reading
Alcorn, Steve and David Green. Building a Better Mouse: The Story of the Electronic Imagineers Who Designed Epcot. Themeperks Press, 2007.
Mannheim, Steve (2002). Walt Disney and the Quest for Community. Routledge.
External links
1982 establishments in Florida
Amusement parks opened in 1982
Tourist attractions in Greater Orlando
Walt Disney World
Architecture related to utopias | Epcot | Engineering | 5,554 |
20,776,409 | https://en.wikipedia.org/wiki/George%20Streisinger | George Streisinger (December 27, 1927 – August 11, 1984) was an American molecular biologist and co-founder of the Institute of Molecular Biology at the University of Oregon. He was the first person to clone a vertebrate, cloning zebrafish in his University of Oregon laboratory. He also pioneered work in the genetics of the T-even bacterial viruses. In 1972, along with William Franklin Dove, he was awarded a Guggenheim Fellowship, and in 1975 he was selected as a member of the National Academy of Sciences, making him the second Oregonian to receive the distinction. The University of Oregon's Institute of Molecular Biology named their main building "Streisinger Hall" in his honor.
Personal History
George Streisinger was born in Budapest, Hungary, on December 27, 1927. Because the family was Jewish, they left Budapest for New York in 1937 to escape Nazi persecution. Streisinger attended New York public schools and graduated from the Bronx High School of Science in 1944. He obtained a B.S. degree from Cornell University in 1950 and a Ph.D. from the University of Illinois in 1953. He completed postdoctoral studies at the California Institute of Technology from 1953 to 1956. He married Lotte Sielman in 1949. Streisinger accepted a post at the University of Oregon Institute of Molecular Biology in Eugene in 1960. He was well known as an innovative professor in and out of the classroom, once conscripting a dance class to illustrate protein synthesis, and he often requested to teach beginning and non-major biology students. He was very politically active, organizing grass-roots resistance to the Vietnam War and legislative opposition to John Kennedy's civil defense program. His testimony helped ban the use of mutagenic herbicides in Douglas fir reforestation, and he led and won a battle to exclude secret war department research from the University of Oregon campus.
His wife, Lotte, was a noted artist and community activist, and the founder of the Eugene Saturday Market, the inspiration for the Portland Oregon Saturday Market.
Research
Following his graduation from Cornell, George undertook graduate studies in the genetics of T-even coliphage with Salvador Luria in the Bacteriology Department of the University of Illinois. His studies revealed phenotypic mixing, in which a phage carrying the host-range genotype of one phage type was found in a particle with the phenotype of another. When published in 1956, these studies had a profound impact on the study of viral biology.
During his postdoc at Caltech, with Jean Weigle, he undertook further studies on T2 × T4 hybrids, which led to the discovery of DNA modification (by glucosylation).
At the University of Oregon, Streisinger pioneered the study of zebrafish in his lab. Zebrafish can be genetically modified easily, and researchers can modify them to mimic the traits of certain diseases. In analyzing these created diseases, scientists seek solutions to diseases which affect humans. Over 9,000 researchers in 1,551 labs throughout 31 countries study zebrafish, and many of them received their initial training at the University of Oregon.
References
Cornell University alumni
Jewish American scientists
American people of Hungarian-Jewish descent
1927 births
1984 deaths
American molecular biologists
Cloning
20th-century American Jews | George Streisinger | Engineering,Biology | 670 |
24,012,524 | https://en.wikipedia.org/wiki/Kopp%E2%80%93Etchells%20effect | The Kopp–Etchells effect is a sparkling ring or disk that is sometimes produced by rotary-wing aircraft when operating in sandy conditions, particularly near the ground at night. The name was coined by photographer Michael Yon to honor two soldiers, Benjamin Kopp, a US Army Ranger, and Joseph Etchells, a British soldier, both of whom were killed in combat in Sangin, Afghanistan, in July 2009.
Other names that have been used to describe this phenomenon include scintillation, halo effect, pixie dust, and corona effect.
Explanation
Helicopter rotors are fitted with abrasion shields along their leading edges to protect the blades. These abrasion strips are often made of titanium, stainless steel, or nickel alloys, which are very hard, but not as hard as sand. When a helicopter flies low to the ground in sandy environments, sand can strike the metal abrasion strip and cause erosion, which produces a visible corona or halo around the rotor blades. The effect is caused by the pyrophoric oxidation of the ablated metal particles.
In this way, the Kopp–Etchells effect is similar to the sparks made by a grinder, which are also due to pyrophoricity. When a speck of metal is chipped off the rotor, it is heated by rapid oxidation. This occurs because its freshly exposed surface reacts with oxygen to produce heat. If the particle is sufficiently small, then its mass is small compared to its surface area, and so heat is generated faster than it can be dissipated. This causes the particle to become so hot that it reaches its ignition temperature. At that point, the metal continues to burn freely.
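The scaling argument above can be shown with a rough numerical sketch. The heat released by surface oxidation grows with the particle's surface area (proportional to r²), while the heat needed to raise its temperature grows with its mass (proportional to r³), so the initial heating rate scales roughly as 1/r. The material constants and heat-flux value below are assumed, titanium-like placeholder numbers chosen only to illustrate the trend, not measured data from the cited sources.

```python
import math

# Placeholder assumptions, roughly titanium-like, for illustration only
DENSITY = 4500.0           # kg/m^3
HEAT_CAPACITY = 520.0      # J/(kg*K)
SURFACE_HEAT_FLUX = 2.0e5  # W/m^2 released by surface oxidation (assumed value)

def initial_heating_rate(radius_m: float) -> float:
    """Temperature rise rate (K/s) of a freshly ablated spherical particle,
    ignoring heat losses: q*A / (m*c) = 3*q / (rho*c*r)."""
    area = 4.0 * math.pi * radius_m ** 2
    mass = DENSITY * (4.0 / 3.0) * math.pi * radius_m ** 3
    return SURFACE_HEAT_FLUX * area / (mass * HEAT_CAPACITY)

for radius_um in (1, 10, 100, 1000):
    rate = initial_heating_rate(radius_um * 1e-6)
    print(f"r = {radius_um:>4} um -> ~{rate:12,.0f} K/s")
```

Under these assumed numbers, a 1 micrometer particle heats about a thousand times faster than a 1 millimeter one, which is why only the smallest chips can reach their ignition temperature before the heat is carried away.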
Abrasion strips made of titanium produce the brightest sparks, and the intensity increases with the size and concentration of sand grains in the air.
Sand particles are more likely to hit the rotor when the rotorcraft is near the ground. This occurs because sand is blown into the air by the downwash and then carried to the top of the rotor disk by a vortex of air. This process is called recirculation and can lead to a complete brownout in severe situations. The Kopp–Etchells effect is not necessarily associated with takeoff and landing operations; it has been observed without night vision goggles at considerable altitude.
Other theories
The effect is often and incorrectly believed to be an electrical phenomenon, either as a result of static electricity as in St. Elmo's Fire, or due to the interaction of sand with the rotor (triboelectric effect), or a piezoelectric property of quartz sand.
Mechanical action has been considered, whereby impact with the sand particles may cause photoluminescence. Additionally, mechanisms relating to triboluminescence, chemiluminescence, and electroluminescence have been suggested.
Yet another incorrect theory is that the extreme speed of the helicopter blades pushes sand particles out of the way so fast that they burn up like meteors in the atmosphere due to adiabatic heating.
Groundcrew have mistaken the phenomenon for fire or other malfunctions.
Consequences
The erosion associated with the Kopp–Etchells effect presents costly maintenance and logistics problems, and is an example of foreign object damage (FOD).
Sand hitting the moving rotor blades represents a security risk because of the highly visible ring it produces, which places military operations at a tactical disadvantage when trying to remain concealed in darkness.
The light from the Kopp–Etchells effect can interfere with the pilot's ability to see, especially when using night vision equipment. This may cause difficulty with landing safely, and produce spatial disorientation.
See also
Index of aviation articles
Corona discharge
Wingtip vortices
References
Materials science
Military aviation
Night flying | Kopp–Etchells effect | Physics,Materials_science,Engineering | 769 |
34,590,189 | https://en.wikipedia.org/wiki/LIM%20domain-binding%20protein%20family | In molecular biology, the LIM domain-binding protein family is a family of proteins which binds to the LIM domain of LIM (LIN-11, Isl-1 and MEC-3) homeodomain proteins which are transcriptional regulators of development.
Examples
Nuclear LIM interactor (NLI), also known as LIM domain-binding protein 1 (LDB1), is located in the nuclei of neuronal cells during development; it is co-expressed with ISL1 in early motor neuron differentiation and has a suggested role in the ISL1-dependent development of motor neurons. It is suggested that these proteins act synergistically to enhance transcriptional efficiency by acting as co-factors for LIM homeodomain and Otx class transcription factors, both of which have essential roles in development. The Drosophila melanogaster protein Chip is required for segmentation and for the activity of a remote wing margin enhancer. Chip is a ubiquitous chromosomal factor required for normal expression of diverse genes at many stages of development. It is suggested that Chip cooperates with different LIM domain proteins and other factors to structurally support remote enhancer-promoter interactions.
References
Protein families | LIM domain-binding protein family | Biology | 236 |
15,398,793 | https://en.wikipedia.org/wiki/Dynabeads | Dynabeads are superparamagnetic spherical polymer particles with a uniform size and a consistent, defined surface for the adsorption or coupling of various bioreactive molecules or cells.
Development and description
Dynabeads were developed after John Ugelstad managed to create uniform spherical polystyrene beads (defined as microbeads) of exactly the same size at the University of Trondheim, Norway, in 1976, something otherwise achieved only by NASA in the weightless conditions of Skylab. Dynabeads are typically 1 to 5 micrometers in diameter, in contrast to magnetic-activated cell sorting (MACS) beads, which are approximately 50 nm in diameter.
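The size difference quoted above spans more than an order of magnitude in diameter. The following short sketch (an illustration only; the comparison diameters are taken from the figures above, and the function name is not from the source) makes the contrast in surface area and volume explicit.

```python
import math

def sphere_props(diameter_m: float) -> tuple[float, float]:
    """Return (surface area in m^2, volume in m^3) for a sphere of the given diameter."""
    r = diameter_m / 2.0
    return 4.0 * math.pi * r ** 2, (4.0 / 3.0) * math.pi * r ** 3

beads = [("Dynabead, 1 um", 1e-6), ("Dynabead, 5 um", 5e-6), ("MACS bead, 50 nm", 50e-9)]
for label, d in beads:
    area, volume = sphere_props(d)
    print(f"{label:17s} area = {area:.2e} m^2, volume = {volume:.2e} m^3")

# A single 1 um bead occupies the volume of (1000 nm / 50 nm)^3 = 8000 beads of 50 nm.
print(int((1e-6 / 50e-9) ** 3))
```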
The technology behind the beads, called Dynabeads, was licensed to Dynal in 1980.
Following a series of mergers and acquisitions, Dynal and Dynabeads are currently owned and produced by Invitrogen, part of Thermo Fisher Scientific.
Applications
The development of Dynabeads revolutionised the liquid-phase kinetic separation of many biological materials. Since being licensed to Dynal in 1980, this magnetic separation technology has been used for the isolation and manipulation of biological material, including cells, nucleic acids, proteins and pathogenic microorganisms. The uniformity in size, shape, and surface area allows for reproducibility and helps to minimize chemical agglutination.
Dynabeads are frequently used for cell isolation. Cell types of interest often include specific leukocytes, such as CD4+ T cells, stem cells, or circulating tumor cells (CTCs). Dynabeads may be covalently linked to an antibody that recognizes a specific protein on the surface of the target cell type. Alternatively, Dynabeads may attach to the cell indirectly, either via streptavidin on the Dynabead linking to a biotinylated primary antibody, or via a secondary antibody on the Dynabead linking to the primary antibody. Streptavidin linkage to the primary antibody allows Dynabeads to capture cells with lower expression of the surface protein.
See also
Plastic magnet
Flow cytometry
Magnetic-activated cell sorting
References
Further reading
External links
Bead separation Ebook
Surface science
NASA spin-off technologies
Norwegian inventions | Dynabeads | Physics,Chemistry,Materials_science | 470 |
23,098,195 | https://en.wikipedia.org/wiki/Mission%20assurance | Mission Assurance is a full life-cycle engineering process to identify and mitigate design, production, test, and field support deficiencies threatening mission success.
Aspects of Mission Assurance
Mission Assurance includes the disciplined application of system engineering, risk management, quality, and management principles to achieve success of a design, development, testing, deployment, and operations process. Mission Assurance's ideal is achieving 100% customer success every time. Mission Assurance reaches across the enterprise, supply base, business partners, and customer base to enable customer success.
The ultimate goal of Mission Assurance is to create a state of resilience that supports the continuation of an agency's critical business processes and protects its employees, assets, services, and functions. Mission Assurance addresses risks in a uniform and systematic manner across the entire enterprise.
Mission Assurance is an emerging cross-functional discipline that demands its contributors (project management, governance, system architecture, design, development, integration, testing, and operations) provide and guarantee their combined performance in use.
The United States Department of Defense 8500-series of policies has three defined mission assurance categories that form the basis for availability and integrity requirements.
A Mission Assurance Category (MAC) is assigned to all DoD systems. It reflects the importance of an information system for the successful completion of a DoD mission and determines the requirements for availability and integrity.
MAC I systems handle information vital to the operational readiness or effectiveness of deployed or contingency forces. Because the loss of MAC I data would cause severe damage to the successful completion of a DoD mission, MAC I systems must maintain the highest levels of both integrity and availability and use the most rigorous measure of protection.
MAC II systems handle information important to the support of deployed and contingency forces. The loss of MAC II systems could have a significant negative impact on the success of the mission or operational readiness. The loss of integrity of MAC II data is unacceptable; therefore MAC II systems must maintain the highest level of integrity. The loss of availability of MAC II data can be tolerated only for a short period of time, so MAC II systems must maintain a medium level of availability. MAC II systems require protective measures above industry best practices to ensure adequate integrity and availability of data.
MAC III systems handle information that is necessary for day-to-day operations, but not directly related to the support of deployed or contingency forces. The loss of MAC III data would not have an immediate impact on the effectiveness of a mission or operational readiness. Since the loss of MAC III data would not have a significant impact on mission effectiveness or operational readiness in the short term, MAC III systems are required to maintain basic levels of integrity and availability. MAC III systems must be protected by measures considered as industry best practices.
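The three categories above amount to a small lookup from MAC level to integrity and availability requirements. The sketch below is an illustration only; the type names, enum values, and paraphrased protection column are not taken from the DoD 8500-series documents, they simply restate the descriptions given above in data-structure form.

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    BASIC = "basic"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass(frozen=True)
class MissionAssuranceCategory:
    integrity: Level
    availability: Level
    protection: str  # paraphrase of the protective-measure requirement

MAC_TABLE = {
    "MAC I":   MissionAssuranceCategory(Level.HIGH,  Level.HIGH,
                                        "most rigorous protection measures"),
    "MAC II":  MissionAssuranceCategory(Level.HIGH,  Level.MEDIUM,
                                        "measures above industry best practices"),
    "MAC III": MissionAssuranceCategory(Level.BASIC, Level.BASIC,
                                        "industry best practices"),
}

print(MAC_TABLE["MAC II"].availability.value)  # "medium"
```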
NASA's Process Based Mission Assurance Knowledge Based System is an implementation of Mission Assurance that provides "quick and easy access to critical Safety & Mission Assurance data... across all NASA programs and projects."
See also
eMASS
Information Assurance
Reliability engineering
Quality engineering
Risk management
Availability
Integrity
External links
Mission Assurance in a Budget-Constrained Environment: 29th National Space Symposium, April 2013. Addresses "mission assurance" as the term is used in the US Department of Defense space launch industry for military payloads (with one definition given at approximately 12:00 ff. in the video).
References
Reliability engineering
Military space program of the United States | Mission assurance | Engineering | 674 |
7,879,916 | https://en.wikipedia.org/wiki/Vacuum%20cementing | Vacuum cementing or vacuum welding is the natural process of solidifying small objects in a hard vacuum. The most notable example is dust on the surface of the Moon.
This effect was reported to be a problem with the first American and Soviet satellites, as small moving parts would seize together.
In 2009 the European Space Agency published a peer-reviewed paper detailing why cold welding is a significant issue that spacecraft designers need to carefully consider. The paper also cites a documented example from 1991 with the Galileo spacecraft high-gain antenna.
One source of difficulty is that vacuum welding does not exclude relative motion between the surfaces that are to be joined. This allows the broadly defined notions of galling, fretting, sticking, stiction and adhesion to overlap in some instances. For example, it is possible for a joint to be the result of both vacuum welding and galling (and/or fretting and/or impact). Galling and vacuum welding, therefore, are not mutually exclusive.
See also
References
The Implications of the Ranger Moon Pictures (Page 4 references lunar dust vacuum welding)
Lunar Rated Fasteners (Page 3 specifies how to build components resistant to vacuum welding)
Spaceflight technology
Vacuum | Vacuum cementing | Physics | 241 |
8,613,140 | https://en.wikipedia.org/wiki/Debaryomyces | Debaryomyces is a genus of yeasts in the family Saccharomycetaceae.
Species
D. artagaveytiae
D. carsonii
D. castellii
D. coudertii
D. etchellsii
D. globularis
D. hansenii
D. kloeckeri
D. kursanovii
D. marama
D. macquariensis
D. melissophilus
D. mrakii
D. mycophilus
D. nepalensis
D. occidentalis
D. oviformis
D. polymorphus
D. prosopidis
D. pseudopolymorphus
D. psychrosporus
D. robertsiae
D. singareniensis
D. udenii
D. vanrijiae
D. vietnamensis
D. vindobonensis
D. yamadae
References
Saccharomycetaceae
Yeasts
Osmophiles | Debaryomyces | Biology | 191 |
32,719,831 | https://en.wikipedia.org/wiki/Cheugugi | Cheugugi (Hangul: 측우기, Hanja: 測雨器) is the first well-known rain gauge invented and used during the Joseon dynasty of Korea. It was invented and supplied to each provincial office during King Sejong the Great's reign. As of 2010, only one example of the Cheugugi remains, known as the Geumyeong Cheugugi (Hangul: 금영측우기, Hanja: 錦營測雨器), which literally means "Cheugugi installed in the provincial office's yard." It is designated as National Treasure No. 561 of Korea and was installed at the provincial office of Gongju city in 1837 by King Yeongjo, the 21st king of Joseon. In addition, the official record of rainfall measured by Cheugugi from King Jeongjo's reign to Emperor Gojong's reign is preserved.
Intention
In the early days of the Joseon dynasty, there was a system to measure and report a region's rainfall for the sake of agriculture. However, the method to measure rainfall in those days was primitive, measuring the depth of rain water in puddles.
This method could not determine the exact rainfall, because the amount of rainwater absorbed into the ground differs with the nature of the local soil. To prevent errors of this kind, King Sejong the Great ordered the Gwansanggam (Hangul: 관상감, Hanja: 觀象監), the Joseon kingdom's research institute of astronomy, geography, calendar and weather, to build a rainwater container, the Cheugugi, made of iron, in August 1441 (according to the lunar calendar), based on the idea of his Crown Prince, who later became Munjong of Joseon. In the early days of the Cheugugi, it was used mainly in the capital area.
In 1442, the king ordered the Gwansanggam again to design a standardized system to measure and record the rainfall. He also ordered his provincial governors, appointed by the king, to install an identical Cheugugi in the courtyard of each provincial office, where the governors would measure and record the rainfall.
It was originally made of iron, but there were copper and ceramic ones built later.
Exterior features
As described above, the Cheugugi was mainly made of iron. The preserved example is characterized by its oil-drum shape, fixed on a hexahedral stone support, the Cheugudae (측우대). The Cheugudae is tall enough that water splashing from the ground cannot enter the Cheugugi.
The depth of the preserved Cheugugi is about 32 cm and the diameter is about 15 cm.
Operation
It is estimated that measuring rainfall with the standardized Cheugugi was institutionalized from May 8, 1442 (lunar calendar). From that day, the word "Cheugugi" was recorded in the official records of the Annals of the Joseon Dynasty (조선왕조실록).
Rainfall was measured by dipping a ruler into the vessel and was recorded in poon (Hangul: 푼, Hanja: 分) units, each approximately 0.303 cm (0.120 inch). In addition, the times when the rain began and stopped were recorded for each rainfall, throughout the nation.
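A minimal sketch of the arithmetic, using the figures given above (one poon of roughly 0.303 cm and the preserved vessel's roughly 15 cm diameter); the function names and the choice to also estimate collected volume are illustrative additions, not part of the historical procedure.

```python
import math

POON_CM = 0.303       # approximate depth of one poon, from the figure above
DIAMETER_CM = 15.0    # approximate diameter of the preserved Cheugugi

def poon_to_mm(poon: float) -> float:
    """Rainfall depth in millimetres for a reading of `poon` units."""
    return poon * POON_CM * 10.0

def collected_volume_ml(poon: float) -> float:
    """Approximate volume of water (millilitres) standing in the vessel for that reading."""
    area_cm2 = math.pi * (DIAMETER_CM / 2.0) ** 2
    return area_cm2 * poon * POON_CM  # 1 cm^3 == 1 ml

print(poon_to_mm(10))                     # 10 poon ~ 30.3 mm of rain
print(round(collected_volume_ml(10), 1))  # roughly 535 ml in a 15 cm-wide vessel
```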
Examples
Some Cheugudaes continue to exist:
The Gwansanggam Cheugudae
Daegu Sunhwadang Cheugudae (established at Daegu)
Changdeokgung Palace's Cheugudae (moved to the National Palace Museum of Korea)
Tongyeong Cheugudae
Yeon-gyeong-dang (the royal residence in forbidden garden of Changdeokgung Palace) Cheugudae
There is also the Ma-jeon-gyo, a bridge generally known as Supyo-gyo, which crosses the Cheonggyecheon, the stream that flows through the center of Joseon-era Seoul (inside the city wall), near Gyungbok Palace. The commonly used name comes from the Supyo-seok attached to a pier of the bridge. The Supyo-seok functioned as a water-level gauge for the Cheonggyecheon, showing how much the stream's water level rose with rain. It was established in the second year of King Sejong the Great's reign and still exists today. In 1958, when the Cheonggyecheon was covered over as a road by the Korean government, the bridge was moved to Jang-chung Park, where it remains. There was a plan to move the bridge back to its original location during the Cheonggyecheon restoration, but the plan could not be fulfilled because the restored width of the Cheonggyecheon did not match the bridge's length, so the bridge remains in Jang-chung Park.
References
Korean inventions
Measuring instruments
Meteorological instrumentation and equipment
Rain | Cheugugi | Technology,Engineering | 1,082 |
36,401,616 | https://en.wikipedia.org/wiki/OPS%205118 | OPS 5118, also known as Navstar 6, GPS I-6 and GPS SVN-6, was an American navigation satellite launched in 1980 as part of the Global Positioning System development programme. It was the sixth of eleven Block I GPS satellites to be launched.
Background
Global Positioning System (GPS) was developed by the U.S. Department of Defense to provide all-weather, round-the-clock navigation capabilities for military ground, sea, and air forces. Since its implementation, GPS has also become an integral asset in numerous civilian applications and industries around the globe, including recreational uses (e.g., boating, aircraft, hiking), corporate vehicle fleet tracking, and surveying. GPS employs 24 spacecraft in 20,200 km circular orbits inclined at 55°. These vehicles are placed in 6 orbital planes with four operational satellites in each plane.
Spacecraft
The first eleven spacecraft (GPS Block 1) were used to demonstrate the feasibility of the GPS system. They were 3-axis stabilized, nadir pointing using reaction wheels. Dual solar arrays supplied over 400 watts. They had S-band communications for control and telemetry and Ultra high frequency (UHF) cross-link between spacecraft. They were manufactured by Rockwell Space Systems, were 5.3 meters across with solar panels deployed, and had a design life expectancy of 5 years. Unlike the later operational satellites, GPS Block 1 spacecraft were inclined at 63°.
Launch
OPS 5118 was launched at 22:00 UTC on 26 April 1980, atop an Atlas F launch vehicle with an SGS-1 upper stage. The Atlas used had the serial number 34F, and was originally built as an Atlas F. The launch took place from Space Launch Complex 3E at Vandenberg Air Force Base, and placed OPS 5118 into a transfer orbit. The satellite raised itself into medium Earth orbit using a Star-27 apogee motor.
Mission
By 16 May 1980, OPS 5118 had been raised to an orbit with a period of 717.94 minutes and 62.8° of inclination to the equator. The satellite had a design life of 5 years. It broadcast the PRN 09 signal in the GPS demonstration constellation, and was retired from service on 6 March 1991.
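As an illustrative cross-check (a calculation added here, not taken from the source), Kepler's third law, a = (μT²/4π²)^(1/3), applied to the 717.94-minute period quoted above gives a semi-major axis of roughly 26,600 km, corresponding to a mean altitude near the 20,200 km GPS orbit described in the Background section.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH_KM = 6378.137       # Earth's equatorial radius, km

def semi_major_axis_km(period_minutes: float) -> float:
    """Kepler's third law solved for the semi-major axis: a = (mu * T^2 / (4*pi^2))^(1/3)."""
    t = period_minutes * 60.0
    return (MU_EARTH * t ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0) / 1000.0

a_km = semi_major_axis_km(717.94)
print(f"semi-major axis ~ {a_km:,.0f} km")
print(f"mean altitude   ~ {a_km - R_EARTH_KM:,.0f} km")  # close to the 20,200 km GPS orbit
```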
References
Spacecraft launched in 1980
GPS satellites | OPS 5118 | Technology | 476 |